
Expert factor method for predicting reliability indicators of electrical equipment. Predicting reliability indicators of spacecraft on-board equipment exposed to low-intensity ionizing radiation

A random event leading to the complete or partial loss of functionality of a product is called a failure.

Based on the nature of the change in equipment parameters preceding their occurrence, failures are divided into gradual and sudden (catastrophic). Gradual failures are characterized by a fairly smooth change of one or more parameters over time; sudden failures, by an abrupt change. Based on the frequency of occurrence, failures can be one-time (malfunctions) or intermittent.

A malfunction is a one-time self-correcting failure; an intermittent failure is a failure of the same nature that occurs repeatedly.

Depending on the cause of occurrence, failures are divided into persistent and self-correcting. A persistent failure is eliminated by replacing the failed component, while a self-correcting failure disappears on its own but may recur. A self-correcting failure may manifest itself as a malfunction or as an intermittent failure.

Failures arise both from the internal properties of the equipment and from external influences, and they are random in nature. To quantify failures, probabilistic methods from the theory of random processes are used.

Reliability is the property of an object to continuously maintain an operational state for some time. The ability of a product to continuously perform its specified functions for the time stated in the technical documentation is characterized by the probability of failure-free operation, the failure rate, and the mean time between failures. The failure-free operation of a product (for example, a cell) is in turn determined by the failure rates λi of its constituent components.

Methodologically, the theory of reliability assessment allows us to see and justify previously existing specific models for assessing reliability (in particular, of components), and also to judge the degree of their completeness, sufficiency, and adequacy for solving practical reliability problems.

Researchers of component failures have used the principle of causality and applied knowledge from physics, chemistry, thermodynamics, and materials science to explain the degradation processes that lead to failure. As a result, synthetic terms and concepts appeared, such as "failure mechanism" and "activation energy of the degradation process", which form the basis of the physical methods of analysis (reliability physics, aging physics, failure physics) underlying the development of models for assessing reliability indicators and predicting the reliability of components. Such models are widely used in practical work on analyzing and assessing the reliability of products, including MEA components, and are given in official standards and in catalogs of microcircuits, the main type of element base of modern technical objects. Knowledge of these models is therefore useful for their proper engineering application.

To give an idea of the nature of degradation processes in products, we first show how the concepts of chemical equilibrium, statistical mechanics, and the theory of absolute reaction rates can be applied to a system consisting of many particles. This will then allow us to introduce both the empirical Arrhenius model for estimating reaction rates and the more general Eyring model.

By a failure mechanism we understand the microscopic change processes leading to product failure. A failure mechanism is a theoretical model designed to explain the external manifestations of product failure at the atomic and molecular levels. These external manifestations are determined by the type of failure and represent specific, physically measurable states of the product.

The failure mechanism model is usually highly idealized. It does, however, predict interdependencies that lead to a better understanding of the phenomenon under consideration, although the quantitative results depend on the specific components, composition and configuration of the product.

Failure mechanisms may be physical and/or chemical in nature, and in practice they are difficult to separate. Therefore, in the course of analysis, a complex series of mechanisms is often treated as a single generalized failure mechanism. As a rule, among a number of simultaneously acting mechanisms, the one of particular interest is the mechanism that determines the rate of the degradation process and itself develops most quickly.

Failure mechanisms can be represented either by continuous functions of time, which usually characterize the processes of aging and wear, or by discontinuous functions, reflecting the presence of many undetected defects or qualitative weaknesses.

The first group of mechanisms is caused by subtle defects that lead to component parameters drifting beyond tolerances, and is typical for most components; the second group of mechanisms manifests itself in a small number of components and is caused by gross defects, which are eliminated through technological rejection tests (TRT).

Even the simplest component of a product (including IMNE) is a multicomponent heterogeneous system: multiphase, with boundary regions between the phases. To describe such a system, either a phenomenological or a molecular-kinetic approach is used.

The phenomenological approach is purely empirical: it describes the state of the system through measurable macroscopic parameters. For a transistor, for example, measurements of the time drift of the leakage current and of the breakdown voltage at certain moments in time are used to establish the relationship between these parameters, from which the properties and states of the transistor as a system are predicted. However, these parameters are averaged over many microscopic characteristics, which reduces their sensitivity as indicators of degradation mechanisms.

The molecular-kinetic approach relates the macroscopic properties of a system primarily to a description of its molecular structure. In a system of many particles (atoms and molecules), the motions of the particles can be described by the laws of classical and quantum mechanics. However, because a very large number of interacting particles must be taken into account, the problem is extremely large and difficult to solve, so in practice the molecular-kinetic approach also remains largely empirical.

Interest in the kinetics of component degradation leads to an analysis of how transformations (transitions) from one equilibrium state to another occur, taking into account the nature and rate of the transformations. Such an analysis involves certain difficulties.

The operation of components depends mainly on irreversible phenomena such as electrical and thermal conduction, i.e., it is determined by nonequilibrium processes; to study these, approximation methods must be used, since components are multicomponent systems consisting of several phases of matter. The presence of many nonequilibrium factors can, under certain conditions, influence the nature and rate of change of the equilibrium states of the system. Therefore, one must take into account not only combinations of mechanisms, which can change with time and load, but also changes of the mechanisms themselves in time.

Despite these difficulties, a general concept of consideration and analysis can be formulated. It rests on the fact that in component technology it is customary to decide, based on parameter monitoring and the results of a certain period of testing, which components from a given set are suitable for a particular application. Rejection is carried out throughout the entire production cycle, from incoming materials to testing of finished products.

Thus, all that remains is to understand the mechanism by which a finished component evolves from the "good" state to the "reject" state. Experience shows that such a transformation requires overcoming a certain energy barrier, shown schematically in Fig. 5.13.

Fig. 5.13.

P1, P, P2 – energy levels characterizing the normal, activated, and failure states of the system; E_a – activation energy; δ – the instability region of the system; A, B, C – interacting particles of the system

The minimum energy required for a transition from state P1 to state P is called the activation energy E_a of the process, which can be mechanical, thermal, chemical, electrical, magnetic, or of some other nature. In semiconductor solid-state products it is most often thermal energy.

If state P1 is the minimum possible energy level of the system and corresponds to the "good" state of the component, then state P corresponds to an unstable equilibrium of the system and to a pre-failure state of the component, and P2 corresponds to the "failure" state of the component.

Let us consider the case where there is one failure mechanism. The state of a system (good or bad) can be characterized by a number of measurable macroscopic parameters, and the change, or drift, of these parameters can be recorded as a function of time and load. However, it must be verified that the adopted group of macroparameters does not reflect a special case of the system's microstate (good or bad). A sign of such a special case is that no two products are identical from the point of view of their microstate: the rate of degradation will then differ between them, and the mechanisms themselves may turn out to be different in any given period of time, which means that technological rejection tests (TRT) will be ineffective. If the microstates of the components are identical, the failure statistics after testing will be identical.

Let us now consider the analysis of degradation processes. In a simple system consisting of many particles, consider a limited number of particles actively participating in the degradation process that degrades the component's parameters. In many cases the degree of degradation is proportional to the number of activated particles.

For example, dissociation of molecules into their constituent atoms or ions may occur. The rate of this process (chemical dissociation) will depend on the number of dissociating particles and on their average speed of passage through the energy barrier.

Let us assume that we have a measurable parameter P of the product, and that some function f(P) of this parameter changes in proportion to the rate of chemical dissociation of substances in the product's materials, dissociation itself being the main degradation mechanism leading to product failure. The rate of change of P, or of f(P), with time t can then be expressed as follows:

$$\frac{df(P)}{dt} = \kappa\,\bar{v}\,N_a,$$

where N_a is the number of particles that have reached an energy level sufficient to overcome the energy barrier; v̄ is the average speed at which activated particles move through the barrier; κ is the transparency coefficient of the barrier (less than unity, since some of the activated particles roll back from the energy top of the barrier).

The problem of determining N_a from the total number of particles in the system can be solved under the following assumptions:

  • 1) only a small part of all particles of the system always has the energy necessary to activate the degradation process;
  • 2) there is a balance between the number of activated particles and the number of remaining particles in the system, i.e., the rate of emergence (birth) of activated particles equals the rate of their disappearance (death).

Problems of this type are the subject of statistical mechanics and are associated with the Maxwell–Boltzmann, Fermi–Dirac, and Bose–Einstein statistics.

If classical Maxwell–Boltzmann statistics is applied, used as a satisfactory approximation for particles of all types (all particles being distinguishable), then the number of particles occupying the same energy level in an equilibrium system of many particles is described as follows:

$$N_a = N \exp\left(-\frac{E_a}{kT}\right),$$

where N is the total number of particles; E_a is the activation energy; k is the Boltzmann constant; T is the absolute temperature.

Many years of research into reaction kinetics established empirically that in most chemical reactions and in some physical processes the reaction rate shows a similar dependence on temperature and on the loss (decrease) of the initial concentration C of the substance. In other words, the Arrhenius equation is valid for thermally activated chemical reactions. Written with quantum-mechanical corrections taken into account, it has the form

$$-\frac{dC}{dt} = A\,C\,\exp\left(-\frac{E_a}{kT}\right),$$

where A is a proportionality factor.

Most accelerated testing of components is based on the Arrhenius equation, which is widely used to analyze product degradation processes and predict reliability, even though it often does not provide the necessary accuracy.

In relation to electronics products, its earliest use was in the study of electrical insulation faults.
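To illustrate how the Arrhenius equation is applied in accelerated testing, the sketch below computes the acceleration factor between a stress temperature and a use temperature. The activation energy and the temperatures are hypothetical values chosen for illustration, not data from this chapter.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], T in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical case: Ea = 0.7 eV, use at 55 C, stress testing at 125 C.
af = arrhenius_acceleration_factor(0.7, 55.0, 125.0)
print(f"Acceleration factor: {af:.0f}")  # ~78
```

Each hour at the stress temperature then counts as roughly AF hours at the use temperature, which is what makes the accelerated test economical.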

Factor A must be calculated taking into account:

  • average speed of particles overcoming the energy barrier;
  • the total number of particles present (participating in the process);
  • functions of particle energy distribution in the system.

where f* and f_n are the distribution functions of the activated and normal particles; δ is the reaction path length; C_n is the concentration of normal particles.

Taking into account the translational, rotational, and vibrational energies of the particles, the last expression can be written in a form suitable for use in failure physics:

$$k_r = \frac{kT}{h}\exp\left(-\frac{\Delta G^{*}}{RT}\right) = \frac{kT}{h}\exp\left(\frac{\Delta S^{*}}{R}\right)\exp\left(-\frac{\Delta H^{*}}{RT}\right),$$

where k is Boltzmann's constant; h is Planck's constant; T is temperature; ΔG*, ΔS*, ΔH*, and R are the standard Gibbs activation energy, the entropy of activation, the enthalpy of activation, and the universal gas constant, respectively.

The significance of an entropy decrease in a system of many particles is that the growing order of the system slows the rate of degradation of the product parameter. This means an increase in the mean time between failures, which can be shown by integrating the last equations.

The expression for the time t_f it takes a component to reach the failure state, as the electrical parameter drifts from its nominal permissible value P_0 to the failure value P_f, takes the following form after integration, substitution of limits, and taking logarithms:

$$\ln t_f = A'' + \frac{E_a}{kT},$$

where the coefficient A'' is determined during reliability testing and reflects the pre-failure (i.e., energetically activated) state of the component.

If t_f is understood as the mean time between failures, then for the exponential distribution law the failure rate λ can be determined as λ = 1/t_f.

The approach considered allows only qualitative and semi-quantitative conclusions in the theoretical analysis of component reliability, both because of the multiphase, heterogeneous nature of the multicomponent supersystem of which the component (and even an element of the component) is a part, and because of the form of the temporal experimental models of component degradation. This is evident from the summary of causes, mechanisms, and physico-mathematical models of IC component failures presented in Table 5.20 (the time models do not always follow a logarithmic relationship; in practice, power-law relationships also occur).

The advantage of the approach based on the Arrhenius equation is the ability to predict parametric failures of products from accelerated tests. Its disadvantage is that the design and technological parameters of elements and components are not taken into account.

Thus, the Arrhenius approach is based on the empirical connection between an electrical parameter of a component or element and a failure mechanism with activation energy E_a. This drawback was overcome by the theory of H. Eyring, who introduced the concept of an activated complex of particles and justified it by the methods of statistical and quantum mechanics.

Nevertheless, the Arrhenius–Eyring–Gibbs approach is actively used to solve reliability problems under the assumption that failure mechanisms are temperature-dependent; it is the basis of the various models for finding the failure rates of electrical equipment given in reference literature, manuals, and the databases of reliability-calculation programs.

Eyring's theory does not, however, take into account the achievements of the Russian thermodynamic school of materials science, which creatively mastered and reworked the ideas of J. W. Gibbs (less revered in America, but much loved in Russia and across the former USSR). It is known, for example, that V. K. Semenchenko, relying on generalized functions associated with Pfaff's equations (1815, the so-called Pfaffian form), proposed his own approach (his G-model) and modified the fundamental equation of Gibbs.

Table 5.20

Causes, characteristic mechanisms and failure models of components and their elements*

| Physico-chemical system | Reliability parameter (indicator) | Cause (mechanism) of failures: degradation processes | Failure model: time of spontaneous exit from a stable state τ | Activation energy E_a, eV |
| --- | --- | --- | --- | --- |
| Sealing coatings (polymers) | Mean time between failures t_r | Destruction (sorption, desorption, migration processes) | – | – |
| p-type semiconductor surface | Surface ion concentration n_s | Inversion, electromigration | – | – |
| Solid (bulk) aluminum | Mean time between failures t_f | Thermomechanical stresses | – | – |
| Metallization (film) | Mean time between failures t_f | Electromigration, oxidation, corrosion, electrocorrosion | – | – |
| Interconnections | Contact resistance R | Formation of intermetallic compounds | – | – |
| Resistors | Contact resistance R | Oxidation | – | – |
| Capacitors | Capacitance C | Diffusion, oxidation | – | – |
| Micromechanical accelerometer (MMA) | Sensing element of the mechanical-deformation-to-acceleration converter | Microcreep | – | 1.5–2 |

* Data taken from: VLSI Technology. In 2 books. Book 2 / C. Mogab [et al.]; transl. from English; ed. by S. Sze. Moscow: Mir, 1986. P. 431.

It should be noted that Gibbs himself prophetically pointed toward such a development of his ideas. As stated in the preface to the "Principles...", he "recognizes the inferiority of any theory" that does not take into account the properties of substances, the presence of radiation, and other electrical phenomena.

The fundamental equation of matter according to Gibbs (taking into account thermal, mechanical, and chemical properties) has the form of a total differential:

$$d\varepsilon = t\,d\eta - P\,dV + \sum_{i=1}^{n}\mu_i\,dm_i,$$

where Gibbs uses the following notation: ε is energy; t is temperature; η is entropy; P is pressure; V is volume; μ_i is the chemical potential; m_i is the mole fraction of the i-th component (i = 1, ..., n).

Semenchenko, using the method of generalized functions (Pfaffian forms), introduced into the G-model the intensities of the electric (E) and magnetic (H) fields, together with the corresponding "coordinates", the electric (P) and magnetic (M) polarizations, and modified the G-model to the form

$$d\varepsilon = t\,d\eta - P\,dV + \sum_{i=1}^{n}\mu_i\,dm_i + E\,dP + H\,dM.$$

The step-by-step procedure for applying the simplest of these models, the Arrhenius model, to the analysis of test data in order to determine the temperature dependence of component degradation processes amounts to the following: components are tested to failure at several elevated temperatures, the logarithms of the times to failure ln t_f are plotted against the reciprocal temperatures 1/T, a straight line is fitted, and the activation energy E_a is obtained from its slope, after which the line is extrapolated to the operating temperature.
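A minimal sketch of this procedure, assuming hypothetical median times to failure at three stress temperatures (the least-squares step is done by NumPy's polyfit):

```python
import numpy as np

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical test data: stress temperature (C) -> median time to failure (h).
temps_c = np.array([150.0, 175.0, 200.0])
t_fail_h = np.array([4000.0, 1500.0, 600.0])

# Linearization of t_f = A' * exp(Ea/kT):  ln t_f = ln A' + (Ea/k) * (1/T)
x = 1.0 / (temps_c + 273.15)
y = np.log(t_fail_h)
slope, intercept = np.polyfit(x, y, 1)

ea_ev = slope * K_BOLTZMANN_EV  # activation energy from the slope, eV
a_prime = np.exp(intercept)     # pre-exponential coefficient A'
print(f"Ea ~ {ea_ev:.2f} eV")

# Extrapolate the median time to failure to a 70 C operating temperature.
t_use_k = 70.0 + 273.15
print(f"Predicted t_f at 70 C: {a_prime * np.exp(slope / t_use_k):.2e} h")
```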

In connection with the above, it is worth commenting on the concept of reliability adopted by Motorola for semiconductor diodes, transistors, and ICs.

As is known, reliability is the probability that an IC will successfully perform its functions under given operating conditions over a certain period of time. This is the classic definition.

Another definition of reliability is related to quality. Since quality is a measure of variability, up to and including potential hidden nonconformity or failure, across a representative sample, reliability is a measure of variability over time under operating conditions. Consequently, reliability is quality unfolded over time under operating conditions.

Finally, the reliability of products (including components) is a function of how correctly customer requirements are understood and how well those requirements are embodied in the design, manufacturing technology, and operation of the products and their structures.

The QFD (quality function deployment) method is a technology for deploying, or structuring, the quality function: a product-design approach in which consumer needs are identified first, and then the technical characteristics of the product and of the manufacturing processes that best meet those needs are determined, resulting in higher product quality. The QFD method is useful for establishing and identifying quality and reliability requirements so that they can be implemented in innovative projects.

The number of observed failures divided by the total number of device-hours at the end of the observation period is called the point estimate of the failure rate. This estimate is obtained from observations of a sample, for example of ICs under test. The interval estimate of the failure rate is performed using the χ² distribution:

$$\lambda^{*} = \frac{\chi^{2}(\alpha;\,\nu)}{2\,n\,t},$$

where λ* is the failure-rate estimate; α is the confidence level; ν = 2r + 2 is the number of degrees of freedom; r is the number of failures; n is the number of products; t is the test duration.

Example 5.6

Calculate the values of the χ² function for the 90% confidence level.

Solution

The calculation results are given in Table 5.21.

Table 5.21

Calculated values of the χ² function for the 90% confidence level
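Since the table reduces to quantiles of the χ² distribution, its values can be reproduced with a short script. The sketch below assumes the time-terminated form ν = 2r + 2 used above; the device-hours figure in the final example is hypothetical.

```python
from scipy.stats import chi2

CONFIDENCE = 0.90

# chi-squared quantiles for r = 0..4 observed failures, nu = 2r + 2.
for r in range(5):
    dof = 2 * r + 2
    print(f"r = {r}: chi2({CONFIDENCE}; {dof}) = {chi2.ppf(CONFIDENCE, dof):.2f}")

# Upper 90% bound on the failure rate: lambda* = chi2(alpha; nu) / (2 n t).
n_t = 1.0e6  # total device-hours, n * t (hypothetical)
r = 0        # no failures observed
lam_upper = chi2.ppf(CONFIDENCE, 2 * r + 2) / (2.0 * n_t)
print(f"lambda* <= {lam_upper:.2e} 1/h")
```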

Today, to increase the confidence level of the estimate of the required operating time, Motorola uses an approach based on determining the failure rate of components in the form of the Eyring equation:

where A, B, C are coefficients determined from test results; T is temperature; RH is relative humidity; E is electric field strength.
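The exact form of the expression is not reproduced in the text above, so the sketch below uses one commonly cited generalized Eyring form with a thermal term and exponential humidity and field terms; both the form and all coefficient values are assumptions for illustration, not Motorola's published model.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def eyring_failure_rate(a: float, b: float, c: float, ea_ev: float,
                        temp_c: float, rh_pct: float, e_field: float) -> float:
    """Assumed generalized Eyring form:
    lambda = A * exp(-Ea/kT) * exp(B * RH) * exp(C * E)."""
    t_kelvin = temp_c + 273.15
    thermal = math.exp(-ea_ev / (K_BOLTZMANN_EV * t_kelvin))
    return a * thermal * math.exp(b * rh_pct) * math.exp(c * e_field)

# All numbers below are hypothetical placeholders.
lam = eyring_failure_rate(a=1.0e3, b=0.05, c=0.02,
                          ea_ev=0.6, temp_c=85.0, rh_pct=85.0, e_field=1.0)
print(f"lambda = {lam:.2e} 1/h")
```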

Thus, the material presented indicates that, given the fairly wide use of foreign electronic products with unknown reliability indicators, the methods and models presented in this chapter can be recommended for determining and predicting the reliability indicators of components and systems: for components, physical concepts based on the equations of Arrhenius, Eyring, Semenchenko, and Gibbs; for systems, combinatorial analysis (parallel, serial, and hierarchical structures).

  • The term "valley" used in the figure is an informal term of physical chemistry, used in particle state diagrams for particles that have lowered their energy: they have "fallen" from a peak into a valley (by analogy with mountaineering), overcome an energy barrier, and lost energy after the work has been done, i.e., they have made a transition to a lower energy level characterized by a lower Gibbs energy. This is a consequence of the principle of minimum energy, described through thermodynamic potentials and introduced into science (into theoretical physics, for example) by Gibbs himself.
  • Gibbs J. W. Basic principles of statistical mechanics, developed with special application to the rational basis of thermodynamics // Gibbs J. W. Thermodynamics. Statistical Mechanics: transl. from English; ed. B. M. Zubarev; comp. U. I. Frankfurt, A. I. Frank (series "Classics of Science"). Moscow: Nauka, 1982. P. 352–353.


1. Forecasting methods

2. Scheme for predicting the parametric reliability of the machine

3. Application of the Monte Carlo method for predicting reliability

4. Possibilities of the statistical modeling method

5. Assessment of extreme situations

List of used literature

1. Forecasting methods

In recent years, predicting the behavior of complex systems has developed into an independent science that uses a variety of methods and tools.

Forecasting differs from system calculation in that it solves a probabilistic problem: the future behavior of a complex system is determined only with some degree of reliability, and the probability of the system being in a particular state under various operating conditions is assessed. In relation to reliability, the forecasting task comes down mainly to predicting the probability of failure-free operation of a product, P(t), as a function of possible operating modes and conditions. The quality of the forecast largely depends on the source of information about the reliability of individual elements and the processes by which they lose performance. Forecasting in general employs a variety of methods: modeling, analytical calculation, statistical information, expert assessments, the method of analogies, information-theoretic and logical analysis, etc.

Typically, forecasting that relies on mathematical apparatus (elements of numerical analysis and the theory of random functions) is called analytical. The specific difficulty of reliability prediction is that the probability of failure-free operation P(t) generally cannot be extrapolated: if it is defined in a certain region, nothing can be said about P(t) outside that region. Therefore, the main method for predicting the reliability of complex systems is to assess the change of its output parameters over time for various input data; from this one can draw conclusions about reliability indicators for the various possible situations and ways of operating the product.

We will consider the case of predicting the parametric reliability of a machine when the structure of the formation of the performance area is known, but the parameters that determine this area depend on a large number of factors and have dispersion.

Fig. 1. Region of reliability prediction

2. Scheme for predicting the parametric reliability of a machine

Fig. 2. Scheme of loss of machine performance for a given duration of continuous operation

Based on the general diagram of machine performance loss (Fig. 2), three main reliability-prediction tasks can be formulated (Fig. 1).

1. The behavior of the entire population of such machines is predicted, i.e., both the variation of the machines' initial characteristics and the possible conditions of their operation are taken into account (the full region of possible states).

2. The behavior of a specific machine specimen is predicted: the machine's initial parameters become non-random values, while its modes and operating conditions can vary within a certain range. In this case the region of states narrows and becomes a subset of the full region.

3. The behavior of a given machine is predicted under specific service conditions and constant operating modes. In this case it is necessary to identify the realization of the random process that corresponds to the given operating conditions.

Thus, while in the first two cases it is necessary to predict the possible region of existence of the output parameters and to estimate the probability of their falling within each zone of that region, in the third case there is no uncertainty in the product's operating conditions, and the forecast consists only in identifying the patterns that describe how the output parameter changes in time.

Fig. 3. The aging process as a random function

As is known (Fig. 3), a random process can unfold with a greater or lesser degree of "mixing" of its realizations. Note that if the forecast concerns a set of products, the degree of mixing does not affect the assessment of the region of existence of the parameters, since what is revealed is not the behavior of a given product but the probability of any instance from the set going beyond permissible limits.

If the behavior of a given product instance within the region is predicted, then the possible rate of change of the performance-loss process in the near future should be assessed, i.e., the correlation function should be used.

The accuracy of forecasting depends on a number of factors: first, on the extent to which the accepted scheme of machine performance loss reflects objective reality; second, on how reliable the information is about the modes and conditions of the product's intended operation and about its initial parameters.

Finally, forecast accuracy is decisively influenced by the reliability of information about the patterns of change of the product's output parameters during operation, i.e., about the random functions X1(t), ..., Xn(t). Information about product reliability (meaning estimates of the functions Xi(t) or data on the reliability of product elements) can be obtained from various sources. Forecasting can be carried out at the design stage (technical specifications, design data about the machine and its elements, and knowledge of possible operating conditions are available), in the presence of a prototype (the machine's initial characteristics can be measured and safety margins estimated), and during operation (information on performance loss under various operating conditions is available). When predicting reliability at the design stage, there is the greatest uncertainty (entropy) in assessing the possible states of the product. However, the methodological approach to solving the problem remains the same.

In the case under consideration, it consists in using the corresponding failure models as a basis for assessing the probability of failure-free operation of the product, and it comprises the following stages.

1. Determination of the initial parameters of the product as a function of the machine's manufacturing process. They are determined by within-tolerance variations of part dimensions, material properties, assembly quality, and other indicators. The values of the initial parameters may also depend on the machine's operating modes.

2. Establishment of maximum permissible values ​​of output parameters.

3. Estimation by calculation of the change of the output parameters over the period between repairs, taking into account the corresponding characteristics of a prototype, by testing when a prototype is available, or by using the standards established for machine parameters.

4. Assessment of the influence of aging processes on the output parameters of the product on the basis of the physical patterns of failures, taking their stochastic nature into account.

5. Evaluation of the spectra of operating modes (loads, speeds, and operating conditions), which reflect the possible service conditions and determine the dispersion of the rates of change of the output parameters.

6. Calculation of the probability of failure-free operation of the machine for each of the parameters as a function of time.

7. When information about the operation of the product for which the forecast was made becomes available, the actual and calculated data are compared and the causes of any discrepancies are analyzed.

Depending on the task at hand, either the corresponding regions should be identified or an individual realization assessed (Fig. 1), i.e., the distribution laws f(T), or correspondingly the probabilities P(T), should be obtained, reflecting the dispersion of service life either for the entire population or for the given machine. If the operating conditions for a given specimen are strictly specified, the service life (time to failure) T itself is predicted.

3. Application of the Monte Carlo method for reliability prediction

The failure models discussed in Chap. 3 are a formalized description of the process by which a machine loses its performance, and they make it possible to establish functional connections between reliability indicators and initial parameters.

The statistical nature of these patterns shows itself in the fact that the arguments of the obtained functions are random and depend on a large number of factors. The behavior of the system therefore cannot be predicted exactly; only the probability of one or another of its states can be determined.

To predict the behavior of a complex system, the method of statistical modeling (statistical testing), known as the Monte Carlo method [184], can be used successfully.

The main idea of ​​this method is to repeatedly calculate parameters according to some formalized scheme, which is a mathematical description of a given process (in our case, the process of loss of performance).

In each such calculation, the values of the random parameters entering the formulas are drawn in accordance with their distribution laws.

Thus, each statistical "test" consists in identifying one realization of the random process: by substituting randomly drawn but then fixed arguments, we obtain a deterministic dependence describing the process under the accepted conditions. Repeating the tests many times according to this scheme (which in complex cases is practical only with a computer), we obtain a large number of realizations of the random process, allowing us to evaluate the course of the process and its main parameters.

Let us consider a simplified block diagram of an algorithm for computer calculation of the reliability of a product whose loss of performance can be described by the diagram in Fig. 4 and the corresponding equation.

Fig. 4. Scheme of formation of a gradual failure of the product

Let the change of the output parameter X depend on the wear U of one of the product's elements, i.e., X = F(U), where F is a known function determined by the design of the product. Let us assume that the wear is related to the specific pressure p and the sliding speed v of the rubbing pair by the power law U = k p^m1 v^m2 t, where the exponents m1 and m2 are known (for example, from tests of the pair's materials). The coefficient k characterizes the wear resistance of the materials and the operating conditions of the interface (lubrication, surface contamination).

This product may be exposed to different operating conditions and operate under different modes. To predict the process by which the product loses its performance, one must know the probabilistic characteristics of the conditions under which it will be operated. Such characteristics may be the distribution laws of loads f(P), speeds f(v), and operating conditions f(k). Note that these laws characterize the environment in which the product will operate and can therefore be obtained independently of its design, using statistics on the operation of similar machines or on the requirements for future products. For example, the spectrum of loads and speeds under various operating conditions of transport vehicles, the required cutting modes when machining a given type of part on metal-cutting machines, or the loads on mining-machine components when mining various rocks can be specified in advance in the form of histograms or distribution laws.

The algorithm for assessing reliability by the Monte Carlo method (Fig. 5) consists of a program for one random test, which determines a specific value of the rate of change x of the parameter. The test is repeated N times (N must be large enough to yield reliable statistics, e.g., N ≥ 50), and from the results of these tests the mathematical expectation and the standard deviation of the random process are estimated, i.e., the data needed to determine P(t).

The calculation sequence (one statistical test) is as follows. After the necessary data are entered (operator 1), the values of p, v, and k specific to this test are drawn (operator 2). For this purpose there are subroutines containing the histograms or distribution laws that characterize these quantities and determine their values. For example, instead of the pressure p on the friction surface, the distribution law of the external loads P acting on the unit can be specified; in that case the subroutine computes the pressure from the drawn value of P as p = F(P), in the simplest case p = P/S, where S is the friction surface area.

To draw a specific value of each parameter in accordance with its distribution law, a random-number generator is used. Typically the generator produces uniformly distributed numbers, which standard subroutines then transform so that their distribution density corresponds to the given law. For the normal law, for example, random numbers z are generated with mathematical expectation M(z) = 0 and standard deviation σz = 1. The subroutine then applies, for each case, a sampling formula that takes the parameters of the underlying distribution into account. Thus, if p is normally distributed with mean p̄ and standard deviation σp, the sampling formula is p = p̄ + σp z, where z is obtained from the random-number generator. Subroutines can also be written for drawing random parameter values when their distributions are specified as histograms.

After random values are obtained for each experiment, the rate of the damage process is calculated (operator 3) and from it the rate of change of the parameter x (operator 4). The procedure is repeated N times, each obtained value of the rate being sent to the computer's external memory. Once the required amount of statistical data has accumulated, i.e., when n = N, the mean and standard deviation are determined (operators 6 and 7); one can then calculate the probability of failure-free operation P(T) (operator 8), construct a histogram of the distribution of the rate (or of the time to failure Ti), and print all the necessary data.
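A minimal sketch of this algorithm, with hypothetical distribution parameters for p, v, and k and an assumed wear limit (vectorized, so all N trials are drawn at once):

```python
import numpy as np

rng = np.random.default_rng(0)

# Wear model U = k * p**m1 * v**m2 * t; failure when U exceeds u_max.
m1, m2 = 1.0, 0.8   # exponents known from material tests (assumed values)
u_max = 100.0       # limiting wear, um (assumed)
N = 10_000          # number of statistical tests (N >= 50 at the very least)

# One trial = one realization: draw p, v, k from their distribution laws.
p = rng.normal(loc=2.0, scale=0.3, size=N)                # pressure ~ f(P)
v = rng.normal(loc=0.5, scale=0.05, size=N)               # speed ~ f(v)
k = rng.lognormal(mean=np.log(0.01), sigma=0.2, size=N)   # wear coeff. ~ f(k)

rate = k * p**m1 * v**m2   # wear rate per hour (constant in time here)
t_fail = u_max / rate      # time to failure of each realization, h

print(f"rate: mean = {rate.mean():.4f}, std = {rate.std(ddof=1):.4f}")
for t in (2000.0, 5000.0, 10000.0):
    # P(t): share of realizations still inside the performance region.
    print(f"P({t:.0f} h) = {np.mean(t_fail > t):.3f}")
```

Rerunning the script with a smaller mean of k or a narrower spread of p and v shows directly how design or operating-mode measures raise P(t), which is exactly the kind of sensitivity analysis described below.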

4. Possibilities of the statistical modeling method

The considered case is the simplest, but illustrates the general methodological approach to solving this problem.

In a more complex case, for example when using a failure model that takes into account the dispersion of initial parameters (Fig. 6), the program contains information about the distribution laws of the machine's initial characteristics. For example, manufacturing errors of parts are usually distributed within the tolerance according to the normal law, while such essentially positive quantities as shaft eccentricity error follow Maxwell's law, etc.

Fig. 6. Scheme of failure formation when the initial parameters of the product are dispersed


The example considered in Fig. 5 is also characterized by the fact that the rate of the process is constant (x = const), and each realization of the random function is characterized by one specific value of x. Modeling the random function therefore reduces to modeling a random variable.

If we consider a nonlinear problem, in which the rate of the process changes in time, then each test yields a realization of a random function. For further processing, each realization can be represented by its values at given cross sections t1, t2, ..., tn covering the investigated range of product operation.

It is often convenient to represent a random function in the form of its canonical expansion

$$X(t) = m_x(t) + \sum_i V_i\,\varphi_i(t),$$

in which the coefficients V_i attached to the non-random coordinate functions φ_i(t) are random variables, and m_x(t) is the mathematical expectation of the function.

Generating realizations of a random function on a digital computer is simpler when the function is stationary.

Even more complex cases occur when there is a relationship between adjacent values of the random parameters. It is then necessary to take into account the correlation coefficient between adjacent members, or even between several adjacent members (multiple correlation). Such cases can also be solved by the Monte Carlo method, but modeling of the correlation function is required.

It should also be noted that the method is applicable to patterns that characterize the process in the form of implicit functions, and to descriptions of the process not necessarily expressed as mathematical formulas. Predicting reliability by the Monte Carlo method reveals the statistical nature of the process of product performance loss and makes it possible to evaluate the specific weight of individual factors. For the problem considered, for example, one can calculate how much the probability of failure-free operation will increase if measures are taken to reduce the pressure in the friction zone (the design of the unit is changed), the coefficient k is reduced (a new material is used), or the range of the machine's operating modes is narrowed (the parameters of the laws f(P) and f(v) are changed).

The specific feature of using statistical modeling for reliability calculation is that, whereas in statistical modeling of complex systems the quantities of interest are usually the average values of the characteristics, here we are interested in the region of extreme realizations (values close to the maximum), since they determine the values of P(T).

Therefore, for assessing the reliability of critical products, the study of emergency and extreme situations is important, when process realizations with the highest rate of change of the output parameters, x_max, are identified.

5. Assessment of extreme situations

When predicting reliability, identifying the extreme boundary of the region of product states is of particular importance, since it determines the product's proximity to failure. This boundary is formed by the realizations with the highest values of the process rate x. Although the probability of their occurrence is small (it corresponds to the probability of failure), their role in assessing product reliability is fundamental. We shall call such realizations extreme. They can be of two types: extreme proper, resulting from the most unfavorable combination of external factors within permissible limits, and emergency, associated with violations of operating conditions or with manufacturing deviations from specifications.

The extreme realization IV in Fig. 1 can be identified as the result of the most unfavorable combination of the factors affecting the rate of parameter change. Often these are extreme modes in which dynamic loads increase substantially. While for simple systems formulating the extreme conditions usually causes no difficulty (the highest loads, speeds, temperatures), for complex systems research is needed to identify the combination of parameters that leads to x_max. Indeed, increasing the speed of a mechanism may improve the performance of some elements (transition to liquid friction in a sliding bearing, better coolant circulation, exit of the mechanism from the resonance zone, etc.) while worsening the operating conditions of others (higher dynamic loads, increased heat generation, etc.). The total effect on the mechanism will therefore be greatest only under certain operating modes. If the worst initial state of the product must be identified, the problem of the most unfavorable distribution of tolerances over the elements must also be solved, and the probability of that situation assessed (for example, all part dimensions lying on the boundaries of their tolerance fields is unlikely).

Moreover, when product reliability is assessed with all its main parameters X1, X2, ..., Xn taken into account, the operating modes affect their changes differently, which rules out specifying the worst combination in advance. All this indicates that identifying extreme situations is itself a task for statistical research, which can be carried out by the Monte Carlo method. The drawing, however, must be performed in the region corresponding to a low probability of failure, but with permissible values of the input parameters (the values of the random arguments).

Emergency situations are associated with two main causes. The first is an increase of external loads and influences beyond the established specifications, when the machine finds itself in impermissible operating conditions. For individual machine components and elements, such a situation may arise from damage to adjacent non-critical parts that affects the operation of the component in question. For example, increased wear of a non-critical joint does not in itself affect the performance of that pair, but the wear products contaminate the lubricant and damage other joints; increased heat generation can cause impermissible deformations of neighboring elements.

The second cause is violation of the specifications for manufacturing and assembling the product: manufacturing defects may manifest themselves unexpectedly and lead to product failure.

While the probability of extreme situations can be assessed, the occurrence of an emergency state is difficult, and in most cases practically impossible, to predict. Usually one can compile a list of typical emergency situations, show that the probability of their occurrence is extremely low (if it is not, the design must be changed) and, most importantly, assess the possible consequences of each situation. The nature of the consequences and the time required to eliminate the situation determine the degree of danger of a given emergency.

Thus, for highly critical objects, the forecast of the region of possible product states and of its reliability indicators is supplemented by an analysis of emergency and extreme situations with an assessment of their consequences.

In conclusion, it should be noted that the development of methods for predicting machine reliability will have a large economic effect: first, the time and money spent on testing prototypes will be reduced; second, the potential durability of the product will be used more rationally thanks to properly designed repair and operation systems; third, the design solution that is optimal from the reliability standpoint can be selected as early as the design stage.

List of used literature:

1. Pronikov A. S. Machine Reliability. Moscow: Mashinostroenie, 1978.

2. Buslenko N. P. Modeling of Complex Systems. Moscow: Nauka, 1969.

3. Elizavetin M. A. Increasing the Reliability of Machines. Moscow: Mashinostroenie, 1973.


Forecasting the reliability of a technical object is a scientific field that studies methods for predicting the technical condition of an object exposed to specified factors.

Forecasting is used to determine the residual life of systems, their technical condition, the number of repairs and maintenance actions, and the consumption of spare parts, and to solve other problems in the field of reliability.

Reliability indicators can be predicted from various parameters (for example, fatigue strength, the dynamics of the wear process, vibroacoustic parameters, the content of wear elements in the oil, cost and labor inputs, etc.).

Modern forecasting methods are divided into three main groups.

1. Methods of expert assessment, the essence of which is the generalization, statistical processing, and analysis of the opinions of specialists. The specialists justify their point of view using information about similar objects and an analysis of the state of specific objects.

2. Modeling methods based on the basic principles of similarity theory. These methods consist of forming a model of the research object, conducting experimental studies of the model, and recalculating the obtained values ​​from the model to a natural object. For example, by conducting accelerated tests, the durability of the product under forced (harsh) operating conditions is first determined, and then the durability under real operating conditions is determined using appropriate formulas and graphs.

3. Statistical methods, of which the extrapolation method is the most widely used. It is based on the patterns of change of the predicted parameters in time. To describe these patterns, the simplest possible analytical function with a minimum number of variables is selected.

Thus, statistical processing determines a parameter that serves as a diagnostic sign of the technical condition of the engine, for example crankcase gas blow-by or oil consumption, and the residual life is predicted from this parameter. It should be borne in mind that the actual life may fluctuate around the obtained value.
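A minimal sketch of such an extrapolation, assuming hypothetical oil-consumption measurements, a linear trend, and an assumed permissible limit:

```python
import numpy as np

# Hypothetical diagnostic data: oil consumption (g/h) vs operating time (h).
hours = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
oil_gph = np.array([20.0, 22.0, 25.0, 27.0, 31.0])
LIMIT = 45.0  # assumed maximum permissible oil consumption, g/h

# The simplest analytical function with a minimum of variables: a straight line.
slope, intercept = np.polyfit(hours, oil_gph, 1)

# Extrapolate to the point where the parameter reaches the permissible limit.
t_limit = (LIMIT - intercept) / slope
print(f"Predicted residual life: {t_limit - hours[-1]:.0f} h")
```

As the text cautions, the actual life fluctuates around this value, so the estimate should be treated as a central value rather than a guarantee.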

The main causes of inaccurate forecasting are insufficient completeness, reliability, and homogeneity of the information (homogeneous meaning information about identical products operated under identical conditions) and the low qualification of the forecaster.

The effectiveness of forecasting is determined by changes in the reliability indicator as a result of the implementation of recommended means of increasing it.

Determining reliability indicators at the design stage is the most important task of reliability theory, contributing most to the efficient use of an object. Predicting reliability at the design stage is far cheaper (by a factor of roughly 1000) than at the manufacturing and operation stages, because no large machine park or expensive labor is involved.

There are three groups of methods for predicting reliability.

1st group - theoretical calculation-and-analytical methods, or methods of mathematical modeling. Mathematical modeling is the process of creating a mathematical model, i.e., a description of the complex process under study using mathematical signs and symbols. Uncertain phenomena can be described in different ways; that is, several mathematical models can be created.

Probabilistic-analytical methods apply the theoretical principles of probability theory to engineering problems. For real practice these methods have a significant drawback: some of them can be used only when analytical expressions for the distributions of the random variables are available. Deriving such expressions is usually very difficult, so at the design stage, when only an approximate estimate of reliability indicators is required, these methods are not always suitable. Yet calculating the probability that a random variable lies within the specified limits that ensure normal, failure-free functioning of the object is, mathematically, a very simple operation, provided the distribution law of this random variable is known.

Then we have

$$R = P\!\left(X_{\min}^{\text{perm}} \le X \le X_{\max}^{\text{perm}}\right) = \int_{X_{\min}^{\text{perm}}}^{X_{\max}^{\text{perm}}} \varphi(X)\,dX,$$

where R is the reliability, i.e., the probability that the random variable X lies within the permissible limits, X_min^perm and X_max^perm being the minimum and maximum permissible values.

This means that the task of calculating reliability comes down to finding the theoretical (continuous or discrete) probability density of the state of one random variable X, or of several random variables X1, X2, ..., Xn. Knowledge of the distribution φ(X) is a necessary condition for the calculation. Let us list the most common theoretical calculation-and-analytical methods:

1. Based on known distribution laws for reliability indicators of the system as a whole.

2. Based on known distribution laws for reliability indicators of individual system elements.

3. A simplified method based on the adoption of normal distribution laws for reliability indicators of individual system elements.

4. Statistical modeling method, or Monte Carlo method, based on any laws of distribution of system parameters.


5. Combinatorial-matrix method with any probability distributions of system parameters.

The listed methods represent the bulk of a large number of calculation and analytical methods.
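As an illustration of method 3 from the list above, when the output parameter is assumed normally distributed, the reliability integral reduces to a difference of two values of the normal distribution function. The parameter mean, standard deviation, and tolerance limits below are hypothetical:

```python
from scipy.stats import norm

# R = P(Xmin_perm <= X <= Xmax_perm) for a normally distributed parameter.
mu, sigma = 10.0, 0.5      # hypothetical mean and standard deviation
x_min, x_max = 9.0, 11.0   # hypothetical permissible limits (+/- 2 sigma)

r = norm.cdf(x_max, loc=mu, scale=sigma) - norm.cdf(x_min, loc=mu, scale=sigma)
print(f"R = {r:.4f}")  # ~0.9545 for +/- 2 sigma limits
```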

2nd group - experimental and experimental-analytical methods - physical modeling.

1. Based on the collection and processing of retrospective and current information about the reliability of the object.

2. Based on special reliability tests under normal operating conditions and accelerated or forced tests.

3. Based on tests of object models under normal operating conditions and accelerated tests.

3rd group - heuristic methods, or heuristic modeling methods.

Heuristics is the science that studies the nature of human mental operations in the process of solving various problems.

Here we note the following methods:

1. The method of expert, or point, assessments. A commission of experienced, highly professional experts in the given field is selected, and by assigning points they evaluate the reliability indicator under consideration. The assessment results are then processed mathematically (concordance coefficient, etc.). This is the well-known method of judging sports competitions (gymnastics, figure skating, boxing, etc.).

2. The majoritarian method, or voting method, based on the use of the majority function. The majority function takes one of two values, "1" ("yes") or "0" ("no"): it equals "1" when the number of its input variables taking the value "1" exceeds the number taking the value "0", and equals "0" otherwise.
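A minimal sketch of the majority function, here applied to 2-out-of-3 voting of redundant channels (illustrative, not from the source):

```python
def majority(votes: list[int]) -> int:
    """Majority function: 1 when more inputs are 1 than 0, else 0."""
    ones = sum(votes)
    return 1 if ones > len(votes) - ones else 0

print(majority([1, 1, 0]))  # 1: two channels out of three say "yes"
print(majority([1, 0, 0]))  # 0: the "yes" votes are in the minority
```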

All of the listed methods are non-deterministic, statistically based and to some extent subjective, meaning the answer they give is uncertain. Nevertheless, these methods make it possible to compare different system options in terms of reliability, select the optimal system, find weak points, and develop recommendations for optimizing the reliability and efficiency of the facility.

If it is not possible to test a system, reliability can be predicted by combining testing of individual system elements with analytical methods. The reliability forecast allows you to make calculations for the provision of spare parts, organize maintenance and repairs, and therefore ensure rational operation of the facility.

The more complex the system, the greater the effect of calculation methods at all stages of development and operation.

The discovery of new technical solutions entails an analysis of their level and the competitiveness of those technical objects in which these solutions are used. For this purpose, patent research is carried out, the main task of which is to assess the patent purity and patentability of the technical solutions used.

In accordance with GOST R 15.011-96, patent research refers to applied research work and is an integral part of the rationale for decisions made by business entities related to the creation, production, sale, improvement, repair and decommissioning of objects of economic activity. At the same time, participants in economic activities include enterprises, organizations, concerns, joint-stock companies and other associations, regardless of the form of ownership and subordination, the state customer, as well as persons engaged in individual labor activities.

Patent research is carried out at all stages of the life cycle of technical objects: when developing scientific and technical forecasts and plans for the development of science and technology, when creating objects of equipment, during certification of industrial products, when determining the feasibility of their export, when selling and acquiring licenses, and when protecting state interests in the field of industrial property protection.

This document establishes the order of work on patent research: development of a task for conducting patent research; development of information search regulations; search and selection of patent and other scientific and technical information, including market and economic information; and summarizing the results and drawing up a report on the patent research.

As a task for conducting patent research, a technical document drawn up in the prescribed manner is provided, or other documents: a work program, a patent research schedule, etc.; the latter must contain all the information required by the GOST and be properly prepared. All types of patent research work are carried out under the scientific and methodological guidance of the patent department. To search through the collections of patent and other scientific and technical information, including market and economic information, search regulations (a search program) are drawn up. To define the search area, it is necessary to formulate the subject of the search, select the sources of information, and determine the search depth (retrospective), the countries in which the search should be carried out, and the classification headings (MKI, NKI, UDC). Depending on the tasks assigned, patent research may include:

· research of the technical level of objects of economic activity, identification of trends and substantiation of the forecast of their development;

· study of the state of the markets for the products in question, the current patent situation, and the nature of national production in the countries under study;

· research of consumer requirements for products and services;

· research into the areas of research and production activities of organizations and firms that operate or may operate in the market for the products under study;

· analysis of commercial activities, including the licensing activities of developers (organizations and firms), manufacturers (suppliers) of products and companies providing services, and of their patent policy, in order to identify competitors, potential counterparties, licensors and licensees, and cooperation partners;

· identification of trademarks used by competing firms;

· analysis of the activities of an economic entity: selection of optimal directions for the development of its scientific, technical, production and commercial activities and of its patent and technical policy, and justification of measures for their implementation;

· justification of specific requirements for improving existing and creating new products and technology, as well as for organizing the provision of services; justification of specific requirements for ensuring the efficiency of use and competitiveness of products and services; justification of the work necessary for this and of the requirements for its results;

· technical and economic analysis and justification of the choice of technical, artistic and design solutions (from among known objects of industrial property) that meet the requirements for creating new and improving existing equipment and services;

· substantiation of proposals on the feasibility of developing new objects of industrial property for use in equipment, ensuring the achievement of the technical indicators provided for in the technical specifications;

· identification of technical, artistic, design, software and other solutions created in the course of research and development work, with the aim of classifying them as protectable objects of intellectual property, including industrial property;

· justification of the feasibility of legal protection of intellectual property (including industrial property) in the country and abroad, and selection of countries for patenting and registration;

· study of the patent purity of technical objects (examination of technical objects for patent purity, justification of measures to ensure their patent purity and the unhindered production and sale of technical objects in the country and abroad);

· analysis of the competitiveness of economic objects, the effectiveness of their use for their intended purpose, and their compliance with development trends and forecasts; identification and selection of objects for licenses and services, for example engineering;

· study of the conditions for the sale of economic objects and justification of measures for their optimization;

· justification of the feasibility and forms of commercial activity in the country and abroad for the sale of economic objects, and for the purchase and sale of licenses, equipment, raw materials, components, etc.;

· carrying out other work that meets the interests of business entities.

In accordance with the assigned tasks, the final report on patent research includes: materials on the analysis and synthesis of information in accordance with the tasks assigned to the patent research; substantiation of the optimal ways to achieve the final result of the work; and an assessment of the compliance of the completed patent research with the task for conducting it, of the reliability of its results, of the degree to which the assigned tasks have been solved, and of the justification for conducting additional patent research.

The main (analytical) part of the patent research report contains information on: the technical level and development trends of the object of economic activity; the use of industrial (intellectual) property objects and their legal protection; and the study of the patent purity of the object of technology.

Materials for practical lessons No. 6 and 7.

Reliability prediction.

Reliability prediction. Predicting reliability taking into account preliminary information. Using indirect signs for failure prediction. Individual reliability prediction. Individual prediction of reliability using the pattern-recognition method (testing procedure; procedure for training the recognition function; procedure for predicting product quality; an example of a method for individual prediction of product quality).

PZ.6-7.1. Reliability prediction.

In accordance with current GOSTs, the technical specifications for designed products (objects) include requirements for experimental confirmation of a given level of reliability, taking into account the loads actually present.

For highly reliable objects (for example, space technology), this requirement is overly stringent (in the sense of requiring tests of a large number of identical objects) and is not always practically feasible. Indeed, in order to confirm a probability of failure-free operation P = 0.999 with a 95% confidence probability, 2996 consecutive successful tests would have to be carried out. If at least one test is unsuccessful, the number of required tests increases still further. To this should be added the very long test duration, since many objects must combine a high level of reliability with a long operating time (resource). An important requirement follows from this: when assessing reliability, it is necessary to take into account all accumulated preliminary information about the reliability of technical objects.
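
The quoted figure follows from the standard zero-failure demonstration relation Pⁿ ≤ 1 − γ; the sketch below reproduces the arithmetic, showing both the exact binomial count and the common approximation n ≈ −ln(1 − γ)/(1 − P), which gives the 2996 mentioned above.

```python
# Number of consecutive failure-free tests needed to demonstrate a
# probability of failure-free operation P with confidence gamma
# (zero-failure demonstration plan).
import math

P, gamma = 0.999, 0.95

# Exact binomial condition P**n <= 1 - gamma:
n_exact = math.ceil(math.log(1 - gamma) / math.log(P))

# Common approximation n ~ -ln(1 - gamma) / (1 - P):
n_approx = math.ceil(-math.log(1 - gamma) / (1 - P))

print(n_exact, n_approx)   # 2995 2996; the text quotes the approximate figure
```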

Forecasting reliability and failures is the prediction of expected reliability indicators and of the probability of failures in the future, based on information obtained in the past or on indirect predictive signs.

Reliability calculations at the product design stage have the features of such forecasting, since an attempt is made to foresee the future state of a product that is still at the development stage.

Some of the tests discussed above contain elements of predicting the reliability of a batch of products from the reliability of a sample, for example tests conducted according to a sampling plan. These forecasting methods are based on the study of statistical patterns of failures.

But it is also possible to predict reliability and failures by studying the factors that cause failures. In this case, along with statistical patterns, the physical and chemical factors affecting reliability are considered, which complicates the analysis but makes it possible to shorten the prediction and make it more informative.

PZ.6-7.2. Predicting reliability taking into account preliminary information.

When assessing reliability, it is necessary to take into account all accumulated preliminary information about the reliability of technical objects. For example, it is important to combine the calculated information obtained at the preliminary design stage with the results of testing the object. In addition, the tests themselves are also very diverse and are carried out at different stages of the creation of an object and at different levels of its assembly (elements, blocks, units, subsystems, systems). Taking into account information characterizing changes in reliability in the process of improving an object makes it possible to significantly reduce the number of tests necessary for experimental confirmation of the achieved level of reliability.

In the process of creating technical objects, tests are carried out. Based on the analysis of the results of these tests, changes are made to the design aimed at improving their characteristics. Therefore, it is important to evaluate how effective these measures were and whether the reliability of the facility actually improved after the changes were made. Such an analysis can be performed using methods of mathematical statistics and mathematical models of changes in reliability.

If the probability of some event in a single experiment is equal to p, and in n independent experiments this event (failure) occurred m times, then the confidence limits for p are found as follows:

Case 1. Let m ≠ 0. Then:

(PZ.6-7.2.)

where the coefficients R₁ and R₂ are taken from the corresponding statistical tables.
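
Since the tables for R₁ and R₂ are not reproduced here, a standard substitute is the exact Clopper-Pearson confidence limits, computed directly from the binomial model via beta-distribution quantiles. This is a hedged alternative sketch, not the tabulated method itself; the numbers are illustrative.

```python
# Exact (Clopper-Pearson) two-sided confidence limits for a binomial
# probability p, given m occurrences in n independent trials. A standard
# stand-in for the tabulated coefficients R1, R2 mentioned above.
from scipy.stats import beta

def confidence_limits(m, n, gamma=0.95):
    alpha = 1 - gamma
    lower = beta.ppf(alpha / 2, m, n - m + 1) if m > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, m + 1, n - m) if m < n else 1.0
    return lower, upper

print(confidence_limits(m=3, n=100))   # roughly (0.006, 0.085)
```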

Case 2. Let m = 0. Then the lower limit p_l = 0, and the upper limit is

p_u = R₀/n. (PZ.6-7.3.)

The value of R₀ is found from the equation

(1 − R₀/n)ⁿ = 1 − γ. (PZ.6-7.4.)
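
A small numerical sketch of the m = 0 case under the standard binomial model; the values of n and γ below are illustrative.

```python
# Upper confidence limit on the failure probability when m = 0 failures
# were observed in n trials: solve (1 - p_u)**n = 1 - gamma for p_u.
n, gamma = 50, 0.95

p_u = 1 - (1 - gamma) ** (1 / n)
print(f"p_u = {p_u:.4f}")      # ~0.0582 for n = 50

# The equivalent coefficient R0 = n * p_u; for large n it tends to
# -ln(1 - gamma), i.e. about 3.0 for gamma = 0.95.
R0 = n * p_u
print(f"R0 = {R0:.3f}")        # ~2.91
```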

The one-sided confidence probabilities γ₁ and γ₂ are related to the two-sided confidence probability γ* by the known relationship

γ* = γ₁ + γ₂ − 1. (PZ.6-7.5.)

Bench and ground tests provide the basic information about the reliability of the object, and reliability indicators are determined from the results of such tests. If a technical product is a complex system in which the reliability of some elements is determined experimentally and that of others by calculation, then the method of equivalent parts is used to predict the reliability of the complex system.

Flight tests provide additional information about the reliability of the object, and this information should be used to refine and adjust the reliability indicators obtained during bench tests. Suppose it is necessary to refine the lower limit of the probability of failure-free operation of an object that has passed bench (ground) tests and flight tests, with m = 0 in both cases.