
How is the error measured? Measurement of physical quantities. Causes of measurement errors

The components of the measurement result error are presented in Figure 1.1.

According to the form of quantitative expression, measurement errors are divided into absolute and relative.

Absolute error (Δ), expressed in units of the measured quantity, is the deviation of the measurement result (Xmeas) from the true value (Xtrue) or the actual value (Xd). Thus, the absolute error can be quantified by the formula Δ = Xmeas − Xtrue (or Δ = Xmeas − Xd).

The absolute error characterizes the magnitude and sign of the resulting error, but does not determine the quality of the measurement itself.

The concept of error characterizes the imperfection of measurement. A characteristic of measurement quality is the concept of measurement accuracy used in metrology, which reflects, as shown above, the measure of closeness of measurement results to the true value of the physical quantity being measured. Accuracy and error are inversely related. In other words, high measurement accuracy corresponds to a small error. Therefore, in order to be able to compare the quality of measurements, the concept of relative error was introduced.

Relative error (δ) is the ratio of the absolute measurement error to the true value of the measured quantity. It is calculated by the formula

δ = Δ / Xtrue

A measure of measurement accuracy is the reciprocal of the relative error modulus, i.e. 1/|δ|. The error δ is often expressed in percent:

δ = (Δ / Xtrue) · 100 %

If the measurement is performed once and the absolute error of the measurement result Δ is the difference between the instrument reading and the accepted true (actual) value Xd, then from relation (1.3) it follows that the relative error δ decreases as Xd increases. For measurements it is therefore advisable to choose an instrument whose readings fall in the last part of its scale (measurement range), and to compare different instruments the concept of reduced error is used. The error expressed in reduced form is used to quantify the component of the measurement error caused by the instrument (the instrumental, or hardware, error); it is discussed below (see paragraph 1.4.2 of the manual).
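
The three forms of error above can be illustrated with a short sketch. This is a minimal illustration, not from the manual, with hypothetical voltmeter numbers; it also shows why readings in the last part of the scale are preferable.

```python
# A minimal sketch (not from the manual) of absolute, relative and reduced
# error for a single voltmeter reading. All numbers are hypothetical.

def absolute_error(reading, actual):
    """Delta = X_meas - X_d, in units of the measured quantity."""
    return reading - actual

def relative_error(reading, actual):
    """delta = Delta / X_d, expressed in percent."""
    return absolute_error(reading, actual) / actual * 100.0

def reduced_error(reading, actual, full_scale):
    """gamma = Delta / X_norm, in percent; X_norm here is the upper
    measurement limit (one common choice of normalizing value)."""
    return absolute_error(reading, actual) / full_scale * 100.0

# The same absolute error of 0.1 V gives a much smaller relative error
# near the top of a 0-100 V scale than near its bottom:
print(relative_error(90.1, 90.0))   # reading in the last part of the scale
print(relative_error(10.1, 10.0))   # reading near the start of the scale
print(reduced_error(90.1, 90.0, 100.0))
```
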

According to the nature (pattern) of their change, measurement errors are divided into systematic and random. Gross errors (blunders) are also treated as random.

Systematic errors (Δs) are components of measurement error that remain constant or change naturally during multiple (repeated) measurements of the same quantity under the same conditions. Of all types of errors, systematic ones are the most dangerous and difficult to eliminate, for a number of reasons:

  • - firstly, a systematic error constantly distorts the obtained measurement result, shifting it above or below the actual value, and the direction of this distortion is difficult to determine in advance;
  • - secondly, the magnitude of a systematic error cannot be found by mathematical processing of the obtained measurement results, nor can it be reduced by repeating the measurements with the same measuring instruments;
  • - thirdly, it may remain constant, vary monotonically or vary periodically, but the law of its variation is difficult, and sometimes impossible, to determine from the measurement results;
  • - fourthly, the measurement result is influenced by several factors, each of which contributes its own systematic error depending on the measurement conditions.

Moreover, each new measurement method can produce its own, previously unknown, systematic errors, and it is necessary to look for techniques and ways to eliminate the influence of this systematic error in the measurement process.

The statement that there is no systematic error or that it is negligibly small must not only be shown, but also proven.

Such errors can only be identified through a detailed analysis of their possible sources and reduced (by using more accurate instruments, calibrating instruments using operational measures, etc.). However, they cannot be completely eliminated.

We should not forget that an undetected systematic error is “more dangerous” than a random one. While random errors characterize the scatter of the measured parameter about its actual value, a systematic error consistently distorts the value of the measured parameter itself and thereby “removes” it from the true (or conventionally true) value. Sometimes detecting a systematic error requires labor-intensive and long-term (up to several months) experiments, only to discover that the systematic error was negligibly small. Even so, this is a valuable result: it shows that the measurement technique gives accurate results, with no systematic error to eliminate.

One of the ways to eliminate systematic errors is discussed in the fourth section of this teaching aid. However, under real conditions it is impossible to eliminate the systematic component of the error completely: some non-excluded residuals always remain, and their bounds must be estimated. These residuals constitute the systematic measurement error. In principle, then, the systematic error is also random, and the division is due only to established traditions of processing and presenting measurement results.

According to the nature of their change over time, systematic errors are divided into constant (retaining magnitude and sign), progressive (increasing or decreasing with time), periodic, and errors changing over time according to a complex non-periodic law. The principal ones among these are progressive errors.

Progressive (drift) error is an unpredictable error that changes slowly over time. The distinctive features of progressive errors are as follows:

  • a) they can be corrected by amendments only at a given moment in time, after which they again change unpredictably;
  • b) changes of progressive errors in time constitute a non-stationary random process (one whose characteristics change over time), and therefore they can be described within the well-developed theory of stationary random processes only with certain reservations.

Based on the sources of manifestation, the following systematic errors are distinguished:

  • - methodological, caused by the measurement method used;
  • - instrumental, caused by the error of the SI (measuring instrument) used, determined by the accuracy class of the SI;
  • - errors caused by incorrect installation of measuring instruments or the influence of uninformative external factors;
  • - errors caused by incorrect operator actions (an ingrained incorrect skill in carrying out the measurement procedure).

In RMG 29-2013, the systematic error, depending on the nature of the change over time, is divided into constant, progressive, periodic and errors that change according to a complex law. Depending on the nature of the change over the measurement range, systematic errors are divided into constant and proportional.

Constant errors - errors that remain unchanged over a long period of time, for example during an entire series of measurements. They are the most common.

Progressive errors - continuously increasing or decreasing errors. These include, for example, errors due to wear of the measuring tips that contact the part when it is monitored by an active-control device.

Periodic errors - errors whose value is a periodic function of time or of the movement of the pointer of the measuring device.

Errors varying according to a complex law occur as the combined action of several systematic errors.

Proportional errors - errors whose value is proportional to the value of the measured quantity.

The systematic measurement error remaining after corrections have been applied is called the non-excluded systematic error (NSE).

Random errors - components of measurement error that change randomly during repeated (multiple) measurements of the same quantity under the same conditions. There is no pattern in their appearance; they show up in repeated measurements of the same quantity as scatter in the results obtained.

Random errors are inevitable and unavoidable, and always accompany a measurement. They can be described only on the basis of the theory of random processes and mathematical statistics.

Unlike systematic errors, random errors cannot be excluded from measurement results by introducing a correction, but they can be significantly reduced by measuring the quantity repeatedly and then statistically processing the results obtained.
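
A minimal sketch, not from the manual, of this effect: averaging repeated readings shrinks the random component roughly as 1/√n, while a constant systematic offset would remain untouched. All values are hypothetical.

```python
# A minimal sketch (not from the manual) of why repeated measurements and
# statistical processing reduce the random error component: the standard
# deviation of the mean of n readings falls roughly as 1/sqrt(n).
import random
import statistics

random.seed(1)

TRUE_VALUE = 50.0   # hypothetical true value of the measured quantity
SIGMA = 0.5         # hypothetical spread of a single reading

def measure():
    """One reading: true value plus a random error."""
    return random.gauss(TRUE_VALUE, SIGMA)

def averaged_measurement(n):
    """Estimate of the quantity from n repeated readings."""
    readings = [measure() for _ in range(n)]
    return statistics.mean(readings)

# The scatter of single readings is much larger than the scatter of
# 100-reading averages. Note that a constant systematic error would NOT
# shrink this way - averaging only helps against random errors.
singles = [averaged_measurement(1) for _ in range(200)]
averages = [averaged_measurement(100) for _ in range(200)]
print(statistics.stdev(singles))    # roughly SIGMA
print(statistics.stdev(averages))   # roughly SIGMA / 10
```
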

Gross errors (blunders) - errors significantly exceeding those expected under the given measurement conditions. Such errors arise from operator mistakes or unaccounted-for external influences. They are identified during processing of the measurement results and excluded from consideration according to certain rules. It should be noted that classifying an observation result as a blunder cannot always be done unambiguously.

Two points should be taken into account. On the one hand, the limited number of observations performed does not allow the form and type of the distribution law to be identified with a high degree of reliability, and hence the appropriate criteria for testing a result for a “blunder” to be selected. The second point relates to the characteristics of the object (or process) whose indicators (parameters) form the random population (sample). Thus, in medical research, and even in everyday medical practice, individual outlying results may represent a variant of the “biological norm”; they therefore require consideration, on the one hand, and analysis of the reasons leading to their occurrence, on the other.

As was shown (Section 1.2), the mandatory components of any measurement are the SI (instrument, measuring installation, measuring system), the measurement method, and the person performing the measurement.

The imperfection of each of these components produces its own component of error in the measurement result. Accordingly, by source (cause) of occurrence, instrumental, methodological and personal (subjective) errors are distinguished.

Instrumental (hardware, instrumental) measurement errors are caused by the error of the applied SI and arise due to its imperfection. Sources of instrumental errors can be, for example, inaccurate instrument calibration and zero offset, variations in instrument readings during operation, etc.

The accuracy of the SI is a characteristic of the quality of the SI and reflects the proximity of its error to zero. It is believed that the smaller the error, the more accurate the SI. An integral characteristic of SI is the accuracy class.

The term “accuracy class of measuring instruments” has not changed in the ND (normative documents). The accuracy class is a generalized characteristic of a given type of SI. As a rule, it reflects their level of accuracy and is expressed through accuracy characteristics - the limits of permissible basic and additional errors, as well as other characteristics affecting accuracy. Concerning the accuracy class, two points were noted in RMG 29-99:

  • 1) the accuracy class makes it possible to judge the limits within which the SI error of one type lies, but is not a direct indicator of the accuracy of measurements performed using each of these means. This is important to take into account when choosing SI depending on the specified measurement accuracy;
  • 2) the accuracy class of a specific type of SI is established in the standards of technical requirements (conditions) or in other ND.

The note to this term in RMG 29-2013 says:

  • - the accuracy class makes it possible to judge the values of instrumental errors or instrumental uncertainties of measuring instruments of this type when performing measurements;
  • - the accuracy class also applies to material measures.

RMG 29-2013 introduced a term new to domestic metrology, “instrumental uncertainty”: the component of measurement uncertainty due to the measuring instrument or measuring system used.

Instrumental uncertainty is usually determined when calibrating an SI or measuring system, with the exception of the primary standard. Instrumental uncertainty is used when assessing measurement uncertainty according to type B. Information regarding instrumental uncertainty can be given in the SI specification (passport, calibration certificate, verification certificate).

Possible components of the instrumental error are presented in Figure 1.8. Instrumental errors are reduced by using a more accurate instrument.


Figure 1.8 - Instrumental error and its components

Measurement method error represents a component of the systematic measurement error due to the imperfection of the adopted measurement method.

The error of the measurement method is due to:

  • - the difference between the adopted model of the measured object and the model that adequately describes its property, which is determined by measurement (this expresses the imperfection of the measurement method);
  • - the influence of methods of using SI. This occurs, for example, when measuring voltage with a voltmeter with a finite value of internal resistance. In this case, the voltmeter shunts the section of the circuit on which the voltage is measured, and it turns out to be less than it was before connecting the voltmeter;
  • - the influence of algorithms (formulas) by which measurement results are calculated (for example, incorrectness of calculation formulas);
  • - the influence of the selected SI on the signal parameters;
  • - the influence of other factors not related to the properties of the SI used.

Methodological errors are often called theoretical, because they are associated with various kinds of deviations from the ideal model of the measurement process and the use of incorrect theoretical premises (assumptions) in measurements. Due to simplifications adopted in measurement equations, significant errors often arise, to compensate for which corrections must be introduced. The corrections are equal in magnitude to the error and opposite in sign.

Separately, among the methodological errors there are errors in statistical processing of observation results. In addition to errors associated with rounding of intermediate and final results, they contain errors associated with the replacement of point (numerical) and probabilistic characteristics of measured quantities with their approximate (experimental) values. Such errors arise when replacing a theoretical distribution with an experimental one, which always occurs with a limited number of observed values ​​(observation results).

A distinctive feature of methodological errors is that they cannot be indicated in the documentation for the used SI, since they do not depend on it; they must be determined by the operator in each specific case. In this regard, the operator must clearly distinguish between the actual quantity he is measuring and the quantity to be measured.

Sometimes the error of the method may appear as random. If, for example, an electronic voltmeter has an insufficiently high input resistance, then connecting it to the circuit under study can change the distribution of currents and voltages in it. In this case, the measurement result may differ significantly from the actual one. Methodological error can be reduced by using a more accurate measurement method.
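
The voltmeter example can be made concrete. The following is a minimal sketch, not from the manual, with hypothetical component values: it computes the voltage of a resistive divider before and after a voltmeter of finite input resistance is connected, showing how the instrument's presence lowers the measured voltage.

```python
# A minimal sketch (not from the manual) of the methodological error caused
# by a voltmeter's finite input resistance: connecting the voltmeter shunts
# the circuit section, so the voltage across it becomes lower than it was
# before connection. All component values are hypothetical.

def divider_voltage(e, r1, r2):
    """Voltage across r2 in a simple divider fed by EMF e."""
    return e * r2 / (r1 + r2)

def measured_voltage(e, r1, r2, r_voltmeter):
    """Same divider with the voltmeter (resistance r_voltmeter) across r2."""
    r2_shunted = r2 * r_voltmeter / (r2 + r_voltmeter)
    return divider_voltage(e, r1, r2_shunted)

E, R1, R2 = 10.0, 100e3, 100e3          # 10 V source, 100 kΩ / 100 kΩ divider
true_u = divider_voltage(E, R1, R2)      # 5.0 V before the voltmeter is connected

# A 1 MΩ voltmeter disturbs the circuit noticeably; a 100 MΩ one barely does.
u_1meg = measured_voltage(E, R1, R2, 1e6)
u_100meg = measured_voltage(E, R1, R2, 1e8)
print(true_u, u_1meg, u_100meg)
```
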

Subjective error - a component of the systematic measurement error due to the individual characteristics of the operator (observer).

Subjective (personal) errors are caused by operator errors when taking SI readings. Errors of this kind are caused, for example, by delays or advances in signal registration, incorrect counting of tenths of a scale division, and asymmetry that occurs when setting a stroke in the middle between two marks.

According to the now-cancelled RMG 29-99, operator error (subjective error) is an error caused by the operator's mistakes in reading the indications on the SI scale and in recording instrument diagrams. At present this term is not regulated in the ND.

Subjective errors, as follows from the definition, are caused by the state of the operator, his position during work, imperfection of the senses, and the ergonomic properties of the measuring instrument. Thus, errors occur from the negligence and inattention of the operator, from parallax, i.e. from the wrong direction of view when taking readings from a pointer instrument, etc.

Such errors are eliminated by the use of modern digital instruments or automatic measurement methods.

Based on the nature of the behavior of the measured physical quantity during the measurement process, static and dynamic errors are distinguished.

Static errors arise when measuring a steady-state value of the measured quantity, i.e. when this quantity stops changing over time.

Dynamic error (measuring instruments): the difference between the SI error in dynamic mode and its static error corresponding to the value of the quantity at a given time. Dynamic errors occur during dynamic measurements, when the measured quantity changes over time and it is necessary to establish the law of its change, i.e., errors inherent in the conditions of dynamic measurement. The reason for the appearance of dynamic errors is the discrepancy between the speed (time) characteristics of the device and the rate of change of the measured value.

Depending on the influence of the measured quantity on the nature of the accumulation of error during the measurement process, errors can be additive or multiplicative.

In all of the above cases the measurement result is influenced by the measurement conditions, which give rise to the error from influencing factors - the external error.

External error - an important component of the error of the measurement result, associated with the deviation of one or more influencing quantities from their normal values or their going outside the normal range (for example, the influence of humidity, temperature, external electric and magnetic fields, instability of power supplies, mechanical influences, etc.). In most cases external errors are systematic and are determined by the additional errors of the measuring instruments used, in contrast to the basic error obtained under normal measurement conditions.

RMG 29-2013 standardizes the term “additional error (of a measuring instrument)”: a component of the SI error that arises in addition to the basic error due to the deviation of any of the influencing quantities from its normal value, or due to its going beyond the normal range of values.

There are normal and standardized (working) conditions for measurements. The value of an influence quantity established as nominal is taken as its normal value. Thus, when measuring many quantities the normal temperature value is 20 °C (293 K), while in other cases it is normalized to 23 °C (296 K). The basic error of an SI is usually given for the normal value, to which the results of many measurements performed under different conditions are reduced. The range of values of an influence quantity within which its effect on the measurement result can be neglected, in accordance with established accuracy standards, is accepted as the normal range of values of the influence quantity.

For example, when verifying standard cells (normal elements) of accuracy class 0.005, the temperature in the thermostat must not change by more than ±0.05 °C from the set temperature of 20 °C, i.e. it must remain in the range from 19.95 °C to 20.05 °C.

Standardized (operating) measurement conditions - the measurement conditions that must be met during measurements in order for the measuring instrument or measuring system to function in accordance with its intended purpose (RMG 29-2013).

A change in SI readings over time caused by changes in influencing quantities or other factors is called drift of the SI readings. An example is the rate of a chronometer, defined as the difference between corrections to its readings determined at different moments in time; typically the chronometer's rate is determined per day (the daily rate). If the zero reading drifts, the term “zero drift” is used.

RMG 29-2013 standardizes the definition "instrumental drift" which is understood as “a continuous or stepwise change in readings over time caused by changes in the metrological characteristics (MC) of the SI.” SI instrumental drift is not associated with a change in the measured quantity or with a change in any identified influencing quantity.

Thus, the error from the influencing measurement conditions should be considered a component of the systematic measurement error resulting from the unaccounted-for influence of a deviation, in one direction or another, of some parameter characterizing the measurement conditions from its established value.

This term is used when the action of one or another influencing quantity is unaccounted for or insufficiently taken into account. It should be noted, however, that the error from influencing conditions can also behave as a random error if the active factor itself is random in nature (the temperature of the room in which the measurements are performed behaves in just this way).

Measurement error is an integral part of any measurement. As instrumentation and measurement techniques develop, we strive to reduce the influence of this phenomenon on the final measurement result. Let us therefore examine in more detail what measurement error is.

Measurement error is the deviation of the measurement result from the true value of the measured quantity. The measurement error is the sum of components, each of which has its own cause.

According to the form of numerical expression, measurement errors are divided into absolute and relative.

Absolute error is the error expressed in units of the measured quantity. It is defined by the expression

Δ = X − X0, (1.2)

where X is the measurement result; X0 is the true value of this quantity.

Since the true value of the measured quantity remains unknown, in practice only an approximate estimate of the absolute measurement error is used, determined by the expression

Δ = X − Xd, (1.3)

where Xd is the actual value of this measured quantity, which, within the error of its determination, is taken as the true value.

Relative error is the ratio of the absolute measurement error to the actual value of the measured quantity:

δ = (Δ / Xd) · 100 %. (1.4)

According to the pattern of their occurrence, measurement errors are divided into systematic, progressive, and random.

Systematic error is a measurement error that remains constant or changes naturally with repeated measurements of the same quantity.

Progressive error - an unpredictable error that changes slowly over time.

Systematic and progressive errors of measuring instruments are caused by:

  • the first - errors in scale calibration or a slight shift of the scale;
  • the second - ageing of the elements of the measuring instrument.

The systematic error remains constant or changes naturally with repeated measurements of the same quantity. The peculiarity of the systematic error is that it can be completely eliminated by introducing corrections. The peculiarity of progressive errors is that they can only be corrected at a given point in time. They require continuous correction.

Random error - a measurement error that varies randomly when repeated measurements of the same quantity are taken. Random errors can be detected only through repeated measurements. Unlike systematic errors, random errors cannot be eliminated from measurement results.

By origin, a distinction is drawn between instrumental and methodological errors of measuring instruments.

Instrumental errors are errors caused by the properties of measuring instruments. They arise from the insufficiently high quality of measuring instrument elements. They include errors in the manufacture and assembly of measuring instrument elements, errors due to friction in the instrument mechanism, insufficient rigidity of its elements and parts, and so on. We emphasize that the instrumental error is individual to each measuring instrument.

Methodological error is the error of a measuring instrument that arises from the imperfection of the measurement method and the inaccuracy of the relation used to estimate the measured quantity.

Errors of measuring instruments.

The error of a measure is the difference between its nominal value and the true (actual) value of the quantity it reproduces:

Δ = Xn − Xd, (1.5)

where Xn is the nominal value of the measure; Xd is the actual value of the measure.

The absolute error of a measuring device is the difference between the instrument reading and the true (actual) value of the measured quantity:

Δ = Xp − Xd, (1.6)

where Xp is the instrument reading; Xd is the actual value of the measured quantity.

The relative error of a measure or measuring device is the ratio of the absolute error of the measure or measuring device to the true (actual) value of the reproduced or measured quantity; it can be expressed in percent:

δ = (Δ / Xd) · 100 %. (1.7)

The reduced error of a measuring device is the ratio of the error of the measuring device to a normalizing value. The normalizing value XN is a conventionally accepted value equal to either the upper measurement limit, the measurement range, or the scale length. The reduced error is usually expressed in percent:

γ = (Δ / XN) · 100 %. (1.8)
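
Formulas (1.6) and (1.8) can be combined in a short sketch, not from the source and with hypothetical calibration data: for a set of calibration points it computes the device's absolute errors and its largest reduced error over the range.

```python
# A minimal sketch (not from the source) combining formulas (1.6)-(1.8):
# absolute errors of a device at several calibration points and the largest
# reduced error over the range. All numbers are hypothetical.

# (actual value X_d, instrument reading X_p) pairs from a calibration run:
calibration = [(10.0, 10.08), (25.0, 24.95), (50.0, 50.12),
               (75.0, 74.90), (100.0, 100.05)]
X_NORM = 100.0   # normalizing value: here, the upper measurement limit

def absolute_errors(points):
    """Delta = X_p - X_d at each calibration point, formula (1.6)."""
    return [reading - actual for actual, reading in points]

def max_reduced_error(points, x_norm):
    """Largest gamma = |Delta| / X_N * 100 (%), formula (1.8)."""
    return max(abs(d) for d in absolute_errors(points)) / x_norm * 100.0

print(absolute_errors(calibration))
print(max_reduced_error(calibration, X_NORM))  # bounds the accuracy class the device can claim
```
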

Limit of permissible error of measuring instruments - the largest error of a measuring instrument, without regard to sign, at which it can be recognized as fit and approved for use. This definition applies to the basic and additional errors, as well as to the variation of indications. Since the properties of measuring instruments depend on external conditions, their errors also depend on these conditions; the errors of measuring instruments are therefore usually divided into basic and additional.

Basic - the error of a measuring instrument used under normal conditions, which are usually defined in the regulatory and technical documents for the given measuring instrument.

Additional - the change in the error of a measuring instrument caused by the deviation of influencing quantities from their normal values.

The errors of measuring instruments are also divided into static and dynamic.

Static - the error of a measuring instrument used to measure a constant quantity. If the measured quantity is a function of time, then, owing to the inertia of the measuring instrument, a component of the total error arises that is called the dynamic error of measuring instruments.

Measuring instruments also have systematic and random errors, defined similarly to the corresponding measurement errors.

Factors influencing measurement error.

Errors arise for various reasons: they may be mistakes of the experimenter, errors due to using the device outside its intended purpose, and so on. A number of concepts define the factors influencing measurement error.

Variation of instrument readings - the largest difference between the readings obtained on the forward and reverse strokes at the same actual value of the measured quantity and under constant external conditions.
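
As a minimal illustration, not from the source and with hypothetical readings, the variation can be computed from forward-stroke and reverse-stroke readings taken at the same reference points:

```python
# A minimal sketch (not from the source) of estimating the variation of
# instrument readings: the largest difference between readings taken on
# the forward (increasing) and reverse (decreasing) strokes at the same
# actual values. The readings below are hypothetical.

# Readings at the same reference points, forward then reverse stroke:
forward = [10.02, 20.05, 30.04, 40.06, 50.05]
reverse = [10.06, 20.10, 30.07, 40.08, 50.05]

def variation(up, down):
    """Largest |forward - reverse| difference over the scale points."""
    return max(abs(u - d) for u, d in zip(up, down))

print(variation(forward, reverse))
```
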

Instrument accuracy class - a generalized characteristic of a measuring instrument (device), determined by the limits of permissible basic and additional errors, as well as by other properties of measuring instruments affecting accuracy, whose values are established for certain types of measuring instruments.

The accuracy class of a device is established at release by calibrating it against a standard instrument under normal conditions.

Precision - shows how accurately or sharply a reading can be made. It is determined by how close the results of two identical measurements are to each other.

Device resolution is the smallest change in the measured value to which the device will respond.

Instrument range - determined by the minimum and maximum values of the input signal for which it is intended.

Device bandwidth is the difference between the minimum and maximum frequencies for which it is intended.

Device sensitivity- defined as the ratio of the output signal or reading of the device to the input signal or measured value.

Noise - any signal that does not carry useful information.

Error is the deviation of the measurement result from the true value of the measured value.

The true value of a physical quantity (PV) could be determined only by carrying out an infinite number of measurements, which is impossible in practice. The true value of the measured quantity is therefore unattainable, and for error analysis the actual value is used as the value closest to the true one - the value obtained by the most advanced measurement methods and the most precise measuring instruments. Thus, the measurement error is the deviation from the actual value: Δ = Xd − Xmeas.

Error accompanies all measurements and is associated with the imperfection of the method, measuring instrument, and measurement conditions (when they differ from standard conditions).

Which factors exert an influence depends on the operating principle of the device.

A distinction is drawn between errors of the SI and errors of the measurement result, arising from the influence of external conditions, the features of the measured quantity, and the imperfection of the SI.

The error of the measurement result includes the error of the measuring instrument, as well as the influence of the measurement conditions, the properties of the object and of the measured quantity: Δresult = ΔSI + Δconditions + Δobject + Δquantity.

Error classification:

1) By way of expression:

a) Absolute - error expressed in units of the measured quantity: Δ = Xd − Xmeas.

b) Relative - error expressed as the ratio of the absolute error to the measurement result or to the actual value of the measured quantity: δrel = (Δ / Xd) · 100 %.

c) Reduced - a relative error expressed as the ratio of the absolute error of the measuring instrument to a conventionally accepted value that is constant over the entire measurement range (or a part of it): γred = (Δ / Xnorm) · 100 %, where Xnorm is the normalizing value established for the given SI. The choice of Xnorm is made in accordance with GOST 8.009-84; it may be the upper limit of the measuring instrument, the measurement range, the scale length, etc. For many measuring instruments the accuracy class is determined from the reduced error. The reduced error is introduced because the relative error characterizes the error only at a given point of the scale and depends on the value of the measured quantity.

2) By the causes and conditions of occurrence:

a) Basic - the error of a measuring instrument under normal operating conditions. It arises from the imperfection of the conversion function and, in general, of the properties of the measuring instrument, and reflects the difference between the actual conversion function of the SI under normal conditions and the nominal function normalized by the documents for the SI (standards, technical conditions). Regulatory documents provide for the following normal conditions:

  • ambient temperature (20±5) °C;
  • relative humidity (65±15) %;
  • mains supply voltage (220±4.4) V;
  • mains supply frequency (50±1) Hz;
  • absence of electric and magnetic fields;
  • horizontal position of the device, with a deviation of up to ±2°.

Operating conditions of measurements - conditions under which the values of the influencing quantities lie within the working ranges for which the additional error or the change in SI readings is normalized.

For example, for capacitors an additional error associated with the deviation of temperature from normal is normalized; for an ammeter, with the deviation of the alternating-current frequency from 50 Hz.

b) Additional - a component of the error of measuring instruments that arises in addition to the basic error as a result of the deviation of any of the influencing quantities from its normal value, or as a result of its going beyond the normalized range of values. Usually the largest value of the additional error is normalized.

Limit of permissible basic error is the maximum basic error of a measuring instrument at which the SI can be deemed suitable and approved for use according to the technical specifications.

Limit of permissible additional error is the largest additional error at which the SI is approved for use.

For example, for an instrument of accuracy class 1.0, the reduced additional temperature error should not exceed ±1 % for every 10° of temperature change.

The limits of permissible main and additional errors can be expressed in the form of absolute, relative or reduced error.

To make it possible to select measuring instruments by comparing their characteristics, a generalized characteristic of a given type of SI is introduced: the accuracy class. Usually it is the limit of the permissible basic and additional errors. The accuracy class makes it possible to judge within what limits the error of a measuring instrument of a given type lies, but it is not a direct indicator of the accuracy of measurements performed with each such instrument, because the error also depends on the method, the measurement conditions, etc. This must be taken into account when choosing an SI for a specified accuracy.

Accuracy class values are established in standards, technical specifications, or other regulatory documents and are selected in accordance with GOST 8.401-80 from a standard series of values; for example, for electromechanical instruments: 0.05; 0.1; 0.2; 0.5; 1.0; 2.5; 4.0; 6.0.

Knowing the accuracy class of an SI, you can find the maximum permissible absolute error for all points of the measurement range from the reduced-error formula: Δmax,perm = (γclass · Xnorm)/100.
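A sketch of this formula (the function names and the class-1.0 ammeter are illustrative): the absolute error limit is constant over the whole range, so the relative error of a reading grows toward the start of the scale:

```python
def max_abs_error(accuracy_class, x_norm):
    # Delta_max = (gamma_class * X_norm) / 100
    return accuracy_class * x_norm / 100.0

def relative_error_percent(accuracy_class, x_norm, x):
    # relative error of a reading x made with the same instrument
    return max_abs_error(accuracy_class, x_norm) / x * 100.0

# Hypothetical class-1.0 ammeter with X_norm = 5 A:
print(max_abs_error(1.0, 5.0))                         # 0.05 A at every point
print(round(relative_error_percent(1.0, 5.0, 5.0), 6)) # 1 % at full scale
print(round(relative_error_percent(1.0, 5.0, 1.0), 6)) # 5 % near scale start
```

This is why it is advisable to choose an instrument whose readings fall in the last part of its scale.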

The accuracy class is usually marked on the instrument scale in various forms, for example 2.5 (circled).

3) By the nature of the changes:

a) Systematic error is the error component that remains constant or changes according to a known pattern throughout the entire measurement. It can be excluded from the measurement results by adjustment or by applying corrections. Systematic errors include methodological, instrumental, subjective and other errors. The quality of an SI whose systematic error is close to zero is called correctness.

b) Random errors are error components that change randomly; their causes cannot be precisely specified, and therefore they cannot be eliminated. They lead to ambiguity of readings. They can be reduced by repeated measurements with subsequent statistical processing of the results: the average of multiple measurements is closer to the actual value than the result of a single measurement. The quality characterized by closeness to zero of the random error component is called the convergence of the instrument's readings.

c) Blunders (gross errors) are errors associated with operator mistakes or unaccounted external influences. They are usually excluded from the measurement results and are not taken into account when processing the results.
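The effect of averaging repeated readings can be shown with a short simulation (all numbers here are invented for the sketch):

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible
true_value = 10.0

# 100 simulated readings contaminated by zero-mean random error
readings = [true_value + random.gauss(0.0, 0.2) for _ in range(100)]

mean = statistics.mean(readings)
# scatter of the mean shrinks roughly as 1/sqrt(n),
# so the average is a better estimate than a single reading
sdom = statistics.stdev(readings) / len(readings) ** 0.5
print(round(mean, 2), round(sdom, 3))
```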

4) Depending on the measured value:

a) Additive errors (do not depend on the measured value);

b) Multiplicative errors (proportional to the value of the measured quantity).

Multiplicative error is also called sensitivity error.

Additive error usually arises from noise, interference, vibration, and friction in supports. Examples: zero error and discreteness (quantization) error.

Multiplicative error is caused by errors in the adjustment of individual elements of the measuring instrument, for example due to aging (SI sensitivity error).
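The two components can be sketched as a simple error model (the coefficients are invented for illustration): the total absolute error at a reading x is a constant additive term plus a term proportional to x:

```python
def instrument_abs_error(x, delta_add, gamma_mult_percent):
    """Absolute error model: additive part (zero drift, quantization)
    plus multiplicative part (sensitivity error). Assumed values."""
    return delta_add + gamma_mult_percent / 100.0 * abs(x)

# delta_add = 0.01 units, multiplicative error 0.5 % of the reading:
print(instrument_abs_error(2.0, 0.01, 0.5))   # additive part still noticeable
print(instrument_abs_error(10.0, 0.01, 0.5))  # multiplicative part dominates
```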

Depending on which component of the instrument error is significant, the metrological characteristics are normalized accordingly.

If the additive error is significant, then the limit of the permissible main error is normalized in the form of a reduced error.

If the multiplicative error is significant, then the limit of the permissible main error is determined using the relative error formula.

Then the total relative error is δ = Δ/X = γmult + γadd·Xnorm/X = ±[c + d(Xnorm/X − 1)], where c = γadd + γmult and d = γadd.

This method of normalizing the metrological characteristics is used when the additive and multiplicative error components are comparable: the limit of the permissible basic relative error is expressed by a two-term formula, and the accuracy class designation accordingly consists of two numbers expressing c and d in %, separated by a slash, for example 0.02/0.01. This is convenient because the number c is the relative error of the SI under normal conditions, while the second term of the formula characterizes the growth of the relative measurement error as X decreases, i.e. the influence of the additive error component.
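A sketch of the two-term (c/d) form for a hypothetical class 0.02/0.01 instrument (the normalizing value 100 is assumed):

```python
def permissible_rel_error_percent(c, d, x, x_norm):
    # delta = +/-[c + d*(X_norm/X - 1)], in percent
    return c + d * (x_norm / x - 1.0)

# class 0.02/0.01, normalizing value X_norm = 100:
print(permissible_rel_error_percent(0.02, 0.01, 100.0, 100.0))      # c alone at full scale
print(round(permissible_rel_error_percent(0.02, 0.01, 10.0, 100.0), 4))  # grows at small X
```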

5) Depending on the nature of the change of the measured quantity:

a) Static error is the SI error when measuring a constant or slowly changing quantity.

b) Dynamic error is the SI error that arises when measuring a physical quantity changing rapidly in time; it is a consequence of the inertia of the instrument.

The result of a measurement is the value of a quantity found by measuring it. The result obtained always contains some error.

Thus, the measurement task includes not only finding the value itself, but also estimating the error allowed during the measurement.

The absolute measurement error Δ is the deviation of the measurement result A of a given quantity from its true value Ax:

Δ = A − Ax. (B.1)

In practice, instead of the true value, which is unknown, the actual value is usually used.

The error calculated by formula (B.1) is called the absolute error and is expressed in units of the measured quantity.

The quality of measurement results is usually characterized not by the absolute error Δ itself, but by its ratio to the measured value, which is called the relative error and is usually expressed as a percentage:

ε = (Δ / A) · 100 %. (B.2)

The relative error ε is the ratio of the absolute error to the measured value.

The relative error ε is directly related to the measurement accuracy.

Measurement accuracy is the quality of a measurement, reflecting the closeness of its results to the true value of the measured value. Measurement accuracy is the reciprocal of its relative error. High measurement accuracy corresponds to small relative errors.

The magnitude and sign of the error Δ depend on the quality of the measuring instruments, the nature and conditions of the measurements, and the experience of the observer.

All errors, depending on the causes of their occurrence, are divided into three types: a) systematic; b) random; c) blunders.

Systematic errors are errors whose magnitude is the same in all measurements carried out by the same method using the same measuring instruments.

Systematic errors can be divided into three groups.

1. Errors whose nature is known and whose magnitude can be determined quite accurately; such errors are accounted for by corrections. For example: a) when determining length, the elongation of the measured body and of the measuring ruler due to temperature changes; b) when determining weight, the error caused by buoyancy ("weight loss" in air), the magnitude of which depends on the temperature, humidity, and pressure of the atmospheric air, etc.

The sources of such errors are carefully analyzed, the magnitude of the corrections is determined and taken into account in the final result.

2. Errors of measuring instruments. For convenience of comparing instruments with one another, the concept of the reduced error γpr (%) has been introduced:

γpr = (Δ / Ak) · 100 %, (B.3)

where Ak is some normalizing value, for example the final value of the scale, the sum of the end values of a two-sided scale, etc.

The accuracy class of an instrument δcl is a quantity numerically equal to the greatest permissible reduced error, expressed as a percentage, i.e.

δcl = γpr max.

Electrical measuring instruments are usually characterized by an accuracy class ranging from 0.05 to 4.

If an accuracy class of 0.5 is indicated on the instrument, this means that its readings have an error of up to 0.5 % of the entire operating scale of the instrument. Errors of measuring instruments cannot be excluded, but their largest value Δmax can be determined.

The maximum absolute error of a given instrument is calculated from its accuracy class:

Δmax = (δcl · Ak) / 100. (B.4)

When measuring with an instrument whose accuracy class is not specified, the absolute measurement error is usually taken equal to half the value of the smallest scale division.
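Both conventions can be sketched together (the class-0.5 instrument and the 1 mm ruler are invented examples):

```python
def abs_error_from_class(accuracy_class, a_k):
    # Delta_max = accuracy_class * A_k / 100 (the formula above)
    return accuracy_class * a_k / 100.0

def abs_error_half_division(smallest_division):
    # instrument with no stated accuracy class:
    # take half of the smallest scale division
    return smallest_division / 2.0

# class-0.5 instrument with a 0..100 V scale:
print(abs_error_from_class(0.5, 100.0))  # -> 0.5 (V)
# unmarked ruler with 1 mm divisions:
print(abs_error_half_division(1.0))      # -> 0.5 (mm)
```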

3. The third type includes errors whose existence is not suspected. For example: it is necessary to measure the density of some metal; for this, the volume and mass of the sample are measured.

If the sample being measured contains voids inside, for example, air bubbles trapped during casting, then the density measurement is carried out with systematic errors, the magnitude of which is unknown.

Random errors are those errors whose nature and magnitude are unknown.

Random measurement errors arise due to the simultaneous influence on the measurement object of several independent quantities, the changes of which are of a fluctuation nature. It is impossible to exclude random errors from measurement results. It is only possible, on the basis of the theory of random errors, to indicate the limits between which the true value of the measured quantity lies, the probability of the true value being within these limits, and its most probable value.

Blunders (misses) are observational errors; their source is the inattention of the experimenter.

You should understand and remember:

1) if the systematic error is decisive, i.e. its magnitude is significantly greater than the random error inherent in the given method, then it is sufficient to perform the measurement once;

2) if the random error is decisive, then the measurement should be performed several times;

3) if the systematic error Δsys and the random error Δrand are comparable, then the total measurement error Δtotal is calculated, according to the law of addition of errors, as their geometric sum: Δtotal = √(Δsys² + Δrand²).
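Point 3, the geometric (root-sum-square) addition of comparable components, can be sketched as follows (names are illustrative):

```python
import math

def total_error(delta_sys, delta_rand):
    """Geometric (root-sum-square) addition of comparable
    systematic and random error components."""
    return math.hypot(delta_sys, delta_rand)

# comparable components of 3 and 4 units combine to 5,
# not to the arithmetic sum 7:
print(total_error(3.0, 4.0))  # -> 5.0
```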

The error of a measurement result is the deviation of the measurement result X from the true (actual) value of the measured quantity: Δ = X − Xtrue.

Since the true value of the measured quantity is always unknown and in practice we deal with actual values XD, the formula for determining the error takes the form Δ = X − XD.

Main sources of measurement error

The sources of measurement errors include:

· incomplete compliance of the measurement object with its accepted model;

· incomplete knowledge of the measured quantity;

· incomplete knowledge of the influence of environmental conditions on the measurement;

· imperfect measurement of environmental parameters;

· the finite resolution of the instrument or its sensitivity threshold;

· inaccuracy in transferring the value of a unit of quantity from standards to working measuring instruments;

· inaccurate knowledge of constants and other parameters used in the algorithm for processing measurement results;

· approximations and assumptions implemented in the measurement method;

· subjective error of the operator when taking measurements;

· changes in repeated observations of the measured quantity under apparently identical conditions, and others.

Methodological error arises due to shortcomings of the measurement method used. Most often, this is a consequence of various assumptions when using empirical relationships between measured quantities or design simplifications in the instruments used in this measurement method.
Subjective error is associated with such individual characteristics of operators as attentiveness, concentration, speed of reaction, and degree of professional preparedness. Such errors are more common when there is a large proportion of manual labor when carrying out measurements and are almost absent when using automated measuring instruments.

Classification of measurement errors according to the form of presentation of the error and the nature of the change in results during repeated measurements

According to presentation form

Absolute error is an estimate of the absolute measurement error. It is calculated in different ways, determined by the distribution of the random variable, so the magnitude of the absolute error may differ depending on that distribution. If Xmeas is the measured value and Xtrue is the true value, then the inequality |Xmeas − Xtrue| ≤ Δ must hold with some probability close to 1. If the random variable is distributed according to the normal law, the standard deviation is usually taken as the absolute error. Absolute error is measured in the same units as the quantity itself.

There are several ways to write a quantity along with its absolute error.

· Signed notation X ± Δ is usually used. For example, the world record in the 100 metres sprint, set in 1983, is 9.930±0.005 s.

· To record quantities measured with very high accuracy, another notation is used: the digits corresponding to the error of the last digits of the mantissa are added in parentheses. For example, the measured value of the Boltzmann constant is 1.3806488(13)×10⁻²³ J/K, which can also be written much longer as 1.3806488×10⁻²³ ± 0.0000013×10⁻²³ J/K.

Relative error is the measurement error expressed as the ratio of the absolute measurement error to the actual or average value of the measured quantity (RMG 29-99): δ = Δ/XD or δ = Δ/X̄.

The relative error is a dimensionless quantity; its numerical value can be given, for example, in percent.

Reduced error is the error expressed as the ratio of the absolute error of the measuring instrument to a conventionally accepted value of the quantity, constant over the entire measurement range or over part of it. It is calculated by the formula γ = (Δ/Xnorm)·100 %, where Xnorm is the normalizing value, which depends on the type of scale of the measuring instrument and is determined by its calibration:

· if the instrument scale is one-sided, i.e. the lower measurement limit is zero, then Xnorm is taken equal to the upper measurement limit;

· if the instrument scale is two-sided, then the normalizing value is equal to the width of the instrument's measurement range.

The given error is also a dimensionless quantity.

By the nature of their manifestation (properties), errors are divided into systematic and random; by the way they are expressed, into absolute and relative.
Absolute error is expressed in units of the measured quantity, while relative error is the ratio of the absolute error to the measured (actual) value of the quantity; its numerical value is expressed either as a percentage or as a fraction of unity.
Experience in carrying out measurements shows that with repeated measurements of the same unchanging physical quantity under constant conditions, the measurement error can be represented as the sum of two components that manifest themselves differently from measurement to measurement. Some factors act constantly or change regularly during the measurement process and affect the measurement result and its error; errors caused by such factors are called systematic.
Systematic error – component of the measurement error that remains constant or changes naturally with repeated measurements of the same quantity. Depending on the nature of the change, systematic errors are divided into constant, progressive, periodic, changing according to a complex law.
Closeness of the systematic error to zero reflects the correctness of the measurements.
Systematic errors are usually estimated either by theoretical analysis of the measurement conditions, based on the known properties of the measuring instruments, or by using more accurate measuring instruments. As a rule, an attempt is made to eliminate systematic errors by means of corrections. A correction is the value of a quantity introduced into the uncorrected measurement result in order to eliminate the systematic error; the sign of the correction is opposite to the sign of the error. The occurrence of errors is also influenced by factors that appear irregularly and disappear unexpectedly, and whose intensity does not remain constant. The results of measurements under such conditions show differences that are individually unpredictable; their underlying patterns appear only over a significant number of measurements. Errors resulting from the action of such factors are called random errors.
Random error – component of the measurement error that changes randomly (in sign and value) during repeated measurements of the same quantity, carried out with the same care.
The insignificance of random errors indicates good convergence of measurements, that is, the proximity to each other of the results of measurements performed repeatedly by the same means, by the same method, under the same conditions and with the same care.
Random errors are detected by repeated measurements the same magnitude under the same conditions. They cannot be excluded empirically, but can be assessed when processing observational results. Dividing measurement errors into random and systematic is very important, because taking into account and assessing these error components requires different approaches.
Factors causing errors can, as a rule, be reduced to a common level at which their influence on the formation of the error is more or less uniform. However, some factors may act unexpectedly strongly, for example a sharp drop in the mains voltage. In this case, errors may arise that significantly exceed those justified by the measurement conditions, the properties of the measuring instruments and the measurement method, and the qualifications of the operator. Such errors are called gross errors, or blunders.
Gross error (blunder) is the error of an individual measurement result in a series of measurements that, for the given conditions, differs sharply from the other results. Gross errors should always be excluded from consideration if they are known to result from obvious mistakes in measurement. If the causes of the outlying observations cannot be established, statistical methods are used to decide whether to exclude them. There are several criteria for identifying gross errors; some of them are discussed below in the section on processing measurement results.
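One simple statistical criterion of this kind (a sketch only, not one of the formal criteria discussed later; the threshold and the data series are invented) flags readings that deviate from the mean by more than k sample standard deviations:

```python
import statistics

def flag_blunders(readings, k=2.0):
    """Return readings lying more than k sample standard deviations
    from the mean. For short series a small k is needed, since a
    single outlier inflates both the mean and the standard deviation."""
    m = statistics.mean(readings)
    s = statistics.stdev(readings)
    return [x for x in readings if abs(x - m) > k * s]

series = [9.98, 10.01, 10.02, 9.99, 10.00, 12.50]
print(flag_blunders(series))  # -> [12.5]
```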