Measurements and Error Analysis
"Information technology is meliorate to be roughly correct than precisely incorrect." — Alan Greenspan
The Uncertainty of Measurements
Some numerical statements are exact: Mary has 3 brothers, and 2 + 2 = 4. However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. The complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or results from other experiments?" This question is fundamental for deciding if a scientific hypothesis is confirmed or refuted.

When we make a measurement, we generally assume that some exact or true value exists based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:
( 1 )
measurement = (best estimate ± uncertainty) units
Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass for the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to 2 decimal places, you could report the mass as m = 17.43 ± 0.01 g. Repeated measurements might instead lead you to report a slightly different value, such as m = 17.44 ± 0.02 g. Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy. Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result. The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement.
( 2 )
Relative Uncertainty =
uncertainty / measured quantity
Example: m = 75.5 ± 0.5 g has a relative uncertainty of 0.5/75.5 = 0.0066 = 0.7%.
( 3 )
Relative Error =
(measured value − expected value) / expected value
If the expected value for m is 80.0 g, then the relative error is: (75.5 − 80.0)/80.0 = −0.056 = −5.6%. Note: The minus sign indicates that the measured value is less than the expected value.
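As a quick sketch, the two ratios above can be computed directly, using the values from the examples in this section:

```python
measured, uncertainty, expected = 75.5, 0.5, 80.0

# Relative uncertainty: uncertainty / measured quantity
rel_uncertainty = uncertainty / measured        # about 0.0066, i.e. 0.7%

# Relative error: (measured - expected) / expected
rel_error = (measured - expected) / expected    # about -0.056, i.e. -5.6%

print(f"relative uncertainty = {rel_uncertainty:.1%}")
print(f"relative error = {rel_error:.1%}")
```

The sign of the relative error carries information (here, the measurement is below the expected value), while the relative uncertainty is always quoted as a positive ratio.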
Types of Errors
Measurement errors may be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).
Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).
Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.
When making careful measurements, our goal is to reduce as many sources of error as possible and to keep track of those errors that we cannot eliminate. It is useful to know the types of errors that may occur, so that we may recognize them when they arise. Common sources of error in physics laboratory experiments:
Incomplete definition (may be systematic or random) — One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.

Failure to account for a factor (usually systematic) — The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For instance, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may fail to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorm should be done before beginning the experiment in order to plan for and account for the confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected earlier.

Environmental factors (systematic or random) — Be aware of errors introduced by your immediate working environment. You may need to account for or protect your experiment from vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.

Instrument resolution (random) — All instruments have finite precision that limits the ability to resolve small measurement differences. For example, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case).
One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced, and the magnitude of the unknown quantity can be found by comparison with a measurement standard. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.

Calibration (systematic) — Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full scale reading), so that larger values result in greater absolute errors.

Zero offset (systematic) — When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failure to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.

Physical variations (random) — It is always wise to obtain multiple measurements over the widest range possible. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.
Parallax (systematic or random) — This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or low (some analog meters have mirrors to help with this alignment).

Instrument drift (systematic) — Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.

Lag time and hysteresis (systematic) — Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is too high or low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis, where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.

Personal errors come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, or may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.
Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.
Estimating Experimental Uncertainty for a Single Measurement
Any measurement you make will have some uncertainty associated with it, no matter the precision of your measuring tool. So how do you determine and report this uncertainty?
The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.
For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be ± 5 mm; with a more precise instrument, such as a caliper, it might be reduced to ± 2 mm.
( 4 )
Measurement = (measured value ± standard uncertainty) units
where the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).
Example: Diameter of tennis ball = 6.7 ± 0.2 cm.
Estimating Uncertainty in Repeated Measurements
Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of 5 measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.
( 5 )
Average (mean) =
(x1 + x2 + ... + xN) / N
For this situation, the best estimate of the period is the average, or mean.
Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers, which should be examined to determine if they are bad data points that should be omitted from the average or valid measurements that require further investigation). Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.
Consider, as another example, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be consistently higher than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).
( 6 )
Average =
(sum of observed widths) / (no. of observations) = 31.19 cm
This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate, because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value? One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.
( 7 )
d =
(|x1 − x̄| + |x2 − x̄| + ... + |xN − x̄|) / N
However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and is used because of its association with the normal distribution that is frequently encountered in statistical analyses.
Standard Deviation
To calculate the standard deviation for a sample of N measurements:
1. Sum all the measurements and divide by N to get the average, or mean.
2. Now, subtract this average from each of the N measurements to obtain N "deviations".
3. Square each of these N deviations and add them all up.
4. Divide this result by (N − 1) and take the square root.
We can write out the formula for the standard deviation as follows. Let the N measurements be called x1, x2, ..., xN. Let the average of the N values be called x̄. Then each deviation is given by δxi = xi − x̄, for i = 1, 2, ..., N. The standard deviation is:

( 8 )
s = √((δx1² + δx2² + ... + δxN²) / (N − 1))
In our previous example, the average width is x̄ = 31.19 cm, and the average deviation is d = 0.086 cm. The standard deviation is:

s = √(((0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)²) / (5 − 1)) = 0.12 cm
Figure 1
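The four steps above can be sketched in a few lines of code; here they are applied to the five pendulum timings given earlier (a walk-through added for illustration, not part of the original manual):

```python
import math

measurements = [0.46, 0.44, 0.45, 0.44, 0.41]  # pendulum periods, in seconds
N = len(measurements)

# Step 1: sum the measurements and divide by N to get the mean
mean = sum(measurements) / N

# Step 2: subtract the mean from each measurement to obtain N deviations
deviations = [x - mean for x in measurements]

# Steps 3 and 4: square and sum the deviations, divide by (N - 1), take the root
s = math.sqrt(sum(d ** 2 for d in deviations) / (N - 1))

print(f"mean = {mean:.2f} s, standard deviation = {s:.3f} s")
```

For this data set the mean is 0.44 s and the sample standard deviation is about 0.019 s, consistent with the scatter visible in the raw timings.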
Standard Deviation of the Mean (Standard Error)
When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).
( 9 )
σx̄ = s / √N
The standard error is smaller than the standard deviation by a factor of 1/√N. For the paper width example, with N = 5 and s = 0.12 cm, the standard error is 0.12/√5 = 0.05 cm, so the result is reported as: Average paper width = 31.19 ± 0.05 cm.
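Continuing the sketch with the same pendulum timings, the standard error follows directly from the standard deviation (Python's standard library `statistics.stdev` uses the N − 1 divisor described above):

```python
import math
import statistics

measurements = [0.46, 0.44, 0.45, 0.44, 0.41]  # pendulum periods, in seconds

s = statistics.stdev(measurements)        # sample standard deviation (N - 1)
se = s / math.sqrt(len(measurements))     # standard deviation of the mean

print(f"T = {statistics.mean(measurements):.2f} ± {se:.3f} s")
```

With only five readings the standard error is about 0.008 s, roughly half the instrument precision quoted earlier, which illustrates why averaging repeated measurements tightens the reported uncertainty.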
Anomalous Data
The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.
Fractional Uncertainty Revisited
When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,
( 10 )
Fractional uncertainty = uncertainty / average = 0.05/31.19 = 0.0016 ≈ 0.2%
Note that the fractional uncertainty is dimensionless but is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement "is good to about 1 part in 500" or "precise to about 0.2%". The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using the result of a measurement, as discussed in the next section.
Propagation of Uncertainty
Suppose we want to determine a quantity f, which depends on x and perhaps several other variables y, z, etc. We want to know the error in f if we measure x, y, ... with errors σx, σy, ... Examples:
( 11 )
f = xy (Area of a rectangle)
( 12 )
f = p cos θ (x-component of momentum)
( 13 )
f = x / t (velocity)
For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:
( 14 )
δf = (df/dx) δx
Thus, taking the square and the average:
( 15 )
δf² = (df/dx)² δx²
and using the definition of σ, we get:
( 16 )
σf = |df/dx| σx
Examples: (a) f = √x
( 17 )
df/dx = 1/(2√x)
( 18 )
σf = σx/(2√x), or σf/f = σx/(2x)
(b) f = x²: here σf = |2x| σx, or σf/f = 2 σx/x
(c) f = cos θ
( 22 )
σf = |sin θ| σθ, or σf/f = |tan θ| σθ. Note: in this situation, σθ must be in radians.
In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

δf = (∂f/∂x) δx + (∂f/∂y) δy
The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average, we get the law of propagation of uncertainty:

δf² = (∂f/∂x)² δx² + (∂f/∂y)² δy² + 2 (∂f/∂x)(∂f/∂y) δx δy
If the measurements of x and y are uncorrelated, then the average of δx δy is 0, and we get:

σf = √((∂f/∂x)² σx² + (∂f/∂y)² σy²)
Examples: (a) f = x + y
( 27 )
∴ σf = √(σx² + σy²)
When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is simply the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value as long as the constant is an exact value.
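A minimal sketch of this rule for independent (uncorrelated) measurements; the example values are illustrative, not from the manual:

```python
import math

def uncertainty_of_sum(sigma_x, sigma_y):
    """Absolute uncertainty of f = x + y (or x - y) for independent x and y."""
    return math.hypot(sigma_x, sigma_y)  # root sum of squares

# e.g. (10.0 ± 0.3) + (5.0 ± 0.4): the combined uncertainty is 0.5, not 0.7
print(uncertainty_of_sum(0.3, 0.4))
```

Note that the RSS result (0.5) is smaller than the simple sum of the uncertainties (0.7), which is why quadrature addition is preferred when the errors are independent.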
(b) f = xy
( 29 )
∴ σf = √(y² σx² + x² σy²)
Dividing the previous equation by f = xy, we get:
σf/f = √((σx/x)² + (σy/y)²)
(c) f = x/y
σf = √(σx²/y² + x² σy²/y⁴)
Dividing the previous equation by f = x/y, we get the same form:
σf/f = √((σx/x)² + (σy/y)²)
When multiplying (or dividing) independent measurements, the relative uncertainty of the product (quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is just the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.
Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term. Example: Find the uncertainty in v, where v = at
( 34 )
σv/v = √((σa/a)² + (σt/t)²) = √((0.010)² + (0.029)²) = 0.031 or 3.1%
Notice that the relative uncertainty in t (2.9%) is significantly greater than the relative uncertainty for a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 3%). Graphically, the RSS is like the Pythagorean theorem:
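The product/quotient rule is easy to check in code, using the relative uncertainties from the v = at example above:

```python
import math

def relative_uncertainty(rel_sigmas):
    """Relative uncertainty of a product or quotient of independent factors."""
    return math.hypot(*rel_sigmas)  # root sum of squares of the relative terms

# v = a*t, with relative uncertainties of 1.0% in a and 2.9% in t
rel_v = relative_uncertainty([0.010, 0.029])
print(f"{rel_v:.3f}")  # about 0.031, i.e. 3.1%
```

Because 0.029 dominates 0.010, the result is nearly equal to the larger term alone, which is exactly the "weakest link" shortcut described next.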
Figure 2
The total uncertainty is the length of the hypotenuse of a right triangle with legs the length of each uncertainty component.
Timesaving approximation: "A chain is only as strong as its weakest link."
If one of the uncertainty terms is more than 3 times greater than the other terms, the root-sum-of-squares formula can be skipped, and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without losing any accuracy in the estimate of the overall uncertainty.
The Upper-Lower Bound Method of Uncertainty Propagation
An alternative, and sometimes simpler, procedure to the tedious propagation of uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best and worst case scenarios. For example, suppose you measure an angle to be: θ = 25° ± 1° and you need to find f = cos θ, then:
( 35 )
f max = cos(24°) = 0.9135
( 36 )
f min = cos(26°) = 0.8988
( 37 )
∴ f = 0.906 ± 0.007
Note that even though θ was only measured to 2 significant figures, f is known to 3 figures. By using the propagation of uncertainty law: σf = |sin θ| σθ = (0.423)(π/180) = 0.0074
The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.
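Both estimates for the f = cos θ example can be reproduced in a few lines, assuming θ = 25° ± 1° as above:

```python
import math

theta, dtheta = math.radians(25), math.radians(1)

# Upper-lower bound method: evaluate f at the extremes of theta
# (cosine decreases on this interval, so the smaller angle gives the maximum)
f_max = math.cos(theta - dtheta)            # cos(24 deg), about 0.9135
f_min = math.cos(theta + dtheta)            # cos(26 deg), about 0.8988
f_mid = (f_max + f_min) / 2
f_half_range = (f_max - f_min) / 2
print(f"f = {f_mid:.3f} ± {f_half_range:.3f}")

# Propagation of uncertainty law for comparison: sigma_f = |sin(theta)| * sigma_theta
sigma_f = abs(math.sin(theta)) * dtheta     # about 0.0074
```

For this smooth, nearly linear function the two methods agree closely; the bound method's advantage shows up when the functional relationship is messier.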
The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this application, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense.
Significant Figures
The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left, through the last digit. For instance, 0.44 has two significant figures, and the number 66.770 has five significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has 2 significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures). When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For example, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula: A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision, to within a fraction of a square millimeter! Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m². From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:
1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%
Use of Significant Figures for Simple Propagation of Uncertainty
By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math functions, all without the use of complicated formulas for propagating uncertainties.
For multiplication and division, the number of significant figures that are reliably known in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.
Example:
6.6 × 7328.7 = 48369.42 = 48 × 10³
(2 significant figures) × (5 significant figures) = (2 significant figures)
For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.
Examples:
223.64 + 54 = 278
5560.5 + 0.008 = 5560.5
Uncertainty, Significant Figures, and Rounding
For the same reason that it is dishonest to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:
( 38 )
measured density = 8.93 ± 0.475328 g/cm³ WRONG!
The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ±50% because of all the various sources of error, none of which can be known exactly. Therefore, uncertainty values should be stated to only 1 significant figure (or possibly 2 significant figures if the first digit is a 1). Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures. The correct way to report the result above is: measured density = 8.9 ± 0.5 g/cm³.
An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.
In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.
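One way to automate this rounding rule (uncertainty to one significant figure, value to the matching decimal place) is sketched below; `report` is a hypothetical helper written for illustration, not something from the manual:

```python
import math

def report(value, uncertainty):
    """Round uncertainty to 1 significant figure and round value to match."""
    # decimal place of the uncertainty's leading digit
    place = math.floor(math.log10(abs(uncertainty)))
    return round(value, -place), round(uncertainty, -place)

print(report(8.93, 0.475328))   # the corrected density example: (8.9, 0.5)
print(report(31.19, 0.0537))    # the paper width example: (31.19, 0.05)
```

A fuller version would keep two significant figures when the uncertainty's leading digit is a 1, as the text suggests; that refinement is omitted here for brevity.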
Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases exponentially with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.
Combining and Reporting Uncertainties
In 1993, the International Organization for Standardization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here are a few key points from this 100-page guide, which can be found in modified form on the NIST website: when reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty Uc of the value.
Conclusion: "When do measurements agree with each other?"
We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or results from other experiments?" Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure would be to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and try to discover the source of the discrepancy if the difference is truly significant. To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website. Here are some examples using this graphical analysis tool:
Figure 3
A = 1.2 ± 0.4, B = 1.8 ± 0.4
Figure 4
An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about 5% probability) that the values are the same. For the example above, with u = 0.4 for each value, the combined standard uncertainty is √(0.4² + 0.4²) = 0.57, and the ratio is 0.6/0.57 = 1.1, so the values agree. With u = 0.2, the ratio is 0.6/0.28 = 2.1, and the values are likely discrepant.
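The ratio test described here is straightforward to sketch; the inputs are the A and B values from the example above:

```python
import math

def discrepancy(a, u_a, b, u_b):
    """Number of combined standard uncertainties separating two values."""
    return abs(a - b) / math.hypot(u_a, u_b)

print(round(discrepancy(1.2, 0.4, 1.8, 0.4), 1))  # ratio below 2: values agree
print(round(discrepancy(1.2, 0.2, 1.8, 0.2), 1))  # ratio above 2: likely discrepant
```

Note that the same difference of 0.6 leads to opposite conclusions depending on the quoted uncertainties, which is exactly the point made in the conclusion above.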
References
Baird, D.C. Experimentation: An Introduction to Measurement Theory and Experiment Design, 3rd ed. Prentice Hall: Englewood Cliffs, 1995. Bevington, Phillip and Robinson, D. Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. McGraw-Hill: New York, 1991. ISO. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO) and the International Committee on Weights and Measures (CIPM): Switzerland, 1993. Lichten, William. Data and Error Analysis, 2nd ed. Prentice Hall: Upper Saddle River, NJ, 1999. NIST. Essentials of Expressing Measurement Uncertainty. http://physics.nist.gov/cuu/Uncertainty/ Taylor, John. An Introduction to Error Analysis, 2nd ed. University Science Books: Sausalito, 1997.
Source: https://www.webassign.net/question_assets/unccolphysmechl1/measurements/manual.html