Students often have a hard time understanding the difference between accuracy and precision, particularly when they encounter quack “science” “studies” while surfing the Web. This post might help them grasp these concepts.

**What is Accuracy?**

Accuracy is a term describing the deviation of an experimental value from a target value. A target value is a value accepted as ‘true’: constants, fundamental quantities, and theoretical values are considered ‘true values’. Thus, accuracy is proximity to a true value.

To illustrate, assume that a quantity x is measured. Its true value is x_{t} = 1.00 and we report an experimental value x_{e} = 0.90. The absolute error of this observation is | x_{e} – x_{t} | = 0.10 and its relative error is (| x_{e} – x_{t} |/x_{t}) × 100 = 10%. Accuracy is the ratio of the experimental value to the true value; when expressed as a percent, it is called relative accuracy. In this case, x_{e}/x_{t} = 0.90/1.00, which corresponds to 90% relative accuracy.
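The arithmetic above can be sketched in a few lines of Python (the values come from the example; the variable names are my own):

```python
# Accuracy of a single measurement, using the example values above.
x_t = 1.00  # true (target) value
x_e = 0.90  # experimental value

absolute_error = abs(x_e - x_t)              # ~0.10
relative_error = absolute_error / x_t * 100  # ~10%
relative_accuracy = x_e / x_t * 100          # ~90%

print(round(absolute_error, 2),
      round(relative_error, 1),
      round(relative_accuracy, 1))
```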

**What is Precision?**

Precision has been loosely defined as how reproducible experimental results are. Modern convention, however, makes a careful distinction between reproducibility (between-run precision) and repeatability (within-run precision). Furthermore, according to Freiser (1992),

**Repeatability** is the closeness of agreement between individual experimental results obtained with the same method on identical test material or samples, under the same conditions (same operator, same apparatus, same laboratory, and same intervals of time).

**Reproducibility** is the closeness of agreement between individual experimental results obtained with the same method on identical test material or samples, but under different conditions (different operators, different apparatus, different laboratories, and different intervals of time).

Note that the source of dispersion and error in the experimental results is different in each case. Therefore, expressing the precision of results in terms of standard deviations without considering how the data were collected (within- or between-run precision) should be avoided.

Similarly, comparing any two standard deviations, or standard errors for that matter, without regard for how the data were collected (experimental conditions, number of degrees of freedom, different sampling times, etc.) should also be avoided. In particular, estimating or comparing precisions from data sets that constantly change between sampling times is a futile exercise.
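One way to picture the within-run/between-run distinction is to compute both spreads from the same replicate data. A minimal sketch in Python, with entirely made-up numbers (the data and run layout are hypothetical):

```python
from statistics import mean, stdev

# Hypothetical replicate measurements of the same sample.
# Each inner list is one run (same operator, apparatus, and day).
runs = [
    [0.91, 0.90, 0.92],  # run 1
    [0.88, 0.87, 0.89],  # run 2
    [0.93, 0.94, 0.92],  # run 3
]

# Within-run precision (repeatability): spread inside a single run.
within_run_sd = [stdev(run) for run in runs]

# Between-run precision (reproducibility): spread of the run means.
between_run_sd = stdev([mean(run) for run in runs])

print(within_run_sd, between_run_sd)
```

With these numbers the between-run standard deviation exceeds every within-run one, which is the usual pattern: changing conditions between runs adds a source of dispersion that replicates within a run do not see.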

Last but not least, the precision of a measurement depends on the measuring scale used. For instance, saying “He is about 55 years old.” is less precise than saying “He is 660 months old.”, which in turn is less precise than saying “He is 20,075 days old.”

References

Freiser, H. (1992). Concept Calculations in Analytical Chemistry. Chapter 12, p. 203. CRC Press, Boca Raton.

Miller, J. C. & Miller, J. N. (1984). Statistics for Analytical Chemistry. Chapter 1, p.19. Wiley, New York.

PS. I had swapped repeatability and reproducibility, and fixed a few more typos along the way. All done. Thanks, Dr. J. C., for pointing that out.

E. Garcia said:

George Rodrigues, at http://www.qualitydigest.com/inside/fda-compliance-article/defining-accuracy-and-precision.html, offers alternate definitions:

Accuracy refers to the deviation of a measurement from a standard or true value of the quantity being measured.

Precision tells us how close a group of measurements are to one another. The closer the replicates are to one another, the more likely future results will be similar. For this reason, good precision has predictive value; it gives us confidence in future results. Precision is usually calculated and discussed in terms of standard deviations and the coefficient of variation (CV).
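To make the standard deviation and CV mentioned above concrete, here is a short sketch with hypothetical replicate values:

```python
from statistics import mean, stdev

# Hypothetical replicates of a single measurement.
replicates = [9.8, 10.1, 10.0, 9.9, 10.2]

s = stdev(replicates)            # sample standard deviation
cv = s / mean(replicates) * 100  # coefficient of variation, in percent

print(round(s, 3), round(cv, 2))
```

The CV expresses the spread relative to the mean, which makes precisions comparable across measurements on different scales.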