
Calibration Science, Part I: Precision, Accuracy, and Random Error

Published on: 
Cannabis Science and Technology, November/December 2023, Volume 6, Issue 9
Pages: 6-9

Columns | Cannabis Analysis


Here, we begin a new column series on calibration science. The goal of these columns will be to teach you the foundational theory behind how to calibrate analytical instruments that are used for quantitative analysis in the cannabis analysis industry, such as potency and pesticide analyzers. We begin with an in-depth discussion of accuracy, precision, random error sources, and how to reduce them.

Review

Early in the history of the “Cannabis Analysis” column series, I wrote an article titled, “Error, Precision, and Accuracy” (1). Here, we review the salient parts of that article and expand upon that discussion to give you the full story on these topics.

Let’s get the definitions of precision and accuracy out of the way first. Precision is the spread in repeated measurements of the same quantity. Imagine we are going to weigh the same sample three times on two different balances. Balance 1 gives us values of 2.11, 2.12, and 2.13 grams (g), whereas Balance 2 gives us values of 2.0, 2.2, and 2.4 g. We define the range in any set of numbers as the highest value minus the lowest value, as shown in Equation 1:

R = Max – Min     [1]

where R is the range, Max is the maximum value in a data set, and Min is the minimum value in a data set.

Thus, the range for the measurements from Balance 1 is (2.13 g – 2.11 g) or 0.02 g, whereas the range for the measurements from Balance 2 is (2.4 g – 2.0 g) or 0.4 g. In the case of our two balances, Balance 1 has a precision of 0.02 g whereas Balance 2 has a precision of 0.4 g. We would then say that Balance 1 is more precise, so of course it is preferred. Imagine the dots on the targets in Figure 1 are measurements of the mass of the same sample on a balance. For the leftmost target, the data are imprecise because there is wide scatter in the data; the measurements are all over the place. Assuming the mass of our sample did not change during these measurements, the spread in the values was caused by random error. Precision, then, is a measure of the amount of random error in a set of measurements.
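To make Equation 1 concrete, here is a minimal Python sketch that computes the range for the two balances; the readings are the example values given above, and the function name is simply illustrative.

```python
# Range (Equation 1): R = Max - Min, used here as a simple measure of precision.
balance_1 = [2.11, 2.12, 2.13]  # grams, example readings from the text
balance_2 = [2.0, 2.2, 2.4]     # grams, example readings from the text

def measurement_range(values):
    """Return the range R = Max - Min of a set of repeated measurements."""
    return max(values) - min(values)

print(f"Balance 1 range: {measurement_range(balance_1):.2f} g")  # 0.02 g
print(f"Balance 2 range: {measurement_range(balance_2):.2f} g")  # 0.40 g
```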

The ideas of precision and random error were driven home for me in the very first laboratory experiment I performed when I took Physics 101 as a freshman. There was a team of three of us, the professor handed us a meter stick, pointed at the table in front of us, and said, “I want each of you to measure the length of that table 10 times” for a total of 30 measurements. I thought this was the most stupid laboratory experiment ever concocted. The table wasn’t going to change length while we were measuring it, so of course all the measurements were going to produce the same number. Boy was I wrong! Not only did we end up with 30 completely different table lengths, but even my own 10 measurements had a spread to them—despite my best efforts to be reproducible. This spread, or range as we have defined it, in a set of measurements has a bunch of names but can be referred to as error, spread, dispersion, or variance: I prefer the term error.

Accuracy—note in Figure 1 that, in addition to the word precision, the word accuracy is used. Accuracy is a measure of how far a measurement is off from its true value, the true value being supplied by a standard or reference sample. For example, for balances, 1.0 g weights certified by the National Institute of Standards and Technology (NIST) (2) are available. By the way, NIST is the federal government agency in the US that is in charge of issuing standard reference materials. Imagine then that we weigh our NIST-certified 1 g standard on Balance 1 and obtain a value of 1.1 g, and on Balance 2 we obtain a value of 1.01 g. Balance 2 is said to be more accurate than Balance 1 because it was off from the true value by only 0.01 g, whereas Balance 1 was off from the true value by 0.1 g.

Based on these measurements, then, Balance 1 is more precise, but Balance 2 is more accurate. How can this be? Because precision and accuracy are not the same thing! In Figure 1, the data in the middle target are precise because the measurements are all tightly clustered together; however, these data are inaccurate because they cluster far from the bullseye, the true value. The ideal situation is illustrated by the target to the right in Figure 1. The data are tightly clustered, hence precise, and are centered around the bullseye, which means they are close to the true value and are therefore accurate.
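To contrast this with precision, here is a similarly minimal Python sketch that scores accuracy as the absolute deviation of each balance’s reading of the 1 g certified weight from its true value; the readings are the example values from the text and the variable names are purely illustrative.

```python
# Accuracy is judged by how far a measured value falls from the certified true value.
TRUE_VALUE = 1.00  # grams, the certified reference weight from the example above

reading_balance_1 = 1.10  # grams, example reading from the text
reading_balance_2 = 1.01  # grams, example reading from the text

def absolute_error(measured, true_value):
    """Deviation of a single measurement from the accepted true value."""
    return abs(measured - true_value)

print(f"Balance 1 is off by {absolute_error(reading_balance_1, TRUE_VALUE):.2f} g")  # 0.10 g
print(f"Balance 2 is off by {absolute_error(reading_balance_2, TRUE_VALUE):.2f} g")  # 0.01 g
```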


Random Error and Its Sources

As illustrated by my table measuring anecdote above, there is always error in any measurement of any quantity. Random error is caused by the fact that human beings are not omnipotent and cannot control all the variables in the universe all the time while making a measurement. Thus, random error is caused by random fluctuations in things we cannot control. Also, the sign of random error is random; that is, positive and negative fluctuations are equally likely at any given point in time. Sources of random error are myriad and include the following:

1. Misalignment – particularly for optical systems such as spectrometers. It is important that all optical elements are aligned properly so that the correct part of the sample is analyzed, and a maximum amount of light hits the detector.

2. Scale difficult to read – This applies to any scale with hash marks on it that must be read by eye, such as rulers and graduated cylinders. Rule of thumb—the error in measurements taken by eye is ±1/2 the space between hash marks. Thus, if a ruler has lines every 1 mm, the error in a length measured with that ruler is ±0.5 mm.

3. Fluctuations in electrical power – These days most of our analytical instruments are powered by electricity, which comes from a plug in the wall and is something we rarely think about. However, electrical power has an amperage, voltage, and frequency, and all the electronics on our instruments are designed to function with known and fixed values of these quantities. If anything in what is called the condition of the electrical power changes, it can throw off the measurements made by your instrument. I never thought much about this source of error until I was awakened to it by the experiments of some very smart service engineers at PerkinElmer, a place I used to work. They ran some standards on a gas chromatograph–mass spectrometer (GC–MS) using standard wall power, measured the accuracy of the instrument, and then repeated the experiment with the instrument plugged into what is called a line conditioner, which simply smooths out fluctuations in the voltage, amperage, and frequency of the electrical power coming into an instrument. The use of the line conditioner significantly improved the accuracy of the instrument. Given that line conditioners only cost a few hundred dollars and are readily available and easy to use, I highly recommend that all analytical instruments use one. An investment in a line conditioner may give you the biggest bang for the buck in terms of improving accuracy.

4. Instrument not in calibration or in need of repair – The performance of all analytical instruments will degrade over time. This means their performance needs to be constantly monitored and routine maintenance needs to be performed at required intervals. It’s like driving your car: if you don’t get your oil changed at regular intervals, your engine will break down. To prevent unwanted breakdowns, I highly recommend a service contract for your most important analytical instruments. With a service contract, for an annual fee the manufacturer will send someone out to inspect your instrument, perform routine maintenance, replace any worn parts, and tweak the instrument so it meets specifications. This does not guarantee that unexpected instrument breakdowns won’t happen, but it does mean they will happen less often.

5. Chemicals – Many samples are prepared and analyzed using solvents and other chemicals. For example, in liquid chromatography, significant amounts of solvent are used as the mobile phase. These chemicals need to be of consistent composition and free of any impurities that may interfere with the measurement of your analytes. For example, if a solvent batch contains an impurity not normally found in the solvent that happens to elute at the same time as an analyte peak, it will interfere with the area measurement of that peak, throwing off its quantitation. Now, I am not saying the chemicals used in sample analyses need to be ultra-pure; what is more important is that they are chemically consistent, so that their properties and their performance in the analysis are consistent.

6. Samples and Standards – Samples and standards are not necessarily stable over time, as I have found out the hard way. Early in my career I worked for a high pressure liquid chromatography (HPLC) company, and I was tasked with troubleshooting problems with a separation. I ran the standard over and over again, with the problem still present. I troubleshot and replaced the injector, pump, column, and detector; when I ran the standard sample again the problem was still present. My laboratory supervisor finally suggested I make up a fresh standard sample and analyze it. This did the trick; the problem was solved. Turns out, the standard sample was a mixture of carbohydrates in water, and of course there were bacteria to whom my standard sample was food. So, they chomped on the molecules in my standard, destroying it. The moral of the story here is that both standards and samples can degrade and should be analyzed as soon as possible after being generated. If necessary, standards and samples should be stored under conditions to slow degradation, such as in a refrigerator or freezer, or in conditions free of oxygen or light.

7. Matrix Variations – The matrix of a sample is also what I call its chemical environment. This consists of the condition and composition of the sample, including temperature, pressure, humidity, pH, and concentration. All these variables can affect the properties of your sample and hence the measurement of those properties. Figure 2 shows an example of how a change in chemical environment changes a sample measurement.

The mid-infrared spectrum in purple is of pure liquid water; the peak to the left of it, in brown, is from the mid-infrared spectrum of sepharose dissolved in water. Note that even though both these peaks are caused by water, they are shifted because the chemical environments in the two samples were different. In this case, the sepharose molecules interacted with the water molecules, changing their infrared spectrum. This is why, when developing calibrations, it is so important to perform matrix matching, which means the chemical environment, or matrix, of the standard samples used to generate a calibration must match as closely as possible the matrix of the unknown samples to which the calibration will be applied.

8. Taking non-representative samples – Not all samples are homogeneous, with cannabis buds being the perfect example. Work conducted by myself and others (3) has shown that buds from the same plant vary measurably in potency, and even sections of the same bud can vary in composition. In these cases, non-representative samples were taken, and the source of error is that the aliquot analyzed may not be representative of the sample as a whole. This sampling error is the source of the infamous “margin of error” in opinion polls. If you ask one person who they want for president, the answer is of course not representative of the hundreds of millions of voters across the US. However, if you ask thousands of people who they want elected as president, the sampling error is reduced enough that conclusions can be drawn from the results of the poll. In analytical chemistry, the solution to the non-representative sampling problem is to homogenize samples. For liquids this is easy; they can be stirred. For solids, such as cannabis buds, this is more difficult. Solids can be ground before analysis, which is how cannabis buds should always be prepared, but even then the resulting powder might not be homogeneous. This is where we take a clue from pollsters and analyze multiple aliquots and average the results. For example, if you have to analyze a container full of solid or powder, an excellent way to ensure a representative analysis is to take five aliquots from the container using the sampling pattern shown in Figure 3: left top, right top, center, left bottom, and right bottom.

These five samples should be analyzed separately, and the results averaged to reduce sampling error; a short worked example of this averaging appears after this list.

9. You – The most irreproducible thing in your laboratory is you. Despite our best efforts at implementing clear standard operating procedures and extensive training, different people do things differently and people can always make mistakes. These include carelessness, not following directions, not paying attention, poor sample handling, undue exposure of the sample to heat, light, and air, sample spillage, and so on. The list of mistakes any of us can make is pretty extensive, which is why all methods should be tested in the hands of multiple people to make sure reproducible results are obtained even if different people perform a method.
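To illustrate the five-aliquot averaging described in point 8, here is a minimal Python sketch; the potency values are invented purely for illustration and do not come from any real sample.

```python
# Averaging results from five aliquots (left top, right top, center,
# left bottom, right bottom) to reduce sampling error in a non-homogeneous solid.
from statistics import mean, stdev

aliquot_potencies = {          # % total THC, hypothetical values for illustration only
    "left top":     18.2,
    "right top":    19.1,
    "center":       18.7,
    "left bottom":  17.9,
    "right bottom": 18.5,
}

values = list(aliquot_potencies.values())
print(f"Mean potency:       {mean(values):.1f} %")   # the result to report
print(f"Range (Equation 1): {max(values) - min(values):.1f} %")
print(f"Std. deviation:     {stdev(values):.2f} %")
```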

Conclusion

We reviewed the concepts of accuracy and precision. Precision is the spread of values of a measurement made multiple times. Accuracy is how far away we are from the true value. Measured values vary because of random error. We then discussed the many experimental sources of random error with an eye towards correcting these problems.

References

  1. Smith, B., Error, Accuracy, and Precision, Cannabis Science and Technology, 2018, 1(4), 12-16.
  2. Standard Reference Materials, National Institute of Standards and Technology, https://www.nist.gov/srm.
  3. Giese, M.W., Lewis, M.A., Giese, L., and Smith, K.M., Journal of AOAC International, 2015, 98(6), 1503.


About the Columnist

Brian C. Smith, PhD, is Founder, CEO, and Chief Technical Officer of Big Sur Scientific. He is the inventor of the BSS series of patented mid-infrared based cannabis analyzers. Dr. Smith has done pioneering research and published numerous peer-reviewed papers on the application of mid-infrared spectroscopy to cannabis analysis, and sits on the editorial board of Cannabis Science and Technology. He has worked as a laboratory director for a cannabis extractor, as an analytical chemist for Waters Associates and PerkinElmer, and as an analytical instrument salesperson. He has more than 30 years of experience in chemical analysis and has written three books on the subject. Dr. Smith earned his PhD in physical chemistry from Dartmouth College.
Direct correspondence to: brian@bigsurscientific.com.

How to Cite this Article

Smith, B., Calibration Science, Part I: Precision, Accuracy, and Random Error, Cannabis Science and Technology, 2023, 6(9), 6-9.

