# Detection Limits

Whenever you get an analytical result, it should be accompanied by a qualification of how precise the result is.  This is usually implied by the number of significant digits you report.  Ideally, you should run a standard sample enough times to get good statistical measures of both the average value and the size of the spread.

The mean, or average, is the best estimate of the true value.

The “Standard Deviation” (the root-mean-square average of the differences from the average) is an important way to characterize the spread.  This spread comes both from variation in the testing and from variability in the thing being measured.
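As a concrete sketch, both statistics can be computed with Python's standard library; the replicate values below are hypothetical measurements of a standard sample:

```python
import statistics

# Hypothetical replicate measurements of a standard sample (e.g., in ppm)
replicates = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]

mean = statistics.mean(replicates)    # best estimate of the true value
sigma = statistics.stdev(replicates)  # sample standard deviation: the spread

print(f"mean  = {mean:.3f}")   # 10.050
print(f"sigma = {sigma:.3f}")  # 0.245
```

Note that `statistics.stdev` uses the sample (n − 1) form, which is the right choice when the replicates are a sample rather than the whole population.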

One “Standard Deviation” (usually abbreviated with the Greek letter sigma, σ) gets you the middle chunk of the bell curve, covering about 2/3 of the probability distribution.  Two standard deviations on either side of the center gets you most of the probability, but still misses too many of the possibilities to be useful as an alarm level.  Most people set the action point, where you are sure that you’re seeing something definitely out of the ordinary, at +/- three standard deviations, or 3 sigma.  The false alarm rate is then about 3 out of 1000.  People who set “six sigma” as a standard are worrying about production fault rates in a large-scale manufacturing operation.
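These coverage fractions can be checked directly from the normal distribution: the probability of falling within ±k sigma of the mean is erf(k/√2).  A minimal check in Python:

```python
import math

def within_k_sigma(k):
    """Fraction of a normal distribution lying within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    frac = within_k_sigma(k)
    print(f"{k} sigma: {frac:.2%} inside, {1 - frac:.2%} false-alarm rate")
```

Running this confirms the numbers in the text: ±1σ covers about 68% (roughly 2/3), ±2σ about 95%, and ±3σ about 99.7%, leaving a false-alarm rate near 3 in 1000.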

This analysis is behind the rule of thumb that says an analytical signal needs to be at least three times the background noise to be declared real.  The detection limit is thus 3 times sigma.  Mathematically this is written:

S = “signal” = average or mean

N = “noise” = standard deviation or sigma

S/N = signal to noise ratio, which must be > 3.

Inverting the result,

N/S = the relative error of a measurement, which must be < 1/3, or about 33%.

Detection Limit = S(min) = 3 x N
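Putting the pieces together as a short sketch (the signal and noise values are hypothetical), the S/N test and the 3-sigma detection limit follow directly from the definitions above:

```python
def detection_limit(noise):
    # Minimum detectable signal: 3 times the standard deviation of the noise
    return 3 * noise

def is_detected(signal, noise):
    # Rule of thumb: the signal is real only if S/N > 3
    return signal / noise > 3

noise = 0.05   # hypothetical: sigma of repeated blank measurements
signal = 0.21  # hypothetical: measured signal for the unknown

print(f"detection limit = {detection_limit(noise):.2f}")  # 0.15
print(f"detected? {is_detected(signal, noise)}")          # True
```

Here S/N = 0.21/0.05 = 4.2, comfortably above 3, so the signal counts as detected; a signal of 0.10 against the same noise would not.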

If your unknown concentration is not above the detection limit for a given method, you must either use a higher concentration of your unknown or take steps to improve the method somehow.