We are often asked to make something “just like that, but better” in some respect. In formulating, there is no free lunch: improving one property always means trading off another, and such requests drive us crazy.
In analytical work, the paradox is that improving a method does not always make it better. Tradition often wins out over innovation.
Yes, a new method may remove some bias or correct some deficiency found in the old method. However, many uses of analytical data are fundamentally looking for differences and changes, rather than absolute values.
For instance, a customer knows that a certain lot of material “worked” in their application. We are monitoring not to learn the properties, but to guard against unwanted changes that might upset their manufacturing process. We might not fully understand the nature of those changes, but the fact of a change is the all-important alarm we use to track down problems.
In such a case, a change in analytical method might raise a false alarm, or obscure something that is actually happening. Such a change should ONLY be made after a significant overlap period in which every test is run BOTH ways for comparison. Otherwise the QC process does more harm than good by trying to make things “better”. In other words, “if it’s not broke, don’t fix it”.
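One way to quantify that overlap, as a minimal sketch: measure the same samples or lots by both methods and track the paired differences. The function name, data, and acceptance threshold below (`compare_methods`, `max_bias`) are hypothetical illustrations, not part of any standard; real acceptance criteria would come from the customer’s specification and the method’s validation.

```python
import statistics

def compare_methods(old_results, new_results, max_bias):
    """Paired comparison of two analytical methods run on the same samples
    during an overlap period. Returns the mean bias (new - old), its
    standard deviation, and whether the bias stays within a hypothetical
    acceptance limit, so the lab can judge whether switching methods would
    shift the baseline the customer's control charts depend on."""
    diffs = [new - old for old, new in zip(old_results, new_results)]
    bias = statistics.mean(diffs)
    spread = statistics.stdev(diffs)
    acceptable = abs(bias) <= max_bias
    return bias, spread, acceptable

# Hypothetical overlap data: the same five lots measured both ways.
old = [10.2, 10.4, 10.1, 10.3, 10.5]
new = [10.4, 10.6, 10.2, 10.5, 10.7]
bias, spread, ok = compare_methods(old, new, max_bias=0.1)
print(f"bias = {bias:.2f}, sd = {spread:.2f}, within limit: {ok}")
```

In an example like this, the new method may well be “more accurate” in absolute terms, yet its consistent offset would still trip the customer’s alarms, which is exactly the false-alarm risk the overlap period is meant to catch before the switch is made.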