By Dr. Scott Rudge
Risk reduction tools are all the rage in pharmaceutical, biotech and medical device process/product development and manufacturing. The International Conference on Harmonisation has enshrined some of the techniques common in risk management in its Q9 guidance, “Quality Risk Management”. The Failure Modes and Effects Analysis, or FMEA, is one of the most useful and popular tools described. The FMEA stems from a military procedure, published in 1949 as MIL-P-1629, and has been applied in many different ways. The method most used in health care involves making a list of potential ways in which a process can “fail”, or produce out-of-specification results. After this list has been generated, each failure mode is assessed for its proclivity to “Occur”, “Be Severe” and “Be Detected”. Typically, these are scored from 1 to 10, with 10 being the worst case in each category and 1 the best. The three scores are multiplied together, and the product (mathematically speaking) is called the “Risk Priority Number”, or RPN. Development work is then typically directed towards the failure modes with the highest RPNs.
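As a rough illustration of the arithmetic, the sketch below multiplies the three rankings into an RPN. The function name and the example scores are mine, purely for illustration:

```python
def risk_priority_number(occurrence, severity, detection):
    """Multiply the three 1-10 FMEA rankings; the RPN ranges from 1 to 1000."""
    for score in (occurrence, severity, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA rankings are conventionally scored 1 to 10")
    return occurrence * severity * detection

# Example: a failure mode judged moderately likely (4), quite severe (8),
# and hard to detect (7) gets RPN = 4 * 8 * 7 = 224.
print(risk_priority_number(4, 8, 7))  # 224
```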
The problem is, it’s very hard to assign a ranking from 1 to 10 in each of these categories in a scientific manner. More often, a diverse group of experts from process and product development, quality, manufacturing, regulatory, analytical and other stakeholder departments convenes a meeting and assigns rankings based on experience. This is done once in the product life-cycle and never revisited as actual manufacturing data start to accumulate. And while large companies with mature products have become more sophisticated and can pull data from similar or “platform” products, small companies and development-stage companies can really only rely on the opinion of experts, internal or external. The same considerations apply to small-market or orphan drugs.
Each of these categories can probably be informed by data, but by far the easiest to assign a numerical value to is the “Occurrence” ranking. A typical Occurrence ranking chart might look something like this:
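Whatever the exact cutoffs, a chart of that kind maps an observed probability of failure onto the 1 to 10 scale, so the mapping can be written as a small lookup. The thresholds below are illustrative assumptions for the sketch; real occurrence charts vary between companies and standards:

```python
# Illustrative occurrence table: each entry is the highest probability of a
# failure (per unit or per batch) that still earns the given ranking.
# These cutoffs are assumptions for the sketch, not values from any standard.
OCCURRENCE_TABLE = [
    (1 / 1_500_000, 1), (1 / 150_000, 2), (1 / 15_000, 3), (1 / 2_000, 4),
    (1 / 400, 5), (1 / 80, 6), (1 / 20, 7), (1 / 8, 8), (1 / 3, 9), (1.0, 10),
]

def occurrence_ranking(p_fail):
    """Return the 1-10 occurrence ranking for an observed failure probability."""
    for cutoff, rank in OCCURRENCE_TABLE:
        if p_fail <= cutoff:
            return rank
    return 10
```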
These rankings come from “piece” manufacturing, where thousands to millions of widgets might be manufactured in a short period of time. This kind of manufacturing rarely applies in the health care industry. However, this evaluation fits very nicely with the Capability Index analysis.
The Capability Index is calculated by dividing the variability of a process into its allowable range of variation. Or, said less obtusely, by dividing the specification range by the standard deviation of the process performance. The capability index is directly related to the probability that a process will operate out of range or out of specification. This table, found on Wikipedia (my source for truth), gives an example of the correlation between the calculated capability index and the probability of failure:
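That correlation is no accident: for a centered, normally distributed process, each specification limit sits 3·Cp standard deviations from the mean, so the out-of-specification fraction is just the two normal tail areas. A minimal sketch of that calculation, assuming a centered process and using scipy:

```python
from scipy.stats import norm

def fraction_out_of_spec(cp):
    """Two-sided out-of-specification probability for a centered normal process.

    Each specification limit lies 3*Cp standard deviations from the mean,
    so the area in each tail is norm.sf(3 * cp).
    """
    return 2 * norm.sf(3 * cp)

for cp in (0.33, 0.67, 1.00, 1.33, 1.67, 2.00):
    print(f"Cp = {cp:.2f}  ->  P(out of spec) ~ {fraction_out_of_spec(cp):.2e}")
```

For example, Cp = 1.00 corresponds to roughly 0.27% out of specification, and Cp = 2.00 to about two parts per billion, which is the kind of correspondence the table tabulates.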
As a reminder, the capability index is the upper specification limit minus the lower specification limit, divided by six times the standard deviation of the process: Cp = (USL - LSL) / (6σ). The two tables can be combined, approximately, as follows:
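In code, the combined lookup is just the composition of the two sketches above, again assuming a centered normal process and the illustrative thresholds already given:

```python
def occurrence_from_cp(cp):
    """Map a capability index to an illustrative FMEA occurrence ranking."""
    return occurrence_ranking(fraction_out_of_spec(cp))

for cp in (0.50, 0.75, 1.00, 1.33, 1.67):
    print(f"Cp = {cp:.2f}  ->  occurrence ranking ~ {occurrence_from_cp(cp)}")
```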
How many process data points are required to calculate a capability index? Of course, the larger the number of points, the better the estimate of average and standard deviation, but technically, two or three data points will get you started. Is it better than guessing?
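As an illustration of why small samples deserve caution, the sketch below estimates Cp from a handful of hypothetical measurements and attaches the usual chi-square confidence interval for a normal standard deviation. The data and the 90-110 specification limits are invented for the example:

```python
import numpy as np
from scipy.stats import chi2

def cp_with_confidence(data, lsl, usl, confidence=0.95):
    """Point estimate and approximate confidence interval for Cp.

    The interval comes from the chi-square distribution of the sample
    variance of a normal process; with only a few points it is very wide.
    """
    data = np.asarray(data, dtype=float)
    df = len(data) - 1
    s = data.std(ddof=1)                      # sample standard deviation
    cp_hat = (usl - lsl) / (6 * s)
    alpha = 1 - confidence
    lower = cp_hat * np.sqrt(chi2.ppf(alpha / 2, df) / df)
    upper = cp_hat * np.sqrt(chi2.ppf(1 - alpha / 2, df) / df)
    return cp_hat, (lower, upper)

# Three hypothetical assay results against invented 90-110 specifications:
print(cp_with_confidence([98.2, 101.5, 99.7], lsl=90, usl=110))
# Ten points from the same hypothetical process: the interval tightens.
print(cp_with_confidence([98.2, 101.5, 99.7, 100.8, 99.1, 100.2,
                          101.0, 98.8, 99.9, 100.4], lsl=90, usl=110))
```

With three points, the 95% interval on Cp in this example spans more than a factor of ten, so the estimate is better than a pure guess, but not by much. As real manufacturing data accumulate, the interval tightens and the occurrence ranking can be revisited on a statistical footing rather than fixed once at the first risk assessment.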