
Wednesday, September 28, 2011

Is Your Chromatography in Control, or in Transition?

By Dr. Scott Rudge
While chromatography continues to be an essential tool in pharmaceutical manufacturing, it remains frustratingly opaque and resistant to feedback control of any kind.  Once you load your valuable molecule, and its insufferable companion impurities, onto the column, there is little you can do to affect the purification outcome that awaits you some minutes to hours later.
Many practitioners of preparative and manufacturing scale chromatography perform “Height Equivalent to a Theoretical Plate” (HETP) testing prior to putting a column into service, and periodically throughout the column’s lifetime.  Others also test for peak shape, using a measurement of peak skewness or Asymmetry.  However, these measurements can’t be made continuously or even frequently, and they certainly cannot be made with the column in normal operation.  Moreover, the standard methods for performing these tests leave a lot of information “on the table,” so to speak, by making measurements at half peak height, for example.
To address this shortcoming, many have started to use transition analysis to get more frequent snapshots of column suitability during column operation.  This option has been made possible by advances in computer technology and data acquisition.
Transition analysis is based on fairly old technology called moment theory.  Moment theory was originally developed to describe differences in population distributions, and was applied to chromatography after the groundbreaking work of Martin and Synge (Biochem. J. 35, 1358 (1941)).  Eugene Kucera (J. Chromatog. 19, 237 (1965)) derived the zeroth through fifth moments based on a linear model for chromatography that included pore diffusion in resins, which is fine reading for the mathematically enlightened.  Larson et al. (Biotech. Prog. 19, 485 (2003)) applied the theory to in-process chromatography data, examining over 300 production-scale transitions from columns ranging from 44 to 140 cm in diameter.  They found transition/moment analysis to be more informative than the HETP and Asymmetry measurements traditionally applied to process chromatography.
What is transition analysis, and how is it applied?  Any time there is a step change in conditions at the column inlet, a transition in that condition will occur at the column outlet some time later.  For example, when the column is taken out of storage and equilibrated, there is commonly a change in conductivity and pH.  Ultimately, a wave of changing conductivity, or pH, or likely both, exits the column.  The shape of this wave gives important information on the health of the column, as described below.  Any and all transitions will do.  When the column is loaded, there is likely a transition in conductivity, UV, refractive index and/or pH.  When the column is washed or eluted, similar transitions occur.  As with HETPs, the purest transitions are those that don’t also have thermodynamic implications, unlike those in which chemicals are binding to or exchanging with the resin.  However, the measurements associated with a particular transition should be compared “inter-cycle,” to the same transition in subsequent chromatography cycles, not “intra-cycle,” to transitions of a different nature within the same chromatography cycle.
Since transition analysis uses all the information in a measured wave, it can be very sensitive to effects observed anywhere along the wave, not just at half height.  Consider, for example, the two contrived transitions shown below:
In Case 1, a transition in conductivity is shown that is perfectly normally distributed.  In Case 2, an anomaly has been added to the baseline, representing a defect in the chromatography packing, for example.  Transition analysis consists of finding the zeroth, first and second moments of the conductivity wave as it exits the column.  These moments are defined as:
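Consistent with the usage in this post (the original equations were presented as images, so the notation below is a reconstruction), with c(t) the normalized outlet signal and the moments taken over its time derivative:

$$\mu_0 = \frac{\int_0^\infty t\,\frac{dc}{dt}\,dt}{\int_0^\infty \frac{dc}{dt}\,dt},\qquad \mu_1 = \frac{\int_0^\infty (t-\mu_0)^2\,\frac{dc}{dt}\,dt}{\int_0^\infty \frac{dc}{dt}\,dt},\qquad \mu_2 = \frac{\int_0^\infty (t-\mu_0)^3\,\frac{dc}{dt}\,dt}{\int_0^\infty \frac{dc}{dt}\,dt}$$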

These are very easy calculations to make numerically, given appropriate filtering of noise in the data and appropriate time steps between measurements.  The zeroth moment describes the center of the transition relative to the inlet step change, whether or not the peak is normally distributed.  The zeroth moments for Case 1 and Case 2 are nearly identical, to several decimal places.  The first moment describes the variance in the transition, while the second moment describes the asymmetry of the peak.  These are markedly different between the two cases, due to the anomaly in the Case 2 transition.  Values for the zeroth, first and second moments are in the following table; a sketch of the numerical calculation follows the table:


                 Case 1     Case 2
Zeroth moment    50.0       50.0
First moment     1002       979.6
Second moment    20,300     19,623

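As a concrete illustration, here is a minimal numerical sketch of the moment calculation (my own code, not the authors’; it assumes a uniformly sampled, already-smoothed signal):

```python
import numpy as np

def transition_moments(t, c):
    """Moments of a step transition, in this post's convention:
    zeroth = center, first = variance, second = asymmetry (third central).
    t: uniformly spaced time points; c: smoothed outlet signal."""
    w = np.gradient(c, t)            # time derivative of the transition wave
    w = w / np.sum(w)                # normalize derivative into weights
    m0 = np.sum(t * w)               # zeroth moment: center of the transition
    m1 = np.sum((t - m0) ** 2 * w)   # first moment: variance
    m2 = np.sum((t - m0) ** 3 * w)   # second moment: asymmetry
    return m0, m1, m2

# A contrived, noise-free conductivity wave centered at t = 50 (cf. Case 1)
t = np.linspace(0.0, 100.0, 2001)
c = 0.5 * (1.0 + np.tanh((t - 50.0) / 10.0))
print(transition_moments(t, c))      # center comes out at 50, as expected
```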

It would be sufficient to track the moments for transitions from cycle to cycle.  However, there is a transformation of the moments into a “non-Gaussian” HETP, suggested by McCoy and Goto (Chem. Eng. Sci., 49, 2351 (1994)):
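The equation itself did not survive reproduction here; assuming the intended form is the familiar open-open dispersion-model relation (an assumption on my part, not confirmed by the text), it reads:

$$\mathrm{HETP} = \frac{L}{4}\left(\sqrt{1 + \frac{8\,\sigma^{2}}{\mu^{2}}}\;-\;1\right)$$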

where L is the packed bed height, μ is the zeroth moment (the center of the transition) and σ² is the first moment (its variance).
Using these relationships, the variance and non-Gaussian HETP can be compared for Case 1 and Case 2; a sketch of the calculation follows.

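Under the same assumed form of the transformation, the calculation is a one-liner (the 20 cm bed height below is hypothetical; the moments are those from the table above):

```python
import math

def non_gaussian_hetp(m0, m1, L):
    """Non-Gaussian HETP from transition moments, using the dispersion-model
    form assumed above (treat the exact equation as an assumption).
    m0: zeroth moment (center), m1: first moment (variance), L: bed height."""
    return 0.25 * L * (math.sqrt(1.0 + 8.0 * m1 / m0 ** 2) - 1.0)

# Case 1 vs Case 2, for a hypothetical 20 cm bed height:
print(non_gaussian_hetp(50.0, 1002.0, 20.0))   # Case 1
print(non_gaussian_hetp(50.0, 979.6, 20.0))    # Case 2
```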
Using this method, a comparative measure of column performance can be calculated several times per chromatography cycle without making chemical additions, breaking the column fluid circuit, or adding steps.  The use of transition analysis is still just gaining a foothold in the industry; are you ahead of the curve, or behind?

Wednesday, October 27, 2010

Risky Business

By Dr. Scott Rudge

“Risk Analysis” is a big topic in pharmaceutical and biotech product development. The International Conference on Harmonization (ICH) has even issued a guidance document on quality risk management (Q9). Despite the documentation available and the regulatory emphasis, these tools remain poorly understood. They are used to justify limiting the extent of fundamental understanding gained about a process, while simultaneously being used as a cure-all for the management challenges we face in pharmaceutical process development.

Risk analysis focuses only on failure modes. Failure mode and effects analysis (FMEA) was developed by the military, first published as MIL-P-1629, “Procedures for Performing a Failure Mode, Effects and Criticality Analysis,” in November 1949. The procedure was established to discern the effects of system failure on mission success and personnel safety. Since then, a much broader range of applications has been found for this type of analysis: the methodology is used in the manufacture of airplanes, automobiles, software and elsewhere. People have devised “design” FMEAs and “process” FMEAs (DFMEA and PFMEA). These are all great tools, and they help us design and build better, safer products with better, safer processes.

Where Risk Analysis Falls Down

The FMEA is such a great hammer that it can make everything look like a nail. And when regulatory authorities are encouraging companies to use risk analysis for product design and process validation, the temptation to apply it further can be overwhelming. In particular, risk analysis is used in three inappropriate ways, in my estimation:

 
  1. Decision analysis
  2. Project management
  3. Work avoidance

Quite often, risk analysis tools are used to guide decisions. Here, the pros and cons of selecting a particular path are weighed, using the three criteria of risk analysis (occurrence, detectability and severity) as guides. However, not every decision path leads to a failure mode, or to an outcome that could be measured as a consequence. And detectability and occurrence may not be the only, or the most appropriate, factors by which to weight a consequence. There are excellent decision tools designed specifically for weighting and evaluating preference criteria. A very simple one is the Kepner-Tregoe (KT) decision matrix, which is sufficient for many decisions, large and small; a sketch of the idea follows below. Full decision analysis uses very detailed descriptions of the decision to model the potential outcomes, and if you really want to study the anatomy of a decision, it is the most satisfactory method.
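As an illustration of the weighted-criteria idea, here is a minimal sketch (the options, criteria, weights and scores are all hypothetical, invented for illustration):

```python
# Minimal sketch of a Kepner-Tregoe-style weighted decision matrix.
# Criteria weights and option scores below are hypothetical examples.

criteria = {"cost": 3, "speed": 2, "robustness": 5}   # weight per criterion

# Score each option against each criterion (1 = poor, 10 = excellent)
options = {
    "Resin A": {"cost": 7, "speed": 5, "robustness": 9},
    "Resin B": {"cost": 9, "speed": 8, "robustness": 4},
}

def weighted_score(scores: dict) -> int:
    """Sum of criterion scores multiplied by their weights."""
    return sum(criteria[name] * value for name, value in scores.items())

# Rank the options by total weighted score, best first
for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores)}")
```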
 
FMEAs are sometimes inappropriately applied in project management, to assign the priority and order in which tasks should be completed. This may be handy in some instances, but is misleading or inappropriate in others. The riskiness of an objective should not be the sole determinant of its priority. Quite often, fundamentals or building blocks need to be in place before the riskiest proposition in a project can best be addressed, and prematurely attacking the pieces of a project that present the greatest risk of failure may bring about that very failure. On the other hand, being “fast to fail,” eliminating projects that might not bear fruit while spending the fewest resources, is critical to overall company or project success. Project management requires consideration of failure modes, but also resource programming and timeline management. FMEAs can be an element of that formula, but should not be the focus.
 
Finally, and perhaps most painfully, FMEAs are used to justify avoiding work. Too often, risk analysis is applied to a problem, not to identify the elements that are most deserving of attention, but to justify neglecting areas that do not rank sufficiently high in the risk matrix. Sometimes, the smallest risks are the easiest to address, and in addressing them, variability can be removed from the process. Variability is the “elephant in the room” when it comes to pharmaceutical quality, as has been concisely pointed out by Johnston and Zhang.
 
The FMEA is a stellar tool, applicable to problems in design, process and strategy across many industries. Its quantitative feel gives practitioners the sense that they are actually measuring something, and it lets them make fine distinctions between risks that they were unable to articulate before. However, the FMEA can be applied rather too widely, or sometimes unscrupulously, yielding bad data and bad decisions.