Showing posts with label ICH Q8.

Wednesday, September 28, 2011

Is Your Chromatography in Control, or in Transition?

By Dr. Scott Rudge
While chromatography continues to be an essential tool in pharmaceutical manufacturing, it remains frustratingly opaque and resistant to feedback control of any kind.  Once you load your valuable molecule, and its insufferable companion impurities, onto the column, there is little you can do to affect the purification outcome that awaits you some minutes to hours later.
Many practitioners of preparative and manufacturing scale chromatography perform “Height Equivalent to a Theoretical Plate” (HETP) testing before putting a column into service, and periodically throughout the column’s lifetime.  Others also test for peak shape, using a measurement of peak skewness or Asymmetry.  However, these measurements can’t be made continuously or even frequently, and certainly cannot be made with the column in normal operation.  Moreover, the standard methods for performing these tests leave a lot of information “on the table,” so to speak, by making measurements at half peak height, for example.
To address this shortcoming, many have started to use transition analysis to get more frequent snapshots of column suitability during column operation.  This option has been made possible by advances in computer technology and data acquisition.
Transition analysis is based on fairly old technology called moment theory.  It was originally developed to describe differences in population distributions, and was applied to chromatography after the groundbreaking work of Martin and Synge (Biochem. J. 35, 1358 (1941)).  Eugene Kucera (J. Chromatog. 19, 237 (1965)) derived the zeroth through fifth moments from a linear model of chromatography that included pore diffusion in resins, which is fine reading for the mathematically enlightened.  Larson et al. (Biotech. Prog. 19, 485 (2003)) applied the theory to in-process chromatography data.  These authors examined over 300 production scale transitions from columns ranging from 44 to 140 cm in diameter, and found that transition/moment analysis was more informative than the HETP and Asymmetry measurements traditionally applied to process chromatography.
What is transition analysis, and how is it applied?  Any time there is a step change in conditions at the column inlet, a corresponding transition appears some time later at the column outlet.  For example, when the column is taken out of storage and equilibrated, there is commonly a change in conductivity and pH; ultimately, a wave of changing conductivity, pH, or likely both exits the column.  The shape of this wave gives important information on the health of the column, as described below.  Any and all transitions will do.  When the column is loaded, there is likely a transition in conductivity, UV, refractive index and/or pH.  When the column is washed or eluted, similar transitions occur.  As with HETPs, the purest transitions are those without thermodynamic implications, that is, transitions in which chemicals are not binding to or exchanging with the resin.  However, the measurements associated with a particular transition should be compared cycle to cycle against the same transition in subsequent chromatography cycles, not against transitions of a different nature within the same chromatography cycle.
Since transition analysis uses all the information in a measured wave, it can be very sensitive to effects observed anywhere along the wave, not just at, for example, half height.  As an illustration, consider the two contrived transitions shown below:
In Case 1, a transition in conductivity is shown that is perfectly normally distributed.  In Case 2, an anomaly has been added to the baseline, representing a defect in the chromatography packing, for example.  Transition analysis consists of finding the zeroth, first and second moments of the conductivity wave as it exits the column.  These moments are defined as:

These are very easy calculations to make numerically, with appropriate filtering of noise in the data, and appropriate time steps between measurements.  The zeroth moment describes the center of the transition relative to the inlet step change.  It does not matter whether or not the peak is normally distributed.  The zeroth moments are nearly identical for Case 1 and Case 2, to several decimal places.  The first moment describes the variance in the transition, while the second moment describes the asymmetry of the peak.  These are markedly different between the two cases, due to the anomaly in the Case 2 transition.  Values for the zeroth, first and second moments are in the following table:


                  Case 1     Case 2
Zeroth moment     50.0       50.0
First moment      1002       979.6
Second moment     20,300     19,623
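These moments are indeed simple to compute from logged data.  The Python sketch below shows one way to do it; since the exact definitions behind the original figures and the table above are not recoverable from this post, the formulas here (raw moments of the transition's time derivative, plus the center, spread and asymmetry derived from them) should be read as one common formulation and an assumption, not necessarily the one used to generate the values above.

import numpy as np

def transition_moments(t, signal):
    """Raw and derived moments of a step-change transition.

    t      : logged time points
    signal : outlet conductivity (or pH, UV, ...) across one transition
    Assumes the signal has already been filtered for noise.
    """
    dcdt = np.gradient(signal, t)                     # transition "distribution"
    m0 = np.trapz(dcdt, t)                            # total change across the step
    m1 = np.trapz(t * dcdt, t)                        # first raw moment
    m2 = np.trapz(t**2 * dcdt, t)                     # second raw moment
    center = m1 / m0                                  # mean transition time
    spread = m2 / m0 - center**2                      # variance about the mean
    skew = np.trapz((t - center)**3 * dcdt, t) / m0   # asymmetry
    return m0, m1, m2, center, spread, skew

# Contrived example: a smooth 50 mS/cm conductivity step centered at t = 20
t = np.linspace(0, 60, 601)
signal = 25 * (1 + np.tanh((t - 20) / 3))             # rises from 0 to 50
print(transition_moments(t, signal))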


It would be sufficient to track the moments for transitions from cycle to cycle.  However, there is a transformation of the moments into a “non-Gaussian” HETP, suggested by McCoy and Goto (Chem. Eng. Sci., 49, 2351 (1994)):

Where

Using these relationships, the variance and non-Gaussian HETP are shown in the table below for Case 1 and Case 2:

Using this method, a comparative measure of column performance can be calculated several times per chromatography cycle without making any chemical additions, breaking the column fluid circuit, or adding steps.  The use of transition analysis is still just gaining a foothold in the industry; are you ahead of the curve, or behind?

Thursday, August 26, 2010

Do We Have Clearance, Clarence?

By Dr. Scott Rudge

As with takeoffs and landings in civil aviation, the ability of a pharmaceutical manufacturing process to give clearance of impurities is vital to customer safety. It’s also important that the clearance mechanism be clear, and not confused, as the conversation in the classic movie “Airplane!” surely was (and don’t call me Shirley).

There are two ways to demonstrate clearance of impurities.

The first is to track the actual impurity loads. That is, if an impurity comes into a purification step at 10%, and is reduced through that step to 1%, then the clearance would typically be called 1 log, or 10 fold.

The second is to spike impurities. This is typically done when an impurity is not detectable in the feed to the purification step, or when, even though detectable, it is thought desirable to demonstrate that even more of the impurity could be eliminated if need be.

The first method is very usable, but suffers from uneven loads. That is, from batch to batch, the quantity and concentration of an impurity can vary considerably, and because the capacity of most purification steps to remove impurities depends on quantity and concentration, the results can vary correspondingly. Typically, these results are averaged, but it would be better to plot them in a thermodynamic sense, with unit operation impurity load on the x-axis and efflux on the y-axis. The next figures give three of many possible outcomes of such a graph.


In the first case, there is proportionality between the load and the efflux. This would be the case if the capacity of the purification step were linearly related to the load, as is typical for absorbents, and for adsorbents at low levels of impurity. Only in this case, in fact, does a calculated log clearance apply across the range of possible loads. The example figure shows a constant clearance of 4.5 logs.


In the second case, the impurity saturates the purification medium. A maximum amount of impurity can be cleared, and no more. The closer the load is to exactly this capacity, the better the log removal looks, since this is the point at which no impurity is found in the purification step effluent. All loads above this capacity show increasing inefficiency in clearance.


In the third case, the impurity has a thermodynamic or kinetic limit in the step effluent. For example, it may have limited solubility, and it reaches that solubility in nearly all cases. The more impurity that is loaded, the greater the proportion that appears to be cleared, because a roughly constant amount of impurity is always recovered in the effluent.
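To make the contrast between these three mechanisms concrete, here is a minimal Python sketch that computes the apparent log reduction value (LRV) as a function of load for each case; all of the numbers are invented for illustration only.

import numpy as np

def lrv(load, effluent):
    """Apparent log reduction value for a single load/effluent pair."""
    return np.log10(load / effluent)

loads = np.array([1.0, 10.0, 100.0, 1000.0])    # arbitrary impurity units loaded

# 1) Proportional clearance: a fixed fraction (here 10^-4.5) always escapes
prop_effluent = loads * 10**-4.5

# 2) Saturation: the step retains at most 500 units; the rest passes through
#    (a tiny carryover keeps the logarithm finite below saturation)
sat_effluent = np.maximum(loads - 500.0, loads * 1e-6)

# 3) Solubility limit: the effluent always carries ~0.05 units regardless of load
sol_effluent = np.full_like(loads, 0.05)

for name, eff in [("proportional", prop_effluent),
                  ("saturation", sat_effluent),
                  ("solubility-limited", sol_effluent)]:
    print(name, np.round(lrv(loads, eff), 2))

Only the proportional case returns the same LRV at every load; the other two show how a single-point measurement can flatter or penalize the step depending on where on the load curve it happens to sit.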

For these reasons, simply measuring the ratio of impurity in the load and effluent of a purification step is inadequate. This reasoning applies even more so to spiking studies, where the concentration of the impurity is made artificially high. In these cases, it is even more important to vary the concentration or mass of the impurity in the load, and to determine what the mechanism of clearance is (proportional, saturation or solubility).

Understanding the mechanism of clearance would be beneficial, in that it would allow the practitioner to make more accurate predictions of the effect of an unusual load of impurity. For example, suppose that, in the unlikely event a virus contaminates an upstream step in the manufacture of a biopharmaceutical, the titer is lower than spiking studies had anticipated. If the virus is cleared by binding to a resin, and is below the saturation limit, it is possible to argue that the clearance is much larger, perhaps complete. On the other hand, claims of log removal in a solubility-limited situation can be misleading. The deck can be stacked by spiking extraordinary amounts of impurity, while the reality may be that the impurity is always present at a level where it is fully soluble in the effluent, and is never actually cleared from the process.

Clearance studies are good and valuable, and help us to protect our customers, but as long as they are done as single points on the load/concentration curve, their results may be misleading. When the question comes, “Do we have clearance, Clarence?” we want to be ready to answer the call with clear and accurate information. Surely varying the concentration of the impurity to understand the nature of the clearance is a proper step beyond the single point testing that is common today.

And stop calling me Shirley.

Wednesday, May 12, 2010

Bending the Curve

By Dr. Scott Rudge

To understand the best ways to develop preparative and industrial scale adsorption separations in biotechnology, it’s critical to understand the thermodynamics of solute binding. In this blog, I’ll review some basics of the Langmuir binding isotherm. This isotherm is a fairly simplistic view of adsorption and desorption; however, it applies fairly well to typical protein separations, such as ion exchange and affinity chromatography.


A chemical solution that is brought into contact with a resin that has binding sites for that chemical will partition between the solution phase and the resin phase. The partitioning will be driven by some form of affinity or equilibrium that can be considered fairly constant at constant solution phase conditions. By “solution phase conditions”, I mean temperature, pH, conductivity, salt and other modifier concentrations. Changing these conditions changes the equilibrium partitioning. If we represent the molecule in solution by “c” and the same molecule adsorbed to the resin by “q”, then the simple mathematical relationship is:
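The relationship, which appeared as an equation image in the original post, is presumably the linear partition relation

q = Keq c

where Keq is the equilibrium (partition) constant under the given solution phase conditions.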

If the capacity of the resin for the chemical is unlimited, then this is the end of the story: the equilibrium is “linear” and the behavior of the adsorption is easy to understand, as the dispersion is completely mass transfer controlled. An example of this is size exclusion chromatography, where the resin has no affinity for the chemical; it simply excludes solutes larger than the pore or polymer mesh length. For resins where there are discrete “sites” to which the chemical might bind, or a finite “surface” of some kind with which the chemical has some interaction, the equilibrium is described by:
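Consistent with the site balance that follows, the equilibrium expression (shown as an image in the original post) is presumably

q = Keq c S0

that is, binding is proportional to the solution concentration and to the number of unoccupied sites.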

and the maximum capacity of the resin has to be accounted for with a “site” balance, such as shown below:
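Reconstructed from the definitions in the next paragraph, the site balance is presumably

Stot = S0 + q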

Where Stot represents the total number of binding sites, and S0 represents the number of binding sites not occupied by the chemical of interest. The math becomes a little more complicated when you worry about what might be occupying that site, or if you want to know what happens when the molecule of interest occupies more than one site at a time. We’ll leave these important considerations for another day. Typically, the total sites can be measured. Resin vendors use terms such as “binding capacity” or “dynamic binding capacity” to advertise the capacity of their resins. The capacity is often dependent on the chemical of interest. The resulting relationship between c and q is no longer linear, it is represented by this equation:
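The resulting expression, matching the limiting behavior described next, is the Langmuir isotherm:

q = Stot Keq c / (1 + Keq c)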

When c is small, the denominator of this equation becomes 1, and the equilibrium equation looks like the linear equilibrium equation. When c is large, the denominator becomes Keqc, and the resin concentration of the chemical is equal to the resin capacity, Stot. When c is in between small and large, the isotherm bends over in a convex shape. This is shown in the graph below.

There are three basic conditions in preparative and industrial chromatographic operations. In the first, Keq is very low, and there is little or no binding of the chemical to the resin. This is represented by the red squares in the graph above. This is the case for “flow through” fractions in chromatography, and would generally be the case when the chemical has the same charge as the resin. In the third, Keq is very high, and the chemical is bound quantitatively to the resin, even at low concentrations. This is represented by the green triangles in the graph above. This is the case for chemicals that are typically only released when the column is “stripped” or “regenerated”; during regeneration, the solution phase conditions are changed, with a high salt concentration or an extreme pH, to turn Keq from a large number into a small one. The second case is the most interesting, and is the condition for most “product” fractions, where a separation is being made. When the solution phase conditions are tuned so that the desired product is differentially adsorbing and desorbing, allowing other chemicals with slightly higher or lower affinities to elute before or after it, the equilibrium constant is almost never such that binding is either quantitative or non-existent. In these cases, the non-linearity of the isotherm has consequences for the shape of the elution peak. We will discuss these consequences in a future blog.
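A few lines of Python reproduce the qualitative behavior of these three regimes, using arbitrary values for Keq and Stot (the numbers and units here are illustrative only):

import numpy as np

def langmuir_q(c, k_eq, s_tot):
    """Langmuir isotherm: resin-phase concentration as a function of solution
    concentration c, equilibrium constant k_eq, and total site capacity s_tot."""
    return s_tot * k_eq * c / (1.0 + k_eq * c)

c = np.linspace(0.01, 10.0, 5)       # solution concentrations, arbitrary units
s_tot = 100.0                        # arbitrary resin capacity

for label, k_eq in [("flow-through (low Keq)", 0.01),
                    ("product (intermediate Keq)", 1.0),
                    ("strip/regenerate (high Keq)", 100.0)]:
    print(label, np.round(langmuir_q(c, k_eq, s_tot), 1))

With low Keq the resin loading stays near zero, with high Keq it sits at Stot even for dilute solutions, and in between the isotherm bends over just as described above.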

In a “Quality-by-Design” world, these non-linearities would be understood and accounted for in the design of the chromatography operation. An excellent example of the resulting non-linear behavior was shown by Oliver Kaltenbrunner in 2008.

Relying on linear statistics to uncover this basic thermodynamic behavior is a fool’s errand. However, using basic lab techniques (a balance and a spectrophotometer) the isotherm for your product of interest can be determined directly, and the chromatographic behavior understood. This is the path to process understanding!

Thursday, March 4, 2010

QbDer, Know Thy Model!

By Dr. Scott Rudge

Resolution in chromatography is critical, from analytical applications to large scale process chromatography. While baseline resolution is the gold standard in analytical chromatography, it is seldom achieved in process chromatography, where “samples” are concentrated and “sample volumes” represent a large fraction of the column bed volume, if not multiples of the bed volume. How do you know if your resolution is changing in process chromatography if you can’t detect changes from examining the chromatogram?


Many use the Height Equivalent to a Theoretical Plate technique to test the column’s resolution. In this technique, a small pulse of a non-offensive but easily detected material is injected onto the column, and the characteristics of the resulting peak are measured. The following equation is used:
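The equation (an image in the original post) is presumably the standard half-height form:

N = 5.54 (tr / tw,1/2)^2,    H = L / N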


Where H is the plate “height” (a distance), tr and tw,1/2 are the retention time and the width of the peak at half height (a time), L is the length of the column, and N is the “number” of theoretical plates in the column. The lower the H, the smaller the dispersion and the greater the resolution.

It is well established that the flowrate and temperature affect the plate height. In fact, when the plate height is plotted against flow rate, we generate what is typically called the “van Deemter plot”, after the Dutch scientists (van Deemter et al., Chem. Eng. Sci., 5, 271 (1956)) who established a common relationship in all chromatography (gas and liquid) according to the following equation:
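The van Deemter equation referred to here is

H = A + B/v + C v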


Where A, B and C are constants and v is the linear velocity (flow rate divided by the column cross sectional area) of fluid in the column. It was later proposed by Knox (Knox, J., and H. Scott, J. Chromatog., 282, 297 (1983)) that van Deemter plots could be reduced to a common line for all chromatography if the plate height were normalized to the resin particle size, and the linear velocity were normalized to the chemical’s diffusivity divided by the resin particle size. While this did not turn out to be generally true, it is very close to true. Chemical engineers will recognize the ratio of linear velocity to diffusivity over particle size as the Peclet number, “Pe”, a standard dimensionless number used in many mass transfer analyses.

Since diffusivity is sensitive to temperature, it is logical that Pe is also sensitive to temperature: decreasing the temperature decreases the diffusivity and increases Pe, so Pe is inversely proportional to temperature. We measured plate height as a function of linear flowrate and temperature in our lab on a Q-Sepharose FF column, using a sodium chloride pulse, and found the expected result, shown in the graph below.


We can easily use this graph to measure van Deemter’s parameters A and C, and to find the dependence of the diffusivity of sodium chloride on temperature. Based on these two points, we find the Peclet number proportional to 0.085/T, where T is in Kelvin. We also find the dependence of plate height on linear velocity, and we can predict that resolution will deteriorate in the column as flow rate increases.
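For readers who prefer regression to graphical analysis, the van Deemter parameters can also be fit by ordinary least squares. The sketch below uses invented plate-height data, not the Q-Sepharose results described above:

import numpy as np

# Hypothetical (linear velocity, plate height) pairs; units cm/s and cm
v = np.array([0.01, 0.02, 0.05, 0.10, 0.20])
H = np.array([0.032, 0.026, 0.027, 0.033, 0.048])

# Fit the van Deemter equation H = A + B/v + C*v by linear least squares
X = np.column_stack([np.ones_like(v), 1.0 / v, v])
coeffs, *_ = np.linalg.lstsq(X, H, rcond=None)
A, B, C = coeffs
print(f"A = {A:.4f} cm, B = {B:.6f} cm^2/s, C = {C:.4f} s")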

We can also use Design of Experiments to find the same information. Analyzing the same data set with ANOVA yields significant factors for both flow rate and temperature, as shown in the following table:

Term         Coef        SE Coef     T       P
Constant     0.09756     0.05604     1.74    0.157
Temp        -0.007935    0.001858   -4.27    0.013
v            0.0497      0.02147     2.32    0.082




But since the statistics don’t know that the temperature dependence is inverse, or that the Peclet number is a function of both temperature and flowrate, the model yields no additional understanding of the process.

It is possible to use analysis of variance to fit a model other than a linear one, as the van Deemter model clearly is non-linear. But one must know that the phenomenon being measured behaves non-linearly in order to use the appropriate statistics. Using Design of Experiments blindly, without knowing the relationship between the factors and responses, leads to empirical correlations relevant only to the system studied, and these should only be extended with caution.

Wednesday, February 3, 2010

Mixed Up?

By Dr. Scott Rudge

Determining and defending mixing time is a common nuisance in process validation. Data from process development rarely exist, and there is rarely time or enthusiasm for actually studying the tank dynamics to set mixing times appropriately. Although there typically are design criteria for tank and impeller dimensions, motor size and power input into the tank, these design criteria are rarely translated to process development and process validation functionaries.


There are resources for mixing time determination if the basic initial work has been done. A very elegant study is included in the recent PQLI A-Mab Case Study produced by the ISPE Biotech Working Group. This study shows how to scale mixing from a lab scale 50 L vessel, where a correlation between power and Reynolds number has been developed, to mixing vessels of 500 and 1500 L scales. The study requires that two critical dimensional ratios remain nearly constant on scale up: the ratio of impeller diameter to tank diameter, and the ratio of fluid height to tank diameter. The study shows very close agreement between predicted mixing time and actual mixing time. The basis for the scale up is that the power input per unit mixing volume should be constant from scale to scale.
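For geometrically similar vessels, holding the power per unit volume constant fixes the impeller speed at the larger scale. The sketch below shows that calculation under the usual assumptions of fully turbulent flow (constant power number), so that P ~ N^3 D^5 and V ~ D^3; the vessel and impeller numbers are illustrative only and are not taken from the A-Mab case study.

def scaled_impeller_speed(n1_rpm, d1_m, d2_m):
    """Impeller speed at the larger scale that keeps power per unit volume
    constant, assuming geometric similarity and turbulent flow (constant
    power number): P/V ~ N^3 * D^2, so N2 = N1 * (D1/D2)^(2/3)."""
    return n1_rpm * (d1_m / d2_m) ** (2.0 / 3.0)

# Illustrative numbers only: a 50 L vessel with a 0.15 m impeller at 200 rpm,
# scaled to a 1500 L vessel with a 0.45 m impeller
n2 = scaled_impeller_speed(200.0, 0.15, 0.45)
print(f"Large-scale impeller speed ~ {n2:.0f} rpm")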

When the dimensional ratios cannot be kept constant, there are still rules for scale up. For example, as shown in the chart below (from Perry and Chilton’s Chemical Engineers’ Handbook, 5th edition, 1973), as the impeller diameter increases relative to the tank diameter, the relative power requirement declines, but the torque required to turn the impeller increases. This correlation can be used to adjust the power requirement to the scale up condition.


Additionally, there is “general agreement that the effect of mixer power level on mass-transfer coefficient is greater before than after off-bottom motion of all particles in a solute-solvent suspension is achieved (op.cit.)”.

In other words, once particles have been fluidized off the bottom of the vessel, whether they are carried all the way to the top of the vessel or not is not so important when it comes to predicting complete dissolution of the solids. At that point, the mass transfer coefficient is related only weakly to the power input, as shown below. Mass transfer coefficients for the dissolution of solids can be easily determined in the lab, and do not have to be determined again and again for new processes.
Knowing the minimum power requirements for particle suspension and the mass transfer coefficients for the solids being dissolved allows estimation of mixing times required for preparing a buffer. Knowing the mixing time allows the manufacturer to schedule buffer or medium preparation more precisely, eliminating over-processing or incorrect processing (a principle of lean manufacturing) and helps to guarantee a quality reagent/intermediate is produced each time, on time, and ready to implement in the next manufacturing step.

Thursday, January 7, 2010

Quality by Design: Dissolution Time

By Dr. Scott Rudge

In a previous post, I discussed the prevalence of statistics used in Quality by Design. These statistical tools are certainly useful and can provide (within their limits of error) prediction of future effects of excursions from control ranges for operating parameters, specifically for Critical Quality Attributes (CQA’s). The limitations of this approach were discussed in the previous blog. In the next series of blogs on Quality by Design, I will discuss opportunities for increasing quality, consistency and compliance for biotechnology products by building quality from the ground up.
While active pharmaceutical ingredient (API) manufacture by biosynthesis is a complicated and difficult-to-control prospect, there are a number of fundamental operations that are eminently controllable. Media and buffers must be compounded, sometimes adjusted or supplemented, stored, and ultimately used in reactors and separators to produce and purify the API. These solutions are fairly easy to make with precision. Three factors come immediately to mind that can be known in a fashion that is scale independent and rigorous: 1) the dissolution rate, 2) the mixing power required, and 3) the chemical stability of the solution.

The dissolution rate is a matter of mass transfer from a saturated solution at the dissolving solid interface to the bulk solution concentration. If particle size is fairly consistent, then the dissolution rate is represented by this equation:
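The rate expression (an image in the original post) is presumably the lumped first-order form

dc/dt = k (csat − c)

where csat is the saturation concentration at the surface of the dissolving solid, c is the bulk concentration, and k (with units of 1/s, as quoted below) lumps the mass transfer coefficient together with the specific surface area of the suspended solid.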



where k is the mass transfer coefficient, provided the dissolving solid is fully suspended. It is easy to measure this mass transfer rate in the laboratory with an appropriate measure of solution concentration; for the dissolution of sodium chloride, for example, conductivity can be used. We conducted such experiments in our lab across a range of volumes and salt concentrations, and found a scale independent mass transfer coefficient of approximately 0.4 s-1. An example of the results is shown in the accompanying figure.
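Extracting the coefficient from such a trace is a simple regression: for a fully suspended solid, the approach to the final conductivity plateau is roughly first order, so a plot of ln(1 − c/c_final) against time has slope −k. The sketch below fits simulated data (not our laboratory results) under that simplification.

import numpy as np

# Simulated conductivity trace for a salt dissolution (illustrative only):
# first-order approach to the final plateau with k = 0.4 1/s plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 201)                  # seconds
cond_final = 85.0                                # mS/cm at full dissolution
cond = cond_final * (1.0 - np.exp(-0.4 * t)) + rng.normal(0.0, 0.3, t.size)

# Fit ln(1 - c/c_final) = -k * t over the early part of the trace,
# a simplification that treats the driving force as (c_final - c)
mask = (t > 0) & (cond < 0.95 * cond_final)
y = np.log(1.0 - cond[mask] / cond_final)
k = -np.polyfit(t[mask], y, 1)[0]
print(f"Estimated mass transfer coefficient k ~ {k:.2f} 1/s")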




With the mass transfer coefficient in hand, the mixing time can be precisely specified, and an appropriately short engineering safety factor added. If the times required for dissolution are known, and mixing is scaled appropriately (as will be shown in future blogs), then buffers and other solutions can be made with high precision and little wasted labor or material. In addition, the properties of the solutions should be constant within a narrow range, and the reproducibility of more complicated unit operations, such as reactors and separators, much improved.

A design based on engineering standards such as this produces predictable results.  Predictable results are the basis of process validation.  As the boiling point of water drives the design of the WFI still, we should let engineering design equations drive Quality by Design for process unit operations.

Leah Choi contributed to this work

Tuesday, November 10, 2009

The Good Buffer

By Scott Rudge

“A Good Buffer” has a number of connotations in biochemistry and biochemical engineering. A “good buffer” would be one that has good buffering capacity at the desired pH. The best buffering capacity is at the pK of the buffer of course, although it seems buffer salts are rarely used at their pK.

Second, a good buffer would be one matched to the application. Or maybe that’s first. For example, the buffering ion in an ion exchange chromatography step should be the same charge as the resin (so as not to bind and take up resin capacity). For example, phosphate ion (negative) is a good choice for cation exchange resins (also negatively charged) like S and CM resins.

Another meaning of a “Good” buffer is a buffer described by Dr. Norman Good and colleagues in 1966 (N. E. Good, G. D. Winget, W. Winter, T. N. Connolly, S. Izawa and R. M. M. Singh (1966). "Hydrogen Ion Buffers for Biological Research". Biochemistry 5 (2): 467–477.). These twelve buffers have pK’s spanning the range 6.15 to 8.35, and are a mixture of organic acids, organic bases and zwitterions (having both an acidic and basic site). All twelve of Good’s buffers have pK’s that are fairly strongly temperature dependent, meaning that, in addition to the temperature correction required for the activity of hydrogen ion, there is an actual shift in pH that is temperature dependent. So, while a buffer can be matched to the desired pH approximately every 0.2 pH units across pH 7 ± 1, the buffers are expensive and not entirely suited to manufacturing applications.

In our view, a good buffer is one that is well understood and is designed for its intended purpose. To be designed for its intended purpose, it should be well matched to provide adequate buffering capacity at the desired pH and desired temperature. As shown in the figure, the buffering capacity of a buffer with a pK of 8 is nearly exhausted below pH 7 and above pH 9.




It’s easy to overshoot the desired pH at these “extremes”, but just such a mismatch between buffering ion and desired pH is often specified. Furthermore, buffers are frequently made by titrating to the desired pH from the pK of the base or the acid. This leads to batch-to-batch variation in the amount of titrant used, because of overshooting and retracing. In addition, the temperature dependence of the pK is often not taken into account when specifying the temperature of the buffer. Tris has a pK of 8.06 at 20°C, so a Tris buffer used at pH 7.0 is already not a good idea at 20°C. The pK of Tris changes by -0.03 pH units for every 1°C increase in temperature, so if the temperature specified for the pH 7.0 buffer is 5°C, the pK will have shifted to 8.51. With only about 3% of its buffering capacity available at pH 7.0 and 5°C, Tris is not well matched at all.
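The arithmetic behind that Tris example is easy to script for any buffer and temperature you plan to specify. The sketch below uses the Henderson-Hasselbalch relation and the -0.03 pK unit/°C coefficient quoted above; expressing the result as the fraction of buffer in its basic form is one simple way of showing how little buffering capacity remains.

def tris_pk(temp_c, pk_20c=8.06, dpk_per_degc=-0.03):
    """Tris pK at a given temperature, using the linear shift quoted above."""
    return pk_20c + dpk_per_degc * (temp_c - 20.0)

def fraction_base_form(ph, pk):
    """Fraction of the buffer in its basic (unprotonated) form,
    from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (pk - ph))

for temp_c in (20.0, 5.0):
    pk = tris_pk(temp_c)
    frac = fraction_base_form(7.0, pk)
    print(f"{temp_c:>4.0f} C: pK = {pk:.2f}, base form at pH 7.0 = {frac:.1%}")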

A good buffer will have a known mass transfer rate in water, so that its mixing time can be predicted. Precise amounts of buffering acid or base and co-salt are added to give the exact pH required at the exact temperature specified. This actually reduces our reliance on measurements like pH and conductivity, which can be inexact. Good buffers can be made with much more precision than ± 0.2 pH units and ± 10% of nominal conductivity, and when you start to make buffers this way, you will rely more on your balance and on understanding the appropriate storage conditions for your raw materials than on making adjustments in the field with titrants, time-consuming mixing, and guessing whether variation in conductivity is going to upset the process.