
Thursday, March 15, 2012

Moving Past the Bottleneck

By Dr. Scott Rudge

Is there a bottleneck in Downstream Processing? The membrane chromatography vendors certainly want you to think so.


The problem is in the efficiency of chromatographic purification.  Without a doubt, chromatography is slow and inefficient. A typical protein loading for commercial scale chromatography is 25 to 40 g/L, and a typical cycle is on the order of 8 hours.  So the productivity of a chromatography column is 3 to 5 g/L/hr.  Compare this to an aggressive microbial fermentation, which produces 10 g/L in a 40 hour fermentation (0.25 g/L/hr) or a very aggressive cell culture which produces 10 g/L of antibody in seven days (0.06 g/L/hr). Clearly, even with the inefficiency of chromatography, there is plenty of volumetric productivity to keep up with modern cell culture and fermentation.
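For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using the illustrative loadings and cycle times quoted above (not data from any particular process):

# Rough volumetric productivity comparison (illustrative numbers from the text above)

def productivity(grams_per_liter, hours):
    """Volumetric productivity in g/L/hr."""
    return grams_per_liter / hours

chrom_low = productivity(25, 8)          # chromatography, 25 g/L load, 8 hr cycle -> ~3.1 g/L/hr
chrom_high = productivity(40, 8)         # chromatography, 40 g/L load, 8 hr cycle -> 5.0 g/L/hr
fermentation = productivity(10, 40)      # microbial fermentation, 10 g/L in 40 hr -> 0.25 g/L/hr
cell_culture = productivity(10, 7 * 24)  # cell culture, 10 g/L antibody in 7 days -> ~0.06 g/L/hr

print(chrom_low, chrom_high, fermentation, cell_culture)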

As was pointed out in a previous blog, there is no difference in total capacity between typical chromatography resins and derivatized membranes, and the dynamic binding capacity, where membrane chromatography should be superior, is also no different. So membrane chromatography does not appear to be the answer.

One technology that could intensify the performance of chromatography is “simulated moving bed” chromatography. With simulated moving beds (or SMB), the non-productive volumes of the chromatography column are put into use. This is done by segmenting the column, or making a series of much smaller columns, each of which can be operated differently at any given time.  For example, a section of the column near the classic “inlet” would be regenerated after the product has passed through it.

A section of the column downstream of the product front would be equilibrated just before the product front entered it.

In its simplest form, the SMB column is thought of in four sections, one for feed, one for product, one for regeneration, and one for raffinate, as shown in the figure below:


(from Imamoglu, S., Advances in Biochemical Engineering/Biotechnology, Vol. 76, Springer-Verlag, Berlin, p 212 (2002).)


The flow of mobile phase moves countercurrent to the direction in which the columns are switched, and the switching velocity is set between the velocity of the product and that of the next faster or slower contaminant.  In the configuration shown above, the raffinate moves more quickly than the product (extract), and as the solid moves counterclockwise, the extract moves backwards to the elution zone.  Meanwhile, the fast-moving raffinate is allowed to exit the loop to waste.  Column segments in Zone IV are regenerated and re-equilibrated to a condition where the extract is bound but the raffinate continues to travel down the column.  SMB can increase the productivity of chromatography resin by a large factor.  In the simplistic diagrammatic example, the productivity could be increased by a factor of 4.  Depending on the length of the zone required for separation, the increase can be much higher.

SMB has been used for the production of amino acids, enantiomers and many other small molecules. More recently, it has been used for purification of proteins such as albumin, antibodies and some artificial demonstration mixtures such as myoglobin/lysozyme.  Innovations such as the application of gradients to SMB have been developed.  This technology has the potential to reduce cycle times and increase efficiency by smarter use of existing resources.

Tuesday, January 31, 2012

Is Membrane Chromatography the Answer?

by Dr. Scott Rudge

Membrane chromatography gets a fair amount of hype.  It’s supposed to be faster and cheaper, and it can be made disposable.  But is it the real answer to the “bottleneck” in downstream processing?  Was Allen Iverson the answer to the Nuggets’ basketball dilemma?  I’m still skeptical.

The idea to add ligand functionality to membranes was not new at the time, but it really got some traction when it was endorsed by Ed Lightfoot in 1986.  Lightfoot’s paper pointed out that the hydrodynamic price paid for averaging of flow paths in a packed bed might not be worth it.  If thousands of parallel hollow fibers of identical length and diameter could be placed in a bundle, and the diameter of these fibers could be small enough to make the diffusion path length comparable to that in a bed of packed spheres, or smaller, then performance would be equivalent or superior at a fraction of the pressure drop.  This is undoubtedly true; there is no reason to have a random packing if flow paths can be guaranteed to be exactly equivalent.  However, every single defect in this kind of system works against its success.  For example, hollow fibers that are slightly more hollow will have lower pressure drop, lower surface to volume ratio, lower binding capacity and higher proportional flow.  Slightly longer fibers will have slightly higher pressure drop and slightly higher binding capacity, but will carry proportionally less of the flow.  Length acts linearly on pressure drop and flow rate, but internal diameter acts to the fourth power, so minor variations in internal diameter would dominate the performance of such systems.
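As a rough illustration of how unforgiving that fourth-power dependence is, here is a small Python sketch based on the Hagen-Poiseuille relation (flow through a fiber at a fixed pressure drop scales as d^4/L); the 5% deviations are invented for illustration:

# Flow maldistribution among parallel hollow fibers (illustrative sketch).
# For laminar flow at a common pressure drop, the flow through a fiber
# scales as (inner diameter)**4 / length, per Hagen-Poiseuille.

NOMINAL_D = 1.0   # normalized inner diameter
NOMINAL_L = 1.0   # normalized length

def relative_flow(d, length):
    """Flow through one fiber relative to a nominal fiber at the same pressure drop."""
    return (d / NOMINAL_D) ** 4 * (NOMINAL_L / length)

print(relative_flow(1.05, 1.0))   # 5% wider bore   -> ~1.22, carries ~22% extra flow
print(relative_flow(1.0, 1.05))   # 5% longer fiber -> ~0.95, carries ~5% less flow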
Indeed, according to Mark Etzel, these systems were abandoned as impractical in favor of membrane chromatography based on conventional membrane formats that have been derivatized to add binding functionality.  As this technology has developed, its application and scale-up have begun to look very much like packed bed chromatography.  Here are some particulars:
1. Development and scale-up are based on membrane volume.  However, breakthrough curves are measured in tens, or even seventies, of equivalent volumes (see Etzel, 2007), instead of the two or three bed volumes found in packed beds.
2. Binding capacities are lower in membrane chromatography.  In a recent publication by Sartorius, the ligand density in Sartobind Q is listed as 50 mM, while for Sepharose Q-HP it is 140 mM.  In theory, the membrane format has a higher relative dynamic binding capacity, but this has yet to be demonstrated (see above).
3. The void volume in membranes is surprisingly high, at 70%, compared to packed beds at 30%.  This is one reason for the low relative binding capacity.
4. Disposable is all the rage, but there is no evidence that, on a volume basis, derivatized membranes are cheaper than chromatography resins.  In fact, economic comparisons published by Gottschalk always have to assume that the packed bed will be loaded 100 times less efficiently than the membrane, just to make the numbers work.  The cost per volume per binding event goes down dramatically during the first 10 reuses of a chromatography resin.
It turns out that membrane chromatography has a niche, and that is flow-through operations in which some trace contaminant, such as residual endotoxin or DNA, is removed from the product.  This too can be done efficiently with column chromatography when operated in a high-capacity (for the contaminant) mode.  But there is a mental block among chromatographers who want to operate adsorption steps in chromatographic, resolution-preserving modes; this block has not yet afflicted the membrane crowd.  A small, high-capacity column operated at an equivalent flow rate to a membrane (volumes per bed volume or membrane volume) will work as well, and in my opinion more cheaply if regenerated.
These factors should be considered when choosing between membrane and packed bed chromatography.

Wednesday, September 28, 2011

Is Your Chromatography in Control, or in Transition?

By Dr. Scott Rudge
While chromatography continues to be an essential tool in pharmaceutical manufacturing, it remains frustratingly opaque and resistant to feedback control of any kind.  Once you load your valuable molecule, and its insufferable companion impurities, onto the column, there is little you can do to affect the purification outcome that awaits you some minutes to hours later.
Many practitioners of preparative and manufacturing scale chromatography perform “Height Equivalent to a Theoretical Plate” testing prior to putting a column into service, and periodically throughout the column’s lifetime.  Others also test for peak shape, using a measurement of peak skewness or asymmetry.  However, these measurements can’t be made continuously or even frequently, and definitely cannot be made with the column in normal operation.  Moreover, the standard methods for performing these tests leave a lot of information “on the table,” so to speak, by making measurements at half peak height, for example.
To address this shortcoming, many have started to use transition analysis to get more frequent snapshots of column suitability during column operation.  This option has been made possible by advances in computer technology and data acquisition.
Transition analysis is based on fairly old technology called moment theory.  It was originally developed to describe differences in population distributions, and was applied to chromatography after the groundbreaking work of Martin and Synge (Biochem. J., 35, 1358 (1941)).  Eugene Kucera (J. Chromatog., 19, 237 (1965)) derived the zeroth through fifth moments from a linear model for chromatography that included pore diffusion in resins, which is fine reading for the mathematically enlightened.  Larson et al. (Biotech. Prog., 19, 485 (2003)) applied the theory to in-process chromatography data.  These authors examined over 300 production scale transitions from columns ranging from 44 to 140 cm in diameter.  They found the methods of transition/moment analysis more informative than the measurements of HETP and asymmetry traditionally applied to process chromatography.
What is transition analysis, and how is it applied?  Any time there is a step change in conditions at the column inlet, there will occur, some time later, a transition in that condition at the column outlet.  For example, when the column is taken out of storage and equilibrated, there is commonly a change in conductivity and pH.  Ultimately, a wave of changing conductivity, or pH, or likely both, exits the column.  The shape of this wave gives important information on the health of the column, as described below.  Any and all transitions will do.  When the column is loaded, there is likely a transition in conductivity, UV, refractive index and/or pH.  When the column is washed or eluted, similar transitions occur.  As with HETPs, the purest transitions are those without thermodynamic implications, that is, those in which chemicals are not binding to or exchanging with the resin.  However, the measurements associated with a particular transition should be compared “inter-cycle” to the same transition in subsequent chromatography cycles, not “intra-cycle” to transitions of a different nature within the same chromatography cycle.
Since transition analysis uses all the information in a measured wave, it can be very sensitive to effects that are observed anywhere along the wave, not just at half peak height, for example.  Consider the two contrived transitions shown below:
In Case 1, a transition in conductivity is shown that is perfectly normally distributed.  In Case 2, an anomaly has been added to the baseline, representing a defect in the chromatography packing, for example.  Transition analysis consists of finding the zeroth, first and second moments of the conductivity wave as it exits the column.  These moments are defined as:

These are very easy calculations to make numerically, with appropriate filtering of noise in the data, and appropriate time steps between measurements.  The zeroth moment describes the center of the transition relative to the inlet step change.  It does not matter whether or not the peak is normally distributed.  The zeroth moments are nearly identical for Case 1 and Case 2, to several decimal places.  The first moment describes the variance in the transition, while the second moment describes the asymmetry of the peak.  These are markedly different between the two cases, due to the anomaly in the Case 2 transition.  Values for the zeroth, first and second moments are in the following table:


                 Case 1     Case 2
Zeroth moment    50.0       50.0
First moment     1002       979.6
Second moment    20,300     19,623
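Since the moment definitions above were given as a figure, here is a minimal numerical sketch of how such moments can be computed from a recorded transition.  It follows one common approach, differentiating the outlet wave and taking normalized moments of the result, and maps them onto the naming convention used in this post (center, variance, asymmetry); the exact normalizations in the original figure may differ.

import numpy as np

def transition_moments(t, signal):
    """Center, spread, and skew of a step transition measured at the column outlet."""
    dsignal = np.gradient(signal, t)           # slope of the transition (peak-like curve)
    area = np.trapz(dsignal, t)                # total step size, used for normalization
    center = np.trapz(t * dsignal, t) / area                    # mean transition time
    variance = np.trapz((t - center) ** 2 * dsignal, t) / area  # spread about the center
    skew = np.trapz((t - center) ** 3 * dsignal, t) / area      # asymmetry about the center
    return center, variance, skew

# Synthetic conductivity transition centered at t = 50 (arbitrary units)
t = np.linspace(0, 100, 2001)
conductivity = 1.0 / (1.0 + np.exp(-(t - 50.0) / 5.0))

print(transition_moments(t, conductivity))  # ~ (50, 82, 0) for this clean, symmetric wave

Running the same function on a transition with a baseline anomaly, like Case 2 above, leaves the center nearly unchanged but shifts the higher moments, which is exactly the sensitivity being exploited.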


It would be sufficient to track the moments for transitions from cycle to cycle.  However, there is a transformation of the moments into a “non-Gaussian” HETP, suggested by McCoy and Goto (Chem. Eng. Sci., 49, 2351 (1994)):

Where

Using these relationships the variance and non-Gaussian HETP are shown in the table below for Case 1 and Case 2:

Using this method, a comparative measure of column performance can be calculated several times per chromatography cycle without making any chemical additions, breaking the column fluid circuit, or adding steps.  The use of transition analysis is still just gaining a foothold in the industry.  Are you ahead of the curve, or behind?

Wednesday, May 12, 2010

Bending the Curve

By Dr. Scott Rudge

To understand the best ways to develop preparative and industrial scale adsorption separations in biotechnology, it’s critical to understand the thermodynamics of solute binding. In this blog, I’ll review some basics of the Langmuir binding isotherm. This isotherm is a fairly simplistic view of adsorption and desorption; however, it applies quite well to typical protein separations, such as ion exchange and affinity chromatography.


A chemical solution that is brought into contact with a resin that has binding sites for that chemical will partition between the solution phase and the resin phase. The partitioning will be driven by some form of affinity or equilibrium, which can be considered fairly constant at constant solution phase conditions. By “solution phase conditions”, I mean temperature, pH, conductivity, salt and other modifier concentrations. Changing these conditions changes the equilibrium partitioning. If we represent the molecule in solution by “c” and the same molecule adsorbed to the resin by “q”, then the simple mathematical relationship is:

q = Keq c
If the capacity of the resin for the chemical is unlimited, then this is the end of the story: the equilibrium is “linear” and the behavior of the adsorption is easy to understand, as the dispersion is completely mass transfer controlled. An example of this is size exclusion chromatography, where the resin has no affinity for the chemical; it simply excludes solutes larger than the pore or polymer mesh length. For resins where there are discrete “sites” to which the chemical might bind, or a finite “surface” of some kind with which the chemical has some interaction, the equilibrium is described by:

q = Keq c S0
and the maximum capacity of the resin has to be accounted for with a “site” balance, such as shown below:

Stot = S0 + q
Where Stot represents the total number of binding sites, and S0 represents the number of binding sites not occupied by the chemical of interest. The math becomes a little more complicated when you worry about what else might be occupying a site, or if you want to know what happens when the molecule of interest occupies more than one site at a time. We’ll leave these important considerations for another day. Typically, the total sites can be measured. Resin vendors use terms such as “binding capacity” or “dynamic binding capacity” to advertise the capacity of their resins. The capacity is often dependent on the chemical of interest. The resulting relationship between c and q is no longer linear; it is represented by this equation:

q = Stot Keq c / (1 + Keq c)
When c is small, the denominator of this equation approaches 1, and the equilibrium looks like the linear equilibrium equation. When c is large, the denominator approaches Keq c, and the resin concentration of the chemical approaches the resin capacity, Stot. When c is in between, the isotherm bends over in a convex shape. This is shown in the graph below.

There are three basic conditions in preparative and industrial chromatographic operations. In the first, Keq is very low, and there is little or no binding of the chemical to the resin. This is represented by the red squares in the graph above. This is the case with “flow through” fractions in chromatography, and would generally be the case when the chemical has the same charge as the resin. In the third, Keq is very high, and the chemical is bound quantitatively to the resin, even at low concentrations. This is represented by the green triangles in the graph above. This is the case with chemicals that are typically only released when the column is “stripped” or “regenerated”. In these cases, the solution phase conditions are changed to turn Keq from a large number to a small number during the regeneration by using a high salt concentration or an extreme pH. The second case is the most interesting, and is the condition for most “product” fractions, where a separation is being made. That is, when the solution phase conditions are tuned so that the desired product is differentially adsorbing and desorbing, allowing other chemicals with slightly higher or lower affinities to elute either before or after the desired product, it is almost always the case that the equilibrium constant is not such that binding is quantitative or non-existent. In these cases, the non-linearity of the isotherm has consequences for the shape of the elution peak. We will discuss these consequences in a future blog.
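To put some numbers on those three regimes, here is a small sketch that evaluates the Langmuir expression above for a weak, an intermediate, and a strong binder. The Keq and Stot values are arbitrary illustrations, not measured constants for any real resin:

import numpy as np

def langmuir(c, Keq, Stot):
    """Resin-phase concentration q as a function of solution concentration c."""
    return Stot * Keq * c / (1.0 + Keq * c)

c = np.linspace(0.0, 10.0, 6)   # solution concentrations, arbitrary units
Stot = 50.0                     # resin capacity, arbitrary units

for Keq, regime in [(0.01, "flow-through (weak binding)"),
                    (1.0, "product (intermediate binding)"),
                    (100.0, "strip/regenerate (strong binding)")]:
    print(regime, np.round(langmuir(c, Keq, Stot), 2))

# The weak binder stays nearly linear in c, the strong binder saturates at Stot
# even at low c, and the intermediate case bends over into the convex isotherm.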

In a “Quality-by-Design” world, these non-linearities would be understood and accounted for in the design of the chromatography operation. An excellent example of the resulting non-linearity of the results was shown by Oliver Kaltenbrunner in 2008.

Relying on linear statistics to uncover this basic thermodynamic behavior is a fool’s errand. However, using basic lab techniques (a balance and a spectrophotometer), the isotherm for your product of interest can be determined directly, and the chromatographic behavior understood. This is the path to process understanding!

Wednesday, April 7, 2010

What's Your Velocity?

By Dr. Scott Rudge

With the development of very high titer cell culture and fermentation processes, downstream processing has been identified as a new bottleneck in biotechnology. The productivity of chromatography, in particular, has become a bottleneck. There are two schools of thought for scaling up chromatography: in one, linear velocity (flow rate divided by column cross sectional area) is held constant; in the other, the total (volumetric) flow rate divided by the volume of the column is held constant. In the former method, the length of the column has to be held constant. In the latter method, the geometry of the column is not important, as long as the column can be packed efficiently and flow is evenly distributed. This makes the latter method more flexible and more accommodating of commercially available, off-the-shelf column hardware. But does it work?

In my experience, holding flow rate divided by column volume constant between scales works very well. There is plenty of theoretical basis for the methodology as well. Yamamoto has published extensively on the reasons that this technique works. This method is also the basis for scale up described in my textbook. Here, briefly, and using plate height theory, is the theoretical basis:

The basic goal in chromatography scale-up is to maintain resolution. “Resolution” is a way to describe the power of a chromatography column to separate two components. It depends on the relative retention of the components, which is fixed by the thermodynamics of the column and remains constant as long as the chemistry (the resin type, the buffer composition) remains constant. It also depends on the peak dispersion in the column, which is a function of the transport phenomena, and is only related to the chemistry by the inherent diffusivity of the molecules involved. Otherwise, it is dependent on mass transfer, flow rate, temperature, and flow distribution. Treating the thermodynamics as constant, we can say:

Rs ∝ √N
where Rs is Resolution, and N is the number of theoretical plates. N is the ratio of the column length, L, to the plate height, H, so

N = L / H
In liquids, H is approximately a linear function of linear velocity v, according to van Deemter, as discussed in a previous post. So we can say that H = Cv (where C is the van Deemter constant). Now, the linear velocity is the flow rate divided by the cross sectional area of the column, A, and the column volume, V, is the cross sectional area times the column length. The total flow rate (F) divided by the column volume is held constant between scales; we’ll call this constant “f”. Putting these together:

Rs ∝ √N = √(L / H) = √(L / (C v)) = √(L A / (C F)) = √(V / (C F)) = √(1 / (C f))
This bit of mathematical gymnastics says that resolution depends on only two fundamental properties of the scale-up: van Deemter’s term “C”, which accounts for the dispersion caused by convection relative to mass transfer to and from a resin particle, and “f”, the chosen flow rate relative to the column volume. There is no need to hold column length or linear velocity constant as long as the flow rate relative to column volume is held constant. You might also notice from the math that doubling the column length has the same effect on resolution as cutting the flow rate in half. However, doubling the length costs more in resin, consumes more solvent than decreasing the flow rate (that’s for you, Mike Klein!), and increases the pressure.
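The same bookkeeping can be checked numerically. In the sketch below, two columns with very different geometries are run at the same f = F/V; with H = Cv, the plate count works out to N = 1/(Cf) for both. The value of C and the column dimensions are arbitrary illustration values:

import math

def plate_count(diameter_cm, length_cm, flow_cm3_per_hr, C_hr):
    """Plate count for a column with H = C*v (van Deemter C-term only)."""
    area = math.pi * (diameter_cm / 2.0) ** 2   # cross sectional area, cm^2
    volume = area * length_cm                   # column volume, cm^3
    v = flow_cm3_per_hr / area                  # linear velocity, cm/hr
    H = C_hr * v                                # plate height, cm
    return length_cm / H, flow_cm3_per_hr / volume   # N and f (column volumes per hr)

C = 0.002   # hr, illustrative van Deemter C-term
f = 5.0     # column volumes per hour, held constant across scales

for d, L in [(1.0, 12.0), (60.0, 20.0)]:        # small lab column and large column
    volume = math.pi * (d / 2.0) ** 2 * L
    N, f_check = plate_count(d, L, f * volume, C)
    print(d, L, round(N), round(f_check, 1))    # N = 1/(C*f) = 100 plates in both cases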

The plate count analysis is very phenomenological, but it does hold up in practice (otherwise it would have been abandoned). And more detailed mathematical models predict the same performance, so confidence in this scale-up model is high.

One common mistake made by those using the constant linear velocity model is in adding extra column capacity. Since most people are unwilling to pay for a custom diameter column, but base their loadings on the total volume of resin, they add bed volume by adding length. But since they are also unwilling to change the linear velocity, they end up decreasing the productivity of the column (because, for example, the separation they achieved at small scale in a 12 cm long column at 60 cm/hr is now being performed in a 15 cm long column at 60 cm/hr, and therefore takes 25% more time).

If the less well-known constant F/V model is used for a process involving mammalian cells, it would be imperative to explain and demonstrate this model in the scale-down validation that is a critical part of the viral clearance package.

But how can you get even more performance out of your chromatography? Treating the unit operation as an adsorption step, and scaling up using Mass Transfer Zone (MTZ) concepts will be treated in a future posting.

Thursday, March 4, 2010

QbDer, Know Thy Model!

By Dr. Scott Rudge

Resolution in chromatography is critical, from analytical applications to large scale process chromatography. While baseline resolution is the gold standard in analytical chromatography, it is seldom achieved in process chromatography, where “samples” are concentrated and “sample volumes” represent a large fraction of the column bed volume, if not multiples of the bed volume. How do you know if your resolution is changing in process chromatography if you can’t detect changes from examining the chromatogram?


Many use the Height Equivalent to a Theoretical Plate technique to test the column’s resolution. In this technique, a small pulse of a non-offensive but easily detected material is injected onto the column, and the characteristics of the resulting peak are measured. The following equations are used:

N = 5.54 (tr / tw,1/2)²    and    H = L / N

Where H is the plate “height” in units of distance, tr is the retention time of the peak, tw,1/2 is the width of the peak at half height, L is the length of the column, and N is the “number” of theoretical plates in the column. The lower the H, the smaller the dispersion, and the greater the resolution.
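As a worked example of that calculation, the short sketch below computes N and H from a pulse test using the half-height width; the retention time, peak width, and bed length are invented numbers for illustration:

# Plate count and plate height from a pulse injection (half-height method)

tr = 12.0       # retention time of the pulse, minutes
tw_half = 0.9   # width of the peak at half height, minutes
L = 20.0        # packed bed length, cm

N = 5.54 * (tr / tw_half) ** 2   # number of theoretical plates
H = L / N                        # plate height, cm

print(round(N), round(H, 4))     # ~985 plates, H ~ 0.02 cm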

It is well established that flow rate and temperature affect the plate height. In fact, when the plate height is plotted against flow rate, we generate what is typically called the “van Deemter plot”, after the Dutch scientists (van Deemter et al., Chem. Eng. Sci., 5, 271 (1956)) who established a common relationship for all chromatography (gas and liquid) according to the following equation:

H = A + B/v + C v
Where A, B and C are constants and v is the linear velocity (flow rate divided by the column cross sectional area) of fluid in the column. It was later proposed by Knox (Knox, J., and H. Scott, J. Chromatog., 282, 297 (1983)) that van Deemter plots could be reduced to a common line for all chromatography if the plate height was normalized to the resin particle size, and the linear velocity was normalized to the chemical’s diffusivity divided by the particle size. While this did not turn out to be generally true, it is very close to true. Chemical engineers will recognize the ratio of linear velocity to diffusivity over particle size as the Peclet number, “Pe”, a standard dimensionless number used in many mass transfer analyses.
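For reference, the Knox-style reduction described above can be written as a reduced plate height h = H/dp and a reduced velocity (the Peclet number as used here) Pe = v·dp/D. The particle size, velocity, and diffusivity in this sketch are order-of-magnitude guesses, not measured values:

# Reduced plate height and Peclet number (illustrative values only)

d_p = 90e-4        # particle diameter: 90 micrometers, expressed in cm
v = 100.0 / 3600   # linear velocity: 100 cm/hr, converted to cm/s
D = 1.5e-5         # small-solute diffusivity in water, cm^2/s (order of magnitude)
H = 0.05           # example plate height, cm

Pe = v * d_p / D   # reduced velocity (Peclet number)
h = H / d_p        # reduced plate height

print(round(Pe, 1), round(h, 1))   # Pe ~ 16.7, h ~ 5.6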

Since diffusivity is sensitive to temperature, it is logical that Pe is also sensitive to temperature: decreasing the temperature decreases the diffusivity and increases Pe. Thus, Pe is inversely proportional to temperature. We measured plate height as a function of linear flow rate and temperature in our lab on a Q-Sepharose FF column, using a sodium chloride pulse, and found the expected result, shown in the graph below.


We can easily use this graph to measure van Deemter’s parameters A and C, and to find the dependence of the diffusivity of sodium chloride on temperature. Based on these two points, we find the Peclet number proportional to 0.085/T, where T is in kelvin. We also find the dependence of plate height on linear velocity, and we can predict that resolution will deteriorate in the column as flow rate increases.
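For those who want to see the mechanics, here is a sketch of how A and C could be extracted from plate-height data at a single temperature; with the B/v term negligible in liquids, H ≈ A + Cv is a straight-line fit. The (v, H) pairs below are fabricated for illustration and are not the Q-Sepharose data discussed above:

import numpy as np

v = np.array([30.0, 60.0, 90.0, 120.0])      # linear velocity, cm/hr (made-up data)
H = np.array([0.030, 0.048, 0.066, 0.085])   # plate height, cm (made-up data)

C_fit, A_fit = np.polyfit(v, H, 1)           # slope = C (hr), intercept = A (cm)
print(round(A_fit, 4), round(C_fit, 6))

# Repeating the fit at a second temperature gives a second C, from which the
# temperature dependence of the diffusivity (and of the Peclet number) follows.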

We can also use Design of Experiments to find the same information. Analyzing the same data set with ANOVA yields significant factors for both flow rate and temperature, as shown in the following table:

Term        Coef        SE Coef     T        P
Constant    0.09756     0.05604     1.74     0.157
Temp       -0.007935    0.001858   -4.27     0.013
v           0.0497      0.02147     2.32     0.082

But since the statistics don’t know that the temperature dependence is inverse, or that the Peclet number is a function of both temperature and flowrate, the model yields no additional understanding of the process.

It is possible to use analysis of variance to fit a model other than a linear one, and the van Deemter model is clearly non-linear. But one must know that the phenomenon being measured behaves non-linearly in order to choose the appropriate statistics. Using Design of Experiments blindly, without knowing the relationship between the factors and responses, leads to empirical correlations relevant only to the system studied, and these should only be extended with caution.