
Tuesday, January 31, 2012

Is Membrane Chromatography the Answer?

by Dr. Scott Rudge

Membrane chromatography gets a fair amount of hype.  It’s supposed to be faster and cheaper, and it can be made disposable.  But is it the real answer to the “bottleneck” in downstream processing?  Was Allen Iverson the answer to the Nuggets’ basketball dilemma?  I’m still skeptical.

The idea of adding ligand functionality to membranes was not new at the time, but it gained real traction when Ed Lightfoot endorsed it in 1986.  Lightfoot’s paper pointed out that the hydrodynamic price paid for averaging of flow paths in a packed bed might not be worth it.  If thousands of parallel hollow fibers of identical length and diameter could be placed in a bundle, and the diameter of these fibers could be made small enough that the diffusion path length was comparable to, or smaller than, that in a bed of packed spheres, then performance would be equivalent or superior at a fraction of the pressure drop.  This is undoubtedly true; there is no reason to accept a random packing if flow paths can be guaranteed to be exactly equivalent.  However, every single defect in this kind of system works against its success.  For example, hollow fibers that are slightly more hollow will have lower pressure drop, lower surface-to-volume ratio, lower binding capacity and a higher proportion of the flow.  Slightly longer fibers will have slightly higher pressure drop and slightly higher binding capacity, and will carry proportionally less of the flow.  Length acts linearly on pressure drop and flow rate, but internal diameter acts to the fourth power, so minor variations in internal diameter dominate the performance of such systems.
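To see how strongly diameter dominates, here is a minimal sketch of the Hagen-Poiseuille scaling the paragraph above describes (flow per fiber goes as d⁴/L at a given pressure drop); the fiber dimensions and fluid properties are made-up placeholders, not data from any real module:

```python
# Sketch: sensitivity of hollow-fiber flow to diameter vs. length variation.
# Hagen-Poiseuille: Q = (pi * dP * d**4) / (128 * mu * L) per fiber.
# All dimensions below are illustrative, not from any real module.

import math

def fiber_flow(d_m, L_m, dP_Pa=1e4, mu_Pa_s=1e-3):
    """Volumetric flow (m^3/s) through one fiber at fixed pressure drop."""
    return math.pi * dP_Pa * d_m**4 / (128 * mu_Pa_s * L_m)

nominal = fiber_flow(d_m=200e-6, L_m=0.3)

# A +5% diameter defect vs. a +5% length defect:
wide  = fiber_flow(d_m=1.05 * 200e-6, L_m=0.3)
long_ = fiber_flow(d_m=200e-6,        L_m=1.05 * 0.3)

print(f"+5% diameter -> {100*(wide/nominal - 1):+.1f}% flow")   # about +21.6%
print(f"+5% length   -> {100*(long_/nominal - 1):+.1f}% flow")  # about -4.8%
```

The slightly wider fibers carry disproportionately more of the flow while offering less surface per volume, which is exactly the defect amplification described above.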
Indeed, according to Mark Etzel, these hollow fiber systems were abandoned as impractical in favor of membrane chromatography based on conventional membrane formats that have been derivatized to add binding functionality.  As this technology has developed, its application and scale-up have begun to look very much like packed bed chromatography.  Here are some particulars:
1. Development and scale-up are based on membrane volume.  However, breakthrough curves are measured in tens, or even seventies, of equivalent volumes (see Etzel, 2007), instead of the twos or threes found in packed beds.
2. Binding capacities are lower in membrane chromatography.  In a recent publication by Sartorius, the ligand density of Sartobind Q is listed as 50 mM, while that of Sepharose Q-HP is 140 mM.  In theory, the membrane format has a higher relative dynamic binding capacity, but this has yet to be demonstrated (see above).
3. The void volume in membranes is surprisingly high, at 70%, compared to roughly 30% in packed beds.  This is one reason for the low relative binding capacity.
4. Disposable is all the rage, but there’s no evidence that, on a volume basis, derivatized membranes are cheaper than chromatography resins.  In fact, the economic comparisons published by Gottschalk must assume that the packed bed is loaded 100 times less efficiently than the membrane just to make the numbers work.  The cost per volume per binding event drops dramatically over the first 10 reuses of a chromatography resin (a rough amortization sketch follows this list).
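As an illustration of point 4, here is a sketch of how reuse amortizes resin cost; all prices and capacities are invented placeholder numbers, not vendor data:

```python
# Sketch: cost per binding event for a reusable resin vs. a single-use membrane.
# Prices are hypothetical round numbers for illustration only.

resin_cost_per_L    = 10_000.0   # packed resin, $/L (placeholder)
membrane_cost_per_L = 30_000.0   # derivatized membrane, $/L (placeholder)

for n_uses in (1, 2, 5, 10, 50, 100):
    resin_per_use = resin_cost_per_L / n_uses
    print(f"{n_uses:3d} uses: resin ${resin_per_use:8,.0f}/L-use "
          f"vs membrane ${membrane_cost_per_L:8,.0f}/L-use")

# The resin's cost per use drops steeply over the first ~10 cycles, which is
# why single-use economics must assume poor column utilization to compete.
```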
It turns out that membrane chromatography has a niche, and that is flow-through operations in which some trace contaminant, such as residual endotoxin or DNA, is removed from a product.  This, too, can be done efficiently with column chromatography operated in a high-capacity (for the contaminant) mode.  But there is a mental block among chromatographers, who want to operate adsorption steps in chromatographic, resolution-preserving modes; this block has not yet affected membrane practitioners.  A small, high-capacity column operated at a flow rate equivalent to a membrane’s (in bed or membrane volumes per unit time) will work as well, and in my opinion more cheaply if regenerated.
These factors should be considered when choosing between membrane and packed bed chromatography.

Wednesday, September 28, 2011

Is Your Chromatography in Control, or in Transition?

By Dr. Scott Rudge
While chromatography continues to be an essential tool in pharmaceutical manufacturing, it remains frustratingly opaque and resistant to feedback control of any kind.  Once you load your valuable molecule, and its insufferable companion impurities, onto the column, there is little you can do to affect the purification outcome that awaits you some minutes to hours later.
Many practitioners of preparative and manufacturing scale chromatography perform “Height Equivalent to a Theoretical Plate” (HETP) testing before putting a column into service, and periodically throughout the column’s lifetime.  Others also test for peak shape, using a measurement of peak skewness or asymmetry.  However, these measurements can’t be made continuously or even frequently, and definitely cannot be made with the column in normal operation.  Moreover, the standard methods for performing these tests leave a lot of information “on the table,” so to speak, by making measurements at half peak height, for example.
To address this shortcoming, many have started to use transition analysis to get more frequent snapshots of column suitability during column operation.  This option has been made possible by advances in computer technology and data acquisition.
Transition analysis is based on fairly old technology called moment theory.  It was originally developed to describe differences in population distributions, and was applied to chromatography after the groundbreaking work of Martin and Synge (Biochem. J., 35, 1358 (1941)).  Eugene Kucera (J. Chromatog., 19, 237 (1965)) derived the zeroth through fifth moments from a linear model of chromatography that included pore diffusion in resins, which is fine reading for the mathematically enlightened.  Larson et al. (Biotech. Prog., 19, 485 (2003)) applied the theory to in-process chromatography data.  These authors examined over 300 production-scale transitions from columns ranging from 44 to 140 cm in diameter, and found the methods of transition/moment analysis more informative than the HETP and asymmetry measurements traditionally applied to process chromatography.
What is transition analysis, and how is it applied?  Any time there is a step change in conditions at the column inlet, a transition in that condition will occur at the column outlet some time later.  For example, when the column is taken out of storage and equilibrated, there is commonly a change in conductivity and pH.  Ultimately, a wave of changing conductivity, or pH, or likely both, exits the column.  The shape of this wave gives important information on the health of the column, as described below.  Any and all transitions will do.  When the column is loaded, there is likely a transition in conductivity, UV, refractive index and/or pH.  When the column is washed or eluted, similar transitions occur.  As with HETPs, the purest transitions are those without thermodynamic implications, that is, those in which no chemicals are binding to or exchanging with the resin.  However, the measurements associated with a particular transition should be compared “inter-cycle,” to the same transition in subsequent chromatography cycles, not “intra-cycle,” to transitions of a different nature within the same chromatography cycle.
Since transition analysis uses all the information in a measured wave, it can be very sensitive to effects observed anywhere along the wave, not just at, for example, half height.  Consider, for example, the two contrived transitions described below:
In Case 1, a transition in conductivity is shown that is perfectly normally distributed.  In Case 2, an anomaly has been added to the baseline, representing, for example, a defect in the chromatography packing.  Transition analysis consists of finding the zeroth, first and second moments of the conductivity wave as it exits the column.  Writing C(t) for the outlet signal normalized between its initial and final values, and f(t) = dC/dt for its derivative, these moments are defined as:

$$m_0 = \frac{\int_0^\infty t\, f(t)\, dt}{\int_0^\infty f(t)\, dt}, \qquad m_1 = \frac{\int_0^\infty (t - m_0)^2\, f(t)\, dt}{\int_0^\infty f(t)\, dt}, \qquad m_2 = \frac{\int_0^\infty (t - m_0)^3\, f(t)\, dt}{\int_0^\infty f(t)\, dt}$$
These calculations are very easy to make numerically, with appropriate filtering of noise in the data and appropriate time steps between measurements.  The zeroth moment describes the center of the transition relative to the inlet step change, whether or not the peak is normally distributed; the zeroth moments for Case 1 and Case 2 are nearly identical, to several decimal places.  The first moment describes the variance of the transition, and the second moment the asymmetry of the peak; these are markedly different between the two cases, due to the anomaly in the Case 2 transition.  Values for the zeroth, first and second moments are in the following table:


                 Case 1     Case 2
Zeroth moment      50.0       50.0
First moment       1002      979.6
Second moment    20,300     19,623
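A minimal numerical version of these moment calculations is sketched below in Python; the conductivity trace is synthetic, invented for illustration, and real data would be filtered for noise first, as noted above:

```python
# Sketch: zeroth/first/second transition moments (as defined above) from a
# step-transition signal. The trace below is synthetic; real process data
# should be smoothed before differentiating.

import numpy as np

t = np.linspace(0.0, 100.0, 2001)              # time, arbitrary units
C = 0.5 * (1 + np.tanh((t - 50.0) / 8.0))      # contrived normalized transition

dt = t[1] - t[0]
f = np.gradient(C, dt)                         # f(t) = dC/dt, the transition "peak"
area = np.sum(f) * dt

m0 = np.sum(t * f) * dt / area                 # center of the transition
m1 = np.sum((t - m0) ** 2 * f) * dt / area     # variance (spread)
m2 = np.sum((t - m0) ** 3 * f) * dt / area     # asymmetry (skew)

print(f"zeroth moment (center):    {m0:8.1f}")
print(f"first moment (variance):   {m1:8.1f}")
print(f"second moment (asymmetry): {m2:8.1f}")
```

An anomaly added to the trace, as in Case 2, shows up immediately in the variance and asymmetry while leaving the center nearly untouched.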


It would be sufficient to track the moments for transitions from cycle to cycle.  However, McCoy and Goto (Chem. Eng. Sci., 49, 2351 (1994)) suggested a transformation of the moments into a “non-Gaussian” HETP, one that uses the asymmetry information in the second moment rather than assuming a perfectly Gaussian transition.
Using this transformation, the variance and the non-Gaussian HETP can be calculated for Case 1 and Case 2 and trended in exactly the same way as the raw moments.
Using this method, a comparative measure of column performance can be calculated several times per chromatography cycle without making any chemical additions, breaking the column fluid circuit, or adding steps.  The use of transition analysis is still just gaining a foothold in the industry.  Are you ahead of the curve, or behind?

Tuesday, May 31, 2011

The Art of Bioreactor/Fermenter Scale-Up (or Scale-Down)

by Dr. Deb Quick

Effective bioreactor or fermenter scale-up/down is essential for successful bioprocessing. During development, small scale systems are employed to quickly evaluate and optimize the process, but larger scale systems are necessary for producing commercial quantities at a reasonable cost. But how does one effectively transfer the process between scales so that the process performs the same?



In an ideal world, the physiological microenvironment within the cells or microorganisms would be conserved across scales, but with no direct measure of that microenvironment, the scientist identifies relevant macroproperties to measure and control to ensure comparability. Many macroproperties and operating parameters define the process at each scale, and while the goal is to keep as many of those parameters as possible constant between scales, it simply isn’t possible to keep them all the same.

When using the same operating parameters at small and large scale is impractical, several correlations are commonly used: mass transfer (kLa, the volumetric mass transfer coefficient, 1/hr, or OTR, the oxygen transfer rate, mmol/hr); volumetric power consumption (P/V, agitation power per unit volume); agitator tip speed; and mixing time.

Matching the kLa at different scales is generally considered the most important factor in scaling cell culture and microbial processes. The second most common approach is to match the power consumption. For both of these correlations, there are often multiple combinations of operating parameters that give the same kLa or the same power consumption at a different scale, and herein lies the art of bioreactor and fermenter scale-up/down: selecting the combination that best matches process performance. There is no magic combination that works best for all cell types and products.
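As one hedged illustration of what matching kLa can look like, the sketch below uses a Van't Riet-type power-law correlation, kLa = k·(P/V)^a·(vs)^b; the constants k, a, and b vary with media, vessel, and sparger, and the values used here are placeholders that would have to be fit to your own kLa measurements:

```python
# Sketch: back out the large-scale P/V needed to match a small-scale kLa,
# using a Van't Riet-type correlation kLa = k * (P/V)**a * (vs)**b.
# k, a, b below are hypothetical; fit them to your own vessel data.

k, a, b = 0.02, 0.4, 0.5      # placeholder fitted constants

def kla(p_per_v, vs):
    """kLa (units set by k) from power/volume and superficial gas velocity."""
    return k * p_per_v**a * vs**b

# Small-scale operating point (placeholders):
kla_target = kla(p_per_v=100.0, vs=0.002)

# At large scale the superficial velocity often differs; solve for P/V:
vs_large = 0.004
pv_large = (kla_target / (k * vs_large**b)) ** (1.0 / a)

print(f"target kLa      : {kla_target:.4f}")
print(f"large-scale P/V : {pv_large:.1f} (same units as small scale)")
print(f"check           : {kla(pv_large, vs_large):.4f}")
```

The point of the sketch is the trade-off it exposes: more than one (P/V, vs) pair reproduces the target kLa, and choosing among them is exactly the "art" described above.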

To establish comparability at different scales, you’ll make your life significantly easier if you start with the same vessel design at the different scales, but this luxury is rarely reality. More often, the development lab has significantly different equipment than the manufacturing facility. But even with different reactor designs, comparable performance can be obtained at different scales through appropriate experimentation.
  • First, you’ll need to understand your equipment at all scales: measure the kLa and P/V at each scale over a wide range of air flows, agitation rates, working volumes, and backpressures. It’s best to perform the testing in your process media, if possible. If you can find the time, it’s useful to evaluate different mixing schemes at small scale, including different impeller styles and positions, baffles, and sparger styles and positions (particularly valuable if you already know how these features differ between the small and large scale systems available to you).
  • Second, you’ll need to understand how your product responds to the different operating parameters. Those dreaded statistically designed experiments (DoE) are particularly useful for understanding the effects and interactions of the many parameters that can be changed. Performing DoE experiments at small scale with your product to evaluate the effects of aeration, agitation, and volume will not only help you with scale-up, but will also provide useful information for setting acceptable ranges for the operating parameters at large scale. As with the kLa studies, it’s useful to study different mixing schemes at small scale if time allows. One set of experiments that is highly useful but rarely performed is evaluating process performance at the same kLa (or P/V) obtained with different combinations of operating parameters (see the sketch after this list).
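As a minimal illustration of the DoE idea, the sketch below generates a two-level full-factorial design with center points for aeration, agitation, and working volume; the factor ranges are invented, and a real study would likely use a fractional or response-surface design from a statistics package:

```python
# Sketch: two-level full factorial design (plus center points) for three
# bioreactor factors. Ranges are placeholders for illustration only.

from itertools import product

factors = {
    "air_flow_vvm":  (0.5, 1.5),    # (low, high), hypothetical
    "agitation_rpm": (200, 400),
    "volume_L":      (3.0, 5.0),
}

runs = [dict(zip(factors, combo))
        for combo in product(*factors.values())]      # 2**3 = 8 corner runs

center = {name: (lo + hi) / 2 for name, (lo, hi) in factors.items()}
runs += [center] * 3                                  # replicated center points

for i, run in enumerate(runs, 1):
    print(f"run {i:2d}: {run}")
```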
Understanding your equipment and how your product responds to various operating conditions is the key to effective process scale-up and scale-down. Despite the historical and ongoing need for scaling bioprocesses up and down, there is no strategy that works in all situations. The art of successful scale-up lies in thoughtful experimental design and thorough data analysis in order to obtain the information that allows equivalent performance at all scales.

Friday, May 6, 2011

TFF Under Pressure

By Dr. Scott Rudge

Are there scale-up issues for cross flow filtration?  In general, this step is overlooked as a scale-up concern, and, given the primarily clean feed streams encountered in simple buffer exchange, this is usually warranted.  However, forewarned is forearmed where scale-up is concerned.

Primarily, there is just one scale-up issue with cross flow filtration, and that is the path length on the retentate side of the filter.  The flow on the retentate side is meant to continuously clean the filter surface and prevent fouling, or at least limit it to a thin boundary layer.  The shear rate created by the fluid at the filter surface increases as the square of the linear velocity of the fluid.  The pressure drop through the filter module, from inlet to outlet, depends linearly on the length of the module, and also on the square of the linear velocity.  In many cases, a manufacturing-scale module is about a meter in length, while a lab-scale module is likely closer to 10 cm.  Therefore, at constant linear velocity, the pressure drop from inlet to outlet on the retentate side will be 10 times higher on scale-up from lab to manufacturing.  The increased pressure will drive higher flux toward the membrane surface, increasing the thickness of the boundary layer and resulting in more surface polarization (fouling or gel formation, potentially); yet decreasing the flow rate to relieve that pressure will dramatically decrease the shear rate.
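The arithmetic of that trade-off is easy to sketch; the toy numbers below simply follow the scalings stated above (pressure drop proportional to length and to velocity squared, shear rising with velocity squared), under those assumptions rather than for any particular module:

```python
# Sketch: retentate pressure drop and shear on scale-up, using the scalings
# stated above (dP ~ L * v**2, shear ~ v**2). Numbers are illustrative.

L_lab, L_mfg = 0.1, 1.0       # module path lengths, m (10 cm lab, 1 m plant)

# Option 1: keep linear velocity constant -> dP grows with length.
dp_ratio = L_mfg / L_lab
print(f"constant velocity: dP is {dp_ratio:.0f}x the lab value")

# Option 2: hold dP constant by slowing the feed: L * v**2 = const.
v_ratio = (L_lab / L_mfg) ** 0.5
print(f"constant dP: velocity falls to {v_ratio:.2f}x, "
      f"shear to {v_ratio**2:.2f}x the lab value")
```

Either the pressure drop rises tenfold or the wall shear falls to a tenth of its lab value; there is no free lunch, which is why the approaches below exist.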

One approach to this predicament is to keep the path length constant on scale-up.  This is analogous to maintaining constant bed height on chromatography scale-up, an approach I disfavor.  The result is a “horizontal” scale-up, in which more and more units of lab proportion are lined up side by side.  This approach works, but it is cumbersome, requires ever more manifolding for flow distribution, and brings other inconveniences.  It also assumes that the filter length the manufacturer provides is the best and only length for every application, which is absurd.  Nevertheless, it is commonly pursued, and recommended by the filter manufacturers, for its speed and certainty.

Another approach to this phenomenon is to increase the back pressure on the permeate.  This slows the permeate flow independent of changes on the retentate side of the filter.  However, if the back pressure on the permeate side is greater than the pressure at any point along the retentate side of the filter, permeate will flow back to the retentate side.  This is clearly inefficient: it means a particular fluid element will be filtered at least three times, crossing from retentate to permeate, back to retentate, and then eventually back to permeate on a subsequent pass.  It also means that the effective filtration area is decreased, as some portion of the filter is working in reverse and another portion is working to correct the back flow; the negative flow counts against filter area that is filtering in the positive direction.

Finally, employing a constant pressure gradient along the retentate side is worth trying.  Presuming the membrane geometry is essentially maintained on scale-up (including spacers in the flow channel), a constant pressure gradient along the retentate channel means the shear will be constant on scale-up.  The pressure drop from retentate to permeate will be highest at the retentate inlet, but if the shear is appropriate and the boundary layer controlled, this will only lead to higher flux, which may be preferred.  This can be tested at small scale by applying back pressure on the retentate and looking for a leveling off of the flux vs. pressure curve: as long as flux increases linearly with back pressure, you can get improved performance at higher pressure.  Upon scale-up, the pressure at the retentate inlet is then held constant.  It is certainly worth exploring longer path lengths on scale-up; performance may improve!
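A minimal sketch of that small-scale test is below: given flux measured at increasing retentate back pressure, it flags where the flux vs. pressure curve departs from the initial linear (pressure-controlled) region. The data are invented and the 10% departure threshold is an arbitrary choice:

```python
# Sketch: find where flux stops rising linearly with pressure (onset of the
# polarization-limited region). Data and the 10% threshold are illustrative.

pressures = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]        # bar, hypothetical
fluxes    = [10.0, 20.0, 29.5, 36.0, 39.0, 40.0]  # LMH, hypothetical

slope0 = (fluxes[1] - fluxes[0]) / (pressures[1] - pressures[0])  # initial slope

for p, j in zip(pressures, fluxes):
    predicted = slope0 * p            # extrapolation of the linear region
    if j < 0.9 * predicted:           # >10% below linear -> leveling off
        print(f"flux levels off near {p} bar; operate below this on scale-up")
        break
else:
    print("flux still linear over the tested range; higher pressure may help")
```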

In the end, either horizontal scale-up will be used, or some reduction in retentate flow rate will probably be required.  The result of the latter will be less shear at the membrane surface, but the payback will be in increased filtration efficiency.  Some back pressure should be applied to the retentate side at lab scale, as more pressure due to path length will almost surely need to be applied in manufacturing.  Maintaining the retentate-side pressure drop as module length increases, along with back pressure on the permeate side, usually results in successful scale-up of a lab cross flow filtration procedure.