By Dr. Scott Rudge
Of all the unit operations used in pharmaceutical manufacture, filtration is by far the most frequently used. Filters are used on the air and water that make their way into the production suite. They are used on the buffers and chemical solutions that feed the process. They are used to vent the tanks and reactors in which products are held and synthesized. Yet the sizing of these filters is largely an afterthought in process design.
Liquid filters that will remove an appreciable amount of solid must be sized with the aid of experimental data. Typically, a depth filter is used, or a filter that contains a filtration aid such as diatomaceous earth. A depth filter has no defined pores; rather, it is usually some kind of spun fiber, such as polyethylene, that serves as a mat for capturing particulates. You probably did a depth filtration experiment in high school with glass wool. Or you’ve used a depth filter in your home aquarium, either with gravel (an under-gravel filter) or with an external filter pump (where the fibrous cartridge you install is a depth filter, such as the "blue bonded filter pads" shown below).
A depth filter uses its fiber mesh to trap particles, and then uses the growing bed of captured particles to trap still more. It is actually the nature of the particles that controls most of the filtration properties of the process.
Because solids are deposited onto the filter, the resistance of the filter to flow increases as the volume filtered increases. Knowing the exact size of filter required for your application can therefore be complicated. The complication is overcome by defining a specific solids (cake) resistance that is normalized to the volume filtered and to the solids load in the slurry. Once this is done, these depth filters can be sized by measuring the volume filtered at constant pressure in a laboratory setting. The linearized equation for filtration volume is:
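In a common form, assuming constant pressure and an incompressible cake, and writing μ for the filtrate viscosity, ΔP for the pressure drop, c_s for the solids load per volume of filtrate and R_m for the filter medium resistance (symbols introduced here for illustration), it can be expressed as

$$ \frac{t}{V/A} \;=\; \frac{\mu\,\alpha\,c_s}{2\,\Delta P}\left(\frac{V}{A}\right) \;+\; \frac{\mu\,R_m}{\Delta P} $$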
By measuring the volume filtered with time at constant pressure, the two filtration resistances can be found as the slope and intercept of a plot of t/(V/A) vs. (V/A). The area of a depth filter is the cross section of the flow path. On scale up, the depth of a depth filter is held constant, and this cross section is increased. An example of the laboratory data that should be taken, and the resulting plots, is shown below:
As expected, the filter starts to clog as more filtrate is filtered. The linearized plot gives a positive y-axis intercept and a positive slope, which can be used to calculate the resistance of the filter and the resistance of the solids cake on the filter.
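As a rough sketch of how that calculation might be set up, the slope and intercept of the t/(V/A) versus V/A plot can be fit by linear regression and converted into the two resistances. The data, viscosity, pressure drop and solids load below are illustrative assumptions, not measurements:

```python
import numpy as np

# Hypothetical constant-pressure lab data: elapsed time and cumulative
# filtrate volume per unit filter area, V/A.
t = np.array([60, 120, 240, 480, 960, 1920], dtype=float)        # s
v_over_a = np.array([0.010, 0.018, 0.031, 0.052, 0.085, 0.135])  # m^3/m^2

# Linearized form: t/(V/A) = slope*(V/A) + intercept
y = t / v_over_a
slope, intercept = np.polyfit(v_over_a, y, 1)

# Assumed operating conditions (illustrative values only)
mu = 1.0e-3    # filtrate viscosity, Pa*s
dP = 1.0e5     # pressure drop, Pa
c_s = 5.0      # solids load, kg solids per m^3 filtrate

# Back out the two resistances from the slope and intercept
alpha = 2.0 * dP * slope / (mu * c_s)  # specific cake resistance, m/kg
R_m = dP * intercept / mu              # filter medium resistance, 1/m

print(f"specific cake resistance alpha = {alpha:.3e} m/kg")
print(f"medium resistance R_m = {R_m:.3e} 1/m")
```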
The resistance of the filter should be constant and independent of any changes in the feed stream. However, the specific cake resistance, α, will vary with the solids load. It is important to know the solids load in the representative sample(s) tested, and the variability of the solids load in manufacturing. The filter should then be sized for the highest load anticipated. This will result in under-utilization of the filter area for most of the batches manufactured, but it will reduce or eliminate the possibility that the filter will have to be changed mid-batch.
Of course, reducing variability in the feed stream will increase the efficiency of the filter utilization, and reduce waste in other ways, such as reducing variability in manufacturing time, reducing manufacturing investigations and defining labor costs.
Wednesday, May 12, 2010
Bending the Curve
By Dr. Scott Rudge
To understand the best ways to develop preparative and industrial scale adsorption separations in biotechnology, it’s critical to understand the thermodynamics of solute binding. In this blog, I’ll review some basics of the Langmuir binding isotherm. This isotherm is a fairly simplistic view of adsorption and desorption; however, it applies fairly well to typical protein separations, such as ion exchange and affinity chromatography.
A chemical solution that is brought into contact with a resin that has binding sites for that chemical will partition between the solution phase and the resin phase. The partitioning will be driven by some form of affinity or equilibrium, which can be considered fairly constant at constant solution phase conditions. By “solution phase conditions”, I mean temperature, pH, conductivity, salt and other modifier concentrations. Changing these conditions changes the equilibrium partitioning. If we represent the molecule in solution by “c” and the same molecule adsorbed to the resin by “q”, then the simple mathematical relationship is:
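In its simplest (linear) form, and consistent with the discussion that follows, it can be written as

$$ q = K_{eq}\,c $$

where K_eq is the equilibrium (partition) constant.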
If the capacity of the resin for the chemical is unlimited, then this is the end of the story: the equilibrium is “linear” and the behavior of the adsorption is easy to understand, as the dispersion is completely mass transfer controlled. An example of this is size exclusion chromatography, where the resin has no affinity for the chemical; it simply excludes solutes larger than the pore or polymer mesh length. For resins where there are discrete “sites” to which the chemical might bind, or a finite “surface” of some kind with which the chemical has some interaction, the equilibrium is described by:
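A standard way to write this site-binding equilibrium, using the symbols defined in the next paragraph, is

$$ K_{eq} = \frac{q}{c\,S_0} $$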
and the maximum capacity of the resin has to be accounted for with a “site” balance, such as shown below:
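A simple version of that balance is

$$ S_{tot} = S_0 + q $$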
Here Stot represents the total number of binding sites, and S0 represents the number of binding sites not occupied by the chemical of interest. The math becomes a little more complicated when you worry about what might be occupying that site, or if you want to know what happens when the molecule of interest occupies more than one site at a time. We’ll leave these important considerations for another day. Typically, the total sites can be measured. Resin vendors use terms such as “binding capacity” or “dynamic binding capacity” to advertise the capacity of their resins. The capacity is often dependent on the chemical of interest. The resulting relationship between c and q is no longer linear; it is represented by this equation:
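Combining the equilibrium expression with the site balance and solving for q gives the familiar Langmuir form:

$$ q = \frac{S_{tot}\,K_{eq}\,c}{1 + K_{eq}\,c} $$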
When c is small, the denominator of this equation becomes 1, and the equilibrium equation looks like the linear equilibrium equation. When c is large, the denominator becomes Keqc, and the resin concentration of the chemical is equal to the resin capacity, Stot. When c is in between small and large, the isotherm bends over in a convex shape. This is shown in the graph below.
There are three basic conditions in preparative and industrial chromatographic operations. In the first, Keq is very low, and there is little or no binding of the chemical to the resin. This is represented by the red squares in the graph above. This is the case with “flow through” fractions in chromatography, and would generally be the case when the chemical has the same charge as the resin. In the third, Keq is very high, and the chemical is bound quantitatively to the resin, even at low concentrations. This is represented by the green triangles in the graph above. This is the case with chemicals that are typically only released when the column is “stripped” or “regenerated”. In these cases, the solution phase conditions are changed during regeneration to turn Keq from a large number into a small number, for example by using a high salt concentration or an extreme pH. The second case is the most interesting, and is the condition for most “product” fractions, where a separation is being made. When the solution phase conditions are tuned so that the desired product is differentially adsorbing and desorbing, allowing other chemicals with slightly higher or lower affinities to elute before or after it, the equilibrium constant is almost never such that binding is either quantitative or non-existent. In these cases, the non-linearity of the isotherm has consequences for the shape of the elution peak. We will discuss these consequences in a future blog.
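A minimal numerical sketch of these three regimes, using the Langmuir form above with made-up values for Keq and Stot (arbitrary units, purely for illustration):

```python
import numpy as np

def langmuir_q(c, K_eq, S_tot):
    """Langmuir isotherm: resin-phase concentration q for a given
    solution-phase concentration c."""
    return S_tot * K_eq * c / (1.0 + K_eq * c)

c = np.linspace(0.0, 10.0, 6)   # solution concentrations, arbitrary units
S_tot = 100.0                   # assumed total binding capacity

# Three illustrative regimes (Keq values are assumptions, not measurements)
cases = [("flow-through (weak binding)",         0.01),
         ("product fraction (intermediate)",     1.0),
         ("strip/regeneration (strong binding)", 100.0)]

for label, K_eq in cases:
    q = langmuir_q(c, K_eq, S_tot)
    print(f"{label:38s} q = {np.round(q, 1)}")
```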
In a “Quality-by-Design” world, these non-linearities would be understood and accounted for in the design of the chromatography operation. An excellent example of the resulting non-linearity of the results was shown by Oliver Kaltenbrunner in 2008.
Relying on linear statistics to uncover this basic thermodynamic behavior is a fool’s errand. However, using basic lab techniques (a balance and a spectrophotometer) the isotherm for your product of interest can be determined directly, and the chromatographic behavior understood. This is the path to process understanding!
Friday, April 23, 2010
Is There Ever a Good Time for Filter Validation?
By Dr. Scott Rudge
What is the right time to perform bacterial retention testing on a sterile filter for an aseptic process for Drug Product? I usually recommend that this be done prior to manufacturing sterile product. After all, providing for the sterility of the dosage form for an injectable drug is first and foremost the purpose of drug product manufacturing.
But there are some uncomfortable truths concerning this recommendation:
1. Bacterial retention studies require large samples, on the order of liters
2. Formulations change between first-in-human and commercial manufacturing, requiring revalidation of bacterial retention
3. The chances of a formulation change causing bacteria to cross an otherwise integral membrane are primarily theoretical; the “risk” would appear to be low
On the other hand:
1. The most frequent sterile drug product inspection citation in 2008 by the FDA was “211.113(b) Inadequate validation of sterile manufacturing” (source: presentation by Tara Gooel of the FDA, available on the ISPE website to members)
2. The FDA identifies aseptic processing as the “top priority for risk based approach” due to the proximal risk to patients
3. The FDA continues to identify smaller and smaller organisms that might pass through a filter
Is the issue serious? I think so; the risk of infection to patients is one of the few direct consequences that pharmaceutical manufacturers can directly link between manufacturing practice and patient safety, which is one of the goals of Quality by Design. Is the safety threat from changes to filter properties and microbe size in the presence of slightly different formulations substantial? I don’t think so, especially not in proportion to the cost of demonstrating this specifically. But the data aren’t available to test this hypothesis, because the industry has no shared database demonstrating that a range of aqueous-based protein solutions has no effect on bacterial retention. There is really nothing proprietary about this data, and the only organizations that benefit from keeping it confidential are the testing labs. Sharing this data should benefit all of us. An organization like PDA or ISPE should have an interest in pooling this data and then making a case to the FDA and EMEA that the vast majority of protein formulations have been bracketed by testing that already exists, and that the revalidation of bacterial retention on filters following formulation changes is mostly superfluous.
In the meantime, if you don’t have enough product to perform bacterial retention studies, at least check the excipients, such as a placebo or diluent buffer. A filter failure is far more likely to be due to the excipients than to the active ingredient, which is typically present in much smaller amounts (by weight and molarity). By doing this, you are both protecting your patients in early clinical testing and reducing your risk with regulators.
Thursday, January 7, 2010
Quality by Design: Dissolution Time
By Dr. Scott Rudge
In a previous post, I discussed the prevalence of statistics used in Quality by Design. These statistical tools are certainly useful and can provide (within their limits of error) prediction of future effects of excursions from control ranges for operating parameters, specifically for Critical Quality Attributes (CQA’s). The limitations of this approach were discussed in the previous blog. In the next series of blogs on Quality by Design, I will discuss opportunities for increasing quality, consistency and compliance for biotechnology products by building quality from the ground up.
While active pharmaceutical ingredient (API) manufacture by biosynthesis is a complicated and difficult-to-control prospect, there are a number of fundamental operations that are eminently controllable. Media and buffers must be compounded, sometimes adjusted or supplemented, stored, and ultimately used in reactors and separators to produce and purify the API. These solutions are fairly easy to make with precision. Three factors come immediately to mind that can be known in a fashion that is scale independent and rigorous: 1) the dissolution rate, 2) the mixing power required, and 3) the chemical stability of the solution.
The dissolution rate is a matter of mass transfer from the saturated solution at the surface of the dissolving solid to the bulk solution. If particle size is fairly consistent, then the dissolution rate is represented by this equation:
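With c_sat as the saturation concentration at the solid surface (a symbol introduced here for illustration), a common way to write it is

$$ \frac{dc}{dt} = k\,\left(c_{sat} - c\right) $$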
where k is the mass transfer coefficient, provided the dissolving solid is fully suspended. It is easy to measure this mass transfer rate in the laboratory with an appropriate measure of solution concentration. For example, for the dissolution of sodium chloride, conductivity can be used. We conducted such experiments in our lab across a range of volumes and salt concentrations, and found a scale-independent mass transfer coefficient of approximately 0.4 s⁻¹. An example of the results is shown in the accompanying figure.
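A sketch of how such a coefficient might be estimated, assuming (as a simplification) that the bulk conductivity approaches its final value exponentially at rate k; the readings below are hypothetical, not our lab data:

```python
import numpy as np

# Hypothetical conductivity readings during a salt dissolution,
# sampled every 2 seconds; kappa_final is the fully dissolved value.
t = np.array([0, 2, 4, 6, 8, 10], dtype=float)           # s
kappa = np.array([0.0, 18.0, 26.0, 29.5, 31.0, 31.6])    # mS/cm
kappa_final = 32.0                                        # mS/cm

# If kappa(t) = kappa_final * (1 - exp(-k*t)), then
# ln(1 - kappa/kappa_final) is linear in t with slope -k.
mask = kappa < kappa_final      # avoid log(0) once fully dissolved
y = np.log(1.0 - kappa[mask] / kappa_final)
k = -np.polyfit(t[mask], y, 1)[0]

print(f"estimated mass transfer coefficient k = {k:.2f} 1/s")
```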
With the mass transfer coefficient in hand, the mixing time can be precisely specified, and an appropriately short additional engineering safety factor added. If the times are known for dissolution, and mixing is scaled appropriately (as will be shown in future blogs) then buffers and other solutions can be made with high precision and little wasted labor or material. In addition, the properties of the solutions should be constant within a narrow range, and the reproducibility of more complicated unit operations such as reactors and separators, much improved.
A design based on engineering standards such as this produces predictable results. Predictable results are the basis of process validation. As the boiling point of water drives the design of the WFI still, we should let engineering design equations drive Quality by Design for process unit operations.
Leah Choi contributed to this work
Tuesday, November 10, 2009
The Good Buffer
By Scott Rudge
“A Good Buffer” has a number of connotations in biochemistry and biochemical engineering. A “good buffer” would be one that has good buffering capacity at the desired pH. The best buffering capacity is at the pK of the buffer of course, although it seems buffer salts are rarely used at their pK.
Second, a good buffer would be one matched to the application. Or maybe that’s first. For example, the buffering ion in an ion exchange chromatography step should have the same charge as the resin (so as not to bind and take up resin capacity): phosphate ion (negative) is a good choice for cation exchange resins (also negatively charged) such as S and CM resins.
Another meaning of a “Good” buffer is a buffer described by Dr. Norman Good and colleagues in 1966 (N. E. Good, G. D. Winget, W. Winter, T. N. Connolly, S. Izawa and R. M. M. Singh (1966). "Hydrogen Ion Buffers for Biological Research". Biochemistry 5 (2): 467–477.). These twelve buffers have pK’s spanning the range 6.15 to 8.35, and are a mixture of organic acids, organic bases and zwitterions (having both an acidic and basic site). All twelve of Good’s buffers have pK’s that are fairly strongly temperature dependent, meaning that, in addition to the temperature correction required for the activity of hydrogen ion, there is an actual shift in pH that is temperature dependent. So, while a buffer can be matched to the desired pH approximately every 0.2 pH units across pH 7 ± 1, the buffers are expensive and not entirely suited to manufacturing applications.
In our view, a good buffer is one that is well understood and is designed for its intended purpose. To be designed for its intended purpose, it should be well matched to provide adequate buffering capacity at the desired pH and desired temperature. As shown in the figure, the buffering capacity of a buffer with a pK of 8 is nearly exhausted below pH 7 and above pH 9.
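For reference, the Henderson–Hasselbalch relation,

$$ \mathrm{pH} = \mathrm{p}K + \log_{10}\frac{[\mathrm{base}]}{[\mathrm{acid}]} $$

implies that one full pH unit away from the pK the buffer pair is already split roughly 91:9, which is why the capacity falls off so quickly.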
It’s easy to overshoot the desired pH at these “extremes”, but just such a mismatch between buffering ion and desired pH is often specified. Furthermore, buffers are frequently made by titrating to the desired pH from the pK of the base or the acid. This leads to batch-to-batch variation in the amount of titrant used, because of overshooting and retracing. In addition, the temperature dependence of the pK is not taken into account when specifying the temperature of the buffer. Tris has a pK of 8.06 at 20°C, so a Tris buffer used at pH 7.0 is already not a good idea at 20°C. The pK of Tris shifts by about -0.03 pH units for every 1°C increase in temperature. So if the temperature specified for the pH 7.0 buffer is 5°C, the pK will have shifted to 8.51. With only about 3% of its buffering capacity available at pH 7.0 and 5°C, Tris is not well matched at all.
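A small sketch of the arithmetic behind that Tris example, using the -0.03 pH/°C slope and the 8.06 reference pK quoted above (the helper functions are just for illustration):

```python
def tris_pK(temp_c):
    """Approximate Tris pK: shifts -0.03 pH units per +1 C from 8.06 at 20 C."""
    return 8.06 - 0.03 * (temp_c - 20.0)

def base_fraction(pH, pK):
    """Fraction of the buffer in its basic (deprotonated) form,
    from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (pK - pH))

for temp in (20.0, 5.0):
    pK = tris_pK(temp)
    frac = base_fraction(7.0, pK)
    print(f"T = {temp:4.1f} C: pK = {pK:.2f}, base fraction at pH 7.0 = {frac:.1%}")
```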
A good buffer will have a known mass transfer rate in water, so that its mixing time can be predicted. Precise amounts of buffering acid or base and co-salt are added to give the exact pH required at the exact temperature specified. This actually reduces our reliance on measurements like pH and conductivity, which can be inexact. Good buffers can be made with much more precision than ± 0.2 pH units and ± 10% of nominal conductivity. When you start to make buffers this way, you will rely more on your balance and on your understanding of appropriate storage conditions for your raw materials than on adjustments in the field with titrants, time-consuming mixing, and guessing whether variation in conductivity is going to upset the process.
Tuesday, June 23, 2009
Why is Quality by Design so heavy with statistics?
Why is the literature on Quality by Design so laden with statistics and experimental design space jargon? After all, the definition of the term “design” doesn't seem to include the analysis of messy data leading to rough correlations with results that are valid only over a limited range. So what gives?
The idea behind QbD was to use mathematical, predictive models to predict process outcomes. This concept can be applied directly to simple unit operations, such as drying, distilling, heating and cooling. However, unlike in the petrochemical business, the thermodynamic properties of most active pharmaceutical ingredients are not known and are difficult to measure. The unit operations used to manufacture common biotechnology products, such as cell culture, chromatography and fermentation, have been modeled, but the models are very sensitive to unknown or unmeasurable adjustable parameters. The batch nature of these operations also makes their control difficult, as classical control theory relies on measuring an output to adjust an input and correct the output back toward the design specification.
Since there does not appear to be a clear path to using models, an approach has been chosen that emphasizes getting as much phenomenological information from as few experiments as possible. This is the Design of Experiments approach, where input conditions or operating parameters are systematically varied over a range and the process outputs measured, with statistics used to deconvolute the results. The combined ranges tested become the “design space”, and the process performance outputs with the variations closest to the process failure limit become the critical performance parameters. The results are useful, but only within the design space, and only with the certainty that the statistics report. Also, since the results are phenomenological, the effect of scale is often unknown.
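For concreteness, a minimal sketch of this kind of factorial-plus-regression analysis; the coded factor levels and responses below are made up purely for illustration:

```python
import numpy as np

# Two coded operating parameters (x1, x2) varied over a 2^2 factorial
# plus a center point, with one measured response y (e.g. step yield, %).
X = np.array([[-1, -1],
              [ 1, -1],
              [-1,  1],
              [ 1,  1],
              [ 0,  0]], dtype=float)
y = np.array([92.1, 95.4, 90.3, 97.8, 94.0])

# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 by least squares
design = np.column_stack([np.ones(len(y)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
print("b0, b1, b2, b12 =", np.round(coeffs, 2))

# The fitted correlation is only trustworthy inside the tested ranges
# (the "design space") and says nothing by itself about scale effects.
```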
The statistical approach is acceptable, and for the immediate future it's probably the best that we can expect. But the focus on this approach seems to drown out the more pressing need for good process models and physical properties data. These are the elements that allowed the petrochemical and commodity chemicals industries to scale up processes with assurance that quality specifications would be met. There are countless models available for bioprocessing's more complicated unit operations, but they have parameters that we don't know and can't calculate from first principles. There is no question that we need to find ways to collect this data, and a commitment to publish or share it. There are also simpler unit operations that we can model, scale up and scale down with complete assurance. These include operations such as mixing and storing solutions, filtration, diafiltration, centrifugation and some reactions. We shouldn't let the more complicated operations that still require statistical DOE approaches prevent us from applying the true principles of QbD to our simpler unit operations.