Monday, July 25, 2011
By Ray Nims, Ph.D.
There was a time, not long ago, when it might take months to years to identify a viral contaminant isolated from a biological production process or from an animal or patient tissue sample. The identification process took this long because it involved what I have referred to as the “shotgun approach”, or it involved luck.
Let’s start with luck. That is probably the wrong term; what I really mean is that there have been instances where an informed guess led to a fairly rapid (i.e., weeks to months) identification of a contaminant. For instance, our group at BioReliance was able to rapidly identify contaminations with REO virus (REO type 2, actually) and Cache Valley virus because we had observed these viruses in culture previously and because each had unique properties (a distinctive cytopathic effect in the case of REO, and a distinctive growth pattern in the case of Cache Valley virus). In those cases, identification took only as long as was needed to submit samples for confirmatory PCR testing for the specific agents and obtain the results.
The first time we ran into Cache Valley virus, however, it was a different story. This was, it turns out, the first time that this particular virus had been detected in a biopharmaceutical bulk harvest sample. In this case, we participated in the “shotgun approach” that was applied to the identification of the isolate. The “shotgun approach” consisted of utilizing any detection technique available at the lab, namely, in vitro screening, bovine screening, application of any immunofluorescent stains available, and transmission electron microscopy (TEM). The TEM was helpful, as it indicated an 80-100 nm virus with 7-9 nm spikes. A bunyavirus-specific stain was positive, and eventually (after months of work) sequencing and BLAST alignment were used to confirm the identity of the virus as Cache Valley virus.
The “shotgun approach” was subsequently applied to a virus isolated from harbor seal tissues, with no identity established as a result. After approximately a year of floundering using the old methods, the virus was eventually found to be a new picornavirus (Seal Picornavirus 1). How was this accomplished? During the time between the identification of the Cache Valley virus and the seal virus, a new technology called deep sequencing became available. Eric Delwart’s group used the technique to rapidly identify the virus to the species level. As this was the first time this particular picornavirus had ever been detected, deep sequencing is likely the only method that would have been able to make the identification.
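As a concrete illustration of the confirmation step, here is a minimal sketch (in Python, using Biopython) of how a contig assembled from deep sequencing reads might be checked against GenBank by BLAST alignment, the same kind of comparison used to confirm the Cache Valley virus identity. The query sequence is a placeholder, not data from the actual investigations.

# A minimal sketch: BLAST an assembled contig against the NCBI nt database
# and report the top hits. Requires Biopython and network access to NCBI.
from Bio.Blast import NCBIWWW, NCBIXML

contig = "ATGACCGTTAGC" * 40  # placeholder; substitute a contig from the sequencing run

result_handle = NCBIWWW.qblast("blastn", "nt", contig)  # submit query to NCBI
blast_record = NCBIXML.read(result_handle)              # parse the XML result

# A high-identity match to a known virus suggests the isolate's identity;
# only weak, distant hits hint that the isolate may be a novel species.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  {identity:.1f}% identity, E={hsp.expect:.2e}")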
Deep (massively parallel) sequencing is one of two new technologies that will make virus isolate identification routine and rapid in the future. It has been adopted by BioReliance for detection of viral contaminants in cells and viral seed stocks and for evaluating vaccine cell substrates. The other is referred to as the T5000 universal biosensor. Houman Dehghani’s group at Amgen has been characterizing this methodology as a rapid identification platform for adventitious agent contaminations. Each technology has its advantages. Deep sequencing is more labor intensive, but has the ability to indicate (as described above) a new species. The universal biosensor can serve both as a detection method and as an identification method. Both can identify multiple contaminants within a sample.
Since identification of an adventitious viral contaminant of a biopharmaceutical manufacturing process is required for establishing root cause, for evaluating the effectiveness of facility cleaning and viral purification procedures, and for assuring the safety of both workers and patients, it is critical that the identification of a viral isolate be completed accurately and rapidly. Happily, we now have the tools at hand to accomplish this.
Saturday, July 9, 2011
Can You Decide on CAPA?
By Dr. Scott Rudge
When things go wrong in pharmaceutical manufacturing, consequences can be dire. Small changes in the quality of the pharmaceutical product can cause major consequences for patients in ways too numerous to list in this blog. It can be surmised that the failure mode was not anticipated, so the manufacturer is wise to determine the cause of the failure, correct it, and prevent it from happening again. This exercise is known as CAPA: Corrective and Preventive Actions.
There are numerous tools for determining the root causes of failures. Many of these are fully embedded in software that helps a manufacturer’s quality organization track the resolution of CAPAs, and several are described in ICH Q9. Fault Tree Analysis and Root Cause Analysis are two common and popular examples of these tools. They help to trace the cause of a failure back to the basic characteristics of the failure. Once these basic characteristics have been identified, they can be remediated (eliminated). When the root cause is found, it is eliminated, and the process goes on. The CAPA is closed and everyone waits for the next crisis or problem that needs to be fixed. Right?
But root cause elimination, although a worthy corrective objective, does not appear to be sufficient as a preventive action. One should consider how the root cause is to be eliminated. And one should consider whether the manner and means of root cause elimination might lead to other failure modes.
The first consideration, how to eliminate a root cause, is a decision. Decision analysis can be used to improve the process of choosing between multiple remediation options. When there are multiple good options for fixing a problem, or perhaps a series of “less bad” options, it’s a good idea to do some analysis.
First, the decision should be defined. The required objectives that the remediation option must meet should be listed; these should be specific and measurable. Each option for addressing the problem should also be listed, along with the “features” of each option, so there is little uncertainty about the option and its implementation. Each option must fully meet all the required objectives; any option that only partially addresses them must be considered incomplete. Any option that is incomplete should either 1) not be considered further, or 2) have elements added that allow it to completely satisfy the required objectives. Any option that meets each of the required objectives is a “qualified option”.
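As a minimal sketch of this screening step, consider the following Python fragment, which marks each candidate option as qualified or incomplete against a list of required objectives. All option and objective names are illustrative assumptions, not drawn from any particular CAPA.

# A minimal sketch: screen remediation options against required objectives.
# An option is "qualified" only if it fully meets every required objective.
# All names below are illustrative.

required = {"eliminates_root_cause", "maintains_validated_state", "within_budget"}

options = {
    "retrain_operators":   {"eliminates_root_cause", "within_budget"},
    "add_inline_filter":   {"eliminates_root_cause", "maintains_validated_state", "within_budget"},
    "close_open_transfer": {"eliminates_root_cause", "maintains_validated_state", "within_budget"},
}

for name, met in options.items():
    missing = required - met
    status = "qualified" if not missing else "incomplete (missing: " + ", ".join(sorted(missing)) + ")"
    print(f"{name}: {status}")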
There are often additional objectives, less absolute, that should be considered in choosing a remediation option. These might be considered differentiating objectives. For example, cost may be both a required objective (the remediation must be implemented within the current budget) and a differentiating objective (the least costly option, within the budget, is preferred). These differentiating objectives should also be listed, and weighted according to their relative importance. The objectives of all stakeholders must be considered. The weightings may be controversial among stakeholders, but with practice, an organization can learn to weight objectives appropriately.
Qualified options can be scored according to the weighted differentiating objectives. Each qualified option is ranked according to how well it fulfills these objectives, and each score is multiplied by the corresponding weight. The weighted scores are summed, and a “most preferred” option is identified as a result.
But wait, there’s more. A final check of failure modes should be made. The last thing we want is to implement a remediation option that introduces more vulnerabilities into our systems. A simple FMEA can be used for this. If your organization routinely uses FMEAs and has standard criteria for scoring them, then you probably only need to perform one FMEA, and ensure no resulting risk priority numbers exceed the established threshold value for risk. If your organization doesn’t have this experience, then you may have to perform a comparative FMEA on two or more qualified options, to gauge relative risk.
If your decision making is already perfect, then there is no need for this analysis; such excellent seat-of-the-pants decision making would be useful for the rest of us to see! For the rest of us, when new issues arise, the decision analysis can be revisited, and organizational assumptions revised, to improve decision making in the future. And it might be useful to have documented the options you considered, and how you selected among them.
Friday, June 10, 2011
Small, non-enveloped viruses: number 1 threat to biologics manufacture
by Dr. Ray Nims
Perhaps surprisingly, few types of viruses have infected biologics manufacture since the 1980s, when the first recombinant proteins began to be produced in mammalian cells. While the list of contaminating viruses has included some relatively large enveloped and non-enveloped viruses (Reovirus type 2, epizootic hemorrhagic disease virus, Cache Valley virus, human adenovirus), by far the most problematic contaminants have been the small non-enveloped viruses. Why? For the most part, the contaminations involving the larger viruses have been attributed to the use of non-gamma-irradiated bovine serum or to operators conducting open vessel manipulations. Remediating the manufacturing processes to include gamma irradiation of the serum (or to eliminate the use of serum altogether), and eliminating open vessel operations wherever possible, should mitigate the risk of encountering these viruses.
Now we come to the small non-enveloped viruses, the real problem. Foremost among these has been murine minute virus (MMV). This 20-25 nm non-enveloped parvovirus has infected biologics manufacturing processes using Chinese hamster cell substrates on at least four occasions, affecting at least three different manufacturers (Genentech, Amgen, and Merrimack). In each case, the source of the contamination has been unclear, making remediation of the processes difficult. Due to the ability of these viruses to survive on surfaces and their resistance to inactivation by detergents and solvents, eliminating the agent from a contaminated facility may require drastic measures such as fumigation with vaporous hydrogen peroxide.
A second problem virus is the 27-40 nm non-enveloped calicivirus, vesivirus 2117. This is the virus found to have infected the Genzyme Allston manufacturing facility in 2009. The same virus had already appeared once before, at a manufacturing facility in Germany. Both of the infected processes involved Chinese hamster production cells, and both involved the use of bovine serum at some point in the manufacturing process. Whether or not the animal-derived material was the actual source of the infection was not proven in either case. Unfortunately, if the source was the bovine serum, gamma irradiation probably would not mitigate the risk, as gamma irradiation is less effective at inactivating the smaller non-enveloped viruses. This is another virus that may be able to survive on facility surfaces. As in the case of MMV, ridding a manufacturing facility of vesivirus may require fumigation of the entire facility with vaporous hydrogen peroxide, as was done at Genzyme.
Another problem virus is the 17-20 nm porcine circovirus that was found to contaminate a rotavirus vaccine in 2010. This virus was thought to have originated in contaminated porcine trypsin used in the manufacturing process. Wouldn’t this contaminant have shown up in the raw material testing done for the trypsin, or in the extensive cell bank testing required for vaccine production substrates? The answer is no. The circovirus would not have been detected using the 9 CFR-based detection methods used for trypsin at that time (and at present). And the required testing for cell banks used to produce vaccines would not have detected this particular virus. To make matters worse, gamma irradiation of the trypsin would not be expected to inactivate this virus. How can we mitigate the risk of this virus going forward? As described in a previous posting, manufacturers may need to apply specific nucleic acid tests for the circovirus as part of the raw material release process for trypsin.
These and other small non-enveloped viruses represent the greatest risk for biologics manufacturing because they are more difficult to inactivate in raw materials, more difficult to eradicate from a facility once it is contaminated, and because the source of the infection is not always clear. There must be analogous small non-enveloped bacteriophages lurking out there that represent, for the same reasons, special threats to the fermentation industry.
Tuesday, May 31, 2011
The Art of Bioreactor/Fermenter Scale-Up (or Scale-Down)
by Dr. Deb Quick
Effective bioreactor or fermenter scale-up/down is essential for successful bioprocessing. During development, small scale systems are employed to quickly evaluate and optimize the process, but larger scale systems are necessary for producing commercial quantities at a reasonable cost. But how does one effectively transfer the process between scales so that the process performs the same?
In an ideal world, the physiological microenvironment within the cells or microorganisms would be conserved at the different scales, but with no direct measure of that microenvironment, the scientist identifies relevant macroproperties to measure and control to ensure comparability. Many macroproperties and operating parameters define the process at each scale, and while the goal is to keep as many of those parameters as possible constant between scales, it simply isn’t possible to keep them all the same.
When using the same operating parameters at small and large scale is impractical, there are several correlations that are commonly used: mass transfer coefficient (kLa [the volumetric mass transfer coefficient, 1/hr] or OTR [oxygen transfer rate, mmol/hr]); volumetric power consumption (P/V, agitation power per unit volume); agitator tip speed; and mixing time.
Matching the kLa at different scales is generally considered the most important factor in scaling cell culture and microbial processes. The second most common approach is to match the power consumption. For both of these correlations, there are often multiple combinations of operating parameters that provide the same kLa or the same power consumption at the different scales. And herein lies the art of bioreactor and fermenter scale-up/down: selecting the best combination of parameters to match process performance at different scales. There is no magic combination that works best for all cell types and products.
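To illustrate how one of these correlations is applied, here is a minimal sketch that matches volumetric power consumption (P/V) across scales, assuming the standard turbulent, ungassed power draw P = Np * rho * N^3 * D^5 for a stirred tank. All impeller dimensions, power numbers, and volumes are illustrative assumptions, not values from any real process.

# A minimal sketch: find the large-scale agitation speed that matches the
# small-scale P/V, assuming turbulent ungassed power draw P = Np*rho*N^3*D^5.
# All dimensions and values are illustrative.
import math

rho = 1000.0  # broth density, kg/m^3

def power_per_volume(Np, N, D, V):
    """P/V in W/m^3: power number Np, speed N (1/s), impeller diameter D (m), volume V (m^3)."""
    return Np * rho * N**3 * D**5 / V

# Small scale: 2 L working volume, 5 cm impeller at 400 rpm.
pv_small = power_per_volume(Np=5.0, N=400 / 60, D=0.05, V=0.002)

# Large scale: 2000 L working volume, 0.5 m impeller. Solve for N at equal P/V.
Np_L, D_L, V_L = 5.0, 0.5, 2.0
N_large = (pv_small * V_L / (Np_L * rho * D_L**5)) ** (1.0 / 3.0)

print(f"Small-scale P/V: {pv_small:.0f} W/m^3")
print(f"Large-scale speed for equal P/V: {N_large * 60:.0f} rpm")
print(f"Tip speed, small: {math.pi * 0.05 * 400 / 60:.2f} m/s; large: {math.pi * D_L * N_large:.2f} m/s")

Note that matching P/V in this example roughly doubles the impeller tip speed, a concrete reminder that the common scale-up criteria cannot all be satisfied at once.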
To establish comparability at different scales, you’ll make your life significantly easier if you start with the same vessel design at the different scales, but this luxury is rarely reality. More often, the development lab has significantly different equipment than the manufacturing facility. But even with different reactor designs, comparable performance can be obtained at different scales through appropriate experimentation.
- First, you’ll need to understand your equipment at all scales: measure the kLa and P/V of the different scales over a wide range of air flows, agitation rates, working volumes, and backpressures. It’s best to perform the testing in your process media, if possible. If you can find the time, it’s useful to evaluate different mixing schemes at small scale - different impeller styles and positions, baffles, and sparger styles and positions (particularly valuable if you already know the differences in these features between small and large scale systems available to you).
- Second, you’ll need to understand how your product responds to the different operating parameters. Those dreaded statistically designed experiments (DoE) are particularly useful for understanding the effects and interactions of the many parameters that can be changed (a minimal sketch of building such a design appears below). Performing DoE experiments at small scale with your product to evaluate the effects of aeration, agitation, and volume will not only help you with scale-up, but will also provide useful information for setting acceptable ranges for the operating parameters at large scale. As with the kLa studies, it’s useful to study different mixing schemes at small scale if time allows. One set of experiments that is highly useful but rarely performed is the evaluation of process performance at the same kLa (or P/V) obtained using different operating parameters.
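For readers who have not set such studies up before, here is a minimal sketch of enumerating a two-level full-factorial design with replicated center points over aeration, agitation, and working volume. The factor levels are illustrative assumptions; in practice they would come from the equipment characterization described in the first bullet.

# A minimal sketch: two-level full-factorial DoE with center points for three
# bioreactor operating parameters. Factor levels below are illustrative.
from itertools import product

factors = {
    "airflow_vvm":   (0.05, 0.2),  # aeration, vessel volumes per minute
    "agitation_rpm": (200, 400),   # agitation speed
    "volume_L":      (1.5, 2.5),   # working volume
}

# 2^3 = 8 corner runs, plus replicated center points to estimate pure error.
corners = list(product(*factors.values()))
center = tuple((lo + hi) / 2 for lo, hi in factors.values())
runs = corners + [center] * 3

for i, run in enumerate(runs, start=1):
    print(f"Run {i:2d}: " + ", ".join(f"{k}={v}" for k, v in zip(factors, run)))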
Friday, May 6, 2011
TFF Under Pressure
By Dr. Scott Rudge
Are there scale-up issues for cross flow filtration? In general, this step is overlooked as a scale-up concern, and usually, given the relatively clean feed streams encountered in simple buffer exchange, this is warranted. However, forewarned is forearmed where scale-up is concerned.
Primarily, there is just one scale-up issue with cross flow filtration, and that is the path length on the retentate side of the filter. The flow on the retentate side of the filter is meant to continuously clean the filter surface, and prevent fouling, or at least limit it to a thin boundary layer. The shear rate created by the fluid at the filter surface increases as the square of the linear velocity of the fluid. The pressure drop through the filter module, from inlet to outlet, depends linearly on the length of the module, and also on the square of the linear velocity. In many cases, a manufacturing-scale module is about a meter in length, while on the lab scale a module is likely to be closer to 10 cm. Therefore, at constant linear velocity, the pressure drop from inlet to outlet on the retentate side will be 10 times higher on scale-up from lab to manufacturing. Decreasing the flow rate would dramatically decrease the shear rate; holding it constant, however, means the increased pressure will drive higher flux toward the membrane surface, increasing the thickness of the boundary layer and resulting in more surface polarization (fouling or gel formation, potentially).
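To put rough numbers on this, here is a minimal sketch that applies the scaling stated above (pressure drop proportional to module length times the square of linear velocity) to compare a 10 cm lab module with a 1 m manufacturing module. The lab-scale baseline pressure drop of 0.2 bar is an illustrative assumption.

# A minimal sketch: retentate-side pressure drop on scale-up, assuming
# dP ~ (module length) * (linear velocity)^2 as described in the text.

def retentate_dp(dp_ref, L_ref, v_ref, L, v):
    """Scale a reference pressure drop dp_ref to a new length L and velocity v."""
    return dp_ref * (L / L_ref) * (v / v_ref) ** 2

dp_lab = 0.2              # bar, measured on the lab module (assumed)
L_lab, L_mfg = 0.10, 1.0  # module path lengths, m

# Same linear velocity on scale-up: the pressure drop rises 10-fold.
print(retentate_dp(dp_lab, L_lab, 1.0, L_mfg, 1.0))            # -> 2.0 bar

# Cutting velocity to 1/sqrt(10) of lab scale recovers the lab pressure drop,
# but (with shear rate scaling as velocity squared) at one tenth the shear.
print(retentate_dp(dp_lab, L_lab, 1.0, L_mfg, 1.0 / 10**0.5))  # -> 0.2 bar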
One approach to this predicament is to keep the path length constant on scale-up. This is analogous to maintaining constant bed height in chromatography scale-up, an approach I disfavor. The result is a “horizontal” scale-up, where more and more units of lab proportion are lined up side by side. This approach works, but it is cumbersome, requiring ever more manifolding for flow distribution, among other inconveniences. It also assumes that the filter length the manufacturer provides is the best and only length for every application, which is absurd. Nevertheless, this approach is commonly pursued, and recommended by the filter manufacturers for its speed and certainty.
Another approach to this phenomenon is to increase the back pressure on the permeate. This slows down the permeate independent of changes on the retentate side of the filter. However, if the back pressure on the permeate side is greater than the pressure at some point along the retentate side of the filter, permeate will flow back to the retentate side. This is clearly inefficient: it means a particular fluid element will be filtered at least three times, crossing from retentate to permeate, then back to retentate, and then eventually back to permeate on a subsequent pass. It also means that the effective filtration area is decreased, as some portion of the filter is working in reverse, and another portion is working to correct the back flow. The negative flow counts against filter area that is filtering in the positive direction.
Finally, employing a constant pressure gradient along the retentate side is worth trying. Presuming the membrane geometry is essentially maintained on scale-up (including spacers in the flow channel), maintaining a constant pressure gradient along the retentate channel means the shear will be constant on scale-up. The pressure drop from retentate to permeate will be higher at the retentate inlet, but if the shear is appropriate and the boundary layer controlled, this will only lead to higher flux, which may be preferred. This can be tested at small scale by applying back pressure on the retentate and looking for a leveling off of the flux vs. pressure curve; as long as flux is increasing linearly with back pressure, you can get improved performance at higher pressure. Then, upon scale-up, the pressure at the retentate inlet is held constant. It is certainly worth exploring longer path lengths on scale-up; performance may improve!
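As an illustration of the small-scale test just described, here is a minimal sketch that looks for the flux vs. pressure curve leveling off by tracking the marginal flux gain per unit of applied pressure. The data points are illustrative assumptions, not measurements.

# A minimal sketch: detect where permeate flux stops rising linearly with
# applied pressure. Flux in LMH (L/m^2/hr); all data points are illustrative.

pressure = [0.5, 1.0, 1.5, 2.0, 2.5]       # applied pressure, bar (assumed)
flux     = [12.0, 23.5, 33.0, 38.0, 39.5]  # measured flux, LMH (assumed)

# A sharp drop in the marginal gain signals the curve leveling off; the
# operating pressure chosen for scale-up should sit in the still-linear region.
for i in range(1, len(pressure)):
    slope = (flux[i] - flux[i - 1]) / (pressure[i] - pressure[i - 1])
    print(f"{pressure[i-1]:.1f}-{pressure[i]:.1f} bar: marginal gain {slope:.1f} LMH/bar")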
In the end, either horizontal scale up will be used, or some reduction in retentate flow rate will probably be required. The result of the latter will be less shear at the membrane surface, but the payback will be in increased filtration efficiency. Some back pressure should be applied to the retentate side on the lab scale, as more pressure due to path length will almost surely need to be applied in manufacturing. Maintaining pressure drop on the retentate side with increased module length, along with back pressure on the permeate side usually results in successful scale up of a lab cross flow filtration procedure.