Thursday, November 10, 2011

The inactivation literature for circoviruses

by Dr. Ray Nims

The Circoviridae family of viruses represents an extreme case among the small, non-enveloped viruses. We have posted previously that this group constitutes a high risk for manufacturers of biologicals due to the difficulty of eradicating the viruses from raw materials or from a contaminated facility. At 17-25 nm particle size, the circoviruses are among the smallest of the animal viruses. These viruses represent more of an economic threat than a threat to human health. The porcine circoviruses (PCV-1 and PCV-2) and the chicken anemia virus are well-known examples of the circoviruses, and these have been studied extensively due to their impact on the swine and poultry industries. So why care about the circoviruses in the biologics industry?
We first became concerned about porcine circoviruses in the context of xenotransplantation of porcine tissues (e.g., islet cells) into humans. The worry was that a porcine circovirus might be transmitted to a patient via the porcine donor tissue. More recently, PCV genomic sequences were discovered in rotavirus vaccines manufactured by GlaxoSmithKline and Merck. The presence of the genomic material was attributed to the use of porcine trypsin during the culture of the cell substrates in which the vaccines were manufactured (see previous post).
As a result of the heightened awareness of the contamination threat posed by the porcine circoviruses, infectivity assays for these viruses are now being offered at contract testing organizations (e.g., BioReliance, MICROBIOTEST, and WuXi Apptec) for raw material testing, cleaning efficacy testing, and for evaluating the clearance of the circoviruses by purification processes. As might be expected based on our experience with other small, non-enveloped viruses, inactivation approaches that are effective against many larger non-enveloped or enveloped viruses have little efficacy against the circoviruses.
So how much do we actually know about the inactivation of circoviruses? The literature on the subject is fairly extensive for PCV-2 and for chicken anemia virus, if one is willing to spend time digging deeply. I have done the digging, and have assembled the literature into the following categories of inactivants: heat, irradiation, and disinfectants/chemicals. Keep in mind that the inactivation potential of a physical or chemical agent depends greatly upon the matrix in which the virus is present, as well as the temperature and contact time with the inactivant. The matrices evaluated varied among the studies described below, and the reader is directed to the individual papers for this critical detail. In addition, some variability in results may be expected because of the relative difficulty of assaying infectivity of the circoviruses.
A number of studies on the thermal stability of the circoviruses have been published. In general, it appears that 15 or more minutes of exposure to wet heat (heating of viruses spiked into solutions) at temperatures ≥80 °C should provide extensive inactivation (3-5 log10) of circoviruses. Dry heat (heating of freeze-dried virus and, by implication, viruses on the surfaces of coupon materials) appears to be much less effective, resulting in <1.5 log10 inactivation even at temperatures as high as 120 °C. The results of at least one investigator suggest that a temperature of 95 °C will be sufficient for high temperature short time (HTST) treatment for mitigating the risk of introducing a circovirus in a process solution.
A number of studies on the inactivation of the circoviruses by disinfectants and other chemicals have appeared in the literature, reflecting the relatively great economic threat of the circoviruses to the swine, poultry, and exotic bird industries. While many of the chemicals/disinfectants had little efficacy for inactivation of the circoviruses (as might be expected for a non-enveloped virus), certain treatments appear to have been highly effective. These included the following: a) glutaraldehyde at 1% or 2% and 10 or more minutes contact time; b) sodium hypochlorite at 6% and 10 or more minutes contact time; c) sodium hydroxide at 0.1 N or greater; d) Roccal® D Plus at 0.5% and 10 minutes contact time; e) Virkon® S at 1% and 10 minutes contact time; f) β-propiolactone at 0.4% and 24 hours contact time; and g) formaldehyde at 10% and 2 hours contact time. Variable results were obtained for the iodine-containing disinfectants. These ranged from <1.0 log10 inactivation for iodine (10%; 30 minutes contact time; 20 °C) to ≥5.5 log10 for Cleanap® (1%; 2 hours contact time; 37 °C). A third study, involving the iodine-containing disinfectant FAM®30 (Biocide 30) used at 1% or 2%, 30 minutes contact time, and 10 °C, demonstrated ≥3.5 log10 inactivation.
The literature on inactivation of circoviruses by irradiation is scant, to say the least. Plavsic and Bolin showed that gamma irradiation of PCV-2-spiked fetal bovine serum at the radiation doses normally employed for serum treatment (30 and 45 kGy) resulted in ≤1.0 log10 reduction in virus titer. Gamma irradiation (at the doses normally used) appears to be relatively ineffective for inactivating the very smallest of the viruses (parvoviruses and circoviruses) in serum, so this result is perhaps not surprising. One approach that offers hope for inactivation of circoviruses is ultraviolet radiation (specifically UV-C) treatment, as this technology appears to be effective against smaller non-enveloped viruses such as the parvoviruses and caliciviruses. I expect that studies to demonstrate efficacy of UV-C for inactivating circoviruses will be performed in the near future.
In summary, there is ample evidence in the literature that effective inactivation approaches exist for the circoviruses. Careful selection of inactivation technologies, based on the body of evidence accumulated by workers in the swine and poultry industries, should enable appropriate risk mitigation and facility cleaning strategies to be adopted in the biologics industry.
<This material was excerpted in part from Nims and Plavsic, Bioprocessing J, 2012; 11:4-10>

Friday, October 28, 2011

Porcine circoviruses, vaccines, and trypsin

It has now been more than a year since the announcements by GlaxoSmithKline (GSK) and Merck of the presence of porcine circovirus (PCV) genomic material in their rotavirus vaccines.
The presence of the PCV viral sequences was, in both cases, provisionally attributed to the use of porcine trypsin during the culture of the cell substrates used in the manufacture of the vaccines. It has been reported that the genomic sequences were associated with low levels of infectious PCV in the GSK vaccine.     
As mentioned in a previous posting, an expected outcome of these disclosures was heightened regulatory expectations, going forward, for PCV screening of porcine raw materials and of Master and Working cell banks which were exposed to porcine ingredients (e.g., trypsin) at some point in their development. In January of 2011, the European Pharmacopoeia (Ph. Eur.) chapter 5.2.3 Cell substrates for production of vaccines for human use was revised to include the following instruction: "Trypsin used for the preparation of cell cultures is examined by suitable methods and shown to be sterile and free from mycoplasmas and viruses, notably pestiviruses, circoviruses and parvoviruses." The addition of circoviruses to the list of viruses of concern (previously, mainly bovine viral diarrhea virus and porcine parvovirus) in Ph. Eur. 7.2 was not unexpected, based on the rotavirus vaccine experience.
A broader expectation going forward may also be that vaccine and biologics production cell banks be proactively screened for unexpected, perhaps previously undetectable, viruses using detection techniques such as the deep sequencing used initially to detect the PCV in the GSK rotavirus vaccine. A related technique, referred to as massively parallel sequencing (MP-Seq; see "Massively Parallel Sequencing (MP-Seq), a New Tool for Adventitious Agent Detection and Virus Discovery"), has been adopted by the contract testing organization BioReliance for detection of viral contaminants in cells and viral seed stocks and for evaluating vaccine cell substrates.
The more important sequela of the porcine circovirus disclosures may therefore be the proactive use of these new and powerful virus detection techniques for ensuring the viral safety of production cell banks, going forward.

Wednesday, October 12, 2011

Ridding serum of viruses with gamma irradiation: part 2

by Dr. Ray Nims

In a previous posting, we described the susceptibility of viruses from various families to inactivation in frozen serum treated with gamma irradiation (data from the literature). Gamma irradiation is a commonly employed risk mitigation strategy for biopharmaceutical manufacture, and indeed the European Agency for the Evaluation of Medicinal Products in its Note for guidance on the use of bovine serum states that some form of inactivation (such as gamma irradiation) is expected. The use of non-treated serum in the production of biologics for human use must be justified, due to the potential for introducing a viral contaminant through use of this animal-derived material.

But how effective is this particular risk mitigation strategy? To answer this, we expressed the susceptibility to inactivation by gamma radiation for a series of 16 viruses in terms of log10 reduction in titer per kilogray (kGy) of gamma radiation. With this value in hand, one may easily calculate the log10 reduction in titer of a given virus which might be expected following irradiation of frozen serum at any given kGy dose (serum is typically irradiated at a dose of 25-40 kGy).
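As a quick worked illustration of that arithmetic (a minimal sketch; the per-kGy slope below is a placeholder, not one of the published values):

    def expected_log10_reduction(slope_per_kgy: float, dose_kgy: float) -> float:
        """Expected log10 reduction in titer = (log10 reduction per kGy) x dose (kGy)."""
        return slope_per_kgy * dose_kgy

    # Placeholder slope of 0.15 log10/kGy at a typical 30 kGy serum dose:
    print(expected_log10_reduction(0.15, 30.0))  # 4.5 log10 expected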

The next step is to try to understand the results obtained with this relatively limited series of viruses, so that we may extrapolate the results to other viruses. If we take a look at the viral characteristics that might confer susceptibility to inactivation by gamma irradiation, will we find what we are looking for?

Viral characteristics that have been postulated to contribute to susceptibility or resistance to inactivation by gamma irradiation include: 1) radiation target size (genome size, particle size, genome shape, segmentation of the genome); 2) strandedness (in double-stranded genomes the genomic information is recorded in duplicate, so loss of information on one strand may not be as damaging as it would be in the case of a single-stranded genome); 3) presence or absence of a lipid envelope (non-enveloped viruses are resistant to a variety of chemical and physical inactivation approaches); and 4) genome type (RNA vs. DNA). The characteristics for our series of 16 viruses are displayed in the following table.

[Table: genome size, particle size, genome type, strandedness, and envelope status for the 16 viruses]

For evaluating the contribution of radiation target size, we are able to make use of quantitative values available for each virus for genome size (in nucleotides) and particle size (in nm). Plotting genome size vs. log10 reduction in titer per kGy yields the results shown below. The coefficient of determination obtained is just 0.32, suggesting that factors other than (or in addition to!) genome size in nucleotides must be important.

[Figure: genome size vs. log10 reduction in titer per kGy; linear fit, coefficient of determination = 0.32]

A somewhat better concordance is obtained between particle size and log10 reduction in titer per kGy, as shown below. The fit line in this case is non-linear, with a coefficient of determination of 0.60.

[Figure: particle size vs. log10 reduction in titer per kGy; non-linear fit, coefficient of determination = 0.60]

The contributions of genome type (RNA vs. DNA), genome strandedness, and lipid envelope (present or absent) to susceptibility to inactivation by gamma irradiation appear to be minimal within this limited series of viruses. As a result, we are left with the conclusion that the clearest, albeit incomplete, determinant of susceptibility to inactivation by gamma irradiation appears to be particle size. This probably explains the resistance to inactivation displayed by the extremely small viruses such as circoviruses, parvoviruses, and caliciviruses. It is less clear why the polyomaviruses (e.g., SV-40 at 40-50 nm) are so resistant to gamma irradiation while certain of the picornaviruses (25-30 nm) are less resistant to inactivation. More work in this area is needed to better understand all of the factors that contribute to susceptibility to gamma radiation inactivation in viruses and bacteriophages.

< This information was excerpted in part from Nims, et al. Biologicals (2011) >

Wednesday, September 28, 2011

Is Your Chromatography in Control, or in Transition?

By Dr. Scott Rudge
While chromatography continues to be an essential tool in pharmaceutical manufacturing, it remains frustratingly opaque and resistant to feedback control of any kind. Once you load your valuable molecule, and insufferable companion impurities, onto the column, there is little that you can do to affect the purification outcome that awaits you some minutes to hours later.
Many practitioners of preparative and manufacturing scale chromatography perform "Height Equivalent to a Theoretical Plate" (HETP) testing prior to putting a column into service, and periodically throughout the column's lifetime. Others also test for peak shape, using a measurement of peak skewness or asymmetry. However, these measurements can't be made continuously or even frequently, and definitely cannot be made with the column in normal operation. Moreover, the standard methods for performing these tests leave a lot of information "on the table," so to speak, by making measurements at half peak height, for example.
To address this shortcoming, many have started to use transition analysis to get more frequent snapshots of column suitability during column operation.  This option has been made possible by advances in computer technology and data acquisition.
Transition analysis is based on fairly old technology called moment theory. It was originally developed to describe differences in population distributions, and was applied to chromatography after the groundbreaking work of Martin and Synge (Biochem. J. 35, 1358 (1941)). Eugene Kucera (J. Chromatog. 19, 237 (1965)) derived the zeroth through fifth moments based on a linear model for chromatography that included pore diffusion in resins, which is fine reading for the mathematically enlightened. Larson et al. (Biotech. Prog. 19, 485 (2003)) applied the theory to in-process chromatography data. These authors examined over 300 production scale transitions resulting from columns ranging from 44 to 140 cm in diameter. They found that the methods of transition/moment analysis were more informative than the measurements of HETP and asymmetry traditionally applied to process chromatography.
What is transition analysis, and how is it applied?  Any time there is a step change in conditions at the column inlet, there will occur, some time later, a transition in that condition at the column outlet.  For example, when the column is taken out of storage and equilibrated, there is commonly a change in conductivity and pH.  Ultimately, a wave of changing conductivity, or pH, or likely both, exits the column.  The shape of this wave gives important information on the health of the column, as described below.  Any and all transitions will do.  When the column is loaded, there is likely a transition in conductivity, UV, refractive index, and/or pH.  When the column is washed or eluted, similar transitions occur.  As with HETPs, the purest transitions are those that don't also have thermodynamic implications, such as those in which chemicals are binding to or exchanging with the resin.  However, the measurements associated with a particular transition should be compared "inter-cycle" to the same transition in subsequent chromatography cycles, not "intra-cycle" to different transitions of different natures within the same chromatography cycle.
Since transition analysis uses all the information in a measured wave, it can be very sensitive to effects that are observed anywhere along the wave, not just at, for example, half height.  For example, consider the two contrived transitions shown below:

[Figure: Case 1, a normally distributed transition in conductivity; Case 2, the same transition with a baseline anomaly]
In Case 1, a transition in conductivity is shown that is perfectly normally distributed.  In Case 2, an anomaly has been added to the baseline, representing a defect in the chromatography packing, for example.  Transition analysis consists of finding the zeroth, first and second moments of the conductivity wave as it exits the column.  These moments are defined as:
[Equations: definitions of the zeroth, first, and second moments of the outlet transition]
These are very easy calculations to make numerically, with appropriate filtering of noise in the data, and appropriate time steps between measurements.  The zeroth moment describes the center of the transition relative to the inlet step change.  It does not matter whether or not the peak is normally distributed.  The zeroth moments are nearly identical for Case 1 and Case 2, to several decimal places.  The first moment describes the variance in the transition, while the second moment describes the asymmetry of the peak.  These are markedly different between the two cases, due to the anomaly in the Case 2 transition.  Values for the zeroth, first and second moments are in the following table:


                  Case 1      Case 2
Zeroth moment     50.0        50.0
First moment      1002        979.6
Second moment     20,300      19,623

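For readers who want to try this on their own data, here is a minimal numerical sketch, assuming the moment convention used in this post (zeroth = center, first = variance, second = asymmetry), evenly spaced samples, and pre-smoothed data; the step function at the end is invented purely to exercise the calculation:

    import numpy as np

    def transition_moments(t, y):
        """Zeroth, first, and second moments of an outlet transition.
        t: evenly spaced time points; y: measured signal (e.g., conductivity)."""
        w = np.gradient(y, t)                 # the transition "wave" = dy/dt
        dt = t[1] - t[0]
        w = w / (np.sum(w) * dt)              # normalize the wave to unit area
        m0 = np.sum(t * w) * dt               # zeroth moment: center of transition
        m1 = np.sum((t - m0) ** 2 * w) * dt   # first moment: variance
        m2 = np.sum((t - m0) ** 3 * w) * dt   # second moment: asymmetry
        return m0, m1, m2

    # Invented example: a smooth conductivity step centered near t = 50
    t = np.linspace(0.0, 100.0, 2001)
    y = 0.5 * (1.0 + np.tanh((t - 50.0) / 8.0))
    print(transition_moments(t, y))  # center near 50; asymmetry near 0 for a symmetric wave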

It would be sufficient to track the moments for transitions from cycle to cycle.  However, there is a transformation of the moments into a “non-Gaussian” HETP, suggested by McCoy and Goto (Chem. Eng. Sci., 49, 2351 (1994)):
[Equation: the non-Gaussian HETP expressed in terms of the transition moments, with definitions of its terms]
Using these relationships, the variance and non-Gaussian HETP are shown in the table below for Case 1 and Case 2:

[Table: variance and non-Gaussian HETP for Case 1 and Case 2]

Using this method, a comparative measure of column performance can be calculated several times per chromatography cycle without making any chemical additions, breaking the column fluid circuit, or adding steps.  The use of transition analysis is still just gaining a foothold in the industry. Are you ahead of the curve, or behind?

Thursday, September 22, 2011

A much improved Ph. Eur. Chapter 5.2.3


Vaccine manufacturers intending to market in the EU should be aware of a recent change in the European Pharmacopoeia (Ph. Eur.) chapter 5.2.3 Cell substrates for production of vaccines for human use. This chapter addresses the characterization of vaccine cell substrates. The section on Test Methods for Cell Cultures within the chapter includes an instruction to perform a co-cultivation study. The language previously was as follows: “Co-cultivation. Co-cultivate intact and disrupted cells separately with other cell systems including human cells and simian cells. Carry out examinations to detect possible morphological changes. Carry out tests on the cell culture fluids to detect haemagglutinating viruses. The cells comply with the test if no evidence of any extraneous agent is found.”

This section has been changed, as of Ph. Eur. version 7.2 effective in January of 2011, to the following: “Co-cultivation. For mammalian and avian cell lines, co-cultivate intact and/or disrupted cells separately with other cell systems including human cells and simian cells. For insect cell lines, extracts of disrupted cells are incubated with other cell systems, including human, simian, and at least 1 cell line that is different from that used in production, is permissive to insect viruses and allows detection of human arboviruses (for example BHK-21). Carry out examinations to detect possible morphological changes. Carry out tests on the cell culture fluids to detect haemagglutinating viruses, or on cells to detect haemadsorbing viruses. The test for haemagglutinating viruses does not apply for arboviruses to be detected in insect cells. The cells comply with the test if no evidence of any extraneous agent is found.”

So what is the big deal? Co-cultivation is a commonly employed technique for detecting infectious retrovirus in a cell bank. It is effective for this purpose because the chances for spread of infectious virus from test cell to indicator (host) cell are optimized by the cultivation of live cells of each kind in close proximity. The endpoint of the retrovirus assay, be it reverse transcriptase enzyme induction or rescue of an S+L- virus, is not interfered with by the presence of two cell types in one culture. The same is not always true for a co-cultivation of a test cell with an indicator (host) cell for detection of infectious virus when morphological changes (viral cytopathic effects) are one of the assay endpoints. The reason is that the diploid human cells (e.g., MRC-5 or WI-38) used as one of the indicator cells in such assays are rapidly displaced during co-cultivation with intact continuous cell lines used to produce vaccines, such as the simian cell Vero. The result of this is that within a short period of time in co-cultivation, the test culture is no longer predominated by the diploid cells but rather by the test cells, and observation of the culture for cytopathic effects becomes problematic. Changing the language of this section to read “…co-cultivate intact and/or disrupted cells separately with other cell systems…” allows the user to eliminate the inoculation of intact test cells onto a diploid indicator cell.

The other useful modification to the language of this section is the following addition: “For insect cell lines, extracts of disrupted cells are incubated with other cell systems, including human, simian, and at least 1 cell line that is different from that used in production, is permissive to insect viruses and allows detection of human arboviruses (for example BHK-21).” Testing of insect cells for extraneous virus is only marginally effective when it is conducted per the usual method of inoculating another insect cell. Why? The insect cells that are available are most commonly suspension cultures, making observation for cytopathic effect problematic. The extraneous viruses that are of most concern for an insect production cell are the arboviruses (viruses transmitted via insect vectors). It has been known for some time that the Syrian hamster cell line BHK-21 is an excellent host cell for detecting arboviruses. The new language in this section of Ph. Eur. chapter 5.2.3 now clears the way for the monolayer BHK-21 cell line to be used for the testing of insect cells for extraneous virus. In this regard the Ph. Eur. chapter is now more closely aligned with the World Health Organization’s 2009 Evaluation of cell substrates for the production of biologicals: revision of WHO recommendations. The latter has the following passage: "For instance, in the case of insect cell substrates, certain insect cell lines may be used for detection of insect viruses, and BHK cells may serve for the detection of arboviruses."
   
Taken together, the recent changes to Ph. Eur. Chapter 5.2.3 greatly improve the chapter and the viral safety testing of vaccine production cell banks specifically prescribed within it.

Monday, September 12, 2011

Ridding serum of viruses with gamma irradiation: part 1

by Dr. Ray Nims

Blood serum, while at times required as a medium component for cell growth in vitro, is an animal-derived material that can introduce contaminating viruses such as Cache Valley virus, REO virus, vesivirus, and epizootic hemorrhagic disease virus into a biological product. If animal serum must be used in upstream manufacturing processes, the risk of introducing a virus may be mitigated by gamma irradiation of the frozen serum prior to use. How effective is this treatment, and against which viruses?

To answer this question, I have surveyed the literature from the past two decades. A number of investigations have been conducted and the results are in the public domain. The most useful of these studies have examined the dose-response relationships for viral inactivation (rendering of the virus as non-infectious) by gamma irradiation.

In the table below, I have assembled the results obtained for 7 viruses, including four that might be expected to be found in bovine serum (bovine viral diarrhea virus [BVDV], infectious bovine rhinotracheitis virus [IBR], respiratory-enteric orphan virus [REO virus], and parainfluenza type 3 virus [PI3]). The other three (canine adenovirus [CAV], porcine parvovirus [PPV], and mouse minute virus [MMV]), while perhaps not expected to be found in bovine serum, have been studied as model viruses for the adenovirus and parvovirus families.

[Table: log10 reduction in titer per kGy for the 7 viruses]

The efficacy of gamma irradiation for viral inactivation is reported as log10 reduction in titer per kGy, rather than the more commonly employed D10 (the dose, in Mrad, required to reduce the titer by 1 log10), as I find the former value to be more useful. To estimate the effectiveness of a given dose of gamma radiation for inactivation of a virus, just multiply the dose in kGy by the log10 reduction in titer per kGy value from the table. The result is the number of logs of inactivation estimated to be achieved for that virus at that radiation dose.
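To make the unit handling concrete, here is a minimal sketch of the D10 conversion and the multiplication; the D10 value below is a placeholder, not one of the values from the table:

    def log10_reduction_per_kgy(d10_mrad: float) -> float:
        """Convert a D10 in Mrad to log10 reduction per kGy (1 Mrad = 10 kGy).
        D10 is the dose that reduces titer by 1 log10, so the slope is its inverse."""
        return 1.0 / (d10_mrad * 10.0)

    # Placeholder D10 of 0.5 Mrad (= 5 kGy):
    slope = log10_reduction_per_kgy(0.5)   # 0.2 log10 per kGy
    print(slope * 30.0)                    # ~6 log10 expected at a 30 kGy dose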

These data tell us that the mid- to large-sized viruses BVDV, IBR, PI3, REO, and CAV should be readily inactivated at the gamma radiation doses normally applied to frozen serum for risk mitigation (25-45 kGy). On the other hand, the two parvoviruses, PPV and MMV, are more difficult to inactivate, presumably due to their small size. Parvoviruses are often used to challenge viral removal and inactivation processes due to their size and lack of an envelope. Higher kGy dose levels may increase the effectiveness of inactivation for these viruses, although at such levels the performance of the animal serum being irradiated may be adversely impacted.

Gamma irradiation can effectively mitigate the risk of introducing other potential contaminants of bovine serum, including Cache Valley virus, blue tongue virus, and epizootic hemorrhagic disease virus. Like the parvoviruses, however, other relatively small non-enveloped viruses of the calicivirus, picornavirus, polyomavirus, and circovirus families may represent cases where gamma irradiation is less effective at the doses normally applied. Other means of mitigating the risk associated with these viruses may need to be considered.

< This information was excerpted in part from Nims, et al. Biologicals (2011) >


Information sources: Daley et al., FOCUS 20(3):86-88, 1998; Wyatt et al., BioPharm 6(4):34-40, 1993; Purtel et al., 2006; Hanson and Foster, Art to Science 16:1-7, 1997; HyClone Labs, Art to Science 12(2):1-6, 1993; Gauvin and Nims, 2010; Plavsic et al., BioPharm 14(4):32-36, 2001.

Friday, August 26, 2011

Our take on process-specific vs. generic host cell protein assays

By Drs. Ray Nims and Lori Nixon 

The residual host cell protein (HCP) assay is used to determine the concentration, in process streams, of protein originating from the production cell (Chinese hamster, NS0, E. coli, etc.) used in the manufacture of a biologic. Host cell protein is considered to be a process-related impurity/contaminant. The most typical approach to quantitation of residual host cell protein in a test sample is the enzyme-linked immunosorbent assay (ELISA) platform.

Generic vs. process-specific assays.  A question that often arises is whether a process-specific HCP assay is required, or whether a generic assay can be used. Although there is a spectrum of “process specificity” that can be described, in this discussion we consider a generic assay to be one that is developed from a broad set of antigens from the host strain or closely related strains, but not necessarily under specific process conditions mimicking the actual production/purification process of the protein of interest. That is, the reagents for a generic assay could be developed in advance of a defined production process. For example, it is often possible to purchase a “generic” HCP ELISA kit from a commercial provider, such as Cygnus Technologies, selecting a kit that matches the production cell line. Practically speaking, most firms start with generic HCP methods in early development for an obvious reason: you can’t have a process-specific assay until you have a defined process. Even after the process is defined, the lead time for developing custom (process-specific) HCP reagents and an assay using these can be as long as 1-2 years.
It should be appreciated from the outset that there is no perfect HCP assay. Production cells can express thousands of proteins, with expression patterns differing under different growth conditions. At each successive stage of the recovery and purification process, the population of HCPs is altered. At the final drug substance stage, there may be only a few HCPs present that have co-purified with the product. If you attempt to design an assay that only selects those few remaining HCPs, then some will argue that you might miss HCPs that might make it through under other circumstances (such as a process deviation).  If you attempt to design antibody reagents using the entire unpurified mixture of proteins from the production cell, the proteins of most interest may not elicit a good antibody response and you may have inadequate sensitivity to quantify HCP in your final drug substance. In fact, it has been reported that process-specific antibodies lead to HCP assays that are more sensitive for certain HCP species (and sometimes more broadly reactive) than the antibodies provided in a generic HCP assay. 
In recent years, there has been some regulatory pressure setting an expectation that a process-specific HCP assay must be developed for product registration. In our opinion, this is unjustified as a blanket prescription.  There are a multitude of approaches for creating antigen and antibody reagents, but the resulting assay should be judged primarily on its own merits.
Suitability of any HCP assay (whether generic or process-specific) must be evaluated on a case-by-case basis, and the assay validated for the specific product in question. One of the first characteristics to aim for is an assay that is able to readily quantitate HCP in the final drug substance. Without this sensitivity, it is difficult to get results that are meaningful to guide process optimization or ensure control of your product. If the available generic method(s) have limited sensitivity for the sample of interest (results are below the required range of quantitation), then it is a good idea to start planning the development of more process-specific reagents—which are likely to afford better sensitivity. Since the numerical values reported from these assays are semi-quantitative at best, sensitivity should be judged on the performance in actual samples rather than on the product specification. Of course there are other assay characteristics (non-reactivity to product, precision, dilutional linearity, spike recovery, etc.) that are required to meet validation acceptance criteria. The antigen coverage should be determined, typically by 2D electrophoresis or 2D HPLC-ELISA. It is unrealistic to expect 100% coverage of all of the proteins present in the production cell; on the other hand, it is relatively more important to ensure response against bands that persist in the purification process. All other things being equal, of course, greater coverage is more desirable.
[Figure: example of 2D electrophoresis of E. coli proteins, from Kendrick Labs]

Whether a generic or process-specific method is implemented, it is vital to ensure a consistent reagent supply that will last throughout the commercial life of the product. If using vendor-supplied antigen and antibody, be aware that by nature these are “single-source” reagents with associated supply risks. If creating a custom reagent set, make enough supplies to last for years to decades.
As mentioned above, most firms start with a generic assay and may move to a process-specific assay in later stages of development. A close evaluation of the generic assay during its characterization and validation may lead to the conclusion that the generic assay is suitable for your product.  In this case, be prepared to make a strong case in defense of your generic assay—based on empirical data with your product, rather than theoretical speculation.

Monday, July 25, 2011

Rapid Identification of Viral Contaminants, Finally

By Ray Nims, Ph.D.


There was a time, not long ago, when it might take months to years to identify a viral contaminant isolated from a biological production process or from an animal or patient tissue sample. The identification process took this long because it involved what I have referred to as the “shotgun approach”, or it involved luck.

Let’s start with luck. That is probably the wrong term. What I mean by this is that there have been instances where an informed guess has led to a fairly rapid (i.e., weeks to months) identification of a contaminant. For instance, our group at BioReliance was able to rapidly identify contamination with REO virus (REO type 2, actually) and Cache Valley virus because we had observed these viruses in culture previously and because these viruses had unique properties (a unique cytopathic effect in the case of REO and a unique growth pattern in the case of Cache Valley virus). The time required to identify these viruses consisted of the time required to submit and obtain results from confirmatory PCR testing for the specific agents.

The first time we ran into Cache Valley virus, however, it was a different story. This was, it turns out, the first time that this particular virus had been detected in a biopharmaceutical bulk harvest sample. In this case, we participated in the “shotgun approach” that was applied to the identification of the isolate. The “shotgun approach” consisted of utilizing any detection technique available at the lab, namely, in vitro screening, bovine screening, application of any immunofluorescent stains available, and transmission electron microscopy (TEM). The TEM was helpful, as it indicated an 80-100 nm virus with 7-9 nm spikes. A bunyavirus-specific stain was positive, and eventually (after months of work) sequencing and BLAST alignment were used to confirm the identity of the virus as Cache Valley virus.

The “shotgun approach” was subsequently applied to a virus isolated from harbor seal tissues, with no identity established as a result. After approximately a year of floundering using the old methods, the virus was eventually found to be a new picornavirus (Seal Picornavirus 1).  How was this accomplished? During the time between the identification of the Cache Valley virus and the seal virus, a new technology called deep sequencing became available. Eric Delwart’s group used the technique to rapidly identify the virus to the species level. As this was the first time this particular picornavirus had ever been detected, deep sequencing is likely the only method that would have been able to make the identification.

Deep (massively parallel) sequencing is one of a few new technologies that will make virus isolate identification routine and rapid in the future. It has been adopted for detection of viral contaminants in cells and viral seed stocks and for evaluating vaccine cell substrates by BioReliance. Another is referred to as the T5000 universal biosensor. Houman Dehghani’s group at Amgen has been characterizing this methodology as a rapid identification platform for adventitious agent contaminations. Each technology has its advantages. Deep sequencing is more labor intensive, but has the ability to indicate (as described above) a new species. The universal biosensor can serve both as a detection method and as an identification method. Both can identify multiple contaminants within a sample.

Since identification of an adventitious viral contaminant of a biopharmaceutical manufacturing process is required for establishment of root cause, for evaluating effectiveness of facility cleaning procedures and viral purification procedures, and for assuring safety of both workers and patients, it is critical that the identification of a viral isolate is completed accurately and rapidly. Happily, we now have the tools at hand to accomplish this.

Saturday, July 9, 2011

Can You Decide on CAPA?

By Dr. Scott Rudge


When things go wrong in pharmaceutical manufacturing, consequences can be dire.  Small changes in the quality of the pharmaceutical product can cause major consequences for patients in ways too numerous to list in this blog.  It can be surmised that the failure mode was not anticipated, so the manufacturer is wise to determine the cause of the failure, correct it, and prevent it from happening again.  This exercise is known as CAPA, Corrective and Preventive Actions.
There are numerous tools for determining the root causes of failures; many of these are embedded fully in software, consistent with ICH Q9, that helps a manufacturer’s quality organization to track the resolution of CAPAs.  Fault Tree Analysis and Root Cause Analysis are two common and popular examples of these tools.  These tools help to trace the cause of a failure back to the basic characteristics of the failure.  Once these basic characteristics have been identified, they can be remediated (eliminated).

When the root cause is found, it is eliminated, and the process goes on.  The CAPA is closed and everyone waits for the next crisis or problem that needs to be fixed.  Right?

But root cause elimination, although a worthy corrective objective, does not appear to be sufficient as a preventive action.  One should consider how the root cause is to be eliminated.  And one should consider whether the manner and means of root cause elimination might lead to other failure modes.

The first consideration, how to eliminate a root cause, is a decision.  Decision analysis can be used to improve the process of choosing between multiple remediation options.  When there are multiple good options for fixing a problem, or perhaps a series of “less bad” options, it’s a good idea to do some analysis. 

First, the decision should be defined.  The required objectives that the remediation option must meet should be listed.  These should be specific and measurable.  Each option for addressing the problem should also be listed, along with the “features” of each option, so there is little uncertainty about the option and its implementation.  Each option must fully meet all the required objectives; any option that only partially addresses them must be considered incomplete.  Any option that is incomplete should either 1) not be considered further, or 2) have elements added that allow the option to completely satisfy the required objectives.  Any option that meets each of the required objectives is a “qualified option”.

There are often additional objectives that are less absolute that should be considered in choosing a remediation option.  These might be considered as differentiating objectives.  For example, cost may be both a required objective (the remediation must be implemented within the current budget) and a differentiating objective (the least costly option, within the budget, is preferred).  These differentiating objectives should also be listed, and weighted according to their relative importance.  The objectives of all stakeholders must be considered.  The weightings may be controversial among stakeholders, but with practice, an organization can learn to appropriately weight objectives.
Qualified options can be scored according to the weighted differentiating objectives.  Each qualified option is ranked according to how it fulfills these objectives, and the score is multiplied by the weight.  The weighted scores are summed, and a “most preferred” option is identified as a result. 
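As a sketch of the bookkeeping involved (the objectives, weights, and scores below are invented for illustration):

    # Weighted scoring of qualified remediation options.
    weights = {"cost": 0.5, "implementation_time": 0.3, "operator_burden": 0.2}

    options = {
        "revise SOP and retrain": {"cost": 9, "implementation_time": 8, "operator_burden": 5},
        "automate the step":      {"cost": 4, "implementation_time": 3, "operator_burden": 9},
    }

    for name, scores in options.items():
        total = sum(weights[obj] * scores[obj] for obj in weights)
        print(f"{name}: weighted score = {total:.2f}")
    # The highest weighted sum identifies the "most preferred" option, which
    # should still be checked for new failure modes, as described below.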

But wait, there’s more.  A final check of failure modes should be made.  The last thing we want is to implement a remediation option that introduces more vulnerabilities into our systems.  A simple FMEA can be used for this.  If your organization routinely uses FMEAs and has standard criteria for scoring them, then you probably only need to perform one FMEA, and ensure no resulting risk priority numbers exceed the established threshold value for risk.  If your organization doesn’t have this experience, then you may have to perform a comparative FMEA on two or more qualified options, to gauge relative risk.
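A minimal sketch of such a screen, using the conventional RPN = severity x occurrence x detection scoring (the failure modes, scores, and threshold below are invented):

    THRESHOLD = 100  # example risk priority number threshold

    # (failure mode, severity, occurrence, detection), each scored 1-10
    failure_modes = [
        ("wrong buffer prepared", 7, 3, 4),
        ("sensor drift goes undetected", 8, 2, 7),
    ]

    for name, sev, occ, det in failure_modes:
        rpn = sev * occ * det
        flag = "exceeds threshold" if rpn > THRESHOLD else "acceptable"
        print(f"{name}: RPN = {rpn} ({flag})")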

If your decision making is already perfect, then there is no need for this analysis.  Such excellent seat-of-the-pants decision making would be useful for the rest of us to see!  For the rest of us, when new issues arise, the decision analysis can be revisited, and organizational assumptions revised, to improve the decision making in the future.  And it might be useful to have documented the options you considered, and how you selected among them.

Friday, June 10, 2011

Small, non-enveloped viruses: number 1 threat to biologics manufacture

by Dr. Ray Nims

Perhaps surprisingly, few types of viruses have infected biologics manufacture since the 1980s, when the first recombinant proteins began to be produced in mammalian cells. While the list of contaminating viruses has included some relatively large enveloped and non-enveloped viruses (Reovirus type 2, epizootic hemorrhagic disease virus, Cache Valley virus, human adenovirus), by far the most problematic contaminants have been the small non-enveloped viruses. Why? For the most part, the contaminations involving the larger viruses have been attributed to the use of non-gamma-irradiated bovine serum or to operators conducting open vessel manipulations. Remediating the manufacturing processes to include gamma irradiation of the serum (or eliminating the use of serum altogether), and eliminating open vessel operations wherever possible, should mitigate the risk of experiencing these viruses.

Now we come to the small non-enveloped viruses, the real problem. Foremost among these has been murine minute virus (MMV). This 20-25 nm non-enveloped parvovirus has infected biologics manufacturing processes using Chinese hamster cell substrates on at least four occasions, affecting at least three different manufacturers (Genentech, Amgen, and Merrimack). In each case, the source of the contamination has been unclear, making remediation of the processes difficult. Due to the ability of these viruses to survive on surfaces and their resistance to inactivation by detergents and solvents, eliminating the agent from contaminated facilities may require drastic measures such as fumigation with vaporous hydrogen peroxide.

A second problem virus is the 27-40 nm non-enveloped calicivirus, vesivirus 2117. This is the virus that was found to have infected the Genzyme Allston manufacturing facility in 2009. The same virus had already appeared once before, at a manufacturing facility in Germany. Both of the infected processes involved Chinese hamster production cells, and both involved the use of bovine serum at some point in the manufacturing process. Whether or not the animal-derived material was the actual source of the infection was not proven in either case. Unfortunately, if the source was the bovine serum, gamma irradiation probably would not mitigate the risk, as gamma irradiation is less effective for inactivating the smaller non-enveloped viruses. This is another virus that may be able to survive on facility surfaces. As in the case with MMV, ridding a manufacturing facility of vesivirus may require entire-facility fumigation with vaporous hydrogen peroxide, as was done at Genzyme.

Another problem virus is the 17-20 nm porcine circovirus that was found to contaminate a rotavirus vaccine in 2010. This virus was thought to have originated in contaminated porcine trypsin used in the manufacturing process. Wouldn’t this contaminant have shown up in the raw material testing done for the trypsin, or in the extensive cell bank testing required for vaccine production substrates? The answer is no. The circovirus would not have been detected using the 9 CFR-based detection methods used for trypsin at that time (or at present). And the required testing for cell banks used to produce vaccines would not have detected this particular virus. To make matters worse, gamma irradiation of the trypsin would not be expected to inactivate this virus. How can we mitigate the risk of this virus going forward? As described in a previous posting, manufacturers may need to apply specific nucleic acid tests for the circovirus as part of the raw material release process for trypsin.

These and other small non-enveloped viruses represent the greatest risk for biologics manufacturing because they are more difficult to inactivate in raw materials, more difficult to eradicate from a facility once it is infected, and because the source of the infection is not always clear. There must be analogous small non-enveloped bacteriophages lurking out there that represent, for the same reasons, special threats to the fermentation industry.

Tuesday, May 31, 2011

The Art of Bioreactor/Fermenter Scale-Up (or Scale-Down)

by Dr. Deb Quick

Effective bioreactor or fermenter scale-up/down is essential for successful bioprocessing. During development, small scale systems are employed to quickly evaluate and optimize the process, but larger scale systems are necessary for producing commercial quantities at a reasonable cost. But how does one effectively transfer the process between scales so that the process performs the same?



In an ideal world, the physiological microenvironment within the cells/microorganisms would be conserved at the different scales, but with no direct measure of that microenvironment, the scientist identifies relevant macroproperties to measure and control to ensure comparability. There are many macroproperties and operating parameters that define the process at each scale, and while the goal is to keep as many of those parameters as possible constant between the scales, it simply isn’t possible to keep them all the same.

When using the same operating parameters at small and large scale is impractical, there are several correlations that are commonly used: mass transfer coefficient (kLa [the volumetric transfer coefficient, 1/hr] or OTR [oxygen transfer rate, mmol/hr]); volumetric power consumption (P/V, agitation power per unit volume); agitator tip speed; and mixing time.

Matching the kLa at different scales is generally considered the most important factor in scaling cell culture and microbial processes. The second most common approach is to match the power consumption. For both of these correlations, there are often multiple combinations of operating parameters that provide the same kLa or the same power consumption at the different scale. And herein lies the art of bioreactor and fermenter scale-up/down. Selecting the best combination of parameters to match process performance at different scales is an art. There is no magic combination that works best for all cell types and products.
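To make the idea concrete, here is a minimal sketch using a Van’t Riet-type correlation, kLa = a(P/V)^alpha(vs)^beta; the constants a, alpha, and beta are placeholders that must be fit to your own vessel characterization data:

    def kla(p_per_v: float, v_s: float, a: float = 0.02, alpha: float = 0.6, beta: float = 0.5) -> float:
        """kLa (1/h) from power per volume (W/m^3) and superficial gas velocity (m/s)."""
        return a * p_per_v ** alpha * v_s ** beta

    # kLa at a characterized small-scale operating point:
    kla_target = kla(p_per_v=150.0, v_s=0.002)

    # Many (P/V, v_s) pairs match that kLa at large scale; for a chosen P/V,
    # solve the correlation for the required superficial gas velocity:
    p_v_large = 100.0
    v_s_large = (kla_target / (0.02 * p_v_large ** 0.6)) ** (1.0 / 0.5)
    print(kla_target, v_s_large)

Which of the matching combinations performs best for your cells is exactly the art referred to above, and is settled by experiment rather than by the correlation.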

To establish comparability at different scales, you’ll make your life significantly easier if you start with the same vessel design at the different scales, but this luxury is rarely reality. More often, the development lab has significantly different equipment than the manufacturing facility. But even with different reactor designs, comparable performance can be obtained at different scales through appropriate experimentation.
  • First, you’ll need to understand your equipment at all scales: measure the kLa and P/V of the different scales over a wide range of air flows, agitation rates, working volumes, and backpressures. It’s best to perform the testing in your process media, if possible. If you can find the time, it’s useful to evaluate different mixing schemes at small scale - different impeller styles and positions, baffles, and sparger styles and positions (particularly valuable if you already know the differences in these features between small and large scale systems available to you).
  • Second, you’ll need to understand how your product responds to the different operating parameters. Those dreaded statistically designed experiments (DoE) are particularly useful for understanding the effects and interactions of the many parameters that can be changed. Performing DoE experiments at small scale with your product to evaluate the effects of aeration, agitation, and volume will not only help you with scale-up, but will also provide useful information for setting acceptable ranges for the operating parameters at large scale. As with the kLa studies, it’s useful to study different mixing schemes at small scale if time allows. One set of experiments that is highly useful but rarely performed is the evaluation of the process performance at the same kLa (or P/V) obtained using different operating parameters.
Understanding your equipment and how your product responds to various operating conditions is the key to effective process scale-up and scale-down. Despite the historical and ongoing need for scaling bioprocesses up and down, there is no strategy that works in all situations. The art of successful scale-up lies in thoughtful experimental design and thorough data analysis in order to obtain the information that allows equivalent performance at all scales.

Friday, May 6, 2011

TFF Under Pressure

By Dr. Scott Rudge

Are there scale up issues for cross flow filtration?  In general, this step is overlooked as a scale up concern, and usually, given the primarily clean feed streams encountered in simple buffer exchange, this is warranted.  However, forewarned is forearmed when scale up is concerned.

Primarily, there is just one scale up issue with cross flow filtration, and that is the path length on the retentate side of the filter.  The flow on the retentate side of the filter is meant to continuously clean the filter surface, and prevent fouling, or at least limit it to a thin boundary layer.  The shear rate created by the fluid at the filter surface increases as the square of the linear velocity of the fluid.  The pressure drop through the filter module, from inlet to outlet, depends linearly on the length of the module, and also on the square of the linear velocity.  In many cases, a manufacturing scale module is about a meter in length.  However, on the lab scale, a module is likely to be closer to 10 cm.  Therefore, the pressure drop from the inlet to outlet on the retentate side will be 10 times higher at constant linear velocity on scale up from lab to manufacturing.  Since decreasing the flow rate will dramatically decrease the shear rate, the increased pressure will drive higher flux towards the membrane surface, increasing the thickness of the boundary layer and resulting in more surface polarization (fouling or gel formation, potentially). 

One approach taken to this predicament is to keep the path length constant on scale up.  This is analogous to maintaining constant bed height on chromatography scale up, an approach I disfavor.  The result of this approach is a “horizontal” scale up, where more and more units of lab proportion are lined up side by side.  This approach works, but is cumbersome, requires more and more manifolding for flow distribution, and brings other inconveniences.  It also assumes that the length of filter the manufacturer provides is the best and only length for every application, which is absurd.  However, this is an approach commonly pursued, and recommended by the filter manufacturers for its speed and certainty.

Another approach that is taken to this phenomenon is to increase the back pressure on the permeate.  This slows down the permeate independent of changes on the retentate side of the filter.  However, if the back pressure on the permeate side is greater than the pressure at any point along the filter on the retentate side, permeate will flow back to the retentate side.  This is clearly inefficient: it means a particular fluid element will be filtered at least three times, crossing from retentate to permeate, then back to retentate, and then eventually back to permeate on a subsequent pass.  This also means that the effective filtration area is decreased, as some portion of the filter is working in reverse, and another portion is working to correct the back flow.  The negative flow counts against filter area that is filtering in the positive direction.
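A toy calculation makes this failure mode easy to see; it assumes a linear pressure profile along the retentate channel, and all of the pressures are invented for illustration:

    def local_tmp(x: float, p_in: float, p_out: float, p_perm: float) -> float:
        """Transmembrane pressure at fractional position x (0 = inlet, 1 = outlet),
        assuming retentate pressure falls linearly from p_in to p_out (bar)."""
        p_ret = p_in + (p_out - p_in) * x
        return p_ret - p_perm

    # Illustrative pressures (bar): retentate 2.0 -> 0.5, permeate held at 0.8
    for x in (0.0, 0.5, 0.9, 1.0):
        print(x, round(local_tmp(x, p_in=2.0, p_out=0.5, p_perm=0.8), 2))
    # Negative values near the outlet mark the section filtering in reverse.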

Finally, employing a constant pressure gradient along the retentate side is worth trying.  Presuming the membrane geometry is essentially maintained on scale up (including spacers in the flow channel), maintaining a constant pressure gradient along the retentate channel length means shear will be constant on scale up.  Pressure drop from retentate to permeate will be higher at the retentate inlet, but if the shear is appropriate and the boundary layer controlled, this will only lead to higher flux, which may be preferred.  This can be tested on the small scale by applying back pressure on the retentate and looking for leveling off of the flux vs. pressure curve.  As long as flux vs. back pressure is increasing linearly, you can get improved performance at higher pressure.  Then upon scale up, the pressure at the retentate inlet is held constant.  It is certainly worth exploring longer path lengths on scale up; performance may improve!

In the end, either horizontal scale up will be used, or some reduction in retentate flow rate will probably be required.  The result of the latter will be less shear at the membrane surface, but the payback will be in increased filtration efficiency.  Some back pressure should be applied to the retentate side on the lab scale, as more pressure due to path length will almost surely need to be applied in manufacturing.  Maintaining the pressure drop on the retentate side with increased module length, along with back pressure on the permeate side, usually results in successful scale up of a lab cross flow filtration procedure.

Monday, April 25, 2011

What's That in My Protein? Degraded Polysorbate Again?

By Dr. Sheri Glaub

Mahler et al. have recently published a paper in Pharmaceutical Research entitled “The Degradation of Polysorbates 20 and 80 and its Potential Impact on the Stability of Biotherapeutics” (subscription required). As discussed in the paper, polysorbates are the most widely used non-ionic surfactants for stabilizing protein pharmaceuticals against interface-induced aggregation and surface adsorption.
[Photo: Unknown Blogger uses a beater to induce aggregation in a protein solution]

Concerns with polysorbate lot-to-lot variability, as well as potential degradation products, prompted the authors to investigate the impact on four different monoclonal antibodies (mAbs).  They performed an extensive characterization of polysorbate degradation products, both volatile and insoluble, which included a number of ketones, aldehydes, furanones, fatty acids, and fatty acid esters. They then examined the effect of degraded PS on these proteins.

They concluded that as long as threshold levels of PS20 and PS80 were present (in this case >0.01%), the stability of the four mAbs in pharmaceutically relevant storage conditions (2-8 °C) was maintained despite observed polysorbate degradation.
The authors also suggest that, during formulation development, one carefully evaluate the amount of PS to be used, considering the shelf life and potential behavior during storage.