Tuesday, November 16, 2010

Cell Culturists….Are your human cells authenticated?

by Dr. Ray Nims

Until fairly recently, it was common practice to authenticate human cell cultures using phenotypic status (e.g., receptor or protein expression) and animal species-of-origin testing. This level of authentication is better than none, but it is not sufficient to unambiguously identify a human cell culture. As a result, we are still hearing about cases of misidentified human cells being used in biomedical research.

There are now methods available that are capable of rapidly and unambiguously identifying human cell lines, tissues, and cell preparations to the individual level. The recent demonstration of the potential utility of molecular technologies such as short tandem repeat (STR) and single nucleotide polymorphism (SNP) profiling for cell authentication has provided the impetus for development of a new standardized method for human cell authentication.

To this end, an ATCC Standards Development Organization workgroup with international representation has spent the past two years developing a consensus standard for the Authentication of Human Cell Lines through STR Profiling. The forthcoming Standard will provide guidance on the use of STR profiling for authenticating human cells, tissue, and cell lines. It will contain methodological detail on the preparation and extraction of the DNA, guidance on the appropriate numbers and types of loci to be evaluated, and guidance on interpretation and quality control of the results. Associated with the Standard itself will be the establishment of a public STR profile database, which will be administered and maintained by the National Center for Biotechnology Information (NCBI). The database will primarily contain STR profiles of commonly used cell lines.

STR Profiling of HeLa Cells

An announcement that the Standard is now available for a 45-day public review, comment, and vote was published in the October 22, 2010 issue of the ANSI newsletter Standards Action.

The benefits of the Standard will depend on the degree to which it is adopted and followed in the biomedical research and development and biopharmaceutical communities. Taking a broader view, it is hoped that funding agencies and journals will begin to treat such authentication standards as important considerations for funding or publishing research employing human cells. The quality and validity of funded and published research should benefit greatly from the resulting reduction in the use of misidentified human cells.

The deadline for comments is December 6, 2010. There is still time to review the draft Standard and to voice your opinions and concerns.

Wednesday, November 3, 2010

Fry those mollicutes!

By Dr. Ray Nims

It is not only viruses that may be introduced into biologics manufactured in mammalian cells using bovine sera in upstream cell growth processes. The other real concern is the introduction of mollicutes (mycoplasmas and acholeplasmas). Mollicutes, like viruses, are able to pass through the filters (including 0.2 micron pore size) used to sterilize process solutions. Because of this, filter sterilization will not assure mitigation of the risk of introducing a mollicute through use of contaminated bovine or other animal sera in upstream manufacturing processes.

Does mycoplasma contamination of biologics occur as a result of the use of contaminated sera? The answer is yes. Most episodes are not reported to the public domain, but occasionally we hear of such occurrences. Dehghani and coworkers reported the occurrence of a contamination with M. mycoides mycoides bovine group 7 that was proven to have originated in the specific bovine serum used in the upstream process (Case studies of mycoplasma contamination in CHO cell cultures. Proceedings from the PDA Workshop on Mycoplasma Contamination by Plant Peptones. Parenteral Drug Association, Bethesda, MD. 2007, pp. 53-59). Contaminations with M. arginini and Acholeplasma laidlawii attributed to the use of specific contaminated lots of bovine serum have also occurred.

Fortunately, the risk of introducing an adventitious mollicute into a biologics manufacturing process utilizing a mammalian cell substrate may be mitigated by gamma-irradiating the animal serum prior to use. This may be done in the original containers while the serum is frozen. Unlike the case for viruses, in which the efficacy of irradiation for inactivation may depend upon the size of the virus, mollicute inactivation by gamma irradiation has been found to be highly effective (essentially complete), regardless of the species of mollicute. The radiation doses required for inactivation are relatively low compared to those required for viruses (e.g., 10 kGy or less, compared to 25-45 kGy for viruses). The gamma irradiation performed by serum vendors is typically in the range of 25-40 kGy. This level of radiation is more than adequate to assure complete inactivation of any mollicutes that may be present in the serum. For instance, irradiation of calf serum at 26-34 kGy resulted in ≥6 log10 inactivation of M. orale, M. pneumoniae, and M. hyorhinis. In the table below I have assembled the data available on inactivation of mollicutes in frozen serum by gamma-irradiation.


So, the good news is that gamma irradiation of animal serum that is performed to mitigate the risk of introducing a viral contaminant will also mitigate the risk of introducing a mollicute contaminant. If the upstream manufacturing process cannot be engineered to avoid use of animal serum, the next best option is to validate the use of gamma irradiated serum in the process.  In fact, the EMEA Note for guidance on the use of bovine serum in the manufacture of human biological medicinal products strongly recommends the inactivation of serum using a validated and efficacious treatment, and states that the use of non-inactivated serum must be justified.
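
To put the dose arithmetic in perspective, here is a minimal sketch assuming simple log-linear (D10-value) inactivation kinetics; the D10 value used is a hypothetical illustration, not a figure from the data above.

```python
# Sketch: estimating the log10 reduction of a mollicute at a given gamma
# dose, assuming simple log-linear (D10-value) inactivation kinetics.
# The D10 value below is hypothetical, for illustration only.

def log10_reduction(dose_kGy, d10_kGy):
    """Log10 reduction at dose_kGy, where d10_kGy is the dose that
    inactivates 90% (1 log10) of the organism."""
    return dose_kGy / d10_kGy

# If a mollicute had a D10 of ~1.5 kGy (hypothetical), the typical
# 25-40 kGy vendor irradiation would deliver:
for dose in (25, 40):
    print(f"{dose} kGy -> ~{log10_reduction(dose, 1.5):.0f} log10 reduction")
```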


References: Gauvin and Nims, 2010; Wyatt et al., BioPharm 6(4):34-40, 1993; Purtle et al., 2006.

Wednesday, October 27, 2010

Risky Business

By Dr. Scott Rudge

“Risk Analysis” is a big topic in pharmaceutical and biotech product development. The International Conference on Harmonization (ICH) has even issued a guidance document on the subject, Q9 “Quality Risk Management”. Despite the documentation available and the regulatory emphasis, these tools remain poorly understood. They are used to justify limiting the extent of fundamental understanding that can be gained of a process, while simultaneously being used as a cure-all for the management challenges we face in pharmaceutical process development.

Risk analysis focuses only on failure modes. Failure mode and effects analysis (FMEA) was developed by the military and first published as MIL-P-1629, “Procedures for Performing a Failure Mode, Effects and Criticality Analysis”, in November 1949. The procedure was established to discern the effects of system failure on mission success and personnel safety. Since then, we have found a much broader range of applications for this type of analysis. The methodology is used in the manufacture of airplanes, automobiles, software and elsewhere. People have devised “design” FMEAs and “process” FMEAs (DFMEA and PFMEA). These are all great tools, and they help us design and build better, safer products with better, safer processes.

Where Risk Analysis Falls Down

The FMEA is such a great hammer, it can make everything look like a nail. And when regulatory authorities are encouraging companies to use Risk Analysis for product design and process validation, the temptation to apply it further can be overwhelming. In particular, risk analysis is used in three inappropriate ways, in my estimation:

 
  1. Decision analysis
  2. Project management
  3. Work avoidance

Quite often, risk analysis tools are used to guide decisions. Here, the pros and cons of selecting a particular path are weighed using the three criteria of risk analysis (occurrence, detectability and severity) as guides. However, not every decision path leads to a failure mode or an outcome that could be measured as a consequence. And detectability and occurrence may not be the only or most appropriate factors by which to weight a consequence. There are excellent decision tools designed specifically for weighting and evaluating preference criteria. A very simple one is the Kepner-Tregoe (KT) decision matrix. Decision analysis uses very detailed descriptions of the decision to model the potential outcomes. The KT decision matrix is sufficient for many decisions, large and small. But if you really want to study the anatomy of a decision, decision analysis is the most satisfactory method.
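
A minimal sketch of a KT-style weighted decision matrix follows; the criteria, weights, and scores are invented for illustration.

```python
# Sketch of a Kepner-Tregoe-style weighted decision matrix.
# Criteria weights (1-5) and option scores (1-10) are hypothetical.

criteria = {"cost": 3, "speed": 2, "robustness": 5}
options = {
    "Option A": {"cost": 8, "speed": 6, "robustness": 4},
    "Option B": {"cost": 5, "speed": 7, "robustness": 9},
}

for name, scores in options.items():
    total = sum(weight * scores[c] for c, weight in criteria.items())
    print(f"{name}: weighted score = {total}")
# The option with the highest weighted score is preferred.
```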
 
FMEAs are sometimes inappropriately applied in project management, to assign the prioritization and order in which tasks should be completed. This may be handy in some instances, but is misleading or inappropriate in others. The riskiness of an objective should not be the sole determinant of its prioritization. Quite often, the fundamentals or building blocks need to be in place in order to best address the riskiest proposition in a project. Prematurely addressing the pieces of a project that present the greatest risk of failure may lead to that failure. On the other hand, being “fast to fail”, eliminating projects that might not bear fruit with the least expenditure of resources, is critical to overall company or project success. Project management requires consideration of failure modes, but also resource programming and timeline management. FMEAs can be an element of that formula, but should not be the focus.
 
Finally, and perhaps most painfully, FMEAs are used to justify avoiding work. Too often, risk analysis is applied to a problem, not to identify the elements that are most deserving of attention, but to justify neglecting areas that do not rank sufficiently high in the risk matrix. Sometimes, the smallest risks are the easiest to address, and in addressing them, variability can be removed from the process. Variability is the “elephant in the room” when it comes to pharmaceutical quality, as has been concisely pointed out by Johnston and Zhang.
 
The FMEA is a stellar tool, and it is applicable to problems in design, process and strategy across many industries. Its quantitative feel makes practitioners feel as though they are actually measuring something, and can make fine distinctions between risks that they were unable to articulate before. However, the FMEA can be applied rather too widely, or sometimes unscrupulously, yielding bad data and bad decisions.

Thursday, October 14, 2010

National Chemistry Week

By Dr. Sheri Glaub

Oct 17-23 is National Chemistry Week. This year’s theme is “Behind the Scenes with Chemistry”, which celebrates the chemistry in movies, set designs, makeup artistry and common special effects.

As exciting as that is, imagine a career working on life-saving therapies. Drug innovations in chemistry have helped increase life expectancy in the US by 30 years over the past century, and many chemical disciplines are involved. For example, medicinal and organic chemists isolate medicinal agents found in plants and create new synthetic drug compounds. Biochemists investigate the mechanisms of drug action; engage in viral research; conduct research pertaining to organ function; or use chemical concepts, procedures, and techniques to study the diagnosis and therapy of disease and the assessment of health. Analytical chemists use their knowledge of chemistry, instrumentation, computers, and statistics to solve problems in almost all areas of chemistry. Their measurements are used to assure compliance with regulations; to assure the safety and quality of food, pharmaceuticals, and water; to support the legal process; and to help physicians diagnose disease. Last, but not least, chemical engineers apply the principles of chemistry, math, and physics to the design and operation of large-scale chemical manufacturing processes. They translate processes developed in the lab into practical applications for the production of products such as plastics, medicines, detergents, and fuels.

This year, the Nobel Prize in Chemistry was jointly awarded to a trio of chemists for palladium-catalyzed cross couplings in organic synthesis. A recent article in Chemical and Engineering News quotes Stephen L. Buchwald, a chemistry professor at Massachusetts Institute of Technology. “This is a very exciting day for organic chemistry. This is a well-deserved award that is long overdue. It is hard to overestimate the importance of these processes in modern-day synthetic chemistry. They are the most used reactions by those in the pharmaceutical industry.”

The future of prescription drugs and medical devices depends on kids getting excited about science now. If you are in the Denver area, make plans to attend the Denver Museum of Nature and Science with your children on October 16 or 17 for the 7th Annual Demonstration and Outreach hosted by University of Northern Colorado Student Affiliate Chapter with advisor Dr. Kim Pacheco. If you are not in the Denver area, check with your local museum or your local ACS section to see what is planned.

Wednesday, September 22, 2010

Manufacturing Biologics with CHO Cells? What’s the Risk for Viral Contamination?

by Dr. Ray Nims

Chinese hamster ovary (CHO) cells are frequently used in the biopharmaceutical industry for the manufacture of biologics such as recombinant proteins, antibodies, peptibodies, and receptor ligands. One of the reasons that CHO cells are often used is that these cells have an extensive safety track record for biologics production. This is considered to be a well-characterized cell line, and as a result the safety testing required may be less rigorous in some respects (e.g., retroviral safety) than that required for other cell types. But how susceptible is the cell line to viral contamination?

There are a couple of ways of answering this question. One way is to examine, in an empirical fashion, the susceptibility of the cell type to productive infection by model exogenous viruses. This type of study has been conducted at least three times over the past decades by different investigators. Wiebe and coworkers (In: Advances in Animal Cell Biology and Technology for Bioprocesses. Great Britain, 1989; 68-71) examined over 45 viruses from 9 virus families for ability to infect CHO-K1 cells, using immunostaining and cytopathic effect to detect infection. Only 7 of the viruses (Table 1) were capable of infecting the cells. Poiley and coworkers (In Vitro Toxicol. 4: 1-12, 1991) followed with a similar study in which 9 viruses from 6 families were evaluated for ability to infect CHO-K1 cells as detected by cytopathic effect, hemadsorption, and hemagglutination. This study did not add any new viruses to the short list (Table 1). The most recent study was conducted by Berting et al. This study involved 14 viruses from 12 families. The viruses included a few known to have contaminated CHO cell-derived biologics in the past two decades, and therefore did add some new entities to the list in Table 1. Still, the list of viruses that are known to replicate in CHO cells is relatively short.



Chinese hamster cells possess an endogenous retrovirus which expresses its presence in the form of retroviral particles; however, these particles have been consistently found to be non-infectious for cells from other animals, including human cells. This endogenous retrovirus therefore does not present a safety threat (Dinowitz et al. Dev. Biol. Stand. 76:201-207, 1992).

A second way of looking at the question of viral susceptibility of CHO cells is to examine the incidence and types of reported viral contaminations of manufacturing processes employing CHO cell substrates. This subject has been reviewed a number of times, most recently by Berting et al. The types of viral contaminants fill a fairly short list (Table 2). In most cases, the contaminations have been attributed to the use of a contaminated animal-derived raw material, such as bovine serum.

Sources: Rabenau et al., 1993; Garnick, 1996; Oehmig et al., 2003; Nims, Dev. Biol. 123:153-164, 2006; Nims et al., 2008; Genzyme, 2009.

Considering the frequency with which CHO cell substrates have been used in biologics production, this history of viral contamination is remarkably sparse. This is further testament to the overall safety of this particular cell substrate.

Wednesday, September 8, 2010

FDA to viral vaccine makers: it's time to update viral testing methods

By Dr. Ray Nims

If you have been following the recent (2010) unfolding of the discovery of porcine circovirus DNA contamination in rotavirus vaccines from GSK and Merck, you may not be surprised to hear that the FDA has asked viral vaccine manufacturers to outline, by October, their plans to update their testing methodologies to prevent future revelations of this type.
 
I had predicted earlier that biologics manufacturers would be asked to provide evidence, going forward, that their porcine raw materials (trypsin being the most common) are free of porcine circovirus. This testing has not been mandatory in the past, but adding it to the porcine raw material virus screening battery moving forward is a prudent action in light of the recent rotavirus vaccine experience.

The FDA has appropriately gone a little further in its request to the viral vaccine manufacturers. The regulators would like to assure that the future will not bring additional discoveries of viral contaminants in licensed vaccines, and the best way to accomplish this at the moment appears to be to request implementation of updated viral screening methodologies. Does this mean that viral vaccine makers will need to employ deep sequencing on a lot-by-lot basis? Most likely not. It appears, however, that reliance on the in vivo and in vitro virus screening methods which have been the gold standards since the 1980s will no longer be sufficient. So what does this leave us with? What FDA appears to be asking for is a relatively sensitive universal viral screening method.

The in vivo and in vitro methods were, until now, the best option for this purpose. These methods detect infectious virus only and depend upon the ability of the virus to cause an endpoint response in the system (cytopathic effect, hemagglutination, hemadsorption, or pathology in the laboratory animal species used). Viral genomic material would thus not be detected, and the methods have had to be supplemented with specific nucleic acid-based tests for viruses which could not otherwise be detected (e.g., HIV, hepatitis B virus, human parvovirus B19, porcine circovirus).

Some options for sensitive and universal viral screening methods which might fit the requirements include DNA microarrays and universal sequencing methods performed on cell and viral stocks. The latter technology may be preferable, as microarrays are constructed to detect known viruses, while the desire is that the technology be universal in the sense that it detect both known and unknown viruses. Such a test will provide additional assurance that the virus and cell banks used to manufacture viral vaccines do not harbor a viral contaminant.

Other universal viral screening methods which are less labor intensive than the sequencing technologies may be developed in the near future, and adding one of these to the release testing battery for viral vaccine lots may need to be considered in satisfying the FDA's goals.

Wednesday, September 1, 2010

Is Clarence calculating clearance correctly?

by Dr. Ray Nims

As pointed out by Dr. Rudge in a recent posting “Do we have clearance, Clarence?”, spiking studies conducted for the purpose of validating impurity clearance are often done at only one spiking level (indeed often at the highest possible impurity load attainable). This is especially true for validation of adventitious agent (virus and mycoplasma) clearance in downstream processes. The studies are done in this way in order to determine the upper limit of agent clearance (in terms of log10 reduction) by the process. Such log10 reduction factors from individual process steps are then summed in order to determine the overall capability of the downstream processes to clear adventitious agents. The regulatory agencies have fairly clear expectations around such clearance capabilities which must generally be met by biologics manufacturers.
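
As a minimal sketch of that summing arithmetic (the step names and log10 reduction values below are hypothetical, not figures from any filing):

```python
# Sketch: summing per-step log10 reduction factors (LRVs) into an
# overall downstream clearance claim. Steps and values are hypothetical.

step_lrv = {
    "low-pH inactivation": 4.5,
    "anion-exchange chromatography": 3.2,
    "virus-retentive filtration": 4.0,
}

overall = sum(step_lrv.values())
for step, lrv in step_lrv.items():
    print(f"{step}: {lrv:.1f} log10")
print(f"Overall claimed clearance: {overall:.1f} log10")
```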

The limiting factor in such clearance studies is typically the amount or titer of the agent that is able to be spiked into the process solution, which is determined by: 1) the titer of the stock used for spiking, and 2) the maximum dilution of the process solution allowed during spiking (typically 10%). Under these circumstances, as Scott points out, there is a possibility that the determined clearance efficiency (i.e., the percentage of the load which is cleared during the step) is an underestimate of the actual clearance that might be obtained at lower impurity loading levels.

Adventitious agent clearance comprises two possible modalities: removal and inactivation. Removal refers to physical processes designed to eliminate the agent from the process solution, usually through filtration or chromatography. Removal efficiency through filtration would not be expected to display variability based on impurity loading. On the other hand, chromatographic separation of agents (by, for example, ion-exchange columns) may display saturation at the highest loadings, and therefore use of the highest possible loading levels may result in underestimates of removal efficiency at lower (i.e., more typical) impurity levels.

Inactivation refers to physical or chemical means of rendering the agent non-infectious. Agent inactivation is not always a simple, first-order reaction. It may be more complex, with a fast phase 1 stage of inactivation followed by a slow phase 2 stage of inactivation. An inactivation study is planned in such a way that samples are taken at different times so that an inactivation time curve can be constructed. As with removal studies, the highest possible impurity levels are typically used to determine inactivation kinetics.

Source: Omar et al., Transfusion 36:866-872, 1996
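
To make the biphasic picture concrete, here is a minimal sketch that models inactivation as the sum of a fast-decaying sensitive population and a slow-decaying resistant one; the rate constants and resistant fraction are illustrative assumptions, not values from the source above.

```python
# Sketch: biphasic inactivation modeled as two first-order decays.
# n0, resistant_frac, k_fast and k_slow are hypothetical.

import math

def surviving_titer(t_min, n0=1e7, resistant_frac=0.01, k_fast=0.5, k_slow=0.02):
    """Infectious titer remaining after t_min minutes of treatment."""
    sensitive = (1 - resistant_frac) * n0 * math.exp(-k_fast * t_min)
    resistant = resistant_frac * n0 * math.exp(-k_slow * t_min)
    return sensitive + resistant

for t in (0, 5, 15, 30, 60):
    print(f"t = {t:>2} min: {math.log10(surviving_titer(t)):.2f} log10 remaining")
# The fast phase dominates early; the resistant fraction sets the slow tail.
```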

While the information obtained through clearance studies of this type may be incomplete from the point of view of understanding the relationships between impurity loading levels and clearance efficiency, the results obtained are consistent with the regulatory expectation that the clearance modalities be evaluated under worst-case conditions. Therefore, at least in the case of adventitious agent clearance validation, I would say that Clarence is calculating clearance correctly!

Thursday, August 26, 2010

Do We Have Clearance, Clarence?

By Dr. Scott Rudge

As with take-offs and landings in civil aviation, the ability of a pharmaceutical manufacturing process to give clearance of impurities is vital to customer safety. It’s also important that the clearance mechanism be clear, and not confused, as the conversation in the classic movie “Airplane!” surely was (and don’t call me Shirley).

There are two ways to demonstrate clearance of impurities.

The first is to track the actual impurity loads. That is, if an impurity comes into a purification step at 10%, and is reduced through that step to 1%, then the clearance would typically be called 1 log, or 10 fold.

The second is to spike impurities. This is typically done when an impurity is not detectable in the feed to the purification step, or when, even though detectable, it is thought desirable to demonstrate that even more of the impurity could be eliminated if need be.

The first method is very usable, but suffers from uneven loads. That is, batch to batch, the quantity and concentration of an impurity can vary considerably. And the capacity of most purification steps to remove impurities is based on quantity and concentration. Results from batch to batch can vary correspondingly. Typically, these results are averaged, but it would be better to plot them in a thermodynamic sense, with unit operation impurity load on the x-axis and efflux on the y-axis. The next figures give three of many possible outcomes of such a graph.


In the first case, there is proportionality between the load and the efflux. This would be the case if the capacity of the purification step were linearly related to the load. This is typically true for absorbents, and for adsorbents at low levels of impurity. In this case (and only this case, actually), a calculated log clearance applies across the range of possible loads. The example figure shows a constant clearance of 4.5 logs.


In the second case, the impurity saturates the purification medium. A maximum amount of impurity can be cleared, and no more. The closer the load is to exactly this capacity, the better the log removal looks; this is the point where no impurity is found in the purification step effluent. All loads higher than this show increasing inefficiency in clearance.


In the third case, the impurity has a thermodynamic or kinetic limit in the step effluent. For example, it may have some limited solubility, and it reaches that solubility in nearly all cases. The more impurity that is loaded, the greater the proportion that appears to be cleared; a constant amount of impurity is always recovered in the effluent.

For these reasons, simply measuring the ratio of impurity in the load and effluent to a purification step is inadequate. This reasoning applies even more so to spiking studies, where the concentration of the impurity is made artificially high. In these cases, it is even more important to vary the concentration or mass of the impurity in the load, and to determine what the mechanism of clearance is (proportional, saturation or solubility).

Understanding the mechanism of clearance would be beneficial, in that it would allow the practitioner to make more accurate predictions of the effect of an unusual load of impurity. For example, in the unlikely event that a virus contaminates an upstream step in the manufacture of a biopharmaceutical, but the titer is lower than spiking studies had anticipated, if the virus is cleared by binding to a resin, and is below the saturation limit, it’s possible to make the argument that the clearance is much larger, perhaps complete. On the other hand, claims of log removal in a solubility limit situation can be misleading. The deck can be stacked by spiking extraordinary amounts of impurity. The reality may be that the impurity is always present at a level where it is fully soluble in the effluent, and is never actually cleared from the process.
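
A minimal sketch of how the apparent log clearance shifts with load under these three mechanisms (all parameter values are invented for illustration):

```python
# Sketch: apparent log10 clearance vs. impurity load for the three
# mechanisms discussed above. Capacities and limits are hypothetical.

import math

def effluent(load, mechanism):
    if mechanism == "proportional":    # constant fractional clearance
        return load * 10**-4.5         # the 4.5-log example above
    if mechanism == "saturation":      # step removes at most 'cap' units
        cap = 100.0
        return max(load - cap, 1e-12)
    if mechanism == "solubility":      # effluent pinned at a solubility limit
        limit = 0.1
        return min(load, limit)

for load in (1, 10, 100, 1000):
    for mech in ("proportional", "saturation", "solubility"):
        lrv = math.log10(load / effluent(load, mech))
        print(f"load = {load:>4}, {mech:>12}: {lrv:5.1f} logs")
# Only the proportional case gives a load-independent log clearance.
```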

Clearance studies are good and valuable, and help us to protect our customers, but as long as they are done as single points on the load/concentration curve, their results may be misleading. When the question comes, “Do we have clearance, Clarence?” we want to be ready to answer the call with clear and accurate information. Surely varying the concentration of the impurity to understand the nature of the clearance is a proper step beyond the single point testing that is common today.

And stop calling me Shirley.

Monday, August 9, 2010

Sizing Up Filters

By Dr. Scott Rudge

Of all the unit operations used in pharmaceutical manufacture, filtration is by far the most frequently used. Filters are used on the air and the water that make their way into the production suite. They are used on the buffers and chemical solutions that feed the process. They are used to vent the tanks and reactors in which the products are held and synthesized. But the sizing of the filters is largely an afterthought in process design.

Liquid filters that will be used to remove an appreciable amount of solid must be sized with the aid of experimental data. Typically, a depth filter is used, or a filter that contains a filtration aid, such as diatomaceous earth. A depth filter is a filter with no defined pores; rather, it is usually some kind of spun fiber, like polyethylene, that serves as a mat for capturing particulate. You probably did a depth filtration experiment in high school with glass wool. Or you’ve used a depth filter in your home aquarium, with the gravel (undergravel filter) or an external filter pump (where the fibrous cartridge you install is a depth filter, such as the "blue bonded filter pads" shown below).

A depth filter uses its fiber mesh to trap particles, but it also then uses the bed of captured particles to capture more. It is actually the nature of the particles that controls most of the filtration properties of the process.

Because solids are deposited onto the filter, the resistance of the filter to flow increases as the volume filtered increases. Therefore, knowing the exact size of filter that will be required for your application can be complicated. The complication is overcome by developing a specific solids resistance that is normalized to the volume that has been filtered and to the solids load in the slurry. Once this is done, these depth filters can be sized by measuring the volume filtered at constant pressure in a laboratory setting. The linearized equation for filtration volume is:

t/(V/A) = [μ·α·c/(2·ΔP)]·(V/A) + μ·Rm/ΔP

where t is the filtration time, V the filtrate volume, A the filter area, μ the filtrate viscosity, c the solids load in the slurry, ΔP the constant pressure drop, α the specific cake resistance, and Rm the resistance of the filter medium.
By measuring the volume filtered with time at constant pressure, the two filtration resistances can be found as the slope and intercept of a plot of t/(V/A) vs. (V/A). The area of a depth filter is the cross section of the flow path. On scale up, the depth of a depth filter is held constant, and this cross section is increased. An example of the laboratory data that should be taken, and the resulting plots, is shown below:




As expected, the filter starts to clog as more filtrate is filtered. The linearized plot gives a positive y-axis intercept and a positive slope, which can be used to calculate the resistance of the filter and the resistance of the solids cake on the filter.
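
Here is a minimal sketch of that regression; the lab data and the property values (viscosity, pressure drop, solids load) are placeholders, not numbers from the figure.

```python
# Sketch: extracting medium and cake resistances from constant-pressure
# filtration data by fitting t/(V/A) vs. (V/A). All values are hypothetical.

import numpy as np

t = np.array([60.0, 150.0, 270.0, 420.0, 600.0])     # time, s
V_per_A = np.array([0.01, 0.02, 0.03, 0.04, 0.05])   # filtrate volume/area, m

y = t / V_per_A                                      # s/m
slope, intercept = np.polyfit(V_per_A, y, 1)

mu = 1e-3    # filtrate viscosity, Pa·s (assumed)
dP = 1e5     # constant pressure drop, Pa (assumed)
c = 10.0     # solids load in slurry, kg/m^3 (assumed)

Rm = intercept * dP / mu            # filter medium resistance, 1/m
alpha = 2 * dP * slope / (mu * c)   # specific cake resistance, m/kg
print(f"R_m = {Rm:.2e} 1/m, alpha = {alpha:.2e} m/kg")
```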



The resistance of the filter should be a constant and independent of any changes in the feed stream. However, the specific cake resistance, α, will vary with the solids load. It is important to know the solids load in the representative sample(s) tested, and the variability in the solids load in manufacturing. The filter then should be sized for the highest load anticipated. This will result in the under-utilization of the filter area for most of the batches manufactured, but will reduce or eliminate the possibility that the filter will have to be changed mid-batch.

Of course, reducing variability in the feed stream will increase the efficiency of the filter utilization, and reduce waste in other ways, such as reducing variability in manufacturing time, reducing manufacturing investigations and defining labor costs.

Tuesday, July 20, 2010

The nuts and bolts of retrovirus safety testing

by Dr. Ray Nims


Retroviruses may integrate into the genome of host animals; for this reason they are often referred to as endogenous viruses. Viral particles may or may not be expressed in the host cell. Expressed viruses may be infectious or non-infectious, and infectious virus may have tropism for (the ability to infect) the same or different animal species relative to the host cell of origin. Infection results from a process of reverse transcription of the viral RNA leading to proviral DNA. To accomplish this, retroviruses have a specialized enzyme known as reverse transcriptase. Through this process (see figure below), the infected cell may be enlisted to produce viral progeny. Certain retroviruses are known to be oncogenic (e.g., human T-lymphotropic virus 1, feline leukemia virus, Rous sarcoma virus). Other retroviruses are of concern as a result of disease syndromes caused in humans (e.g., human immunodeficiency virus 1 in acquired immunodeficiency syndrome, and the possible role of xenotropic murine leukemia virus-related virus in chronic fatigue syndrome). From a biosafety standpoint, there is a worry that integrated viruses in cell substrates employed to produce biopharmaceuticals, while not normally expressing their presence, may under some conditions be induced to produce infectious particles.




Retrovirology safety testing for biologics manufacture can be confusing to those not familiar with the subject. Here is a brief overview.

Demonstrating retroviral safety typically involves a combination of the following three components:
• detecting infectious retrovirus through cell culture assays (XC plaque, cocultivation with mink lung or Mus dunni cells, etc.);
• measuring reverse transcriptase enzyme activity, either through tritiated thymidine incorporation into templates or through product amplification (PCR) techniques (PERT, etc.); this is not required if infectious retrovirus is detected;
• visualizing and enumerating retroviral particles in supernatants or in fixed cells using transmission electron microscopy.

The various assays are applied during cell bank characterization (including end of production cell testing), during evaluation and validation of purification processes, and in some instances, as bulk harvest lot-release assays (results from 3 lots at pilot or commercial scale are submitted with the marketing application). For processes using well-characterized rodent cells known to contain endogenous retrovirus (CHO, C127, BHK, murine hybridoma), retroviral infectivity testing of the processed bulk is not required provided that adequate downstream clearance of the particles has been demonstrated.

Infectivity testing can be particularly confusing, due to the variety of cell-based assays employed. These include both direct and indirect assays. An example of a direct assay is the XC-plaque assay for ecotropic (a term meaning the virus is infectious for mouse cells) murine retroviruses. By definition, therefore, this would only be used to assay production cells of mouse origin.

Indirect assays are those in which a second endpoint is required to assess positive or negative outcome. Indirect assays include the various co-cultivation assays in which the test cells are co-cultivated with host cells such as mink lung, Mus dunni, and any of a number of human cells (see Table 4 within USP <1237> Virology Tests for a list of commonly used host cells). The indirect assays are performed to detect xenotropic retroviruses (retroviruses which are capable of infecting only animals other than the species of origin). The secondary endpoints used to assess outcome include reverse transcriptase activity, sarcoma virus rescue (S+L- focus formation assays), or enzyme immunoassay. The indirect assays are used in the retrovirus testing of mouse, hamster, monkey, and human production cell substrates. The selection of the host cell for the cocultivation assay is dependent upon the species of origin of the production cell, recognizing that cocultivation host cells from a species other than that of the production cells must be used. For production processes using rodent or other non-human cells, one or more human host cells are typically used for the cocultivation assay, as xenotropic retroviruses infectious for human cells are of obvious concern.

Still confused? Don’t worry. An individual with virology testing expertise can assist in designing the appropriate retrovirus testing battery for your biologic.

Wednesday, June 23, 2010

Assessing rapid microbial detection systems

by Dr. Ray Nims

With each passing year, it seems that there are more options available for rapid microbial detection. These rapid systems come in a variety of “flavors”; that is, they differ with respect to a set of key attributes. For instance, how rapid is rapid? What is the sensitivity? What is the maximum sample volume that may be tested? Is it quantitative or qualitative? What units are the results given in? Is it destructive or non-destructive (i.e., can the organism, once detected, be identified)? When one considers the variety of applications for which rapid methods may potentially replace existing culture methods, it rapidly becomes clear that there may not be “one shoe that fits all”.

In order to select an appropriate rapid method for use in one of the many microbial detection applications, one must first assess the available rapid systems for the key attributes mentioned above. This then provides the opportunity to rule out systems which for one reason or the other will not suit the application. There may be some applications for which no rapid system currently meets all requirements. Those rapid systems which do appear to possess the attributes required may be further evaluated for cost and for performance capabilities using specific sample matrices.
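
As a minimal sketch of that rule-out step (the systems and attribute values below are invented, not drawn from the tables):

```python
# Sketch: ruling out rapid-detection systems against application
# requirements. Systems and attribute values are hypothetical.

systems = {
    "System A": {"hours": 24, "quantitative": True,  "destructive": False},
    "System B": {"hours": 4,  "quantitative": False, "destructive": True},
    "System C": {"hours": 48, "quantitative": True,  "destructive": True},
}

def meets_needs(a):
    # Application needs: result within 24 h, quantitative, organism recoverable.
    return a["hours"] <= 24 and a["quantitative"] and not a["destructive"]

candidates = [name for name, attrs in systems.items() if meets_needs(attrs)]
print("Candidates for further evaluation:", candidates)
```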

In the table below, we have listed some of the currently available rapid microbial detection systems. Only systems with detection times of 48 hours or less are included; therefore some of the sterility replacement assays involving reduced incubation durations (e.g., BacT/ALERT®, Growth Direct™) are not listed.




The key attributes of these rapid systems are displayed in the table below. The systems are arranged by principle of detection, as in the table above. For certain methods (e.g., Micro Pro™) increased sensitivity can be gained through increasing the duration of the incubation time. For non-destructive methods, the ability to identify the organism(s) detected is facilitated by an additional incubation post-detection.



What is the regulatory position on rapid microbial detection methods? The U.S. FDA Guidance for Industry: Sterile Drug Products Produced by Aseptic Processing states that other suitable microbiological tests (e.g., rapid methods) may be considered for environmental monitoring, in-process control testing, and finished product release testing after it has been demonstrated that these new methods are equivalent to or better than conventional (e.g., USP) methods. Additionally, the FDA Process Analytical Technology (PAT) initiative encourages the voluntary development and implementation of innovative approaches in pharmaceutical development, manufacturing, and quality assurance (from MJ Miller, PDA Journal 45:1-5, 2002).

Are rapid methods being used in the pharmaceutical industry? ScanRDI was approved by the FDA for water testing at GSK and for sterility testing at Alcon; Pallchek has been approved by the FDA for bioburden testing at GSK; and Wyeth received approval for use of Celsis for microbial limits testing.

Like all methods proposed to replace existing “gold standards”, these rapid microbial detection systems must be demonstrated through comparability protocols to be equivalent to or better than the existing methods. The effort required should pay dividends in terms of shortened turnaround times and reduced costs.

Thursday, June 17, 2010

Informing the FMEA

By Dr. Scott Rudge

Risk reduction tools are all the rage in pharmaceutical, biotech and medical device process/product development and manufacturing. The International Conference on Harmonization has enshrined some of the techniques common in risk management in its Q9 guidance, “Quality Risk Management”. The Failure Modes and Effects Analysis, or FMEA, is one of the most useful and popular tools described. The FMEA stems from a military procedure, published in 1949 as MIL-P-1629, and has been applied in many different ways. The method most used in health care involves making a list of potential ways in which a process can “fail”, or produce out-of-specification results. After this list has been generated, each failure mode in the list is assessed for its proclivity to “Occur”, “Be Severe” and “Be Detected”. Typically, these are scored from 1 to 10, with 10 being the worst case for each category and 1 being the best. The scores are multiplied together, and the product (mathematically speaking) is called the “Risk Priority Number”, or RPN. Then, typically, development work is directed towards the failure modes with the highest RPNs.
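
A minimal sketch of that scoring arithmetic follows; the failure modes and 1-10 rankings are invented for illustration.

```python
# Sketch: Risk Priority Number (RPN) = Occurrence x Severity x Detectability.
# Failure modes and rankings below are hypothetical.

failure_modes = [
    # (name, occurrence, severity, detectability), each ranked 1-10
    ("Filter integrity failure", 2, 9, 3),
    ("Buffer pH out of range",   5, 4, 2),
    ("Column overload",          4, 6, 7),
]

scored = [(name, o * s * d) for name, o, s, d in failure_modes]
for name, rpn in sorted(scored, key=lambda x: -x[1]):
    print(f"RPN {rpn:>3}: {name}")
# Development work is typically directed at the highest RPNs first.
```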

The problem is, it’s very hard to assign a ranking from 1 to 10 for each of these categories in a scientific manner. More often, a diverse group of experts from process and product development, quality, manufacturing, regulatory, analytical and other stake holding departments, convene a meeting and assign rankings based on their experience. This is done once in the product life-cycle, and never revisited as actual manufacturing data start to accumulate. And, while large companies with mature products have become more sophisticated, and can pull data from other similar or “platform” products, small companies and development companies can really only rely on the opinion of experts, either from internal or external sources. The same considerations apply to small market or orphan drugs.

Each of these categories can probably be informed by data, but by far the easiest to assign a numerical value to is the “Occurrence” ranking. A typical Occurrence ranking chart might look something like this:

These rankings come from “piece” manufacturing, where thousands to millions of widgets might be manufactured in a short period of time. This kind of manufacturing rarely applies in the health care industry. However, this evaluation fits very nicely with the Capability Index analysis.

The Capability Index is calculated by dividing the variability of a process into its allowable range of variation. Or, said less obtusely, by dividing the specification range by the standard deviation of the process performance. The capability index is directly related to the probability that a process will operate out of range or out of specification. This table, found on Wikipedia (my source for truth), gives an example of the correlation between the calculated capability index and the probability of failure:

As a reminder, the capability index is Cp = (USL - LSL)/(6σ): the upper specification limit minus the lower specification limit, divided by six times the standard deviation of the process. The two tables can be combined to be approximately:

How many process data points are required to calculate a capability index? Of course, the larger the number of points, the better the estimates of the average and standard deviation, but technically two or three data points will get you started. Is that better than guessing?
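
A minimal sketch of the calculation, assuming a normal, centered process; the data and specification limits are invented for illustration.

```python
# Sketch: capability index Cp = (USL - LSL) / (6 * sigma), and the
# corresponding out-of-spec probability for a centered normal process.
# Data and specs are hypothetical.

import statistics
from math import erf, sqrt

data = [99.1, 100.4, 99.8, 100.9, 99.5, 100.2, 99.9, 100.6]  # e.g., % potency
usl, lsl = 103.0, 97.0

sigma = statistics.stdev(data)
cp = (usl - lsl) / (6 * sigma)

z = 3 * cp                                   # distance to spec in sigmas
p_fail = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"Cp = {cp:.2f}, approximate failure probability = {p_fail:.1e}")
```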

Wednesday, June 9, 2010

Riboflavin plus UVA irradiation: another inactivation approach to consider

by Dr. Ray Nims

Short-wavelength ultraviolet irradiation (UVC) has been used for years to disinfect air, surfaces, and thin liquid films because it is effective in inactivating a variety of bacteria, protozoa, phage, and viruses. More recently, UVC (100-280 nm) irradiation has been shown to be useful for viral risk mitigation in biologics manufacturing. UVC treatment of culture media and other liquid reagents has been demonstrated to inactivate potential adventitious viral contaminants, including those which are resistant to inactivation by other physical means (e.g., murine minute virus, calicivirus, porcine parvovirus, and SV40 [Wang et al., Vox Sanguinis 86:230-238, 2004]).

Another approach that has been used recently, especially in the ophthalmologic and blood products communities, is UVA (315-400 nm) irradiation in the presence of the photosensitizer riboflavin. Riboflavin associates with nucleic acids and photosensitizes them to damage by UVA through direct electron transfer, production of singlet oxygen, and generation of hydrogen peroxide. The treatment results in oxidation and ring-opening of purines and in DNA strand breakage. The advantage of riboflavin over other photosensitizers (e.g., methylene blue, psoralens) is that riboflavin (vitamin B2) is an endogenous physiological substrate.

 
The photosensitizer interacts with nucleic acids; upon irradiation, the results may include cross-linking, mutation, or strand breakage. (Source: Bryant and Klein)


Riboflavin/UVA treatment has been explored in ophthalmology applications such as infectious keratitis and keratomycosis. The typical treatment paradigm involves application of a solution of 0.1% riboflavin (as riboflavin-5-phosphate) followed by irradiation using 365 nm light (5 to 10 J/ml). The approach has shown effectiveness against a variety of pathogenic bacteria, including drug-resistant strains. Effectiveness against fungal pathogens requires combination therapy with amphotericin B.

In the blood products community, photosensitizer/UV treatment is being explored for pathogen reduction. For instance, riboflavin/UV treatment is being evaluated for platelet and plasma pathogen reduction, for prevention of graft versus host reactions, and for pathogen reduction in whole blood products. In the proprietary application (Mirasol PRT®), blood product pools are combined with riboflavin (final concentration 50 µM) and the solutions are irradiated with 6.24 J/ml broadband (265-370 nm) UV light. The technology has been shown to be effective for a variety of pathogenic bacteria and viruses (see Table 3 in the review by Bryant and Klein).

Will this approach be useful in the biopharma industry? It appears so. Recently, irradiation with UVA (365 nm) light in the presence of 50 µM riboflavin has been evaluated for controlled inactivation of gene transfer (adenovirus, adeno-associated virus, lentivirus) virus preparations. Complete inactivation was obtained in each case within 90 minutes.

Friday, May 28, 2010

Are Generic Pills Harder to Swallow?

By Ashley M. Jones

“What’s in a name? That which we call a rose, by any other name, would smell just as sweet.” It’s a good thing William Shakespeare lived long before the drug companies were around, or he could never have come up with such an amazing statement. Today, there are drugs, and then there are drugs; and more than what’s inside or its efficacy, it’s the name that seems to matter. If you were asked to choose between a branded drug and its generic equivalent, given that their costs were almost the same, you would most probably go for the branded version. So why are generic pills harder to swallow than their branded counterparts?


Although generic medicines are much cheaper than branded drugs (they are priced lower because they can be manufactured only after the patent runs its course, and so do not carry the R&D costs that the big pharmaceutical companies incur when they develop new drugs), at an equal price people prefer to buy the branded version, for various reasons.

For one, they may not want to call their doctor to verify that a generic equivalent is safe and bioequivalent to the branded drug; yet bioequivalence is the most important thing to confirm when you are buying a generic substitute for the drug you’ve been prescribed. Even if you know the active pharmaceutical ingredient in a drug, you may not be sure whether a generic with the same active ingredient can be used as a reliable and safe substitute. For example, certain medicines should not be substituted, such as drugs that have the same active pharmaceutical ingredient but are modified-release and immediate-release pills, respectively. These are different dosage forms, and hence not bioequivalent.

Also, some people may feel that generic drugs are not of the same quality as their branded equivalents. That is not true, of course, but perception is everything when it comes to drugs and medication. That’s why even placebos sometimes work: some diseases and illnesses are helped by the power of the mind. So if you believe a drug is inferior, your mind may block your recovery even though the drug is really effective.

The key to choosing the best generic drugs is to go with those that your doctor prescribes or recommends. If you’re sure of the quality, generic pills become much easier to both swallow and digest.

This article is contributed by Ashley M. Jones, who regularly writes on the subject of Online Pharmacy Technician Certification. She invites your questions, comments at her email address: ashleym.jones643@gmail.com.


RMC Pharmaceutical Solutions welcomes guest posts related to pharmaceuticals, biotechnology, medical devices and other related topics.

Wednesday, May 19, 2010

Using porcine trypsin in biologics manufacture?

by Dr. Ray Nims

On March 22, 2010, a press release from GlaxoSmithKline (GSK) announced that porcine circovirus 1 (PCV 1) DNA had been detected in their rotavirus vaccine. On May 6, Merck disclosed that it had found DNA fragments of both PCV types 1 and 2 in its rotavirus vaccine. The PCV 2 findings in Merck's vaccine may be of greater concern, due to the fact that this virus causes disease in pigs, while PCV 1 apparently does not. However, the relative amounts of PCV DNA found in the GSK vaccine appear to be much greater (the lab discovering the PCV DNA in the GSK vaccine did not detect any in the Merck vaccine), and the worry in this case is that some of the genomic material may be associated with infectious PCV 1 virus. In both cases, the presence of the PCV genomic material has been attributed to the use of porcine trypsin at some point in the vaccine manufacturing process.


The FDA convened an advisory committee meeting on May 7th to discuss the findings of PCV DNA in the two licensed rotavirus vaccines. What was the result of the advisory committee meeting? The advisory committee felt that the benefits of the rotavirus vaccines clearly outweigh the risks. This, added to the fact that there appears to be little human health hazard associated with these viruses, led to the FDA clearing the two vaccines for continued use on May 14th. The product labels will be updated to reflect the presence of the PCV DNA in these products. In the longer term, these products may need to be "reengineered" to remove the PCV DNA. This may involve the preparation of new Master and Working cell banks and thus will take some time.

Another likely outcome of the advisory committee’s meeting may be heightened expectations, going forward, for PCV screening of porcine raw materials and of Master and Working cell banks which were exposed to porcine ingredients (e.g., trypsin) at some point in their development. Porcine-derived raw materials which are used in the production of biologics are to be tested per 9 CFR 113.53 Requirements for ingredients of animal origin used for production of biologics for a variety of viruses of concern. In the case of ingredients of porcine origin, those viruses of concern are listed in 9 CFR 113.47 Detection of extraneous viruses by the fluorescent antibody technique. These include rabies, bovine viral diarrhea virus, REO virus, porcine adenovirus, porcine parvovirus, transmissible gastroenteritis virus, and porcine hemagglutinating encephalitis virus. While porcine circovirus may not be specifically mentioned in the 9 CFR requirements, it will be prudent to add a nucleic acid-based assay for detection of this virus to the porcine raw material testing battery going forward. Similarly, Master and Working cell banks exposed to porcine raw materials (e.g., trypsin) during their developmental history should be assayed for PCV prior to use.

Routine nucleic acid-based testing for PCV should detect the genomic sequences of this virus whether the PCV present in the test materials is intact and infectious or non-infectious. Now that this virus is one of concern to the FDA and to the public, performing the appropriate raw material and cell bank testing for it will most likely become an expectation for vaccine and biologics manufacturers.

Wednesday, May 12, 2010

Bending the Curve

By Dr. Scott Rudge

To understand the best ways to develop preparative and industrial scale adsorption separations in biotechnology, it’s critical to understand the thermodynamics of solute binding. In this blog, I’ll review some basics of the Langmuir binding isotherm. This isotherm is a fairly simplistic view of adsorption and desorption; however, it applies fairly well to typical protein separations, such as ion exchange and affinity chromatography.


A chemical solution that is brought into contact with a resin that has binding sites for that chemical will partition between the solution phase and the resin phase. The partitioning will be driven by some form of affinity or equilibrium that can be considered fairly constant at constant solution phase conditions. By “solution phase conditions”, I mean temperature, pH, conductivity, salt and other modifier concentrations. Changing these conditions changes the equilibrium partitioning. If we represent the molecule in solution by “c” and the same molecule adsorbed to the resin by “q”, then the simple mathematical relationship is:

q = Keq·c
If the capacity of the resin for the chemical is unlimited, then this is the end of the story: the equilibrium is “linear” and the behavior of the adsorption is easy to understand, as the dispersion is completely mass transfer controlled. An example of this is size exclusion chromatography, where the resin has no affinity for the chemical; it simply excludes solutes larger than the pore or polymer mesh length. For resins where there are discrete “sites” to which the chemical might bind, or a finite “surface” of some kind with which the chemical has some interaction, the equilibrium is described by:

q = Keq·c·S0
and the maximum capacity of the resin has to be accounted for with a “site” balance, such as shown below:

Stot = S0 + q
Where Stot represents the total number of binding sites, and S0 represents the number of binding sites not occupied by the chemical of interest. The math becomes a little more complicated when you worry about what might be occupying that site, or if you want to know what happens when the molecule of interest occupies more than one site at a time. We’ll leave these important considerations for another day. Typically, the total sites can be measured. Resin vendors use terms such as “binding capacity” or “dynamic binding capacity” to advertise the capacity of their resins. The capacity is often dependent on the chemical of interest. Combining the site balance with the equilibrium expression, the resulting relationship between c and q is no longer linear; it is represented by this equation:

q = Stot·Keq·c/(1 + Keq·c)
When c is small, the denominator of this equation becomes 1, and the equilibrium equation looks like the linear equilibrium equation. When c is large, the denominator becomes Keqc, and the resin concentration of the chemical is equal to the resin capacity, Stot. When c is in between small and large, the isotherm bends over in a convex shape. This is shown in the graph below.
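
A minimal numerical sketch of this isotherm, using the equation above with illustrative parameters (the three Keq values anticipate the three cases discussed next):

```python
# Sketch: Langmuir isotherm q = Stot*Keq*c / (1 + Keq*c).
# Stot and the Keq values are hypothetical.

Stot = 50.0  # resin capacity, e.g., mg per mL of resin

def q(c, Keq):
    return Stot * Keq * c / (1 + Keq * c)

for Keq, label in [(0.01, "weak binding (flow-through)"),
                   (1.0, "intermediate (separating)"),
                   (100.0, "strong binding (strip to elute)")]:
    row = ", ".join(f"c={c}: q={q(c, Keq):.2f}" for c in (0.1, 1.0, 10.0))
    print(f"{label}: {row}")
# Weak binding stays near-linear; strong binding saturates at Stot quickly.
```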

There are three basic conditions in preparative and industrial chromatographic operations. In the first, Keq is very low, and there is little or no binding of the chemical to the resin. This is represented by the red squares in the graph above. This is the case with “flow through” fractions in chromatography, and would generally be the case when the chemical has the same charge as the resin. In the third, Keq is very high, and the chemical is bound quantitatively to the resin, even at low concentrations. This is represented by the green triangles in the graph above. This is the case with chemicals that are typically only released when the column is “stripped” or “regenerated”. In these cases, the solution phase conditions are changed to turn Keq from a large number to a small number during the regeneration by using a high salt concentration or an extreme pH. The second case is the most interesting, and is the condition for most “product” fractions, where a separation is being made. That is, when the solution phase conditions are tuned so that the desired product is differentially adsorbing and desorbing, allowing other chemicals with slightly higher or lower affinities to elute either before or after the desired product, it is almost always the case that the equilibrium constant is not such that binding is quantitative or non-existent. In these cases, the non-linearity of the isotherm has consequences for the shape of the elution peak. We will discuss these consequences in a future blog.

In a “Quality-by-Design” world, these non-linearities would be understood and accounted for in the design of the chromatography operation. An excellent example of the resulting non-linearity of the results was shown by Oliver Kaltenbrunner in 2008.

Relying on linear statistics to uncover this basic thermodynamic behavior is a fool’s errand. However, using basic lab techniques (a balance and a spectrophotometer), the isotherm for your product of interest can be determined directly, and the chromatographic behavior understood. This is the path to process understanding!

Thursday, May 6, 2010

Epizootic hemorrhagic disease virus: a future troublemaker?


Epizootic hemorrhagic disease virus (EHDV) is a double-stranded RNA virus of the family Reoviridae, genus Orbivirus. It is a non-enveloped virus approximately 60-80 nm in size. This arbovirus is transmitted by biting midges of the genus Culicoides and is closely related to another orbivirus, the bluetongue virus. Two serotypes are endemic to cattle in North America (EHDV-1 and EHDV-2); the infections they cause tend to be subclinical (asymptomatic) and therefore may go undetected.

Infections in cattle are more prevalent in areas where infection is widespread within the local deer population. As shown in the figure below, the geographic distribution of EHDV and bluetongue virus infection in deer populations includes areas within the high plains and mountain states in which bovine serum production is high (Utah, Kansas, etc.).

From Daniel Mead, Risk of Introduction of New Vector-borne Zoonoses

There have been recent outbreaks of epizootic hemorrhagic disease in cattle in Indiana (2006) as well as other states; in Israel (2006); and in Turkey (2007).

Basis of Concern. EHDV has been isolated previously from a biologics manufacturing process employing a Chinese hamster ovary (CHO) cell substrate (Rabenau et al. Contamination of genetically engineered CHO-cells by epizootic haemorrhagic disease virus (EHDV). Biologicals 21, 207-214, 1993). The contamination was presumed, but not proven, to originate from use of a contaminated bovine serum in the manufacturing process.

Regulatory Expectations. EHDV is not mentioned specifically as a virus of concern for raw materials of bovine origin in 9CFR 113.47 (detection of extraneous viruses by the fluorescent antibody technique), although this regulation does require testing for the closely related bluetongue virus. EHDV would be expected to cause cytopathic effects in Vero cells, one of the detector cell lines used in the 9CFR 113.47 assay; the assay should therefore detect the virus in grossly contaminated bovine sera.

Mitigating Risk. Elimination of animal-derived materials (especially bovine sera) from the manufacturing process should reduce the risk of encountering this virus. If this is not possible, treatment of the sera should be considered. Gamma-irradiation of frozen serum at the dosages normally used should be effective, judging from results obtained with REO virus, another member of the family Reoviridae (Gauvin, 2009).

Conclusions. EHDV has been found previously to contaminate a biologics manufacturing process employing a CHO-cell substrate; it is therefore a virus of concern for the biopharmaceutical industry. The risk of contaminating biological products with EHDV through use of bovine-derived materials such as sera may increase in the event of future outbreaks of this disease in cattle in serum-producing regions of North America or Australia. The risk may be mitigated through gamma-irradiation of bovine sera and through viral purification processes capable of removing and inactivating non-enveloped viruses such as MMV and REO.

Wednesday, April 28, 2010

FDA regulation of “combination” products

by Dr. Ray Nims

The FDA’s Office of Combination Products (OCP) was established in 2002 to shepherd combination products, those composed of some combination of drug, biological, and/or device, through the review and regulation process. An example of a combination product is the drug-releasing stent (see figure below). The OCP does not conduct the reviews itself, but is responsible for: assigning the product to the appropriate FDA center; coordinating reviews involving more than one center; and working with agency centers to develop guidance and regulations that make the regulation of combination products “as clear, consistent and predictable as possible”.

An example of a drug-releasing stent.

The complexity of combination products arises because the efficacy and safety of the individual constituent parts (i.e., the drug, biologic, or device) must be considered alone, as well as within the context of the combination product. Because of this complexity, there is no single development paradigm for all combination products. The FDA’s guidance recommends that combination product developers consider any prior approval/clearance of the constituent parts, as well as how their testing may be influenced by the interaction of the components. Factors that should be considered (from the guidance) include:

• Are the constituent parts already approved for an indication?
• Is the indication for a given constituent part similar to that proposed for the combination product?
• Does the combination product broaden the indication or intended target population beyond that of the approved constituent part?
• Does the combination product expose the patient to a new route of administration or a new local or systemic exposure profile for an existing indication?
• Is the drug formulation different from that used in the already approved drug?
• Does the device design need to be modified for the new use?
• Is the device constituent used in an area of the body different from that of its existing approval?
• Are the device and drug constituents chemically, physically, or otherwise combined into a single entity?
• Does the device function as a delivery system, a method to prepare a final dosage form, and/or does it provide active therapeutic benefit?
• Is there any other change in design or formulation that may affect the safety/effectiveness of any existing constituent part or the combination product as a whole?
• Is a marketed device being proposed for use with a drug constituent that is a new molecular entity?
• Is a marketed drug being proposed for use with a complex new device?

As with individual drugs, biologics, and devices, the FDA will require that combination products be manufactured according to current good manufacturing practices. In most cases, a single investigational application (IND or IDE) is submitted to enable the clinical trials planned for the combination product. The science and technology associated with the combination product should drive the selection of statistical approaches, sample sizes, study endpoints, and methods for measuring the active principle and evaluating possible interactions between components. It may be best to involve the FDA in these decisions.

Complexity for the regulation of combination products also stems from the fact that separate manufacturing processes may exist for the various constituent parts. Potential changes in any of the component manufacturing processes, subsequent to initiating clinical trials or post-market, will need to be evaluated for possible effects on the safety and efficacy of the combination product.

Combination products represent therapeutic modalities with great promise for advancing health care. We expect to see more and more pharmaceutical activity in this area going forward.

Friday, April 23, 2010

Is There Ever a Good Time for Filter Validation?

By Dr. Scott Rudge

What is the right time to perform bacterial retention testing on a sterile filter for an aseptic drug product process? I usually recommend that this be done prior to manufacturing sterile product. After all, providing for the sterility of the dosage form of an injectable drug is the first and foremost purpose of drug product manufacturing.

But there are some uncomfortable truths concerning this recommendation:

1. Bacterial retention studies require large samples, on the order of liters.

2. Formulations change between first-in-human and commercial manufacturing, requiring revalidation of bacterial retention.

3. The chances of a formulation change causing bacteria to cross an otherwise integral membrane are primarily theoretical; the “risk” would appear to be low.

On the other hand:

1. The most frequent FDA sterile drug product inspection citation in 2008 was “211.113(b) Inadequate validation of sterile manufacturing” (source: presentation by Tara Gooel of the FDA, available to members on the ISPE website).

2. The FDA identifies aseptic processing as the “top priority for risk based approach” due to the proximal risk to patients.

3. The FDA continues to identify smaller and smaller organisms that might pass through a filter.

Is the issue serious? I think so; the risk of infection to patients is one of the few consequences that pharmaceutical manufacturers can directly link between manufacturing practice and patient safety, and making that link is one of the goals of Quality by Design. Is the safety threat from changes to filter properties and microbe size in the presence of slightly different formulations substantial? I don’t think so, especially not in proportion to the cost of demonstrating this specifically. But the data aren’t available to test this hypothesis, because the industry has no shared database demonstrating that a range of aqueous protein solutions has no effect on bacterial retention. There is really nothing proprietary about these data, and the only organizations that benefit from keeping them confidential are the testing labs. Sharing these data should benefit all of us. An organization like PDA or ISPE should have an interest in pooling the data and then making the case to the FDA and EMEA that the vast majority of protein formulations have been bracketed by testing that already exists, and that revalidation of bacterial retention on filters following formulation changes is mostly superfluous.

In the meantime, if you don’t have enough product to perform bacterial retention studies, at least check the excipients, as in a placebo or diluent buffer. A filter failure is far more likely to be due to the excipients than to the active ingredient, which is typically present in much smaller amounts (by weight and by molarity). By doing this, you are both protecting your patients in early clinical testing and reducing your risk with regulators.

Wednesday, April 14, 2010

Oops, adventitious viral DNA fragments in a vaccine

by Dr. Ray Nims

On March 22, 2010, a press release from GlaxoSmithKline (GSK) announced that DNA from porcine circovirus had been detected in their rotavirus vaccine. According to GSK, the DNA “was first detected following work done by a research team in the US using a novel technique for looking for viruses and then confirmed by additional tests conducted by GlaxoSmithKline”. As a result of this finding, the FDA “is recommending that US clinicians and public health professionals temporarily suspend the use of Rotarix as a precautionary measure. The FDA have also stated that they intend to convene an advisory committee, within approximately four to six weeks, to review the available data and make recommendations on rotavirus vaccines licensed in the USA. The FDA will also seek input on the use of new techniques for identifying viruses in vaccines.” The EMEA, on the other hand, does not appear to consider this finding to be a safety concern, citing the fact that porcine circovirus is not infectious for human cells and does not cause disease in humans.



Porcine circovirus
Source: Meat and Livestock Commission, UK

What is porcine circovirus? Porcine circovirus (PCV) is a member of the family Circoviridae, among the smallest of the known animal DNA viruses. It is approximately 17 nm in diameter, non-enveloped, with icosahedral symmetry. The virus was originally identified as a noncytopathic contaminant of the PK-15 porcine kidney cell line (Tischer et al., Zentralblatt Bakt Hyg A 226:153-167, 1974). Like many very small, non-enveloped viruses, PCV represents a challenge for removal and inactivation.

Why is this finding coming to light now? Stated another way, why wasn’t the PCV DNA detected when the vaccine was initially tested and approved for human use? The press release wasn’t specific as to the method used. It has subsequently been revealed, however, that the fragments were detected using sequence-independent amplification (deep sequencing; or, as Eric Delwart calls it, metagenomics). The resulting library of amplified sequences is characterized by BLAST searching using identification algorithms. Confirmation in this case was obtained using microarray and PCR analyses. The sequencing techniques have been available for some time and have proven useful for identifying viruses in academic settings, though until fairly recently they had not been applied to safety testing of biopharmaceuticals, due to the relatively high costs associated with the analyses.
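As a rough illustration of the characterization step (not the actual pipeline used in this case), here is a minimal Python sketch that submits a single read to NCBI BLAST using Biopython; the read shown is a made-up placeholder rather than a real PCV sequence, and real metagenomic analyses batch thousands of reads with additional filtering and assembly steps:

```python
# Minimal sketch of BLAST-based read identification with Biopython
# (requires network access to NCBI). The read below is a hypothetical
# placeholder, not a sequence from any vaccine study.
from Bio.Blast import NCBIWWW, NCBIXML

read = "ATGCCCAGCAAGAAGAATGGAAGAAGCGGACCCCAACC"  # hypothetical contaminant read

handle = NCBIWWW.qblast("blastn", "nt", read)     # submit the query to NCBI
record = NCBIXML.read(handle)                     # parse the XML result

# Report the top-scoring database hits for this read
for alignment in record.alignments[:3]:
    best_hsp = alignment.hsps[0]
    print(f"{alignment.title[:70]}  E={best_hsp.expect:.2g}")
```

Hits with very low E-values against viral database entries are what would flag a read as, say, porcine circovirus.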

The finding of viral DNA should not be equated with detecting the infectious virus in the product. The sequence-independent amplification, microarray, and virus-specific PCR assays used can detect viral nucleic acid, but as normally performed do not indicate whether infectious virus is present. Generally, with the possible exception of transforming viruses, it is the infectious virus that is of concern, not its DNA. Efforts to assess the presence of intact viral genomic material and of infectious porcine circovirus in this vaccine are most likely underway at this time.

The presence of the PCV viral sequences has provisionally been attributed to the use of porcine trypsin during the culture of the Vero cell substrate in which the vaccine is manufactured. The trypsin used had been gamma-irradiated to inactivate adventitious viruses prior to use. While it would be comforting to believe that the PCV DNA may simply have reflected carryover of non-infectious, lethally-irradiated PCV1 from the trypsin, the fact is that gamma-irradiation is not very effective at inactivating this small, non-enveloped virus (Plavsic and Bolin. Resistance of porcine circovirus to gamma irradiation. Biopharm Int. 14:32-36, 2001). In the case of porcine circovirus, there is little evidence to indicate that the virus is infectious or pathogenic for humans. So regardless of the outcome of the various ongoing studies, it is likely that use of the GSK rotavirus vaccine will be reinstated after the FDA convenes and discusses the implications of this finding.

I predict that there will be more and more revelations of this kind in the future, as sequencing techniques reveal stretches of viral or other contaminant DNA within samples of biopharmaceuticals. I would hate to see revelations like this impede the use of the sequencing technologies going forward, as these technologies are going to be very useful to the industry as rapid detection methods for contaminant identification.