
Wednesday, January 9, 2013

BVDV in commercial bovine serum...still?

by Dr. Ray Nims

One of the animal-derived materials (ADM) most commonly utilized for cell culture and for production of biologicals manufactured using cell cultures is bovine serum (most typically calf serum or fetal bovine serum). There is an inherent risk of introduction of adventitious contaminants (viruses and mollicutes) associated with the use of culture media containing serum. In fact, most of the viral contaminants that have been isolated from biologics bulk harvests (including REO type 2, Cache Valley virus, epizootic hemorrhagic disease virus, and vesivirus 2117) are believed to have been introduced via contaminated bovine serum. Another potential contaminant that may be introduced via bovine serum is the pestivirus bovine viral diarrhea virus (BVDV).
BVDV is a medium-sized (40-70 nm), enveloped, single-stranded RNA virus of the family Flaviviridae. The regulatory requirements pertaining to the use of bovine materials for manufacturing biologics (9 CFR 113.47) contain specific instructions related to the detection of BVDV contamination. EMEA regulations require not only the testing of bovine sera for infectious BVDV, but also assessment of the presence of antibodies to BVDV. Neutralizing antibodies for BVDV are of concern since their presence could theoretically interfere with the detection of the virus in testing done for release of the serum. Although testing of bovine serum to be used for manufacturing biologicals is a regulatory expectation, experience has indicated that such testing is fraught with false negative results. The relatively large volumes of serum that comprise a given batch are obtained by pooling large numbers of individual serum draws, and there is a chance of non-homogeneous contamination from a limited number of BVDV-infected draws.
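The pooling problem can be made concrete with a little arithmetic. As a sketch (the per-draw prevalence used here is an assumed example value, not a figure from the article), the probability that a pooled lot contains at least one BVDV-positive draw grows quickly with pool size:

```python
# Hypothetical illustration of pooled-lot contamination risk.
# 'prevalence' is the assumed fraction of individual serum draws
# carrying BVDV; it is an example value, not a measured one.
def prob_contaminated_pool(prevalence: float, draws_per_pool: int) -> float:
    """P(at least one positive draw) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - prevalence) ** draws_per_pool

# Even a 1% per-draw prevalence makes a 500-draw pool almost certainly
# positive, while the virus may be too dilute for infectivity assays.
print(round(prob_contaminated_pool(0.01, 500), 4))
```

This is one way to see why large pooled lots are frequently positive even when the great majority of individual draws are clean.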
How frequently has infectious BVDV been detected in commercially available bovine serum? What percentage of serum lots has been found to contain neutralizing antibodies to BVDV? Has BVDV genomic RNA invariably been found in bovine serum when tested by RT-PCR? These questions have been addressed by various authors over the past four decades.

Infectious BVDV continues to be detected in fetal bovine serum samples up to the present time. This reflects the fact that BVDV is distributed in cattle worldwide, subclinical infections with non-cytopathic BVDV are common in herds, and large serum pools are likely to be non-homogeneously contaminated with BVDV-infected serum. Over the past four decades, 667 lots of commercial fetal bovine serum have been examined for the presence of infectious BVDV in studies reported in the literature. Positive results have been reported for 29% of the lots examined, although the variability in frequency of detection has been quite large, as indicated by the range in the values that have been obtained in the various studies. The percentage of isolates comprising non-cytopathic BVDV has ranged from 98-100%, reflecting the continuing predominance of this variant over the cytopathic strains in cattle herds.
The frequency of detection of neutralizing anti-BVDV antibodies has ranged from 61% to 98% of fetal bovine serum lots. The overall number of commercial fetal bovine serum lots that have been evaluated for neutralizing antibodies to BVDV is 182, with antibodies being detected in 70% of these lots. 
Genomic RNA for BVDV has been detected in 79% of the 155 commercial fetal bovine serum lots that have been evaluated since 1996, when the RT-PCR methodology was initially applied to this question.

The risk of introducing infectious BVDV through contaminated FBS may be mitigated through gamma-irradiation of the FBS. BVDV is relatively sensitive to inactivation by this treatment. It is therefore unusual to detect infectious BVDV in gamma-irradiated serum, though in one case in the literature, a single lot of irradiated serum out of 9 lots tested contained infectious BVDV. This inactivation strategy used to mitigate risk of introducing infectious BVDV is not expected to reduce the frequency of detecting neutralizing anti-BVDV antibodies or genomic RNA for BVDV in the treated serum lots.

See Nims and Plavsic, "The pervasiveness of bovine viral diarrhea virus in commercial bovine serum," BioProcessing Journal, Winter 2012/2013, pp. 19-26, for the individual study results used to prepare the table shown above.
 

Wednesday, October 12, 2011

Ridding serum of viruses with gamma irradiation: part 2

by Dr. Ray Nims

In a previous posting, we described the susceptibility of viruses from various families to inactivation in frozen serum treated with gamma irradiation (data from the literature). Gamma irradiation is a commonly employed risk mitigation strategy for biopharmaceutical manufacture, and indeed the European Agency for the Evaluation of Medicinal Products in its Note for guidance on the use of bovine serum states that some form of inactivation (such as gamma irradiation) is expected. The use of non-treated serum in the production of biologics for human use must be justified, due to the potential for introducing a viral contaminant through use of this animal-derived material.

But how effective is this particular risk mitigation strategy? To answer this, we expressed the susceptibility to inactivation by gamma radiation for a series of 16 viruses in terms of log10 reduction in titer per kilogray (kGy) of gamma radiation. With this value in hand, one may easily calculate the log10 reduction in titer of a given virus that might be expected following irradiation of frozen serum at any given kGy dose (serum is typically irradiated at a dose of 25-40 kGy).
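The dose arithmetic described above is simple enough to sketch. The per-kGy rate used here is an invented example value, not one of the 16 measured rates from the study:

```python
# Expected log10 titer reduction = (log10 reduction per kGy) x (dose in kGy).
def expected_log10_reduction(rate_log10_per_kgy: float, dose_kgy: float) -> float:
    return rate_log10_per_kgy * dose_kgy

# A virus inactivated at 0.2 log10 per kGy, irradiated at a typical 30 kGy dose:
print(expected_log10_reduction(0.2, 30.0))  # 6.0 log10 reduction
```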

The next step is to try to understand the results obtained with this relatively limited series of viruses, so that we may extrapolate the results to other viruses. If we take a look at the viral characteristics that might confer susceptibility to inactivation by gamma irradiation, will we find what we are looking for?

Viral characteristics that have been postulated to contribute to susceptibility or resistance to inactivation by gamma irradiation include: 1) radiation target size (genome size, particle size, genome shape, segmentation of the genome); 2) strandedness (in double-stranded genomes the genomic information is recorded in duplicate, so loss of information on one strand may not be as damaging as it would be in the case of a single-stranded genome); 3) presence or absence of a lipid envelope (non-enveloped viruses are resistant to a variety of chemical and physical inactivation approaches); and 4) genome type (RNA vs. DNA). The characteristics for our series of 16 viruses are displayed in the following table.



For evaluating the contribution of radiation target size, we are able to make use of quantitative values available for each virus for genome size (in nucleotides) and particle size (in nm). Plotting genome size vs. log10 reduction in titer per kGy yields the results shown below. The coefficient of determination obtained is just 0.32, suggesting that factors other than (or in addition to!) genome size in nucleotides must be important.



A somewhat better concordance is obtained between particle size and log10 reduction in titer per kGy as shown below. The fit line in this case is non-linear, with a coefficient of determination of 0.60.
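For readers who want to reproduce this kind of analysis, a coefficient of determination like the 0.32 and 0.60 quoted above can be computed as follows. The data points here are invented for illustration; the article's own values come from its 16-virus dataset:

```python
# R^2 = 1 - SS_res / SS_tot, comparing observed values to fitted predictions.
def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Example with made-up inactivation-rate data and a hypothetical fit:
print(round(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]), 2))  # 0.98
```

The same function applies whether the fit line is linear (as for genome size) or non-linear (as for particle size); only the predicted values change.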



The contributions of genome type (RNA vs. DNA), genome strandedness, and lipid envelope (present or absent) to susceptibility to inactivation by gamma irradiation appear to be minimal, within this limited series of viruses. As a result, we are left with the conclusion that the most clear, albeit incomplete, determinant of susceptibility to inactivation by gamma irradiation appears to be particle size. This probably explains the resistance to inactivation displayed by the extremely small viruses such as circoviruses, parvoviruses, and caliciviruses. It is less clear why the polyomaviruses (e.g., SV-40 at 40-50 nm) are so resistant to gamma irradiation while certain of the picornaviruses (25-30 nm) are less resistant to inactivation. More work in this area is needed to better understand all of the factors that contribute to susceptibility to gamma radiation inactivation in viruses and bacteriophage.

This information was excerpted in part from Nims et al., Biologicals (2011).

Friday, January 21, 2011

Remember....bacteriophage are viruses too

by Dr. Ray Nims

Are you using bacterial cells to produce a biologic? Do not make the mistake of thinking that your upstream process is safe from infection by adventitious viruses. True, you are not required to test for the usual viruses of concern using a lot-release adventitious virus assay. But bacterial production systems are susceptible to introduction of viruses just as mammalian cell processes are; in this case, the viruses just happen to be referred to as bacteriophage. Otherwise, these putative contaminants share the same nasty property exhibited by the viruses that contaminate mammalian cell processes: their small size (24-200 nm) allows them to readily pass through the filters used to “sterilize” process solutions. So media, buffers, induction agents, vitamin mixes, trace metal mixes, etc. that are fed into the fermenter without proper treatment can introduce a bacteriophage. Especially worrisome in this regard are raw materials that are generated through bacterial fermentation (such as amino acids and antibiotics). A fermenter infected with a lytic phage exhibits a clear signal that the bacterial substrate is unhappy. The trick then is to discover where the phage originated and to mitigate the risk of experiencing it again.

How can you mitigate the risk of experiencing a bacteriophage infection? Many of the same strategies used to protect mammalian cell processes may be applicable to the bacterial fermentation world. Raw materials and/or process solutions may be subjected to gamma-irradiation, to ultraviolet light in the C range, to prolonged heating or to high temperature short time treatment, to viral filtration, etc. In addition, mitigation of risk of bacteriophage contamination may require filtration of incoming gases using appropriate filters.

A sampling of the data available on inactivation of bacteriophage by various methods is shown in the table below. The literature is extensive, and as with viral inactivation, the inactivation of phage by certain of the methods (e.g., UVC, gamma-irradiation) may be dependent both upon the matrix in which the phage is suspended and upon the physical properties of the phage (e.g., genome or particle size, strandedness, etc.). For fairly dilute aqueous solutions, gamma-irradiation, UVC treatment, or parvovirus filtration should represent effective inactivation/removal methods. HTST at temperatures effective for parvoviruses (102°C, 10 seconds) should be effective for most bacteriophage, although this is an area that needs further exploration.


Mitigating the risk of experiencing a bacteriophage contamination of a bacterial fermentation process is possible if one remembers that bacteriophage are similar to mammalian viruses. Strategies that are effective for small, non-enveloped mammalian viruses (i.e., the worst case among mammalian viruses) should also be effective for most bacteriophage.

A possible exception to this is prophage. In analogy with the presence of endogenous retroviruses in certain mammalian cells (i.e., rodent, human, monkey), there is a possibility of encountering integrated bacteriophage (prophage) in certain bacterial cell lines. Like endogenous retroviruses, prophage may result in the production of infectious particles under certain conditions. This phenomenon deserves some discussion, but this will have to be deferred to a future blog.

References: Purtle et al., 2006; Ward, 1979; Sommer et al., 2001.

Wednesday, November 3, 2010

Fry those mollicutes!

By Dr. Ray Nims

It is not only viruses that may be introduced into biologics manufactured in mammalian cells using bovine sera in upstream cell growth processes. The other real concern is the introduction of mollicutes (mycoplasmas and acholeplasmas). Mollicutes, like viruses, are able to pass through the filters (including 0.2 micron pore size) used to sterilize process solutions. Because of this, filter sterilization will not assure mitigation of the risk of introducing a mollicute through use of contaminated bovine or other animal sera in upstream manufacturing processes.

Does mycoplasma contamination of biologics occur as a result of use of contaminated sera? The answer is yes. Most episodes are not reported to the public domain, but occasionally we hear of such occurrences. Dehghani and coworkers reported the occurrence of a contamination with M. mycoides mycoides bovine group 7 that was proven to have originated in the specific bovine serum used in the upstream process (Case studies of mycoplasma contamination in CHO cell cultures. Proceedings from the PDA Workshop on Mycoplasma Contamination by Plant Peptones. Parenteral Drug Association, Bethesda, MD. 2007, pp. 53-59). Contamination with M. arginini and Acholeplasma laidlawii attributed to the use of specific contaminated lots of bovine serum has also occurred.

Fortunately, the risk of introducing an adventitious mollicute into a biologics manufacturing process utilizing a mammalian cell substrate may be mitigated by gamma-irradiating the animal serum prior to use. This may be done in the original containers while the serum is frozen. Unlike the case for viruses, in which the efficacy of irradiation for inactivation may depend upon the size of the virus, mollicute inactivation by gamma irradiation has been found to be highly effective (essentially complete), regardless of the species of mollicute. The radiation doses required for inactivation are relatively low compared to those required for viruses (e.g., 10 kGy or less, compared to 25-45 kGy for viruses). The gamma irradiation that is performed by serum vendors is typically in the range of 25-40 kGy. This level of radiation is more than adequate to assure complete inactivation of any mollicutes that may be present in the serum. For instance, irradiation of calf serum at 26-34 kGy resulted in ≥6 log10 inactivation of M. orale, M. pneumoniae, and M. hyorhinis. In the table below I have assembled the data available on inactivation of mollicutes in frozen serum by gamma-irradiation.


So, the good news is that gamma irradiation of animal serum that is performed to mitigate the risk of introducing a viral contaminant will also mitigate the risk of introducing a mollicute contaminant. If the upstream manufacturing process cannot be engineered to avoid use of animal serum, the next best option is to validate the use of gamma irradiated serum in the process.  In fact, the EMEA Note for guidance on the use of bovine serum in the manufacture of human biological medicinal products strongly recommends the inactivation of serum using a validated and efficacious treatment, and states that the use of non-inactivated serum must be justified.


References: Gauvin and Nims, 2010; Wyatt et al. BioPharm 1993;6(4):34-40; Purtle et al., 2006

Wednesday, October 27, 2010

Risky Business

By Dr. Scott Rudge

“Risk Analysis” is a big topic in pharmaceutical and biotech product development. The International Conference on Harmonization (ICH) has even issued a guidance document on the subject, Q9 “Quality Risk Management”. Despite the documentation available and the regulatory emphasis, these tools remain poorly understood. They are used to justify limiting the extent of fundamental understanding that can be gained on a process, while simultaneously being treated as a cure-all for the management challenges that we face in pharmaceutical process development.

Risk analysis focuses only on failure modes. Failure mode effect and analysis (FMEA) was developed by the military, first published as MIL-P-1629, “Procedures for Performing a Failure Mode, Effects and Criticality Analysis” in November 1949. The procedure was established to discern the effects of system failure on mission success and personnel safety. Since then, we have found a much broader range of applications for this type of analysis. The methodology is used in the manufacture of airplanes, automobiles, software and elsewhere. People have devised “design” FMEAs and “process” FMEAs (DFMEA and PFMEA). These are all great tools, and help us to design and build better, safer products with better, safer processes.

Where Risk Analysis Falls Down

The FMEA is such a great hammer, it can make everything look like a nail. And when regulatory authorities are encouraging companies to use Risk Analysis for product design and process validation, the temptation to apply it further can be overwhelming. In particular, risk analysis is used in three inappropriate ways, in my estimation:

 
  1. Decision analysis
  2. Project management
  3. Work avoidance

Quite often, risk analysis tools are used to guide decisions. Here, the pros and cons of selecting a particular path are weighed, using the three criteria of risk analysis (occurrence, detectability and severity) as guides. However, not every decision path leads to a failure mode or an outcome that could be measured as a consequence. And detectability and occurrence may not be the only or most appropriate factors by which to weight a consequence. There are excellent decision tools that are designed specifically for weighting and evaluating preference criteria. A very simple tool is the Kepner Tregoe (KT) decision matrix. Decision analysis uses very detailed descriptions of the decision to model the potential outcomes. The KT decision matrix is sufficient for many decisions, large and small. But if you really want to study the anatomy of a decision, decision analysis is the most satisfactory method.
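As a sketch of the KT approach mentioned above (the criteria, weights, and scores here are all invented for illustration), a weighted decision matrix reduces to a weighted sum per alternative:

```python
# Minimal Kepner-Tregoe-style decision matrix: each alternative is scored
# 1-10 against each criterion, scores are weighted, and totals are compared.
def kt_score(weights, scores):
    return sum(w * s for w, s in zip(weights, scores))

weights = [5, 3, 2]  # hypothetical criteria weights: e.g., cost, speed, risk
alternatives = {"A": [7, 8, 6], "B": [9, 5, 7]}
best = max(alternatives, key=lambda name: kt_score(weights, alternatives[name]))
print(best, {name: kt_score(weights, s) for name, s in alternatives.items()})
```

Note that, unlike the FMEA, nothing here requires a failure mode; the criteria can express preference, not just risk.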
 
FMEAs are sometimes inappropriately applied in project management, to assign prioritization and the order in which tasks should be completed. This may be handy in some instances, but is misleading or inappropriate in others. The riskiness of an objective should not be the sole determinant of its prioritization. Quite often, the fundamentals or building blocks need to be in place in order to best address the riskiest proposition in a project. Prematurely addressing the pieces of a project that present the greatest risk of failure may lead to that failure. On the other hand, being “fast to fail” and eliminating projects that might not bear fruit, with the least amount of resources spent, is critical to overall company or project success. Project management requires consideration of failure modes, but also resource programming and timeline management. FMEAs can be an element of that formula, but should not be the focus.
 
Finally, and perhaps most painfully, FMEAs are used to justify avoiding work. Too often, risk analysis is applied to a problem, not to identify the elements that are most deserving of attention, but to justify neglecting areas that do not rank sufficiently high in the risk matrix. Sometimes, the smallest risks are the easiest to address, and in addressing them, variability can be removed from the process. Variability is the “elephant in the room” when it comes to pharmaceutical quality, as has been concisely pointed out by Johnston and Zhang.
 
The FMEA is a stellar tool, and it is applicable to problems in design, process and strategy across many industries. Its quantitative feel makes practitioners feel as though they are actually measuring something, and can make fine distinctions between risks that they were unable to articulate before. However, the FMEA can be applied rather too widely, or sometimes unscrupulously, yielding bad data and bad decisions.

Thursday, June 17, 2010

Informing the FMEA

By Dr. Scott Rudge
Risk reduction tools are all the rage in pharmaceutical, biotech and medical device process/product development and manufacturing. The International Conference on Harmonization has enshrined some of the techniques common in risk management in their Q9 guidance, “Quality Risk Management”. The Failure Modes and Effects Analysis, or FMEA, is one of the most useful and popular tools described. The FMEA stems from a military procedure, published in 1949 as MIL-P-1629, and has been applied in many different ways. The method most used in health care involves making a list of potential ways in which a process can “fail”, or produce out of specification results. After this list has been generated, each failure mode in the list is assessed for its proclivity to “Occur”, “Be Severe” and “Be Detected”. Typically, these are scored from 1 to 10, with 10 being the worst case for each category, and 1 being the best case. The scores are multiplied together, and the product (mathematically speaking) is called the “Risk Priority Number”, or RPN. Then, typically, development work is directed towards the failure modes with the highest RPN.
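The RPN arithmetic described above is easy to sketch. The failure modes and the 1-10 scores below are hypothetical:

```python
# RPN = occurrence x severity x detectability, each scored 1-10;
# development work is then directed at the highest RPNs first.
def rpn(occurrence: int, severity: int, detectability: int) -> int:
    return occurrence * severity * detectability

failure_modes = {
    "filter breach":   (2, 9, 7),  # rare, severe, hard to detect
    "mislabeled vial": (4, 6, 3),
    "pump drift":      (6, 4, 5),
}
ranked = sorted(failure_modes, key=lambda k: rpn(*failure_modes[k]), reverse=True)
print([(k, rpn(*failure_modes[k])) for k in ranked])
```

The multiplication is what gives the FMEA its quantitative feel; the difficulty, as discussed next, is in justifying the three input scores.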

The problem is, it’s very hard to assign a ranking from 1 to 10 for each of these categories in a scientific manner. More often, a diverse group of experts from process and product development, quality, manufacturing, regulatory, analytical, and other stakeholding departments convenes a meeting and assigns rankings based on experience. This is done once in the product life-cycle, and never revisited as actual manufacturing data start to accumulate. And while large companies with mature products have become more sophisticated, and can pull data from other similar or “platform” products, small companies and development-stage companies can really only rely on the opinion of experts, internal or external. The same considerations apply to small-market or orphan drugs.

Each of these categories can probably be informed by data, but by far the easiest to assign a numerical value to is the “Occurrence” ranking. A typical Occurrence ranking chart might look something like this:

These rankings come from “piece” manufacturing, where thousands to millions of widgets might be manufactured in a short period of time. This kind of manufacturing rarely applies in the health care industry. However, this evaluation fits very nicely with the Capability Index analysis.

The Capability Index is calculated by dividing the variability of a process into its allowable variable range. Or, said less obtusely, dividing the specification range by the standard deviation of the process performance. The capability index is directly related to the probability that a process will operate out of range or out of specification. This table, found on Wikipedia (my source for truth), gives an example of the correlation between the calculated capability index to the probability of failure:

As a reminder, the capability index is the upper specification limit minus the lower specification limit divided by six times the standard deviation of the process. The two tables can be combined to be approximately:
How many process data points are required to calculate a capability index? Of course, the larger the number of points, the better the estimate of average and standard deviation, but technically, two or three data points will get you started. Is it better than guessing?
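The capability-index arithmetic above can be sketched directly. The process data below are invented, and the out-of-spec probability assumes a centered, normally distributed process, as in the table referenced:

```python
from math import erf, sqrt
from statistics import stdev

# Cp = (USL - LSL) / (6 * sample standard deviation).
def capability_index(data, lsl, usl):
    return (usl - lsl) / (6.0 * stdev(data))

# Two-sided out-of-spec probability for a centered normal process:
# P = 2 * (1 - Phi(3 * Cp)), where Phi is the standard normal CDF.
def prob_out_of_spec(cp):
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 2.0 * (1.0 - phi(3.0 * cp))

data = [9.8, 10.1, 10.0, 9.9, 10.2]  # hypothetical assay results
cp = capability_index(data, lsl=9.0, usl=11.0)
print(round(cp, 2), prob_out_of_spec(cp))
```

A Cp of 1.0 corresponds to roughly 0.27% out-of-spec (the classic 2700 ppm), which is the kind of correspondence the combined table expresses; this, in turn, maps onto an Occurrence ranking.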

Friday, April 23, 2010

Is There Ever a Good Time for Filter Validation?

By Dr. Scott Rudge

What is the right time to perform bacterial retention testing on a sterile filter for an aseptic process for Drug Product? I usually recommend that this be done prior to manufacturing sterile product. After all, providing for the sterility of the dosage form for an injectable drug is first and foremost the purpose of drug product manufacturing.

But there are some uncomfortable truths concerning this recommendation:

1. Bacterial retention studies require large samples, on the order of liters

2. Formulations change between first-in-human and commercial manufacturing, requiring revalidation of bacterial retention

3. The chances of a formulation change causing bacteria to cross an otherwise integral membrane are primarily theoretical; the “risk” would appear to be low

On the other hand:

1. The most frequent sterile drug product inspection citation in 2008 by the FDA was “211.113(b) Inadequate validation of sterile manufacturing” (source: presentation by Tara Gooel of the FDA, available on the ISPE website to members)

2. The FDA identifies aseptic processing as the “top priority for risk based approach” due to the proximal risk to patients

3. The FDA continues to identify smaller and smaller organisms that might pass through a filter

Is the issue serious? I think so; risk of infection to patients is one of the few direct consequences that pharmaceutical manufacturers can directly link between manufacturing practice and patient safety, which is one of the goals of Quality by Design. Is the safety threat from changes to filter properties and microbe size in the presence of slightly different formulations substantial? I don’t think so, especially not in proportion to the cost of demonstrating this specifically. But the data aren’t available to test this hypothesis, because the industry has no shared database demonstrating that a range of aqueous-based protein solutions has no effect on bacterial retention. There is really nothing proprietary about these data, and the only organizations that benefit from keeping them confidential are the testing labs. Sharing these data should benefit all of us. An organization like PDA or ISPE should have an interest in pooling this data and then making a case to the FDA and EMEA that the vast majority of protein formulations have been bracketed by testing that already exists, and that the revalidation of bacterial retention on filters following formulation changes is mostly superfluous.

In the meantime, if you don’t have enough product to perform bacterial retention studies, at least check the excipients, as in a placebo or diluent buffer. A filter failure is far more likely due to the excipients than the active ingredient, which is typically present in much smaller amounts (by weight and molarity). By doing this, you are both protecting your patients in early clinical testing and reducing your risk with regulators.

Tuesday, August 4, 2009

Do You Use Risk Assessments in Auditing?

Audits are a critical component of quality systems, but are they guided by formal assessments of risk to your products? In this world of ICH Q9, can you offer even a semi-quantitative justification for your audit priorities? We have spoken to many people in the industry, and almost all mention a risk assessment being undertaken prior to an audit. But we have not found many people that formalize that risk assessment, or keep it updated from audit to audit. Even fewer communicate their scoring of risk to either their internal clients or the vendor that has been audited.

A new trend in auditing is to use a form of risk assessment both before and after the audit. A popular form is the Failure Modes and Effects Assessment, or FMEA (see, for example, http://www.sre.org/pubs/Mil-Std-1629A.pdf). In a traditional FMEA, risks of failure are identified in a detailed fashion, and scored in three categories related to the failure’s probability, detectability and severity. Scoring is done on a semi-quantitative or relative basis using an arbitrary scale such as 1-10. For an audit, you might use the same categories as they relate to a particular vendor's (or department's) ability to deliver a product or service, failure free. You could organize your FMEA according to the critical quality attributes of the product or service being delivered or according to a list of requirements from a guideline or the CFR's. Your FMEA should receive input from affected departments, and should be used for prioritization of audit items. You should have the FMEA in mind as you conduct your audit, and remember why various items received high prioritization. You may change ratings for probability or detectability based on what you observe. If instead, you confirm your evaluation, you should probe remediations that decrease your firm's primary concern. A remediation that addresses detectability, when the issue was probability, likely won’t mitigate the risk of failure.
When you return from your audit, rescore the FMEA with assessments based on your observations and data that you collected. Make sure that you share your analysis with the stakeholders. And monitor the performance of the vendor until the next audit; the data will help inform your next FMEA.

Do you already use FMEA's in audit preparation and reporting? Let us know your practices.