Monday, July 25, 2011

Rapid Identification of Viral Contaminants, Finally

By Ray Nims, Ph.D.


There was a time, not long ago, when it might take months to years to identify a viral contaminant isolated from a biological production process or from an animal or patient tissue sample. The identification process took this long because it involved what I have referred to as the “shotgun approach”, or it involved luck.

Let’s start with luck. That is probably the wrong term. What I mean is that there have been instances where an informed guess led to a fairly rapid (i.e., weeks to months) identification of a contaminant. For instance, our group at BioReliance was able to rapidly identify contamination with REO virus (REO type 2, actually) and Cache Valley virus because we had observed these viruses in culture previously and because they had unique properties (a unique cytopathic effect in the case of REO, and a unique growth pattern in the case of Cache Valley virus). The time required to identify these viruses was essentially just the time needed to submit and obtain results from confirmatory PCR testing for the specific agents.

The first time we ran into Cache Valley virus, however, it was a different story. This was, it turns out, the first time that this particular virus had been detected in a biopharmaceutical bulk harvest sample. In this case, we participated in the “shotgun approach” that was applied to identifying the isolate. The “shotgun approach” consisted of utilizing any detection technique available at the lab: in vitro screening, bovine screening, any available immunofluorescent stains, and transmission electron microscopy (TEM). The TEM was helpful, as it indicated an 80-100 nm virus with 7-9 nm spikes. A bunyavirus-specific stain was positive, and eventually (after months of work), sequencing and BLAST alignment were used to confirm the identity of the virus as Cache Valley virus.

The “shotgun approach” was subsequently applied to a virus isolated from harbor seal tissues, with no identity established as a result. After approximately a year of floundering with the old methods, the virus was eventually found to be a new picornavirus (Seal Picornavirus 1). How was this accomplished? During the time between the identification of the Cache Valley virus and the seal virus, a new technology called deep sequencing became available. Eric Delwart’s group used the technique to rapidly identify the virus to the species level. As this was the first time this particular picornavirus had ever been detected, deep sequencing is likely the only method that could have made the identification.

Deep (massively parallel) sequencing is one of two new technologies that will make virus isolate identification routine and rapid in the future. It has been adopted by BioReliance for detection of viral contaminants in cells and viral seed stocks and for evaluating vaccine cell substrates. The other is referred to as the T5000 universal biosensor. Houman Dehghani’s group at Amgen has been characterizing this methodology as a rapid identification platform for adventitious agent contaminations. Each technology has its advantages. Deep sequencing is more labor intensive, but has the ability to indicate (as described above) a new species. The universal biosensor can serve both as a detection method and as an identification method. Both can identify multiple contaminants within a sample.

Since identification of an adventitious viral contaminant in a biopharmaceutical manufacturing process is required to establish root cause, to evaluate the effectiveness of facility cleaning and viral purification procedures, and to assure the safety of both workers and patients, it is critical that the identification of a viral isolate be completed accurately and rapidly. Happily, we now have the tools at hand to accomplish this.

Saturday, July 9, 2011

Can You Decide on CAPA?

By Dr. Scott Rudge


When things go wrong in pharmaceutical manufacturing, consequences can be dire.  Small changes in the quality of the pharmaceutical product can cause major consequences for patients in ways too numerous to list in this blog.  It can be surmised that the failure mode was not anticipated, so the manufacturer is wise to determine the cause of the failure, correct it, and prevent it from happening again.  This exercise is known as CAPA, Corrective and Preventive Actions.
There are numerous tools for determining the root causes of failures. Many of these are fully embedded in software, consistent with ICH Q9, that helps a manufacturer’s quality organization track the resolution of CAPAs. Fault Tree Analysis and Root Cause Analysis are two common and popular examples of these tools. They help trace a failure back to its basic characteristics. Once these basic characteristics have been identified, they can be remediated (eliminated).

When the root cause is found, it is eliminated, and the process goes on.  The CAPA is closed and everyone waits for the next crisis or problem that needs to be fixed.  Right?

But root cause elimination, although a worthy corrective objective, does not appear to be sufficient as a preventive action.  One should consider how the root cause is to be eliminated.  And one should consider whether the manner and means of root cause elimination might lead to other failure modes.

The first consideration, how to eliminate a root cause, is a decision.  Decision analysis can be used to improve the process of choosing between multiple remediation options.  When there are multiple good options for fixing a problem, or perhaps a series of “less bad” options, it’s a good idea to do some analysis. 

First, the decision should be defined. The required objectives that the remediation option must meet should be listed. These should be specific and measurable. Each option for addressing the problem should also be listed, along with its “features,” so there is little uncertainty about the option and its implementation. Each option must fully meet all the required objectives; any option that only partially addresses them must be considered incomplete. Any option that is incomplete should either 1) not be considered further, or 2) have elements added that allow it to completely satisfy the required objectives. Any option that meets each of the required objectives is a “qualified option”.
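The qualification step described above can be sketched in a few lines of Python. The options and objectives below are invented purely for illustration; they are not from any actual CAPA.

```python
# Hypothetical screening of remediation options against required objectives.
# An option qualifies only if it fully meets EVERY required objective.

options = {
    "retrain operators":      {"within_budget": True,  "no_downtime": True},
    "replace equipment":      {"within_budget": False, "no_downtime": True},
    "add in-process control": {"within_budget": True,  "no_downtime": True},
}

required = ["within_budget", "no_downtime"]

qualified = [name for name, meets in options.items()
             if all(meets.get(obj, False) for obj in required)]

print(qualified)  # ['retrain operators', 'add in-process control']
```

Note that "replace equipment" is dropped entirely rather than partially credited, matching the rule that a partial fit is incomplete.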

There are often additional, less absolute objectives that should be considered in choosing a remediation option. These might be considered differentiating objectives. For example, cost may be both a required objective (the remediation must be implemented within the current budget) and a differentiating objective (the least costly option, within the budget, is preferred). These differentiating objectives should also be listed, and weighted according to their relative importance. The objectives of all stakeholders must be considered. The weightings may be controversial among stakeholders, but with practice, an organization can learn to weight objectives appropriately.
Qualified options can then be scored against the weighted differentiating objectives. Each qualified option is scored according to how well it fulfills each objective, that score is multiplied by the objective’s weight, and the weighted scores are summed. The option with the highest total is the “most preferred” option.
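As a minimal sketch of this weighted scoring, assuming a 1-5 rating scale and made-up weights and scores (none of these numbers come from a real decision):

```python
# Hypothetical weighted-scoring matrix for qualified options.
# Weights reflect the relative importance of each differentiating objective.

weights = {"cost": 0.5, "speed": 0.3, "simplicity": 0.2}

# Each qualified option rated 1-5 on each differentiating objective.
scores = {
    "retrain operators":      {"cost": 5, "speed": 4, "simplicity": 5},
    "add in-process control": {"cost": 3, "speed": 2, "simplicity": 3},
}

def weighted_total(option_scores):
    # Sum of (weight x score) across all differentiating objectives.
    return sum(weights[obj] * s for obj, s in option_scores.items())

totals = {name: weighted_total(s) for name, s in scores.items()}
preferred = max(totals, key=totals.get)
print(preferred)  # 'retrain operators'
```

Keeping the weights and the per-objective scores separate makes the controversial parts (the weights) easy to revisit with stakeholders without redoing the whole analysis.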

But wait, there’s more.  A final check of failure modes should be made.  The last thing we want is to implement a remediation option that introduces more vulnerabilities into our systems.  A simple FMEA can be used for this.  If your organization routinely uses FMEAs and has standard criteria for scoring them, then you probably only need to perform one FMEA, and ensure no resulting risk priority numbers exceed the established threshold value for risk.  If your organization doesn’t have this experience, then you may have to perform a comparative FMEA on two or more qualified options, to gauge relative risk.
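A bare-bones version of that FMEA check, using the conventional risk priority number (RPN = severity x occurrence x detection), might look like the following. The failure modes, the 1-10 ratings, and the threshold value are all assumed for illustration:

```python
# Hypothetical FMEA screen of a remediation option's new failure modes.
# RPN = severity x occurrence x detection, each rated 1-10.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("new SOP step skipped",        7, 3, 4),
    ("sensor drift goes unnoticed", 5, 2, 8),
]

RPN_THRESHOLD = 100  # assumed organizational risk threshold

for desc, sev, occ, det in failure_modes:
    rpn = sev * occ * det
    status = "OK" if rpn <= RPN_THRESHOLD else "exceeds threshold"
    print(f"{desc}: RPN={rpn} ({status})")
```

If any RPN exceeds the established threshold, the remediation option should be reworked or rejected before the CAPA is closed.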

If your decision making is already perfect, then there is no need for this analysis. Such excellent seat-of-the-pants decision making would be useful for the rest of us to see! For the rest of us, when new issues arise, the decision analysis can be revisited, and organizational assumptions revised, to improve decision making in the future. And it might be useful to have documented the options you considered and how you selected among them.