Showing posts with label analytical methods.

Tuesday, January 31, 2012

Is Membrane Chromatography the Answer?

by Dr. Scott Rudge

Membrane chromatography gets a fair amount of hype.  It’s supposed to be faster and cheaper, and it can be made disposable.  But is it the real answer to the “bottleneck” in downstream processing?  Was Allen Iverson the answer to the Nuggets’ basketball dilemma?  I’m still skeptical.

The idea of adding ligand functionality to membranes was not new at the time, but it really gained traction when it was endorsed by Ed Lightfoot in 1986.  Lightfoot’s paper pointed out that the hydrodynamic price paid for averaging of flow paths in a packed bed might not be worth it.  If thousands of parallel hollow fibers of identical length and diameter could be placed in a bundle, and the diameter of these fibers could be made small enough that the diffusion path length was comparable to that in a bed of packed spheres, or smaller, then performance would be equivalent or superior at a fraction of the pressure drop.  This is undoubtedly true; there is no reason to have a random packing if flow paths can be guaranteed to be exactly equivalent.  However, every single defect in this kind of system works against its success.  For example, hollow fibers that are slightly more hollow will have lower pressure drop, lower surface-to-volume ratio, lower binding capacity, and a higher proportional share of the flow.  Slightly longer fibers will have slightly higher pressure drop and slightly higher binding capacity, and will carry proportionally less of the flow.  Length acts linearly on pressure drop and flow rate, but internal diameter acts to the fourth power, so minor variations in internal diameter would dominate the performance of such systems.
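The fourth-power point is worth seeing in numbers. Here is a minimal sketch (not from the original post) using the Hagen-Poiseuille relation for laminar flow through a tube, Q ∝ d⁴/L at fixed pressure drop, comparing a 5% deviation in internal diameter against the same deviation in length:

```python
# Hagen-Poiseuille: at a fixed pressure drop, flow through a hollow fiber
# scales as (internal diameter)^4 and as 1/length.
def relative_flow(d, L, d_ref=1.0, L_ref=1.0):
    """Flow through a fiber relative to a nominal fiber at the same pressure drop."""
    return (d / d_ref) ** 4 * (L_ref / L)

# A fiber 5% wider than nominal carries ~22% more than its share of the flow...
wide = relative_flow(d=1.05, L=1.0)
# ...while a fiber 5% longer carries only ~5% less.
longer = relative_flow(d=1.0, L=1.05)

print(f"5% wider fiber:  {wide:.3f}x nominal flow")   # ~1.216x
print(f"5% longer fiber: {longer:.3f}x nominal flow")  # ~0.952x
```

The same tolerance in manufacturing thus produces a four-fold larger maldistribution when it shows up in diameter rather than length, which is why diameter variation dominates.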
Indeed, according to Mark Etzel, these systems were abandoned as impractical in favor of membrane chromatography based on conventional membrane formats that have been derivatized to add binding functionality.  As this technology has developed, its application and scale-up have begun to look very much like packed bed chromatography.  Here are some particulars:
1. Development and scale-up are based on membrane volume.  However, breakthrough curves are measured in tens, or even seventies, of equivalent volumes (see Etzel, 2007), instead of the twos or threes found in packed beds.
2. Binding capacities are lower in membrane chromatography.  In a recent publication by Sartorius, the ligand density in Sartobind Q is listed as 50 mM, while for Sepharose Q-HP it is 140 mM.  In theory, the membrane format has a higher relative dynamic binding capacity, but this has yet to be demonstrated (see above).
3. The void volume in membranes is surprisingly high, at 70%, compared to packed beds at 30%.  This is one reason for the low relative binding capacity.
4. Disposable is all the rage, but there’s no evidence that, on a volume basis, derivatized membranes are cheaper than chromatography resins.  In fact, economic comparisons published by Gottschalk must assume that the packed bed is loaded 100 times less efficiently than the membrane just to make the numbers work.  The cost per volume per binding event drops dramatically over the first 10 reuses of a chromatography resin.
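Points 2 and 3 compound each other. A back-of-the-envelope sketch (my own, not from the post; it assumes the quoted ligand densities apply to the stationary phase, so that capacity per device volume scales with one minus the void fraction):

```python
# Assumption: quoted ligand density is per stationary-phase volume, so the
# ligand available per total device volume scales with (1 - void fraction).
def ligand_per_device_volume(ligand_density_mM, void_fraction):
    """Ligand available per total device volume, in mM."""
    return ligand_density_mM * (1.0 - void_fraction)

membrane = ligand_per_device_volume(50.0, 0.70)    # Sartobind Q figures
packed = ligand_per_device_volume(140.0, 0.30)     # Sepharose Q-HP figures

print(f"membrane:   {membrane:.0f} mM per device volume")  # 15 mM
print(f"packed bed: {packed:.0f} mM per device volume")    # 98 mM
print(f"ratio:      {packed / membrane:.1f}x")             # ~6.5x
```

Under these assumptions the packed bed offers roughly six to seven times the ligand per unit device volume, which is consistent with the much longer breakthrough volumes observed for membranes in point 1.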
It turns out that membrane chromatography has a niche, and that is flow-through operations in which some trace contaminant, such as residual endotoxin or DNA in a product, is removed.  This too can be done efficiently with column chromatography when operated in a high-capacity (for the contaminant) mode.  But there is a mental block among chromatographers, who want to operate adsorption steps in chromatographic, resolution-preserving modes; this block has not yet affected membraners.  A small, high-capacity column operated at a flow rate equivalent to a membrane’s (in volumes per bed or membrane volume) will work as well, and in my opinion more cheaply if regenerated.
These factors should be considered when choosing between membrane and packed bed chromatography.

Friday, August 26, 2011

Our take on process-specific vs. generic host cell protein assays

By Drs. Ray Nims and Lori Nixon 

The residual host cell protein (HCP) assay is used to determine the concentration, in process streams, of protein originating from the production cells (Chinese hamster, NS0, E. coli, etc.) used in the manufacture of a biologic. Host cell protein is considered a process-related impurity/contaminant. The most typical approach to quantitation of residual host cell protein in a test sample is the enzyme-linked immunosorbent assay (ELISA).

Generic vs. process-specific assays.  A question that often arises is whether a process-specific HCP assay is required, or whether a generic assay can be used. Although there is a spectrum of “process specificity” that can be described, in this discussion we consider a generic assay to be one developed from a broad set of antigens from the host strain or closely related strains, but not necessarily under process conditions mimicking the actual production/purification process of the protein of interest. That is, the reagents for a generic assay could be developed in advance of a defined production process. For example, it is often possible to purchase a “generic” HCP ELISA kit from a commercial provider, such as Cygnus Technologies, selecting a kit that matches the production cell line. Practically speaking, most firms start with generic HCP methods in early development for an obvious reason: you can’t have a process-specific assay until you have a defined process. Even after the process is defined, the lead time for developing custom (process-specific) HCP reagents and an assay using them can be as long as 1-2 years.
It should be appreciated from the outset that there is no perfect HCP assay. Production cells can express thousands of proteins, with expression patterns differing under different growth conditions. At each successive stage of the recovery and purification process, the population of HCPs is altered. At the final drug substance stage, there may be only a few HCPs present that have co-purified with the product. If you attempt to design an assay that selects only those few remaining HCPs, some will argue that you could miss HCPs that make it through under other circumstances (such as a process deviation).  If you attempt to design antibody reagents using the entire unpurified mixture of proteins from the production cell, the proteins of most interest may not elicit a good antibody response and you may have inadequate sensitivity to quantify HCP in your final drug substance. In fact, it has been reported that process-specific antibodies lead to HCP assays that are more sensitive for certain HCP species (and sometimes more broadly reactive) than the antibodies provided in a generic HCP assay.
In recent years, there has been some regulatory pressure setting an expectation that a process-specific HCP assay must be developed for product registration. In our opinion, this is unjustified as a blanket prescription.  There are a multitude of approaches for creating antigen and antibody reagents, but the resulting assay should be judged primarily on its own merits.
Suitability of any HCP assay (whether generic or process-specific) must be evaluated on a case-by-case basis, and the assay validated for the specific product in question. One of the first characteristics to aim for is an assay that is able to readily quantitate HCP in the final drug substance. Without this sensitivity, it is difficult to get results that are meaningful enough to guide process optimization or ensure control of your product. If the available generic method(s) have limited sensitivity for the sample of interest (results are below the required range of quantitation), then it is a good idea to start planning the development of more process-specific reagents, which are likely to afford better sensitivity. Since the numerical values reported from these assays are semi-quantitative at best, sensitivity should be judged on performance in actual samples rather than on the product specification. Of course there are other assay characteristics (non-reactivity to product, precision, dilutional linearity, spike recovery, etc.) that are required to meet validation acceptance criteria. The antigen coverage should be determined, typically by 2D electrophoresis or 2D HPLC-ELISA. It is unrealistic to expect 100% coverage of all of the proteins present in the production cell; on the other hand, it is relatively more important to ensure response against bands that persist through the purification process.  All other things being equal, of course, greater coverage is more desirable.
Example of 2D electrophoresis of E. coli proteins, from Kendrick Labs

Whether a generic or process-specific method is implemented, it is vital to ensure a consistent reagent supply that will last throughout the commercial life of the product. If using vendor-supplied antigen and antibody, be aware that by nature these are “single-source” reagents with associated supply risks. If creating a custom reagent set, make enough supplies to last for years to decades.
As mentioned above, most firms start with a generic assay and may move to a process-specific assay in later stages of development. A close evaluation of the generic assay during its characterization and validation may lead to the conclusion that the generic assay is suitable for your product.  In this case, be prepared to make a strong case in defense of your generic assay—based on empirical data with your product, rather than theoretical speculation.

Monday, July 25, 2011

Rapid Identification of Viral Contaminants, Finally

By Ray Nims, Ph.D.


There was a time, not long ago, when it might take months to years to identify a viral contaminant isolated from a biological production process or from an animal or patient tissue sample. The identification process took this long because it involved what I have referred to as the “shotgun approach”, or it involved luck.

Let’s start with luck. That is probably the wrong term. What I mean by this is that there have been instances where an informed guess has led to a fairly rapid (i.e., weeks to months) identification of a contaminant. For instance, our group at BioReliance was able to rapidly identify contamination with REO virus (REO type 2 actually) and Cache Valley virus  because we had observed these viruses in culture previously and because these viruses had unique properties (a unique cytopathic effect in the case of REO and a unique growth pattern in the case of Cache Valley virus). The time required to identify these viruses consisted of the time required to submit and obtain results from confirmatory PCR testing for the specific agents.

The first time we ran into Cache Valley virus, however, it was a different story. This was, it turns out, the first time that this particular virus had been detected in a biopharmaceutical bulk harvest sample. In this case, we participated in the “shotgun approach” that was applied to the identification of the isolate. The “shotgun approach” consisted of utilizing any detection technique available at the lab, namely, in vitro screening, bovine screening, application of any immunofluorescent stains available, and transmission electron microscopy (TEM). The TEM was helpful, as it indicated an 80-100 nm virus with 7-9 nm spikes. A bunyavirus-specific stain was positive, and eventually (after months of work), sequencing and BLAST alignment were used to confirm the identity of the virus as Cache Valley virus.

The “shotgun approach” was subsequently applied to a virus isolated from harbor seal tissues, with no identity established as a result. After approximately a year of floundering using the old methods, the virus was eventually found to be a new picornavirus (Seal Picornavirus 1).  How was this accomplished? During the time between the identification of the Cache Valley virus and the seal virus, a new technology called deep sequencing became available. Eric Delwart’s group used the technique to rapidly identify the virus to the species level. As this was the first time this particular picornavirus had ever been detected, deep sequencing is likely the only method that would have been able to make the identification.

Deep (massively parallel) sequencing is one of two new technologies that will make virus isolate identification routine and rapid in the future. It has been adopted by BioReliance for detection of viral contaminants in cells and viral seed stocks and for evaluating vaccine cell substrates. The other is referred to as the T5000 universal biosensor. Houman Dehghani’s group at Amgen has been characterizing this methodology as a rapid identification platform for adventitious agent contaminations.  Each technology has its advantages. Deep sequencing is more labor intensive, but has the ability to indicate (as described above) a new species. The universal biosensor can serve as both a detection method and an identification method. Both can identify multiple contaminants within a sample.

Since identification of an adventitious viral contaminant of a biopharmaceutical manufacturing process is required for establishment of root cause, for evaluating effectiveness of facility cleaning procedures and viral purification procedures, and for assuring safety of both workers and patients, it is critical that the identification of a viral isolate is completed accurately and rapidly. Happily, we now have the tools at hand to accomplish this.

Monday, December 14, 2009

Advantages of Compendial Methods

By Dr. Lori Nixon

When you are developing a new product specification, it is usually recommended to rely on the appropriate compendial method for applicable “generic” quality characteristics such as pH, residual solvents, trace metals, bioburden, etc. By compendial method, we mean methods that are described as chapters in the United States Pharmacopeia (USP) or others that may be applicable for a specific regulatory region. The three main compendia are the United States Pharmacopeia, the European Pharmacopoeia, and the Japanese Pharmacopoeia (USP, PhEur, JP); these are the “tripartite” bodies involved in the International Conference on Harmonization (ICH).



photo of USP laboratories from DPR Construction Inc.

Why rely on compendial methods rather than just using your own? It is generally recommended to refer to compendial methods where applicable. The advantage to the drug sponsor is a reduced validation requirement for such methods (the methods themselves are considered validated, and may only require product-specific verification in the particular testing lab). Compendial methods are “familiar” to regulatory reviewers; they are also generally expected. If you propose your own method as an alternate method, you will need to justify why it is equivalent or better. For the testing lab, there is some advantage in having methods that can be applied to multiple products (avoiding a multiplicity of similar methods) and for which the change process is relatively well-defined and publicly communicated. You may also find it simpler to transfer testing between different labs.

To reference the compendial method in your specification, you may refer simply to the test by attribute and chapter, along with the associated limit for your product. For example, your specification may include a limit of 10 EU/mL for bacterial endotoxin as measured by USP<85>. The general expectation here is that at the time of testing, the current version of USP is used. Clearly, this will require that your testing lab is aware of any potential changes to the USP and can prepare for such changes accordingly. As with any other change to an analytical method, changes to compendial methods can impact training, internal procedures, product-specific re-validation/verification, etc.

In practice, labs often rely on additional internal descriptive procedures in order to execute the compendial methods (i.e., rather than just directing analysts to follow the chapter directly). This is usually a good idea, for several reasons. It can be easier to train analysts according to a standard documentation format, and it is often necessary to describe details that may be specific to the particular lab, equipment, instrumentation, reagents, reporting requirements, etc. Again, even if there is a lab-specific procedure, it is usually best to refer directly to the compendial method (USP<85>, e.g.) in the sponsor’s product specification.

Be aware of compliance with compendial testing requirements when you are outsourcing testing. For example, almost any chemical testing lab will have a method for pH, but that doesn’t necessarily mean it will comply with USP<791>. In this case, the USP method requires measuring the sample temperature within a certain range; often “generic” lab methods for pH do not specify control of the sample temperature. There are additional requirements, such as the choice of calibration standards, that must also be considered. When reviewing vendor methods, check the following:

- Does the method purport to comply with any compendia? (should be clearly stated in the vendor’s procedure if so)

- Which compendia?

- Check details of the method to ensure that it does indeed comply with the current compendial procedure(s) in question

- Does the lab have a mechanism to stay current with upcoming compendial changes?

- Do they have an appropriate change control system to ensure that they can prepare for method changes and associated re-validation, etc.?

- Consider what verification/validation is required to ensure that the vendor method provides reliable results for your particular product/sample type.

Of course, the downside of compendial methods is that they are region-specific, and one region may not recognize the compendia of another region. There have been efforts in recent years toward harmonizing methods (ICH Q4B for bioburden testing, for example), but this process is slow and far from complete.

If you intend to market or perform clinical trials in more than one region, you may need to ensure compliance with multiple compendia. In this case, consider the following:

• Has the method been harmonized through the ICH process?

• Is it possible to create an internal “harmonized” method that meets the requirements of all relevant compendia?

• Is it possible to meet the requirements of all by following the “most stringent” compendial procedure?

• Will you need to test by multiple procedures to generate results acceptable to each region?