
Friday, August 26, 2011

Our take on process-specific vs. generic host cell protein assays

By Drs. Ray Nims and Lori Nixon 

The residual host cell protein (HCP) assay is used to determine the concentration, in process streams, of protein originating from the production cell (Chinese hamster, NS0, E. coli, etc.) used in the manufacture of a biologic. Host cell protein is considered a process-related impurity/contaminant. The most common approach to quantitation of residual host cell protein in a test sample is the enzyme-linked immunosorbent assay (ELISA).

Generic vs. process-specific assays.  A question that often arises is whether a process-specific HCP assay is required, or whether a generic assay can be used. Although there is a spectrum of “process specificity” that can be described, in this discussion, we are considering a generic assay to be one that is developed from a broad set of antigens from the host strain or closely-related strains, but not necessarily under specific process conditions mimicking the actual production/purification process of the protein of interest. That is, the reagents for a generic assay could be developed in advance of a defined production process. For example, it is often possible to purchase a “generic” HCP ELISA kit from a commercial provider, such as Cygnus Technologies, selecting a kit that matches the production cell line. Practically speaking, most firms start with generic HCP methods in early development for an obvious reason: you can’t have a process-specific assay until you have a defined process. Even after the process is defined, the lead time for developing custom (process-specific) HCP reagents and an assay using these can be as long as 1-2 years.
It should be appreciated from the outset that there is no perfect HCP assay. Production cells can express thousands of proteins, with expression patterns differing under different growth conditions. At each successive stage of the recovery and purification process, the population of HCPs is altered. At the final drug substance stage, there may be only a few HCPs present that have co-purified with the product. If you attempt to design an assay that selects only those few remaining HCPs, some will argue that you risk missing HCPs that could make it through under other circumstances (such as a process deviation). If you attempt to design antibody reagents using the entire unpurified mixture of proteins from the production cell, the proteins of most interest may not elicit a good antibody response, and you may have inadequate sensitivity to quantify HCP in your final drug substance. In fact, it has been reported that process-specific antibodies yield HCP assays that are more sensitive for certain HCP species (and sometimes more broadly reactive) than generic HCP assays.
In recent years, there has been some regulatory pressure setting an expectation that a process-specific HCP assay must be developed for product registration. In our opinion, this is unjustified as a blanket prescription.  There are a multitude of approaches for creating antigen and antibody reagents, but the resulting assay should be judged primarily on its own merits.
The suitability of any HCP assay (whether generic or process-specific) must be evaluated on a case-by-case basis, and the assay validated for the specific product in question. One of the first characteristics to aim for is an assay that can readily quantitate HCP in the final drug substance. Without this sensitivity, it is difficult to get results meaningful enough to guide process optimization or ensure control of your product. If the available generic method(s) have limited sensitivity for the sample of interest (results fall below the required range of quantitation), it is a good idea to start planning the development of more process-specific reagents, which are likely to afford better sensitivity. Since the numerical values reported from these assays are semi-quantitative at best, sensitivity should be judged on performance in actual samples rather than on the product specification. Of course, other assay characteristics (non-reactivity to product, precision, dilutional linearity, spike recovery, etc.) are required to meet validation acceptance criteria. The antigen coverage should be determined, typically by 2D electrophoresis or 2D HPLC-ELISA. It is unrealistic to expect 100% coverage of all of the proteins present in the production cell; it is relatively more important to ensure response against bands that persist through the purification process. All other things being equal, of course, greater coverage is more desirable.
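As a rough numerical illustration of the spike-recovery check mentioned above (the function name, the example concentrations, and the 80-120% window are illustrative assumptions, not taken from any particular guideline):

```python
# Hypothetical spike-recovery calculation for an HCP ELISA.
# All numbers are made up for illustration.

def spike_recovery(measured_spiked_ng_ml, measured_unspiked_ng_ml, spike_ng_ml):
    """Percent recovery of a known HCP spike in the product matrix."""
    recovered = measured_spiked_ng_ml - measured_unspiked_ng_ml
    return 100.0 * recovered / spike_ng_ml

# Example: spike 50 ng/mL of HCP standard into a sample that reads
# 12 ng/mL on its own; the spiked sample reads 58 ng/mL.
recovery = spike_recovery(58.0, 12.0, 50.0)
print(f"Spike recovery: {recovery:.0f}%")  # 92%, within a typical 80-120% window
```

A recovery far outside the acceptance window would suggest matrix interference, i.e., that the sample itself is suppressing or enhancing the ELISA signal.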
Example of 2D electrophoresis of E. coli proteins, from Kendrick Labs

Whether a generic or process-specific method is implemented, it is vital to ensure a consistent reagent supply that will last throughout the commercial life of the product. If using vendor-supplied antigen and antibody, be aware that by nature these are “single-source” reagents with associated supply risks. If creating a custom reagent set, make enough supplies to last for years to decades.
As mentioned above, most firms start with a generic assay and may move to a process-specific assay in later stages of development. A close evaluation of the generic assay during its characterization and validation may lead to the conclusion that the generic assay is suitable for your product.  In this case, be prepared to make a strong case in defense of your generic assay—based on empirical data with your product, rather than theoretical speculation.

Thursday, August 26, 2010

Do We Have Clearance, Clarence?

By Dr. Scott Rudge

As with takeoffs and landings in civil aviation, the ability of a pharmaceutical manufacturing process to give clearance of impurities is vital to customer safety. It's also important that the clearance mechanism be clear, and not confused, as the conversation in the classic movie "Airplane!" surely was (and don't call me Shirley).

There are two ways to demonstrate clearance of impurities.

The first is to track the actual impurity loads. That is, if an impurity comes into a purification step at 10%, and is reduced through that step to 1%, then the clearance would typically be called 1 log, or 10 fold.
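In code, that calculation is just the base-10 logarithm of the load-to-effluent ratio (a minimal sketch using the 10%/1% figures from the example):

```python
import math

def log_clearance(load_fraction, effluent_fraction):
    """Log10 reduction of an impurity across a purification step."""
    return math.log10(load_fraction / effluent_fraction)

# 10% impurity in, 1% out -> 1 log (10-fold) clearance
print(round(log_clearance(0.10, 0.01), 6))  # 1.0
```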

The second is to spike impurities. This is typically done when an impurity is not detectable in the feed to the purification step, or when, even though detectable, it is thought desirable to demonstrate that even more of the impurity could be eliminated if need be.

The first method is very usable, but suffers from uneven loads: batch to batch, the quantity and concentration of an impurity can vary considerably, and the capacity of most purification steps to remove impurities depends on that quantity and concentration. Results from batch to batch can vary correspondingly. Typically, these results are averaged, but it would be better to plot them in a thermodynamic sense, with unit-operation impurity load on the x-axis and efflux on the y-axis. The figures below give three of many possible outcomes of such a graph.


In the first case, there is proportionality between the load and the efflux. This would be the case if the capacity of the purification step were linearly related to the load, as is typical for absorbents, and for adsorbents at low levels of impurity. In this case (and, in fact, only in this case) does calculating log clearance apply across the range of possible loads. The example figure shows a constant clearance of 4.5 logs.


In the second case, the impurity saturates the purification medium: a maximum amount of impurity can be cleared, and no more. The closer the load is to exactly this capacity, the better the log removal looks, since at that point no impurity is found in the purification step effluent. All loads higher than this show increasing inefficiency in clearance.


In the third case, the impurity has a thermodynamic or kinetic limit in the step effluent. For example, it may have limited solubility, and reach that solubility in nearly all cases. Because the effluent concentration is pinned at this limit, the more impurity that is loaded, the greater the proportion cleared; a constant amount of impurity is always recovered in the effluent.

For these reasons, simply measuring the ratio of impurity in the load and effluent to a purification step is inadequate. This reasoning applies even more so to spiking studies, where the concentration of the impurity is made artificially high. In these cases, it is even more important to vary the concentration or mass of the impurity in the load, and to determine what the mechanism of clearance is (proportional, saturation or solubility).
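The three mechanisms can be sketched numerically to show why a single-point log-removal figure can mislead. This is a toy model: the 4.5-log factor, the capacity of 100 units, and the solubility limit of 5 units are made-up values, chosen only to make the contrast visible.

```python
import math

def effluent(load, mechanism):
    """Impurity amount leaving the step, as a function of the amount loaded."""
    if mechanism == "proportional":   # constant fractional clearance
        return load * 10 ** -4.5      # e.g. a fixed 4.5-log reduction
    if mechanism == "saturation":     # step removes at most `capacity` units
        capacity = 100.0
        return max(load - capacity, 0.0)
    if mechanism == "solubility":     # effluent pinned at a solubility limit
        limit = 5.0
        return min(load, limit)
    raise ValueError(mechanism)

def apparent_lrv(load, mechanism):
    """Log reduction value as it would be reported from a single measurement."""
    out = effluent(load, mechanism)
    return math.inf if out == 0 else math.log10(load / out)

for load in (10.0, 100.0, 1000.0):
    row = {m: apparent_lrv(load, m)
           for m in ("proportional", "saturation", "solubility")}
    print(load, {m: ("inf" if v == math.inf else round(v, 2))
                 for m, v in row.items()})
```

Only the proportional mechanism reports the same log reduction at every load; the saturation case looks spectacular just below capacity and collapses above it, and the solubility case reports a log reduction that grows with the spike level even though a constant amount of impurity always passes through.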

Understanding the mechanism of clearance would be beneficial, in that it would allow the practitioner to make more accurate predictions of the effect of an unusual load of impurity. For example, suppose that in the unlikely event a virus contaminates an upstream step in the manufacture of a biopharmaceutical, the titer is lower than spiking studies had anticipated. If the virus is cleared by binding to a resin, and is below the saturation limit, it is possible to argue that the clearance is much larger, perhaps complete. On the other hand, claims of log removal in a solubility-limit situation can be misleading. The deck can be stacked by spiking extraordinary amounts of impurity; the reality may be that the impurity is always present at a level where it is fully soluble in the effluent, and is never actually cleared from the process.

Clearance studies are good and valuable, and help us to protect our customers, but as long as they are done as single points on the load/concentration curve, their results may be misleading. When the question comes, “Do we have clearance, Clarence?” we want to be ready to answer the call with clear and accurate information. Surely varying the concentration of the impurity to understand the nature of the clearance is a proper step beyond the single point testing that is common today.

And stop calling me Shirley.