Wednesday, April 28, 2010

FDA regulation of “combination” products

by Dr. Ray Nims

The FDA’s Office of Combination Products (OCP) was established in 2002 to shepherd combination products, those combining a drug, a biologic, and/or a device, through the review and regulation process. An example of a combination product is the drug-releasing stent (see figure below). The OCP does not conduct the reviews, but is responsible for: assigning the product to the appropriate FDA center; coordinating reviews involving more than one center; and working with agency centers to develop guidance and regulations to make the regulation of combination products “as clear, consistent and predictable as possible”.

An example of a drug-releasing stent.

The complexity of combination products arises because the efficacy and safety of the individual constituent parts (i.e., the drug, biologic, or device) must be considered alone as well as within the context of the combination product. Because of this complexity, there is no single development paradigm for all combination products. FDA guidance recommends that combination product developers consider any prior approval or clearance of the constituent parts, as well as how their testing may be influenced by the interaction of the components. Factors that should be considered (from the guidance) include:

• Are the constituent parts already approved for an indication?
• Is the indication for a given constituent part similar to that proposed for the combination product?
• Does the combination product broaden the indication or intended target population beyond that of the approved constituent part?
• Does the combination product expose the patient to a new route of administration or a new local or systemic exposure profile for an existing indication?
• Is the drug formulation different from that used in the already approved drug?
• Does the device design need to be modified for the new use?
• Is the device constituent used in an area of the body different from that covered by its existing approval?
• Are the device and drug constituents chemically, physically, or otherwise combined into a single entity?
• Does the device function as a delivery system, a method to prepare a final dosage form, and/or does it provide active therapeutic benefit?
• Is there any other change in design or formulation that may affect the safety/effectiveness of any existing constituent part or the combination product as a whole?
• Is a marketed device being proposed for use with a drug constituent that is a new molecular entity?
• Is a marketed drug being proposed for use with a complex new device?

As with individual drugs, biologics, and devices, the FDA will require that combination products be manufactured according to current good manufacturing practices. In most cases, a single investigational application (IND or IDE) is submitted to enable the clinical trials planned for the combination product. The science and technology associated with the combination product should drive the selection of statistical approaches, sample sizes, study endpoints, and methods for measuring the active principle and for evaluating possible interactions between components. It may be best to involve the FDA in these decisions.

Complexity for the regulation of combination products also stems from the fact that separate manufacturing processes may exist for the various constituent parts. Potential changes in any of the component manufacturing processes, subsequent to initiating clinical trials or post-market, will need to be evaluated for possible effects on the safety and efficacy of the combination product.

Combination products represent therapeutic modalities with great promise for advancing health care. We expect to see more and more pharmaceutical activity in this area going forward.

Friday, April 23, 2010

Is There Ever a Good Time for Filter Validation?

By Dr. Scott Rudge

What is the right time to perform bacterial retention testing on a sterile filter for an aseptic drug product process? I usually recommend that this be done prior to manufacturing sterile product. After all, providing for the sterility of the dosage form of an injectable drug is first and foremost the purpose of drug product manufacturing.

But there are some uncomfortable truths concerning this recommendation:

1. Bacterial retention studies require large samples, on the order of liters

2. Formulations change between first-in-human and commercial manufacturing, requiring revalidation of bacterial retention

3. The chances of a formulation change causing bacteria to cross an otherwise integral membrane are primarily theoretical; the “risk” would appear to be low

On the other hand:

1. The most frequent sterile drug product inspection citation in 2008 by the FDA was “211.113(b) Inadequate validation of sterile manufacturing” (source: presentation by Tara Gooel of the FDA, available on the ISPE website to members)

2. The FDA identifies aseptic processing as the “top priority for risk based approach” due to the proximal risk to patients

3. The FDA continues to identify smaller and smaller organisms that might pass through a filter

Is the issue serious? I think so: the risk of infection to patients is one of the few direct links that pharmaceutical manufacturers can draw between manufacturing practice and patient safety, which is one of the goals of Quality by Design. Is the safety threat from changes to filter properties and microbe size in the presence of slightly different formulations substantial? I don’t think so, especially not in proportion to the cost of demonstrating this specifically. But the data aren’t available to test this hypothesis, because the industry has no shared database demonstrating that a range of aqueous protein solutions has no effect on bacterial retention. There is really nothing proprietary about these data, and the only organizations that benefit from keeping them confidential are the testing labs. Sharing the data should benefit all of us. An organization like PDA or ISPE should have an interest in pooling these data and then making a case to the FDA and EMEA that the vast majority of protein formulations have already been bracketed by existing testing, and that the revalidation of bacterial retention on filters following formulation changes is mostly superfluous.

In the meantime, if you don’t have enough product to perform bacterial retention studies, at least check the excipients, as in a placebo or diluent buffer. A filter failure is far more likely to be due to the excipients than to the active ingredient, which is typically present in much smaller amounts (by weight and by molarity). By doing this, you are both protecting your patients in early clinical testing and reducing your risk with regulators.
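To put rough numbers on that last point, here is a minimal sketch comparing the mass and molar contributions of the components of a hypothetical protein formulation. The concentrations and molecular weights are illustrative assumptions, not data from any particular product.

# Minimal sketch: compare the mass and molar contributions of a hypothetical
# protein formulation's active ingredient versus its excipients.
# All numbers are illustrative assumptions, not data from any real product.

components = {
    # name: (concentration in mg/mL, molecular weight in g/mol)
    "monoclonal antibody (active)": (1.0, 150_000),  # assumed 1 mg/mL, ~150 kDa
    "sucrose": (85.0, 342),                          # assumed ~250 mM
    "histidine": (1.6, 155),                         # assumed ~10 mM
    "polysorbate 80": (0.2, 1_310),                  # assumed 0.02% w/v
}

for name, (mg_per_ml, mw) in components.items():
    molarity_mM = mg_per_ml / mw * 1000.0  # (g/L) / (g/mol) = mol/L; x1000 -> mM
    print(f"{name:32s} {mg_per_ml:6.1f} mg/mL  {molarity_mM:8.3f} mM")

With these assumed numbers, the excipients dwarf the active ingredient both by weight and by molarity, which is why a placebo or diluent buffer is a reasonable worst-case surrogate for early bacterial retention work.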

Wednesday, April 14, 2010

Oops, adventitious viral DNA fragments in a vaccine

by Dr. Ray Nims

On March 22, 2010, a press release from GlaxoSmithKline (GSK) announced that DNA from porcine circovirus had been detected in their rotavirus vaccine. According to GSK, the DNA “was first detected following work done by a research team in the US using a novel technique for looking for viruses and then confirmed by additional tests conducted by GlaxoSmithKline”. As a result of this finding, the FDA “is recommending that US clinicians and public health professionals temporarily suspend the use of Rotarix as a precautionary measure. The FDA have also stated that they intend to convene an advisory committee, within approximately four to six weeks, to review the available data and make recommendations on rotavirus vaccines licensed in the USA. The FDA will also seek input on the use of new techniques for identifying viruses in vaccines.” The EMEA, on the other hand, does not appear to consider this finding to be a safety concern, citing the fact that porcine circovirus is not infectious for human cells and does not cause disease in humans.

Porcine circovirus
Source: Meat and Livestock Commission, UK

What is porcine circovirus? Porcine circovirus (PCV) is a member of the family Circoviridae, among the smallest of the known animal DNA viruses. It is approximately 17 nm in diameter, non-enveloped, with icosahedral symmetry. The virus was originally identified as a noncytopathic contaminant of the PK-15 porcine kidney cell line (Tischer et al., Zentralblatt Bakt Hyg A 226:153-167, 1974). Like many very small, non-enveloped viruses, PCV represents a challenge for removal and inactivation.

Why is this finding coming to light now? Stated another way, why wasn’t the PCV DNA detected when the vaccine was initially tested and approved for human use? The press release wasn’t specific as to the method used. It has subsequently been revealed, however, that the fragments were detected using sequence-independent amplification (deep sequencing; or, as Eric Delwart calls it, metagenomics). The resulting library of amplified sequences is characterized by BLAST searching against databases of known sequences. Confirmation in this case was obtained using microarray and PCR analyses. The sequencing techniques have been available for some time and have proven useful for identifying viruses in an academic setting, though they have not been applied to safety testing of biopharmaceuticals until fairly recently due to the relatively high costs associated with the analyses.
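For readers curious what the identification step of such a metagenomic screen can look like, here is a minimal sketch, emphatically not GSK’s or any particular lab’s pipeline. It assumes reads have already been assembled and compared to a nucleotide database with the standard BLAST+ command-line tools; the file name, score cutoffs, and keyword list are illustrative assumptions.

# Minimal sketch of the identification step in a sequence-independent
# (metagenomic) screen: read BLAST+ tabular output for an assembled contig
# library and flag hits whose subject titles look viral. The file name,
# cutoffs, and keyword list are illustrative assumptions.
#
# Example upstream command (standard BLAST+ command line):
#   blastn -query contigs.fasta -db nt \
#     -outfmt "6 qseqid stitle pident length evalue" -out hits.tsv

VIRAL_KEYWORDS = ("circovirus", "virus", "viral", "phage")  # illustrative
MIN_IDENTITY = 90.0   # percent identity cutoff (assumed)
MAX_EVALUE = 1e-10    # E-value cutoff (assumed)

def flag_viral_hits(path="hits.tsv"):
    """Yield (contig, subject title, % identity, E-value) for plausible viral hits."""
    with open(path) as handle:
        for line in handle:
            qseqid, stitle, pident, length, evalue = line.rstrip("\n").split("\t")
            if (float(pident) >= MIN_IDENTITY
                    and float(evalue) <= MAX_EVALUE
                    and any(k in stitle.lower() for k in VIRAL_KEYWORDS)):
                yield qseqid, stitle, float(pident), float(evalue)

if __name__ == "__main__":
    for contig, title, identity, evalue in flag_viral_hits():
        print(f"{contig}\t{title}\t{identity:.1f}%\t{evalue:.1e}")

The essential point is that nothing in this workflow requires knowing in advance which virus to look for, which is exactly why it can turn up contaminants that targeted assays were never designed to detect.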

The finding of viral DNA should not be equated with detecting the infectious virus in the product. The sequence-independent amplification, microarray, and virus-specific PCR assays used can detect viral nucleic acid, but as normally performed do not indicate whether infectious virus is present. Generally, with the possible exception of transforming viruses, it is the infectious virus that is of concern, not its DNA. Efforts to assess the presence of intact viral genomic material and of infectious porcine circovirus in this vaccine are most likely underway at this time.

The presence of the PCV viral sequences has provisionally been attributed to the use of porcine trypsin during the culture of the Vero cell substrate in which the vaccine is manufactured. The trypsin used had been gamma-irradiated to inactivate adventitious viruses prior to use. While it would be comforting to believe that the PCV DNA may simply have reflected carryover of non-infectious, lethally-irradiated PCV1 from the trypsin, the fact is that gamma-irradiation is not very effective at inactivating this small, non-enveloped virus (Plavsic and Bolin. Resistance of porcine circovirus to gamma irradiation. BioPharm Int. 14:32-36, 2001). In the case of porcine circovirus, there is little evidence to indicate that the virus is infectious or pathogenic for humans. So regardless of the outcome of the various ongoing studies, it is likely that the use of the GSK rotavirus vaccine will be reinstated after the FDA convenes and discusses the implications of this finding.

I predict that there will be more and more of this kind of revelation in the future as the sequencing techniques display stretches of viral or other contaminant DNA within samples of biopharmaceuticals. I would hate to see revelations like this impede the use of the sequencing technologies going forward, as these technologies are going to be very useful to the industry as rapid detection methods for contaminant identification.

Wednesday, April 7, 2010

What's Your Velocity?

By Dr. Scott Rudge

With the development of very high titer cell culture and fermentation processes, downstream processing has been identified as the new bottleneck in biotechnology. The productivity of chromatography, in particular, has become limiting. There are two schools of thought for scaling up chromatography: in one, the linear velocity (flow rate divided by column cross-sectional area) is held constant; in the other, the total (volumetric) flow rate divided by the volume of the column is held constant. In the former method, the length of the column has to be held constant. In the latter method, the geometry of the column is not important, as long as the column can be packed efficiently and flow is evenly distributed. This makes the latter method more flexible and more accommodating of commercially available, off-the-shelf column hardware. But does it work?
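To make the difference between the two schools concrete, here is a minimal sketch with assumed dimensions: a 1.6 cm i.d. x 12 cm lab column run at 60 cm/hr, scaled to an off-the-shelf 30 cm i.d. column packed to a 20 cm bed. All of the numbers are illustrative, not a recommendation.

import math

# Sketch comparing the two scale-up rules described above, with assumed numbers.

def column(diameter_cm, length_cm):
    area = math.pi * (diameter_cm / 2.0) ** 2   # cross-sectional area, cm^2
    return area, area * length_cm               # area and volume (1 cm^3 = 1 mL)

A_small, V_small = column(1.6, 12.0)
A_large, V_large = column(30.0, 20.0)

v_small = 60.0               # linear velocity at small scale, cm/hr
F_small = v_small * A_small  # volumetric flow at small scale, mL/hr
f = F_small / V_small        # flow per column volume, CV/hr (equals v/L = 5 CV/hr here)

F_large_const_v = v_small * A_large  # rule 1: hold linear velocity constant
F_large_const_f = f * V_large        # rule 2: hold F/V constant

print(f"small column:        {F_small:9.0f} mL/hr = {F_small / V_small:4.1f} CV/hr")
print(f"large, constant v:   {F_large_const_v:9.0f} mL/hr = {F_large_const_v / V_large:4.1f} CV/hr")
print(f"large, constant F/V: {F_large_const_f:9.0f} mL/hr = {F_large_const_f / V_large:4.1f} CV/hr")

Under the constant F/V rule the large column runs at the same column volumes per hour as the small one, whereas holding linear velocity constant on a longer bed quietly drops the throughput per column volume.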

In my experience, holding the flow rate divided by column volume constant between scales works very well. There is plenty of theoretical basis for the methodology as well. Yamamoto has published extensively on the reasons this technique works, and it is also the basis for the scale-up approach described in my textbook. Here, briefly, using plate height theory, is the theoretical basis:

The basic goal in chromatography scale-up is to maintain resolution. “Resolution” is a way to describe the power of a chromatography column to separate two components. It depends on the relative retention of the components, which is fixed by the thermodynamics of the column and remains constant as long as the chemistry (the resin type, the buffer composition) remains constant. It also depends on the peak dispersion in the column, which is a function of the transport phenomena and is related to the chemistry only through the inherent diffusivity of the molecules involved. Otherwise, it is dependent on mass transfer, flow rate, temperature, and flow distribution. Treating the thermodynamics as constant, we can say:

Rs ∝ √N

where Rs is resolution and N is the number of theoretical plates. N is the ratio of the column length L to the plate height H, so

N = L / H

In liquids, H is approximately a linear function of the linear velocity v, according to van Deemter, as discussed in a previous post. So we can say that H = Cv (where C is the van Deemter constant). Now, the linear velocity is the flow rate F divided by the cross-sectional area of the column, A (v = F/A), and the column volume, V, is the cross-sectional area times the column length (V = AL). The total flow rate divided by the column volume is held constant between scales; we’ll call this constant “f” (f = F/V). Substituting:

N = L / H = L / (Cv) = LA / (CF) = V / (CF) = 1 / (Cf)

so that

Rs ∝ 1 / √(Cf)

This bit of mathematical gymnastics says that resolution depends on only two fundamental properties of the scale-up: van Deemter’s term “C”, which captures the dispersion caused by convection relative to mass transfer to and from a resin particle, and “f”, the flow rate relative to the column volume that is chosen. There is no need to hold column length or linear velocity constant as long as the flow rate relative to the column volume is held constant. You might also notice from the math that doubling the column length has the same effect on resolution as cutting the flow rate in half. However, doubling the length costs more in terms of resin, consumes more solvent than decreasing the flow rate (that’s for you, Mike Klein!), and increases the pressure.
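A quick numerical check of that conclusion, using an assumed van Deemter C term and an assumed flow rate per column volume (both values are illustrative):

import math

# Check of the relationship derived above: with H = C*v and f = F/V = v/L,
# the plate count is N = L/H = 1/(C*f), and resolution scales as sqrt(N).
# C and f are assumed, illustrative values.

C = 0.001   # van Deemter C term, hr (so H = C*v, with v in cm/hr and H in cm)
f = 5.0     # flow rate per column volume, 1/hr (i.e., 5 CV/hr)

def plates(C, f):
    return 1.0 / (C * f)

N_base = plates(C, f)
N_half_flow = plates(C, f / 2.0)       # cut the flow rate in half
N_double_length = plates(C, f / 2.0)   # doubling L at constant v also halves f = v/L

for label, N in [("baseline", N_base),
                 ("half flow rate", N_half_flow),
                 ("double length", N_double_length)]:
    print(f"{label:16s} N = {N:6.0f}   relative Rs = {math.sqrt(N / N_base):.2f}")

Both changes give the same √2 gain in resolution; the difference between them shows up only in resin, solvent, and pressure cost, as noted above.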

The plate count analysis is very phenomenological, but it does hold up in practice (otherwise it would have been abandoned). More detailed mathematical models predict the same performance, so confidence in this scale-up model is high.

One common mistake made by those using the constant linear velocity model is in how they add extra column capacity. Since most people are unwilling to pay for a custom-diameter column, but base their loadings on the total volume of resin, they add bed volume by adding length. But since they are also unwilling to change the linear velocity, they end up decreasing the productivity of the column: for example, the separation achieved at a smaller scale in a 12 cm long column at 60 cm/hr is now performed in a 15 cm long column at 60 cm/hr, taking 25% more time.
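The arithmetic behind that 25% figure is simple enough to show directly; the sketch below just converts bed height and linear velocity into residence time per column volume, using the numbers from the example above.

# Adding bed volume by adding length, while refusing to change linear velocity,
# stretches every step of the cycle in proportion to L/v.

v = 60.0                        # linear velocity, cm/hr (held constant)
L_small, L_large = 12.0, 15.0   # bed heights, cm

t_small = L_small / v           # residence time per column volume, hr
t_large = L_large / v

extra = (t_large - t_small) / t_small
print(f"residence time: {t_small * 60:.0f} min -> {t_large * 60:.0f} min "
      f"({extra:.0%} longer per column volume processed)")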

If the less well-known constant F/V model is used for a process involving mammalian cells, it would be imperative to explain and demonstrate this model in the scale-down validation that is a critical part of the viral clearance package.

But how can you get even more performance out of your chromatography? Treating the unit operation as an adsorption step and scaling up using Mass Transfer Zone (MTZ) concepts will be covered in a future posting.