With the development of very-high-titer cell culture and fermentation processes, downstream processing has been identified as a new bottleneck in biotechnology. The productivity of chromatography, in particular, has become limiting. There are two schools of thought for scaling up chromatography: in one, the linear velocity (flow rate divided by column cross-sectional area) is held constant; in the other, the total (volumetric) flow rate divided by the column volume is held constant. In the former method, the length of the column must be held constant. In the latter, the geometry of the column does not matter, as long as the column can be packed efficiently and flow is evenly distributed. This makes the latter method more flexible and better able to accommodate commercially available, off-the-shelf column hardware. But does it work?
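As a rough numerical sketch of the difference between the two rules (the function names and example numbers are my own, not from any standard package), consider scaling a pilot column up to a larger diameter:

```python
import math

def scale_constant_velocity(flow_small, diam_small, diam_large):
    """Constant linear velocity: volumetric flow scales with
    cross-sectional area, so bed length (and cycle time) stays fixed."""
    area_ratio = (diam_large / diam_small) ** 2
    return flow_small * area_ratio  # volumetric flow at large scale

def scale_constant_fv(flow_small, vol_small, vol_large):
    """Constant F/V: volumetric flow per column volume is held fixed,
    so any geometry that packs and distributes well is acceptable."""
    return flow_small * (vol_large / vol_small)

# Hypothetical numbers: 1.0 L/min through a 10 cm diameter pilot column
print(scale_constant_velocity(1.0, 10.0, 30.0))   # 9.0 L/min at 30 cm diameter

v_small = math.pi * (10.0 / 2) ** 2 * 12.0 / 1000.0  # 12 cm bed, in liters
v_large = 10 * v_small                                # 10x the resin volume
print(scale_constant_fv(1.0, v_small, v_large))       # 10.0 L/min
```

Note that the constant-velocity rule pins the bed length, while the F/V rule leaves the diameter-to-length split as a free choice.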

In my experience, holding flow rate divided by column volume constant between scales works very well. There is plenty of theoretical basis for the methodology as well: Yamamoto has published extensively on the reasons this technique works, and it is also the basis for the scale-up described in my textbook. Here, briefly, using plate height theory, is the theoretical basis:

The basic goal in chromatography scale-up is to maintain resolution. “Resolution” describes the power of a chromatography column to separate two components. It depends on the relative retention of the components, which is fixed by the thermodynamics of the column and remains constant as long as the chemistry (the resin type, the buffer composition) remains constant. It also depends on the peak dispersion in the column, which is a function of the transport phenomena and is related to the chemistry only through the inherent diffusivities of the molecules involved. Otherwise, it depends on mass transfer, flow rate, temperature, and flow distribution. Treating the thermodynamics as constant, we can say:

$$R_s \propto \sqrt{N}$$

where $R_s$ is the resolution and $N$ is the number of theoretical plates. $N$ is the ratio of the column length $L$ to the plate height $H$, so

$$N = \frac{L}{H}$$
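To make the plate-count relationship concrete, here is a hypothetical calculation (the plate height value is illustrative only). If constant F/V reproduces roughly the same plate height and bed length at both scales, the plate count, and hence the resolution, is preserved:

```python
import math

def plate_count(bed_length_cm, plate_height_cm):
    """N = L / H: number of theoretical plates in the bed."""
    return bed_length_cm / plate_height_cm

def relative_resolution(n_large, n_small):
    """With the thermodynamics fixed, resolution scales as sqrt(N),
    so the ratio of resolutions is sqrt(N_large / N_small)."""
    return math.sqrt(n_large / n_small)

# Illustrative numbers: a 12 cm bed with ~0.05 cm plate height,
# reproduced at both scales.
n_small = plate_count(12.0, 0.05)
n_large = plate_count(12.0, 0.05)
print(n_small)                                 # 240.0 plates
print(relative_resolution(n_large, n_small))   # 1.0 -> resolution preserved
```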
The plate count analysis is very phenomenological, but it does hold up in practice (otherwise it would have been abandoned). More rigorous mathematical models predict the same performance, so confidence in this scale-up model is high.

One common mistake made by those using the constant linear velocity model comes in adding extra column capacity. Since most people are unwilling to pay for a custom-diameter column, but base their loadings on the total volume of resin, they add bed volume by adding length. And since they are unwilling to change the linear velocity, they end up decreasing the productivity of the column: for example, a separation achieved at small scale in a 12 cm long column at 60 cm/hr is now performed in a 15 cm long column, still at 60 cm/hr, and therefore takes 25% more time.
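The productivity penalty in that example is easy to check numerically (a sketch using the numbers from the text):

```python
def cycle_time_ratio(length_large_cm, length_small_cm, velocity_cm_per_hr):
    """At fixed linear velocity, the time to pass one bed volume is L / u,
    so cycle time scales directly with bed length."""
    t_small = length_small_cm / velocity_cm_per_hr
    t_large = length_large_cm / velocity_cm_per_hr
    return t_large / t_small

# 12 cm bed stretched to 15 cm, both run at 60 cm/hr:
print(cycle_time_ratio(15.0, 12.0, 60.0))  # 1.25 -> 25% more time per cycle
```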

If the less well-known constant F/V model is used for a process involving mammalian cells, it is imperative to explain and demonstrate the model in the scale-down validation that is a critical part of the viral clearance package.

But how can you get even more performance out of your chromatography? Treating the unit operation as an adsorption step and scaling up using Mass Transfer Zone (MTZ) concepts will be covered in a future posting.
