Why is the literature on Quality by Design so laden with statistics and experimental design space jargon? After all, the definition of the term “design” doesn't seem to include the analysis of messy data leading to rough correlations with results that are valid only over a limited range. So what gives?
The idea behind QbD was to use mathematical models to predict process outcomes. This concept can be applied directly to simple unit operations such as drying, distilling, heating and cooling. However, unlike in the petrochemical business, the thermodynamic properties of most active pharmaceutical ingredients are not known and are difficult to measure. The unit operations used to manufacture common biotechnology products, such as cell culture, chromatography and fermentation, have been modeled, but the models are very sensitive to unknown or unmeasurable adjustable parameters. The batch nature of these operations also makes them difficult to control, because classical control theory relies on measuring an output and adjusting an input to bring that output back toward the design specification.
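To make the feedback idea concrete, here is a minimal sketch of the classical control loop described above, assuming a hypothetical first-order process under PI control. The process model, controller gains and setpoint are illustrative assumptions, not values for any real unit operation; the point is only that this loop depends on a continuously measurable output, which batch operations often don't provide.

```python
# Minimal sketch of a classical feedback loop: measure an output, compare it
# to the design specification, and adjust an input to drive the output back
# toward that specification. The first-order process model, gains and setpoint
# below are illustrative assumptions, not values for any real unit operation.

def simulate_pi_control(setpoint=50.0, kp=0.5, ki=0.2, gain=2.0,
                        tau=5.0, dt=0.1, steps=1000):
    """PI control of a first-order process: dy/dt = (gain * u - y) / tau."""
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y              # measure the output against spec
        integral += error * dt
        u = kp * error + ki * integral    # adjust the manipulated input
        y += dt * (gain * u - y) / tau    # process responds to the new input
    return y

print(f"output after 100 s of control: {simulate_pi_control():.2f} (setpoint 50.0)")
```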
Since there is no clear path to using such models, the industry has adopted an approach that emphasizes extracting as much phenomenological information from as few experiments as possible. This is the Design of Experiments (DOE) approach: input conditions or operating parameters are systematically varied over a range, the process outputs are measured, and statistics are used to deconvolute the results. The combined ranges tested become the “design space,” and the process performance outputs whose variation comes closest to the process failure limits become the critical performance parameters. The results are useful, but only within the design space, and only with the certainty that the statistics report. And because the results are phenomenological, the effect of scale is often unknown.
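As a hedged illustration of the DOE idea, the sketch below varies two hypothetical operating parameters (temperature and pH) over a two-level full factorial design and fits a linear model to estimate the main effects and their interaction. The factor names and yield responses are invented for illustration; a real study would use measured process outputs and report the statistical confidence of each estimate.

```python
# Illustrative two-factor, two-level full factorial with a linear model fit.
# Factor levels are coded (-1 = low end of range, +1 = high end of range);
# the yield numbers are invented for this example.
import numpy as np

temp = np.array([-1, -1, +1, +1], dtype=float)
ph   = np.array([-1, +1, -1, +1], dtype=float)
yield_pct = np.array([72.0, 75.0, 80.0, 90.0])

# Design matrix: intercept, main effects and the interaction term.
X = np.column_stack([np.ones_like(temp), temp, ph, temp * ph])
coef, *_ = np.linalg.lstsq(X, yield_pct, rcond=None)

for name, c in zip(["intercept", "temp", "pH", "temp x pH"], coef):
    print(f"{name:>10}: {c:+.2f}")
# These coefficients only describe behavior inside the tested ranges (the
# design space); extrapolating outside those ranges is not supported.
```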
The statistical approach is acceptable, and for the immediate future it's probably the best we can expect. But the focus on this approach seems to drown out the more pressing need for good process models and physical properties data. These are the elements that allowed the petrochemical and commodity chemicals industries to scale up processes with assurance that quality specifications would be met. There are countless models available for bioprocessing's more complicated unit operations, but they have parameters that we don't know and can't calculate from first principles. There is no question that we need to find ways to collect these data, along with a commitment to publish or share them. There are also simpler unit operations that we can model, scale up and scale down with complete assurance, including mixing and storing solutions, filtration, diafiltration, centrifugation and some reactions. We shouldn't let the more complicated operations that still require statistical DOE approaches prevent us from applying the true principles of QbD to our simpler unit operations.
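As one hedged example of a simpler operation that can be modeled with confidence, the sketch below uses the standard constant-volume diafiltration relation C = C0 * exp(-N * S), which predicts the residual solute concentration from the number of diavolumes N and the sieving coefficient S, independent of batch scale. The starting concentration and diavolume count are illustrative.

```python
# Constant-volume diafiltration of a freely permeating solute: the residual
# concentration falls exponentially with the number of diavolumes, a relation
# that holds regardless of batch scale. Numbers below are illustrative.
import math

def diafiltration_residual(c0: float, diavolumes: float, sieving: float = 1.0) -> float:
    """Residual solute concentration after constant-volume diafiltration."""
    return c0 * math.exp(-diavolumes * sieving)

# Example: washing out a buffer salt (sieving ~1.0) from 150 mM with 5 diavolumes.
print(f"{diafiltration_residual(150.0, 5.0):.2f} mM remaining")  # about 1 mM
```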
Nice overview - good characterization is key to any process understanding. Continuous improvement through analytical technology improvements is at the heart of understanding our product quality.
Thanks randallg! I agree, analytical technologies are bringing us a long way in our understanding of biotechnology products. But there are things we can and should do in manufacturing that don't require fancy analytics. More on this later!