Showing posts with label process development. Show all posts

Tuesday, January 31, 2012

Is Membrane Chromatography the Answer?

by Dr. Scott Rudge

Membrane chromatography gets a fair amount of hype.  It’s supposed to be faster and cheaper, and it can be made disposable.  But is it the real answer to the “bottleneck” in downstream processing?  Was Allen Iverson the answer to the Nuggets’ basketball dilemma?  I’m still skeptical.

The idea to add ligand functionality to membranes was not new at the time, but the idea really got some traction when it was endorsed by Ed Lightfoot in 1986.  Lightfoot’s paper pointed out that the hydrodynamic price paid for averaging of flow paths in a packed bed might not be worth it.  If thousands of parallel hollow fibers of identical length and diameter could be placed in a bundle, and the diameter of these fibers could be made small enough that the diffusion path length was comparable to that in a bed of packed spheres, or smaller, then performance would be equivalent or superior at a fraction of the pressure drop.  This is undoubtedly true; there is no reason to have a random packing if flowpaths can be guaranteed to be exactly equivalent.  However, every single defect in this kind of system works against its success.  For example, hollow fibers that are slightly more hollow will have lower pressure drop, lower surface-to-volume ratio, lower binding capacity and higher proportional flow.  Slightly longer fibers will have slightly higher pressure drop and slightly higher binding capacity, and will carry proportionally less of the flow.  Length acts linearly on pressure drop and flow rate, but internal diameter acts to the fourth power, so minor variations in internal diameter would dominate the performance of such systems.
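That fourth-power sensitivity is easy to check numerically. The sketch below uses the Hagen-Poiseuille proportionality (Q ∝ d⁴/L for laminar flow at a fixed pressure drop); the 5% tolerances are illustrative assumptions, not figures from any fiber vendor:

```python
# Relative flow through a hollow fiber at fixed pressure drop,
# per Hagen-Poiseuille: Q is proportional to d^4 / L.

def relative_flow(d, L, d_ref=1.0, L_ref=1.0):
    """Flow through a fiber of diameter d and length L,
    relative to a reference fiber, at equal pressure drop."""
    return (d / d_ref) ** 4 * (L_ref / L)

# A fiber 5% wider than nominal carries about 22% more flow,
# while a fiber 5% longer carries only about 5% less:
wide = relative_flow(d=1.05, L=1.0)
long_fiber = relative_flow(d=1.0, L=1.05)
print(f"5% wider fiber:  {wide:.3f}x nominal flow")
print(f"5% longer fiber: {long_fiber:.3f}x nominal flow")
```

Diameter defects thus dominate length defects by a wide margin, which is the maldistribution problem described above.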
Indeed, according to Mark Etzel, these hollow-fiber systems were abandoned as impractical in favor of membrane chromatography based on conventional membrane formats that have been derivatized to add binding functionality.  As this technology has developed, its application and scale-up have begun to look very much like packed bed chromatography.  Here are some particulars:
1. Development and scale-up are based on membrane volume. However, breakthrough curves are measured in tens, or even seventies, of equivalent volumes (see Etzel, 2007) instead of the twos or threes found in packed beds.
2. Binding capacities are lower in membrane chromatography. In a recent publication by Sartorius, the ligand density in Sartobind Q is listed as 50 mM, while for Sepharose Q-HP it is 140 mM. In theory, the membrane format has a higher relative dynamic binding capacity, but this has yet to be demonstrated (see above).
3. The void volume in membranes is surprisingly high, at 70%, compared to packed beds at 30%. This is one reason for the low relative binding capacity.
4. Disposable is all the rage, but there’s no evidence that, on a volume basis, derivatized membranes are cheaper than chromatography resins. In fact, the economic comparisons published by Gottschalk must assume that the packed bed will be loaded 100 times less efficiently than the membrane just to make the numbers work. The cost per volume per binding event drops dramatically over the first 10 reuses of a chromatography resin.
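The reuse economics in point 4 can be sketched with simple amortization. The prices below are hypothetical placeholders, not figures from Gottschalk's comparisons:

```python
# Amortized cost per binding/elution cycle for a reusable adsorbent.
# All dollar figures are hypothetical, for illustration only.

def cost_per_use(purchase_cost, n_uses, regen_cost_per_use=0.0):
    """Purchase price spread over n_uses, plus per-cycle regeneration cost."""
    if n_uses < 1:
        raise ValueError("need at least one use")
    return purchase_cost / n_uses + regen_cost_per_use

resin_price = 10_000.0   # $/L of packed resin, hypothetical
regen_buffer = 50.0      # $ of regeneration buffer per cycle, hypothetical

# The cost per binding event drops steeply over the first ~10 reuses:
for n in (1, 5, 10, 100):
    print(f"{n:>3} uses: ${cost_per_use(resin_price, n, regen_buffer):,.2f} per cycle")
```

A single-use membrane pays its full volume cost every cycle, which is why the comparison hinges on how many times the resin is actually reused.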
It turns out that membrane chromatography has a niche, and that is flow-through operations in which some trace contaminant in a product, like residual endotoxin or DNA, is removed.  This too can be done efficiently with column chromatography when operated in a high-capacity (for the contaminant) mode.  But there is a mental block among chromatographers, who want to operate adsorption steps in chromatographic, resolution-preserving modes; this block has not yet afflicted the membrane camp.  A small, high-capacity column operated at a flow rate equivalent to a membrane’s (in volumes per bed or membrane volume) will work as well, and in my opinion more cheaply if regenerated.
These factors should be considered when choosing between membrane and packed bed chromatography.

Thursday, April 21, 2011

Getting a grip on prophage

by Dr. Ray Nims

In a previous post, we discussed bacteriophage as a risk for the manufacture of biopharmaceuticals by bacterial fermentation. We mentioned briefly that bacteriophage may integrate within the genome of bacterial cells and that this may also represent a problem. Now we will explain why.

Bacteriophage are viruses that infect bacteria, and they have evolved two mutually exclusive strategies for survival. One involves a lytic growth cycle leading to death (lysis) of the host cell and release of progeny phage that may then infect additional host cells (so-called horizontal transmission). The other strategy is called lysogeny and involves integration of phage coding sequences into the host (bacterial) cell genome. The integrated phage is termed a prophage. This strategy for phage survival is referred to as vertical transmission since the phage genomic material is reproduced along with that of the host cell as the latter proliferates. Under certain circumstances, however, the integrated prophage can excise itself from the host cell chromosome in a process referred to as induction. The excised phage then may initiate a lytic infection of the host cell, causing all of the problems discussed in the previous post.




Illustration of a T4 phage infecting E. coli by Jonathan Heras

The relative success (i.e., from the perspective of the phage!) of the lytic vs. lysogenic survival strategies changes with the probability of host cell survival. Lysogeny appears to be a strategy that allows phage to persist during periods of low host cell availability or poor environmental (e.g., nutrient) conditions. Induction of prophage is an adaptation of the phage to host cell damage. This damage usually takes the form of a major stress to the host cell.

If stress can lead to prophage induction, the worry then becomes that some manipulation of a bacterial production cell during biopharmaceutical manufacture could lead to induction and initiation of a lytic phage infection. How can we assess and mitigate the potential for this to occur? There are two approaches: first, we can perform chemical or physical induction studies to determine the likelihood of encountering a prophage in a given production cell; and second, we can engineer the conditions of bacterial growth such that induction of a prophage is discouraged.

Phage induction studies may be performed on the bacterial production cell following initial engineering of the cell or during characterization of the cell bank. The inducing agent most often employed is mitomycin C. Other types of inducing agents (conditions) include carcinogens (such as the N-nitrosamines), hydrogen peroxide, high temperature, starvation, and UV radiation. The cells are treated with the inducing agent or condition, then one of various endpoints is used to detect the initiation of a lytic phage infection. These could include culture assays as well as molecular techniques such as PCR, microarray, or DNA chips.

Suppose you have an E. coli production cell harboring a problematic prophage. What can be done to discourage phage induction? Certain growth procedures have been shown to reduce spontaneous phage induction in E. coli cultures. These include using lower bacterial growth rates, replacement of glucose in growth medium with glycerol, and engineering the production cell through introduction of a plasmid conferring over-expression of the phage cI gene.

In summary, there are approaches that can identify the likelihood of encountering prophage induction from a bacterial production cell. The time to perform this type of testing is during development of the fermentation process (following the engineering of the production cell), or following banking of the production cell. If prophage induction appears to be a problem, bacterial growth procedures can help to reduce the potential. If this is not sufficient, the production cell may need to be re-engineered to produce a phage-resistant mutant.

Wednesday, February 16, 2011

What's up with USP Chapter 1050?

by Dr. Ray Nims

United States Pharmacopeia (USP) general chapter <1050> Viral Safety Evaluation of Biotechnology Products Derived from Cell Lines of Human or Animal Origin originally appeared in supplement 10 to USP23-NF18 in May 1999 and was at that time essentially a verbatim adoption of the International Conference on Harmonization (ICH) guideline Q5A(R1) of the same title.

The chapter describes the methods of evaluating the viral safety of biotechnology pharmaceutical products that are manufactured using cell lines of human or animal origin.

In 2006, an ad hoc advisory panel was assembled by the USP and tasked with revision of this chapter. The goals were to update the chapter and, more specifically, to add greater detail in the viral clearance validation section. The hope was that a user following the recommendations set forth in the general chapter would have greater confidence that viral clearance validation data generated would prove acceptable to the regulatory agencies.

The organization of the revised chapter <1050> was not changed. It comprised the following main sections: 1) Introduction; 2) Potential sources of viral contamination; 3) Cell line qualification: testing for viruses; 4) Testing for viruses in unprocessed bulk; 5) Rationale and action plan for viral clearance studies and virus tests on purified bulk; and 6) Evaluation and characterization of viral clearance procedures. The changes proposed for the first five sections were minor and primarily reflected attempts to update the chapter and to align it more closely with FDA guidance documents. The most extensive changes were to section 6, in keeping with the goals described above.

The revised chapter was published for public comment in Pharmacopeial Forum 36(3) in the fall of 2010. Comments received as a result of the public review apparently suggested that a more extensive update of the chapter was warranted. At any rate, the revised chapter was not made effective during the USP’s 2005-2010 revision cycle. A new ad hoc advisory panel now being assembled as part of USP's 2010-2015 revision cycle will take over the responsibility for moving the revision of this chapter forward.

Friday, January 21, 2011

Remember....bacteriophage are viruses too

by Dr. Ray Nims

Are you using bacterial cells to produce a biologic? Do not make the mistake of thinking that your upstream process is safe from infection by adventitious viruses. True, you are not required to test for the usual viruses of concern using a lot release adventitious virus assay. But bacterial production systems are susceptible to introduction of viruses just as mammalian cell processes are. In this case, the viruses just happen to be referred to as bacteriophage. Other than this, the putative contaminants have the same nasty property exhibited by viruses that can contaminate mammalian cell processes, that is … their small size (24-200 nm) allows them to readily pass through the filters used to “sterilize” process solutions. So media, buffers, induction agents, vitamin mixes, trace metal mixes, etc. that are fed into the fermenter without proper treatment can introduce a bacteriophage. Especially worrisome in this regard are raw materials that are generated through bacterial fermentation (such as amino acids, antibiotics). A fermenter infected with a lytic phage exhibits a clear signal that the bacterial substrate is unhappy. The trick then is to discover where the phage originated and to mitigate the risk of experiencing it again.

How can you mitigate the risk of experiencing a bacteriophage infection? Many of the same strategies used to protect mammalian cell processes may be applicable to the bacterial fermentation world. Raw materials and/or process solutions may be subjected to gamma-irradiation, to ultraviolet light in the C range, to prolonged heating or to high temperature short time treatment, to viral filtration, etc. In addition, mitigation of the risk of bacteriophage contamination may require filtration of incoming gases using appropriate filters.

A sampling of the data available on inactivation of bacteriophage by various methods is shown in the table below. The literature is extensive, and as with viral inactivation, the inactivation of phage by certain of the methods (e.g., UVC, gamma-irradiation) may depend both upon the matrix in which the phage is suspended and upon the physical properties of the phage (e.g., genome or particle size, strandedness, etc.). For fairly dilute aqueous solutions, gamma-irradiation, UVC treatment, or parvovirus filtration should represent effective inactivation/removal methods. HTST at temperatures effective for parvoviruses (102°C, 10 seconds) should be effective for most bacteriophage, although this is an area that needs further exploration.


Mitigating the risk of experiencing a bacteriophage contamination of a bacterial fermentation process is possible if one remembers that bacteriophage are similar to mammalian viruses. Strategies that are effective for small, non-enveloped mammalian viruses (i.e., the worst case for mammalian viruses) should also be effective for most bacteriophage.

A possible exception to this is prophage. In analogy with the presence of endogenous retroviruses in certain mammalian cells (i.e., rodent, human, monkey), there is a possibility of encountering integrated bacteriophage (prophage) in certain bacterial cell lines. Like endogenous retroviruses, prophage may result in the production of infectious particles under certain conditions. This phenomenon deserves some discussion, but this will have to be deferred to a future blog.

References: Purtel et al., 2006; Ward, 1979; Sommer et al., 2001.

Wednesday, November 3, 2010

Fry those mollicutes!

By Dr. Ray Nims

It is not only viruses that may be introduced into biologics manufactured in mammalian cells when bovine sera are used in upstream cell growth processes. The other real concern is the introduction of mollicutes (mycoplasmas and acholeplasmas). Mollicutes, like viruses, are able to pass through the filters (including those of 0.2 micron pore size) used to sterilize process solutions. Because of this, filter sterilization will not assure mitigation of the risk of introducing a mollicute through use of contaminated bovine or other animal sera in upstream manufacturing processes.

Does mycoplasma contamination of biologics occur as a result of use of contaminated sera? The answer is yes. Most episodes are not reported to the public domain, but occasionally we hear of such occurrences. Dehghani and coworkers reported the occurrence of a contamination with M. mycoides mycoides bovine group 7 that was proven to have originated in the specific bovine serum used in the upstream process (Case studies of mycoplasma contamination in CHO cell cultures. Proceedings from the PDA Workshop on Mycoplasma Contamination by Plant Peptones. Parenteral Drug Association, Bethesda, MD. 2007, pp. 53-59). Contamination events with M. arginini and Acholeplasma laidlawii attributed to use of specific contaminated lots of bovine serum have also occurred.

Fortunately, the risk of introducing an adventitious mollicute into a biologics manufacturing process utilizing a mammalian cell substrate may be mitigated by gamma-irradiating the animal serum prior to use. This may be done in the original containers while the serum is frozen. Unlike the case for viruses, in which the efficacy of irradiation for inactivation may depend upon the size of the virus, mollicute inactivation by gamma irradiation has been found to be highly effective (essentially complete), regardless of the species of mollicute. The radiation doses required for inactivation are relatively low compared to those required for viruses (e.g., 10 kGy or less, compared to 25-45 kGy for viruses). The gamma irradiation that is performed by serum vendors is typically in the range of 25-40 kGy. This level of radiation is more than adequate to assure complete inactivation of any mollicutes that may be present in the serum. For instance, irradiation of calf serum at 26-34 kGy resulted in ≥6 log10 inactivation of M. orale, M. pneumoniae, and M. hyorhinis. In the table below I have assembled the data available on inactivation of mollicutes in frozen serum by gamma-irradiation.


So, the good news is that gamma irradiation of animal serum that is performed to mitigate the risk of introducing a viral contaminant will also mitigate the risk of introducing a mollicute contaminant. If the upstream manufacturing process cannot be engineered to avoid use of animal serum, the next best option is to validate the use of gamma irradiated serum in the process.  In fact, the EMEA Note for guidance on the use of bovine serum in the manufacture of human biological medicinal products strongly recommends the inactivation of serum using a validated and efficacious treatment, and states that the use of non-inactivated serum must be justified.


References: Gauvin and Nims, 2010; Wyatt et al. BioPharm 1993;6(4):34-40; Purtle et al., 2006

Thursday, August 26, 2010

Do We Have Clearance, Clarence?

By Dr. Scott Rudge

As in take offs and landings in civil aviation, the ability of a pharmaceutical manufacturing process to give clearance of impurities is vital to customer safety. It’s also important that the clearance mechanism be clear, and not confused, as the conversation in the classic movie “Airplane!” surely was (and don’t call me Shirley).

There are two ways to demonstrate clearance of impurities.

The first is to track the actual impurity loads. That is, if an impurity comes into a purification step at 10%, and is reduced through that step to 1%, then the clearance would typically be called 1 log, or 10 fold.
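The arithmetic behind that statement is just a base-10 logarithm of the concentration ratio; a minimal helper:

```python
import math

def log_clearance(c_in, c_out):
    """Log10 reduction of an impurity across a purification step."""
    return math.log10(c_in / c_out)

# The example above: impurity enters at 10% and leaves at 1%
print(log_clearance(10.0, 1.0))   # 1.0, i.e. 1 log or 10-fold
```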

The second is to spike impurities. This is typically done when an impurity is not detectable in the feed to the purification step, or when, even though detectable, it is thought desirable to demonstrate that even more of the impurity could be eliminated if need be.

The first method is very usable, but suffers from uneven loads. That is, batch to batch, the quantity and concentration of an impurity can vary considerably. And the capacity of most purification steps to remove impurities is based on quantity and concentration. Results from batch to batch can vary correspondingly. Typically, these results are averaged, but it would be better to plot them in a thermodynamic sense, with unit operation impurity load on the x-axis and efflux on the y-axis. The next figures give three of many possible outcomes of such a graph.


In the first case, there is proportionality between the load and the efflux. This would be the case if the capacity of the purification step were linearly related to the load. This is typically the case for absorbents, and for adsorbents at low levels of impurity. Only in this case, actually, does calculating log clearance apply across the range of possible loads. The example figure shows a constant clearance of 4.5 logs.


In the second case, the impurity saturates the purification medium. A maximum amount of impurity can be cleared, and no more. The closer the load is to exactly this capacity, the better the log removal looks; this is the point at which no impurity is found in the purification step effluent. All loads higher than this show increasing inefficiency in clearance.


In the third case, the impurity has a thermodynamic or kinetic limit in the step effluent. For example, it may have some limited solubility, and it reaches that solubility in nearly all cases. The more impurity that is loaded, the greater the proportion that appears to be cleared, because a constant amount of impurity is always recovered in the effluent.

For these reasons, simply measuring the ratio of impurity in the load and effluent to a purification step is inadequate. This reasoning applies even more so to spiking studies, where the concentration of the impurity is made artificially high. In these cases, it is even more important to vary the concentration or mass of the impurity in the load, and to determine what the mechanism of clearance is (proportional, saturation or solubility).
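The three mechanisms can be caricatured in a few lines of code. These toy models (all parameter values are illustrative assumptions, not data) show why a single-point measurement cannot distinguish them, while varying the load can:

```python
# Toy models of the three clearance mechanisms discussed above.

def efflux_proportional(load, clearance_logs=4.5):
    """Case 1: constant log clearance across the load range."""
    return load / 10 ** clearance_logs

def efflux_saturation(load, capacity=100.0):
    """Case 2: the step removes at most `capacity` units of impurity."""
    return max(0.0, load - capacity)

def efflux_solubility(load, solubility=5.0):
    """Case 3: the effluent always carries a constant, soluble amount."""
    return min(load, solubility)

# At a single load, each model gives one number; only a load sweep
# reveals which mechanism is operating:
for load in (50.0, 100.0, 200.0):
    print(load,
          efflux_proportional(load),
          efflux_saturation(load),
          efflux_solubility(load))
```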

Understanding the mechanism of clearance would be beneficial, in that it would allow the practitioner to make more accurate predictions of the effect of an unusual load of impurity. For example, suppose that, in the unlikely event a virus contaminates an upstream step in the manufacture of a biopharmaceutical, the titer is lower than spiking studies had anticipated. If the virus is cleared by binding to a resin, and the load is below the saturation limit, it’s possible to make the argument that the clearance is much larger, perhaps complete. On the other hand, claims of log removal in a solubility-limited situation can be misleading. The deck can be stacked by spiking extraordinary amounts of impurity, when the reality may be that the impurity is always present at a level where it is fully soluble in the effluent, and is never actually cleared from the process.

Clearance studies are good and valuable, and help us to protect our customers, but as long as they are done as single points on the load/concentration curve, their results may be misleading. When the question comes, “Do we have clearance, Clarence?” we want to be ready to answer the call with clear and accurate information. Surely varying the concentration of the impurity to understand the nature of the clearance is a proper step beyond the single point testing that is common today.

And stop calling me Shirley.

Thursday, June 17, 2010

Informing the FMEA

By Dr. Scott Rudge
Risk reduction tools are all the rage in pharmaceutical, biotech and medical device process/product development and manufacturing. The International Conference on Harmonization has enshrined some of the techniques common in risk management in their Q9 guidance, “Quality Risk Management”. The Failure Modes and Effects Analysis, or FMEA, is one of the most useful and popular tools described. The FMEA stems from a military procedure, published in 1949 as MIL-P-1629, and has been applied in many different ways. The method most used in health care involves making a list of potential ways in which a process can “fail”, or produce out of specification results. After this list has been generated, each failure mode in the list is assessed for its proclivity to “Occur”, “Be Severe” and “Be Detected”. Typically, these are scored from 1 to 10, with 10 being the worst case for each category, and 1 being the best case. The scores are multiplied together, and the product (mathematically speaking) is called the “Risk Priority Number”, or RPN. Then, typically, development work is directed towards the failure modes with the highest RPN.
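As a concrete sketch of the scoring step (the failure modes and rankings below are invented examples, not from any real FMEA):

```python
def rpn(occurrence, severity, detection):
    """Risk Priority Number: the product of the three 1-10 rankings."""
    for r in (occurrence, severity, detection):
        if not 1 <= r <= 10:
            raise ValueError("rankings must be between 1 and 10")
    return occurrence * severity * detection

# Hypothetical failure modes with hypothetical rankings:
failure_modes = {
    "wrong buffer pH": rpn(4, 7, 3),    # 84
    "filter breach":   rpn(2, 9, 8),    # 144
}

# Development work targets the highest-RPN failure modes first:
for mode, score in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
    print(mode, score)
```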

The problem is, it’s very hard to assign a ranking from 1 to 10 for each of these categories in a scientific manner. More often, a diverse group of experts from process and product development, quality, manufacturing, regulatory, analytical and other stakeholding departments convenes a meeting and assigns rankings based on experience. This is done once in the product life-cycle, and never revisited as actual manufacturing data start to accumulate. And, while large companies with mature products have become more sophisticated, and can pull data from other similar or “platform” products, small companies and development companies can really only rely on the opinion of experts, either from internal or external sources. The same considerations apply to small market or orphan drugs.

Each of these categories can probably be informed by data, but by far the easiest to assign a numerical value to is the “Occurrence” ranking. A typical Occurrence ranking chart might look something like this:

These rankings come from “piece” manufacturing, where thousands to millions of widgets might be manufactured in a short period of time. This kind of manufacturing rarely applies in the health care industry. However, this evaluation fits very nicely with the Capability Index analysis.

The Capability Index is calculated by dividing the variability of a process into its allowable variable range. Or, said less obtusely, by dividing the specification range by six times the standard deviation of the process performance. The capability index is directly related to the probability that a process will operate out of range or out of specification. This table, found on Wikipedia (my source for truth), gives an example of the correlation between the calculated capability index and the probability of failure:

As a reminder, the capability index is the upper specification limit minus the lower specification limit divided by six times the standard deviation of the process. The two tables can be combined to be approximately:
How many process data points are required to calculate a capability index? Of course, the larger the number of points, the better the estimate of average and standard deviation, but technically, two or three data points will get you started. Is it better than guessing?
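A minimal sketch of the calculation, assuming a centered, normally distributed process (the standard Cp assumptions; the data points are hypothetical assay results):

```python
import statistics
from math import erf, sqrt

def capability_index(data, lsl, usl):
    """Cp = (USL - LSL) / (6 * sample standard deviation)."""
    return (usl - lsl) / (6 * statistics.stdev(data))

def out_of_spec_probability(cp):
    """Two-sided out-of-spec probability for a centered normal process."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    return 2 * (1 - phi(3 * cp))

# Three points "get you started" -- with very wide error bars on the estimate
data = [9.0, 10.0, 11.0]                 # hypothetical process results
cp = capability_index(data, lsl=4.0, usl=16.0)   # Cp = 2.0 here
print(cp, out_of_spec_probability(cp))
```

With only two or three points the standard deviation estimate is crude, so the resulting occurrence ranking should be revisited as batches accumulate.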

Wednesday, April 7, 2010

What's Your Velocity?

By Dr. Scott Rudge

With the development of very high titer cell culture and fermentation processes, downstream processing has been identified as a new bottleneck in biotechnology. The productivity of chromatography, in particular, has become a bottleneck. There are two schools of thought for scaling up chromatography: in one, linear velocity (flow rate divided by column cross sectional area) is held constant; in the other, the total (volumetric) flow rate divided by the volume of the column is held constant. In the former method, the length of the column has to be held constant. In the latter method, the geometry of the column is not important, as long as the column can be packed efficiently and flow is evenly distributed. This makes the latter method more flexible, and accommodating of commercially available off-the-shelf column hardware packages. But does it work?

In my experience, holding flow rate divided by column volume constant between scales works very well. There is plenty of theoretical basis for the methodology as well. Yamamoto has published extensively on the reasons that this technique works. This method is also the basis for scale up described in my textbook. Here, briefly, and using plate height theory, is the theoretical basis:

The basic goal in chromatography scale up is to maintain resolution. “Resolution” is a way to describe the power of a chromatography column to separate two components. It depends on the relative retention of the components, which is fixed by the thermodynamics of the column and remains constant as long as the chemistry (the resin type, the buffer composition) remains constant. It also depends on the peak dispersion in the column, which is a function of the transport phenomena, and is only related to the chemistry by the inherent diffusivity of the molecules involved. Otherwise, it is dependent on mass transfer, flow rate, temperature, flow distribution. Treating the thermodynamics as constant, we can say:

Rs ∝ √N
where Rs is Resolution, and N is the number of theoretical plates. N is the ratio of the column length L to the plate height H, so

N = L/H and Rs ∝ √(L/H)
In liquids, H is approximately a linear function of linear velocity v, according to van Deemter, as discussed in a previous post. So we can say that H = Cv (where C is the van Deemter constant). Now, the linear velocity is the flow rate divided by the cross sectional area of the column, A, and the column volume, V, is the cross sectional area times the column length. The total flow rate (F) divided by the column volume is held constant between scales; we’ll call this constant “f”.

N = L/H = L/(Cv) = LA/(CF) = V/(CF) = 1/(Cf), so Rs ∝ 1/√(Cf)
This bit of mathematical gymnastics says that Resolution only depends on two fundamental properties of the scale up, van Deemter’s term “C”, which considers the dispersion caused by convection relative to mass transfer to and from a resin particle, and “f”, the flow rate relative to the column volume that is chosen. There is no need to hold column length or linear velocity constant as long as the flow rate relative to column volume is held constant. You might also notice from the math that doubling the column length has the same effect on resolution as dropping the flowrate by half. However, doubling the length costs more in terms of resin, and consumes more solvent than decreasing the flow rate (that’s for you, Mike Klein!) and increases the pressure.
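The gymnastics can be spot-checked numerically. Substituting H = Cv, v = F/A and V = AL gives N = 1/(C·f), and the sketch below (with an arbitrary, hypothetical value for the van Deemter term C and the 12 cm / 60 cm/hr example from this post) confirms that doubling the length at constant linear velocity and halving the flow rate are equivalent:

```python
def plate_count(length, flow, area, C):
    """N = L/H, with H = C*v and v = F/A (van Deemter, linear regime)."""
    v = flow / area
    return length / (C * v)

C = 0.01    # hypothetical van Deemter slope, arbitrary units
base = plate_count(length=12.0, flow=60.0, area=1.0, C=C)

# Doubling the column length at the same linear velocity...
doubled_length = plate_count(length=24.0, flow=60.0, area=1.0, C=C)
# ...gives the same plate count as halving the flow rate:
halved_flow = plate_count(length=12.0, flow=30.0, area=1.0, C=C)
assert doubled_length == halved_flow == 2 * base

# Either way, f = F/V alone determines N:  N = 1/(C*f)
f = 60.0 / (12.0 * 1.0)
assert abs(base - 1.0 / (C * f)) < 1e-9
print(base, doubled_length, halved_flow)
```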

The plate count analysis is very phenomenological, but it does hold up under practice (otherwise it would be abandoned). And the more delicate mathematical models predict the same performance, so confidence in this scale up model is high.

One common mistake made by those using the constant linear velocity model is in adding extra column capacity. Since most people are unwilling to pay for a custom diameter column, but base their loadings on the total volume of resin, they add bed volume by adding length. But since they are unwilling to change the linear velocity, they end up decreasing the productivity of the column (because, for example, the resolution they achieved at a smaller scale in a 12 cm long column at 60 cm/hr is now being performed in a 15 cm long column, at 60 cm/hr, therefore taking 25% more time).

If the less well known constant F/V model is used for a process involving mammalian cells, it would be imperative to explain and demonstrate this model in the scale down validation that is a critical part of the viral clearance package.

But how can you get even more performance out of your chromatography? Treating the unit operation as an adsorption step, and scaling up using Mass Transfer Zone (MTZ) concepts will be treated in a future posting.

Tuesday, November 10, 2009

The Good Buffer

By Scott Rudge

“A Good Buffer” has a number of connotations in biochemistry and biochemical engineering. A “good buffer” would be one that has good buffering capacity at the desired pH. The best buffering capacity is at the pK of the buffer of course, although it seems buffer salts are rarely used at their pK.

Second, a good buffer would be one matched to the application. Or maybe that’s first. For example, the buffering ion in an ion exchange chromatography step should have the same charge as the resin (so as not to bind and take up resin capacity): phosphate ion (negative) is a good choice for cation exchange resins (also negatively charged), like S and CM resins.

Another meaning of a “Good” buffer is a buffer described by Dr. Norman Good and colleagues in 1966 (N. E. Good, G. D. Winget, W. Winter, T. N. Connolly, S. Izawa and R. M. M. Singh (1966). "Hydrogen Ion Buffers for Biological Research". Biochemistry 5 (2): 467–477.). These twelve buffers have pK’s spanning the range 6.15 to 8.35, and are a mixture of organic acids, organic bases and zwitterions (having both an acidic and basic site). All twelve of Good’s buffers have pK’s that are fairly strongly temperature dependent, meaning that, in addition to the temperature correction required for the activity of hydrogen ion, there is an actual shift in pH that is temperature dependent. So, while a buffer can be matched to the desired pH approximately every 0.2 pH units across pH 7 ± 1, the buffers are expensive and not entirely suited to manufacturing applications.

In our view, a good buffer is one that is well understood and is designed for its intended purpose. To be designed for its intended purpose, it should be well matched to provide adequate buffering capacity at the desired pH and desired temperature. As shown in the figure, the buffering capacity of a buffer with a pK of 8 is nearly exhausted below pH 7 and above pH 9.




It’s easy to overshoot the desired pH at these “extremes”, but just such a mismatch between buffering ion and desired pH is often specified. Furthermore, buffers are frequently made by titrating to the desired pH from the pK of the base or the acid. This leads to batch to batch variation in the amount of titrant used, because of overshooting and retracing. In addition, the temperature dependence of the pK is not taken into account when specifying the temperature of the buffer. Tris has a pK of 8.06 at 20°C, so a Tris buffer used at pH 7.0 is already not a good idea at 20°C. The pK of Tris changes by -0.03 pH units for every 1°C increase in temperature. So if the temperature specified for the pH 7.0 buffer is 5°C, the pK will have shifted to 8.51. With only about 3% of its buffering capacity available at pH 7.0 and 5°C, Tris is not well matched at all.
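The Tris arithmetic above is a two-line Henderson-Hasselbalch calculation. The sketch below uses the figures quoted in this post (pK 8.06 at 20°C, shifting -0.03 pH units per °C) and reproduces the ~3% figure as the fraction of buffer in the basic form:

```python
def tris_pk(temp_c, pk_20=8.06, dpk_per_c=-0.03):
    """Tris pK at a given temperature, from the values quoted above."""
    return pk_20 + dpk_per_c * (temp_c - 20.0)

def base_fraction(ph, pk):
    """Fraction of buffer in the basic form (Henderson-Hasselbalch)."""
    ratio = 10 ** (ph - pk)          # [base]/[acid]
    return ratio / (1 + ratio)

pk_5c = tris_pk(5.0)                 # pK shifts from 8.06 up to ~8.51
frac = base_fraction(7.0, pk_5c)     # ~0.03: only ~3% in the basic form
print(round(pk_5c, 2), round(frac, 3))
```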

A good buffer will have a known mass transfer rate in water, so that its mixing time can be predicted. Precise amounts of buffering acid or base and co-salt are added to give the exact pH required at the exact temperature specified. This actually reduces our reliance on measurements like pH and conductivity, which can be inexact. Good buffers can be made with much more precision than ± 0.2 pH units and ± 10% of nominal conductivity. When you start to make buffers this way, you will rely more on your balance and on your understanding of appropriate storage conditions for your raw materials than on making adjustments in the field with titrants, time-consuming mixing, and guessing whether variation in conductivity is going to upset the process.