Abstract
Substances leached from materials used in pharmaceutical manufacturing systems, packages, and/or medical devices can be administered to a patient as part of a clinical therapy. These leachables can adversely affect the effectiveness of the therapy and/or patient safety. Thus, relevant samples such as material extracts or drug products are chromatographically screened for foreign organic impurities, where screening is the analytical process of discovering, identifying, and quantifying these unspecified foreign impurities. Although screening methods for organic extractables and leachables have achieved a high degree of technical and practical sophistication, they are not without limitations with respect to their ability to accomplish these three functions. In this first part of a series of three manuscripts, the process of screening is examined, limitations in screening are identified, and the concept of using an internally developed analytical database to identify, mitigate, or correct the resulting errors is introduced. Furthermore, errors of omission are described, where an error of omission occurs when a screening method fails to produce a recognizable response to an analyte present in the test sample. The error may be that no response is produced ("falling through the cracks") or that a produced response is not recognizable ("failing to see the tree for the forest"). In either case, proper use of a robust internal extractables/leachables database can decrease the frequency with which errors of omission occur. Examples of omission errors, their causes, and their possible resolution are discussed.
Keywords
- Extractables
- Leachables
- Chromatographic analysis
- Screening analysis
- Target analysis
- Quantitation
- Identification
- Database
- Errors of omission
Introduction
When drug products are manufactured, packaged, and administered, they unavoidably and inevitably contact items such as manufacturing components, packaging systems, and administration devices. During contact, substances present in or on these items can be transferred to the drug product where they become foreign impurities, otherwise known as leachables. When the drug products are administered to a patient during clinical therapy, the patient is exposed to the leachables.
Additionally, medical devices contact patients and potentially clinical personnel either directly or indirectly during their clinical use. As the device is used, substances present in or on the device may leach out into the device/human contact medium. Thus, patients and potentially clinicians are exposed to the leached substances during the device's clinical use.
As foreign impurities (i.e., impurities not associated with the drug product's intentional ingredients), leachables could adversely affect the drug product's or medical device's suitability for its intended use, including:
patient and user health;
drug product and/or medical device quality;
drug product and/or medical device stability;
drug product and/or medical device efficacy; and
drug product and/or medical device compliance.
Thus, drug products are tested for foreign impurities (leachables), and extracts of contacted items are tested for extractables (as potential or probable foreign impurities) so that the foreign impurities can be identified, quantified, and ultimately assessed for potential adverse effects (1, 2).
Analytics of Organic Extractables and Leachables
When an extract is tested for organic extractables (or a drug product is tested for organic leachables), the desired outcome is to account for all extractables uniquely present in an extract (versus an extraction blank) above an established threshold or to establish all leachables uniquely present in a drug product above an established threshold. This desired outcome is achieved by analyzing the extract or drug product (and any associated blank or control) with chromatographic methods that are able to produce useful and interpretable responses for potential extractables or leachables (3–5). The chromatographic methods employed for this purpose are generally of two types—target or screening (see Figure 1).
In targeted analyses, specific extractables or leachables have been selected from a larger population of substances as targets for the analysis. As the specified targets are known, the sole purpose of target analysis is to quantify the targets in either the extracts or the drug product. As quantitation is the sole purpose and the number of targeted substances is generally small, targeted methods have been optimized in terms of attributes relevant to quantitation, such as accuracy, precision, sensitivity, selectivity, and linearity of response.
However, it is often the case that the extractables in an extract or leachables in a drug product cannot be specified up front and thus they must be discovered. Once discovered, the substances must be identified, as they were not specified up front. Lastly, the discovered and identified substances must be quantified. The analytical process by which unspecified extractables or leachables are discovered, identified, and quantified is termed screening.
The difference between screening and targeting can be understood by considering the questions that each approach attempts to answer (Figure 2). In screening, relevant questions include:
Are there substances unique to the sample (versus an appropriate blank) that are present in the sample above a certain concentration threshold? The screening method answers this question by producing a recognizable response for all such substances.
If yes to question 1, what are the identities of those substances? The screening method answers this question via a response that contains information that can be used to secure an identity.
If yes to question 1, what are the concentrations of those substances? The screening method answers this question via a response whose magnitude can be at least semi-quantitatively correlated to the analyte's concentration.
Considering targeting, the following questions are relevant:
Is a specified substance present in the sample in reportable quantities? The target method answers this question by producing a specific response of an anticipated nature.
If yes to question 1, what is the concentration of the specified substance? The target method answers this question via a response whose magnitude can be quantitatively correlated to the target's concentration.
One obvious difference between screening and targeting is that the list of targeting questions is shorter than the list of screening questions. This is because the question of identity, relevant to screening, is irrelevant to targeting: the act of targeting specifies the identity up front. Nevertheless, although targeting does not require that the identity of the targeted analyte be established, it does require confirmation that the compound being measured as the target is in fact the target and not an "imposter." A second difference can be linked to the difference between an assessment threshold (applicable to screening) and a reporting threshold (applicable to targeting). In screening, any chromatographic peak whose response is larger than the assessment threshold, for example, the analytical evaluation threshold (AET), must be reported for assessment, where the assessment threshold is established on the basis of a leachable's potential product quality impact. To facilitate the assessment, the compound responsible for the peak must be identified and quantified. In targeting, a compound is reported only if its concentration is greater than a reporting threshold. Although one can certainly envision circumstances where both a screening and a target method would have to respond to an analyte down to the AET, one can also envision circumstances where the screening method would have to perform down to the AET, whereas the target method would report values only above a reporting threshold that is likely larger than the AET and that is linked to a substance's probable accumulation level and its inherent safety impact.
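To make the threshold concept concrete, the following sketch computes an estimated AET using a commonly cited simplified relationship (a dose-based threshold divided by the daily dose volume, adjusted by an uncertainty factor). The numbers, parameter names, and the specific form of the calculation are illustrative assumptions and are not taken from this article.

```python
# Illustrative sketch only: a simplified form of an AET calculation in which a
# dose-based threshold (ug/day) is converted to a concentration threshold
# (ug/mL) via the daily dose volume and an uncertainty factor. The exact form
# of an AET calculation depends on the product, the study design, and the
# applicable guidance; all values below are hypothetical.

def estimated_aet_ug_per_ml(dose_based_threshold_ug_per_day: float,
                            daily_dose_volume_ml: float,
                            uncertainty_factor: float = 1.0) -> float:
    """Concentration at or above which a screening peak must be assessed."""
    return dose_based_threshold_ug_per_day / daily_dose_volume_ml / uncertainty_factor


# Hypothetical example: 1.5 ug/day threshold, 10 mL of drug product per day,
# uncertainty factor of 2 to account for response-factor variation.
print(estimated_aet_ug_per_ml(1.5, 10.0, 2.0))  # -> 0.075 ug/mL
```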
It is important to consider screening and targeting more closely to address a common misconception about both approaches. Consider quantitation, for example. Although determining an analyte's concentration is a desired outcome of both screening and targeting, an analyte's concentration is not directly produced by either a screening or a target analysis. In both cases, an analyte response is produced during testing and it is the response that is further processed to produce the concentration. The same is true with respect to identification; although the screening and targeting assays produce a response that contains information that can be used to infer an identity, the response itself is not an identity. It is only with further processing that the response's information can lead to an identification. Thus, screening methods do not themselves identify and quantify substances, and target methods do not themselves quantify substances. Rather, both methods produce data that are further interpreted to provide quantities and identities. Thus, if a false identification or an inaccurate concentration estimate is secured, it may reflect an error on the part of the test method (i.e., failure to produce a response), the interpretation of the test method response data (i.e., failure to properly interpret a response's information), or both the method and the interpretation (e.g., failure to produce an interpretable response).
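As a minimal illustration of the point that a response is not a concentration, the following sketch converts a peak area into a concentration estimate using a single-point internal-standard model with a relative response factor. The model, function name, and values are illustrative assumptions, not the authors' prescribed quantitation procedure.

```python
# Minimal sketch: further processing a chromatographic response (a peak area)
# into a concentration estimate. A single-point internal-standard model with a
# relative response factor (RRF) is assumed; all values are hypothetical.

def estimate_concentration(analyte_area: float,
                           internal_std_area: float,
                           internal_std_conc_ug_per_ml: float,
                           relative_response_factor: float = 1.0) -> float:
    """Estimate analyte concentration (ug/mL) from peak areas.

    The response itself is not a concentration; it becomes one only when it is
    processed against a calibration model such as this one.
    """
    return (analyte_area / internal_std_area) \
        * internal_std_conc_ug_per_ml / relative_response_factor


# Example: a peak quantified against an internal standard spiked at 2 ug/mL,
# assuming RRF = 1 (a common default when no authentic standard is available).
print(estimate_concentration(analyte_area=1.5e6, internal_std_area=1.0e6,
                             internal_std_conc_ug_per_ml=2.0))  # -> 3.0 ug/mL
```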
As will be expanded upon subsequently, the universe of all possible organic extractables and leachables is large and diverse, and thus no single analytical method has the ability to screen extracts or drug products for all possible extractables and leachables. Therefore, a set of complementary, overlapping, and independent (orthogonal) analytical methods is employed for extractables and leachables screening. Because chromatographic methods are uniquely appropriate for analyzing samples for organic constituents, it has become standard industry practice to accomplish extractables and leachables (E&L) screening chromatographically, as the chromatographic process is a necessary and effective means of separating individual analytes from one another. This separation is essential for securing identities and concentration estimates. Moreover, the various types of chromatography exhibit the desirable characteristics of complementarity, overlap, and orthogonality.
For example, gas chromatography coupled with headspace sampling is widely accepted as an effective means of analyzing a sample for volatile organic substances. Additionally, so-called direct injection gas chromatographic methods are suitable for semivolatile organic analytes, whereas liquid chromatography is well-suited for nonvolatile organic analytes (see Figure 3). As they rely on different mechanisms of separation, these methods are orthogonal; because the boundaries between volatile, semivolatile, and nonvolatile are not sharply defined, the methods are complementary in the sense that some analytes can be accounted for by more than one method.
In essence, the chromatographic separation is a means of preparing an extract (or drug product) for detection. It is the detection, as opposed to the separation, that specifically provides the information that enables discovery, identification, and quantitation. The need to perform these functions for a large and diverse population of potential analytes establishes the requirements for appropriate detection mechanisms, where the ideal detection mechanism produces:
A response for the widest possible set of potential analytes (discovery).
A response that contains information with which or from which an analyte's identity can be inferred (identification).
A response whose magnitude can be correlated to the analyte's concentration in the tested sample (quantitation).
Although the chromatographic methods noted in Figure 3 are amenable to numerous appropriate detection approaches, clearly very few of these approaches fulfill these three requirements. For example, consider flame-ionization detection (FID), which is commonly applied in tandem with a gas chromatography (GC) separation. Although an FID detector responds well to a broad population of chemically diverse organic substances with high sensitivity and a more or less uniform and linear response function (making it an appropriate detector for discovery and quantitation), the FID signal provides little information beyond retention time that could be used to establish an analyte's identity.
In fact, the only commonly employed GC and liquid chromatography (LC) detector that is arguably capable of producing information that meets all three requirements is a mass spectrometer (MS), which is why the MS detector, in its many different manifestations, is the detector most widely used in E&L screening.
If the MS detector has shortcomings as the "universal" detector applied in E&L screening, they are that (a) some compounds do not ionize and thus do not produce an MS response and (b) MS responses among extractables and leachables can vary greatly, confounding quantitation; both (a) and (b) are especially true when MS is applied as an LC detector (6–9). To address these shortcomings, MS detection is often used in conjunction with orthogonal detection methods. For example, the GC column effluent can be split between FID and MS detectors to produce simultaneous FID and MS chromatograms, while the use of UV and MS detection in LC is the standard of practice. If nothing else, use of simultaneous dual detection methods alerts an analyst to "a peak that was not picked up in the MS chromatogram" (an error of omission as noted later in this manuscript).
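The following sketch illustrates how dual detection can flag a potential omission by listing FID peaks that have no counterpart in the MS total ion chromatogram; the peak lists and matching tolerance are hypothetical, and the comparison shown is deliberately simplistic.

```python
# Illustrative sketch (not an actual laboratory procedure): flag FID peaks with
# no counterpart in the MS TIC, alerting the analyst to a possible error of
# omission. Retention times and the matching tolerance are hypothetical.

def unmatched_fid_peaks(fid_rt_min, ms_rt_min, tolerance_min=0.05):
    """Return FID retention times with no MS peak within the tolerance window."""
    return [rt for rt in fid_rt_min
            if not any(abs(rt - ms_rt) <= tolerance_min for ms_rt in ms_rt_min)]


fid_peaks = [3.21, 5.47, 8.90, 12.35]   # retention times (min) from the FID trace
ms_peaks = [3.22, 8.91, 12.34]          # retention times (min) from the MS TIC

print(unmatched_fid_peaks(fid_peaks, ms_peaks))  # -> [5.47]: seen by FID, missed in the TIC
```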
Limitations of the Chromatographic Screening Process
In the ideal or perfect situation, leachables in a drug product and extractables in an extract are:
completely and fully accounted for (meaning that all leachables present in the drug product or all extractables present in the extract above an established and justified analytical threshold have been discovered);
fully and correctly identified; and
accurately and precisely quantified
as such information is essential for assessing potential adverse effects. Generally, this leachables and extractables information is obtained by analytically screening or targeting drug products or extracts. In screening, analytical methods of broad general applicability and with high information content are used to secure responses for substances that are likely to be either extractables or leachables. Information contained within these responses is used to secure identities and estimate the concentration of the individually revealed substances.
In targeting (also called profiling), an analytical method that has been developed and qualified for a set of defined target substances is applied to drug products or extracts to answer two questions: (1) “is the target substance present in the sample?” and (2) if the target substance is present in the sample, “what is its concentration?” Given the large population of chemically diverse extractables and leachables, screening is much more commonly applied than is profiling.
In the perfect or ideal world, screening methods would (1) respond to the entire population of extractables/leachables, (2) correctly identify all the substances that produced a response, and (3) accurately quantify all substances that produced a response. In the real and imperfect world, the possibility exists that leachables or extractables screening is subject to errors, including (Figure 4):
the error of omission;
the error of inexact identification; and
the error of inaccurate and imprecise quantitation.
These errors, individually or cumulatively, can hinder, complicate, bias, or even thwart an assessment, producing an incomplete or erroneous assessment that could lead to either the commercialization of compromised drug products or the rejection of acceptable, necessary, and effective drug products.
Addressing the Limitations via an Internal Extractables and Leachables Database
Although extractables or leachables screening is sufficiently complex that it is virtually impossible to completely eliminate these errors, practices can be adopted to mitigate the number and magnitude of these errors. Thus, it is reasonable to expect that organizations engaged in extractables or leachables assessments have generated and routinely apply an internally derived, verified, well-populated, and expanding extractables/leachables database. Furthermore, it is reasonable to expect that the database contains sufficient information so that when it is applied to screening, the database:
minimizes omissions in an extractables or leachables profile;
enables full and exact identification of all relevant substances; and
facilitates more accurate and precise quantitation.
By definition, an extractables or leachables analytical database is a collection of analytical information for specified and relevant extractables and leachables, secured by subjecting reference materials (such as authentic standards) to the analytical screening process and collecting and collating relevant information. The extractables/leachables database is thus a compilation of essential analytical information for a defined set of substances that have been established to be extractables and leachables. Minimally, this essential information includes analytical data that secures or enables robust and rigorous identification and accurate and precise quantitation for a broad population of extractables and leachables that reflect those substances that have been encountered or anticipated in laboratory investigations. Such analytical data includes:
key mass spectral features such as accurate mass, fragmentation patterns revealing major and diagnostic ions, and other identifying information;
absolute or relative retention times or retention indices; and
response factors and/or response functions.
Expanding their utility in terms of facilitating extractables/leachables assessment, such databases could include information such as:
key chemical properties (molecular weight, acid/base dissociation constant, octanol/water partition coefficient, solubility);
source and use of the extractable/leachable; and
assessment-enabling data (QSAR analysis for mutagenic/sensitization/irritation potential, Cramer classification, NOAEL, PDE [permissible daily exposure], etc.).
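As a minimal sketch of how a single entry in such a database might be structured, consider the following in-memory record; the field names mirror the information categories listed above, and the example entry's values are illustrative placeholders rather than actual Database content.

```python
# Minimal sketch of one database record, assuming a simple in-memory
# representation. Field names follow the information categories listed above;
# the example values are illustrative placeholders, not actual Database entries.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractableRecord:
    name: str
    cas_number: Optional[str]
    molecular_weight: float
    accurate_mass: float                        # monoisotopic mass for MS identification
    diagnostic_ions: list[float]                # major/diagnostic fragment m/z values
    retention_index: Optional[float]            # GC retention index (or relative RT)
    relative_response_factor: Optional[float]   # versus the internal standard
    log_p: Optional[float] = None               # octanol/water partition coefficient
    source_and_use: str = ""                    # e.g., antioxidant, plasticizer, slip agent
    assessment_data: dict = field(default_factory=dict)  # e.g., Cramer class, PDE

# Illustrative entry (values are placeholders):
record = ExtractableRecord(
    name="2,4-di-tert-butylphenol",
    cas_number="96-76-4",
    molecular_weight=206.32,
    accurate_mass=206.1671,
    diagnostic_ions=[191.1, 206.2, 57.1],
    retention_index=1513.0,
    relative_response_factor=0.9,
    log_p=4.9,
    source_and_use="degradation product of phosphite antioxidants (e.g., Irgafos 168)",
    assessment_data={"cramer_class": "I"},      # illustrative placeholder only
)
```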
An Example of an Internal Extractables and Leachables Database
Although it is likely that the format and the physical nature of an E&L database varies somewhat from owner to owner, the power of the database is derived from the number of substances in the database, the amount and type of information collected in the database, and the integrity of that information. Throughout this series of documents, general concepts will be supported by examples and case studies encountered by Nelson Labs and by excerpts from an analysis of the Nelson Labs Database (hereafter referred to as the "Database"), provided for the purpose of fostering sound scientific processes and practices. As an example of an E&L database, Table I presents a small portion of the Database for semivolatile organic compounds characterized by gas chromatography/mass spectrometry (GC/MS); this portion of the Database consists of entries for >2500 semivolatile substances that have been encountered as either extractables or leachables over the course of E&L studies performed at Nelson Labs. The chromatographic and mass spectral information contained in the Database enables the procurement of confirmed identifications and accurate concentration estimates for a large number of the most commonly encountered extractables and leachables.
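As an illustration of how chromatographic and mass spectral information in a database can be used to secure an identification, the following sketch matches an unknown GC/MS peak against database entries using a retention index window and a spectral similarity score. The matching thresholds, the cosine scoring, and the toy data are assumptions made for illustration; the actual search procedures applied to the Database are not described here.

```python
# Illustrative sketch (an assumed approach, not the actual Database search
# algorithm): a database entry is considered a candidate identification when
# its retention index agrees within a window and its spectrum is sufficiently
# similar to the unknown's spectrum.

import math

def cosine_similarity(spectrum_a: dict, spectrum_b: dict) -> float:
    """Cosine similarity between two spectra given as {m/z: intensity} dicts."""
    mzs = set(spectrum_a) | set(spectrum_b)
    dot = sum(spectrum_a.get(mz, 0.0) * spectrum_b.get(mz, 0.0) for mz in mzs)
    norm = math.sqrt(sum(v * v for v in spectrum_a.values())) \
         * math.sqrt(sum(v * v for v in spectrum_b.values()))
    return dot / norm if norm else 0.0

def match_unknown(unknown_ri, unknown_spectrum, database, ri_window=20, min_score=0.8):
    """Return database entries consistent with the unknown peak, best match first."""
    hits = []
    for entry in database:
        if abs(entry["retention_index"] - unknown_ri) <= ri_window:
            score = cosine_similarity(unknown_spectrum, entry["spectrum"])
            if score >= min_score:
                hits.append((score, entry["name"]))
    return sorted(hits, reverse=True)

# Hypothetical usage with toy spectra:
db = [{"name": "compound A", "retention_index": 1510,
       "spectrum": {57: 30, 191: 100, 206: 25}},
      {"name": "compound B", "retention_index": 1512,
       "spectrum": {77: 100, 105: 60, 182: 40}}]
print(match_unknown(1513, {57: 28, 191: 100, 206: 22}, db))  # -> [(~1.0, 'compound A')]
```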
Errors of Omission, a Fatal Error
Extractables and leachables profiles obtained by analytically screening either extracts or drug products should ideally consist of every organic extractable present in an extract (or organic leachable in a drug product) above a defined and justified evaluation or reporting threshold (e.g., the AET) (10, 11). An error of omission occurs when the analytical screening process fails to account for all such extractables and leachables. In chromatographic methods used to screen for organic extractables/leachables, an error of omission is the absence of a recognizable chromatographic peak attributable to the analyte of interest.
An error of omission is a fatal error, as committing it irreversibly compromises the assessment of the extractables or leachables profile. An extractable or leachable that is not accounted for by the analytical process is an extractable or leachable that cannot and will not be assessed. Thus, an error of omission causes the assessment of the effect of the extractable/leachable to be incomplete.
There are two major “flavors” of omission errors:
squeezing through the cracks; and
failing to see the tree in the forest.
Each of these “flavors” will be considered in greater detail as follows.
Substances Squeezing through the Cracks
The error of omission termed "squeezing through the cracks" is undoubtedly the omission error that comes most readily to mind: the circumstance in which the analytical method(s) do not produce a response to an analyte, which has thus "squeezed through the cracks (or gaps)" in the method(s). Although most modern analytical screening strategies for extractables/leachables employ multiple overlapping but orthogonal analytical methods to close as many cracks (gaps) as possible, it is well-recognized that there are significant and meaningful gaps in both the individual analytical methods and in the combined, multimethod analytical strategy.
There are three aspects to consider in addressing this type of error of omission: (1) how to determine that such an error has occurred, (2) how to correct the error once it has surfaced, and (3) how to minimize the possibility that such an error occurs. Considering aspect (1), addressing the means of establishing whether extractables or leachables have been missed by the analytical screening process is outside the scope of this document. However, concepts such as total organic carbon (TOC) reconciliation have been proposed as one means of establishing that one or more substances has “fallen through the cracks” (12, 13). Moreover, a highly evolved internal database of extractables/leachables data could provide alerts to potential omissions if the database is associative. For example, a number of degradation products of Irganox-related antioxidants have been reported as extractables (14, 15). In an associative database, noting that a test article contains an Irganox-type antioxidant could produce a list of probable additional Irganox-related extractables. If extractables screening of an Irganox-containing test article revealed only some of these known related extractables, it would be reasonable for the analyst to ask “I wonder where the other related substances are?” and to answer the question by more closely examining the chromatographic data, possibly “finding” a nearly omitted substance.
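A minimal sketch of the "associative" behavior described above, in which knowledge of a test article's additives generates a checklist of related extractables to look for, is shown below. The mapping structure is hypothetical and the compound lists are drawn from commonly reported Irganox/Irgafos degradation products; neither represents actual Database content.

```python
# Illustrative sketch of an "associative" database query (the structure and
# compound lists are hypothetical examples, not Database content): knowing a
# test article's declared additives suggests related extractables to look for.

RELATED_EXTRACTABLES = {
    # additive declared/identified in the test article -> substances commonly
    # reported as its related degradation products or impurities
    "Irganox 1010": ["3,5-di-tert-butyl-4-hydroxybenzaldehyde",
                     "7,9-di-tert-butyl-1-oxaspiro[4.5]deca-6,9-diene-2,8-dione",
                     "3-(3,5-di-tert-butyl-4-hydroxyphenyl)propionic acid"],
    "Irgafos 168": ["2,4-di-tert-butylphenol",
                    "tris(2,4-di-tert-butylphenyl) phosphate"],
}

def expected_but_missing(declared_additives, found_extractables):
    """List associated substances that screening did not report, prompting a
    closer look at the chromatographic data for possible omissions."""
    found = set(found_extractables)
    return [related
            for additive in declared_additives
            for related in RELATED_EXTRACTABLES.get(additive, [])
            if related not in found]

# Hypothetical usage: only one Irgafos 168-related substance was reported.
print(expected_but_missing(["Irgafos 168"], ["2,4-di-tert-butylphenol"]))
# -> ['tris(2,4-di-tert-butylphenyl) phosphate']
```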
However, once it has been established that an error of omission has occurred, the only means of correcting the error is to expand or augment the analytical screening process by either optimizing the existing screening methods to close the identified gaps or by including additional screening methods that are capable of responding to compounds that fall outside the capabilities of the initial screening approach.
The utility of a database in terms of addressing the third aspect is addressed later in this document.
Failing to See the Trees in the Forest
In the ideal world, individual extractables or leachables produce individual and unique responses in screening methods that are readily distinguishable from all other analytical responses. In the real world, it is often the case that analyses for extractables/leachables produce a multitude of responses, which, for one reason or another, might hide or obscure other responses.
Consider extractables screening as an example. It is generally the case that one produces an extract that is well-suited for analysis, meaning that it is readily amenable to the analytical methods and produces minimal background response. Thus, extractables can generally be readily differentiated from the background response associated with the extraction solvent.
Nevertheless, certain extractables profiles themselves can be sufficiently complex that it is difficult to differentiate individual extracted substances from one another. Although a response is produced for all analytes, responses for individual analytes might be unresolved, unrecognizable, or otherwise not useful.
In certain cases, leachables screening can be even more complicated, as the drug products that are screened for leachables may be much more analytically “busy” than are extraction blanks, meaning that the drug product matrix itself may produce a multitude of analytical responses that interfere with, mask, or obscure the analytical responses associated with leachables.
For example, consider the paired chromatograms shown in Figure 5, reflecting a drug product blank (drug product stored in an inert vessel) and a drug product sample (drug product stored in its container closure system). As analytical responses produced by the drug product itself obscure entire regions in the chromatogram, possible responses to leachables that elute in the obscured regions could go unrecognized, resulting in an error of omission.
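The following sketch illustrates, in simplified form, the blank-versus-sample comparison described above: flagging "difference" peaks that are unique to the sample and above a threshold. The peak lists, retention time tolerance, and area threshold are hypothetical, and it is precisely this kind of naive comparison that can fail when matrix responses obscure a region of the chromatogram.

```python
# Illustrative sketch (assumed peak lists, not the data behind Figure 5):
# compare blank and sample peak tables to flag "difference" peaks unique to the
# sample and above a threshold. Peaks eluting in regions obscured by the drug
# product matrix will not be flagged, which is how an omission can occur.

def difference_peaks(sample_peaks, blank_peaks, rt_tol=0.05, area_threshold=1e5):
    """Return sample peaks above the threshold with no blank peak at the same RT."""
    unique = []
    for rt, area in sample_peaks:
        in_blank = any(abs(rt - b_rt) <= rt_tol for b_rt, _ in blank_peaks)
        if area >= area_threshold and not in_blank:
            unique.append((rt, area))
    return unique

# (retention time in minutes, peak area) -- hypothetical values
sample = [(4.10, 2.3e6), (6.45, 4.0e5), (6.55, 3.9e6), (9.80, 1.2e5)]
blank = [(4.10, 2.2e6), (6.55, 3.8e6)]

print(difference_peaks(sample, blank))  # -> [(6.45, 400000.0), (9.8, 120000.0)]
```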
Thus, an error of omission occurs when one cannot differentiate one tree (the response produced by a single extractable or leachable) from the forest of trees (all responses produced by either the test sample itself or other extractables/leachables).
Addressing the Errors of Omission with a Database
The use of an associative database to deal with extractables that might be omitted was discussed previously. Additional effects that an internal database can have on the three aspects of errors of omission are illustrated in Figure 6. Although the internal database can tangentially facilitate the determination of whether an omission has occurred and possibly correct an omission once it has been discovered, the most direct use of the database is to reduce the possibility of an omission occurring.
It is highly desirable that the possibility of substances falling through the cracks be reduced to as low a value as possible. One means of accomplishing this objective is through the generation of an extractables/leachables analytical database. Clearly, if an individual extractable or leachable is present in the analytical database, then one or more of the analytical screening methods is capable of accounting for that extractable or leachable. Thus, the larger the database (i.e., the more compounds in the database), the lower is the probability of committing an error of omission.
For example, consider the case of a universe of extractables that contains 5000 individual substances. In the absence of a database, any or even every extractable in this population could potentially be omitted. However, if there were a database of 1000 compounds, then those 1000 compounds would be accounted for by application of the analytical process and would not be omitted. Nevertheless, there would remain an 80% possibility (4000 unaccounted-for extractables ÷ 5000 possible extractables × 100%) that any given extractable encountered is one that could be omitted. If the database were expanded to include 2500 compounds, the possibility of an omission would decrease to 50%. If the database were to include the entire 5000-member population of extractables, then the possibility of an error of omission would be eliminated.
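Expressed directly in code, the arithmetic in the preceding paragraph is simply the fraction of a hypothetical extractables universe that is not covered by the database:

```python
# The arithmetic from the paragraph above: the percentage of a hypothetical
# 5000-substance extractables universe not covered by a database of a given
# size is the possibility that a given extractable could be omitted.

def omission_possibility(universe_size: int, database_size: int) -> float:
    """Percentage of the extractables universe not covered by the database."""
    return 100.0 * (universe_size - database_size) / universe_size

for db_size in (0, 1000, 2500, 5000):
    print(db_size, f"{omission_possibility(5000, db_size):.0f}%")
# 0 -> 100%, 1000 -> 80%, 2500 -> 50%, 5000 -> 0%
```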
Although the existence of a database decreases the probability of an error of omission, it does not prevent an omission from occurring or correct the omission, as omissions are an intrinsic property of the analytical methods themselves. Nevertheless, it is appropriate to consider the use of an E&L database to optimize the analytical screening methods, as optimized screening methods will create fewer omissions. Among the many unanswered questions facing E&L practitioners are "how complete are my screening methods?" and "how can I validate their completeness?"
Another way of expressing this thought is “how many potential extractables do my screening methods miss and what can I do to fill the gaps?” These phrases are alternate ways of expressing the error of omission. Clearly, the E&L database can be leveraged to address and rectify this issue. Knowledge of what compounds the methods have found is surely relevant and applicable to the questions of “what compounds haven't I found?” and “how can I verify the breadth of my screening methods?”
Although the generation of the database is, first and foremost, a means of defining the screening method's capabilities, it is also a means of revealing gaps in these capabilities and can provide insights on how to fill those gaps. It is not unreasonable to expect that the statement “these are the compounds that I have found with my screening methods” would increase one's ability to answer the question “what types of compounds have my methods missed?” Furthermore, method optimization via the database could be used to improve separation efficiency thereby mitigating interferences, co-elution, substances eluting too close to the void, and substances that “do not come off the column.”
It is overly simplistic to think of errors of omission as just a matter of numbers, meaning "the more compounds I have found (and placed in a database), the fewer compounds I am missing." When the "80–20 Rule" is applied to surfacing extractables and leachables via screening methods, it can be restated as "20% of the extractables are present in 80% of the extracts, whereas the remaining 80% of the extractables are present in only 20% of the extracts." Thus, when it has been established that the screening methods (supported by a database) are capable of responding to this 20% of critical extractables, situations where screening misses (omits) an extractable occur only infrequently. When a database contains the most commonly encountered substances, the possibility of an error of omission occurring is substantially diminished.
The power of a database to address the second type of omission error, failing to see the tree in the forest, is illustrated in Figures 5 and 7 and is based on the observation that it is easier to answer the question “is this specific tree in the forest?” than it is to answer the question “what individual trees are in the forest?” Although it may have been challenging to distinguish the peaks ascribed to the extractables that were intentionally added to the drug product on the basis of simply comparing the chromatograms and looking for “difference” peaks (Figure 5), the ability to “find” the difference peaks is greatly enhanced if one knows the retention and the response characteristics of the added substances because they are contained in a database. To illustrate this point, Figure 7 shows how the ability to deconvolute MS spectral data can be used to reveal “trees in the forest” whose presence was unknown and unknowable in the MS total ion chromatogram (TIC). By deconvoluting the MS spectral data, mass spectra of the individual compounds that elute in a “busy” part of the chromatogram become “cleaner” (containing more relevant ions from the compound of interest and fewer irrelevant ions from co-eluting interfering compounds), thus facilitating the identification of the compound of interest and minimizing the risk of an omission error. Although the utility of peak deconvolution processes is considerable, it is noted that as is the case with all advanced computerized data processing techniques, the outcome of the deconvolution process depends on the skill with which it is applied, for example, the input parameters.
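The following is a greatly simplified sketch of the idea behind spectral deconvolution: ions whose extracted-ion profiles reach their apex at the same scan are grouped into a single component, yielding a cleaner spectrum for each co-eluting compound. It is illustrative only and is not the commercial deconvolution algorithm used to generate Figure 7; real implementations also model peak shape, noise, and baseline.

```python
# Greatly simplified sketch of peak deconvolution: ions are grouped by the scan
# at which each extracted-ion trace peaks, so co-eluting compounds yield
# separate, "cleaner" component spectra. Traces and tolerances are hypothetical.

from collections import defaultdict

def deconvolute(ion_traces: dict, apex_tolerance: int = 1) -> list:
    """Group ions by apex scan; ion_traces maps m/z -> intensities per scan.

    Returns a list of components, each a dict of {m/z: apex intensity}.
    """
    apexes = {mz: max(range(len(trace)), key=trace.__getitem__)
              for mz, trace in ion_traces.items()}
    components = defaultdict(dict)
    for mz, apex in sorted(apexes.items(), key=lambda kv: kv[1]):
        # assign the ion to an existing component whose apex is within tolerance
        for key in components:
            if abs(key - apex) <= apex_tolerance:
                components[key][mz] = ion_traces[mz][apex]
                break
        else:
            components[apex][mz] = ion_traces[mz][apex]
    return list(components.values())

# Two hypothetical co-eluting compounds: ions 57/191 apex at scan 2, ions 149/279 at scan 4.
traces = {57:  [0, 5, 30, 10, 2, 0],
          191: [0, 20, 100, 40, 5, 0],
          149: [0, 0, 10, 60, 90, 20],
          279: [0, 0, 5, 30, 45, 10]}
print(deconvolute(traces))  # -> [{57: 30, 191: 100}, {149: 90, 279: 45}]
```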
Examples of Errors of Omission and Possible Means for Their Resolution
During the course of extractables/leachables profiling, organizations experienced in this activity have encountered and remediated errors of omission. Examples of the omission errors encountered (or envisioned) and addressed by Nelson Labs Europe are contained in Tables II–V. Although the circumstances addressed in these tables are by no means a complete list of all situations that could lead to an error of omission, these circumstances are fairly representative of situations that are routinely encountered in extractables/leachables screening. Recognizing the occurrence of these errors, the testing laboratory should optimize its screening methods to mitigate these errors.
Concluding Thoughts
Analytically screening extracts and drug products to discover, identify, and quantify extractables or leachables is a universal and necessary aspect of extractables/leachables assessments. Optimization of the chromatographic methods applied in screening focuses on maximizing each method's ability to account for the greatest number of substances in the large and chemically diverse population of extractables/leachables, and this emphasis on breadth necessarily creates shortcomings in the method's other performance capabilities. Moreover, even though the individual chromatographic methods have been optimized with respect to breadth, the combined suite of methods cannot account for all potential extractables/leachables. Thus, the screening methods are susceptible to certain analytical errors, including errors of omission, inexact identification, and inaccurate quantitation. An analytical database, consisting of analytical information for specified and relevant extractables and leachables, includes analytical data that can be used to mitigate or eliminate such errors.
If a screening method fails to respond to an extractable or leachable in a recognizable manner, the resultant error of omission is a fatal error, as the substance that is missed cannot be assessed for impact, leading to an incomplete assessment. Errors of omission can occur either when the compound truly does not produce an analytical response (thus "falling through the cracks" of the analytical process) or when the substance's response is not recognized as being unique among the other responses obtained for all substances (failing to see a specific tree in the forest).
An internally developed analytical database mitigates errors involving “falling through the cracks” as it facilitates the optimization of the analytical methods in terms of eliminating gaps. Such a database also addresses the inability to recognize relevant responses in a forest of responses by providing information that more clearly delineates the response of interest from the potentially masking responses. A database will also “close the gaps” in a screening strategy by fostering method optimizations and by establishing associations between substances that are “found” and related substances that should also be present.
Looking Forward
Part 2 of this series of documents will consider the two remaining types of errors (errors in identification and quantitation) and how an analytical database, specifically one that is internally generated, is used to mitigate the errors. In addition to limitations of the methods themselves, it is noted that errors can occur because of improper implementation of an otherwise effective analytical method. Thus, the use of system suitability testing as a means of managing errors is considered in Part 3. Additionally, Part 3 looks ahead to the future of analytical E&L facilitated by an internal database, considering how the internally generated database establishes a measure of the degree of scientific rigor in screening studies and the ability of the database to drive both scientific innovation and process efficiency in E&L assessment.
Conflict of Interest Declaration
The authors declare that they have no competing or conflicting interests, noting their relationship with Nelson Labs, a provider of extractables and leachables testing and consulting services.