Rapid Chemical Exposure and Dose Research
EPA is responsible for ensuring the safety of thousands of chemicals, but quantitative exposure data are available for only a small fraction of registered chemicals. As part of its ongoing efforts to support implementation of the Toxic Substances Control Act, as revised by the Frank R. Lautenberg Chemical Safety for the 21st Century Act, EPA researchers are developing innovative methods to make exposure estimates for thousands of chemicals. These exposure data are combined with toxicity data to help thoroughly evaluate chemicals for potential health effects.
On this page:
Tools and Resources
- Rapid Exposure Predictions
- Consumer Product Information
- Non-Targeted Analysis
- Estimating Chemical Concentrations in Humans
- Evaluating Predictions
- Using Rapid Exposure Dose Models
Rapid, or high-throughput, exposure predictions, known collectively as ExpoCast, provide exposure estimates for thousands of chemicals. ExpoCast quickly and efficiently considers multiple routes of exposure, and it uses, enhances, and evaluates two classes of well-established exposure models to make its predictions.
Far-field exposure models predict exposure to chemicals released into the outdoor environment, such as through industrial discharges. ExpoCast uses “off-the-shelf” models, USEtox and RAIDAR, to estimate outdoor environment exposures. These models estimate the average amount of a chemical that ends up in air, water, and soil. Their estimates are combined with estimates from the near-field models to make overall exposure predictions.
Near-field exposure models estimate exposure to chemicals used in consumer products and in the home. The model used to estimate the range of total chemical exposures in a population is EPA’s Stochastic Human Exposure and Dose Simulation (SHEDS) model. A high-throughput version, SHEDS-HT, estimates exposure for thousands of chemicals, while the traditional SHEDS model requires more input data and makes more precise exposure predictions.
- SHEDS High-throughput: Models population-level distributions of exposure to near-field chemical sources. SHEDS-HT can produce exposure estimates for thousands of chemicals. The model accounts for the multiple routes, scenarios, and pathways of exposure to capture total exposure to these chemicals while retaining population and life-stage information. SHEDS-HT is useful for quickly evaluating many chemicals: it has broad applicability, accepts flexible inputs, and makes it easy to add new chemicals.
- Traditional: Estimates the range of total chemical exposures in a population from different exposure pathways (inhalation, skin contact, dietary and non-dietary ingestion) over different time periods, given a set of demographic characteristics. The estimates are calculated using available data, such as dietary consumption surveys; human activity data drawn from EPA's Consolidated Human Activity Database (CHAD); and observed chemical levels in food, water, air, and on surfaces like floors and counters. Data on chemical concentrations and exposure factors used in SHEDS are based on measurements collected in EPA field studies and published literature values. SHEDS is useful for considering all exposure scenarios.
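The population-distribution idea behind SHEDS can be illustrated with a minimal Monte Carlo sketch. The pathway names mirror those listed above, but the distributions and parameter values here are invented for illustration and are not actual SHEDS inputs:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical per-pathway daily intake distributions (mg/kg/day), modeled
# as lognormals; the medians and spreads are illustrative only.
PATHWAYS = {
    "inhalation": (0.001, 0.8),             # (median, sigma of log)
    "dermal": (0.0005, 1.0),
    "dietary": (0.002, 0.6),
    "non_dietary_ingestion": (0.0002, 1.2),
}

def simulate_person():
    """Total daily intake for one simulated individual: sum over pathways."""
    return sum(
        random.lognormvariate(math.log(med), sigma)
        for med, sigma in PATHWAYS.values()
    )

# Simulate a population and summarize the exposure distribution.
population = sorted(simulate_person() for _ in range(10_000))
median = statistics.median(population)
p95 = population[int(0.95 * len(population))]
print(f"median: {median:.2e} mg/kg/day, 95th percentile: {p95:.2e}")
```

Aggregating pathways per simulated individual, rather than averaging each pathway separately, is what lets this kind of model report a full population distribution instead of a single point estimate.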
High-throughput exposure predictions from SHEDS-HT use a simple indicator of consumer product use. The high-throughput exposure models are being improved by adding more refined indoor and consumer use information. That information is available in EPA’s Chemical and Products Database (CPDat), which maps more than 49,000 chemicals to a set of terms categorizing their use or function in 16,000 consumer product types (e.g., shampoo, soap) based on the chemicals they contain.
The information in the database comes from collating electronic material safety data sheets (MSDS), analyzing consumer product purchasing behavior, and testing consumer products for the presence of chemicals using a technology called non-targeted analysis. CPDat is part of EPA's Computational Toxicology (CompTox) Chemicals Dashboard.
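The chemical-to-product-category mapping CPDat provides is, in essence, a many-to-many lookup. A toy sketch of that structure (the entries are invented, and real CPDat records carry far more detail, such as functional use and data sources):

```python
# Toy many-to-many mapping: chemical -> product categories it appears in.
# Entries are invented for illustration; they are not actual CPDat records.
chemical_to_products = {
    "chemical_A": {"shampoo", "soap"},
    "chemical_B": {"soap", "floor cleaner"},
}

def products_containing(chemical):
    """Which product categories contain this chemical?"""
    return chemical_to_products.get(chemical, set())

def chemicals_in(product):
    """Invert the mapping: which chemicals appear in a product category?"""
    return {c for c, prods in chemical_to_products.items() if product in prods}

print(sorted(chemicals_in("soap")))  # ['chemical_A', 'chemical_B']
```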
Most exposure sampling techniques are chemical-specific and designed to test for chemicals suspected to be present. EPA researchers are developing non-targeted analysis methods to test indoor environmental samples, such as dust collected from homes, for all chemicals present.
The objective of non-targeted analysis methods is to identify unknown chemicals in water, soil and other types of samples, without having a preconceived idea of what chemicals are present. To that end, EPA scientists are leading a multi-phase project to evaluate the ability of non-targeted analysis methods to consistently and correctly identify unknown chemicals in samples. EPA’s Non-Targeted Analysis Collaborative Trial (ENTACT) was formed in late 2015 and includes nearly 30 academic, government and industry laboratories.
EPA researchers are developing more precise methods for estimating chemical concentrations in humans following exposure. While valuable, typical high-throughput assays are hampered by uncertainty in estimating exposure dose. To address this limitation, EPA scientists developed a method to make high-throughput results more applicable to humans by replacing the traditional constant exposure rate with more realistic human exposure pathways.
EPA researchers developed four toxicokinetic models within an R software package called httk (high-throughput toxicokinetics) to estimate chemical concentrations in humans. The package currently uses human in vitro data to make predictions about the fate of chemicals in humans, rats, mice, dogs, and rabbits.
Out of the 987 chemicals with human data in HTTK, 825 of the chemicals are in the ToxCast screening library, and 283 are pharmaceuticals (some pharmaceuticals are included in the ToxCast library because they have well-characterized toxicity data).
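The kind of prediction such toxicokinetic models make can be illustrated with a widely used steady-state approximation, in which the plasma concentration under constant oral dosing is set by renal and hepatic clearance. This is a simplified sketch with illustrative default parameter values, not the httk package's actual code:

```python
def css_steady_state(dose_rate, fub, clint, gfr=6.7, q_liver=90.0):
    """Steady-state plasma concentration for a constant oral dose rate.

    dose_rate -- dosing rate (illustrative units, e.g. umol/kg/day)
    fub       -- fraction of chemical unbound in plasma (in vitro assay)
    clint     -- intrinsic hepatic clearance, scaled from hepatocyte assays
    gfr       -- glomerular filtration rate; kidneys clear unbound chemical
    q_liver   -- hepatic blood flow (caps how fast the liver can clear)
    """
    # Well-stirred liver model: hepatic clearance saturates at liver flow.
    cl_hepatic = q_liver * fub * clint / (q_liver + fub * clint)
    cl_total = gfr * fub + cl_hepatic
    return dose_rate / cl_total

# A chemical that is cleared faster reaches a lower steady-state level.
slow = css_steady_state(1.0, fub=0.1, clint=10.0)
fast = css_steady_state(1.0, fub=0.1, clint=100.0)
print(slow, fast)
```

The two in vitro quantities here, plasma binding (`fub`) and intrinsic clearance (`clint`), are the same kinds of measurements described in the dosimetry discussion below, which is why a small set of animal-free assays can drive concentration predictions for many chemicals.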
To make useful predictions about chemical risks facing the public, scientists need three types of data: exposure, dosimetry, and hazard. Exposure is the amount of a chemical a person comes into contact with, dosimetry describes how the chemical behaves in the body, and hazard is the amount of a chemical that causes toxicity.
Dosimetry Data: Dr. Barbara Wetmore and Dr. Rusty Thomas (director of EPA’s Center for Computational Toxicology and Exposure) began collecting toxicokinetic data in their labs more than ten years ago using animal-free methods originally developed for pharmaceuticals. Their experiments measure how much of a chemical attaches to proteins or cells in the body and how long it takes the body to transform and eliminate a chemical. The functionality of the HTTK tool depends on this sort of data, provided by Dr. Wetmore’s lab, contractors to the EPA, and external partners.
Hazard Data: Though animal tests continue to provide very useful hazard data for toxicity studies, it is neither practical nor ethical to use them for the backlog of untested chemicals. Therefore, scientists get hazard estimates from other EPA tools, like ToxCast. Any chemical-caused effects identified with animal-free tools like ToxCast can be converted into human (or other animal) terms with HTTK. In this way a lab concept like “in vitro concentration” can be converted into an amount that a person would need to eat or breathe.
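The conversion just described is often called reverse dosimetry, or in vitro to in vivo extrapolation (IVIVE): an in vitro bioactive concentration is scaled by the steady-state concentration a unit dose would produce. A minimal sketch with invented numbers, not actual ToxCast or HTTK output:

```python
def oral_equivalent_dose(ac50_uM, css_per_unit_dose_uM):
    """Reverse dosimetry: convert an in vitro bioactive concentration
    (e.g. an assay AC50, in uM) into the external daily dose (mg/kg/day)
    predicted to produce that plasma concentration.

    css_per_unit_dose_uM -- steady-state plasma concentration (uM)
    produced by a 1 mg/kg/day dose, as predicted by a toxicokinetic model.
    """
    # Steady-state concentration scales linearly with dose rate,
    # so the equivalent dose is a simple ratio.
    return ac50_uM / css_per_unit_dose_uM

# Illustrative values only: an assay AC50 of 5 uM and a model-predicted
# Css of 2 uM per 1 mg/kg/day imply a 2.5 mg/kg/day oral equivalent dose.
print(oral_equivalent_dose(5.0, 2.0))  # 2.5
```

Comparing that oral equivalent dose against an exposure estimate (such as one from ExpoCast) is what lets screening data be used for prioritization.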
Exposure Data: If you know how much of a chemical someone is exposed to, HTTK can predict how much of that chemical ends up in different parts of their body. Much of the exposure data used with the HTTK tool comes from blood and urine tests carried out by the Centers for Disease Control and Prevention (CDC). The CDC tests a representative sample of the American population (n~10,000) to determine the types and concentrations of chemicals Americans are exposed to. In addition, exposure models like EPA’s ExpoCast tool use epidemiology and machine learning to simulate what the average American is exposed to in a day. Some exposure data are provided by manufacturers, who are required to release information about their practices.
When evaluating the risk of chemicals, uncertainty exists in both hazard identification and exposure predictions. There is also variability in exposure due to differences among key populations. General population exposure estimates are helpful, but population-specific exposure values for children, older adults, and other key populations are needed to account for group-level variability.
High-throughput toxicokinetics can also provide a more rapid and less resource-intensive way to understand population-specific differences in exposure and dose. For example, the rate at which a chemical is cleared from the body varies across age and ethnic subpopulations because of differing amounts and activities of metabolic enzymes. This method allows researchers to adjust exposure models to account for these population-specific susceptibilities.
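One way to capture the subpopulation variability described above is to sample clearance parameters rather than fixing them at a single value. A minimal Monte Carlo sketch with an invented variability distribution, not the httk package's population simulator:

```python
import math
import random
import statistics

random.seed(1)

def sample_clearance(mean_clint=10.0, cv=0.5):
    """Draw one individual's intrinsic clearance from a lognormal whose
    spread (coefficient of variation, cv) stands in for differences in
    metabolic enzyme abundance and activity. Values are illustrative."""
    sigma = math.sqrt(math.log(1 + cv**2))
    mu = math.log(mean_clint) - sigma**2 / 2   # preserves the mean
    return random.lognormvariate(mu, sigma)

# Steady-state concentration for a fixed unit dose across 10,000 simulated
# individuals; slower clearers reach higher internal concentrations.
css = sorted(1.0 / sample_clearance() for _ in range(10_000))
css_median = statistics.median(css)
css_p95 = css[int(0.95 * len(css))]
print(f"median Css: {css_median:.4f}, 95th percentile: {css_p95:.4f}")
```

The upper percentiles of the resulting distribution identify the more susceptible individuals, which is what a single average-person calculation would miss.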
People who work in manufacturing and other factory jobs are exposed to more chemicals, more often, than the general population. Toxicokinetic inhalation models are essential for accurately predicting exposure in these environments. Previous chemical screening models focused on less volatile chemicals that may be more likely to enter the body orally.
EPA scientists plan to continue improving the accuracy of the HTTK R Package. New models are being developed to describe absorption through the skin, exposure to aerosols (clouds of droplets), partial oral absorption, and human pregnancy.
EPA is currently evaluating the effectiveness of high-throughput exposure models using the Systematic Empirical Evaluation of Models (SEEM) framework. SEEM calibrates and evaluates the models against chemical concentrations found in blood and urine samples from the National Health and Nutrition Examination Survey (NHANES). EPA’s high-throughput models are continually refined as more data are gathered on consumer product use, non-targeted chemical exposure screening, and estimated oral doses. SEEM also allows systematic evaluation of whether additional data improve the exposure predictions.
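The calibration step in a framework like SEEM can be sketched as a regression of biomonitoring-inferred exposures on model predictions in log space. The numbers below are invented, and the real framework is a far more sophisticated statistical analysis over many chemicals; this only shows the shape of the idea:

```python
import math

# Hypothetical (model-predicted, biomonitoring-inferred) exposure pairs,
# in mg/kg/day; invented for illustration.
pairs = [(1e-6, 3e-6), (1e-5, 2e-5), (1e-4, 9e-5), (1e-3, 4e-3), (1e-2, 1e-2)]

x = [math.log10(pred) for pred, _ in pairs]
y = [math.log10(inferred) for _, inferred in pairs]
n = len(pairs)

# Ordinary least squares on the log10 scale: y = a + b*x.
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def calibrated(prediction):
    """Apply the fitted calibration to a raw model prediction (mg/kg/day)."""
    return 10 ** (a + b * math.log10(prediction))

print(f"slope: {b:.2f}, intercept: {a:.2f}")
```

A slope near 1 and a small intercept would indicate the raw model already tracks the biomonitoring data well; systematic deviations are what the calibration corrects, and adding new data (e.g., refined product-use information) can be judged by whether it tightens this fit.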