
2019 | Book

Advances in Computational Toxicology

Methodologies and Applications in Regulatory Science


About this Book

This book provides a comprehensive review of both traditional and cutting-edge methodologies currently used in computational toxicology, with a specific focus on their application in regulatory decision making. Authors from government agencies (such as the FDA, NCATS, and NIEHS), industry, and academic institutions share their real-world experience and discuss current practices in computational toxicology and potential applications in regulatory science. Among the topics covered are molecular modeling and molecular dynamics simulations, machine learning methods for toxicity analysis, network-based approaches for the assessment of drug toxicity, and toxicogenomic analyses. Offering a valuable reference guide to computational toxicology and its potential applications in regulatory science, this book will appeal to chemists, toxicologists, and drug discovery and development researchers, as well as to regulatory scientists, government reviewers, and graduate students interested in this field.

Table of Contents

Frontmatter
Chapter 1. Computational Toxicology Promotes Regulatory Science
Abstract
New tools have become available to researchers and regulators, including genomics, transcriptomics, proteomics, machine learning, artificial intelligence, molecular dynamics, bioinformatics, systems biology, and other advanced techniques. These advanced approaches originated elsewhere but over time have diffused into toxicology, enabling more efficient risk assessment and safety evaluation. While traditional toxicological methods remain in full use, the continuing increase in the number of chemicals introduced into the environment requires new methods for regulatory science that can overcome the shortcomings of traditional approaches. Computational toxicology provides such methods, being much faster and cheaper than traditional testing. A variety of methods have been developed in computational toxicology, and some have been adopted in regulatory science. This book summarizes methods in computational toxicology and reviews multiple applications in regulatory science, showing how computational toxicology promotes regulatory science.
Rebecca Kusko, Huixiao Hong

Methods in Computational Toxicology

Frontmatter
Chapter 2. Background, Tasks, Modeling Methods, and Challenges for Computational Toxicology
Abstract
Sound chemicals management requires scientific risk assessment schemes capable of predicting the physical–chemical properties, environmental behavior, and toxicological effects of a vast number of chemicals. However, the current experimental system cannot meet the need for risk assessment of the large and ever-increasing number of chemicals. Nor are current experimental approaches sufficient for toxicology to thrive in the information era. Thus, an auxiliary yet critical field complementing the experimental sector of chemical risk assessment has emerged: computational toxicology. Computational toxicology is an interdisciplinary field drawing especially on environmental chemistry, computational chemistry, chemo-bioinformatics, and systems biology, and it aims at enabling efficient simulation and prediction of the environmental exposure, hazard, and risk of chemicals through various in silico models. Computational toxicology has profoundly changed the way people view and interpret basic concepts of toxicology. Meanwhile, the field continuously borrows ideas from other disciplines, which greatly promotes innovative development of toxicology. In this chapter, the background and tasks of computational toxicology are first introduced. Then, a variety of in silico models linking key information about chemicals along the source-to-adverse-outcome continuum (such as source emission, concentrations in environmental compartments, exposure concentrations at biological target sites, and adverse effects or thresholds) are described and discussed. Finally, challenges in computational toxicology, such as parameterization of the proposed models, representation of the complexity of living systems, and modeling of interlinked chemicals as mixtures, are also discussed.
Zhongyu Wang, Jingwen Chen
Chapter 3. Modelling Simple Toxicity Endpoints: Alerts, (Q)SARs and Beyond
Abstract
The correlation of chemical structure with physicochemical and biological data to assess a desired or undesired biological outcome now utilises both qualitative and quantitative structure–activity relationships ((Q)SARs) and advanced computational methods. The adoption of in silico methodologies for predicting toxicity, as decision support tools, is now a common practice in both developmental and regulatory contexts for certain toxicity endpoints. The relative success of these tools has unveiled further challenges relating to interpreting and applying the results of models. These include the concept of what makes a negative prediction and exploring the use of test data to make quantitative predictions. Due to several factors, including the lack of understanding of mechanistic pathways in biological systems, modelling complex endpoints such as organ toxicity brings new challenges. The use of the adverse outcome pathway (AOP) framework as a construct to arrange models and data, to tackle such challenges, is reviewed.
Richard Williams, Martyn Chilton, Donna Macmillan, Alex Cayley, Lilia Fisk, Mukesh Patel
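The structural-alert idea this chapter starts from can be illustrated with a toy screen. The alert names and patterns below are hypothetical examples, and real alert systems match SMARTS patterns with a cheminformatics toolkit rather than raw substring checks; this sketch only conveys the lookup-and-flag workflow:

```python
# Toy structural-alert screen (hypothetical alerts, not a validated rule set).
# Real tools parse molecules and match SMARTS; here we approximate the idea
# with plain substring checks on SMILES strings.
ALERTS = {
    "nitro group": "[N+](=O)[O-]",
    "epoxide": "C1OC1",
}

def screen(smiles: str) -> list[str]:
    """Return the names of any alerts whose pattern occurs in the SMILES."""
    return [name for name, pattern in ALERTS.items() if pattern in smiles]

hits = screen("c1ccccc1[N+](=O)[O-]")  # nitrobenzene
```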
Chapter 4. Matrix and Tensor Factorization Methods for Toxicogenomic Modeling and Prediction
Abstract
Prediction of unexpected, toxic effects of compounds is a key challenge in computational toxicology. Machine learning-based toxicogenomic modeling opens up a systematic means for genomics-driven prediction of toxicity, with the potential also to unravel novel mechanistic processes that can help to identify underlying links between the molecular makeup of cells and their toxicological outcomes. This chapter describes recent big data and machine learning-driven computational methods and tools that address these key challenges in computational toxicogenomics, with a particular focus on matrix and tensor factorization approaches. We describe these approaches through an exemplary application to a data set comprising over 2.5 × 10^8 data points and 1300 compounds, with the aim of explaining dose-dependent cytotoxic effects by identifying hidden factors/patterns captured in transcriptomics data with links to structural fingerprints of the compounds. Together, transcriptomics and structural data can predict pathological states in the liver and drug toxicity.
Suleiman A. Khan, Tero Aittokallio, Andreas Scherer, Roland Grafström, Pekka Kohonen
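The factorization idea underlying this chapter can be sketched with a minimal non-negative matrix factorization (NMF) on synthetic data: a compounds-by-genes matrix is decomposed into a small number of hidden factors. This is a generic Lee–Seung multiplicative-update sketch, not the authors' actual model or data:

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (samples x genes) into W (samples x k)
    and H (k x genes) using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A tiny synthetic "toxicogenomics" matrix built from two hidden patterns.
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 8))
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the synthetic matrix is exactly rank 2, the reconstruction error should shrink toward zero; the hidden columns of W can then be inspected as candidate "patterns" linking samples to gene programs.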
Chapter 5. Cardio-oncology: Network-Based Prediction of Cancer Therapy-Induced Cardiotoxicity
Abstract
The growing awareness of cardiotoxicities associated with cancer treatment has led to the emerging field of cardio-oncology (also known as onco-cardiology), which centers on screening, monitoring, and treating cancer patients with cardiac dysfunction before, during, or after cancer treatment. The classical approach centered on the hypothesis of 'one gene, one drug, one disease' in the traditional drug discovery paradigm may have contributed to unanticipated off-target cardiotoxicity. However, there are no guidelines for preventing and efficiently treating new cardiotoxicities in drug discovery and development. Novel approaches, such as network-based drug-disease proximity, shed light on the relationship between drugs and diseases, offering novel tools for risk assessment of drug-induced cardiotoxicity. In this chapter, we will introduce an integrated, network-based, systems pharmacology approach that incorporates disease-associated proteins/genes, drug-target networks, and the human protein-protein interactome for risk assessment of drug-induced cardiotoxicity. Specifically, we will introduce available bioinformatics resources and quantitative network analysis tools. In addition, we will showcase how to use network proximity for risk assessment of drug-induced cardiotoxicity and for understanding the underlying cardiotoxicity-related mechanisms of action (e.g., of multi-targeted kinase inhibitors). Finally, we will discuss existing challenges and highlight future directions of network proximity approaches for comprehensive assessment of oncological drug-induced cardiotoxicity in the early stage of drug discovery, clinical trials, and post-marketing surveillance.
Feixiong Cheng
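The "closest" network proximity measure described above averages, over a drug's targets, the shortest-path distance to the nearest disease gene in the interactome. A minimal sketch on a hypothetical mini-interactome (node names and edges are invented):

```python
from collections import deque

def bfs_distances(graph, source):
    """Shortest path length (in hops) from source to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closest_proximity(graph, drug_targets, disease_genes):
    """Mean, over drug targets, of the distance to the nearest disease gene."""
    total = 0.0
    for t in drug_targets:
        d = bfs_distances(graph, t)
        total += min(d[g] for g in disease_genes if g in d)
    return total / len(drug_targets)

# Hypothetical protein-protein interaction network as adjacency sets.
G = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
prox = closest_proximity(G, drug_targets=["a"], disease_genes=["c", "d"])
```

In published work this raw proximity is usually converted to a z-score against degree-preserving random target sets before being interpreted; that normalization step is omitted here for brevity.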
Chapter 6. Mode-of-Action-Guided, Molecular Modeling-Based Toxicity Prediction: A Novel Approach for In Silico Predictive Toxicology
Abstract
Computational toxicology is a sub-discipline of toxicology concerned with the development and use of computer-based models and methodology to understand and predict chemical toxicity in a biological system (e.g., cells and organisms). Quantitative structure–activity relationship (QSAR) has been the predominant approach in computational toxicology. However, classical QSAR methodology has often suffered from low prediction accuracy, largely owing to the lack or non-integration of toxicological mechanisms. To address this lingering problem, we have developed a novel in silico toxicology approach that is based on molecular modeling and guided by mode of action (MoA). Our approach is implemented through a target-specific toxicity knowledgebase (TsTKb), consisting of a pre-categorized database of chemical MoA (ChemMoA) and a series of pre-built, category-specific classification and quantification models. ChemMoA serves as the depository of chemicals with known MoAs or molecular initiating events (i.e., known target biomacromolecules) and quantitative information for measured toxicity endpoints (if available). The models allow a user to qualitatively classify an uncharacterized chemical by MoA and quantitatively predict its toxicity potency. This approach is currently under development and will evolve to incorporate physiologically based pharmacokinetic (PBPK) modeling to address absorption, distribution, metabolism and excretion (ADME) processes in a biological system. The fully developed approach is expected to significantly advance in silico predictive toxicology and provide a powerful new toolbox for regulators, the chemical industry, and the relevant academic communities.
Ping Gong, Sundar Thangapandian, Yan Li, Gabriel Idakwo, Joseph Luttrell IV, Minjun Chen, Huixiao Hong, Chaoyang Zhang
Chapter 7. A Review of Feature Reduction Methods for QSAR-Based Toxicity Prediction
Abstract
Thousands of molecular descriptors (1D to 4D) can be generated and used as features to model quantitative structure–activity or toxicity relationship (QSAR or QSTR) for chemical toxicity prediction. This often results in models that suffer from the “curse of dimensionality”, a problem that can occur in machine learning practice when too many features are employed to train a model. Here we discuss different methods of eliminating redundant and irrelevant features to enhance prediction performance, increase interpretability, and reduce computational complexity. Several feature selection and extraction methods are summarized along with their strengths and shortcomings. We also highlight some commonly overlooked challenges such as algorithm instability and selection bias while offering possible solutions.
Gabriel Idakwo, Joseph Luttrell IV, Minjun Chen, Huixiao Hong, Ping Gong, Chaoyang Zhang
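A minimal filter-style sketch of the feature reduction this chapter reviews, assuming descriptors arrive as a samples-by-features matrix: near-constant descriptors are dropped first, then one of each highly correlated pair. The tolerance values are illustrative, not recommendations from the chapter:

```python
import numpy as np

def reduce_features(X, var_tol=1e-8, corr_tol=0.95):
    """Drop near-constant descriptors, then drop one member of each highly
    correlated pair (a simple filter-style feature selection sketch)."""
    keep = np.where(X.var(axis=0) > var_tol)[0]
    X = X[:, keep]
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = []
    for j in range(X.shape[1]):
        if all(corr[j, s] < corr_tol for s in selected):
            selected.append(j)
    return keep[selected]  # indices into the original descriptor columns

# 4 descriptors: column 1 is constant, column 2 duplicates column 0 (scaled).
X = np.array([[1.0, 5.0, 2.0, 0.3],
              [2.0, 5.0, 4.0, 0.1],
              [3.0, 5.0, 6.0, 0.9],
              [4.0, 5.0, 8.0, 0.2]])
kept = reduce_features(X)
```

Filter methods like this are fast and model-agnostic; the wrapper and embedded methods the chapter also covers instead score feature subsets by the performance of a trained model.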
Chapter 8. An Overview of National Toxicology Program’s Toxicogenomic Applications: DrugMatrix and ToxFX
Abstract
DrugMatrix and its automated toxicogenomics reporting system, ToxFX, are the scientific community's largest molecular toxicology reference database and informatics system. DrugMatrix consists of the comprehensive results of thousands of highly controlled and standardized toxicological experiments in which rats or primary rat hepatocytes were systematically treated with more than 600 therapeutic, industrial, or environmental chemicals at both non-toxic and toxic doses. Following administration in vivo, comprehensive studies of the effects of these compounds were carried out after multiple durations of exposure and in multiple target organs. Study endpoints included pharmacology, clinical chemistry, hematology, histology, body and organ weights, and clinical observations. Additionally, a curation team extracted all relevant information on the compounds from the literature, the Physicians' Desk Reference, package inserts, and other relevant sources. At the heart of the DrugMatrix database are thousands of gene expression data sets generated by extracting RNA from the toxicologically relevant organs and tissues and analyzing these RNAs using the GE Codelink rat array and the Affymetrix whole-genome 230 2.0 rat GeneChip array systems. Additionally, the database contains 148 scorable genomic signatures, covering 96 distinct phenotypes, derived from mining the DrugMatrix gene expression data. The signatures are informative of organ-specific pathology (e.g., hepatic steatosis) and mode of toxicological action (e.g., PXR activation in the liver). The phenotypes cover several common target tissues in toxicity testing (liver, kidney, heart, bone marrow, spleen, and skeletal muscle). Taken as a whole, DrugMatrix enables a toxicologist to formulate a comprehensive picture of toxicity with greater efficiency than traditional methods.
Daniel L. Svoboda, Trey Saddler, Scott S. Auerbach
Chapter 9. A Pair Ranking (PRank) Method for Assessing Assay Transferability Among the Toxicogenomics Testing Systems
Abstract
Risk assessment that relies on animal models is neither a fully reliable nor a satisfactory paradigm. Accompanying the strategic shift planned by regulatory agencies, initiatives such as the 3Rs in Europe and Tox21/ToxCast in the USA were launched to develop in silico and in vitro approaches that reduce or eliminate animal use. To effectively implement non-animal models in risk assessment, novel approaches are urgently needed for investigating the concordance between testing systems, to facilitate the selection of fit-for-purpose assays. In this chapter, we introduce a Pair Ranking (PRank) method for the quantitative evaluation of assay transferability among different toxicogenomics (TGx) testing systems. First, we summarize the critical issues determining the success of TGx in risk assessment. Second, we elucidate the application of the proposed PRank method to key questions in TGx. Finally, we suggest some potential uses of the PRank method for advancing risk assessment.
Zhichao Liu, Brian Delavan, Liyuan Zhu, Ruth Robert, Weida Tong
Chapter 10. Applications of Molecular Dynamics Simulations in Computational Toxicology
Abstract
Computational toxicology is a discipline seeking to computationally model and predict the toxicity of chemicals, including drugs, food additives, and other environmental chemicals. Risk assessment of chemicals using current in vitro or in vivo experimental methods is at best time-consuming and expensive. Computational toxicology seeks to accelerate this process and decrease the cost by predicting the risk of chemicals to humans and animals. Molecular dynamics (MD) simulation, an emerging computational toxicology technique, characterizes the interactions of chemicals with biomolecules such as proteins and nucleic acids. This chapter briefly reviews available software tools for MD simulations and how to apply them to computational toxicology challenges. We also summarize key protocols for running MD simulations.
Sugunadevi Sakkiah, Rebecca Kusko, Weida Tong, Huixiao Hong
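The core integration loop common to MD engines can be illustrated with a one-dimensional harmonic "bond". This is a generic velocity-Verlet sketch under invented parameters, not a protocol from the chapter; its defining property is that total energy stays essentially constant over long runs:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Integrate Newton's equations with the velocity-Verlet scheme used by
    most MD engines (positions and velocities updated in lockstep)."""
    a = force(x) / mass
    traj = [(x, v)]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged accel.)
        a = a_new
        traj.append((x, v))
    return traj

# Harmonic "bond" with k = 1, m = 1: total energy should be conserved.
k, m = 1.0, 1.0
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, mass=m,
                       dt=0.01, steps=1000)
energy = [0.5 * m * v * v + 0.5 * k * x * x for x, v in traj]
drift = max(energy) - min(energy)
```

Production MD codes add thermostats, barostats, constraints, and neighbor lists on top of exactly this kind of symplectic integrator.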

Applications in Regulatory Science

Frontmatter
Chapter 11. Applicability Domain: Towards a More Formal Framework to Express the Applicability of a Model and the Confidence in Individual Predictions
Abstract
A common understanding of the concept of applicability domain (AD) is that it defines the scope in which a model can make a reliable prediction; in other words, it is the domain within which we can trust a prediction. However, in reality, the concept of confidence in a prediction is more complex and multi-faceted; the applicability of a model is only one aspect amongst others. In this chapter, we will look at these different perspectives and how existing AD methods contribute to them. We will also try to formalise a holistic approach in the context of decision-making.
Thierry Hanser, Chris Barber, Sébastien Guesné, Jean François Marchaland, Stéphane Werner
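One simple distance-based AD heuristic, among the many families of methods this chapter surveys, treats a query as in-domain when it lies close enough to the training data in descriptor space. The centroid rule and cutoff below are illustrative assumptions, not a method endorsed by the chapter:

```python
import numpy as np

def in_domain(X_train, x_query, z=3.0):
    """Flag a query as inside the AD if its Euclidean distance to the
    training-set centroid is within z times the mean training distance
    (one simple distance-based AD heuristic among many)."""
    centroid = X_train.mean(axis=0)
    d_train = np.linalg.norm(X_train - centroid, axis=1)
    threshold = z * d_train.mean()
    return np.linalg.norm(x_query - centroid) <= threshold

# Toy 2-descriptor training set and two hypothetical query chemicals.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
inside = in_domain(X, np.array([0.5, 0.5]))
outside = in_domain(X, np.array([10.0, 10.0]))
```

As the chapter argues, such a binary in/out answer captures only one facet of confidence; richer schemes return a continuous reliability score per prediction.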
Chapter 12. Application of Computational Methods for the Safety Assessment of Food Ingredients
Abstract
At the Office of Food Additive Safety (OFAS) in the Center for Food Safety and Applied Nutrition at the United States Food and Drug Administration, scientists review toxicological data submitted by industry or published in scientific journals as a part of premarket safety assessments of food ingredients. OFAS also reviews relevant safety data during postmarket assessments of food ingredients as new toxicological data or exposure information become available. OFAS is committed to maintaining a high standard of science-based safety reviews and to staying abreast of novel computational approaches used by industry that could add value to improve safety assessments of food ingredients. In this chapter, we discuss some computational approaches, including quantitative structure–activity relationships, toxicokinetic modeling and simulation, and bioinformatics, as well as OFAS’s in-house food ingredient knowledgebase. We describe the scientific utility of these computational approaches for improving the efficiency of the review process and reducing uncertainties in decisions about the safe use of food ingredients and highlight some challenges with their use for food ingredient safety assessments.
Patra Volarath, Yu (Janet) Zang, Shruti V. Kabadi
Chapter 13. Predicting the Risks of Drug-Induced Liver Injury in Humans Utilizing Computational Modeling
Abstract
Drug-induced liver injury (DILI) is a significant challenge to clinicians, drug developers, and regulators. There is an unmet need to reliably predict the risk for DILI. Developing a risk management plan to improve the prediction of a drug's hepatotoxic potential is a long-term effort of the research community. Robust predictive models or biomarkers are essential for assessing the risk for DILI in humans, while improved DILI annotation is vital, as it largely determines the accuracy and utility of the developed predictive models. In this chapter, we will focus on the DILI research efforts at the National Center for Toxicological Research of the US Food and Drug Administration. We will first introduce our drug label-based approach to annotating the DILI risk associated with individual drugs and then describe the series of predictive models we developed upon these annotations to assess DILI risk, including the "rule-of-two" model, the DILI score model, and conventional and modified quantitative structure–activity relationship (QSAR) models.
Minjun Chen, Jieqiang Zhu, Kristin Ashby, Leihong Wu, Zhichao Liu, Ping Gong, Chaoyang (Joe) Zhang, Jürgen Borlak, Huixiao Hong, Weida Tong
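The "rule-of-two" mentioned above combines daily dose and lipophilicity into a binary risk flag. A sketch using the commonly reported thresholds (daily dose of at least 100 mg and logP of at least 3); treat these as reported literature values and the function as a screening illustration, not the chapter's definitive implementation:

```python
def rule_of_two(daily_dose_mg: float, logp: float) -> bool:
    """'Rule-of-two' heuristic for DILI risk: flag drugs dosed at
    >= 100 mg/day with logP >= 3. Thresholds as commonly reported;
    this is a screening flag, not a verdict."""
    return daily_dose_mg >= 100.0 and logp >= 3.0

# Hypothetical inputs: a high-dose lipophilic drug vs. a low-dose one.
flags = [rule_of_two(daily_dose_mg=500, logp=3.5),
         rule_of_two(daily_dose_mg=10, logp=5.0)]
```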
Chapter 14. Predictive Modeling of Tox21 Data
Abstract
As an alternative to traditional animal toxicology studies, the Toxicology in the 21st Century (Tox21) program initiated a large-scale, systematic screening of chemicals against target-specific, mechanism-oriented in vitro assays, aiming to predict chemical toxicity from these in vitro assay data. The Tox21 library of ~10,000 environmental chemicals and drugs, representing a wide range of structural diversity, has been tested in triplicate against a battery of cell-based assays in a quantitative high-throughput screening (qHTS) format, generating over 85 million data points that have been made publicly available. This chapter describes efforts to build in vivo toxicity prediction models based on in vitro activity profiles of compounds. Limitations of the current data and strategies to select an optimal set of assays for improved model performance are discussed. To encourage public participation in developing new methods and models for toxicity prediction, a "crowd-sourcing" challenge was organized based on the Tox21 assay data, with successful outcomes.
Ruili Huang
Chapter 15. In Silico Prediction of the Point of Departure (POD) with High-Throughput Data
Abstract
Determining the point of departure (POD) is a critical step in chemical risk assessment. Current approaches based on chronic animal studies are costly and time-consuming while being insufficient for providing mechanistic information regarding toxicity. Driven by the desire to incorporate multiple lines of evidence relevant to human toxicology and to reduce animal use, there has been a heightened interest in utilizing transcriptional and other high-throughput assay endpoints to infer the POD. In this review, we outline common data modeling approaches utilizing gene expression profiles from animal tissues to estimate the POD in comparison with obtaining PODs based on apical endpoints. Various issues in experiment design, technology platforms, data analysis methods, and software packages are explained. Potential choices for each step are discussed. Recent development for models incorporating in vitro assay endpoints is also examined, including PODs based on in vitro assays and efforts to predict in vivo PODs with in vitro data. Future directions and potential research areas are also discussed.
Dong Wang
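The basic idea of reading a POD off a dose-response series can be sketched with linear interpolation: find the dose at which the response first exceeds the control level by a chosen benchmark response. This toy stands in for the formal benchmark-dose model fitting the chapter reviews, and the dose-response numbers are invented:

```python
def pod_interpolate(doses, responses, control, bmr=0.10):
    """Estimate a point of departure as the dose where the response first
    exceeds control by a benchmark response (bmr, here 10%), using linear
    interpolation between tested doses. A toy stand-in for formal
    benchmark-dose model fitting."""
    target = control * (1.0 + bmr)
    pairs = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if r0 <= target < r1:
            # linear interpolation within the bracketing dose interval
            return d0 + (d1 - d0) * (target - r0) / (r1 - r0)
    return None  # benchmark response never reached in the tested range

# Hypothetical fold-change responses at four doses (control response = 1.0).
doses = [0.0, 1.0, 3.0, 10.0]
responses = [1.00, 1.02, 1.20, 1.80]
pod = pod_interpolate(doses, responses, control=1.00)
```

Formal approaches instead fit parametric dose-response models and report the benchmark dose with its lower confidence bound (BMDL), which is what typically enters risk assessment.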
Chapter 16. Molecular Modeling Method Applications: Probing the Mechanism of Endocrine Disruptor Action
Abstract
The potential endocrine-related detrimental effects of endocrine-disrupting chemicals (EDCs) on humans and wildlife are a growing worldwide concern. The mechanism of action (MOA) underlying EDC-induced endocrine-related diseases and endocrine dysfunction can be summarized as the interactions between EDCs and biomacromolecules in the endocrine system. Thus, insights into the endocrine-linked MOA of EDCs with their corresponding targets will pave the way for developing screening methods for EDCs, prioritizing them, and constructing endocrine-related adverse outcome pathways. To date, batteries of laboratory bioassays have been developed and employed to determine whether EDCs activate, inhibit, or bind to a target. However, such test methods poorly resolve the underlying molecular mechanisms. Molecular modeling methods are an essential and powerful tool for deciphering the mechanism of endocrine disruptor action. In this chapter, several critical processes in performing molecular modeling are described. Topics include preparing 3D structures of biomacromolecules and EDCs, obtaining and refining the EDC–biomacromolecule complex, and probing the underlying interaction mechanism. Among these topics, we emphasize revealing the underlying mechanism by analyzing binding patterns and noncovalent interactions and by calculating binding energy. Lastly, future directions in molecular modeling are also proposed.
Xianhai Yang, Huihui Liu, Rebecca Kusko
Chapter 17. Xenobiotic Metabolism by Cytochrome P450 Enzymes: Insights Gained from Molecular Simulations
Abstract
Accurate chemical risk assessment requires consideration of the metabolism mediated by a wide range of enzymes, since neglecting these metabolic pathways (and toxic metabolites) may lead to inaccurate evaluation of adverse effects on human health. Traditional in vivo or in vitro methods toward this end face obstacles, e.g., the huge and ever-increasing number of chemicals, costly and labor-intensive tests, and the lack of chemical standards in analysis. Molecular simulations (in silico) are instead deemed a promising alternative and have gradually proven feasible for gaining insights into the toxicological disposition of xenobiotic chemicals. In this chapter, we review recent progress in molecular simulations of xenobiotic metabolism catalyzed by the typical phase I enzymes: cytochrome P450 enzymes (CYPs). The first section describes the significance of xenobiotic metabolism in chemical risk assessment. Then, the versatile functionality of CYPs in xenobiotic metabolism is briefly summarized by introducing some of the fundamental reactions, e.g., C–H hydroxylation, phenyl oxidation, and heteroatom (N, P, S) oxidation. The last section presents case studies of molecular simulations of the metabolism of typical environmental contaminants (e.g., brominated flame retardants, chlorinated alkanes, substituted phenolic compounds), with an emphasis on mechanistic insights gained from quantum chemical density functional theory (DFT) calculations with the active species of CYPs.
Zhiqiang Fu, Jingwen Chen
Chapter 18. Integrating QSAR, Read-Across, and Screening Tools: The VEGAHUB Platform as an Example
Abstract
In silico models are evolving toward a more mature view that integrates several perspectives. On the application side, this integration proceeds toward a deeper exploitation of the available data and information, addressing increasingly challenging tasks. From a theoretical point of view, QSAR models are nowadays typically general models, at least in ambition, while read-across is local. There are also general tools for prioritization. These approaches share common aspects but also have peculiarities. Users, on the other hand, are interested in applying these tools either for the evaluation of specific chemicals (which may call for read-across and QSAR models) or for the assessment of populations of substances, possibly quite large ones (which may call for QSAR and prioritization tools). In developing VEGA, we tried to stay as close as possible to users' needs, reducing the barriers between the different approaches and providing a series of tools that fit different purposes. We describe below the philosophy of VEGA and how users may take advantage of these tools for different purposes.
Emilio Benfenati, Alessandra Roncaglioni, Anna Lombardo, Alberto Manganaro
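The read-across side of such a platform can be illustrated with a similarity-weighted prediction over fragment fingerprints. This is a generic sketch, not VEGA's actual algorithm; the fingerprints (modeled as bit-index sets) and toxicity values are hypothetical:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def read_across(query_fp, neighbors, k=2):
    """Predict a property for the query as the similarity-weighted mean of
    its k most similar analogues (a minimal read-across scheme)."""
    scored = sorted(((tanimoto(query_fp, fp), y) for fp, y in neighbors),
                    reverse=True)[:k]
    wsum = sum(s for s, _ in scored)
    return sum(s * y for s, y in scored) / wsum

# Hypothetical fragment fingerprints paired with measured toxicity values.
neighbors = [({1, 2, 3}, 10.0), ({1, 2, 4}, 20.0), ({7, 8}, 90.0)]
pred = read_across({1, 2, 3, 4}, neighbors)
```

A real read-across workflow also documents why the analogues are relevant (shared mechanism, metabolism, applicability domain), which is where integration with QSAR and screening tools becomes valuable.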
Chapter 19. OpenTox Principles and Best Practices for Trusted Reproducible In Silico Methods Supporting Research and Regulatory Applications in Toxicological Science
Abstract
Our aim in this work and initiative is to establish practice and guidance for tracking and reporting modern in silico data analyses in a reproducible manner. The recommended reproducibility principle supports the concept that data analyses, and more generally scientific claims and regulatory evidence, are published together with their raw data and software code so that others may verify the findings and build upon them. We discuss here how we are demonstrating implementations of trusted, reproducible in silico evidence workflows and enhancing their acceptance with an open knowledge community approach supported within OpenTox and OpenRiskNet. The general principle discussed in this chapter can be applied in regulatory settings.
Barry Hardy, Daniel Bachler, Joh Dokler, Thomas Exner, Connor Hardy, Weida Tong, Daniel Burgwinkel, Richard Bergström
Backmatter
Metadata
Title
Advances in Computational Toxicology
Edited by
Dr. Huixiao Hong
Copyright year
2019
Electronic ISBN
978-3-030-16443-0
Print ISBN
978-3-030-16442-3
DOI
https://doi.org/10.1007/978-3-030-16443-0
