Methods

Volume 115, 15 February 2017, Pages 65-79

Analysis of live cell images: Methods, tools and opportunities

https://doi.org/10.1016/j.ymeth.2017.02.007

Highlights

  • Advances in microscopy, biosensors and cell culturing have transformed live cell imaging.

  • Today, live cell imaging plays an important role in basic biology research and drug development.

  • Image analysis is needed to extract quantitative information from vast and complex data sets.

  • We present an overview of image analysis methods, visualization approaches and software tools.

  • Opportunities provided by machine learning (e.g. deep learning) and computer vision are highlighted.

Abstract

Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities that recent advances in machine learning, especially deep learning, and computer vision provide are discussed. This review also includes an overview of the different available software packages and toolkits.

Introduction

Starting with Antonie van Leeuwenhoek, who published his discovery of bacteria, blood cells and muscle fibres in a number of letters to London’s Royal Society in the late 17th century, optical microscopy has become an indispensable tool for scientific discovery. Imaging cells and tissues using methods such as automated high-content and high-throughput microscopy has offered new insights into biology. This review clarifies the role imaging can play in gaining insights into biology and describes the extraction of quantitative information that will aid in this task. Quantitative imaging will be required at all anatomical scales ranging from the sub-cellular to the organ level, where structure, form, organization, and most importantly function, are essential for a complete characterization of the organism.

In this review we will focus on the sub-cellular and cellular scales of living organisms. The characteristic structure and function at each scale are associated with cues manifested as bio-molecules, which either express near-visible or visible electromagnetic energy endogenously, or are tagged appropriately to respond to optical stimuli. Optical instrumentation then converts the expressed energy into a digital signal that is analysed to glean the structure and function of the specimen. Imaging thus reveals both structure and function of a single cell or a collection of cells.

The nucleus was the first cellular organelle to be discovered by microscope: Leeuwenhoek observed a “lumen”, the nucleus, in the red blood cells of salmon. Today, tagging or binding specific molecules is the preferred way to delineate the nucleus. Although histology has leveraged hematoxylin and eosin stains to great advantage for capturing tissue architecture, that labelling technology is of limited value for live-cell imaging. Fluorescence microscopy is instead used to localize the binding location of a fluorescent probe in the chromosome; excitation of the fluorophore, in turn, delineates the nucleus. Selected examples are shown in Fig. 1. DAPI (4’,6-diamidino-2-phenylindole), for example, is a popular fluorescent stain that binds strongly to A-T nucleotide rich regions in the DNA. A variegated and sampled landscape can be obtained by using fluorescence in situ hybridization (FISH) probes, which localize specific DNA and RNA level expression in the cell. The cytoskeleton, composed of the water-rich cytoplasm and various filaments, is also imaged through the use of fluorescence: phalloidin, labelled with fluorescent analogs, is often used to stain actin filaments. Further emblematic of the adoption of this approach has been the discovery of the green fluorescent protein [1] and the wide-spread use of microscopy methods such as confocal and two-photon optics that have dramatically transformed live cell imaging. Given the leverage that the optical spectrum offers in capturing multiple structures and functionalities (cellular mechanisms and signalling), the colour revolution has spurred the wide-spread use of imaging both for high-content and high-throughput applications [2].

The desire to understand intercellular and intracellular processes has driven the development of super-resolved fluorescent microscopy [3], [4]. To monitor organ formation or indeed the development of an entire organism in 4D (3D + t), the concept of in toto imaging [5], [6] has been developed. With the help of advanced biosensors [7] it is now possible to report the activation states (e.g. conformation and phosphorylation) of endogenous proteins with minimal perturbation. In 2005 three different groups established Channelrhodopsin-2 (ChR2) as a tool for genetically targeted optical remote control [8], [9], [10]. Rapidly developing optogenetics techniques [11] now allow the fast and specific excitation and inhibition of proteins in complex cellular systems. However, high-content time lapse microscopy of living cells is still confined to the laboratory and its use is limited to gleaning cellular structure and function. There is a need to scale imaging experiments and methodologies.

The concept of high-throughput screening was invented to address the needs of industrial and academic drug development efforts. Fairly basic cellular model systems were used to investigate specific molecular hypotheses. To overcome the limitations of this approach, the concept of phenotypic screening [12] was developed. New emerging developments in (patient derived) ex vivo cultures, induced pluripotent stem cells (iPSC) technology, three dimensional (3D) co-culture and organotypic systems hold the potential of designing more disease relevant model systems that will eventually replace traditional cell based models. Horvath et al. [13] recently published a comprehensive review that sets out the principles to facilitate the definition and development of disease relevant assays.

Optical microscopy platforms have evolved to support the growing demand for conducting more complex in vitro experiments. Advanced high-throughput microscopy platforms such as the Cell Voyager 7000S (Yokogawa), the Opera Phenix (Perkin Elmer) or the IN Cell Analyzer 6000 (GE Healthcare Life Sciences) offer the capability of acquiring three dimensional (3D) imagery over time. As a result it is now possible to acquire time lapse three dimensional data (3D + t) at scale. Sophisticated software tools help to transform vast amounts of complex multi-channel imaging data into quantitative information.

Thanks to these advances, high-throughput cellular imaging is not only used in all stages of target based drug development [15] but is also becoming a relevant tool for investigating more fundamental biological questions. In vitro experiments make it possible to monitor cell fate, build artificial tumors for studying the link between cancer and inflammation [16], [17], and chart the interactions of the microenvironment with new drugs [18], [19], [20]. While it is probably too early to judge whether live cell imaging will have any significant role in clinical practice, it is safe to assume that there is a growing need for processing and analysing images of more complex 2D/3D co-culture models at scale.

The process of extracting meaningful information from image data (see Fig. 1) is part of an emerging field Houle and collaborators [21] termed phenomics, an area of computational biology concerned with the comprehensive high-dimensional measurement of phenomes. Houle et al. [21] distinguish between two different paradigms. The first is to take a large set of measurements at a given point in time, which they refer to as extensive phenotyping. In our context one could extensively characterise a given cell line with a number of different assays. Intensive phenotyping, on the other hand, requires characterising a given phenotype in great detail. Imaging specific processes using video microscopy [22], [23] would fall into this category. For the purpose of illustration, a notional imaging based phenotyping workflow is shown in Fig. 2.

While the difference between extensive and intensive phenotyping is less relevant to this review, it is the notion of taking measurements that is of fundamental importance. A good measurement should be accurate, robust and precise. Additional key elements [24] are the limit of detection, the response function and specificity. Given that there is noise in the measurement, the limit of detection defines the level below which the response is not meaningful. The response function specifies the dependence of the signal on systematic changes in experimental conditions. Good measurements are necessary for generating reproducible data. The choice of a specific cell segmentation algorithm can, for example, affect the interpretation of an experiment. Systematic analyses of various segmentation methods [25], [26] document how measurement statistics depend on algorithm choices and parameter settings.
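As a minimal illustration of the limit-of-detection concept above, consider estimating it from background-only ("blank") measurements. The values and the k = 3 convention below are illustrative assumptions, not taken from [24]:

```python
import numpy as np

def limit_of_detection(blank_signal, k=3.0):
    """Estimate the limit of detection as mean + k * std of blank
    (background-only) measurements; k = 3 is a common convention."""
    blank_signal = np.asarray(blank_signal, dtype=float)
    return blank_signal.mean() + k * blank_signal.std(ddof=1)

# Hypothetical background intensities from cell-free wells
blank = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]
lod = limit_of_detection(blank)
```

Any signal below `lod` would be treated as indistinguishable from measurement noise.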

The community of computer scientists, engineers and bioinformaticians that develops and advances mathematical methods and algorithms has grown substantially. Biological image analysis is now a broadly recognised area in leading international medical imaging and bioinformatics conferences. A number of challenge competitions, such as the “Particle Tracking Challenge” [27], the “Cell Tracking Competition” [28] and the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) Challenge [29], have been initiated to advance application-specific algorithms. Related challenge competitions can be found on the grand-challenge.org web page.

A field which was started by a few enthusiasts (e.g. [30]) has now matured and produces very powerful algorithms that will continue to impact the life sciences. Thanks to fundamental methodological advances in image analysis, signal processing, medical imaging and computer vision, the field will continue to evolve rapidly. One such development is the set of advances in machine learning that will be discussed in this article. We expect that biomedical imaging will become a core part of the life science curriculum. Only with a certain understanding of the underlying methods is it possible to apply them thoughtfully and to ensure that studies produce reproducible results which ultimately help address key scientific questions. There is no doubt that microscopy has evolved from a technique of choice for producing stunning and impressive cover images for scientific publications to a technology that turns vast amounts of imaging data into quantitative information.

A number of review articles [2], [31], [32], [33], [34] have highlighted the opportunities biomedical imaging will provide. The term bio-image informatics has evolved and is now in common use. Meijering [35] published a very comprehensive review of cell tracking methods. Dufour et al. [36] provide an overview of methods for assessing 3D morphology. Here we strive to capture the impact that more recent developments in computer vision and machine learning will have.

Enabled by machine learning, computer vision has emerged as a key technology powering applications ranging from internet search to autonomous driving. The way in which we develop and design algorithms is changing dramatically. Rather than designing such methods from first principles, advanced machine learning techniques now allow us to learn computational models directly from the data. We will discuss how such deep architectures can be used to build reliable algorithms for biological imaging applications.

The workflow shown in Fig. 2 also provides a high level motivation for the overall structure of this article. The analysis of shape and motion are the two central themes of this review. Only if we can delineate biological targets accurately will it be possible to extract a set of meaningful and robust measurements. The segmentation of objects is discussed in Section 3. Methods for cell tracking are presented in Section 4. In the light of recent methodological developments, we believe that these are the areas which will advance most rapidly.

In many cases we cannot work on the raw images directly. Preprocessing methods play a central role in removing image noise and other artefacts. Given the broad range of microscopy methods, a comprehensive review of preprocessing techniques is beyond the scope of Section 2. Instead, more recent approaches for processing label-free microscopy images will be discussed.

In Section 5 we discuss some of the available software tools. Rather than limiting the discussion to software relevant to end users, we also provide an overview of toolkits to be used by algorithm developers. With larger data sets, information visualisation starts to play a more prominent role. In Section 6 we discuss ideas for visualising the extracted data more effectively. Conclusions and directions for future work are presented in Section 7.

Section snippets

Image preprocessing methods

Shading correction, removal of image noise and the suppression of out of focus light are perhaps the most important steps that would need to be addressed during image preprocessing. Given the focus of this review it is not possible to discuss the necessary calibration procedures that should be part of experimental protocols. Here we only highlight a few topics that should be taken into account during image processing. As the illumination across the field of view will not be uniform an explicit
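The shading (flat-field) correction mentioned above can be sketched as follows. This is a minimal illustration, not the article's own procedure; it assumes a flat-field image of an empty field and a dark image acquired with the shutter closed:

```python
import numpy as np

def shading_correction(raw, flat, dark):
    """Flat-field correction: remove non-uniform illumination using a
    flat-field (empty-field) image and a dark (no-light) image.
    corrected = (raw - dark) / (flat - dark), rescaled to preserve the
    overall intensity level."""
    raw, flat, dark = (np.asarray(a, dtype=float) for a in (raw, flat, dark))
    gain = flat - dark
    # Guard against division by zero in dead pixels
    return (raw - dark) / np.clip(gain, 1e-6, None) * gain.mean()

# Synthetic example: a uniform specimen under a left-to-right
# illumination gradient should become flat after correction
gain = np.linspace(0.5, 1.5, 100).reshape(10, 10)
dark = np.full((10, 10), 5.0)
raw = 100.0 * gain + dark          # uniform scene, shaded illumination
flat = 1000.0 * gain + dark        # empty-field reference
corrected = shading_correction(raw, flat, dark)
```

After correction the synthetic image is constant, since the multiplicative illumination profile cancels out.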

Delineating objects of interest

Digital images are represented as pixels associated with various intensity, or brightness, values. Biologists, however, are interested in objects such as cells, vesicles or tissue components such as blood vessels or glands. Image segmentation allows the identification of object boundaries, which can then be used to quantify and analyse various attributes associated with the objects of interest.

Despite the development of many segmentation methods over the last five decades image segmentation remains one
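Classical intensity-based segmentation, the starting point for many of these methods, can be sketched as below. This is an illustrative example (Otsu thresholding followed by connected-component labelling via `scipy.ndimage`), not a method from the article itself:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(image, nbins=256):
    """Otsu's method: pick the threshold that maximises the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)               # weight of the "background" class
    w1 = w0[-1] - w0                   # weight of the "foreground" class
    m0 = np.cumsum(hist * centers)
    mean0 = m0 / np.maximum(w0, 1e-12)
    mean1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mean0 - mean1) ** 2
    return centers[np.argmax(between)]

def segment_objects(image):
    """Binarise with Otsu's threshold, then label connected components."""
    mask = image > otsu_threshold(image)
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects

# Synthetic image with two bright "nuclei" on a dark background
img = np.zeros((50, 50))
img[5:15, 5:15] = 1.0
img[30:40, 30:42] = 1.0
labels, n_objects = segment_objects(img)
```

Each labelled region can then feed downstream measurements (area, centroid, mean intensity), which is where the algorithm- and parameter-dependence discussed above enters.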

Cell tracking

Cell tracking is essential for understanding the temporal dynamics of cell behaviour in time-lapse sequences. Cellular density in time lapse sequences obtained from typical biological experiments tends to be high, given the closely-packed arrangement of cells in most frames. In addition, equipment and biological limitations, including the risk of photodamage, reduce the frequency at which images can be captured over time. Consequently, this results in time lapse data with significantly low
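The simplest family of tracking methods links detections frame to frame by proximity. The sketch below is an illustrative greedy nearest-neighbour linker with a gating distance (our own toy example, not a method from the article); practical trackers additionally handle division, appearance and disappearance:

```python
import numpy as np

def link_frames(prev_centroids, next_centroids, max_dist=20.0):
    """Greedy nearest-neighbour linking of cell centroids between two
    consecutive frames. Links longer than max_dist are rejected
    (cell lost, divided, or newly appeared)."""
    prev_c = np.asarray(prev_centroids, dtype=float)
    next_c = np.asarray(next_centroids, dtype=float)
    # Pairwise Euclidean distances, shape (n_prev, n_next)
    d = np.linalg.norm(prev_c[:, None, :] - next_c[None, :, :], axis=2)
    links, used = {}, set()
    # Assign cheapest pairs first (greedy approximation to optimal matching)
    for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        if i in links or j in used or d[i, j] > max_dist:
            continue
        links[i] = j
        used.add(j)
    return links

# Toy example: three cells in frame t, three detections in frame t+1;
# the far-away detection stays unlinked (gated out)
prev = [(0.0, 0.0), (10.0, 10.0), (50.0, 50.0)]
nxt = [(11.0, 11.0), (1.0, 0.0), (100.0, 100.0)]
links = link_frames(prev, nxt, max_dist=5.0)
```

Chaining such frame-to-frame links over the whole sequence yields trajectories; globally optimal assignment (e.g. the Hungarian algorithm) is a common refinement of this greedy step.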

Software tools

Eliceiri et al. [33] provide a comprehensive overview of all the different software tools that are necessary for the implementation of an image informatics workflow. Here image acquisition, storage, analysis as well as image and data visualisation need to be taken into account. This section focuses on freely available open-source software tools. The following three sections provide an overview of software packages for end users, tool kits facilitating algorithm development and client server

Information visualisation

Image analysis is only the first step in imaging studies that results in large and complex high dimensional datasets describing the phenomena-of-interest or the differences between biological samples. Visualisation plays an important role in understanding and analysing complex microscopy data. Dedicated software is needed to effectively view and inspect 3D time-lapse data that has been acquired in multiple channels. Segmentation and tracking results need to be displayed in context of the

Conclusion & future directions

In recent years the field of biological imaging has grown significantly. The community has developed new approaches and tools that have become an integral part of biological studies. Algorithms and methodologies continue to evolve. The recent advances in machine learning and computer vision, which allow capable and robust algorithms to be learned directly from the data with the aid of deep learning, have been highlighted in this review. Rather than setting up pipelines that utilise a number of

Acknowledgements

TAN received support from the RCUK Centre for Doctoral Training in Healthcare Innovation, RCUK Digital Economy Theme (EP/G036861/1), the EPSRC, and the Wellcome Institutional Strategic Support Fund of the University of Oxford. JR and HS are supported by the EPSRC SeeBiByte Programme Grant (EP/M013774/1). In addition, JR is supported by the Ludwig Institute for Cancer Research.

References (178)

  • N. Xu et al.

    Object segmentation using graph cuts based active contours

    Comput. Vis. Image Understanding

    (2007)
  • K. Hornik et al.

    Multilayer feedforward networks are universal approximators

    Neural Networks

    (1989)
  • M. Veta et al.

    Assessment of algorithms for mitosis detection in breast cancer histopathology images

    Med. Image Anal.

    (2015)
  • R.Y. Tsien

    The green fluorescent protein

    Ann. Rev. Biochem.

    (1998)
  • C. Vonesch et al.

    The colored revolution of bioimaging

    IEEE Signal Process. Mag.

    (2006)
  • E. Betzig et al.

    Imaging intracellular fluorescent proteins at nanometer resolution

    Science

    (2006)
  • S.W. Hell et al.

    The 2015 super-resolution microscopy roadmap

    J. Phys. D: Appl. Phys.

    (2015)
  • S.G. Megason

    In toto imaging of embryogenesis with confocal time-lapse microscopy

    Zebrafish: Methods Protoc.

    (2009)
  • J. Zhang et al.

    Creating new fluorescent probes for cell biology

    Nat. Rev. Mol. Cell Biol.

    (2002)
  • X. Li et al.

    Fast noninvasive activation and inhibition of neural and network activity by vertebrate rhodopsin and green algae channelrhodopsin

    Proc. Natl. Acad. Sci. U.S.A.

    (2005)
  • E.S. Boyden et al.

    Millisecond-timescale, genetically targeted optical control of neural activity

    Nat. Neurosci.

    (2005)
  • L. Fenno et al.

    The development and application of optogenetics

    Ann. Rev. Neurosci.

    (2011)
  • D.C. Swinney et al.

    How were new medicines discovered?

    Nat. Rev. Drug Discovery

    (2011)
  • P. Horvath, N. Aulner, M. Bickle, A.M. Davies, E.D. Nery, D. Ebner, M.C. Montoya, P. Östling, V. Pietiäinen, L.S....
  • V. Ljosa et al.

    Annotated high-throughput microscopy image sets for validation

    Nat. Methods

    (2012)
  • P. Lang et al.

    Cellular imaging in drug discovery

    Nat. Rev. Drug Discovery

    (2006)
  • A. Mantovani et al.

    Cancer-related inflammation

    Nature

    (2008)
  • S.M. Crusz et al.

    Inflammation and cancer: advances and new agents

    Nat. Rev. Clin. Oncol.

    (2015)
  • H.E. Francies et al.

    What role could organoids play in the personalization of cancer treatment?

    Pharmacogenomics

    (2015)
  • C. Pauli et al.

    An emerging role for cytopathology in precision oncology

    Cancer Cytopathol.

    (2016)
  • D. Houle et al.

    Phenomics: the next challenge

    Nat. Rev. Genet.

    (2010)
  • L. Ji et al.

    Fluctuations of intracellular forces during cell protrusion

    Nat. Cell Biol.

    (2008)
  • A.L. Plant et al.

    Improved reproducibility by assuring confidence in measurements in biomedical research

    Nat. Methods

    (2014)
  • A.A. Dima et al.

    Comparison of segmentation algorithms for fluorescence microscopy images of cells

    Cytometry A

    (2011)
  • P. Bajcsy et al.

    Survey statistics of automated segmentations applied to optical imaging of mammalian cells

    BMC Bioinf.

    (2015)
  • N. Chenouard et al.

    Objective comparison of particle tracking methods

    Nat. Methods

    (2014)
  • M. Maška et al.

    A benchmark for comparison of cell tracking algorithms

    Bioinformatics

    (2014)
  • Y. Liu

    The diadem and beyond

    Neuroinformatics

    (2011)
  • A.K. Jain et al.

    Segmentation of muscle cell pictures: a preliminary study

    IEEE Trans. Pattern Anal. Mach. Intell. PAMI

    (1980)
  • H. Peng

    Bioimage informatics: a new area of engineering biology

    Bioinformatics

    (2008)
  • J. Rittscher

    Characterization of biological processes through automated image analysis

    Ann. Rev. Biomed. Eng.

    (2010)
  • K.W. Eliceiri et al.

    Biological imaging software tools

    Nat. Methods

    (2012)
  • C. Kervrann et al.

    A guided tour of selected image processing and analysis methods for fluorescence and electron microscopy

    IEEE J. Sel. Top. Signal Process.

    (2016)
  • A.C. Dufour et al.

    Signal processing challenges in quantitative 3-D cell morphology: more than meets the eye

    IEEE Signal Process. Mag.

    (2015)
  • M.A. Model

    Intensity calibration and shading correction for fluorescence microscopes

    Current Protoc. Cytometry Chapter

    (2006)
  • L. Landmann

    Deconvolution improves colocalization analysis of multiple fluorochromes in 3D confocal data sets more than filtering techniques

    J. Microsc.

    (2002)
  • S. Mallat

    A theory for multiresolution signal decomposition: The wavelet representation

    IEEE Trans. Pattern Anal. Mach. Intell.

    (1989)
  • D.L. Donoho

    De-noising by soft-thresholding

    IEEE Trans. Inf. Theory

    (1995)
  • J. Boulanger et al.

    Patch-based nonlocal functional for denoising fluorescence microscopy image sequences

    IEEE Trans. Med. Imaging

    (2010)