Reliability evaluation of circuits operating in harsh environments is essential to prevent functional failures and potential catastrophic events. To achieve this, it is important to have a solid understanding of different radiation environments and their effects on matter. This chapter aims to provide the necessary foundations to comprehend the dynamics of radiation environments, including space and Earth’s atmosphere, as well as particle accelerators. Additionally, the chapter explores the prominent effects resulting from the interaction of energetic particles with matter. This knowledge is crucial for developing strategies to mitigate the impact of radiation on circuit reliability and ensure the proper functioning of electronic systems in challenging environments.
1.1 Introduction
From the terrestrial landscapes of Earth to the celestial expanses of our solar system, ionizing radiation is omnipresent. It shapes the world around us, playing an indispensable role in sustaining life on Earth while also posing potential threats. Radiation itself is a form of energy emitted by atoms, taking the form of either electromagnetic waves or energetic particles. Different types of radiation interact with matter in unique ways, and they can be generated naturally, as in the case of galactic cosmic rays, or artificially, as in particle accelerators. Furthermore, their intensity and energy vary significantly across environments. Therefore, understanding the sources and characteristics of radiation within a specific environment is crucial for ensuring safety and the effective operation of electronic systems.
1.2 Radiation Environments
1.2.1 Space Environment
Mission success in the near-Earth space environment hinges on a thorough understanding of the prevalent radiation sources (as illustrated in Fig. 1.1). These sources can be broadly categorized into three primary types:
Solar Energetic Particles (SEPs): emitted by the Sun during periods of enhanced activity, SEPs are energetic protons and heavier ions accelerated by solar flares and coronal mass ejections (CMEs). The intensity and energy spectrum of SEPs vary significantly depending on the specific solar event.
Galactic Cosmic Rays (GCRs): originating from outside our solar system, GCRs are high-energy particles, primarily consisting of protons and heavier nuclei, believed to be produced by violent events such as supernovae explosions. Due to their high energy, GCRs can penetrate deep into spacecraft and pose significant health risks to astronauts on long-duration missions. GCRs are a constant source of radiation in near-Earth space, but their flux is modulated by the Sun’s heliosphere, a vast bubble of charged particles extending outward from the Sun.
Geomagnetically Trapped Particle Radiation (Van Allen Belts): this region consists of a complex mixture of energetic particles, primarily protons and electrons, trapped by the Earth’s magnetosphere. The magnetosphere is a vast, dynamic region shaped by the Earth’s magnetic field, which acts as a shield against incoming charged particles from the Sun and beyond. These trapped particles form two distinct toroidal regions known as the Van Allen radiation belts. The inner belt primarily consists of high-energy protons, while the outer belt contains a broader mix of protons and electrons. The specific location and intensity of trapped particles within these belts vary depending on space weather conditions.
Solar activity plays a crucial role in shaping the near-Earth space radiation environment. Solar flares and CMEs can significantly enhance the flux of SEPs, potentially creating intense radiation bursts that can overwhelm spacecraft shielding. Furthermore, solar activity can also indirectly influence the intensity of GCRs and trapped radiation. During periods of high solar activity, the Earth’s magnetosphere becomes more compressed, allowing a higher flux of GCRs to penetrate into the inner regions of near-Earth space. Additionally, energetic solar wind particles can interact with the magnetosphere, leading to enhanced particle populations within the Van Allen belts.
Therefore, understanding and monitoring solar activity through observations of sunspots and other solar phenomena is critical for predicting the radiation environment that spacecraft and sensitive electronics may encounter in near-Earth space.
Particle flux is the rate at which particles pass through a unit area, expressed in \(\mbox{cm}^{-2}\cdot \mbox{s}^{-1}\); it is also known as flux density. Another widely used quantity is the particle fluence, which is the particle flux integrated over a period of time and expressed in \(\mbox{cm}^{-2}\).
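Since the fluence is simply the time integral of the flux, the conversion is straightforward; the short Python sketch below assumes a constant flux and uses illustrative values rather than numbers from this chapter.

```python
# Minimal sketch: integrate a constant particle flux (cm^-2 s^-1) over a
# mission duration to obtain the fluence (cm^-2). Values are illustrative.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def fluence(flux_cm2_s: float, duration_s: float) -> float:
    """Fluence = flux * time, for a constant flux."""
    return flux_cm2_s * duration_s

phi = 10.0                       # assumed average flux in cm^-2 s^-1
mission = 5 * SECONDS_PER_YEAR   # 5-year mission in seconds
print(f"Fluence: {fluence(phi, mission):.2e} cm^-2")
```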
Numerous space missions have been specifically designed to measure and comprehend solar activity, recognizing its significance for life on Earth and its influence on mission planning and design. A notable example is the Solar and Heliospheric Observatory (SOHO), a joint mission between the European Space Agency (ESA) and NASA, which was launched in 1995 and continues to operate successfully.
The Sun’s activity follows an approximate 11-year cycle, characterized by periods of high (solar maximum, about 7 years) and low (solar minimum, about 4 years) activity. Figure 1.2 presents solar activity measurements and predictions for the upcoming cycles based on the sunspot number, also known as the Wolf number. Sunspots are cooler, darker regions on the Sun’s surface associated with intense magnetic activity.
While the Sun continuously emits a low-energy stream of particles known as the solar wind, primarily consisting of protons and electrons with some heavier ions, significant radiation events are associated with solar flares and coronal mass ejections (CMEs). These infrequent but powerful events release a substantial number of highly energetic particles into space, with energies ranging from kiloelectronvolts (keV) to gigaelectronvolts (GeV) and speeds reaching up to 80% of the speed of light.
Highly energetic particles originating from intense solar activities are particularly concerning due to their ability to rapidly reach Earth’s atmosphere within hours or days. These energetic particles pose a dual threat: endangering astronauts and disrupting electronic systems in space [2]. The intensity of the solar wind and the frequency of solar particle events are directly influenced by solar activity, specifically the number of sunspots. During the solar maximum, particularly the declining phase, CMEs and solar flares occur more frequently [3].
Interestingly, solar activity has an inverse correlation with the flux of galactic cosmic rays (GCRs). GCRs are high-energy particles originating from outside the solar system and have the lowest particle flux among the radiation sources in near-Earth space. However, due to their high energy, they are highly penetrating and pose a significant threat to space-borne electronics and human health. Shielding technology offers limited protection against GCR radiation exposure.
During periods of high solar activity, the Sun’s magnetic field strengthens, acting as a more effective shield against incoming particles from beyond the solar system. This leads to a decrease in the flux of GCRs reaching Earth’s vicinity [5‐7]. Conversely, during solar minimum periods, the weakened solar magnetic field allows a higher GCR flux to penetrate into near-Earth space. Understanding and monitoring solar activity and its associated radiation environment is crucial for several reasons:
Space Mission Planning: knowledge of the radiation environment allows engineers to design spacecraft with appropriate shielding strategies and operational procedures to ensure the safety and functionality of spacecraft systems.
Astronaut Safety: astronauts on long-duration missions are exposed to elevated levels of radiation, posing potential health risks. Understanding the radiation environment helps to develop strategies for mitigating these risks.
Space Weather Forecasting: missions like SOHO provide valuable data that contributes to improved space weather forecasting and radiation risk assessment.
Solar events, such as solar storms and CMEs, have been documented for centuries. The Carrington event of 1859 serves as a powerful historical example. This massive solar storm caused significant disruptions to early telegraph systems and produced auroras visible across the globe [8, 9]. It was one of the first instances in which the consequences of a solar event affecting Earth were recognized. An event of this magnitude could be catastrophic nowadays, as modern technology is completely dependent on electricity. A more recent example occurred in January 2022, when a significant solar storm originating from a CME caused unexpected impacts on orbiting satellites [10, 11]. The energy deposited by the CME increased atmospheric drag, pulling several satellites out of their intended orbits. This incident, which unfortunately resulted in the loss of multiple satellites, underscores the importance of considering solar activity’s potential effects when designing space missions.
The third radiation component in the near-Earth environment is the trapped particle radiation in the Van Allen radiation belts, illustrated in Fig. 1.3. As can be seen in Fig. 1.3, the outer belt is wider than the inner belt, and it is also the most unstable due to the weaker influence of the Earth’s magnetosphere. Similar to the GCRs, the trapped radiation is influenced by solar activity and is modulated such that the higher the activity, the higher the electron flux and the lower the proton flux, and vice versa [12]. The Van Allen radiation belts have always been a concern for space mission design due to their impact on electronic reliability. Protons are able to induce a variety of effects, ranging from parametric degradation due to absorbed dose to single-event effects.
These radiation effects are further detailed in the next section. The analysis of the proton flux in the inner belt is highly important for any space mission targeting low-Earth orbit (LEO), such as several Earth observation instruments and the International Space Station (ISS) shown in Fig. 1.3. Overall, a comprehensive understanding of the dynamics and composition of the trapped radiation belts plays a crucial role in developing effective measures to safeguard satellites against the adverse impacts of space weather.
Another crucial consideration in the design of spaceborne components is the anomaly within the Earth’s magnetosphere, known as the South Atlantic Anomaly (SAA). This anomaly arises from a slight tilt and offset between the Earth’s geomagnetic and rotational axes, causing a significant weakening of the Earth’s magnetic field over the South Atlantic region. This weakened field allows energetic particles from the Van Allen belts and galactic cosmic rays (GCRs) to penetrate lower altitudes than they typically would elsewhere, resulting in a zone of increased radiation exposure. Figure 1.4 illustrates this phenomenon, showing the AP8 MIN model’s depiction of high-flux protons reaching altitudes as low as 500 km and below in the SAA region.
The presence of the SAA poses a significant challenge for spacecraft operating in LEO. The intense radiation environment within the SAA can potentially damage sensitive electronics onboard satellites. As a result, many satellites employ a precautionary measure of switching off sensitive electronic components when traversing the SAA to avoid potential damage, particularly from stochastic events (discussed in the next chapter).
1.2.2 Atmospheric Environment
When cosmic rays enter the Earth’s atmosphere, they interact with atmospheric atoms, primarily nitrogen and oxygen molecules, leading to the production of secondary radiation. These secondary particles can in turn interact with atmospheric atoms, creating a cascade effect and generating additional secondary particles. This phenomenon is known as a cosmic ray air shower, and it is illustrated in Fig. 1.5. In this example, the primary cosmic particle is a highly energetic proton that collides with the nucleus of a nitrogen atom in the air. These collisions lead to the production of secondary particles, including protons, neutrons, pions \(\pi \), kaons K, and muons \(\mu \). This process continues until the secondary particles have insufficient energy to sustain further interactions. Pions and kaons are unstable particles with short lifespans. They rapidly decay into various particles, including muons, electrons \(e^{-}\), positrons \(e^{+}\) (the antiparticles of electrons), neutrinos \(\nu \), and photons \(\gamma \).
Cosmic ray air showers are composed of various secondary particles, each with unique properties. Muons are particularly noteworthy due to their relatively long lifespan compared to pions and kaons, despite still being short-lived compared to stable particles like protons and electrons. Although their average lifetime is just 2.2 microseconds, relativistic time dilation allows them to travel several kilometers through the atmosphere before decaying. Like electrons, muons are fundamental particles; however, their much greater mass grants them significantly higher penetration power. Muons can traverse several centimeters of materials like silicon, potentially reaching sensitive electronic components within spacecraft or even electronics on the ground. Historically, secondary neutrons were considered the primary contributors to radiation damage in electronic devices exposed to the atmospheric environment. However, with advancements in miniaturization and increased device sensitivity, muons pose a growing threat to system reliability by inducing radiation damage in microelectronics [14‐17].
It is important to note that the cascade of secondary interactions triggered by a high-energy cosmic ray is a complex process with numerous possible reaction pathways. These pathways depend heavily on the specific energy, angle of incidence, and characteristics of the primary particle, along with the properties of the target atoms it interacts with. While Fig. 1.5 provides a simplified illustration, the true nature of these interactions involves a vast array of potential reactions. The flux and composition of atmospheric radiation are subject to fluctuation, influenced by various factors. Solar activity notably impacts the influx of particles bombarding the upper atmosphere, while the Earth’s nonuniform magnetic field results in spatial variations in radiation intensity and composition. Regions with weaker magnetic fields, such as the South Atlantic Anomaly discussed earlier, experience heightened fluxes of energetic particles.
Altitude also plays a pivotal role. As we descend through the atmosphere, the particle flux initially rises due to collisions with atmospheric nuclei, which generate secondary particles. This increase peaks at the Pfotzer maximum, after which the flux diminishes due to energy loss through interactions, absorption by atmospheric molecules, and the decay of unstable particles [18]. For instance, the neutron flux increases with decreasing altitude, reaching its peak of approximately \(10^4\) neutrons per square centimeter per hour (\(\mbox{n/cm}^2/\mbox{h}\)) at around 20 km [19, 20]. However, it declines drastically to about 20 \(\mbox{n/cm}^2/\mbox{h}\) at ground level, a mere fraction of its peak value, approximately 0.2%. Apart from neutrons and muons, protons are also a concern for ground-level atmospheric applications, although they typically exhibit lower fluxes, with energies of up to 400 MeV [21].
While the probability of failure induced by secondary cosmic ray particles is relatively low at ground level, it can still be a concern for safety-critical systems that rely heavily on electronics. These systems require extremely high reliability, and even a small chance of failure can have significant consequences. Therefore, understanding the particle flux and composition at different altitudes and locations is crucial for assessing the radiation threat and designing reliable systems for atmospheric applications.
1.2.3 Accelerator Environment
Another hostile environment with a very particular radiation field is encountered in the particle accelerators used in high-energy physics research laboratories. A very good example is the world’s largest collider, the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), in which, in its current design, protons can be accelerated up to 7 TeV. The LHC is a circular collider, as shown in the layout in Fig. 1.6, where the eight octants, the interaction points (IPs), and the direction of the beams are illustrated. There are four main interaction points where the physics experiments are hosted: interaction point 1 (IP1) hosts the ATLAS experiment, the general-purpose LHC experiment where the world’s largest particle detector is installed; interaction point 2 (IP2) hosts the ALICE experiment, a large ion collider detector dedicated to heavy-ion physics; interaction point 5 (IP5) hosts the Compact Muon Solenoid (CMS) experiment, another general-purpose detector similar to ATLAS with a broad physics program and a different magnet system; and, lastly, interaction point 8 (IP8) hosts the LHCb (Large Hadron Collider beauty) experiment, which, unlike the previous experiments, is a collection of subdetectors used to detect the forward particles from the collisions and to investigate the beauty quark (b quark).
The performance of particle colliders such as the LHC is mostly associated with their physics production, i.e., the number of particle collisions and the production of new physics. Generally, it is measured by the energy to which the injected particles can be accelerated and by the integrated luminosity (a measure of the number of inelastic collisions) recorded by their detectors. During the LHC physics run 2 (2015–2018), an average integrated luminosity of 50 \(\mbox{fb}^{-1}\) (inverse femtobarns) per year was achieved. This high luminosity allowed a significant number of collision events to be recorded and analyzed. However, with the high-luminosity (HL) LHC upgrade project, the nominal annual integrated luminosity is expected to reach over 250 \(\mbox{fb}^{-1}\). The increase in integrated luminosity and the corresponding increase in the number of collisions also lead to higher radiation levels in different areas of the accelerator. These elevated radiation levels can pose operational challenges, such as reduced beam availability caused by beam dumps resulting from magnet quenching (loss of superconductivity) or system failures due to radiation effects on electronics.
Luminosity is a measure of how many collisions a particle collider can produce. It is usually expressed as an integral over the duration of the physics run, and its unit is the inverse femtobarn (\(\mbox{fb}^{-1}\)), with 1 \(\mbox{fb}^{-1}\) corresponding to roughly \(10^{14}\) particle interactions at TeV energies [23].
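The order of magnitude quoted above can be checked with a few lines of Python: the number of inelastic interactions is the integrated luminosity multiplied by the inelastic cross section. The cross-section value used below (about 80 mb for proton-proton collisions at LHC energies) is an assumption introduced for this estimate, not a number taken from this chapter.

```python
# Rough check: interactions = integrated luminosity * inelastic cross section.

MB_TO_CM2 = 1e-27   # 1 millibarn in cm^2
FB_TO_CM2 = 1e-39   # 1 femtobarn in cm^2

def n_inelastic(int_lumi_inv_fb: float, sigma_inel_mb: float = 80.0) -> float:
    """Number of inelastic collisions for a given integrated luminosity (fb^-1)."""
    lumi_cm2 = int_lumi_inv_fb / FB_TO_CM2   # fb^-1 -> cm^-2
    sigma_cm2 = sigma_inel_mb * MB_TO_CM2    # mb -> cm^2
    return lumi_cm2 * sigma_cm2

print(f"1 fb^-1  -> {n_inelastic(1.0):.1e} collisions")    # roughly 1e14
print(f"50 fb^-1 -> {n_inelastic(50.0):.1e} collisions")   # an average Run 2 year
```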
The LHC radiation environment consists of a mixed field with a variety of particles and energies which vary along the different locations in the accelerator complex. It depends not only on the different beam loss mechanisms at play but also on the accelerator settings such as the beam intensity and collimator settings [23, 24]. In this context, there are three main sources of radiation generally related to beam losses:
1.
Beam Interaction with Residual Gas: the particles are accelerated in a vacuum pipe to prevent their interaction with air molecules and the consequent loss of beam intensity. However, despite the very high vacuum levels, some residual gas remains in the vacuum chamber. When the protons interact with these residual molecules, a cascade of secondary particles, similar to the one observed in the atmospheric environment, is produced in the accelerator tunnels. This beam loss mechanism is the main contributor to the radiation level in the arc sections of the LHC ring, and it is expected to scale linearly with the integrated intensity (number of injected particles) and the residual gas density in the vacuum chamber [25].
2.
Beam Interaction with Machine Elements: despite the strong magnetic field applied to control and guide the bunch of particles within the beam pipes, some particles, known as halo particles, can deviate from the desired trajectory and interact with different beam line elements leading to their activation and increased radiation levels. For instance, to prevent the damage of sensitive components and possible magnet quenches and to provide a protection against uncontrolled beam losses, collimators are used to clean up the beam by absorbing the halo particles. However, the collimator locations are hotspots for radiation levels as a large number of secondary particles is produced from these halo collisions.
3.
Debris from Collisions in the Physics Experiments: another common source of radiation is the collision debris from the proton-proton inelastic interactions in the four main LHC experiments shown in Fig. 1.6. For every proton-proton collision, an average multiplicity of 120 secondary particles is generated and intercepted by the detectors placed in the experiments [23]. However, some of these secondaries are able to escape the detector region and collide with the beam elements in the surrounding area, increasing the radiation levels. Unlike the beam losses due to beam-gas interactions, the radiation levels induced by collision debris are luminosity driven and therefore scale with the integrated luminosity of each experiment [25].
The monitoring of radiation levels in particle accelerators serves multiple crucial purposes. It not only safeguards the integrity and performance of the accelerator itself but also ensures the safety of personnel working in the underground beam lines. While delving into the intricacies of radiation protection measures falls beyond the scope of this book, it is essential to acknowledge their significance in such radioactive environments. Additionally, radiation hardness assurance (RHA) methodologies play a pivotal role in qualifying components and systems to withstand the harsh radiation environment, mitigating the risk of system failures induced by high radiation levels [23, 26‐28]. When examining the impact of radiation on electronics, particular attention is given to high-energy hadrons (HEH) and neutrons, as they dominate the particle spectrum within the LHC environment. Figure 1.7 presents a representative lethargy spectrum encountered at the LHC, considering the relevant hadrons such as protons, charged pions, and kaons, as well as neutrons. This spectrum was simulated using FLUKA, a widely validated Monte Carlo transport code extensively utilized in accelerator research and beyond [29‐31]. As depicted in Fig. 1.7, three distinct energy ranges stand out, with varying abundances and effects on electronic reliability. High-energy hadrons exhibit a prominent flux peak in the several hundred MeV range, while neutrons span a range from thermal energies up to 20 MeV [32]. These energy ranges are of significant relevance when assessing the potential impact of radiation on electronic systems. By comprehending the characteristics and intensities of these particles, strategies can be devised to harden electronics against radiation-induced challenges and ensure their reliable operation in the demanding environment of particle accelerators [27].
Hadrons are subatomic particles composed of two or more quarks held together by the strong nuclear force, such as protons, neutrons, pions, and kaons. The term high-energy hadrons (HEH) is widely used in the accelerator environment to designate hadrons with energies in the MeV range, which are particularly relevant for inducing radiation events in electronic components.
In Table 1.1, the approximate HEH annual fluxes are presented for the different radiation environments, from space, i.e., the International Space Station (ISS) and low-Earth orbits, to ground and accelerator environments. For instance, the HEH fluxes at CERN’s accelerator complex, from the injector chain to the LHC ring, can be comparable to the fluxes observed in commercial flights (around \(10^{7}/\mbox{cm}^{2}/\mbox{yr}\)) and in space (\(10^{8}-10^{9}/\mbox{cm}^{2}/\mbox{yr}\)); however, depending on the location in the machine, it can reach levels well beyond these (up to \(10^{12}/\mbox{cm}^{2}/\mbox{yr}\)).
Table 1.1
Approximate high-energy hadron (HEH) annual fluxes from space to accelerator radiation environments

Spectrum | \(\Phi _{HEH}\) (\(/\mbox{cm}^{2}/\mbox{yr}\))
Ground level | \({\sim }2 \cdot 10^{5}\)
Avionics | \({\sim }2 \cdot 10^{7}\)
ISS orbit | \({\sim }7 \cdot 10^{8}\)
LEO orbit (800 km) | \({\sim }3 \cdot 10^{9}\)
LHC ring \(+\) transfer lines | \({\sim }10^{6} - 10^{12}\)
Super Proton Synchrotron (SPS) | \({\sim }10^{6} - 10^{12}\)
Proton Synchrotron (PS) | \({\sim }10^{10} - 10^{11}\)
PS Booster | \({\sim }10^{9} - 10^{11}\)
1.3 Particle Interaction Mechanisms
This section serves as an introduction to particle-matter interactions, focusing mainly on the important mechanisms capable of inducing radiation effects in electronic components. These effects can be broadly classified into two groups: cumulative effects, which encompass total ionizing dose (TID) and displacement damage (DD), and single-event effects (SEEs), a category of effects in which a single particle impact can disrupt the proper operation of electronic devices. Regardless of the specific effect on the electronic device, it always originates from one or more particles that deposit energy in the device. In this book, our focus is primarily on the study and mitigation of single-event effects, more specifically on the nondestructive effects known as soft errors. Therefore, the subsequent sections are dedicated to comprehending these effects in greater detail.
Numerous types of particles have the capability to deposit energy in a device and induce SEEs. Radiation-matter interactions are strongly dependent on the characteristics of the particle, including its charge, mass, and energy. For instance, charged particles like protons engage in both nuclear reactions and Coulombic interactions with the orbital electrons of the target atoms. Conversely, neutrons, being uncharged, can traverse the electronic cloud of an atom without interaction, making them highly penetrating particles that interact and lose energy only with atomic nuclei. Figure 1.8 outlines the main Coulombic and nuclear interactions, as well as photon-matter interactions, which are essential for understanding the impact of radiation on electronics.
Coulombic Interactions:
Ionization: when a charged particle (e.g., a proton) interacts with the electric field of an orbital electron, it can remove it from its orbit, creating an electron-hole pair, where the vacancy left behind by the ejected electron acts as a positive charge carrier (hole).
Excitation: occurs when the interaction with the electric field only elevates the electron to a higher energy level within the same atom; upon returning to its ground state, the electron releases the energy in the form of a photon.
Elastic collision: a charged particle is deflected by the atomic nucleus and the total kinetic energy is conserved.
Bremsstrahlung: occurs when a charged particle accelerates or decelerates, emitting electromagnetic radiation (photons) due to this change in motion.
Nuclear Reactions:
Neutron absorption or capture: occurs when a low-energy neutron is captured by a nucleus, leading to the emission of gamma rays as the excess energy is released.
Elastic scattering: the incident particle collides with the nucleus, transferring some kinetic energy to it; the nucleus recoils while the incident particle is scattered in a different direction, and the total kinetic energy of the system remains conserved.
Inelastic scattering: the total kinetic energy is not conserved, and the collision can induce the emission of photons or the ejection of electrons.
Non-elastic scattering: a similar phenomenon in which the nucleus is fragmented into secondary particles, neutrons, or other ion species.
Photon-Matter Interactions:
Photoelectric effect: a photon transfers all its energy to a tightly bound orbital electron, ejecting it from the atom with a kinetic energy equal to the difference between the energy of the incident photon and the binding energy.
Compton scattering: occurs when a photon interacts with a relatively free electron (compared to tightly bound inner-shell electrons), imparting a portion of its energy to the electron, which recoils with some kinetic energy; the remaining energy is carried away by a scattered photon with a longer wavelength (lower energy) than the incident photon.
Rayleigh scattering: also known as coherent scattering, occurs when a low-energy photon interacts with an atom and is scattered elastically; the scattered photon retains its energy but is deflected in a different direction.
Pair production: high-energy photons (typically exceeding 1.022 MeV) can interact with the strong electric field near a nucleus, leading to the creation of an electron-positron pair. The positron is the antiparticle of the electron, with the same mass but a positive charge. The 1.022 MeV threshold arises because the rest mass of the electron and of the positron each corresponds to 0.511 MeV, so a minimum total energy of 1.022 MeV, twice the energy of a single electron or positron, is required.
1.4 Energy Deposition
Originally, single-event effects (SEEs) were primarily associated with neutrons, protons, or heavier ions. It is worth noting, however, that certain studies have shown that SEEs can also be triggered by electrons [33‐36] or muons [14‐17]. Nevertheless, for the purposes of our discussion, these particles will not be considered. SEEs primarily result from ionization, occurring directly with ions and protons, and indirectly in the case of neutrons due to nuclear reactions. While ions and protons can directly ionize the target material, they can also engage in nuclear reactions with the atomic nucleus. For heavy ions, the contribution from nuclear reactions is typically insignificant compared to direct ionization. However, for protons, both direct and indirect ionization processes are significant. Figure 1.9 illustrates the direct and indirect ionization processes for a heavy ion and a neutron, respectively.
Historically, the direct ionization of protons was not a major concern as they had limited capability to ionize matter directly. Consequently, only high-energy protons (\(E > 20\) MeV) were considered a threat to electronic systems due to their capacity to induce nuclear reactions in silicon and produce secondary ions with higher ionization potential. However, in deeply scaled technologies, low-energy protons (\(E < 3\) MeV) have been identified as significant contributors to soft error rates in space applications and beyond [37‐39].
The loss of energy is generally quantified by the concept of stopping power, S, also called specific energy loss, which is defined as the differential energy loss \(\mathrm {dE}\) divided by the differential path length \(\mathrm {dx}\):
$$\displaystyle \begin{aligned} S = -\frac{dE}{dx} \end{aligned} $$
(1.1)
where the minus sign is a convention that makes the stopping power a positive quantity (the energy loss \(\mathrm {dE}\) is a negative variation of energy). The stopping power unit is the megaelectronvolt per centimeter (\(\mbox{MeV}/\mbox{cm}\)). In the radiation effects community, the stopping power is generally divided by the mass density of the material, making the new quantity independent of the material’s phase. This new quantity, called mass stopping power, \(\mathrm {S}_{\mathrm {m}}\), is then:
$$\displaystyle \begin{aligned} S_{m} = \frac{S}{\rho } = -\frac{1}{\rho }\frac{dE}{dx} \end{aligned} $$
(1.2)
where \(\rho \) is the density of the target material. For electronics, the silicon density is used, \(\rho _{Si} = 2.32\:\mbox{g}\cdot \,\mbox{cm}^{-3}\). The mass stopping power unit is given in megaelectronvolts square centimeter per milligram (\(\mbox{MeV}\cdot \,\mbox{cm}^2/\mbox{mg}\)).
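As a quick illustration of this conversion, the following Python sketch turns a stopping power in \(\mbox{MeV}/\mbox{cm}\) into a mass stopping power in \(\mbox{MeV}\cdot \mbox{cm}^2/\mbox{mg}\) using the silicon density quoted above; the stopping power value in the example is an arbitrary placeholder, not a number from this chapter.

```python
# Minimal sketch: stopping power (MeV/cm) -> mass stopping power (MeV*cm^2/mg)
# using the silicon density quoted in the text (2.32 g/cm^3 = 2320 mg/cm^3).

RHO_SI_MG_CM3 = 2320.0  # silicon density in mg/cm^3

def mass_stopping_power(S_MeV_per_cm: float, rho_mg_per_cm3: float = RHO_SI_MG_CM3) -> float:
    """Divide the stopping power by the mass density of the target material."""
    return S_MeV_per_cm / rho_mg_per_cm3

# Example with an arbitrary (assumed) stopping power of 23.2 MeV/cm in silicon
print(f"S_m = {mass_stopping_power(23.2):.4f} MeV*cm^2/mg")  # -> 0.0100
```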
Another important quantity for charged particles is the linear energy transfer, or LET. While the stopping power measures the energy lost by the particle, the LET measures the energy absorbed by the matter. The difference comes from the energy lost as radiation, which is in fact not significant for ions. Through a misuse of language, the term LET is often used instead of stopping power, or even instead of mass stopping power.
The stopping power depends not only on the particle type but also on its energy and the target material where ionization takes place. The variation of mass stopping power as a function of the ion energy in silicon was calculated for different ions using the stopping and range of ions in matter (SRIM) code [40] as shown in Fig. 1.10.
The stopping power of a particle increases with energy until it reaches the Bragg peak, which represents the maximum value of stopping power. It is important to note that the Bragg peak for hydrogen (protons) is significantly lower compared to heavier ions like phosphorus. In general, the stopping power at the Bragg peak is higher for ions with a higher atomic number (Z). This is because the energy loss is primarily governed by the Coulomb effect, and at higher Z, the interaction between the particle and the medium is stronger. For energies above the Bragg peak, the stopping power decreases as the particle energy increases. It is interesting to observe that different ion species with different energies can yield the same stopping power value. This is because energy loss involves interactions with both electrons and nuclei of the medium. Therefore, the stopping power is typically considered as the sum of two contributions:
$$\displaystyle \begin{aligned} S = \left(\frac{dE}{dx}\right)_{electronic} + \left(\frac{dE}{dx}\right)_{nuclear} \end{aligned} $$
(1.3)
The first term represents the interaction of the particle with electrons. When sufficient energy is transferred, it leads to the generation of electron-hole pairs. This process is known as ionization, as an electron moves from the valence band to the conduction band. The second term accounts for the interaction of the particle with the nuclei of the medium. This interaction can be mediated by either the Coulomb force or the strong nuclear force. In the case of Coulomb interactions, the target nucleus can undergo vibration or recoil, leading to displacement damage. Alternatively, if the interaction is governed by the strong force, a nuclear reaction can occur, causing the nucleus to break up into fragments.
While neutrons, protons, and heavy ions interact with matter in distinct ways, they all ultimately result in the ionization of the material, either directly or indirectly. This ionization process can potentially trigger SEEs in electronic devices. Therefore, understanding the stopping power of these ions, both primary and secondary, is of prime importance for the assessment of their impact and the mitigation of the associated risks.
Another crucial quantity of interest for charged particles is the range, denoted as R. It is defined as the distance the particle travels before coming to rest. The range is related to the stopping power through the following relationship:
$$\displaystyle \begin{aligned} R = \int _{0}^{E_{init}} \frac{dE}{S(E)} \end{aligned} $$
(1.4)
where E\({ }_{init}\) is the initial energy of the particle. Particles with high stopping power experience significant energy loss, resulting in a short range of travel. This can be observed in Fig. 1.11, which presents examples of particle ranges calculated using SRIM. The figure illustrates how the range varies for different particles, highlighting the impact of their stopping power on the distance they can travel in a material.
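The range integral above can be evaluated numerically once the stopping power is known as a function of energy. The short Python sketch below illustrates the idea with a made-up power-law stopping power; in practice, tabulated values from a code such as SRIM would be used instead.

```python
# Illustrative numerical evaluation of the range R = integral of dE / S(E).
# The stopping-power function below is a hypothetical placeholder, not SRIM data.
import numpy as np

def stopping_power(E_MeV: np.ndarray) -> np.ndarray:
    """Toy S(E) in MeV/um, decreasing with energy (above the Bragg peak)."""
    return 0.5 * E_MeV**-0.6 + 0.01

def particle_range_um(E_init_MeV: float, n_steps: int = 10_000) -> float:
    """Approximate R = integral from ~0 to E_init of dE / S(E), in micrometers."""
    E = np.linspace(1e-3, E_init_MeV, n_steps)  # start slightly above 0
    return np.trapz(1.0 / stopping_power(E), E)

print(f"Toy range of a 10 MeV particle: {particle_range_um(10.0):.1f} um")
```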
The energy deposited in the semiconductor creates electron-hole pairs (ehp) along the nearly straight path of the particle. The minimum energy required to generate an electron-hole pair can be estimated from the bandgap energy \(E_{g}\) of the material using Eq. 1.5 [41]:
$$\displaystyle \begin{aligned} E_{ehp} = 2.73 \cdot E_{g} + 0.55\ \mbox{eV} \end{aligned} $$
(1.5)
For silicon-based devices, the silicon bandgap energy \(E_{g}\) is equal to \(1.11\) eV; therefore, based on Eq. 1.5, the average ionization energy \(E_{ehp}\) in silicon is approximately \(3.6\) \(\mbox{eV/ehp}\). Table 1.2 presents, for different semiconductor materials, the bandgap energy \(E_g\) and the corresponding ionization energy \(E_{ehp}\) calculated using Eq. 1.5 (a short numerical check of this relation is sketched after the table). The higher the bandgap energy of the material, the higher the energy required to ionize the matter and create electron-hole pairs. This explains why wide-bandgap electronics, such as SiC- and GaN-based devices, show better radiation performance overall.
Table 1.2
Bandgap energy \(E_g\) and ionization energy \(E_{ehp}\) for different semiconductor materials

Material | Symbol | Bandgap energy \(E_g\) (eV) | Ionization energy \(E_{ehp}\) (eV)
Germanium | Ge | 0.66 | 2.4
Silicon | Si | 1.11 | 3.6
Indium phosphide | InP | 1.34 | 4.2
Gallium arsenide | GaAs | 1.43 | 4.5
Gallium phosphide | GaP | 2.26 | 6.7
Silicon carbide | 3C-SiC | 2.35 | 7.0
Silicon carbide | 6H-SiC | 3.08 | 9.0
Silicon carbide | 4H-SiC | 3.28 | 9.5
Gallium nitride | GaN | 3.4 | 9.8
Aluminum nitride | AlN | 6.24 | 17.6
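As referenced above, the following Python sketch applies the Eq. 1.5 relation to a few of the bandgap values in Table 1.2; the coefficients used are an empirical (Klein-type) fit consistent with the tabulated values and should be read as an approximation.

```python
# Sketch of the Eq. 1.5 relation between bandgap energy and the energy needed
# to create one electron-hole pair (both in eV).

def ionization_energy(E_g_eV: float) -> float:
    """E_ehp = 2.73 * E_g + 0.55 (eV per electron-hole pair)."""
    return 2.73 * E_g_eV + 0.55

for material, Eg in [("Si", 1.11), ("GaAs", 1.43), ("4H-SiC", 3.28), ("GaN", 3.4)]:
    print(f"{material:7s} E_g = {Eg:.2f} eV -> E_ehp = {ionization_energy(Eg):.1f} eV")
```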
Based on the assumption that the stopping power is constant, as a consequence of the small ionization path length l within the sensitive volume of a device, the deposited energy can be converted into a deposited charge \(Q_{D}\) following Eq. 1.6:
$$\displaystyle \begin{aligned} Q_{D} = \frac{q \cdot S \cdot \rho \cdot l}{E_{ehp}} \end{aligned} $$
(1.6)
where q is the elementary charge and S is the so-called surface mass stopping power, which refers to the mass stopping power value when the particle enters the sensitive volume, i.e., after traveling and losing energy through all the back-end-of-line (BEOL) layers. To induce an SEE, the deposited charge \(Q_{D}\) needs to be collected by a sensitive node of the circuit and to exceed the minimum necessary charge, known as the critical charge \(Q_{crit}\). The charge collection mechanisms are briefly described in the following section.
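As a numerical illustration of this energy-to-charge conversion, the Python sketch below evaluates Eq. 1.6 for silicon; the stopping power and path length values are assumptions chosen only to show the order of magnitude.

```python
# Minimal sketch of Eq. 1.6: deposited charge from a surface mass stopping
# power S (MeV*cm^2/mg) and a path length l in silicon. Input values are
# illustrative assumptions, not data from this chapter.

Q_E = 1.6e-19      # elementary charge (C)
RHO_SI = 2320.0    # silicon density (mg/cm^3)
E_EHP_SI = 3.6     # mean energy per electron-hole pair in silicon (eV)

def deposited_charge_fC(S_MeV_cm2_mg: float, path_um: float) -> float:
    """Q_D = q * S * rho * l / E_ehp, returned in femtocoulombs."""
    path_cm = path_um * 1e-4
    deposited_energy_eV = S_MeV_cm2_mg * RHO_SI * path_cm * 1e6  # MeV -> eV
    n_pairs = deposited_energy_eV / E_EHP_SI
    return n_pairs * Q_E * 1e15

# Assumed example: S = 1 MeV*cm^2/mg over a 1 um sensitive depth
print(f"Q_D = {deposited_charge_fC(1.0, 1.0):.1f} fC")  # roughly 10 fC
```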
Another quite relevant aspect when studying radiation effects on electronics is the temperature dependence of many mechanisms at the device and circuit levels. For instance, the bandgap energy of a semiconductor material is temperature dependent and can be described using Varshni’s equation [42]:
$$\displaystyle \begin{aligned} E_{g}[T] = E_{g}[0] - \frac{\alpha T^{2}}{T + \beta } \end{aligned} $$
(1.7)
where \(E_g[0]\) is the bandgap energy at a temperature T of 0 K, while \(\alpha \) and \(\beta \) are parameters fitted to experimental data. These three parameters are presented in Table 1.3 for four common semiconductor materials. In Fig. 1.12, the bandgap and ionization energies of silicon and gallium arsenide (GaAs), calculated with Eq. 1.7, are shown over a wide range of temperatures measured in K. As mentioned previously, a lower ionization energy is observed in Si due to its lower bandgap energy compared to GaAs. As the temperature increases, both materials show a reduction in ionization energy. Although the decrease is small, it can increase the number of electron-hole pairs that a particle generates and thus the total deposited charge. Other mechanisms are also affected by temperature, such as carrier mobility and resistance, as will be shown in the following chapters.
Table 1.3
Fitting parameters for Varshni’s equation to consider the temperature dependence of the bandgap energy of Ge, Si, GaAs, and 6H-SiC

Material | Symbol | \(E_g[0]\) (eV) | \(\alpha \) (\(10^{-4}\) eV/K) | \(\beta \) (K)
Germanium | Ge | 0.7412 | 4.561 | 210
Silicon | Si | 1.1557 | 7.021 | 1108
Gallium arsenide | GaAs | 1.5216 | 8.871 | 572
Silicon carbide | 6H-SiC | 3.024 | \(-\)0.3055 | \(-\)311
Besides depending on the material, the ionization energy \(E_{ehp}\) is also temperature dependent. As the temperature increases, the bandgap energy is reduced and, therefore, the ionization energy is also reduced.
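To make the temperature dependence concrete, the Python sketch below evaluates Varshni’s equation with the Si and GaAs parameters of Table 1.3 and then estimates the ionization energy with the Eq. 1.5 relation. Interpreting \(\alpha \) in units of \(10^{-4}\) eV/K is an assumption consistent with the original Varshni fits for these two materials.

```python
# Sketch of Varshni's equation (Eq. 1.7) for Si and GaAs, followed by an
# estimate of the ionization energy using the empirical Eq. 1.5 relation.
# alpha is interpreted in units of 1e-4 eV/K (assumption).

def bandgap_eV(T_K: float, Eg0_eV: float, alpha_1e4: float, beta_K: float) -> float:
    """E_g(T) = E_g(0) - alpha * T^2 / (T + beta)."""
    return Eg0_eV - (alpha_1e4 * 1e-4) * T_K**2 / (T_K + beta_K)

def ionization_energy(Eg_eV: float) -> float:
    """Empirical E_ehp = 2.73 * E_g + 0.55 (eV)."""
    return 2.73 * Eg_eV + 0.55

params = {"Si": (1.1557, 7.021, 1108.0), "GaAs": (1.5216, 8.871, 572.0)}
for name, (Eg0, a, b) in params.items():
    for T in (200.0, 300.0, 400.0):
        Eg = bandgap_eV(T, Eg0, a, b)
        print(f"{name:4s} T = {T:3.0f} K  E_g = {Eg:.3f} eV  E_ehp = {ionization_energy(Eg):.2f} eV")
```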
1.5 Charge Collection
After the energy deposition in the semiconductor material, the generated carriers are transported and collected by the junctions of the device. In the context of SEE, there are two main transport mechanisms that play a crucial role: the drift and the diffusion. Figure 1.13 illustrates the ionization process of an ion in a reverse-biased p-n junction and the subsequent carrier transport mechanisms.
Drift is a transport mechanism that is primarily governed by the electric field existing within the p-n junctions of sensitive electronic devices. When a particle directly impacts the sensitive collecting area of the circuit, the resulting carriers experience rapid collection due to the influence of the high electric field present in the reverse-biased p-n junction. In semiconductor materials, the drift current \(J_{drift}\) can be described as a function of the applied electrical field \(\mathscr {E}\) and the conductance G, as expressed by the equation:
$$\displaystyle \begin{aligned} J_{drift} = G \cdot \mathscr{E} \end{aligned} $$
(1.8)
Finally, knowing that the conductance G is proportional to the carrier mobility \(\mu \) and to the carrier concentrations n and p, the total drift current \(J_{drift}\) can be described as [44]:
$$\displaystyle \begin{aligned} J_{drift} = q\left(n\mu _n + p\mu _p\right)\mathscr{E} \end{aligned} $$
(1.9)
where q is the elementary charge (\(q=1.6\times 10^{-19}\ \mbox{C}\)).
On the other hand, diffusion is a carrier transport mechanism governed by the carrier concentration gradients. Whenever there is a gradient of carriers, they are transported from regions of high concentration to regions of low concentration until a state of uniformity is reached [44]. The diffusion current J can be obtained from Fick’s law:
$$\displaystyle \begin{aligned} J_{diff} = qD_n\frac{dn}{dx} - qD_p\frac{dp}{dx} \end{aligned} $$
(1.10)
where \(D_n\) and \(D_p\) are the diffusion coefficients and \(\mbox{dn}/\mbox{dx}\) and \(\mbox{dp}/\mbox{dx}\) are the carrier concentration gradients for electrons and holes, respectively. As diffusion is governed by the random thermal motion and scattering of carriers, it is directly related to the carrier mobility \(\mu \), and the diffusion coefficients can therefore be described as [44]:
$$\displaystyle \begin{aligned} D_n = \frac{kT}{q}\mu _n, \qquad D_p = \frac{kT}{q}\mu _p \end{aligned} $$
where, at room temperature (300 K), \((kT/q)\) is equal to \(0.0259\) V.
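A minimal numerical sketch of this relation, using typical room-temperature mobility values for silicon (assumed order-of-magnitude numbers, not values from this chapter), is given below.

```python
# Einstein relation sketch: D = (kT/q) * mu at room temperature.

KT_Q_300K = 0.0259  # thermal voltage kT/q at 300 K (V)

def diffusion_coefficient(mu_cm2_per_Vs: float, kT_q_V: float = KT_Q_300K) -> float:
    """Return D = (kT/q) * mu in cm^2/s."""
    return kT_q_V * mu_cm2_per_Vs

mu_n, mu_p = 1400.0, 450.0  # assumed electron / hole mobilities in Si (cm^2/V/s)
print(f"D_n = {diffusion_coefficient(mu_n):.1f} cm^2/s")  # about 36 cm^2/s
print(f"D_p = {diffusion_coefficient(mu_p):.1f} cm^2/s")  # about 12 cm^2/s
```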
Considering the one-dimensional case, the total current is obtained by combining the drift component from Eq. 1.9 with the diffusion component from Eqs. 1.10 and 1.14, while also accounting for the recombination rate R of electron-hole pairs.
Solving the transport equations is a long and complex process that requires approximations and the use of numerical tools, known as technology computer-aided design (TCAD) tools.
Another very interesting effect occurs when the particle strikes in or near the p-n junction, as shown in Fig. 1.13b. The depletion region, also known as the space charge region, is extended, and therefore the drift collection efficiency is also increased. This phenomenon is known as the funneling effect; together with drift, it is a very fast process due to the presence of the high electric field in the junction. Following these processes, the remaining carriers are either collected by diffusion or lost to recombination.
Understanding these intricate processes is crucial for comprehending the behavior of semiconductor devices, particularly in relation to their response to particle interactions. By examining the interplay between drift, funneling effect, diffusion, and recombination, we can gain valuable insights into the overall performance and reliability of these devices.
1.5.1 Charge Sharing and Pulse Quenching Effect
As technology advances and transistors are placed in closer proximity to each other, the critical charge required to trigger an SEE is reduced. Consequently, a single incident particle is able to induce sufficient charge collection at multiple neighboring electrodes. This phenomenon is known as the charge sharing effect. The work developed by Amusan et al. [45] provides an analysis of the charge sharing effect in adjacent N-channel metal-oxide semiconductor (NMOS) and P-channel metal-oxide semiconductor (PMOS) transistors based on TCAD simulations. The 130 nm CMOS devices, based on the International Business Machines Corporation (IBM) twin-well technology, were characterized using advanced simulation tools such as Synopsys DEVISE and DESSIS. In the simulation setup, a particle hit was precisely targeted at the center of the drain region, perpendicular to the surface of the device structure. It is worth noting that, for the purpose of these simulations, the influence of angular effects was omitted from the analysis. The authors used two particular notations: the active device, which refers to the device directly struck by the particle and actively collecting the carriers, and the passive device, which represents the device not directly impacted by the particle but passively collecting the diffused carriers. In Fig. 1.14, the collected charge is shown for the active and passive devices when PMOS and NMOS transistors are used.
The results of this study clearly demonstrate that passive PMOS devices are capable of collecting a greater amount of charge compared to passive NMOS devices. Specifically, the passive PMOS device collected approximately 40% of the charge collected by the active PMOS device, while the passive NMOS transistor collected less than 25% of the charge. The authors attribute this discrepancy to the disparity in carrier diffusion coefficients between electrons and holes, as well as the bipolar amplification effect that enhances charge collection in PMOS devices [45, 46].
While the charge sharing mechanism is responsible for the increased SEE sensitivity in deeply scaled technologies, it has also been shown to reduce the pulse width of SETs in combinational cells [47, 48]. Because the circuit delay and the diffusion process have similar time constants, the radiation-induced transient is able to activate charge collection at electrodes of the following circuit stage in such a way that the resulting transient is shortened (i.e., quenched). This phenomenon is referred to as the pulse quenching effect (PQE). To observe the PQE in a circuit, an inverting relationship between logic stages is required. Both charge sharing and the pulse quenching effect are intricate and critical mechanisms that heavily depend on the specific device technology and design strategies employed. Therefore, acquiring a comprehensive understanding of their impact is of utmost importance for accurately assessing the sensitivity of modern electronic components to single-event effects.
1.6 Summary
In this chapter, we explored the fundamental concepts surrounding the study of radiation effects in electronics. We began by unraveling the complex dynamics and composition of the space and atmospheric radiation environments. At the heart of this cosmic interplay is the Sun, which acts as the primary modulator of solar particle radiation and galactic cosmic rays. For missions focused on Earth’s vicinity, it becomes crucial to consider the impact of the Van Allen radiation belts on on-board electronic systems. Furthermore, in low-orbit and atmospheric applications such as satellites and aviation, the South Atlantic Anomaly region exhibits a heightened proton flux, necessitating careful attention.
Depending on the type of incident radiation, direct and indirect ionization processes can occur within electronic components, resulting in energy deposition and charge collection by critical device electrodes. In advanced technology nodes, a single particle strike has the potential to impact multiple devices within a chip, resulting in the diffusion of charges across multiple critical nodes—a phenomenon known as the charge sharing effect. Looking ahead, the subsequent chapter will delve into the radiation-induced effects at the circuit level, specifically focusing on the well-known single-event effects (SEEs).
Highlights
Three main natural sources of radiation should be considered in space and atmospheric applications: solar energetic particles (SEPs), galactic cosmic rays (GCRs), and geomagnetically trapped particle radiation (the Van Allen radiation belts).
The Sun’s activity is the main modulator of the radiation environment near Earth, but it also impacts ground-level applications, as demonstrated by the Carrington event.
A misalignment between the Earth’s geomagnetic and rotational axes generates a weakness in the geomagnetic field over South America, leading to higher particle fluxes in this region known as the South Atlantic Anomaly (SAA).
With technology scaling, not only neutrons but also muons are considered a threat to electronic system reliability.
In an accelerator environment, there are also three main sources of radiation: beam interaction with residual gas, beam interaction with machine elements, and debris from collisions in physics experiments.
The high-energy hadrons (HEH) and neutrons are the most abundant and relevant particles in the LHC environment in terms of inducing electronic failures.
An energetic particle can deposit energy either through direct ionization (ions, protons, etc.) or by indirect ionization (protons and neutrons).
Two main mechanisms are responsible for charge collection in a device: drift due to the electric field present in the p-n junctions and diffusion due to the carrier concentration gradients.
With the miniaturization of transistor technology, a single particle is able to deposit energy in multiple devices; therefore, the charge sharing effect is prominent in highly scaled technologies.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.