Open Access 2023 | Original Paper | Book Chapter

Intelligent Edge Biomedical Sensors in the Internet of Things (IoT) Era

Authors: Elisabetta De Giovanni, Farnaz Forooghifar, Gregoire Surrel, Tomas Teijeiro, Miguel Peon, Amir Aminifar, David Atienza Alonso

Published in: Emerging Computing: From Devices to Systems

Publisher: Springer Nature Singapore


Abstract

The design of reliable wearable systems for real-time and long-term monitoring presents major challenges, although such systems are poised as the next frontier of innovation in the context of the Internet of Things (IoT) to provide personalized healthcare. This new generation of biomedical sensors is meant to be interconnected in ways that improve our lives and transform the medical industry. Therefore, they offer an excellent opportunity to integrate the next generation of artificial intelligence (AI) based techniques in medical devices. However, several key challenges remain in achieving this potential due to the inherent resource-constrained nature of wearable systems for Big Data medical applications, which need to detect pathologies in real time. Concretely, in this chapter, we discuss the opportunities for edge computing and edge AI in next-generation intelligent biomedical sensors in the IoT era and the key challenges in wearable systems design for pathology detection and health/activity monitoring in the context of IoT technologies. First, we introduce the notion of self-awareness toward the conception of the next-generation intelligent edge biomedical sensors to trade off machine-learning performance against system lifetime, according to the application requirements of the medical monitoring systems. Subsequently, we present the implications of personalization and multi-parametric sensing in the context of the system-level architecture of intelligent edge biomedical sensors. Thus, they can adapt to the real world, as living organisms do, to operate efficiently according to the target application requirements and the available energy at any moment in time. Then, we discuss the impacts of self-awareness and low-power requirements at the circuit level for sampling, through a paradigm shift in which the converter reacts to the input signal itself. Finally, we conclude by highlighting that the techniques discussed in this chapter may be applied jointly to design the next-generation intelligent biomedical sensors and systems in the IoT era.

1 Introduction to Self-Aware and Adaptive Internet of Things

Remote health monitoring has attracted a lot of attention over the past decades because it provides the opportunity for early detection and prediction of pathological health conditions. This early detection and prediction not only improves the quality of life of patients but also significantly reduces the load on their family members. Moreover, it reduces the socioeconomic burden caused by the inability of patients to work as a consequence of their health conditions. Wearable and Internet of Things (IoT) technologies offer a promising solution for pervasive health monitoring by relaxing the constraints on time and place.
Today, wearable systems face fundamental barriers in terms of battery lifetime and Quality of Service (QoS). Indeed, the main challenge in wearable systems is increasing their battery lifetime while maintaining the reliability of the system. A recently proposed concept for overcoming this challenge is self-awareness. Self-awareness offers a promising solution by equipping the system with two key capabilities: learning and reasoning. In the learning phase, the system gains knowledge about itself and its environment; in the reasoning phase, this information is used to make decisions and act so that the pre-defined goals of the system are fulfilled (Lewis et al. 2016). Thus, the main goal of self-awareness is to give the system the ability to monitor its own performance with respect to internal and environmental changes, adapt to these changes, and, as a result, improve autonomously (Lewis et al. 2011). These three steps, repeated in a loop as shown in Fig. 1, keep the system aware of changes in its situation and help it continuously move toward its goals.
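To make this loop concrete, the following sketch shows one possible way to code the observe-reason-act cycle of a self-aware wearable node. The class, its method names, and the numbers are illustrative assumptions, not an implementation from the literature.

```python
class SelfAwareNode:
    """Illustrative observe-reason-act loop of a self-aware wearable node."""

    def __init__(self, budget_j, target_hours):
        self.budget_j = budget_j          # remaining energy budget (J), hypothetical
        self.target_hours = target_hours  # system goal: required battery lifetime
        self.mode = "complex"             # start with the high-quality model

    def observe(self, energy_spent_j, hours_elapsed):
        # Learning: measure the node's own average energy draw so far.
        self.budget_j -= energy_spent_j
        return energy_spent_j / max(hours_elapsed, 1e-9)   # J per hour

    def reason(self, avg_draw_j_per_h, hours_elapsed):
        # Reasoning: predict whether the lifetime goal can still be met.
        remaining_h = self.target_hours - hours_elapsed
        return avg_draw_j_per_h * remaining_h <= self.budget_j

    def act(self, goal_reachable):
        # Adaptation: fall back to the cheaper model when the goal is at risk.
        self.mode = "complex" if goal_reachable else "simple"
```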
Self-awareness can be applied to many different categories of applications in systems equipped with control mechanisms and units, and has three major properties: self-reflection, self-prediction, and self-adaptation (Jantsch et al. 2017). According to these properties, each self-aware system is aware of its architecture, execution environment, operational goals, and their corresponding dynamic changes during operation. It is also able to predict the effect of dynamic changes and proactively adapt itself as the environment evolves, in order to ensure that quality-of-service or energy requirements are satisfied.
The notion of self-awareness can be adopted in various design aspects of IoT systems to cope with highly dynamic changes in the environment. In the system-on-chip domain, self-awareness helps deal with the complexity coming from the system itself, from the environment, and from exceedingly diverse goals and objectives (Hoffmann et al. 2012; Jantsch et al. 2017). In the self-triggered control domain, the controllers execute only when the expected performance is about to be violated (Filieri et al. 2014; Aminifar et al. 2016; Andrén et al. 2017). Finally, in the remote health monitoring domain, where the quality of the results is significantly affected by the condition of the patient (such as age and medical history) as well as the environment (such as temperature and ...), self-awareness adapts the system to the situation to guarantee the quality of the results (Anzanpour et al. 2017; Masinelli et al. 2020).
The target applications in this chapter are wearable monitoring systems used for pathology detection and health and activity monitoring. We discuss the self-aware techniques that can be applied in the different design domains of such systems with the goal of improving the performance and/or increasing the battery lifetime of wearable monitoring systems. We start with the application-level aspects of the system, covering machine learning and artificial intelligence. Afterwards, we move to the self-aware system architecture and platform, taking into account personalization and multi-sensor systems. Finally, we discuss self-aware signal acquisition and sampling techniques at the circuit level.
  • Figure 2 shows the overview of a typical wearable system used for health monitoring. The main phases of these systems are the acquisition and preprocessing of the bio-signals, the extraction of the corresponding features, and the machine learning module, which is developed in the training phase using data previously gathered from patients and is then used, in the test phase, to detect the pathology on new data acquired from patients. At the application level, which corresponds to the learning part of the monitoring system, we will see the contribution of self-awareness in distributing the monitoring over different levels of learning models with different performance and energy consumption profiles. We will also see the efficient distribution of monitoring over higher computational layers, i.e., the fog and cloud layers, using the concept of self-awareness.
  • One of the main goals of self-awareness is energy efficiency, to extend the battery lifetime of wearable systems. Between hardware-level and application-level optimizations, power management at the platform layer is one of the means to achieve this goal. In a single-core platform, this translates into alternating low-energy modes with the active mode depending on the application duty cycle. When parallelization is required, a platform with multiple cores can handle different states of clock-gating and power-gating of the different blocks of cores. We will use two examples, a single-core and a multi-core platform, to illustrate power management and when it is worth choosing one or the other given the application duty cycle.
  • Self-awareness of a system can also happen at the level of signal acquisition. Changing the paradigm used for sampling, a self-aware sampling strategy reacts to the input signal itself, changing its sampling constraints accordingly. It then becomes possible to lower the energy spent on data acquisition without lowering the performance of the target application. We will see how bio-signals can benefit from this novel approach to analog-to-digital conversion.

2 Self-Aware Machine Learning and Artificial Intelligence

In this section, we discuss embedding the notion of self-awareness in machine learning and artificial intelligence algorithms to either improve the performance of the target systems or reduce their energy consumption. The self-aware learning algorithm is first described in general terms. We then analyze this method in an epileptic seizure detection system (Forooghifar et al. 2019) as a real case study to give a better and more detailed overview of the technique. We also present simple results for this specific application to better observe the positive effect of self-awareness on the system.
To utilize the notion of self-awareness in systems that implement machine learning methods, we take advantage of the fact that, to classify the majority of inputs, we do not need a complex model: a simple classifier gives us reliable results. Thus, we can define different levels of our machine learning algorithm with different complexities, but only switch to the complex classifier when it is necessary for obtaining a reliable result, using the notion of self-awareness. We motivate this approach with a small example, and then discuss more elaborate variations of the same technique in the form of distributed machine learning. Using this technique, higher-level infrastructures such as the fog and the cloud assist the wearable system by performing the complex classifications.

2.1 Motivational Example

To illustrate the main idea of our approach, we start by using a small example. Without loss of generality, and for the simplicity of the presentation, we herein consider two sets of features, calculated in the feature extraction phase of Fig. 2, for a binary classification problem, which are shown in Fig. 3. We assume that the computational complexity of the feature set that is along the vertical axis (feature set 2) is higher than the computational complexity of the one along the horizontal axis (feature set 1). For instance, feature set 1 may contain time-domain features of the dataset, whereas feature set 2 may contain frequency-domain features. Time-domain features have a complexity order of \(\mathcal {O}(n)\), where n is the signal’s length, whereas the frequency-domain features have a complexity order of \(\mathcal {O}(n\log _{2}n)\), as the calculation of frequency-domain features requires additional signal transformations, such as the Fourier transform.
In this example, we consider 25 dot-shaped samples of class 1 and another 25 cross-shaped samples of class 2. For instance, in the case of a pathology, dot-shaped samples belong to people suffering from the pathology, whereas cross-shaped ones belong to healthy subjects. Let us suppose that \(n =1024\). Depending on the confidence level, we can build three different classifiers.
The first classifier is shown by the dashed line in Fig. 3. This classifier uses only feature set 1 to separate the two classes. As can be observed in Fig. 3, if we use this classifier, some samples within the shaded gray area will be misclassified. The accuracy and the expected computational complexity of this classifier are: \(Accuracy_{\text {dashed}} = 88\%\), \(Complexity_{\text {dashed}} = n = 1024\). Therefore, the expected computational complexity of this classifier is low, but its classification accuracy is below that of the optimal solution.
Another alternative is to combine both feature sets. This is done using a second classifier shown by the solid line in Fig. 3. The accuracy and the expected computational complexity of this classifier are \(Accuracy_{\text {solid}} = 100\%\), \(Complexity_{\text {solid}} = n\log _{2}n = 10240\), respectively. This classifier outperforms the first classifier in terms of classification accuracy, but its computational complexity is ten times higher.
Finally, the third classifier uses the combination of the two previous ones. In order to obtain an optimal trade-off between accuracy and complexity, based on the confidence level that we want to have, we can either use the classifier that uses feature set 1 (the dashed line) or the full one that uses both sets of features (the solid line). The main goal of this scheme is to reduce the classifier complexity, in terms of the number of features used for the final classification, while maintaining a high classification accuracy. As shown in Fig. 3, the first classifier cannot make confident decisions for samples that happen to be in the shaded gray area. For these samples, the second classifier, i.e., the classifier that uses all the available features, should be used if we target a medical application that truly requires a high confidence level. Hence, once we identify the region in which the first classifier does not provide high-confidence results, the next step is to check for each testing example whether it falls into this shaded area and, if that is the case, to use the second classifier. Otherwise, we use the first classifier, i.e., the classifier with the reduced number of features. For this particular example, let us suppose that we have found the region in which the first classifier does not provide high-confidence results, shown in Fig. 3. If we use the two-level classifier scheme that we propose in this work, the classification accuracy is \(Accuracy_{\text {two-level}} = 100\%\), whereas the expected classification complexity is calculated as:
$$E(C) = \frac{30}{50} \cdot n + \frac{20}{50} \cdot n \cdot \log _{2}n.$$
For \(n=1024\), the expected classification complexity is \(E_{our\_approach}(C) = 4710.4\), whereas with the classical approach that uses all available features (shown by the solid line in Fig. 3) we obtain \(E_{classical}(C) = \frac{50}{50} \cdot n \cdot \log _{2}n = 10240\). Hence, for this motivational example and \(n=2^{10}\), our approach reduces the classification complexity by a factor of 2.
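The arithmetic of this example can be checked with a few lines of Python; the 30/50 and 20/50 proportions and n = 1024 are taken directly from the text above.

```python
import math

n = 1024                         # window length used in the example
c_simple = n                     # time-domain features: O(n)
c_complex = n * math.log2(n)     # frequency-domain features: O(n log2 n)

# 30 of the 50 samples are handled confidently by the first classifier,
# the remaining 20 fall in the grey area and need the full feature set.
e_two_level = (30 / 50) * c_simple + (20 / 50) * c_complex
e_classical = c_complex

print(e_two_level)                 # 4710.4
print(e_classical)                 # 10240.0
print(e_classical / e_two_level)   # ~2.17x complexity reduction
```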
In summary, for our motivational example shown in Fig. 3, we have presented three different classifiers. The first classifier (the dashed line) has a low computational complexity, but its performance is much lower than that of the second one (the solid line). On the other hand, the second classifier has a high classification accuracy, but its computational complexity is much higher than that of the first-level one. Our proposed (third) classifier combines the two previous classifiers to achieve a high classification accuracy with a low computational complexity.

2.2 Centralized Two-Level Learning

Embedding the two-level classification technique in the wearable monitoring system results in the system shown in Fig. 4. To utilize the notion of self-awareness, we define the concept of 'confidence' for the simple model. The model is considered confident if it can make a reliable decision about its current input. If the model can predict the result correctly, its confidence is '1'; otherwise, it is '0'. Based on this labeling, a separate model is developed in the training phase to predict the confidence of the simple model. In the test phase of the system, if the confidence of the simple model is calculated as '1', the system uses this model to decrease the complexity and energy consumption of the system while maintaining the performance. Otherwise, if the simple model is not confident, the system switches to the complex model and trades off energy for performance.
In our epileptic seizure detection case study, we define a simple model, which is trained with a small number of simple features, and a complex model, which uses more complex features. In fact, the entire set of features is used for seizure detection only when a confident classification based on the set of simple features is not possible. Then, we take advantage of the multi-mode execution possibilities of the platform, in a self-aware fashion, so that the energy consumption is reduced while the detection performance remains at an acceptable level for medical use. If we consider the energy consumption of the confidence calculation, the simple classification, and the complex classification as \(E_C\), \(E_1\), and \(E_2\), respectively, the total consumed energy for our self-aware classification technique is:
$$\begin{aligned} \begin{aligned} E_{execution} =E_C + P_1\cdot E_1 + (1-P_1)\cdot E_2, \end{aligned} \end{aligned}$$
(1)
where \(P_1\) is the probability of invoking the simple model (Forooghifar et al. 2019). As a result, as the probability of choosing the simple classifier, which mainly depends on the application, increases, the energy consumption of the system decreases.
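A minimal sketch of this centralized two-level scheme is given below; the three models are placeholder callables (any binary classifier and confidence predictor could be plugged in), and the energy numbers in the usage example are made up for illustration.

```python
def two_level_classify(x, confidence_model, simple_model, complex_model):
    """Use the simple model only when it is predicted to be confident."""
    if confidence_model(x):        # True when the simple model is deemed reliable
        return simple_model(x)     # cheap path: simple features only
    return complex_model(x)        # expensive path: full feature set

def expected_energy(e_conf, e_simple, e_complex, p_simple):
    """Eq. (1): E_execution = E_C + P1*E1 + (1 - P1)*E2."""
    return e_conf + p_simple * e_simple + (1 - p_simple) * e_complex

# The more often the simple model suffices (higher P1), the lower the energy.
print(expected_energy(e_conf=0.1, e_simple=1.0, e_complex=10.0, p_simple=0.8))  # 2.9
print(expected_energy(e_conf=0.1, e_simple=1.0, e_complex=10.0, p_simple=0.5))  # 5.6
```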

2.3 Decentralized Multi-Level Learning

Due to the limited computational resources of wearable devices, migrating complex and energy-hungry tasks to higher-level infrastructures that can provide more computational resources is crucial (Forooghifar et al. 2019). Different computation infrastructures, including the fog (personal devices such as cellphones and smart watches) and the cloud, are available for interaction with wearable devices, as shown in Fig. 5. Deciding whether to communicate with higher layers depends on the trade-off between communication and computation costs, in order to reduce the overall energy consumption of wearable devices and improve their battery lifetime.
In this task distribution over higher computation layers via communication, self-awareness can provide the information needed to determine whether this communication contributes to a reduction of the total energy. We consider the same two-level classification technique, where the complex model is implemented on the fog or the cloud. Whenever the simple model is confident, we execute the classification on the device. Otherwise, based on the communication cost, we choose whether to perform the classification on the wearable device or to distribute it to the fog/cloud.
In the epileptic seizure detection case study, the simple feature set is used on the wearable device and the complex features are implemented on the fog/cloud. In the latency formulation for this system, the first two terms are the latency of calculating the confidence (\(L_C\)) and of using the simple classifier (\(P_1\cdot L_1\)), respectively. In addition, the latency of task distribution to the fog/cloud consists of two elements: the latency of communication with the fog/cloud (\(L_{1\rightarrow 2}\)) and the latency of classification, which is the latency of the complex classification (\(L_2\)) scaled by the speed-up factor of the fog/cloud (\(\gamma _{2}\)). As a result, the execution latency of this system is:
$$\begin{aligned} L_{execution}&= L_C + P_1\cdot L_1 + (1-P_1)\cdot (L_{1\rightarrow 2} + \gamma _{2} \cdot L_2), \end{aligned}$$
(2)
We estimate the energy consumption of the wearable system:
$$\begin{aligned} E_{execution}&= E_C + P_1\cdot E_1 + (1-P_1)\cdot E_{1\rightarrow 2} , \end{aligned}$$
(3)
where \(E_C\), \(E_1\), and \(E_{1\rightarrow 2}\) are the energy consumption of the confidence calculation, the simple classification, and the communication with the fog/cloud, respectively.
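Equations (2) and (3) translate directly into the following helper functions; the offload rule at the end is an illustrative simplification (it compares only energies), not the full decision policy of the cited work.

```python
def execution_latency(l_conf, l_simple, l_offload, l_complex, gamma, p_simple):
    """Eq. (2): L = L_C + P1*L1 + (1 - P1)*(L_{1->2} + gamma*L2)."""
    return (l_conf + p_simple * l_simple
            + (1 - p_simple) * (l_offload + gamma * l_complex))

def execution_energy(e_conf, e_simple, e_offload, p_simple):
    """Eq. (3): E = E_C + P1*E1 + (1 - P1)*E_{1->2}."""
    return e_conf + p_simple * e_simple + (1 - p_simple) * e_offload

def worth_offloading(e_complex_local, e_offload):
    # Offload the complex classification only if transmitting the data costs
    # less energy than computing the complex model locally.
    return e_offload < e_complex_local
```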

2.4 Case Study: Epileptic Seizure Detection

In this part, we present some simple experimental results of applying self-awareness to the epileptic seizure detection system. Table 1 compares the performance, latency, and energy consumption of the centralized and decentralized epileptic seizure detection systems. We observe that using the self-aware classifier improves the detection performance by 5.67% compared to the simple classifier, which is only 1.7% less than the quality of the complex classification. At the same time, the energy consumption of the proposed classifier is only 39.4% of that of the complex classifier.
Table 1 Summarizing the trade-offs between centralized and decentralized systems with and without applying the self-aware technique (Forooghifar et al. 2019)

Scenario | Performance (%) | Latency (ms) | Energy \(\times 10^{-7}\)
Complex classifier | 82.53 | 3270.80 | 13.11
Simple classifier | 75.16 | 554.00 | 1.18
Self-aware classifier | 80.83 | 1287.54 | 4.41
\(E\rightarrow F\) | 80.83 | 1420.24 | 3.65
\(E\rightarrow C\) | 80.83 | \(1.06 \times 10^{9}\) | 2369.84
According to this table, among the presented solutions, the most energy-efficient choice is to offload the computationally complex tasks to the fog (\(E\rightarrow F\)). This solution requires the lowest energy (3.65 mJ), and the latency overhead is only approximately 10.4% of the entire end-to-end latency. In our case study, the communication with the cloud engines is only used to notify the hospital in case of an emergency, due to the limited bandwidth and the major energy overhead of transmission via this protocol.
In conclusion, in this section, we have presented how to introduce the notion of self-awareness into the machine learning module of the wearable systems as a novel approach to reduce their energy consumption, while guaranteeing the quality of service of the system. We considered an epileptic seizure detection system as our real-life case study and validated our approach. Overall, using different levels of the classification based on the demand of the system and application is a promising self-aware technique to reach system’s goals in the application level.

3 Self-Aware System Architecture and Platform

Energy efficiency is an important factor to take into account in any wearable sensor design to ensure remote long-term health monitoring (Sinha et al. 2000). To achieve accurate inference with minimal power consumption, wearable sensor nodes (WSNs) have evolved (Guk et al. 2019) from single-core systems (Rincon et al. 2012; Surrel et al. 2018) into ultra-low power (ULP) (Konijnenburg et al. 2019) and multi-core parallel computing platforms (Conti et al. 2016; Duch et al. 2017; Pullini et al. 2019). These modern ULP platforms combine several techniques to achieve high computing performance when required, while reducing their overall energy consumption. Moreover, they can leverage information about their working scenario, which varies for each concrete patient and situation, to adjust their use of internal resources to fulfill the required tasks with the minimum use of energy.
In this section, we describe two different paradigms to reduce energy consumption: platform-aware design and the emerging patient-aware design.

3.1 Platform-Aware Application Design

When designing a biomedical application for remote health monitoring we find two broad groups of platforms: single- and multi-core. Whereas single-core platforms were traditionally simpler and cheaper, modern multi-core platforms can improve the execution of inherently parallel computations, such as multi-lead electrocardiogram (ECG) signal processing, by distributing the work among several cores. This distribution of tasks allows the system to meet all the deadlines while operating at a lower clock frequency and correspondingly lower voltage level, which results in significant energy savings. In the following paragraphs we describe the most relevant aspects of WSN platforms for energy efficiency.

3.1.1 Sleeping Modes

Embedded platforms typically have the ability to clock-gate their elements to reduce dynamic power. Clock-gating is a very effective technique that allows fine-grained control over individual elements, frequently allowing them to resume their work with a delay of a single cycle. Unfortunately, clock-gating cannot reduce leakage current, whose impact is becoming more pronounced as technology evolves toward smaller transistors. In contrast, power-gating suppresses leakage current, but the time required to reactivate an element (particularly if clock generators are stopped) limits the minimum duration of the sleeping periods. Moreover, whereas clock-gating can be implemented with a small area overhead through specific gates, power-gating often requires careful design of local power supply networks with a significant area overhead. Therefore, modern ULP platforms offer several sleeping modes, from fine-grained clock-gating to power-gating of large blocks. A careful match between resource activation and computational requirements allows the platform to operate at an optimal energy point.

3.1.2 Multi-banked Memories

The division of the platform memories into smaller banks that can be independently powered on/off, or placed into retention mode, enables fine-grained control over their energy use. For example, applications that acquire and process input signals in "windows" can control which bank is active because it is currently being used by the direct memory access (DMA) module, which banks must retain their contents until the next processing interval, and which ones can remain off because they do not yet contain any data.
Another important characteristic is the existence of a memory hierarchy. Since smaller banks generally have a lower energy cost per access, placing the most accessed data into them can reduce the total energy consumption.

3.1.3 DMA Modules

DMA modules are crucial to achieve low-power operation in applications that capture windows of data before processing, as the cores, which consume more energy, can be kept deactivated until enough samples have been acquired. Advanced DMA modules are also used to transfer data between different levels of a memory hierarchy, i.e., to implement double-buffering.

3.1.4 Efficient Hardware-Based Synchronization

Efficient parallelization in multi-core platforms requires hardware-based mechanisms that enable single-cycle synchronization and fine-grained clock-gating of the cores that are waiting for an event (Duch et al. 2017; Flamand et al. 2018).

3.1.5 Example Platforms

In this section, we study one representative platform from each of the two categories:
Single-core platform:
An instance of the previous generation of single-core low-power platforms is the EFM32LG-STK3600, containing an EFM32™ Leopard Gecko 32-bit MCU (Silicon Labs 2017): a 48 MHz ARM Cortex-M3 processor with a 3 V supply, 256 KB of flash memory, and 32 KB of RAM. The platform can be paired with the corresponding Simplicity Studio energy profiler as a tool for energy consumption analysis. Power management is implemented through 5 working modes, controlled by the energy management unit (EMU), with an active mode (EM0) and 4 low-energy modes (EM1–EM4), in descending order of energy consumption and increasing wake-up time (Table 2). The results of the analysis on this platform can be used as a baseline for ECG-based devices like the SmartCardia INYU (Surrel et al. 2018) or electroencephalogram monitoring devices such as the e-Glass (Sopic et al. 2018), which includes a microcontroller unit (MCU) of the same family as the EFM32.
Table 2 Current consumption in different power modes of the EFM32LG platform (summarized from Silicon Labs (2017))

Mode | Current (\(\upmu \)A MHz\(^{-1}\)) | Current (total) | Wake-up (\(\upmu \)s) | Notes
EM0 | 211 | 10 mA | – | Fully active at 48 MHz
EM1 | 63 | 3 mA | 0 | CPU sleeping, DMA available
EM2 | – | 0.95 \(\upmu \)A | 2 | Deep sleep, RTC on, CPU and RAM retention, I/O available
EM3 | – | 0.65 \(\upmu \)A | 2 | Stop, CPU & RAM retention, no I/O peripherals
EM4 | – | 20 nA | 160 | Shutoff
Multi-core platform:
GAP8 (Flamand et al. 2018) is a commercial RISC-V implementation based on the PULP project (Conti et al. 2016), built in 55 nm technology. It consists of a main processor, termed the "fabric controller" (FC), which performs short tasks and manages the complete platform, and a cluster of eight additional cores. The cluster cores are activated only during compute-intensive phases; they can cooperate on a single task or work independently. A hardware event unit implements single-cycle synchronization primitives and clock-gating of waiting cores.
The memory is divided into two blocks: the first one, L2 (512 KB), is connected to the FC, whereas the second one, L1 (64 KB), provides energy-efficient data access to the cluster cores. Data transfers between both levels are performed by a dedicated DMA module. A second DMA module is in charge of data acquisition without intervention of the FC. Power figures for GAP8 at different operating frequencies and voltage points are reported in Flamand et al. (2018). GAP8 implements full retention of the 512 KB L2 memory at only 32 \(\upmu \)W (8 \(\upmu \)W per 128 KB bank). Furthermore, advanced power management reduces the power down to 3.6 \(\upmu \)W in deep sleep mode.
A typical processing cycle in GAP8 starts with the DMA receiving data from external sources while the FC is clock-gated and the cluster is completely power-gated. Once enough data are received, the FC activates the cluster cores and programs the DMA to transfer data in and out of the L1 memory. While the cluster cores are processing, the FC can be clock-gated to conserve energy. Once the heaviest parts of the computation are completed, the FC can power down the cluster (and its L1 memory).

3.1.6 Power Management

To achieve continuous monitoring, different power management techniques that exploit the characteristics of biomedical applications are available to conserve energy. Figure 6 shows the periodic life cycle of a typical biomedical application, both on a single-core (Fig. 6a) and on a multi-core (Fig. 6b) platform. These applications are typically pseudo-periodic, with a duty cycle (DC) defined as the fraction of the period, comprising sample acquisition and computation, that is spent in computation (10% in the figure). These variations of computational demand over time create opportunities to conserve energy through appropriate power state management.
Using the values reported in Table 2 for a single-core platform like the EFM32, Fig. 7 shows the impact of three possible power management strategies on the average current drawn by the platform, which relates directly to the average power and energy consumption over time. Taking advantage of the shallowest sleeping mode (EM1), which guarantees the fastest reaction time to external events, a decrease in current of 63% can be achieved. If the system can afford longer reaction times and has external buffering (i.e., in the sampling or communication devices themselves), EM2 can be used, reducing the average current by up to 90% with respect to the original.
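These percentages follow directly from a weighted average of the active and sleep currents of Table 2; the short calculation below reproduces them for a 10% duty cycle.

```python
# Average current of a duty-cycled application on the EFM32 (values from Table 2).
def average_current_ma(duty_cycle, active_ma, sleep_ma):
    return duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma

dc = 0.10                                          # 10% duty cycle, as in Fig. 6
always_on = average_current_ma(dc, 10.0, 10.0)     # no power management
em1_sleep = average_current_ma(dc, 10.0, 3.0)      # EM1 between windows
em2_sleep = average_current_ma(dc, 10.0, 0.00095)  # EM2 between windows

print(always_on, em1_sleep, em2_sleep)   # 10.0, 3.7, ~1.0 mA
print(1 - em1_sleep / always_on)         # ~0.63 -> 63% current reduction
print(1 - em2_sleep / always_on)         # ~0.90 -> 90% current reduction
```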
In the case of a multi-core platform, such as GAP8 (Fig. 6b), the main processor (FC) can be used on its own during light computation stages while the cluster is power-gated (off). The cluster can be activated to take advantage of parallelization and reduce the total execution time of more complex tasks. Figure 8 shows the impact of power management and parallelization on the energy consumption of the platform. Since the speed-up achieved through parallelization changes the execution time, it directly affects the final energy consumption. Hence, the figure shows energy consumption per second of computation, rather than average current or power, to illustrate the impact of the parallelization (and the corresponding reduction of processing time) in the final energy consumption of the system. The figure shows two groups of bars: the left one for a DC of 10%, and the right one for 100%. In both cases, the first bar corresponds to the energy consumed in single-core mode without implementing any power management, whereas the second one corresponds to single-core using clock-gating during idle periods—of course, under a DC of 100%, both cases are equivalent. The remaining bars correspond to the energy consumption in multi-core mode. We make two important observations: First, the multi-core version can achieve significant improvements (up to 41%) with respect to the single-core version with power management. Second, the parallelization needs to reach a minimum speed-up to attain any energy savings. For example, with a 4-core platform, if the speed-up obtained is \(2\times \), then the multi-core version will consume even more energy than the single-core one because the additional hardware is not correctly exploited.

3.1.7 Memory Management

Quite frequently, biosignal processing applications need to acquire a number of samples to complete a window of processing. The sampling period typically extends over several seconds, with a low acquisition rate (e.g., 250 Hz). During the sampling period, the processor is typically clock- or power-gated; the system keeps active only the DMA controller, the memory and the devices required for signal acquisition such as analog-to-digital converters (ADCs) or bus interfaces. However, even with this minimal amount of hardware active, the amount of energy consumed during the acquisition phase can be quite large. In fact, the energy consumed by the memories during the sample acquisition period, which is measured in seconds, can become comparable in magnitude to the energy consumed during the processing period, which is typically measured in tens or hundreds of milliseconds.
In that sense, modern platforms have memories divided into multiple banks that can be independently switched on or off. Most memories also support a retention mode in which they keep their contents but cannot be accessed. To minimize the energy consumption during sampling, a system should use the smallest bank size that is feasible and keep all the banks off except the one currently receiving new samples (as power mode transitions require some time, it may be better to keep that bank active between sampling periods). When the bank is filled, it should be placed into retention mode. As capturing progresses, the banks move from the disconnected state to active and, finally, to retention. When the sampling period ends, all the banks containing data can be activated before starting the computation.
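The bank life cycle described above can be captured by a small state function; the bank size, the number of banks, and the state names below are hypothetical, and a real driver would also account for the power-mode transition times.

```python
BANK_SIZE = 4096   # samples per bank (hypothetical)
NUM_BANKS = 4

def bank_states(samples_received):
    """Return the power state of each memory bank during a sampling window."""
    states = []
    for b in range(NUM_BANKS):
        if samples_received >= (b + 1) * BANK_SIZE:
            states.append("retention")   # full: keep contents, no access needed
        elif samples_received >= b * BANK_SIZE:
            states.append("active")      # currently being filled by the DMA
        else:
            states.append("off")         # no data yet: can stay power-gated
    return states

print(bank_states(6000))   # ['retention', 'active', 'off', 'off']
```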

3.2 Patient-Aware Applications

In health monitoring, targeted and personalized diagnosis and treatment are essential for a successful prognosis. One example is the group of patients suffering from paroxysmal atrial fibrillation (PAF), which is caused by heterogeneous mechanisms and can be asymptomatic. In our work (De Giovanni et al. 2017), we show how a patient-aware approach to predict the onset of PAF significantly increases the accuracy compared to general methods, which are affected by inter-patient variability.
In our work, we propose to use an abstraction of the ECG signal (termed "delineation") obtained by selecting specific relevant points for each patient. Then, we train a different model for each patient, automatically adjusting the complexity of the model (i.e., the number of features or delineated points) as required by the specific condition of each patient.
Since the extraction of each of the delineation points, or features, has a different computational cost, the patient-aware approach changes the computational complexity of the same application for each patient: By choosing different groups of ECG delineation points with different computational load per patient, the energy consumption of the algorithm is scaled to the specific patient.
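A patient-aware configuration can be as simple as a per-patient table of the delineation points the model needs; the point names and relative costs below are invented purely to illustrate how the computational load scales with the patient.

```python
# Hypothetical per-patient configuration: each patient's model only extracts
# the delineation points it actually needs, so "easy" patients run a cheaper pipeline.
PATIENT_CONFIG = {
    "patient_A": ["R_peak"],                       # simple rhythm-based model
    "patient_B": ["P_onset", "R_peak", "T_end"],   # needs a fuller delineation
}

FEATURE_COST = {"R_peak": 1.0, "P_onset": 2.5, "T_end": 2.0}   # relative costs

def per_patient_cost(patient_id):
    return sum(FEATURE_COST[point] for point in PATIENT_CONFIG[patient_id])

print(per_patient_cost("patient_A"), per_patient_cost("patient_B"))   # 1.0 5.5
```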
These considerations can be used in conjunction with the previously explored platform-aware techniques to achieve optimal computation. For example, in the case of patients for which few (easy) points are delineated, the algorithm may be able to work in single-core mode or at a lower frequency-voltage point. If the set of required points makes the delineation process become more complex, then a higher frequency-voltage point can be used to guarantee that all the deadlines are met.

3.3 Towards Adaptive and Multi-parametric Applications

Newer WSN applications for remote health monitoring or for tracking performance in sports are becoming multi-parametric and adaptive. These applications use multiple sensors, such as respiratory activity (RSP) or photoplethysmography (PPG), to estimate vital parameters such as heart rate (Giovanni et al. 2016), blood pressure, and oxygen saturation (Murali et al. 2017). Due to the performance challenges that these new applications create, modern platforms are evolving to handle those larger workloads, but also to follow their performance requirements more closely. For example, Mr. Wolf (Pullini et al. 2019) is a PULP-based platform that can run applications such as seizure detection with 23 ECG electrodes (Benatti et al. 2016) or online-learning EMG-based gesture recognition (Benatti et al. 2019). These new platforms offer parallel processing and floating-point support, but can also operate at different voltage-frequency points. In that way, designers can pick among multiple combinations of working states: single-core at low frequency-voltage (minimum performance, minimum energy consumption), single-core at high frequency-voltage, multi-core at low frequency-voltage, and multi-core at high frequency-voltage (highest performance, highest energy consumption).
In consequence, the relevant problem now becomes how to match the performance requirements of the application with the platform resources to meet all deadlines while avoiding energy waste. One example of this effort is adaptive applications that employ algorithms of increasing complexity in cascade, activating only the least complex ones required to achieve a satisfactory precision. For example, in Forooghifar et al. (2019) the authors propose to use multiple support vector machines (SVMs) of increasing complexity to process increasing numbers of biosignals in cognitive workload monitoring. A fundamental feature of SVMs is that they produce not only a classification (e.g., "stressed" versus "not stressed") but also a certainty score. Using that score, the designer can determine whether the current SVM is adequate or whether the next, more complex one should be used. The successive SVMs require more complete inputs, which increases both the complexity of the feature extraction and the computation of the SVMs themselves.
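The cascade can be expressed generically as follows; each stage is a placeholder pair of feature extractor and classifier returning a label and a certainty score (for an SVM, typically the distance to the separating hyperplane), and the thresholds are assumptions to be tuned per application.

```python
def cascade_predict(x, stages, thresholds):
    """Run increasingly complex classifiers until one is certain enough.

    stages:     list of (extract_features, classify) callables, simplest first;
                classify() returns (label, score).
    thresholds: minimum |score| required to accept the decision of each stage.
    """
    label = None
    for (extract_features, classify), threshold in zip(stages, thresholds):
        label, score = classify(extract_features(x))
        if abs(score) >= threshold:    # certain enough: stop the cascade here
            return label
    return label                       # fall back to the most complex stage
```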
In conclusion, a carefully designed application can determine the required resources at each stage, and configure platform resources according to its requirements.

4 Self-Aware Signal Acquisition and Sampling

IoT devices are small embedded systems, often constrained in resources. While the more powerful IoT devices have a permanent power supply, other devices are more limited. This is the case for battery-powered remote wireless systems dedicated to data collection. While in the former case the energy budget is often not a major concern, in the latter case each joule must be used wisely.
In the most constrained setups, sampling data from the environment is often one of the main energy expenditures. Therefore, the components used for signal acquisition need to be as low-power as the design constraints allow. Indeed, the performance of the device must not be lowered below the threshold of acceptability. The Analog-to-Digital Converters (ADCs) used in this context are predominantly based on the Successive Approximation Register (SAR) architecture, because they consume less power than other architectures for a given bit-depth, as seen in Fig. 9. An additional benefit that cannot be represented in the figure is the evolution of the current draw with the sampling rate. Indeed, while the current consumed by the Sigma-Delta architecture is constant, it scales with the number of samples taken in the case of SAR ADCs. This means that lowering the sampling rate directly leads to significant power savings.

4.1 When the Design Drives the Sampling: The Data Deluge

There are inherent limits to how low the energy consumption of ADCs can be. Indeed, even with optimized ADCs using a low-power architecture, there is a minimum number of samples required to accurately capture the signal. This number is defined according to the Shannon-Nyquist theorem, where the highest frequency in the signal drives the sampling rate to use. For example, when recording an audio signal, the maximum frequency (highest pitch) that can be heard by the human ear is close to 20 kHz, which requires a sampling rate of 40 kHz. When considering the addition of a low-pass filter with a 2.05 kHz transition band to remove frequencies above 20 kHz, we reach the common 44.1 kHz sampling frequency.
Following the Shannon-Nyquist sampling theorem, a constant sampling rate is defined for the acquisition. This sampling rate is dictated by the highest frequency that could occur in the signal, which might never actually occur. As a consequence, the signal is globally over-sampled at every instant when the maximum expected frequency is not present in the signal.

4.2 When the Signal Drives the Sampling: The Event-Driven Strategy

As the energy consumption of SAR ADCs is linked to the number of samples taken, lowering this number leads to overall energy savings. One such strategy is Compressed Sensing (Mamaghanian et al. 2011), where the system follows an irregular sampling pattern according to a mathematical model: the signal's sparsity can be used to recover the original signal by solving an under-constrained linear system. This is already an improvement over the uniform sampling approach, but it has a few shortcomings. First, it relies on the probability of getting significant data at the right time. If the system does not sample the signal while a significant event is happening, the event is either totally lost or poorly acquired. Second, reconstructing the full signal from the compressed measurements is a computationally expensive process. A low-power IoT sensor node would not have the energy budget to retrieve the signal and react if necessary. If the data processing is pushed towards the remote IoT node, an alternative design must be chosen.
A better approach is to have the signal itself drive the sampling. There are multiple strategies that implement this way of reasoning about signal acquisition. In the following parts, event-triggered sampling is motivated and illustrated with the specific case of the main bio-signal of the heart, the electrocardiogram (ECG).
ECG signals combine periods of high frequency when the beat happens, and lower frequencies otherwise. Each heartbeat in an ECG is observed as a sequence of three wave components (annotated in Figs. 10 and 11):
1. P wave: electrical activation of the atria,
2. QRS complex: electrical activation of the ventricles,
3. T wave: electrical recovery of the ventricles.

4.2.1 Level-Crossing Event Triggering

Starting from the traditional situation of uniform sampling, shown in Fig. 10, the sampling frequency chosen here is not sufficient to correctly capture all the signal's characteristics. However, raising the sampling rate to reliably capture the R peak would over-provision the other parts of the signal. Over-sampling is detrimental for resource-constrained medical systems (Surrel et al. 2018; Sopic et al. 2018), as more samples mean more energy required to process, store, or transmit the acquired data. Indeed, a medical device is less usable for patients if the battery life is a limiting factor.
Even though the full ECG can be very informative for detecting symptoms of diseases, it is not always necessary to have full details about the P, QRS, and T waves. Depending on the application, the desired accuracy, and the use-case, partial data can be sufficient to run the required diagnostics. For instance, it is possible to perform online detection of Obstructive Sleep Apnea (OSA) on a wearable device relying only on the time between heartbeats (Surrel et al. 2018). The accuracy is improved when the peaks' amplitude is also used. Because all the processing is performed on the device, the energy spent must be kept to a minimum. For this example application, lowering the sampling frequency raises two problems. First, it is impossible to significantly reduce the sampling frequency, as we need to accurately detect heartbeats. Second, the algorithm needs accurate timing between the heartbeats, otherwise the quality of the results decreases dramatically: lowering the sampling frequency lowers the temporal resolution of the heartbeat detection. As a consequence, any energy saved by lowering the sampling frequency is paid for with reduced performance.
Switching to an event-driven signal acquisition is beneficial for two reasons. First, the heartbeat is the highest peak in the signal. Therefore, it will quickly cross multiple thresholds, clearly flagging its presence in the triggers received. In contrast to Compressed Sensing, it is not possible to miss it in the signal. Second, even with a coarse configuration (i.e., a low number of thresholds), we only lose precision in the peak height, while the time at which the heartbeat happens is preserved.
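A level-crossing sampler is only a few lines of code; the sketch below uses a synthetic spike as a crude stand-in for an R peak and a hypothetical 0.2 mV level spacing, just to show that the peak's timing survives even though most of the trace produces no events.

```python
import numpy as np

def level_crossing_sample(signal, delta=0.2):
    """Emit a sample each time the signal enters a new amplitude level of width delta."""
    samples = [(0, signal[0])]
    last_level = int(signal[0] // delta)
    for i, value in enumerate(signal[1:], start=1):
        level = int(value // delta)
        if level != last_level:          # one or more thresholds were crossed
            samples.append((i, value))   # keep the time index and the value
            last_level = level
    return samples

# A tall, narrow spike (stand-in for an R peak) crosses several levels in a few
# samples, so its timing is captured while the flat baseline generates no events.
t = np.linspace(0.0, 1.0, 360)
spiky = 1.5 * np.exp(-((t - 0.5) ** 2) / 2e-4)
print(len(spiky), "->", len(level_crossing_sample(spiky)), "events")
```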

4.2.2 Error-Based Event Triggering

The classical level-crossing method is less than ideal for two reasons. First, oscillations or noise around a threshold will trigger many samples. Second, a linear evolution of the signal, that is, a segment with a constant slope, will generate samples regularly spaced both in time and in value. This brings no useful information compared to simply keeping the first and last points of the linear section. An alternative approach is to consider the error between the raw signal and its sampled version as the trigger for sampling.
Putting the focus on the signal reconstruction error, the event-triggered sampling task is a minimization problem, looking for the minimum number of samples that allow us to obtain a digital representation of the analog signal that is sufficient for ECG processing.
A family of methods well suited for this problem is polygonal approximation, also called piece-wise linear representation or linear path simplification (Keogh et al. 2004). These methods assume that the input signal can be represented as a sequence of linear segments, and they apply different techniques to obtain the minimum number of segments satisfying some error criterion.
Within this family, one method especially suitable for sampling time series is the Wall-Danielsson algorithm (Wall 1984). This method has linear complexity, works online, and only needs one signal sample in advance to estimate the approximation error. On the other hand, it guarantees optimality neither in the number of points nor in the selected samples. The method follows a bottom-up approach, in which points are merged into the current linear segment until an error threshold is reached, and then a new segment is created. The error is measured as the area deviation between the original signal and the current segment.
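The following is a simplified, offline sketch of this bottom-up idea: a segment grows until the accumulated area deviation from the raw samples exceeds a budget, and the segment end is then kept as a sample. Unlike the original algorithm, which is online and linear-time, this sketch recomputes the area at every step for clarity.

```python
def polygonal_sample(signal, max_area=0.5):
    """Keep the indices of a piece-wise linear approximation of `signal`
    whose per-segment area deviation stays below `max_area` (illustrative)."""
    kept = [0]
    start = 0
    for i in range(2, len(signal)):
        # Area deviation between the chord (start -> i) and the raw samples.
        area = 0.0
        for j in range(start + 1, i):
            chord = signal[start] + (signal[i] - signal[start]) * (j - start) / (i - start)
            area += abs(signal[j] - chord)
        if area > max_area:        # error budget exhausted: close the segment
            kept.append(i - 1)
            start = i - 1
    if kept[-1] != len(signal) - 1:
        kept.append(len(signal) - 1)
    return kept

# A ramp, however steep, is reduced to its two end points.
print(polygonal_sample([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))   # [0, 5]
```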
This algorithm overcomes the two main shortcomings of the classical level-crossing method visible in Fig. 12. First, an almost constant signal oscillating near a level generates more events than required. Second, fast linear changes generate numerous events. With polygonal approximation, the number of samples is not affected by constant displacements of the signal level, and linear changes are always represented by just two samples, regardless of the slope value.

4.2.3 Self-Aware Sampling for ECG Signals

For a self-aware system, an important ECG feature that can be exploited to reduce the amount of data is the physiological regularity observed in the signal. In particular, under a normal sinus rhythm situation, the same heartbeat pattern is repeated between 60 and 100 times per minute. Thus, if this situation is detected on a signal fragment, from that point onward it would be enough to capture just the information needed to identify a change in the rhythm.
This idea is illustrated in Fig. 13, showing a 24-s ECG segment. As long as we observe three regular P-QRS-T heartbeat patterns with a normal distance between them, we drastically reduce the detail of the signal, keeping just enough to check that the regularity is maintained. This results in a rougher approximation of the signal, but one detailed enough to observe the regular heartbeats at the expected time points. When an unexpected event breaks this regularity, the procedure lowers the error between the signal and its sampled version, hence supporting a more precise analysis of the new situation.
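One possible (hypothetical) self-aware rule for switching between the fine and coarse error budgets is sketched below: it declares the rhythm regular when the last three RR intervals are nearly equal and fall in the normal 60-100 bpm range, and reverts to fine sampling as soon as that regularity breaks.

```python
def adapt_error_threshold(rr_intervals_ms, fine=0.1, coarse=1.0):
    """Pick the sampling error budget from the recent heartbeat regularity."""
    if len(rr_intervals_ms) < 3:
        return fine                              # not enough history: stay precise
    last = rr_intervals_ms[-3:]
    regular = max(last) - min(last) < 0.1 * (sum(last) / 3)   # <10% spread
    normal_rate = all(600 <= rr <= 1000 for rr in last)       # 60-100 bpm
    return coarse if (regular and normal_rate) else fine

print(adapt_error_threshold([800, 810, 790]))   # 1.0 -> coarse, regular rhythm
print(adapt_error_threshold([800, 810, 400]))   # 0.1 -> fine, regularity broken
```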

4.3 Evaluation of Event-Driven Sampling

The potential of event-driven sampling is illustrated with ECG signals by comparing the performance of a standard QRS detection algorithm, provided in the WFDB software package from PhysioNet, under the different sampling strategies. The output of the QRS detection algorithm is compared against the manual annotations made by a medical doctor, using the bxb application from the WFDB toolkit.
The dataset used is the MIT-BIH Arrhythmia database available on PhysioNet, which is widely used in the literature to evaluate QRS detection algorithms. This database contains 48 ECG records of 30 min duration, sampled at 360 Hz.
Table 3 shows the performance comparison between the proposed method and the other sampling strategies, including ordinary uniform sampling (U.S.), compressed sensing (C.S.), level-crossing (L.C.), and, finally, self-aware adaptive sampling (S.A.). The considered performance metrics are sensitivity (Se), positive predictivity (+P), and the combined \(F_1\) score. The compressed sensing method has been applied as explained in Mamaghanian et al. (2011), while the adopted level-crossing scheme is linear, with a threshold every 200 \(\upmu \)V. The results show that, for a similar \(F_1\) score, compressed sensing halves the sampling frequency while level-crossing divides it by more than eight. The self-aware adaptive approach outperforms the other two strategies.
Table 3 QRS detection performance comparison among different sampling strategies and resulting average sample rate for the 46 selected records from the MIT-BIH Arrhythmia DB

Sampling strategy | Se | +P | F\(_1\) | fs (Hz)
Uniform Sampling (U.S.) | 99.73 | 99.85 | 99.79 | 360.0
Compressed Sensing (C.S.) | 99.64 | 99.82 | 99.73 | 180.0
Level-Crossing (L.C.) | 99.66 | 99.83 | 99.74 | 43.7
Self-Aware (S.A.) | 99.62 | 99.84 | 99.73 | 13.6
The relevance of event-driven sampling is necessarily application-specific, and its performance must be evaluated carefully. However, given the properties of such a system, where the samples are taken depending on the signal itself rather than on an external factor, it is expected to bring significant energy savings for an equivalent system performance in multiple domains.

5 Conclusions

When optimizing a system for a given task, it is often required to change our mindset. The majority of systems benefit from dynamically adapting to changes according to their capabilities and limits. This adaptive behavior opens the way to significant energy-savings, while maintaining the required performance.
The design of a self-aware system can take place on one or multiple levels, depending on the final goal and constraints. Each layer is very different when considering the impact of making it self-aware, with trade-offs involving design complexity, implementation cost, or technological availability.
This chapter presented three practical examples of self-aware designs, one for each layer considered. In Fig. 14, the layer closest to the analog world, the hardware layer, is involved in the analog-to-digital conversion of a signal, i.e., its sampling. One layer up is the architecture layer, i.e., the electronic components that run the software algorithms; it offers specific capabilities for building up self-aware systems. Finally, the layer closest to the user is the application layer, which is involved in processing the data and yielding the final results. Each of these layers has great potential for full customization in a self-aware system.
1. The Application Layer: An epileptic seizure detection classifier has been presented. An initial lightweight classification is performed first. Depending on the results, the classifier decides whether a more in-depth analysis is required. This approach brings significant savings in all cases where the lightweight analysis is sufficient, with a minimal overhead when the full analysis has to be performed.
2. The Architecture Layer: Moving up toward the application layer, an important factor to consider is the structure of the different platforms available and how it can affect energy efficiency. Power management must be taken into account in the application design process for any platform. The benefit of parallelization in multi-core platforms, expressed as energy savings compared to a single-core implementation, depends on the application duty cycle and on the speed-up achieved on the core cluster. This analysis enables choosing the optimal setup to achieve the maximum energy savings given certain general features of the application.
3. The Hardware Layer: At the beginning of the data-processing chain comes signal digitization. In low-power systems, the energy budget of sampling can be lowered by changing the data acquisition strategy, moving from uniform sampling to an adaptive, event-driven one. In the application presented, this non-Nyquist sampling lowered the total number of samples collected by more than \(25\times \) without any significant decrease in performance.
The design constraints may require only a single layer to be self-aware. In that case, targeting the application layer is likely the right choice. If needed, it is possible to go further by making the architecture layer or the hardware layer self-aware as well. Finally, a fully self-aware system brings a definite advantage: the possibility of a much lower energy consumption without any major loss in performance.

Acknowledgements

This work has been partially supported by the ML-Edge Swiss National Science Foundation (NSF) Research project (GA No. 200020_182009/1), by the H2020 DeepHealth Project (GA No. 825111), by the ReSoRT Project (GA No. REG-19-019) funded by Botnar Foundation, and by the PEDESITE Swiss NSF Sinergia project (GA No. SCRSII5 193813/1).
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
A. Aminifar, P. Tabuada, P. Eles, Z. Peng, in 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE) (IEEE, 2016), pp. 636–641
M.T. Andrén, B. Bernhardsson, A. Cervin, K. Soltesz, in 2017 IEEE 56th Annual Conference on Decision and Control (CDC) (IEEE, 2017), pp. 5438–5444
A. Anzanpour, I. Azimi, M. Götzinger, A.M. Rahmani, N. TaheriNejad, P. Liljeberg, A. Jantsch, N. Dutt, in Proceedings of the Conference on Design, Automation & Test in Europe (European Design and Automation Association, 2017), pp. 1056–1061
E. De Giovanni, A. Aminifar, A. Luca, S. Yazdani, J.M. Vesin, D. Atienza, in Proceedings of CINC, vol. 44 (2017), pp. 285–191
A. Filieri, H. Hoffmann, M. Maggio, in Proceedings of the 36th International Conference on Software Engineering (2014), pp. 299–310
F. Forooghifar, A. Aminifar, D.A. Alonso, in 2018 21st Euromicro Conference on Digital System Design (DSD) (IEEE, 2018), pp. 426–432
F. Forooghifar, A. Aminifar, L. Cammoun, I. Wisniewski, C. Ciumas, P. Ryvlin, D. Atienza, Mobile Networks and Applications (2019), pp. 1–14
F. Forooghifar, A. Aminifar, D. Atienza, IEEE Trans. Biomed. Circuits Syst. 13(6), 1338 (2019)
H. Hoffmann, J. Holt, G. Kurian, E. Lau, M. Maggio, J.E. Miller, S.M. Neuman, M. Sinangil, Y. Sinangil, A. Agarwal et al., in Proceedings of the 49th Annual Design Automation Conference (2012), pp. 259–264
A. Jantsch, N. Dutt, A.M. Rahmani, IEEE Design & Test 34(6), 8 (2017)
E. Keogh, S. Chu, D. Hart, M. Pazzani, in Data Mining in Time Series Databases (World Scientific, 2004), pp. 1–21
M. Konijnenburg, R. van Wegberg, S. Song, H. Ha, W. Sijbers, J. Xu, S. Stanzione, C. van Liempd, D. Biswas, A. Breeschoten, P. Vis, C. Van Hoof, N. Van Helleputte, in IEEE International Solid-State Circuits Conference (ISSCC) (2019), pp. 360–362. https://doi.org/10.1109/ISSCC.2019.8662520
P.R. Lewis, A. Chandra, S. Parsons, E. Robinson, K. Glette, R. Bahsoon, J. Torresen, X. Yao, in 2011 Fifth IEEE Conference on Self-Adaptive and Self-Organizing Systems Workshops (SASOW) (IEEE, 2011), pp. 102–107
P.R. Lewis, M. Platzner, B. Rinner, J. Tørresen, X. Yao, Self-aware Computing Systems (Springer, 2016)
H. Mamaghanian, N. Khaled, D. Atienza, P. Vandergheynst, IEEE TBME 58(9), 2456 (2011)
G. Masinelli, F. Forooghifar, A. Arza, A. Aminifar, D. Atienza, IEEE Design & Test (2020)
D. Sopic, A. Aminifar, A. Aminifar, D. Atienza, IEEE TBioCaS (99), 1 (2018)
D. Sopic, A. Aminifar, D. Atienza, in 2018 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE, 2018), pp. 1–5
G. Surrel, A. Aminifar, F. Rincón, S. Murali, D. Atienza, IEEE TBioCaS 12(4), 762 (2018)
DOI
https://doi.org/10.1007/978-981-16-7487-7_13