Open Access 2013 | Original Paper | Book Chapter

8. Sensor Deployments for Home and Community Settings

Authors: Michael J. McGrath, Cliodhna Ní Scanaill

Published in: Sensor Technologies

Publisher: Apress

Abstract

In this chapter, we will outline the methodologies that have been successfully developed and utilized by Intel and Technology Research for Independent Living (TRIL) Centre researchers in the design, implementation, deployment, management, and analysis of home- and community-based research pilots. Translating research from the confines of the laboratory to real-world environments of community clinics and people’s homes is challenging. However, when successful, applying that research correctly can deliver meaningful benefits to both patients and clinicians. Leveraging the expertise of multidisciplinary teams is vital to ensuring that all issues are successfully captured and addressed during the project life cycle. Additionally, the end user must be the center of focus during both the development and evaluation phases. Finally, the trial must generate data and results that are sufficiently robust to withstand rigorous review by clinical and scientific experts. This is vital if any new technology solution is to be successfully adopted for clinical use.
Note
The TRIL Centre is a large-scale collaboration between Intel and research teams at University College Dublin (UCD), Trinity College Dublin (TCD), National University of Ireland (NUI) Galway, and St James’s Hospital in Dublin. It is a multidisciplinary research center focused on discovering how technology can be used to facilitate older people living independent lives in the location of their choice. TRIL research encompasses identifying, understanding, and accurately measuring clinical variables to predict health, prevent decline, and promote well-being and independence. The driving principles are to facilitate the measurement of critical clinical variables and test technology-enabled clinical assessments and interventions in a nonclinical environment, with validation in both environments. This is combined with a technology design and development process that is person-centered and informed by ethnographic research. Many of the insights presented in this chapter were collected during the course of TRIL research activities. The case studies outlined were carried out collaboratively by Intel and TRIL researchers.
You can find additional information at www.trilcentre.org.

Healthcare Domain Challenges

Technology is used during the course of everyday practice in hospital clinics to identify and measure the extent of gait issues, cardiovascular health, sensory impairments, cognitive function, and so on. However, technologies are rarely used in community settings for a variety of reasons, including cost, usability, and the necessity for specialized facilities and/or expert personnel. Additionally, the value of monitoring data sets over the long term within the clinical domain is subject to debate, with critics seeking evidence of impact on healthcare outcomes, costs, and efficiencies (Alwan, 2009; McManus et al., 2008; Tamura et al., 2011). Although sensor technologies for singular or short-term measures, such as EKGs/ECGs or 24-hour ambulatory blood pressure monitoring, have achieved acceptance in the diagnostics process, there has been slow progress in embracing prognostics-oriented technology. The use of prognostics originated in the engineering domain, where it is used to predict, within specified degrees of statistical confidence, whether a system (or components within a system) will fail to perform within expected parameters. Of course, the human body is a complex biological system, which makes it difficult to model because of the subtle interactions between physiological, biomechanical, behavioral, and cognitive parameters. Therefore, frequent assessments, long-term monitoring, or both are required to develop an understanding of what constitutes a significant trend away from an individual’s own normal limits. Large population data sets that enable the creation of groupings with strongly related epidemiological characteristics are also critical. Normative values or ranges can be established that identify how an individual differs from other similar individuals.
The population sizes within these data sets are critical because larger sample sizes support the development of more granular models, which are reflective of groups with shared physiological, cognitive, and biomechanical ranges.
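To make the idea of an individual’s own normal limits concrete, here is a minimal sketch that establishes a personal baseline from early readings and flags later readings that deviate significantly from it. The window size and z-score threshold are illustrative assumptions, not values from the text:

```python
from statistics import mean, stdev

def flag_deviations(readings, baseline_window=30, z_threshold=2.0):
    """Flag readings that fall outside an individual's own normal limits.

    The first `baseline_window` readings establish the personal baseline;
    later readings are flagged when they deviate by more than
    `z_threshold` standard deviations from that baseline.
    """
    baseline = readings[:baseline_window]
    mu, sigma = mean(baseline), stdev(baseline)
    flags = []
    for i, value in enumerate(readings[baseline_window:], start=baseline_window):
        z = (value - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flags.append((i, value, round(z, 2)))
    return flags
```

In practice the baseline would be re-estimated periodically, and the threshold tuned per measure against the normative population ranges discussed above.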
Collecting high-quality sensor data sets in a reliable, robust, and low-cost manner over long periods of time, with minimal clinical oversight, poses enormous challenges. To successfully meet these challenges, new thinking is required about the way biomedical research is conducted. A broader perspective on sensor-based applications is required, one that includes all stakeholders in the research process and is not limited to the engineer-doctor interface. Clinicians, scientists, biomedical engineers, social scientists, designers, information and communication technology (ICT) engineers, and others need to work collaboratively in the development, deployment, and evaluation of technologies that will facilitate new paradigms of health and wellness management, as illustrated in Figure 8-1.
Because health and wellness applications are human-centric in nature, insights collected directly from end users play an important role in the development process. By using this information to inform the design of the sensor solution, it will have a higher probability of being successfully used in clinical practice or people’s daily lives. The science of ethnography is a highly effective tool for developing the insights required to deliver a successful application or product. The ethnographic process helps to develop an understanding of how people live, what makes them live the way they do, and how technologies, including sensors, can be acceptable and beneficial. Ethnographers often spend extended periods of time with people, getting to know them and understanding their day-to-day lives. This helps identify opportunities to put technologies in place that are both unobtrusive and effective in supporting the health of the people that researchers and clinicians are studying.
A consistent finding of much aging research is that older adults want to stay in their own homes as they age, and not in care institutions. Homes are symbols of independence and are a demonstration of people’s ability to maintain autonomous lives. Homes are full of personal possessions and memories and help maintain continuity in people’s relationships with their past and their hopes for the future. Homes evoke and support autonomy, independence, freedom, and personal space. However, homes by their very nature are uncontrolled environments, which make both the deployment and utilization of sensor technologies challenging.

Study Design

The design and execution of any research study is a multiphase process, as shown in Figure 8-2. The steps should be followed in a sequential manner to minimize study design issues and to maximize the quality of the research outputs.
1. The obvious starting phase of any research project is defining the research question, and in some cases the technology questions, to be addressed.
2. The cohort size required to generate statistically significant results is assessed during the power analysis phase. Cohort sizing is particularly important for any sensor applications that are required to demonstrate statistically significant diagnostic capabilities.
3. Protocol design focuses on the data to be collected. Careful design of the protocol is critical to ensure data quality. Data quality can be achieved by observing the data collection (in other words, a supervised protocol) or by having very rigid measurement methods, although both these methods may be unacceptable for large-scale deployments.
4. The utility and acceptability of the sensor system is determined during the user evaluation phase. Based on the information collected during this process, the system design may be refined or completely redesigned to make it more acceptable to end users.
5. The deployment phase focuses on the successful installation of the sensor system. The goal is to collect the necessary data required to answer the research question set out at the start of the study design. Technical issues that were not identified during laboratory testing may also be found. The deployment will also highlight usability issues that were not found or prioritized during the user evaluation phase.
6. The final phase of the life cycle is data analysis, where many of the data mining and visualization techniques as described in Chapter 5 can be applied to reveal interesting patterns in the data.

Setting the Research Questions

A clear research question supports the identification of what data must be collected to address the research hypothesis, which in turn defines the cohort size by means of a power analysis. Participant inclusion/exclusion criteria will be determined based on the research question and the data to be examined. The clinical protocol will be determined by the type of data and the personnel required for data collection. The presence of a clinician may be legally or ethically required if blood samples are required or if the cohort is vulnerable in any way. Trained psychologists may be required to administer and interpret psychological assessments, and engineers may need to install and maintain technologies and validate data as it is collected.
The clinical protocol should be documented accurately as part of the ethical approval process required by most institutions. The protocol document will be referred to during the data collection phase and should be strictly adhered to. This document will be invaluable during the data analysis phase of the study, providing context to the data and its interpretation.
When working in a multidisciplinary environment, as is common during clinical studies, it is appropriate to create a clinical requirements document (CRD) as well as a prototype requirements document (PRD). A CRD is developed by clinicians to document their requirements for the study. This document extends the protocol document to specify what the clinician requires from the sensor technology and personnel to achieve the study aims. The PRD is a technical document, developed by engineers, that describes the hardware and software choices made to meet the clinical requirements. Both of these documents are shared and debated among the multidisciplinary team members until agreement is achieved on the requirements for the initial prototype. Sometimes these documents raise questions that necessitate further research or focus groups to answer. Although creating (and maintaining) documents such as the PRD and CRD is time-consuming, these documents are invaluable in preventing the misinterpretation of requirements, which can arise in multidisciplinary teams. Ultimately, these structured requirement capture processes can result in significant overall time savings and can minimize potential disagreements between the team during the evaluation phase of the project.

Clinical Cohort Characterization

Clinical cohorts are a pivotal component in any health- and wellness-oriented research. Considerable cost and effort are required in the recruitment of cohorts. However, the up-front investment in characterizing a cohort can pay significant dividends. These dividends include helping to inform the analysis of the research data and revealing interesting and novel relationships between parameters during the analysis.
One of the first issues to be addressed is cohort size. A power analysis should be conducted to establish the ideal cohort size required to enable accurate and reliable statistical analysis (Cohen, 1988). Power analysis is used to calculate the minimum sample size required to detect an effect of a given size. This prevents either under- or over-sizing of the cohort. If the sample size is too small, the data will not be accurate enough to draw reliable conclusions on predefined research questions. If the sample size is too big, time and effort will be wasted. The power of a study is dependent on the effect size, the sample size, and the size of type I error (e.g., α = 0.05, meaning incorrect rejection of a true null hypothesis, in other words, incorrectly identifying a relationship between parameters when none exists). The effect size should represent the smallest effect that would be of clinical significance. For this reason, it will vary with the measure under consideration. If the effect size and alpha are known, then the sample size becomes a function of the power of the study. Typically, statistical power greater than 80 percent is considered adequate, and a sample size corresponding to 80 percent power is used as the minimum cohort size. The size of the recruited cohort should always be larger than the size calculated by a study power analysis to account for drop-outs. This buffering factor depends on the cohort demographics and the nature of the protocol.
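The sample-size arithmetic described above can be illustrated with the standard normal-approximation formula for a two-group comparison, n = 2((z₁₋α/2 + z_power)/d)², inflated by a drop-out buffer. The effect size, alpha, power, and drop-out rate below are hypothetical values, not figures from a TRIL study, and the normal approximation gives a slightly smaller n than exact t-test tables:

```python
from math import ceil
from statistics import NormalDist

def min_cohort_size(effect_size, alpha=0.05, power=0.80, dropout_rate=0.15):
    """Approximate per-group sample size for a two-group comparison,
    using n = 2 * ((z_(1-alpha/2) + z_power) / d)^2, then inflated
    to allow for drop-outs."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    minimum = ceil(n)
    recruited = ceil(minimum * (1 + dropout_rate))
    return minimum, recruited

# A medium effect (d = 0.5) at 80% power and alpha = 0.05 needs roughly
# 63 participants per group; recruiting 15% extra covers drop-outs.
print(min_cohort_size(0.5))  # (63, 73)
```

Note how the required n grows quadratically as the clinically significant effect size shrinks, which is why the choice of effect size dominates the costing of a study.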
Where feasible, the cohort should be given a generalized assessment to establish their baseline health and to identify any risks that must be immediately addressed and would preclude them from participating in a study. This will be a requirement of most ethics committees dealing with clinical studies. For example, an elderly cohort may undergo a falls risk assessment, cognitive functional tests, and physiological testing to establish their baseline health.

Protocol Options: Supervised vs. Unsupervised

The selection of a supervised or unsupervised protocol is contingent on the specifics of the research question under investigation. A clinician or researcher is present during a supervised protocol, recording the data, ensuring that the protocol is followed correctly, and ensuring the safety of the participant. For these reasons, a supervised protocol is usually suited to trials where sensor data quality is critical. However, when long-term sensor measures are of interest, it is not always practical to conduct a supervised study. It may be inconvenient for a participant to visit a clinic/lab every day for a number of months; similarly, it may not be convenient for a researcher to visit the home of a participant on an ongoing basis. In such cases, an unsupervised protocol may provide the best option. During an unsupervised study, issues such as remote sensor management, data management, and data context are important. Remote management tools may be used to monitor data as it is collected (see the section “Home Deployment Management”). These tools allow researchers to monitor data quality, protocol adherence, and equipment status as the trial is conducted. If data is not monitored using a remote management infrastructure, participants may be required to complete a diary on every day of the study to provide context to the sensor data collected, as described in Case Study 3.

Clinical Benefit

Technological solutions can often be more costly and time-consuming than “care as usual.” However, financial and temporal costs are surmountable if the clinical benefit of using a sensor technology can be demonstrated. A technology that is simply a lower-cost, more portable version of an existing clinical solution can leverage the existing research for the clinical device, provided that the new sensor technology is validated against the clinical device. Such a sensor will be readily accepted by those who already have experience in using and interpreting data from the existing clinical device.
The burden of proof is much higher for technologies that do not have a clinical equivalent. First, the economic and health benefit of using the new measures must be compared to “care as usual” through costly longitudinal trials, such as randomized control trials (RCT). Second, results of these studies must be published in clinically accepted, internationally peer-reviewed journals. Finally, the burden of interpreting the data is higher for a new technology/assessment than for an existing measure that the community-based practitioner has experience in interpreting. This interpretation must be provided by the new technology or by additional training.

Home Deployment Elements

The development of sensor systems suitable for home settings, and to a lesser extent community settings, creates significant challenges, including cost, reliability, sensitivity, practicality of installation, and aesthetic considerations. The relative weight of these factors will vary from application to application. In this section, we focus on the considerations that apply to the development of body-worn or ambient sensing solutions.

Home- and Community-Based Sensing

The availability of low-cost, reliable, and intuitive technologies is critical in enabling community- and home-based solutions. While these technologies may not have the ultra-high resolution of systems found in hospital clinics, they can provide community-based clinicians with a strong indication of whether an issue exists or whether the patient is trending significantly in a manner that warrants referral to a specialist facility for further investigation.
Geriatric medicine makes extensive use of subjective or largely subjective tools to assess the patient. The ability to add insights obtained from patient observation to the result of the tests requires years of specialized training by the clinician. Even if the clinician has the ability to provide an accurate assessment of the patient’s condition, there is no comprehensive quantitative record of the tests. This impacts the ability of the clinician to accurately track whether a prescribed intervention is working appropriately. Sensor systems provide a means to fully capture the spectrum of data available during the tests. Objective measures enable the repeatability and reproducibility of tests, which are important in enabling long-term patient monitoring.
Sensors can play a key role in providing objective cost-effective measurements of patients. New clinical sensor technologies will enable public health policy to move from a reactive to proactive healthcare model. These solutions must process the sensor data, present the data in a manner that enables intuitive interpretation, demonstrate whether and how the patient’s data differs from their comparative population, and allow the clinician to compare the patient’s previous test results. Collectively, these elements can give local health professionals capabilities that have up to now been the preserve of specialist clinicians in specialized clinics. Consequently, the cost of patient treatment should be reduced. Early intervention and proactive patient treatment are often less expensive than reactive treatment in a hospital following a health event.
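The population comparison described above can be sketched as a z-score and percentile against normative values for the patient’s comparative group. The gait-velocity figures below are hypothetical illustrations, not published norms, and a real system would look up age- and sex-specific reference ranges:

```python
from statistics import NormalDist

def compare_to_norms(value, norm_mean, norm_sd):
    """Express a patient's test result relative to a normative population
    as a z-score and a percentile (assuming the norms are ~normal)."""
    z = (value - norm_mean) / norm_sd
    percentile = NormalDist().cdf(z) * 100
    return round(z, 2), round(percentile, 1)

# Hypothetical example: a gait velocity of 0.9 m/s against an assumed
# normative mean of 1.2 m/s (SD 0.2) for the patient's comparison group.
print(compare_to_norms(0.9, 1.2, 0.2))  # (-1.5, 6.7)
```

Presenting the result as “slower than roughly 93 percent of your comparison group” is the kind of intuitive interpretation the text argues these systems must provide.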
The current sensor-oriented approaches fall into two broad categories:
  • Body-worn applications are primarily used in assessment and/or monitoring applications.
  • Ambient applications are typically used to passively monitor a subject’s behaviors and activity patterns or to trigger actuators to initiate actions, such as switching on lights when a bed exit sensor is triggered during the night.
Although both approaches can utilize common technology elements, they will have differing design considerations and constraints such as contact requirements with the human body, long-term deployments, and data processing requirements.

Body-Worn Assessment Applications

Body-worn sensors (BWS) have been used in a wide variety of physiological and kinematic monitoring applications (Catherwood et al., 2010; Ghasemzadeh et al., 2010; Aziz et al., 2006; Greene et al., 2010). BWS have the potential to support the clinical judgment of community-based practitioners through the provision of objective measures for tests that are currently administered in a qualitative manner. Because of their small form factor, BWS can provide on-body measurements over extended periods of time. They can also support flexible protocols. Data-forwarding BWS can stream data up to distances of 100 meters from the sensor to the data aggregator. Data-logging BWS can record data to local data storage, thus allowing location-independent monitoring (Scanaill et al., 2006). However, the design of the on-body sensor attachments is extremely important. The sensors used to capture the data must ensure that the data collected is of sufficient resolution to prevent motion artifacts from corrupting the signal. The method of attachment should also be intuitive and prevent incorrect positioning and orientation of the sensor on the body. Because of these various interrelated requirements, a sensor systems approach will be the most effective method to ensure integrated end-to-end capabilities (see Chapter 3).
For applications that require long-term monitoring, compliance can be challenging. The patient must remember to reattach the sensor if it has been removed at night, during showering, or for charging. In physiological-sensing applications, the electrode must be carefully designed to minimize skin irritation and ensure good signal quality. Given these considerations, body-worn sensors must be carefully matched to the application. Short-term measurements—such as 24-hour EKG/ECG monitoring or a one-off measurement for diagnostic tests—provide good use cases for BWS. Such applications are supervised in some manner by a healthcare professional, ensuring that the sensor is correctly attached and the data is of the required quality.
Kinematic sensors are increasingly used by research groups for supervised motion analysis applications because their size has no impact on the gait pattern of the subject being tested. They can also provide location-independent monitoring by storing data to an onboard memory card. Gait and balance impairment, one of the most prevalent falls risk factors, can be accurately measured using low-cost kinematic sensor (in other words, accelerometer, gyroscope, or magnetometer) technology. Despite the low cost of these technologies, they are rarely used outside of a research or clinical environment. There are a few reasons for this. First, many existing technologies do not provide sufficient context for the results they produce and therefore require a level of expertise to interpret. Second, community-based clinicians do not have the time to set up the technology and perform complex protocols as part of their routine examinations. Finally, these technologies are not developed for, or marketed toward, the community clinician. Therefore, most are unaware of the existence of such technologies. Case Study 1 describes a body-worn kinematic sensing application that was designed and developed with these considerations in mind.

Ambient Sensing

Noncontact sensing systems provide 24-hour monitoring of subjects in their home by connecting various sensing technologies, such as passive or active infrared sensors, to a data aggregator. The data aggregator can provide simple storage of the data for offline analysis or can process and forward the data to a back-end infrastructure. Once data is transferred to a back end, it can be processed in application-specific ways. For example, in an activity of daily living (ADL) application, the data can be used in an inference engine (for example, Bayesian or Markov models) to determine the subject’s ADLs. The determination is based on the interactions detected between the subject and their home environment. Alternatively, the data could be used to identify a change in personal routine, which may indicate the onset of disease. For example, diabetes may be indicated by more frequent visits to the bathroom during the night, or dementia by increasingly erratic movement patterns during the day or night. Other noncontact solutions include the following:
  • Activities of daily living (Wood et al., 2008)
  • Safety (Lee et al., 2008)
  • Location determination (Kelly et al., 2008)
  • Gait velocity (Hagler et al., 2010, Hayes et al., 2009)
  • Cognition/dementia (Biswas et al., 2010)
Ambient sensors can also be used to provide inputs into actuators or other forms of integrated systems. Pressure sensors can be used in a bedroom to detect when someone exits their bed. They can trigger an action such as lighting a path to the bathroom to prevent accidental trips.
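The bed-exit example above can be sketched as a simple event-driven rule, where a pressure-sensor event during the night triggers a lighting actuator. The sensor event names, the actuator interface, and the night window are hypothetical placeholders for whatever hardware API a real deployment uses:

```python
from datetime import time

NIGHT_START, NIGHT_END = time(22, 0), time(6, 30)  # assumed "night" window

def is_night(t):
    """True within the night window, which wraps around midnight."""
    return t >= NIGHT_START or t <= NIGHT_END

def on_bed_sensor_event(event_type, event_time, light_actuator):
    """Rule: a bed exit during the night lights the path to the bathroom."""
    if event_type == "bed_exit" and is_night(event_time):
        light_actuator("pathway_lights", "on")
    elif event_type == "bed_entry":
        light_actuator("pathway_lights", "off")

# Usage with a stand-in actuator that just records the commands issued:
commands = []
on_bed_sensor_event("bed_exit", time(2, 15), lambda dev, state: commands.append((dev, state)))
print(commands)  # [('pathway_lights', 'on')]
```

Keeping the rule logic separate from the actuator interface makes it straightforward to swap in a real home-automation API later.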
From a user perspective, ambient sensing can engender mixed reactions. On one hand, the systems can provide people with a sense of security, especially those who are living alone. On the other hand, they can generate strong negative responses, especially if the person associates the sensor with being monitored by a camera. Good form-factor design can make sensors less obvious, which helps reduce people’s awareness of them. Battery-powered sensors afford great flexibility in placement and can be placed in unobtrusive locations, unlike mains-powered sensors, which must be placed near a power socket.

User Touch Points

The user touch point is playing an increasingly important role in the design and deployment of monitoring and assessment tools, as solutions are no longer limited to large PCs and laptops. There is increasing interest in exploiting the growing capabilities of smartphones and tablets in a variety of domains, including healthcare (Middleton, 2010). More than 80 percent of doctors now use mobile devices such as tablets and smartphones to improve patient care (Bresnick, 2012). Leveraging the capabilities of these low-cost devices for healthcare applications is a logical next step to enable greater access, at lower cost, to previously siloed applications. However, significant focus must be given to the appropriate design of the user interfaces to ensure that applications can be used effectively on smaller screen sizes. Smartphones are already being used by clinicians and medical students to manage e-mails, access online resources, and view medical references. However, for clinical applications, such as viewing lab results, reviewing instrumented tests such as ECGs, or electronic prescribing, usage is significantly lower (Prestigiacomo, 2010). Despite the sensing, processing, and data storage capabilities of smartphones, they are also not yet commonly applied as clinical data capture/assessment devices.
Smartphones and tablets provide intuitive user interaction, integrated sensing, low-power processing, low-cost data acquisition and storage, and wireless connectivity. Another key advantage of smartphones is the ability to extend the functionality of a device by downloading software applications from online app stores or creating custom applications using the software development kit (SDK) provided by the manufacturer.
The tablet form factor provides a natural and intuitive interaction model for older adults and individuals who have some form of physical or cognitive disability (Smith, 2011). The ability to use a tablet requires little or no training. Applications for older adults such as reminiscing (Mulvenna et al., 2011), language translation (Schmeier et al., 2011), and toilet locations (Barnes et al., 2011) have been recently reported in the literature. The design of the interaction model for the application should be given appropriate attention. The benefit of a simple physical interaction with the device quickly evaporates if the application navigation and interaction are not of equivalent simplicity.
One application focus area that benefits from the easy user interaction, location independence, and large screen size afforded by a tablet is cognitive functional testing. Some cognitive tests simply require the subject to re-create a displayed pattern using a pen and paper. Replicating such a test using a standard computing device, such as a laptop or desktop computer, could present significant usability challenges. Those challenges make it difficult to separate the ability to perform the test from the ability to use the laptop/desktop effectively. Tests built on tablets can address many of these usability issues and also allow the participant to take the test in their preferred location in their own home. The integrated sensors on a tablet can also be utilized to improve usability. The tablet’s built-in light sensor, for example, can be used to ensure a consistent visual presentation baseline, by detecting the ambient light conditions and automatically adjusting the screen brightness.
Most smartphones are augmented with integrated inertial sensors, including accelerometers and gyroscopes. The availability of this integrated kinematic sensing capability has led to the development of smartphone applications for biometric detection (Mantyjarvi et al., 2005), activity detection (Khan et al., 2010), and motion analysis (LeMoyne et al., 2010). However, there are some limitations to using the integrated sensors in a tablet/phone device, particularly for applications that require consistent sampling rates. Smartphone devices cannot guarantee a consistent sampling rate or sensor sensitivity. In applications where control of these parameters is essential, interfacing the smartphone/tablet device to a known discrete sensor is a more appropriate design choice than using integrated sensors. The development of an Android-based application with discrete body-worn sensors is described in Case Study 1. Stand-alone sensor hubs for smartphones, operating independently of the phone’s CPU, address many of the current deterministic performance limitations.
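One practical consequence of inconsistent sampling rates is that timestamped samples from a phone’s integrated accelerometer usually need to be resampled onto a uniform time base before standard signal-processing techniques can be applied. A minimal linear-interpolation sketch follows; the 50 Hz default target rate is an assumed value, and a production pipeline would likely use a vectorized library routine instead:

```python
def resample_uniform(timestamps, values, rate_hz=50.0):
    """Linearly interpolate irregularly sampled data onto a uniform grid.

    timestamps: sorted sample times in seconds; values: sensor readings.
    Returns (uniform_times, interpolated_values).
    """
    step = 1.0 / rate_hz
    out_t, out_v = [], []
    t = timestamps[0]
    j = 0
    while t <= timestamps[-1]:
        # Advance j to the source interval [timestamps[j], timestamps[j+1]]
        # that contains the current uniform time t.
        while timestamps[j + 1] < t:
            j += 1
        t0, t1 = timestamps[j], timestamps[j + 1]
        frac = (t - t0) / (t1 - t0)
        out_t.append(round(t, 6))
        out_v.append(values[j] + frac * (values[j + 1] - values[j]))
        t += step
    return out_t, out_v
```

When sampling jitter is large, or when control of sensitivity matters, the text’s recommendation of a discrete external sensor remains the more robust design choice; resampling can only paper over moderate irregularity.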
Televisions have received interest for a number of years as a potential healthcare touch point. Commercial assisted-living products, such as Philips’ Motiva (Philips, 2013), have emerged that are focused on chronic disease management (CDM). Until now, CDM solutions have required the use of a set-top box connected to the TV. The emergence of web-enabled televisions containing integrated CPUs and network connectivity from manufacturers such as Samsung, LG, Sony, and Panasonic will provide a new platform for healthcare content consumption in the future. As health-related devices connect to the cloud, data and other analytics-derived information could potentially be consumed through a “smart TV” web interface. However, it is likely to be a number of years before this platform is sufficiently mature to be a viable health platform (Blackburn et al., 2011). Although the user experience on this platform is improving, it is still far from seamless, and significant challenges still remain in delivering a high-quality user interaction experience.

Participant Feedback

The provision of feedback to the end user raises some interesting questions and conflicts, especially if the technology deployment is primarily research oriented. Participants in a trial generally look for feedback on their performance and context for any results obtained and ask questions such as “Does that mean I did well?” Feedback and the manner in which it is delivered to a participant can provide a critical hook in maintaining participant engagement. However, the advantages of providing feedback must be offset against the potential negative influence on the data set acquired during the course of the study. Users who receive feedback may bias the data by over-engaging with the technology or adapting the way they perform the experiment to improve their scores. Ultimately, both whether feedback is provided and the type of feedback provided will be defined by your research questions.

Home Deployment Management

There are two key overheads in any home-based sensor technology deployment. First, truck roll refers to the time and resources required to physically install, maintain, and uninstall the technology in the participant's home. A clearly defined process should be developed to minimize truck roll time and ensure that the participant is happy with the technology. The process documentation should be continually updated to reflect new insights as they arise. Second, the ability to manage remote sensor deployments is a key requirement for a successful deployment (see Figure 8-3). This includes ensuring that data is collected as defined by the experimental protocol and transported and managed in a secure and reliable manner. It should also be possible to remotely debug and update the sensors and any associated technology as required. Remote management tools enable the deployment to be managed in an efficient and proactive manner. Automated monitoring identifies issues at the earliest juncture, thus minimizing the potential for data loss. The system design should allow for the loss of remote connectivity and provide local buffering of data until connectivity can be restored.
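The local-buffering requirement can be sketched as a simple store-and-forward queue on the aggregator. This is an illustrative design only, not the implementation used in any of the deployments described here; the class and method names are hypothetical:

```python
from collections import deque

class StoreAndForwardBuffer:
    """Buffer sensor readings locally until the remote server acknowledges them."""

    def __init__(self, send_func):
        self._queue = deque()
        self._send = send_func  # callable returning True on acknowledged delivery

    def add(self, reading):
        """Queue a reading; it survives locally while connectivity is down."""
        self._queue.append(reading)

    def flush(self):
        """Attempt delivery in order; stop at the first failure so nothing is lost."""
        delivered = 0
        while self._queue:
            if not self._send(self._queue[0]):
                break  # connectivity lost; keep remaining readings buffered
            self._queue.popleft()
            delivered += 1
        return delivered

    def pending(self):
        """Number of readings still awaiting delivery."""
        return len(self._queue)
```

Readings are only removed from the queue once delivery is acknowledged, so a dropped connection mid-flush leaves the remaining data intact for the next attempt.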
Finally, the remote management tools should provide data quality checks. Automated processes can be put in place to interrogate the validity of received data packets/files before the data is committed to the central data repository. These processes often take the form of scripts that can check the file/packet for errors such as missing data, null data, and outliers.
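A minimal sketch of such a check in Python might look like the following; the field names and value range are illustrative assumptions, not the actual schema of any deployment described here:

```python
def validate_record(record, expected_fields=("timestamp", "sensor_id", "value"),
                    value_range=(-1000.0, 1000.0)):
    """Return a list of problems found in a single sensor record (a dict)."""
    problems = []
    for field in expected_fields:
        if field not in record:
            problems.append(f"missing field: {field}")
        elif record[field] is None:
            problems.append(f"null field: {field}")
    value = record.get("value")
    if isinstance(value, (int, float)):
        lo, hi = value_range
        if not (lo <= value <= hi):
            problems.append(f"outlier value: {value}")
    return problems

def filter_valid(records):
    """Split records into (valid, rejected) before committing to the repository."""
    valid, rejected = [], []
    for rec in records:
        (rejected if validate_record(rec) else valid).append(rec)
    return valid, rejected
```

Rejected records would typically be logged and flagged for manual review rather than silently discarded.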
The ability to remotely manage computer equipment is standard practice in enterprise environments, where it significantly improves operational efficiencies and reduces costs. The heterogeneous nature of home and community technology deployments makes delivery of this type of functionality more challenging than in the controlled and homogenous enterprise environment. However, many of the existing technologies available in the enterprise environment can be adapted successfully to this domain if an appropriate understanding of the issues involved is developed and assumptions are continuously challenged.
Various sensor data management frameworks have been reported. These frameworks have been proposed to manage various aspects of the network, such as data accuracy (Ganeriwal et al., 2008), data coordination (Melodia et al., 2005), and data queries (Li et al., 2008). However, these efforts have been focused between tier 1 (sensor) and tier 2 (aggregator) with little focus on external data transfer. Balazinska suggests that most sensor network researchers have paid too much attention to the networking of distributed sensors and too little attention to tools that manage, analyze, and understand the data (Balazinska et al., 2007).
Remote data transport from sensor networks normally requires management frameworks, which are generally bespoke in nature, and consequently require significant development time, cost, and support overhead. These frameworks typically do not provide management tools to support the remote administration of the data aggregator, back-end management reporting, and configuration and management of exceptions. Online tools such as Xively ( xively.com ) and iDigi ( www.idigi.com ), which allow users to remotely manage their sensor deployment from sensor to the cloud, may prove to be significant.

Remote Deployment Framework

The remote deployment framework (RDF) (McGrath et al., 2011) is an effort to address these issues, based on lessons learned during the deployment of remote sensor technologies by TRIL researchers. The primary function of the RDF is to provide a generic framework to collect, transport, and persist sensor data in a secure and robust manner (Walsh et al., 2011). The RDF has a service-oriented architecture and is implemented using Java enterprise technologies. The framework enables secure and managed data collection from remotely deployed sensors to a central location via heterogeneous aggregators. The RDF also provides a full set of tools to manage a home deployment, including remote client monitoring, data monitoring, remote client connectivity, and exception notification management.
The RDF is based on the realization of the five key requirements for a home deployment management framework:
  • Platform independence: The framework should work on a range of hardware platforms and operating systems including Windows and Linux. Programmable interfaces should support C/C++, .NET, Java, and scripting languages. This ensures that a sensor system architect has a wide range of design options available.
  • Interoperability: The framework supports open standards (such as WS-I) where possible to ensure future compatibility, integration, and security.
  • Data independence: The RDF is data agnostic; its data model can support a variety of usage models.
  • System scalability and extensibility: The framework supports variable numbers of sensor nodes, data volumes, and the seamless addition of new functionality. A single instance of the RDF should support multiple discrete sensor trials.
  • Security: The framework must support data confidentiality at both the transport and application layers. It should also offer multiple authentication/authorization mechanisms and a secure audit trail to ensure full traceability from the sensor node to the data store and subsequent data retrieval.
The RDF embraces the philosophy of open standards and the use of nonproprietary technologies to ensure the ease of integration for third parties and research collaborators. The RDF was applied by TRIL for ambient home monitoring and the remote monitoring of chronic obstructive pulmonary disease (COPD) patients.

The Prototyping Design Process

In TRIL, ethnographers spend time in the homes and communities of end users to understand their lives and experiences. The information gathered during this fieldwork is distilled and presented to multidisciplinary teams to inspire concept development and inform the development of research prototypes. These brainstorming data downloads are typically led by designers and/or ethnographers, and the teams include clinical researchers, engineers, and research scientists. The best concepts from these sessions are explored further using design tools, such as storyboards and low-fidelity models, before they are presented to potential end users during focus groups. Feedback from the focus group is presented back to the multidisciplinary team and applied to further refine the concepts. This refinement and feedback loop may be repeated several times, with increasingly sophisticated prototypes, until a final prototype is developed for deployment into the home or community.
This design process is dependent on gaining an understanding of both the user and their environment and then applying this knowledge, along with the collective knowledge of multidisciplinary stakeholders, to develop an appropriate technology for the purpose.

Design for and with the End User

Understanding the end user is critical to developing technology they will be motivated to use and interact with. Physical and cognitive abilities/limitations of the user, their daily routine, and the user’s previous experiences with technology all need to be known. In understanding end users, it is important to recognize that no two users are the same. Therefore, individuals within the target grouping should be examined in depth. The larger grouping to which they belong should be broadly examined as well. Figure 8-4 shows an example of interaction testing results with target users to confirm design choices and usability.
Unfortunately, some designers and engineers have little understanding of the unique needs of their target user. For example, some of the early generation of insulin pens had a small liquid crystal display (LCD) that was difficult for diabetics to read (Burton et al., 2006). The product designers had failed to consider that diabetics often suffer from poor eyesight. As a result, these early versions of the devices had to be redesigned at significant cost with larger displays featuring improved contrast that were easier for diabetics to read. Therefore, strategies such as ethnographic fieldwork and engaging end users in the design process are necessary to get to know the end users. In TRIL, projects encompass an iterative participatory design process, actively involving older adults in "co-designing" technology for older adults. Designing technology in this way contributes to a higher probability of usable technology and fosters compliance in long-term usage by addressing specific needs.
There are also challenges in co-designing technology with nontechnical end users. These end users can struggle to understand the potential benefits of new technology. That struggle can limit their ability to actively contribute to a discussion on technological requirements. We found that using storyboards, scenarios, and personas in focus groups can address this issue by shifting the focus from the technology itself to what the technology can provide in terms of benefitting the individual. These techniques facilitate open discussion among participants and ultimately help the design team identify key features of the technology. We have also found it necessary to educate users to be critical in their feedback. Ethnographers are experts in getting beyond the overly positive, superficial level of feedback and “setting the stage” for critical feedback.

Design with Multidisciplinary Team Members

The tenet of assessment and intervention technologies is that they address a clinical need, whether physical, social, or cognitive. Technology should not be developed for the sake of technology. As such, the design process must begin with expert input from health professionals to determine clinical requirements. In designing Case Study 1, for example, the process began with a geriatrician, who had expertise in falls of older adults, outlining the desired outcomes of the project. These outcomes were then translated into design and engineering requirements through a series of workshops with multidisciplinary team members.
In multidisciplinary teams, each team member brings their own expertise and experiences to the project. The research scientists typically ensure that the scientific integrity of the study is maintained. The ethnographic researchers ensure that any solution will be practical and fit into an end user’s life. The engineers and designers will identify and develop a technology to answer the research question. As a multidisciplinary team works together on more and more projects, these roles may evolve and overlap. These overlaps can be beneficial as team members gain confidence in providing feedback on not only their domain of expertise but also the domains of others. There are often disagreements within multidisciplinary teams because of the differing priorities of team members. For example, the most sophisticated technology may not be the most appropriate for the user, the research question cannot be answered using technology alone, or the proposed protocol may be too restrictive for the user. In addressing such disagreements, the needs of the user are always the priority. Solutions that prioritize sophisticated technology or strict study protocols over a user’s needs risk noncompliance. That in turn impacts the quantity and quality of the data captured during the study (Bailey et al., 2011).

Data Analytics and Intelligent Data Processing

The process of examining, processing, and modeling data to uncover clinically meaningful patterns and relationships is essential to the success of any clinical research trial. The topic of data analytics and intelligent data processing (reviewed in Chapter 5) is complex and diverse. A brief overview of commonly used techniques in biomedical data analysis is provided in this section.
As previously described in Chapter 5, an important step preceding data analysis is removing noise and unwanted artifacts due to movement or electronic equipment. Data should be filtered to remove frequency components outside the active bandwidth of the signal. For example, in gait analysis, the kinematic data of interest are typically at frequencies below 100 Hz. The data should be low-pass filtered to remove the influence of higher frequency artifacts. Similarly, when looking at physiological data such as the electromyogram (EMG), where the frequency range of interest would be 20–450 Hz, data should be band-pass filtered to remove the low-frequency movement artifacts and high-frequency noise.
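The EMG band-pass step described above can be sketched using SciPy's Butterworth filter design; the sampling rate and filter order here are illustrative assumptions, not parameters taken from any specific study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_emg(signal, fs=1000.0, low=20.0, high=450.0, order=4):
    """Band-pass filter raw EMG to the 20-450 Hz band of interest."""
    nyq = 0.5 * fs  # Nyquist frequency for normalizing the cutoffs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt applies the filter forward and backward for zero phase shift
    return filtfilt(b, a, signal)
```

A low-pass equivalent for kinematic gait data would use `btype="low"` with a single cutoff below the Nyquist frequency.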
To interpret clinical data, it is necessary to extract relevant features. These features are dependent on the type of data and the protocol under which it was collected. Typically, features would be computed that have clinical relevance. For example, in gait analysis, features are determined that may relate to the timing of strides, the distance traveled during each stride, or the coordination of left and right legs during gait. In EMG, the median frequency of the power spectrum could be examined to provide insight into muscle fatigue over the duration of an experiment. In electrocardiography, the Q, R, and S wave (QRS) events may be detected for each heartbeat and used to examine heart rate variability.
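As an illustration of this kind of feature computation, consider stride timing: given the heel-strike timestamps for one foot, stride times and their variability follow directly. This is a hypothetical sketch, not the TRIL feature-extraction algorithms themselves:

```python
import numpy as np

def stride_times(heel_strike_times):
    """Stride times (s): intervals between successive heel-strikes of one foot."""
    return np.diff(np.asarray(heel_strike_times, dtype=float))

def gait_features(heel_strike_times):
    """Summary features of the kind fed into statistical gait models."""
    st = stride_times(heel_strike_times)
    return {
        "mean_stride_time": float(np.mean(st)),
        # coefficient of variation: a common measure of stride-to-stride variability
        "stride_time_cv": float(np.std(st) / np.mean(st)),
    }
```

Analogous computations yield spatial features (stride length from integrated velocity) or, in the ECG case, inter-beat intervals from detected R-wave times.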
Relatively simple statistical techniques may be used to examine differences in a specific feature for subgroups of a cohort. For example, you may be interested in differences between men and women, between young and old participants, or between healthy and pathological participants. Simple t-tests, analysis of variance (ANOVA), or rank sum tests could be used for this investigation.
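For instance, a between-group comparison of a single gait feature might be run with SciPy; the data below is synthetic and purely illustrative:

```python
import numpy as np
from scipy import stats

# Synthetic stride-time samples for two illustrative subgroups
rng = np.random.default_rng(42)
young = rng.normal(1.00, 0.05, size=30)  # mean stride time 1.00 s
old = rng.normal(1.10, 0.05, size=30)    # mean stride time 1.10 s

t_stat, p_t = stats.ttest_ind(young, old)     # independent-samples t-test
u_stat, p_u = stats.mannwhitneyu(young, old)  # rank-sum (nonparametric) test
f_stat, p_f = stats.f_oneway(young, old)      # one-way ANOVA
```

The rank-sum test is the appropriate fallback when the feature's distribution within each subgroup is clearly non-normal.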
Alternatively, if the study aims to classify participants by subgroups, it would be more appropriate to combine a selection of features using regression or discriminant analysis. If the dependent variable is numerical (e.g., age), linear or nonlinear regression analysis would be most appropriate. On the other hand, if the dependent variable is categorical (e.g., gender), either logistic regression or discriminant analysis could be employed, depending on the nature of the predictive features. If the number of features is large relative to the cohort size, it may be necessary to reduce the number of features to avoid overfitting and to produce a more robust model that will generalize to unseen data. This may be achieved using either feature extraction or feature selection. Feature extraction methods, such as principal component analysis (PCA), transform the feature set to a smaller number of uncorrelated components that describe as much variance in the data as possible. Feature selection methods, such as forward feature selection, sequentially add features and evaluate model performance at each step, continuing until model performance no longer improves. Model performance should be cross-validated to test robustness. A commonly used technique is k-fold cross-validation (Kohavi, 1995, Han et al., 2000). Using this technique, data is divided into k subsections or folds, the model is trained using k-1 folds, and its performance is tested on the remaining fold. The process is repeated k times, with the model being tested on a different fold each time.
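These steps can be sketched with scikit-learn; the dataset below is synthetic, and the model, feature counts, and fold counts are illustrative choices rather than those used in any TRIL study:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic cohort: 100 participants, 20 candidate features, few informative
X, y = make_classification(n_samples=100, n_features=20, n_informative=4,
                           random_state=0)

model = LogisticRegression(max_iter=1000)

# Forward feature selection: add one feature at a time, keeping the best subset
selector = SequentialFeatureSelector(model, n_features_to_select=4,
                                     direction="forward", cv=5)
X_selected = selector.fit_transform(X, y)

# Tenfold cross-validated accuracy of the model on the reduced feature set
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
mean_accuracy = cross_val_score(model, X_selected, y, cv=cv).mean()
```

Replacing the selector with PCA (`sklearn.decomposition.PCA`) would give the feature-extraction alternative described above.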
Machine learning techniques can be split into three categories: supervised, unsupervised, and reinforcement learning. The preceding discussion primarily concerns supervised learning, where the identities of each feature and dependent variable are known. This technique has clinical applications in identifying physical, cognitive, or psychological pathologies based on clinical or sensor-derived features. In unsupervised learning, the aim is to extract hidden trends and predictive information from large databases of unknown variables using a range of methods, including clustering. Applications of unsupervised learning range from medical image processing to genome mapping to population analysis. In reinforcement learning, feedback is provided to a classifier model on whether a decision was correct or incorrect, and this feedback is used to train the model over time.
Ultimately, many different approaches can provide clinically insightful results. The method used to analyze a data set must be carefully chosen based on the nature of the dependent variables and the predictive features, and must always be guided by the research objectives of the study. In choosing the most appropriate method, you will achieve results that are both scientifically valid and clinically relevant.

Case Studies

Over the last number of years, the TRIL Centre has deployed a variety of technology platforms into several hundred homes. We present three representative case studies here that demonstrate the application of sensor technologies and techniques already described. The case studies describe sensor-based applications either for assessments or for interventions.

Case Study 1: Quantified Timed Get Up and Go (QTUG) Test

The standard Timed Up and Go (TUG) test is a quick and popular clinical assessment used to assess the basic mobility skills of older adults. The test requires the subject to get up from a seated position, walk 3 meters (approximately 10 feet), turn, return to the chair, and sit down, as quickly and safely as possible. This test challenges several aspects of a subject's gait and balance control, but only a single measure, the time to complete the test, is objectively measured. The exact timing thresholds can vary, with cut-off values of 10 to 30 seconds used to distinguish between fallers and non-fallers (Beauchet et al., 2011). For example, in the United States the Centers for Disease Control and Prevention (CDC) specifies that a test time of greater than 12 seconds indicates a high falls risk. TRIL researchers focused on developing a clinical tool that used data captured from body-worn kinematic sensors on a subject undertaking the TUG test in an effort to quantify their falls risk, as shown in Figure 8-5. The new test was called the Quantified Timed Up and Go, or QTUG.
Initially, a PC application was developed to collect data from the sensors as patients completed the TUG test. This data was used to develop algorithms that could extract features of interest from the kinematic signals. The algorithms generate a large set of features (more than 45), such as stride time, that were used to build statistical models providing a prospective estimation of falls risk within a two-year period.

Algorithm Development

The first phase in the development of the QTUG prototype was the implementation of an adaptive threshold algorithm in Matlab that was used to reliably identify the initial and terminal contact points in the data stream (in other words, heel-strike and toe-off events) from the body-worn kinematic sensors (located on the front of the lower shank on each leg), as shown in Figure 8-6.
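As a simplified, hypothetical illustration of adaptive-threshold event detection (the published TRIL algorithm is considerably more elaborate), consider detecting mid-swing peaks in a shank angular-velocity signal, where the detection threshold adapts to the amplitude of each individual recording:

```python
import numpy as np

def detect_swing_peaks(gyro, fs, threshold_fraction=0.6, min_separation=0.5):
    """Return sample indices of local maxima that exceed an adaptive threshold
    (a fraction of the recording's maximum) and are at least min_separation
    seconds apart. Gait events are then located relative to these peaks."""
    signal = np.asarray(gyro, dtype=float)
    threshold = threshold_fraction * signal.max()  # adapts to each recording
    min_gap = int(min_separation * fs)
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks
```

In practice, heel-strike and toe-off would be identified as characteristic minima on either side of each mid-swing peak; the fraction and separation values above are illustrative.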
The algorithms developed were used to identify the following parameters from the gait cycle:
  • Temporal gait parameters
  • Spatial gait parameters
  • Turn parameters
  • Tri-axial angular velocity parameters

Model Development

The data was then stratified by gender and age into three separate groups: males, females younger than 75 years, and females aged 75 years or older. Sequential forward feature selection combined with regularized discriminant (RD) classifier models was used to generate each of the three predictive classifier models (males, females <75, and females ≥75) for estimating the risk of future falls in older adults. A grid search was used to determine the optimum feature set and classifier model parameters for each of the three classifier models.
Models were validated using tenfold cross-validation to provide a statistically unbiased estimate of performance. The output is the probability of a fall for each patient. The models developed were validated using a cross-sectional study based on the falls history collected over a five-year period in the TRIL clinic, where the sample size was as follows: N=349 (103 male, 246 female), mean age 72.4±7.4 years; 207 of the subjects had a self-reported history of falling (Greene et al., 2010). Additionally, a prospective study was conducted using data collected in the clinic (two-year falls follow-up data). The sample size was as follows: N=226, mean age 71.5±6.7 years, 164 female, 83 fallers. Results obtained through cross-validation yielded a mean classification accuracy of 79.69 percent (mean 95 percent CI: 77.09–82.34) in prospectively identifying participants who fell during the follow-up period. The results were significantly more accurate than those obtained using two standard measures of falls risk, the manually timed TUG and the Berg balance score (Bogle et al., 1996), which yielded mean classification accuracies of 59.43 percent (95 percent CI: 58.07–60.84) and 64.30 percent (95 percent CI: 62.56–66.09), respectively (Greene et al., 2012).
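While the exact TRIL models are not reproduced here, the general procedure (a regularized discriminant classifier tuned by grid search and validated with tenfold cross-validation) can be sketched with scikit-learn, whose QuadraticDiscriminantAnalysis exposes a regularization parameter. The data below is synthetic, standing in for a faller/non-faller feature matrix:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Synthetic stand-in: 226 "participants", 10 features, binary faller label
X, y = make_classification(n_samples=226, n_features=10, n_informative=5,
                           random_state=1)

# Grid search over the regularization strength, scored by tenfold CV accuracy
grid = GridSearchCV(
    QuadraticDiscriminantAnalysis(),
    param_grid={"reg_param": [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]},
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=1),
    scoring="accuracy",
)
grid.fit(X, y)
best_reg = grid.best_params_["reg_param"]  # selected regularization strength
cv_accuracy = grid.best_score_             # mean tenfold cross-validated accuracy
```

`predict_proba` on the fitted estimator would then yield the per-patient fall probability described above.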

Prototype Development

The algorithms and statistical models were then converted into an Android application that ran on a 7" tablet. The application had an intuitive, easy-to-use interface that stepped the user sequentially through the process of running the QTUG test.
The kinematic body-worn sensors stream data via Bluetooth to the application running on the Android tablet where it is displayed (as shown in Figure 8-7a). The data is processed using the algorithms and models to calculate a falls risks estimate for the subject under test (as shown in Figure 8-7b).
Ethnographic researchers ran focus groups with various types of clinical professionals to collect feedback throughout the prototype design and evaluation process. The focus groups revealed that clinical falls experts preferred a tool that provided the details of all the parameters so that they could combine this data with other data from their supplementary assessments and apply their expertise to determine a subject’s falls risk. As a result, the application was modified to provide user access to detailed parametric data that can be used to develop an understanding of the underlying cause of a falls risk. Community-based practitioners preferred a tool that interpreted the data for them. This feedback was incorporated into the prototype to provide a percentage-based score for community practitioners and an option for clinicians to view details of each measure.
The prototype demonstrated the implementation of a falls risk assessment tool that is simple, portable, and low cost. Findings from clinical focus groups have shown that the prototype could form the basis for a screening tool for community-based falls risk screening because of its ease of use and intuitive presentation of results. The QTUG prototype encapsulated how a standard clinical test can be enhanced through the use of technology and how user-centered design can be applied to develop technology suitable for community use (Greene et al., 2010). The success of the QTUG prototype has led to the formation of a TRIL start-up company that is planning to bring the technology to market.

Case Study 2: Ambient Assessment of Daily Activity and Gait Velocity

Gait velocity has long been correlated with falls history, and many studies indicate that fallers tend to walk more slowly than nonfallers. However, gait velocity may have been recorded in a clinical setting days, weeks, or sometimes months after a fall. It is difficult to establish whether this is the velocity at which the person walked prior to the fall or a new velocity adopted by the faller following the fall. The purpose of this study was to determine whether gait velocity changes prior to a fall event. If so, could a change in gait velocity predict an increased risk of falling?
Daily measurement of velocity was a key requirement for this project; therefore, an in-home ambient sensing approach was selected to ensure long-term compliance. A home-monitoring system was developed using PIR wireless sensors and an Internet-enabled aggregator (see Figure 8-8). Velocity was measured in two ways in the home. First, a velocity rail was developed to measure the time taken to walk a fixed distance in the home. Second, sensors were placed in multiple doorways to measure the time taken to pass through the doorway. The PIR sensors measured velocity from the difference between the time at which the person was first detected and the time at which the person was last detected. The sensors transmitted their data via an 802.15.4 radio to a laptop-based data aggregator, which forwarded data to a remote server. Data was collected from eight (one male, seven female) older adults (aged 67–87) for an average 36-day period per participant. No significant correlations were found between health status, as recorded in the bi-daily diaries, and gait velocity measurements. However, a longer trial with a larger number of homes may reveal such correlations.
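The velocity estimate itself is straightforward: the rail length divided by the interval between the first and last PIR detection. A minimal sketch, with a hypothetical function name:

```python
def pir_velocity(detection_times, rail_length_m):
    """Estimate gait velocity (m/s) over a fixed-length velocity rail from
    PIR detection timestamps (seconds)."""
    if len(detection_times) < 2:
        raise ValueError("need at least two detections to estimate velocity")
    duration = max(detection_times) - min(detection_times)
    if duration <= 0:
        raise ValueError("zero or negative detection interval")
    return rail_length_m / duration
```

For example, detections at 10.0 s and 12.5 s over a 3 m rail give 1.2 m/s. The same calculation applies to doorway sensors, with the doorway's effective detection width in place of the rail length.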
Context is critical in any assessment. The inability of PIRs to distinguish between different people was a key consideration in planning the trial and analyzing the trial data. Various strategies were used to overcome these issues, including recruiting only those people who lived alone, developing physical maps of the participant’s home and temporal maps of their routines within the home, and asking participants to keep diaries rating how well they felt and if they had visitors.
Deploying ambient technology into unsupervised settings for even short periods of time required significant planning, preparatory work, and maintenance by the deployment teams. There was also a high demand on the participants to be available for predeployment visits, installation visits, and uninstall visits, as well as completing their bi-daily diaries. This high overhead made it difficult to manage more than ten homes at any given time, and the demands on the participants made deployments longer than eight weeks difficult. In working through the challenges in the study, key improvements were identified and applied in subsequent home deployments. Uploading data to a remote server allowed the home deployment teams to detect sensor and data issues within 24 hours of faults occurring. The ability to remotely log in to the in-home data aggregator significantly reduced the time spent in the home troubleshooting. Finally, participant information booklets reassured the participants about the study's goals, what was required of them, and who the members of the home deployment team were (Walsh et al., 2011). Because of the overhead and difficulties in deploying ambient monitoring solutions in people's homes, there has been interest in utilizing other sources of data within homes to provide ambient intelligence monitoring. The rollout of smart meters for monitoring energy and other utilities is an area that researchers have been actively investigating, as it provides a data stream offering insights into regular life behaviors without the need to deploy additional sensors (see Chapter 9).
Ambient monitoring applications, irrespective of whether they use sensors deployed in the home, smart meters, or other forms of data, raise issues of ethics, privacy, and security. A clear legal framework within which the data, and access to it, is controlled will be required before the potential for innovation in this area can be exploited.

Case Study 3: Training for Focused Living

The Alertness: Training for Focused Living project aimed to raise awareness of the concepts of alertness and attention and their importance in our everyday lives. The project was a four-week, self-administered, home-based training program that taught older adults a technique to modify their alertness at will. The program was developed through an iterative participatory design process with older adults. The first iteration of this study involved deploying a low-cost version of existing clinical technology into the homes of older adults. Users were taught to perform the alertness technique and shown how to use the technology in a clinical setting before the technology was installed in their homes. This technology—a laptop and a handheld galvanic skin response (GSR) sensor—gave the user biofeedback via a graph on the laptop screen. Ethnographic evaluation of this approach discovered that participants found using a laptop and interpreting graphs difficult, and they applied the technique only when they had the technology. These findings demonstrated that the project aim of improving everyday alertness was not being achieved.
A new revision of the system was developed in which participants received an Alertness Training Kit in the mail, consisting of the biofeedback device, an audio CD, and a guidebook that provided education about alertness, reflective questions, and instructions for the self-alerting technique (see Figure 8-9). On receipt of the home deployment kit, participants were encouraged to use the guidebook to explore their perception of alertness. The following week, they used the audio CD, along with the guidebook, to learn the self-alerting technique. In the third and subsequent weeks of the trial, participants were given the new biofeedback device, consisting of a SHIMMER GSR sensor, to provide real-time feedback on how well they were performing the technique. The biofeedback device had a user-friendly, cushion-like form factor, a single on-off switch, and a built-in LCD that showed the user how well they were performing the technique. Data from this study was saved to a built-in micro-SD card on the device and analyzed after the study was completed.
In adopting the ethnographic findings and re-imagining both the technology and the study methodology, a more reliable, user-friendly system was developed. The new technology allowed participants to quickly and easily practice their self-alertness technique whenever they had time. The new methodology led to participants who were more aware of their alertness and who were both willing and able to raise their alertness in their everyday lives—thus achieving the research aims. The low cost of the technology and the mail-drop deployment technique enabled researchers to deploy systems into many more homes than would have been possible using a PC-based technology and a home deployment team (Greco et al., 2011).

Lessons Learned

Because of the heterogeneous nature of most homes, deploying sensor technologies, whether ambient or requiring direct participation by a subject, is challenging for a variety of reasons. Planning and preparation are instrumental to success. When each deployment or set of deployments is completed, carefully capturing what worked and what did not work is important. Insights into both the positive and negative aspects of any deployment will help ensure a higher probability of success in the future. The following sections discuss some of the key insights collected by researchers both at Intel and the TRIL Centre over many years of home- and community-based sensor deployments.

The Installation Process

Technology installation must be as expeditious as possible. After about 90 minutes, participants can tire, become impatient, or have to leave for other appointments. Even simple installations can easily exceed this threshold. Therefore, advance preparation by the engineer is critical to minimizing the in situ installation time. Complex installations involving the calibration of sensors can take well over two hours and are generally not appropriate. Photographic documentation or mapping of the pre- and postinstallation areas is important, especially when addressing issues that may arise after the uninstall process.

Key Sensor Findings

Sensors must respect the privacy and security of subjects and users. Unencrypted wireless sensors are open to potential eavesdropping outside the home. The sensor should avoid transmitting personally identifiable information. Ideally, an eavesdropper should not be able to gain any information about the users or their behavior. This is of particular concern when a sensor is deployed in the home or worn for an extended period of time.
In applications where the sensor system measures human behavior, it must minimally disturb that behavior. Sensor maintenance must also be considered: body-worn sensors that require daily recharging require the wearer to remember to take the device off, charge it, and put it back on. That can be challenging for many people. Paradoxically, remembering to charge a device every other day may be much more difficult than remembering to charge it daily.
Design can play a key role in the success of home-based sensor systems. The design of the sensor should not cause needless privacy concerns for users. Designs that feature a blinking LED may result in a study participant worrying that there is a camera in the room, even if the sensor doesn’t behave anything like a camera. Design is particularly important for body-worn sensors, which have to be worn for extended periods of time or are externally visible to others. Older adults are resistant to any visible indications of dependency or failing health. Aesthetic considerations are not as important for single-use or ambient sensors, but design for usability is important. Usability includes considerations such as how the sensor is attached to the person’s body (see Chapter 10). For example, if a strap is required to attach a sensor to a limb for a motion analysis application, that strap must be designed to securely hold the sensor in the correct orientation throughout the assessment/monitoring period, while still allowing for ease of attachment and comfort. The design of the sensor can have a direct impact on the comfort level for the subject. For example, sensors designed for wearing to bed must be particularly comfortable to avoid disturbing the wearer’s sleep. Ambient sensors should be designed with as few placement restrictions as possible. Wall-mounted sensors have to share space with existing photos, paintings, bookcases, tapestries, vases, and other wall decorations and must visually co-exist, both in form and in color, with the room decor. Light, pastel-colored devices have a better chance of blending in than those with darker or highly saturated colors.
The deployed system should have a minimal requirement for site visits by the deployment engineer. Poor battery life of wall-mounted sensors can mean regular visits by the deployment engineer. These visits disrupt participants’ lives and can remind the users that they are in a study, reducing the invisibility of the sensors and consequently the validity of the data.
The industrial, scientific, and medical (ISM) 2.4 GHz radio frequency band is crowded with various radios, such as Bluetooth devices, some Digital Enhanced Cordless Telecommunications (DECT) phones, home networks, microwave ovens, baby monitors, smart TVs, and so on. Some of these can cause serious interference with sensor transmissions. Resistance to potential interference sources depends on radio choice. For example, the Bluetooth and 802.11 protocols can frequency-hop to avoid bands that experience interference. Similarly, Ultra Wide Band (UWB) radios find unused bands to work in. In contrast, 802.15.4 does not have a frequency-hopping protocol, making it susceptible to interference sources. If a non-frequency-hopping protocol is used, manual selection of the channels may be required to avoid local sources of ISM interference. It is important that any deployed technology can co-exist not only with environmental interference, as just described, but also with other instances of the deployed technology. This is particularly important in deployments in which two households or two clinical systems are within radio range of each other.
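The manual channel selection just described can be approached as a simple site survey at install time: sample the noise floor on each candidate channel and pick the quietest. The following is a minimal sketch; the per-channel readings are illustrative values, and a real deployment would obtain them from the radio's RSSI or energy-detect register.

```python
# Sketch: choosing the quietest IEEE 802.15.4 channel at install time.
# The noise-floor values below are hypothetical; channels overlapping
# busy Wi-Fi networks typically read higher (less negative) than others.

noise_floor_dbm = {
    11: -72, 12: -70, 13: -75, 14: -88, 15: -91, 16: -74,
    17: -73, 18: -76, 19: -90, 20: -92, 21: -77, 22: -74,
    23: -75, 24: -89, 25: -93, 26: -94,
}

def quietest_channel(readings):
    """Return the channel with the lowest (most negative) noise floor."""
    return min(readings, key=readings.get)

print(quietest_channel(noise_floor_dbm))  # channel 26 in this sample scan
```

In practice the survey should be repeated at different times of day, since interference sources such as microwave ovens are intermittent.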
Radio messages are often the largest consumer of power in a sensor. Duty cycle designs should therefore focus on minimizing the number and frequency of communications. However, in designs that feature very low messaging rates, it can be difficult to differentiate between a failed sensor and a sensor that simply has not needed to say anything. To avoid this problem, each sensor should periodically send a “heartbeat” or “keepalive” message. This data can be used by the back-end monitoring system, such as an RDF, to generate an alert for a failed sensor when a predefined window for receipt of a “heartbeat” message is exceeded.
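The heartbeat check on the back end reduces to tracking the last time each sensor was heard from and flagging any sensor whose silence exceeds the allowed window. A minimal sketch follows; the two-hour window and the sensor names are illustrative assumptions, not values from the TRIL deployments.

```python
import time

# Assumed policy: alert if no data or heartbeat message for 2 hours.
HEARTBEAT_WINDOW_S = 2 * 3600

last_seen = {}  # sensor_id -> timestamp of last message received

def record_message(sensor_id, timestamp):
    """Update on every message, whether data or heartbeat."""
    last_seen[sensor_id] = timestamp

def failed_sensors(now):
    """Sensors whose last message is older than the allowed window."""
    return [sid for sid, ts in last_seen.items()
            if now - ts > HEARTBEAT_WINDOW_S]

# Example: one sensor last reported 3 hours ago, another 10 minutes ago.
now = time.time()
record_message("hall_pir", now - 3 * 3600)
record_message("kitchen_pir", now - 600)
print(failed_sensors(now))  # ['hall_pir']
```

The window should be set to a small multiple of the heartbeat interval so that a single lost heartbeat message does not raise a false alert.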

Data Quality

From a technology perspective, data quality is influenced at the sensor layer, application layer, aggregation/transport layer, and analysis layer. Most sensors use low-power radios, which suffer frequent interruptions in service. For this reason, the data sent to and from sensors must either be transferred reliably (for example, via a positive-acknowledgement-and-retransmit protocol) or be transmitted redundantly (for example, via a fixed number of retransmissions per message, without acknowledgment). The sensor protocol should also enable the receiver to detect errors. Packet checksum or Cyclic Redundancy Check (CRC) bytes are generally adequate to detect common transmission errors.
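The framing and CRC check described above can be sketched briefly. This example uses the 16-bit CRC available in Python's standard library (`binascii.crc_hqx`, a CCITT-style CRC) purely for illustration; an embedded sensor would typically compute an equivalent CRC in firmware.

```python
import binascii
import struct

def frame_packet(payload: bytes) -> bytes:
    """Append a 16-bit CRC so the receiver can detect corruption."""
    crc = binascii.crc_hqx(payload, 0xFFFF)
    return payload + struct.pack(">H", crc)

def check_packet(packet: bytes):
    """Return the payload if the CRC matches; None means drop and
    (in a positive-acknowledgement scheme) request retransmission."""
    payload, crc = packet[:-2], struct.unpack(">H", packet[-2:])[0]
    if binascii.crc_hqx(payload, 0xFFFF) == crc:
        return payload
    return None

pkt = frame_packet(b"\x01\x02temp=21.5")
assert check_packet(pkt) == b"\x01\x02temp=21.5"   # intact packet accepted
corrupted = bytes([pkt[0] ^ 0xFF]) + pkt[1:]
assert check_packet(corrupted) is None             # bit errors detected
```

Note that a CRC only detects errors; whether the corrupted packet is retransmitted on request or simply covered by redundant retransmissions is the protocol decision discussed above.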
For applications such as an assessment carried out in a doctor’s office, automated checking of the data quality received from a sensor is important. Any quality issues detected—such as a significant number of dropped packets, temporary loss of communications with the sensor, or an incomplete test time—should force a retest situation. Any variability in the data that is not a result of the subject’s test performance can result in an incorrect result, the magnitude of which will depend on the type of data analysis technique being used. Applications such as the falls risk estimation application described in Case Study 1 are sensitive to data variability. As such, these applications must have rigorous checks in place to ensure data quality.
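Such a quality gate can be implemented as a short check that runs before any analysis and forces a retest on failure. The sketch below is illustrative only: the sampling rate, drop tolerance, and minimum test duration are assumed values, not thresholds from the study described in this chapter.

```python
# Sketch of a pre-analysis quality gate for a clinic-based sensor assessment.
# All thresholds here are illustrative assumptions.

EXPECTED_RATE_HZ = 100       # assumed sensor sampling rate
MAX_DROP_FRACTION = 0.02     # assumed tolerance: >2% dropped samples -> retest
MIN_TEST_DURATION_S = 10.0   # assumed minimum valid test length

def quality_check(samples_received, duration_s):
    """Return (ok, reasons); any failure should force a retest."""
    reasons = []
    if duration_s < MIN_TEST_DURATION_S:
        reasons.append("incomplete test time")
    expected = EXPECTED_RATE_HZ * duration_s
    if expected > 0 and 1 - samples_received / expected > MAX_DROP_FRACTION:
        reasons.append("excessive dropped packets")
    return (not reasons, reasons)

print(quality_check(995, 10.0))  # (True, [])
print(quality_check(800, 10.0))  # (False, ['excessive dropped packets'])
```

Reporting the reasons, rather than a bare pass/fail, lets the clinician decide on the spot whether repositioning a sensor or simply repeating the test is the right response.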
Early detection of sensor/aggregator failures or missing data in the back-end database is important. The aggregation/transport layer should provide this early detection of equipment issues. The layer should also provision robust transport protocols to prevent data loss between the aggregator and the back end. The robustness of the solutions implemented will depend on data loss sensitivities and temporal constraints.
To ensure data quality during the data processing phase, it is important to filter data correctly, avoiding over- and under-filtering. Ideally, data should be recorded in an electrically “quiet” environment, and body-worn sensors should be securely fastened in place. These measures will reduce the need for excessive filtering. The appropriate calibration of data is essential when interpreting inertial and physiological signals, and gravity correction should be considered when examining accelerometer data. As features are extracted from the processed data, it is advisable to check that they are within published ranges and that they make intuitive sense.
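The gravity correction and range checks mentioned above can be illustrated with a minimal single-axis sketch. Here the slowly varying gravity component is estimated with a first-order low-pass filter (an exponential moving average) and subtracted out; the smoothing factor and the feature bounds are illustrative assumptions, and a real pipeline would use calibrated, properly designed filters.

```python
# Sketch: removing the gravity component from a 1-axis accelerometer trace.
# ALPHA and the plausibility bounds are illustrative assumptions.

ALPHA = 0.05  # low-pass smoothing factor; tracks the slow gravity term

def remove_gravity(samples_g):
    """Return linear acceleration with the low-frequency part removed."""
    gravity = samples_g[0]
    linear = []
    for a in samples_g:
        gravity = ALPHA * a + (1 - ALPHA) * gravity  # gravity estimate
        linear.append(a - gravity)                   # body-motion component
    return linear

def in_published_range(feature, low, high):
    """Sanity-check an extracted feature against an expected range."""
    return low <= feature <= high

# A sensor at rest should yield near-zero linear acceleration.
print(max(abs(x) for x in remove_gravity([1.0] * 20)))  # ~0.0
# e.g. a mean cadence feature checked against illustrative bounds (steps/min).
print(in_published_range(105.0, 60.0, 140.0))           # True
```

The same range-check idea applies to any extracted feature: a gait velocity or stride time falling far outside published norms is more likely a processing error than a genuine observation, and should be flagged for review.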

User Engagement

Maintaining user engagement is a requirement for home-based assessment and interventions. A poor experience in using the technology will engender negative feelings and may result in the technology being abandoned. Good design and reliable technology are essential to creating a positive user experience and increasing the user’s confidence in the technology. Another key aspect in developing sensor technology for the home and community is user engagement in the design process. Technologies designed by representative members of the end user’s peer group are more likely to be accepted by the end user and will ensure that their day-to-day lives are not negatively impacted. Engaging users in the design process will increase the probability of capturing and successfully addressing potential usability issues.
One area of user engagement that can be problematic is the provision of user feedback. Feedback is provided to the end user to motivate the user and maintain engagement with the technology. Visualization of data in an engaging and intuitive form is therefore necessary. Technical feedback, comprising endless trend charts with control limits, while intuitive to engineers, may not be appropriate for all users. Simple and clean design that is dynamic in nature and adds vibrancy to the data is important in helping maintain user engagement over time. Such design can include a social networking element, where participants can compare themselves to their peers. However, information of this kind should be presented with appropriate context to ensure that participants are not unduly worried if they differ significantly from their peers. There is also growing interest in using computer gaming technologies to provide feedback, such as avatars that represent the subject of the study within a virtual rendering of the home environment. These techniques are finding application in sports and fitness sensing applications where there is a focus on using gaming techniques to introduce an element of competition. The inclusion of a competitive element is designed to maintain long-term user engagement in physical activity (see Chapter 10).

Summary

In this chapter, we have outlined how a multidisciplinary team approach can be applied to develop and deploy innovative assessment and intervention technologies for home and community settings. The use of wireless sensors, mobile form factors, and intelligent data analytics is enabling the migration of capabilities that were previously the preserve of specialized clinics into these settings. The TRIL Centre has demonstrated that successful technology solutions require a structured approach to ensure the members of the multidisciplinary teams are correctly aligned and that the end user is at the center of the design process.

References

Alwan, Majd, “Passive in-home health and wellness monitoring: Overview, value and examples,” in IEEE Annual International Conference of the Engineering in Medicine and Biology Society (EMBS '09), Minneapolis, Minnesota, USA 2009, pp. 4307–4310.
McManus, Richard J, et al., “Blood pressure self monitoring: questions and answers from a national conference,” BMJ, vol. 337, 2008.
Tamura, Toshiyo, Isao Mizukura, and Yutaka Kimura, “Factors Affecting Home Health Monitoring in a 1-Year On-Going Monitoring Study in Osaka,” in Future Visions on Biomedicine and Bioinformatics 1, Bos, Lodewijk, Denis Carroll, Luis Kun, Andrew Marsh, and Laura M. Roa, Eds., Heidelberg, Springer 2011, pp. 105–113.
Cohen, Jacob, Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Oxford, England: Routledge Academic, 1988.
Catherwood, Philip A., Nicola Donnelly, John Anderson, and Jim McLaughlin, “ECG motion artefact reduction improvements of a chest-based wireless patient monitoring system,” presented at the Computing in Cardiology, Belfast, Northern Ireland, 2010.
Ghasemzadeh, Hassan, Roozheb Jafari, and Balakrishnan Prabhakaran, “A Body Sensor Network With Electromyogram and Inertial Sensors: Multimodal Interpretation of Muscular Activities,” IEEE Transactions on Information Technology in Biomedicine, vol. 14 (2), pp. 198–206, 2010.
Aziz, Omer, Benny Lo, Ara Darzi, and Guang-Zhong Yang, “Introduction,” in Body Sensor Networks, Yang, Guang-Zhong, Ed., London, Springer-Verlag, 2006, pp. 1–39.
Greene, Barry R., et al., “An adaptive gyroscope-based algorithm for temporal gait analysis,” Medical and Biological Engineering and Computing, vol. 48 (12), pp. 1251–1260, 2010.
Scanaill, Cliodhna Ní, et al., “A Review of Approaches to Mobility Telemonitoring of the Elderly in Their Living Environment,” Annals of Biomedical Engineering, vol. 34 (4), pp. 547–563, 2006.
Wood, Anthony D., et al., “Context-aware wireless sensor networks for assisted living and residential monitoring,” IEEE Network, vol. 22 (4), pp. 26–33, 2008.
Lee, Byunggil and Howon Kim, “A Design of Context Aware Smart Home Safety Management using by Networked RFID and Sensor Home Networking.” vol. 256, Agha, Khaldoun Al, Xavier Carcelle, and Guy Pujolle, Eds., Springer Boston, 2008, pp. 215–224.
Kelly, Damien, Sean McLoone, Terrence Dishongh, Michael McGrath, and Julie Behan, “Single access point location tracking for in-home health monitoring,” in 5th Workshop on Positioning, Navigation and Communication (WPNC 2008), 2008, pp. 23–29.
Hagler, Stuart, Daniel Austin, Tamara L. Hayes, Jeffrey Kaye, and Misha Pavel, “Unobtrusive and Ubiquitous In-Home Monitoring: A Methodology for Continuous Assessment of Gait Velocity in Elders,” IEEE Transactions on Biomedical Engineering, vol. 57 (4), pp. 813–820, 2010.
Hayes, Tamara L., Stuart Hagler, Daniel Austin, Jeffrey Kaye, and Misha Pavel, “Unobtrusive assessment of walking speed in the home using inexpensive PIR sensors,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009), Minneapolis, Minnesota, USA, 2009, pp. 7248–7251.
Biswas, Jit, et al., “Health and wellness monitoring through wearable and ambient sensors: exemplars from home-based care of elderly with mild dementia,” Annals of Telecommunications, vol. 65 (9), pp. 505–521, 2010.
Middleton, Catherine, “Delivering services over next generation broadband networks: Exploring devices, applications and networks,” Telecommunications Journal of Australia, vol. 60 (4), pp. 59.1–59.13, 2010.
Bresnick, Jennifer. “HIMSS survey: 80% of clinicians use iPads, smartphone apps to improve patient care”, Last Update: December 4th 2012, http://ehrintelligence.com/2012/12/04/himss-survey-80-of-clinicians-use-ipads-smartphone-apps-to-improve-patient-care/
Prestigiacomo, Jennifer. “Dialing Into Physician Smartphone Usage”, Last Update: August 3rd 2010, http://www.healthcare-informatics.com/article/dialing-physician-smartphone-usage
Smith, Ken, “Innovations in Accessibility: Designing for Digital Outcasts.,” presented at the 58th Annual Conference of the Society for Technical Communications, Sacramento, CA., 2011.
Mulvenna, Maurice D., et al., “Evaluation of Card-Based versus Device-Based Reminiscing Using Photographic Images,” Journal of CyberTherapy & Rehabilitation, vol. 4 (1), pp. 57–66, 2011.
Schmeier, Sven, Matthias Rebel, and Renlong Ai, “Computer assistance in Bilingual task-oriented human-human dialogues,” in Proceedings of the 14th international conference on Human-computer interaction: interaction techniques and environments, Orlando, FL, 2011, pp. 387–395.
Barnes, Ian, Elizabeth Brooks, and Grant Cumming, “Toilet-Finder: Community co-creation of health related information,” presented at the 3rd International Conference on Web Science (ACM WebSci'11), Koblenz, Germany, 2011.
Mantyjarvi, Jani, Mikko Lindholm, Elena Vildjiounaite, Satu-Marja Makela, and Heikki Ailisto, “Identifying users of portable devices from gait pattern with accelerometers,” presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), Philadelphia, Pennsylvania, USA, 2005.
Khan, A. M., Y. K. Lee, S. Y. Lee, and T. S. Kim, “Human Activity Recognition via an Accelerometer-Enabled-Smartphone Using Kernel Discriminant Analysis,” presented at the IEEE 5th International Conference on Future Information Technology (FutureTech), Busan, South Korea, 2010.
LeMoyne, Robert, Timothy Mastroianni, Michael Cozza, Cristian Coroian, and Warren Grundfest, “Implementation of an iPhone as a wireless accelerometer for quantifying gait characteristics,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '10), Buenos Aires, Argentina, 2010, pp. 3847–3851.
Philips Electronics N.V. “Motiva - Improving people’s lives”, http://www.healthcare.philips.com/main/products/telehealth/products/motiva.wpd, 2013.
Blackburn, Steven, Simon Brownsell, and Mark S Hawley, “A systematic review of digital interactive television systems and their applications in the health and social care fields,” Journal of Telemedicine and Telecare, vol. 17 (4), pp. 168–176, 2011.
Ganeriwal, Saurabh, Laura K. Balzano, and Mani B. Srivastava, “Reputation-based framework for high integrity sensor networks,” ACM Trans. Sen. Netw., vol. 4 (3), pp. 1–37, 2008.
Melodia, Tommaso, Dario Pompili, Vehbi C. Gungor, and Ian F. Akyildiz, “A distributed coordination framework for wireless sensor and actor networks,” in Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing, Urbana-Champaign, IL, USA, 2005, pp. 99–110.
Li, Lily and Kerry Taylor, “A Framework for Semantic Sensor Network Services,” in Service-Oriented Computing – ICSOC 2008. vol. 5364, Bouguettaya, Athman, Ingolf Krueger, and Tiziana Margaria, Eds., Springer Berlin / Heidelberg, 2008, pp. 347–361.
Balazinska, Magdalena, et al., “Data Management in the Worldwide Sensor Web,” IEEE Pervasive Computing, vol. 6 (2), pp. 30–40, 2007.
McGrath, Michael J. and John Delaney, “An Extensible Framework for the Management of Remote Sensor Data,” in IEEE Sensors, Limerick, Ireland, 2011, pp. 1712–1715.
Walsh, Lorcan, Barry Greene, and Adrian Burns, “Ambient Assessment of Daily Activity and Gait Velocity,” in Pervasive Health 2011 AAL Workshop, Dublin, 2011.
Burton, Darren and Mark Uslan, “Diabetes and Visual Impairment: Are Insulin Pens Accessible?”, AFB AccessWorld Magazine, vol. 7 (4), 2006, http://www.afb.org/afbpress/pub.asp?DocID=aw070403
Bailey, Cathy, et al., “‘ENDEA’: a case study of multidisciplinary practice in the development of assisted technologies for older adults in Ireland,” Journal of Assistive Technologies, vol. 5 (3), pp. 101–111, 2011.
Kohavi, Ron, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in 14th International Joint Conference on Artificial Intelligence (IJCAI'95), Montreal, Quebec, Canada, 1995, pp. 1137–1143.
Han, Jiawei and Micheline Kamber, Data Mining: Concepts and Techniques, 1st ed. San Francisco, CA: Morgan Kaufmann, 2000.
Beauchet, Olivier, et al., “Timed up and go test and risk of falls in older adults: A systematic review,” The journal of nutrition, health & aging, vol. 15 (10), pp. 933–938, 2011.
Greene, Barry R., et al., “Quantitative Falls Risk Assessment Using the Timed Up and Go Test,” IEEE Transactions on Biomedical Engineering, vol. 57 (12), pp. 2918–2926, 2010.
Bogle Thorbahn, Linda D., and Roberta A Newton, “Use of the Berg Balance Test to Predict Falls in Elderly Persons,” Physical Therapy, vol. 76 (6), pp. 576–583, 1996.
Greene, Barry R., et al., “Evaluation of Falls Risk in Community-Dwelling Older Adults Using Body-Worn Sensors,” Gerontology, vol. 58, pp. 472–480, 2012.
Greco, Eleonora, Agnieszka Milewski-Lopez, Flip van den Berg, Siobhan McGuire, and Ian Robertson, “Evaluation of the efficacy of a self-administered biofeedback aided alertness training programme for healthy older adults,” presented at the 8th Annual Psychology, Health and Medicine Conference, Galway, Ireland, 2011.