Activity knowledge transfer in smart environments

https://doi.org/10.1016/j.pmcj.2011.02.007

Abstract

Current activity recognition approaches usually ignore knowledge learned in previous smart environments when training the recognition algorithms for a new smart environment. In this paper, we propose a method for transferring the knowledge of activities learned in multiple physical spaces, e.g. homes A and B, to a new target space, e.g. home C. Transferring this knowledge to a target space reduces the data collection and annotation period, accelerates learning, and exploits the insights gained from previous settings. We validate our algorithms using data collected from several smart apartments.

Introduction

The remarkable recent progress in computing power, networking, and sensors, combined with the power of machine learning and data mining techniques, has enabled us to create cyber–physical systems that reason intelligently, act autonomously, and respond to the needs of users in a context-aware manner [1]. Some of the efforts toward realizing such cyber–physical systems have been demonstrated in physical smart environment testbeds such as the CASAS project [2], the MavHome project [3], the Gator Tech Smart House [4], the iDorm [5], and the Georgia Tech Aware Home [6]. A smart environment typically contains many highly interactive devices and different types of sensors, such as motion sensors. The data collected from these sensors can be used in conjunction with data mining and machine learning techniques to discover residents’ frequent activity patterns [7], predict upcoming events [8], automate interactions with the environment [9], recognize activity patterns [10], and detect abnormal patterns for health monitoring or security applications [11]. Recognizing residents’ activities allows the smart environment to respond to residents’ needs in a context-aware way, providing greater comfort, security, and energy efficiency, as well as monitoring the well-being of older adults or people with cognitive and physical impairments [12]. For example, researchers recognize that smart environments can be of great value for monitoring and tracking the daily activities of individuals with memory impairments, as the ability to consistently complete Activities of Daily Living (ADLs) [13] is necessary for living independently at home. The need for such smart home technologies is underscored by the aging of the population, the cost of formal health care, and the importance that individuals place on remaining independent in their own homes [14].

While smart environments offer many societal benefits, they also introduce new and complex machine learning challenges. A typical home may be equipped with hundreds or thousands of sensors. Because the captured data is rich in structure and voluminous, the learning problem is a challenging one. As a result, each environmental situation has been treated as a separate context in which to perform learning. What can propel research in cyber–physical systems forward is the ability to leverage experience gained in previous environments when learning in new ones. However, current activity recognition approaches do not exploit the knowledge learned in previous spaces to kick-start activity recognition in a new space. In practice, this delays installation because large amounts of data must be collected and annotated for each new space. It also leads to redundant computational effort and excessive time investment, and ignores insights gained from previous spaces. In this paper, we propose a method for transferring the knowledge of learned activities from multiple physical smart environments to a new target physical smart environment. In our approach, the layouts of the spaces and the residents’ schedules can differ, and the type and number of sensors in two spaces may also differ. We validate our algorithms using data collected from six smart apartments with different layouts and different residents.

Section snippets

Related work

Activity recognition has been used in different situations in smart environments, such as for recognizing nurses’ activities in hospitals [15], recognizing quality inspection activities during car production [16], or monitoring elderly adults’ activities [17]. To discover, recognize and model activities, researchers have devised numerous supervised and unsupervised methods. Naive Bayes classifiers are one of the simplest yet promising supervised methods [18]. Decision trees [19] learn the

Model description

Our objective is to develop a method for transferring knowledge in the form of activity models learned in multiple source physical spaces to a target physical space in order to reduce data collection time, reduce or eliminate data annotation time, and exploit prior source knowledge in a new target space. We will refer to our method as Multi Home Transfer Learning (MHTL for short). We denote the N individual sources as S1, …, SN and the single target space as T.

We assume that the physical aspects of

Activity model extraction

The first step of the multi-stage MHTL algorithm is to extract the activity models from input data in both the source and target spaces. For each source space Sk, we convert each contiguous sequence of sensor events with the same label into an activity a. This yields the set of activities Ak for each individual source space Sk. The start time of an activity is the timestamp of its first sensor event, while its duration is the difference between the timestamps of its last and first
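The extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `Event`, `Activity`, and `extract_activities` names are ours, and events are assumed to arrive in timestamp order with an annotated label per event.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: float   # seconds since some reference epoch
    sensor: str        # sensor identifier
    label: str         # annotated activity label

@dataclass
class Activity:
    label: str
    start_time: float  # timestamp of the first sensor event
    duration: float    # last event timestamp minus first event timestamp
    sensors: list = field(default_factory=list)

def extract_activities(events):
    """Convert each contiguous run of equally-labeled events into one activity."""
    activities, run = [], []
    for e in events:
        if run and e.label != run[-1].label:
            activities.append(_to_activity(run))
            run = []
        run.append(e)
    if run:
        activities.append(_to_activity(run))
    return activities

def _to_activity(run):
    return Activity(
        label=run[0].label,
        start_time=run[0].timestamp,
        duration=run[-1].timestamp - run[0].timestamp,
        sensors=[e.sensor for e in run],
    )
```

Each labeled segment thus collapses into a compact activity record carrying its label, start time, duration, and the sensors it triggered.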

Mapping sensors and activities

The next step, after the activity models for the source and target spaces have been identified, is to map the source activity templates to the target activity templates. First we initialize the sensor and activity mapping matrices, mk and Mk, for each source and target pair (Sk, T). The initial value of the sensor mapping matrix mk[p, q] for two sensors p ∈ Rk and q ∈ RT is defined as 1.0 if they have the same location tag, and as 0 if they have different location tags. The initial value of Mk[a, b] for
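The initialization of the sensor mapping matrix described above can be sketched as a simple location-tag comparison. This is an illustrative snippet under our own naming (`init_sensor_mapping`, tag dictionaries), not the paper's code; it only covers the stated initialization rule, not the subsequent refinement of the mapping.

```python
def init_sensor_mapping(source_tags, target_tags):
    """Initialize m_k[p, q] for sensors p in the source and q in the target.

    source_tags / target_tags map a sensor id to its location tag
    (e.g. "kitchen", "bedroom"). The entry is 1.0 when the tags match,
    0.0 otherwise, per the initialization rule above.
    """
    m = {}
    for p, tag_p in source_tags.items():
        for q, tag_q in target_tags.items():
            m[(p, q)] = 1.0 if tag_p == tag_q else 0.0
    return m
```

A dictionary keyed by sensor pairs stands in for the matrix here; a dense array indexed by sensor position would serve equally well.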

Label assignment

In order to assign labels to the target activities, we use a voting ensemble method [36] based on the activity models Ak of each space Sk. Combining data from different sources to improve accuracy and gain access to complementary information is known as data fusion, or as a form of ensemble learning [42]. Ensemble learning is a strategic way to combine multiple models, such as different classifiers or hypotheses, to solve a computational problem. In our problem, using multiple sources
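The voting idea above can be illustrated with a small weighted-majority sketch: each source space proposes a label for a target activity, and the label with the largest total vote wins. The function name and the (label, weight) interface are our own simplifications; how the paper weights each source's vote is not shown in this snippet.

```python
from collections import Counter

def vote_label(proposals):
    """Pick a label by weighted majority vote.

    proposals: list of (label, weight) pairs, one per source space.
    Returns the label whose weights sum highest.
    """
    tally = Counter()
    for label, weight in proposals:
        tally[label] += weight
    return tally.most_common(1)[0][0]
```

For example, `vote_label([("cook", 1.0), ("cook", 0.8), ("eat", 0.5)])` returns `"cook"`, since its total vote (1.8) exceeds that of `"eat"` (0.5).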

Experiments

Our approach was implemented and tested as part of the Center for Advanced Studies in Adaptive Systems (CASAS) smart environment project [2] at Washington State University. We evaluated the performance of our MHTL algorithm using data collected from six smart apartments. The layouts of the apartments, including sensor placement and location tags, are shown in Fig. 2. We refer to the apartments in Fig. 2(a) through Fig. 2(f) as apartments 1–6. The data was collected during a

Summary, limitations, and future work

In this paper we presented a method for transferring the knowledge of learned activities from multiple physical spaces to a new physical space. This allows us to reduce the data collection time, reduce or eliminate the need for data annotation and also exploit the insights from previous spaces. Our experiments show that despite different space layouts and different residents’ schedules, we were still able to discover and recognize activities in a target space using no labeled data.

It should be

Acknowledgements

This research is supported in part by National Science Foundation grant CNS-0852172. The authors also would like to thank Brian Thomas for developing the visualizer software and making available the activity distribution diagrams.

References (43)

  • D. Cook et al.
  • P. Rashidi et al.

Keeping the resident in the loop: adapting the smart home to the user

IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

    (2009)
  • D. Cook, M. Youngblood, E.O. Heierman III, K. Gopalratnam, S. Rao, A. Litvin, F. Khawaja, MavHome: an agent-based...
  • S. Helal et al.

    The gator tech smart house: a programmable pervasive space

    Computer

    (2005)
  • F. Doctor et al.

    A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environments

IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

    (2005)
  • G. Abowd et al.
  • E.O. Heierman III, D.J. Cook, Improving home automation by discovering regularly occurring device usage patterns, in:...
  • K. Gopalratnam et al.

Online sequential prediction via incremental parsing: the Active LeZi algorithm

    IEEE Intelligent Systems

    (2007)
  • P. Rashidi, D.J. Cook, Adapting to resident preferences in smart environments, in: AAAI Workshop on Preference...
  • L. Liao, D. Fox, H. Kautz, Location-based activity recognition using relational Markov networks, in: Proceedings of the...
  • T. Yiping, J. Shunjing, Y. Zhongyuan, Y. Sisi, Detection elder abnormal activities by using omni-directional vision...
  • C. Wren, E. Munguia-Tapia, Toward scalable activity recognition for sensor networks, in: Proceedings of the Workshop on...
  • B. Reisberg et al.

The Alzheimer’s disease activities of daily living international scale (ADL-IS)

    International Psychogeriatrics

    (2001)
  • V. Rialle et al.

    What do family caregivers of Alzheimer’s disease patients desire in smart home technologies? Contrasted results of a wide survey

    Methods of Information in Medicine

    (2008)
  • D. Sánchez et al.

    Activity recognition for the smart hospital

    IEEE Intelligent Systems

    (2008)
  • G. Ogris, T. Stiefmeier, P. Lukowicz, G. Troster, Using a complex multi-modal on-body sensor system for activity...
  • D. Cook et al.

    Assessing the quality of activities in a smart environment

    Methods of Information in Medicine

    (2009)
  • T. van Kasteren, B. Krose, Bayesian activity recognition in residence for elders, in: 3rd International Conference on...
  • U. Maurer, A. Smailagic, D.P. Siewiorek, M. Deisher, Activity recognition and monitoring using multiple sensors on...
  • S. Luhr, H. Bui, S. Venkatesh, G. West, Recognition of human activity through hierarchical stochastic learning, in:...
  • T. Inomata, F. Naya, N. Kuwahara, F. Hattori, K. Kogure, Activity recognition from interactions with objects using...