Ubiquitous computing presents a new paradigm of computation in which computers are embedded into the everyday environment. This vanishing of computers into the environment is achieved through sophisticated, miniaturized devices such as wearable sensors, unobtrusive sensor networks and computer vision technologies. Nowadays, thanks to the low cost, small size and high computational power of these devices, pervasive sensors are able to interact autonomously with humans as part of their day-to-day lives. One of the most fundamental areas of research built on top of ubiquitous applications is the creation of human-centric intelligent spaces. In contrast to other intelligent spaces, these focus on context-sensitive computing with respect to humans. That is to say, their intelligence provides inference mechanisms regarding humans' conditions, feelings, actions or activities. Such inference is essential to improve the ubiquitous interaction between humans and the electronic devices that surround them. Hence, the overall success of these systems strongly depends on inference from the context; in other words, ubiquitous devices should be context-aware.

This special issue focuses on one of the most important inference, or context-aware, capabilities in ubiquitous applications (among others such as location, natural language processing, emotional computing, etc.): Human Activity Recognition (HAR). Within human activity awareness it is possible to distinguish different levels of recognition according to the complexity of the activities they tackle: (1) gestures; (2) individual actions; (3) human-object interactions; (4) human–human interactions; (5) complex or composite activities.

HAR systems have become a very prominent area of research, especially in fields such as medicine, security and the military. Advances in HAR within computing date back to the late 1990s; nevertheless, there are still many open issues to overcome. From the hardware perspective, research is still intensive on improving portability, price, network communication, processing capacity and energy efficiency. Hardware devices can be classified into external devices (such as those deployed in intelligent homes) and wearable devices.

External devices have the capacity to recognize long, complex activities that comprise several actions, such as "preparing dinner", "cutting the grass" or "playing console games". The recognition of these activities depends heavily on extracting knowledge from several sensors (location sensors, tags, cameras, microphones, etc.). Such sensor-rich environments are known as intelligent homes or smart homes. The problem with this approach arises when users do not interact with those sensors (for example, because they are out of range). Besides, some specific sensors, such as video cameras, entail further issues such as privacy and scalability.

From the software perspective, there are still many challenges, such as: (1) the design of optimal feature extraction; (2) real-time and scalable inference systems; (3) computationally light, embeddable algorithms; (4) support for new users without the need for re-training.

This special issue presents state-of-the-art advances from four specialized European research centers in computing. Three of the four contributions focus on improving the intelligent system methodology, whereas the remaining one deals with a processing architecture that eases and improves the inference independently of the intelligent system.

The first paper, by Shumei Zhang, Paul McCullagh, Chris Nugent, Huiru Zheng and Norman Black from the School of Computing and Mathematics at the University of Ulster (United Kingdom), explores the potential benefits of ontological modeling in HAR. For this purpose, the authors developed an inference system called iMessenger. This approach tackles the problem of dealing with different sources in a smart home. The proposed system is able to process and combine inference from different sources such as accelerometers and RFID sensors; the former provide information about posture during the activity and the latter about indoor location. On top of the inference level provided by those sources, an ontological model is built. This ontological model provides a logical framework for delivering context-aware task reminders to users. By comparing the inference with the task settings through the ontological model, the system is able to provide feedback and support users in performing their daily tasks correctly.
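To give a flavour of the idea (not of the authors' actual implementation), the following minimal Python sketch compares a fused posture/location inference against a scheduled task in order to decide whether a reminder should be issued; all task names, rules and times are hypothetical.

```python
# Minimal, hypothetical sketch: combining fused posture/location inference with
# a scheduled task to decide whether to issue a reminder. This is NOT the
# iMessenger implementation; all names, rules and times are illustrative only.
from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class Task:
    name: str        # e.g. "take medication"
    location: str    # room where the task should take place
    posture: str     # posture expected while performing it
    due: time        # time by which it should have started

def reminder(task: Task, observed_location: str, observed_posture: str,
             now: time) -> Optional[str]:
    """Compare the fused sensor inference against the task model."""
    if now < task.due:
        return None                       # task not due yet
    if observed_location != task.location:
        return f"Reminder: go to the {task.location} to {task.name}."
    if observed_posture != task.posture:
        return f"Reminder: you are {observed_posture}; time to {task.name}."
    return None                           # task appears to be in progress

# Example: the accelerometer infers "sitting", RFID localisation infers "bedroom".
task = Task("take medication", location="kitchen", posture="standing", due=time(9, 0))
print(reminder(task, "bedroom", "sitting", time(9, 15)))
```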

The second paper, by Daniel Roggen, Kilian Förster, Alberto Calatroni and Gerhard Tröster from the Wearable Computing Laboratory, ETH Zurich (Switzerland), discusses a data processing architecture with adaptation strategies, called adARC (Adaptive Activity Recognition Chain), for on-body accelerometer sensors. The proposed architecture is able to handle multiple distributed on-body sensor sources (so-called ContextCells) and also allows new ones to be added. This type of distributed data processing aims to avoid re-training when the context is subject to change; in this particular research, the changes are represented by on-body sensor displacements. The challenge of self-calibration is addressed through a well-studied set of heuristics, an information framework and architectural layers between ContextCells. This work is motivated by long-term HAR applications with self-developing and self-monitoring capacities.
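As a rough illustration of this kind of architecture (it is not the adARC implementation itself), the sketch below shows per-sensor recognizers whose decisions are fused by voting, and to which new sensor nodes can be attached at run time without re-training the others; all class names and toy classifiers are invented for the example.

```python
# Illustrative sketch only (not adARC): independent per-sensor recognizers whose
# decisions are fused, and to which new sensor nodes can be added dynamically.
from collections import Counter
from typing import Callable, Dict, List

class SensorNode:
    """Stand-in for a ContextCell-like unit: one on-body sensor plus its own classifier."""
    def __init__(self, name: str, classify: Callable[[List[float]], str]):
        self.name = name
        self.classify = classify

class RecognitionChain:
    def __init__(self):
        self.nodes: Dict[str, SensorNode] = {}

    def add_node(self, node: SensorNode) -> None:
        # New sensors can join the chain at run time.
        self.nodes[node.name] = node

    def infer(self, readings: Dict[str, List[float]]) -> str:
        # Each available node votes; missing or displaced sensors are simply skipped.
        votes = [node.classify(readings[name])
                 for name, node in self.nodes.items() if name in readings]
        return Counter(votes).most_common(1)[0][0] if votes else "unknown"

# Hypothetical usage: a wrist node is added after deployment of the hip node.
chain = RecognitionChain()
chain.add_node(SensorNode("hip", lambda x: "walking" if max(x) > 1.2 else "standing"))
chain.add_node(SensorNode("wrist", lambda x: "walking" if sum(abs(v) for v in x) > 3 else "standing"))
print(chain.infer({"hip": [1.5, 0.9], "wrist": [1.0, 1.1, 1.2]}))
```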

The third paper, by Heli Koskimäki, Ville Huikari, Pekka Siirtola and Juha Röning from the Computer Science and Engineering Laboratory, University of Oulu (Finland), looks into the processing of wearable sensor data. In this research a single wrist-worn inertial measurement unit is used as the information source. The sensor was attached to a worker on an industrial assembly line, and an activity dataset was collected and labelled. The activities performed by the worker were recognized accurately after pre-processing, feature extraction and classification. Moreover, a state machine analysis was conducted to recognize sequences of tasks that represent complex or composite activities. The aim of this work is to identify correct and incorrect worker behaviour based on the accurate classification of activities and the order in which they are performed. This work presents significant advances regarding data processing in HAR with single sensor sources.
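The following Python sketch illustrates, in very simplified form, the general shape of such a pipeline: windowed features from a single inertial signal, a toy classifier, and a state machine that flags task transitions performed out of order. It is not the authors' pipeline; every label, threshold and transition rule is an assumption made for the example.

```python
# Illustrative sketch only (not the authors' pipeline): windowed features from a
# single IMU stream, a toy nearest-centroid classifier, and a state machine that
# flags task sequences performed out of order. All numbers are made up.
import numpy as np

def window_features(signal: np.ndarray, width: int = 50) -> np.ndarray:
    """Mean and standard deviation per non-overlapping window."""
    n = len(signal) // width
    windows = signal[:n * width].reshape(n, width)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

def nearest_centroid(features: np.ndarray, centroids: dict) -> list:
    labels = list(centroids)
    c = np.array([centroids[l] for l in labels])
    return [labels[int(np.argmin(np.linalg.norm(c - f, axis=1)))] for f in features]

# Allowed order of assembly tasks, expressed as a simple state machine.
ALLOWED = {"idle": {"pick_part"}, "pick_part": {"screw"}, "screw": {"idle"}}

def check_sequence(activities: list) -> list:
    state, errors = "idle", []
    for a in activities:
        if a == state:
            continue                      # still in the same task
        if a not in ALLOWED[state]:
            errors.append(f"unexpected transition {state} -> {a}")
        state = a
    return errors

# Hypothetical run on synthetic acceleration magnitudes.
rng = np.random.default_rng(0)
signal = rng.normal(1.0, 0.3, 300)
feats = window_features(signal)
centroids = {"idle": [1.0, 0.1], "pick_part": [1.0, 0.3], "screw": [1.3, 0.5]}
print(check_sequence(nearest_centroid(feats, centroids)))
```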

Finally, the paper by Javier Andreu and Plamen Angelov from the School of Computing and Communications at Lancaster University (United Kingdom) describes a machine learning model that is able to recognize activities in a self-developing, self-monitoring and online fashion. Wearable sensors are used as information sources. Accurate activity recognition results are obtained after pre-processing, feature decomposition and online learning. The machine learning algorithm used in this research belongs to the computational intelligence methodology known as Evolving Intelligent Systems (EIS). The learning process and the feature decomposition share the same online and adaptive operation. The algorithm is suitable for long-term and heterogeneous applications since the necessary adaptation to the changing context is built in. It shows accurate results with little initial training from just one randomly chosen, independent user, and the resulting learning model can be applied to different users without prior training specific to those users.
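A minimal sketch of an online, prototype-based classifier in the spirit of such evolving approaches is shown below; it is not the authors' EIS algorithm, and the radius, features and labels are purely illustrative assumptions.

```python
# Minimal online-learning sketch in the spirit of prototype/cluster-based evolving
# classifiers; it is NOT the authors' EIS algorithm. Prototypes are updated
# incrementally, and new ones are created when a labelled sample is far from all
# existing prototypes of its class, so no re-training from scratch is needed.
import numpy as np

class OnlinePrototypeClassifier:
    def __init__(self, radius: float = 1.0):
        self.radius = radius
        self.prototypes = []   # list of (centre, label, count)

    def predict(self, x: np.ndarray) -> str:
        if not self.prototypes:
            return "unknown"
        d = [np.linalg.norm(x - c) for c, _, _ in self.prototypes]
        return self.prototypes[int(np.argmin(d))][1]

    def update(self, x: np.ndarray, label: str) -> None:
        """One-pass update: no stored training set, no batch re-training."""
        same = [(i, np.linalg.norm(x - c))
                for i, (c, l, _) in enumerate(self.prototypes) if l == label]
        if same and min(d for _, d in same) < self.radius:
            i = min(same, key=lambda t: t[1])[0]
            c, l, n = self.prototypes[i]
            self.prototypes[i] = (c + (x - c) / (n + 1), l, n + 1)  # running mean
        else:
            self.prototypes.append((x.copy(), label, 1))            # evolve the structure

# Hypothetical usage on streaming feature vectors.
clf = OnlinePrototypeClassifier(radius=0.8)
for x, y in [(np.array([0.1, 0.2]), "sitting"), (np.array([1.5, 1.4]), "walking")]:
    clf.update(x, y)
print(clf.predict(np.array([1.4, 1.3])))
```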

We hope that everybody interested in the field of HAR will find the research results presented in this special issue useful and stimulating. We encourage readers to consult the electronic versions of these papers to enjoy the full-colour figures, hyperlinks and addenda. Last but not least, we would like to warmly thank the Editor-in-Chief of this journal, Professor Vincenzo Loia, for his kind invitation to organize this special issue.