
2010 | Book

Software Technologies for Embedded and Ubiquitous Systems

8th IFIP WG 10.2 International Workshop, SEUS 2010, Waidhofen/Ybbs, Austria, October 13-15, 2010. Proceedings

Edited by: Sang Lyul Min, Robert Pettit, Peter Puschner, Theo Ungerer

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The 8th IFIP Workshop on Software Technologies for Embedded and Ubiquitous Systems (SEUS 2010) in Waidhofen/Ybbs, Austria, October 13-15, 2010, succeeded the seven previous workshops in Newport Beach, USA (2009); Capri, Italy (2008); Santorini, Greece (2007); Gyeongju, Korea (2006); Seattle, USA (2005); Vienna, Austria (2004); and Hakodate, Japan (2003), confirming SEUS as an established workshop series in the field of embedded and ubiquitous systems. SEUS 2010 continued the tradition of fostering cross-community scientific excellence and establishing strong links between research and industry. SEUS 2010 provided a forum where researchers and practitioners with substantial experience and serious interest in advancing the state of the art and the state of practice in the field of embedded and ubiquitous computing systems gathered with the goal of fostering new ideas, collaborations, and technologies. The contributions in this volume present advances in integrating the fields of embedded computing and ubiquitous systems. The call for papers attracted 30 submissions from all around the world. Each submission was assigned to at least four members of the Program Committee for review. The Program Committee decided to accept 21 papers, which were arranged in eight sessions. The accepted papers are from Austria, Denmark, France, Germany, Italy, Japan, Korea, Portugal, Taiwan, the UK, and the USA. Two keynotes complemented the strong technical program.

Table of Contents

Frontmatter

Invited Program

Component-Based Design of Embedded Systems
Abstract
In many engineering disciplines, large systems are built from prefabricated components with known and validated properties. Components are connected via stable, understandable, and standardized interfaces. The system engineer has knowledge about the global properties of the components, as they relate to the system functions, and of the detailed specification of the component interfaces. Knowledge about the internal design and implementation of the components is neither needed nor, in many cases, available. A prerequisite for such a constructive approach to system building is that the validated properties of the components are not affected by the system integration. This composability requirement is an important constraint for the selection of a platform for the component-based design of large distributed embedded systems.
Component-based design is a meet-in-the-middle design method. On one side, the functional and temporal requirements on the components are derived top-down from the desired application functions. On the other side, the functional and temporal capabilities of the components are contained in the specifications of the available components (bottom-up). During the design process a proper match between component requirements and component capabilities must be established. If no available component meets the requirements, a new component must be developed.
A prerequisite of any component-based design is a crystal-clear component concept that supports the precise specification of the services that are delivered and acquired across the component interfaces. In real-time systems, where the temporal properties of component services are as important as the value properties, the proper notion of a component is a hardware-software unit.
In the first part of this presentation the different interfaces of an embedded system component are introduced. The services of the component are provided by the linking interface (LIF), which must be precisely specified in the domains of value and time. The technology-independent interface (TII) is used to parameterize a component for the given application environment. While a component provider must assure that the component works properly in all configurations that are covered by the parameter space, the component user is primarily interested in the correct operation of the concrete component configuration. The following parts are concerned with the composition of components and the notion of emergent properties in systems of components.
Hermann Kopetz
AUTOSAR Appropriates Functional Safety and Multi-core Exploitation
Abstract
The main subject of this presentation is the connection between AUTOSAR, as a software standardization initiative, and the automotive functional safety domain. The history of AUTOSAR and the evolution of functional safety for road vehicles will be presented, together with a short overview of typical implementations of these techniques (E-Gas, X-by-Wire, etc.).
Newly emerging multi-core architectures are influencing automotive software implementations and are relevant both to software standardization and to the implementation of specific functional safety techniques. The current status will be presented together with potential future techniques.
The presentation discusses how much support AUTOSAR can already provide for practical implementations of functional safety and multi-core today, followed by an outlook on ongoing research and further standardization needs.
Bert Böddeker, Rafael Zalman

Hardware

Chip-Size Evaluation of a Multithreaded Processor Enhanced with a PID Controller
Abstract
In this paper the additional chip size of a Proportional/Integral/Differential (PID) controller in a multithreaded processor is evaluated. The task of the PID unit is to stabilize a thread's throughput, i.e., its instructions-per-cycle (IPC) rate. Stabilizing the IPC rate allocated to the main thread increases the efficiency of the processor and also the execution time remaining for other threads. The overhead introduced by the PID controller implementation in the VHDL model of an embedded Java real-time system is examined.
Michael Bauer, Mathias Pacher, Uwe Brinkschulte
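As a minimal sketch of the kind of discrete PID update such a controller performs (the gains, names, and the idea of adjusting a thread's fetch share are illustrative assumptions, not the paper's VHDL implementation):

```c
#include <stdio.h>

typedef struct {
    float kp, ki, kd;   /* controller gains                     */
    float integral;     /* accumulated error                    */
    float prev_err;     /* error from the previous control step */
} pid_ctrl;

/* One control step: returns an adjustment derived from the IPC error. */
static float pid_step(pid_ctrl *c, float target_ipc, float measured_ipc)
{
    float err = target_ipc - measured_ipc;
    c->integral += err;
    float deriv = err - c->prev_err;
    c->prev_err = err;
    return c->kp * err + c->ki * c->integral + c->kd * deriv;
}

int main(void)
{
    pid_ctrl c = { .kp = 0.5f, .ki = 0.1f, .kd = 0.05f };
    float fetch_share = 0.5f;                 /* hypothetical fraction of fetch slots */
    float measured_ipc = 0.6f, target_ipc = 0.8f;
    for (int cycle = 0; cycle < 5; cycle++) {
        fetch_share += pid_step(&c, target_ipc, measured_ipc);
        measured_ipc += 0.04f;                /* stand-in for a real measurement */
        printf("cycle %d: fetch share %.2f\n", cycle, fetch_share);
    }
    return 0;
}
```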
Crash Recovery in FAST FTL
Abstract
NAND flash memory is a non-volatile memory that has been replacing hard disks in storage markets ranging from mobile devices and PC/laptop computers to enterprise servers. However, flash memory does not allow in-place updates, so a block must be erased before the data in it can be overwritten. To overcome the performance problem caused by this intrinsic deficiency, flash storage devices are equipped with a software module called the Flash Translation Layer (FTL). Meanwhile, flash storage devices are subject to failures and should therefore be able to recover metadata (including address mapping information) as well as data after a crash. In general, the FTL layer is responsible for crash recovery. In this paper, we propose a novel crash recovery scheme for FAST, a hybrid address-mapping FTL. It periodically writes newly generated address mapping information in a log-structured way and exploits the FAST FTL characteristic that the log blocks in the log area are used in a round-robin fashion, providing two advantages over existing FTL recovery schemes: low logging overhead during normal FTL operation and fast recovery time.
Sungup Moon, Sang-Phil Lim, Dong-Joo Park, Sang-Won Lee

Real-Time Systems

Time-Predictable Computing
Abstract
Real-time systems need to be time-predictable in order to prove the timeliness of all their time-critical responses. While this is a well-known fact, recent efforts of the community on concretizing the predictability of task timing have shown that there is no common agreement about what the term time-predictability exactly means. In this paper we propose a universal definition of time-predictability that combines the essence of different discussions about this term. This definition is then instantiated to concrete types of time-predictability, like worst-case execution time (WCET) predictability. Finally, we introduce the concept of a timing barrier as a mechanism for constructing time-predictable systems.
Raimund Kirner, Peter Puschner
OTAWA: An Open Toolbox for Adaptive WCET Analysis
Abstract
The analysis of worst-case execution times has become mandatory in the design of hard real-time systems: it is absolutely necessary to know an upper bound of the execution time of each task to determine a task schedule that ensures that all deadlines will be met. The OTAWA toolbox presented in this paper has been designed to host algorithms resulting from research in the domain of WCET analysis so that they can be combined to compute tight WCET estimates. It features an abstraction layer that decouples the analyses from the target hardware and from the instruction set architecture, as well as a set of functionalities that facilitate the implementation of new approaches.
Clément Ballabriga, Hugues Cassé, Christine Rochange, Pascal Sainrat
Ubiquitous Verification of Ubiquitous Systems
Abstract
Ubiquitous embedded computing systems that are expected to reliably perform one or more relevant tasks need design and verification methods that are currently not available. Newly envisioned applications and trends in system design increase this need. Several of these trends, e.g., function integration, concurrency, energy awareness, and networking, and their consequences for verification are considered in this article. The article describes how, already in the past, verification was made possible only by rules restricting the design, and argues that, even more so in the future, constructive influence on the design of hardware and software will be a necessary condition for keeping the verification task tractable.
Reinhard Wilhelm, Matteo Maffei

Model-Based Design and Model-Checking

A Model-Based Design Methodology with Contracts to Enhance the Development Process of Safety-Critical Systems
Abstract
In this paper a new methodology to support the development process of safety-critical systems with contracts is described. The meta-model of Heterogeneous Rich Components (HRC) is extended to a Common System Meta-Model (CSM) that benefits from the semantic foundation of HRC and provides analysis techniques such as compatibility checks or refinement analyses. The ideas of viewpoints, perspectives, and abstraction levels are discussed in detail to point out how the CSM supports separation of concerns. An example is presented to detail the transition concepts between models. From the example we conclude that our approach proves valuable and supports the development process.
Andreas Baumgart, Philipp Reinkemeier, Achim Rettberg, Ingo Stierand, Eike Thaden, Raphael Weber
Combining Ontology Alignment with Model Driven Engineering Techniques for Home Devices Interoperability
Abstract
Ubiquitous systems are expected to have much more impact on our daily tasks in the near future thanks to advances in embedded systems, "Plug-n-Play" protocols, and software architectures. Such protocols target home devices and enable automatic discovery and interaction among them. Consequently, applications are shaping the home into a smart one by orchestrating devices in an elegant manner.
Currently, several protocols coexist in smart homes, but interactions between devices cannot take place unless the devices support the same protocol. Furthermore, smart applications must know the names of services and devices in advance in order to interact with them. However, such names are often semantically equivalent but syntactically different, requiring translation mechanisms.
In order to reduce the human effort needed to achieve interoperability, we introduce an approach that combines ontology alignment techniques with those of the Model Driven Engineering domain to reach dynamic service adaptation.
Charbel El Kaed, Yves Denneulin, François-Gaël Ottogalli, Luis Felipe Melo Mora
Rewriting Logic Approach to Modeling and Analysis of Client Behavior in Open Systems
Abstract
The requirements of open systems involve constraints on client behavior as well as system functionalities. Clients are supposed to follow policy rules derived from such constraints; otherwise, the system as a whole might fall into undesired situations. This paper proposes a framework for system description in which client behavior and policy rules are explicitly separated. The description is encoded in Maude so that advanced analysis techniques such as LTL model checking can be applied to reason about the system properties.
Shin Nakajima, Masaki Ishiguro, Kazuyuki Tanaka

Sensor Nets

A Model-Driven Software Development Approach Using OMG DDS for Wireless Sensor Networks
Abstract
The development of embedded systems challenges software engineers with timely delivery of optimised code that is both safe and resource-aware. Within this context, we focus on distributed systems with small, specialised node hardware, specifically, wireless sensor network (WSN) systems. Model-driven software development (MDSD) promises to reduce errors and efforts needed for complex software projects by automated code generation from abstract software models. We present an approach for MDSD based on the data-centric OMG middleware standard DDS. In this paper, we argue that the combination of DDS features and MDSD can successfully be applied to WSN systems, and we present the design of an appropriate approach, describing an architecture, metamodels and the design workflow. Finally, we present a prototypical implementation of our approach using a WSN-enabled DDS implementation and a set of modelling and transformation tools from the Eclipse Modeling Framework.
Kai Beckmann, Marcus Thoss
Reactive Clock Synchronization for Wireless Sensor Networks with Asynchronous Wakeup Scheduling
Abstract
Most of the existing clock synchronization algorithms for wireless sensor networks can be viewed as proactive clock synchronization, since they require nodes to periodically synchronize their clocks to a reference node regardless of whether they use time information or not. However, the proactive approach wastes energy and bandwidth unnecessarily when nodes do not use time information in their operations. In this paper, we propose a new clock synchronization scheme called Reactive Clock Synchronization (RCS) that can be carried out on demand. The main idea is that a source node initiates a synchronization process in parallel with a data communication. To propagate clock information only when there is traffic, we embed the synchronization process in the data communication process. The results from detailed simulations confirm that RCS consumes less than 1% of the energy of two representative existing algorithms while improving clock accuracy by up to 75.8%.
Sang Hoon Lee, Yunmook Nah, Lynn Choi
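To illustrate the general idea of piggybacking clock information on data traffic, here is a minimal sketch of one-way offset estimation; the packet layout, field names, and the assumption of a known propagation delay are hypothetical and not taken from the RCS protocol:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical data packet carrying a piggybacked sender timestamp. */
typedef struct {
    uint32_t payload;
    uint64_t tx_time_us;   /* sender's local clock at transmission */
} packet;

static int64_t clock_offset_us = 0;   /* correction applied to the local clock */

/* On reception, estimate the offset to the sender's clock. prop_delay_us is
 * assumed known here; a real scheme must account for MAC delays and the
 * asynchronous wakeup schedule. */
static void sync_on_receive(const packet *p, uint64_t rx_time_us,
                            uint64_t prop_delay_us)
{
    clock_offset_us = (int64_t)(p->tx_time_us + prop_delay_us) -
                      (int64_t)rx_time_us;
}

int main(void)
{
    packet p = { .payload = 42, .tx_time_us = 1000500 };
    sync_on_receive(&p, 1000800, 120);
    printf("estimated offset: %lld us\n", (long long)clock_offset_us);
    return 0;
}
```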
On the Schedulability Analysis for Dynamic QoS Management in Distributed Embedded Systems
Abstract
Dynamic Quality-of-Service (QoS) management has been shown to be an effective way to make efficient use of system resources, such as computing, communication, or energy. This is particularly important in resource-constrained embedded systems, such as vehicles, multimedia devices, etc. Deploying dynamic QoS management requires an appropriate schedulability test that is fast enough and ensures continued schedulability while the system adapts its configuration. In this paper we consider four utilization-based tests with release jitter, a particularly relevant feature in distributed systems; three of them were recently proposed and one is added in this work. We carry out an extensive comparison using random task sets to characterize their relative merits, and we show a case study in which multiple video streams are dynamically managed in a fictitious automotive application using such schedulability bounds.
Luís Almeida, Ricardo Marau, Karthik Lakshmanan, Raj Rajkumar
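For readers unfamiliar with utilization-based tests, the classic Liu and Layland bound for rate-monotonic scheduling is the simplest example; the sketch below implements that well-known bound only and, unlike the tests studied in the paper, ignores release jitter:

```c
#include <math.h>
#include <stdio.h>

/* Classic Liu & Layland test: a task set of n independent periodic tasks is
 * schedulable under rate-monotonic scheduling if
 * sum(C_i / T_i) <= n * (2^(1/n) - 1). */
static int ll_schedulable(const double *wcet, const double *period, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += wcet[i] / period[i];
    return u <= n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    double wcet[]   = { 1.0, 2.0, 3.0 };    /* worst-case execution times */
    double period[] = { 10.0, 20.0, 40.0 }; /* task periods               */
    printf("schedulable: %s\n", ll_schedulable(wcet, period, 3) ? "yes" : "no");
    return 0;
}
```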

Error Detection and System Failures

Error Detection Rate of MC/DC for a Case Study from the Automotive Domain
Abstract
Chilenski and Miller [1] claim that the error detection probability of a test set with full modified condition/decision coverage (MC/DC) on the system under test converges to 100% for an increasing number of test cases, but there are also examples where the error detection probability of an MC/DC-adequate test set is indeed zero. In this work we analyze the effective error detection rate of a test set that achieves the maximum possible MC/DC on the code for a case study from the automotive domain. First we generate the test cases automatically with a model checker. Then we mutate the original program to generate three different error scenarios: the first focuses on errors in the value domain, the second on errors in the domain of the variable names, and the third on errors within the operators of the boolean expressions in the decisions of the case study. Applying the test set to these mutated program versions shows that all errors in the values are detected, but the error detection rate for mutated variable names or mutated operators is quite disappointing (in our case study, 22% of the mutated variable names and 8% of the mutated operators are not detected by the original MC/DC test set). With this work we show that testing a system with a test set that achieves the maximum possible MC/DC on the code detects fewer errors than expected.
Susanne Kandl, Raimund Kirner
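A small constructed example, not taken from the paper's case study, shows how a minimal MC/DC-adequate test set can miss an operator mutation: for the decision a && b, the three MC/DC tests (T,T), (T,F), (F,T) yield identical outcomes when && is mutated to ==, and only the uncovered input (F,F) exposes the fault.

```c
#include <stdio.h>
#include <stdbool.h>

/* Original decision and an operator mutant ('&&' replaced by '=='). */
static bool original(bool a, bool b) { return a && b; }
static bool mutant(bool a, bool b)   { return a == b; }

int main(void)
{
    /* A minimal MC/DC-adequate test set for "a && b". */
    bool tests[][2] = { {true, true}, {true, false}, {false, true} };
    int killed = 0;
    for (int i = 0; i < 3; i++) {
        bool a = tests[i][0], b = tests[i][1];
        if (original(a, b) != mutant(a, b))
            killed++;
    }
    /* The mutant survives: all three MC/DC tests agree with the original;
     * only (false, false), which MC/DC does not require, distinguishes them. */
    printf("mutant detected by MC/DC set: %s\n", killed ? "yes" : "no");
    return 0;
}
```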
Simultaneous Logging and Replay for Recording Evidences of System Failures
Abstract
As embedded systems take on more important roles in many places, it becomes more important for them to be able to provide evidence of system failures. Providing such evidence makes it easier to investigate the root causes of the failures and to establish the responsible parties. This paper proposes simultaneous logging and replaying of a system that enables recording evidence of system failures. The proposed system employs two virtual machines, one for the primary execution and the other for the backup execution. The backup virtual machine maintains a past state of the primary virtual machine along with the log needed to bring the backup to the same state as the primary. When a system failure occurs on the primary virtual machine, the VMM saves the backup state and the log. The saved backup state and log can be used as evidence. By replaying the backup virtual machine from the saved state following the saved log, the execution path to the failure can be completely analyzed. We developed such a logging and replaying feature in a VMM. It can log and replay the execution of the Linux operating system. The experimental results show that the overhead on the primary execution is only marginal.
Shuichi Oikawa, Jin Kawasaki

Hard Real-Time

Code Generation for Embedded Java with Ptolemy
Abstract
Code generation from models is the ultimate goal of model-based design. For real-time systems the generated code must be analyzable for the worst-case execution time (WCET). In this paper we evaluate Java code generation from Ptolemy II for embedded real-time Java. The target system is the time-predictable Java processor JOP. The quality of the generated code is verified by WCET analysis for the target platform. Our results indicate that code generated from synchronous data-flow and finite state machine models is WCET analyzable and the generated code leads to tight WCET bounds.
Martin Schoeberl, Christopher Brooks, Edward A. Lee
Specification of Embedded Control Systems Behaviour Using Actor Interface Automata
Abstract
Distributed Timed Multitasking (DTM) is a model of computation describing the operation of hard real-time embedded control systems. With this model, an application is conceived as a network of distributed embedded actors that communicate with one another by exchanging labeled messages (signals), independent of their physical allocation. Input and output signals are exchanged with the controlled plant at precisely specified time instants, which provides for a constant delay from sampling to actuation and the elimination of I/O jitter. The paper presents an operational specification of DTM in terms of actor interface automata, whereby a distributed control system is modeled as a set of communicating interface automata executing distributed transactions. The above modeling technique has implications for system design, since interface automata can be used as design models that can be implemented as application or operating system components. It has also implications for system analysis, since actor interface automata are essentially timed automata that can be used as analysis models in model checking tools and simulation environments.
Christo Angelov, Feng Zhou, Krzysztof Sierszecki
Building a Time- and Space-Partitioned Architecture for the Next Generation of Space Vehicle Avionics
Abstract
Future space systems require innovative computing system architectures, on account of their size, weight, power consumption, cost, safety, and maintainability requirements. The AIR (ARINC 653 in Space Real-Time Operating System) architecture answers the interest of the space industry, especially the European Space Agency, in transitioning to the flexible and safe approach of having onboard functions of different criticalities share hardware resources while being functionally separated in logical containers (partitions). Partitions are separated in the time and space domains. In this paper we present the evolution of the AIR architecture, from its initial ideas to the current state of the art. We describe the research we are currently performing on AIR, which aims to obtain an industrial-grade product for future space systems, and lay the foundations for further work.
José Rufino, João Craveiro, Paulo Verissimo

Middleware and Smart Spaces

EMWF: A Middleware for Flexible Automation and Assistive Devices
Abstract
EMWF (Embedded Workflow Framework) is an open-source middleware for flexible (i.e., configurable, customizable, and adaptable), user-centric automation and assistive devices and systems. EMWF 1.0 provides a lightweight workflow manager and engines on Windows CE, Windows XP Embedded, and Linux, and targets small embedded automation devices. EMWF 2.0 adds basic message passing and real-time scheduling mechanisms and a workflow communication facility. This paper describes EMWF 1.0 and the extensions in EMWF 2.0, as well as case studies on workflow-based design and implementation that motivate EMWF and the extensions.
Ting-Shuo Chou, Yu Chi Huang, Yung Chun Wang, Wai-Chi Chen, Chi-Sheng Shih, Jane W. S. Liu
An Investigation on Flexible Communications in Publish/Subscribe Services
Abstract
Novel embedded and ubiquitous infrastructures are being realized as collaborative federations of heterogeneous systems over wide-area networks by means of publish/subscribe services. Current publish/subscribe middleware does not jointly support two key requirements of these infrastructures: timeliness, i.e., delivering data to the right destination at the right time, and flexibility, i.e., enabling heterogeneous interacting applications to properly retrieve and comprehend exchanged data. In fact, some middleware solutions favor timeliness by using serialization formats that minimize delivery time but reduce flexibility by constraining applications to adhere to predefined data structures. Other solutions adopt XML to improve flexibility, but its redundant syntax strongly affects delivery latency.
We have investigated the consequences of adopting several lightweight formats, alternative to XML, in terms of flexibility and timeliness. Our experiments show that the performance overhead imposed by the use of flexible formats is not negligible, and even the introduction of data compression is not able to mitigate this issue.
Christian Esposito, Domenico Cotroneo, Stefano Russo
Mobile Agents for Digital Signage
Abstract
This paper presents an agent-based framework for building and operating context-aware multimedia content on digital signage in public and private spaces. It enables active multimedia content to be composed from mobile agents, which can travel from computer to computer and provide multimedia content for advertising or user-assistance services. The framework automatically deploys agents at computers near users' current positions to provide advertising or annotations on objects. To demonstrate the utility of the framework, we present a user-assistance application that supports shopping with digital signage.
Ichiro Satoh

Function Composition and Task Mapping

Composition Kernel: A Multi-core Processor Virtualization Layer for Rich Functional Smart Products
Abstract
Future ambient intelligence environments will embed powerful multi-core processors to compose various functionalities into a smaller number of hardware components. This improves the maintainability of intelligent environments, because massively distributed processors are not easy to manage.
A composition kernel makes it possible to compose multiple functionalities on a multi-core processor with minimal modification of OS kernels and applications. A multi-core processor is a good candidate for consolidating software components, developed independently for dedicated processors, onto one processor, reducing both hardware and development cost. In this paper, we present SPUMONE, a composition kernel for developing future smart products.
Tatsuo Nakajima, Yuki Kinebuchi, Alexandre Courbot, Hiromasa Shimada, Tsung-Han Lin, Hitoshi Mitake
Mobile Phone Assisted Cooperative On-Node Processing for Physical Activity Monitoring
Abstract
One of the main challenges in the body-area sensor network domain is to suitably break complex signal processing tasks into manageable parts in order to reduce their algorithmic complexity while retaining their output quality. The goal is to map some of these tasks onto sensor nodes and the others onto computation platforms or gateways such as mobile phones. In this paper we address this problem in the specific context of physical activity monitoring. Initially, physical activity recognition tasks are carried out on the mobile phone. As soon as a steady state (e.g., walking or running at constant speed) is detected, this information is transmitted to the sensor node. At this stage, the sensor node monitors the known physical activity, which requires relatively simple algorithms. In the event of a change in the activity pattern, it switches back to raw data transmission and hands processing back to the mobile phone. Such cooperative signal processing significantly improves the battery life of the mobile phone as well as that of the sensor node. We present the main principles behind such distributed physical activity monitoring algorithms and compare their output quality with that of standard processing done entirely on the mobile phone.
Robert Diemer, Samarjit Chakraborty
Backmatter
Metadata
Title
Software Technologies for Embedded and Ubiquitous Systems
Edited by
Sang Lyul Min
Robert Pettit
Peter Puschner
Theo Ungerer
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-16256-5
Print ISBN
978-3-642-16255-8
DOI
https://doi.org/10.1007/978-3-642-16256-5