1 Introduction
- Interoperability models are reusable, visual software artifacts that model the behavior of services in a lightweight and technology-independent manner. These models help developers create and test systems that interoperate correctly. They combine architecture specification (i.e. services and interface dependencies) with behavior specification (using state machines and rule-based transitions to evaluate protocol events). The models are based upon Finite State Machines (FSM), upon which a number of active testing solutions are also based [9]. Importantly, our models focus only on what is required to interoperate, simplifying the model in comparison to approaches that fully model a system's behavior.
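The FSM structure described above can be sketched in a few lines: states, plus rule-guarded transitions that fire only when an observed event satisfies the rule. This is a minimal illustration, not the tool's actual implementation; all names here are hypothetical.

```python
# Hypothetical sketch of an interoperability model as a finite state
# machine with rule-based transitions, as described in the text.

class Model:
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (state, label) -> (guard rule, next state)

    def add(self, state, label, guard, nxt):
        self.transitions[(state, label)] = (guard, nxt)

    def observe(self, label, event):
        guard, nxt = self.transitions.get((self.state, label), (None, None))
        if guard is not None and guard(event):
            self.state = nxt
            return True
        return False  # interoperability issue: unexpected or invalid event

# Example: a client must GET /data before POSTing its results.
m = Model("start")
m.add("start", "request",
      lambda e: e["method"] == "GET" and e["path"] == "/data", "requested")
m.add("requested", "report", lambda e: e["method"] == "POST", "done")

assert m.observe("request", {"method": "GET", "path": "/data"})
assert m.state == "requested"
```

Note that the model only names the events relevant to interoperation; everything else about the services' internals stays out of the model.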
- A graphical development tool allows the developer to create and edit interoperability models, and to execute tests that report interoperability issues. This tool aims to further reduce development effort by making the models themselves easier to understand and develop; this is in contrast to textual, heavyweight and disjoint distributed-systems models such as BPEL and WSDL.
- The interoperability monitoring and testing framework captures system events (REST operations, middleware messages, data transfers, etc.) and transforms them into a model-specific format that can be used to evaluate and reason about required interoperability behavior. The framework tests monitored systems against interoperability models to evaluate compliance, reporting where interoperability issues occur so that the developer can pinpoint and resolve them.
- Specification compliance: checking that systems comply with particular specifications, e.g. that an IoT sensor produces event data according to the NGSI specification, or that streamed data content complies with a data format specification uploaded to the HyperCAT catalogue.
- Interoperability testing: monitoring the interaction between multiple systems to test whether they interoperate with one another, and identifying the specific issues to be resolved where they fail.
2 Model-driven interoperability
2.1 Interoperability engineering methodology
- Interoperability testers create new IoT applications and services to be composed with one another. Hence, they wish to engineer interoperable solutions: testing that their software interoperates with other services, and pinpointing the reasons for any interoperability errors that occur, thereby reducing the overall effort required to deliver, test and maintain correctly functioning distributed applications. The framework will identify application behavior and data errors, e.g. data is not received by system A because system B has not correctly published information.
- Application developers (who may be the same people as the interoperability testers) model the interoperability requirements of service compositions; that is, they create interoperability models to specify how IoT applications should behave when composed: what the sequence of messages exchanged between them should be (in terms of order and syntax), and what data types and content should form the exchanged information. Importantly, these models are reusable abstractions that can be edited, shared and composed.
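To make the "editable, shareable, composable" property concrete, one way to represent such a model is as a declarative document describing the message order and the expected content, serialized to JSON. The field names below are illustrative assumptions, not the tool's actual model syntax.

```python
import json

# Hypothetical declarative form of an interoperability model: states,
# plus transitions guarded by rules over message order and content.
model = {
    "states": ["start", "subscribed", "notified"],
    "transitions": [
        {"from": "start", "to": "subscribed",
         "rules": [{"field": "http.method", "equals": "POST"},
                   {"field": "http.path", "equals": "/subscribe"}]},
        {"from": "subscribed", "to": "notified",
         "rules": [{"field": "content.type", "equals": "application/json"}]},
    ],
}

# Serialized, the model becomes a plain text artifact that can be
# edited, shared between developers, and composed with other models.
text = json.dumps(model, indent=2)
assert json.loads(text) == model  # round-trips without loss
```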
- Service or API developers model the compliance requirements of their new service API; that is, they create compliance models to specify how applications must interact with their services, such that tests can be generated to ensure that an implementation of this model is compliant.
- Specification compliance testers test compliance of their specification implementation against the model of a service API in order to guarantee future interoperability with other parties conforming to this standard.
2.2 Interoperability and compliance models
- Protocol-specific rules evaluate events according to the structure and content of an observed protocol message (not the application/data content). For example, checking the IP address of the sender of a message verifies which services are interacting with each other. Evaluating the protocol type (HTTP, IIOP, AMQP, etc.) and the protocol message type (HTTP GET, HTTP POST or an IIOP request) ensures that the correct protocol specification is followed. Finally, checking protocol fields (e.g. that an HTTP header field exists or contains a required value) ensures that the message contains the valid protocol content required to interoperate. Currently, the tool evaluates HTTP protocol rules.
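A protocol-specific rule check of this kind can be sketched as a function over a captured HTTP message: verify the protocol, the message type, and that required header fields exist and hold required values. The message representation and rule choices here are illustrative assumptions.

```python
# Hypothetical protocol-rule check over a captured HTTP message,
# mirroring the three kinds of check described in the text:
# protocol type, protocol message type, and protocol fields.

def check_protocol(msg):
    issues = []
    if msg.get("protocol") != "HTTP":
        issues.append("wrong protocol")
    if msg.get("method") not in {"GET", "POST", "PUT", "DELETE"}:
        issues.append("unexpected HTTP method")
    headers = msg.get("headers", {})
    if "Content-Type" not in headers:
        issues.append("missing Content-Type header")
    elif headers["Content-Type"] != "application/json":
        issues.append("Content-Type must be application/json")
    return issues  # empty list means the rule set passed

msg = {"protocol": "HTTP", "method": "GET",
       "headers": {"Content-Type": "application/json"}}
assert check_protocol(msg) == []
```

Reporting a list of failed checks, rather than a single boolean, is what lets the framework pinpoint *which* protocol requirement was violated.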
- Application and data-specific rules evaluate the data content of protocol messages to ensure that services interoperate in terms of their application usage. For example, the data content is of a particular type (e.g. XML or JSON), corresponds to a particular format or schema, contains a particular field unit (e.g. temperature), etc. Furthermore, rules can place constraints on the application messages, e.g. ensuring that the operations required are performed in order (e.g. A sends a subscribe message to B before C sends a publish message to B). Data rules are evaluated using data-specific expression languages; for example, we leverage XPath and JSONPath tools to extract data fields and evaluate whether a given expression is true (e.g. a rule in the XPath format: Data[name(/*)] = queryContext).
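The XPath rule quoted above, Data[name(/*)] = queryContext, asserts that the root element of the XML message body is named queryContext. As a hedged illustration (the standard-library XML module does not support the full XPath name() function, so the equivalent check is written directly against the parsed root element):

```python
import xml.etree.ElementTree as ET

# Illustrative data-rule check equivalent to the XPath expression
# name(/*) = 'queryContext': the message body's root element must
# carry the expected tag name.

def root_element_is(xml_text, expected):
    try:
        return ET.fromstring(xml_text).tag == expected
    except ET.ParseError:
        return False  # malformed body also fails the data rule

assert root_element_is(
    "<queryContext><entity>Room1</entity></queryContext>", "queryContext")
assert not root_element_is("<updateContext/>", "queryContext")
```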
- Trigger state (B1 in Fig. 3). This is an active state as opposed to an observing state, i.e., it does not monitor for events; rather, it triggers the sending of a new event described in the outgoing transition. A trigger state can only have one outgoing transition.
- Trigger transition (transition from B1 to B2 in Fig. 3). This is a transition from one state of the distributed system caused by the sending of a new message. The message is an HTTP message that is described in the attributes of the transition.
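Executing a trigger state can be sketched as follows: take the state's single outgoing transition, send the HTTP message described on it, and move to the target state. The state representation and the send_http callable are stand-ins, not the tool's actual interfaces.

```python
# Hypothetical execution of a trigger state: its one outgoing
# transition causes the framework to *send* a message rather than
# wait to observe one.

def run_trigger(state, send_http):
    (transition,) = state["out"]   # a trigger state has exactly one out transition
    msg = transition["message"]    # the HTTP message described on the transition
    send_http(msg["method"], msg["url"])
    return transition["to"]        # the state the system moves to (B2)

sent = []
b1 = {"name": "B1",
      "out": [{"to": "B2",
               "message": {"method": "GET",
                           "url": "http://example.org/cdmi_capabilities"}}]}
assert run_trigger(b1, lambda m, u: sent.append((m, u))) == "B2"
assert sent == [("GET", "http://example.org/cdmi_capabilities")]
```

The tuple unpacking `(transition,) = state["out"]` deliberately fails if the state has more than one outgoing transition, enforcing the single-transition constraint stated above.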
The discoverCapabilities operations allow a client to view the technical capabilities and installed features of a CDMI deployment. The first state is a trigger state: the tool creates and sends an HTTP GET message to the cdmi_capabilities URL. The system being tested for compliance should understand this message and send back a response. Hence, the second state transition evaluates a rule set against this received response to ensure that the data in the HTTP response matches the required data format of the API. Again, we see rules to test the structure of the HTTP message, and that the data has fields equal to specific values and contains required fields.
3 Interoperability modeling and testing tool
- Monitoring deployment: the framework takes an interoperability model as input and generates a set of proxy elements that capture REST events (these relate to all interface points in the application). Hence, if we observe that a service receives events at a particular URL, we generate a proxy to capture those events; the proxy simply reads the information before redirecting the request to the actual service. The implementation is built upon the RESTLET framework.
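The read-then-redirect behavior of such a proxy can be illustrated without real networking: the proxy hands a copy of each captured event to the evaluator and then forwards the request unchanged. The callables below are stand-ins for the RESTLET-based plumbing.

```python
# Conceptual sketch of a monitoring proxy: capture the REST event for
# the model evaluator, then redirect the request to the actual service.

def make_proxy(real_service, evaluate):
    def proxy(request):
        evaluate(dict(request))       # pass a copy of the event to the evaluator
        return real_service(request)  # then forward to the real service, unchanged
    return proxy

captured = []
service = lambda req: {"status": 200}
proxy = make_proxy(service, captured.append)

assert proxy({"method": "GET", "path": "/events"}) == {"status": 200}
assert captured == [{"method": "GET", "path": "/events"}]
```

Because the proxy is purely observational, the monitored services need no modification: the response seen by the client is exactly what the real service returned.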
- The model evaluator receives events and evaluates them against the rules specified in the transitions. The evaluator is protocol-independent (per-protocol plug-ins map concrete messages to the format of the model rules); hence, at present the framework parses HTTP messages, but it is extensible to other data protocols. The evaluator creates a report to identify success or failure to the developer and, where a failure occurs, performs simple reasoning to pinpoint the source of the error. In future work, we plan to explore knowledge-based reasoners to provide richer feedback.
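The evaluator loop itself can be sketched as follows: feed the captured events through the transitions' rule sets in order, and report the first failing rule by name so the developer can pinpoint the source of the error. The rule and report shapes are illustrative assumptions.

```python
# Hypothetical evaluator loop: each transition carries named rules;
# the report identifies success, or the step and rule where failure occurred.

def evaluate(transitions, events):
    for step, (rules, event) in enumerate(zip(transitions, events)):
        for name, rule in rules:
            if not rule(event):
                return {"success": False, "step": step, "failed_rule": name}
    return {"success": True}

transitions = [
    [("is-get", lambda e: e["method"] == "GET")],
    [("json-body",
      lambda e: e["headers"].get("Content-Type") == "application/json")],
]
events = [{"method": "GET"},
          {"headers": {"Content-Type": "text/plain"}}]

assert evaluate(transitions, events) == {
    "success": False, "step": 1, "failed_rule": "json-body"}
```

Naming the failed rule and the step in the report is the "simple reasoning" that lets the developer locate the offending message in the exchange.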
4 Evaluation
4.1 Case one: developing an application to interoperate with cloud and IoT services
Service | Interface | Protocol |
---|---|---|
Context broker | Open Mobile Alliance's NGSI9 | HTTP Rest/JSON |
Complex Event Processor | FIWARE CEP specification | HTTP Rest/XML |
Big Data Adaptor | Apache Flume connector | Binary |
Big Data Service | FIWARE Big Data specification | Rest/XML |
Object Storage | CDMI API specification | |