
About This Book

This book constitutes the thoroughly refereed post-conference proceedings of the workshops held at the 11th International Conference on Web Engineering, ICWE 2011, in Paphos, Cyprus, in June 2011. The 42 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in sections on the Third International Workshop on Lightweight Composition on the Web (ComposableWeb 2011); the First International Workshop on Search, Exploration and Navigation of Web Data Sources (ExploreWeb 2011); the Second International Workshop on Enterprise Crowdsourcing (EC 2011); the Seventh Model-Driven Web Engineering Workshop (MDWE 2011); the Second International Workshop on Quality in Web Engineering (QWE 2011); the Second Workshop on the Web and Requirements Engineering (WeRE 2011); as well as the Doctoral Symposium 2011 and the ICWE 2011 Tutorials.

Table of Contents

Frontmatter

Third International Workshop on Lightweight Composition on the Web (ComposableWeb 2011)

An Evaluation of Mashup Tools Based on Support for Heterogeneous Mashup Components

Mashups are built by combining building blocks, which are commonly referred to as mashup components. These components are characterized by a high level of heterogeneity in terms of technologies, access methods, and the behavior they may exhibit within a mashup. Abstracting away this heterogeneity is the mission of the so-called mashup tools aiming at automating or semi-automating mashup development to serve non-programmers. The challenge is to ensure this abstraction mechanism does not limit the support for heterogeneous mashup components. In this paper, we propose a novel evaluation framework that can be applied to assess the degree to which a given mashup tool addresses this challenge. The evaluation framework can serve as a benchmark for future improved design of mashup tools with respect to heterogeneous mashup components support. In order to demonstrate the applicability of the framework, we also apply it to evaluate some existing tools.

Saeed Aghaee, Cesare Pautasso

An Approach to Construct Dynamic Service Mashups Using Lightweight Semantics

Thousands of Web services are now available online, and mashups built upon them create added value. However, mashups are mostly developed with a predefined set of services and components, and extending them always involves programming work. Furthermore, when a service becomes unavailable, it is challenging for a mashup to smoothly switch to an alternative that offers similar functionality. To address these problems, this paper presents a novel approach that enables mashups to select and invoke semantic Web services on the fly. To extend a mashup with new semantic services, developers are only required to register and publish them as Linked Data. By refining the strategies for service selection, mashups can behave more adaptively and offer higher fault tolerance.

Dong Liu, Ning Li, Carlos Pedrinaci, Jacek Kopecký, Maria Maleshkova, John Domingue

Task-Based Recommendation of Mashup Components

Presentation-oriented mashup applications are usually developed by manual selection and assembly of pre-existing components. The latter are either described on a very technical, functional level, or using informal descriptors, such as tags, which carry certain ambiguities. Given the increasing number and complexity of available components, their discovery and integration has become a challenge for non-programmers. Therefore, we present a novel concept for the task-based recommendation of mashup components, which comprises a more natural, task-driven description of user requirements and a corresponding semantic matching algorithm for universal mashup components. By realizing it and integrating it with a composition platform, we demonstrate the feasibility and sufficiency of our approach.

Vincent Tietz, Gregor Blichmann, Stefan Pietschmann, Klaus Meißner

Integration of Telco Services into Enterprise Mashup Applications

In this paper we present our approach to integrating telco services into enterprise mashup applications. We show how cross-network integration and a multi-user-oriented mashup concept support the execution and orchestration of business processes. We identify the main classes of telco services and provide a reference architecture for telco-enabled mashup applications. Finally, we describe our approach to a systematic integration process and give an outlook on our further research.

Olexiy Chudnovskyy, Frank Weinhold, Hendrik Gebhardt, Martin Gaedke

Orchestrated User Interface Mashups Using W3C Widgets

One of the key innovations introduced by web mashups into the integration landscape (which basically focuses on data and application integration) is integration at the UI layer. Yet, despite several years of mashup research, no commonly agreed-on component technology for UIs has emerged so far. We believe W3C’s widgets are a good starting point for componentizing UIs and a good candidate for reaching such an agreement. Recognizing, however, their shortcomings in terms of inter-widget communication – a crucial ingredient in the development of interactive mashups – in this paper we (i) first discuss the nature of UI mashups and then (ii) propose an extension of the widget model that aims at supporting a variety of inter-widget communication patterns.

Scott Wilson, Florian Daniel, Uwe Jugel, Stefano Soi

Cross-Domain Embedding for Vaadin Applications

Although the browser was originally designed neither for running applications nor for displaying a number of small widgets on a single web page, today many web pages benefit considerably from being able to host small embedded applications as components. While the web is full of such applications, they cannot be easily reused because of the same-origin policy restrictions that were introduced to protect web content from potentially malicious use. In this paper, we describe a generic design for cross-domain embedding of web applications in a fashion that enables loading applications from different domains as well as communication between the client and server. As the proof-of-concept implementation environment, we use the Vaadin web development framework, a Google Web Toolkit based system that uses Java for application development.

Janne Lautamäki, Tommi Mikkonen

Web Linking-Based Protocols for Guiding RESTful M2M Interaction

The Representational State Transfer (REST) style has become a popular approach for the lightweight implementation of Web services, mainly because of relevant benefits such as massive scalability, high evolvability, and low coupling. It was designed with the human user in mind as the one who drives service invocation and discovery. Attempts to give machine clients a similar autonomy have been proposed, and recent discussions have evaluated explicit semantics in the form of well-defined media types, at the cost of introducing higher levels of coupling. We explore Web linking as a lightweight mechanism for representing link semantics and guiding machine clients in the execution of well-defined choreographies, and illustrate our approach with the OAuth and OpenID protocols, exploring asynchrony and machine expectations as the interaction moves forward.

Jesus Bellido, Rosa Alarcon, Cristian Sepulveda

Batched Transactions for RESTful Web Services

In this paper, we propose a new transaction processing system for RESTful Web services; we describe a system architecture and algorithms. Contrary to other approaches, Web services do not require any changes to be used with our system. The system is transparent to non-transactional clients. We achieve that by introducing an overlay network of mediators and proxy servers, and restricting transactions to be a batched set of REST/HTTP operations (or requests) on Web resources addressed by URIs. To be able to use existing Web hosts that normally do not support versioning of Web resources, transaction resources are currently modified in-place, with a simple compensation mechanism. Concurrent execution of transactions guarantees isolation.

Sebastian Kochman, Paweł T. Wojciechowski, Miłosz Kmieciak

Secure Mashup-Providing Platforms - Implementing Encrypted Wiring

Mashups were not designed with security in mind. Their main selling point is a flexible and easy-to-use development approach. The fact that mashups enable users to compose services to create a piece of software with new functionality, integrating inputs from various sources, implies a security risk. However, in many scenarios where mashups add business value, e.g. enterprise mashups, security and privacy are important requirements. A secure environment for the handling of potentially sensitive end-user information is needed, unless the user fully trusts the mashup-providing platform (MPP), which is unlikely for hosted enterprise mashups. In this paper we present a proof-of-concept implementation which enables the secure usage of a mashup-providing platform and protects sensitive data against malicious widgets and platform operators.

Matthias Herbert, Tobias Thieme, Jan Zibuschka, Heiko Roßnagel

First International Workshop on Search, Exploration and Navigation of Web Data Sources (ExploreWeb 2011)

A Conceptual Framework for Linked Data Exploration

An increasing number of open data sets are becoming available on the Web as Linked Data (LD), and many efforts have been devoted to showing the potential of LD applications from a technical point of view. However, less attention has been paid to the analysis of information-seeking requirements from the user's point of view. In this paper we examine the Information Seeking Process and propose a general framework that addresses all its requirements in the context of LD-based applications. We support the seamless integration of both Linked and non-Linked data sources, and we allow designers to define complex, rank-aware result construction and exploration rules based on rank aggregation and many-to-many data navigation.

Alessandro Bozzon, Marco Brambilla, Emanuele Della Valle, Piero Fraternali, Chiara Pasini

Support for Reusable Explorations of Linked Data in the Semantic Web

The growth of the Linked Data cloud is changing current Web application development. One of the first steps is to determine whether there is information already available that can be immediately reused. We provide an environment which allows users who are not technically savvy, but who understand the problem domain, to accomplish these tasks. They employ a combination of search, query and faceted navigation in a direct-manipulation, query-by-example style interface. In this process, users can reuse solutions previously found by other users, which may accomplish sub-tasks of the problem at hand. It is also possible to create an end-user-friendly interface to allow them to access the information. Once a solution has been found, it can be generalized and optionally made available for reuse by other users.

Marcelo Cohen, Daniel Schwabe

Generation of Semantic Clouds Based on Linked Data for Efficient Multimedia Semantic Annotation

The major drawback of existing semantic annotation methods is that they are not intuitive enough for users to easily resolve semantic ambiguities while associating semantic meaning with a chosen keyword. We have developed a semantic-cloud-based annotation scheme in which users can use semantic clouds as the primary interface for semantic annotation and choose the most appropriate concept among the candidate semantic clouds. The most critical element of this scheme is the method of generating efficient semantic clouds that let users intuitively recognize candidate concepts to be annotated without any semantic ambiguity. We propose a semantic cloud generation approach that locates essential points at which to start searching for relevant concepts in Linked Data and then iteratively analyzes potential merges of different semantic data. We focus on reducing the complexity of handling a large amount of Linked Data by providing context-sensitive traversal of such data. We demonstrate the quality of the semantic clouds generated by the proposed approach with a case study.

Han-Gyu Ko, In-Young Ko

Ontology Based Segmentation of Geo-Referenced Queries

The latest generation of search engines is confronted with complex queries whose expression goes beyond the capability of the bag-of-words model and requires systems that understand query sentences. Among these queries, geo-referenced queries are of particular importance, i.e. queries whose understanding requires localizing objects of interest, where the user's location is the most important parameter. In this paper, we focus on geo-referenced queries and show how natural language analysis can be used to decompose queries into sub-queries and associate them with suitable real-world objects. We propose a syntactic and semantic approach, which uses syntactic query segmentation techniques and the ontological notion of geographic concepts to produce good query interpretations; an analysis of the method shows its practical viability.

Mamoun Abu Helou

SimSpectrum: A Similarity Based Spectral Clustering Approach to Generate a Tag Cloud

Tag clouds are a means for navigation and exploration of information resources provided by social Web sites. The most widely used approach to generating a tag cloud is based on the popularity of tags among the users who annotate with them. This approach, however, has several limitations: it suppresses tags which are not used often but could lead to interesting resources, as well as tags which fall outside the default number of tags to present in the cloud. In this paper we propose SimSpectrum, a similarity-based spectral clustering approach to generating a tag cloud which improves the current state of the art with respect to these limitations. Our approach is based on determining to what extent tags are related by a similarity calculus. Based on the results of the similarity calculation, the spectral clustering algorithm finds clusters of tags which are strongly related to each other and only loosely related to the other tags. By doing so, we can cover part of the tags which are discarded by traditional tag cloud generation approaches and therefore present the user with more opportunities to find related, interesting web resources. We also show that in terms of metrics that capture the structural properties of a tag cloud, such as coverage and relevance, our method achieves significant results compared to the baseline tag cloud that relies on tag popularity. In terms of the overlap measure, our method also shows improvements over the baseline approach. The proposed approach is evaluated using the MedWorm medical article collection.

Frederico Durao, Peter Dolog, Martin Leginus, Ricardo Lage

Graph Access Pattern Diagrams (GAP-D): Towards a Unified Approach for Modeling Navigation over Hierarchical, Linear and Networked Structures

In this paper we motivate the advantages of a unified, language-independent concept for describing and defining navigation systems based on underlying graph structures. We expect that such an approach will lower the effort for implementing navigation systems with application frameworks while increasing the configurability and reusability of navigation systems at the same time. It also allows adapting navigation components to new data sources easily. A visual notation called Graph Access Pattern Diagrams (GAP-Ds) is outlined and its expressivity is demonstrated by examples.

Matthias Keller, Martin Nussbaumer

Data-Driven and User-Driven Multidimensional Data Visualization

Data visualization on the Web is one of the main pillars for understanding the information coming from Business Intelligence based systems. However, the variety of data sources and devices, together with the multidimensional nature of data and the continuous evolution of requirements, is making this discipline more complicated as well as more fascinating. This paper outlines a process for obtaining a multidimensional data visualization driven by both the data and the user, with automatic code generation. While the designer is automatically provided with a wide range of possible visualizations for a given data set, the user can change the visualization in several ways: the dominant dimension, the kind of visualization, and the data set itself by adding, removing or grouping variables.

Rober Morales-Chaparro, Juan C. Preciado, Fernando Sánchez-Figueroa

Second International Workshop on Enterprise Crowdsourcing (EC 2011)

Context-Aware and Adaptive Web Interfaces: A Crowdsourcing Approach

Web site providers currently have to deal with the growing range and increased diversity of devices used for web browsing. It is not only technically challenging to provide flexible interfaces able to adapt to the large variety of viewing situations, but also costly. We discuss the idea and challenges of adopting a crowdsourcing model in which end-users can participate in the adaptation process with the goal of enabling a much wider range of use contexts to which applications can adapt.

Michael Nebeling, Moira C. Norrie

Massive Multiplayer Human Computation for Fun, Money, and Survival

Crowdsourcing is an effective tool for solving hard tasks. By bringing hundreds of thousands of people to work on simple tasks that only humans can do, we can go far beyond traditional models of data analysis and machine learning. As technologies and processes mature, crowdsourcing is becoming mainstream. It powers many leading Internet companies and a wide variety of novel projects: from content moderation and business listing verification to real-time SMS translation for disaster response. However, quality assurance can be a major challenge. In this paper CrowdFlower presents various crowdsourcing applications, from business to ethics, to money and survival, all of which showcase the power of labor-on-demand, otherwise known as the human cloud.

Lukas Biewald

Enterprise Crowdsourcing Solution for Software Development in an Outsourcing Organization

Enterprise Crowdsourcing has the potential to be a very powerful and disruptive paradigm for human resource deployment, project development and project management as we know them. This paper details ongoing work at TCS Innovation Labs – Web 2.0, Tata Consultancy Services, Chennai, India to develop an Enterprise Crowdsourcing solution that tackles the various processes involved in software development by leveraging untapped human resources in the organization. Large IT organizations have a lot of untapped manpower in the form of trainees, the bench strength, and people in roles which do not fully employ their strengths in the particular technologies they are experts in. This system aims to give untapped talent access to challenging tasks that are part of other projects, while providing a disruptive way to allocate resources in a conventional software development environment.

Ranganathan Jayakanthan, Deepak Sundararajan

Seventh Model-Driven Web Engineering Workshop (MDWE 2011)

A Model-Driven Framework for Developing Web Service Oriented Applications

The advancements made in terms of the capabilities of mobile devices have shifted the interest of service engineering towards frameworks that are able to deliver applications rapidly and efficiently. The development of services that can be fully functional in mobile environments and operable on a variety of devices is an important and complex task for the research community. In this work, we propose a Model-Driven Web Service oriented framework that combines Model-Driven Engineering (MDE) with Web Services to automate the development of platform-specific web-based applications. The importance of this work is revealed through a case study that involves modelling and generation of a representative Web Service oriented mobile application.

Achilleas Achilleos, Georgia M. Kapitsaki, George A. Papadopoulos

Developing Enterprise Web Applications Using the Story Driven Modeling Approach

Today’s browsers, tools and Internet connections enable the growth of Enterprise Web Applications. These applications are no longer page-based and designed using HTML code; they bring the capabilities and concepts of traditional desktop applications to the browser. We have been developing desktop applications for years and have defined our own process to enable the fully model-driven development of applications without source code. Using this process and its tools, we are able not only to define data models for traditional applications and generate code from them; combined with the story-driven modeling approach, we can also design the logic of applications using models and generate fully functional code. To apply our knowledge, tools and usual process to the development of Enterprise Web Applications, we investigated our process and adapted it to the new needs. As a result, we propose a new development process that combines the needs of complex software development with the implementation of web user interfaces and the control flows between these user interfaces. The process is a guideline for using models and tools for the development of complex Enterprise Web Applications, including data model, behaviour and user interface.

Christoph Eickhoff, Nina Geiger, Marcel Hahn, Albert Zündorf

Aspect-Oriented Modeling of Web Applications with HiLA

Modern web applications often contain features, such as landmarks, access control, or adaptation, that are difficult to model modularly with existing Model-Driven Web Engineering approaches. We show how HiLA, an aspect-oriented extension for UML state machines, can represent these kinds of features as aspects. The resulting models achieve separation of concerns and satisfy the “Don’t Repeat Yourself” (DRY) guideline. Furthermore, HiLA provides means to detect potential interferences between features and a declarative way to specify the behavior of such feature combinations.

Gefei Zhang, Matthias Hölzl

Model-Driven Web Form Validation with UML and OCL

Form validation is an integral part of a web application. Web developers must ensure that data input by the user is validated for correctness. Given the importance of form validation it must be considered as part of a model-driven solution to web development. Existing model-driven approaches typically have not addressed form validation as part of the model. In this paper, we present an approach that allows validation constraints to be captured within a model using UML and OCL. Our approach covers three common types of validation: single element, multiple element, and entity association. We provide an example to illustrate an architecture-centric approach.

Eban Escott, Paul Strooper, Paul King, Ian J. Hayes

Modernization of Legacy Web Applications into Rich Internet Applications

In recent years one of the main concerns of the software industry has been to reengineer its legacy Web Applications (WAs) to take advantage of the benefits introduced by Rich Internet Applications (RIAs), such as enhanced user interaction and network bandwidth optimization. However, those reengineering processes have traditionally been performed in an ad-hoc manner, resulting in very expensive and error-prone projects. This situation is partly explained by the fact that most legacy WAs were developed before Model-Driven Development (MDD) approaches became mainstream, so maintenance activities for those legacy WAs have not yet been incorporated into an MDA development lifecycle. The OMG Architecture-Driven Modernization (ADM) initiative advocates applying MDD principles to formalize and standardize those reengineering processes for modernization purposes. In this paper we outline an ADM-based WA-to-RIA modernization process, highlighting the special characteristics of this modernization scenario.

Roberto Rodríguez-Echeverría, José María Conejero, Pedro J. Clemente, Juan C. Preciado, Fernando Sánchez-Figueroa

Second International Workshop on Quality in Web Engineering (QWE 2011)

Quality Models for Web [2.0] Sites: A Methodological Approach and a Proposal

This paper discusses a methodological approach to defining quality models (QMs) for Web sites of any kind, including Web 2.0 sites. The approach stresses the practical use of a QM, in requirement definition and quality assessment, during design and development processes or during site operation. An important requirement for such QMs is organization mapping, which allows those in charge of quality management to easily identify the actors in the organization responsible for implementing or improving each specific quality characteristic. A family of QMs is proposed and compared with the ISO/IEC 25010 QMs for software products and software-intensive computer systems.

Roberto Polillo

Exploring the Quality in Use of Web 2.0 Applications: The Case of Mind Mapping Services

Research in Web quality has addressed quality in use as the most important factor affecting a wide acceptance of software applications. It can be conceived as comprising two complementary concepts, that is, usability and user experience, which accounts for the employment of more user-centred evaluations. Nevertheless, in the context of Web 2.0 applications, this topic has still not attracted sufficient attention from the HCI community. This paper addresses the quality in use of Web 2.0 applications on the case of mind mapping services. The evaluation methodology brings together three complementary methods. The estimated quality in use is measured by means of the logging actual use method, while the perceived quality in use is evaluated by means of the retrospective thinking aloud (RTA) method and a questionnaire. The contribution of our work is twofold. Firstly, we provide empirical evidence that the proposed methodology in conjunction with the model, set of attributes, and measuring instruments is appropriate for evaluating quality in use of Web 2.0 applications. Secondly, the analysis of qualitative data reveals that performance and effort based attributes considerably contribute to mind mapping services success.

Tihomir Orehovački, Andrina Granić, Dragutin Kermek

Second Workshop on the Web and Requirements Engineering (WeRE 2011)

Detecting Conflicts and Inconsistencies in Web Application Requirements

Web applications evolve fast. One of the main reasons for this evolution is that new requirements emerge and change constantly. These new requirements are posed either by customers or as a consequence of users’ feedback about the application. One of the main problems when dealing with new requirements is their consistency with the current version of the application. In this paper we present an effective approach for detecting and solving inconsistencies and conflicts in web application requirements. We first characterize the kinds of inconsistencies arising in web application requirements and then show how to isolate them using a model-driven approach. We illustrate our approach with a set of examples.

Matias Urbieta, Maria Jose Escalona, Esteban Robles Luna, Gustavo Rossi

Streamlining Complexity: Conceptual Page Re-modeling for Rich Internet Applications

The growth of Rich Internet Applications (RIAs) calls for new conceptual tools that enable web engineers to model the design complexity unleashed by innovative interaction (with increasing communication potential) and to carefully consider the impact of design decisions on the optimal flow of the User Experience (UX). In this paper we illustrate how it is particularly relevant for RIA engineering not only to capture existing RIA technologies with suitable design artifacts but also to model an effective dialogue between users and RIA interfaces. Through a case study, we propose a set of conceptual design primitives (Rich-IDM) that enable web engineers to characterize the fluid, smooth and organic nature of user interaction and to take design decisions which meet both usability and communication requirements.

Andrea Pandurino, Davide Bolchini, Luca Mainetti, Roberto Paiano

Doctoral Symposium 2011

A Flexible Graph-Based Data Model Supporting Incremental Schema Design and Evolution

Web data is characterized by great structural diversity as well as frequent changes, which poses a great challenge for web applications based on that data. We address this problem by developing a schema-optional and flexible data model that supports the integration of heterogeneous and volatile web data. To this end, we rely on graph-based models that allow the schema to be incrementally extended with various information and constraints. Inspired by the ongoing Web 2.0 trend, we want users to participate in the design and management of the schema. By incrementally adding structural information, users can enhance the schema to meet their very specific requirements.

Katrin Braunschweig, Maik Thiele, Wolfgang Lehner

ProLD: Propagate Linked Data

Since the Web of Data consists of different data sources maintained by different authorities, the up-to-dateness of its resources varies greatly; yet a number of applications are built upon it. To tackle the problem of outdated resources, we propose a framework that utilizes the linkage between Linked Data nodes to propagate updates in the cloud. For this purpose we have surveyed propagation strategies developed in the database domain and compiled a list of currently unsolved problems which highlight the difference between propagation in the Web of Data and state-of-the-art approaches. Apart from improving the up-to-dateness of data, following this propagation approach improves the network and reduces inconsistencies.

Peter Kalchgruber

Causal Relation Detection for Activities from Heterogeneous Sources

On the web, information representing specific activities is often scattered over different systems. Although causal relations exist between these activities, they are usually not obviously visible to the user unless explicitly given. This paper outlines the difficulties caused by missing relations. The core contribution of this work will be a system capable of identifying cause-effect relations between single activities. The system will use these relations to form coarse-grained groups consisting of sequences of single activities. The intended goal is to employ the detected relations to reduce information overload while increasing accountability, clarity, and traceability for its users. The research is conceived under the assumption of handling heterogeneous sources of information. A further objective is to create a highly generic and flexible system which can be adapted to different use cases. The system will be evaluated with concrete case studies, one of them analyzing relations on software development sites such as SourceForge.

Philipp Katz, Alexander Schill

XML Document Versioning, Revalidation and Constraints

One of the prominent characteristics of XML applications is their dynamic nature. When a system grows and evolves, old user requirements change and/or new requirements accumulate. Apart from changes in the interfaces used or provided by the system or its components, it is also necessary to modify the existing documents with each new version so that they are valid against the new specification. In this doctoral work we will extend an existing conceptual modeling approach with support for multiple versions of the model. Thanks to this extension, it will be possible to detect changes between two versions of a schema and generate a revalidation script for the existing data. By adding integrity constraints to the model, it will also be possible to revalidate changes in semantics in addition to changes in structure.

Jakub Malý, Martin Nečaský

A Reuse-Oriented Product-Line Method for Enterprise Web Applications

Software product line engineering (SPLE) is a methodology for achieving systematic asset reuse in a family of software. The author of this proposal is producing a range of enterprise web portal products for Higher Education Institutions. The commonalities and variabilities of this product family suggest a SPLE approach would be beneficial. However, research indicates that full-blown, proactive SPLE is not always suited to small businesses. Efforts exist to reduce the overheads of SPLE. In this vein, this research proposes to develop a method for applying software product line engineering to enterprise web application development that makes efficient use of existing frameworks. This research falls into the domain of model-driven processes and methods for web engineering.

Neil Mather, Samia Oussena

A Flexible Architecture for Client-Side Adaptation

Currently the Web allows users to perform complex tasks that involve different Web applications. However, users still have to carry out these tasks in a handcrafted way. Although it is possible to build service-based software, such as mashups, to combine data and information from different providers, this approach often has limitations. In this paper we present an approach for Client-Side Adaptation aimed at supporting complex concern-sensitive and task-based adaptations with user-collected data. Our approach improves the user experience by supporting user tasks that span several Web applications.

Sergio Firmenich, Gustavo Rossi, Silvia Gordillo, Marco Winckler

Applications of Mobile Application Interface Description Language MAIDL

The development of mobile mashup applications has grown rapidly in recent years. We present the Mobile Application Interface Description Language (MAIDL) and its applications. The language enables the development of mobile mashup applications with less programming effort. Using our description language, composers are able to reuse existing mobile applications, Web services, and Web applications as components to create a mobile mashup application or a Tethered Web Service on a mobile device (TeWS). We demonstrate a further application of a TeWS to deliver a cooperative mashup via a functionality exchange between an Android and an iOS device.

Prach Chaisatien, Korawit Prutsachainimmit, Takehiro Tokuda

A Domain-Specific Language for Do-It-Yourself Analytical Mashups

The increasing amount and variety of data available on the web leads to new possibilities in end-user-focused data analysis. While the classic database technologies for data integration and analysis (ETL and BI) are too complex for the needs of end users, newer technologies like web mashups are not well suited to data analysis. To make productive use of the data available on the web, end users need easy ways to find, join, and visualize it.

We propose a domain-specific language (DSL) for querying a repository of heterogeneous web data. In contrast to query languages such as SQL, this DSL describes the visualization of the queried data in addition to its selection, filtering, and aggregation. The resulting data mashup can be made interactive by leaving parts of the query variable. We also describe an abstraction layer above this DSL that uses a recommendation-driven natural language interface to reduce the difficulty of creating queries in the DSL.
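To give a flavor of the idea, a query bundling filtering, aggregation, and a visualization hint might be interpreted as below; this is our own toy interpreter, not the authors' DSL, and the query keys and single `sum` aggregate are assumptions.

```python
def run_query(rows, query):
    """Interpret a toy analytical query: filter rows, group and
    aggregate them, and carry a visualization hint along."""
    data = [r for r in rows if query["where"](r)]
    agg_fn, agg_col = query["aggregate"]  # only "sum" is supported here
    groups = {}
    for r in data:
        groups.setdefault(r[query["group_by"]], []).append(r[agg_col])
    result = {key: sum(vals) for key, vals in groups.items()}
    return {"data": result, "visualize": query["visualize"]}

# CO2 emissions per country in 2010, to be rendered as a bar chart
rows = [{"country": "DE", "year": 2010, "co2": 2},
        {"country": "DE", "year": 2010, "co2": 3},
        {"country": "FR", "year": 2009, "co2": 5}]
query = {"where": lambda r: r["year"] == 2010,
         "group_by": "country",
         "aggregate": ("sum", "co2"),
         "visualize": "barchart"}
```

Making the mashup interactive would amount to leaving, say, the `where` year as a variable the end user fills in at view time.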

Julian Eberius, Maik Thiele, Wolfgang Lehner

Information Extraction from Web Pages Based on Their Visual Representation

This research is dedicated to enhancing the efficiency of web information extraction and web accessibility. The motivation behind the research and its aims and objectives are presented, and the work performed on developing a web page model for information extraction is described. We also present work on making extracted information accessible to blind users, providing them with the means to navigate and access required information quickly. We further present our ongoing research on creating efficient methods and approaches for information extraction based on the proposed model. Two main approaches are considered: 1) development of a library that provides the required functionality to the programmer; 2) development of a declarative, Datalog-like language for information extraction.

Ruslan R. Fayzrakhmanov

End-User Programming for Web Mashups

Open Research Challenges

Mashup development is the practice of lightweight composition, serendipitous reuse, and user-centric development on the Web. Although building mashups is comparatively simple, since all the required layers of a Web application (functionality, data, and user interface) are reused, it still requires programming experience. This is a significant hurdle for non-programmers (end users with minimal or no programming experience), who constitute the majority of Web users. To address this, an End-User Programming (EUP) tool can be designed to lower the barriers to mashup development, so that even non-programmers are able to create innovative, feature-rich mashups. In this paper, we give an overview of the existing EUP approaches for mashup development, as well as a list of open research challenges.

Saeed Aghaee, Cesare Pautasso

ICWE 2011 Tutorials

Multi-dimensional Context-Aware Adaptation for Web Applications

This tutorial presents the state of the art of adaptation for web interfaces with respect to multi-dimensionality and context-awareness. The specific goals include the presentation of: (i) fundamental concepts, such as motivations, definitions, and relevant context information; (ii) adaptation techniques for web applications, such as methods, models, strategies, and technologies; (iii) adaptable and adaptive web applications from both scientific and commercial perspectives.

Vivian Genaro Motti, Jean Vanderdonckt

Engineering the Personal Social Semantic Web

In this tutorial, we discuss challenges and solutions for engineering the Personal Social Semantic Web, a Web where user modeling and personalization are featured across system boundaries. To this end, we learn user modeling and personalization techniques for Social Web systems. We dive into engineering aspects of social tagging and micro-blogging services and examine appropriate modeling and mining techniques for these systems. We discuss Semantic Web and Linked Data principles that allow for linkage and alignment of distributed user data and show how system engineers can exploit the Social Semantic Web to personalize user experiences.

Fabian Abel, Geert-Jan Houben

Automating the Use of Web APIs through Lightweight Semantics

Web services have already achieved a solid level of acceptance and play a major role in the rapid development of loosely coupled, component-based systems, overcoming heterogeneity within and between enterprises. Current developments in the world of services on the Web are marked by the proliferation of Web APIs and Web applications, commonly referred to as RESTful services, which show high potential and growing user acceptance. Still, despite the achieved progress, the wider adoption of Web APIs is hindered by the fact that their implementation and publication hardly follow any standard guidelines or formats. REST principles are indeed a good step in this direction, but the vast majority of APIs do not strictly adhere to them. As a consequence, in order to use them, developers are obliged to manually locate, retrieve, read, and interpret heterogeneous documentation, and subsequently develop custom-tailored software, which has a very low level of reusability. In summary, most tasks during the life cycle of services require extensive manual effort, and applications based on existing Web APIs suffer from a lack of automation.

This tutorial introduces an approach and a set of integrated methods and tools to address this drawback, making services more accessible to both expert and non-expert users by increasing the level of automation provided during common service tasks, such as the discovery of Web APIs, their composition, and their invocation. The tutorial covers i) the conceptual underpinnings, which integrate Web APIs with state-of-the-art technologies from the Web of Data and Semantic Web Services; ii) the presentation of an integrated suite of Web-based tools supporting service users; and iii) hands-on examples illustrating how the tools and technologies can help users find and exploit existing Web APIs.
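A minimal sketch of semantic discovery over annotated API descriptions may clarify the general idea; the registry entries, the `ex:` concept names, and the exact-match semantics are all our illustrative assumptions (lightweight annotation approaches such as SAWSDL or MicroWSMO attach similar concept references and use richer matching, e.g. subsumption reasoning).

```python
APIS = [  # hypothetical registry entries with lightweight semantic annotations
    {"name": "GeoNamesSearch",
     "annotations": {"input": "ex:PlaceName", "output": "ex:Coordinates"}},
    {"name": "WeatherByCoords",
     "annotations": {"input": "ex:Coordinates", "output": "ex:Forecast"}},
]

def discover(wanted_input, wanted_output, registry=APIS):
    """Return the names of APIs whose annotations match the requested
    input and output concepts (here by exact concept-URI equality)."""
    return [api["name"] for api in registry
            if api["annotations"]["input"] == wanted_input
            and api["annotations"]["output"] == wanted_output]
```

Composition then becomes a search for chains where one API's output concept matches the next API's input concept, which is exactly what shared concept URIs make mechanical.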

Maria Maleshkova, Carlos Pedrinaci, Dong Liu, Guillermo Alvaro

Improving Quality in Use of Web Applications in a Systematic Way

A first step to evaluate quality is to define nonfunctional requirements, usually through quality models. The ISO 25010 standard describes one such model for general use in specifying and evaluating software quality requirements, but its concepts need to be adapted to a specific information need and context, i.e. for evaluating WebApps in a real situation, particularly when it comes to evaluating quality in use (QinU). Many approaches to WebApp quality evaluation have been proposed in research, but mostly for the purpose of understanding rather than improving. In this tutorial, we demonstrate how to employ a quality modeling framework and strategy to instantiate quality models with the specific purpose not only of understanding the current situation of a WebApp, but also of improving it.

Philip Lew, Luis Olsina

Backmatter
