
2014 | Book

Advances in Conceptual Modeling

ER 2014 Workshops, ENMO, MoBiD, MReBA, QMMQ, SeCoGIS, WISM, and ER Demos, Atlanta, GA, USA, October 27-29, 2014. Proceedings

Edited by: Marta Indulska, Sandeep Purao

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of workshops, held at the 33rd International Conference on Conceptual Modeling, ER 2014, in Atlanta, GA, USA in October 2014. The 24 revised full and 6 short papers were carefully reviewed and selected out of 59 submissions and are presented together with 4 demonstrations. The papers are organized in sections related to the individual workshops: the First International Workshop on Enterprise Modeling, ENMO 2014; the Second International Workshop on Modeling and Management of Big Data, MoBiD 2014; the First International Workshop on Conceptual Modeling in Requirements and Business Analysis, MReBA 2014; the First International Workshop on Quality of Models and Models of Quality, QMMQ 2014; the 8th International Workshop on Semantic and Conceptual Issues in GIS, SeCoGIS 2014; and the 11th International Workshop on Web Information Systems Modeling, WISM 2014. The contributions cover a variety of topics in conceptual modeling, including requirements and enterprise modeling, modeling of big data, spatial conceptual modeling, exploring the quality of models, and issues specific to the design of web information systems.

Table of Contents

Frontmatter

1st International Workshop on Enterprise Modeling (ENMO14)

Frontmatter
Model Based Enterprise Simulation and Analysis
A Pragmatic Approach Reducing the Burden on Experts

Modern enterprises are complex systems operating in highly dynamic environments. The time to respond to the various change drivers is short, and the cost of incorrect decisions is prohibitively high. Modern enterprises tend to exist in silos, leading to fragmented knowledge with little support available for composing the fragments. Current practice places a heavy burden on experts by requiring a quick and comprehensive solution. This paper proposes a model-based approach to this problem in terms of a language for enterprise simulation and analysis that is capable of integrating the ‘what’, ‘how’ and ‘why’ aspects of an enterprise. A possible implementation is also sketched.

Vinay Kulkarni, Tony Clark, Souvik Barat, Balbir Barn
3D vs. 4D Ontologies in Enterprise Modeling

This paper presents a comparison between a 3D and a 4D ontology, with the purpose of identifying modeling variations that arise from using these different kinds of ontologies. The modeling variations are illustrated by applying both ontologies to two enterprise modeling enigmas. The goal of our comparison is to demonstrate that the choice of an ontology impacts the representation of real-world phenomena and will eventually result in different enterprise models.

Michaël Verdonck, Frederik Gailly, Geert Poels
Applications of Ontologies in Enterprise Modelling: A Systematic Mapping Study

Ontologies have been used in several fields as an engineering artefact with the main purpose of conceptualizing a specific object of study. Therefore, it is reasonable to think about using ontologies to support enterprise modelling. In this paper, we investigate the application of ontologies in enterprise modelling. We performed a comprehensive systematic mapping study in order to understand how ontologies are used in enterprise modelling. We group the results by business areas, business segments, languages, environments and methodologies. We conclude that ontologies are applicable to assist enterprise modelling and have been used especially in Industry, Health and Environment, and Government.

Vitor Afonso Pinto, Camila Leles de Rezende Rohlfs, Fernando Silva Parreiras

2nd International Workshop on Modeling and Management of Big Data (MoBiD14)

Frontmatter
A Semi-clustering Scheme for High Performance PageRank on Hadoop

As global Internet business has evolved, large-scale graphs have become common. PageRank computation on large-scale graphs using Hadoop with its default data partitioning method suffers from poor performance because Hadoop scatters even a set of directly connected vertices across arbitrary nodes. In this paper we propose a semi-clustering scheme to address this problem and improve the performance of PageRank on Hadoop. Our scheme divides a graph into a set of semi-clusters, each of which consists of connected vertices, and assigns each semi-cluster to a single data partition in order to reduce the cost of data exchange between nodes during the computation of PageRank. The semi-clusters are merged and split before the PageRank computation in order to distribute a large-scale graph evenly over a number of data partitions. Our semi-clustering scheme drastically improves performance: the total elapsed time, including the cost of the semi-clustering computation itself, is reduced by up to 36%. Furthermore, the effectiveness of our scheme increases with the size of the graph.

Seungtae Hong, Jeonghoon Lee, Jaewoo Chang, Dong Hoon Choi
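
To make the partitioning idea concrete, here is a minimal Python sketch of semi-clustering, assuming a greedy BFS that grows bounded-size clusters of connected vertices so each cluster can be mapped to a single data partition. The function name and size cap are illustrative assumptions; the paper's actual scheme additionally merges and splits clusters to balance partitions before the PageRank run.

```python
from collections import defaultdict, deque

def semi_cluster(edges, max_size):
    """Greedily group connected vertices into bounded-size semi-clusters
    via BFS so each cluster can go to a single data partition.
    Illustrative sketch only: the paper's scheme also merges/splits
    clusters to balance load across partitions."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    assigned, clusters = {}, []
    for start in adj:
        if start in assigned:
            continue
        cluster, queue = [], deque([start])
        while queue and len(cluster) < max_size:
            node = queue.popleft()
            if node in assigned:
                continue
            assigned[node] = len(clusters)
            cluster.append(node)
            queue.extend(n for n in adj[node] if n not in assigned)
        clusters.append(cluster)
    return clusters

# Toy graph: two loosely connected communities.
edges = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (3, 4)]
print(semi_cluster(edges, max_size=3))  # [[1, 2, 3], [4, 5, 6]]
```

Assigning each returned cluster to one partition keeps most PageRank score exchanges local to a node, which is where the reported elapsed-time reduction comes from.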
Energy Consumption Prediction by Using an Integrated Multidimensional Modeling Approach and Data Mining Techniques with Big Data

During the past decades, resources have been used in an irresponsible and negligent manner. This has led to an increasing need to adopt more intelligent ways of managing existing resources, especially those related to energy. In this regard, one of the main aims of this paper is to explore the opportunities of using ICT (Information and Communication Technologies) as an enabling technology to reduce energy use in cities. This paper presents a study in which we propose a multidimensional hybrid architecture that makes use of current energy data and external information to improve knowledge acquisition and allow managers to make better decisions. Our main goal is to make predictions about energy consumption based on energy data mining and supported by external knowledge. This external knowledge is represented by a torrent of information that, in many cases, is hidden across heterogeneous and unstructured data sources, and is retrieved by an Information Extraction system. The paper is complemented by a real case study that shows promising partial results.

Jesús Peral, Antonio Ferrández, Roberto Tardío, Alejandro Maté, Elisa de Gregorio
Benchmarking Performance for Migrating a Relational Application to a Parallel Implementation

Many organizations rely on relational database platforms for OLAP-style querying (aggregation and filtering) for small to medium size applications. We investigate the impact of scaling up the data sizes for such queries. We intend to illustrate what kind of performance results an organization could expect should they migrate current applications to big data environments. This paper benchmarks the performance of Hive [20], a parallel data warehouse platform that is a part of the Hadoop software stack. We set up a 4-node Hadoop cluster using Hortonworks HDP 1.3.2 [10]. We use the data generator provided by the TPC-DS benchmark [3] to generate data of different scales. We use a representative query provided in the TPC-DS query set and run the SQL and Hive Query Language (HiveQL) versions of the same query on a relational database installation (MySQL) and on the Hive cluster. We measure the speedup for query execution for all dataset sizes resulting from the scale up. Hive loads the large datasets faster than MySQL, while it is marginally slower than MySQL when loading the smaller datasets.

Krishna Karthik Gadiraju, Karen C. Davis, Paul G. Talaga
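
The benchmark's core measurement, the speedup of the parallel platform over the relational one at each TPC-DS scale factor, can be reproduced with a small harness along the following lines. This is a hypothetical sketch, not the authors' code: `run_mysql` and `run_hive` stand in for whatever DB-API or PyHive wrappers execute the shared query at a given scale factor.

```python
import time

def median_time(run, repeats=3):
    """Median wall-clock seconds for a no-argument callable `run`."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        run()
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

def speedups(run_mysql, run_hive, scale_factors):
    """Speedup (MySQL time / Hive time) per TPC-DS scale factor.
    `run_mysql(sf)` / `run_hive(sf)` are hypothetical callables that
    execute the benchmark query against data generated at scale `sf`."""
    return {sf: median_time(lambda: run_mysql(sf)) / median_time(lambda: run_hive(sf))
            for sf in scale_factors}
```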
A Data Quality in Use Model for Big Data
(Position Paper)

Organizations are nowadays immersed in the Big Data era. Beyond the hype around the concept of Big Data, something in the way of doing business is really changing. Although some challenges remain the same as for regular data, with big data the focus has changed, because Big Data is not only data, but a complete framework encompassing the data themselves along with storage, formats, and ways of provisioning, processing and analytics. A challenge that becomes even trickier is the management of the quality of big data. More than ever, the need for assessing the quality-in-use of big datasets gains importance, since the real contribution (business value) of a dataset to a business can only be estimated in its context of use. Although different data quality models exist for assessing the quality of data, a quality-in-use model adapted to big data is still lacking. To fill this gap, and based on ISO 25012 and ISO 25024, we propose the 3Cs model, which is composed of three data quality dimensions for assessing the quality-in-use of big datasets: Contextual Consistency, Operational Consistency and Temporal Consistency.

Ismael Caballero, Manuel Serrano, Mario Piattini
Business Intelligence and Big Data in the Cloud: Opportunities for Design-Science Researchers

Cloud computing and big data offer new opportunities for business intelligence (BI) and analytics. However, traditional techniques, models, and methods must be redefined to provide decision makers with service of data analysis through the cloud and from big data. This situation creates opportunities for research and more specifically for design-science research. In this paper, we propose a typology of artifacts potentially produced by researchers in design science. Then, we analyze the state of the art through this typology. Finally, we use the typology to sketch opportunities of new research to improve BI and analytics capabilities in the cloud and from big data.

Odette Mwilu Sangupamba, Nicolas Prat, Isabelle Comyn-Wattiau
From Business Intelligence to Semantic Data Stream Management

Semantic Web technologies are being increasingly used for exploiting relations between data. In addition, new kinds of real-time systems, such as social networks, sensors, cameras and weather services, are continuously generating data. This implies that data, and the links between them, are becoming extremely vast. Such a huge quantity of data needs to be analyzed and processed, as well as stored if necessary. In this paper, we introduce recent work on Real-Time Business Intelligence that includes semantic data stream management. We also present underlying approaches such as continuous queries and data summarization.

Marie-Aude Aufaure, Raja Chiky

1st International Workshop on Conceptual Modeling in Requirements and Business Analysis (MReBA14)

Frontmatter
Understandability of Goal Concepts by Requirements Engineering Experts

ARMOR is a graphical language for modeling business goals and enterprise architectures. In previous work we have identified problems with understandability of goal-oriented concepts for practicing enterprise architects. In this paper we replicate the earlier quasi-experiments with experts in requirements engineering, to see if similar problems arise. We found that fewer mistakes were made in this replication than were made in the previous experiment with practitioners, but that the types of mistakes made in all the concepts were similar to the mistakes made in our previous experiments with enterprise architects. The stakeholder concept was used perfectly by our sample, but the goal decomposition relation was not understood. The subjects provided explanations for understandability problems that are similar to our previous hypothesized explanations. By replicating some of our earlier results, this paper provides additional support for the generalizability of our earlier results.

Wilco Engelsman, Roel Wieringa
On the Definition of Self-service Systems

Changing requirements are common in today's organizations and have been a central concern in Requirements Engineering (RE). Over time, methods have been developed to deal with such variability. Yet, these methods often require a considerable amount of time to be applied. As time-to-value is becoming a critical requirement for users, new types of systems have been developed to deal more efficiently with changing requirements: Self-Service Systems. In this paper, we provide an overall discussion of what self-service systems are, what they imply in terms of engineering, how they can be designed, and what types of questions they raise for RE.

Corentin Burnay, Joseph Gillain, Ivan J. Jureta, Stéphane Faulkner
Semantic Monitoring and Compensation in Socio-technical Processes

Socio-technical processes are becoming increasingly important, with the growing recognition of the computational limits of full automation, the growth in popularity of crowdsourcing, the complexity and openness of modern organizations, etc. A key challenge in managing socio-technical processes is dealing with the flexible, and sometimes dynamic, nature of the execution of human-mediated tasks. It is well recognized that human execution does not always conform to predetermined coordination models and is often error-prone. This paper addresses the problem of semantically monitoring the execution of socio-technical processes to check for non-conformance, and the problem of recovering from (or compensating for) non-conformance. It proposes a semantic solution to the problem, leveraging semantically annotated process models to detect non-conformance and using the same semantic annotations to identify compensatory human-mediated tasks.

Yingzhi Gou, Aditya Ghose, Chee-Fon Chang, Hoa Khanh Dam, Andrew Miller
Modeling Regulatory Compliance in Requirements Engineering

A large and rapidly growing number of laws impact software systems worldwide. For each of them, software designers need to ensure that their system, new or legacy, complies with the law. Establishing compliance requires the ability to identify which laws are applicable, what the different ways to comply are, and whether the given requirements for a software system comply. In this short paper we give an overview of ongoing work on Nòmos 3, a modelling language tailored to modelling laws and requirements. Nòmos 3 models can be translated into a formal specification that supports automated compliance analysis.

Silvia Ingolfo, Alberto Siena, John Mylopoulos
Automated Completeness Check in KAOS

KAOS is a popular and useful goal-oriented requirements engineering (GORE) language that can be used for business requirements modelling, specification, and analysis. Currently, KAOS is being used in areas such as business process modelling and enterprise architecture (EA). However, an incomplete or malformed KAOS model can result in incomplete and erroneous requirements analysis, which in turn can lead to overall system failure. It is therefore necessary to check that a requirements specification in the KAOS language is complete and well formed. The contribution at hand is an automated technique for checking the completeness and well-formedness of a requirements specification in the KAOS language. Such a technique can be useful, especially to business or requirements analysts in industry and research, for checking that a KAOS requirements specification is well formed.

Joshua C. Nwokeji, Tony Clark, Balbir Barn, Vinay Kulkarni
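
As an illustration of what such a check can look like, the Python sketch below flags goals that are neither refined into subgoals nor assigned to an agent (one standard KAOS completeness rule) and verifies that refinements reference declared goals. The data structures and rule set are simplified assumptions; the paper's technique covers the full KAOS metamodel.

```python
def check_kaos_completeness(goals, refinements, assignments):
    """Simplified completeness/well-formedness check for a KAOS model.
    `goals`: set of goal names; `refinements`: {parent: [subgoals]};
    `assignments`: {goal: responsible agent}. Illustrative only."""
    problems = []
    for g in goals:
        refined = bool(refinements.get(g))
        if not refined and g not in assignments:
            problems.append(f"leaf goal '{g}' has no responsible agent")
    # Well-formedness: every subgoal mentioned must itself be declared.
    for parent, subs in refinements.items():
        for s in subs:
            if s not in goals:
                problems.append(f"'{parent}' refines undeclared goal '{s}'")
    return problems

goals = {"ScheduleMeeting", "CollectConstraints", "PickSlot"}
refinements = {"ScheduleMeeting": ["CollectConstraints", "PickSlot"]}
assignments = {"CollectConstraints": "Initiator"}
print(check_kaos_completeness(goals, refinements, assignments))
# ["leaf goal 'PickSlot' has no responsible agent"]
```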
Practical Goal Modeling for Enterprise Change Context: A Problem Statement

Modern enterprises need to respond rapidly to changes. Goal modeling techniques are intuitive mechanisms that help in modeling and analyzing the rationale behind an enterprise's response to change. In spite of their intuitiveness, several challenges need to be addressed for their practical adoption and application. We present a problem statement based on a real-world case study and possible ways in which these challenges can be addressed.

Sagar Sunkle, Hemant Rathod, Vinay Kulkarni

1st International Workshop on Quality of Models and Models of Quality (QMMQ14)

Frontmatter
A Superstructure for Models of Quality

With additional quality modeling features added to conceptual models, computers could play a greater role in ensuring a higher level of quality in the information we model. For information-discovery applications, these additional conceptual modeling features should automatically accommodate certainty and conflicting information, support evidence-based research, automate collaboration, and provide research guidance. To address these issues, we propose a superstructure that adds four additional abstraction layers to typical conceptual models: a knowledge layer, an evidence layer, a communication layer, and an action layer. We show by a running example the benefits these abstraction layers provide for increasing the quality of the information being modeled.

David W. Embley, Stephen W. Liddle, Scott N. Woodfield
A Quality Management Workflow Proposal for a Biodiversity Data Repository

The importance of quality-assured data in scientific analysis necessitates the inclusion of data quality management (DQM) functionality in research data repositories, in addition to their primary role of data storage, sharing and integration. Typically, the DQM workflow in data repositories is fixed and semi-automated for datasets whose structure and semantics are known a priori; for other types of datasets, DQM is either manual or minimal. In comparison, classical DQM methodology (especially in data warehousing research) has established standard, typically manually undertaken, DQM procedures for different types of data. Our proposal therefore aims at customizing and semi-automating the classical DQM procedures for biodiversity data repositories. As opposed to reviewing the scientific content of the data, we focus on technical data quality. Our proposed workflow includes DQM criteria specification, client- and server-side validation, data profiling, error detection analysis, data enhancement and correction, and quality monitoring.

Michael Owonibi, Birgitta Koenig-Ries
Applying a Data Quality Model to Experiments in Software Engineering

Data collection and analysis are key artifacts in any software engineering experiment. However, these data might contain errors. We propose a Data Quality model specific to data obtained from software engineering experiments, which provides a framework for analyzing and improving these data. We apply the model to two controlled experiments, which results in the discovery of data quality problems that need to be addressed. We conclude that data quality issues have to be considered before obtaining the experimental results.

María Carolina Valverde, Diego Vallespir, Adriana Marotta, Jose Ignacio Panach
Towards Indicators for HCI Quality Evaluation Support

The current variety of approaches for HCI quality evaluation is marked by a lack of integration between subjective methods (such as questionnaires) and objective methods (such as electronic informers) to support a final evaluation decision. Over the past decades, various quality criteria and their measures have been defined. However, the lack of guidance on how to integrate qualitative with quantitative data leads us to specify new indicators for HCI quality evaluation. This paper aims at defining and constructing quality indicators, with their measures, related to existing quality criteria and based on the ISO/IEC 15939 standard. These indicators allow the integration of qualitative and quantitative data and provide a basis for decision making about the quality of the HCI relative to the evaluation quality criteria. The paper presents a proposal for defining and constructing quality indicators and illustrates it with an example. A feasibility study of using a quality indicator is presented through the evaluation of a traffic supervision system in Valenciennes (France), as part of the CISIT-ISART project.

Ahlem Assila, Káthia Marçal de Oliveira, Houcine Ezzedine

8th International Workshop on Semantic and Conceptual Issues in GIS (SeCoGIS14)

Frontmatter
Towards a Qualitative Representation of Movement

Over the past few years there have been several attempts to model and represent the spatial and temporal properties of moving entities. Several semantic and computational frameworks have been developed to track and analyze moving object trajectories, but there is still a need for a qualitative reasoning support at the abstract level. The research presented in this paper introduces a qualitative approach for representing and reasoning about moving entities. The model combines topological relations with qualitative distances over a spatial and temporal framework. Several basic movement configurations over dynamic entities are identified as well as movement transitions.

Jing Wu, Christophe Claramunt, Min Deng
Lagrangian Xgraphs: A Logical Data-Model for Spatio-Temporal Network Data: A Summary

Given emerging diverse spatio-temporal network (STN) datasets, e.g., GPS tracks, temporally detailed roadmaps and traffic signal data, the aim is to develop a logical data-model which achieves a seamless integration of these datasets for diverse use-cases (queries) and supports efficient algorithms. This problem is important for travel itinerary comparison and navigation applications. However, it is challenging due to the conflicting requirements of expressive power and computational efficiency, as well as the need to support ever more diverse STN datasets, which now record non-decomposable properties of n-ary relations. Examples include travel time and fuel use during a journey on a route with a sequence of coordinated traffic signals and turn delays. Current data models for STN datasets are limited to representing properties of only binary relations, e.g., distance on individual road segments. In contrast, the proposed logical data-model, Lagrangian Xgraphs, can express properties of both binary and n-ary relations. Our initial study shows that Lagrangian Xgraphs are more convenient for representing diverse STN datasets and comparing candidate travel itineraries.

Venkata M. V. Gunturi, Shashi Shekhar
Efficient Reverse kNN Query Algorithm on Road Network Distances Using Partitioned Subgraphs

Reverse k-nearest neighbor (RkNN) queries under road network distance require long processing times in most conventional algorithms because they require a kNN search at every visited node. In this paper, we propose a fast RkNN search algorithm that runs on a simple materialized path view (SMPV). In addition, we adopt the incremental Euclidean restriction (IER) strategy for fast kNN queries. In the SMPV used by our proposed algorithm, distance tables are constructed only inside each individual partitioned subgraph, so the amount of data is drastically reduced in comparison with the conventional materialized path view (MPV). According to our experimental results on real road network data, our proposed method was up to 100 times faster than conventional approaches when POIs are sparsely distributed on the road network.

Aye Thida Hlaing, Htoo Htoo, Yutaka Ohsawa
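
The key data-structure idea, distance tables restricted to each partitioned subgraph rather than a full-graph materialized path view, can be sketched as follows. This is an illustrative reading of the SMPV, not the authors' implementation: Dijkstra runs within each subgraph and only intra-subgraph distances are stored.

```python
import heapq

def build_smpv(subgraphs):
    """Per partitioned subgraph, materialize shortest road-network
    distances between its nodes only -- far smaller than a full-graph
    MPV. Each subgraph is {node: [(neighbor, edge_weight), ...]}.
    Sketch of the idea, not the paper's exact structure."""
    tables = []
    for g in subgraphs:
        table = {}
        for src in g:  # Dijkstra from every node of the subgraph
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue  # stale queue entry
                for v, w in g[u]:
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            table[src] = dist
        tables.append(table)
    return tables

# One partitioned subgraph of a toy road network (weights = distances).
g1 = {"a": [("b", 2.0)], "b": [("a", 2.0), ("c", 1.0)], "c": [("b", 1.0)]}
print(build_smpv([g1])[0]["a"])  # {'a': 0.0, 'b': 2.0, 'c': 3.0}
```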
Using the Model-Driven Architecture Approach for Geospatial Databases Design of Ecological Niches and Potential Distributions

An ecological niche is defined by an array of biotic and abiotic requirements that allow organisms to survive and reproduce in a geographic area. Environmental data from a region can be used to predict the potential distribution of a species in a different region. Potential geographic distributions are useful in predicting the extent of invasive species, preventing economic and ecological damage. Many formalisms for modeling geospatial information have been developed over the years. The most notable benefit of these formalisms is their focus on a high-level abstraction of reality, leaving unnecessary details behind. This paper presents the stages of the Model-Driven Architecture approach for the design of a database with geospatial capabilities for niches and potential geographic distributions. We take advantage of the UML GeoProfile formalism for geospatial databases, which is capable of modeling geographic and environmental data.

Gerardo José Zárate, Jugurta Lisboa-Filho, Carlos Frankl Sperber
OMT-G Designer: A Web Tool for Modeling Geographic Databases in OMT-G

Data modeling tools are useful in software development and in database design. Some advanced modeling tools available in the market go beyond the data modeling process and allow the generation of source code or DDL scripts for RDBMSs based on the modeled schema. This work presents OMT-G Designer, a web tool for modeling geographic databases using OMT-G, an object-oriented data model for geographic applications. The tool provides various consistency checks on the integrity of the schema, and includes a function that maps OMT-G geographic conceptual schemas into physical schemas, including the necessary spatial integrity constraints. The tool was developed using free software and aims to increase the practical and academic uses of OMT-G, by providing an open and platform-independent modeling resource.

Luís Eduardo Oliveira Lizardo, Clodoveu Augusto Davis Jr.
A Model of Aggregate Operations for Data Analytics over Spatiotemporal Objects

In this paper, we identify a conceptual framework to explore notions of spatiotemporal aggregate operations over moving objects, and use this framework to discover novel aggregate operators. Specifically, we provide constructs to discover temporal and spatial coverage of a query window that may itself be moving, and identify quantitative properties of entropy relating to the motion of objects.

Logan Maughan, Mark McKenney, Zachary Benchley
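
As a rough illustration of one such operator, the sketch below approximates the temporal coverage of a (possibly moving) query window: the fraction of a query interval during which a moving object lies inside the window. Time sampling and the function signatures are assumptions made for illustration, not the paper's operator definitions.

```python
def temporal_coverage(obj_pos, window, t0, t1, steps=100):
    """Fraction of [t0, t1] during which the moving object is inside
    the (possibly moving) axis-aligned query window, by time sampling.
    `obj_pos(t)` -> (x, y); `window(t)` -> (xmin, ymin, xmax, ymax)."""
    inside = 0
    for i in range(steps):
        t = t0 + (t1 - t0) * i / (steps - 1)
        x, y = obj_pos(t)
        xmin, ymin, xmax, ymax = window(t)
        if xmin <= x <= xmax and ymin <= y <= ymax:
            inside += 1
    return inside / steps

# Object moving right at unit speed; window fixed around the origin.
cov = temporal_coverage(lambda t: (t, 0.0), lambda t: (-1, -1, 1, 1), 0.0, 4.0)
print(round(cov, 2))  # ~0.25: inside only while 0 <= t <= 1
```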

11th International Workshop on Web Information Systems Modeling (WISM14)

Frontmatter
Natural Language Processing for Linking Online News and Open Government Data

The value in the vast amount of linked data and open data produced during the last decade is widely recognized and is being exploited by different initiatives. However, a remaining challenge is to integrate government information with semi-structured data in sources relevant to citizens, who have become skeptical of official versions and more interested in information associated with their own interests and values. We present a system that integrates and provides uniform access to government data linked to news portals, via an automated named entity linking process, and information provided by a parliament monitoring organization. We develop a prototype to show how this system can be used to build semantic web applications that assist citizens in making informed political decisions using data linked to their interests and sources not affiliated with the government. This enables them to contrast the official information and find political figures associated with their own personal interests.

Daniel Sarmiento Suárez, Claudia Jiménez-Guarín
Enterprise Linked Data: A Systematic Mapping Study

Over the past years we have witnessed the Web becoming an established channel for learning. Nowadays, hundreds of repositories are freely available on the Web aiming at sharing and reusing learning objects, but lacking in interoperability. In this paper, we present a comprehensive literature review on the state-of-the-art in the research field of Linked Enterprise Data. More precisely, this Systematic Literature Review intends to answer the following research question: What are the applications of Linked Data for Corporate Environments? Studies point out that there is a pattern regarding the frameworks used for implementing Semantic Web in enterprises. This pattern enables interlinking of both internal and external data sources.

Vitor Afonso Pinto, Fernando Silva Parreiras
Ontology Population from Web Product Information

Due to the explosion of information on the Web, there is a need to structure Web data in order to make it accessible to both users and machines. E-commerce is one of the areas in which the increasing data volume on the Web has serious consequences. This paper proposes a framework that populates a product ontology with tabular product information from Web shops. By formalizing product information in this way, one can build better product comparison or recommender applications on the Web. Our approach makes use of lexical and syntactic matching techniques for mapping properties and instantiating values. The performed evaluation shows that instantiating TVs and MP3 players from two popular Web shops, Best Buy and Newegg.com, results in an F1 score of 95.07% for property mapping and 76.60% for value instantiation.

Damir Vandic, Lennart J. Nederstigt, Steven S. Aanen
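
A minimal sketch of the property-mapping step, under the assumption that a plain string-similarity ratio stands in for the paper's lexical and syntactic matching techniques (the threshold and names are illustrative):

```python
from difflib import SequenceMatcher

def lexical_match(shop_property, ontology_properties, threshold=0.6):
    """Map a Web-shop property name to the best-matching ontology
    property using a simple similarity ratio. Stand-in for the paper's
    lexical/syntactic matchers; threshold chosen for illustration."""
    best, best_score = None, 0.0
    for candidate in ontology_properties:
        score = SequenceMatcher(None, shop_property.lower(),
                                candidate.lower()).ratio()
        if score > best_score:
            best, best_score = candidate, score
    return best if best_score >= threshold else None

print(lexical_match("Screen Size (inches)",
                    ["screenSize", "refreshRate", "weight"]))  # screenSize
```

The reported F1 score is then the usual harmonic mean of precision and recall over such mappings, F1 = 2PR / (P + R).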
Service Adaptability Analysis across Semantics and Behavior Levels Based on Model Transformation

Web service adaptability analysis is a prominent aspect of service adaptation. Current Petri net-based approaches focus on behavior-level service adaptability analysis, lacking cross-level analysis; besides, they mainly rely on state exploration of the services, which may result in state explosion. This paper first formalizes service behavior as an ontology-annotated service flow net, an extension of the service flow net augmented with ontology concepts, so that we can discuss the notion of service adaptability across the semantics and behavior levels. It then extends the regular flow net with implicit choice and loop constructs, so that we can represent more service behaviors with ontology-annotated regular service flow nets. Last but not least, the paper efficiently achieves service adaptability checking by exploiting the regular characteristic of ontology-annotated regular service flow nets at the behavior level, and completely avoids state explosion problems.

Guorong Cao, Qingping Tan, Xiaoyan Xue, Wei Zhou, Yongyong Dai
Prioritizing Consumer-Centric NFPs in Service Selection

Service selection continues to be a challenge in Service-Oriented Architecture (SOA). In this paper, we propose a consumer-centric, Non-Functional Properties (NFP) based service selection approach that relies on an externally validated set of NFP descriptions integrated with the Web Service Description Language (WSDL). Our approach is based on three steps: (1) a Filtering step based on hard NFPs defined in the consumer's request, (2) a Matchmaking step to discover the functionally equivalent services, and (3) a Ranking step that sorts the resulting set of services based on the soft NFPs defined by the consumer. The evaluation of our proposed service selection approach shows that prioritizing NFP usage improves the running time of the service selection process while satisfying the functional and non-functional requirements of the consumer.

Hanane Becha, Sana Sellami
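
The three-step pipeline reads naturally as code. The sketch below uses illustrative field names (`nfps`, `operations`), not the actual WSDL-integrated NFP schema: filter on hard NFPs, keep functionally equivalent services, then rank by a weighted soft-NFP score.

```python
def select_services(services, hard_nfps, required_ops, soft_nfp_weights):
    """Consumer-centric selection sketch: (1) filter on hard NFPs,
    (2) match required operations, (3) rank by weighted soft NFPs.
    Field names are assumptions, not the paper's WSDL extension."""
    # 1. Filtering: drop services violating any hard constraint.
    candidates = [s for s in services
                  if all(s["nfps"].get(k) == v for k, v in hard_nfps.items())]
    # 2. Matchmaking: keep services offering all required operations.
    candidates = [s for s in candidates
                  if set(required_ops) <= set(s["operations"])]
    # 3. Ranking: weighted sum over soft NFPs (higher is better).
    def score(s):
        return sum(w * s["nfps"].get(k, 0) for k, w in soft_nfp_weights.items())
    return sorted(candidates, key=score, reverse=True)

services = [
    {"name": "A", "operations": ["getQuote"],
     "nfps": {"encrypted": True, "availability": 0.99}},
    {"name": "B", "operations": ["getQuote"],
     "nfps": {"encrypted": True, "availability": 0.95}},
    {"name": "C", "operations": ["getQuote"],
     "nfps": {"encrypted": False, "availability": 0.99}},
]
ranked = select_services(services, {"encrypted": True},
                         ["getQuote"], {"availability": 1.0})
print([s["name"] for s in ranked])  # ['A', 'B']
```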

ER’14 Demonstrations

Personal eHealth Knowledge Spaces through Models, Agents and Semantics

In this paper, we present a web-based platform that generates a Personal eHealth Knowledge Space as an aggregation of several knowledge sources relevant for the provision of individualized personal services. To this end, novel technologies are exploited and demonstrated, such as knowledge on demand to lower the information overload for the end-users, agent-based communication and reasoning to support cooperation and decision making, and semantic integration to provide uniform access to heterogeneous information. All three technologies are combined to create a novel web-based platform allowing seamless user interaction through a portal that supports personalized, granular and secure access to relevant information. We demonstrate the portal and then the aforementioned technologies using real medical scenarios.

Haridimos Kondylakis, Dimitris Plexousakis, Vedran Hrgovcic, Robert Woitsch, Marc Premm, Michael Schuele
Lightweight Semantic Prototyper for Conceptual Modeling

While much research has been devoted to conceptual model quality validation techniques, most existing tools in this domain focus on syntactic quality. Tool support for checking semantic quality (the correspondence between the conceptual model and the requirements of the domain to be engineered) is largely lacking. This work introduces a lightweight model-driven semantic prototyper to test and validate conceptual models. The goal of the tool is twofold: (1) to assist business analysts in validating the semantic quality of conceptual business specifications, using a fast prototyper to communicate with domain experts; (2) to support the learning perspective of conceptual modeling for less experienced modelers (such as students or novice analysts early in their career), facilitating their progression to an advanced level of expertise. The learning perspective is supported by automated feedback that visually links test results to their causes in the model's design. The effectiveness of the tool has been confirmed by means of empirical experimental studies.

Gayane Sedrakyan, Monique Snoeck
SQTime: Time-Enhanced Social Search Querying

In this paper, we present SQTime, a system for social search queries that exploit temporal information available in social networks. Specifically, SQTime introduces different types of queries aiming at satisfying information needs from different perspectives. SQTime is built upon a social graph and query model both augmented with time, and develops methods for query processing and time-dependent ranking.

Panagiotis Lionakis, Kostas Stefanidis, Georgia Koloniari
Backmatter
Metadata
Title
Advances in Conceptual Modeling
Edited by
Marta Indulska
Sandeep Purao
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-12256-4
Print ISBN
978-3-319-12255-7
DOI
https://doi.org/10.1007/978-3-319-12256-4
