2013 | Book

Knowledge Discovery, Knowledge Engineering and Knowledge Management

Second International Joint Conference, IC3K 2010, Valencia, Spain, October 25-28, 2010, Revised Selected Papers

Editors: Ana Fred, Jan L. G. Dietz, Kecheng Liu, Joaquim Filipe

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science

About this book

This book constitutes the thoroughly refereed post-conference proceedings of the Second International Joint Conference on Knowledge Discovery, Knowledge Engineering, and Knowledge Management, IC3K 2010, held in Valencia, Spain, in October 2010. This book includes revised and extended versions of a strict selection of the best papers presented at the conference; 26 revised full papers together with 2 invited lectures were carefully reviewed and selected from 369 submissions. In line with the three constituent conferences, KDIR 2010, KEOD 2010, and KMIS 2010, the papers are organized in topical sections on knowledge discovery and information retrieval, knowledge engineering and ontology development, and knowledge management and information sharing.

Table of Contents

Frontmatter

Invited Papers

Frontmatter
Actionable Mining of Large, Multi-relational Data Using Localized Predictive Models
Abstract
Many large datasets associated with modern predictive data mining applications are quite complex and heterogeneous, possibly involving multiple relations, or exhibiting a dyadic nature with associated side-information. For example, one may be interested in predicting the preferences of a large set of customers for a variety of products, given various properties of both customers and products, as well as past purchase history, a social network on the customers, and a conceptual hierarchy on the products. This article provides an overview of recent innovative approaches to predictive modeling for such types of data, and also provides some concrete application scenarios to highlight the issues involved. The common philosophy in all the approaches described is to pursue a simultaneous problem decomposition and modeling strategy that can exploit heterogeneity in behavior, use the wide variety of information available and also yield relatively more interpretable solutions as compared to global “one-shot” approaches. Since both the problem domains and approaches considered are quite new, we also highlight the potential for further investigations on several occasions throughout this article.
Joydeep Ghosh, Aayush Sharma
Improving the Semantics of a Conceptual Schema of the Human Genome by Incorporating the Modeling of SNPs
Abstract
In genetic research, the concept known as SNP, or single nucleotide polymorphism, plays an important role in the detection of genes associated with complex ailments and the detection of hereditary susceptibility of an individual to a specific trait. While discussing the issue as it surfaced in the development of a conceptual schema for the human genome, it became clear that a high degree of conceptual ambiguity surrounds the term. Resolving this ambiguity has led to the main research question: what makes a genetic variation that is classified as a SNP different from genetic variations that are not classified as SNPs? For optimal biological research to take place, an unambiguous conceptualization is required. Our main contribution is to show how conceptual modeling techniques applied to human genome concepts can help to disambiguate and correctly represent the relevant concepts in a conceptual schema, thereby achieving a deeper and more adequate understanding of the domain.
Óscar Pastor, Matthijs van der Kroon, Ana M. Levin, Matilde Celma, Juan Carlos Casamayor

Part I: Knowledge Discovery and Information Retrieval

Frontmatter
A Spatio-anatomical Medical Ontology and Automatic Plausibility Checks
Abstract
In this paper, we explain the peculiarities of medical knowledge management and propose a way to augment medical domain ontologies by spatial relations in order to perform automatic plausibility checks. Our approach uses medical expert knowledge represented in formal ontologies to check the results of automatic medical object recognition algorithms for spatial plausibility. It is based on the comprehensive Foundation Model of Anatomy ontology which we extend with spatial relations between a number of anatomical entities. These relations are learned inductively from an annotated corpus of 3D volume data sets. The induction process is split into two parts. First, we generate a quantitative anatomical atlas using fuzzy sets to represent inherent imprecision. From this atlas we then abstract the information further onto a purely symbolic level to generate a generic qualitative model of the spatial relations in human anatomy. In our evaluation we describe how this model can be used to check the results of a state-of-the-art medical object recognition system for 3D CT volume data sets for spatial plausibility. Our results show that the combination of medical domain knowledge in formal ontologies and sub-symbolic object recognition yields improved overall recognition precision.
Manuel Möller, Daniel Sonntag, Patrick Ernst
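A rough sketch of the kind of plausibility check described above (not the authors' implementation; the anatomical labels, the 50 mm scale, and the 0.7 threshold are illustrative assumptions): a fuzzy directional degree is computed between two detected objects and compared against a qualitative spatial constraint.
```python
import numpy as np

def directional_degree(centroid_a, centroid_b, axis=2):
    """Fuzzy degree to which A lies 'above' B along a chosen axis,
    based on the normalized centroid offset (clipped to [0, 1])."""
    offset = centroid_a[axis] - centroid_b[axis]
    return float(np.clip(offset / 50.0 + 0.5, 0.0, 1.0))  # 50 mm soft scale (illustrative)

# Qualitative spatial model learned from annotated volumes (illustrative entry).
qualitative_model = {("lung", "liver"): "above"}

def plausible(label_a, label_b, centroid_a, centroid_b, threshold=0.7):
    """Check a recognition result against the qualitative spatial model."""
    expected = qualitative_model.get((label_a, label_b))
    if expected != "above":
        return True  # no constraint recorded for this pair
    return directional_degree(np.array(centroid_a), np.array(centroid_b)) >= threshold

# A detection that places the 'lung' below the 'liver' is flagged as implausible.
print(plausible("lung", "liver", centroid_a=(0, 0, 120), centroid_b=(0, 0, 300)))
```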
Experimentally Studying Progressive Filtering in Presence of Input Imbalance
Abstract
Progressive Filtering (PF) is a simple categorization technique framed within the local classifier per node approach. In PF, each classifier is entrusted with deciding whether the input at hand can be forwarded to its children or not. A simple way to implement PF consists of unfolding the given taxonomy into pipelines of classifiers. In so doing, each node of the pipeline is a binary classifier able to recognize whether or not an input belongs to the corresponding class. In this chapter, we illustrate and discuss the results obtained by assessing the PF technique, used to perform text categorization. Experiments on the Reuters Corpus (RCV1-v2) dataset are focused on the ability of PF to deal with input imbalance. In particular, the assessment consists of: (i) comparing the results to those obtained with the corresponding flat approach; (ii) calculating the improvement in performance as the pipeline depth increases; and (iii) measuring the performance in terms of generalization-, specialization-, and misclassification-error and unknown-ratio. Experimental results show that, for the adopted dataset, PF is able to counteract large imbalances between negative and positive examples. We also present and discuss further experiments aimed at assessing TSA, the greedy threshold selection algorithm adopted to perform PF, against a relaxed brute-force algorithm and the most relevant state-of-the-art algorithms.
Andrea Addis, Giuliano Armano, Eloisa Vargiu
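A minimal sketch of the pipeline idea behind PF, assuming scikit-learn-style binary classifiers; the taxonomy path and the training snippets are invented placeholders rather than the paper's Reuters (RCV1-v2) setup.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_node(positive_texts, negative_texts):
    """Train one binary node classifier on class-vs-rest examples."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(positive_texts + negative_texts,
            [1] * len(positive_texts) + [0] * len(negative_texts))
    return clf

class PFPipeline:
    """One unfolded taxonomy path; each node decides whether to forward
    the document to its child or to stop."""

    def __init__(self, path):
        self.path = path  # list of (class_name, fitted binary classifier)

    def classify(self, text):
        accepted = []
        for name, clf in self.path:
            if clf.predict([text])[0] != 1:   # node rejects: stop forwarding
                break
            accepted.append(name)
        return accepted                       # deepest accepted node = category

# Tiny illustration with an invented "sport" -> "tennis" path.
sport = train_node(["football match report", "tennis final recap"],
                   ["stock market update", "election results"])
tennis = train_node(["tennis final recap", "wimbledon semifinal"],
                    ["football match report", "basketball playoffs"])
print(PFPipeline([("sport", sport), ("tennis", tennis)]).classify("wimbledon final recap"))
```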
Semantic Web Search System Founded on Case-Based Reasoning and Ontology Learning
Abstract
With the continuous growth of data volume on the Web, the search for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the Web by incorporating a degree of semantic analysis during the search. However, manual ontology building is time consuming. An automatic approach may help to solve this problem by analyzing implicitly available knowledge such as the users’ search feedback. In this context, we propose a semantic web search system founded on Case-Based Reasoning (CBR) and ontology learning that aims to automatically enrich ontologies by using previous search queries performed by the user. Some experiments and results obtained with the proposed system are also presented, which show an improvement in the precision of the Web search and of the ontology enrichment.
Hajer Baazaoui-Zghal, Nesrine Ben Mustapha, Manel Elloumi-Chaabene, Antonio Moreno, David Sanchez
Literature-Based Knowledge Discovery from Relationship Associations Based on a DL Ontology Created from MeSH
Abstract
Literature-based knowledge discovery generates potential discoveries from associations between specific concepts that have been previously reported in the literature. However, because the associations are generally between individual concepts, the knowledge of specific relationships between those concepts is lost. A description logic (DL) ontology adds a set of logically defined relationship types, called properties, to a classification of concepts for a particular knowledge domain. Properties can represent specific relationships between instances of concepts used to describe the things studied by a particular researcher. These relationships form a “triple” consisting of a domain instance, a range instance, and the property specifying the way those instances are related. A “relationship association” is a pair of relationship triples where one of the instances from each relationship can be determined to be semantically equivalent. In this paper, we report our work to structure a subset of more than 1300 terms from the Medical Subject Headings (MeSH) controlled vocabulary into a DL ontology, and to use that DL ontology to create a corpus of A-Boxes, which we call “semantic statements”, each of which describes one of 392 research articles that we selected from MEDLINE. Relationship associations were extracted from the corpus of semantic statements using a previously reported technique. Then, by making the assumption of the transitivity of association used in literature-based knowledge discovery, we generate hypothetical relationship associations by combining pairs of relationship associations. We then evaluate the “interestingness” of those candidate knowledge discoveries from a life science perspective.
Steven B. Kraines, Weisen Guo, Daisuke Hoshiyama, Takaki Makino, Haruo Mizutani, Yoshihiro Okuda, Yo Shidahara, Toshihisa Takagi
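A minimal sketch of how relationship triples can be paired into relationship associations; the triples below are invented placeholders, not statements extracted from MEDLINE, and instance matching is reduced to string identity rather than the semantic equivalence used in the paper.
```python
from itertools import combinations

# Invented placeholder triples: (domain instance, property, range instance).
triples = [
    ("gene_X", "activates", "protein_Y"),
    ("protein_Y", "inhibits", "pathway_Z"),
    ("compound_A", "binds", "protein_Y"),
]

def associations(statements):
    """Pairs of triples that share an instance (here: identical strings;
    the paper only requires semantic equivalence under the DL ontology)."""
    for t1, t2 in combinations(statements, 2):
        shared = {t1[0], t1[2]} & {t2[0], t2[2]}
        if shared:
            yield t1, t2, shared.pop()

for t1, t2, via in associations(triples):
    print(f"association via {via}: {t1} <-> {t2}")
```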
Early Warning and Decision Support in Critical Situations of Opinion Formation within Online Social Networks
Abstract
A growing number of people are exchanging their opinions in online social networks and influencing one another. Thus, companies should observe opinion formation concerning their products in order to identify risks at an early stage. By doing so, counteractive measures can be initiated by marketing managers. A neuro-fuzzy system detects critical situations in the process of opinion formation and issues warnings for the marketing managers. The system learns rules for identifying critical situations on the basis of the opinions of the network members, the influence of the opinion leaders, and the structure of the network. The opinions and characteristics of the network are identified by text mining techniques and social network analysis. Simulations based on swarm intelligence are used to derive recommendations which help the marketing managers influence the right opinion leaders to prevent negative opinions from spreading. The approach is illustrated by an exemplary application.
Carolin Kaiser, Sabine Schlick, Freimut Bodendorf
A Connection between Extreme Learning Machine and Neural Network Kernel
Abstract
We study a connection between the extreme learning machine (ELM) and the neural network kernel (NNK). The NNK is derived from a neural network with an infinite number of hidden units. We interpret ELM as an approximation to this infinite network. We show that ELM and NNK can, to a certain extent, replace each other: ELM can be used to form a kernel, and NNK can be decomposed into feature vectors to be used in the hidden layer of ELM. The connection reveals the possible importance of weight variance as a parameter of ELM. Based on our experiments, we recommend that model selection for ELM should consider not only the number of hidden units, as is the current practice, but also the variance of the weights. We also study the interaction of the variance and the number of hidden units, and discuss some properties of ELM that may have been interpreted too strongly in previous work.
Eli Parviainen, Jaakko Riihimäki
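The ELM/NNK connection can be illustrated numerically: with erf hidden units and Gaussian random weights, the empirical kernel of a wide ELM hidden layer approaches the closed-form infinite-network kernel of Williams (1998). The sketch below assumes erf activations and an isotropic weight variance; both choices are illustrative.
```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)
sigma2 = 1.0          # variance of the random input-to-hidden weights
n_hidden = 200_000    # many units, so the Monte Carlo average converges

def elm_kernel(a, b):
    """Empirical kernel of an ELM hidden layer with erf activations."""
    a_aug, b_aug = np.append(a, 1.0), np.append(b, 1.0)   # bias term
    W = rng.normal(scale=np.sqrt(sigma2), size=(n_hidden, a_aug.size))
    return np.mean(erf(W @ a_aug) * erf(W @ b_aug))

def nn_kernel(a, b):
    """Closed-form infinite-network (erf) kernel, Williams 1998."""
    a_aug, b_aug = np.append(a, 1.0), np.append(b, 1.0)
    num = 2 * sigma2 * a_aug @ b_aug
    den = np.sqrt((1 + 2 * sigma2 * a_aug @ a_aug) * (1 + 2 * sigma2 * b_aug @ b_aug))
    return (2 / np.pi) * np.arcsin(num / den)

print(elm_kernel(x1, x2), nn_kernel(x1, x2))   # the two values nearly agree
```
Changing sigma2 changes both kernels in the same way, which is the sense in which the weight variance acts as a genuine ELM hyperparameter alongside the number of hidden units.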
Visually Summarizing Semantic Evolution in Document Streams with Topic Table
Abstract
We propose a visualization technique for summarizing the contents of document streams, such as news or scientific archives. The content of streaming documents changes over time, and so do the themes the documents are about. Topic evolution is a relatively new research subject that encompasses the unsupervised discovery of thematic subjects in a document collection and the adaptation of these subjects as new documents arrive. While many powerful topic evolution methods exist, the combination of learning and visualization of the evolving topics has been less explored, although it is indispensable for understanding a dynamic document collection.
We propose Topic Table, a visualization technique that builds upon topic modeling for deriving a condensed representation of a document collection. Topic Table captures important and intuitively comprehensible aspects of a topic over time: the importance of the topic within the collection, the words characterizing this topic, and the semantic changes of the topic from one timepoint to the next. As an example, we visualize the content of the NIPS proceedings from 1987 to 1999.
André Gohr, Myra Spiliopoulou, Alexander Hinneburg
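A minimal sketch of the per-timepoint topic summaries such a visualization builds on, using scikit-learn's LDA on an invented two-slice corpus (a placeholder for archives such as the NIPS proceedings).
```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

slices = {
    1998: ["neural network training", "kernel methods for classification"],
    1999: ["bayesian inference methods", "kernel methods and support vectors"],
}

for year, docs in slices.items():
    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    words = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [words[i] for i in comp.argsort()[-3:][::-1]]   # characterizing words
        weight = lda.transform(X)[:, k].mean()                # topic importance in the slice
        print(year, f"topic {k}", f"weight={weight:.2f}", top)
```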
A Clinical Application of Feature Selection: Quantitative Evaluation of the Locomotor Function
Abstract
Evaluation of the locomotor function is important for several clinical applications (e.g. fall risk in the elderly, characterization of a disease with motor complications). We consider the Timed Up and Go test, which is widely used to evaluate the locomotor function in Parkinson’s Disease (PD). Twenty PD and twenty age-matched control subjects performed an instrumented version of the test, in which wearable accelerometers were used to gather quantitative information. Several measures were extracted from the acceleration signals; the aim is to find, by means of feature selection, the best set that can discriminate between healthy and PD subjects. A wrapper feature selection was implemented with an exhaustive search over subsets of 1 to 3 features. A nested leave-one-out cross validation (LOOCV) was implemented to limit a possible selection bias. With the selected features a good accuracy is obtained (7.5% misclassification rate) in the classification between PD and healthy subjects.
Luca Palmerini, Laura Rocchi, Sabato Mellone, Franco Valzania, Lorenzo Chiari
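A minimal sketch of the wrapper search described above: every feature subset of size 1 to 3 is scored by leave-one-out accuracy. The data and the classifier are placeholders, not the accelerometer measures or the classifier used in the study, and the outer LOOCV that the authors nest around the selection is omitted for brevity.
```python
from itertools import combinations
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))          # 40 subjects, 10 candidate measures (synthetic)
y = np.array([0] * 20 + [1] * 20)      # healthy vs PD labels (synthetic)

best_subset, best_acc = None, -1.0
for k in (1, 2, 3):                    # exhaustive search over subsets of size 1-3
    for subset in combinations(range(X.shape[1]), k):
        acc = cross_val_score(SVC(), X[:, list(subset)], y, cv=LeaveOneOut()).mean()
        if acc > best_acc:
            best_subset, best_acc = subset, acc

print(best_subset, best_acc)
```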
Inference Based Query Expansion Using User’s Real Time Implicit Feedback
Abstract
Query expansion is a commonly used technique to address the problem of short and under-specified search queries in information retrieval. Traditional query expansion frameworks return static results, whereas users’ information needs are dynamic in nature. A user’s search goal, even for the same query, may be different at different instances. This often leads to poor coherence between traditional query expansion and the user’s search goal, resulting in poor retrieval performance. In this study, we observe that a user’s search pattern is influenced by his/her recent searches in many search instances. We further propose a query expansion framework which exploits the user’s real-time implicit feedback, provided at the time of search, to determine the user’s search context and identify relevant query expansion terms. Extensive experiments show that the proposed query expansion framework adapts to the user’s changing information needs.
Sanasam Ranbir Singh, Hema A. Murthy, Timothy A. Gonsalves
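A minimal sketch of context-sensitive expansion in this spirit: candidate terms are scored by co-occurrence with the current query and, at a lower weight, with the user's recent queries, so that the same query can expand differently in different sessions. The co-occurrence counts and weights are invented placeholders, not the paper's model.
```python
from collections import Counter

cooccur = {                      # term -> Counter of co-occurring terms (invented)
    "java": Counter({"programming": 40, "coffee": 25, "island": 10}),
    "compiler": Counter({"programming": 30, "java": 20}),
    "espresso": Counter({"coffee": 35, "java": 15}),
}

def expand(query_terms, recent_terms, top_k=2):
    """Rank expansion candidates using the query plus recent-search context."""
    scores = Counter()
    for t in query_terms + recent_terms:
        weight = 1.0 if t in query_terms else 0.5   # recent searches count less
        for cand, c in cooccur.get(t, {}).items():
            if cand not in query_terms:
                scores[cand] += weight * c
    return [w for w, _ in scores.most_common(top_k)]

print(expand(["java"], recent_terms=["compiler"]))   # programming-leaning expansion
print(expand(["java"], recent_terms=["espresso"]))   # coffee-leaning expansion
```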
Testing and Improving the Performance of SVM Classifier in Intrusion Detection Scenario
Abstract
Intrusion detection attempts to detect computer attacks by examining various data records observed in processes on the network. Anomaly detection has attracted the attention of many researchers as a way to overcome the disadvantage of signature-based IDSs in discovering complex attacks. Although some mechanisms for intrusion detection exist, there is a need to improve their performance. Machine learning techniques are a new approach to intrusion detection, and KDDCUP’99 is the most widely used data set for the evaluation of these systems. The goal of this research is to use the SVM machine learning model, with different kernels and different kernel parameters, to classify unwanted behavior on the network with scalable performance. Moreover, eliminating insignificant and/or useless inputs simplifies the problem, so that faster and more accurate detection may result. This work also evaluates the performance of other learning techniques (Filtered J48 clustering, Naïve Bayes) on the benchmark intrusion detection dataset as complements to SVM. Because model generation is computation intensive, various algorithms for cluster-to-class mapping and instance testing have been proposed to reduce the time required and make real-time detection feasible. We show that the variations proposed in this paper contribute significantly to improving the training and classification process of SVM, with high generalization accuracy, and outperform the enhanced technique.
Ismail Melih Önem
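A minimal sketch of the kind of kernel/parameter study described above, assuming scikit-learn: an SVM is grid-searched over kernels and their parameters after a simple filter-based reduction of insignificant inputs. The synthetic data stands in for KDD Cup '99, and the chosen filter and grid values are illustrative.
```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 41))                 # stand-in for 41 KDD'99 features
y = rng.integers(0, 2, size=500)               # normal vs attack (synthetic)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif, k=20)),  # drop less informative inputs
    ("svm", SVC()),
])
grid = {
    "svm__kernel": ["linear", "rbf", "poly"],
    "svm__C": [0.1, 1, 10],
    "svm__gamma": ["scale", 0.01, 0.1],
}
search = GridSearchCV(pipe, grid, cv=3, n_jobs=-1).fit(X, y)
print(search.best_params_, search.best_score_)
```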

Part II: Knowledge Engineering and Ontology Development

Frontmatter
A Quantitative Knowledge Measure and Its Applications
Abstract
Several concepts related to knowledge have emerged recently: knowledge management, knowledge society, knowledge engineering, knowledge bases, etc. We are here specifically interested in “scientific knowledge” in the context of student learning assessment. Therefore, we develop a framework within which knowledge is decomposed into grains called knowlets so that it can be quantified. Knowledge then becomes a measurable quantity in very much the same way that information is known to be a measurable quantity (in the sense of Shannon’s information theory). We then define an appropriate metric that we use in the specific domain of learning assessment. The proposed framework may be utilized for knowledge acquisition in the context of ontology learning and population.
Rafik Braham
A Model-Driven Development Method for Applying to Management Information Systems
Abstract
Almost every information system is built on the assumption that it will be modified during operation, and it is very costly to adapt a system to changed requirement specifications or implementation technologies. To address this, there is a model theory approach, based on the theory that a system can be modeled by automata and set theory. However, it is very difficult to generate the automata of the system to be developed right from the start. In addition, there is the model-driven development method, which can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language. This paper proposes a new development method that applies model-driven development to a component of the model theory approach. An experiment has shown that the reduction in workload amounts to more than 30% of the total workload.
Keinosuke Matsumoto, Tomoki Mizuno, Naoki Mori
Automated Reasoning Support for Ontology Development
Abstract
The design and evaluation of ontologies in first-order logic poses many challenges, many of which focus on the specification of the intended models for the ontology’s concepts and the relationship between these models and the models of the ontology’s axioms. In this paper we present a methodology for the verification of first-order logic ontologies, and provide a lifecycle in which it may be implemented to develop a correct ontology. Automated reasoning plays a critical role in the specification of requirements, design, and verification of the ontology. The application of automated reasoning in the lifecycle is illustrated by examples from the PSL Ontology.
Megan Katsumi, Michael Grüninger
Modeling the International Classification of Diseases (ICD-10) in OWL
Abstract
Current efforts in healthcare focus on establishing interoperability and data integration of medical resources for better collaboration between medical personnel and doctors, especially in the patient treatment process. In covering human diseases, one of the major international standards in clinical practice is the International Classification of Diseases (ICD), maintained by the World Health Organization (WHO). Several country- and language-specific adaptations exist which share the general structure of the WHO version but differ in certain details. This complicates the exchange of patient records and hampers data integration across language borders. We present our approach for modeling the hierarchy of the ICD-10 using the Web Ontology Language (OWL). OWL, which we will introduce shortly, should provide a formal ontological basis for ICD-10 with enough expressivity to model interoperability and data integration of several medical resources such as ICD. Our resulting model captures the hierarchical information of the ICD-10 as well as comprehensive class labels in English and German. Particularities such as “Exclusion” statements, which make statements about the disjointness of certain ICD-10 categories, are modeled in a formal way. For properties which exceed the expressivity of OWL-DL, we provide a separate OWL-Full component, which allows us to use the hierarchical knowledge and class labels with existing OWL-DL reasoners and capture the additional information in a Semantic Web format.
Manuel Möller, Daniel Sonntag, Patrick Ernst
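A minimal sketch, using rdflib rather than the authors' tooling, of how an ICD-10 category can be expressed in OWL: a class placed in the hierarchy, English and German labels, and an Exclusion rendered as owl:disjointWith. The namespace, codes, and labels are abbreviated examples, not the full WHO content.
```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

ICD = Namespace("http://example.org/icd10#")   # placeholder namespace
g = Graph()
g.bind("icd", ICD)

g.add((ICD.A00_B99, RDF.type, OWL.Class))      # chapter: infectious diseases
g.add((ICD.A00, RDF.type, OWL.Class))          # category: cholera
g.add((ICD.A00, RDFS.subClassOf, ICD.A00_B99)) # hierarchy via subclass axioms
g.add((ICD.A00, RDFS.label, Literal("Cholera", lang="en")))
g.add((ICD.A00, RDFS.label, Literal("Cholera", lang="de")))

# An exclusion statement between two categories modeled as class disjointness.
g.add((ICD.A00, OWL.disjointWith, ICD.A01))

print(g.serialize(format="turtle"))
```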
A Systemic Approach to Multi-Party Relationship Modeling
Abstract
Socio-economic systems exist in a wide variety of activity domains and are composed of multiple stakeholder groups. These groups pursue objectives which are often entirely motivated from within their local context. Domain specificities in the form of institutional design, for example the de-regulation of Public Utility systems, can further fragment this context. Nevertheless, for these systems to be viable, a management subsystem that maintains a holistic view of the system is required. From a Systems perspective, this highlights the need to invest in methods that capture the interactions between the different stakeholders of the system. It is the understanding of the individual interactions that can help piece together a holistic view of the system, thereby enabling system-level discourse. In this paper we present a modeling technique that models industry interactions as a multi-party value realization process and takes a Systems approach in analyzing them. Every interaction is analyzed both from the outside (system as a black box) and from within (system as a white box). The design patterns that emerge from this whole/composite view of value realization provide the necessary foundation to analyze the working of multi-stakeholder systems. An explicit specification of these concepts is presented as the Regulation Enabling Ontology, REGENT. As an example, we instantiate REGENT for the urban residential electricity market and demonstrate its effectiveness in identifying the requirements for time-based electricity supply systems.
Anshuman B. Saxena, Alain Wegmann
Evaluating Dynamic Ontologies
Abstract
Ontology evaluation poses a number of difficult challenges requiring different evaluation methodologies, particularly for a “dynamic ontology” generated by a combination of automatic and semi-automatic methods. We review evaluation methods that focus solely on syntactic (formal) correctness, on the preservation of semantic structure, or on pragmatic utility. We propose two novel methods for dynamic ontology evaluation and describe the use of these methods for evaluating the different taxonomic representations that are generated at different times or with different amounts of expert feedback. These methods are then applied to the Indiana Philosophy Ontology (InPhO), and used to guide the ontology enrichment process.
Jaimie Murdock, Cameron Buckner, Colin Allen
An Architecture to Support Semantic Enrichment of Knowledge Sources in Collaborative Engineering Projects
Abstract
This work brings a contribution focused on collaborative engineering projects where knowledge plays a key role in the process, aiming to support the collaborative work carried out by project teams through an ontology-based platform and a set of knowledge-enabled services. We introduce the conceptual approach and the technical architecture (and its implementation) supporting a modular set of semantic services based on individual collaboration in a project-based environment (for the Building & Construction sector). The approach presented here enables the semantic enrichment of knowledge sources, based on project context. The main elements defined by the architecture are an ontology (to encapsulate human knowledge) and a set of web services to support the management of the ontology and the adequate handling of knowledge, providing search/indexing capabilities (through statistical and semantic computation) and a systematic procedure for formally documenting and updating organizational knowledge. Results achieved so far and the future goals pursued are also presented.
Ruben Costa, Celson Lima
Designing Ontology-Driven System Composition Processes to Satisfy User Expectations: A Case Study for Fish Population Modelling
Abstract
Ontology-Driven Compositional Systems (ODCSs) are designed to assist a user with the semi- or fully automatic composition of a desired system utilizing previously implemented algorithms and/or software. Current research on ODCSs has been conducted around the discovery and composition of web services and a resource management approach. This chapter draws on a collaboration with a Fish Population Modelling research group to argue that current ODCSs do not fully consider a user's expectations of [a] his/her leverage and acquisition of knowledge from the ODCS, and [b] the trustworthy, high-quality, and efficient performance of the desired resultant systems. The authors support their argument by noting that current semantic frameworks have yet to fully represent the knowledge required for proper discovery, decision-making, and composition. The authors introduce their initial work on utilizing the inheritance of multiple ontologies to fully represent the function, data, execution, quality, trust, and timeline semantics of compositional units within an ODCS. Finally, a case study is used to illustrate how a more robust representation model will improve the satisfaction of the user's expectations.
Mitchell G. Gillespie, Deborah A. Stacey, Stephen S. Crawford

Part III: Knowledge Management and Information Sharing

Frontmatter
Knowledge Sharing in the First Aid Domain through End-User Development
Abstract
The paper addresses the knowledge sharing needs of an Italian non-profit association for first aid. Their volunteers, and particularly ambulance drivers, need to know the territory to provide first aid quickly and in a safe manner. This knowledge is often tacit and distributed. Paper-based maps are currently the means to spread and share knowledge among volunteers, while training sessions regularly provide information about holdups and fast routes to a place. The paper describes FirstAidMap, a collaborative web mapping system we have designed with volunteers to satisfy their needs. The system, beyond supporting the training activity of ambulance drivers, provides an interactive space that all volunteers can directly shape to build and share their knowledge about the territory. FirstAidMap integrates proper end-user development functionalities to engage and motivate volunteers to participate in map shaping, thus evolving from passive users to co-designers of map content. Results of a preliminary evaluation are discussed.
Daniela Fogli, Loredana Parasiliti Provenza
Knowledge Management Tools for Terrorist Network Analysis
Abstract
A terrorist network is a special kind of social network with emphasis on both secrecy and efficiency. Such networks (consisting of nodes and links) need to be analyzed and visualized in order to gain a deeper knowledge and understanding that enables network destabilization. This paper presents two novel knowledge management tools for terrorist network analysis. CrimeFighter Investigator provides advanced support for human-centered, target-centric investigations aimed at constructing terrorist networks from disparate pieces of terrorist information. CrimeFighter Assistant provides advanced support for network, node, and link analysis once a terrorist network has been constructed. The paper focuses primarily on the latter tool.
Uffe Kock Wiil, Jolanta Gniadek, Nasrullah Memon, Rasmus Rosenqvist Petersen
Optimization Techniques for Range Queries in the Multivalued-partial Order Preserving Encryption Scheme
Abstract
Encryption is a well-studied technique for protecting the privacy of sensitive data. However, encrypting relational databases affects the performance during query processing. The Multivalued-Partial Order Preserving Encryption Scheme (MV-POPES) allows privacy-preserving queries over encrypted databases with reasonable overhead and an improved security level. It divides the plaintext domain into many partitions and randomizes them in the encrypted domain. Then, one integer value is encrypted to multiple different values to prevent statistical attacks. At the same time, MV-POPES preserves the order of the integer values within the partitions to allow comparison operations to be applied directly on encrypted data. However, MV-POPES supports range queries only at a high overhead. In this paper, we present some optimization techniques to reduce the overhead for range queries in MV-POPES by simplifying the translated condition and controlling the randomness of the encrypted partitions. The basic idea of our approaches is to classify the partitions into many supersets of partitions and then restrict the randomization within each superset. The supersets of partitions are created either based on predefined queries or using binary recursive partitioning. Experiments show a high percentage of improvement in performance using the proposed optimization approaches. We also study the effect of these optimization techniques on the privacy level of the encrypted data.
Hasan Kadhem, Toshiyuki Amagasa, Hiroyuki Kitagawa
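A minimal sketch of the partitioning idea, not of MV-POPES itself (in particular, the one-to-many encryption of values is omitted): the plaintext domain is split into partitions whose order is shuffled in the ciphertext domain while order inside each partition is preserved, and the shuffle is restricted to supersets of adjacent partitions so that a range query translates into fewer conditions.
```python
import random

DOMAIN, PARTS, SUPERSETS = 1000, 20, 4      # illustrative sizes
part_size = DOMAIN // PARTS
rng = random.Random(42)

# Shuffle partition positions only within each superset of adjacent partitions.
order = []
for s in range(SUPERSETS):
    block = list(range(s * PARTS // SUPERSETS, (s + 1) * PARTS // SUPERSETS))
    rng.shuffle(block)
    order.extend(block)
slot_of = {p: slot for slot, p in enumerate(order)}

def encode(value):
    """Map a plaintext value into the slot of its (shuffled) partition,
    keeping the order of values inside the partition."""
    p = value // part_size
    return slot_of[p] * part_size + (value % part_size)

# Values in the same partition keep their order; partitions are permuted.
print(encode(130), encode(137), encode(455))
```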
miKrow: Enabling Knowledge Management One Update at a Time
Abstract
Knowledge Management technologies have been around for a while, and although their growth has been increasing lately, they still lack the traction that other related technologies are experiencing. One of the hardest bottlenecks Knowledge Management systems currently face is the set of hurdles users encounter, which reduces their interest and finally discourages them from being involved. This proposal showcases the benefits that extending the current Enterprise 2.0 approach can provide. The key evolutions proposed for this lightweight Knowledge Management system are (1) a semantic back-end, making the system more intelligent both internally, with the use of domain ontologies, and externally, by leveraging the Linked Data paradigm, and (2) a simple and smooth microblogging front-end that improves the user experience and makes users more comfortable by taking advantage of a familiar environment they can relate to. A current implementation and evaluation are also discussed, as well as different boosting techniques that are being studied and deployed.
Guillermo Álvaro, Víctor Penela, Francesco Carbone, Carmen Córdoba, Michelangelo Castagnone, José Manuel Gómez-Pérez, Jesús Contreras
A Pervasive Approach to a Real-Time Intelligent Decision Support System in Intensive Medicine
Abstract
The decision on the most appropriate procedure for providing patients with the best healthcare possible is a critical and complex task in Intensive Care Units (ICU). Clinical Decision Support Systems (CDSS) have to deal with huge amounts of data and online monitoring, analyzing numerous parameters and providing outputs in real time. Despite the advances attained in this area of knowledge, new challenges should be taken into account in future CDSS developments, principally in ICU environments. The next generation of CDSS will be pervasive and ubiquitous, providing doctors with the appropriate services and information in order to support decisions regardless of the time or the place where they are. Consequently, new requirements arise, namely the privacy of data and the security of data access. This paper presents a pervasive perspective of the decision making process in the context of the INTCare system, an intelligent decision support system for intensive medicine. Three scenarios are explored using data mining models that are continuously assessed and optimized. Some preliminary results are depicted and discussed.
Filipe Portela, Manuel Filipe Santos, Marta Vilas-Boas
Semantics and Machine Learning: A New Generation of Court Management Systems
Abstract
The progressive deployment of ICT technologies in the courtroom, jointly with the requirement for paperless judicial folders pushed by e-justice plans, is quickly transforming the traditional judicial folder into an integrated multimedia folder, where documents, audio recordings and video recordings can be accessed via a web-based platform. Most of the available ICT toolsets are aimed at the deployment of case management systems and ICT equipment infrastructure at different organisational levels (court or district). In this paper we present the JUMAS system, which stems from the homonymous EU project and instead takes up the challenge of exploiting semantics and machine learning techniques towards a better usability of multimedia judicial folders. JUMAS provides not only streamlined content creation and management support for acquiring and sharing the knowledge embedded in judicial folders, but also a semantic enrichment of multimedia data for advanced information retrieval tasks.
E. Fersini, E. Messina, F. Archetti, M. Cislaghi
A Workspace to Manage Tacit Knowledge Contributing to Railway Product Development during Tendering
Abstract
Railway product development during the Tendering phase is very challenging, and the tacit knowledge of experts remains the key to its success. Therefore, there is a need to capture and preserve the tacit knowledge used during Tendering in order to improve railway product development over time. In this context, we think that tacit knowledge management and organizational learning need to be intertwined and supported by a dedicated workspace. In this paper, we propose the model and the implementation of such a workspace.
Diana Penciuc, Marie-Hélène Abel, Didier Van Den Abeele
Semantics for Enhanced Email Collaboration
Abstract
Digital means of communication such as email and IM have become a crucial tool for collaboration. Taking advantage of the fact that information exchanged over these media can be made persistent, a lot of research has strived to make sense of the ongoing communication processes in order to support the participants in managing them. In this chapter we pursue a workflow-oriented approach to demonstrate how, coupled with appropriate information extraction techniques, robust knowledge models, and intuitive user interfaces, semantic technology can provide support for email-based collaborative work. While eliciting as much knowledge as possible, our design concept imposes little to no change or restriction on the conventional use of email.
Simon Scerri
Backmatter
Metadata
Title
Knowledge Discovery, Knowledge Engineering and Knowledge Management
Editors
Ana Fred
Jan L. G. Dietz
Kecheng Liu
Joaquim Filipe
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29764-9
Print ISBN
978-3-642-29763-2
DOI
https://doi.org/10.1007/978-3-642-29764-9
