2013 | Book

Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data

Third International Workshop, HCI-KDD 2013, Held at SouthCHI 2013, Maribor, Slovenia, July 1-3, 2013. Proceedings

Edited by: Andreas Holzinger, Gabriella Pasi

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the Third Workshop on Human-Computer Interaction and Knowledge Discovery, HCI-KDD 2013, held in Maribor, Slovenia, in July 2013, at SouthCHI 2013. The 20 revised papers presented were carefully reviewed and selected from 68 submissions. The papers are organized in topical sections on human-computer interaction and knowledge discovery, knowledge discovery and smart homes, smart learning environments, and visualization and data analytics.

Table of Contents

Frontmatter

Human-Computer Interaction and Knowledge Discovery

Hypothesis Generation by Interactive Visual Exploration of Heterogeneous Medical Data

High-dimensional, heterogeneous datasets are challenging for domain experts to analyze. Very large numbers of dimensions often pose problems for both visual and computational analysis tools. Analysts tend to limit their attention to subsets of the data and lose potential insight in relation to the rest of the data. Generating new hypotheses becomes problematic due to these limitations. In this paper, we discuss how interactive analysis methods can help analysts to cope with these challenges and aid them in building new hypotheses. Here, we report on the details of an analysis of data recorded in a comprehensive study of cognitive aging. We performed the analysis as a team of visualization researchers and domain experts. We discuss a number of lessons learned related to the usefulness of interactive methods in generating hypotheses.

Cagatay Turkay, Arvid Lundervold, Astri Johansen Lundervold, Helwig Hauser
Combining HCI, Natural Language Processing, and Knowledge Discovery - Potential of IBM Content Analytics as an Assistive Technology in the Biomedical Field

Medical professionals are confronted with a flood of big data, most of it containing unstructured information. Such unstructured information is the subset of information where the information itself describes part of what counts as significant within it; in other words, structure and information are not completely separable. The best example of such unstructured information is text. For many years, text mining has been an essential area of medical informatics. Although text can easily be created by medical professionals, supporting its automatic analysis for knowledge discovery is extremely difficult. We follow the definition that knowledge consists of a set of hypotheses, and that knowledge discovery is the process of finding or generating new hypotheses by medical professionals with the aim of gaining insight into the data. In this paper we present, for the first time, lessons learned from applying IBM Content Analytics (ICA) to dermatological knowledge discovery. We follow the HCI-KDD approach, i.e. with the human expert in the loop, combining the best of two worlds: human intelligence and computational intelligence.

Andreas Holzinger, Christof Stocker, Bernhard Ofner, Gottfried Prohaska, Alberto Brabenetz, Rainer Hofmann-Wellenhof
Designing Computer-Based Clinical Guidelines Decision Support by a Clinician

Computer systems have long been promoted for their potential to improve the quality of health care, including their use in supporting clinical decisions. In this work, the need for developing a computer surveillance system to support the cardiovascular (CV) risk assessment procedure, according to the latest update of the SCORE system of the European Society of Cardiology, is presented and documented. The key step in transforming guidelines into the computer medium is designing the logical pathway diagram, which brings structure and human reasoning into the rules and recommendations provided by the guidelines. At this step, the role of the end user (the clinician) is essential for aligning human cognition with computer-based information processing. A second benefit arises from the computer medium's demand for data standardisation, systematic documentation and screening of the whole population, all together leading to the translation of a problem-solving approach in the medical care domain into a programmed-practice approach. A further benefit is that programs allow follow-up, comparison, evaluation and quality improvement.

Ljiljana Trtica-Majnarić, Aleksandar Včev
Opinion Mining on the Web 2.0 – Characteristics of User Generated Content and Their Impacts

The field of opinion mining provides a multitude of methods and techniques for finding, extracting and analyzing subjective information, such as that found on social media channels. Because of the differences between these channels, as well as their unique characteristics, not all approaches are suitable for every source; there is no “one-size-fits-all” approach. This paper aims at identifying and determining these differences and characteristics by performing an empirical analysis as a basis for discussing which opinion mining approach is applicable to which social media channel.

Gerald Petz, Michał Karpowicz, Harald Fürschuß, Andreas Auinger, Václav Stříteský, Andreas Holzinger
Evaluation of SHAPD2 Algorithm Efficiency Supported by a Semantic Compression Mechanism in Plagiarism Detection Tasks

This paper presents issues concerning knowledge protection and, in particular, research in the area of natural language processing focusing on plagiarism detection, semantic networks and semantic compression. The results demonstrate that semantic compression is a valuable addition to the existing methods used in plagiarism detection. The application of semantic compression boosts the efficiency of the Sentence Hashing Algorithm for Plagiarism Detection 2 (SHAPD2) and the w-shingling algorithm. All experiments were performed on an available PAN-PC plagiarism corpus used to evaluate plagiarism detection methods, so the results can be compared with those of other research teams.

Dariusz Adam Ceglarek
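To make the w-shingling baseline named in the abstract concrete, here is a minimal Python sketch of w-shingling with Jaccard similarity. This is a generic textbook illustration, not the authors' SHAPD2 implementation or their semantic-compression mechanism; the shingle width and example sentences are arbitrary assumptions.

# Minimal w-shingling sketch: represent each document as the set of its
# contiguous w-word sequences ("shingles") and compare documents by the
# Jaccard similarity of those sets.

def shingles(text: str, w: int = 4) -> set:
    """Return the set of w-word shingles of `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a & b| / |a | b|; 0.0 for two empty sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "a quick brown fox jumps over a sleeping dog"
print(jaccard(shingles(doc1), shingles(doc2)))  # similarity in [0, 1]

In a plagiarism-detection setting, document pairs whose similarity exceeds a tuned threshold would be flagged for closer inspection.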
Using Hasse Diagrams for Competence-Oriented Learning Analytics

Learning analytics refers to the process of collecting, analyzing, and visualizing (large-scale) data about learners for the purpose of understanding and proactively optimizing teaching strategies. A related concept is formative assessment: the idea of drawing information about a learner from a broad range of sources, on a competence-centered basis, in order to go beyond mere grading to constructive and tailored support of individual learners. In this paper we present an approach to competence-centered learning analytics on the basis of so-called Competence-based Knowledge Space Theory, together with a way to visualize learning paths and competence states and to identify the most effective next learning steps using Hasse diagrams.

Michael D. Kickmeier-Rust, Dietrich Albert
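For readers unfamiliar with Hasse diagrams: their edges are the covering relations of a partial order, here the subset order on sets of competencies. The sketch below computes those edges for a toy competence structure; the states are invented for illustration and are not taken from the paper.

# Build the covering (Hasse) relation of a small competence structure
# ordered by set inclusion: hi covers lo iff lo < hi with no state
# strictly in between.
from itertools import permutations

states = [frozenset(), frozenset("a"), frozenset("b"),
          frozenset("ab"), frozenset("abc")]

def covers(lo, hi, all_states):
    """True iff hi covers lo in the inclusion order."""
    if not (lo < hi):
        return False
    return not any(lo < mid < hi for mid in all_states)

edges = [(lo, hi) for lo, hi in permutations(states, 2)
         if covers(lo, hi, states)]
for lo, hi in edges:
    print(set(lo) or "{}", "->", set(hi))  # edges of the Hasse diagram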
Towards the Detection of Deception in Interactive Multimedia Environments

A classical application of biosignal analysis has been the psychophysiological detection of deception, also known as the polygraph test, which is currently part of the standard practices of law enforcement agencies and several other institutions worldwide. Although its validity is far from gathering consensus, the underlying psychophysiological principles are still an interesting add-on for more informal applications. In this paper we present an experimental off-the-person hardware setup, propose a set of feature extraction criteria and provide a comparison of two classification approaches, targeting the detection of deception in the context of a role-playing interactive multimedia environment. Our work is primarily targeted at recreational use in the context of a science exhibition, where the main goal is to present basic concepts related to knowledge discovery, biosignal analysis and psychophysiology in an educational way, using techniques that are simple enough to be understood by children of different ages. Nonetheless, this setting will also allow us to build a significant data corpus, annotated with ground-truth information and collected with non-intrusive sensors, enabling more advanced research on the topic. Experimental results have shown interesting findings and provided useful guidelines for future work.

Hugo Plácido da Silva, Ana Priscila Alves, André Lourenço, Ana Fred, Inês Montalvão, Leonel Alegre
Predictive Sentiment Analysis of Tweets: A Stock Market Application

The application addressed in this paper studies whether Twitter feeds, expressing public opinion concerning companies and their products, are a suitable data source for forecasting the movements in stock closing prices. We use the term predictive sentiment analysis to denote the approach in which sentiment analysis is used to predict the changes in the phenomenon of interest. In this paper, positive sentiment probability is proposed as a new indicator to be used in predictive sentiment analysis in finance. By using the Granger causality test we show that sentiment polarity (positive and negative sentiment) can indicate stock price movements a few days in advance. Finally, we adapted the Support Vector Machine classification mechanism to categorize tweets into three sentiment categories (positive, negative and neutral), resulting in improved predictive power of the classifier in the stock market application.

Jasmina Smailović, Miha Grčar, Nada Lavrač, Martin Žnidaršič
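A minimal sketch of the Granger-causality step described in the abstract, using statsmodels. The synthetic series stand in for the paper's daily sentiment and stock-return data; a two-day lag is planted in the toy data so the test has something to find.

# Test whether a daily sentiment series helps predict stock returns.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
sentiment = rng.normal(size=200)  # toy daily positive-sentiment scores
returns = 0.5 * np.roll(sentiment, 2) + 0.5 * rng.normal(size=200)

# Drop the first two rows contaminated by np.roll's wrap-around.
data = pd.DataFrame({"returns": returns, "sentiment": sentiment}).iloc[2:]

# Tests whether the 2nd column Granger-causes the 1st; small p-values at
# some lag suggest sentiment carries predictive information about returns.
results = grangercausalitytests(data[["returns", "sentiment"]], maxlag=3)
for lag, (tests, _) in results.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4g}")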
A UI Prototype for Emotion-Based Event Detection in the Live Web

Microblogging platforms are at the core of what is known as the Live Web: the most dynamic and fast-changing portion of the web, where content is generated constantly by the users, in snippets of information. Therefore, the Live Web (or Now Web) is a good source of information for event detection, because it reflects what is happening in the physical world in a timely manner. Meanwhile, it introduces constraints and challenges: large volumes of unstructured, noisy data, which are also as diverse as the users and their interests. In this work we present a prototype User Interface (UI) of our TwInsight system, which deals with event detection of real-world phenomena from microblogs. Our system applies i) emotion extraction techniques on microblogs, and ii) location extraction techniques on user profiles. Combining these two, we convert highly unstructured content into thematically enriched, locational information, which we present to the user through a unified front-end. A separate area of the UI is used to show events to the user as they are identified. Taking into account the characteristics of the setting, all of the components are updated along the temporal dimension. We discuss each part of our UI in detail, and present anecdotal evidence of its operation through two real-life event examples.

George Valkanas, Dimitrios Gunopulos
Challenges from Cross-Disciplinary Learning Relevant for KDD Methods in Intercultural HCI Design

In this paper, the challenges in cross-disciplinary learning relevant for using KDD methods in intercultural human-computer interaction (HCI) design are described and solutions are provided. For instance, reframing HCI through local and indigenous perspectives requires analyzing the local and indigenous perspectives relevant for HCI design. This can be done by experts in intercultural HCI design with different cultural backgrounds and different focal points from relevant disciplines such as psychology, philosophy, linguistics, computer science, information science and cultural studies, using methods for intercultural HCI design. The most important goal is, therefore, to come to a common understanding regarding the terminology, methodology and processes necessary in intercultural HCI design, to be able to use them in the intercultural context. This also has implications for the use of KDD methods in the cultural context. Hence, the challenges that evolve during the intercultural HCI design process are analyzed, and some aspects useful in reaching the goal are suggested.

Rüdiger Heimgärtner
Intent Recognition Using Neural Networks and Kalman Filters

Pointing tasks form a significant part of human-computer interaction in graphical user interfaces. Researchers have tried to reduce overall pointing time by predicting the intended target a priori from pointer movement characteristics. The task presents challenges due to the variability of pointer movements among users and the diversity of applications and target characteristics. Users with age-related or physical impairments make the task more challenging due to their variable interaction patterns. This paper proposes a set of new models for predicting the intended target, considering users with and without motor impairment. It also sets up a set of evaluation metrics to compare those models and finally discusses their utilities. Overall we achieved more than 63% accuracy of target prediction in a standard multiple-distractor task, while our model can recognize the correct target before the user has spent 70% of total pointing time, indicating a 30% reduction of pointing time in 63% of pointing tasks.

Pradipta Biswas, Gokcen Aslan Aydemir, Pat Langdon, Simon Godsill
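To illustrate the Kalman-filter component named in the title: a minimal constant-velocity filter for a 1-D pointer coordinate. All noise parameters and the sampling rate below are assumptions for the sketch, not the authors' published models.

# Constant-velocity Kalman filter for 1-D pointer tracking.
import numpy as np

dt = 0.01                              # 100 Hz pointer sampling (assumed)
F = np.array([[1, dt], [0, 1]])        # state transition: [position, velocity]
H = np.array([[1, 0]])                 # we only measure position
Q = np.eye(2) * 1e-4                   # process noise (assumed)
R = np.array([[1e-2]])                 # measurement noise (assumed)

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial estimate covariance

def kalman_step(z):
    """One predict/update cycle for measurement z (pointer coordinate)."""
    global x, P
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # update state
    P = (np.eye(2) - K @ H) @ P        # update covariance
    return x[0, 0], x[1, 0]            # filtered position and velocity

for z in [0.0, 0.11, 0.19, 0.32, 0.40]:    # noisy pointer positions
    print(kalman_step(np.array([[z]])))

The filtered velocity estimate is what makes a priori target prediction possible: extrapolating the smoothed state ahead in time points toward the likely endpoint of the movement.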
HCI Empowered Literature Mining for Cross-Domain Knowledge Discovery

This paper presents an exploration engine for text mining and cross-context link discovery, implemented as a web application with a user-friendly interface. The system supports experts in advanced document exploration by facilitating document retrieval, analysis and visualization. It enables document retrieval from public databases like PubMed, as well as by querying the web, followed by document cleaning and filtering through several filtering criteria. Document analysis includes document presentation in terms of statistical and similarity-based properties and topic ontology construction through document clustering, while the distinguishing feature of the presented system is its powerful cross-context and cross-domain document exploration facility through bridging-term discovery, aimed at finding potential cross-domain linking terms. Term ranking based on the developed ensemble heuristic enables the expert to focus on cross-context terms with greater potential for cross-context link discovery. Additionally, the system supports the expert in finding relevant documents and terms by providing customizable document visualization, a color-based domain separation scheme and highlighted top-ranked bisociative terms.

Matjaž Juršič, Bojan Cestnik, Tanja Urbančič, Nada Lavrač
An Interactive Course Analyzer for Improving Learning Styles Support Level

Learning management systems (LMSs) contain a great number of existing courses, but very little attention is paid to how well these courses actually support learners. In online learning, teachers build courses according to their teaching methods, which may not fit students with different learning styles. Harmony between the learning styles that a course supports and the actual learning styles of students can help to magnify the efficiency of the learning process. This paper presents a mechanism for analyzing existing course contents in learning management systems and an interactive tool that allows teachers to be aware of their course's support level for different learning styles of students, based on the Felder and Silverman learning style model. This tool visualizes the suitability of a course for students' learning styles and helps teachers to improve the support level of their courses for diverse learning styles.

Moushir M. El-Bishouty, Kevin Saito, Ting-Wen Chang, Kinshuk, Sabine Graf
A Framework for Automatic Identification and Visualization of Mobile Device Functionalities and Usage

While mobile learning gets more and more popular, little is known about how learners use their devices for learning successfully and how to consider context information, such as what device functionalities/features are available and frequently used by learners, to provide them with adaptive interfaces and personalized support. This paper presents a framework that automatically identifies the functionalities/features of a device (e.g., Wi-Fi connection, camera, GPS, etc.), monitors their usage and provides users with visualizations about the availability and usage of such functionalities/features. While the framework is designed for any type of device such as mobile phones, tablets and desktop-computers, this paper presents an application for Android phones. The proposed framework (and the application) can contribute towards enhancing learning outcomes in many ways. It builds the basis for providing personalized learning experiences considering the learners’ context. Furthermore, the gathered data can help in analyzing strategies for successful learning with mobile devices.

Renan H. P. Lima, Moushir M. El-Bishouty, Sabine Graf
Crowdsourcing Fact Extraction from Scientific Literature

Scientific publications constitute an extremely valuable body of knowledge and can be seen as the roots of our civilisation. However, with the exponential growth of written publications, comparing facts and findings between different research groups and communities becomes nearly impossible. In this paper, we present a conceptual approach and a first implementation for creating an open knowledge base of scientific knowledge mined from research publications. This requires extracting facts, mostly empirical observations, from unstructured texts (mainly PDFs). Due to the importance of extracting facts with high accuracy and the impreciseness of automatic methods, human quality control is of utmost importance. In order to establish such quality control mechanisms, we rely on intelligent visual interfaces and on establishing a toolset for crowdsourcing fact extraction, text mining and data integration tasks.

Christin Seifert, Michael Granitzer, Patrick Höfler, Belgin Mutlu, Vedran Sabol, Kai Schlegel, Sebastian Bayerl, Florian Stegmaier, Stefan Zwicklbauer, Roman Kern
Digital Archives: Semantic Search and Retrieval

In recent years, social media has become the main source of information regarding society's feedback on the events that shape everyday life. The social web is where journalists look to find how people respond to the news they read, but it is also the place where politicians and political analysts look to find how societies feel about political decisions, politicians, events and policies that are announced. This work reports on the design and evaluation of a search and retrieval interface for socially enriched web archives. Considerations on end-user requirements regarding the social content are presented, as well as the approach to design and testing using a large collection of web documents.

Dimitris Spiliotopoulos, Efstratios Tzoannos, Cosmin Cabulea, Dominik Frey
Inconsistency Knowledge Discovery for Longitudinal Data Management: A Model-Based Approach

In recent years, the growing diffusion of IT-based services has given rise to the use of huge masses of data. However, using data for analytical and decision-making purposes requires performing several tasks, e.g. data cleansing, data filtering, data aggregation and synthesis, etc. Tools and methodologies are required that empower people to appropriately manage the (high) complexity of large datasets.

This paper proposes the multidimensional RDQA, an enhanced version of an existing model-based data verification technique, which can be used to identify, extract, and classify data inconsistencies in longitudinal data. Specifically, it discovers fine-grained information about the data inconsistencies and uses a multidimensional visualisation technique for showing them. The enhanced RDQA supports and empowers users in the task of assessing and improving algorithms and solutions for data analysis, especially when large datasets are considered.

The proposed technique has been applied on a real-world dataset derived from the Italian labour market domain, which we made publicly available to the community.

Roberto Boselli, Mirko Cesarini, Fabio Mercorio, Mario Mezzanzanica
On Knowledge Discovery in Open Medical Data on the Example of the FDA Drug Adverse Event Reporting System for Alendronate (Fosamax)

In this paper, we present a study to discover hidden patterns in the reports of the public release of the Food and Drug Administration (FDA)'s Adverse Event Reporting System (AERS) for the alendronate (Fosamax) drug. Alendronate (Fosamax) is a widely used medication for the treatment of osteoporosis. Osteoporosis is recognised as an important public health problem because of the significant morbidity, mortality and costs of treatment. We consider the importance of alendronate (Fosamax) for medical research and explore the relationship between patient demographic information, adverse event outcomes and the drug's adverse events. We analyze the FDA's AERS reports, which cover the period from the third quarter of 2005 through the second quarter of 2012, and create a dataset for association analysis. Both the Apriori and Predictive Apriori algorithms are used to generate rules, and the results are interpreted and evaluated. According to the results, some interesting rules and associations are obtained from the dataset. We believe that our results can be useful for medical researchers and for decision making at pharmaceutical companies.

Pinar Yildirim, Ilyas Ozgur Ekmekci, Andreas Holzinger
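A hedged sketch of the association-analysis step, using the Apriori implementation from the mlxtend library (the paper does not specify this library). The tiny one-hot report table is invented and merely mimics the demographic and outcome attributes the abstract describes.

# Mine association rules from a toy table of adverse-event reports.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

reports = pd.DataFrame(
    [[1, 0, 1, 1], [1, 1, 1, 0], [0, 1, 0, 1], [1, 0, 1, 1]],
    columns=["female", "age_over_65", "femur_fracture", "hospitalization"],
).astype(bool)

frequent = apriori(reports, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])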
Random Forests for Feature Selection in Non-invasive Brain-Computer Interfacing

The aim of the present study was to evaluate the usefulness of the Random Forest (RF) machine learning technique for identifying the most significant frequency components in electroencephalogram (EEG) recordings in order to operate a brain-computer interface (BCI). EEG recorded from ten able-bodied individuals during sustained left hand, right hand and feet motor imagery was analyzed offline, and BCI simulations were computed. The results show that RF, within seconds, identified oscillatory components that allowed generating robust and stable BCI control signals. Hence, RF is a useful tool for interactive machine learning and data mining in the context of BCI.

David Steyrl, Reinhold Scherer, Gernot R. Müller-Putz
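A minimal sketch of Random-Forest feature ranking of the kind the abstract describes, using scikit-learn. Synthetic band-power features stand in for real EEG; the informative bands are planted so the ranking is verifiable.

# Rank frequency-band features by impurity-based RF importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_trials, n_bands = 200, 20                  # e.g. 20 band-power features
X = rng.normal(size=(n_trials, n_bands))
y = (X[:, 7] + X[:, 12] > 0).astype(int)     # classes depend on bands 7 and 12

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most informative frequency bands:", ranking[:5])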

Knowledge Discovery and Smart Homes

End Users Programming Smart Homes – A Case Study on Scenario Programming

Smart technology for the private home holds promising solutions, specifically in the context of population overaging. The widespread usage of smart home technology will influence computing paradigms, such as an increased need for end user programming, which will be accompanied by new usability challenges. This paper describes the evaluation of smart home scenarios and their relation to end user programming. Based on related work, a two-phase empirical evaluation is performed within which the concept of scenarios in the private home is evaluated. On the basis of this evaluation, a prototype which enables the simulation of end user programming tasks was developed and evaluated in comparison to two commercial products. The results show that, compared to the commercial products, our approach has both advantages and drawbacks, which will be taken into consideration in further development.

Gerhard Leitner, Anton J. Fercher, Christian Lassen
Understanding the Limitations of Eco-feedback: A One-Year Long-Term Study

For the last couple of decades the world has been witnessing a change in the habits of energy consumption in domestic environments, with electricity emerging as the main source of energy consumed. The effects of these changes on our eco-system are hard to assess, encouraging researchers from different fields to conduct studies with the goal of understanding and improving perceptions and behaviors regarding household energy consumption. While several of these studies report success in increasing awareness, most of them are limited to short periods of time, resulting in reduced knowledge of how householders behave in the long term. In this paper we attempt to reduce this gap by presenting a long-term study of household electricity consumption. We deployed a real-time, non-intrusive energy monitoring and eco-feedback system in 12 households for 52 weeks. Results show an increased awareness regarding electricity consumption despite a significant decrease in interactions with the eco-feedback system over time. We conclude that after one year of deployment of eco-feedback it was not possible to see any significant increase or decrease in household consumption. Our results also confirm that consumption is tightly coupled with independent variables such as household size and the income level of the families.

Lucas Pereira, Filipe Quintal, Mary Barreto, Nuno J. Nunes
“…Language in Their Very Gesture” First Steps towards Calm Smart Home Input

Weiser and Brown made it clear when they predicted the advent of ubiquitous computing: the most important and challenging aspect of developing the all-encompassing technology of the early 21st century is the need for computers that can accept and produce information in a manner based on the natural human ways of communicating. In our first steps towards a new paradigm for calm interaction, we propose a multimodal trigger for getting the attention of a passive smart home system, and we implement a gesture recognition application on a smart phone to demonstrate three key concepts: 1) the possibility that a common gesture of human communication could be used as part of that trigger; 2) that some commonly understood gestures exist and can be used immediately; and 3) that the message communicated to the system can be extracted from secondary features of a deliberate human action. Demonstrating the concept, but not the final hardware or mounting strategy, 16 individuals performed a double clap with a smart phone mounted on their upper arm. The gesture was successfully recognized in 88% of our trials. Furthermore, when asked to try to deceive the system by performing any other action that might be similar, 75% of the participants were unable to register a false positive.

John N. A. Brown, Bonifaz Kaufmann, Franz J. Huber, Karl-Heinz Pirolt, Martin Hitz
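One plausible way to recognize a double clap from arm-mounted accelerometer data is peak detection on the signal magnitude, sketched below with scipy. The thresholds, sampling rate and inter-clap window are assumptions for illustration, not the authors' published recognizer.

# Detect a double clap as two sharp accelerometer peaks within a
# plausible inter-clap interval.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                       # 100 Hz accelerometer (assumed)
t = np.arange(0, 2, 1 / fs)
accel = np.random.default_rng(2).normal(0, 0.05, t.size)
accel[50] += 3.0                               # clap 1 at t = 0.5 s
accel[85] += 3.0                               # clap 2 at t = 0.85 s

peaks, _ = find_peaks(np.abs(accel), height=1.5, distance=int(0.15 * fs))
gaps = np.diff(peaks) / fs                     # seconds between detected peaks
double_clap = any(0.2 <= g <= 0.6 for g in gaps)
print("double clap detected:", double_clap)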
"Oh, I Say, Jeeves!” A Calm Approach to Smart Home Input

Now that we are in the era of Ubiquitous Computing, our input devices must evolve beyond the mainframe and PC paradigms of the last century. Previous studies have suggested establishing automatic speech recognition and other means of audio interaction for the control of embedded systems and mobile devices. One of the major challenges for this approach is the distinction between intentional and unintentional commands, especially in a noisy environment. We propose the Snark Circuit, based on the notion that a command received three times “must be true”. Eventually, overlapping systems will recognize three triggers when a user claps twice (giving signals of sound and motion) and speaks the name of her computer. 20 participants took part in a study designed to test two of these three inputs: the sound of a double clap and a spoken name. Vocal command recognition was successful in 92.6% of our trials in which a double clap was successfully performed.

John N. A. Brown, Bonifaz Kaufmann, Florian Bacher, Christophe Sourisse, Martin Hitz
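A toy sketch of the triple-trigger idea behind the Snark Circuit: accept a command only when three independent signals co-occur within a short window. The event kinds and the window length are assumptions for illustration, not the authors' design.

# Accept a command only if all three triggers arrive within WINDOW seconds.
WINDOW = 1.5  # seconds (assumed)

def snark_accept(events):
    """events: list of (timestamp, kind) tuples."""
    needed = {"clap_sound", "clap_motion", "spoken_name"}
    for t0, _ in events:
        seen = {kind for t, kind in events if t0 <= t <= t0 + WINDOW}
        if needed <= seen:
            return True
    return False

print(snark_accept([(0.0, "clap_sound"), (0.1, "clap_motion"),
                    (0.9, "spoken_name")]))                       # True
print(snark_accept([(0.0, "clap_sound"), (2.5, "spoken_name")]))  # False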

Smart Learning Environments

Optimizing Classroom Environment to Support Technology Enhanced Learning

Researchers have found that the classroom environment is closely related to students' learning performance. Research on technology-enriched classroom environments has mainly addressed the psychological environment and its measurement. However, as technology is integrated into classrooms, the physical classroom environment should also be investigated to facilitate students' effective and engaged learning. First, we carry out a survey of current technology-enriched classrooms; after that, we sample Technology Involved Classrooms (TIC) and Technology Uninvolved Classrooms (TUC) to compare the differences between the two kinds of classroom; then we conduct classroom observations and interviews with teachers; finally, based on the analysis of these data, we propose some solutions for optimizing the classroom environment to facilitate technology-enriched learning in China.

Junfeng Yang, Ronghuai Huang, Yanyan Li
A Smart Problem Solving Environment

Researchers of constructivist learning suggest that students should learn to solve real-world problems rather than artificial ones. This paper proposes a smart constructivist learning environment which provides real-world problems collected from crowdsourcing problem-solution exchange platforms. In addition, this learning environment helps students solve real-world problems by retrieving relevant information from the Internet and by generating appropriate questions automatically. This learning environment is smart from three points of view. First, the problems to be solved by students are real-world problems. Second, the learning environment extracts relevant information available on the Internet to support problem solving. Third, the environment generates questions which help students think about the problem to be solved.

Nguyen-Thinh Le, Niels Pinkwart
Collaboration Is Smart: Smart Learning Communities

Technological advances in the last decades have significantly influenced education. Smart Learning Environments (SLEs) could be one solution to meet the needs of the 21st century. In particular, we argue that smart collaboration is one fundamental need. This paper deals with the question of what ‘smart’ is and why an SLE’s design has to consider collaboration. Drawing on various theories, we argue that the community aspect plays a vital role in successful learning and problem solving. This paper outlines the benefits for the community and all parties involved (defined as a win-for-all or winⁿ-solution), as well as drivers that might influence collaboration. Design principles for SLEs, Smart Learning Communities (SLCs) and finally the conclusion close the paper.

Gabriele Frankl, Sofie Bitter
Smart Open-Ended Learning Environments That Support Learners' Cognitive and Metacognitive Processes

Metacognition and self-regulation are important for effective learning, but novices often lack these skills. Betty's Brain, a smart open-ended learning environment, helps students develop metacognitive strategies through adaptive scaffolding as they work on challenging tasks related to building causal models of science processes. In this paper, we combine our previous work on sequence mining methods for discovering students' frequently used behavior patterns with context-driven assessments of the effectiveness of these patterns. Post hoc analysis provides the framework for systematic online analysis of students' behaviors, in order to provide the adaptive scaffolding they need to develop appropriate learning strategies and become independent learners.

Gautam Biswas, James R. Segedy, John S. Kinnebrew
Curriculum Optimization by Correlation Analysis and Its Validation

The paper introduces a refined Educational Data Mining approach, which refrains from explicit learner modeling, along with an evaluation concept. The technology is a “lazy” data mining technology, which models students' learning characteristics by considering real data instead of deriving (“guessing”) their characteristics explicitly. It aims at mining similarities in the course characteristics of former students' study traces and utilizing them to optimize the curricula of current students based on the performance traits revealed by their educational history. This technology, refined compared to a former publication, generates suggestions for personalized curricula. It is supplemented by an adaptation mechanism, which compares recent data with historical data to ensure that the mined similarities follow the dynamic changes affecting curricula (e.g., revision of course contents and materials, changes in teachers, etc.). Finally, the paper derives some refinement ideas for the evaluation method.

Kohei Takada, Yuta Miyazawa, Yukiko Yamamoto, Yosuke Imada, Setsuo Tsuruta, Rainer Knauf
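A minimal sketch of the underlying correlation-analysis idea: correlate former students' grades across courses and, for a target course, surface the most related predecessors. The grade table and course names are invented for illustration; the paper's actual mining of study traces is considerably richer.

# Correlate course grades across former students with pandas.
import pandas as pd

grades = pd.DataFrame({
    "calculus":   [3.7, 2.3, 3.0, 1.7, 3.3],
    "statistics": [3.3, 2.0, 3.3, 2.0, 3.0],
    "literature": [2.0, 3.7, 2.3, 3.3, 1.7],
})

corr = grades.corr()                   # pairwise course-grade correlations
target = "statistics"
related = corr[target].drop(target).sort_values(ascending=False)
print(f"courses most related to {target}:\n{related}")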
The Concept of eTextbooks in K-12 Classes from the Perspective of Its Stakeholders

With eTextbook initiatives on the rise, eTextbooks promise significant opportunities for improving educational practices. However, there are both positive and negative reports of using eTextbooks in academic settings. This study aimed to explore the concept of eTextbooks by eliciting the opinions of stakeholders on the features and functions of eTextbooks in K-12 classes using a Delphi method. We conducted a three-round Delphi study with 56 respondents, including administrators, teachers, students, parents, and researchers from 14 organizations in Beijing. The findings identified 18 features and functions covering a range of dimensions, including structure and layout, interactive media, note-taking tools, assignment tools and management tools. We then piloted eTextbooks in real classes at two primary schools. The results showed that eTextbooks could keep the instructional process running as smoothly as before.

Guang Chen, Chaohua Gong, Junfeng Yang, Xiaoxuan Yang, Ronghuai Huang
A Multi-dimensional Personalization Approach to Developing Adaptive Learning Systems

In this study, a multi-dimensional personalization approach is proposed for developing adaptive learning systems by taking various personalized features into account, including students' learning styles and cognitive styles. In this approach, learning materials are categorized into several types and assembled into learning content based on students' learning styles, to provide personalized learning materials and presentation layouts. Furthermore, personalized user interfaces and navigation strategies are developed based on students' cognitive styles. To evaluate the performance of the proposed approach, an experiment was conducted on a learning activity of the "Computer Networks" course of a college in Taiwan. The experimental results showed that the students who learned with the system developed with the proposed approach revealed significantly better learning achievements than the students who learned with a conventional adaptive learning system, showing that the proposed approach is effective and promising.

Tzu-Chi Yang, Gwo-Jen Hwang, Tosti H. C. Chiang, Stephen J. H. Yang
Extending the AAT Tool with a User-Friendly and Powerful Mechanism to Retrieve Complex Information from Educational Log Data

In online learning, educators and course designers traditionally have difficulty understanding how educational material is being utilized by learners in a learning management system (LMS). However, LMSs collect a great deal of data about how learners interact with the system and with learning materials/activities. Extracting these data manually requires skills that are outside the domain of educators and course designers; hence there is a need for specialized tools which provide easy access to these data. The Academic Analytics Tool (AAT) is designed to allow users to investigate elements of effective course designs and teaching strategies across courses by extracting and analysing data stored in the database of an LMS. In this paper, we present an extension to AAT, namely a user-friendly and powerful mechanism to retrieve complex information without requiring users to have a background in computer science. This mechanism allows educators and learning designers to get answers to complex questions in an easily understandable format.

Stephen Kladich, Cindy Ives, Nancy Parker, Sabine Graf
Automating the E-learning Personalization

Personalization of E-learning is considered a solution for exploiting the richness of individual differences and the different capabilities for knowledge communication. In particular, to apply a predefined personalization strategy for personalizing a course, some learner characteristics have to be considered. Furthermore, different ways of representing the course have to be considered too. This paper studies solutions to the question: how can E-learning personalization be automated according to an appropriate strategy? This study answers the question by integrating the automatic evaluation, selection and application of personalization strategies. In addition, this automation is supported by learning object metadata and an ontology which links these metadata with possible learner characteristics.

Fathi Essalmi, Leila Jemni Ben Ayed, Mohamed Jemni, Kinshuk, Sabine Graf
Teaching Computational Thinking Skills in C3STEM with Traffic Simulation

Computational thinking (CT) skills applied to Science, Technology, Engineering, and Mathematics (STEM) are critical assets for success in the 21st century workplace. Unfortunately, many K-12 students lack advanced training in these areas. C3STEM seeks to provide a framework for teaching these skills using the traffic domain as a familiar example to develop analysis and problem solving skills. C3STEM is a smart learning environment that helps students learn STEM topics in the context of analyzing traffic flow, starting with vehicle kinematics and basic driver behavior. Students then collaborate to produce a large city-wide traffic simulation with an expert tool. They are able to test specific hypotheses about improving traffic in local areas and produce results to defend their suggestions for the wider community.

Anton Dukeman, Faruk Caglar, Shashank Shekhar, John Kinnebrew, Gautam Biswas, Doug Fisher, Aniruddha Gokhale
Learning Analytics to Support the Use of Virtual Worlds in the Classroom

Virtual worlds in education, intelligent tutoring systems, and learning analytics: all these are current buzzwords in recent educational research. In this paper we introduce ProNIFA, a tool to support theory-grounded learning analytics, developed in the context of the European project Next-Tell. Concretely, we describe a log file analysis and presentation module that enables teachers to make effective use of educational scenarios in virtual worlds such as OpenSimulator or Second Life.

Michael D. Kickmeier-Rust, Dietrich Albert

Visualization and Data Analytics

Evaluation of Optimized Visualization of LiDAR Point Clouds, Based on Visual Perception

This paper presents a visual perception evaluation of efficient visualization for terrain data obtained by LiDAR technology. Firstly, we briefly summarize a proposed hierarchical data structure and discuss its advantages. Then two level-of-detail rendering algorithms are presented. The experimental results are then provided regarding the performance and rendering qualities for both approaches. The evaluation of the results is finally discussed in regard to the visual and spatial perceptions of human observers.

Sašo Pečnik, Domen Mongus, Borut Žalik
Visualising the Attributes of Biological Cells, Based on Human Perception

This paper presents a new automatic colouring technique for grey-scale images that extends the CSL model from the visual perception point of view. Colour-coding is based on the attributes of contained objects, such as their area or the radius of their bounding circle. Their extraction is achieved using advanced concepts of connected operators from mathematical morphology, whilst the CIELab LCH colour-space is considered for their visualisation. A comparison between the proposed attribute-based visualisation (ABV) model and the CSL model was performed in a test case on biological cells. Whilst both models were superior to the original grey-scale image representation, we showed that the ABV model significantly increased the clarity of the visualisation in comparison to the CSL model, as it produced smoother transitions from low to high attribute values and avoided creating visual boundaries between regions of similar attributes.

Denis Horvat, Borut Žalik, Marjan Slak Rupnik, Domen Mongus
Interactive Visual Transformation for Symbolic Representation of Time-Oriented Data

Data mining on time-oriented data has many real-world applications, like optimizing shift plans for shops or hospitals, or analyzing traffic or climate. As those data are often very large and multivariate, several methods for symbolic representation of time series have been proposed. Some of them are statistically robust, have a lower-bounding distance measure, and are easy to configure, but do not consider temporal structures and the domain knowledge of users. Other approaches, proposed as a basis for Apriori pattern finding and similar algorithms, are strongly configurable, but the parametrization is hard to perform, resulting in ad-hoc decisions. Our contribution combines the strengths of both approaches: an interactive visual interface that helps define event classes by applying statistical computations and domain knowledge at the same time. We are not focused on a particular application domain, but intend to make our approach useful for any kind of time-oriented data.

Tim Lammarsch, Wolfgang Aigner, Alessio Bertone, Markus Bögl, Theresia Gschwandtner, Silvia Miksch, Alexander Rind
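As a reference point for the first family of approaches the abstract mentions (statistically robust symbolic representations with a lower-bounding distance), here is a minimal SAX-style symbolization in numpy. It is a textbook baseline under arbitrary parameter choices, not the authors' interactive technique.

# SAX-style symbolization: z-normalize, piecewise-aggregate, then map
# segment means to letters via Gaussian breakpoints.
import numpy as np

def sax(series, n_segments=8, alphabet="abcd"):
    """Return a symbolic string for a 1-D time series."""
    z = (series - series.mean()) / series.std()      # z-normalize
    paa = z.reshape(n_segments, -1).mean(axis=1)     # piecewise means
    # Breakpoints splitting N(0,1) into 4 equiprobable bins for "abcd":
    breakpoints = [-0.67, 0.0, 0.67]
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

signal = np.sin(np.linspace(0, 4 * np.pi, 64))       # toy time series
print(sax(signal))                                    # e.g. a string like 'dcab...'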
Organizing Documents to Support Activities

This paper describes an in-the-wild study of software for document organization aimed at supporting activities. A preliminary study of current practices with traditional tools was followed by the design and development of a program called Docksy. Docksy introduces workspaces explicitly zoned by movable panels and features document descriptors augmented with tags, comments, and checkboxes. Docksy was deployed for at least two weeks, and users' practices with the new tool were studied. The aim of the study was to see how the new tool was appropriated and how people used the new features. The workspace structured in panels was shown to support users in clustering and separating documents, in having a holistic view of the document space, in locating files inside a workspace, and in managing temporary files. The study also shows how tags, comments, and checkboxes afforded the use of documents as explicit items in a workflow. The study suggests that Docksy supports users in a variety of information and activity management tasks, including new practices for emerging activities.

Anna Zacchi, Frank M. Shipman III
Backmatter
Metadata
Title
Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data
Edited by
Andreas Holzinger
Gabriella Pasi
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-39146-0
Print ISBN
978-3-642-39145-3
DOI
https://doi.org/10.1007/978-3-642-39146-0