
2017 | Book

Knowledge Management in Organizations

12th International Conference, KMO 2017, Beijing, China, August 21-24, 2017, Proceedings


About this book

This book contains the refereed proceedings of the 12th International Conference on Knowledge Management in Organizations, KMO 2017, held in Beijing, China, in August 2017. The theme of the conference was "Emerging Technology and Knowledge Management in Organizations."

The 45 contributions accepted for KMO 2017 were selected from 112 submissions and are organized in topical sections on: Knowledge Management Models and Behaviour Studies; Knowledge Sharing; Knowledge Transfer and Learning; Knowledge and Service Innovation; Knowledge and Organization; Information Systems Research; Value Chain and Supply Chain; Knowledge Re-presentation and Reasoning; Data Mining and Intelligent Science; Big Data Management; Internet of Things and Network.

Table of Contents

Frontmatter

Knowledge Management Models and Behaviour Studies

Frontmatter
Activity Theory Based Approach for Requirements Analysis of Android Applications

This paper provides a detailed explanation of the necessity of an alternative approach, based on Activity Theory, for the requirements analysis of a restaurant automation application. In the recent past, the Android platform has turned out to be one of the most user-friendly platforms for the development of application software, and the number of devices on which Android applications can run outnumbers that of other platforms. Automation is one domain that takes advantage of this platform, and restaurants are one such scenario: automation helps reduce manpower and improve the efficiency, accuracy and quality of the system. However, applications developed in this domain fail to meet all of an average customer’s requirements, so the requirements analysis phase is critical to the development of such applications. Traditional requirements engineering methods do not ensure that a majority of the requirements are captured and hence turn out to be unsuitable for a restaurant automation application. Thus, this paper proposes the use of Activity Theory for requirements analysis to capture the non-functional requirements, which play a major role in evaluating the performance characteristics of the system.

Kandiraju Sai Ashritha, T. M. Prajwala, K. Chandrasekaran
Machine Consciousness of Mind Model CAM

The consciousness and memory (CAM) mind model is a biologically inspired mind model that serves as a general framework for human-level intelligent systems. The consciousness module plays a very important role in CAM. This paper presents an architecture of machine consciousness containing awareness, attention, motivation, metacognition, introspective learning and global workspace modules. The proposed machine consciousness will be applied in cyborg intelligence for action planning.

Zhongzhi Shi, Gang Ma, Jianqing Li
The Knowledge Communication Conceptual Model in Malaysian Public Sector

Planning the future of Information and Communication Technology (ICT) in the context of the Malaysian public service lies in the hands of a group of IT experts and decision makers. Both parties are responsible for making decisions and planning the future ICT direction of the organization. It is therefore important to establish a common understanding between Information Technology (IT) experts and decision makers, as this will lead to better and more comprehensive decision making. However, there is still a lack of studies describing the parties’ potential to handle this situation from the perspective of knowledge integration among IT experts and decision makers, especially in terms of communication. It is therefore essential to develop further understanding of the knowledge communication idea. This is done by integrating the concepts of knowledge sharing and knowledge transfer. From the literature analysis, the study found that knowledge sharing frameworks and knowledge transfer frameworks might inform the knowledge communication model. Further study is therefore recommended to validate the knowledge communication conceptual model in a real organizational setting.

Rohaizan Daud, Nor Zairah Ab Rahim, Roslina Ibrahim, Suraya Ya’acob
Applying Process Virtualization Theory in E-HR Acceptance Research: Testing and Modifying an Experiment

Why are some HR processes more easily accepted when they go online? Process Virtualization Theory provides a viable explanation. This article presents the development, testing and modification of an experiment to be used in studies predicting the virtualizability of HR processes. The experimental procedure was developed along with an instrument that measures Process Virtualization requirements and other criterion variables. Two e-HR process mock-ups were developed for testing purposes. Data was collected from 230 business majors from six different colleges located in northern Taiwan. Students were randomly divided into two groups in a computer lab setting, and each group experienced a different e-HR process mock-up. T-test results show that the procedure and the instrument were able to find a significant difference in relationship requirements and monitoring capability between the two e-HR processes. However, the two groups did not show a significant difference on the criterion variable, behavioral intention.

C. Rosa Yeh, Shin-Yau Hsiao

Knowledge Sharing

Frontmatter
Virtual Teams Stress Experiment Proposal: Investigating the Effect of Cohesion, Challenge, and Hindrance on Knowledge Sharing, Satisfaction, and Performance

This paper presents a research proposal on the effect of stress on virtual teams. The proposed research concentrates specifically on virtual teams’ stress as a double construct (positive stress as challenge and negative stress as hindrance), on cohesion as social support to mitigate the stress, and on how these three factors affect virtual teams’ knowledge sharing, satisfaction, and performance. A study incorporating an experimental research methodology is proposed to test the aforementioned relationships. The paper covers the related literature on stress, hypothesis development, and the experimental research design.

Andree E. Widjaja
Knowledge Sharing on YouTube

The purpose of the present study is to examine knowledge sharing through posting video clips on YouTube and its effectiveness. YouTube was chosen as the knowledge transfer channel because it is a popular social media network that enables free interaction among registered users around a particular video clip. Different video contents were deliberately created and uploaded, and their popularity on YouTube was tested; view counts are based on YouTube statistics. Using real examples collected on YouTube, several factors were identified as reasons for the popularity of video clips: the topic selected, the keywords used and the urgency of the topic. This paper helps people develop their video clips on YouTube and create successful video marketing campaigns on the platform.

Eric Kin Wai Lau
Internal Knowledge Sharing Motivation in Startup Organizations

Knowledge sharing is integral to increasing the innovation capability of organizations. For small organizations this is of particular importance, as they require a high innovation capability in order to support organizational growth. To support internal knowledge sharing, incentives encouraging sharing can be introduced. However, there is no clear understanding of how incentives should be used to support knowledge sharing in startups. Hence this article aims to answer the question: how can incentives be used to support knowledge sharing in startup organizations?

To answer the research question, 10 semi-structured interviews were carried out in Hong Kong and Japan. Nine of the interviews were with founders and one with a lawyer who specializes in incentive creation. The interviews were then analyzed using a cross-case analysis methodology.

Results of the analysis show that the need for incentives depends on how clear the founder’s vision for the company is, how the founder views the employees, and how motivated the employees are by the topic of their work. The results give insight into whether incentives are needed to encourage knowledge sharing and when they should be used. They also open future research directions into how incentive usage evolves as the organization starts to grow.

Jouni A. Laitinen, Dai Senoo
Knowledge Sharing in Constructability Management

Knowledge sharing in construction is one of the key challenges for the successful implementation of a construction project, so improving knowledge sharing practices is worthwhile. This paper focuses on constructability management in large infrastructure projects; constructability seeks ways to construct effectively. The paper analyses how knowledge is shared when the constructability management principles are used. We propose how to improve the theoretical model of incentives for knowledge sharing, and we conclude with some future directions and suggestions for improving constructability through knowledge sharing.

Marja Naaranoja, Joni Vares

Knowledge Transfer and Learning

Frontmatter
Skills Sets Towards Becoming Effective Data Scientists

The tsunami of data and information brings with it challenges in decision making for organisations, especially in the government sector. Decision making is vital to ensuring effective service delivery to constituents. Nowadays, Big Data Analytics (BDA) tools and software are readily available, but what is lacking are the skills and competency of personnel to handle and manage these data. The Government of Malaysia requires its IT officers to assume a more important role in extracting data and turning it into valuable information that benefits operations and planning. However, initial findings revealed that these IT officers lack data scientist skills, so there is a need for a guideline on how to acquire the skills to become data scientists. This paper presents findings, gathered recently from experts’ views using the Delphi technique, regarding the data scientist skills required by government IT officers. The findings revealed 46 in-service skill sets deemed mandatory for IT officers, the top seven being analysis, data visualisation, data modelling, decision making, ethics, communication, and basic database knowledge and skills. This data is helpful for building a Data Scientist Competency Development Roadmap for the next five years as a stopgap measure until data science graduates emerge from universities.

Wardah Zainal Abidin, Nur Amie Ismail, Nurazean Maarop, Rose Alinda Alias
E-Learning Platforms Analysis for Encourage Colombian Education

The incorporation of Information and Communication Technologies into societies has proceeded in step with countries’ development; national policies are therefore directly involved in how society uses technology and how the different sectors of the economy advance, depending on access to resources. It is well known that education level is not the only factor that determines the development of a country; however, the evolution of this sector shapes how countries manage knowledge and accept technology. For this reason, this document evaluates how technology works in the education sector, surveys the existing e-learning tools, and gauges teachers’ acceptance through interviews conducted in engineering faculties in Colombia (the studied scenario). It offers a critical analysis of the impact of learning management systems as a way to improve the educational system and enhance general development in developing countries, keeping in mind the characteristics of e-learning platforms and the needs of the Colombian context.

Lizeth Xiomara Vargas Pulido, Nicolás Olaya Villamil, Giovanny Tarazona
Using a Simulation Game Approach to Introduce ERP Concepts – A Case Study

ERP systems are huge and complex, so introducing these solutions into a study program and process is quite challenging, especially when there is only a small ERP course within the scope of which students are expected to gain an understanding of ERP systems from both a functional (business process) and an implementation perspective. Fortunately, business simulation games have proved their efficacy in enhancing the learning of business-related subjects. In this paper we present preliminary results of a study investigating the appropriateness and effectiveness of the business simulation game approach for introductory ERP lectures/classes in a Master’s-level ERP course provided to students of informatics and communication technologies. The results demonstrate that the simulation game method was well accepted by both students and instructors, and recognized as a valuable teaching method for an ERP course.

Marjan Heričko, Alen Rajšp, Paul Wu Horng-Jyh, Tina Beranič
Knowledge Creation Activity System for Learning ERP Concepts Through Simulation Game

Learning ERP concepts is challenging because ERP systems are complex. Gamification promises to be an effective method for enhancing students’ motivation to learn; however, how to effectively integrate a simulation game into ERP concept learning is not straightforward. We propose a theory-driven methodology by formulating an ERP learning system based on a simulation game. We argue that Activity Theory, SECI and CMC knowledge creation processes can be consistently unified into an expressive theoretical model that clearly defines the unit of analysis and demonstrates the linkage between the expansive, social and historical processes that give rise to a snapshot of a system at a particular moment. The paper proceeds to formulate an ERP learning activity system based on the simulation game. With the clearly defined unit of analysis, a set of measures can be constructed to assess the effectiveness of the activity system. A pilot study involving graduating students enrolled in an ERP course was conducted, and the preliminary result suggests positive effects of the simulation game on the learning of ERP concepts. Lastly, the paper expands on the preliminary results and anticipates applying this theory-driven methodology to formulate comprehensive learning activity systems and measures of the effectiveness of learning ERP concepts. Further empirical studies are required to implement and assess the effectiveness of the activity systems thus anticipated.

Horng-Jyh Paul Wu, See Tiong Beng, Marjan Heričko, Tina Beranič
Entrepreneurship Knowledge Transfer Through a Serious Games Platform
The Venture Creation Game Case

One of the challenges for productivity is how to improve the experience and impact of learning in education. This research evaluates the creation of a serious game, using Nonaka’s SECI model, to transfer experiential knowledge related to entrepreneurial practices into an experiential simulation game. The game was named the “Venture Creation Game”, and its application and use were evaluated with higher education students. The empirical results indicate that games have a very positive impact on students’ learning process and are a very efficient way to enhance the teaching of entrepreneurship in higher education. Also, through the SECI model, aspects of entrepreneurship theory were developed.

Dario Liberona, Cristian Rojas

Knowledge and Service Innovation

Frontmatter
Knowledge for Translating Management Innovation into Firm Performance

This paper examines the role of tacit and explicit knowledge in translating innovation measures into firm performance in Japanese companies. While innovation has been found to be a source of higher firm performance, this research considers whether innovation measures adopted by the firm translate directly into higher firm performance, or whether these innovation measures generate tacit and/or explicit knowledge which themselves produce higher corporate performance.

Using a questionnaire survey and conditional process analysis, this paper found that there was no direct effect of innovation measures on firm performance; instead, both tacit and explicit knowledge fully mediated the relationship between innovation measures and firm performance. Previous research did not consider the role of knowledge as an interface translating management innovation into firm performance. This paper uncovers the mediating role of knowledge, potentially elucidating past inconclusive results.

Remy Magnier-Watanabe, Caroline Benton
Product vs. Service War: What Next? A Case Study of Japanese Beverage Industry Perspective

Products, services or the servitization concept: nothing is new anymore to the marketer these days. Historically, the market has been observed to shift from one phase to another roughly every ten years. From the 1950s to the 1990s, market experts shifted their focus from the production of goods to the quality of products, from selling to a marketing approach, and from product maintenance to service orientation, respectively. In the mid 2000s, the Japanese beverage industry underwent a big change, as many leading companies diversified their business focus and entered rivals’ core business segments. As a result, the marketplace was quickly flooded with same-category products, and the services-integration concept became the spotlight for marketers at the beginning of the 2010s. However, today the services offered by a soft-drink manufacturer alone cannot differentiate the firm from its rivals, which pressures firms to rethink their business strategies and find new mechanisms for long-term sustainability. This paper therefore examines the current market situation more precisely, identifies value deficiencies, and proposes a conceptual “Relationship Business Model”. Data was collected from three top leading Japanese beverage firms, namely Coca-Cola, Suntory, and ITO EN limited (CSI). The firms answered a survey with multiple-choice and open questions about the current market situation, challenges, key factors for future business success, and so on.

Zahir Ahamed, Akira Kamoshida, H. M. Belal, Chris Wai Lung Chu
Comparing the Perceived Values of Service Industry Innovation Research Subsidiary Between Reviewers and Applicants in Taiwan

A service industry innovation research (SIIR) project is a government subsidy program in Taiwan for service-oriented small and medium enterprises (SMEs). A prior study was conducted to understand the SIIR value perceived by the awarded SMEs. This research collected open-ended survey data from 37 SIIR project reviewers, yielding 10 identified value categories, to balance the single perspective of 55 applicants with 11 perceived value categories. The similarities and differences were analyzed, leading to three major findings. The comparative result provides insightful knowledge for governmental funding agencies to reposition the project objective and make procedural adjustments for sustainability. The result also presents a valuable reference for similar government subsidy projects in Taiwan and in interested countries.

Yu-Hui Tao, Yun-An Lin
Validation Tools in Research to Increase the Potential of its Commercial Application

Researchers are being challenged to show how their research contributes to commercialization in the private sector, especially when they receive public funding. At the same time, the private sector is challenged to further develop its business models so that they reflect continuous innovation in all fields. For innovation to be successful, it is important to validate it with customers and so co-create value. This paper raises awareness of the paramount role that validation techniques play in research by providing recommendations on how to properly validate research efforts. A validation tool based on the co-creation of value with customers is proposed.

Anna Závodská, Veronika Šramová, Anne-Maria Aho

Knowledge and Organization

Frontmatter
Global Studies About the Corporate Social Responsibility (CSR)

At present, corporate social responsibility (CSR) has become an imperative topic in modern firm research, and the global growth of CSR practices is remarkable. However, the association between CSR and other factors is still ambiguous. This research reviews the development of the notion of CSR practices across Western, Asian, Middle Eastern and African countries. According to the recent literature, studies show that the CSR situation in Middle Eastern countries is very optimistic.

Sahar Mansour
Enhancing Work Engagement Towards Performance Improvement

Underpinned by a role-expansionist perspective, this study examined why and how family-to-work enhancement is related to work engagement, leading to improvements in task and contextual performance. Controlling for family-to-work conflict and job autonomy, the results revealed that, consistent with our predictions, work engagement mediated the relationships of family-to-work enhancement with task and contextual performance. Moderated mediation analyses further revealed that work engagement mediated the relationship with (a) task performance only for supervisors who were supportive of their subordinates’ work-family issues, and (b) contextual performance regardless of the supervisor’s level of support. The results highlight the importance of supportive supervisors and a family-friendly work environment when examining the relationships between enhancement, engagement, and job performance.

Chris W. L. Chu, Reuben Mondejar, Akira Kamoshida, Zahir Ahamed
Knowledge Management and Triangulation Logic in the Foresight Research and Analyses in Business Process Management

The idea of this article is to integrate knowledge management thinking with triangulation logic. Triangulation logic can be seen as a quality control mechanism in the field of knowledge management; it can also be seen as a critical knowledge management mechanism that helps integrate inductive and deductive thinking in scientific projects. Triangulation is a vehicle for cross-validation when two or more distinct methods are found to be congruent and yield comparable data. Today various scholars have noted that qualitative and quantitative methods should be viewed as complementary rather than as rival approaches; in fact, most textbooks underscore the desirability of mixing methods given the strengths and weaknesses of single mono-method designs. In this scientific discussion, the triangulation logic that integrates quantitative and qualitative methods is a highly relevant knowledge management issue. Today, future-oriented foresight research is mostly based on quantitative and qualitative analyses, and the three key elements of foresight activity are diagnosis (hindsight), prognosis (foresight) and description (decision support). In order to integrate triangulation logic into these foresight activities, experts and knowledge management scientists must think carefully about alternative strategies for building a logical link between triangulation logic and foresight activities. This article provides some new ideas on how these novel knowledge management strategies could be constructed when KM issues are viewed from a broader triangulation perspective.

Kaivo-oja Jari, Lauraeus Theresa
Corporate Knowledge Management, Foresight Tools, Primary Economically Affecting Disruptive Technologies, Corporate Technological Foresight Challenges 2008–2016, and the Most Important Technology Trends for Year 2017

It is obvious that experts in the KM field must understand technological disruptions. In current market conditions, corporate and technology foresight are key elements of business landscape analysis. Firstly, the authors present key definitions and the differences between disruptive innovations, technological disruptions and radical innovations; these three concepts are highly relevant for modern corporate management foresight.

Secondly, we present McKinsey’s twelve potentially and primarily disruptive technologies, focusing on technologies that we believe have significant potential to drive economic impact and disruption by 2025. An economically disruptive technology must have the potential to create massive economic impact, so we present the disruptive technologies that drive the most economic growth and productivity.

Thirdly, the authors discuss the key elements of the current technology transformation and summarize them to create a big picture and better understanding. We present a comparative analysis of the changes in the Gartner hype cycle between the years 2008-2016, which verifies this important aspect of fast technological disruption, and we present and discuss the ten most important trends and foresights for the year 2017.

Fourthly, we present foresight tools for innovation knowledge management and corporate management. New tools for corporate and technology foresight are needed; they help leaders manage volatility, uncertainty, complexity and ambiguity, especially in the turbulent conditions of hyper-competition and technological disruption.

Kaivo-oja Jari, Lauraeus Theresa
Non Profit Institutions IT Governance: Private High Education Institutions in Bogota Case

Higher Education Institutions (HEIs) and non-profit institutions are relevant in the education sector because they can create both private and public value. In fact, to a great extent, the development of a country is linked to its education and therefore to the HEIs that provide it. Several studies correlate the performance of business organizations with their governance of information technology (IT). These research works have mainly been developed in for-profit companies, using ROI and ROA as measures of performance, indicators that do not apply to non-profit entities such as HEIs, which are measured through benefit and social impact indicators. Therefore, this research analysed IT governance in some private non-profit institutions in Colombia, with the purpose of identifying the current state of their practices in the subject and the correlation between IT governance and performance. The method used consisted of consulting experts of the HEIs studied, identifying the IT governance profile, designing a performance matrix and locating the HEIs within it. The conclusions reflect the importance of the IT governance good practices analysed in the HEIs studied, which have not yet been capitalized on to improve both academic and administrative performance; doing so would strengthen innovative practices aimed at the delivery of quality education, which can be evaluated with indicators of impact, social benefit and project management.

Yasser de Jesús Muriel Perea, Flor Nancy Díaz-Piraquive, Rubén González Crespo, Ivanhoe Rozo Rojas

Information Systems Research

Frontmatter
Decision Making in Cloud Computing: A Method that Combines Costs, Risks and Intangible Benefits

There are two directions of research in cloud computing: the first examines the economic benefits of the cloud, and the second studies the risks that arise in the transfer of information resources to the cloud, but very few studies consider the economic benefits and risks together. The purpose of this paper is to close this gap and offer a model for a joint assessment of the benefits and risks arising from the use of cloud computing. Three simple criteria that help to evaluate different sourcing alternatives are proposed, namely costs, intangible benefits and risks, and simple rules for obtaining quantitative values for these criteria are described.

Yuri Zelenkov
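
The abstract does not reproduce the paper's actual scoring rules; the following is a minimal illustrative sketch of combining the three criteria, assuming each has already been quantified and normalised to [0, 1], with hypothetical linear weights:

```python
def score_alternative(costs, intangible_benefits, risks,
                      weights=(0.5, 0.3, 0.2)):
    """Toy aggregate score for a sourcing alternative: intangible benefits
    count positively; costs and expected risk losses count negatively.
    All three inputs are assumed pre-normalised to [0, 1]."""
    w_cost, w_benefit, w_risk = weights
    return w_benefit * intangible_benefits - w_cost * costs - w_risk * risks

# Compare two hypothetical sourcing alternatives.
alternatives = {
    "on-premise":   score_alternative(costs=0.8, intangible_benefits=0.3, risks=0.2),
    "public-cloud": score_alternative(costs=0.4, intangible_benefits=0.7, risks=0.5),
}
print(max(alternatives, key=alternatives.get))  # -> "public-cloud"
```
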
Study on Network Online Monitoring Based on Information Integrated Decision System

The information integrated decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising almost a hundred kinds of software, and provides various services such as email, short messages, and drawing and sharing; it also supports two types of network, WLAN and Wi-Fi. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affects usage. Because the errors are varied and may occur in different operation phases and stages, in different TCP/IP protocol layers, and in different sub-system software, it is necessary to design a network online monitoring tool for IIDS. To solve the above problems, this paper studies network online monitoring based on IIDS, providing strong theoretical and technical support for the operation and communication of IIDS.

Fan Yang, Zhenghong Dong
Marine Geochemical Information Management Strategies and Semantic Mediation

Oceanography is a comprehensive, interdisciplinary, cooperative global ocean science. The evolution of marine investigation technologies brings varied and massive marine data, whose formats, syntax and semantics differ widely. Such heterogeneity is an obstacle to knowledge integration, transmission and sharing. A framework of E-oceanography is proposed for dealing with marine information against the background of the big data era. Within this framework, a semantic mediation system is developed for data interoperation in the marine community. The greatest benefit of the semantic management system is that it allows a growing body of marine knowledge to be accessed without semantic obstacles or interdisciplinary constraints.

Tenglong Hong, Xiaohong Wang, Jianliang Xu, Shijuan Yan, Chengfei Hou
Implementation of a Text Analysis Tool: Exploring Requirements, Success Factors and Model Fit

This paper reports on lessons learned from implementing a text analysis tool in an industrial setting. We conducted two rounds of focus group interviews, one pre- and one post-implementation, and extended our analysis with a survey undertaken one month after the tool had gone live. This methodology let us explore and compare the suitability of three different technology acceptance models. Findings show that the Technology Acceptance Model (TAM) fits as a general mathematical approach describing our tool’s acceptance, whereas the Hospitality Metaphor (HM) produces slightly more precise analytical results, explaining its adoption from a more holistic point of view. Finally, we found that the hybrid approach emphasized by the Unified Theory of Acceptance and Use of Technology (UTAUT) showed the most reliable and trustworthy results, as it combines both human and business/technology aspects.

Giorgio Ghezzi, Stephan Schlögl, Reinhard Bernsteiner

Value Chain and Supply Chain

Frontmatter
A Workflow-Driven Web Inventory Management System for Reprocessing Businesses

This paper describes the design and implementation of a workflow-driven web application that manages the inventory and business operations of recycling and reprocessing businesses. On the backend, the relational database is built with the flexibility and extensibility needed for future organizational expansion, and user management is customized to the company’s needs with tiered, role-based data access control. On the front end, a user-friendly graphical interface is tailored for non-IT users to efficiently update and search inventory items, track incoming and outgoing orders, and manage their business workflow. In the middleware, the business logic, operational workflow management, data processing, system optimizations, and web-page connections are integrated into the design and development of the various business operations. The system has been implemented using MySQL and PHP with the XAMPP web-service package; additional dashboards, graphical user interfaces, and data-analysis functions have also been implemented. The system can easily be ported to other database management systems and middleware tools, and the overall database design can conveniently be adopted by other businesses where inventory management is a challenge.

Huanmei Wu, Jian Zhang, Sunanda Mukherjee, Miaolei Deng
Towards Information Governance of Data Value Chains: Balancing the Value and Risks of Data Within a Financial Services Company

Data is emerging as a key asset of value to organizations. Unlike the traditional concepts of a business value chain or an information value chain, the concept of a data value chain has less currency and is still under-researched. This article reports on the findings of a survey of employees of a financial services company who use a range of data to support their financial analyses and investment decisions. The purpose of the survey was to test the idea of the data value chain as an abstract model useful for organizing the discrete processes involved in data gathering, data analysis, and decision-making, and further to identify issues and suggest improvements. While data and its analysis are clearly tools for supporting the delivery of financial services, there are also a number of risks to its value being realized, most prominently data quality, along with some reservations about the relative advantages of data-driven over intuitive decision-making. The findings also raise further data and information governance concerns; if implemented, such governance programmes can aid in realizing value from data while mitigating the risks of value not being realized.

Haifangming Yu, Jonathan Foster
A Dempster Shafer Theory and Fuzzy-Based Integrated Framework for Supply Chain Risk Assessment

This paper presents an integrated framework for supply chain risk assessment. The framework consists of several main components: risk identification, D-S calculation, fuzzy inference, risk analysis and risk evaluation. Risk identification comprises three parts (literature review, expert interviews, and a questionnaire), which are all used to identify the risk categories and their causes and hazards. The D-S calculation uses Dempster-Shafer evidence theory to fuse the information on potential risks identified from the experts’ knowledge, historical data, literature review and questionnaire. The fuzzy inference part addresses how to identify a risk’s impact when no explicit data is available. The risk analysis part uses the data from the D-S calculation and fuzzy inference to define the main bodies of risk, their total probability and impact, and the final score of each risk event. The risk evaluation component integrates all results from the risk analysis part and computes a final supply chain score based on weights assigned by the experts. A case study from a computer manufacturing environment is considered: by analysing the supply chain, integrating the probability, hazard, and weight of the risk events and calculating a final score, managers can gain a comprehensive understanding of the risks in the supply chain and make reasonable adjustments to avoid risks and reduce error rates, with the aim of maximizing profits.

Yancheng Shi, Zhenjiang Zhang, Kun Wang
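
To make the D-S fusion step concrete, here is a minimal sketch of Dempster's rule of combination, which the D-S calculation component relies on; the frame of discernment (a {low, high} delay risk) and the mass values are invented for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal elements, multiply masses,
    and renormalise by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two evidence sources rate a supplier-delay risk over the frame {low, high}.
m_expert = {frozenset({"low"}): 0.6, frozenset({"low", "high"}): 0.4}
m_history = {frozenset({"high"}): 0.5, frozenset({"low", "high"}): 0.5}
print(dempster_combine(m_expert, m_history))
```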

Knowledge Re-presentation and Reasoning

Frontmatter
A Minimal Temporal Logic with Multiple Fuzzy Truth-Values

Temporal logic is a very important branch of non-classical logic that systematically studies formal reasoning over time; it is in fact a kind of modal logic with the truth-value set $$\{0,1\}$$. In real life, however, propositions concerning tense are not always absolutely true or false. To this end, this paper fuzzifies the minimal temporal logic system. Specifically, we fuzzify propositions’ truth values into six fuzzy linguistic truth values, and thus build a new multi-valued temporal logic system. We also prove the completeness and soundness of our logic system, and we illustrate the system with a real-life example.

Xinyu Li, Xudong Luo, Jinsheng Chen
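
The abstract does not enumerate the six linguistic truth values or the operator semantics. Purely as an illustration of the shape such a system can take, one may assume a linearly ordered chain of six values and Gödel-style (min/inf/sup) semantics for conjunction and the temporal operators G (“always”) and F (“eventually”):

$$\mathcal{T} = \{\textit{false} \prec \textit{very unlikely} \prec \textit{unlikely} \prec \textit{likely} \prec \textit{very likely} \prec \textit{true}\}$$

$$v(\varphi \wedge \psi, t) = \min\{v(\varphi,t),\, v(\psi,t)\}, \qquad v(G\varphi, t) = \inf_{t' \ge t} v(\varphi, t'), \qquad v(F\varphi, t) = \sup_{t' \ge t} v(\varphi, t')$$
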
Modified Similarity Algorithm for Collaborative Filtering

Collaborative filtering (CF) is one of the most widely applied techniques in recommender systems and has been used in many settings. Despite the method’s advancement, the accuracy of CF requires further improvement, and numerous issues exist in traditional CF recommendation, such as data sparsity, cold start and scalability problems. Owing to data sparsity, the nearest neighbours formed around the target user can lose information. When a recommender system is newly started, rating information is scarce and the recommendations are correspondingly poor. As for scalability, in the big data setting the complexity and accuracy of the computation face great challenges. Global information has not been fully used in traditional CF methods: the cosine similarity algorithm (CSA) uses only the local information of the ratings, which may result in an inaccurate similarity and even affect the target user’s predicted rating. To solve this problem, a modified similarity algorithm is proposed to provide highly accurate recommendations, adding an adjustment factor to the traditional CSA. Finally, a series of experiments validates the effectiveness of the proposed method; the results show that the recommendation precision is better than that of traditional CF algorithms.

Kaili Shen, Yun Liu, Zhenjiang Zhang
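
The abstract does not define the adjustment factor itself; the sketch below is one plausible reading, assuming the factor damps cosine similarity when two users share few co-rated items. The threshold `gamma` and the min-based weight are hypothetical, not the paper's formula:

```python
import numpy as np

def adjusted_cosine_similarity(ratings, u, v, gamma=5):
    """Cosine similarity between users u and v over co-rated items,
    damped by an adjustment factor that penalises few co-ratings.
    `ratings` is a users x items matrix where 0 means 'not rated'."""
    co = (ratings[u] > 0) & (ratings[v] > 0)       # co-rated items
    n_co = co.sum()
    if n_co == 0:
        return 0.0
    ru, rv = ratings[u][co], ratings[v][co]
    cosine = ru @ rv / (np.linalg.norm(ru) * np.linalg.norm(rv))
    weight = min(n_co, gamma) / gamma              # hypothetical adjustment factor
    return weight * cosine

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 5, 4]])
print(adjusted_cosine_similarity(R, 0, 1))  # damped: only 2 co-rated items
```
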
Healthcare-Related Data Integration Framework and Knowledge Reasoning Process

In this paper, we illustrate a sensor-data-based healthcare information integration framework with semantic knowledge reasoning power. Nowadays, more and more people use mobile applications that collect data from a variety of health and wellbeing sensors and present significant correlations across sensor systems. However, it is difficult to correlate and integrate data from these varied sources to provide users with an overall wellbeing picture and hidden insights about systematic health trends. The paper presents a data semantic integration solution using semantic web technologies. The process includes knowledge lifting and a reasoning process that can feed back many hidden health factors and personal lifestyle analyses using the Semantic Web Rule Language (SWRL).

Hong Qing Yu, Xia Zhao, Zhikun Deng, Feng Dong

Data Mining and Intelligent Science

Frontmatter
Algorithms for Attribute Selection and Knowledge Discovery

The selection of relevant features is a task performed prior to data mining and can be seen as one of the most important problems to solve in the data preprocessing stage and in machine learning. Feature selection is mainly intended to improve the predictive or descriptive performance of models and to enable faster and less expensive algorithms. In this paper an analysis of feature selection methods is made, emphasizing decision trees, the entropy measure for ranking features, and estimation of distribution algorithms. Finally, we show the result analysis of executing the three algorithms.

Jorge Enrique Rodríguez R., Víctor Hugo Medina García, Lina María Medina Estrada
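
As a concrete illustration of the entropy measure for ranking features mentioned in the entry above, here is a minimal information-gain computation; the toy attributes and labels are invented:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction obtained by splitting `rows` on attribute `attr`."""
    n = len(rows)
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Rank two toy attributes against a yes/no target.
rows = [{"outlook": "sunny", "windy": True}, {"outlook": "rain", "windy": True},
        {"outlook": "sunny", "windy": False}, {"outlook": "rain", "windy": False}]
labels = ["no", "yes", "no", "yes"]
for attr in ("outlook", "windy"):
    print(attr, information_gain(rows, labels, attr))  # outlook wins (gain 1.0)
```
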
P-IRON for Privacy Preservation in Data Mining

Data mining involves extracting useful and interesting patterns from large datasets to create and enhance decision support systems. Because of this, data mining has become an important component in various fields of day-to-day life, including medicine, business, education and science. Numerous data mining techniques have been developed, and these techniques make privacy preservation an important issue. When applying privacy preservation techniques, attention must be paid to utility and information loss. In this paper we propose the Preference Imposed Individual Ranking based microaggregation with Optimal Noise addition technique (P-IRON) for anonymizing individual records. Through experimental results, our proposed technique is shown to prevent the disclosure of sensitive data without degrading data utility. Our work concludes with a discussion of future work and promising directions for privacy preservation in data mining.

G. Arumugam, V. Jane Varamani Sulekha
Education Intelligence Should Be the Breakthrough in Intelligence Science

Intelligence Science (IS) is a new science, defined by the Chinese Association for Artificial Intelligence (CAAI) in October 2003. The shift from Artificial Intelligence (AI) to Intelligence Science (IS) is a strategic transformation and a significant contribution to science that should be confirmed by time very soon. It applies the scientific spirit once used in the study of physics to the study of mental processes and phenomena, and it is an emergent multidisciplinary direction: science has finally focused on the field of intelligence, which should start a new age of civilization. The First International Conference on Intelligence Science (ICIS2016) was held in Chengdu, Oct. 31 to Nov. 1, 2016, aiming to construct the theoretical foundation of IS and focusing on Theoretical Intelligence (TI). Education Intelligence (EI) is another important branch of IS: from the viewpoint of intelligence, both traditional education and new human-computer interaction education are proving grounds for intelligence science study. This paper gives a new understanding of the meaning of AI and discusses some original approaches in EI, namely “Xing”, “FAI” and “BAI”; FAI and BAI decompose what is normally called AI into two steps so that mental processes can be understood more clearly. Education needs guidance and support from IS; conversely, EI should be one of the most urgent frontiers of intelligence science and should become the early breakthrough in IS study.

Chuan Zhao
An Improved Method for Chinese Company Name and Abbreviation Recognition

Recognizing Chinese company name entities and their corresponding abbreviations is vital and challenging in Chinese natural language processing, as traditional methods have difficulties with time sensitivity, with recognizing out-of-vocabulary words, and with collecting training data from open data sets. To avoid these issues and improve recognition performance, we develop a new algorithm that combines and optimizes rule-based, dictionary-based and statistical-learning approaches. Precision, recall and F-measure are 96.75%, 89.05% and 92.74% respectively on financial news from the internet.

Lei Meng, Zhi-Hao Wei, Tian-Yi Hu, Qian Cheng, Yu Zhu, Xiao-Tao Wei

Big Data Management

Frontmatter
A Big Data Application of Machine Learning-Based Framework to Identify Type 2 Diabetes Through Electronic Health Records

Studies of genotype-phenotype associations for diseases such as type 2 diabetes mellitus (T2DM) have become increasingly popular in recent years; commonly used methods are the genome-wide association study (GWAS) and the phenome-wide association study (PheWAS). To perform such analyses, it is necessary to identify T2DM cases and controls from electronic health records (EHR). However, existing expert-based identification algorithms often have low recall and miss a large number of valuable samples under conservative filtering standards. As a pilot study, this paper proposes a semi-automated framework based on machine learning that optimizes the filtering criteria to improve recall while keeping the false-positive rate low. We validate the proposed framework using an EHR database with ten years of records and show its effectiveness.

Tao Zheng, Ya Zhang
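
As an illustration of the kind of learned filter that can replace hard expert rules, here is a minimal sketch; the per-patient EHR features, the labels and the 0.3 decision threshold are all hypothetical, not the paper's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical per-patient EHR features: [max HbA1c, diabetes-medication
# orders, T2DM diagnosis codes]; label 1 = T2DM case, 0 = control.
X = np.array([[8.1, 4, 3], [5.2, 0, 0], [7.4, 2, 1], [5.6, 0, 1],
              [9.0, 6, 5], [5.4, 1, 0], [6.9, 3, 2], [5.0, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=4, scoring="recall").mean())

# Replacing hard expert cut-offs with soft probabilities lets the decision
# threshold be lowered, trading false positives for the recall the paper targets.
clf.fit(X, y)
print((clf.predict_proba(X)[:, 1] > 0.3).astype(int))
```
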
A Case Study on API-Centric Big Data Architecture

The digital transformation trend is a significant factor driving innovation in today’s world. As a result, the leaders of organizations are constantly under pressure to introduce cost-effective, performance-enhanced big data systems to stay competitive. Organizations must run mission-critical systems in a stable and secure way while responding at pace to ever-changing customer expectations; the ability to scale business operations to meet the demands of web apps, mobile apps, operational systems, big data analytical tools and so on is therefore crucial. Integration patterns have been at the heart of connecting the various systems in organizations for decades, but the industry is now focusing more on an API-centric approach as an alternative. API-centric software lets each component solve a well-defined problem, so other programs can be built upon the existing functionality of an API. This case study demonstrates the various approaches available for developing an API-centric big data system and explains the opportunities and challenges of each. Among them, the case study focuses mainly on utilizing the power of the API-building software Apigee and Azure cloud technologies to build an API-centric big data trading system for an asset management firm based in London, UK.

Aravind Kumaresan, Dario Liberona, R. K. Gnanamurthy
Survey on Big Data Security Framework

Big Data refers to large data sets that are structured or unstructured collections of various data formats. These repositories are mined for information extraction and decision-making. Since the data is sensitive, it draws unauthorized access, prompting the need to secure every vulnerable access point. Security in Big Data is one of the interesting areas being researched. This paper covers an overall framework for big data security, including data classification, authentication, authorization, cryptographic methods, logging and monitoring. Major security concerns appear at the application, network, classification and analytics levels; a breach of security results in the loss of sensitive data, threats of data misuse and even financial loss for customers. The aim of this paper is to provide a detailed survey of the research undertaken in the area of big data security frameworks over the past few years across various publications. It also analyses the open problems that still exist despite consistent exploration, and probable future directions for probing other security options.

M. Thangaraj, S. Balamurugan
Towards Integrated Model of Big Data (BD), Business Intelligence (BI) and Knowledge Management (KM)

Big Data (BD) has emerged in recent years as the latest development of Business Intelligence (BI) and Business Analytics (BA), representing new and unusual sources of data (e.g. social media, sensors), advanced technologies (e.g. Hadoop architectures, visualisation, predictive analytics), and new requirements for user skills (e.g. data scientists). This has a major impact on the foundations of the traditional Business Intelligence (BI) and Knowledge Management (KM) processes. How does this trend square with the way we conceptualize knowledge as intellectual capital and value it? Knowledge management and business intelligence have both recognized data and information, though generally as non-value precursors of valuable knowledge assets. In establishing the conceptual foundation of big data as an additional valuable asset related to knowledge, we make a case for bringing big data and business intelligence into the KM fold. In developing this theoretical foundation, concepts such as tacit and explicit knowledge, learning, and others can be deployed to increase understanding. As a result, we believe we can help the field better understand the idea of big data and how it relates to knowledge assets, as well as provide a justification for bringing knowledge management tools and processes to bear on big data and business intelligence, moving towards an integrated conceptual model of big data (BD), business intelligence (BI) and knowledge management (KM).

KM is the continuum of BD as the main information deposit, and BI is an activity necessary for mobilizing the information resource. Collective intelligence therefore needs to be structured in the context of BI to convert data into actionable knowledge for strategic advantage in various organizational environments. This paper contributes to the creation of such a model, which is the focus of ongoing research. The aim is to propose an integrated BDBIKM model that represents the processing of raw data and its transformation into contextual knowledge, which can be adopted to add specific business value in practice and to innovate knowledge that provides a unique competitive advantage.

Souad Kamoun-Chouk, Hilary Berger, Bing Hwie Sie
A Review of Resource Scheduling in Large-Scale Server Cluster

Resource scheduling plays a crucial role in improving the resource utilization of server clusters. Five types of scheduling architecture are widely used for scheduling resources in server clusters: statically partitioned schedulers, monolithic schedulers, two-level scheduling, shared-state scheduling and distributed schedulers. This paper discusses these scheduling architectures and illustrates their key techniques, including resource representation and sharing models, scheduling algorithms and other techniques; different scheduling techniques are applied in different scheduling architectures. Based on this review, it can be concluded that a great deal of work on scheduling strategies in large-scale clusters has been conducted; however, increasingly complicated applications and growing cluster sizes impose new requirements on scheduling techniques, and some scheduling techniques can still be improved.

Libo He, Zhenping Qiang, Wei Zhou, Shaowen Yao

Internet of Things and Network

Frontmatter
Mobile Application for the Reduction of Waiting Times Through IoT (Prototype)

This paper shows the development of a mobile application prototype that identifies the average waiting times in a queuing system across different branch offices or points of care. The application lets a person select the alternative that best suits him or her, showing nearby points of care ranked by the average waiting time at each point. The operation of the prototype is validated with the Flexsim queue simulator, which generates random cases for branch offices referenced from Google Maps; this makes it possible to rank points of care by shortest waiting time and to indicate their locations on the map. To implement the application, sensors must be installed at the points of care and linked to an online database through IoT, so that this information can be visualized in the mobile application.

Pedro Santiago Sierra, Jeyson Bolivar Siculaba, Giovanny Mauricio Tarazona
Design Science and ThinkLets as a Holistic Approach to Design IoT/IoS Systems

The Internet of Things (IoT) is one of the most important developments and will change not only businesses in many industries but also our personal lives. In the current literature, the technical aspects of IoT systems are dominant, as they are the basic building blocks for new products, services and processes. Nevertheless, this is not enough to make IoT successful and sustainable in terms of user acceptance.

Designing and implementing IoT systems is a novel and innovative endeavour. Besides technical aspects, the user’s point of view must be integrated into the design process at a very early stage. When it comes to the design of IoT systems, the traditional process models for systems planning have some shortcomings, especially in the requirements engineering phase. In order to overcome this, we used the Design Science Research framework, a well-known and accepted model that helps to create new and innovative systems, as an exploration framework.

The requirements engineering of IoT systems requires collaboration and cooperation between experts from different knowledge domains. ThinkLets, a specific type of collaboration pattern, were used to design and structure this highly interactive collaboration process. ThinkLets were then combined with the Design Science Research framework to define an environment suitable for this new kind of product and service.

Reinhard Bernsteiner, Stephan Schlögl
Research on DV-HOP Algorithm for Wireless Sensor Networks

The DV-Hop algorithm is a typical localization algorithm for wireless sensor networks. This paper considers mesh-distributed and randomly distributed nodes in a square area and analyses several important parameters of the algorithm, including the number of anchor nodes, the communication radius and the total number of nodes, together with each parameter’s impact on the positioning error; simulation results and analysis are given. From the simulation results, the optimum values of the parameters in the DV-Hop algorithm are obtained. Theoretical analysis and simulation results demonstrate that the optimized parameters effectively reduce the node positioning error.

Xiao-Li Cui, Wen-Bai Chen, Cui Hao
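
For reference, here is a compact sketch of the standard DV-Hop procedure the paper analyses (hop-count flooding, per-anchor average hop distance, then multilateration); the network layout, anchor choice and communication radius below are invented, and the network is assumed connected:

```python
import numpy as np

def dv_hop(positions, anchors, comm_radius):
    """Minimal DV-Hop sketch: BFS hop counts from each anchor, per-anchor
    average hop distance, then linearised least-squares multilateration.
    `positions` builds connectivity and supplies the known anchor locations."""
    n = len(positions)
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
    adj = (dist <= comm_radius) & (dist > 0)

    # Step 1: hop counts from every anchor, via BFS flooding.
    hops = np.full((len(anchors), n), np.inf)
    for k, a in enumerate(anchors):
        hops[k, a], frontier = 0, [a]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.where(adj[u])[0]:
                    if hops[k, v] == np.inf:
                        hops[k, v] = hops[k, u] + 1
                        nxt.append(v)
            frontier = nxt

    # Step 2: each anchor's average distance per hop, from anchor geometry.
    hop_size = [sum(dist[a, b] for b in anchors if b != a) /
                sum(hops[k, b] for b in anchors if b != a)
                for k, a in enumerate(anchors)]

    # Step 3: estimated distances -> linearised least-squares position fix.
    P = positions[list(anchors)]
    estimates = {}
    for u in range(n):
        if u in anchors or not np.isfinite(hops[:, u]).all():
            continue
        d = np.array([hop_size[k] * hops[k, u] for k in range(len(anchors))])
        A = 2 * (P[:-1] - P[-1])
        b = (d[-1] ** 2 - d[:-1] ** 2
             + np.sum(P[:-1] ** 2, axis=1) - np.sum(P[-1] ** 2))
        estimates[u], *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimates

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 10, size=(30, 2))
print(dv_hop(nodes, anchors=[0, 1, 2, 3], comm_radius=4.0))
```
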
Dynamic Charging Planning for Indoor WRSN Environment by using Self-propelled Vehicle

A wireless rechargeable sensor network is a promising way to solve the lifetime problem of wireless sensor networks, and different wireless rechargeable sensor network environments call for different charger-planning approaches. This work aims to plan the chargers in a large indoor environment. Considering that the distances between the dispersed sensor nodes are long, we mount the charger on a vehicle and adopt the concept of the simulated annealing algorithm to design a charging plan that solves the underlying travelling salesman problem. The results of this paper show that, in comparison with static charger planning, the dynamic charging schedule guarantees that no sensor will die and significantly reduces the cost, extending the network lifetime efficiently.

Wei-Che Chien, Hsin-Hung Cho, Chin Feng Lai, Timothy K. Shih, Han-Chieh Chao
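
As an illustration of the simulated-annealing idea applied to the charging vehicle's travelling-salesman tour, here is a compact sketch using 2-opt neighbourhood moves; the node coordinates, temperature schedule and iteration budget are invented, not the paper's parameters:

```python
import math
import random

def sa_charging_tour(nodes, start_temp=100.0, cooling=0.995, iters=20000):
    """Simulated-annealing tour over sensor-node coordinates: the mobile
    charger visits every node once, minimising total travel distance."""
    def tour_len(t):
        return sum(math.dist(nodes[t[i]], nodes[t[(i + 1) % len(t)]])
                   for i in range(len(t)))

    tour = list(range(len(nodes)))
    random.shuffle(tour)
    cur_len = tour_len(tour)
    best, best_len, temp = tour[:], cur_len, start_temp
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(nodes)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        delta = tour_len(cand) - cur_len
        # Accept improvements always; accept worse tours with a probability
        # that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            tour, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling
    return best, best_len

sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]
print(sa_charging_tour(sensors))
```
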
Backmatter
Metadata
Title
Knowledge Management in Organizations
Edited by
Lorna Uden
Wei Lu
I-Hsien Ting
Copyright Year
2017
Electronic ISBN
978-3-319-62698-7
Print ISBN
978-3-319-62697-0
DOI
https://doi.org/10.1007/978-3-319-62698-7
