
2011 | Book

Software Engineering, Business Continuity, and Education

International Conferences ASEA, DRBC and EL 2011, Held as Part of the Future Generation Information Technology Conference, FGIT 2011, in Conjunction with GDC 2011, Jeju Island, Korea, December 8-10, 2011. Proceedings

Edited by: Tai-hoon Kim, Hojjat Adeli, Haeng-kon Kim, Heau-jo Kang, Kyung Jung Kim, Akingbehin Kiumi, Byeong-Ho Kang

Publisher: Springer Berlin Heidelberg

Book series: Communications in Computer and Information Science


About this book

This book comprises selected papers of the International Conferences ASEA, DRBC and EL 2011, held as part of the Future Generation Information Technology Conference, FGIT 2011, in conjunction with GDC 2011, on Jeju Island, Korea, in December 2011. The papers presented were carefully reviewed and selected from numerous submissions and focus on various aspects of advances in software engineering and its applications, disaster recovery and business continuity, and education and learning.

Table of Contents

Frontmatter
A Novel Web Pages Classification Model Based on Integrated Ontology

The main problem with traditional text classification methods is that they cannot exploit the rich semantic information in the training data set. This paper proposes a new text classification model based on the integration of SUMO (the Suggested Upper Merged Ontology) and the WordNet ontology. The model uses the mapping relations between WordNet synsets and SUMO ontology concepts to map terms in the document-words vector space to the corresponding concepts in the ontology, forming a document-concepts vector space, on which we carry out a text classification experiment. Experimental results show that the proposed method can greatly decrease the dimensionality of the vector space and improve text classification performance.
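As a rough illustration of the mapping step described above, the sketch below collapses a document-words vector into a document-concepts vector; the synset-to-concept table is invented for illustration and is not taken from SUMO or WordNet.

```python
from collections import Counter

# Assumed mapping from WordNet-style synsets to SUMO-style concepts
# (hypothetical entries, for illustration only).
SYNSET_TO_CONCEPT = {
    "car": "Vehicle", "truck": "Vehicle", "bus": "Vehicle",
    "dog": "Animal", "cat": "Animal",
}

def to_concept_vector(term_counts):
    """Collapse a document-words vector into a document-concepts vector."""
    concepts = Counter()
    for term, count in term_counts.items():
        concept = SYNSET_TO_CONCEPT.get(term)
        if concept:                      # unmapped terms are dropped
            concepts[concept] += count
    return dict(concepts)

doc = {"car": 2, "truck": 1, "dog": 3, "python": 1}
print(to_concept_vector(doc))            # {'Vehicle': 3, 'Animal': 3}
```

Note how the concept space (2 dimensions here) is smaller than the term space (4 dimensions), which is the dimensionality reduction the abstract refers to.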

Bai Rujiang, Wang Xiaoyue, Hu Zewen
AgentSpeak (L) Based Testing of Autonomous Agents

Autonomous agents act on behalf of the user to achieve defined goals or objectives. Autonomous agents are often programmed in the AgentSpeak language, which is rich enough to provide the support necessary to achieve the desired functionality within a given environment. Testing agents programmed in AgentSpeak is a challenging task. In this paper, we propose testing such agents by deriving a goal-plan diagram from the AgentSpeak code. Coverage criteria are defined over the goal-plan diagram, and test cases meeting these criteria are used to test the AgentSpeak program against its expected functionality.

Shafiq Ur Rehman, Aamer Nadeem
A Flexible Methodology of Performance Evaluation for Fault-Tolerant Ethernet Implementation Approaches

In this paper, we propose a flexible methodology which is applicable to performance evaluation for various Fault-Tolerant Ethernet (FTE) implementation approaches including two conventional approaches, the software-based and the hardware-based ones. Then, we present performance analysis results in terms of fail-over time for a redundant Ethernet network device by using our methodology.

Hoang-Anh Pham, Dae Hoo Lee, Jong Myung Rhee
Behavioral Subtyping Relations for Timed Components

This paper deals with the behavioral substitutability of active components whose service availability is a critical criterion for the preservation of safety properties. Several timed subtyping relations are given and discussed in relation to the compatibility issue of active components.

Youcef Hammal
A Quantitative Analysis of Semantic Information Retrieval Research Progress in China

The main goal of this paper is to mine the research progress and future trends of semantic information retrieval in China. A total of 550 papers on semantic information retrieval were collected from the CNKI academic database. Bibliometric analysis and social network analysis software were employed. We derived the annual publication counts, the paper source distribution, the author distribution, the theme distribution, and the co-occurrence network of high-frequency keywords. From these tables and graphs, the main conclusion is that semantic information retrieval research is still a relatively weak force in China. Many research groups on semantic information retrieval should be established so that China can keep pace with international developments.
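The co-occurrence network of high-frequency keywords mentioned above can be sketched as a simple counting step over per-paper keyword lists; the paper records below are invented examples, not CNKI data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per collected paper.
papers = [
    ["ontology", "semantic retrieval", "query expansion"],
    ["ontology", "semantic retrieval"],
    ["semantic web", "ontology"],
]

cooccur = Counter()
for keywords in papers:
    # Each unordered keyword pair in a paper adds 1 to its edge weight.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccur[(a, b)] += 1

print(cooccur.most_common(1))  # strongest edge in the network
```

The resulting counter is exactly the weighted edge list of the co-occurrence network and can be fed to any social network analysis tool.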

Xiaoyue Wang, Rujiang Bai, Liyun Kang
Applying Evolutionary Approaches to Data Flow Testing at Unit Level

Data flow testing is a white-box testing approach that uses the data flow relations in a program for the selection of test cases. Evolutionary testing uses evolutionary approaches for the generation and selection of test data. This paper presents a novel approach that applies evolutionary algorithms to the automatic generation of test paths using the data flow relations in a program. Our approach starts with a random initial population of test paths; based on the selected testing criteria, new paths are then generated by applying a genetic algorithm. A fitness function evaluates each chromosome (path) against the selected data flow testing criteria and computes its fitness. We apply one-point crossover and mutation operators to generate the new population. The approach has been implemented in Java in a prototype tool called ETODF for validation. In experiments with this prototype, our approach achieved much better results than random testing.
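A minimal sketch of the evolutionary loop described above, in Python rather than the authors' Java tool ETODF; the control-flow graph, def-use pairs and fitness function are invented, and the mutation operator is omitted for brevity.

```python
import random

random.seed(1)
# Tiny hypothetical control-flow graph: node -> successor nodes.
CFG = {0: [1, 2], 1: [3], 2: [3], 3: []}
# Hypothetical def-use pairs a good test path should cover.
DU_PAIRS = {(0, 1), (0, 2), (1, 3), (2, 3)}

def random_path():
    node, path = 0, [0]
    while CFG[node]:
        node = random.choice(CFG[node])
        path.append(node)
    return path

def fitness(path):
    """Count the def-use pairs covered by consecutive edges of the path."""
    edges = set(zip(path, path[1:]))
    return len(DU_PAIRS & edges)

def crossover(a, b):
    """One-point crossover of two path chromosomes."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

population = [random_path() for _ in range(6)]
for _ in range(10):                      # a few generations
    population.sort(key=fitness, reverse=True)
    child = crossover(population[0], population[1])
    population[-1] = child               # replace the weakest path
best = max(population, key=fitness)
print(best, fitness(best))
```

In a real tool the fitness function would encode a specific data flow criterion (e.g. all-defs or all-uses), and invalid offspring paths would be repaired against the CFG.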

Shaukat Ali Khan, Aamer Nadeem
Volume-Rendering of Mitochondrial Transports Using VTK

Mitochondria are organelles that are important for maintaining the physiological processes of cells such as neurons. Mitochondrial transport is known to be strongly related to neurodegenerative diseases of the central nervous system such as Alzheimer's disease and Parkinson's disease. Recently, a novel micro-fluidic culture platform has made it possible to image mitochondria in vivo using a confocal microscope. However, automated analysis of these images is still infeasible because of the low signal-to-noise ratio and the formidable amount of image data. Noting that three-dimensional (3-D) visualization techniques have been useful in handling these limitations, we develop a volume-rendering tool using the Visualization Toolkit (VTK), which facilitates the analysis of mitochondrial transports and the comprehension of their correlations with characteristics of neurons.

Yeonggul Jang, Hackjoon Shim, Yoojin Chung
Model Checking of Transition-Labeled Finite-State Machines

We show that recent model-driven engineering that uses sequential finite-state models in combination with a common-sense logic is amenable to efficient model checking. To achieve this, we first provide a formal semantics for the models. Using this semantics and methods for modeling sequential programs, we obtain small Kripke structures. When considering the logics, we need to handle external variables and the possibility that those variables are affected at any time during the execution of the sequential finite-state machine; thus, we extend the construction of the Kripke structure to this case. As a proof of concept, we use the classical example of modeling the behavior of a microwave oven and producing the corresponding software directly from the models. The construction of the Kripke structure has been implemented using flex, bison and C++, and properties are verified using NuSMV.

Vladimir Estivill-Castro, David A. Rosenblueth
Development of Intelligent Effort Estimation Model Based on Fuzzy Logic Using Bayesian Networks

Researchers constantly seek accuracy gains in software estimation. At the same time, new techniques and methodologies are being employed to give estimation models the capability of intelligence and prediction. Today the target of estimation research is not only the achievement of accuracy but also the fusion of different technologies and the introduction of new factors. In this paper we propose improvements to some existing work by introducing a mechanism for gaining accuracy. The paper focuses on a method for tuning the fuzziness function and fuzziness value. It proposes the development of an intelligent Bayesian network which can be used independently to calculate the estimated effort for software development, handling uncertainty, fuzziness and effort estimation. The comparison of relative error and the magnitude of relative error bias helps in selecting the parameters of the fuzzy function; the process can be repeated n times to reach suitable accuracy. We also present an example of fuzzy set development for the ISBSG data set in order to elaborate the working of the proposed system.

Jahangir Khan, Zubair A. Shaikh, Abou Bakar Nauman
A Prolog Based Approach to Consistency Checking of UML Class and Sequence Diagrams

UML is an industrial standard for designing and developing object-oriented software. It provides a number of notations for modeling different system views, but it still does not provide any means of rigorously checking consistency among the models. These models can contain overlapping information, which may lead to inconsistencies. If these inconsistencies are not detected and resolved at an early stage, they may result in many errors in the implementation phase. In this paper, we propose a novel approach for consistency checking of class and sequence diagrams based on the Prolog language. In the proposed approach, the consistency checking rules as well as the UML models are represented in Prolog, and Prolog's reasoning engine is then used to find inconsistencies automatically.
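One typical consistency rule of this kind can be illustrated in Python instead of the authors' Prolog encoding: every message received in the sequence diagram must be declared as a method of the receiver's class. The model facts below are invented.

```python
# Class diagram facts: class name -> declared methods (hypothetical model).
class_methods = {
    "Account": {"deposit", "withdraw"},
    "Bank": {"open_account"},
}
# Sequence diagram facts: (sender, receiver, message) triples.
messages = [
    ("Client", "Bank", "open_account"),
    ("Bank", "Account", "deposit"),
    ("Bank", "Account", "close"),   # not declared in the class diagram
]

def find_inconsistencies(class_methods, messages):
    """Report messages whose receiver class does not declare the method."""
    return [(rcv, msg) for _, rcv, msg in messages
            if msg not in class_methods.get(rcv, set())]

print(find_inconsistencies(class_methods, messages))  # [('Account', 'close')]
```

In the Prolog setting the same check is a rule over `message/3` and `method/2` facts, with the reasoning engine enumerating all violating bindings.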

Zohaib Khai, Aamer Nadeem, Gang-soo Lee
A UML Profile for Real Time Industrial Control Systems

A model is a simplified representation of reality. Software models are built to represent problems in an abstract way. The Unified Modeling Language (UML) is a popular modeling language in the software engineering community. However, due to its limitations, UML does not provide a complete modeling solution for some domains, especially real-time and industrial control systems. Object-oriented modeling of real-time industrial control systems is still in its growing stage. In this research we evaluate the existing profiles for modeling real-time industrial control systems, identify the limitations of the existing modeling notations, and propose a new profile which overcomes them. Our profile is based on UML's standard extension mechanism, and the notations/symbols used follow the International Electrotechnical Commission (IEC) standard.

Kamran Latif, Aamer Nadeem, Gang-soo Lee
A Safe Regression Testing Technique for Web Services Based on WSDL Specification

Specification-based regression testing of web services is an important activity for verifying the quality of web services. A major problem is that only the provider has the source code; the user and the broker have only the XML-based specification. So, from the perspective of the user and the broker, specification-based regression testing of web services is needed, whereas the existing techniques are code-based. Due to their dynamic behavior, web services undergo maintenance and evolution rapidly, and retesting is required to verify the impact of changes. In this paper, we present an automated, safe, specification-based regression testing approach that uses the original and modified WSDL specifications for change identification. All the relevant test cases are selected as reusable; hence our regression test selection approach is safe.
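The change-identification step can be sketched as follows; real WSDL is XML, but here each specification is reduced to a hypothetical operation-name-to-signature dictionary.

```python
# Hypothetical operation signatures extracted from two WSDL versions.
original = {"getQuote": "s->f", "buy": "s,i->b", "sell": "s,i->b"}
modified = {"getQuote": "s->d", "buy": "s,i->b", "cancel": "i->b"}

def classify_changes(old, new):
    """Classify operations as added, removed, or modified between versions."""
    return {
        "added":    sorted(new.keys() - old.keys()),
        "removed":  sorted(old.keys() - new.keys()),
        "modified": sorted(op for op in old.keys() & new.keys()
                           if old[op] != new[op]),
    }

changes = classify_changes(original, modified)
print(changes)
# A safe selection re-runs every test case that touches an added,
# removed, or modified operation, and reuses the rest.
```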

Tehreem Masood, Aamer Nadeem, Gang-soo Lee
Evaluating Software Maintenance Effort: The COME Matrix

If effort estimates cannot easily be assessed upfront by software maintainers, we may face serious problems with large maintenance projects, or when we make repeated maintenance changes to software. This is particularly problematic when inaccurate estimates of the required resources lead to serious negotiation issues. The Categorisation of Maintenance Effort (COME) matrix we develop enables an overall summary of software maintenance changes and maintenance effort to be shown, upfront, to software practitioners. This can occur without the use of other effort estimation techniques, whose results can appear complicated and whose accuracy may be unclear.

We use a simple five-step approach to categorizing maintenance effort data. Using regression analysis with Jorgensen's 81 datasets to evaluate the selected variables and establish the true efficacy of our approach, we: 1) find adaptive changes and functional changes with maintenance effort predicted from low to high, 2) find high predicted effort when updating KSLOC for software maintenance changes, 3) find that more lines of source code do not imply that more software maintenance effort is needed, 4) find no significant relationship when we consider the age of the application, and 5) find that at least 20 applications with sizes between 100, 200, 400 and 500 KSLOC have a low predicted software maintenance effort.

Our experiment shows that using the COME matrix is an alternative approach to other cost estimation techniques for estimating effort for repeated requirement changes in large software maintenance projects.

Bee Bee Chua, June Verner
COSMIC Functional Size Measurement Using UML Models

Applying Functional Size Measurement (FSM) early in the software life cycle is critical for estimation purposes. COSMIC is a standardized (ISO 19761) FSM method that has enjoyed great success because it addresses different types of software, in contrast to previous generations of FSM methods. The Unified Modeling Language (UML), on the other hand, is an industry-standard software specification and modeling language. In this paper, we present a literature survey and an analysis of previous research on how to apply COSMIC functional size measurement using UML models. Moreover, we introduce a UML-based framework targeting the automation of COSMIC FSM procedures. In particular, we discuss the motivation and rationale behind our approach, which consists in extending UML through the design of a specific UML profile for COSMIC FSM in order to properly support functional size measurement of software from UML models.

Soumaya Barkallah, Abdelouahed Gherbi, Alain Abran
Identifying the Crosscutting among Concerns by Methods’ Calls Analysis

Aspect-Oriented Programming allows a better separation and encapsulation of (crosscutting) concerns by means of "aspects". This paper proposes an approach to identify and analyze the crosscutting relationships among identified concerns with respect to the concerns' structure (in terms of source code elements) and their interactions due to calls among methods. The approach has been applied to several software systems, producing valid results.

Mario Luca Bernardi, Giuseppe A. Di Lucca
A Pattern-Based Approach to Formal Specification Construction

Difficulty in constructing formal specifications is one of the great gulfs that separate formal methods from industry. Although more and more practitioners have become interested in formal methods as a potential technique for software quality assurance, they have also found it hard to express ideas properly in formal notations. This paper proposes a pattern-based approach to tackling this problem, in which a set of patterns is defined in advance. Each pattern provides an expression in an informal notation to describe a type of function, together with the method to transform the informal expression into a formal one, which enables the development of a supporting tool that automatically guides the user in gradually formalizing the specification. We take the SOFL notation as an example to discuss the underlying principle of the approach and use an example to illustrate how it works in practice.

Xi Wang, Shaoying Liu, Huaikou Miao
A Replicated Experiment with Undergraduate Students to Evaluate the Applicability of a Use Case Precedence Diagram Based Approach in Software Projects

The Use Case Precedence Diagram (UCPD) is a technique that addresses the problem of determining the construction sequence or prioritization of a software product from the developer’s perspective. This paper presents a replicated controlled experiment with undergraduate students. The results obtained from this experiment confirm the results obtained in previous studies with practitioners in which the proposed approach enables developers to define construction sequences more precisely than with other ad-hoc techniques. However, unlike previous studies with practitioners, qualitative evaluation of the UCPD based on the Method Adoption Model (MAM), where the intention to use a method is determined by the users’ perceptions, shows that the relationships defined by the MAM are not confirmed with the results obtained with undergraduate students.

José Antonio Pow-Sang, Ricardo Imbert, Ana María Moreno
Automated Requirements Elicitation for Global Software Development (GSD) Environment

Global software development (GSD) outsourcing is a modern business strategy for producing high-quality software at low cost. Most problems in GSD occur due to the lack of communication between stakeholders, time zone issues, cultural differences, etc. In this paper, our main emphasis is on improving the value-based requirement elicitation (VBRE) steps in a GSD environment and on overcoming the major GSD problems that arise while eliciting requirements from valued stakeholders. Although this model works for every kind of project, it is aimed specifically at generic software.

M. Ramzan, Asma Batool, Nasir Minhas, Zia Ul Qayyum, M. Arfan Jaffar
Optimization of Transaction Mechanism on Java Card

Reliable update of data is very important on Java Card. The transaction mechanism ensures data integrity on cards, but it is very time-consuming. Therefore, this paper presents an optimized transaction mechanism based on the high object locality on Java Card. First, we define the concept of object access and storage locality in applet transactions, and a transaction memory scheme based on a hash table is designed for the new-value logging method. Second, we design the read and write access methods for transactions based on access locality. Finally, we optimize the commit process based on storage locality in order to reduce the number of EEPROM writes. The test results show that this optimized mechanism expands the transaction capacity and improves the execution speed of Java Card applets.

Xiaoxue Yu, Dawei Zhang
SOCF: Service Oriented Common Frameworks Design Pattern for Mobile Systems with UML

The field of mobile applications and services continues to be one of the most rapidly evolving areas of communications. Modeling of domain-dependent aspects is a key prerequisite for the design of software for mobile systems. Most mobile systems include a more or less advanced model of selected aspects of the domain in which they are used. However, the traditional development approach used for business applications, databases and general software is not suitable for the different paradigms of mobile devices. In this paper, we discuss the creation of such a model and its relevance to the technical design of mobile software applications. The paper also reports on an empirical study in which a methodology that combines both of these approaches was introduced and employed for modeling the domain-dependent aspects that were relevant to the design of a mobile software component. The resulting models of domain-dependent aspects are presented, and the experiences from the modeling process are discussed. The paper focuses strongly on lightweight, mobile, service-oriented common framework architectures for business applications running on mobile devices. It is concluded that a dual perspective based on both conventional approaches is relevant for capturing the aspects necessary for creating the domain-dependent models that are integrated in a mobile software system. As an architect, you are often challenged by client enterprise architects and IT stakeholders to articulate Service-Oriented Architecture (SOA) patterns and service components in a non-proprietary, product-agnostic way. In this paper, we use Unified Modeling Language (UML) models to describe the SOA architecture pattern and its associated service components. You also learn about the service components of the SOA pattern in the context of industry-standard UML formats, to help stakeholders better understand the components that constitute an SOA.

Haeng-Kon Kim
Double Layered Genetic Algorithm for Document Clustering

The genetic algorithm for document clustering (GC) shows good performance. However, the genetic algorithm suffers from performance degradation due to the premature convergence phenomenon (PCP). In this paper, we propose a double layered genetic algorithm for document clustering (DLGC) to solve this problem. The clustering algorithms, including DLGC, are tested and compared on the Reuters-21578 data collection. The results show that in various experiments our DLGC performs best among the traditional clustering algorithms (k-means, group average clustering) and GC.

Lim Cheon Choi, Jung Song Lee, Soon Cheol Park
Multi-Objective Genetic Algorithms, NSGA-II and SPEA2, for Document Clustering

This paper proposes multi-objective genetic algorithms (MOGA) for document clustering. Hierarchical agglomerative algorithms, the k-means algorithm and the general genetic algorithm (GA) have all been applied to document clustering. However, hierarchical agglomerative algorithms have an efficiency problem (O(n^2 log n)), the k-means algorithm depends too much on the initial centroids, and the general GA can converge to a local optimum when an unsuitable objective function is defined. In this paper, two MOGA algorithms, NSGA-II and SPEA2, are applied to document clustering in order to overcome these disadvantages. We compare NSGA-II and SPEA2 with the existing clustering algorithms (k-means, general GA). Our experimental results show that, on average, NSGA-II and SPEA2 achieve about 28% higher clustering performance than the k-means algorithm and about 17% higher than the general GA.

Jung Song Lee, Lim Cheon Choi, Soon Cheol Park
Implementing a Coordination Algorithm for Parallelism on Heterogeneous Computers

Writing parallel programs is not a simple task; writing parallel programs for a heterogeneous computing environment is even more difficult. In this paper, a coordination algorithm is implemented to help programmers write implicitly parallel programs in a heterogeneous computing environment. The programs can be written in a sequential programming language that the programmers are familiar with and feel comfortable writing. In addition, the implicit parallelism frees programmers from worrying about how the tasks are to be performed in parallel; they focus only on what the tasks are supposed to do.

Hao Wu, Chia-Chu Chiang
Efficient Loop-Extended Model Checking of Data Structure Methods

Many methods in data structures contain a loop over a collection type. These loops result in a large number of test cases and are one of the main obstacles to systematically testing such methods. To deal with these loops, we propose in this paper a novel loop-extended model checking approach, abbreviated LEMC, to efficiently test whether methods satisfy their own invariants. Our main idea is to combine dynamic symbolic execution with static analysis techniques. Specifically, a concrete execution of the method under test is first performed to collect dynamic execution information, which is used to statically identify the loop-extended similar paths of the concrete execution path. LEMC statically checks and prunes all the states that follow these loop-extended similar paths. Experiments on several case studies show that LEMC can dramatically reduce the search space, by as much as 90%, and achieve much better performance than existing approaches such as the Glass Box model checker and Korat.

Qiuping Yi, Jian Liu, Wuwei Shen
The Systematic Practice of Test Design Automation

This paper proposes practical guidelines for test case generation that extend pairwise techniques with boundary value analysis and equivalence partitioning as generic techniques for analyzing the values of the factors that can affect the target features. That is, because a factor's values are also classified and any combination technique can be applied to them selectively, it is possible to create robust test cases that can detect interaction defects among factors. On top of that, single-fault-based test design can be applied so that the test cases include invalid values of factors. Also, by defining the test oracle and the details of each factor's values at the test design phase, the enormous effort that testers must otherwise spend creating test oracles or understanding automated test data can be largely eliminated.
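A minimal greedy all-pairs sketch (not the paper's tool) showing how pairwise combination keeps the suite small while still covering every value pair; the factors and values are invented.

```python
from itertools import combinations, product

# Hypothetical factors and their classified values.
factors = {"os": ["win", "mac"], "browser": ["ff", "ch"], "lang": ["en", "de"]}
names = list(factors)

def pairs_of(test):
    """All (factor, value) pairs covered by one candidate test."""
    return {((names[i], test[i]), (names[j], test[j]))
            for i, j in combinations(range(len(names)), 2)}

all_tests = list(product(*factors.values()))
uncovered = set().union(*(pairs_of(t) for t in all_tests))
suite = []
while uncovered:
    # Greedily pick the test covering the most still-uncovered pairs.
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)
print(len(suite), "tests instead of", len(all_tests))  # 4 tests instead of 8
```

Boundary and invalid values would simply be added to each factor's value list before the combination step, with single-fault tests generated separately for the invalid values.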

Oksoon Jeong
Application Runtime Framework for Model-Driven Development

Model-driven development aims to overcome the complexity of software construction by allowing developers to work with high-level models of software systems instead of low-level code. Most studies have focused on model abstraction, the deployment of modeling languages, and automated support for transforming models into implementation code. However, current model-driven engineering (MDE) has little or no support for system evolution (e.g., of the platform or the meta-model). This paper takes the vision of MDE further, transforming models into running systems. We present a framework for an MDE runtime environment that supports the model-driven development of enterprise applications by automatically deploying the models and producing the running applications. Furthermore, the framework supports platform evolution by providing an infrastructure that is robust to changing requirements from new target platforms. The framework architecture and its underlying infrastructure and mechanisms are described and illustrated on a running enterprise application system for a semi-automated price quotation approval service.

Nacha Chondamrongkul, Rattikorn Hewett
The Fractal Prediction Model of Software Reliability Based on Wavelet

Software failure time series analysis is an important part of software reliability research. Wavelet methods have frequently been used for time series analysis with high speed and accuracy. In this paper we apply a fractal model based on wavelet techniques to estimate software reliability. Analysis of empirical failure data and comparison with the classical models confirm the validity of the model. This provides a new idea for research into the software failure mechanism.

Yong Cao, Youjie Zhao, Huan Wang
Source Code Metrics and Maintainability: A Case Study

Measuring the high-level quality attributes of operation-critical IT systems is essential for keeping maintenance costs under control. International standards and recommendations, like ISO/IEC 9126, give some guidelines regarding the different quality characteristics to be assessed; however, they do not define their relationship to low-level quality attributes unambiguously. The vast majority of existing quality models use source code metrics to measure low-level quality attributes. Although a lot of research analyzes the relation of source code metrics to other objective measures, only a few studies deal with how well they express the subjective impressions of IT professionals. Our research involved 35 IT professionals and the manual evaluation of 570 class methods from an industrial and an open-source Java system. Several statistical models were built to evaluate the relation between low-level source code metrics and the high-level subjective opinions of IT experts. A decision-tree-based classifier achieved a precision of over 76% in estimating the Changeability ISO/IEC 9126 attribute.

Péter Hegedűs, Tibor Bakota, László Illés, Gergely Ladányi, Rudolf Ferenc, Tibor Gyimóthy
Systematic Verification of Operational Flight Program through Reverse Engineering

Software reverse engineering is an engineering process that analyzes a system for specific purposes, such as identifying the interrelationships between system components or reorganizing the system structure. The HELISCOPE project aims to develop an unmanned helicopter and its on-flight embedded computing system for navigation and real-time transmission of motion video using wireless communication schemes. The OFP (Operational Flight Program) in the HELISCOPE project is documented only informally and in non-standardized form, which has made it difficult for us to analyze and test it thoroughly. This paper introduces a verification plan based on reverse engineering to overcome these difficulties, and we share an experiment covering a small portion of the plan applied to the HELISCOPE OFP.

Dong-Ah Lee, Jong-Hoon Lee, Junbeom Yoo, Doo-Hyun Kim
A Study on UML Model Convergence Using Model Transformation Technique for Heterogeneous Smartphone Application

Smartphones run various platforms such as Android, Cocoa Touch, and Windows Phone. Software developed for one specific platform cannot be used on different platforms. To solve this problem, this paper suggests UML model convergence with a model conversion method to develop the corresponding software for each platform. The suggested method consists of two stages: a TIM (target independent model) stage to abstract a model independent of any particular platform, and a TSM (target specific model) stage to convert the independent model into several target models based on the model-to-model transformation method. As a case study, a calculator model on the Android platform is converted into another model on the Windows platform.

Woo Yeol Kim, Hyun Seung Son, Robert Young Chul Kim
A Validation Process for Real Time Transactions

Real financial transactions take place as a consecutive real-time series. They are carried out as dynamic transaction chains involving various related systems rather than a single system. This paper suggests a process that generates test cases for the verification of such a consecutive real-time transaction system. The suggested process generates test cases through a mechanism that applies UML and ECA (Event/Condition/Action) rules. By analyzing the transactions, we build a UML model, then map the UML model to ECA rules, which yields an ECA decision table. A test scenario is generated from this table; that is, the test scenario is modeled from an ECA diagram based on the consecutive transaction chains, from which test cases are generated.
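The ECA-decision-table step can be sketched as follows; the transaction events and conditions are invented examples, not the paper's rules.

```python
# Hypothetical ECA (Event/Condition/Action) rules for a transfer transaction.
eca_rules = [
    {"event": "transfer", "condition": "balance >= amount", "action": "commit"},
    {"event": "transfer", "condition": "balance < amount",  "action": "reject"},
]

def decision_table(rules):
    """One test-case row per rule: stimulate the event under the
    condition and use the expected action as the test oracle."""
    return [(r["event"], r["condition"], r["action"]) for r in rules]

for event, condition, expected in decision_table(eca_rules):
    print(f"when '{event}' and {condition}: expect {expected}")
```

Chaining several such tables, one per transaction in the chain, gives the consecutive test scenario the abstract describes.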

Kyu Won Kim, Woo Yeol Kim, Hyun Seung Son, Robert Young Chul Kim
A Test Management System for Operational Validation

An NMS (Network Management System) is a central monitoring system that manages the equipment in a network environment. It should be used for the efficient and centralized management of network equipment: an NMS enables the real-time transmission and monitoring of data on the states, faults, configuration, and statistics of the equipment that makes up a network. However, we need to verify whether the operations and functions of an NMS actually work. To this end, this paper suggests a test management system for the efficient verification of NMS environments. To develop the test management system, the requirements of each NMS are extracted, and the system is designed and implemented based on them. The suggested system enables efficient test management, result analysis, and comparative verification of test versions.

Myoung Wan Kim, Woo Yeol Kim, Hyun Seung Son, Robert Young Chul Kim
Mobile Application Compatibility Test System Design for Android Fragmentation

Android is an open operating system developed by the Google-led Open Handset Alliance (OHA). However, owing to the nature of an open operating system, Android suffers from the fragmentation problem: mobile application behavior varies by device. Because of this problem, developers spend a great deal of time and money testing mobile applications. In this paper, we design a mobile application compatibility test system for Android fragmentation. The system detects fragmentation by comparing code analysis results with API pre-testing results. By comparing fragmentation at the code level and the API level, the time and cost of mobile application testing can be reduced.

Hyung Kil Ham, Young Bom Park
Efficient Image Identifier Composition for Image Database

As devices with image acquisition functionality have become affordable, the number of images produced in diverse applications has grown enormous. Hence, binding an image to a distinctive value for the purpose of identification needs to be efficient in terms of cost and effective with respect to the goal. In this paper, we present a novel approach to image identifier generation. The proposed method is motivated by the pursuit of a simple but effective and efficient approach. Taking fundamental image feature extraction methods into account, we make use of the distribution of line segments to compose identifiers that satisfy a one-to-one relationship between an image and its corresponding identifier. The generated identifiers can be used for name composition in a storage system or for indexing in a massive image database. Our experimental results on the generation of constituent index values are favorable.
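
One way to turn a line-segment distribution into an identifier can be sketched as below. The binning and hashing choices are our own illustrative assumptions, not the authors' actual construction; a real system would extract the segments from the image with an edge/line detector.

```python
# Hedged sketch: composing an image identifier from the distribution of line
# segments. Segments are given as endpoint pairs; we bin their orientations
# into a histogram, normalize it, and hash it into a short hex identifier.
import hashlib
import math

def segment_angle_histogram(segments, bins=8):
    """segments: list of ((x1, y1), (x2, y2)). Returns a length-`bins`
    histogram of segment orientations in [0, pi)."""
    hist = [0] * bins
    for (x1, y1), (x2, y2) in segments:
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi
        hist[min(int(angle / math.pi * bins), bins - 1)] += 1
    return hist

def compose_identifier(segments, bins=8):
    """Normalize the histogram so the identifier does not depend on the raw
    segment count, then hash it into a 16-hex-digit identifier."""
    hist = segment_angle_histogram(segments, bins)
    total = sum(hist) or 1
    normalized = tuple(round(h / total, 3) for h in hist)
    return hashlib.sha1(repr(normalized).encode()).hexdigest()[:16]

if __name__ == "__main__":
    horizontal = [((0, 0), (10, 0)), ((0, 5), (8, 5))]
    diagonal = [((0, 0), (10, 10)), ((1, 1), (5, 5))]
    print(compose_identifier(horizontal))
    print(compose_identifier(diagonal))
```

Images with distinct orientation distributions hash to distinct identifiers, while the same image always reproduces the same identifier, which is the property needed for storage naming or database indexing.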

Je-Ho Park, Young Bom Park
A Note on Two-Stage Software Testing by Two Teams

This paper reports our experience of software reliability evaluation in an actual software development project that employed a somewhat uncommon testing procedure, which can be called two-stage testing by two teams. First we discuss the problems underlying software testing. We then explain the concept of two-stage testing by two teams and derive a stochastic model that traces the procedure. The model is used to analyze a data set obtained from the real testing activity. Using the results of the data analysis, we show some numerical illustrations for software reliability assessment and for evaluating the skill level of the test teams.

Mitsuhiro Kimura, Takaji Fujiwara
Cumulative Damage Models with Replacement Last

This paper proposes a damage model with replacement last. First, a preventive replacement policy with a damage process is given: the unit is replaced at a damage level Z or a planned time T, whichever occurs last. Second, the optimal Z for a given T is derived, and such an optimal policy is compared with those of replacement first and standard replacement. It is shown that the ratio of replacement costs plays an important role in determining which policy is better. Third, a numerical example is given for an exponential case.

Xufeng Zhao, Keiko Nakayama, Syouji Nakamura
Periodic and Random Inspection Policies for Computer Systems

Faults in computer systems sometimes occur intermittently. This paper applies a standard inspection policy with imperfect inspection to a computer system: The system is checked at periodic times and its failure is detected at the next checking time with a certain probability. The expected cost until failure detection is obtained, and when the failure time is exponential, an optimal inspection time to minimize it is derived. Next, when the system executes computer processes, it is checked at random processing times and its failure is detected at the next checking time with a certain probability. The expected cost until failure detection is obtained, and when random processing times are exponential, an optimal inspection time to minimize it is derived. Finally, this paper compares optimal times for two inspection policies and shows that if the random inspection cost is the half of the periodic one, then two expected costs are almost the same.

Mingchih Chen, Cunhua Qian, Toshio Nakagawa
Software Reliability Growth Modeling with Change-Point and Its Goodness-of-Fit Comparisons

In an actual testing-phase, software testing manager usually observes a change of the software failure-occurrence phenomenon due to some factors being related to the software reliability growth process. Testing-time when behavior of the software failure-occurrence time interval notably changes is called change-point. Such change influences accuracy of software reliability assessment based on a software reliability growth model. This paper discusses software reliability growth modeling with the influence of the change-point by using the environmental function. Then, we check goodness-of-fit of our change-point models to actual data by comparing with the existing non-change-point models.

Shinji Inoue, Shigeru Yamada
Replacement Policies with Interval of Dual System for System Transition

This study considers optimal replacement policies in which the system operates as a dual system from the beginning of new unit to the stopping of old unit. Especially, when a new unit begins to operate, it is in initial failure period. Then, the old unit is in random failure period or wearout failure period. When the system fails, minimal repair is done. Introducing the loss cost for a minimal repair and maintenance, we obtain the expected cost for a interval that the new unit is in initial failure period, and derive analytically optimal times of stopping of old unit. Numerical examples are given when the failure distribution of new unit is Weibull distribution and ones of old unit are exponential and Weibull distributions.

Satoshi Mizutani, Toshio Nakagawa
Probabilistic Analysis of a System with Illegal Access

As the Internet has been greatly developed, the demand for improvement of the reliability of the Internet has increased. Recently, there exists a problem in the illegal access which attacks a server intentionally. This paper considers two inspection policies. In Model 1, we assume that illegal access is checked at both random and periodic time. In Model 2, we assume that illegal access is checked by two types of random check. Optimal policies which minimize the expected cost are discussed. Finally, numerical examples are given.

Mitsuhiro Imaizumi, Mitsutaka Kimura
Bayesian Inference for Credible Intervals of Optimal Software Release Time

This paper deals with the estimation of a credible interval of the optimal software release time in the context of Bayesian inference. In the past literature, the optimal software release time was often discussed under the situation where model parameters are exactly known. However, in practice, we should evaluate effects of the optimal software release time on uncertainty of the model parameters. In this paper, we apply Bayesian inference to evaluating the uncertainty of the optimal software release time. More specifically, a Markov chain Monte Carlo (MCMC) method is proposed to compute a credible interval of the optimal software release time.

Hiroyuki Okamura, Tadashi Dohi, Shunji Osaki
A Note on Replacement Policies in a Cumulative Damage Model

In this note, we consider a single unit system that should operate over an infinite time span. It is assumed that shocks occur in random times and each shock causes a random amount of to a unit. These damages are additive, and a unit fails when the total damage has exceeded a failure level

K

. We consider preventive maintenance to recover the cumulative damage before system failure. We assume that the shock process is independent of system maintenance and system replacement can not affect the shock process. We compare four typical policies for preventive maintenance (PM) by numerical examples: Time-based PM, number-based PM, damage-based PM, and modified time-based PM.

Won Young Yun
Reliability Consideration of a Server System with Replication Buffering Relay Method for Disaster Recovery

Recently, replication buffering relay method has been used in order to guarantee consistency of database content and reduce cost of replication. We consider the problem of reliability in server system using the replication buffering relay method. The server in a main site updates the storage database when a client requests the data update, and transfers the data and the address of the location updated data to a buffering relay unit. The server transmits all of the data in the buffering relay unit to a backup site after a constant number of data update. We derive the expected number of the replication and of updated data in buffering relay unit. Further, we calculate the expected cost and discuss optimal replication interval to minimize it. Finally, numerical examples are given.

Mitsutaka Kimura, Mitsuhiro Imaizumi, Toshio Nakagawa
Estimating Software Reliability Using Extreme Value Distribution

In this paper, we propose a novel modeling approach for the non-homogeneous Poisson process (NHPP) based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. Specifically, we apply the extreme value distribution to the software fault-detection time distribution. We study the effectiveness of extreme value distribution in software reliability modeling and compare the resulting NHPP-based SRMs with the existing ones.

Xiao Xiao, Tadashi Dohi
Program Conversion for Detecting Data Races in Concurrent Interrupt Handlers

Data races are one of the most notorious concurrency bugs in explicitly shared-memory programs including concurrent interrupt handlers, because these bugs are hard to reproduce and lead to unintended nondeterministic executions of the program. The previous tool for detecting races in concurrent interrupt handlers converts each original handler into a corresponding thread to use existing techniques that detect races in multi-threaded programs. Unfortunately, this tool reports too many false positives, because it uses a static technique for detecting races. This paper presents a program conversion tool that translates the program to be debugged into a semantically equivalent multi-threaded program considering real-time scheduling policies and interrupt priorities of processor. And then, we detect races in the converted programs using a dynamic tool which detects races in multi-threaded programs. To evaluate this tool, we used two flight control programs for unmanned aerial vehicle. The previous approach reported two and three false positives in these programs, respectively, while our approach did not report any false positive.

Byoung-Kwi Lee, Mun-Hye Kang, Kyoung Choon Park, Jin Seob Yi, Sang Woo Yang, Yong-Kee Jun
Implementation of an Integrated Test Bed for Avionics System Development

An integrated test environment is required to test functions for development of avionics systems. In this paper we introduce an integrated test bed system which utilizes variety functions of the commercial flight simulator X-Plane and the model-based programming language LabView. The test system generates the flight data from X-Plane which linked a running gear and input the data to a display function as 3D map using Google Earth. Our proposed system could drive the flight operation in real time using generated flight data. We could trace the flying route of the simulated data based on the visualized results.

Hyeon-Gab Shin, Myeong-Chul Park, Jung-Soo Jun, Yong-Ho Moon, Seok-Wun Ha
Efficient Thread Labeling for On-the-fly Race Detection of Programs with Nested Parallelism

It is quite difficult to detect data races in parallel programs, because they may lead to unintended nondeterministic executions of the program. To detect data races during an execution of program that may have nested parallelism, it is important to maintain thread information called label which is used to determine the logical concurrency between threads. Unfortunately, the previous schemes of thread labeling introduce a serious amount of overhead that includes serializing bottleneck to access centralized data structure or depends on the maximum parallelism or the depth of nested parallelism. This paper presents an efficient thread labeling, called

eNR Labeling

, which does not use any centralized data structure and creates thread labels on every thread operation in a constant amount of time and space even in its worst case. Furthermore, this technique allows to determine the logical concurrency between threads in a small amount of time that is proportional only to the depth of nested parallelism. Compared with three state-of-the-arts labeling schemes, our empirical results using OpenMP benchmarks show that eNR labeling reduces both time overhead by 10% and space overhead by more than 90% for on-the-fly race detection.

Ok-Kyoon Ha, Yong-Kee Jun
A Taxonomy of Concurrency Bugs in Event-Driven Programs

Concurrency bugs are a well-documented topic in shared-memory programs including event-driven programs which handle asynchronous events. Asynchronous events introduce fine-grained concurrency into event-driven programs making them hard to be thoroughly tested and debugged. Unfortunately, previous taxonomies on concurrency bugs are not applicable to the debugging of event-driven programs or do not provide enough knowledge on event-driven concurrency bugs. This paper classifies the event-driven program models into low and high level based on event types and carefully examines and categorizes concurrency bug patterns in such programs. Additionally, we survey existing techniques to detect concurrency bugs in event-driven programs. To the best of our knowledge, this study provides the first detailed taxonomy on concurrency bugs in event-driven programs.

Guy Martin Tchamgoue, Ok-Kyoon Ha, Kyong-Hoon Kim, Yong-Kee Jun
Efficient Verification of First Tangled Races to Occur in Programs with Nested Parallelism

Since data races result in unintended nondeterministic executions of the programs, detecting the races is important for debugging shared memory programs with nested parallelism. Particularly, the first races to occur in an execution of a program must be detected, because they can potentially affect other races that occur later. Previous on-the-fly techniques are inefficient or can not guarantee to verify the existence of the first tangle of data races to occur. This paper presents an efficient two-pass on-the-fly technique in such a kind of programs. This technique is still efficient with regard to the execution time and memory space by keeping a constant number of accesses involved in the first races for each shared variable during the execution. We empirically compare our technique with previous techniques using a set of synthetic programs with OpenMP directives.

Mun-Hye Kang, Young-Kee Jun
Implementation of Display Based on Pilot Preference

As the many functions on the aircraft are implemented through software, they provide the pilots with more flexible and extensible controls. Among them, display is one of the important instruments in that information on the aircraft is recognized through it. While previous display was static with fixed layout, a new display is dynamic by employing floating layout on the screen through the help of software. In this paper, we propose a new display method, which automatically returns to the layout of instruments on the aircraft according to the pilot preference. To achieve this, the software records current layout and suggests the best matching layout to the pilot whenever the same event occurs. We explain the design and implementation issues to add this function into the current system.

Chung-Jae Lee, Jin Seob Yi, Ki-Il Kim
A Study on WSN System Integration for Real-Time Global Monitoring

Since Wireless Sensor Networks (WSNs) have a lot of potential capability to provide diverse services to human by monitoring things scattered in real world, they are envisioned one of the core enabling technologies for ubiquitous computing. However, existing sensor network systems are designed for observing special zones or regional things by using small-scale, low power, and short range technologies. The seamless system integration in global scale is still in its infancy stage due to the lack of the fundamental integration technologies. In this paper, we present an effective integration avenue of real-time global monitoring system. The proposed technology includes design, integration, and operational strategies of IP-WSN based territorial monitoring system to ensure compatibility and interoperability. We especially offer the standardizations of sensing data formats and their database interfaces, which enable a spontaneous and systematic integration among the legacy WSN systems. The proposed technology would be a fundamental element for the practically deployable global territorial monitoring systems.

Young-Joo Kim, Sungmin Hong, Jong-uk Lee, Sejun Song, Daeyoung Kim
The Modeling Approaches of Distributed Computing Systems

The distributed computing systems have grown to a large scale having different applications such as, grid computing and cloud computing systems. It is important to model the architectures of large scale distributed computing systems in order to ensure stable dynamics of the computing systems as well as the reliability and performance. In general, the discrete event dynamical systems formalism is employed to construct the models of different dynamical systems. This paper proposes the constructions of models of distributed computing systems employing two different formalisms such as, discrete event dynamical systems and discrete dynamical systems. A comparative analysis illustrates that the discrete dynamical systems based model of distributed computing systems can be very effective to design and analyze the distributed computing architectures.

Susmit Bagchi
Event-Centric Test Case Scripting Method for SOA Execution Environment

Electronic collaboration over the Internet between business partners appear to be converging toward well-established types of message exchange patterns that involve both user-defined standards and infrastructure standards. At the same time, the notion of event is increasingly promoted for asynchronous communication and coordination in SOA systems. In collaboration between partners or between components is achieved by the means of choreographed exchanges of discrete units of data - messages or events - over an Internet-based protocol. This paper presents an event-centric test case scripting method and execution model for such systems.

Youngkon Lee
bQoS(business QoS) Parameters for SOA Quality Rating

With Web services starting to be deployed within organizations and being offered as paid services across organizational boundaries, quality of service (QoS) has become one of the key issues to be addressed by providers and clients. While methods to describe and advertise QoS properties have been developed, the main outstanding issue remains how to implement a service that lives up to promised QoS properties. This paper provides the service level agreement (SLA) parameters for QoS management applied to Web services and raises a set of research issues that originate in the virtualization aspect of services and are specific to QoS management in a services environment – beyond what is addressed so far by work in the areas of distributed systems and performance management.

Youngkon Lee
Business-Centric Test Assertion Model for SOA

This paper presents a design method for business-centric SOA test framework. The reference architecture of SOA system is usually layered: business process layer, service layer, and computing resource layer. In the architecture, there are so many subsystems affecting the system’s performance, which relates with each other. As a result, in respect of overall performance, it is meaningless to measure each subsystem’s performance separately. In SOA system, the performance of the business process layer with which users keep in contact usually depends on the summation of the performance of the other lower layers. Therefore, for testing SOA system, test cases describing business process activities should be prepared. We devised a business-centric SOA test assertion model which enables to semi-automatic transform test assertions into test cases by the concept of prescription level and normalized prerequisite definition. The model also minimizes the semantic distortion in the transformation process.

Youngkon Lee
Application of Systemability to Software Reliability Evaluation

This paper applies the concept of systemability, which is defined as the reliability characteristic subject to the uncertainty of the field operational environment, to the operational software reliability evaluation. We take the position that the software reliability characteristic in the testing phase is originally different from that in the user operation. First we introduce the environmental factor to consistently bridge the gap between the software failure-occurrence characteristics during the testing and the operation phases. Then we consider the randomness of the environmental factor, i.e., the environmental factor is treated as a random-distributed variable. We use the hazard rate-based model focusing on the software failure-occurrence time to describe the software reliability growth phenomena in the testing and the operation phases. We derive several operational software reliability assessment measures. Finally, we show several numerical illustrations to investigate the impacts of the consideration of systemability on the field software reliability evaluation.

Koichi Tokuno, Shigeru Yamada
‘Surge Capacity Evaluation of an Emergency Department in Case of Mass Casualty’

Health care has experienced many silos efforts to address mitigation and preparedness for large scale emergencies or disasters that can bring catastrophic consequences. All professionals and experts in this area have each developed relatively independent efforts to enhance emergency response of a health care facility in case of some disaster, but the need of the time is to integrate all these crucially important initiatives. A comprehensive surge management plan that provides coherent strategic guidance and tactical directions should be developed and implemented on priority in each health care installation on facility level followed by its integration with state surge management plan for optimum use or high utilization of space, resources and services. This research uses the concept of daily surge status and capacity of a health care facility, its relationship and the need of its effective integration with state level surge management plan which is a relatively new area to be considered. The simulation modeling and analysis technique is used for the modeling of an emergency department of a health care facility under consideration and after having an insight of the prevailing situation, few crowding indices were developed while considering resource capacities for the purpose of using them appropriately to reflect facility’s daily surge status when required. The crowding indices as developed will highlight health care facility AS-IS situation after a specific time interval as defined by the management and will actuate relevant surge control measures in the light of developed surge management plan to effectively and efficiently cater for the surge and for restoration of normal facility working and operation.

Young Hoon Lee, Heeyeon Seo, Farrukh Rasheed, Kyung Sup Kim, Seung Ho Kim, Incheol Park
Business Continuity after the 2003 Bam Earthquake in Iran

In disaster prone countries, such as Iran, business continuity are one of the most important challenges in relation to disaster recovery operations. Iranian experiences indicate that following natural disasters, small business recovery has been faced with a number of problems. It seems that in the light of the lack of business continuity planning, disruption of small business operations may cause the affected community with a number of difficulties.

The 2003 Bam Earthquake resulted in huge physical destruction and significant impacts on small businesses. Following the disruption of economic activities, decision makers considered many issues in small-business recovery. However in the process of post-earthquake business continuity recovery, problems were arisen not only for shopkeepers but citizens. For instance, In the case of Bam, lack of specific organization and having a ready plan for pre-plan on business continuity management and also inefficient cooperation between the stockholders and the planners caused the problems of small business district reconstruction program reach a peak. So reconstruction planners endeavored to decrease the negative effects of shopping arcades reconstruction on the local community. In addition, the allocation of low interest loans to small business recovery resulted in some satisfaction among the stockholders. However in some aspects implemented plans were not able to facilitate the economic recovery completely. The present paper examines the reconstruction process of small businesses after Bam earthquake and discusses the main aspects of Business Continuity Planning. It concludes that an integration of Business Continuity Planning and reconstruction master plan may pave the way of better disaster recovery planning in Iran.

Alireza Fallahi, Solmaz Arzhangi
Emergency-Affected Population Identification and Notification by Using Online Social Networks

Natural disasters have been a major cause of huge losses for both people’s life and property. There is no doubt that the importance of Emergency Warning System (EWS) has been considered more seriously than ever. Unfortunately, most EWSs do not provide acceptable service to identify people who might be affected by a certain disasters. In this project, we propose an approach to identify possibly affected users of a target disaster by using online social networks. The proposed method consists of three phases. First of all, we collect location information from social network websites, such as Twitter. Then, we propose a social network analysis algorithm to identify potential victims and communities. Finally, we conduct an experiment to test the accuracy and efficiency of the approach. Based on the result, we claim that the approach can facilitate identifying potential victims effectively based on data from social networking systems.

Huong Pho, Soyeon Caren Han, Byeong Ho Kang
Development and Application of an m-Learning System That Supports Efficient Management of ‘Creative Activities’ and Group Learning

In this paper, we developed an App of SNS-based mobile learning system where students can interact anytime and anywhere and the teacher can five a real-time feedback. By activating the developed App at Smartphones, users can store a variety of data at a web server without accessing to the website. In addition, if the teacher and students are at a different place, problems can be solved real-time via the SNS of the system, with feedback from the teacher. Therefore, when used for other activities like ‘creative activity’ classes and group lessons, the developed App can be used for selfdirected learning and as a medium of real-time communication to raise the performance and interest of students. Also, the teacher can have an in-depth understanding of the level of each student from accumulated data, and students can develop various portfolios.

Myung-suk Lee, Yoo-ek Son
The Good and the Bad: The Effects of Excellence in the Internet and Mobile Phone Usage

Advanced InformationTechnology brings not only good effects but also bad effects to people’s lives. In this paper, we analyze the relationship between Korean Information Culture Index (KICI) and the Internet and mobile phone addiction for Korean adolescent. KICI represents a quantitative value that measures people’s information literacy levels. Thus, KICI measures how well people use the Internet, whereas the Internet and mobile phone addiction levels represent how badly people use them. In order to achieve our research goal, we firstly diagnose KICI and the Internet and mobile phone addiction level of Korean adolescent by using our questionnaire. Next, we analyze the relationships between KICI and the two addiction levels with regression analysis to see how KICI affects the two addictions. We also find out which questions are more influential to Korean adolescent in KICI measurement. Finally, we propose a better educational way to improve KICI level and to decrease the two addiction levels for Korean adolescent.

Hyung Chul Kim, Chan Jung Park, Young Min Ko, Jung Suk Hyun, Cheol Min Kim
Trends in Social Media Application: The Potential of Google+ for Education Shown in the Example of a Bachelor’s Degree Course on Marketing

Google Plus has the potential to improve students’ collaboration through circles, conduct research for projects with sparks, improve the student-instructor relationship by using this kind of social media to get in touch with each other, and support blended learning with the hang out functionality. The course concept, which is shown as an example, offers a concrete approach for integrating Google Plus functionalities in education. The results of the forthcoming analysis of the concept in use will show advantages and potential for improvement, both of the system and the use of it in education.

Alptekin Erkollar, Birgit Oberer
Learning Preferences and Self-Regulation – Design of a Learner-Directed E-Learning Model

In e-learning, questions concerned how one can create course material that motivate and support students in guiding their own learning have attracted an increasing number of research interests ranging from adaptive learning systems design to personal learning environments and learning styles/preferences theories. The main challenge of learning online remains how learners can accurately direct and regulate their own learning without the presence of tutors to provide instant feedback. Furthermore, learning a complex topic structured in various media and modes of delivery require learners to make certain instructional decisions concerning what to learn and how to go about their learning. In other words, learning requires learners to self-regulate their own learning[1]. Very often, learners have difficulty self-directing when topics are complex and unfamiliar. It is not always clear to the learners if their instructional decisions are optimal.[2] Research into adaptive e-learning systems has attempted to facilitate this process by providing recommendations, classifying learners into different preferred learning styles, or highlighting suggested learning paths[3]. However, system-initiated learning aid is just one way of supporting learners; a more holistic approach, we would argue, is to provide a simple, all-in-one interface that has a mix of delivery modes and self-regulation learning activities embedded in order to help individuals learn how to improve their learning process. The aim of this research is to explore how learners can self-direct and self-regulate their online learning both in terms of domain knowledge and meta knowledge in the subject of computer science. Two educational theories: experiential learning theory (ELT) and self-regulated learning (SRL) theory are used as the underpinning instructional design principle. 
To assess the usefulness of this approach, we plan to measure: changes in domain-knowledge; changes in meta-knowledge; learner satisfaction; perceived controllability; and system usability. In sum, this paper describes the research work being done on the initial development of the e-learning model, instructional design framework, research design as well as issues relating to the implementation of such approach.

Stella Lee, Trevor Barker, Vive Kumar
Project Based Learning in Higher Education with ICT: Designing and Tutoring Digital Design Course at M S R I T, Bangalore

This paper presents an approach to developing a digital design curriculum using modified Bloom’s Taxonomy to make it more appealing to students. The proposed approach uses the Project Based Learning strategy to increase the attractiveness of the course. The case study also explores the use of ICT for effective teaching and learning. The developed curriculum has been evaluated successfully over the last three academic years. Students have shown increased interest in the digital design course, acquired new skills, and obtained good academic results. An important observation during the conduct of the course was that all students developed innovative digital systems, exhibited teamwork, and made a poster presentation during the second year of their engineering undergraduate course.

Satyadhyan Chickerur, M. Aswatha Kumar
A Case Study on Improvement of Student Evaluation of University Teaching

This study raises current issues in student evaluation of university teaching and seeks ways to improve the evaluation. A survey was conducted with a sample of 1,690 students, and 24 professors from W university were interviewed. The results are as follows. Most students and instructors agreed on the necessity of course evaluation and the disclosure of evaluation results. However, the evaluation method and the use of evaluation results were recognized as problematic. The findings suggest developing valid evaluation instruments, providing full information related to course evaluation, and actively using evaluation results by establishing a support and management system.

Sung-Hyun Cha, Kum-Taek Seo
An Inquiry into the Learning Principles Based on the Objectives of Self-directed Learning

Self-directed learning has been utilized as a method to increase students’ class participation rates in education plans and has been advocated as a goal of school education in order to cultivate the ‘self-directed student’. It is not apparent whether the objective is the advancement of self-directed learning competency itself, or whether self-directed learning is merely to be utilized as a method to help accomplish certain study objectives. Depending on the objectives being pursued, we must differentiate between self-directed learning that seeks to better understand the characteristics and principles of a course of education and self-directed learning that aims to elevate self-directed learning competency. These two forms of objectives are pursued using different learning systems and methods. In order to execute both forms of self-directed learning and to evaluate self-directed learning competency in schools, a close investigation is needed into whether these two different objectives can co-exist with one another.

Gi-Wang Shin
Bioethics Curriculum Development for Nursing Students in South Korea Based on Debate as a Teaching Strategy

Bioethics has become a major field of discourse in medical education. This study aims to develop a bioethics curriculum for nursing students. A survey of nursing students and educators revealed that they think the bioethics curriculum should be a major requirement in the form of a two-credit, semester-long course open to all grades. The curriculum’s learning contents consist of 8 categories and 29 subcategories. The curriculum utilizes both lecture and debate for effective education: lecture is used mainly to teach content of a theoretical nature, and debate is used for other, case-based content. The findings of this study will provide a basis for a bioethics education program for nursing students.

Kwisoon Choe, Myeong-kuk Sung, Sangyoon Park
A Case Study on SUID in Child-Care Facilities

This study investigates the healing process of parents who suddenly lose a child by observing the psychological changes reflected in the parents’ behavioral characteristics, based on recent SUID cases. The analysis is based on the characteristics of a mourning process presented in a case in which a mother loses her child and overcomes her grief of bereavement through creative painting. The results show three distinct periods in the parent’s behavioral characteristics. First, for 1~2 months after the child’s death, the parent does not accept the death and displays anger and violence. Second, although the parent shows certain rational behaviors 3~4 months after the child’s death, the parent’s emotions remain unstable. Third, after 4 months the parent shows rational judgement and tries to establish new relationships and goals through efforts to give meaning to the child’s death.

Soon-Jeoung Moon, Chang-Suk Kang, Hyun-Hee Jung, Myoung-Hee Lee, Sin-Won Lim, Sung-Hyun Cha, Kum-Taek Seo
Frames of Creativity-DESK Model; Its Application to ’Education 3.0’

Recently, creativity education has become the central trend of education in Korea. In this situation, we should critically review the understanding of and approach to existing creativity education and suggest alternative ideas where problems are found. This paper takes note of the fact that the concept of creativity that has been dominantly accepted thus far does not reflect the cultural background of Korea. This research proposes ideas for understanding and teaching creativity based on the agrarian culture that is the cultural base of Eastern countries such as Korea, Japan, and China. In agrarian culture, the contents of education are regarded as more important than the methods of education. As contents for creativity education, the author proposes 114 elements of creativity (the DESK model of creativity) and discusses quite concrete methods to develop creativity education programs and teach creativity utilizing these elements with ’Education 3.0’, a newly introduced approach to education.

Seon-ha Im
Blended Nurture

There are numerous offerings in the media landscape that deal with the topic of upbringing, and the quality and usefulness of these media vary greatly. In particular, the portrayal of educational concepts on television is questionable, because here the line separating reality from fiction is blurred. This paper discusses the possibility of using a new, converged media format in the development of educationally oriented games and summarizes a number of parameters for the creation of interactive, virtual environments that serve educational purposes. The concept of Blended Nurture will be introduced, which combines computer-delivered content with live interaction and arranges it in the new format.

Robert J. Wierzbicki
University-Industry Ecosystem: Factors for Collaborative Environment

This research contribution is an effort to identify the root causes that lead to the integration and disintegration of Research & Development (R&D) collaborations between university and industry. R&D has been practiced for many years in nearly all kinds of organizations in developed and developing nations. Educational institutions with a strong, state-of-the-art research knowledge base have shown greater intent to pursue R&D collaborations. The literature presents many reasons for the execution of these joint ventures and elaborates the key issues that make them more profitable and feasible. University-industry R&D collaboration has gained important research status and is becoming an emerging field due to the global wave of commercialization, cooperation, and mergers.

The purpose of this research is to find the raisons d’être that lead firms to collaborate with other firms and with academia. The authors suggest a road map that leads to strong, long-lasting, productive, and effective collaboration ventures. This paper also marks the reasons that cause cleavage in these university-industry endeavors. It also highlights the challenges for University-Industry (U-I) R&D collaborations and devises a plan to overcome this paradigm of mistrust and conflict.

Muhammad Fiaz, Baseerat Rizran
Role Playing for Scholarly Articles

In attempting to read a scholarly article, learners (students) often struggle with problems of comprehension. When scholars write a paper discussing a new idea or method within a particular discipline, they are likely to assume that readers have a scholarly background enabling them to understand the paper’s content in detail. Such an assumption is not justified, and there is evidence that students learning to carry out research for the first time find that understanding scholarly papers is not always easy. Deciding which RE (Requirement Elicitation) technique and tool to apply can become complicated for educators constructing a learning approach in which groups of learners must read scholarly articles in a short time and summarise their content, while also devising solutions that enhance their creativity and innovation and allow them to explore how an idea in a paper could be integrated into an industry application. A case study in this paper introduces a university framework that can aid students in their decision-making on selecting a presentation technique; that is, choosing role-playing as the appropriate technique for an effective learning outcome.

Bee Bee Chua
Statistical Analysis and Prior Distributions of Significant Software Estimation Factors Based on ISBSG Release 10

Software estimation is an active research area, with researchers working on topics such as accuracy, new model development, and statistical analysis. Estimates are probabilistic values and can be represented with a degree of uncertainty. Prior distributions are one way to represent the historical and organizational data that researchers can use to conduct further estimations. In this paper we introduce the software estimation landscape and prior distributions of significant factors, determined from the ISBSG data set. These priors can be used for the development of estimation models, e.g. Bayesian networks. The paper makes contributions in a number of ways: it provides a brief overview of the quality of the data set, statistics of vital factors from the data set, and prior distributions of productivity by architecture, e.g. standalone, client-server, and mixed architectures.
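The idea of deriving a prior distribution for a factor such as productivity can be sketched as follows. The sample values and the choice of a lognormal model are illustrative assumptions for this sketch only; they are not figures or modelling choices taken from the ISBSG Release 10 data set or the paper.

```python
import math
import statistics

# Hypothetical productivity samples (e.g. hours per function point) --
# illustrative values, not drawn from the ISBSG data set.
productivity = [4.2, 6.8, 5.1, 9.3, 7.7, 3.9, 12.4, 5.6, 8.1, 6.2]

# Productivity data is typically right-skewed, so a lognormal prior is a
# common modelling choice: fit a normal distribution to the log-values.
logs = [math.log(x) for x in productivity]
mu = statistics.mean(logs)
sigma = statistics.stdev(logs)

def lognormal_pdf(x, mu, sigma):
    """Density of the fitted lognormal prior at x."""
    return math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

# The fitted (mu, sigma) pair can then seed, for example, a node in a
# Bayesian network estimation model.
print(f"fitted prior: mu={mu:.3f}, sigma={sigma:.3f}")
```

The same two-parameter summary could be computed per architecture group (standalone, client-server, mixed) to obtain one prior per category.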

Abou Bakar Nauman, Jahangir khan, Zubair A. Shaikh, Abdul Wahid Shaikh, Khisro khan
Virtual FDR Based Frequency Monitoring System for Wide-Area Power Protection

There have been recent research activities on GPS-based FNET to prevent wide-area blackouts by monitoring frequency deviation. This paper presents a virtual FDR based monitoring system for monitoring regional frequencies in a power grid model, as part of an advanced research project on implementing intelligent wide-area protective relaying in South Korea. The system was implemented by modeling an actual 345 kV transmission system using EMTP-RV and by measuring voltages and currents at 5 regions. The frequencies were estimated with a frequency estimation algorithm using gain compensation. The virtual FDR based monitoring system was implemented and simulated under various failure conditions.
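A monitor of this kind rests on estimating frequency from sampled voltage waveforms. The following sketch uses a generic phasor-rotation estimate on a synthetic signal; the sampling rate and test frequency are assumptions for illustration, and this is not the authors' gain-compensated algorithm or their EMTP-RV model.

```python
import math

FS = 1920.0   # sampling rate in Hz (32 samples per 60 Hz cycle) -- assumed
F0 = 60.0     # nominal grid frequency in Hz

def phasor_phase(samples, start, fs=FS, f0=F0):
    """Phase of the fundamental phasor over a one-nominal-cycle DFT window."""
    n = int(fs / f0)
    re = sum(samples[start + k] * math.cos(2 * math.pi * f0 * k / fs)
             for k in range(n))
    im = -sum(samples[start + k] * math.sin(2 * math.pi * f0 * k / fs)
              for k in range(n))
    return math.atan2(im, re)

def estimate_frequency(samples, fs=FS, f0=F0):
    """Estimate frequency from the phasor drift between two windows one
    nominal cycle apart: an off-nominal frequency makes the measured
    phasor rotate slowly, and the residual rotation is proportional to
    the frequency deviation."""
    n = int(fs / f0)
    dphi = phasor_phase(samples, n) - phasor_phase(samples, 0)
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return f0 * (1.0 + dphi / (2.0 * math.pi))

# Synthetic off-nominal test signal at 59.8 Hz
sig = [math.cos(2 * math.pi * 59.8 * k / FS) for k in range(128)]
print(f"estimated frequency: {estimate_frequency(sig):.2f} Hz")
```

A real FDR would additionally compensate for the amplitude and phase gain of the measurement chain (the "gain compensation" the abstract refers to) and average over many windows to suppress noise and spectral leakage.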

Kwang-Ho Seok, Junho Ko, Chul-Won Park, Yoon Sang Kim
Engaging and Effective Asynchronous Online Discussion Forums

Discussion is usually considered a powerful tool for the development of pedagogical skills such as critical thinking, collaboration, and reflection. In the last few years, asynchronous online discussion forums have become an integral part of teaching and learning in tertiary education. However, there are considerable challenges in designing discussion forums that can support desired learning outcomes. This study analysed the factors that affect the level of student participation in online discussion forums, with emphasis on some of the critical issues that should be taken into account when designing them. We show that the course instructor’s role and level of participation in the discussion forum particularly determine the overall level of discussion among the learning communities.

Jemal Abawajy, Tai-hoon Kim
Online Learning Environment: Taxonomy of Asynchronous Online Discussion Forums

Due to their perceived benefits, asynchronous discussion forums have become progressively popular in higher education. The ultimate goal of developing an asynchronous discussion forum is to create an online learning environment that achieves high levels of learning. This paper reviews the existing literature and develops a taxonomy of asynchronous discussion forums with the aim of increasing understanding and awareness of their various types. The taxonomy will help online course designers design more effective learning experiences for student success and satisfaction. It will also help researchers understand the features of the various asynchronous discussion forums.

Jemal Abawajy, Tai-hoon Kim
Erratum: University-Industry Ecosystem: Factors for Collaborative Environment

In the original version, the second author’s name is incorrect. Instead of “Baseerat Rizran” it should read “Baseerat Rizwan”.

Muhammad Fiaz, Baseerat Rizran
Backmatter
Metadata
Title
Software Engineering, Business Continuity, and Education
Edited by
Tai-hoon Kim
Hojjat Adeli
Haeng-kon Kim
Heau-jo Kang
Kyung Jung Kim
Akingbehin Kiumi
Byeong-Ho Kang
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-27207-3
Print ISBN
978-3-642-27206-6
DOI
https://doi.org/10.1007/978-3-642-27207-3
