
2018 | Book

Towards Extensible and Adaptable Methods in Computing

Edited by: Prof. Dr. Shampa Chakraverty, Dr. Anil Goel, Prof. Dr. Sanjay Misra

Publisher: Springer Singapore


About this book

This book addresses extensible and adaptable computing, a broad range of methods and techniques used to systematically tackle the future growth of systems and respond proactively and seamlessly to change. The book is divided into five main sections: Agile Software Development, Data Management, Machine Learning, Web Intelligence and Computing in Education. These sub-domains of computing work together in mutually complementary ways to build systems and applications that scale well, and which can successfully meet the demands of changing times and contexts. The topics under each track have been carefully selected to highlight certain qualitative aspects of applications and systems, such as scalability, flexibility, integration, efficiency and context awareness.

The first section (Agile Software Development) includes six contributions that address related issues, including risk management, test case prioritization and tools, open source software reliability and predicting the change proneness of software. The second section (Data Management) includes discussions on myriad issues, such as extending database caches using solid-state devices, efficient data transmission, healthcare applications and data security. In turn, the third section (Machine Learning) gathers papers that investigate ML algorithms and present their specific applications such as portfolio optimization, disruption classification and outlier detection. The fourth section (Web Intelligence) covers emerging applications such as metaphor detection, language identification and sentiment analysis, and brings to the fore web security issues such as fraud detection and trust/reputation systems. In closing, the fifth section (Computing in Education) focuses on various aspects of computer-aided pedagogical methods.

Table of Contents

Frontmatter

Agile Software Development

Frontmatter
Risk Assessment Framework: ADRIM Process Model for Global Software Development
Abstract
Information Technology (IT) companies have to sustain heavy competition while developing and delivering quality software and hardware products to the market. Global Software Development (GSD) processes span various geographical locations in the world. There are many phases in GSD, and each phase is associated with a unique risk parameter. It is therefore essential to identify and mitigate risks dynamically in the various phases of GSD projects. Many IT companies nowadays use the agile model for software development because it adapts regularly to changing circumstances and customer requirements. Accordingly, this paper proposes an agile-based, multi-agent approach to dynamically identify and mitigate the risks associated with each phase of GSD. A sample study illustrating the applicability of this framework in a software organization is also discussed.
Chamundeswari Arumugam, Sriraghav Kameswaran, Baskaran Kaliamourthy
An Extended Test Case Prioritization Technique Using Script and Linguistic Parameters in a Distributed Agile Environment
Abstract
Agile methodologies are widely used in the software industry, as agile delivers efficiency, accuracy and effectiveness in less time. Agile has been adopted across organizations whose team members come from diverse cultures with different working habits. The acceptance level of these diverse habits is largely governed by the client, the stakeholder who provides requirements to the organization. These requirements may arrive in different languages, as clients usually award work to organizations that deliver good-quality work at lower cost and in less time. Understanding requirements written in different languages is a tedious task for an organization. Moreover, requirements keep changing over time at the client's end. In this paper, a technique is proposed to help understand requirements better, and test cases are then prioritized using a noun-and-verb approach.
Anita, Naresh Chauhan
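The noun-and-verb prioritization idea described in this abstract can be pictured with a minimal sketch: extract nouns and verbs from a requirement and rank test case descriptions by how many of those terms they share. The requirement text, test cases, and scoring rule below are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch: rank test cases by overlap of nouns/verbs with a requirement.
# Requires: pip install nltk; nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

def noun_verb_terms(text):
    """Return the set of nouns and verbs (lowercased) found in `text`."""
    tokens = nltk.word_tokenize(text)
    tagged = nltk.pos_tag(tokens)
    return {word.lower() for word, tag in tagged if tag.startswith(("NN", "VB"))}

requirement = "The user logs in and transfers money to a saved account."  # hypothetical
test_cases = {                                                            # hypothetical
    "TC1": "Verify login with valid credentials",
    "TC2": "Verify money transfer to a saved account after login",
    "TC3": "Verify password reset email is sent",
}

req_terms = noun_verb_terms(requirement)
ranked = sorted(test_cases.items(),
                key=lambda kv: len(req_terms & noun_verb_terms(kv[1])),
                reverse=True)
for tc_id, description in ranked:
    print(tc_id, len(req_terms & noun_verb_terms(description)), description)
```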
AutoJet: Web Application Automation Tool
Abstract
The test automation tools available in the present market are limited in various features, e.g., reporting, debugging, logging, usability, and portability, resulting in a need to integrate these features to achieve automation objectives such as positive ROI, stability, and efficiency. In this paper, we propose an innovative tool that addresses many of the test automation challenges identified in worldwide surveys. To provide these challenging features in the Web automation testing domain, we have created a tool called AutoJet. It is a Web automation testing tool that aims to provide a great extent of usability to both automation and manual testers. Using AutoJet, even a manual tester can automate test scenarios smoothly and profitably. Our study shows that a manual tester can automate test scenarios effectively in a reduced time span, as the tool eliminates the effort testers spend understanding an existing test harness or creating a new one.
Sheetika Kapoor, Kalpna Sagar
Prioritization of User Story Acceptance Tests in Agile Software Development Using Meta-Heuristic Techniques and Comparative Analysis
Abstract
User stories that are the requirements engineering artifacts in agile software development must be accepted by the end user before being implemented. Acceptance testing is used to confirm the acceptance of user stories. User story acceptance tests are driven by user-defined acceptance criteria. The number of acceptance tests increases as the application size increases. One of the prominent reasons for adopting agile software development is quicker delivery of working software. In this paper, we attempt to prioritize acceptance tests in order to identify critical tests. Execution of critical acceptance tests is sufficient to satisfy the acceptance criteria for a user story and reduces the time to delivery of software. Prioritization of acceptance tests is realized by application of meta-heuristic techniques, i.e., genetic algorithm (GA), cuckoo search algorithm, and micro-GA algorithm. The information flow (IF) model is used as a basis of fitness function to ensure maximum coverage of user acceptance criteria. We demonstrate the applicability and effectiveness of the proposed approach with the help of a realistic example. A comparative analysis of the application of meta-heuristic techniques is performed to choose the best one.
Ritu Sibal, Preeti Kaur, Chayanika Sharma
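As a rough illustration of the meta-heuristic prioritization this chapter describes, the sketch below evolves an ordering of acceptance tests with a tiny genetic algorithm, using a coverage-based fitness as a stand-in for the paper's information-flow model. The acceptance-criteria coverage matrix and GA settings are assumptions for demonstration only.

```python
# Minimal GA sketch: evolve an ordering of acceptance tests that covers
# acceptance criteria as early as possible (stand-in for the IF-based fitness).
import random

# Hypothetical mapping: test id -> set of acceptance criteria it exercises.
COVERAGE = {0: {"AC1", "AC2"}, 1: {"AC2"}, 2: {"AC3", "AC4"}, 3: {"AC1", "AC4"}, 4: {"AC5"}}

def fitness(order):
    """Reward orderings whose early tests cover many new criteria."""
    covered, score = set(), 0.0
    for position, test in enumerate(order):
        new = COVERAGE[test] - covered
        score += len(new) / (position + 1)   # earlier new coverage weighs more
        covered |= COVERAGE[test]
    return score

def crossover(a, b):
    """Order crossover: keep a prefix of `a`, fill the rest in `b`'s order."""
    cut = random.randint(1, len(a) - 1)
    head = a[:cut]
    return head + [t for t in b if t not in head]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

random.seed(0)
population = [random.sample(list(COVERAGE), len(COVERAGE)) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [crossover(random.choice(parents), random.choice(parents)) for _ in range(10)]
    for child in children:
        if random.random() < 0.2:
            mutate(child)
    population = parents + children

best = max(population, key=fitness)
print("Best ordering:", best, "fitness:", round(fitness(best), 3))
```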
Software Reliability Assessment Using Deep Learning Technique
Abstract
Some of the quality attributes of successful open-source software (OSS) are affordability, availability of source code, redistributability, and modifiability. The quality of software can be further improved by users or associated developers who constantly monitor its reliability aspects. Since multiple users can modify the software, there is a possible threat that it may be exposed to various security problems, which might degrade its reliability. Bug tracking systems are often used to monitor the software faults detected in OSS projects. Various authors have studied this area by applying different techniques so that the reliability of OSS projects can be improved. In this paper, an efficient approach based on a deep learning technique is proposed to improve the reliability of open-source software. An extensive numerical illustration is also presented for bug data recorded in a bug tracking system. The effectiveness of the proposed deep learning-based technique for estimating the level of faults associated with the systems has been verified by comparing it with similar approaches available in the literature.
Suyash Shukla, Ranjan Kumar Behera, Sanjay Misra, Santanu Kumar Rath
Empirical Validation of OO Metrics and Machine Learning Algorithms for Software Change Proneness Prediction
Abstract
The two main causes of changes in software systems are defects and enhancement. In this paper, we aim to study the relationship between object-oriented metrics and change proneness in software systems. Statistical and machine learning methods are two different practices for software quality prediction. We evaluate and compare the performance of these practices on five open-source software systems written in Java. The performance is evaluated using receiver operating characteristic analysis and the Friedman test. Experimental results indicate that some machine learning methods are better than regression for software change proneness prediction. Hence, these models can be used to decrease the likelihood of defect occurrence and can be useful in the maintenance of software systems.
Anushree Agrawal, Rakesh Kumar Singh
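To make the kind of comparison this chapter reports concrete, here is a minimal sketch that compares regression against a machine learning classifier on ROC AUC using scikit-learn; the synthetic stand-in for OO-metric data and the chosen models are assumptions, not the authors' setup.

```python
# Minimal sketch: compare logistic regression with an ML classifier on
# (synthetic) OO-metric features using ROC AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for OO metrics (e.g., CBO, WMC, DIT) and a change-prone label.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5, random_state=1)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=1),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {auc.mean():.3f}")
```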

Data Management

Frontmatter
Extending Database Cache Using SSDs
Abstract
Advancements in speed, volume, and reliability have made the solid-state device (SSD) a premium enterprise resource. In terms of both speed and cost, SSDs sit between main memory and hard disk drives (HDDs): they offer lower latencies than HDDs but at a higher cost. In this paper, we demonstrate how enterprise-class data management solutions can effectively use SSDs to manage the competing goals of higher throughput and low resource costs. We demonstrate that SSDs can be used as a secondary caching layer between the main memory-based buffer cache and the HDD-based page store to meet these goals. Intelligent metadata management ensures that the most important or 'hot' data resides in RAM, 'warm' data resides on SSDs, and 'cold' data resides on HDDs. Hotness is measured dynamically based on access patterns; hence, it needs no user intervention. SAP's relational database server, SAP Adaptive Server Enterprise (SAP ASE), implements these ideas in its 'non-volatile cache' feature.
Prateek Agarwal, Vaibhav Nalawade
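A toy sketch of the hot/warm/cold idea described above is given below: a RAM tier holds the most recently used pages, RAM evictions spill into an SSD tier, and everything else stays on the HDD-backed store. This is an illustrative model only and does not reflect SAP ASE's actual implementation.

```python
# Toy model of a two-tier cache: RAM (hot) in front of SSD (warm), HDD as backing store.
from collections import OrderedDict

class TieredCache:
    def __init__(self, ram_pages, ssd_pages):
        self.ram = OrderedDict()          # page_id -> data, kept in LRU order
        self.ssd = OrderedDict()
        self.ram_pages, self.ssd_pages = ram_pages, ssd_pages

    def _read_from_hdd(self, page_id):
        return f"data-{page_id}"          # stand-in for a real disk read

    def get(self, page_id):
        if page_id in self.ram:           # hot hit
            self.ram.move_to_end(page_id)
            return self.ram[page_id]
        if page_id in self.ssd:           # warm hit: promote to RAM
            data = self.ssd.pop(page_id)
        else:                             # cold miss: fetch from HDD
            data = self._read_from_hdd(page_id)
        self.ram[page_id] = data
        if len(self.ram) > self.ram_pages:
            evicted_id, evicted = self.ram.popitem(last=False)
            self.ssd[evicted_id] = evicted        # demote to the SSD tier
            if len(self.ssd) > self.ssd_pages:
                self.ssd.popitem(last=False)      # fall back to HDD only
        return data

cache = TieredCache(ram_pages=2, ssd_pages=4)
for page in [1, 2, 3, 1, 4, 2]:
    cache.get(page)
print("RAM:", list(cache.ram), "SSD:", list(cache.ssd))
```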
Cloud-Based Healthcare Monitoring System Using Storm and Kafka
Abstract
With the significant development of technology in the field of computer science, the concept of telemedicine is gaining popularity. Telemedicine enables the healthcare sector and its stakeholders, such as doctors and nurses, to monitor patients. It enables high-quality treatment to be provided remotely, irrespective of geographical constraints. However, the key requirement for telemedicine is a well-equipped infrastructure for remote monitoring and analysis. Cloud computing, with its key features of scalability, dynamic provisioning, and service models, promises to be the infrastructure for this requirement. In this paper, a cloud-based healthcare automation system is proposed that collects the required health data for analysis.
N. Sudhakar Yadav, B. Eswara Reddy, K. G. Srinivasa
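The ingestion side of such a pipeline might look like the hedged sketch below, which publishes vital-sign readings to a Kafka topic using the kafka-python client; the topic name, broker address, and message schema are assumptions, and the Storm topology that would consume them is not shown.

```python
# Sketch of the ingestion side: publish (hypothetical) patient vital-sign readings to Kafka.
# Requires: pip install kafka-python, and a broker reachable at the given address.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                     # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_reading(patient_id, heart_rate, spo2):
    """Send one vital-sign reading to the assumed 'vitals' topic."""
    reading = {"patient": patient_id, "hr": heart_rate, "spo2": spo2, "ts": time.time()}
    producer.send("vitals", value=reading)

publish_reading("P-001", heart_rate=82, spo2=97)
producer.flush()   # ensure buffered messages are actually delivered
```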
Honeynet Data Analysis and Distributed SSH Brute-Force Attacks
Abstract
Due to the increase in the number of network attacks, it has become essential to gain deeper insight into the malicious activities carried out by attackers. In this paper, the authors analyse malicious network traffic captured using a honeynet and provide a deeper understanding of brute-force attacks. First, the overall attack behaviour is characterized. Since the most attacked service was SSH, it was scrutinised further to learn more about distributed brute-force attacks. It is highly unlikely that the distributed brute-force attacks originate from a single botnet. The authors propose a methodology to detect individual botnets from a set of password-guessing attacks.
Gokul Kannan Sadasivam, Chittaranjan Hota, Bhojan Anand
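One simple way to look for separate botnets behind distributed password guessing, in the spirit of the chapter, is to group attacking IPs by the set of usernames they try and cluster sources whose dictionaries overlap strongly. The log records below are hypothetical and the similarity threshold is an arbitrary choice, not the authors' methodology.

```python
# Sketch: group attacking IPs whose guessed-username sets overlap strongly,
# as a crude proxy for "same dictionary, likely same botnet".
from collections import defaultdict

# Hypothetical failed-login records: (source_ip, username_tried)
attempts = [
    ("10.0.0.1", "root"), ("10.0.0.1", "admin"), ("10.0.0.2", "root"),
    ("10.0.0.2", "admin"), ("10.0.0.3", "oracle"), ("10.0.0.3", "postgres"),
]

usernames_by_ip = defaultdict(set)
for ip, user in attempts:
    usernames_by_ip[ip].add(user)

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Greedy clustering: put an IP into the first cluster whose dictionary it matches.
clusters = []   # each cluster: {"ips": [...], "users": set(...)}
for ip, users in usernames_by_ip.items():
    for cluster in clusters:
        if jaccard(users, cluster["users"]) >= 0.5:   # arbitrary threshold
            cluster["ips"].append(ip)
            cluster["users"] |= users
            break
    else:
        clusters.append({"ips": [ip], "users": set(users)})

for i, cluster in enumerate(clusters, 1):
    print(f"cluster {i}: ips={cluster['ips']} usernames={sorted(cluster['users'])}")
```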
Efficient Data Transmission in WSN: Techniques and Future Challenges
Abstract
Wireless sensor networks comprise sensor nodes which collect data by sensing events and forward the data to a sink or base station. Transmitting data to the base station is an energy-consuming job and needs optimal routing to ensure low data loss and delay. The lifetime and performance of the network can be enhanced by balancing the load and energy consumption during routing. A change in the deployment environment also calls for a new routing paradigm to attain the desired results. Various routing techniques have been proposed for terrestrial and underwater wireless sensor networks. In this paper, we comparatively analyze the existing routing techniques and identify their present drawbacks and future challenges. This study will be helpful in designing new algorithms which alleviate the existing drawbacks and result in better system performance.
Nishi Gupta, Shikha Gupta, Satbir Jain
A Study of Epidemic Spreading and Rumor Spreading over Complex Networks
Abstract
Simulating epidemic spreading and rumor spreading on complex networks reveals the relations between the rate of spreading, the rate of recovery, and the impact of the epidemic or rumor, which can be quite useful when tackling them. This paper gives a technical and graphical overview of complex networks, epidemic spreading, rumor spreading, a comparison between scale-free and random networks, and various related plots.
Prem Kumar, Puneet Verma, Anurag Singh
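A minimal discrete-time SIR simulation on a scale-free network, of the kind such studies plot, is sketched below with networkx; the network size, infection rate, and recovery rate are arbitrary illustration values rather than parameters taken from the chapter.

```python
# Minimal discrete-time SIR epidemic simulation on a scale-free (Barabasi-Albert) network.
import random
import networkx as nx

random.seed(42)
G = nx.barabasi_albert_graph(n=500, m=3, seed=42)
beta, gamma = 0.05, 0.1            # infection and recovery rates (illustrative)

state = {node: "S" for node in G}
for node in random.sample(list(G), 5):   # seed a few infected nodes
    state[node] = "I"

history = []
for _ in range(60):
    new_state = dict(state)
    for node in G:
        if state[node] == "I":
            if random.random() < gamma:
                new_state[node] = "R"
            for neighbor in G.neighbors(node):
                if state[neighbor] == "S" and random.random() < beta:
                    new_state[neighbor] = "I"
    state = new_state
    history.append(sum(1 for s in state.values() if s == "I"))

print("Peak infected:", max(history), "at step", history.index(max(history)))
```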
Medical Alert System Using Social Data
Abstract
Medical data research and analysis is a heuristic-based system that analyzes disease in metropolitan cities of India, such as Delhi, using input from cyber-social systems. It provides information regarding the level of infections in different localities and the correlations among events which cause the disease. The inputs from physical cyber-social systems are analyzed according to the different locations of the city listed in a separate record. From the data collected, crucial information pertaining to infection patterns was extracted. Using the results obtained, we analyze the data from social media and generate a heuristic-based opinion.
Kumar Abhishek, M. P. Singh, Prakhar Shrivastav, Suraj Thakre

Machine Learning

Frontmatter
A Novel Framework for Portfolio Optimization Based on Modified Simulated Annealing Algorithm Using ANN, RBFN, and ABC Algorithms
Abstract
The objective of the portfolio optimization problem is to search for an optimal way of investing a stipulated amount across a given set of assets or securities. This paper describes a new framework built on two nascent basal parameters, which are derived from the return values obtained from the basic mean-variance model, and another significant parameter, conditional value at risk. The framework finds an optimal solution for a cost expressed through quadratic equations in these basal parameters. An approach based on modified simulated annealing (SA) is offered within the framework; it uses a significant parameter, viz. the modified step, to compute optimal values of the cost. The value of the modified step is computed using another parameter, the radius. Two approaches for computing the radius are provided, based either on the ABC algorithm or on an RBFN. The ABC algorithm uses two significant parameters that bound the maximum and minimum values for computing the radius. The RBFN uses three different functions, and the radius is computed from the maximum and minimum values of points in these functions. Lastly, the step value is modified by multiplying it by a factor computed from an ANN structure. Finally, the modified SA algorithm is applied so that an optimal value of the cost, as well as optimal values of the basal parameters, may be obtained using this modified step. The intention is to minimize the overall cost computed from the quadratic equations based on these basal key parameters. A comparison of the two approaches, based on either modified ABC or modified RBFN, is presented in the paper. The results obtained validate the two schemes adopted for computing optimal solutions of the basal parameters used in portfolio optimization.
Chanchal Kumar, M. N. Doja, Mirza Allim Baig
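A stripped-down simulated annealing loop for a mean-variance portfolio, without the chapter's ANN/RBFN/ABC machinery for adapting the step size, is sketched below; the asset returns, covariance, risk-aversion weight, and cooling schedule are all illustrative assumptions.

```python
# Bare-bones simulated annealing for mean-variance portfolio weights.
# (The chapter's modified step-size computation via ANN/RBFN/ABC is not modeled.)
import math
import random
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.08, 0.12, 0.10, 0.07])                 # expected returns (illustrative)
cov = np.diag([0.04, 0.09, 0.06, 0.03])                 # covariance matrix (illustrative)
risk_aversion = 3.0

def cost(w):
    """Negative utility: risk penalty minus expected return (to be minimized)."""
    return risk_aversion * w @ cov @ w - mu @ w

def neighbor(w, step=0.05):
    """Perturb weights, then project back onto the simplex (non-negative, sum to 1)."""
    w_new = np.clip(w + rng.normal(0, step, size=w.size), 0, None)
    return w_new / w_new.sum()

w = np.full(mu.size, 1.0 / mu.size)
temperature = 1.0
for _ in range(5000):
    candidate = neighbor(w)
    delta = cost(candidate) - cost(w)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        w = candidate
    temperature *= 0.999                                 # geometric cooling

print("weights:", np.round(w, 3), "cost:", round(cost(w), 4))
```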
A Proposed Method for Disruption Classification in Tokamak Using Convolutional Neural Network
Abstract
Thermonuclear fusion is one of the alternative sources of energy. Fusion reactors use a device called a tokamak. Classification of favorable and non-favorable discharges in a tokamak is very important from the plasma operation point of view. Non-favorable discharges are mainly disruptive in nature; they cause abrupt losses of confinement and affect the integrity of the tokamak. During disruptions, the plasma energy is transferred to the surrounding structures of the vacuum vessel, causing massive heat loads and serious damage. The objective of the proposed work is to classify such plasma discharges among the other, favorable discharges in a tokamak and to build suitable classifiers. The convolutional neural network can serve as one of the most viable and responsive tools for classifying disruptions. In this paper, along with a review of the various existing approaches, a proposed CNN-based disruption classification method specifically for the Aditya tokamak is presented.
Priyanka Sharma, Swati Jain, Vaibhav Jain, Sutapa Ranjan, R. Manchanda, Daniel Raju, J. Ghosh, R. L. Tanna
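The kind of classifier the chapter proposes could look roughly like the 1-D CNN sketch below, operating on fixed-length windows of diagnostic signals; the input shape, layer sizes, and synthetic data are assumptions, not the Aditya configuration.

```python
# Illustrative 1-D CNN that labels fixed-length signal windows as disruptive or not.
import numpy as np
from tensorflow.keras import layers, models

window_length, n_channels = 256, 4          # assumed signal window shape

model = models.Sequential([
    layers.Input(shape=(window_length, n_channels)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of a disruptive discharge
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic placeholder data: real training would use labeled tokamak discharges.
X = np.random.randn(128, window_length, n_channels).astype("float32")
y = np.random.randint(0, 2, size=128)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```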
Comparative Evaluation of Machine Learning Algorithms for Network Intrusion Detection Using Weka
Abstract
For the past few years, computer intrusion attacks have become more sophisticated, and the volume, velocity, and variance of traffic data have greatly increased. Because conventional methods and tools have become ineffective at detecting intrusion attacks, most intrusion detection systems now embrace machine learning tools and algorithms, owing to their ability to process data of large volume, high velocity, and very high variance. This work reviews and analyzes the performance of three of the most commonly used machine learning algorithms for network intrusion detection. The performance of the Naïve Bayes, decision tree, and random forest algorithms was evaluated by training and testing them on the KDD CUP 1999 dataset from DARPA using Weka, a big data and machine learning tool. These classification algorithms are evaluated based on their precision, sensitivity, and accuracy.
Nureni Ayofe Azeez, Obinna Justin Asuzu, Sanjay Misra, Adewole Adewumi, Ravin Ahuja, Rytis Maskeliunas
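The chapter performs this comparison in Weka; the sketch below reproduces the same kind of experiment in scikit-learn on a synthetic stand-in for KDD CUP 1999, reporting precision, recall (sensitivity), and accuracy. The dataset and preprocessing here are assumptions, only the evaluation pattern is illustrated.

```python
# Analogous comparison in scikit-learn (the chapter itself uses Weka on KDD CUP 1999).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in for preprocessed intrusion features (normal vs. attack).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

classifiers = {
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=7),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=7),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name}: precision={precision_score(y_test, pred):.3f} "
          f"recall={recall_score(y_test, pred):.3f} "
          f"accuracy={accuracy_score(y_test, pred):.3f}")
```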
Super-Intelligent Machine Operations in Twenty-First-Century Manufacturing Industries: A Boost or Doom to Political and Human Development?
Abstract
Recent studies reveal that there are growing controversies over the real impact and benefits of advances in artificial intelligence (AI) for today’s manufacturing industries (MI). Scholars who argue that AI provides profound advancements in technology beneficial for improving the quality of human life seem to outweigh those holding contrary opinions about the gains of AI. The Marxian alienation theory, which essentially argues that AI technology exerts traumatic extinction strain on the psyche of individuals, was adopted for this study. The ex post facto research methodology and Derrida’s reconstructive and deconstructive analytical method were adopted for interrogating the meaning of concepts and arguments offered by contending researchers. Justifications for advancing contrary views about the benefits of AI technology for MI were adduced. Scientists were enjoined to identify ways of aligning the goals of super-intelligent machines with those of humanity since the study affirmed that the extinction threat is real.
I. A. P. Wogu, S. Misra, P. A. Assibong, S. O. Ogiri, R. Damasevicius, R. Maskeliunas
Exploring Ensembles for Unsupervised Outlier Detection: An Empirical Analysis
Abstract
Ensemble learning for unsupervised outlier detection is a less explored research field. The challenge of recognizing atypical and unexpected observations in data can be handled better when performed by a variety of detection methods. Presumably, each of them introduces outliers of distinctive characteristics, which are essential for a good outlier detection system. Certainly, outlier ensembles are also associated with their own set of challenges, particularly due to their unsupervised nature. The related fundamentals and research questions have already been discussed by Aggarwal (ACM SIGKDD Explorations Newsletter 14(2):49–58, 2013) and Zimek (ACM SIGKDD Explorations Newsletter 15(1):11–22, 2014). Complementary to their points, here we empirically analyze the existing issues and open questions for unsupervised outlier detection ensembles. In addition, our analysis introduces some new issues to be taken into consideration for more effective ensemble design. Further, we highlight the usefulness of ensemble learning for unsupervised outlier detection by empirical means.
Akanksha Mukhriya, Rajeev Kumar
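A common baseline for such ensembles is to normalize the scores of several unsupervised detectors and average them; the sketch below does this with two scikit-learn detectors on synthetic data. The choice of detectors, normalization, and combination rule is an assumption for illustration, not the chapter's analysis.

```python
# Simple unsupervised outlier ensemble: average the min-max-normalized scores
# of two different detectors (higher combined score = more outlying).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),         # inliers
               rng.uniform(-6, 6, size=(10, 2))])       # scattered outliers

def minmax(scores):
    return (scores - scores.min()) / (scores.max() - scores.min())

iso = IsolationForest(random_state=3).fit(X)
iso_scores = -iso.score_samples(X)                      # higher = more anomalous

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
lof_scores = -lof.negative_outlier_factor_              # higher = more anomalous

combined = (minmax(iso_scores) + minmax(lof_scores)) / 2
print("Top 5 outlier indices:", np.argsort(combined)[-5:][::-1])
```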

Web Intelligence

Frontmatter
Effect of Classifiers on Type-III Metaphor Detection
Abstract
Metaphors are a fascinating aspect of human language which facilitates optimal communication. They enable us to express an abstract idea or event with the help of a well-understood concept from another domain. Recently, there has been a surge of interest among researchers from the cognitive domain as well as linguistics in effectively processing different types of metaphorical utterances in text. In this paper, we reflect upon the problem of Type-III metaphor detection, which occurs in the form of <adjective, noun> pairs in natural text. Prior works have predominantly used word embeddings with different algorithms and datasets to detect these metaphorical instances. However, there is a need to analyze whether there is any significant advantage of using techniques such as neural networks over traditional models such as SVM. In this paper, we perform a qualitative analysis to understand the efficacy of different algorithms in detecting Type-III metaphors. We perform experiments on two publicly available datasets. Our results indicate that, given a large training dataset, models trained using different algorithms provide comparable performance.
Sunny Rai, Shampa Chakraverty, Ayush Garg
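The comparison the chapter performs can be mimicked on a small scale as below: represent adjective-noun pairs as embedding-style feature vectors and cross-validate an SVM against a small neural network. The random feature vectors stand in for word embeddings and the labels are synthetic; only the evaluation pattern is illustrated.

```python
# Sketch of the evaluation pattern: SVM vs. a small neural net on
# (stand-in) embedding features of adjective-noun pairs.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs, dim = 400, 100                      # pretend 100-d pair embeddings
X = rng.normal(size=(n_pairs, dim))          # stand-in for concatenated word vectors
y = rng.integers(0, 2, size=n_pairs)         # 1 = metaphorical, 0 = literal (synthetic)

for name, model in {
    "svm_rbf": SVC(kernel="rbf"),
    "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```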
Multi-class Classification of Sentiments in Hindi Sentences Based on Intensities
Abstract
Sentiment analysis is a field of Natural Language Processing and Information Retrieval. Generally, text is classified as neutral, negative or positive. In this work, however, Hindi sentences are classified into 5 and 7 classes based on their intensities (e.g. weakly positive, strongly positive, weakly negative and strongly negative). We take an NLP approach to classify Hindi sentences taken from the tagged movie and tourism corpora created by IIT Bombay. Both language-independent and language-dependent features are used for classification. A new term weighting scheme is proposed in this work. The features used are unigrams and bigrams. A senti-lexicon, Hindi-SentiWordNet, has also been used. For implementation, a hybrid fuzzy neural network (FNN) method is used and its results are compared with Naïve Bayes, SVM and MaxEnt. This approach has given promising results.
Kanika Garg, D. K. Lobiyal
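A plain baseline for this kind of multi-class intensity classification, leaving out the chapter's proposed term weighting, Hindi-SentiWordNet features, and fuzzy neural network, is sketched below; the toy sentences and labels are placeholders for the Hindi corpora.

```python
# Baseline sketch: unigram+bigram TF-IDF features with a linear classifier
# for multi-class sentiment intensity (the chapter's FNN and custom weighting are omitted).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder sentences and 5-class intensity labels (real data: tagged Hindi corpora).
sentences = ["very good movie", "good movie", "average movie", "bad movie", "very bad movie"] * 10
labels = ["strong_pos", "weak_pos", "neutral", "weak_neg", "strong_neg"] * 10

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(sentences, labels)
print(model.predict(["good average movie", "very bad"]))
```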
Language Identification for Hindi Language Transliterated Text in Roman Script Using Generative Adversarial Networks
Abstract
This work aims at a novel approach to identify text-based content written in Roman script which conveys meaning in the Hindi language. The research proposes a methodology to identify the language based on the semantic meaning of the text. The solution extracts features which are fed to an artificial neural network (ANN). The final output of the ANN is multiplied with the feature vector and then fed through an autoencoder and a generative adversarial network (GAN), which trains the model in a semi-supervised manner. The feature extraction defines a feature vector, and the ANN model then estimates the probability of the language being classified correctly. The dataset was curated using open data from the Web and common chat applications. Parts of that dataset were used to form the training and test data, as well as the data for a comparative study for evaluation purposes.
Deepak Kumar Sharma, Anurag Singh, Abhishek Saroha
An Improved Similarity Measure to Alleviate Sparsity Problem in Context-Aware Recommender Systems
Abstract
Context-aware recommender systems (CARS) incorporate contextual information while making recommendations, thereby enhancing accuracy and satisfaction. Similarity-based collaborative filtering is a very satisfactory and popular approach in this area. However, the data sparsity problem in CARS becomes more severe when preferences are diluted with context factors in the user-item preference matrix. Moreover, most research has focused on traditional similarity measures for computing user/item similarity, which are not suitable for sparse data and do not utilize the contextual conditions of users. Traditional similarity measures suffer from the co-rated item problem and do not consider global preferences. Therefore, this paper presents a new similarity measure, and a variant of it, suitable for CARS. The proposed measure utilizes contextual conditions, global preferences of user behavior, and the proportion of common ratings. Subsequently, we apply them in similarity-based algorithms where each component is contextually weighted. The proposed algorithms are also analyzed for groups of users. Recommendation results on two global context-aware datasets show that the proposed similarity measure-based algorithms outperform existing approaches.
Veer Sain Dixit, Parul Jain
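To make the general idea concrete, the sketch below computes a user-user similarity that only uses ratings given under matching contextual conditions and scales the result by the proportion of co-rated items; the rating data, context tags, and exact weighting are illustrative assumptions rather than the paper's measure.

```python
# Illustrative context-aware similarity: compare users only on items rated
# under the same context, and damp the score by the share of co-rated items.
import math

# ratings[user] = {(item, context): rating}   -- toy data
ratings = {
    "u1": {("i1", "weekend"): 5, ("i2", "weekend"): 3, ("i3", "weekday"): 4},
    "u2": {("i1", "weekend"): 4, ("i2", "weekend"): 2, ("i4", "weekday"): 5},
}

def context_similarity(a, b):
    common = set(ratings[a]) & set(ratings[b])      # same item AND same context
    if not common:
        return 0.0
    ra = [ratings[a][key] for key in common]
    rb = [ratings[b][key] for key in common]
    dot = sum(x * y for x, y in zip(ra, rb))
    norm = math.sqrt(sum(x * x for x in ra)) * math.sqrt(sum(y * y for y in rb))
    cosine = dot / norm
    # Damp by the proportion of co-rated (item, context) pairs to penalize sparse overlap.
    proportion = len(common) / min(len(ratings[a]), len(ratings[b]))
    return cosine * proportion

print(round(context_similarity("u1", "u2"), 3))
```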
Trust and Reputation-Based Model to Prevent Denial-of-Service Attacks in Mobile Agent System
Abstract
In this paper, a trust and reputation-based model is proposed that a mobile agent system can implement to secure both the agent and the platform against malicious agents and platforms. This model is unique in protecting against both agent-to-platform and platform-to-agent denial-of-service attacks. The reputation of any platform can be calculated by taking the average of the trust values of all its mobile agents. Analysis in JADE shows results that demonstrate the uniqueness of this model in comparison with other models.
Praveen Mittal, Manas Kumar Mishra
Fraud Detection in Online Transactions Using Supervised Learning Techniques
Abstract
The mounting dependence on computers and the Internet has made online fraud an increasing concern for both users and law enforcement agencies. People and organizations can often be seen getting caught in fraud, resulting in loss of money, property, legal rights, and reputation. Detection of fraud is crucial as it deals with protecting oneself from getting duped. The work presented in this paper provides an empirical study and analysis of supervised learning techniques, i.e., logistic regression, nearest neighbors, linear and RBF SVM, decision trees, random forest and naïve Bayes, on a benchmark credit card transaction dataset. The performance results have been evaluated and compared to identify the best predictive technique. The techniques are used to detect whether a given transaction is fraudulent or not.
Akshi Kumar, Garima Gupta
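A condensed version of the kind of study the chapter reports is sketched below: several scikit-learn classifiers trained on a synthetic, imbalanced stand-in for the credit card dataset and compared on precision and recall for the fraud class; the data and the subset of models shown are assumptions.

```python
# Sketch: compare supervised classifiers on an imbalanced, synthetic stand-in
# for a credit card fraud dataset (fraud = positive class).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=5000, n_features=15, weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "k_nearest_neighbors": KNeighborsClassifier(n_neighbors=5),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name}: precision={precision_score(y_test, pred, zero_division=0):.3f} "
          f"recall={recall_score(y_test, pred):.3f}")
```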

Computing in Education

Frontmatter
Real-Time Printed Text Reader for Visually Impaired
Abstract
The present paper aims at providing a solution that converts text from any written material into digital format in real time by first detecting the printed text through a text recognition API using the smartphone camera. Selection and aggregation of the detected text are done based on the proposed algorithm. Finally, the text is passed to a speech engine which converts it into speech, thereby allowing people who have difficulty reading, due to partial or full loss of vision, to access written material. One of the main applications of this tool is in reading books, helping members of the visually impaired community increase their knowledge and support themselves in academics.
Ashutosh Dadhich, Kamlesh Dutta
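A desktop-side approximation of that pipeline (camera frame, OCR, then speech) is sketched below with OpenCV, pytesseract, and pyttsx3; the chapter's actual solution uses a smartphone text recognition API, so this is only an analogous illustration and assumes Tesseract is installed locally.

```python
# Rough desktop analogue of the pipeline: grab a camera frame, OCR it, speak the text.
# Requires: pip install opencv-python pytesseract pyttsx3, plus a local Tesseract install.
# (The chapter's mobile solution uses a smartphone text recognition API instead.)
import cv2
import pytesseract
import pyttsx3

camera = cv2.VideoCapture(0)          # default camera
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # OCR tends to work better on grayscale
    text = pytesseract.image_to_string(gray).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)              # queue the recognized text
        engine.runAndWait()           # speak it
    else:
        print("No text detected in the frame.")
else:
    print("Could not read a frame from the camera.")
```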
Teaching Algorithms Using an Android Application
Abstract
In the twenty-first century, education is at the centre of living, unlike other basic needs. The overall literacy rate of our country is 74.04%, a rise of 9.21% compared to 2001, and the present youth literacy rate is 90.2%. Android applications are playing a very important role in framing various learning tools for undergraduate courses. The application developed here, exclusively for Android, covers searching and sorting techniques. Its contents can be used by computer students in high school and in undergraduate courses such as Bachelor of Computer Applications, Bachelor of Engineering and Bachelor of Technology. The paper also surveys various case studies of applications developed to enhance learning among students at the university. Forty-three responses have been recorded, taking various aspects into view. The paper also enables readers to understand the change in the trend of education over the past decade.
Dipika Jain, Pawan Kumar
Keyword Extraction Using Graph Centrality and WordNet
Abstract
Keywords are a shortened, summarized version of a document. While a lot of research has been done on keyword extraction, very few approaches analyze the structure of a semantic network to identify the most important words in a document. In this research, we present one such method, which uses WordNet as the knowledge base to exploit the semantic relatedness of terms and hence determine keywords. It is based upon graph centrality measures, which help to identify the central nodes, i.e., the keywords, of the document.
Chhavi Sharma, Minni Jain, Ayush Aggarwal
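The approach can be sketched roughly as follows: build a graph whose nodes are candidate words, connect words whose WordNet senses are semantically close, and take the most central nodes as keywords. The candidate words, similarity threshold, and choice of centrality below are illustrative assumptions, not the authors' exact method.

```python
# Sketch: connect words whose WordNet senses are similar, then pick the most
# central nodes as keywords.
# Requires: pip install nltk networkx; nltk.download('wordnet')
import networkx as nx
from nltk.corpus import wordnet as wn

candidates = ["network", "graph", "keyword", "document", "banana"]   # hypothetical candidates

def similarity(word_a, word_b):
    """Max path similarity over the first few noun senses of both words (0 if none)."""
    best = 0.0
    for syn_a in wn.synsets(word_a, pos=wn.NOUN)[:3]:
        for syn_b in wn.synsets(word_b, pos=wn.NOUN)[:3]:
            score = syn_a.path_similarity(syn_b) or 0.0
            best = max(best, score)
    return best

G = nx.Graph()
G.add_nodes_from(candidates)
for i, a in enumerate(candidates):
    for b in candidates[i + 1:]:
        score = similarity(a, b)
        if score > 0.1:                      # arbitrary relatedness threshold
            G.add_edge(a, b, weight=score)

centrality = nx.degree_centrality(G)
keywords = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("Keywords:", keywords)
```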
Hybrid Mobile Learning Architecture for Higher Education
Abstract
Mobile phones are primarily designed for enabling communication over wireless network infrastructure, such as cellular networks, through audio calling. Nowadays, they are getting smarter and becoming centers of entertainment by running different apps. As these devices accompany the day-to-day activities of the public, their usage can be further exploited in the education industry. In this study, a hybridized architecture is proposed for higher education exploiting the mobile learning scenario. The architecture combines the strengths of both native and Web apps in order to deliver mobile learning content on time through push notifications. Mobile apps developed in the hybrid architecture are installable and can pull content from the Internet independently of browsers. After designing the whole system's physical architecture, a prototype implementation was made following an agile process model. Finally, the prototype was evaluated and validated to perform well for the core functionalities stipulated as usage scenarios.
Asrat Mulatu, Addisu Anbessa, Sanjay Misra, Adewole Adewumi, Robertas Damaševičius, Ravin Ahuja
Using Collaborative Robotics as a Way to Engage Students
Abstract
Science, technology, engineering, and mathematics (STEM) fields are core technological underpinnings of an advanced society, and they are also related to the economic competitiveness of nations. Many countries suffer from low achievement and low interest among learners in STEM subjects compared to other subjects. In this paper, we argue that robots can be used not only for teaching STEM subjects to schoolchildren and university students, but also in the social and humanistic sciences, to increase engagement in technology and facilitate the acquisition of transdisciplinary knowledge. As a case study, we present an approach to educational robotics adopted at Kaunas University of Technology, Lithuania. The approach fosters student creativity and improves their teamwork abilities, as well as their knowledge of robotics and robot programming.
Lina Narbutaitė, Robertas Damaševičius, Egidijus Kazanavičius, Sanjay Misra
Assessing Scratch Programmers’ Development of Computational Thinking with Transaction-Level Data
Abstract
In the learning analytics research community, transaction-level data is being increasingly utilized to assess learner abilities on tasks performed within computer-based environments. This fine-grained data reflects step-by-step decisions made by learners and can be used to detect misconceptions or gaps in learner understanding that may be undetectable in coarser data obtained from traditional assessments. Our key contributions are (1) an extension to the virtual machine for Scratch (a popular block-based programming language designed to teach coding to school-age children) which logs transaction-level data as a stream of time-stamped steps taken by learners as they program within the Scratch environment and (2) a tool to visualize these data streams to help instructors assess the development of learners’ computational thinking abilities.
Milan J. Srinivas, Michelle M. Roy, Jyotsna N. Sagri, Viraj Kumar
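The kind of transaction-level stream the authors log could be analyzed with something like the sketch below, which tallies time-stamped editing events per learner into fixed time bins as a first step toward visualization; the event format is a guess, not the authors' actual schema.

```python
# Sketch: bin time-stamped editing events per learner, as a first step toward
# visualizing how their programs evolve (the event schema here is hypothetical).
from collections import defaultdict
from datetime import datetime

events = [  # (timestamp, learner, action) -- placeholder stream
    ("2018-03-01T10:00:05", "alice", "add_block"),
    ("2018-03-01T10:00:40", "alice", "delete_block"),
    ("2018-03-01T10:02:10", "bob",   "add_block"),
    ("2018-03-01T10:02:55", "alice", "add_block"),
]

BIN_SECONDS = 60
counts = defaultdict(lambda: defaultdict(int))   # learner -> time bin -> event count
start = datetime.fromisoformat(events[0][0])

for timestamp, learner, action in events:
    elapsed = (datetime.fromisoformat(timestamp) - start).total_seconds()
    counts[learner][int(elapsed // BIN_SECONDS)] += 1

for learner, bins in counts.items():
    print(learner, dict(sorted(bins.items())))
```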
Metadata
Title
Towards Extensible and Adaptable Methods in Computing
Edited by
Prof. Dr. Shampa Chakraverty
Dr. Anil Goel
Prof. Dr. Sanjay Misra
Copyright Year
2018
Publisher
Springer Singapore
Electronic ISBN
978-981-13-2348-5
Print ISBN
978-981-13-2347-8
DOI
https://doi.org/10.1007/978-981-13-2348-5
