
2021 | Book

Information and Communication Technology and Applications

Third International Conference, ICTA 2020, Minna, Nigeria, November 24–27, 2020, Revised Selected Papers


About this Book

This book constitutes revised selected papers from the Third International Conference on Information and Communication Technology and Applications, ICTA 2020, held in Minna, Nigeria, in November 2020. Due to the COVID-19 pandemic the conference was held online.
The 67 full papers were carefully reviewed and selected from 234 submissions. The papers are organized in the topical sections on Artificial Intelligence, Big Data and Machine Learning; Information Security Privacy and Trust; Information Science and Technology.

Table of Contents

Frontmatter

Artificial Intelligence, Big Data and Machine Learning

Frontmatter
Multi-class Model MOV-OVR for Automatic Evaluation of Tremor Disorders in Huntington’s Disease

This article proposes a method for assessing tremor symptoms in patients at an early stage of Huntington’s disease (Huntington’s syndrome, Huntington’s chorea, HD). The approach comprises a data collection methodology using smartphones or tablets, data labelling for a Support Vector Machine (SVM) model, a multi-class classification strategy, SVM training, automatic selection of model parameters, and selection of training and test data sets. More than 3000 data records were collected from subjects and patients with HD in Lithuania. The proposed SVM model achieved an accuracy of 97.09% across 14 classes, which were built according to the Shoulson-Fahn Total Functional Capacity (TFC) scale for assessing the patient’s tremor condition.
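A minimal sketch of the multi-class one-vs-rest SVM strategy with automatic parameter selection described above, using synthetic data as a stand-in for the smartphone tremor recordings (the real dataset, feature set, and the 14 TFC-scale classes are not reproduced here; sample and class counts merely mirror the abstract):

```python
# Sketch: one-vs-rest SVM with grid-searched parameters on synthetic data
# standing in for the ~3000 tremor records and 14 TFC classes of the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in: 3000 records, 10 features, 14 classes.
X, y = make_classification(n_samples=3000, n_features=10, n_informative=8,
                           n_classes=14, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Automatic model-parameter selection via grid search; OVR decision function.
grid = GridSearchCV(SVC(decision_function_shape="ovr"),
                    {"C": [1, 10], "gamma": ["scale", 0.1]}, cv=3)
grid.fit(X_train, y_train)
acc = grid.score(X_test, y_test)
print(f"held-out accuracy: {acc:.3f}")
```

The reported 97.09% applies to the authors' real data; the synthetic run above only illustrates the pipeline shape.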

Rytis Maskeliunas, Andrius Lauraitis, Robertas Damasevicius, Sanjay Misra
Application of Big Data Analytics for Improving Learning Process in Technical Vocational Education and Training

The study identifies applications of big data analytics for improving the learning process in Technical Vocational Education and Training (TVET) in Nigeria. Two research questions were answered. A descriptive survey design was employed, and the target population comprised experts in TVET. A structured questionnaire titled “Big Data Analytics in Technical Vocational Education and Training” (BDATVET) was the instrument used for data collection. BDATVET was subjected to face and content validation by three experts, two in TVET and one in Educational Technology. Cronbach’s Alpha was used to determine the reliability coefficient of the questionnaire, which was found to be 0.81. The data collected from the respondents were analyzed using mean and standard deviation. The problems identified for big data include, among others, security and the availability and stability of internet network service. The prospects identified include, among others, allowing a teacher to measure, monitor, and respond in real time to a student’s understanding of the material. Based on the findings, it was recommended, among others, that stakeholders in TVET uphold the fundamental security of their data and control who may access information about their competencies.

Aliyu Mustapha, Abdullahi Kutiriko Abubakar, Haruna Dokoro Ahmed, Abdulkadir Mohammed
A Survey for Recommender System for Groups

Recommender systems research has concentrated largely on individual recommendations, but newer techniques increasingly target groups. The aim of this paper is to provide an overview of the existing state-of-the-art techniques for collecting ratings, the strategies used to aggregate those ratings, and the practical applications of group recommendation. This study explored five databases, IEEE, Science Direct, Springer, ACM, and Google Scholar, from which 300 publications were screened. Irrelevant, duplicate, and ambiguous papers were removed; in the end, 26 papers were retained for in-depth analysis. This study provides a systematic review of the available evidence-based literature concerning recommender systems for groups.

Ananya Misra
Prediction of Malaria Fever Using Long-Short-Term Memory and Big Data

Malaria has been identified as one of the most common diseases and a major public health problem globally; it is caused by parasites transmitted by mosquitoes. It prevails in developing nations where healthcare facilities are insufficient for the patients. Technological advancement in medicine has resulted in the collection of huge volumes of data from various sources in different formats. Reliable and early parasite-based diagnosis, identification of symptoms, disease monitoring, and prescription are crucial to decreasing malaria occurrence in Nigeria. Hence, the use of deep learning and machine learning models is essential to reduce the effect of malaria endemicity and to build better predictive models. This paper therefore proposes a framework to predict malaria endemicity in Nigeria. Both environmental and clinical data were used, with Kwara State as a case study. The study used a deep learning algorithm, Long Short-Term Memory (LSTM), as the classifier for the proposed system. Three locations were selected from the Irepodun Local Government Area of Kwara State, with a 34-month periodic pattern. Each location reacted differently to environmental factors. The findings indicate that both factors are significant in malaria prediction and transmission. The LSTM algorithm provides an efficient method for detecting situations of widespread malaria.
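To make the LSTM mechanic concrete, here is a minimal numpy sketch of the cell update at the heart of such a model. The weights are random placeholders and the feature count is an invented example, not the authors' trained malaria model:

```python
# Minimal LSTM cell forward pass in numpy (illustrative only; untrained).
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c_prev + i * g          # cell state: forget old, admit new
    h = o * np.tanh(c)              # hidden state passed onward
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 5                         # e.g. 3 environmental/clinical features, 5 hidden units
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(34):                 # 34 monthly observations, as in the study
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)
```

In practice the paper's model would stack such cells in a deep-learning framework and train them on the environmental and clinical series.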

Joseph Bamidele Awotunde, Rasheed Gbenga Jimoh, Idowu Dauda Oladipo, Muyideen Abdulraheem
Improving the Prediction Accuracy of Academic Performance of the Freshman Using Wonderlic Personnel Test and Rey-Osterrieth Complex Figure

Prediction of students' academic performance continues to be a hot topic in the educational data mining field. In this paper, a linear regression analysis was conducted on IQ test scores, the Rey-Osterrieth Complex Figure (ROCF), and Cumulative Grade Point Average (CGPA). A dataset from 111 undergraduate students (59 females, 52 males) from 2 different faculties (Medicine and Computer Science) was collected. The results show that both IQ and ROCF are significantly correlated with CGPA. The linear regression test shows that the combination IQ-ROCF (β = 0.565) can serve as a good feature for predicting CGPA.
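The regression setup above can be sketched as follows. The data here is synthetic with an assumed linear relation, not the authors' 111-student sample:

```python
# Sketch: predicting CGPA from two predictors (IQ and ROCF), synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 111                                           # same sample size as the study
iq   = rng.normal(100, 15, n)                     # assumed score distributions
rocf = rng.normal(30, 5, n)
cgpa = 0.02 * iq + 0.03 * rocf + rng.normal(0, 0.2, n)  # invented relation

X = np.column_stack([iq, rocf])                   # combined IQ-ROCF feature
model = LinearRegression().fit(X, cgpa)
r2 = model.score(X, cgpa)
print(f"R^2 = {r2:.3f}, coefficients = {model.coef_}")
```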

Ochilbek Rakhmanov, Senol Dane
Comparative Analyses of Machine Learning Paradigms for Operators’ Voice Call Quality of Service

Mobile network operators (MNOs) render data, audio call, and multimedia services to their customers. Although operators enjoy patronage, the number of customers grows against limited resources, so value for money paid becomes a concern for MNOs and their customers. A significant weakness in researchers' assessments of operators' voice call services is the lack of performance evaluation of models developed on a data set covering a broader area. This paper carries out comparative performance analyses of operators' audio quality of service (QoS) using six machine learning algorithms. A crowdsourcing approach was used to capture the desired data. The results of the comparative analyses show that, in terms of accuracy, ID3 ranked first, followed by support vector machine (SVM), C4.5, and neural network, while fuzzy logic and the adaptive neuro-fuzzy inference system (ANFIS) ranked fifth and sixth, respectively. The precision results show that ID3, SVM, ANFIS, and C4.5 ranked first, second, third, and fourth, respectively. In the overall ranking, ID3 proved the most effective algorithm, ranking first, while SVM, C4.5, and ANFIS ranked second, third, and fourth, respectively. The evaluated machine learning based models will have a positive impact on the service delivery of telecommunication network operators.

Jacob O. Mebawondu
LearnFuse: An Efficient Distributed Big Data Fusion Architecture Using Ensemble Learning Technique

Using an ensemble learning model for big data fusion is a machine learning technique that fuses data in an efficient and optimal way to enhance analytical results, and it is less computationally expensive than traditional methods. In this work, spatial data were acquired; image filtration, standardization, and feature extraction were performed; and a local learning method was adopted to partition the dataset into clusters. The partitioned dataset was subjected to training and testing, using ensemble machine learning algorithms to implement data fusion. The results show that Random Forest achieved an improved accuracy of 87.46% and a precision of 0.959, while SVM, Bagging, Naïve Bayes, and Boosting achieved accuracies of 82.3%, 86.95%, 82.45%, and 80.5% and precision values of 0.985, 0.911, 0.950, and 0.935, respectively.

Salefu Ngbede Odaudu, Ime J. Umoh, Emmanuel A. Adedokun, Chukwuma Jonathan
A Gaussian Mixture Model with Firm Expectation-Maximization Algorithm for Effective Signal Power Coverage Estimation

Generally, the strength of propagated signal power over wireless cellular communication systems is highly stochastic, owing to susceptibility to the various fading conditions of radio-frequency propagation channels. In previous works, one popular method of analyzing and characterizing such signal datasets in high-dimensional space has been single Gaussian density function modelling. Under many circumstances, however, it is difficult to describe such a stochastic signal dataset accurately with a single Gaussian density function model, especially when the data do not follow an approximately ellipsoidal distribution. In this paper, a machine learning technique based on the Gaussian Mixture Model (GMM) is employed to analyze and model high-dimensional signal power with mixed stochastic shadow fading and long-term fading components. The signal power was acquired with mobile system software investigation tools over the air interface of an operational Long Term Evolution (LTE) cellular network belonging to a commercial operator in Port Harcourt City. Our first contribution models the probability density of the signal power coverage dataset acquired in high-dimensional space around six cell sites deployed by the network operator. As a second contribution, we propose a GMM that employs a firm iterative maximum likelihood estimation technique combined with the expectation-maximization (EM) algorithm to effectively model and characterize the signal power dataset. By means of the Akaike information criterion, the robustness of the proposed GMM with the firm iterative EM algorithm over the commonly used single Gaussian density function modelling method is shown. This technique can serve as a valuable step toward effectively monitoring and analyzing operational cellular radio network performance.
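The core comparison above, a Gaussian mixture fitted by EM against a single Gaussian, judged by the Akaike information criterion, can be sketched with scikit-learn, whose `GaussianMixture` runs EM internally. The two-mode signal-power data below is a synthetic stand-in for the measured LTE dataset, and this is not the authors' "firm" EM variant:

```python
# Sketch: GMM vs. single Gaussian via AIC on synthetic signal-power data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two overlapping fading regimes (dBm), mimicking mixed shadow/long-term fading.
signal = np.concatenate([rng.normal(-75, 4, 500),
                         rng.normal(-95, 6, 500)]).reshape(-1, 1)

# K=1 is the single-Gaussian baseline; K>=2 are mixtures fitted by EM.
aics = {k: GaussianMixture(n_components=k, random_state=0).fit(signal).aic(signal)
        for k in (1, 2, 3)}
best_k = min(aics, key=aics.get)
print(f"AIC per K: {aics}, best K = {best_k}")
```

On bimodal data like this, AIC prefers a mixture over the single Gaussian, which is the effect the paper exploits.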

Isabona Joseph, Ojuh O. Divine
Analysis and Classification of Biomedical Data Using Machine Learning Techniques

Breast cancer research has been at the center of discussion in health information technology in recent years, as breast cancer is the second largest cause of cancer deaths in women. A biopsy, in which tissue is extracted and analyzed microscopically, is used for the detection of breast cancer, and cells are categorized as cancerous or non-cancerous. There are various methods for classifying and predicting breast cancer. The proposed work uses the Wisconsin (Diagnostic) breast cancer dataset. The dataset is used to track the effectiveness of different machine learning approaches with respect to the main parameters: accuracy, F1-score, specificity, and recall. Three basic classification models, Naïve Bayes, KNN, and Decision Tree, were used to forecast cancer. The comparative tests show that the proposed Naïve Bayes classifier gives an accuracy of 98.1%, an F1-score of 98.3%, a specificity of 97.5%, and a recall of 98.3%.
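A minimal sketch of the Naïve Bayes baseline above, using the same Wisconsin (Diagnostic) dataset, which scikit-learn bundles. The split and scores here are illustrative and do not reproduce the paper's exact protocol or its reported 98.1%:

```python
# Sketch: Gaussian Naive Bayes on the Wisconsin (Diagnostic) dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score, recall_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42, stratify=y)
clf = GaussianNB().fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
print(f"accuracy={acc:.3f} f1={f1_score(y_te, pred):.3f} "
      f"recall={recall_score(y_te, pred):.3f}")
```

Swapping `GaussianNB` for `KNeighborsClassifier` or `DecisionTreeClassifier` reproduces the comparison structure of the paper.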

Sujata Panda, Hima Bindu Maringanti
Emotion Recognition of Speech in Hindi Using Dimensionality Reduction and Machine Learning Techniques

This study primarily focuses on integrating various types of speech features with different statistical techniques to reduce the data across all dimensions. Existing machine learning algorithms were applied to train the available dataset to obtain better results, which should in turn yield better responses from machines that process human instructions and the related emotions. Through machine learning techniques an emotion recognition (ER) system can work effectively, with results compared via code in the Python language. To improve ER performance, the statistical techniques principal component analysis (PCA) and linear discriminant analysis (LDA) were used to extract the essential information from the dataset. After applying Naïve Bayes classification (NBC) to a speech corpus collected from non-dramatic actors, the results show that NBC outperforms existing machine learning (ML) classification techniques such as the k-nearest neighbour algorithm (KNN), support vector machine (SVM), and decision tree (DT). This paper presents the results of the various ML techniques used for ER: SVM achieved a prediction rate of 39% and the decision tree a recognition rate of only 29%, whereas NBC achieved 72.77%. The study also examines how the prediction rate changes as the dataset changes.
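The dimensionality-reduction-plus-classifier pipeline described above can be sketched as follows. Synthetic features stand in for the Hindi speech corpus, which is not publicly reproduced here, and the component count is an assumption:

```python
# Sketch: PCA for dimensionality reduction feeding a Naive Bayes classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Stand-in for extracted speech features (e.g. 40 acoustic features, 4 emotions).
X, y = make_classification(n_samples=400, n_features=40, n_informative=10,
                           n_classes=4, random_state=0)
pipe = make_pipeline(PCA(n_components=10), GaussianNB())
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Replacing `PCA` with `LinearDiscriminantAnalysis` in the pipeline gives the LDA variant the paper also evaluates.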

Akshat Agrawal, Anurag Jain
Application of Supervised Machine Learning Based on Gaussian Process Regression for Extrapolative Cell Availability Evaluation in Cellular Communication Systems

Supervised machine learning models and their algorithms play major roles in extrapolative data analysis and in extracting valuable information from data. One emerging, robust supervised machine learning model is the Gaussian process regression (GPR) model. A vital strength of the GPR model is its capacity to adaptively model multipath linear/nonlinear functional approximation, dimension reduction, and classification problems. Nevertheless, there exists a plethora of approximation (prediction) methods and hyperparameters that systematically impact the GPR kernel function's modeling and learning capacity. Key approximation methods include exact Gaussian process regression, the fully independent conditional, the subset-of-data-points approximation, and the subset of regressors. Knowing how to identify or select the best of these approximation methods during data training and learning with a GPR model, so as to avoid the predictive-variance problem, remains a challenge. In this contribution, we propose a stepwise selection algorithm to tackle this challenge. GPR modelling with the stepwise selection algorithm has been tested for extrapolative regression analysis of live cell availability data obtained from an operational telecom service provider. The GPR extrapolative evaluation results show that the proposed approach is not only tractable but also yields low errors in terms of accurate service availability quantification and estimation. In terms of mean absolute error, the regression results of the proposed GPR extrapolative evaluation combined with the stepwise kernel selection algorithm were also far better than those obtained using support vector regression.
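The idea of fitting a GPR and comparing candidate kernels before extrapolating can be sketched as below. The availability series is synthetic, and the simple pick-the-lowest-training-MAE loop is a simplified stand-in for the paper's stepwise selection algorithm, not a reproduction of it:

```python
# Sketch: GPR with a crude kernel comparison, then a short extrapolation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(40, dtype=float).reshape(-1, 1)                 # 40 daily samples (assumed)
avail = 99.0 + 0.5 * np.sin(t[:, 0] / 5) + rng.normal(0, 0.05, 40)  # % availability

best = None
for kernel in (RBF() + WhiteKernel(), Matern(nu=1.5) + WhiteKernel()):
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, avail)
    mae = np.mean(np.abs(gpr.predict(t) - avail))             # in-sample MAE
    if best is None or mae < best[1]:
        best = (gpr, mae)

future = best[0].predict(np.array([[45.0]]))                  # extrapolate ahead
print(f"in-sample MAE={best[1]:.4f}, forecast at t=45: {future[0]:.2f}%")
```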

Ojuh O. Divine, Isabona Joseph
Anomaly Android Malware Detection: A Comparative Analysis of Six Classifiers

The high proliferation rate of Android devices has exposed the platform to wider vulnerabilities and increasing malware attacks. Emerging malware threats employ highly sophisticated and dynamic detection-avoidance techniques, which has continued to weaken the capacity of existing signature-based detection systems to protect against new and unknown threats. Thus, the need for effective detection approaches for unknown and novel Android malware remains a growing challenge in mobile and information security. This study therefore aimed at identifying the best-performing machine learning classification algorithm for anomaly Android malware detection, leveraging permission-based feature sets, by conducting a performance comparison of six classification algorithms: Naïve Bayes, Simple Logistic, Random Forest, PART, k-Nearest Neighbours (k-NN), and Support Vector Machine (SVM). The WEKA 3.8.2 suite was the machine learning tool used for pre-processing the feature sets and for the classification processes. Findings showed that Random Forest had the best detection result, with a false alarm rate of 2.2%, accuracy of 97.4%, error rate of 2.6%, and ROC area of 99.6%. The study concluded that, using Android permission features, Random Forest and k-Nearest Neighbours recorded the best performances in Android malware detection, followed by the Support Vector Machine and Simple Logistic classification algorithms. Partial Decision Tree (PART) performed relatively well, while Naïve Bayes recorded the weakest performance. Consequently, the Random Forest and k-NN models are recommended for the development of an anomaly Android malware detection paradigm.
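Permission-based classification as described above amounts to feeding a binary permission matrix to a classifier. The sketch below uses random stand-in features with an invented labelling rule, not real APK permission sets, and scikit-learn's Random Forest rather than WEKA:

```python
# Sketch: binary permission indicators -> Random Forest malware classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_apps, n_perms = 500, 30
X = rng.integers(0, 2, size=(n_apps, n_perms))   # 0/1 permission matrix
# Invented rule: "malware" tends to request permissions 0-4 together.
y = (X[:, :5].sum(axis=1) >= 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy: {acc:.3f}")
```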

Benjamin A. Gyunka, Oluwakemi C. Abikoye, Adekeye S. Adekunle
Credit Risk Prediction in Commercial Bank Using Chi-Square with SVM-RBF

Financial credit risk management is a foremost concern with many challenges, especially for banks seeking to reduce principal loss. Machine learning is a promising technique for credit scoring and for analyzing risks in banks, where it has become critical to extract useful knowledge from a great number of complex datasets. In this study, a machine learning approach using Chi-Square feature selection with an SVM-RBF classifier was analyzed on Taiwan bank credit data. The model provides important information with enhanced accuracy that will help in predicting loan status. The experiment achieves 93% accuracy compared to the state of the art.
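The two-stage approach above, Chi-Square feature selection followed by an RBF-kernel SVM, can be sketched as a pipeline. Non-negative synthetic features stand in for the Taiwan credit data (chi2 requires non-negative inputs, hence the scaler), and the feature counts are assumptions:

```python
# Sketch: Chi-Square feature selection feeding an RBF-kernel SVM.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=23, n_informative=8,
                           random_state=0)
pipe = make_pipeline(MinMaxScaler(),            # makes features non-negative
                     SelectKBest(chi2, k=10),   # Chi-Square ranking, keep top 10
                     SVC(kernel="rbf"))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pipe.fit(X_tr, y_tr)
acc = pipe.score(X_te, y_te)
print(f"accuracy: {acc:.3f}")
```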

Kayode Omotosho Alabi, Sulaiman Olaniyi Abdulsalam, Roseline Oluwaseun Ogundokun, Micheal Olaolu Arowolo
A Conceptual Hybrid Model of Deep Convolutional Neural Network (DCNN) and Long Short-Term Memory (LSTM) for Masquerade Attack Detection

Over the years, different issues have kept emanating in the area of computer security, ranging from hacking to other security threats such as ransomware, Distributed Denial of Service (DDoS), malware, and masquerade attacks. The masquerade attack is one of the most dangerous security attacks, dangerous because it can easily go undetected: the damage may already be done before its extent is realized. Traditional methods of detecting such attacks have proven ineffective, as the rate of false positives is always high and that of true positives low. This paper presents an automatic deep learning method combining a Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM) model, using the Greenberg and Schonlau datasets.

Adam Adenike Azeezat, Onashoga Sadiat Adebukola, Abayomi-Alli Adebayo, Omoyiola Bayo Olushola
An Automated Framework for Swift Lecture Evaluation Using Speech Recognition and NLP

Feedback produced by students can give useful insights into a lecturer's performance and teaching ability, and many institutions use student feedback to improve education quality. This paper proposes a novel framework for collecting and swiftly processing students' feedback about a lecture, which addresses the shortcomings of traditional scale-rated surveys and open-ended comments. The automated framework uses speech recognition and NLP tools to produce a frequency graph of the most-used words, which can help identify the topics that need to be revised. An experiment was successfully conducted to test the framework among 3rd-year undergraduate students.
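Once speech has been transcribed, the NLP tail of such a pipeline reduces to counting word frequencies over the transcripts. The transcripts and the minimal stop-word list below are invented examples, not the study's data:

```python
# Sketch: word-frequency counting over transcribed student feedback.
from collections import Counter

transcripts = [
    "the recursion part of the lecture was too fast",
    "more examples of recursion please",
    "the examples on sorting were clear",
]
stopwords = {"the", "of", "was", "on", "were", "too", "more", "please"}

# Flatten all transcripts, drop stop words, count what remains.
words = [w for t in transcripts for w in t.split() if w not in stopwords]
top = Counter(words).most_common(3)
print(top)  # 'recursion' and 'examples' rank highest here
```

Plotting `top` as a bar chart would give the frequency graph the framework produces.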

Ochilbek Rakhmanov
DeepFacematch: A Convolutional Neural Network Model for Contactless Attendance on e-SIWES Portal

An attendance system has always been a critical instrument for determining the response of persons to events, programs, or scheduled classes within the educational context. The Student Industrial Work Experience Scheme (SIWES) is a work-integrated learning program developed and made mandatory for all undergraduate students in professional and science courses in Nigerian Higher Education Institutions (HEIs), in order to expose students to the world of work in industry before graduation. However, monitoring students and ensuring that they partake fully in the scheme has proven difficult over the years. In this paper, we developed a Convolutional Neural Network (CNN) model named DeepFacematch to realize face-recognition-based contactless attendance. Given the model's validation accuracy of 92.60%, a contactless attendance app containing the validated model and location-tracking support was developed for incorporation into the e-SIWES web portal, to achieve effective monitoring of students' compliance with the rules of the work experience scheme. Remarkably, the addition of the DeepFacematch feature will boost the viability of the e-SIWES portal for improved SIWES coordination across the HEIs in Nigeria. DeepFacematch can also be re-trained and adapted for contactless attendance systems at offices and schools to curb the spread of infectious diseases like coronavirus.

Emmanuel Adetiba, Amarachi E. Opara, Oluwaseun T. Ajayi, Folashade O. Owolabi
Hausa Intelligence Chatbot System

A chatbot is an AI-based application that simulates human conversation. In this research, we propose a novel approach to the design of a chatbot that interacts with people in the Hausa language, one of the most widely spoken languages in Africa. Numerous chatbots have been created and continue to be created, but to the best of our knowledge there are very few, if any, that target the Hausa language; hence the motivation for this research. The proposed chatbot is capable of communicating with users in Hausa, telling funny stories, and, above all, learning from users. It was designed as a web-based application that can run on any platform with internet access and a web browser; PHP was used as the programming language in conjunction with a MySQL database, and the interface was designed using HTML and CSS, with JavaScript for interactivity. Developing this chatbot system brings the Hausa language onto the artificial intelligence stage, and it can serve as reference material for future research and development. The results show that the system is capable of communicating in simple Hausa words and of learning simple words in Hausa. Developing systems able to learn, adapt, and perform autonomously can be a major field of competition between technology inventors and Hausa-based language processing over the next few years.

Usman Haruna, Umar Sunusi Maitalata, Murtala Mohammed, Jaafar Zubairu Maitama
An Empirical Study to Investigate Data Sampling Techniques for Improving Code-Smell Prediction Using Imbalanced Data

A code smell is a surface indication that usually points to a deeper problem within a system: an easily traceable issue that often indicates a deeper inherent problem in the code. Code containing smells has been observed to be more susceptible to change during the software development process, and refactoring at an early stage saves a lot of time and prevents hassles at later stages. This paper aims at finding eight different types of code smells using feature engineering and sampling techniques designed to handle imbalanced data. Three Naive Bayes classifiers are used to find code smells over 629 different packages. The results indicate that the Gaussian Naive Bayes classifier performed best of the three classifiers on all samples of the data, and that the original (unsampled) data was the best to use, with all three classifiers performing better on it than on the other two data sets.
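A minimal sketch of the sampling idea above: the minority class (smelly code) is randomly oversampled before training a Gaussian Naive Bayes classifier. The metric values, class ratio, and separation are invented, not drawn from the 629-package corpus:

```python
# Sketch: random oversampling of the minority class before Gaussian NB.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
# Imbalanced stand-in: 450 clean vs 50 smelly samples, 6 code metrics each.
X0 = rng.normal(0.0, 1.0, (450, 6))
X1 = rng.normal(1.5, 1.0, (50, 6))
X = np.vstack([X0, X1])
y = np.array([0] * 450 + [1] * 50)

# Random oversampling: duplicate minority rows until classes are balanced.
idx = rng.choice(np.flatnonzero(y == 1), size=400, replace=True)
X_bal, y_bal = np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

clf = GaussianNB().fit(X_bal, y_bal)
rec = recall_score(y, clf.predict(X))
print(f"minority recall: {rec:.3f}")
```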

Himanshu Gupta, Sanjay Misra, Lov Kumar, N. L. Bhanu Murthy
A Statistical Linguistic Terms Interrelationship Approach to Query Expansion Based on Terms Selection Value

Query expansion has changed the information retrieval process to improve search performance; it aims to improve an information retrieval system's ability to retrieve the information the user needs. However, the term selection process still lacks precision due to the challenge of lexical ambiguity. Many researchers have focused on pseudo-relevance feedback to select terms from the top-retrieved documents using statistical linguistic techniques, but these methods have limitations. This paper proposes a statistical linguistic terms-interrelationship approach that exploits term selection in query expansion to retrieve relevant results. The proposed approach was tested on Malay, Hausa, and Urdu translated Quran datasets, and the results indicate that it outperforms the previous method in retrieving relevant results. Future work should focus on a weighting score based on terms interrelationship to further improve query expansion performance.

Nuhu Yusuf, Mohd Amin Mohd Yunus, Norfaradilla Wahid, Mohd Najib Mohd Salleh
Validation of Student Psychological Player Types for Game-Based Learning in University Math Lectures

Game-based learning can make educational activities more manageable and better planned, and therefore contribute to a more productive educational result. To enable an adaptive and personalized learning process, knowledge of learning-game player psychological types is required. Player type can be established using a questionnaire such as HEXAD. We present an analysis of student survey results using statistical analysis, correlation analysis, Principal Component Analysis (PCA), and Exploratory Factor Analysis (EFA), and suggest a simplification of the questionnaire: we shortened the 30-item HEXAD survey to a 12-item survey (named sHEXAD) using factor analysis on responses from university course students. Testing demonstrated that the items of the sHEXAD survey correlated and agreed well with the original HEXAD survey items and could be used to derive the same outcomes.

Tatjana Sidekerskienė, Robertas Damaševičius, Rytis Maskeliūnas
Outlier Detection in Multivariate Time Series Data Using a Fusion of K-Medoid, Standardized Euclidean Distance and Z-Score

Data mining techniques have been used to extract potentially useful knowledge from big data. However, data mining sometimes produces incorrect results, which can be due to the presence of outliers in the analyzed data; the literature indicates that detecting these outliers can enhance dataset quality. One important type of data that requires outlier detection for accurate prediction and enhanced decision-making is time series data, which is valuable because it helps in understanding past behavior for future predictions. This paper proposes an algorithm for outlier detection in Multivariate Time Series (MTS) data based on a fusion of K-medoid clustering, Standardized Euclidean Distance (SED), and the Z-score. Besides SED, experiments were also performed with two other distance metrics, City Block and Euclidean distance, and the Z-score's performance was compared with that of the inter-quartile range. The results showed that the Z-score technique produced a better outlier detection result, with an F-measure of 0.9978 compared to 0.8571 for the inter-quartile range. Furthermore, SED performed better when combined with both the Z-score and the inter-quartile range than City Block and Euclidean distance did.
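The SED and Z-score stages of the fusion described above can be sketched as follows. The K-medoid clustering step is simplified to a single center here, and the data and the 3-sigma threshold are assumptions:

```python
# Sketch: standardized Euclidean distance to a center, then Z-score flagging.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(50, 2, size=(200, 3))   # multivariate time series window
series[17] = [90, 10, 70]                   # injected outlier at index 17

center = np.median(series, axis=0)          # stand-in for a k-medoid center
# Standardized Euclidean distance: per-dimension variance-weighted.
sed = np.sqrt((((series - center) ** 2) / series.var(axis=0)).sum(axis=1))
z = (sed - sed.mean()) / sed.std()          # Z-score over the distances
outliers = np.flatnonzero(z > 3)            # 3-sigma threshold (assumed)
print(outliers)  # index 17 should be flagged
```

Replacing the variance weighting with plain squared differences gives the Euclidean variant the paper compares against.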

Nwodo Benita Chikodili, Mohammed D. Abdulmalik, Opeyemi A. Abisoye, Sulaimon A. Bashir
An Improved Hybridization in the Diagnosis of Diabetes Mellitus Using Selected Computational Intelligence

Artificial Intelligence (AI) in medicine has provided numerous advantages in the diagnosis, management, and prediction of highly complicated and uncertain diseases like diabetes. Despite the high complexity and uncertainty in this area, computational intelligent systems such as the Artificial Neural Network (ANN), Fuzzy Logic (FL), and the Genetic Algorithm (GA) have been used to enhance healthcare services, reduce medical costs, and improve quality of life; hence, Computational Intelligence Techniques (CIT) have been successfully employed in diabetes diagnosis, risk evaluation, patient monitoring, and prediction in the medical field. The use of a single technique in the diagnosis of diabetes has been comprehensively investigated, showing some level of accuracy, but hybridized techniques can still perform better. Diabetes Mellitus (DM) is one of contemporary society's most chronic and crippling diseases and poses not just a medical issue but also a socio-economic one. This paper therefore develops an improved hybrid system for the diagnosis of diabetes mellitus using FL, ANN, and GA: FL and ANN are combined for the diagnosis, and GA is used for feature selection and optimization. The comparison showed that the resulting Genetic-Neuro-Fuzzy Inferential System (GNFIS) had better performance, with 99.34% accuracy on the whole dataset, than FL and ANN with 96.14% and 95.14%, respectively. The proposed system can assist medical practitioners in diagnosing diabetes mellitus and increase diagnostic accuracy.

Idowu Dauda Oladipo, Abdulrauph Olarewaju Babatunde, Joseph Bamidele Awotunde, Muyideen Abdulraheem
Optimizing the Classification of Network Intrusion Detection Using Ensembles of Decision Trees Algorithm

Over the years, vulnerabilities of network systems have completely revolutionized the security landscape: attackers simply exploit these exposures to gain undue access to network resources. It is essential to safeguard network resources with an Intrusion Prevention System (IPS) and an Intrusion Detection System (IDS); hence, developing an optimized IDS model using ensemble machine learning techniques is paramount. This research therefore builds a network IDS using an ensemble of Decision Tree (DT) algorithms to optimize an IDS model. The C4.5 DT classifier was used on the University of New South Wales Network-Based 2015 (UNSW-NB15) network data set, and the ensemble techniques of bagging and AdaBoost were compared during performance evaluation. The results show that performance increases as training size increases, and that adopting an ensemble model for the C4.5 decision tree classifier improved network intrusion classification compared to using the machine learning algorithm in isolation. The AdaBoost ensemble of the C4.5 DT algorithm, using a partition of 90% training and 10% testing data, performed best, with 98% accuracy and precision.
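The bagging-vs-AdaBoost comparison above can be sketched with scikit-learn, whose CART decision tree stands in for C4.5, and synthetic data stands in for UNSW-NB15. The 90/10 split mirrors the best-performing configuration reported:

```python
# Sketch: bagging vs. AdaBoost over a decision-tree base learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

base = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = {}
for name, ens in [("bagging", BaggingClassifier(base, n_estimators=50,
                                                random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(base, n_estimators=50,
                                                  random_state=0))]:
    ens.fit(X_tr, y_tr)
    scores[name] = ens.score(X_te, y_te)
print(scores)
```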

Olamatanmi J. Mebawondu, Olufunso D. Alowolodu, Adebayo O. Adetunmbi, Jacob O. Mebawondu
Identification of Bacterial Leaf Blight and Powdery Mildew Diseases Based on a Combination of Histogram of Oriented Gradient and Local Binary Pattern Features

The quantity and quality of agricultural products are significantly reduced by diseases. The identification and classification of plant diseases from leaf images is an important area of agricultural research in which machine-learning models can be employed. Powdery Mildew and Bacterial Leaf Blight are two common diseases that can severely affect crop production. To minimize the loss they cause and to ensure more accurate automatic detection of these pathogens, this paper proposes an approach for identifying these diseases based on a combination of Histogram of Oriented Gradient (HOG) and Local Binary Pattern (LBP) features (HOG + LBP) with a Naïve Bayes (NB) classifier. The NB classifier was also trained with only the HOG features and with only the LBP features. The NB classifier trained with the combined HOG + LBP features obtained a higher accuracy of 95.45%, compared with 90.91% and 86.36% for the classifiers trained with only HOG and only LBP features, respectively.
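To illustrate the LBP half of the HOG + LBP feature combination, here is a minimal numpy implementation: each pixel is recoded by thresholding its 8 neighbours against it, and the histogram of those codes becomes the texture feature. The toy image is random, not a leaf photograph, and real pipelines would typically use a library implementation such as scikit-image:

```python
# Sketch: 8-neighbour Local Binary Pattern histogram in plain numpy.
import numpy as np

def lbp_histogram(img):
    """Return the normalized 256-bin LBP-code histogram of a grayscale image."""
    c = img[1:-1, 1:-1]                       # interior pixels (have 8 neighbours)
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)   # set bit if neighbour >= center
    return np.bincount(codes.ravel(), minlength=256) / codes.size

rng = np.random.default_rng(0)
leaf_patch = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)  # toy patch
hist = lbp_histogram(leaf_patch)
print(hist.shape)
```

Concatenating this histogram with a HOG descriptor of the same patch yields the combined HOG + LBP feature vector the paper feeds to the NB classifier.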

Zakari Hassan Mohammed, Oyefolahan I. O, Mohammed D. Abdulmalik, Sulaimon A. Bashir
Feature Weighting and Classification Modeling for Network Intrusion Detection Using Machine Learning Algorithms

Globally, as dependence on computer network services surges, so do the activities of attackers who gain undue access to network resources for selfish interests at the expense of stakeholders. Attackers threaten the integrity, availability, and confidentiality of network resources despite various preventive security measures; hence the need to study ways to detect and minimize attackers' activities. This paper develops a Network-Based Intrusion Detection System (NBIDS) using machine learning algorithms capable of detecting anomalous (attack) network traffic from the Internet, thereby reducing cases of successful network attacks. Gain Ratio and Information Gain are used for feature ranking on the UNSW-NB15 benchmark network intrusion dataset. The fifteen most highly ranked features are selected for developing classification models using C4.5 and Naïve Bayes (NB); coincidentally, the two feature ranking approaches select the same features, but in a different order. Empirical results show that the C4.5 algorithm outperformed NB for all simulations based on different split ratios of training and testing sets, returning the highest accuracy of 90.44% against 75.09% for NB for two-class models on simulation IV. Across all experimental setups, DT and NB maintained constant precision of 91% and 75%, True Positive rates of 90% and 75%, and False Positive rates of 8.6% and 22.8%, respectively. The experiments revealed that accuracy increases as the training ratio increases. The results show that the approach is practicable for real-time network intrusion detection.
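The Information Gain ranking used above follows the standard entropy formulation: a feature's gain is the drop in label entropy after partitioning the data by that feature's values. A minimal sketch (hypothetical helper names, not the paper's code):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(feature_values, labels):
    """Entropy of the labels minus the weighted entropy after splitting on the feature."""
    base = entropy(labels)
    n = len(labels)
    groups = {}
    for x, y in zip(feature_values, labels):
        groups.setdefault(x, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return base - remainder

def rank_features(rows, labels, names):
    """Return feature names sorted by descending information gain."""
    scores = [(information_gain([r[i] for r in rows], labels), name)
              for i, name in enumerate(names)]
    return [name for _, name in sorted(scores, reverse=True)]
```

A perfectly predictive feature gains the full label entropy; an irrelevant one gains nothing, which is what pushes it to the bottom of the ranking.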

Olamatanmi J. Mebawondu, Adebayo O. Adetunmbi, Jacob O. Mebawondu, Olufunso D. Alowolodu
Spoof Detection in a Zigbee Network Using Forge-Resistant Network Characteristics (RSSI and LQI)

The development of a spoof detection framework in a ZigBee network using forge-resistant network characteristics is presented. ZigBee has become ubiquitous in application areas such as Wireless Sensor Networks (WSNs), Home Area Networks (HANs), smart metering, smart grids, the Internet of Things (IoT) and smart devices. Its pervasiveness and suitability for vast applications make it a tempting target for attackers. Due to the open nature of the wireless medium, ZigBee networks are susceptible to spoofing attacks, where an illegitimate/Sybil node impersonates one or multiple legitimate nodes with malicious intentions. A testbed consisting of two ZU10 ZigBee modules was set up to create a real ZigBee network environment. Received Signal Strength Indicator (RSSI) and the corresponding Link Quality Indicator (LQI) data were collected. The Dynamic Time Warping (DTW) algorithm was used for time series classification and similarity measurement of these datasets over variable physical distances. The framework was able to differentiate ZigBee signals that are at least 1 m apart.
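The DTW similarity measure at the heart of the framework is the classic dynamic-programming recurrence: the cost of aligning two sequences allowing elastic stretching in time. A minimal sketch over two RSSI/LQI-style sequences (illustrative only; a spoof detector would compare the distance against a calibrated threshold):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two numeric sequences."""
    inf = float('inf')
    n, m = len(a), len(b)
    # d[i][j] = minimal alignment cost of a[:i] and b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: step in a, step in b, or step in both.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Two RSSI traces from the same transmitter warp onto each other cheaply, while a trace from a spoofing node at a different physical distance accumulates cost.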

Christopher Bahago Martins, Emmanuel Adedokun Adewale, Ime Umoh Jarlath, Muhammed Bashir Mu’azu
Face Morphing Attack Detection in the Presence of Post-processed Image Sources Using Neighborhood Component Analysis and Decision Tree Classifier

Recently, Face Morphing Attack Detection (MAD) has gained a great deal of attention as criminals have started to use freely and easily available digital manipulation techniques to combine two or more subjects' facial images into a new facial image that can pass as an accurate image of any of the individuals that constitute it. Some of these morphing tools create morphed images of high quality, which pose a serious threat to existing Face Recognition Systems (FRS). In the literature, FRS has been identified as vulnerable to multiform morphing attacks, and several studies on detecting such attacks have been conducted using various techniques. Despite the remarkable MAD performance reported in the literature, no suitable solution has so far been found to handle post-processed images, such as images modified after morphing with a sharpening operation that can dramatically reduce the visible artifacts of morphed photos. In this work, an approach is proposed for MAD both before and after image post-processing, built on a combination of Local Binary Pattern (LBP) for feature extraction, Neighborhood Component Analysis (NCA) for feature selection, and classification using K-Nearest Neighbor (KNN), Decision Tree Classifier (DTC) and Naïve Bayes (NB) classifiers. Training the different classifiers with feature vectors selected by the NCA algorithm improved the classification accuracy from 90% to 94%, consequently improving the overall performance of the MAD.
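Of the three classifiers compared, KNN is compact enough to sketch. A minimal majority-vote KNN over feature vectors such as the NCA-selected ones (data and names here are hypothetical, not the paper's implementation):

```python
def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training vectors."""
    # Sort all training points by squared Euclidean distance to x.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y))
    # Tally the labels of the k closest points.
    votes = {}
    for _, y in dists[:k]:
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)
```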

Ogbuka Mary Kenneth, Sulaimon Adebayo Bashir, Opeyemi Aderiike Abisoye, Abdulmalik Danlami Mohammed
Enhanced Back-Translation for Low Resource Neural Machine Translation Using Self-training

Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The quality of the backward system, which is trained on the available parallel data and used for the back-translation, has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model. This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEU points, respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which outperformed another forward model trained using standard back-translation by 2.7 BLEU points.

Idris Abdulmumin, Bashir Shehu Galadanci, Abubakar Isa

Information Security Privacy and Trust

Frontmatter
SIMP-REAUTH: A Simple Multilevel Real User Remote Authentication Scheme for Mobile Cloud Computing

Remote authentication is commonly required for online transactions. Many computing activities are now done online through Cloud computing, and most of the devices used to access data in the Cloud are low-capacity mobile devices. The advancement of computing technology is also accompanied by increased cybercrime; thus, security systems for securing user data online must keep up with the new wave of attacks. Many solutions for improved remote authentication have been proposed, but the security of remote authentication with mobile devices still faces challenges. This research proposes a Simple Multilevel Real User Remote Authentication Scheme (SIMP-REAUTH) for Mobile Cloud Computing to prevent misuse of authentication tokens. SIMP-REAUTH is designed to enable implementation of advanced authentication on low-memory, low-capacity mobile devices for more accurate real user identification. SIMP-REAUTH consists of three stages comprising facial, voice, and password authentication, carried out by separate modules. Failure at any of the stages causes the system to deny the user access to data. The compute- and memory-intensive biometric processes were outsourced to the Cloud, and misuse of authentication information was prevented with liveness test challenges. Reports obtained from the test users and results of simulated attacks carried out on the testbench show that SIMP-REAUTH is well suited for mitigating authentication token misuse by identity thieves. The tests recorded 94% and 92% resistance against the facial and voice attack tests, respectively.
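The fail-fast multilevel design described above can be sketched as a sequential pipeline in which denial at any stage stops the process. The stage checks below are hypothetical stand-ins for the facial, voice and password modules:

```python
def multilevel_authenticate(user, stages):
    """Run authentication stages in order; failure at any stage denies access."""
    for name, check in stages:
        if not check(user):
            return False, name   # access denied at this stage
    return True, None            # all stages passed

# Hypothetical stage checks; real ones would call the Cloud-hosted
# biometric services and the password verifier.
stages = [
    ('face', lambda u: u.get('face_ok', False)),
    ('voice', lambda u: u.get('voice_ok', False)),
    ('password', lambda u: u.get('password_ok', False)),
]
```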

Omoniyi Wale Salami, Yuri Demchenko, Emmanuel Adewale Adedokun, Muhammed Bashir Mu’azu
Formulation and Optimization of Overcurrent Relay Coordination in Distribution Networks Using Metaheuristic Algorithms

This paper implements an optimized solution to Over Current Relay (OCR) coordination using a new metaheuristic algorithm called Smell Agent Optimization (SAO). We first formulate relay coordination as a single-objective, multi-variable optimization problem whose main objective is to minimize the operating time of the protective relays. We employed the IEEE standard 15-bus and 30-bus test networks to implement the OCR methods. Simulations were carried out using Matlab R2019b and results were compared with Particle Swarm Optimization (PSO). Experimental analysis showed that both the SAO and PSO algorithms are effective in solving the relay coordination problem. The SAO obtained a better solution than the PSO; however, the PSO converged faster than the SAO.

Ahmed Tijani Salawudeen, Abdulrahman Adebayo Olaniyan, Gbenga Abidemi Olarinoye, Tajudeen Humble Sikiru
A Gamified Technique to Improve Users’ Phishing and Typosquatting Awareness

Security is an important aspect of technology and requires continuous attention, and the user plays a crucial role in the cyber risks to which sensitive data are exposed. Users' mistakes and nonchalant behavior have been found to be responsible for various breaches, resulting in increased identity theft, extortion, loss of information and possibilities of intrusion. This paper explores the use of a web-based gamification application to motivate users of the web to be cyber conscious through a fun approach designed to improve their online behavior. Results of the study show the potential of a gamified technique in enhancing security awareness and building more secure behaviours in users.

Adebayo Omotosho, Divine Awazie, Peace Ayegba, Justice Emuoyibofarhe
Sooner Lightweight Cryptosystem: Towards Privacy Preservation of Resource-Constrained Devices

The use of cryptosystems has become popular because of the increased need for exchanges across untrusted media, especially Internet-enabled networks. Depending on the application, several forms of cryptosystems have been developed for the purposes of authentication, confidentiality, integrity, and non-repudiation. Cryptosystems make use of encryption schemes that convert plaintext to ciphertext in diverse areas of application. The vast progress in Internet of Things (IoT) technology and resource-constrained devices has given rise to massive deployment of sensor devices and growth of services targeted at lightweight devices. Though these devices support a number of services, they require strong lightweight encryption approaches for privacy protection of data. Existing lightweight cryptosystems fall short of the expected privacy levels and applicability in emerging resource-constrained environments. This paper develops a mathematical model for the Sooner lightweight cryptographic scheme, based on reduced and hardened ciphertext block sizes, hash sizes and key sizes of traditional cryptosystems and public Blockchain technology for ubiquitous systems. Thereafter, the hardening procedure offered by RSA homomorphic encryption was applied to generate stronger, secure and lightweight AES, RSA and SHA-3 in order to deal with exchanges over untrusted channels. The proposed Sooner scheme is recommended for adoption in public Blockchain-based smart systems and applications for the purpose of data privacy.

Abraham Ayegba Alfa, John Kolo Alhassan, Olayemi Mikail Olaniyi, Morufu Olalere
Comparative Performance Analysis of Anti-virus Software

The threats and damage posed by malware these days are alarming, and anti-virus vendors combat the menace by designing anti-virus software. This software also has a tremendous impact on the performance of the computer system, which in turn can become a vulnerability for malware attacks. Anti-virus (anti-malware) software is a computer program that detects, prevents and deletes malware-infected files on communicating devices by scanning. A virus is malware that replicates itself by copying its code into other computer programs or software; it can perform harmful tasks on the affected host computer, such as consuming processor time, accessing private information, and corrupting or deleting files. This research reviews malware evasion and detection techniques and then focuses on a comparative performance analysis of selected anti-virus software (Avast, Kaspersky, Bitdefender and Norton) using VMware. Quick, full and custom scans, among other parameters, were used. Based on the analysis of the selected anti-virus software, the one that offers the utmost performance considering malware detection, removal rate, memory usage of the installed anti-virus, and interface launch time is considered the best.

Noel Moses Dogonyaro, Waziri Onomza Victor, Abdulhamid Muhammad Shafii, Salisu Lukman Obada
An Efficient Lightweight Cryptographic Algorithm for IoT Security

To perform various tasks, different devices are interconnected and interact in several emerging areas. The emergence of the Internet of Things (IoT) and its applications makes it possible for many constrained, low-resource devices in a network to communicate, compute and make decisions among themselves. But the IoT faces many challenges and problems, such as system power consumption, limited battery capacity, memory space, performance cost, resource constraints due to small device sizes, and protection of the communication network. Traditional algorithms are too slow from a data protection point of view and cannot be used for data encryption on an IoT platform given the resource constraints. Therefore, this paper proposes a lightweight cryptographic scheme based on the Tiny Encryption Algorithm (TEA) for an IoT-driven setup, to gain speed from a software perspective rather than a hardware implementation. The proposed algorithm reduces encryption time on the IoT platform while preserving the trade-off between security and efficiency. In terms of memory use, execution time, and precision, the proposed work is compared with recent work on lightweight cryptography. Results show that in an IoT-driven setup the algorithm is more secure, more efficient, and better suited to securing data.
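The underlying Tiny Encryption Algorithm is public and famously small; the reference 32-round block cipher looks like this (a plain TEA sketch for illustration, not the paper's enhanced variant):

```python
def tea_encrypt_block(v, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key (four words)."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, total, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):
        total = (total + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

def tea_decrypt_block(v, key):
    """Invert tea_encrypt_block by running the rounds backwards."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    total = (delta * 32) & mask
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
        total = (total - delta) & mask
    return v0, v1
```

TEA's appeal for constrained devices is exactly what this listing shows: no tables, no key schedule, only 32-bit adds, shifts and XORs.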

Muyideen Abdulraheem, Joseph Bamidele Awotunde, Rasheed Gbenga Jimoh, Idowu Dauda Oladipo
Analysis and Classification of Some Selected Media Apps Vulnerability

This research investigates the traffic of popular messaging applications in order to assess the security or vulnerability of communication on those applications. The experiment was carried out on a Local Area Network. Wireshark, NetworkMiner and NetWitness Investigator were used to capture and analyse the traffic. Ten (10) instant messaging applications were installed on Android platforms and used for the experiment. Different types of sensitive media files were recovered from the network traffic, including images, documents/texts and audio. The Internet Service Provider (ISP) of the sender was also recovered, along with the resident city of the third party. The research classifies the mobile applications into vulnerable and non-vulnerable applications using the gathered data. It was discovered that, of the ten mobile applications investigated, only the Viber application was non-vulnerable to the tested attacks. The classification result also shows random forest to be the best classifier on this research data.

Olawale Surajudeen Adebayo, Joel Sokoyebom Anyam, Shefiu Ganiyu, Sule Ajiboye Salawu
Appraisal of Electronic Banking Acceptance in Nigeria Amidst Cyber Attacks Using Unified Theory of Acceptance and Use of Technology

The roll-out of technological advancements into day-to-day human endeavors is enormous. These advancements have both positive and negative effects. The positive effects are the intended and most recognized ones; the negative ones, though not intended, provide opportunities for criminals to thrive. Cyber attacks on computers and Internet technology are among the most prominent negative effects of Information and Communication Technology. Hence, there is a need for systematic study of how these negative effects influence the acceptability of such technological advancements. This paper uses the Unified Theory of Acceptance and Use of Technology (UTAUT) to appraise the acceptance of electronic banking in Nigeria despite the prevalence of cyber attacks. The study used a 7-point Likert scale questionnaire to collect data from 286 respondents across Nigerian commercial banks resident in Gombe State, and applied Cronbach's Alpha and Spearman's Rho correlation to test the factors of the adopted theory. The results showed a significant correlation among all the UTAUT factors, implying that e-banking is accepted and used in Nigeria even with the recurrence of cyber attacks on such systems.

Ali Usman Abdullahi, Stephany Adamu, Ali Muhammad Usman, Munir Adewoye, Mustapha Abubakar Yusuf, Enem Theophlus Aniemeka, Haruna Chiroma

Information Science and Technology

Frontmatter
A Scoping Review of the Literature on the Current Mental Health Status of Developers

Year after year the need for technical solutions has increased, which has made the role of software developers more important. Researchers have found that developers' productivity varies with their happiness and unhappiness; moreover, focusing on developers' happiness yields problem solvers with better analytical abilities. To understand the happiness and unhappiness of software developers, we need to work within the behavioral software engineering (BSE) area. In this research, we gathered related problems and generated hypotheses to address them. In addition, we specify a research plan for each hypothesis.

Ghaida Albakri, Rahma Bouaziz
Gamifying Users’ Learning Experience of Scrum

Serious games have arisen to boost users' interaction and efficiency as they pursue a particular objective, integrating it with the game's mechanics and thus producing a very enticing mission. The use of serious games in Software Engineering to increase the participation of developers has been studied with great interest, to train potential professionals for situations they may face in software development. This paper introduces ScrumGame, a serious game to train both Software Engineering students and software practitioners in Scrum. The game was tested with users who use Scrum in their everyday work, using a pre-test/post-test design. The SIMS and MSLQ tests were used for this, both performed by the users before and after the game was played. We aimed to assess how game use affects learning strategies and motivation. Backed by evidence of statistical significance, the findings indicate that ScrumGame had a positive effect on the students.

Guillermo Rodriguez, Alfredo Teyseyre, Pablo Gonzalez, Sanjay Misra
Visualizing Multilevel Test-to-Code Relations

Test-to-code traceability (TCT) links play an important role in the process of software maintenance and re-engineering. However, the main issue is how to visualize these links efficiently and effectively to support their comprehension and maintenance. In this work, a visualization approach is presented that displays the TCT links at two levels: class level and method level. Visualizing traceability links at the method level provides detailed information that supports software development tasks such as software maintenance, refactoring, and change impact analysis. The approach is implemented in our visualization tool, TCTracVis. The tool is evaluated on a real, simple project, and the achieved results confirm that the proposed approach and tool effectively support several tasks in software development.

Nadera Aljawabrah, Abdallah Qusef, Tamás Gergely, Adhyatmananda Pati
Privacy Preservation in Mobile-Based Learning Systems: Current Trends, Methodologies, Challenges, Opportunities and Future Direction

The adoption of mobile technologies in education is evolving, as in the business and health sectors. The design of user-centric platforms that enable individuals to participate in learning and teaching activities is currently an active area of research. Learning Management Systems (LMS) assist learners and academic activities but continue to fall short of the desired impact due to the huge demands of such applications. More importantly, mobile applications offer enormous convenience, though not without the possibility of eavesdropping and malicious exploitation of user data. The original structure of mobile learning requires that data and processing heads have a centralized entity, which is not possible in wireless application arrangements due to the communication overhead of transmitting raw data to a central learning processor. This has led to the use of a distributed mobile learning structure, which preserves the privacy of learners. This study discusses the challenges, current trends, methodologies, opportunities and future directions of privacy preservation in mobile-based learning systems. The study highlights the use of learners' private data and behavioral activities by LMS, especially in understanding the needs of learners and improving their experiences, but raises concerns about the risks to learners' privacy on LMS due to the mining of learner data, which were not considered in existing related studies in the literature.

Muhammad Kudu Muhammad, Ishaq Oyebisi Oyefolahan, Olayemi Mikail Olaniyi, Ojeniyi Joseph Adebayo
Drug Verification System Using Quick Response Code

Drug errors and abuses are among the most frequently reported deficiencies in the healthcare sector worldwide. In the US alone, over $3.5 billion has been expended on treatment related to drug errors affecting more than 1.5 million individuals. Drugs, an important part of livelihood, face the problem of authentication: medicines have to be tested to differentiate the real from the fake. Drug code detection reduces the risk of these mistakes by supplying first responders with accurate information, which they can quickly decode using a code scanner on their smartphones and thus take the necessary steps. A previous study implemented a desktop application that checks for standardized drugs by scanning the Quick Response (QR) codes on the pack. Recently, smartphones have improved considerably, gaining tools such as cameras that can be used to scan drug codes. Therefore, this study developed a mobile application to scan a drug's code and verify its authenticity. The application was designed with an integrated database for real-time drug authentication, implemented using SQL running on a server and an Application Programming Interface (API) built with an Object-Relational Mapping (ORM) tool called Sequelize to serve as an intermediary between the application and the database. After a code is scanned to get its serial code, the API validates the serial code and returns a response as JavaScript Object Notation (JSON). The proposed system can be used by doctors, pharmacists and patients to identify fake and harmful drugs, hence reducing their circulation.
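The server-side verification step described above, validating a scanned serial code against a registry and returning JSON, can be sketched as follows. The registry, serial format and field names are hypothetical; the paper's actual API uses Sequelize over SQL rather than an in-memory dictionary:

```python
import json

# Hypothetical registry standing in for the server-side drug database.
DRUG_DB = {
    'NG-2020-0001': {'name': 'Paracetamol 500mg', 'batch': 'B12', 'expiry': '2022-10'},
    'NG-2020-0002': {'name': 'Amoxicillin 250mg', 'batch': 'B07', 'expiry': '2021-03'},
}

def verify_serial(serial):
    """Validate a scanned serial code and return a JSON response for the mobile app."""
    drug = DRUG_DB.get(serial)
    if drug is None:
        # Unknown serial: likely counterfeit or unregistered.
        return json.dumps({'serial': serial, 'status': 'unverified'})
    return json.dumps({'serial': serial, 'status': 'verified', 'drug': drug})
```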

Roseline Oluwaseun Ogundokun, Joseph Bamidele Awotunde, Sanjay Misra, Dennison Oluwatobi Umoru
A Novel Approach to News Archiving from Newswires

A news archive is the core operational tool a media relations team depends on in order to effectively feed a data-hungry organization. The ingrained approach to news archiving is the use of a relational database. As a consequence, integrating search engines that support full-text search is practically impossible due to the strict data schema defined in relational database systems. Therefore, there is a need for news archives that support full-text search with relevance ranking of news. In this paper, an approach that supports full-text search is proposed. The process starts by crawling newswire websites for news items that are relevant with respect to some predefined keywords and extracting them. They are then stored in a data structure known as an inverted index, which supports full-text search, aggregation, and relevance ranking of search results. Search results are returned to the user in order of decreasing relevance to the search term. We provide a software solution written in Java, using the jsoup library for HTML parsing and an Elasticsearch implementation of a search engine. We tested our solution on nine newswires using ten keywords and retrieved a total of 42 relevant news items matching seven keywords. Compared to the manual approach, the proposed approach performed better in terms of retrieval speed and accuracy. We conclude that three main components are important in a good digital archive: relevance, extraction, and search. This work integrates a good relevance-marking technique, an extraction method, and a search engine.
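The inverted index that underlies such full-text search can be sketched in a few lines: map each term to per-document frequencies, then score a query by summed term frequency. This is a toy stand-in for the Elasticsearch engine used in the paper, and the ranking is deliberately simplistic (real engines use TF-IDF or BM25):

```python
def build_index(docs):
    """Map each term to {doc_id: term frequency} over a {doc_id: text} corpus."""
    index = {}
    for doc_id, text in docs.items():
        for term in text.lower().split():
            bucket = index.setdefault(term, {})
            bucket[doc_id] = bucket.get(doc_id, 0) + 1
    return index

def search(index, query):
    """Score documents by summed term frequency; return ids, most relevant first."""
    scores = {}
    for term in query.lower().split():
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] = scores.get(doc_id, 0) + tf
    return sorted(scores, key=scores.get, reverse=True)
```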

Bilkisu Larai Muhammad-Bello, Mudi Lukman, Mudi Salim
Mobile Application Software Usability Evaluation: Issues, Methods and Future Research Directions

Recently, growth and advancement in mobile technology (mobile devices, smartphones, mobile wireless networks) have eased the everyday lives of people across the globe. This can be attributed to the greater ease of developing mobile applications for diverse uses such as healthcare, finance, and agriculture, and to the quest to roll out mobile application software with a lower budget, quicker delivery time, and top product quality from both the developers' and end-users' perspectives. The challenges of appropriate design frameworks, and of understanding the needs of end users, have persisted long after applications are rolled out. Mobile app usability and accessibility evaluation were developed to enable developers to ascertain the level of usage and relevance of mobile applications, in use or prior to release, under diverse criteria such as maintainability, understandability and comprehensibility, as well as the parameters specified by the ISO 9241-11 usability standard (effectiveness, efficiency, and satisfaction). This study undertakes a systematic literature review (SLR) of mobile application software usability, scoped to issues, methods and future research directions. A total of forty (40) peer-reviewed articles from diverse databases/sources were selected. The outcomes reveal that mobile application usability evaluations and processes are domain-specific (or locality-dependent), and that no generic approaches have been identified or developed for evaluating the usability and accessibility of mobile applications, due to the non-deterministic nature of the domain and context of use.

Blessing Iganya Attah, John Kolo Alhassan, Ishaq Oyebisi Oyefolahan, Sulaimon Adebayo Bashir
Perception of Social Media Privacy Among Computer Science Students

People meet at different places and for different reasons, and those with common interests usually interact and have something to share; this is the traditional model of social networking. Although social networking occurs in a virtual world, it has positively transformed communication in several aspects of our lives, such as education, business, healthcare and government. However, social networking has increased the exposure of private lives, the nuisance of unsolicited chat messages, and unhealthy cyber monitoring. This study aims to investigate the privacy concerns and awareness of social media privacy policies among computer science students by presenting a review of some of the privacy challenges of using social networking applications. We further carried out an online survey to collect data on their views, investigating their understanding and usage of privacy settings on their preferred platforms. Our participants were found to mostly use Facebook, WhatsApp, and Instagram, and results show that only 23.8% read the privacy agreement. About 30.1% and 13.3% accept friend requests from friends of friends and from strangers, respectively. Also, 89.2% of users have their privacy features enabled and 82.9% indicate satisfaction with the current privacy settings available on their social media sites, which may be the reason only 54.2% of users show a high level of concern about their privacy.

Adebayo Omotosho, Peace Ayegba, Justice Emuoyibofarhe
An Efficient Holistic Schema Matching Approach

Schema matching enables communication between heterogeneous, autonomous and distributed data sources. The matching approach is chosen depending on the number of sources to integrate: pairwise matching approaches for a small to medium number of data sources, and holistic matching approaches for a large number. Nevertheless, current matching approaches have been shown to achieve only moderate matching accuracy, and holistic matching approaches operate as a series of two-way matching steps. In this paper, we present hMatcher, an efficient holistic schema matching approach. To execute holistic schema matching, hMatcher captures frequent schema elements in the given domain prior to any matching operation; to achieve high matching accuracy, it uses a context-based semantic similarity measure. Experimental results on a real-world domain show that hMatcher performs holistic schema matching properly and outperforms current matching approaches in terms of matching accuracy.
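A pairwise element matcher, the two-way building block that holistic approaches such as hMatcher generalize, can be sketched with a simple character-trigram similarity. This is a purely syntactic stand-in for illustration, not hMatcher's context-based semantic measure:

```python
def trigrams(s):
    """Set of character trigrams of a schema element name."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)} or {s}

def similarity(a, b):
    """Jaccard overlap of trigram sets, in [0, 1]."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def match_schemas(elems_a, elems_b, threshold=0.3):
    """For each element of schema A, pair it with its most similar element of schema B."""
    matches = []
    for a in elems_a:
        best = max(elems_b, key=lambda b: similarity(a, b))
        if similarity(a, best) >= threshold:
            matches.append((a, best))
    return matches
```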

Aola Yousfi, Moulay Hafid El Yazidi, Ahmed Zellou
AnnoGram4MD: A Language for Annotating Grammars for High Quality Metamodel Derivation

The quest to transfer software artifacts between the modelware and grammarware technical spaces has intensified in recent decades. In particular, the need to port grammar-based concepts into the modelware space has birthed efforts to synthesise Ecore-based metamodels from Extended Backus-Naur Form (EBNF) grammars. However, automatic derivation of high-quality metamodels from grammars is still a challenge, as existing solutions produce metamodels containing superfluous classes, anonymous classifiers, or both, making the results less useful. AnnoGram4MD addresses these issues by adding special annotations to the grammar as complementary information to guide the derivation algorithm towards producing high-quality metamodels. Compared with existing solutions on a sample grammar, AnnoGram4MD reduced the number of EClassifiers by 52% and produced no anonymous EClassifiers in the generated metamodel.

Hamzat Olanrewaju Aliyu, Oumar Maïga
Design and Implementation of an IoT Based Baggage Tracking System

Missing pieces of baggage, loss of luggage, and damage to customers' belongings are common flaws faced in the aviation industry around the world. Passengers in other transportation sectors are also at risk of luggage theft as they transit from one location to another. Therefore, a system needs to be designed and developed to combat these problems. The proposed system has a GSM/GPS module integrated into the tracking system to keep it actively connected at all times, and an Arduino microcontroller for information processing. The system provides the location of luggage on a map for real-time tracking: the GPS module retrieves the location coordinates of the bag and sends them to the microcontroller for processing. Afterward, the processed information is sent as an SMS through the GSM module, which provides a connection between the bag and the passenger over the GSM communication system. This IoT-based device gives passengers the advantage of seeing the current location of their baggage from anywhere in the world. If implemented, this system will reduce the stress experienced by both passengers and the aviation industry in locating missing, mislaid, or stolen suitcases.

Olamilekan Shobayo, Ayobami Olajube, Obina Okoyeigbo, Jesse Ogbonna
Validation of Computational-Rabi’s Driver Training Model for Prime Decision-Making

This paper presents the validation of the Computational-Rabi's Driver Training (C-RDT) model for primed decision-making. Validation is used to establish the logical correctness of the model and thereby demonstrate its workability; such an evaluation of the C-RDT model has not yet been reported in the literature, and bridging that gap is the novelty of this study. To validate the model, an experiment was conducted with human participants. The features of an adapted driving-game simulator were mapped to the external factors of the model's awareness component, and the validation instrument was designed from the external and temporal factors of its training component. Only a post-test experiment was conducted to examine the effectiveness of the model factors: it determines the effect of training with the game simulator on the automaticity with which a driver makes effective primed decisions. Participants were divided into a control group, which received no training, and an experimental group, which was trained. Two hypotheses, H0 and H1, were set based on the outcomes of training. The results show that participants in the experimental group had better decision-making skills than those in the control group. This supports the alternative hypothesis (H1), implying that the training factors in the model improve a driver's primed decision-making during emergencies, which establishes the validity of the model.
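The control-versus-experimental comparison described above is the classic two-independent-samples setting. As a minimal sketch of how such post-test scores might be compared, here is Welch's t statistic in plain Python; the sample data in the usage below are invented for illustration and are not the study's measurements, and the paper does not state which significance test it used.

```python
from statistics import mean, stdev

def welch_t(experimental, control):
    """Welch's t statistic for two independent samples with
    possibly unequal variances: (m1 - m2) / sqrt(v1/n1 + v2/n2)."""
    m1, m2 = mean(experimental), mean(control)
    v1, v2 = stdev(experimental) ** 2, stdev(control) ** 2
    n1, n2 = len(experimental), len(control)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)
```

A large positive t on post-test decision scores would support H1 (trained drivers decide better), subject to the usual degrees-of-freedom and p-value lookup.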

Rabi Mustapha, Muhammed Auwal Ahmed, Muhammad Aminu Ahmad
Efficient Approaches to Agile Cost Estimation in Software Industries: A Project-Based Case Study

Agile was invented to improve upon and overcome the deficiencies of traditional software development. At present, the agile model is used very widely in software development because of its support for developers and clients: it increases developer-client interaction and helps make software products defect-free. The agile model is fast and increasingly popular because of its features and flexibility. The study shows that agile software development is an efficient and effective strategy that easily accommodates user changes, but it is not free from errors or shortcomings. It also shows that COCOMO and Planning Poker, although well-known cost estimation techniques, are not well suited to agile development. We conduct a study on real-time projects from multinational software industries, using different estimation approaches to estimate each project's cost and time, and we explain these projects thoroughly along with the limitations of the techniques. The study demonstrates that both traditional and modern estimation approaches still have limitations in estimating projects accurately.
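For readers unfamiliar with COCOMO, the basic form of the model mentioned above estimates effort and schedule from delivered size alone, which is one reason it fits agile projects poorly: size is not known up front and changes sprint by sprint. A minimal sketch of basic COCOMO with Boehm's standard coefficients:

```python
# Boehm's basic COCOMO coefficients per project mode:
# effort E = a * KLOC^b (person-months), schedule T = c * E^d (months)
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo_basic(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b
    tdev = c * effort ** d
    return effort, tdev
```

For a 10 KLOC organic project this gives roughly 27 person-months over about 8.7 months; an agile team would instead re-estimate per iteration (e.g., with Planning Poker story points), which is exactly the mismatch the paper examines.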

Shariq Aziz Butt, Sanjay Misra, Diaz-Martinez Jorge Luis, De la Hoz-Franco Emiro
Efficient Traffic Control System Using Fuzzy Logic with Priority

The increase in the number of vehicles on the road is evident in the rate of daily traffic congestion. The problems caused by congestion are difficult to measure: emissions of dangerous substances are among its worrisome effects on the weather, while theft and delays to motorists are others. More and better road network connections have been found to be effective, but road networks often contain intersections that introduce right-of-way conflicts, which are resolved with road traffic light control systems. This work attempts to improve upon the existing pre-programmed, stationary traffic light control system at the Kaduna Refinery Junction (KRJ), the major road connection to Kaduna main town from the southern part of the state. On working days, motorists from other parts of the country, public and private servants, students, business people, and others travelling through the southern part of the state meet at the KRJ, while trucks conveying petroleum products from the Kaduna Refinery and vehicles transporting workers further affect the flow of traffic at the junction. An efficient fuzzy logic (FL) model is developed for the optimal scheduling of the traffic light control system using TraCI4MATLAB and Simulation of Urban Mobility (SUMO). An average improvement of 2.74% over an earlier result was obtained, and when priority for emergency vehicles is considered, an improvement of 66.79% over static phase scheduling was recorded. This shows that FL can be effective for traffic control systems.
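The core idea of a fuzzy traffic controller is that queue measurements are mapped through overlapping membership functions to rule strengths, and the rule outputs are blended rather than switched. The sketch below is a deliberately tiny zero-order Sugeno controller with two invented rules and membership breakpoints; the actual paper's rule base, inputs, and defuzzification run inside TraCI4MATLAB/SUMO and are not reproduced here.

```python
def mu_short(q):
    # membership of "short queue": 1 at q = 0 vehicles, falling to 0 at q = 10
    return max(0.0, 1.0 - q / 10.0)

def mu_long(q):
    # membership of "long queue": 0 below q = 5, reaching 1 at q = 15
    return min(1.0, max(0.0, (q - 5.0) / 10.0))

def green_extension(q):
    """Zero-order Sugeno blend of two illustrative rules:
       IF queue is short THEN extend green by 5 s
       IF queue is long  THEN extend green by 20 s"""
    w_short, w_long = mu_short(q), mu_long(q)
    return (w_short * 5.0 + w_long * 20.0) / (w_short + w_long)
```

A priority scheme like the paper's would add a third input (emergency vehicle detected) whose rule dominates the blend.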

Ayuba Peter, Babangida Zachariah, Luhutyit Peter Damuut, Sa’adatu Abdulkadir
An Enhanced WordNet Query Expansion Approach for Ontology Based Information Retrieval System

Ontology-based information retrieval is a cutting-edge approach capable of enhancing the semantic relevance of results returned from documents. It works better when terms similar and relevant to the user's initial query are added from data sources such as WordNet, a technique known as query expansion. However, the precision of the added terms tends to suffer because of WordNet's existing inability to handle inflected forms of words. In light of this, this research designs a rule-based Web Ontology Language (OWL) information retrieval system with an enhanced WordNet for query expansion, limited to the noun subnet database. A combined ontology development methodology was employed, with OWL 2 used to develop an ontology for the novel domain of maize crops, covering primarily soil, fertilizer, and irrigation knowledge. The ontology is rule-based because its competency questions were modelled in first-order logic (FOL) and encoded with the Semantic Web Rule Language (SWRL). The WordNet enhancement was implemented in Python using a lemmatization lookup table and the third-party modules Natural Language Toolkit (NLTK), pattern.en, and enchant. As a result, the improved WordNet can handle an inflected word without stemming it to its root, and it correctly suggests related words when the user misspells a term, thereby reducing time wastage and fatigue. This development also aids ontology validation, alongside the other forms of validation carried out. The research ultimately offers an effective ontology-based information retrieval system based on the proposed algorithmic framework.
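The lookup-table approach to lemmatization mentioned above can be illustrated without the full WordNet machinery: an inflected query term is first mapped to its lemma (not crudely stemmed), and the lemma is then expanded with its synonyms. The tiny tables below are invented stand-ins for the noun subnet and the paper's lookup table, chosen from the maize-crop domain for flavour.

```python
# Illustrative stand-ins, NOT the actual WordNet noun subnet or the
# paper's lemmatization table.
LEMMA_TABLE = {"fertilizers": "fertilizer", "soils": "soil", "crops": "crop"}
SYNONYMS = {
    "fertilizer": ["manure", "plant food"],
    "soil": ["earth", "ground"],
    "crop": ["produce", "harvest"],
}

def expand_query(terms):
    """Lemmatize each term via table lookup, then append its synonyms."""
    expanded = []
    for term in terms:
        lemma = LEMMA_TABLE.get(term.lower(), term.lower())
        expanded.append(lemma)
        expanded.extend(SYNONYMS.get(lemma, []))
    return expanded
```

In the actual system the lookup table is backed by NLTK's resources, and a spell-checker (enchant) catches misspelt inputs before lemmatization; both are omitted here to keep the sketch self-contained.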

Enesi Femi Aminu, Ishaq Oyebisi Oyefolahan, Muhammad Bashir Abdullahi, Muhammadu Tajudeen Salaudeen
Design of a Robotic Wearable Shoes for Locomotion Assistance System

The inability of a patient to move freely, or paralysis of one part of the body, is an indication of stroke and results in impaired locomotion. In this research, a robotic wearable locomotion assistance system in the form of a pair of shoes is developed using a closed-loop mechatronic and embedded-system approach. Its purpose is to assist patients with impaired locomotion and to improve the passive control and design of orthoses that structurally support people with moderate lower-limb weaknesses. The system adapts by varying position instantaneously during motion and by managing the stiffness of the joint. The wearable robotic shoe helps the paralytic (prosthetic) leg track the position of the non-paralytic leg, using sensors and transceivers to establish communication between foot posture and support. It also helps stroke patients with foot and leg orthoses or prostheses to walk linearly in an upright position (maintaining foot and leg alignment), improves balance, and supports the arch and heel of the patient. A prototype of the system was implemented and tested, and the results show high accuracy in linear tracking and alignment.
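The "paralytic leg tracks the healthy leg" behaviour is, at its simplest, a closed-loop position-tracking problem. The sketch below shows a bare proportional controller for that loop; the gain, step count, and one-dimensional position model are illustrative assumptions, not the paper's control design, which would also handle joint stiffness and run on the embedded hardware.

```python
def track(target, start, kp=0.5, steps=20):
    """Proportional closed-loop tracking: each cycle the prosthetic
    position moves a fraction kp of the remaining error toward the
    healthy foot's position reported over the sensor/transceiver link.
    Returns the position trace so convergence can be inspected."""
    pos = start
    trace = [pos]
    for _ in range(steps):
        error = target - pos      # sensed misalignment between the feet
        pos += kp * error         # actuator correction this cycle
        trace.append(pos)
    return trace
```

With 0 < kp < 1 the error shrinks geometrically each cycle, which is the "high accuracy in linear tracking" behaviour one would hope to see in such a prototype.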

Bala Alhaji Salihu, Lukman Adewale Ajao, Sanusi Adeiza Audu, Blessing Olatunde Abisoye
Design of Cash Advance Payment System in a Developing Country: A Case Study of First Bank of Nigeria Mortgages Limited

As Nigeria's economy continues to improve, automated cash advance payments can be achieved by adopting the automated Cash Advance Payment System (CAPS). The aim is to promote good corporate governance, thereby increasing stakeholders' confidence in the day-to-day activities of the bank. CAPS is a system designed to assist organizations in managing and administering cash advances to their employees and other related parties. The motivation for this work is the set of challenges posed by manual cash advance processing, which is prone to error and fraud, requires unduly long hours to compute cash advances, and tracks employee records poorly; insider abuse of cash advances and expense mismanagement have been the major contributors to this menace. This paper presents the design of a cash advance payment system that curbs non-repayment and prevents fraud. The languages of choice for implementation are PHP for the client frontend and MySQL for the backend, and a use case diagram was employed in the design. The developed system stores information with ease, allows easy retrieval of cash advance details, eliminates cash advance fraud, and generates reports that can be printed for any date and year as hard copy for management review and audit purposes. The paper recommends deploying an automated cash advance system to overcome the highlighted problems inherent in manual cash advance systems.
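The "curbs non-repayment" goal typically reduces to a simple eligibility rule enforced at request time: no new advance while a previous one is outstanding, and no request above the policy limit. A minimal sketch of such a check (the rule itself is an illustrative assumption; the paper's actual business rules in PHP/MySQL are not reproduced):

```python
def can_request_advance(outstanding_balance, amount, limit):
    """An employee may take a new cash advance only if the previous
    advance is fully repaid and the requested amount is positive and
    within the policy limit."""
    return outstanding_balance == 0 and 0 < amount <= limit
```

In the deployed system this check would be a server-side validation backed by the MySQL advance-ledger table, so it cannot be bypassed from the frontend.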

Saka John, Jacob O. Mebawondu, Ajayi O. Olajide, Mebawondu O. Josephine
Users’ Perception of the Telecommunication Technologies Used for Improving Service Delivery at Federal University Libraries in Anambra and Enugu State

The study examines users' perception of the telecommunication technologies used to improve service delivery at federal university libraries in Anambra and Enugu States of Nigeria. Its specific objectives are to determine users' awareness of and satisfaction with the use of telecommunication technologies in service delivery, and to highlight the roles and challenges associated with the use of these facilities in the libraries under study. The paper adopts a descriptive survey design. Its target population was 11,609 registered library users, from which a proportionate random sampling technique selected 5% (580 respondents) as the study sample. A self-developed questionnaire was used for data collection, and the data were analyzed using means and standard deviations. The results reveal that users were aware of some of the telecommunication technologies in use but were not satisfied with the functional state of most of those used for service delivery. The study also identifies various factors that undermined their effective use in the study area, among them poor funding, lack of competent personnel, irregular power supply, and lack of internet connectivity. Suggestions are made for improving the functional state of the facilities to ensure prompt and efficient service delivery.

Rebecca Chidimma Ojobor
A Step by Step Guide for Choosing Project Topics and Writing Research Papers in ICT Related Disciplines

ICT is a fast-growing and rapidly changing field. A great deal of research is being done in its various areas, with results presented on platforms such as conferences, journals, and books. A common observation is that publications from developing countries (especially in sub-Saharan Africa) do not appear with reputed and established publishers even when their technical work and experiments are good. This is due to several factors, including lack of professional presentation, the novelty of the topic, and the quality of the literature review. This work guides final-year bachelor's students, postgraduate students (master's and PhD), and young researchers, especially those working in computing-related disciplines, on how to convert their project work into quality publications. The authors detail how these researchers can select suitable project topics, conduct a proper review, write the key components of a paper, and present their results in an appropriate form (that is, writing style from abstract to conclusion). The paper also provides guidance on how to write various types of review papers.

Sanjay Misra
Backmatter
Metadata
Title
Information and Communication Technology and Applications
edited by
Dr. Sanjay Misra
Bilkisu Muhammad-Bello
Copyright Year
2021
Electronic ISBN
978-3-030-69143-1
Print ISBN
978-3-030-69142-4
DOI
https://doi.org/10.1007/978-3-030-69143-1