
2023 | Book

Soft Computing in Data Science

7th International Conference, SCDS 2023, Virtual Event, January 24–25, 2023, Proceedings

Editors: Marina Yusoff, Tao Hai, Murizah Kassim, Azlinah Mohamed, Eisuke Kita

Publisher: Springer Nature Singapore

Book Series: Communications in Computer and Information Science


About this book

This book constitutes the refereed proceedings of the 7th International Conference on Soft Computing in Data Science, SCDS 2023, which was held virtually in January 2023.
The 21 revised full papers presented were carefully reviewed and selected from 61 submissions. The papers are organized in topical sections on artificial intelligence techniques and applications; computing and optimization; data analytics and technologies; data mining and image processing; mathematical and statistical learning.

Table of Contents

Frontmatter

Artificial Intelligence Techniques and Applications

Frontmatter
Explainability for Clustering Models
Abstract
The field of Artificial Intelligence is growing at a rapid pace. Applications of larger and more complex algorithms have become commonplace, making them harder to understand. The explainability of the algorithms and models in practice has become a necessity as these models are widely adopted to make significant and consequential decisions. This makes it even more important to keep our understanding of the decisions and results of AI up to date. Explainable AI methods currently address interpretability, explainability, and fairness in supervised learning methods, while far less attention has been paid to explaining the results of unsupervised learning methods. This paper proposes an extension of supervised explainability methods to unsupervised methods as well. We have researched and experimented with widely used clustering models to show the applicability of the proposed solution to the most common unsupervised problems. We have also thoroughly investigated methods to validate the results of both supervised and unsupervised explainability modules.
Mahima Arora, Ankush Chopra
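As a concrete illustration of extending supervised explainability to clustering, here is a minimal sketch of the common surrogate-model idea: fit a clustering model, then explain its pseudo-labels with a supervised learner. The dataset, model choices, and use of impurity-based importances are illustrative assumptions, not the paper's exact pipeline.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

X, _ = load_iris(return_X_y=True)

# Step 1: the unsupervised model produces pseudo-labels.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: a supervised surrogate learns to reproduce the clustering.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Step 3: any supervised explainability tool now applies; here, global
# impurity-based importances indicate which features drive the clusters.
for i, imp in enumerate(surrogate.feature_importances_):
    print(f"feature {i}: importance {imp:.3f}")
```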
Fault Diagnosis Methods of Deep Convolutional Dynamic Adversarial Networks
Abstract
A deep convolutional dynamic adversarial network (DCDAN) is proposed for intelligent fault diagnosis to address the issue that it is easy to obtain a large amount of labeled fault-type data in a laboratory environment but difficult or impossible under actual working conditions. This method transfers the fault diagnosis knowledge acquired in the laboratory environment to actual engineering equipment, obtains more comprehensive fault information by fusing time-domain and frequency-domain data, employs a residual network to deeply extract fault features in the feature extraction layer, and makes use of the extracted fault features to improve fault diagnosis. To achieve unsupervised transfer learning, the marginal and conditional probability distributions of the source and target domains are aligned by maximizing the domain classification loss, while the failure classification of mechanical equipment is achieved by minimizing the class prediction loss. The experimental results demonstrate that this model achieves high classification accuracy on the unlabeled target data set and can effectively address the lack of labels in the data set, i.e., realize intelligent mechanical fault diagnosis, under certain conditions.
Tao Hai, Fuhao Zhang
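The abstract does not give implementation details; a standard ingredient of domain-adversarial methods such as DCDAN is a gradient reversal layer, which lets the feature extractor maximize the domain classification loss while the class head minimizes the prediction loss. A minimal PyTorch sketch with stand-in layer sizes:

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass so the feature extractor maximizes the domain loss."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in extractor
fault_head = nn.Linear(64, 10)    # class prediction (fault types)
domain_head = nn.Linear(64, 2)    # source (lab) vs. target (field)

x = torch.randn(32, 128)
z = features(x)
fault_logits = fault_head(z)                             # minimized vs. labels
domain_logits = domain_head(GradReverse.apply(z, 1.0))   # adversarial branch
```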
Carbon-Energy Composite Flow for Transferred Multi-searcher Q-Learning Algorithm with Reactive Power Optimization
Abstract
In the conventional carbon emission computation paradigm, the primary obligation falls on the electricity-generating side. However, according to the theory of carbon emission flow, the grid side and the user side are the primary producers of carbon emissions and must carry the majority of the obligation. To minimize carbon dioxide emissions, the carbon emission flow analysis approach is required to move the carbon footprint from the power generation side to the grid side and the user side, in order to create more efficient energy saving and emission reduction plans. To accomplish the low-carbon, energy-saving, and cost-effective operation of the power system, the carbon-energy composite flow is included in the objective function of reactive power optimization in this study. To solve the reactive power optimization model of carbon-energy composite flow and to demonstrate the superiority of the transferred multi-searcher Q-learning algorithm, this paper designs a carbon-energy composite flow optimization model on the IEEE 118-node system and compares the proposed algorithm against six others, such as the genetic algorithm, in simulation experiments. The simulation and verification outcomes of an example demonstrate that the suggested model and algorithm can achieve economical, low-carbon, and secure operation of the power system.
Jincheng Zhou, Hongyu Xue
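For readers unfamiliar with Q-learning, the core update the paper builds on looks as follows. The discretization, environment, and reward here are hypothetical placeholders (a real reward would reflect the carbon-energy composite objective), and the multi-searcher transfer variant would additionally share Q-tables among searchers.

```python
import numpy as np

# Hypothetical discretization: states index grid operating points, actions
# index reactive power control steps (capacitor taps, transformer ratios).
n_states, n_actions = 100, 8
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Placeholder environment: returns a next state and a reward that, in
    the paper's setting, would score the carbon-energy composite flow."""
    return int(rng.integers(n_states)), -float(rng.random())

s = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```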
Multi-source Heterogeneous Data Fusion Algorithm Based on Federated Learning
Abstract
With the rapid advancement of science and technology, the number of edge devices with computing and storage capabilities, as well as the data traffic they generate, continues to increase, making it difficult for the centralized processing mode centered on cloud computing to efficiently process the data generated by edge devices. Moreover, multimodal data is abundant due to the diversity of edge network devices and the ongoing improvement of data representation techniques. To make full use of heterogeneous data on edge devices and to tackle the “data communication barrier” problem caused by data privacy in edge computing, a multi-source heterogeneous data fusion technique based on Tucker decomposition is developed. The algorithm introduces tensor Tucker decomposition theory and realizes federated learning by constructing a high-order tensor with heterogeneous spatial dimension characteristics to capture the high-dimensional features of heterogeneous data and solve the fusion problem of heterogeneous data without interaction. This method, unlike its predecessors, can efficiently integrate multi-source heterogeneous data without data transfer, thereby overcoming privacy- and security-related data communication concerns. Finally, the effectiveness of the technique is confirmed on the MOSI data set, and the shortcomings of the original federated learning algorithm are addressed by an updated federated weighted average algorithm, in which data quality is calculated from subjective criteria using AHP. We present Influence, an improved federated weighted average method for dealing with multi-source data from a data quality perspective.
Jincheng Zhou, Yang Lei
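A minimal sketch of the Tucker decomposition at the heart of the fusion algorithm, using the tensorly library on a toy tensor; the dimensions and ranks are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Toy stand-in for a high-order tensor stacking heterogeneous sources.
X = tl.tensor(np.random.rand(20, 16, 12))

# Tucker factorization: a small core tensor plus one factor matrix per mode.
core, factors = tucker(X, rank=[5, 4, 3])

# In a federated setting, only the compact core/factors need be exchanged,
# never the raw per-device data.
print(core.shape, [f.shape for f in factors])
```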
Dynamic Micro-cluster-Based Streaming Data Clustering Method for Anomaly Detection
Abstract
The identification of anomalies in a data stream is a challenge for real-time decision-making. Because massive amounts of streaming data with changing characteristics arrive continuously, real-time and efficient anomaly detection is difficult, and a memory-constrained online detection system that can quickly detect the concept drift of streaming data is required. In this study, a novel model for detecting anomalies using a dynamic micro-cluster scheme is developed. The macro-clusters are generated from a network of connected micro-clusters. When new data items are added, the normal patterns formed in macro-clusters update incrementally in tandem with the dynamic micro-clusters. An outlier may be understood from both a global and a local perspective by examining the global and local densities, respectively. The effectiveness of the suggested approach was evaluated on three different datasets. The findings of the experiment demonstrate that the suggested method is superior to earlier algorithms in terms of both detection accuracy and computational complexity.
Xiaolan Wang, Md Manjur Ahmed, Mohd Nizam Husen, Hai Tao, Qian Zhao
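A minimal sketch of the micro-cluster bookkeeping such streaming methods rely on: absorb a point into the nearest micro-cluster if it fits, otherwise open a new one (a potential local outlier). The radius rule and statistics kept here are simplified assumptions, not the paper's full scheme.

```python
import numpy as np

class MicroCluster:
    """Minimal micro-cluster: running count, linear sum, fixed radius."""
    def __init__(self, x, radius=1.0):
        self.n, self.ls, self.radius = 1, np.array(x, float), radius

    @property
    def center(self):
        return self.ls / self.n

    def absorb(self, x):
        self.n += 1
        self.ls += x

def update(clusters, x, radius=1.0):
    """Insert a point: absorb into the nearest micro-cluster if within its
    radius, otherwise spawn a new micro-cluster."""
    x = np.asarray(x, float)
    if clusters:
        d = [np.linalg.norm(c.center - x) for c in clusters]
        i = int(np.argmin(d))
        if d[i] <= radius:
            clusters[i].absorb(x)
            return clusters
    clusters.append(MicroCluster(x, radius))
    return clusters

clusters = []
for x in np.random.randn(500, 2):
    clusters = update(clusters, x)
print(len(clusters), "micro-clusters")
```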
Federated Ensemble Algorithm Based on Deep Neural Network
Abstract
In the realm of multi-source privacy data protection, federated learning is now one of the most popular research topics. Its architecture can train a common model that satisfies the needs of many parties without the data leaving its local site. On the other hand, there are circumstances in which local model parameters are challenging to integrate and cannot be used for security purposes. As a result, a federated ensemble algorithm based on deep learning is presented, applying both deep learning and ensemble learning within the context of federated learning. Various ensemble algorithms that integrate local model parameters improve the accuracy of the global model while taking into account the security of multi-source data, and the accuracy of the local model is improved by optimizing its parameters. The results of the experiments show that, in comparison to conventional multi-source data processing technology, the accuracy of the algorithm's training model on the MNIST, digits, letter, and wine datasets is improved by 1%, 8%, 1%, and 1%, respectively, and accuracy is guaranteed. It also improves the security of data and models that come from more than one source, which is a very useful feature.
Dan Wang, Ting Wang
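A minimal sketch of weighted federated parameter averaging, the operation such ensemble schemes build on; the quality scores stand in for client weights (in plain FedAvg they would be sample counts) and are illustrative.

```python
import numpy as np

def federated_weighted_average(local_weights, scores):
    """Layer-wise weighted average of client parameter lists; `scores` are
    illustrative client weights (sample counts in plain FedAvg)."""
    w = np.asarray(scores, float)
    w = w / w.sum()
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*local_weights)]

# Three clients, each holding two parameter arrays (e.g., weight and bias).
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_model = federated_weighted_average(clients, scores=[0.5, 0.3, 0.2])
print([p.shape for p in global_model])
```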
Performance Comparison of Feature Selection Methods for Prediction in Medical Data
Abstract
Along with technological advancement, the application of machine learning algorithms in industry, notably in the medical field, has grown and progressed quickly. Medical databases commonly contain a great deal of information about patients' medical histories and conditions, and it is challenging to identify and extract the information that will be relevant and meaningful for machine learning modelling. Moreover, the efficacy of a predictive machine learning algorithm can be enhanced by using only useful and pertinent information. Hence, feature selection is proposed to determine the significant features, and it should be fully utilized when building machine learning algorithms. This study analyzes filter, wrapper, and embedded feature selection methods for medical data with two predictive machine learning algorithms, Random Forest and CatBoost. The experiment is carried out by evaluating the performance of the machine learning models with and without feature selection. According to the results, CatBoost with RFE shows the best performance in comparison to Random Forest with other feature selection methods.
Nur Hidayah Mohd Khalid, Amelia Ritahani Ismail, Normaziah Abdul Aziz, Amir Aatieff Amir Hussin
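A minimal sketch of the wrapper-style comparison described above, using scikit-learn's RFE with a Random Forest on a public medical-style dataset; CatBoost's scikit-learn-compatible classifier could be swapped in the same way. The dataset and feature count are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in medical data

# Wrapper method: RFE recursively drops the weakest features according to
# the estimator's importances.
model = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(model, n_features_to_select=10)

score_all = cross_val_score(model, X, y, cv=5).mean()
X_sel = selector.fit_transform(X, y)
score_sel = cross_val_score(model, X_sel, y, cv=5).mean()
print(f"all features: {score_all:.3f}  selected: {score_sel:.3f}")
```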
An Improved Mask R-CNN Algorithm for High Object Detection Speed and Accuracy
Abstract
This paper addresses the problems of low grasping accuracy, low detection speed, and low grasping success rate in object detection for robot grasping tasks. This research is significant for robots grasping in complex environments, such as the disorderly placement of multiple target objects and partially occluded objects. This paper analyzes the structure and principle of the Mask R-CNN target detection algorithm in depth. An improved target recognition, grasping, and positioning algorithm based on Mask R-CNN is proposed to solve the problem of low operation speed. This method uses adjacent frames as comparison templates to find image differences, thus improving recognition accuracy and reducing repeated estimation of regions. Pre-experiment results targeting gears in a complex background exhibit better accuracy than the other methods. The CPU utilization of the proposed algorithm is also better than that of the other methods, balancing detection speed and accuracy; it additionally reduces the amount of data processed during operation and improves operational efficiency, and thus has high research and reference value.
Qingchuan Liu, Muhammad Azmi Ayub, Fazlina Ahmat Ruslan, Mohd Nor Azmi Ab Patar, Shuzlina Abdul-Rahman
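A minimal sketch of the adjacent-frame differencing idea using OpenCV; the video source, threshold value, and detector hook are illustrative assumptions.

```python
import cv2

cap = cv2.VideoCapture("workcell.mp4")  # hypothetical input stream
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixel-wise difference against the previous frame: only changed regions
    # need re-detection, so static regions are not re-estimated every frame.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 0:
        pass  # run the Mask R-CNN detector on the changed regions only
    prev_gray = gray
```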

Computing and Optimization

Frontmatter
Federated Learning with Class Balanced Loss Optimized by Implicit Stochastic Gradient Descent
Abstract
Federated learning is a paradigm for distributed machine learning in which a central server interacts with a large number of remote devices to create the optimal global model. System and data heterogeneity are now the two largest impediments to federated learning. This work suggests a federated learning strategy based on implicit stochastic gradient descent optimization as a solution to the problem of heterogeneity-induced slow convergence, or even non-convergence, of the global model. It estimates the average global gradient using locally uploaded model parameters, without computing the first derivative or updating global model parameters through gradient descent, allowing the global model to converge faster and more reliably with fewer communication rounds. In experiments simulating varying degrees of heterogeneous settings, the proposed strategy delivered faster and more stable convergence than FedProx and FedAvg, decreasing the number of communication rounds on highly heterogeneous synthetic datasets by around 50% compared to FedProx and thereby considerably enhancing the stability and robustness of federated learning.
Jincheng Zhou, Maoxing Zheng
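The following is one simplified reading of the gradient-estimation idea in the abstract (recovering gradients from parameter deltas instead of differentiating); the learning rate, shapes, and single-local-step assumption are toy assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

def estimate_global_gradient(w_global, local_updates, lr):
    """Recover each client's average gradient from its parameter change,
    g_i ~= (w_global - w_i) / lr, then average across clients; the server
    never computes a derivative itself."""
    grads = [(w_global - w_i) / lr for w_i in local_updates]
    return np.mean(grads, axis=0)

w_global = np.zeros(5)
# Toy clients: each returns parameters after one local step of size 0.01.
local = [w_global - 0.01 * np.random.randn(5) for _ in range(4)]
g = estimate_global_gradient(w_global, local, lr=0.01)
w_global = w_global - 0.5 * g  # server step using the estimated gradient
```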
Electricity Energy Monitoring System for Home Appliances Using Raspberry Pi and Node-Red
Abstract
This paper research deals with the shortcoming of late electricity bills that are generated only after the electrician comes to the residential area and measures the reading of the electricity meter manually. Electricity bill is only submitted once a month in Malaysia. The main concern of the project is to design an Electricity Energy Monitoring System for Home Appliances using Raspberry Pi and Node-RED; consequently, to generate real-time bill. The architecture of the mechanism is designed on computer, using Node-RED-based coding and Circuito.Io-based simulation to achieve the goals of the project. The circuit design had been simulated before the real system was realized. Node-RED programming tool requires necessary coding to be keyed in into the Raspberry Pi. The microcontroller essentially provides the same function as NodeMCU, but is replaced with Arduino Uno as a more suitable choice, due to issues of the initial components having produced undesirable result. The outcome of this project is there are lots of information provided in dashboard form. The collected and generated data are stored in the SQLite database in Raspberry Pi. This project applies the Internet of Things (IoT) system to remotely monitor home appliances over the internet. In the proposed project, a current sensor senses the electricity energy, which then uploads the data via MQTT. The total consumption can be viewed using Node-RED.
Nur Shazwany Zamani, Muhammad Shafie Mohd Ramli, Khairul Huda Yusof, Norazliani Md. Sapari
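A minimal sketch of the sense-and-publish loop on the device side, using the paho-mqtt library; the topic name, hostname, voltage constant, and sensor stub are illustrative assumptions, with Node-RED assumed to subscribe, store to SQLite, and render the dashboard.

```python
import json
import time
from paho.mqtt import publish  # pip install paho-mqtt

VOLTAGE = 240.0  # assumed nominal mains voltage (V)

def read_current_sensor():
    """Placeholder for the real ADC/current-sensor read; returns amperes."""
    return 1.2

for _ in range(3):  # a real deployment would loop forever
    payload = json.dumps({"ts": time.time(),
                          "power_w": VOLTAGE * read_current_sensor()})
    # Node-RED's MQTT-in node would subscribe to this topic.
    publish.single("home/energy/power", payload, hostname="raspberrypi.local")
    time.sleep(5)
```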
Short-Time Fourier Transform with Optimum Window Type and Length: An Application for Sag, Swell and Transient
Abstract
The characteristics of power quality signals are non-stationary, and their behaviour causes negative consequences in sensitive equipment. Modern cross-term time-frequency distributions (TFDs) are able to characterize power quality accurately but suffer from measurement delay, since the power quality signals, in this case sag, swell, and transient, need to be analyzed in real time. It is shown that the one window shift (OWS) property of the linear time-frequency representation (TFR) resulting from the short-time Fourier transform (STFT) satisfies accuracy, complexity, and memory requirements. By optimally selecting a window length of 512, the TFR provides optimal time and frequency localization, and spectral leakage can be reduced by the Hanning window. The proposed technique can characterize the power quality signals with an average accuracy of 95%, while its complexity and memory usage are low. Finally, the paper concludes with a recommended pre-setting of optimum window type and length for real-time power quality measurement.
Muhammad Sufyan Safwan Mohamad Basir, Nur-Adibah Raihan Affendy, Mohamad Azizan Mohamad Said, Rizalafande Che Ismail, Khairul Huda Yusof
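A minimal sketch of the recommended setting, an STFT with a Hann(ing) window of length 512, using SciPy on a synthetic sag signal; the sampling rate and signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 12_800                       # assumed sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)    # 50 Hz fundamental
x[2000:3000] *= 0.5               # synthetic voltage sag

# Hanning window of length 512, as recommended, limits spectral leakage.
f, tt, Z = stft(x, fs=fs, window="hann", nperseg=512)
print(Z.shape)  # (frequency bins, time frames): the TFR to be analyzed
```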

Data Analytics and Technologies

Frontmatter
Cox Point Process with Ridge Regularization: A Better Approach for Statistical Modeling of Earthquake Occurrences
Abstract
The inhomogeneous Cox point process is a popular model for analyzing natural disasters, such as earthquake occurrences, involving geological variables. The standard two-step estimation procedure, however, does not perform well when such variables exhibit high correlation. Since ridge regularization has a reputation for handling multicollinearity problems, in this study we adapt such a procedure to the spatial point process framework. In particular, we modify the two-step procedure by adding ridge regularization for parameter estimation of the Cox point process model. The estimation procedure reduces to either Poisson-based or logistic-based regression. We apply our proposed method to model the earthquake distribution in Sumatra. The results show that considering ridge regularization in the model is advantageous, yielding a smaller value of the Akaike Information Criterion (AIC); in particular, the Cox point process model with logistic-based regression has the smallest AIC.
Alissa Chintyana, Achmad Choiruddin, Sutikno
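A minimal sketch of ridge-regularized Poisson regression, the first of the two estimation routes mentioned above, using scikit-learn on deliberately collinear toy covariates; the data are illustrative, not the Sumatra catalog.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Toy stand-in: event counts per spatial cell against geological covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=500)  # deliberately collinear
y = rng.poisson(np.exp(0.3 * X[:, 0] - 0.2 * X[:, 2]))

# alpha > 0 adds the L2 (ridge) penalty that stabilizes estimates under
# multicollinearity; alpha = 0 recovers plain Poisson regression.
model = PoissonRegressor(alpha=1.0).fit(X, y)
print(model.coef_)
```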
Discovering Popular Topics of Sarawak Gazette (SaGa) from Twitter Using Deep Learning
Abstract
The emergence of social media as an information-sharing platform is progressively increasing. With the progress of artificial intelligence, it is now feasible to analyze historical documents through social media. This study aims to understand more about how people use social media to share the content of the Sarawak Gazette (SaGa), one of the valuable historical documents of Sarawak. In the study, a corpus of short Tweets relating to SaGa was built according to keyword search criteria. The Tweet corpus is then analyzed to extract topics using a topic model, specifically Latent Dirichlet Allocation (LDA), and the topics are further classified with a Convolutional Neural Network (CNN) classifier.
Nur Ain Binti Nor Azizan, Suhaila Binti Saee, Muhammad Abdullah Bin Yusof
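A minimal sketch of the LDA step on a toy stand-in for the SaGa tweet corpus, using scikit-learn; the example tweets, preprocessing, and topic count are illustrative assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [  # illustrative stand-ins for the SaGa tweet corpus
    "sarawak gazette colonial history kuching archive",
    "old sarawak gazette trade rubber price report",
    "gazette kuching museum heritage exhibition",
]

# Bag-of-words counts, then LDA to extract latent topics.
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```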
Comparison Analysis of LSTM and CNN Variants with Embedding Word Methods for Sentiment Analysis on Food Consumption Behavior
Abstract
Lockdowns, working from home, staying at home, and physical distancing are expected to significantly impact consumer attitudes and behaviors during the COVID-19 pandemic. During the implementation of the Movement Control Order, Malaysians' food preferences were already shifting, giving rise to new consumption behavior. Since sentiment analysis has played a significant role in many areas of natural language processing, mainly using social media data from Twitter, interest in it has increased in recent years. However, research on the performance of various sentiment analysis methodologies, such as n-gram ranges, lexicon techniques, deep learning, word embedding, and hybrid methods, within this specific domain is limited. This study evaluates several approaches to determine the best approach for tweets on food consumption behavior in Malaysia during the COVID-19 pandemic. It combined unigram and bigram ranges with two lexicon-based techniques, TextBlob and VADER, and three deep learning classifiers, Long Short-Term Memory Networks (LSTM), Convolutional Neural Networks (CNN), and their hybridization. Word2Vec and GloVe are the two word-embedding approaches used by LSTM-CNN. The GloVe embedding on the TextBlob approach with a combined Unigram + Bigram [1,2] range produced the best results, with 85.79% accuracy and an 85.30% F1-score. According to these findings, LSTM outperforms the other classifiers because it achieves the highest scores for both performance metrics. The classification performance can be improved in future studies if the dataset is more evenly distributed across the positive and negative labels.
Nurul Izleen Ramzi, Marina Yusoff, Norzaidah Md Noh
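A minimal sketch of one plausible LSTM-CNN hybrid in Keras: the convolution extracts local n-gram features and the LSTM models their order. Layer sizes are assumptions, and in the study's setup the Embedding layer would be initialized with Word2Vec or GloVe vectors.

```python
from tensorflow.keras import Sequential, layers

VOCAB, MAXLEN, EMB = 20_000, 60, 100  # assumed sizes, not the paper's exact ones

model = Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, EMB),          # would load GloVe/Word2Vec weights
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```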

Data Mining and Image Processing

Frontmatter
Towards Robust Underwater Image Enhancement
Abstract
Underwater images often suffer from blurring and color distortion due to absorption and scattering in the water. Such effects are undesirable since they may hinder computer vision tasks. Many underwater image enhancement techniques have been explored to address this issue, each with varying degrees of success, as the large variety of distortions in underwater images is difficult to handle by any single method. This study examines four underwater image enhancement methods: Underwater Light Attenuation Prior (ULAP), statistical Background Light and Transmission Map estimation (BLTM), Natural-based Underwater Image Color Enhancement (NUCE), and Global–Local Networks (GL-Net). These methods are evaluated on the Underwater Image Enhancement Benchmark (UIEB) dataset using quantitative metrics, i.e., SSIM, PSNR, and CIEDE2000, and a qualitative analysis of image quality attributes is also performed. The results show that GL-Net achieves the best enhancement result, but based on the qualitative assessment this method still has room for improvement. A proper combination of the non-learning-based and learning-based components should be investigated to further improve the robustness of the method.
Jahroo Nabila Marvi, Laksmita Rahadianti
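A minimal sketch of computing the three quantitative metrics named above with scikit-image; the images are random stand-ins for a UIEB reference and an enhanced output.

```python
import numpy as np
from skimage.color import deltaE_ciede2000, rgb2lab
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128, 3))                 # stand-in reference
enhanced = np.clip(reference + 0.05 * rng.normal(size=reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
# CIEDE2000 is a perceptual color difference, computed in Lab space.
de = deltaE_ciede2000(rgb2lab(reference), rgb2lab(enhanced)).mean()
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.3f}  CIEDE2000: {de:.3f}")
```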
A Comparative Study of Machine Learning Classification Models on Customer Behavior Data
Abstract
Competent marketers aim to accurately predict consumer desires and needs. With the advancement of machine learning, different machine learning models can be applied to solve various challenges, including precisely determining consumer behavior. Meanwhile, discount coupons are a frequent marketing approach to boost sales and encourage recurring business from existing customers. Accordingly, the current study seeks to analyze customer behavior using an in-vehicle coupon recommendation system dataset as the case study. The dataset, obtained from the University of California-Irvine (UCI) machine learning repository, can be used to predict consumer decisions on whether to accept a discount coupon as influenced by demographic and environmental factors, such as driving destination, age, current time, and weather. This study compared six machine learning classification models, namely Bayesian Network (BayesNet), Naïve Bayes, Instance-Based Learning with Parameter-K (Lazy-IBK), Tree J48, Random Forest, and RandomTree, on the dataset to identify a suitable model for predicting customer behavior through two test modes, cross-validation and percentage split. Model performance was evaluated by analyzing accuracy, precision, processing time, recall, and F-measure. The findings show that Naïve Bayes and Lazy-IBK consumed the least prediction time, although with the lowest accuracy; RandomTree required the highest processing time, whereas Random Forest provided the highest accuracy, precision, recall, and F-measure values.
Nur Ida Aniza Rusli, Farizuwana Akma Zulkifle, Intan Syaherra Ramli
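A minimal sketch of the two test modes (cross-validation and percentage split) with scikit-learn analogues of some of the Weka models named above; the synthetic data and the exact substitutes (k-NN for Lazy-IBK, a CART tree for J48) are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the UCI coupon data (the original is categorical;
# encoding is omitted here for brevity).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "NaiveBayes": GaussianNB(),
    "kNN (IBk)": KNeighborsClassifier(),                      # Lazy-IBK analogue
    "DecisionTree": DecisionTreeClassifier(random_state=0),   # J48 analogue
    "RandomForest": RandomForestClassifier(random_state=0),
}

# Test mode 1: 10-fold cross-validation; test mode 2: percentage split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.34, random_state=0)
for name, m in models.items():
    cv = cross_val_score(m, X, y, cv=10).mean()
    split = m.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name:12s}  cv={cv:.3f}  split={split:.3f}")
```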
Performance Evaluation of Deep Learning Algorithms for Young and Mature Oil Palm Tree Detection
Abstract
Oil palm trees, one of the most essential economic crops in Malaysia, have an economic lifespan of 20–30 years. Estimating oil palm tree age automatically through computer vision would be beneficial for plantation management. In this work, an object detection technique is proposed using high-resolution satellite imagery, tested with four different deep learning architectures, namely SSD, Faster R-CNN, CenterNet, and EfficientDet. The models are trained using the TensorFlow Object Detection API and assessed with performance metrics and visual inspection. It is possible to produce an automated oil palm tree detection model that estimates the age range, either young or mature, based on the crown size. Faster R-CNN is identified as the best model, with a total loss of 0.0047, mAP of 0.391, and mAR of 0.492, all with IoU thresholds from 0.5 to 0.95 with a step size of 0.05. Parameter tuning was done on the best model, and further improvement is possible by increasing the batch size.
Soh Hong Say, Nur Intan Raihana Ruhaiyem, Yusri Yusup
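For reference, the IoU criterion underlying the reported mAP/mAR, in a few lines; the box coordinates are illustrative.

```python
def iou(box_a, box_b):
    """Intersection over Union for [x1, y1, x2, y2] boxes; a detection counts
    as a true positive when IoU with a ground-truth crown exceeds the
    threshold (averaged over 0.5 to 0.95 in 0.05 steps, COCO-style)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # ~0.143
```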
Object Detection Based Automated Optical Inspection of Printed Circuit Board Assembly Using Deep Learning
Abstract
Advancement of technologies in the electronics industry has led to a decrease in electronic component sizes and an increase in the number of components on a Printed Circuit Board (PCB). Industries specializing in manufacturing Printed Circuit Board Assembly (PCBA) also implement manual visual inspection in the In-Process Quality Control (IPQC) verification process to ensure product quality. Such technology advancement has increased the workload of operators and the time taken to perform inspection. This study aims to reduce the time consumption and cognitive load of operators, while ensuring consistency of visual inspection during the component verification process, by utilizing deep learning models to perform object-detection-based automated optical inspection of images containing electronic components. Three deep learning algorithms were used in the study: Faster R-CNN, YOLO v3, and SSD FPN. Both Faster R-CNN and SSD FPN utilized a ResNet-50 backbone, whereas YOLO v3 was built with a Darknet-53 backbone. Various input image dimensions and image resizing options were explored to determine the best model for object detection. At the end of the study, SSD FPN with the input image resized to 640 × 640, keeping the image aspect ratio with padding, is concluded as the best localization and classification model for detecting the various types of components present in digital images.
Ong Yee Chiun, Nur Intan Raihana Ruhaiyem
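A minimal sketch of the winning preprocessing, resizing to 640 × 640 while keeping the aspect ratio and padding the remainder; the padding value and centering are assumptions.

```python
import cv2
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Resize to size x size while preserving aspect ratio; pad the rest."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((size, size, 3), pad_value, dtype=img.dtype)
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas

board = np.zeros((480, 800, 3), dtype=np.uint8)  # stand-in PCBA image
print(letterbox(board).shape)                    # (640, 640, 3)
```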

Mathematical and Statistical Learning

Frontmatter
The Characterization of Rainfall Data Set Using Persistence Diagram and Its Relation to Extreme Events: Case Study of Three Locations in Kemaman, Terengganu
Abstract
Floods are recurring phenomena at certain locations because of excessive rainfall, resulting in the overflow of lakes, drains, and rivers. In this work, we employ Persistent Homology (PH) to investigate the relationship between rainfall and the floods that occurred from 1997 to 2018. Three stations in Kemaman, Terengganu, have been chosen to study this relationship. The Persistence Diagram (PD) is one of the most powerful tools available under the umbrella of PH for detecting topological signatures in high-dimensional point clouds. In this paper, we take the rainfall time series dataset and express it in higher dimensions by selecting the embedding dimension \(M = 5\) and manipulating the time delay \(\tau\) to obtain the maximum persistence. Then, we compare against past flood events, which are labelled based on water level, using the PD's max score to identify its suitability for flood identification. The area under the receiver operating characteristic (ROC) curve has been used to measure the performance with three different thresholds for stations 4131001, 4232002, and 4332001. The results clearly show the PD's capability to characterize the rainfall dataset into normal and flood events. The employed maximum persistence is robust despite missing data values.
Z. A. Hasan, R. U. Gobithaasan
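A minimal sketch of the time-delay embedding that lifts the rainfall series into an \(M = 5\)-dimensional point cloud; the series and \(\tau\) are illustrative, and a PH library such as ripser would then compute the persistence diagram from the resulting cloud.

```python
import numpy as np

def delay_embed(x, m=5, tau=1):
    """Takens-style time-delay embedding: each row is
    (x_t, x_{t+tau}, ..., x_{t+(m-1)tau})."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

rainfall = np.random.rand(365)          # stand-in daily rainfall series
cloud = delay_embed(rainfall, m=5, tau=3)
print(cloud.shape)                      # (353, 5) point cloud
# A PH library (e.g., ripser) would compute the persistence diagram from
# `cloud`; its maximum persistence is the score used for flood labelling.
```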
Clustering Stock Prices of Industrial and Consumer Sector Companies in Indonesia Using Fuzzy C-Means and Fuzzy C-Medoids Involving ACF and PACF
Abstract
The fundamental and technical analysis that investors generally use to select the best stocks cannot provide information about the similarity of stock price characteristics of companies in one sector. Even though companies are in the same sector, each company has a different ability to earn profits and overcome financial difficulties, so clustering is done to find the stock prices of companies with the same characteristics within one sector. This research uses data on industrial and consumer sector companies' stock prices because the industrial and consumer sectors are among the largest sectors in Indonesia. The variables used in this research are open, close, and HML (High Minus Low) stock prices. Clustering methods that can be used to cluster time series data include Fuzzy C-Means and Fuzzy C-Medoids. In addition, this research also uses approaches based on the ACF (Autocorrelation Function) and PACF (Partial Autocorrelation Function), which can handle data with high dimensions and allow the comparison of time series data with different lengths. Based on the highest FS (Fuzzy Silhouette) value, the empirical results show that the two best methods for clustering open, close, and HML stock prices are Fuzzy C-Means and Fuzzy C-Medoids. The clustering results using Fuzzy C-Means are the same for open and close stock price data, whereas the clustering results differ for HML stock price data.
Muhammad Adlansyah Muda, Dedy Dwi Prastyo, Muhammad Sjahid Akbar
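A minimal sketch of the ACF-feature route: each (possibly unequal-length) series is summarized by a fixed-length ACF vector, then clustered with a compact Fuzzy C-Means implementation; the series, lag count, and parameters are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Compact Fuzzy C-Means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # fuzzy memberships (rows sum to 1)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Stand-in stock price series of unequal lengths; the ACF gives each series
# a fixed-length feature vector, enabling comparison across lengths.
rng = np.random.default_rng(1)
series = [rng.normal(size=n).cumsum() for n in (250, 300, 280, 260)]
features = np.array([acf(s, nlags=20)[1:] for s in series])
U, _ = fuzzy_c_means(features, c=2)
print(U.round(2))
```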
Zero-Inflated Time Series Model for Covid-19 Deaths in Kelantan Malaysia
Abstract
The development of zero-inflated time series models is well known to account for an excessive number of zeros and overdispersion in discrete count time series data. Using zero-inflated models, we analyzed the daily count of COVID-19 deaths in Kelantan, which exhibits excess zeros. Considering factors such as COVID-19 deaths in neighboring states and lags of 1 to 7 days of COVID-19 deaths in Kelantan, the zero-inflated models (Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB)) were employed to predict COVID-19 deaths in Kelantan. The ZIP and ZINB were compared with the basic Poisson and Negative Binomial models to find the significant contributing factors. The final results show that the best model was the ZINB model, in which lags of 1, 2, 5, and 6 days of Kelantan COVID-19 deaths and a 1-day lag of COVID-19 deaths in the neighboring states of Terengganu and Perak significantly influenced the occurrence of COVID-19 deaths in Kelantan. The model gives the smallest AIC and BIC values compared to the basic Poisson and Negative Binomial models. This indicates that the zero-inflated model predicts the excess zeros in COVID-19 death occurrences better than the basic count models. Hence, the fitted models for COVID-19 deaths provide a novel understanding of disease transmission and dissemination in a particular area.
Muhammad Hazim Ismail, Hasan Basri Roslee, Wan Fairos Wan Yaacob, Nik Nur Fatin Fatihah Sapri
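A minimal sketch of fitting a ZIP model with statsmodels on synthetic zero-inflated counts with a single lagged covariate; the data and the constant-only inflation part are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Synthetic zero-inflated daily death counts with one lagged covariate,
# standing in for the Kelantan series and its neighboring-state lags.
rng = np.random.default_rng(0)
n = 300
x_lag1 = rng.poisson(1.0, size=n)                       # e.g., deaths lagged 1 day
mu = np.exp(-0.5 + 0.4 * x_lag1)
y = np.where(rng.random(n) < 0.6, 0, rng.poisson(mu))   # 60% structural zeros

X = sm.add_constant(x_lag1.astype(float))
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0,
                                                                   maxiter=200)
print(zip_fit.summary())
# ZeroInflatedNegativeBinomialP in the same module fits the ZINB variant;
# comparing the fits' AIC/BIC mirrors the model selection described above.
```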
Backmatter
Metadata
Title: Soft Computing in Data Science
Editors: Marina Yusoff, Tao Hai, Murizah Kassim, Azlinah Mohamed, Eisuke Kita
Copyright Year: 2023
Publisher: Springer Nature Singapore
Electronic ISBN: 978-981-9904-05-1
Print ISBN: 978-981-9904-04-4
DOI: https://doi.org/10.1007/978-981-99-0405-1
