The article presents a novel approach to IoT security through the integration of deep learning and metaheuristic algorithms for blockchain-based data management. It addresses the growing challenges of IoT security, in particular the detection of malicious nodes, by employing cryptographic algorithms and optimization techniques. The proposed methodology aims to improve the security and efficiency of IoT networks and to establish a solid framework for trust management and intrusion detection. The study highlights the use of Self-Adaptive Tasmanian Devil Optimization (SA-TDO) for key generation and the Archimedes Optimization Algorithm (AOA) for intrusion detection, demonstrating the potential of these advanced techniques for securing IoT and blockchain systems. The research also evaluates the performance of the proposed model against various metrics, demonstrating its superiority over existing methods. This article is particularly relevant for practitioners and researchers interested in the intersection of IoT, blockchain, and cybersecurity, offering valuable insights into recent advances in these fields.
AI-Generated
This summary of the article's content was generated with the help of AI.
Abstract
The Internet of Things (IoT) refers to a network where different smart devices are interconnected through the Internet. This network enables these devices to communicate, share data, and exert control over the surrounding physical environment, working as a data-driven mobile computing system. Nevertheless, due to the openness and connectivity of wireless networks and the resource limitations of smart devices, the IoT is vulnerable to several different routing attacks. Addressing these security concerns becomes crucial if data exchanged over IoT networks is to remain precise and trustworthy. This study presents a trust management evaluation for IoT devices with routing using the cryptographic algorithm Rivest, Shamir, Adleman (RSA), Self-Adaptive Tasmanian Devil Optimization (SA_TDO) for optimal key generation, and the Secure Hash Algorithm 3-512 (SHA3-512), as well as an Intrusion Detection System (IDS) for spotting threats in IoT routing. By verifying the validity and integrity of the data exchanged between nodes and identifying and thwarting network threats, the proposed approach seeks to enhance IoT network security. The stored data is encrypted using the RSA technique, keys are optimally generated using the Tasmanian Devil Optimization (TDO) process, and data integrity is guaranteed using the SHA3-512 algorithm. Intrusion detection is achieved with a Convolutional Spiking Neural Network-optimized Deep Neural Network (DNN), where the DNN is optimized with the Archimedes Optimization Algorithm (AOA). The developed model is simulated in Python, and the results obtained are evaluated and compared with other existing models. The findings indicate that the design is efficient in providing secure and reliable routing in IoT-enabled, futuristic, smart vertical networks while identifying and blocking threats.
The proposed technique also showcases shorter response times (209.397 s at a 70% learning rate, 223.103 s at an 80% learning rate) and shorter record-sharing times (13.0873 s at a 70% learning rate, 13.9439 s at an 80% learning rate), which underscores its strength. The performance metrics for the proposed AOA-ODNN model were evaluated at learning rates of 70% and 80%. The best metrics were achieved at an 80% learning rate, with an accuracy of 0.989434, precision of 0.988886, sensitivity of 0.988886, specificity of 0.998616, F-measure of 0.988886, Matthews Correlation Coefficient (MCC) of 0.895521, Negative Predictive Value (NPV) of 0.998616, False Positive Rate (FPR) of 0.034365, and False Negative Rate (FNR) of 0.103095.
Notes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Introduction
IoT is growing swiftly, leading to a noticeable increase in the number of dispersed IoT users and devices [1]. Smart devices can now gather and transmit data quickly due to the IoT, revolutionizing several businesses and areas of life, including healthcare, business, transportation, and cities, and increasing efficiency, productivity, and convenience [2]. Like the Internet, blockchain technology is a disruptive breakthrough. It is a distributed ledger that uses its nodes to independently validate and store data without the help of a third party. It resolves the problem of trustworthy transmission of trust and value at low cost by using modern cryptography to reach a consensus; it does away with intermediaries and uses distributed consensus to store transactions on a decentralized platform [3]. For the Social IoT to enable socialization, reliable links between smart devices are necessary. However, in the modern era of communication, privacy and security issues present considerable difficulties. By establishing safe and reliable communication between the devices, these problems can be resolved, and the security features of Social Internet of Things (SIoT) devices can be improved [4]. IoT now confronts difficulties with important applications, network performance, security, and massive device access [5]. To guarantee security and dependability in the present and the future, the IoT relies primarily on Artificial Intelligence (AI) techniques. These methods support dependable communication between devices and help address the difficulties of protecting large-scale IoT systems [6].
1.1 Problem statements
IoT is a revolutionary technology that makes the interconnection of smart devices through the internet possible [7]. These networks allow the exchange of data and coordinated control of physical settings. Such an interconnected, shared infrastructure forms the foundation for data-driven mobile computing systems, which in turn revolutionize industries and everyday life [8]. However, the openness of wireless networks and the resource constraints of smart devices also bring security issues. These vulnerabilities expose IoT networks to multiple kinds of attacks, for example malicious routing, which can jeopardize data exchange among the devices in the networks [5, 9, 10].
In the modern IoT ecosystem, where connected devices are growing exponentially and the data they generate is overwhelming, security has become a top priority in order to prevent a range of adverse attacks [11]. Cyber-attacks that aim to unlawfully access IoT networks have become more complex, targeting data transmission and routing to penetrate and disrupt systems. The proliferation of cyber threats therefore demands new methods for detection and mitigation, as well as guarantees of the security and reliability of data exchange [12].
The biggest difficulty in IoT security lies in managing and securing the huge amounts of data exchanged across the network [13]. Detecting and neutralizing malicious nodes within the system is a further problem. Conventional security methods fail to keep pace with the constantly changing dynamics of the IoT architecture, necessitating intelligent and fast strategies to counteract routing attacks as well as other forms of network attacks [14]. The complexity of the problem is aggravated by the computational limitations of many IoT devices, which makes the implementation of complex security algorithms harder. This research intends to provide a solution to the current problems with IoT network security, especially blockchain security, for reliable data management and the detection of malicious IoT nodes [15].
1.2 Objectives and contributions
This paper is motivated by the widespread adoption of blockchain in various fields where data security is essential for the effectiveness of transactions. As the usage of blockchain applications extends to areas like finance, healthcare, and supply chain management, the security and confidentiality of data stored by blockchain systems become essential. In the course of the paper, we evaluate the performance of different cryptographic algorithms, providing valuable insight into the improvement of cryptographic systems and thus contributing to the development of more secure and efficient blockchain-based applications. The objective of this study is to analyze and compare the inputs and outputs of different cryptographic and security algorithms in blockchain technology.
The IoT's rapid growth has expanded both the number of networked devices and the attack surface. Given that IoT devices frequently have limited resources and are vulnerable to numerous forms of attack, this expansion presents serious security challenges. An intelligent security framework is therefore required to address these issues successfully and safeguard IoT networks. Another objective of this study is to design an intelligent security architecture that solves IoT security issues. The major contributions made to provide a reliable solution are:
To introduce trust assessment for evaluating the trustworthy behaviour of communicating devices.
To incorporate intelligent routing algorithms that optimize routing patterns in IoT networks by using the metaheuristic Self-Adaptive TDO (SA_TDO). As a result, network performance and attack resistance are improved.
To provide secure and tamper-resistant communication, data integrity, and decentralized consensus in resource-constrained IoT scenarios through a blockchain-based solution with SA_TDO.
To improve overall IoT network security, the framework incorporates the AOA method for real-time intrusion detection. By utilizing optimized-DNN features, the framework enables precise detection and localization of unauthorized actions.
1.3 Organization of the paper
The paper follows a structured format, with the introduction provided in the initial section. Subsequently, Sect. 2 presents a comprehensive literature review. Section 3 provides an overview of the suggested technique, and Sect. 4 presents and discusses the findings. Section 5 summarizes the conclusions and implications of the study.
2 Literature review
In 2020, Bera et al. [16] discussed the development of blockchain technology in the Internet of Everything (IoE), as well as attack tendencies in the IoE ecosystem. A blockchain-based AI access control framework was proposed for identifying and thwarting hostile attacks in the IoE. The framework was put into practice, and both the number of transactions per block and the computing time for mining different numbers of blocks were calculated. In the same year, Alrubei et al. [17] offered a distributed and decentralized architecture for employing IoT hardware platforms to construct distributed artificial intelligence (DAI). This architecture enabled secure communication and data exchange amongst distributed neurons by utilizing decentralized and self-managed blockchain technology. A new consensus method based on Proof of Activity (PoA) and Proof of Work (PoW), together with specialized block and transaction formats, was built on the platform to handle DAI-related tasks. A specialized testbed using inexpensive IoT devices was used to test the design. Furthermore, in 2020, Xiao et al. [18] developed an approach based on blockchain technology to counter selfish edge attacks and attacks based on fabricated service records in Mobile Edge Computing (MEC). The system assessed how well edge devices performed computationally and broadcast this information to nearby edge and mobile devices. A PoW and Proof of Stake (PoS) consensus procedure was utilized to assign reputation to edge devices and to add blocks recording new service reputations to the MEC blockchain. A Reinforcement Learning (RL)-based edge CPU allocation approach and a deep RL variant were suggested to boost computing performance without requiring knowledge of the network and service models for mobile devices.
In 2020, Guo et al. [19] outlined a structure for a reliable endogenous network that might handle IoT services like resource management, networked operation, and customization. To establish trust, the proposed system uses consortium blockchain, Software Defined Network (SDN), and network function virtualization. The study offered a framework for on-chain and off-chain collaborative resource allocation that integrates time estimate and resource utilization algorithms to achieve customisation and dynamic modification. The framework addressed issues with state measurements and trust that prevent the virtual network operating platform from being used for resource aggregation in heterogeneous networks.
In 2021, Puri et al. [20] discussed the problems with cloud-based healthcare systems for remote patient monitoring and data management. The authors suggested an AI-enabled decentralized healthcare system based on smart contracts and a public blockchain network to address these issues. The framework's objectives included identifying rogue nodes, trust-building and transparency in patient healthcare information, and IoT device authentication. Real-time testing of the suggested method indicated significant decreases in transaction costs, average latency, throughput, device energy consumption, and data request times.
In 2021 Raza et al. [21] described a family security system that made use of blockchain technology and AI for inside and outside user activity monitoring to spot irregularities. For outdoor tracking, family members' smartphone positions were recorded and kept in a private blockchain. A smart contract powered by Self-Organizing Maps (SOM) was deployed in a smart house to discover anomalies in users' everyday activities. As a proof of concept, the system was developed on a modest scale and demonstrated technological viability.
In 2022 Lv et al. [22] created a method for blockchain-based IoT systems to identify location faking. In contrast to earlier work that used physical layer properties, the research proposed an IoT system that evaluated the accuracy of location proofs at the node and mobile trajectory level using attributes of the blockchain location system. A multilayer fuzzy hierarchical analysis process assessment approach built on blockchain was used in the suggested solution and showed higher performance in simulation results. This offers a foundation for evaluating the reliability of location evidence.
In 2022, Zulkifl et al. [23] developed the Fuzzy and Blockchain-Based Adaptive Security for Healthcare IoTs (FBASHI) framework, which combines blockchain with fuzzy logic to deliver Authentication, Authorization and Audit Logs (AAA) services in healthcare IoT contexts. Hyperledger was used to implement the suggested system because of its privacy features and quick response times. To offer AAA services, FBASHI used a heuristic technique and a behavior-driven adaptive safety mechanism. In 2021, Deepak et al. [24] addressed concerns about security and privacy when employing AI tools and methodologies. The privacy-preserving smart contracts using blockchain and artificial intelligence (PPSC-BCAI) framework is the suggested remedy; it makes use of blockchain and AI to enable privacy-preserving smart contracts. This paradigm makes it easier to understand how people interact with systems, service warnings, security issues, and false claims. Extreme Gradient Boosting (XGBoost) was also used to detect changes in transmission rates over unstable networks, examine data transactions and sharing, and optimize network load. The implementation and efficacy of the suggested framework were assessed, and the results were encouraging in terms of improving the privacy and security of AI-based systems.
In 2023 Habibi et al. [25] addressed security issues with IoT systems, particularly the threat posed by IoT botnets, which can jeopardize the stability of the systems owing to resource limitations. By depending on unlabeled or unreliable datasets, traditional and intelligent technologies for botnet detection, such as Machine Learning (ML) and Deep Learning (DL), were discovered to have limitations, which can impair performance while managing zero-day threats. Additionally, it was shown that existing Generative Adversarial Network (GAN) models and conventional oversampling techniques had drawbacks when it came to creating samples and simulating real tabular data. By overcoming these restrictions, the reviewed research suggested using the CTGAN model, a cutting-edge GAN model that specializes in tabular data modelling and generation, to enhance botnet detection in IoT systems.
The research on threat detection in IoT networks offers a range of perspectives and methods that serve as a basis for this study. The framework in [26] is designed to fortify smart environments using a blend of blockchain technology and fuzzy logic; it illustrates how blockchain can support threat detection by securely managing data and making its contents tamper-proof, which aligns with the goals of the present study. The related study [27] focuses on the application of quantum-based methods for privacy preservation in the consumer IoT environment; its authors show that combining federated learning and quantum mechanics provides a novel approach to privacy-preserving threat detection that could extend the present work.
Kiran et al. [28] proposed a blockchain-based security architecture for 5G-enabled IoT networks, integrating adaptive clustering with Evolutionary Adaptive Swarm Intelligent Sparrow Search (EASISS) and Network Efficient Whale Optimization (NEWO) algorithms. Their approach enhances network security through efficient cluster head management and localized blockchain structures, demonstrating superior performance in balancing latency and throughput.
Maftei et al. [29] introduce a decentralized blockchain architecture for IoT data management, ensuring scalability and security. Their solution eliminates centralized vulnerabilities and supports diverse IoT applications, leveraging blockchain for immutability and transparent data handling.
Haque et al. [30] propose a scalable IoT data management framework using Delegated Proof of Stake (DPoS) consensus, Interplanetary File System (IPFS) for storage, and Docker for evaluation. Their approach minimizes latency and resource consumption, outperforming traditional PoS methods in IoT environments.
Tariq [31] explores blockchain's transformative impact on healthcare data management, emphasizing enhanced security, interoperability, and data privacy across healthcare systems. Blockchain technology facilitates seamless data exchange and management, addressing critical challenges in healthcare.
Abbas et al. [32] propose a blockchain-assisted secure data management framework (BSDMF) for the Internet of Medical Things (IoMT), ensuring secure data exchange and management between devices and servers. Their blockchain-based solution achieves high accuracy, precision, and low latency, demonstrating significant advancements in healthcare data security and accessibility.
Based on the literature review, several research gaps have been identified across the studies on the use of blockchain in IoT and healthcare: the difficulty of applying blockchain in large-scale IoT networks, persistent data privacy and security issues in healthcare applications, and the lack of testing of the proposed frameworks under real-world conditions. Furthermore, the application of blockchain-based models in cybersecurity, and their further improvement to mitigate new threats such as zero-day attacks while guaranteeing full compatibility, is an essential aspect that requires further research.
This work fills several important gaps in the literature on IoT security. It improves the precision and reliability of data exchanged between IoT devices by managing and securing routes through trust management and routing security, using cryptographic algorithms such as RSA for encryption and SHA3-512 for data integrity. The use of SA_TDO for key generation and AOA-ODNN, a Convolutional Spiking Neural Network-optimized Deep Neural Network, for intrusion detection adds to the proposed method's robustness. These methodologies are intended to prevent routing attacks and to provide secure communication within IoT networks, which is crucial for data credibility. The results described in the study, such as shorter response and record-sharing times, indicate the effectiveness of the proposed approach in real IoT applications. Furthermore, the performance metrics show that the AOA-ODNN model achieves high accuracy, precision, sensitivity, and specificity, confirming its efficiency in identifying and counteracting network threats.
3 Proposed methodology
Figure 1 shows the complete overview of the secured IoT malicious-node detection system. The system is designed to recognize and block attacks on IoT devices in a timely manner using a multifaceted approach that combines several techniques. Indirect trust computation contributes to this end by using trust scores to assess the reliability of devices and information; the scores are derived from factors such as users' reputation, the data source, and the behavioural patterns of both the device and the data. Secured data management is an integral part of the data protection mechanism, ensuring that data is not exposed to malicious parties such as hackers, or compromised by human error or technical faults. This entails strong techniques like encryption, access control, and data integrity verification to guarantee that data safety practices are followed. ML is embedded in the system through algorithms trained on data from known attacks, so that malicious acts can be detected even when they are new. Anomaly detection is another critical component, targeting devices and data that deviate from the established baseline; such anomalies can signal potentially unwanted behaviour. The malicious-node detection system in the secured IoT model is versatile enough to address problems across different types of IoT, such as smart homes, industrial control systems, connected cars, and medical devices. Deploying these tools jointly provides reliable protection, safeguarding IoT devices and their data from possible threats and attacks.
Fig. 1
Model design of secured IoT
3.1 Trust assessment
Trust assesses the trustworthy behaviour of communicating devices. To calculate the \(TV\) during communication, a distinction is made between the device assessing trust and the device whose reliability is being assessed. The assessing device, denoted as x, determines the \(TV\) of device y by considering inputs from various connected devices \({I}_{1},{I}_{2},\dots {I}_{n}\). This approach involves multiple devices, increasing the accuracy of trust measurement for each evaluated device. However, there is a risk of fake \(TV\) values generated by malicious devices. To address this, different factors influencing trust were analysed, and the lower trust values produced by intruders were filtered out. The trust value is employed to conduct a thorough analysis and eliminate misleading outcomes. The following text outlines the process of filtering devices with lower \(TV\) using our proposed trust evaluation process.
Step 1: To analyse devices in the network based on their trustworthiness, a significant threshold is established. Our proposed system sets the threshold value at 0.47; it divides devices into highly, moderately, minimally, and falsely trusted categories. A device is regarded as malevolent if its trust value is lower than 47%. Highly infected devices behave according to the commands of intruders. The system can automatically identify and filter out highly infected devices whose \(TV\) is significantly below the threshold. The following formula can be employed to block highly infectious devices outright.
\(T{V}_{j}\left(rms\right)\) represents the \(RMS\) trust value of device j among n devices present in the network.
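Step 1 can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the RMS aggregation of peer-reported scores and the device names are assumptions made here for the example; only the 0.47 threshold comes from the text.

```python
import math

TRUST_THRESHOLD = 0.47  # threshold used by the proposed system

def rms_trust(trust_values):
    """RMS trust value of a device from the scores reported by n peers."""
    return math.sqrt(sum(tv * tv for tv in trust_values) / len(trust_values))

def filter_devices(reported_trust):
    """Split devices into trusted and blocked sets by comparing each
    device's RMS trust value against the threshold."""
    trusted, blocked = {}, []
    for device, peer_scores in reported_trust.items():
        tv = rms_trust(peer_scores)
        if tv < TRUST_THRESHOLD:
            blocked.append(device)   # treated as malevolent / highly infected
        else:
            trusted[device] = tv
    return trusted, blocked

trusted, blocked = filter_devices({
    "node_a": [0.9, 0.8, 0.85],
    "node_b": [0.2, 0.3, 0.25],   # low peer scores -> filtered out
})
```

Devices whose RMS trust value falls below the threshold are blocked outright, mirroring the automatic filtering of highly infected devices described above.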
Step 2: Introducing the concept of recommender trust.
$$d=\left|T{V}_{xny}-T{V}_{ynx}\right|$$
(2)
\(T{V}_{xny}\) represents the trust of the subject x towards the recommender n, while \(T{V}_{ynx}\) denotes the recommendation trust of the recommender n towards the device y. It is crucial for recommender n to convey this trust information to device x.
Step 3: Assessing the degree of comprehension and recommendation conflict. This stage entails determining the level of comprehension by taking into account the number of prior successful interactions with device x over a specified period. It also takes into account the variances between device x and the recommending devices m when they recommend the trust value of device y.
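The deviation test of Eq. (2) and the subsequent trimming of conflicting recommendations can be sketched as below; the tolerance value and the dictionary layout are illustrative assumptions, not taken from the paper.

```python
def recommendation_deviation(tv_xny, tv_ynx):
    """Eq. (2): absolute difference between the subject's trust in the
    recommender and the recommender's reported trust."""
    return abs(tv_xny - tv_ynx)

def filter_recommendations(recs, tolerance=0.3):
    """Keep only recommendations whose deviation d stays within `tolerance`
    (the tolerance value here is illustrative, not from the paper)."""
    return [r for r in recs
            if recommendation_deviation(r["tv_xny"], r["tv_ynx"]) <= tolerance]

recs = [
    {"recommender": "m1", "tv_xny": 0.8, "tv_ynx": 0.75},  # d = 0.05, kept
    {"recommender": "m2", "tv_xny": 0.9, "tv_ynx": 0.2},   # d = 0.70, discarded
]
kept = filter_recommendations(recs)
```

A large deviation flags a conflicting (possibly fake) recommendation, which is trimmed before the hierarchical trust computation.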
A hierarchical trust structure is used to infer the confidence of the devices; this is done after trimming and filtering the suggestions made by the m recommending devices.
The next level employs blockchain technology to establish a trust management system that ensures the integrity and confidentiality of IoT data. The TDO algorithm is employed to optimize the RSA encryption parameters, which enhances the security of the blockchain. The SHA3-512 technique [33] provides additional safety to the blockchain by ensuring that the data is tamper-proof.
Blockchain technology is applied in this suggested concept to improve IoT data security and to offer a trust management system. Data cannot be changed or removed after it has been recorded on a blockchain since this method of data storage is safe and decentralized. The IoT data is encrypted using RSA with the TDO and SHA3-512 techniques, and it is stored and managed using a blockchain-based architecture. These encryption techniques increase the data's security and assault resistance.
In the blockchain domain, a PoW technique is developed to provide data authentication and protect against data-poisoning attacks. To address privacy concerns, the PoW mechanism utilizes a privacy-preserving module within an address-based blockchain reputation system; this module incorporates a blockchain-based PoW data transformation technique. The PoW process creates a message digest with a proof of hash and disseminates it over the blockchain network. This ensures the verification of chains of data records and safeguards against inference attacks that could exploit system-based machine learning. By implementing the PoW technique, privacy is maintained for the dataset, enabling the verification of raw data records and preventing inference and poisoning attacks.
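A minimal sketch of such a hash-based proof over a message digest is shown below, assuming a leading-zeros difficulty target; the actual difficulty rule of the proposed system is not specified in the text, so the target and record format here are illustrative.

```python
import hashlib

def sha3_512_hex(data: bytes) -> str:
    """SHA3-512 message digest as a hex string."""
    return hashlib.sha3_512(data).hexdigest()

def mine_proof(record: bytes, difficulty: int = 3) -> int:
    """Search for a nonce so that the digest of record + nonce starts
    with `difficulty` hex zeros (an illustrative PoW target)."""
    nonce = 0
    while not sha3_512_hex(record + str(nonce).encode()).startswith("0" * difficulty):
        nonce += 1
    return nonce

def verify_proof(record: bytes, nonce: int, difficulty: int = 3) -> bool:
    """Any node can verify the proof with a single hash computation."""
    return sha3_512_hex(record + str(nonce).encode()).startswith("0" * difficulty)

record = b"sensor-42:temperature=21.7"
nonce = mine_proof(record)
```

Finding the nonce is expensive, but verifying it is cheap, which is what lets the network check data records while making tampering computationally costly.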
The following functions are used for the system implementation:
Encryption: To protect plain data from unauthorized access, the RSA algorithm converts it into an unreadable format known as ciphertext. This procedure makes use of cryptographic keys to guarantee that the encrypted data remains private and safe while being stored or sent.
Optimal Key Generation: The TDO algorithm is employed to generate the optimal cryptographic keys used to encrypt the data, while the private key is used to decrypt it.
Blockchain: The encrypted data can be stored on a blockchain. A blockchain is a decentralized, secure, and transparent ledger that maintains a tamper-proof record of all transactions; in this case, it is used to store the encrypted data.
Decryption: When the authorized user needs to access the data, they can use their private key to decrypt it. The private key is kept securely by the authorized user and is never shared with anyone else.
3.2.1 Blockchain
A decentralized data structure called a blockchain creates an unchangeable, permanent digital ledger. The ledger is made up of a collection of cryptographically linked blocks that also include timestamps and the hash values of earlier blocks. Data cannot be altered or removed after it has been recorded. The peer-to-peer network that manages this distributed ledger is accessible to everyone. Nodes adhere to a protocol to send and confirm fresh blocks, maintaining immutability. The main technique for data security is a linked list of transactions chained together with a hash function. It is difficult for attackers to alter or tamper with the data, since SHA3-512 is used to hash the input data. The Proof of Work (PoW) algorithm makes altering or damaging a blockchain computationally very expensive: the PoW puzzle must be solved before a new block can be added to the blockchain network.
Figure 2 illustrates the fundamental framework of a blockchain, wherein Previous Hash represents the hash value of the preceding block, Timestamp represents the time of block generation, Tx-Root represents the structure for storing transactions, and Nonce represents a random number. In the proposed enhanced Blockchain, high-level security storage is introduced by combining the RSA with an SA-TDO algorithm for converting the original text into ciphertext (Encryption).
Fig. 2
Hash connection of block
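The hash linkage of Fig. 2 can be illustrated with a short sketch. The field names mirror the figure (previous hash, timestamp, Tx-Root, nonce), while the JSON serialization and the fixed timestamp are simplifications made here for the example.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA3-512 digest of a block's contents, serialized deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha3_512(payload).hexdigest()

def add_block(chain: list, transactions: list, nonce: int = 0) -> None:
    """Append a block that stores the previous block's hash, as in Fig. 2."""
    prev = block_hash(chain[-1]) if chain else "0" * 128
    chain.append({"previous_hash": prev,
                  "timestamp": 0,          # fixed here for reproducibility
                  "tx_root": transactions,
                  "nonce": nonce})

def chain_valid(chain: list) -> bool:
    """Re-hash each block and compare with the successor's stored link."""
    return all(chain[i + 1]["previous_hash"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

chain = []
add_block(chain, ["genesis"])
add_block(chain, ["enc(tx1)"])
add_block(chain, ["enc(tx2)"])
assert chain_valid(chain)

chain[1]["tx_root"] = ["tampered"]   # any edit breaks the hash links
assert not chain_valid(chain)
```

Because every block embeds the previous block's digest, modifying one block invalidates every later link, which is what makes the ledger tamper-evident.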
3.2.2 Encryption
The RSA algorithm is used in the proposed work to encrypt data. A public key and a private key are created by the RSA technique and used to encrypt and decrypt data, respectively. This process ensures that the data remains secure and confidential during transmission or storage. Once the keys are generated, the encryption process starts, and data blocks are created. The encryption is done using the RSA algorithm.
RSA algorithm
RSA [34] is a cryptographic algorithm designed for securing data communication and ensuring confidentiality by encrypting plaintext into ciphertext using a public key and decrypting it back with a private key. As illustrated in Fig. 3, RSA operates with two distinct keys: a public key for encryption and a private key for decryption. The algorithm relies on the complexity of factorizing the product of two large prime numbers to maintain the private key’s secrecy. The sender uses the recipient’s public key to encrypt the plaintext into ciphertext, which remains indecipherable without the private key. The ciphertext is then sent to the recipient, who uses their private key to decrypt it back into the original plaintext. The security of RSA is based on the difficulty of this factorization process, which ensures that the private key remains secure as long as factorization remains computationally infeasible.
Fig. 3
RSA Technique
The RSA algorithm involves the following steps.
(a)
To initiate the RSA algorithm, two large prime numbers, p and q, each approximately 256 bits (about 77 decimal digits) in size, are chosen to generate a public key \((k1)\) and a private key (\(k2)\).
(b)
After selecting two large prime numbers \(p\ and\ q\), multiply them to obtain \(n\). The product \(n\) is public, but the prime factors \(p\ and \ q\) remain secret. It is almost impossible to determine \(p\ and\ q\) even if \(n\) is known, due to the computational complexity of factorizing very large numbers.
(c)
To create a public key, select a number \(e\) that is coprime with the totient function \(\varphi (n)=(p-1)(q-1)\).
(d)
Create a private key by selecting a number \(d\) that is the inverse of \(e\) modulo the totient function \(\varphi (n)\). Keep in mind that the public key is \(<k1,n>\), which is accessible to everyone, while the private key is \(<k2,n>\), which is only known to the individual who needs to decrypt or sign the message.
(e)
To encrypt a message \(m<n\),
$$c={m}^{k1}\bmod n$$
(8)
The output of this operation is the ciphertext \(c,\) which is the encrypted version of the original message.
(f)
Decrypt the cipher text.
$$m={c}^{k2} mod n$$
(9)
(g)
Sign the message by encrypting it with the private key \(<k2, n>\) and verifying it with the public key \(<k1,n>\). After key generation, the key is optimized by the Tasmanian Devil Optimizer to ensure the security and suitability of the cryptographic system and make the key more resistant to attacks.
The key generated by RSA is optimized with the self-adaptive TDO algorithm. SA-TDO minimizes the key-generation time of RSA, making the scheme more efficient than traditional RSA. The SA-TDO algorithm is explained in full in the following section.
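As a concrete illustration of steps (a)–(g), the textbook RSA flow can be sketched in Python. The tiny primes and helper names below are purely for demonstration and are not part of the proposed scheme; real RSA requires large random primes and proper padding.

```python
# Minimal textbook-RSA sketch (illustration only -- production RSA uses
# 2048-bit primes and padding; the small primes here are for demonstration).

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    # Public key <e, n>, private key <d, n>, with d = e^-1 mod phi(n).
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime with phi(n)"
    return (e, n), (d % phi, n)

def crypt(m, key):
    # The same modular exponentiation serves encryption and decryption.
    k, n = key
    return pow(m, k, n)

public, private = make_keys(61, 53)   # toy primes
cipher = crypt(42, public)            # c = m^e mod n  (Eq. 8)
plain = crypt(cipher, private)        # m = c^d mod n  (Eq. 9)
```

Calling `crypt` with the public key encrypts and calling it with the private key decrypts, since both operations are the same modular exponentiation with different exponents.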
The TDO [35] algorithm is being considered to optimize the RSA key, which could enhance the security and efficiency of the encryption-decryption process. SA_TDO specifically demonstrates adaptability, fast convergence, and robustness, which make it a suitable choice for optimal key generation in the proposed framework.
The TD is a carnivorous marsupial found in Tasmania. They are opportunistic animals and can either hunt prey or feed on carrion. They have two feeding strategies: feeding on carrion when available and hunting and attacking prey.
The TDO algorithm is chosen over other optimization algorithms for key generation in IoT networks mainly because of its adaptability and superior performance. TDO's self-adaptive mechanisms automatically adjust its parameters to the continuously changing conditions and nature of the IoT environment. This flexibility makes TDO a good fit for the varying device capabilities and the wide variety of network situations in IoT systems. Furthermore, the algorithm's ability to traverse search spaces effectively leads to optimal generation of cryptographic keys, which in turn improves the security and performance of the network. Its exploration–exploitation balance prevents the algorithm from converging prematurely, yielding more robust and secure solutions. In addition, TDO's speed and accuracy in key generation on resource-limited IoT devices help those devices perform their functions in a timely and effective manner.
The mathematical modelling of TDO is as follows.
Step 1: Initialization
A population of searcher agents based on TDs is used by the TDO method. Each member of the population is represented by a vector whose element count equals the number of variables in the problem, and the population can be expressed as a matrix:
$$Y={\left[\begin{array}{c}{Y}_{1}\\ \vdots \\ {Y}_{m}\\ \vdots \\ {Y}_{A}\end{array}\right]}_{A\times p}={\left[\begin{array}{ccc}{y}_{\text{1,1}}& \cdots & {y}_{1,p}\\ \vdots & \ddots & \vdots \\ {y}_{A,1}& \cdots & {y}_{A,p}\end{array}\right]}_{A\times p}$$
(10)
The TDO algorithm's population of TDs is represented by the symbol \(Y\), where \({Y}_{m}\) stands for the \(m^{th}\) candidate solution and \({y}_{m,n}\) represents the \(n^{th}\) variable. \(A\) is the size of the population, and there are \(p\) variables in the problem.
Step 2: Random generation
The input vectors are generated randomly.
Step 3: Fitness evaluation
The candidate solutions are substituted into the variables of the problem's objective function. This process yields a vector of objective values, as in Eq. (11):
$$F={\left[{F}_{1},{F}_{2},\dots ,{F}_{A}\right]}^{T}$$
(11)
The vector \(F\), where \({F}_{m}\) is the value reached by the \(m^{th}\) candidate solution, holds the values of the objective function. Analysing these values allows one to assess the quality of the candidate solutions. The best member of the population is the candidate solution that produces the best objective value, and it is updated in each iteration as new solutions are found. The TDO population update process is based on the TDs' carrion-eating and prey-hunting strategies.
Step 4: Exploration (Feeding by eating carrion)
TDs occasionally choose to eat carrion rather than hunt, particularly when other predators have killed large prey but are unable to finish it all. This carrion-seeking behaviour corresponds to the way the TDO algorithm explores the search space, and it displays TDO's capacity to investigate diverse regions of the search space to find the optimal answer. The TDO algorithm assumes that, for each TD, the positions of the other population members in the search space represent carrion locations. To simulate the random selection of one of these locations, Eq. (12) is used, which identifies the \(m^{th}\) TD's target carrion as the \(l^{th}\) population member. The value of \(l\) is randomly chosen between \(1\) and \(A\), while the value of \(m\) is fixed.
$${C}_{m}={Y}_{l}, m=\text{1,2},\dots ,A, l\in \left\{\text{1,2},\dots ,A|l\ne m\right\}$$
(12)
Here the carrion selected by the \(m^{th}\) TD is \({C}_{m}\). In this strategy, the TD's movement is based on the quality of the carrion it has found: if the carrion's objective value is better than the devil's, the devil approaches it; otherwise, it turns away. The new position is accepted only if it improves the objective function value; otherwise, the devil remains in its former position:
$${y}_{m,n}^{new,S1}=\left\{\begin{array}{ll}{y}_{m,n}+u\left({c}_{m,n}-V{y}_{m,n}\right),& {F}_{Cm}<{F}_{m}\\ {y}_{m,n}+u\left({y}_{m,n}-{c}_{m,n}\right),& otherwise\end{array}\right.$$
(13)
$${Y}_{m}=\left\{\begin{array}{ll}{Y}_{m}^{new,S1},& {F}_{m}^{new,S1}<{F}_{m}\\ {Y}_{m},& otherwise\end{array}\right.$$
(14)
In the first strategy, \({Y}_{m}^{new,S1}\) represents the new state of the \(m^{th}\) TD. The value of the \(n^{th}\) variable in the new state is represented by \({y}_{m,n}^{new,S1}\), and \({F}_{m}^{new,S1}\) is the new state's objective function value. \({F}_{Cm}\) represents the objective function value of the selected carrion. The variable \(u\) is a random number between \(0\) and \(1\), while \(V\) is a random number that can be either 1 or 2.
To prevent candidates from getting stuck in local minima, the Cauchy distribution is utilized to generate highly diverse random numbers. Cauchy-based random numbers are generated with the inverse-CDF formula
$${R}_{Cy}=\text{tan}\left(\pi \left(\varepsilon -0.5\right)\right)$$
(15)
where \({R}_{Cy}\) represents the Cauchy-based random number and \(\varepsilon\) lies in \([\text{0,1}]\). To include the Cauchy distribution in the algorithm, the position update can be reformulated as:
$${Y}_{m}^{new,S1}={y}_{m,n}^{new,S1}{R}_{Cy}$$
(16)
Step 5: Exploitation (Feeding by eating prey)
The second method of feeding used by the TD involves hunting and consuming prey. The procedure is divided into two steps: first, the devil examines the surroundings to identify and pick its target for attack, and then it pursues and devours the prey. The prey selection in the first stage is comparable to the carrion selection of the first feeding strategy. In the second strategy, when updating the \(m^{th}\) TD, the positions of other population members are taken into account as potential prey sites. The \(l^{th}\) population member, where \(l\) is a random natural number between \(1\) and \(A\), is picked at random to be the prey. The method of selecting prey is described by:
$${H}_{m}={Y}_{l}, m=\text{1,2},\dots ,A, l\in \left\{\text{1,2},\dots ,A|l\ne m\right\}$$
(17)
The prey selected by the \(m^{th}\) TD is \({H}_{m}\). Once the prey's position is determined, a new position is computed for the TD. The calculation of this new position depends on whether the objective function value of the selected prey is better or worse than that of the current position, as modelled in Eq. (18). The former position is replaced only if the new one improves the objective function value.
$${y}_{m,n}^{new,S2}=\left\{\begin{array}{ll}{y}_{m,n}+u\left({h}_{m,n}-V{y}_{m,n}\right),& {F}_{Hm}<{F}_{m}\\ {y}_{m,n}+u\left({y}_{m,n}-{h}_{m,n}\right),& otherwise\end{array}\right.$$
(18)
\({Y}_{m}^{new,S2}\) is the new state of the \(m^{th}\) TD based on the second strategy, \({y}_{m,n}^{new,S2}\) is the value of its \(n^{th}\) variable, \({F}_{m}^{new,S2}\) is its objective function value, and \({F}_{Hm}\) is the objective function value of the chosen prey.
Step 6: Improvised new position
The new position obtained from the second strategy is further improved using the Levy flight approach for additional exploration:
$$Levy\left(\lambda \right)=0.01\times \frac{{r}_{5}\times \sigma }{{\left|{r}_{6}\right|}^{1/\beta }}, \sigma ={\left(\frac{\lambda \left(1+\beta \right)\times \text{sin}\left(\pi \beta /2\right)}{\lambda \left(\left(1+\beta \right)/2\right)\times \beta \times {2}^{\left(\beta -1\right)/2}}\right)}^{1/\beta }$$
where \({r}_{5},{r}_{6}\) lie in \([\text{0,1}]\), \(1<\beta \le 2\), and \(\lambda \left(x\right)=(x-1)!\). The step length, denoted by \(Levy(\lambda )\), follows a Levy distribution characterized by infinite variance.
Step 7: Termination
Verify the stopping criteria: stop the procedure when the maximum number of iterations has been reached; otherwise, return to Step 4.
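The TDO steps above can be sketched as a minimal loop on a toy objective. The sphere function, population size, and iteration count below are illustrative assumptions, and the Cauchy and Levy refinements are omitted for brevity:

```python
import random

# Minimal sketch of the TDO loop on a toy objective (the sphere function).
# Symbols follow the text: Y is the A x p population, F the objective values,
# u in [0, 1] and V in {1, 2} are the random factors of the update rule.

def sphere(y):
    return sum(v * v for v in y)

random.seed(1)
A, p, t_max = 10, 3, 50
Y = [[random.uniform(-5, 5) for _ in range(p)] for _ in range(A)]
F = [sphere(y) for y in Y]
F0 = min(F)                                   # best initial objective value

for t in range(t_max):
    for m in range(A):
        # Pick another member as carrion (Step 4) or prey (Step 5);
        # both strategies share the same move-toward / move-away rule.
        l = random.choice([i for i in range(A) if i != m])
        u, V = random.random(), random.choice([1, 2])
        if F[l] < F[m]:   # good carrion/prey: move toward it
            new = [y + u * (c - V * y) for y, c in zip(Y[m], Y[l])]
        else:             # poor carrion/prey: move away from it
            new = [y + u * (y - c) for y, c in zip(Y[m], Y[l])]
        if sphere(new) < F[m]:                # greedy acceptance
            Y[m], F[m] = new, sphere(new)

best = min(F)   # never worse than F0 thanks to greedy acceptance
```

The greedy acceptance step guarantees the best objective value is monotonically non-increasing across iterations.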
The SA-TDO (Self-Adaptive Tasmanian Devil Optimization) algorithm is applied here to identify and eliminate malicious attacks on IoT devices, as depicted in Fig. 4. This multi-step algorithm starts with the initialization of parameters and the generation of an initial population of candidate solutions representing different IoT network configurations. It then evaluates these solutions based on an objective function that considers factors like security, resource usage, and efficiency. Through a process of random selection and information exchange, the algorithm applies exploration and exploitation techniques to refine and improve the solutions. It continuously updates and analyzes candidate solutions against acceptance criteria and termination conditions to find the most effective solution for network security. The SA-TDO algorithm leverages artificial intelligence and a population-based search strategy to optimize IoT network security against potential attackers, focusing on achieving the best security configuration.
Fig. 4
Flow chart of SA-TDO
When the key has been optimized, key hashing is performed to generate a fixed-length value that is unique to that key. The hashed key, also known as a digest or fingerprint, is used in place of the original key in cryptographic operations such as encryption and decryption. In particular, key hashing provides additional security for data stored in the blockchain. Therefore, the proposed model adopts SHA3-512 for hashing the key generated through the SA-TDO algorithm with RSA.
The Self-Adaptive Tasmanian Devil Optimization algorithm offers significant scalability and adaptability benefits, making it a good option for various types of IoT environments. Future work should, however, provide a more detailed description of the algorithm's ability to adjust itself to the specific conditions of different IoT applications. Likewise, the SA-TDO algorithm should be capable of dynamically adjusting its parameters to the computing power and memory of each device, while generating keys that do not put excessive strain on any individual IoT device.
3.3.1 SHA3-512 based key hashing
Secure hash algorithms such as SHA3-512 are used in embedded systems to ensure data integrity and prevent falsification [36, 37]. The RSA key generated with SA-TDO is hashed and stored in the blockchain along with the encrypted data. This protects the data from unauthorized manipulation, notably from man-in-the-middle attacks. To verify a message's integrity, the message is hashed, the hash is encrypted with the sender's private key, the result is concatenated with the original plaintext message, and the whole is finally encrypted with a symmetric key. At the receiver's end, the communication is decrypted and split into the plaintext and the encrypted hashed message. The receiver decrypts the encrypted hash using the sender's public key and independently hashes the plaintext message; comparing the two hashes confirms the message's integrity. The RSA key generated with SA-TDO is therefore strongly secured with SHA3-512. As discussed, the hashed key is maintained with the ciphertext in the blockchain, and decryption later converts the ciphertext back into the original text.
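The key-hashing and integrity check can be sketched with Python's standard `hashlib`; the key bytes below are placeholders, not the actual SA-TDO output:

```python
import hashlib

# Sketch of the SHA3-512 key-hashing step: the (placeholder) key material is
# digested before being stored in the blockchain alongside the ciphertext;
# integrity is later verified by recomputing and comparing the digest.

def hash_key(key_bytes: bytes) -> str:
    # SHA3-512 always yields a 512-bit (64-byte / 128-hex-char) digest.
    return hashlib.sha3_512(key_bytes).hexdigest()

def verify(key_bytes: bytes, digest: str) -> bool:
    # Any tampering with the key changes the recomputed digest.
    return hash_key(key_bytes) == digest

stored_digest = hash_key(b"optimized-rsa-key-material")
```

Because the digest length is fixed regardless of input size, the stored fingerprint adds constant overhead per key.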
RSA is a well-known asymmetric cryptographic algorithm that depends on a pair of keys for encryption and decryption: public and private. Its security rests on the difficulty of factoring large numbers, which makes it reliable for IoT networks. RSA not only secures data transmission but can also be used to generate digital signatures, which eases key management and helps ensure data integrity. Moreover, RSA is interoperable with current systems, a crucial feature in the resource-scarce environment of IoT devices. SHA3-512 belongs to the SHA3 family of cryptographic hash functions and provides increased security compared to previous versions. It generates a unique 512-bit hash value for the data, guaranteeing its integrity and flagging any tampering with the data. SHA3-512 resists collision attacks and length-extension attacks while accepting inputs of different sizes. With these integrity properties, SHA3-512 is considered a reliable choice for IoT networks. RSA and SHA3 are both powerful tools that together provide a solid data-management foundation for the IoT.
3.3.2 Decryption process
To safeguard data shared among IoT devices and other components of the IoT ecosystem, decryption is employed in the IoT to reverse the encryption procedure and recover the original plaintext data from the ciphertext. Restoring the ciphertext to its original form requires a decryption key that is known only to authorized parties; this key must be kept private and safe to prevent unwanted parties from accessing the unencrypted material. During the decryption procedure, the first data block is retrieved from the generated key itself, and the SHA3-512 hash computation is then applied to the following block. Finally, a third level of security is enabled based on the CSNN_ODNN. The decrypted IoT data is further analyzed and discussed in the following section.
3.4 Deep learning-based malicious detection
Deep learning-based techniques are utilized to identify malicious attacks on the IoT network. Our proposed model uses a routing-based traffic classification method to identify the traffic type on the IoT network. The traffic is then analyzed using a deep learning-based approach (CSNN_ODNN) to detect any malicious activities.
Malicious detection can be performed by the following steps.
1.
Pre-processing
The collected raw data is preprocessed so that it takes a clean format. Preprocessing is based on data cleaning and data normalization. The collected data may contain duplicate records, incomplete data, or noisy data. The data was therefore divided into distinct sets of features, each consisting of multiple data points, to ensure that these features were free of missing values and errors; any missing values were replaced with suitable averages. This process ensures that the data is thoroughly cleaned and prepared for analysis. The data may also exhibit disparate ranges, means, and variances, which can hinder learning and decrease the efficiency and accuracy of learning methods. To address this, the min–max scaling technique was employed to minimize the negative impact of outliers by transforming all data values into a standardized range from zero to one. The preprocessed data is then transferred into the feature extraction phase.
2.
Feature extraction
The pre-processed data is utilized for feature extraction based on statistical features. To make a data set appropriate for analysis or machine learning, feature extraction involves choosing and modifying the most important attributes or features from the data set. The following parameters are taken into account for feature extraction.
3.
Weighted Entropy and variance
For prominent feature selection, the weighted entropy and variance are used. The extracted features, specifically the weighted entropy and variance, are obtained using the new SA-TDO.
4.
Optimal Feature selection
The SA-TDO is used to select the optimal subset of features from the extracted characteristics.
5.
Malicious Detection and Classification
The selected features are then classified for intrusion using the optimized DNN. The DNN is optimized with the AOA. Moreover, the prediction accuracy of the malicious-attack detection model is improved by fine-tuning the activation function of the DNN using the SA-TDO algorithm. The output of the hybrid classifier is the detection result.
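The pre-processing of step 1 above, mean imputation followed by min–max scaling, can be sketched as follows; the sample feature column is illustrative:

```python
# Sketch of the pre-processing in step 1: missing values are replaced with
# the feature mean, then min-max scaling maps every value into [0, 1].

def impute_mean(column):
    # Replace None entries with the mean of the observed values.
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def min_max_scale(column):
    # Map values linearly onto [0, 1].
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

feature = [4.0, None, 10.0, 6.0]       # one feature with a missing value
scaled = min_max_scale(impute_mean(feature))
```

After scaling, the smallest observed value maps to 0 and the largest to 1, so every feature contributes on a comparable scale during learning.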
3.4.1 Convolution spiking neural network
The CSNN [38] can capture the sequential nature of events and the time dependencies between different data points. This enables the CSNN to identify suspicious activities that unfold over a period of time. The model consists of three fully connected layers, two max-pooling layers, two dropout layers, one spike-encoding layer, two convolutional layers, and two flatten layers. Figure 5 is a conceptual diagram of the CNN model for the Spiking Neural Network (SNN), indicating the various components and the relationships between them. The input data set may consist of images, time series, or data in grid or tensor format. The convolution operation, at the heart of the CNN, slides a filter over the input data and computes dot products, producing a feature map that highlights features of the data.
Fig. 5
Convolutional Spiking NN
3.4.2 Stage I: Feature extraction
The pre-processed data is subjected to a series of convolution, max-pooling, and dropout operations to extract features. Each feature extraction layer has its own feature level, filter count, kernel count, activation function, padding, and pooling operator, and feature maps serve as the output of one layer and the input of the next. The pre-processed data is compressed using a (3, 3) kernel size. By leveraging spatially local connections among the nodes of adjacent layers, the convolutional layer reinforces a local connection pattern. A max-pooling layer follows the feature maps created by each convolutional layer; used jointly, convolution and max-pooling operations act as compositional operators with a local selection process. Dropout is applied to the convolutional and max-pooling layer outputs with probabilities of 0.5 and 0.6, respectively. In addition, ReLU is used to activate every convolutional layer.
3.4.2.1 Convolution
A mathematical concept called convolution is frequently used in digital signal processing, where most signals are represented as numerical data. In simple terms, convolution is a technique for combining two functions of time. The 2-D convolution of two input matrices of dimensions \(I (ra, ca)\) and \(K(rb, cb)\) is
$$C\left(i,j\right)=\sum_{m=0}^{ra-1}\sum_{n=0}^{ca-1}I\left(m,n\right)K\left(i-m,j-n\right)$$
where \(0 \le i < ra + rb - 1\text{ and }0 \le j < ca + cb - 1\). The convolutional layer applies k filters with sizes i × j to the input data to conduct the convolution operation. The convolutional layer thus offers a set of k feature maps with sizes i × j. The model employs filters to identify different features in the input matrix. These filters are initialized with varied distributions, enabling them to learn unique features automatically. To maximize feature learning, a substantial number of filters, specifically 22, are utilized. The resulting activation map, comprising 22 feature maps of size 3 × 3, is then passed to the subsequent layer during the forward pass.
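A pure-Python sketch of this full 2-D convolution, producing the \((ra+rb-1)\times(ca+cb-1)\) output implied by the index bounds above; the small matrices are illustrative:

```python
# Full 2-D convolution of input I (ra x ca) with kernel K (rb x cb),
# giving an output of size (ra + rb - 1) x (ca + cb - 1).

def conv2d_full(I, K):
    ra, ca = len(I), len(I[0])
    rb, cb = len(K), len(K[0])
    out = [[0.0] * (ca + cb - 1) for _ in range(ra + rb - 1)]
    for i in range(ra + rb - 1):
        for j in range(ca + cb - 1):
            s = 0.0
            for m in range(ra):
                for n in range(ca):
                    # K is indexed at (i - m, j - n) only when in range.
                    if 0 <= i - m < rb and 0 <= j - n < cb:
                        s += I[m][n] * K[i - m][j - n]
            out[i][j] = s
    return out

I = [[1, 2], [3, 4]]
K = [[1, 0], [0, 1]]       # diagonal kernel
out = conv2d_full(I, K)    # 3 x 3 feature map
```

A real convolutional layer applies many such kernels in parallel (22 filters in the model above), each producing its own feature map.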
3.4.2.2 Max-pooling
The input is down-sampled in a nonlinear manner by the Max-Pooling procedure. The input signal is divided into several rectangular, non-overlapping areas by this process. It generates output as a maximum value for each of these regions, assisting in the necessary feature size reduction.
Here \(c\) is the input matrix (the \(k\) feature maps of size \(i\times j\) from the convolutional layer), \(z\) is the output matrix, and \(p\) is the padding. The pooling layer operates on each feature map individually, using the max operation to summarize each feature map regionally. The pooling layer also addresses a limitation of convolutional layers, namely that they record the precise position of features: small input changes such as cropping, shifting, and rotation can move these positions and produce a different feature map. By downsampling, the pooling layer creates a condensed version of the convolutional layer's feature maps. This property of pooling layers is known as "model invariance to local translation" and helps ensure that features are detected regardless of their exact position in the convolutional layer.
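Non-overlapping max-pooling can be sketched as follows; the 2 × 2 window and the sample feature map are illustrative:

```python
# Sketch of non-overlapping max-pooling: the input is tiled into
# size x size regions and each region is reduced to its maximum.

def max_pool(c, size=2):
    rows, cols = len(c), len(c[0])
    return [[max(c[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, cols, size)]
            for i in range(0, rows, size)]

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 7, 3]]
pooled = max_pool(fmap)    # 4 x 4 map -> 2 x 2 map
```

Each output entry keeps only the strongest activation of its region, which is what makes the representation tolerant of small local translations.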
3.4.2.3 Drop out
The concept behind dropout in neural networks is to randomly remove units from both the visible and hidden layers. It can be thought of as a regularization strategy that prevents over-fitting during the training phase. This procedure weakens intricate neuronal co-adaptations and consequently helps the network learn more robust features. To deactivate neurons throughout training, a mask made of zeros and ones is produced.
3.4.2.4 Non-linear gating
Conventional CNNs employ a non-linear gating function applied uniformly to each component of a feature map after the linear filter. The ReLU function is the most widely used nonlinear gating method. During backpropagation, the ReLU's derivative suppresses contributions from convolution-kernel outputs that the ReLU has disabled. The filter has the following mathematical formula:
$${z}_{ik}=\text{max}\{0,{x}_{ik}\}$$
(29)
where z is the output and x is the input.
3.4.3 Stage II: Feature encoding
To convert the feature maps into feature vectors, which the classifier needs as input, the extracted features must be flattened. The Soft-LIf model, a differentiable variant of the leaky integrate-and-fire (LIf) model with a refractory period, is used to spike-encode the resulting feature vectors. Here we review the design concepts guiding the use of Soft-LIf spiking neurons to represent the input feature vector. The LIf neuron comprises two parts: the membrane potential behaviour and the spike-reset mechanism. The dynamics of the LIf neuron membrane potential \({v}_{LIf}(t)\) with respect to the input \({t}_{s}(t)\) are governed by the differential equation
$$C\frac{d{v}_{LIf}\left(t\right)}{dt}={t}_{s}\left(t\right)-\frac{{v}_{LIf}\left(t\right)}{R}$$
Here the membrane resistance and capacitance are \(R\) and \(C\), respectively. Solving this equation yields the normalised LIf neuron rate response with a refractory period.
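A toy discrete-time simulation of an LIf neuron with a refractory period is sketched below; the parameter values and threshold are illustrative assumptions, and the Soft-LIf smoothing is omitted:

```python
# Euler simulation of a leaky integrate-and-fire neuron:
#   C dv/dt = i(t) - v/R
# A spike is emitted and v reset when v crosses v_th, after which the
# neuron stays silent for t_ref steps. All parameters are illustrative.

def simulate_lif(current, dt=1.0, R=1.0, C=10.0, v_th=1.0, t_ref=2):
    v, refractory, spikes = 0.0, 0, []
    for t, i in enumerate(current):
        if refractory > 0:          # absolute refractory period
            refractory -= 1
            continue
        v += dt * (i - v / R) / C   # leaky integration step
        if v >= v_th:               # threshold crossing: spike and reset
            spikes.append(t)
            v, refractory = 0.0, t_ref
    return spikes

spikes = simulate_lif([1.5] * 60)   # constant supra-threshold input
```

A constant supra-threshold input yields a regular spike train; stronger inputs shorten the inter-spike interval, which is the rate code the Soft-LIf encoding exploits.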
A DNN is a mathematical function that maps an input \((p)\) to an output \((q)\) using a set of parameters. The Multi-Layer Perceptron (MLP) is a type of NN in which information propagates forward through fully connected neurons in each layer. The data is transformed from one layer to the next, and the mathematical representation of the MLP is \(O : {\mathbb{Z}}^{k}\to {\mathbb{Z}}^{l}\), where \(k\) and \(l\) are the sizes of the input vector \(p\) and the output vector \(O\left(p\right)\), respectively. The computation of each hidden layer \({S}_{i}\) is given by
Here \({S}_{i}: {\mathbb{Z}}^{{d}_{i}-1}\to {\mathbb{Z}}^{{d}_{i}}\), \(f:{\mathbb{Z}}\to {\mathbb{Z}}\), \({w}_{i}\in {\mathbb{Z}}^{{d}_{i}\times {d}_{i-1}}\), \(b\in {\mathbb{Z}}^{{d}_{i}}\), and \({d}_{i}\) is the layer size. The nonlinear activation function \(f\) can be either a \(sigmoid\) \([\text{0,1}]\) or a \(tangent\) \([-\text{1,1}]\) function. For multi-class classification, the MLP model uses the \(softmax\) function as the non-linear activation function. The \(softmax\) function calculates the probability of each class, and the output is determined by selecting the class with the highest probability, which ensures more accurate results. The mathematical formulas for the \(sigmoid\), \(tangent\), and \(softmax\) activation functions are provided below.
A DNN is formed by stacking multiple hidden layers on top of each other within a neural network architecture as shown in Fig. 6.
Fig. 6
DNN architecture
The input is given by \(p={p}_{1},{p}_{2},\dots ,{p}_{k-1}, {p}_{k}\) and the outputs are \(q={q}_{1}, {q}_{2},\dots ,{q}_{l-1}, {q}_{l}\). The Rectified Linear Unit (\(ReLU\)) activation function is used in neural networks to mitigate vanishing and unstable error gradients. One of the main benefits of \(ReLU\) is its speed compared to other non-linear activation functions, which allows efficient training of MLP models with many hidden layers.
3.5 Archimedes optimization algorithm (AOA)
The AOA [39] is a population-based algorithm that uses objects immersed in a fluid as individuals. It starts with an initial population of objects with random properties and evaluates their fitness. AOA then updates the properties of each object based on collisions with neighbouring objects, determining their new positions. The algorithm iterates until a termination condition is met.
\({X}_{m}\) denotes the \(m^{th}\) object among a population of \(I\) objects. The variables \({lb}_{m}\) and \({ub}_{m}\) indicate the lower and upper limits, respectively, of the search space.
Initialize the volume \({v}_{m}\), density \({d}_{m}\), and acceleration \({a}_{m}\) for each \(m^{th}\) object using
$${d}_{m}=rand, {v}_{m}=rand, {a}_{m}={lb}_{m}+rand\times \left({ub}_{m}-{lb}_{m}\right)$$
The cross-entropy loss, referred to as the SoftMax loss, is utilized for training DNN in classification tasks. Embeddings are often taken from the DNN's last hidden layer for intrusion categorization. The SoftMax loss function can be mathematically expressed as follows:
Here, m is the dimension of the features that were retrieved from the final hidden layer, and n is the total number of categories in the classification job. The connection weights between the last hidden layer and the output layer are represented by the weight matrix of the output layer or w. The bias terms for each class are represented by b, the output layer's bias vector. \(w=\left[{w}_{1},{w}_{2},\dots .{w}_{n}.\right] and b=\left[{b}_{1},{b}_{2},\dots .{b}_{n}.\right]\) are the bias weight vectors of the output layer.
When DNNs are trained, the SoftMax loss function is used to minimize the difference between predicted probabilities and the actual labels, or ground truth, to optimize the network's parameters (weights and biases). It incentivizes the network to give the right class labels a higher probability while punishing the wrong predictions.
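The SoftMax cross-entropy computation described above can be sketched as follows; the logits are illustrative:

```python
import math

# Sketch of the SoftMax cross-entropy loss used to train the DNN classifier:
# probabilities come from the softmax of the logits (z = w.x + b per class),
# and the loss is the negative log-probability of the true class.

def softmax(z):
    exps = [math.exp(v - max(z)) for v in z]   # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_class):
    return -math.log(softmax(logits)[true_class])

probs = softmax([2.0, 1.0, 0.1])
loss_good = cross_entropy([5.0, 0.0, 0.0], 0)   # confident and correct
loss_bad = cross_entropy([0.0, 5.0, 0.0], 0)    # confident but wrong
```

As the text notes, a confident wrong prediction is penalized far more heavily than a confident correct one.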
The fitness function is the minimization of the loss function described by Eq. (34). In the fitness function, an object is the collection of unknown variables.
Step 4: Update densities and volumes
The density and volume of each object are updated for iteration \(t+1\) as
$${d}_{m}^{t+1}={d}_{m}^{t}+rand\times \left({d}_{best}-{d}_{m}^{t}\right)$$
$${v}_{m}^{t+1}={v}_{m}^{t}+rand\times \left({v}_{best}-{v}_{m}^{t}\right)$$
Here \({d}_{best}, {v}_{best}\) are the density and volume of the best object found so far, and \(rand\) is a uniformly distributed random number.
Step 5: Transfer operator and density factor
Initially, objects in the AOA algorithm experience collisions, but over time they strive to reach an equilibrium state. This transition from exploration to exploitation is facilitated by the transfer operator \(TF\), which transforms the search process and is defined in Eq. (44):
$$TF=\text{exp}\left(\frac{t-{t}_{max}}{{t}_{max}}\right)$$
(44)
The transfer operator \(TF\) gradually increases over time until it reaches a value of 1. Here \(t\) and \({t}_{max}\) are the current and maximum iterations. Additionally, the density-decreasing factor \(b\) is utilized by AOA to move from global to local search; it gradually decreases over time:
$${b}^{t+1}=\text{exp}\left(\frac{{t}_{max}-t}{{t}_{max}}\right)-\frac{t}{{t}_{max}}$$
The variable \({b}^{t+1}\) decreases over time, enabling AOA to converge towards a promising region that has already been identified. The AOA algorithm's exploration and exploitation trade-offs are balanced with proper control of this variable.
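A sketch of the transfer operator and density-decreasing factor, following their usual definitions in the AOA literature [39]:

```python
import math

# Transfer operator TF and density-decreasing factor b of AOA:
# TF rises toward 1 while b shrinks toward 0, moving the search from
# exploration (TF <= 0.5) to exploitation (TF > 0.5).

def transfer_operator(t, t_max):
    return math.exp((t - t_max) / t_max)

def density_factor(t, t_max):
    return math.exp((t_max - t) / t_max) - t / t_max

t_max = 100
tf_early = transfer_operator(1, t_max)     # well below 0.5 -> exploration
tf_late = transfer_operator(t_max, t_max)  # reaches 1 -> exploitation
```

The crossover point where `TF` passes 0.5 splits the run roughly into an exploratory first phase and an exploitative second phase, as described in the text.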
(a)
Exploration phase (collision between objects occurs)
When one object collides with another (\(TF\le 0.5\)), the object's acceleration for the \((t+1)^{th}\) iteration is updated as follows:
$${a}_{m}^{t+1}=\frac{{d}_{{rand}_{m}}+{v}_{{rand}_{m}}\times {a}_{{rand}_{m}}}{{d}_{m}^{t+1}\times {v}_{m}^{t+1}}$$
\({d}_{{rand}_{m} },{v}_{{rand}_{m}},{a}_{{rand}_{m}}\) correspond to the density, volume, and acceleration of random material. It is worth noting that \(TF \le 0.5\) guarantees exploration in one-third of the repetitions. A number other than 0.5 will change the algorithm's balance between exploration and exploitation.
(b)
Exploitation phase (no collision between objects)
If \(TF>0.5\) and no collision occurs between objects, the acceleration is updated as follows:
$${a}_{m}^{t+1}=\frac{{d}_{best}+{v}_{best}\times {a}_{best}}{{d}_{m}^{t+1}\times {v}_{m}^{t+1}}$$
The acceleration is then normalized as
$${a}_{m(norm)}^{t+1}=u\times \frac{{a}_{m}^{t+1}-\text{min}(a)}{\text{max}\left(a\right)-\text{min}(a)}+l$$
Here \(u\) and \(l\) define the normalization range and are set to \(0.9\) and \(0.1\), respectively. The normalized acceleration value of each agent in AOA, \({a}_{m(norm)}^{t+1}\), determines the step-change percentage. If an object is far from the global optimum, it has a higher acceleration value, indicating exploration; as the object approaches the optimum, it enters the exploitation phase. A balance between exploration and exploitation is achieved by the acceleration factor's initially large value and its gradual decrease over time.
Step 6: Position update
The \(m\) th object for the next iteration is given by
$${p}_{m}^{t+1}={p}_{m}^{t}+{D}_{1}\times rand \times {a}_{m(norm)}^{t+1}\times k\times \left({p}_{rand}-{p}_{m}^{t}\right) , if TF\le0.5$$
For the exploitation case (\(TF>0.5\)), the position is updated as
$${p}_{m}^{t+1}={p}_{best}^{t}+F\times {D}_{2}\times rand\times {a}_{m(norm)}^{t+1}\times k\times \left(T\times {p}_{best}-{p}_{m}^{t}\right), if\ TF>0.5$$
where \(T={D}_{3}\times TF\) increases with time in \([{D}_{3}\times \text{0.3,1}]\).
The change in direction of motion with flag \(F\) is given by
$$F=\left\{\begin{array}{c}+1\ if\ S\le 0.5\\ -1\ if\ S>0.5\end{array}\right.$$
(57)
Where
$$S=2\times rand-{D}_{3}$$
(58)
Step 7: Evaluation
Each object is assessed using the objective function f. The method allocates the values of \({d}_{best},\ {v}_{best},\ {a}_{best},\ {p}_{best}^{t}\) in accordance with the best answer thus far.
Step 8: Termination
Verify the stopping criteria: stop the procedure when the maximum number of iterations has been reached; otherwise, return to Step 4. Algorithm 1 presents the AOA-optimized DNN loss function.
Algorithm 1
Archimedes optimization Algorithm for DNN loss function
3.6 Dataset
The NF-UQ-NIDS Network Intrusion Detection Dataset [40] is a comprehensive and inclusive dataset that combines multiple smaller datasets, bringing together flows from diverse network setups and encompassing various attack configurations such as DoS, Benign (Normal), Scanning, DDoS, XSS, Bot, Reconnaissance, Fuzzers, and injection attacks.
4 Results and discussion
4.1 Experimental setup
A thorough experimental setup was organized to evaluate the proposed trust management and intrusion detection system for IoT routing. The NF-UQ-NIDS dataset was the principal data source, grounding the system in real-world, current data. The simulated IoT network environment was designed to be as close as possible to real-world IoT scenarios, with a variety of IoT devices and network nodes. The implementation was carried out in Python, utilizing RSA for data encryption, SHA3-512 for data integrity, and the Self-Adaptive TDO algorithm for optimal key generation. The environment included different types of IoT devices, such as sensors, actuators, and smart appliances, reflecting the breadth of real-world IoT ecosystems. Nodes acted as the main actors of the network, representing roles such as gateways and servers. The simulation thus verified the proposed system in full, with the testbed using RSA for data encryption owing to its known security and wide industry acceptance.
4.2 Performance metrics
In assessing the trust management evaluation and intrusion detection system for IoT routing, system efficacy and efficiency are measured through several performance metrics that appraise the system from different aspects. These metrics are outlined below:
Encryption Time(s): Encryption time refers to the period required to translate the plaintext data into ciphertext using an approved encryption algorithm. The shorter encryption time is advantageous in cases when data security is a major concern since it makes it possible to encrypt the data fast during the transmitting or storing process.
$${T}_{enc}=\frac{{T}_{encryption}}{N}$$
where \({T}_{enc}\) denotes the encryption time, \({T}_{encryption}\) is the total time taken for encryption, and \(N\) is the number of data units encrypted. This metric helps in comparing different encryption techniques by providing a standardized measure of their performance.
Decryption Time(s): Decryption time is the duration of the operation that takes a ciphertext and converts it into plaintext using a given decryption algorithm. A short decryption period makes it possible to obtain the needed information quickly, which is vital for efficient data use and real-time operations.
$${T}_{d}=\frac{{T}_{Decryption}}{N}$$
where \({T}_{d}\) denotes the decryption time, \({T}_{Decryption}\) is the total time taken for decryption, and \(N\) is the number of data units decrypted. This metric is essential for evaluating the efficiency of decryption algorithms, particularly in environments with limited resources, such as the IoT.
Key Generation Time(s): Key generation time is the parameter that determines how long it takes to produce the cryptographic keys used to encrypt and decrypt data. For encryption systems, efficient key generation is a basic factor affecting both performance and the overall quality of the cryptography.
It is computed as \(T_{key} = T_{gen}/N\), where \(T_{key}\) denotes the key generation time, \(T_{gen}\) is the total time taken for key generation, and \(N\) is the number of keys generated. This metric is particularly important for assessing the efficiency of key generation processes in cryptographic systems.
Restoration Efficiency: Restoration efficiency measures the system's ability to recover to a stable state after a failure or attack. High restoration efficiency is paramount for protecting data integrity and availability, both crucial properties of a sound cryptographic system.
It is computed as \(E_{rest} = R_{fail}/R_{max}\), where \(E_{rest}\) is the restoration efficiency, \(R_{max}\) denotes the maximum possible recovery state, and \(R_{fail}\) is the recovery state reached after a failure or attack. This metric is crucial for evaluating the resilience and robustness of a system, especially in IoT networks where devices and services are often subjected to various cyber threats and operational failures: it quantifies how effectively a system can return to its optimal or near-optimal state after a disruption.
Response Time(s): Response time spans from the moment a request is made until it is executed, including encryption, decryption, and other related processing. A faster response time enables a more efficient and trustworthy blockchain environment, which is essential for real-time use.
It is computed as \(T_{res} = T_{request} + T_{process}\), where \(T_{res}\) is the response time, \(T_{request}\) denotes the time taken to make the request, and \(T_{process}\) is the time required for processing the request, including tasks such as encryption and decryption.
Sharing Record Time(s): Sharing record time indicates how long it takes to forward or transmit cryptographic data on the blockchain. Shorter sharing record times facilitate faster access to data, which is key for blockchain and IoT systems.
It is computed as \(T_{share} = T_{shared}/R\), where \(T_{share}\) is the sharing record time, \(T_{shared}\) denotes the total time taken to share the records, and \(R\) is the number of records shared.
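All of the per-unit timing metrics above share the same shape (total time divided by the number of units processed), and response time is the sum of request and processing time. A small sketch makes this concrete; the helper names and the `pow` stand-in for one encryption are illustrative, not the paper's implementation.

```python
import time

def mean_time_per_unit(total_time, n_units):
    """T = T_total / N, as used for encryption, decryption,
    key generation, and sharing-record time."""
    return total_time / n_units

def timed(fn, *args):
    """Wall-clock a single operation (illustrative measurement helper)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Example: average encryption time over N data units, with a modular
# exponentiation standing in for one RSA encryption.
N = 1000
total = 0.0
for _ in range(N):
    _, dt = timed(pow, 42, 17, 3233)
    total += dt
t_enc = mean_time_per_unit(total, N)
print(f"average encryption time: {t_enc:.2e} s")

# Response time combines request and processing time: T_res = T_request + T_process
t_request, t_process = 0.5, 1.2   # illustrative values, seconds
t_res = t_request + t_process
```

Averaging over many units, as the formulas prescribe, smooths out per-call timer jitter, which matters when individual operations take microseconds.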
Various performance metrics are used to evaluate the intrusion detection system, including Accuracy, Precision, Sensitivity, Specificity, F-measure, MCC, NPV, FPR, and FNR. These metrics are defined as follows:
Accuracy: Accuracy indicates the proportion of correct predictions (both true positives and true negatives) over all predictions made. Higher accuracy means the model performs well across the different classes, making this metric an important factor in assessing any intrusion detection system. In the following definitions, \(TP\) stands for True Positives, \(TN\) for True Negatives, \(FP\) for False Positives, and \(FN\) for False Negatives.
Precision: Precision is the ratio of true positive outcomes to all positive outcomes predicted by the model. Higher precision means the model produces fewer false positives, which is very important for an intrusion detection system that must minimize spurious alerts.
Sensitivity (Recall): Sensitivity, or recall, indicates the proportion of true positive classifications out of all actual positive cases. High sensitivity is required to ensure the model detects most actual positives (such as intrusions), reducing the chance of missed attacks.
Specificity: Specificity measures the proportion of actual negative cases that are correctly classified. High specificity allows the model to correctly recognize non-threatening instances, which reduces false alarms and improves system reliability.
F-Measure (F1-Score): The F-measure, also known as the F1-score, is the harmonic mean of precision and recall. It provides a comprehensive measure of model effectiveness, especially in situations with imbalanced class distributions.
Matthews Correlation Coefficient (MCC): MCC takes into account true and false positives and negatives to evaluate the quality of a binary classifier; values closer to 1 indicate better performance. Because it balances all four confusion-matrix counts, MCC provides a balanced assessment of model performance and supports more reliable decisions in model evaluation.
Negative Predictive Value (NPV): NPV is the proportion of correct negative outputs among all negative outputs produced by the model. A high NPV signifies the model's competence at identifying negative cases correctly, a vital aspect of an intrusion detection system's credibility.
False Positive Rate (FPR): FPR is the fraction of false positive predictions over all actual negative examples. A lower FPR is required to avoid false alarms and to increase system efficiency and reliability.
False Negative Rate (FNR): FNR is the percentage of positive cases missed by the model out of all actual positive instances. Reducing FNR is of the utmost importance for ensuring the system can detect threats and for minimizing the risk of a missed intrusion.
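All nine classification metrics above derive directly from the four confusion-matrix counts. A compact reference implementation (function and variable names are ours, and the example counts are illustrative):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # recall
    specificity = tn / (tn + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    npv = tn / (tn + fn)
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return dict(accuracy=accuracy, precision=precision,
                sensitivity=sensitivity, specificity=specificity,
                f_measure=f_measure, mcc=mcc, npv=npv, fpr=fpr, fnr=fnr)

# Illustrative counts, not from the paper's experiments:
m = classification_metrics(tp=90, tn=85, fp=10, fn=15)
print(f"accuracy={m['accuracy']:.3f}, mcc={m['mcc']:.3f}")
```

Note the complementary identities the definitions imply: FPR = 1 - specificity and FNR = 1 - sensitivity, which is why lowering the error rates raises the corresponding detection rates.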
Table 1 compares the running performance of AOA, MSA, SSO, and the proposed SA_TDO at a 70% learning rate, with blockchain security as the subject. The comparison covers encryption time, decryption time, key generation time, restoration efficiency, response time, and the time required for sharing records. The proposed SA_TDO achieves an encryption time of 0.120983 s, outperforming AOA (0.124544 s), MSA (0.136845 s), and SSO (0.126651 s). SA_TDO also leads in decryption time (0.112509 s), key generation time (0.233493 s), and restoration efficiency (0.840707) compared to the other algorithms at the 70% learning rate. These outcomes show that SA_TDO delivers stronger performance than the other methods in the context of blockchain security, and on some parameters markedly higher efficiency.
Table 1
Performance metrics comparison for cryptographic operations using various optimization algorithms at 70% learning rate
Learn Rate 70%
AOA
MSA
SSO
Proposed (SA_TDO)
Encryption Time(s)
0.124544
0.136845
0.126651
0.120983
Decryption Time(s)
0.115821
0.122186
0.117408
0.112509
Key Generation Time(s)
0.240365
0.259031
0.244059
0.233493
Restoration Efficiency
0.834575
0.787497
0.831198
0.840707
Response Time(s)
215.352
224.402
210.886
209.397
Sharing Record Time(s)
13.4595
14.0251
13.1804
13.0873
Table 2 below shows the performance of the four models (AOA, MSA, SSO, and SA_TDO) at an 80% learning rate on the factors relevant to blockchain security: encryption time, decryption time, key generation time, restoration efficiency, response time, and sharing record time. The proposed SA_TDO again performs well in both speed and accuracy. Its encryption time of 0.115627 s is more efficient than that of AOA, MSA, and SSO, and it also leads in decryption time (0.107529 s) and key generation time (0.223156 s) at the 80% learning rate. Moreover, the SA_TDO algorithm's restoration efficiency of 0.916865 is the highest of the four. Its response time (223.103 s) and sharing record time (13.9439 s) further underline the competitiveness of SA_TDO against AOA, MSA, and SSO at an 80% learning rate. These results support the algorithm's timing efficiency and operational resilience in securing blockchain applications.
Table 2
Performance metrics comparison for cryptographic operations using various optimization algorithms at 80% learning rate
Learn Rate 80%
AOA
MSA
SSO
Proposed (SA_TDO)
Encryption Time(s)
0.118519
0.127053
0.125998
0.115627
Decryption Time(s)
0.110218
0.124004
0.10885
0.107529
Key Generation Time(s)
0.228737
0.251056
0.234848
0.223156
Restoration Efficiency
0.898445
0.858836
0.790639
0.916865
Response Time(s)
233.724
234.493
226.518
223.103
Sharing Record Time(s)
14.3275
14.5129
14.2193
13.9439
Figure 7 presents a range of performance measures important in cryptography and security, with each subfigure illuminating one part of the cryptographic system. Subfigure (a), Encryption Time, shows the duration of the encryption process, a core aspect of cryptography: quicker encryption means data is secured rapidly, which is critical for responsive communication and storage. Subfigure (b) reports Decryption Time, the time needed to decode encrypted data; a shorter decryption time ensures that protected information is accessible quickly and can be used efficiently. Subfigure (c) presents Key Generation Time, the period required to generate cryptographic keys; efficient key generation underpins the performance of the encryption system as a whole. Subfigure (d) gives a composite view of Response Time, covering encryption, decryption, and other processing; a lower response time means a faster and more reliable cryptographic platform. Subfigure (e) examines Sharing Record Time, the time it takes to transmit or record cryptographic details; fast record sharing is crucial for timely and safe information dissemination in systems such as blockchain. Finally, subfigure (f) covers Restoration Efficiency, evaluating how effectively data is restored after a disruption; higher restoration efficiency indicates a robust recovery process, which is significant for maintaining data integrity and availability in cryptographic systems.
Fig. 7
a Encryption Time b Decryption Time c Key Generation Time d Response Time e Sharing Record Time f Restoration Efficiency
The performance metrics for the 70% and 80% learn rates were evaluated under four different architectures: the Archimedes Optimization Algorithm (AOA), Moth Search Algorithm (MSA), Salp Swarm Optimization (SSO), and the self-adaptive TDO. The proposed SA_TDO model consistently outperforms the AOA, MSA, and SSO algorithms in both learn-rate scenarios. It achieves shorter encryption times (0.120983 s at 70% learn rate, 0.115627 s at 80%), decryption times (0.112509 s at 70%, 0.107529 s at 80%), and key generation times (0.233493 s at 70%, 0.223156 s at 80%) compared to the other algorithms. Moreover, SA_TDO demonstrates higher restoration efficiency (0.840707, 0.916865), indicating better recovery capabilities, along with lower response times (209.397 s, 223.103 s) and shorter sharing record times (13.0873 s, 13.9439 s).
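The winner selection across Table 1 can be reproduced programmatically from the reported values. The dictionary below hard-codes the published 70% learn-rate numbers; the structure and names are ours.

```python
# Reported 70% learn-rate results (Table 1). Lower is better for the
# timing metrics; higher is better for restoration efficiency.
results_70 = {
    "AOA":    {"enc": 0.124544, "dec": 0.115821, "keygen": 0.240365, "restore": 0.834575},
    "MSA":    {"enc": 0.136845, "dec": 0.122186, "keygen": 0.259031, "restore": 0.787497},
    "SSO":    {"enc": 0.126651, "dec": 0.117408, "keygen": 0.244059, "restore": 0.831198},
    "SA_TDO": {"enc": 0.120983, "dec": 0.112509, "keygen": 0.233493, "restore": 0.840707},
}

# Pick the best algorithm per criterion.
fastest_enc  = min(results_70, key=lambda a: results_70[a]["enc"])
fastest_key  = min(results_70, key=lambda a: results_70[a]["keygen"])
best_restore = max(results_70, key=lambda a: results_70[a]["restore"])
print(fastest_enc, fastest_key, best_restore)  # SA_TDO on all three
```

The same selection over the 80% learn-rate values in Table 2 yields the identical ranking, which is what the text above summarizes.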
The study incorporates testing in the context of the IoT, which helps overcome limitations associated with small testing environments [21]. Japkowicz proposes a technique known as focused sampling to address dataset imbalance by under-sampling the majority class: a subset is generated by specifically targeting and removing outliers from the dominant class, yielding a more balanced representation of the dataset. Tables 3 and 4 report the evaluation metrics for the DNN, CNN, RNN, and LSTM models. As Figs. 8 and 9 show, the proposed model outperforms DNN, CNN, RNN, and LSTM across the performance metrics, achieving higher accuracy (0.965693 at 70% learn rate, 0.989434 at 80%), precision (0.889409, 0.988886), sensitivity (0.889409, 0.988886), specificity (0.991121, 0.998616), and F-measure (0.889409, 0.988886). The error metrics for intrusion detection are analyzed with FNR, FPR, MCC, and NPV. On the MCC (Matthews Correlation Coefficient), which measures the quality of a binary classifier with values closer to 1 indicating better performance, the proposed architecture consistently achieves the highest values, 0.8386 and 0.8955, for the two learn rates. It also exhibits the highest NPV scores, 0.9911 and 0.9986, indicating its ability to accurately predict negative instances, and the lowest FPR values, 0.0509 and 0.0344. The FNR for the proposed model is 0.1526 and 0.1031.
Lower values for both error metrics indicate better performance, and the proposed model demonstrates superior performance in minimizing both false positives and false negatives.
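The focused-sample idea credited to Japkowicz (under-sampling the majority class) can be sketched with standard-library tooling. Here, random under-sampling stands in for the outlier-targeted variant, since the text does not specify the exact outlier-selection rule; all names are illustrative.

```python
import random

def undersample_majority(samples, labels, seed=0):
    """Drop majority-class samples until classes are balanced.
    A simplified stand-in for Japkowicz's outlier-targeted focused
    sampling: here removal is random rather than outlier-driven."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    n_min = min(len(xs) for xs in by_class.values())  # minority-class size
    balanced = []
    for y, xs in by_class.items():
        for x in rng.sample(xs, n_min):               # keep n_min per class
            balanced.append((x, y))
    rng.shuffle(balanced)
    return balanced

data = list(range(100))
labels = [0] * 90 + [1] * 10          # imbalanced: 90 vs 10
bal = undersample_majority(data, labels)
print(len(bal))  # 20 samples, 10 per class
```

Balancing the training split this way prevents the intrusion detector from trivially favoring the majority (benign) class, which would inflate accuracy while hurting sensitivity.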
Table 3
Performance evaluation of deep learning models at 70% learning rate for classification tasks
Learn Rate 70%
DNN
CNN
RNN
LSTM
Proposed
Accuracy
0.934023
0.826910
0.925454
0.922954
0.965693
Precision
0.780495
0.566269
0.763357
0.758358
0.889409
Sensitivity
0.780495
0.566269
0.763357
0.758358
0.889409
Specificity
0.985199
0.913790
0.979486
0.977820
0.991121
F_measure
0.780495
0.566269
0.763357
0.758358
0.889409
MCC
0.678142
0.392509
0.655292
0.648627
0.838553
NPV
0.985199
0.913790
0.979486
0.977820
0.991121
FPR
0.102352
0.173761
0.108065
0.109731
0.050856
FNR
0.307056
0.521282
0.324194
0.329193
0.152568
Table 4
Performance evaluation of deep learning models at 80% learning rate for classification tasks
Learn Rate 80%
DNN
CNN
RNN
LSTM
Proposed
Accuracy
0.936767
0.915674
0.922935
0.958207
0.989434
Precision
0.820233
0.778046
0.792569
0.863112
0.988886
Sensitivity
0.820233
0.778046
0.792569
0.863112
0.988886
Specificity
0.975612
0.961549
0.966391
0.989905
0.998616
F_measure
0.820233
0.778046
0.792569
0.863112
0.988886
MCC
0.742544
0.686294
0.705659
0.799716
0.895521
NPV
0.975612
0.961549
0.966391
0.989905
0.998616
FPR
0.077689
0.091752
0.086911
0.063396
0.034365
FNR
0.233068
0.275255
0.260732
0.190189
0.103095
Fig. 8
a Accuracy b Precision c Specificity d Sensitivity e F-Measure
Fig. 9
a NPV b MCC c FPR d FNR
Table 5 compares the proposed model's performance metrics against reference models, covering, inter alia, accuracy, precision, recall, and F1-measure. The models of [37, 41], and [42] demonstrate varying degrees of performance. Model [37] reports accuracy, precision, recall, and F1-measure of 98.23%, a reliable and well-balanced result across all metrics. Model [41] is slightly better, achieving 98.75% on accuracy, precision, recall, and F1-measure alike, confirming its correctness and completeness. Model [42], reaching an accuracy of 97.25%, does not provide comparable precision and recall values, implying possible imbalances in its predictive capabilities. The proposed model performs at an overall higher level, with 98.94% accuracy and precision, recall, and F1-measure all at 98.88%. In other words, the proposed model detects and categorizes intrusions more effectively than the referenced models, making it superior for network intrusion detection. A visual depiction of these results is presented in Fig. 10.
Table 5
Performance evaluation of proposed model with reference models using NF-UQ-NIDS dataset
Metric
[37]
[41]
[42]
Proposed
Accuracy
0.9823
0.9875
0.9725
0.9894
Precision
0.9823
0.9875
-
0.9888
Recall
0.9823
0.9875
-
0.9888
F1-Measure
0.9823
0.9875
-
0.9888
Fig. 10
Performance evaluation of proposed model with referenced models using NF-UQ-NIDS dataset
The proposed model outperforms the referenced models with an accuracy of 98.94%, compared to 98.23% and 98.75%. It achieves precision, recall, and F1-measure scores of 98.88%, surpassing the values reported by the other models. These results highlight the superior performance of the proposed model in accurately detecting network anomalies. The ramifications of this study are significant: it introduces an effective trust management evaluation and intrusion detection system for IoT routing that can be applied across domains to strengthen blockchain security. The compared algorithms and architectures outperform existing ones and prove resilient in different environments, showing promise for practical deployment in securing IoT networks and blockchain applications. The analysis, however, employs a simulated IoT network built in a lab, and such simulations may not accurately mimic the intricacy and dynamics of a real-life IoT ecosystem; the proposed cybersecurity system still needs validation in practical, diverse, and dynamic environments. The research also relies on the NF-UQ-NIDS dataset for testing, so the solution's performance on other datasets and under the conditions of the many possible IoT scenarios cannot be confirmed at this stage. Analyzing wider and more diverse datasets, covering different network environments and attack scenarios, would yield more well-rounded results.
5 Discussion
The preliminary results of implementing deep learning and metaheuristics for trustworthy data management using blockchain, applied to the detection of malicious IoT nodes, have been outstanding. The proposed scheme, equipped with strong cryptographic algorithms (RSA and SHA3-512), the SA_TDO optimizer, and deep neural networks (CNN, RNN, and LSTM for comparison), achieves reliable and secure routing in IoT networks; notably, this study has primarily utilized the AOA for intrusion-detection optimization. The results demonstrate considerable advancement in model performance across encryption and decryption time, key generation time, restoration efficiency, and response time. The proposed model outperforms traditional methods such as AOA, MSA, and SSO through better accuracy in real-time intrusion detection and event classification. This suggests that combining deep learning techniques with optimization is an effective way to create a robust framework for protecting and optimizing an IoT-enabled network.
The SA_TDO technique also delivers shorter response times (209.397 s at 70% learn rate, 223.103 s at 80% learn rate) and shorter sharing record times (13.0873 s at 70% learn rate, 13.9439 s at 80% learn rate), which underlines its strength. In addition, the evaluations demonstrate that the SA_TDO-based system is better in accuracy, precision, sensitivity, specificity, F-measure, and MCC while having the lowest FPR and the highest NPV, and it reaches better results than comparison models such as DNN, CNN, RNN, and LSTM on the same metrics. These results imply that SA_TDO stands out among other solutions in the context of blockchain security and cryptographic systems: it is faster at encryption and decryption, more efficient in key generation, provides high restoration efficiency, and performs better in intrusion detection and operational resilience.
5.1 Practical implications
The results of the study have direct applications in the real world. The trust management evaluation and intrusion detection system may be employed by the many sectors that adopt IoT technologies: smart homes, medical care, transportation, and industrial automation. It provides a safer way of managing information transferred over IoT networks, so that the credibility and secrecy of the data are maintained. Beyond that, the success of the proposed model indicates it could be a key asset in making blockchain applications more secure, enabling more trusted and effective blockchain-based systems that are less vulnerable to malicious attacks on critical information.
5.2 Limitations and future research
While our model works well for IoT networks and blockchain applications, it has limitations that constrain generalization. The emulated IoT network environment may differ considerably from real-world deployments, which are both complex and highly diverse. The next research step should therefore examine how universal, reliable, and applicable the model is in demanding, dynamic, real-life practice. Furthermore, because the research uses only the NF-UQ-NIDS dataset for testing, the applicability of the results is limited; future work should assess the model on a wider variety of datasets and different network topologies to evaluate performance across various attack scenarios.
5.3 Potential practical deployment challenges and mitigation strategies
The proposed model would be a good fit for IoT and blockchain in the real world, but some challenges must be considered before it can be applied correctly. Scalability is crucial as the network grows; the model can be optimized by tuning its architecture and using more powerful hardware resources. The model must also be fast enough for low latency, so that threats are detected before they pose a serious risk. Resource constraints, including the limited computing power and memory of IoT devices, can be handled by deploying a lightweight model or partitioning it across multiple devices. Adaptability matters as well, so that the model can evolve with new threats and network environments; continual learning on fresh data plays a major role in its capacity to detect emerging dangers. Finally, incorporating the proposed model into existing IoT and blockchain systems may entail substantial modifications to existing infrastructure.
Compatibility is a critical factor: new security protocols must work alongside existing ones and integrate smoothly with data management systems, a process that collaboration with industry experts and system architects can facilitate. Regulatory compliance is essential for conformity with standards such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA), which protect data privacy and integrity. Interoperability across different platforms and protocols is also critical, given that IoT networks often consist of devices from different manufacturers with varying communication standards. In addition, communicating the benefits of the model and the reasons it is needed can improve its acceptance and adoption, and an intuitive interface with explicitly defined functions should ease integration.
6 Conclusion
The Intelligent Security Framework for IoT Routing with metaheuristic SA_TDO-based blockchain and AOA-based intrusion detection demonstrates its effectiveness in addressing security challenges in IoT networks. The proposed architecture consistently outperforms other architectures, achieving high accuracy, precision, sensitivity, and specificity. The trust-aware blockchain uses SA_TDO-optimized RSA with SHA3-512 to ensure secure storage and data transmission, and the CSNN-ODNN with AOA framework then enables a high-level IDS. The proposed model thus provides three levels of security: (i) trust analysis, (ii) secured data management, and (iii) malicious-node detection. The performance of the proposed method was evaluated against existing methods, revealing its superiority in precision, recall, and F-measure in assessing data trust; it also showed better performance than other existing methods in malicious-node detection. Future work should focus on scalability, performance optimization, and compatibility with different IoT architectures. Overall, the framework improves the safety and resilience of IoT deployments.
Declarations
Ethics approval
Not Applicable.
Conflicts of interest
The authors declare no competing interests.
Consent to publish
Yes, all authors agreed to publish the paper.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.