
Knowledge Learning for Securing Workflow Scheduling Algorithm in Mobile Edge Computing

  • Open Access
  • 19.12.2025
  • Research


Abstract

Mobile Edge Computing (MEC) is capable of extending cloud and Internet of Things (IoT) resources to the edge of the computation and communication networks. Conventionally, MEC is outsourced from the IoT/cloud platforms to maximize workflow and task completion abilities. However, due to this outsourcing, the security requirements for workflow scheduling and task completion rely on trusted device selection and high normalization. To satisfy these security demands, this article introduces a secure workflow scheduling algorithm using the knowledge learning concept. The proposed algorithm verifies the operative and failing device features under diverse allocation parameters. Based on the workflow completion lag, new scheduling or offloading decisions are made. The decision support provided by knowledge learning retains the previous operational status of the edge devices through stage-based updates. The stages for scheduling, classification, and offloading are updated periodically to maximize device selection and to reduce process overhead. The consolidated process is thus adaptable to scheduling, offloading, and workflow completion regardless of the devices, allocation time, and device selection processes. The proposed algorithm improves normalized security by 14.17% and reduces device selection overhead by 12.82% for the maximum allocation rates.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Mobile edge computing (MEC) is a network architecture that brings cloud capabilities to edge computing. MEC is used to reduce the latency and congestion that occur on mobile networks [1]. Workflow scheduling is a process that produces sufficient maps to execute work within a process. Workflow scheduling in MEC is a crucial task that requires the actual priority level of tasks in the networks [2]. An energy-efficient workflow scheduling framework is used in MEC-enabled applications. The scheduling framework analyzes the energy efficiency required to perform scheduling tasks for the network [3]. The framework identifies the issues that cause optimization problems and provides significant solutions to them [4]. The scheduling framework minimizes the overall computational cost and complexity rate of MEC environments. A predictive energy-aware scheduling strategy is also used in computing systems [5]. The strategy uses an improved ant lion optimizer (ALO) algorithm to solve the optimization problems. The scheduling strategy reduces the execution latency and elevates the energy efficiency of the computing systems [6].
Securing workflow scheduling in MEC is a complicated task that must ensure the privacy and safety of data against attacks. A security-, cost-, and energy-aware real-time workflow scheduling approach is used in MEC systems [7]. The approach is used to elevate the overall security range of the network against attacks. The scheduling approach evaluates the cloud data and gathers sufficient information for further scheduling services [8]. The approach provides various security measures that eliminate unwanted threats and attacks from the networks, and it distributes security-aware policies to the networks [9]. The scheduling approach maximizes the quality of service (QoS) and performance level of MEC systems. A deadline-constrained cost-aware workflow scheduling (DCWS) algorithm is also employed in MEC systems [10]. The DCWS algorithm detects the issues in task scheduling, which are eliminated via deadline constraints. The algorithm uses a virtual machine to reduce the execution time delay and computational cost rate of the systems. The DCWS algorithm improves the performance and feasibility of MEC against attacks [11, 12].
Machine learning (ML) algorithms are used to ensure the safety of workflow scheduling processes in MEC systems. ML algorithms elevate the precision of the scheduling services provided to the networks [13]. A cost-aware quantum-inspired genetic algorithm (CQGA) is implemented in MEC environments. The CQGA is used to minimize the overall cost and execution period in the networks [14]. Quantum operators and GA are employed here to avoid unwanted deadline violations in the systems. The CQGA uses quantum computing to provide relevant fitness values to the workflow scheduling services. The algorithm also distributes effective measures to the systems. The CQGA elevates the convergence performance and efficiency level of the systems [15, 16]. A reinforcement-learning (RL) algorithm-based security-aware workflow scheduling framework is also used in MEC systems. The RL algorithm addresses the issues that cause risks to the scheduling services [17]. The issues are eliminated by providing sufficient solutions via the RL algorithm. The scheduling framework improves the precision and specificity of workflow scheduling, which improves the reliability of the systems [18, 19]. MEC seeks to reduce delays and network congestion by moving task processing closer to end devices. However, MEC's performance gains can be undermined by computation outsourcing to untrusted or compromised nodes. Such nodes may be malicious or faulty and can alter results, leak sensitive information, or interfere with operations that maintain uninterrupted service. To ensure data integrity and system reliability, security-aware MEC task outsourcing frameworks must guarantee that performance enhancements do not come at the cost of system trustworthiness. The proposed secure MEC framework addresses this gap using trust evaluation augmented by secure allocation mechanisms, alongside fundamental latency and congestion optimization.
The major contributions of the article are:
(i)
The novel secure workflow scheduling using edge devices with the application of knowledge learning is proposed and discussed.
 
(ii)
A detailed explanation of the knowledge learning in addressing the allocation failures due to risk-causing edge devices with mathematical models and decision snippets is provided.
 
(iii)
The scheduled evaluation focuses on the internal parameters, ongoing monitoring of task allocation, scheduling, and secure device detection. It ensures that the tasks are assigned securely and efficiently, minimizing allocation failures and improving completion rates.
 
(iv)
The comparative assessment includes a broader comparison of the proposed algorithm with existing methods, using graphical illustrations to demonstrate its superior performance in terms of security and reduced overhead.
 

2 Related Works

2.1 Machine Learning Based Scheduling Process

Pan and Wei [20] developed a deep reinforcement learning (DRL) based workflow scheduling framework for cloud environments. The framework serves as a traditional framework that fixes the issues arising during the workflow scheduling process. The developed framework assigns the tasks that are provided via workflow scheduling services, and it improves the scalability, robustness, and performance level of the environments. Zhang et al. [21] designed a real-time workflow scheduling method using a genetic algorithm (GA) and DRL for cloud computing environments. The designed method decreases the execution time and response period of the networks. The DRL algorithm is employed here to analyze the features necessary to generate paths for scheduling services. The designed method maximizes the overall performance and effectiveness of cloud environments. The proposed work is comparable in its use of intelligent decision-making for scheduling, but it is distinct in placing more emphasis on device-level verification for security in MEC contexts rather than on performance alone.

2.2 Meta-Heuristic Optimization Based Scheduling

Ma et al. [22] introduced a gene-inspired meta-heuristic algorithm (GIMA) for workflow scheduling in mobile edge computing (MEC) systems. A conditional insertion scheme is employed in the algorithm, which provides high-quality scheduling services to the systems. The algorithm evaluates the conditions and reduces the overhead ratio. The introduced GIMA reduces the computational cost and energy consumption rate of the systems. An improved version of [20] is proposed by Bansal and Aggarwal [23] using a hybrid particle swarm optimization (PSO) algorithm for cloud-fog computing networks. The proposed framework also uses the whale optimization algorithm (WOA) to minimize the optimization problems, which enhances the precision of the execution time. The framework provides high-quality coverage services to the networks and enlarges the performance and efficiency level of scheduling services. Zade et al. [24] introduced a multi-objective scheduling algorithm based on the Caledonian Crow learning algorithm for cloud computing. The introduced algorithm schedules actual workflow paths for the computing systems and analyzes the virtual machines to gather sufficient data for optimization and computation. Compared with others, the introduced algorithm improves the accuracy of the scheduling services provided to the systems. Mohammadzadeh et al. [25] developed a chaotic hybrid multi-objective optimization algorithm (HSOS-SOA) for scientific workflow scheduling in cloud environments. The developed algorithm generates random maps and routes to perform tasks in multiple cloud networks. The algorithm elevates the convergence rate of the network, which improves the precision of scheduling services. The developed HSOS-SOA reduces the computational cost and time consumption rate during scheduling. An improved version of [24] is proposed by Wang et al. [26], named the heuristic privacy-preserving workflow scheduling algorithm (PWHSA).
The algorithm addresses the optimization problems and provides significant solutions to solve them. The proposed algorithm schedules the maps based on analyzed parameters and produces comprehensive scheduling policies that reduce computational costs. Experimental results show that the proposed PWHSA minimizes execution latency. The proposed technique is similar in its focus on optimization, but it additionally incorporates security normalization metrics and risk-aware allocation, which are not expressly included in the current methodologies.

2.3 Resource Allocation in Cloud and Fog Computing

Lyu et al. [27] designed a cloud-edge collaborative computing (CECC) for workflow scheduling for the Internet of Things (IoT). The designed algorithm is also used as a resource allocation algorithm which allocates resources for the tasks. The algorithm analyses the critical challenges and produces appropriate solutions to solve the issues. The designed algorithm elevates the accuracy and efficiency range of the workflow scheduling process. Abdi et al. [28] developed a cost-aware workflow offloading using GA for edge-cloud computing applications. The algorithm is used to minimize the monetary cost and execution time consumption rate of the systems. The developed algorithm also provides sufficient solutions to solve optimization problems that occur during offloading. The GA is used here to eliminate unwanted problems and issues from the network. The developed algorithm enhances the Quality of Service (QoS) by providing offloading services to the applications. Li et al. [29] introduced a cost-effective security-aware scheduling algorithm for edge computing. GA is implemented here to analyze the problem which produces sufficient data for scheduling services. The algorithm allocates the resources and tasks based on priorities which enhance the computational quality of the computing systems. The introduced algorithm achieves high precision in task scheduling and reduces the execution cost rate of the networks.
Li et al. [30] designed a makespan- and security-aware workflow scheduling for cloud services. It is a two-stage algorithm that solves the optimization problems occurring during scheduling services. The designed algorithm uses an improved firefly algorithm (IFA) to schedule the tasks to virtual machines. The algorithm also provides a relevant scheduling scheme that offers quick paths to the networks, and it decreases the overall computational cost and latency level of the systems. An updated version of [22] is proposed by Sajnani et al. [31] using a hybrid optimization algorithm for MEC. The proposed framework schedules the workflow with high-security measures. The framework also identifies the cause of the problem and provides feasible solutions to eliminate the problems from the systems. Compared with others, the proposed framework reduces the energy consumption and execution time of the computing environments. The proposed approach is similar to these in that it operates across edge-cloud boundaries; however, it places a higher priority on device credibility checks before allocation, which makes it more resistant to compromised edge nodes.

2.4 Security-Prioritized Scheduling

Morabito et al. [32] introduced a secure-by-design serverless workflow scheduling for edge-cloud computing environments. The introduced framework uses osmotic computing paradigms to analyze the complicated content before scheduling services. The framework provides advanced security measures to secure the network against attacks and threats, and it improves the performance and execution time of the systems. Zhang et al. [33] developed a secured software-defined network (SDN) based task scheduling framework for edge computing systems. A priority-aware semi-greedy with GA (PSG-GA) is employed here to identify the priorities of the tasks before providing scheduling services to the networks. The PSG-GA reduces the overall computational cost ratio of the systems. Experimental results show that the developed framework increases the accuracy of task scheduling for the systems.
Alam et al. [34] designed a security-prioritized multiple workflow allocation (SPMWA) model for cloud computing environments. The model serves as a mapping scheme that provides high-quality paths to the tasks. The actual priority and significance rate of the tasks are identified, and optimal resources are assigned via allocation services. The designed model elevates the performance and efficiency range of the computing systems. Javanmardi et al. [35] proposed a secure workflow scheduling approach for SDN-based IoT-Fog networks. It is a traditional scheduling algorithm that schedules the paths as necessary. The unwanted vulnerabilities are eliminated by providing significant solutions to the networks. The proposed approach minimizes the overall distributed denial of service (DDoS) impact, which enlarges the effectiveness of the networks. Liang et al. [36] introduced a secure and makespan-oriented workflow (SMWE) framework for serverless computing systems. The SMWE accomplishes complex tasks by providing optimal scheduling services to the networks. The introduced framework protects the network from problems, which maximizes the performance range of the systems, and it maximizes the overall security and safety level of the systems. Overall, the problem focuses on the automated scheduling of task workflows in a MEC system with limited resources while maintaining the demanded efficiency, performance, and security. Traditional scheduling approaches, whether greedy, meta-heuristic, or reinforcement-learning based, focus on optimizing execution time, cost, or energy. In focusing on optimization, the credibility and trustworthiness of the edge devices that take part in the execution are sidelined. This poses a security and performance vulnerability, since unreliable or compromised devices may be incorporated.
In addition, given the heterogeneous nature of MEC systems, the inconsistent resource supply, communication latency, and the dynamic nature of devices add further difficulty to scheduling. The core problem, therefore, is to develop a scheduling mechanism that allocates tasks with high computational efficiency while also including real-time trust evaluation, security normalization, and adaptive allocation, ensuring task execution by trustworthy devices.

3 System Model

The experimental system model is first described to disclose the number of devices, tasks, and time-based measurements. The system for the proposed technique is simulated on a standalone computer with 16 GB RAM, a 256 GB SSD, and a 2.20 GHz i3 processor. The platform architecture is 64-bit, built on Windows 11, supported by 2 GB NVIDIA graphics. As for the experiment configuration, the number of edge devices assigned is 30, each accommodating 5 tasks per 1-minute interval. The average task completion time is 30 min to 90 min. A workflow (batch) is therefore modelled with 150 to 950 tasks in an interval, and a device accommodates 5 to 15 workflows for a maximum of 6 h of operation time. The authentication security for device access and task allocation is administered using a central server connected to the cloud. The values described in this experimental setting are used as variants for studying the proposed technique's performance.
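The configuration above can be summarized in code. The following is a minimal sketch of the stated simulation parameters; all names (`SIMULATION_CONFIG`, `workflow_bounds`) are hypothetical labels for the paper's values, not code from the authors' simulator.

```python
# Experimental configuration from the system model (values as stated in the text).
SIMULATION_CONFIG = {
    "edge_devices": 30,                # number of edge devices
    "tasks_per_minute": 5,             # 5 tasks per 1-minute interval per device
    "task_completion_min": (30, 90),   # average task completion time, minutes
    "tasks_per_workflow": (150, 950),  # workflow (batch) size per interval
    "workflows_per_device": (5, 15),   # workflows accommodated by one device
    "max_operation_hours": 6,          # maximum operation time
}

def workflow_bounds(cfg):
    """Derive the minimum/maximum task volume one device may face over a run."""
    lo = cfg["workflows_per_device"][0] * cfg["tasks_per_workflow"][0]
    hi = cfg["workflows_per_device"][1] * cfg["tasks_per_workflow"][1]
    return lo, hi

print(workflow_bounds(SIMULATION_CONFIG))  # → (750, 14250)
```

These bounds illustrate why the scheduling stage must be able to defer tasks: a single device may hold anywhere from 750 to 14,250 tasks across its workflows.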

3.1 Proposed Secure Workflow Scheduling Algorithm

Secure workflow processing in edge devices is designed using scheduling and offloading to improve task completion rates. The workflow on edge devices is secured by detecting the risk of device failure. Here, scheduling is followed for secure device sharing between the edge devices. Security among the edge devices is attained by scheduling the tasks according to the users' needs. Before stepping into scheduling, device verification determines whether a device is in an idle or busy state. Based on this identification, the task distribution is done to reduce the scheduling lag. Figure 1 presents the overall illustration of the proposed technique.
Fig. 1
Overall illustration of the proposed technique
The scheduling is classified through secure device selection and allocation failure; based on these two functions, the analysis determines whether the device reaches the completion state. Offloading is then performed if a scheduling lag occurs due to task incompletion. The main challenge is to reduce risk devices, i.e., insecure edge devices that degrade reliability. Reliability is improved by excluding insecure edge devices, and for this purpose knowledge-based learning is introduced. The secure workflow is evaluated through task completion. The initial step is the objective of this method, where the security for workflow devices is computed as follows; each parameter is described in the parameter table.
Parameter Description

| Symbol | Description | Role in Model/Impact on Computation |
|---|---|---|
| \(\Delta'\) | Security-related objective value | Used to estimate the measure that identifies the secure workflow during device scheduling. |
| \(k_s\) | Task count | Number of scheduled tasks; larger task counts reduce the per-task security margin in \(\Delta'\). |
| \(Al_0\) | Allocation factor | Represents how effectively a task is assigned to a device. |
| \(r_t\) | Request rate | Counts incoming task requests; high values indicate high system load and affect offloading decisions. |
| \(p_e\) | Response rate | Rate at which responses are generated; regulates communication efficiency. |
| \(me_t\) | Computation time | Time required to complete a task. |
| \(W_f\) | Workflow factor | Signifies workflow complexity; influences detection and allocation processes. |
| \(Te'\) | Detection time | Time taken to notice and confirm device security; impacts scheduling delay. |
| \(Ed_v\) | Edge device factor | Reflects the security status of the edge device; insecure devices lower reliability. |
| \(Sy'\) | Security level | Indicates the currently evaluated security of the workflow; higher values improve system trust. |
| \(Br'\) | Reliability factor | Signifies system reliability; used with security measures to ensure safe task completion. |
| \(u_y\) | Busy state indicator | Number of devices currently in the busy state; affects allocation feasibility. |
| \(i_a\) | Idle state indicator | Number of devices in the idle state; higher values provide more flexibility for task allocation. |
| \(ir'\) | Irregularity rate | Signifies operational irregularities or anomalies; higher values reduce stability. |
$$\begin{array}{c}\Delta'=\dfrac{1}{k_s}+\sum_{i_a}^{u_y}Al_0*\left(r_t+p_e\right)-me_t\\+\left\{\left[\left(W_f*Te'\right)*\left(Sy'+Ed_v\right)\right]+\left(Ed_v*Al_0\right)\right\}\\+\left\{\left[\left(me_t+p_e\right)+\left(Al_0*W_f\right)\right]*Te'\right\}*\sum_{u_y-i_a}\left(Al_0-Ed_v\right)\\+\left|Sy'*Br'\right|+\sum_{W_f}Ed_v*\left(Te'*k_s\right)\\+\left[\left(r_t+p_e\right)-ir'\right]\end{array}$$
(1)
The computation is represented as \(\Delta'\), the task count as \(k_s\), and the allocation as \(Al_0\). The objective is computed in the above equation, where the request and response rates are symbolized as \(r_t\) and \(p_e\). Here, the busy and idle states of the devices are verified, and then the allocation of tasks is evaluated; these states are represented as \(u_y\) and \(i_a\). The computation time is considered and described as \(me_t\); \(W_f\) is the workflow, whereas \(Te'\) is the detection time. The edge device factor is \(Ed_v\), \(Sy'\) is the security level, and \(Br'\) is the reliability. The task allocation rate values based on the provided system model are tabulated in Table 1.
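Eq. (1) can be transcribed term by term. The following is an illustrative sketch only: the sums over device states are simplified to scalar aggregates (a stated assumption), and the function name `delta_prime` and all arguments are hypothetical labels for the paper's symbols.

```python
def delta_prime(k_s, al0, r_t, p_e, me_t, w_f, te, sy, ed_v, br, ir, u_y, i_a):
    """Sketch of the security objective Delta' of Eq. (1).

    Assumption: summations over device indices are approximated by
    repetition over the busy/idle counts (u_y, i_a) and workflow count w_f.
    """
    term1 = 1.0 / k_s                                      # per-task security margin
    term2 = sum(al0 * (r_t + p_e) for _ in range(i_a, u_y)) - me_t
    term3 = (w_f * te) * (sy + ed_v) + ed_v * al0          # detection/security product
    term4 = ((me_t + p_e) + al0 * w_f) * te * (u_y - i_a) * (al0 - ed_v)
    term5 = abs(sy * br)                                   # security-reliability magnitude
    term6 = w_f * ed_v * (te * k_s)                        # sum over W_f, assumed uniform
    term7 = (r_t + p_e) - ir                               # irregularity correction
    return term1 + term2 + term3 + term4 + term5 + term6 + term7
```

With unit inputs and one busy/one idle device, each term can be checked by hand against the equation, which is useful when varying \(k_s\) and \(Ed_v\) as in Table 1.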
Table 1
Task allocation rate valuation (\(Al_o\)/min for each device count \(Ed_v\))

| \(k_s\) | \(Ed_v\)=5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| 150 | 8 | 5 | 6 | 8 | 7 | 9 |
| 175 | 5 | 6 | 8 | 7 | 12 | 7 |
| 200 | 12 | 8 | 8 | 9 | 8 | 8 |
| 225 | 9 | 5 | 8 | 5 | 11 | 7 |
| 250 | 8 | 7 | 9 | 9 | 7 | 6 |
| 275 | 5 | 6 | 8 | 8 | 12 | 5 |
| 300 | 12 | 9 | 8 | 11 | 9 | 5 |
| 325 | 6 | 6 | 5 | 10 | 5 | 8 |
| 350 | 13 | 13 | 13 | 12 | 11 | 13 |
| 375 | 12 | 11 | 14 | 12 | 12 | 14 |
| 400 | 12 | 12 | 15 | 14 | 13 | 15 |
| 425 | 13 | 11 | 13 | 12 | 13 | 14 |
| 450 | 14 | 13 | 12 | 14 | 13 | 14 |
The task allocation increases when the busy and idle states of the devices are periodically identified. Here, the scheduling for the edge devices is based on the request and response \(Al_0*(r_t+p_e)\), where the workflow is processed with security. The task allocation \(k_s(Al_0+Ed_v)*Sh+Di'\) is given to the idle devices, which avoids allocation failure. The allocation is evaluated by scheduling \(S_{\beta}(Ed_v+Sh)*k_s+Sy'*r_t\) the edge devices based on the request, ensuring secure task sharing. Thus, scheduling is properly computed as \(S_{\beta}+(Vut*k_s)+Al_0\) to improve the task allocation (Table 2). Here, the objective is to improve reliability by reducing risk devices during detection and ensuring the security of the workflow devices \(\sum_{u_y-i_a}(Al_0-Ed_v)\). Based on this objective, the proposal schedules and allocates the edge devices along with the task allocation. The secured workflow is transmitted to the edge device in the required time interval \(Al_0(me_t-k_s)\). Thus, the computation is processed for the edge devices, and from this objective equation, the device allocation is evaluated below.
$$Vut=\left(Sy'+Ed_v\right)*\sum_{k_s}^{\Delta'}Al_0*\left(u_y+i_a\right)+\left(\frac{me_t*W_f}{\sum_{Ed_v}Al_0}\right)$$
(2)
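Eq. (2) can be sketched directly. In this illustrative transcription (names are hypothetical), the sum over scheduled tasks and the sum over device allocations are taken as lists of per-task and per-device allocation values, which is a simplifying assumption about the index sets.

```python
def device_allocation_value(sy, ed_v, per_task_alloc, u_y, i_a, me_t, w_f,
                            per_device_alloc):
    """Sketch of the device allocation value Vut of Eq. (2).

    per_task_alloc:   list of Al0 values, one per scheduled task (sum over k_s..Delta')
    per_device_alloc: list of Al0 values, one per edge device (sum over Ed_v)
    """
    total_alloc = sum(per_task_alloc)      # aggregate allocation over tasks
    denom = sum(per_device_alloc)          # aggregate allocation over devices
    return (sy + ed_v) * total_alloc * (u_y + i_a) + (me_t * w_f) / denom
```

The second term mirrors the ratio \(\frac{me_t*W_f}{\sum_{Ed_v}Al_0}\) in Eq. (2): longer computation times raise \(Vut\), while broader allocation across devices lowers it.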
Table 2
Scheduling rate tabulation for different variables

Variable = \(k_s\)

| \(Ed_v\) | 150 | 200 | 250 | 300 | 350 | 400 | 450 |
|---|---|---|---|---|---|---|---|
| 10 | 0.625 | 0.674 | 0.714 | 0.789 | 0.821 | 0.932 | 0.951 |
| 20 | 0.614 | 0.651 | 0.698 | 0.751 | 0.785 | 0.904 | 0.924 |
| 30 | 0.593 | 0.621 | 0.652 | 0.701 | 0.741 | 0.851 | 0.897 |

Variable = \(Al_o\) time (min)

| \(Ed_v\) | 30 | 40 | 50 | 60 | 70 | 80 | 90 |
|---|---|---|---|---|---|---|---|
| 10 | 0.741 | 0.721 | 0.701 | 0.691 | 0.681 | 0.697 | 0.687 |
| 20 | 0.725 | 0.704 | 0.694 | 0.674 | 0.667 | 0.654 | 0.657 |
| 30 | 0.701 | 0.691 | 0.614 | 0.604 | 0.621 | 0.601 | 0.598 |

Variable = \(me_t\) (min)

| \(Ed_v\) | 10 | 20 | 30 | 40 | 50 | 60 |
|---|---|---|---|---|---|---|
| 10 | 0.652 | 0.714 | 0.824 | 0.924 | 0.954 | 0.995 |
| 20 | 0.605 | 0.698 | 0.801 | 0.901 | 0.869 | 0.821 |
| 30 | 0.618 | 0.621 | 0.741 | 0.894 | 0.852 | 0.801 |
The evaluation described by the above equation is denoted \(Vut\); the edge device is associated with task sharing for the requesting device. The edge device requests the task, and it is shared securely within the required time interval. The allocation is processed for the secure workflow, where the busy and idle devices are found and sharing is done accordingly. Based on the security, the workflow is transmitted securely. The task allocation is evaluated to ensure security between the sender and the receiver, both of which are edge devices. Here, the computation improves reliability and provides security within the assigned time interval, represented as \(\left(\frac{me_t*W_f}{\sum_{Ed_v}Al_0}\right)\).
The allocation is based on the device state, busy or idle: if a device is busy, the task is not allocated to it; if it is idle, the task is assigned to it. This is how task allocation proceeds with the busy and idle states of the edge devices while ensuring security. After this allocation step, scheduling is performed for the pending tasks on the edge devices.
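The busy/idle decision rule described above can be sketched as a minimal allocator. This is an illustrative reduction of the rule, not the authors' implementation; device and task identifiers are hypothetical.

```python
def allocate(tasks, devices):
    """Assign each task to an idle device; busy devices are skipped.

    tasks:   list of task identifiers
    devices: dict mapping device id -> "idle" or "busy" (mutated in place)
    Returns (assignments, pending): tasks placed now vs. deferred to scheduling.
    """
    assignments, pending = {}, []
    for task in tasks:
        idle = [d for d, state in devices.items() if state == "idle"]
        if idle:
            dev = idle[0]
            assignments[task] = dev
            devices[dev] = "busy"   # device becomes busy once it accepts a task
        else:
            pending.append(task)    # deferred: handled by the scheduling step (Sect. 3.2)
    return assignments, pending

devices = {"e1": "idle", "e2": "busy", "e3": "idle"}
print(allocate(["t1", "t2", "t3"], devices))
# → ({'t1': 'e1', 't2': 'e3'}, ['t3'])
```

Tasks left in `pending` correspond to the scheduling-lag case that the next subsection resolves.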

3.2 Scheduling

Task scheduling decides which device should hold which task, and the task distribution is evaluated accordingly. The evaluation takes place by defining the device workflow and the allocation process. Both processes are considered, and scheduling is done for secure task completion. The scheduling is evaluated in the equation below.
$$S_{\beta}=\frac{Al_0-me_t}{\sum_{W_f}\left(Ed_v+k_s\right)}*\sum_{Te'}^{W_f}\left(k_s+\Delta'\right)*\left(u_y+i_a\right)-me_t$$
The above equation is re-written as,
$$Al_0\left(k_s\right)=\Delta'*\left[\left(u_y-i_a\right)+W_f\right]-me_t$$
Similarly,
$$\Delta'\left(W_f\right)=k_s\left(Sh\right)+Ed_v-me_t\left(i_a\right)$$
Consequently,
$$Al_0\left(Ed_v\right)=\left(i_a-Te'\right)*\sum_{u_y}\left(r_t-p_e\right)$$
$$Vut\left(Sh\right)=Al_0\left(Sh\right)+k_s\left(i_a\right)-Te'$$
(3)
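The first relation of Eq. (3) can be transcribed as a scalar sketch. As in the earlier snippets, the sums over workflows are approximated by multiplying by the workflow count `w_f` (an assumption about uniform workflows), and all names are hypothetical labels.

```python
def scheduling_rate(al0, me_t, ed_v, k_s, delta, u_y, i_a, w_f):
    """Sketch of the scheduling value S_beta from Eq. (3).

    Assumption: both summations run over w_f uniform workflows, so each
    collapses to w_f times its summand.
    """
    denom = w_f * (ed_v + k_s)                 # sum over W_f of (Ed_v + k_s)
    agg = w_f * (k_s + delta) * (u_y + i_a)    # sum from Te' to W_f of the product
    return (al0 - me_t) / denom * agg - me_t
```

The leading factor \((Al_0-me_t)\) makes \(S_{\beta}\) negative when computation time exceeds the allocation budget, which is the scheduling-lag condition that triggers offloading.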
Scheduling is a process based on the decision of which task must be allocated to which edge device. The workflow of the edge device is considered, and scheduling is evaluated in the required time interval. Here, the scheduling is represented as \(S_{\beta}\), which verifies the status of the devices, i.e., whether they are busy or idle. By computing the scheduling process, the throughput is increased by reducing the computation time, described as \((k_s+\Delta')*(u_y+i_a)-me_t\). The scheduling process is described in Fig. 2.
Fig. 2
Scheduling process illustrations
The scheduling and allocation process of the proposed technique is depicted in Fig. 2. This \(S_{\beta}\) differs from existing/conventional methods through the concurrent allocation of \(Ed_v\) based on \(i_a\) and the \(\left[1-W_f\left(me_t*Br'\right)\right]\) computation. These computations validate the \(Te'=max\) and \(Te'=min\) allocations across active devices, which are used to differentiate \(p_e\) and \(me_t\) scheduling demands. In the \(S_{\beta}\) process, both \(p_e\) and \(Al_o\) using \(Ed_v\in i_a=true\) are validated for the remaining \(me_t\). The case \(i_a\ne true\) relates to \(u_y=true\), such that the tasks are pending within the \(\left[\left(me_t*Br'\right),\,me_t\right]\) interval. Similarly, if the allocation is monotonous, then \(r_t\) is incremented for consecutive task acceptance and \(Al_o\ \forall\ Ed_v\) takes place. This differentiation in \(S_{\beta}\) requires a security-based classification for device selection to prevent task/\(W_f\) failures. The scheduling rate for different variables is tabulated in Table 2.
The scheduling rate increases with the task allocation \(k_s\left(Al_0*W_f\right)+Ed_v\) for the edge devices. Here, the computation states the scheduling process and indicates the task sharing \(k_s+Sh\left(Al_0*Vut\right)+\sum_{W_f}v_s*\left(Ed_v\right)\) to the required user within the fixed time frame. Thus, scheduling takes place by increasing the workflow \(W_f\left(S_{\beta}+v_s\right)*\prod_{Sy'}Te'\left(u_y-i_a\right)\) while considering whether the device is busy or idle. The workflow scheduling is described in Snippet 1.
From this identification process, the scheduling rate is improved. Here, the evaluation for scheduling \(Vut\left(S_{\beta}+v_s\right)*k_s-\left(r_t-p_e\right)\) is processed with secure task sharing to the requesting devices (Table 3). The classification of scheduling is therefore equated below.
$$FL_i=\begin{cases}\left(W_f+Di'\right)*\sum_{i_a}S_{\beta}+\left(Te'*Al_0\right)-me_t, & \forall\ \text{Secure device selection}\\[2ex]\left(\dfrac{\Delta'+W_f}{\sum_{Te'}S_{\beta}}\right)*\prod_{Al_0}Te'+\left(Ed_v*u_y\right), & \forall\ \text{Allocation failure}\end{cases}$$
(4)
Table 3
\(Ac_f\) observed before and after \(FL_i\)

| Variables | \(Al_o\) before | \(Al_o\) after | \(Vut\) before | \(Vut\) after | \(S_{\beta}\) before | \(S_{\beta}\) after |
|---|---|---|---|---|---|---|
| \(Ed_v\) = 10 | 0.5112 | 0.3020 | 0.7366 | 0.5044 | 0.8601 | 0.8907 |
| 20 | 0.3684 | 0.2108 | 0.5537 | 0.3616 | 0.8506 | 0.7132 |
| 30 | 0.2902 | 0.1842 | 0.5258 | 0.4806 | 0.8516 | 0.6088 |
| \(k_s\) = 150 | 0.1711 | 0.4643 | 0.5773 | 0.4571 | 0.9078 | 0.6156 |
| 200 | 0.5019 | 0.1136 | 0.5755 | 0.4402 | 0.9092 | 0.3695 |
| 250 | 0.5150 | 0.4741 | 0.5709 | 0.3338 | 0.8009 | 0.3091 |
| 300 | 0.2302 | 0.4407 | 0.5314 | 0.2429 | 0.8908 | 0.5043 |
| 350 | 0.4439 | 0.2393 | 0.5330 | 0.3153 | 0.8310 | 0.2842 |
| 400 | 0.0471 | 0.1721 | 0.5462 | 0.1336 | 0.9117 | 0.4222 |
| 450 | 0.1911 | 0.2128 | 0.5746 | 0.4737 | 0.8816 | 0.5951 |
| \(me_t\) = 10 min | 0.5173 | 0.3399 | 0.6339 | 0.5673 | 0.9166 | 0.1269 |
| 20 | 0.0407 | 0.1144 | 0.5992 | 0.3812 | 0.8556 | 0.8036 |
| 30 | 0.2644 | 0.2990 | 0.5943 | 0.2600 | 0.8715 | 0.7463 |
| 40 | 0.2720 | 0.2652 | 0.5500 | 0.4138 | 0.8444 | 0.6575 |
| 50 | 0.4523 | 0.3747 | 0.5934 | 0.4486 | 0.8362 | 0.7953 |
| 60 | 0.2112 | 0.2020 | 0.5366 | 0.4044 | 0.8601 | 0.7907 |
The classification for scheduling is evaluated in the above equation and expressed as \(FL_i\). The first condition is secure device selection, where the task allocation is given to the idle edge device. Here, the device selection is based on the workflow for the idle edge device and decides whether the task can be allocated to that specific device. The decision is described as \(Di'\) and the sharing as \(Sh\). The second condition refers to allocation failure, where the task is given to a busy device in the respective time interval, represented as \(Te'+\left(Ed_v*u_y\right)\). Thus, the classification is done for the scheduling process; following this, allocation failure is detected in the equation below.
$$\:\:A{c}_{f}={S}_{\beta\:}+\sum\:_{E{d}_{v}}^{Te{\prime\:}}\left({u}_{y}+Sh\right)*{k}_{s}+\left|D{i}^{{\prime\:}}+A{l}_{0}\right|$$
(5)
The allocation failure is detected in the above equation, where failure occurs when the task is allocated to a busy edge device. The allocation failure is described as \(A{c}_{f}\), where the task is not scheduled based on the decision process. If task sharing is not performed properly, allocation failure occurs; this is resolved with knowledge-based learning, which applies prior data to the current model and provides the result. The allocation failure computation is presented in Snippet 2.
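Since Snippet 2 appears only as an image in the original, the allocation-failure check of Eq. (5) can be sketched in Python. The interval-overlap rule, field layout, and device names below are illustrative assumptions rather than the authors' exact pseudocode.

```python
# Hypothetical sketch of allocation-failure detection (Eq. 5):
# a failure is flagged when a task is assigned to a device that is
# already busy in the requested time interval.

def detect_allocation_failures(assignments, busy_intervals):
    """assignments: list of (task_id, device_id, start, end).
    busy_intervals: dict device_id -> list of (start, end) busy windows.
    Returns task_ids whose allocation overlaps a busy window."""
    failures = []
    for task_id, device_id, start, end in assignments:
        for b_start, b_end in busy_intervals.get(device_id, []):
            if start < b_end and b_start < end:  # intervals overlap
                failures.append(task_id)
                break
    return failures

busy = {"edge-1": [(0, 5)], "edge-2": []}
tasks = [("t1", "edge-1", 3, 6),   # overlaps edge-1's busy window -> failure
         ("t2", "edge-2", 0, 4)]   # edge-2 is idle -> allocated
print(detect_allocation_failures(tasks, busy))  # -> ['t1']
```

A detected failure would then trigger the knowledge-based reallocation described in the text.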
This allocation failure is due to the detection of risk devices and it is formulated below.
$$\:T{e}^{{\prime\:}}={R}^{{\prime\:}}\left({i}_{a}\right)*\sum\:_{E{d}_{v}}^{{S}_{\beta\:}}F{L}_{i}+\left[\left(A{c}_{f}*Vut\right)+{\varDelta\:}^{{\prime\:}}\right]*Sh$$
The task sharing to the busy device is expressed as,
$$\:{k}_{s}\left(Sh\right)={R}^{{\prime\:}}\left({u}_{y}\right)+{\varDelta\:}^{{\prime\:}}*\frac{m{e}_{t}}{S{y}^{{\prime\:}}}-Vut$$
The above equation is rewritten as,
$$\:Vut\left(E{d}_{v}\right)=S{y}^{{\prime\:}}\left({k}_{s}\right)*Sh+{p}_{e}\left(D{i}^{{\prime\:}}\right)$$
It means,
$$\:Vut\left({k}_{s}\right)=D{i}^{{\prime\:}}\left({k}_{s}\right)*S{y}^{{\prime\:}}+\sum\:_{Vut}\left({\varDelta\:}^{{\prime\:}}*E{d}_{v}\right)$$
(6)
The risk device is detected to reduce failures during workflow execution on edge devices. The risk device is described as \(R{\prime\:}\); here, the decision is made to share the data with the responder. Based on the classification presented, the observed \(A{c}_{f}\) is tabulated in Table 3.
The allocation failure decreases when secure sharing \(\:S{y}^{{\prime\:}}+Sh\left({k}_{s}\right)*\sum\:_{A{l}_{0}}\left({W}_{f}+Vut\right)\) takes place between the edge devices. Here, the scheduling is computed to share the task \(\:Sh\left({k}_{s}+{r}_{t}\right)\to\:{p}_{e}\left({W}_{f}\right)-{i}_{a}-m{e}_{t}\:\) with the idle device and to check for allocation failure. The failure is device-related: if a task is assigned to a busy device, the newly assigned task is not completed. In this case allocation failure occurs, so busy and idle devices are identified \(\:Vut\left(E{d}_{v}*{v}_{s}\right)+A{l}_{0}\), thereby reducing allocation failure (Table 4 data). The classification is evaluated for secure task sharing; at some point, risk devices lead to failure. To resolve this, the risk device is detected, as equated above through scheduling. Henceforth, the workflow termination is analyzed in the equation below.
$$\omega\left(W_f\right)=\prod_{Ed_v}^{Te'}S_{\beta}*\left(R'-i_a\right)+k_s*Em_0-Ac_f+\left[\left(Ac_f*Te'\right)*\left(Ac_f+u_y\right)+\varDelta'\right]-me_t$$
(7)
The workflow termination is expressed above, where the risk device that leads to failure is detected. Here, the computation proceeds to termination if a risk is detected during task allocation; termination is \(E{m}_{0}\) and analysis is \(\omega\:\). Thus, the termination point is determined from risk device detection. The device detection rates for different variables are tabulated in Table 4.
Table 4
Device detection rates for different variables

| \(W_{f}\) | 150 | 200 | 250 | 300 | 350 | 400 | 450 |
|---|---|---|---|---|---|---|---|
| \(F{L}_{i}\) = 0.5 | 0.7650 | 0.7720 | 0.8092 | 0.8207 | 0.8532 | 0.9220 | 0.9421 |
| \(F{L}_{i}\) = 0.6 | 0.7789 | 0.7805 | 0.7839 | 0.8431 | 0.8637 | 0.9241 | 0.9222 |
| \(F{L}_{i}\) = 0.7 | 0.7947 | 0.7833 | 0.8515 | 0.8519 | 0.8925 | 0.8977 | 0.9420 |
| \(F{L}_{i}\) = 0.8 | 0.7795 | 0.8216 | 0.8535 | 0.8622 | 0.8507 | 0.8992 | 0.9273 |
| \(F{L}_{i}\) = 0.9 | 0.7856 | 0.8066 | 0.8181 | 0.8659 | 0.9014 | 0.9089 | 0.9280 |
| \(F{L}_{i}\) = 1.0 | 0.7876 | 0.7910 | 0.8101 | 0.8520 | 0.8526 | 0.9226 | 0.9361 |
| \(A{c}_{f}\) = 0–0.2 | 0.7925 | 0.8547 | 0.8724 | 0.9197 | 0.9228 | 0.9057 | 0.9465 |
| \(A{c}_{f}\) = 0.21–0.4 | 0.7985 | 0.8324 | 0.8224 | 0.8819 | 0.9099 | 0.9352 | 0.9635 |
| \(A{c}_{f}\) = 0.41–0.6 | 0.7737 | 0.8326 | 0.8251 | 0.8742 | 0.8719 | 0.8931 | 0.9617 |
| \(A{c}_{f}\) = 0.61–0.8 | 0.7712 | 0.7913 | 0.8414 | 0.8725 | 0.9080 | 0.8918 | 0.9297 |
| \(A{c}_{f}\) = 0.81–1.0 | 0.7764 | 0.8264 | 0.7811 | 0.8766 | 0.9067 | 0.9284 | 0.9557 |
| \(A{l}_{o}\) = 30 min | 0.7717 | 0.7869 | 0.8213 | 0.8437 | 0.9080 | 0.8950 | 0.9396 |
| \(A{l}_{o}\) = 60 min | 0.7950 | 0.8020 | 0.8592 | 0.8607 | 0.9032 | 0.9220 | 0.9421 |
| \(A{l}_{o}\) = 90 min | 0.7789 | 0.8205 | 0.8739 | 0.8531 | 0.9137 | 0.9241 | 0.9422 |
Device detection is higher when the edge device responds to the requested task. Here, evaluation takes place by scheduling the task, and classification is performed for secure device selection and allocation failure. The allocation identifies the risk device, and then the workflow \(\:{W}_{f}*\prod\:_{{k}_{s}}\left(Sh+E{d}_{v}\right)\) is generated for the scheduled task. The detection is performed to ensure security \(\:S{y}^{{\prime\:}}*\prod\:_{{\delta\:}_{0}}^{{v}_{s}}\left(F{L}_{i}+A{c}_{f}\right)\) and improves device detection by computing offloading. Knowledge-based learning is introduced, using the prior task to assess the current task (Table 5). The device detection \(R{\prime\:}\) is described in Snippet 3.
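Since Snippet 3 is only available as an image, the risk-device detection \(R{\prime\:}\) can be approximated by a short Python sketch; the failure-rate rule and the 0.4 threshold are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative risk-device classification: a device whose observed
# allocation-failure rate exceeds a threshold is marked risky (R' = True)
# and excluded from further scheduling. The 0.4 threshold is an assumption.

def classify_risk(history, threshold=0.4):
    """history: dict device_id -> list of booleans (True = allocation failed).
    Returns the set of device_ids considered risk devices."""
    risky = set()
    for device, outcomes in history.items():
        if outcomes and sum(outcomes) / len(outcomes) > threshold:
            risky.add(device)
    return risky

hist = {"edge-1": [True, True, False, True],    # 75% failures -> risky
        "edge-2": [False, False, True, False]}  # 25% failures -> trusted
print(sorted(classify_risk(hist)))  # -> ['edge-1']
```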
Table 5
Offloading verification tabulation

| Variants | Values | \(O{f}^{{\prime\:}}\) | \(y{e}_{f}\) | \(O{f}^{{\prime\:}}\left(C{m}_{p}\right)\) | \(y{e}_{f}\) | \(D{U}_{g}\) |
|---|---|---|---|---|---|---|
| \(E{d}_{v}\) | 10 | 0.574 | 0.956 | 0.818 | 0.891 | 0.1342 |
| | 20 | 0.421 | 0.952 | 0.846 | 0.908 | 0.1280 |
| | 30 | 0.369 | 0.962 | 0.814 | 0.922 | 0.1346 |
| \(W_{f}\) | 150 | 0.526 | 0.952 | 0.829 | 0.934 | 0.1294 |
| | 200 | 0.547 | 0.957 | 0.912 | 0.941 | 0.1235 |
| | 250 | 0.415 | 0.952 | 0.898 | 0.912 | 0.1139 |
| | 300 | 0.569 | 0.954 | 0.888 | 0.957 | 0.1097 |
| | 350 | 0.587 | 0.955 | 0.921 | 0.939 | 0.1232 |
| | 400 | 0.541 | 0.954 | 0.846 | 0.952 | 0.1084 |
| | 450 | 0.524 | 0.957 | 0.876 | 0.937 | 0.1172 |
| \(A{l}_{o}\) time (min) | 30 | 0.354 | 0.953 | 0.837 | 0.950 | 0.1141 |
| | 60 | 0.415 | 0.968 | 0.865 | 0.936 | 0.0934 |
| | 90 | 0.514 | 0.955 | 0.945 | 0.947 | 0.0613 |
Termination is reduced by exploiting prior knowledge of task allocation and scheduling, which is obtained by introducing knowledge-based learning.

3.3 Knowledge Learning

Knowledge learning denotes maintaining deep knowledge about the data, which helps retrieve prior information about it and reduce failures. It is used to learn about the tasks processed earlier and to transfer the gained knowledge to the current task. The equation below is computed to store and notify the data with prior knowledge.
$$KL_g=ri_0-k_s\left(R'+Ac_f\right)+\sum_{\varDelta'}^{Te'}\left(Sh+Di'\right)*n_f\left(Vut\right)*Ed_v+s_r$$
(8)
The knowledge is \(K{L}_{g}\), the prior is represented as \(r{i}_{0}\), the notified data is described as \({n}_{f}\), and data storage is represented as \({s}_{r}\). Based on this computation step, prior knowledge is used to decide the workflow of edge devices. The stored data is retrieved from prior knowledge, with a notification indicating whether there is any risk in processing the allocated task. Based on this notification, the risk is reduced and the reliability of workflow devices improves. After this knowledge extraction, secure device selection is evaluated in the following equation.
$$\:{v}_{s}\left({\delta\:}_{0}\right)={S}_{\beta\:}+\prod\:_{{S}_{y}{\prime\:}}A{l}_{0}*\left(E{d}_{v}+{W}_{f}\right)$$
Consequently,
$$\:{W}_{f}\left({S}_{\beta\:}\right)=\left|E{d}_{v}+A{l}_{0}\right|-r{i}_{0}$$
$$\:r{i}_{0}={s}_{r}\to\:{n}_{f}\left({p}_{e}\right)+{k}_{s}-m{e}_{t}$$
The above equation is re-written as,
$$\:{n}_{f}=E{d}_{v}+{R}^{{\prime\:}}\left(T{e}^{{\prime\:}}\right)-m{e}_{t}$$
$$\:T{e}^{{\prime\:}}\left(m{e}_{t}\right)=E{d}_{v}\left({s}_{r}+{n}_{f}\right)*K{L}_{g}$$
(9)
The secure device is \({v}_{s}\) and the selection is represented as \({\delta\:}_{0}\); security is provided for the task allocation until the task reaches completion. The task selection is processed securely, with the notification ensured through prior knowledge. Here, scheduling takes place by selecting the edge device securely, reducing insecurity. It is evaluated using knowledge learning to retrieve the data and to notify the responder about the task sharing. From this, task completion is verified in the derivative below.
$$k_s\left(Cm_p\right)=r_t\left(Ed_v\right)*\left(i_a-u_y\right)+\sum_{W_f}^{S_{\beta}}KL_g-ri_0+Sh$$
(10)
The task completion is represented as \(C{m}_{p}\); it verifies the current against the previous knowledge and provides the notification process, whereby insecure device selection is reduced and reliability improved. The secure device selection procedure is presented in Snippet 4.
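Snippet 4 is only available as an image; a minimal Python sketch of prior-knowledge-driven secure device selection is given below. The record fields, the latest-stage rule, and the `max_lag` bound are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal knowledge-learning store (KL_g in Eq. 8): each scheduling
# stage records the outcome per device, and the stored prior (ri_0) is
# retrieved to advise the next allocation.

class KnowledgeStore:
    def __init__(self):
        self.prior = {}  # device_id -> list of recorded stage outcomes

    def record(self, device, completed, lag):
        """Stage-based update: append this stage's outcome for the device."""
        self.prior.setdefault(device, []).append(
            {"completed": completed, "lag": lag})

    def advise(self, device, max_lag=1.0):
        """Return True if the stored prior suggests the device is safe."""
        entries = self.prior.get(device)
        if not entries:
            return True            # no prior knowledge: allow, then learn
        last = entries[-1]         # latest stage wins
        return last["completed"] and last["lag"] <= max_lag

kl = KnowledgeStore()
kl.record("edge-1", completed=True, lag=0.3)
kl.record("edge-2", completed=False, lag=2.1)
print(kl.advise("edge-1"), kl.advise("edge-2"))  # -> True False
```

A task would only be shared with a device whose stored prior passes `advise`; otherwise it is reallocated, mirroring the reallocation rule described for \(K{L}_{g}\).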
Reliability is higher when knowledge-based learning is used and the task is completed. Task completion is achieved through secure sharing between the sender and receiver. Here, scheduling decides which task is assigned to which edge device. Through this process, the task is completed in the desired time interval. Following this, the scheduling lag is identified in the equation below.
$$\:D{U}_{g}={S}_{\beta\:}+\left(E{d}_{v}\right)+{k}_{s}\to\:{p}_{e}\left(m{e}_{t}\right)*A{l}_{0}\left(K{L}_{g}\right)-C{m}_{p}$$
(11)
The scheduling lag occurs if the task is not completed properly; it is reduced with knowledge-based learning assessments that fix failing or risk devices and stop the scheduling. Once scheduling is stopped, the allocation of tasks is processed in the acquired time interval. The scheduling lag is represented as \(D{U}_{g}\); it occurs when the edge device is busy while upcoming tasks are being scheduled. So, the initial step is to check whether the device is busy or idle, and scheduling is performed accordingly to reduce the lag. The role of \(K{L}_{g}\) is explained using the illustrations in Fig. 3.
Fig. 3
Role of\(\:\:\varvec{K}{\varvec{L}}_{\varvec{g}}\) through illustrations
Bild vergrößern
In Fig. 3, the \(K{L}_{g}\) for three processes, \({r}_{t}=0\), \(A{l}_{o}=0\), and \({n}_{f}=true\), is presented. Based on the \(F{L}_{i}\), the condition \(\left({T}_{e}^{{\prime\:}}*A{l}_{o}\right)=m{e}_{t}\) must be satisfied to ensure a risk-free device is selected such that \(r{i}_{o}\) at \({S}_{\beta\:}=\) maximum holds. Based on the available \({p}_{e}=\) maximum condition, a new \(F{L}_{i}\) is pursued. If the above conditions for \({r}_{t}=0\) are not satisfied, then the \(A{c}_{f}\) estimation using Eq. (5) is computed. Therefore, the number of \(E{d}_{v}\) satisfying \({S}_{y}{\prime\:}\) is reduced and the \({W}_{f}\) allocation is increased. Thus, \({R}^{{\prime\:}}=false\) identifies the maximum number of allocation failures across various \(m{e}_{t}\). This confirms that \(E{d}_{v}\) is risk-oriented; therefore, new \(A{l}_{o}\) are terminated and the current \({W}_{f}\) is terminated as \(\omega\:\left({W}_{f}\right)\). When a new task arrives, \(K{L}_{g}\) assesses previously generated reports to ascertain whether the current device can securely and efficiently execute the task. If the prior report indicates low security or considerable delays, the system either reallocates the task to a more trustworthy edge device or modifies the allocation parameters. This approach enhances the system’s reliability while reducing the probability of task failure. The pseudocode for the \(K{L}_{g}\) update in \(r{i}_{o}\) is illustrated in Snippet 5.
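The scheduling-lag check around Eq. (11) can be sketched as follows; the minute-based inputs, the zero tolerance, and the function names are illustrative assumptions rather than the authors' exact formulation.

```python
# Sketch of scheduling-lag (DU_g) detection from Eq. (11): the lag is the
# excess of actual completion time over the allocated time window, and a
# positive lag triggers the offloading step of Section 3.4.

def scheduling_lag(allocated_minutes, actual_minutes):
    """Positive lag means the task overran its allocation."""
    return max(0.0, actual_minutes - allocated_minutes)

def should_offload(allocated_minutes, actual_minutes, tolerance=0.0):
    """Offload once the lag exceeds the tolerance (assumed zero here)."""
    return scheduling_lag(allocated_minutes, actual_minutes) > tolerance

print(scheduling_lag(30, 42))   # -> 12
print(should_offload(30, 42))   # -> True
print(should_offload(30, 28))   # -> False
```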

3.4 Offloading

The specific task is offloaded if scheduling lag occurs to improve the reliability. The offloading is transferring the task to another edge device to increase the performance of workflow. The below equation is used to compute the offloading.
$$\:O{f}^{{\prime\:}}=\frac{E{d}_{v}}{D{U}_{g}}+\prod\:_{{i}_{a}}^{{u}_{y}}y{e}_{f}*\left(Sh+{p}_{e}\right)-m{e}_{t}$$
The above equation is re-written as,
$$\:y{e}_{f}=\left({u}_{y}-{i}_{a}\right)+\omega\:*\sum\:_{Sh}^{Vut}\left(C{m}_{p}+{v}_{s}\right)$$
Consequently,
$$\:C{m}_{p}\left(E{d}_{v}\right)={v}_{s}\left({S}_{\beta\:}\right)*\sum\:_{Vut}\omega\:+\left[\left(y{e}_{f}-{W}_{f}\right)-A{l}_{0}\right]\:\:$$
Similarly,
$$\:y{e}_{f}\left(E{d}_{v},A{l}_{0}\right)=\prod\:_{{p}_{e}}{S}_{\beta\:}+\left({W}_{f}*{v}_{s}\right)+Te{\prime\:}\:$$
$$\:T{e}^{{\prime\:}}\left(D{U}_{g},{k}_{s}\right)=\left({S}_{\beta\:}+y{e}_{f}\right)*\left(A{l}_{0}+E{d}_{v}\right)-{v}_{s}$$
(12)
The offloading is represented as \(Of{\prime\:}\), where verification is described as \(y{e}_{f}\) and is evaluated with the sharing process. The computation is performed once the scheduling lag occurs; to resolve this lag, offloading is performed so that the task assigned to the edge devices is completed. The offloading process is described in Snippet 5.
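With the offloading snippet available only as an image, the offloading step of Eq. (12) can be approximated by the following sketch; the device fields, the verification flag, and the lowest-lag selection rule are illustrative assumptions.

```python
# Hypothetical sketch of the offloading step (Eq. 12): when a device
# lags, the task is transferred to the verified idle device with the
# lowest current lag (DU_g).

def offload_target(devices):
    """devices: list of dicts with 'id', 'busy', 'verified' (ye_f check
    passed), and 'lag' (current DU_g). Returns the chosen id or None."""
    candidates = [d for d in devices if not d["busy"] and d["verified"]]
    if not candidates:
        return None                       # nowhere safe to offload
    return min(candidates, key=lambda d: d["lag"])["id"]

fleet = [{"id": "edge-1", "busy": True,  "verified": True,  "lag": 0.9},
         {"id": "edge-2", "busy": False, "verified": True,  "lag": 0.2},
         {"id": "edge-3", "busy": False, "verified": False, "lag": 0.1}]
print(offload_target(fleet))  # -> 'edge-2' (idle and verified)
```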
The detection of task completion is estimated by scheduling the requested task for the user. For this, the initial step is to detect whether the device is idle or busy, based on which scheduling is performed; this is represented as \(\:\prod\:_{{i}_{a}}^{{u}_{y}}y{e}_{f}*\left(Sh+{p}_{e}\right)-m{e}_{t}\). Following this offloading computation, the task completion with offloading is evaluated below.
$$\:O{f}^{{\prime\:}}\left(C{m}_{p}\right)=\left(A{l}_{0}+E{d}_{v}\right)-{v}_{s}*D{U}_{g}\left({S}_{\beta\:}\right)+y{e}_{f}({u}_{y}-{i}_{a})$$
(13)
Task completion is achieved through the offloading process, where verification is the preliminary step before scheduling. Here, scheduling is properly performed with knowledge-based learning, ensuring a secure workflow among the edge devices. The offloading verification values are tabulated in Table 5.
The discrepancies in Table 5 suggest that a greater \(y{e}_{f}\) (workflow efficiency factor) does not always result in a greater \(Of{\prime\:}\) (offloading factor), as is the case for \(E{d}_{v}=30\), where \(y{e}_{f}=0.962\) while \(Of{\prime\:}=0.369\). This disparity indicates that while the internal scheduling of offloading workflows is highly efficient, the level of offloading actually performed is low, probably because local processing capacity is sufficient. Under these scenarios, the throughput appears stable; however, chances to optimize the resource load balance among the devices are missed. On the other hand, configurations where \(y{e}_{f}\:\) is lower but \(Of{\prime\:}\) is greater indicate much heavier offloading, which may enhance load distribution but can also add network latency. The impact on task completion lies in balancing both conditions: too little offloading burdens local nodes, while too much offloading incurs delay from communication overhead. The trade-off is worked out by globally optimal adaptive tuning of \(Of{\prime\:}\) based on \(y{e}_{f}\); real-time \(y{e}_{f}\)-based decisions must balance local execution against network-distributed processing in a reliable manner. The device verification increases when the task is completed \(\:{k}_{s}\left(y{e}_{f}+{S}_{\beta\:}\right)+\sum\:_{E{d}_{v}}{p}_{e}\) from the scheduling process. The workflow is performed securely \(\:S{y}^{{\prime\:}}\left(A{l}_{0}+E{d}_{v}\right)*\prod\:_{\omega\:}\left({\varDelta\:}^{{\prime\:}}+{\delta\:}_{0}\right)\) and the allocation to the edge device is evaluated. The reliability is enhanced between the edge devices \(\:B{r}^{{\prime\:}}\left(E{d}_{v}*\omega\:\right)+\left|A{l}_{0}*{S}_{\beta\:}\right|\) and the decision-making to share the task is evaluated.
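The adaptive tuning of \(Of{\prime\:}\) based on \(y{e}_{f}\) discussed above can be sketched as a simple proportional rule; the target value, gain, and clamping below are illustrative assumptions, not the authors' tuning law.

```python
# Illustrative adaptive rule: raise the offloading fraction Of' when the
# workflow efficiency ye_f drops below a target (local nodes strained),
# and lower it when ye_f is high. Gain and target are assumptions.

def adapt_offloading(of_current, ye_f, target_ye=0.95, gain=0.5):
    """Nudge Of' toward more offloading when ye_f falls below target."""
    of_new = of_current + gain * (target_ye - ye_f)
    return min(1.0, max(0.0, of_new))  # keep Of' within [0, 1]

print(round(adapt_offloading(0.369, 0.962), 3))  # -> 0.363 (slight decrease)
print(round(adapt_offloading(0.354, 0.900), 3))  # -> 0.379 (pushed upward)
```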
The allocation failure is estimated to detect the risk device \(\:A{l}_{0}\left(O{f}^{{\prime\:}}*y{e}_{f}\right)+\sum\:_{{\delta\:}_{0}}{W}_{f}\) and results in termination. The workflow is secured \(\:S{y}^{{\prime\:}}\left(O{f}^{{\prime\:}}+C{m}_{p}\right)*y{e}_{f}\) and verified by decision-making \(\:D{i}^{{\prime\:}}\left(y{e}_{f}+{R}^{{\prime\:}}\right)*A{c}_{f}\) (Table 5). Here, prior knowledge is used to provide notification regarding the assigned task and to produce the result. Thus, task completion with offloading is higher because the scheduling lag is reduced. Henceforth, the reliability is enhanced, as expressed in the equation below.
$$\:B{r}^{{\prime\:}}={S}_{\beta\:}+O{f}^{{\prime\:}}\left(C{m}_{p}\right)-m{e}_{t}-r{i}_{o}\left(y{e}_{f}\right)+E{d}_{v}$$
(14)
The reliability is improved by processing offloading-based task completion using prior knowledge within the required time frame \(\:m{e}_{t}-r{i}_{o}\left(y{e}_{f}\right)+E{d}_{v}\). Here, the reliability is described as \(Br{\prime\:}\); by considering the workflow of the edge device, the request and response are handled properly. Insecure device selection is reduced by benchmarking secure task sharing to the requesting devices. Thus, scheduling and offloading are integrated to process secure task sharing between the edge devices. In Snippet 6, the secure scheduling verification is discussed.

4 Results and Discussion

In the results and discussion section, the comparative analysis for normalized security and \({\delta\:}_{o}\) overhead is presented. The implementation was set up in a simulated environment that emulated the functionality of multiple edge devices under various task loads. The experimental setup used a workstation with an Intel® Core™ i3-1215U processor and 16 GB of DDR4 RAM as the host for the simulation framework. This simulation environment was configured to run device scheduling, security assessment, and task offloading in a scenario with 30 edge devices and a task range of 150 to 950. All performance metrics, including normalized security and \({\delta\:}_{o}\) overhead, were computed from these simulation runs, meaning the results are purely simulation-based. The methods proposed in [32] and [36] are used alongside the proposed technique to verify its reliability. These two methods were selected because they represent recent and advanced methodologies in secure workflow scheduling for edge–cloud and serverless environments. While Morabito et al. [32] concentrate on osmotic computing systems with integrated security features, Liang et al. [36] focus on achieving optimal makespan and security during complex task execution. Assessing these frameworks strengthens the comparative analysis, underscoring the effectiveness and trustworthiness of the proposed strategy relative to intensely security-oriented scheduling frameworks. In this comparative analysis, the following variables along the x-axis are considered: \({W}_{f}\) (150 to 450), \(A{l}_{o}\) time (30 min to 90 min), \(E{d}_{v}\) (5 to 30), and allocation rate (5/min to 15/min). The evaluation baseline is set at \(E{d}_{v}=30\), as this keeps the simulation computationally manageable.
Although this value corresponds to a small-to-medium MEC cluster scale, practical deployments are usually configured with much larger device counts, for example, \(E{d}_{v}\ge 100\). Testing the proposed method under these extended conditions would provide evidence of responsiveness to scaling in network size and task concurrency. The simulation will first be augmented with \(E{d}_{v}=100\) and above, to verify whether the considerable security and overhead reductions achieved in the scaled CES with CSM persist for large MEC network topologies. The impact of these variables is studied over the two metrics using the graphical illustrations in Figs. 4 and 5.
Fig. 4
Comparative analysis of normalized\(\:\:{\varvec{S}}_{\varvec{y}}\varvec{{\prime\:}}\)
Bild vergrößern
Fig. 5
Comparative analysis of\(\:\:{\varvec{\delta\:}}_{\varvec{o}}\)overhead
Bild vergrößern
The normalized security is higher compared to the existing methods \(\:{v}_{s}\left({\delta\:}_{0}+F{L}_{i}\right)*\sum\:_{{W}_{f}}{S}_{\beta\:}\), where offloading is evaluated with a secure device. The task sharing \(\:Sh\left({k}_{s}\right)>A{l}_{0},\:for\:all\:A{l}_{0}\in\:r{i}_{0}\) occurs where scheduling is computed for the requested device. The secure device selection is evaluated as \(\:{v}_{s}\left(E{d}_{v}*A{l}_{0}\right)+\sum\:_{Of{\prime\:}}R{\prime\:}\). Based on prior knowledge, the detection of risk devices is computed \(\:{R}^{{\prime\:}}\left(E{d}_{v}+{v}_{s}\right)*\prod\:_{{k}_{s}}\left({\varDelta\:}^{{\prime\:}}-m{e}_{t}\right)\approx\:\omega\:\left(y{e}_{f}\right)\), and from this the security is normalized for the edge devices. Knowledge-based learning is used to improve the security level \(\:K{L}_{G}\left(r{i}_{0}-{k}_{s}\right)+\sum\:_{{W}_{f}}\left({v}_{s}+y{e}_{f}\right)\) and to verify security between the edge devices (Fig. 4). A weighted combination of two primary protection metrics is used to compute this score: the threat detection rate, the percentage of attack attempts detected and mitigated, and the encryption strength, measured as the effective bit-length and complexity of the encryption mechanism used in data transmission and storage. After the raw \(Sy{\prime\:}\) score is calculated for each test case, it is normalized to a scale from 0 to 3 using min–max normalization, computed as \(Sy{{\prime\:}}_{\text{norm}}=\frac{Sy{\prime\:}-Sy{{\prime\:}}_{\text{m}\text{i}\text{n}}}{Sy{{\prime\:}}_{\text{m}\text{a}\text{x}}-Sy{{\prime\:}}_{\text{m}\text{i}\text{n}}}\times\:3\). This normalization enables direct comparisons across different workflow sizes (\({W}_{f}\)), allocation times (\(A{l}_{o}\)), numbers of edge devices (\(E{d}_{v}\)), and allocation rates.
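The min–max normalization just described maps each raw \(Sy{\prime\:}\) score onto the 0–3 scale; a direct Python rendering is shown below, with the sample scores being illustrative.

```python
# Min-max normalization of the raw security scores onto a 0-3 scale,
# as stated in the text: Sy'_norm = (Sy' - min) / (max - min) * 3.

def normalize_security(scores, scale=3.0):
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # degenerate case: all scores equal
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) * scale for s in scores]

raw = [0.42, 0.58, 0.91, 0.77]        # illustrative raw Sy' scores
print([round(v, 3) for v in normalize_security(raw)])
# -> [0.0, 0.98, 3.0, 2.143]
```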
This serves to ensure that the plotted results accurately represent relative protection performance rather than raw values that cannot be compared to one another. The overhead comparative analysis is presented below.
The device selection overhead is lower than in the previous methods (Fig. 5); the secure task \(\:S{y}^{{\prime\:}}+\left(Sh*{k}_{s}\right)-Di{\prime\:}\) is shared with the edge device. Here, the classification is evaluated for secure device selection and allocation failure \(\:F{L}_{i}\left({v}_{s}\left({\delta\:}_{0}\right)*A{c}_{f}\right)>D{u}_{g}\), where the scheduling lag is reduced. The lag is addressed and decreased to improve the workflow among the edge devices \(\:{W}_{f}\left(E{d}_{v}+{S}_{\beta\:}\right)*C{m}_{p}\), and the risk of failure is detected. The device selection is evaluated as \(\:E{d}_{v}\left(D{u}_{g}*{W}_{f}\right)>y{e}_{f},\:for\:all\:y{e}_{f}\in\:{\delta\:}_{0}\) and leads to task completion. The selection overhead is reduced when risk failure occurs \(\:\left({\delta\:}_{0}*F{L}_{i}\right)+\prod\:_{{S}_{\beta\:}}K{L}_{g}\), as the knowledge is repeatedly revisited to update the prior and current outcomes. Despite achieving a maximum overhead reduction of 12.44% compared to references [32] and [36], the proposed method’s improvement is offset by the extra computation expense of the knowledge learning process. More specifically, the retrieval, evaluation, and updating of prior knowledge for each task incurs a processing overhead of about 3–5% per allocation cycle in the simulation. However, this slight increase in computation expense is more than offset by the gains in security and reliability, which renders the approach feasible in practice for edge–cloud scheduling in real-world scenarios. The proposed method operates within an indirect threat model whereby risks are observed through device behavior rather than through an adversary’s direct actions. This simplification of the system model comes at the cost of omitting some internal compromise models (i.e., insider threats or colluding devices).
These assumptions can be expanded upon by incorporating multi-factor trust evaluation which integrates device authentication history, behavioral anomaly, and traffic pattern anomaly to identify contrived compliant devices. With regards to negative scenarios like DDoS attacks, the current model does address some of these concerns by classifying devices into risk groups and excluding high-risk devices from scheduling, though this approach requires prior classification instead of real-time traffic analysis. Implementing rate limiting or traffic shaping at the MEC level, as well as packet filtering and adaptive load redistribution, can strengthen resiliency and shift the scheduling process from dependence to convergence, thus maintaining uninterrupted service even during external pressure. Moving the security framework from assumption-dependent to assumption-minimized would enhance its reliability in real-world scenarios.

5 Conclusion

In this article, a secure workflow scheduling technique using mobile edge devices is introduced. The proposed technique administers and verifies security for workflow allocation and scheduling. In both processes, security is administered using device verification based on previous completion outputs. Device verification for allocation, scheduling, offloading, and workflow completion is validated before and after security classifications. If the above demands are satisfied using the identified non-risk devices, the security normalization is high. Besides, the identified risk-induced device is monitored for allocation failure and completion lag, which determines the credibility of the device. Knowledge learning therefore updates the existing inputs with the updated outcomes based on the above parameters. This process is repeated until maximum scheduling and completion are achieved. In addition, pending or lagging workflows are allocated to verified devices for completion to maximize security. The proposed algorithm improved normalized security by 14.061% and reduced the overhead by 12.44% for the highest \({W}_{f}\) defined. The simulation framework presumes perfect alignment with current Mobile Edge Computing (MEC) protocols. However, in practical scenarios, cross-compatibility with different MEC frameworks, authentication protocols, and latency constraints would affect the actual benefits realized. While the approach implemented in this simulation includes both verification and security normalization in the allocation and scheduling phases, the lack of live MEC protocol evaluation means factors such as control signalling delays and policy conflicts have not been directly observed. Observing these elements under real-world conditions would require complementing the simulations with testing on live MEC deployments.

Acknowledgments

This work has been supported by the Universiti Kebangsaan Malaysia (UKM) under the Grant Scheme DIP 2025-033. The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Research Project under grant number RGP2/447/46.

Declarations

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Title
Knowledge Learning for Securing Workflow Scheduling Algorithm in Mobile Edge Computing
Authors
Taher M. Ghazal
Ala Eldin Awouda
Mohammad Kamrul Hasan
Abdul Hadi Abd Rahman
Shayla Islam
Rashid A. Saeed
Hashim Elshafie
Adeel Iqbal
Abdelrahman H. Hussein
Publication date
19.12.2025
Publisher
Springer US
Published in
Mobile Networks and Applications
Print ISSN: 1383-469X
Electronic ISSN: 1572-8153
DOI
https://doi.org/10.1007/s11036-025-02465-6
1. Hussain M, Wei LF, Rehman A, Hussain A, Ali M, Javed MH (2024) An electricity price and energy-efficient workflow scheduling in geographically distributed cloud data centers. J King Saud Univ 36(8):102170
2. Saeedizade E, Ashtiani M (2024) Scientific workflow scheduling algorithms in cloud environments: a comprehensive taxonomy, survey, and future directions. J Sched. https://doi.org/10.1007/s10951-024-00820-1
3. Khaleel MI (2023) A fault tolerance aware green IoT workflow scheduling algorithm for multi-dimensional resource utilization in sustainable cloud computing. Internet of Things 23:100909
4. Nazeri M, Soltanaghaei M, Khorsand R (2024) A predictive energy-aware scheduling strategy for scientific workflows in fog computing. Expert Syst Appl 247:123192
5. Hussain M, Luo MX, Hussain A, Javed MH, Abbas Z, Wei LF (2023) Deadline-constrained cost-aware workflow scheduling in hybrid cloud. Simul Model Pract Theory 129:102819
6. Raeisi-Varzaneh M, Dakkak O, Fazea Y, Kaosar MG (2024) Advanced cost-aware max–min workflow tasks allocation and scheduling in cloud computing systems. Cluster Comput. https://doi.org/10.1007/s10586-024-04594-1
7. Stavrinides GL, Karatza HD (2024) Security, cost and energy aware scheduling of real-time IoT workflows in a mist computing environment. Inf Syst Front 26(4):1223–1241
8. Alam M, Shahid M, Mustajab S (2024) Security challenges for workflow allocation model in cloud computing environment: a comprehensive survey, framework, taxonomy, open issues, and future directions. J Supercomput 1–65
9. Taghinezhad-Niar A, Taheri J (2024) Security, reliability, cost, and energy-aware scheduling of real-time workflows in compute-continuum environments. IEEE Trans Cloud Comput. https://doi.org/10.1109/TCC.2024.3426282
10. Khaledian N, Voelp M, Azizi S, Shirvani MH (2024) AI-based & heuristic workflow scheduling in cloud and fog computing: a systematic review. Cluster Comput. https://doi.org/10.1007/s10586-024-04442-2
11. Ye L, Yang L, Xia Y, Zhan Y, Zhao X (2024) Deadline-constrained and cost-effective multi-workflow scheduling with uncertainty in cloud control systems. J Syst Sci Complexity 1–26
12. Zhang S, Zhao Z, Liu C, Qin S (2023) Data-intensive workflow scheduling strategy based on deep reinforcement learning in multi-clouds. J Cloud Comput 12(1):125
13. Talha A, Malki MOC (2023) PPTS-PSO: a new hybrid scheduling algorithm for scientific workflow in cloud environment. Multimed Tools Appl 82(21):33015–33038
14. Hosseini Shirvani M (2024) A survey study on task scheduling schemes for workflow executions in cloud computing environment: classification and challenges. J Supercomput 80(7):9384–9437
15. Kamanga CT, Bugingo E, Badibanga SN, Mukendi EM (2023) A multi-criteria decision making heuristic for workflow scheduling in cloud computing environment. J Supercomput 79(1):243–264
16. Xia Y, Luo X, Yang W, Jin T, Li J, Xing L, Pan L (2024) Dynamic variable analysis guided adaptive evolutionary multi-objective scheduling for large-scale workflows in cloud computing. Swarm Evol Comput 90:101654
17. Khaledian N, Khamforoosh K, Akraminejad R, Abualigah L, Javaheri D (2024) An energy-efficient and deadline-aware workflow scheduling algorithm in the fog and cloud environment. Computing 106(1):109–137
18. Ye L, Yang L, Xia Y, Zhao X (2024) A cost-driven intelligence scheduling approach for deadline-constrained IoT workflow applications in cloud computing. IEEE Internet Things J. https://doi.org/10.1109/JIOT.2024.3351630
19. Hussain M, Wei LF, Rehman A, Ali M, Waqas SM, Abbas F (2024) Cost-aware quantum-inspired genetic algorithm for workflow scheduling in hybrid clouds. J Parallel Distrib Comput 191:104920
20. Pan J, Wei Y (2024) A deep reinforcement learning-based scheduling framework for real-time workflows in the cloud environment. Expert Syst Appl 255:124845
21. Zhang J, Cheng L, Liu C, Zhao Z, Mao Y (2023) Cost-aware scheduling systems for real-time workflows in cloud: an approach based on genetic algorithm and deep reinforcement learning. Expert Syst Appl 234:120972
22. Ma L, Zhang Y, Zhou J, Zhang G (2024) A gene-inspired metaheuristic for scheduling workflow tasks in mobile edge computing-supported cyber–physical systems. J Syst Archit 151:103136
23. Bansal S, Aggarwal H (2024) An efficient workflow scheduling in cloud–fog computing environment using a hybrid particle whale optimization algorithm. Wireless Pers Commun 137(1):441–475
24.
Zurück zum Zitat Zade BMH, Javidi MM, Mansouri N (2023) An improved Caledonian crow learning algorithm based on ring topology for security-aware workflow scheduling in cloud computing. Peer-to-Peer Netw Appl 16(6):2929–2984CrossRef
25.
Zurück zum Zitat Mohammadzadeh A, Javaheri D, Artin J (2024) Chaotic hybrid multi-objective optimization algorithm for scientific workflow scheduling in multisite clouds. J Oper Res Soc 75(2):314–335CrossRef
26.
Zurück zum Zitat Wang S, Yuan Z, Zhang X, Wu J, Wang Y (2024) Cloud-edge-end workflow scheduling with multiple privacy levels. J Parallel Distrib Comput 189:104882CrossRef
27.
Zurück zum Zitat Lyu S, Dai X, Ma Z, Zhou Y, Liu X, Gao Y, Hu Z (2023) A heterogeneous cloud-edge collaborative computing architecture with affinity-based workflow scheduling and resource allocation for internet-of-things applications. Mob Networks Appl, 1–17
28.
Zurück zum Zitat Abdi S, Ashjaei M, Mubeen S (2024) Cost-aware workflow offloading in edge-cloud computing using a genetic algorithm. J Supercomput 80(17):24835–24870CrossRef
29.
Zurück zum Zitat Li Z, Yu H, Fan G, Tang Q, Zhang J, Chen L (2023) Cost-efficient security-aware scheduling for dependent tasks with endpoint contention in edge computing. Comput Commun 211:119–133CrossRef
30.
Zurück zum Zitat Li L, Zhou C, Cong P, Shen Y, Zhou J, Wei T (2024) Makespan and security-aware workflow scheduling for cloud service cost minimization. IEEE Trans Cloud Comput. https://doi.org/10.1109/TCC.2024.3382351CrossRef
31.
Zurück zum Zitat Sajnani DK, Li X, Mahesar AR (2024) Secure workflow scheduling algorithm utilizing hybrid optimization in mobile edge computing environments. Comput Commun 226:107929CrossRef
32.
Zurück zum Zitat Morabito G, Sicari C, Ruggeri A, Celesti A, Carnevale L (2023) Secure-by-design serverless workflows on the edge–cloud continuum through the osmotic computing paradigm. Internet of Things 22:100737CrossRef
33.
Zurück zum Zitat Zhang S, Tang Y, Wang D, Karia N, Wang C (2023) Secured Sdn based task scheduling in edge computing for smart city health monitoring operation management system. J Grid Comput 21(4):71CrossRef
34.
Zurück zum Zitat Alam M, Shahid M, Mustajab S (2024) Security prioritized multiple workflow allocation model under precedence constraints in cloud computing environment. Cluster Comput 27(1):341–376CrossRef
35.
Zurück zum Zitat Javanmardi S, Shojafar M, Mohammadi R, Persico V, Pescapè A (2023) S-fos: a secure workflow scheduling approach for performance optimization in SDN-based IoT-fog networks. J Inf Secur Appl 72:103404
36.
Zurück zum Zitat Liang H, Zhang S, Liu X, Cheng G, Ma H, Wang Q (2024) SMWE: a framework for secure and makespan-oriented workflow execution in serverless computing. Electronics 13(16):3246CrossRef