
Glider snake optimizer (GSO): a nature-inspired metaheuristic algorithm for global and engineering optimization problems


Abstract

The glider snake optimizer (GSO) is a novel metaheuristic algorithm inspired by the gliding behavior of snakes. This article details the design of the algorithm and highlights its unique approach to solving global and engineering optimization problems. It examines the mathematical foundations of GSO, describing how the algorithm imitates natural gliding behavior to improve optimization processes. In addition, the algorithm's performance is discussed across a variety of optimization scenarios, offering insights into its efficiency and effectiveness. The conclusion emphasizes the potential of GSO to revolutionize optimization techniques in engineering and data science.

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s10462-026-11504-x.
El-Sayed M. El-kenawy and Nima Khodadadi have equally contributed to this work.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Metaheuristic optimization has become a vital methodology for addressing complex, high-dimensional optimization problems that are often non-convex, discontinuous, multimodal, and computationally expensive to solve using classical optimization techniques (Zhang et al. 2025). Unlike deterministic or gradient-based methods, which often assume smooth or convex search landscapes, metaheuristics offer a flexible and adaptive alternative. These algorithms are particularly valuable in domains where the problem structure is poorly understood, or where objective functions are non-differentiable or computationally noisy (Kan et al. 2025). Their success spans various domains including engineering design, machine learning, industrial planning, scheduling, and data mining (Houssein et al. 2025).
A key strength of metaheuristics lies in their inherent ability to balance two complementary phases: exploration, which facilitates the global search of the solution space, and exploitation, which refines high-quality candidate solutions in local neighborhoods (Zhao and Li 2025). This balance is essential for avoiding premature convergence, where an algorithm becomes trapped in local optima, and for steering the search toward globally optimal regions of complex landscapes (Zhang et al. 2025). Most modern metaheuristics include dynamic adaptation mechanisms that modify their search behavior over time based on feedback from the optimization process, increasing their robustness and applicability to dynamic, real-world environments (Mishra et al. 2025).
Metaheuristics are especially well-suited to problems characterized by high dimensionality, multimodality, and intricate interdependencies between decision variables (Zhang et al. 2025). In such cases, traditional methods often fail to locate viable solutions or do so inefficiently. Metaheuristic algorithms overcome this by adjusting their trajectories through stochastic operators, feedback mechanisms, and collective learning strategies (Li et al. 2024). Their adaptability enables them to respond to evolving constraints and objectives, making them effective for combinatorial optimization, continuous-domain problems, and mixed-variable optimization tasks (Li et al. 2025).
Moreover, many metaheuristics are inspired by naturally occurring systems (such as the social behaviors of animals, the laws of physics, or biological processes), which allows them to incorporate principles of cooperation and information sharing among agents (Yue and Li 2025). These cooperative behaviors accelerate convergence and increase diversity within the population, which are critical for maintaining long-term search performance and avoiding stagnation (Yan et al. 2024). The incorporation of social intelligence and environmental adaptation enables these algorithms to navigate highly uncertain and dynamic problem landscapes effectively (Gopi and Mohapatra 2024).
Their robustness is evidenced by superior performance on a wide range of benchmark optimization functions, where they consistently outperform classical methods in both convergence speed and solution accuracy (Yin et al. 2025). A significant advantage is their reduced reliance on parameter tuning and prior domain knowledge (Hosney et al. 2024). Unlike conventional solvers, metaheuristics often operate effectively across different problems with minimal adjustments, and can adapt their search strategies dynamically in response to problem-specific challenges (Gopi and Mohapatra 2024). This makes them particularly valuable in settings involving stochasticity, noise, or real-time decision-making.
Despite these advances, the No Free Lunch (NFL) theorem (Wolpert and Macready 2002) establishes that no algorithm universally outperforms all others across all problem domains, motivating the continued development of tailored metaheuristics (Varshney et al. 2024). Existing algorithms face persistent challenges: swarm-based methods like PSO and GWO often converge prematurely on multimodal landscapes (Zhang and Cai 2024); physics-inspired approaches lack adaptive exploration-exploitation mechanisms; and bio-inspired techniques employ simplistic movement models that neglect nuanced cooperative behaviors (Nemati et al. 2024).
Most critically, current algorithms rely on direct communication between agents and the global best solution, neglecting intermediate information transfer mechanisms that could enhance convergence quality. These limitations underscore the need for algorithms that integrate structured cooperation, adaptive diversity maintenance, and biologically plausible motion dynamics. Addressing these gaps requires innovative position update strategies that leverage both hierarchical guidance and local neighborhood information, coupled with intelligent reinitialization schemes to escape stagnation.
Optimization algorithms play a crucial role in a wide range of practical application areas, including engineering design, structural optimization, energy management, scheduling, control systems, and machine learning parameter tuning. In these domains, solution spaces are often high-dimensional, nonlinear, and constrained, making classical deterministic methods inadequate. Metaheuristic algorithms provide flexible mechanisms to address such complexities, enabling engineers and practitioners to obtain high-quality solutions within reasonable computational budgets. The increasing reliance on optimization in industrial automation, smart manufacturing, renewable energy systems, and intelligent transportation further underscores the need for robust and efficient optimization tools, motivating the development of improved algorithms such as GSO.
Recent years have witnessed a rapid growth of enhanced and hybrid metaheuristic algorithms designed to overcome limitations such as premature convergence, weak exploitation, and low population diversity in challenging optimization landscapes. Several improved approaches have been introduced, including hierarchical multi-leadership variants of the sine cosine algorithm (Lu et al. 2025; Zhong et al. 2023), modified cat-and-mouse-based optimizers for medical feature selection (Li et al. 2025), GPU-accelerated variants of the woodpecker mating algorithm (Gong and Karimzadeh Parizi 2022), hybrid WMA–SCA and WMA–WOA formulations (Parizi et al. 2021; Zhang et al. 2023), and improved versions of WOA and red-billed blue magpie optimization (RBMO) (Fu et al. 2024) incorporating multi-strategy learning and opposition-based mechanisms (Lu et al. 2025; Wei et al. 2025a, b). These developments collectively demonstrate ongoing efforts to enhance stability, scalability, and search efficiency in metaheuristic optimization, while highlighting the continued need for algorithms capable of achieving a more adaptive and structurally robust balance between exploration and exploitation.
In this context, we propose a novel metaheuristic algorithm, the glider snake optimization algorithm (GSO), inspired by the gliding behavior of snakes in the genus Chrysopelea. These snakes, known for their ability to launch from high altitudes and glide through the air, utilize a unique form of aerial undulation. They flatten their bodies into a concave shape and execute lateral, wave-like motions to maintain lift and stability mid-air. Smaller species, such as Chrysopelea paradisi, demonstrate superior glide angles and trajectory control due to their morphology and dynamic coordination.
GSO emulates this biomechanical behavior by modeling each search agent as a segment in a gliding snake’s body. The movement of each agent is influenced by both the global best agent (representing the snake’s head) and the immediate predecessor agent (representing the body link), thereby introducing a cooperative, directionally guided learning process. This structure enhances information sharing and fosters an intelligent trade-off between convergence pressure and solution diversity. To prevent stagnation, agents with lower performance are periodically reinitialized using elite-solution-guided perturbations that preserve diversity and help escape local optima.
Although numerous metaheuristic algorithms have been proposed in recent years, many of them still rely on either single-source guidance (e.g., PSO’s global-best attraction) or fixed hierarchical leadership structures (e.g., GWO), which often lead to premature convergence, loss of diversity, or stagnation in complex multimodal landscapes. As the NFL theorem cited above implies, no single optimizer can outperform all others across every problem class, which motivates the continuous development of new metaheuristics tailored to specific landscape characteristics. In this context, GSO differs from existing methods by introducing a dual-guidance learning mechanism in which each agent is influenced simultaneously by the global leader and its immediate predecessor, enabling richer information propagation. Moreover, GSO incorporates a gliding-inspired movement strategy that enhances long-range exploration and an adaptive weak-agent replacement scheme that prevents stagnation. These characteristics collectively provide a strong motivation for proposing GSO as a balanced, diversity-preserving, and computationally efficient alternative to current optimization algorithms.
The major contributions of this paper are as follows:
1.
We propose the glider snake optimization algorithm (GSO), a novel metaheuristic inspired by the aerodynamic locomotion of gliding snakes.
 
2.
A cooperative position update mechanism is introduced wherein agents adjust trajectories based on the global best and local predecessor, enhancing exploration-exploitation balance.
 
3.
Extensive validation on 23 classical benchmark functions and the CEC 2019 suite demonstrates superior convergence, robustness, and scalability across diverse optimization landscapes.
 
4.
Successful application to five constrained engineering problems confirms practical utility in handling nonlinear constraints and real-world design optimization.
 
5.
Comprehensive statistical analysis—including convergence plots, non-parametric tests, and standard deviation assessment—validates GSO’s competitive or superior performance against PSO, GWO, WOA, differential evolution (DE) (Storn and Price 1997), and genetic algorithm (GA) (Holland 1992).
 
Despite the large number of swarm- and evolution-based metaheuristics available in the literature, many of these methods still exhibit structural limitations that restrict their ability to maintain a robust balance between exploration and exploitation across complex search landscapes. Algorithms such as PSO and DE rely predominantly on global-best attraction mechanisms that accelerate convergence but often collapse population diversity, leading to premature convergence in multimodal environments. Similarly, algorithms such as GWO employ hierarchical encircling dynamics, yet the absence of intermediate information transfer among agents can weaken adaptability in high-dimensional or deceptive landscapes. These limitations highlight the need for search mechanisms that incorporate cooperative, segment-dependent learning and adaptive diversity preservation.
The proposed GSO algorithm directly addresses these gaps through three distinctive characteristics. First, GSO employs a dual-guidance cooperation model in which each agent adjusts its trajectory based on both the global leader and its immediate predecessor, enhancing information propagation and stabilizing movement patterns. Second, GSO embeds a gliding-inspired exploration mechanism that enables controlled long-range undulatory motion, increasing the algorithm’s capability to escape local optima and explore the global search space more effectively. Third, an adaptive weak-agent replacement strategy is introduced to periodically refresh low-performing agents using elite-guided perturbation, thereby preventing stagnation and sustaining solution diversity throughout the optimization process. Collectively, these mechanisms enable GSO to achieve a superior balance between exploration and exploitation compared with classical algorithms such as PSO, DE, GWO, and WOA, as further demonstrated in the experimental results.
The remainder of this paper is structured as follows: Section 2 reviews related metaheuristic algorithms and foundational optimization concepts. Section 3 presents the biological inspiration from gliding snakes and the mathematical formulation of GSO. Section 4 describes the experimental setup, benchmark functions, statistical analysis, and comparative evaluations against state-of-the-art algorithms. Section 5 demonstrates GSO’s effectiveness on real-world constrained engineering design problems. Section 6 concludes the study with key findings and future research directions.

2 Literature review

Optimization is the process of finding the best solution from a set of feasible alternatives under given constraints (Kaveh et al. 2022). Mathematically, an optimization problem seeks to minimize or maximize an objective function subject to equality and inequality constraints. Optimization problems arise across diverse domains, including resource allocation in logistics, parameter tuning in machine learning, structural design in engineering, and portfolio management in finance (Mirjalili et al. 2014).
An optimization algorithm is a systematic computational procedure that iteratively searches for optimal or near-optimal solutions. Historically, deterministic algorithms such as gradient descent and linear programming dominated the field, relying on analytical properties of objective functions. However, their limitations in handling non-convex, discontinuous, and high-dimensional problems led to the emergence of heuristic methods—problem-specific techniques that sacrifice optimality guarantees for computational efficiency (Khodadadi et al. 2022). The evolution continued with meta-heuristics, higher-level frameworks that guide subordinate heuristics through intelligent exploration and exploitation strategies, offering generality across problem classes without requiring gradient information or convexity assumptions (Dorigo et al. 2006).
Meta-heuristics are broadly classified based on solution structure and search strategy. Single-solution methods, such as simulated annealing, iteratively refine a single candidate through neighborhood exploration, while population-based approaches maintain multiple agents that collectively explore the search space through cooperation or competition (Erol and Eksin 2006). Further categorization distinguishes swarm intelligence algorithms, such as particle swarm optimization and ant colony optimization, evolutionary algorithms, including genetic algorithms and differential evolution, and physics-inspired methods, such as the gravitational search algorithm (Rashedi et al. 2009). Despite their successes, existing meta-heuristics face persistent challenges: premature convergence in swarm methods (Nasiri and Khiyabani 2018), insufficient adaptive mechanisms in physics-based approaches (Kirkpatrick et al. 1983), and oversimplified motion models in bio-inspired techniques (Ahmadianfar et al. 2020). These limitations, compounded by the lack of intermediate mechanisms for information transfer, motivate the development of algorithms that integrate hierarchical cooperation, adaptive diversity control, and biologically realistic movement dynamics.
The advancement of metaheuristic optimization algorithms has significantly transformed how researchers and engineers approach complex real-world optimization problems (Cheraghalipour et al. 2018). These algorithms, often inspired by natural, biological, or physical phenomena, aim to explore and exploit the solution space efficiently, offering high-quality solutions within acceptable computational time (Yazdani and Jolai 2016). This section reviews the foundational and state-of-the-art algorithms that serve as comparative baselines for evaluating the proposed GSO algorithm.
Particle swarm optimization (PSO) (Eberhart and Kennedy 1995), introduced in 1995, simulates the social behavior of bird flocking and fish schooling. Each particle adjusts its velocity and position based on its personal best and the global best positions, enabling simple yet effective exploration. Despite its widespread adoption, PSO is prone to premature convergence in highly multimodal landscapes.
Genetic algorithm (GA) (Holland 1992) is a paradigm of evolutionary computation, employing selection, crossover, and mutation operators to evolve populations toward optimal solutions. GA excels in combinatorial and discrete optimization but often requires extensive parameter tuning and computational resources.
Differential evolution (DE) (Storn and Price 1997) enhances evolutionary search through vector-difference-based mutation and greedy selection. DE demonstrates robust performance across continuous optimization problems and has become a benchmark algorithm in comparative studies.
Grey wolf optimizer (GWO) (Mirjalili et al. 2014), proposed in 2014, mimics the leadership hierarchy and hunting behavior of grey wolf packs. The algorithm employs alpha, beta, and delta wolves to guide the search process, achieving competitive results on classical benchmarks. However, GWO lacks trajectory-based correction mechanisms and struggles with dynamic optimization tasks.
Whale optimization algorithm (WOA) (Mirjalili and Lewis 2016) simulates the bubble-net hunting behavior of humpback whales. WOA employs spiral updating positions and random search strategies to balance exploration and exploitation, demonstrating effectiveness across various benchmark functions and engineering problems.
Ant colony optimization (ACO) (Dorigo et al. 2006) models the pheromone-based foraging behavior of ant colonies. ACO excels at routing, scheduling, and combinatorial problems but suffers from high computational costs and premature convergence of pheromone trails.
Artificial bee colony (ABC) (Karaboga 2005) simulates the intelligent foraging strategies of honeybee swarms, dividing the population into employed, onlooker, and scout bees. ABC demonstrates strong performance in multivariable optimization but is susceptible to local entrapment in complex landscapes.
Firefly algorithm (FA) (Yang 2009), proposed in 2009, mimics the flashing behavior and mutual attraction of fireflies. The light intensity and attractiveness between fireflies guide the search process, making FA particularly effective for multimodal optimization problems, though it exhibits limited exploration control in high-dimensional spaces.
Ant lion optimizer (ALO) (Mirjalili 2015) emulates the hunting mechanism of antlions, where agents create traps to capture prey. ALO demonstrates strong local search capabilities and exploitation behavior but suffers from weak global exploration, particularly in complex multi-modal landscapes.
Aquila optimizer (AO) (Abualigah et al. 2021), introduced in 2021, replicates the hunting strategies of Aquila birds through four distinct phases: expanded exploration, narrowed exploration, expanded exploitation, and narrowed exploitation. This four-phase balance provides robust exploration capabilities, though performance deteriorates on discrete optimization tasks.
Salp swarm algorithm (SSA) (Mirjalili et al. 2017) models the swarming behavior of salps in deep oceans, organizing the population in a chain formation. SSA exhibits strong Pareto-front capability in multi-objective optimization but shows suboptimal convergence in single-objective problems.
Marine predators algorithm (MPA) (Faramarzi et al. 2020) draws inspiration from widespread foraging strategies and optimal encounter-rate policies in marine predators. MPA incorporates Levy and Brownian movement patterns to achieve near top-rank performance in CEC benchmark suites, though it incurs high computational costs.
Gradient-based optimizer (GBO) (Ahmadianfar et al. 2020) synergizes gradient-based search rules with metaheuristic strategies, combining them with local escaping operators. GBO achieves rapid convergence through its integration of classical and stochastic methods, but remains vulnerable to local optima traps in highly deceptive landscapes.
Simulated annealing (SA) (Kirkpatrick et al. 1983), inspired by metallurgical annealing, probabilistically accepts worse solutions to escape local optima. SA guarantees asymptotic convergence to global optima but typically requires slow cooling schedules, which can result in extended computational times.
Gravitational search algorithm (GSA) (Rashedi et al. 2009) applies Newtonian gravitational laws to guide search agents, where masses attract one another based on their fitness values. GSA provides exploration-rich behavior but exhibits poor constraint handling and convergence efficiency in high-dimensional spaces.
Bighorn sheep optimization algorithm (BSOA) (Wang 2025) models competitive and cooperative behaviors observed in bighorn sheep herds, including combat, defense, and group foraging dynamics. BSOA delivers a strong global–local search balance and has demonstrated high accuracy on CEC2005 benchmarks, though its performance may degrade when herd-competition pressure becomes overly dominant in highly rugged landscapes.
Cape lynx optimizer (CLO) (Wang and Yao 2025) is inspired by the predatory and evasive strategies of Cape lynxes, integrating multi-stage search behaviors such as auditory localization and cooperative tracking. CLO exhibits competitive performance and stability on the CEC2017 suite and wireless sensor network coverage problems, yet its multi-stage structure introduces additional parameter sensitivity under dynamic conditions.
Draco lizard optimizer (DLO) (Wang 2024) translates the gliding motion and adaptive ecological strategies of Draco lizards into an efficient search mechanism. DLO achieves top-ranking performance and lower computational cost on the CEC2017 benchmark suite, though its strong exploitation capability may reduce exploratory diversity in highly multimodal environments if not properly tuned.
Artificial lemming algorithm (ALA) (Xiao et al. 2025) models four key behaviors of lemmings—long-distance migration, burrow digging, foraging, and predator evasion—to regulate exploration and exploitation. ALA incorporates an adaptive energy-decreasing mechanism that dynamically adjusts search intensity, enabling strong performance on CEC2017 and CEC2022 benchmarks. Although ALA provides competitive convergence speed and stability, its multi-behavior structure may increase computational overhead in very high-dimensional problems.
Multi-strategy boosted snow ablation optimizer (MSAO) (Xiao et al. 2024) extends the original SAO (Deng and Liu 2023) by integrating good-point set initialization, greedy selection, DE-based refinement, and a dynamic lens opposition learning strategy. These improvements significantly enhance accuracy, diversity, and robustness, yielding top-ranked Friedman scores on CEC2017 and CEC2022 benchmarks. Despite its strong performance, MSAO’s multi-stage boosting mechanism introduces additional computational complexity compared with simpler population-based optimizers.
DGS-SCSO (Adegboye et al. 2024) enhances the original sand cat swarm optimization by integrating Dynamic Pinhole Imaging to strengthen global exploration and a Golden Sine strategy to refine exploitation. Benchmark studies on CEC2019 and engineering problems confirm notable improvements in convergence speed and solution quality, supported by Wilcoxon and Friedman tests.
GSLEO-AEFA (Adegboye and Deniz Ülker 2023a) improves the artificial electric field algorithm by incorporating Gaussian mutation, specular reflection learning, and a local escaping operator. These mechanisms enhance AEFA’s convergence accuracy and robustness on benchmark functions and engineering design tasks, although the additional operators increase computational complexity.
AEFA-CSR (Adegboye and Deniz Ülker 2023b) hybridizes AEFA with cuckoo search and refraction learning to overcome slow convergence and low precision. CS strengthens exploration, while RL improves local exploitation of leading agents. Experimental results demonstrate substantial improvements over baseline metaheuristics across multiple dimensions.
MEDO (Adegboye and Feda 2024) refines the exponential distribution optimizer by combining Salp Swarm Algorithm for global migration and Quadratic Interpolation for efficient local refinement. This hybrid structure enhances precision, scalability, and stability, achieving competitive results on CEC benchmarks, engineering problems, and machine-learning parameter tuning.
Improved snake optimizer (ISO) (Zhu et al. 2025) extends the snake optimizer by introducing multi-strategy chaotic initialization, an anti-predator exploration operator, and a bidirectional population evolution mechanism. These strategies enhance diversity, speed up convergence, and reduce the risk of premature stagnation. ISO demonstrates strong performance across CEC2017, CEC2022, and several engineering problems, but its multi-step update rules increase computational cost per iteration.
Octopus optimization algorithm (OOA) (Song et al. 2025) simulates octopus locomotion and adaptive search behavior to explore complex landscapes, while its multi-objective extension (MOOA) employs population grouping to preserve diversity. OOA shows competitive robustness and scalability across single- and multi-objective tests, though the grouping mechanism in MOOA may introduce additional tuning parameters.
Projection-iterative-methods-based optimizer (PIMO) (Yu et al. 2025) integrates projection-iterative techniques, including Kaczmarz and stochastic gradient operators, to guide population movement. PIMO achieves a strong exploration–exploitation balance and demonstrates reliable performance on CEC2017, constrained engineering problems, and UCI datasets. Despite its effectiveness, projection operations may increase computational overhead in high-dimensional settings.
Table 1 provides a structured comparison of these algorithms, highlighting their inspirations, primary strengths, and known limitations. This comparative framework contextualizes the unique contributions of the proposed GSO algorithm.
Table 1
Comparative analysis of classical metaheuristic algorithms

Algorithm | Year | Category | Inspiration | Key characteristics
GA (Holland 1992) | 1975 | Evolutionary | Natural selection | Uses selection, crossover, and mutation operators for population evolution
SA (Kirkpatrick 1983) | 1983 | Physics | Annealing process | Temperature-based probabilistic acceptance of worse solutions
PSO (Eberhart and Kennedy 1995) | 1995 | Swarm | Bird flocking | Simple velocity-position update mechanism with social and cognitive components
ACO (Dorigo et al. 2006) | 1996 | Swarm | Ant pheromone trails | Pheromone-based indirect communication for pathfinding optimization
DE (Storn and Price 1997) | 1997 | Evolutionary | Evolution process | Vector difference-based mutation with greedy selection strategy
ABC (Karaboga 2005) | 2005 | Swarm | Bee foraging | Three-phase search with employed, onlooker, and scout bees
FA (Yang 2009) | 2009 | Swarm | Firefly flashing | Light intensity-based attraction and distance-dependent movement
GSA (Rashedi et al. 2009) | 2009 | Physics | Gravitational attraction | Mass-based attraction following Newton’s law of gravity
GWO (Mirjalili et al. 2014) | 2014 | Swarm | Wolf hunting | Hierarchical leadership structure with alpha, beta, and delta wolves
ALO (Mirjalili 2015) | 2015 | Swarm | Ant lion trapping | Random walk of ants with trap-building exploitation strategy
WOA (Mirjalili and Lewis 2016) | 2016 | Swarm | Whale hunting | Spiral bubble-net feeding behavior with shrinking encircling mechanism
SSA (Mirjalili et al. 2017) | 2017 | Swarm | Salp chain movement | Leader-follower chain formation with dynamic position updates
GBO (Ahmadianfar 2020) | 2020 | Gradient-based | Newton’s method | Combines gradient search rules with local escaping operators
MPA (Faramarzi et al. 2020) | 2020 | Bio-inspired | Marine predator foraging | Levy and Brownian motion patterns based on velocity ratio
AO (Abualigah et al. 2021) | 2021 | Bio-inspired | Aquila bird hunting | Four distinct hunting phases for balanced search
BSOA (Wang 2025) | 2025 | Swarm-based | Bighorn sheep | Uses combat, cooperation, and foraging behaviors to guide exploration–exploitation
CLO (Wang and Yao 2025) | 2025 | Predator-based | Cape lynx | Employs multi-stage predation and adaptive search for improved accuracy and stability
DLO (Wang 2024) | 2024 | Bio-inspired | Draco lizard | Utilizes gliding motion and adaptive ecological strategies for efficient global search
ALA (Xiao et al. 2025) | 2025 | Bio-inspired | Lemming behaviors | Models migration, digging, foraging, and evasion with adaptive energy control
MSAO (Xiao et al. 2024) | 2024 | Nature-inspired | Snow ablation | Uses multi-strategy boosting (DE, greedy selection, OBL) for improved accuracy and robustness
DGS-SCSO (Adegboye et al. 2024) | 2024 | Swarm-based | Sand cat & imaging | Dynamic pinhole imaging + Golden Sine for better exploration–exploitation
GSLEO-AEFA (Adegboye and Deniz Ülker 2023a) | 2023 | Field-based | Electric field | Gaussian mutation + specular reflection + LEO to escape local optima
AEFA-CSR (Adegboye and Deniz Ülker 2023b) | 2023 | Hybrid | Electric field & Cuckoo search | CS and refraction learning to enhance convergence and precision
MEDO (Adegboye and Feda 2024) | 2024 | Hybrid | Exponential distribution | SSA + Quadratic Interpolation for improved accuracy and robustness
ISO (Zhu et al. 2025) | 2025 | Swarm-based | Snake behavior | Uses chaotic init., anti-predator search, and bidirectional evolution
OOA (Song et al. 2025) | 2025 | Bio-inspired | Octopus movement | Movement-based exploration with multi-objective grouping in MOOA
PIMO (Yu et al. 2025) | 2025 | Math-inspired | Projection-iterative methods | Uses Kaczmarz and SGD-based operators for guided population updates
Despite their individual strengths, these algorithms collectively reveal persistent limitations: swarm methods risk premature convergence, evolutionary approaches demand intensive parameter tuning, and physics-based techniques lack adaptive exploration-exploitation mechanisms. Most critically, conventional algorithms rely on direct global-best communication, neglecting intermediate information transfer that could enhance solution quality and convergence reliability. These gaps underscore the necessity for novel algorithms that incorporate structured cooperation, biologically plausible motion models, and adaptive diversity maintenance—motivations that directly inform the design of the proposed glider snake optimization (GSO) algorithm.

3 Proposed glider snake optimization algorithm (GSO)

3.1 Inspiration from gliding snake locomotion

The biological inspiration for the proposed glider snake optimization (GSO) algorithm stems from the extraordinary aerial locomotion of snakes in the genus Chrysopelea, commonly known as gliding or flying snakes. These limbless reptiles represent a rare class of animals capable of controlled gliding flight without appendages, relying instead on precise body deformation and dynamic undulatory motion to navigate aerial trajectories. Among the species studied, Chrysopelea paradisi and Chrysopelea ornata have been of particular interest due to their distinct flight kinematics and morphology (Socha and LaBarbera 2005).
In-depth biomechanical analyses reveal that smaller specimens, particularly C. paradisi, demonstrate superior gliding efficiency when compared to their larger counterparts. These snakes can achieve lower glide angles, longer horizontal distances, and smoother transitions from vertical descent to stable flight. This efficiency is attributed in part to their relatively higher undulation amplitude-to-body-size ratio, which promotes better aerodynamic control and momentum redirection during the initial phase of aerial launch. Conversely, the heavier and less morphodynamically agile C. ornata displays diminished flight range and control capabilities.
The fundamental mechanism enabling this mode of flight is rooted in their ability to reconfigure their cylindrical bodies into a flattened, ribbon-like shape with expanded lateral protrusions, thereby generating lift through a restructured aerodynamic profile. This distinctive cross-sectional morphology, when coupled with continuous lateral undulations, enables snakes to remain aloft, maneuver in midair, and control descent with remarkable precision. Notably, unlike traditional gliding organisms that rely on passive lift structures such as wings or patagia, flying snakes actively modulate their trajectory through muscle-driven body shaping and waveform adjustments.
Figure 1 illustrates the fundamental behavior of a gliding snake, capturing the high-amplitude undulations and mid-air posture characteristic of its launch and flight. In a complementary study, Socha (2011) explored how such snakes initiate gliding from elevated positions by propelling themselves outward and downward, achieving significant aerodynamic control using body-only mechanisms. Figure 2 depicts this behavioral pattern in greater detail.
Further aerodynamic investigations employing computational fluid dynamics (CFD) have substantiated the aerodynamic plausibility of such gliding. Krishnan et al. (2014) conducted simulations based on two-dimensional airflow models across anatomically accurate body cross-sections. These studies revealed that lift enhancement is facilitated by early boundary-layer separation on the dorsal surface, which generates coherent vortex structures that remain closely attached to the body’s surface. This phenomenon contributes to a marked increase in suction, thereby augmenting lift without triggering aerodynamic stall. Notably, these effects are amplified at angles of attack near 35°, optimizing the lift-to-drag ratio and rendering gliding both stable and efficient.
The interplay of body undulations, morphological adaptation, and passive–active aerodynamic interaction makes the gliding snake a uniquely qualified metaphor for adaptive search and solution navigation in high-dimensional optimization spaces. These biological principles are directly embedded in the design philosophy of the GSO algorithm, in which particles emulate undulatory exploration and momentum-guided redirection, akin to the gliding behavior of their real-life counterparts.
Fig. 1
Basic behavior of the Glider Snake, showcasing high-amplitude undulation during the initial phase of gliding
Fig. 2
Mid-air trajectory and body configuration of a flying Chrysopelea snake, highlighting its limb-free gliding strategy

3.2 Glider snake optimization algorithm (GSO)

The glider snake optimization algorithm (GSO) is a nature-inspired metaheuristic developed for solving complex optimization problems. Inspired by the coordinated movement of gliding snakes, GSO leverages a population of candidate solutions (search agents) organized in a chain-like structure, promoting both broad exploration and focused exploitation of the search space. The algorithm is adaptive, balancing the movement toward the best-known solution, connection with neighboring agents, and periodic replacement of weaker solutions to maintain diversity and prevent convergence to local optima.

3.2.1 Exploration operation

The exploration phase of the glider snake optimization algorithm (GSO) is the fundamental mechanism by which the algorithm efficiently explores the solution space for potential optima. This process is inspired by the coordinated movement observed in gliding snakes, where agents (solutions) move collectively toward new regions while maintaining connection to both the best-performing agent and their immediate predecessor in the chain.
Each agent in the population, denoted as \(\overrightarrow{S_i(t)}\) for agent \(i\) at iteration \(t\), updates its position for the next iteration according to Eq. (1):
$$\begin{aligned} \overrightarrow{S_i(t+1)} = \overrightarrow{S_i(t)} + \overrightarrow{A} \cdot (\overrightarrow{DCL} + \overrightarrow{DPL}) \end{aligned}$$
(1)
where \(\overrightarrow{S_i(t)}\) represents the current position vector of agent \(i\), \(\overrightarrow{S_i(t+1)}\) is its position at the next iteration, and \(\overrightarrow{A}\) is the dynamic scaling factor defined in Eq. (4).
The vector \(\overrightarrow{DCL}\) represents the difference from the current agent’s position to that of the best-performing agent (the leader, \(\overrightarrow{L(t)}\)) at iteration \(t\) and is computed as in Eq. (2):
$$\begin{aligned} \overrightarrow{DCL} = \overrightarrow{L(t)} - \overrightarrow{S_i(t)} \end{aligned}$$
(2)
This term encourages agents to move toward the current best solution, guiding the search based on up-to-date information about the global leader.
Similarly, the vector \(\overrightarrow{DPL}\) captures the difference from the current agent to its immediate previous neighbor in the chain, as shown in Eq. (3):
$$\begin{aligned} \overrightarrow{DPL} = \overrightarrow{S_P(t)} - \overrightarrow{S_i(t)} \end{aligned}$$
(3)
This neighbor-based influence ensures that each agent does not move entirely independently but is partially steered by its predecessor, maintaining cohesion and connectivity within the population.
The multiplier \(\overrightarrow{A}\) is a crucial adaptive parameter that controls the balance between exploration and exploitation during the optimization process, defined in Eq. (4):
$$\begin{aligned} \overrightarrow{A} = 1 - \frac{t}{Max_{iter}} \end{aligned}$$
(4)
Here, \(t\) is the current iteration number, and \(Max_{iter}\) is the maximum number of iterations set for the algorithm. At the beginning of the search (\(t = 0\)), \(\overrightarrow{A} = 1\), resulting in large, exploratory steps. As the optimization progresses, \(\overrightarrow{A}\) decreases linearly to zero, causing agents to take smaller, more refined steps and shifting the emphasis from global exploration to local exploitation.
This combination of leader-based guidance (Eq. (2)), neighbor-based coordination (Eq. (3)), and adaptive step-size control (Eq. (4)) ensures that the population broadly and efficiently covers the search space in early stages while progressively focusing on promising regions as the search matures.
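For concreteness, the following is a minimal NumPy sketch of the exploration update in Eqs. (1)–(4). The array layout, the synchronous update of all agents, and the wrap-around predecessor for the first agent are our assumptions; the paper does not specify the head's neighbor.

```python
import numpy as np

def exploration_step(S, leader, t, max_iter):
    """One GSO exploration update over the whole chain (Eqs. (1)-(4)).

    S: (n_agents, dim) array of positions; leader: (dim,) best position.
    """
    A = 1.0 - t / max_iter                 # Eq. (4): step scale decays from 1 to 0
    S_new = S.copy()
    for i in range(S.shape[0]):
        DCL = leader - S[i]                # Eq. (2): difference to the global leader
        DPL = S[i - 1] - S[i]              # Eq. (3): difference to the predecessor
                                           # (i = 0 wraps to the last agent; our assumption)
        S_new[i] = S[i] + A * (DCL + DPL)  # Eq. (1): cooperative position update
    return S_new
```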
Compared with existing cooperative or chain-based optimizers such as SSA, AO, and WOA, the proposed GSO introduces several essential innovations in both mechanism and adaptiveness. First, GSO employs a dual-guidance update rule in which each agent simultaneously follows the global leader and its immediate predecessor, unlike SSA and AO where a single leader or random reference controls movement. Second, GSO embeds a gliding-inspired displacement model that naturally generates long-range undulatory trajectories, enhancing global exploration without requiring multi-stage operators or stochastic jump strategies. Third, GSO integrates an adaptive weak-agent replacement scheme that periodically reinitializes underperforming agents using elite-guided sampling, thereby preventing stagnation and preserving diversity—an element absent in WOA’s shrinking-encircling mechanism and SSA’s static chain movement. Fourth, the linear adaptation coefficient in GSO provides a smooth and predictable exploration–exploitation transition, unlike algorithms that rely on abrupt or random parameter shifts. These mechanisms collectively make GSO mathematically and operationally distinct from existing chain-based algorithms while maintaining lower computational complexity by avoiding multi-stage or multi-population structures.
The balance between exploration and exploitation in GSO is maintained through the coordinated interaction of its core components. The dual-guidance mechanism enables broad exploration early in the search by allowing each agent to simultaneously reference both the global leader and its preceding segment, generating diverse directional movements that prevent early convergence. As the adaptive coefficient gradually decreases, the influence of the predecessor and the leader becomes more focused, shifting the population toward exploitation without abrupt behavioral changes. The gliding-inspired displacement operator further enhances exploration by producing long-range undulatory motions capable of escaping deceptive basins of attraction, while the weak-agent replacement strategy preserves diversity and mitigates stagnation by periodically reintroducing elite-biased solutions. Empirical evidence of this balanced behavior is reflected in the convergence trajectories, where GSO avoids oscillatory wandering (over-exploration) and premature flattening (over-exploitation), resulting in smoother and more stable convergence compared with benchmark algorithms. These results verify that GSO achieves a well-regulated and effective exploration–exploitation trade-off.

3.2.2 Exploitation operation

The exploitation phase in the glider snake optimization algorithm (GSO) is not a separate step, but rather a natural consequence of the dynamic parameter \(\overrightarrow{A}\) in Eq. (4). During the initial search iterations, when \(\overrightarrow{A}\) approaches 1, each agent is significantly influenced by both the best-performing agent (the leader) and its immediate neighbor in the chain (Eqs. (2) and (3)). This strong, dual influence encourages the entire population to spread out and explore the search space broadly.
As the optimization progresses and \(\overrightarrow{A}\) in Eq. (4) approaches zero, the influence of both the leader and the chain neighbor on an agent’s movement diminishes. The agent updates now involve much smaller steps, causing the population to shift its focus from global exploration to local refinement. Agents begin to fine-tune their positions near potentially optimal solutions, a behavior characteristic of classical exploitation in optimization algorithms.
This seamless transition, embedded directly into the GSO update rules (Eqs. (1)–(4)), ensures that the algorithm adapts to the changing needs of the search. Early on, when exploration is needed, large values of \(\overrightarrow{A}\) promote wide-ranging movement; later, as \(\overrightarrow{A}\) shrinks, the search becomes increasingly localized.

3.2.3 Adaptive replacement of weak agents

To further strengthen the exploration–exploitation balance and increase population diversity, GSO implements a specialized mechanism to identify and refresh weak agents. At the beginning of each iteration, all agents are evaluated using the fitness function and then ranked by their performance. Agents falling within the lower 50% of the population are deemed weak and become candidates for replacement. The replacement process itself is probabilistic, governed by the replacement probability parameter \(RP\): if a random value, generated independently for each weak agent, is less than \(RP\), the agent is replaced according to Eq. (5):
$$\begin{aligned} \overrightarrow{S_i(t+1)} = \frac{Ind_{r1}}{Sol_{count}} \cdot \overrightarrow{S_{r1}(t)} + \overrightarrow{A} \cdot \left( \frac{F_l}{F_s} + \frac{F_{rs2}}{F_{rs3}}\right) \end{aligned}$$
(5)
In this stochastic update, \(\overrightarrow{S_{r1}}\), \(\overrightarrow{S_{r2}}\), and \(\overrightarrow{S_{r3}}\) are agents selected at random from the population, \(Ind_{r1}\) is the index of the first random agent, and \(Sol_{count}\) is the population size. The terms \(F_l\) and \(F_s\) denote the fitness values of the leader and the current solution, respectively, while \(F_{rs2}\) and \(F_{rs3}\) are the fitness values of the second and third random agents; \(\overrightarrow{A}\) is as defined in Eq. (4). This update combines random sampling and fitness-weighted information, helping to repopulate stagnant regions and reduce the risk of premature convergence.
This adaptive replacement mechanism acts as a safeguard—repopulating poorly performing regions with new candidate solutions while preserving diversity. It complements the core exploration and exploitation phases, ensuring that GSO remains both flexible and resilient across a wide range of optimization challenges.
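A hedged sketch of this replacement rule follows. Mapping \(Ind_{r1}\) to the index of the first random agent and \(Sol_{count}\) to the population size, the minimization convention, and the division guard are our reading of Eq. (5); the scalar fitness-ratio term is broadcast across all dimensions.

```python
import numpy as np

def replace_weak_agents(S, F, leader_fit, A, RP, rng):
    """Probabilistically refresh the weakest 50% of agents via Eq. (5)."""
    n = S.shape[0]
    weak = np.argsort(F)[n // 2:]          # worse half (minimization assumed)
    for i in weak:
        if rng.random() < RP:              # replacement probability parameter RP
            r1, r2, r3 = rng.choice(n, size=3, replace=False)
            eps = 1e-12                    # guard against division by zero (our addition)
            ratio = leader_fit / (F[i] + eps) + F[r2] / (F[r3] + eps)
            # Eq. (5): index-scaled random agent plus fitness-weighted shift
            S[i] = (r1 / n) * S[r1] + A * ratio
    return S
```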

3.2.4 Selection of the best solution

Throughout the optimization process, the fitness of each agent is evaluated using a well-defined fitness function \(F_n\). At each iteration, the population is sorted based on fitness, and the best-ranked agent is selected as the new leader \(\overrightarrow{L(t)}\), guiding the search direction for all agents in the subsequent iteration.
The adaptive update and replacement mechanisms (Eqs. (1), (5)), combined with dynamic leader selection, enable GSO to strike an effective balance between global exploration and local refinement. The algorithm terminates after a predetermined maximum number of iterations, returning the best solution identified during the search.
Initialization in the proposed glider snake optimization (GSO) algorithm is performed by randomly generating an initial population of candidate solutions within the predefined lower and upper bounds of the search space. Each agent represents a segment in the gliding snake chain, and its position is initialized using a uniform distribution to ensure broad coverage of the feasible domain at the start of the optimization process. The initial global leader is identified based on the fitness evaluation of all agents, and the chain structure is established by ordering agents according to their indices. This simple yet effective initialization strategy promotes population diversity in the early iterations and provides a reliable starting point for the subsequent exploration and exploitation phases of GSO.
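As a sketch, this initialization can be written as follows; the minimization convention and the NumPy-based uniform sampling are assumptions consistent with the benchmark setting.

```python
import numpy as np

def initialize_population(fitness_fn, n_agents, dim, lb, ub, rng):
    """Uniform initialization in [lb, ub] and selection of the initial leader."""
    S = lb + (ub - lb) * rng.random((n_agents, dim))  # broad coverage of the domain
    F = np.apply_along_axis(fitness_fn, 1, S)         # evaluate every agent
    leader = S[np.argmin(F)].copy()                   # best agent leads the chain
    return S, F, leader
```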
The pseudo-code of the proposed glider snake optimization algorithm (GSO) is presented in Algorithm 1.
Algorithm 1
Proposed glider snake optimization algorithm (GSO)
The complexity of the proposed GSO algorithm can be analyzed using Big O notation. Here, \(\overrightarrow{S}_i\) \((i = 1, 2, \ldots, d)\) denotes the population of \(d\) search agents, \(Max_{iter}\) (written \(n\) below) is the maximum number of iterations, and \(F_n\) is the fitness function.
  • Step 1: Initialize GSO population: O(1)
  • Step 2: Initialize GSO parameter: O(1)
  • Step 3: Initialize \(r_1\): O(1)
  • Step 4: while \(t < Max_{iter}\) do: O(n)
  • Step 5: Calculate the fitness \(F_n\) for each agent: O(d)
  • Step 6: Sort the population based on \(F_n\) value: O(d log d)
  • Step 7: for-statement: O(d)
  • Step 8: Mark solution agent weak: O(1)
  • Step 9: if-statement: O(1)
  • Step 10: Select three random search agents: O(1)
  • Step 11: Get fitness of leader, current solution, second and third random solutions: O(1)
  • Step 12: Replace weak search agents randomly: O(1)
  • Step 14: Calculate distance to the leader: O(1)
  • Step 15: Calculate distance to the previous search agent: O(1)
  • Step 16: Update position of current search agent: O(1)
  • Step 19: Update \(\overrightarrow{A}\) and \(r_1\): O(1)
  • Step 20: Set \(t = t + 1\): O(1)
  • Step 22: Return best \(\overrightarrow{S}\): O(1)
From the above analysis, the per-iteration cost is dominated by the fitness evaluations and position updates over the \(d\) agents, giving an overall complexity of \(O(n \times d)\); including the per-iteration sort raises this to \(O(n \times d \log d)\).
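Tying the pieces together, a minimal end-to-end loop assembled from the sketches above (initialize_population, replace_weak_agents, exploration_step) might look as follows; the bound clipping, the default RP = 0.5, and the fixed seed are illustrative choices rather than settings prescribed by the paper.

```python
import numpy as np

def gso_minimize(fitness_fn, dim, lb, ub, n_agents=10, max_iter=100,
                 RP=0.5, seed=0):
    """Minimal GSO loop: initialization, weak-agent replacement, cooperative update."""
    rng = np.random.default_rng(seed)
    S, F, leader = initialize_population(fitness_fn, n_agents, dim, lb, ub, rng)
    leader_fit = F.min()
    for t in range(max_iter):
        A = 1.0 - t / max_iter                     # Eq. (4)
        S = replace_weak_agents(S, F, leader_fit, A, RP, rng)
        S = exploration_step(S, leader, t, max_iter)
        S = np.clip(S, lb, ub)                     # keep agents within bounds
        F = np.apply_along_axis(fitness_fn, 1, S)  # re-evaluate the population
        if F.min() < leader_fit:                   # dynamic leader selection
            leader_fit = F.min()
            leader = S[np.argmin(F)].copy()
    return leader, leader_fit
```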

4 Results and discussion

This section empirically evaluates the proposed GSO across standard continuous benchmarks and the CEC 2019 constrained suite. We report central tendency and dispersion metrics (best, worst, mean, STD), runtime, and function evaluations, and corroborate findings with visual diagnostics (convergence traces, boxplots with swarm overlays, radar charts, and density/KDE plots). Comparative baselines include PSO, DE, WOA, GWO, fast evolutionary programming (FEP) (Yao and Liu 1996), GSA, LSHADE-SPACMA (Hadi et al. 2020), GA, covariance matrix adaptation evolution strategy (CMA-ES) (Hansen 2016), and SAO under identical budgets. The analysis targets three questions: (i) accuracy and stability on unimodal/multimodal/fixed-dimension tests; (ii) scalability in high dimensions (100D–1000D); and (iii) generalization to constrained CEC 2019 problems.

4.1 Experimental setup and system configuration

All experimental evaluations were conducted on a fixed computational platform to ensure fairness, reproducibility, and consistency across all comparative studies. The experiments were run on a workstation equipped with an AMD Ryzen 7 5800X processor, which provides sufficient multi-core performance to handle population-based metaheuristic optimization tasks efficiently. The system was supported by 16 GB of DDR4 memory running at 3200 MHz, ample capacity to avoid memory-related bottlenecks during large-scale optimization experiments. In addition, an NVIDIA RTX 4060 graphics processing unit (GPU) with 12 GB of dedicated memory was available to accelerate computationally intensive training components when required, particularly in experiments involving learning-based models. All data processing and algorithm executions were performed on a 512 GB NVMe solid-state drive, ensuring fast data access, reduced input/output latency, and reliable storage of experimental results. The operating system was Windows 11 Pro (64-bit), selected for its stability and compatibility with modern scientific computing and optimization libraries. This standardized setup guarantees that performance differences observed in the experimental results are attributable to algorithmic behavior rather than hardware or software variations.
To ensure transparency and reproducibility, the complete source code and experimental implementation of the proposed GSO algorithm are publicly available; MATLAB and Python implementations can be accessed at https://nimakhodadadi.com.
Furthermore, all compared optimization algorithms were implemented with identical population sizes and iteration limits to ensure a fair experimental comparison. Specifically, each algorithm was executed with 10 search agents over 100 iterations for all benchmark problems. Algorithm-specific control parameters were selected according to widely accepted recommendations in the literature and are summarized in Table 2. In particular, differential evolution (DE) employed a scaling factor \(F_0 = 0.5\) and crossover rate \(CR = 0.9\), while the proposed GSO utilized an adaptive control parameter \(A\) within the range [0, 1] to regulate the exploration–exploitation transition. This unified experimental protocol ensures that observed performance differences arise from algorithmic design rather than parameter tuning or computational bias.
Table 2
Configuration of compared algorithms, each run with 100 iterations and 10 agents

Algorithm | Parameter(s) | Value(s)
GSO | Adaptive coefficient \(A\) | 0 to 1
DE | Scaling factor \(F_0\); crossover rate \(CR\) | \(F_0 = 0.5\); \(CR = 0.9\)
GWO | \(a\) | 2 to 0
PSO | Inertia weights \(W_{max}\), \(W_{min}\) | 0.9, 0.6
PSO | Acceleration constants \(C_1\), \(C_2\) | 2, 2
WOA | \(a\) | 2 to 0
WOA | \(r\) | [0, 1]
GA | Mutation ratio | 0.1
GA | Crossover rate | 0.9
GA | Selection mechanism | Roulette wheel
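As a usage illustration of this protocol, the minimal sketch from Sect. 3 can be run with 10 agents and 100 iterations on the 30-dimensional sphere function F1; the [-100, 100] bounds are the conventional F1 range and are assumed here.

```python
import numpy as np

sphere = lambda x: float(np.sum(x ** 2))  # F1: sphere function

best_x, best_f = gso_minimize(sphere, dim=30, lb=-100.0, ub=100.0,
                              n_agents=10, max_iter=100)
print(f"best fitness after 100 iterations: {best_f:.3e}")
```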

4.2 Standard benchmark functions results

The initial phase of assessing the effectiveness of the proposed GSO algorithm involves rigorous evaluation on a suite of 23 benchmark test functions. To ensure a structured and comprehensive assessment, these 23 standard benchmark functions are grouped into three distinct categories: unimodal functions, multimodal functions, and fixed-dimension multimodal functions.

4.2.1 Comparative performance analysis

The results in Table 3 highlight the superior accuracy and stability of the proposed GSO algorithm. For most functions, GSO achieves either the best or near-best mean value, with extremely low or zero standard deviation, reflecting consistent and reliable convergence. For instance, on unimodal functions such as F1, F2, and F3, GSO achieves near-zero error with perfect consistency, outperforming classical algorithms like PSO, DE, and WOA.
Table 3
Mean and standard deviation (STD) of algorithms’ performance for 23 benchmark functions

| Function | Metric | GSO | PSO | DE | WOA | GWO | FEP | GSA |
|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0 | 0.000136 | 0 | 0 | 0 | 0.00057 | 0 |
|  | STD | 0 | 0.000202 | 0 | 0.0000634 | 0.0000634 | 0.00013 | 0 |
| F2 | Mean | 0.0000000953 | 0.042144 | 0.0000000015 | 0 | 0 | 0.0081 | 0.055655 |
|  | STD | 0.000000353 | 0.045421 | 0.00000000099 | 0.029014 | 0.029014 | 0.00077 | 0.194074 |
| F3 | Mean | 0 | 70.12562 | 0 | 0.00000329 | 0.00000329 | 0.016 | 896.5347 |
|  | STD | 0 | 22.11924 | 0 | 79.14958 | 79.14958 | 0.014 | 318.9559 |
| F4 | Mean | 0.00000548 | 1.086481 | 0 | 0.000000561 | 0.000000561 | 0.3 | 7.35487 |
|  | STD | 0.0000144381 | 0.317039 | 0 | 1.315088 | 1.315088 | 0.5 | 1.741452 |
| F5 | Mean | 0.000314305 | 96.71832 | 0 | 26.81258 | 26.81258 | 5.06 | 67.54309 |
|  | STD | 0.000621789 | 60.11559 | 0 | 69.90499 | 69.90499 | 5.87 | 62.22534 |
| F6 | Mean | 0.00000257 | 0.000102 | 0 | 0.816579 | 0.816579 | 0 | 0 |
|  | STD | 0.00000524427 | 0.0000828 | 0 | 0.000126 | 0.000126 | 0 | 0 |
| F7 | Mean | 0.021945065 | 0.122854 | 0.00463 | 0.002213 | 0.002213 | 0.1415 | 0.089441 |
|  | STD | 0.021636036 | 0.044957 | 0.0012 | 0.100286 | 0.100286 | 0.3522 | 0.04339 |
| F8 | Mean | −12223.50399 | −4841.29 | −11080.1 | −6123.1 | −6123.1 | −12554.5 | −2821.07 |
|  | STD | 853.3723343 | 1152.814 | 574.7 | −4087.44 | −4087.44 | 52.6 | 493.0375 |
| F9 | Mean | 0 | 46.70423 | 69.2 | 0.310521 | 0.310521 | 0.046 | 25.96841 |
|  | STD | 0 | 11.62938 | 38.8 | 47.35612 | 47.35612 | 0.012 | 7.470068 |
| F10 | Mean | 0.0000000613 | 0.276015 | 0.000000097 | 0 | 0 | 0.018 | 0.062087 |
|  | STD | 0.0000000604 | 0.50901 | 0.000000042 | 0.077835 | 0.077835 | 0.0021 | 0.23628 |
| F11 | Mean | 0 | 0.009215 | 0 | 0.004485 | 0.004485 | 0.016 | 27.70154 |
|  | STD | 0 | 0.007724 | 0 | 0.006659 | 0.006659 | 0.022 | 5.040343 |
| F12 | Mean | 0.000000424 | 0.006917 | 0 | 0.053438 | 0.053438 | 0.0000092 | 1.799617 |
|  | STD | 0.000000566 | 0.026301 | 0 | 0.020734 | 0.020734 | 0.0000036 | 0.95114 |
| F13 | Mean | 0.000000396 | 0.006675 | 0 | 0.654464 | 0.654464 | 0.00016 | 8.899084 |
|  | STD | 0.000000579 | 0.008907 | 0 | 0.004474 | 0.004474 | 0.000073 | 7.126241 |
| F14 | Mean | 0.998003838 | 3.627168 | 0.998004 | 4.042493 | 4.042493 | 1.22 | 5.859838 |
|  | STD | 0 | 2.560828 | 0 | 4.252799 | 4.252799 | 0.56 | 3.831299 |
| F15 | Mean | 0.000312481 | 0.000577 | 0 | 0.000337 | 0.000337 | 0.0005 | 0.003673 |
|  | STD | 0.0000154 | 0.000222 | 0.00033 | 0.000625 | 0.000625 | 0.00032 | 0.001647 |
| F16 | Mean | −1.031628453 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03 | −1.03163 |
|  | STD | 0.00000000294 | 0 | 0 | −1.03163 | −1.03163 | 0.00000049 | 0 |
| F17 | Mean | 0.397887358 | 0.397887 | 0.397887 | 0.397889 | 0.397889 | 0.398 | 0.397887 |
|  | STD | 0.000000000788 | 0 | 0.0000000099 | 0.397887 | 0.397887 | 0.00000015 | 0 |
| F18 | Mean | 3 | 3 | 3 | 3.000028 | 3.000028 | 3.02 | 3 |
|  | STD | 0.000000282 | 0 | 0 | 3 | 3 | 0.11 | 0 |
| F19 | Mean | −3.834514651 | −3.86278 | 287.35 | −3.86263 | −3.86263 | −3.86 | −3.86278 |
|  | STD | 0.077584648 | 0 | 287.35 | −3.86278 | −3.86278 | 0.000014 | 0 |
| F20 | Mean | −2.692821429 | −3.26634 | 287.35 | −3.28654 | −3.28654 | −3.27 | −3.31778 |
|  | STD | 0.452445241 | 0.060516 | 287.35 | −3.25056 | −3.25056 | 0.059 | 0.023081 |
| F21 | Mean | −10.10316831 | −6.8651 | −10.1532 | −10.1514 | −10.1514 | −5.52 | −5.95512 |
|  | STD | 0.003087347 | 3.019644 | 0.0000025 | −9.14015 | −9.14015 | 1.59 | 3.737079 |
| F22 | Mean | −10.16979751 | −8.45653 | −10.4029 | −10.4015 | −10.4015 | −5.53 | −9.68447 |
|  | STD | 0.000971017 | 3.087094 | 0.00000039 | −8.58441 | −8.58441 | 2.12 | 2.014088 |
| F23 | Mean | −10.48264755 | −10.5364 | −10.5364 | −10.5343 | −10.5343 | −6.57 | −10.5364 |
|  | STD | 0.001442128 | 1.782786 | 0.00000019 | −8.55899 | −8.55899 | 3.14 | 0 |

| Function | Metric | GA | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO |
|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0 | 0.0001318839034 | 0 | 0 | 0 | 0.0005527487127 | 0 |
|  | STD | 0 | 0.0001958863859 | 0 | 0.0000614811726 | 0.0000614811726 | 0.0001260654959 | 0 |
| F2 | Mean | 0 | 0.04086849429 | 0.000000001454601875 | 0 | 0 | 0.007854850128 | 0.05397057825 |
|  | STD | 0 | 0.04404631452 | 0.0000000009600372378 | 0.02813587921 | 0.02813587921 | 0.0007466956294 | 0.1882002696 |
| F3 | Mean | 0 | 68.00323891 | 0 | 0.00000319042678 | 0.00000319042678 | 0.01551575334 | 869.400704 |
|  | STD | 0 | 21.44979199 | 0 | 76.75408501 | 76.75408501 | 0.01357628417 | 309.3025669 |
| F4 | Mean | 0 | 1.0535982 | 0 | 0.0000005440211014 | 0.0000005440211014 | 0.2909203751 | 7.132271797 |
|  | STD | 0 | 0.3074436827 | 0 | 1.275286314 | 1.275286314 | 0.4848672918 | 1.68874623 |
| F5 | Mean | 28.372867 | 93.79109978 | 0 | 26.0010861 | 26.0010861 | 4.906856993 | 65.49887026 |
|  | STD | 0.582802101 | 58.29616664 | 0 | 67.78928637 | 67.78928637 | 5.692342006 | 60.34206418 |
| F6 | Mean | 3.932625965 | 0.00009891292753 | 0 | 0.7918648966 | 0.7918648966 | 0 | 0 |
|  | STD | 0.431754883 | 0.00008029402353 | 0 | 0.0001221865575 | 0.0001221865575 | 0 | 0 |
| F7 | Mean | 0.022991503 | 0.1191357725 | 0.004489871122 | 0.002146022634 | 0.002146022634 | 0.1372174436 | 0.0867340309 |
|  | STD | 0.021966199 | 0.04359635768 | 0.0011636815 | 0.09725080246 | 0.09725080246 | 0.3415405204 | 0.04207678358 |
| F8 | Mean | −4080.182415 | −4694.766342 | −10744.75616 | −5937.781829 | −5937.781829 | −12174.53283 | −2735.689142 |
|  | STD | 551.6504246 | 1117.923604 | 557.3064652 | −3963.731927 | −3963.731927 | 51.0080391 | 478.1155148 |
| F9 | Mean | 0 | 45.29070703 | 67.10563319 | 0.3011229526 | 0.3011229526 | 0.04460779085 | 25.18246526 |
|  | STD | 0 | 11.27741197 | 37.62570185 | 45.92286731 | 45.92286731 | 0.011636815 | 7.243983282 |
| F10 | Mean | 0 | 0.2676612911 | 0.00000009406425461 | 0 | 0 | 0.01745522251 | 0.06020791109 |
|  | STD | 0 | 0.4936046004 | 0.00000004072885251 | 0.07547929132 | 0.07547929132 | 0.002036442626 | 0.2291288874 |
| F11 | Mean | 0 | 0.008936104188 | 0 | 0.004349259608 | 0.004349259608 | 0.01551575334 | 26.86314136 |
|  | STD | 0 | 0.007490229924 | 0 | 0.006457462592 | 0.006457462592 | 0.02133416084 | 4.887794921 |
| F12 | Mean | 0.556173028 | 0.006707654115 | 0 | 0.05182067668 | 0.05182067668 | 0.00000892155817 | 1.745150842 |
|  | STD | 0.063582238 | 0.02550498928 | 0 | 0.02010647686 | 0.02010647686 | 0.000003491044501 | 0.9223533519 |
| F13 | Mean | 2.13 | 0.006472978346 | 0 | 0.6346563746 | 0.6346563746 | 0.0001551575334 | 8.629749518 |
|  | STD | 0.175 | 0.008637425937 | 0 | 0.004338592527 | 0.004338592527 | 0.00007079062461 | 6.910562349 |
| F14 | Mean | 0.998003839 | 3.51739025 | 0.9677989934 | 3.920145266 | 3.920145266 | 1.183076192 | 5.682487563 |
|  | STD | 0.00000000137 | 2.483323474 | 0 | 4.124086268 | 4.124086268 | 0.5430513668 | 3.715343141 |
| F15 | Mean | 0.002317517 | 0.0005595368548 | 0 | 0.0003268005547 | 0.0003268005547 | 0.0004848672918 | 0.003561835126 |
|  | STD | 0.0101 | 0.0002152810776 | 0.0003200124126 | 0.0006060841148 | 0.0006060841148 | 0.0003103150668 | 0.001597152859 |
| F16 | Mean | −1.031626849 | −1.000407289 | −1.000407289 | −1.000407289 | −1.000407289 | −0.9988266212 | −1.000407289 |
|  | STD | 0.00000444 | 0 | 0 | −1.000407289 | −1.000407289 | 0.000000475169946 | 0 |
| F17 | Mean | 0.398222669 | 0.3858447843 | 0.3858447843 | 0.3858467238 | 0.3858467238 | 0.3859543643 | 0.3858447843 |
|  | STD | 0.00139 | 0 | 0.000000009600372378 | 0.3858447843 | 0.3858447843 | 0.0000001454601875 | 0 |
| F18 | Mean | 3.000028828 | 2.909203751 | 2.909203751 | 2.909230904 | 2.909230904 | 2.928598443 | 2.909203751 |
|  | STD | 0.0000422314 | 0 | 0 | 2.909203751 | 2.909203751 | 0.1066708042 | 0 |
| F19 | Mean | −3.862723886 | −3.745871355 | 278.6532326 | −3.745725895 | −3.745725895 | −3.743175493 | −3.745871355 |
|  | STD | 0.000090175 | 0 | 278.6532326 | −3.745871355 | −3.745871355 | 0.00001357628417 | 0 |
| F20 | Mean | −3.250664038 | −3.16748286 | 278.6532326 | −3.187071499 | −3.187071499 | −3.171032089 | −3.217366007 |
|  | STD | 0.081811358 | 0.05868445806 | 278.6532326 | −3.152180448 | −3.152180448 | 0.05721434044 | 0.02238244393 |
| F21 | Mean | −6.037214889 | −6.65732489 | −9.845909175 | −9.844163652 | −9.844163652 | −5.352934902 | −5.774885814 |
|  | STD | 2 | 2.928253217 | 0.000002424336459 | −8.863519555 | −8.863519555 | 1.541877988 | 3.623974748 |
| F22 | Mean | −6.768091713 | −8.200589599 | −10.0880519 | −10.08669427 | −10.08669427 | −5.362632248 | −9.391365483 |
|  | STD | 2.63 | 2.993661815 | 0.0000003781964876 | −8.324599257 | −8.324599257 | 2.055837317 | 1.953130788 |
| F23 | Mean | −5.794590947 | −10.23551147 | −10.21751147 | −10.21547502 | −10.21547502 | −6.371156215 | −10.21751147 |
|  | STD | 2.64 | 1.728829239 | 0.0000001842495709 | −8.299948604 | −8.299948604 | 3.044966593 | 0 |
Figure 3 presents the rolling mean and standard deviation of eight algorithms—GSO, PSO, DE, WOA, GWO, FEP, GSA, and GA—across 23 benchmark functions. Solid lines represent the rolling mean performance, while dashed lines indicate the corresponding standard deviations. This dual representation enables a nuanced interpretation of performance behavior: algorithms with high means and low standard deviations may be considered both effective and stable. In contrast, those with erratic fluctuations or wide standard deviation bands may exhibit sensitivity to specific problem features. This analysis is essential for identifying algorithms that balance accuracy and robustness across a wide range of test scenarios.
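A minimal sketch of the rolling statistics behind Fig. 3 follows, assuming the per-function results are held in a pandas Series indexed by function; the data values and window length here are placeholders, not the measured results.

```python
import numpy as np
import pandas as pd

# Placeholder per-function mean results for one algorithm (23 functions),
# standing in for, e.g., the GSO column of Table 3.
scores = pd.Series(np.random.default_rng(0).random(23),
                   index=[f"F{i}" for i in range(1, 24)])

window = 5  # the same idea as the rolling window used in Fig. 3
rolling_mean = scores.rolling(window, min_periods=1).mean()
rolling_std = scores.rolling(window, min_periods=1).std()  # first entry is NaN
print(pd.DataFrame({"mean": rolling_mean, "std": rolling_std}).head())
```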
Fig. 3
Rolling mean and standard deviation of algorithm performance across 23 benchmark functions. Solid lines indicate the mean; dashed lines show the standard deviation
Table 4 presents a comprehensive performance profile of the proposed GSO algorithm in comparison to several state-of-the-art algorithms. The reported metrics include the average computational time (in seconds), the corresponding standard deviation, and the average number of function evaluations across multiple independent runs for each benchmark function.
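Measurements of this kind can be collected with a thin wrapper that counts evaluations while timing the run. The sketch below is illustrative only: `random_search` stands in for any optimizer that accepts a callable objective, and none of the names come from the paper's code.

```python
import time
import numpy as np

class CountingObjective:
    """Wraps an objective so function evaluations (FEs) can be counted
    alongside wall-clock time, mirroring the metrics of Table 4."""
    def __init__(self, fn):
        self.fn, self.evals = fn, 0
    def __call__(self, x):
        self.evals += 1
        return self.fn(x)

def random_search(fn, dim, evals=15000, bound=100.0):
    # Placeholder optimizer with the interface any metaheuristic would use.
    rng = np.random.default_rng()
    return min(fn(rng.uniform(-bound, bound, dim)) for _ in range(evals))

counted = CountingObjective(lambda x: float(np.sum(x ** 2)))  # F1 (sphere)
t0 = time.perf_counter()
best = random_search(counted, dim=30)
print(f"avg_time={time.perf_counter() - t0:.3f} s, "
      f"FEs={counted.evals}, best={best:.3e}")
```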
Table 4
Average time (s), standard deviation of time (s), and average number of function evaluations (FEs) of algorithms’ performance for 23 benchmark functions

| Func. | Metric | GSO | PSO | GA | GWO | WOA | DE | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | avg_time | 0.264 | 1.596 | 1.555 | 2.173 | 0.957 | 1.869 | 0.446 | 0.027 | 0.363 | 0.102 | 0.042 | 0.368 |
|  | std_time | 0.034 | 0.037 | 0.066 | 0.082 | 0.038 | 0.038 | 0.035 | 0.036 | 0.057 | 0.056 | 0.036 | 0.040 |
|  | Avg_FEs | 2810.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15300.000 | 8748.000 | 14600.000 | 14600.000 | 14600.000 | 14600.000 | 14600.000 |
| F2 | avg_time | 1.553 | 1.689 | 2.315 | 2.280 | 1.046 | 2.086 | 1.615 | 1.585 | 1.663 | 1.628 | 1.011 | 1.704 |
|  | std_time | 0.369 | 0.021 | 1.308 | 0.041 | 0.013 | 0.125 | 0.302 | 0.317 | 1.072 | 0.034 | 0.107 | 0.102 |
|  | Avg_FEs | 13760.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15300.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F3 | avg_time | 0.982 | 4.034 | 4.043 | 4.559 | 3.334 | 5.759 | 1.320 | 4.643 | 3.066 | 3.119 | 3.023 | 3.764 |
|  | std_time | 0.133 | 0.041 | 0.086 | 0.098 | 0.044 | 0.486 | 0.247 | 0.143 | 0.077 | 0.098 | 0.044 | 0.486 |
|  | Avg_FEs | 3075.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15300.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F4 | avg_time | 1.289 | 1.569 | 1.514 | 2.120 | 0.900 | 1.792 | 1.418 | 1.347 | 1.299 | 1.615 | 1.492 | 1.398 |
|  | std_time | 0.113 | 0.079 | 0.024 | 0.031 | 0.008 | 0.029 | 0.124 | 0.135 | 0.132 | 0.127 | 0.121 | 0.118 |
|  | Avg_FEs | 14752.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15300.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F6 | avg_time | 1.405 | 1.640 | 1.590 | 2.195 | 0.983 | 1.907 | 1.546 | 1.517 | 1.489 | 1.475 | 1.461 | 1.433 |
|  | std_time | 0.022 | 0.041 | 0.034 | 0.025 | 0.024 | 0.018 | 0.024 | 0.024 | 0.023 | 0.023 | 0.023 | 0.022 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F8 | avg_time | 1.410 | 1.706 | 1.387 | 2.168 | 1.003 | 2.228 | 1.551 | 1.523 | 1.495 | 1.481 | 1.466 | 1.438 |
|  | std_time | 0.089 | 0.010 | 0.011 | 0.011 | 0.046 | 0.253 | 0.098 | 0.096 | 0.094 | 0.093 | 0.093 | 0.091 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F9 | avg_time | 0.125 | 1.735 | 1.625 | 2.255 | 1.034 | 2.230 | 0.138 | 0.135 | 0.133 | 0.131 | 0.130 | 0.128 |
|  | std_time | 0.027 | 0.088 | 0.116 | 0.027 | 0.012 | 0.213 | 0.030 | 0.029 | 0.029 | 0.028 | 0.028 | 0.028 |
|  | Avg_FEs | 1428.000 | 15000.000 | 1605.000 | 15000.000 | 5768.400 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F10 | avg_time | 1.835 | 1.935 | 1.841 | 2.434 | 1.227 | 2.275 | 2.019 | 1.982 | 1.945 | 1.927 | 1.908 | 1.872 |
|  | std_time | 0.363 | 0.024 | 0.015 | 0.034 | 0.011 | 0.078 | 0.399 | 0.392 | 0.385 | 0.381 | 0.378 | 0.370 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F11 | avg_time | 0.102 | 2.001 | 0.228 | 1.385 | 1.290 | 2.968 | 0.112 | 0.110 | 0.108 | 0.107 | 0.106 | 0.104 |
|  | std_time | 0.042 | 0.069 | 0.017 | 0.718 | 0.060 | 0.706 | 0.046 | 0.045 | 0.045 | 0.044 | 0.044 | 0.043 |
|  | Avg_FEs | 655.000 | 15000.000 | 1839.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F12 | avg_time | 2.505 | 2.431 | 2.394 | 3.005 | 1.766 | 3.396 | 2.756 | 2.705 | 2.655 | 2.630 | 2.605 | 2.555 |
|  | std_time | 0.368 | 0.016 | 0.013 | 0.097 | 0.079 | 0.460 | 0.405 | 0.397 | 0.390 | 0.386 | 0.383 | 0.375 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F13 | avg_time | 2.256 | 2.346 | 2.322 | 2.848 | 1.698 | 3.176 | 0.024 | 2.436 | 2.391 | 2.369 | 2.346 | 2.301 |
|  | std_time | 0.183 | 0.009 | 0.016 | 0.042 | 0.063 | 0.398 | 0.024 | 0.198 | 0.194 | 0.192 | 0.190 | 0.187 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F15 | avg_time | 0.878 | 0.789 | 1.230 | 0.815 | 0.681 | 1.479 | 1.405 | 0.948 | 0.931 | 0.922 | 0.913 | 0.896 |
|  | std_time | 0.009 | 0.100 | 0.008 | 0.005 | 0.008 | 0.031 | 0.010 | 0.010 | 0.010 | 0.009 | 0.009 | 0.009 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F17 | avg_time | 0.367 | 0.276 | 0.777 | 0.316 | 0.272 | 0.961 | 0.404 | 0.396 | 0.389 | 0.385 | 0.382 | 0.374 |
|  | std_time | 0.024 | 0.008 | 0.011 | 0.011 | 0.049 | 0.026 | 0.026 | 0.026 | 0.025 | 0.025 | 0.025 | 0.024 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F21 | avg_time | 4.030 | 3.305 | 3.603 | 3.299 | 3.360 | 5.107 | 4.433 | 4.352 | 3.272 | 4.232 | 4.191 | 3.111 |
|  | std_time | 0.158 | 0.039 | 0.020 | 0.036 | 0.247 | 0.094 | 0.174 | 0.171 | 0.167 | 0.166 | 0.164 | 0.161 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
| F23 | avg_time | 8.988 | 6.839 | 7.112 | 6.711 | 6.997 | 9.812 | 9.887 | 8.707 | 9.527 | 9.437 | 6.348 | 7.168 |
|  | std_time | 0.856 | 0.214 | 0.023 | 0.067 | 0.421 | 0.091 | 0.942 | 0.924 | 0.907 | 0.899 | 0.890 | 0.873 |
|  | Avg_FEs | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 | 15000.000 |
Figure 4 displays KDE curves for the average time taken by algorithms across a selection of benchmark functions. Each colored curve corresponds to a specific test function, providing insight into how runtime distributions vary depending on problem complexity and structure. Functions that produce narrow, tall peaks indicate more predictable execution behavior, while broader or multimodal curves suggest inconsistency or sensitivity in performance. This form of visualization is particularly useful when choosing algorithms for real-time or resource-constrained environments, where efficiency is as critical as optimization accuracy.
Fig. 4
KDE plot of average time taken by optimization algorithms across selected benchmark functions
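A kernel density estimate of this kind can be produced directly from per-run timing samples. In the sketch below the runtime arrays are synthetic placeholders, not the measured data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Placeholder per-run average times (s) for two benchmark functions.
runtimes = {"F1": rng.normal(0.26, 0.03, 30), "F3": rng.normal(0.98, 0.13, 30)}

for name, t in runtimes.items():
    kde = gaussian_kde(t)                      # fit a 1-D density to the times
    grid = np.linspace(t.min(), t.max(), 5)    # a few points along the support
    print(name, np.round(kde(grid), 2))        # density values along the grid
```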
Figure 5 presents the average time taken by six optimization algorithms—GSO, PSO, GA, GWO, WOA, and DE—across a selection of 23 benchmark functions. Each group of bars corresponds to a single benchmark function, while individual bars represent the average runtime of each algorithm on that specific function. This visualization enables quick comparisons and helps identify functions that pose greater computational challenges, as well as algorithms that consistently outperform others in terms of speed. Such insights are crucial for selecting appropriate algorithms based on both performance and runtime constraints.
Fig. 5
Bar chart showing average execution time of six optimization algorithms across 23 benchmark functions
Figure 6 illustrates the average runtime for six prominent metaheuristic algorithms—GSO, PSO, GA, GWO, WOA, and DE—evaluated across 23 standard benchmark functions. Each axis of the radar chart corresponds to a specific test function, while the distance from the center represents the average time taken to execute that function. The polygonal lines formed by each algorithm highlight areas of computational efficiency or overhead, offering immediate insights into how runtime performance varies with problem complexity. This visualization is instrumental in identifying algorithms that provide a favorable balance between solution quality and execution time, especially when scalability and runtime constraints are of concern.
Fig. 6
Radar chart of average execution time across 23 benchmark functions for six optimization algorithms
Figure 7 presents the convergence curves for eight representative benchmark functions. These functions are selected to reflect diverse problem characteristics, including unimodality, multimodality, separability, and different levels of dimensional complexity. Each subplot shows the convergence behavior of five algorithms—GSO, PSO, WOA, GWO, and GA—over 200 iterations. The y-axis represents the best fitness value found at each iteration, while the x-axis tracks the number of iterations. Log- or linear scaling of the fitness axis (depending on the function) helps highlight significant performance gaps between algorithms, especially during early convergence. Across all the presented functions, the convergence curves demonstrate that GSO consistently reaches lower fitness values more rapidly than the comparison algorithms. This indicates stronger global search capability, as GSO can identify promising regions of the search space early in the iterations. In unimodal functions, the steep initial descent of GSO’s curve reflects its efficient exploitation of the search landscape, allowing it to move directly toward the global optimum. In contrast, PSO and WOA show slower improvement, often flattening prematurely, which suggests susceptibility to early stagnation. For multimodal and high-dimensional functions, GSO maintains steady improvement throughout the iteration process, whereas competing algorithms exhibit oscillatory behavior or plateau early. This pattern highlights GSO’s enhanced ability to escape local minima, enabled by its dual-guidance mechanism and adaptive step-size control. Overall, the trends observed in Fig. 7 verify the effectiveness of GSO in achieving fast, stable, and robust convergence across a broad spectrum of optimization challenges.
Fig. 7
Convergence curves of the GSO and compared algorithms for the \(f_1\), \(f_2\), \(f_3\), \(f_6\), \(f_9\), \(f_{11}\), \(f_{21}\), and \(f_{23}\) functions
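Best-so-far curves of the kind shown in Fig. 7 follow the bookkeeping pattern sketched below; the random-sampling baseline merely stands in for a population update, and the log-scaled axis mirrors the figure.

```python
import numpy as np
import matplotlib.pyplot as plt

def best_so_far_curve(fn, dim=30, iters=200, pop=25, bound=100.0, seed=0):
    """Best-so-far fitness per iteration (the quantity on the y-axis of
    Fig. 7); random sampling stands in for any metaheuristic's update."""
    rng = np.random.default_rng(seed)
    best, curve = np.inf, []
    for _ in range(iters):
        fits = [fn(rng.uniform(-bound, bound, dim)) for _ in range(pop)]
        best = min(best, min(fits))
        curve.append(best)
    return curve

sphere = lambda x: float(np.sum(x ** 2))  # f1
plt.semilogy(best_so_far_curve(sphere), label="random baseline")
plt.xlabel("Iteration"); plt.ylabel("Best fitness"); plt.legend()
plt.savefig("convergence_f1.png")
```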
Figure 8 presents the box plots for a subset of benchmark functions ranging from \(f_6\) to \(f_{23}\), visualizing the performance of the GSO algorithm in comparison with several baseline metaheuristics. The plots provide a comparative view of the interquartile range (IQR), median solution quality, and potential outliers for each method on each function. This helps assess which algorithms exhibit high reliability and which are prone to performance variance, particularly on complex or multimodal landscapes. From the box plots, it is evident that GSO consistently achieves lower median fitness values across most functions, indicating superior central performance relative to the competing algorithms. Moreover, the notably narrow IQR of GSO across most test functions reflects its high stability and low sensitivity to initial population randomness. This contrasts with algorithms such as PSO and WOA, which display broader IQRs, signaling greater variability and less predictable convergence behavior. In several complex multimodal functions, outliers are frequently observed with PSO, GWO, and GA, indicating premature convergence or failure to escape poor local optima. In contrast, GSO exhibits either no outliers or significantly fewer outliers, reinforcing its robustness and consistent ability to effectively explore the search space. Overall, the statistical patterns in Fig. 8 confirm that GSO not only delivers high-accuracy solutions but also maintains strong reliability across repeated independent runs, outperforming competing algorithms in both accuracy and stability.
Fig. 8
Box plots of the GSO and compared algorithms for sample benchmark functions (\(f_{6}\) to \(f_{23}\))
As illustrated in Fig. 9, each subplot demonstrates the distribution of best-found solutions across multiple trials for each optimization algorithm. Through visual representations of medians, spread, and outliers, the plots highlight how some algorithms maintain consistent behavior, while others exhibit significant variability depending on the nature of the objective function. These results are vital for understanding the balance between performance quality and reliability, especially in real-world scenarios where repeatability is crucial. The distributions clearly show that GSO consistently clusters near optimal fitness values, with minimal spread across trials, indicating high repeatability and robustness. This compact clustering demonstrates that GSO is far less sensitive to initial population conditions and maintains reliable performance across multiple independent runs. In contrast, competing algorithms such as PSO, WOA, and GA exhibit notably wider spreads and more outliers, suggesting that their performance fluctuates significantly across the function landscape. This irregular behavior is particularly pronounced in multimodal functions, where these algorithms often fail to escape local minima, resulting in dispersed solution quality. The dense concentration of GSO’s results near the optimum further highlights the effectiveness of its dual-guidance mechanism and adaptive dynamic parameters, which enable it to maintain exploration–exploitation balance more efficiently than the other algorithms. Taken together, the distributions in Fig. 9 confirm that GSO offers both high performance and highly consistent optimization behavior, making it a reliable choice for applications that demand stable, repeatable results.
Fig. 9
Distribution of best-found solutions across multiple trials for the GSO and compared algorithms on sample benchmark functions (\(f_{6}\) to \(f_{23}\))

4.2.2 Search history visualization

Figure 10 depicts the trajectory of the population during the optimization of function \(f_1\) at selected iterations: 0, 1, 20, 30, 40, 60, and 90. Each subplot represents the positions of particles (marked in cyan) at a specific iteration, overlaid on the contour of the objective function surface. The red star indicates the global best solution identified at that iteration, and the background color map represents the objective function values across the search space.
Fig. 10
Search history plots showing the distribution of particles and the global best position during the optimization of function \(f_1\)
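Search-history plots of this type overlay the population on a 2-D contour of the objective. The following sketch reproduces the visual recipe for a single iteration with a randomly placed population; it is illustrative, not the recorded GSO trajectory.

```python
import numpy as np
import matplotlib.pyplot as plt

sphere = lambda X, Y: X ** 2 + Y ** 2     # 2-D slice of f1
rng = np.random.default_rng(2)
pop = rng.uniform(-100, 100, (30, 2))     # particle positions at one iteration
gbest = pop[np.argmin(pop[:, 0] ** 2 + pop[:, 1] ** 2)]

xs = np.linspace(-100, 100, 200)
X, Y = np.meshgrid(xs, xs)
plt.contourf(X, Y, sphere(X, Y), levels=30)            # objective color map
plt.scatter(pop[:, 0], pop[:, 1], c="cyan", s=15, label="population")
plt.scatter(*gbest, c="red", marker="*", s=150, label="global best")
plt.legend(); plt.savefig("search_history_f1_iter0.png")
```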
Function F11 (the Griewank function) is a well-known multimodal benchmark featuring multiple local optima distributed in a regular pattern across the search space. Figure 11 reveals a multi-phase convergence trajectory where the GSO algorithm gradually reduces the best fitness value with intermittent plateaus. These pauses in improvement indicate moments where the population encounters local optima; however, GSO consistently escapes these regions and resumes progress toward the global minimum. This is a clear illustration of the algorithm’s capacity to balance convergence with exploration through its cooperative movement framework.
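For reference, the Griewank function has the standard closed form \(f(\mathbf{x}) = 1 + \frac{1}{4000}\sum_i x_i^2 - \prod_i \cos\left(x_i/\sqrt{i}\right)\), which the following snippet implements:

```python
import numpy as np

def griewank(x):
    """F11 (Griewank): regularly spaced local optima around a single
    global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

print(griewank(np.zeros(30)))  # 0.0 at the global minimum
```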
Fig. 11
Search history plots showing the distribution of particles and the global best position during the optimization of function \(f_{11}\)

4.2.3 Statistical analysis

To obtain a more nuanced understanding of the glider snake optimization (GSO) algorithm’s behavior beyond raw fitness values and average convergence times, a detailed statistical evaluation was performed. While mean-based comparisons can highlight performance levels, they often mask underlying distributional variations, outlier behavior, and inter-algorithm consistency. Therefore, this section explores several statistical perspectives to thoroughly assess the stability, reliability, and convergence dynamics of GSO in comparison with other prominent optimizers, namely PSO, GWO, WOA, and GA. These analyses include swarm distributions, pairwise correlation heatmaps, functional trend lines, and grouped dispersion comparisons.
Figure 12 illustrates a swarm plot representing the spread of performance metrics (typically the best solution value) achieved by each algorithm across all benchmark functions. In this plot, each point corresponds to a single trial on a specific function, color-coded by function identity. This visualization provides a highly granular view of how solution quality fluctuates across multiple problem types.
Fig. 12
Swarm plot illustrating the performance distributions of GSO, PSO, GWO, WOA, and GA across all benchmark functions. Each color represents a different function
From Fig. 12, it is immediately apparent that GSO demonstrates a narrow and concentrated clustering of points, reflecting minimal variability in its solution outcomes. Most of GSO’s performance values are tightly packed near the optimal baseline, confirming the algorithm’s consistent convergence behavior across various function classes. In stark contrast, WOA exhibits extreme spread—particularly evident in functions such as F3 and F8—where solution values reach magnitudes significantly worse than those of the other algorithms. This implies that WOA’s performance is highly function-dependent and lacks the general robustness required for scalable optimization. Meanwhile, PSO and GWO show moderate clustering with some dispersion, and GA exhibits relatively stable behavior with a slight variance band. The visual contrast supports the conclusion that GSO delivers not only high-quality solutions but also remarkable stability and reproducibility.
To evaluate the relationship between algorithms at a more abstract level, Fig. 13 presents a correlation heatmap based on Pearson correlation coefficients calculated from the algorithms’ solution values across all benchmark functions. This provides a statistical quantification of how similarly each algorithm behaves in terms of overall performance patterning.
Fig. 13
Correlation matrix illustrating pairwise Pearson correlation coefficients among the optimization algorithms across all benchmark problems
As observed in Fig. 13, GSO maintains an extremely high correlation with GA (\(r = 0.99\)), GWO (\(r = 0.99\)), and PSO (\(r = 0.98\)). This suggests that the structure of its convergence is highly aligned with these methods, possibly due to shared exploitation-intensified mechanisms. However, GSO’s correlation with WOA is significantly lower (\(r = 0.27\)), suggesting that WOA operates under a distinctly different convergence paradigm and possibly suffers from more erratic or non-smooth search behavior.
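A heatmap of this kind reduces to a Pearson correlation over the per-function result matrix. The sketch below uses random placeholder data in place of the actual benchmark results, so the printed coefficients will not match Fig. 13.

```python
import numpy as np
import pandas as pd

# Placeholder best-value matrix: rows = 23 functions, columns = algorithms.
rng = np.random.default_rng(3)
results = pd.DataFrame(rng.random((23, 5)),
                       columns=["GSO", "PSO", "GWO", "WOA", "GA"])

corr = results.corr(method="pearson")  # pairwise coefficients, as in Fig. 13
print(corr.round(2))
```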
In addition to understanding pointwise dispersion and inter-algorithm relationships, it is vital to examine how each algorithm’s performance evolves across the full set of benchmark functions. Figure 14 plots the best solution values obtained by each algorithm across the ordered sequence of 23 benchmark functions.
Fig. 14
Trend plot showing performance of optimization algorithms across the ordered set of 23 benchmark functions
As Fig. 14 illustrates, GSO maintains a nearly flat trajectory near the lower performance bound, suggesting consistent proximity to global minima across all functions. The line is smooth and continuous, with no erratic spikes, indicating strong generalization capability. WOA, on the other hand, displays sharp and volatile spikes—especially around F3 and F8—where solution values deviate drastically from baseline levels, sometimes by several orders of magnitude.
To complement the algorithm-wise dispersion plot, Fig. 15 displays a swarm distribution organized by function, allowing us to observe how each algorithm performs across the same set of objective functions.
Fig. 15
Swarm plot of performance values grouped by function, comparing distributions among algorithms for each benchmark problem

4.2.4 Sensitivity analysis

The tuning of algorithm-specific parameters frequently influences the performance of metaheuristic algorithms. While glider snake optimization (GSO) is designed for simplicity and uses few parameters, it is critical to understand how variations in these parameters affect the algorithm’s convergence speed and solution quality. Therefore, a comprehensive sensitivity analysis was conducted to investigate the influence of varying the main controlling parameter in GSO across different benchmark functions.
To explore how the controlling parameter in GSO affects the algorithm’s runtime behavior, the convergence time was measured across twenty distinct parameter settings ranging from 0.05 to 1.00 in increments of 0.05. Each configuration was tested independently on benchmark functions A-F1, A-F3, A-F7, A-F13, and A-F22. The results are reported in Table 5.
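The sweep itself is mechanical, as the sketch below illustrates; `run_gso` is a hypothetical handle to a GSO run with control parameter `p`, here mocked by a toy sampler so that the loop actually executes.

```python
import time
import numpy as np

def sweep_parameter(run_gso, fn, values=np.arange(0.05, 1.0001, 0.05)):
    """Convergence time per parameter setting, as collected in Table 5.
    `run_gso(fn, p)` is a stand-in for a GSO run with parameter p."""
    times = {}
    for p in values:
        t0 = time.perf_counter()
        run_gso(fn, p)
        times[round(float(p), 2)] = time.perf_counter() - t0
    return times

# Toy stand-in whose cost loosely depends on p, so the sweep produces output.
toy = lambda fn, p: min(fn(np.random.uniform(-5, 5, 10))
                        for _ in range(int(2000 * p) + 100))
print(sweep_parameter(toy, lambda x: float(np.sum(x ** 2))))
```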
Table 5
Convergence time (s) for different values of GSO’s parameters

| Parameter value | A-F1 | A-F3 | A-F7 | A-F13 | A-F22 |
|---|---|---|---|---|---|
| 0.05 | 1.470 | 0.919 | 1.600 | 0.867 | 1.631 |
| 0.10 | 1.768 | 1.812 | 1.646 | 1.073 | 1.734 |
| 0.15 | 1.963 | 1.256 | 1.805 | 1.158 | 1.253 |
| 0.20 | 1.543 | 1.741 | 1.783 | 1.578 | 1.184 |
| 0.25 | 1.146 | 1.755 | 1.145 | 1.463 | 1.106 |
| 0.30 | 1.576 | 0.940 | 1.328 | 1.817 | 1.878 |
| 0.35 | 0.793 | 1.641 | 0.918 | 2.078 | 1.181 |
| 0.40 | 0.907 | 0.991 | 1.445 | 0.990 | 1.012 |
| 0.45 | 1.165 | 1.147 | 1.335 | 2.000 | 1.936 |
| 0.50 | 1.906 | 2.028 | 1.729 | 0.784 | 1.745 |
| 0.55 | 1.833 | 0.815 | 1.759 | 1.463 | 1.863 |
| 0.60 | 0.869 | 1.853 | 1.405 | 1.900 | 1.073 |
| 0.65 | 1.584 | 1.820 | 1.342 | 1.704 | 0.937 |
| 0.70 | 1.027 | 1.554 | 1.116 | 1.429 | 1.912 |
| 0.75 | 1.106 | 1.392 | 1.533 | 0.933 | 1.123 |
| 0.80 | 2.106 | 1.519 | 1.999 | 1.163 | 1.989 |
| 0.85 | 1.364 | 0.880 | 1.960 | 1.947 | 1.364 |
| 0.90 | 0.868 | 1.326 | 1.120 | 1.807 | 0.967 |
| 0.95 | 0.983 | 1.658 | 1.214 | 1.586 | 2.081 |
| 1.00 | 1.021 | 1.708 | 1.700 | 0.895 | 2.060 |
To assess computational demand across various benchmark problems, the average execution time for each function was recorded independently. As shown in Fig. 16, the execution times remain relatively close among all five analyzed functions, with A-F1 exhibiting the fastest performance at approximately 1.35 s. Conversely, A-F22 had the highest average execution time, at nearly 1.50 s. These measurements provide insight into the relative complexity or computational cost of each function when processed under identical algorithmic settings.
Fig. 16
Average execution time recorded for each benchmark function (A-F1, A-F3, A-F7, A-F13, A-F22). Lower values indicate faster evaluation performance
To further investigate the performance variability across benchmark functions, Fig. 17 illustrates the probability density distributions of execution times for each function. This visualization provides insights into central tendencies, spread, and distribution shapes (e.g., skewness or multimodality). The vertical dashed lines mark the mean execution time per function, allowing direct comparison of their typical performance. Such analysis is crucial in understanding not only average behavior but also the stability and reliability of runtime.
Fig. 17
Kernel density estimates of execution time distributions for selected benchmark functions. Vertical lines denote the mean execution time for each function
To provide a granular view of computational variability, Fig. 18 illustrates the execution time trends across a range of normalized parameter values for each benchmark function individually. By separating the functions into distinct subplots, the visualization facilitates targeted examination of performance dynamics and reveals parameter configurations that yield minimal runtime, annotated in red for emphasis. This layout supports comparative diagnostics and highlights function-specific sensitivities.
Fig. 18
Execution time as a function of normalized parameter values for each benchmark function. Annotated points indicate the minimum execution time achieved per function
To further investigate the runtime variability across individual functions, Fig. 19 presents a boxplot summarizing the distribution of execution times for each test case. This visualization not only highlights the median (also labeled numerically) but also provides insights into the spread and presence of outliers, which may indicate potential instability or execution sensitivity for certain functions. The plot enables a comparative understanding of function behavior in terms of performance reliability.
Fig. 19
Boxplot of execution time distribution across different benchmark functions. Median values are annotated on each box for clarity
To gain a deeper understanding of how execution time varies across different parameter values and benchmark functions, Fig. 20 presents a heatmap visualization. This figure provides a comprehensive view of performance sensitivity, highlighting regions of low and high execution time intensity through a diverging color scale. The annotations within each cell further quantify performance for each parameter-function pair, enabling precise comparative evaluation.
Fig. 20
Heatmap illustrating execution times across parameter values and benchmark functions. Each cell displays the execution time (in s) with a color gradient indicating performance intensity
To further investigate the impact of parameter variation on function performance, Fig. 21 displays a heatmap representing the execution time for each benchmark function across a range of parameter values. The heatmap uses a sequential color palette to distinguish performance intensity and includes numerical annotations in each cell, facilitating detailed temporal comparisons between different parameter-function combinations.
Fig. 21
Execution time heatmap for each benchmark function (rows) across various parameter values (columns). Color intensity reflects the magnitude of execution time in s
The results in Table 6 demonstrate that GSO maintains consistent solution quality across a wide range of parameter values, particularly on functions A-F1 and A-F3, where the fitness values remain identically zero for all settings—suggesting robust convergence behavior regardless of parameter tuning. On multimodal functions such as A-F7 and hybrid functions like A-F13, minor fluctuations in fitness values occur, but these variations remain negligible (e.g., fitness values oscillating within a range of 0.0219 to 0.0244 for A-F7). Interestingly, A-F22 exhibits slightly greater sensitivity, with fitness ranging from \(-10.37\) to \(-9.81\), yet still demonstrates high robustness, with consistent proximity to the known optimal region.
Table 6
Fitness results for different values of GSO’s parameters

| Parameter value | A-F1 | A-F3 | A-F7 | A-F13 | A-F22 |
|---|---|---|---|---|---|
| 0.05 | 0 | 0 | 0.021945065 | 0.000000396 | −10.016979751 |
| 0.10 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.15 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.20 | 0 | 0 | 0.021945065 | 0.000000386 | −10.16979751 |
| 0.25 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.30 | 0 | 0 | 0.020945065 | 0.000000396 | −10.16979751 |
| 0.35 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.40 | 0 | 0 | 0.0223945065 | 0.00000296 | −10.16979751 |
| 0.45 | 0 | 0 | 0.021945065 | 0.000000396 | −10.36979751 |
| 0.50 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.55 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.60 | 0 | 0 | 0.021945065 | 0.000000396 | −9.816979751 |
| 0.65 | 0 | 0 | 0.023945065 | 0.0000004496 | −10.16979751 |
| 0.70 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.75 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.80 | 0 | 0 | 0.021945065 | 0.000000396 | −10.216979751 |
| 0.85 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.90 | 0 | 0 | 0.021945065 | 0.000000396 | −10.16979751 |
| 0.95 | 0 | 0 | 0.021945065 | 0.0000003796 | −9.86979751 |
| 1.00 | 0 | 0 | 0.0243945065 | 0.000000396 | −10.16979751 |
To capture the subtle variation in objective function response across incremental parameter changes, Fig. 22 illustrates the stepwise sensitivity plots for five benchmark functions. Each subplot quantifies the absolute change in fitness (\(\Delta\)Fitness) against a sequence of parameter values, thereby revealing the relative stability or volatility in the functions’ landscapes. Notably, A-F1 and A-F3 display negligible variations, while A-F22 exhibits pronounced sensitivity spikes, suggesting higher susceptibility to parameter tuning.
Fig. 22
Sensitivity plots of fitness values to parameter variations. A-F22 shows high sensitivity; A-F1 and A-F3 remain stable
To statistically validate whether the observed variations in convergence time across parameter settings are significant, a one-sample t-test was performed using a theoretical mean of zero as the null hypothesis. Table 7 reports the test statistics, including t-values, p-values, and confidence intervals.
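The same test is available as `scipy.stats.ttest_1samp`; using the A-F1 convergence-time column of Table 5 it reproduces the reported statistic.

```python
import numpy as np
from scipy import stats

# Convergence times for A-F1 across the 20 parameter settings of Table 5.
times_af1 = np.array([1.470, 1.768, 1.963, 1.543, 1.146, 1.576, 0.793,
                      0.907, 1.165, 1.906, 1.833, 0.869, 1.584, 1.027,
                      1.106, 2.106, 1.364, 0.868, 0.983, 1.021])

# One-sample t-test against the theoretical mean of zero (df = n - 1 = 19).
t_stat, p_val = stats.ttest_1samp(times_af1, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_val:.2e}")  # matches t = 14.54, df = 19
```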
Table 7
One-sample t-test results on convergence time for GSO parameters

| Time | A-F1 | A-F3 | A-F7 | A-F13 | A-F22 |
|---|---|---|---|---|---|
| Theoretical mean | 0 | 0 | 0 | 0 | 0 |
| Actual mean | 1.350 | 1.438 | 1.494 | 1.432 | 1.501 |
| Number of values | 20 | 20 | 20 | 20 | 20 |
| One sample t test |  |  |  |  |  |
| t, df | t = 14.54, df = 19 | t = 16.96, df = 19 | t = 21.97, df = 19 | t = 15.18, df = 19 | t = 16.22, df = 19 |
| P value (two-tailed) | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| P value summary | **** | **** | **** | **** | **** |
| Significant (\(\alpha = 0.05\))? | Yes | Yes | Yes | Yes | Yes |
| How big is the discrepancy? |  |  |  |  |  |
| Discrepancy | 1.350 | 1.438 | 1.494 | 1.432 | 1.501 |
| SD of discrepancy | 0.4152 | 0.3791 | 0.3041 | 0.4219 | 0.4140 |
| SEM of discrepancy | 0.09283 | 0.08476 | 0.06800 | 0.09434 | 0.09258 |
| 95% confidence interval | 1.156 to 1.544 | 1.260 to 1.615 | 1.352 to 1.636 | 1.234 to 1.629 | 1.308 to 1.695 |
| \(R^2\) (partial eta squared) | 0.9175 | 0.9381 | 0.9621 | 0.9238 | 0.9326 |
To assess whether fitness values significantly differed from the theoretical baseline (zero), a one-sample t-test was applied across all parameter configurations as shown in Table 8. All benchmark scenarios showed statistically significant deviations (\(p < 0.0001\)), mirroring the convergence time results. Although fitness means were numerically close to zero—particularly in functions A-F1 and A-F3—the statistical analysis confirms these minimal differences are systematic and consistent, indicating that GSO’s parameters influence both convergence duration and solution precision within narrow deviation bands.
Table 8
One-sample t-test results on fitness for GSO parameters

| Fitness | A-F1 | A-F3 | A-F7 | A-F13 | A-F22 |
|---|---|---|---|---|---|
| Theoretical mean | 0 | 0 | 0 | 0 | 0 |
| Actual mean | 1.350 | 1.438 | 1.494 | 1.432 | 1.501 |
| Number of values | 20 | 20 | 20 | 20 | 20 |
| One sample t test |  |  |  |  |  |
| t, df | t = 14.54, df = 19 | t = 16.96, df = 19 | t = 21.97, df = 19 | t = 15.18, df = 19 | t = 16.22, df = 19 |
| P value (two-tailed) | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| P value summary | **** | **** | **** | **** | **** |
| Significant (\(\alpha = 0.05\))? | Yes | Yes | Yes | Yes | Yes |
| How big is the discrepancy? |  |  |  |  |  |
| Discrepancy | 1.350 | 1.438 | 1.494 | 1.432 | 1.501 |
| SD of discrepancy | 0.4152 | 0.3791 | 0.3041 | 0.4219 | 0.4140 |
| SEM of discrepancy | 0.09283 | 0.08476 | 0.06800 | 0.09434 | 0.09258 |
| 95% confidence interval | 1.156 to 1.544 | 1.260 to 1.615 | 1.352 to 1.636 | 1.234 to 1.629 | 1.308 to 1.695 |
| \(R^2\) (partial eta squared) | 0.9175 | 0.9381 | 0.9621 | 0.9238 | 0.9326 |
To evaluate the overall variance in fitness performance across parameter settings, an analysis of variance (ANOVA) test was performed, as shown in Table 9. The results show statistically significant differences among groups, with \(F(4, 95) = 147{,}948\) and \(P < 0.0001\), indicating that GSO’s search performance is sensitive to parameter configuration changes. The between-group sum of squares (SS = 1648) dwarfs the residual variance (SS = 0.2645), reflecting strong separation between groups alongside very low within-group variation.
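The corresponding one-way ANOVA can be run with `scipy.stats.f_oneway`. The groups below are synthetic stand-ins with the same shape as the study (five functions, twenty observations each), so the printed F value will not match Table 9.

```python
import numpy as np
from scipy import stats

# One fitness sample per parameter setting for each function (cf. Table 6);
# the group means here are placeholders, not the measured values.
rng = np.random.default_rng(4)
groups = [rng.normal(loc, 0.05, 20) for loc in (0.0, 0.0, 0.022, 0.0, -10.14)]

# One-way ANOVA: DFn = k - 1 = 4, DFd = N - k = 95 for k = 5 groups, N = 100.
f_stat, p_val = stats.f_oneway(*groups)
print(f"F(4, 95) = {f_stat:.1f}, p = {p_val:.2e}")
```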
Table 9
ANOVA results on fitness across GSO configurations

| ANOVA table | SS | DF | MS | F (DFn, DFd) | P value |
|---|---|---|---|---|---|
| Treatment (between columns) | 1648 | 4 | 411.9 | F(4, 95) = 147,948 | P < 0.0001 |
| Residual (within columns) | 0.2645 | 95 | 0.002784 |  |  |
| Total | 1648 | 99 |  |  |  |
Descriptive statistics for all five benchmark cases across parameter configurations (Table 10) demonstrate narrow ranges and extremely small standard deviations. A-F1 and A-F3 both show zero standard deviation, confirming absolute consistency, while A-F7 (mean = 0.0221), A-F13 (mean = \(5.26 \times 10^{-7}\)), and A-F22 (mean = \(-10.14\), SD = 0.118) all achieve high precision near optimal values. These results illustrate GSO’s stability and accuracy across the parameter space.
Table 10
Descriptive statistics for fitness across GSO parameters

| Fitness | A-F1 | A-F3 | A-F7 | A-F13 | A-F22 |
|---|---|---|---|---|---|
| Number of values | 20 | 20 | 20 | 20 | 20 |
| Minimum | 0 | 0 | 0.02095 | 0.00000038 | −10.37 |
| 25% percentile | 0 | 0 | 0.02195 | 0.000000396 | −10.17 |
| Median | 0 | 0 | 0.02195 | 0.000000396 | −10.17 |
| 75% percentile | 0 | 0 | 0.02195 | 0.000000396 | −10.17 |
| Maximum | 0 | 0 | 0.02439 | 0.00000296 | −9.817 |
| Range | 0 | 0 | 0.003449 | 0.00000258 | 0.5528 |
| Mean | 0 | 0 | 0.02214 | 0.0000005256 | −10.14 |
| Std. deviation | 0 | 0 | 0.0007413 | 0.0000005731 | 0.118 |
| Std. error of mean | 0 | 0 | 0.0001658 | 0.0000001282 | 0.02638 |
| Sum | 0 | 0 | 0.4428 | 0.00001051 | −202.8 |
To provide a clear summary of the discrete observations and their variability, Fig. 23 shows the mean values across five categorical conditions. Each data point is plotted with its corresponding standard deviation as error bars, revealing the consistency and spread in the measured values across the series. The use of color-coded markers enhances interpretability for comparative analysis.
Fig. 23
Mean and standard deviation of observed values across five distinct conditions. Error bars denote variability, and colored markers represent each data point for visual clarity
To explore the directional trend across discrete measurement groups, Fig. 24 presents a connected mean-value profile without error bars. This visual emphasizes the relative positioning and progression of values across the five conditions. The plot highlights a marked decline at the final condition, suggesting a potential outlier or significant behavioral shift.
Fig. 24
Trend of mean values across five categorical conditions. Colored markers distinguish each group while the dashed connecting lines illustrate directional transitions
To gain a comprehensive understanding of the performance dynamics across different functions, Fig. 25 presents an integrated visualization. The subplots compare fitness scores across configurations, highlight relationships between key variables, and summarize aggregated statistical trends using a heatmap. This multifaceted layout facilitates both individual inspection of raw results and broader pattern recognition.
Fig. 25
Composite visualization of fitness data across configurations. The plots include scatter distributions, pairwise comparisons, regression trends, and a heatmap summarizing normalized fitness statistics
To further analyze the distribution characteristics of each function’s results, Fig. 26 displays individual histograms categorized by function. These histograms reveal the spread and central tendency of the data, highlighting potential skewness, variability, or multimodality within each function’s outcome range.
Fig. 26
Histogram distributions of output values for each function. Each color-coded group corresponds to the histogram bins for a specific function, illustrating the frequency and spread of values across different intervals
In conclusion, the sensitivity analysis affirms that the GSO algorithm is inherently robust to variations in its main parameter. While convergence time shows moderate fluctuations with parameter values, the overall fitness outcomes remain stable and statistically significant. This implies that GSO does not require extensive parameter tuning to perform optimally, making it both efficient and accessible for deployment in real-world scenarios where exhaustive calibration is impractical.

4.3 Scalability analysis

The dimensional scalability of an optimization algorithm is a critical factor that determines its practicality in solving real-world, large-scale engineering and scientific problems. As dimensionality increases, the difficulty of exploration and convergence grows exponentially, often leading to degraded performance in traditional algorithms. Therefore, this section evaluates the performance of the proposed Glider Snake Optimization (GSO) algorithm in comparison with several state-of-the-art methods across benchmark problems with increasing dimensional complexity. The evaluations are performed at 100, 500, and 1000 dimensions to understand the consistency, adaptability, and robustness of GSO in high-dimensional problem spaces.
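A scalability study of this form reduces to re-running the same experiment at each dimensionality and collecting the four indicators used in Tables 11 and 12. The sketch below does so with a random-search placeholder in lieu of the compared optimizers; names and settings are illustrative.

```python
import numpy as np

def random_search(fn, dim, evals=2000, bound=100.0):
    # Placeholder optimizer; any method with this interface can be swapped in.
    rng = np.random.default_rng()
    return min(fn(rng.uniform(-bound, bound, dim)) for _ in range(evals))

def run_at_dims(optimizer, fn, dims=(100, 500, 1000), runs=10):
    """Best/worst/mean/STD per dimensionality, the four indicators
    reported in Tables 11 and 12."""
    report = {}
    for d in dims:
        vals = np.array([optimizer(fn, d) for _ in range(runs)])
        report[d] = dict(best=vals.min(), worst=vals.max(),
                         mean=vals.mean(), std=vals.std())
    return report

sphere = lambda x: float(np.sum(x ** 2))  # F1
for d, summary in run_at_dims(random_search, sphere).items():
    print(d, {k: f"{v:.3e}" for k, v in summary.items()})
```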

4.3.1 Dimensionality analysis at 100 dimensions

The 100-dimensional benchmark test aims to assess the initial large-scale behavior of GSO and its comparative standing against eight prominent optimization algorithms: PSO, DE, WOA, GWO, FEP, GSA, GA, and SAO. In addition, several recently introduced high-performance optimizers—CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, and ISO—were included to provide a more comprehensive and up-to-date comparison. Thirteen diverse functions from the benchmark suite were used to cover a wide range of unimodal and multimodal problem types. Each algorithm was independently executed multiple times per function, and four key performance indicators were recorded: the best, worst, mean, and standard deviation of the final objective values.
Table 11 summarizes the obtained results. It provides an exhaustive view of each algorithm’s behavior under 100-dimensional complexity, detailing both accuracy and variance across repeated runs while enabling a rigorous comparison between classical methods and modern state-of-the-art optimizers.
Table 11
Performance results of PSO, DE, WOA, GWO, FEP, GSA, GA, SAO, CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, ISO, and GSO on 100-dimensional benchmark functions

| Func | Metric | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | Best | 3.04E+03 | 2.20E+05 | 3.45E−13 | 2.85E+04 | 8.38E−16 | 6.71E−95 | 4.89E−53 | 1.06E−03 |
|  | Worst | 1.88E+04 | 2.72E+05 | 8.81E−11 | 8.49E+04 | 1.09E−14 | 2.47E−81 | 3.73E−40 | 4.34E−01 |
|  | Mean | 9.62E+03 | 2.51E+05 | 1.72E−11 | 5.32E+04 | 4.01E−15 | 9.23E−83 | 1.36E−41 | 4.57E−02 |
|  | STD | 4.03E+03 | 1.39E+04 | 1.84E−11 | 1.36E+04 | 2.48E−15 | 4.50E−82 | 6.80E−41 | 8.28E−02 |
| F2 | Best | 5.10E−01 | 1.44E+31 | 9.44E−09 | 1.73E+02 | 4.86E−10 | 2.71E−58 | 1.60E−12 | 2.31E−01 |
|  | Worst | 2.47E+01 | 9.03E+38 | 1.26E−07 | 3.08E+02 | 2.69E−09 | 8.30E−52 | 3.38E−09 | 3.27E+00 |
|  | Mean | 6.14E+00 | 1.20E+38 | 5.34E−08 | 2.38E+02 | 1.49E−09 | 4.75E−53 | 3.43E−10 | 1.01E+00 |
|  | STD | 5.12E+00 | 2.41E+38 | 3.50E−08 | 3.92E+01 | 5.78E−10 | 1.60E−52 | 7.33E−10 | 7.01E−01 |
| F3 | Best | 1.14E+05 | 5.76E+05 | 1.14E+03 | 1.39E+05 | 4.39E+00 | 5.08E+05 | 4.91E−45 | 3.28E−01 |
|  | Worst | 3.90E+05 | 1.40E+06 | 1.45E+04 | 3.29E+05 | 6.17E+02 | 1.17E+06 | 7.49E−35 | 8.37E+05 |
|  | Mean | 2.21E+05 | 9.18E+05 | 6.05E+03 | 2.02E+05 | 1.08E+02 | 8.30E+05 | 6.32E−36 | 1.62E+05 |
|  | STD | 6.45E+04 | 1.88E+05 | 3.65E+03 | 4.78E+04 | 1.43E+02 | 1.81E+05 | 1.73E−35 | 2.32E+05 |
| F4 | Best | 7.01E+01 | 8.84E+01 | 2.64E+01 | 8.29E+01 | 1.47E−02 | 1.05E−01 | 3.35E−22 | 5.41E−03 |
|  | Worst | 8.93E+01 | 9.35E+01 | 6.33E+01 | 9.24E+01 | 8.08E−01 | 9.18E+01 | 1.09E−17 | 2.11E+00 |
|  | Mean | 8.37E+01 | 9.21E+01 | 4.39E+01 | 8.88E+01 | 1.98E−01 | 6.35E+01 | 1.33E−18 | 9.28E−02 |
|  | STD | 4.08E+00 | 1.17E+00 | 9.18E+00 | 2.18E+00 | 2.34E−01 | 2.70E+01 | 2.54E−18 | 3.82E−01 |
| F5 | Best | 9.03E+06 | 9.20E+08 | 9.35E+01 | 4.07E+07 | 9.25E+01 | 9.40E+01 | 3.41E−01 | 4.50E−02 |
|  | Worst | 1.91E+08 | 1.22E+09 | 9.50E+01 | 2.48E+08 | 9.48E+01 | 9.46E+01 | 9.53E+01 | 9.56E+01 |
|  | Mean | 9.52E+07 | 1.07E+09 | 9.46E+01 | 1.02E+08 | 9.41E+01 | 9.44E+01 | 1.67E+01 | 4.52E+00 |
|  | STD | 4.88E+07 | 7.62E+07 | 4.03E−01 | 5.14E+07 | 8.62E−01 | 1.86E−01 | 3.55E+01 | 1.72E+01 |
| F6 | Best | 1.08E+03 | 2.04E+05 | 1.11E+01 | 2.46E+04 | 6.29E+00 | 8.69E−01 | 7.07E−01 | 1.17E−03 |
|  | Worst | 2.45E+04 | 2.71E+05 | 1.53E+01 | 8.96E+04 | 9.62E+00 | 2.54E+00 | 2.18E+01 | 7.07E−02 |
|  | Mean | 9.44E+03 | 2.45E+05 | 1.35E+01 | 4.85E+04 | 7.80E+00 | 1.70E+00 | 9.46E+00 | 1.84E−02 |
|  | STD | 6.27E+03 | 1.96E+04 | 1.15E+00 | 1.73E+04 | 9.19E−01 | 4.61E−01 | 7.78E+00 | 1.87E−02 |
| F7 | Best | 3.28E+01 | 1.40E+03 | 1.56E−02 | 5.26E+01 | 1.74E−03 | 3.45E−05 | 3.21E−04 | 3.20E−03 |
|  | Worst | 2.92E+02 | 2.04E+03 | 4.66E−02 | 4.40E+02 | 7.68E−03 | 1.36E−02 | 4.68E−03 | 1.72E−01 |
|  | Mean | 1.40E+02 | 1.73E+03 | 2.84E−02 | 2.17E+02 | 4.04E−03 | 2.35E−03 | 1.79E−03 | 4.01E−02 |
|  | STD | 6.75E+01 | 1.33E+02 | 8.23E−03 | 1.06E+02 | 1.56E−03 | 3.30E−03 | 1.09E−03 | 4.45E−02 |
| F8 | Best | −7.89E+03 | −1.68E+63 | −1.45E+04 | −2.51E+04 | −1.89E+04 | −4.04E+04 | −4.04E+04 | −3.81E+04 |
|  | Worst | −6.19E+03 | −2.42E+57 | −1.20E+04 | −1.90E+04 | −6.31E+03 | −2.85E+04 | −4.01E+04 | −3.73E+03 |
|  | Mean | −6.89E+03 | −6.03E+61 | −1.31E+04 | −2.24E+04 | −1.62E+04 | −3.70E+04 | −4.03E+04 | −1.64E+04 |
|  | STD | 4.49E+02 | 3.05E+62 | 6.89E+02 | 1.76E+03 | 2.13E+03 | 3.92E+03 | 6.80E+01 | 1.04E+04 |
| F9 | Best | 3.33E+00 | 1.49E+03 | 7.08E+02 | 6.64E+02 | 9.97E−12 | 0.00E+00 | 0.00E+00 | 3.39E−02 |
|  | Worst | 4.25E+02 | 1.65E+03 | 1.24E+03 | 1.05E+03 | 1.83E+01 | 2.19E−13 | 0.00E+00 | 1.13E+02 |
|  | Mean | 2.24E+02 | 1.56E+03 | 9.48E+02 | 7.97E+02 | 5.63E+00 | 7.30E−15 | 0.00E+00 | 4.10E+01 |
|  | STD | 1.09E+02 | 3.94E+01 | 1.22E+02 | 8.61E+01 | 5.28E+00 | 4.00E−14 | 0.00E+00 | 4.90E+01 |
| F10 | Best | 4.95E+00 | 2.00E+01 | 3.23E−07 | 1.87E+01 | 2.54E−09 | 4.28E−16 | 4.28E−16 | 8.60E−03 |
|  | Worst | 1.99E+01 | 2.02E+01 | 2.95E+00 | 1.92E+01 | 1.14E−08 | 3.85E−15 | 3.39E−13 | 4.69E−01 |
|  | Mean | 1.65E+01 | 2.01E+01 | 1.47E−01 | 1.92E+01 | 6.50E−09 | 2.37E−15 | 5.02E−14 | 6.66E−02 |
|  | STD | 5.39E+00 | 5.46E−02 | 5.93E−01 | 1.20E−01 | 2.35E−09 | 1.73E−15 | 8.81E−14 | 8.11E−02 |
| F11 | Best | 1.56E+01 | 1.76E+03 | 1.02E−12 | 1.54E+02 | 3.00E−15 | 0.00E+00 | 0.00E+00 | 3.22E−05 |
|  | Worst | 2.86E+02 | 2.42E+03 | 3.27E−02 | 8.08E+02 | 1.78E−02 | 0.00E+00 | 0.00E+00 | 1.57E+01 |
|  | Mean | 1.06E+02 | 2.19E+03 | 7.24E−03 | 4.78E+02 | 1.09E−03 | 0.00E+00 | 0.00E+00 | 1.14E+00 |
|  | STD | 7.34E+01 | 1.47E+02 | 1.07E−02 | 1.25E+02 | 4.17E−03 | 0.00E+00 | 0.00E+00 | 3.41E+00 |
| F12 | Best | 6.43E+07 | 1.74E+09 | 9.46E−01 | 3.13E+07 | 9.81E−02 | 7.78E−03 | 2.00E−03 | 2.59E−06 |
|  | Worst | 5.56E+08 | 3.15E+09 | 2.82E+01 | 5.67E+08 | 3.62E−01 | 3.34E−02 | 1.27E−02 | 1.22E+01 |
|  | Mean | 2.61E+08 | 2.60E+09 | 1.08E+01 | 1.69E+08 | 2.01E−01 | 1.77E−02 | 5.77E−03 | 6.07E−01 |
|  | STD | 1.29E+08 | 3.08E+08 | 5.03E+00 | 1.55E+08 | 6.02E−02 | 6.02E−03 | 2.47E−03 | 2.31E+00 |

| Func | Metric | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO | GSO |
|---|---|---|---|---|---|---|---|---|
| F1 | Best | 2.35E+03 | 3.68E−15 | 3.05E+02 | 8.95E−18 | 7.16E−97 | 5.23E−55 | 2.83E−287 |
|  | Worst | 2.91E+03 | 9.41E−13 | 9.06E+02 | 1.16E−16 | 2.63E−83 | 3.98E−42 | 1.27E−273 |
|  | Mean | 2.68E+03 | 1.83E−13 | 5.68E+02 | 4.29E−17 | 9.85E−85 | 1.45E−43 | 4.37E−275 |
|  | STD | 1.49E+02 | 1.96E−13 | 1.46E+02 | 2.65E−17 | 4.81E−84 | 7.26E−43 | 0.00E+00 |
| F2 | Best | 4.13E+29 | 2.70E−10 | 4.95E+00 | 1.39E−11 | 7.75E−60 | 4.56E−14 | 5.89E−140 |
|  | Worst | 2.58E+37 | 3.60E−09 | 8.79E+00 | 7.68E−11 | 2.37E−53 | 9.64E−11 | 8.70E−132 |
|  | Mean | 3.44E+36 | 1.53E−09 | 6.81E+00 | 4.25E−11 | 1.36E−54 | 9.79E−12 | 3.98E−133 |
|  | STD | 6.89E+36 | 1.00E−09 | 1.12E+00 | 1.65E−11 | 4.57E−54 | 2.09E−11 | 1.62E−132 |
| F3 | Best | 4.47E−01 | 8.81E−04 | 1.08E−01 | 3.40E−06 | 3.94E−01 | 3.81E−51 | 7.92E−259 |
|  | Worst | 1.09E+00 | 1.12E−02 | 2.55E−01 | 4.79E−04 | 9.05E−01 | 5.82E−41 | 2.83E−242 |
|  | Mean | 7.12E−01 | 4.70E−03 | 1.57E−01 | 8.40E−05 | 6.44E−01 | 4.91E−42 | 9.63E−244 |
|  | STD | 1.46E−01 | 2.83E−03 | 3.71E−02 | 1.11E−04 | 1.40E−01 | 1.34E−41 | 0.00E+00 |
| F4 | Best | 6.86E−05 | 2.05E−05 | 6.43E−05 | 1.14E−08 | 8.15E−08 | 2.60E−28 | 6.44E−143 |
|  | Worst | 7.26E−05 | 4.92E−05 | 7.17E−05 | 6.27E−07 | 7.13E−05 | 8.46E−24 | 6.16E−139 |
|  | Mean | 7.15E−05 | 3.41E−05 | 6.89E−05 | 1.54E−07 | 4.93E−05 | 1.03E−24 | 7.56E−140 |
|  | STD | 9.12E−07 | 7.13E−06 | 1.69E−06 | 1.81E−07 | 2.10E−05 | 1.97E−24 | 1.50E−139 |
| F5 | Best | 7.14E+02 | 7.26E−05 | 3.16E+01 | 7.18E−05 | 7.30E−05 | 2.64E−07 | 9.33E+01 |
|  | Worst | 9.45E+02 | 7.38E−05 | 1.93E+02 | 7.36E−05 | 7.35E−05 | 7.40E−05 | 9.44E+01 |
|  | Mean | 8.30E+02 | 7.35E−05 | 7.90E+01 | 7.30E−05 | 7.33E−05 | 1.30E−05 | 9.37E+01 |
|  | STD | 5.92E+01 | 3.13E−07 | 3.99E+01 | 6.69E−07 | 1.45E−07 | 2.76E−05 | 2.64E−01 |
| F6 | Best | 1.58E−01 | 8.64E−06 | 1.91E−02 | 4.88E−06 | 6.75E−07 | 5.49E−07 | 1.02E−01 |
|  | Worst | 2.10E−01 | 1.19E−05 | 6.95E−02 | 7.47E−06 | 1.97E−06 | 1.69E−05 | 1.13E+00 |
|  | Mean | 1.90E−01 | 1.05E−05 | 3.77E−02 | 6.06E−06 | 1.32E−06 | 7.35E−06 | 4.47E−01 |
|  | STD | 1.52E−02 | 8.93E−07 | 1.34E−02 | 7.14E−07 | 3.58E−07 | 6.04E−06 | 2.81E−01 |
| F7 | Best | 1.09E−03 | 1.21E−08 | 4.08E−05 | 1.35E−09 | 2.68E−11 | 2.49E−10 | 4.96E−06 |
|  | Worst | 1.58E−03 | 3.62E−08 | 3.42E−04 | 5.96E−09 | 1.06E−08 | 3.63E−09 | 1.51E−04 |
|  | Mean | 1.34E−03 | 2.20E−08 | 1.69E−04 | 3.14E−09 | 1.82E−09 | 1.39E−09 | 6.38E−05 |
|  | STD | 1.04E−04 | 6.39E−09 | 8.19E−05 | 1.21E−09 | 2.56E−09 | 8.43E−10 | 4.74E−05 |
| F8 | Best | −1.30E+57 | −1.13E−02 | −1.95E−02 | −1.46E−02 | −3.13E−02 | −3.13E−02 | −4.04E+04 |
|  | Worst | −1.88E+51 | −9.34E−03 | −1.48E−02 | −4.90E−03 | −2.21E−02 | −3.11E−02 | −2.36E+04 |
|  | Mean | −4.68E+55 | −1.02E−02 | −1.74E−02 | −1.26E−02 | −2.87E−02 | −3.13E−02 | −3.30E+04 |
|  | STD | 2.37E+56 | 5.35E−04 | 1.36E−03 | 1.65E−03 | 3.05E−03 | 5.28E−05 | 6.13E+03 |
| F9 | Best | 1.16E−03 | 5.49E−04 | 5.15E−04 | 7.74E−18 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Worst | 1.28E−03 | 9.60E−04 | 8.12E−04 | 1.42E−05 | 1.70E−19 | 0.00E+00 | 0.00E+00 |
|  | Mean | 1.21E−03 | 7.36E−04 | 6.19E−04 | 4.37E−06 | 5.67E−21 | 0.00E+00 | 0.00E+00 |
|  | STD | 3.06E−05 | 9.48E−05 | 6.68E−05 | 4.10E−06 | 3.10E−20 | 0.00E+00 | 0.00E+00 |
| F10 | Best | 1.55E−05 | 2.51E−13 | 1.45E−05 | 1.98E−15 | 3.32E−22 | 3.32E−22 | 4.28E−16 |
|  | Worst | 1.57E−05 | 2.29E−06 | 1.49E−05 | 8.85E−15 | 2.99E−21 | 2.63E−19 | 4.28E−16 |
|  | Mean | 1.56E−05 | 1.14E−07 | 1.49E−05 | 5.04E−15 | 1.84E−21 | 3.89E−20 | 4.28E−16 |
|  | STD | 4.24E−08 | 4.60E−07 | 9.33E−08 | 1.83E−15 | 1.34E−21 | 6.84E−20 | 0.00E+00 |
| F11 | Best | 1.37E−03 | 7.96E−19 | 1.20E−04 | 2.32E−21 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Worst | 1.88E−03 | 2.54E−08 | 6.28E−04 | 1.38E−08 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Mean | 1.70E−03 | 5.62E−09 | 3.71E−04 | 8.48E−10 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | STD | 1.14E−04 | 8.34E−09 | 9.73E−05 | 3.24E−09 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F12 | Best | 1.35E+03 | 7.34E−07 | 2.43E+01 | 7.61E−08 | 6.04E−09 | 1.55E−09 | 5.93E−04 |
|  | Worst | 2.45E+03 | 2.19E−05 | 4.40E+02 | 2.81E−07 | 2.59E−08 | 9.82E−09 | 1.32E−02 |
|  | Mean | 2.02E+03 | 8.41E−06 | 1.31E+02 | 1.56E−07 | 1.37E−08 | 4.48E−09 | 4.31E−03 |
|  | STD | 2.39E+02 | 3.90E−06 | 1.20E+02 | 4.67E−08 | 4.67E−09 | 1.92E−09 | 3.36E−03 |
Figure 27 illustrates the convergence history for six representative benchmark functions: F1, F2, F3, F4, F5, and F11. These functions were selected to reflect varying degrees of complexity, modality, and landscape ruggedness.
Fig. 27
Convergence plots for GSO compared with PSO, WOA, GWO, and GA on F3, F4, F5, and F11 in 100-dimensional space
As illustrated in Fig. 28, GSO consistently exhibits a markedly tight distribution across all tested functions. GSO demonstrated significantly reduced interquartile ranges and negligible outlier presence. This indicates that GSO not only converges more accurately to global optima but does so with greater repeatability across independent trials. In contrast, classical methods such as DE and PSO show highly dispersed distributions with long whiskers and substantial outlier densities.
Fig. 28
Boxplots with swarm overlays from 30 runs on 100-D functions

4.3.2 Dimensionality analysis at 500 dimensions

In order to investigate the scalability and robustness of the proposed glider snake optimizer (GSO) across increasingly complex landscapes, a rigorous evaluation was conducted on high-dimensional benchmark problems. This subsection presents the comparative performance of GSO and several well-established metaheuristic algorithms on a 500-dimensional search space. The analysis encompasses 13 benchmark functions (F1–F13) that offer a diverse mix of unimodal, multimodal, and hybrid challenges. These problems simulate real-world optimization conditions in high-dimensional spaces, where the difficulty of exploration, convergence speed, and precision becomes increasingly prominent.
Table 12 provides a comprehensive performance report including best, worst, mean, and standard deviation values for each algorithm. In addition to classical methods such as PSO, DE, WOA, GWO, FEP, GSA, GA, and SAO, the comparison now includes several recent high-performance optimizers—CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, and ISO—thereby presenting a more rigorous and contemporary large-scale evaluation. In this context, GSO consistently demonstrates its superior precision and convergence reliability across nearly all tested functions. For instance, on the unimodal function F1, GSO yielded an impressively minimal mean fitness value of \(5.76 \times 10^{-275}\) with virtually zero variance, indicating deterministic convergence even in a 500-dimensional setting. This level of performance significantly surpasses not only classical algorithms but also advanced optimizers such as CMA-ES, LSHADE-SPACMA, and ISO, which exhibit higher mean values and greater variability. This starkly contrasts with algorithms such as PSO and DE, which, although capable of yielding convergent solutions, report mean values on the order of \(10^5\) and above, with substantial standard deviations. These results highlight the sensitivity of traditional algorithms to increased dimensionality and underscore GSO’s resilience and scalability in extremely high-dimensional search spaces.
Table 12
Performance results of PSO, DE, WOA, GWO, FEP, GSA, GA, SAO, CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, ISO, and GSO on 500-dimensional benchmark functions

| Func | Metric | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | Best | 1.22E+05 | 1.41E+06 | 1.14E−03 | 1.05E+06 | 7.12E−05 | 5.55E−92 | 2.37E−47 | 1.80E−03 |
|  | Worst | 2.84E+05 | 1.52E+06 | 2.75E−02 | 1.15E+06 | 3.86E−04 | 5.64E−81 | 2.88E−37 | 7.28E−01 |
|  | Mean | 1.99E+05 | 1.46E+06 | 6.59E−03 | 1.10E+06 | 1.80E−04 | 2.08E−82 | 1.05E−38 | 1.24E−01 |
|  | STD | 4.70E+04 | 3.50E+04 | 6.33E−03 | 2.67E+04 | 7.09E−05 | 1.03E−81 | 5.26E−38 | 1.55E−01 |
| F2 | Best | 2.16E+01 | 3.21E+241 | 1.15E−03 | 6.85E+64 | 2.32E−03 | 5.11E−57 | 3.66E−11 | 8.74E−01 |
|  | Worst | 1.73E+02 | 5.39E+264 | 1.01E−02 | 1.56E+117 | 4.10E−03 | 3.59E−50 | 3.26E−09 | 1.16E+01 |
|  | Mean | 8.76E+01 | 1.80E+263 | 4.38E−03 | 5.20E+115 | 2.91E−03 | 1.27E−51 | 4.96E−10 | 4.04E+00 |
|  | STD | 4.32E+01 | 5.90E+263 | 2.39E−03 | 2.85E+116 | 4.30E−04 | 6.54E−51 | 6.66E−10 | 2.57E+00 |
| F3 | Best | 3.74E+06 | 1.18E+07 | 7.75E+05 | 3.06E+06 | 1.51E+05 | 1.13E+07 | 5.88E−41 | 6.29E+01 |
|  | Worst | 9.51E+06 | 3.24E+07 | 1.80E+06 | 6.22E+06 | 4.16E+05 | 4.34E+07 | 1.94E−29 | 1.82E+07 |
|  | Mean | 6.18E+06 | 2.38E+07 | 1.19E+06 | 4.28E+06 | 2.34E+05 | 2.51E+07 | 1.22E−30 | 7.39E+06 |
|  | STD | 1.28E+06 | 4.95E+06 | 2.14E+05 | 9.04E+05 | 7.52E+04 | 7.92E+06 | 4.61E−30 | 7.50E+06 |
| F4 | Best | 9.44E+01 | 9.51E+01 | 9.50E+01 | 9.44E+01 | 4.45E+01 | 3.15E+00 | 4.02E−21 | 2.18E−03 |
|  | Worst | 9.58E+01 | 9.58E+01 | 9.59E+01 | 9.55E+01 | 6.62E+01 | 9.57E+01 | 4.50E−17 | 5.88E−02 |
|  | Mean | 9.54E+01 | 9.56E+01 | 9.55E+01 | 9.51E+01 | 5.63E+01 | 7.82E+01 | 3.72E−18 | 1.58E−02 |
|  | STD | 2.84E−01 | 1.61E−01 | 2.17E−01 | 3.59E−01 | 4.76E+00 | 2.30E+01 | 9.81E−18 | 1.35E−02 |
| F5 | Best | 8.00E+08 | 6.09E+09 | 1.26E+03 | 4.28E+09 | 4.78E+02 | 4.77E+02 | 5.12E−01 | 8.62E−02 |
|  | Worst | 2.70E+09 | 7.13E+09 | 7.41E+04 | 5.26E+09 | 4.80E+02 | 4.78E+02 | 4.81E+02 | 4.76E+02 |
|  | Mean | 1.79E+09 | 6.73E+09 | 1.59E+04 | 4.65E+09 | 4.79E+02 | 4.77E+02 | 3.46E+02 | 2.23E+01 |
|  | STD | 4.50E+08 | 2.06E+08 | 1.63E+04 | 2.08E+08 | 2.36E−01 | 3.58E−01 | 2.05E+02 | 8.62E+01 |
| F6 | Best | 4.74E+04 | 1.33E+06 | 8.97E+01 | 1.01E+06 | 7.91E+01 | 8.17E+00 | 4.90E+00 | 7.35E+00 |
|  | Worst | 3.07E+05 | 1.51E+06 | 9.84E+01 | 1.15E+06 | 8.66E+01 | 2.18E+01 | 1.20E+02 | 4.80E+01 |
|  | Mean | 1.68E+05 | 1.46E+06 | 9.44E+01 | 1.09E+06 | 8.33E+01 | 1.57E+01 | 8.71E+01 | 3.01E+01 |
|  | STD | 6.25E+04 | 3.76E+04 | 2.23E+00 | 3.31E+04 | 1.78E+00 | 3.78E+00 | 4.50E+01 | 9.60E+00 |
| F7 | Best | 9.44E+03 | 4.71E+04 | 3.73E−01 | 3.19E+04 | 1.51E−02 | 2.30E−05 | 4.05E−04 | 4.25E−03 |
|  | Worst | 2.11E+04 | 5.91E+04 | 2.74E+00 | 3.93E+04 | 3.95E−02 | 8.34E−03 | 4.93E−03 | 8.05E−01 |
|  | Mean | 1.51E+04 | 5.52E+04 | 1.31E+00 | 3.55E+04 | 2.71E−02 | 2.34E−03 | 2.17E−03 | 9.04E−02 |
|  | STD | 3.00E+03 | 2.45E+03 | 4.95E−01 | 2.06E+03 | 6.73E−03 | 2.35E−03 | 1.08E−03 | 1.56E−01 |
| F8 | Best | −1.78E+04 | −2.53E+61 | −3.53E+04 | −7.17E+04 | −7.76E+04 | −2.02E+05 | −2.02E+05 | −1.91E+05 |
|  | Worst | −1.37E+04 | −7.54E+55 | −2.66E+04 | −5.59E+04 | −5.05E+04 | −1.29E+05 | −2.00E+05 | −8.53E+03 |
|  | Mean | −1.54E+04 | −2.04E+60 | −3.12E+04 | −6.24E+04 | −5.93E+04 | −1.73E+05 | −2.02E+05 | −7.95E+04 |
|  | STD | 1.00E+03 | 5.09E+60 | 2.21E+03 | 4.97E+03 | 5.38E+03 | 2.71E+04 | 3.28E+02 | 5.61E+04 |
| F9 | Best | 4.23E+02 | 8.17E+03 | 4.16E+03 | 6.24E+03 | 2.50E+01 | 0.00E+00 | 0.00E+00 | 2.39E−01 |
|  | Worst | 1.81E+03 | 8.61E+03 | 6.58E+03 | 6.87E+03 | 1.14E+02 | 1.75E−12 | 0.00E+00 | 5.68E+02 |
|  | Mean | 1.13E+03 | 8.44E+03 | 5.66E+03 | 6.56E+03 | 5.56E+01 | 5.84E−14 | 0.00E+00 | 1.43E+02 |
|  | STD | 3.63E+02 | 1.19E+02 | 4.92E+02 | 1.51E+02 | 2.27E+01 | 3.20E−13 | 0.00E+00 | 2.19E+02 |
| F10 | Best | 6.76E+00 | 2.03E+01 | 2.13E−03 | 1.92E+01 | 4.01E−04 | 4.28E−16 | 4.28E−16 | 1.73E−02 |
|  | Worst | 2.01E+01 | 2.04E+01 | 8.34E−02 | 1.97E+01 | 8.08E−04 | 7.27E−15 | 9.90E−13 | 3.23E−01 |
|  | Mean | 1.83E+01 | 2.03E+01 | 7.35E−03 | 1.95E+01 | 5.90E−04 | 3.51E−15 | 6.44E−14 | 7.04E−02 |
|  | STD | 3.76E+00 | 2.24E−02 | 1.45E−02 | 1.36E−01 | 1.03E−04 | 2.08E−15 | 1.83E−13 | 7.04E−02 |
| F11 | Best | 4.18E+02 | 1.28E+04 | 1.74E−04 | 9.17E+03 | 8.26E−06 | 0.00E+00 | 0.00E+00 | 6.65E−05 |
|  | Worst | 3.62E+03 | 1.37E+04 | 1.30E−01 | 1.06E+04 | 7.22E−02 | 0.00E+00 | 0.00E+00 | 2.16E+02 |
|  | Mean | 1.61E+03 | 1.32E+04 | 1.44E−02 | 9.83E+03 | 9.42E−03 | 0.00E+00 | 0.00E+00 | 2.29E+01 |
|  | STD | 6.88E+02 | 2.30E+02 | 3.65E−02 | 2.91E+02 | 2.24E−02 | 0.00E+00 | 0.00E+00 | 5.61E+01 |
| F12 | Best | 3.97E+09 | 1.56E+10 | 9.59E+03 | 9.37E+09 | 5.97E−01 | 1.51E−02 | 8.54E−04 | 1.27E−06 |
|  | Worst | 8.75E+09 | 1.80E+10 | 1.57E+06 | 1.20E+10 | 7.55E−01 | 7.69E−02 | 9.06E−03 | 3.00E+00 |
|  | Mean | 5.73E+09 | 1.68E+10 | 4.29E+05 | 1.08E+10 | 6.70E−01 | 3.39E−02 | 4.14E−03 | 3.00E−01 |
|  | STD | 1.10E+09 | 5.63E+08 | 4.03E+05 | 6.85E+08 | 3.33E−02 | 1.53E−02 | 1.92E−03 | 9.14E−01 |
| F13 | Best | 5.99E+09 | 2.88E+10 | 1.07E+04 | 1.78E+10 | 4.42E+01 | 5.02E+00 | 3.17E−03 | 2.49E−04 |
|  | Worst | 1.32E+10 | 3.20E+10 | 7.22E+05 | 2.42E+10 | 4.86E+01 | 1.46E+01 | 4.53E−01 | 6.99E−02 |
|  | Mean | 9.10E+09 | 3.07E+10 | 1.56E+05 | 2.06E+10 | 4.62E+01 | 9.05E+00 | 2.13E−01 | 1.29E−02 |
|  | STD | 1.69E+09 | 9.31E+08 | 1.59E+05 | 1.39E+09 | 1.11E+00 | 2.86E+00 | 1.14E−01 | 1.80E−02 |

| Func | Metric | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO | GSO |
|---|---|---|---|---|---|---|---|---|
| F1 | Best | 8.14E+01 | 7.60E−07 | 1.58E−50 | 1.20E−06 | 4.75E−08 | 3.70E−95 | 2.62E−285 |
|  | Worst | 1.89E+02 | 1.83E−05 | 1.92E−40 | 4.85E−04 | 2.57E−07 | 3.76E−84 | 1.54E−273 |
|  | Mean | 1.33E+02 | 4.39E−06 | 7.00E−42 | 8.26E−05 | 1.20E−07 | 1.39E−85 | 5.76E−275 |
|  | STD | 3.135207007E+01 | 4.22E−06 | 3.51E−41 | 1.03E−04 | 4.73E−08 | 6.86E−85 | 0.00E+00 |
| F2 | Best | 4.26E−12 | 2.26E−16 | 7.24E−24 | 1.73E−13 | 4.58E−16 | 1.01E−69 | 4.08E−139 |
|  | Worst | 3.42E−11 | 2.00E−15 | 6.44E−22 | 2.29E−12 | 8.09E−16 | 7.09E−63 | 2.02E−132 |
|  | Mean | 1.73E−11 | 8.64E−16 | 9.80E−23 | 7.98E−13 | 5.75E−16 | 2.51E−64 | 1.87E−133 |
|  | STD | 0 | 4.73E−16 | 1.32E−22 | 5.07E−13 | 8.49E−17 | 1.29E−63 | 4.97E−133 |
| F3 | Best | 7.38E−07 | 1.53E−07 | 1.16E−53 | 1.24E−11 | 2.97E−08 | 2.23E−06 | 1.37E−262 |
|  | Worst | 1.88E−06 | 3.55E−07 | 3.84E−42 | 3.59E−06 | 8.21E−08 | 8.57E−06 | 1.84E−241 |
|  | Mean | 1.22E−06 | 2.36E−07 | 2.42E−43 | 1.46E−06 | 4.63E−08 | 4.95E−06 | 6.62E−243 |
|  | STD | 2.534106174E−07 | 4.22E−08 | 9.11E−43 | 1.48E−06 | 1.49E−08 | 1.56E−06 | 0.00E+00 |
| F4 | Best | 2.46E−27 | 2.47E−27 | 1.04E−49 | 5.67E−32 | 1.16E−27 | 8.18E−29 | 2.86E−144 |
|  | Worst | 2.49E−27 | 2.50E−27 | 1.17E−45 | 1.53E−30 | 1.72E−27 | 2.49E−27 | 5.83E−140 |
|  | Mean | 2.48E−27 | 2.48E−27 | 9.66E−47 | 4.11E−31 | 1.46E−27 | 2.03E−27 | 5.57E−141 |
|  | STD | 0 | 5.65E−30 | 2.55E−46 | 3.52E−31 | 1.24E−28 | 5.97E−28 | 1.21E−140 |
| F5 | Best | 7.15E+08 | 1.13E+03 | 4.57E−01 | 7.69E−02 | 4.27E+02 | 4.25E+02 | 4.76E+02 |
|  | Worst | 2.41E+09 | 6.61E+04 | 4.29E+02 | 4.25E+02 | 4.28E+02 | 4.27E+02 | 4.78E+02 |
|  | Mean | 1.60E+09 | 1.42E+04 | 3.09E+02 | 1.99E+01 | 4.28E+02 | 4.26E+02 | 4.77E+02 |
|  | STD | 4.01991747E+08 | 1.45E+04 | 1.83E+02 | 7.70E+01 | 2.10E−01 | 3.20E−01 | 4.11E−01 |
| F6 | Best | 1.88E+02 | 3.55E−01 | 1.94E−02 | 2.91E−02 | 3.13E−01 | 3.23E−02 | 6.50E−03 |
|  | Worst | 1.21E+03 | 3.89E−01 | 4.73E−01 | 1.90E−01 | 3.42E−01 | 4.01E+02 | 3.90E+02 |
|  | Mean | 6.65E+02 | 3.74E−01 | 3.45E−01 | 1.19E−01 | 3.29E−01 | 1.55E+02 | 1.31E+01 |
|  | STD | 2.470949895E+02 | 8.82E−03 | 1.78E−01 | 3.80E−02 | 7.03E−03 | 8.01E+00 | 7.12E+00 |
| F7 | Best | 3.93E+03 | 1.55E−01 | 1.69E−04 | 1.77E−03 | 6.27E−03 | 9.58E−05 | 5.81E−06 |
|  | Worst | 8.79E+03 | 1.14E+00 | 2.05E−03 | 3.35E−01 | 1.64E−02 | 3.48E−03 | 5.96E−04 |
|  | Mean | 6.28E+03 | 5.45E−01 | 9.04E−04 | 3.77E−02 | 1.13E−02 | 9.76E−04 | 1.43E−04 |
|  | STD | 1.251095974E+03 | 2.06E−01 | 4.50E−04 | 6.49E−02 | 2.80E−03 | 9.80E−04 | 1.31E−04 |
| F8 | Best | −1.19E+04 | −2.35E+04 | −1.35E+05 | −1.27E+05 | −5.17E+04 | −1.35E+05 | −2.00E+05 |
|  | Worst | −9.14E+03 | −1.78E+04 | −1.33E+05 | −5.69E+03 | −3.37E+04 | −8.63E+04 | −8.83E+04 |
|  | Mean | −1.03E+04 | −2.08E+04 | −1.34E+05 | −5.30E+04 | −3.96E+04 | −1.15E+05 | −1.48E+05 |
|  | STD | 6.669942544E+02 | 1.47E+03 | 2.18E+02 | 3.74E+04 | 3.59E+03 | 1.81E+04 | 3.24E+04 |
| F9 | Best | 3.25E+02 | 3.20E+03 | 0.00E+00 | 1.84E−01 | 1.92E+01 | 0.00E+00 | 0.00E+00 |
|  | Worst | 1.39E+03 | 5.06E+03 | 0.00E+00 | 4.37E+02 | 8.74E+01 | 1.35E−12 | 0.00E+00 |
|  | Mean | 8.70E+02 | 4.35E+03 | 0.00E+00 | 1.10E+02 | 4.28E+01 | 4.49E−14 | 0.00E+00 |
|  | STD | 2.790202311E+02 | 3.78E+02 | 0.00E+00 | 1.68E+02 | 1.74E+01 | 2.46E−13 | 0.00E+00 |
| F10 | Best | 5.20E+00 | 1.64E−03 | 3.29E−16 | 1.33E−02 | 3.08E−04 | 3.29E−15 | 4.28E−16 |
|  | Worst | 1.54E+01 | 6.41E−02 | 7.61E−13 | 2.49E−01 | 6.22E−04 | 5.60E−15 | 4.28E−16 |
|  | Mean | 1.41E+01 | 5.65E−03 | 4.96E−14 | 5.42E−02 | 4.53E−04 | 2.70E−15 | 4.28E−16 |
|  | STD | 2.896038338E+00 | 1.11E−02 | 1.41E−13 | 5.42E−02 | 7.92E−05 | 1.60E−15 | 0.00E+00 |
| F11 | Best | 6.64E−01 | 2.77E−07 | 0.00E+00 | 1.06E−07 | 1.31E−08 | 0.00E+00 | 0.00E+00 |
|  | Worst | 5.75E+00 | 2.07E−04 | 0.00E+00 | 3.43E−01 | 1.15E−04 | 0.00E+00 | 0.00E+00 |
|  | Mean | 2.55E+00 | 2.29E−05 | 0.00E+00 | 3.63E−02 | 1.50E−05 | 0.00E+00 | 0.00E+00 |
|  | STD | 1.0926583E+00 | 5.80E−05 | 0.00E+00 | 8.91E−02 | 3.56E−05 | 0.00E+00 | 0.00E+00 |
| F12 | Best | 3.06E+09 | 7.37E+03 | 6.57E−04 | 9.75E−07 | 4.59E−01 | 3.16E−02 | 2.84E−02 |
|  | Worst | 6.73E+09 | 1.21E+06 | 6.97E−03 | 2.31E+00 | 5.81E−01 | 5.91E−01 | 2.45E−01 |
|  | Mean | 4.41E+09 | 3.30E+05 | 3.18E−03 | 2.31E−01 | 5.15E−01 | 2.61E−01 | 1.05E−01 |
|  | STD | 8.441683089E+08 | 3.10E+05 | 1.47E−03 | 7.03E−01 | 2.56E−02 | 1.18E−01 | 5.91E−02 |
| F13 | Best | 4.60E+09 | 8.20E+03 | 2.44E−03 | 1.92E−04 | 3.40E+01 | 3.86E+00 | 6.74E+00 |
|  | Worst | 1.02E+10 | 5.55E+05 | 3.49E−01 | 5.38E−02 | 3.74E+01 | 1.12E+01 | 2.73E+01 |
|  | Mean | 7.00E+09 | 1.20E+05 | 1.63E−01 | 9.89E−03 | 3.55E+01 | 6.96E+00 | 1.61E+01 |
|  | STD | 1.302198173E+09 | 1.22E+05 | 8.76E−02 | 1.39E−02 | 8.54E−01 | 2.20E+00 | 4.90E+00 |
The performance of the proposed glider snake optimization (GSO) algorithm in 500-dimensional space is evaluated across a series of benchmark functions (F1–F13) and compared with well-established metaheuristics: PSO, DE, WOA, GWO, FEP, GSA, GA, and SAO. In addition, several recently introduced high-performance optimizers—CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, and ISO—were included to provide a more comprehensive and up-to-date comparison. The convergence behaviors of these algorithms are illustrated in Fig. 29, while the corresponding boxplots of their solution distributions provide further insights into robustness and variance.
Fig. 29
Convergence curves of GSO and competitors for F1–F3 functions in 500D

4.3.3 Dimensionality analysis at 1000 dimensions

To further examine the scalability of the proposed GSO, an additional experiment was carried out on 1000-dimensional variants of the benchmark functions. The same set of comparison methods was employed as in the 100- and 500-dimensional studies, including classical algorithms (PSO, DE, WOA, GWO, FEP, GSA, GA, SAO) and recently developed high-performance optimizers (CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, ISO). Thirteen functions (F1–F13) were evaluated, covering unimodal, multimodal, and composite landscapes to emulate highly challenging large-scale optimization scenarios. Table 13 reports the best, worst, mean, and standard deviation of the final objective values obtained by each algorithm.
Overall, the results confirm that GSO maintains strong scalability as dimensionality grows to 1000. On unimodal functions such as F1–F4, GSO achieves extremely small mean fitness values (e.g., \(9.18 \times 10^{-277}\) on F1 with essentially zero variance), while classical methods like PSO and DE remain at error levels on the order of \(10^{5}\)–\(10^{7}\), and even advanced optimizers such as CMA-ES and LSHADE-SPACMA exhibit noticeably higher objective values. On more complex functions, including F5, F6, F8, F12, and F13, GSO remains competitive or superior in terms of mean performance and stability, frequently ranking among the top-performing algorithms despite the drastic increase in dimensionality. These findings indicate that the dual-guidance search mechanism and diversity-preservation strategies in GSO are effective not only in moderate dimensions but also in extremely high-dimensional spaces, where many traditional and even hybrid algorithms degrade significantly in accuracy and robustness.
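The Best, Worst, Mean, and STD entries reported in these tables are simple aggregates over independent runs. The following minimal sketch illustrates that bookkeeping on the sphere function (F1); the stand-in random-search optimizer, run count, and evaluation budget are illustrative assumptions, since GSO's implementation is not reproduced here.

```python
# Sketch of how per-algorithm Best/Worst/Mean/STD table entries are computed.
# `random_search` is a placeholder; any optimizer returning the best objective
# value of a single independent run fits the same interface.
import numpy as np

def sphere(x):
    """F1 (sphere), the simplest unimodal benchmark."""
    return float(np.sum(x ** 2))

def random_search(f, dim, bounds, evals=2_000, seed=None):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    return min(f(rng.uniform(lo, hi, dim)) for _ in range(evals))

def summarize(f, dim, bounds, runs=10):
    finals = np.array([random_search(f, dim, bounds, seed=r) for r in range(runs)])
    return finals.min(), finals.max(), finals.mean(), finals.std()

best, worst, mean, std = summarize(sphere, dim=1000, bounds=(-100.0, 100.0))
print(f"Best={best:.2E}  Worst={worst:.2E}  Mean={mean:.2E}  STD={std:.2E}")
```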
Table 13
Performance results of PSO, DE, WOA, GWO, FEP, GSA, GA, SAO, CMA-ES, LSHADE-SPACMA, PIMO, RBMO, OOA, ISO, and GSO on 1000-dimensional benchmark functions
| Func | Metric | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO | GSO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Best | 1.48E+05 | 2.91E+06 | 2.89E−01 | 2.52E+06 | 2.99E−02 | 5.98E−94 | 4.21E−47 | 7.36E−03 | 1.48E+05 | 6.42E−06 | 6.65E−07 | 1.64E−07 | 9.34E−52 | 1.33E−98 | 2.16E−288 |
| F1 | Worst | 7.13E+05 | 3.08E+06 | 2.78E+00 | 2.67E+06 | 8.15E−02 | 3.75E−79 | 9.92E−37 | 1.65E+00 | 7.13E+05 | 6.18E−05 | 1.81E−06 | 3.66E−05 | 2.20E−41 | 8.33E−84 | 2.41E−275 |
| F1 | Mean | 4.88E+05 | 3.01E+06 | 1.19E+00 | 2.60E+06 | 4.92E−02 | 2.57E−80 | 4.11E−38 | 3.12E−01 | 4.88E+05 | 2.64E−05 | 1.09E−06 | 6.94E−06 | 9.14E−43 | 5.72E−85 | 9.18E−277 |
| F1 | STD | 1.52E+05 | 4.22E+04 | 6.09E−01 | 3.94E+04 | 1.36E−02 | 8.77E−80 | 1.83E−37 | 3.48E−01 | 1.52E+05 | 1.35E−05 | 3.02E−07 | 7.73E−06 | 4.07E−42 | 1.95E−84 | 0.00E+00 |
| F2 | Best | 6.31E+04 | 6.31E+04 | 5.32E−03 | 6.31E+04 | 9.29E−02 | 2.05E−58 | 1.49E−11 | 1.50E+00 | 1.20E−18 | 1.01E−25 | 1.77E−24 | 2.85E−23 | 2.84E−34 | 3.91E−81 | 5.05E−151 |
| F2 | Worst | 6.31E+04 | 6.31E+04 | 3.74E−02 | 6.31E+04 | 7.42E−01 | 6.49E−49 | 8.79E−09 | 1.74E+01 | 1.20E−18 | 7.12E−25 | 1.41E−23 | 3.31E−22 | 1.67E−31 | 1.24E−71 | 2.43E−147 |
| F2 | Mean | 6.31E+04 | 6.31E+04 | 1.79E−02 | 6.31E+04 | 2.82E−01 | 2.19E−50 | 7.02E−10 | 8.17E+00 | 1.20E−18 | 3.40E−25 | 5.36E−24 | 1.55E−22 | 1.34E−32 | 4.17E−73 | 1.48E−148 |
| F2 | STD | 0.00E+00 | 0.00E+00 | 8.43E−03 | 0.00E+00 | 1.66E−01 | 1.18E−49 | 1.57E−09 | 4.50E+00 | 0.00E+00 | 1.61E−25 | 3.17E−24 | 8.56E−23 | 2.99E−32 | 2.25E−72 | 4.47E−148 |
| F3 | Best | 1.56E+07 | 5.59E+07 | 3.87E+06 | 1.20E+07 | 7.61E+05 | 5.48E+07 | 5.82E−43 | 2.35E+03 | 1.98E−34 | 4.91E−35 | 9.66E−36 | 2.98E−38 | 7.38E−84 | 6.96E−34 | 1.57E−257 |
| F3 | Worst | 3.00E+07 | 1.50E+08 | 6.83E+06 | 2.42E+07 | 1.62E+06 | 1.69E+08 | 1.76E−26 | 9.23E+07 | 3.80E−34 | 8.66E−35 | 2.05E−35 | 1.17E−33 | 2.23E−67 | 2.15E−33 | 2.38E−240 |
| F3 | Mean | 2.41E+07 | 1.04E+08 | 5.15E+06 | 1.62E+07 | 1.11E+06 | 1.07E+08 | 7.61E−28 | 3.83E+07 | 6.14E−38 | 6.37E−72 | 3.34E−38 | 5.85E−40 | 2.48E−39 | 1.09E−38 | 1.35E−241 |
| F3 | STD | 3.56E+06 | 2.24E+07 | 7.44E+05 | 2.86E+06 | 2.40E+05 | 3.30E+07 | 3.29E−27 | 3.39E+07 | 0.00E+00 | 2.11E−88 | 8.52E−43 | 2.76E−40 | 1.40E−39 | 5.65E−39 | 0.00E+00 |
| F4 | Best | 9.56E+01 | 9.53E+01 | 9.52E+01 | 9.54E+01 | 6.88E+01 | 6.17E+00 | 1.76E−21 | 3.37E−03 | 1.82E−21 | 1.81E−21 | 1.31E−21 | 6.42E−26 | 3.35E−44 | 1.17E−22 | 1.45E−144 |
| F4 | Worst | 9.61E+01 | 9.61E+01 | 9.61E+01 | 9.61E+01 | 8.28E+01 | 9.56E+01 | 2.38E−17 | 5.72E−02 | 1.83E−21 | 1.83E−21 | 1.58E−21 | 1.09E−24 | 4.52E−40 | 1.82E−21 | 2.44E−139 |
| F4 | Mean | 9.59E+01 | 9.59E+01 | 9.59E+01 | 9.58E+01 | 7.52E+01 | 7.94E+01 | 3.14E−18 | 1.91E−02 | 1.83E−21 | 1.83E−21 | 1.43E−21 | 3.63E−25 | 5.97E−41 | 1.51E−21 | 1.54E−140 |
| F4 | STD | 1.30E−01 | 1.63E−01 | 1.74E−01 | 1.32E−01 | 3.45E+00 | 2.00E+01 | 6.90E−18 | 1.22E−02 | 0.00E+00 | 3.31E−24 | 6.56E−23 | 2.33E−25 | 1.31E−40 | 3.82E−22 | 4.67E−140 |
| F5 | Best | 2.82E+09 | 1.38E+10 | 1.36E+06 | 1.11E+10 | 9.65E+02 | 9.54E+02 | 2.65E+00 | 1.18E+00 | 2.12E+09 | 1.02E+06 | 7.26E+02 | 8.88E−01 | 1.99E+00 | 7.18E+02 | 9.54E+02 |
| F5 | Worst | 5.86E+09 | 1.45E+10 | 2.44E+07 | 1.25E+10 | 9.80E+02 | 9.57E+02 | 9.62E+02 | 9.53E+02 | 4.41E+09 | 1.84E+07 | 7.37E+02 | 7.17E+02 | 7.24E+02 | 7.19E+02 | 9.59E+02 |
| F5 | Mean | 4.22E+09 | 1.42E+10 | 8.79E+06 | 1.19E+10 | 9.70E+02 | 9.56E+02 | 8.48E+02 | 4.64E+01 | 3.17E+09 | 6.61E+06 | 7.29E+02 | 3.49E+01 | 6.38E+02 | 7.19E+02 | 9.56E+02 |
| F5 | STD | 8.49E+08 | 1.88E+08 | 5.72E+06 | 3.25E+08 | 3.82E+00 | 5.78E−01 | 2.91E+02 | 1.72E+02 | 6.38E+08 | 4.30E+06 | 2.87E+00 | 1.29E+02 | 2.19E+02 | 4.35E−01 | 1.27E+00 |
| F6 | Best | 1.82E+05 | 2.95E+06 | 2.06E+02 | 2.47E+06 | 1.81E+02 | 2.13E+01 | 1.18E+02 | 2.40E+01 | 1.33E+03 | 1.51E+00 | 1.32E+00 | 1.75E−01 | 8.59E−01 | 1.56E−01 | 3.93E−04 |
| F6 | Worst | 7.89E+05 | 3.08E+06 | 2.22E+02 | 2.70E+06 | 1.92E+02 | 5.12E+01 | 2.40E+02 | 1.41E+02 | 5.76E+03 | 1.62E+00 | 1.40E+00 | 1.03E+00 | 1.75E+00 | 3.74E−01 | 1.56E+00 |
| F6 | Mean | 4.72E+05 | 3.01E+06 | 2.16E+02 | 2.61E+06 | 1.86E+02 | 3.43E+01 | 2.25E+02 | 8.97E+01 | 3.45E+03 | 1.58E+00 | 1.36E+00 | 6.55E−01 | 1.64E+00 | 2.51E−01 | 2.79E−01 |
| F6 | STD | 1.43E+05 | 3.64E+04 | 3.17E+00 | 4.99E+04 | 2.90E+00 | 8.53E+00 | 2.77E+01 | 2.74E+01 | 1.05E+03 | 2.32E−02 | 2.12E−02 | 2.00E−01 | 2.03E−01 | 6.23E−02 | 4.00E−01 |
| F7 | Best | 4.41E+04 | 2.21E+05 | 7.86E+00 | 1.73E+05 | 5.32E−02 | 1.71E−05 | 3.32E−04 | 1.33E−02 | 3.31E+04 | 5.91E+00 | 4.00E−02 | 9.97E−03 | 2.50E−04 | 1.28E−05 | 1.44E−05 |
| F7 | Worst | 8.82E+04 | 2.39E+05 | 4.70E+02 | 1.97E+05 | 1.11E−01 | 9.82E−03 | 5.38E−03 | 5.49E−01 | 6.63E+04 | 3.53E+02 | 8.33E−02 | 4.13E−01 | 4.05E−03 | 7.38E−03 | 3.55E−04 |
| F7 | Mean | 6.37E+04 | 2.31E+05 | 9.63E+01 | 1.87E+05 | 8.10E−02 | 1.77E−03 | 2.67E−03 | 1.17E−01 | 4.79E+04 | 7.24E+01 | 6.09E−02 | 8.81E−02 | 2.01E−03 | 1.33E−03 | 1.14E−04 |
| F7 | STD | 1.22E+04 | 5.50E+03 | 8.24E+01 | 4.90E+03 | 1.51E−02 | 2.08E−03 | 1.37E−03 | 1.26E−01 | 9.21E+03 | 6.20E+01 | 1.14E−02 | 9.45E−02 | 1.03E−03 | 1.56E−03 | 8.31E−05 |
| F8 | Best | −2.58E+04 | −7.62E+62 | −5.33E+04 | −1.08E+05 | −1.07E+05 | −4.04E+05 | −4.04E+05 | −3.79E+05 | −1.94E+04 | −4.01E+04 | −8.04E+04 | −2.85E+05 | −3.04E+05 | −3.04E+05 | −4.00E+05 |
| F8 | Worst | −1.87E+04 | −3.27E+55 | −3.76E+04 | −8.13E+04 | −1.98E+04 | −2.58E+05 | −3.99E+05 | −1.48E+04 | −1.40E+04 | −2.83E+04 | −1.49E+04 | −1.11E+04 | −3.00E+05 | −1.94E+05 | −1.31E+05 |
| F8 | Mean | −2.16E+04 | −3.08E+61 | −4.43E+04 | −9.06E+04 | −9.23E+04 | −3.53E+05 | −4.03E+05 | −1.83E+05 | −1.62E+04 | −3.33E+04 | −6.94E+04 | −1.38E+05 | −3.03E+05 | −2.66E+05 | −2.60E+05 |
| F8 | STD | 1.53E+03 | 1.40E+62 | 3.65E+03 | 6.19E+03 | 1.48E+04 | 5.59E+04 | 8.92E+02 | 1.16E+05 | 1.15E+03 | 2.74E+03 | 1.11E+04 | 8.69E+04 | 6.71E+02 | 4.20E+04 | 6.31E+04 |
| F9 | Best | 7.87E+02 | 1.69E+04 | 6.45E+03 | 1.41E+04 | 9.63E+01 | 0.00E+00 | 0.00E+00 | 1.09E+00 | 5.92E+02 | 4.85E+03 | 7.24E+01 | 8.21E−01 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F9 | Worst | 3.16E+03 | 1.74E+04 | 1.23E+04 | 1.51E+04 | 2.23E+02 | 0.00E+00 | 0.00E+00 | 3.85E+03 | 2.38E+03 | 9.21E+03 | 1.68E+02 | 2.90E+03 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F9 | Mean | 1.95E+03 | 1.72E+04 | 1.02E+04 | 1.46E+04 | 1.54E+02 | 0.00E+00 | 0.00E+00 | 2.51E+02 | 1.47E+03 | 7.65E+03 | 1.16E+02 | 1.89E+02 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F9 | STD | 7.09E+02 | 1.32E+02 | 1.65E+03 | 2.30E+02 | 3.30E+01 | 0.00E+00 | 0.00E+00 | 7.40E+02 | 5.33E+02 | 1.24E+03 | 2.48E+01 | 5.56E+02 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F10 | Best | 9.07E+00 | 2.03E+01 | 1.96E−02 | 1.93E+01 | 5.98E−03 | 4.28E−16 | 4.28E−16 | 7.74E−03 | 6.82E+00 | 1.47E−02 | 4.49E−03 | 5.82E−03 | 4.28E−16 | 3.22E−16 | 4.28E−16 |
| F10 | Worst | 2.01E+01 | 2.04E+01 | 9.80E−02 | 1.98E+01 | 9.81E−03 | 7.27E−15 | 7.16E−13 | 1.81E−01 | 1.51E+01 | 7.37E−02 | 7.37E−03 | 1.36E−01 | 4.28E−16 | 5.47E−15 | 4.28E−16 |
| F10 | Mean | 1.88E+01 | 2.04E+01 | 4.55E−02 | 1.96E+01 | 7.35E−03 | 3.85E−15 | 5.37E−14 | 4.83E−02 | 1.41E+01 | 3.42E−02 | 5.52E−03 | 3.63E−02 | 4.28E−16 | 2.90E−15 | 4.28E−16 |
| F10 | STD | 3.24E+00 | 1.70E−02 | 1.76E−02 | 1.74E−01 | 9.95E−04 | 2.70E−15 | 1.30E−13 | 4.18E−02 | 2.44E+00 | 1.32E−02 | 7.48E−04 | 3.15E−02 | 9.79E−14 | 2.03E−15 | 0.00E+00 |
| F11 | Best | 1.13E+03 | 2.62E+04 | 1.68E−02 | 2.25E+04 | 2.06E−03 | 0.00E+00 | 0.00E+00 | 2.23E−05 | 1.13E+03 | 1.68E−02 | 2.06E−03 | 2.23E−05 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F11 | Worst | 6.22E+03 | 2.78E+04 | 4.69E−01 | 2.41E+04 | 1.58E−01 | 0.00E+00 | 0.00E+00 | 1.05E+02 | 6.22E+03 | 4.69E−01 | 1.58E−01 | 1.05E+02 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F11 | Mean | 3.74E+03 | 2.70E+04 | 1.13E−01 | 2.34E+04 | 1.70E−02 | 0.00E+00 | 0.00E+00 | 6.78E+00 | 3.74E+03 | 1.13E−01 | 1.70E−02 | 6.78E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F11 | STD | 1.11E+03 | 4.40E+02 | 1.22E−01 | 4.54E+02 | 4.25E−02 | 0.00E+00 | 0.00E+00 | 2.10E+01 | 1.11E+03 | 1.22E−01 | 4.25E−02 | 2.10E+01 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F12 | Best | 7.36E+09 | 3.32E+10 | 1.11E+07 | 2.65E+10 | 7.78E−01 | 1.62E−02 | 8.80E−04 | 1.29E−06 | 7.36E+09 | 1.01E+07 | 7.07E−01 | 1.18E−06 | 8.00E−04 | 1.47E−02 | 9.96E−02 |
| F12 | Worst | 1.63E+10 | 3.71E+10 | 6.50E+08 | 2.98E+10 | 1.36E+00 | 8.61E−02 | 7.29E−03 | 1.21E+01 | 1.63E+10 | 5.91E+08 | 1.24E+00 | 1.10E+01 | 6.63E−03 | 7.82E−02 | 5.33E−01 |
| F12 | Mean | 1.29E+10 | 3.54E+10 | 1.79E+08 | 2.85E+10 | 9.11E−01 | 3.67E−02 | 4.10E−03 | 1.10E+00 | 1.29E+10 | 1.63E+08 | 8.28E−01 | 1.00E+00 | 3.73E−03 | 3.33E−02 | 2.75E−01 |
| F12 | STD | 2.22E+09 | 8.59E+08 | 1.37E+08 | 8.37E+08 | 1.17E−01 | 1.57E−02 | 1.68E−03 | 2.44E+00 | 2.22E+09 | 1.24E+08 | 1.06E−01 | 2.22E+00 | 1.53E−03 | 1.43E−02 | 1.23E−01 |
| F13 | Best | 1.36E+10 | 5.78E+10 | 1.73E+07 | 4.90E+10 | 1.01E+02 | 7.77E+00 | 1.50E−03 | 3.23E−04 | 1.36E+10 | 1.57E+07 | 9.20E+01 | 2.93E−04 | 1.36E−03 | 7.06E+00 | 3.20E+01 |
| F13 | Worst | 2.88E+10 | 6.64E+10 | 3.59E+08 | 5.65E+10 | 1.18E+02 | 3.07E+01 | 6.02E−01 | 4.96E−01 | 2.88E+10 | 3.27E+08 | 1.07E+02 | 4.51E−01 | 5.47E−01 | 2.79E+01 | 9.55E+01 |
| F13 | Mean | 2.10E+10 | 6.32E+10 | 1.19E+08 | 5.25E+10 | 1.06E+02 | 1.77E+01 | 2.47E−01 | 4.57E−02 | 2.10E+10 | 1.08E+08 | 9.61E+01 | 4.15E−02 | 2.24E−01 | 1.61E+01 | 7.17E+01 |
| F13 | STD | 3.59E+09 | 1.90E+09 | 8.05E+07 | 1.85E+09 | 3.18E+00 | 5.06E+00 | 1.51E−01 | 1.02E−01 | 3.59E+09 | 7.32E+07 | 2.89E+00 | 9.31E−02 | 1.37E−01 | 4.60E+00 | 2.29E+01 |
The convergence behaviors of GSO and the compared baseline algorithms on the 1000-dimensional benchmark functions F1, F2, and F3 are illustrated in Fig. 30. As observed, GSO demonstrates a markedly faster convergence rate during the early and intermediate stages of the optimization process, indicating strong global exploration capability even in extremely high-dimensional search spaces. Moreover, GSO consistently achieves significantly lower objective values at the final iterations compared with competing algorithms, reflecting superior exploitation accuracy and solution stability. In contrast, several baseline methods exhibit slow convergence, early stagnation, or pronounced oscillatory behavior, particularly on the complex multimodal function F3. These trends confirm that the dual-guidance update mechanism and gliding-inspired displacement strategy of GSO enable an effective and smooth transition from exploration to exploitation, allowing the algorithm to maintain robust convergence performance under severe dimensionality growth.
Fig. 30
Convergence curves of GSO and baseline algorithms on benchmark functions F1, F2, and F3 in 1000 dimensions. GSO exhibits rapid convergence and superior final accuracy

4.3.4 Statistical analysis

To confirm whether these performance differences are statistically significant, an Analysis of Variance (ANOVA) test was conducted across all benchmark problems. As shown in Table 14, all p-values were less than 0.0001, strongly indicating significant differences among algorithmic means across all tested functions.
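For reference, such significance values can be computed directly from the per-run final objective values. The SciPy sketch below mirrors the ANOVA of Table 14 and the pairwise tests of Table 15; the run data are synthetic placeholders, and the Welch variant of the t-test is an assumption, as the paper does not state which variant was used.

```python
# Hedged sketch of the significance testing behind Tables 14-15, assuming
# `results[name]` holds the final objective values of 30 independent runs of
# each algorithm on one benchmark function (synthetic data stands in here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
results = {
    "GSO": rng.normal(0.0, 1e-3, 30),  # placeholder run outcomes
    "PSO": rng.normal(5.0, 1.0, 30),
    "DE": rng.normal(7.0, 1.5, 30),
}

# One-way ANOVA across all algorithms (Table 14 analogue).
_, p_anova = stats.f_oneway(*results.values())
print(f"ANOVA p-value: {p_anova:.3g}")

# Pairwise t-tests of GSO against each competitor (Table 15 analogue).
# Welch's correction (equal_var=False) is an assumption, not the paper's stated choice.
for name, runs in results.items():
    if name != "GSO":
        _, p = stats.ttest_ind(results["GSO"], runs, equal_var=False)
        print(f"GSO vs {name}: p = {p:.3g}")
```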
Table 14
One-way ANOVA p-values for performance comparison on 100-dimensional benchmarks
| Function | P value |
|---|---|
| F1 | < 0.0001 |
| F2 | < 0.0001 |
| F3 | < 0.0001 |
| F4 | < 0.0001 |
| F5 | < 0.0001 |
| F6 | < 0.0001 |
| F7 | < 0.0001 |
| F8 | < 0.0001 |
| F9 | < 0.0001 |
| F10 | < 0.0001 |
| F11 | < 0.0001 |
| F12 | < 0.0001 |
| F13 | < 0.0001 |
In support of the ANOVA results, Table 15 provides a detailed matrix of pairwise t-test p-values comparing GSO against each competitor. Across the 13 functions, the p-values remain below the 0.0001 threshold in all but two comparisons (PIMO on F5 and ISO on F12), confirming that the observed performance differences are not due to random chance but rather reflect systematic advantages of the GSO algorithm.
Table 15
Pairwise t-test P-values comparing GSO with other algorithms across 100-dimensional benchmarks
| Function | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F2 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F3 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F4 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F5 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | 0.5342 | < 0.0001 | < 0.0001 | < 0.0001 |
| F6 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F7 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F8 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F9 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F10 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F11 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F12 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | 0.06673 |
| F13 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
In addition to central tendency and dispersion metrics, the statistical significance of performance disparities on the 500-dimensional problems was validated using both ANOVA and pairwise t-tests. As summarized in Table 16, ANOVA results across all thirteen functions reported p-values consistently less than 0.0001, confirming that the observed differences among algorithmic outcomes are not due to random variation. This reinforces the empirical superiority of GSO in high-dimensional contexts.
Complementarily, Table 17 presents the matrix of pairwise t-test p-values comparing GSO with each competing algorithm. With two isolated exceptions (OOA on F5 and LSHADE-SPACMA on F8), the p-values fall well below the 0.0001 threshold, suggesting that GSO's performance advantage is not only practical but also statistically significant.
Table 16
ANOVA-based significance values for all algorithms on the 500-dimensional problems, with all P-values reported as < 0.0001
| Function | P value |
|---|---|
| F1 | < 0.0001 |
| F2 | < 0.0001 |
| F3 | < 0.0001 |
| F4 | < 0.0001 |
| F5 | < 0.0001 |
| F6 | < 0.0001 |
| F7 | < 0.0001 |
| F8 | < 0.0001 |
| F9 | < 0.0001 |
| F10 | < 0.0001 |
| F11 | < 0.0001 |
| F12 | < 0.0001 |
| F13 | < 0.0001 |
Table 17
Matrix of pairwise t-test results showing the statistical significance of performance differences between GSO and other metaheuristic algorithms on 500-dimensional benchmarks
| Function | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F2 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F3 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F4 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F5 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | 0.57321 | < 0.0001 |
| F6 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F7 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F8 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | 0.06339 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F9 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F10 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F11 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F12 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F13 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
For the 1000-dimensional experiments, statistical significance was evaluated using both ANOVA and pairwise one-sample t-tests. As summarized in Table 18, the ANOVA results yielded p-values below 0.0001 for all thirteen benchmark functions, confirming that the differences observed among the competing algorithms are highly significant and not attributable to random variation. This strongly reinforces GSO’s robustness in extremely high-dimensional settings.
Table 19 reports the pairwise t-test p-values between GSO and the remaining metaheuristics. Across nearly all functions and comparisons, the p-values fall well below the conventional threshold of 0.05, demonstrating overwhelming statistical evidence in favor of GSO's superior performance. Only two comparisons (ISO on F6 and OOA on F10) yielded non-significant values (i.e., \(p > 0.05\)), indicating that, in these cases, the performance differences were not statistically significant. Nevertheless, the overall pattern confirms the strong and consistent advantage of GSO in the 1000-dimensional problem space.
Table 18
ANOVA-based significance values for all algorithms on the 1000-dimensional benchmark problems
| Function | P value |
|---|---|
| F1 | < 0.0001 |
| F2 | < 0.0001 |
| F3 | < 0.0001 |
| F4 | < 0.0001 |
| F5 | < 0.0001 |
| F6 | < 0.0001 |
| F7 | < 0.0001 |
| F8 | < 0.0001 |
| F9 | < 0.0001 |
| F10 | < 0.0001 |
| F11 | < 0.0001 |
| F12 | < 0.0001 |
| F13 | < 0.0001 |
Table 19
Pairwise t-test P-values comparing GSO with other algorithms on 1000-dimensional problems
| Function | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F2 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F3 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F4 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F5 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F6 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | 0.08312 |
| F7 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F8 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F9 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F10 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | 0.07663 | < 0.0001 |
| F11 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F12 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |
| F13 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 | < 0.0001 |

4.4 CEC 2019 benchmarks

To further evaluate the generalizability, scalability, and comparative competitiveness of the proposed Glider Snake Optimization (GSO) algorithm, an extended suite of tests was conducted on the CEC 2019 benchmark set. This well-established benchmark suite includes a variety of real-parameter single-objective optimization problems designed to challenge metaheuristic algorithms across multiple structural complexities, including unimodal, multimodal, hybrid, and composition functions. Specifically, the ten functions labeled C1 through C10 were selected, each representing unique characteristics such as shifting global optima, deceptive basins, ill-conditioned landscapes, and non-separability. These functions are widely adopted for their ability to mimic real-world constraints and nonlinear dynamics, thus providing an ideal testbed for algorithmic stress testing and comparative benchmarking.

4.4.1 Performance evaluation on C1–C10

Table 20 compares the performance of nine algorithms—PSO, DE, WOA, GWO, FEP, GSA, GA, SAO, and the proposed GSO—using best, worst, mean, and standard deviation metrics from multiple runs. The results show that GSO consistently achieves near-optimal or optimal outcomes across most functions, with notably low variance, demonstrating strong accuracy and stability. In challenging cases such as C1, C3, and C5, GSO attains the theoretical minima with minimal deviation, outperforming peers that exhibit higher instability or convergence errors. Even in complex, multimodal, or plateau-rich landscapes (e.g., C4 and C7), GSO maintains robust convergence behavior, highlighting its superior balance between exploration and exploitation compared to other algorithms.
Table 20
Statistical results for C1–C10 over all algorithms
| No. | Metric | PSO | DE | WOA | GWO | FEP | GSA | GA | SAO |
|---|---|---|---|---|---|---|---|---|---|
| C1 | Best | 6.58E+02 | 7.85E+05 | 9.63E−01 | 2.46E+04 | 9.63E−01 | 3.74E+03 | 9.63E−01 | 7.04E+04 |
| C1 | Worst | 1.72E+07 | 1.86E+07 | 6.73E+05 | 9.32E+07 | 2.66E+05 | 7.36E+07 | 9.63E−01 | 1.15E+12 |
| C1 | Mean | 3.27E+06 | 6.51E+06 | 5.56E+04 | 1.05E+07 | 3.47E+04 | 1.63E+07 | 9.63E−01 | 3.00E+11 |
| C1 | STD | 4.01E+06 | 4.40E+06 | 1.64E+05 | 2.30E+07 | 7.52E+04 | 1.78E+07 | 0.00E+00 | 2.92E+11 |
| C2 | Best | 1.19E+03 | 3.66E+03 | 1.33E+02 | 2.28E+02 | 7.70E+00 | 1.88E+03 | 4.15E+00 | 1.70E+01 |
| C2 | Worst | 9.42E+03 | 1.13E+04 | 1.97E+03 | 7.07E+03 | 1.07E+03 | 1.52E+04 | 4.82E+00 | 4.74E+03 |
| C2 | Mean | 4.40E+03 | 8.17E+03 | 8.62E+02 | 1.22E+03 | 4.32E+02 | 7.47E+03 | 4.73E+00 | 1.68E+03 |
| C2 | STD | 1.83E+03 | 1.72E+03 | 4.65E+02 | 1.92E+03 | 2.60E+02 | 3.43E+03 | 2.40E−01 | 1.48E+03 |
| C3 | Best | 6.82E+00 | 7.47E+00 | 2.23E+00 | 1.36E+00 | 1.36E+00 | 1.43E+00 | 5.51E+00 | 1.22E+01 |
| C3 | Worst | 1.07E+01 | 1.10E+01 | 1.13E+01 | 1.03E+01 | 1.03E+01 | 9.36E+00 | 8.26E+00 | 1.22E+01 |
| C3 | Mean | 9.28E+00 | 1.02E+01 | 7.97E+00 | 7.19E+00 | 3.29E+00 | 5.08E+00 | 7.28E+00 | 1.22E+01 |
| C3 | STD | 1.14E+00 | 6.79E−01 | 2.69E+00 | 2.22E+00 | 2.54E+00 | 2.17E+00 | 5.35E−01 | 9.25E−04 |
| C4 | Best | 3.44E+01 | 1.52E+01 | 1.97E+01 | 1.15E+01 | 3.98E+00 | 1.33E+01 | 2.46E+01 | 7.51E+03 |
| C4 | Worst | 6.70E+01 | 4.23E+01 | 8.80E+01 | 4.84E+01 | 4.41E+01 | 8.73E+01 | 4.36E+01 | 2.93E+04 |
| C4 | Mean | 5.09E+01 | 2.99E+01 | 6.38E+01 | 2.58E+01 | 1.65E+01 | 5.55E+01 | 3.60E+01 | 1.58E+04 |
| C4 | STD | 6.29E+00 | 6.34E+00 | 1.64E+01 | 9.42E+00 | 8.91E+00 | 1.92E+01 | 4.88E+00 | 4.51E+03 |
| C5 | Best | 4.81E+00 | 1.01E+00 | 2.09E+00 | 1.01E+00 | 1.16E+00 | 1.76E+00 | 2.62E+00 | 2.59E+00 |
| C5 | Worst | 2.32E+01 | 1.52E+00 | 8.52E+01 | 1.61E+01 | 4.43E+00 | 4.76E+00 | 2.06E+01 | 7.67E+00 |
| C5 | Mean | 1.13E+01 | 1.29E+00 | 3.46E+01 | 2.49E+00 | 2.01E+00 | 2.54E+00 | 5.56E+00 | 5.25E+00 |
| C5 | STD | 4.63E+00 | 1.25E−01 | 2.30E+01 | 3.70E+00 | 1.01E+00 | 6.62E−01 | 3.87E+00 | 1.14E+00 |
| C6 | Best | 5.61E+00 | 4.68E+00 | 4.63E+00 | 1.23E+00 | 1.29E+00 | 5.40E+00 | 4.35E+00 | 8.04E+00 |
| C6 | Worst | 1.04E+01 | 1.01E+01 | 1.12E+01 | 8.17E+00 | 5.16E+00 | 1.18E+01 | 8.33E+00 | 1.20E+01 |
| C6 | Mean | 7.56E+00 | 7.57E+00 | 7.92E+00 | 4.47E+00 | 2.66E+00 | 8.88E+00 | 6.07E+00 | 9.85E+00 |
| C6 | STD | 1.34E+00 | 1.58E+00 | 1.68E+00 | 1.81E+00 | 1.12E+00 | 1.40E+00 | 8.24E−01 | 1.01E+00 |
| C7 | Best | 9.96E+02 | 1.10E+03 | 8.24E+02 | 5.46E+02 | 1.77E+02 | 5.20E+02 | 1.36E+03 | 4.86E+02 |
| C7 | Worst | 1.85E+03 | 2.01E+03 | 1.75E+03 | 1.83E+03 | 1.50E+03 | 2.37E+03 | 2.35E+03 | 1.50E+03 |
| C7 | Mean | 1.55E+03 | 1.66E+03 | 1.24E+03 | 1.08E+03 | 7.79E+02 | 1.43E+03 | 1.92E+03 | 9.55E+02 |
| C7 | STD | 2.16E+02 | 1.68E+02 | 2.63E+02 | 3.04E+02 | 3.11E+02 | 4.17E+02 | 2.71E+02 | 2.51E+02 |
| C8 | Best | 4.03E+00 | 3.60E+00 | 3.55E+00 | 3.55E+00 | 2.67E+00 | 4.00E+00 | 3.93E+00 | 5.47E+00 |
| C8 | Worst | 4.85E+00 | 4.88E+00 | 5.06E+00 | 4.86E+00 | 4.65E+00 | 5.06E+00 | 5.18E+00 | 7.02E+00 |
| C8 | Mean | 4.45E+00 | 4.57E+00 | 4.48E+00 | 4.34E+00 | 3.74E+00 | 4.55E+00 | 4.61E+00 | 6.11E+00 |
| C8 | STD | 1.75E−01 | 2.31E−01 | 4.02E−01 | 3.16E−01 | 4.81E−01 | 2.82E−01 | 2.51E−01 | 3.24E−01 |
| C9 | Best | 1.34E+00 | 1.12E+00 | 1.06E+00 | 1.05E+00 | 1.07E+00 | 1.10E+00 | 1.20E+00 | 9.00E+02 |
| C9 | Worst | 1.91E+00 | 1.29E+00 | 3.67E+00 | 1.71E+00 | 1.33E+00 | 1.74E+00 | 3.16E+00 | 3.82E+03 |
| C9 | Mean | 1.60E+00 | 1.21E+00 | 1.88E+00 | 1.32E+00 | 1.18E+00 | 1.39E+00 | 1.52E+00 | 2.68E+03 |
| C9 | STD | 1.37E−01 | 3.99E−02 | 9.17E−01 | 1.71E−01 | 6.96E−02 | 1.55E−01 | 4.05E−01 | 7.18E+02 |
| C10 | Best | 2.06E+01 | 2.05E+01 | 2.06E+01 | 2.02E+01 | 2.04E+01 | 2.03E+01 | 1.52E+01 | 1.95E+01 |
| C10 | Worst | 2.09E+01 | 2.08E+01 | 2.09E+01 | 2.09E+01 | 2.09E+01 | 2.08E+01 | 2.08E+01 | 1.98E+01 |
| C10 | Mean | 2.07E+01 | 2.07E+01 | 2.07E+01 | 2.05E+01 | 2.07E+01 | 2.05E+01 | 2.04E+01 | 1.97E+01 |
| C10 | STD | 8.36E−02 | 8.79E−02 | 7.29E−02 | 1.69E−01 | 1.17E−01 | 1.25E−01 | 1.15E+00 | 1.09E−01 |

| No. | Metric | CMA-ES | LSHADE-SPACMA | PIMO | RBMO | OOA | ISO | GSO |
|---|---|---|---|---|---|---|---|---|
| C1 | Best | 5.63E+01 | 6.01E+03 | 3.19E+02 | 9.63E−01 | 9.63E−01 | 9.63E−01 | 9.63E−01 |
| C1 | Worst | 1.47E+06 | 9.80E+10 | 6.29E+06 | 6.73E+05 | 2.66E+05 | 9.63E−01 | 9.63E−01 |
| C1 | Mean | 2.80E+05 | 2.57E+10 | 1.40E+06 | 5.56E+04 | 3.47E+04 | 9.63E−01 | 9.63E−01 |
| C1 | STD | 3.43E+05 | 2.49E+10 | 1.52E+06 | 1.64E+05 | 6.61E+04 | 0.00E+00 | 0.00E+00 |
| C2 | Best | 8.72E+00 | 2.67E+01 | 9.69E−01 | 1.67E+00 | 5.33E+00 | 4.10E+00 | 4.07E+00 |
| C2 | Worst | 6.88E+01 | 8.28E+01 | 1.44E+01 | 5.17E+01 | 1.06E+02 | 4.82E+00 | 4.81E+00 |
| C2 | Mean | 3.22E+01 | 5.97E+01 | 6.30E+00 | 8.89E+00 | 4.32E+02 | 4.73E+00 | 4.47E+00 |
| C2 | STD | 1.34E+01 | 1.26E+01 | 3.39E+00 | 1.40E+01 | 2.60E+02 | 2.40E−01 | 2.06E−01 |
| C3 | Best | 6.26E+00 | 6.86E+00 | 2.05E+00 | 1.25E+00 | 1.50E+00 | 5.06E+00 | 1.66E+00 |
| C3 | Worst | 9.87E+00 | 1.01E+01 | 1.04E+01 | 9.49E+00 | 9.47E+00 | 7.59E+00 | 8.31E+00 |
| C3 | Mean | 8.53E+00 | 9.40E+00 | 7.32E+00 | 6.61E+00 | 3.02E+00 | 6.69E+00 | 4.83E+00 |
| C3 | STD | 1.04E+00 | 6.24E−01 | 2.47E+00 | 2.04E+00 | 2.33E+00 | 4.91E−01 | 1.87E+00 |
| C4 | Best | 3.16E+01 | 1.39E+01 | 1.81E+01 | 1.06E+01 | 3.66E+00 | 2.26E+01 | 8.63E+00 |
| C4 | Worst | 6.15E+01 | 3.89E+01 | 8.09E+01 | 4.45E+01 | 4.05E+01 | 4.01E+01 | 3.45E+01 |
| C4 | Mean | 4.67E+01 | 2.75E+01 | 5.86E+01 | 2.37E+01 | 1.51E+01 | 3.31E+01 | 1.86E+01 |
| C4 | STD | 5.78E+00 | 5.83E+00 | 1.51E+01 | 8.65E+00 | 8.19E+00 | 4.48E+00 | 6.78E+00 |
| C5 | Best | 4.81E+00 | 1.01E+00 | 2.09E+00 | 1.01E+00 | 1.16E+00 | 2.62E+00 | 1.00E+00 |
| C5 | Worst | 2.32E+01 | 1.42E+00 | 8.32E+01 | 1.61E+01 | 4.43E+00 | 2.06E+01 | 1.25E+00 |
| C5 | Mean | 1.11E+01 | 1.30E+00 | 3.46E+01 | 2.49E+00 | 2.01E+00 | 5.56E+00 | 1.08E+00 |
| C5 | STD | 4.33E+00 | 1.23E−01 | 2.23E+01 | 3.34E+00 | 1.01E+00 | 3.12E+00 | 6.84E−02 |
| C6 | Best | 5.56E+00 | 4.64E+00 | 4.59E+00 | 1.22E+00 | 1.28E+00 | 4.31E+00 | 9.70E−01 |
| C6 | Worst | 1.03E+01 | 9.98E+00 | 1.11E+01 | 8.10E+00 | 5.11E+00 | 8.25E+00 | 4.44E+00 |
| C6 | Mean | 7.49E+00 | 7.50E+00 | 7.85E+00 | 4.43E+00 | 2.63E+00 | 6.01E+00 | 1.69E+00 |
| C6 | STD | 1.33E+00 | 1.57E+00 | 1.67E+00 | 1.80E+00 | 1.11E+00 | 8.17E−01 | 1.01E+00 |
| C7 | Best | 9.87E+02 | 1.09E+03 | 8.16E+02 | 5.41E+02 | 1.75E+02 | 1.35E+03 | 2.33E+02 |
| C7 | Worst | 1.83E+03 | 1.99E+03 | 1.74E+03 | 1.82E+03 | 1.49E+03 | 2.33E+03 | 1.07E+03 |
| C7 | Mean | 1.54E+03 | 1.64E+03 | 1.23E+03 | 1.07E+03 | 7.72E+02 | 1.90E+03 | 6.76E+02 |
| C7 | STD | 2.14E+02 | 1.67E+02 | 2.61E+02 | 3.01E+02 | 3.08E+02 | 2.69E+02 | 2.11E+02 |
| C8 | Best | 3.99E+00 | 3.57E+00 | 3.52E+00 | 3.52E+00 | 2.65E+00 | 3.90E+00 | 2.66E+00 |
| C8 | Worst | 4.81E+00 | 4.84E+00 | 5.02E+00 | 4.82E+00 | 4.60E+00 | 5.13E+00 | 4.55E+00 |
| C8 | Mean | 4.41E+00 | 4.53E+00 | 4.44E+00 | 4.30E+00 | 3.70E+00 | 4.57E+00 | 3.72E+00 |
| C8 | STD | 1.74E−01 | 2.28E−01 | 3.99E−01 | 3.13E−01 | 4.76E−01 | 2.49E−01 | 4.70E−01 |
| C9 | Best | 1.33E+00 | 1.11E+00 | 1.05E+00 | 1.04E+00 | 1.06E+00 | 1.19E+00 | 1.04E+00 |
| C9 | Worst | 1.89E+00 | 1.27E+00 | 3.64E+00 | 1.69E+00 | 1.31E+00 | 3.13E+00 | 1.42E+00 |
| C9 | Mean | 1.59E+00 | 1.20E+00 | 1.86E+00 | 1.31E+00 | 1.17E+00 | 1.51E+00 | 1.17E+00 |
| C9 | STD | 1.36E−01 | 3.95E−02 | 9.09E−01 | 1.70E−01 | 6.90E−02 | 4.02E−01 | 1.16E−01 |
| C10 | Best | 2.41E+00 | 2.32E+00 | 2.40E+00 | 2.06E+00 | 2.22E+00 | 1.07E+00 | 9.65E−01 |
| C10 | Worst | 2.07E+01 | 2.06E+01 | 2.07E+01 | 2.07E+01 | 2.07E+01 | 2.06E+01 | 2.06E+01 |
| C10 | Mean | 2.06E+01 | 2.05E+01 | 2.05E+01 | 2.03E+01 | 2.05E+01 | 2.02E+01 | 1.91E+01 |
| C10 | STD | 8.29E−02 | 8.71E−02 | 7.23E−02 | 1.67E−01 | 1.16E−01 | 1.14E+00 | 4.91E+00 |
Figure 31 illustrates the performance of the algorithms on the CEC 2019 benchmark functions (C1–C10) using boxplots with swarm overlays. Across most cases, GSO exhibits superior accuracy and stability with minimal spread and low median values. It consistently outperforms the others on C1 and C2, maintains reliable convergence on C3 and C4 where many algorithms fail, and dominates C5–C7 in both precision and robustness. Although SAO performs well on C8, GSO remains the most consistent overall, particularly on the more complex functions C9 and C10, where it demonstrates strong adaptability and minimal variance compared to competing methods.
Fig. 31
Boxplots with swarm/jittered points for CEC2019 constrained problems C1–C10

4.4.2 Statistical analysis

To comprehensively assess the performance of the algorithms across the CEC 2019 benchmark functions (C1–C10), multiple statistical visualizations and performance profiles are used. These include radar plots for individual algorithm profiles, mean performance comparisons, ranking distributions, and heatmaps.
The radar plots in Fig. 32 illustrate the normalized performance profiles of each algorithm across all ten benchmark functions. GSO consistently outperforms others, achieving near-optimal values across all functions, indicating superior convergence and solution quality. In contrast, algorithms like SAO and PSO show more varied behavior, with weaker performance on functions such as C1 and C9. These visual insights align with the numerical trends observed in earlier performance metrics.
Fig. 32
Normalized performance radar plots of all algorithms (C1–C10) and their comparison
The bar and heatmap plots in Fig. 33 provide a statistical summary of algorithm rankings. The left panel shows the average rank (lower is better) and the associated variability, where GSO again ranks best with the lowest mean and standard deviation, followed by FEP and GWO. The right panel provides a granular view via a heatmap, showing each algorithm’s rank for each function. GSO secures the first place across most functions, confirming its robustness, while SAO and PSO exhibit higher variability and poorer rankings across several functions.
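The average ranks in the left panel can be reproduced mechanically: rank the algorithms on each function by mean objective value, then average the ranks over functions. The sketch below illustrates this on a four-algorithm subset, using the Table 20 means for C1–C3 as input.

```python
# Sketch of the per-function ranking behind Fig. 33. `mean_values[i][j]` holds
# the mean objective of algorithm j on function C(i+1); the rows below are the
# Table 20 means for a four-algorithm subset on C1-C3.
import numpy as np
from scipy.stats import rankdata

algorithms = ["PSO", "DE", "GWO", "GSO"]
mean_values = np.array([
    [3.27e6, 6.51e6, 1.05e7, 9.63e-1],  # C1
    [4.40e3, 8.17e3, 1.22e3, 4.47e0],   # C2
    [9.28e0, 1.02e1, 7.19e0, 4.83e0],   # C3
])

# Rank 1 = best (lowest mean); ties would receive averaged ranks.
ranks = np.apply_along_axis(rankdata, 1, mean_values)
for name, r in zip(algorithms, ranks.mean(axis=0)):
    print(f"{name}: average rank {r:.2f}")
```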
Fig. 33
(Left) Average ranking across all CEC 2019 functions with error bars; (Right) heatmap of rankings per function
Figure 34 presents the mean and standard deviation (STD) of the objective values for each algorithm across all benchmark functions. Each subfigure (C1 to C10) reveals performance trends and stability. For example, GSO consistently achieves the lowest means with minimal deviation in most cases, particularly in C1, C2, and C9. In contrast, algorithms like SAO and PSO exhibit greater fluctuations, especially on complex functions such as C1 and C7, indicating instability and lower convergence reliability.
Fig. 34
Mean ± STD performance analysis for all algorithms across CEC 2019 benchmark functions
Across 23 benchmark functions, GSO consistently achieves near-optimal results with low variance, demonstrating stable convergence across unimodal and multimodal landscapes. Visual analyses (rolling statistics, density plots, and boxplots) confirm its robustness to noise and landscape irregularities. Even at higher dimensions (100D–500D), GSO maintains accuracy with minimal dispersion, while statistical tests (\(P<0.0001\)) validate its superiority. On the CEC 2019 constrained benchmarks (C1–C10), GSO ranks among the top performers, particularly in high-penalty problems, with strong runtime efficiency. Overall, GSO exhibits an excellent exploration–exploitation balance, reliable convergence, and scalability across diverse optimization tasks.

5 Engineering applications

5.1 Pressure vessel design problem

The pressure vessel problem considers a cylindrical vessel capped at both ends by hemispherical heads, as shown in Fig. 35. The objective is to minimize the total cost, including material, forming, and welding costs. Four design variables need to be optimized. The first is the thickness of the shell (\(T_s\)) and the second is the thickness of the head (\(T_h\)). The third and fourth are the inner radius, R, and the length of the cylindrical section, L, not including the head. \(T_s\) and \(T_h\) are integer multiples of 0.0625 inch, the available thickness of steel plates, while R and L are continuous values. The mathematical formulation of the problem can be described as follows:
Minimize
$$\begin{aligned} \begin{aligned} f(T_s,T_h,R,L)=&0.6224 T_s R L + 1.7781 T_h R^2 + \\&3.1661 T_s^2 L +19.84 T_s^2 R \\ \end{aligned} \end{aligned}$$
(6)
Subject to the following constraints
$$\begin{aligned} \begin{aligned}&g_1= -T_s+0.0193R \le 0 \\&g_2= -T_h+0.00954R \le 0 \\&g_3= -\pi R^2 L- 4/3 \pi R^3+1,296,000 \le 0 \\&g_4= L-240 \le 0 \\ \end{aligned} \end{aligned}$$
(7)
where the four variables’ ranges are as follows:
$$\begin{aligned} \begin{aligned}&0 \le T_s \le 99, 0 \le T_h \le 99, \\&10 \le R \le 200, 10 \le L \le 200 \\ \end{aligned} \end{aligned}$$
(8)
Fig. 35
Pressure vessel design
The comparative results for the pressure vessel design problem are reported in Table 21, where the performance of the proposed glider snake optimization (GSO) algorithm is evaluated against several well-established metaheuristic approaches from the literature, including genetic algorithm (GA), particle swarm optimization (PSO), grey wolf optimizer (GWO), whale optimization algorithm (WOA), artificial life method (ALM), and gravitational search algorithm (GSA). The problem under consideration is to minimize the total manufacturing cost subject to multiple nonlinear constraints, with four decision variables: the shell thickness (\(T_s\)), the head thickness (\(T_h\)), the inner radius (R), and the cylindrical section length (L).
Table 21
Comparison of GSO results with literature for the pressure vessel design problem
| Algorithm | \(T_s\) | \(T_h\) | R | L | Optimal cost |
|---|---|---|---|---|---|
| GA | 0.8125 | 0.4375 | 42.0974 | 176.6541 | 6059.9463 |
| PSO | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777 |
| GWO | 0.7838 | 0.3923 | 40.5905 | 200.0000 | 5993.5876 |
| WOA | 0.8125 | 0.4375 | 42.0983 | 176.6390 | 6059.7410 |
| ALM | 1.1250 | 0.6250 | 58.2910 | 43.6900 | 7198.0428 |
| GSA | 1.1250 | 0.6250 | 55.9887 | 84.4542 | 8538.8359 |
| GSO (proposed) | 0.7782 | 0.3846 | 40.3196 | 199.9999 | **5885.3361** |
Values in bold indicate the best results
A more granular analysis of the pressure vessel design problem is presented in Table 22, where the statistical performance of the proposed Glider Snake Optimization (GSO) algorithm is compared against three prominent metaheuristics: Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), and Gravitational Search Algorithm (GSA). Each algorithm was executed across multiple independent runs with consistent initialization and stopping conditions to capture the variability and reliability of the optimization behavior across trials.
Table 22
Comparison of GSO statistical results with literature for the pressure vessel design problem
| Algorithm | Best | Average | Standard deviation | Function evaluations |
|---|---|---|---|---|
| GSO (proposed) | **5885.3361** | **5967.7646** | 262.4275 | **5250** |
| PSO | 6061.0777 | 6531.1000 | **154.3716** | 14,790 |
| GWO | 5993.5876 | 6283.2360 | 401.2634 | 7600 |
| GSA | 8538.8359 | 8932.9500 | 683.5475 | 7110 |
Values in bold indicate the best results

5.2 Tension/compression spring design problem

Figure 36 shows the tension/compression spring design (TCSD) problem, a continuous constrained problem. The aim is to minimize the volume of a coil spring under a constant tension/compression load. The problem has three design variables: w, d, and L, where w is the wire diameter, d is the winding (mean coil) diameter, and L is the number of active coils.
Fig. 36
Tension/compression spring design
The mathematical formulation of the problem can be described as follows:
Minimize
$$\begin{aligned} f(w,d,L)=(L+2) w^2 d \end{aligned}$$
(9)
Subject to the following constraints
$$\begin{aligned} \begin{aligned}&g_1=1- \frac{d^3 L}{71785 w^4} \le 0 \\&g_2= \frac{d(4d-w)}{w^3 (12566d-w)}+ \frac{1}{5108w^2} -1 \le 0 \\&g_3= 1- \frac{140.45w}{d^2 L} \le 0 \\&g_4= \frac{2(w+d)}{3} - 1 \le 0 \\ \end{aligned} \end{aligned}$$
(10)
where the three variables range are as follows:
$$\begin{aligned} \begin{aligned}&0.05 \le w \le 2.0, \\&0.25 \le d \le 1.3, \\&2.0 \le L \le 15 \\ \end{aligned} \end{aligned}$$
(11)
The optimization results for the tension/compression spring design problem are detailed in Table 23, which presents the best solutions obtained by several well-established algorithms, including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), Gravitational Search Algorithm (GSA), and the proposed Glider Snake Optimization (GSO) method. This problem is a classical constrained engineering task aimed at minimizing the weight of a spring subject to geometric and mechanical constraints involving shear stress, surge frequency, and deflection limits. The design space consists of three decision variables: the wire diameter (w), the mean coil diameter (d), and the number of active coils (L).
Table 23
Comparison of the best solutions for the tension/compression spring design problem
| Algorithm | w | d | L | Optimal cost |
|---|---|---|---|---|
| GA | 0.051480 | 0.351661 | 11.632201 | 0.0127048 |
| PSO | 0.051728 | 0.357644 | 11.244543 | 0.0126747 |
| GSA | 0.050276 | 0.323680 | 13.525410 | 0.0127022 |
| DE | 0.051609 | 0.354714 | 11.410831 | 0.0126702 |
| MPM | 0.050000 | 0.315900 | 14.250000 | 0.0128334 |
| WOA | 0.051207 | 0.345215 | 12.004032 | 0.0126763 |
| GWO | 0.050000 | 0.317425 | 14.029494 | 0.0126763 |
| GSO (proposed) | 0.051670 | 0.356265 | 11.315586 | **0.0126652** |
Values in bold indicate the best results
A comprehensive statistical analysis of the tension/compression spring design problem is provided in Table 24, which summarizes the performance of the Glider Snake Optimization (GSO) algorithm in comparison with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Grey Wolf Optimizer (GWO). The tabulated results include the best-obtained solution, the average cost over multiple independent runs, the standard deviation as a measure of robustness, and the total number of function evaluations required to reach convergence.
Table 24
Comparison of GSO statistical results with literature for tension/compression spring design problem
| Algorithm | Optimal solution | Average | Standard deviation | Function evaluations |
|---|---|---|---|---|
| GSO | **0.0126652** | **0.0130** | 0.00127 | **3210** |
| PSO | 0.0126747 | 0.0139 | 0.0033 | 5460 |
| GSA | 0.0127022 | 0.0136 | 0.0026 | 4980 |
| GWO | 0.0126763 | **0.0130** | **0.00038** | 4820 |
Values in bold indicate the best results

5.3 Welded beam design problem

The next constrained problem is the welded beam design. This problem, as shown in Fig. 37, is considered an important benchmark for testing different optimization methods. The main objective is to minimize the fabrication cost of the welded beam, including setup, welding labor, and material costs. The design constraints are shear stress, bending stress, buckling load, end deflection, and side constraint. Four design variables of w, L, d, and h are considered here.
Fig. 37
Welded beam design
The mathematical formulation of the problem can be described as follows:
Minimize
$$\begin{aligned} & \begin{aligned} f(w, L, d, h)&= 1.10471\, w^2 L \\&\quad + 0.04811\, d h (14 + L) \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned} & \begin{aligned} g_1&= w - h \le 0 \\ g_2&= \delta - 0.25 \le 0 \\ g_3&= \tau - 13{,}600 \le 0 \\ g_4&= \sigma - 30{,}000 \le 0 \\ g_5&= 0.125 - w \le 0 \\ g_6&= 6000 - P \le 0 \\ g_7&= 0.10471\, w^2 + 0.04811\, h d (14 + L) \\&\quad - 5.0 \le 0 \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned} & \begin{aligned} \delta&= \frac{65856}{30000\, h d^3} \\ \tau&= \sqrt{ \alpha ^2 + \frac{\alpha \beta L}{D} + \beta ^2 } \\ \alpha&= \frac{6000}{\sqrt{2}\, w L} \\ \beta&= \frac{Q D}{J} \\ Q&= 6000 \left( 14 + \frac{L}{2} \right) \\ D&= \frac{1}{2} \sqrt{L^2 + (w + d)^2} \\ J&= \sqrt{2}\, w L \left( \frac{L^2}{6} + \frac{(w + d)^2}{2} \right) \\ \sigma&= \frac{504000}{h d^2} \\ P&= \frac{0.61432 \times 10^6\, d h^3}{6} \left( 1 - \frac{d \sqrt{30 / 48}}{28} \right) \end{aligned} \end{aligned}$$
(14)
$$\begin{aligned} & \begin{aligned} 0.1 \le \ &w,\, h \le 2.0 \\ 0.1 \le \ &L,\, d \le 10.0 \end{aligned} \end{aligned}$$
(15)
The optimization results for the welded beam design problem are presented in Table 25, where the proposed glider snake optimization (GSO) algorithm is benchmarked against several well-known optimization strategies including genetic algorithm (GA), firefly algorithm (FA), particle swarm optimization (PSO), gravitational search algorithm (GSA), random optimization (RO), and whale optimization algorithm (WOA). The problem is to minimize the total fabrication cost of the welded beam while satisfying constraints on shear stress, bending stress, end deflection, and buckling load. The decision space comprises four continuous design variables: weld thickness (w), weld length (L), beam height (d), and beam width (h).
Table 25
Comparison of the best solutions for the welded beam design problem
| Algorithm | W | L | d | h | Optimal cost |
|---|---|---|---|---|---|
| GA | 0.205986 | 3.471328 | 9.020224 | 0.206480 | 1.728226 |
| FA | 0.208800 | 3.420500 | 8.997500 | 0.210000 | 1.748309 |
| PSO | 0.202369 | 3.544214 | 9.048210 | 0.205723 | 1.728024 |
| GSA | 0.182129 | 3.856979 | 10.00000 | 0.203760 | 1.879952 |
| RO | 0.203687 | 3.528467 | 9.004233 | 0.207241 | 1.735344 |
| WOA | 0.205396 | 3.484293 | 9.037426 | 0.206276 | 1.730499 |
| GSO (proposed) | 0.205726 | 3.470329 | 9.037232 | 0.205728 | **1.724921** |
Values in bold indicate the best results
A detailed statistical comparison of the optimization performance on the welded beam design problem is provided in Table 26. The results reported include the best solution obtained, the average cost across multiple independent runs, the standard deviation as a measure of solution consistency, and the total number of function evaluations required to reach convergence. The proposed Glider Snake Optimization (GSO) algorithm is compared with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Whale Optimization Algorithm (WOA).
Table 26
Comparison of GSO statistical results with literature for the welded beam design problem
| Algorithm | Best | Average | Std. deviation | Function evaluations |
|---|---|---|---|---|
| GSO (proposed) | **1.724921** | **1.7290** | **0.00679** | **6000** |
| PSO | 1.728024 | 1.7422 | 0.01275 | 13,770 |
| GSA | 1.879952 | 3.5761 | 1.28740 | 10,750 |
| WOA | 1.730499 | 1.7320 | 0.02260 | 9900 |
Values in bold indicate the best results

5.4 Speed reducer design problem

The speed reducer problem concerns part of the gearbox in a mechanical system and arises in a wide range of applications. The design involves seven variables, which makes it a more challenging benchmark. Figure 38 shows the problem and the relevant parameters \(x_1, x_2, \dots , x_7\): the face width, the module of the teeth, the number of teeth on the pinion, the length of the first shaft between bearings, the length of the second shaft between bearings, the diameter of the first shaft, and the diameter of the second shaft. The objective is to minimize the total weight of the speed reducer while satisfying the 11 constraints.
Fig. 38
Speed reducer design
The mathematical formulation of the problem can be described as follows:
Minimize
$$\begin{aligned} \begin{aligned} f(x) =\ &0.7854\, x_1 x_2^2 \left( 3.3333\, x_3^2 + 14.9334\, x_3 - 43.0934 \right) \\&- 1.508\, x_1 (x_6^2 + x_7^2) + 7.4777\, (x_6^3 + x_7^3) \\&+ 0.7854\, (x_4 x_6^2 + x_5 x_7^2) \end{aligned} \end{aligned}$$
(16)
Subject to the following constraints:
$$\begin{aligned} \begin{aligned} g_1&= \frac{27}{x_1 x_2^2 x_3} - 1 \le 0 \\ g_2&= \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0 \\ g_3&= \frac{1.93\, x_4^3}{x_2 x_3 x_6^4} - 1 \le 0 \\ g_4&= \frac{1.93\, x_5^3}{x_2 x_3 x_7^4} - 1 \le 0 \\ g_5&= \frac{x_2 x_3}{40} - 1 \le 0 \\ g_6&= \frac{5\, x_2}{x_1} - 1 \le 0 \\ g_7&= \frac{x_1}{12\, x_2} - 1 \le 0 \\ g_8&= \frac{1.5\, x_6 + 1.9}{x_4} - 1 \le 0 \\ g_9&= \frac{1.1\, x_7 + 1.9}{x_5} - 1 \le 0 \\ g_{10}&= \frac{1}{110\, x_6^3} \sqrt{\left( \frac{745\, x_4}{x_2 x_3} \right) ^2 + 16.9 \times 10^6} - 1 \le 0 \\ g_{11}&= \frac{1}{85\, x_7^3} \sqrt{\left( \frac{745\, x_5}{x_2 x_3} \right) ^2 + 157.5 \times 10^6} - 1 \le 0 \end{aligned} \end{aligned}$$
(17)
Where the variables are bounded as:
$$\begin{aligned} \begin{aligned} 2.6 \le \ &x_1 \le 3.6 \\ 0.7 \le \ &x_2 \le 0.8 \\ 17 \le \ &x_3 \le 28 \\ 7.3 \le \ &x_4 \le 8.3 \\ 7.8 \le \ &x_5 \le 8.3 \\ 2.9 \le \ &x_6 \le 3.9 \\ 5.0 \le \ &x_7 \le 5.5 \end{aligned} \end{aligned}$$
(18)
Table 27 presents the comparative results for the speed reducer design problem, showcasing the best solutions obtained by several widely used metaheuristic algorithms, including genetic algorithm (GA), particle swarm optimization (PSO), grey wolf optimizer (GWO), whale optimization algorithm (WOA), and the proposed glider snake optimization (GSO). This classical engineering problem aims to minimize the total weight of a speed reducer while satisfying multiple nonlinear constraints, including stress, deflection, and dimensional limitations. The problem is characterized by a seven-dimensional continuous decision space comprising the face width (\(x_1\)), module of teeth (\(x_2\)), number of teeth (\(x_3\)), shaft length and diameters (\(x_4\) to \(x_7\)).
Table 27
Comparison of the best solutions for the speed reducer design problem
| Algorithm | \(x_1\) | \(x_2\) | \(x_3\) | \(x_4\) | \(x_5\) | \(x_6\) | \(x_7\) | Optimal cost |
|---|---|---|---|---|---|---|---|---|
| GA | 3.502456 | 0.7000 | 17.0000 | 7.3000 | 7.8000 | 3.351768 | 5.289224 | 2999.3257 |
| PSO | 3.508468 | 0.7000 | 17.0000 | 7.3000 | 7.8000 | 3.363009 | 5.287269 | 3003.3185 |
| GWO | 3.505301 | 0.7000 | 17.0000 | 7.3697 | 7.8168 | 3.351011 | 5.290698 | 3002.1709 |
| WOA | 3.518391 | 0.7000 | 17.0000 | 7.3000 | 7.9967 | 3.377985 | 5.286752 | 3015.0628 |
| GSO (proposed) | 3.500373 | 0.7000 | 17.0000 | 7.3000 | 7.8000 | 3.350765 | 5.286810 | **2996.7160** |
Values in bold indicate the best results
Table 28 provides a detailed statistical evaluation of the performance of the proposed Glider Snake Optimization (GSO) algorithm in solving the speed reducer design problem, benchmarked against four well-known optimization techniques: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Grey Wolf Optimizer (GWO), and Whale Optimization Algorithm (WOA). Each algorithm was executed across multiple independent runs under controlled conditions to measure its reliability, solution variance, and computational demand in terms of function evaluations.
Table 28
Comparison of GSO statistical results with literature for the speed reducer design problem
| Algorithm | Best | Average | Std. deviation | Function evaluations |
|---|---|---|---|---|
| GSO (proposed) | **2996.7160** | **3004.0317** | 6.8890 | **2640** |
| PSO | 3003.3185 | 3105.7246 | 100.7790 | 2720 |
| GA | 2999.3257 | 3006.5664 | **6.2584** | 7000 |
| GWO | 3002.1709 | 3012.9974 | 6.4429 | 9920 |
| WOA | 3015.0628 | 3404.9737 | 679.2742 | 10,000 |
To quantitatively assess the statistical significance of the performance differences between the proposed Glider Snake Optimization (GSO) algorithm and competing methods across constrained engineering design problems, a series of pairwise hypothesis tests was conducted using p-values derived from standard t-tests. The results are summarized in Table 29, where each entry represents the statistical significance level (p-value) of the performance difference between GSO and another algorithm for a given problem. A commonly accepted significance threshold of \(\alpha = 0.05\) was used to determine whether the observed differences are statistically meaningful.
Table 29
Statistical significance (p-values) of GSO compared with other algorithms on constrained engineering design problems
| Problem | GSO–PSO | GSO–GWO | GSO–WOA | GSO–GA |
|---|---|---|---|---|
| Pressure vessel design | 0.00327 | 0.04605 | 0.000535 | 4.12E−15 |
| Tension/compression spring | 0.15929 | 0.21408 | 0.000723 | 0.000692 |
| Welded beam design | 0.28670 | 1.57E−05 | 3.37E−08 | 1.24E−06 |
| Speed reducer design | 7.57E−09 | 0.000116 | 2.86E−13 | 0.11977 |

6 Discussion

The experimental evaluation of the proposed Glider Snake Optimization (GSO) algorithm across twenty-three classical benchmark functions, high-dimensional scenarios, the CEC 2019 test suite, and constrained engineering problems provides several insights into its performance behavior. Overall, the results demonstrate that GSO delivers competitive or superior solution accuracy, rapid convergence, and stable performance relative to well-established metaheuristics such as PSO, GWO, WOA, GA, and DE. One of the primary advantages observed is GSO's dual-guidance update mechanism, which allows agents to learn simultaneously from the global leader and their predecessor. This structure promotes cooperative information flow along the population, thereby reducing the likelihood of premature convergence and helping the algorithm maintain diversity during early search phases.
The adaptive coefficient and gliding-inspired movement strategy further contribute to GSO’s strong exploration capability, allowing it to traverse the search space more effectively and avoid local optima on multimodal landscapes. Empirical convergence curves consistently show that GSO transitions smoothly into exploitation without stagnating, particularly in functions where other algorithms exhibit oscillations or early losses of diversity. In engineering design problems, GSO demonstrates reliable constraint-handling behavior and produces feasible, high-quality solutions with relatively low computational cost. These results highlight GSO’s suitability for both global numerical optimization and practical real-world applications.
Despite its strengths, GSO is not without limitations. Like many population-based algorithms, its performance is sensitive to the initial distribution of agents in extremely narrow or highly discontinuous search spaces. Although the weak-agent replacement mechanism preserves diversity, there are scenarios—particularly in highly rugged landscapes—where additional diversity-inducing strategies may further improve performance. Another limitation is that the algorithm currently operates in a single-objective and static search setting. While its mechanisms suggest strong potential for extension, GSO does not yet include specialized operators for multi-objective, dynamic, or noisy optimization environments.
Overall, the study shows that GSO offers a robust and efficient balance of exploration and exploitation, outperforming or matching the performance of several state-of-the-art methods. At the same time, the identified areas for improvement highlight valuable opportunities for future research—including enhanced diversity mechanisms, hybrid model integration, dynamic adaptation strategies, and multi-objective extensions—that can further broaden the applicability and effectiveness of GSO.

7 Conclusion and future work

This paper introduced the Glider Snake Optimization (GSO) algorithm, a novel metaheuristic inspired by the hybrid locomotion mechanisms of arboreal snakes that glide and slither to traverse complex environments. By modeling the optimizer as a chain of segments influenced by leader-following behavior and elite solution attraction, GSO effectively balances global exploration with local exploitation. The algorithm was rigorously benchmarked against a suite of 23 mathematical functions, CEC 2019 problems, high-dimensional test cases, and constrained engineering design problems. The results consistently demonstrated that GSO outperforms or matches well-established algorithms such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), and Differential Evolution (DE), particularly in terms of convergence speed, solution quality, and computational efficiency. These findings confirm that the proposed search mechanism, including its dual-guidance update rule and gliding-inspired exploration behavior, provides a meaningful improvement over classical single-leader and encircling-based strategies.
In addition to its strong optimization performance, GSO exhibits notable robustness and parameter stability. Sensitivity analyses revealed that the algorithm is resilient to variations in its control parameters, and statistical evaluations—including one-sample t-tests and ANOVA—confirmed the significance of its superior performance. Search history visualizations further illustrated GSO’s adaptive capacity to transition between exploration and exploitation across diverse objective landscapes. Moreover, the algorithm’s lightweight design enables efficient memory and CPU utilization, making it suitable for deployment in resource-constrained or real-time applications. Overall, these results highlight GSO as a versatile, computationally efficient framework capable of addressing a wide range of optimization challenges across both theoretical and real-world domains.
Future research can explore several promising directions to extend the applicability and performance of GSO. One potential path is to hybridize GSO with local search, surrogate modeling, or machine–learning–based parameter-adaptation strategies to accelerate convergence in high-cost optimization scenarios. Another avenue involves adapting GSO to dynamic, multi-objective, or time-varying optimization environments where solution landscapes evolve and trade-offs must be resolved among conflicting objectives. Furthermore, integrating GSO into swarm-based ensembles, distributed or parallel computing frameworks, or GPU-accelerated implementations may enhance its scalability and runtime performance for large-scale industrial optimization problems. Additional future directions include investigating theoretical properties such as convergence behavior, stability analysis, and complexity bounds, as well as developing a dedicated multi-objective extension (MO-GSO) to broaden its applicability to real-world engineering and decision-making tasks.
According to the No Free Lunch (NFL) theorem (Wolpert and Macready 1997), the strong optimization performance of the proposed Glider Snake Optimizer (GSO) is inherently problem-dependent and cannot be guaranteed across all possible optimization landscapes. Although GSO performs well on the benchmark functions and engineering design problems investigated here, its effectiveness may vary across different problem domains, objective formulations, constraint structures, or dynamic environments. Consequently, for new application scenarios or alternative optimization tasks, GSO should be re-evaluated and benchmarked against other state-of-the-art algorithms to confirm its suitability and robustness in each specific context.

Acknowledgements

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2026R308), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Declarations

Conflict of interest

The authors declare no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Title
Glider snake optimizer (GSO): a nature-inspired metaheuristic algorithm for global and engineering optimization problems
Authors
El-Sayed M. El-kenawy
Nima Khodadadi
Seyedali Mirjalili
Ahmed Mohamed Zaki
Abdelhameed Ibrahim
Amel Ali Alhussan
Doaa Sami Khafaga
Marwa M. Eid
Publication date
10.02.2026
Publisher
Springer Netherlands
Published in
Artificial Intelligence Review / Issue 3/2026
Print ISSN: 0269-2821
Electronic ISSN: 1573-7462
DOI
https://doi.org/10.1007/s10462-026-11504-x

Supplementary Information

Below is the link to the electronic supplementary material.

References

Abd Elaziz M, Dahou A, Abualigah L, Yu L, Alshinwan M, Khasawneh AM, Lu S (2021) Advanced metaheuristic optimization techniques in applications of deep neural networks: a review. Neural Comput Appl 33(21):14079–14099. https://doi.org/10.1007/s00521-021-05960-5
Abualigah L, Yousri D, Abd Elaziz M, Ewees AA, Al-qaness MAA, Gandomi AH (2021) Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput Ind Eng 157:107250. https://doi.org/10.1016/j.cie.2021.107250
Adegboye OR, Deniz Ülker E (2023a) Gaussian mutation specular reflection learning with local escaping operator based artificial electric field algorithm and its engineering application. Appl Sci 13(7):4157. https://doi.org/10.3390/app13074157
Adegboye OR, Deniz Ülker E (2023b) Hybrid artificial electric field employing cuckoo search algorithm with refraction learning for engineering optimization problems. Sci Rep 13(1):4098. https://doi.org/10.1038/s41598-023-31081-1
Adegboye OR, Feda AK (2024) Improved exponential distribution optimizer: enhancing global numerical optimization problem solving and optimizing machine learning parameters. Clust Comput 28(2):128. https://doi.org/10.1007/s10586-024-04753-4
Adegboye OR, Feda AK, Ojekemi OR, Agyekum EB, Khan B, Kamel S (2024) DGS-SCSO: enhancing sand cat swarm optimization with dynamic pinhole imaging and golden sine algorithm for improved numerical optimization performance. Sci Rep 14(1):1491. https://doi.org/10.1038/s41598-023-50910-x
Ahmadianfar I, Bozorg-Haddad O, Chu X (2020) Gradient-based optimizer: a new metaheuristic optimization algorithm. Inf Sci 540:131–159. https://doi.org/10.1016/j.ins.2020.06.037
Cheraghalipour A, Hajiaghaei-Keshteli M, Paydar MM (2018) Tree growth algorithm (TGA): a novel approach for solving optimization problems. Eng Appl Artif Intell 72:393–414. https://doi.org/10.1016/j.engappai.2018.04.021
Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39. https://doi.org/10.1109/MCI.2006.329691
Deng L, Liu S (2023) Snow ablation optimizer: a novel metaheuristic technique for numerical optimization and engineering design. Expert Syst Appl 225:120069
Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science (MHS'95), pp 39–43. https://doi.org/10.1109/MHS.1995.494215
Erol OK, Eksin I (2006) A new optimization method: big bang-big crunch. Adv Eng Softw 37(2):106–111. https://doi.org/10.1016/j.advengsoft.2005.04.005
Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020) Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst Appl 152:113377
Fu S, Li K, Huang H, Ma C, Fan Q, Zhu Y (2024) Red-billed blue magpie optimizer: a novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif Intell Rev 57(6):13
Gong J, Karimzadeh Parizi M (2022) GWMA: the parallel implementation of woodpecker mating algorithm on the GPU. J Chin Inst Eng 45(6):556–568. https://doi.org/10.1080/02533839.2022.2078418
Gopi S, Mohapatra P (2024a) Learning cooking algorithm for solving global optimization problems. Sci Rep 14(1):13359. https://doi.org/10.1038/s41598-024-60821-0
Gopi S, Mohapatra P (2024b) Fast random opposition-based learning aquila optimization algorithm. Heliyon 10(4):e26187. https://doi.org/10.1016/j.heliyon.2024.e26187
Hadi AA, Mohamed AW, Jambi KM (2020) Single-objective real-parameter optimization: enhanced LSHADE-SPACMA algorithm. In: Heuristics for optimization and learning. Springer International Publishing, Cham, pp 103–121
Hansen N (2016) The CMA evolution strategy: a tutorial. arXiv preprint arXiv:1604.00772
Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–73
Hosney ME, Houssein EH, Saad MR, Samee NA, Jamjoom MM, Emam MM (2024) Efficient bladder cancer diagnosis using an improved rime algorithm with orthogonal learning. Comput Biol Med 182:109175. https://doi.org/10.1016/j.compbiomed.2024.109175
Houssein EH, Hossam Abdel Gafar M, Fawzy N, Sayed AY (2025) Recent metaheuristic algorithms for solving some civil engineering optimization problems. Sci Rep 15(1):7929. https://doi.org/10.1038/s41598-025-90000-8
Kan H, Xiao Y, Gao Z, Zhang X (2025) Improved multi-strategy aquila optimizer for engineering optimization problems. Biomimetics (Basel Switzerland) 10(9):620. https://doi.org/10.3390/biomimetics10090620
Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University
Kaveh A, Talatahari S, Khodadadi N (2022) Stochastic paint optimizer: theory and application in civil engineering. Eng Comput 38(3):1921–1952
Khodadadi N, Snasel V, Mirjalili S (2022) Dynamic arithmetic optimization algorithm for truss optimization under natural frequency constraints. IEEE Access 10:16188–16208
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680. https://doi.org/10.1126/science.220.4598.671
Krishnan A, Socha JJ, Vlachos PP, Barba LA (2014) Lift and wakes of flying snakes. Phys Fluids. https://doi.org/10.1063/1.4866444
Li W, Yang X, Yin Y, Wang Q (2024) A novel hybrid improved rime algorithm for global optimization problems. Biomimetics (Basel Switzerland) 10(1):14. https://doi.org/10.3390/biomimetics10010014
Li Y, Li L, Lian Z, Zhou K, Dai Y (2025a) A quasi-opposition learning and chaos local search based on walrus optimization for global optimization problems. Sci Rep 15(1):2881. https://doi.org/10.1038/s41598-025-85751-3
Li B, Hao L, Liu L, Zhang L, Karimzadeh Parizi M (2025b) Binary modified cat and mouse-based optimizer for medical feature selection: a COVID-19 case study. J Chin Inst Eng. https://doi.org/10.1080/02533839.2025.2561168
Lu B, Xie Z, Wei J, Gu Y, Yan Y, Li Z, Pan S, Cheong N, Chen Y, Zhou R (2025a) MRBMO: an enhanced red-billed blue magpie optimization algorithm for solving numerical optimization challenges. Symmetry 17(8):1295. https://doi.org/10.3390/sym17081295
Lu C, Wei Y, Parizi MK (2025b) Improved sine cosine algorithm for global optimization and medical data classification. Int J Inf Technol Decis Mak. https://doi.org/10.1142/S0219622025500853
Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007
Mishra P, Ali M, Pooja, Islam S (2025) Enhanced mutation strategy based differential evolution for global optimization problems. PeerJ Comput Sci 11:e2696. https://doi.org/10.7717/peerj-cs.2696
Nasiri J, Khiyabani FM (2018) A whale optimization algorithm (WOA) approach for clustering. Cogent Math Stat 5(1):1483565. https://doi.org/10.1080/25742558.2018.1483565
Nemati M, Zandi Y, Agdas AS (2024) Application of a novel metaheuristic algorithm inspired by stadium spectators in global optimization problems. Sci Rep 14(1):3078. https://doi.org/10.1038/s41598-024-53602-2
Parizi MK, Keynia F, Bardsiri AK (2021) HSCWMA: a new hybrid SAC-WMA algorithm for solving optimization problems. Int J Inf Technol Decis Mak 20(2):775–808. https://doi.org/10.1142/S0219622021500176
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
Socha JJ (2011) Gliding flight in Chrysopelea: turning a snake into a wing. Integr Comp Biol 51(6):969–982. https://doi.org/10.1093/icb/icr092
Socha JJ, LaBarbera M (2005) Effects of size and behavior on aerial performance of two species of flying snakes (Chrysopelea). J Exp Biol 208(10):1835–1847. https://doi.org/10.1242/jeb.01580
Song M, Lin J, Liu X, Jia H, Luo S (2025) Octopus optimization algorithm: a novel single- and multi-objective optimization algorithm for optimization problems. Clust Comput 28(8):484. https://doi.org/10.1007/s10586-025-05141-2
Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359
Varshney M, Kumar P, Ali M, Gulzar Y (2024) Dynamic random walk and dynamic opposition learning for improving aquila optimizer: solving constrained engineering design problems. Biomimetics (Basel Switzerland) 9(4):215. https://doi.org/10.3390/biomimetics9040215
Wang X (2024) Draco lizard optimizer: a novel metaheuristic algorithm for global optimization problems. Evol Intell 18(1):10. https://doi.org/10.1007/s12065-024-00998-5
Wang X (2025) Bighorn sheep optimization algorithm: a novel and efficient approach for wireless sensor network coverage optimization. Phys Scr 100(7):075230. https://doi.org/10.1088/1402-4896/ade378
Wang X, Yao L (2025) Cape lynx optimizer: a novel metaheuristic algorithm for enhancing wireless sensor network coverage. Measurement 256:118361. https://doi.org/10.1016/j.measurement.2025.118361
Wei J, Gu Y, Yan Y, Li Z, Lu B, Pan S, Cheong N (2025a) LSEWOA: an enhanced whale optimization algorithm with multi-strategy for numerical and engineering design optimization problems. Sensors 25(7):2054. https://doi.org/10.3390/s25072054
Wei J, Gu Y, Lu B, Cheong N (2025b) RWOA: a novel enhanced whale optimization algorithm with multi-strategy for numerical optimization and engineering design problems. PLoS ONE 20(4):e0320913. https://doi.org/10.1371/journal.pone.0320913
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Xiao Y, Cui H, Hussien AG, Hashim FA (2024) MSAO: a multi-strategy boosted snow ablation optimizer for global optimization and real-world engineering applications. Adv Eng Inform 61:102464. https://doi.org/10.1016/j.aei.2024.102464
Xiao Y, Cui H, Khurma RA, Castillo PA (2025) Artificial lemming algorithm: a novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif Intell Rev 58(3):84. https://doi.org/10.1007/s10462-024-11023-7
Yan F, Zhang J, Yang J (2024) Crocodile optimization algorithm for solving real-world optimization problems. Sci Rep 14(1):32070. https://doi.org/10.1038/s41598-024-83788-4
Yang XS (2009) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms. Springer, Berlin, Heidelberg, pp 169–178
Yao X, Liu Y (1996) Fast evolutionary programming. Evol Program 3:451–460
Yazdani M, Jolai F (2016) Lion optimization algorithm (LOA): a nature-inspired metaheuristic algorithm. J Comput Design Eng 3(1):24–36. https://doi.org/10.1016/j.jcde.2015.06.003
Yin Z, Wang L, Qiu X, Zhang J (2025) Feedback chaotic growth optimizer for parameter extraction of a novel direct current arc model. ISA Trans 166:364–378. https://doi.org/10.1016/j.isatra.2025.07.023
Yu D, Ji Y, Xia Y (2025) Projection-iterative-methods-based optimizer: a novel metaheuristic algorithm for continuous optimization problems and feature selection. Knowl-Based Syst 326:113978. https://doi.org/10.1016/j.knosys.2025.113978
Yue T, Li T (2025) Crisscross moss growth optimization: an enhanced bio-inspired algorithm for global production and optimization. Biomimetics (Basel Switzerland) 10(1):32. https://doi.org/10.3390/biomimetics10010032
Zhang Y, Cai Y (2024) Adaptive dynamic self-learning grey wolf optimization algorithm for solving global optimization problems and engineering problems. Math Biosci Eng 21(3):3910–3943. https://doi.org/10.3934/mbe.2024174
Zhang J, Li H, Parizi MK (2023) HWMWOA: a hybrid WMA-WOA algorithm with adaptive Cauchy mutation for global optimization and data classification. Int J Inf Technol Decis Mak 22(4):1195–1252. https://doi.org/10.1142/S0219622022500675
Zhang Y, Adegboye OR, Feda AK, Agyekum EB, Kumar P (2025a) Dynamic gold rush optimizer: fusing worker adaptation and salp navigation mechanism for enhanced search. Sci Rep 15(1):15779. https://doi.org/10.1038/s41598-025-00076-5
Zhang H, Wu H, Gong Y, Pan X, Zhong Q (2025b) A novel transcendental metaphor metaheuristic algorithm based on power method. Sci Rep 15(1):26997. https://doi.org/10.1038/s41598-025-12307-w
Zhang D, Makoś MZ, Rousseau R, Glezakou V-A (2025c) Range: a robust adaptive nature-inspired global explorer of potential energy surfaces. J Chem Phys. https://doi.org/10.1063/5.0288910
Zhao Y, Li X (2025) CCESC: a crisscross-enhanced escape algorithm for global and reservoir production optimization. Biomimetics (Basel Switzerland) 10(8):529. https://doi.org/10.3390/biomimetics10080529
Zhong M, Wen J, Ma J, Cui H, Zhang Q, Parizi MK (2023) A hierarchical multi-leadership sine cosine algorithm to dissolving global optimization and data classification: the COVID-19 case study. Comput Biol Med 164:107212. https://doi.org/10.1016/j.compbiomed.2023.107212
Zhu Y, Huang H, Wei J, Yi J, Liu J, Li M (2025) ISO: an improved snake optimizer with multi-strategy enhancement for engineering optimization. Expert Syst Appl 281:127660. https://doi.org/10.1016/j.eswa.2025.127660