1 Introduction

1.1 Evolutionary Algorithm

In recent years, there has been growing scholarly emphasis on nature-inspired optimization algorithms, primarily because of their remarkable capabilities in addressing complex optimization challenges. Within this context, the work of Mirjalili, Mirjalili [1] introduced the multi-verse optimizer (MVO), which draws inspiration from cosmological concepts. Their study demonstrates its competitive performance not only on benchmark assessments but also in real-world engineering scenarios, highlighting its potential to tackle problems characterized by intricate search spaces. Similarly, Mirjalili [2] presented the sine cosine algorithm (SCA), showcasing its effectiveness through rigorous benchmark testing and the optimization of an aircraft wing’s cross-section. The study emphasizes its promise in resolving intricate real-world problems, particularly those constrained by both the complexity and the obscurity of their search domains.

It is noteworthy that the landscape of optimization algorithms includes the differential evolution (DE) method developed by Storn and Price [3], renowned for its simplicity and effectiveness in global optimization. Building on this trajectory, Mirjalili [4] introduced the moth-flame optimization (MFO) algorithm, inspired by the transverse orientation behavior of moths. The investigation demonstrates the algorithm’s competitiveness through comprehensive benchmark tests and applications in real-world engineering domains. Notably, Mirjalili and Lewis [5] pioneered the whale optimization algorithm (WOA), drawing inspiration from the intricate social behavior of humpback whales. Their work demonstrates the algorithm’s competitiveness through exhaustive assessments on mathematical optimization landscapes and structural design problems.

Adding further to this diverse spectrum, Saremi, Mirjalili [6] proposed the grasshopper optimization algorithm (GOA), deriving insights from the collective behavior of grasshopper swarms. Their study provides compelling evidence of its efficacy in solving optimization challenges, supported by rigorous benchmarking exercises and practical applications to intricate structural optimization scenarios. Finally, the work of Mirjalili, Gandomi [7] unveiled the salp swarm algorithm (SSA), inspired by the cooperative swarming behavior of salps. Their comprehensive exploration demonstrates the algorithm’s effectiveness in both single and multi-objective optimization landscapes, validated through mathematical function evaluations and real-world engineering design complexities.

In population-based evolutionary algorithms, the optimization process is commonly divided into two critical phases, irrespective of the algorithm’s specific characteristics [8, 9]. The initial phase, often referred to as exploration, is designed to scan the search landscape and identify high-potential regions; in this phase, large changes in search direction are made, which can uncover promising solutions. The subsequent phase, known as exploitation, focuses on refining existing solutions using the information gathered during exploration, and this information is employed to drive the algorithm’s convergence. Achieving a judicious balance between exploration and exploitation is considered essential for the algorithm to accomplish effective, comprehensive optimization.

The ongoing advancements in algorithmic design and optimization have captured significant scholarly attention [10]. This focus is substantiated by the commonly held belief that no single algorithm can universally address diverse optimization challenges. Consequently, a strong motivation has been observed among researchers to either augment existing methodologies or develop innovative algorithms capable of competing effectively with established solutions. In the specific area of multi-facility production scheduling, Pham, Trang [11] introduced an integration of the gray wolf optimizer (GWO) and the dragonfly algorithm (DA) to enhance optimization processes. In a similar vein, Son and Nguyen Dang [12] proposed an MVO model aimed at simultaneous time and cost optimization in small-scale scenarios. In the realm of environmental impact, Qiao, Lu [13] unveiled a hybrid algorithm that merges the lion swarm optimizer with a genetic algorithm (GA). The algorithm was found to improve both the stability and accuracy of carbon dioxide emissions forecasts, outperforming existing models. Regarding structural optimization, a study by Altay, Cetindemir [14] evaluated the SSA and introduced a modified version, termed modified SSA (MSSA), for optimizing truss system structures. The study found that, unlike SSA, MSSA effectively addresses convergence issues and proves especially effective for discrete problems. In the domain of construction, Pham and Soulisa [15] proposed a hybrid ant-lion optimizer (ALO) algorithm. This algorithm demonstrated improved capabilities for site layout planning by combining optimization techniques with heuristic methods. Meanwhile, Goksal, Karaoglan [16] introduced a heuristic solution for the vehicle routing problem, an NP-hard problem, by utilizing a particle swarm optimization (PSO) algorithm enhanced with variable neighborhood descent (VND) for local searches. Furthermore, Son, Duy [17] introduced a novel optimization algorithm that merges the DA and PSO to control construction material costs effectively.

1.2 Sine Cosine Algorithm

Since its inception in 2016, the SCA has garnered significant attention as a potential optimization technique. Its applications span diverse fields, addressing an array of complex issues. For example, in the realm of engineering, Shang, Zhou [18] unveiled a modified SCA to expedite convergence speed and promote population diversity. This modification involved redefining the position update formula and incorporating a Levy random walk mutation strategy for solving intricate engineering design problems. In the field of electrical networks, Raut and Mishra [19] introduced an SCA variant specifically tailored for the power distribution network reconfiguration (PDNR) problem. The algorithm aimed to minimize power loss as its sole objective. In a similar vein, Reddy, Panwar [20] presented a binary SCA aimed at optimizing the profit-based unit commitment (PBUC) problem in competitive electricity markets, demonstrating enhanced solution quality and convergence rates compared to existing methods. Within the sphere of bioinformatics and environmental science, Sahlol, Ewees [21] employed an SCA-optimized neural network model to enhance the prediction accuracy of oxidative stress biomarkers in fish liver tissue. Specifically, the model demonstrated improved performance when assessing the impact of varying selenium nanoparticle concentrations. For community detection and system modelling, Zhao, Zou [22] presented a discrete SCA tailored for community detection in complex networks. The algorithm showed superior effectiveness compared to existing methods like FM, BGLL, and GA on real-world network data. Aydin, Gozde [23] utilized both WOA and SCA for estimating critical parameters in photovoltaic (PV) cell models, targeting improved accuracy in system analysis and electrical generation efficiency.

Given the diverse nature of optimization problems, it is widely acknowledged that no single optimization algorithm can competently address them all [10]. As a result, there have been numerous investigations aimed at improving the effectiveness of the SCA. For instance, Cheng and Duan [24] proposed a hybrid version that combines SCA and the cloud model to handle benchmark test functions with different dimensions. Bureerat and Pholdee [25] developed a hybrid model that combines SCA and DE for detecting structural damage. Turgut [26] proposed a model that integrates the SCA with the backtracking search algorithm to effectively address multi-objective problems in heat exchanger design. Bairathi and Gopalani [27] improved SCA by integrating an opposition-based mechanism to train multi-layer neural networks. Qu, Zeng [28] introduced an upgraded version of the SCA by incorporating a neighborhood search technique and a greedy Levy mutation. Son and Nguyen Dang [29] proposed a hybrid SCA model to simultaneously optimize time and cost in large-scale projects. Finally, Pham and Nguyen [30] proposed an integrated SCA version with tournament selection, opposition-based learning (OBL), and mutation and crossover methods to handle cement transport routing.

1.3 The Motivation of this Study

Since its introduction, the SCA has witnessed growing popularity across various scientific disciplines, a trend primarily attributed to its straightforward methodology. However, the algorithm has been criticized for its tendency toward premature convergence, a drawback often ascribed to an inadequately defined exploitation strategy within its search landscape [31]. As a result, academic interest has been piqued in the development of enhanced versions of the SCA framework, viewed as potential solutions for overcoming the intricate challenges frequently encountered in optimization tasks.

Numerous efforts have been undertaken to enhance the efficacy of the SCA, encompassing a range of strategies including its fusion with OBL [27], its integration with tournament selection [30], incorporation of the Levy flight approach [18, 28], and hybridizations with other algorithmic paradigms [25, 26, 28]. However, the integration of both roulette wheel selection (RWS) and OBL to achieve a harmonious balance between the exploration and exploitation phases, and thereby pursue global optimization, remains an underexplored area. This study addresses that gap by unifying the RWS and OBL techniques within the SCA, aiming to provide a streamlined and efficient tool, termed nSCA, for tackling optimization challenges.

In the following section, the formulation of the nSCA is detailed. Section 3 is devoted to an exhaustive evaluation of the algorithm’s convergence properties, including an analysis of its performance metrics and behavioral patterns. Section 4 provides an empirical substantiation of the model’s efficacy, achieved through its application in five real-world optimization case studies. Finally, the key findings of the research are summarized in Sect. 5, where potential avenues for future academic inquiry are also delineated.

2 Novel Version of Sine Cosine Algorithm

2.1 Roulette Wheel Selection (RWS)

The RWS mechanism is extensively employed across various optimization algorithms, including cuckoo search (CS), PSO, DE, GA, and ant colony optimization (ACO), marking its prominence as a commonly adopted technique in optimization disciplines. Pandey, Kulhari [32] introduced a roulette wheel-based cuckoo search clustering method for sentiment analysis. This method was found to outperform existing clustering methods like K-means and GWO in terms of mean accuracy, precision, and recall across nine sentimental datasets. Zhu, Yang [33] introduced a ranking weight-based RWS method to enhance the performance of comprehensive learning PSO. Experimental results indicate that this method surpasses other selection techniques in overall optimization efficiency. Yu, Fu [34] presented an improved RWS method designed for GA, targeting the traveling salesman problem. The method showed enhanced result precision and faster convergence rates. Ho-Huu, Nguyen-Thoi [35] introduced ReDE, a variant of the DE algorithm enhanced with RWS and elitist techniques. This variant was aimed at optimizing truss structures with frequency constraints, and numerical results suggest it outperforms several existing optimization methods. Lloyd and Amos [36] conducted the first comprehensive analysis of Independent Roulette (I-Roulette), an alternative to standard RWS in parallel ACO. The study revealed its capability for dynamic adaptation and faster convergence, especially when implemented on high-performance parallel architectures like GPUs.

2.2 Opposition-Based Learning (OBL)

The OBL technique has garnered significant attention for its wide-ranging applicability and effectiveness in various optimization applications. Originally introduced by Tizhoosh [37] in 2005, OBL serves as a novel framework for computational intelligence, creating complementary solutions to existing ones. Subsequent work has extended the utility of OBL in different computational algorithms, thereby yielding promising results in terms of faster convergence and improved performance. For example, Verma, Aggarwal [38] proposed a modified firefly algorithm that incorporates OBL. This innovation not only enhances initial candidate solutions but also employs a dimension-based approach for updating the positions of individual fireflies. Experimental results confirmed faster convergence and superior performance in high-dimensional problems when compared to existing evolutionary algorithms. Similarly, Upadhyay, Kar [39] presented an opposition-based harmony search algorithm aimed at optimizing adaptive infinite impulse response system identification. They reported faster convergence rates and superior mean square error fitness values when compared to traditional optimization methods such as GA, PSO, and DE. In the realm of project management, Luong, Tran [40] introduced a novel algorithm termed opposition-based multiple objective differential evolution. This algorithm employs opposition numbers to address the time–cost-quality trade-off in construction projects, thereby improving both exploration and convergence rates. Wang, Wu [41] proposed an enhanced PSO algorithm named GOPSO, which incorporates generalized opposition-based learning along with Cauchy mutation. This approach was specifically designed to mitigate the problem of premature convergence in complex optimization scenarios. Ewees, Abd Elaziz [42] introduced OBLGOA, an enhanced GOA that incorporates OBL at two distinct stages. This implementation was shown to improve solution quality and reduce time complexity. The algorithm outperformed ten well-known optimization algorithms across twenty-three benchmark functions and four engineering problems. In summary, OBL has been effectively integrated into a variety of optimization algorithms, consistently offering advantages in terms of speed and performance.

2.3 Novel Version of SCA (nSCA)

In the nSCA algorithm, the location of each solution is specified by an array of variables. These arrays collectively constitute sets of solutions, which are systematically organized in a matrix format, as described in Eq. (1). Similarly, the sets of opposite solutions generated during the exploration stage are also presented in a matrix layout, as delineated in Eq. (2). These matrix-based representations facilitate the management and assessment of solutions within the algorithm, thereby enabling more effective exploration and optimization of the search landscape.

$$S=\begin{bmatrix}{s}_{1}^{1} & {s}_{1}^{2} & \cdots & {s}_{1}^{d}\\ {s}_{2}^{1} & {s}_{2}^{2} & \cdots & {s}_{2}^{d}\\ \vdots & \vdots & \ddots & \vdots \\ {s}_{N}^{1} & {s}_{N}^{2} & \cdots & {s}_{N}^{d}\end{bmatrix},$$
(1)
$${S}^{*}=\begin{bmatrix}{s}_{1}^{1*} & {s}_{1}^{2*} & \cdots & {s}_{1}^{d*}\\ {s}_{2}^{1*} & {s}_{2}^{2*} & \cdots & {s}_{2}^{d*}\\ \vdots & \vdots & \ddots & \vdots \\ {s}_{N}^{1*} & {s}_{N}^{2*} & \cdots & {s}_{N}^{d*}\end{bmatrix}.$$
(2)

In the initial population generation phase, the OBL method is utilized to create opposite solutions, as illustrated in Fig. 1. The specific process for incorporating OBL within nSCA is outlined in the accompanying pseudocode presented in Table 1. Subsequently, a fitness function evaluates both the randomly generated solutions and their oppositional counterparts. This evaluation identifies superior and inferior solutions. The algorithm retains the more performant solutions while discarding the less effective ones, thereby ensuring a consistent population size throughout the optimization process.

Fig. 1 The OBL concept

Table 1 Pseudocode of the nSCA

The opposite solution \({s}^{*}\) of the solution \(s\in [{b}_{l},{b}_{u}]\) can be identified as follows:

$${s}^{*}={b}_{u}+{b}_{l}-s,$$
(3)

where bl and bu denote the lower and upper bounds of the candidate solution s, respectively.

Given a solution S characterized by d parameters, where each parameter is constrained within \([{b}_{l,j},{b}_{u,j}]\), an opposite solution \({S}^{*}=({s}_{1}^{*},{s}_{2}^{*},{s}_{3}^{*},\dots ,{s}_{d}^{*})\) can be defined as follows:

$${s}_{j}^{*}={b}_{u,j}+{b}_{l,j}-{s}_{j},$$
(4)

where bl,j and bu,j denote the lower and upper limits of the jth dimension, respectively.
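
For illustration, a minimal Python sketch of Eqs. (3)–(4) and of the OBL-based initialization described above is given below. The array layout, the fixed per-dimension bounds, the sphere objective used in the demonstration, and the assumption of a minimization problem are illustrative choices for this sketch, not part of the original formulation.

```python
import numpy as np

def opposite(S, b_l, b_u):
    """Eq. (4): per-dimension opposite of each candidate in S (shape N x d)."""
    return b_u + b_l - S

def obl_initialize(fitness, N, d, b_l, b_u, rng=np.random.default_rng(0)):
    """Draw N random solutions, add their opposites, and keep the fittest N
    (a minimization objective is assumed in this sketch)."""
    S = b_l + (b_u - b_l) * rng.random((N, d))    # random population, Eq. (1)
    S_star = opposite(S, b_l, b_u)                # opposite population, Eq. (2)
    pool = np.vstack([S, S_star])                 # 2N candidates in total
    scores = np.array([fitness(x) for x in pool])
    keep = np.argsort(scores)[:N]                 # retain the N best, population size stays N
    return pool[keep], scores[keep]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))      # stand-in objective for the demo
    pop, fit = obl_initialize(sphere, N=25, d=10,
                              b_l=np.full(10, -100.0), b_u=np.full(10, 100.0))
    print(pop.shape, fit.min())
```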

Upon refreshing the solution set during the initial population creation phase, the solutions undergo sorting to identify the current best-performing candidate. Subsequently, each solution's normalized fitness score is computed. This computation is integral to the functioning of the RWS mechanism, as depicted in Fig. 2. The formula for calculating the normalized fitness score is articulated in Eq. (5), while the mathematical representation of the RWS mechanism is provided in Eq. (6). These computational processes and mechanisms are pivotal in guiding the algorithm's solution selection and subsequent exploratory activities.

Fig. 2 The RWS concept

$$NF\left({S}_{i}\right)=\frac{F\left({S}_{i}\right)}{\sqrt{\sum_{k=1}^{N}{F\left({S}_{k}\right)}^{2}}},$$
(5)
$${s}_{i}^{j}=\begin{cases}{s}_{1}^{j}, & {\sigma }_{2}<NF\left({S}_{i}\right)\\ {s}_{i}^{j}, & {\sigma }_{2}\ge NF\left({S}_{i}\right)\end{cases}.$$
(6)

In Eqs. (5) and (6), NF(Si) and F(Si) denote the normalized fitness value and the fitness value of the ith solution, Si, respectively. The notation \({s}_{i}^{j}\) represents the jth parameter of the ith solution, while \({s}_{1}^{j}\) refers to the jth parameter of the current best-performing solution. The variable σ2 is a random number that falls within the range of 0 to 1.
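
A brief sketch of how Eqs. (5)–(6) might be realized is given below. Since the text does not specify whether σ2 is drawn once per solution or once per parameter, a per-parameter draw is assumed here; this and the population layout are assumptions of the sketch.

```python
import numpy as np

def normalized_fitness(F):
    """Eq. (5): each fitness value divided by the Euclidean norm of all fitness values."""
    F = np.asarray(F, dtype=float)
    return F / np.sqrt(np.sum(F ** 2))

def rws_update(S, F, best_idx, rng=np.random.default_rng(0)):
    """Eq. (6): a parameter s_i^j is replaced by the best solution's s_1^j whenever
    a fresh random draw sigma_2 falls below NF(S_i)."""
    NF = normalized_fitness(F)
    S = S.copy()
    best = S[best_idx]                      # current best-performing solution
    for i in range(S.shape[0]):
        sigma2 = rng.random(S.shape[1])     # one draw per parameter (assumption)
        S[i] = np.where(sigma2 < NF[i], best, S[i])
    return S
```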

The partitioning of the optimization process into exploration and exploitation phases is a recurring theme in the existing literature, particularly in relation to population-based stochastic algorithms [8]. During the exploration phase, the optimization algorithm utilizes a higher degree of randomness to facilitate the combination of diverse solutions, swiftly identifying promising areas within the search space. In contrast, the exploitation phase concentrates on the refinement of existing solutions through incremental adjustments, exhibiting significantly reduced levels of stochastic variability relative to the exploration stage. Within the SCA framework, specific mathematical expressions, represented by Eq. (7), govern the updating of agent positions in both exploration and exploitation stages. These equations are pivotal as they guide the search mechanism of the SCA, thereby enabling efficient exploration and targeted exploitation of the search landscape.

$${s}_{j}^{t+1}=\begin{cases}{s}_{j}^{t}+{\sigma }_{1}\times \sin\left({\sigma }_{4}\right)\times \left|{\sigma }_{5}{P}_{j}^{t}-{s}_{j}^{t}\right|, & {\sigma }_{3}<0.5\\ {s}_{j}^{t}+{\sigma }_{1}\times \cos\left({\sigma }_{4}\right)\times \left|{\sigma }_{5}{P}_{j}^{t}-{s}_{j}^{t}\right|, & {\sigma }_{3}\ge 0.5\end{cases},$$
(7)

where \({s}_{j}^{t}\) represents the position of the solution in the jth dimension at the tth iteration; σ1 defines the direction and extent of movement; σ3 is a uniformly distributed random variable ranging between 0 and 1 that selects between the sine and cosine branches; σ4 serves as a stochastic variable that regulates the extent of movement toward or away from the target, while σ5 acts as a randomly determined weight for the destination; the position of the destination (target) solution in the jth dimension is denoted by \({P}_{j}^{t}\), and |·| denotes the absolute value.

Figure 3 illustrates the behavior of the sine and cosine functions over the interval [− 2, 2]. These trigonometric functions serve as versatile tools for navigational purposes, either by confining movement within the ranges defined by them or by facilitating extensions beyond these boundaries. Such flexibility is conducive to steering toward the desired objectives effectively. Importantly, the figure delineates the dynamic ranges of the sine and cosine functions, which play a crucial role in updating the positions of potential solutions. Furthermore, Eq. (7) introduces a stochastic variable, denoted as σ4, with a range between 0 and 2π. The inclusion of this stochastic element imbues the algorithm with a degree of randomness, thereby enhancing its exploratory capabilities. This feature allows for a more thorough evaluation of potential solutions within the given search landscape.

Fig. 3 The exploration and exploitation mechanisms of the SCA

During each iteration cycle, the range of the sine and cosine functions, as outlined in Eq. (7), is adaptively modified to achieve a balanced trade-off between exploration and exploitation. This is further illustrated in Fig. 4. This dynamic adjustment is specifically engineered to effectively identify promising regions within the search space, thus facilitating more efficient discovery of the optimal solution. The guidelines for this modification process are set forth in Eq. (8), where the constant v is designated a value of 2. In this equation, Icur symbolizes the current iteration count, and Imax represents the maximum number of iterations permitted.

Fig. 4 The range of sine and cosine exhibits a decreasing pattern

$${\sigma }_{1}=v-{I}_{\mathrm{cur}}\frac{v}{{I}_{\mathrm{max}}}.$$
(8)

In the exploitation stage, as detailed in the pseudocode for nSCA presented in Table 1, solution updates are carried out in accordance with Eq. (7). Following these updates, a jumping condition, denoted as JC in Eq. (9), is activated to dynamically generate an opposite solution in accordance with Eq. (10). It is noteworthy that this approach deviates from the methodology employed in the initial phase of population generation. Subsequent to the generation of opposite solutions, the objective function is applied to both the original solutions and the newly formed opposite solutions. The superior solution is retained, while the inferior one is eliminated. This process ensures that the population size remains constant, as mandated by Eq. (11).

$$JC=-{\left(\frac{{I}_{\mathrm{cur}}}{{I}_{\mathrm{max}}}\right)}^{2}+2\left(\frac{{I}_{\mathrm{cur}}}{{I}_{\mathrm{max}}}\right),$$
(9)
$$\text{Create the opposite solution } {S}_{i}^{*} \text{ of } {S}_{i} \text{ if } {\sigma }_{6}<JC,$$
(10)

where Si represents the ith solution while \({S}_{i}^{*}\) represents the opposite solution of the ith solution created by OBL; σ6 is a uniformly distributed random variable between 0 and 1.

$${S}_{\mathrm{new}}=\begin{cases}{S}_{i}, & \text{if } F\left({S}_{i}\right) \text{ is superior}\\ {S}_{i}^{*}, & \text{if } F\left({S}_{i}^{*}\right) \text{ is superior}\end{cases}.$$
(11)

3 Convergence Analysis

In the field of optimization, encompassing the application of evolutionary algorithms and metaheuristics, the validation of algorithmic effectiveness is critically dependent on the use of specialized test cases. This is particularly important given the inherently stochastic nature of these methodologies, where achieving optimal results requires the careful selection of a diverse and appropriate set of test functions. The aim of this section is to evaluate the performance of the nSCA algorithm, as substantiated through its application to 23 classical test functions, as well as the CEC2017 set. Each of these test functions has unique characteristics, designed to enable an in-depth assessment of the algorithm’s performance.

3.1 Convergence Analysis on Classical Benchmark Functions

The efficacy of the nSCA algorithm was rigorously assessed using an extensive set of 23 test functions [43,44,45]. These functions were grouped into three distinct categories, as outlined in Table 2: unimodal, multimodal, and composite (fixed) functions. The unimodal category consists of functions with a single global optimum and no local optimum, serving as a basis to evaluate the algorithm's capacity for rapid convergence and focused exploitation. In contrast, multimodal functions feature multiple local optima in addition to a global optimum, enabling a thorough assessment of the algorithm’s capability to navigate around local optima for effective exploration of the search space. Finally, the composite category includes modified versions of both unimodal and multimodal functions, altered through operations such as rotation, shifting, and bias. These composite functions are designed to evaluate the algorithm's adaptability and performance in complex optimization landscapes.

Table 2 23 classical benchmark test functions

To rigorously evaluate the performance capabilities of the nSCA algorithm in optimization tasks, an ensemble of 25 search agents was employed to locate the global optimum within a suite of 23 test functions. This experiment was conducted over a span of 300 iterations. The performance of nSCA was subsequently benchmarked against a selection of leading metaheuristic algorithms, including SSA, MVO, MFO, WOA, GOA, and the original SCA. Due to the stochastic components intrinsic to these algorithms, each was executed 30 times to ensure result reliability. Key statistical metrics, including average values (avg) and standard deviations (std), were calculated, and are presented in Tables 3, 4, 5 and 6. This comprehensive approach provides valuable insights into the comparative effectiveness of nSCA and other algorithms in optimization contexts.
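
As a hedged illustration of this reporting protocol, the sketch below runs a stochastic optimizer several times and summarizes the best objective values by their mean and standard deviation; the run_algorithm callable and its keyword arguments are placeholders, not the code used in this study.

```python
import numpy as np

def summarize(run_algorithm, runs=30, agents=25, iterations=300):
    """Run a stochastic optimizer `runs` times and report avg/std of the best values."""
    best_values = [run_algorithm(agents=agents, iterations=iterations, seed=s)
                   for s in range(runs)]
    return float(np.mean(best_values)), float(np.std(best_values))
```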

Table 3 Findings of unimodal test functions
Table 4 Findings of multi-modal test functions
Table 5 Findings of composite test functions
Table 6 Findings of composite test functions (continued)

In the realm of unimodal test functions, as evidenced by the results in Table 3, nSCA holds a marked advantage over its competitors. Specifically, within the scope of unimodal optimization, nSCA’s exploitation capabilities surpass those of SCA, MFO, MVO, WOA, SSA, and GOA in most test functions. These data effectively emphasize nSCA’s proficiency in handling unimodal optimization challenges. Regarding multimodal optimization, Table 4 provides data that confirm nSCA’s superior performance over SCA, MFO, MVO, WOA, SSA, and GOA in most test cases. This impressive showing reinforces nSCA’s capabilities in effectively navigating complex search spaces and avoiding local optima. Lastly, when examined in the context of composite test functions, nSCA shows performance metrics that are on par with those of SCA, WOA, MVO, SSA, GOA, and MFO, as illustrated in Tables 5 and 6. These results lend further support to nSCA’s considerable versatility and competitive edge when compared to other state-of-the-art optimization algorithms.

Additional performance metrics such as the convergence curve, average solution fitness, trajectory of the first solution, and search history were scrutinized to provide a more nuanced assessment of nSCA’s effectiveness. The study employed a configuration of 300 iterations and 25 search agents to examine three representative test functions (f1, f9, and f21). Each of these functions represents a different category: unimodal, multimodal, and composite, as depicted in Fig. 5. Analysis of the convergence curve and average fitness reveals a consistent improvement in the quality of the search agents over successive iterations. This observation underscores nSCA’s capability to enhance the quality of initially randomized solutions in specific optimization tasks.

Fig. 5 Convergence curve, average fitness of all solutions, trajectory of the first solution, and search histories of functions f1, f9 and f21

Analysis of the trajectory of the first solution underscores nSCA’s abilities in both convergence and local search optimization. This is supported by the notable fluctuations in average fitness levels during the exploration phase and the relatively stable metrics seen in the exploitation stage, as cited in reference [46]. Further, the search histories associated with functions f1, f9, and f21 substantiate nSCA’s aptitude for identifying and concentrating on high-potential regions within the search space. The incorporation of RWS and OBL mechanisms proves to be beneficial, facilitating initial exploration and contributing to the ultimate convergence of optimal solutions initially identified during the exploration phase.

Figures 6, 7 and 8 display the convergence patterns for the 23 test functions, obtained over 150 iterations employing 25 search agents. The findings suggest that nSCA converges more efficiently than the original SCA, MVO, MFO, SSA, GOA, and WOA on the majority of the test functions analyzed.

Fig. 6 Convergence behavior of nSCA, SCA, SSA, MVO, MFO, WOA, and GOA for unimodal test functions

Fig. 7 Convergence behavior of nSCA, SCA, SSA, MVO, MFO, WOA, and GOA for multimodal test functions

Fig. 8 Convergence behavior of nSCA, SCA, SSA, MVO, MFO, WOA, and GOA for composite test functions

3.2 CEC2017 Benchmark Test Functions

The CEC2017 test functions constitute a specialized set of benchmarks, introduced at the 2017 IEEE Congress on Evolutionary Computation (CEC), focusing on the optimization of real parameters. Building upon the groundwork established by previous benchmark suites, the CEC2017 collection is designed to present a diverse range of challenges to optimization algorithms. These functions are generally considered to provide more realistic problem scenarios in comparison to the traditional set of 23 benchmark functions.

Spanning both unimodal and multimodal optimization landscapes, the CEC2017 suite also encompasses separable and non-separable problem domains. Moreover, it incorporates shifted and rotated variations, thereby offering a comprehensive environment for the testing of optimization algorithms. This extensive array of test scenarios enables researchers to conduct in-depth evaluations, thereby discerning the merits and limitations of various optimization methods under different conditions.

The efficacy of nSCA is evaluated using the IEEE CEC2017 benchmark suites [47]. These test functions are categorized into four distinct groups: unimodal, multimodal, hybrid, and composition. Table 7 offers a comprehensive breakdown of the definitions associated with the CEC2017 benchmark challenges. To increase the level of complexity and rigorously assess the capabilities of the proposed method in handling complex optimization problems, all functions within the CEC2017 suite are configured as 30-dimensional problems.

Table 7 CEC2017 benchmark functions

Tables 8 and 9 provide an in-depth statistical comparison between nSCA and other swarm-based optimization algorithms such as SSA, MVO, MFO, WOA, GOA, and the original SCA. To ensure a rigorous and unbiased evaluation, each algorithm was executed 30 times on a variety of benchmark functions. Statistical metrics like mean values (avg) and standard deviations (std) were subsequently calculated from these multiple runs. For the purposes of this study, a cohort of 50 search agents was deployed, each limited to a maximum of 300 iterations. A careful analysis of the data presented in Tables 8 and 9 clearly shows that nSCA consistently outperforms its counterparts, specifically SSA, MVO, MFO, WOA, GOA, and the original SCA, in various benchmark categories including unimodal, multimodal, hybrid, and composition functions.

Table 8 Results of different algorithms on CEC 2017 test functions
Table 9 Results of different algorithms on CEC 2017 test functions (continued)

4 Engineering Optimization Challenges

The purpose of this section is to assess the performance of nSCA as evidenced through its deployment in five real-world technical optimization problems, each characterized by varying inequality constraints. The primary focus lies in evaluating the capability of the algorithm to manage these constraints effectively throughout the optimization process.

4.1 Cantilever Beam Design Challenge

The objective of this optimization task is to minimize the weight of a cantilever beam constructed from hollow square blocks. The structure consists of five such blocks; the first block is fixed in position, and the fifth is subjected to a vertical load. A visual representation of the five parameters that determine the cross-sectional geometry of the blocks is provided in Fig. 9. Detailed formulations for addressing this problem can be found in Appendix 1.

Fig. 9 Cantilever beam design challenge
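
For readers who want a concrete starting point, the sketch below encodes the cantilever-beam objective in the form commonly used in the literature (weight proportional to the sum of the five section parameters, with a single inequality constraint) and handles the constraint with a simple static penalty; Appendix 1 remains the authoritative formulation, and the penalty scheme is an illustrative choice rather than the method used here.

```python
import numpy as np

def cantilever_cost(x, penalty=1e6):
    """Commonly cited cantilever formulation: x = [x1, ..., x5], section parameters."""
    x = np.asarray(x, dtype=float)
    weight = 0.0624 * np.sum(x)                  # objective: beam weight
    g = (61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3
         + 7 / x[3]**3 + 1 / x[4]**3 - 1)        # inequality constraint, g <= 0
    return weight + penalty * max(0.0, g) ** 2   # static penalty for infeasible designs
```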

The findings from an exhaustive analysis of this task are summarized in Table 10, which provides a comprehensive breakdown of key performance indicators. The data convincingly demonstrate that the nSCA algorithm consistently yields results that are either commensurate with or superior to those of leading optimization algorithms such as COA [52], RFO [51], GOA [6], MVO [1], ALO [50], CS [48] and SOS [49]. These findings strongly substantiate the algorithm’s capability to address and optimize complex, constraint-bound problems effectively. Additionally, the results underscore the algorithm's aptitude for real-world engineering applications, highlighting its proficiency in navigating intricate problem landscapes.

Table 10 Comparison findings of cantilever beam design challenge

4.2 Pressure Vessel Design Challenge

The primary objective of this optimization task is the reduction of manufacturing costs associated with the fabrication of a pressure vessel. A representation of the vessel’s unique design, featuring one flat and one hemispherical end, is illustrated in Fig. 10. The variables subject to optimization encompass the inner radius (R), shell thickness (Ts), length of the cylindrical section exclusive of the head (L), and the head's thickness (Th). These variables are pivotal in establishing the optimal design of the vessel. Specific mathematical equations and constraints have been formulated to encapsulate the dual aim of cost minimization and design requirement adherence. Comprehensive formulations for this task can be found in Appendix 1.

Fig. 10 Pressure vessel design challenge
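
A hedged sketch of the pressure-vessel cost function and its four inequality constraints, as they commonly appear in the literature with x = [Ts, Th, R, L], is given below; Appendix 1 remains the authoritative formulation, and the static penalty is again only an illustrative constraint-handling choice.

```python
import math

def vessel_cost(x, penalty=1e6):
    """Commonly cited pressure-vessel formulation: x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)
    g = [-Ts + 0.0193 * R,                                        # shell thickness vs. radius
         -Th + 0.00954 * R,                                       # head thickness vs. radius
         -math.pi * R**2 * L - (4 / 3) * math.pi * R**3 + 1_296_000,  # minimum volume
         L - 240]                                                 # length limit
    return cost + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```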

The outcomes of a comprehensive evaluation of this problem are summarized in Table 11, which offers a detailed analysis of various performance metrics. The data presented in this table confirm the reliable effectiveness of the nSCA algorithm, often matching or even surpassing other well-established optimization methods such as SCSO [58], RFO [51], AOA [57], GSA [1], MVO [1], ACO [56], ES [55], DE [54], and PSO [53]. These results robustly endorse the capabilities of nSCA in proficiently navigating the search space, an ability further augmented by the integration of roulette wheel selection (RWS) and opposition-based learning (OBL). Additionally, the findings underscore the algorithm’s versatility, demonstrating its suitability for application in engineering contexts, particularly in instances where the attributes of the search domain are either ambiguous or poorly defined.

Table 11 Comparison findings of pressure vessel design challenge

4.3 Three-Bar Truss Design Challenge

The primary objective of this challenge is the weight reduction of the truss structure, to be achieved within the boundaries of various constraints. Successful truss design necessitates the consideration of essential limitations, including those related to stress, deflection, and buckling factors. The engineering characteristics pertinent to this issue are illustrated in Fig. 11. Although the objective function may appear straightforward, it is governed by multiple intricate constraints, rendering the achievement of an optimal solution notably challenging. Detailed formulations relevant to this problem are provided in Appendix 1.

Fig. 11 Three-bar truss design challenge

Table 12 provides an exhaustive comparison between the nSCA and various state-of-the-art optimization methods, including GOA [6], MVO [1], ALO [50], MBA [63], CS [48], PSO-DE [62], DEDS [61], as well as models put forth by Ray and Saini [61] and Tsai [62]. The data strongly suggest that nSCA consistently performs at a level comparable to the best algorithms in the field, thereby establishing itself as a formidable competitor in achieving optimal outcomes.

Table 12 Comparison findings of three-bar truss design challenge

4.4 Gear Train Design Challenge

The objective of this technical task, illustrated in Fig. 12, is to minimize the gear ratio by optimizing four discrete variables: the tooth counts of gears nA, nB, nC, and nD. The gear ratio measures the relationship between the angular speeds of the output and input shafts. These discrete variables are incremented in steps of one, and the problem formulation emphasizes constraints on their permissible range. Detailed specifications related to this challenge are delineated in Appendix 1.

Fig. 12 Gear train design challenge
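
The sketch below encodes the gear-train objective in its commonly cited form, namely minimizing the squared difference between the achieved gear ratio and 1/6.931 with integer tooth counts typically bounded in [12, 60]; the exact pairing of gears in the numerator and denominator is an assumption that should be checked against Appendix 1.

```python
def gear_train_cost(nA, nB, nC, nD):
    """Commonly cited gear-train objective with integer tooth counts in [12, 60]."""
    ratio = (nB * nD) / (nA * nC)            # assumed definition of the gear ratio
    return (1.0 / 6.931 - ratio) ** 2        # squared deviation from the target ratio

# Example evaluation of one integer candidate
print(gear_train_cost(49, 16, 43, 19))
```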

Table 13 presents an in-depth comparison between the nSCA and a range of well-known optimization techniques. The data in this table highlight a remarkable similarity in the performance of nSCA to that of leading optimization methods, including MVO [1], ISA [66], CS [48], MBA [63], ABC [63], as well as models developed by Deb and Goyal [65] and Kannan and Kramer [64]. These results strongly affirm the effectiveness of the proposed nSCA algorithm, demonstrating its capabilities even when faced with challenges involving discrete variables. The proficiency of nSCA in managing discrete variables expands its range of applicability and emphasizes its suitability for addressing a diverse array of optimization problems across various disciplines.

Table 13 Comparison findings of gear train design challenge

4.5 Welded Beam Design Challenge

The overarching aim of this engineering task is the minimization of manufacturing costs associated with a welded beam. An overview of the system and structural parameters relevant to this challenge is provided in Fig. 13, emphasizing four principal design variables: the length of the attached bar (l), weld thickness (ℎ), the thickness of the bar (b), and the height of the bar (t). For the design to be considered feasible, the beam must satisfy seven specific constraints when subjected to a top-applied load. These constraints encompass various factors, such as side constraints, end deflection of the beam (δ), shear stress (τ), bending stress in the beam (θ), and the buckling load on the bar (Pc). Comprehensive formulations pertinent to this task are outlined in Appendix 1.

Fig. 13 Welded beam design challenge

Table 14 presents a comprehensive comparison between the nSCA and various other cutting-edge optimization techniques. The findings presented in the table offer compelling evidence that the nSCA consistently achieves superior outcomes when juxtaposed with established algorithms, including SSA [68], RFO [51], MVO [1], GSA [1], CPSO [1], HS [53], and GA [67]. The outcomes elucidated in Table 14 distinctly illustrate that the nSCA proficiently identifies optimal solutions even within the confines of complex constrained challenges.

Table 14 Comparison findings of welded beam design challenge

The remarkable performance of the nSCA in effectively navigating intricate problem spaces serves to underscore its potential in addressing practical engineering applications marked by multifaceted and intricate constraints. This further underscores the significant role that the nSCA plays as a valuable instrument within the domain of engineering optimization. Its capabilities offer promising avenues for the enhancement of problem-solving strategies and the facilitation of effective decision-making processes.

5 Conclusion

This study introduces an innovative approach that synergistically merges the roulette wheel selection (RWS) mechanism with opposition-based learning (OBL) to enhance the efficacy of the sine cosine algorithm (SCA) in navigating intricate search spaces. This integration gives rise to a novel iteration of the SCA, referred to as nSCA. The comprehensive assessment of nSCA performance is conducted through comparative experiments involving a range of state-of-the-art algorithms, including MVO, MFO, SSA, WOA, GOA, and the original SCA. To rigorously gauge its capabilities, 23 classical benchmark test functions and the CEC2017 suite are employed, offering a thorough benchmarking of nSCA performance. Additionally, the practical effectiveness of nSCA is demonstrated by successfully addressing five distinct engineering optimization problems. The outcomes underscore the superiority of nSCA when compared to alternative evolutionary computation approaches, highlighting its ability to generate highly competitive solutions across both benchmark test functions and real-world engineering optimization challenges. These findings emphasize the value of nSCA as a valuable tool in the domain of engineering optimization, promising significant contributions to problem-solving strategies and decision-making processes. Given these substantial insights, it is evident that nSCA presents an impactful and robust approach well-equipped to address intricate optimization challenges encountered in real-world scenarios.