Published in: Complex & Intelligent Systems 1/2024

Open Access 27-07-2023 | Original Article

Evolving deep gated recurrent unit using improved marine predator algorithm for profit prediction based on financial accounting information system

Authors: Xue Li, Mohammad Khishe, Leren Qian



Abstract

This research proposes a hybrid model combining an improved marine predator algorithm (IMPA) with a deep gated recurrent unit (DGRU) for profit prediction in financial accounting information systems (FAIS). The study addresses the real-time processing challenge posed by the increasing complexity of hybrid networks as dataset sizes grow. To enable effective comparison, a new dataset is created using 15 input parameters drawn from the original Chinese stock market Kaggle dataset. Additionally, five DGRU-based models are developed, incorporating the chaotic MPA (CMPA) and the nonlinear MPA (NMPA), as well as the best Levy-based variants, namely the dynamic Levy flight chimp optimization algorithm (DLFCHOA) and the Levy-based grey wolf optimization algorithm (LGWO). The results indicate that the most accurate model for profit forecasting among the tested algorithms is DGRU-IMPA, followed by DGRU-NMPA, DGRU-LGWO, DGRU-DLFCHOA, DGRU-CMPA, and the traditional DGRU. The findings highlight the potential of the proposed hybrid model to improve profit prediction accuracy in FAIS, leading to enhanced decision-making and financial management.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

In recent years, the use of complex and intelligent systems [1] has gained widespread attention in various fields, including finance [2], feature extraction [3], multimodal fusion [4], situation-aware systems [5], and social media [6]. These models have shown significant success in predicting future trends and identifying patterns in financial data [7]. One popular complex deep learning model is the gated recurrent unit (GRU), which has been widely used for time-series data analysis due to its ability to capture long-term dependencies [8].
However, the performance of GRU models heavily relies on the appropriate selection of parameters and optimization algorithms. In addition, financial data are often complex and noisy [9], which further complicates the modeling process [10, 11]. To address these challenges, this paper proposes an IMPA to optimize the parameters of a deep GRU model for predicting profits based on FAIS data [12].
The IMPA algorithm is inspired by the hunting behavior of marine predators and has shown superior performance compared to other optimization algorithms in various applications. We propose an evolving deep GRU architecture that utilizes the IMPA algorithm for parameter optimization. The proposed model is trained on a large dataset of FAIS data, including financial ratios, income statements, and balance sheets.
The objective of this paper is to evaluate the performance of the proposed model and compare it with other state-of-the-art models for profit prediction. The results show that our proposed model outperforms other models in terms of accuracy and generalization on various datasets. The proposed model provides a reliable tool for financial analysts and decision-makers in predicting future profits based on FAIS data.

Motivations

Given the limitations mentioned above, we were motivated to address both exploration and exploitation in our algorithm. By doing so, we aim to improve the convergence speed and prevent the algorithm from getting stuck in local optima, in line with the no-free-lunch principle.
Out of various optimization algorithms, we chose MPA [13] due to its simplicity, limited parameter requirements, and flexibility. However, MPA often struggles with complex optimization problems and may get stuck in local optima. To address this issue, we propose a dynamic flight behavior (between Levy and Gaussian) to enhance MPA's performance on challenging tasks. Thus, our paper's primary contributions are as follows.

Contribution

  • IMPA, a dynamic flight-based variation of the classical MPA, has been introduced to enhance the adaptability and convergence speed of the traditional method.
  • The nonlinearity and uncertainty notions from the existing MPA are utilized in IMPA to locate a predator far from the population and produce a solution with higher fitness than the current attacker (best search agent).
  • A deep learning (DL) architecture has been developed and verified for teaching efficient trading techniques using the IMPA method.
  • The IMPA method and five baseline optimization algorithms (two improved versions of MPA, two dynamic Levy-behaved algorithms, and standard MPA) have been utilized to evolve a conventional DGRU, addressing the two main problems with gradient descent learning algorithms—getting stuck in local minima and the poor convergence rate issue.
The remaining sections of the paper are structured as follows: Section "Related Works" provides an overview of the most relevant literature in the field. Section "Related Terminology" presents the key concepts, including the DGRU architecture and the MPA mathematical model. Section "Proposed Methodology" presents the hybrid recommended methodology. Section "Experimental results and discussion" details the experimental design, the dataset used, and the resulting outcomes. Finally, in Section "Conclusion", the findings are summarized and concluded.
Related works

Artificial intelligence (AI) techniques, evolutionary algorithms [14], including metaheuristics [15], swarm-based algorithms [16], and machine learning [17], have been widely used across various fields of research. Reference [18] provides an overview of AI techniques and emphasizes the value of technical indicators in the process. However, the problem of selecting the right technical indicators remains open. Statistically based works have been criticized as ineffective, producing inferior results compared to AI-based models [19]. Moreover, the unique properties of financial time series make their prediction more challenging than that of other time series, rendering traditional statistical methods unproductive [20].
The first artificial neural network (ANN) for financial sector prediction was developed by reference [21], using IBM's daily stock prices as a database. It did not produce the expected outcomes, however, and drew attention to challenges such as overfitting and the limited capacity of the ANN, which used only a few parameters and a single hidden layer. Future works could evaluate a larger number of features, use other forecasting horizons, and assess model profitability. DL was identified as a topic for future research [20], and reference [21] presented a number of hybrid systems, including those that used fuzzy logic, DL, and ANNs.
References [22] and [23] highlighted the widespread use of ANN and their superior performance over fuzzy, support vector machines, and decision trees due to their higher generalization potential. Reference [24] concluded that time-series classification using DL approaches might match state-of-the-art performance. Technical analysis is frequently used for identifying reversal points, forecasting trends, and making short-term investments [20]. Thus, the training process's length is essential to consider, with most earlier works employing daily candles for an evaluation period of at least one day. Only five of the 81 technical analysis-based papers in the review by reference [22] used intraday candles, indicating a potential difference in future efforts.
DL architectures require much more data than this number of daily candles, according to reference [25], which surveyed DL approaches for predicting financial time series. Recurrent neural networks are among the most extensively studied by researchers, but many authors also incorporated information from fundamental analysis, data augmentation [26], news, uncertain demands [27], value management, market reaction [28], and technical indicators, as highlighted by reference [25]. Moreover, sentiment analysis has been used in the financial market, with varying degrees of success, thanks to advancements in natural language processing and text-matching techniques [29–31]. Combining past price data with new information has produced outcomes that outperform models that solely consider open, high, low, close, and volume (OHLCV) [32].
According to reference [33], trading methods were not typically employed in prior works, and profitability was rarely assessed, supporting the conclusion of reference [34] that much research does not establish profitability, leading to several inconsistent models over time. These difficulties led reference [33] to add the two final phases of trading-strategy selection and profitability evaluation to the conventional approach for financial prediction. A fully autonomous system is crucial for accurate financial validation, since reference [35] noted that the metrics used for machine learning algorithms have only a limited association with financial measures. Reference [36] examined 85 papers and found that only 31 utilized a trading strategy, supporting the need for this new methodology.
By considering these research gaps and challenges, our proposed hybrid model, which combines the IMPA with the DGRU, addresses several limitations in the existing literature. The IMPA algorithm enhances the adaptability and convergence speed of traditional optimization methods, allowing for better optimization of the DGRU model's parameters. Additionally, by leveraging a comprehensive dataset from financial accounting information systems, our model aims to capture a broader range of factors that influence profit prediction.
In summary, the existing literature has shown progress in profit prediction using AI techniques, but there are still significant research gaps in terms of technical indicator selection, integration of multiple data sources, and assessment of profitability. Our proposed model aims to address these gaps by providing a comprehensive solution that leverages advanced optimization algorithms and incorporates a wide range of input parameters from financial accounting information systems.
Related terminology

The related terminology, including the MPA and the DGRU, is presented in this section.

Marine predator algorithm

The majority of metaheuristic algorithms, including MPA, use a population-based approach. Therefore, the initial solution is often scattered throughout the search space and evaluated as demonstrated below [13]:
$$ L_{0} = L_{\min } + rand\left( {L_{\max } - L_{\min } } \right), $$
(1)
where Lmin and Lmax represent the lower and upper bounds of the variables, and rand is a uniform random vector in the range 0–1. As shown in Eq. (2), the best solutions are stored in a matrix referred to as "Elite," which represents the top predator. The arrays of this matrix are used to track the prey based on information about its location.
$$ {\mathbf{Elite}} = \left[ \begin{array}{llll} L_{1,1}^{I} &\quad L_{1,2}^{I} &\quad \cdots &\quad L_{1,d}^{I} \\ L_{2,1}^{I} &\quad L_{2,2}^{I} &\quad \cdots &\quad L_{2,d}^{I} \\ \vdots &\quad \vdots &\quad \ddots &\quad \vdots \\ L_{n,1}^{I} &\quad L_{n,2}^{I} &\quad \cdots &\quad L_{n,d}^{I} \end{array} \right]_{n \times d} . $$
(2)
The superscript I denotes the top predator: its vector is replicated to build the Elite matrix, where n is the total number of search agents and d is the total number of dimensions. If a stronger agent emerges, it replaces the top predator and the Elite matrix is updated. The Prey matrix has the same dimensions as the Elite matrix, and the agents update their positions with respect to it. The strongest predator constitutes the Elite, and the Prey is generated in the early stages. The Prey matrix is given by Eq. (3) [13]:
$$ {\mathbf{Prey}} = \left[ \begin{array}{llll} L_{1,1}^{{}} \,&\quad L_{1,2}^{{}} \,&\quad \cdots \,&\quad L_{1,d}^{{}} \hfill \\ L_{2,1}^{{}} \,&\quad L_{2,2}^{{}} \,&\quad \cdots \,&\quad L_{2,d}^{{}} \hfill \\ \, \vdots \,&\quad \vdots \,&\quad \vdots \,&\quad \vdots \hfill \\ L_{n,1}^{{}} \,&\quad L_{n,2}^{{}} \,&\quad \cdots \,&\quad L_{n,d}^{{}} \hfill \\ \end{array} \right]_{n \times d} , $$
(3)
where the jth dimension of the ith predator is represented by a value of Li,j. The optimization process of the MPA is closely linked to the Elite and Prey matrices. The MPA's optimization procedure involves three steps, which are outlined below [13]:
  • Phase 1, high-velocity ratio (HVR) In this scenario, the prey moves faster than the predator; it arises during the first third of the optimization, when exploration dominates. This principle can be mathematically expressed as follows:
    $$ \begin{aligned} & \text{While } {\text{Iter}} < \tfrac{1}{3}{\text{Max}}({\text{Iter}}){:} \\ & {\mathbf{stepsize}}_{{\mathbf{i}}} = {\mathbf{R}}_{{{\mathbf{BM}}}} \otimes \left( {{\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{R}}_{{{\mathbf{BM}}}} \otimes {\mathbf{Prey}}_{{\mathbf{i}}} } \right), \quad i = 1, \ldots, n, \\ & {\mathbf{Prey}}_{{\mathbf{i}}} = {\mathbf{Prey}}_{{\mathbf{i}}} + P.{\mathbf{R}} \otimes {\mathbf{stepsize}}_{{\mathbf{i}}} , \end{aligned} $$
    (4)
    where RBM is a vector of normally distributed random numbers representing Brownian motion, and the ⊗ symbol denotes entry-by-entry multiplication. Multiplying RBM by the prey mimics the prey's movement. P is a fixed constant, and R is a vector of uniform random values in the range 0–1.
  • Phase 2, the unit-velocity ratio (UVR) During this stage, the prey and predator move in a similar manner, and both the exploitation and exploration phases are crucial. As a result, the agents are evenly divided between exploration and exploitation, and both the predator and the prey are responsible for these tasks. If the prey moves in a Levy flight fashion (v≈1), Brownian motion is the safest strategy for the predator in the UVR. Based on the findings of the study, the prey moves in Levy flight, while the predators move in Brownian motion, as shown in Eq. (5).
    $$ \begin{aligned} & \text{While } \tfrac{1}{3}{\text{Max}}({\text{Iter}}) < {\text{Iter}} < \tfrac{2}{3}{\text{Max}}({\text{Iter}}){:} \\ & \text{for the first half of the population:} \\ & {\mathbf{stepsize}}_{{\mathbf{i}}} = {\mathbf{R}}_{{{\mathbf{LF}}}} \otimes \left( {{\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{R}}_{{{\mathbf{LF}}}} \otimes {\mathbf{Prey}}_{{\mathbf{i}}} } \right), \quad i = 1, \ldots, n/2, \\ & {\mathbf{Prey}}_{{\mathbf{i}}} = {\mathbf{Prey}}_{{\mathbf{i}}} + P.{\mathbf{R}} \otimes {\mathbf{stepsize}}_{{\mathbf{i}}} . \end{aligned} $$
    (5)
    The MPA generates the random vector RLF using Levy flight. Multiplying RLF by the prey accounts for its movement, while adding the step size to the prey's position simulates Levy-flight motion. For the remaining 50% of the population, the MPA prescribes:
    $$ \begin{aligned} & {\mathbf{stepsize}}_{{\mathbf{i}}} = {\mathbf{R}}_{{{\mathbf{BM}}}} \otimes \left( {{\mathbf{R}}_{{{\mathbf{BM}}}} \otimes {\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{Prey}}_{{\mathbf{i}}} } \right), \quad i = n/2, \ldots, n, \\ & {\mathbf{Prey}}_{{\mathbf{i}}} = {\mathbf{Elite}}_{{\mathbf{i}}} + P.{\text{CEFA}} \otimes {\mathbf{stepsize}}_{{\mathbf{i}}} , \\ & {\text{CEFA}} = \left( 1 - \frac{{\text{Iter}}}{{{\text{Max}}({\text{Iter}})}} \right)^{2 \times \frac{{\text{Iter}}}{{{\text{Max}}({\text{Iter}})}}} . \end{aligned} $$
    (6)
    RBM and Elite compounded in Brownian motion mimic the movements of the predator, while CEFA functions as a factor that may be utilized to alter the predator's step sizes, and the prey changes its position in response to the predator's motions.
  • Phase 3, low-velocity ratio (LVR) When the prey is moving at a slower pace than the predator, the scenario can be summarized as follows:
    $$ \begin{aligned} & \text{While } {\text{Iter}} > \tfrac{2}{3}{\text{Max}}({\text{Iter}}){:} \\ & {\mathbf{stepsize}}_{{\mathbf{i}}} = {\mathbf{R}}_{{{\mathbf{LF}}}} \otimes \left( {{\mathbf{R}}_{{{\mathbf{LF}}}} \otimes {\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{Prey}}_{{\mathbf{i}}} } \right), \quad i = 1, \ldots, n, \\ & {\mathbf{Prey}}_{{\mathbf{i}}} = {\mathbf{Elite}}_{{\mathbf{i}}} + P.{\text{CEFA}} \otimes {\mathbf{stepsize}}_{{\mathbf{i}}} . \end{aligned} $$
    (7)
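The three-phase schedule above can be summarized as a simple dispatch on the iteration counter (an illustrative Python sketch, not the authors' code; the per-phase update rules themselves follow Eqs. (4)–(7)):

```python
def mpa_phase(iteration, max_iter):
    """Return the active MPA phase for the current iteration.

    Phase 1 (HVR): first third, Brownian steps, pure exploration.
    Phase 2 (UVR): middle third, half Levy (prey) / half Brownian (predator).
    Phase 3 (LVR): last third, Levy steps, exploitation-dominated.
    """
    if iteration < max_iter / 3:
        return "HVR"
    elif iteration < 2 * max_iter / 3:
        return "UVR"
    return "LVR"
```

Dedicating one third of the iterations to each phase is what the MPA authors found to outperform either switching back and forth or running a single stage throughout.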
Moreover, marine predators' behavior can be influenced by environmental or biological factors such as fish aggregation devices or Eddy formation impacts (EFI), which are considered as operators for avoiding local optima. During the simulation, longer hops should be considered to reduce the risk of stagnation in local optima. Therefore, the impact of EFI can be summarized as follows:
$$ {\mathbf{Prey}}_{{\mathbf{i}}} = \left\{ \begin{array}{ll} {\mathbf{Prey}}_{{\mathbf{i}}} + \left[ (rand - 1) \times EFIs + r \right] \left( {\mathbf{Prey}}_{r2} - {\mathbf{Prey}}_{r1} \right) & {\text{if }} rand > EFIs, \\ {\mathbf{Prey}}_{{\mathbf{i}}} + {\text{CEFA}} \times \left[ {\mathbf{R}} \otimes \left( {\mathbf{L}}_{\max } - {\mathbf{L}}_{\min } \right) + {\mathbf{L}}_{\min } \right] \otimes {\mathbf{U}} & {\text{if }} rand \le EFIs. \end{array} \right. $$
(8)
Here, the value EFIs = 0.2 reflects the probability of the eddy effect influencing the optimization process. U is a binary vector, constructed by drawing a uniform random array and setting each entry to 1 if it exceeds 0.2 and to 0 otherwise. The uniformly generated random number r lies in the range 0–1. Lmin and Lmax denote each dimension's lower and upper bounds, respectively, and r1 and r2 are randomly generated indexes into the Prey matrix.
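Eq. (8) can be sketched as follows (a minimal pure-Python illustration; `cefa` is the adaptive factor from Eq. (6), and the function name and list-based representation are our own):

```python
import random

def apply_efi(prey, i, l_min, l_max, cefa, efis=0.2, rng=random):
    """Eddy-formation-impact jump for agent i (Eq. 8)."""
    d = len(prey[i])
    if rng.random() > efis:
        # Long jump along the difference of two randomly chosen agents.
        r = rng.random()
        r1, r2 = rng.randrange(len(prey)), rng.randrange(len(prey))
        return [prey[i][k] + ((rng.random() - 1) * efis + r)
                * (prey[r2][k] - prey[r1][k]) for k in range(d)]
    # Random relocation inside the bounds, gated by a binary vector U.
    u = [1 if rng.random() > efis else 0 for _ in range(d)]
    return [prey[i][k] + cefa * (rng.random() * (l_max - l_min) + l_min) * u[k]
            for k in range(d)]
```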
Oceanic predators possess exceptional perceptual abilities and can remember precise details of their previous hunting grounds with remarkable accuracy. This remarkable memory storage ability can be emulated using the MPA. By modifying the prey and implementing EFIs, we can alter the predator–prey interaction network, known as the Elite matrix. To evaluate the effectiveness of this approach, search agents from the most recent iteration are given greater importance than those from previous iterations.
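Putting the initialization together, Eqs. (1)–(3) amount to scattering the agents uniformly in the bounded space and replicating the fittest one into the Elite matrix (a hedged sketch; `fitness` is a user-supplied objective and minimization is assumed):

```python
import random

def init_population(n, d, l_min, l_max, rng=random):
    """Scatter n agents uniformly inside [l_min, l_max]^d (Eq. 1)."""
    return [[l_min + rng.random() * (l_max - l_min) for _ in range(d)]
            for _ in range(n)]

def build_elite(prey, fitness):
    """Replicate the fittest agent n times to form the Elite matrix (Eq. 2)."""
    best = min(prey, key=fitness)  # minimization problem assumed
    return [list(best) for _ in prey]
```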

Deep gated recurrent unit

GRUs are a type of recurrent neural network (RNN) architecture used for processing sequential data, such as natural language, speech, and time-series data. GRUs were first introduced as a simpler alternative to the more complex long short-term memory (LSTM) network [37]. GRUs are designed to address the vanishing gradient problem, which can occur in standard RNNs when the gradient of the loss function becomes too small to effectively update the network's parameters.
The architecture of a GRU consists of a set of gating mechanisms that regulate the flow of information through the network. These gates, typically a reset gate and an update gate, control how much of the previous state is retained and how much new information is integrated into the current state. The reset gate determines which elements of the previous state should be forgotten, while the update gate determines how much of the new input should be added to the current state.
One of the key features of GRUs is their ability to selectively retain or discard information from the previous time step, which makes them particularly effective for modeling long-term dependencies in sequential data. Additionally, GRUs are more flexible than standard RNNs, as they allow for variable-length input sequences and can handle missing data or irregular sampling rates.
GRUs have been shown to outperform other RNN models on a wide range of tasks, including language modeling, machine translation, and speech recognition. They have also been used in various applications in computer vision and signal processing, such as image captioning, music generation, and anomaly detection. Overall, GRUs are a powerful and versatile tool for modeling sequential data, and their simplicity and efficiency make them a popular choice in the field of deep learning.
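The gating described above can be written out for a single time step. The following scalar-state sketch is purely illustrative (the paper's DGRU uses vector states and stacked layers); it follows the common convention h' = (1 − z)·h + z·h̃:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update with scalar input/state; p holds the nine parameters."""
    z = sigmoid(p["Wz"] * x + p["Uz"] * h + p["bz"])       # update gate
    r = sigmoid(p["Wr"] * x + p["Ur"] * h + p["br"])       # reset gate
    h_cand = math.tanh(p["Wh"] * x + p["Uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand                      # blend old and new state
```

With all parameters at zero, the update gate is 0.5 and the candidate state is 0, so the state is simply halved at each step, illustrating how the gates interpolate between retaining and overwriting memory.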

Proposed methodology

In this section, we discuss the dataset, the preprocessing techniques, and the proposed methodology for predicting profits in financial accounting information systems. Specifically, we formulate the profit prediction problem for the DGRU and explain how the IMPA is used to train it.

Dataset

To create a dataset suitable for predicting profits, one option is to use a dataset that has previously been used for stock prediction. Kaggle provides a relevant dataset covering the Chinese stock market, with data available from January 4th, 2005 to May 11th, 2022. This dataset includes OHLC prices, volume data, and daily financial statistics such as the PB, PE, and PS ratios, as well as profitability and other relevant factors. The data are available daily and cover all liquid, publicly traded stocks on both the Shanghai Stock Exchange and the Shenzhen Stock Exchange, totaling 4714 stocks. Additionally, the dataset includes fundamental data such as market capitalization and financial ratios (FRs) such as the PE ratio.

Preprocessing

The final stage of preprocessing assigned a label of 0 or 1 to each sample in the new datasets based on the following year's earnings [38]. Since future profit values were only available up to 2021, a total of 16 datasets were generated. After completion of the preprocessing phase, there are 16 new datasets, each containing 2350 companies and 15 attributes.
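The 0/1 labeling step can be sketched as follows (an illustration only; `profit_next_year` is a hypothetical field name, not the dataset's actual schema):

```python
def label_profitable(records):
    """Attach the binary target used in preprocessing:
    1 if the following year's profit is positive, else 0."""
    return [{**rec, "label": 1 if rec["profit_next_year"] > 0 else 0}
            for rec in records]
```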

New dataset for predicting profits

There is a single file that contains 37,600 samples from the 16 datasets, each with 15 characteristics. Table 1 displays the selected features and their corresponding labels. Additionally, Fig. 1 illustrates the correlation matrix among the different features, providing further insights into the database. To depict distributed variables, violin plots [39] are used and presented in Fig. 2.
Table 1
The selected features and their labels

No | Feature
F1 | Profit margin ratio
F2 | Gross margin ratio
F3 | Return on assets
F4 | Free cash flow margin
F5 | Return on equity
F6 | Quick ratio
F7 | Current ratio
F8 | Cash ratio
F9 | Cash flow to debt ratio
F10 | Debt to equity ratio
F11 | Debt ratio
F12 | Operating cash flow sales ratio
F13 | R&D to revenue
F14 | SGA to revenue
F15 | CAPEX to revenue

Improved marine predator algorithm

The MPA operates in three stages, following established ecological regimes that simulate predator and prey behavior. These stages are based on the step sizes used by predators when capturing prey. The MPA predicts that a predator will exhibit similar amounts of Levy flights and Brownian motions throughout its lifespan. In stages I–III, the predator is initially immobile, moves using Brownian motion, and finally adopts the Levy flying technique. Prey may also experience similar circumstances as they could be targets for predators.
Observing prey travel in stage I using Brownian motion and then in stage II using Levy's flight is straightforward. The MPA suggests that committing a third of the rounds to each phase simultaneously enhances the strategy and yields better outcomes than moving between stages or consistently executing a single stage. However, the MPA is still in its early stages, and improvements can be made by determining when and how to use the technique for each update step.
The typical MPA may not always ensure a smooth transition from exploration to exploitation, but the floating strategy can address this issue. The floating approach provides richer search patterns for a more efficient global search and can also resolve the problem of stagnation in local optima. Rather than using three distinct stages, IMPA mimics the behavior of prey and predators by continuously switching between phases. This continuous model allows the search agents to move between exploration and exploitation.
According to Eqs. (4) to (7), the step size depends on the Prey and Elite positions as well as on the coefficient type (for example, Brownian motion or Levy flight). In Eq. (4), the step size is determined by the term \({\mathbf{R}}_{{{\mathbf{BM}}}} \otimes ({\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{R}}_{{{\mathbf{BM}}}} \otimes {\mathbf{Prey}}_{{\mathbf{i}}} )\), where the matrix RBM of random numbers drawn from the Brownian motion is multiplied by the Prey position. Alternatively, in Eq. (5), i.e., \({\mathbf{R}}_{{{\mathbf{LF}}}} \otimes ({\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{R}}_{{{\mathbf{LF}}}} \otimes {\mathbf{Prey}}_{{\mathbf{i}}} )\), where Levy flight generates the random numbers for each position, the RLF array is multiplied by the Prey position. The update formulas of Eqs. (4) to (7) can thus be summarized in Eq. (9) as follows:
$$ \begin{aligned} & {\mathbf{stepsize}}_{{\mathbf{i}}} = {\mathbf{R}}_{LB2} \otimes \left( {\mathbf{R}}_{LB2} \otimes {\mathbf{Elite}}_{{\mathbf{i}}} - {\mathbf{R}}_{LB1} \otimes {\mathbf{Prey}}_{{\mathbf{i}}} \right), \quad i = n/2, \ldots, n, \\ & {\mathbf{Prey}}_{{\mathbf{i}}} = P.{\text{CEFA}} \otimes {\mathbf{stepsize}}_{{\mathbf{i}}} + {\mathbf{Elite}}_{{\mathbf{i}}} , \\ & {\text{CEFA}} = \left( 1 - \frac{{\text{Iter}}}{{{\text{Max}}({\text{Iter}})}} \right)^{2 \times \frac{{\text{Iter}}}{{{\text{Max}}({\text{Iter}})}}} . \end{aligned} $$
(9)
RLB1 and RLB2 can be represented as random behavior with rising (W2) and falling (W1) slopes in Eqs. (10) and (11), respectively. Figure 3 depicts these two coefficients.
$$ {\mathbf{R}}_{LB1} = 1.95 - \left( \frac{2\,{\text{Iter}}^{1/4} }{{{\text{Max}}({\text{Iter}})^{1/3} }} \right) + 0.1 \times rand\left( {size\left( {{\mathbf{R}}_{LB1} } \right)} \right), $$
(10)
$$ {\mathbf{R}}_{LB2} = \left( \frac{1.5\,{\text{Iter}}^{1/4} }{{{\text{Max}}({\text{Iter}})^{1/3} }} + 0.1 \right) + 0.1 \times rand\left( {size\left( {{\mathbf{R}}_{LB2} } \right)} \right). $$
(11)
Since the Levy flight's precise form entirely depends on the \(\alpha\) parameter, we define these forms precisely for RLB1 and RLB2. As depicted in Fig. 4, using a smaller value results in a comprehensive exploration of Brownian or Levy flight, uncovering more significant motions. Conversely, larger values lead to reduced search space. These coefficients are defined in this particular form because a robust hunting parameter with a significant jump is needed during the exploration phase, an accurate hunting parameter with a slight jump in the exploitation stage, and a smooth transition between the two. Figures 4 and 5 show that RLB1 and RLB2 meet these requirements.
RLB2 is chosen as the Elite factor in Eq. (9). As Fig. 4 shows, the RLB2 amplitude increases over time, while the resulting motion starts with high step sizes that become progressively smaller. Owing to this steady reduction in step sizes, the Elite's position does not change drastically in the final iterations. RLB1 prevents the prey from becoming stuck in local minima by decreasing the amplitude while increasing the step size across iterations. Figure 5 displays the block diagram of the IMPA.
Numerous analyses have demonstrated that, in a variety of practical scenarios, the parameter space is bounded by \([x_{\min } ,x_{\max } ]\) [13]. In such real-world scenarios, the problem's search space must be taken into account, and predator movement can be limited using the constraint in Eq. (12):
$$ x_{i}^{d} = \min \left( {x_{\max }^{d} ,\max \left( {x_{\min }^{d} ,x_{i}^{d} } \right)} \right). $$
(12)
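Eq. (12) is a per-dimension clamp and translates directly into code (a one-line Python sketch):

```python
def clamp(position, x_min, x_max):
    """Keep every dimension of a search agent inside [x_min, x_max] (Eq. 12)."""
    return [min(x_max, max(x_min, x)) for x in position]
```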

Problem definition

When tuning a deep network with optimization approaches, there are typically two main issues to address. First, the structure's specification must be precisely defined. Second, the fitness function must be derived from the problem under examination [40]. The first step in adjusting a DGRU with IMPA is to encode the network variables; it is therefore crucial to properly set the DGRU parameters, such as biases and weights, to obtain the best prediction accuracy. IMPA optimizes the weights and biases, using the loss function as the fitness function. In this encoding, each predator represents a complete set of biases and weights.
The biases and weights of a DGRU are typically represented in vector, matrix, and digital formats as specific instances of optimization algorithms. In this study, the individual is shown in Eq. (13) using vector-based parameters because IMPA requires them:
$$ {\mathbf{Predators}} = \left[ {W_{11} ,W_{12} ,...,W_{nh} ,b_{1} ,...,b_{h} ,M_{11} ,...,M_{hm} } \right], $$
(13)
where n denotes the number of input neurons, Wij stands for the connection weights among the ith input and the jth hidden nodes, bj denotes the jth hidden neurons' bias, and Mjo is the connection weight between the jth hidden node and the oth output nodes.
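The vector encoding of Eq. (13) can be illustrated for a single hidden layer (a sketch with hypothetical helper names; the paper's DGRU has several layers, but the flattening principle is the same):

```python
def encode(W, b, M):
    """Flatten input->hidden weights W (n x h), hidden biases b (h),
    and hidden->output weights M (h x m) into one predator vector (Eq. 13)."""
    flat = [w for row in W for w in row]
    flat += list(b)
    flat += [m for row in M for m in row]
    return flat

def decode(vec, n, h, m):
    """Inverse mapping: rebuild W, b, M from a predator vector."""
    W = [vec[i * h:(i + 1) * h] for i in range(n)]
    b = vec[n * h:n * h + h]
    M = [vec[n * h + h + j * m:n * h + h + (j + 1) * m] for j in range(h)]
    return W, b, M
```

Each IMPA predator is such a flat vector, so the optimizer can manipulate the whole network as a single point in the search space.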

Loss function

To maximize accuracy and minimize evaluated prediction errors, DGRU is trained with the IMPA technique, which is referred to as DGRU-IMPA. The loss function used in DGRU-IMPA is as follows:
$$ y = \frac{1}{2}\sqrt {\frac{{\sum\nolimits_{i = 0}^{N} {(o - d)^{2} } }}{N}} , $$
(14)
where d and o denote the desired and actual outputs, respectively, and N represents the number of training samples. The IMPA has two stopping conditions: reaching a predefined loss value or completing the maximum number of iterations. The DGRU-IMPA block diagram is shown in Fig. 6.
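Eq. (14) is half the root-mean-square error; a direct implementation (assuming `outputs` and `desired` are equal-length sequences) reads:

```python
import math

def dgru_impa_loss(outputs, desired):
    """Half-RMSE fitness used to rank predators during IMPA training (Eq. 14)."""
    n = len(outputs)
    squared_error = sum((o - d) ** 2 for o, d in zip(outputs, desired))
    return 0.5 * math.sqrt(squared_error / n)
```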
The pseudo-code of the proposed model is presented in Algorithm 1.

Experimental results and discussion

In a subsequent experiment, the aforementioned framework was tested using a DGRU with an input layer of 15 nodes as its main structural component. The model included four hidden layers, with 4000, 2000, 2000, and 30 nodes, respectively, and an output layer with two nodes, all determined through sensitivity analysis. The ReLU activation function was used for all hidden layers, while SoftMax was used for the output layer. The model was trained using the hold-out method, with a 70/30 split between training and testing data. To prevent overfitting, L1 regularization was applied to the first hidden layer, and 10% dropout regularization was added after each of the first three hidden layers. The profit prediction model was optimized using the developed models, namely DGRU-LGWO, DGRU-CMPA, DGRU-MPA, DGRU-NMPA, DGRU-DLFCHOA, and DGRU-IMPA, with the default values and setup parameters listed in Table 2.
Table 2
The default values and setup parameters for the mentioned prediction models

Algorithm | Parameter | Value
MPA and its variants | Number of predators | 200
MPA and its variants | Max(iterations) | 250
DLFCHOA | r1 | (0, 1]
DLFCHOA | A | Linearly decreased from 1.5 to 0
LGWO | a | [2, 0)

Statistical metrics

The metrics used in this paper, namely coefficient of determination (R2), mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), mean relative error (MRE), and mean absolute error (MAE) are commonly used in prediction models and have several advantages over other metrics.
  • R2 is a statistical measure that represents how well the data fits a linear regression model. It measures the proportion of variance in the dependent variable that is predictable from the independent variable(s). A high R2 value indicates a good fit between the model and the data.
  • MAPE measures the average percentage difference between the predicted and actual values. It is commonly used in financial forecasting, where it is important to accurately predict future values. MAPE is advantageous because it is scale-independent and can be easily interpreted.
  • RMSE measures the square root of the average of the squared differences between predicted and actual values. It is a popular metric for evaluating regression models because it penalizes large errors more than smaller ones. RMSE is advantageous because it is sensitive to outliers and is commonly used in image and signal processing applications.
  • RRMSE is a variant of RMSE that is normalized by the range of the dependent variable. This metric is useful when the range of the dependent variable varies widely across different samples.
  • MRE measures the average absolute relative error between predicted and actual values. It is equivalent to MAPE except that the errors are not expressed as percentages. Like MAPE, it is undefined when an actual value is exactly zero.
  • MAE measures the average absolute difference between predicted and actual values. It is a robust metric that is less sensitive to outliers than RMSE.
Overall, these metrics are advantageous because they provide a comprehensive evaluation of the performance of prediction models, considering different aspects of accuracy and error. They can be used to compare the performance of different models and to identify the strengths and weaknesses of each model.
$$ \text{MAE} = \frac{1}{m}\sum_{i = 1}^{m} \left| Av_{i} - Pv'_{i} \right|, $$
(15)
$$ R^{2} = 1 - \frac{\text{SSR}}{\text{SST}}, $$
(16)
$$ \text{RMSE} = \sqrt{\frac{1}{m}\sum_{i = 1}^{m} \left( Av_{i} - Pv'_{i} \right)^{2}}, $$
(17)
$$ \text{MAPE} = \frac{1}{m}\sum_{i = 1}^{m} \left| \frac{Av_{i} - Pv'_{i}}{Av_{i}} \right| \times 100\%, $$
(18)
$$ \text{RRMSE} = \sqrt{\frac{1}{m}\sum_{i = 1}^{m} \left( \frac{Av_{i} - Pv'_{i}}{Av_{i}} \right)^{2}}, $$
(19)
$$ \text{MRE} = \frac{1}{m}\sum_{i = 1}^{m} \frac{\left| Av_{i} - Pv'_{i} \right|}{\left| Av_{i} \right|}, $$
(20)
where SSR denotes the sum of squared residuals, SST denotes the total sum of squares, \(Av_{i}\) is the actual output value, \(Pv'_{i}\) is the predicted value, and \(m\) is the number of examples.
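Equations (15)–(20) translate directly into code; the NumPy sketch below implements all six metrics (variable and function names are ours, not from the paper):

```python
import numpy as np

def metrics(av, pv):
    """Compute the six error metrics of Eqs. (15)-(20).

    av: actual values, pv: predicted values (1-D arrays of equal length).
    Note: MAPE, RRMSE, and MRE divide by av and are undefined for av == 0.
    """
    av, pv = np.asarray(av, float), np.asarray(pv, float)
    err = av - pv
    mae = np.mean(np.abs(err))                        # Eq. (15)
    ssr = np.sum(err ** 2)                            # sum of squared residuals
    sst = np.sum((av - av.mean()) ** 2)               # total sum of squares
    r2 = 1.0 - ssr / sst                              # Eq. (16)
    rmse = np.sqrt(np.mean(err ** 2))                 # Eq. (17)
    mape = np.mean(np.abs(err / av)) * 100.0          # Eq. (18), in percent
    rrmse = np.sqrt(np.mean((err / av) ** 2))         # Eq. (19)
    mre = np.mean(np.abs(err) / np.abs(av))           # Eq. (20)
    return dict(MAE=mae, R2=r2, RMSE=rmse, MAPE=mape, RRMSE=rrmse, MRE=mre)
```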

Profit prediction model using DGRU-MPA

Initially, no optimization technique was applied to the DGRU model. The predictive capability of the plain DGRU was evaluated using the statistical metrics presented in Table 3. Although the results indicate that the DGRU's predictions are acceptable, more accurate predictions are needed before the model can be considered a dependable estimator of financial profit. Hence, to develop a dependable DGRU model, it is necessary to apply nature-inspired optimization techniques.
Table 3
The DGRU's metrics

RMSE  | R2   | RRMSE | MAPE  | MAE   | MRE
0.075 | 0.61 | 0.087 | 2.601 | 0.049 | 0.023
In the subsequent experiment, various optimization techniques, including IMPA, LGWO, DLFCHOA, CMPA, NMPA, and standard MPA, were employed to improve the DGRU model's performance. Table 4 presents the statistical outcomes for the DGRU-MPA and the other hybrid predictors on the training dataset; all six hybrid models reach a training R2 of at least 0.79, and the best four exceed 0.90.
Table 4
Statistical metrics for the conventional DGRU and the metaheuristic-based models

Training

Method       | RMSE  | R2   | RRMSE | MAPE  | MAE   | MRE   | Rank
DGRU         | 0.075 | 0.61 | 0.087 | 2.601 | 0.049 | 0.023 | 6
DGRU-IMPA    | 0.017 | 0.98 | 0.010 | 0.641 | 0.013 | 0.010 | 42
DGRU-NMPA    | 0.031 | 0.95 | 0.019 | 1.169 | 0.020 | 0.011 | 36
DGRU-LGWO    | 0.039 | 0.92 | 0.029 | 1.252 | 0.024 | 0.012 | 30
DGRU-DLFCHOA | 0.049 | 0.90 | 0.048 | 1.649 | 0.029 | 0.016 | 24
DGRU-CMPA    | 0.060 | 0.79 | 0.060 | 1.841 | 0.034 | 0.020 | 18
DGRU-MPA     | 0.063 | 0.79 | 0.063 | 2.110 | 0.049 | 0.022 | 12

Testing

Method       | RMSE  | R2   | RRMSE | MAPE  | MAE   | MRE   | Rank
DGRU         | 0.062 | 0.60 | 0.042 | 2.401 | 0.044 | 0.019 | 6
DGRU-IMPA    | 0.012 | 0.97 | 0.009 | 0.568 | 0.010 | 0.005 | 42
DGRU-NMPA    | 0.019 | 0.93 | 0.014 | 0.979 | 0.014 | 0.009 | 36
DGRU-LGWO    | 0.032 | 0.90 | 0.017 | 1.169 | 0.020 | 0.012 | 30
DGRU-DLFCHOA | 0.036 | 0.87 | 0.021 | 1.460 | 0.023 | 0.014 | 24
DGRU-CMPA    | 0.039 | 0.79 | 0.026 | 1.671 | 0.026 | 0.017 | 18
DGRU-MPA     | 0.046 | 0.77 | 0.029 | 1.863 | 0.029 | 0.018 | 12
Subsequently, the proposed predictors were validated and assessed using the testing dataset, and the statistical metrics are shown in Table 4. The results indicate that all six proposed predictors outperform the standard DGRU in forecasting financial profit. However, the DGRU-IMPA has the most impressive prediction ability.
The study compares and evaluates the proposed predictors using a ranking approach for each metric in Table 4. The results are presented in Fig. 7, which uses stacked bars to indicate the final ranking position.
Stacked bars, also known as stacked bar charts, are a type of graph that shows the total size of each category in a group, broken down into sub-categories. In a stacked bar chart, the total size of each category is represented by the full bar, while each sub-category is represented by a colored segment within the bar.
In the context of comparisons, stacked bars can be used to visualize the relative performance of different methods or models across multiple categories. By stacking the bars for each category, it is easy to see which methods or models perform better or worse for each category, as well as to compare the overall performance across all categories. Stacked bars can be particularly useful when dealing with a large number of categories or sub-categories, as they allow for easy comparison of the relative sizes of each sub-category within each category.
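As an illustration of how such a ranking chart can be produced, the sketch below recomputes per-metric rank points from the Table 4 training values and stacks them per model. The scoring scheme (7 points for the best of the seven models on each metric, down to 1 for the worst) is our assumption, inferred from the rank totals in Table 4; ties are broken arbitrarily.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

# Training-set metric values transcribed from Table 4 (rows follow `models`).
models = ["DGRU", "DGRU-IMPA", "DGRU-NMPA", "DGRU-LGWO",
          "DGRU-DLFCHOA", "DGRU-CMPA", "DGRU-MPA"]
error_names = ["RMSE", "RRMSE", "MAPE", "MAE", "MRE"]  # lower is better
errors = np.array([
    [0.075, 0.087, 2.601, 0.049, 0.023],
    [0.017, 0.010, 0.641, 0.013, 0.010],
    [0.031, 0.019, 1.169, 0.020, 0.011],
    [0.039, 0.029, 1.252, 0.024, 0.012],
    [0.049, 0.048, 1.649, 0.029, 0.016],
    [0.060, 0.060, 1.841, 0.034, 0.020],
    [0.063, 0.063, 2.110, 0.049, 0.022],
])
r2 = np.array([0.61, 0.98, 0.95, 0.92, 0.90, 0.79, 0.79])  # higher is better

n = len(models)
cols = []
for j in range(errors.shape[1]):
    rank = np.argsort(np.argsort(errors[:, j]))  # 0 = smallest error
    cols.append(n - rank)                        # best model gets n points
cols.append(n - np.argsort(np.argsort(-r2)))     # best R2 gets n points
points = np.column_stack(cols)                   # shape (7 models, 6 metrics)
totals = points.sum(axis=1)

# One colored segment per metric, stacked into a single bar per model.
fig, ax = plt.subplots()
bottom = np.zeros(n)
for j, name in enumerate(error_names + ["R2"]):
    ax.bar(models, points[:, j], bottom=bottom, label=name)
    bottom += points[:, j]
ax.set_ylabel("Rank points (higher is better)")
ax.legend()
plt.setp(ax.get_xticklabels(), rotation=45, ha="right")
fig.tight_layout()
fig.savefig("ranking_stacked_bars.png")
```

Under this scheme the stacked totals reproduce the Rank column of Table 4 for the models without tied metric values.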
Figure 8 displays six statistical metrics for both DGRU and proposed predictors, showing that DGRU-IMPA provides the most accurate and reliable predictions during training and testing. This is because the intelligent optimization of DGRU using the IMPA learning algorithm results in faster convergence and lower error rates.
A Taylor diagram is a graphical tool for comparing the skill of different models or observational datasets. It is based on the correlation coefficient and the standard deviation, which are two important metrics in evaluating the performance of a model.
In a Taylor diagram, each model or dataset is represented by a point in a two-dimensional polar space: the radial distance indicates the ratio of the model's standard deviation to that of the reference dataset, and the angular position corresponds to the correlation coefficient between the model and the reference. The reference dataset is usually the observed data or a benchmark model.
Using a Taylor diagram, we can easily compare the performance of different models or datasets in terms of their correlation and variability, as well as their overall skill relative to the reference dataset. This helps us to identify which models or datasets are most accurate and reliable for a given application, and to quantify the uncertainty associated with each model or dataset.
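The two coordinates that place a model on a Taylor diagram are easy to compute. A minimal sketch (the function name and the use of arccos for the angular coordinate are ours; the data here are synthetic):

```python
import numpy as np

def taylor_coords(reference, model):
    """Coordinates of one model on a Taylor diagram.

    Returns (correlation, std ratio, angle in radians), where the radius is
    the std ratio and the azimuthal angle is arccos of the correlation.
    """
    reference = np.asarray(reference, float)
    model = np.asarray(model, float)
    corr = np.corrcoef(reference, model)[0, 1]
    std_ratio = model.std() / reference.std()
    angle = np.arccos(np.clip(corr, -1.0, 1.0))
    return corr, std_ratio, angle
```

A perfect model sits at radius 1 and angle 0 (correlation 1, identical variability); distance from that point summarizes overall skill.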
In metaheuristic algorithms, convergence speed is a crucial metric for comparative analysis. To further facilitate comparisons, convergence curves of comparative techniques are presented in Fig. 9, considering the conditions of 10, 20, 30, and 40 searching agents (population).
The convergence curves in Fig. 9 show that the DGRU-IMPA has the fastest convergence rate among the benchmark optimization algorithms, followed by DGRU-NMPA, while the DGRU-LGWO has the slowest.
The present study proposes six hybrid models based on the DGRU for profit prediction in financial markets. The proposed models are DGRU-IMPA, DGRU-LGWO, DGRU-CMPA, DGRU-MPA, DGRU-NMPA, and DGRU-DLFCHOA. These models were compared to the traditional DGRU and tested on training and testing sets of financial profit data. The results show that all proposed models outperform the traditional DGRU in terms of prediction performance, with DGRU-IMPA providing the most impressive prediction ability.
One of the key advantages of the proposed models is that they employ intelligent optimization techniques based on nature-inspired algorithms, which help to improve the accuracy and convergence rate of the DGRU model. Specifically, the DGRU-IMPA model, which is optimized using the IMPA, has the highest accuracy and fastest convergence rate of all the proposed models.
The comparison of convergence curves in Fig. 9 clearly demonstrates the superiority of the DGRU-IMPA model compared to other benchmark optimization algorithms, including the classic DGRU. This highlights the importance of employing intelligent optimization techniques when developing prediction models for financial markets.
In addition to the results presented in Figs. 8 and 9, Table 4 also demonstrates the superior performance of the proposed DGRU-based predictors, especially the DGRU-IMPA.
Table 4 shows the statistical results for the training datasets of the DGRU-MPA and the other hybrid predictors. The training R2 values range from 0.79 for DGRU-CMPA and DGRU-MPA to 0.98 for DGRU-IMPA, and all six hybrid predictors improve substantially on the conventional DGRU's 0.61. These high R2 values demonstrate that the proposed predictors have a strong ability to predict financial profit, which is critical for practical applications.
Table 4 also presents the statistical metrics for the testing sets, where all six proposed predictors show improved prediction performance compared to the traditional DGRU. The DGRU-IMPA, in particular, stands out with the highest R2 value of 0.97 and the lowest RMSE value of 0.012. These results demonstrate the effectiveness of the proposed hybrid approach in improving the accuracy of financial profit prediction.
The proposed hybrid models have several implications for financial market predictions. First, they provide a more accurate and reliable method for forecasting financial profits. This is particularly important for investors and financial analysts who rely on profit predictions to make investment decisions. Second, the use of nature-inspired optimization techniques allows for faster convergence and lower error rates, making these models more efficient than traditional methods.
Overall, the results presented in the figures and tables demonstrate the novelty and superiority of the proposed methodology. The use of nature-inspired optimization techniques and the development of hybrid DGRU-based predictors have improved the accuracy and efficiency of financial profit prediction. The DGRU-IMPA model, in particular, stands out with its superior performance in terms of both convergence speed and prediction accuracy.

The performance comparison of the classic models

To ensure a comprehensive comparison, the performance of classic models, including the autoregressive integrated moving average (ARIMA) [41], support vector regression (SVR) [42], random forest (RF) [43], and multilayer perceptron (MLP) [44], was evaluated alongside the proposed hybrid model, DGRU-IMPA.
To conduct a fair comparison, we used the same dataset and evaluation metrics for all the models. We also considered the models' ability to generalize to new data. The initial and setting parameters' values for the mentioned models are presented in Table 5. Table 6 illustrates the comparison results between the classic profit prediction models and our proposed hybrid model (DGRU-IMPA).
Table 5
The initial and setting parameter values for the mentioned models

Model | Initial parameters             | Setting parameters
ARIMA | Order (p, d, q) = (1, 0, 1)    | N/A
SVR   | Kernel = radial basis function | C = 1.0, epsilon = 0.1, gamma = auto
RF    | Number of trees = 100          | Max depth = none, min samples split = 2, min samples leaf = 1
MLP   | Hidden layer sizes = 50        | Activation = tanh, learning rate = 0.001
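As one way to realize the Table 5 settings, the three machine-learning baselines map directly onto scikit-learn estimators. The library choice, the synthetic stand-in data, and the random seeds below are our assumptions; the ARIMA(1, 0, 1) baseline would come from a separate package such as statsmodels and is omitted here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Baselines configured as in Table 5.
baselines = {
    "SVR": SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma="auto"),
    "RF": RandomForestRegressor(n_estimators=100, max_depth=None,
                                min_samples_split=2, min_samples_leaf=1,
                                random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                        learning_rate_init=0.001, max_iter=2000,
                        random_state=0),
}

# Synthetic stand-in for the 15-feature profit dataset, 70/30 hold-out split.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# R2 on the held-out 30% for each baseline.
scores = {name: model.fit(X_tr, y_tr).score(X_te, y_te)
          for name, model in baselines.items()}
```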
Table 6
The comparison results between the classic profit prediction models and DGRU-IMPA

Model                      | Accuracy | RMSE  | RRMSE
ARIMA [41]                 | 0.656    | 0.042 | 0.199
SVR [42]                   | 0.703    | 0.035 | 0.178
RF [43]                    | 0.719    | 0.031 | 0.175
MLP [44]                   | 0.712    | 0.032 | 0.178
DGRU-IMPA (proposed model) | 0.914    | 0.012 | 0.118
The results of the comparative analysis indicate that the proposed hybrid model, DGRU-IMPA, outperformed the classic models in terms of accuracy and prediction performance, achieving higher accuracy and lower RMSE and RRMSE. This suggests that incorporating the improved marine predator algorithm for parameter optimization in the DGRU model enhances its predictive capabilities.

Complexity analysis

As mentioned previously, there is a balance to be struck between accuracy and processing time. In this subsection, we assess the time complexity of the proposed model against several benchmarks. To gain meaningful insights, we also compare the number of parameters, floating point operations per second (FLOPS) [45], and training time of the developed method with those of other models. Our experiments were conducted on a computer equipped with an NVIDIA Tesla K20 GPU. Table 7 displays the timing statistics.
Table 7
Time complexity analysis

Method       | Number of parameters | FLOPS  | Training time | p-value
DGRU         | 10 k                 | 6.9 M  | 7 m 30 s      | 0.33
DGRU-CMPA    | 12.1 k               | 7.22 M | 9 m 35 s      | 0.0052
DGRU-DLFCHOA | 13.25 M              | 7.98 M | 11 m 33 s     | 0.0012
DGRU-LGWO    | 12.5 k               | 7.32 M | 9 m 55 s      | 0.0033
DGRU-NMPA    | 13.1 k               | 7.72 M | 10 m 33 s     | 0.0212
DGRU-MPA     | 12 k                 | 7.20 M | 9 m 11 s      | 0.0001
DGRU-IMPA    | 11 k                 | 7.00 M | 8 m 02 s      | N/A
Furthermore, in Table 7, we compared the results of DGRU-IMPA with other benchmarks using Wilcoxon's rank-sum test [46], which is a non-parametric technique commonly employed to determine statistical significance. In this case, we set the significance level at 5%. When "N/A" appears in the findings, it signifies that the corresponding technique cannot be compared to itself using Wilcoxon's rank-sum test, thus rendering it not applicable for contrast.
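Wilcoxon's rank-sum test as used here is available off the shelf in SciPy. The sketch below runs it on hypothetical per-run error samples (the paper does not publish its raw per-run results, so the numbers are illustrative only):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
# Hypothetical per-run test errors for two methods over 30 independent runs.
err_impa = rng.normal(loc=0.012, scale=0.002, size=30)
err_mpa = rng.normal(loc=0.046, scale=0.004, size=30)

# Non-parametric two-sample test: are the two error distributions shifted?
stat, p_value = ranksums(err_impa, err_mpa)
significant = p_value < 0.05  # 5% significance level, as in the paper
```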
Based on the results, DGRU-IMPA shows a slightly increased training time compared with the original DGRU; however, the p-value (0.33) indicates that this difference is not statistically significant. Additionally, DGRU-IMPA trains faster than the other optimization-algorithm-based variants, which further highlights its efficiency.
The integration of the IMPA with DGRU seems to have a positive impact on the training time, offering a potential advantage over both the original DGRU and the other optimization algorithm-based variants. This suggests that DGRU-IMPA strikes a balance between efficient training and enhanced prediction accuracy.
The reduced training time of DGRU-IMPA could be attributed to the effectiveness of the IMPA in optimizing the hyperparameters of the DGRU model. By leveraging the strengths of the IMPA algorithm, the training process becomes more efficient without compromising the model's predictive performance.
Considering the statistical analysis, the lack of a statistically significant difference in training time between DGRU-IMPA and the original DGRU implies that the additional computational cost introduced by the IMPA is negligible or within an acceptable range.
Overall, the results suggest that DGRU-IMPA offers a promising alternative for profit prediction, as it provides comparable training time to the original DGRU while potentially enhancing prediction accuracy through the integration of the IMPA algorithm. These findings underscore the potential benefits of combining optimization algorithms with GRU models for efficient and accurate profit forecasting.

Conclusion

In this study, we presented a novel approach for improving the design process and profit prediction of financial accounting information systems. Our proposed system, DGRU-IMPA, is a hybrid model that combines a DGRU predictor with an IMPA-based learning algorithm. We created a comprehensive new dataset with 15 input parameters based on the Chinese stock market Kaggle dataset to provide an accurate comparison of our model with existing methods. In addition to IMPA, five further DGRU-based predictors were developed to forecast financial accounting profit, based on NMPA, DLFCHOA, CMPA, MPA, and LGWO. The DGRU-IMPA, DGRU-NMPA, DGRU-LGWO, DGRU-DLFCHOA, DGRU-CMPA, DGRU-MPA, and conventional DGRU models obtained ranking scores of 42, 36, 30, 24, 18, 12, and 6, respectively, ordering them from best to worst among the tested DL-based models. Overall, the DGRU-IMPA proved to be the most accurate model for financial accounting profit prediction.
Despite the promising results of the proposed DGRU-IMPA algorithm, there are some limitations to consider. One limitation is that the proposed algorithm relies on historical financial data, and future data may differ from the past, resulting in inaccurate predictions.
Future research could focus on addressing the limitations of the proposed algorithm. For example, researchers could explore the use of other types of data, such as social media or news sentiment, to improve the accuracy of profit predictions. Additionally, more advanced machine learning algorithms, such as deep learning, could be used to capture more complex relationships between input features and output profit.
Another future research direction could be to investigate the application of the proposed algorithm to other financial forecasting tasks, such as predicting stock prices or market trends. This would require adapting the algorithm to handle different types of financial data and different prediction tasks.
Finally, researchers could explore the use of ensemble methods to combine multiple prediction models, including the proposed DGRU-IMPA algorithm, to improve overall prediction accuracy. Ensemble methods can help mitigate the weaknesses of individual models and provide more reliable predictions.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Literature
1. Shen Y, Ding N, Zheng H-T, Li Y, Yang M (2020) Modeling relation paths for knowledge graph completion. IEEE Trans Knowl Data Eng 33(11):3607–3617
2. Schroeder RG, Clark MW, Cathey JM (2022) Financial accounting theory and analysis: text and cases. Wiley
3. Lu S, Ding Y, Liu M, Yin Z, Yin L, Zheng W (2023) Multiscale feature extraction and fusion of image and text in VQA. Int J Comput Intell Syst 16(1):54
4. Lu S, Liu M, Yin L, Yin Z, Liu X, Zheng W (2023) The multi-modal fusion in visual question answering: a review of attention mechanisms. PeerJ Comput Sci 9:e1400
5. Cheng B, Zhu D, Zhao S, Chen J (2016) Situation-aware IoT service coordination using the event-driven SOA paradigm. IEEE Trans Netw Serv Manag 13(2):349–361
6. Zhou L, Jin F, Wu B, Chen Z, Wang CL (2023) Do fake followers mitigate influencers' perceived influencing power on social media platforms? The mere number effect and boundary conditions. J Bus Res 158:113589
7. Gao H, Shi D, Zhao B (2021) Does good luck make people overconfident? Evidence from a natural experiment in the stock market. J Corp Finance 68:101933
8. Asnawi SK, Wiranata T, Samsuki C, Julanda S (2022) The evaluation of firm performance among EVA and accounting profit: harmony? J Manag Leadersh 5(2):14–27
9. Li Z, Zhou X, Huang S (2021) Managing skill certification in online outsourcing platforms: a perspective of buyer-determined reverse auctions. Int J Prod Econ 238:108166
10. Huang X, Huang S, Shui A (2021) Government spending and intergenerational income mobility: evidence from China. J Econ Behav Organ 191:387–414
11. Liu Z, Feng J, Uden L (2023) From technology opportunities to ideas generation via cross-cutting patent analysis: application of generative topographic mapping and link prediction. Technol Forecast Soc Chang 192:122565
12. Ahmed S, Alshater MM, El Ammari A, Hammami H (2022) Artificial intelligence and machine learning in finance: a bibliometric review. Res Int Bus Financ 61:101646
13. Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020) Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst Appl 152:113377
14. Lu C, Zheng J, Yin L, Wang R (2023) An improved iterated greedy algorithm for the distributed hybrid flowshop scheduling problem. Eng Optim, pp 1–19
15. Li B, Tan Y, Wu A-G, Duan G-R (2021) A distributionally robust optimization based method for stochastic model predictive control. IEEE Trans Autom Control 67(11):5762–5776
16. Tian J, Hou M, Bian H, Li J (2022) Variable surrogate model-based particle swarm optimization for high-dimensional expensive problems. Complex Intell Syst, pp 1–49
17. Xie X, Huang L, Marson SM, Wei G (2023) Emergency response process for sudden rainstorm and flooding: scenario deduction and Bayesian network analysis using evidence theory and knowledge meta-theory. Nat Hazards 117:1–23
18. Kumar G, Jain S, Singh UP (2021) Stock market forecasting using computational intelligence: a survey. Arch Comput Methods Eng 28(3):1069–1101
19. Atsalakis GS, Valavanis KP (2009) Surveying stock market forecasting techniques–Part II: soft computing methods. Expert Syst Appl 36(3):5932–5941
20. Cavalcante RC, Brasileiro RC, Souza VL, Nobrega JP, Oliveira AL (2016) Computational intelligence and financial markets: a survey and future directions. Expert Syst Appl 55:194–211
21. White H (1988) Economic prediction using neural networks: the case of IBM daily stock returns. ICNN 2:451–458
22. Gandhmal DP, Kumar K (2019) Systematic analysis and review of stock market prediction techniques. Comput Sci Rev 34:100190
23. Nti IK, Adekoya AF, Weyori BA (2020) A systematic review of fundamental and technical analysis of stock market predictions. Artif Intell Rev 53(4):3007–3057
24. Faouzi J (2022) Time series classification: a review of algorithms and implementations. Mach Learn (Emerging Trends and Applications) 1–38
25. Alsharef A, Sonia, Arora M, Aggarwal K (2022) Predicting time-series data using linear and deep learning models—an experimental study. In: Sharma S, Peng SL, Agrawal J, Shukla RK, Le DN (eds) Data, engineering and applications. Lecture notes in electrical engineering, vol 907. Springer, Singapore, pp 505–516
26. Liu X, He J, Liu M, Yin Z, Yin L, Zheng W (2023) A scenario-generic neural machine translation data augmentation method. Electronics 12(10):2320
27. Li Q-K, Lin H, Tan X, Du S (2018) H∞ consensus for multiagent-based supply chain systems under switching topology and uncertain demands. IEEE Trans Syst Man Cybern Syst 50(12):4905–4918
28. Xie X, Jin X, Wei G, Chang C-T (2023) Monitoring and early warning of SMEs' shutdown risk under the impact of global pandemic shock. Systems 11(5):260
29. Wang Y, Su Y, Li W, Xiao J, Li X, Liu A-A (2023) Dual-path rare content enhancement network for image and text matching. IEEE Trans Circuits Syst Video Technol
30. Li W, Wang Y, Su Y, Li X, Liu A, Zhang Y (2023) Multi-scale fine-grained alignments for image and sentence matching. IEEE Trans Multimed 25:543–556
31. Yang S, Li Q, Li W, Li X, Liu A-A (2022) Dual-level representation enhancement on characteristic and context for image-text retrieval. IEEE Trans Circuits Syst Video Technol 32(11):8037–8050
32. Lee SW, Kim HY (2020) Stock market forecasting with super-high dimensional time-series data using ConvLSTM, trend sampling, and specialized data augmentation. Expert Syst Appl 161:113704
33. Li AW, Bastos GS (2020) Stock market forecasting using deep learning and technical analysis: a systematic review. IEEE Access 8:185232–185242
34. Wang Z, Li K, Xia SQ, Liu H (2022) Economic recession prediction using deep neural network. J Financ Data Sci 4(3):108–127
35. Wang J, Sun T, Liu B, Cao Y, Wang D (2018) Financial markets prediction with deep learning. In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, pp 97–104
36. Ozer F, Sakar CO (2022) An automated cryptocurrency trading system based on the detection of unusual price movements with a time-series clustering-based approach. Expert Syst Appl 200:117017
37. Dey R, Salem FM (2017) Gate-variants of gated recurrent unit (GRU) neural networks. In: 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), IEEE, pp 1597–1600
38. Li T et al (2022) Smartphone app usage analysis: datasets, methods, and applications. IEEE Commun Surv Tutor 24(2):937–966
39. Chen X, Chen W, Lu K (2023) Does an imbalance in the population gender ratio affect FinTech innovation? Technol Forecast Soc Change 188:122164
40. Li X, Sun Y (2021) Application of RBF neural network optimal segmentation algorithm in credit rating. Neural Comput Appl 33:8227–8235
41. Box GE, Pierce DA (1970) Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. J Am Stat Assoc 65(332):1509–1526
42. Li X, Sun Y (2020) Stock intelligent investment strategy based on support vector machine parameter optimization algorithm. Neural Comput Appl 32:1765–1775
44.
go back to reference Gardner MW, Dorling S (1998) Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmos Environ 32(14–15):2627–2636ADSCrossRef Gardner MW, Dorling S (1998) Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmos Environ 32(14–15):2627–2636ADSCrossRef
45.
go back to reference Goldberg D (1991) What every computer scientist should know about floating-point arithmetic. ACM Comput Surv (CSUR) 23(1):5–48MathSciNetCrossRef Goldberg D (1991) What every computer scientist should know about floating-point arithmetic. ACM Comput Surv (CSUR) 23(1):5–48MathSciNetCrossRef
Metadata

Title: Evolving deep gated recurrent unit using improved marine predator algorithm for profit prediction based on financial accounting information system
Authors: Xue Li, Mohammad Khishe, Leren Qian
Publication date: 27-07-2023
Publisher: Springer International Publishing
Published in: Complex & Intelligent Systems, Issue 1/2024
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI: https://doi.org/10.1007/s40747-023-01183-4