Abstract

An improved real-coded genetic algorithm (IRCGA) is proposed to solve constrained optimization problems. First, a sorting grouping selection method is given, which is easy to implement and does not require computing fitness values. Second, a heuristic normal distribution crossover (HNDX) operator is proposed. It guarantees that the offspring generated by crossover lie closer to the better of the two parents and that the crossover direction is very close to, or coincides with, the optimal crossover direction. In this way, HNDX ensures a high probability of generating better offspring. Third, since GAs in the existing literature require many iterations, identical individuals are likely to appear in the population, degrading its diversity. In IRCGA, a substitution operation is added after the crossover operation so that the population contains no identical individuals, keeping the diversity of the population rich and thereby helping avoid premature convergence. Finally, since a single mutation operator cannot simultaneously account for local search and global search, this paper proposes a combinational mutation method, which makes the mutation operation take both into account. The computational results with nine examples show that the IRCGA has fast convergence speed. As an example application, the optimization model of the steering mechanism of vehicles is formulated and the IRCGA is used to optimize the parameters of the steering trapezoidal mechanism of three vehicle types, with better results than the other methods used.

1. Introduction

The genetic algorithm (abbreviated as GA or GAs) was proposed by Professor John H. Holland and his students at the University of Michigan at the end of the 1960s and in the early 1970s [1–4]. In 1975, Professor Holland published the first monograph devoted to elaborating the basic theories and methods of genetic algorithms [1] and put forward the schema theorem, the most important result in genetic algorithm theory research and development. In the same year, K. A. De Jong proposed the evolutionary strategy of elitism preservation in his doctoral thesis. Several evolutionary strategies based on elitism preservation and on selective substitution and duplication were subsequently proposed [5–8]. At present, the GA is mostly computed with these evolutionary strategies. In 1989, Goldberg [9] made a comprehensive and systematic summary and discussion of the genetic algorithm and laid the foundation of the modern genetic algorithm.

The encoding schemes in GA are either binary coding or real coding. The shortcomings of the binary coding of GA are as follows [10]: for some high-dimensional, high-precision continuous function optimization problems, the randomness of the binary genetic algorithm makes the local search ability poor, and the binary codes of adjacent integers may have a large Hamming distance, thus reducing search efficiency and affecting the computational accuracy of the genetic operators; binary coding needs frequent encoding and decoding, thus increasing the calculation time and introducing potential conversion errors; with a finite discrete lattice, an individual approaching the extreme value may be omitted, causing the algorithm to converge prematurely or to slow down severely; algorithm efficiency decreases sharply as the number of variables and the required computational accuracy increase; and, as the accuracy of the solution is controlled by the encoding length, binary coding may need very long codes, resulting in excessive computing time and memory consumption. In 1992, Michalewicz proposed a real-coded genetic algorithm (RCGA) which is more efficient in these regards [11]. RCGA has many attractive properties such as high precision, no coding and decoding required, effective large-space search, simple computing, fast convergence, and resistance against falling into a local extreme value [6, 7]. It has been widely applied in areas such as automatic control, combinatorial optimization, machine learning, image processing, self-adaptive control, planning and design, industrial engineering, intelligent manufacturing systems, bioengineering, system engineering, artificial intelligence, intelligent machine systems, and artificial life [6–8, 12–16]. It is especially suitable for complex and nonlinear problems that cannot be solved by traditional search methods [6].
Thus, it is one of the key technologies of intelligent computing in the 21st century.

Research on RCGA algorithms can be grouped into the following categories: the determination of an optimal population size [17–20]; generation of the initial population [21–23]; improvement of existing crossover operators [24–27]; improvement of existing mutation operators [28–30]; improvement of the evolutionary strategy [31–33]; and automatic adjustment of the operator parameters [34–37]. The progress with these algorithms has improved the computational speed and promoted the development of GA theory with new applications.

Crossover operators play a crucial role in expanding the solution space and obtaining the globally optimal solution. Mutations are also important to increase the population diversity, helping GA to expand the search scope and to avoid falling into local optima. As a result, many scholars have focused their attention on the improvement of crossover and mutation operators. In 2016, Chuang et al. [24] proposed a direction-based crossover operator (DBX), but the possible crossover directions of DBX are limited. Although it may generate a crossover direction capable of guiding the chromosomes to move towards the optimal solution, this is not highly probable. Meanwhile, when the dimensionality of the population is low, a null direction vector is likely to be generated, resulting in no crossover direction. In addition, as some better chromosomes in the population are used to replace the worse chromosomes in the course of ranking selection, the population diversity may become worse after multiple iterations, eventually leading to lost diversity, so the algorithm cannot converge to the global optimal solution. In 1996, D. W. Wang et al. proposed a mutation operator which mutates towards the gradient direction of the objective function [38]; when the objective function is not differentiable, this mutation cannot be performed. In 2008, Peltokangas et al. summarized the uniform mutation operator, the nonuniform mutation operator, the power mutation operator, and the boundary mutation operator [39–44]. The computation times of the nonuniform mutation operator and the power mutation operator are longer. The boundary mutation is only suitable for optimization problems whose optimal solution is on the boundary of the feasible region, and the offspring of the uniform mutation operator bear no relation to the parent individuals. In 2016, Chuang et al. proposed a dynamic random mutation operator [24], whose step-size formula conflicts with the interpretation given for it. In addition, that mutation operator needs the maximum number of iterations to be given in advance, which is difficult to estimate.

In summary, a good crossover operator and mutation operator should fulfill four conditions: the offspring generated by crossover should be in the vicinity of the better of the two parent individuals participating in crossover, thus generating better offspring; the calculations should be as few as possible; the mutation operator should take into account global as well as local search so that the algorithm can quickly converge to the global optimal solution rather than falling into local optima; and loop statements should be avoided in programming to increase the speed of computation. Aiming at these four conditions, this paper proposes a new IRCGA to solve constrained optimization problems. The results with nine test functions show that the IRCGA is effective and feasible in solving constrained optimization problems. As an example application, the optimization model of the steering mechanism of vehicles is formulated, and the IRCGA is used to optimize the parameters of the steering trapezoidal mechanism of three vehicle types, with better results than the other methods used.

2. Penalty Function Method for Constrained Optimization Problems

The mathematical model of a constrained optimization problem can be generally expressed as follows:

min f(X)
s.t. h_i(X) = 0, i = 1, 2, …, p
     g_j(X) ≤ 0, j = 1, 2, …, q
(1)

where f(X) is the objective function, h_i(X) is the ith equality constraint, p is the number of equality constraints, g_j(X) is the jth inequality constraint, q is the number of inequality constraints, and X is an m-dimensional vector X = (x_1, x_2, …, x_m).

Equation (1) can also be expressed as

min f(X), X ∈ D
(2)

where D = {X | h_i(X) = 0, i = 1, 2, …, p; g_j(X) ≤ 0, j = 1, 2, …, q} is the feasible region.

Letting X* be the optimal solution to the constrained optimization problem means that for all X ∈ D: f(X*) ≤ f(X). In addition, if g_j(X*) = 0, the constraint g_j is referred to as an active constraint at X*. Under this concept, all the equality constraints h_i(X) = 0 (i = 1, 2, …, p) are active at X*.

The penalty function method converts the constrained problem into an unconstrained optimization problem: penalty terms built from the constraint functions are multiplied by penalty factors and added to the objective function, constructing the penalty function [45]

P(X, M) = f(X) + M1 Σ_{i=1}^{p} h_i(X)² + M2 Σ_{j=1}^{q} [max(0, g_j(X))]²
(3)

where M1 and M2 are the penalty factors, generally chosen as sufficiently large positive constants; the second and third terms on the right are the penalty terms; and P(X, M) is the penalty function.

In (3), when X ∈ D, there is no penalty on feasible points; thus P(X, M) = f(X). When X ∉ D, M1 and M2 should be very large for the nonfeasible points; therefore, the values of the second and third terms in (3) are large, which is equivalent to a ‘penalty’ for the infeasible point. Moreover, the farther X is from the feasible region, the larger the penalty. It is conceivable that when M1 and M2 become sufficiently large, the minimal point X(M) of the unconstrained optimization problem (3) is close enough to the minimum point of the original constrained optimization problem, and, as M1, M2 → ∞, it becomes the minimal point of the original constrained problem.

The minimum value of (3) is

min P(X, M) = min { f(X) + M1 Σ_{i=1}^{p} h_i(X)² + M2 Σ_{j=1}^{q} [max(0, g_j(X))]² }
(4)

which is equivalent to the minimum value of (1).
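As a concrete illustration, the quadratic penalty construction above can be sketched in Python as follows (the paper works in Matlab; function and parameter names here are illustrative assumptions):

```python
def penalty(f, h_list, g_list, M1, M2):
    """Build P(X, M) = f(X) + M1*sum h_i(X)^2 + M2*sum max(0, g_j(X))^2.

    A sketch of the quadratic penalty in (3); h_list and g_list hold the
    equality and inequality constraint functions (assumed names)."""
    def P(X):
        eq = sum(h(X) ** 2 for h in h_list)              # equality violations
        ineq = sum(max(0.0, g(X)) ** 2 for g in g_list)  # inequality violations
        return f(X) + M1 * eq + M2 * ineq
    return P

# Example: minimize f(x, y) = x^2 + y^2 subject to x + y - 1 = 0.
P = penalty(lambda X: X[0] ** 2 + X[1] ** 2,
            [lambda X: X[0] + X[1] - 1.0],
            [], M1=1e7, M2=1e7)
```

A feasible point such as (0.5, 0.5) incurs no penalty, while an infeasible point such as (0, 0) is penalized by M1 times its squared constraint violation, exactly as described above.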

3. New Real-Coded Genetic Algorithm

3.1. Sorting Grouping Selection (SGS)

We assume the following: the maximum of the objective function is sought, the population size n is an even number, and the individuals in the population are sorted with respect to their objective function value P(X, M) in descending order. If the minimum of the objective function is sought, the objective function P(X, M) is negated so that we still seek the maximum. The population before sorting is X = (X_1, X_2, ⋯, X_n); after sorting, it becomes X′ = (X′_1, X′_2, ⋯, X′_n) and satisfies P(X′_1, M) ≥ P(X′_2, M) ≥ ⋯ ≥ P(X′_n, M).

First, the chromosomes in the population are arranged into two groups. Group 1 includes the first n/2 chromosomes and Group 2 includes the second n/2 chromosomes. The 1st individual in Group 1 is matched with the 1st individual in Group 2, the 2nd individual in Group 1 is matched with the 2nd individual in Group 2, and so on. In this way, we can obtain n/2 pairs of individuals. See Table 1 for the specific operation of the sorting grouping selection.

In Table 1, we carry out crossover of X′_1 in Group 1 with X′_{n/2+1} in Group 2, X′_2 in Group 1 with X′_{n/2+2} in Group 2, and so on.

As compared with roulette wheel selection and tournament selection, the SGS method enlarges the distance between the two parent individuals in every matched pair, magnifying the difference between the parents under matching; in addition, SGS is a direct operation based on the ranking of objective function values and does not compute individual fitness values. The computation is therefore simple and rapid in selecting the chromosomes for crossover.
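The pairing scheme can be sketched in Python as follows (a minimal illustration; `sgs_pairs` and `obj` are assumed names, and the objective is a toy maximization example):

```python
import random

def sgs_pairs(pop, P):
    """Sorting grouping selection sketch: sort individuals by objective
    value in descending order, split into two halves, and match the kth
    individual of Group 1 with the kth individual of Group 2."""
    ranked = sorted(pop, key=P, reverse=True)
    half = len(ranked) // 2
    return list(zip(ranked[:half], ranked[half:]))

obj = lambda X: -(X[0] ** 2 + X[1] ** 2)   # maximize: prefer points near the origin
random.seed(0)
pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(6)]
pairs = sgs_pairs(pop, obj)
```

By construction, the first member of every pair ranks at least as high as the second, so each pair spans a large objective-value gap without any fitness computation.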

3.2. Heuristic Normal Distribution Crossover (HNDX)

When a maximization problem is solved, an individual with a larger objective function value has a higher probability of being close to the optimal solution; it is therefore believed that the optimal solution is likely to be near the individual with the larger objective function value [46]. Inspired by this, this paper proposes a heuristic normal distribution crossover (HNDX) operator which emphasizes the parent with the larger objective function value. With the resulting better crossover direction, two promising offspring are generated, thus improving the convergence speed of IRCGA.

The key to HNDX is how to place the offspring generated by crossover in the vicinity of the superior parent or along a superior crossover direction. We note that a random number generated according to the normal distribution falls with large probability near its mean μ. In the process of crossover, if the better of the two parents serves as the mean μ of the normal distribution, then the offspring generated according to the normal distribution is, with high probability, in the vicinity of the better parent. Therefore, an offspring is generated by the normal distribution, denoted N(μ, σ²). The density function of the normal distribution is shown in Figure 1.

It can be seen from Figure 1 that the probability of the random number X generated by N(μ, σ²) lying in the interval (μ−σ, μ+σ) is 68.26%, the probability of X in the interval (μ−2σ, μ+2σ) is 95.44%, and the probability of X in the interval (μ−3σ, μ+3σ) is 99.74%. The probability that X falls outside (μ−3σ, μ+3σ) is thus less than 0.3%, and in practice the corresponding event is often considered not to occur. Hence, the interval (μ−3σ, μ+3σ) can be regarded as the actual possible interval of the random variable X, which is the 3σ rule of the normal distribution. Based on the above analysis, as long as we control the size of σ, we can basically guarantee that the random number X generated by N(μ, σ²) is near the mean μ.
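These interval probabilities are easy to confirm empirically; a short Monte Carlo check (illustrative only, standard normal assumed) is:

```python
import random

random.seed(1)
mu, sigma, N = 0.0, 1.0, 200_000
samples = [random.gauss(mu, sigma) for _ in range(N)]

# Empirical probability of falling within k standard deviations of the mean.
within = lambda k: sum(abs(x - mu) <= k * sigma for x in samples) / N
p1, p2, p3 = within(1), within(2), within(3)
```

With 200,000 samples, p1, p2, and p3 land very close to the 68.26%, 95.44%, and 99.74% figures quoted above.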

Let the two parents participating in crossover be X_i (i = 1, 2, ⋯, n/2) and X_j (j = n/2+1, n/2+2, ⋯, n). Assuming that X_i is superior to X_j, the two offspring Y_i (i = 1, 2, ⋯, n/2) and Y_j (j = n/2+1, n/2+2, ⋯, n) are generated by

Y_i = N(μ, σ²), μ = X_i, σ² = ((X_i − X_j)/6)² + ε
Y_j = X_j + λ(Y_i − X_j)
(5)

where μ = X_i is the mean of the normal distribution, σ² is the variance of the normal distribution, ((X_i − X_j)/6)² is the square of each element in the vector (X_i − X_j)/6, ε is a small positive number to avoid a zero variance, d = Y_i − X_j is the crossover direction, and λ is a uniformly distributed random number in (0.5, 1.5).

Let the two parent individuals participating in crossover be X1 and X2, with X1 superior to X2. To illustrate the principle of HNDX, let the variable dimension m be 2; the principle of HNDX is then shown in Figure 2.

In Figure 2, DY1 = Y1C.

Equation (5) shows that Y1 is an offspring randomly generated according to N(μ, σ²), with its mean equal to X1 and its variance equal to ((X1 − X2)/6)² + ε. In addition, it can be seen from Figure 1 that a random number generated according to N(μ, σ²) has a high probability of being in the vicinity of the mean of the normal distribution. According to the above analysis, an offspring Y1 is generated by the crossover of the two parents X1 and X2, and Y1 has a high probability of being within the shaded region of Figure 2; that is to say, Y1 is within a circle whose center is X1, which basically guarantees that Y1 is in the vicinity of the better parent X1.

The second offspring Y2 is generated by the crossover of the two parents X1 and X2 as Y2 = X2 + λ(Y1 − X2), where the crossover direction is given by d = Y1 − X2. Therefore, Y2 might be at any point on the line CD. In addition, since Y1 is likely to be at any point within the circle centered at X1, there are numerous possible crossover directions d. Let X* be the optimal solution of the problem to be solved. For the parent X1 participating in crossover, the optimal crossover direction is d1* = X* − X1; and for X2, the optimal crossover direction is d2* = X* − X2. The crossover direction d generated by HNDX may be completely consistent with the optimal crossover direction; even when it is not completely consistent with d1* and d2*, it has a high probability of being very close to them. In summary, HNDX has a high probability of generating better offspring, thereby significantly improving the convergence speed of the algorithm.

Compared with the DBX in [24], HNDX can generate numerous crossover directions while DBX can only generate 2m−1 crossover directions; moreover, the probability that the offspring generated by HNDX are close to the optimal solution is much higher than that for DBX. In addition, HNDX avoids loop statements in programming, and the program is relatively simple.

Let y be the individuals in the population after sorting according to the objective function values and x be the offspring generated by crossover. We adopt the evolutionary strategy of Section 3.5 and choose the crossover probability equal to 1. The corresponding Matlab program code of HNDX can be as follows:

x(1:n/2,:) = normrnd(y(1:n/2,:), 0.001 + ((y(1:n/2,:) - y(n/2+1:n,:))/6).^2);
x(n/2+1:n,:) = y(n/2+1:n,:) + unifrnd(0.5,1.5)*(x(1:n/2,:) - y(n/2+1:n,:));

It is observed that the HNDX Matlab program has neither a loop statement nor a conditional statement, while DBX has both. As a result, the HNDX program is simpler and takes less time to execute the corresponding statements.
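For readers without Matlab, a Python sketch of one HNDX crossover is given below (illustrative; `hndx` is an assumed name, and note that Python's `random.gauss` takes a standard deviation, so `eps + |.|/6` is used as the spread here, whereas the paper phrases the formula in terms of a variance):

```python
import random

def hndx(x_better, x_worse, eps=1e-3):
    """HNDX sketch: the first offspring is drawn around the better parent
    with spread tied to the distance between the parents; the second moves
    the worse parent along the crossover direction d = y1 - x_worse with
    lambda ~ U(0.5, 1.5)."""
    y1 = [random.gauss(b, eps + abs(b - w) / 6.0)
          for b, w in zip(x_better, x_worse)]
    lam = random.uniform(0.5, 1.5)
    y2 = [w + lam * (a - w) for a, w in zip(y1, x_worse)]
    return y1, y2

random.seed(0)
y1, y2 = hndx((1.0, 1.0), (0.0, 0.0))
```

By construction, y1 clusters near the better parent, and y2 lies on the ray from the worse parent through y1, i.e., along the crossover direction.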

As an example, in a 2D search space, for the spatial positions of the two parents X1 and X2 participating in crossover, there are three possible cases: (a) both parents are in the infeasible region; (b) one parent is in the feasible region while the other is not; (c) both parents are in the feasible region. For these three cases, the HNDX operation in a two-dimensional space is shown in Figure 3.

In Figure 3, the schematic of case (a) shows that the HNDX operator can effectively guide the movement of two parents from an infeasible region to a feasible region. In the case of (b), HNDX crossover enables the individuals in the infeasible region to move into the feasible region. When the individuals are already in the feasible region as in the case of (c), HNDX searches in the neighborhood of the two parents and generates a direction to guide them to move towards the optimal solution.

3.3. Substitution Operation

In a global optimization problem with many local optima, when the algorithm finds a region with an extreme value (whether local or global), individuals in the population constantly move closer to that region, and identical or similar individuals may appear in the population. As the number of iterations increases, the identical individuals in the population gradually multiply and may even come to constitute the entire population, which degrades the diversity of the population; thus both the convergence speed of GA and its ability to explore other extreme-value regions are affected. In the case of the RCGA proposed in [24], for optimization problems with many local optima, it is possible that the vast majority of individuals in the population, or even all of them, become identical, so that the algorithm cannot converge to the global optimal solution. In order to avoid this phenomenon and to maintain the diversity of the population during the iteration process, a substitution operation is added to the IRCGA proposed herein. The substitution operation is as follows.

If there are two or more identical individuals in a population after crossover, only one of them is kept while the others are eliminated. If the current population size is then n1, n − n1 individuals are generated at random to keep the population size unchanged at n. Since the n − n1 individuals in the substitution operation are randomly generated, the substitution operation also helps the algorithm jump out of a local optimum.
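The substitution operation can be sketched as follows (a minimal Python illustration; `substitute` and the bound parameters are assumed names):

```python
import random

def substitute(pop, lower, upper):
    """Keep one copy of each duplicate individual and refill with randomly
    generated individuals so the population size stays at n."""
    n = len(pop)
    unique = list(dict.fromkeys(pop))   # preserves order, drops exact repeats
    while len(unique) < n:              # refill with n - n1 random individuals
        unique.append(tuple(random.uniform(lo, hi)
                            for lo, hi in zip(lower, upper)))
    return unique

pop = [(1.0, 2.0), (1.0, 2.0), (0.5, 0.5), (0.5, 0.5)]
new_pop = substitute(pop, lower=(0.0, 0.0), upper=(3.0, 3.0))
```

After substitution, the population keeps its size but contains no duplicates, and the random refills give the search a chance to escape the current basin.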

3.4. Combinational Mutation

Among the mutation operators given in the existing literature, some have strong local search ability [47, 48] and some have strong global search ability [48]. For optimization problems with few extreme points, a mutation operator with stronger local search ability should be adopted. For optimization problems with more extreme points, if a mutation operator with stronger local search ability is adopted, the algorithm easily converges to a local optimum; if a mutation operator with stronger global search ability is adopted and the accuracy requirement on the optimal solution is high, the convergence speed of the algorithm slows down. Some mutation operators given in the literature have strong global search capability at the beginning of the iterations and, as the number of iterations increases, their local search capability is enhanced. However, this requires the maximum number of iterations to be given, which is difficult to do in advance; although theoretically feasible, this approach usually performs worse in practice [24]. In summary, it is difficult for a single mutation operator to take into account both global and local search capabilities. For this reason, this paper proposes a combinational mutation method which uses three mutation operators: the first mutation operator enhances local search ability as the number of iterations increases; the second has strong global search capability; and the third has strong local search capability. The specific approach of combinational mutation is as follows.

We divide the number of iterations by 3. When the remainder is 1, the second mutation operator is used; when the remainder is 2, the first mutation operator is used; when the remainder is 0, the third mutation operator is used.

The advantage of combinational mutation is that both the local and the global search ability of the mutation operator are taken into account in the iterative process. The three mutation operators are as follows:

X′ = X + (1/gen) λ1 ⊙ (bᵀ − aᵀ)
(6)

where gen is the number of the iteration, X is the offspring from crossover, X′ is the offspring from mutation, vectors a and b are the lower and upper limits of the variables, and λ1 ⊙ (bᵀ − aᵀ) is the product obtained by multiplying λ1 with the corresponding element of bᵀ − aᵀ at the same position, λ1 = (λ11, λ12, ⋯, λ1m), in which λ11, λ12, ⋯, λ1m are uniformly distributed random numbers in (−1, 1);

X′ = X + Cauchy(0, 1)
(7)

where Cauchy(0, 1) is the standard Cauchy distribution, X is an individual to mutate, and X′ is the individual after mutation;

X′ = N(μ, δ²), μ = X
(8)

where μ = X is the mean of the normal distribution and δ² is the variance of the normal distribution.

δ² of (8) is given by

δ² = ((X − Xbest)/6)² + ε
(9)

where Xbest is the optimal individual in the population, ((X − Xbest)/6)² is the square of each element of the vector (X − Xbest)/6, and ε is a small positive number to avoid a zero variance.

The global search ability of the first mutation operator at the beginning of the iterations is strong, but once the number of iterations increases above a certain value, the first mutation operator almost loses its mutation ability. The second mutation operator is a Cauchy mutation, which can generate a larger mutation step than the normal mutation operator, giving the algorithm better global search ability. The third mutation operator is a normal mutation, which focuses on searching a local region near the mean, with better local search ability, but its ability to guide an individual out of a local optimal solution is weak, which is not conducive to global convergence. Therefore, the combinational mutation operator takes into account both global exploration and local search, which makes GA converge quickly to the global optimal solution.
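The remainder-based switching between the three operators can be sketched in Python as follows (the selection rule follows the text above; the step formulas are illustrative reconstructions, not the paper's exact expressions):

```python
import math, random

def mutate(X, gen, x_best, lower, upper, eps=1e-3):
    """Combinational mutation sketch: the operator is chosen from the
    remainder of gen modulo 3, as described in Section 3.4."""
    r = gen % 3
    if r == 1:   # second operator: Cauchy mutation (strong global search)
        return tuple(x + math.tan(math.pi * (random.random() - 0.5)) for x in X)
    if r == 2:   # first operator: uniform step shrinking with 1/gen
        return tuple(x + random.uniform(-1.0, 1.0) * (hi - lo) / gen
                     for x, lo, hi in zip(X, lower, upper))
    # third operator: normal mutation around X, spread tied to x_best
    return tuple(random.gauss(x, eps + abs(x - xb) / 6.0)
                 for x, xb in zip(X, x_best))

random.seed(2)
X = (0.5, 0.5)
m1 = mutate(X, gen=5, x_best=X, lower=(0.0, 0.0), upper=(1.0, 1.0))  # 5 % 3 == 2
m3 = mutate(X, gen=6, x_best=X, lower=(0.0, 0.0), upper=(1.0, 1.0))  # 6 % 3 == 0
```

The `tan(pi*(u - 0.5))` transform is the standard way to draw a Cauchy variate from a uniform one; the 1/gen factor makes the first operator's step shrink as the iterations proceed, matching its described behavior.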

3.5. Evolutionary Strategy (ES)

The evolutionary strategy of the IRCGA proposed in this paper is as follows:

(1) The initial population is randomly generated as parents; the population size is n and the mutation probability is pm.
(2) The sorting operation is performed; that is to say, all individuals in the population are sorted in descending order according to their objective function values, and the s best-ranked elitist individuals are selected among the n parents.
(3) Selection and crossover operations are carried out according to the sorting results with crossover probability equal to 1.
(4) The substitution operation is performed with the n offspring generated by crossover and the s elitist individuals.
(5) All individuals in the population are sorted in descending order according to their objective function values after the substitution operation.
(6) If the population size n1 is now less than n + s, randomly generate n + s − n1 individuals.
(7) Repeat the above sorting operation to get the new population, and select the s elitist individuals and the n best-ranked individuals in the new population.
(8) Using the mutation operators, modify the individuals to be mutated.
(9) Regenerate the new population consisting of the n·pm mutated individuals, the n(1 − pm) unmutated individuals, and the s elitist individuals chosen in Step (7).
(10) Sort the individuals of the regenerated new population in descending order according to their objective function values.
(11) If the stop condition is met, output the optimal solution and optimal value.
(12) Otherwise, select the s elitist individuals and the n best-ranked individuals within the last population and go to Step (3) to start the next loop.

The evolutionary strategy flow diagram for IRCGA is shown in Figure 4.
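The steps above can be condensed into a short Python sketch (illustrative only: the operators of Sections 3.1-3.4 are replaced by simplified stand-ins, and all names, step sizes, and default parameters are assumptions, not the paper's Matlab implementation):

```python
import random

def ircga(P, lower, upper, n=40, s=4, pm=0.5, max_gen=200):
    """Compact sketch of the IRCGA loop: sort, SGS pairing, HNDX-style
    crossover, de-duplication with random refill, mutation, and elitism."""
    rand_ind = lambda: tuple(random.uniform(lo, hi)
                             for lo, hi in zip(lower, upper))
    pop = [rand_ind() for _ in range(n)]
    for gen in range(1, max_gen + 1):
        pop.sort(key=P, reverse=True)                      # descending sort
        elite = pop[:s]                                    # keep s elitists
        kids = []
        for a, b in zip(pop[:n // 2], pop[n // 2:]):       # SGS pairing
            y1 = tuple(random.gauss(x, 1e-3 + abs(x - y) / 6.0)
                       for x, y in zip(a, b))              # HNDX offspring 1
            lam = random.uniform(0.5, 1.5)
            y2 = tuple(y + lam * (c - y) for c, y in zip(y1, b))
            kids += [y1, y2]
        pool = list(dict.fromkeys(elite + kids))           # substitution
        while len(pool) < n + s:                           # random refill
            pool.append(rand_ind())
        pool.sort(key=P, reverse=True)
        body = [tuple(random.gauss(x, 0.01) for x in ind)  # simplified
                if random.random() < pm else ind           # mutation
                for ind in pool[:n - s]]
        pop = elite + body                                 # elitists survive
    return max(pop, key=P)

random.seed(3)
best = ircga(lambda X: -(X[0] ** 2 + X[1] ** 2), (-5.0, -5.0), (5.0, 5.0))
```

On this toy maximization problem (the negated sphere function), the loop contracts quickly around the origin, with the elitists guaranteeing that the best individual never degrades.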

4. Algorithmic Testing and Analysis

Below, the RCGA proposed in [24] is abbreviated as IRCGA-1, the IRCGA proposed in this paper as IRCGA-2, IRCGA-2 with the substitution operation removed as IRCGA-3, and the RCGAs proposed in [49, 50] as IRCGA-4 and IRCGA-5, respectively.

In order to verify the validity and feasibility of IRCGA-2, it is compared with existing RCGAs. Among the existing references on RCGA, [24] was published in 2016 with a great amount of comparative studies of RCGAs; therefore, IRCGA-2 will be compared with IRCGA-1. In addition, IRCGA-2 will also be compared with IRCGA-4 and IRCGA-5.

4.1. Iteration Termination Condition

The iteration termination condition for the RCGA can be defined as

|fi − fi*| ≤ εi
(10)

where fi* is the globally optimal value of the ith test function in theory, fi is the optimal value of the ith test function obtained by the RCGA, and εi is the precision for the ith test function.

4.2. Diversity of Population

If the individuals in the tth population X(t) are X1(t), X2(t), ⋯, Xn(t), respectively, and the ith individual is Xi(t) = (xi1(t), xi2(t), …, xim(t)), the population center X̄(t) = (x̄1(t), x̄2(t), ⋯, x̄m(t)) is calculated as follows:

x̄j(t) = (1/n) Σ_{i=1}^{n} xij(t), j = 1, 2, ⋯, m
(11)

The average value of the sum of the variances of each individual with respect to the population center is D(t), calculated by

D(t) = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} (xij(t) − x̄j(t))²
(12)

The D value reflects the degree of dispersion of the individuals in the population. The larger the D value, the better the diversity of the population; the smaller the D value, the poorer the diversity of the population. When the D value equals 0, the population diversity is completely lost; that is, all individuals in the population are the same.
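The population-centre and D computations described above can be sketched directly in Python (`diversity` is an assumed name):

```python
def diversity(pop):
    """Population diversity D: the mean, over individuals, of the squared
    distance to the population centre, as described above."""
    n, m = len(pop), len(pop[0])
    centre = [sum(ind[j] for ind in pop) / n for j in range(m)]
    return sum(sum((ind[j] - centre[j]) ** 2 for j in range(m))
               for ind in pop) / n
```

A population of identical individuals gives D = 0, matching the complete-loss-of-diversity case in the text.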

4.3. Test Functions

In order to verify the validity and feasibility of IRCGA-2, nine frequently used test functions of certain complexity are selected. They are discussed below.

(1) Needle-in-a-haystack function, to be maximized. The optimal solution of f1 is located at X* = (0, 0) with f1(X*) = 3600.

(2) Schaffer function, to be maximized. The optimal solution of f2 is located at X* = (0, 0) with f2(X*) = 1.

(3) Six-hump Camel Back function, to be minimized. The optimal solutions of f3 are located at X* = (−0.0898, 0.7126) and (0.0898, −0.7126) with f3(X*) = −1.031628.

(4) Shubert function, to be minimized. The optimal solution of f4 is located at X* = (−1.42513, 0.80032) with f4(X*) = −186.7309.

(5) Rosenbrock function, to be minimized. The optimal solution of f5 is located at X* = (1, 1) with f5(X*) = 0.

(6) Michalewicz's function, to be minimized. The optimal solution of f6 is located at X* = (2.0230, 2.0230) with f6(X*) = −1.80130.

(7) G08 function, to be minimized. The optimal solution of f7 is located at X* = (1.2279723, 4.2453733) with f7(X*) = −0.095825.

(8) Easom function, to be minimized. The optimal solution of f8 is located at X* = (π, π) with f8(X*) = −1.

(9) Generalized Rastrigin function, to be minimized. The optimal solution of f9 is located at X* = (0, 0) with f9(X*) = 0.
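As one representative example, the generalized Rastrigin function in its standard form (assumed here; the paper's exact expression is not reproduced in the text) can be written as:

```python
import math

def rastrigin(X):
    """Standard generalized Rastrigin function: highly multimodal, with
    its global minimum 0 at the origin."""
    return sum(x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0 for x in X)
```

Its many regularly spaced local minima are what make it a demanding test of an algorithm's ability to avoid premature convergence.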

4.4. Parameter Settings

In order to obtain a fair performance comparison, the parameters related to IRCGA proposed in this paper are set as follows:

The computing precision for the test functions is ε1 = ε2 = ε3 = ε4 = ε5 = ε6 = ε7 = ε8 = ε9 = 10−4; the population size is n = 100; M1 = M2 = 10^7 in the penalty function; in IRCGA-2, the mutation probability is pm = 0.5 and the crossover probability is pc = 1; in IRCGA-1, the mutation probability is pm = 0.05 and the crossover probability is pc = 0.9; 50 elite individuals are reserved; in IRCGA-4 and IRCGA-5, the population size is n = 100 and the remaining parameters are the same as in [49, 50].

4.5. Limitations of IRCGA-1

Definition 1. When all the individuals in the population are the same, the population is said to have lost its diversity.

For an optimization problem with more local extreme points, IRCGA-1 has a great possibility of falling into a local extreme point. For example, consider the first test function in Section 4.3. With IRCGA-1, the evolution of the population as a function of the number of iterations is shown in Figure 5. The corresponding results with IRCGA-2 are shown in Figure 6.

It can be seen from Figure 5 that the diversity of the population tends to become poor as the number of iterations increases. When the number of iterations is 160, all the individuals in the population are the same and the value of the objective function is 2978.2275; the population has lost its diversity, and the global optimal solution cannot be obtained. It can be seen from Figure 6 that the population diversity of IRCGA-2 also tends to decline with the number of iterations, but it remains significantly superior to that of IRCGA-1. To compare the population diversities of IRCGA-1 and IRCGA-2 further, consider the needle-in-a-haystack function discussed in Section 4.3. The IRCGA-1 and IRCGA-2 programs were each run 1000 times. With IRCGA-1, population diversity was lost 697 times, a probability of 69.7%; with IRCGA-2, population diversity was lost 0 times, a probability of 0%. These results indicate that IRCGA-1 is far more likely to lose population diversity on optimization problems with many local optima.

In summary, IRCGA-1 has great limitations in solving the optimization problem with many local optima. In many cases, the global optimal solution cannot be obtained.

4.6. Test Results

In order to obtain a fair performance comparison, the 9 test functions were used as examples and each test function was run 1000 times on the same computer. The computational results of IRCGA-1, IRCGA-2, IRCGA-3, IRCGA-4, and IRCGA-5 are shown in Table 2.

The average running time, the average number of iterations, and the average value of population diversity when converging to the optimal solution in Table 2 were calculated as follows. When the iteration termination condition is satisfied, the number of iterations, the running time, and the population diversity at the ith run are, respectively, niter(i), t(i), and div(i), i = 1, 2, ⋯, k, where k is the number of runs in each experiment. Then

tav = (1/k) Σ_{i=1}^{k} t(i), niterav = (1/k) Σ_{i=1}^{k} niter(i), divav = (1/k) Σ_{i=1}^{k} div(i)

where tav is the average running time, niterav is the average number of iterations, and divav is the average value of population diversity.

When the average running time of IRCGA-1 is t1 and the average running time of IRCGA-2 is t2, IRCGA-2 reduces the average running time by x% for the ith test function fi in comparison to IRCGA-1, where x% is computed by

x% = ((t1 − t2)/t1) × 100%
(25)

The reduction of the average number of iterations is computed analogously:

x% = ((niter1 − niter2)/niter1) × 100%
(26)

The computational method of x% for the comparisons of IRCGA-2 with IRCGA-3, IRCGA-4, and IRCGA-5 is the same as in (25) and (26).

As can be observed in Table 2, the average running time, average number of iterations, and average value of population diversity of IRCGA-2 are significantly superior to those of IRCGA-1, IRCGA-3, IRCGA-4, and IRCGA-5. IRCGA-2 reduces the average running time by 96.7328%, 17.3913%, 96.9867%, and 96.4353% for f1, 96.8280%, 49.8113%, 97.8737%, and 73.0223% for f2, 77.6050%, 47.2527%, 89.0327%, and 54.5731% for f3, 98.5534%, 9.3625%, 98.2681%, and 72.8908% for f4, 72.2076%, 21.9753%, 93.1557%, and 75.6923% for f5, 61.9744%, 33.3333%, 88.6401%, and 83.7500% for f6, 65.1568%, 37.5000%, 58.2463%, and 45.6522% for f7, 81.0282%, 20.1220%, 82.1038%, and 82.0320% for f8, and 91.9857%, 50.9375%, 71.9643%, and 60.3535% for f9 in comparison to IRCGA-1, IRCGA-3, IRCGA-4, and IRCGA-5. Similarly, the numbers of iterations are reduced by 88.2454%, 20.2076%, 89.6488%, and 87.7177% for f1, 88.6364%, 48.9817%, 93.6805%, and 84.6963% for f2, 24.5985%, 11.7893%, 73.8572%, and 33.5144% for f3, 95.2673%, 19.4441%, 95.0346%, and 82.6697% for f4, 14.2070%, 4.7474%, 80.2807%, and 78.6939% for f5, 11.0402%, 6.7012%, 93.6547%, and 92.0071% for f6, 15.8179%, 6.1513%, 41.0115%, and 47.6137% for f7, 34.9704%, 11.2156%, 40.9364%, and 42.8695% for f8, and 73.1850%, 23.1377%, 38.9808%, and 30.6282% for f9. Thus, IRCGA-2 is observed to be superior to IRCGA-1, IRCGA-3, IRCGA-4, and IRCGA-5 in terms of all the measures utilized.

5. Parameter Optimization of Vehicle Steering Trapezoid Mechanism

In the autosteering process, the difference between the actual movement trajectory and the theoretical movement trajectory of the vehicle steering mechanism results in the motion error, which increases the tire wear and destroys the safety and stability of steering. Through the optimization of the size and positioning parameters of the steering mechanism, the error can be effectively reduced, thereby improving the vehicle handling performance and improving the steering safety. Here the weighted sum of the relative error of the theoretical rotation angle and the actual rotation angle of the external front steering wheel is chosen as the objective function to be minimized. When the inside front wheel turning angle of the vehicle is α, the schematic diagram of the vehicle ideal steering process and the actual steering process are shown in Figures 7 and 8.

Note that α is the actual rotational angle of the inside front steering wheel, β is the theoretical rotational angle of the external front steering wheel, M is the distance between the left and right vertical shafts, L is the distance between the rear and front axles of the vehicle, R is the steering radius of the external front steering wheel, θ is the bottom angle of the steering trapezoid mechanism, and a is the distance between the wheel and the steering pin.

Note that α is the actual rotational angle of the inside front steering wheel, β is the theoretical rotational angle of the external front steering wheel, M is the distance between the left and right vertical shafts, L is the distance between the rear and front axles of the vehicle, θ is the bottom angle of the steering trapezoid mechanism, β′ is the ideal rotational angle of the external front steering wheel, m is the steering arm length of the steering trapezoid mechanism, N is the length of the auxiliary line, and δ1 and δ2 are the interior angles of the auxiliary calculation.

5.1. Optimization Model of the Vehicle Steering Trapezoid Mechanism

Using the geometry in Figure 6, we get the Ackermann condition

cot β − cot α = M/L,

where α is the rotational angle of the inside front steering wheel, β is the theoretical rotational angle of the external front steering wheel, M is the distance between the left and right vertical shafts, and L is the distance between the rear and front axles of the vehicle.
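Assuming the standard Ackermann relation cot β − cot α = M/L, which is consistent with the definitions above, the theoretical outer-wheel angle can be computed directly from α, M, and L; a minimal sketch (the function name is illustrative):

```python
import math

def ideal_outer_angle(alpha, M, L):
    """Theoretical outer-wheel angle beta from the Ackermann condition
    cot(beta) - cot(alpha) = M / L (angles in radians)."""
    cot_beta = 1.0 / math.tan(alpha) + M / L
    return math.atan(1.0 / cot_beta)
```

Because cot β exceeds cot α by M/L, the computed β is always smaller than α, matching the physical fact that the outer wheel turns less than the inner wheel.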

In the process of vehicle swerve, when the turning radius reaches its minimum Rmin, the rotational angle of the inside front steering wheel reaches its maximum αmax, where Rmin is the minimum turning radius and αmax is the maximum rotational angle of the inside front steering wheel.

Figure 7 shows the schematic of the actual steering process when the rotational angle of the inside steering wheel is α. The dotted line represents the positional relationship of the steering trapezoid mechanism before steering begins. Using the geometry in Figure 7, we obtain an expression for N, where α is the rotational angle of the inside front steering wheel, M is the distance between the left and right vertical shafts, m is the steering arm length of the steering trapezoid mechanism, θ is the bottom angle of the steering trapezoid mechanism, and N is the length of the auxiliary line. We also obtain δ1 and δ2, where S is the length of the tie rod and δ1 and δ2 are the interior angles of the auxiliary calculation.

Using (29), (30), and Figure 5, we obtain the actual rotational angle of the external front steering wheel, where β is the theoretical rotational angle of the external front steering wheel and β′ is the ideal rotational angle of the external front steering wheel.

We find

In order to ensure the steering performance of the vehicle, the actual rotational angle of the external front steering wheel should be as close as possible to the ideal rotational angle in the process of vehicle swerve. Hence, the optimization problem of the vehicle steering trapezoid structure can be written as follows, where α is the rotational angle of the inside front steering wheel, αmax is the maximum rotational angle of the inside front steering wheel, αi is the angle index corresponding to the rotational angle of the inside front steering wheel and is a dimensionless natural number between 1 and αmax, and ω(αi) is a weighting function.
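The objective can be evaluated by sampling the inner-wheel angle at integer degrees and accumulating the weighted relative error. A sketch under the assumption that the actual-angle, ideal-angle, and weighting functions are supplied by the caller (all names hypothetical, standing in for the paper's closed forms):

```python
import math

def steering_objective(actual_beta, ideal_beta, alpha_max_deg, weight):
    """Weighted sum of the relative error between the actual and the ideal
    outer-wheel rotational angles, sampled at integer inner-wheel angles
    alpha_i = 1..alpha_max degrees."""
    total = 0.0
    for i in range(1, alpha_max_deg + 1):
        a = math.radians(i)
        total += weight(i) * abs(actual_beta(a) - ideal_beta(a)) / abs(ideal_beta(a))
    return total
```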

The computational method is as follows:

The design variables of the objective function are the steering trapezoid bottom angle θ and the trapezoid arm length m, represented by X = [m, θ]. According to the design information in the literature [51, 52], the constraint conditions of the vehicle steering trapezoid mechanism are given by

5.2. Optimization Results

We experimented with Patrol GRX, Patrol GL, and Nissan Duke data. The parameters of the vehicles are shown in Table 3. IRCGA-2 is used to optimize the parameters of the steering trapezium mechanism.

The population size and the maximum number of iterations were chosen as 100 and 5000, respectively. The optimization results are shown in Table 4.
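With these settings, the outer optimization loop is simple; a minimal GA driver sketch, where the operator functions are placeholders for the IRCGA steps (selection, HNDX crossover, substitution, combined mutation), not the paper's implementation:

```python
import random

def run_ga(evaluate, init, step, pop_size=100, max_iter=5000):
    """Minimal driver with the settings used here (population 100,
    at most 5000 iterations); tracks the best individual seen."""
    pop = [init() for _ in range(pop_size)]
    best = min(pop, key=evaluate)
    for _ in range(max_iter):
        pop = step(pop)              # one generation of GA operators
        candidate = min(pop, key=evaluate)
        if evaluate(candidate) < evaluate(best):
            best = candidate
    return best
```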

It can be seen from Table 4 that the absolute error between the optimal value of the trapezoidal arm length and the ideal value is within 0.0001 m, the relative error being within 0.1%. The absolute error value of the steering trapezoidal bottom angle is within 0.0005 rad, the relative error being within 0.1%. The errors between the optimal and ideal values of the objective function are thus within 0.1%.

6. Conclusions

IRCGA-2 is proposed to solve constrained global optimization problems. The improvements of IRCGA-2 fall into four categories: the selection operator, the crossover operator, the substitution operator, and the mutation operator.

When methods such as roulette wheel selection, tournament selection, and local selection are used to compute the individual's fitness, population diversity deteriorates at later stages of the iterations. IRCGA-2 improves performance by incorporating a number of novelties, summarized below.

IRCGA-2 uses the SGS method, which does not need to compute the fitness value, achieves superior population diversity, and is easy to implement.

An HNDX operator is developed. Compared with the DBX operator in [24], the HNDX operator guarantees that the cross-generated offsprings are located near the better of the two parents and that the crossover direction is very close to, or consistent with, the optimal crossover direction. Thus, HNDX ensures a great chance of generating better offsprings, thereby improving the convergence speed of IRCGA-2.
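The idea can be illustrated with a toy crossover that places the offspring near the better parent and displaces it along the worse-to-better direction by a non-negative normally distributed step; this is only a sketch in the spirit of HNDX, not the operator's actual formula:

```python
import random

def hndx_like_crossover(p_better, p_worse, sigma=0.1):
    """Illustrative normal-distribution crossover: the offspring starts
    at the better parent and moves along the direction from the worse
    parent to the better one (the improving direction)."""
    step = abs(random.gauss(0.0, sigma))  # non-negative step size
    return [b + step * (b - w) for b, w in zip(p_better, p_worse)]
```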

In most previous GAs, as iterations increase, identical individuals are likely to appear in the population, worsening population diversity. To avoid this problem, the substitution operation is added after crossover so that no duplicate individuals remain in the population.
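The substitution step can be sketched as a duplicate scan over the population, where any repeated individual is replaced with a fresh one (the replacement generator is a caller-supplied stand-in):

```python
def substitute_duplicates(pop, random_individual):
    """Replace any individual already seen with a freshly generated one,
    so no two individuals in the returned population are identical."""
    seen = set()
    result = []
    for ind in pop:
        key = tuple(ind)
        while key in seen:              # regenerate until unique
            ind = random_individual()
            key = tuple(ind)
        seen.add(key)
        result.append(ind)
    return result
```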

Different mutation operators differ in their local and global search abilities: some have strong local search ability, while others have strong global search ability. It is difficult for a single mutation operator to account for both. This paper proposes a combined mutation method that makes the mutation operation take into account both local search and global search.
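One common way to combine operators is to mix a uniform (global) and a Gaussian (local) mutation per gene; this sketch illustrates the idea with assumed parameters, not the paper's exact combination:

```python
import random

def combined_mutation(x, lo, hi, p_global=0.3, sigma=0.05):
    """With probability p_global, resample the gene uniformly over
    [lo, hi] (global search); otherwise apply a small Gaussian
    perturbation (local search)."""
    if random.random() < p_global:
        return random.uniform(lo, hi)
    y = x + random.gauss(0.0, sigma * (hi - lo))
    return min(max(y, lo), hi)  # clip back into the feasible interval
```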

In optimization problems with many local optima, IRCGA-1 is likely to lose population diversity after many iterations and thereby converges to a local optimum instead of the global optimum. In such problems, the population diversity of IRCGA-2 remains superior to that of IRCGA-1.

The computational results with nine examples show that IRCGA-2 has better performance than the other IRCGA methods with respect to all the measures of performance considered.

As an example application, the optimization model of the vehicle steering mechanism is formulated and IRCGA-2 is used to optimize the parameters of the steering trapezoid mechanism of three vehicle types. Better results are obtained with IRCGA-2 than with the other methods used.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the Project of the Social Science Foundation of Heilongjiang Province, China (Grant no. 16JYB06), in part by the National Social Science Foundation (Grant no. 13JY098), and in part by the subproject of the National Science and Technology Support Program (Grant no. 2014BAD06B01-23).