
Dual-Subpopulation as reciprocal optional external archives for differential evolution


Authors: Haiming Du aff001;  Zaichao Wang aff001;  Yiqun Fan aff002;  Chengjun Li aff002;  Juan Yao aff003
Authors place of work: School of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, Henan, China aff001;  School of Computer Science, China University of Geosciences, Wuhan, Hubei, China aff002;  College of Informatics, Huazhong Agricultural University, Wuhan, Hubei, China aff003
Published in the journal: PLoS ONE 14(9)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0222103

Summary

Differential Evolution (DE) is powerful for global optimization problems. Among DE algorithms, JADE and its variants, whose mutation strategy is DE/current-to-pbest/1 with optional archive, show good performance. A significant feature of this mutation strategy is that one individual for the difference operation comes from the union of the optional external archive and the population. In existing DE algorithms based on this mutation strategy—JADE and its variants—individuals eliminated from the population are sent to the archive. In this paper, we propose a scheme for managing the optional external archive. According to our scheme, two subpopulations are maintained in the population, and each of them regards the other as the archive. In experiments, our scheme is applied to JADE and two of its variants—SHADE and L-SHADE. Experimental results show that our scheme can enhance JADE and its variants. Moreover, L-SHADE with our scheme performs significantly better than four DE algorithms, CoBiDE, MPEDE, EDEV, and MLCCDE.

Keywords:

Biology and life sciences – Physical sciences – Research and analysis methods – Evolutionary biology – Evolutionary processes – Natural selection – Population biology – Mathematics – Simulation and modeling – Population metrics – Applied mathematics – Algorithms – Population size – Ecology and environmental sciences – Ecology – Optimization – Ecological metrics – Species diversity – Research facilities – Computational techniques – Information centers – Archives – Evolutionary algorithms – Evolutionary computation – Convergent evolution

Introduction

Differential evolution (DE), a type of Evolutionary Algorithm (EA) for global optimization problems, has been successfully applied in many fields [1]. In each run of DE, a population consisting of individuals—candidate solutions of the problem—needs to be maintained. Here, individuals are also called target vectors. In the gth generation of the population, mutant vectors {vi,g = (vi,1,g, vi,2,g, …, vi,d,g) | i = 1, 2, …, NP}, where d denotes the dimensionality of the problem, are generated through mutation based on the target vectors {xi,g = (xi,1,g, xi,2,g, …, xi,d,g)}. Then, crossover produces trial vectors {ui,g = (ui,1,g, ui,2,g, …, ui,d,g)} based on xi,g and vi,g. After that, xi,g+1 is selected from xi,g and ui,g according to their fitness values f(xi,g) and f(ui,g).
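To make these three steps concrete, the following minimal Python sketch (our own illustration, not taken from the paper; the sphere objective and all parameter values are arbitrary choices) performs one generation of the classic DE/rand/1/bin scheme.

import numpy as np

def de_generation(pop, fitness, f_obj, F=0.5, CR=0.9, rng=np.random):
    # One generation of classic DE/rand/1/bin: mutation, crossover, selection.
    NP, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # mutation: v = x_r1 + F * (x_r2 - x_r3) with r1, r2, r3 distinct from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover: at least one component is taken from the mutant
        jrand = rng.randint(d)
        mask = rng.rand(d) < CR
        mask[jrand] = True
        u = np.where(mask, v, pop[i])
        # selection: the trial vector survives if it is not worse than the target
        fu = f_obj(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit

# toy usage on the sphere function
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.RandomState(0)
pop = rng.uniform(-5, 5, (20, 10))
fit = np.array([sphere(x) for x in pop])
for _ in range(200):
    pop, fit = de_generation(pop, fit, sphere, rng=rng)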

DE is being constantly improved in different aspects. According to [2],

  • Methods based on both strategy and control parameter adaptations [3–21];

  • Methods based on only strategy adaptations [1, 22–40]; and

  • Methods based on population size control [41–46].

are the recent directions of DE research. Most of the above methods can be interpreted as improving or maintaining diversity—the difference among individuals. Although many measures have been presented in the literature, satisfactory solutions still cannot be obtained by DE on many occasions. Therefore, further research is still required.

JADE [3] is a state-of-the-art DE algorithm. So far, a number of variants of JADE have been proposed in the literature, such as SHADE [47], Rcr-JADE [48], L-SHADE [45], AEPD-JADE [1], JADE-SI [27], JADE_sort [20], ETI-JADE [34], and ETI-SHADE [34]. Not only JADE itself but also its variants are based on the same mutation strategy, DE/current-to-pbest/1 with optional archive, which is shown in Eq 1:

vi,g = xi,g + Fi·(x^p_best,g − xi,g) + Fi·(xr1,g − x̃r2,g)   (1)

In the equation, xi,g, xr1,g and x^p_best,g are target vectors from population P. Further, x^p_best,g is randomly chosen from the top 100p% of individuals, whose fitness is better than that of the other individuals, where p ∈ (0, 1]. Meanwhile, x̃r2,g is an individual from the union of the optional external archive and the population. In addition, both xr1,g and x̃r2,g are randomly chosen and differ from xi,g.
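As a concrete illustration of Eq 1, the sketch below (our own simplified rendering; the numpy population matrix and the list-based archive are assumptions, not the authors' code) draws x^p_best,g from the best 100p% of the population and x̃r2,g from the union of the population and the archive.

import numpy as np

def current_to_pbest_1(pop, fitness, archive, i, F, p=0.05, rng=np.random):
    # DE/current-to-pbest/1 with optional archive (Eq 1) for target vector i.
    NP = len(pop)
    top = max(1, int(np.ceil(p * NP)))
    pbest = rng.choice(np.argsort(fitness)[:top])        # one of the 100p% best
    r1 = rng.choice([j for j in range(NP) if j != i])    # drawn from the population
    # x~_r2 is drawn from the union of the population and the archive
    union = np.vstack([pop] + ([np.asarray(archive)] if len(archive) else []))
    r2 = rng.choice([j for j in range(len(union)) if j != i and j != r1])
    return pop[i] + F * (pop[pbest] - pop[i]) + F * (pop[r1] - union[r2])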

Mutation in DE is always based on the difference operation between individuals. In the majority of mutation strategies, the individuals for the difference operation are target vectors in the current generation of the population. Nevertheless, a significant feature of DE/current-to-pbest/1 with optional archive is that one of the individuals for the difference operation comes from the union of the archive and the population. That is, this individual is selected from a larger range than before. According to experimental results in the literature, DE/current-to-pbest/1 with optional archive leads to good algorithm performance.

Here, we give the motivation of this paper. In JADE and its variants, the optional external archive is managed by a simple means, as follows. In every generation, target vectors weeded out in selection are sent to the archive. A sent individual is accepted by the archive only if it differs from every individual already stored there; that is, redundancy is not allowed in the archive. When there is no free space in the archive, random individuals in it are removed to accommodate newcomers. By this means, potentially promising search directions in individuals eliminated from the population may still be kept for evolution. Nevertheless, under this managing method, individuals in the archive are worse in fitness than the target vectors and similar in chromosome to the target vectors. Hence, this method for managing the archive is not the best choice for DE/current-to-pbest/1 with optional archive, and how to manage the optional external archive needs to be further studied.
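A minimal sketch of this archive update rule (our own paraphrase in code; the duplicate check and the random eviction follow the description above, and the function name is ours):

import numpy as np

def update_archive(archive, loser, max_size, rng=np.random):
    # Add an individual eliminated in selection to the archive.
    if any(np.array_equal(loser, a) for a in archive):
        return archive                              # redundancy is not allowed
    if len(archive) >= max_size:
        archive.pop(rng.randint(len(archive)))      # evict a random member to make room
    archive.append(loser.copy())
    return archive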

EAs naturally lend themselves to parallelism, since most of their variation operators can be processed in parallel. Among the well-known types of parallel EAs, distributed EAs (DEAs) are the most widely applied for upgrading different EAs [49]. In DEAs, the large population is divided into subpopulations to create segregation. When a predetermined condition is met, migration is executed to exchange individuals among subpopulations. By this means, each subpopulation is supplied from time to time with foreign individuals that are similar in fitness level to its local individuals but different in building blocks of chromosome. Hence, upgrading an EA to a DEA can improve solutions.

Enlightened by the migration of DEAs, we propose a scheme to manage the optional external archive in this paper. Details are given below. The population is divided into two subpopulations, which evolve synchronously and independently. Each subpopulation regards the other one as its optional external archive. Between the two subpopulations, individuals are similar in fitness level to local individuals but different in building blocks of chromosome. Therefore, under the control of our proposed scheme, individuals better suited for the difference operation of mutation can be provided.

Although our scheme is enlightened by the migration of DEAs, it differs from migration significantly. In migration, individuals from the source subpopulation directly replace individuals in the target subpopulation. Under our scheme, however, individuals from a subpopulation never migrate to the other subpopulation; they only participate in the difference operation of mutation occurring in the latter subpopulation. In fact, DEAs are costly since multiple subpopulations need to be maintained in the population, whereas DE with our method only needs to maintain two subpopulations and can therefore be compared directly with existing DE algorithms.

Our experiments are based on the IEEE Congress on Evolutionary Computation 2014 (CEC2014) benchmark test suite (http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2014/CEC2014.htm). In the first experiment, our scheme is applied to JADE and its two variants, SHADE and L-SHADE. With the function dimensionality set to 30, 50 and 100, results of the DE algorithms with our scheme are compared with results of the original DE algorithms. The experimental results show that our scheme can significantly improve solutions. In the second experiment, the best performer among the three DE algorithms with our method, L-SHADE with our method, is compared with four up-to-date DE algorithms—CoBiDE [6], MPEDE [12], EDEV [21], and MLCCDE [50]. The experimental results show that L-SHADE with our method is competitive in the field of DE.

The rest of this paper is organized as follows. In Section II, related work is presented: firstly, JADE and its variants, the DE algorithms with an optional external archive, are introduced; then, DE algorithms with subpopulations are introduced. In Section III, our method for managing the optional external archive is given. Experimental results are shown and analyzed in Section IV. Finally, a conclusion and prospects for future work are given in Section V.

Related work

JADE and its variants

JADE employs DE/current-to-pbest/1 with optional archive as its mutation strategy. When implementing this mutation strategy, individuals eliminated from the population are stored in the optional external archive. Moreover, in JADE, the scaling factor F and the crossover rate CR—the two main parameters of DE—are both set adaptively for each target vector independently. Since details of both DE/current-to-pbest/1 with optional archive and the existing method for managing the optional external archive have been given in the first section, we only introduce the adaptive setting of F and CR here.

As shown in Eq 2, the crossover probability of each individual, which is truncated to [0, 1], is generated according to a normal distribution with mean μCR and standard deviation 0.1:

CRi = randni(μCR, 0.1)   (2)
If f(ui,g) < f(xi,g), the value of CRi is collected into SCR. The mean μCR is initialized to 0.5 and then updated after each generation according to Eq 3:

μCR = (1 − c)·μCR + c·meanA(SCR)   (3)

In Eq 3, c is a positive constant between 0 and 1, and meanA(·) is the usual arithmetic mean. Similarly, as shown in Eq 4, the mutation factor of each individual, which is truncated to 1 if Fi ≥ 1 or regenerated if Fi ≤ 0, is independently generated according to a Cauchy distribution with location μF and scale parameter 0.1:

Fi = randci(μF, 0.1)   (4)
If f(ui,g) < f(xi,g), the value of Fi is collected into SF. The location parameter μF of the Cauchy distribution is initialized to 0.5 and then updated at the end of each generation according to Eq 5:

μF = (1 − c)·μF + c·meanL(SF)   (5)

In Eq 5, meanL(·) is the Lehmer mean, meanL(SF) = Σ F² / Σ F over F ∈ SF. According to [3], JADE outperforms jDE [51], SaDE [52], the classic DE/rand/1/bin, and a canonical PSO algorithm [53].
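The sketch below condenses Eqs 2–5 into code (our own rendering; the value of c and the helper names are assumptions consistent with the description above).

import numpy as np

def sample_cr(mu_cr, rng=np.random):
    # Eq 2: CR_i ~ N(mu_CR, 0.1), truncated to [0, 1]
    return float(np.clip(rng.normal(mu_cr, 0.1), 0.0, 1.0))

def sample_f(mu_f, rng=np.random):
    # Eq 4: F_i ~ Cauchy(mu_F, 0.1); truncated to 1 if >= 1, regenerated if <= 0
    while True:
        f = mu_f + 0.1 * np.tan(np.pi * (rng.rand() - 0.5))  # inverse-CDF Cauchy sample
        if f >= 1.0:
            return 1.0
        if f > 0.0:
            return float(f)

def update_means(mu_cr, mu_f, s_cr, s_f, c=0.1):
    # Eqs 3 and 5: shift the means toward the successful parameter values
    if s_cr:
        mu_cr = (1 - c) * mu_cr + c * float(np.mean(s_cr))                       # arithmetic mean
    if s_f:
        mu_f = (1 - c) * mu_f + c * float(np.sum(np.square(s_f)) / np.sum(s_f))  # Lehmer mean
    return mu_cr, mu_f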

A parameter adaptation technique that uses a historical memory of successful control parameter settings to guide the selection of future control parameter values is proposed in [47] as an enhancement to JADE. The resulting algorithm is named SHADE. According to the experimental results in [47] on the 28 CEC2013 benchmark functions, SHADE outperforms dynNP-jDE [54], SaDE, JADE, EPSDE [55] and CoDE.

A crossover rate repair technique based on successful parameters is proposed and combined with JADE in [48]. According to the technique, the crossover rate is repaired by using the average number of components taken from the mutant. The resulting algorithm is named Rcr-JADE. The experimental results in [48] indicate that Rcr-JADE is able to obtain significantly better solutions than JADE. Moreover, compared with jDE, SaDE, EPSDE-c [56] and CoDE, Rcr-JADE obtains better, or at least comparable, results on the 25 CEC2005 benchmark functions.

L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), is proposed in [45]. LPSR continually decreases the population size during a run according to a linear function. On the CEC2014 benchmark functions, L-SHADE is compared with dynNP-jDE, SaDE, JADE, EPSDE and CoDE as well as state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with the above evolutionary algorithms.
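A one-line sketch of LPSR under its usual formulation (our own rendering; the variable names are ours): the population size is interpolated linearly between the initial size and a minimum size according to the number of consumed fitness evaluations.

def lpsr_population_size(nfe, max_nfe, np_init, np_min=4):
    # Linear Population Size Reduction: NP shrinks linearly from np_init to np_min
    return round(((np_min - np_init) / max_nfe) * nfe + np_init)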

A mechanism called auto-enhanced population diversity (AEPD) is proposed in [1]. This mechanism identifies convergence and stagnation by measuring the distribution of the population in each dimension. Once convergence is detected at a dimension, diversification is executed at that dimension; similarly, stagnation at a dimension is eliminated as soon as it is found. The AEPD mechanism is incorporated into DE algorithms including JADE and SHADE. The results on the set of 25 CEC2005 benchmark functions show that the mechanism significantly improves the performance of JADE and SHADE. Moreover, AEPD-JADE also shows superior performance in comparison with DE/rand/1/bin [57], JADE, jDE, SaDE, CoDE, Pro DE/rand/1/bin [58], HdDE [59], EPSDE [56], CLPSO [60] and IPOP-CMA-ES [61].

A scheme based on superior-inferior (SI) crossover is proposed in [27]. When the population diversity degree is small, SI crossover is performed to improve global search; otherwise, superior-superior crossover is used to enhance exploitation. The above scheme is applied to four DE algorithms including JADE. Experiments based on 24 functions selected from the IEEE Swarm Intelligence Symposium 2005 and CEC2014 benchmark functions show that JADE-SI—JADE with SI crossover—is significantly better than JADE in the majority of cases.

A modified JADE version with a sorting crossover rate (CR) is proposed in [20]. In the proposed algorithm, JADE_sort, a smaller CR value is assigned to individuals with better fitness. On the CEC2005 functions, JADE_sort is compared with jDE, SaDE, EPSDE, JADE, CoDE and JADE-SI. The experimental results show that JADE_sort is competitive.

The event-triggered impulsive (ETI) control scheme is introduced in [34]. Two types of impulses—stabilizing impulses and destabilizing impulses—are presented. During a run, the number of individuals subject to impulsive control is decided by an adaptive mechanism; the chosen individuals are then selected by ranking assignment and adaptively modified with the above two kinds of impulses. The ETI control scheme is incorporated into ten DE algorithms including JADE and SHADE. According to the experiments on the CEC2014 benchmark functions, ETI-JADE outperforms not only the original JADE but also AEPD-JADE [1]. Likewise, ETI-SHADE outperforms SHADE and AEPD-SHADE [1].

DE algorithms with subpopulations

In this subsection, we list five DE algorithms with subpopulations. The latest two of them are involved in our experiments for comparison. Although the listed DE algorithms all have more than one subpopulation, they are not DEAs—at least not DEAs in the narrow sense—because the different subpopulations in these algorithms differ in operators or settings. Details are given below.

A dual-population differential evolution (DPDE) with coevolution is proposed in [62] for constrained optimization problems (COPs). In this algorithm, a COP is treated as a bi-objective optimization problem in which the first objective is the actual cost or reward function to be optimized, while the second objective accounts for the degree of constraint violation. At each generation, the population is divided into two subpopulations based on the solutions' feasibility so that the two objectives are treated separately. Each subpopulation focuses only on optimizing the corresponding objective, which leads to a clear division of work. Furthermore, DPDE makes use of an information-sharing strategy to exchange search information between the subpopulations.

An adaptive multiple-subpopulation DE algorithm, MPADE, is designed in [28]. In MPADE, the population is split into three subpopulations based on fitness, and three DE strategies are performed on the three subpopulations, respectively. Furthermore, an adaptive approach is designed for parameter adjustment in the three DE strategies. According to its replacement strategy, a few of the best offspring may replace the worst parents.

In [24], mDE-bES is proposed. In this algorithm, the population is divided into independent subpopulations, each with different mutation and update strategies. A mutation strategy that uses information from either the best individual or a randomly selected one is employed. For some of the tested mutation strategies, the selection of individuals utilizes fitness-based ranks. Function evaluations are divided into epochs, and at the end of each epoch individuals are exchanged between subpopulations.

MPEDE [12] is an ensemble of multiple mutation strategies with adapted F and CR. These mutation strategies are current-to-pbest/1, current-to-rand/1 and rand/1, and each of them controls an indicator subpopulation. After every pre-defined number of generations, the best-performing mutation strategy is identified by a proposed equation, and a reward subpopulation, which is randomly allocated to a mutation strategy at the beginning, is assigned to the best-performing mutation strategy. In MPEDE, the method to adapt F and CR comes from [3].

EDEV [21] is an ensemble of differential evolution variants and consists of three state-of-the-art DE algorithms: JADE, CoDE and EPSDE. Each constituent DE variant is assigned an indicator subpopulation. According to a mechanism similar to the one in MPEDE, the most efficient constituent DE variant is determined after every pre-defined number of generations, and a reward subpopulation is assigned to the currently best-performing constituent DE variant.

Our method for managing the optional external archive

In DE, the more individuals are involved in mutation, or the more individuals can be chosen for mutation, the higher the degree of mutation that may be obtained. It can be seen from Eq 1 that, on the one hand, five individuals are required in the mutation strategy; on the other hand, one individual for the difference operation is chosen from a range larger than the population. Hence, compared with other mutation methods, DE/current-to-pbest/1 with optional archive shows a higher degree of mutation. Although the archive contains individuals as the population does, it is not another population, since no new individual is produced in it. Therefore, no function evaluation is required for maintaining the optional external archive. In brief, the archive provides additional individuals for mutation without consuming extra function evaluations or causing degradation. That is, the diversity of the population is improved in a reasonable manner. Hence, JADE and its variants show good performance.

In JADE and its variants, the individuals in the optional external archive are ones eliminated from the population at different generations. Therefore, individuals in the optional external archive have similarities in chromosome to the current target vectors, since genetic relationships exist. Meanwhile, individuals in the archive are worse in fitness than the target vectors, because they are all losers in selection. If individuals in the archive were instead very different in chromosome from the target vectors but similar in fitness level to the current target vectors, DE/current-to-pbest/1 with optional archive could be further enhanced.

In our scheme, two subpopulations need to be maintained in DE. The two subpopulations regard each other as the optional external archive. In this way, individuals in the archive are not only different in building blocks of chromosome from the current target vectors but also similar in fitness level to them. To show the details of our method for managing the optional external archive, we adapt the pseudo-code of JADE. Although our method can also be used in any variant of JADE, expressing it on the basis of the original JADE is more concise than on the basis of one of its variants. The adapted pseudo-code is given in Algorithm 1.

Algorithm 1 JADE With our Method for Managing the Optional External Archive

1: Set μCR = 0.5; μCR′ = 0.5; μF = 0.5; μF′ = 0.5
2: Randomly create the initial generation of the two subpopulations SP0 = {xi,0 | i = 1, 2, …, NP/2} and SP0′ = {xi,0′ | i = 1, 2, …, NP/2}
3: for g = 1 to G do
4:   SF = SF′ = ∅; SCR = SCR′ = ∅
5:   for i = 1 to NP/2 do
6:     Generate CRi = randni(μCR, 0.1), CRi′ = randni(μCR′, 0.1), Fi = randci(μF, 0.1), Fi′ = randci(μF′, 0.1)
7:     Randomly choose x̃r2,g ≠ xr1,g ≠ xi,g from SPg ∪ SPg′, and x̃r2′,g′ ≠ xr1′,g′ ≠ xi,g′ from SPg′ ∪ SPg
8:     vi,g = xi,g + Fi·(x^p_best,g − xi,g) + Fi·(xr1,g − x̃r2,g)
9:     vi,g′ = xi,g′ + Fi′·(x^p_best,g′ − xi,g′) + Fi′·(xr1′,g′ − x̃r2′,g′)
10:    for j = 1 to D do
11:      if j = jrand or rand(0, 1) < CRi then
12:        uj,i,g = vj,i,g
13:      else
14:        uj,i,g = xj,i,g
15:      end if
16:      if j = jrand′ or rand(0, 1) < CRi′ then
17:        uj,i,g′ = vj,i,g′
18:      else
19:        uj,i,g′ = xj,i,g′
20:      end if
21:    end for
22:    if f(xi,g) < f(ui,g) then
23:      xi,g+1 = xi,g
24:    else
25:      xi,g+1 = ui,g; CRi → SCR; Fi → SF
26:    end if
27:    if f(xi,g′) < f(ui,g′) then
28:      xi,g+1′ = xi,g′
29:    else
30:      xi,g+1′ = ui,g′; CRi′ → SCR′; Fi′ → SF′
31:    end if
32:  end for
33: end for
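To make Algorithm 1 concrete, the sketch below (our own simplified Python rendering; it fixes F and CR instead of adapting them by Eqs 2–5, and the sphere objective is an arbitrary test case) evolves the two subpopulations side by side, each drawing x̃r2,g from the union of itself and its counterpart.

import numpy as np

def evolve_subpop(sub, fit, other, f_obj, F=0.5, CR=0.5, p=0.05, rng=np.random):
    # One generation of one subpopulation; `other` plays the role of the archive.
    n, d = sub.shape
    union = np.vstack([sub, other])          # the reciprocal optional external archive
    new_sub, new_fit = sub.copy(), fit.copy()
    for i in range(n):
        top = max(1, int(np.ceil(p * n)))
        pbest = rng.choice(np.argsort(fit)[:top])
        r1 = rng.choice([j for j in range(n) if j != i])
        r2 = rng.choice([j for j in range(len(union)) if j != i and j != r1])
        v = sub[i] + F * (sub[pbest] - sub[i]) + F * (sub[r1] - union[r2])
        jrand = rng.randint(d)
        mask = rng.rand(d) < CR
        mask[jrand] = True
        u = np.where(mask, v, sub[i])        # binomial crossover
        fu = f_obj(u)
        if fu <= fit[i]:                     # selection
            new_sub[i], new_fit[i] = u, fu
    return new_sub, new_fit

# the two subpopulations evolve synchronously and regard each other as the archive
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.RandomState(1)
sp1, sp2 = rng.uniform(-5, 5, (25, 10)), rng.uniform(-5, 5, (25, 10))
f1 = np.array([sphere(x) for x in sp1])
f2 = np.array([sphere(x) for x in sp2])
for g in range(300):
    sp1_next, f1_next = evolve_subpop(sp1, f1, sp2, sphere, rng=rng)
    sp2, f2 = evolve_subpop(sp2, f2, sp1, sphere, rng=rng)
    sp1, f1 = sp1_next, f1_next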

Experimental studies

Our experiments are based on the 30 CEC2014 benchmark test functions. In the first experiment, the original versions of JADE and its variants are compared with their versions based on our scheme. Then, the best performer among the DE algorithms with our scheme is compared with up-to-date DE algorithms in the second experiment.

DE algorithms for experiments

For the first experiment, we need to select variants of JADE besides JADE itself. As mentioned above, SHADE, Rcr-JADE, L-SHADE, AEPD-JADE, JADE-SI, JADE_sort, ETI-JADE and ETI-SHADE are variants of JADE. Among these algorithms, L-SHADE, AEPD-JADE, ETI-JADE and ETI-SHADE have been tested on the CEC2014 benchmark functions in the literature. According to the results in [1, 34, 45], L-SHADE performs much better on the CEC2014 functions than the other algorithms. Thus, we select L-SHADE for the first experiment. In addition, SHADE, the foundation of L-SHADE and a variant of JADE, is also selected. In short, our method is employed in three algorithms, JADE, SHADE and L-SHADE, for the first experiment.

For the second experiment, we choose CoBiDE, MPEDE, EDEV, and MLCCDE to compare with the best performer among the DE algorithms with our scheme. CoBiDE is a state-of-the-art DE algorithm having no relationship with JADE. MPEDE and EDEV are recent DE algorithms with subpopulations and belong to the related work. MLCCDE is one of the most recent DE algorithms.

Settings

The function dimensionality is set to 30, 50 and 100, respectively, in the first experiment, and only to 30 in the second experiment. According to the guideline of the CEC2014 competition, the maximum number of fitness evaluations is set to 10000·D for all DE algorithms, where D represents the function dimensionality. All parameters of the original DE algorithms are given in Table 1 based on [3, 6, 12, 21, 45, 47, 50], respectively. It can be seen from Table 1 that we change the population size NP of the original algorithms to arrange two subpopulations when implementing our scheme. In the DE algorithms with our method, each subpopulation is allocated NP/2 individuals and regards the other subpopulation as its archive.

Tab. 1.

Settings.


Comparison between DE algorithms with our scheme and their original version

Experimental results of the original DE algorithms and the DE algorithms with our method on functions with 30, 50 and 100 dimensions are listed in Tables 2–4, respectively. According to Table 2, when the function dimensionality is 30, our method significantly improves JADE in 10 out of 30 cases, SHADE in 9/30 cases and L-SHADE in 10/30 cases, while it statistically deteriorates JADE in 4/30 cases, SHADE in 3/30 cases and L-SHADE in 2/30 cases; there is no significant difference in the other cases. According to Table 3, when the function dimensionality is 50, our method significantly improves JADE in 11/30 cases, SHADE in 10/30 cases and L-SHADE in 9/30 cases, while it statistically deteriorates JADE in 3/30 cases, SHADE in 4/30 cases and L-SHADE in 3/30 cases; there is no significant difference in the other cases. According to Table 4, when the function dimensionality is 100, our method significantly improves JADE in 9/30 cases, SHADE in 11/30 cases and L-SHADE in 9/30 cases, while it statistically deteriorates JADE, SHADE and L-SHADE in two cases each; there is no significant difference in the other cases.

Tab. 2.

Results of DE algorithms with our scheme and original DE algorithms when function dimensionality is set 30.


“+” denotes that the result of a DE algorithm with our method is significantly better than the result of its original DE algorithm in terms of Wilcoxon's rank-sum test at the 0.05 significance level, while “−” denotes that it is statistically worse. In addition, “≈” indicates that there is no significant difference.
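For reproducibility, the “+”/“−”/“≈” labels can be derived from per-run final errors as in the following sketch (our own illustration with scipy; errors_ours and errors_base are hypothetical arrays of final errors over independent runs).

import numpy as np
from scipy.stats import ranksums

def compare(errors_ours, errors_base, alpha=0.05):
    # classify a pairwise comparison as '+', '-', or '≈' at the given significance level
    stat, p_value = ranksums(errors_ours, errors_base)
    if p_value >= alpha:
        return "≈"
    return "+" if np.mean(errors_ours) < np.mean(errors_base) else "-"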

Tab. 3.

Results of DE algorithms with our scheme and original DE algorithms when function dimensionality is set 50.


“+” denotes that the result of a DE algorithm with our method is significantly better than the result of its original DE algorithm in terms of Wilcoxon's rank-sum test at the 0.05 significance level, while “−” denotes that it is statistically worse. In addition, “≈” indicates that there is no significant difference.

Tab. 4.

Results of DE algorithms with our scheme and original DE algorithms when function dimensionality is set 100.


“+” denotes that the result of a DE algorithm with our method is significantly better than the result of its original DE algorithm in terms of Wilcoxon's rank-sum test at the 0.05 significance level, while “−” denotes that it is statistically worse. In addition, “≈” indicates that there is no significant difference.

It can be seen from Tables 2–4 that, for some functions, all DE algorithms with our method significantly beat their original counterparts. Details are as follows. When the function dimensionality is 30, our method leads to significant improvement in all cases for F19 and F29. When the function dimensionality is 50, it leads to significant improvement in all cases for F19. When the function dimensionality is 100, it leads to significant improvement in all cases for F1, F9, F19, and F29. Based on the mean error to the optimum at intervals, we plot convergence graphs of runs for one function at function dimensionality 30, 50, and 100, respectively, in Fig 1.

Fig. 1.

Convergence graphics.


A: The convergence graphic for F19 when function dimensionality is 30. B: The convergence graphic for F19 when function dimensionality is 50. C: The convergence graphic for F1 when function dimensionality is 100.

As shown in Fig 1, the convergence rate decreases over time in all runs. In the figure, runs with our scheme converge more slowly at the initial stage than runs without it, but faster in the remaining part. This phenomenon can be explained as follows. The original population size of the DE algorithms, NPo, has been shown by experiments in the literature to be a fitting value. In theory, the size of each subpopulation in the DE algorithms with our method should be set to NPo, that is, the population size should be set to 2·NPo. However, due to the limit on the maximum number of fitness evaluations, a large increase in population size means a large decrease in the maximum number of generations. Therefore, the subpopulation size needs to be set to less than NPo to ensure that enough generations can be executed in runs. At the beginning of a run, DE algorithms with our scheme converge more slowly than the original DE algorithms owing to the lack of individuals in each subpopulation. Nevertheless, as our scheme takes effect, this disadvantage is offset gradually in many cases. Altogether, our method leads to significant improvement in 88 out of 270 cases but statistical deterioration in 25 cases. In summary, JADE and its variants with our scheme outperform their original versions.

Comparison between L-SHADE with our scheme and up-to-date DE algorithms

According to Tables 2–4, L-SHADE based on our scheme performs best among the three DE algorithms with our scheme. Thus, we compare L-SHADE based on our scheme with the up-to-date DE algorithms CoBiDE, MPEDE, EDEV, and MLCCDE. The experimental results are listed in Table 5.

Tab. 5.

Results of L-SHADE based on our method, MLCCDE, EDEV, MPEDE and CoBiDE when function dimensionality is set 30.


“+” denotes that the result of L-SHADE based on our method is significantly better than the current result in terms of Wilcoxon's rank-sum test at the 0.05 significance level, while “−” denotes that it is statistically worse. Meanwhile, “≈” indicates that there is no significant difference.

It can be seen from the table that L-SHADE based on our method significantly beats MLCCDE, EDEV, MPEDE and CoBiDE in 9, 13, 14 and 11 cases, respectively, while it loses to MLCCDE, EDEV, MPEDE and CoBiDE in two, three, zero and zero cases, respectively. There is no significant difference in all other cases. In summary, L-SHADE with our method is very competitive.

Discussion

In JADE and its variants with our scheme, the two subpopulations evolve independently. Individuals in one subpopulation, compared with individuals in the other subpopulation, are different in chromosome but similar in fitness level. Therefore, regarding the other subpopulation as the optional external archive provides suitable individuals for the difference operation. Under the control of our scheme, DE/current-to-pbest/1 with optional archive becomes more efficient than before, and thus DE algorithms based on it become more powerful. In fact, if the maximum number of fitness evaluations could be extended, DE algorithms with our scheme might show an even more significant advantage.

Conclusion

JADE and its variants, DE algorithms based on DE/current-to-pbest/1 with optional archive, show good performance in comparisons among DE algorithms. Nevertheless, more powerful DE algorithms are needed to address the difficulty arising from the complexity of problems. The mutation strategy of these DE algorithms, DE/current-to-pbest/1 with optional archive, relies on the optional external archive. In this paper, we propose a new scheme for managing the archive. According to our scheme, two subpopulations are maintained in the population, and each of them regards the other as its archive. In this way, the individuals in the archive of a subpopulation are similar in fitness level to the current target vectors but different in building blocks of chromosome. Experiments based on the CEC2014 benchmark functions not only show that our scheme can significantly improve the solutions of JADE and its two variants, SHADE and L-SHADE, but also demonstrate that L-SHADE with our method performs significantly better than CoBiDE, MPEDE, EDEV, and MLCCDE.

As mentioned above, our scheme for managing the archive is enlightened by DEAs. Conversely, a new type of distributed DE—a DEA in the field of DE—could be developed based on the work in this paper. Further investigation remains to be done.


References

1. Yang M, Li C, Cai Z, Guan J. Differential evolution with auto-enhanced population diversity. IEEE transactions on cybernetics. 2015;45(2):302–315. doi: 10.1109/TCYB.2014.2339495 25095277

2. Das S, Mullick SS, Suganthan PN. Recent advances in differential evolution—An updated survey. Swarm and Evolutionary Computation. 2016;27:1–30. doi: 10.1016/j.swevo.2016.01.004

3. Zhang J, Sanderson AC. JADE: adaptive differential evolution with optional external archive. IEEE Transactions on evolutionary computation. 2009;13(5):945–958. doi: 10.1109/TEVC.2009.2014613

4. Wang Y, Cai Z, Zhang Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Transactions on Evolutionary Computation. 2011;15(1):55–66. doi: 10.1109/TEVC.2010.2087271

5. Yu WJ, Shen M, Chen WN, Zhan ZH, Gong YJ, Lin Y, et al. Differential Evolution With Two-Level Parameter Adaptation. IEEE Transactions on Cybernetics. 2014;44(7):1080–1099. doi: 10.1109/TCYB.2013.2279211 24013834

6. Wang Y, Li HX, Huang T, Li L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Applied Soft Computing. 2014;18:232–247. doi: 10.1016/j.asoc.2014.01.038

7. Li YL, Zhan ZH, Gong YJ, Chen WN, Zhang J, Li Y. Differential evolution with an evolution path: A DEEP evolutionary algorithm. IEEE transactions on cybernetics. 2015;45(9):1798–1810. doi: 10.1109/TCYB.2014.2360752 25314717

8. Tang L, Dong Y, Liu J. Differential evolution with an individual-dependent mechanism. IEEE Transactions on Evolutionary Computation. 2015;19(4):560–574. doi: 10.1109/TEVC.2014.2360890

9. Awad NH, Ali MZ, Suganthan PN, Reynolds RG. An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC2014 benchmark problems. In: Evolutionary Computation (CEC), 2016 IEEE Congress on. IEEE; 2016. p. 2958–2965.

10. Fan Q, Yan X. Self-adaptive differential evolution algorithm with zoning evolution of control parameters and adaptive mutation strategies. IEEE transactions on cybernetics. 2016;46(1):219–232. doi: 10.1109/TCYB.2015.2399478 25775502

11. Li G, Lin Q, Cui L, Du Z, Liang Z, Chen J, et al. A novel hybrid differential evolution algorithm with modified CoDE and JADE. Applied Soft Computing. 2016;47:577–599. doi: 10.1016/j.asoc.2016.06.011

12. Wu G, Mallipeddi R, Suganthan PN, Wang R, Chen H. Differential evolution with multi-population based ensemble of mutation strategies. Information Sciences. 2016;329:329–345. doi: 10.1016/j.ins.2015.09.009

13. Fu C, Jiang C, Chen G, Liu Q. An adaptive differential evolution algorithm with an aging leader and challengers mechanism. Applied Soft Computing. 2017;57:60–73. doi: 10.1016/j.asoc.2017.03.032

14. Guo Z, Liu G, Li D, Wang S. Self-adaptive differential evolution with global neighborhood search. Soft Computing. 2017;21(13):3759–3768. doi: 10.1007/s00500-016-2029-x

15. Mohamed AW, Suganthan PN. Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Computing. 2017; p. 1–21.

16. Ali MZ, Awad NH, Suganthan PN, Reynolds RG. An adaptive multipopulation differential evolution with dynamic population reduction. IEEE Transactions on Cybernetics. 2017;47(9):2768–2779. doi: 10.1109/TCYB.2016.2617301 28113798

17. Ghosh A, Das S, Mullick SS, Mallipeddi R, Das AK. A switched parameter differential evolution with optional blending crossover for scalable numerical optimization. Applied Soft Computing. 2017;57:329–352. doi: 10.1016/j.asoc.2017.03.003

18. Tatsis VA, Parsopoulos KE. Differential evolution with grid-based parameter adaptation. Soft Computing. 2017;21(8):2105–2127. doi: 10.1007/s00500-015-1911-2

19. Tian M, Gao X, Dai C. Differential evolution with improved individual-based parameter setting and selection strategy. Applied Soft Computing. 2017;56:286–297. doi: 10.1016/j.asoc.2017.03.010

20. Zhou YZ, Yi WC, Gao L, Li XY. Adaptive differential evolution with sorting crossover rate for continuous optimization problems. IEEE Transactions on Cybernetics. 2017;47(9):2742–2753. doi: 10.1109/TCYB.2017.2676882 28362602

21. Wu G, Shen X, Li H, Chen H, Lin A, Suganthan P. Ensemble of differential evolution variants. Information Sciences. 2018;423:172–186. doi: 10.1016/j.ins.2017.09.053

22. Rakshit P, Konar A, Bhowmik P, Goswami I, Das S, Jain LC, et al. Realization of an Adaptive Memetic Algorithm Using Differential Evolution and Q-Learning: A Case Study in Multirobot Path Planning. IEEE Transactions on Systems Man and Cybernetics Part B. 2013;43(4):814–831. doi: 10.1109/TSMCA.2012.2226024

23. Das S, Mandal A, Mukherjee R. An adaptive differential evolution algorithm for global optimization in dynamic environments. IEEE Transactions on Cybernetics. 2014;44(6):966. doi: 10.1109/TCYB.2013.2278188 23996590

24. Ali MZ, Awad NH, Suganthan PN. Multi-population differential evolution with balanced ensemble of mutation strategies for large-scale global optimization. Applied Soft Computing. 2015;33:304–327. doi: 10.1016/j.asoc.2015.04.019

25. Guo SM, Yang CC. Enhancing differential evolution utilizing eigenvector-based crossover operator. IEEE Transactions on Evolutionary Computation. 2015;19(1):31–49. doi: 10.1109/TEVC.2013.2297160

26. Guo SM, Yang CC, Hsu PH, Tsai JSH. Improving differential evolution with a successful-parent-selecting framework. IEEE Transactions on Evolutionary Computation. 2015;19(5):717–730. doi: 10.1109/TEVC.2014.2375933

27. Xu Y, Fang Ja, Zhu W, Wang X, Zhao L. Differential evolution using a superior–inferior crossover scheme. Computational Optimization and Applications. 2015;61(1):243–274. doi: 10.1007/s10589-014-9701-9

28. Cui L, Li G, Lin Q, Chen J, Lu N. Adaptive differential evolution algorithm with novel mutation strategies in multiple sub-populations. Computers & Operations Research. 2016;67:155–173. doi: 10.1016/j.cor.2015.09.006

29. Ghasemi M, Taghizadeh M, Ghavidel S, Abbasian A. Colonial competitive differential evolution: An experimental study for optimal economic load dispatch. Applied Soft Computing. 2016;40:342–363. doi: 10.1016/j.asoc.2015.11.033

30. Liao J, Cai Y, Wang T, Tian H, Chen Y. Cellular direction information based differential evolution for numerical optimization: an empirical study. Soft Computing. 2016;20(7):2801–2827. doi: 10.1007/s00500-015-1682-9

31. Qiu X, Tan KC, Xu JX. Multiple exponential recombination for differential evolution. IEEE transactions on cybernetics. 2017;47(4):995–1006. doi: 10.1109/TCYB.2016.2536167

32. Yi W, Zhou Y, Gao L, Li X, Mou J. An improved adaptive differential evolution algorithm for continuous optimization. Expert Systems with Applications. 2016;44:1–12. doi: 10.1016/j.eswa.2015.09.031

33. Awad NH, Ali MZ, Suganthan PN, Reynolds RG. CADE: A hybridization of Cultural Algorithm and Differential Evolution for numerical optimization. Information Sciences. 2017;378:215–241. doi: 10.1016/j.ins.2016.10.039

34. Du W, Leung SYS, Tang Y, Vasilakos AV. Differential evolution with event-triggered impulsive control. IEEE transactions on cybernetics. 2017;47(1):244–257. doi: 10.1109/TCYB.2015.2512942 26800559

35. Zheng LM, Liu L, Zhang SX, Zheng SY. Enhancing differential evolution with interactive information. Soft Computing. 2017; p. 1–20.

36. Ghosh A, Das S, Mallipeddi R, Das AK, Dash SS. A Modified Differential Evolution With Distance-based Selection for Continuous Optimization in Presence of Noise. IEEE Access. 2017;5:26944–26964. doi: 10.1109/ACCESS.2017.2773825

37. Zhang X, Zhang X. Improving differential evolution by differential vector archive and hybrid repair method for global optimization. Soft Computing. 2017;21(23):7107–7116. doi: 10.1007/s00500-016-2253-4

38. Zheng LM, Zhang SX, Tang KS, Zheng SY. Differential evolution powered by collective information. Information Sciences. 2017;399:13–29. doi: 10.1016/j.ins.2017.02.055

39. Zhou XG, Zhang GJ. Abstract Convex Underestimation Assisted Multistage Differential Evolution. IEEE Transactions on Cybernetics. 2017;PP(99):1–12.

40. Cui L, Li G, Zhu Z, Lin Q, Wong KC, Chen J, et al. Adaptive multiple-elites-guided composite differential evolution algorithm with a shift mechanism. Information Sciences. 2018;422:122–143. doi: 10.1016/j.ins.2017.09.002

41. Brest J, Maučec MS. Self-adaptive differential evolution algorithm using population size reduction and three strategies. Soft Computing. 2011;15(11):2157–2174. doi: 10.1007/s00500-010-0644-5

42. Yang M, Cai Z, Li C, Guan J. An improved adaptive differential evolution algorithm with population adaptation. In: Conference on Genetic and Evolutionary Computation; 2013. p. 145–152.

43. Zhu W, Tang Y, Fang JA, Zhang W. Adaptive population tuning scheme for differential evolution. Information Sciences. 2013;223(2):164–191. doi: 10.1016/j.ins.2012.09.019

44. Mallipeddi R, Wu G, Lee M, Suganthan PN. Gaussian adaptation based parameter adaptation for differential evolution. In: Evolutionary Computation; 2014. p. 1760–1767.

45. Tanabe R, Fukunaga AS. Improving the search performance of SHADE using linear population size reduction. In: Evolutionary Computation (CEC), 2014 IEEE Congress on. IEEE; 2014. p. 1658–1665.

46. Gonuguntla V, Mallipeddi R, Veluvolu KC. Differential Evolution with Population and Strategy Parameter Adaptation. Mathematical Problems in Engineering. 2015;2015(287607):10–19.

47. Tanabe R, Fukunaga A. Success-history based parameter adaptation for differential evolution. In: Evolutionary Computation (CEC), 2013 IEEE Congress on. IEEE; 2013. p. 71–78.

48. Gong W, Cai Z, Wang Y. Repairing the crossover rate in adaptive differential evolution. Applied Soft Computing. 2014;15:149–168. doi: 10.1016/j.asoc.2013.11.005

49. Alba E, Tomassini M. Parallelism and evolutionary algorithms. IEEE Transactions on Evolutionary Computation. 2002;6(5):443–462. doi: 10.1109/TEVC.2002.800880

50. Zhang SX, Zheng LM, Tang KS, Zheng SY, Chan WS. Multi-layer competitive-cooperative framework for performance enhancement of differential evolution. Information Sciences. 2019;482:86–104. doi: 10.1016/j.ins.2018.12.065

51. Brest J, Greiner S, Boskovic B, Mernik M, Zumer V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation. 2006;10(6):646–657. doi: 10.1109/TEVC.2006.872133

52. Qin AK, Huang VL, Suganthan PN. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE transactions on Evolutionary Computation. 2009;13(2):398–417. doi: 10.1109/TEVC.2008.927706

53. Trelea IC. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters. 2003;85(6):317–325. doi: 10.1016/S0020-0190(02)00447-7

54. Brest J, Maučec MS. Population size reduction for the differential evolution algorithm. Applied Intelligence. 2008;29(3):228–247. doi: 10.1007/s10489-007-0091-x

55. Mallipeddi R, Suganthan PN, Pan QK, Tasgetiren MF. Differential evolution algorithm with ensemble of parameters and mutation strategies. Applied Soft Computing. 2011;11(2):1679–1696. doi: 10.1016/j.asoc.2010.04.024

56. Mallipeddi R, Suganthan PN. Differential evolution algorithm with ensemble of parameters and mutation and crossover strategies. In: International Conference on Swarm, Evolutionary, and Memetic Computing. Springer; 2010. p. 71–78.

57. Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization. 1997;11(4):341–359. doi: 10.1023/A:1008202821328

58. Epitropakis MG, Tasoulis DK, Pavlidis NG, Plagianakos VP, Vrahatis MN. Enhancing differential evolution utilizing proximity-based mutation operators. IEEE Transactions on Evolutionary Computation. 2011;15(1):99–119. doi: 10.1109/TEVC.2010.2083670

59. Dorronsoro B, Bouvry P. Improving classical and decentralized differential evolution with new mutation operator and population topologies. IEEE Transactions on Evolutionary Computation. 2011;15(1):67–98. doi: 10.1109/TEVC.2010.2081369

60. Liang JJ, Qin AK, Suganthan PN, Baskar S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE transactions on Evolutionary Computation. 2006;10(3):281–295. doi: 10.1109/TEVC.2005.857610

61. Auger A, Hansen N. A restart CMA evolution strategy with increasing population size. In: Evolutionary Computation, 2005. The 2005 IEEE Congress on. vol. 2. IEEE; 2005. p. 1769–1776.

62. Gao W, Yen GG, Liu S. A dual-population differential evolution with coevolution for constrained optimization. IEEE Transactions on Cybernetics. 2015;45(5):1094–1107. doi: 10.1109/TCYB.2014.2345478 25137739

