
HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

  • Shouheng Tuo ,

    tuo_sh@126.com

    Affiliation School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong, P.R. China

  • Longquan Yong,

    Affiliation School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong, P.R. China

  • Fang’an Deng,

    Affiliation School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong, P.R. China

  • Yanhai Li,

    Affiliation School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong, P.R. China

  • Yong Lin,

    Affiliation School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong, P.R. China

  • Qiuju Lu

    Affiliation School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong, P.R. China

Abstract

Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, but both also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but a low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore unknown regions, while TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants at a similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. An experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications.

1 Introduction

With scientific and social progress, increasingly complex problems are encountered in the fields of science and engineering. In particular, many high-dimensional optimization problems in engineering design, production scheduling and scientific computing urgently need to be solved with high performance and high efficiency. Such problems pose three challenges. The first is the very large search space caused by very high dimensionality (e.g., > 500), which imposes an enormous computational burden. The second is the large number of modes (local extrema), which easily traps a search algorithm in local search. The third is the particularity of the optimization problem, which may be discontinuous, non-differentiable or even lack an explicit objective function; traditional mathematical optimization algorithms are powerless here because they require substantial gradient information. It is therefore a great challenge to discover, in an efficient time, the globally optimal solution of a complex multimodal optimization problem that has more than 1000 dimensions, possibly an infinite number of local minima, and no differentiability.

To address complex optimization problems, swarm intelligent algorithms, which mimic the collective behavior of decentralized, self-organized systems, natural or artificial, have received much attention in recent years. Most of them are nature inspired, such as the genetic algorithm (GA) [1] inspired by biological evolution, particle swarm optimization (PSO) [2] mimicking the foraging behavior of bird flocks, differential evolution (DE) [3], the artificial bee colony (ABC) [4], Symbiotic Organisms Search (SOS) [5] and so on. Compared with traditional mathematical optimization algorithms, swarm intelligent optimization algorithms neither require substantial gradient information nor depend on the initialization.

Both Harmony Search (HS) [6,7] and Teaching-Learning-Based Optimization (TLBO) [8,9] are new swarm intelligent optimization methods that have attracted increasing interest owing to their excellent characteristics, such as few parameters, simplicity, real-number encoding and modest mathematical requirements. The advantage of HS is that it maintains population diversity very well during the search process and has strong exploration power for exploring unknown space. TLBO is powerful in obtaining extraordinarily precise solutions owing to its very strong convergence ability. However, both algorithms have limitations when solving high-dimensional optimization problems with multimodality: HS is inferior to TLBO in convergence speed and in the precision of the globally optimal solution; conversely, TLBO easily falls into local search owing to its very rapid convergence, which can cause the globally optimal solution of some multimodal problems to be missed.

Consequently, several HS variants [10-25] and modified TLBO algorithms [26-29] have been presented in recent years to improve the performance on complex optimization problems, such as SGHS [11], IHS [12], ITHS [14], EHS [15], NGHS [18], DIHS [19], NDHS [20], DSHS [23], ATLBO [26], WTLBO [27], TLBO_GC [28], ITLBO [29] and so on [30,31]. However, these improved variants are still not competent to tackle optimization problems with high dimensionality (larger than 500) and multimodality. For example, the solution precision of IHS is not satisfactory; NGHS, SGHS and NDHS are easily trapped in local search; EHS and DSHS require much time for high-dimensional problems; and WTLBO, TLBO_GC and ITLBO cannot avoid premature convergence on complex multimodal optimization problems. The reason is that these state-of-the-art intelligent algorithms overlook an important fact: as the dimensionality increases, the probability that all values of some dimension in the population become assimilated and lose diversity also increases, which makes an algorithm lose exploration power if it lacks a good disturbance strategy for escaping from local search. In this work, we find from the merits and demerits of HS and TLBO that the two algorithms are highly complementary. We believe that a good integration of HS and TLBO is a viable way to solve high-dimensional optimization problems with multimodality.

Commonly, when a swarm intelligent algorithm is employed to solve multimodal optimization problems that have one globally optimal solution and many local minima (maxima), the overarching goal is to effectively balance exploration power and exploitation power: the mission of exploration is to discover unexplored regions at the early stage of the search process, while exploitation aims to obtain a high-precision optimal solution within a known region found during the exploration stage. Consequently, balancing exploration and exploitation is very important for solving a high-dimensional optimization problem with multimodality. Generally, strong exploration power is required before the region containing the globally optimal solution is discovered; once that region has been found, the exploitation power should be intensified immediately while the exploration power is degraded gradually.

To achieve this balance, in this study we propose a hybrid optimization algorithm (HSTLBO) based on HS and TLBO, in which a self-adaptive selection strategy is designed to balance the exploration power and the exploitation power. At the early stage of the search, the HS algorithm receives a higher selection probability so as to explore the region that contains the globally optimal solution. Once the globally optimal region has probably been located, the TLBO algorithm begins to receive a higher probability at the later stage of the search process, intensifying the local search and exploiting a high-precision solution. In HSTLBO, a self-adaptive selection probability chooses between HS and TLBO according to the population diversity and the update-success-rate. The update-success-rate denotes the proportion of newly generated solutions that are superior to the old ones, i.e., the rate at which a newly generated solution successfully replaces an old one within one generation.

The rest of this paper is organized as follows. Section 2 introduces the HS and TLBO algorithms. The hybrid HSTLBO algorithm is proposed and its self-adaptive selection strategy is analyzed in Section 3. In Section 4, numerical experiments on twenty complex benchmark test functions are performed, the results are analyzed through comparisons and statistical tests, and the convergence of HSTLBO is investigated. In Section 5, HSTLBO is applied to a portfolio optimization problem. Section 6 concludes this work.

Some symbols are explained in Box 1.

Box 1

NP  -- The population size, which is equal to HMS.

Tmax -- The maximum number of evaluations of the objective function.

t   -- The current iteration number.

SR  -- The selection rate, updated from how often a newly generated harmony replaces the worst harmony; used to choose between HS and TLBO.

T   -- The cycle length for recalculating SR.

r   -- A uniformly distributed random number between 0 and 1.

c1 / c2 -- The number of times HS / TLBO successfully updates old solutions in the tth iteration.

2 Harmony Search and Teaching-Learning-Based Optimization

2.1 Optimization model

The optimization problems considered in this work take the form

minimize f(X), X = (x1, x2, …, xD), subject to xi ∈ [xiL, xiU], i = 1, 2, …, D

where X consists of D decision variables (x1, x2, …, xD), D denotes the dimensionality (the number of decision variables) of the optimization problem, and xi (i = 1, 2, …, D) represents the ith decision variable; xiU and xiL separately indicate the upper and lower bounds of xi.

2.2 HS algorithm

The HS algorithm mimics the process of improvising a musical harmony, in which X denotes a harmony, xi (i = 1, 2, …, D) indicates a note of the harmony, and D is the number of notes in a harmony. The harmony memory (HM) contains HMS harmonies {X1, X2, …, XHMS}, where Xj = (xj1, xj2, …, xjD) and HMS denotes the population size. The pseudocode of the HS algorithm is given in Fig 1.

In the standard HS, three operators (harmony memory consideration, pitch adjusting and random disturbance) are employed to optimize the harmonies in the HM (population); these operators are good at exploring new regions of the search space [14]. However, owing to the absence of a learning operator, the convergence speed of HS is much slower than that of TLBO. A minimal sketch of one improvisation step is given below.
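The following MATLAB fragment sketches one improvisation step of the standard HS. The parameter values (HMCR, PAR and the disturbance width fw) are illustrative assumptions taken from common HS usage, not this paper's settings:

```matlab
% One HS improvisation step (illustrative parameter values).
D = 30; HMS = 10;                                 % dimension, harmony memory size
xL = -10*ones(1,D); xU = 10*ones(1,D);            % assumed bounds
HM = repmat(xL,HMS,1) + rand(HMS,D).*repmat(xU-xL,HMS,1);  % one harmony per row
HMCR = 0.99; PAR = 0.3; fw = 0.01*(xU - xL);      % assumed parameter values

xnew = zeros(1,D);
for i = 1:D
    if rand < HMCR                                % (a) harmony memory consideration
        xnew(i) = HM(randi(HMS), i);
        if rand < PAR                             % (b) pitch adjusting
            xnew(i) = xnew(i) + fw(i)*(2*rand - 1);
        end
    else                                          % (c) random disturbance
        xnew(i) = xL(i) + rand*(xU(i) - xL(i));
    end
end
xnew = min(max(xnew, xL), xU);                    % keep the new harmony in bounds
```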

2.3 TLBO algorithm

TLBO is a new swarm intelligent optimization algorithm proposed by R. V. Rao et al. in 2012, inspired by the teaching and learning process of a class [8]. In TLBO, the class {X1, X2, …, XNP} is composed of one teacher and a number of learners, where Xj = (xj1, xj2, …, xjD) (j = 1, 2, …, NP) (see Box 1) denotes the jth learner, NP is the class size, and D represents the number of major subjects in the class; xji represents the learning status of the jth learner on the ith major subject. The optimization process of TLBO is divided into two stages: the "teacher phase" and the "learner phase".

Teacher phase.

In the teacher phase, the learners increase their knowledge with the help of the teacher, who tries to improve the mean ability of all learners. The teaching operator is as follows:

Xj,new = Xj,old + r × (Xteacher − TF × M)

where Xj,new and Xj,old denote the jth learner's learning status after and before learning from the teacher Xteacher, TF is the teaching factor, r is a uniformly distributed random vector in the range [0, 1], and M denotes the mean knowledge level of all learners.

Learner phase.

In the learner phase, each learner increases knowledge by communicating with other learners. The learning operator is as follows:

Xj,new = Xj,old + r × (Xj − Xk), if Xj is better than Xk
Xj,new = Xj,old + r × (Xk − Xj), otherwise

where k (k ≠ j) is a random integer in the range [1, NP] denoting a randomly selected peer, and r is a uniformly distributed random vector in [0, 1].
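For concreteness, a minimal MATLAB sketch of the two standard phases follows, assuming a minimization problem with a placeholder sphere objective; all names and values are illustrative:

```matlab
% Standard TLBO: one teacher phase and one learner phase (minimization).
f = @(x) sum(x.^2, 2);                           % placeholder objective
NP = 20; D = 30;
xL = -10*ones(1,D); xU = 10*ones(1,D);
X = repmat(xL,NP,1) + rand(NP,D).*repmat(xU-xL,NP,1);
fit = f(X);

% Teacher phase: every learner moves toward the best learner (the teacher).
[~, tIdx] = min(fit);
Xteacher = X(tIdx,:);
M = mean(X, 1);                                  % mean knowledge level of the class
for j = 1:NP
    TF = randi(2);                               % teaching factor, 1 or 2
    Xnew = X(j,:) + rand(1,D).*(Xteacher - TF*M);
    Xnew = min(max(Xnew, xL), xU);
    if f(Xnew) < fit(j)                          % greedy acceptance
        X(j,:) = Xnew; fit(j) = f(Xnew);
    end
end

% Learner phase: each learner interacts with one random peer.
for j = 1:NP
    k = randi(NP); while k == j, k = randi(NP); end
    if fit(j) < fit(k)
        Xnew = X(j,:) + rand(1,D).*(X(j,:) - X(k,:));
    else
        Xnew = X(j,:) + rand(1,D).*(X(k,:) - X(j,:));
    end
    Xnew = min(max(Xnew, xL), xU);
    if f(Xnew) < fit(j)
        X(j,:) = Xnew; fit(j) = f(Xnew);
    end
end
```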

3. Proposed HSTLBO algorithm

To balance the exploration power and the exploitation power during the search process, we propose the complementary HSTLBO algorithm.

3.1 HSTLBO algorithm

The flow chart of the HSTLBO algorithm is shown in Fig 2.

In the HSTLBO algorithm, we merge HS and TLBO to compensate for each other's deficiencies: HS is mainly used for exploring the search space, and TLBO aims to speed up the exploitation process. During the search, HS and TLBO compete for the opportunity to run in each iteration according to a self-adaptive selection rate (SR) (see Box 1). The SR is changed dynamically in terms of the number of times, within a cycle of length T, that a newly generated solution is superior to the worst solution of the population. In the beginning stage, HS obtains more opportunity for exploring unknown regions; once the region containing the globally optimal solution has been found in the later stage, TLBO obtains more opportunity for exploiting a high-precision solution.

For a new, unknown problem, exploring the unknown space in the beginning stage is the first consideration of the HSTLBO algorithm, for which HS is a good choice. Consequently, a high selection rate SR (≥ 0.95) is assigned to the HS algorithm for exploring unknown regions during the first half of the search. In the remaining time, the value of SR continues to adapt to the population status: if the global region has not yet been found, or the spatial distribution of the population is still extensive, HS may still obtain a high SR for exploring the unknown areas. However, if HS obtains no or a very low selection rate (in other words, TLBO runs with very high probability), the diversity of the population may be lost quickly for high-dimensional optimization problems owing to the quick convergence of TLBO. Therefore, to keep the diversity of the population at a certain level, HS is also given a probability of at least 0.3 to run during the second half of the search. A hedged sketch of this selection mechanism is given below.
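Since the extracted text does not reproduce the exact SR update formula, the MATLAB sketch below uses an assumed rule that is consistent with Box 1 and the description above: SR is reset every T iterations to the relative success rate of HS, clipped to the stated bounds. The functions runModifiedHS and runModifiedTLBO are placeholder stand-ins for one iteration of each modified algorithm:

```matlab
% Hedged sketch of the self-adaptive selection between HS and TLBO.
Tmax = 1000; T = 50;                      % illustrative budget and cycle length
runModifiedHS   = @() rand < 0.6;         % placeholder: returns true if the new
runModifiedTLBO = @() rand < 0.4;         % solution replaced the worst one
SR = 0.95;                                % initial selection rate for HS
c1 = 0; c2 = 0;                           % success counters for HS / TLBO
for t = 1:Tmax
    if rand < SR
        c1 = c1 + runModifiedHS();        % HS runs with probability SR
    else
        c2 = c2 + runModifiedTLBO();      % TLBO runs otherwise
    end
    if mod(t, T) == 0                     % recalculate SR every T iterations
        rate = c1 / max(c1 + c2, 1);      % assumed update rule, not the paper's Eq.
        if t <= Tmax/2
            SR = max(0.95, rate);         % first half: HS keeps SR >= 0.95
        else
            SR = min(max(rate, 0.3), 0.95); % second half: HS keeps SR >= 0.3
        end
        c1 = 0; c2 = 0;
    end
end
```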

In the hybrid HSTLBO algorithm, both HS and TLBO are modified as follows.

3.2 Modified HS

In the modified HS algorithm (see Fig 3), all steps are identical to those of the standard HS algorithm except step 3 (improvising a new harmony). The key differences in step 3 between the modified HS and the standard HS are:

  1. The standard HS algorithm produces a new harmony in which each element (decision variable) is generated by three HS rules (a. harmony memory consideration; b. pitch adjusting; c. random disturbance). In our proposed algorithm, the process of producing a new harmony is similar to that of DIHS [19]: a dynamic selection strategy selects some elements of the worst harmony with probability SP for adjusting, where SP, given by Eq (1), is adjusted dynamically according to the current iteration t, as introduced in DIHS [19].
  2. In the standard HS, the parameters PAR and fw are constant values. The modified algorithm adopts dynamic strategies to change PAR and fw for balancing the exploration power and the exploitation power (see Eqs (2) and (3)). The schedule of fw is divided into two stages: in the first half of the generations, fw is changed dynamically with increasing iterations, in the same way as bw in IHS [12], with the overarching goal of maintaining strong exploration power; in the second half, the value of fw is changed adaptively in terms of the values of the individuals, so as to adapt to the characteristics of the problem. (A minimal sketch of these schedules follows this list.)
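Eqs (1)-(3) are not reproduced in this extracted text, so the sketch below is hedged: the fw schedule uses the IHS bw formula [12] that the text cites for the first half, while the linear PAR ramp and the SP ramp standing in for Eq (1) are illustrative assumptions:

```matlab
% Hedged sketch of the dynamic parameter schedules (values illustrative).
Tmax = 5e6;                                  % maximum function evaluations
PARmin = 0.1; PARmax = 0.99;                 % assumed PAR bounds
fwMax = 1.0;  fwMin = 1e-4;                  % assumed fw bounds
par = @(t) PARmin + (PARmax - PARmin)*t/Tmax;    % assumed IHS-style linear PAR
fw  = @(t) fwMax * exp(log(fwMin/fwMax)*t/Tmax); % IHS bw formula, first half
sp  = @(t) 1.0 - 0.9*t/Tmax;                 % hypothetical stand-in for Eq (1)
% Example: parameter values halfway through the run
disp([par(Tmax/2), fw(Tmax/2), sp(Tmax/2)])
```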

3.3 Modified TLBO

The primary differences between the standard TLBO and the modified TLBO (see Fig 4) are as follows.

  1. In the standard TLBO, M equals the mean vector of the learners X. In the modified TLBO, M is a combination vector in which each subject M(i) is randomly chosen from the ith subject of one of the learners.
  2. In each iteration, the standard TLBO performs the teacher phase and the learner phase in turn, whereas the modified TLBO randomly chooses only one of the two phases to perform.
  3. In the standard TLBO, all dimensions of Xnew are produced by learning from the teacher or from one other learner. In the modified TLBO, only a portion of the dimensions of Xnew are generated by learning from the teacher or another learner, while the remaining dimensions are inherited directly from Xold. The rationale is that even an excellent learner is imperfect on some subjects, so selective learning on part of the subjects is more effective for improving a learner's knowledge level than learning all subjects from one learner. The selection probability SP is given in Eq (1); the choice of parameters is explained and analyzed in our DIHS algorithm [19].
  4. As we know from real life, selectively learning some subjects from multiple excellent learners is more effective for improving our knowledge than learning all subjects from a single excellent learner. Consequently, in the modified TLBO, for each subject the learner selects a different peer from the population to learn new knowledge from (see the sketch after this list).
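The MATLAB sketch below illustrates modifications 1 and 3 for one learner in the teacher phase: M is a random combination of the learners' subjects, and only dimensions selected with probability SP are updated while the rest are inherited. All names and values are illustrative:

```matlab
% Hedged sketch of the modified teacher phase (one learner, for illustration).
f = @(x) sum(x.^2, 2);                         % placeholder objective
NP = 20; D = 30; SP = 0.5;                     % SP value illustrative (Eq (1))
xL = -10*ones(1,D); xU = 10*ones(1,D);
X = repmat(xL,NP,1) + rand(NP,D).*repmat(xU-xL,NP,1);
fit = f(X);
[~, tIdx] = min(fit); Xteacher = X(tIdx,:);

% M(i) is drawn from the ith subject of a randomly chosen learner,
% instead of the columnwise mean used by the standard TLBO.
M = X(sub2ind([NP D], randi(NP,1,D), 1:D));

j = randi(NP);                                 % one learner, for illustration
mask = rand(1,D) < SP;                         % subjects selected for learning
TF = randi(2);
Xnew = X(j,:);                                 % unselected subjects are inherited
Xnew(mask) = X(j,mask) + rand(1,sum(mask)).*(Xteacher(mask) - TF*M(mask));
Xnew = min(max(Xnew, xL), xU);
if f(Xnew) < fit(j), X(j,:) = Xnew; end        % greedy acceptance
```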

4. Experimental study

To investigate the performance of the proposed HSTLBO algorithm, numerical simulation experiments on twenty benchmark functions [32-35] are performed. The parameter settings of all compared HS and TLBO variants are listed in Table 1.

Twenty well-known functions are listed in Table 2, including 16 multimodal functions (F1-F6, F8, F12-F20) and 4 complex unimodal problems (F7, F9-F11); among them, 4 are hybrid functions (F17-F20) and 10 are shifted functions (F9-F18).

In the simulation experiments, all test programs were run on a 32-bit Windows XP system with an Intel(R) Core(TM) i3-2120 CPU @ 3.30 GHz and 4 GB RAM, and all program code was written in MATLAB R2014b.

In our simulation experiments, the dimension of the test functions is set to 1000 and each function is run 20 times independently with 5E+6 function evaluations (FEs) as the termination condition. The precision (Prec = |f(Xbest) − f(X*)|), the standard deviation of the precision (Std Dev) and the mean run time (Mtime) of each function are calculated over the 20 independent runs, where Xbest is the best solution in the population when the termination condition is met, and X* is the global optimal solution.
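As a sketch, these statistics could be gathered as follows in MATLAB, with a placeholder standing in for one complete HSTLBO run:

```matlab
% Sketch of the reported statistics over 20 independent runs.
runs = 20;
fbest = zeros(runs,1); tsec = zeros(runs,1);
fstar = 0;                              % known global optimum, e.g. 0 for sphere
for r = 1:runs
    tic;
    fbest(r) = rand*1e-6;               % placeholder for one full HSTLBO run
    tsec(r) = toc;
end
Prec   = mean(abs(fbest - fstar));      % precision |f(Xbest) - f(X*)|
StdDev = std(abs(fbest - fstar));       % standard deviation of the precision
Mtime  = mean(tsec);                    % mean run time
```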

4.1. Comparison with state-of-the-art HS variants

HSTLBO is compared with the standard HS algorithm and four state-of-the-art HS variants: IHS [12], ITHS [14], EHS [15] and NGHS [18]. To ensure a fair comparison, each test function is run 20 times independently for every compared algorithm, with 5E+6 FEs as the termination criterion. The parameters of the five HS variants are set to the values recommended in the original papers. The experimental results of the six compared algorithms are summarized in Table 3. Fig 5 displays the convergence curves, and box plots of the distributions of the optimal solutions over 20 independent runs are displayed in Fig 6.

  1. Quality of solution. For the twenty test functions with dimension 1000, the results provided by the five HS variants are far away from the global optimal solutions. In contrast, Table 3 shows that HSTLBO is clearly superior to the other algorithms, and the optimal solutions obtained by the proposed algorithm are very close to the global optimal solutions on all test functions except F7, F10, F11 and F15-F17.
  2. CPU run time. Table 3 shows that the HSTLBO algorithm takes less run time on all 20 test functions than the other five HS variants.
  3. Robustness. From Fig 5, the convergence curves of HSTLBO remain much more active during the whole search process, which demonstrates that the HSTLBO algorithm maintains strong search ability throughout the search. The other HS algorithms, in contrast, converge prematurely, which shows that they lose search ability owing to search stagnation. The boxplots (see Fig 6) indicate that the optimal solutions of HSTLBO have a narrower distribution than those of the other algorithms, which illustrates the strong stability and robustness of our method over 20 runs.
Table 3. Experimental results of HS, IHS, ITHS, NGHS, EHS and HSTLBO over 20 independent runs on 20 test functions of 1000 variables with 5E+6 FEs.

“Prec” and “Std Dev” denote the precision and standard deviation of the function error values over 20 runs, respectively. Time(s) is the mean run time over 20 independent runs with 5E+6 FEs.

https://doi.org/10.1371/journal.pone.0175114.t003

4.2 Comparison with TLBO variants

In this section, we compare HSTLBO with the standard TLBO and four state-of-the-art TLBO variants: ATLBO [26], WTLBO [27], TLBO_GC [28] and ITLBO [29]. ATLBO employs an elitist strategy, a weight function and an acceleration coefficient to improve performance. WTLBO introduces a weighted TLBO algorithm for balancing exploration and exploitation. TLBO_GC proposes a global crossover strategy for solving global optimization problems. ITLBO adopts local-learning and self-learning methods to improve the global search ability of TLBO.

To ensure a fair comparison, each test function is run 20 times independently for every compared TLBO algorithm, with 5E+6 FEs as the termination criterion. The experimental results of the six compared algorithms are summarized in Table 4. Fig 7 displays the convergence curves, and Fig 8 shows the distributions of the optimal solutions over 20 independent runs for the six TLBO algorithms using box plots.

  1. Quality of solution. On the twenty test functions, our method is the winner on 13 functions and is very close to the winning algorithm on the other seven. In particular, for the complex multimodal functions, such as F3, F6, F9, F13-F15 and F18, the precision of the proposed algorithm is much better than that of the other five TLBO variants. For functions F1, F2, F5, F7, F8 and F19, the optimal solutions of our method are worse than those of the other algorithms; however, they are on the verge of the global optimal solutions, which is acceptable in application.
  2. Robustness. Fig 7 shows that, compared with the five TLBO algorithms, the convergence curves of HSTLBO are also much more active during the search process, and HSTLBO retains strong exploitation power (its convergence curves keep decreasing) in the later stage of the search, which demonstrates that HSTLBO keeps the population diversely distributed during the search. From the boxplots (see Fig 8), we can see that HSTLBO is more stable in obtaining the optimal solution.
Table 4. Experimental results of TLBO, ATLBO, WTLBO, TLBO_GC, ITLBO and HSTLBO over 20 independent runs on 20 test functions of 1000 variables with 5E+6 FEs.

“Prec” and “Std Dev” denote the precision and standard deviation of the function error values over 20 runs, respectively. Time(s) is the mean run time over 20 independent runs with 5E+6 FEs.

https://doi.org/10.1371/journal.pone.0175114.t004

4.3 Statistical test

To investigate the significant differences between our method and the ten compared algorithms, the Wilcoxon signed rank test is conducted in this section at the 5% significance level to judge whether the optimal solutions of our method over 20 independent runs differ significantly from those of the compared algorithms. Table 5 records the corresponding p-values for the 20 functions between the proposed algorithm and the other algorithms; all p-values are less than 0.05. The last three rows of Table 5 record the performance of the algorithms, where "+", "=" and "-" indicate, respectively, that the optimal solutions of the corresponding algorithm over 20 independent runs are better than, similar to, or worse than those of HSTLBO. We can see from Table 5 that the performance of the proposed algorithm differs significantly from that of the other algorithms on each function, and that our method is superior to the other algorithms on most functions.
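A minimal example of such a per-function test in MATLAB (Statistics Toolbox), with placeholder error samples standing in for the real 20-run results:

```matlab
% Wilcoxon signed rank test at the 5% level on paired 20-run error samples.
errHSTLBO = rand(20,1)*1e-8;            % placeholder: HSTLBO error values
errOther  = rand(20,1)*1e-2;            % placeholder: compared-algorithm errors
[p, h] = signrank(errHSTLBO, errOther, 'alpha', 0.05);
if h, fprintf('significant difference, p = %.3g\n', p); end
```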

Table 5. Results of Wilcoxon’s rank sum test at 0.05 significance level between HSTLBO and other ten algorithms.

The p-value is shown (NaN denotes no difference).

https://doi.org/10.1371/journal.pone.0175114.t005

To further detect the significant differences between HSTLBO and the ten compared algorithms, the multiple-problem Wilcoxon test is employed. Table 6 records the statistical results, where "W+" is the number of cases in which the null hypothesis was rejected and our method showed statistically superior performance at the α = 0.05 significance level, "W-" denotes the number of cases in which the null hypothesis was rejected and HSTLBO showed inferior performance, and "W=" represents the number of cases in which the null hypothesis was accepted [36,37]. Table 6 shows that our method has higher "W+" values than "W-" values in all cases, which demonstrates that our method is significantly better than the other ten algorithms on the 20 test functions.

Table 6. Multi-problem based statistical pairwise comparison of HSTLBO and other ten algorithms.

(α = 0.05, D = 1000).

https://doi.org/10.1371/journal.pone.0175114.t006

4.4 Analysis of exploration and exploitation

In this section, we investigate the convergence of HSTLBO by tracing the population diversity during the search process. The population diversity is defined in terms of the deviation of each individual from the population mean vector, where x̄i denotes the mean value of the ith decision variable in the population.
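The diversity formula itself is not reproduced in this extracted text; the sketch below assumes the common mean-centered Euclidean measure (the average distance of the individuals from the population mean vector):

```matlab
% Sketch of a mean-centered population diversity measure (assumed form).
NP = 20; D = 1000;
X = rand(NP, D);                                   % population, one row per individual
xbar = mean(X, 1);                                 % xbar(i): mean of the ith variable
diversity = mean(sqrt(sum(bsxfun(@minus, X, xbar).^2, 2)));
```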

Three complex multimodal optimization problems (F1: Ackley, F3: Levy, F12: Shifted Griewank) are employed to investigate the balance between the exploration power and the exploitation power. During the search process, we trace the change in the population diversity of each compared algorithm.

Fig 9 displays the population-diversity curves of the eleven algorithms, from which we can easily see that the diversity curve of the HSTLBO algorithm decreases gradually during the search process, more asymptotically and stably than those of the other algorithms. In this way, HSTLBO possesses strong exploration power in the early stage of the search for exploring unknown regions; as the search continues, the exploitation power increases and the exploration power decreases gradually; and in the later stage, the population has gathered into the globally optimal region, by which time HSTLBO has obtained high exploitation power for exploiting a high-precision solution. In contrast, the five HS variants (HS, IHS, ITHS, EHS and NGHS) keep a high population diversity from beginning to end, which gives them strong exploration power but very weak exploitation power. Conversely, the diversity of the five TLBO variants decreases very quickly in the beginning stage, which makes them easily trapped in local search owing to prematurely losing exploration power.

5. HSTLBO for solving complex portfolio optimization problem

To further investigate the performance of the HSTLBO algorithm, a complex portfolio optimization problem is employed to test its ability to solve real-world applications. Portfolio optimization aims to choose the optimal proportions of various assets so as to obtain the maximum portfolio return with minimum risk. In this work, we apply the HSTLBO algorithm to choose the optimal portfolio proportions for the Nikkei 225 stock index, which tracks companies on the Tokyo Stock Exchange (TSE) (http://en.wikipedia.org/wiki/Nikkei_225), and compare the results of HSTLBO with four intelligent algorithms (GA, PSO, TS, SA) [38-40]. We employ the mean Euclidean distance (MED), the variance of returns error (VRE) and the mean return error (MRE) as performance indexes, as defined in the literature [39,41].

The test data are from http://people.brunel.ac.uk/~mastjjb/jeb/orlib/portinfo.html and the experiments are performed under two conditions:

  1. Unconstrained. The portfolio proportions and the desired number of investment assets are not constrained.
  2. Constrained. The portfolio proportion of each asset is bounded below and above, and the desired number of selected assets is K = 10.

In this work, we employ the same constraint-handling method as the literature [41], which handles the boundary constraint on the portfolio proportions and the desired number of selected assets very well.
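For illustration only, a generic repair scheme for such cardinality-constrained portfolios might look as follows; this hedged sketch is not necessarily identical to the handler of [41], and the bound values epsLo and delHi are assumptions:

```matlab
% Hedged sketch of a generic repair scheme for a cardinality-constrained
% portfolio: keep the K largest proportions, clamp to bounds, renormalize.
N = 225; K = 10;
epsLo = 0.01; delHi = 1.0;               % assumed lower/upper proportion bounds
w = rand(N,1);                           % raw portfolio weights from one solution
[~, idx] = sort(w, 'descend');
w(idx(K+1:end)) = 0;                     % enforce cardinality: keep K assets
sel = w > 0;
w(sel) = min(max(w(sel), epsLo), delHi); % enforce per-asset proportion bounds
w(sel) = w(sel) / sum(w(sel));           % renormalize selected proportions to 1
% One repair pass shown; in practice the clamp/renormalize steps may be
% iterated until the weights are feasible.
```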

The test results on the three evaluation indexes (MED, VRE and MRE) are presented in Table 7, and the comparison of efficient frontiers under the different constraint conditions is shown in Fig 10. Table 7 shows that our method obtains better performance on MED, VRE and MRE than the modified GA, PSO, TS and SA. From Fig 10, we can see that the optimal frontiers of our method almost overlap with the standard efficient frontiers for the unconstrained CCMV model; for the constrained CCMV model, the optimal frontiers of our algorithm are also very close to the standard efficient frontiers, which are obtained without considering the constraint conditions. Therefore, our approach is effective for solving complex portfolio optimization problems.

Table 7. Simulation results of five algorithms on Nikkei index 225.

https://doi.org/10.1371/journal.pone.0175114.t007

Fig 10. Optimal frontiers for unconstraint and constraint portfolio Nikkei index 225.

https://doi.org/10.1371/journal.pone.0175114.g010

6. Conclusion

Both Harmony Search and Teaching-Learning-Based Optimization are new swarm intelligent optimization algorithms that have received much attention in recent years. In this work, the hybrid HSTLBO algorithm is presented to improve the performance of HS and TLBO.

In HSTLBO, both HS and TLBO are improved to enhance the global search ability, and a self-adaptive selection strategy is presented to balance the exploration power and the exploitation power. At the early stage of the search process, the HS algorithm gets a higher opportunity than TLBO, which aims to explore the unknown regions and avoid losing the globally optimal solution. As the number of iterations increases, the opportunity given to TLBO rises step by step. At the later stage of the search, when the population has gathered into one region that may contain the global optimal solution, TLBO obtains much more opportunity for exploiting a high-precision optimal solution.

The experimental results also demonstrate that our method is a promising optimization algorithm in solving large scale and complex optimization problems.

Acknowledgments

This work was supported by the Natural Science Foundation of China under Grant 11401357, the Youth Star in Science and Technology Project of Shaanxi Province (2016KJXX-95), the Scientific Research Program funded by the Shaanxi Provincial Education Department (No. 16JK1157), and the Scientific Research Program funded by the Projects Program of the Shaanxi University of Technology Academician Workstation (No. fckt201509).

Author Contributions

  1. Conceptualization: ST.
  2. Data curation: ST.
  3. Formal analysis: ST.
  4. Funding acquisition: LY ST.
  5. Investigation: ST.
  6. Methodology: ST YHL.
  7. Project administration: ST.
  8. Software: ST.
  9. Supervision: ST.
  10. Writing – original draft: ST.
  11. Writing – review & editing: ST LY FD YHL YL QL.

References

  1. Goldberg D E. Genetic Algorithms in Search, Optimization and Machine Learning. 1989; xiii(7):2104–2116.
  2. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks; 1995. p. 1942–1948.
  3. Storn R, Price K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 1997; 11(4):341–359.
  4. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 2007; 39(3):459–471.
  5. Cheng M Y, Prayogo D. Symbiotic Organisms Search: A new metaheuristic optimization algorithm. Computers & Structures, 2014; 139:98–112.
  6. Moh'd Alia O, Mandava R. The variants of the harmony search algorithm: an overview. Artificial Intelligence Review, 2011; 36(1):49–68.
  7. Geem Z W, Kim J H, Loganathan G V. A new heuristic optimization algorithm: harmony search. Simulation, 2001; 76(2):60–68.
  8. Rao R V, Savsani V J, Vakharia D P. Teaching–Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Information Sciences, 2012; 183(1):1–15.
  9. Rao R V, Patel V. Multi-objective optimization of two stage thermoelectric cooler using a modified teaching–learning-based optimization algorithm. Engineering Applications of Artificial Intelligence, 2013; 26(1):430–445.
  10. Pan Q K, Suganthan P N, Liang J J, Tasgetiren M F. A local-best harmony search algorithm with dynamic subpopulations. Engineering Optimization, 2010; 42(2):101–117.
  11. Pan Q K, Suganthan P N, Tasgetiren M F, Liang J J. A self-adaptive global best harmony search algorithm for continuous optimization problems. Applied Mathematics and Computation, 2010; 216(3):830–848.
  12. Mahdavi M, Fesanghary M, Damangir E. An improved harmony search algorithm for solving optimization problems. Applied Mathematics and Computation, 2007; 188(2):1567–1579.
  13. Omran M G H, Mahdavi M. Global-best harmony search. Applied Mathematics and Computation, 2008; 198(2):643–656.
  14. Yadav P, Kumar R, Panda S K, Chang C S. An intelligent tuned harmony search algorithm for optimization. Information Sciences, 2012; 196:47–72.
  15. Das S, Mukhopadhyay A, Roy A, Abraham A, Panigrahi B K. Exploratory power of the harmony search algorithm: analysis and improvements for global numerical optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011; 41(1):89–106.
  16. Tuo S, Yong L. Improved Harmony Search Algorithm with Chaos. Journal of Computational Information Systems, 2012; 8(10):4269–4276.
  17. Tuo S, Yong L, Zhou T. An improved harmony search based on teaching-learning strategy for unconstrained optimization problems. Mathematical Problems in Engineering, 2013; (2):1–11.
  18. Zou D, Gao L, Wu J, Li S. Novel global harmony search algorithm for unconstrained problems. Neurocomputing, 2010; 73(16):3308–3318.
  19. Tuo S, Zhang J, Yong L, Yuan X, Liu B, Xu X, et al. A harmony search algorithm for high-dimensional multimodal optimization problems. Digital Signal Processing, 2015; 46(C):151–163.
  20. Chen J, Pan Q K, Li J Q. Harmony search algorithm with dynamic control parameters. Applied Mathematics and Computation, 2012; 219(2):592–604.
  21. Alatas B. Chaotic harmony search algorithms. Applied Mathematics and Computation, 2010; 216(9):2687–2699.
  22. Cobos C, Estupiñán D, Pérez J. GHS + LEM: Global-best Harmony Search using learnable evolution models. Applied Mathematics and Computation, 2011; 218(6):2558–2578.
  23. Kattan A, Abdullah R. A dynamic self-adaptive harmony search algorithm for continuous optimization problems. Applied Mathematics and Computation, 2013; 219(16):8542–8567.
  24. Wang L, Yang R, Xu Y, Niu Q, Pardalos P M, Fei M. An improved adaptive binary Harmony Search algorithm. Information Sciences, 2013; 58–87.
  25. Enayatifar R, Yousefi M, Abdullah A H, Darus A N. LAHS: a novel harmony search algorithm based on learning automata. Communications in Nonlinear Science and Numerical Simulation, 2013; 18(18):3481–3497.
  26. Li G, Niu P, Zhang W, Liu Y. Model NOx emissions by least squares support vector machine with tuning based on ameliorated teaching–learning-based optimization. Chemometrics and Intelligent Laboratory Systems, 2013; 126(8):11–20. (ATLBO)
  27. Satapathy S C, Naik A, Parvathi K. Weighted teaching-learning-based optimization for global function optimization. Applied Mathematics, 2013; 4(03):429. (WTLBO)
  28. Ouyang H B, Gao L Q, Kong X Y, Zou D X, Li S. Teaching-learning based optimization with global crossover for global optimization problems. Applied Mathematics and Computation, 2015; 265:533–556. (TLBO-GC)
  29. Chen D, Zou F, Li Z, Wang J, Li S. An improved teaching–learning-based optimization algorithm for solving global optimization problem. Information Sciences, 2015; 297(C):171–190. (ITLBO)
  30. Cheng M-Y, Prayogo D. Fuzzy adaptive teaching–learning-based optimization for global numerical optimization. Neural Computing and Applications, 2016; 1–19.
  31. Cheng M-Y, Prayogo D, Wu Y-W, Lukito M M. A Hybrid Harmony Search algorithm for discrete sizing optimization of truss structure. Automation in Construction, 2016; 69:21–33.
  32. Fukushima M. Test Functions for Unconstrained Global Optimization. http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO_files/Page364.htm
  33. Tang K, Yao X, Suganthan P N, MacNish C, Chen Y P, Chen C M, et al. Benchmark Functions for the CEC'2008 Special Session and Competition on Large Scale Global Optimization. http://www.ntu.edu.sg/home/EPNSugan/, 2008.
  34. Tang K, Li X, Suganthan P N, Yang Z, Weise T. Benchmark functions for the CEC'2010 special session and competition on large scale global optimization. Technical Report, Nature Inspired Computation and Applications Laboratory, USTC, China & Nanyang Technological University, 2009. http://nical.ustc.edu.cn/cec10ss.php
  35. Herrera F, Lozano M, Molina D. Test suite for the special issue of Soft Computing on scalability of evolutionary algorithms and other meta-heuristics for large scale continuous optimization problems. http://sci2s.ugr.es/eamhco/CFP.php
  36. Derrac J, García S, Molina D, Herrera F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 2011; 1:3–18.
  37. Civicioglu P. Backtracking search optimization algorithm for numerical optimization problems. Applied Mathematics and Computation, 2013; 219(15):8121–8144.
  38. Chang T J, Meade N, Beasley J E, Sharaiha Y M. Heuristics for cardinality constrained portfolio optimisation. Computers & Operations Research, 1999; 27(13):1271–1302.
  39. Wang Z. Models and algorithms for some kinds of portfolio optimization problems [D]. Xidian University, 2012 (in Chinese).
  40. Cura T. Particle swarm optimization approach to portfolio optimization. Nonlinear Analysis: Real World Applications, 2009; 10(4):2396–2406.
  41. Tuo S H. A modified harmony search algorithm for portfolio optimization problems. Economic Computation and Economic Cybernetics Studies and Research, 2016; 50(1).