
Multi-Start, Random Reselection of Algorithms or Both?

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
  • Tarek El-Mihoub
  • Lars Nolle
  • Christoph Tholen
  • Ibrahim Aref
  • Iring Paulenz
Publication date: 27/07/2022
Host publication: 2022 IEEE 2nd International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering (MI-STA)
Publisher: IEEE
Pages: 21-26
Number of pages: 6
ISBN (electronic): 9781665479189
Original language: English

Publication series

Name: 2022 IEEE 2nd International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering, MI-STA 2022 - Proceeding

Abstract

Most state-of-the-art optimization algorithms use restarts that resample new initial solutions to avoid premature convergence. However, resampling is not the only way to avoid this problem, and initial solutions are not its only cause. Starting from the same set of initial solutions does not always lead to the same local optimum, even when using the same stochastic search method. In such cases, using a different search algorithm may help to escape local optima.
This paper investigates the effectiveness of randomly selecting a new search algorithm, instead of resampling a new initial solution, to overcome premature convergence. Randomly selecting a new search algorithm while keeping the same initial solution is compared with sampling a new initial solution while keeping the same algorithm. The effectiveness of random algorithm selection in freeing solutions that are trapped in local optima is also studied. A number of experiments were conducted to evaluate how often the different random selection approaches reach the global optimum. The noise-free BBOB-2010 test suite was used to benchmark the different sampling approaches. The results demonstrate the effectiveness of randomly selecting new algorithms over resampling new initial solutions on a range of optimization problems. Random selection of a new algorithm can improve the success rate by more than 10% compared with that of the best algorithm.
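
To make the two strategies being compared concrete, the sketch below contrasts conventional multi-start resampling with random reselection of the search algorithm. It is illustrative only: the function names, the search(objective, x0, budget) interface, and the fixed per-round budget are assumptions and are not taken from the paper.

    import random

    def multi_start_resampling(search, objective, sample_x0, n_rounds=10, budget=1000):
        """Conventional multi-start: keep the same search algorithm and
        resample a fresh random initial solution on every restart."""
        best_x, best_f = None, float("inf")
        for _ in range(n_rounds):
            x0 = sample_x0()                          # new random initial solution
            x, f = search(objective, x0, budget)      # same algorithm every round
            if f < best_f:
                best_x, best_f = x, f
        return best_x, best_f

    def random_algorithm_reselection(portfolio, objective, x0, n_rounds=10, budget=1000):
        """Alternative strategy studied in the paper (sketched here): keep the
        current solution and randomly pick a new search algorithm from a
        portfolio instead of resampling the initial solution."""
        best_x, best_f = x0, objective(x0)
        for _ in range(n_rounds):
            search = random.choice(portfolio)         # random reselection of the algorithm
            x, f = search(objective, best_x, budget)  # continue from the retained solution
            if f < best_f:
                best_x, best_f = x, f
        return best_x, best_f

In this sketch, portfolio would hold the candidate stochastic search methods and sample_x0 would draw a fresh starting point uniformly from the search domain; a practical implementation would also need a stagnation test to decide when a round counts as premature convergence.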