Abstract: Real-world problems are often dynamic in nature. They form a class of difficult problems that metaheuristics aim to solve. The goal is not only to find near-optimal solutions for a given objective function, but also to track them in the search space. In this article we discuss dynamic optimization in the continuous case. We then present experiments with our ant colony algorithm, DHCIAC (Dynamic Hybrid Continuous Interacting Ant Colony), on a set of test functions specifically designed for that purpose.
Keywords: Dynamic optimization, ant colony, Nelder-Mead, particle swarm optimization, simplex, genetic algorithm.
I. Introduction
Metaheuristics are a class of stochastic methods that have proved valuable for their versatility [17], [11]. They are generally used to solve discrete combinatorial NP-hard problems, but real-world problems are often defined over real, or mixed integer/real, parameters.
In recent years, the literature has shown an increasing interest in dynamic problems. The most commonly used metaheuristics are evolutionary algorithms, which can be applied to combinatorial problems. One promising approach in this field is that of Ant Colony Optimization algorithms.
Our concern in this paper is hard continuous dynamic optimization problems, solved using an ant colony algorithm. In section II we briefly review related work. Our DHCIAC algorithm is described in detail in section III. Section IV-A lists the performance measures found in the literature and used in this paper. Results and a comparison with other approaches on a new set of dynamic test functions are presented in section V. We conclude in section VI.
II. Metaheuristics for dynamic optimization
Most metaheuristics are sophisticated Monte Carlo methods. The basic Monte Carlo method is a random search algorithm that seeks the optimum of a function by generating a sample of solutions according to a uniform law. At the beginning of the 1990s, a new research domain emerged in distributed artificial intelligence, called swarm intelligence. It studies how mimicking social insects can inspire new algorithms. Computer science has borrowed the concepts of self-organization and emergence, found in social insect societies, to define what is called swarm intelligence. The ant colony algorithm (based on the deposit and evaporation of pheromone) and the particle swarm optimization (PSO) algorithm are only two nature-inspired methods among others. One major advantage of swarm intelligence algorithms, such as ant colony algorithms, is their flexibility in dynamic environments. Nevertheless, few works deal with applications of ant colony algorithms to dynamic continuous problems.
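For illustration, the following is a minimal sketch of the basic Monte Carlo random search mentioned above (Python; the function and parameter names are ours and it is not part of DHCIAC):

    import numpy as np

    def random_search(f, bounds, n_samples=10_000, rng=None):
        """Basic Monte Carlo random search: sample uniformly in the box
        defined by `bounds` and keep the best point found."""
        rng = np.random.default_rng() if rng is None else rng
        lo, hi = np.asarray(bounds).T      # bounds: list of (min, max) per dimension
        best_x, best_f = None, np.inf
        for _ in range(n_samples):
            x = rng.uniform(lo, hi)        # one candidate drawn from the uniform law
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    # Example: minimize the sphere function on [-5, 5]^2
    x_star, f_star = random_search(lambda x: np.sum(x**2), bounds=[(-5, 5), (-5, 5)])

Such a blind search has no memory and no adaptation; the metaheuristics discussed below add mechanisms (pheromone, velocity, selection) on top of this basic sampling principle.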
Daniel Parrott and Xiaodong Li present in [25] a particle swarm optimization algorithm for multimodal optimization. T. M. Blackwell uses in [5, 4] a technique for solving dynamic problems that maintains diversity by assigning a repulsion charge to the particles, in analogy with electrostatic charges. The PSO method proposed in [25] is a model dedicated to dynamic continuous environments containing several optima.
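To illustrate the repulsion idea, here is a minimal sketch of a Coulomb-like inter-particle force (Python; the function name and the parameters r_core and r_limit are ours, and this is only an assumption-laden sketch of the principle, not Blackwell's exact charged-PSO update):

    import numpy as np

    def repulsion_acceleration(positions, charges, r_core=1.0, r_limit=30.0):
        """Coulomb-like repulsion between charged particles: each charged
        particle is pushed away from its charged neighbours, which keeps
        the swarm spread out in a changing landscape."""
        n = len(positions)
        acc = np.zeros_like(positions, dtype=float)
        for i in range(n):
            for j in range(n):
                if i == j or charges[i] == 0 or charges[j] == 0:
                    continue
                d = positions[i] - positions[j]
                dist = np.linalg.norm(d)
                if r_core <= dist <= r_limit:     # repel only within a shell
                    acc[i] += charges[i] * charges[j] * d / dist**3
        return acc  # added to the usual PSO velocity update at each iteration

Neutral particles (zero charge) behave as in standard PSO and converge, while charged particles keep exploring around the converged core.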
Genetic algorithms have been widely applied to dynamic landscapes, but mostly in the discrete case. Jurgen...