# Satisfying the function evaluation constraints in Evolutionary Algorithms


In constrained optimization with evolutionary algorithms, there are two kinds of function evaluations to account for: *objective function evaluation* and *constraint function evaluation*.

In recent work for the IEEE CEC 2010 competition on constrained real-parameter optimization problems, we (along with Prof. Deb and Ritu) coupled an idea from the gradient projection (GP) method for handling equality constraints with our genetic algorithm (GA) based algorithm for finding the global optima of the problems.

The approach worked wonderfully well for us. There was a problem, however: the competition imposed a limit on the maximum number of function evaluations, and under this limit our algorithm did not perform well on some of the problems. (As if the actual constraints were not enough!)

So instead of applying the GP method in every generation, we started applying it only after some initial generations had elapsed; once it kicked in, we applied it to every individual in the population. With this change we achieved decent results within the specified maximum number of function evaluations. A form of **evolution control** [1] could be at play here, which we didn't explore during the course of our work because of time constraints.
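The strategy above can be sketched as a GA loop that skips the GP repair step for the first few generations. This is a minimal, illustrative sketch: the objective, the repair step (shown here as a projection onto a single toy equality constraint), and the GA operators are all hypothetical stand-ins, not our actual competition code.

```python
import random

def evaluate(x):
    # Toy objective: sphere function (illustrative stand-in for the real problem)
    return sum(xi * xi for xi in x)

def gradient_projection_repair(x):
    # Placeholder for the GP-based equality-constraint handling; as an
    # example, project x onto the hyperplane sum(x) = 1.
    shift = (sum(x) - 1.0) / len(x)
    return [xi - shift for xi in x]

def run_ga(pop_size=20, dims=5, generations=50, gp_start=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
    evals = 0
    for gen in range(generations):
        # Apply the GP repair to every individual, but only after gp_start
        # generations have elapsed -- this is the evaluation-saving trick.
        if gen >= gp_start:
            pop = [gradient_projection_repair(x) for x in pop]
        scored = sorted(pop, key=evaluate)
        evals += pop_size  # one conceptual evaluation per individual per generation
        # Truncation selection + Gaussian mutation as stand-ins for the full GA operators
        parents = scored[: pop_size // 2]
        children = [[xi + rng.gauss(0, 0.1) for xi in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = min(pop, key=evaluate)
    evals += pop_size
    return best, evals

best, evals = run_ga()
```

Deferring the repair does not reduce the evaluation count by itself; the point is that the GP step converges faster once the population has already drifted toward the feasible region, so fewer total generations (and hence evaluations) are needed.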

This is a common problem when working with EAs such as a GA. Since a population of solutions is processed at each generation, the function evaluation count grows every generation. Besides the sheer number of function evaluations, in some complex problems the time taken for each evaluation is so high that parallel methods have to be adopted in order to solve them. (*In a very recent bit of playing around, the resident CUDA guru here parallelized the objective function evaluation in the Innovization guru's code, and they achieved a huge speedup.*)
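The parallelization works because each individual's evaluation is independent of the others. A minimal CPU-side sketch (the CUDA version mentioned above follows the same map-over-the-population pattern) could look like this; the objective here is a trivial placeholder:

```python
from multiprocessing.pool import ThreadPool  # swap in multiprocessing.Pool for CPU-bound objectives

def expensive_objective(x):
    # Stand-in for a costly simulation; imagine each call takes seconds
    return sum(xi ** 2 for xi in x)

def evaluate_population(population, workers=4):
    # Individuals are independent, so one generation's evaluations map in parallel
    with ThreadPool(workers) as pool:
        return pool.map(expensive_objective, population)

population = [[float(i), float(i + 1)] for i in range(8)]
fitnesses = evaluate_population(population)
```

A thread pool is shown only for simplicity; for genuinely CPU-bound evaluations a process pool (or the GPU) is what actually buys the speedup.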

**Using approximation methods:** Since parallelization is not always the best option, one way to avoid expensive function evaluations (or to comply with function evaluation limits, where parallelization won't help, of course) is to use **approximation models** (or **meta-models**).

That is, using a set of true data (**samples**), build up a model, and then use this model to obtain an approximation for an unknown input sample, thus saving the actual function evaluation(s). The important consideration, however, is how **true** the approximation model is, also called the **fidelity** of the model. If the model is not a very good approximation, the EA might only be able to return a sub-optimal result, which is not what we want.
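The sample-then-predict idea can be sketched with a deliberately crude meta-model: predict the fitness of an unknown point from its nearest previously evaluated sample. Real studies use polynomial response surfaces, RBF networks, Kriging, and so on; this toy version only shows where the saved evaluations come from.

```python
import math

def true_objective(x):
    # The expensive function we want to call as rarely as possible
    return sum(xi ** 2 for xi in x)

class NearestNeighborSurrogate:
    """Crude meta-model: return the true fitness of the closest archived sample."""

    def __init__(self):
        self.samples = []  # (point, true fitness) pairs

    def add(self, x, f):
        self.samples.append((x, f))

    def predict(self, x):
        # No call to true_objective here -- this is the saved evaluation
        _, f = min(self.samples, key=lambda s: math.dist(s[0], x))
        return f

surrogate = NearestNeighborSurrogate()
for x in ([0.0, 0.0], [1.0, 1.0], [2.0, 2.0]):
    surrogate.add(x, true_objective(x))   # three true evaluations build the model

approx = surrogate.predict([1.1, 0.9])    # approximated, no true evaluation
```

The fidelity problem is visible even here: the prediction for `[1.1, 0.9]` is simply the value at `[1.0, 1.0]`, and the farther a query point lies from the archived samples, the worse that approximation gets.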

Following are some papers I read recently that will be useful to anyone looking to investigate this field:

- Yaochu Jin, Markus Olhofer, and Bernhard Sendhoff, "A Framework for Evolutionary Optimization With Approximate Fitness Functions"
- M. A. El-Beltagy and A. J. Keane (1999), "Metamodeling Techniques for Evolutionary Optimization of Computationally Expensive Problems: Promises and Limitations"
- Y. Jin, "A Comprehensive Survey of Fitness Approximation in Evolutionary Computation"
- Won, Ray, and Tai, "A Framework for Optimization Using Approximate Functions"

**Notes**

Some notes on [1] above:

- **Individual- and generation-based evolution control**: evolution proceeds based on fitness evaluations using not only the approximate fitness model but also the original fitness function.
- **Interactive evolutionary computation** and the usage of approximate models therein.
- Use in **multi-modal test function optimization**: smoothen the rough fitness function, and thus prevent stagnation in a local optimum.
- Two approaches to evolution control: (1) in every generation, only certain individuals are evaluated using the original fitness function; (2) all individuals are evaluated, but only in N generations out of every M generations. Determining the number of individuals in (1) and the number of generations in (2) has been answered via simulations.
- The frequency at which the original function is called and the approximate model is updated should be determined by the local fidelity of the approximate model: the lower the fidelity, the more frequently the original function should be called. The *local fidelity model* is stressed.
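The generation-based approach (2) can be sketched as a scheduler that evaluates whole generations with the original function only N times out of every M generations and falls back on the meta-model otherwise. Everything here is illustrative: the objective is a toy, and the surrogate is the same nearest-neighbor stand-in, not any particular model from [1].

```python
def true_fitness(x):
    # Expensive original fitness function (toy 1-D example)
    return (x - 3.0) ** 2

def approx_fitness(x, archive):
    # Nearest-neighbor meta-model over the archive of true evaluations
    _, f = min(archive, key=lambda s: abs(s[0] - x))
    return f

def generation_based_control(generations, n_true=1, m_cycle=4):
    """Evaluate whole generations with the true function only n_true times
    out of every m_cycle generations; otherwise use the meta-model."""
    archive = []      # (point, true fitness) pairs seen so far
    true_calls = 0
    results = []
    for gen, pop in enumerate(generations):
        if gen % m_cycle < n_true:               # controlled generation
            fits = [true_fitness(x) for x in pop]
            archive.extend(zip(pop, fits))
            true_calls += len(pop)
        else:                                    # surrogate generation
            fits = [approx_fitness(x, archive) for x in pop]
        results.append(fits)
    return results, true_calls

gens = [[0.0, 1.0], [0.5, 1.5], [2.0, 2.5], [3.0, 3.5],
        [4.0, 4.5], [0.2, 0.8], [2.9, 3.1], [1.0, 2.0]]
results, true_calls = generation_based_control(gens)
```

With `n_true=1` and `m_cycle=4`, only 2 of the 8 generations hit the original function, so the true-evaluation count drops to a quarter of the naive scheme. The local-fidelity idea from the last note would make `n_true`/`m_cycle` adaptive instead of fixed.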

I hope to experiment with approximation models for the work I mentioned at the beginning. The challenge will be to find the right evolution control strategy.


Published at DZone with permission of Amit Saha, DZone MVB. See the original article here.
