deap-1.4.1/INSTALL.txt
================================
UNIX based platforms and Windows
================================
In order to install DEAP from source, change directory to the root of deap and type in:
$ python setup.py install
This will try to install deap into your package directory; you might need write permissions for this directory.
=======
Options
=======
Prefix
++++++
You can install this software somewhere else by adding the prefix option to the installation.
$ python setup.py install --prefix=somewhere/else
Other
+++++
Other basic options are provided by Python's build tools; see http://docs.python.org/install/ for more information.
deap-1.4.1/LICENSE.txt
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
deap-1.4.1/MANIFEST.in
include *.txt
include *.md
recursive-include deap *.cpp *.c *.hpp *.h
recursive-include examples *.py *.csv *.json *.txt *.cpp *.hpp *.h
recursive-include doc *
recursive-include tests *
prune doc/_build
global-exclude .DS_Store
global-exclude *.pyc
deap-1.4.1/PKG-INFO
Metadata-Version: 2.1
Name: deap
Version: 1.4.1
Summary: Distributed Evolutionary Algorithms in Python
Home-page: https://www.github.com/deap
Author: deap Development Team
Author-email: deap-users@googlegroups.com
License: LGPL
Keywords: evolutionary algorithms,genetic algorithms,genetic programming,cma-es,ga,gp,es,pso
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Software Development
Description-Content-Type: text/markdown
License-File: LICENSE.txt
deap-1.4.1/README.md
# DEAP
[Build Status](https://travis-ci.org/DEAP/deap) [Download](https://pypi.python.org/pypi/deap) [Join the chat](https://gitter.im/DEAP/deap?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [Azure Build Status](https://dev.azure.com/fderainville/DEAP/_build/latest?definitionId=1&branchName=master) [Documentation Status](https://deap.readthedocs.io/en/master/?badge=master)
DEAP is a novel evolutionary computation framework for rapid prototyping and testing of
ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelisation mechanisms such as multiprocessing and [SCOOP](https://github.com/soravux/scoop).
DEAP includes the following features:
* Genetic algorithm using any imaginable representation
* List, Array, Set, Dictionary, Tree, NumPy Array, etc.
* Genetic programming using prefix trees
* Loosely typed, Strongly typed
* Automatically defined functions
* Evolution strategies (including CMA-ES)
* Multi-objective optimisation (NSGA-II, NSGA-III, SPEA2, MO-CMA-ES)
* Co-evolution (cooperative and competitive) of multiple populations
* Parallelization of the evaluations (and more)
* Hall of Fame of the best individuals that lived in the population
* Checkpoints that take snapshots of a system regularly
* Benchmarks module containing most common test functions
* Genealogy of an evolution (that is compatible with [NetworkX](https://github.com/networkx/networkx))
* Examples of alternative algorithms: Particle Swarm Optimization, Differential Evolution, Estimation of Distribution Algorithm
## Downloads
Following acceptance of [PEP 438](http://www.python.org/dev/peps/pep-0438/) by the Python community, we have moved DEAP's source releases to [PyPI](https://pypi.python.org).
You can find the most recent releases at: https://pypi.python.org/pypi/deap/.
## Documentation
See the [DEAP User's Guide](http://deap.readthedocs.org/) for DEAP documentation.
In order to get the tip documentation, change directory to the `doc` subfolder and type `make html`; the documentation will be generated under `_build/html`. You will need [Sphinx](http://sphinx.pocoo.org) to build the documentation.
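Concretely, assuming Sphinx is installed in your environment, building the HTML docs from a source checkout amounts to:

```bash
pip install sphinx   # documentation builder used by the doc/ folder
cd doc
make html
# the generated pages land in _build/html, e.g. _build/html/index.html
```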
### Notebooks
Also check out our new [notebook examples](https://github.com/DEAP/notebooks). Using [Jupyter notebooks](http://jupyter.org) you'll be able to navigate and execute each block of code individually and see what every line is doing. Either look at the notebooks online using the notebook viewer links at the bottom of the page, or download the notebooks, navigate to your download directory and run
```bash
jupyter notebook
```
## Installation
We encourage you to use easy_install or pip to install DEAP on your system. Other installation procedures like apt-get, yum, etc. usually provide an outdated version.
```bash
pip install deap
```
The latest version can be installed with
```bash
pip install git+https://github.com/DEAP/deap@master
```
If you wish to build from sources, download or clone the repository and type
```bash
python setup.py install
```
## Build Status
DEAP build status is available on Travis-CI https://travis-ci.org/DEAP/deap.
## Requirements
The most basic features of DEAP require Python 2.6. In order to combine the toolbox and the multiprocessing module, Python 2.7 is needed for its support for pickling partial functions. CMA-ES requires NumPy, and we recommend matplotlib for visualization of results as it is fully compatible with DEAP's API.
Since version 0.8, DEAP is compatible out of the box with Python 3. The installation procedure automatically translates the source to Python 3 with 2to3; however, this requires having `setuptools<=58`. It is recommended to use `pip install setuptools==57.5.0` to address this issue.
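Put together, a from-source install on a modern Python therefore looks like this (a sketch following the pin recommended above):

```bash
pip install setuptools==57.5.0   # satisfies the setuptools<=58 requirement for 2to3
python setup.py install
```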
## Example
The following code gives a quick overview of how simple it is to implement the OneMax problem optimization with a genetic algorithm using DEAP. More examples are provided [here](http://deap.readthedocs.org/en/master/examples/index.html).
```python
import random
from deap import creator, base, tools, algorithms
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, n=100)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
def evalOneMax(individual):
return sum(individual),
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
population = toolbox.population(n=300)
NGEN=40
for gen in range(NGEN):
offspring = algorithms.varAnd(population, toolbox, cxpb=0.5, mutpb=0.1)
fits = toolbox.map(toolbox.evaluate, offspring)
for fit, ind in zip(fits, offspring):
ind.fitness.values = fit
population = toolbox.select(offspring, k=len(population))
top10 = tools.selBest(population, k=10)
```
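The same evolution can also be delegated to a canned loop: `algorithms.eaSimple` (defined in `deap/algorithms.py` below) bundles the selection, variation, and evaluation steps. A minimal sketch reusing the toolbox configured above; note that `eaSimple` selects before varying, so the run is close but not bitwise identical to the manual loop:

```python
from deap import algorithms, tools

hof = tools.HallOfFame(1)  # keeps the single best individual ever seen
population, logbook = algorithms.eaSimple(
    toolbox.population(n=300), toolbox,
    cxpb=0.5, mutpb=0.1, ngen=40,
    halloffame=hof, verbose=False)
print(hof[0].fitness.values)
```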
## How to cite DEAP
Authors of scientific papers including results generated using DEAP are encouraged to cite the following paper.
```bibtex
@article{DEAP_JMLR2012,
author = " F\'elix-Antoine Fortin and Fran\c{c}ois-Michel {De Rainville} and Marc-Andr\'e Gardner and Marc Parizeau and Christian Gagn\'e ",
title = { {DEAP}: Evolutionary Algorithms Made Easy },
pages = { 2171--2175 },
volume = { 13 },
month = { jul },
year = { 2012 },
journal = { Journal of Machine Learning Research }
}
```
## Publications on DEAP
* François-Michel De Rainville, Félix-Antoine Fortin, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP -- Enabling Nimbler Evolutions", SIGEVOlution, vol. 6, no 2, pp. 17-26, February 2014. [Paper](http://goo.gl/tOrXTp)
* Félix-Antoine Fortin, François-Michel De Rainville, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP: Evolutionary Algorithms Made Easy", Journal of Machine Learning Research, vol. 13, pp. 2171-2175, jul 2012. [Paper](http://goo.gl/amJ3x)
* François-Michel De Rainville, Félix-Antoine Fortin, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP: A Python Framework for Evolutionary Algorithms", in EvoSoft Workshop, Companion proc. of the Genetic and Evolutionary Computation Conference (GECCO 2012), July 07-11 2012. [Paper](http://goo.gl/pXXug)
## Projects using DEAP
* Ribaric, T., & Houghten, S. (2017, June). Genetic programming for improved cryptanalysis of elliptic curve cryptosystems. In 2017 IEEE Congress on Evolutionary Computation (CEC) (pp. 419-426). IEEE.
* Ellefsen, Kai Olav, Herman Augusto Lepikson, and Jan C. Albiez. "Multiobjective coverage path planning: Enabling automated inspection of complex, real-world structures." Applied Soft Computing 61 (2017): 264-282.
* S. Chardon, B. Brangeon, E. Bozonnet, C. Inard (2016), Construction cost and energy performance of single family houses: From integrated design to automated optimization, Automation in Construction, Volume 70, p.1-13.
* B. Brangeon, E. Bozonnet, C. Inard (2016), Integrated refurbishment of collective housing and optimization process with real products databases, Building Simulation Optimization, pp. 531–538, Newcastle, England.
* Randal S. Olson, Ryan J. Urbanowicz, Peter C. Andrews, Nicole A. Lavender, La Creis Kidd, and Jason H. Moore (2016). Automating biomedical data science through tree-based pipeline optimization. Applications of Evolutionary Computation, pages 123-137.
* Randal S. Olson, Nathan Bartley, Ryan J. Urbanowicz, and Jason H. Moore (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. Proceedings of GECCO 2016, pages 485-492.
* Van Geit W, Gevaert M, Chindemi G, Rössert C, Courcol J, Muller EB, Schürmann F, Segev I and Markram H (2016). BluePyOpt: Leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Front. Neuroinform. 10:17. doi: 10.3389/fninf.2016.00017 https://github.com/BlueBrain/BluePyOpt
* Lara-Cabrera, R., Cotta, C. and Fernández-Leiva, A.J. (2014). Geometrical vs topological measures for the evolution of aesthetic maps in an RTS game, Entertainment Computing.
* Macret, M. and Pasquier, P. (2013). Automatic Tuning of the OP-1 Synthesizer Using a Multi-objective Genetic Algorithm. In Proceedings of the 10th Sound and Music Computing Conference (SMC). (pp 614-621).
* Fortin, F. A., Grenier, S., & Parizeau, M. (2013, July). Generalizing the improved run-time complexity algorithm for non-dominated sorting. In Proceeding of the fifteenth annual conference on Genetic and evolutionary computation conference (pp. 615-622). ACM.
* Fortin, F. A., & Parizeau, M. (2013, July). Revisiting the NSGA-II crowding-distance computation. In Proceeding of the fifteenth annual conference on Genetic and evolutionary computation conference (pp. 623-630). ACM.
* Marc-André Gardner, Christian Gagné, and Marc Parizeau. Estimation of Distribution Algorithm based on Hidden Markov Models for Combinatorial Optimization. in Comp. Proc. Genetic and Evolutionary Computation Conference (GECCO 2013), July 2013.
* J. T. Zhai, M. A. Bamakhrama, and T. Stefanov. "Exploiting Just-enough Parallelism when Mapping Streaming Applications in Hard Real-time Systems". Design Automation Conference (DAC 2013), 2013.
* V. Akbarzadeh, C. Gagné, M. Parizeau, M. Argany, M. A Mostafavi, "Probabilistic Sensing Model for Sensor Placement Optimization Based on Line-of-Sight Coverage", Accepted in IEEE Transactions on Instrumentation and Measurement, 2012.
* M. Reif, F. Shafait, and A. Dengel. "Dataset Generation for Meta-Learning". Proceedings of the German Conference on Artificial Intelligence (KI'12). 2012.
* M. T. Ribeiro, A. Lacerda, A. Veloso, and N. Ziviani. "Pareto-Efficient Hybridization for Multi-Objective Recommender Systems". Proceedings of the Conference on Recommender Systems (RecSys'12). 2012.
* M. Pérez-Ortiz, A. Arauzo-Azofra, C. Hervás-Martínez, L. García-Hernández and L. Salas-Morera. "A system learning user preferences for multiobjective optimization of facility layouts". Proceedings of the Int. Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO'12). 2012.
* Lévesque, J.C., Durand, A., Gagné, C., and Sabourin, R., Multi-Objective Evolutionary Optimization for Generating Ensembles of Classifiers in the ROC Space, Genetic and Evolutionary Computation Conference (GECCO 2012), 2012.
* Marc-André Gardner, Christian Gagné, and Marc Parizeau, "Bloat Control in Genetic Programming with Histogram-based Accept-Reject Method", in Proc. Genetic and Evolutionary Computation Conference (GECCO 2011), 2011.
* Vahab Akbarzadeh, Albert Ko, Christian Gagné, and Marc Parizeau, "Topography-Aware Sensor Deployment Optimization with CMA-ES", in Proc. of Parallel Problem Solving from Nature (PPSN 2010), Springer, 2010.
* DEAP is used in [TPOT](https://github.com/rhiever/tpot), an open source tool that uses genetic programming to optimize machine learning pipelines.
* DEAP is also used in ROS as an optimization package http://www.ros.org/wiki/deap.
* DEAP is an optional dependency for [PyXRD](https://github.com/mathijs-dumon/PyXRD), a Python implementation of the matrix algorithm developed for the X-ray diffraction analysis of disordered lamellar structures.
* DEAP is used in [glyph](https://github.com/Ambrosys/glyph), a library for symbolic regression with applications to [MLC](https://en.wikipedia.org/wiki/Machine_learning_control).
* DEAP is used in [Sklearn-genetic-opt](https://github.com/rodrigo-arenas/Sklearn-genetic-opt), an open source tool that uses evolutionary programming to fine tune machine learning hyperparameters.
If you want your project listed here, send us a link and a brief description and we'll be glad to add it.
deap-1.4.1/deap/__init__.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
__author__ = "DEAP Team"
__version__ = "1.4"
__revision__ = "1.4.1"
deap-1.4.1/deap/algorithms.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""The :mod:`algorithms` module is intended to contain some specific algorithms
in order to execute very common evolutionary algorithms. The method used here
are more for convenience than reference as the implementation of every
evolutionary algorithm may vary infinitely. Most of the algorithms in this
module use operators registered in the toolbox. Generally, the keyword used are
:meth:`mate` for crossover, :meth:`mutate` for mutation, :meth:`~deap.select`
for selection and :meth:`evaluate` for evaluation.
You are encouraged to write your own algorithms in order to make them do what
you really want them to do.
"""
import random
from . import tools
def varAnd(population, toolbox, cxpb, mutpb):
r"""Part of an evolutionary algorithm applying only the variation part
(crossover **and** mutation). The modified individuals have their
fitness invalidated. The individuals are cloned so returned population is
independent of the input population.
:param population: A list of individuals to vary.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param cxpb: The probability of mating two individuals.
:param mutpb: The probability of mutating an individual.
:returns: A list of varied individuals that are independent of their
parents.
The variation goes as follows. First, the parental population
:math:`P_\mathrm{p}` is duplicated using the :meth:`toolbox.clone` method
and the result is put into the offspring population :math:`P_\mathrm{o}`. A
first loop over :math:`P_\mathrm{o}` is executed to mate pairs of
consecutive individuals. According to the crossover probability *cxpb*, the
individuals :math:`\mathbf{x}_i` and :math:`\mathbf{x}_{i+1}` are mated
using the :meth:`toolbox.mate` method. The resulting children
:math:`\mathbf{y}_i` and :math:`\mathbf{y}_{i+1}` replace their respective
parents in :math:`P_\mathrm{o}`. A second loop over the resulting
:math:`P_\mathrm{o}` is executed to mutate every individual with a
probability *mutpb*. When an individual is mutated it replaces its
non-mutated version in :math:`P_\mathrm{o}`. The resulting :math:`P_\mathrm{o}`
is returned.
This variation is named *And* because of its propensity to apply both
crossover and mutation on the individuals. Note that neither operator is
applied systematically; the resulting individuals can be generated from
crossover only, mutation only, crossover and mutation, or reproduction
according to the given probabilities. Both probabilities should be in
:math:`[0, 1]`.
"""
offspring = [toolbox.clone(ind) for ind in population]
# Apply crossover and mutation on the offspring
for i in range(1, len(offspring), 2):
if random.random() < cxpb:
offspring[i - 1], offspring[i] = toolbox.mate(offspring[i - 1],
offspring[i])
del offspring[i - 1].fitness.values, offspring[i].fitness.values
for i in range(len(offspring)):
if random.random() < mutpb:
offspring[i], = toolbox.mutate(offspring[i])
del offspring[i].fitness.values
return offspring
def eaSimple(population, toolbox, cxpb, mutpb, ngen, stats=None,
halloffame=None, verbose=__debug__):
"""This algorithm reproduce the simplest evolutionary algorithm as
presented in chapter 7 of [Back2000]_.
:param population: A list of individuals.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param cxpb: The probability of mating two individuals.
:param mutpb: The probability of mutating an individual.
:param ngen: The number of generations.
:param stats: A :class:`~deap.tools.Statistics` object that is updated
inplace, optional.
:param halloffame: A :class:`~deap.tools.HallOfFame` object that will
contain the best individuals, optional.
:param verbose: Whether or not to log the statistics.
:returns: The final population
:returns: A :class:`~deap.tools.Logbook` with the statistics of the
evolution
The algorithm takes in a population and evolves it in place using the
:meth:`varAnd` method. It returns the optimized population and a
:class:`~deap.tools.Logbook` with the statistics of the evolution. The
logbook will contain the generation number, the number of evaluations for
each generation and the statistics if a :class:`~deap.tools.Statistics` is
given as argument. The *cxpb* and *mutpb* arguments are passed to the
:func:`varAnd` function. The pseudocode goes as follows::
evaluate(population)
for g in range(ngen):
population = select(population, len(population))
offspring = varAnd(population, toolbox, cxpb, mutpb)
evaluate(offspring)
population = offspring
As stated in the pseudocode above, the algorithm goes as follows. First, it
evaluates the individuals with an invalid fitness. Second, it enters the
generational loop where the selection procedure is applied to entirely
replace the parental population. The 1:1 replacement ratio of this
algorithm **requires** the selection procedure to be stochastic and to
select the same individual multiple times, for example,
:func:`~deap.tools.selTournament` and :func:`~deap.tools.selRoulette`.
Third, it applies the :func:`varAnd` function to produce the next
generation population. Fourth, it evaluates the new individuals and
computes the statistics on this population. Finally, when *ngen*
generations are done, the algorithm returns a tuple with the final
population and a :class:`~deap.tools.Logbook` of the evolution.
.. note::
Using a non-stochastic selection method will result in no selection as
the operator selects *n* individuals from a pool of *n*.
This function expects the :meth:`toolbox.mate`, :meth:`toolbox.mutate`,
:meth:`toolbox.select` and :meth:`toolbox.evaluate` aliases to be
registered in the toolbox.
.. [Back2000] Back, Fogel and Michalewicz, "Evolutionary Computation 1 :
Basic Algorithms and Operators", 2000.
"""
logbook = tools.Logbook()
logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in population if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
if halloffame is not None:
halloffame.update(population)
record = stats.compile(population) if stats else {}
logbook.record(gen=0, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
# Begin the generational process
for gen in range(1, ngen + 1):
# Select the next generation individuals
offspring = toolbox.select(population, len(population))
# Vary the pool of individuals
offspring = varAnd(offspring, toolbox, cxpb, mutpb)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# Update the hall of fame with the generated individuals
if halloffame is not None:
halloffame.update(offspring)
# Replace the current population by the offspring
population[:] = offspring
# Append the current generation statistics to the logbook
record = stats.compile(population) if stats else {}
logbook.record(gen=gen, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
return population, logbook
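# Example (illustrative sketch, not part of the library): a typical eaSimple
# call, assuming a toolbox configured as in the README's OneMax example:
#
#     import numpy
#     from deap import tools
#
#     stats = tools.Statistics(lambda ind: ind.fitness.values)
#     stats.register("avg", numpy.mean)
#     stats.register("max", numpy.max)
#     hof = tools.HallOfFame(1)
#     pop, logbook = eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=40,
#                             stats=stats, halloffame=hof)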
def varOr(population, toolbox, lambda_, cxpb, mutpb):
r"""Part of an evolutionary algorithm applying only the variation part
(crossover, mutation **or** reproduction). The modified individuals have
their fitness invalidated. The individuals are cloned so returned
population is independent of the input population.
:param population: A list of individuals to vary.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param lambda\_: The number of children to produce
:param cxpb: The probability of mating two individuals.
:param mutpb: The probability of mutating an individual.
:returns: The final population.
The variation goes as follows. On each of the *lambda_* iterations, it
selects one of the three operations: crossover, mutation or reproduction.
In the case of a crossover, two individuals are selected at random from
the parental population :math:`P_\mathrm{p}`, those individuals are cloned
using the :meth:`toolbox.clone` method and then mated using the
:meth:`toolbox.mate` method. Only the first child is appended to the
offspring population :math:`P_\mathrm{o}`, the second child is discarded.
In the case of a mutation, one individual is selected at random from
:math:`P_\mathrm{p}`, it is cloned and then mutated using the
:meth:`toolbox.mutate` method. The resulting mutant is appended to
:math:`P_\mathrm{o}`. In the case of a reproduction, one individual is
selected at random from :math:`P_\mathrm{p}`, cloned and appended to
:math:`P_\mathrm{o}`.
This variation is named *Or* because an offspring will never result from
both crossover and mutation. The sum of both probabilities
shall be in :math:`[0, 1]`, the reproduction probability is
1 - *cxpb* - *mutpb*.
"""
assert (cxpb + mutpb) <= 1.0, (
"The sum of the crossover and mutation probabilities must be smaller "
"or equal to 1.0.")
offspring = []
for _ in range(lambda_):
op_choice = random.random()
if op_choice < cxpb: # Apply crossover
ind1, ind2 = [toolbox.clone(i) for i in random.sample(population, 2)]
ind1, ind2 = toolbox.mate(ind1, ind2)
del ind1.fitness.values
offspring.append(ind1)
elif op_choice < cxpb + mutpb: # Apply mutation
ind = toolbox.clone(random.choice(population))
ind, = toolbox.mutate(ind)
del ind.fitness.values
offspring.append(ind)
else: # Apply reproduction
offspring.append(random.choice(population))
return offspring
def eaMuPlusLambda(population, toolbox, mu, lambda_, cxpb, mutpb, ngen,
stats=None, halloffame=None, verbose=__debug__):
r"""This is the :math:`(\mu + \lambda)` evolutionary algorithm.
:param population: A list of individuals.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param mu: The number of individuals to select for the next generation.
:param lambda\_: The number of children to produce at each generation.
:param cxpb: The probability that an offspring is produced by crossover.
:param mutpb: The probability that an offspring is produced by mutation.
:param ngen: The number of generations.
:param stats: A :class:`~deap.tools.Statistics` object that is updated
inplace, optional.
:param halloffame: A :class:`~deap.tools.HallOfFame` object that will
contain the best individuals, optional.
:param verbose: Whether or not to log the statistics.
:returns: The final population
:returns: A :class:`~deap.tools.Logbook` with the statistics of the
evolution.
The algorithm takes in a population and evolves it in place using the
:func:`varOr` function. It returns the optimized population and a
:class:`~deap.tools.Logbook` with the statistics of the evolution. The
logbook will contain the generation number, the number of evaluations for
each generation and the statistics if a :class:`~deap.tools.Statistics` is
given as argument. The *cxpb* and *mutpb* arguments are passed to the
:func:`varOr` function. The pseudocode goes as follows::
evaluate(population)
for g in range(ngen):
offspring = varOr(population, toolbox, lambda_, cxpb, mutpb)
evaluate(offspring)
population = select(population + offspring, mu)
First, the individuals having an invalid fitness are evaluated. Second,
the evolutionary loop begins by producing *lambda_* offspring from the
population, the offspring are generated by the :func:`varOr` function. The
offspring are then evaluated and the next generation population is
selected from both the offspring **and** the population. Finally, when
*ngen* generations are done, the algorithm returns a tuple with the final
population and a :class:`~deap.tools.Logbook` of the evolution.
This function expects :meth:`toolbox.mate`, :meth:`toolbox.mutate`,
:meth:`toolbox.select` and :meth:`toolbox.evaluate` aliases to be
registered in the toolbox. This algorithm uses the :func:`varOr`
variation.
"""
logbook = tools.Logbook()
logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in population if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
if halloffame is not None:
halloffame.update(population)
record = stats.compile(population) if stats is not None else {}
logbook.record(gen=0, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
# Begin the generational process
for gen in range(1, ngen + 1):
# Vary the population
offspring = varOr(population, toolbox, lambda_, cxpb, mutpb)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# Update the hall of fame with the generated individuals
if halloffame is not None:
halloffame.update(offspring)
# Select the next generation population
population[:] = toolbox.select(population + offspring, mu)
# Update the statistics with the new population
record = stats.compile(population) if stats is not None else {}
logbook.record(gen=gen, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
return population, logbook
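# Example (illustrative sketch): a (mu + lambda) run keeping mu=100 parents
# and producing lambda_=200 children per generation; cxpb + mutpb <= 1.0 as
# required by varOr, the remainder being the reproduction probability:
#
#     pop, logbook = eaMuPlusLambda(pop, toolbox, mu=100, lambda_=200,
#                                   cxpb=0.4, mutpb=0.4, ngen=50)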
def eaMuCommaLambda(population, toolbox, mu, lambda_, cxpb, mutpb, ngen,
stats=None, halloffame=None, verbose=__debug__):
r"""This is the :math:`(\mu~,~\lambda)` evolutionary algorithm.
:param population: A list of individuals.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param mu: The number of individuals to select for the next generation.
:param lambda\_: The number of children to produce at each generation.
:param cxpb: The probability that an offspring is produced by crossover.
:param mutpb: The probability that an offspring is produced by mutation.
:param ngen: The number of generations.
:param stats: A :class:`~deap.tools.Statistics` object that is updated
inplace, optional.
:param halloffame: A :class:`~deap.tools.HallOfFame` object that will
contain the best individuals, optional.
:param verbose: Whether or not to log the statistics.
:returns: The final population
:returns: A :class:`~deap.tools.Logbook` with the statistics of the
evolution
The algorithm takes in a population and evolves it in place using the
:func:`varOr` function. It returns the optimized population and a
:class:`~deap.tools.Logbook` with the statistics of the evolution. The
logbook will contain the generation number, the number of evaluations for
each generation and the statistics if a :class:`~deap.tools.Statistics` is
given as argument. The *cxpb* and *mutpb* arguments are passed to the
:func:`varOr` function. The pseudocode goes as follows::
evaluate(population)
for g in range(ngen):
offspring = varOr(population, toolbox, lambda_, cxpb, mutpb)
evaluate(offspring)
population = select(offspring, mu)
First, the individuals having an invalid fitness are evaluated. Second,
the evolutionary loop begins by producing *lambda_* offspring from the
population, the offspring are generated by the :func:`varOr` function. The
offspring are then evaluated and the next generation population is
selected from **only** the offspring. Finally, when
*ngen* generations are done, the algorithm returns a tuple with the final
population and a :class:`~deap.tools.Logbook` of the evolution.
.. note::
Care must be taken when the lambda:mu ratio is 1 to 1 as a
non-stochastic selection will result in no selection at all as the
operator selects *lambda* individuals from a pool of *mu*.
This function expects :meth:`toolbox.mate`, :meth:`toolbox.mutate`,
:meth:`toolbox.select` and :meth:`toolbox.evaluate` aliases to be
registered in the toolbox. This algorithm uses the :func:`varOr`
variation.
"""
assert lambda_ >= mu, "lambda must be greater than or equal to mu."
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in population if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
if halloffame is not None:
halloffame.update(population)
logbook = tools.Logbook()
logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])
record = stats.compile(population) if stats is not None else {}
logbook.record(gen=0, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
# Begin the generational process
for gen in range(1, ngen + 1):
# Vary the population
offspring = varOr(population, toolbox, lambda_, cxpb, mutpb)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# Update the hall of fame with the generated individuals
if halloffame is not None:
halloffame.update(offspring)
# Select the next generation population
population[:] = toolbox.select(offspring, mu)
# Update the statistics with the new population
record = stats.compile(population) if stats is not None else {}
logbook.record(gen=gen, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
return population, logbook
def eaGenerateUpdate(toolbox, ngen, halloffame=None, stats=None,
verbose=__debug__):
"""This is algorithm implements the ask-tell model proposed in
[Colette2010]_, where ask is called `generate` and tell is called `update`.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param ngen: The number of generations.
:param stats: A :class:`~deap.tools.Statistics` object that is updated
inplace, optional.
:param halloffame: A :class:`~deap.tools.HallOfFame` object that will
contain the best individuals, optional.
:param verbose: Whether or not to log the statistics.
:returns: The final population
:returns: A :class:`~deap.tools.Logbook` with the statistics of the
evolution
The algorithm generates the individuals using the :func:`toolbox.generate`
function and updates the generation method with the :func:`toolbox.update`
function. It returns the optimized population and a
:class:`~deap.tools.Logbook` with the statistics of the evolution. The
logbook will contain the generation number, the number of evaluations for
each generation and the statistics if a :class:`~deap.tools.Statistics` is
given as argument. The pseudocode goes as follows::
for g in range(ngen):
population = toolbox.generate()
evaluate(population)
toolbox.update(population)
This function expects :meth:`toolbox.generate` and :meth:`toolbox.evaluate` aliases to be
registered in the toolbox.
.. [Colette2010] Collette, Y., N. Hansen, G. Pujol, D. Salazar Aponte and
R. Le Riche (2010). On Object-Oriented Programming of Optimizers -
Examples in Scilab. In P. Breitkopf and R. F. Coelho, eds.:
Multidisciplinary Design Optimization in Computational Mechanics,
Wiley, pp. 527-565.
"""
logbook = tools.Logbook()
logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])
for gen in range(ngen):
# Generate a new population
population = toolbox.generate()
# Evaluate the individuals
fitnesses = toolbox.map(toolbox.evaluate, population)
for ind, fit in zip(population, fitnesses):
ind.fitness.values = fit
if halloffame is not None:
halloffame.update(population)
# Update the strategy with the evaluated individuals
toolbox.update(population)
record = stats.compile(population) if stats is not None else {}
logbook.record(gen=gen, nevals=len(population), **record)
if verbose:
print(logbook.stream)
return population, logbook
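# Example (hedged sketch): pairing eaGenerateUpdate with a CMA-ES strategy,
# assuming creator.Individual and the evaluate alias are defined elsewhere:
#
#     from deap import cma, creator
#
#     strategy = cma.Strategy(centroid=[0.0] * 10, sigma=1.0)
#     toolbox.register("generate", strategy.generate, creator.Individual)
#     toolbox.register("update", strategy.update)
#     pop, logbook = eaGenerateUpdate(toolbox, ngen=100)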
deap-1.4.1/deap/base.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""The :mod:`~deap.base` module provides basic structures to build
evolutionary algorithms. It contains the :class:`~deap.base.Toolbox`, useful
to store evolutionary operators, and a virtual :class:`~deap.base.Fitness`
class used as a base class for the fitness member of any individual. """
import sys
try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence
from copy import deepcopy
from functools import partial
from operator import mul, truediv
class Toolbox(object):
"""A toolbox for evolution that contains the evolutionary operators. At
first the toolbox contains a :meth:`~deap.toolbox.clone` method that
duplicates any element it is passed as argument, this method defaults to
the :func:`copy.deepcopy` function. and a :meth:`~deap.toolbox.map`
method that applies the function given as first argument to every items
of the iterables given as next arguments, this method defaults to the
:func:`map` function. You may populate the toolbox with any other
function by using the :meth:`~deap.base.Toolbox.register` method.
Concrete usages of the toolbox are shown for initialization in the
:ref:`creating-types` tutorial and for tools container in the
:ref:`next-step` tutorial.
"""
def __init__(self):
self.register("clone", deepcopy)
self.register("map", map)
def register(self, alias, function, *args, **kargs):
"""Register a *function* in the toolbox under the name *alias*. You
may provide default arguments that will be passed automatically when
calling the registered function. Fixed arguments can then be overridden
at function call time.
        :param alias: The name the operator will take in the toolbox. If the
                      alias already exists, it will overwrite the operator
                      already present.
        :param function: The function to which the alias refers.
        :param argument: One or more arguments (and keyword arguments) to pass
                         automatically to the registered function when called,
                         optional.
The following code block is an example of how the toolbox is used. ::
>>> def func(a, b, c=3):
        ...     print(a, b, c)
...
>>> tools = Toolbox()
>>> tools.register("myFunc", func, 2, c=4)
>>> tools.myFunc(3)
2 3 4
The registered function will be given the attributes :attr:`__name__`
set to the alias and :attr:`__doc__` set to the original function's
documentation. The :attr:`__dict__` attribute will also be updated
with the original function's instance dictionary, if any.
"""
pfunc = partial(function, *args, **kargs)
pfunc.__name__ = alias
pfunc.__doc__ = function.__doc__
if hasattr(function, "__dict__") and not isinstance(function, type):
# Some functions don't have a dictionary, in these cases
# simply don't copy it. Moreover, if the function is actually
# a class, we do not want to copy the dictionary.
pfunc.__dict__.update(function.__dict__.copy())
setattr(self, alias, pfunc)
def unregister(self, alias):
"""Unregister *alias* from the toolbox.
:param alias: The name of the operator to remove from the toolbox.
"""
delattr(self, alias)
def decorate(self, alias, *decorators):
"""Decorate *alias* with the specified *decorators*, *alias*
has to be a registered function in the current toolbox.
:param alias: The name of the operator to decorate.
        :param decorator: One or more function decorators. If multiple
                          decorators are provided, they will be applied in
                          order, with the last decorator decorating all the
                          others.
.. note::
            Decorating a function using the toolbox makes it unpicklable and
            will produce an error on pickling. Although this limitation is not
relevant in most cases, it may have an impact on distributed
environments like multiprocessing.
A function can still be decorated manually before it is added to
the toolbox (using the @ notation) in order to be picklable.
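
        A minimal sketch of the intended usage; ``checkBounds`` here is a
        hypothetical decorator factory that clips mutated attributes into
        ``[min_, max_]``. ::

            def checkBounds(min_, max_):
                def decorator(func):
                    def wrapper(*args, **kargs):
                        offspring = func(*args, **kargs)
                        for child in offspring:
                            for i in range(len(child)):
                                child[i] = max(min(child[i], max_), min_)
                        return offspring
                    return wrapper
                return decorator

            toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=1, indpb=0.2)
            toolbox.decorate("mutate", checkBounds(-1.0, 1.0))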
"""
pfunc = getattr(self, alias)
function, args, kargs = pfunc.func, pfunc.args, pfunc.keywords
for decorator in decorators:
function = decorator(function)
self.register(alias, function, *args, **kargs)
class Fitness(object):
"""The fitness is a measure of quality of a solution. If *values* are
provided as a tuple, the fitness is initialized using those values,
otherwise it is empty (or invalid).
:param values: The initial values of the fitness as a tuple, optional.
    Fitnesses may be compared using the ``>``, ``<``, ``>=``, ``<=``, ``==``
    and ``!=`` operators. The comparison is made lexicographically.
    Maximization and minimization are taken care of by a multiplication
    between the :attr:`weights` and the fitness :attr:`values`. The comparison
    can be made between fitnesses of different sizes: if the fitnesses are
    equal up to the extra elements, the longer fitness is superior to the
    shorter one.
Different types of fitnesses are created in the :ref:`creating-types`
tutorial.
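
    For instance, following the :ref:`creating-types` tutorial, a minimizing
    single-objective fitness type can be created with the
    :mod:`~deap.creator` (a minimal sketch)::

        from deap import base, creator
        creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
        fit = creator.FitnessMin((3.0,))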
.. note::
When comparing fitness values that are **minimized**, ``a > b`` will
return :data:`True` if *a* is **smaller** than *b*.
"""
weights = None
"""The weights are used in the fitness comparison. They are shared among
all fitnesses of the same type. When subclassing :class:`Fitness`, the
weights must be defined as a tuple where each element is associated to an
objective. A negative weight element corresponds to the minimization of
the associated objective and positive weight to the maximization.
.. note::
If weights is not defined during subclassing, the following error will
occur at instantiation of a subclass fitness object:
        ``TypeError: Can't instantiate abstract <class Fitness> with
        abstract attribute weights.``
"""
wvalues = ()
"""Contains the weighted values of the fitness, the multiplication with the
weights is made when the values are set via the property :attr:`values`.
Multiplication is made on setting of the values for efficiency.
Generally it is unnecessary to manipulate wvalues as it is an internal
attribute of the fitness used in the comparison operators.
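
    For example, with ``weights = (-1.0,)``, setting ``values = (5.0,)``
    stores ``wvalues = (-5.0,)``.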
"""
def __init__(self, values=()):
if self.weights is None:
raise TypeError("Can't instantiate abstract %r with abstract "
"attribute weights." % (self.__class__))
if not isinstance(self.weights, Sequence):
raise TypeError("Attribute weights of %r must be a sequence."
% self.__class__)
if len(values) > 0:
self.values = values
def getValues(self):
return tuple(map(truediv, self.wvalues, self.weights))
def setValues(self, values):
        assert len(values) == len(self.weights), "Assigned values must have the same length as the fitness weights"
try:
self.wvalues = tuple(map(mul, values, self.weights))
except TypeError:
_, _, traceback = sys.exc_info()
raise TypeError("Both weights and assigned values must be a "
"sequence of numbers when assigning to values of "
"%r. Currently assigning value(s) %r of %r to a "
"fitness with weights %s."
% (self.__class__, values, type(values),
self.weights)).with_traceback(traceback)
def delValues(self):
self.wvalues = ()
values = property(getValues, setValues, delValues,
("Fitness values. Use directly ``individual.fitness.values = values`` "
"in order to set the fitness and ``del individual.fitness.values`` "
"in order to clear (invalidate) the fitness. The (unweighted) fitness "
"can be directly accessed via ``individual.fitness.values``."))
def dominates(self, other, obj=slice(None)):
"""Return true if each objective of *self* is not strictly worse than
the corresponding objective of *other* and at least one objective is
strictly better.
        :param obj: Slice indicating on which objectives the domination is
                    tested. The default value is ``slice(None)``, representing
                    every objective.
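
        For example, with two minimizing objectives (``weights = (-1.0, -1.0)``),
        the fitness values ``(1.0, 2.0)`` dominate ``(1.5, 2.0)``: they are not
        worse on any objective and strictly better on the first one.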
"""
not_equal = False
for self_wvalue, other_wvalue in zip(self.wvalues[obj], other.wvalues[obj]):
if self_wvalue > other_wvalue:
not_equal = True
elif self_wvalue < other_wvalue:
return False
return not_equal
@property
def valid(self):
"""Assess if a fitness is valid or not."""
return len(self.wvalues) != 0
def __hash__(self):
return hash(self.wvalues)
def __gt__(self, other):
return not self.__le__(other)
def __ge__(self, other):
return not self.__lt__(other)
def __le__(self, other):
return self.wvalues <= other.wvalues
def __lt__(self, other):
return self.wvalues < other.wvalues
def __eq__(self, other):
return self.wvalues == other.wvalues
def __ne__(self, other):
return not self.__eq__(other)
def __deepcopy__(self, memo):
"""Replace the basic deepcopy function with a faster one.
It assumes that the elements in the :attr:`values` tuple are
immutable and the fitness does not contain any other object
than :attr:`values` and :attr:`weights`.
"""
copy_ = self.__class__()
copy_.wvalues = self.wvalues
return copy_
def __str__(self):
"""Return the values of the Fitness object."""
return str(self.values if self.valid else tuple())
def __repr__(self):
"""Return the Python code to build a copy of the object."""
return "%s.%s(%r)" % (self.__module__, self.__class__.__name__,
self.values if self.valid else tuple())
def _violates_constraint(fitness):
return not fitness.valid \
and fitness.constraint_violation is not None \
and sum(fitness.constraint_violation) > 0
class ConstrainedFitness(Fitness):
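    """A fitness that additionally carries a measure of constraint violation.

    :param values: The initial values of the fitness as a tuple, optional.
    :param constraint_violation: A sequence containing the amount by which
                                 each constraint is violated, optional.

    As implemented below, a fitness violates its constraints when it is
    invalid and the sum of *constraint_violation* is positive; such a
    fitness compares as worse than, and never dominates, a fitness that
    does not violate its constraints.
    """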
def __init__(self, values=(), constraint_violation=None):
super(ConstrainedFitness, self).__init__(values)
self.constraint_violation = constraint_violation
@Fitness.values.deleter
def values(self):
self.wvalues = ()
self.constraint_violation = None
def __gt__(self, other):
return not self.__le__(other)
def __ge__(self, other):
return not self.__lt__(other)
def __le__(self, other):
self_violates_constraints = _violates_constraint(self)
other_violates_constraints = _violates_constraint(other)
if self_violates_constraints and other_violates_constraints:
return True
elif self_violates_constraints:
return True
elif other_violates_constraints:
return False
return self.wvalues <= other.wvalues
def __lt__(self, other):
self_violates_constraints = _violates_constraint(self)
other_violates_constraints = _violates_constraint(other)
if self_violates_constraints and other_violates_constraints:
return False
elif self_violates_constraints:
return True
elif other_violates_constraints:
return False
return self.wvalues < other.wvalues
def __eq__(self, other):
self_violates_constraints = _violates_constraint(self)
other_violates_constraints = _violates_constraint(other)
if self_violates_constraints and other_violates_constraints:
return True
elif self_violates_constraints:
return False
elif other_violates_constraints:
return False
return self.wvalues == other.wvalues
def __ne__(self, other):
return not self.__eq__(other)
def dominates(self, other):
self_violates_constraints = _violates_constraint(self)
other_violates_constraints = _violates_constraint(other)
if self_violates_constraints and other_violates_constraints:
return False
elif self_violates_constraints:
return False
elif other_violates_constraints:
return True
return super(ConstrainedFitness, self).dominates(other)
def __str__(self):
"""Return the values of the Fitness object."""
return str((self.values if self.valid else tuple(), self.constraint_violation))
def __repr__(self):
"""Return the Python code to build a copy of the object."""
return "%s.%s(%r, %r)" % (self.__module__, self.__class__.__name__,
self.values if self.valid else tuple(),
                                  self.constraint_violation)
================================
deap-1.4.1/deap/benchmarks/__init__.py
================================
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""
Regroups typical EC benchmark functions so they can easily be imported and
used in the benchmark examples.
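
Each function takes an *individual* (a sequence of real-valued attributes)
and returns a tuple of one or more fitness values, for example::

    >>> sphere([1.0, 2.0])
    (5.0,)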
"""
import random
from math import sin, cos, pi, exp, e, sqrt
from operator import mul
from functools import reduce
# Unimodal
def rand(individual):
r"""Random test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization or maximization
* - Range
- none
* - Global optima
- none
* - Function
- :math:`f(\mathbf{x}) = \text{\texttt{random}}(0,1)`
"""
return random.random(),
def plane(individual):
r"""Plane test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- none
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = x_0`
"""
return individual[0],
def sphere(individual):
r"""Sphere test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- none
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = \sum_{i=1}^Nx_i^2`
"""
return sum(gene * gene for gene in individual),
def cigar(individual):
r"""Cigar test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- none
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = x_0^2 + 10^6\sum_{i=1}^N\,x_i^2`
"""
return individual[0]**2 + 1e6 * sum(gene * gene for gene in individual[1:]),
def rosenbrock(individual):
r"""Rosenbrock test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- none
* - Global optima
- :math:`x_i = 1, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = \sum_{i=1}^{N-1} (1-x_i)^2 + 100 (x_{i+1} - x_i^2 )^2`
.. plot:: code/benchmarks/rosenbrock.py
:width: 67 %
"""
return sum(100 * (x * x - y)**2 + (1. - x)**2 \
for x, y in zip(individual[:-1], individual[1:])),
def h1(individual):
r""" Simple two-dimensional function containing several local maxima.
From: The Merits of a Parallel Genetic Algorithm in Solving Hard
Optimization Problems, A. J. Knoek van Soest and L. J. R. Richard
Casius, J. Biomech. Eng. 125, 141 (2003)
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- maximization
* - Range
- :math:`x_i \in [-100, 100]`
* - Global optima
- :math:`\mathbf{x} = (8.6998, 6.7665)`, :math:`f(\mathbf{x}) = 2`\n
* - Function
- :math:`f(\mathbf{x}) = \frac{\sin(x_1 - \frac{x_2}{8})^2 + \
\sin(x_2 + \frac{x_1}{8})^2}{\sqrt{(x_1 - 8.6998)^2 + \
(x_2 - 6.7665)^2} + 1}`
.. plot:: code/benchmarks/h1.py
:width: 67 %
"""
num = (sin(individual[0] - individual[1] / 8))**2 + (sin(individual[1] + individual[0] / 8))**2
denum = ((individual[0] - 8.6998)**2 + (individual[1] - 6.7665)**2)**0.5 + 1
return num / denum,
# Multimodal
def ackley(individual):
r"""Ackley test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-15, 30]`
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = 20 - 20\exp\left(-0.2\sqrt{\frac{1}{N} \
\sum_{i=1}^N x_i^2} \right) + e - \exp\left(\frac{1}{N}\sum_{i=1}^N \cos(2\pi x_i) \right)`
.. plot:: code/benchmarks/ackley.py
:width: 67 %
"""
N = len(individual)
return 20 - 20 * exp(-0.2 * sqrt(1.0 / N * sum(x**2 for x in individual))) \
+ e - exp(1.0 / N * sum(cos(2 * pi * x) for x in individual)),
def bohachevsky(individual):
r"""Bohachevsky test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-100, 100]`
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = \sum_{i=1}^{N-1}(x_i^2 + 2x_{i+1}^2 - \
0.3\cos(3\pi x_i) - 0.4\cos(4\pi x_{i+1}) + 0.7)`
.. plot:: code/benchmarks/bohachevsky.py
:width: 67 %
"""
return sum(x**2 + 2 * x1**2 - 0.3 * cos(3 * pi * x) - 0.4 * cos(4 * pi * x1) + 0.7
for x, x1 in zip(individual[:-1], individual[1:])),
def griewank(individual):
r"""Griewank test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-600, 600]`
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^N\,x_i^2 - \
\prod_{i=1}^N\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1`
.. plot:: code/benchmarks/griewank.py
:width: 67 %
"""
return 1.0 / 4000.0 * sum(x ** 2 for x in individual) - \
reduce(mul, (cos(x / sqrt(i + 1.0)) for i, x in enumerate(individual)), 1) + 1,
def rastrigin(individual):
r"""Rastrigin test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-5.12, 5.12]`
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = 10N + \sum_{i=1}^N x_i^2 - 10 \cos(2\pi x_i)`
.. plot:: code/benchmarks/rastrigin.py
:width: 67 %
"""
return 10 * len(individual) + sum(gene * gene - 10 * \
cos(2 * pi * gene) for gene in individual),
def rastrigin_scaled(individual):
r"""Scaled Rastrigin test objective function.
:math:`f_{\text{RastScaled}}(\mathbf{x}) = 10N + \sum_{i=1}^N \
\left(10^{\left(\frac{i-1}{N-1}\right)} x_i \right)^2 - \
10\cos\left(2\pi 10^{\left(\frac{i-1}{N-1}\right)} x_i \right)`
"""
N = len(individual)
return 10 * N + sum((10 ** (i / (N - 1)) * x) ** 2 -
10 * cos(2 * pi * 10 ** (i / (N - 1)) * x) for i, x in enumerate(individual)),
def rastrigin_skew(individual):
r"""Skewed Rastrigin test objective function.
:math:`f_{\text{RastSkew}}(\mathbf{x}) = 10N + \sum_{i=1}^N \left(y_i^2 - 10 \cos(2\pi x_i)\right)`
:math:`\text{with } y_i = \
\begin{cases} \
10\cdot x_i & \text{ if } x_i > 0,\\ \
x_i & \text{ otherwise } \
\end{cases}`
"""
N = len(individual)
return 10*N + sum((10*x if x > 0 else x)**2
- 10*cos(2*pi*(10*x if x > 0 else x)) for x in individual),
def schaffer(individual):
r"""Schaffer test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-100, 100]`
* - Global optima
- :math:`x_i = 0, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = \sum_{i=1}^{N-1} (x_i^2+x_{i+1}^2)^{0.25} \cdot \
\left[ \sin^2(50\cdot(x_i^2+x_{i+1}^2)^{0.10}) + 1.0 \
\right]`
.. plot:: code/benchmarks/schaffer.py
:width: 67 %
"""
return sum((x**2 + x1**2)**0.25 * ((sin(50 * (x**2 + x1**2)**0.1))**2 + 1.0)
for x, x1 in zip(individual[:-1], individual[1:])),
def schwefel(individual):
r"""Schwefel test objective function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-500, 500]`
* - Global optima
- :math:`x_i = 420.96874636, \forall i \in \lbrace 1 \ldots N\rbrace`, :math:`f(\mathbf{x}) = 0`
* - Function
- :math:`f(\mathbf{x}) = 418.9828872724339\cdot N - \
\sum_{i=1}^N\,x_i\sin\left(\sqrt{|x_i|}\right)`
.. plot:: code/benchmarks/schwefel.py
:width: 67 %
"""
N = len(individual)
return 418.9828872724339 * N - sum(x * sin(sqrt(abs(x))) for x in
individual),
def himmelblau(individual):
r"""The Himmelblau's function is multimodal with 4 defined minimums in
:math:`[-6, 6]^2`.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Type
- minimization
* - Range
- :math:`x_i \in [-6, 6]`
* - Global optima
- :math:`\mathbf{x}_1 = (3.0, 2.0)`, :math:`f(\mathbf{x}_1) = 0`\n
:math:`\mathbf{x}_2 = (-2.805118, 3.131312)`, :math:`f(\mathbf{x}_2) = 0`\n
:math:`\mathbf{x}_3 = (-3.779310, -3.283186)`, :math:`f(\mathbf{x}_3) = 0`\n
:math:`\mathbf{x}_4 = (3.584428, -1.848126)`, :math:`f(\mathbf{x}_4) = 0`\n
* - Function
- :math:`f(x_1, x_2) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 -7)^2`
.. plot:: code/benchmarks/himmelblau.py
:width: 67 %
"""
return (individual[0] * individual[0] + individual[1] - 11)**2 + \
(individual[0] + individual[1] * individual[1] - 7)**2,
def shekel(individual, a, c):
r"""The Shekel multimodal function can have any number of maxima. The number
of maxima is given by the length of any of the arguments *a* or *c*, *a*
is a matrix of size :math:`M\times N`, where *M* is the number of maxima
and *N* the number of dimensions and *c* is a :math:`M\times 1` vector.
:math:`f_\text{Shekel}(\mathbf{x}) = \sum_{i = 1}^{M} \frac{1}{c_{i} +
\sum_{j = 1}^{N} (x_{j} - a_{ij})^2 }`
The following figure uses
:math:`\mathcal{A} = \begin{bmatrix} 0.5 & 0.5 \\ 0.25 & 0.25 \\
0.25 & 0.75 \\ 0.75 & 0.25 \\ 0.75 & 0.75 \end{bmatrix}` and
:math:`\mathbf{c} = \begin{bmatrix} 0.002 \\ 0.005 \\ 0.005
\\ 0.005 \\ 0.005 \end{bmatrix}`, thus defining 5 maximums in
:math:`\mathbb{R}^2`.
.. plot:: code/benchmarks/shekel.py
:width: 67 %
"""
return sum((1. / (c[i] + sum((individual[j] - aij)**2 for j, aij in enumerate(a[i])))) for i in range(len(c))),
# Multiobjectives
def kursawe(individual):
r"""Kursawe multiobjective function.
:math:`f_{\text{Kursawe}1}(\mathbf{x}) = \sum_{i=1}^{N-1} -10 e^{-0.2 \sqrt{x_i^2 + x_{i+1}^2} }`
:math:`f_{\text{Kursawe}2}(\mathbf{x}) = \sum_{i=1}^{N} |x_i|^{0.8} + 5 \sin(x_i^3)`
.. plot:: code/benchmarks/kursawe.py
:width: 100 %
"""
f1 = sum(-10 * exp(-0.2 * sqrt(x * x + y * y)) for x, y in zip(individual[:-1], individual[1:]))
f2 = sum(abs(x)**0.8 + 5 * sin(x * x * x) for x in individual)
return f1, f2
def schaffer_mo(individual):
r"""Schaffer's multiobjective function on a one attribute *individual*.
From: J. D. Schaffer, "Multiple objective optimization with vector
evaluated genetic algorithms", in Proceedings of the First International
Conference on Genetic Algorithms, 1987.
:math:`f_{\text{Schaffer}1}(\mathbf{x}) = x_1^2`
:math:`f_{\text{Schaffer}2}(\mathbf{x}) = (x_1-2)^2`
"""
return individual[0] ** 2, (individual[0] - 2) ** 2
def zdt1(individual):
r"""ZDT1 multiobjective function.
:math:`g(\mathbf{x}) = 1 + \frac{9}{n-1}\sum_{i=2}^n x_i`
:math:`f_{\text{ZDT1}1}(\mathbf{x}) = x_1`
:math:`f_{\text{ZDT1}2}(\mathbf{x}) = g(\mathbf{x})\left[1 - \sqrt{\frac{x_1}{g(\mathbf{x})}}\right]`
"""
g = 1.0 + 9.0 * sum(individual[1:]) / (len(individual) - 1)
f1 = individual[0]
f2 = g * (1 - sqrt(f1 / g))
return f1, f2
def zdt2(individual):
r"""ZDT2 multiobjective function.
:math:`g(\mathbf{x}) = 1 + \frac{9}{n-1}\sum_{i=2}^n x_i`
:math:`f_{\text{ZDT2}1}(\mathbf{x}) = x_1`
:math:`f_{\text{ZDT2}2}(\mathbf{x}) = g(\mathbf{x})\left[1 - \left(\frac{x_1}{g(\mathbf{x})}\right)^2\right]`
"""
g = 1.0 + 9.0 * sum(individual[1:]) / (len(individual) - 1)
f1 = individual[0]
f2 = g * (1 - (f1 / g)**2)
return f1, f2
def zdt3(individual):
r"""ZDT3 multiobjective function.
:math:`g(\mathbf{x}) = 1 + \frac{9}{n-1}\sum_{i=2}^n x_i`
:math:`f_{\text{ZDT3}1}(\mathbf{x}) = x_1`
:math:`f_{\text{ZDT3}2}(\mathbf{x}) = g(\mathbf{x})\left[1 - \sqrt{\frac{x_1}{g(\mathbf{x})}} - \frac{x_1}{g(\mathbf{x})}\sin(10\pi x_1)\right]`
"""
g = 1.0 + 9.0 * sum(individual[1:]) / (len(individual) - 1)
f1 = individual[0]
f2 = g * (1 - sqrt(f1 / g) - f1 / g * sin(10 * pi * f1))
return f1, f2
def zdt4(individual):
r"""ZDT4 multiobjective function.
:math:`g(\mathbf{x}) = 1 + 10(n-1) + \sum_{i=2}^n \left[ x_i^2 - 10\cos(4\pi x_i) \right]`
:math:`f_{\text{ZDT4}1}(\mathbf{x}) = x_1`
:math:`f_{\text{ZDT4}2}(\mathbf{x}) = g(\mathbf{x})\left[ 1 - \sqrt{x_1/g(\mathbf{x})} \right]`
"""
g = 1 + 10 * (len(individual) - 1) + sum(xi**2 - 10 * cos(4 * pi * xi) for xi in individual[1:])
f1 = individual[0]
f2 = g * (1 - sqrt(f1 / g))
return f1, f2
def zdt6(individual):
r"""ZDT6 multiobjective function.
:math:`g(\mathbf{x}) = 1 + 9 \left[ \left(\sum_{i=2}^n x_i\right)/(n-1) \right]^{0.25}`
:math:`f_{\text{ZDT6}1}(\mathbf{x}) = 1 - \exp(-4x_1)\sin^6(6\pi x_1)`
:math:`f_{\text{ZDT6}2}(\mathbf{x}) = g(\mathbf{x}) \left[ 1 - (f_{\text{ZDT6}1}(\mathbf{x})/g(\mathbf{x}))^2 \right]`
"""
g = 1 + 9 * (sum(individual[1:]) / (len(individual) - 1))**0.25
f1 = 1 - exp(-4 * individual[0]) * sin(6 * pi * individual[0])**6
f2 = g * (1 - (f1 / g)**2)
return f1, f2
def dtlz1(individual, obj):
r"""DTLZ1 multiobjective function. It returns a tuple of *obj* values.
The individual must have at least *obj* elements.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825 - 830, IEEE Press, 2002.
:math:`g(\mathbf{x}_m) = 100\left(|\mathbf{x}_m| + \sum_{x_i \in \mathbf{x}_m}\left((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\right)\right)`
:math:`f_{\text{DTLZ1}1}(\mathbf{x}) = \frac{1}{2} (1 + g(\mathbf{x}_m)) \prod_{i=1}^{m-1}x_i`
:math:`f_{\text{DTLZ1}2}(\mathbf{x}) = \frac{1}{2} (1 + g(\mathbf{x}_m)) (1-x_{m-1}) \prod_{i=1}^{m-2}x_i`
:math:`\ldots`
:math:`f_{\text{DTLZ1}m-1}(\mathbf{x}) = \frac{1}{2} (1 + g(\mathbf{x}_m)) (1 - x_2) x_1`
:math:`f_{\text{DTLZ1}m}(\mathbf{x}) = \frac{1}{2} (1 - x_1)(1 + g(\mathbf{x}_m))`
Where :math:`m` is the number of objectives and :math:`\mathbf{x}_m` is a
vector of the remaining attributes :math:`[x_m~\ldots~x_n]` of the
individual in :math:`n > m` dimensions.
"""
g = 100 * (len(individual[obj - 1:]) + sum((xi - 0.5)**2 - cos(20 * pi * (xi - 0.5)) for xi in individual[obj - 1:]))
f = [0.5 * reduce(mul, individual[:obj - 1], 1) * (1 + g)]
f.extend(0.5 * reduce(mul, individual[:m], 1) * (1 - individual[m]) * (1 + g) for m in reversed(range(obj - 1)))
return f
def dtlz2(individual, obj):
r"""DTLZ2 multiobjective function. It returns a tuple of *obj* values.
The individual must have at least *obj* elements.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825 - 830, IEEE Press, 2002.
:math:`g(\mathbf{x}_m) = \sum_{x_i \in \mathbf{x}_m} (x_i - 0.5)^2`
:math:`f_{\text{DTLZ2}1}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \prod_{i=1}^{m-1} \cos(0.5x_i\pi)`
:math:`f_{\text{DTLZ2}2}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \sin(0.5x_{m-1}\pi ) \prod_{i=1}^{m-2} \cos(0.5x_i\pi)`
:math:`\ldots`
:math:`f_{\text{DTLZ2}m}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \sin(0.5x_{1}\pi )`
Where :math:`m` is the number of objectives and :math:`\mathbf{x}_m` is a
vector of the remaining attributes :math:`[x_m~\ldots~x_n]` of the
individual in :math:`n > m` dimensions.
"""
xc = individual[:obj - 1]
xm = individual[obj - 1:]
g = sum((xi - 0.5)**2 for xi in xm)
f = [(1.0 + g) * reduce(mul, (cos(0.5 * xi * pi) for xi in xc), 1.0)]
f.extend((1.0 + g) * reduce(mul, (cos(0.5 * xi * pi) for xi in xc[:m]), 1) * sin(0.5 * xc[m]*pi) for m in range(obj-2, -1, -1))
return f
def dtlz3(individual, obj):
r"""DTLZ3 multiobjective function. It returns a tuple of *obj* values.
The individual must have at least *obj* elements.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825 - 830, IEEE Press, 2002.
:math:`g(\mathbf{x}_m) = 100\left(|\mathbf{x}_m| + \sum_{x_i \in \mathbf{x}_m}\left((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\right)\right)`
:math:`f_{\text{DTLZ3}1}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \prod_{i=1}^{m-1} \cos(0.5x_i\pi)`
:math:`f_{\text{DTLZ3}2}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \sin(0.5x_{m-1}\pi ) \prod_{i=1}^{m-2} \cos(0.5x_i\pi)`
:math:`\ldots`
:math:`f_{\text{DTLZ3}m}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \sin(0.5x_{1}\pi )`
Where :math:`m` is the number of objectives and :math:`\mathbf{x}_m` is a
vector of the remaining attributes :math:`[x_m~\ldots~x_n]` of the
individual in :math:`n > m` dimensions.
"""
xc = individual[:obj - 1]
xm = individual[obj - 1:]
g = 100 * (len(xm) + sum((xi - 0.5)**2 - cos(20 * pi * (xi - 0.5)) for xi in xm))
f = [(1.0 + g) * reduce(mul, (cos(0.5 * xi * pi) for xi in xc), 1.0)]
f.extend((1.0 + g) * reduce(mul, (cos(0.5 * xi * pi) for xi in xc[:m]), 1) * sin(0.5 * xc[m] * pi) for m in range(obj - 2, -1, -1))
return f
def dtlz4(individual, obj, alpha):
r"""DTLZ4 multiobjective function. It returns a tuple of *obj* values. The
individual must have at least *obj* elements. The *alpha* parameter allows
for a meta-variable mapping in :func:`dtlz2` :math:`x_i \rightarrow
x_i^\alpha`, the authors suggest :math:`\alpha = 100`.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825 - 830, IEEE Press, 2002.
:math:`g(\mathbf{x}_m) = \sum_{x_i \in \mathbf{x}_m} (x_i - 0.5)^2`
:math:`f_{\text{DTLZ4}1}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \prod_{i=1}^{m-1} \cos(0.5x_i^\alpha\pi)`
:math:`f_{\text{DTLZ4}2}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \sin(0.5x_{m-1}^\alpha\pi ) \prod_{i=1}^{m-2} \cos(0.5x_i^\alpha\pi)`
:math:`\ldots`
:math:`f_{\text{DTLZ4}m}(\mathbf{x}) = (1 + g(\mathbf{x}_m)) \sin(0.5x_{1}^\alpha\pi )`
Where :math:`m` is the number of objectives and :math:`\mathbf{x}_m` is a
vector of the remaining attributes :math:`[x_m~\ldots~x_n]` of the
individual in :math:`n > m` dimensions.
"""
xc = individual[:obj - 1]
xm = individual[obj - 1:]
g = sum((xi - 0.5)**2 for xi in xm)
f = [(1.0 + g) * reduce(mul, (cos(0.5 * xi ** alpha * pi) for xi in xc), 1.0)]
f.extend((1.0 + g) * reduce(mul, (cos(0.5 * xi**alpha * pi) for xi in xc[:m]), 1) * sin(0.5 * xc[m]**alpha * pi) for m in range(obj - 2, -1, -1))
return f
def dtlz5(ind, n_objs):
r"""DTLZ5 multiobjective function. It returns a tuple of *obj* values. The
individual must have at least *obj* elements.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825-830, IEEE Press, 2002.
"""
g = lambda x: sum([(a - 0.5)**2 for a in x])
gval = g(ind[n_objs - 1:])
theta = lambda x: pi / (4.0 * (1 + gval)) * (1 + 2 * gval * x)
fit = [(1 + gval) * cos(pi / 2.0 * ind[0]) * reduce(lambda x, y: x * y, [cos(theta(a)) for a in ind[1:]])]
for m in reversed(range(1, n_objs)):
if m == 1:
fit.append((1 + gval) * sin(pi / 2.0 * ind[0]))
else:
fit.append((1 + gval) * cos(pi / 2.0 * ind[0]) *
reduce(lambda x, y: x * y, [cos(theta(a)) for a in ind[1:m - 1]], 1) * sin(theta(ind[m - 1])))
return fit
def dtlz6(ind, n_objs):
r"""DTLZ6 multiobjective function. It returns a tuple of *obj* values. The
individual must have at least *obj* elements.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825-830, IEEE Press, 2002.
"""
gval = sum([a**0.1 for a in ind[n_objs - 1:]])
theta = lambda x: pi / (4.0 * (1 + gval)) * (1 + 2 * gval * x)
fit = [(1 + gval) * cos(pi / 2.0 * ind[0]) *
reduce(lambda x, y: x * y, [cos(theta(a)) for a in ind[1:]])]
for m in reversed(range(1, n_objs)):
if m == 1:
fit.append((1 + gval) * sin(pi / 2.0 * ind[0]))
else:
fit.append((1 + gval) * cos(pi / 2.0 * ind[0]) *
reduce(lambda x, y: x * y, [cos(theta(a)) for a in ind[1: m - 1]], 1) * sin(theta(ind[m - 1])))
return fit
def dtlz7(ind, n_objs):
r"""DTLZ7 multiobjective function. It returns a tuple of *obj* values. The
individual must have at least *obj* elements.
From: K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective
Optimization Test Problems. CEC 2002, p. 825-830, IEEE Press, 2002.
"""
gval = 1 + 9.0 / len(ind[n_objs-1:]) * sum([a for a in ind[n_objs-1:]])
fit = [x for x in ind[:n_objs-1]]
fit.append((1 + gval) * (n_objs - sum([a / (1.0 + gval) * (1 + sin(3 * pi * a)) for a in ind[:n_objs-1]])))
return fit
def fonseca(individual):
r"""Fonseca and Fleming's multiobjective function.
From: C. M. Fonseca and P. J. Fleming, "Multiobjective optimization and
multiple constraint handling with evolutionary algorithms -- Part II:
Application example", IEEE Transactions on Systems, Man and Cybernetics,
1998.
:math:`f_{\text{Fonseca}1}(\mathbf{x}) = 1 - e^{-\sum_{i=1}^{3}(x_i - \frac{1}{\sqrt{3}})^2}`
:math:`f_{\text{Fonseca}2}(\mathbf{x}) = 1 - e^{-\sum_{i=1}^{3}(x_i + \frac{1}{\sqrt{3}})^2}`
"""
f_1 = 1 - exp(-sum((xi - 1/sqrt(3))**2 for xi in individual[:3]))
f_2 = 1 - exp(-sum((xi + 1/sqrt(3))**2 for xi in individual[:3]))
return f_1, f_2
def poloni(individual):
r"""Poloni's multiobjective function on a two attribute *individual*. From:
C. Poloni, "Hybrid GA for multi objective aerodynamic shape optimization",
in Genetic Algorithms in Engineering and Computer Science, 1997.
:math:`A_1 = 0.5 \sin (1) - 2 \cos (1) + \sin (2) - 1.5 \cos (2)`
:math:`A_2 = 1.5 \sin (1) - \cos (1) + 2 \sin (2) - 0.5 \cos (2)`
:math:`B_1 = 0.5 \sin (x_1) - 2 \cos (x_1) + \sin (x_2) - 1.5 \cos (x_2)`
:math:`B_2 = 1.5 \sin (x_1) - cos(x_1) + 2 \sin (x_2) - 0.5 \cos (x_2)`
:math:`f_{\text{Poloni}1}(\mathbf{x}) = 1 + (A_1 - B_1)^2 + (A_2 - B_2)^2`
:math:`f_{\text{Poloni}2}(\mathbf{x}) = (x_1 + 3)^2 + (x_2 + 1)^2`
"""
x_1 = individual[0]
x_2 = individual[1]
A_1 = 0.5 * sin(1) - 2 * cos(1) + sin(2) - 1.5 * cos(2)
A_2 = 1.5 * sin(1) - cos(1) + 2 * sin(2) - 0.5 * cos(2)
B_1 = 0.5 * sin(x_1) - 2 * cos(x_1) + sin(x_2) - 1.5 * cos(x_2)
B_2 = 1.5 * sin(x_1) - cos(x_1) + 2 * sin(x_2) - 0.5 * cos(x_2)
return 1 + (A_1 - B_1)**2 + (A_2 - B_2)**2, (x_1 + 3)**2 + (x_2 + 1)**2
def dent(individual, lambda_=0.85):
r"""Test problem Dent. Two-objective problem with a "dent". *individual* has
two attributes that take values in [-1.5, 1.5].
From: Schuetze, O., Laumanns, M., Tantar, E., Coello Coello, C.A., & Talbi, E.-G. (2010).
Computing gap free Pareto front approximations with stochastic search algorithms.
Evolutionary Computation, 18(1), 65--96. doi:10.1162/evco.2010.18.1.18103
Note that in that paper Dent source is stated as:
K. Witting and M. Hessel von Molo. Private communication, 2006.
"""
d = lambda_ * exp(-(individual[0] - individual[1]) ** 2)
f1 = 0.5 * (sqrt(1 + (individual[0] + individual[1]) ** 2) +
sqrt(1 + (individual[0] - individual[1]) ** 2) +
individual[0] - individual[1]) + d
f2 = 0.5 * (sqrt(1 + (individual[0] + individual[1]) ** 2) +
sqrt(1 + (individual[0] - individual[1]) ** 2) -
individual[0] + individual[1]) + d
return f1, f2
================================
deap-1.4.1/deap/benchmarks/binary.py
================================
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
from functools import wraps
def bin2float(min_, max_, nbits):
"""Convert a binary array into an array of float where each
float is composed of *nbits* and is between *min_* and *max_*
and return the result of the decorated function.
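
    A sketch of the intended use, assuming 3 bits per decoded variable::

        @bin2float(min_=-5.0, max_=5.0, nbits=3)
        def evaluate(decoded):
            return sum(x ** 2 for x in decoded),

        # A 6-bit individual decodes to two floats in [-5, 5].
        fitness = evaluate([1, 0, 1, 0, 1, 1])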
"""
def wrap(function):
@wraps(function)
def wrapped_function(individual, *args, **kargs):
# User must take care to make nelem an integer.
nelem = len(individual) // nbits
decoded = [0] * nelem
for i in range(nelem):
gene = int("".join(map(str,
individual[i*nbits:i*nbits+nbits])),
2)
div = 2**nbits - 1
temp = gene/div
decoded[i] = min_ + (temp * (max_ - min_))
return function(decoded, *args, **kargs)
return wrapped_function
return wrap
def trap(individual):
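    # Deceptive trap: the maximum k is reached only when all k bits are set;
    # otherwise the score k - 1 - u rewards individuals with fewer ones.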
u = sum(individual)
k = len(individual)
if u == k:
return k
else:
return k - 1 - u
def inv_trap(individual):
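    # Inverse trap: the maximum k is reached only when all bits are zero;
    # otherwise the score u - 1 rewards individuals with more ones.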
u = sum(individual)
k = len(individual)
if u == 0:
return k
else:
return u - 1
def chuang_f1(individual):
"""Binary deceptive function from : Multivariate Multi-Model Approach for
Globally Multimodal Problems by Chung-Yao Chuang and Wen-Lian Hsu.
The function takes individual of 40+1 dimensions and has two global optima
in [1,1,...,1] and [0,0,...,0].
"""
total = 0
if individual[-1] == 0:
for i in range(0, len(individual)-1, 4):
total += inv_trap(individual[i:i+4])
else:
for i in range(0, len(individual)-1, 4):
total += trap(individual[i:i+4])
return total,
def chuang_f2(individual):
"""Binary deceptive function from : Multivariate Multi-Model Approach for
Globally Multimodal Problems by Chung-Yao Chuang and Wen-Lian Hsu.
The function takes individual of 40+1 dimensions and has four global optima
in [1,1,...,0,0], [0,0,...,1,1], [1,1,...,1] and [0,0,...,0].
"""
total = 0
if individual[-2] == 0 and individual[-1] == 0:
for i in range(0, len(individual)-2, 8):
total += inv_trap(individual[i:i+4]) + inv_trap(individual[i+4:i+8])
elif individual[-2] == 0 and individual[-1] == 1:
for i in range(0, len(individual)-2, 8):
total += inv_trap(individual[i:i+4]) + trap(individual[i+4:i+8])
elif individual[-2] == 1 and individual[-1] == 0:
for i in range(0, len(individual)-2, 8):
total += trap(individual[i:i+4]) + inv_trap(individual[i+4:i+8])
else:
for i in range(0, len(individual)-2, 8):
total += trap(individual[i:i+4]) + trap(individual[i+4:i+8])
return total,
def chuang_f3(individual):
"""Binary deceptive function from : Multivariate Multi-Model Approach for
Globally Multimodal Problems by Chung-Yao Chuang and Wen-Lian Hsu.
The function takes individual of 40+1 dimensions and has two global optima
in [1,1,...,1] and [0,0,...,0].
"""
total = 0
if individual[-1] == 0:
for i in range(0, len(individual)-1, 4):
total += inv_trap(individual[i:i+4])
else:
for i in range(2, len(individual)-3, 4):
total += inv_trap(individual[i:i+4])
total += trap(individual[-2:]+individual[:2])
return total,
# Royal Road Functions
def royal_road1(individual, order):
"""Royal Road Function R1 as presented by Melanie Mitchell in :
"An introduction to Genetic Algorithms".
"""
nelem = len(individual) // order
max_value = int(2**order - 1)
total = 0
for i in range(nelem):
value = int("".join(map(str, individual[i*order:i*order+order])), 2)
total += int(order) * int(value/max_value)
return total,
def royal_road2(individual, order):
"""Royal Road Function R2 as presented by Melanie Mitchell in :
"An introduction to Genetic Algorithms".
"""
total = 0
norder = order
while norder < order**2:
total += royal_road1(individual, norder)[0]
norder *= 2
return total,
================================
deap-1.4.1/deap/benchmarks/gp.py
================================
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
from math import exp, sin, cos
def kotanchek(data):
r"""Kotanchek benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [-1, 7]^2`
* - Function
         - :math:`f(\mathbf{x}) = \frac{e^{-(x_1 - 1)^2}}{3.2 + (x_2 - 2.5)^2}`
"""
return exp(-(data[0] - 1)**2) / (3.2 + (data[1] - 2.5)**2)
def salustowicz_1d(data):
r"""Salustowicz benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`x \in [0, 10]`
* - Function
- :math:`f(x) = e^{-x} x^3 \cos(x) \sin(x) (\cos(x) \sin^2(x) - 1)`
"""
return exp(-data[0]) * data[0]**3 * cos(data[0]) * sin(data[0]) * (cos(data[0]) * sin(data[0])**2 - 1)
def salustowicz_2d(data):
r"""Salustowicz benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [0, 7]^2`
* - Function
- :math:`f(\mathbf{x}) = e^{-x_1} x_1^3 \cos(x_1) \sin(x_1) (\cos(x_1) \sin^2(x_1) - 1) (x_2 -5)`
"""
return exp(-data[0]) * data[0]**3 * cos(data[0]) * sin(data[0]) * (cos(data[0]) * sin(data[0])**2 - 1) * (data[1] - 5)
def unwrapped_ball(data):
r"""Unwrapped ball benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [-2, 8]^n`
* - Function
         - :math:`f(\mathbf{x}) = \frac{10}{5 + \sum_{i=1}^n (x_i - 3)^2}`
"""
return 10. / (5. + sum((d - 3)**2 for d in data))
def rational_polynomial(data):
r"""Rational polynomial ball benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [0, 2]^3`
* - Function
         - :math:`f(\mathbf{x}) = \frac{30 (x_1 - 1) (x_3 - 1)}{x_2^2 (x_1 - 10)}`
"""
return 30. * (data[0] - 1) * (data[2] - 1) / (data[1]**2 * (data[0] - 10))
def sin_cos(data):
r"""Sine cosine benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [0, 6]^2`
* - Function
- :math:`f(\mathbf{x}) = 6\sin(x_1)\cos(x_2)`
"""
    return 6 * sin(data[0]) * cos(data[1])
def ripple(data):
r"""Ripple benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [-5, 5]^2`
* - Function
- :math:`f(\mathbf{x}) = (x_1 - 3) (x_2 - 3) + 2 \sin((x_1 - 4) (x_2 -4))`
"""
return (data[0] - 3) * (data[1] - 3) + 2 * sin((data[0] - 4) * (data[1] - 4))
def rational_polynomial2(data):
r"""Rational polynomial benchmark function.
.. list-table::
:widths: 10 50
:stub-columns: 1
* - Range
- :math:`\mathbf{x} \in [0, 6]^2`
* - Function
         - :math:`f(\mathbf{x}) = \frac{(x_1 - 3)^4 + (x_2 - 3)^3 - (x_2 - 3)}{(x_2 - 2)^4 + 10}`
"""
return ((data[0] - 3)**4 + (data[1] - 3)**3 - (data[1] - 3)) / ((data[1] - 2)**4 + 10)
================================
deap-1.4.1/deap/benchmarks/movingpeaks.py
================================
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""
Re-implementation of the Moving Peaks Benchmark by Jurgen Branke, with the
addition of the fluctuating number of peaks presented in *du Plessis and
Engelbrecht, 2013, Self-Adaptive Environment with Fluctuating Number of
Optima.*
"""
import math
import itertools
import random
try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence
def cone(individual, position, height, width):
r"""The cone peak function to be used with scenario 2 and 3.
:math:`f(\mathbf{x}) = h - w \sqrt{\sum_{i=1}^N (x_i - p_i)^2}`
"""
value = 0.0
for x, p in zip(individual, position):
value += (x - p)**2
return height - width * math.sqrt(value)
def sphere(individual, position, height, width):
value = 0.0
for x, p in zip(individual, position):
value += (x - p)**2
return height * value
def function1(individual, position, height, width):
r"""The function1 peak function to be used with scenario 1.
    :math:`f(\mathbf{x}) = \frac{h}{1 + w \sqrt{\sum_{i=1}^N (x_i - p_i)^2}}`
"""
value = 0.0
for x, p in zip(individual, position):
value += (x - p)**2
return height / (1 + width * value)
class MovingPeaks:
"""The Moving Peaks Benchmark is a fitness function changing over time. It
consists of a number of peaks, changing in height, width and location. The
peaks function is given by *pfunc*, which is either a function object or a
list of function objects (the default is :func:`function1`). The number of
    peaks is determined by *npeaks* (which defaults to 5). This parameter can
    be either an integer or a sequence. If it is set to an integer, the number
    of peaks won't change, while if set to a sequence of 3 elements, the
    number of peaks will fluctuate between the first and third elements of
    that sequence, the second element being the initial number of peaks. When
    fluctuating the number of peaks, the parameter *number_severity* must be
    included; it represents the fraction of peaks that is allowed to
change. The dimensionality of the search domain is *dim*. A basis function
*bfunc* can also be given to act as static landscape (the default is no
basis function). The argument *random* serves to grant an independent
random number generator to the moving peaks so that the evolution is not
    influenced by numbers drawn from this object (the default uses random
    functions from the Python module :mod:`random`). Various other keyword
    parameters listed in the table below are required to set up the benchmark;
    the default parameters are based on scenario 1 of this benchmark.
=================== ============================= =================== =================== ======================================================================================================================
Parameter :data:`SCENARIO_1` (Default) :data:`SCENARIO_2` :data:`SCENARIO_3` Details
=================== ============================= =================== =================== ======================================================================================================================
``pfunc`` :func:`function1` :func:`cone` :func:`cone` The peak function or a list of peak function.
``npeaks`` 5 10 50 Number of peaks. If an integer, the number of peaks won't change, if a sequence it will fluctuate [min, current, max].
``bfunc`` :obj:`None` :obj:`None` ``lambda x: 10`` Basis static function.
``min_coord`` 0.0 0.0 0.0 Minimum coordinate for the centre of the peaks.
``max_coord`` 100.0 100.0 100.0 Maximum coordinate for the centre of the peaks.
``min_height`` 30.0 30.0 30.0 Minimum height of the peaks.
``max_height`` 70.0 70.0 70.0 Maximum height of the peaks.
``uniform_height`` 50.0 50.0 0 Starting height for all peaks, if ``uniform_height <= 0`` the initial height is set randomly for each peak.
``min_width`` 0.0001 1.0 1.0 Minimum width of the peaks.
``max_width`` 0.2 12.0 12.0 Maximum width of the peaks
``uniform_width`` 0.1 0 0 Starting width for all peaks, if ``uniform_width <= 0`` the initial width is set randomly for each peak.
``lambda_`` 0.0 0.5 0.5 Correlation between changes.
``move_severity`` 1.0 1.5 1.0 The distance a single peak moves when peaks change.
``height_severity`` 7.0 7.0 1.0 The standard deviation of the change made to the height of a peak when peaks change.
``width_severity`` 0.01 1.0 0.5 The standard deviation of the change made to the width of a peak when peaks change.
``period`` 5000 5000 1000 Period between two changes.
=================== ============================= =================== =================== ======================================================================================================================
Dictionaries :data:`SCENARIO_1`, :data:`SCENARIO_2` and
:data:`SCENARIO_3` of this module define the defaults for these
parameters. The scenario 3 requires a constant basis function
which can be given as a lambda function ``lambda x: constant``.
    The following shows an example of scenario 1 with non-uniform heights and
widths.
.. plot:: code/benchmarks/movingsc1.py
:width: 67 %
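
    A minimal usage sketch (the parameter values here are illustrative)::

        mpb = MovingPeaks(dim=2, npeaks=[1, 5, 10], number_severity=0.1)
        fitness, = mpb([50.0, 50.0])   # evaluate a candidate position
        mpb.changePeaks()              # manually trigger a landscape change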
"""
def __init__(self, dim, random=random, **kargs):
# Scenario 1 is the default
sc = SCENARIO_1.copy()
sc.update(kargs)
pfunc = sc.get("pfunc")
npeaks = sc.get("npeaks")
self.dim = dim
self.minpeaks, self.maxpeaks = None, None
if hasattr(npeaks, "__getitem__"):
self.minpeaks, npeaks, self.maxpeaks = npeaks
self.number_severity = sc.get("number_severity")
        # The random generator must be assigned before it is used below.
        self.random = random
        try:
if len(pfunc) == npeaks:
self.peaks_function = pfunc
else:
self.peaks_function = self.random.sample(pfunc, npeaks)
self.pfunc_pool = tuple(pfunc)
except TypeError:
self.peaks_function = list(itertools.repeat(pfunc, npeaks))
self.pfunc_pool = (pfunc,)
self.basis_function = sc.get("bfunc")
self.min_coord = sc.get("min_coord")
self.max_coord = sc.get("max_coord")
self.min_height = sc.get("min_height")
self.max_height = sc.get("max_height")
uniform_height = sc.get("uniform_height")
self.min_width = sc.get("min_width")
self.max_width = sc.get("max_width")
uniform_width = sc.get("uniform_width")
self.lambda_ = sc.get("lambda_")
self.move_severity = sc.get("move_severity")
self.height_severity = sc.get("height_severity")
self.width_severity = sc.get("width_severity")
self.peaks_position = [[self.random.uniform(self.min_coord, self.max_coord) for _ in range(dim)] for _ in range(npeaks)]
if uniform_height != 0:
self.peaks_height = [uniform_height for _ in range(npeaks)]
else:
self.peaks_height = [self.random.uniform(self.min_height, self.max_height) for _ in range(npeaks)]
if uniform_width != 0:
self.peaks_width = [uniform_width for _ in range(npeaks)]
else:
self.peaks_width = [self.random.uniform(self.min_width, self.max_width) for _ in range(npeaks)]
self.last_change_vector = [[self.random.random() - 0.5 for _ in range(dim)] for _ in range(npeaks)]
self.period = sc.get("period")
# Used by the Offline Error calculation
self._optimum = None
self._error = None
self._offline_error = 0
# Also used for auto change
self.nevals = 0
def globalMaximum(self):
"""Returns the global maximum value and position."""
# The global maximum is at one peak's position
potential_max = list()
for func, pos, height, width in zip(self.peaks_function,
self.peaks_position,
self.peaks_height,
self.peaks_width):
potential_max.append((func(pos, pos, height, width), pos))
return max(potential_max)
def maximums(self):
"""Returns all visible maximums value and position sorted with the
global maximum first.
"""
# The maximums are at the peaks position but might be swallowed by
# other peaks
maximums = list()
for func, pos, height, width in zip(self.peaks_function,
self.peaks_position,
self.peaks_height,
self.peaks_width):
val = func(pos, pos, height, width)
if val >= self.__call__(pos, count=False)[0]:
maximums.append((val, pos))
return sorted(maximums, reverse=True)
def __call__(self, individual, count=True):
"""Evaluate a given *individual* with the current benchmark
configuration.
        :param individual: The individual to evaluate.
:param count: Whether or not to count this evaluation in
the total evaluation count. (Defaults to
:data:`True`)
"""
possible_values = []
for func, pos, height, width in zip(self.peaks_function,
self.peaks_position,
self.peaks_height,
self.peaks_width):
possible_values.append(func(individual, pos, height, width))
if self.basis_function:
possible_values.append(self.basis_function(individual))
fitness = max(possible_values)
if count:
# Compute the offline error
self.nevals += 1
if self._optimum is None:
self._optimum = self.globalMaximum()[0]
self._error = abs(fitness - self._optimum)
self._error = min(self._error, abs(fitness - self._optimum))
self._offline_error += self._error
# We exhausted the number of evaluation, change peaks for the next one.
if self.period > 0 and self.nevals % self.period == 0:
self.changePeaks()
return fitness,
def offlineError(self):
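        """Return the offline error: the sum of the recorded current errors
        divided by the total number of counted evaluations."""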
return self._offline_error / self.nevals
def currentError(self):
return self._error
def changePeaks(self):
"""Order the peaks to change position, height, width and number."""
# Change the number of peaks
if self.minpeaks is not None and self.maxpeaks is not None:
npeaks = len(self.peaks_function)
u = self.random.random()
r = self.maxpeaks - self.minpeaks
if u < 0.5:
# Remove n peaks or less depending on the minimum number of peaks
u = self.random.random()
n = min(npeaks - self.minpeaks, int(round(r * u * self.number_severity)))
for i in range(n):
idx = self.random.randrange(len(self.peaks_function))
self.peaks_function.pop(idx)
self.peaks_position.pop(idx)
self.peaks_height.pop(idx)
self.peaks_width.pop(idx)
self.last_change_vector.pop(idx)
else:
# Add n peaks or less depending on the maximum number of peaks
u = self.random.random()
n = min(self.maxpeaks - npeaks, int(round(r * u * self.number_severity)))
for i in range(n):
self.peaks_function.append(self.random.choice(self.pfunc_pool))
self.peaks_position.append([self.random.uniform(self.min_coord, self.max_coord) for _ in range(self.dim)])
self.peaks_height.append(self.random.uniform(self.min_height, self.max_height))
self.peaks_width.append(self.random.uniform(self.min_width, self.max_width))
self.last_change_vector.append([self.random.random() - 0.5 for _ in range(self.dim)])
for i in range(len(self.peaks_function)):
# Change peak position
shift = [self.random.random() - 0.5 for _ in range(len(self.peaks_position[i]))]
shift_length = sum(s**2 for s in shift)
shift_length = self.move_severity / math.sqrt(shift_length) if shift_length > 0 else 0
shift = [shift_length * (1.0 - self.lambda_) * s
+ self.lambda_ * c for s, c in zip(shift, self.last_change_vector[i])]
shift_length = sum(s**2 for s in shift)
shift_length = self.move_severity / math.sqrt(shift_length) if shift_length > 0 else 0
shift = [s*shift_length for s in shift]
new_position = []
final_shift = []
for pp, s in zip(self.peaks_position[i], shift):
new_coord = pp + s
if new_coord < self.min_coord:
new_position.append(2.0 * self.min_coord - pp - s)
final_shift.append(-1.0 * s)
elif new_coord > self.max_coord:
new_position.append(2.0 * self.max_coord - pp - s)
final_shift.append(-1.0 * s)
else:
new_position.append(new_coord)
final_shift.append(s)
self.peaks_position[i] = new_position
self.last_change_vector[i] = final_shift
# Change peak height
change = self.random.gauss(0, 1) * self.height_severity
new_value = change + self.peaks_height[i]
if new_value < self.min_height:
self.peaks_height[i] = 2.0 * self.min_height - self.peaks_height[i] - change
elif new_value > self.max_height:
self.peaks_height[i] = 2.0 * self.max_height - self.peaks_height[i] - change
else:
self.peaks_height[i] = new_value
# Change peak width
change = self.random.gauss(0, 1) * self.width_severity
new_value = change + self.peaks_width[i]
if new_value < self.min_width:
self.peaks_width[i] = 2.0 * self.min_width - self.peaks_width[i] - change
elif new_value > self.max_width:
self.peaks_width[i] = 2.0 * self.max_width - self.peaks_width[i] - change
else:
self.peaks_width[i] = new_value
self._optimum = None
SCENARIO_1 = {"pfunc": function1,
"npeaks": 5,
"bfunc": None,
"min_coord": 0.0,
"max_coord": 100.0,
"min_height": 30.0,
"max_height": 70.0,
"uniform_height": 50.0,
"min_width": 0.0001,
"max_width": 0.2,
"uniform_width": 0.1,
"lambda_": 0.0,
"move_severity": 1.0,
"height_severity": 7.0,
"width_severity": 0.01,
"period": 5000}
SCENARIO_2 = {"pfunc": cone,
"npeaks": 10,
"bfunc": None,
"min_coord": 0.0,
"max_coord": 100.0,
"min_height": 30.0,
"max_height": 70.0,
"uniform_height": 50.0,
"min_width": 1.0,
"max_width": 12.0,
"uniform_width": 0,
"lambda_": 0.5,
"move_severity": 1.0,
"height_severity": 7.0,
"width_severity": 1.0,
"period": 5000}
SCENARIO_3 = {"pfunc": cone,
"npeaks": 50,
"bfunc": lambda x: 10,
"min_coord": 0.0,
"max_coord": 100.0,
"min_height": 30.0,
"max_height": 70.0,
"uniform_height": 0,
"min_width": 1.0,
"max_width": 12.0,
"uniform_width": 0,
"lambda_": 0.5,
"move_severity": 1.0,
"height_severity": 1.0,
"width_severity": 0.5,
"period": 1000}
def diversity(population):
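    """Return a diversity measure of *population*: the square root of the
    summed squared distances of every individual to the population centroid.
    """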
nind = len(population)
ndim = len(population[0])
d = [0.0] * ndim
for x in population:
d = [di + xi for di, xi in zip(d, x)]
d = [di / nind for di in d]
return math.sqrt(sum((di - xi)**2 for x in population for di, xi in zip(d, x)))
if __name__ == "__main__":
mpb = MovingPeaks(dim=2, npeaks=[1, 1, 10], number_severity=0.1)
print(mpb.maximums())
mpb.changePeaks()
print(mpb.maximums())
================================
deap-1.4.1/deap/benchmarks/tools.py
================================
"""Module containing tools that are useful when benchmarking algorithms
"""
from math import hypot, sqrt
from functools import wraps
from itertools import repeat
try:
import numpy
numpy_imported = True
except ImportError:
numpy_imported = False
try:
import scipy.spatial
scipy_imported = True
except ImportError:
scipy_imported = False
try:
# try importing the C version
from ..tools._hypervolume import hv
except ImportError:
# fallback on python version
from ..tools._hypervolume import pyhv as hv
class translate(object):
"""Decorator for evaluation functions, it translates the objective
function by *vector* which should be the same length as the individual
size. When called the decorated function should take as first argument the
individual to be evaluated. The inverse translation vector is actually
applied to the individual and the resulting list is given to the
evaluation function. Thus, the evaluation function shall not be expecting
an individual as it will receive a plain list.
This decorator adds a :func:`translate` method to the decorated function.
"""
def __init__(self, vector):
self.vector = vector
def __call__(self, func):
# wraps is used to combine stacked decorators that would add functions
@wraps(func)
def wrapper(individual, *args, **kargs):
# A subtraction is applied since the translation is applied to the
# individual and not the function
return func([v - t for v, t in zip(individual, self.vector)],
*args, **kargs)
wrapper.translate = self.translate
return wrapper
def translate(self, vector):
"""Set the current translation to *vector*. After decorating the
evaluation function, this function will be available directly from
the function object. ::
@translate([0.25, 0.5, ..., 0.1])
def evaluate(individual):
return sum(individual),
# This will cancel the translation
evaluate.translate([0.0, 0.0, ..., 0.0])
"""
self.vector = vector
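# A short, self-contained sketch of the translate decorator in action; the
# sphere function below is illustrative and not part of this module.
def _example_translate():
    @translate([0.5, 0.5, 0.5])
    def sphere(individual):
        return sum(x ** 2 for x in individual),
    # The optimum has moved to the translation vector.
    assert sphere([0.5, 0.5, 0.5]) == (0.0,)
    sphere.translate([0.0, 0.0, 0.0])  # cancel the translation
    assert sphere([0.0, 0.0, 0.0]) == (0.0,)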
class rotate(object):
"""Decorator for evaluation functions, it rotates the objective function
by *matrix* which should be a valid orthogonal NxN rotation matrix, with N
the length of an individual. When called the decorated function should
take as first argument the individual to be evaluated. The inverse
rotation matrix is actually applied to the individual and the resulting
list is given to the evaluation function. Thus, the evaluation function
shall not be expecting an individual as it will receive a plain list
(numpy.array). The multiplication is done using numpy.
This decorator adds a :func:`rotate` method to the decorated function.
.. note::
A random orthogonal matrix Q can be created via QR decomposition. ::
A = numpy.random.random((n,n))
Q, _ = numpy.linalg.qr(A)
"""
def __init__(self, matrix):
if not numpy_imported:
raise RuntimeError("Numpy is required for using the rotation "
"decorator")
# The inverse is taken since the rotation is applied to the individual
# and not the function which is the inverse
self.matrix = numpy.linalg.inv(matrix)
def __call__(self, func):
# wraps is used to combine stacked decorators that would add functions
@wraps(func)
def wrapper(individual, *args, **kargs):
return func(numpy.dot(self.matrix, individual), *args, **kargs)
wrapper.rotate = self.rotate
return wrapper
def rotate(self, matrix):
"""Set the current rotation to *matrix*. After decorating the
evaluation function, this function will be available directly from
the function object. ::
# Create a random orthogonal matrix
A = numpy.random.random((n,n))
Q, _ = numpy.linalg.qr(A)
@rotate(Q)
def evaluate(individual):
return sum(individual),
# This will reset rotation to identity
evaluate.rotate(numpy.identity(n))
"""
self.matrix = numpy.linalg.inv(matrix)
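# A short sketch, assuming numpy is available: build a random orthogonal
# matrix by QR decomposition and decorate an objective with it. The sphere
# function is rotation invariant, so its value is unchanged up to floating
# point error, which makes a convenient sanity check.
def _example_rotate(n=3):
    A = numpy.random.random((n, n))
    Q, _ = numpy.linalg.qr(A)
    @rotate(Q)
    def sphere(individual):
        return sum(x ** 2 for x in individual),
    return sphere([1.0] * n)  # approximately (n,)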
class noise(object):
"""Decorator for evaluation functions, it evaluates the objective function
and adds noise by calling the function(s) provided in the *noise*
argument. The noise functions are called without any argument, consider
using the :class:`~deap.base.Toolbox` or Python's
:func:`functools.partial` to provide any required argument. If a single
function is provided it is applied to all objectives of the evaluation
function. If a list of noise functions is provided, it must be of length
    equal to the number of objectives. The noise argument also accepts
:obj:`None`, which will leave the objective without noise.
This decorator adds a :func:`noise` method to the decorated
function.
"""
def __init__(self, noise):
try:
self.rand_funcs = tuple(noise)
except TypeError:
self.rand_funcs = repeat(noise)
def __call__(self, func):
# wraps is used to combine stacked decorators that would add functions
@wraps(func)
def wrapper(individual, *args, **kargs):
result = func(individual, *args, **kargs)
noisy = list()
for r, f in zip(result, self.rand_funcs):
if f is None:
noisy.append(r)
else:
noisy.append(r + f())
return tuple(noisy)
wrapper.noise = self.noise
return wrapper
def noise(self, noise):
"""Set the current noise to *noise*. After decorating the
evaluation function, this function will be available directly from
the function object. ::
prand = functools.partial(random.gauss, mu=0.0, sigma=1.0)
@noise(prand)
def evaluate(individual):
return sum(individual),
# This will remove noise from the evaluation function
evaluate.noise(None)
"""
try:
self.rand_funcs = tuple(noise)
except TypeError:
self.rand_funcs = repeat(noise)
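# A short sketch: add zero-mean Gaussian noise to a single-objective
# function. functools.partial is used instead of a lambda so the noise
# function carries its parameters with it, as suggested in the docstring.
def _example_noise():
    import functools
    import random
    prand = functools.partial(random.gauss, 0.0, 1.0)
    @noise(prand)
    def sphere(individual):
        return sum(x ** 2 for x in individual),
    return sphere([1.0, 2.0])  # (5.0 plus a Gaussian sample,)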
class scale(object):
"""Decorator for evaluation functions, it scales the objective function by
*factor* which should be the same length as the individual size. When
called the decorated function should take as first argument the individual
to be evaluated. The inverse factor vector is actually applied to the
individual and the resulting list is given to the evaluation function.
Thus, the evaluation function shall not be expecting an individual as it
will receive a plain list.
This decorator adds a :func:`scale` method to the decorated function.
"""
def __init__(self, factor):
# Factor is inverted since it is applied to the individual and not the
# objective function
self.factor = tuple(1.0/f for f in factor)
def __call__(self, func):
# wraps is used to combine stacked decorators that would add functions
@wraps(func)
def wrapper(individual, *args, **kargs):
return func([v * f for v, f in zip(individual, self.factor)],
*args, **kargs)
wrapper.scale = self.scale
return wrapper
def scale(self, factor):
"""Set the current scale to *factor*. After decorating the
evaluation function, this function will be available directly from
the function object. ::
@scale([0.25, 2.0, ..., 0.1])
def evaluate(individual):
return sum(individual),
# This will cancel the scaling
evaluate.scale([1.0, 1.0, ..., 1.0])
"""
# Factor is inverted since it is applied to the individual and not the
# objective function
self.factor = tuple(1.0/f for f in factor)
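# Same pattern as translate above, this time scaling the domain; the sphere
# function is again illustrative only.
def _example_scale():
    @scale([2.0, 2.0])
    def sphere(individual):
        return sum(x ** 2 for x in individual),
    # Coordinates are divided by the factors before evaluation.
    assert sphere([2.0, 2.0]) == (2.0,)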
class bound(object):
"""Decorator for crossover and mutation functions, it changes the
individuals after the modification is done to bring it back in the allowed
*bounds*. The *bounds* are functions taking individual and returning
whether of not the variable is allowed. You can provide one or multiple such
functions. In the former case, the function is used on all dimensions and
in the latter case, the number of functions must be greater or equal to
the number of dimension of the individuals.
The *type* determines how the attributes are brought back into the valid
range
This decorator adds a :func:`bound` method to the decorated function.
"""
    def _clip(self, individual):
        # Placeholder in this release: returns the individual unchanged.
        return individual
    def _wrap(self, individual):
        # Placeholder in this release: returns the individual unchanged.
        return individual
    def _mirror(self, individual):
        # Placeholder in this release: returns the individual unchanged.
        return individual
def __call__(self, func):
@wraps(func)
def wrapper(*args, **kargs):
individuals = func(*args, **kargs)
return self.bound(individuals)
wrapper.bound = self.bound
return wrapper
def __init__(self, bounds, type):
try:
self.bounds = tuple(bounds)
except TypeError:
self.bounds = repeat(bounds)
if type == "mirror":
self.bound = self._mirror
elif type == "wrap":
self.bound = self._wrap
elif type == "clip":
self.bound = self._clip
def diversity(first_front, first, last):
"""Given a Pareto front `first_front` and the two extreme points of the
optimal Pareto front, this function returns a metric of the diversity
of the front as explained in the original NSGA-II article by K. Deb.
The smaller the value is, the better the front is.
"""
df = hypot(first_front[0].fitness.values[0] - first[0],
first_front[0].fitness.values[1] - first[1])
dl = hypot(first_front[-1].fitness.values[0] - last[0],
first_front[-1].fitness.values[1] - last[1])
dt = [hypot(first.fitness.values[0] - second.fitness.values[0],
first.fitness.values[1] - second.fitness.values[1])
for first, second in zip(first_front[:-1], first_front[1:])]
if len(first_front) == 1:
return df + dl
dm = sum(dt)/len(dt)
di = sum(abs(d_i - dm) for d_i in dt)
delta = (df + dl + di)/(df + dl + len(dt) * dm)
return delta
def convergence(first_front, optimal_front):
"""Given a Pareto front `first_front` and the optimal Pareto front,
this function returns a metric of convergence
of the front as explained in the original NSGA-II article by K. Deb.
The smaller the value is, the closer the front is to the optimal one.
"""
distances = []
for ind in first_front:
distances.append(float("inf"))
for opt_ind in optimal_front:
dist = 0.
for i in range(len(opt_ind)):
dist += (ind.fitness.values[i] - opt_ind[i])**2
if dist < distances[-1]:
distances[-1] = dist
distances[-1] = sqrt(distances[-1])
return sum(distances) / len(distances)
def hypervolume(front, ref=None):
"""Return the hypervolume of a *front*. If the *ref* point is not
given, the worst value for each objective +1 is used.
    :param front: The population (usually a list of non-dominated individuals)
on which to compute the hypervolume.
:param ref: A point of the same dimensionality as the individuals in *front*.
"""
# Must use wvalues * -1 since hypervolume use implicit minimization
wobj = numpy.array([ind.fitness.wvalues for ind in front]) * -1
if ref is None:
ref = numpy.max(wobj, axis=0) + 1
return hv.hypervolume(wobj, ref)
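# A minimal sketch of calling hypervolume outside a full DEAP setup; the
# _Fit and _Ind stand-ins below are hypothetical, real code would use
# creator-built individuals whose fitness provides `wvalues`.
def _example_hypervolume():
    class _Fit(object):
        def __init__(self, values):
            # Two minimized objectives, hence weights of -1.
            self.wvalues = (-values[0], -values[1])
    class _Ind(list):
        pass
    front = []
    for values in ((1.0, 3.0), (2.0, 2.0), (3.0, 1.0)):
        ind = _Ind(values)
        ind.fitness = _Fit(values)
        front.append(ind)
    return hypervolume(front, ref=numpy.array([4.0, 4.0]))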
def igd(A, Z):
"""Inverse generational distance.
"""
if not scipy_imported:
raise ImportError("idg requires scipy module")
distances = scipy.spatial.distance.cdist(A, Z)
return numpy.average(numpy.min(distances, axis=0))
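# A small sketch of igd (requires scipy): both the approximation set A and
# the reference set Z are plain 2-D numpy arrays of objective vectors.
def _example_igd():
    A = numpy.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
    Z = numpy.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
    return igd(A, Z)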
deap-1.4.1/deap/cma.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
# Special thanks to Nikolaus Hansen for providing major part of
# this code. The CMA-ES algorithm is provided in many other languages
# and advanced versions at http://www.lri.fr/~hansen/cmaesintro.html.
"""A module that provides support for the Covariance Matrix Adaptation
Evolution Strategy.
"""
import copy
from math import sqrt, log, exp
from itertools import cycle
import warnings
import numpy
from . import tools
class Strategy(object):
"""
A strategy that will keep track of the basic parameters of the CMA-ES
algorithm ([Hansen2001]_).
:param centroid: An iterable object that indicates where to start the
evolution.
:param sigma: The initial standard deviation of the distribution.
    :param parameter: One or more parameters to pass to the strategy as
described in the following table, optional.
+----------------+---------------------------+----------------------------+
| Parameter | Default | Details |
+================+===========================+============================+
| ``lambda_`` | ``int(4 + 3 * log(N))`` | Number of children to |
| | | produce at each generation,|
| | | ``N`` is the individual's |
| | | size (integer). |
+----------------+---------------------------+----------------------------+
| ``mu`` | ``int(lambda_ / 2)`` | The number of parents to |
| | | keep from the |
| | | lambda children (integer). |
+----------------+---------------------------+----------------------------+
| ``cmatrix`` | ``identity(N)`` | The initial covariance |
| | | matrix of the distribution |
| | | that will be sampled. |
+----------------+---------------------------+----------------------------+
| ``weights`` | ``"superlinear"`` | Decrease speed, can be |
| | | ``"superlinear"``, |
| | | ``"linear"`` or |
| | | ``"equal"``. |
+----------------+---------------------------+----------------------------+
| ``cs`` | ``(mueff + 2) / | Cumulation constant for |
| | (N + mueff + 3)`` | step-size. |
+----------------+---------------------------+----------------------------+
| ``damps`` | ``1 + 2 * max(0, sqrt(( | Damping for step-size. |
| | mueff - 1) / (N + 1)) - 1)| |
| | + cs`` | |
+----------------+---------------------------+----------------------------+
| ``ccum`` | ``4 / (N + 4)`` | Cumulation constant for |
| | | covariance matrix. |
+----------------+---------------------------+----------------------------+
| ``ccov1`` | ``2 / ((N + 1.3)^2 + | Learning rate for rank-one |
| | mueff)`` | update. |
+----------------+---------------------------+----------------------------+
| ``ccovmu`` | ``2 * (mueff - 2 + 1 / | Learning rate for rank-mu |
| | mueff) / ((N + 2)^2 + | update. |
| | mueff)`` | |
+----------------+---------------------------+----------------------------+
.. [Hansen2001] Hansen and Ostermeier, 2001. Completely Derandomized
Self-Adaptation in Evolution Strategies. *Evolutionary Computation*
"""
def __init__(self, centroid, sigma, **kargs):
self.params = kargs
# Create a centroid as a numpy array
self.centroid = numpy.array(centroid)
self.dim = len(self.centroid)
self.sigma = sigma
self.pc = numpy.zeros(self.dim)
self.ps = numpy.zeros(self.dim)
self.chiN = sqrt(self.dim) * (1 - 1. / (4. * self.dim) +
1. / (21. * self.dim ** 2))
self.C = self.params.get("cmatrix", numpy.identity(self.dim))
self.diagD, self.B = numpy.linalg.eigh(self.C)
indx = numpy.argsort(self.diagD)
self.diagD = self.diagD[indx] ** 0.5
self.B = self.B[:, indx]
self.BD = self.B * self.diagD
self.cond = self.diagD[indx[-1]] / self.diagD[indx[0]]
self.lambda_ = self.params.get("lambda_", int(4 + 3 * log(self.dim)))
self.update_count = 0
self.computeParams(self.params)
def generate(self, ind_init):
r"""Generate a population of :math:`\lambda` individuals of type
*ind_init* from the current strategy.
:param ind_init: A function object that is able to initialize an
individual from a list.
:returns: A list of individuals.
"""
arz = numpy.random.standard_normal((self.lambda_, self.dim))
arz = self.centroid + self.sigma * numpy.dot(arz, self.BD.T)
return [ind_init(a) for a in arz]
def update(self, population):
"""Update the current covariance matrix strategy from the
*population*.
:param population: A list of individuals from which to update the
parameters.
"""
population.sort(key=lambda ind: ind.fitness, reverse=True)
old_centroid = self.centroid
self.centroid = numpy.dot(self.weights, population[0:self.mu])
c_diff = self.centroid - old_centroid
# Cumulation : update evolution path
self.ps = (1 - self.cs) * self.ps \
+ sqrt(self.cs * (2 - self.cs) * self.mueff) / self.sigma \
* numpy.dot(self.B, (1. / self.diagD) *
numpy.dot(self.B.T, c_diff))
hsig = float((numpy.linalg.norm(self.ps) /
sqrt(1. - (1. - self.cs) ** (2. * (self.update_count + 1.))) / self.chiN <
(1.4 + 2. / (self.dim + 1.))))
self.update_count += 1
self.pc = (1 - self.cc) * self.pc + hsig \
* sqrt(self.cc * (2 - self.cc) * self.mueff) / self.sigma \
* c_diff
# Update covariance matrix
artmp = population[0:self.mu] - old_centroid
self.C = (1 - self.ccov1 - self.ccovmu + (1 - hsig) *
self.ccov1 * self.cc * (2 - self.cc)) * self.C \
+ self.ccov1 * numpy.outer(self.pc, self.pc) \
+ self.ccovmu * numpy.dot((self.weights * artmp.T), artmp) \
/ self.sigma ** 2
self.sigma *= numpy.exp((numpy.linalg.norm(self.ps) / self.chiN - 1.) *
self.cs / self.damps)
self.diagD, self.B = numpy.linalg.eigh(self.C)
indx = numpy.argsort(self.diagD)
self.cond = self.diagD[indx[-1]] / self.diagD[indx[0]]
self.diagD = self.diagD[indx] ** 0.5
self.B = self.B[:, indx]
self.BD = self.B * self.diagD
def computeParams(self, params):
r"""Computes the parameters depending on :math:`\lambda`. It needs to
be called again if :math:`\lambda` changes during evolution.
:param params: A dictionary of the manually set parameters.
"""
self.mu = params.get("mu", int(self.lambda_ / 2))
rweights = params.get("weights", "superlinear")
if rweights == "superlinear":
self.weights = log(self.mu + 0.5) - \
numpy.log(numpy.arange(1, self.mu + 1))
elif rweights == "linear":
self.weights = self.mu + 0.5 - numpy.arange(1, self.mu + 1)
elif rweights == "equal":
self.weights = numpy.ones(self.mu)
else:
raise RuntimeError("Unknown weights : %s" % rweights)
self.weights /= sum(self.weights)
self.mueff = 1. / sum(self.weights ** 2)
self.cc = params.get("ccum", 4. / (self.dim + 4.))
self.cs = params.get("cs", (self.mueff + 2.) /
(self.dim + self.mueff + 3.))
self.ccov1 = params.get("ccov1", 2. / ((self.dim + 1.3) ** 2 +
self.mueff))
self.ccovmu = params.get("ccovmu", 2. * (self.mueff - 2. +
1. / self.mueff) /
((self.dim + 2.) ** 2 + self.mueff))
self.ccovmu = min(1 - self.ccov1, self.ccovmu)
self.damps = 1. + 2. * max(0, sqrt((self.mueff - 1.) /
(self.dim + 1.)) - 1.) + self.cs
self.damps = params.get("damps", self.damps)
class StrategyOnePlusLambda(object):
r"""
A CMA-ES strategy that uses the :math:`1 + \lambda` paradigm ([Igel2007]_).
:param parent: An iterable object that indicates where to start the
evolution. The parent requires a fitness attribute.
:param sigma: The initial standard deviation of the distribution.
:param lambda_: Number of offspring to produce from the parent.
(optional, defaults to 1)
    :param parameter: One or more parameters to pass to the strategy as
described in the following table. (optional)
Other parameters can be provided as described in the next table
+----------------+---------------------------+----------------------------+
| Parameter | Default | Details |
+================+===========================+============================+
| ``d`` | ``1.0 + N / (2.0 * | Damping for step-size. |
| | lambda_)`` | |
+----------------+---------------------------+----------------------------+
| ``ptarg`` | ``1.0 / (5 + sqrt(lambda_)| Target success rate. |
| | / 2.0)`` | |
+----------------+---------------------------+----------------------------+
| ``cp`` | ``ptarg * lambda_ / (2.0 +| Step size learning rate. |
| | ptarg * lambda_)`` | |
+----------------+---------------------------+----------------------------+
| ``cc`` | ``2.0 / (N + 2.0)`` | Cumulation time horizon. |
+----------------+---------------------------+----------------------------+
| ``ccov`` | ``2.0 / (N**2 + 6.0)`` | Covariance matrix learning |
| | | rate. |
+----------------+---------------------------+----------------------------+
| ``pthresh`` | ``0.44`` | Threshold success rate. |
+----------------+---------------------------+----------------------------+
.. [Igel2007] Igel, Hansen, Roth, 2007. Covariance matrix adaptation for
multi-objective optimization. *Evolutionary Computation* Spring;15(1):1-28
"""
def __init__(self, parent, sigma, **kargs):
self.parent = parent
self.sigma = sigma
self.dim = len(self.parent)
self.C = numpy.identity(self.dim)
self.A = numpy.identity(self.dim)
self.pc = numpy.zeros(self.dim)
self.computeParams(kargs)
self.psucc = self.ptarg
def computeParams(self, params):
r"""Computes the parameters depending on :math:`\lambda`. It needs to
be called again if :math:`\lambda` changes during evolution.
:param params: A dictionary of the manually set parameters.
"""
# Selection :
self.lambda_ = params.get("lambda_", 1)
# Step size control :
self.d = params.get("d", 1.0 + self.dim / (2.0 * self.lambda_))
self.ptarg = params.get("ptarg", 1.0 / (5 + sqrt(self.lambda_) / 2.0))
self.cp = params.get("cp", self.ptarg * self.lambda_ / (2 + self.ptarg * self.lambda_))
# Covariance matrix adaptation
self.cc = params.get("cc", 2.0 / (self.dim + 2.0))
self.ccov = params.get("ccov", 2.0 / (self.dim ** 2 + 6.0))
self.pthresh = params.get("pthresh", 0.44)
def generate(self, ind_init):
r"""Generate a population of :math:`\lambda` individuals of type
*ind_init* from the current strategy.
:param ind_init: A function object that is able to initialize an
individual from a list.
:returns: A list of individuals.
"""
# self.y = numpy.dot(self.A, numpy.random.standard_normal(self.dim))
arz = numpy.random.standard_normal((self.lambda_, self.dim))
arz = self.parent + self.sigma * numpy.dot(arz, self.A.T)
return [ind_init(a) for a in arz]
def update(self, population):
"""Update the current covariance matrix strategy from the
*population*.
:param population: A list of individuals from which to update the
parameters.
"""
population.sort(key=lambda ind: ind.fitness, reverse=True)
lambda_succ = sum(self.parent.fitness <= ind.fitness for ind in population)
p_succ = float(lambda_succ) / self.lambda_
self.psucc = (1 - self.cp) * self.psucc + self.cp * p_succ
if self.parent.fitness <= population[0].fitness:
x_step = (population[0] - numpy.array(self.parent)) / self.sigma
self.parent = copy.deepcopy(population[0])
if self.psucc < self.pthresh:
self.pc = (1 - self.cc) * self.pc + sqrt(self.cc * (2 - self.cc)) * x_step
self.C = (1 - self.ccov) * self.C + self.ccov * numpy.outer(self.pc, self.pc)
else:
self.pc = (1 - self.cc) * self.pc
self.C = (1 - self.ccov) * self.C + self.ccov * (numpy.outer(self.pc, self.pc) + self.cc * (2 - self.cc) * self.C)
self.sigma = self.sigma * exp(1.0 / self.d * (self.psucc - self.ptarg) / (1.0 - self.ptarg))
        # We use Cholesky since, for now, we have no use for the eigen decomposition.
        # Basically, Cholesky returns a matrix A such that C = A*A.T, while the
        # eigen decomposition returns matrices B and D^2 such that
        # C = B*D^2*B.T = B*D*D*B.T, so A == B*D.
        # To compute a new individual we multiply each vector z by A,
        # as y = centroid + sigma * A*z.
        # Cholesky is therefore more straightforward: we do not need to compute
        # the square root of D^2 and multiply B by D to obtain A, we get A directly.
        # This can't be done (without cost) in the standard CMA-ES, where the eigen
        # decomposition is also used to compute the inverse of the covariance matrix
        # in the step-size evolution path computation.
self.A = numpy.linalg.cholesky(self.C)
class StrategyMultiObjective(object):
"""Multiobjective CMA-ES strategy based on the paper [Voss2010]_. It
is used similarly as the standard CMA-ES strategy with a generate-update
scheme.
    :param population: An initial population of individuals.
:param sigma: The initial step size of the complete system.
:param mu: The number of parents to use in the evolution. When not
provided it defaults to the length of *population*. (optional)
:param lambda_: The number of offspring to produce at each generation.
(optional, defaults to 1)
    :param indicator: The indicator function to use. (optional, defaults to
:func:`~deap.tools.hypervolume`)
Other parameters can be provided as described in the next table
+----------------+---------------------------+----------------------------+
| Parameter | Default | Details |
+================+===========================+============================+
| ``d`` | ``1.0 + N / 2.0`` | Damping for step-size. |
+----------------+---------------------------+----------------------------+
| ``ptarg`` | ``1.0 / (5 + 1.0 / 2.0)`` | Target success rate. |
+----------------+---------------------------+----------------------------+
| ``cp`` | ``ptarg / (2.0 + ptarg)`` | Step size learning rate. |
+----------------+---------------------------+----------------------------+
| ``cc`` | ``2.0 / (N + 2.0)`` | Cumulation time horizon. |
+----------------+---------------------------+----------------------------+
| ``ccov`` | ``2.0 / (N**2 + 6.0)`` | Covariance matrix learning |
| | | rate. |
+----------------+---------------------------+----------------------------+
| ``pthresh`` | ``0.44`` | Threshold success rate. |
+----------------+---------------------------+----------------------------+
.. [Voss2010] Voss, Hansen, Igel, "Improved Step Size Adaptation
for the MO-CMA-ES", 2010.
"""
def __init__(self, population, sigma, **params):
self.parents = population
self.dim = len(self.parents[0])
# Selection
self.mu = params.get("mu", len(self.parents))
self.lambda_ = params.get("lambda_", 1)
# Step size control
self.d = params.get("d", 1.0 + self.dim / 2.0)
self.ptarg = params.get("ptarg", 1.0 / (5.0 + 0.5))
self.cp = params.get("cp", self.ptarg / (2.0 + self.ptarg))
# Covariance matrix adaptation
self.cc = params.get("cc", 2.0 / (self.dim + 2.0))
self.ccov = params.get("ccov", 2.0 / (self.dim ** 2 + 6.0))
self.pthresh = params.get("pthresh", 0.44)
# Internal parameters associated to the mu parent
self.sigmas = [sigma] * len(population)
# Lower Cholesky matrix (Sampling matrix)
self.A = [numpy.identity(self.dim) for _ in range(len(population))]
# Inverse Cholesky matrix (Used in the update of A)
self.invCholesky = [numpy.identity(self.dim) for _ in range(len(population))]
self.pc = [numpy.zeros(self.dim) for _ in range(len(population))]
self.psucc = [self.ptarg] * len(population)
self.indicator = params.get("indicator", tools.hypervolume)
def generate(self, ind_init):
r"""Generate a population of :math:`\lambda` individuals of type
*ind_init* from the current strategy.
:param ind_init: A function object that is able to initialize an
individual from a list.
:returns: A list of individuals with a private attribute :attr:`_ps`.
            This last attribute is essential to the update function; it
            indicates that the individual is an offspring and gives the
            index of its parent.
"""
arz = numpy.random.randn(self.lambda_, self.dim)
individuals = list()
# Make sure every parent has a parent tag and index
for i, p in enumerate(self.parents):
p._ps = "p", i
# Each parent produce an offspring
if self.lambda_ == self.mu:
for i in range(self.lambda_):
# print "Z", list(arz[i])
individuals.append(ind_init(self.parents[i] + self.sigmas[i] * numpy.dot(self.A[i], arz[i])))
individuals[-1]._ps = "o", i
# Parents producing an offspring are chosen at random from the first front
else:
ndom = tools.sortLogNondominated(self.parents, len(self.parents), first_front_only=True)
for i in range(self.lambda_):
j = numpy.random.randint(0, len(ndom))
_, p_idx = ndom[j]._ps
individuals.append(ind_init(self.parents[p_idx] + self.sigmas[p_idx] * numpy.dot(self.A[p_idx], arz[i])))
individuals[-1]._ps = "o", p_idx
return individuals
def _select(self, candidates):
if len(candidates) <= self.mu:
return candidates, []
pareto_fronts = tools.sortLogNondominated(candidates, len(candidates))
chosen = list()
mid_front = None
not_chosen = list()
# Fill the next population (chosen) with the fronts until there is not enough space
# When an entire front does not fit in the space left we rely on the hypervolume
# for this front
# The remaining fronts are explicitly not chosen
full = False
for front in pareto_fronts:
if len(chosen) + len(front) <= self.mu and not full:
chosen += front
elif mid_front is None and len(chosen) < self.mu:
mid_front = front
# With this front, we selected enough individuals
full = True
else:
not_chosen += front
# Separate the mid front to accept only k individuals
k = self.mu - len(chosen)
if k > 0:
# reference point is chosen in the complete population
# as the worst in each dimension +1
ref = numpy.array([ind.fitness.wvalues for ind in candidates]) * -1
ref = numpy.max(ref, axis=0) + 1
for _ in range(len(mid_front) - k):
idx = self.indicator(mid_front, ref=ref)
not_chosen.append(mid_front.pop(idx))
chosen += mid_front
return chosen, not_chosen
def _rankOneUpdate(self, invCholesky, A, alpha, beta, v):
w = numpy.dot(invCholesky, v)
# Under this threshold, the update is mostly noise
if w.max() > 1e-20:
w_inv = numpy.dot(w, invCholesky)
norm_w2 = numpy.sum(w ** 2)
a = sqrt(alpha)
root = numpy.sqrt(1 + beta / alpha * norm_w2)
b = a / norm_w2 * (root - 1)
A = a * A + b * numpy.outer(v, w)
invCholesky = 1.0 / a * invCholesky - b / (a ** 2 + a * b * norm_w2) * numpy.outer(w, w_inv)
return invCholesky, A
def update(self, population):
"""Update the current covariance matrix strategies from the
*population*.
:param population: A list of individuals from which to update the
parameters.
"""
chosen, not_chosen = self._select(population + self.parents)
cp, cc, ccov = self.cp, self.cc, self.ccov
d, ptarg, pthresh = self.d, self.ptarg, self.pthresh
# Make copies for chosen offspring only
last_steps = [self.sigmas[ind._ps[1]] if ind._ps[0] == "o" else None for ind in chosen]
sigmas = [self.sigmas[ind._ps[1]] if ind._ps[0] == "o" else None for ind in chosen]
invCholesky = [self.invCholesky[ind._ps[1]].copy() if ind._ps[0] == "o" else None for ind in chosen]
A = [self.A[ind._ps[1]].copy() if ind._ps[0] == "o" else None for ind in chosen]
pc = [self.pc[ind._ps[1]].copy() if ind._ps[0] == "o" else None for ind in chosen]
psucc = [self.psucc[ind._ps[1]] if ind._ps[0] == "o" else None for ind in chosen]
# Update the internal parameters for successful offspring
for i, ind in enumerate(chosen):
t, p_idx = ind._ps
# Only the offspring update the parameter set
if t == "o":
# Update (Success = 1 since it is chosen)
psucc[i] = (1.0 - cp) * psucc[i] + cp
sigmas[i] = sigmas[i] * exp((psucc[i] - ptarg) / (d * (1.0 - ptarg)))
if psucc[i] < pthresh:
xp = numpy.array(ind)
x = numpy.array(self.parents[p_idx])
pc[i] = (1.0 - cc) * pc[i] + sqrt(cc * (2.0 - cc)) * (xp - x) / last_steps[i]
invCholesky[i], A[i] = self._rankOneUpdate(invCholesky[i], A[i], 1 - ccov, ccov, pc[i])
else:
pc[i] = (1.0 - cc) * pc[i]
pc_weight = cc * (2.0 - cc)
invCholesky[i], A[i] = self._rankOneUpdate(invCholesky[i], A[i], 1 - ccov + pc_weight, ccov, pc[i])
self.psucc[p_idx] = (1.0 - cp) * self.psucc[p_idx] + cp
self.sigmas[p_idx] = self.sigmas[p_idx] * exp((self.psucc[p_idx] - ptarg) / (d * (1.0 - ptarg)))
# It is unnecessary to update the entire parameter set for not chosen individuals
# Their parameters will not make it to the next generation
for ind in not_chosen:
t, p_idx = ind._ps
# Only the offspring update the parameter set
if t == "o":
self.psucc[p_idx] = (1.0 - cp) * self.psucc[p_idx]
self.sigmas[p_idx] = self.sigmas[p_idx] * exp((self.psucc[p_idx] - ptarg) / (d * (1.0 - ptarg)))
# Make a copy of the internal parameters
# The parameter is in the temporary variable for offspring and in the original one for parents
self.parents = chosen
self.sigmas = [sigmas[i] if ind._ps[0] == "o" else self.sigmas[ind._ps[1]] for i, ind in enumerate(chosen)]
self.invCholesky = [invCholesky[i] if ind._ps[0] == "o" else self.invCholesky[ind._ps[1]] for i, ind in enumerate(chosen)]
self.A = [A[i] if ind._ps[0] == "o" else self.A[ind._ps[1]] for i, ind in enumerate(chosen)]
self.pc = [pc[i] if ind._ps[0] == "o" else self.pc[ind._ps[1]] for i, ind in enumerate(chosen)]
self.psucc = [psucc[i] if ind._ps[0] == "o" else self.psucc[ind._ps[1]] for i, ind in enumerate(chosen)]
class StrategyActiveOnePlusLambda(object):
"""A CMA-ES strategy that combines the :math:`(1 + \\lambda)` paradigm
[Igel2007]_, the mixed integer modification [Hansen2011]_, active
covariance update [Arnold2010]_ and constraint handling [Arnold2012]_.
This version of CMA-ES requires the random vector and the mutation
that created each individual. The vector and mutation are stored in each
individual as :attr:`_z` and :attr:`_y` respectively. Updating with
individuals not containing these attributes will result in an
:class:`AttributeError`.
Notes:
When using this strategy (especially when using constraints) you should
monitor the strategy :attr:`condition_number`. If it goes above a given
    threshold (say :math:`10^{12}`), you should consider restarting the
    optimization, as the covariance matrix is becoming degenerate. See the
    constrained active CMA-ES example for a simple restart scheme.
:param parent: An iterable object that indicates where to start the
evolution. The parent requires a fitness attribute.
:param sigma: The initial standard deviation of the distribution.
    :param steps: The minimal step size for each dimension. Use 0 for
continuous dimensions.
:param lambda_: Number of offspring to produce from the parent.
(optional, defaults to 1)
    :param **kwargs: One or more parameters to pass to the strategy as
described in the following table. (optional)
+----------------+---------------------------+------------------------------+
| Parameter | Default | Details |
+================+===========================+==============================+
| ``d`` | ``1.0 + N / (2.0 * | Damping for step-size. |
| | lambda_)`` | |
+----------------+---------------------------+------------------------------+
    | ``ptarg``      | ``1.0 / (5 + sqrt(lambda_)| Target success rate          |
| | / 2.0)`` | (from 1 + lambda algorithm). |
+----------------+---------------------------+------------------------------+
| ``cp`` | ``ptarg * lambda_ / (2.0 +| Step size learning rate. |
| | ptarg * lambda_)`` | |
+----------------+---------------------------+------------------------------+
| ``cc`` | ``2.0 / (N + 2.0)`` | Cumulation time horizon. |
+----------------+---------------------------+------------------------------+
| ``ccov`` | ``2.0 / (N**2 + 6.0)`` | Covariance matrix learning |
| | | rate. |
+----------------+---------------------------+------------------------------+
| ``ccovn`` | ``0.4 / (N**1.6 + 1.0)`` | Covariance matrix negative |
| | | learning rate. |
+----------------+---------------------------+------------------------------+
| ``cconst`` | ``1.0 / (N + 2.0)`` | Constraint vectors learning |
| | | rate. |
+----------------+---------------------------+------------------------------+
| ``beta`` | ``0.1 / (lambda_ * (N + | Covariance matrix learning |
| | 2.0))`` | rate for constraints. |
| | | |
+----------------+---------------------------+------------------------------+
| ``pthresh`` | ``0.44`` | Threshold success rate. |
+----------------+---------------------------+------------------------------+
.. [Igel2007] Igel, Hansen and Roth. Covariance matrix adaptation for
multi-objective optimization. 2007
.. [Arnold2010] Arnold and Hansen. Active covariance matrix adaptation for
the (1+1)-CMA-ES. 2010.
.. [Hansen2011] Hansen. A CMA-ES for Mixed-Integer Nonlinear Optimization.
    [Research Report] RR-7751, INRIA, 2011.
.. [Arnold2012] Arnold and Hansen. A (1+1)-CMA-ES for Constrained Optimisation.
2012
"""
def __init__(self, parent, sigma, steps, **kargs):
self.parent = parent
self.sigma = sigma
self.dim = len(self.parent)
self.A = numpy.identity(self.dim)
self.invA = numpy.identity(self.dim)
self.condition_number = numpy.linalg.cond(self.A)
self.pc = numpy.zeros(self.dim)
# Save parameters
self.params = kargs.copy()
# Covariance matrix adaptation
self.cc = self.params.get("cc", 2.0 / (self.dim + 2.0))
self.ccovp = self.params.get("ccovp", 2.0 / (self.dim ** 2 + 6.0))
self.ccovn = self.params.get("ccovn", 0.4 / (self.dim ** 1.6 + 1.0))
self.cconst = self.params.get("cconst", 1.0 / (self.dim + 2.0))
self.pthresh = self.params.get("pthresh", 0.44)
self.lambda_ = self.params.get("lambda_", 1)
self.psucc = self.ptarg
self.S_int = numpy.array(steps)
self.i_I_R = numpy.flatnonzero(2 * self.sigma * numpy.diag(self.A)**0.5
< self.S_int)
self.constraint_vecs = None
self.ancestors_fitness = list()
@property
def lambda_(self):
return self._lambda
@lambda_.setter
def lambda_(self, value):
self._lambda = value
self._compute_lambda_parameters()
def _compute_lambda_parameters(self):
"""Computes the parameters depending on :math:`\lambda`. It needs to
be called again if :math:`\lambda` changes during evolution.
"""
# Step size control :
self.d = self.params.get("d", 1.0 + self.dim / (2.0 * self.lambda_))
self.ptarg = self.params.get("ptarg", 1.0 / (5 + numpy.sqrt(self.lambda_)
/ 2.0))
self.cp = self.params.get("cp", (self.ptarg * self.lambda_
/ (2 + self.ptarg * self.lambda_)))
self.beta = self.params.get("beta", 0.1 / (self.lambda_ * (self.dim + 2.0)))
def generate(self, ind_init):
"""Generate a population of :math:`\lambda` individuals of type
*ind_init* from the current strategy.
:param ind_init: A function object that is able to initialize an
individual from a list.
:returns: A list of individuals.
"""
# Generate individuals
z = numpy.random.standard_normal((self.lambda_, self.dim))
y = numpy.dot(self.A, z.T).T
x = self.parent + self.sigma * y + self.S_int * self._integer_mutation()
if any(self.S_int > 0):
# Bring values to the integer steps
round_values = numpy.tile(self.S_int > 0, (self.lambda_, 1))
steps = numpy.tile(self.S_int, (self.lambda_, 1))
x[round_values] = steps[round_values] * numpy.around(x[round_values]
/ steps[round_values])
# The update method requires to remember the y of each individual
population = list(map(ind_init, x))
for ind, yi, zi in zip(population, y, z):
ind._y = yi
ind._z = zi
return population
def _integer_mutation(self):
n_I_R = self.i_I_R.shape[0]
        # Mixed integer CMA-ES is developed for (mu/mu, lambda)
# We have a (1 + lambda) setting, thus we make the integer mutation
# probabilistic. The integer mutation is lambda / 2 if all dimensions
# are integers or min(lambda / 2 - 1, lambda / 10 + n_I_R + 1). The minus
# 1 accounts for the last new candidate getting its integer mutation from
# the last best solution. We skip this last best solution part.
if n_I_R == 0:
return numpy.zeros((self.lambda_, self.dim))
elif n_I_R == self.dim:
p = self.lambda_ / 2.0 / self.lambda_
# lambda_int = int(numpy.floor(self.lambda_ / 2))
else:
p = (min(self.lambda_ / 2.0, self.lambda_ / 10.0 + n_I_R / self.dim)
/ self.lambda_)
# lambda_int = int(min(numpy.floor(self.lambda_ / 10) + n_I_R + 1,
# numpy.floor(self.lambda_ / 2) - 1))
Rp = numpy.zeros((self.lambda_, self.dim))
Rpp = numpy.zeros((self.lambda_, self.dim))
# Ri' has exactly one of its components set to one.
# The Ri' are dependent in that the number of mutations for each coordinate
# differs at most by one
for i, j in zip(range(self.lambda_), cycle(self.i_I_R)):
# Probabilistically choose lambda_int individuals
if numpy.random.rand() < p:
Rp[i, j] = 1
Rpp[i, j] = numpy.random.geometric(p=0.7**(1.0/n_I_R)) - 1
I_pm1 = (-1)**numpy.random.randint(0, 2, (self.lambda_, self.dim))
R_int = I_pm1 * (Rp + Rpp)
# Usually in mu/mu, lambda the last individual is set to the step taken.
        # We don't use this scheme in the 1 + lambda setting
# if self.update_count > 0:
# R_int[-1, :] = (numpy.floor(-self.S_int - self.last_best)
# - numpy.floor(-self.S_int - self.centroid))
return R_int
def _rank1update(self, individual, p_succ):
update_cov = False
self.psucc = (1 - self.cp) * self.psucc + self.cp * p_succ
if not hasattr(self.parent, "fitness") \
or self.parent.fitness <= individual.fitness:
self.parent = copy.deepcopy(individual)
self.ancestors_fitness.append(copy.deepcopy(individual.fitness))
if len(self.ancestors_fitness) > 5:
self.ancestors_fitness.pop()
            # Must guard against pc being all zeros to prevent w_norm_sqrd from being 0
if self.psucc < self.pthresh or numpy.allclose(self.pc, 0):
self.pc = (1 - self.cc) * self.pc + (numpy.sqrt(self.cc * (2 - self.cc))
* individual._y)
a = numpy.sqrt(1 - self.ccovp)
w = numpy.dot(self.invA, self.pc)
w_norm_sqrd = numpy.linalg.norm(w) ** 2
b = numpy.sqrt(1 - self.ccovp) / w_norm_sqrd \
* (numpy.sqrt(1 + self.ccovp / (1 - self.ccovp) * w_norm_sqrd)
- 1)
else:
self.pc = (1 - self.cc) * self.pc
d = self.ccovp * (1 + self.cc * (2 - self.cc))
a = numpy.sqrt(1 - d)
w = numpy.dot(self.invA, self.pc)
w_norm_sqrd = numpy.linalg.norm(w) ** 2
b = numpy.sqrt(1 - d) \
* (numpy.sqrt(1 + self.ccovp * w_norm_sqrd / (1 - d)) - 1) \
/ w_norm_sqrd
update_cov = True
elif len(self.ancestors_fitness) >= 5 \
and individual.fitness < self.ancestors_fitness[0] \
and self.psucc < self.pthresh:
# Active covariance update requires w = z and not w = inv(A)s
w = individual._z
w_norm_sqrd = numpy.linalg.norm(w) ** 2
if 1 < self.ccovn * (2 * w_norm_sqrd - 1):
ccovn = 1 / (2 * w_norm_sqrd - 1)
else:
ccovn = self.ccovn
a = numpy.sqrt(1 + ccovn)
b = numpy.sqrt(1 + ccovn) / w_norm_sqrd \
* (numpy.sqrt(1 - ccovn / (1 + ccovn) * w_norm_sqrd) - 1)
update_cov = True
if update_cov:
self.A = self.A * a + b * numpy.outer(numpy.dot(self.A, w), w)
self.invA = (1 / a * self.invA
- b / (a ** 2 + a * b * w_norm_sqrd)
* numpy.dot(self.invA, numpy.outer(w, w)))
# TODO: Add integer mutation i_I_R component
self.sigma = self.sigma * numpy.exp(1.0 / self.d
* ((self.psucc - self.ptarg)
/ (1.0 - self.ptarg)))
def _infeasible_update(self, individual):
if not hasattr(individual.fitness, "constraint_violation"):
return
if self.constraint_vecs is None:
shape = len(individual.fitness.constraint_violation), self.dim
self.constraint_vecs = numpy.zeros(shape)
for i in range(self.constraint_vecs.shape[0]):
if individual.fitness.constraint_violation[i]:
self.constraint_vecs[i] = (1 - self.cconst) * self.constraint_vecs[i] \
+ self.cconst * individual._y
W = numpy.dot(self.invA, self.constraint_vecs.T).T # M x N
constraint_violation = numpy.sum(individual.fitness.constraint_violation)
A_prime = (
self.A - self.beta / constraint_violation
* numpy.sum(
list(
numpy.outer(self.constraint_vecs[i], W[i])
/ numpy.dot(W[i], W[i])
for i in range(self.constraint_vecs.shape[0])
if individual.fitness.constraint_violation[i]
),
axis=0
)
)
try:
self.invA = numpy.linalg.inv(A_prime)
except numpy.linalg.LinAlgError:
warnings.warn("Singular matrix inversion, "
"invalid update in CMA-ES ignored", RuntimeWarning)
else:
self.A = A_prime
def update(self, population):
"""Update the current covariance matrix strategy from the *population*.
:param population: A list of individuals from which to update the
parameters.
"""
valid_population = [ind for ind in population if ind.fitness.valid]
invalid_population = [ind for ind in population if not ind.fitness.valid]
if len(valid_population) > 0:
# Rank 1 update
valid_population.sort(key=lambda ind: ind.fitness, reverse=True)
if not hasattr(self.parent, "fitness"):
lambda_succ = len(valid_population)
else:
lambda_succ = sum(self.parent.fitness <= ind.fitness
for ind in valid_population)
# Use len(valid) to not account for individuals violating constraints
self._rank1update(valid_population[0],
float(lambda_succ) / len(valid_population))
        if len(invalid_population) > 0:
# Learn constraint from all invalid individuals
for ind in invalid_population:
self._infeasible_update(ind)
        # Used to monitor the covariance matrix conditioning
self.condition_number = numpy.linalg.cond(self.A)
C = numpy.dot(self.A, self.A.T)
self.i_I_R = numpy.flatnonzero(2 * self.sigma * numpy.diag(C)**0.5
                                       < self.S_int)
deap-1.4.1/deap/creator.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""The :mod:`~deap.creator` is a meta-factory allowing to create classes that
will fulfill the needs of your evolutionary algorithms. In effect, new
classes can be built from any imaginable type, from :class:`list` to
:class:`set`, :class:`dict`, :class:`~deap.gp.PrimitiveTree` and more,
providing the possibility to implement genetic algorithms, genetic
programming, evolution strategies, particle swarm optimizers, and many more.
"""
import array
import copy
import copyreg
import warnings
class_replacers = {}
"""Some classes in Python's standard library as well as third party library
may be in part incompatible with the logic used in DEAP. To palliate
this problem, the method :func:`create` uses the dictionary
`class_replacers` to identify if the base type provided is problematic, and if
so the new class inherits from the replacement class instead of the
original base class.
`class_replacers` keys are classes to be replaced and the values are the
replacing classes.
"""
try:
import numpy
_ = (numpy.ndarray, numpy.array)
except ImportError:
# Numpy is not present, skip the definition of the replacement class.
pass
except AttributeError:
# Numpy is present, but there is either no ndarray or array in numpy,
# also skip the definition of the replacement class.
pass
else:
class _numpy_array(numpy.ndarray):
def __deepcopy__(self, memo):
"""Overrides the deepcopy from numpy.ndarray that does not copy
the object's attributes. This one will deepcopy the array and its
:attr:`__dict__` attribute.
"""
copy_ = numpy.ndarray.copy(self)
copy_.__dict__.update(copy.deepcopy(self.__dict__, memo))
return copy_
@staticmethod
def __new__(cls, iterable):
"""Creates a new instance of a numpy.ndarray from a function call.
Adds the possibility to instantiate from an iterable."""
return numpy.array(list(iterable)).view(cls)
def __setstate__(self, state):
self.__dict__.update(state)
def __reduce__(self):
return (self.__class__, (list(self),), self.__dict__)
class_replacers[numpy.ndarray] = _numpy_array
class _array(array.array):
@staticmethod
def __new__(cls, seq=()):
return super(_array, cls).__new__(cls, cls.typecode, seq)
def __deepcopy__(self, memo):
"""Overrides the deepcopy from array.array that does not copy
the object's attributes and class type.
"""
cls = self.__class__
copy_ = cls.__new__(cls, self)
memo[id(self)] = copy_
copy_.__dict__.update(copy.deepcopy(self.__dict__, memo))
return copy_
def __reduce__(self):
return (self.__class__, (list(self),), self.__dict__)
class_replacers[array.array] = _array
class MetaCreator(type):
def __new__(cls, name, base, dct):
return super(MetaCreator, cls).__new__(cls, name, (base,), dct)
def __init__(cls, name, base, dct):
        # A DeprecationWarning is raised when the object inherits from the
        # class "object", which leaves the option of passing arguments but
        # raises a warning stating that it will eventually stop permitting
        # this option. Usually this happens when the base class does not
        # override the __init__ method from object.
dict_inst = {}
dict_cls = {}
for obj_name, obj in dct.items():
if isinstance(obj, type):
dict_inst[obj_name] = obj
else:
dict_cls[obj_name] = obj
def init_type(self, *args, **kargs):
"""Replace the __init__ function of the new type, in order to
add attributes that were defined with **kargs to the instance.
"""
for obj_name, obj in dict_inst.items():
setattr(self, obj_name, obj())
if base.__init__ is not object.__init__:
base.__init__(self, *args, **kargs)
cls.__init__ = init_type
cls.reduce_args = (name, base, dct)
super(MetaCreator, cls).__init__(name, (base,), dict_cls)
def __reduce__(cls):
return (meta_create, cls.reduce_args)
copyreg.pickle(MetaCreator, MetaCreator.__reduce__)
def meta_create(name, base, dct):
class_ = MetaCreator(name, base, dct)
globals()[name] = class_
return class_
def create(name, base, **kargs):
"""Creates a new class named *name* inheriting from *base* in the
:mod:`~deap.creator` module. The new class can have attributes defined by
the subsequent keyword arguments passed to the function create. If the
    argument is a class (without parentheses), the __init__ function is
called in the initialization of an instance of the new object and the
returned instance is added as an attribute of the class' instance.
Otherwise, if the argument is not a class, (for example an :class:`int`),
it is added as a "static" attribute of the class.
:param name: The name of the class to create.
:param base: A base class from which to inherit.
:param attribute: One or more attributes to add on instantiation of this
class, optional.
The following is used to create a class :class:`Foo` inheriting from the
standard :class:`list` and having an attribute :attr:`bar` being an empty
dictionary and a static attribute :attr:`spam` initialized to 1. ::
create("Foo", list, bar=dict, spam=1)
This above line is exactly the same as defining in the :mod:`creator`
module something like the following. ::
class Foo(list):
spam = 1
def __init__(self):
self.bar = dict()
The :ref:`creating-types` tutorial gives more examples of the creator
usage.
.. warning::
        If you are inheriting from :class:`numpy.ndarray`, see the
:doc:`tutorials/advanced/numpy` tutorial and the
:doc:`/examples/ga_onemax_numpy` example.
"""
if name in globals():
warnings.warn("A class named '{0}' has already been created and it "
"will be overwritten. Consider deleting previous "
"creation of that class or rename it.".format(name),
RuntimeWarning)
# Check if the base class has to be replaced
if base in class_replacers:
base = class_replacers[base]
meta_create(name, base, kargs)
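# A short sketch mirroring the docstring above. From inside this module the
# created classes are also reachable as globals; user code would normally
# refer to them as creator.FitnessMin and creator.Individual.
def _example_create():
    from deap import base
    create("FitnessMin", base.Fitness, weights=(-1.0,))
    create("Individual", list, fitness=FitnessMin)  # global injected by create
    ind = Individual([1.0, 2.0, 3.0])
    ind.fitness.values = (sum(ind),)
    return ind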
deap-1.4.1/deap/gp.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""The :mod:`gp` module provides the methods and classes to perform
Genetic Programming with DEAP. It essentially contains the classes to
build a Genetic Program Tree, and the functions to evaluate it.
This module supports both strongly and loosely typed GP.
"""
import copy
import math
import copyreg
import random
import re
import sys
import types
import warnings
from collections import defaultdict, deque
from functools import partial, wraps
from operator import eq, lt
from . import tools # Needed by HARM-GP
######################################
# GP Data structure #
######################################
# Define the name of type for any types.
__type__ = object
class PrimitiveTree(list):
"""Tree specifically formatted for optimization of genetic programming
operations. The tree is represented with a list, where the nodes are
appended, or are assumed to have been appended when initializing an object
of this class with a list of primitives and terminals e.g. generated with
the method **gp.generate**, in a depth-first order.
The nodes appended to the tree are required to have an attribute *arity*,
which defines the arity of the primitive. An arity of 0 is expected from
    terminal nodes.
"""
def __init__(self, content):
list.__init__(self, content)
def __deepcopy__(self, memo):
new = self.__class__(self)
new.__dict__.update(copy.deepcopy(self.__dict__, memo))
return new
def __setitem__(self, key, val):
# Check for most common errors
# Does NOT check for STGP constraints
if isinstance(key, slice):
if key.start >= len(self):
raise IndexError("Invalid slice object (try to assign a %s"
" in a tree of size %d). Even if this is allowed by the"
" list object slice setter, this should not be done in"
" the PrimitiveTree context, as this may lead to an"
" unpredictable behavior for searchSubtree or evaluate."
% (key, len(self)))
total = val[0].arity
for node in val[1:]:
total += node.arity - 1
if total != 0:
raise ValueError("Invalid slice assignation : insertion of"
" an incomplete subtree is not allowed in PrimitiveTree."
" A tree is defined as incomplete when some nodes cannot"
" be mapped to any position in the tree, considering the"
" primitives' arity. For instance, the tree [sub, 4, 5,"
" 6] is incomplete if the arity of sub is 2, because it"
" would produce an orphan node (the 6).")
elif val.arity != self[key].arity:
raise ValueError("Invalid node replacement with a node of a"
" different arity.")
list.__setitem__(self, key, val)
def __str__(self):
"""Return the expression in a human readable string.
"""
string = ""
stack = []
for node in self:
stack.append((node, []))
while len(stack[-1][1]) == stack[-1][0].arity:
prim, args = stack.pop()
string = prim.format(*args)
if len(stack) == 0:
break # If stack is empty, all nodes should have been seen
stack[-1][1].append(string)
return string
@classmethod
def from_string(cls, string, pset):
"""Try to convert a string expression into a PrimitiveTree given a
PrimitiveSet *pset*. The primitive set needs to contain every primitive
present in the expression.
:param string: String representation of a Python expression.
:param pset: Primitive set from which primitives are selected.
:returns: PrimitiveTree populated with the deserialized primitives.
"""
tokens = re.split("[ \t\n\r\f\v(),]", string)
expr = []
ret_types = deque()
for token in tokens:
if token == '':
continue
if len(ret_types) != 0:
type_ = ret_types.popleft()
else:
type_ = None
if token in pset.mapping:
primitive = pset.mapping[token]
if type_ is not None and not issubclass(primitive.ret, type_):
raise TypeError("Primitive {} return type {} does not "
"match the expected one: {}."
.format(primitive, primitive.ret, type_))
expr.append(primitive)
if isinstance(primitive, Primitive):
ret_types.extendleft(reversed(primitive.args))
else:
try:
token = eval(token)
except NameError:
raise TypeError("Unable to evaluate terminal: {}.".format(token))
if type_ is None:
type_ = type(token)
if not issubclass(type(token), type_):
raise TypeError("Terminal {} type {} does not "
"match the expected one: {}."
.format(token, type(token), type_))
expr.append(Terminal(token, False, type_))
return cls(expr)
@property
def height(self):
"""Return the height of the tree, or the depth of the
deepest node.
"""
stack = [0]
max_depth = 0
for elem in self:
depth = stack.pop()
max_depth = max(max_depth, depth)
stack.extend([depth + 1] * elem.arity)
return max_depth
@property
def root(self):
"""Root of the tree, the element 0 of the list.
"""
return self[0]
def searchSubtree(self, begin):
"""Return a slice object that corresponds to the
range of values that defines the subtree which has the
element with index *begin* as its root.
"""
end = begin + 1
total = self[begin].arity
while total > 0:
total += self[end].arity - 1
end += 1
return slice(begin, end)
class Primitive(object):
"""Class that encapsulates a primitive and when called with arguments it
returns the Python code to call the primitive with the arguments.
>>> pr = Primitive("mul", (int, int), int)
>>> pr.format(1, 2)
'mul(1, 2)'
"""
__slots__ = ('name', 'arity', 'args', 'ret', 'seq')
def __init__(self, name, args, ret):
self.name = name
self.arity = len(args)
self.args = args
self.ret = ret
args = ", ".join(map("{{{0}}}".format, range(self.arity)))
self.seq = "{name}({args})".format(name=self.name, args=args)
def format(self, *args):
return self.seq.format(*args)
def __eq__(self, other):
if type(self) is type(other):
return all(getattr(self, slot) == getattr(other, slot)
for slot in self.__slots__)
else:
return NotImplemented
class Terminal(object):
"""Class that encapsulates terminal primitive in expression. Terminals can
be values or 0-arity functions.
"""
__slots__ = ('name', 'value', 'ret', 'conv_fct')
def __init__(self, terminal, symbolic, ret):
self.ret = ret
self.value = terminal
self.name = str(terminal)
self.conv_fct = str if symbolic else repr
@property
def arity(self):
return 0
def format(self):
return self.conv_fct(self.value)
def __eq__(self, other):
if type(self) is type(other):
return all(getattr(self, slot) == getattr(other, slot)
for slot in self.__slots__)
else:
return NotImplemented
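# A tiny sketch tying the pieces together: a PrimitiveTree is a flat,
# depth-first list of Primitive and Terminal nodes, so building mul(2, 3)
# by hand shows how each node's arity shapes the rendered expression.
def _example_primitive_tree():
    pr = Primitive("mul", (int, int), int)
    tree = PrimitiveTree([pr, Terminal(2, False, int), Terminal(3, False, int)])
    assert str(tree) == "mul(2, 3)"
    assert tree.height == 1
    return tree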
class MetaEphemeral(type):
"""Meta-Class that creates a terminal which value is set when the
object is created. To mutate the value, a new object has to be
generated.
"""
cache = {}
def __new__(meta, name, func, ret=__type__, id_=None):
if id_ in MetaEphemeral.cache:
return MetaEphemeral.cache[id_]
if isinstance(func, types.LambdaType) and func.__name__ == '':
warnings.warn("Ephemeral {name} function cannot be "
"pickled because its generating function "
"is a lambda function. Use functools.partial "
"instead.".format(name=name), RuntimeWarning)
def __init__(self):
self.value = func()
attr = {'__init__' : __init__,
'name' : name,
'func' : func,
'ret' : ret,
'conv_fct' : repr}
cls = super(MetaEphemeral, meta).__new__(meta, name, (Terminal,), attr)
MetaEphemeral.cache[id(cls)] = cls
return cls
def __init__(cls, name, func, ret=__type__, id_=None):
super(MetaEphemeral, cls).__init__(name, (Terminal,), {})
def __reduce__(cls):
return (MetaEphemeral, (cls.name, cls.func, cls.ret, id(cls)))
copyreg.pickle(MetaEphemeral, MetaEphemeral.__reduce__)
class PrimitiveSetTyped(object):
"""Class that contains the primitives that can be used to solve a
    Strongly Typed GP problem. The set also defines the return type of the
    sought function, as well as the type and number of its input arguments.
"""
def __init__(self, name, in_types, ret_type, prefix="ARG"):
self.terminals = defaultdict(list)
self.primitives = defaultdict(list)
self.arguments = []
# setting "__builtins__" to None avoid the context
# being polluted by builtins function when evaluating
# GP expression.
self.context = {"__builtins__": None}
self.mapping = dict()
self.terms_count = 0
self.prims_count = 0
self.name = name
self.ret = ret_type
self.ins = in_types
for i, type_ in enumerate(in_types):
arg_str = "{prefix}{index}".format(prefix=prefix, index=i)
self.arguments.append(arg_str)
term = Terminal(arg_str, True, type_)
self._add(term)
self.terms_count += 1
def renameArguments(self, **kargs):
"""Rename function arguments with new names from *kargs*.
"""
for i, old_name in enumerate(self.arguments):
if old_name in kargs:
new_name = kargs[old_name]
self.arguments[i] = new_name
self.mapping[new_name] = self.mapping[old_name]
self.mapping[new_name].value = new_name
del self.mapping[old_name]
def _add(self, prim):
def addType(dict_, ret_type):
if ret_type not in dict_:
new_list = []
for type_, list_ in dict_.items():
if issubclass(type_, ret_type):
for item in list_:
if item not in new_list:
new_list.append(item)
dict_[ret_type] = new_list
addType(self.primitives, prim.ret)
addType(self.terminals, prim.ret)
self.mapping[prim.name] = prim
if isinstance(prim, Primitive):
for type_ in prim.args:
addType(self.primitives, type_)
addType(self.terminals, type_)
dict_ = self.primitives
else:
dict_ = self.terminals
for type_ in dict_:
if issubclass(prim.ret, type_):
dict_[type_].append(prim)
def addPrimitive(self, primitive, in_types, ret_type, name=None):
"""Add a primitive to the set.
:param primitive: callable object or a function.
:param in_types: list of primitives arguments' type
:param ret_type: type returned by the primitive.
:param name: alternative name for the primitive instead
of its __name__ attribute.
"""
if name is None:
name = primitive.__name__
prim = Primitive(name, in_types, ret_type)
assert name not in self.context or \
self.context[name] is primitive, \
"Primitives are required to have a unique name. " \
"Consider using the argument 'name' to rename your " \
"second '%s' primitive." % (name,)
self._add(prim)
self.context[prim.name] = primitive
self.prims_count += 1
def addTerminal(self, terminal, ret_type, name=None):
"""Add a terminal to the set. Terminals can be named
using the optional *name* argument. This should be
used: to define a named constant (e.g. pi); to speed up
evaluation when the object is slow to build; when
the object does not have a __repr__ function that returns
the code needed to rebuild the object; or when the object class is
not a Python built-in.
:param terminal: Object, or a function with no arguments.
:param ret_type: Type of the terminal.
:param name: defines the name of the terminal in the expression.
"""
symbolic = False
if name is None and callable(terminal):
name = terminal.__name__
assert name not in self.context, \
"Terminals are required to have a unique name. " \
"Consider using the argument 'name' to rename your " \
"second %s terminal." % (name,)
if name is not None:
self.context[name] = terminal
terminal = name
symbolic = True
elif terminal in (True, False):
# To support True and False terminals with Python 2.
self.context[str(terminal)] = terminal
prim = Terminal(terminal, symbolic, ret_type)
self._add(prim)
self.terms_count += 1
def addEphemeralConstant(self, name, ephemeral, ret_type):
"""Add an ephemeral constant to the set. An ephemeral constant
is a zero-argument function that returns a random value. The value
is constant within a tree, but may differ from one
tree to another.
:param name: name used to refer to this ephemeral type.
:param ephemeral: function with no arguments returning a random value.
:param ret_type: type of the object returned by *ephemeral*.
"""
if name not in self.mapping:
class_ = MetaEphemeral(name, ephemeral, ret_type)
else:
class_ = self.mapping[name]
if class_.func is not ephemeral:
raise Exception("Ephemerals with different functions should "
"be named differently, even between psets.")
if class_.ret is not ret_type:
raise Exception("Ephemerals with the same name and function "
"should have the same type, even between psets.")
self._add(class_)
self.terms_count += 1
def addADF(self, adfset):
"""Add an Automatically Defined Function (ADF) to the set.
:param adfset: PrimitiveSetTyped containing the primitives with which
the ADF can be built.
"""
prim = Primitive(adfset.name, adfset.ins, adfset.ret)
self._add(prim)
self.prims_count += 1
@property
def terminalRatio(self):
"""Return the ratio of the number of terminals to the total number of
primitives and terminals.
"""
return self.terms_count / float(self.terms_count + self.prims_count)
class PrimitiveSet(PrimitiveSetTyped):
"""Class same as :class:`~deap.gp.PrimitiveSetTyped`, except there is no
definition of type.
"""
def __init__(self, name, arity, prefix="ARG"):
args = [__type__] * arity
PrimitiveSetTyped.__init__(self, name, args, __type__, prefix)
def addPrimitive(self, primitive, arity, name=None):
"""Add primitive *primitive* with arity *arity* to the set.
If *name* is provided, it is used instead of the __name__
attribute to represent/identify the primitive.
"""
assert arity > 0, "arity should be >= 1"
args = [__type__] * arity
PrimitiveSetTyped.addPrimitive(self, primitive, args, __type__, name)
def addTerminal(self, terminal, name=None):
"""Add a terminal to the set."""
PrimitiveSetTyped.addTerminal(self, terminal, __type__, name)
def addEphemeralConstant(self, name, ephemeral):
"""Add an ephemeral constant to the set."""
PrimitiveSetTyped.addEphemeralConstant(self, name, ephemeral, __type__)
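# Usage sketch (commented out; an illustrative setup, not executed on
# import): building a loosely typed set mixing operators, a constant,
# an ephemeral constant, and renamed arguments.
#
#     import operator, random
#     pset = PrimitiveSet("MAIN", 2)
#     pset.addPrimitive(operator.add, 2)
#     pset.addPrimitive(operator.mul, 2)
#     pset.addPrimitive(operator.neg, 1)
#     pset.addTerminal(1)
#     pset.addEphemeralConstant("rand101", partial(random.uniform, -1, 1))
#     pset.renameArguments(ARG0="x", ARG1="y")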
######################################
# GP Tree compilation functions #
######################################
def compile(expr, pset):
"""Compile the expression *expr*.
:param expr: Expression to compile. It can either be a PrimitiveTree,
a string of Python code, or any object that, when
converted into a string, produces a valid Python code
expression.
:param pset: Primitive set against which the expression is compiled.
:returns: a function if the primitive set has 1 or more arguments,
or the result of evaluating the tree otherwise.
"""
code = str(expr)
if len(pset.arguments) > 0:
# This section is a stripped version of the lambdify
# function of SymPy 0.6.6.
args = ",".join(pset.arguments)
code = "lambda {args}: {code}".format(args=args, code=code)
try:
return eval(code, pset.context, {})
except MemoryError:
_, _, traceback = sys.exc_info()
raise MemoryError("DEAP : Error in tree evaluation :"
" Python cannot evaluate a tree higher than 90. "
"To avoid this problem, you should use bloat control on your "
"operators. See the DEAP documentation for more information. "
"DEAP will now abort.").with_traceback(traceback)
def compileADF(expr, psets):
"""Compile the expression represented by a list of trees. The first
element of the list is the main tree, and the following elements are
automatically defined functions (ADF) that can be called by the first
tree.
:param expr: Expression to compile. It can either be a PrimitiveTree,
a string of Python code, or any object that, when
converted into a string, produces a valid Python code
expression.
:param psets: List of primitive sets. Each set corresponds to an ADF
while the last set is associated with the expression
and should contain references to the preceding ADFs.
:returns: a function if the main primitive set has 1 or more arguments,
or the result of evaluating the tree otherwise.
"""
adfdict = {}
func = None
for pset, subexpr in reversed(list(zip(psets, expr))):
pset.context.update(adfdict)
func = compile(subexpr, pset)
adfdict.update({pset.name: func})
return func
######################################
# GP Program generation functions #
######################################
def genFull(pset, min_, max_, type_=None):
"""Generate an expression where each leaf has the same depth
between *min* and *max*.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum height of the produced trees.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: A full tree with all leaves at the same depth.
"""
def condition(height, depth):
"""Expression generation stops when the depth is equal to height."""
return depth == height
return generate(pset, min_, max_, condition, type_)
def genGrow(pset, min_, max_, type_=None):
"""Generate an expression where each leaf might have a different depth
between *min* and *max*.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum height of the produced trees.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: A grown tree with leaves at possibly different depths.
"""
def condition(height, depth):
"""Expression generation stops when the depth is equal to height
or when it is randomly determined that a node should be a terminal.
"""
return depth == height or \
(depth >= min_ and random.random() < pset.terminalRatio)
return generate(pset, min_, max_, condition, type_)
def genHalfAndHalf(pset, min_, max_, type_=None):
"""Generate an expression with a PrimitiveSet *pset*.
Half the time, the expression is generated with :func:`~deap.gp.genGrow`,
the other half, the expression is generated with :func:`~deap.gp.genFull`.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum height of the produced trees.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: Either, a full or a grown tree.
"""
method = random.choice((genGrow, genFull))
return method(pset, min_, max_, type_)
def genRamped(pset, min_, max_, type_=None):
"""
.. deprecated:: 1.0
The function has been renamed. Use :func:`~deap.gp.genHalfAndHalf` instead.
"""
warnings.warn("gp.genRamped has been renamed. Use genHalfAndHalf instead.",
FutureWarning)
return genHalfAndHalf(pset, min_, max_, type_)
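# Illustrative sketch (commented out): the three generators share one
# signature and differ only in the stopping *condition* they hand to
# generate() below.
#
#     full = PrimitiveTree(genFull(pset, 1, 3))         # leaves all at one depth
#     grown = PrimitiveTree(genGrow(pset, 1, 3))        # ragged leaf depths
#     mixed = PrimitiveTree(genHalfAndHalf(pset, 1, 3)) # 50/50 mix of the above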
def generate(pset, min_, max_, condition, type_=None):
"""Generate a tree as a list of primitives and terminals in a depth-first
order. The tree is built from the root to the leaves, and it stops growing
the current branch when the *condition* is fulfilled: in which case, it
back-tracks, then tries to grow another branch until the *condition* is
fulfilled again, and so on. The returned list can then be passed to the
constructor of the class *PrimitiveTree* to build an actual tree object.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum height of the produced trees.
:param condition: The condition is a function that takes two arguments,
the height of the tree to build and the current
depth in the tree.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: A grown tree with leaves at possibly different depths
depending on the condition function.
"""
if type_ is None:
type_ = pset.ret
expr = []
height = random.randint(min_, max_)
stack = [(0, type_)]
while len(stack) != 0:
depth, type_ = stack.pop()
if condition(height, depth):
try:
term = random.choice(pset.terminals[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError("The gp.generate function tried to add "
"a terminal of type '%s', but there is "
"none available." % (type_,)).with_traceback(traceback)
if type(term) is MetaEphemeral:
term = term()
expr.append(term)
else:
try:
prim = random.choice(pset.primitives[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError("The gp.generate function tried to add "
"a primitive of type '%s', but there is "
"none available." % (type_,)).with_traceback(traceback)
expr.append(prim)
for arg in reversed(prim.args):
stack.append((depth + 1, arg))
return expr
######################################
# GP Crossovers #
######################################
def cxOnePoint(ind1, ind2):
"""Randomly select a crossover point in each individual and exchange the
two subtrees rooted at those points between the individuals.
:param ind1: First tree participating in the crossover.
:param ind2: Second tree participating in the crossover.
:returns: A tuple of two trees.
"""
if len(ind1) < 2 or len(ind2) < 2:
# No crossover on single node tree
return ind1, ind2
# List all available primitive types in each individual
types1 = defaultdict(list)
types2 = defaultdict(list)
if ind1.root.ret == __type__:
# Not STGP optimization
types1[__type__] = list(range(1, len(ind1)))
types2[__type__] = list(range(1, len(ind2)))
common_types = [__type__]
else:
for idx, node in enumerate(ind1[1:], 1):
types1[node.ret].append(idx)
for idx, node in enumerate(ind2[1:], 1):
types2[node.ret].append(idx)
common_types = set(types1.keys()).intersection(set(types2.keys()))
if len(common_types) > 0:
type_ = random.choice(list(common_types))
index1 = random.choice(types1[type_])
index2 = random.choice(types2[type_])
slice1 = ind1.searchSubtree(index1)
slice2 = ind2.searchSubtree(index2)
ind1[slice1], ind2[slice2] = ind2[slice2], ind1[slice1]
return ind1, ind2
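# Illustrative sketch (commented out): crossover swaps subtrees in
# place, so clone the parents first when the originals must survive.
#
#     p1 = PrimitiveTree(genFull(pset, 1, 3))
#     p2 = PrimitiveTree(genFull(pset, 1, 3))
#     c1, c2 = cxOnePoint(copy.deepcopy(p1), copy.deepcopy(p2))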
def cxOnePointLeafBiased(ind1, ind2, termpb):
"""Randomly select a crossover point in each individual and exchange the
two subtrees rooted at those points between the individuals.
:param ind1: First typed tree participating in the crossover.
:param ind2: Second typed tree participating in the crossover.
:param termpb: The probability of choosing a terminal node (leaf).
:returns: A tuple of two typed trees.
When the nodes are strongly typed, the operator makes sure the
second node type corresponds to the first node type.
The parameter *termpb* sets the probability to choose between a terminal
or non-terminal crossover point. For instance, as defined by Koza, non-
terminal primitives are selected for 90% of the crossover points, and
terminals for 10%, so *termpb* should be set to 0.1.
"""
if len(ind1) < 2 or len(ind2) < 2:
# No crossover on single node tree
return ind1, ind2
# Determine whether to keep terminals or primitives for each individual
terminal_op = partial(eq, 0)
primitive_op = partial(lt, 0)
arity_op1 = terminal_op if random.random() < termpb else primitive_op
arity_op2 = terminal_op if random.random() < termpb else primitive_op
# List all available primitive or terminal types in each individual
types1 = defaultdict(list)
types2 = defaultdict(list)
for idx, node in enumerate(ind1[1:], 1):
if arity_op1(node.arity):
types1[node.ret].append(idx)
for idx, node in enumerate(ind2[1:], 1):
if arity_op2(node.arity):
types2[node.ret].append(idx)
common_types = set(types1.keys()).intersection(set(types2.keys()))
if len(common_types) > 0:
# Sets do not support indexing; materialize before choosing
type_ = random.choice(list(common_types))
index1 = random.choice(types1[type_])
index2 = random.choice(types2[type_])
slice1 = ind1.searchSubtree(index1)
slice2 = ind2.searchSubtree(index2)
ind1[slice1], ind2[slice2] = ind2[slice2], ind1[slice1]
return ind1, ind2
######################################
# GP Mutations #
######################################
def mutUniform(individual, expr, pset):
"""Randomly select a point in the tree *individual*, then replace the
subtree rooted at that point by an expression generated with the
function :func:`expr`.
:param individual: The tree to be mutated.
:param expr: A function object that can generate an expression when
called.
:param pset: Primitive set from which primitives are selected.
:returns: A tuple of one tree.
"""
index = random.randrange(len(individual))
slice_ = individual.searchSubtree(index)
type_ = individual[index].ret
individual[slice_] = expr(pset=pset, type_=type_)
return individual,
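# Usage sketch (commented out): mutUniform is typically registered in a
# Toolbox with a generator bound as *expr*; the aliases below are the
# customary ones, not requirements.
#
#     from deap import base
#     toolbox = base.Toolbox()
#     toolbox.register("expr_mut", genFull, min_=0, max_=2)
#     toolbox.register("mutate", mutUniform, expr=toolbox.expr_mut, pset=pset)
#     mutant, = toolbox.mutate(PrimitiveTree(genFull(pset, 1, 3)))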
def mutNodeReplacement(individual, pset):
"""Replaces a randomly chosen primitive from *individual* by a randomly
chosen primitive with the same number of arguments, drawn from *pset*.
:param individual: The normal or typed tree to be mutated.
:param pset: Primitive set from which primitives are selected.
:returns: A tuple of one tree.
"""
if len(individual) < 2:
return individual,
index = random.randrange(1, len(individual))
node = individual[index]
if node.arity == 0: # Terminal
term = random.choice(pset.terminals[node.ret])
if type(term) is MetaEphemeral:
term = term()
individual[index] = term
else: # Primitive
prims = [p for p in pset.primitives[node.ret] if p.args == node.args]
individual[index] = random.choice(prims)
return individual,
def mutEphemeral(individual, mode):
"""This operator works on the constants of the tree *individual*. In
*mode* ``"one"``, it will change the value of one of the individual
ephemeral constants by calling its generator function. In *mode*
``"all"``, it will change the value of **all** the ephemeral constants.
:param individual: The normal or typed tree to be mutated.
:param mode: A string to indicate to change ``"one"`` or ``"all"``
ephemeral constants.
:returns: A tuple of one tree.
"""
if mode not in ["one", "all"]:
raise ValueError("Mode must be one of \"one\" or \"all\"")
ephemerals_idx = [index
for index, node in enumerate(individual)
if isinstance(type(node), MetaEphemeral)]
if len(ephemerals_idx) > 0:
if mode == "one":
ephemerals_idx = (random.choice(ephemerals_idx),)
for i in ephemerals_idx:
individual[i] = type(individual[i])()
return individual,
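# Illustrative sketch (commented out): redraw ephemeral constants in
# place, either a single one or all of them (reusing the *tree* from
# the sketches above).
#
#     mutated, = mutEphemeral(tree, mode="one")   # one constant regenerated
#     mutated, = mutEphemeral(tree, mode="all")   # every constant regenerated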
def mutInsert(individual, pset):
"""Inserts a new branch at a random position in *individual*. The subtree
at the chosen position is used as a child node of the newly created
subtree; in that way, it is truly an insertion rather than a replacement.
Note that the original subtree becomes one of the children of the new
primitive, but not necessarily the first (its position is randomly
selected if the new primitive has more than one child).
:param individual: The normal or typed tree to be mutated.
:param pset: Primitive set from which primitives are selected.
:returns: A tuple of one tree.
"""
index = random.randrange(len(individual))
node = individual[index]
slice_ = individual.searchSubtree(index)
choice = random.choice
# As we want to keep the current node as a child of the new one,
# the new primitive must accept the current node's return type
primitives = [p for p in pset.primitives[node.ret] if node.ret in p.args]
if len(primitives) == 0:
return individual,
new_node = choice(primitives)
new_subtree = [None] * len(new_node.args)
position = choice([i for i, a in enumerate(new_node.args) if a == node.ret])
for i, arg_type in enumerate(new_node.args):
if i != position:
term = choice(pset.terminals[arg_type])
if isclass(term):
term = term()
new_subtree[i] = term
new_subtree[position:position + 1] = individual[slice_]
new_subtree.insert(0, new_node)
individual[slice_] = new_subtree
return individual,
def mutShrink(individual):
"""This operator shrinks the *individual* by choosing randomly a branch and
replacing it with one of the branch's arguments (also randomly chosen).
:param individual: The tree to be shrunk.
:returns: A tuple of one tree.
"""
# We don't want to "shrink" the root
if len(individual) < 3 or individual.height <= 1:
return individual,
iprims = []
for i, node in enumerate(individual[1:], 1):
if isinstance(node, Primitive) and node.ret in node.args:
iprims.append((i, node))
if len(iprims) != 0:
index, prim = random.choice(iprims)
arg_idx = random.choice([i for i, type_ in enumerate(prim.args) if type_ == prim.ret])
rindex = index + 1
for _ in range(arg_idx + 1):
rslice = individual.searchSubtree(rindex)
subtree = individual[rslice]
rindex += len(subtree)
slice_ = individual.searchSubtree(index)
individual[slice_] = subtree
return individual,
######################################
# GP bloat control decorators #
######################################
def staticLimit(key, max_value):
"""Implement a static limit on some measurement on a GP tree, as defined
by Koza in [Koza1989]. It may be used to decorate both crossover and
mutation operators. When an invalid (over the limit) child is generated,
it is simply replaced by one of its parents, randomly selected.
This operator can be used to avoid memory errors occurring when the tree
gets higher than 90 levels (as Python puts a limit on the call stack
depth), because it can ensure that no tree higher than this limit will ever
be accepted in the population, except if it was generated at initialization
time.
:param key: The function to use in order to get the wanted value. For
instance, on a GP tree, ``operator.attrgetter('height')`` may
be used to set a depth limit, and ``len`` to set a size limit.
:param max_value: The maximum value allowed for the given measurement.
:returns: A decorator that can be applied to a GP operator using \
:func:`~deap.base.Toolbox.decorate`
.. note::
If you want to reproduce the exact behavior intended by Koza, set
*key* to ``operator.attrgetter('height')`` and *max_value* to 17.
.. [Koza1989] J.R. Koza, Genetic Programming - On the Programming of
Computers by Means of Natural Selection (MIT Press,
Cambridge, MA, 1992)
"""
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
keep_inds = [copy.deepcopy(ind) for ind in args]
new_inds = list(func(*args, **kwargs))
for i, ind in enumerate(new_inds):
if key(ind) > max_value:
new_inds[i] = random.choice(keep_inds)
return new_inds
return wrapper
return decorator
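# Usage sketch (commented out), following the Koza setting from the
# note above; assumes a toolbox with "mate" and "mutate" registered.
#
#     import operator
#     limit = staticLimit(key=operator.attrgetter("height"), max_value=17)
#     toolbox.decorate("mate", limit)
#     toolbox.decorate("mutate", limit)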
######################################
# GP bloat control algorithms #
######################################
def harm(population, toolbox, cxpb, mutpb, ngen,
alpha, beta, gamma, rho, nbrindsmodel=-1, mincutoff=20,
stats=None, halloffame=None, verbose=__debug__):
"""Implement bloat control on a GP evolution using HARM-GP, as defined in
[Gardner2015]. It is implemented in the form of an evolution algorithm
(similar to :func:`~deap.algorithms.eaSimple`).
:param population: A list of individuals.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param cxpb: The probability of mating two individuals.
:param mutpb: The probability of mutating an individual.
:param ngen: The number of generations.
:param alpha: The HARM *alpha* parameter.
:param beta: The HARM *beta* parameter.
:param gamma: The HARM *gamma* parameter.
:param rho: The HARM *rho* parameter.
:param nbrindsmodel: The number of individuals to generate in order to
model the natural distribution. -1 is a special
value which uses the equation proposed in
[Gardner2015] to set the value of this parameter:
max(2000, len(population)).
:param mincutoff: The absolute minimum value for the cutoff point. It is
used to ensure that HARM does not shrink the population
too much at the beginning of the evolution. The default
value is usually fine.
:param stats: A :class:`~deap.tools.Statistics` object that is updated
in place, optional.
:param halloffame: A :class:`~deap.tools.HallOfFame` object that will
contain the best individuals, optional.
:param verbose: Whether or not to log the statistics.
:returns: The final population
:returns: A :class:`~deap.tools.Logbook` with the statistics of the
evolution.
This function expects the :meth:`toolbox.mate`, :meth:`toolbox.mutate`,
:meth:`toolbox.select` and :meth:`toolbox.evaluate` aliases to be
registered in the toolbox.
.. note::
The recommended values for the HARM-GP parameters are *alpha=0.05*,
*beta=10*, *gamma=0.25*, *rho=0.9*. However, these parameters can be
adjusted to perform better on a specific problem (see the relevant
paper for tuning information). The number of individuals used to
model the natural distribution and the minimum cutoff point are less
important, their default value being effective in most cases.
.. [Gardner2015] M.-A. Gardner, C. Gagne, and M. Parizeau, Controlling
Code Growth by Dynamically Shaping the Genotype Size Distribution,
Genetic Programming and Evolvable Machines, 2015,
DOI 10.1007/s10710-015-9242-8
"""
def _genpop(n, pickfrom=[], acceptfunc=lambda s: True, producesizes=False):
# Generate a population of n individuals, using individuals in
# *pickfrom* if possible, with an *acceptfunc* acceptance function.
# If *producesizes* is true, also return a list of the produced
# individuals' sizes.
# This function is used 1) to generate the natural distribution
# (in this case, pickfrom and acceptfunc should be left at their
# default values) and 2) to generate the final population, in which
# case pickfrom should be the natural population previously generated
# and acceptfunc a function implementing the HARM-GP algorithm.
producedpop = []
producedpopsizes = []
while len(producedpop) < n:
if len(pickfrom) > 0:
# If possible, use the already generated
# individuals (more efficient)
aspirant = pickfrom.pop()
if acceptfunc(len(aspirant)):
producedpop.append(aspirant)
if producesizes:
producedpopsizes.append(len(aspirant))
else:
opRandom = random.random()
if opRandom < cxpb:
# Crossover
aspirant1, aspirant2 = toolbox.mate(*map(toolbox.clone,
toolbox.select(population, 2)))
del aspirant1.fitness.values, aspirant2.fitness.values
if acceptfunc(len(aspirant1)):
producedpop.append(aspirant1)
if producesizes:
producedpopsizes.append(len(aspirant1))
if len(producedpop) < n and acceptfunc(len(aspirant2)):
producedpop.append(aspirant2)
if producesizes:
producedpopsizes.append(len(aspirant2))
else:
aspirant = toolbox.clone(toolbox.select(population, 1)[0])
if opRandom - cxpb < mutpb:
# Mutation
aspirant = toolbox.mutate(aspirant)[0]
del aspirant.fitness.values
if acceptfunc(len(aspirant)):
producedpop.append(aspirant)
if producesizes:
producedpopsizes.append(len(aspirant))
if producesizes:
return producedpop, producedpopsizes
else:
return producedpop
def halflifefunc(x):
return x * float(alpha) + beta
if nbrindsmodel == -1:
nbrindsmodel = max(2000, len(population))
logbook = tools.Logbook()
logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in population if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
if halloffame is not None:
halloffame.update(population)
record = stats.compile(population) if stats else {}
logbook.record(gen=0, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
# Begin the generational process
for gen in range(1, ngen + 1):
# Estimate the population's natural distribution of sizes
naturalpop, naturalpopsizes = _genpop(nbrindsmodel, producesizes=True)
naturalhist = [0] * (max(naturalpopsizes) + 3)
for indsize in naturalpopsizes:
# Kernel density estimation application
naturalhist[indsize] += 0.4
naturalhist[indsize - 1] += 0.2
naturalhist[indsize + 1] += 0.2
naturalhist[indsize + 2] += 0.1
if indsize - 2 >= 0:
naturalhist[indsize - 2] += 0.1
# Normalization
naturalhist = [val * len(population) / nbrindsmodel for val in naturalhist]
# Cutoff point selection
sortednatural = sorted(naturalpop, key=lambda ind: ind.fitness)
cutoffcandidates = sortednatural[int(len(population) * rho - 1):]
# Select the cutoff point, with an absolute minimum applied
# to avoid weird cases in the first generations
cutoffsize = max(mincutoff, len(min(cutoffcandidates, key=len)))
# Compute the target distribution
def targetfunc(x):
return (gamma * len(population) * math.log(2) /
halflifefunc(x)) * math.exp(-math.log(2) *
(x - cutoffsize) / halflifefunc(x))
targethist = [naturalhist[binidx] if binidx <= cutoffsize else
targetfunc(binidx) for binidx in range(len(naturalhist))]
# Compute the probabilities distribution
probhist = [t / n if n > 0 else t for n, t in zip(naturalhist, targethist)]
def probfunc(s):
return probhist[s] if s < len(probhist) else targetfunc(s)
def acceptfunc(s):
return random.random() <= probfunc(s)
# Generate offspring using the acceptance probabilities
# previously computed
offspring = _genpop(len(population), pickfrom=naturalpop,
acceptfunc=acceptfunc, producesizes=False)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# Update the hall of fame with the generated individuals
if halloffame is not None:
halloffame.update(offspring)
# Replace the current population by the offspring
population[:] = offspring
# Append the current generation statistics to the logbook
record = stats.compile(population) if stats else {}
logbook.record(gen=gen, nevals=len(invalid_ind), **record)
if verbose:
print(logbook.stream)
return population, logbook
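# Usage sketch (commented out), using the recommended parameter values
# from the note above; assumes a toolbox exposing mate, mutate, select
# and evaluate, plus an initialised population, stats and hall of fame.
#
#     pop, log = harm(pop, toolbox, cxpb=0.5, mutpb=0.1, ngen=40,
#                     alpha=0.05, beta=10, gamma=0.25, rho=0.9,
#                     stats=stats, halloffame=hof)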
def graph(expr):
"""Construct the graph of a tree expression. The tree expression must be
valid. It returns in order a node list, an edge list, and a dictionary of
the per node labels. The nodes are represented by numbers, the edges are
tuples connecting two nodes (numbers), and the labels are values of a
dictionary whose keys are the node numbers.
:param expr: A tree expression to convert into a graph.
:returns: A node list, an edge list, and a dictionary of labels.
The returned objects can be used directly to populate a
pygraphviz graph::
import pygraphviz as pgv
# [...] Execution of code that produce a tree expression
nodes, edges, labels = graph(expr)
g = pgv.AGraph()
g.add_nodes_from(nodes)
g.add_edges_from(edges)
g.layout(prog="dot")
for i in nodes:
n = g.get_node(i)
n.attr["label"] = labels[i]
g.draw("tree.pdf")
or a NetworkX graph::
import matplotlib.pyplot as plt
import networkx as nx
# [...] Execution of code that produce a tree expression
nodes, edges, labels = graph(expr)
g = nx.Graph()
g.add_nodes_from(nodes)
g.add_edges_from(edges)
pos = nx.nx_agraph.graphviz_layout(g, prog="dot")
nx.draw_networkx_nodes(g, pos)
nx.draw_networkx_edges(g, pos)
nx.draw_networkx_labels(g, pos, labels)
plt.show()
.. note::
We encourage you to use pygraphviz, as the nodes might be plotted
out of order when using NetworkX.
"""
nodes = list(range(len(expr)))
edges = list()
labels = dict()
stack = []
for i, node in enumerate(expr):
if stack:
edges.append((stack[-1][0], i))
stack[-1][1] -= 1
labels[i] = node.name if isinstance(node, Primitive) else node.value
stack.append([i, node.arity])
while stack and stack[-1][1] == 0:
stack.pop()
return nodes, edges, labels
######################################
# GSGP Mutation #
######################################
def mutSemantic(individual, gen_func=genGrow, pset=None, ms=None, min=2, max=6):
"""
Implementation of the Semantic Mutation operator. [Geometric semantic genetic programming, Moraglio et al., 2012]
mutated_individual = individual + logistic * (random_tree1 - random_tree2)
:param individual: individual to mutate
:param gen_func: function responsible for the generation of the random tree that will be used during the mutation
:param pset: Primitive Set, which contains terminal and operands to be used during the evolution
:param ms: Mutation Step
:param min: min depth of the random tree
:param max: max depth of the random tree
:return: mutated individual
The mutated individual contains the original individual.
>>> import operator
>>> def lf(x): return 1 / (1 + math.exp(-x))
>>> pset = PrimitiveSet("main", 2)
>>> pset.addPrimitive(operator.sub, 2)
>>> pset.addTerminal(3)
>>> pset.addPrimitive(lf, 1, name="lf")
>>> pset.addPrimitive(operator.add, 2)
>>> pset.addPrimitive(operator.mul, 2)
>>> individual = genGrow(pset, 1, 3)
>>> mutated = mutSemantic(individual, pset=pset, max=2)
>>> ctr = sum([m.name == individual[i].name for i, m in enumerate(mutated[0])])
>>> ctr == len(individual)
True
"""
for p in ['lf', 'mul', 'add', 'sub']:
assert p in pset.mapping, "A '" + p + "' function is required in order to perform semantic mutation"
tr1 = gen_func(pset, min, max)
tr2 = gen_func(pset, min, max)
# Wrap mutation with a logistic function
tr1.insert(0, pset.mapping['lf'])
tr2.insert(0, pset.mapping['lf'])
if ms is None:
ms = random.uniform(0, 2)
mutation_step = Terminal(ms, False, object)
# Create the root; the original individual becomes the left branch
new_ind = individual
new_ind.insert(0, pset.mapping["add"])
# Build the right branch: ms * (tr1 - tr2)
new_ind.append(pset.mapping["mul"])
new_ind.append(mutation_step)
new_ind.append(pset.mapping["sub"])
# The two random trees complete the sub() call
new_ind.extend(tr1)
new_ind.extend(tr2)
return new_ind,
def cxSemantic(ind1, ind2, gen_func=genGrow, pset=None, min=2, max=6):
"""
Implementation of the Semantic Crossover operator [Geometric semantic genetic programming, Moraglio et al., 2012]
offspring1 = random_tree1 * ind1 + (1 - random_tree1) * ind2
offspring2 = random_tree1 * ind2 + (1 - random_tree1) * ind1
:param ind1: first parent
:param ind2: second parent
:param gen_func: function responsible for the generation of the random tree that will be used during the mutation
:param pset: Primitive Set, which contains terminal and operands to be used during the evolution
:param min: min depth of the random tree
:param max: max depth of the random tree
:return: a tuple of two offspring
Each offspring contains both parents as subtrees.
>>> import operator
>>> def lf(x): return 1 / (1 + math.exp(-x))
>>> pset = PrimitiveSet("main", 2)
>>> pset.addPrimitive(operator.sub, 2)
>>> pset.addTerminal(3)
>>> pset.addPrimitive(lf, 1, name="lf")
>>> pset.addPrimitive(operator.add, 2)
>>> pset.addPrimitive(operator.mul, 2)
>>> ind1 = genGrow(pset, 1, 3)
>>> ind2 = genGrow(pset, 1, 3)
>>> new_ind1, new_ind2 = cxSemantic(ind1, ind2, pset=pset, max=2)
>>> ctr = sum([n.name == ind1[i].name for i, n in enumerate(new_ind1)])
>>> ctr == len(ind1)
True
>>> ctr = sum([n.name == ind2[i].name for i, n in enumerate(new_ind2)])
>>> ctr == len(ind2)
True
"""
for p in ['lf', 'mul', 'add', 'sub']:
assert p in pset.mapping, "A '" + p + "' function is required in order to perform semantic crossover"
tr = gen_func(pset, min, max)
tr.insert(0, pset.mapping['lf'])
# ind1 is modified in place below; keep a copy of its original nodes
# so the second offspring embeds the original first parent.
ind1_orig = ind1[:]
new_ind1 = ind1
new_ind1.insert(0, pset.mapping["mul"])
new_ind1.insert(0, pset.mapping["add"])
new_ind1.extend(tr)
new_ind1.append(pset.mapping["mul"])
new_ind1.append(pset.mapping["sub"])
new_ind1.append(Terminal(1.0, False, object))
new_ind1.extend(tr)
new_ind1.extend(ind2)
new_ind2 = ind2
new_ind2.insert(0, pset.mapping["mul"])
new_ind2.insert(0, pset.mapping["add"])
new_ind2.extend(tr)
new_ind2.append(pset.mapping["mul"])
new_ind2.append(pset.mapping["sub"])
new_ind2.append(Terminal(1.0, False, object))
new_ind2.extend(tr)
new_ind2.extend(ind1_orig)
return new_ind1, new_ind2
if __name__ == "__main__":
import doctest
doctest.testmod()
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 010211 x ustar 00 27 mtime=1689936700.636043
deap-1.4.1/deap/tools/ 0000755 0000765 0000024 00000000000 14456461475 013753 5 ustar 00runner staff ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936673.0
deap-1.4.1/deap/tools/__init__.py 0000644 0000765 0000024 00000002472 14456461441 016062 0 ustar 00runner staff # This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see .
"""The :mod:`~deap.tools` module contains the operators for evolutionary
algorithms. They are used to modify, select and move the individuals in their
environment. The set of operators it contains is readily usable in the
:class:`~deap.base.Toolbox`. In addition to the basic operators this module
also contains utility tools to enhance the basic algorithms with
:class:`Statistics`, :class:`HallOfFame`, and :class:`History`.
"""
from .constraint import *
from .crossover import *
from .emo import *
from .indicator import *
from .init import *
from .migration import *
from .mutation import *
from .selection import *
from .support import *
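# Usage sketch (commented out; a minimal illustration assuming numpy is
# available): Statistics aggregates per-generation figures and
# HallOfFame keeps the best individuals seen so far.
#
#     import numpy
#     stats = Statistics(key=lambda ind: ind.fitness.values)
#     stats.register("avg", numpy.mean)
#     stats.register("min", numpy.min)
#     hof = HallOfFame(maxsize=1)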
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 010211 x ustar 00 27 mtime=1689936700.638874
deap-1.4.1/deap/tools/_hypervolume/ 0000755 0000765 0000024 00000000000 14456461475 016471 5 ustar 00runner staff ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936673.0
deap-1.4.1/deap/tools/_hypervolume/__init__.py 0000644 0000765 0000024 00000001265 14456461441 020577 0 ustar 00runner staff # This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see .
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936673.0
deap-1.4.1/deap/tools/_hypervolume/_hv.c 0000644 0000765 0000024 00000132553 14456461441 017413 0 ustar 00runner staff /*************************************************************************
hypervolume computation
---------------------------------------------------------------------
Copyright (c) 2010
Carlos M. Fonseca
Manuel Lopez-Ibanez
Luis Paquete
Andreia P. Guerreiro
This program is free software (software libre); you can redistribute
it and/or modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version. As a particular
exception, the contents of this file (hv.c) may also be redistributed
and/or modified under the terms of the GNU Lesser General Public
License (LGPL) as published by the Free Software Foundation; either
version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, you can obtain a copy of the GNU
General Public License at:
http://www.gnu.org/copyleft/gpl.html
or by writing to:
Free Software Foundation, Inc., 59 Temple Place,
Suite 330, Boston, MA 02111-1307 USA
----------------------------------------------------------------------
Relevant literature:
[1] C. M. Fonseca, L. Paquete, and M. Lopez-Ibanez. An
improved dimension-sweep algorithm for the hypervolume
indicator. In IEEE Congress on Evolutionary Computation,
pages 1157-1163, Vancouver, Canada, July 2006.
[2] Nicola Beume, Carlos M. Fonseca, Manuel López-Ibáñez, Luís
Paquete, and J. Vahrenhold. On the complexity of computing the
hypervolume indicator. IEEE Transactions on Evolutionary
Computation, 13(5):1075-1082, 2009.
*************************************************************************/
#include "_hv.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <math.h>
// Default to variant 4 without having to "make VARIANT=4"
#define VARIANT 4
static int compare_tree_asc(const void *p1, const void *p2);
/*-----------------------------------------------------------------------------
The following is a reduced version of the AVL-tree library used here
according to the terms of the GPL. See the copyright notice below.
*/
#define AVL_DEPTH
/*****************************************************************************
avl.h - Source code for the AVL-tree library.
Copyright (C) 1998 Michael H. Buselli
Copyright (C) 2000-2002 Wessel Dankers
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Augmented AVL-tree. Original by Michael H. Buselli .
Modified by Wessel Dankers to add a bunch of bloat to
the sourcecode, change the interface and squash a few bugs.
Mail him if you find new bugs.
*****************************************************************************/
/* User supplied function to compare two items like strcmp() does.
* For example: cmp(a,b) will return:
* -1 if a < b
* 0 if a = b
* 1 if a > b
*/
typedef int (*avl_compare_t)(const void *, const void *);
/* User supplied function to delete an item when a node is free()d.
* If NULL, the item is not free()d.
*/
typedef void (*avl_freeitem_t)(void *);
typedef struct avl_node_t {
struct avl_node_t *next;
struct avl_node_t *prev;
struct avl_node_t *parent;
struct avl_node_t *left;
struct avl_node_t *right;
void *item;
double domr;
#ifdef AVL_DEPTH
unsigned char depth;
#endif
} avl_node_t;
typedef struct avl_tree_t {
avl_node_t *head;
avl_node_t *tail;
avl_node_t *top;
avl_compare_t cmp;
avl_freeitem_t freeitem;
} avl_tree_t;
/*****************************************************************************
avl.c - Source code for the AVL-tree library.
*****************************************************************************/
static void avl_rebalance(avl_tree_t *, avl_node_t *);
#ifdef AVL_DEPTH
#define NODE_DEPTH(n) ((n) ? (n)->depth : 0)
#define L_DEPTH(n) (NODE_DEPTH((n)->left))
#define R_DEPTH(n) (NODE_DEPTH((n)->right))
#define CALC_DEPTH(n) ((L_DEPTH(n)>R_DEPTH(n)?L_DEPTH(n):R_DEPTH(n)) + 1)
#endif
static int avl_check_balance(avl_node_t *avlnode) {
#ifdef AVL_DEPTH
int d;
d = R_DEPTH(avlnode) - L_DEPTH(avlnode);
return d<-1?-1:d>1?1:0;
#endif
}
static int
avl_search_closest(const avl_tree_t *avltree, const void *item, avl_node_t **avlnode) {
avl_node_t *node;
int c;
if(!avlnode)
avlnode = &node;
node = avltree->top;
if(!node)
return *avlnode = NULL, 0;
for(;;) {
c = compare_tree_asc(item, node->item);
if(c < 0) {
if(node->left)
node = node->left;
else
return *avlnode = node, -1;
} else if(c > 0) {
if(node->right)
node = node->right;
else
return *avlnode = node, 1;
} else {
return *avlnode = node, 0;
}
}
}
static avl_tree_t *
avl_init_tree(avl_tree_t *rc, avl_compare_t cmp, avl_freeitem_t freeitem) {
if(rc) {
rc->head = NULL;
rc->tail = NULL;
rc->top = NULL;
rc->cmp = cmp;
rc->freeitem = freeitem;
}
return rc;
}
static avl_tree_t *
avl_alloc_tree(avl_compare_t cmp, avl_freeitem_t freeitem) {
return avl_init_tree(malloc(sizeof(avl_tree_t)), cmp, freeitem);
}
static void
avl_clear_tree(avl_tree_t *avltree) {
avltree->top = avltree->head = avltree->tail = NULL;
}
static void
avl_clear_node(avl_node_t *newnode) {
newnode->left = newnode->right = NULL;
#ifdef AVL_COUNT
newnode->count = 1;
#endif
#ifdef AVL_DEPTH
newnode->depth = 1;
#endif
}
static avl_node_t *
avl_insert_top(avl_tree_t *avltree, avl_node_t *newnode) {
avl_clear_node(newnode);
newnode->prev = newnode->next = newnode->parent = NULL;
avltree->head = avltree->tail = avltree->top = newnode;
return newnode;
}
static avl_node_t *
avl_insert_before(avl_tree_t *avltree, avl_node_t *node, avl_node_t *newnode) {
/* if(!node)
return avltree->tail
? avl_insert_after(avltree, avltree->tail, newnode)
: avl_insert_top(avltree, newnode);
if(node->left)
return avl_insert_after(avltree, node->prev, newnode);
*/
assert (node);
assert (!node->left);
avl_clear_node(newnode);
newnode->next = node;
newnode->parent = node;
newnode->prev = node->prev;
if(node->prev)
node->prev->next = newnode;
else
avltree->head = newnode;
node->prev = newnode;
node->left = newnode;
avl_rebalance(avltree, node);
return newnode;
}
static avl_node_t *
avl_insert_after(avl_tree_t *avltree, avl_node_t *node, avl_node_t *newnode) {
/* if(!node)
return avltree->head
? avl_insert_before(avltree, avltree->head, newnode)
: avl_insert_top(avltree, newnode);
if(node->right)
return avl_insert_before(avltree, node->next, newnode);
*/
assert (node);
assert (!node->right);
avl_clear_node(newnode);
newnode->prev = node;
newnode->parent = node;
newnode->next = node->next;
if(node->next)
node->next->prev = newnode;
else
avltree->tail = newnode;
node->next = newnode;
node->right = newnode;
avl_rebalance(avltree, node);
return newnode;
}
/*
* avl_unlink_node:
* Removes the given node. Does not delete the item at that node.
* The item of the node may be freed before calling avl_unlink_node.
* (In other words, it is not referenced by this function.)
*/
static void
avl_unlink_node(avl_tree_t *avltree, avl_node_t *avlnode) {
avl_node_t *parent;
avl_node_t **superparent;
avl_node_t *subst, *left, *right;
avl_node_t *balnode;
if(avlnode->prev)
avlnode->prev->next = avlnode->next;
else
avltree->head = avlnode->next;
if(avlnode->next)
avlnode->next->prev = avlnode->prev;
else
avltree->tail = avlnode->prev;
parent = avlnode->parent;
superparent = parent
? avlnode == parent->left ? &parent->left : &parent->right
: &avltree->top;
left = avlnode->left;
right = avlnode->right;
if(!left) {
*superparent = right;
if(right)
right->parent = parent;
balnode = parent;
} else if(!right) {
*superparent = left;
left->parent = parent;
balnode = parent;
} else {
subst = avlnode->prev;
if(subst == left) {
balnode = subst;
} else {
balnode = subst->parent;
balnode->right = subst->left;
if(balnode->right)
balnode->right->parent = balnode;
subst->left = left;
left->parent = subst;
}
subst->right = right;
subst->parent = parent;
right->parent = subst;
*superparent = subst;
}
avl_rebalance(avltree, balnode);
}
/*
* avl_rebalance:
* Rebalances the tree if one side becomes too heavy. This function
* assumes that both subtrees are AVL-trees with consistent data. The
* function has the additional side effect of recalculating the count of
* the tree at this node. It should be noted that at the return of this
* function, if a rebalance takes place, the top of this subtree is no
* longer going to be the same node.
*/
static void
avl_rebalance(avl_tree_t *avltree, avl_node_t *avlnode) {
avl_node_t *child;
avl_node_t *gchild;
avl_node_t *parent;
avl_node_t **superparent;
parent = avlnode;
while(avlnode) {
parent = avlnode->parent;
superparent = parent
? avlnode == parent->left ? &parent->left : &parent->right
: &avltree->top;
switch(avl_check_balance(avlnode)) {
case -1:
child = avlnode->left;
#ifdef AVL_DEPTH
if(L_DEPTH(child) >= R_DEPTH(child)) {
#else
#ifdef AVL_COUNT
if(L_COUNT(child) >= R_COUNT(child)) {
#else
#error No balancing possible.
#endif
#endif
avlnode->left = child->right;
if(avlnode->left)
avlnode->left->parent = avlnode;
child->right = avlnode;
avlnode->parent = child;
*superparent = child;
child->parent = parent;
#ifdef AVL_COUNT
avlnode->count = CALC_COUNT(avlnode);
child->count = CALC_COUNT(child);
#endif
#ifdef AVL_DEPTH
avlnode->depth = CALC_DEPTH(avlnode);
child->depth = CALC_DEPTH(child);
#endif
} else {
gchild = child->right;
avlnode->left = gchild->right;
if(avlnode->left)
avlnode->left->parent = avlnode;
child->right = gchild->left;
if(child->right)
child->right->parent = child;
gchild->right = avlnode;
if(gchild->right)
gchild->right->parent = gchild;
gchild->left = child;
if(gchild->left)
gchild->left->parent = gchild;
*superparent = gchild;
gchild->parent = parent;
#ifdef AVL_COUNT
avlnode->count = CALC_COUNT(avlnode);
child->count = CALC_COUNT(child);
gchild->count = CALC_COUNT(gchild);
#endif
#ifdef AVL_DEPTH
avlnode->depth = CALC_DEPTH(avlnode);
child->depth = CALC_DEPTH(child);
gchild->depth = CALC_DEPTH(gchild);
#endif
}
break;
case 1:
child = avlnode->right;
#ifdef AVL_DEPTH
if(R_DEPTH(child) >= L_DEPTH(child)) {
#else
#ifdef AVL_COUNT
if(R_COUNT(child) >= L_COUNT(child)) {
#else
#error No balancing possible.
#endif
#endif
avlnode->right = child->left;
if(avlnode->right)
avlnode->right->parent = avlnode;
child->left = avlnode;
avlnode->parent = child;
*superparent = child;
child->parent = parent;
#ifdef AVL_COUNT
avlnode->count = CALC_COUNT(avlnode);
child->count = CALC_COUNT(child);
#endif
#ifdef AVL_DEPTH
avlnode->depth = CALC_DEPTH(avlnode);
child->depth = CALC_DEPTH(child);
#endif
} else {
gchild = child->left;
avlnode->right = gchild->left;
if(avlnode->right)
avlnode->right->parent = avlnode;
child->left = gchild->right;
if(child->left)
child->left->parent = child;
gchild->left = avlnode;
if(gchild->left)
gchild->left->parent = gchild;
gchild->right = child;
if(gchild->right)
gchild->right->parent = gchild;
*superparent = gchild;
gchild->parent = parent;
#ifdef AVL_COUNT
avlnode->count = CALC_COUNT(avlnode);
child->count = CALC_COUNT(child);
gchild->count = CALC_COUNT(gchild);
#endif
#ifdef AVL_DEPTH
avlnode->depth = CALC_DEPTH(avlnode);
child->depth = CALC_DEPTH(child);
gchild->depth = CALC_DEPTH(gchild);
#endif
}
break;
default:
#ifdef AVL_COUNT
avlnode->count = CALC_COUNT(avlnode);
#endif
#ifdef AVL_DEPTH
avlnode->depth = CALC_DEPTH(avlnode);
#endif
}
avlnode = parent;
}
}
/*------------------------------------------------------------------------------
end of functions from AVL-tree library.
*******************************************************************************/
#if !defined(VARIANT) || VARIANT < 1 || VARIANT > 4
#error VARIANT must be either 1, 2, 3 or 4, e.g., 'make VARIANT=4'
#endif
#if __GNUC__ >= 3
# define __hv_unused __attribute__ ((unused))
#else
# define __hv_unused /* no 'unused' attribute available */
#endif
#if VARIANT < 3
# define __variant3_only __hv_unused
#else
# define __variant3_only
#endif
#if VARIANT < 2
# define __variant2_only __hv_unused
#else
# define __variant2_only
#endif
typedef struct dlnode {
double *x; /* The data vector */
struct dlnode **next; /* Next-node vector */
struct dlnode **prev; /* Previous-node vector */
struct avl_node_t * tnode;
int ignore;
int ignore_best; //used in define_order
#if VARIANT >= 2
double *area; /* Area */
#endif
#if VARIANT >= 3
double *vol; /* Volume */
#endif
} dlnode_t;
static avl_tree_t *tree;
#if VARIANT < 4
int stop_dimension = 1; /* default: stop on dimension 2 */
#else
int stop_dimension = 2; /* default: stop on dimension 3 */
#endif
static int compare_node(const void *p1, const void* p2)
{
const double x1 = *((*(const dlnode_t **)p1)->x);
const double x2 = *((*(const dlnode_t **)p2)->x);
return (x1 < x2) ? -1 : (x1 > x2) ? 1 : 0;
}
static int compare_tree_asc(const void *p1, const void *p2)
{
const double *x1 = (const double *)p1;
const double *x2 = (const double *)p2;
return (x1[1] > x2[1]) ? -1 : (x1[1] < x2[1]) ? 1
: (x1[0] >= x2[0]) ? -1 : 1;
}
/*
* Setup circular double-linked list in each dimension
*/
static dlnode_t *
setup_cdllist(double *data, int d, int n)
{
dlnode_t *head;
dlnode_t **scratch;
int i, j;
head = malloc ((n+1) * sizeof(dlnode_t));
head->x = data;
head->ignore = 0; /* should never get used */
head->next = malloc( d * (n+1) * sizeof(dlnode_t*));
head->prev = malloc( d * (n+1) * sizeof(dlnode_t*));
head->tnode = malloc ((n+1) * sizeof(avl_node_t));
#if VARIANT >= 2
head->area = malloc(d * (n+1) * sizeof(double));
#endif
#if VARIANT >= 3
head->vol = malloc(d * (n+1) * sizeof(double));
#endif
for (i = 1; i <= n; i++) {
head[i].x = head[i-1].x + d;/* this will be fixed a few lines below... */
head[i].ignore = 0;
head[i].next = head[i-1].next + d;
head[i].prev = head[i-1].prev + d;
head[i].tnode = head[i-1].tnode + 1;
#if VARIANT >= 2
head[i].area = head[i-1].area + d;
#endif
#if VARIANT >= 3
head[i].vol = head[i-1].vol + d;
#endif
}
head->x = NULL; /* head contains no data */
scratch = malloc(n * sizeof(dlnode_t*));
for (i = 0; i < n; i++)
scratch[i] = head + i + 1;
for (j = d-1; j >= 0; j--) {
for (i = 0; i < n; i++)
scratch[i]->x--;
qsort(scratch, n, sizeof(dlnode_t*), compare_node);
head->next[j] = scratch[0];
scratch[0]->prev[j] = head;
for (i = 1; i < n; i++) {
scratch[i-1]->next[j] = scratch[i];
scratch[i]->prev[j] = scratch[i-1];
}
scratch[n-1]->next[j] = head;
head->prev[j] = scratch[n-1];
}
free(scratch);
for (i = 1; i <= n; i++) {
(head[i].tnode)->item = head[i].x;
}
return head;
}
static void free_cdllist(dlnode_t * head)
{
free(head->tnode); /* Frees _all_ nodes. */
free(head->next);
free(head->prev);
#if VARIANT >= 2
free(head->area);
#endif
#if VARIANT >= 3
free(head->vol);
#endif
free(head);
}
static void delete (dlnode_t *nodep, int dim, double * bound __variant3_only)
{
int i;
for (i = stop_dimension; i < dim; i++) {
nodep->prev[i]->next[i] = nodep->next[i];
nodep->next[i]->prev[i] = nodep->prev[i];
#if VARIANT >= 3
if (bound[i] > nodep->x[i])
bound[i] = nodep->x[i];
#endif
}
}
#if VARIANT >= 2
static void delete_dom (dlnode_t *nodep, int dim)
{
int i;
for (i = stop_dimension; i < dim; i++) {
nodep->prev[i]->next[i] = nodep->next[i];
nodep->next[i]->prev[i] = nodep->prev[i];
}
}
#endif
static void reinsert (dlnode_t *nodep, int dim, double * bound __variant3_only)
{
int i;
for (i = stop_dimension; i < dim; i++) {
nodep->prev[i]->next[i] = nodep;
nodep->next[i]->prev[i] = nodep;
#if VARIANT >= 3
if (bound[i] > nodep->x[i])
bound[i] = nodep->x[i];
#endif
}
}
#if VARIANT >= 2
static void reinsert_dom (dlnode_t *nodep, int dim)
{
int i;
for (i = stop_dimension; i < dim; i++) {
dlnode_t *p = nodep->prev[i];
p->next[i] = nodep;
nodep->next[i]->prev[i] = nodep;
nodep->area[i] = p->area[i];
#if VARIANT >= 3
nodep->vol[i] = p->vol[i] + p->area[i] * (nodep->x[i] - p->x[i]);
#endif
}
}
#endif
static double
hv_recursive(dlnode_t *list, int dim, int c, const double * ref,
double * bound)
{
/* ------------------------------------------------------
General case for dimensions higher than stop_dimension
------------------------------------------------------ */
if ( dim > stop_dimension ) {
dlnode_t *p0 = list;
dlnode_t *p1 = list->prev[dim];
double hyperv = 0;
#if VARIANT == 1
double hypera;
#endif
#if VARIANT >= 2
dlnode_t *pp;
for (pp = p1; pp->x; pp = pp->prev[dim]) {
if (pp->ignore < dim)
pp->ignore = 0;
}
#endif
while (c > 1
#if VARIANT >= 3
/* We delete all points x[dim] > bound[dim]. In case of
repeated coordinates, we also delete all points
x[dim] == bound[dim] except one. */
&& (p1->x[dim] > bound[dim]
|| p1->prev[dim]->x[dim] >= bound[dim])
#endif
) {
p0 = p1;
#if VARIANT >=2
if (p0->ignore >= dim)
delete_dom(p0, dim);
else
delete(p0, dim, bound);
#else
delete(p0, dim, bound);
#endif
p1 = p0->prev[dim];
c--;
}
#if VARIANT == 1
hypera = hv_recursive(list, dim-1, c, ref, bound);
#elif VARIANT == 2
int i;
p1->area[0] = 1;
for (i = 1; i <= dim; i++)
p1->area[i] = p1->area[i-1] * (ref[i-1] - p1->x[i-1]);
#elif VARIANT >= 3
if (c > 1) {
hyperv = p1->prev[dim]->vol[dim] + p1->prev[dim]->area[dim]
* (p1->x[dim] - p1->prev[dim]->x[dim]);
if (p1->ignore >= dim)
p1->area[dim] = p1->prev[dim]->area[dim];
else {
p1->area[dim] = hv_recursive(list, dim - 1, c, ref, bound);
/* At this point, p1 is the point with the highest value in
dimension dim in the list, so if it is dominated in
dimension dim-1, so it is also dominated in dimension
dim. */
if (p1->ignore == (dim - 1))
p1->ignore = dim;
}
} else {
int i;
p1->area[0] = 1;
for (i = 1; i <= dim; i++)
p1->area[i] = p1->area[i-1] * (ref[i-1] - p1->x[i-1]);
}
p1->vol[dim] = hyperv;
#endif
while (p0->x != NULL) {
#if VARIANT == 1
hyperv += hypera * (p0->x[dim] - p1->x[dim]);
#else
hyperv += p1->area[dim] * (p0->x[dim] - p1->x[dim]);
#endif
c++;
#if VARIANT >= 2
if (p0->ignore >= dim) {
reinsert_dom (p0, dim);
p0->area[dim] = p1->area[dim];
} else {
#endif
reinsert (p0, dim, bound);
#if VARIANT >= 2
p0->area[dim] = hv_recursive (list, dim-1, c, ref, bound);
if (p0->ignore == (dim - 1))
p0->ignore = dim;
}
#elif VARIANT == 1
hypera = hv_recursive (list, dim-1, c, ref, NULL);
#endif
p1 = p0;
p0 = p0->next[dim];
#if VARIANT >= 3
p1->vol[dim] = hyperv;
#endif
}
#if VARIANT >= 3
bound[dim] = p1->x[dim];
#endif
#if VARIANT == 1
hyperv += hypera * (ref[dim] - p1->x[dim]);
#else
hyperv += p1->area[dim] * (ref[dim] - p1->x[dim]);
#endif
return hyperv;
}
/* ---------------------------
special case of dimension 3
--------------------------- */
else if (dim == 2) {
double hyperv;
double hypera;
double height;
#if VARIANT >= 3
dlnode_t *pp = list->prev[2];
avl_node_t *tnode;
/* All the points that have value of x[2] lower than bound[2] are points
that were previously processed, so there's no need to process them
again. In this case, every point was processed before, so the
volume is known. */
if (pp->x[2] < bound[2])
return pp->vol[2] + pp->area[2] * (ref[2] - pp->x[2]);
pp = list->next[2];
/* In this case, every point has to be processed. */
if (pp->x[2] >= bound[2]) {
pp->tnode->domr = ref[2];
pp->area[2] = (ref[0] - pp->x[0]) * (ref[1] - pp->x[1]);
pp->vol[2] = 0;
pp->ignore = 0;
} else {
/* Otherwise, we look for the first point that has to be in the
tree, by searching for the first point that isn't dominated or
that is dominated by a point with value of x[2] higher or equal
than bound[2] (domr keeps the value of the x[2] of the point
that dominates pp, or ref[2] if it isn't dominated). */
while (pp->tnode->domr < bound[2]) {
pp = pp->next[2];
}
}
pp->ignore = 0;
avl_insert_top(tree,pp->tnode);
pp->tnode->domr = ref[2];
/* Connect all points that aren't dominated, or that are dominated
but whose dominating point has x[2] (pp->tnode->domr) greater
than or equal to bound[2]. */
for (pp = pp->next[2]; pp->x[2] < bound[2]; pp = pp->next[2]) {
if (pp->tnode->domr >= bound[2]) {
avl_node_t *tnodeaux = pp->tnode;
tnodeaux->domr = ref[2];
if (avl_search_closest(tree, pp->x, &tnode) <= 0)
avl_insert_before(tree, tnode, tnodeaux);
else
avl_insert_after(tree, tnode, tnodeaux);
}
}
pp = pp->prev[2];
hyperv = pp->vol[2];
hypera = pp->area[2];
height = (pp->next[2]->x)
? pp->next[2]->x[2] - pp->x[2]
: ref[2] - pp->x[2];
bound[2] = list->prev[2]->x[2];
#else
/* VARIANT <= 2 */
dlnode_t *pp = list->next[2];
hyperv = 0;
hypera = (ref[0] - pp->x[0])*(ref[1] - pp->x[1]);
height = (c == 1)
? ref[2] - pp->x[2]
: pp->next[2]->x[2] - pp->x[2];
avl_insert_top(tree,pp->tnode);
#endif
hyperv += hypera * height;
for (pp = pp->next[2]; pp->x != NULL; pp = pp->next[2]) {
const double * prv_ip, * nxt_ip;
avl_node_t *tnode;
int cmp;
#if VARIANT >= 3
pp->vol[2] = hyperv;
#endif
height = (pp == list->prev[2])
? ref[2] - pp->x[2]
: pp->next[2]->x[2] - pp->x[2];
#if VARIANT >= 2
if (pp->ignore >= 2) {
hyperv += hypera * height;
#if VARIANT >= 3
pp->area[2] = hypera;
#endif
continue;
}
#endif
cmp = avl_search_closest(tree, pp->x, &tnode);
if (cmp <= 0) {
nxt_ip = (double *)(tnode->item);
} else {
nxt_ip = (tnode->next != NULL)
? (double *)(tnode->next->item)
: ref;
}
if (nxt_ip[0] <= pp->x[0]) {
pp->ignore = 2;
#if VARIANT >= 3
pp->tnode->domr = pp->x[2];
pp->area[2] = hypera;
#endif
if (height > 0)
hyperv += hypera * height;
continue;
}
if (cmp <= 0) {
avl_insert_before(tree, tnode, pp->tnode);
tnode = pp->tnode->prev;
} else {
avl_insert_after(tree, tnode, pp->tnode);
}
#if VARIANT >= 3
pp->tnode->domr = ref[2];
#endif
if (tnode != NULL) {
prv_ip = (double *)(tnode->item);
if (prv_ip[0] >= pp->x[0]) {
const double * cur_ip;
tnode = pp->tnode->prev;
/* cur_ip = point dominated by pp with highest
[0]-coordinate. */
cur_ip = (double *)(tnode->item);
while (tnode->prev) {
prv_ip = (double *)(tnode->prev->item);
hypera -= (prv_ip[1] - cur_ip[1]) * (nxt_ip[0] - cur_ip[0]);
if (prv_ip[0] < pp->x[0])
break; /* prv is not dominated by pp */
cur_ip = prv_ip;
avl_unlink_node(tree,tnode);
#if VARIANT >= 3
/* saves the value of x[2] of the point that
dominates tnode. */
tnode->domr = pp->x[2];
#endif
tnode = tnode->prev;
}
avl_unlink_node(tree, tnode);
#if VARIANT >= 3
tnode->domr = pp->x[2];
#endif
if (!tnode->prev) {
hypera -= (ref[1] - cur_ip[1]) * (nxt_ip[0] - cur_ip[0]);
prv_ip = ref;
}
}
} else
prv_ip = ref;
hypera += (prv_ip[1] - pp->x[1]) * (nxt_ip[0] - pp->x[0]);
if (height > 0)
hyperv += hypera * height;
#if VARIANT >= 3
pp->area[2] = hypera;
#endif
}
avl_clear_tree(tree);
return hyperv;
}
/* special case of dimension 2 */
else if (dim == 1) {
const dlnode_t *p1 = list->next[1];
double hypera = p1->x[0];
double hyperv = 0;
dlnode_t *p0;
while ((p0 = p1->next[1])->x) {
hyperv += (ref[0] - hypera) * (p0->x[1] - p1->x[1]);
if (p0->x[0] < hypera)
hypera = p0->x[0];
else if (p0->ignore == 0)
p0->ignore = 1;
p1 = p0;
}
hyperv += (ref[0] - hypera) * (ref[1] - p1->x[1]);
return hyperv;
}
/* special case of dimension 1 */
else if (dim == 0) {
list->next[0]->ignore = -1;
return (ref[0] - list->next[0]->x[0]);
}
else {
fprintf(stderr, "%s:%d: unreachable condition! \n"
"This is a bug, please report it to "
"manuel.lopez-ibanez@ulb.ac.be\n", __FILE__, __LINE__);
exit(EXIT_FAILURE);
}
}
/*
Removes the point from the circular double-linked list, but it
doesn't remove the data.
*/
static void
filter_delete_node(dlnode_t *node, int d)
{
int i;
for (i = 0; i < d; i++) {
node->next[i]->prev[i] = node->prev[i];
node->prev[i]->next[i] = node->next[i];
}
}
/*
Filters those points that do not strictly dominate the reference
point. This is needed to ensure that the points left are only those
that are needed to calculate the hypervolume.
*/
static int
filter(dlnode_t *list, int d, int n, const double *ref)
{
int i, j;
/* fprintf (stderr, "%d points initially\n", n); */
for (i = 0; i < d; i++) {
dlnode_t *aux = list->prev[i];
int np = n;
for (j = 0; j < np; j++) {
if (aux->x[i] < ref[i])
break;
filter_delete_node (aux, d);
aux = aux->prev[i];
n--;
}
}
/* fprintf (stderr, "%d points remain\n", n); */
return n;
}
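/* For example, with ref = {10, 10}, a point {10, 3} is removed by the
   sweep over dimension 0, because 10 < 10 does not hold: only points
   that strictly dominate the reference point in every dimension are
   kept. (Illustrative values, not part of the library.) */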
#ifdef EXPERIMENTAL
/*
Verifies up to which dimension k, domr dominates p and returns k
(it is assumed that domr doesn't dominate p in dimensions higher than dim).
*/
static int
test_domr(dlnode_t *p, dlnode_t *domr, int dim, int *order)
{
int i;
for(i = 1; i <= dim; i++){
if (p->x[order[i]] < domr->x[order[i]])
return i - 1;
}
return dim;
}
/*
Verifies up to which dimension k the point pp is dominated and
returns k. This function is called only to verify points that
aren't dominated in more than dim dimensions, so k will always be
less than or equal to dim.
*/
static int
test_dom(dlnode_t *list, dlnode_t *pp, int dim, int *order)
{
dlnode_t *p0;
int r, r_b = 0;
int i = order[0];
p0 = list->next[i];
/* Each iteration checks whether p0 dominates pp and up to
which dimension. The goal is to find the point that
dominates pp in the most dimensions, starting at dimension 0.
Points are processed in ascending order of the first
dimension. This means that if a point p0 is dominated in
the first k dimensions, where k >= dim, then the point that
dominates it (in the first k dimensions) was already
processed, so p0 cannot dominate pp in more dimensions than
the point that dominates p0 (because pp can be dominated,
at most, up to dim dimensions, so if p0 dominates pp in
the first y dimensions (y < dim), the point that dominates
p0 also dominates pp in the first y dimensions or more, and
this information is already stored in r_b); hence p0 is
skipped. */
while (p0 != pp) {
if (p0->ignore < dim) {
r = test_domr (pp, p0, dim, order);
/* if pp is dominated in the first dim + 1 dimensions,
it is not necessary to verify other points that
might dominate pp, because pp won't be dominated in
more than dim+1 dimensions. */
if (r == dim) return r;
else if (r > r_b) r_b = r;
}
p0 = p0->next[i];
}
return r_b;
}
/*
Determines the number of dominated points from dimension 0 to k,
where k <= dim.
*/
static void determine_ndom(dlnode_t *list, int dim, int *order, int *count)
{
dlnode_t *p1;
int i, dom;
int ord = order[0];
for (i = 0; i <= dim; i++)
count[i] = 0;
p1 = list->next[ord];
p1->ignore = 0;
p1 = list->next[ord];
while (p1 != list) {
if (p1->ignore <= dim) {
dom = test_dom(list, p1, dim, order);
count[dom]++;
p1->ignore = dom;
}
p1 = p1->next[ord];
}
}
static void delete_dominated(dlnode_t *nodep, int dim)
{
int i;
for (i = 0; i <= dim; i++) {
nodep->prev[i]->next[i] = nodep->next[i];
nodep->next[i]->prev[i] = nodep->prev[i];
}
}
/*
Determines the number of dominated points from dimension 0 to k,
where k <= dim, for the original order of objectives. Also defines
that this order is the best order so far, so every point has the
information up to which dimension it is dominated (ignore) and it is
considered the highest number of dimensions in which it is dominated
(so ignore_best is also updated).
Any point that is dominated in every dimension does not contribute
to the hypervolume, so it is removed as soon as possible; this way
no time is wasted on such points.
Returns the total number of points. */
static int
determine_ndomf(dlnode_t *list, int dim, int c, int *order, int *count)
{
dlnode_t *p1;
int i, dom;
int ord = order[0];
for(i = 0; i <= dim; i++)
count[i] = 0;
p1 = list->next[ord];
p1->ignore = p1->ignore_best = 0;
p1 = list->next[ord];
/* Determines up to which dimension each point is dominated and
uses this information to count the number of dominated points
from dimension 0 to k, where k <= dim.
Points that are dominated in more than the first 'dim'
dimensions will continue to be dominated in those dimensions,
and so they're skipped; it is not necessary to find out again up
to which dimension they're dominated. */
while (p1 != list){
if (p1->ignore <= dim) {
dom = test_dom(list, p1, dim, order);
count[dom]++;
p1->ignore = p1->ignore_best = dom;
}
p1 = p1->next[ord];
}
/* If there is any point dominated in every dimension, it is removed and
the total number of points is updated. */
if (count[dim] > 0) {
p1 = list->prev[0];
while (p1->x) {
if (p1->ignore == dim) {
delete_dominated(p1, dim);
c--;
}
p1 = p1->prev[0];
}
}
return c;
}
/*
This function implements the iterative version of the MDP heuristic described in
L. While, L. Bradstreet, L. Barone, and P. Hingston, "Heuristics for optimising
the calculation of hypervolume for multi-objective optimisation problems", in
Congress on Evolutionary Computation, B. McKay, Ed. IEEE, 2005, pp. 2225-2232
Tries to find a good order to process the objectives.
This algorithm tries to maximize the number of points that are
dominated in as many dimensions as possible. For example, for a problem with d
dimensions, an order with 20 points dominated from dimension 0 to
dimension d-1 is preferred to an order of objectives in which the
number of points dominated from dimension 0 to d-1 is 10. An order
with the same number of points dominated up to dimension d-1 as a
second order is preferred if it has more points dominated up to
dimension d-2 than the second order. */
static int define_order(dlnode_t *list, int dim, int c, int *order)
{
dlnode_t *p;
// order - keeps the current order of objectives
/* best_order - keeps the current best order for the
objectives. At the end, this array (and the array order) will
have the best order found, to process the objectives.
This array keeps the indexes of the objectives, where
best_order[0] keeps the index of the first objective,
best_order[1] keeps the index of the second objective and so on. */
int *best_order = malloc(dim * sizeof(int));
/* count - keeps the counting of the dominated points
corresponding to the order of objectives in 'order'.
When it's found that a point is dominated at most, for the
first four dimensions, then count[3] is incremented. So,
count[i] is incremented every time it's found a point that is
dominated from dimension 0 to i, but not in dimension i+1. */
int *count = malloc(dim * sizeof(int));
/* keeps the best counting of the dominated points (that is
obtained using the order in best_order). */
int *best_count = malloc(dim * sizeof(int));
int i, j, k;
for (i = 0; i < dim; i++) {
best_order[i] = order[i] = i;
best_count[i] = count[i] = 0;
}
// determines the number of dominated points in the original order.
// c - total number of points excluding points totally dominated
c = determine_ndomf(list, dim-1, c, order, count);
/* the best order so far is the original order, so it's necessary
to register the number of points dominated in the best
order. */
for (i = 0; i < dim; i++) {
best_count[i] = count[i];
}
/* Objectives are chosen from highest to lowest. So we start by
defining which objective goes in position dim-1, then which
goes in position dim-2, and so on. The objective chosen for
position i is the one that maximizes the number of dominated
points from dimension 0 to i-1. So this cycle selects a
position i, and then finds the objective (among the remaining
objectives that don't have a position yet, i.e. those in
positions less than or equal to i) that, when placed in
position i, maximizes the number of points dominated from
dimension 0 to i-1. */
for (i = dim - 1; i > 2; i--) {
/* Each iteration of this cycle assigns a different
objective to position i. Note that, when checking
whether an objective k in position i is the one that
maximizes the number of dominated points from dimension
0 to i-1, the order of the objectives in positions lower
than i does not matter: the number of dominated points
from dimension 0 to i-1 is always the same, so there is
no need to worry about the order of those objectives.
When this cycle starts, either 'order' holds the original
order (and 'count' the number of points dominated from
dimension 0 to every k, where k < dim), or 'order' holds
the last order of objectives used to find the best
objective for position i+1 (and 'count' the corresponding
counts of dominated points, computed previously).
From there on, it is not necessary to recompute the number
of dominated points from dimension 0 to i-1 with the
current objective in position i (order[i]), because that
value was computed previously; it is only necessary to
compute the counts when each objective order[k], where
k < i, is placed in position i. */
for (j = 0; j < i; j++) {
int aux = order[i];
order[i] = order[j];
order[j] = aux;
/* Determine the number of dominated points from dimension
0 to k, where k < i (the number of points dominated
from dimension 0 to t, where t >= i, is already known
from previous calculations) with a different objective
in position i. */
determine_ndom(list, i-1, order, count);
/* If the order in 'order' is better than the previous
best order, then the current order becomes the best. An
order is better than another if it has more dominated
points from dimension 0 to i-1. If this number is
equal, the better one is the one with more dominated
points from dimension 0 to i-2. If that number is also
equal, the order previously considered best remains the
best order so far. */
if (best_count[i-1] < count[i-1]
|| (best_count[i-1] == count[i-1]
&& best_count[i-2] < count[i-2])) {
for (k = 0; k <= i; k++) {
best_count[k] = count[k];
best_order[k] = order[k];
}
p = list->prev[0];
while (p != list) {
p->ignore_best = p->ignore;
p = p->prev[0];
}
}
}
/*
If necessary, update 'order' with the best order so far and
the corresponding number of dominated points. In this way,
in the next iteration it is not necessary to recalculate the
number of dominated points from dimension 0 to i-2 when
position i-1 holds the objective that occupies position i-1
in the best order so far (best_order[i-1]).
*/
if (order[i] != best_order[i]) {
for (j = 0; j <= i; j++) {
count[j] = best_count[j];
order[j] = best_order[j];
}
p = list->prev[0];
/*
The information about which points are dominated is updated
so that, in some cases, it is not necessary to determine
again whether a point is dominated.
*/
while (p != list) {
p->ignore = p->ignore_best;
p = p->prev[0];
}
}
}
free(count);
free(best_count);
free(best_order);
return c;
}
/*
Reorders the reference point's objectives according to an order 'order'.
*/
static void reorder_reference(double *reference, int d, int *order)
{
int j;
double *tmp = (double *) malloc(d * sizeof(double));
for (j = 0; j < d; j++) {
tmp[j] = reference[j];
}
for (j = 0; j < d; j++) {
reference[j] = tmp[order[j]];
}
free(tmp);
}
/*
Reorders the dimensions for every point according to an order.
*/
void reorder_list(dlnode_t *list, int d, int *order)
{
int j;
double *x;
double *tmp = (double *) malloc(d * sizeof(double));
dlnode_t **prev = (dlnode_t **) malloc(d * sizeof(dlnode_t *));
dlnode_t **next = (dlnode_t **) malloc(d * sizeof(dlnode_t *));
dlnode_t *p;
for(j = 0; j < d; j++) {
prev[j] = list->prev[j];
next[j] = list->next[j];
}
for(j = 0; j < d; j++) {
list->prev[j] = prev[order[j]];
list->next[j] = next[order[j]];
}
p = list->next[0];
while (p != list) {
p->ignore = 0;
x = p->x;
for(j = 0; j < d; j++) {
tmp[j] = x[j];
prev[j] = p->prev[j];
next[j] = p->next[j];
}
for(j = 0; j < d; j++) {
x[j] = tmp[order[j]];
p->prev[j] = prev[order[j]];
p->next[j] = next[order[j]];
}
p = p->next[0];
}
free(tmp);
free(prev);
free(next);
}
#endif
double fpli_hv(double *data, int d, int n, const double *ref)
{
dlnode_t *list;
double hyperv;
double * bound = NULL;
int i;
#if VARIANT >= 3
bound = malloc (d * sizeof(double));
for (i = 0; i < d; i++) bound[i] = -DBL_MAX;
#endif
tree = avl_alloc_tree ((avl_compare_t) compare_tree_asc,
(avl_freeitem_t) NULL);
list = setup_cdllist(data, d, n);
n = filter(list, d, n, ref);
if (n == 0) {
hyperv = 0.0;
} else if (n == 1) {
dlnode_t * p = list->next[0];
hyperv = 1;
for (i = 0; i < d; i++)
hyperv *= ref[i] - p->x[i];
} else {
hyperv = hv_recursive(list, d-1, n, ref, bound);
}
/* Clean up. */
free_cdllist (list);
free (tree); /* The nodes are freed by free_cdllist (). */
free (bound);
return hyperv;
}
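/* A minimal usage sketch (illustrative values, not part of the library).
   'data' is a flat row-major array of n points with d coordinates each,
   and every point should strictly dominate 'ref' (minimization is
   assumed):

       double data[] = { 1.0, 9.0,
                         5.0, 5.0,
                         9.0, 1.0 };
       double ref[] = { 10.0, 10.0 };
       double volume = fpli_hv(data, 2, 3, ref);
*/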
#ifdef EXPERIMENTAL
#include "timer.h" /* FIXME: Avoid calling Timer functions here. */
double fpli_hv_order(double *data, int d, int n, const double *ref, int *order,
double *order_time, double *hv_time)
{
dlnode_t *list;
double hyperv;
double * bound = NULL;
double * ref_ord = (double *) malloc(d * sizeof(double));
#if VARIANT >= 3
int i;
bound = malloc (d * sizeof(double));
for (i = 0; i < d; i++) bound[i] = -DBL_MAX;
#endif
tree = avl_alloc_tree ((avl_compare_t) compare_tree_asc,
(avl_freeitem_t) NULL);
list = setup_cdllist(data, d, n);
if (d > 3) {
n = define_order(list, d, n, order);
reorder_list(list, d, order);
// copy reference so it will be unchanged for the next data sets.
for (i = 0; i < d; i++)
ref_ord[i] = ref[i];
reorder_reference(ref_ord, d, order);
} else {
for(i = 0; i < d; i++)
ref_ord[i] = ref[i];
}
*order_time = Timer_elapsed_virtual ();
Timer_start();
n = filter(list, d, n, ref_ord);
if (n == 0) {
hyperv = 0.0;
} else if (n == 1) {
hyperv = 1;
dlnode_t * p = list->next[0];
for (i = 0; i < d; i++)
hyperv *= ref[i] - p->x[i];
} else {
hyperv = hv_recursive(list, d-1, n, ref, bound);
}
/* Clean up. */
free_cdllist (list);
free (tree); /* The nodes are freed by free_cdllist (). */
free (bound);
free (ref_ord);
*hv_time = Timer_elapsed_virtual ();
return hyperv;
}
#endif
deap-1.4.1/deap/tools/_hypervolume/_hv.h
/*************************************************************************
hv.h
---------------------------------------------------------------------
Copyright (c) 2010
Carlos M. Fonseca
Manuel Lopez-Ibanez
Luis Paquete
Andreia P. Guerreiro
This program is free software (software libre); you can redistribute
it and/or modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version. As a particular
exception, the contents of this file (hv.h) may also be redistributed
and/or modified under the terms of the GNU Lesser General Public
License (LGPL) as published by the Free Software Foundation; either
version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, you can obtain a copy of the GNU
General Public License at:
http://www.gnu.org/copyleft/gpl.html
or by writing to:
Free Software Foundation, Inc., 59 Temple Place,
Suite 330, Boston, MA 02111-1307 USA
----------------------------------------------------------------------
*************************************************************************/
#ifndef HV_H_
#define HV_H_
#ifdef __cplusplus
extern "C" {
#endif
extern int stop_dimension;
double fpli_hv(double *data, int d, int n, const double *ref);
#ifdef EXPERIMENTAL
double fpli_hv_order(double *data, int d, int n, const double *ref, int *order,
double *order_time, double *hv_time);
#endif
#ifdef __cplusplus
}
#endif
#endif
deap-1.4.1/deap/tools/_hypervolume/hv.cpp
/*
* This file is part of DEAP.
*
* DEAP is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as
* published by the Free Software Foundation, either version 3 of
* the License, or (at your option) any later version.
*
* DEAP is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
*/
#include <Python.h>
#if PY_MAJOR_VERSION >= 3
#define PY3K
#endif
#include <stdlib.h>
#include <stdio.h>
#include "_hv.h"
static PyObject* hypervolume(PyObject *self, PyObject *args){
// Args[0]: Point list
// Args[1]: Reference point
// Return: The hypervolume as a double
PyObject *lPyPointSet = PyTuple_GetItem(args, 0);
PyObject *lPyReference = PyTuple_GetItem(args, 1);
int lNumPoints = 0;
int lDim = -1;
double *lPointSet = NULL;
if(PySequence_Check(lPyPointSet)){
lNumPoints = PySequence_Size(lPyPointSet);
unsigned int lPointCount = 0;
for(int i = 0; i < lNumPoints; ++i){
PyObject *lPyPoint = PySequence_GetItem(lPyPointSet, i);
if(PySequence_Check(lPyPoint)){
if(lDim < 0){
lDim = PySequence_Size(lPyPoint);
lPointSet = new double[lNumPoints*lDim];
}
for(int j = 0; j < lDim; ++j){
PyObject *lPyCoord = PySequence_GetItem(lPyPoint, j);
lPointSet[lPointCount++] = PyFloat_AsDouble(lPyCoord);
Py_DECREF(lPyCoord);
lPyCoord = NULL;
if(PyErr_Occurred()){
PyErr_SetString(PyExc_TypeError,"Points must contain double type values");
delete[] lPointSet;
return NULL;
}
}
Py_DECREF(lPyPoint);
lPyPoint = NULL;
} else {
Py_DECREF(lPyPoint);
lPyPoint = NULL;
PyErr_SetString(PyExc_TypeError,"First argument must contain only points");
delete[] lPointSet;  /* allocated with new[], so delete[] must be used, not free() */
return NULL;
}
}
} else {
PyErr_SetString(PyExc_TypeError,"First argument must be a list of points");
return NULL;
}
double *lReference = NULL;
if(PySequence_Check(lPyReference)){
if(PySequence_Size(lPyReference) == lDim){
lReference = new double[lDim];
for(int i = 0; i < lDim; ++i){
PyObject *lPyCoord = PySequence_GetItem(lPyReference, i);
lReference[i] = PyFloat_AsDouble(lPyCoord);
Py_DECREF(lPyCoord);
lPyCoord = NULL;
if(PyErr_Occurred()){
PyErr_SetString(PyExc_TypeError,"Reference point must contain double type values");
delete[] lReference;
return NULL;
}
}
} else {
PyErr_SetString(PyExc_TypeError,"Reference point is not of same dimensionality as point set");
return NULL;
}
} else {
PyErr_SetString(PyExc_TypeError,"Second argument must be a point");
return NULL;
}
double lHypervolume = fpli_hv(lPointSet, lDim, lNumPoints, lReference);
delete[] lPointSet;
delete[] lReference;
return PyFloat_FromDouble(lHypervolume);
}
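// A minimal usage sketch from Python, assuming the extension has been built
// and is importable as deap.tools.hv (illustrative values):
//
//     from deap.tools import hv
//     print(hv.hypervolume([(1.0, 9.0), (5.0, 5.0), (9.0, 1.0)], (10.0, 10.0)))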
static PyMethodDef hvMethods[] = {
{"hypervolume", hypervolume, METH_VARARGS,
"Hypervolume Computation"},
{NULL, NULL, 0, NULL} /* Sentinel */
};
#ifdef PY3K
static struct PyModuleDef moduledef = {
PyModuleDef_HEAD_INIT,
"hv", /* m_name */
"C Hypervolumes methods.", /* m_doc */
-1, /* m_size */
hvMethods, /* m_methods */
NULL, /* m_reload */
NULL, /* m_traverse */
NULL, /* m_clear */
NULL, /* m_free */
};
#endif
PyMODINIT_FUNC
#ifdef PY3K
PyInit_hv(void)
#else
inithv(void)
#endif
{
#ifdef PY3K
PyObject *lModule = PyModule_Create(&moduledef);
if(lModule == NULL)
return NULL;
return lModule;
#else
(void) Py_InitModule("hv", hvMethods);
#endif
}
deap-1.4.1/deap/tools/_hypervolume/pyhv.py
# This file is part of DEAP.
#
# Copyright (C) 2010 Simon Wessing
# TU Dortmund University
#
# In personal communication, the original authors authorized DEAP team
# to use this file under the Lesser General Public License.
#
# You can find the original library here:
# http://ls11-www.cs.uni-dortmund.de/_media/rudolph/hypervolume/hv_python.zip
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import warnings
import numpy
def hypervolume(pointset, ref):
"""Compute the absolute hypervolume of a *pointset* according to the
reference point *ref*.
"""
warnings.warn("Falling back to the python version of hypervolume "
"module. Expect this to be very slow.", RuntimeWarning)
hv = _HyperVolume(ref)
return hv.compute(pointset)
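# A minimal usage sketch (illustrative values; minimization is assumed and
# every point should dominate the reference point). compute() expects a
# numpy array so that the points can be translated in place:
#
#   front = numpy.array([(1.0, 9.0), (5.0, 5.0), (9.0, 1.0)])
#   volume = hypervolume(front, numpy.array([10.0, 10.0]))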
class _HyperVolume:
"""
Hypervolume computation based on variant 3 of the algorithm in the paper:
C. M. Fonseca, L. Paquete, and M. Lopez-Ibanez. An improved dimension-sweep
algorithm for the hypervolume indicator. In IEEE Congress on Evolutionary
Computation, pages 1157-1163, Vancouver, Canada, July 2006.
Minimization is implicitly assumed here!
"""
def __init__(self, referencePoint):
"""Constructor."""
self.referencePoint = referencePoint
self.list = []
def compute(self, front):
"""Returns the hypervolume that is dominated by a non-dominated front.
Before the HV computation, front and reference point are translated, so
that the reference point is [0, ..., 0].
"""
def weaklyDominates(point, other):
for i in range(len(point)):
if point[i] > other[i]:
return False
return True
relevantPoints = []
referencePoint = self.referencePoint
dimensions = len(referencePoint)
#######
# fmder: Here it is assumed that every point dominates the reference point
# for point in front:
# # only consider points that dominate the reference point
# if weaklyDominates(point, referencePoint):
# relevantPoints.append(point)
relevantPoints = front
# fmder
#######
if any(referencePoint):
# shift points so that referencePoint == [0, ..., 0]
# this way the reference point doesn't have to be explicitly used
# in the HV computation
#######
# fmder: Assume relevantPoints are numpy array
# for j in xrange(len(relevantPoints)):
# relevantPoints[j] = [relevantPoints[j][i] - referencePoint[i] for i in xrange(dimensions)]
relevantPoints -= referencePoint
# fmder
#######
self.preProcess(relevantPoints)
bounds = [-1.0e308] * dimensions
hyperVolume = self.hvRecursive(dimensions - 1, len(relevantPoints), bounds)
return hyperVolume
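# Note: after the translation above, every coordinate is <= 0, which is why
# hvRecursive below can work with the raw (non-positive) coordinates instead
# of subtracting them from the reference point.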
def hvRecursive(self, dimIndex, length, bounds):
"""Recursive call to hypervolume calculation.
In contrast to the paper, the code assumes that the reference point
is [0, ..., 0]. This allows the avoidance of a few operations.
"""
hvol = 0.0
sentinel = self.list.sentinel
if length == 0:
return hvol
elif dimIndex == 0:
# special case: only one dimension
# why use hypervolume at all in this case?
return -sentinel.next[0].cargo[0]
elif dimIndex == 1:
# special case: two dimensions, end recursion
q = sentinel.next[1]
h = q.cargo[0]
p = q.next[1]
while p is not sentinel:
pCargo = p.cargo
hvol += h * (q.cargo[1] - pCargo[1])
if pCargo[0] < h:
h = pCargo[0]
q = p
p = q.next[1]
hvol += h * q.cargo[1]
return hvol
else:
remove = self.list.remove
reinsert = self.list.reinsert
hvRecursive = self.hvRecursive
p = sentinel
q = p.prev[dimIndex]
while q.cargo is not None:
if q.ignore < dimIndex:
q.ignore = 0
q = q.prev[dimIndex]
q = p.prev[dimIndex]
while length > 1 and (q.cargo[dimIndex] > bounds[dimIndex] or q.prev[dimIndex].cargo[dimIndex] >= bounds[dimIndex]):
p = q
remove(p, dimIndex, bounds)
q = p.prev[dimIndex]
length -= 1
qArea = q.area
qCargo = q.cargo
qPrevDimIndex = q.prev[dimIndex]
if length > 1:
hvol = qPrevDimIndex.volume[dimIndex] + qPrevDimIndex.area[dimIndex] * (qCargo[dimIndex] - qPrevDimIndex.cargo[dimIndex])
else:
qArea[0] = 1
qArea[1:dimIndex+1] = [qArea[i] * -qCargo[i] for i in range(dimIndex)]
q.volume[dimIndex] = hvol
if q.ignore >= dimIndex:
qArea[dimIndex] = qPrevDimIndex.area[dimIndex]
else:
qArea[dimIndex] = hvRecursive(dimIndex - 1, length, bounds)
if qArea[dimIndex] <= qPrevDimIndex.area[dimIndex]:
q.ignore = dimIndex
while p is not sentinel:
pCargoDimIndex = p.cargo[dimIndex]
hvol += q.area[dimIndex] * (pCargoDimIndex - q.cargo[dimIndex])
bounds[dimIndex] = pCargoDimIndex
reinsert(p, dimIndex, bounds)
length += 1
q = p
p = p.next[dimIndex]
q.volume[dimIndex] = hvol
if q.ignore >= dimIndex:
q.area[dimIndex] = q.prev[dimIndex].area[dimIndex]
else:
q.area[dimIndex] = hvRecursive(dimIndex - 1, length, bounds)
if q.area[dimIndex] <= q.prev[dimIndex].area[dimIndex]:
q.ignore = dimIndex
hvol -= q.area[dimIndex] * q.cargo[dimIndex]
return hvol
def preProcess(self, front):
"""Sets up the list data structure needed for calculation."""
dimensions = len(self.referencePoint)
nodeList = _MultiList(dimensions)
nodes = [_MultiList.Node(dimensions, point) for point in front]
for i in range(dimensions):
self.sortByDimension(nodes, i)
nodeList.extend(nodes, i)
self.list = nodeList
def sortByDimension(self, nodes, i):
"""Sorts the list of nodes by the i-th value of the contained points."""
# build a list of tuples of (point[i], node)
decorated = [(node.cargo[i], node) for node in nodes]
# sort by this value
decorated.sort()
# write back to original list
nodes[:] = [node for (_, node) in decorated]
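# Note: this decorate-sort-undecorate idiom predates key functions; a nearly
# equivalent one-liner (ties would simply keep their original order under the
# stable sort) is nodes.sort(key=lambda node: node.cargo[i]).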
class _MultiList:
"""A special data structure needed by FonsecaHyperVolume.
It consists of several doubly linked lists that share common nodes. So,
every node has multiple predecessors and successors, one in every list.
"""
class Node:
def __init__(self, numberLists, cargo=None):
self.cargo = cargo
self.next = [None] * numberLists
self.prev = [None] * numberLists
self.ignore = 0
self.area = [0.0] * numberLists
self.volume = [0.0] * numberLists
def __str__(self):
return str(self.cargo)
def __lt__(self, other):
return all(self.cargo < other.cargo)
def __init__(self, numberLists):
"""Constructor.
Builds 'numberLists' doubly linked lists.
"""
self.numberLists = numberLists
self.sentinel = _MultiList.Node(numberLists)
self.sentinel.next = [self.sentinel] * numberLists
self.sentinel.prev = [self.sentinel] * numberLists
def __str__(self):
strings = []
for i in range(self.numberLists):
currentList = []
node = self.sentinel.next[i]
while node != self.sentinel:
currentList.append(str(node))
node = node.next[i]
strings.append(str(currentList))
stringRepr = ""
for string in strings:
stringRepr += string + "\n"
return stringRepr
def __len__(self):
"""Returns the number of lists that are included in this _MultiList."""
return self.numberLists
def getLength(self, i):
"""Returns the length of the i-th list."""
length = 0
sentinel = self.sentinel
node = sentinel.next[i]
while node != sentinel:
length += 1
node = node.next[i]
return length
def append(self, node, index):
"""Appends a node to the end of the list at the given index."""
lastButOne = self.sentinel.prev[index]
node.next[index] = self.sentinel
node.prev[index] = lastButOne
# set the last element as the new one
self.sentinel.prev[index] = node
lastButOne.next[index] = node
def extend(self, nodes, index):
"""Extends the list at the given index with the nodes."""
sentinel = self.sentinel
for node in nodes:
lastButOne = sentinel.prev[index]
node.next[index] = sentinel
node.prev[index] = lastButOne
# set the last element as the new one
sentinel.prev[index] = node
lastButOne.next[index] = node
def remove(self, node, index, bounds):
"""Removes and returns 'node' from all lists in [0, 'index'[."""
for i in range(index):
predecessor = node.prev[i]
successor = node.next[i]
predecessor.next[i] = successor
successor.prev[i] = predecessor
if bounds[i] > node.cargo[i]:
bounds[i] = node.cargo[i]
return node
def reinsert(self, node, index, bounds):
"""
Inserts 'node' at the position it had in all lists in [0, 'index'[
before it was removed. This method assumes that the next and previous
nodes of the node that is reinserted are in the list.
"""
for i in range(index):
node.prev[i].next[i] = node
node.next[i].prev[i] = node
if bounds[i] > node.cargo[i]:
bounds[i] = node.cargo[i]
__all__ = ["hypervolume"]
if __name__ == "__main__":
try:
from deap.tools import hv
except ImportError:
hv = None
print("Cannot import C version of hypervolume")
pointset = numpy.array([(a, a) for a in numpy.arange(1, 0, -0.01)])
ref = numpy.array([2, 2])
print("Python version: %f" % hypervolume(pointset, ref))
if hv:
print("C version: %f" % hv.hypervolume(pointset, ref))
print("Approximated: %f" % hypervolume_approximation(pointset, ref))
deap-1.4.1/deap/tools/constraint.py
from functools import wraps
from itertools import repeat
try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence
class DeltaPenalty(object):
r"""This decorator returns penalized fitness for invalid individuals and the
original fitness value for valid individuals. The penalized fitness is made
of a constant factor *delta* added with an (optional) *distance* penalty. The
distance function, if provided, shall return a value growing as the
individual moves away from the valid zone.
:param feasibility: A function returning the validity status of any
individual.
:param delta: Constant or array of constants returned for an invalid individual.
:param distance: A function returning the distance between the individual
and a given valid point. The distance function can also return a sequence
of length equal to the number of objectives to affect multi-objective
fitnesses differently (optional, defaults to 0).
:returns: A decorator for evaluation function.
This function relies on the fitness weights to correctly add the distance.
The fitness value of the ith objective is defined as
.. math::
f^\mathrm{penalty}_i(\mathbf{x}) = \Delta_i - w_i d_i(\mathbf{x})
where :math:`\mathbf{x}` is the individual, :math:`\Delta_i` is a user defined
constant and :math:`w_i` is the weight of the ith objective. :math:`\Delta`
should be worse than the fitness of any possible individual; this means
higher than any fitness for minimization and lower than any fitness for
maximization.
See the :doc:`/tutorials/advanced/constraints` for an example.
"""
def __init__(self, feasibility, delta, distance=None):
self.fbty_fct = feasibility
if not isinstance(delta, Sequence):
self.delta = repeat(delta)
else:
self.delta = delta
self.dist_fct = distance
def __call__(self, func):
@wraps(func)
def wrapper(individual, *args, **kwargs):
if self.fbty_fct(individual):
return func(individual, *args, **kwargs)
weights = tuple(1 if w >= 0 else -1 for w in individual.fitness.weights)
dists = tuple(0 for w in individual.fitness.weights)
if self.dist_fct is not None:
dists = self.dist_fct(individual)
if not isinstance(dists, Sequence):
dists = repeat(dists)
return tuple(d - w * dist for d, w, dist in zip(self.delta, weights, dists))
return wrapper
DeltaPenality = DeltaPenalty
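# A minimal usage sketch ('valid' is a feasibility predicate like the one in
# the __main__ block below; 'dist_to_feasible' is a hypothetical one-argument
# function returning the distance of an individual to the valid region):
#
#   toolbox.decorate("evaluate", DeltaPenalty(valid, 100.0, dist_to_feasible))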
class ClosestValidPenalty(object):
r"""This decorator returns penalized fitness for invalid individuals and the
original fitness value for valid individuals. The penalized fitness is made
of the fitness of the closest valid individual added with a weighted
(optional) *distance* penalty. The distance function, if provided, shall
return a value growing as the individual moves away from the valid zone.
:param feasibility: A function returning the validity status of any
individual.
:param feasible: A function returning the closest feasible individual
from the current invalid individual.
:param alpha: Multiplication factor on the distance between the valid and
invalid individual.
:param distance: A function returning the distance between the individual
and a given valid point. The distance function can also return a sequence
of length equal to the number of objectives to affect multi-objective
fitnesses differently (optional, defaults to 0).
:returns: A decorator for evaluation function.
This function relies on the fitness weights to correctly add the distance.
The fitness value of the ith objective is defined as
.. math::
f^\mathrm{penalty}_i(\mathbf{x}) = f_i(\operatorname{valid}(\mathbf{x})) - \alpha w_i d_i(\operatorname{valid}(\mathbf{x}), \mathbf{x})
where :math:`\mathbf{x}` is the individual,
:math:`\operatorname{valid}(\mathbf{x})` is a function returning the closest
valid individual to :math:`\mathbf{x}`, :math:`\alpha` is the distance
multiplicative factor and :math:`w_i` is the weight of the ith objective.
"""
def __init__(self, feasibility, feasible, alpha, distance=None):
self.fbty_fct = feasibility
self.fbl_fct = feasible
self.alpha = alpha
self.dist_fct = distance
def __call__(self, func):
@wraps(func)
def wrapper(individual, *args, **kwargs):
if self.fbty_fct(individual):
return func(individual, *args, **kwargs)
f_ind = self.fbl_fct(individual)
# print("individual", f_ind)
f_fbl = func(f_ind, *args, **kwargs)
# print("feasible", f_fbl)
weights = tuple(1.0 if w >= 0 else -1.0 for w in individual.fitness.weights)
if len(weights) != len(f_fbl):
raise IndexError("Fitness weights and computed fitness are of different size.")
dists = tuple(0 for w in individual.fitness.weights)
if self.dist_fct is not None:
dists = self.dist_fct(f_ind, individual)
if not isinstance(dists, Sequence):
dists = repeat(dists)
# print("penalty ", tuple( - w * self.alpha * d for f, w, d in zip(f_fbl, weights, dists)))
# print("returned", tuple(f - w * self.alpha * d for f, w, d in zip(f_fbl, weights, dists)))
return tuple(f - w * self.alpha * d for f, w, d in zip(f_fbl, weights, dists))
return wrapper
ClosestValidPenality = ClosestValidPenalty
# List of exported function names.
__all__ = ['DeltaPenalty', 'ClosestValidPenalty', 'DeltaPenality', 'ClosestValidPenality']
if __name__ == "__main__":
from deap import base
from deap import benchmarks
from deap import creator
import numpy
MIN_BOUND = numpy.array([0] * 30)
MAX_BOUND = numpy.array([1] * 30)
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin)
def distance(feasible_ind, original_ind):
"""A distance function to the feasibility region."""
return sum((f - o)**2 for f, o in zip(feasible_ind, original_ind))
def closest_feasible(individual):
"""A function returning a valid individual from an invalid one."""
feasible_ind = numpy.array(individual)
feasible_ind = numpy.maximum(MIN_BOUND, feasible_ind)
feasible_ind = numpy.minimum(MAX_BOUND, feasible_ind)
return feasible_ind
def valid(individual):
"""Determines if the individual is valid or not."""
if any(individual < MIN_BOUND) or any(individual > MAX_BOUND):
return False
return True
toolbox = base.Toolbox()
toolbox.register("evaluate", benchmarks.zdt2)
toolbox.decorate("evaluate", ClosestValidPenalty(valid, closest_feasible, 1.0e-6, distance))
ind1 = creator.Individual((-5.6468535666e-01, 2.2483050478e+00, -1.1087909644e+00, -1.2710112861e-01, 1.1682438733e+00, -1.3642007438e+00, -2.1916417835e-01, -5.9137308999e-01, -1.0870160336e+00, 6.0515070232e-01, 2.1532075914e+00, -2.6164718271e-01, 1.5244071578e+00, -1.0324305612e+00, 1.2858152343e+00, -1.2584683962e+00, 1.2054392372e+00, -1.7429571973e+00, -1.3517256013e-01, -2.6493429355e+00, -1.3051320798e-01, 2.2641961090e+00, -2.5027232340e+00, -1.2844874148e+00, 1.9955852925e+00, -1.2942218834e+00, 3.1340109155e+00, 1.6440111097e+00, -1.7750105857e+00, 7.7610242710e-01))
print(toolbox.evaluate(ind1))
print("Individuals is valid: %s" % ("True" if valid(ind1) else "False"))
deap-1.4.1/deap/tools/crossover.py
import random
import warnings
try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence
from itertools import repeat
######################################
# GA Crossovers #
######################################
def cxOnePoint(ind1, ind2):
"""Executes a one point crossover on the input :term:`sequence` individuals.
The two individuals are modified in place. The resulting individuals will
respectively have the length of the other.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:returns: A tuple of two individuals.
This function uses the :func:`~random.randint` function from the
python base :mod:`random` module.
"""
size = min(len(ind1), len(ind2))
cxpoint = random.randint(1, size - 1)
ind1[cxpoint:], ind2[cxpoint:] = ind2[cxpoint:], ind1[cxpoint:]
return ind1, ind2
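# A minimal usage sketch (plain lists stand in for DEAP individuals; the
# crossover point is random, so the result shown is just one possibility):
#
#   ind1, ind2 = [1, 2, 3, 4], [5, 6, 7, 8]
#   cxOnePoint(ind1, ind2)  # e.g. ind1 == [1, 2, 7, 8], ind2 == [5, 6, 3, 4]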
def cxTwoPoint(ind1, ind2):
"""Executes a two-point crossover on the input :term:`sequence`
individuals. The two individuals are modified in place and both keep
their original length.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:returns: A tuple of two individuals.
This function uses the :func:`~random.randint` function from the Python
base :mod:`random` module.
"""
size = min(len(ind1), len(ind2))
cxpoint1 = random.randint(1, size)
cxpoint2 = random.randint(1, size - 1)
if cxpoint2 >= cxpoint1:
cxpoint2 += 1
else: # Swap the two cx points
cxpoint1, cxpoint2 = cxpoint2, cxpoint1
ind1[cxpoint1:cxpoint2], ind2[cxpoint1:cxpoint2] \
= ind2[cxpoint1:cxpoint2], ind1[cxpoint1:cxpoint2]
return ind1, ind2
def cxTwoPoints(ind1, ind2):
"""
.. deprecated:: 1.0
The function has been renamed. Use :func:`~deap.tools.cxTwoPoint` instead.
"""
warnings.warn("tools.cxTwoPoints has been renamed. Use cxTwoPoint instead.",
FutureWarning)
return cxTwoPoint(ind1, ind2)
def cxUniform(ind1, ind2, indpb):
"""Executes a uniform crossover that modify in place the two
:term:`sequence` individuals. The attributes are swapped according to the
*indpb* probability.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:param indpb: Independent probability for each attribute to be exchanged.
:returns: A tuple of two individuals.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
"""
size = min(len(ind1), len(ind2))
for i in range(size):
if random.random() < indpb:
ind1[i], ind2[i] = ind2[i], ind1[i]
return ind1, ind2
def cxPartialyMatched(ind1, ind2):
"""Executes a partially matched crossover (PMX) on the input individuals.
The two individuals are modified in place. This crossover expects
:term:`sequence` individuals of indices; the result for any other type of
individuals is unpredictable.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:returns: A tuple of two individuals.
Moreover, this crossover generates two children by matching
pairs of values in a certain range of the two parents and swapping the values
of those indexes. For more details see [Goldberg1985]_.
This function uses the :func:`~random.randint` function from the python base
:mod:`random` module.
.. [Goldberg1985] Goldberg and Lingel, "Alleles, loci, and the traveling
salesman problem", 1985.
"""
size = min(len(ind1), len(ind2))
p1, p2 = [0] * size, [0] * size
# Initialize the position of each index in the individuals
for i in range(size):
p1[ind1[i]] = i
p2[ind2[i]] = i
# Choose crossover points
cxpoint1 = random.randint(0, size)
cxpoint2 = random.randint(0, size - 1)
if cxpoint2 >= cxpoint1:
cxpoint2 += 1
else: # Swap the two cx points
cxpoint1, cxpoint2 = cxpoint2, cxpoint1
# Apply crossover between cx points
for i in range(cxpoint1, cxpoint2):
# Keep track of the selected values
temp1 = ind1[i]
temp2 = ind2[i]
# Swap the matched value
ind1[i], ind1[p1[temp2]] = temp2, temp1
ind2[i], ind2[p2[temp1]] = temp1, temp2
# Position bookkeeping
p1[temp1], p1[temp2] = p1[temp2], p1[temp1]
p2[temp1], p2[temp2] = p2[temp2], p2[temp1]
return ind1, ind2
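# Note: PMX assumes both individuals are permutations of range(len(ind)),
# e.g. cxPartialyMatched([0, 2, 1, 3], [3, 1, 2, 0]); values outside that
# range would index out of bounds in the position arrays p1 and p2.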
def cxUniformPartialyMatched(ind1, ind2, indpb):
"""Executes a uniform partially matched crossover (UPMX) on the input
individuals. The two individuals are modified in place. This crossover
expects :term:`sequence` individuals of indices; the result for any other
type of individuals is unpredictable.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:returns: A tuple of two individuals.
Moreover, this crossover generates two children by matching
pairs of values chosen at random with a probability of *indpb* in the two
parents and swapping the values of those indexes. For more details see
[Cicirello2000]_.
This function uses the :func:`~random.random` and :func:`~random.randint`
functions from the python base :mod:`random` module.
.. [Cicirello2000] Cicirello and Smith, "Modeling GA performance for
control parameter optimization", 2000.
"""
size = min(len(ind1), len(ind2))
p1, p2 = [0] * size, [0] * size
# Initialize the position of each index in the individuals
for i in range(size):
p1[ind1[i]] = i
p2[ind2[i]] = i
for i in range(size):
if random.random() < indpb:
# Keep track of the selected values
temp1 = ind1[i]
temp2 = ind2[i]
# Swap the matched value
ind1[i], ind1[p1[temp2]] = temp2, temp1
ind2[i], ind2[p2[temp1]] = temp1, temp2
# Position bookkeeping
p1[temp1], p1[temp2] = p1[temp2], p1[temp1]
p2[temp1], p2[temp2] = p2[temp2], p2[temp1]
return ind1, ind2
def cxOrdered(ind1, ind2):
"""Executes an ordered crossover (OX) on the input
individuals. The two individuals are modified in place. This crossover
expects :term:`sequence` individuals of indices; the result for any other
type of individuals is unpredictable.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:returns: A tuple of two individuals.
Moreover, this crossover generates holes in the input
individuals. A hole is created when an attribute of an individual is
between the two crossover points of the other individual. Then it rotates
the element so that all holes are between the crossover points and fills
them with the removed elements in order. For more details see
[Goldberg1989]_.
This function uses the :func:`~random.sample` function from the python base
:mod:`random` module.
.. [Goldberg1989] Goldberg. Genetic algorithms in search,
optimization and machine learning. Addison Wesley, 1989
"""
size = min(len(ind1), len(ind2))
a, b = random.sample(range(size), 2)
if a > b:
a, b = b, a
holes1, holes2 = [True] * size, [True] * size
for i in range(size):
if i < a or i > b:
holes1[ind2[i]] = False
holes2[ind1[i]] = False
# We must keep the original values somewhere before scrambling everything
temp1, temp2 = ind1, ind2
k1, k2 = b + 1, b + 1
for i in range(size):
if not holes1[temp1[(i + b + 1) % size]]:
ind1[k1 % size] = temp1[(i + b + 1) % size]
k1 += 1
if not holes2[temp2[(i + b + 1) % size]]:
ind2[k2 % size] = temp2[(i + b + 1) % size]
k2 += 1
# Swap the content between a and b (included)
for i in range(a, b + 1):
ind1[i], ind2[i] = ind2[i], ind1[i]
return ind1, ind2
def cxBlend(ind1, ind2, alpha):
"""Executes a blend crossover that modify in-place the input individuals.
The blend crossover expects :term:`sequence` individuals of floating point
numbers.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:param alpha: Extent of the interval in which the new values can be drawn
for each attribute on both sides of the parents' attributes.
:returns: A tuple of two individuals.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
"""
for i, (x1, x2) in enumerate(zip(ind1, ind2)):
gamma = (1. + 2. * alpha) * random.random() - alpha
ind1[i] = (1. - gamma) * x1 + gamma * x2
ind2[i] = gamma * x1 + (1. - gamma) * x2
return ind1, ind2
def cxSimulatedBinary(ind1, ind2, eta):
"""Executes a simulated binary crossover that modify in-place the input
individuals. The simulated binary crossover expects :term:`sequence`
individuals of floating point numbers.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:param eta: Crowding degree of the crossover. A high eta will produce
children resembling their parents, while a small eta will
produce solutions that are much more different.
:returns: A tuple of two individuals.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
"""
for i, (x1, x2) in enumerate(zip(ind1, ind2)):
rand = random.random()
if rand <= 0.5:
beta = 2. * rand
else:
beta = 1. / (2. * (1. - rand))
beta **= 1. / (eta + 1.)
ind1[i] = 0.5 * (((1 + beta) * x1) + ((1 - beta) * x2))
ind2[i] = 0.5 * (((1 - beta) * x1) + ((1 + beta) * x2))
return ind1, ind2
def cxSimulatedBinaryBounded(ind1, ind2, eta, low, up):
"""Executes a simulated binary crossover that modify in-place the input
individuals. The simulated binary crossover expects :term:`sequence`
individuals of floating point numbers.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:param eta: Crowding degree of the crossover. A high eta will produce
children resembling their parents, while a small eta will
produce solutions that are much more different.
:param low: A value or a :term:`python:sequence` of values that is the lower
bound of the search space.
:param up: A value or a :term:`python:sequence` of values that is the upper
bound of the search space.
:returns: A tuple of two individuals.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
.. note::
This implementation is similar to the one implemented in the
original NSGA-II C code presented by Deb.
"""
size = min(len(ind1), len(ind2))
if not isinstance(low, Sequence):
low = repeat(low, size)
elif len(low) < size:
raise IndexError("low must be at least the size of the shorter individual: %d < %d" % (len(low), size))
if not isinstance(up, Sequence):
up = repeat(up, size)
elif len(up) < size:
raise IndexError("up must be at least the size of the shorter individual: %d < %d" % (len(up), size))
for i, xl, xu in zip(range(size), low, up):
if random.random() <= 0.5:
# This epsilon should probably be changed for 0 since
# floating point arithmetic in Python is safer
if abs(ind1[i] - ind2[i]) > 1e-14:
x1 = min(ind1[i], ind2[i])
x2 = max(ind1[i], ind2[i])
rand = random.random()
beta = 1.0 + (2.0 * (x1 - xl) / (x2 - x1))
alpha = 2.0 - beta ** -(eta + 1)
if rand <= 1.0 / alpha:
beta_q = (rand * alpha) ** (1.0 / (eta + 1))
else:
beta_q = (1.0 / (2.0 - rand * alpha)) ** (1.0 / (eta + 1))
c1 = 0.5 * (x1 + x2 - beta_q * (x2 - x1))
beta = 1.0 + (2.0 * (xu - x2) / (x2 - x1))
alpha = 2.0 - beta ** -(eta + 1)
if rand <= 1.0 / alpha:
beta_q = (rand * alpha) ** (1.0 / (eta + 1))
else:
beta_q = (1.0 / (2.0 - rand * alpha)) ** (1.0 / (eta + 1))
c2 = 0.5 * (x1 + x2 + beta_q * (x2 - x1))
c1 = min(max(c1, xl), xu)
c2 = min(max(c2, xl), xu)
if random.random() <= 0.5:
ind1[i] = c2
ind2[i] = c1
else:
ind1[i] = c1
ind2[i] = c2
return ind1, ind2
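# A minimal usage sketch (illustrative values; bounds may be scalars or
# per-gene sequences):
#
#   ind1, ind2 = [0.2, 0.8, 0.5], [0.9, 0.1, 0.4]
#   cxSimulatedBinaryBounded(ind1, ind2, eta=20.0, low=[0.0] * 3, up=[1.0] * 3)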
######################################
# Messy Crossovers #
######################################
def cxMessyOnePoint(ind1, ind2):
"""Executes a one point crossover on :term:`sequence` individual.
The crossover will in most cases change the individuals size. The two
individuals are modified in place.
:param ind1: The first individual participating in the crossover.
:param ind2: The second individual participating in the crossover.
:returns: A tuple of two individuals.
This function uses the :func:`~random.randint` function from the python base
:mod:`random` module.
"""
cxpoint1 = random.randint(0, len(ind1))
cxpoint2 = random.randint(0, len(ind2))
ind1[cxpoint1:], ind2[cxpoint2:] = ind2[cxpoint2:], ind1[cxpoint1:]
return ind1, ind2
######################################
# ES Crossovers #
######################################
def cxESBlend(ind1, ind2, alpha):
"""Executes a blend crossover on both, the individual and the strategy. The
individuals shall be a :term:`sequence` and must have a :term:`sequence`
:attr:`strategy` attribute. Adjustment of the minimal strategy shall be done
after the call to this function, consider using a decorator.
:param ind1: The first evolution strategy participating in the crossover.
:param ind2: The second evolution strategy participating in the crossover.
:param alpha: Extent of the interval in which the new values can be drawn
for each attribute on both sides of the parents' attributes.
:returns: A tuple of two evolution strategies.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
"""
for i, (x1, s1, x2, s2) in enumerate(zip(ind1, ind1.strategy,
ind2, ind2.strategy)):
# Blend the values
gamma = (1. + 2. * alpha) * random.random() - alpha
ind1[i] = (1. - gamma) * x1 + gamma * x2
ind2[i] = gamma * x1 + (1. - gamma) * x2
# Blend the strategies
gamma = (1. + 2. * alpha) * random.random() - alpha
ind1.strategy[i] = (1. - gamma) * s1 + gamma * s2
ind2.strategy[i] = gamma * s1 + (1. - gamma) * s2
return ind1, ind2
def cxESTwoPoint(ind1, ind2):
"""Executes a classical two points crossover on both the individuals and their
strategy. The individuals shall be a :term:`sequence` and must have a
:term:`sequence` :attr:`strategy` attribute. The crossover points for the
individual and the strategy are the same.
:param ind1: The first evolution strategy participating in the crossover.
:param ind2: The second evolution strategy participating in the crossover.
:returns: A tuple of two evolution strategies.
This function uses the :func:`~random.randint` function from the python base
:mod:`random` module.
"""
size = min(len(ind1), len(ind2))
pt1 = random.randint(1, size)
pt2 = random.randint(1, size - 1)
if pt2 >= pt1:
pt2 += 1
else: # Swap the two cx points
pt1, pt2 = pt2, pt1
ind1[pt1:pt2], ind2[pt1:pt2] = ind2[pt1:pt2], ind1[pt1:pt2]
ind1.strategy[pt1:pt2], ind2.strategy[pt1:pt2] = \
ind2.strategy[pt1:pt2], ind1.strategy[pt1:pt2]
return ind1, ind2
def cxESTwoPoints(ind1, ind2):
"""
.. deprecated:: 1.0
The function has been renamed. Use :func:`cxESTwoPoint` instead.
"""
return cxESTwoPoint(ind1, ind2)
# List of exported function names.
__all__ = ['cxOnePoint', 'cxTwoPoint', 'cxUniform', 'cxPartialyMatched',
'cxUniformPartialyMatched', 'cxOrdered', 'cxBlend',
'cxSimulatedBinary', 'cxSimulatedBinaryBounded', 'cxMessyOnePoint',
'cxESBlend', 'cxESTwoPoint']
# Deprecated functions
__all__.extend(['cxTwoPoints', 'cxESTwoPoints'])
deap-1.4.1/deap/tools/emo.py
import bisect
from collections import defaultdict, namedtuple
from itertools import chain
import math
from operator import attrgetter, itemgetter
import random
import numpy
######################################
# Non-Dominated Sorting (NSGA-II) #
######################################
def selNSGA2(individuals, k, nd='standard'):
"""Apply NSGA-II selection operator on the *individuals*. Usually, the
size of *individuals* will be larger than *k* because any individual
present in *individuals* will appear in the returned list at most once.
Having the size of *individuals* equal to *k* will have no effect other
than sorting the population according to front rank. The
list returned contains references to the input *individuals*. For more
details on the NSGA-II operator see [Deb2002]_.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param nd: Specify the non-dominated algorithm to use: 'standard' or 'log'.
:returns: A list of selected individuals.
.. [Deb2002] Deb, Pratab, Agarwal, and Meyarivan, "A fast elitist
non-dominated sorting genetic algorithm for multi-objective
optimization: NSGA-II", 2002.
"""
if nd == 'standard':
pareto_fronts = sortNondominated(individuals, k)
elif nd == 'log':
pareto_fronts = sortLogNondominated(individuals, k)
else:
raise Exception('selNSGA2: The choice of non-dominated sorting '
'method "{0}" is invalid.'.format(nd))
for front in pareto_fronts:
assignCrowdingDist(front)
chosen = list(chain(*pareto_fronts[:-1]))
k = k - len(chosen)
if k > 0:
sorted_front = sorted(pareto_fronts[-1], key=attrgetter("fitness.crowding_dist"), reverse=True)
chosen.extend(sorted_front[:k])
return chosen
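# A minimal usage sketch (assumes 'population' holds individuals whose
# multi-objective fitness values have already been evaluated):
#
#   parents = selNSGA2(population, k=len(population))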
def sortNondominated(individuals, k, first_front_only=False):
"""Sort the first *k* *individuals* into different nondomination levels
using the "Fast Nondominated Sorting Approach" proposed by Deb et al.,
see [Deb2002]_. This algorithm has a time complexity of :math:`O(MN^2)`,
where :math:`M` is the number of objectives and :math:`N` the number of
individuals.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param first_front_only: If :obj:`True` sort only the first front and
exit.
:returns: A list of Pareto fronts (lists), the first list includes
nondominated individuals.
.. [Deb2002] Deb, Pratab, Agarwal, and Meyarivan, "A fast elitist
non-dominated sorting genetic algorithm for multi-objective
optimization: NSGA-II", 2002.
"""
if k == 0:
return []
map_fit_ind = defaultdict(list)
for ind in individuals:
map_fit_ind[ind.fitness].append(ind)
fits = list(map_fit_ind.keys())
current_front = []
next_front = []
dominating_fits = defaultdict(int)
dominated_fits = defaultdict(list)
# Rank first Pareto front
for i, fit_i in enumerate(fits):
for fit_j in fits[i+1:]:
if fit_i.dominates(fit_j):
dominating_fits[fit_j] += 1
dominated_fits[fit_i].append(fit_j)
elif fit_j.dominates(fit_i):
dominating_fits[fit_i] += 1
dominated_fits[fit_j].append(fit_i)
if dominating_fits[fit_i] == 0:
current_front.append(fit_i)
fronts = [[]]
for fit in current_front:
fronts[-1].extend(map_fit_ind[fit])
pareto_sorted = len(fronts[-1])
# Rank the next front until all individuals are sorted or
# the given number of individual are sorted.
if not first_front_only:
N = min(len(individuals), k)
while pareto_sorted < N:
fronts.append([])
for fit_p in current_front:
for fit_d in dominated_fits[fit_p]:
dominating_fits[fit_d] -= 1
if dominating_fits[fit_d] == 0:
next_front.append(fit_d)
pareto_sorted += len(map_fit_ind[fit_d])
fronts[-1].extend(map_fit_ind[fit_d])
current_front = next_front
next_front = []
return fronts
def assignCrowdingDist(individuals):
"""Assign a crowding distance to each individual's fitness. The
crowding distance can be retrieve via the :attr:`crowding_dist`
attribute of each individual's fitness.
"""
if len(individuals) == 0:
return
distances = [0.0] * len(individuals)
crowd = [(ind.fitness.values, i) for i, ind in enumerate(individuals)]
nobj = len(individuals[0].fitness.values)
for i in range(nobj):
crowd.sort(key=lambda element: element[0][i])
distances[crowd[0][1]] = float("inf")
distances[crowd[-1][1]] = float("inf")
if crowd[-1][0][i] == crowd[0][0][i]:
continue
norm = nobj * float(crowd[-1][0][i] - crowd[0][0][i])
for prev, cur, next in zip(crowd[:-2], crowd[1:-1], crowd[2:]):
distances[cur[1]] += (next[0][i] - prev[0][i]) / norm
for i, dist in enumerate(distances):
individuals[i].fitness.crowding_dist = dist
def selTournamentDCD(individuals, k):
"""Tournament selection based on dominance (D) between two individuals, if
the two individuals do not interdominate the selection is made
based on crowding distance (CD). The *individuals* sequence length has to
be a multiple of 4 only if k is equal to the length of individuals.
Starting from the beginning of the selected individuals, two consecutive
individuals will be different (assuming all individuals in the input list
are unique). Each individual from the input list won't be selected more
than twice.
This selection requires the individuals to have a :attr:`crowding_dist`
attribute, which can be set by the :func:`assignCrowdingDist` function.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select. Must be less than or equal
to len(individuals).
:returns: A list of selected individuals.
"""
if k > len(individuals):
raise ValueError("selTournamentDCD: k must be less than or equal to individuals length")
if k == len(individuals) and k % 4 != 0:
raise ValueError("selTournamentDCD: k must be divisible by four if k == len(individuals)")
def tourn(ind1, ind2):
if ind1.fitness.dominates(ind2.fitness):
return ind1
elif ind2.fitness.dominates(ind1.fitness):
return ind2
if ind1.fitness.crowding_dist < ind2.fitness.crowding_dist:
return ind2
elif ind1.fitness.crowding_dist > ind2.fitness.crowding_dist:
return ind1
if random.random() <= 0.5:
return ind1
return ind2
individuals_1 = random.sample(individuals, len(individuals))
individuals_2 = random.sample(individuals, len(individuals))
chosen = []
for i in range(0, k, 4):
chosen.append(tourn(individuals_1[i], individuals_1[i+1]))
chosen.append(tourn(individuals_1[i+2], individuals_1[i+3]))
chosen.append(tourn(individuals_2[i], individuals_2[i+1]))
chosen.append(tourn(individuals_2[i+2], individuals_2[i+3]))
return chosen
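# A sketch of the usual NSGA-II mating-selection step: assign crowding
# distances front by front, then pick parents with selTournamentDCD
# (illustrative only; `population` is assumed to be a list of already
# evaluated individuals whose length is a multiple of four).
def _example_selTournamentDCD(population):
    for front in sortNondominated(population, k=len(population)):
        assignCrowdingDist(front)
    return selTournamentDCD(population, k=len(population))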
#######################################
# Generalized Reduced runtime ND sort #
#######################################
def identity(obj):
"""Returns directly the argument *obj*.
"""
return obj
def isDominated(wvalues1, wvalues2):
"""Returns whether or not *wvalues2* dominates *wvalues1*.
:param wvalues1: The weighted fitness values that would be dominated.
:param wvalues2: The weighted fitness values of the dominant.
:returns: :obj:`True` if wvalues2 dominates wvalues1, :obj:`False`
otherwise.
"""
not_equal = False
for self_wvalue, other_wvalue in zip(wvalues1, wvalues2):
if self_wvalue > other_wvalue:
return False
elif self_wvalue < other_wvalue:
not_equal = True
return not_equal
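# Quick illustration of isDominated under DEAP's convention that wvalues
# are always maximized (larger is better); deterministic checks only.
def _example_isDominated():
    assert isDominated((1.0, 2.0), (2.0, 3.0))        # second point dominates
    assert not isDominated((1.0, 2.0), (2.0, 1.0))    # mutually non-dominated
    assert not isDominated((1.0, 2.0), (1.0, 2.0))    # equal points: no domination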
def median(seq, key=identity):
"""Returns the median of *seq* - the numeric value separating the higher
half of a sample from the lower half. If there is an even number of
elements in *seq*, it returns the mean of the two middle values.
"""
sseq = sorted(seq, key=key)
length = len(seq)
if length % 2 == 1:
return key(sseq[(length - 1) // 2])
else:
return (key(sseq[(length - 1) // 2]) + key(sseq[length // 2])) / 2.0
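# Small sanity checks for median: odd-length sequences return the middle
# element, even-length sequences the mean of the two middle values.
def _example_median():
    assert median([3, 1, 2]) == 2
    assert median([4, 1, 3, 2]) == 2.5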
def sortLogNondominated(individuals, k, first_front_only=False):
"""Sort *individuals* in pareto non-dominated fronts using the Generalized
Reduced Run-Time Complexity Non-Dominated Sorting Algorithm presented by
Fortin et al. (2013).
:param individuals: A list of individuals to select from.
:returns: A list of Pareto fronts (lists), with the first list being the
true Pareto front.
"""
if k == 0:
return []
# Separate individuals according to unique fitnesses
unique_fits = defaultdict(list)
for i, ind in enumerate(individuals):
unique_fits[ind.fitness.wvalues].append(ind)
# Launch the sorting algorithm
obj = len(individuals[0].fitness.wvalues)-1
fitnesses = list(unique_fits.keys())
front = dict.fromkeys(fitnesses, 0)
# Sort the fitnesses lexicographically.
fitnesses.sort(reverse=True)
sortNDHelperA(fitnesses, obj, front)
# Extract individuals from front list here
nbfronts = max(front.values())+1
pareto_fronts = [[] for i in range(nbfronts)]
for fit in fitnesses:
index = front[fit]
pareto_fronts[index].extend(unique_fits[fit])
# Keep only the fronts required to have k individuals.
if not first_front_only:
count = 0
for i, front in enumerate(pareto_fronts):
count += len(front)
if count >= k:
return pareto_fronts[:i+1]
return pareto_fronts
else:
return pareto_fronts[0]
def sortNDHelperA(fitnesses, obj, front):
"""Create a non-dominated sorting of S on the first M objectives"""
if len(fitnesses) < 2:
return
elif len(fitnesses) == 2:
# Only two individuals, compare them and adjust front number
s1, s2 = fitnesses[0], fitnesses[1]
if isDominated(s2[:obj+1], s1[:obj+1]):
front[s2] = max(front[s2], front[s1] + 1)
elif obj == 1:
sweepA(fitnesses, front)
elif len(frozenset(map(itemgetter(obj), fitnesses))) == 1:
# All individuals for objective M are equal: go to objective M-1
sortNDHelperA(fitnesses, obj-1, front)
else:
# More than two individuals, split list and then apply recursion
best, worst = splitA(fitnesses, obj)
sortNDHelperA(best, obj, front)
sortNDHelperB(best, worst, obj-1, front)
sortNDHelperA(worst, obj, front)
def splitA(fitnesses, obj):
"""Partition the set of fitnesses in two according to the median of
the objective index *obj*. The values equal to the median are put in
the set containing the least elements.
"""
median_ = median(fitnesses, itemgetter(obj))
best_a, worst_a = [], []
best_b, worst_b = [], []
for fit in fitnesses:
if fit[obj] > median_:
best_a.append(fit)
best_b.append(fit)
elif fit[obj] < median_:
worst_a.append(fit)
worst_b.append(fit)
else:
best_a.append(fit)
worst_b.append(fit)
balance_a = abs(len(best_a) - len(worst_a))
balance_b = abs(len(best_b) - len(worst_b))
if balance_a <= balance_b:
return best_a, worst_a
else:
return best_b, worst_b
def sweepA(fitnesses, front):
"""Update rank number associated to the fitnesses according
to the first two objectives using a geometric sweep procedure.
"""
stairs = [-fitnesses[0][1]]
fstairs = [fitnesses[0]]
for fit in fitnesses[1:]:
idx = bisect.bisect_right(stairs, -fit[1])
if 0 < idx <= len(stairs):
fstair = max(fstairs[:idx], key=front.__getitem__)
front[fit] = max(front[fit], front[fstair]+1)
for i, fstair in enumerate(fstairs[idx:], idx):
if front[fstair] == front[fit]:
del stairs[i]
del fstairs[i]
break
stairs.insert(idx, -fit[1])
fstairs.insert(idx, fit)
def sortNDHelperB(best, worst, obj, front):
"""Assign front numbers to the solutions in H according to the solutions
in L. The solutions in L are assumed to have correct front numbers and the
solutions in H are not compared with each other, as this is supposed to
happen after sortNDHelperB is called."""
key = itemgetter(obj)
if len(worst) == 0 or len(best) == 0:
# One of the lists is empty: nothing to do
return
elif len(best) == 1 or len(worst) == 1:
# One of the lists has one individual: compare directly
for hi in worst:
for li in best:
if isDominated(hi[:obj+1], li[:obj+1]) or hi[:obj+1] == li[:obj+1]:
front[hi] = max(front[hi], front[li] + 1)
elif obj == 1:
sweepB(best, worst, front)
elif key(min(best, key=key)) >= key(max(worst, key=key)):
# All individuals from L dominate H for objective M:
# Also supports the case where every individuals in L and H
# has the same value for the current objective
# Skip to objective M-1
sortNDHelperB(best, worst, obj-1, front)
elif key(max(best, key=key)) >= key(min(worst, key=key)):
best1, best2, worst1, worst2 = splitB(best, worst, obj)
sortNDHelperB(best1, worst1, obj, front)
sortNDHelperB(best1, worst2, obj-1, front)
sortNDHelperB(best2, worst2, obj, front)
def splitB(best, worst, obj):
"""Split both best individual and worst sets of fitnesses according
to the median of objective *obj* computed on the set containing the
most elements. The values equal to the median are attributed so as
to balance the four resulting sets as much as possible.
"""
median_ = median(best if len(best) > len(worst) else worst, itemgetter(obj))
best1_a, best2_a, best1_b, best2_b = [], [], [], []
for fit in best:
if fit[obj] > median_:
best1_a.append(fit)
best1_b.append(fit)
elif fit[obj] < median_:
best2_a.append(fit)
best2_b.append(fit)
else:
best1_a.append(fit)
best2_b.append(fit)
worst1_a, worst2_a, worst1_b, worst2_b = [], [], [], []
for fit in worst:
if fit[obj] > median_:
worst1_a.append(fit)
worst1_b.append(fit)
elif fit[obj] < median_:
worst2_a.append(fit)
worst2_b.append(fit)
else:
worst1_a.append(fit)
worst2_b.append(fit)
balance_a = abs(len(best1_a) - len(best2_a) + len(worst1_a) - len(worst2_a))
balance_b = abs(len(best1_b) - len(best2_b) + len(worst1_b) - len(worst2_b))
if balance_a <= balance_b:
return best1_a, best2_a, worst1_a, worst2_a
else:
return best1_b, best2_b, worst1_b, worst2_b
def sweepB(best, worst, front):
"""Adjust the rank number of the worst fitnesses according to
the best fitnesses on the first two objectives using a sweep
procedure.
"""
stairs, fstairs = [], []
iter_best = iter(best)
next_best = next(iter_best, False)
for h in worst:
while next_best and h[:2] <= next_best[:2]:
insert = True
for i, fstair in enumerate(fstairs):
if front[fstair] == front[next_best]:
if fstair[1] > next_best[1]:
insert = False
else:
del stairs[i], fstairs[i]
break
if insert:
idx = bisect.bisect_right(stairs, -next_best[1])
stairs.insert(idx, -next_best[1])
fstairs.insert(idx, next_best)
next_best = next(iter_best, False)
idx = bisect.bisect_right(stairs, -h[1])
if 0 < idx <= len(stairs):
fstair = max(fstairs[:idx], key=front.__getitem__)
front[h] = max(front[h], front[fstair]+1)
######################################
# Non-Dominated Sorting (NSGA-III) #
######################################
NSGA3Memory = namedtuple("NSGA3Memory", ["best_point", "worst_point", "extreme_points"])
class selNSGA3WithMemory(object):
"""Class version of NSGA-III selection including memory for best, worst and
extreme points. Registering this operator in a toolbox is a bit different
from classical operators: the class must be instantiated instead
of just registering the function::
>>> from deap import base
>>> ref_points = uniform_reference_points(nobj=3, p=12)
>>> toolbox = base.Toolbox()
>>> toolbox.register("select", selNSGA3WithMemory(ref_points))
"""
def __init__(self, ref_points, nd="log"):
self.ref_points = ref_points
self.nd = nd
self.best_point = numpy.full((1, ref_points.shape[1]), numpy.inf)
self.worst_point = numpy.full((1, ref_points.shape[1]), -numpy.inf)
self.extreme_points = None
def __call__(self, individuals, k):
chosen, memory = selNSGA3(individuals, k, self.ref_points, self.nd,
self.best_point, self.worst_point,
self.extreme_points, True)
self.best_point = memory.best_point.reshape((1, -1))
self.worst_point = memory.worst_point.reshape((1, -1))
self.extreme_points = memory.extreme_points
return chosen
def selNSGA3(individuals, k, ref_points, nd="log", best_point=None,
worst_point=None, extreme_points=None, return_memory=False):
"""Implementation of NSGA-III selection as presented in [Deb2014]_.
This implementation is partly based on `lmarti/nsgaiii
<https://github.com/lmarti/nsgaiii>`_. It departs slightly from the
original implementation in that it does not use memory to keep track
of ideal and extreme points. This choice has been made to fit the
functional API of DEAP. For a version of NSGA-III with memory, see
:class:`~deap.tools.selNSGA3WithMemory`.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param ref_points: Reference points to use for niching.
:param nd: Specify the non-dominated algorithm to use: 'standard' or 'log'.
:param best_point: Best point found at previous generation. If not provided
find the best point only from current individuals.
:param worst_point: Worst point found at previous generation. If not provided
find the worst point only from current individuals.
:param extreme_points: Extreme points found at previous generation. If not provided
find the extreme points only from current individuals.
:param return_memory: If :data:`True`, return the best, worst and extreme points
in addition to the chosen individuals.
:returns: A list of selected individuals.
:returns: If `return_memory` is :data:`True`, a namedtuple with the
`best_point`, `worst_point`, and `extreme_points`.
You can generate the reference points using the :func:`uniform_reference_points`
function::
>>> ref_points = tools.uniform_reference_points(nobj=3, p=12) # doctest: +SKIP
>>> selected = selNSGA3(population, k, ref_points) # doctest: +SKIP
.. [Deb2014] Deb, K., & Jain, H. (2014). An Evolutionary Many-Objective Optimization
Algorithm Using Reference-Point-Based Nondominated Sorting Approach,
Part I: Solving Problems With Box Constraints. IEEE Transactions on
Evolutionary Computation, 18(4), 577-601. doi:10.1109/TEVC.2013.2281535.
"""
if nd == "standard":
pareto_fronts = sortNondominated(individuals, k)
elif nd == "log":
pareto_fronts = sortLogNondominated(individuals, k)
else:
raise Exception("selNSGA3: The choice of non-dominated sorting "
"method '{0}' is invalid.".format(nd))
# Extract fitnesses as a numpy array in the nd-sort order
# Multiply wvalues by -1 to always treat the problem as minimization
fitnesses = numpy.array([ind.fitness.wvalues for f in pareto_fronts for ind in f])
fitnesses *= -1
# Get best and worst point of the population; unlike pymoo, memory of
# previous generations is used only when explicitly provided
if best_point is not None and worst_point is not None:
best_point = numpy.min(numpy.concatenate((fitnesses, best_point), axis=0), axis=0)
worst_point = numpy.max(numpy.concatenate((fitnesses, worst_point), axis=0), axis=0)
else:
best_point = numpy.min(fitnesses, axis=0)
worst_point = numpy.max(fitnesses, axis=0)
extreme_points = find_extreme_points(fitnesses, best_point, extreme_points)
front_worst = numpy.max(fitnesses[:sum(len(f) for f in pareto_fronts), :], axis=0)
intercepts = find_intercepts(extreme_points, best_point, worst_point, front_worst)
niches, dist = associate_to_niche(fitnesses, ref_points, best_point, intercepts)
# Get counts per niche for individuals in all front but the last
niche_counts = numpy.zeros(len(ref_points), dtype=numpy.int64)
index, counts = numpy.unique(niches[:-len(pareto_fronts[-1])], return_counts=True)
niche_counts[index] = counts
# Choose individuals from all fronts but the last
chosen = list(chain(*pareto_fronts[:-1]))
# Use niching to select the remaining individuals
sel_count = len(chosen)
n = k - sel_count
selected = niching(pareto_fronts[-1], n, niches[sel_count:], dist[sel_count:], niche_counts)
chosen.extend(selected)
if return_memory:
return chosen, NSGA3Memory(best_point, worst_point, extreme_points)
return chosen
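# A minimal selNSGA3 sketch mirroring the DEAP NSGA-III examples
# (illustrative only; the class names FitnessMin3/Individual3 are
# hypothetical and the toy evaluation simply reuses the genes as
# objective values).
def _example_selNSGA3():
    import random
    from deap import base, creator
    creator.create("FitnessMin3", base.Fitness, weights=(-1.0, -1.0, -1.0))
    creator.create("Individual3", list, fitness=creator.FitnessMin3)
    pop = [creator.Individual3([random.random() for _ in range(3)])
           for _ in range(12)]
    for ind in pop:
        ind.fitness.values = tuple(ind)
    ref = uniform_reference_points(nobj=3, p=4)
    return selNSGA3(pop, k=4, ref_points=ref)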
def find_extreme_points(fitnesses, best_point, extreme_points=None):
'Finds the individuals with extreme values for each objective function.'
# Keep track of last generation extreme points
if extreme_points is not None:
fitnesses = numpy.concatenate((fitnesses, extreme_points), axis=0)
# Translate objectives
ft = fitnesses - best_point
# Find achievement scalarizing function (asf)
asf = numpy.eye(best_point.shape[0])
asf[asf == 0] = 1e6
asf = numpy.max(ft * asf[:, numpy.newaxis, :], axis=2)
# Extreme points are the fitnesses with minimal asf
min_asf_idx = numpy.argmin(asf, axis=1)
return fitnesses[min_asf_idx, :]
def find_intercepts(extreme_points, best_point, current_worst, front_worst):
"""Find intercepts between the hyperplane and each axis with
the ideal point as origin."""
# Construct hyperplane sum(f_i^n) = 1
b = numpy.ones(extreme_points.shape[1])
A = extreme_points - best_point
try:
x = numpy.linalg.solve(A, b)
except numpy.linalg.LinAlgError:
intercepts = current_worst
else:
if numpy.count_nonzero(x) != len(x):
intercepts = front_worst
else:
intercepts = 1 / x
if (not numpy.allclose(numpy.dot(A, x), b) or
numpy.any(intercepts <= 1e-6) or
numpy.any((intercepts + best_point) > current_worst)):
intercepts = front_worst
return intercepts
def associate_to_niche(fitnesses, reference_points, best_point, intercepts):
"""Associates individuals to reference points and calculates niche number.
Corresponds to Algorithm 3 of Deb & Jain (2014)."""
# Normalize by ideal point and intercepts
fn = (fitnesses - best_point) / (intercepts - best_point + numpy.finfo(float).eps)
# Create distance matrix
fn = numpy.repeat(numpy.expand_dims(fn, axis=1), len(reference_points), axis=1)
norm = numpy.linalg.norm(reference_points, axis=1)
distances = numpy.sum(fn * reference_points, axis=2) / norm.reshape(1, -1)
distances = distances[:, :, numpy.newaxis] * reference_points[numpy.newaxis, :, :] / norm[numpy.newaxis, :, numpy.newaxis]
distances = numpy.linalg.norm(distances - fn, axis=2)
# Retrieve min distance niche index
niches = numpy.argmin(distances, axis=1)
distances = distances[list(range(niches.shape[0])), niches]
return niches, distances
def niching(individuals, k, niches, distances, niche_counts):
selected = []
available = numpy.ones(len(individuals), dtype=bool)
while len(selected) < k:
# Maximum number of individuals (niches) to select in that round
n = k - len(selected)
# Find the available niches and the minimum niche count in them
available_niches = numpy.zeros(len(niche_counts), dtype=bool)
available_niches[numpy.unique(niches[available])] = True
min_count = numpy.min(niche_counts[available_niches])
# Select at most n niches with the minimum count
selected_niches = numpy.flatnonzero(numpy.logical_and(available_niches, niche_counts == min_count))
numpy.random.shuffle(selected_niches)
selected_niches = selected_niches[:n]
for niche in selected_niches:
# Select from available individuals in niche
niche_individuals = numpy.flatnonzero(numpy.logical_and(niches == niche, available))
numpy.random.shuffle(niche_individuals)
# If no individual in that niche, select the closest to reference
# Else select randomly
if niche_counts[niche] == 0:
sel_index = niche_individuals[numpy.argmin(distances[niche_individuals])]
else:
sel_index = niche_individuals[0]
# Update availability, counts and selection
available[sel_index] = False
niche_counts[niche] += 1
selected.append(individuals[sel_index])
return selected
def uniform_reference_points(nobj, p=4, scaling=None):
"""Generate reference points uniformly on the hyperplane intersecting
each axis at 1. The scaling factor is used to combine multiple layers of
reference points.
"""
def gen_refs_recursive(ref, nobj, left, total, depth):
points = []
if depth == nobj - 1:
ref[depth] = left / total
points.append(ref)
else:
for i in range(left + 1):
ref[depth] = i / total
points.extend(gen_refs_recursive(ref.copy(), nobj, left - i, total, depth + 1))
return points
ref_points = numpy.array(gen_refs_recursive(numpy.zeros(nobj), nobj, p, p, 0))
if scaling is not None:
ref_points *= scaling
ref_points += (1 - scaling) / nobj
return ref_points
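# Illustrative check of the layout: with `nobj` objectives and `p`
# divisions, the simplex lattice contains C(p + nobj - 1, nobj - 1)
# points, each lying on the plane where the coordinates sum to one.
def _example_uniform_reference_points():
    from math import comb
    ref = uniform_reference_points(nobj=3, p=4)
    assert ref.shape == (comb(4 + 3 - 1, 3 - 1), 3)   # 15 points in 3-D
    assert numpy.allclose(ref.sum(axis=1), 1.0)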
######################################
# Strength Pareto (SPEA-II) #
######################################
def selSPEA2(individuals, k):
"""Apply SPEA-II selection operator on the *individuals*. Usually, the
size of *individuals* will be larger than *k* because any individual
present in *individuals* will appear in the returned list at most once.
Having the size of *individuals* equal to *k* will have no effect other
than sorting the population according to a strength Pareto scheme. The
list returned contains references to the input *individuals*. For more
details on the SPEA-II operator see [Zitzler2001]_.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:returns: A list of selected individuals.
.. [Zitzler2001] Zitzler, Laumanns and Thiele, "SPEA 2: Improving the
strength Pareto evolutionary algorithm", 2001.
"""
N = len(individuals)
L = len(individuals[0].fitness.values)
K = math.sqrt(N)
strength_fits = [0] * N
fits = [0] * N
dominating_inds = [list() for i in range(N)]
for i, ind_i in enumerate(individuals):
for j, ind_j in enumerate(individuals[i+1:], i+1):
if ind_i.fitness.dominates(ind_j.fitness):
strength_fits[i] += 1
dominating_inds[j].append(i)
elif ind_j.fitness.dominates(ind_i.fitness):
strength_fits[j] += 1
dominating_inds[i].append(j)
for i in range(N):
for j in dominating_inds[i]:
fits[i] += strength_fits[j]
# Choose all non-dominated individuals
chosen_indices = [i for i in range(N) if fits[i] < 1]
if len(chosen_indices) < k: # The archive is too small
for i in range(N):
distances = [0.0] * N
for j in range(i + 1, N):
dist = 0.0
for l in range(L):
val = individuals[i].fitness.values[l] - \
individuals[j].fitness.values[l]
dist += val * val
distances[j] = dist
kth_dist = _randomizedSelect(distances, 0, N - 1, K)
density = 1.0 / (kth_dist + 2.0)
fits[i] += density
next_indices = [(fits[i], i) for i in range(N)
if not i in chosen_indices]
next_indices.sort()
chosen_indices += [i for _, i in next_indices[:k - len(chosen_indices)]]
elif len(chosen_indices) > k: # The archive is too large
N = len(chosen_indices)
distances = [[0.0] * N for i in range(N)]
sorted_indices = [[0] * N for i in range(N)]
for i in range(N):
for j in range(i + 1, N):
dist = 0.0
for l in range(L):
val = individuals[chosen_indices[i]].fitness.values[l] - \
individuals[chosen_indices[j]].fitness.values[l]
dist += val * val
distances[i][j] = dist
distances[j][i] = dist
distances[i][i] = -1
# Insert sort is faster than quick sort for short arrays
for i in range(N):
for j in range(1, N):
l = j
while l > 0 and distances[i][j] < distances[i][sorted_indices[i][l - 1]]:
sorted_indices[i][l] = sorted_indices[i][l - 1]
l -= 1
sorted_indices[i][l] = j
size = N
to_remove = []
while size > k:
# Search for minimal distance
min_pos = 0
for i in range(1, N):
for j in range(1, size):
dist_i_sorted_j = distances[i][sorted_indices[i][j]]
dist_min_sorted_j = distances[min_pos][sorted_indices[min_pos][j]]
if dist_i_sorted_j < dist_min_sorted_j:
min_pos = i
break
elif dist_i_sorted_j > dist_min_sorted_j:
break
# Remove minimal distance from sorted_indices
for i in range(N):
distances[i][min_pos] = float("inf")
distances[min_pos][i] = float("inf")
for j in range(1, size - 1):
if sorted_indices[i][j] == min_pos:
sorted_indices[i][j] = sorted_indices[i][j + 1]
sorted_indices[i][j + 1] = min_pos
# Remove corresponding individual from chosen_indices
to_remove.append(min_pos)
size -= 1
for index in reversed(sorted(to_remove)):
del chosen_indices[index]
return [individuals[i] for i in chosen_indices]
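# A minimal selSPEA2 sketch on a toy bi-objective population
# (illustrative only; the class names and the evaluation are hypothetical).
def _example_selSPEA2():
    import random
    from deap import base, creator
    creator.create("FitnessMin2s", base.Fitness, weights=(-1.0, -1.0))
    creator.create("IndividualS", list, fitness=creator.FitnessMin2s)
    pop = [creator.IndividualS([random.random(), random.random()])
           for _ in range(10)]
    for ind in pop:
        ind.fitness.values = tuple(ind)   # toy: genes are the objectives
    return selSPEA2(pop, k=5)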
def _randomizedSelect(array, begin, end, i):
"""Allows to select the ith smallest element from array without sorting it.
Runtime is expected to be O(n).
"""
if begin == end:
return array[begin]
q = _randomizedPartition(array, begin, end)
k = q - begin + 1
if i < k:
return _randomizedSelect(array, begin, q, i)
else:
return _randomizedSelect(array, q + 1, end, i - k)
def _randomizedPartition(array, begin, end):
i = random.randint(begin, end)
array[begin], array[i] = array[i], array[begin]
return _partition(array, begin, end)
def _partition(array, begin, end):
x = array[begin]
i = begin - 1
j = end + 1
while True:
j -= 1
while array[j] > x:
j -= 1
i += 1
while array[i] < x:
i += 1
if i < j:
array[i], array[j] = array[j], array[i]
else:
return j
__all__ = ['selNSGA2', 'selNSGA3', 'selNSGA3WithMemory', 'selSPEA2', 'sortNondominated', 'sortLogNondominated',
'selTournamentDCD', 'uniform_reference_points']
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936673.0
deap-1.4.1/deap/tools/indicator.py 0000644 0000765 0000024 00000002252 14456461441 016273 0 ustar 00runner staff import numpy
try:
# try importing the C version
from ._hypervolume import hv as hv
except ImportError:
# fallback on python version
from ._hypervolume import pyhv as hv
def hypervolume(front, **kargs):
"""Returns the index of the individual with the least the hypervolume
contribution. The provided *front* should be a set of non-dominated
individuals having each a :attr:`fitness` attribute.
"""
# Must use wvalues * -1 since the hypervolume code assumes minimization,
# while minimization in DEAP is expressed as maximization of -obj
wobj = numpy.array([ind.fitness.wvalues for ind in front]) * -1
ref = kargs.get("ref", None)
if ref is None:
ref = numpy.max(wobj, axis=0) + 1
def contribution(i):
# The contribution of point p_i in point set P
# is the hypervolume of P without p_i
return hv.hypervolume(numpy.concatenate((wobj[:i], wobj[i+1:])), ref)
# Parallelization note: Cannot pickle local function
contrib_values = [contribution(i) for i in range(len(front))]
# Select the maximum hypervolume value (corresponds to the minimum contribution)
return numpy.argmax(contrib_values)
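# Illustrative call: on a toy front of minimized bi-objective fitnesses,
# hypervolume() returns the index of the least-contributing point
# (class names FitnessMin2h/IndividualH are hypothetical).
def _example_hypervolume():
    from deap import base, creator
    creator.create("FitnessMin2h", base.Fitness, weights=(-1.0, -1.0))
    creator.create("IndividualH", list, fitness=creator.FitnessMin2h)
    front = []
    for vals in [(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]:
        ind = creator.IndividualH(list(vals))
        ind.fitness.values = vals
        front.append(ind)
    return hypervolume(front)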
__all__ = ["hypervolume"]
deap-1.4.1/deap/tools/init.py
def initRepeat(container, func, n):
"""Call the function *func* *n* times and return the results in a
container type `container`
:param container: The type in which to put the data from *func*.
:param func: The function that will be called n times to fill the
container.
:param n: The number of times to repeat func.
:returns: An instance of the container filled with data from func.
This helper function can be used in conjunction with a Toolbox
to register a generator of filled containers, as individuals or
population.
>>> import random
>>> random.seed(42)
>>> initRepeat(list, random.random, 2) # doctest: +ELLIPSIS,
... # doctest: +NORMALIZE_WHITESPACE
[0.6394..., 0.0250...]
See the :ref:`list-of-floats` and :ref:`population` tutorials for more examples.
"""
return container(func() for _ in range(n))
def initIterate(container, generator):
"""Call the function *container* with an iterable as
its only argument. The iterable must be returned by
the method or the object *generator*.
:param container: The type to put in the data from func.
:param generator: A function returning an iterable (list, tuple, ...),
the content of this iterable will fill the container.
:returns: An instance of the container filled with data from the
generator.
This helper function can be used in conjunction with a Toolbox
to register a generator of filled containers, as individuals or
population.
>>> import random
>>> from functools import partial
>>> random.seed(42)
>>> gen_idx = partial(random.sample, range(10), 10)
>>> initIterate(list, gen_idx) # doctest: +SKIP
[1, 0, 4, 9, 6, 5, 8, 2, 3, 7]
See the :ref:`permutation` and :ref:`arithmetic-expr` tutorials for
more examples.
"""
return container(generator())
def initCycle(container, seq_func, n=1):
"""Call the function *container* with a generator function corresponding
to the calling *n* times the functions present in *seq_func*.
:param container: The type to put in the data from func.
:param seq_func: A list of function objects to be called in order to
fill the container.
:param n: Number of times to iterate through the list of functions.
:returns: An instance of the container filled with data from the
returned by the functions.
This helper function can be used in conjunction with a Toolbox
to register a generator of filled containers, as individuals or
population.
>>> func_seq = [lambda:1 , lambda:'a', lambda:3]
>>> initCycle(list, func_seq, n=2)
[1, 'a', 3, 1, 'a', 3]
See the :ref:`funky` tutorial for an example.
"""
return container(func() for _ in range(n) for func in seq_func)
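# Sketch of the Toolbox registration pattern the docstrings above refer
# to (the registration names attr_float/individual/population are the
# conventional tutorial names, not fixed by the API).
def _example_toolbox_registration():
    import random
    from deap import base
    toolbox = base.Toolbox()
    toolbox.register("attr_float", random.random)
    toolbox.register("individual", initRepeat, list, toolbox.attr_float, n=5)
    toolbox.register("population", initRepeat, list, toolbox.individual, n=10)
    return toolbox.population()   # a list of 10 lists of 5 floats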
__all__ = ['initRepeat', 'initIterate', 'initCycle']
if __name__ == "__main__":
import doctest
import random
random.seed(64)
doctest.run_docstring_examples(initRepeat, globals())
random.seed(64)
doctest.run_docstring_examples(initIterate, globals())
doctest.run_docstring_examples(initCycle, globals())
deap-1.4.1/deap/tools/migration.py
def migRing(populations, k, selection, replacement=None, migarray=None):
"""Perform a ring migration between the *populations*. The migration first
select *k* emigrants from each population using the specified *selection*
operator and then replace *k* individuals from the associated population
in the *migarray* by the emigrants. If no *replacement* operator is
specified, the immigrants will replace the emigrants of the population,
otherwise, the immigrants will replace the individuals selected by the
*replacement* operator. The migration array, if provided, shall contain
each population's index once and only once. If no migration array is
provided, it defaults to a serial ring migration (1 -- 2 -- ... -- n --
1). Selection and replacement functions are called using the signatures
``selection(populations[i], k)`` and ``replacement(populations[i], k)``.
It is important to note that the replacement strategy must select *k*
**different** individuals. For example, using a traditional tournament as
the replacement strategy will give undesirable effects: two individuals
will most likely try to enter the same slot.
:param populations: A list of (sub-)populations on which to operate
migration.
:param k: The number of individuals to migrate.
:param selection: The function to use for selection.
:param replacement: The function to use to select which individuals will
be replaced. If :obj:`None` (default) the individuals
that leave the population are directly replaced.
:param migarray: A list of indices indicating where the individuals from
a particular position in the list go. This defaults
to a ring migration.
"""
nbr_demes = len(populations)
if migarray is None:
migarray = list(range(1, nbr_demes)) + [0]
immigrants = [[] for i in range(nbr_demes)]
emigrants = [[] for i in range(nbr_demes)]
for from_deme in range(nbr_demes):
emigrants[from_deme].extend(selection(populations[from_deme], k))
if replacement is None:
# If no replacement strategy is selected, replace those who migrate
immigrants[from_deme] = emigrants[from_deme]
else:
# Else select those who will be replaced
immigrants[from_deme].extend(replacement(populations[from_deme], k))
for from_deme, to_deme in enumerate(migarray):
for i, immigrant in enumerate(immigrants[to_deme]):
indx = populations[to_deme].index(immigrant)
populations[to_deme][indx] = emigrants[from_deme][i]
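# A minimal migRing sketch: three toy demes each send their single best
# individual around the default ring (illustrative only; the class names
# FitnessMaxM/IndividualM are hypothetical).
def _example_migRing():
    import random
    from deap import base, creator, tools
    creator.create("FitnessMaxM", base.Fitness, weights=(1.0,))
    creator.create("IndividualM", list, fitness=creator.FitnessMaxM)
    demes = []
    for _ in range(3):
        deme = [creator.IndividualM([random.random()]) for _ in range(5)]
        for ind in deme:
            ind.fitness.values = (ind[0],)
        demes.append(deme)
    migRing(demes, k=1, selection=tools.selBest)   # migration is in place
    return demes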
__all__ = ['migRing']
deap-1.4.1/deap/tools/mutation.py
import math
import random
from itertools import repeat
try:
from collections.abc import Sequence
except ImportError:
from collections import Sequence
######################################
# GA Mutations #
######################################
def mutGaussian(individual, mu, sigma, indpb):
"""This function applies a gaussian mutation of mean *mu* and standard
deviation *sigma* on the input individual. This mutation expects a
:term:`sequence` individual composed of real valued attributes.
The *indpb* argument is the probability of each attribute to be mutated.
:param individual: Individual to be mutated.
:param mu: Mean or :term:`python:sequence` of means for the
gaussian addition mutation.
:param sigma: Standard deviation or :term:`python:sequence` of
standard deviations for the gaussian addition mutation.
:param indpb: Independent probability for each attribute to be mutated.
:returns: A tuple of one individual.
This function uses the :func:`~random.random` and :func:`~random.gauss`
functions from the python base :mod:`random` module.
"""
size = len(individual)
if not isinstance(mu, Sequence):
mu = repeat(mu, size)
elif len(mu) < size:
raise IndexError("mu must be at least the size of individual: %d < %d" % (len(mu), size))
if not isinstance(sigma, Sequence):
sigma = repeat(sigma, size)
elif len(sigma) < size:
raise IndexError("sigma must be at least the size of individual: %d < %d" % (len(sigma), size))
for i, m, s in zip(range(size), mu, sigma):
if random.random() < indpb:
individual[i] += random.gauss(m, s)
return individual,
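# Quick mutGaussian sketch on a plain list: the mutation happens in place
# and the mutant is returned as a one-element tuple, per DEAP convention.
def _example_mutGaussian():
    random.seed(0)   # only for reproducibility of the sketch
    ind = [0.0, 0.0, 0.0]
    mutant, = mutGaussian(ind, mu=0.0, sigma=1.0, indpb=1.0)
    return mutant    # every gene perturbed, since indpb == 1.0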
def mutPolynomialBounded(individual, eta, low, up, indpb):
"""Polynomial mutation as implemented in original NSGA-II algorithm in
C by Deb.
:param individual: :term:`Sequence <sequence>` individual to be mutated.
:param eta: Crowding degree of the mutation. A high eta will produce
a mutant resembling its parent, while a small eta will
produce a solution much more different.
:param low: A value or a :term:`python:sequence` of values that
is the lower bound of the search space.
:param up: A value or a :term:`python:sequence` of values that
is the upper bound of the search space.
:param indpb: Independent probability for each attribute to be mutated.
:returns: A tuple of one individual.
"""
size = len(individual)
if not isinstance(low, Sequence):
low = repeat(low, size)
elif len(low) < size:
raise IndexError("low must be at least the size of individual: %d < %d" % (len(low), size))
if not isinstance(up, Sequence):
up = repeat(up, size)
elif len(up) < size:
raise IndexError("up must be at least the size of individual: %d < %d" % (len(up), size))
for i, xl, xu in zip(range(size), low, up):
if random.random() <= indpb:
x = individual[i]
delta_1 = (x - xl) / (xu - xl)
delta_2 = (xu - x) / (xu - xl)
rand = random.random()
mut_pow = 1.0 / (eta + 1.)
if rand < 0.5:
xy = 1.0 - delta_1
val = 2.0 * rand + (1.0 - 2.0 * rand) * xy ** (eta + 1)
delta_q = val ** mut_pow - 1.0
else:
xy = 1.0 - delta_2
val = 2.0 * (1.0 - rand) + 2.0 * (rand - 0.5) * xy ** (eta + 1)
delta_q = 1.0 - val ** mut_pow
x = x + delta_q * (xu - xl)
x = min(max(x, xl), xu)
individual[i] = x
return individual,
def mutShuffleIndexes(individual, indpb):
"""Shuffle the attributes of the input individual and return the mutant.
The *individual* is expected to be a :term:`sequence`. The *indpb* argument is the
probability of each attribute to be moved. Usually this mutation is applied on
vector of indices.
:param individual: Individual to be mutated.
:param indpb: Independent probability for each attribute to be exchanged to
another position.
:returns: A tuple of one individual.
This function uses the :func:`~random.random` and :func:`~random.randint`
functions from the python base :mod:`random` module.
"""
size = len(individual)
for i in range(size):
if random.random() < indpb:
swap_indx = random.randint(0, size - 2)
if swap_indx >= i:
swap_indx += 1
individual[i], individual[swap_indx] = \
individual[swap_indx], individual[i]
return individual,
def mutFlipBit(individual, indpb):
"""Flip the value of the attributes of the input individual and return the
mutant. The *individual* is expected to be a :term:`sequence` and the values of the
attributes shall stay valid after the ``not`` operator is called on them.
The *indpb* argument is the probability of each attribute to be
flipped. This mutation is usually applied on boolean individuals.
:param individual: Individual to be mutated.
:param indpb: Independent probability for each attribute to be flipped.
:returns: A tuple of one individual.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
"""
for i in range(len(individual)):
if random.random() < indpb:
individual[i] = type(individual[i])(not individual[i])
return individual,
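# Deterministic illustration: with indpb=1.0 every attribute is flipped,
# since random.random() is always strictly below 1.0.
def _example_mutFlipBit():
    bits = [True, False, True, False]
    mutant, = mutFlipBit(bits, indpb=1.0)
    assert mutant == [False, True, False, True]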
def mutUniformInt(individual, low, up, indpb):
"""Mutate an individual by replacing attributes, with probability *indpb*,
by a integer uniformly drawn between *low* and *up* inclusively.
:param individual: :term:`Sequence ` individual to be mutated.
:param low: The lower bound or a :term:`python:sequence` of
of lower bounds of the range from which to draw the new
integer.
:param up: The upper bound or a :term:`python:sequence` of
of upper bounds of the range from which to draw the new
integer.
:param indpb: Independent probability for each attribute to be mutated.
:returns: A tuple of one individual.
"""
size = len(individual)
if not isinstance(low, Sequence):
low = repeat(low, size)
elif len(low) < size:
raise IndexError("low must be at least the size of individual: %d < %d" % (len(low), size))
if not isinstance(up, Sequence):
up = repeat(up, size)
elif len(up) < size:
raise IndexError("up must be at least the size of individual: %d < %d" % (len(up), size))
for i, xl, xu in zip(range(size), low, up):
if random.random() < indpb:
individual[i] = random.randint(xl, xu)
return individual,
def mutInversion(individual):
"""Select two points (indices) in the individual, reverse the order of the
attributes between these points [low, high] and return the mutated individual.
This implementation allows for the length of the inversion to be 0 and 1,
which would cause no change. This mutation is useful in situations where the
order/adjacency of elements is important.
:param individual: Individual to be mutated.
:returns: A tuple of one individual.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
"""
size = len(individual)
if size == 0:
return individual,
index_one = random.randrange(size)
index_two = random.randrange(size)
start_index = min(index_one, index_two)
end_index = max(index_one, index_two)
# Reverse the contents of the individual between the indices
individual[start_index:end_index] = individual[start_index:end_index][::-1]
return individual,
######################################
# ES Mutations #
######################################
def mutESLogNormal(individual, c, indpb):
r"""Mutate an evolution strategy according to its :attr:`strategy`
attribute as described in [Beyer2002]_. First the strategy is mutated
according to an extended log normal rule, :math:`\\boldsymbol{\sigma}_t =
\\exp(\\tau_0 \mathcal{N}_0(0, 1)) \\left[ \\sigma_{t-1, 1}\\exp(\\tau
\mathcal{N}_1(0, 1)), \ldots, \\sigma_{t-1, n} \\exp(\\tau
\mathcal{N}_n(0, 1))\\right]`, with :math:`\\tau_0 =
\\frac{c}{\\sqrt{2n}}` and :math:`\\tau = \\frac{c}{\\sqrt{2\\sqrt{n}}}`,
the the individual is mutated by a normal distribution of mean 0 and
standard deviation of :math:`\\boldsymbol{\sigma}_{t}` (its current
strategy) then . A recommended choice is ``c=1`` when using a :math:`(10,
100)` evolution strategy [Beyer2002]_ [Schwefel1995]_.
:param individual: :term:`Sequence ` individual to be mutated.
:param c: The learning parameter.
:param indpb: Independent probability for each attribute to be mutated.
:returns: A tuple of one individual.
.. [Beyer2002] Beyer and Schwefel, 2002, Evolution strategies - A
Comprehensive Introduction
.. [Schwefel1995] Schwefel, 1995, Evolution and Optimum Seeking.
Wiley, New York, NY
"""
size = len(individual)
t = c / math.sqrt(2. * math.sqrt(size))
t0 = c / math.sqrt(2. * size)
n = random.gauss(0, 1)
t0_n = t0 * n
for indx in range(size):
if random.random() < indpb:
individual.strategy[indx] *= math.exp(t0_n + t * random.gauss(0, 1))
individual[indx] += individual.strategy[indx] * random.gauss(0, 1)
return individual,
__all__ = ['mutGaussian', 'mutPolynomialBounded', 'mutShuffleIndexes',
'mutFlipBit', 'mutUniformInt', 'mutInversion', 'mutESLogNormal']
deap-1.4.1/deap/tools/selection.py
import random
import numpy as np
from functools import partial
from operator import attrgetter
######################################
# Selections #
######################################
def selRandom(individuals, k):
"""Select *k* individuals at random from the input *individuals* with
replacement. The list returned contains references to the input
*individuals*.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:returns: A list of selected individuals.
This function uses the :func:`~random.choice` function from the
python base :mod:`random` module.
"""
return [random.choice(individuals) for i in range(k)]
def selBest(individuals, k, fit_attr="fitness"):
"""Select the *k* best individuals among the input *individuals*. The
list returned contains references to the input *individuals*.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param fit_attr: The attribute of individuals to use as selection criterion
:returns: A list containing the k best individuals.
"""
return sorted(individuals, key=attrgetter(fit_attr), reverse=True)[:k]
def selWorst(individuals, k, fit_attr="fitness"):
"""Select the *k* worst individuals among the input *individuals*. The
list returned contains references to the input *individuals*.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param fit_attr: The attribute of individuals to use as selection criterion
:returns: A list containing the k worst individuals.
"""
return sorted(individuals, key=attrgetter(fit_attr))[:k]
def selTournament(individuals, k, tournsize, fit_attr="fitness"):
"""Select the best individual among *tournsize* randomly chosen
individuals, *k* times. The list returned contains
references to the input *individuals*.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param tournsize: The number of individuals participating in each tournament.
:param fit_attr: The attribute of individuals to use as selection criterion
:returns: A list of selected individuals.
This function uses the :func:`~random.choice` function from the python base
:mod:`random` module.
"""
chosen = []
for i in range(k):
aspirants = selRandom(individuals, tournsize)
chosen.append(max(aspirants, key=attrgetter(fit_attr)))
return chosen
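# Typical selTournament sketch (illustrative; the class names are
# hypothetical): each of the k picks is the fittest of `tournsize`
# randomly drawn aspirants.
def _example_selTournament():
    from deap import base, creator
    creator.create("FitnessMaxT", base.Fitness, weights=(1.0,))
    creator.create("IndividualT", list, fitness=creator.FitnessMaxT)
    pop = []
    for i in range(10):
        ind = creator.IndividualT([i])
        ind.fitness.values = (float(i),)
        pop.append(ind)
    return selTournament(pop, k=5, tournsize=3)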
def selRoulette(individuals, k, fit_attr="fitness"):
"""Select *k* individuals from the input *individuals* using *k*
spins of a roulette. The selection is made by looking only at the first
objective of each individual. The list returned contains references to
the input *individuals*.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param fit_attr: The attribute of individuals to use as selection criterion
:returns: A list of selected individuals.
This function uses the :func:`~random.random` function from the python base
:mod:`random` module.
.. warning::
The roulette selection by definition cannot be used for minimization
or when the fitness can be smaller or equal to 0.
"""
s_inds = sorted(individuals, key=attrgetter(fit_attr), reverse=True)
sum_fits = sum(getattr(ind, fit_attr).values[0] for ind in individuals)
chosen = []
for i in range(k):
u = random.random() * sum_fits
sum_ = 0
for ind in s_inds:
sum_ += getattr(ind, fit_attr).values[0]
if sum_ > u:
chosen.append(ind)
break
return chosen
def selDoubleTournament(individuals, k, fitness_size, parsimony_size, fitness_first, fit_attr="fitness"):
"""Tournament selection which use the size of the individuals in order
to discriminate good solutions. This kind of tournament is obviously
useless with fixed-length representation, but has been shown to
significantly reduce excessive growth of individuals, especially in GP,
where it can be used as a bloat control technique (see
[Luke2002fighting]_). This selection operator implements the double
tournament technique presented in this paper.
The core principle is to use a normal tournament selection, but using a
special sample function to select aspirants, which is another tournament
based on the size of the individuals. To ensure that the selection
pressure is not too high, the size of the size tournament (the number
of candidates evaluated) can be a real number between 1 and 2. In this
case, the smaller individual among two will be selected with a probability
*parsimony_size*/2. For instance, if *parsimony_size* is set to 1.4,
then the smaller individual will have a 0.7 probability to be selected.
.. note::
In GP, it has been shown that this operator produces better results
when it is combined with some kind of a depth limit.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param fitness_size: The number of individuals participating in each \
fitness tournament
:param parsimony_size: The number of individuals participating in each \
size tournament. This value has to be a real number\
in the range [1,2], see above for details.
:param fitness_first: Set this to True if the first tournament done should \
be the fitness one (i.e. the fitness tournament producing aspirants for \
the size tournament). Setting it to False will behave as the opposite \
(size tournament feeding fitness tournaments with candidates). It has been \
shown that this parameter does not have a significant effect in most cases\
(see [Luke2002fighting]_).
:param fit_attr: The attribute of individuals to use as selection criterion
:returns: A list of selected individuals.
.. [Luke2002fighting] Luke and Panait, 2002, Fighting bloat with
nonparametric parsimony pressure
"""
assert (1 <= parsimony_size <= 2), "Parsimony tournament size has to be in the range [1, 2]."
def _sizeTournament(individuals, k, select):
chosen = []
for i in range(k):
# Select two individuals from the population
# The first individual has to be the shortest
prob = parsimony_size / 2.
ind1, ind2 = select(individuals, k=2)
if len(ind1) > len(ind2):
ind1, ind2 = ind2, ind1
elif len(ind1) == len(ind2):
# random selection in case of a tie
prob = 0.5
# Since size1 <= size2 then ind1 is selected
# with a probability prob
chosen.append(ind1 if random.random() < prob else ind2)
return chosen
def _fitTournament(individuals, k, select):
chosen = []
for i in range(k):
aspirants = select(individuals, k=fitness_size)
chosen.append(max(aspirants, key=attrgetter(fit_attr)))
return chosen
if fitness_first:
tfit = partial(_fitTournament, select=selRandom)
return _sizeTournament(individuals, k, tfit)
else:
tsize = partial(_sizeTournament, select=selRandom)
return _fitTournament(individuals, k, tsize)
def selStochasticUniversalSampling(individuals, k, fit_attr="fitness"):
"""Select the *k* individuals among the input *individuals*.
The selection is made by using a single random value to sample all of the
individuals by choosing them at evenly spaced intervals. The list returned
contains references to the input *individuals*.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param fit_attr: The attribute of individuals to use as selection criterion
:return: A list of selected individuals.
This function uses the :func:`~random.uniform` function from the python base
:mod:`random` module.
"""
s_inds = sorted(individuals, key=attrgetter(fit_attr), reverse=True)
sum_fits = sum(getattr(ind, fit_attr).values[0] for ind in individuals)
distance = sum_fits / float(k)
start = random.uniform(0, distance)
points = [start + i * distance for i in range(k)]
chosen = []
for p in points:
i = 0
sum_ = getattr(s_inds[i], fit_attr).values[0]
while sum_ < p:
i += 1
sum_ += getattr(s_inds[i], fit_attr).values[0]
chosen.append(s_inds[i])
return chosen
def selLexicase(individuals, k):
"""Returns an individual that does the best on the fitness cases when
considered one at a time in random order.
http://faculty.hampshire.edu/lspector/pubs/lexicase-IEEE-TEC.pdf
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:returns: A list of selected individuals.
"""
selected_individuals = []
for i in range(k):
fit_weights = individuals[0].fitness.weights
candidates = individuals
cases = list(range(len(individuals[0].fitness.values)))
random.shuffle(cases)
while len(cases) > 0 and len(candidates) > 1:
f = max if fit_weights[cases[0]] > 0 else min
best_val_for_case = f(x.fitness.values[cases[0]] for x in candidates)
candidates = [x for x in candidates if x.fitness.values[cases[0]] == best_val_for_case]
cases.pop(0)
selected_individuals.append(random.choice(candidates))
return selected_individuals
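# A lexicase sketch with two maximized cases: each specialist wins
# whenever its case happens to be shuffled first (illustrative only;
# the class names are hypothetical).
def _example_selLexicase():
    from deap import base, creator
    creator.create("FitnessMax2L", base.Fitness, weights=(1.0, 1.0))
    creator.create("IndividualL", list, fitness=creator.FitnessMax2L)
    pop = []
    for vals in [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0)]:
        ind = creator.IndividualL(list(vals))
        ind.fitness.values = vals
        pop.append(ind)
    return selLexicase(pop, k=2)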
def selEpsilonLexicase(individuals, k, epsilon):
"""
Select *k* individuals, each being the best on the fitness cases when
considered one at a time in random order. Requires an epsilon parameter.
https://push-language.hampshire.edu/uploads/default/original/1X/35c30e47ef6323a0a949402914453f277fb1b5b0.pdf
Implements the epsilon_y variant.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:param epsilon: Tolerance from the best case value within which a
candidate survives the case.
:returns: A list of selected individuals.
"""
selected_individuals = []
for i in range(k):
fit_weights = individuals[0].fitness.weights
candidates = individuals
cases = list(range(len(individuals[0].fitness.values)))
random.shuffle(cases)
while len(cases) > 0 and len(candidates) > 1:
if fit_weights[cases[0]] > 0:
best_val_for_case = max(x.fitness.values[cases[0]] for x in candidates)
min_val_to_survive_case = best_val_for_case - epsilon
candidates = [x for x in candidates if x.fitness.values[cases[0]] >= min_val_to_survive_case]
else:
best_val_for_case = min(x.fitness.values[cases[0]] for x in candidates)
max_val_to_survive_case = best_val_for_case + epsilon
candidates = [x for x in candidates if x.fitness.values[cases[0]] <= max_val_to_survive_case]
cases.pop(0)
selected_individuals.append(random.choice(candidates))
return selected_individuals
def selAutomaticEpsilonLexicase(individuals, k):
"""
Select *k* individuals, each being the best on the fitness cases when
considered one at a time in random order.
https://push-language.hampshire.edu/uploads/default/original/1X/35c30e47ef6323a0a949402914453f277fb1b5b0.pdf
Implements the lambda_epsilon_y variant, in which the tolerance is the
median absolute deviation of the case errors.
:param individuals: A list of individuals to select from.
:param k: The number of individuals to select.
:returns: A list of selected individuals.
"""
selected_individuals = []
for i in range(k):
fit_weights = individuals[0].fitness.weights
candidates = individuals
cases = list(range(len(individuals[0].fitness.values)))
random.shuffle(cases)
while len(cases) > 0 and len(candidates) > 1:
errors_for_this_case = [x.fitness.values[cases[0]] for x in candidates]
median_val = np.median(errors_for_this_case)
median_absolute_deviation = np.median([abs(x - median_val) for x in errors_for_this_case])
if fit_weights[cases[0]] > 0:
best_val_for_case = max(errors_for_this_case)
min_val_to_survive = best_val_for_case - median_absolute_deviation
candidates = [x for x in candidates if x.fitness.values[cases[0]] >= min_val_to_survive]
else:
best_val_for_case = min(errors_for_this_case)
max_val_to_survive = best_val_for_case + median_absolute_deviation
candidates = [x for x in candidates if x.fitness.values[cases[0]] <= max_val_to_survive]
cases.pop(0)
selected_individuals.append(random.choice(candidates))
return selected_individuals
__all__ = ['selRandom', 'selBest', 'selWorst', 'selRoulette',
'selTournament', 'selDoubleTournament', 'selStochasticUniversalSampling',
'selLexicase', 'selEpsilonLexicase', 'selAutomaticEpsilonLexicase']
deap-1.4.1/deap/tools/support.py
from bisect import bisect_right
from collections import defaultdict
from copy import deepcopy
from functools import partial
from itertools import chain
from operator import eq
def identity(obj):
"""Returns directly the argument *obj*.
"""
return obj
class History(object):
"""The :class:`History` class helps to build a genealogy of all the
individuals produced in the evolution. It contains two attributes,
the :attr:`genealogy_tree`, a dictionary of lists indexed by
individual; each list contains the indices of the parents. The second
attribute :attr:`genealogy_history` contains every individual indexed
by their individual number as in the genealogy tree.
The produced genealogy tree is compatible with `NetworkX
<https://networkx.org/>`_; here is how to plot the genealogy
tree::
history = History()
# Decorate the variation operators
toolbox.decorate("mate", history.decorator)
toolbox.decorate("mutate", history.decorator)
# Create the population and populate the history
population = toolbox.population(n=POPSIZE)
history.update(population)
# Do the evolution, the decorators will take care of updating the
# history
# [...]
import matplotlib.pyplot as plt
import networkx
graph = networkx.DiGraph(history.genealogy_tree)
graph = graph.reverse() # Make the graph top-down
colors = [toolbox.evaluate(history.genealogy_history[i])[0] for i in graph]
networkx.draw(graph, node_color=colors)
plt.show()
Using NetworkX in combination with `pygraphviz
<https://pygraphviz.github.io/>`_ (dot layout), this amazing
genealogy tree can be obtained from the OneMax example with a population
size of 20 and 5 generations, where the color of the nodes indicate their
fitness, blue is low and red is high.
.. image:: /_images/genealogy.png
:width: 67%
.. note::
The genealogy tree might get very big if your population and/or the
number of generation is large.
"""
def __init__(self):
self.genealogy_index = 0
self.genealogy_history = dict()
self.genealogy_tree = dict()
def update(self, individuals):
"""Update the history with the new *individuals*. The index present in
their :attr:`history_index` attribute will be used to locate their
parents, it is then modified to a unique one to keep track of those
new individuals. This method should be called on the individuals after
each variation.
:param individuals: The list of modified individuals that shall be
inserted in the history.
If the *individuals* do not have a :attr:`history_index` attribute,
the attribute is added and this individual is considered as having no
parent. This method should be called with the initial population to
initialize the history.
Modifying the internal :attr:`genealogy_index` of the history or the
:attr:`history_index` of an individual may lead to unpredictable
results and corruption of the history.
"""
try:
parent_indices = tuple(ind.history_index for ind in individuals)
except AttributeError:
parent_indices = tuple()
for ind in individuals:
self.genealogy_index += 1
ind.history_index = self.genealogy_index
self.genealogy_history[self.genealogy_index] = deepcopy(ind)
self.genealogy_tree[self.genealogy_index] = parent_indices
@property
def decorator(self):
"""Property that returns an appropriate decorator to enhance the
operators of the toolbox. The returned decorator assumes that the
individuals are returned by the operator. First the decorator calls
the underlying operation and then calls the :func:`update` function
with what has been returned by the operator. Finally, it returns the
individuals with their history parameters modified according to the
update function.
"""
def decFunc(func):
def wrapFunc(*args, **kargs):
individuals = func(*args, **kargs)
self.update(individuals)
return individuals
return wrapFunc
return decFunc
def getGenealogy(self, individual, max_depth=float("inf")):
"""Provide the genealogy tree of an *individual*. The individual must
have an attribute :attr:`history_index` as defined by
:func:`~deap.tools.History.update` in order to retrieve its associated
genealogy tree. The returned graph contains the parents up to
*max_depth* variations before this individual. If not provided, the
maximum depth extends to the beginning of the evolution.
:param individual: The individual at the root of the genealogy tree.
:param max_depth: The approximate maximum distance between the root
(individual) and the leaves (parents), optional.
:returns: A dictionary where each key is an individual index and the
values are a tuple corresponding to the index of the parents.
"""
gtree = {}
visited = set() # Remember visited indices so the traversal does not revisit them
def genealogy(index, depth):
if index not in self.genealogy_tree:
return
depth += 1
if depth > max_depth:
return
parent_indices = self.genealogy_tree[index]
gtree[index] = parent_indices
for ind in parent_indices:
if ind not in visited:
genealogy(ind, depth)
visited.add(ind)
genealogy(individual.history_index, 0)
return gtree
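# Illustrative sketch (not part of the library): one possible way to plot a
# bounded genealogy subtree with NetworkX. Assumes a populated ``history``
# and that matplotlib and networkx are installed; ``some_individual`` is a
# hypothetical individual that went through the decorated operators.
#
#     import matplotlib.pyplot as plt
#     import networkx
#     subtree = history.getGenealogy(some_individual, max_depth=5)
#     graph = networkx.DiGraph(subtree).reverse()  # draw top-down
#     networkx.draw(graph)
#     plt.show()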
class Statistics(object):
"""Object that compiles statistics on a list of arbitrary objects.
When created the statistics object receives a *key* argument that
is used to get the values on which the function will be computed.
If not provided the *key* argument defaults to the identity function.
The value returned by the key may be a multi-dimensional object, e.g.
a tuple or a list, as long as the registered statistical functions
support it. For example, statistics can be computed directly on
multi-objective fitnesses when using numpy statistical functions.
:param key: A function to access the values on which to compute the
statistics, optional.
::
>>> import numpy
>>> s = Statistics()
>>> s.register("mean", numpy.mean)
>>> s.register("max", max)
>>> s.compile([1, 2, 3, 4]) # doctest: +SKIP
{'max': 4, 'mean': 2.5}
>>> s.compile([5, 6, 7, 8]) # doctest: +SKIP
{'mean': 6.5, 'max': 8}
"""
def __init__(self, key=identity):
self.key = key
self.functions = dict()
self.fields = []
def register(self, name, function, *args, **kargs):
"""Register a *function* that will be applied on the sequence each
time :meth:`record` is called.
:param name: The name of the statistics function as it would appear
in the dictionary of the statistics object.
:param function: A function that will compute the desired statistics
on the data as preprocessed by the key.
:param argument: One or more arguments (and keyword arguments) to pass
automatically to the registered function when called,
optional.
"""
self.functions[name] = partial(function, *args, **kargs)
self.fields.append(name)
def compile(self, data):
"""Apply to the input sequence *data* each registered function
and return the results as a dictionary.
:param data: Sequence of objects on which the statistics are computed.
"""
values = tuple(self.key(elem) for elem in data)
entry = dict()
for key, func in self.functions.items():
entry[key] = func(values)
return entry
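# Illustrative sketch (not part of the library): statistics are commonly
# compiled on fitness values rather than on the raw individuals by passing a
# *key* function. Assumes individuals carry a ``fitness.values`` attribute,
# as produced by ``creator.create`` elsewhere in DEAP.
#
#     import numpy
#     stats = Statistics(key=lambda ind: ind.fitness.values)
#     stats.register("avg", numpy.mean)
#     stats.register("max", numpy.max)
#     record = stats.compile(population)  # e.g. {'avg': 0.52, 'max': 0.97}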
class MultiStatistics(dict):
"""Dictionary of :class:`Statistics` object allowing to compute
statistics on multiple keys using a single call to :meth:`compile`. It
takes a set of key-value pairs associating a statistics object to a
unique name. This name can then be used to retrieve the statistics object.
The following code computes statistics simultaneously on the length and
the first value of the provided objects.
::
>>> from operator import itemgetter
>>> import numpy
>>> len_stats = Statistics(key=len)
>>> itm0_stats = Statistics(key=itemgetter(0))
>>> mstats = MultiStatistics(length=len_stats, item=itm0_stats)
>>> mstats.register("mean", numpy.mean, axis=0)
>>> mstats.register("max", numpy.max, axis=0)
>>> mstats.compile([[0.0, 1.0, 1.0, 5.0], [2.0, 5.0]]) # doctest: +SKIP
{'length': {'mean': 3.0, 'max': 4}, 'item': {'mean': 1.0, 'max': 2.0}}
"""
def compile(self, data):
"""Calls :meth:`Statistics.compile` with *data* of each
:class:`Statistics` object.
:param data: Sequence of objects on which the statistics are computed.
"""
record = {}
for name, stats in self.items():
record[name] = stats.compile(data)
return record
@property
def fields(self):
return sorted(self.keys())
def register(self, name, function, *args, **kargs):
"""Register a *function* in each :class:`Statistics` object.
:param name: The name of the statistics function as it would appear
in the dictionary of the statistics object.
:param function: A function that will compute the desired statistics
on the data as preprocessed by the key.
:param argument: One or more arguments (and keyword arguments) to pass
automatically to the registered function when called,
optional.
"""
for stats in self.values():
stats.register(name, function, *args, **kargs)
class Logbook(list):
"""Evolution records as a chronological list of dictionaries.
Data can be retrieved via the :meth:`select` method given the appropriate
names.
The :class:`Logbook` class may also contain other logbooks referred to
as chapters. Chapters are used to store information associated with a
specific part of the evolution. For example, when computing statistics
on different components of individuals (namely :class:`MultiStatistics`),
chapters can be used to distinguish the average fitness from the average
size.
"""
def __init__(self):
self.buffindex = 0
self.chapters = defaultdict(Logbook)
"""Dictionary containing the sub-sections of the logbook which are also
:class:`Logbook`. Chapters are automatically created when the value of
a keyword argument provided to the :meth:`record` function is a
dictionary. The keyword determines the chapter's name. For example, the
following line adds a new chapter "size" that will contain the fields
"max" and "mean". ::
logbook.record(gen=0, size={'max' : 10.0, 'mean' : 7.5})
To access a specific chapter, use the name of the chapter as a
dictionary key. For example, to access the size chapter and select
the mean use ::
logbook.chapters["size"].select("mean")
Compiling a :class:`MultiStatistics` object returns a dictionary
containing dictionaries, therefore when recording such an object in a
logbook using the keyword argument unpacking operator (**), chapters
will be automatically added to the logbook.
::
>>> from operator import attrgetter
>>> fit_stats = Statistics(key=attrgetter("fitness.values"))
>>> size_stats = Statistics(key=len)
>>> mstats = MultiStatistics(fitness=fit_stats, size=size_stats)
>>> # [...]
>>> record = mstats.compile(population)
>>> logbook.record(**record)
>>> print(logbook)
fitness size
------------ ------------
max mean max mean
2 1 4 3
"""
self.columns_len = None
self.header = None
"""Order of the columns to print when using the :data:`stream` and
:meth:`__str__` methods. The syntax is a single iterable containing
string elements. For example, with the previously
defined statistics class, one can print the generation, the average
fitness, and the maximum fitness with
::
logbook.header = ("gen", "mean", "max")
If not set, the header is built from all fields, in arbitrary order,
when the first data is inserted. The header can be removed by setting
it to :data:`None`.
"""
self.log_header = True
"""Tells the log book to output or not the header when streaming the
first line or getting its entire string representation. This defaults
:data:`True`.
"""
def record(self, **infos):
"""Enter a record of event in the logbook as a list of key-value pairs.
The information are appended chronologically to a list as a dictionary.
When the value part of a pair is a dictionary, the information contained
in the dictionary are recorded in a chapter entitled as the name of the
key part of the pair. Chapters are also Logbook.
"""
apply_to_all = {k: v for k, v in infos.items() if not isinstance(v, dict)}
for key, value in list(infos.items()):
if isinstance(value, dict):
chapter_infos = value.copy()
chapter_infos.update(apply_to_all)
self.chapters[key].record(**chapter_infos)
del infos[key]
self.append(infos)
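# Illustrative sketch (not part of the library): note that non-dictionary
# values, such as ``gen`` below, are propagated into every chapter by
# ``apply_to_all``, so chapter entries stay aligned with top-level records.
#
#     logbook = Logbook()
#     logbook.record(gen=0, size={'max': 10.0, 'mean': 7.5})
#     # logbook.chapters["size"][-1] now contains gen=0 as well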
def select(self, *names):
"""Return a list of values associated to the *names* provided
in argument in each dictionary of the Statistics object list.
One list per name is returned in order.
::
>>> log = Logbook()
>>> log.record(gen=0, mean=5.4, max=10.0)
>>> log.record(gen=1, mean=9.4, max=15.0)
>>> log.select("mean")
[5.4, 9.4]
>>> log.select("gen", "max")
([0, 1], [10.0, 15.0])
With a :class:`MultiStatistics` object, the statistics for each
measurement can be retrieved using the :data:`chapters` member:
::
>>> log = Logbook()
>>> log.record(**{'gen': 0, 'fit': {'mean': 0.8, 'max': 1.5},
... 'size': {'mean': 25.4, 'max': 67}})
>>> log.record(**{'gen': 1, 'fit': {'mean': 0.95, 'max': 1.7},
... 'size': {'mean': 28.1, 'max': 71}})
>>> log.chapters['size'].select("mean")
[25.4, 28.1]
>>> log.chapters['fit'].select("gen", "max")
([0, 1], [1.5, 1.7])
"""
if len(names) == 1:
return [entry.get(names[0], None) for entry in self]
return tuple([entry.get(name, None) for entry in self] for name in names)
@property
def stream(self):
"""Retrieve the formatted not streamed yet entries of the database
including the headers.
::
>>> log = Logbook()
>>> log.append({'gen' : 0})
>>> print(log.stream) # doctest: +NORMALIZE_WHITESPACE
gen
0
>>> log.append({'gen' : 1})
>>> print(log.stream) # doctest: +NORMALIZE_WHITESPACE
1
"""
startindex, self.buffindex = self.buffindex, len(self)
return self.__str__(startindex)
def __delitem__(self, key):
if isinstance(key, slice):
for i in range(*key.indices(len(self))):
self.pop(i)
for chapter in self.chapters.values():
chapter.pop(i)
else:
self.pop(key)
for chapter in self.chapters.values():
chapter.pop(key)
def pop(self, index=0):
"""Retrieve and delete element *index*. The header and stream will be
adjusted to follow the modification.
:param index: The index of the element to remove, optional. It defaults
to the first element.
You can also use the following syntax to delete elements.
::
del log[0]
del log[1::5]
"""
if index < self.buffindex:
self.buffindex -= 1
return super(self.__class__, self).pop(index)
def __txt__(self, startindex):
columns = self.header
if not columns:
columns = sorted(self[0].keys()) + sorted(self.chapters.keys())
if not self.columns_len or len(self.columns_len) != len(columns):
self.columns_len = [len(c) for c in columns]
chapters_txt = {}
offsets = defaultdict(int)
for name, chapter in self.chapters.items():
chapters_txt[name] = chapter.__txt__(startindex)
if startindex == 0:
offsets[name] = len(chapters_txt[name]) - len(self)
str_matrix = []
for i, line in enumerate(self[startindex:]):
str_line = []
for j, name in enumerate(columns):
if name in chapters_txt:
column = chapters_txt[name][i + offsets[name]]
else:
value = line.get(name, "")
string = "{0:n}" if isinstance(value, float) else "{0}"
column = string.format(value)
self.columns_len[j] = max(self.columns_len[j], len(column))
str_line.append(column)
str_matrix.append(str_line)
if startindex == 0 and self.log_header:
header = []
nlines = 1
if len(self.chapters) > 0:
nlines += max(map(len, chapters_txt.values())) - len(self) + 1
header = [[] for i in range(nlines)]
for j, name in enumerate(columns):
if name in chapters_txt:
length = max(len(line.expandtabs()) for line in chapters_txt[name])
blanks = nlines - 2 - offsets[name]
for i in range(blanks):
header[i].append(" " * length)
header[blanks].append(name.center(length))
header[blanks + 1].append("-" * length)
for i in range(offsets[name]):
header[blanks + 2 + i].append(chapters_txt[name][i])
else:
length = max(len(line[j].expandtabs()) for line in str_matrix)
for line in header[:-1]:
line.append(" " * length)
header[-1].append(name)
str_matrix = chain(header, str_matrix)
template = "\t".join("{%i:<%i}" % (i, l) for i, l in enumerate(self.columns_len))
text = [template.format(*line) for line in str_matrix]
return text
def __str__(self, startindex=0):
text = self.__txt__(startindex)
return "\n".join(text)
class HallOfFame(object):
"""The hall of fame contains the best individual that ever lived in the
population during the evolution. It is lexicographically sorted at all
time so that the first element of the hall of fame is the individual that
has the best first fitness value ever seen, according to the weights
provided to the fitness at creation time.
The insertion is made so that old individuals have priority on new
individuals. A single copy of each individual is kept at all time, the
equivalence between two individuals is made by the operator passed to the
*similar* argument.
:param maxsize: The maximum number of individual to keep in the hall of
fame.
:param similar: An equivalence operator between two individuals, optional.
It defaults to operator :func:`operator.eq`.
The class :class:`HallOfFame` provides an interface similar to a list
(without being one completely). It is possible to retrieve its length,
iterate over it forward and backward, and get an item or a slice of it.
"""
def __init__(self, maxsize, similar=eq):
self.maxsize = maxsize
self.keys = list()
self.items = list()
self.similar = similar
def update(self, population):
"""Update the hall of fame with the *population* by replacing the
worst individuals in it by the best individuals present in
*population* (if they are better). The size of the hall of fame is
kept constant.
:param population: A list of individuals with a fitness attribute to
update the hall of fame with.
"""
for ind in population:
if len(self) == 0 and self.maxsize != 0:
# Working on an empty hall of fame is problematic for the
# "for else"
self.insert(population[0])
continue
if ind.fitness > self[-1].fitness or len(self) < self.maxsize:
for hofer in self:
# Loop through the hall of fame to check for any
# similar individual
if self.similar(ind, hofer):
break
else:
# The individual is unique and strictly better than
# the worst
if len(self) >= self.maxsize:
self.remove(-1)
self.insert(ind)
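# Illustrative sketch (not part of the library): a hall of fame is typically
# updated once per generation, right after evaluation, so that it always
# holds the best individuals seen so far. ``NGEN`` and ``population`` are
# assumed to be defined by the surrounding evolutionary loop.
#
#     hof = HallOfFame(maxsize=10)
#     for gen in range(NGEN):
#         # ... evaluate, select and vary the population ...
#         hof.update(population)
#     best = hof[0]  # best individual ever seen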
def insert(self, item):
"""Insert a new individual in the hall of fame using the
:func:`~bisect.bisect_right` function. The inserted individual is
inserted on the right side of an equal individual. Inserting a new
individual in the hall of fame also preserve the hall of fame's order.
This method **does not** check for the size of the hall of fame, in a
way that inserting a new individual in a full hall of fame will not
remove the worst individual to maintain a constant size.
:param item: The individual with a fitness attribute to insert in the
hall of fame.
"""
item = deepcopy(item)
i = bisect_right(self.keys, item.fitness)
self.items.insert(len(self) - i, item)
self.keys.insert(i, item.fitness)
def remove(self, index):
"""Remove the specified *index* from the hall of fame.
:param index: An integer giving which item to remove.
"""
del self.keys[len(self) - (index % len(self) + 1)]
del self.items[index]
def clear(self):
"""Clear the hall of fame."""
del self.items[:]
del self.keys[:]
def __len__(self):
return len(self.items)
def __getitem__(self, i):
return self.items[i]
def __iter__(self):
return iter(self.items)
def __reversed__(self):
return reversed(self.items)
def __str__(self):
return str(self.items)
class ParetoFront(HallOfFame):
"""The Pareto front hall of fame contains all the non-dominated individuals
that ever lived in the population. That means that the Pareto front hall of
fame can contain an infinity of different individuals.
:param similar: A function that tells the Pareto front whether or not two
individuals are similar, optional.
The size of the front may become very large if it is used for example on
a continuous function with a continuous domain. In order to limit the number
of individuals, it is possible to specify a similarity function that will
return :data:`True` if the genotype of two individuals are similar. In that
case only one of the two individuals will be added to the hall of fame. By
default the similarity function is :func:`operator.eq`.
Since, the Pareto front hall of fame inherits from the :class:`HallOfFame`,
it is sorted lexicographically at every moment.
"""
def __init__(self, similar=eq):
HallOfFame.__init__(self, None, similar)
def update(self, population):
"""Update the Pareto front hall of fame with the *population* by adding
the individuals from the population that are not dominated by the hall
of fame. If any individual in the hall of fame is dominated it is
removed.
:param population: A list of individuals with a fitness attribute to
update the hall of fame with.
"""
for ind in population:
is_dominated = False
dominates_one = False
has_twin = False
to_remove = []
for i, hofer in enumerate(self): # hofer = hall of famer
if not dominates_one and hofer.fitness.dominates(ind.fitness):
is_dominated = True
break
elif ind.fitness.dominates(hofer.fitness):
dominates_one = True
to_remove.append(i)
elif ind.fitness == hofer.fitness and self.similar(ind, hofer):
has_twin = True
break
for i in reversed(to_remove): # Remove the dominated hofer
self.remove(i)
if not is_dominated and not has_twin:
self.insert(ind)
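# Illustrative sketch (not part of the library): with numpy-array
# individuals, ``operator.eq`` returns an element-wise array rather than a
# single boolean, so a scalar predicate such as ``numpy.array_equal``
# should be supplied as the similarity function.
#
#     import numpy
#     front = ParetoFront(similar=numpy.array_equal)
#     front.update(population)  # keeps only the non-dominated individuals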
__all__ = ['HallOfFame', 'ParetoFront', 'History', 'Statistics', 'MultiStatistics', 'Logbook']
if __name__ == "__main__":
import doctest
doctest.run_docstring_examples(Statistics, globals())
doctest.run_docstring_examples(Statistics.register, globals())
doctest.run_docstring_examples(Statistics.compile, globals())
doctest.run_docstring_examples(MultiStatistics, globals())
doctest.run_docstring_examples(MultiStatistics.register, globals())
doctest.run_docstring_examples(MultiStatistics.compile, globals())
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 010211 x ustar 00 27 mtime=1689936700.627589
deap-1.4.1/deap.egg-info/ 0000755 0000765 0000024 00000000000 14456461475 014305 5 ustar 00runner staff ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936700.0
deap-1.4.1/deap.egg-info/PKG-INFO 0000644 0000765 0000024 00000031663 14456461474 015412 0 ustar 00runner staff Metadata-Version: 2.1
Name: deap
Version: 1.4.1
Summary: Distributed Evolutionary Algorithms in Python
Home-page: https://www.github.com/deap
Author: deap Development Team
Author-email: deap-users@googlegroups.com
License: LGPL
Keywords: evolutionary algorithms,genetic algorithms,genetic programming,cma-es,ga,gp,es,pso
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Software Development
Description-Content-Type: text/markdown
License-File: LICENSE.txt
# DEAP
[](https://travis-ci.org/DEAP/deap) [](https://pypi.python.org/pypi/deap) [](https://gitter.im/DEAP/deap?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [](https://dev.azure.com/fderainville/DEAP/_build/latest?definitionId=1&branchName=master) [](https://deap.readthedocs.io/en/master/?badge=master)
DEAP is a novel evolutionary computation framework for rapid prototyping and testing of
ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelisation mechanisms such as multiprocessing and [SCOOP](https://github.com/soravux/scoop).
DEAP includes the following features:
* Genetic algorithm using any imaginable representation
* List, Array, Set, Dictionary, Tree, Numpy Array, etc.
* Genetic programming using prefix trees
* Loosely typed, Strongly typed
* Automatically defined functions
* Evolution strategies (including CMA-ES)
* Multi-objective optimisation (NSGA-II, NSGA-III, SPEA2, MO-CMA-ES)
* Co-evolution (cooperative and competitive) of multiple populations
* Parallelization of the evaluations (and more)
* Hall of Fame of the best individuals that lived in the population
* Checkpoints that take snapshots of a system regularly
* Benchmarks module containing most common test functions
* Genealogy of an evolution (that is compatible with [NetworkX](https://github.com/networkx/networkx))
* Examples of alternative algorithms: Particle Swarm Optimization, Differential Evolution, Estimation of Distribution Algorithm
## Downloads
Following acceptance of [PEP 438](http://www.python.org/dev/peps/pep-0438/) by the Python community, we have moved DEAP's source releases to [PyPI](https://pypi.python.org).
You can find the most recent releases at: https://pypi.python.org/pypi/deap/.
## Documentation
See the [DEAP User's Guide](http://deap.readthedocs.org/) for DEAP documentation.
In order to build the latest ("tip") documentation, change directory to the `doc` subfolder and type `make html`; the documentation will be generated under `_build/html`. You will need [Sphinx](http://sphinx.pocoo.org) to build the documentation.
### Notebooks
Also check out our new [notebook examples](https://github.com/DEAP/notebooks). Using [Jupyter notebooks](http://jupyter.org) you'll be able to navigate and execute each block of code individually and see what every line is doing. Either look at the notebooks online using the notebook viewer links at the bottom of the page, or download the notebooks, navigate to your download directory, and run
```bash
jupyter notebook
```
## Installation
We encourage you to use easy_install or pip to install DEAP on your system. Other installation procedures like apt-get or yum usually provide an outdated version.
```bash
pip install deap
```
The latest version can be installed with
```bash
pip install git+https://github.com/DEAP/deap@master
```
If you wish to build from sources, download or clone the repository and type
```bash
python setup.py install
```
## Build Status
DEAP build status is available on Travis-CI: https://travis-ci.org/DEAP/deap.
## Requirements
The most basic features of DEAP require Python 2.6. In order to combine the toolbox and the multiprocessing module, Python 2.7 is needed for its support for pickling partial functions. CMA-ES requires Numpy, and we recommend matplotlib for visualization of results as it is fully compatible with DEAP's API.
Since version 0.8, DEAP is compatible out of the box with Python 3. The installation procedure automatically translates the source to Python 3 with 2to3; however, this requires having `setuptools<=58`. It is recommended to run `pip install setuptools==57.5.0` to address this issue.
## Example
The following code gives a quick overview of how simple it is to implement the OneMax problem optimization with a genetic algorithm using DEAP. More examples are provided [here](http://deap.readthedocs.org/en/master/examples/index.html).
```python
import random
from deap import creator, base, tools, algorithms
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, n=100)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
def evalOneMax(individual):
return sum(individual),
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
population = toolbox.population(n=300)
NGEN=40
for gen in range(NGEN):
offspring = algorithms.varAnd(population, toolbox, cxpb=0.5, mutpb=0.1)
fits = toolbox.map(toolbox.evaluate, offspring)
for fit, ind in zip(fits, offspring):
ind.fitness.values = fit
population = toolbox.select(offspring, k=len(population))
top10 = tools.selBest(population, k=10)
```
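Building on the loop above, the statistics helpers from `deap.tools` can record the progress of the run. The following is a minimal sketch that reuses `toolbox`, `population`, and `NGEN` from the example; the names `stats`, `hof`, and `logbook` are illustrative, not part of the example above:

```python
import numpy

stats = tools.Statistics(key=lambda ind: ind.fitness.values)
stats.register("avg", numpy.mean)
stats.register("max", numpy.max)
hof = tools.HallOfFame(maxsize=1)  # keep the single best individual
logbook = tools.Logbook()

for gen in range(NGEN):
    offspring = algorithms.varAnd(population, toolbox, cxpb=0.5, mutpb=0.1)
    fits = toolbox.map(toolbox.evaluate, offspring)
    for fit, ind in zip(fits, offspring):
        ind.fitness.values = fit
    population = toolbox.select(offspring, k=len(population))
    hof.update(population)
    logbook.record(gen=gen, **stats.compile(population))

print(logbook)  # chronological table of avg/max fitness per generation
print(hof[0])   # best individual found
```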
## How to cite DEAP
Authors of scientific papers including results generated using DEAP are encouraged to cite the following paper.
```bibtex
@article{DEAP_JMLR2012,
author = " F\'elix-Antoine Fortin and Fran\c{c}ois-Michel {De Rainville} and Marc-Andr\'e Gardner and Marc Parizeau and Christian Gagn\'e ",
title = { {DEAP}: Evolutionary Algorithms Made Easy },
pages = { 2171--2175 },
volume = { 13 },
month = { jul },
year = { 2012 },
journal = { Journal of Machine Learning Research }
}
```
## Publications on DEAP
* François-Michel De Rainville, Félix-Antoine Fortin, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP -- Enabling Nimbler Evolutions", SIGEVOlution, vol. 6, no 2, pp. 17-26, February 2014. [Paper](http://goo.gl/tOrXTp)
* Félix-Antoine Fortin, François-Michel De Rainville, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP: Evolutionary Algorithms Made Easy", Journal of Machine Learning Research, vol. 13, pp. 2171-2175, jul 2012. [Paper](http://goo.gl/amJ3x)
* François-Michel De Rainville, Félix-Antoine Fortin, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP: A Python Framework for Evolutionary Algorithms", in EvoSoft Workshop, Companion proc. of the Genetic and Evolutionary Computation Conference (GECCO 2012), July 07-11 2012. [Paper](http://goo.gl/pXXug)
## Projects using DEAP
* Ribaric, T., & Houghten, S. (2017, June). Genetic programming for improved cryptanalysis of elliptic curve cryptosystems. In 2017 IEEE Congress on Evolutionary Computation (CEC) (pp. 419-426). IEEE.
* Ellefsen, Kai Olav, Herman Augusto Lepikson, and Jan C. Albiez. "Multiobjective coverage path planning: Enabling automated inspection of complex, real-world structures." Applied Soft Computing 61 (2017): 264-282.
* S. Chardon, B. Brangeon, E. Bozonnet, C. Inard (2016), Construction cost and energy performance of single family houses : From integrated design to automated optimization, Automation in Construction, Volume 70, p.1-13.
* B. Brangeon, E. Bozonnet, C. Inard (2016), Integrated refurbishment of collective housing and optimization process with real products databases, Building Simulation Optimization, pp. 531–538 Newcastle, England.
* Randal S. Olson, Ryan J. Urbanowicz, Peter C. Andrews, Nicole A. Lavender, La Creis Kidd, and Jason H. Moore (2016). Automating biomedical data science through tree-based pipeline optimization. Applications of Evolutionary Computation, pages 123-137.
* Randal S. Olson, Nathan Bartley, Ryan J. Urbanowicz, and Jason H. Moore (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. Proceedings of GECCO 2016, pages 485-492.
* Van Geit W, Gevaert M, Chindemi G, Rössert C, Courcol J, Muller EB, Schürmann F, Segev I and Markram H (2016). BluePyOpt: Leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Front. Neuroinform. 10:17. doi: 10.3389/fninf.2016.00017 https://github.com/BlueBrain/BluePyOpt
* Lara-Cabrera, R., Cotta, C. and Fernández-Leiva, A.J. (2014). Geometrical vs topological measures for the evolution of aesthetic maps in a rts game, Entertainment Computing,
* Macret, M. and Pasquier, P. (2013). Automatic Tuning of the OP-1 Synthesizer Using a Multi-objective Genetic Algorithm. In Proceedings of the 10th Sound and Music Computing Conference (SMC). (pp 614-621).
* Fortin, F. A., Grenier, S., & Parizeau, M. (2013, July). Generalizing the improved run-time complexity algorithm for non-dominated sorting. In Proceeding of the fifteenth annual conference on Genetic and evolutionary computation conference (pp. 615-622). ACM.
* Fortin, F. A., & Parizeau, M. (2013, July). Revisiting the NSGA-II crowding-distance computation. In Proceeding of the fifteenth annual conference on Genetic and evolutionary computation conference (pp. 623-630). ACM.
* Marc-André Gardner, Christian Gagné, and Marc Parizeau. Estimation of Distribution Algorithm based on Hidden Markov Models for Combinatorial Optimization. in Comp. Proc. Genetic and Evolutionary Computation Conference (GECCO 2013), July 2013.
* J. T. Zhai, M. A. Bamakhrama, and T. Stefanov. "Exploiting Just-enough Parallelism when Mapping Streaming Applications in Hard Real-time Systems". Design Automation Conference (DAC 2013), 2013.
* V. Akbarzadeh, C. Gagné, M. Parizeau, M. Argany, M. A Mostafavi, "Probabilistic Sensing Model for Sensor Placement Optimization Based on Line-of-Sight Coverage", Accepted in IEEE Transactions on Instrumentation and Measurement, 2012.
* M. Reif, F. Shafait, and A. Dengel. "Dataset Generation for Meta-Learning". Proceedings of the German Conference on Artificial Intelligence (KI'12). 2012.
* M. T. Ribeiro, A. Lacerda, A. Veloso, and N. Ziviani. "Pareto-Efficient Hybridization for Multi-Objective Recommender Systems". Proceedings of the Conference on Recommender Systems (RecSys'12). 2012.
* M. Pérez-Ortiz, A. Arauzo-Azofra, C. Hervás-Martínez, L. García-Hernández and L. Salas-Morera. "A system learning user preferences for multiobjective optimization of facility layouts". Proceedings of the Int. Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO'12). 2012.
* Lévesque, J.C., Durand, A., Gagné, C., and Sabourin, R., Multi-Objective Evolutionary Optimization for Generating Ensembles of Classifiers in the ROC Space, Genetic and Evolutionary Computation Conference (GECCO 2012), 2012.
* Marc-André Gardner, Christian Gagné, and Marc Parizeau, "Bloat Control in Genetic Programming with Histogram-based Accept-Reject Method", in Proc. Genetic and Evolutionary Computation Conference (GECCO 2011), 2011.
* Vahab Akbarzadeh, Albert Ko, Christian Gagné, and Marc Parizeau, "Topography-Aware Sensor Deployment Optimization with CMA-ES", in Proc. of Parallel Problem Solving from Nature (PPSN 2010), Springer, 2010.
* DEAP is used in [TPOT](https://github.com/rhiever/tpot), an open source tool that uses genetic programming to optimize machine learning pipelines.
* DEAP is also used in ROS as an optimization package http://www.ros.org/wiki/deap.
* DEAP is an optional dependency for [PyXRD](https://github.com/mathijs-dumon/PyXRD), a Python implementation of the matrix algorithm developed for the X-ray diffraction analysis of disordered lamellar structures.
* DEAP is used in [glyph](https://github.com/Ambrosys/glyph), a library for symbolic regression with applications to [MLC](https://en.wikipedia.org/wiki/Machine_learning_control).
* DEAP is used in [Sklearn-genetic-opt](https://github.com/rodrigo-arenas/Sklearn-genetic-opt), an open source tool that uses evolutionary programming to fine tune machine learning hyperparameters.
If you want your project listed here, send us a link and a brief description and we'll be glad to add it.
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936700.0
deap-1.4.1/deap.egg-info/SOURCES.txt 0000644 0000765 0000024 00000014672 14456461474 016202 0 ustar 00runner staff INSTALL.txt
LICENSE.txt
MANIFEST.in
README.md
setup.py
deap/__init__.py
deap/algorithms.py
deap/base.py
deap/cma.py
deap/creator.py
deap/gp.py
deap.egg-info/PKG-INFO
deap.egg-info/SOURCES.txt
deap.egg-info/dependency_links.txt
deap.egg-info/requires.txt
deap.egg-info/top_level.txt
deap/benchmarks/__init__.py
deap/benchmarks/binary.py
deap/benchmarks/gp.py
deap/benchmarks/movingpeaks.py
deap/benchmarks/tools.py
deap/tools/__init__.py
deap/tools/constraint.py
deap/tools/crossover.py
deap/tools/emo.py
deap/tools/indicator.py
deap/tools/init.py
deap/tools/migration.py
deap/tools/mutation.py
deap/tools/selection.py
deap/tools/support.py
deap/tools/_hypervolume/__init__.py
deap/tools/_hypervolume/_hv.c
deap/tools/_hypervolume/_hv.h
deap/tools/_hypervolume/hv.cpp
deap/tools/_hypervolume/pyhv.py
doc/Makefile
doc/about.rst
doc/conf.py
doc/contributing.rst
doc/index.rst
doc/installation.rst
doc/overview.rst
doc/pip_req.txt
doc/porting.rst
doc/releases.rst
doc/_images/constraints.png
doc/_images/genealogy.png
doc/_images/gptree.png
doc/_images/gptypederrtree.png
doc/_images/gptypedtree.png
doc/_images/gptypedtrees.png
doc/_images/more.png
doc/_images/nsga3.png
doc/_images/twin_logbook.png
doc/_static/DEAP.pdf
doc/_static/copybutton.js
doc/_static/deap_icon-39x55.png
doc/_static/deap_icon_16x16.ico
doc/_static/deap_long.png
doc/_static/deap_orange_icon_16x16.ico
doc/_static/deap_orange_icon_32.ico
doc/_static/lvsn.png
doc/_static/sidebar.js
doc/_static/ul.gif
doc/_templates/indexsidebar.html
doc/_templates/layout.html
doc/_themes/pydoctheme/theme.conf
doc/_themes/pydoctheme/static/pydoctheme.css
doc/api/algo.rst
doc/api/base.rst
doc/api/benchmarks.rst
doc/api/creator.rst
doc/api/gp.rst
doc/api/index.rst
doc/api/tools.rst
doc/code/benchmarks/ackley.py
doc/code/benchmarks/bohachevsky.py
doc/code/benchmarks/griewank.py
doc/code/benchmarks/h1.py
doc/code/benchmarks/himmelblau.py
doc/code/benchmarks/kursawe.py
doc/code/benchmarks/movingsc1.py
doc/code/benchmarks/rastrigin.py
doc/code/benchmarks/rosenbrock.py
doc/code/benchmarks/schaffer.py
doc/code/benchmarks/schwefel.py
doc/code/benchmarks/shekel.py
doc/code/examples/nsga3_ref_points.py
doc/code/examples/nsga3_ref_points_combined.py
doc/code/examples/nsga3_ref_points_combined_plot.py
doc/code/tutorials/part_1/1_where_to_start.py
doc/code/tutorials/part_2/2_1_fitness.py
doc/code/tutorials/part_2/2_2_1_list_of_floats.py
doc/code/tutorials/part_2/2_2_2_permutation.py
doc/code/tutorials/part_2/2_2_3_arithmetic_expression.py
doc/code/tutorials/part_2/2_2_4_evolution_strategy.py
doc/code/tutorials/part_2/2_2_5_particle.py
doc/code/tutorials/part_2/2_2_6_funky_one.py
doc/code/tutorials/part_2/2_3_1_bag.py
doc/code/tutorials/part_2/2_3_2_grid.py
doc/code/tutorials/part_2/2_3_3_swarm.py
doc/code/tutorials/part_2/2_3_4_demes.py
doc/code/tutorials/part_2/2_3_5_seeding_a_population.py
doc/code/tutorials/part_2/my_guess.json
doc/code/tutorials/part_3/3_6_2_tool_decoration.py
doc/code/tutorials/part_3/3_6_using_the_toolbox.py
doc/code/tutorials/part_3/3_7_variations.py
doc/code/tutorials/part_3/3_8_algorithms.py
doc/code/tutorials/part_3/3_next_step.py
doc/code/tutorials/part_3/logbook.py
doc/code/tutorials/part_3/multistats.py
doc/code/tutorials/part_3/stats.py
doc/code/tutorials/part_4/4_4_Using_Cpp_NSGA.py
doc/code/tutorials/part_4/4_5_home_made_eval_func.py
doc/code/tutorials/part_4/SNC.cpp
doc/code/tutorials/part_4/installSN.py
doc/code/tutorials/part_4/sortingnetwork.py
doc/examples/bipop_cmaes.rst
doc/examples/cmaes.rst
doc/examples/cmaes_plotting.rst
doc/examples/coev_coop.rst
doc/examples/eda.rst
doc/examples/es_fctmin.rst
doc/examples/es_onefifth.rst
doc/examples/ga_knapsack.rst
doc/examples/ga_onemax.rst
doc/examples/ga_onemax_numpy.rst
doc/examples/ga_onemax_short.rst
doc/examples/gp_ant.rst
doc/examples/gp_multiplexer.rst
doc/examples/gp_parity.rst
doc/examples/gp_spambase.rst
doc/examples/gp_symbreg.rst
doc/examples/index.rst
doc/examples/nsga3.rst
doc/examples/pso_basic.rst
doc/examples/pso_multiswarm.rst
doc/tutorials/advanced/benchmarking.rst
doc/tutorials/advanced/checkpoint.rst
doc/tutorials/advanced/constraints.rst
doc/tutorials/advanced/gp.rst
doc/tutorials/advanced/numpy.rst
doc/tutorials/basic/part1.rst
doc/tutorials/basic/part2.rst
doc/tutorials/basic/part3.rst
doc/tutorials/basic/part4.rst
examples/bbob.py
examples/speed.txt
examples/coev/coop_adapt.py
examples/coev/coop_base.py
examples/coev/coop_evol.py
examples/coev/coop_gen.py
examples/coev/coop_niche.py
examples/coev/hillis.py
examples/coev/symbreg.py
examples/de/basic.py
examples/de/dynamic.py
examples/de/sphere.py
examples/eda/emna.py
examples/eda/pbil.py
examples/es/cma_1+l_minfct.py
examples/es/cma_bipop.py
examples/es/cma_minfct.py
examples/es/cma_mo.py
examples/es/cma_plotting.py
examples/es/fctmin.py
examples/es/onefifth.py
examples/ga/evoknn.py
examples/ga/evoknn_jmlr.py
examples/ga/evosn.py
examples/ga/heart_scale.csv
examples/ga/knapsack.py
examples/ga/knn.py
examples/ga/kursawefct.py
examples/ga/mo_rhv.py
examples/ga/nqueens.py
examples/ga/nsga2.py
examples/ga/nsga3.py
examples/ga/onemax.py
examples/ga/onemax_island.py
examples/ga/onemax_island_scoop.py
examples/ga/onemax_mp.py
examples/ga/onemax_multidemic.py
examples/ga/onemax_numpy.py
examples/ga/onemax_short.py
examples/ga/sortingnetwork.py
examples/ga/tsp.py
examples/ga/xkcd.py
examples/ga/pareto_front/dtlz1_front.json
examples/ga/pareto_front/dtlz2_front.json
examples/ga/pareto_front/dtlz3_front.json
examples/ga/pareto_front/dtlz4_front.json
examples/ga/pareto_front/zdt1_front.json
examples/ga/pareto_front/zdt2_front.json
examples/ga/pareto_front/zdt3_front.json
examples/ga/pareto_front/zdt4_front.json
examples/ga/pareto_front/zdt6_front.json
examples/ga/tsp/gr120.json
examples/ga/tsp/gr17.json
examples/ga/tsp/gr24.json
examples/gp/__init__.py
examples/gp/adf_symbreg.py
examples/gp/ant.py
examples/gp/multiplexer.py
examples/gp/parity.py
examples/gp/spambase.csv
examples/gp/spambase.py
examples/gp/symbreg.py
examples/gp/symbreg_epsilon_lexicase.py
examples/gp/symbreg_harm.py
examples/gp/symbreg_numpy.py
examples/gp/ant/AntSimulatorFast.cpp
examples/gp/ant/AntSimulatorFast.hpp
examples/gp/ant/buildAntSimFast.py
examples/gp/ant/santafe_trail.txt
examples/pso/basic.py
examples/pso/basic_numpy.py
examples/pso/multiswarm.py
examples/pso/speciation.py
tests/test_algorithms.py
tests/test_benchmarks.py
tests/test_convergence.py
tests/test_creator.py
tests/test_init.py
tests/test_logbook.py
tests/test_multiproc.py
tests/test_mutation.py
tests/test_operators.py
tests/test_pickle.py
tests/test_statistics.py ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936700.0
deap-1.4.1/deap.egg-info/dependency_links.txt 0000644 0000765 0000024 00000000001 14456461474 020352 0 ustar 00runner staff
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936700.0
deap-1.4.1/deap.egg-info/requires.txt 0000644 0000765 0000024 00000000006 14456461474 016700 0 ustar 00runner staff numpy
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936700.0
deap-1.4.1/deap.egg-info/top_level.txt 0000644 0000765 0000024 00000000005 14456461474 017031 0 ustar 00runner staff deap
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 010212 x ustar 00 28 mtime=1689936700.6440876
deap-1.4.1/doc/ 0000755 0000765 0000024 00000000000 14456461475 012447 5 ustar 00runner staff ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936673.0
deap-1.4.1/doc/Makefile 0000644 0000765 0000024 00000006107 14456461441 014104 0 ustar 00runner staff # Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
help:
@echo "Please use \`make ' where is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
PYTHONPATH=${PWD}/../ $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/EAP.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/EAP.qhc"
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
././@PaxHeader 0000000 0000000 0000000 00000000034 00000000000 010212 x ustar 00 28 mtime=1689936700.6500487
deap-1.4.1/doc/_images/ 0000755 0000765 0000024 00000000000 14456461475 014053 5 ustar 00runner staff ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1689936673.0
deap-1.4.1/doc/_images/constraints.png 0000644 0000765 0000024 00000123333 14456461441 017126 0 ustar 00runner staff [binary PNG image data omitted]