BayesianOptimization-2.0.1/.github/ISSUE_TEMPLATE/bug_report.md
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug, enhancement
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
Ex: Using `scipy==1.8` with `bayesian-optimization==1.2.0` results in `TypeError: 'float' object is not subscriptable`.
**To Reproduce**
A concise, self-contained code snippet that reproduces the bug you would like to report.
Ex:
```python
from bayes_opt import BayesianOptimization
black_box_function = lambda x, y: -x ** 2 - (y - 1) ** 2 + 1
pbounds = {'x': (2, 4), 'y': (-3, 3)}
optimizer = BayesianOptimization(
f=black_box_function,
pbounds=pbounds
)
optimizer.maximize()
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Arch Linux, macOS, Windows]
- `python` Version [e.g. 3.8.9]
- `numpy` Version [e.g. 1.21.6]
- `scipy` Version [e.g. 1.8.0]
- `bayesian-optimization` Version [e.g. 1.2.0]
**Additional context**
Add any other context about the problem here.
BayesianOptimization-2.0.1/.github/ISSUE_TEMPLATE/feature_request.md
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**References or alternative approaches**
If this feature was described in the literature, please add references here. Additionally, feel free to add descriptions of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
**Are you able and willing to implement this feature yourself and open a pull request?**
- [ ] Yes, I can provide this feature.
BayesianOptimization-2.0.1/.github/workflows/build_docs.yml
name: docs
on:
release:
types: [published]
push:
branches:
- master
pull_request:
concurrency:
group: ${{ github.workflow }}
jobs:
build-docs-and-publish:
runs-on: ubuntu-20.04
permissions:
contents: write
steps:
- uses: actions/checkout@v3
- name: Setup Python
uses: actions/setup-python@v3
with:
python-version: '3.10'
- name: Get tag
uses: olegtarasov/get-tag@v2.1
- name: Install pandoc
run: sudo apt-get install -y pandoc
- name: Install Poetry
uses: snok/install-poetry@v1
- name: Install package and test dependencies
run: |
poetry install --with dev,nbtools
- name: build sphinx docs
run: |
cd docsrc
poetry run make github
- name: Determine directory to publish docs to
id: docs-publish-dir
uses: jannekem/run-python-script-action@v1
with:
script: |
import os, re
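# Decide where the docs get published: a version tag maps to a directory named
# after the version, a push to master maps to 'master', and anything else
# (e.g. a pull request build) leaves the target empty so the deploy step is skipped.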
github_ref = os.environ.get('GITHUB_REF')
m = re.match(r'^refs/tags/v([0-9]+\.[0-9]+\.[0-9]+(-dev\.[0-9]+)?)$',
github_ref)
if m:
target = m.group(1)
elif github_ref == 'refs/heads/master':
target = 'master'
else:
target = ''
set_output('target', target)
- name: Deploy
uses: peaceiris/actions-gh-pages@v3
if: steps.docs-publish-dir.outputs.target != ''
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./docs/html
destination_dir: ${{ steps.docs-publish-dir.outputs.target }}
keep_files: false
outputs:
docs-target: ${{ steps.docs-publish-dir.outputs.target }}
update-versions:
name: Update docs versions JSON
needs: build-docs-and-publish
if: needs.build-docs-and-publish.outputs.docs-target != ''
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- uses: actions/checkout@v3
with:
ref: gh-pages
- name: Write versions to JSON file
uses: jannekem/run-python-script-action@v1
with:
script: |
import json
import re
# dependency of sphinx, so should be installed
from packaging import version as version_
from pathlib import Path
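# Build the versions.json consumed by the docs site: list the version directories
# already published to gh-pages and mark the newest parseable release as 'stable'.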
cwd = Path.cwd()
versions = sorted((item.name for item in cwd.iterdir()
if item.is_dir() and not item.name.startswith('.')),
reverse=True)
# Keep only names that parse as version numbers (this drops 'master' and other non-version directories)
parseable_versions = []
for version in versions:
try:
version_.parse(version)
except version_.InvalidVersion:
continue
parseable_versions.append(version)
if parseable_versions:
max_version = max(parseable_versions, key=version_.parse)
else:
max_version = None
target_dir = Path('gh-pages')
target_dir.mkdir(parents=True)
versions = [
dict(
version=version,
title=version + ' (stable)' if version == max_version else version,
aliases=['stable'] if version == max_version else [],
) for version in versions
]
target_file = target_dir / 'versions.json'
with target_file.open('w') as f:
json.dump(versions, f)
- name: Publish versions JSON to GitHub pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: gh-pages
keep_files: true
BayesianOptimization-2.0.1/.github/workflows/format_and_lint.yml
name: Code format and lint
on:
push:
branches: [ "master" ]
pull_request:
permissions:
contents: read
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python 3.9
uses: actions/setup-python@v3
with:
python-version: "3.9"
- name: Install Poetry
uses: snok/install-poetry@v1
- name: Install dependencies
run: |
poetry install --with dev
- name: Run pre-commit
run: poetry run pre-commit run --all-files --show-diff-on-failure --color=always
BayesianOptimization-2.0.1/.github/workflows/python-publish.yml
# This workflow will upload a Python Package using poetry when a release is created
# Note that you must manually update the version number in pyproject.toml before attempting this.
name: Upload Python Package
on:
release:
types: [published]
permissions:
contents: read
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Build and publish to pypi
uses: JRubics/poetry-publish@v2.0
with:
pypi_token: ${{ secrets.PYPI_API_TOKEN }}
# python_version: "3.10"
# poetry_version: "==1.8" # can lock versions if we want
BayesianOptimization-2.0.1/.github/workflows/run_tests.yml
# This workflow will install Python dependencies and run the tests against several versions of Python and numpy
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: tests
on:
push:
branches: [ "master" ]
pull_request:
permissions:
contents: read
jobs:
build:
name: Python ${{ matrix.python-version }} - numpy ${{ matrix.numpy-version }}
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12"]
numpy-version: [">=1.25,<2", ">=2"]
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install Poetry
uses: snok/install-poetry@v1
- name: Install test dependencies
run: |
poetry add "numpy${{ matrix.numpy-version }}"
poetry install --with dev,nbtools
- name: Test with pytest
run: |
poetry run pytest --cov-report xml --cov=bayes_opt/
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
BayesianOptimization-2.0.1/.gitignore
.ipynb_checkpoints
*.pyc
*.egg-info/
build/
dist/
scratch/
.idea/
.DS_Store
bo_eg*.png
gif/
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
*temp*
docs/*
docsrc/.ipynb_checkpoints/*
docsrc/*.ipynb
docsrc/static/*
docsrc/README.md
poetry.lock

BayesianOptimization-2.0.1/.pre-commit-config.yaml
repos:
- hooks:
- id: ruff
name: ruff-lint
- id: ruff-format
name: ruff-format
args: [--check]
repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.6

BayesianOptimization-2.0.1/LICENSE
The MIT License (MIT)
Copyright (c) 2014 Fernando M. F. Nogueira
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

BayesianOptimization-2.0.1/README.md
# Bayesian Optimization

[](https://bayesian-optimization.github.io/BayesianOptimization/index.html)
[](https://codecov.io/github/bayesian-optimization/BayesianOptimization?branch=master)
[](https://pypi.python.org/pypi/bayesian-optimization)

Pure Python implementation of Bayesian global optimization with Gaussian
processes.
This is a constrained global optimization package built upon Bayesian inference
and Gaussian processes that attempts to find the maximum value of an unknown
function in as few iterations as possible. This technique is particularly
well suited for optimizing expensive-to-evaluate functions and for situations where the balance
between exploration and exploitation is important.
## Installation
* pip (via PyPI):
```console
$ pip install bayesian-optimization
```
* Conda (via conda-forge):
```console
$ conda install -c conda-forge bayesian-optimization
```
## How does it work?
See the [documentation](https://bayesian-optimization.github.io/BayesianOptimization/) for how to use this package.
Bayesian optimization works by constructing a posterior distribution of functions (Gaussian process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not, as seen in the picture below.

As you iterate over and over, the algorithm balances its needs of exploration and exploitation while taking into account what it knows about the target function. At each step a Gaussian Process is fitted to the known samples (points previously explored), and the posterior distribution, combined with an exploration strategy (such as UCB (Upper Confidence Bound) or EI (Expected Improvement)), is used to determine the next point that should be explored (see the gif below).

This process is designed to minimize the number of steps required to find a combination of parameters that is close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is much cheaper in the computational sense and can be tackled with common tools. Bayesian optimization is therefore best suited to situations where sampling the function to be optimized is a very expensive endeavor. See the references for a proper discussion of this method.
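If you want to control the exploration/exploitation trade-off yourself, you can pass an acquisition function to the optimizer explicitly. The snippet below is a minimal sketch (the parameter values are purely illustrative) that uses the Upper Confidence Bound acquisition; `acquisition.ExpectedImprovement(xi=...)` can be used in the same way.

```python
from bayes_opt import BayesianOptimization, acquisition

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds={'x': (2, 4), 'y': (-3, 3)},
    # A lower kappa favours exploitation, a higher kappa favours exploration.
    acquisition_function=acquisition.UpperConfidenceBound(kappa=2.576),
    random_state=1,
)
optimizer.maximize(init_points=2, n_iter=3)
```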
This project is under active development. If you run into trouble, find a bug or notice
anything that needs correction, please let us know by filing an issue.
## Basic tour of the Bayesian Optimization package
### 1. Specifying the function to be optimized
This is a function optimization package, therefore the first and most important ingredient is, of course, the function to be optimized.
**DISCLAIMER:** We know exactly how the output of the function below depends on its parameters. Obviously this is just an example, and you shouldn't expect to know it in a real scenario. However, it should be clear that you don't need to. All you need in order to use this package (and more generally, this technique) is a function `f` that takes a known set of parameters and outputs a real number.
```python
def black_box_function(x, y):
"""Function with unknown internals we wish to maximize.
This is just serving as an example; for all intents and
purposes, think of the internals of this function, i.e. the process
which generates its output values, as unknown.
"""
return -x ** 2 - (y - 1) ** 2 + 1
```
### 2. Getting Started
All we need to get started is to instantiate a `BayesianOptimization` object specifying a function to be optimized, `f`, and its parameters with their corresponding bounds, `pbounds`. This is a constrained optimization technique, so you must specify the minimum and maximum values that can be probed for each parameter in order for it to work.
```python
from bayes_opt import BayesianOptimization
# Bounded region of parameter space
pbounds = {'x': (2, 4), 'y': (-3, 3)}
optimizer = BayesianOptimization(
f=black_box_function,
pbounds=pbounds,
random_state=1,
)
```
The BayesianOptimization object will work out of the box without much tuning needed. The main method you should be aware of is `maximize`, which does exactly what you think it does.
There are many parameters you can pass to `maximize`; the most important ones are:
- `n_iter`: How many steps of Bayesian optimization you want to perform. The more steps, the more likely you are to find a good maximum.
- `init_points`: How many steps of **random** exploration you want to perform. Random exploration can help by diversifying the exploration space.
```python
optimizer.maximize(
init_points=2,
n_iter=3,
)
```
| iter | target | x | y |
|------|---------|-------|-----------|
| 1 | -7.135 | 2.834 | 1.322 |
| 2 | -7.78 | 2.0 | -1.186 |
| 3 | -19.0 | 4.0 | 3.0 |
| 4 | -16.3 | 2.378 | -2.413 |
| 5 | -4.441 | 2.105 | -0.005822 |
The best combination of parameters and target value found can be accessed via the property `optimizer.max`.
```python
print(optimizer.max)
>>> {'target': -4.441293113411222, 'params': {'y': -0.005822117636089974, 'x': 2.104665051994087}}
```
The list of all parameters probed and their corresponding target values is available via the property `optimizer.res`.
```python
for i, res in enumerate(optimizer.res):
print("Iteration {}: \n\t{}".format(i, res))
>>> Iteration 0:
>>> {'target': -7.135455292718879, 'params': {'y': 1.3219469606529488, 'x': 2.8340440094051482}}
>>> Iteration 1:
>>> {'target': -7.779531005607566, 'params': {'y': -1.1860045642089614, 'x': 2.0002287496346898}}
>>> Iteration 2:
>>> {'target': -19.0, 'params': {'y': 3.0, 'x': 4.0}}
>>> Iteration 3:
>>> {'target': -16.29839645063864, 'params': {'y': -2.412527795983739, 'x': 2.3776144540856503}}
>>> Iteration 4:
>>> {'target': -4.441293113411222, 'params': {'y': -0.005822117636089974, 'x': 2.104665051994087}}
```
## Minutiae
### Citation
If you used this package in your research, please cite it:
```
@Misc{,
author = {Fernando Nogueira},
title = {{Bayesian Optimization}: Open source constrained global optimization tool for {Python}},
year = {2014--},
url = " https://github.com/bayesian-optimization/BayesianOptimization"
}
```
If you used any of the advanced functionalities, please additionally cite the corresponding publication:
For the `SequentialDomainReductionTransformer`:
```
@article{
author = {Stander, Nielen and Craig, Kenneth},
year = {2002},
month = {06},
pages = {},
title = {On the robustness of a simple domain reduction scheme for simulation-based optimization},
volume = {19},
journal = {International Journal for Computer-Aided Engineering and Software (Eng. Comput.)},
doi = {10.1108/02644400210430190}
}
```
For constrained optimization:
```
@inproceedings{gardner2014bayesian,
title={Bayesian optimization with inequality constraints.},
author={Gardner, Jacob R and Kusner, Matt J and Xu, Zhixiang Eddie and Weinberger, Kilian Q and Cunningham, John P},
booktitle={ICML},
volume={2014},
pages={937--945},
year={2014}
}
```
BayesianOptimization-2.0.1/bayes_opt/__init__.py
"""Pure Python implementation of Bayesian global optimization with Gaussian processes."""
from __future__ import annotations
import importlib.metadata
from bayes_opt import acquisition
from bayes_opt.bayesian_optimization import BayesianOptimization, Events
from bayes_opt.constraint import ConstraintModel
from bayes_opt.domain_reduction import SequentialDomainReductionTransformer
from bayes_opt.logger import JSONLogger, ScreenLogger
from bayes_opt.target_space import TargetSpace
__version__ = importlib.metadata.version("bayesian-optimization")
__all__ = [
"acquisition",
"BayesianOptimization",
"TargetSpace",
"ConstraintModel",
"Events",
"ScreenLogger",
"JSONLogger",
"SequentialDomainReductionTransformer",
]
BayesianOptimization-2.0.1/bayes_opt/acquisition.py
"""Acquisition functions for Bayesian Optimization.
The acquisition functions in this module can be grouped in the following way:
- One of the base acquisition functions
(:py:class:`UpperConfidenceBound`,
:py:class:`ProbabilityOfImprovement` and
:py:class:`ExpectedImprovement`) is always dictating the basic
behavior of the suggestion step. They can be used alone or combined with a meta acquisition function.
- :py:class:`GPHedge` is a meta acquisition function that combines multiple
base acquisition functions and determines the most suitable one for a particular problem.
- :py:class:`ConstantLiar` is a meta acquisition function that can be
used for parallelized optimization and discourages sampling near a previously suggested, but not yet
evaluated, point.
- :py:class:`AcquisitionFunction` is the base class for all
acquisition functions. You can implement your own acquisition function by subclassing it. See the
`Acquisition Functions notebook <../acquisition.html>`__ to understand the many ways this class can be
modified.
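As an illustrative sketch (the class below is not part of the package), a custom
acquisition function can be defined by subclassing :py:class:`AcquisitionFunction`
and implementing ``base_acq``::

    class PosteriorMean(AcquisitionFunction):
        # Purely exploitative sketch: rank candidate points by the GP posterior mean.
        def base_acq(self, mean, std):
            return mean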
"""
from __future__ import annotations
import abc
import warnings
from copy import deepcopy
from typing import TYPE_CHECKING, Any, Literal, NoReturn
import numpy as np
from numpy.random import RandomState
from scipy.optimize import minimize
from scipy.special import softmax
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from bayes_opt.exception import (
ConstraintNotSupportedError,
NoValidPointRegisteredError,
TargetSpaceEmptyError,
)
from bayes_opt.target_space import TargetSpace
if TYPE_CHECKING:
from collections.abc import Callable, Sequence
from numpy.typing import NDArray
from scipy.optimize import OptimizeResult
from bayes_opt.constraint import ConstraintModel
Float = np.floating[Any]
class AcquisitionFunction(abc.ABC):
"""Base class for acquisition functions.
Parameters
----------
random_state : int, RandomState, default None
Set the random state for reproducibility.
"""
def __init__(self, random_state: int | RandomState | None = None) -> None:
if random_state is not None:
if isinstance(random_state, RandomState):
self.random_state = random_state
else:
self.random_state = RandomState(random_state)
else:
self.random_state = RandomState()
self.i = 0
@abc.abstractmethod
def base_acq(self, *args: Any, **kwargs: Any) -> NDArray[Float]:
"""Provide access to the base acquisition function."""
def _fit_gp(self, gp: GaussianProcessRegressor, target_space: TargetSpace) -> None:
# Sklearn's GP throws a large number of warnings at times, but
# we don't really need to see them here.
with warnings.catch_warnings():
warnings.simplefilter("ignore")
gp.fit(target_space.params, target_space.target)
if target_space.constraint is not None:
target_space.constraint.fit(target_space.params, target_space._constraint_values)
def suggest(
self,
gp: GaussianProcessRegressor,
target_space: TargetSpace,
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
fit_gp: bool = True,
) -> NDArray[Float]:
"""Suggest a promising point to probe next.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
target_space : TargetSpace
The target space to probe.
n_random : int, default 10_000
Number of random samples to use.
n_l_bfgs_b : int, default 10
Number of starting points for the L-BFGS-B optimizer.
fit_gp : bool, default True
Whether to fit the Gaussian Process to the target space.
Set to False if the GP is already fitted.
Returns
-------
np.ndarray
Suggested point to probe next.
"""
if len(target_space) == 0:
msg = (
"Cannot suggest a point without previous samples. Use "
" target_space.random_sample() to generate a point and "
" target_space.probe(*) to evaluate it."
)
raise TargetSpaceEmptyError(msg)
self.i += 1
if fit_gp:
self._fit_gp(gp=gp, target_space=target_space)
acq = self._get_acq(gp=gp, constraint=target_space.constraint)
return self._acq_min(acq, target_space.bounds, n_random=n_random, n_l_bfgs_b=n_l_bfgs_b)
def _get_acq(
self, gp: GaussianProcessRegressor, constraint: ConstraintModel | None = None
) -> Callable[[NDArray[Float]], NDArray[Float]]:
"""Prepare the acquisition function for minimization.
Transforms a base_acq Callable, which takes `mean` and `std` as
input, into an acquisition function that only requires an array of
parameters.
Handles GP predictions and constraints.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
constraint : ConstraintModel, default None
A fitted constraint model, if constraints are present and the
acquisition function supports them.
Returns
-------
Callable
Function to minimize.
"""
dim = gp.X_train_.shape[1]
if constraint is not None:
def acq(x: NDArray[Float]) -> NDArray[Float]:
x = x.reshape(-1, dim)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean: NDArray[Float]
std: NDArray[Float]
p_constraints: NDArray[Float]
mean, std = gp.predict(x, return_std=True)
p_constraints = constraint.predict(x)
return -1 * self.base_acq(mean, std) * p_constraints
else:
def acq(x: NDArray[Float]) -> NDArray[Float]:
x = x.reshape(-1, dim)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean: NDArray[Float]
std: NDArray[Float]
mean, std = gp.predict(x, return_std=True)
return -1 * self.base_acq(mean, std)
return acq
def _acq_min(
self,
acq: Callable[[NDArray[Float]], NDArray[Float]],
bounds: NDArray[Float],
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
) -> NDArray[Float]:
"""Find the maximum of the acquisition function.
Uses a combination of random sampling (cheap) and the 'L-BFGS-B'
optimization method. First `n_random` points are sampled at random, and
then L-BFGS-B is run from `n_l_bfgs_b` random starting points.
Parameters
----------
acq : Callable
Acquisition function to use. Should accept an array of parameters `x`.
bounds : np.ndarray
Bounds of the search space. For `N` parameters this has shape
`(N, 2)` with `[i, 0]` the lower bound of parameter `i` and
`[i, 1]` the upper bound.
n_random : int
Number of random samples to use.
n_l_bfgs_b : int
Number of starting points for the L-BFGS-B optimizer.
Returns
-------
np.ndarray
Parameters maximizing the acquisition function.
"""
if n_random == 0 and n_l_bfgs_b == 0:
error_msg = "Either n_random or n_l_bfgs_b needs to be greater than 0."
raise ValueError(error_msg)
x_min_r, min_acq_r = self._random_sample_minimize(acq, bounds, n_random=n_random)
x_min_l, min_acq_l = self._l_bfgs_b_minimize(acq, bounds, n_x_seeds=n_l_bfgs_b)
# Either n_random or n_l_bfgs_b is not 0 => at least one of x_min_r and x_min_l is not None
if min_acq_r < min_acq_l:
return x_min_r
return x_min_l
def _random_sample_minimize(
self, acq: Callable[[NDArray[Float]], NDArray[Float]], bounds: NDArray[Float], n_random: int
) -> tuple[NDArray[Float] | None, float]:
"""Random search to find the minimum of `acq` function.
Parameters
----------
acq : Callable
Acquisition function to use. Should accept an array of parameters `x`.
bounds : np.ndarray
Bounds of the search space. For `N` parameters this has shape
`(N, 2)` with `[i, 0]` the lower bound of parameter `i` and
`[i, 1]` the upper bound.
n_random : int
Number of random samples to use.
Returns
-------
x_min : np.ndarray
Random sample minimizing the acquisition function.
min_acq : float
Acquisition function value at `x_min`
"""
if n_random == 0:
return None, np.inf
x_tries = self.random_state.uniform(bounds[:, 0], bounds[:, 1], size=(n_random, bounds.shape[0]))
ys = acq(x_tries)
x_min = x_tries[ys.argmin()]
min_acq = ys.min()
return x_min, min_acq
def _l_bfgs_b_minimize(
self, acq: Callable[[NDArray[Float]], NDArray[Float]], bounds: NDArray[Float], n_x_seeds: int = 10
) -> tuple[NDArray[Float] | None, float]:
"""Random search to find the minimum of `acq` function.
Parameters
----------
acq : Callable
Acquisition function to use. Should accept an array of parameters `x`.
bounds : np.ndarray
Bounds of the search space. For `N` parameters this has shape
`(N, 2)` with `[i, 0]` the lower bound of parameter `i` and
`[i, 1]` the upper bound.
n_x_seeds : int
Number of starting points for the L-BFGS-B optimizer.
Returns
-------
x_min : np.ndarray
Minimal result of the L-BFGS-B optimizer.
min_acq : float
Acquisition function value at `x_min`
"""
if n_x_seeds == 0:
return None, np.inf
x_seeds = self.random_state.uniform(bounds[:, 0], bounds[:, 1], size=(n_x_seeds, bounds.shape[0]))
min_acq: float | None = None
x_try: NDArray[Float]
x_min: NDArray[Float]
for x_try in x_seeds:
# Find the minimum of minus the acquisition function
res: OptimizeResult = minimize(acq, x_try, bounds=bounds, method="L-BFGS-B")
# See if success
if not res.success:
continue
# Store it if better (lower) than the previous minimum.
if min_acq is None or np.squeeze(res.fun) <= min_acq:
x_min = res.x
min_acq = np.squeeze(res.fun)
if min_acq is None:
min_acq = np.inf
x_min = np.array([np.nan] * bounds.shape[0])
# Clip output to make sure it lies within the bounds. Due to floating
# point technicalities this is not always the case.
return np.clip(x_min, bounds[:, 0], bounds[:, 1]), min_acq
class UpperConfidenceBound(AcquisitionFunction):
r"""Upper Confidence Bound acquisition function.
The upper confidence bound is calculated as
.. math::
\text{UCB}(x) = \mu(x) + \kappa \sigma(x).
Parameters
----------
kappa : float, default 2.576
Governs the exploration/exploitation tradeoff. Lower prefers
exploitation, higher prefers exploration.
exploration_decay : float, default None
Decay rate for kappa. If None, no decay is applied.
exploration_decay_delay : int, default None
Delay for decay. If None, decay is applied from the start.
random_state : int, RandomState, default None
Set the random state for reproducibility.
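Examples
--------
Illustrative sketch; the values are arbitrary:

>>> from bayes_opt.acquisition import UpperConfidenceBound
>>> acq = UpperConfidenceBound(kappa=2.576, exploration_decay=0.99)

With these settings, kappa is multiplied by 0.99 after every ``suggest()`` call,
slowly shifting the balance from exploration towards exploitation.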
"""
def __init__(
self,
kappa: float = 2.576,
exploration_decay: float | None = None,
exploration_decay_delay: int | None = None,
random_state: int | RandomState | None = None,
) -> None:
if kappa < 0:
error_msg = "kappa must be greater than or equal to 0."
raise ValueError(error_msg)
super().__init__(random_state=random_state)
self.kappa = kappa
self.exploration_decay = exploration_decay
self.exploration_decay_delay = exploration_decay_delay
def base_acq(self, mean: NDArray[Float], std: NDArray[Float]) -> NDArray[Float]:
"""Calculate the upper confidence bound.
Parameters
----------
mean : np.ndarray
Mean of the predictive distribution.
std : np.ndarray
Standard deviation of the predictive distribution.
Returns
-------
np.ndarray
Acquisition function value.
"""
return mean + self.kappa * std
def suggest(
self,
gp: GaussianProcessRegressor,
target_space: TargetSpace,
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
fit_gp: bool = True,
) -> NDArray[Float]:
"""Suggest a promising point to probe next.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
target_space : TargetSpace
The target space to probe.
n_random : int, default 10_000
Number of random samples to use.
n_l_bfgs_b : int, default 10
Number of starting points for the L-BFGS-B optimizer.
fit_gp : bool, default True
Whether to fit the Gaussian Process to the target space.
Set to False if the GP is already fitted.
Returns
-------
np.ndarray
Suggested point to probe next.
"""
if target_space.constraint is not None:
msg = (
f"Received constraints, but acquisition function {type(self)} "
"does not support constrained optimization."
)
raise ConstraintNotSupportedError(msg)
x_max = super().suggest(
gp=gp, target_space=target_space, n_random=n_random, n_l_bfgs_b=n_l_bfgs_b, fit_gp=fit_gp
)
self.decay_exploration()
return x_max
def decay_exploration(self) -> None:
"""Decay kappa by a constant rate.
Adjust exploration/exploitation trade-off by reducing kappa.
Note
----
This method is called automatically at the end of each ``suggest()`` call.
"""
if self.exploration_decay is not None and (
self.exploration_decay_delay is None or self.exploration_decay_delay <= self.i
):
self.kappa = self.kappa * self.exploration_decay
class ProbabilityOfImprovement(AcquisitionFunction):
r"""Probability of Improvement acqusition function.
Calculated as
.. math:: \text{POI}(x) = \Phi\left( \frac{\mu(x)-y_{\text{max}} - \xi }{\sigma(x)} \right)
where :math:`\Phi` is the CDF of the normal distribution.
Parameters
----------
xi : float, positive
Governs the exploration/exploitation tradeoff. Lower prefers
exploitation, higher prefers exploration.
exploration_decay : float, default None
Decay rate for xi. If None, no decay is applied.
exploration_decay_delay : int, default None
Delay for decay. If None, decay is applied from the start.
random_state : int, RandomState, default None
Set the random state for reproducibility.
"""
def __init__(
self,
xi: float,
exploration_decay: float | None = None,
exploration_decay_delay: int | None = None,
random_state: int | RandomState | None = None,
) -> None:
super().__init__(random_state=random_state)
self.xi = xi
self.exploration_decay = exploration_decay
self.exploration_decay_delay = exploration_decay_delay
self.y_max = None
def base_acq(self, mean: NDArray[Float], std: NDArray[Float]) -> NDArray[Float]:
"""Calculate the probability of improvement.
Parameters
----------
mean : np.ndarray
Mean of the predictive distribution.
std : np.ndarray
Standard deviation of the predictive distribution.
Returns
-------
np.ndarray
Acquisition function value.
Raises
------
ValueError
If y_max is not set.
"""
if self.y_max is None:
msg = (
"y_max is not set. If you are calling this method outside "
"of suggest(), you must set y_max manually."
)
raise ValueError(msg)
z = (mean - self.y_max - self.xi) / std
return norm.cdf(z)
def suggest(
self,
gp: GaussianProcessRegressor,
target_space: TargetSpace,
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
fit_gp: bool = True,
) -> NDArray[Float]:
"""Suggest a promising point to probe next.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
target_space : TargetSpace
The target space to probe.
n_random : int, default 10_000
Number of random samples to use.
n_l_bfgs_b : int, default 10
Number of starting points for the L-BFGS-B optimizer.
fit_gp : bool, default True
Whether to fit the Gaussian Process to the target space.
Set to False if the GP is already fitted.
Returns
-------
np.ndarray
Suggested point to probe next.
"""
y_max = target_space._target_max()
if y_max is None and not target_space.empty:
# If target space is empty, let base class handle the error
msg = (
"Cannot suggest a point without an allowed point. Use "
"target_space.random_sample() to generate a point until "
" at least one point that satisfies the constraints is found."
)
raise NoValidPointRegisteredError(msg)
self.y_max = y_max
x_max = super().suggest(
gp=gp, target_space=target_space, n_random=n_random, n_l_bfgs_b=n_l_bfgs_b, fit_gp=fit_gp
)
self.decay_exploration()
return x_max
def decay_exploration(self) -> None:
r"""Decay xi by a constant rate.
Adjust exploration/exploitation trade-off by reducing xi.
Note
----
This method is called automatically at the end of each ``suggest()`` call.
"""
if self.exploration_decay is not None and (
self.exploration_decay_delay is None or self.exploration_decay_delay <= self.i
):
self.xi = self.xi * self.exploration_decay
class ExpectedImprovement(AcquisitionFunction):
r"""Expected Improvement acqusition function.
Similar to Probability of Improvement (`ProbabilityOfImprovement`), but also considers the
magnitude of improvement.
Calculated as
.. math::
\text{EI}(x) = (\mu(x)-y_{\text{max}} - \xi) \Phi\left(
\frac{\mu(x)-y_{\text{max}} - \xi }{\sigma(x)} \right)
+ \sigma(x) \phi\left(
\frac{\mu(x)-y_{\text{max}} - \xi }{\sigma(x)} \right)
where :math:`\Phi` is the CDF and :math:`\phi` the PDF of the normal
distribution.
Parameters
----------
xi : float, positive
Governs the exploration/exploitation tradeoff. Lower prefers
exploitation, higher prefers exploration.
exploration_decay : float, default None
Decay rate for xi. If None, no decay is applied.
exploration_decay_delay : int, default None
Delay for decay. If None, decay is applied from the start.
random_state : int, RandomState, default None
Set the random state for reproducibility.
"""
def __init__(
self,
xi: float,
exploration_decay: float | None = None,
exploration_decay_delay: int | None = None,
random_state: int | RandomState | None = None,
) -> None:
super().__init__(random_state=random_state)
self.xi = xi
self.exploration_decay = exploration_decay
self.exploration_decay_delay = exploration_decay_delay
self.y_max = None
def base_acq(self, mean: NDArray[Float], std: NDArray[Float]) -> NDArray[Float]:
"""Calculate the expected improvement.
Parameters
----------
mean : np.ndarray
Mean of the predictive distribution.
std : np.ndarray
Standard deviation of the predictive distribution.
Returns
-------
np.ndarray
Acquisition function value.
Raises
------
ValueError
If y_max is not set.
"""
if self.y_max is None:
msg = (
"y_max is not set. If you are calling this method outside "
"of suggest(), ensure y_max is set, or set it manually."
)
raise ValueError(msg)
a = mean - self.y_max - self.xi
z = a / std
return a * norm.cdf(z) + std * norm.pdf(z)
def suggest(
self,
gp: GaussianProcessRegressor,
target_space: TargetSpace,
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
fit_gp: bool = True,
) -> NDArray[Float]:
"""Suggest a promising point to probe next.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
target_space : TargetSpace
The target space to probe.
n_random : int, default 10_000
Number of random samples to use.
n_l_bfgs_b : int, default 10
Number of starting points for the L-BFGS-B optimizer.
fit_gp : bool, default True
Whether to fit the Gaussian Process to the target space.
Set to False if the GP is already fitted.
Returns
-------
np.ndarray
Suggested point to probe next.
"""
y_max = target_space._target_max()
if y_max is None and not target_space.empty:
# If target space is empty, let base class handle the error
msg = (
"Cannot suggest a point without an allowed point. Use "
"target_space.random_sample() to generate a point until "
" at least one point that satisfies the constraints is found."
)
raise NoValidPointRegisteredError(msg)
self.y_max = y_max
x_max = super().suggest(
gp=gp, target_space=target_space, n_random=n_random, n_l_bfgs_b=n_l_bfgs_b, fit_gp=fit_gp
)
self.decay_exploration()
return x_max
def decay_exploration(self) -> None:
r"""Decay xi by a constant rate.
Adjust exploration/exploitation trade-off by reducing xi.
Note
----
This method is called automatically at the end of each ``suggest()`` call.
"""
if self.exploration_decay is not None and (
self.exploration_decay_delay is None or self.exploration_decay_delay <= self.i
):
self.xi = self.xi * self.exploration_decay
class ConstantLiar(AcquisitionFunction):
"""Constant Liar acquisition function.
Used for asynchronous optimization. It operates on a copy of the target space
that includes the previously suggested points that have not been evaluated yet.
A GP fitted to this target space is less likely to suggest the same point again,
since the variance of the predictive distribution is lower at these points.
This discourages the optimization algorithm from suggesting the same point
to multiple workers.
Parameters
----------
base_acquisition : AcquisitionFunction
The acquisition function to use.
strategy : float or str, default 'max'
Strategy to use for the constant liar. If a float, the constant liar
will always register dummies with this value. If 'min'/'mean'/'max',
the constant liar will register dummies with the minimum/mean/maximum
target value in the target space.
random_state : int, RandomState, default None
Set the random state for reproducibility.
atol : float, default 1e-5
Absolute tolerance to eliminate a dummy point.
rtol : float, default 1e-8
Relative tolerance to eliminate a dummy point.
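Examples
--------
Illustrative sketch of an asynchronous setup (the parallel evaluation machinery
itself is omitted):

>>> from bayes_opt import BayesianOptimization, acquisition
>>> acq = acquisition.ConstantLiar(base_acquisition=acquisition.UpperConfidenceBound())
>>> optimizer = BayesianOptimization(f=None, pbounds={"x": (-2, 2)}, acquisition_function=acq)

Each worker would then repeatedly call ``optimizer.suggest()``, evaluate the target
function, and report the result back via ``optimizer.register()``.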
"""
def __init__(
self,
base_acquisition: AcquisitionFunction,
strategy: Literal["min", "mean", "max"] | float = "max",
random_state: int | RandomState | None = None,
atol: float = 1e-5,
rtol: float = 1e-8,
) -> None:
super().__init__(random_state)
self.base_acquisition = base_acquisition
self.dummies = []
if not isinstance(strategy, float) and strategy not in ["min", "mean", "max"]:
error_msg = f"Received invalid argument {strategy} for strategy."
raise ValueError(error_msg)
self.strategy: Literal["min", "mean", "max"] | float = strategy
self.atol = atol
self.rtol = rtol
def base_acq(self, *args: Any, **kwargs: Any) -> NDArray[Float]:
"""Calculate the acquisition function.
Calls the base acquisition function's `base_acq` method.
Returns
-------
np.ndarray
Acquisition function value.
"""
return self.base_acquisition.base_acq(*args, **kwargs)
def _copy_target_space(self, target_space: TargetSpace) -> TargetSpace:
"""Create a copy of the target space.
Parameters
----------
target_space : TargetSpace
The target space to copy.
Returns
-------
TargetSpace
A copy of the target space.
"""
keys = target_space.keys
pbounds = {key: bound for key, bound in zip(keys, target_space.bounds)}
target_space_copy = TargetSpace(
None,
pbounds=pbounds,
constraint=target_space.constraint,
allow_duplicate_points=target_space._allow_duplicate_points,
)
target_space_copy._params = deepcopy(target_space._params)
target_space_copy._target = deepcopy(target_space._target)
return target_space_copy
def _remove_expired_dummies(self, target_space: TargetSpace) -> None:
"""Remove expired dummy points from the list of dummies.
Once a worker has evaluated a dummy point, the dummy is discarded. To
accomplish this, we compare every dummy point to the current target
space's parameters and remove it if it is close to any of them.
Parameters
----------
target_space : TargetSpace
The target space to compare the dummies to.
"""
dummies = []
for dummy in self.dummies:
close = np.isclose(dummy, target_space.params, rtol=self.rtol, atol=self.atol)
if not close.all(axis=1).any():
dummies.append(dummy)
self.dummies = dummies
def suggest(
self,
gp: GaussianProcessRegressor,
target_space: TargetSpace,
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
fit_gp: bool = True,
) -> NDArray[Float]:
"""Suggest a promising point to probe next.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
target_space : TargetSpace
The target space to probe.
n_random : int, default 10_000
Number of random samples to use.
n_l_bfgs_b : int, default 10
Number of starting points for the L-BFGS-B optimizer.
fit_gp : bool, default True
Whether to fit the Gaussian Process to the target space.
Set to False if the GP is already fitted.
Returns
-------
np.ndarray
Suggested point to probe next.
"""
if len(target_space) == 0:
msg = (
"Cannot suggest a point without previous samples. Use "
" target_space.random_sample() to generate a point and "
" target_space.probe(*) to evaluate it."
)
raise TargetSpaceEmptyError(msg)
if target_space.constraint is not None:
msg = (
f"Received constraints, but acquisition function {type(self)} "
"does not support constrained optimization."
)
raise ConstraintNotSupportedError(msg)
# Check if any dummies have been evaluated and remove them
self._remove_expired_dummies(target_space)
# Create a copy of the target space
dummy_target_space = self._copy_target_space(target_space)
dummy_target: float
# Choose the dummy target value
if isinstance(self.strategy, float):
dummy_target = self.strategy
elif self.strategy == "min":
dummy_target = target_space.target.min()
elif self.strategy == "mean":
dummy_target = target_space.target.mean()
elif self.strategy != "max":
error_msg = f"Received invalid argument {self.strategy} for strategy."
raise ValueError(error_msg)
else:
dummy_target = target_space.target.max()
# Register the dummies to the dummy target space
for dummy in self.dummies:
dummy_target_space.register(dummy, dummy_target)
# Fit the GP to the dummy target space and suggest a point
self._fit_gp(gp=gp, target_space=dummy_target_space)
x_max = self.base_acquisition.suggest(
gp, dummy_target_space, n_random=n_random, n_l_bfgs_b=n_l_bfgs_b, fit_gp=False
)
# Register the suggested point as a dummy
self.dummies.append(x_max)
return x_max
class GPHedge(AcquisitionFunction):
"""GPHedge acquisition function.
At each suggestion step, GPHedge samples suggestions from each base
acquisition function acq_i. Then a candidate is selected from the
suggestions based on the cumulative rewards of each acq_i.
After evaluating the candidate, the gains are updated (in the next
iteration) based on the updated expectation value of the candidates.
For more information, see:
Brochu et al., "Portfolio Allocation for Bayesian Optimization",
https://arxiv.org/abs/1009.5419
Parameters
----------
base_acquisitions : Sequence[AcquisitionFunction]
Sequence of base acquisition functions.
random_state : int, RandomState, default None
Set the random state for reproducibility.
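Examples
--------
Illustrative sketch combining two base acquisition functions:

>>> from bayes_opt.acquisition import ExpectedImprovement, GPHedge, UpperConfidenceBound
>>> acq = GPHedge(base_acquisitions=[UpperConfidenceBound(kappa=2.576), ExpectedImprovement(xi=0.01)])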
"""
def __init__(
self, base_acquisitions: Sequence[AcquisitionFunction], random_state: int | RandomState | None = None
) -> None:
super().__init__(random_state)
self.base_acquisitions = list(base_acquisitions)
self.n_acq = len(self.base_acquisitions)
self.gains = np.zeros(self.n_acq)
self.previous_candidates = None
def base_acq(self, *args: Any, **kwargs: Any) -> NoReturn:
"""Raise an error, since the base acquisition function is ambiguous."""
msg = (
"GPHedge base acquisition function is ambiguous."
" You may use self.base_acquisitions[i].base_acq(mean, std)"
" to get the base acquisition function for the i-th acquisition."
)
raise TypeError(msg)
def _sample_idx_from_softmax_gains(self) -> int:
"""Sample an index weighted by the softmax of the gains."""
cumsum_softmax_g = np.cumsum(softmax(self.gains))
r = self.random_state.rand()
return np.argmax(r <= cumsum_softmax_g) # Returns the first True value
def _update_gains(self, gp: GaussianProcessRegressor) -> None:
"""Update the gains of the base acquisition functions."""
with warnings.catch_warnings():
warnings.simplefilter("ignore")
rewards = gp.predict(self.previous_candidates)
self.gains += rewards
self.previous_candidates = None
def suggest(
self,
gp: GaussianProcessRegressor,
target_space: TargetSpace,
n_random: int = 10_000,
n_l_bfgs_b: int = 10,
fit_gp: bool = True,
) -> NDArray[Float]:
"""Suggest a promising point to probe next.
Parameters
----------
gp : GaussianProcessRegressor
A fitted Gaussian Process.
target_space : TargetSpace
The target space to probe.
n_random : int, default 10_000
Number of random samples to use.
n_l_bfgs_b : int, default 10
Number of starting points for the L-BFGS-B optimizer.
fit_gp : bool, default True
Whether to fit the Gaussian Process to the target space.
Set to False if the GP is already fitted.
Returns
-------
np.ndarray
Suggested point to probe next.
"""
if len(target_space) == 0:
msg = (
"Cannot suggest a point without previous samples. Use "
" target_space.random_sample() to generate a point and "
" target_space.probe(*) to evaluate it."
)
raise TargetSpaceEmptyError(msg)
self.i += 1
if fit_gp:
self._fit_gp(gp=gp, target_space=target_space)
# Update the gains of the base acquisition functions
if self.previous_candidates is not None:
self._update_gains(gp)
# Suggest a point using each base acquisition function
x_max = [
base_acq.suggest(
gp=gp,
target_space=target_space,
n_random=n_random // self.n_acq,
n_l_bfgs_b=n_l_bfgs_b // self.n_acq,
fit_gp=False,
)
for base_acq in self.base_acquisitions
]
self.previous_candidates = np.array(x_max)
idx = self._sample_idx_from_softmax_gains()
return x_max[idx]
BayesianOptimization-2.0.1/bayes_opt/bayesian_optimization.py
"""Main module.
Holds the `BayesianOptimization` class, which handles the maximization of a
function over a specific target space.
"""
from __future__ import annotations
from collections import deque
from typing import TYPE_CHECKING, Any
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from bayes_opt import acquisition
from bayes_opt.constraint import ConstraintModel
from bayes_opt.event import DEFAULT_EVENTS, Events
from bayes_opt.logger import _get_default_logger
from bayes_opt.target_space import TargetSpace
from bayes_opt.util import ensure_rng
if TYPE_CHECKING:
from collections.abc import Callable, Iterable, Mapping, Sequence
import numpy as np
from numpy.random import RandomState
from numpy.typing import NDArray
from scipy.optimize import NonlinearConstraint
from bayes_opt.acquisition import AcquisitionFunction
from bayes_opt.constraint import ConstraintModel
from bayes_opt.domain_reduction import DomainTransformer
Float = np.floating[Any]
class Observable:
"""Inspired by https://www.protechtraining.com/blog/post/879#simple-observer."""
def __init__(self, events: Iterable[Any]) -> None:
# maps event names to subscribers
# str -> dict
self._events = {event: dict() for event in events}
def get_subscribers(self, event: Any) -> Any:
"""Return the subscribers of an event."""
return self._events[event]
def subscribe(self, event: Any, subscriber: Any, callback: Callable[..., Any] | None = None) -> None:
"""Add subscriber to an event."""
if callback is None:
callback = subscriber.update
self.get_subscribers(event)[subscriber] = callback
def unsubscribe(self, event: Any, subscriber: Any) -> None:
"""Remove a subscriber for a particular event."""
del self.get_subscribers(event)[subscriber]
def dispatch(self, event: Any) -> None:
"""Trigger callbacks for subscribers of an event."""
for callback in self.get_subscribers(event).values():
callback(event, self)
class BayesianOptimization(Observable):
"""Handle optimization of a target function over a specific target space.
This class takes the function to optimize as well as the parameters bounds
in order to find which values for the parameters yield the maximum value
using Bayesian optimization.
Parameters
----------
f: function or None.
Function to be maximized.
pbounds: dict
Dictionary with parameters names as keys and a tuple with minimum
and maximum values.
acquisition_function: AcquisitionFunction, optional(default=None)
The acquisition function to use. If None, UpperConfidenceBound is used for
unconstrained problems and ExpectedImprovement for constrained ones.
constraint: NonlinearConstraint, optional(default=None)
Note that the names of arguments of the constraint function and of
f need to be the same.
random_state: int or numpy.random.RandomState, optional(default=None)
If the value is an integer, it is used as the seed for creating a
numpy.random.RandomState. Otherwise the random state provided is used.
When set to None, an unseeded random state is generated.
verbose: int, optional(default=2)
The level of verbosity.
bounds_transformer: DomainTransformer, optional(default=None)
If provided, the transformation is applied to the bounds.
allow_duplicate_points: bool, optional (default=False)
If True, the optimizer will allow duplicate points to be registered.
This behavior may be desired in high noise situations where repeatedly probing
the same point will give different answers. In other situations, the acquisition
may occasionally generate a duplicate point.
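Examples
--------
Illustrative sketch:

>>> from bayes_opt import BayesianOptimization
>>> def f(x, y):
...     return -x**2 - (y - 1)**2 + 1
>>> optimizer = BayesianOptimization(f=f, pbounds={"x": (2, 4), "y": (-3, 3)}, random_state=1)
>>> optimizer.maximize(init_points=2, n_iter=3)  # doctest: +SKIP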
"""
def __init__(
self,
f: Callable[..., float] | None,
pbounds: Mapping[str, tuple[float, float]],
acquisition_function: AcquisitionFunction | None = None,
constraint: NonlinearConstraint | None = None,
random_state: int | RandomState | None = None,
verbose: int = 2,
bounds_transformer: DomainTransformer | None = None,
allow_duplicate_points: bool = False,
):
self._random_state = ensure_rng(random_state)
self._allow_duplicate_points = allow_duplicate_points
self._queue: deque[Mapping[str, float] | Sequence[float] | NDArray[Float]] = deque()
if acquisition_function is None:
if constraint is None:
self._acquisition_function = acquisition.UpperConfidenceBound(
kappa=2.576, random_state=self._random_state
)
else:
self._acquisition_function = acquisition.ExpectedImprovement(
xi=0.01, random_state=self._random_state
)
else:
self._acquisition_function = acquisition_function
# Internal GP regressor
self._gp = GaussianProcessRegressor(
kernel=Matern(nu=2.5),
alpha=1e-6,
normalize_y=True,
n_restarts_optimizer=5,
random_state=self._random_state,
)
if constraint is None:
# Data structure containing the function to be optimized, the
# bounds of its domain, and a record of the evaluations we have
# done so far
self._space = TargetSpace(
f, pbounds, random_state=random_state, allow_duplicate_points=self._allow_duplicate_points
)
self.is_constrained = False
else:
constraint_ = ConstraintModel(
constraint.fun, constraint.lb, constraint.ub, random_state=random_state
)
self._space = TargetSpace(
f,
pbounds,
constraint=constraint_,
random_state=random_state,
allow_duplicate_points=self._allow_duplicate_points,
)
self.is_constrained = True
self._verbose = verbose
self._bounds_transformer = bounds_transformer
if self._bounds_transformer:
try:
self._bounds_transformer.initialize(self._space)
except (AttributeError, TypeError) as exc:
error_msg = "The transformer must be an instance of DomainTransformer"
raise TypeError(error_msg) from exc
super().__init__(events=DEFAULT_EVENTS)
@property
def space(self) -> TargetSpace:
"""Return the target space associated with the optimizer."""
return self._space
@property
def acquisition_function(self) -> AcquisitionFunction:
"""Return the acquisition function associated with the optimizer."""
return self._acquisition_function
@property
def constraint(self) -> ConstraintModel | None:
"""Return the constraint associated with the optimizer, if any."""
if self.is_constrained:
return self._space.constraint
return None
@property
def max(self) -> dict[str, Any] | None:
"""Get maximum target value found and corresponding parameters.
See `TargetSpace.max` for more information.
"""
return self._space.max()
@property
def res(self) -> list[dict[str, Any]]:
"""Get all target values and constraint fulfillment for all parameters.
See `TargetSpace.res` for more information.
"""
return self._space.res()
def register(
self,
params: Mapping[str, float] | Sequence[float] | NDArray[Float],
target: float,
constraint_value: float | NDArray[Float] | None = None,
) -> None:
"""Register an observation with known target.
Parameters
----------
params: dict or list
The parameters associated with the observation.
target: float
Value of the target function at the observation.
constraint_value: float or None
Value of the constraint function at the observation, if any.
"""
self._space.register(params, target, constraint_value)
self.dispatch(Events.OPTIMIZATION_STEP)
def probe(
self, params: Mapping[str, float] | Sequence[float] | NDArray[Float], lazy: bool = True
) -> None:
"""Evaluate the function at the given points.
Useful to guide the optimizer.
Parameters
----------
params: dict or list
The parameters where the optimizer will evaluate the function.
lazy: bool, optional(default=True)
If True, the optimizer will evaluate the points when calling
maximize(). Otherwise the point is evaluated immediately.
"""
if lazy:
self._queue.append(params)
else:
self._space.probe(params)
self.dispatch(Events.OPTIMIZATION_STEP)
def suggest(self) -> dict[str, float]:
"""Suggest a promising point to probe next."""
if len(self._space) == 0:
return self._space.array_to_params(self._space.random_sample())
# Finding argmax of the acquisition function.
suggestion = self._acquisition_function.suggest(gp=self._gp, target_space=self._space, fit_gp=True)
return self._space.array_to_params(suggestion)
def _prime_queue(self, init_points: int) -> None:
"""Ensure the queue is not empty.
Parameters
----------
init_points: int
Number of parameters to prime the queue with.
"""
if not self._queue and self._space.empty:
init_points = max(init_points, 1)
for _ in range(init_points):
self._queue.append(self._space.random_sample())
def _prime_subscriptions(self) -> None:
if not any([len(subs) for subs in self._events.values()]):
_logger = _get_default_logger(self._verbose, self.is_constrained)
self.subscribe(Events.OPTIMIZATION_START, _logger)
self.subscribe(Events.OPTIMIZATION_STEP, _logger)
self.subscribe(Events.OPTIMIZATION_END, _logger)
def maximize(self, init_points: int = 5, n_iter: int = 25) -> None:
r"""
Maximize the given function over the target space.
Parameters
----------
init_points : int, optional(default=5)
Number of random points to probe before starting the optimization.
n_iter: int, optional(default=25)
Number of iterations where the method attempts to find the maximum
value.
Warning
-------
The maximize loop only fits the GP when suggesting a new point to
probe based on the acquisition function. This means that the GP may
not be fitted on all points registered to the target space when the
method completes. If you intend to use the GP model after the
optimization routine, make sure to fit it manually, e.g. by calling
``optimizer._gp.fit(optimizer.space.params, optimizer.space.target)``.
"""
self._prime_subscriptions()
self.dispatch(Events.OPTIMIZATION_START)
self._prime_queue(init_points)
iteration = 0
while self._queue or iteration < n_iter:
try:
x_probe = self._queue.popleft()
except IndexError:
x_probe = self.suggest()
iteration += 1
self.probe(x_probe, lazy=False)
if self._bounds_transformer and iteration > 0:
# The bounds transformer should only modify the bounds after
# the init_points points (only for the true iterations)
self.set_bounds(self._bounds_transformer.transform(self._space))
self.dispatch(Events.OPTIMIZATION_END)
def set_bounds(self, new_bounds: Mapping[str, NDArray[Float] | Sequence[float]]) -> None:
"""Modify the bounds of the search space.
Parameters
----------
new_bounds : dict
A dictionary with the parameter name and its new bounds
"""
self._space.set_bounds(new_bounds)
def set_gp_params(self, **params: Any) -> None:
"""Set parameters of the internal Gaussian Process Regressor."""
self._gp.set_params(**params)
BayesianOptimization-2.0.1/bayes_opt/constraint.py
"""Constraint handling."""
from __future__ import annotations
from typing import TYPE_CHECKING, Any
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
if TYPE_CHECKING:
from collections.abc import Callable
from numpy.random import RandomState
from numpy.typing import NDArray
Float = np.floating[Any]
class ConstraintModel:
"""Model constraints using GP regressors.
This class models the constraint function(s) with Gaussian process regressors
and estimates the probability that candidate parameters fulfill the
constraints.
Parameters
----------
fun : None or Callable -> float or np.ndarray
The constraint function. Should be float-valued or array-valued (if
multiple constraints are present). Needs to take the same parameters
as the optimization target with the same argument names.
lb : float or np.ndarray
The lower bound on the constraints. Should have the same
dimensionality as the return value of the constraint function.
ub : float or np.ndarray
The upper bound on the constraints. Should have the same
dimensionality as the return value of the constraint function.
random_state : np.random.RandomState or int or None, default=None
Random state to use.
Note
----
In case of multiple constraints, this model assumes conditional
independence. This means that the overall probability of fulfillment is
simply the product of the individual probabilities.
"""
def __init__(
self,
fun: Callable[..., float] | Callable[..., NDArray[Float]] | None,
lb: float | NDArray[Float],
ub: float | NDArray[Float],
random_state: int | RandomState | None = None,
) -> None:
self.fun = fun
self._lb = np.atleast_1d(lb)
self._ub = np.atleast_1d(ub)
if np.any(self._lb >= self._ub):
msg = "Lower bounds must be less than upper bounds."
raise ValueError(msg)
self._model = [
GaussianProcessRegressor(
kernel=Matern(nu=2.5),
alpha=1e-6,
normalize_y=True,
n_restarts_optimizer=5,
random_state=random_state,
)
for _ in range(len(self._lb))
]
@property
def lb(self) -> NDArray[Float]:
"""Return lower bounds."""
return self._lb
@property
def ub(self) -> NDArray[Float]:
"""Return upper bounds."""
return self._ub
@property
def model(self) -> list[GaussianProcessRegressor]:
"""Return GP regressors of the constraint function."""
return self._model
def eval(self, **kwargs: Any) -> float | NDArray[Float]: # noqa: D417
r"""Evaluate the constraint function.
Parameters
----------
\*\*kwargs : any
Function arguments to evaluate the constraint function on.
Returns
-------
Value of the constraint function.
Raises
------
ValueError
If no constraint function was provided.
TypeError
If the kwargs' keys don't match the function argument names.
"""
if self.fun is None:
error_msg = "No constraint function was provided."
raise ValueError(error_msg)
try:
return self.fun(**kwargs)
except TypeError as e:
msg = (
"Encountered TypeError when evaluating constraint "
"function. This could be because your constraint function "
"doesn't use the same keyword arguments as the target "
f"function. Original error message:\n\n{e}"
)
e.args = (msg,)
raise
def fit(self, X: NDArray[Float], Y: NDArray[Float]) -> None:
"""Fit internal GPRs to the data.
Parameters
----------
X : np.ndarray of shape (n_samples, n_features)
Parameters of the constraint function.
Y : np.ndarray of shape (n_samples, n_constraints)
Values of the constraint function.
Returns
-------
None
"""
if len(self._model) == 1:
self._model[0].fit(X, Y)
else:
for i, gp in enumerate(self._model):
gp.fit(X, Y[:, i])
def predict(self, X: NDArray[Float]) -> NDArray[Float]:
r"""Calculate the probability that the constraint is fulfilled at `X`.
Note that this does not try to approximate the values of the
constraint function (for this, see `ConstraintModel.approx()`.), but
probability that the constraint function is fulfilled. That is, this
function calculates
.. math::
p = \text{Pr}\left\{c^{\text{low}} \leq \tilde{c}(x) \leq
c^{\text{up}} \right\} = \int_{c^{\text{low}}}^{c^{\text{up}}}
\mathcal{N}(c, \mu(x), \sigma^2(x)) \, dc.
with :math:`\mu(x)`, :math:`\sigma^2(x)` the mean and variance at
:math:`x` as given by the GP and :math:`c^{\text{low}}`,
:math:`c^{\text{up}}` the lower and upper bounds of the constraint
respectively.
Note
----
In case of multiple constraints, we assume conditional independence.
This means we calculate the probability of constraint fulfilment
individually, with the joint probability given as their product.
Parameters
----------
X : np.ndarray of shape (n_samples, n_features)
Parameters for which to predict the probability of constraint
fulfilment.
Returns
-------
np.ndarray of shape (n_samples,)
Probability of constraint fulfilment.
"""
X_shape = X.shape
X = X.reshape((-1, self._model[0].n_features_in_))
result: NDArray[Float]
y_mean: NDArray[Float]
y_std: NDArray[Float]
p_lower: NDArray[Float]
p_upper: NDArray[Float]
if len(self._model) == 1:
y_mean, y_std = self._model[0].predict(X, return_std=True)
p_lower = (
norm(loc=y_mean, scale=y_std).cdf(self._lb[0]) if self._lb[0] != -np.inf else np.array([0])
)
p_upper = (
norm(loc=y_mean, scale=y_std).cdf(self._ub[0]) if self._ub[0] != np.inf else np.array([1])
)
result = p_upper - p_lower
return result.reshape(X_shape[:-1])
result = np.ones(X.shape[0])
for j, gp in enumerate(self._model):
y_mean, y_std = gp.predict(X, return_std=True)
p_lower = (
norm(loc=y_mean, scale=y_std).cdf(self._lb[j]) if self._lb[j] != -np.inf else np.array([0])
)
p_upper = (
norm(loc=y_mean, scale=y_std).cdf(self._ub[j]) if self._ub[j] != np.inf else np.array([1])
)
result = result * (p_upper - p_lower)
return result.reshape(X_shape[:-1])
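# Worked micro-example (comments only): for a single constraint with lb = -inf and
# ub = 0, a point where the GP predicts mean -1 and standard deviation 1 gets
# p_lower = 0 (lb is -inf) and p_upper = norm.cdf(0, loc=-1, scale=1) ≈ 0.841,
# i.e. roughly an 84% chance that the constraint is fulfilled at that point.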
def approx(self, X: NDArray[Float]) -> NDArray[Float]:
"""
Approximate the constraint function using the internal GPR model.
Parameters
----------
X : np.ndarray of shape (n_samples, n_features)
Parameters for which to estimate the constraint function value.
Returns
-------
np.ndarray of shape (n_samples, n_constraints)
Constraint function value estimates.
"""
X_shape = X.shape
X = X.reshape((-1, self._model[0].n_features_in_))
if len(self._model) == 1:
return self._model[0].predict(X).reshape(X_shape[:-1])
result = np.column_stack([gp.predict(X) for gp in self._model])
return result.reshape(X_shape[:-1] + (len(self._lb),))
def allowed(self, constraint_values: NDArray[Float]) -> NDArray[np.bool_]:
"""Check whether `constraint_values` fulfills the specified limits.
Parameters
----------
constraint_values : np.ndarray of shape (n_samples, n_constraints)
The values of the constraint function.
Returns
-------
np.ndarray of shape (n_samples,)
Specifying whether the constraints are fulfilled.
"""
if self._lb.size == 1:
return np.less_equal(self._lb, constraint_values) & np.less_equal(constraint_values, self._ub)
return np.all(constraint_values <= self._ub, axis=-1) & np.all(constraint_values >= self._lb, axis=-1)
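# Illustrative sketch (comments only, not executed): modelling a single constraint
# c(x, y) = x + y that must stay at or below 1. The data points are hypothetical.
#
#     import numpy as np
#     from bayes_opt.constraint import ConstraintModel
#     model = ConstraintModel(fun=lambda x, y: x + y, lb=-np.inf, ub=1.0)
#     X = np.array([[0.0, 0.0], [0.5, 0.9], [1.0, 1.0]])
#     Y = np.array([model.eval(x=px, y=py) for px, py in X])
#     model.fit(X, Y)
#     print(model.predict(X))   # probability that each point fulfils c <= 1
#     print(model.allowed(Y))   # [ True False False ] given the constraint values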
BayesianOptimization-2.0.1/bayes_opt/domain_reduction.py 0000664 0000000 0000000 00000025557 14725554646 0023576 0 ustar 00root root 0000000 0000000 """Implement domain transformation.
In particular, this provides a base transformer class and a sequential domain
reduction transformer as based on Stander and Craig's "On the robustness of a
simple domain reduction scheme for simulation-based optimization"
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from collections.abc import Iterable, Mapping, Sequence
from typing import TYPE_CHECKING, Any
from warnings import warn
import numpy as np
from bayes_opt.target_space import TargetSpace
if TYPE_CHECKING:
from numpy.typing import NDArray
Float = np.floating[Any]
class DomainTransformer(ABC):
"""Base class."""
@abstractmethod
def __init__(self, **kwargs: Any) -> None:
"""To override with specific implementation."""
@abstractmethod
def initialize(self, target_space: TargetSpace) -> None:
"""To override with specific implementation."""
@abstractmethod
def transform(self, target_space: TargetSpace) -> dict[str, NDArray[Float]]:
"""To override with specific implementation."""
class SequentialDomainReductionTransformer(DomainTransformer):
"""Reduce the searchable space.
A sequential domain reduction transformer based on the work by Stander, N. and Craig, K:
"On the robustness of a simple domain reduction scheme for simulation-based optimization"
Parameters
----------
gamma_osc : float, default=0.7
Parameter used to scale (typically dampen) oscillations.
gamma_pan : float, default=1.0
Parameter used to scale (typically unitary) panning.
eta : float, default=0.9
Zooming parameter used to shrink the region of interest.
minimum_window : float or np.ndarray or dict, default=0.0
Minimum window size for each parameter. If a float is provided,
the same value is used for all parameters.
"""
def __init__(
self,
gamma_osc: float = 0.7,
gamma_pan: float = 1.0,
eta: float = 0.9,
minimum_window: NDArray[Float] | Sequence[float] | Mapping[str, float] | float = 0.0,
) -> None:
self.gamma_osc = gamma_osc
self.gamma_pan = gamma_pan
self.eta = eta
self.minimum_window_value: NDArray[Float] | Sequence[float] | float
if isinstance(minimum_window, Mapping):
self.minimum_window_value = [
item[1] for item in sorted(minimum_window.items(), key=lambda x: x[0])
]
else:
self.minimum_window_value = minimum_window
def initialize(self, target_space: TargetSpace) -> None:
"""Initialize all of the parameters.
Parameters
----------
target_space : TargetSpace
TargetSpace this DomainTransformer operates on.
"""
# Set the original bounds
self.original_bounds = np.copy(target_space.bounds)
self.bounds = [self.original_bounds]
self.minimum_window: NDArray[Float] | Sequence[float]
# Set the minimum window to an array of length bounds
if isinstance(self.minimum_window_value, (Sequence, np.ndarray)):
if len(self.minimum_window_value) != len(target_space.bounds):
error_msg = "Length of minimum_window must be the same as the number of parameters"
raise ValueError(error_msg)
self.minimum_window = self.minimum_window_value
else:
self.minimum_window = [self.minimum_window_value] * len(target_space.bounds)
# Set initial values
self.previous_optimal = np.mean(target_space.bounds, axis=1)
self.current_optimal = np.mean(target_space.bounds, axis=1)
self.r = target_space.bounds[:, 1] - target_space.bounds[:, 0]
self.previous_d = 2.0 * (self.current_optimal - self.previous_optimal) / self.r
self.current_d = 2.0 * (self.current_optimal - self.previous_optimal) / self.r
self.c = self.current_d * self.previous_d
self.c_hat = np.sqrt(np.abs(self.c)) * np.sign(self.c)
self.gamma = 0.5 * (self.gamma_pan * (1.0 + self.c_hat) + self.gamma_osc * (1.0 - self.c_hat))
self.contraction_rate = self.eta + np.abs(self.current_d) * (self.gamma - self.eta)
self.r = self.contraction_rate * self.r
# check if the minimum window fits in the original bounds
self._window_bounds_compatibility(self.original_bounds)
def _update(self, target_space: TargetSpace) -> None:
"""Update contraction rate, window size, and window center.
Parameters
----------
target_space : TargetSpace
TargetSpace this DomainTransformer operates on.
"""
# setting the previous
self.previous_optimal = self.current_optimal
self.previous_d = self.current_d
self.current_optimal = target_space.params_to_array(target_space.max()["params"])
self.current_d = 2.0 * (self.current_optimal - self.previous_optimal) / self.r
self.c = self.current_d * self.previous_d
self.c_hat = np.sqrt(np.abs(self.c)) * np.sign(self.c)
self.gamma = 0.5 * (self.gamma_pan * (1.0 + self.c_hat) + self.gamma_osc * (1.0 - self.c_hat))
self.contraction_rate = self.eta + np.abs(self.current_d) * (self.gamma - self.eta)
self.r = self.contraction_rate * self.r
def _trim(self, new_bounds: NDArray[Float], global_bounds: NDArray[Float]) -> NDArray[Float]:
"""
Adjust the new_bounds and verify that they adhere to global_bounds and minimum_window.
Parameters
----------
new_bounds : np.ndarray
The proposed new_bounds that (may) need adjustment.
global_bounds : np.ndarray
The maximum allowable bounds for each parameter.
Returns
-------
new_bounds : np.ndarray
The adjusted bounds after enforcing constraints.
"""
# sort bounds
new_bounds = np.sort(new_bounds)
pbounds: NDArray[Float]
# Validate each parameter's bounds against the global_bounds
for i, pbounds in enumerate(new_bounds):
# If one of the bounds is outside the global bounds, reset that bound to the global bound
# This is expected to happen when the window is near the global bounds, no warning is issued
if pbounds[0] < global_bounds[i, 0]:
pbounds[0] = global_bounds[i, 0]
if pbounds[1] > global_bounds[i, 1]:
pbounds[1] = global_bounds[i, 1]
# If a lower bound is greater than the associated global upper bound,
# reset it to the global lower bound
if pbounds[0] > global_bounds[i, 1]:
pbounds[0] = global_bounds[i, 0]
warn(
"\nDomain Reduction Warning:\n"
"A parameter's lower bound is greater than the global upper bound."
"The offensive boundary has been reset."
"Be cautious of subsequent reductions.",
stacklevel=2,
)
# If an upper bound is less than the associated global lower bound,
# reset it to the global upper bound
if pbounds[1] < global_bounds[i, 0]:
pbounds[1] = global_bounds[i, 1]
warn(
"\nDomain Reduction Warning:\n"
"A parameter's lower bound is greater than the global upper bound."
"The offensive boundary has been reset."
"Be cautious of subsequent reductions.",
stacklevel=2,
)
# Adjust new_bounds to ensure they respect the minimum window width for each parameter
for i, pbounds in enumerate(new_bounds):
current_window_width = abs(pbounds[0] - pbounds[1])
# If the window width is less than the minimum allowable width, adjust it
# Note that when minimum_window < width of the global bounds one side
# always has more space than required
if current_window_width < self.minimum_window[i]:
width_deficit = (self.minimum_window[i] - current_window_width) / 2.0
available_left_space = abs(global_bounds[i, 0] - pbounds[0])
available_right_space = abs(global_bounds[i, 1] - pbounds[1])
# determine how much to expand on the left and right
expand_left = min(width_deficit, available_left_space)
expand_right = min(width_deficit, available_right_space)
# calculate the deficit on each side
expand_left_deficit = width_deficit - expand_left
expand_right_deficit = width_deficit - expand_right
# shift the deficit to the side with more space
adjust_left = expand_left + max(expand_right_deficit, 0)
adjust_right = expand_right + max(expand_left_deficit, 0)
# adjust the bounds
pbounds[0] -= adjust_left
pbounds[1] += adjust_right
return new_bounds
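# Worked micro-example (comments only): with global bounds (0, 10), a proposed
# window (4.9, 5.0), and minimum_window = 1.0, the width deficit is 0.45 per side
# and both sides have room, so the window is widened symmetrically to (4.45, 5.45).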
def _window_bounds_compatibility(self, global_bounds: NDArray[Float]) -> None:
"""Check if global bounds are compatible with the minimum window sizes.
Parameters
----------
global_bounds : np.ndarray
The maximum allowable bounds for each parameter.
Raises
------
ValueError
If global bounds are not compatible with the minimum window size.
"""
entry: NDArray[Float]
for i, entry in enumerate(global_bounds):
global_window_width = abs(entry[1] - entry[0])
if global_window_width < self.minimum_window[i]:
error_msg = "Global bounds are not compatible with the minimum window size."
raise ValueError(error_msg)
def _create_bounds(self, parameters: Iterable[str], bounds: NDArray[Float]) -> dict[str, NDArray[Float]]:
"""Create a dictionary of bounds for each parameter.
Parameters
----------
parameters : Iterable[str]
The parameters for which to create the bounds.
bounds : np.ndarray
The bounds for each parameter.
"""
return {param: bounds[i, :] for i, param in enumerate(parameters)}
def transform(self, target_space: TargetSpace) -> dict[str, NDArray[Float]]:
"""Transform the bounds of the target space.
Parameters
----------
target_space : TargetSpace
TargetSpace this DomainTransformer operates on.
Returns
-------
dict
The new bounds of each parameter.
"""
self._update(target_space)
new_bounds = np.array([self.current_optimal - 0.5 * self.r, self.current_optimal + 0.5 * self.r]).T
new_bounds = self._trim(new_bounds, self.original_bounds)
self.bounds.append(new_bounds)
return self._create_bounds(target_space.keys, new_bounds)
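# Illustrative sketch (comments only, not executed): plugging the transformer into
# an optimizer so the search window contracts around the incumbent optimum. The
# target function, bounds, and iteration counts are hypothetical placeholders.
#
#     from bayes_opt import BayesianOptimization
#     from bayes_opt.domain_reduction import SequentialDomainReductionTransformer
#     bounds_transformer = SequentialDomainReductionTransformer(minimum_window=0.5)
#     optimizer = BayesianOptimization(
#         f=lambda x, y: -x ** 2 - (y - 1) ** 2 + 1,
#         pbounds={"x": (-10, 10), "y": (-10, 10)},
#         bounds_transformer=bounds_transformer,
#     )
#     optimizer.maximize(init_points=2, n_iter=20)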
BayesianOptimization-2.0.1/bayes_opt/event.py 0000664 0000000 0000000 00000000623 14725554646 0021357 0 ustar 00root root 0000000 0000000 """Register optimization events variables."""
from __future__ import annotations
class Events:
"""Define optimization events.
Behaves similar to enums.
"""
OPTIMIZATION_START = "optimization:start"
OPTIMIZATION_STEP = "optimization:step"
OPTIMIZATION_END = "optimization:end"
DEFAULT_EVENTS = [Events.OPTIMIZATION_START, Events.OPTIMIZATION_STEP, Events.OPTIMIZATION_END]
BayesianOptimization-2.0.1/bayes_opt/exception.py 0000664 0000000 0000000 00000001547 14725554646 0022242 0 ustar 00root root 0000000 0000000 """This module contains custom exceptions for Bayesian Optimization."""
from __future__ import annotations
__all__ = [
"BayesianOptimizationError",
"NotUniqueError",
"ConstraintNotSupportedError",
"NoValidPointRegisteredError",
"TargetSpaceEmptyError",
]
class BayesianOptimizationError(Exception):
"""Base class for exceptions in the Bayesian Optimization."""
class NotUniqueError(BayesianOptimizationError):
"""A point is non-unique."""
class ConstraintNotSupportedError(BayesianOptimizationError):
"""Raised when constrained optimization is not supported."""
class NoValidPointRegisteredError(BayesianOptimizationError):
"""Raised when an acquisition function depends on previous points but none are registered."""
class TargetSpaceEmptyError(BayesianOptimizationError):
"""Raised when the target space is empty."""
BayesianOptimization-2.0.1/bayes_opt/logger.py 0000664 0000000 0000000 00000021724 14725554646 0021522 0 ustar 00root root 0000000 0000000 """Contains classes and functions for logging."""
from __future__ import annotations
import json
from contextlib import suppress
from pathlib import Path
from typing import TYPE_CHECKING, Any
import numpy as np
from colorama import Fore, just_fix_windows_console
from bayes_opt.event import Events
from bayes_opt.observer import _Tracker
if TYPE_CHECKING:
from os import PathLike
from bayes_opt.bayesian_optimization import BayesianOptimization
just_fix_windows_console()
def _get_default_logger(verbose: int, is_constrained: bool) -> ScreenLogger:
"""
Return the default logger.
Parameters
----------
verbose : int
Verbosity level of the logger.
is_constrained : bool
Whether the underlying optimizer uses constraints (this requires
an additional column in the output).
Returns
-------
ScreenLogger
The default logger.
"""
return ScreenLogger(verbose=verbose, is_constrained=is_constrained)
class ScreenLogger(_Tracker):
"""Logger that outputs text, e.g. to log to a terminal.
Parameters
----------
verbose : int
Verbosity level of the logger.
is_constrained : bool
Whether the logger is associated with a constrained optimization
instance.
"""
_default_cell_size = 9
_default_precision = 4
_colour_new_max = Fore.MAGENTA
_colour_regular_message = Fore.RESET
_colour_reset = Fore.RESET
def __init__(self, verbose: int = 2, is_constrained: bool = False) -> None:
self._verbose = verbose
self._is_constrained = is_constrained
self._header_length = None
super().__init__()
@property
def verbose(self) -> int:
"""Return the verbosity level."""
return self._verbose
@verbose.setter
def verbose(self, v: int) -> None:
"""Set the verbosity level.
Parameters
----------
v : int
New verbosity level of the logger.
"""
self._verbose = v
@property
def is_constrained(self) -> bool:
"""Return whether the logger is constrained."""
return self._is_constrained
def _format_number(self, x: float) -> str:
"""Format a number.
Parameters
----------
x : number
Value to format.
Returns
-------
A stringified, formatted version of `x`.
"""
if isinstance(x, int):
s = f"{x:<{self._default_cell_size}}"
else:
s = f"{x:<{self._default_cell_size}.{self._default_precision}}"
if len(s) > self._default_cell_size:
if "." in s:
return s[: self._default_cell_size]
return s[: self._default_cell_size - 3] + "..."
return s
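# For instance, with the default cell size of 9 and precision of 4, the integer 42
# is rendered as "42       " and the float 3.14159265 as "3.142    "; anything that
# still exceeds 9 characters is truncated by the branch above.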
def _format_bool(self, x: bool) -> str:
"""Format a boolean.
Parameters
----------
x : boolean
Value to format.
Returns
-------
A stringified, formatted version of `x`.
"""
x_ = ("T" if x else "F") if self._default_cell_size < 5 else str(x)
return f"{x_:<{self._default_cell_size}}"
def _format_key(self, key: str) -> str:
"""Format a key.
Parameters
----------
key : string
Value to format.
Returns
-------
A stringified, formatted version of `key`.
"""
s = f"{key:^{self._default_cell_size}}"
if len(s) > self._default_cell_size:
return s[: self._default_cell_size - 3] + "..."
return s
def _step(self, instance: BayesianOptimization, colour: str = _colour_regular_message) -> str:
"""Log a step.
Parameters
----------
instance : bayesian_optimization.BayesianOptimization
The instance associated with the event.
colour :
(Default value = _colour_regular_message, equivalent to Fore.RESET)
Returns
-------
A stringified, formatted version of the most recent optimization step.
"""
res: dict[str, Any] = instance.res[-1]
keys: list[str] = instance.space.keys
# iter, target, allowed [, *params]
cells: list[str | None] = [None] * (3 + len(keys))
cells[:2] = self._format_number(self._iterations + 1), self._format_number(res["target"])
if self._is_constrained:
cells[2] = self._format_bool(res["allowed"])
params = res.get("params", {})
cells[3:] = [self._format_number(params.get(key, float("nan"))) for key in keys]
return "| " + " | ".join(colour + x + self._colour_reset for x in cells if x is not None) + " |"
def _header(self, instance: BayesianOptimization) -> str:
"""Print the header of the log.
Parameters
----------
instance : bayesian_optimization.BayesianOptimization
The instance associated with the header.
Returns
-------
A stringified, formatted version of the header.
"""
keys: list[str] = instance.space.keys
# iter, target, allowed [, *params]
cells: list[str | None] = [None] * (3 + len(keys))
cells[:2] = self._format_key("iter"), self._format_key("target")
if self._is_constrained:
cells[2] = self._format_key("allowed")
cells[3:] = [self._format_key(key) for key in keys]
line = "| " + " | ".join(x for x in cells if x is not None) + " |"
self._header_length = len(line)
return line + "\n" + ("-" * self._header_length)
def _is_new_max(self, instance: BayesianOptimization) -> bool:
"""Check if the step to log produced a new maximum.
Parameters
----------
instance : bayesian_optimization.BayesianOptimization
The instance associated with the step.
Returns
-------
boolean
"""
if instance.max is None:
# During constrained optimization, there might not be a maximum
# value since the optimizer might've not encountered any points
# that fulfill the constraints.
return False
if self._previous_max is None:
self._previous_max = instance.max["target"]
return instance.max["target"] > self._previous_max
def update(self, event: str, instance: BayesianOptimization) -> None:
"""Handle incoming events.
Parameters
----------
event : str
One of the values associated with `Events.OPTIMIZATION_START`,
`Events.OPTIMIZATION_STEP` or `Events.OPTIMIZATION_END`.
instance : bayesian_optimization.BayesianOptimization
The instance associated with the step.
"""
line = ""
if event == Events.OPTIMIZATION_START:
line = self._header(instance) + "\n"
elif event == Events.OPTIMIZATION_STEP:
is_new_max = self._is_new_max(instance)
if self._verbose != 1 or is_new_max:
colour = self._colour_new_max if is_new_max else self._colour_regular_message
line = self._step(instance, colour=colour) + "\n"
elif event == Events.OPTIMIZATION_END:
line = "=" * self._header_length + "\n"
if self._verbose:
print(line, end="")
self._update_tracker(event, instance)
class JSONLogger(_Tracker):
"""
Logger that outputs steps in JSON format.
The resulting file can be used to restart the optimization from an earlier state.
Parameters
----------
path : str or os.PathLike
Path to the file to write to.
reset : bool
Whether to overwrite the file if it already exists.
"""
def __init__(self, path: str | PathLike[str], reset: bool = True):
self._path = Path(path)
if reset:
with suppress(OSError):
self._path.unlink(missing_ok=True)
super().__init__()
def update(self, event: str, instance: BayesianOptimization) -> None:
"""
Handle incoming events.
Parameters
----------
event : str
One of the values associated with `Events.OPTIMIZATION_START`,
`Events.OPTIMIZATION_STEP` or `Events.OPTIMIZATION_END`.
instance : bayesian_optimization.BayesianOptimization
The instance associated with the step.
"""
if event == Events.OPTIMIZATION_STEP:
data = dict(instance.res[-1])
now, time_elapsed, time_delta = self._time_metrics()
data["datetime"] = {"datetime": now, "elapsed": time_elapsed, "delta": time_delta}
if "allowed" in data: # fix: github.com/fmfn/BayesianOptimization/issues/361
data["allowed"] = bool(data["allowed"])
if "constraint" in data and isinstance(data["constraint"], np.ndarray):
data["constraint"] = data["constraint"].tolist()
with self._path.open("a") as f:
f.write(json.dumps(data) + "\n")
self._update_tracker(event, instance)
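# Illustrative sketch (comments only, not executed): attaching a JSONLogger so every
# optimization step is appended to ./logs.log. The optimizer instance and the log
# path are hypothetical placeholders.
#
#     from bayes_opt.event import Events
#     from bayes_opt.logger import JSONLogger
#     logger = JSONLogger(path="./logs.log")
#     optimizer.subscribe(Events.OPTIMIZATION_STEP, logger)
#     optimizer.maximize(init_points=2, n_iter=10)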
BayesianOptimization-2.0.1/bayes_opt/observer.py 0000664 0000000 0000000 00000003607 14725554646 0022072 0 ustar 00root root 0000000 0000000 """Holds the parent class for loggers."""
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING
from bayes_opt.event import Events
if TYPE_CHECKING:
from bayes_opt.bayesian_optimization import BayesianOptimization
class _Tracker:
"""Parent class for ScreenLogger and JSONLogger."""
def __init__(self) -> None:
self._iterations = 0
self._previous_max = None
self._previous_max_params = None
self._start_time = None
self._previous_time = None
def _update_tracker(self, event: str, instance: BayesianOptimization) -> None:
"""Update the tracker.
Parameters
----------
event : str
One of the values associated with `Events.OPTIMIZATION_START`,
`Events.OPTIMIZATION_STEP` or `Events.OPTIMIZATION_END`.
instance : bayesian_optimization.BayesianOptimization
The instance associated with the step.
"""
if event == Events.OPTIMIZATION_STEP:
self._iterations += 1
if instance.max is None:
return
current_max = instance.max
if self._previous_max is None or current_max["target"] > self._previous_max:
self._previous_max = current_max["target"]
self._previous_max_params = current_max["params"]
def _time_metrics(self) -> tuple[str, float, float]:
"""Return time passed since last call."""
now = datetime.now() # noqa: DTZ005
if self._start_time is None:
self._start_time = now
if self._previous_time is None:
self._previous_time = now
time_elapsed = now - self._start_time
time_delta = now - self._previous_time
self._previous_time = now
return (now.strftime("%Y-%m-%d %H:%M:%S"), time_elapsed.total_seconds(), time_delta.total_seconds())
BayesianOptimization-2.0.1/bayes_opt/py.typed 0000664 0000000 0000000 00000000000 14725554646 0021350 0 ustar 00root root 0000000 0000000 BayesianOptimization-2.0.1/bayes_opt/target_space.py 0000664 0000000 0000000 00000043572 14725554646 0022711 0 ustar 00root root 0000000 0000000 """Manages the optimization domain and holds points."""
from __future__ import annotations
from typing import TYPE_CHECKING, Any
from warnings import warn
import numpy as np
from colorama import Fore
from bayes_opt.exception import NotUniqueError
from bayes_opt.util import ensure_rng
if TYPE_CHECKING:
from collections.abc import Callable, Mapping, Sequence
from numpy.random import RandomState
from numpy.typing import NDArray
from bayes_opt.constraint import ConstraintModel
Float = np.floating[Any]
def _hashable(x: NDArray[Float]) -> tuple[float, ...]:
"""Ensure that a point is hashable by a python dict."""
return tuple(map(float, x))
class TargetSpace:
"""Holds the param-space coordinates (X) and target values (Y).
Allows for constant-time appends.
Parameters
----------
target_func : function or None.
Function to be maximized.
pbounds : dict
Dictionary with parameters names as keys and a tuple with minimum
and maximum values.
random_state : int, RandomState, or None
optionally specify a seed for a random number generator
allow_duplicate_points: bool, optional (default=False)
If True, the optimizer will allow duplicate points to be registered.
This behavior may be desired in high noise situations where repeatedly probing
the same point will give different answers. In other situations, the acquisition
may occasionally generate a duplicate point.
Examples
--------
>>> def target_func(p1, p2):
...     return p1 + p2
>>> pbounds = {"p1": (0, 1), "p2": (1, 100)}
>>> space = TargetSpace(target_func, pbounds, random_state=0)
>>> x = np.array([1.0, 2.0])
>>> y = target_func(*x)
>>> space.register(x, y)
>>> assert space.max()["target"] == 3.0
>>> assert space.max()["params"] == {"p1": 1.0, "p2": 2.0}
"""
def __init__(
self,
target_func: Callable[..., float] | None,
pbounds: Mapping[str, tuple[float, float]],
constraint: ConstraintModel | None = None,
random_state: int | RandomState | None = None,
allow_duplicate_points: bool | None = False,
) -> None:
self.random_state = ensure_rng(random_state)
self._allow_duplicate_points = allow_duplicate_points or False
self.n_duplicate_points = 0
# The function to be optimized
self.target_func = target_func
# Get the name of the parameters
self._keys: list[str] = sorted(pbounds)
# Create an array with parameters bounds
self._bounds: NDArray[Float] = np.array(
[item[1] for item in sorted(pbounds.items(), key=lambda x: x[0])], dtype=float
)
# preallocated memory for X and Y points
self._params: NDArray[Float] = np.empty(shape=(0, self.dim))
self._target: NDArray[Float] = np.empty(shape=(0,))
# keep track of unique points we have seen so far
self._cache: dict[tuple[float, ...], float | tuple[float, float | NDArray[Float]]] = {}
self._constraint: ConstraintModel | None = constraint
if constraint is not None:
# preallocated memory for constraint fulfillment
self._constraint_values: NDArray[Float]
if constraint.lb.size == 1:
self._constraint_values = np.empty(shape=(0), dtype=float)
else:
self._constraint_values = np.empty(shape=(0, constraint.lb.size), dtype=float)
self._sorting_warning_already_shown = False # TODO: remove in future version
def __contains__(self, x: NDArray[Float]) -> bool:
"""Check if this parameter has already been registered.
Returns
-------
bool
"""
return _hashable(x) in self._cache
def __len__(self) -> int:
"""Return number of observations registered.
Returns
-------
int
"""
if len(self._params) != len(self._target):
error_msg = "The number of parameters and targets do not match."
raise ValueError(error_msg)
return len(self._target)
@property
def empty(self) -> bool:
"""Check if anything has been registered.
Returns
-------
bool
"""
return len(self) == 0
@property
def params(self) -> NDArray[Float]:
"""Get the parameter values registered to this TargetSpace.
Returns
-------
np.ndarray
"""
return self._params
@property
def target(self) -> NDArray[Float]:
"""Get the target function values registered to this TargetSpace.
Returns
-------
np.ndarray
"""
return self._target
@property
def dim(self) -> int:
"""Get the number of parameter names.
Returns
-------
int
"""
return len(self._keys)
@property
def keys(self) -> list[str]:
"""Get the keys (or parameter names).
Returns
-------
list of str
"""
return self._keys
@property
def bounds(self) -> NDArray[Float]:
"""Get the bounds of this TargetSpace.
Returns
-------
np.ndarray
"""
return self._bounds
@property
def constraint(self) -> ConstraintModel | None:
"""Get the constraint model.
Returns
-------
ConstraintModel
"""
return self._constraint
@property
def constraint_values(self) -> NDArray[Float]:
"""Get the constraint values registered to this TargetSpace.
Returns
-------
np.ndarray
"""
if self._constraint is None:
error_msg = "TargetSpace belongs to an unconstrained optimization"
raise AttributeError(error_msg)
return self._constraint_values
@property
def mask(self) -> NDArray[np.bool_]:
"""Return a boolean array of valid points.
Points are valid if they satisfy both the constraint and boundary conditions.
Returns
-------
np.ndarray
"""
mask = np.ones_like(self.target, dtype=bool)
# mask points that don't satisfy the constraint
if self._constraint is not None:
mask &= self._constraint.allowed(self._constraint_values)
# mask points that are outside the bounds
if self._bounds is not None:
within_bounds = np.all(
(self._bounds[:, 0] <= self._params) & (self._params <= self._bounds[:, 1]), axis=1
)
mask &= within_bounds
return mask
def params_to_array(self, params: Mapping[str, float]) -> NDArray[Float]:
"""Convert a dict representation of parameters into an array version.
Parameters
----------
params : dict
a single point, with len(x) == self.dim.
Returns
-------
np.ndarray
Representation of the parameters as an array.
"""
if set(params) != set(self.keys):
error_msg = (
f"Parameters' keys ({sorted(params)}) do "
f"not match the expected set of keys ({self.keys})."
)
raise ValueError(error_msg)
return np.asarray([params[key] for key in self.keys])
def array_to_params(self, x: NDArray[Float]) -> dict[str, float]:
"""Convert an array representation of parameters into a dict version.
Parameters
----------
x : np.ndarray
a single point, with len(x) == self.dim.
Returns
-------
dict
Representation of the parameters as dictionary.
"""
if len(x) != len(self.keys):
error_msg = (
f"Size of array ({len(x)}) is different than the "
f"expected number of parameters ({len(self.keys)})."
)
raise ValueError(error_msg)
return dict(zip(self.keys, x))
def _as_array(self, x: Any) -> NDArray[Float]:
try:
x = np.asarray(x, dtype=float)
except TypeError:
x = self.params_to_array(x)
x = x.ravel()
if x.size != self.dim:
error_msg = (
f"Size of array ({len(x)}) is different than the "
f"expected number of parameters ({len(self.keys)})."
)
raise ValueError(error_msg)
return x
def register(
self,
params: Mapping[str, float] | Sequence[float] | NDArray[Float],
target: float,
constraint_value: float | NDArray[Float] | None = None,
) -> None:
"""Append a point and its target value to the known data.
Parameters
----------
params : np.ndarray
a single point, with len(x) == self.dim.
target : float
target function value
constraint_value : float or np.ndarray or None
Constraint function value
Raises
------
NotUniqueError:
if the point is not unique
Notes
-----
runs in amortized constant time
Examples
--------
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = {"p1": (0, 1), "p2": (1, 100)}
>>> space = TargetSpace(target_func, pbounds)
>>> len(space)
0
>>> x = np.array([0, 0])
>>> y = 1
>>> space.register(x, y)
>>> len(space)
1
"""
# TODO: remove in future version
if isinstance(params, np.ndarray) and not self._sorting_warning_already_shown:
msg = (
"You're attempting to register an np.ndarray. Currently, the optimizer internally sorts"
" parameters by key and expects any registered array to respect this order. In future"
" versions this behaviour will change and the order as given by the pbounds dictionary"
" will be used. If you wish to retain sorted parameters, please manually sort your pbounds"
" dictionary before constructing the optimizer."
)
warn(msg, stacklevel=1)
self._sorting_warning_already_shown = True
x = self._as_array(params)
if x in self:
if self._allow_duplicate_points:
self.n_duplicate_points = self.n_duplicate_points + 1
print(
Fore.RED + f"Data point {x} is not unique. {self.n_duplicate_points}"
" duplicates registered. Continuing ..." + Fore.RESET
)
else:
error_msg = (
f"Data point {x} is not unique. You can set"
' "allow_duplicate_points=True" to avoid this error'
)
raise NotUniqueError(error_msg)
# if x is not within the bounds of the parameter space, warn the user
if self._bounds is not None and not np.all((self._bounds[:, 0] <= x) & (x <= self._bounds[:, 1])):
warn(f"\nData point {x} is outside the bounds of the parameter space. ", stacklevel=2)
# Make copies of the data, so as not to modify the originals in case something fails
# during the registration process. This prevents out-of-sync data.
params_copy: NDArray[Float] = np.concatenate([self._params, x.reshape(1, -1)])
target_copy: NDArray[Float] = np.concatenate([self._target, [target]])
cache_copy = self._cache.copy() # shallow copy suffices
if self._constraint is None:
# Insert data into unique dictionary
cache_copy[_hashable(x.ravel())] = target
else:
if constraint_value is None:
msg = (
"When registering a point to a constrained TargetSpace"
" a constraint value needs to be present."
)
raise ValueError(msg)
# Insert data into unique dictionary
cache_copy[_hashable(x.ravel())] = (target, constraint_value)
constraint_values_copy: NDArray[Float] = np.concatenate(
[self._constraint_values, [constraint_value]]
)
self._constraint_values = constraint_values_copy
# Operations passed, update the variables
self._params = params_copy
self._target = target_copy
self._cache = cache_copy
def probe(
self, params: Mapping[str, float] | Sequence[float] | NDArray[Float]
) -> float | tuple[float, float | NDArray[Float]]:
"""Evaluate the target function on a point and register the result.
Notes
-----
If `params` has been previously seen and duplicate points are not allowed,
returns a cached value of `result`.
Parameters
----------
params : np.ndarray
a single point, with len(x) == self.dim
Returns
-------
result : float | Tuple(float, float)
target function value, or Tuple(target function value, constraint value)
Example
-------
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = {"p1": (0, 1), "p2": (1, 100)}
>>> space = TargetSpace(target_func, pbounds)
>>> space.probe([1, 5])
>>> assert space.max()["target"] == 6
>>> assert space.max()["params"] == {"p1": 1.0, "p2": 5.0}
"""
x = self._as_array(params)
if x in self and not self._allow_duplicate_points:
return self._cache[_hashable(x.ravel())]
dict_params = self.array_to_params(x)
if self.target_func is None:
error_msg = "No target function has been provided."
raise ValueError(error_msg)
target = self.target_func(**dict_params)
if self._constraint is None:
self.register(x, target)
return target
constraint_value = self._constraint.eval(**dict_params)
self.register(x, target, constraint_value)
return target, constraint_value
def random_sample(self) -> NDArray[Float]:
"""
Sample a random point from within the bounds of the space.
Returns
-------
data: ndarray
[1 x dim] array with dimensions corresponding to `self._keys`
Examples
--------
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = {"p1": (0, 1), "p2": (1, 100)}
>>> space = TargetSpace(target_func, pbounds, random_state=0)
>>> space.random_sample()
array([ 0.54488318, 55.33253689])
"""
data = np.empty((1, self.dim))
for col, (lower, upper) in enumerate(self._bounds):
data.T[col] = self.random_state.uniform(lower, upper, size=1)
return data.ravel()
def _target_max(self) -> float | None:
"""Get the maximum target value within the current parameter bounds.
If there is a constraint present, the maximum value that fulfills the
constraint within the parameter bounds is returned.
Returns
-------
max: float or None
The maximum target value, or None if no valid point has been registered.
"""
if len(self.target) == 0:
return None
if len(self.target[self.mask]) == 0:
return None
return self.target[self.mask].max()
def max(self) -> dict[str, Any] | None:
"""Get maximum target value found and corresponding parameters.
If there is a constraint present, the maximum value that fulfills the
constraint within the parameter bounds is returned.
Returns
-------
res: dict
A dictionary with the keys 'target' and 'params'. The value of
'target' is the maximum target value, and the value of 'params' is
a dictionary with the parameter names as keys and the parameter
values as values.
"""
target_max = self._target_max()
if target_max is None:
return None
target = self.target[self.mask]
params = self.params[self.mask]
target_max_idx = np.argmax(target)
res = {"target": target_max, "params": dict(zip(self.keys, params[target_max_idx]))}
if self._constraint is not None:
constraint_values = self.constraint_values[self.mask]
res["constraint"] = constraint_values[target_max_idx]
return res
def res(self) -> list[dict[str, Any]]:
"""Get all target values and constraint fulfillment for all parameters.
Returns
-------
res: list
A list of dictionaries with the keys 'target', 'params', and
'constraint'. The value of 'target' is the target value, the value
of 'params' is a dictionary with the parameter names as keys and the
parameter values as values, and the value of 'constraint' is the
constraint fulfillment.
Notes
-----
Does not report if points are within the bounds of the parameter space.
"""
if self._constraint is None:
params = [dict(zip(self.keys, p)) for p in self.params]
return [{"target": target, "params": param} for target, param in zip(self.target, params)]
params = [dict(zip(self.keys, p)) for p in self.params]
return [
{"target": target, "constraint": constraint_value, "params": param, "allowed": allowed}
for target, constraint_value, param, allowed in zip(
self.target,
self._constraint_values,
params,
self._constraint.allowed(self._constraint_values),
)
]
def set_bounds(self, new_bounds: Mapping[str, NDArray[Float] | Sequence[float]]) -> None:
"""Change the lower and upper search bounds.
Parameters
----------
new_bounds : dict
A dictionary with the parameter name and its new bounds
"""
for row, key in enumerate(self.keys):
if key in new_bounds:
self._bounds[row] = new_bounds[key]
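# Illustrative sketch (comments only, not executed): registering a point against a
# constrained TargetSpace requires the constraint value as well (see register()).
# The target and constraint functions below are hypothetical placeholders.
#
#     import numpy as np
#     from bayes_opt.constraint import ConstraintModel
#     constraint = ConstraintModel(fun=lambda p1, p2: p1 - p2, lb=-np.inf, ub=0.0)
#     space = TargetSpace(lambda p1, p2: p1 + p2, {"p1": (0, 1), "p2": (1, 100)}, constraint=constraint)
#     space.register(np.array([0.5, 2.0]), target=2.5, constraint_value=-1.5)
#     print(space.max())   # includes a "constraint" entry for the best allowed point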
BayesianOptimization-2.0.1/bayes_opt/util.py 0000664 0000000 0000000 00000004603 14725554646 0021215 0 ustar 00root root 0000000 0000000 """Contains utility functions."""
from __future__ import annotations
import json
from os import PathLike
from pathlib import Path
from typing import TYPE_CHECKING
import numpy as np
from bayes_opt.exception import NotUniqueError
if TYPE_CHECKING:
from collections.abc import Iterable
from bayes_opt.bayesian_optimization import BayesianOptimization
def load_logs(
optimizer: BayesianOptimization, logs: str | PathLike[str] | Iterable[str | PathLike[str]]
) -> BayesianOptimization:
"""Load previous ...
Parameters
----------
optimizer : BayesianOptimizer
Optimizer the register the previous observations with.
logs : str or os.PathLike
File to load the logs from.
Returns
-------
The optimizer with the state loaded.
"""
if isinstance(logs, (str, PathLike)):
logs = [logs]
for log in logs:
with Path(log).open("r") as j:
while True:
try:
iteration = next(j)
except StopIteration:
break
iteration = json.loads(iteration)
try:
optimizer.register(
params=iteration["params"],
target=iteration["target"],
constraint_value=(iteration["constraint"] if optimizer.is_constrained else None),
)
except NotUniqueError:
continue
return optimizer
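# Illustrative sketch (comments only, not executed): replaying a JSONLogger file
# into a freshly constructed optimizer before continuing the search. The log path
# and optimizer configuration are hypothetical placeholders.
#
#     from bayes_opt import BayesianOptimization
#     from bayes_opt.util import load_logs
#     new_optimizer = BayesianOptimization(
#         f=lambda x, y: -x ** 2 - (y - 1) ** 2 + 1,
#         pbounds={"x": (-2, 2), "y": (-3, 3)},
#     )
#     load_logs(new_optimizer, logs=["./logs.log"])
#     print(len(new_optimizer.space))   # number of points recovered from the log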
def ensure_rng(random_state: int | np.random.RandomState | None = None) -> np.random.RandomState:
"""Create a random number generator based on an optional seed.
Parameters
----------
random_state : np.random.RandomState or int or None, default=None
Random state to use. If `None`, will create an unseeded random state.
If `int`, creates a state using the argument as seed. If a
`np.random.RandomState`, simply returns the argument.
Returns
-------
np.random.RandomState
"""
if random_state is None:
random_state = np.random.RandomState()
elif isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
elif not isinstance(random_state, np.random.RandomState):
error_msg = "random_state should be an instance of np.random.RandomState, an int, or None."
raise TypeError(error_msg)
return random_state
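# For example (comments only): ensure_rng(42) returns a RandomState seeded with 42,
# ensure_rng(np.random.RandomState(42)) returns the passed-in generator unchanged,
# and ensure_rng(None) returns an unseeded RandomState.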
BayesianOptimization-2.0.1/docsrc/ 0000775 0000000 0000000 00000000000 14725554646 0017153 5 ustar 00root root 0000000 0000000 BayesianOptimization-2.0.1/docsrc/Makefile 0000664 0000000 0000000 00000001343 14725554646 0020614 0 ustar 00root root 0000000 0000000 # Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = ../docs
# Put it first so that "make" without argument is like "make help".
# help:
# @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
github:
# @cp ../README.md .
@make html
@cp -a ../docs/html/. ../docs
@cp -r ../docsrc/ ../docs
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) BayesianOptimization-2.0.1/docsrc/conf.py 0000664 0000000 0000000 00000015422 14725554646 0020456 0 ustar 00root root 0000000 0000000 # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import time
import shutil
from glob import glob
from pathlib import Path
# sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('..'))
# copy the latest example files:
this_file_loc = Path(__file__).parent
notebooks = glob(str(this_file_loc.parent / 'examples' / '*.ipynb'))
for notebook in notebooks:
shutil.copy(notebook, this_file_loc)
# -- Project information -----------------------------------------------------
project = 'bayesian-optimization'
author = 'Fernando Nogueira'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.coverage',
'sphinx.ext.githubpages',
'nbsphinx',
'IPython.sphinxext.ipython_console_highlighting',
'sphinx.ext.mathjax',
"sphinx.ext.napoleon",
'sphinx_autodoc_typehints',
'sphinx.ext.intersphinx',
'sphinx_immaterial',
]
source_suffix = {
'.rst': 'restructuredtext',
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# Link types to the corresponding documentations
intersphinx_mapping = {
'python': ('https://docs.python.org/3', None),
'numpy': ('https://numpy.org/doc/stable', None),
'scipy': ('https://docs.scipy.org/doc/scipy/reference', None),
'sklearn': ('https://scikit-learn.org/stable', None),
}
napoleon_use_rtype = False
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_title = "Bayesian Optimization"
html_theme = "sphinx_immaterial"
copyright = f"{time.strftime('%Y')}, Fernando Nogueira and the bayesian-optimization developers"
# material theme options (see theme.conf for more information)
html_theme_options = {
"icon": {
"repo": "fontawesome/brands/github",
"edit": "material/file-edit-outline",
},
"site_url": "https://bayesian-optimization.github.io/BayesianOptimization/",
"repo_url": "https://github.com/bayesian-optimization/BayesianOptimization/",
"repo_name": "bayesian-optimization",
"edit_uri": "blob/master/docsrc",
"globaltoc_collapse": True,
"features": [
"navigation.expand",
# "navigation.tabs",
# "toc.integrate",
"navigation.sections",
# "navigation.instant",
# "header.autohide",
"navigation.top",
# "navigation.tracking",
# "search.highlight",
"search.share",
"toc.follow",
"toc.sticky",
"content.tabs.link",
"announce.dismiss",
],
"palette": [
{
"media": "(prefers-color-scheme: light)",
"scheme": "default",
"primary": "light-blue",
"accent": "light-green",
"toggle": {
"icon": "material/lightbulb-outline",
"name": "Switch to dark mode",
},
},
{
"media": "(prefers-color-scheme: dark)",
"scheme": "slate",
"primary": "deep-orange",
"accent": "lime",
"toggle": {
"icon": "material/lightbulb",
"name": "Switch to light mode",
},
},
],
# BEGIN: version_dropdown
"version_dropdown": True,
"version_json": '../versions.json',
# END: version_dropdown
"scope": "/", # share preferences across subsites
"toc_title_is_page_title": True,
# BEGIN: social icons
"social": [
{
"icon": "fontawesome/brands/github",
"link": "https://github.com/bayesian-optimization/BayesianOptimization",
"name": "Source on github.com",
},
{
"icon": "fontawesome/brands/python",
"link": "https://pypi.org/project/bayesian-optimization/",
},
{
"icon": "fontawesome/brands/python",
"link": "https://anaconda.org/conda-forge/bayesian-optimization",
}
],
# END: social icons
}
html_favicon = 'func.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
## extensions configuration
### sphinx-autodoc-typehints
typehints_use_signature = True
"""
If True, typehints for parameters in the signature are shown.
see more: https://github.com/tox-dev/sphinx-autodoc-typehints/blob/main/README.md#options
"""
typehints_use_signature_return = True
"""
If True, return annotations in the signature are shown.
see more: https://github.com/tox-dev/sphinx-autodoc-typehints/blob/main/README.md#options
"""
### autodoc
autodoc_typehints = "both"
"""
This value controls how to represent typehints. The setting takes the following values:
- `signature`: Show typehints in the signature
- `description`: Show typehints as content of the function or method
The typehints of overloaded functions or methods will still be represented in the signature.
- `none`: Do not show typehints
- `both`: Show typehints in the signature and as content of the function or method
Overloaded functions or methods will not have typehints included in the description
because it is impossible to accurately represent all possible overloads as a list of parameters.
see more: https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-autodoc_typehints
"""
autodoc_typehints_format = "short"
"""
This value controls the format of typehints. The setting takes the following values:
- `fully-qualified`: Show the module name and its name of typehints
- `short`: Suppress the leading module names of the typehints
(e.g. io.StringIO -> StringIO)
see more: https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-autodoc_typehints_format
"""
BayesianOptimization-2.0.1/docsrc/func.ico 0000664 0000000 0000000 00000412017 14725554646 0020607 0 ustar 00root root 0000000 0000000 [binary icon data omitted]