Industry-relevant materials discovery tasks are often hierarchical, noisy, multi-fidelity, multi-objective, high-dimensional, non-linearly correlated, and exhibit mixed numerical and categorical variables subject to linear and non-linear constraints. To boot, experimental iterations are usually prohibitively expensive.
Examples of such materials discovery tasks include formulation optimization, compositional design of high entropy alloys, and multi-step synthesis. Choosing an algorithm that can expertly navigate such complex design spaces is a non-trivial task, and no single algorithm is supreme.
So, how do you pair an algorithm with a design task?
Here, we introduce **PseudoCrab**: a high-dimensional property predictor framed as a pseudo-materials discovery benchmark with fake compositional (linear) and "no-more-than-X-components" (non-linear) constraints. We apply a state-of-the-art high-dimensional Bayesian optimization algorithm (SAASBO) in conjunction with a multi-objective parallel Noisy Expected Hypervolume Improvement (qNEHVI) acquisition function and compare it against other high-performing models. Because PseudoCrab is customizable, researchers can adjust the benchmark to more closely match their applications of interest during algorithm downselection, prior to expensive materials discovery campaigns.
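As a rough illustration of the kind of loop PseudoCrab is meant to stress-test, the sketch below wires a SAASBO surrogate (one SAAS GP per objective, fit with NUTS) to the qNEHVI acquisition function at the BoTorch level, with the compositional constraint passed as a linear equality to the acquisition optimizer. It uses random placeholder data and is not the benchmark's actual code; module paths and keyword arguments may vary across BoTorch versions, and the non-linear cardinality constraint is only noted in a comment. See the `scripts` and `notebooks` folders for the real setup.

```python
# Illustrative sketch only: SAASBO surrogates + qNEHVI on random placeholder data,
# with a sum-to-one "compositional" equality constraint during candidate generation.
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.models.model_list_gp_regression import ModelListGP
from botorch.fit import fit_fully_bayesian_model_nuts
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.optim import optimize_acqf

d, n, m = 10, 20, 2  # dimensions, initial points, objectives
train_X = torch.rand(n, d, dtype=torch.double)
train_X = train_X / train_X.sum(dim=-1, keepdim=True)  # fake compositions (sum to 1)
train_Y = torch.rand(n, m, dtype=torch.double)         # placeholder objective values

# One SAAS GP per objective; sparsity-inducing priors handle the high dimensionality.
models = []
for i in range(m):
    gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y[:, i : i + 1])
    fit_fully_bayesian_model_nuts(gp, warmup_steps=256, num_samples=128, thinning=16)
    models.append(gp)
model = ModelListGP(*models)

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, 0.0],  # problem-specific reference point
    X_baseline=train_X,
    prune_baseline=True,
)

# Compositional (linear) constraint: sum_i x_i == 1 over the unit cube.
# The "no-more-than-X-components" cardinality constraint is non-linear and would
# typically be handled separately (e.g. by penalizing or rejecting candidates).
equality_constraints = [
    (torch.arange(d), torch.ones(d, dtype=torch.double), 1.0),
]
candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.stack(
        [torch.zeros(d, dtype=torch.double), torch.ones(d, dtype=torch.double)]
    ),
    q=2,  # parallel (batch) suggestions
    num_restarts=10,
    raw_samples=128,
    equality_constraints=equality_constraints,
)
```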
Additional WIP: https://colab.research.google.com/drive/1-tSKAfYbBhYESqfi0n04NSZRJj4h9Drj?usp=sharing
In order to set up the necessary environment:

- review and uncomment what you need in `environment.yml` and create an environment `optimization-benchmark` with the help of conda:

  ```bash
  conda env create -f environment.yml
  ```

- activate the new environment with:

  ```bash
  conda activate optimization-benchmark
  ```

NOTE: The conda environment will have optimization-benchmark installed in editable mode. Some changes, e.g. in `setup.cfg`, might require you to run `pip install -e .` again.
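As a quick sanity check (an assumed step, not part of the official setup), you can confirm the editable install by importing the package and printing where it resolves from:

```bash
# Assumes the package name matches src/optimization_benchmark in the project tree.
conda activate optimization-benchmark
python -c "import optimization_benchmark; print(optimization_benchmark.__file__)"
```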
Optional and needed only once after `git clone`:

- install several pre-commit git hooks with:

  ```bash
  pre-commit install
  # You might also want to run `pre-commit autoupdate`
  ```

  and check out the configuration under `.pre-commit-config.yaml`. The `-n, --no-verify` flag of `git commit` can be used to deactivate pre-commit hooks temporarily.

- install nbstripout git hooks to remove the output cells of committed notebooks with:

  ```bash
  nbstripout --install --attributes notebooks/.gitattributes
  ```

  This is useful to avoid large diffs due to plots in your notebooks. A simple `nbstripout --uninstall` will revert these changes.
Then take a look into the `scripts` and `notebooks` folders.
For dependency management and reproducibility:

- Always keep your abstract (unpinned) dependencies updated in `environment.yml` and eventually in `setup.cfg` if you want to ship and install your package via `pip` later on.

- Create concrete dependencies as `environment.lock.yml` for the exact reproduction of your environment with:

  ```bash
  conda env export -n optimization-benchmark -f environment.lock.yml
  ```

  For multi-OS development, consider using `--no-builds` during the export.

- Update your current environment with respect to a new `environment.lock.yml` using:

  ```bash
  conda env update -f environment.lock.yml --prune
  ```
Project organization:

```
├── AUTHORS.md              <- List of developers and maintainers.
├── CHANGELOG.md            <- Changelog to keep track of new features and fixes.
├── CONTRIBUTING.md         <- Guidelines for contributing to this project.
├── Dockerfile              <- Build a docker container with `docker build .`.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yml         <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── pyproject.toml          <- Build configuration. Don't change! Use `pip install -e .`
│                              to install for development or `tox -e build` to build.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the actual
│                              Python package (`optimization_benchmark`), e.g. `train_model`.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- [DEPRECATED] Use `python setup.py develop` to install
│                              for development or `python setup.py bdist_wheel` to build.
├── src
│   └── optimization_benchmark <- Actual Python package where the main functionality goes.
├── tests                   <- Unit tests which can be run with `pytest`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.
```
This project has been set up using PyScaffold 4.2.3.post1.dev12+g22876ea6 and the dsproject extension 0.7.2.post1.dev4+g5267ba3.