Implement multi-fidelity optimization #50

Open

sgbaird opened this issue Jun 7, 2024 · 5 comments

@sgbaird (Owner) commented Jun 7, 2024

No description provided.

@sgbaird (Owner, Author) commented Jun 8, 2024

Need to set the fidelity row to hidden for now.
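
For illustration, one way an option row can be gated behind a hidden flag in grid-generation code; the row structure and flag name here are assumptions for the sketch, not honegumi's actual internals:

# Hypothetical illustration only -- the row structure and `hidden` flag are
# assumptions, not honegumi's actual internals.
option_rows = [
    {"name": "objective", "options": ["single", "multi"], "hidden": False},
    {"name": "fidelity", "options": ["single", "multi"], "hidden": True},  # hide until implemented
]
# Only visible rows would be rendered into the option-selector grid.
visible_rows = [row for row in option_rows if not row["hidden"]]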

@Abrikosoff commented

I've been trying to implement this in the Service API: Ax Issue 2514. Might be useful here (if I can verify its correctness!)
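
For reference, the heart of the Service API approach is declaring one search-space parameter as a fidelity via is_fidelity and target_value (the same fields used in the full script later in this thread). A minimal sketch; the parameter names, bounds, and objective here are purely illustrative:

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="mf_sketch",  # illustrative name
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {
            # Fidelity parameter: trials may run cheaply at s < 1.0, while the
            # optimum is sought at the target fidelity s = 1.0.
            "name": "s",
            "type": "range",
            "bounds": [0.1, 1.0],
            "is_fidelity": True,
            "target_value": 1.0,
        },
    ],
    objectives={"y": ObjectiveProperties(minimize=True)},
)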

@sgbaird (Owner, Author) commented Jun 12, 2024

> I've been trying to implement this in the Service API: Ax Issue 2514. Might be useful here (if I can verify its correctness!)

Thanks! I have https://github.com/sgbaird/honegumi/blob/main/scripts/refreshers/continuous_multi_fidelity.py, though I'm not sure it's the most straightforward approach with the most recent Ax version. Will take a look!
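
For readers following along: the pattern a script like that typically follows (from Ax's multi-fidelity tutorial) pairs Sobol initialization with the knowledge-gradient model. A minimal single-objective sketch, assuming the tutorial-style Models.GPKG registry entry; the linked script may differ:

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization.
        GenerationStep(model=Models.SOBOL, num_trials=5),
        # GPKG = GP surrogate + knowledge gradient, the model Ax's
        # multi-fidelity tutorial uses once a fidelity parameter is present
        # in the search space.
        GenerationStep(model=Models.GPKG, num_trials=-1),
    ]
)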

sgbaird closed this as completed Jun 26, 2024
@sgbaird (Owner, Author) commented Jun 26, 2024

Fidelity isn't complete, but it's at least removed from visibility for now.

sgbaird reopened this Jun 26, 2024
sgbaird changed the title from "Fidelity is there but isn't implemented" to "Implement multi-fidelity optimization" Jun 26, 2024
@Abrikosoff commented

In the end I was able to make the following work (for multiple objectives):

from botorch.test_functions.multi_objective_multi_fidelity import MOMFBraninCurrin
import torch

tkwargs = {  # tensor dtype and device settings used throughout
    "dtype": torch.double,
    "device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}

BC = MOMFBraninCurrin(negate=True).to(**tkwargs)
dim_x = BC.dim
dim_y = BC.num_objectives

ref_point = torch.zeros(dim_y, **tkwargs)

n_INIT = 2  # Initialization budget in terms of full-fidelity evaluations (defined here but not used below)

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from botorch.acquisition.multi_objective import qExpectedHypervolumeImprovement
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization step
        GenerationStep(
            model=Models.SOBOL,
            num_trials=1,  # How many trials should be produced from this generation step
        ),
        # Bayesian optimization step using the custom acquisition function
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # No limitation on how many trials should be produced from this step
            # For `BOTORCH_MODULAR`, we pass in kwargs to specify what surrogate or acquisition function to use.
            model_kwargs={
                # "botorch_acqf_class": qHypervolumeKnowledgeGradient,
                "botorch_acqf_class": qExpectedHypervolumeImprovement,
                "surrogate": Surrogate(SingleTaskMultiFidelityGP),
            },
            model_gen_kwargs={
                "model_gen_options": {
                    "acqf_kwargs": {
                        # "ref_point": ref_point,
                        "data_fidelities": [2],
                    }
                }
            },
        ),
    ]
)

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties


# Initialize the client - AxClient offers a convenient API to control the experiment
ax_client = AxClient(generation_strategy=gs)

ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
            "is_fidelity": True,  # marks this parameter as the fidelity
            "target_value": 1.0,  # the optimum is sought at full fidelity
        },
    ],
    # Multi-objective optimization using the MOMFBraninCurrin test function (2 design dims + 1 fidelity dim).
    objectives={
        "a": ObjectiveProperties(minimize=False, threshold=BC.ref_point[0]),
        "b": ObjectiveProperties(minimize=False, threshold=BC.ref_point[1]),
    },
    # parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    # outcome_constraints=["l2norm <= 1.25"],  # Optional.
)

def evaluate(parameters):
    # MOMFBraninCurrin expects (x1, x2, fidelity); the last entry is the fidelity.
    evaluation = BC(
        torch.tensor([parameters.get("x1"), parameters.get("x2"), parameters.get("x3")])
    )
    # Return (mean, SEM) pairs; SEM = 0.0 marks the observations as noiseless.
    return {"a": (evaluation[0].item(), 0.0), "b": (evaluation[1].item(), 0.0)}

NUM_EVALS = 40
for i in range(NUM_EVALS):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))

Not sure if this is helpful in any way, or even if it is correct, but I'm putting it here for completeness. Also, I still wasn't able to make MF-HVKG work with multiple objectives (linked above), so that's still ongoing.
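
For anyone picking up the MF-HVKG thread: at the BoTorch level, the pieces would be assembled roughly as follows. This is an unverified sketch assuming the current qMultiFidelityHypervolumeKnowledgeGradient signature; the training data, cost-model constants, and fidelity column index are placeholder assumptions:

import torch
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import (
    qMultiFidelityHypervolumeKnowledgeGradient,
)
from botorch.acquisition.utils import project_to_target_fidelity
from botorch.models.cost import AffineFidelityCostModel
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from botorch.models.model_list_gp_regression import ModelListGP

target_fidelities = {2: 1.0}  # column 2 is the fidelity; target fidelity is 1.0

# One multi-fidelity GP per objective, wrapped in a ModelListGP.
train_X = torch.rand(10, 3, dtype=torch.double)  # (x1, x2, fidelity); placeholder data
train_Y = torch.rand(10, 2, dtype=torch.double)  # two objectives; placeholder data
models = [
    SingleTaskMultiFidelityGP(train_X, train_Y[:, i : i + 1], data_fidelities=[2])
    for i in range(2)
]
model = ModelListGP(*models)

# Cost-aware utility: lower fidelities are cheaper, so the KG value gets
# divided by the (affine) evaluation cost. Weights and fixed cost are assumptions.
cost_model = AffineFidelityCostModel(fidelity_weights={2: 1.0}, fixed_cost=5.0)
cost_aware_utility = InverseCostWeightedUtility(cost_model=cost_model)

acqf = qMultiFidelityHypervolumeKnowledgeGradient(
    model=model,
    ref_point=torch.zeros(2, dtype=torch.double),
    target_fidelities=target_fidelities,
    cost_aware_utility=cost_aware_utility,
    # Fantasized candidates are projected to the target fidelity before the
    # hypervolume of the resulting Pareto front is evaluated.
    project=lambda X: project_to_target_fidelity(X=X, target_fidelities=target_fidelities),
)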
