Consider incorporating SEBO as one of the model choices #13

Open
sgbaird opened this issue Jan 4, 2024 · 0 comments
sgbaird commented Jan 4, 2024

Fairly straightforward thanks to @liusulin's SEBO Ax tutorial and Service API example. I.e.,

GenerationStep(  # BayesOpt step
    model=Models.BOTORCH_MODULAR,
    # No limit on how many generator runs will be produced
    num_trials=-1,
    model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
        "surrogate": Surrogate(botorch_model_class=SURROGATE_CLASS),
        "acquisition_class": SEBOAcquisition,
        "botorch_acqf_class": qNoisyExpectedHypervolumeImprovement,
        "acquisition_options": {
            "penalty": "L0_norm",  # can be "L0_norm" or "L1_norm"
            "target_point": target_point,
            "sparsity_threshold": aug_dim,
        },
    },
)
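For context, the imports that snippet relies on look roughly like the following. This is a sketch based on the Ax SEBO tutorial; the exact module paths may move between Ax releases, and `SURROGATE_CLASS`, `target_point`, and `aug_dim` are placeholders that would be defined elsewhere:

```python
# Sketch of the imports/definitions the SEBO GenerationStep above assumes.
# Module paths follow the Ax SEBO tutorial (circa Ax 0.3.x) and may differ
# in other versions; the three assignments at the bottom are placeholders.
import torch

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.sebo import SEBOAcquisition
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models import SingleTaskGP

SURROGATE_CLASS = SingleTaskGP   # placeholder: surrogate model class
target_point = torch.zeros(6)    # placeholder: the sparse "off" value per parameter
aug_dim = 6                      # placeholder: number of parameters subject to sparsity
```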

For the most part, this just means specifying `Models.BOTORCH_MODULAR` along with the appropriate `model_kwargs`:

GenerationStep(
    model=Models.{{ model_name }},
    num_trials=-1,
    max_parallelism=3,
    model_kwargs={{ model_kwargs }},
),
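The `{{ ... }}` placeholders suggest Jinja-style substitution, so the per-model rendering could be sketched as below. This is a minimal, stdlib-only illustration using plain string replacement; the repo's actual template engine and placeholder handling may differ:

```python
# Minimal sketch of rendering the GenerationStep template per model choice.
# Uses str.replace instead of a real template engine (e.g. Jinja2) so it
# stays stdlib-only; this is an illustration, not the repo's implementation.
TEMPLATE = """GenerationStep(
    model=Models.{{ model_name }},
    num_trials=-1,
    max_parallelism=3,
    model_kwargs={{ model_kwargs }},
),"""


def render_step(model_name: str, model_kwargs: dict) -> str:
    """Substitute the two placeholders in the GenerationStep template."""
    return (
        TEMPLATE
        .replace("{{ model_name }}", model_name)
        .replace("{{ model_kwargs }}", repr(model_kwargs))
    )


rendered = render_step("FULLYBAYESIAN", {"num_samples": 256, "warmup_steps": 512})
print(rendered)
```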

and updating the specification of `model_kwargs`:

for opt in all_opts:
    # If the key-value pair is not already there, then add it based on
    # conditions. For example, if use_custom_gen is a hidden variable, and the
    # model is FULLYBAYESIAN, then use_custom_gen should be True.
    # Do this for each hidden variable.
    opt.setdefault(USE_CUSTOM_GEN_OPT_NAME, opt[MODEL_OPT_NAME] == "FULLYBAYESIAN")
    opt["model_kwargs"] = (
        {"num_samples": 256, "warmup_steps": 512}
        if opt[MODEL_OPT_NAME] == "FULLYBAYESIAN"
        else {}
    )  # overridden later to 16 and 32, but only for the test script
    # verify that all variables (hidden and visible) are represented
    assert all(
        opt.get(option_name) is not None for option_name in option_names
    ), f"option_names {option_names} not in opt {opt}"
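The hidden-variable defaulting above can be illustrated in isolation. A runnable sketch, with stand-in values for the real `MODEL_OPT_NAME` / `USE_CUSTOM_GEN_OPT_NAME` constants:

```python
# Standalone illustration of the setdefault-based hidden-variable logic.
# The two constant values here are stand-ins for the real constants in the
# codebase, chosen only so the example runs on its own.
MODEL_OPT_NAME = "model"
USE_CUSTOM_GEN_OPT_NAME = "use_custom_gen"

all_opts = [
    {MODEL_OPT_NAME: "GPEI"},
    {MODEL_OPT_NAME: "FULLYBAYESIAN"},
    {MODEL_OPT_NAME: "GPEI", USE_CUSTOM_GEN_OPT_NAME: True},  # already set; kept
]

for opt in all_opts:
    # Hidden variable defaults to True only for the fully Bayesian model,
    # unless the caller already set it explicitly (setdefault is a no-op then).
    opt.setdefault(USE_CUSTOM_GEN_OPT_NAME, opt[MODEL_OPT_NAME] == "FULLYBAYESIAN")
    opt["model_kwargs"] = (
        {"num_samples": 256, "warmup_steps": 512}
        if opt[MODEL_OPT_NAME] == "FULLYBAYESIAN"
        else {}
    )

print(all_opts)
```

Note that `setdefault` leaves an explicitly provided value alone (third entry), which is what makes the hidden-variable override safe.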

The logic will also need to be updated to allow for specifying BOTORCH_MODULAR (xref: #14), which is a good move IMO anyway. I.e., SEBO is the requested model, but BOTORCH_MODULAR is what actually gets used as the model.
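That "requested model vs. model actually used" indirection could be sketched as a small lookup. This is a hypothetical illustration only (the mapping name, function, and kwargs are placeholders, not the repo's actual API):

```python
# Hypothetical sketch: map the user-facing model choice to the Ax model name
# actually passed to GenerationStep, plus any extra model_kwargs it implies.
# "SEBO" is what the user requests, but BOTORCH_MODULAR is used under the hood.
REQUESTED_TO_ACTUAL = {
    "GPEI": ("GPEI", {}),
    "FULLYBAYESIAN": ("FULLYBAYESIAN", {"num_samples": 256, "warmup_steps": 512}),
    "SEBO": ("BOTORCH_MODULAR", {"acquisition_options": {"penalty": "L0_norm"}}),
}


def resolve_model(requested: str) -> tuple:
    """Return (Ax model name to use, implied model_kwargs) for a requested model."""
    return REQUESTED_TO_ACTUAL[requested]


actual, kwargs = resolve_model("SEBO")
print(actual, kwargs)
```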

@liusulin, can you clarify any other assumptions or limitations? For example, can this be used with multiple "traditional" objectives, or does it assume there is only one "traditional" objective in addition to the sparsity objective? Is it compatible with other categorical and integer variables?
