
Is it possible to use beta-CROWN directly without the need for alpha-CROWN? #79

Open
xiaoyuanpigo opened this issue Nov 5, 2024 · 15 comments

Comments

@xiaoyuanpigo

I have encountered some questions while working with beta-crown, and I would appreciate your assistance in clarifying them.

I would like to know whether it is possible to use beta-CROWN directly, without alpha-CROWN. If this is possible, could you kindly explain why I encounter an error when setting the following in "configuration.yaml"?

alpha-crown:
  alpha: false

The specific error message is as follows:
BaB round 1
batch: 1
Traceback (most recent call last):
  File "abcrown.py", line 658, in <module>
    abcrown.main()
  File "abcrown.py", line 632, in main
    verified_status = self.complete_verifier(
  File "abcrown.py", line 424, in complete_verifier
    l, nodes, ret = self.bab(
  File "abcrown.py", line 244, in bab
    result = general_bab(
  File "/data/code/alpha-beta-CROWN/complete_verifier/bab.py", line 345, in general_bab
    global_lb = act_split_round(
  File "/data/code/alpha-beta-CROWN/complete_verifier/bab.py", line 170, in act_split_round
    split_domain(net, domains, d, batch, impl_params=impl_params,
  File "/data/code/alpha-beta-CROWN/complete_verifier/bab.py", line 75, in split_domain
    branching_heuristic.get_branching_decisions(
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/code//alpha-beta-CROWN/complete_verifier/heuristics/kfsb.py", line 148, in get_branching_decisions
    k_ret_lbs = self.net.update_bounds(
  File "/data/code/alpha-beta-CROWN/complete_verifier/beta_CROWN_solver.py", line 309, in update_bounds
    lb, _, = self.net.compute_bounds(
  File "/data/code/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 1209, in compute_bounds
    return self._compute_bounds_main(C=C,
  File "/data/code/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 1314, in _compute_bounds_main
    ret = self.backward_general(
  File "/data/code/alpha-beta-CROWN/complete_verifier/auto_LiRPA/backward_bound.py", line 256, in backward_general
    A, lower_b, upper_b = l.bound_backward(
  File "/data/code/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/relu.py", line 249, in bound_backward
    self._backward_relaxation(last_lA, last_uA, x, start_node, unstable_idx)
  File "/data/code/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/relu.py", line 555, in _backward_relaxation
    selected_alpha, alpha_lookup_idx = self.select_alpha_by_idx(last_lA, last_uA,
  File "/data/code/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/relu.py", line 207, in select_alpha_by_idx
    selected_alpha = self.alpha[start_node.name]
KeyError: '/50'

@shizhouxing
Member

That's probably not supported for now, but you may try setting the learning rate for alpha to 0 to effectively disable alpha optimization.
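Such a setting might look like the following sketch. The `lr_alpha` key names are my assumption based on typical alpha-beta-CROWN configuration files; please check the default configuration shipped with your version:

```yaml
solver:
  alpha-crown:
    lr_alpha: 0.0   # assumed key: freeze alpha optimization in the incomplete stage
  beta-crown:
    lr_alpha: 0.0   # assumed key: freeze alpha optimization during branch-and-bound
```

With a zero learning rate, the alpha values stay at their initialization, so the relaxation effectively reduces to plain CROWN while the rest of the pipeline runs unchanged.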

@xiaoyuanpigo
Author

Thanks! May I ask why the certified accuracy under this setting:

general:
  device: cpu
  conv_mode: matrix
  root_path: /data/code/
  csv_name: mnist_4layer_5.csv
  enable_incomplete_verification: False
attack:
  pgd_restarts: 50
solver:
  batch_size: 4096
  beta-crown:
    iteration: 20
bab:
  timeout: 3600
  branching:
    method: fsb
    reduceop: min
    candidates: 1

is 64%, while under this one:

general:
  device: cpu
  conv_mode: matrix
  root_path: ../../
  csv_name: mnist_4layer_5.csv
attack:
  pgd_restarts: 50
solver:
  batch_size: 1024
  beta-crown:
    iteration: 20
bab:
  timeout: 180

it is 69%? It seems the former setting is beta-CROWN and a complete verifier, while the latter is alpha-beta-CROWN and partially incomplete. However, the results for the incomplete verifier are more precise than those for the complete verifier, which seems theoretically impossible. Did I make a mistake in the implementation?

@shizhouxing
Member

The latter one is not an incomplete verifier, as branch-and-bound is enabled by default. The former one is probably worse because some configs were changed (such as the fsb branching heuristic with 1 candidate, which is worse than the default kfsb).

@ytsao

ytsao commented Nov 8, 2024

Hi @shizhouxing
As you mentioned, branch-and-bound is enabled by default. Can it be turned off in the configuration file? If so, which parameter should I use? Thanks.

@shizhouxing
Member

@ytsao It can be turned off by setting complete_verifier: skip. But in that case, beta-CROWN will not run.
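For reference, a minimal sketch of that option, placed under the general section as in the configs earlier in this thread:

```yaml
general:
  complete_verifier: skip   # skip branch-and-bound entirely; beta-CROWN is never invoked
```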

@xiaoyuanpigo
Author

xiaoyuanpigo commented Nov 9, 2024

Thanks! May I ask why the certified accuracy under the following setting is still worse than the default one?

general:
  device: cpu
  conv_mode: matrix
  root_path: ../
  csv_name: maxpool_mnist_conv8.csv
  enable_incomplete_verification: False

data:
  start: 0
  end: 100

attack:
  pgd_restarts: 50

solver:
  batch_size: 64
  beta-crown:
    iteration: 100

bab:
  timeout: 3600
  branching:
    method: fsb
    reduceop: min
    candidates: 10
    branching_relu_iterations: 100

@shizhouxing
Member

> May I ask why the certified accuracy under the following setting is still worse than the default one? […]

@xiaoyuanpigo Could you explain what changes you are adding and why?

@xiaoyuanpigo
Author

I aim to use a verifier that primarily relies on BaB (Branch and Bound) for ReLU, instead of a simpler approach that bounds ReLU with a single pair of upper and lower linear constraints. To achieve this, I’ve set the timeout to 3600 seconds, the iteration count for beta-CROWN to 100, and the number of FSB candidates to 10 to encourage the verifier to apply BaB on more ReLUs. However, the results don’t seem to match my expectations. Could there be an error in my implementation?

Further, the certified results are better under the following settings:

general:
  device: cpu
  conv_mode: matrix
  root_path: ../
  csv_name: maxpool_mnist_conv8.csv

data:
  start: 0
  end: 100

attack:
  pgd_restarts: 50
solver:
  batch_size: 1024
  beta-crown:
    iteration: 20
bab:
  timeout: 180

@shizhouxing
Member

> the number of FSB candidates to 10 to encourage the verifier to apply BaB on more ReLUs

@xiaoyuanpigo

This number is not the number of ReLUs to apply BaB on; BaB will be applied to all the unstable ReLUs until timeout. The number of FSB candidates is the number of candidate neurons for which actual bounds are computed when picking the best neuron to branch on. Increasing this number a lot may slow down BaB.

The same goes for the number of beta-CROWN iterations -- increasing it may slow down BaB. So it's possible that you may observe a worse result.
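As a toy illustration of that trade-off (my own hypothetical sketch, not abcrown's actual heuristics code): a cheap score first ranks all unstable neurons, then the expensive bound computation runs only on the top-k candidates, so the cost per branching decision grows with k.

```python
# Toy sketch of a k-candidate branching heuristic (illustration only;
# not the actual FSB/kFSB implementation from alpha-beta-CROWN).

def pick_branching_neuron(cheap_scores, expensive_gain, k):
    """cheap_scores: dict neuron -> heuristic score (higher = more promising).
    expensive_gain: callable neuron -> improvement from an actual bound
    computation. Only the top-k candidates pay this cost, so the work per
    branching decision grows linearly with k."""
    candidates = sorted(cheap_scores, key=cheap_scores.get, reverse=True)[:k]
    # Run the expensive bound computation only on the shortlisted candidates.
    return max(candidates, key=expensive_gain)

scores = {"n1": 0.9, "n2": 0.5, "n3": 0.8, "n4": 0.1}
true_gain = {"n1": 0.2, "n2": 0.7, "n3": 0.6, "n4": 0.9}

# With k=2, only n1 and n3 are evaluated; n3 wins among them.
best = pick_branching_neuron(scores, true_gain.get, k=2)
```

With k=4 the heuristic would discover that n4 is actually the best split, but only by paying the expensive evaluation for every neuron: a larger k can improve decisions while slowing each BaB round.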

@xiaoyuanpigo
Author

Thanks! @shizhouxing Should I consider skipping beta-CROWN and alpha-CROWN altogether? Like the following setting:

general:
  device: cpu
  conv_mode: matrix
  root_path: ../
  csv_name: maxpool_mnist_conv8.csv
  enable_incomplete_verification: False
  complete_verifier: bab
data:
  start: 0
  end: 100
attack:
  pgd_restarts: 50
solver:
  batch_size: 64
  beta-crown:
    iteration: 0
bab:
  timeout: 3600
  branching:
    method: fsb
    reduceop: min
    candidates: 3
    branching_relu_iterations: 100

The goal is that the verifier applies as many BaB splits on ReLU as possible before reaching the timeout. Since the network only includes ReLU and MaxPool as its non-linear layers, I want the verifier to apply BaB to ReLU, while using a single pair of upper and lower linear constraints to bound the MaxPool layers.

@shizhouxing
Member

> Should I consider skipping beta-CROWN and alpha-CROWN altogether? […]

Making the number of iterations too small or too large can both worsen the results. I think alpha-CROWN and beta-CROWN should not be disabled. It's recommended to keep the default settings if you are not sure how they should be tuned.

> so that the verifier will apply as many BaB techniques for ReLU as possible before reaching the timeout

If alpha-CROWN and beta-CROWN are disabled, the bounds will be looser. Although BaB may then be able to handle more ReLU neurons, the worsened bounds will likely give you worse results.
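As a single-neuron toy example of this effect (my own illustration, not code from the repository): for an unstable ReLU with pre-activation range [l, u], any slope alpha in [0, 1] gives a valid lower relaxation relu(x) >= alpha*x, and optimizing alpha, as alpha-CROWN does, tightens the resulting bound compared to freezing it at some heuristic value.

```python
# Toy single-neuron illustration (not code from alpha-beta-CROWN) of why
# optimizing the ReLU lower-bound slope ("alpha") tightens bounds.
# For an unstable ReLU with pre-activation range [l, u], l < 0 < u,
# relu(x) >= alpha * x holds for any alpha in [0, 1].

def lower_bound(w, l, u, alpha):
    # Worst case of the linear relaxation w * alpha * x over x in [l, u];
    # assumes w > 0 so the relaxation direction is preserved.
    return min(w * alpha * l, w * alpha * u)

w, l, u = 1.0, -2.0, 3.0
heuristic_alpha = u / (u - l)            # an arbitrary fixed slope, for illustration
lb_fixed = lower_bound(w, l, u, heuristic_alpha)
# Optimizing alpha over a coarse grid recovers a much tighter bound.
lb_opt = max(lower_bound(w, l, u, a / 10) for a in range(11))
```

Here the frozen slope contributes -1.2 to the bound while the optimized slope recovers the exact value 0 (the true minimum of relu on this range); with alpha/beta optimization disabled, every unstable neuron contributes such slack, which is why BaB on looser bounds often loses overall.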

@xiaoyuanpigo
Author

Thanks, @shizhouxing. May I ask why the final result is marked as "unknown" when both "global_lb" and "best_l after optimization" are already greater than 0, which seems to suggest that alpha-beta-CROWN has successfully verified the instance? Below is the relevant log:

all verified at 0th iter
best_l after optimization: 1.4772682189941406
beta sum per layer: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
alpha/beta optimization time: 0.010720252990722656
pruning_in_iteration open status: False
ratio of positive domain = 4 / 4 = 1.0
pruning-in-iteration extra time: 1.2636184692382812e-05
Time: prepare 0.0002 bound 0.0118 transfer 0.0000 finalize 0.0023 func 0.0144
Accumulated time: func 0.0144 prepare 0.0030 bound 0.0118 transfer 0.0000 finalize 0.0023
Length of domains: 0
Time: pickout 0.0003 decision 0.6042 set_bounds 0.0005 solve 0.0144 add 0.0000
Accumulated time: pickout 0.0003 decision 0.6042 set_bounds 0.0005 solve 0.0144 add 0.0000
No domains left, verification finished!
Current (lb-rhs): 1.0000000116860974e-07
0 domains visited
num_domains_AFTER BAB 0
Cumulative time: 224.36278986930847
Result: unknown in 226.0425 seconds

@shizhouxing
Member

@xiaoyuanpigo Could you post your full log?

@xiaoyuanpigo
Author

Certainly! Here's the full log for your reference:
MNIST_CONV_MAXPOOL_NEW.log
I suspect the reason the result shows as 'unknown' when 'global_lb' is greater than 0 lies in line 319 of 'bab.py': the condition in the 'if' statement doesn't account for cases where 'global_lb' is already above 0.

@shizhouxing
Member

@xiaoyuanpigo

It's probably a timeout. Maybe 224 s exceeded your timeout? What timeout did you set?

I think line 319 of 'bab.py' is not relevant here. If stats.all_node_split were triggered, it would print "all nodes are split!!", which doesn't appear in your log.
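A toy sketch of how that can happen (hypothetical control flow, not abcrown's actual bab.py): if the final status is decided after a wall-clock check, a run whose last bound already proves the property can still be reported as unknown once the budget is exhausted.

```python
import time

# Hypothetical sketch (not abcrown's actual code) of a verifier loop that
# reports "unknown" on timeout even when the last bound proves the property.
def verify(compute_global_lb, rhs, timeout, start_time):
    global_lb = compute_global_lb()          # may itself take a long time
    if time.time() - start_time > timeout:
        return "unknown"                     # budget exhausted: the bound is discarded
    return "verified" if global_lb > rhs else "unknown"

# Pretend 200 s have already elapsed against a 180 s budget: the positive
# bound (1.47 > 0) no longer helps, matching a 224 s run with timeout: 180.
result = verify(lambda: 1.47, 0.0, timeout=180, start_time=time.time() - 200)
```

Under this reading, raising the timeout above the observed 226 s runtime would let the same run finish as verified.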
