First, thanks for the great implementation. It really helped me understand and experiment with segmentation by diffusion.
I would like to contribute pretrained models on BraTS2020 and compare their performance against the original paper. To do so, I would like to set the hyperparameters to match.
The original implementation (https://github.com/WuJunde/MedSegDiff) uses the following parameters:
--num_res_blocks 2
--num_heads 1
--learn_sigma True
--attention_resolutions 16
Could you please confirm whether those are equivalent to the following parameters of the `Unet` class in this code?
resnet_block_groups=8
attn_heads=4
attn_dim_head=32
full_self_attn ?
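To make the comparison concrete, here is a minimal sketch (pure Python, no framework imports) of how I would line the two settings up. The key names on the right are the `Unet` kwargs quoted above; the suggested value changes are my guesses at equivalents, not confirmed. Note in particular that if this repo follows the common lucidrains-style `Unet`, `resnet_block_groups` sets the GroupNorm group count, not the number of ResNet blocks per resolution level, so it would not correspond to `--num_res_blocks`:

```python
# Flags from the original MedSegDiff (guided-diffusion style) training script.
medsegdiff = {
    "num_res_blocks": 2,           # ResNet blocks per resolution level
    "num_heads": 1,                # attention heads
    "learn_sigma": True,           # model also predicts the variance
    "attention_resolutions": "16", # feature-map size(s) that get attention
}

# Defaults of this repo's Unet class, as listed above.
unet_defaults = {
    "attn_heads": 4,
    "attn_dim_head": 32,
    "resnet_block_groups": 8,  # caution: likely GroupNorm groups, NOT blocks per level
}

def match_heads(defaults, target_heads):
    """Return a copy of the kwargs with the attention head count replaced.
    Purely illustrative: assumes attn_heads maps onto --num_heads."""
    kwargs = dict(defaults)
    kwargs["attn_heads"] = target_heads
    return kwargs

# Guessed kwargs to reproduce the paper's single-head attention.
matched = match_heads(unet_defaults, medsegdiff["num_heads"])
```

If that mapping is right, the only straightforward change to match the paper would be `attn_heads=1`; how `--num_res_blocks`, `--learn_sigma`, and `--attention_resolutions` translate (and whether `full_self_attn` plays a role there) is exactly what I'm unsure about.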
Thanks