
Clarity on brain tumor results #109

Open
25benjaminli opened this issue May 13, 2024 · 0 comments
25benjaminli commented May 13, 2024

Hello, thanks for the pre-print and for releasing the code. I am working on brain tumor segmentation with the BraTS dataset, which, as you know, is a semantic (multi-class) task. I have two questions I would appreciate some clarity on.

1. What is actually being evaluated in the pre-print? The objectives appear to be binary: `dataset/brat.py` seems to do binary segmentation (the segmentation mask is binarized against a single label). I know Segment Anything is natively a binary segmentation algorithm, but then why are there class-wise results for the BTCV organ dataset and not for BraTS? @LJQCN101 supposedly integrated multimask output for semantic segmentation, but the results in the pre-print don't seem to reflect this. Code segment from `dataset/brat.py` attached below (see also the per-class evaluation sketch after this list).
```python
def __getitem__(self, index):
    # if self.mode == 'Training':
    #     point_label = random.randint(0, 1)
    #     inout = random.randint(0, 1)
    # else:
    #     inout = 1
    #     point_label = 1
    point_label = 1
    label = 4  # the class to be segmented

    """Get the images"""
    name = self.name_list[index]
    img, mask = self.load_all_levels(name)

    mask[mask != label] = 0
    mask[mask == label] = 1
```
2. Does Medical SAM by default use channel-wise segmentation with multiple modalities (e.g., for brain tumors: flair, t1, t1ce, t2), or does it repeat the same modality across multiple channels? I ask because the dataset loader appears to use only the first modality. Code segment from `dataset/brat.py` attached below (a sketch contrasting the two options follows it).
```python
def load_all_levels(self, path):
    import nibabel as nib
    data_dir = os.path.join(self.data_path)
    levels = ['t1', 'flair', 't2', 't1ce']
    raw_image = [nib.load(os.path.join(data_dir, path, path + '_' + level + '.nii.gz')).get_fdata()
                 for level in levels]
    raw_seg = nib.load(os.path.join(data_dir, path, path + '_seg.nii.gz')).get_fdata()

    return raw_image[0], raw_seg  # only the first modality (t1) is returned
```
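
On question 2, here is a minimal sketch contrasting the two interpretations, assuming `raw_image` is the list of four `(H, W, D)` volumes from `load_all_levels` above. Neither variant is confirmed as the pre-print's setup; this only illustrates the difference:

```python
import numpy as np

def stack_modalities(raw_image):
    # Channel-wise: (4, H, W, D) with t1/flair/t2/t1ce as separate channels.
    return np.stack(raw_image, axis=0)

def repeat_first_modality(raw_image):
    # Repeat t1 three times to satisfy SAM's RGB-like 3-channel input.
    return np.repeat(raw_image[0][np.newaxis, ...], 3, axis=0)
```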

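On question 1, here is a minimal sketch (my own code, not the repo's) of how class-wise BraTS results could in principle be produced even with a binary-only model: binarize the ground truth against each label in turn and score one binary prediction per class. `predict_binary` is a hypothetical stand-in for the model's forward pass, and the label set assumes the standard BraTS conventions (1 = necrotic core, 2 = edema, 4 = enhancing tumor):

```python
import numpy as np

BRATS_LABELS = {1: "necrotic core", 2: "edema", 4: "enhancing tumor"}

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    # Dice coefficient between two binary masks.
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def per_class_dice(predict_binary, image, mask):
    scores = {}
    for label, name in BRATS_LABELS.items():
        gt = (mask == label)                 # binarize GT for this class only
        pred = predict_binary(image, label)  # one binary forward pass per class
        scores[name] = dice(pred, gt)
    return scores
```
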
I am tagging the authors of the pre-print and the person who implemented multimask output below. Thanks again!
@WuJunde @LJQCN101

EDIT: when trying to run with the default `dataset/brat.py` configuration, I ran into the following error:

```
Given groups=1, weight of size [768, 3, 16, 16], expected input [1, 1024, 1024, 155] to have 3 channels, but got 1024 channels instead
```
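
That error suggests the `(H, W, D)` volume is being passed straight to SAM's ViT patch embedding (weight `[768, 3, 16, 16]`), which expects `(N, 3, H, W)`, so the 1024-pixel height gets read as 1024 channels. A minimal sketch of one way the shapes could be reconciled (my guess at the intended preprocessing, not the repo's confirmed pipeline): move the 155 axial slices to the batch dimension and repeat the single modality to 3 channels.

```python
import torch

vol = torch.randn(1, 1024, 1024, 155)       # (N, H, W, D), as in the traceback
slices = vol.permute(0, 3, 1, 2)            # -> (N, D, H, W)
slices = slices.reshape(-1, 1, 1024, 1024)  # -> (N*D, 1, H, W): one slice per batch entry
slices = slices.repeat(1, 3, 1, 1)          # -> (N*D, 3, H, W): grayscale to 3 channels
assert slices.shape == (155, 3, 1024, 1024)
```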
