AssertionError: ignore label is not implemented for one hot encoded target variables (DC_and_CE_loss) #2621
kovecsroli asked this question in Q&A
Dear All,
I am trying to train nnU-Net with a dataset whose masks have 3 pixel values: "background": 0, "replication": 1, "ignore": 2.
After verifying dataset integrity with `!nnUNetv2_plan_and_preprocess -d 001 --verify_dataset_integrity`, I run training with `!nnUNetv2_train 001 2d 0 -tr nnUNetTrainer_100epochs --npz` and get the error in the title.
Could anyone help me with this please?
Here are the details:
My `dataset.json` looks like this:
```json
{
    "channel_names": {
        "0": "replication",
        "1": "chromatin"
    },
    "labels": {
        "background": 0,
        "replication": 1,
        "ignore": 2
    },
    "numTraining": 601,
    "file_ending": ".png",
    "training": []
}
```
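For extra context on the masks: since `NaturalImage2DIO` reads the `.png` labels, I wanted to confirm they are single-channel grayscale images containing exactly the values 0/1/2 (an RGB or palette PNG would load with extra channels). Here is the quick check I use (a minimal sketch assuming Pillow and numpy are installed; the file name is a hypothetical placeholder for one of my labelsTr files):

```python
# Sanity-check one label PNG: mode, shape and the set of pixel values.
from PIL import Image
import numpy as np

mask = Image.open("labelsTr/case_001.png")  # hypothetical placeholder path
print(mask.mode)        # expecting 'L' (single channel), not 'RGB' or 'P'

arr = np.asarray(mask)
print(arr.shape)        # expecting (H, W), with no channel axis
print(np.unique(arr))   # expecting exactly [0 1 2]
```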
The result of dataset verification:
```
Fingerprint extraction...
Dataset001_ChromfDatas
Using <class 'nnunetv2.imageio.natural_image_reader_writer.NaturalImage2DIO'> as reader/writer
####################
verify_dataset_integrity Done.
If you didn't see any error messages then your dataset is most likely OK!
####################
Experiment planning...
############################
INFO: You are using the old nnU-Net default planner. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################
2D U-Net configuration:
{'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 16, 'patch_size': (np.int64(384), np.int64(512)), 'median_image_size_in_voxels': array([361., 512.]), 'spacing': array([1., 1.]), 'normalization_schemes': ['ZScoreNormalization', 'ZScoreNormalization'], 'use_mask_for_norm': [True, True], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.PlainConvUNet', 'arch_kwargs': {'n_stages': 7, 'features_per_stage': (32, 64, 128, 256, 512, 512, 512), 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': ((3, 3), (3, 3), (3, 3), (3, 3), (3, 3), (3, 3), (3, 3)), 'strides': ((1, 1), (2, 2), (2, 2), (2, 2), (2, 2), (2, 2), (2, 2)), 'n_conv_per_stage': (2, 2, 2, 2, 2, 2, 2), 'n_conv_per_stage_decoder': (2, 2, 2, 2, 2, 2), 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ('conv_op', 'norm_op', 'dropout_op', 'nonlin')}, 'batch_dice': True}
Using <class 'nnunetv2.imageio.natural_image_reader_writer.NaturalImage2DIO'> as reader/writer
Plans were saved to /workspace/nnUNet/nnUNet_preprocessed/Dataset001_ChromfDatas/nnUNetPlans.json
Preprocessing...
Preprocessing dataset Dataset001_ChromfDatas
Configuration: 2d...
100%|████████████████████████████████████████| 601/601 [00:03<00:00, 160.70it/s]
Configuration: 3d_fullres...
INFO: Configuration 3d_fullres not found in plans file nnUNetPlans.json of dataset Dataset001_ChromfDatas. Skipping.
Configuration: 3d_lowres...
INFO: Configuration 3d_lowres not found in plans file nnUNetPlans.json of dataset Dataset001_ChromfDatas. Skipping.
```
The result of training:
```
############################
INFO: You are using the old nnU-Net default plans. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################
Using device: cuda:0
/opt/conda/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py:164: FutureWarning: torch.cuda.amp.GradScaler(args...) is deprecated. Please use torch.amp.GradScaler('cuda', args...) instead.
  self.grad_scaler = GradScaler() if self.device.type == 'cuda' else None
#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################
2024-11-26 22:50:13.299598: do_dummy_2d_data_aug: False
2024-11-26 22:50:13.300583: Using splits from existing split file: /workspace/nnUNet/nnUNet_preprocessed/Dataset001_ChromfDatas/splits_final.json
2024-11-26 22:50:13.300682: The split file contains 5 splits.
2024-11-26 22:50:13.300708: Desired fold for training: 0
2024-11-26 22:50:13.300730: This split has 117 training and 30 validation cases.
using pin_memory on device 0
using pin_memory on device 0
2024-11-26 22:50:16.724347: Using torch.compile...
/opt/conda/lib/python3.11/site-packages/torch/optim/lr_scheduler.py:62: UserWarning: The verbose parameter is deprecated. Please use get_last_lr() to access the learning rate.
warnings.warn(
This is the configuration used by this training:
Configuration name: 2d
{'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 16, 'patch_size': [384, 512], 'median_image_size_in_voxels': [361.0, 512.0], 'spacing': [1.0, 1.0], 'normalization_schemes': ['ZScoreNormalization', 'ZScoreNormalization'], 'use_mask_for_norm': [True, True], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.PlainConvUNet', 'arch_kwargs': {'n_stages': 7, 'features_per_stage': [32, 64, 128, 256, 512, 512, 512], 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]], 'strides': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]], 'n_conv_per_stage': [2, 2, 2, 2, 2, 2, 2], 'n_conv_per_stage_decoder': [2, 2, 2, 2, 2, 2], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}, 'deep_supervision': True}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': True}
These are the global plan.json settings:
{'dataset_name': 'Dataset001_ChromfDatas', 'plans_name': 'nnUNetPlans', 'original_median_spacing_after_transp': [999.0, 1.0, 1.0], 'original_median_shape_after_transp': [1, 361, 512], 'image_reader_writer': 'NaturalImage2DIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'experiment_planner_used': 'ExperimentPlanner', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 255.0, 'mean': 16.551502566225484, 'median': 2.0, 'min': 0.0, 'percentile_00_5': 0.0, 'percentile_99_5': 255.0, 'std': 38.01857776584112}, '1': {'max': 255.0, 'mean': 28.11965959884703, 'median': 3.0, 'min': 0.0, 'percentile_00_5': 0.0, 'percentile_99_5': 255.0, 'std': 50.683726753609186}}}
2024-11-26 22:50:17.177851: unpacking dataset...
2024-11-26 22:50:20.170195: unpacking done...
2024-11-26 22:50:20.170908: Unable to plot network architecture: nnUNet_compile is enabled!
2024-11-26 22:50:20.174668:
2024-11-26 22:50:20.174775: Epoch 0
2024-11-26 22:50:20.174866: Current learning rate: 0.01
Traceback (most recent call last):
File "/opt/conda/bin/nnUNetv2_train", line 8, in
sys.exit(run_training_entry())
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/run/run_training.py", line 275, in run_training_entry
run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/run/run_training.py", line 211, in run_training
nnunet_trainer.run_training()
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1370, in run_training
train_outputs.append(self.train_step(next(self.dataloader_train)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 996, in train_step
l = self.loss(output, target)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/training/loss/deep_supervision.py", line 30, in forward
return sum([weights[i] * self.loss(*inputs) for i, inputs in enumerate(zip(*args)) if weights[i] != 0.0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/training/loss/deep_supervision.py", line 30, in
return sum([weights[i] * self.loss(*inputs) for i, inputs in enumerate(zip(*args)) if weights[i] != 0.0])
^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/nnunetv2/training/loss/compound_losses.py", line 39, in forward
assert target.shape[1] == 1, 'ignore label is not implemented for one hot encoded target variables '
^^^^^^^^^^^^^^^^^^^^
AssertionError: ignore label is not implemented for one hot encoded target variables (DC_and_CE_loss)
Exception in thread Thread-2 (results_loop):
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/opt/conda/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
raise e
File "/opt/conda/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
Exception in thread Thread-1 (results_loop):
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/opt/conda/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
raise e
File "/opt/conda/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
```
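For reference, the assertion that fires is in `nnunetv2/training/loss/compound_losses.py`: when an ignore label is configured, `DC_and_CE_loss` expects the target to be an integer label map with a channel dimension of 1, not a one-hot / multi-channel tensor. Below is a minimal sketch of that shape check (my own illustration of the failing condition, not the actual nnU-Net code):

```python
# With an ignore label configured, the loss wants targets shaped (b, 1, x, y)
# in 2D. A multi-channel target (e.g. one-hot, or a mask that loaded with an
# extra channel axis) trips the assertion seen in the traceback.
import torch

b, x, y = 2, 384, 512
ok_target = torch.randint(0, 3, (b, 1, x, y))   # labels 0/1/2 in a single channel
bad_target = torch.randint(0, 2, (b, 3, x, y))  # multi-channel / one-hot-style target

for target in (ok_target, bad_target):
    try:
        assert target.shape[1] == 1, \
            'ignore label is not implemented for one hot encoded target variables'
        print(tuple(target.shape), "-> passes the check")
    except AssertionError as e:
        print(tuple(target.shape), "->", e)
```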