Instance Segmentation Head #144

Open · wants to merge 9 commits into `main`
45 changes: 45 additions & 0 deletions configs/instance_segmentation_heavy_model.yaml
@@ -0,0 +1,45 @@
# Example configuration for training a predefined heavy instance segmentation model

model:
  name: instance_segmentation_heavy
  predefined_model:
    name: InstanceSegmentationModel
    params:
      variant: heavy

loader:
  params:
    dataset_name: coco_test

trainer:
  preprocessing:
    train_image_size: [384, 512]
    keep_aspect_ratio: true
    normalize:
      active: true

  batch_size: 8
  epochs: &epochs 200
  n_workers: 4
  validation_interval: 10
  n_log_images: 8

  callbacks:
    - name: ExportOnTrainEnd
    - name: TestOnTrainEnd

  optimizer:
    name: SGD
    params:
      lr: 0.01
      momentum: 0.937
      weight_decay: 0.0005
      dampening: 0.0
      nesterov: true

  scheduler:
    name: CosineAnnealingLR
    params:
      T_max: *epochs
      eta_min: 0.0001
      last_epoch: -1
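
A minimal sketch of launching a run with this file, assuming the `LuxonisModel` entry point that luxonis-train exposes; nothing below is part of the diff itself:

```python
# Hedged sketch: train the predefined heavy variant from this config.
# Assumes luxonis_train's LuxonisModel API; adjust the path as needed.
from luxonis_train import LuxonisModel

model = LuxonisModel("configs/instance_segmentation_heavy_model.yaml")
model.train()
```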
45 changes: 45 additions & 0 deletions configs/instance_segmentation_light_model.yaml
@@ -0,0 +1,45 @@
# Example configuration for training a predefined light instance segmentation model

model:
  name: instance_segmentation_light
  predefined_model:
    name: InstanceSegmentationModel
    params:
      variant: light

loader:
  params:
    dataset_name: coco_test

trainer:
  preprocessing:
    train_image_size: [384, 512]
    keep_aspect_ratio: true
    normalize:
      active: true

  batch_size: 8
  epochs: &epochs 200
  n_workers: 4
  validation_interval: 10
  n_log_images: 8

  callbacks:
    - name: ExportOnTrainEnd
    - name: TestOnTrainEnd

  optimizer:
    name: SGD
    params:
      lr: 0.01
      momentum: 0.937
      weight_decay: 0.0005
      dampening: 0.0
      nesterov: true

  scheduler:
    name: CosineAnnealingLR
    params:
      T_max: *epochs
      eta_min: 0.0001
      last_epoch: -1
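
The light config is identical to the heavy one apart from `variant: light`. As a standalone illustration of the shared optimizer/scheduler settings, the sketch below reproduces the requested learning-rate curve with plain PyTorch, independent of luxonis-train:

```python
# Reproduce the configured schedule: SGD at lr=0.01 decayed by
# CosineAnnealingLR to eta_min=0.0001 over T_max=200 epochs.
import torch

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(
    params, lr=0.01, momentum=0.937, weight_decay=0.0005, nesterov=True
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=200, eta_min=0.0001
)

lrs = []
for _ in range(200):
    lrs.append(optimizer.param_groups[0]["lr"])
    optimizer.step()
    scheduler.step()

print(lrs[0], lrs[100], lrs[-1])  # 0.01 at the start, near eta_min at the end
```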
6 changes: 0 additions & 6 deletions luxonis_train/assigners/tal_assigner.py
@@ -250,10 +250,4 @@ def _get_final_assignments(
             torch.full_like(assigned_scores, 0),
         )
 
-        assigned_labels = torch.where(
-            mask_pos_sum.bool(),
-            assigned_labels,
-            torch.full_like(assigned_labels, self.n_classes),
-        )
-
         return assigned_labels, assigned_bboxes, assigned_scores
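
The block removed here mapped anchors with no positive match to the background index; this PR moves that mapping into the loss's `forward` (visible further down). A tiny self-contained demo of what the mapping does:

```python
# Demo: anchors whose mask_pos_sum is 0 get the background label n_classes.
import torch

n_classes = 80
assigned_labels = torch.tensor([3, 7, 12])
mask_pos_sum = torch.tensor([1, 0, 2])  # anchor 1 matched nothing

labels = torch.where(
    mask_pos_sum.bool(),
    assigned_labels,
    torch.full_like(assigned_labels, n_classes),
)
print(labels)  # tensor([ 3, 80, 12])
```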
28 changes: 28 additions & 0 deletions luxonis_train/attached_modules/losses/README.md
@@ -12,6 +12,8 @@ List of all the available loss functions.
- [`AdaptiveDetectionLoss`](#adaptivedetectionloss)
- [`EfficientKeypointBBoxLoss`](#efficientkeypointbboxloss)
- [`FOMOLocalizationLoss`](#fomolocalizationloss)
- [`PrecisionDFLDetectionLoss`](#precisiondfldetectionloss)
- [`PrecisionDFLSegmentationLoss`](#precisiondflsegmentationloss)

## `CrossEntropyLoss`

@@ -121,3 +123,29 @@ Adapted from [here](https://arxiv.org/abs/2108.07610).
| Key | Type | Default value | Description |
| --------------- | ------- | ------------- | ----------------------------------------------- |
| `object_weight` | `float` | `1000` | Weight for the objects in the loss calculation. |

## `PrecisionDFLDetectionLoss`

Adapted from [here](https://arxiv.org/pdf/2207.02696.pdf) and [here](https://arxiv.org/pdf/2209.02976.pdf).

**Parameters:**

| Key | Type | Default value | Description |
| ------------------- | ------- | ------------- | ------------------------------------------ |
| `tal_topk` | `int` | `10` | Number of anchors considered in selection. |
| `class_loss_weight` | `float` | `0.5` | Weight for classification loss. |
| `bbox_loss_weight` | `float` | `7.5` | Weight for bbox loss. |
| `dfl_loss_weight`   | `float` | `1.5`         | Weight for DFL loss.                        |

## `PrecisionDFLSegmentationLoss`

Adapted from [here](https://arxiv.org/pdf/2207.02696.pdf) and [here](https://arxiv.org/pdf/2209.02976.pdf).

**Parameters:**

| Key | Type | Default value | Description |
| ------------------- | ------- | ------------- | ------------------------------------------ |
| `tal_topk` | `int` | `10` | Number of anchors considered in selection. |
| `class_loss_weight` | `float` | `0.5` | Weight for classification loss. |
| `bbox_loss_weight` | `float` | `7.5` | Weight for bbox and segmentation loss. |
| `dfl_loss_weight`   | `float` | `1.5`         | Weight for DFL loss.                        |
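
For orientation, a hedged sketch of wiring the new loss into a hand-written model definition, following the node-attached loss layout used elsewhere in luxonis-train configs; the head name `PrecisionSegmentBBoxHead` is an assumption about what this PR adds, not something confirmed by this README:

```yaml
# Assumption-laden sketch: attach the new segmentation loss to its head.
model:
  nodes:
    - name: PrecisionSegmentBBoxHead   # assumed head name from this PR
      losses:
        - name: PrecisionDFLSegmentationLoss
          params:
            class_loss_weight: 0.5
            bbox_loss_weight: 7.5
            dfl_loss_weight: 1.5
```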
4 changes: 4 additions & 0 deletions luxonis_train/attached_modules/losses/__init__.py
@@ -7,6 +7,8 @@
from .ohem_bce_with_logits import OHEMBCEWithLogitsLoss
from .ohem_cross_entropy import OHEMCrossEntropyLoss
from .ohem_loss import OHEMLoss
from .precision_dfl_detection_loss import PrecisionDFLDetectionLoss
from .precision_dfl_segmentation_loss import PrecisionDFLSegmentationLoss
from .reconstruction_segmentation_loss import ReconstructionSegmentationLoss
from .sigmoid_focal_loss import SigmoidFocalLoss
from .smooth_bce_with_logits import SmoothBCEWithLogitsLoss
@@ -26,4 +28,6 @@
"OHEMCrossEntropyLoss",
"OHEMBCEWithLogitsLoss",
"FOMOLocalizationLoss",
"PrecisionDFLDetectionLoss",
"PrecisionDFLSegmentationLoss",
]
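
With these exports in place, both losses should resolve from the losses package root; a quick import check:

```python
# Import check for the two newly exported losses.
from luxonis_train.attached_modules.losses import (
    PrecisionDFLDetectionLoss,
    PrecisionDFLSegmentationLoss,
)
```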
@@ -56,9 +56,9 @@ def __init__(
         @type reduction: Literal["sum", "mean"]
         @param reduction: Reduction type for loss.
         @type class_loss_weight: float
-        @param class_loss_weight: Weight of classification loss.
+        @param class_loss_weight: Weight of classification loss. Defaults to 1.0. For optimal results, multiply by accumulate_grad_batches.
         @type iou_loss_weight: float
-        @param iou_loss_weight: Weight of IoU loss.
+        @param iou_loss_weight: Weight of IoU loss. Defaults to 2.5. For optimal results, multiply by accumulate_grad_batches.
         """
super().__init__(**kwargs)
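
As a worked example of the scaling advice above: with `accumulate_grad_batches: 4` in the trainer config, the defaults would become `class_loss_weight = 1.0 × 4 = 4.0` and `iou_loss_weight = 2.5 × 4 = 10.0`.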

@@ -133,6 +133,11 @@ def forward(
         assigned_scores: Tensor,
         mask_positive: Tensor,
     ):
+        assigned_labels = torch.where(
+            mask_positive > 0,
+            assigned_labels,
+            torch.full_like(assigned_labels, self.n_classes),
+        )
         one_hot_label = F.one_hot(assigned_labels.long(), self.n_classes + 1)[
             ..., :-1
         ]
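
The `n_classes + 1` one-hot followed by the `[..., :-1]` slice is what makes the background mapping above work: background labels land in the extra column, which is then dropped, leaving an all-zero target row. A minimal demo:

```python
# Background labels (== n_classes) one-hot into the extra column,
# which [..., :-1] removes -- their target row becomes all zeros.
import torch
import torch.nn.functional as F

n_classes = 3
labels = torch.tensor([1, 3])  # 3 is the background index
targets = F.one_hot(labels, n_classes + 1)[..., :-1]
print(targets)
# tensor([[0, 1, 0],
#         [0, 0, 0]])
```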
@@ -56,11 +56,11 @@ def __init__(
         @type class_loss_weight: float
         @param class_loss_weight: Weight of classification loss for bounding boxes.
         @type regr_kpts_loss_weight: float
-        @param regr_kpts_loss_weight: Weight of regression loss for keypoints.
+        @param regr_kpts_loss_weight: Weight of regression loss for keypoints. Defaults to 12.0. For optimal results, multiply by accumulate_grad_batches.
         @type vis_kpts_loss_weight: float
-        @param vis_kpts_loss_weight: Weight of visibility loss for keypoints.
+        @param vis_kpts_loss_weight: Weight of visibility loss for keypoints. Defaults to 1.0. For optimal results, multiply by accumulate_grad_batches.
         @type iou_loss_weight: float
-        @param iou_loss_weight: Weight of IoU loss.
+        @param iou_loss_weight: Weight of IoU loss. Defaults to 2.5. For optimal results, multiply by accumulate_grad_batches.
         @type sigmas: list[float] | None
         @param sigmas: Sigmas used in keypoint loss for OKS metric. If None then use COCO ones if possible or default ones. Defaults to C{None}.
         @type area_factor: float | None