
KeyError: 'module_list.85.Conv2d.weight' #650

Closed
alontrais opened this issue Nov 23, 2019 · 61 comments
Labels
bug Something isn't working

Comments

@alontrais

alontrais commented Nov 23, 2019

Hey, I get a new error when I run the train script:

Downloading https://drive.google.com/uc?export=download&id=158g62Vs14E3aj7oPVPuEnNZMKFNgGyNq as weights/ultralytics49.pt... Done (2.8s)
Traceback (most recent call last):
  File "train.py", line 444, in <module>
    train()  # train normally
  File "train.py", line 111, in train
    chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()}
  File "train.py", line 111, in <dictcomp>
    chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()}
KeyError: 'module_list.85.Conv2d.weight'
@alontrais alontrais added the bug Something isn't working label Nov 23, 2019
@daddydrac

I am having a very similar issue:

File "train.py", line 111, in <dictcomp> chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()} KeyError: 'module_list.85.Conv2d.weight'

@daddydrac

daddydrac commented Nov 23, 2019

I think something is wrong with my custom .cfg and/or .data file, because when I do a sanity check with the default files I get:

AssertionError: No labels found. Recommend correcting image and label paths.

Please see "Train On Custom Data" (#621).

@FranciscoReveriano
Contributor

Did you check the coco.data file? Your .cfg file should have nothing to do with this.

The easiest way to fix this is to make sure you have a directory called 'labels' inside your data directory. In this directory you place all the labels for both the test/validation sets.
Also make sure that you have the correct path names for your images. I have found relative paths to work better than full paths.

[screenshot: example data directory with images and labels folders]
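
If it helps anyone hitting this, here is a minimal sanity-check sketch (not part of the repo) that verifies every image listed in your dataset *.txt file has a matching label file under the sibling labels directory. The list_file path is an assumption; adjust it to your own data and run it from the yolov3 folder so relative paths resolve:

import os

list_file = './coco/trainvalno5k.txt'  # your train/valid image list (assumption; adjust to your dataset)
with open(list_file) as f:
    img_paths = [line.strip() for line in f if line.strip()]

missing = []
for img in img_paths:
    # labels live next to images: .../images/foo.jpg -> .../labels/foo.txt
    label = os.path.splitext(img.replace('images', 'labels'))[0] + '.txt'
    if not os.path.isfile(label):
        missing.append(label)

print(len(img_paths), 'images listed,', len(missing), 'missing label files')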

@daddydrac

Nope still broken

@daddydrac

daddydrac commented Nov 23, 2019

Why aren't there instructions on simply running your own images through it, while using COCO/YOLO, and getting some metrics like mAP and false positives and negatives? I can't believe the docs have made it this hard. I'm willing to rewrite them if I can figure this out.

@glenn-jocher
Member

glenn-jocher commented Nov 23, 2019

@alontrais @joehoeller thank you for your interest in our work! Please note that most technical problems are due to:

  • Your changes to the default repository. If your issue is not reproducible in a fresh git clone of this repository we cannot debug it. Before going further, run this code and ensure your issue persists:
sudo rm -rf yolov3  # remove existing repo
git clone https://github.com/ultralytics/yolov3 && cd yolov3 # git clone latest
python3 detect.py  # verify detection
python3 train.py  # verify training (a few batches only)
# CODE TO REPRODUCE YOUR ISSUE HERE
  • Your custom data. If your issue is not reproducible with COCO data we cannot debug it. Visit our Custom Training Tutorial for exact details on how to format your custom data. Examine train_batch0.jpg and test_batch0.jpg for a sanity check of training and testing data.
  • Your environment. If your issue is not reproducible in a GCP Quickstart Guide VM we cannot debug it. Ensure you meet the requirements specified in the README: Unix, macOS, or Windows with Python >= 3.7, PyTorch >= 1.3, etc. You can also use our Google Colab Notebook to test your code in a working environment.

If none of these apply to you, we suggest you close this issue and raise a new one using the Bug Report template, providing screenshots and minimum viable code to reproduce your issue. Thank you!

@yle8458

yle8458 commented Nov 24, 2019

@alontrais I had a similar error before, and I figured it out. The cause of this error on my end was that I used yolov3.cfg as my config but the default weight file 'ultralytics49.pt', and the two do not match.

In case you want to use the default weights, you can use yolov3-spp.cfg as a baseline and modify the corresponding filters/classes values as instructed.
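
For context, the cfg edits for a custom class count are small. A hedged sketch of the pattern (values illustrative, here 2 classes; '...' stands for the unchanged lines, and the same edit is repeated before each of the three [yolo] blocks): set classes in each [yolo] block and filters = (classes + 5) * 3 in the [convolutional] block immediately above it.

[convolutional]
...
filters=21        # (classes + 5) * 3, here (2 + 5) * 3 = 21
activation=linear

[yolo]
...
classes=2         # your number of classes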

@daddydrac

daddydrac commented Nov 24, 2019

@glenn-jocher I followed your instructions:

sudo rm -rf yolov3  # remove existing repo
git clone https://github.com/ultralytics/yolov3 && cd yolov3 # git clone latest
python3 detect.py  # verify detection
python3 train.py  # verify training (a few batches only)

I get this when I run train.py:

line 374, in __init__
    assert nf > 0, 'No labels found. Recommend correcting image and label paths.'
AssertionError: No labels found. Recommend correcting image and label paths.
  • Note that python3 detect.py works just fine.
  • I did print out the path arg from line 374 and this is what I got: ./coco/trainvalno5k.txt.

@glenn-jocher
Member

@joehoeller you need the COCO dataset to run the training examples:

$ bash yolov3/data/get_coco_dataset_gdrive.sh

@daddydrac

daddydrac commented Nov 24, 2019 via email

@glenn-jocher
Member

@joehoeller nothing needs to be done to the labels. You just git clone the repo, copy the coco dataset and train. You can even follow the notebook, just click play in each cell.

https://colab.research.google.com/drive/1G8T-VFxQkjDe4idzN8F-hbIBqkkkQnxw

@daddydrac

daddydrac commented Nov 24, 2019 via email

@glenn-jocher
Member

@joehoeller your error is not reproducible, there's no bug. Follow the steps, everything works properly.

@daddydrac

daddydrac commented Nov 24, 2019 via email

@glenn-jocher
Member

glenn-jocher commented Nov 24, 2019

@joehoeller To get started simply run the following in a terminal, or open the notebook and click play on the first cells (same code):
https://colab.research.google.com/drive/1G8T-VFxQkjDe4idzN8F-hbIBqkkkQnxw

rm -rf yolov3 coco coco.zip  # WARNING: removes existing folders
git clone https://github.com/ultralytics/yolov3  # clone
bash yolov3/data/get_coco_dataset_gdrive.sh  # copy COCO2014 dataset (19GB)
cd yolov3
python3 train.py

@daddydrac

daddydrac commented Nov 24, 2019 via email

@FranciscoReveriano
Contributor

How many times do I have to tell you I did that. I’m moving on to build my own solution — which I can do, I was just hoping to save time.


Don't be rude! Instead of complaining, you need to embrace the spirit of collaboration. This is the best public PyTorch implementation. Contribute to making it better.

FYI, if you are not in a notebook and you want to run this, I would advise that you follow the setup created by:

bash get_coco_dataset.sh

That will give you the correct directory structure.

@daddydrac

daddydrac commented Nov 25, 2019 via email

@glenn-jocher
Member

@joehoeller if the default code I sent you works in your environment, then use that as a starting point for your own development efforts. You simply mimic the COCO data format with your own data. All of the info, including step-by-step directions and code to reproduce, is in the custom training example in the wiki.
https://github.com/ultralytics/yolov3/wiki

@daddydrac

It does not, for the last time. How many times do I have to tell you? Scroll up and read the label error, because that's what I get after I ran the command-line commands exactly as given in your instructions.

@daddydrac

Actually, don't bother, because I'm already hooking up analytics and metrics to my own solution I've built in PyTorch with TensorBoard.

@Samjith888

I got the same error,
Traceback (most recent call last):
  File "train.py", line 444, in <module>
    train()  # train normally
  File "train.py", line 111, in train
    chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()}
  File "train.py", line 111, in <dictcomp>
    chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()}
KeyError: 'module_list.85.Conv2d.weight'

I have tried the suggested steps, but nothing worked out. https://github.com/ultralytics/yolov3/issues/650#issuecomment-557939734

@inspire-lts

So sad! The same error:
File "train.py", line 444, in <module>
train()  # train normally
File "train.py", line 111, in train
chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()}
File "train.py", line 111, in <dictcomp>
chkpt['model'] = {k: v for k, v in chkpt['model'].items() if model.state_dict()[k].numel() == v.numel()}

@glenn-jocher
Member

glenn-jocher commented Nov 25, 2019

@Samjith888 @inspire-lts @joehoeller see #657

This error is caused by a user supplying incompatible --weights and --cfg arguments. To solve this you must specify no weights (i.e. random initialization of the model) using --weights '' and any --cfg, or use a --cfg that is compatible with your --weights. If none are specified, the defaults are --weights ultralytics49.pt and --cfg cfg/yolov3-spp.cfg.

Examples of compatible combinations are:

python3 train.py --weights yolov3.pt --cfg cfg/yolov3.cfg
python3 train.py --weights yolov3.weights --cfg cfg/yolov3.cfg
python3 train.py --weights yolov3-spp.pt --cfg cfg/yolov3-spp.cfg
python3 train.py --weights ultralytics49.pt --cfg cfg/yolov3-spp.cfg
python3 train.py --weights '' --cfg cfg/*.cfg  # any cfg will work here

ultralytics49.pt is currently the highest performing YOLOv3 model (trained from scratch using this repo) available at the default img-size of 416 (see #310), which is the reason it is used as the default backbone.
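
For anyone curious where the KeyError itself comes from: the checkpoint's state_dict keys are built from the cfg it was trained with, so a mismatched --cfg builds a model that lacks some of those keys, and indexing model.state_dict()[k] blows up. A minimal defensive sketch of that filtering line (illustrative only, not necessarily the exact patch in #657):

model_sd = model.state_dict()
chkpt['model'] = {k: v for k, v in chkpt['model'].items()
                  if k in model_sd and model_sd[k].numel() == v.numel()}  # skip keys the new model doesn't have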

@daddydrac

So for the last time, what does this mean and how do I fix it:

assert nf > 0, 'No labels found. Recommend correcting image and label paths.'

@daddydrac

daddydrac commented Nov 27, 2019 via email

@daddydrac

How do we correct image/label paths? I have them, but it is not clear where to set them up.

AssertionError: No labels found. Recommend correcting image and label paths.

@daddydrac

Send me a message and we can collaborate on an article, or add me on LinkedIn (it's in my profile).

Message is in your LinkedIn inbox. I built an automation tool, which I now call "Dark Chocolate"; it converts COCO annotations to Darknet annotation format.
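
For anyone following along, the core of such a conversion is just a coordinate transform. A minimal, hypothetical sketch (not the Dark Chocolate tool itself), assuming COCO's [x_min, y_min, width, height] pixel boxes and YOLO's normalized center format:

def coco_to_yolo(box, img_w, img_h):
    # COCO: [x_min, y_min, w, h] in pixels; YOLO: [x_center, y_center, w, h] normalized to 0-1
    x, y, w, h = box
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

print(coco_to_yolo([200, 300, 100, 50], 640, 480))  # [0.390625, 0.6770..., 0.15625, 0.1041...]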

@glenn-jocher
Member

@joehoeller coco.data points to the train.txt and test.txt lists of images on lines 2 and 3.
[screenshot: coco.data contents]

These files have lists of image paths as they would be from the yolov3 directory:
[screenshot: example image-path list]
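
Since the screenshots don't survive in text form, a representative coco.data layout (field values illustrative; check the copy shipped in your data/ folder), with lines 2 and 3 being the train and valid image lists, and each list file holding one image path per line:

# coco.data (representative)
classes=80
train=../coco/trainvalno5k.txt
valid=../coco/5k.txt
names=data/coco.names
backup=backup/

# trainvalno5k.txt (first lines, one image path per line)
../coco/images/train2014/COCO_train2014_000000000009.jpg
../coco/images/train2014/COCO_train2014_000000000025.jpg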

If in doubt, you can run python3 train.py in debug mode and put a breakpoint on this line to see what the img_files values are. If there are no images there, or if there are no labels in the corresponding labels folder (found by replacing /images/ with /labels/ in the image paths), you will get this error message.

yolov3/utils/datasets.py

Lines 261 to 262 in 5bcc2b3

with open(path, 'r') as f:
    self.img_files = [x.replace('/', os.sep) for x in f.read().splitlines()]  # os-agnostic

@daddydrac

daddydrac commented Dec 1, 2019 via email

@FranciscoReveriano
Contributor

I will be uploading a reader for this if you have a custom dataset. All I can say is that it's better if you use the full path of the images, so the computer knows where to grab the images/labels.

@glenn-jocher
Member

glenn-jocher commented Dec 1, 2019

@joehoeller the same structure is used for custom data as for COCO. The labels need to be in a separate folder next to the images folder. The labels folder is found simply by replacing /images/ with /labels/ in the image folder path, as in this custom "dataset1" (ds1) example. Each label name is identical to its image name, except the extension for labels is *.txt. This example trains on the first 8 images of the dataset and tests on the last 2.

The paths all need to be relative to your yolov3 folder (or absolute paths, though these break more easily if you move the code to a different environment).

[screenshot: example ds1 dataset structure]

Then run:

cd yolov3
python3 train.py --data ../data/ds1/out.data
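
Since the screenshot is missing here, a rough sketch of that layout (file names are illustrative; out.data is the file passed to --data above, and the data/ folder sits next to yolov3/):

data/ds1/
    images/      img1.jpg ... img10.jpg
    labels/      img1.txt ... img10.txt  (one row per object: class x_center y_center width height, normalized 0-1)
    train.txt    paths to the first 8 images, one per line
    test.txt     paths to the last 2 images
    ds1.names    one class name per line
    out.data     classes=..., train=..., valid=..., names=... (same format as coco.data)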

@glenn-jocher
Member

BTW @FranciscoReveriano @joehoeller this is legacy structure from darknet, so the same exact data can also be used to train darknet.

This repo now outperforms darknet by a wide margin I believe, but nevertheless darknet has a strong following (i.e. pjreddie/darknet has 15k stars, alexeyab/darknet has 6k stars), so I'm not sure if we should keep following the darknet convention, or perhaps start from a clean-slate mentality about what would be easiest for the most people to train their own custom data with a minimum of hassle.

In principle this repo is here to create the most accurate, fastest object detector in the world. In practice though, people seem to care more about quick results and ease of use, and don't care as much about being the best or the fastest.

@FranciscoReveriano
Contributor

I think we need to continue with Darknet. I guess people still follow it because it provides a nice benchmark with a lot of literature. Although I don't think machine learning or object detection should be 'people'-proof; at some point people should be expected to climb the learning curve. It seems like a lot of people just want quick fixes.

Although it might not be a bad idea to make a version of Facebook's Detectron 2 that could be sold. That would be the best way to start from a clean slate in my opinion.

@daddydrac

daddydrac commented Dec 2, 2019 via email

@FranciscoReveriano
Contributor

For me, the problem is when people ask you to interpret, figure out, or tell them how to make their results much better. This is GitHub, not ResearchGate. I was looking for a Udacity course to take this break. I might do that CV course.
Most of my experience is with TensorFlow and Keras. Trying to move to Torch like the rest of us.

@daddydrac

daddydrac commented Dec 2, 2019 via email

@daddydrac

@glenn-jocher you show paths for images but not labels. I have been doing all of this already, and just like others in this thread, it continues to fail.

@glenn-jocher
Member

@joehoeller the label paths are inferred automatically by replacing /images/ with /labels/ in the image paths. You only need to specify image paths.

@glenn-jocher
Member

The label-file definition happens here.

yolov3/utils/datasets.py

Lines 278 to 281 in 3d91731

# Define labels
self.label_files = [x.replace('images', 'labels').replace(os.path.splitext(x)[-1], '.txt')
                    for x in self.img_files]
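
In other words, an entry like data/ds1/images/img7.jpg is looked up as data/ds1/labels/img7.txt. A tiny sketch mirroring the line above, if you want to confirm the mapping for a path from your own list (the example path is an assumption):

import os

img = 'data/ds1/images/img7.jpg'  # example entry from a train list
label = img.replace('images', 'labels').replace(os.path.splitext(img)[-1], '.txt')
print(label)  # data/ds1/labels/img7.txt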

@daddydrac

daddydrac commented Dec 2, 2019

@Samjith888 @inspire-lts @joehoeller see #657

This error is caused by a user supplying incompatible --weights and --cfg arguments. To solve this you must specify no weights (i.e. random initialization of the model) using --weights '' and any --cfg, or use a --cfg that is compatible with your --weights. If none are specified, the defaults are --weights ultralytics49.pt and --cfg cfg/yolov3-spp.cfg.

Examples of compatible combinations are:

python3 train.py --weights yolov3.pt --cfg cfg/yolov3.cfg
python3 train.py --weights yolov3.weights --cfg cfg/yolov3.cfg
python3 train.py --weights yolov3-spp.pt --cfg cfg/yolov3-spp.cfg
python3 train.py --weights ultralytics49.pt --cfg cfg/yolov3-spp.cfg
python3 train.py --weights '' --cfg cfg/*.cfg  # any cfg will work here

ultralytics49.pt is currently the highest performing YOLOv3 model (trained from scratch using this repo) available at the default img-size of 416 (see #310), which is the reason it is used as the default backbone.

This tutorial, https://docs.ultralytics.com/yolov5/tutorials/train_custom_data , says:

  1. Train. Run python3 train.py --data data/coco_10img.data to train using your custom data. If you created a custom *.cfg file as well, specify it using --cfg cfg/my_new_file.cfg.

I HAVE TRIED ALL OF THE SUGGESTIONS ABOVE AND STILL GET:
assert nf > 0, 'No labels found. Recommend correcting image and label paths.'

@daddydrac

This script will generate file paths to images:

import os

given_dir = 'PATH_TO_CUSTOM_IMAGES'    # directory containing your images
with open('FILE_NAME.txt', 'w') as f:  # output list of image paths
    for name in os.listdir(given_dir):
        f.write(os.path.join(given_dir, name) + '\n')

@daddydrac

I got it going, now have CUDA memory error, but that's a "me" problem. Not a "you" problem. I will write a very clear and concise tutorial for medium when I am done.

@glenn-jocher
Member

Yes, I think the default training settings should probably use a smaller batch size. The current settings should work fine for a 1080 Ti or 2080 Ti and up (11 GB of CUDA memory), but smaller graphics cards may run out.

The current default is --batch-size 32 --accumulate 2 to get to an effective batch size of 64. I think I should reduce this to --batch-size 16 --accumulate 4 to get the largest number of people running smoothly without CUDA out-of-memory issues. The performance hit (from batch-norming fewer images) is not very large.
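
Until that default changes, the same effect can be had by passing the flags explicitly:

python3 train.py --batch-size 16 --accumulate 4  # lower per-step memory, same effective batch size of 64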

@glenn-jocher
Member

Ok, this should do it: 93a70d9

If you git pull you can get all the latest updates.

@daddydrac

daddydrac commented Dec 2, 2019 via email

@daddydrac

Thanks, did the git pull, and it is working just fine.

@daddydrac

How long does it take to train normally?

Note, I ran:
python3 train.py --data data/custom.data --cfg cfg/yolov3-spp.cfg --weights weights/yolov3-spp.weights

@glenn-jocher
Member

@joehoeller training speeds are here.
https://github.com/ultralytics/yolov3#speed

Roughly a week to train COCO. Smaller datasets train faster, of course.

@glenn-jocher
Member

@joehoeller NVIDIA Apex speeds things up a lot. This repo uses it automatically if it is installed.
https://github.com/NVIDIA/apex

@daddydrac

daddydrac commented Dec 2, 2019 via email

@daddydrac

@joehoeller NVIDIA Apex speeds things up a lot. This repo uses it automatically if it is installed.
https://github.com/NVIDIA/apex

Check out my PyTorch/Anaconda/TensorRT container on my GitHub; TensorRT does the same thing :)

@glenn-jocher
Member

@joehoeller to get test metrics run python3 test.py with the same dataset and model you trained on.

$ python3 test.py --weights ultralytics68.pt --img-size 512 --device 0

Namespace(batch_size=16, cfg='cfg/yolov3-spp.cfg', conf_thres=0.001, data='data/coco.data', device='0', img_size=512, iou_thres=0.5, nms_thres=0.5, save_json=False, weights='ultralytics68.pt')
Using CUDA device0 _CudaDeviceProperties(name='GeForce RTX 2080 Ti', total_memory=10989MB)

Downloading https://drive.google.com/uc?export=download&id=1Jm8kqnMdMGUUxGo8zMFZMJ0eaPwLkxSG as ultralytics68.pt... Done (7.6s)
               Class    Images   Targets         P         R   mAP@0.5        F1: 100%|███████████████████████████████████████████████████████████████| 313/313 [08:30<00:00,  1.35it/s]
                 all     5e+03  3.58e+04    0.0823     0.798     0.595     0.145
              person     5e+03  1.09e+04    0.0999     0.903     0.771      0.18
             bicycle     5e+03       316    0.0491     0.782      0.56    0.0925
                 car     5e+03  1.67e+03    0.0552     0.845     0.646     0.104
          motorcycle     5e+03       391      0.11     0.847     0.704     0.194
            airplane     5e+03       131     0.099     0.947     0.878     0.179
                 bus     5e+03       261     0.142     0.874     0.825     0.244
               train     5e+03       212     0.152     0.863     0.806     0.258
               truck     5e+03       352    0.0849     0.682     0.514     0.151
                boat     5e+03       475    0.0498     0.787     0.504    0.0937
       traffic light     5e+03       516    0.0304     0.752     0.516    0.0584
        fire hydrant     5e+03        83     0.144     0.916     0.882     0.248
           stop sign     5e+03        84    0.0833     0.917     0.809     0.153
       parking meter     5e+03        59    0.0607     0.695     0.611     0.112
               bench     5e+03       473    0.0294     0.685     0.363    0.0564
                bird     5e+03       469    0.0521     0.716     0.524    0.0972
                 cat     5e+03       195     0.252     0.908      0.78     0.395
                 dog     5e+03       223     0.192     0.883     0.829     0.315
               horse     5e+03       305     0.121     0.911     0.843     0.214
               sheep     5e+03       321     0.114     0.854     0.724     0.201
                 cow     5e+03       384     0.105     0.849     0.695     0.187
            elephant     5e+03       284     0.184     0.944     0.912     0.308
                bear     5e+03        53     0.358     0.925     0.875     0.516
               zebra     5e+03       277     0.176     0.935     0.858     0.297
             giraffe     5e+03       170     0.171     0.959     0.892      0.29
            backpack     5e+03       384    0.0426     0.708     0.392    0.0803
            umbrella     5e+03       392    0.0672     0.878      0.65     0.125
             handbag     5e+03       483    0.0238     0.629     0.242    0.0458
                 tie     5e+03       297    0.0419     0.805     0.599    0.0797
            suitcase     5e+03       310    0.0823     0.855     0.628      0.15
             frisbee     5e+03       109     0.126     0.872     0.796     0.221
                skis     5e+03       282    0.0473     0.748     0.454     0.089
           snowboard     5e+03        92    0.0579     0.804     0.559     0.108
         sports ball     5e+03       236     0.057     0.733     0.622     0.106
                kite     5e+03       399     0.087     0.852     0.645     0.158
        baseball bat     5e+03       125    0.0496     0.776     0.603    0.0932
      baseball glove     5e+03       139    0.0511     0.734     0.563    0.0956
          skateboard     5e+03       218    0.0655     0.844      0.73     0.122
           surfboard     5e+03       266    0.0709     0.827     0.651     0.131
       tennis racket     5e+03       183    0.0694     0.858     0.759     0.128
              bottle     5e+03       966    0.0484     0.812     0.513    0.0914
          wine glass     5e+03       366    0.0735     0.738     0.543     0.134
                 cup     5e+03       897    0.0637     0.788     0.538     0.118
                fork     5e+03       234    0.0411     0.662     0.487    0.0774
               knife     5e+03       291    0.0334     0.557     0.292    0.0631
               spoon     5e+03       253    0.0281     0.621     0.307    0.0537
                bowl     5e+03       620    0.0624     0.795     0.514     0.116
              banana     5e+03       371     0.052      0.83      0.41    0.0979
               apple     5e+03       158    0.0293     0.741     0.262    0.0564
            sandwich     5e+03       160    0.0913     0.725     0.522     0.162
              orange     5e+03       189    0.0382     0.688      0.32    0.0723
            broccoli     5e+03       332    0.0513      0.88     0.445     0.097
              carrot     5e+03       346    0.0398     0.766     0.362    0.0757
             hot dog     5e+03       164    0.0958     0.646     0.494     0.167
               pizza     5e+03       224    0.0886     0.875     0.699     0.161
               donut     5e+03       237    0.0925     0.827      0.64     0.166
                cake     5e+03       241    0.0658      0.71     0.539      0.12
               chair     5e+03  1.62e+03    0.0432     0.793     0.489    0.0819
               couch     5e+03       236     0.118     0.801     0.584     0.205
        potted plant     5e+03       431    0.0373     0.852     0.505    0.0714
                 bed     5e+03       195     0.149     0.846     0.693     0.253
        dining table     5e+03       634    0.0546      0.82      0.49     0.102
              toilet     5e+03       179     0.161      0.95      0.81     0.275
                  tv     5e+03       257    0.0922     0.903      0.79     0.167
              laptop     5e+03       237     0.127     0.869     0.744     0.222
               mouse     5e+03        95    0.0648     0.863     0.732      0.12
              remote     5e+03       241    0.0436     0.788     0.535    0.0827
            keyboard     5e+03       117    0.0668     0.923     0.755     0.125
          cell phone     5e+03       291    0.0364     0.704     0.436    0.0692
           microwave     5e+03        88     0.154     0.841     0.743     0.261
                oven     5e+03       142    0.0618     0.803     0.576     0.115
             toaster     5e+03        11    0.0565     0.636     0.191     0.104
                sink     5e+03       211    0.0439     0.853     0.544    0.0835
        refrigerator     5e+03       107    0.0791     0.907     0.742     0.145
                book     5e+03  1.08e+03    0.0399     0.667     0.233    0.0753
               clock     5e+03       292    0.0542     0.836     0.733     0.102
                vase     5e+03       353    0.0675     0.799     0.591     0.125
            scissors     5e+03        56    0.0397      0.75     0.461    0.0755
          teddy bear     5e+03       245    0.0995     0.882     0.669     0.179
          hair drier     5e+03        11   0.00508    0.0909    0.0475   0.00962
          toothbrush     5e+03        77    0.0371      0.74     0.418    0.0706

@daddydrac

training error:
assert c.max() <= model.nc, 'Target classes exceed model classes'
AssertionError: Target classes exceed model classes
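
That assertion usually means a label file contains a class index greater than or equal to the classes value in your .data/.cfg. A quick, illustrative check of the largest class index used in your label files (the labels path is an assumption; adjust it to your dataset):

import glob

max_cls = -1
for path in glob.glob('data/ds1/labels/*.txt'):  # adjust to your labels directory
    with open(path) as f:
        for line in f:
            if line.strip():
                max_cls = max(max_cls, int(float(line.split()[0])))

print('max class index found:', max_cls)  # must be < the classes value in your .data/.cfg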

@daddydrac

UPDATE: I fixed the PR and updated the math, the COCO JSON -> Darknet conversion tool (Dark Chocolate) works now: daddydrac/Dark-Chocolate#2

@glenn-jocher
Member

Glad to hear that, @daddydrac! The COCO JSON to Darknet conversion tool is a great contribution. Thank you for sharing it with the community! If you have any further questions or need assistance with anything else, feel free to ask.
