index #8012
Replies: 79 comments 97 replies
-
Hey everyone, Glenn here! 🚀 Dive into our comprehensive guide on YOLOv8, the pinnacle of real-time object detection and image segmentation technology. Whether you're just starting out or you're deep into the machine learning world, this page is your go-to resource for installing, predicting, and training with YOLOv8. Got questions or insights? This is the perfect spot to share your thoughts and learn from others in the community. Let's make the most of YOLOv8 together! 💡👥
-
Hello, I am super new to computer vision, and I want to know if there is a way to isolate the detected text regions (like in Roboflow, where all the detected text areas are split out) so I can use them in my text extraction model. Thank you.
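One common way to do this (a sketch, not official Ultralytics code; the `crop_boxes` helper is hypothetical, and the weight and image paths in the comments are placeholders) is to slice each predicted box out of the original image array and hand the crops to the text-extraction model:

```python
import numpy as np

def crop_boxes(image: np.ndarray, xyxy: list) -> list:
    """Cut each (x1, y1, x2, y2) box out of an HxWxC image array."""
    return [image[y1:y2, x1:x2] for x1, y1, x2, y2 in xyxy]

# With Ultralytics, the boxes come from a prediction result, e.g.:
#   from ultralytics import YOLO
#   result = YOLO("text_detector.pt").predict("page.jpg")[0]  # hypothetical weights
#   crops = crop_boxes(result.orig_img, result.boxes.xyxy.int().tolist())
# Each element of `crops` can then be passed to the OCR / text-extraction model.
```

Each crop is an independent image array, so it can be saved or fed to another model without touching the rest of the page.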
-
If I pass the model an image, how can I extract the class ID and class name from the result?
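A minimal sketch of the mapping (the `ids_and_names` helper is hypothetical; the Ultralytics usage in the comments assumes the standard `result.boxes.cls` tensor and `model.names` id-to-name dictionary):

```python
def ids_and_names(class_ids: list, names: dict) -> list:
    """Pair each numeric class id with its human-readable name."""
    return [(cid, names[cid]) for cid in class_ids]

# With Ultralytics:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   result = model("image.jpg")[0]
#   detections = ids_and_names([int(c) for c in result.boxes.cls], model.names)
#   # -> e.g. [(0, "person"), (2, "car"), ...]
```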
-
Hello,
-
I tried using result.show() but it says it has no attribute show, and using your
code, it says a list has no attribute pred.
…On Sun, Mar 10, 2024, 3:34 AM Glenn Jocher ***@***.***> wrote:
@Zulkazeem <https://github.com/Zulkazeem> hey there! 👋 It looks like
your code is almost there, but if you're not getting any detections, there
might be a few things to check:
1. *Model Confidence:* Ensure your model's confidence threshold isn't set
too high, which might prevent detections. Try lowering the conf
argument in your predict call.
2. *Image Path:* Double-check the image path to ensure it's correct and
the image is accessible.
3. *Model Compatibility:* Make sure the model you're using is appropriate
for the task. If it's trained on a very different dataset or for a
different task, it might not perform well on your images.
4. *Looping Through Results:* The way you're iterating through pred and
then r seems a bit off. After calling predict, you should directly
access the detections, like so:
results = pred_model.predict(source=img_path)
for result in results:
    for *xyxy, conf, cls in result.pred[0]:
        # Process each detection here
5. *Visualization:* Before trying to save or further process
detections, simply try visualizing them with result.show() to ensure
detections are being made.
If you've checked these and still face issues, it might be helpful to
share more details or error messages you're encountering. Keep
experimenting, and don't hesitate to reach out for more help! 🚀
-
Hi, I'm new to running models myself, and the last time I did any image training was about 15 years ago, though I am a Python veteran. I'd like to try running the building footprint models. Do you have a video series that can take me through setting up YOLOv8 and then running the model to extract footprints?
-
Hi, can someone please help? Thanks in advance.
-
Hi,
-
Hi, I'm a student and I'm doing this for my undergraduate thesis. I'm implementing a YOLOv8 model in an Android gallery's search mechanism. The purpose of the YOLOv8 model is to scan media files and return images that have a bounding-box label matching the search query. I can make it work with yolov5s.torchscript.ptl using org.pytorch:pytorch_android_lite:1.10.0 and org.pytorch:pytorch_android_torchvision_lite:1.10.0, but it won't work with yolov8s.torchscript. The yolov5s.torchscript.ptl has a function to load the model and the classes.txt; doesn't the YOLOv8 model need that?
-
When I am training a YOLOv8 model, how can I store the current epoch number in a variable that I can use wherever I want?
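One way to do this (a sketch; the `record_epoch` function and the `current_epoch` holder are hypothetical names, though `add_callback` and the `on_train_epoch_end` event are part of the documented Ultralytics callback system) is to register a training callback that copies the trainer's epoch counter into a variable your own code can read:

```python
current_epoch = {"value": -1}  # mutable holder readable from anywhere in the script

def record_epoch(trainer):
    # trainer.epoch is the index of the epoch the trainer just processed
    current_epoch["value"] = trainer.epoch

# Registering it with Ultralytics:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   model.add_callback("on_train_epoch_end", record_epoch)
#   model.train(data="coco8.yaml", epochs=3)
```

A dict (or any mutable object) is used rather than a plain integer so the callback can update it without `global` declarations.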
-
Hey Glenn, I just want to obtain the metrics for the lower half of the images. I tried modifying the label and annotation files of the validation split to contain only those bounding boxes that are in the lower half, but this doesn't seem to work. Any suggestions?
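For the label-filtering step itself, YOLO labels are normalized `cls cx cy w h` rows, so "in the lower half" can be tested on the y-center. A sketch (the `keep_lower_half` helper is hypothetical, and using the box center is one possible criterion; you might instead require the whole box to lie below the midline):

```python
def keep_lower_half(label_lines: list, threshold: float = 0.5) -> list:
    """Keep YOLO-format label lines ('cls cx cy w h', normalized 0-1)
    whose y-center falls in the lower half of the image."""
    kept = []
    for line in label_lines:
        parts = line.split()
        if len(parts) >= 5 and float(parts[2]) > threshold:
            kept.append(line)
    return kept
```

Running this over every file in the validation `labels/` directory (and re-running `val`) would restrict the metrics to boxes whose centers sit below the image midline.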
-
Hi, I have a question about the YOLOv8 model. The pre-trained model has labels like "person" and others, but if I create a new model with only the "person" label, will there be a performance difference on my computer between the pre-trained model and the model I create?
-
In YOLO v8.1 I can't find the confusion matrix and results.png. Where are they stored? This is how I started my training:
%cd /kaggle/working/HOME/YOLO_V8_OUTPUT
!yolo train model=yolov8l.pt data=/kaggle/working/HOME/FRUITS_AND_VEGITABLES_NITHIN-6/data.yaml epochs=100 imgsz=640 patience=10 device=0,1 project=/kaggle/working/HOME/YOLO_V8_OUTPUT
-
I am tasked with developing a shelf management system tailored for a specific brand. This system aims to automate the process of sales personnel visiting stores to assess product stock levels and required replenishments. Utilizing object detection, I intend to accurately count the products on the shelves and inform the salesperson of the quantities needed to refill. One major challenge to address is product occlusion, where items may partially or fully obscure others, complicating accurate counting. I'm particularly interested in exploring how YOLOv8, a popular object detection model, can be employed to tackle this problem effectively. Any guidance or insights on implementing such a solution would be greatly appreciated.
-
Hi @glenn-jocher, can you please have a look at this Google Doc? I have tried to explain, through screenshots, the problem I am facing after fine-tuning the model. I would really appreciate your kind guidance. https://docs.google.com/document/d/1WJ5SBdunWSqyd3FjgYgrZn2KjeYxezlel2LeXspmWAQ/edit?usp=sharing
-
Hello,
-
Hello, I'm a student and I want to use YOLO to detect ground interference signals scanned by a drone during its moving process. Can you recommend any good articles and documents?
-
VS Code used to work fine, but suddenly it says "This app can't run on your PC." What should I do?
-
I don't know what happened. Yesterday when I trained my model, this command worked, but when I train my model today with the same command I encounter this error. What is it and how do I resolve it?
Ultralytics YOLOv8.2.64 🚀 Python-3.10.13 torch-2.1.2 CUDA:0 (Tesla P100-PCIE-16GB, 16269MiB)
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
Transferred 319/355 items from pretrained weights
-
Hello, I have been having a problem. The input image size for YOLOv8 is 640x640, i.e. imgsz: 640 # (int | list) input image size, as int for train and val modes, or list[h, w] for predict and export modes
-
Hello sir, I want to do object detection and tracking using YOLOv8. What tracking algorithm can I use?
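Ultralytics ships two trackers that can be selected per call, BoT-SORT (the default, `botsort.yaml`) and ByteTrack (`bytetrack.yaml`). Below, as a sketch, is the usage in comments plus a toy nearest-centroid association function (the `associate` helper is hypothetical and only illustrates the core idea; real trackers add motion models and re-identification):

```python
# Usage with Ultralytics:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   model.track(source="video.mp4", tracker="bytetrack.yaml")  # or "botsort.yaml"
#
# At its core, tracking associates detections across frames:

def associate(prev: dict, centers: list, max_dist: float = 50.0) -> dict:
    """Greedily assign each previous track id to the nearest new center,
    dropping matches farther than max_dist pixels."""
    assigned = {}
    free = list(centers)
    for tid, (px, py) in prev.items():
        if not free:
            break
        best = min(free, key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)
        if (best[0] - px) ** 2 + (best[1] - py) ** 2 <= max_dist ** 2:
            assigned[tid] = best
            free.remove(best)
    return assigned
```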
-
Dear Ultralytics Team, I hope this message finds you well. First and foremost, thank you for developing YOLOv8, which has been incredibly beneficial for many computer vision applications. I am currently engaged in a project involving remote sensing data, where we need to classify, detect, localize, and generate segmentation masks for various features. While working on this, we have encountered some challenges related to dataset preparation and integration, and I would greatly appreciate your guidance on the following:
We have access to multiple remote sensing datasets, some annotated for object detection (bounding boxes) and others for segmentation tasks (masks). Converting detection annotations to segmentation masks and vice versa seems complex and time-consuming. Could you provide any best practices, tools, or workflows to facilitate this process effectively?
We are facing confusion because some datasets have segmentation masks but are missing bounding boxes, while others have bounding boxes but are missing masks. How can we overcome this issue to train a YOLOv8-Seg model effectively using a combined dataset with these inconsistencies?
Should we aim to use a single YOLOv8-Seg model to handle both detection and segmentation tasks, or is it more practical to train separate models for each task? What are the potential trade-offs in terms of performance and accuracy?
Given the variability in the number of classes and annotation types across different datasets, how can we best prepare and combine these datasets for training with YOLOv8? Are there specific considerations we should keep in mind to ensure consistency and compatibility?
I apologize if these questions seem too basic or lengthy, but your insights would be invaluable for the success of our project. Thank you for your time and assistance. Best regards, RAM
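On the box-to-mask direction of the conversion question above: one direction is mechanical, since a YOLO detection row `cls cx cy w h` can become a four-point YOLO-seg polygon (an axis-aligned rectangle), and the reverse, polygon to box, is just the polygon's min/max extents. A sketch, assuming normalized coordinates (the `bbox_to_seg` helper is hypothetical, not an Ultralytics tool):

```python
def bbox_to_seg(line: str) -> str:
    """Turn a YOLO detection line 'cls cx cy w h' into a YOLO-seg
    polygon line 'cls x1 y1 x2 y2 x3 y3 x4 y4' (axis-aligned rectangle)."""
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = map(float, (cx, cy, w, h))
    x1, y1 = cx - w / 2, cy - h / 2
    x2, y2 = cx + w / 2, cy + h / 2
    corners = [x1, y1, x2, y1, x2, y2, x1, y2]  # clockwise rectangle
    return " ".join([cls] + [f"{v:.6f}" for v in corners])
```

Note the caveat: rectangles are only a coarse stand-in for true masks, so a model trained on such converted labels will learn boxy segmentation boundaries for those images.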
-
Hi, I use a Raspberry Pi CM4. I could run my code 3 weeks ago, but I ran into a problem today,
and get
Python version: 3.11.2
-
Hello, I want to know how to train for both segmentation and pose estimation using YOLO. Thank you.
-
Hello, for the classification task
-
Hello, I'm doing a project with the xBD (xView2) dataset available on Google, which contains satellite images with building instances labelled as no-damage, minor-damage, major-damage, and destroyed. The dataset is heavily imbalanced: it has loads of no-damage building instances compared to the other 3 damage types. I tried augmenting with Roboflow, but the pictures look weird to me; here's a sample of before and after: https://prnt.sc/5FNcPr4QgL8J
My questions: how should I handle the dataset imbalance while training on this dataset with YOLOv8 or YOLOv9? Is it okay to use augmented images like that? And what should I do to improve my prediction accuracy and mAP score, and should I follow any other augmentation techniques?
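One common remedy for this kind of imbalance is to oversample images that contain rare classes when building the training list. A sketch of computing inverse-frequency sampling weights (the `oversample_weights` helper is hypothetical; it assumes you have already parsed each image's YOLO label file into a list of class ids):

```python
from collections import Counter

def oversample_weights(images_classes: list) -> list:
    """Give each image a sampling weight equal to the inverse frequency of
    its rarest class, so images with rare damage types are drawn more often."""
    freq = Counter(c for classes in images_classes for c in classes)
    weights = []
    for classes in images_classes:
        if not classes:
            weights.append(1.0)  # background-only image: neutral weight
        else:
            weights.append(max(1.0 / freq[c] for c in classes))
    return weights
```

The weights could then drive how many times each image path is repeated in the training list (or feed a weighted sampler in a custom dataloader); duplicating file paths is the simplest route with the standard YOLO training pipeline.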
-
Hello! Quick question: is there any way to have the specific add-ons/models downloaded and placed in an accessible area for the YOLO World and SAM classes to access at runtime? I'm deploying YOLO World and SAM on RunPod together as a serverless GPU endpoint for a meme studio app. I don't believe we're charged during container pulling, so I'd like to handle everything during container build instead. I know YOLO World uses CLIP, so would adding "git+https://github.com/ultralytics/CLIP.git" to my requirements.txt cover that, or does Ultralytics look elsewhere for the models?
-
I want to replace the SPPF module in YOLOv8 with another module. I defined the module in the "block" file and modified the YAML configuration file. Is there anything else that needs to be modified or noted? Thanks.
-
Hello, I now have a task with roughly the following requirement: identify four positions in an area with the YOLO algorithm, and keep the IDs corresponding to the four positions unchanged. What algorithm can be used?
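If the four positions are fixed in the frame, one option is to skip a tracker entirely: define the four reference points once and assign each detection to its nearest reference, so the ids can never swap. A naive sketch (the `assign_fixed_ids` helper is hypothetical and does not prevent two references from claiming the same detection):

```python
def assign_fixed_ids(detections: list, references: dict) -> dict:
    """Map each fixed position id to the detection center closest to it.
    `detections` is a list of (x, y) centers; `references` maps id -> (x, y)."""
    out = {}
    for pid, (rx, ry) in references.items():
        if detections:
            out[pid] = min(detections,
                           key=lambda d: (d[0] - rx) ** 2 + (d[1] - ry) ** 2)
    return out
```

If the positions can move between frames, a proper tracker (e.g. the ByteTrack or BoT-SORT configurations bundled with Ultralytics `model.track`) would be the more robust choice.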
-
I don't have results at all. When I finish training the model and try to run detection on any image or video, there are no detections, and the results image in the train30 folder doesn't show any improvement, only one point in all curves.
-
Explore a complete guide to Ultralytics YOLOv8, a high-speed, high-accuracy object detection & image segmentation model. Installation, prediction, training tutorials and more.
https://docs.ultralytics.com/