This module implements the RDK vision API in a viam-labs:vision:yolov8 model.
This model leverages the Ultralytics inference library to allow for object detection and classification from YOLOv8 models.
Both locally deployed YOLOv8 models and models from web sources like HuggingFace can be used (HuggingFace models will be downloaded and used locally).
Navigate to the Config tab of your robot’s page in the Viam app.
Click on the Services subtab and click Create service.
Select the vision type, then select the viam-labs:vision:yolov8 model.
Enter a name for your vision service and click Create.
Copy and paste the following attribute template into your vision service's Attributes box:
```json
{
  "model_location": "<string>"
}
```

The following attributes are available for the `viam-labs:vision:yolov8` model:

| Name | Type | Inclusion | Description |
|---|---|---|---|
| `model_location` | string | Required | YOLO model name (such as `"yolov8n.pt"`), local path to a model file, or HuggingFace model repo identifier |
| `model_name` | string | Optional | Name of the model file when using a HuggingFace repo identifier as `model_location` |
| `task` | string | Optional | Computer vision task performed by the model: `"detect"` (default) or `"classify"` |
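For instance, to run classification instead of the default detection, set `task` alongside the model location. A sketch using `yolov8n-cls.pt`, the standard Ultralytics YOLOv8 classification checkpoint name:

```json
{
  "model_location": "yolov8n-cls.pt",
  "task": "classify"
}
```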
YOLO base model:

```json
{
  "model_location": "yolov8n.pt"
}
```

HuggingFace model:

```json
{
  "model_location": "keremberke/yolov8n-hard-hat-detection",
  "model_name": "best.pt"
}
```

Local YOLOv8 model:
```json
{
  "model_location": "/path/to/yolov8n.pt"
}
```

Note: if using the `get_detections_from_camera` or `get_classifications_from_camera` API methods, any cameras you are using must be set in the `depends_on` array of the service configuration, for example:

```json
"depends_on": [
  "cam"
]
```

The YOLOv8 resource provides the following methods from Viam's built-in `rdk:service:vision` API:
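As a minimal client-side sketch of calling these methods with the Viam Python SDK: the machine address, API credentials, service name `"yolov8"`, and camera name `"cam"` below are all placeholders for your own configuration, and `filter_detections` is an illustrative helper, not part of the API.

```python
import asyncio


def filter_detections(detections, min_confidence=0.5):
    """Illustrative helper: keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d.confidence >= min_confidence]


async def main():
    # Viam SDK imports (assumes `pip install viam-sdk`).
    from viam.robot.client import RobotClient
    from viam.services.vision import VisionClient

    # Connect to the machine; address and credentials are placeholders.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>"
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # Look up the yolov8 vision service by its configured name.
    detector = VisionClient.from_robot(machine, "yolov8")

    # Detect objects in frames from the camera named "cam"; the camera
    # must appear in the service's depends_on array.
    detections = await detector.get_detections_from_camera("cam")
    for d in filter_detections(detections):
        print(d.class_name, d.confidence)

    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())
```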
