This is a Viam module providing an MLModel service for PyTorch models.

Install the module's Python dependencies:

pip install -r requirements.txt
Note: Before configuring your ML model service, you must create a robot.
Navigate to the Config tab of your robot's page in the Viam app. Click on the Services subtab and click Create service. Select the MLModel type, then select the torch-cpu model. Enter a name for your service and click Create.

An example raw JSON configuration:
{
  "modules": [
    {
      "name": "mymodel",
      "version": "latest",
      "type": "registry",
      "module_id": "viam:torch-cpu"
    }
  ],
  "services": [
    {
      "name": "torch",
      "type": "mlmodel",
      "model": "viam:mlmodel:torch-cpu",
      "attributes": {
        "model_path": "examples/resnet_18/resnet-18.pt",
        "label_path": "examples/resnet_18/labels.txt"
      }
    }
  ]
}
The following attributes are available to configure your module:
Name | Type | Inclusion | Default | Description |
---|---|---|---|---|
model_path | string | Required | | Path to the standalone model file. |
label_path | string | Optional | | Path to a file with class labels. |
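The resnet-18.pt file referenced above is a standalone PyTorch model file. As a rough sketch, one common way to produce such a file is to export a pretrained torchvision model with TorchScript; whether the module accepts TorchScript specifically is an assumption here, so verify the expected format against the module's examples.

```python
import torch
from torchvision import models

# Export a pretrained ResNet-18 as a standalone TorchScript .pt file.
# Assumption: the torch-cpu module can load TorchScript files; check the
# module's examples directory for the exact format it expects.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

scripted = torch.jit.script(model)
scripted.save("examples/resnet_18/resnet-18.pt")
```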
Once the service is configured, you can run inference through the MLModel API's infer method:

infer(input_tensors: Dict[str, NDArray], *, timeout: Optional[float]) -> Dict[str, NDArray]
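The usage example below assumes an existing connection to the robot. A minimal connection sketch using the Viam Python SDK, with placeholder address and API key values that you copy from your robot's Connect tab:

```python
from viam.robot.client import RobotClient


async def connect() -> RobotClient:
    # Fill in the placeholders with the values from your robot's Connect tab.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    return await RobotClient.at_address("<ROBOT-ADDRESS>", opts)
```

The robot variable in the example below is the client returned by await connect().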
import numpy as np
from PIL import Image
from viam.services.mlmodel import MLModelClient

# "torch" is the service name from the configuration above.
my_model = MLModelClient.from_robot(robot, "torch")

# Load an image (path_to_input_image is a local image file) and
# reshape it to (batch, channel, height, width).
input_image = np.array(Image.open(path_to_input_image), dtype=np.float32)
input_image = np.transpose(input_image, (2, 0, 1))  # channel first
input_image = np.expand_dims(input_image, axis=0)  # batch dim

input_tensor = dict()
input_tensor["input"] = input_image
output = await my_model.infer(input_tensor)
print(f"output.shape is {output['output'].shape}")