Add device parameter for TransformerModel in models.py #49
Added "device" parameter to TransformerModel in
model.py
for Improved GPU/CPU CompatibilityDescription
This PR introduces the
device
parameter to theTransformersModel
class, allowing users to explicitly specify the device ("cpu"
,"cuda"
, etc.) for model loading and inference. This change improves flexibility and control over model deployment environments, especially when running on systems with mixed hardware setups.Changes Made
- Device Detection: Defaults to `"cuda"` if available, otherwise `"cpu"`.
- Added a `.to(device)` call to move the model and tensors to the appropriate device (see the sketch after this list).
- Backward Compatibility: Code that does not pass a `device` argument keeps its current behavior.
- Enhanced Logging.
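A minimal sketch of the device handling described above, assuming a constructor that takes a `model_id` and a `generate` helper; the attribute and method names here are illustrative, not the actual implementation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

class TransformersModel:
    def __init__(self, model_id: str, device: str | None = None):
        # Device detection: default to "cuda" when available, otherwise "cpu",
        # so omitting the parameter preserves the previous behavior.
        if device is None:
            device = "cuda" if torch.cuda.is_available() else "cpu"
        self.device = device

        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        # Move the model to the requested device.
        self.model = AutoModelForCausalLM.from_pretrained(model_id).to(self.device)

    def generate(self, prompt: str, max_new_tokens: int = 50) -> str:
        # Input tensors must live on the same device as the model.
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.device)
        output_ids = self.model.generate(**inputs, max_new_tokens=max_new_tokens)
        return self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
```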
Example Usage
Before
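A minimal sketch of the pre-change usage; the import path and `model_id` value are placeholders:

```python
from models import TransformersModel  # import path assumed

# Before: no device argument; the class picks a device internally.
model = TransformersModel(model_id="some-model-id")
```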
After
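The same call with the new `device` parameter (placeholders as above):

```python
from models import TransformersModel  # import path assumed

# After: the target device is passed explicitly.
model = TransformersModel(model_id="some-model-id", device="cuda")
```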
This ensures the model explicitly runs on the GPU.
Motivation
Explicit hardware selection is crucial in mixed hardware environments. Many users work on systems where both CPU and GPU resources are available, and being able to specify the `device` simplifies configuration and improves usability.

By adding the `device` parameter, users no longer need to modify internal logic or rely on automatic device detection, making the library more adaptable to diverse deployment environments.

Testing
"cuda"
) and CPU ("cpu"
).device
parameter is provided or when the specified device is unavailable.device
parameter continues to work as expected.Checklist
- Added the `device` parameter to `TransformersModel`.

Impact
This update is fully backward-compatible. Users who do not specify the `device` parameter will experience no change in behavior, as the default device (`"cuda"` if available, otherwise `"cpu"`) is automatically selected.