Hi,
This model is a bit slower than the existing state-of-the-art models on CPU, so I tried making predictions on GPU, and surprisingly it is slower on GPU than on CPU as well.
I am attaching a code snippet here:
import time
import torch
from PIL import Image
from torchvision.transforms import ToTensor

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model = LitBTTR.load_from_checkpoint('pretrained-2014.ckpt', map_location=device)  # LitBTTR is the Lightning module from this repo

img = Image.open(img_path)
img = ToTensor()(img)
img = img.to(device)  # Tensor.to() is not in-place, so the result must be reassigned

t1 = time.time()
hyp = model.beam_search(img)
t2 = time.time()
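Since CUDA launches kernels asynchronously and the first GPU call pays one-off initialization costs, here is a sketch of how I plan to re-time this (assuming the same model and img as above, and assuming the module itself still needs to be moved to the device, since map_location alone may not move the loaded parameters):

model = model.to(device)  # make sure the module's parameters are actually on the target device
model.eval()              # inference mode: disables dropout etc.

with torch.no_grad():
    model.beam_search(img)           # warm-up call: absorbs CUDA init costs
    if device.type == 'cuda':
        torch.cuda.synchronize()     # wait for pending GPU work before starting the clock
    t1 = time.time()
    hyp = model.beam_search(img)
    if device.type == 'cuda':
        torch.cuda.synchronize()     # make sure the prediction has actually finished
    t2 = time.time()

print(f'beam_search took {t2 - t1:.3f}s on {device}')

Does this look like a fair measurement?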
Kindly help me out here: how can I reduce the prediction time?
FYI - I am using the GPU on an AWS g4dn.xlarge machine.