
Commit e0ee128

Merge branch 'develop' into tensor_parallel
2 parents f1b263b + 58aa98c
File tree

3 files changed: +4 −2 lines

- README.md
- dicee/trainer/model_parallelism.py
- examples/multi_hop_query_answering/benchmarking.py

README.md — 2 additions, 0 deletions

@@ -1,3 +1,5 @@
+[![Downloads](https://static.pepy.tech/badge/dicee)](https://pepy.tech/project/dicee)
+[![Downloads](https://img.shields.io/pypi/dm/dicee)](https://pypi.org/project/dicee/)
 [![Coverage](https://img.shields.io/badge/coverage-54%25-green)](https://dice-group.github.io/dice-embeddings/usage/main.html#coverage-report)
 [![Pypi](https://img.shields.io/badge/pypi-0.1.4-blue)](https://pypi.org/project/dicee/0.1.4/)
 [![Docs](https://img.shields.io/badge/documentation-0.1.4-yellow)](https://dice-group.github.io/dice-embeddings/index.html)

dicee/trainer/model_parallelism.py — 1 addition, 1 deletion

@@ -87,7 +87,7 @@ def increase_batch_size_until_cuda_out_of_memory(ensemble_model, train_loader, b
                 return batch_sizes_and_mem_usages, True

         except torch.OutOfMemoryError as e:
-            print(f"torch.OutOfMemoryError caught! {e}")
+            print(f"torch.OutOfMemoryError caught! {e}\n\n")
             return batch_sizes_and_mem_usages, False

     history_batch_sizes_and_mem_usages=[]
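The patched function's name and return values suggest it grows the batch size until a CUDA out-of-memory error occurs, then returns the measurements gathered so far plus a success flag. A minimal sketch of that search pattern, with a hypothetical `probe` callable and a plain exception standing in for a real forward pass and `torch.OutOfMemoryError` (the actual dicee implementation is not shown in this diff):

```python
class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.OutOfMemoryError in this GPU-free sketch."""


def find_max_batch_size(probe, start_batch_size, max_batch_size):
    """Double the batch size until `probe` raises, mirroring the
    (history, success_flag) return shape visible in the diff."""
    batch_sizes_and_mem_usages = []
    batch_size = start_batch_size
    try:
        while batch_size <= max_batch_size:
            # In the real trainer this would run a forward/backward pass
            # and record CUDA memory usage for this batch size.
            mem_usage = probe(batch_size)
            batch_sizes_and_mem_usages.append((batch_size, mem_usage))
            batch_size *= 2
        # Reached the cap without exhausting memory.
        return batch_sizes_and_mem_usages, True
    except OutOfMemoryError as e:
        print(f"OutOfMemoryError caught! {e}\n\n")
        return batch_sizes_and_mem_usages, False


def fake_probe(batch_size, limit=4096):
    """Hypothetical probe: fails once the batch exceeds a fixed budget."""
    if batch_size > limit:
        raise OutOfMemoryError(f"tried batch_size={batch_size}")
    return batch_size  # pretend memory usage scales with batch size


history, fits = find_max_batch_size(fake_probe, 32, 1 << 20)
# history ends at the last batch size that fit; fits is False because
# the simulated limit was hit before the cap.
```

Catching the OOM inside the search loop (rather than letting it propagate) is what lets the caller fall back to the last batch size that fit, which matches the diff's `return batch_sizes_and_mem_usages, False` path.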

examples/multi_hop_query_answering/benchmarking.py — 1 addition, 1 deletion

@@ -31,7 +31,7 @@
 args = Namespace()
 args.model = kge_name
 args.scoring_technique = "KvsAll"
-args.path_dataset_folder = "KGs/UMLS"
+args.dataset_dir = "KGs/UMLS"
 args.num_epochs = 20
 args.batch_size = 1024
 args.lr = 0.1
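This change only renames the dataset-path attribute from `path_dataset_folder` to `dataset_dir`, keeping the example in sync with the library's current argument name. A self-contained sketch of the updated configuration; the `kge_name` value is an assumption here, since the surrounding example code that defines it is not part of this diff:

```python
from argparse import Namespace

# kge_name is assumed for illustration; the real example sets it elsewhere.
kge_name = "DistMult"

args = Namespace()
args.model = kge_name
args.scoring_technique = "KvsAll"
args.dataset_dir = "KGs/UMLS"   # renamed from args.path_dataset_folder
args.num_epochs = 20
args.batch_size = 1024
args.lr = 0.1
```

Code still setting the old `path_dataset_folder` attribute would silently configure nothing after this rename, so updating call sites like this example is the whole point of the patch.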

0 commit comments