
Commit 4bc42ae

Merge pull request #248 from dice-group/develop
Prep for the new release
2 parents dae330e + 3eebbac commit 4bc42ae

16 files changed

Lines changed: 400 additions & 1508 deletions

README.md

Lines changed: 22 additions & 30 deletions
@@ -35,7 +35,7 @@ Deploy a pre-trained embedding model without writing a single line of code.
 ### Installation from Source
 ``` bash
 git clone https://github.com/dice-group/dice-embeddings.git
-conda create -n dice python=3.10.13 --no-default-packages && conda activate dice && cd dice-embeddings &&
+conda create -n dice python=3.10.13 --no-default-packages && conda activate dice
 pip3 install -e .
 ```
 or
@@ -48,7 +48,7 @@ wget https://files.dice-research.org/datasets/dice-embeddings/KGs.zip --no-check
 ```
 To test the Installation
 ```bash
-python -m pytest -p no:warnings -x # Runs >114 tests leading to > 15 mins
+python -m pytest -p no:warnings -x # Runs >119 tests leading to > 15 mins
 python -m pytest -p no:warnings --lf # run only the last failed test
 python -m pytest -p no:warnings --ff # to run the failures first and then the rest of the tests.
 ```
@@ -95,45 +95,26 @@ A KGE model can also be trained from the command line
 ```bash
 dicee --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
 ```
-dicee automaticaly detects available GPUs and trains a model with distributed data parallels technique. Under the hood, dicee uses lighning as a default trainer.
+dicee automatically detects available GPUs and trains a model with the distributed data parallel technique.
 ```bash
 # Train a model by only using the GPU-0
 CUDA_VISIBLE_DEVICES=0 dicee --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
 # Train a model by only using GPU-1
 CUDA_VISIBLE_DEVICES=1 dicee --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
-NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 python dicee/scripts/run.py --trainer PL --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
+# Train a model by using all available GPUs
+dicee --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
 ```
-Under the hood, dicee executes run.py script and uses lighning as a default trainer
+Under the hood, dicee executes the run.py script and uses [lightning](https://lightning.ai/) as a default trainer.
 ```bash
 # Two equivalent executions
 # (1)
 dicee --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
-# Evaluate Keci on Train set: Evaluate Keci on Train set
-# {'H@1': 0.9518788343558282, 'H@3': 0.9988496932515337, 'H@10': 1.0, 'MRR': 0.9753123402351737}
-# Evaluate Keci on Validation set: Evaluate Keci on Validation set
-# {'H@1': 0.6932515337423313, 'H@3': 0.9041411042944786, 'H@10': 0.9754601226993865, 'MRR': 0.8072362996241839}
-# Evaluate Keci on Test set: Evaluate Keci on Test set
-# {'H@1': 0.6951588502269289, 'H@3': 0.9039334341906202, 'H@10': 0.9750378214826021, 'MRR': 0.8064032293278861}
-
 # (2)
 CUDA_VISIBLE_DEVICES=0,1 python dicee/scripts/run.py --trainer PL --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
-# Evaluate Keci on Train set: Evaluate Keci on Train set
-# {'H@1': 0.9518788343558282, 'H@3': 0.9988496932515337, 'H@10': 1.0, 'MRR': 0.9753123402351737}
-# Evaluate Keci on Train set: Evaluate Keci on Train set
-# Evaluate Keci on Validation set: Evaluate Keci on Validation set
-# {'H@1': 0.6932515337423313, 'H@3': 0.9041411042944786, 'H@10': 0.9754601226993865, 'MRR': 0.8072362996241839}
-# Evaluate Keci on Test set: Evaluate Keci on Test set
-# {'H@1': 0.6951588502269289, 'H@3': 0.9039334341906202, 'H@10': 0.9750378214826021, 'MRR': 0.8064032293278861}
 ```
 Similarly, models can be easily trained with torchrun
 ```bash
 torchrun --standalone --nnodes=1 --nproc_per_node=gpu dicee/scripts/run.py --trainer torchDDP --dataset_dir "KGs/UMLS" --model Keci --eval_model "train_val_test"
-# Evaluate Keci on Train set: Evaluate Keci on Train set: Evaluate Keci on Train set
-# {'H@1': 0.9518788343558282, 'H@3': 0.9988496932515337, 'H@10': 1.0, 'MRR': 0.9753123402351737}
-# Evaluate Keci on Validation set: Evaluate Keci on Validation set
-# {'H@1': 0.6932515337423313, 'H@3': 0.9041411042944786, 'H@10': 0.9754601226993865, 'MRR': 0.8072499937521418}
-# Evaluate Keci on Test set: Evaluate Keci on Test set
-{'H@1': 0.6951588502269289, 'H@3': 0.9039334341906202, 'H@10': 0.9750378214826021, 'MRR': 0.8064032293278861}
 ```
 You can also train a model in multi-node multi-gpu setting.
 ```bash
@@ -143,7 +124,7 @@ torchrun --nnodes 2 --nproc_per_node=gpu --node_rank 1 --rdzv_id 455 --rdzv_bac
 Train a KGE model by providing the path of a single file and store all parameters under newly created directory
 called `KeciFamilyRun`.
 ```bash
-dicee --path_single_kg "KGs/Family/family-benchmark_rich_background.owl" --model Keci --path_to_store_single_run KeciFamilyRun --backend rdflib
+dicee --path_single_kg "KGs/Family/family-benchmark_rich_background.owl" --model Keci --path_to_store_single_run KeciFamilyRun --backend rdflib --eval_model None
 ```
 where the data is in the following form
 ```bash
@@ -152,6 +133,11 @@ _:1 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07
 <http://www.benchmark.org/family#hasChild> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#ObjectProperty> .
 <http://www.benchmark.org/family#hasParent> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#ObjectProperty> .
 ```
+**Continual Training:** the training phase of a pretrained model can be resumed.
+```bash
+dicee --continual_learning KeciFamilyRun --path_single_kg "KGs/Family/family-benchmark_rich_background.owl" --model Keci --path_to_store_single_run KeciFamilyRun --backend rdflib --eval_model None
+```
+
 **Apart from n-triples or standard link prediction dataset formats, we support ["owl", "nt", "turtle", "rdf/xml", "n3"]***.
 Moreover, a KGE model can be also trained by providing **an endpoint of a triple store**.
 ```bash
@@ -285,16 +271,22 @@ pre_trained_kge.predict_topk(r=[".."],t=[".."],topk=10)
 
 ## Downloading Pretrained Models
 
+We provide plenty of pretrained knowledge graph embedding models at [dice-research.org/projects/DiceEmbeddings/](https://files.dice-research.org/projects/DiceEmbeddings/).
 <details> <summary> To see a code snippet </summary>
 
 ```python
 from dicee import KGE
-# (1) Load a pretrained ConEx on DBpedia
-model = KGE(url="https://files.dice-research.org/projects/DiceEmbeddings/KINSHIP-Keci-dim128-epoch256-KvsAll")
+mure = KGE(url="https://files.dice-research.org/projects/DiceEmbeddings/YAGO3-10-Pykeen_MuRE-dim128-epoch256-KvsAll")
+quate = KGE(url="https://files.dice-research.org/projects/DiceEmbeddings/YAGO3-10-Pykeen_QuatE-dim128-epoch256-KvsAll")
+keci = KGE(url="https://files.dice-research.org/projects/DiceEmbeddings/YAGO3-10-Keci-dim128-epoch256-KvsAll")
+quate.predict_topk(h=["Mongolia"],r=["isLocatedIn"],topk=3)
+# [('Asia', 0.9894362688064575), ('Europe', 0.01575559377670288), ('Tadanari_Lee', 0.012544365599751472)]
+keci.predict_topk(h=["Mongolia"],r=["isLocatedIn"],topk=3)
+# [('Asia', 0.6522021293640137), ('Chinggis_Khaan_International_Airport', 0.36563414335250854), ('Democratic_Party_(Mongolia)', 0.19600993394851685)]
+mure.predict_topk(h=["Mongolia"],r=["isLocatedIn"],topk=3)
+# [('Asia', 0.9996906518936157), ('Ulan_Bator', 0.0009907372295856476), ('Philippines', 0.0003116439620498568)]
 ```
 
-- For more please look at [dice-research.org/projects/DiceEmbeddings/](https://files.dice-research.org/projects/DiceEmbeddings/)
-
 </details>
 
 ## How to Deploy
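A locally trained model can be loaded and queried the same way. A minimal sketch, assuming KGE also accepts a local storage directory via a path argument (the KeciFamilyRun directory comes from the training example further up; the ".." placeholders stand for entity and relation labels from the trained graph):

```python
from dicee import KGE

# Minimal sketch, assuming KGE(path=...) loads from a local run directory the
# same way KGE(url=...) loads a published model. "KeciFamilyRun" is the
# directory created by the training example above; ".." are label placeholders.
pre_trained_kge = KGE(path="KeciFamilyRun")
pre_trained_kge.predict_topk(h=[".."], r=[".."], topk=10)
```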

dicee/config.py

Lines changed: 2 additions & 0 deletions
@@ -133,6 +133,8 @@ def __init__(self, **kwargs):
         self.block_size: int = None
         "block size of LLM"
 
+        self.continual_learning=None
+        "Path of a pretrained model"
 
     def __iter__(self):
         # Iterate

dicee/evaluator.py

Lines changed: 1 addition & 1 deletion
@@ -456,7 +456,7 @@ def dummy_eval(self, trained_model, form_of_labelling: str):
                                   valid_set=valid_set,
                                   test_set=test_set,
                                   trained_model=trained_model)
-        elif self.args.scoring_technique in ['KvsAll', 'KvsSample', '1vsAll', 'PvsAll', 'CCvsAll']:
+        elif self.args.scoring_technique in ["AllvsAll", 'KvsAll', 'KvsSample', '1vsAll']:
             self.eval_with_vs_all(train_set=train_set,
                                   valid_set=valid_set,
                                   test_set=test_set,
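For context, every scoring technique in the updated list is evaluated the same way: score a query against all entities and rank the true answer. A minimal sketch of that ranking step (the function name and shapes are illustrative, not the evaluator's internals):

```python
import torch

# Rank of the true entity among all candidates: 1 + the number of strictly
# higher scores. H@k then checks rank <= k and MRR averages 1/rank.
def rank_of_true_entity(scores: torch.Tensor, true_idx: int) -> int:
    return int((scores > scores[true_idx]).sum().item()) + 1

scores = torch.tensor([0.1, 0.9, 0.3, 0.8])  # one score per entity for one (h, r) query
print(rank_of_true_entity(scores, true_idx=3))  # -> 2
```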

dicee/executer.py

Lines changed: 16 additions & 15 deletions
@@ -234,31 +234,32 @@ class ContinuousExecute(Execute):
     (1) Loading & Preprocessing & Serializing input data.
     (2) Training & Validation & Testing
     (3) Storing all necessary info
+
+    During continual learning, only the ***num_epochs*** parameter can be modified.
+    The trained model is stored in the same folder as the seed model used for training.
+    The trained model is tagged with the current time.
     """
 
     def __init__(self, args):
-        assert os.path.exists(args.path_experiment_folder)
-        assert os.path.isfile(args.path_experiment_folder + '/configuration.json')
-        # (1) Load Previous input configuration
-        previous_args = load_json(args.path_experiment_folder + '/configuration.json')
-        dargs = vars(args)
-        del args
-        for k in list(dargs.keys()):
-            if dargs[k] is None:
-                del dargs[k]
-        # (2) Update (1) with new input
-        previous_args.update(dargs)
+        # (1) Current input configuration.
+        assert os.path.exists(args.continual_learning)
+        assert os.path.isfile(args.continual_learning + '/configuration.json')
+        # (2) Load previous input configuration.
+        previous_args = load_json(args.continual_learning + '/configuration.json')
+        args=vars(args)
+        # (3) Keep the previous configuration; overwrite only num_epochs and the continual_learning path.
+        previous_args["num_epochs"]=args["num_epochs"]
+        previous_args["continual_learning"]=args["continual_learning"]
+        print("Updated configuration:",previous_args)
         try:
-            report = load_json(dargs['path_experiment_folder'] + '/report.json')
+            report = load_json(args['continual_learning'] + '/report.json')
             previous_args['num_entities'] = report['num_entities']
             previous_args['num_relations'] = report['num_relations']
         except AssertionError:
             print("Couldn't find report.json.")
         previous_args = SimpleNamespace(**previous_args)
-        previous_args.full_storage_path = previous_args.path_experiment_folder
         print('ContinuousExecute starting...')
         print(previous_args)
-        # TODO: can we remove continuous_training from Execute ?
         super().__init__(previous_args, continuous_training=True)
 
     def continual_start(self) -> dict:
@@ -279,7 +280,7 @@ def continual_start(self) -> dict:
         """
         # (1)
         self.trainer = DICE_Trainer(args=self.args, is_continual_training=True,
-                                    storage_path=self.args.path_experiment_folder)
+                                    storage_path=self.args.continual_learning)
         # (2)
         self.trained_model, form_of_labelling = self.trainer.continual_start()
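The net effect of this rewrite: instead of overriding every non-None CLI value, a continual run keeps the seed run's configuration.json and takes only the epoch budget (plus the continual_learning path itself) from the new invocation. A standalone sketch of that merge, using plain json in place of the repo's load_json helper (the wrapper function is illustrative, ContinuousExecute does this inline):

```python
import json
from types import SimpleNamespace

# Illustrative sketch of the merge above: the seed run's configuration.json
# wins, and only num_epochs plus the continual_learning path come from the
# new invocation.
def merge_continual_config(continual_learning: str, num_epochs: int) -> SimpleNamespace:
    with open(f"{continual_learning}/configuration.json") as f:
        previous_args = json.load(f)
    previous_args["num_epochs"] = num_epochs
    previous_args["continual_learning"] = continual_learning
    try:
        with open(f"{continual_learning}/report.json") as f:
            report = json.load(f)
        previous_args["num_entities"] = report["num_entities"]
        previous_args["num_relations"] = report["num_relations"]
    except FileNotFoundError:
        print("Couldn't find report.json.")
    return SimpleNamespace(**previous_args)

# e.g. resume the README's KeciFamilyRun example for ten more epochs:
# merge_continual_config("KeciFamilyRun", num_epochs=10)
```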

dicee/models/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -6,3 +6,4 @@
 from .clifford import Keci, KeciBase, CMult, DeCaL # noqa
 from .pykeen_models import * # noqa
 from .function_space import * # noqa
+from .dualE import DualE
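The added export makes the new model importable from the package namespace:

```python
# With the new line in dicee/models/__init__.py, DualE resolves directly:
from dicee.models import DualE
```

By analogy with the Keci examples in the README it should then be selectable by name via --model, though this diff does not show that wiring.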

dicee/models/base_model.py

Lines changed: 2 additions & 0 deletions
@@ -431,6 +431,8 @@ class IdentityClass(torch.nn.Module):
     def __init__(self, args=None):
         super().__init__()
         self.args = args
+    def __call__(self, x):
+        return x
 
     @staticmethod
     def forward(x):
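The two added lines make instances of this no-op module return their input unchanged via a plain __call__, bypassing torch.nn.Module's usual dispatch into forward. A standalone sketch of the resulting behavior (Identity stands in for the repo's IdentityClass):

```python
import torch

# Stand-in for IdentityClass: a module that passes its input through untouched,
# useful as a drop-in no-op where an optional normalizer or activation is expected.
class Identity(torch.nn.Module):
    def __init__(self, args=None):
        super().__init__()
        self.args = args

    def __call__(self, x):
        return x

    @staticmethod
    def forward(x):
        return x

f = Identity()
x = torch.randn(2, 4)
assert f(x) is x                 # __call__ returns the very same tensor
assert Identity.forward(x) is x  # the static forward does too
```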

dicee/models/clifford.py

Lines changed: 60 additions & 28 deletions
@@ -764,7 +764,7 @@ def forward_triples(self, x: torch.Tensor) -> torch.FloatTensor:
 
         Parameter
         ---------
-        x: torch.LongTensor with (n,3) shape
+        x: torch.LongTensor with (n, ) shape
 
         Returns
         -------
@@ -844,9 +844,9 @@ def forward_triples(self, x: torch.Tensor) -> torch.FloatTensor:
         sigma_qr = 0
         return h0r0t0 + score_p + score_q + score_r + sigma_pp + sigma_qq + sigma_rr + sigma_pq + sigma_qr + sigma_pr
 
-    def cl_pqr(self, a):
+    def cl_pqr(self, a:torch.tensor)->torch.tensor:
 
-        ''' Input: tensor(batch_size, emb_dim) ----> output: tensor with 1+p+q+r components with size (batch_size, emb_dim/(1+p+q+r)) each.
+        ''' Input: tensor(batch_size, emb_dim) ---> output: tensor with 1+p+q+r components with size (batch_size, emb_dim/(1+p+q+r)) each.
 
         1) takes a tensor of size (batch_size, emb_dim), split it into 1 + p + q +r components, hence 1+p+q+r must be a divisor
         of the emb_dim.
@@ -861,17 +861,25 @@ def cl_pqr(self, a):
     def compute_sigmas_single(self, list_h_emb, list_r_emb, list_t_emb):
 
         '''here we compute all the sums with no others vectors interaction taken with the scalar product with t, that is,
-        1) s0 = h_0r_0t_0
-        2) s1 = \sum_{i=1}^{p}h_ir_it_0
-        3) s2 = \sum_{j=p+1}^{p+q}h_jr_jt_0
-        4) s3 = \sum_{i=1}^{q}(h_0r_it_i + h_ir_0t_i)
-        5) s4 = \sum_{i=p+1}^{p+q}(h_0r_it_i + h_ir_0t_i)
-        5) s5 = \sum_{i=p+q+1}^{p+q+r}(h_0r_it_i + h_ir_0t_i)
+
+        .. math::
+
+            s0 = h_0r_0t_0
+            s1 = \sum_{i=1}^{p}h_ir_it_0
+            s2 = \sum_{j=p+1}^{p+q}h_jr_jt_0
+            s3 = \sum_{i=1}^{q}(h_0r_it_i + h_ir_0t_i)
+            s4 = \sum_{i=p+1}^{p+q}(h_0r_it_i + h_ir_0t_i)
+            s5 = \sum_{i=p+q+1}^{p+q+r}(h_0r_it_i + h_ir_0t_i)
 
         and return:
 
-        *) sigma_0t = \sigma_0 \cdot t_0 = s0 + s1 -s2
-        *) s3, s4 and s5'''
+        .. math::
+
+            sigma_0t = \sigma_0 \cdot t_0 = s0 + s1 -s2
+
+        s3, s4 and s5
+
+        '''
 
         p = self.p
         q = self.q
@@ -906,15 +914,19 @@ def compute_sigmas_multivect(self, list_h_emb, list_r_emb):
 
         For same bases vectors interaction we have
 
-        1) \sigma_pp = \sum_{i=1}^{p-1}\sum_{i'=i+1}^{p}(h_ir_{i'}-h_{i'}r_i) (models the interactions between e_i and e_i' for 1 <= i, i' <= p)
-        2) \sigma_qq = \sum_{j=p+1}^{p+q-1}\sum_{j'=j+1}^{p+q}(h_jr_{j'}-h_{j'} (models the interactions between e_j and e_j' for p+1 <= j, j' <= p+q)
-        3) \sigma_rr = \sum_{k=p+q+1}^{p+q+r-1}\sum_{k'=k+1}^{p}(h_kr_{k'}-h_{k'}r_k) (models the interactions between e_k and e_k' for p+q+1 <= k, k' <= p+q+r)
-
+        .. math::
+
+            \sigma_pp = \sum_{i=1}^{p-1}\sum_{i'=i+1}^{p}(h_ir_{i'}-h_{i'}r_i) (models the interactions between e_i and e_i' for 1 <= i, i' <= p)
+            \sigma_qq = \sum_{j=p+1}^{p+q-1}\sum_{j'=j+1}^{p+q}(h_jr_{j'}-h_{j'}r_j) (models the interactions between e_j and e_j' for p+1 <= j, j' <= p+q)
+            \sigma_rr = \sum_{k=p+q+1}^{p+q+r-1}\sum_{k'=k+1}^{p+q+r}(h_kr_{k'}-h_{k'}r_k) (models the interactions between e_k and e_k' for p+q+1 <= k, k' <= p+q+r)
+
         For different base vector interactions, we have
 
-        4) \sigma_pq = \sum_{i=1}^{p}\sum_{j=p+1}^{p+q}(h_ir_j - h_jr_i) (interactionsn between e_i and e_j for 1<=i <=p and p+1<= j <= p+q)
-        5) \sigma_pr = \sum_{i=1}^{p}\sum_{k=p+q+1}^{p+q+r}(h_ir_k - h_kr_i) (interactionsn between e_i and e_k for 1<=i <=p and p+q+1<= k <= p+q+r)
-        6) \sigma_qr = \sum_{j=p+1}^{p+q}\sum_{j=p+q+1}^{p+q+r}(h_jr_k - h_kr_j) (interactionsn between e_j and e_k for p+1 <= j <=p+q and p+q+1<= j <= p+q+r)
+        .. math::
+
+            \sigma_pq = \sum_{i=1}^{p}\sum_{j=p+1}^{p+q}(h_ir_j - h_jr_i) (interactions between e_i and e_j for 1<=i <=p and p+1<= j <= p+q)
+            \sigma_pr = \sum_{i=1}^{p}\sum_{k=p+q+1}^{p+q+r}(h_ir_k - h_kr_i) (interactions between e_i and e_k for 1<=i <=p and p+q+1<= k <= p+q+r)
+            \sigma_qr = \sum_{j=p+1}^{p+q}\sum_{k=p+q+1}^{p+q+r}(h_jr_k - h_kr_j) (interactions between e_j and e_k for p+1 <= j <=p+q and p+q+1<= k <= p+q+r)
 
         '''
 
@@ -958,15 +970,15 @@ def forward_k_vs_all(self, x: torch.Tensor) -> torch.FloatTensor:
         """
         Kvsall training
 
-        (1) Retrieve real-valued embedding vectors for heads and relations \mathbb{R}^d .
-        (2) Construct head entity and relation embeddings according to Cl_{p,q}(\mathbb{R}^d) .
+        (1) Retrieve real-valued embedding vectors for heads and relations
+        (2) Construct head entity and relation embeddings according to Cl_{p,q, r}(\mathbb{R}^d) .
         (3) Perform Cl multiplication
         (4) Inner product of (3) and all entity embeddings
 
         forward_k_vs_with_explicit and this funcitons are identical
         Parameter
         ---------
-        x: torch.LongTensor with (n,2) shape
+        x: torch.LongTensor with (n, ) shape
 
         Returns
         -------
         torch.FloatTensor with (n, |E|) shape
@@ -1097,9 +1109,12 @@ def construct_cl_multivector(self, x: torch.FloatTensor, re: int, p: int, q: int
 
     def compute_sigma_pp(self, hp, rp):
         """
-        \sigma_{p,p}^* = \sum_{i=1}^{p-1}\sum_{i'=i+1}^{p}(x_iy_{i'}-x_{i'}y_i)
+        Compute
+        .. math::
+
+            \sigma_{p,p}^* = \sum_{i=1}^{p-1}\sum_{i'=i+1}^{p}(x_iy_{i'}-x_{i'}y_i)
 
-        sigma_{pp} captures the interactions between along p bases
+        \sigma_{pp} captures the interactions along the p bases
         For instance, let p e_1, e_2, e_3, we compute interactions between e_1 e_2, e_1 e_3 , and e_2 e_3
         This can be implemented with a nested two for loops
 
@@ -1125,7 +1140,12 @@ def compute_sigma_pp(self, hp, rp):
 
     def compute_sigma_qq(self, hq, rq):
         """
-        Compute \sigma_{q,q}^* = \sum_{j=p+1}^{p+q-1}\sum_{j'=j+1}^{p+q}(x_jy_{j'}-x_{j'}y_j) Eq. 16
+        Compute
+
+        .. math::
+
+            \sigma_{q,q}^* = \sum_{j=p+1}^{p+q-1}\sum_{j'=j+1}^{p+q}(x_jy_{j'}-x_{j'}y_j) Eq. 16
+
         sigma_{q} captures the interactions between along q bases
         For instance, let q e_1, e_2, e_3, we compute interactions between e_1 e_2, e_1 e_3 , and e_2 e_3
         This can be implemented with a nested two for loops
@@ -1157,7 +1177,9 @@ def compute_sigma_qq(self, hq, rq):
 
     def compute_sigma_rr(self, hk, rk):
         """
-        \sigma_{r,r}^* = \sum_{k=p+q+1}^{p+q+r-1}\sum_{k'=k+1}^{p}(x_ky_{k'}-x_{k'}y_k)
+        .. math::
+
+            \sigma_{r,r}^* = \sum_{k=p+q+1}^{p+q+r-1}\sum_{k'=k+1}^{p+q+r}(x_ky_{k'}-x_{k'}y_k)
 
         """
         # Compute indexes for the upper triangle of p by p matrix
@@ -1173,7 +1195,11 @@ def compute_sigma_pq(self, *, hp, hq, rp, rq):
 
     def compute_sigma_pq(self, *, hp, hq, rp, rq):
         """
-        \sum_{i=1}^{p} \sum_{j=p+1}^{p+q} (h_i r_j - h_j r_i) e_i e_j
+        Compute
+
+        .. math::
+
+            \sum_{i=1}^{p} \sum_{j=p+1}^{p+q} (h_i r_j - h_j r_i) e_i e_j
 
         results = []
         sigma_pq = torch.zeros(b, r, p, q)
@@ -1189,7 +1215,11 @@ def compute_sigma_pq(self, *, hp, hq, rp, rq):
 
     def compute_sigma_pr(self, *, hp, hk, rp, rk):
         """
-        \sum_{i=1}^{p} \sum_{j=p+1}^{p+q} (h_i r_j - h_j r_i) e_i e_j
+        Compute
+
+        .. math::
+
+            \sum_{i=1}^{p} \sum_{k=p+q+1}^{p+q+r} (h_i r_k - h_k r_i) e_i e_k
 
         results = []
         sigma_pq = torch.zeros(b, r, p, q)
@@ -1205,7 +1235,9 @@ def compute_sigma_qr(self, *, hq, hk, rq, rk):
 
     def compute_sigma_qr(self, *, hq, hk, rq, rk):
         """
-        \sum_{i=1}^{p} \sum_{j=p+1}^{p+q} (h_i r_j - h_j r_i) e_i e_j
+        .. math::
+
+            \sum_{j=p+1}^{p+q} \sum_{k=p+q+1}^{p+q+r} (h_j r_k - h_k r_j) e_j e_k
 
         results = []
         sigma_pq = torch.zeros(b, r, p, q)
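The docstrings above all describe the same family of antisymmetric interaction sums. A standalone sketch of the math, separate from the library code: sigma_pp sums over upper-triangle pairs within one base family, sigma_pq over all cross-family pairs. The (batch, p) and (batch, q) shapes and the function names are assumptions of the sketch, not the repo's multivector layout:

```python
import torch

# Sketch of the antisymmetric interaction sums described in the docstrings above.
# Shapes are assumed to be (batch, p) / (batch, q) per base family; the repo's
# compute_sigma_* methods operate on their own multivector layout, so this is
# an illustration of the math, not the library code.
def sigma_pp(hp: torch.Tensor, rp: torch.Tensor) -> torch.Tensor:
    p = hp.shape[-1]
    i, j = torch.triu_indices(p, p, offset=1)  # all index pairs i < i'
    return (hp[..., i] * rp[..., j] - hp[..., j] * rp[..., i]).sum(-1)

def sigma_pq(hp: torch.Tensor, hq: torch.Tensor, rp: torch.Tensor, rq: torch.Tensor) -> torch.Tensor:
    # sum over i <= p and p < j <= p+q of (h_i r_j - h_j r_i) factorizes
    # into products of per-family sums.
    return hp.sum(-1) * rq.sum(-1) - hq.sum(-1) * rp.sum(-1)

# Check sigma_pp against the nested double loop mentioned in the docstring.
def sigma_pp_loops(hp, rp):
    p = hp.shape[-1]
    s = torch.zeros(hp.shape[:-1])
    for a in range(p - 1):
        for b in range(a + 1, p):
            s = s + hp[..., a] * rp[..., b] - hp[..., b] * rp[..., a]
    return s

hp, rp = torch.randn(3, 5), torch.randn(3, 5)
assert torch.allclose(sigma_pp(hp, rp), sigma_pp_loops(hp, rp), atol=1e-5)
```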
