Bugs when running VEEGAN for dataset credit #46

Open
zhao-zilong opened this issue Jul 26, 2020 · 3 comments
Comments

@zhao-zilong

zhao-zilong commented Jul 26, 2020

  • SDGym version:
  • Python version: 3.6
  • Operating System: OSX

Description

Thanks a lot for sharing the code. I tried to run the benchmark for VEEGAN with the credit dataset, but I got the error below. I couldn't find where the in-place operation actually happens. Any ideas?

What I Did

Error computing scores for VEEGANSynthesizer on dataset credit - iteration 0
Traceback (most recent call last):
  File "<stdin>", line 8, in compute_benchmark
  File "/Users/zhaozilong/Documents/SDGym/sdgym/synthesizers/base.py", line 17, in fit_sample
    self.fit(data, categorical_columns, ordinal_columns)
  File "/Users/zhaozilong/Documents/SDGym/sdgym/synthesizers/veegan.py", line 148, in fit
    loss_g.backward(retain_graph=True)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
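One way to act on the hint in the RuntimeError is to enable autograd anomaly detection before fitting, so PyTorch also reports the forward-pass operation that produced the offending tensor. A minimal sketch, assuming the same benchmark call that triggers the error:

import torch
from sdgym import benchmark
from sdgym.synthesizers import VEEGANSynthesizer

# Report the forward operation that created the tensor later modified in place;
# this slows training but makes the failing line in veegan.py easy to locate.
torch.autograd.set_detect_anomaly(True)

scores = benchmark(synthesizers=[VEEGANSynthesizer], datasets=['credit'])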
@csala
Contributor

csala commented Jul 27, 2020

Hello @zhao-zilong

Thanks for reporting this. Would you mind adding the following details?

  • Snippet of the commands that you executed, including the imports and the function call that provoked the error
  • The output of the command: pip freeze

@zhao-zilong
Author

zhao-zilong commented Jul 27, 2020

Hi @csala,

Of course, the snippet is as follows:

(pytorch0.3) zhaozilongdeMBP:Documents zhaozilong$ python
Python 3.6.10 |Anaconda, Inc.| (default, May  7 2020, 23:06:31) 
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> from sdgym import benchmark
>>> from sdgym.synthesizers import (VEEGANSynthesizer)
>>> scores = benchmark(synthesizers=[VEEGANSynthesizer], datasets=['credit'])
Error computing scores for VEEGANSynthesizer on dataset credit - iteration 0
Traceback (most recent call last):
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/benchmark.py", line 70, in compute_benchmark
    synthesized = synthesizer(train, categoricals, ordinals)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/synthesizers/base.py", line 17, in fit_sample
    self.fit(data, categorical_columns, ordinal_columns)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/synthesizers/veegan.py", line 147, in fit
    loss_g.backward(retain_graph=True)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Error computing scores for VEEGANSynthesizer on dataset credit - iteration 1
Traceback (most recent call last):
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/benchmark.py", line 70, in compute_benchmark
    synthesized = synthesizer(train, categoricals, ordinals)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/synthesizers/base.py", line 17, in fit_sample
    self.fit(data, categorical_columns, ordinal_columns)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/synthesizers/veegan.py", line 147, in fit
    loss_g.backward(retain_graph=True)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Error computing scores for VEEGANSynthesizer on dataset credit - iteration 2
Traceback (most recent call last):
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/benchmark.py", line 70, in compute_benchmark
    synthesized = synthesizer(train, categoricals, ordinals)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/synthesizers/base.py", line 17, in fit_sample
    self.fit(data, categorical_columns, ordinal_columns)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/synthesizers/veegan.py", line 147, in fit
    loss_g.backward(retain_graph=True)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/benchmark.py", line 232, in benchmark
    synthesizer_scores = compute_benchmark(synthesizer, datasets, iterations)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/sdgym/benchmark.py", line 79, in compute_benchmark
    return pd.concat(results, sort=False)
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 255, in concat
    sort=sort,
  File "/anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 304, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

and the output of pip freeze:

absl-py==0.9.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
cycler==0.10.0
decorator==4.4.2
future==0.18.2
gast==0.3.3
google-auth==1.19.2
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.30.0
h5py==2.10.0
idna==2.10
importlib-metadata==1.7.0
joblib==0.16.0
Keras-Preprocessing==1.1.2
kiwisolver==1.2.0
Markdown==3.2.2
matplotlib==3.3.0
networkx==2.4
numpy==1.17.5
oauthlib==3.1.0
opt-einsum==3.3.0
pandas==0.25.3
Pillow==7.2.0
pomegranate==0.11.2
protobuf==3.12.2
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.1
PyYAML==5.3.1
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
scikit-learn==0.21.3
scipy==1.4.1
sdgym==0.2.1
six==1.15.0
tensorboard==2.2.2
tensorboard-plugin-wit==1.7.0
tensorflow==2.2.0
tensorflow-estimator==2.2.0
termcolor==1.1.0
torch==1.5.1
torchvision==0.6.1
tqdm==4.48.0
urllib3==1.25.10
Werkzeug==1.0.1
wrapt==1.12.1
zipp==3.1.0

@sbrugman
Contributor

Encountering the same issue (Python 3.8, OSX). This comment from one of the PyTorch devs indicates that before PyTorch 1.5 the gradients are computed incorrectly and simply no error is raised, rather than this being a dependency incompatibility. Might be worth looking into to ensure that the comparison is fair.
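For reference, a minimal stand-in sketch of the error class being discussed (not the actual sdgym/veegan.py code; the models and shapes here are assumptions): once the discriminator optimizer steps, its weights are updated in place, so a second backward() through the retained graph fails on PyTorch >= 1.5, while older versions run silently with gradients that can be wrong.

import torch
import torch.nn as nn

D = nn.Linear(10, 1)                      # hypothetical stand-in discriminator
G = nn.Linear(5, 10)                      # hypothetical stand-in generator
opt_d = torch.optim.Adam(D.parameters())
opt_g = torch.optim.Adam(G.parameters())

fake = G(torch.randn(128, 5))
score = D(fake)                           # graph saved for both losses

loss_d = score.mean()
loss_d.backward(retain_graph=True)
opt_d.step()                              # in-place update of weights the retained graph still needs

opt_g.zero_grad()
loss_g = -score.mean()
# loss_g.backward()                       # RuntimeError on PyTorch >= 1.5 (version counter mismatch)

# One common way around it: recompute the discriminator output for the
# generator loss instead of reusing the stale retained graph.
loss_g = -D(G(torch.randn(128, 5))).mean()
loss_g.backward()
opt_g.step()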
