python main.py failed #4

Open
zjingwang opened this issue Apr 12, 2023 · 2 comments

@zjingwang

Hello, sorry to bother you, but I got an error when I ran python main.py keras2circom/test.h5 with a test.h5 generated by my mnist.py, which is copied from your best_practice.ipynb.

Here is the mnist.py file:

import json
import os
from tensorflow.keras import Model
from tensorflow.keras.layers import (
    Input,
    AveragePooling2D,
    BatchNormalization,
    Conv2D,
    Dense,
    GlobalAveragePooling2D,
    Lambda,  # only for polynomial activation in the form of `Lambda(lambda x: x**2+x)`
    Softmax,
)
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

current_dir = os.getcwd()

# load MNIST and one-hot encode the labels
(X_train, y_train), (X_test, y_test) = mnist.load_data(path=current_dir + '/mnist.npz')
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# add the channel dimension expected by Conv2D
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
inputs = Input(shape=(28,28,1))
out = Conv2D(4, 3, use_bias=False)(inputs)
out = BatchNormalization()(out)
out = Lambda(lambda x: x**2+x)(out) # best practice: use polynomial activation instead of ReLU
out = AveragePooling2D()(out) # best practice: use AveragePooling2D instead of MaxPooling2D
out = Conv2D(16, 3, use_bias=False)(out)
out = BatchNormalization()(out)
out = Lambda(lambda x: x**2+x)(out)
out = AveragePooling2D()(out)
out = GlobalAveragePooling2D()(out) # best practice: use GlobalAveragePooling2D instead of Flatten
out = Dense(10, activation=None)(out)
out = Softmax()(out)
model = Model(inputs, out)
model.summary()
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['acc']
)
model.fit(X_train, y_train, epochs=100, batch_size=128, validation_data=(X_test, y_test))
model.save('test.h5')
# model without the final Softmax: use its logits as the reference output
model2 = Model(model.input, model.layers[-2].output)
X = X_test[[0]]
y = model2.predict(X)
# inspect each layer's config, shapes, and weight shapes
for layer in model.layers:
    print(layer.__class__.__name__, layer.get_config())
    try:
        print(layer.get_config()['function'])  # only present for Lambda layers
    except KeyError:
        pass
    print(layer.get_input_shape_at(0), layer.get_output_shape_at(0))
    try:
        print(layer.get_weights()[0].shape)
        print(layer.get_weights()[1].shape)
    except IndexError:
        pass

# save one input/output pair for verifying the generated circuit
with open("test.json", "w") as f:
    json.dump({'X': X.flatten().tolist(), 'y': y.flatten().tolist()}, f)

And here is the error:

root@984cb5a21ce9:/usr/wzj# python main.py keras2circom/test.h5                                                                                             
2023-04-12 02:16:30.752955: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA                                            
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.                                                                 
2023-04-12 02:16:31.020164: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.                    
2023-04-12 02:16:31.045363: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory                                                                                    
2023-04-12 02:16:31.045446: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-04-12 02:16:31.094098: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-04-12 02:16:32.528631: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-04-12 02:16:32.528909: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-04-12 02:16:32.528941: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-04-12 02:16:33.942876: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2023-04-12 02:16:33.942948: W tensorflow/stream_executor/cuda/cuda_driver.cc:263] failed call to cuInit: UNKNOWN ERROR (303)
2023-04-12 02:16:33.943119: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (984cb5a21ce9): /proc/driver/nvidia/version does not exist
2023-04-12 02:16:33.943566: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/usr/wzj/main.py", line 25, in <module>
    main()
  File "/usr/wzj/main.py", line 21, in main
    transpiler.transpile(args['<model.h5>'], args['--output'], args['--raw'])
  File "/usr/wzj/keras2circom/transpiler.py", line 16, in transpile
    circuit.add_components(transpile_layer(layer))
  File "/usr/wzj/keras2circom/transpiler.py", line 80, in transpile_layer
    raise ValueError('Only polynomial activation functions are supported')
ValueError: Only polynomial activation functions are supported

I also printed the value of s.ratio() in transpiler.py:

0.6615384615384615

Is this some kind of accuracy problem?

@socathie
Member

Thanks for bringing this up. This seems to be a machine-specific issue that I can sometimes reproduce and sometimes cannot. Basically, s.ratio() checks how closely the encoded lambda function matches the desired polynomial activation, but it appears the encoding is machine-specific.
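
To illustrate, here is a minimal standalone sketch of the kind of check involved, assuming s is a difflib.SequenceMatcher comparing the serialized form of the Lambda function against a reference serialization (the exact strings and threshold used in transpiler.py may differ):

import difflib

# Hypothetical illustration: SequenceMatcher.ratio() returns a similarity
# score in [0, 1]. If the serialized form of the lambda differs across
# machines or Python versions, the score drops even though the function
# is the same polynomial.
reference = "lambda x: x**2+x"        # the expected polynomial activation
decoded = "lambda x: (x ** 2) + x"    # hypothetical machine-specific serialization
s = difflib.SequenceMatcher(None, reference, decoded)
print(s.ratio())  # noticeably below 1.0 even though the functions are equivalent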

If you are following the example, it's safe to just comment out L78-80 of transpiler.py for now.
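
For reference, the check being disabled presumably looks something like the following. This is a paraphrased sketch based on the traceback, not the verbatim source; the exact condition and threshold are assumptions:

# keras2circom/transpiler.py, around L78-80 (paraphrased sketch):
#
#     if s.ratio() < THRESHOLD:  # THRESHOLD is a hypothetical placeholder
#         raise ValueError('Only polynomial activation functions are supported')
#
# Commenting out this conditional skips the similarity check, which is safe
# here because the Lambda really is lambda x: x**2+x.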

I'm actively looking for a more universal fix/condition that will not give rise to this issue.

@zjingwang
Author

Got it, thanks for your time. I really appreciate it.
