
Error while running synthesis_ppg_script.py #6

Open
narendranp opened this issue Jul 16, 2021 · 2 comments

@narendranp

Hi,

I am getting the following error while trying to run synthesis_ppg_script.py. I am using this command:

python synthesis_ppg_script.py /home/ubuntu/narendra/VC_dataset/SV2TTS/synthesizer_test/ /home/ubuntu/narendra/Adversarial-Many-to-Many-VC-master/ --num_speakers 2 --num_utterances 2

Also, could you please let me know what changes need to be made in hparams.py, or in any other file, in order to run synthesis_ppg_script.py?

Traceback (most recent call last):
  File "synthesis_ppg_script.py", line 138, in <module>
    main()
  File "synthesis_ppg_script.py", line 132, in main
    generated_wavs_avg = synthesize_ppg_batch_avg_embed(ppg_paths, embed_paths)
  File "synthesis_ppg_script.py", line 49, in synthesize_ppg_batch_avg_embed
    generated_wavs.append(synthesize_ppg(ppg, avg_embed))
  File "synthesis_ppg_script.py", line 22, in synthesize_ppg
    specs = synthesizer.synthesize_spectrograms([ppg], [embed])
  File "/home/ubuntu/narendra/Adversarial-Many-to-Many-VC-master/synthesizer/inference.py", line 80, in synthesize_spectrograms
    self.load()
  File "/home/ubuntu/narendra/Adversarial-Many-to-Many-VC-master/synthesizer/inference.py", line 61, in load
    self._model = Tacotron2(self.checkpoint_fpath, hparams)
  File "/home/ubuntu/narendra/Adversarial-Many-to-Many-VC-master/synthesizer/tacotron2.py", line 28, in __init__
    split_infos=split_infos)
  File "/home/ubuntu/narendra/Adversarial-Many-to-Many-VC-master/synthesizer/models/tacotron.py", line 178, in initialize
    p = tf.cast(global_step, tf.float32) / tf.cast(tacotron_train_steps, tf.float32)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 614, in cast
    x = ops.convert_to_tensor(x, name="x")
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1039, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1097, in convert_to_tensor_v2
    as_ref=False)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1175, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 304, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 245, in constant
    allow_broadcast=True)
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 283, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/home/ubuntu/anaconda3/envs/vc-speech/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 454, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
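
In case it helps narrow this down: this ValueError is what tf.cast() raises whenever it is handed a plain Python None, so it looks like one of the two operands on tacotron.py line 178 (global_step or tacotron_train_steps) ends up as None when the graph is built for inference. A minimal standalone sketch, not using any code from this repo, with placeholder values:

```python
import tensorflow as tf  # tensorflow-gpu 1.13.1

# Placeholder stand-ins for the two operands on tacotron.py line 178;
# either of them being a plain Python None reproduces the failure.
global_step = None              # e.g. no global step tensor at inference time
tacotron_train_steps = 100000   # arbitrary placeholder value

# convert_to_tensor(None) inside tf.cast raises:
#   ValueError: None values not supported.
p = tf.cast(global_step, tf.float32) / tf.cast(tacotron_train_steps, tf.float32)
```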

@shaojinding
Owner

shaojinding commented Jul 16, 2021 via email

@narendranp
Author

Hi,
Thanks for the reply. I am using tensorflow-gpu==1.13.1. I was getting a similar error while training; at that time I set "if_use_speaker_classifier=True" and "n_speakers=4" (the number of speakers I used) in hparams.py, and the error went away. So I suspect I am missing something similar here, e.g. some parameters to change in hparams.py or another file. Is the setup used for training and testing the same, or are there changes to make?
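
For reference, the training-time edits I mentioned looked roughly like this. This is only a sketch: the repo's actual hparams.py has many more fields, and I am assuming the usual Tacotron-2-style tf.contrib.training.HParams container, so names and defaults may differ.

```python
import tensorflow as tf

hparams = tf.contrib.training.HParams(
    # Speaker-classifier settings I changed before training:
    if_use_speaker_classifier=True,
    n_speakers=4,                  # number of speakers in my dataset

    # The operand from the traceback; if a field like this were left as None,
    # tf.cast() would fail with "None values not supported".
    tacotron_train_steps=100000,   # placeholder value, not the repo's default
)
```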
