I am trying to make an autoencoder that uses variable-length inputs (batched together). I want to set decoder=True so that the decoding portion receives the latent vector as input at every timestep. However, when I set decoder=True I also need to provide an integer output_length=<int>. How can I make it accept a dynamic length?
I think I have solved it: create a Lambda layer that takes the input tensor, reads its time dimension, and pass the result as output_length:

```python
import tensorflow as tf
from keras.layers import Lambda

def get_length(input_layer):
    # Symbolic tensor holding the timestep dimension of the current batch
    return tf.shape(input_layer)[1]

# Intended to be passed as the output_length parameter
seq_length_per_batch = Lambda(get_length, output_shape=(1,))(inputs)
```
Never mind, this does not work. When I try it I get the error:
TypeError: Using a tf.Tensor as a Python bool is not allowed. Use if t is not None: instead of if t: to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
So I have to provide an integer value...
Is there some edit to the code that would make my solution above work?
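Since output_length has to be a plain Python int known when the model is built (tf.shape only yields a symbolic tensor at graph-construction time), one common workaround is to pad every sequence to a shared maximum length and pass that maximum as output_length, optionally combined with a Masking layer so padded timesteps are ignored. A minimal sketch of the padding step, with illustrative names and assuming (timesteps, features)-shaped NumPy arrays:

```python
import numpy as np

def pad_batch(sequences, max_len=None, pad_value=0.0):
    """Pad a list of (timesteps, features) arrays to a common length.

    Returns the stacked batch and the length used, which can then be
    passed as the static output_length when building the model.
    """
    if max_len is None:
        max_len = max(seq.shape[0] for seq in sequences)
    n_features = sequences[0].shape[1]
    # Pre-fill with the pad value, then copy each sequence in
    batch = np.full((len(sequences), max_len, n_features),
                    pad_value, dtype=np.float32)
    for i, seq in enumerate(sequences):
        batch[i, :seq.shape[0], :] = seq
    return batch, max_len

# Two sequences of lengths 3 and 5 are padded to length 5
seqs = [np.ones((3, 2)), np.ones((5, 2))]
batch, max_len = pad_batch(seqs)
# batch.shape == (2, 5, 2); use max_len as output_length
```

An alternative, if padding is undesirable, is to bucket the dataset by sequence length and build one model per length with shared weights, but the padding approach is usually simpler.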