
Decoder variable output_length? #113

Open
TrentBrick opened this issue Mar 25, 2019 · 2 comments

Comments

@TrentBrick

I am trying to build an autoencoder that takes variable-length inputs (batched together). I want to set decoder=True so that the decoder receives the latent vector as input at every timestep. However, when I set decoder=True I also have to provide output_length= <int>. How can I make it accept a dynamic length?
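For context, with a fixed length this "latent as input at every decoder timestep" pattern can be sketched in plain tf.keras using RepeatVector (the sizes and layer choices here are illustrative assumptions, not the library's API); the question is how to do the same when the timestep dimension varies:

```python
import tensorflow as tf

TIMESTEPS, FEATURES, LATENT = 10, 8, 16  # illustrative sizes, not from the issue

inputs = tf.keras.Input(shape=(TIMESTEPS, FEATURES))
latent = tf.keras.layers.LSTM(LATENT)(inputs)               # encode to one vector
repeated = tf.keras.layers.RepeatVector(TIMESTEPS)(latent)  # latent fed at every step
outputs = tf.keras.layers.LSTM(FEATURES, return_sequences=True)(repeated)
autoencoder = tf.keras.Model(inputs, outputs)
```

RepeatVector needs a Python int, which is exactly what breaks once the length is dynamic.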

@TrentBrick
Author

I think I have solved it: create a Lambda layer that takes the input layer, finds its timestep dimension, and returns that value to pass to output_length=

import tensorflow as tf
from keras.layers import Lambda

def get_length(args):
    input_layer = args
    return tf.shape(input_layer)[1]  # dynamic timestep dimension (a tensor)

seq_length_per_batch = Lambda(get_length, output_shape=(None,))(inputs)
# Put this into the output_length parameter

@TrentBrick
Author

TrentBrick commented Mar 25, 2019

Never mind, this does not work. When I do it I get the error:

TypeError: Using a tf.Tensor as a Python bool is not allowed. Use if t is not None: instead of if t: to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.

So I have to provide an integer value...

There must be some edit to the code that would let my solution above work?
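One workaround, sketched in plain tf.keras rather than this library's API (the layer sizes and names below are assumptions for illustration), is to sidestep output_length entirely: tile the latent vector to the input's dynamic timestep dimension inside the graph, so the decoder RNN sees the latent state at every step and no Python int is ever required:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(None, 8))    # variable-length input sequences
encoded = tf.keras.layers.LSTM(16)(inputs)  # latent vector, shape (batch, 16)

def repeat_to_input_length(args):
    latent, reference = args
    steps = tf.shape(reference)[1]  # dynamic length as a tensor is fine inside the graph
    return tf.tile(tf.expand_dims(latent, 1), [1, steps, 1])

repeated = tf.keras.layers.Lambda(repeat_to_input_length)([encoded, inputs])
decoded = tf.keras.layers.LSTM(8, return_sequences=True)(repeated)
model = tf.keras.Model(inputs, decoded)
```

The tensor length never leaves the graph, so the "tf.Tensor as a Python bool" error from building-time integer checks does not arise; the model accepts batches of any length.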
