
Call for comments for seq2seq and charrnn #7

hunkim opened this issue Oct 31, 2017 · 4 comments

hunkim (Owner) commented Oct 31, 2017

Could you please check whether this code is simple and correct enough to show the basic concepts?

https://github.com/hunkim/PyTorchZeroToAll/blob/master/13_1_seq2seq.py
https://github.com/hunkim/PyTorchZeroToAll/blob/master/12_5_char_rnn.py

@yunjey @kkweon, thanks in advance!

kkweon commented Oct 31, 2017

As of now, it is easy to understand. 👍 👍 👍

But I think there's still room for improvement.

1. Combine multiple global expressions into one main function

Instead of writing everything at module level like this:

```python
def some_function():
    pass

expression1

expression2

blah_blah
```

Let's do this!

```python
def some_function():
    pass

def main():
    expression1
    expression2
    blah_blah

if __name__ == "__main__":
    main()
```

because module-level code is hard to distinguish from the function definitions.
It seemed quite confusing to me even though it was placed at the bottom.
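As a concrete end-to-end sketch of this pattern (the names `greet` and `main` are hypothetical, not from the repo):

```python
def greet(name: str) -> str:
    # An ordinary function definition, clearly separate from script logic.
    return f"Hello, {name}!"

def main() -> None:
    # All module-level "script" expressions are gathered here.
    message = greet("world")
    print(message)

if __name__ == "__main__":
    # Runs only when executed as a script, not when imported.
    main()
```

Importing this module from another file (or a test) no longer triggers the training/script code as a side effect, which is the main payoff of the guard.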

2. Let's add comments with type annotations (and avoid bad names)

`str` is a Python built-in, and I would still avoid shadowing it, since pylint will nag about this.
And I don't want to write a custom lintrc for that.

So, instead of this:

```python
def str2tensor(str, eos=False):
    tensor = [ord(c) for c in str]
    if eos:
        tensor.append(EOS_token)
    tensor = torch.LongTensor(tensor)

    # Do this before wrapping with Variable
    if torch.cuda.is_available():
        tensor = tensor.cuda()

    return Variable(tensor)
```

Let's do this. Plus, I am greatly in favor of type annotations, though others may have different opinions.

```python
def str2tensor(message: str, eos: bool = False) -> Variable:
    """Converts a message to a tensor.

    Args:
        message (str): Message string
        eos (bool, optional): Append `EOS_token`

    Returns:
        Variable: A `torch.autograd.Variable` wrapping the tensor
    """
    tensor = [ord(c) for c in message]
    if eos:
        tensor.append(EOS_token)
    tensor = torch.LongTensor(tensor)

    # Do this before wrapping with Variable
    if torch.cuda.is_available():
        tensor = tensor.cuda()

    return Variable(tensor)
```

Benefits of type annotations

  • Type annotations enable code completion inside the function as well
  • Type checking can be done with mypy
  • You can always look back to see what types the function requires
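For instance, here is a pure-Python analogue of the annotated function above that mypy can check (no torch dependency; the name `str2codes` and the `EOS_token` value are made up for illustration):

```python
from typing import List

EOS_token = 1  # hypothetical end-of-sequence id, for illustration only

def str2codes(message: str, eos: bool = False) -> List[int]:
    """Convert a message to a list of character codes.

    Args:
        message (str): Message string
        eos (bool, optional): Append `EOS_token`

    Returns:
        List[int]: Unicode code points, optionally terminated by `EOS_token`
    """
    codes = [ord(c) for c in message]
    if eos:
        codes.append(EOS_token)
    return codes
```

Running mypy over a call like `str2codes(42)` reports an incompatible argument type, which is exactly the class of mistake annotations catch before runtime.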

hunkim (Owner) commented Oct 31, 2017

@kkweon Thanks. I tried making a main function, but in this case we need to pass the encoder and decoder to the train and translate functions. Do you think that's OK?

kkweon commented Oct 31, 2017

Yes, explicit > implicit. Wasn't it one of your favorite quotes as well 😉?

hunkim (Owner) commented Oct 31, 2017

@kkweon, we also need to pass criterion as an argument. :-)
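A minimal sketch of what passing the models and criterion explicitly might look like (all names are hypothetical, and plain-Python stubs stand in for the real PyTorch modules and loss):

```python
class StubModel:
    """Stand-in for an encoder/decoder nn.Module, for illustration only."""
    def __init__(self, name: str):
        self.name = name

def train(encoder: StubModel, decoder: StubModel, criterion, pair) -> float:
    # Everything the training step needs arrives as an argument:
    # explicit > implicit, and the function is trivially testable.
    src, tgt = pair
    return criterion(src, tgt)  # pretend "loss"

def main() -> None:
    encoder = StubModel("encoder")
    decoder = StubModel("decoder")
    # Toy stand-in for something like nn.NLLLoss
    criterion = lambda s, t: abs(len(s) - len(t))
    loss = train(encoder, decoder, criterion, ("hello", "hola"))
    print(loss)

if __name__ == "__main__":
    main()
```

With this shape, swapping the criterion (or a mock encoder in a test) is a one-line change at the call site rather than an edit to a global.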

Hammania689 added a commit to Hammania689/PyTorchZeroToAll that referenced this issue Jun 22, 2019