Call for comments for seq2seq and charrnn #7
As of now, it is easy to understand. 👍 👍 👍 But I think there's still room for improvement.

**1. Combine multiple global expressions into one main function**

Instead of writing everything in a procedural way:

```python
def some_function():
    pass

expression1
expression2
blah blah
```

let's do this:

```python
def some_function():
    pass

def main():
    expression1
    expression2
    blah_blah

if __name__ == "__main__":
    main()
```

Otherwise it's hard to differentiate the top-level script code from the function definitions.

**2. Let's add comments with type annotations (and avoid bad names)**
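As a concrete illustration of the pattern above (the function and variable names here are made up for the example, not from the repo):

```python
def build_greeting(name):
    # Hypothetical helper; stands in for the real function definitions.
    return "Hello, " + name


def main():
    # Top-level script logic lives here instead of at module scope,
    # so importing this file does not run the experiment.
    message = build_greeting("PyTorch")
    print(message)


if __name__ == "__main__":
    main()
```

Because the script code now sits inside `main()`, importing the module for reuse or testing triggers nothing.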
So, instead of this:

```python
def str2tensor(str, eos=False):
    tensor = [ord(c) for c in str]
    if eos:
        tensor.append(EOS_token)
    tensor = torch.LongTensor(tensor)

    # Do before wrapping with Variable
    if torch.cuda.is_available():
        tensor = tensor.cuda()

    return Variable(tensor)
```

let's do this:

```python
def str2tensor(message: str, eos: bool = False) -> Variable:
    """Converts message to a tensor

    Args:
        message (str): Message string
        eos (bool, optional): Append `EOS_token`

    Returns:
        Variable: The resulting `torch.autograd.Variable` tensor
    """
    tensor = [ord(c) for c in message]
    if eos:
        tensor.append(EOS_token)
    tensor = torch.LongTensor(tensor)

    # Move to GPU before wrapping with Variable
    if torch.cuda.is_available():
        tensor = tensor.cuda()

    return Variable(tensor)
```

Note that the first version also shadows the builtin `str`, which is one of the bad names to avoid.

**Benefits of type annotation**
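To see the annotation benefit without the torch dependency, here is a hedged stand-alone sketch. The `EOS_token` value and the plain-list return type are assumptions for illustration only; the real function returns a `torch.autograd.Variable`.

```python
from typing import List

EOS_token = 1  # assumed sentinel value for this sketch


def str2codes(message: str, eos: bool = False) -> List[int]:
    """Converts message to a list of character codes.

    Args:
        message: Message string.
        eos: Append ``EOS_token`` when True.

    Returns:
        The ordinal code of each character, optionally ending
        with ``EOS_token``.
    """
    codes = [ord(c) for c in message]
    if eos:
        codes.append(EOS_token)
    return codes
```

With the annotations in place, a static checker such as mypy can flag a call like `str2codes(123)` before the script ever runs, and editors can show the expected argument and return types inline.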
@kkweon Thanks. I tried to make the `main` function, but in this case we need to pass `encoder` and `decoder` for …

Yes, …

@kkweon, we also need to pass `criterion` as an argument. :-)
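One way to wire this up is to construct everything in `main` and pass it down. This is only a structural sketch with stand-in classes and a stand-in loss; the real `encoder`, `decoder`, and `criterion` come from the repo's seq2seq code.

```python
class Encoder:
    # Stand-in for the real seq2seq encoder module.
    pass


class Decoder:
    # Stand-in for the real seq2seq decoder module.
    pass


def train(encoder, decoder, criterion):
    # Hypothetical training step: receives its collaborators as
    # arguments instead of reading module-level globals.
    return criterion(0.0, 0.0)


def main():
    encoder = Encoder()
    decoder = Decoder()

    def criterion(pred, target):  # stand-in loss function
        return abs(pred - target)

    loss = train(encoder, decoder, criterion)
    print(loss)


if __name__ == "__main__":
    main()
```

Keeping `train` free of globals makes it easy to swap in a different criterion or model without editing the function body.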
Could you please check if this code is easy/correct enough to show the basic concepts?
https://github.com/hunkim/PyTorchZeroToAll/blob/master/13_1_seq2seq.py
https://github.com/hunkim/PyTorchZeroToAll/blob/master/12_5_char_rnn.py
@yunjey @kkweon, thanks in advance!