In these lines of code:

from tflearn.datasets import imdb
train, test, _ = imdb.load_data(path='imdb.pkl', n_words=10000, valid_portion=0.1)
It appears to split the data set into three lists: train, test, and everything else. Yet when I run the code, it trains on 22500 pieces of data:
Obtaining imdb db...
numpy.shape(train)= (2, 22500)
numpy.shape(test)= (2, 2500)
numpy.shape(_)= (2, 25000)
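For what it's worth, those sizes are consistent with the loader holding 10% of the 25,000 training reviews out as a validation set. Here is a minimal sketch of that kind of valid_portion split (illustrative names only; this is not the tflearn source, just my guess at where the 22500 comes from):

```python
def split_with_valid_portion(x, y, valid_portion=0.1):
    """Hold out the last `valid_portion` of the examples as a validation set."""
    n_valid = int(len(x) * valid_portion)
    n_train = len(x) - n_valid
    train = (x[:n_train], y[:n_train])
    valid = (x[n_train:], y[n_train:])
    return train, valid

# With 25,000 training reviews and valid_portion=0.1:
x = list(range(25000))
y = [0] * 25000
train, valid = split_with_valid_portion(x, y, valid_portion=0.1)
print(len(train[0]))  # 22500 -- matches numpy.shape(train) = (2, 22500)
print(len(valid[0]))  # 2500  -- matches numpy.shape(test)  = (2, 2500)
```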
This web page suggests that n_words should perhaps be num_words, but that gives an error:
https://keras.io/datasets/
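That page documents Keras's own IMDB loader, which takes num_words; tflearn's loader is a separate function that takes n_words, which might be why num_words raises an error. For comparison, a small sketch of the Keras call (assuming Keras is installed; shown only to contrast the two APIs):

```python
# Keras's IMDB loader (a different library from tflearn) uses `num_words`
# and returns the full 25,000 / 25,000 train/test split.
from keras.datasets import imdb as keras_imdb

(x_train, y_train), (x_test, y_test) = keras_imdb.load_data(num_words=10000)
print(len(x_train), len(x_test))  # 25000 25000
```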
I suspect this may be a bug in the tflearn library.