
TensorFlow 1.0.0-alpha #47

Open
hunkim opened this issue Jan 11, 2017 · 2 comments

Comments

@hunkim
Owner

hunkim commented Jan 11, 2017

Exciting news!

https://github.com/tensorflow/tensorflow/releases/tag/v1.0.0-alpha

However, it includes some API-breaking changes, so we may need to refactor our code.

Please feel free to test and send us a PR for TF 1.0!

@normanheckscher
Collaborator

  • I've pushed some changes to a new branch.
  • Need to look further into tf.contrib.seq2seq.
  • I can't compare speed with older versions yet because the TF 1.0 pip install isn't optimally compiled for my machine.
  • Need to investigate the deprecation of concat.

WARNING:tensorflow:From /Users/norman/Documents/workspace/word-rnn-tensorflow/model.py:66: concat (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2016-12-14.
Instructions for updating:
This op will be removed after the deprecation date. Please switch to tf.concat_v2().
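The fix the warning asks for is an argument-order swap: the TF 0.x call `tf.concat(concat_dim, values)` became `tf.concat_v2(values, axis)` (and in the final 1.0 release `tf.concat` itself adopted the new order). A minimal pure-Python stand-in of the swap, runnable without TensorFlow; the names `concat_old`/`concat_new` are hypothetical and just mirror the two signatures:

```python
# TF 0.x signature: concat(concat_dim, values) -- the axis comes first.
def concat_old(concat_dim, values):
    if concat_dim == 0:
        # Axis 0: stack the rows of each matrix top to bottom.
        return [row for v in values for row in v]
    # Axis 1: join corresponding rows side by side.
    return [sum(rows, []) for rows in zip(*values)]

# TF 1.0 / tf.concat_v2 signature: concat(values, axis) -- values come first.
def concat_new(values, axis):
    return concat_old(axis, values)

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(concat_new([a, b], 0))  # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(concat_new([a, b], 1))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```

In real model code the mechanical fix is just reordering the two arguments at each call site.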

W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
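These warnings go away when TensorFlow is compiled from source with the matching instruction sets enabled. A sketch of the build invocation, assuming a bazel-configured TF 1.0-era source checkout (this is the standard remedy, not something this repo ships):

```shell
# Inside a configured tensorflow source tree: build the pip package
# with the CPU features the warnings say are available.
bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 --copt=-mavx \
    //tensorflow/tools/pip_package:build_pip_package
```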

@normanheckscher
Collaborator

I slipped in a new --gpu_mem switch that defaults to using 66% of GPU memory. I've found that this helps load the graph onto my limited 1 GB shared notebook GPU (rMBP mid-2012). It might not be needed, or it may even hinder performance, on GPUs with more memory.
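A switch like this presumably feeds TensorFlow's per-process GPU memory fraction. A sketch of the wiring; the flag name --gpu_mem comes from the comment above, but the argparse setup is hypothetical and the session lines are left as comments since they need TensorFlow and a GPU:

```python
import argparse

parser = argparse.ArgumentParser()
# Fraction of GPU memory TensorFlow may allocate (66% by default).
parser.add_argument("--gpu_mem", type=float, default=0.66,
                    help="fraction of GPU memory to allocate")
args = parser.parse_args([])  # parse defaults, for illustration

# With TensorFlow installed, the fraction would feed the session config:
# gpu_opts = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_mem)
# sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_opts))
print(args.gpu_mem)  # 0.66
```

Capping the fraction below 1.0 leaves headroom for the display on a shared GPU, which matches the 1 GB notebook use case described above.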

I can change or remove this.
