run prepro_img.lua failed #11

Open
andyyuan78 opened this issue Feb 20, 2016 · 2 comments

@andyyuan78
envy@ub1404envy:/os_prj/github/_QA/VQA_LSTM_CNN$ th prepro_img.lua -backend nn -input_json data_prepro.json -image_root data_prepro.h5 -cnn_proto model/ -cnn_model VGG_ILSVRC_19_layers.caffemodel
{
backend : "nn"
image_root : "data_prepro.h5"
cnn_proto : "model/"
batch_size : 10
input_json : "data_prepro.json"
gpuid : 1
out_name : "data_img.h5"
cnn_model : "VGG_ILSVRC_19_layers.caffemodel"
}
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
processing 82459 images...
/home/envy/torch/install/bin/luajit: /home/envy/torch/install/share/lua/5.1/image/init.lua:650: attempt to call method 'nDimension' (a nil value)
stack traceback:
/home/envy/torch/install/share/lua/5.1/image/init.lua:650: in function 'scale'
prepro_img.lua:51: in function 'loadim'
prepro_img.lua:95: in main chunk
[C]: in function 'dofile'
...envy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
envy@ub1404envy:~/os_prj/github/_QA/VQA_LSTM_CNN$

envy@ub1404envy:~/os_prj/github/_QA/VQA_LSTM_CNN$ tree
.
├── data
│   ├── annotations
│   │   ├── mscoco_train2014_annotations.json
│   │   ├── mscoco_val2014_annotations.json
│   │   ├── MultipleChoice_mscoco_test2015_questions.json
│   │   ├── MultipleChoice_mscoco_test-dev2015_questions.json
│   │   ├── MultipleChoice_mscoco_train2014_questions.json
│   │   ├── MultipleChoice_mscoco_val2014_questions.json
│   │   ├── OpenEnded_mscoco_test2015_questions.json
│   │   ├── OpenEnded_mscoco_test-dev2015_questions.json
│   │   ├── OpenEnded_mscoco_train2014_questions.json
│   │   └── OpenEnded_mscoco_val2014_questions.json
│   ├── vqa_preprocessing.py
│   ├── vqa_raw_test.json
│   ├── vqa_raw_train.json
│   └── zip
│       ├── Annotations_Train_mscoco.zip
│       ├── Annotations_Val_mscoco.zip
│       ├── Questions_Test_mscoco.zip
│       ├── Questions_Train_mscoco.zip
│       └── Questions_Val_mscoco.zip
├── data_prepro.h5
├── data_prepro.json
├── data_train_val.zip
├── eval.lua
├── evaluate.py
├── misc
│   ├── LSTM.lua
│   ├── netdef.lua
│   └── RNNUtils.lua
├── model
├── path_to_cnn_prototxt.lua
├── prepro_img.lua
├── prepro.py
├── pretrained_lstm_train.t7
├── pretrained_lstm_train_val.t7.zip
├── readme.md
├── result
├── train.lua
├── VGG_ILSVRC_19_layers.caffemodel
├── vgg_ilsvrc_19_layers_deploy-prototxt
├── vgg_ilsvrc_19_layers_deploy-prototxt.lua
├── vgg_ilsvrc_19_layers_deploy-prototxt.lua.lua
├── yknote---log--1
└── yknote---log--2

6 directories, 39 files

@jnhwkim (Contributor) commented Mar 1, 2016

@andyyuan78 The likely cause is that the image file cannot be found. -image_root should point to the directory that contains the MSCOCO train2014, val2014 and test2015 image folders. You also need execute permission on that directory in order to open it. If the problem persists, print the image filename and check it with ls -l <filename>.
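
As a concrete way to follow that suggestion, here is a minimal sketch of a guard that could be added to loadim() in prepro_img.lua (the function name and line come from the traceback above; the resize target and channel handling are assumptions, not the repository's exact code):

-- Hypothetical guard for loadim() in prepro_img.lua. The resize target and
-- grayscale handling are assumptions; the point is to print the offending
-- path before image.scale is reached.
require 'image'
require 'paths'

local function loadim(imname)
  -- fail loudly with the full path so a wrong -image_root is obvious
  if not paths.filep(imname) then
    error('cannot find or open image file: ' .. imname)
  end
  local im = image.load(imname)
  -- replicate grayscale images to 3 channels so VGG sees a 3xHxW tensor
  if im:size(1) == 1 then
    im = im:repeatTensor(3, 1, 1)
  end
  return image.scale(im, 224, 224) -- keep whatever size prepro_img.lua already uses
end

In the first run above, -image_root points at data_prepro.h5, which is an HDF5 file rather than a directory of images, so any path built from it will fail this kind of check.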

@andyyuan78 (Author)

envy@ub1404:/media/envy/data1t/os_prj/github/_QA/VQA_LSTM_CNN$ tree
.
├── data
│   ├── annotations
│   │   ├── mscoco_train2014_annotations.json
│   │   ├── mscoco_val2014_annotations.json
│   │   ├── MultipleChoice_mscoco_test2015_questions.json
│   │   ├── MultipleChoice_mscoco_test-dev2015_questions.json
│   │   ├── MultipleChoice_mscoco_train2014_questions.json
│   │   ├── MultipleChoice_mscoco_val2014_questions.json
│   │   ├── OpenEnded_mscoco_test2015_questions.json
│   │   ├── OpenEnded_mscoco_test-dev2015_questions.json
│   │   ├── OpenEnded_mscoco_train2014_questions.json
│   │   └── OpenEnded_mscoco_val2014_questions.json
│   ├── vqa_preprocessing.py
│   ├── vqa_raw_test.json
│   ├── vqa_raw_train.json
│   └── zip
│       ├── Annotations_Train_mscoco.zip
│       ├── Annotations_Val_mscoco.zip
│       ├── Questions_Test_mscoco.zip
│       ├── Questions_Train_mscoco.zip
│       └── Questions_Val_mscoco.zip
├── data_img.h5
├── data_prepro.h5
├── data_prepro.json
├── data_train_val.zip
├── eval.lua
├── evaluate.py
├── misc
│   ├── LSTM.lua
│   ├── netdef.lua
│   └── RNNUtils.lua
├── model
├── model.lua
├── path_to_cnn_prototxt.lua
├── path_to_cnn_prototxt.lua.lua
├── prepro_img.lua
├── prepro.py
├── pretrained_lstm_train.t7
├── pretrained_lstm_train_val.t7.zip
├── readme.md
├── result
├── train.lua
├── VGG_ILSVRC_19_layers.caffemodel
├── vgg_ilsvrc_19_layers_deploy-prototxt
├── vgg_ilsvrc_19_layers_deploy-prototxt.lua
├── vgg_ilsvrc_19_layers_deploy-prototxt.lua.lua
├── yknote---log--1
└── yknote---log--2

6 directories, 42 files
envy@ub1404:/media/envy/data1t/os_prj/github/_QA/VQA_LSTM_CNN$
envy@ub1404:/media/envy/data1t/os_prj/github/_QA/VQA_LSTM_CNN$ th prepro_img.lua -backend nn -input_json data_prepro.json -image_root data_image.h5 -cnn_proto path_to_cnn_prototxt.lua -cnn_model model
{
backend : "nn"
image_root : "data_image.h5"
cnn_proto : "path_to_cnn_prototxt.lua"
batch_size : 10
input_json : "data_prepro.json"
gpuid : 1
out_name : "data_img.h5"
cnn_model : "model"
}
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 1:9: Message type "caffe.NetParameter" has no field named "require".
Successfully loaded model
processing 82459 images...
/home/envy/torch/install/bin/luajit: /home/envy/torch/install/share/lua/5.1/image/init.lua:650: attempt to call method 'nDimension' (a nil value)
stack traceback:
/home/envy/torch/install/share/lua/5.1/image/init.lua:650: in function 'scale'
prepro_img.lua:51: in function 'loadim'
prepro_img.lua:95: in main chunk
[C]: in function 'dofile'
...envy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
envy@ub1404:/media/envy/data1t/os_prj/github/_QA/VQA_LSTM_CNN$
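
For comparison, given the options echoed in the opt table and the files listed in the tree, the intended invocation is closer to the following (the image root is a placeholder for wherever the MSCOCO train2014/val2014/test2015 folders actually live, and the prototxt filename is the one shown in the listing above):

th prepro_img.lua -backend nn -input_json data_prepro.json \
    -image_root /path/to/mscoco/images/ \
    -cnn_proto vgg_ilsvrc_19_layers_deploy-prototxt \
    -cnn_model VGG_ILSVRC_19_layers.caffemodel

The second run passes a Lua file to -cnn_proto and a directory to -cnn_model, which is why Caffe's text-format parser reports 'Message type "caffe.NetParameter" has no field named "require"': it is trying to parse Lua source (presumably starting with a require statement) as a prototxt. -cnn_proto expects the VGG deploy prototxt text file and -cnn_model the .caffemodel weights, exactly as in the first run, while -image_root still needs to be an image directory rather than data_image.h5.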
