
Test dataset does not work #56

Open
windcatcher opened this issue Nov 6, 2019 · 0 comments

Here is my command:
```
python3 predict.py --ckpt log/semantic/best_model_epoch_405.ckpt --set=test --num_samples=500
```

Here is the error output:
```
Dataset split: test
Loading file_prefixes: ['MarketplaceFeldkirch_Station4_rgb_intensity-reduced']
pl_points shape Tensor("Shape:0", shape=(3,), dtype=int32, device=/device:GPU:0)

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/lms/pointNet2/Open3D-PointNet2-Semantic3D/util/tf_util.py:662: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob.
2019-11-06 10:44:50.087783: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-06 10:44:50.384107: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-06 10:44:50.384597: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x657ef10 executing computations on platform CUDA. Devices:
2019-11-06 10:44:50.384613: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1050, Compute Capability 6.1
2019-11-06 10:44:50.403814: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-11-06 10:44:50.404274: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x65e7580 executing computations on platform Host. Devices:
2019-11-06 10:44:50.404346: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-11-06 10:44:50.404616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.455
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.90GiB
2019-11-06 10:44:50.404698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-11-06 10:44:50.406574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-06 10:44:50.406587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-11-06 10:44:50.406593: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-11-06 10:44:50.406662: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1724 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Model restored
Processing <dataset.semantic_dataset.SemanticFileData object at 0x7f485ff25400>
2019-11-06 10:44:53.848523: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.17GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:53.875321: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.02GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:54.036662: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.16GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:54.108900: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:54.127858: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.32GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
Batch size: 32, time: 1.9697668552398682
Batch size: 32, time: 0.5003781318664551
Batch size: 32, time: 0.49585819244384766
Batch size: 32, time: 0.5022487640380859
Batch size: 32, time: 0.4994063377380371
Batch size: 32, time: 0.4927208423614502
Batch size: 32, time: 0.49471569061279297
Batch size: 32, time: 0.498868465423584
Batch size: 32, time: 0.49785780906677246
Batch size: 32, time: 0.4957921504974365
Batch size: 32, time: 0.49452805519104004
Batch size: 32, time: 0.49374890327453613
Batch size: 32, time: 0.49533581733703613
Batch size: 32, time: 0.49709129333496094
Batch size: 32, time: 0.4983334541320801
2019-11-06 10:45:31.690487: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.77GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.705401: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.65GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.814868: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.11GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.874898: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.10GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.888066: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.22GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
Batch size: 20, time: 0.6125662326812744
Exported sparse pcd to result/sparse/MarketplaceFeldkirch_Station4_rgb_intensity-reduced.pcd
Exported sparse labels to result/sparse/MarketplaceFeldkirch_Station4_rgb_intensity-reduced.labels
Confusion matrix:
0 1 2 3 4 5 6 7 8
0 0 730814 821 29381 125951 3018863 114555 68655 6960
1 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0
IoU per class:
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
mIoU (ignoring label 0):
0.0
Overall accuracy
/home/lms/pointNet2/Open3D-PointNet2-Semantic3D/util/metric.py:83: RuntimeWarning: invalid value encountered in long_scalars
return np.trace(valid_confusion_matrix) / np.sum(valid_confusion_matrix)
nan
```
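
If I am reading the output correctly, the `nan` is not a crash. The Semantic3D reduced-8 test split ships without ground-truth labels, so every point is counted as class 0 ("unlabeled"); the metric ignores label 0, which leaves an all-zero confusion matrix, and the overall accuracy in `util/metric.py` line 83 becomes 0 / 0. Below is a minimal sketch of that calculation (the 9×9 matrix is copied from the log above; dropping row/column 0 is my assumption about how "ignoring label 0" is implemented, not code taken from the repository):

```python
import numpy as np

# 9x9 confusion matrix exactly as printed in the log: every ground-truth
# point falls into class 0, because the test split carries no annotations.
confusion = np.zeros((9, 9), dtype=np.int64)
confusion[0] = [0, 730814, 821, 29381, 125951, 3018863, 114555, 68655, 6960]

# Assumption: "ignoring label 0" means dropping its row and column before scoring.
valid = confusion[1:, 1:]

# Same expression as util/metric.py line 83: trace / sum over an all-zero
# matrix is 0 / 0 on integer scalars, which raises a RuntimeWarning
# ("invalid value encountered in long_scalars" on this NumPy version)
# and evaluates to nan, matching the output above.
overall_accuracy = np.trace(valid) / np.sum(valid)
print(overall_accuracy)  # nan
```

So the exported `result/sparse/MarketplaceFeldkirch_Station4_rgb_intensity-reduced.pcd` / `.labels` predictions are presumably still produced as intended; only the self-evaluation against the unlabeled test split is uninformative.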
