Adding simple_nms and top_k_keypoints layers to the config file #66

Open
wants to merge 12 commits into base: master
9 changes: 9 additions & 0 deletions README.md
@@ -70,6 +70,15 @@ python3 hfnet/export_predictions.py \
[--exper_name hfnet] \ # for HF-Net only
--keys keypoints,scores,local_descriptors[,global_descriptor]
```
For HF-Net on HPatches:
```bash
python hfnet/export_predictions_hfnet.py hfnet/configs/hfnet_export_hpatches.yaml hfnetV1_hpatches_predictions --keys global_descriptor,keypoints,local_descriptors --as_dataset
```

For SuperPoint on HPatches:
```bash
python3 hfnet/export_predictions.py hfnet/configs/superpoint_export_hpatches.yaml superpoint_hpatches_predictions --keys keypoints,scores,local_descriptors --as_dataset
```

For NetVLAD:
```bash
312 changes: 296 additions & 16 deletions demo.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion doc/datasets.md
@@ -152,7 +152,7 @@ sfm/

## Multi-task distillation

HF-Net is trained on the Google Landmarks and Berkeley Deep Drive datasets. For the former, first download the [index of images](https://github.com/ethz-asl/hierarchical_loc/releases/download/1.0/google_landmarks_index.csv) and then the dataset itself using the script `setup/scripts/download_google_landmarks.py`. The latter can be downloaded on the [dataset website](https://bdd-data.berkeley.edu/) (we used the night and dawn sequences).
HF-Net is trained on the Google Landmarks and Berkeley Deep Drive datasets. For the former, follow the instructions from the following repository: https://github.com/cvdfoundation/google-landmark. The latter can be downloaded on the [dataset website](https://bdd-data.berkeley.edu/) (we used the night and dawn sequences).

The labels are predictions of SuperPoint and NetVLAD. Their export is described in the [training documentation](doc/training.md).

Binary file added doc/demo/custom_dataset/db1.png
Binary file added doc/demo/custom_dataset/db2.png
Binary file added doc/demo/custom_dataset/db3.png
Binary file added doc/demo/custom_dataset/db4.png
Binary file added doc/demo/custom_dataset/db5.png
Binary file added doc/demo/custom_dataset/query1.png
Binary file added doc/demo/custom_dataset/query2.png
Binary file added doc/demo/custom_dataset/query3.png
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
Binary file added doc/demo/demo_self_driving/db1.jpg
Binary file added doc/demo/demo_self_driving/db2.jpg
Binary file added doc/demo/demo_self_driving/db3.jpg
Binary file added doc/demo/demo_self_driving/db4.jpg
Binary file added doc/demo/demo_self_driving/db5.jpg
Binary file added doc/demo/demo_self_driving/query1.jpg
Binary file added doc/demo/demo_self_driving/query2.jpg
Binary file added doc/demo/demo_self_driving/query3.jpg
Binary file added doc/demo/testing_data/db1.png
Binary file added doc/demo/testing_data/db2.png
Binary file added doc/demo/testing_data/db3.png
Binary file added doc/demo/testing_data/db4.png
Binary file added doc/demo/testing_data/db6.png
Binary file added doc/demo/testing_data/query1.png
Binary file added doc/demo/testing_data/query2.png
Binary file added doc/demo/testing_data/query3.png
1 change: 1 addition & 0 deletions hfnet/configs/hfnet_export_aachen_db.yaml
@@ -9,3 +9,4 @@ model:
image_channels: 1
local:
detector_threshold: 0.005
weights: 'hf_net_pretrained_weights/model.ckpt-83096'
1 change: 1 addition & 0 deletions hfnet/configs/hfnet_export_cmu_db.yaml
@@ -10,3 +10,4 @@ model:
image_channels: 1
local:
detector_threshold: 0.005
weights: 'hf_net_pretrained_weights/model.ckpt-83096'
2 changes: 2 additions & 0 deletions hfnet/configs/hfnet_export_cmu_queries.yaml
@@ -12,3 +12,5 @@ model:
detector_threshold: 0.005
nms_radius: 4
num_keypoints: 2500
weights: 'hf_net_pretrained_weights/model.ckpt-83096'

1 change: 1 addition & 0 deletions hfnet/configs/hfnet_export_hpatches.yaml
@@ -6,3 +6,4 @@ model:
image_channels: 1
local:
detector_threshold: 0.005
model_path: 'hfnet_vanilla'
15 changes: 10 additions & 5 deletions hfnet/configs/hfnet_train_distill.yaml
@@ -1,13 +1,13 @@
data:
name: 'distillation'
image_dirs: ['google_landmarks/images',
'bdd/dawn_images_vga', 'bdd/night_images_vga']
image_dirs: ['dataset_full/custom_dataset_road_full']
load_targets: True
targets:
- dir: 'global_descriptors'
keys: ['global_descriptor']
- dir: 'superpoint_predictions'
keys: ['local_descriptor_map', 'dense_scores']
# keys: ['local_descriptor_map', 'dense_scores']
validation_size: 192
truncate: [185000, null, null]
preprocessing:
@@ -54,15 +54,20 @@ model:
n_clusters: 32
local:
descriptor_dim: 256
detector_threshold: 0.005
nms_radius: 4
num_keypoints: 1000
#loss_weights: {local_desc: 1, global_desc: 1, detector: 1}
loss_weights: 'uncertainties'
train_backbone: true
batch_size: 16
eval_batch_size: 16
learning_rate: [0.001, 0.0001, 0.00001]
learning_rate_step: [60000, 80000]
weights: 'mobilenet_v2_0.75_224/mobilenet_v2_0.75_224.ckpt'
train_iter: 85000
#weights: 'mobilenet_v2_0.75_224/mobilenet_v2_0.75_224.ckpt'
# weights: 'hfnet_trained_from_scratch_weights/model.ckpt-60001'
weights: 'hf_net_pretrained_weights/model.ckpt-83096'
train_iter: 10000
validation_interval: 500
save_interval: 5000
save_interval: 2000
keep_checkpoints: 100
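The schedule fields above are worth reading together. Assuming `learning_rate` and `learning_rate_step` define a piecewise-constant schedule (the i-th rate applies until the i-th boundary step), the sketch below, with a hypothetical helper `lr_at_step`, shows which rate a run actually sees:

```python
# Hypothetical illustration (not part of the repository): evaluate a
# piecewise-constant schedule defined by `learning_rate` and `learning_rate_step`.
def lr_at_step(step, rates=(0.001, 0.0001, 0.00001), boundaries=(60000, 80000)):
    """Return the learning rate assumed to be active at a given training step."""
    for boundary, rate in zip(boundaries, rates):
        if step < boundary:
            return rate
    return rates[-1]

# With train_iter reduced to 10000 in this config, training stops well before the
# first boundary (60000), so the whole fine-tuning run would stay at 0.001.
assert lr_at_step(5000) == 0.001
assert lr_at_step(70000) == 0.0001
assert lr_at_step(90000) == 0.00001
```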
3 changes: 1 addition & 2 deletions hfnet/configs/netvlad_export_distill.yaml
@@ -1,7 +1,6 @@
data:
name: 'distillation'
image_dirs: ['google_landmarks/images',
'bdd/dawn_images_vga', 'bdd/night_images_vga']
image_dirs: ['dataset_full/custom_dataset_road_full']
shuffle: false
preprocessing:
resize: [480, 640]
3 changes: 1 addition & 2 deletions hfnet/configs/superpoint_export_distill.yaml
@@ -1,7 +1,6 @@
data:
name: 'distillation'
image_dirs: ['google_landmarks/images',
'bdd/dawn_images_vga', 'bdd/night_images_vga']
image_dirs: ['dataset_full/custom_dataset_road_full']
shuffle: false
preprocessing:
resize: [480, 640]
16 changes: 10 additions & 6 deletions hfnet/datasets/distillation.py
@@ -46,16 +46,19 @@ def _init_dataset(self, **config):
data = {'names': [], 'images': []}
if config['load_targets']:
for i, target in enumerate(config['targets']):
for im in config['image_dirs']:
assert Path(Path(DATA_PATH, im).parent,
target['dir']).exists()
# for im in config['image_dirs']:
# assert Path(Path(DATA_PATH, im).parent,
# target['dir']).exists()
data[i] = []

logging.info('Listing image files')
im_paths = []
names = []
for i, image_dir in enumerate(config['image_dirs']):
paths = Path(DATA_PATH, image_dir).glob('*.jpg')
if image_dir == 'dataset_full/google_landmarks':
paths = Path(DATA_PATH, image_dir).glob('*.jpg')
else:
paths = Path(DATA_PATH, image_dir).glob('*.png')
paths = sorted([str(p) for p in paths])
if config['truncate'] is not None:
t = config['truncate'][i]
@@ -74,9 +77,10 @@ def _init_dataset(self, **config):
Path(im).parent.parent, target['dir'], f'{n}.npz')
# target_path = Path(DATA_PATH, target['dir'], f'{n}.npz')
ok &= target_path.exists()
# list with target paths
target_paths.append(target_path.as_posix())
if not ok:
continue
# if not ok:
# continue
data['images'].append(im)
data['names'].append(n)
for i, p in enumerate(target_paths):
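The added branch above hardcodes which directory holds `.jpg` files and defaults everything else to `.png`. A minimal, hypothetical generalization (not part of this PR) would be to glob several extensions per directory instead:

```python
from pathlib import Path

def list_images(root, image_dir, extensions=('*.jpg', '*.png')):
    """Collect image paths for one directory, trying several extensions.

    Hypothetical helper: avoids hardcoding which dataset uses .jpg or .png.
    """
    paths = []
    for pattern in extensions:
        paths.extend(Path(root, image_dir).glob(pattern))
    return sorted(str(p) for p in paths)

# Example: images = list_images(DATA_PATH, 'dataset_full/custom_dataset_road_full')
```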
64 changes: 64 additions & 0 deletions hfnet/evaluation/loaders.py
@@ -124,3 +124,67 @@ def export_loader(image, name, experiment, **config):
if binarize:
pred['descriptors'] = pred['descriptors'] > 0
return pred

def export_loader_hfnet(image, name, experiment, **config):
has_keypoints = config.get('has_keypoints', True)
has_descriptors = config.get('has_descriptors', True)

num_features = config.get('num_features', 0)
remove_borders = config.get('remove_borders', 0)
keypoint_predictor = config.get('keypoint_predictor', None)
do_nms = config.get('do_nms', False)
nms_thresh = config.get('nms_thresh', 4)
keypoint_refinement = config.get('keypoint_refinement', False)
binarize = config.get('binarize', False)
# entries = ['keypoints', 'scores', 'global_descriptor', 'local_descriptors']
entries = ['keypoints', 'global_descriptor', 'local_descriptors']

name = name.decode('utf-8') if isinstance(name, bytes) else name
path = Path(EXPER_PATH, 'exports', experiment, name+'.npz')
with np.load(path) as p:
pred = {k: v.copy() for k, v in p.items()}
image_shape = image.shape[:2]
if keypoint_predictor:
keypoint_config = config.get('keypoint_config', config)
keypoint_config['keypoint_predictor'] = None
pred_detector = keypoint_predictor(
image, name, **{'experiment': experiment, **keypoint_config})
pred['keypoints'] = pred_detector['keypoints']
# pred['scores'] = pred_detector['scores']
elif has_keypoints:
assert 'keypoints' in pred
if remove_borders:
mask = keypoints_filter_borders(
pred['keypoints'], image_shape, remove_borders)
pred = {**pred,
**{k: v[mask] for k, v in pred.items() if k in entries}}
if do_nms:
keep = nms_fast(
# pred['keypoints'], pred['scores'], image_shape, nms_thresh)
pred['keypoints'], image_shape, nms_thresh)
pred = {**pred,
**{k: v[keep] for k, v in pred.items() if k in entries}}
if keypoint_refinement:
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER,
30, 0.001)
pred['keypoints'] = cv2.cornerSubPix(
image, np.float32(pred['keypoints']),
(3, 3), (-1, -1), criteria)
#if num_features:
#keep = np.argsort(pred['scores'])[::-1][:num_features]
#pred = {**pred,
# **{k: v[keep] for k, v in pred.items() if k in entries}}
if has_descriptors:
if 'global_descriptor' in pred:
pass
elif 'local_descriptors' in pred:
pred['descriptors'] = pred['local_descriptors']
else:
assert 'local_descriptor_map' in pred
pred['descriptors'] = sample_descriptors(
pred['local_descriptor_map'], pred['keypoints'], image_shape,
input_shape=pred['input_shape'][:2] if 'input_shape' in pred
else None)
if binarize:
pred['descriptors'] = pred['descriptors'] > 0
return pred
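A minimal usage sketch of the new loader (experiment name and image path are placeholders, and it assumes the exported `.npz` files sit under `$EXPER_PATH/exports/<experiment>/`): with default options it only loads the arrays and asserts that keypoints are present.

```python
import cv2
from hfnet.evaluation.loaders import export_loader_hfnet

# Placeholder HPatches image and export name; adjust to where the export lives.
image = cv2.imread('path/to/hpatches/v_abell/1.ppm', cv2.IMREAD_GRAYSCALE)
pred = export_loader_hfnet(image, 'v_abell/1', 'hfnetV1_hpatches_predictions')
print(pred['keypoints'].shape)
print(pred['local_descriptors'].shape)
print(pred['global_descriptor'].shape)
```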
8 changes: 5 additions & 3 deletions hfnet/export_model.py
@@ -21,15 +21,15 @@
exper_name = args.exper_name

with open(args.config, 'r') as f:
config = yaml.load(f)
config = yaml.safe_load(f)

export_dir = Path(EXPER_PATH, 'saved_models', export_name)

if exper_name:
assert Path(EXPER_PATH, exper_name).exists()
with open(Path(EXPER_PATH, exper_name, 'config.yml'), 'r') as f:
with open(Path(EXPER_PATH, exper_name, 'config.yaml'), 'r') as f:
config['model'] = tools.dict_update(
yaml.load(f)['model'], config.get('model', {}))
yaml.safe_load(f)['model'], config.get('model', {}))
checkpoint_path = Path(EXPER_PATH, exper_name)
if config.get('weights', None):
checkpoint_path = Path(checkpoint_path, config['weights'])
@@ -43,6 +43,8 @@
**config['model']) as net:

net.load(str(checkpoint_path))
print(net.pred_in)
print(net.pred_out)

tf.saved_model.simple_save(
net.sess,
6 changes: 4 additions & 2 deletions hfnet/export_predictions.py
@@ -27,7 +27,7 @@
export_name = args.export_name
exper_name = args.exper_name
with open(args.config, 'r') as f:
config = yaml.load(f)
config = yaml.safe_load(f)
keys = '*' if args.keys == '*' else args.keys.split(',')

if args.as_dataset:
@@ -41,7 +41,7 @@
# Update only the model config (not the dataset)
with open(Path(EXPER_PATH, exper_name, 'config.yaml'), 'r') as f:
config['model'] = tools.dict_update(
yaml.load(f)['model'], config.get('model', {}))
yaml.safe_load(f)['model'], config.get('model', {}))
checkpoint_path = Path(EXPER_PATH, exper_name)
if config.get('weights', None):
checkpoint_path = Path(checkpoint_path, config['weights'])
@@ -59,6 +59,7 @@
if checkpoint_path is not None:
net.load(str(checkpoint_path))
dataset = get_dataset(config['data']['name'])(**config['data'])
print(dataset)
test_set = dataset.get_test_set()

for data in tqdm(test_set):
@@ -67,3 +68,4 @@
name = data['name'].decode('utf-8')
Path(base_dir, Path(name).parent).mkdir(parents=True, exist_ok=True)
np.savez(Path(base_dir, '{}.npz'.format(name)), **predictions)

78 changes: 78 additions & 0 deletions hfnet/export_predictions_hfnet.py
@@ -0,0 +1,78 @@
import numpy as np
import argparse
import yaml
import logging
from pathlib import Path
from tqdm import tqdm
from pprint import pformat

logging.basicConfig(format='[%(asctime)s %(levelname)s] %(message)s',
                    datefmt='%m/%d/%Y %H:%M:%S',
                    level=logging.INFO)
from hfnet.models import get_model  # noqa: E402
from hfnet.datasets import get_dataset  # noqa: E402
from hfnet.utils import tools  # noqa: E402
from hfnet.settings import EXPER_PATH, DATA_PATH  # noqa: E402
from hfnet_inference import HFNet
import cv2


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('config', type=str)
    parser.add_argument('export_name', type=str)
    parser.add_argument('--keys', type=str, default='*')
    parser.add_argument('--exper_name', type=str)
    parser.add_argument('--as_dataset', action='store_true')
    args = parser.parse_args()

    export_name = args.export_name
    exper_name = args.exper_name
    with open(args.config, 'r') as f:
        config = yaml.safe_load(f)
    keys = '*' if args.keys == '*' else args.keys.split(',')

    if args.as_dataset:
        base_dir = Path(DATA_PATH, export_name)
    else:
        base_dir = Path(EXPER_PATH, 'exports')
        base_dir = Path(base_dir, ((exper_name+'/') if exper_name else '') + export_name)
    base_dir.mkdir(parents=True, exist_ok=True)

    if exper_name:
        # Update only the model config (not the dataset)
        with open(Path(EXPER_PATH, exper_name, 'config.yaml'), 'r') as f:
            config['model'] = tools.dict_update(
                yaml.safe_load(f)['model'], config.get('model', {}))
        checkpoint_path = Path(EXPER_PATH, exper_name)
        if config.get('weights', None):
            checkpoint_path = Path(checkpoint_path, config['weights'])
    else:
        if config.get('weights', None):
            checkpoint_path = Path(DATA_PATH, 'weights', config['weights'])
        else:
            checkpoint_path = None
            logging.info('No weights provided.')
    logging.info(f'Starting export with configuration:\n{pformat(config)}')

    # with get_model(config['model']['name'])(
    #         data_shape={'image': [None, None, None, config['model']['image_channels']]},
    #         **config['model']) as net:
    #     if checkpoint_path is not None:
    #         net.load(str(checkpoint_path))
    dataset = get_dataset(config['data']['name'])(**config['data'])
    print(dataset)
    test_set = dataset.get_test_set()

    model_path = Path(EXPER_PATH, config['model_path'])
    # outputs = ['global_descriptor', 'keypoints', 'local_descriptors']
    hfnet = HFNet(model_path, keys)

    for data in tqdm(test_set):
        # print(DATA_PATH + '/' + config['data']['name'] + '/' + data['name'].decode('UTF-8') + '.ppm')
        im = cv2.imread(DATA_PATH + '/' + config['data']['name'] + '/' + data['name'].decode('UTF-8') + '.ppm')
        predictions = hfnet.inference(im)
        predictions['input_shape'] = data['image'].shape
        name = data['name'].decode('utf-8')
        Path(base_dir, Path(name).parent).mkdir(parents=True, exist_ok=True)
        np.savez(Path(base_dir, '{}.npz'.format(name)), **predictions)
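Each dataset entry ends up as one `.npz` file under the export directory. A quick sanity check of the output (the file path is a placeholder for one file produced by the README command):

```python
import numpy as np

# Placeholder path: one exported prediction file from the HPatches export.
with np.load('hfnetV1_hpatches_predictions/v_abell/1.npz') as f:
    pred = dict(f)
# Expect the arrays requested via --keys plus the stored input shape, e.g.
# ['global_descriptor', 'input_shape', 'keypoints', 'local_descriptors']
print(sorted(pred.keys()))
```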
36 changes: 36 additions & 0 deletions hfnet/hfnet_inference.py
@@ -0,0 +1,36 @@
import cv2
import numpy as np
from pathlib import Path

from hfnet.settings import EXPER_PATH
from notebooks.utils import plot_images, plot_matches, add_frame

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants
tf.contrib.resampler  # import C++ op

class HFNet:
    def __init__(self, model_path, outputs):
        self.session = tf.Session()
        self.image_ph = tf.placeholder(tf.float32, shape=(None, None, 3))

        net_input = tf.image.rgb_to_grayscale(self.image_ph[None])
        tf.saved_model.loader.load(
            self.session, [tag_constants.SERVING], str(model_path),
            clear_devices=True,
            input_map={'image:0': net_input})

        graph = tf.get_default_graph()
        self.outputs = {n: graph.get_tensor_by_name(n+':0')[0] for n in outputs}
        self.nms_radius_op = graph.get_tensor_by_name('pred/simple_nms/radius:0')
        self.num_keypoints_op = graph.get_tensor_by_name('pred/top_k_keypoints/k:0')
        self.scores_op = graph.get_tensor_by_name('pred/top_k_keypoints/k:0')

    def inference(self, image, nms_radius=4, num_keypoints=1000, scores=1000):
        inputs = {
            self.image_ph: image[..., ::-1].astype(np.float),
            self.nms_radius_op: nms_radius,
            self.num_keypoints_op: num_keypoints,
            self.scores_op: scores
        }
        return self.session.run(self.outputs, feed_dict=inputs)
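A minimal usage sketch of the class (model directory and image path are placeholders), mirroring how `export_predictions_hfnet.py` drives it: `outputs` names tensors in the exported SavedModel, and `inference()` returns one NumPy array per requested output. It assumes the code is run from the `hfnet/` directory, as the PR's export script does.

```python
import cv2
from pathlib import Path

from hfnet.settings import EXPER_PATH
from hfnet_inference import HFNet  # run from the hfnet/ directory, like the export script

# 'hfnet_vanilla' matches the model_path added to hfnet_export_hpatches.yaml.
outputs = ['global_descriptor', 'keypoints', 'local_descriptors']
hfnet = HFNet(Path(EXPER_PATH, 'hfnet_vanilla'), outputs)

image = cv2.imread('doc/demo/custom_dataset/query1.png')  # BGR uint8, as OpenCV reads it
pred = hfnet.inference(image, nms_radius=4, num_keypoints=1000)
print(pred['keypoints'].shape)          # keypoint coordinates
print(pred['local_descriptors'].shape)  # one local descriptor per keypoint
print(pred['global_descriptor'].shape)  # global image descriptor
```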