Question about boxes in evaluation #10

Open
VanillaChelle opened this issue Aug 7, 2019 · 0 comments

Hi, thanks for your work.

I have a question about how the ground-truth boxes are mapped to feature-map coordinates in keras_code/B_Online_Aggregation_Eval.py.

In line 251, we read the ground-truth box coordinates x, y, dx, dy:

```python
x, y, dx, dy = f_list[1], f_list[2], f_list[3], f_list[4]
```

and map them to feature-map coordinates by flooring to multiples of 16 and dividing by 16 in lines 254-255:

```python
f_x, f_y, f_dx, f_dy = int((x - (x % 16)) / 16), int((y - (y % 16)) / 16), \
                       int((dx - (dx % 16)) / 16), int((dy - (dy % 16)) / 16)
```
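(As an aside, if I'm reading it right, that expression is just integer floor division by 16, presumably because the conv feature map has stride 16 with respect to the input. A quick check I ran on my side, not repo code:)

```python
# My own illustration, not repo code: for non-negative pixel coordinates,
# flooring to a multiple of 16 and then dividing by 16 is the same as
# integer floor division by 16, i.e. mapping pixels to feature-map cells.
for x in [0, 15, 16, 250, 719, 1023]:
    assert int((x - (x % 16)) / 16) == x // 16
```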
The query image is resized to either 1024×720 or 720×1024 in line 276:

```python
img_p = preprocess_images(img, size[0], size[1], mean_value)
```
The processed image is then fed into the network to extract features in lines 278-279:

```python
features, cams, roi = extract_feat_cam_fast(model, get_output, conv_layer_features,
                                            1, img_p, num_cams, class_list[0, 0:num_cams])
```
However, we then compute the descriptors by slicing features with f_x, f_y, f_dx, f_dy in lines 281-282:

```python
d_wp = weighted_cam_pooling(features[:, :, f_y:f_dy, f_x:f_dx],
                            cams[:, :, f_y:f_dy, f_x:f_dx])
```

I'm a little confused: the features are extracted from the resized image, but the box indices used for slicing are computed from the original image coordinates and are never rescaled accordingly. I'd be really grateful for your reply :)
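To make the question concrete, here is a minimal sketch of what I would have expected instead. The helper box_to_feature_coords and the names orig_w / orig_h are hypothetical (the repo may store the original size differently), and the stride-16 and size[0]-is-width assumptions are mine:

```python
def box_to_feature_coords(x, y, dx, dy, orig_w, orig_h, new_w, new_h, stride=16):
    """Hypothetical sketch, not repo code: rescale a box from original-image
    pixels to resized-image pixels, then map it onto the conv feature grid
    (assuming the network downsamples by `stride`)."""
    scale_x, scale_y = new_w / float(orig_w), new_h / float(orig_h)
    f_x, f_dx = int(x * scale_x) // stride, int(dx * scale_x) // stride
    f_y, f_dy = int(y * scale_y) // stride, int(dy * scale_y) // stride
    return f_x, f_y, f_dx, f_dy

# e.g., with the resized shape used above (assuming size[0] is the width):
# f_x, f_y, f_dx, f_dy = box_to_feature_coords(x, y, dx, dy,
#                                              orig_w, orig_h, size[0], size[1])
# d_wp = weighted_cam_pooling(features[:, :, f_y:f_dy, f_x:f_dx],
#                             cams[:, :, f_y:f_dy, f_x:f_dx])
```

Is something like this intended, or is the rescaling handled somewhere else that I missed?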
