
Commit 7e83d31

update README.md
Summary: add information about fastreid V1.0
1 parent 15e1729 commit 7e83d31

File tree

4 files changed: +61, -90 lines


README.md

Lines changed: 6 additions & 5 deletions
@@ -4,13 +4,14 @@ FastReID is a research platform that implements state-of-the-art re-identificati
 
 ## What's New
 
-- [Oct 2020] Added the [Hyper-Parameter Optimization](https://github.com/JDAI-CV/fast-reid/tree/master/projects/HPOReID) based on fastreid. See `projects/HPOReID`.
-- [Sep 2020] Added the [person attribute recognition](https://github.com/JDAI-CV/fast-reid/tree/master/projects/attribute_recognition) based on fastreid. See `projects/attribute_recognition`.
-- [Sep 2020] Automatic Mixed Precision training is supported with pytorch1.6 built-in `torch.cuda.amp`. Set `cfg.SOLVER.AMP_ENABLED=True` to switch it on.
-- [Aug 2020] [Model Distillation](https://github.com/JDAI-CV/fast-reid/tree/master/projects/DistillReID) is supported, thanks to [guan'an wang](https://github.com/wangguanan)'s contribution.
+- [Jan 2021] FastReID V1.0 has been released!🎉
+  Support many tasks beyond reid, such as image retrieval and face recognition. See [projects](https://github.com/JDAI-CV/fast-reid/tree/master/projects).
+- [Oct 2020] Added the [Hyper-Parameter Optimization](https://github.com/JDAI-CV/fast-reid/tree/master/projects/FastTune) based on fastreid. See `projects/FastTune`.
+- [Sep 2020] Added the [person attribute recognition](https://github.com/JDAI-CV/fast-reid/tree/master/projects/FastAttr) based on fastreid. See `projects/FastAttr`.
+- [Sep 2020] Automatic Mixed Precision training is supported with `apex`. Set `cfg.SOLVER.FP16_ENABLED=True` to switch it on.
+- [Aug 2020] [Model Distillation](https://github.com/JDAI-CV/fast-reid/tree/master/projects/FastDistill) is supported, thanks to [guan'an wang](https://github.com/wangguanan)'s contribution.
 - [Aug 2020] ONNX/TensorRT converter is supported.
 - [Jul 2020] Distributed training with multiple GPUs, it trains much faster.
-- [Jul 2020] `MAX_ITER` in config means `epoch`, it will auto scale to maximum iterations.
 - Includes more features such as circle loss, abundant visualization methods and evaluation metrics, SoTA results on conventional, cross-domain, partial and vehicle re-id, testing on multi-datasets simultaneously, etc.
 - Can be used as a library to support [different projects](https://github.com/JDAI-CV/fast-reid/tree/master/projects) on top of it. We'll open source more research projects in this way.
 - Remove [ignite](https://github.com/pytorch/ignite)(a high-level library) dependency and powered by [PyTorch](https://pytorch.org/).
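
As a quick orientation for the AMP change above, here is a minimal, hypothetical sketch of flipping the new `SOLVER.FP16_ENABLED` switch from Python; `get_cfg` and the config file path follow fastreid's usual layout but are assumptions, not part of this diff.

```python
# Hedged sketch, not part of this commit: enabling the apex-based AMP
# switch described in the updated README.
from fastreid.config import get_cfg  # assumed fastreid config entry point

cfg = get_cfg()
cfg.merge_from_file("configs/Market1501/bagtricks_R50.yml")  # assumed path
cfg.SOLVER.FP16_ENABLED = True  # the switch named in the README diff above
```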

fastreid/modeling/backbones/regnet/config.py

Lines changed: 2 additions & 26 deletions
@@ -13,15 +13,13 @@
 
 from yacs.config import CfgNode as CfgNode
 
-
 # Global config object
 _C = CfgNode()
 
 # Example usage:
 # from core.config import cfg
 cfg = _C
 
-
 # ------------------------------------------------------------------------------------ #
 # Model options
 # ------------------------------------------------------------------------------------ #
@@ -39,7 +37,6 @@
 # Loss function (see pycls/models/loss.py for options)
 _C.MODEL.LOSS_FUN = "cross_entropy"
 
-
 # ------------------------------------------------------------------------------------ #
 # ResNet options
 # ------------------------------------------------------------------------------------ #
@@ -57,7 +54,6 @@
 # Apply stride to 1x1 conv (True -> MSRA; False -> fb.torch)
 _C.RESNET.STRIDE_1X1 = True
 
-
 # ------------------------------------------------------------------------------------ #
 # AnyNet options
 # ------------------------------------------------------------------------------------ #
@@ -93,7 +89,6 @@
 # SE ratio
 _C.ANYNET.SE_R = 0.25
 
-
 # ------------------------------------------------------------------------------------ #
 # RegNet options
 # ------------------------------------------------------------------------------------ #
@@ -133,7 +128,6 @@
 # Bottleneck multiplier (bm = 1 / b from the paper)
 _C.REGNET.BOT_MUL = 1.0
 
-
 # ------------------------------------------------------------------------------------ #
 # EfficientNet options
 # ------------------------------------------------------------------------------------ #
@@ -169,7 +163,6 @@
 # Dropout ratio
 _C.EN.DROPOUT_RATIO = 0.0
 
-
 # ------------------------------------------------------------------------------------ #
 # Batch norm options
 # ------------------------------------------------------------------------------------ #
@@ -192,7 +185,6 @@
 _C.BN.USE_CUSTOM_WEIGHT_DECAY = False
 _C.BN.CUSTOM_WEIGHT_DECAY = 0.0
 
-
 # ------------------------------------------------------------------------------------ #
 # Optimizer options
 # ------------------------------------------------------------------------------------ #
@@ -234,7 +226,6 @@
 # Gradually warm up the OPTIM.BASE_LR over this number of epochs
 _C.OPTIM.WARMUP_EPOCHS = 0
 
-
 # ------------------------------------------------------------------------------------ #
 # Training options
 # ------------------------------------------------------------------------------------ #
@@ -262,7 +253,6 @@
 # Weights to start training from
 _C.TRAIN.WEIGHTS = ""
 
-
 # ------------------------------------------------------------------------------------ #
 # Testing options
 # ------------------------------------------------------------------------------------ #
@@ -281,7 +271,6 @@
 # Weights to use for testing
 _C.TEST.WEIGHTS = ""
 
-
 # ------------------------------------------------------------------------------------ #
 # Common train/test data loader options
 # ------------------------------------------------------------------------------------ #
@@ -293,7 +282,6 @@
 # Load data to pinned host memory
 _C.DATA_LOADER.PIN_MEMORY = True
 
-
 # ------------------------------------------------------------------------------------ #
 # Memory options
 # ------------------------------------------------------------------------------------ #
@@ -302,7 +290,6 @@
 # Perform ReLU inplace
 _C.MEM.RELU_INPLACE = True
 
-
 # ------------------------------------------------------------------------------------ #
 # CUDNN options
 # ------------------------------------------------------------------------------------ #
@@ -313,7 +300,6 @@
 # in overall speedups when variable size inputs are used (e.g. COCO training)
 _C.CUDNN.BENCHMARK = True
 
-
 # ------------------------------------------------------------------------------------ #
 # Precise timing options
 # ------------------------------------------------------------------------------------ #
@@ -325,7 +311,6 @@
 # Number of iterations to compute avg time
 _C.PREC_TIME.NUM_ITER = 30
 
-
 # ------------------------------------------------------------------------------------ #
 # Misc options
 # ------------------------------------------------------------------------------------ #
@@ -359,7 +344,6 @@
 # Models weights referred to by URL are downloaded to this local cache
 _C.DOWNLOAD_CACHE = "/tmp/pycls-download-cache"
 
-
 # ------------------------------------------------------------------------------------ #
 # Deprecated keys
 # ------------------------------------------------------------------------------------ #
@@ -369,7 +353,7 @@
 _C.register_deprecated_key("PORT")
 
 
-def assert_and_infer_cfg(cache_urls=True):
+def assert_and_infer_cfg():
     """Checks config values invariants."""
     err_str = "The first lr step must start at 0"
     assert not _C.OPTIM.STEPS or _C.OPTIM.STEPS[0] == 0, err_str
@@ -382,14 +366,6 @@ def assert_and_infer_cfg(cache_urls=True):
     assert _C.TEST.BATCH_SIZE % _C.NUM_GPUS == 0, err_str
     err_str = "Log destination '{}' not supported"
     assert _C.LOG_DEST in ["stdout", "file"], err_str.format(_C.LOG_DEST)
-    if cache_urls:
-        cache_cfg_urls()
-
-
-def cache_cfg_urls():
-    """Download URLs in config, cache them, and rewrite cfg to use cached file."""
-    _C.TRAIN.WEIGHTS = cache_url(_C.TRAIN.WEIGHTS, _C.DOWNLOAD_CACHE)
-    _C.TEST.WEIGHTS = cache_url(_C.TEST.WEIGHTS, _C.DOWNLOAD_CACHE)
 
 
 def dump_cfg():
@@ -417,4 +393,4 @@ def load_cfg_fom_args(description="Config file options."):
         sys.exit(1)
     args = parser.parse_args()
     _C.merge_from_file(args.cfg_file)
-    _C.merge_from_list(args.opts)
\ No newline at end of file
+    _C.merge_from_list(args.opts)
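
With this change, `assert_and_infer_cfg()` only validates invariants; the URL-caching step (`cache_cfg_urls`) is gone. For readers unfamiliar with yacs, a small self-contained sketch of the pattern this file uses; the keys and override values below are illustrative, not taken from the repo.

```python
# Hedged sketch of the yacs pattern used in config.py; keys and values
# here are illustrative, not from the repository.
from yacs.config import CfgNode

_C = CfgNode()
_C.OPTIM = CfgNode()
_C.OPTIM.BASE_LR = 0.1        # defaults are plain attribute assignments
_C.OPTIM.WARMUP_EPOCHS = 0

cfg = _C                      # module-level alias, as in config.py
cfg.merge_from_list(["OPTIM.BASE_LR", "0.05"])  # CLI-style override pairs
assert cfg.OPTIM.BASE_LR == 0.05  # yacs coerces the string to a float
```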

tools/deploy/Caffe/layer_param.py

Lines changed: 51 additions & 58 deletions
@@ -1,38 +1,41 @@
 from __future__ import absolute_import
+
 from . import caffe_pb2 as pb
-import numpy as np
 
-def pair_process(item,strict_one=True):
-    if hasattr(item,'__iter__'):
+
+def pair_process(item, strict_one=True):
+    if hasattr(item, '__iter__'):
         for i in item:
-            if i!=item[0]:
+            if i != item[0]:
                 if strict_one:
                     raise ValueError("number in item {} must be the same".format(item))
                 else:
                     print("IMPORTANT WARNING: number in item {} must be the same".format(item))
         return item[0]
     return item
 
+
 def pair_reduce(item):
-    if hasattr(item,'__iter__'):
+    if hasattr(item, '__iter__'):
         for i in item:
-            if i!=item[0]:
+            if i != item[0]:
                 return item
         return [item[0]]
     return [item]
 
+
 class Layer_param():
-    def __init__(self,name='',type='',top=(),bottom=()):
-        self.param=pb.LayerParameter()
-        self.name=self.param.name=name
-        self.type=self.param.type=type
+    def __init__(self, name='', type='', top=(), bottom=()):
+        self.param = pb.LayerParameter()
+        self.name = self.param.name = name
+        self.type = self.param.type = type
 
-        self.top=self.param.top
+        self.top = self.param.top
         self.top.extend(top)
-        self.bottom=self.param.bottom
+        self.bottom = self.param.bottom
         self.bottom.extend(bottom)
 
-    def fc_param(self, num_output, weight_filler='xavier', bias_filler='constant',has_bias=True):
+    def fc_param(self, num_output, weight_filler='xavier', bias_filler='constant', has_bias=True):
         if self.type != 'InnerProduct':
             raise TypeError('the layer type must be InnerProduct if you want set fc param')
         fc_param = pb.InnerProductParameter()
@@ -45,7 +48,7 @@ def fc_param(self, num_output, weight_filler='xavier', bias_filler='constant',ha
 
     def conv_param(self, num_output, kernel_size, stride=(1), pad=(0,),
                    weight_filler_type='xavier', bias_filler_type='constant',
-                   bias_term=True, dilation=None,groups=None):
+                   bias_term=True, dilation=None, groups=None):
         """
         add a conv_param layer if you spec the layer type "Convolution"
         Args:
@@ -56,80 +59,69 @@
             bias_filler_type: the bias filler type
         Returns:
         """
-        if self.type not in ['Convolution','Deconvolution']:
+        if self.type not in ['Convolution', 'Deconvolution']:
             raise TypeError('the layer type must be Convolution or Deconvolution if you want set conv param')
-        conv_param=pb.ConvolutionParameter()
-        conv_param.num_output=num_output
+        conv_param = pb.ConvolutionParameter()
+        conv_param.num_output = num_output
         conv_param.kernel_size.extend(pair_reduce(kernel_size))
         conv_param.stride.extend(pair_reduce(stride))
         conv_param.pad.extend(pair_reduce(pad))
-        conv_param.bias_term=bias_term
-        conv_param.weight_filler.type=weight_filler_type
+        conv_param.bias_term = bias_term
+        conv_param.weight_filler.type = weight_filler_type
         if bias_term:
             conv_param.bias_filler.type = bias_filler_type
         if dilation:
             conv_param.dilation.extend(pair_reduce(dilation))
         if groups:
-            conv_param.group=groups
+            conv_param.group = groups
         self.param.convolution_param.CopyFrom(conv_param)
 
-    def pool_param(self,type='MAX',kernel_size=2,stride=2,pad=None, ceil_mode = False):
-        pool_param=pb.PoolingParameter()
-        pool_param.pool=pool_param.PoolMethod.Value(type)
-        pool_param.kernel_size=pair_process(kernel_size)
-        pool_param.stride=pair_process(stride)
-        pool_param.ceil_mode=ceil_mode
+    def pool_param(self, type='MAX', kernel_size=2, stride=2, pad=None, ceil_mode=False):
+        pool_param = pb.PoolingParameter()
+        pool_param.pool = pool_param.PoolMethod.Value(type)
+        pool_param.kernel_size = pair_process(kernel_size)
+        pool_param.stride = pair_process(stride)
+        pool_param.ceil_mode = ceil_mode
         if pad:
-            if isinstance(pad,tuple):
+            if isinstance(pad, tuple):
                 pool_param.pad_h = pad[0]
                 pool_param.pad_w = pad[1]
             else:
-                pool_param.pad=pad
+                pool_param.pad = pad
         self.param.pooling_param.CopyFrom(pool_param)
 
-    def batch_norm_param(self,use_global_stats=0,moving_average_fraction=None,eps=None):
-        bn_param=pb.BatchNormParameter()
-        bn_param.use_global_stats=use_global_stats
+    def batch_norm_param(self, use_global_stats=0, moving_average_fraction=None, eps=None):
+        bn_param = pb.BatchNormParameter()
+        bn_param.use_global_stats = use_global_stats
         if moving_average_fraction:
-            bn_param.moving_average_fraction=moving_average_fraction
+            bn_param.moving_average_fraction = moving_average_fraction
         if eps:
             bn_param.eps = eps
         self.param.batch_norm_param.CopyFrom(bn_param)
 
-    # layer
-    # {
-    # name: "upsample_layer"
-    # type: "Upsample"
-    # bottom: "some_input_feature_map"
-    # bottom: "some_input_pool_index"
-    # top: "some_output"
-    # upsample_param {
-    # upsample_h: 224
-    # upsample_w: 224
-    # }
-    # }
-    def upsample_param(self,size=None, scale_factor=None):
-        upsample_param=pb.UpsampleParameter()
+    def upsample_param(self, size=None, scale_factor=None):
+        upsample_param = pb.UpsampleParameter()
         if scale_factor:
-            if isinstance(scale_factor,int):
+            if isinstance(scale_factor, int):
                 upsample_param.scale = scale_factor
             else:
                 upsample_param.scale_h = scale_factor[0]
                 upsample_param.scale_w = scale_factor[1]
 
         if size:
-            if isinstance(size,int):
+            if isinstance(size, int):
                 upsample_param.upsample_h = size
             else:
                 upsample_param.upsample_h = size[0]
                 upsample_param.upsample_w = size[1]
-                #upsample_param.upsample_h = size[0] * scale_factor
-                #upsample_param.upsample_w = size[1] * scale_factor
+                # upsample_param.upsample_h = size[0] * scale_factor
+                # upsample_param.upsample_w = size[1] * scale_factor
         self.param.upsample_param.CopyFrom(upsample_param)
-    def interp_param(self,size=None, scale_factor=None):
-        interp_param=pb.InterpParameter()
+
+    def interp_param(self, size=None, scale_factor=None):
+        interp_param = pb.InterpParameter()
         if scale_factor:
-            if isinstance(scale_factor,int):
+            if isinstance(scale_factor, int):
                 interp_param.zoom_factor = scale_factor
 
         if size:
@@ -138,7 +130,7 @@ def interp_param(self,size=None, scale_factor=None):
             interp_param.width = size[1]
         self.param.interp_param.CopyFrom(interp_param)
 
-    def add_data(self,*args):
+    def add_data(self, *args):
         """Args are data numpy array
         """
         del self.param.blobs[:]
@@ -148,11 +140,12 @@ def add_data(self,*args):
             new_blob.shape.dim.append(dim)
         new_blob.data.extend(data.flatten().astype(float))
 
-    def set_params_by_dict(self,dic):
+    def set_params_by_dict(self, dic):
         pass
 
-    def copy_from(self,layer_param):
+    def copy_from(self, layer_param):
         pass
 
-def set_enum(param,key,value):
-    setattr(param,key,param.Value(value))
+
+def set_enum(param, key, value):
+    setattr(param, key, param.Value(value))
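
To show how the reformatted helpers fit together, a hypothetical usage sketch follows; the import path is illustrative and the generated `caffe_pb2` bindings must be importable for it to run.

```python
# Hedged sketch, not part of this commit; assumes caffe_pb2 is available.
from layer_param import Layer_param, pair_reduce  # illustrative import path

# pair_reduce collapses an equal pair so the prototxt stays compact:
assert pair_reduce((3, 3)) == [3]      # symmetric kernel -> single entry
assert pair_reduce((2, 3)) == (2, 3)   # unequal pair is returned verbatim

# Describe a 3x3 convolution as a Caffe LayerParameter message:
layer = Layer_param(name='conv1', type='Convolution',
                    top=('conv1',), bottom=('data',))
layer.conv_param(num_output=64, kernel_size=(3, 3), stride=(1, 1), pad=(1, 1))
print(layer.param)  # populated pb.LayerParameter, ready to append to a net
```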

tools/deploy/Caffe/net.py

Lines changed: 2 additions & 1 deletion
@@ -1 +1,2 @@
-raise ImportError,'the nn_tools.Caffe.net is no longer used, please use nn_tools.Caffe.caffe_net'
+raise ImportError("the nn_tools.Caffe.net is no longer used, please use nn_tools.Caffe.caffe_net")
+
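
The one-line change above is a Python 2 to 3 fix: the comma form of `raise` is a SyntaxError under Python 3, while the call form is valid on both. A minimal illustration:

```python
# Python 2 only -- the Python 3 parser rejects this line outright:
#     raise ImportError, 'message'
# Portable form, valid on Python 2 and 3:
raise ImportError("message")
```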
