Hello. As I recall, out_dim is the dimension of the output embedding that the projection head produces for contrastive learning, while dim_mlp is the feature dimension output by the resnet18 or resnet50 backbone (i.e., backbone.fc.in_features). The projection head is added on top of the network, so the original backbone.fc inside the Sequential already maps dim_mlp to out_dim.
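For reference, here is a minimal sketch of how the head ends up structured. It assumes the backbone is instantiated with num_classes=out_dim (as the repo's ResNetSimCLR does); the standalone style and the out_dim=128 value are just for illustration:

```python
import torch
import torch.nn as nn
import torchvision.models as models

out_dim = 128  # embedding dimension used for contrastive learning

# Assumption: the backbone is created with num_classes=out_dim, so
# backbone.fc starts out as nn.Linear(dim_mlp, out_dim).
backbone = models.resnet50(num_classes=out_dim)

dim_mlp = backbone.fc.in_features  # 2048 for resnet50, 512 for resnet18

# Wrap the existing fc in a 2-layer MLP projection head: the new hidden
# layer keeps the width at dim_mlp, and the original fc then projects
# dim_mlp -> out_dim.
backbone.fc = nn.Sequential(
    nn.Linear(dim_mlp, dim_mlp),  # hidden layer: dim_mlp -> dim_mlp
    nn.ReLU(),
    backbone.fc,                  # original fc: dim_mlp -> out_dim = 128
)

z = backbone(torch.randn(4, 3, 224, 224))
print(z.shape)  # torch.Size([4, 128])
```

So the first Linear is intentionally dim_mlp -> dim_mlp; the 128-dimensional projection happens in the last layer of the Sequential.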
Hi there, in models/resnet_simclr.py we have the following:
dim_mlp = self.backbone.fc.in_features
self.backbone.fc = nn.Sequential(nn.Linear(dim_mlp, dim_mlp), nn.ReLU(), self.backbone.fc)
Shouldn't the Linear be nn.Linear(dim_mlp, 128)? That way we would get the 128-dimensional hidden vector used in the paper.