Hello, I fine-tuned the model as described in the paper, but my results are much worse than yours, and I don't know why. Can you help me figure it out? Thank you!
Run command: `python train.py --id finetune --freeze_earlier_blocks 4`
Sorry for the late reply.
You didn't load any panorama/layout pretrained weights for the encoder, so you froze it while it carried only ImageNet-pretrained weights.
This could be the reason for your bad result.
If you really want to freeze the entire encoder, please consider loading my pretrained weights mentioned in the README (here).
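For illustration, here is a minimal sketch of the intended order of operations (load the panorama-pretrained weights first, then freeze), using a plain torchvision ResNet-50 as a stand-in for the encoder. The names, paths, and block split below are assumptions for the sketch, not HorizonNet's actual `train.py` code:

```python
import torch
import torchvision.models as models

# Stand-in for the ResNet-50 encoder.
net = models.resnet50()

# The checkpoint would be the file passed via --pth (e.g. the
# resnet50_rnn__panos2d3d.pth from the README); commented out so the
# sketch runs without the file, and the real checkpoint's key layout
# depends on how it was saved.
# state = torch.load('resnet50_rnn__panos2d3d.pth', map_location='cpu')
# net.load_state_dict(state)

# Freeze the stem plus the first N residual stages -- roughly what a
# flag like --freeze_earlier_blocks selects. N = 4 freezes the whole
# encoder, which only makes sense if panorama-adapted weights were
# loaded above.
N = 4
blocks = [torch.nn.Sequential(net.conv1, net.bn1),
          net.layer1, net.layer2, net.layer3, net.layer4]
for block in blocks[:N + 1]:
    for p in block.parameters():
        p.requires_grad = False

# Give the optimizer only the parameters that remain trainable.
optimizer = torch.optim.Adam(
    (p for p in net.parameters() if p.requires_grad), lr=1e-4)
```

If the encoder is frozen without loading the panorama-adapted checkpoint, the frozen features stay ImageNet-only, which matches the bad result described above.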
Some other questions:
Did you use the latest version?
Did you check the correctness of your dataset by visualization? (`python dataset.py -h` can help you.)
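Beyond `dataset.py`'s own help, a quick hypothetical spot check could look like this. It assumes each dataset item is an `(image, boundary_target, corner_target)` triple, matching the training loop quoted later in this thread, and the path is a placeholder:

```python
from dataset import PanoCorBonDataset  # dataset.py from this repo
import matplotlib.pyplot as plt

# Placeholder path; point this at your fine-tuning data.
dataset = PanoCorBonDataset(root_dir='data/my_finetune_set')

# Assumed item layout: (image tensor CHW, boundary target, corner target).
x, y_bon, y_cor = dataset[0]
plt.imshow(x.permute(1, 2, 0).numpy())  # CHW -> HWC for matplotlib
plt.title('sample 0: does the panorama look correct?')
plt.show()
```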
I fine-tuned the model according to the paper with all encoder blocks frozen, and the results were bad.
Run command: `python train.py --id finetune --freeze_earlier_blocks 4 --pth resnet50_rnn__panos2d3d.pth`
When I froze only some of the blocks, the results were relatively good. Does that mean the fewer blocks are frozen, the better the results?
Run command: `python train.py --id finetune --freeze_earlier_blocks 2 --pth resnet50_rnn__panos2d3d.pth`
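As a rough way to quantify that trade-off, the hypothetical helper below counts how many encoder parameters each `--freeze_earlier_blocks` setting leaves trainable, again using a torchvision ResNet-50 as a stand-in (the real block split may differ):

```python
import torch
import torchvision.models as models

def trainable_after_freezing(n):
    """Freeze the stem plus the first n residual stages, then count
    the parameters that still receive gradients."""
    net = models.resnet50()
    blocks = [torch.nn.Sequential(net.conv1, net.bn1),
              net.layer1, net.layer2, net.layer3, net.layer4]
    for block in blocks[:n + 1]:
        for p in block.parameters():
            p.requires_grad = False
    return sum(p.numel() for p in net.parameters() if p.requires_grad)

for n in (4, 2, 0):
    print('freeze_earlier_blocks=%d -> %d trainable parameters'
          % (n, trainable_after_freezing(n)))
```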
This is the code I modified:

# Create dataloader
############################## modified ##############################
dataset_train_finetune = PanoCorBonDataset(
    root_dir=args.train_finetune_dir,
    flip=not args.no_flip, rotate=not args.no_rotate, gamma=not args.no_gamma,
    stretch=not args.no_pano_stretch)
# The loop below reads from loader_train_finetune, so the dataset must
# also be wrapped in a DataLoader (arguments assumed to mirror the
# original loader_train):
loader_train_finetune = DataLoader(
    dataset_train_finetune, args.batch_size_train,
    shuffle=True, drop_last=True,
    num_workers=args.num_workers, pin_memory=not args.no_cuda)
# Start training
############################## modified ##############################
iter_train_finetune = iter(loader_train_finetune)
for _ in trange(len(loader_train_finetune),
                desc='Train ep%s' % ith_epoch, position=1):
    # Set learning rate
    adjust_learning_rate(optimizer, args)
    ############################## modified ##############################
    # Fetch a batch and compute the loss. These lines are shown
    # schematically (following the unmodified training loop); without
    # them, `loss` below would be undefined.
    x, y_bon, y_cor = next(iter_train_finetune)
    losses = feed_forward(net, x, y_bon, y_cor)
    loss = losses['total']
    # Backprop
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(net.parameters(), 3.0, norm_type='inf')
    optimizer.step()
    ith_batch += 1