
RecursionError: maximum recursion depth exceeded #46

Open
sepehratwork opened this issue Oct 14, 2021 · 3 comments
@sepehratwork

This message is logged to the terminal:

[2021-10-14 04:53:57,225 main.py:215 INFO train-abinet] ModelConfig(
(0): dataset_case_sensitive = False
(1): dataset_charset_path = data/charset_36.txt
(2): dataset_data_aug = True
(3): dataset_eval_case_sensitive = False
(4): dataset_image_height = 32
(5): dataset_image_width = 128
(6): dataset_max_length = 25
(7): dataset_multiscales = False
(8): dataset_num_workers = 14
(9): dataset_one_hot_y = True
(10): dataset_pin_memory = True
(11): dataset_smooth_factor = 0.1
(12): dataset_smooth_label = False
(13): dataset_test_batch_size = 384
(14): dataset_test_roots = ['data/training/MJ/MJ_train/', 'data/training/MJ/MJ_test/', 'data/training/MJ/MJ_valid/', 'data/training/ST']
(15): dataset_train_batch_size = 384
(16): dataset_train_roots = ['data/training/MJ/MJ_train/', 'data/training/MJ/MJ_test/', 'data/training/MJ/MJ_valid/', 'data/training/ST']
(17): dataset_use_sm = False
(18): global_name = train-abinet
(19): global_phase = train
(20): global_seed = None
(21): global_stage = train-super
(22): global_workdir = workdir/train-abinet
(23): model_alignment_loss_weight = 1.0
(24): model_checkpoint = None
(25): model_ensemble =
(26): model_iter_size = 3
(27): model_language_checkpoint = workdir/pretrain-language-model/pretrain-language-model.pth
(28): model_language_detach = True
(29): model_language_loss_weight = 1.0
(30): model_language_num_layers = 4
(31): model_language_use_self_attn = False
(32): model_name = modules.model_abinet_iter.ABINetIterModel
(33): model_strict = True
(34): model_use_vision = False
(35): model_vision_attention = position
(36): model_vision_backbone = transformer
(37): model_vision_backbone_ln = 3
(38): model_vision_checkpoint = workdir/pretrain-vision-model/best-pretrain-vision-model.pth
(39): model_vision_loss_weight = 1.0
(40): optimizer_args_betas = (0.9, 0.999)
(41): optimizer_bn_wd = False
(42): optimizer_clip_grad = 20
(43): optimizer_lr = 0.0001
(44): optimizer_scheduler_gamma = 0.1
(45): optimizer_scheduler_periods = [6, 4]
(46): optimizer_true_wd = False
(47): optimizer_type = Adam
(48): optimizer_wd = 0.0
(49): training_epochs = 10
(50): training_eval_iters = 3000
(51): training_save_iters = 3000
(52): training_show_iters = 50
(53): training_start_iters = 0
(54): training_stats_iters = 100000
)
[2021-10-14 04:53:57,226 main.py:222 INFO train-abinet] Construct dataset.
[2021-10-14 04:53:57,228 main.py:92 INFO train-abinet] 67199 training items found.
[2021-10-14 04:53:57,228 main.py:94 INFO train-abinet] 67199 valid items found.
[2021-10-14 04:53:57,228 main.py:226 INFO train-abinet] Construct model.
[2021-10-14 04:53:57,488 model_vision.py:37 INFO train-abinet] Read vision model from workdir/pretrain-vision-model/best-pretrain-vision-model.pth.
[2021-10-14 04:53:59,805 model_language.py:38 INFO train-abinet] Read language model from workdir/pretrain-language-model/pretrain-language-model.pth.
[2021-10-14 04:53:59,843 main.py:104 INFO train-abinet] ABINetIterModel(
(vision): BaseVision(
(backbone): ResTranformer(
(resnet): ResNet(
(conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(32, 32, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(4): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(5): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(4): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(5): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer5): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(pos_encoder): PositionalEncoding(
(dropout): Dropout(p=0.1)
)
(transformer): TransformerEncoder(
(layers): ModuleList(
(0): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm1): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.1)
(dropout2): Dropout(p=0.1)
)
(1): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm1): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.1)
(dropout2): Dropout(p=0.1)
)
(2): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm1): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout1): Dropout(p=0.1)
(dropout2): Dropout(p=0.1)
)
)
)
)
(attention): PositionAttention(
(k_encoder): Sequential(
(0): Sequential(
(0): Conv2d(512, 64, kernel_size=(3, 3), stride=(1, 2), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(1): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(2): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(3): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(k_decoder): Sequential(
(0): Sequential(
(0): Upsample(scale_factor=2.0, mode=nearest)
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU(inplace)
)
(1): Sequential(
(0): Upsample(scale_factor=2.0, mode=nearest)
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU(inplace)
)
(2): Sequential(
(0): Upsample(scale_factor=2.0, mode=nearest)
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU(inplace)
)
(3): Sequential(
(0): Upsample(size=(8, 32), mode=nearest)
(1): Conv2d(64, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU(inplace)
)
)
(pos_encoder): PositionalEncoding(
(dropout): Dropout(p=0)
)
(project): Linear(in_features=512, out_features=512, bias=True)
)
(cls): Linear(in_features=512, out_features=37, bias=True)
)
(language): BCNLanguage(
(proj): Linear(in_features=37, out_features=512, bias=False)
(token_encoder): PositionalEncoding(
(dropout): Dropout(p=0.1)
)
(pos_encoder): PositionalEncoding(
(dropout): Dropout(p=0)
)
(model): TransformerDecoder(
(layers): ModuleList(
(0): TransformerDecoderLayer(
(multihead_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm3): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout2): Dropout(p=0.1)
(dropout3): Dropout(p=0.1)
)
(1): TransformerDecoderLayer(
(multihead_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm3): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout2): Dropout(p=0.1)
(dropout3): Dropout(p=0.1)
)
(2): TransformerDecoderLayer(
(multihead_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm3): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout2): Dropout(p=0.1)
(dropout3): Dropout(p=0.1)
)
(3): TransformerDecoderLayer(
(multihead_attn): MultiheadAttention(
(out_proj): Linear(in_features=512, out_features=512, bias=True)
)
(linear1): Linear(in_features=512, out_features=2048, bias=True)
(dropout): Dropout(p=0.1)
(linear2): Linear(in_features=2048, out_features=512, bias=True)
(norm2): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(norm3): LayerNorm(torch.Size([512]), eps=1e-05, elementwise_affine=True)
(dropout2): Dropout(p=0.1)
(dropout3): Dropout(p=0.1)
)
)
)
(cls): Linear(in_features=512, out_features=37, bias=True)
)
(alignment): BaseAlignment(
(w_att): Linear(in_features=1024, out_features=512, bias=True)
(cls): Linear(in_features=512, out_features=37, bias=True)
)
)
[2021-10-14 04:53:59,848 main.py:229 INFO train-abinet] Construct learner.
[2021-10-14 04:53:59,962 main.py:233 INFO train-abinet] Start training.
Traceback (most recent call last):
File "/ABINet/dataset.py", line 103, in get
return self._next_image(idx)
File "/ABINet/dataset.py", line 61, in _next_image
next_index = random.randint(0, len(self) - 1)
RecursionError: maximum recursion depth exceeded
[2021-10-14 04:53:59,994 dataset.py:119 INFO train-abinet] Corrupted image is found: MJ_train, 34607, , 0
Fatal Python error: Cannot recover from stack overflow.

Thread 0x00007f7923687700 (most recent call first):
File "/home/user/miniconda/envs/py36/lib/python3.6/selectors.py", line 376 in select
File "/home/user/miniconda/envs/py36/lib/python3.6/multiprocessing/connection.py", line 911 in wait
File "/home/user/miniconda/envs/py36/lib/python3.6/multiprocessing/connection.py", line 414 in _poll
File "/home/user/miniconda/envs/py36/lib/python3.6/multiprocessing/connection.py", line 257 in poll
File "/home/user/miniconda/envs/py36/lib/python3.6/multiprocessing/queues.py", line 104 in get
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/tensorboardX/event_file_writer.py", line 202 in run
File "/home/user/miniconda/envs/py36/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/home/user/miniconda/envs/py36/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007f797e1ef700 (most recent call first):
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/fastai/callbacks/tensorboard.py", line 235 in _queue_processor
File "/home/user/miniconda/envs/py36/lib/python3.6/threading.py", line 864 in run
File "/home/user/miniconda/envs/py36/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/home/user/miniconda/envs/py36/lib/python3.6/threading.py", line 884 in _bootstrap

Current thread 0x00007f7a07cd3700 (most recent call first):
File "/ABINet/dataset.py", line 61 in _next_image
File "/ABINet/dataset.py", line 103 in get
File "/ABINet/dataset.py", line 62 in _next_image
File "/ABINet/dataset.py", line 103 in get
File "/ABINet/dataset.py", line 62 in _next_image
File "/ABINet/dataset.py", line 103 in get
File "/ABINet/dataset.py", line 62 in _next_image
File "/ABINet/dataset.py", line 103 in get
File "/ABINet/dataset.py", line 62 in _next_image
File "/ABINet/dataset.py", line 103 in get
... (the same _next_image/get pair repeats, with one frame at line 120 in get, until the stack overflows)
Aborted (core dumped)
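The traceback shows `get()` handling a corrupted sample by calling `_next_image()`, which calls `get()` again, so a run of bad samples grows the call stack until Python's recursion limit. Rewriting the retry as a bounded loop avoids this. A minimal sketch with hypothetical names (`RobustDataset`, `samples`, `max_retries`); the real `dataset.py` reads from LMDB and differs in detail:

```python
import random

class RobustDataset:
    """Sketch: replace the mutual recursion between get() and _next_image()
    with a bounded iterative retry loop. None marks a corrupted entry here,
    standing in for a sample that fails to decode."""

    def __init__(self, samples, max_retries=50):
        self.samples = samples
        self.max_retries = max_retries

    def __len__(self):
        return len(self.samples)

    def get(self, idx):
        for _ in range(self.max_retries):
            item = self.samples[idx]
            if item is not None:  # sample decoded successfully
                return item
            # Corrupted sample: pick another index instead of recursing.
            idx = random.randint(0, len(self) - 1)
        raise RuntimeError(
            f"No valid sample found after {self.max_retries} retries; "
            "the dataset may be largely corrupted or the paths wrong.")
```

With this shape, a dataset where most samples are broken fails fast with a clear error instead of a `RecursionError` and a core dump.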

Do you know what I should do with my dataset?
Thanks
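Since the log names a specific corrupted image (`MJ_train, 34607`), a cheap first pass is to scan for files whose JPEG/PNG signature or trailer bytes are missing before rebuilding the dataset. A hedged sketch, assuming loose image files on disk (`find_corrupted` is a hypothetical helper; ABINet's published datasets are LMDB, which would need an `lmdb` cursor loop instead):

```python
import os

# JPEG files start with the SOI marker and end with the EOI marker;
# PNG files start with a fixed 8-byte signature.
JPEG_SOI, JPEG_EOI = b"\xff\xd8", b"\xff\xd9"
PNG_SIG = b"\x89PNG\r\n\x1a\n"

def find_corrupted(root):
    """Return paths of .jpg/.jpeg/.png files under root whose magic
    bytes or trailers look wrong (truncated or mangled files)."""
    bad = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                data = f.read()
            lower = name.lower()
            if lower.endswith((".jpg", ".jpeg")):
                # Some writers pad with trailing NULs; strip before checking EOI.
                ok = data.startswith(JPEG_SOI) and data.rstrip(b"\0").endswith(JPEG_EOI)
            elif lower.endswith(".png"):
                ok = data.startswith(PNG_SIG)
            else:
                continue
            if not ok:
                bad.append(path)
    return bad
```

This only catches truncation and wrong file types; fully decoding each image (e.g. with Pillow's `Image.verify()`) catches more, at the cost of speed.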

@pmgautam

I am also facing this issue. Has anyone solved this?

@yywwwwww

yywwwwww commented Nov 6, 2023

Hello, have you solved this problem? I have the same problem but cannot solve it.

@InsaneOnion

I am also facing this issue. Has anyone solved this?
