PaddleClas_ResNet50_bs256_fp32_DP_N1C8_log
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
LAUNCH INFO 2023-07-28 07:49:49,627 ----------- Configuration ----------------------
LAUNCH INFO 2023-07-28 07:49:49,627 auto_parallel_config: None
LAUNCH INFO 2023-07-28 07:49:49,627 devices: 0,1,2,3,4,5,6,7
LAUNCH INFO 2023-07-28 07:49:49,627 elastic_level: -1
LAUNCH INFO 2023-07-28 07:49:49,627 elastic_timeout: 30
LAUNCH INFO 2023-07-28 07:49:49,627 gloo_port: 6767
LAUNCH INFO 2023-07-28 07:49:49,627 host: None
LAUNCH INFO 2023-07-28 07:49:49,627 ips: None
LAUNCH INFO 2023-07-28 07:49:49,628 job_id: default
LAUNCH INFO 2023-07-28 07:49:49,628 legacy: False
LAUNCH INFO 2023-07-28 07:49:49,628 log_dir: log
LAUNCH INFO 2023-07-28 07:49:49,628 log_level: INFO
LAUNCH INFO 2023-07-28 07:49:49,628 log_overwrite: False
LAUNCH INFO 2023-07-28 07:49:49,628 master: None
LAUNCH INFO 2023-07-28 07:49:49,628 max_restart: 3
LAUNCH INFO 2023-07-28 07:49:49,628 nnodes: 1
LAUNCH INFO 2023-07-28 07:49:49,628 nproc_per_node: None
LAUNCH INFO 2023-07-28 07:49:49,628 rank: -1
LAUNCH INFO 2023-07-28 07:49:49,628 run_mode: collective
LAUNCH INFO 2023-07-28 07:49:49,628 server_num: None
LAUNCH INFO 2023-07-28 07:49:49,628 servers:
LAUNCH INFO 2023-07-28 07:49:49,628 start_port: 6070
LAUNCH INFO 2023-07-28 07:49:49,628 trainer_num: None
LAUNCH INFO 2023-07-28 07:49:49,628 trainers:
LAUNCH INFO 2023-07-28 07:49:49,628 training_script: ppcls/static/train.py
LAUNCH INFO 2023-07-28 07:49:49,628 training_script_args: ['-c', 'ppcls/configs/ImageNet/ResNet/ResNet50.yaml', '-o', 'DataLoader.Train.sampler.batch_size=256', '-o', 'Global.seed=1234', '-o', 'Global.epochs=8', '-o', 'DataLoader.Train.loader.num_workers=8', '-o', 'Global.eval_during_train=False', '-o', 'fuse_elewise_add_act_ops=True', '-o', 'enable_addto=True']
LAUNCH INFO 2023-07-28 07:49:49,628 with_gloo: 1
LAUNCH INFO 2023-07-28 07:49:49,628 --------------------------------------------------
LAUNCH INFO 2023-07-28 07:49:49,629 Job: default, mode collective, replicas 1[1:1], elastic False
LAUNCH INFO 2023-07-28 07:49:49,637 Run Pod: fcxkfi, replicas 8, status ready
LAUNCH INFO 2023-07-28 07:49:49,752 Watching Pod: fcxkfi, replicas 8, status running
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
A new field (seed) detected!
A new field (fuse_elewise_add_act_ops) detected!
A new field (enable_addto) detected!
[2023/07/28 07:49:52] ppcls INFO:
===========================================================
== PaddleClas is powered by PaddlePaddle ! ==
===========================================================
== ==
== For more info please go to the following website. ==
== ==
== https://github.com/PaddlePaddle/PaddleClas ==
===========================================================
[2023/07/28 07:49:52] ppcls INFO: Global :
[2023/07/28 07:49:52] ppcls INFO: checkpoints : None
[2023/07/28 07:49:52] ppcls INFO: pretrained_model : None
[2023/07/28 07:49:52] ppcls INFO: output_dir : ./output/
[2023/07/28 07:49:52] ppcls INFO: device : gpu
[2023/07/28 07:49:52] ppcls INFO: save_interval : 1
[2023/07/28 07:49:52] ppcls INFO: eval_during_train : False
[2023/07/28 07:49:52] ppcls INFO: eval_interval : 1
[2023/07/28 07:49:52] ppcls INFO: epochs : 8
[2023/07/28 07:49:52] ppcls INFO: print_batch_step : 10
[2023/07/28 07:49:52] ppcls INFO: use_visualdl : False
[2023/07/28 07:49:52] ppcls INFO: image_shape : [3, 224, 224]
[2023/07/28 07:49:52] ppcls INFO: save_inference_dir : ./inference
[2023/07/28 07:49:52] ppcls INFO: to_static : False
[2023/07/28 07:49:52] ppcls INFO: seed : 1234
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
[2023/07/28 07:49:52] ppcls INFO: Arch :
[2023/07/28 07:49:52] ppcls INFO: name : ResNet50
[2023/07/28 07:49:52] ppcls INFO: class_num : 1000
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
[2023/07/28 07:49:52] ppcls INFO: Loss :
[2023/07/28 07:49:52] ppcls INFO: Train :
[2023/07/28 07:49:52] ppcls INFO: CELoss :
[2023/07/28 07:49:52] ppcls INFO: weight : 1.0
[2023/07/28 07:49:52] ppcls INFO: Eval :
[2023/07/28 07:49:52] ppcls INFO: CELoss :
[2023/07/28 07:49:52] ppcls INFO: weight : 1.0
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
[2023/07/28 07:49:52] ppcls INFO: Optimizer :
[2023/07/28 07:49:52] ppcls INFO: name : Momentum
[2023/07/28 07:49:52] ppcls INFO: momentum : 0.9
[2023/07/28 07:49:52] ppcls INFO: lr :
[2023/07/28 07:49:52] ppcls INFO: name : Piecewise
[2023/07/28 07:49:52] ppcls INFO: learning_rate : 0.1
[2023/07/28 07:49:52] ppcls INFO: decay_epochs : [30, 60, 90]
[2023/07/28 07:49:52] ppcls INFO: values : [0.1, 0.01, 0.001, 0.0001]
[2023/07/28 07:49:52] ppcls INFO: regularizer :
[2023/07/28 07:49:52] ppcls INFO: name : L2
[2023/07/28 07:49:52] ppcls INFO: coeff : 0.0001
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
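Note on the Piecewise schedule above: the learning rate is held constant between decay epochs, and since this run trains for only 8 epochs it stays at 0.1 throughout (as the step logs below show). A minimal sketch of the mapping, using only the decay_epochs and values printed above (piecewise_lr is a hypothetical helper, not PaddleClas code):
def piecewise_lr(epoch, decay_epochs=(30, 60, 90), values=(0.1, 0.01, 0.001, 0.0001)):
    # lr stays at values[i] until epoch reaches decay_epochs[i], then drops to values[i+1].
    for boundary, lr in zip(decay_epochs, values):
        if epoch < boundary:
            return lr
    return values[-1]
# piecewise_lr(0) -> 0.1, piecewise_lr(45) -> 0.01, piecewise_lr(95) -> 0.0001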
[2023/07/28 07:49:52] ppcls INFO: DataLoader :
[2023/07/28 07:49:52] ppcls INFO: Train :
[2023/07/28 07:49:52] ppcls INFO: dataset :
[2023/07/28 07:49:52] ppcls INFO: name : ImageNetDataset
[2023/07/28 07:49:52] ppcls INFO: image_root : ./dataset/ILSVRC2012/
[2023/07/28 07:49:52] ppcls INFO: cls_label_path : ./dataset/ILSVRC2012/train_list.txt
[2023/07/28 07:49:52] ppcls INFO: transform_ops :
[2023/07/28 07:49:52] ppcls INFO: DecodeImage :
[2023/07/28 07:49:52] ppcls INFO: to_rgb : True
[2023/07/28 07:49:52] ppcls INFO: channel_first : False
[2023/07/28 07:49:52] ppcls INFO: RandCropImage :
[2023/07/28 07:49:52] ppcls INFO: size : 224
[2023/07/28 07:49:52] ppcls INFO: RandFlipImage :
[2023/07/28 07:49:52] ppcls INFO: flip_code : 1
[2023/07/28 07:49:52] ppcls INFO: NormalizeImage :
[2023/07/28 07:49:52] ppcls INFO: scale : 1.0/255.0
[2023/07/28 07:49:52] ppcls INFO: mean : [0.485, 0.456, 0.406]
[2023/07/28 07:49:52] ppcls INFO: std : [0.229, 0.224, 0.225]
[2023/07/28 07:49:52] ppcls INFO: order :
[2023/07/28 07:49:52] ppcls INFO: sampler :
[2023/07/28 07:49:52] ppcls INFO: name : DistributedBatchSampler
[2023/07/28 07:49:52] ppcls INFO: batch_size : 256
[2023/07/28 07:49:52] ppcls INFO: drop_last : False
[2023/07/28 07:49:52] ppcls INFO: shuffle : True
[2023/07/28 07:49:52] ppcls INFO: loader :
[2023/07/28 07:49:52] ppcls INFO: num_workers : 8
[2023/07/28 07:49:52] ppcls INFO: use_shared_memory : True
[2023/07/28 07:49:52] ppcls INFO: Eval :
[2023/07/28 07:49:52] ppcls INFO: dataset :
[2023/07/28 07:49:52] ppcls INFO: name : ImageNetDataset
[2023/07/28 07:49:52] ppcls INFO: image_root : ./dataset/ILSVRC2012/
[2023/07/28 07:49:52] ppcls INFO: cls_label_path : ./dataset/ILSVRC2012/val_list.txt
[2023/07/28 07:49:52] ppcls INFO: transform_ops :
[2023/07/28 07:49:52] ppcls INFO: DecodeImage :
[2023/07/28 07:49:52] ppcls INFO: to_rgb : True
[2023/07/28 07:49:52] ppcls INFO: channel_first : False
[2023/07/28 07:49:52] ppcls INFO: ResizeImage :
[2023/07/28 07:49:52] ppcls INFO: resize_short : 256
[2023/07/28 07:49:52] ppcls INFO: CropImage :
[2023/07/28 07:49:52] ppcls INFO: size : 224
[2023/07/28 07:49:52] ppcls INFO: NormalizeImage :
[2023/07/28 07:49:52] ppcls INFO: scale : 1.0/255.0
[2023/07/28 07:49:52] ppcls INFO: mean : [0.485, 0.456, 0.406]
[2023/07/28 07:49:52] ppcls INFO: std : [0.229, 0.224, 0.225]
[2023/07/28 07:49:52] ppcls INFO: order :
[2023/07/28 07:49:52] ppcls INFO: sampler :
[2023/07/28 07:49:52] ppcls INFO: name : DistributedBatchSampler
[2023/07/28 07:49:52] ppcls INFO: batch_size : 64
[2023/07/28 07:49:52] ppcls INFO: drop_last : False
[2023/07/28 07:49:52] ppcls INFO: shuffle : False
[2023/07/28 07:49:52] ppcls INFO: loader :
[2023/07/28 07:49:52] ppcls INFO: num_workers : 4
[2023/07/28 07:49:52] ppcls INFO: use_shared_memory : True
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
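Note on NormalizeImage in both pipelines above: it scales pixels by 1/255 and standardizes with the listed ImageNet mean/std. A rough numpy sketch (assuming an HWC RGB uint8 array, with order left at its default; normalize_image is a hypothetical helper):
import numpy as np
def normalize_image(img_uint8):
    # scale: 1.0/255.0, then (x - mean) / std per channel, HWC layout.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = img_uint8.astype(np.float32) * (1.0 / 255.0)
    return (img - mean) / std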
[2023/07/28 07:49:52] ppcls INFO: Infer :
[2023/07/28 07:49:52] ppcls INFO: infer_imgs : docs/images/inference_deployment/whl_demo.jpg
[2023/07/28 07:49:52] ppcls INFO: batch_size : 10
[2023/07/28 07:49:52] ppcls INFO: transforms :
[2023/07/28 07:49:52] ppcls INFO: DecodeImage :
[2023/07/28 07:49:52] ppcls INFO: to_rgb : True
[2023/07/28 07:49:52] ppcls INFO: channel_first : False
[2023/07/28 07:49:52] ppcls INFO: ResizeImage :
[2023/07/28 07:49:52] ppcls INFO: resize_short : 256
[2023/07/28 07:49:52] ppcls INFO: CropImage :
[2023/07/28 07:49:52] ppcls INFO: size : 224
[2023/07/28 07:49:52] ppcls INFO: NormalizeImage :
[2023/07/28 07:49:52] ppcls INFO: scale : 1.0/255.0
[2023/07/28 07:49:52] ppcls INFO: mean : [0.485, 0.456, 0.406]
[2023/07/28 07:49:52] ppcls INFO: std : [0.229, 0.224, 0.225]
[2023/07/28 07:49:52] ppcls INFO: order :
[2023/07/28 07:49:52] ppcls INFO: ToCHWImage : None
[2023/07/28 07:49:52] ppcls INFO: PostProcess :
[2023/07/28 07:49:52] ppcls INFO: name : Topk
[2023/07/28 07:49:52] ppcls INFO: topk : 5
[2023/07/28 07:49:52] ppcls INFO: class_id_map_file : ppcls/utils/imagenet1k_label_list.txt
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
[2023/07/28 07:49:52] ppcls INFO: Metric :
[2023/07/28 07:49:52] ppcls INFO: Train :
[2023/07/28 07:49:52] ppcls INFO: TopkAcc :
[2023/07/28 07:49:52] ppcls INFO: topk : [1, 5]
[2023/07/28 07:49:52] ppcls INFO: Eval :
[2023/07/28 07:49:52] ppcls INFO: TopkAcc :
[2023/07/28 07:49:52] ppcls INFO: topk : [1, 5]
[2023/07/28 07:49:52] ppcls INFO: ------------------------------------------------------------
[2023/07/28 07:49:52] ppcls INFO: fuse_elewise_add_act_ops : True
[2023/07/28 07:49:52] ppcls INFO: enable_addto : True
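Note on the two flags above: fuse_elewise_add_act_ops and enable_addto are static-graph build options that correspond to paddle.static.BuildStrategy fields, roughly as sketched below (illustrative only; how ppcls/static/train.py actually wires them is not shown in this log):
import paddle
paddle.enable_static()
build_strategy = paddle.static.BuildStrategy()
build_strategy.fuse_elewise_add_act_ops = True  # fuse elementwise_add + activation pairs into one op
build_strategy.enable_addto = True              # allow in-place (addto) gradient accumulation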
[2023-07-28 07:49:52,172] [ INFO] distributed_strategy.py:160 - distributed strategy initialized
[2023/07/28 07:50:08] ppcls WARNING: "init_res" will be deprecated, please use "init_net" instead.
[2023-07-28 07:50:08,881] [ INFO] distributed_strategy.py:160 - distributed strategy initialized
[2023-07-28 07:50:08,882] [ WARNING] fleet.py:1092 - It is recommended to use DistributedStrategy in fleet.init(). The strategy here is only for compatibility. If the strategy in fleet.distributed_optimizer() is not None, then it will overwrite the DistributedStrategy in fleet.init(), which will take effect in distributed training.
I0728 07:50:09.184554 20850 fuse_pass_base.cc:59] --- detected 16 subgraphs
I0728 07:50:09.197741 20850 fuse_pass_base.cc:59] --- detected 16 subgraphs
server not ready, wait 3 sec to retry...
I0728 07:50:13.750840 20850 interpretercore.cc:237] New Executor is Running.
W0728 07:50:13.756700 20850 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 12.0, Runtime API Version: 11.2
W0728 07:50:13.756727 20850 gpu_resources.cc:149] device: 0, cuDNN Version: 8.2.
I0728 07:50:27.829695 20850 interpreter_util.cc:518] Standalone Executor is Used.
[2023/07/28 07:50:37] ppcls INFO: epoch:0 train step:10 lr: 0.100000, loss: 7.0510 top1: 0.0039 top5: 0.0078 batch_cost: 0.66375 s, reader_cost: 0.00076 s, ips: 385.68800 samples/sec.
[2023/07/28 07:50:44] ppcls INFO: epoch:0 train step:20 lr: 0.100000, loss: 7.2424 top1: 0.0000 top5: 0.0078 batch_cost: 0.66463 s, reader_cost: 0.00089 s, ips: 385.17405 samples/sec.
[2023/07/28 07:50:50] ppcls INFO: epoch:0 train step:30 lr: 0.100000, loss: 7.0827 top1: 0.0000 top5: 0.0000 batch_cost: 0.66602 s, reader_cost: 0.00098 s, ips: 384.37240 samples/sec.
[2023/07/28 07:50:57] ppcls INFO: epoch:0 train step:40 lr: 0.100000, loss: 7.1303 top1: 0.0000 top5: 0.0078 batch_cost: 0.66647 s, reader_cost: 0.00099 s, ips: 384.11330 samples/sec.
[2023/07/28 07:51:04] ppcls INFO: epoch:0 train step:50 lr: 0.100000, loss: 6.9684 top1: 0.0039 top5: 0.0117 batch_cost: 0.66643 s, reader_cost: 0.00099 s, ips: 384.13738 samples/sec.
[2023/07/28 07:51:10] ppcls INFO: epoch:0 train step:60 lr: 0.100000, loss: 6.9367 top1: 0.0000 top5: 0.0078 batch_cost: 0.66696 s, reader_cost: 0.00097 s, ips: 383.83347 samples/sec.
[2023/07/28 07:51:17] ppcls INFO: epoch:0 train step:70 lr: 0.100000, loss: 6.9185 top1: 0.0117 top5: 0.0156 batch_cost: 0.66738 s, reader_cost: 0.00098 s, ips: 383.58669 samples/sec.
[2023/07/28 07:51:24] ppcls INFO: epoch:0 train step:80 lr: 0.100000, loss: 6.9859 top1: 0.0039 top5: 0.0078 batch_cost: 0.66776 s, reader_cost: 0.00095 s, ips: 383.37373 samples/sec.
[2023/07/28 07:51:30] ppcls INFO: epoch:0 train step:90 lr: 0.100000, loss: 6.9183 top1: 0.0000 top5: 0.0078 batch_cost: 0.66779 s, reader_cost: 0.00096 s, ips: 383.35653 samples/sec.
[2023/07/28 07:51:37] ppcls INFO: epoch:0 train step:100 lr: 0.100000, loss: 6.8587 top1: 0.0078 top5: 0.0117 batch_cost: 0.66795 s, reader_cost: 0.00095 s, ips: 383.26232 samples/sec.
[2023/07/28 07:51:44] ppcls INFO: epoch:0 train step:110 lr: 0.100000, loss: 6.8582 top1: 0.0000 top5: 0.0156 batch_cost: 0.66802 s, reader_cost: 0.00094 s, ips: 383.22271 samples/sec.
[2023/07/28 07:51:50] ppcls INFO: epoch:0 train step:120 lr: 0.100000, loss: 6.8077 top1: 0.0117 top5: 0.0273 batch_cost: 0.66815 s, reader_cost: 0.00093 s, ips: 383.14913 samples/sec.
[2023/07/28 07:51:57] ppcls INFO: epoch:0 train step:130 lr: 0.100000, loss: 6.8112 top1: 0.0039 top5: 0.0078 batch_cost: 0.66833 s, reader_cost: 0.00093 s, ips: 383.04212 samples/sec.
[2023/07/28 07:52:04] ppcls INFO: epoch:0 train step:140 lr: 0.100000, loss: 6.6684 top1: 0.0117 top5: 0.0273 batch_cost: 0.66841 s, reader_cost: 0.00092 s, ips: 382.99877 samples/sec.
[2023/07/28 07:52:11] ppcls INFO: epoch:0 train step:150 lr: 0.100000, loss: 6.6124 top1: 0.0039 top5: 0.0312 batch_cost: 0.66854 s, reader_cost: 0.00091 s, ips: 382.92335 samples/sec.
[2023/07/28 07:52:17] ppcls INFO: epoch:0 train step:160 lr: 0.100000, loss: 6.6398 top1: 0.0000 top5: 0.0039 batch_cost: 0.66848 s, reader_cost: 0.00090 s, ips: 382.95749 samples/sec.
[2023/07/28 07:52:24] ppcls INFO: epoch:0 train step:170 lr: 0.100000, loss: 6.4299 top1: 0.0117 top5: 0.0312 batch_cost: 0.66856 s, reader_cost: 0.00089 s, ips: 382.91394 samples/sec.
[2023/07/28 07:52:31] ppcls INFO: epoch:0 train step:180 lr: 0.100000, loss: 6.5616 top1: 0.0078 top5: 0.0156 batch_cost: 0.66867 s, reader_cost: 0.00088 s, ips: 382.85195 samples/sec.
[2023/07/28 07:52:37] ppcls INFO: epoch:0 train step:190 lr: 0.100000, loss: 6.3911 top1: 0.0039 top5: 0.0273 batch_cost: 0.66878 s, reader_cost: 0.00087 s, ips: 382.78682 samples/sec.
[2023/07/28 07:52:44] ppcls INFO: epoch:0 train step:200 lr: 0.100000, loss: 6.3925 top1: 0.0156 top5: 0.0469 batch_cost: 0.66889 s, reader_cost: 0.00087 s, ips: 382.72418 samples/sec.
[2023/07/28 07:52:51] ppcls INFO: epoch:0 train step:210 lr: 0.100000, loss: 6.3392 top1: 0.0156 top5: 0.0703 batch_cost: 0.66890 s, reader_cost: 0.00088 s, ips: 382.71856 samples/sec.
[2023/07/28 07:52:57] ppcls INFO: epoch:0 train step:220 lr: 0.100000, loss: 6.1845 top1: 0.0234 top5: 0.0859 batch_cost: 0.66901 s, reader_cost: 0.00087 s, ips: 382.65720 samples/sec.
[2023/07/28 07:53:04] ppcls INFO: epoch:0 train step:230 lr: 0.100000, loss: 6.0534 top1: 0.0312 top5: 0.0977 batch_cost: 0.66905 s, reader_cost: 0.00086 s, ips: 382.63409 samples/sec.
[2023/07/28 07:53:11] ppcls INFO: epoch:0 train step:240 lr: 0.100000, loss: 6.3658 top1: 0.0273 top5: 0.0508 batch_cost: 0.66921 s, reader_cost: 0.00086 s, ips: 382.54063 samples/sec.
[2023/07/28 07:53:18] ppcls INFO: epoch:0 train step:250 lr: 0.100000, loss: 6.0838 top1: 0.0195 top5: 0.0625 batch_cost: 0.66950 s, reader_cost: 0.00085 s, ips: 382.37240 samples/sec.
[2023/07/28 07:53:24] ppcls INFO: epoch:0 train step:260 lr: 0.100000, loss: 6.0542 top1: 0.0234 top5: 0.0820 batch_cost: 0.66959 s, reader_cost: 0.00141 s, ips: 382.32160 samples/sec.
[2023/07/28 07:53:31] ppcls INFO: epoch:0 train step:270 lr: 0.100000, loss: 6.1229 top1: 0.0273 top5: 0.0781 batch_cost: 0.67040 s, reader_cost: 0.00139 s, ips: 381.85940 samples/sec.
[2023/07/28 07:53:38] ppcls INFO: epoch:0 train step:280 lr: 0.100000, loss: 5.9238 top1: 0.0195 top5: 0.0625 batch_cost: 0.67102 s, reader_cost: 0.00136 s, ips: 381.50948 samples/sec.
[2023/07/28 07:53:45] ppcls INFO: epoch:0 train step:290 lr: 0.100000, loss: 5.8097 top1: 0.0391 top5: 0.0977 batch_cost: 0.67109 s, reader_cost: 0.00134 s, ips: 381.47020 samples/sec.
[2023/07/28 07:53:52] ppcls INFO: epoch:0 train step:300 lr: 0.100000, loss: 5.8087 top1: 0.0312 top5: 0.1133 batch_cost: 0.67108 s, reader_cost: 0.00132 s, ips: 381.47437 samples/sec.
[2023/07/28 07:53:58] ppcls INFO: epoch:0 train step:310 lr: 0.100000, loss: 5.7020 top1: 0.0508 top5: 0.1172 batch_cost: 0.67122 s, reader_cost: 0.00130 s, ips: 381.39314 samples/sec.
[2023/07/28 07:54:05] ppcls INFO: epoch:0 train step:320 lr: 0.100000, loss: 5.7232 top1: 0.0547 top5: 0.1406 batch_cost: 0.67246 s, reader_cost: 0.00129 s, ips: 380.69270 samples/sec.
[2023/07/28 07:54:12] ppcls INFO: epoch:0 train step:330 lr: 0.100000, loss: 5.9645 top1: 0.0234 top5: 0.0742 batch_cost: 0.67241 s, reader_cost: 0.00128 s, ips: 380.72021 samples/sec.
[2023/07/28 07:54:19] ppcls INFO: epoch:0 train step:340 lr: 0.100000, loss: 5.7864 top1: 0.0391 top5: 0.1094 batch_cost: 0.67244 s, reader_cost: 0.00127 s, ips: 380.70133 samples/sec.
[2023/07/28 07:54:26] ppcls INFO: epoch:0 train step:350 lr: 0.100000, loss: 5.5519 top1: 0.0469 top5: 0.1523 batch_cost: 0.67234 s, reader_cost: 0.00125 s, ips: 380.75984 samples/sec.
[2023/07/28 07:54:32] ppcls INFO: epoch:0 train step:360 lr: 0.100000, loss: 5.7048 top1: 0.0391 top5: 0.0938 batch_cost: 0.67265 s, reader_cost: 0.00126 s, ips: 380.58487 samples/sec.
[2023/07/28 07:54:39] ppcls INFO: epoch:0 train step:370 lr: 0.100000, loss: 5.7580 top1: 0.0234 top5: 0.0977 batch_cost: 0.67283 s, reader_cost: 0.00890 s, ips: 380.48106 samples/sec.
[2023/07/28 07:54:46] ppcls INFO: epoch:0 train step:380 lr: 0.100000, loss: 5.3609 top1: 0.0703 top5: 0.1953 batch_cost: 0.67339 s, reader_cost: 0.01592 s, ips: 380.16743 samples/sec.
[2023/07/28 07:54:53] ppcls INFO: epoch:0 train step:390 lr: 0.100000, loss: 5.3399 top1: 0.0664 top5: 0.1836 batch_cost: 0.67377 s, reader_cost: 0.02425 s, ips: 379.95292 samples/sec.
[2023/07/28 07:55:00] ppcls INFO: epoch:0 train step:400 lr: 0.100000, loss: 5.4978 top1: 0.0547 top5: 0.1445 batch_cost: 0.67464 s, reader_cost: 0.03186 s, ips: 379.46280 samples/sec.
[2023/07/28 07:55:07] ppcls INFO: epoch:0 train step:410 lr: 0.100000, loss: 5.2122 top1: 0.1133 top5: 0.1914 batch_cost: 0.67533 s, reader_cost: 0.03799 s, ips: 379.07440 samples/sec.
[2023/07/28 07:55:14] ppcls INFO: epoch:0 train step:420 lr: 0.100000, loss: 5.3411 top1: 0.0469 top5: 0.1914 batch_cost: 0.67621 s, reader_cost: 0.04478 s, ips: 378.58249 samples/sec.
[2023/07/28 07:55:21] ppcls INFO: epoch:0 train step:430 lr: 0.100000, loss: 5.0937 top1: 0.0742 top5: 0.2422 batch_cost: 0.67697 s, reader_cost: 0.05335 s, ips: 378.15728 samples/sec.
[2023/07/28 07:55:28] ppcls INFO: epoch:0 train step:440 lr: 0.100000, loss: 5.3157 top1: 0.0625 top5: 0.1523 batch_cost: 0.67679 s, reader_cost: 0.05842 s, ips: 378.25391 samples/sec.
[2023/07/28 07:55:35] ppcls INFO: epoch:0 train step:450 lr: 0.100000, loss: 5.0596 top1: 0.0703 top5: 0.2188 batch_cost: 0.67712 s, reader_cost: 0.06461 s, ips: 378.07172 samples/sec.
[2023/07/28 07:55:42] ppcls INFO: epoch:0 train step:460 lr: 0.100000, loss: 5.2225 top1: 0.0703 top5: 0.1953 batch_cost: 0.67777 s, reader_cost: 0.06988 s, ips: 377.71095 samples/sec.
[2023/07/28 07:55:49] ppcls INFO: epoch:0 train step:470 lr: 0.100000, loss: 4.9951 top1: 0.1016 top5: 0.2188 batch_cost: 0.67792 s, reader_cost: 0.07278 s, ips: 377.62535 samples/sec.
[2023/07/28 07:55:56] ppcls INFO: epoch:0 train step:480 lr: 0.100000, loss: 5.0845 top1: 0.0820 top5: 0.2227 batch_cost: 0.67809 s, reader_cost: 0.07127 s, ips: 377.53191 samples/sec.
[2023/07/28 07:56:02] ppcls INFO: epoch:0 train step:490 lr: 0.100000, loss: 5.0536 top1: 0.0703 top5: 0.2109 batch_cost: 0.67797 s, reader_cost: 0.06983 s, ips: 377.59567 samples/sec.
[2023/07/28 07:56:09] ppcls INFO: epoch:0 train step:500 lr: 0.100000, loss: 5.1822 top1: 0.0703 top5: 0.2070 batch_cost: 0.67799 s, reader_cost: 0.06843 s, ips: 377.58847 samples/sec.
[2023/07/28 07:56:16] ppcls INFO: epoch:0 train step:510 lr: 0.100000, loss: 5.1361 top1: 0.0977 top5: 0.2109 batch_cost: 0.67861 s, reader_cost: 0.06790 s, ips: 377.23896 samples/sec.
[2023/07/28 07:56:24] ppcls INFO: epoch:0 train step:520 lr: 0.100000, loss: 4.8543 top1: 0.0977 top5: 0.2500 batch_cost: 0.68077 s, reader_cost: 0.06739 s, ips: 376.04608 samples/sec.
[2023/07/28 07:56:31] ppcls INFO: epoch:0 train step:530 lr: 0.100000, loss: 4.8917 top1: 0.1172 top5: 0.2422 batch_cost: 0.68069 s, reader_cost: 0.06613 s, ips: 376.09044 samples/sec.
[2023/07/28 07:56:38] ppcls INFO: epoch:0 train step:540 lr: 0.100000, loss: 4.7704 top1: 0.1094 top5: 0.2617 batch_cost: 0.68078 s, reader_cost: 0.06491 s, ips: 376.03723 samples/sec.
[2023/07/28 07:56:45] ppcls INFO: epoch:0 train step:550 lr: 0.100000, loss: 4.8687 top1: 0.0859 top5: 0.2656 batch_cost: 0.68121 s, reader_cost: 0.06374 s, ips: 375.80102 samples/sec.
[2023/07/28 07:56:53] ppcls INFO: epoch:0 train step:560 lr: 0.100000, loss: 4.7888 top1: 0.1172 top5: 0.2695 batch_cost: 0.68353 s, reader_cost: 0.06338 s, ips: 374.52421 samples/sec.
[2023/07/28 07:57:00] ppcls INFO: epoch:0 train step:570 lr: 0.100000, loss: 4.8397 top1: 0.0938 top5: 0.2578 batch_cost: 0.68330 s, reader_cost: 0.06442 s, ips: 374.64975 samples/sec.
[2023/07/28 07:57:07] ppcls INFO: epoch:0 train step:580 lr: 0.100000, loss: 4.8012 top1: 0.1094 top5: 0.2930 batch_cost: 0.68355 s, reader_cost: 0.06591 s, ips: 374.51574 samples/sec.
[2023/07/28 07:57:14] ppcls INFO: epoch:0 train step:590 lr: 0.100000, loss: 4.8414 top1: 0.1289 top5: 0.2383 batch_cost: 0.68468 s, reader_cost: 0.06755 s, ips: 373.89838 samples/sec.
[2023/07/28 07:57:21] ppcls INFO: epoch:0 train step:600 lr: 0.100000, loss: 4.9723 top1: 0.1172 top5: 0.2344 batch_cost: 0.68453 s, reader_cost: 0.06946 s, ips: 373.97963 samples/sec.
[2023/07/28 07:57:29] ppcls INFO: epoch:0 train step:610 lr: 0.100000, loss: 4.7459 top1: 0.1055 top5: 0.3008 batch_cost: 0.68586 s, reader_cost: 0.07188 s, ips: 373.25564 samples/sec.
[2023/07/28 07:57:36] ppcls INFO: epoch:0 train step:620 lr: 0.100000, loss: 4.8603 top1: 0.0859 top5: 0.2344 batch_cost: 0.68689 s, reader_cost: 0.07344 s, ips: 372.69419 samples/sec.
[2023/07/28 07:57:43] ppcls INFO: epoch:0 train step:630 lr: 0.100000, loss: 4.6228 top1: 0.0977 top5: 0.3008 batch_cost: 0.68727 s, reader_cost: 0.07425 s, ips: 372.48778 samples/sec.
[2023/07/28 07:57:51] ppcls INFO: epoch:0 train step:640 lr: 0.100000, loss: 4.5886 top1: 0.1367 top5: 0.2812 batch_cost: 0.68798 s, reader_cost: 0.07309 s, ips: 372.10288 samples/sec.
[2023/07/28 07:57:57] ppcls INFO: epoch:0 train step:650 lr: 0.100000, loss: 4.7000 top1: 0.1289 top5: 0.2930 batch_cost: 0.68806 s, reader_cost: 0.07318 s, ips: 372.05821 samples/sec.
[2023/07/28 07:58:04] ppcls INFO: epoch:0 train step:660 lr: 0.100000, loss: 4.5021 top1: 0.1211 top5: 0.3242 batch_cost: 0.68825 s, reader_cost: 0.07550 s, ips: 371.95526 samples/sec.
[2023/07/28 07:58:11] ppcls INFO: epoch:0 train step:670 lr: 0.100000, loss: 4.6218 top1: 0.1172 top5: 0.2891 batch_cost: 0.68828 s, reader_cost: 0.07536 s, ips: 371.93990 samples/sec.
[2023/07/28 07:58:18] ppcls INFO: epoch:0 train step:680 lr: 0.100000, loss: 4.5123 top1: 0.1406 top5: 0.2852 batch_cost: 0.68822 s, reader_cost: 0.07739 s, ips: 371.97625 samples/sec.
[2023/07/28 07:58:26] ppcls INFO: epoch:0 train step:690 lr: 0.100000, loss: 4.5024 top1: 0.1406 top5: 0.3242 batch_cost: 0.68909 s, reader_cost: 0.07817 s, ips: 371.50427 samples/sec.
[2023/07/28 07:58:33] ppcls INFO: epoch:0 train step:700 lr: 0.100000, loss: 4.5621 top1: 0.1211 top5: 0.3125 batch_cost: 0.68961 s, reader_cost: 0.07748 s, ips: 371.22525 samples/sec.
[2023/07/28 07:58:41] ppcls INFO: epoch:0 train step:710 lr: 0.100000, loss: 4.4998 top1: 0.1484 top5: 0.3398 batch_cost: 0.69086 s, reader_cost: 0.07725 s, ips: 370.55011 samples/sec.
[2023/07/28 07:58:48] ppcls INFO: epoch:0 train step:720 lr: 0.100000, loss: 4.2679 top1: 0.1758 top5: 0.3711 batch_cost: 0.69070 s, reader_cost: 0.07925 s, ips: 370.64063 samples/sec.
[2023/07/28 07:58:55] ppcls INFO: epoch:0 train step:730 lr: 0.100000, loss: 4.1928 top1: 0.2070 top5: 0.3711 batch_cost: 0.69155 s, reader_cost: 0.08115 s, ips: 370.18459 samples/sec.
[2023/07/28 07:59:03] ppcls INFO: epoch:0 train step:740 lr: 0.100000, loss: 4.2532 top1: 0.1406 top5: 0.3711 batch_cost: 0.69322 s, reader_cost: 0.08339 s, ips: 369.28883 samples/sec.
[2023/07/28 07:59:10] ppcls INFO: epoch:0 train step:750 lr: 0.100000, loss: 4.2936 top1: 0.1641 top5: 0.3516 batch_cost: 0.69293 s, reader_cost: 0.08528 s, ips: 369.44413 samples/sec.
[2023/07/28 07:59:17] ppcls INFO: epoch:0 train step:760 lr: 0.100000, loss: 4.2327 top1: 0.2109 top5: 0.3672 batch_cost: 0.69283 s, reader_cost: 0.08833 s, ips: 369.49783 samples/sec.
[2023/07/28 07:59:24] ppcls INFO: epoch:0 train step:770 lr: 0.100000, loss: 4.3001 top1: 0.1641 top5: 0.3750 batch_cost: 0.69285 s, reader_cost: 0.09171 s, ips: 369.48952 samples/sec.
[2023/07/28 07:59:31] ppcls INFO: epoch:0 train step:780 lr: 0.100000, loss: 4.2880 top1: 0.1719 top5: 0.3711 batch_cost: 0.69291 s, reader_cost: 0.09401 s, ips: 369.45766 samples/sec.
[2023/07/28 07:59:38] ppcls INFO: epoch:0 train step:790 lr: 0.100000, loss: 4.0443 top1: 0.1602 top5: 0.4414 batch_cost: 0.69299 s, reader_cost: 0.09581 s, ips: 369.41587 samples/sec.
[2023/07/28 07:59:45] ppcls INFO: epoch:0 train step:800 lr: 0.100000, loss: 4.2342 top1: 0.1719 top5: 0.3633 batch_cost: 0.69293 s, reader_cost: 0.09548 s, ips: 369.44641 samples/sec.
LAUNCH INFO 2023-07-28 07:59:48,218 Terminating with signal 15
LAUNCH INFO 2023-07-28 07:59:54,854 Exit with signal 15
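Note on the ips column in the step lines above: it is per-process throughput, roughly batch_size / batch_cost for one 256-sample step on one GPU, so aggregate throughput for this N1C8 job is about 8x that. A quick check against the step:10 line (numbers copied from the log above):
batch_size = 256        # per-GPU, from DataLoader.Train.sampler.batch_size
batch_cost = 0.66375    # seconds, from the step:10 line
ips_per_gpu = batch_size / batch_cost   # ~385.69, matching the logged 385.68800
total_ips = ips_per_gpu * 8             # ~3085 samples/sec across 8 GPUs
print(round(ips_per_gpu, 2), round(total_ips))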