
onnx2daq fails with yolov3 network #64

Open
amelia808 opened this issue Sep 2, 2019 · 4 comments
@amelia808

terminate called after throwing an instance of 'std::out_of_range'
  what():  _Map_base::at

This is the error I get while trying to convert https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov3

@amelia808 (Author)

It actually fails for all ONNX object detection models.

@daquexian self-assigned this Sep 5, 2019
@ardiya commented Jan 17, 2020

I got a similar problem with my own .onnx file.

I compiled the code in debug mode, tried to convert the ONNX yolov3.onnx model, and got this error:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff7a4c899 in __GI_abort () at abort.c:79
#2  0x00007ffff7e1f5f6 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007ffff7e2b9ec in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007ffff7e2ba47 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00007ffff7e2bca9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007ffff7e21f04 in std::__throw_out_of_range(char const*) ()
   from /lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x0000555555686023 in std::__detail::_Map_base<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnx_daq::Value*>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnx_daq::Value*> >, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::at (this=0x7fffffffc600, __k="W74")
    at /usr/include/c++/9/bits/hashtable_policy.h:750
#8  0x0000555555683999 in std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, onnx_daq::Value*, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, onnx_daq::Value*> > >::at (this=0x7fffffffc600, __k="W74")
    at /usr/include/c++/9/bits/unordered_map.h:1002
#9  0x000055555567dcaf in onnx_daq::graphProtoToGraph (gp=..., nested=false)
    at /home/ardiya/Workspace/DNNLibrary/third_party/onnx/onnx/common/ir_pb_converter.cc:287
#10 0x000055555567e52d in onnx_daq::ImportModelProto (mp=...)
    at /home/ardiya/Workspace/DNNLibrary/third_party/onnx/onnx/common/ir_pb_converter.cc:331
#11 0x0000555555646fd5 in onnx_daq::optimization::Optimizer::optimize (this=0x7fffffffcb40, mp_in=...)
    at /home/ardiya/Workspace/DNNLibrary/third_party/onnx/onnx/optimizer/optimize.h:26
#12 0x0000555555639d9a in onnx_daq::optimization::OptimizeFixed (mp_in=..., 
    names=std::vector of length 18, capacity 18 = {...})
    at /home/ardiya/Workspace/DNNLibrary/third_party/onnx/onnx/optimizer/optimize.cc:38
#13 0x000055555557711d in dnn::OnnxConverter::Convert (this=0x7fffffffd880, model_proto=..., 
    table_file="") at /home/ardiya/Workspace/DNNLibrary/tools/onnx2daq/OnnxConverter.cpp:646
#14 0x000055555556bfc0 in main (argc=3, argv=0x7fffffffdbc8)
    at /home/ardiya/Workspace/DNNLibrary/tools/onnx2daq/onnx2daq.cpp:34

From frame 9: ir_pb_converter.cc line 287 is n->addInput(value_by_name_of.at(input));
Here is more info from gdb:

(gdb) print input
$1 = "W74"
(gdb) print value_by_name_of
$2 = std::unordered_map with 478 elements = {["yolonms_layer_1/ExpandDims_1:0"] = 0x55555b61a6e0, 
  ["TFNodes/yolo_evaluation_layer_1/concat_7:0"] = 0x55555b619640, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_56:0"] = 0x55555b619140, 
  ["TFNodes/yolo_evaluation_layer_1/add_5:0"] = 0x55555b618880, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_54:0"] = 0x55555b618380, 
  ["TFNodes/yolo_evaluation_layer_1/mul_15:0"] = 0x55555b617650, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_52:0"] = 0x55555b616d00, 
  ["TFNodes/yolo_evaluation_layer_1/add_4:0"] = 0x55555b616560, 
  ["TFNodes/yolo_evaluation_layer_1/Sigmoid_6:0"] = 0x55555b616310, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_46:0"] = 0x55555b615cc0, 
  ["TFNodes/yolo_evaluation_layer_1/truediv_23:0"] = 0x55555b6158a0, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_53:0"] = 0x55555b615040, 
  ["TFNodes/yolo_evaluation_layer_1/mul_13:0"] = 0x55555b614a50, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_48:0"] = 0x55555b614240, 
  ["yolo_evaluation_layer_1/concat_10:0_tx"] = 0x55555b613d20, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_17:0"] = 0x55555b6134b0, 
  ["TFNodes/yolo_evaluation_layer_1/mul_18:0"] = 0x55555b613240, 
  ["TFNodes/yolo_evaluation_layer_1/Sigmoid_7:0"] = 0x55555b613050, 
  ["TFNodes/yolo_evaluation_layer_1/Sigmoid_8:0"] = 0x55555b612840, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_51:0"] = 0x55555b612220, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_15__87:0"] = 0x55555b611a60, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_15/shape_Concat__36:0"] = 0x55555b6114b0, 
  ["TFNodes/yolo_evaluation_layer_1/concat_6:0"] = 0x55555b610ea0, 
  ["TFNodes/yolo_evaluation_layer_1/Tile_5/multiples_Concat__46:0"] = 0x55555b6105d0, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_15/shape_Unsqueeze__32:0"] = 0x55555b60da10, 
  ["TFNodes/yolo_evaluation_layer_1/arange_4__77_loop:1"] = 0x55555b60e520, 
  ["TFNodes/yolo_evaluation_layer_1/arange_4__77_loop:0"] = 0x55555b60e450, 
  ["TFNodes/yolo_evaluation_layer_1/arange_4__77_trip_cnt:0"] = 0x55555b60d780, 
  ["TFNodes/yolo_evaluation_layer_1/arange_4__77_div:0"] = 0x55555b60cfc0, 
  ["TFNodes/yolo_evaluation_layer_1/arange_4__77_cast_diff:0"] = 0x55555b60cd40, 
  ["TFNodes/yolo_evaluation_layer_1/arange_4__77_diff__78:0"] = 0x55555b60ca50, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_44:0"] = 0x55555b5cb8a0, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_14:0"] = 0x55555b5c8b10, 
  ["TFNodes/yolo_evaluation_layer_1/arange_5__52_trip_cnt:0"] = 0x55555b5caa10, 
  ["TFNodes/yolo_evaluation_layer_1/arange_5__52_ceil:0"] = 0x55555b5ca640, 
  ["TFNodes/yolo_evaluation_layer_1/arange_5__52_cast_diff:0"] = 0x55555b5c9e90, 
  ["TFNodes/yolo_evaluation_layer_1/arange_5__52_diff__53:0"] = 0x55555b5c9b50, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_45:0"] = 0x55555b604060, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_3__306:0"] = 0x55555b5d8770, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_15/shape_Unsqueeze__33:0"] = 0x55555b5c8930, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_13:0"] = 0x55555b5da1c0, 
  ["TFNodes/yolo_evaluation_layer_1/Cast_8:0"] = 0x55555b611150, 
  ["batch_norm_output_buffer64"] = 0x55555b59c670, ["model_1/add_23/add:0"] = 0x55555b5c44e0, 
  ["model_1/leaky_re_lu_52/LeakyRelu:0"] = 0x55555b5c42c0, 
  ["TFNodes/yolo_evaluation_layer_1/mul_7:0"] = 0x55555b5f7e80, 
  ["convolution_output23"] = 0x55555b5c3a10, ["convolution_output57"] = 0x55555b5a4ed0, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_1:0"] = 0x55555b5cd130, 
  ["model_1/leaky_re_lu_51/LeakyRelu:0"] = 0x55555b5c3630, 
  ["TFNodes/yolo_evaluation_layer_1/Shape_2:0"] = 0x55555b5e9b70, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_7__219:0"] = 0x55555b5ce4a0, 
  ["model_1/leaky_re_lu_68/LeakyRelu:0"] = 0x55555b5fec20, ["convolution_output25"] = 0x55555b5c1e80, 
  ["model_1/leaky_re_lu_49/LeakyRelu:0"] = 0x55555b5c1aa0, 
  ["TFNodes/yolo_evaluation_layer_1/Sigmoid_5:0"] = 0x55555b5f65f0, 
  ["model_1/add_21/add:0"] = 0x55555b5c0dc0, ["model_1/leaky_re_lu_48/LeakyRelu:0"] = 0x55555b5c0ba0, 
  ["convolution_output12"] = 0x55555b5e6930, ["batch_norm_output_buffer25"] = 0x55555b5bfb90, 
  ["model_1/add_20/add:0"] = 0x55555b5bf220, ["model_1/leaky_re_lu_45/LeakyRelu:0"] = 0x55555b5be3d0, 
  ["convolution_output30"] = 0x55555b5bdb20, ["model_1/add_6/add:0"] = 0x55555b5a3e10, 
  ["model_1/leaky_re_lu_44/LeakyRelu:0"] = 0x55555b5bd7d0, 
  ["batch_norm_output_buffer40"] = 0x55555b5b2e00, ["batch_norm_output_buffer28"] = 0x55555b5bd450, 
  ["convolution_output34"] = 0x55555b5ba3a0, 
  ["TFNodes/yolo_evaluation_layer_1/Reshape_9:0"] = 0x55555b5f5b40, 
  ["convolution_output54"] = 0x55555b5a79a0, 
  ["TFNodes/yolo_evaluation_layer_1/concat_10:0"] = 0x55555b613800, 
  ["model_1/leaky_re_lu_43/LeakyRelu:0"] = 0x55555b5bc7e0, 
  ["TFNodes/yolo_evaluation_layer_1/strided_slice_37:0"] = 0x55555b5fc2c0, 
  ["model_1/leaky_re_lu_50/LeakyRelu:0"] = 0x55555b5c2730, 
  ["model_1/leaky_re_lu_41/LeakyRelu:0"] = 0x55555b5bac50, 
  ["batch_norm_output_buffer48"] = 0x55555b5aa6b0, ["batch_norm_output_buffer32"] = 0x55555b5b9c40, 
  ["convolution_output35"] = 0x55555b5b9750, ["convolution_output28"] = 0x55555b5bf6a0, 
  ["TFNodes/yolo_evaluation_layer_1/arange__271_diff__272:0"] = 0x55555b5d3900, 
  ["TFNodes/yolo_evaluation_layer_1/Shape_3:0"] = 0x55555b602a50, 
  ["model_1/add_17/add:0"] = 0x55555b5b92e0, 
  ["TFNodes/yolo_evaluation_layer_1/arange_5__52_loop:1"] = 0x55555b609270, 
  ["model_1/leaky_re_lu_5/LeakyRelu:0"] = 0x55555b599fb0, 
  ["batch_norm_output_buffer15"] = 0x55555b5c7f60, 
  ["model_1/leaky_re_lu_32/LeakyRelu:0"] = 0x55555b5b3180, ["model_1/add_19/add:0"] = 0x55555b5bca00, 
  ["batch_norm_output_buffer34"] = 0x55555b5b80b0, ["model_1/add_16/add:0"] = 0x55555b5b7750, 
  ["convolution_output38"] = 0x55555b5b6c80, 
  ["TFNodes/yolo_evaluation_layer_1/mul_16:0"] = 0x55555b615660, 
  ["convolution_output18"] = 0x55555b5c7a30, 
  ["TFNodes/yolo_evaluation_layer_1/Squeeze:0"] = 0x55555b595170, 
  ["convolution_output39"] = 0x55555b5b6030, ["convolution_output10"] = 0x55555b5fd750, 
  ["TFNodes/yolo_evaluation_layer_1/Tile:0"] = 0x55555b5d48c0, 
  ["batch_norm_output_buffer16"] = 0x55555b5c7330, ["batch_norm_output_buffer37"] = 0x55555b5b5620, 
  ["TFNodes/yolo_evaluation_layer_1/mul_17:0"] = 0x55555b619750, 
  ["batch_norm_output_buffer38"] = 0x55555b5b4990...}

@daquexian could you provide any guidance on how to fix this?
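For context on the failure mode: the key that .at() cannot find ("W74") is a node input that was never registered as a graph input, initializer, or node output when the IR map was built, which is why std::unordered_map::at throws std::out_of_range. Below is a minimal sketch of the kind of pre-check that would surface such unresolved names before the converter aborts. It uses plain Python over a toy graph representation; the function name, the dict layout, and the node names are hypothetical, not DNNLibrary's or the onnx package's actual API.

```python
# Sketch: find node inputs that no graph input, initializer, or node
# output provides. In a real model these sets would come from the ONNX
# GraphProto; here a toy graph stands in (all names are hypothetical).

def unresolved_inputs(graph_inputs, initializers, nodes):
    """Return (node_name, input_name) pairs that nothing in the graph produces."""
    known = set(graph_inputs) | set(initializers)
    for node in nodes:
        known.update(node["outputs"])
    missing = []
    for node in nodes:
        for name in node["inputs"]:
            if name and name not in known:  # "" marks an optional, omitted input
                missing.append((node["name"], name))
    return missing

# Toy graph: "W74" is consumed but never defined, mirroring the crash above.
nodes = [
    {"name": "conv0", "inputs": ["input_1", "W74"], "outputs": ["conv0_out"]},
    {"name": "relu0", "inputs": ["conv0_out"], "outputs": ["relu0_out"]},
]
print(unresolved_inputs(["input_1"], [], nodes))  # [('conv0', 'W74')]
```

A check along these lines, run against the model before conversion, would at least turn the abort into a readable report of which value names the converter cannot resolve (e.g. names defined only inside a Loop subgraph).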

@aitikgupta

Any updates on this?

@LewsTherin511

I am having the same error while trying to convert a custom SSD object detection model. The original model was implemented in GluonCV, fine-tuned from one of the models in the model zoo, and converted to ONNX format.
Any updates?
