Hello, when I run mynteye_vinsfusion.launch I hit a problem: nothing shows up in rviz, and the cerebro_node and my_desc_server processes die (see the log below).
Can you help me? Thank you.
why@why-desktop:~/SLAM_WS/Kidnap$ roslaunch cerebro mynteye_vinsfusion.launch
... logging to /home/why/.ros/log/338daad0-ea89-11e9-b4a5-7085c2882345/roslaunch-why-desktop-26401.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://why-desktop:45089/

SUMMARY
========

PARAMETERS

NODES
  /
    cerebro_node (cerebro/cerebro_node)
    keyframe_pose_graph_slam_node (solve_keyframe_pose_graph/keyframe_pose_graph_slam)
    my_desc_server (cerebro/whole_image_desc_compute_server.py)
    rosbag (rosbag/play)
    vins_estimator (vins/vins_node)

auto-starting new master
process[master]: started with pid [26413]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 338daad0-ea89-11e9-b4a5-7085c2882345
process[rosout-1]: started with pid [26428]
started core service [/rosout]
process[rosbag-2]: started with pid [26432]
process[vins_estimator-3]: started with pid [26440]
process[cerebro_node-4]: started with pid [26454]
process[my_desc_server-5]: started with pid [26459]
process[keyframe_pose_graph_slam_node-6]: started with pid [26464]
[ WARN] [1570621061.983622030]: [cerebro_node] loadStateFromDisk cmdline parameter was not found, so I will not loadStateFromDisk()
[ WARN] [1570621061.988763392]: [cerebro_node] saveStateToDisk cmdline parameter was not found, so I will not saveStateToDisk()
[ WARN] [1570621061.999904135]: Config File Name : /home/why/SLAM_WS/Kidnap/src/cerebro/config/vinsfusion/mynteye/mynteye_stereo_imu_config.yaml
Using TensorFlow backend.
[cerebro_node-4] process has died [pid 26454, exit code -11, cmd /home/why/SLAM_WS/Kidnap/devel/lib/cerebro/cerebro_node __name:=cerebro_node __log:=/home/why/.ros/log/338daad0-ea89-11e9-b4a5-7085c2882345/cerebro_node-4.log].
log file: /home/why/.ros/log/338daad0-ea89-11e9-b4a5-7085c2882345/cerebro_node-4*.log
2019-10-09 19:37:43.230471: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-09 19:37:43.302617: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-10-09 19:37:43.303406: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5ecd360 executing computations on platform CUDA. Devices:
2019-10-09 19:37:43.303423: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1050, Compute Capability 6.1
2019-10-09 19:37:43.322800: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3600000000 Hz
2019-10-09 19:37:43.323171: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5f34f00 executing computations on platform Host. Devices:
2019-10-09 19:37:43.323191: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-10-09 19:37:43.323420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.493
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 544.44MiB
2019-10-09 19:37:43.323439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-10-09 19:37:43.324262: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-10-09 19:37:43.324279: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-10-09 19:37:43.324286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-10-09 19:37:43.324389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 398 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "/home/why/SLAM_WS/Kidnap/src/cerebro/scripts/whole_image_desc_compute_server.py", line 733, in <module>
gpu_netvlad = JSONModelImageDescriptor( kerasmodel_file=kerasmodel_file, im_rows=fs_image_height, im_cols=fs_image_width, im_chnls=fs_image_chnls )
File "/home/why/SLAM_WS/Kidnap/src/cerebro/scripts/whole_image_desc_compute_server.py", line 372, in __init__
assert os.path.isdir( LOG_DIR ), "The LOG_DIR doesnot exist, or there is a permission issue. LOG_DIR="+LOG_DIR
AssertionError: The LOG_DIR doesnot exist, or there is a permission issue. LOG_DIR=/models.keras/June2019/centeredinput-m1to1-240x320x1__mobilenetv2-block_9_add__K16__allpairloss
[my_desc_server-5] process has died [pid 26459, exit code 1, cmd /home/why/SLAM_WS/Kidnap/src/cerebro/scripts/whole_image_desc_compute_server.py __name:=my_desc_server __log:=/home/why/.ros/log/338daad0-ea89-11e9-b4a5-7085c2882345/my_desc_server-5.log].
log file: /home/why/.ros/log/338daad0-ea89-11e9-b4a5-7085c2882345/my_desc_server-5*.log
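As a sanity check for the my_desc_server failure, the directory from the assertion message can be tested directly from the shell (the path below is copied verbatim from the error above; adjust it if your model files live elsewhere):

```shell
# Test whether the model directory that the assertion checks actually exists
LOG_DIR="/models.keras/June2019/centeredinput-m1to1-240x320x1__mobilenetv2-block_9_add__K16__allpairloss"
if [ -d "$LOG_DIR" ]; then
    echo "LOG_DIR exists"
else
    echo "LOG_DIR missing: $LOG_DIR"
fi
```

If it prints "missing", the script's model path does not match where the Keras model files were placed, which would explain the AssertionError independently of the cerebro_node segfault.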
^C[keyframe_pose_graph_slam_node-6] killing on exit
[vins_estimator-3] killing on exit
[rosbag-2] killing on exit
[rosout-1] killing on exit
[master] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete