I can run the program successfully with your default parameters, but the results are poor: the collision rate is 1.0 and the success rate is 0.
It does not perform as well as shown in https://bark-simulator.github.io/tutorials/bark_ml_getting_started/.
Do I need to retrain? I trained yesterday, but the results still don't look good.
:~/Project/bark-ml$ bazel run //examples:tfa_gnn
INFO: Analyzed target //examples:tfa_gnn (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //examples:tfa_gnn up-to-date:
bazel-bin/examples/tfa_gnn
INFO: Elapsed time: 0.202s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: external/bazel_tools/tools/test/test-setup.sh exampl
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //examples:tfa_gnn
2022-07-21 09:06:25.154909: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:25.175083: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:25.175223: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I0721 09:06:25.176883 139873437524800 xodr_parser.py:317] Transforming PlanView with given offset {'x': 0.0, 'y': -0.4, 'z': 0.0, 'hdg': 0.0}
I0721 09:06:25.178073 139873437524800 xodr_parser.py:317] Transforming PlanView with given offset {'x': 0.0, 'y': -0.4, 'z': 0.0, 'hdg': 0.0}
I0721 09:06:26.265485 139873437524800 graph_observer.py:77] GraphObserver configured with node attributes: ['x', 'y', 'theta', 'vel', 'goal_x', 'goal_y', 'goal_dx', 'goal_dy', 'goal_theta', 'goal_d', 'goal_vel']
I0721 09:06:26.265587 139873437524800 graph_observer.py:92] GraphObserver configured with edge attributes: ['dx', 'dy', 'dvel', 'dtheta']
2022-07-21 09:06:26.268973: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-07-21 09:06:26.269864: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:26.270013: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:26.270111: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:26.627559: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:26.627707: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:26.627814: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-07-21 09:06:26.627906: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20880 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:08:00.0, compute capability: 8.6
2022-07-21 09:06:27.487744: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
/home/myx/anaconda3/envs/bark-ml/lib/python3.7/site-packages/gym/spaces/box.py:84: UserWarning: WARN: Box bound precision lowered by casting to float32
logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
I0721 09:06:27.772876 139873437524800 common.py:1007] No checkpoint available at
I0721 09:06:27.773449 139873437524800 common.py:1007] No checkpoint available at best_checkpoint/
WARNING:tensorflow:From /home/myx/anaconda3/envs/bark-ml/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py:377: ReplayBuffer.get_next (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version.
Instructions for updating:
Use as_dataset(..., single_deterministic_pass=False) instead. W0721 09:06:27.844532 139873437524800 api.py:459] From /home/myx/anaconda3/envs/bark-ml/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py:377: ReplayBuffer.get_next (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version. Instructions for updating: Use as_dataset(..., single_deterministic_pass=False) instead.
I0721 09:06:28.160851 139873437524800 tfa_runner.py:150] Simulating episode 0.
I0721 09:06:29.264624 139873437524800 tfa_runner.py:150] Simulating episode 1.
I0721 09:06:29.730968 139873437524800 tfa_runner.py:150] Simulating episode 2.
I0721 09:06:30.606807 139873437524800 tfa_runner.py:150] Simulating episode 3.
I0721 09:06:31.119165 139873437524800 tfa_runner.py:150] Simulating episode 4.
I0721 09:06:31.633056 139873437524800 tfa_runner.py:150] Simulating episode 5.
I0721 09:06:32.500190 139873437524800 tfa_runner.py:150] Simulating episode 6.
I0721 09:06:34.434643 139873437524800 tfa_runner.py:150] Simulating episode 7.
I0721 09:06:34.834882 139873437524800 tfa_runner.py:150] Simulating episode 8.
I0721 09:06:35.307642 139873437524800 tfa_runner.py:150] Simulating episode 9.
The agent achieved an average reward of -0.185, collision-rate of 1.00000, took on average 12.300 steps, and reached a success-rate of 0.000 (evaluated over 10 episodes).
Hi,
this question is not entirely clear. There is no pre-trained model in the repo. If you just run this example with "bazel run //examples:tfa_gnn", the default mode is "visualize". It is quite reasonable that an untrained network achieves zero success.
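For reference, a minimal sketch of the intended workflow, train first and then visualize, assuming the example exposes a mode switch (the flag name below is an assumption; check how examples/tfa_gnn.py selects its mode for the actual mechanism):

# train the GNN agent first, so checkpoints exist for the visualize run to load
bazel run //examples:tfa_gnn -- --mode=train
# afterwards, visualize/evaluate using the trained checkpoint
bazel run //examples:tfa_gnn -- --mode=visualize

Without a training run beforehand, the "No checkpoint available" lines in your log mean the agent acts with randomly initialized weights, which matches the collision rate of 1.0 you observed.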
Please add more details.
Best