-
This layering is neither feasible nor efficient right now, as we are introducing too many diffs in a single layer. The Autoware image build needs better organization. Also, too many development packages end up in the runtime images. We have started putting some effort into addressing these issues in this repo: https://github.com/leo-drive/avte_autoware
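One way to keep development packages out of the runtime image is a multi-stage build, where the compile toolchain lives only in a throwaway builder stage. A minimal sketch, assuming `colcon`/`rosdep` workflows; the paths, tags, and exact commands here are illustrative, not the current Autoware setup:

```dockerfile
# Builder stage: carries compilers and -dev packages, discarded afterwards
FROM ros:humble AS builder
WORKDIR /autoware
COPY src ./src
RUN apt-get update \
 && rosdep update \
 && rosdep install -y --ignore-src --from-paths src \
 && . /opt/ros/humble/setup.sh \
 && colcon build --merge-install

# Runtime stage: slim base, only the install tree plus exec dependencies
FROM ros:humble-ros-base AS runtime
COPY --from=builder /autoware/install /autoware/install
RUN apt-get update \
 && rosdep update \
 && rosdep install -y --ignore-src --from-paths /autoware/install -t exec \
 && rm -rf /var/lib/apt/lists/*
```

Only the `runtime` stage is shipped, so the compilers and `-dev` packages in `builder` never reach the final image.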
-
@xmfcx - thanks for starting this discussion! I think that as we go through this work of rethinking how we want to create and distribute Autoware containers, we should make a split between a development environment (DE) and a production environment (PE). I would suggest the following:
Other questions:
Inspiration:
I would like to join and discuss this topic. Is this being discussed in the SW working group?
-
@xmfcx Thank you for raising this. I'll give you some related information.
The largest part (maybe over 10GB) is CUDA, which will be improved by #3084.
It could be, but please note that OSRF doesn't provide CUDA images.
It's good practice, but please note (although you already know this) that splitting into too many layers may increase maintenance costs.
-
Here is an updated version of my idea for the reconstructed images.
Changes
Related ideas with Debian packaging of Autoware
Rough Dockerfiles Diagram

```mermaid
graph TD
    subgraph Diagram
    tit01("Autoware Docker Images"):::cl1 --> itA011("Development and Build Environment"):::cl2
    itA011 --> itA01("FROM ros:humble"):::cl4
    itA01 --> itA02("apt dist-upgrade"):::cl3
    itA02 --> itA03("ansible"):::cl3
    itA03 --> itA04("ansible - roles (except cuda)"):::cl3
    itA04 --> itA05("ansible - cuda, cudnn, tensorrt"):::cl3
    itA05 --> itA06("pull latest autoware"):::cl3
    itA06 --> itA07("rosdep install"):::cl3
    itA07 --> itA08("build autoware"):::cl3
    itA08 --> itA09("test autoware"):::cl3
    itA09 --> itA10("remove autoware folder"):::cl3
    itA10 --> img01("Development Image"):::cl5
    itA09 --> img02("Prebuilt Image"):::cl5
    tit01 --> itB01("Production Environment"):::cl2
    itB01 --> itB02("FROM ros:humble-ros-base-jammy"):::cl4
    itB02 --> itB03("apt dist-upgrade"):::cl3
    itB03 --> itB04("ansible"):::cl3
    itB04 --> itB05("ansible - runtime roles"):::cls1
    itB05 --> itB06("ansible - cuda, cudnn, tensorrt (runtime)"):::cl3
    itB06 --> itB071("FROM autoware:prebuilt COPY autoware folder"):::cl4
    itB071 --> itB07("rosdep install -t exec"):::cl3
    itB07 --> itB08("test autoware"):::cl3
    itB08 --> itB09("remove build, src, log folders"):::cl3
    itB09 --> itB10("remove testing tooling"):::cls1
    itB10 --> itB11("Runtime Image"):::cl5
    end
    Diagram --- Legend
    %% Color-based legend
    subgraph Legend
    direction TB
    lg1["Title"]:::cl1
    lg2["Environment Types"]:::cl2
    lg3["Pseudo Layers"]:::cl3
    lg4["Image References"]:::cl4
    lg5["Images"]:::cl5
    lg6["Needs discussion"]:::cls1
    end
    %% Carnation pink
    classDef cl1 stroke:#111,fill:#FF99C8
    %% Uranian blue
    classDef cl2 stroke:#111,fill:#A9DEF9
    %% Nyanza green
    classDef cl3 stroke:#111,fill:#D0F4DE
    %% Lemon chiffon
    classDef cl4 stroke:#111,fill:#FCF6BD
    %% Mauve violet
    classDef cl5 stroke:#111,fill:#E4C1f9
    %% Nyanza green striked
    classDef cls1 stroke:#F00,fill:#D0F4DE,stroke-dasharray: 5 5,stroke-width:3px
```
I'd like to hear your opinions about this way of restructuring the layers and images. cc. @oguzkaganozt @kenji-miyake @kaspermeck-arm @HamburgDave @armaganarsln @doganulus @esteve
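The stages in the diagram above could be sketched as a single multi-stage Dockerfile along these lines. This is only a rough outline under stated assumptions: the stage names, the clone URL, and the exact build commands are hypothetical placeholders for the diagram's pseudo layers, not a tested implementation:

```dockerfile
# --- Development and Build Environment ---
FROM ros:humble AS devel
RUN apt-get update && apt-get -y dist-upgrade
# ansible, ansible roles (except cuda), then cuda/cudnn/tensorrt,
# as in the diagram
# -> tagged as the Development Image

FROM devel AS prebuilt
RUN git clone https://github.com/autowarefoundation/autoware.git /autoware \
 && cd /autoware && mkdir src && vcs import src < autoware.repos \
 && rosdep install -y --ignore-src --from-paths src \
 && colcon build && colcon test
# -> tagged as the Prebuilt Image (build artifacts kept)

# --- Production Environment ---
FROM ros:humble-ros-base-jammy AS runtime
# ansible runtime roles + cuda/cudnn/tensorrt runtime libraries
COPY --from=prebuilt /autoware /autoware
RUN cd /autoware \
 && rosdep install -y --ignore-src --from-paths src -t exec \
 && rm -rf build src log
# -> tagged as the Runtime Image
```

With `docker build --target devel|prebuilt|runtime`, all three images could come from one file while sharing the early, rarely-changing layers.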
-
Right now, the base image for the Docker containers in Autoware is `ubuntu:22.04`.

Order of dependencies:
Another image
Maybe we can use the OSRF ROS 2 Humble desktop image to modularize the deployment of the current Docker images?
https://hub.docker.com/layers/osrf/ros/humble-desktop/images/sha256-97f179f6bbcc60c6ffbef88486b3a29a3c79794c0a233e50a9f65130ac5533b5
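If that image covers our desktop-tool needs, the change could be as small as swapping the base line and dropping the setup steps the base already provides. A sketch, not verified against the current Dockerfile:

```dockerfile
# Hypothetical: inherit ROS 2 Humble desktop from OSRF's shared,
# widely-cached layers instead of installing it on top of ubuntu:22.04
FROM osrf/ros:humble-desktop
# ...remaining Autoware-specific setup steps...
```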
Current image and how to improve
Right now we have a single 15.7 GB layer, and we have to download it in its entirety on every `docker pull`.
But if we separate the `Dockerfile` build steps into multiple layers, keeping rarely-updated layers at the beginning and frequently-updated Autoware-related layers at the end, our `Dockerfile` will be able to reuse cached layers much better.
Here are our current layers and their sizes:
| CREATED BY | SIZE |
| --- | --- |
| `CMD ["/bin/bash"]` | 0B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c echo "source /opt/ros/${ROS_DISTRO}/setup.bash" > /etc/bash.bashrc # buildkit` | 34B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y software-properties-common && apt-add-repository ppa:kisak/kisak-mesa && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y libegl-mesa0 libegl1-mesa-dev libgbm-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri libglapi-mesa libglx-mesa0 && apt-get clean && rm -rf /var/lib/apt/lists/* # buildkit` | 43.5MB |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c mkdir -p /etc/OpenCL/vendors && echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd && chmod 644 /etc/OpenCL/vendors/nvidia.icd # buildkit` | 22B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c curl https://gitlab.com/nvidia/container-images/opengl/raw/5191cf205d3e4bb1150091f9464499b076104354/glvnd/runtime/10_nvidia.json -o /etc/glvnd/egl_vendor.d/10_nvidia.json && chmod 644 /etc/glvnd/egl_vendor.d/10_nvidia.json # buildkit` | 107B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c curl https://gitlab.com/nvidia/container-images/vulkan/raw/dc389b0445c788901fda1d85be96fd1cb9410164/nvidia_icd.json -o /etc/vulkan/icd.d/nvidia_icd.json && chmod 644 /etc/vulkan/icd.d/nvidia_icd.json # buildkit` | 139B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c rm -rf "$HOME"/.cache /etc/apt/sources.list.d/cuda*.list /etc/apt/sources.list.d/docker.list /etc/apt/sources.list.d/nvidia-docker.list # buildkit` | 0B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c ./setup-dev-env.sh -y $SETUP_ARGS universe && pip uninstall -y ansible ansible-core && mkdir src && vcs import src < autoware.repos && rosdep update && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO" && apt-get clean && rm -rf /var/lib/apt/lists/* # buildkit` | 15.7GB |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts # buildkit` | 828B |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c ls /autoware # buildkit` | 0B |
| `WORKDIR /autoware` | 0B |
| `COPY ansible/ /autoware/ansible/ # buildkit` | 37kB |
| `COPY autoware.repos setup-dev-env.sh ansible-galaxy-requirements.yaml amd64.env arm64.env /autoware/ # buildkit` | 7.3kB |
| `RUN \|2 ROS_DISTRO=humble SETUP_ARGS=--no-cuda-drivers /bin/bash -o pipefail -c apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install --no-install-recommends git ssh && apt-get clean && rm -rf /var/lib/apt/lists/* # buildkit` | 71.2MB |
| `ARG SETUP_ARGS` | 0B |
| `ARG ROS_DISTRO` | 0B |
| `SHELL [/bin/bash -o pipefail -c]` | 0B |
| `/bin/sh -c #(nop) CMD ["/bin/bash"]` | 0B |
| `/bin/sh -c #(nop) ADD file:c8ef6447752cab2541ffca9e3cfa27d581f3491bc8f356f6eafd951243609341 in /` | 77.8MB |
| `/bin/sh -c #(nop) LABEL org.opencontainers.image.version=22.04` | 0B |
| `/bin/sh -c #(nop) LABEL org.opencontainers.image.ref.name=ubuntu` | 0B |
| `/bin/sh -c #(nop) ARG LAUNCHPAD_BUILD_ARCH` | 0B |
| `/bin/sh -c #(nop) ARG RELEASE` | 0B |
Related issue
#2840
cc. @oguzkaganozt @kenji-miyake @kaspermeck-arm @HamburgDave @armaganarsln