PAM error inside buildx #1302
This repo is about …. Let me move this to ….
I don't think this is related to the action, but to your Dockerfile. Do you repro locally as well?
It works locally on my Fedora 41 machine. |
Local test on Fedora 41:
Looks like it works with the default builder (docker 27.5.0):

docker buildx inspect
Name: default
Driver: docker
Last Activity: 2025-01-17 23:48:47 +0000 UTC
Nodes:
Name: default
Endpoint: default
Status: running
BuildKit version: v0.18.2
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
Labels:
org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.17.0.1

docker buildx build -t foo --load -<<'EOF'
FROM registry.fedoraproject.org/fedora:latest
RUN useradd -m -G wheel -u 1001 user
RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers.d/user
USER user
WORKDIR /home/user
RUN sudo whoami
EOF

[+] Building 6.7s (9/9) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 234B 0.0s
=> [internal] load metadata for registry.fedoraproject.org/fedora:latest 0.6s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM registry.fedoraproject.org/fedora:latest@sha256:991a06b2425c13613ef8ace721a9055e52a64f65cd96c2b18c72bde43fe1308b 4.4s
=> => resolve registry.fedoraproject.org/fedora:latest@sha256:991a06b2425c13613ef8ace721a9055e52a64f65cd96c2b18c72bde43fe1308b 0.0s
=> => sha256:991a06b2425c13613ef8ace721a9055e52a64f65cd96c2b18c72bde43fe1308b 1.41kB / 1.41kB 0.0s
=> => sha256:ef58b9a9b4eeb929cb37b1b83d94a2f7258edd175f9837b1bfa01d3383d5cd09 504B / 504B 0.0s
=> => sha256:a432b057a522737c229d2aac9b029f55bf2a44eb3f423e4e4ece2acb8a304652 858B / 858B 0.0s
=> => sha256:a52c777f25d4afed9d7958da2f249de731ed6e4479ead4f00621589d0398610c 60.06MB / 60.06MB 0.8s
=> => extracting sha256:a52c777f25d4afed9d7958da2f249de731ed6e4479ead4f00621589d0398610c 3.3s
=> [2/5] RUN useradd -m -G wheel -u 1001 user 0.5s
=> [3/5] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers.d/user 0.2s
=> [4/5] WORKDIR /home/user 0.1s
=> [5/5] RUN sudo whoami 0.3s
=> exporting to image 0.2s
=> => exporting layers 0.1s
=> => writing image sha256:cb32a41b3f9c46fcd2c337c20ac788780f4cef5a04ce9eab7b4e38f3b88f2bda 0.0s
=> => naming to docker.io/library/foo 0.0s

But with a custom builder, using the steps from GitHub Actions, it fails:

docker buildx create --name builder-7764b229-6772-4d87-9422-87cbaee29d6b --driver docker-container --buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host' --use
docker buildx use builder-7764b229-6772-4d87-9422-87cbaee29d6b
[+] Building 8.0s (9/9) FINISHED docker-container:builder-7764b229-6772-4d87-9422-87cbaee29d6b
=> [internal] booting buildkit 1.8s
=> => pulling image moby/buildkit:buildx-stable-1 1.0s
=> => creating container buildx_buildkit_builder-7764b229-6772-4d87-9422-87cbaee29d6b0 0.8s
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 234B 0.0s
=> [internal] load metadata for registry.fedoraproject.org/fedora:latest 0.9s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [1/5] FROM registry.fedoraproject.org/fedora:latest@sha256:991a06b2425c13613ef8ace721a9055e52a64f65cd96c2b18c72bde43fe1308b 4.1s
=> => resolve registry.fedoraproject.org/fedora:latest@sha256:991a06b2425c13613ef8ace721a9055e52a64f65cd96c2b18c72bde43fe1308b 0.0s
=> => sha256:a52c777f25d4afed9d7958da2f249de731ed6e4479ead4f00621589d0398610c 60.06MB / 60.06MB 0.6s
=> => extracting sha256:a52c777f25d4afed9d7958da2f249de731ed6e4479ead4f00621589d0398610c 3.4s
=> [2/5] RUN useradd -m -G wheel -u 1001 user 0.4s
=> [3/5] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers.d/user 0.1s
=> [4/5] WORKDIR /home/user 0.1s
=> ERROR [5/5] RUN sudo whoami 0.2s
------
> [5/5] RUN sudo whoami:
0.135 sudo: PAM account management error: Authentication service cannot retrieve authentication info
0.136 sudo: a password is required
------
Dockerfile:8
--------------------
6 | USER user
7 | WORKDIR /home/user
8 | >>> RUN sudo whoami
9 |
--------------------
ERROR: failed to solve: process "/bin/sh -c sudo whoami" did not complete successfully: exit code: 1

docker buildx inspect
Name: builder-7764b229-6772-4d87-9422-87cbaee29d6b
Driver: docker-container
Last Activity: 2025-01-17 23:51:36 +0000 UTC
Nodes:
Name: builder-7764b229-6772-4d87-9422-87cbaee29d6b0
Endpoint: unix:///var/run/docker.sock
Status: running
BuildKit daemon flags: --allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host
BuildKit version: v0.18.2
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
Labels:
org.mobyproject.buildkit.worker.executor: oci
org.mobyproject.buildkit.worker.hostname: 6b8648b69562
org.mobyproject.buildkit.worker.network: host
org.mobyproject.buildkit.worker.oci.process-mode: sandbox
org.mobyproject.buildkit.worker.selinux.enabled: false
org.mobyproject.buildkit.worker.snapshotter: overlayfs
GC Policy rule#0:
All: false
Filters: type==source.local,type==exec.cachemount,type==source.git.checkout
Keep Duration: 48h0m0s
Max Used Space: 488.3MiB
GC Policy rule#1:
All: false
Keep Duration: 1440h0m0s
Reserved Space: 2.794GiB
Max Used Space: 17.7GiB
Min Free Space: 4.657GiB
GC Policy rule#2:
All: false
Reserved Space: 2.794GiB
Max Used Space: 17.7GiB
Min Free Space: 4.657GiB
GC Policy rule#3:
All: true
Reserved Space: 2.794GiB
Max Used Space: 17.7GiB
Min Free Space: 4.657GiB
That was running on an Ubuntu 24.04 machine, somewhat similar to the GitHub Actions runner.
The custom builder would be running inside a docker container, so there's additional nesting happening (possibly relevant). A quick search on GitHub shows various spots where the error can come from, one of them in systemd (which for sure won't be present inside the build container); https://github.com/linux-pam/linux-pam/blob/e634a3a9be9484ada6e93970dfaf0f055ca17332/libpam/pam_strerror.c#L60-L61
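As a quick diagnostic (a hedged sketch, not something run in this thread), the PAM stack that sudo's account phase consults can be inspected directly in the image; the "account management error" has to originate from one of the account modules listed there:

# list the PAM modules configured for sudo in the Fedora image
docker run --rm registry.fedoraproject.org/fedora:latest cat /etc/pam.d/sudo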
Yeah; looks like it doesn't like running docker-in-docker. On the host:

docker run -it --rm registry.fedoraproject.org/fedora:latest sudo whoami
root

Running inside a docker-in-docker container:

docker run -it --rm registry.fedoraproject.org/fedora:latest sudo whoami
sudo: PAM account management error: Authentication service cannot retrieve authentication info
sudo: a password is required
@thaJeztah thanks for the detailed analysis. I am glad it is reproducible. As a workaround for now, is there a way to switch the GitHub action to use the default builder?
Yes, you can set the driver input:

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
  with:
    driver: docker
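For context, a minimal workflow sketch using that workaround (the job and step names here are illustrative, not taken from the failing workflow):

name: build
on: push
jobs:
  build:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      # the "docker" driver builds with the BuildKit instance embedded in the
      # Docker Engine instead of a docker-container builder, avoiding the
      # extra nesting that triggers the PAM error
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          driver: docker
      - name: Build
        uses: docker/build-push-action@v6
        with:
          context: .
          tags: foo
          load: true

Note that the docker driver does not support some docker-container features (e.g. multi-platform builds and cache exporters), so this is a workaround rather than a drop-in equivalent.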
Did a quick check to see what could cause this; initially I was wondering if the latest Fedora versions perhaps switched to using systemd for handling sudo. On Docker Desktop, the problem didn't show:

docker run -d --quiet --rm --privileged --name=dind docker:27-dind -H unix:///var/run/docker.sock
docker exec -it dind sh
/ # docker run -it --quiet --rm registry.fedoraproject.org/fedora:40 sudo whoami
root
/ # docker run -it --quiet --rm registry.fedoraproject.org/fedora:41 sudo whoami
root
/ # docker run -it --quiet --rm registry.fedoraproject.org/fedora:latest sudo whoami
root

But running on Ubuntu 24.04 it does:

docker run -d --rm --privileged --name=dind docker:27-dind -H unix:///var/run/docker.sock
docker exec -it dind sh
/ # docker run -it --quiet --rm registry.fedoraproject.org/fedora:40 sudo whoami
sudo: PAM account management error: Authentication service cannot retrieve authentication info
sudo: a password is required
/ # docker run -it --quiet --rm registry.fedoraproject.org/fedora:41 sudo whoami
sudo: PAM account management error: Authentication service cannot retrieve authentication info
sudo: a password is required
/ # docker run -it --quiet --rm registry.fedoraproject.org/fedora:latest sudo whoami
sudo: PAM account management error: Authentication service cannot retrieve authentication info
sudo: a password is required

Checking syslog, it looks to be AppArmor blocking these calls:

tail -n 100 /var/log/syslog

2025-01-20T12:25:10.207489+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: eth0: renamed from veth8ec412c
2025-01-20T12:25:10.217579+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 4(veth9fa62ac) entered blocking state
2025-01-20T12:25:10.217601+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 4(veth9fa62ac) entered forwarding state
2025-01-20T12:25:10.219176+00:00 ubuntu-s-1vcpu-1gb-ams3-01 systemd-networkd[643]: veth9fa62ac: Gained carrier
2025-01-20T12:25:11.434009+00:00 ubuntu-s-1vcpu-1gb-ams3-01 systemd-networkd[643]: veth9fa62ac: Gained IPv6LL
2025-01-20T12:25:35.518478+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered blocking state
2025-01-20T12:25:35.518508+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered disabled state
2025-01-20T12:25:35.518511+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: vethc5afc37: entered allmulticast mode
2025-01-20T12:25:35.518512+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: vethc5afc37: entered promiscuous mode
2025-01-20T12:25:35.835778+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: eth0: renamed from veth7108517
2025-01-20T12:25:35.839663+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered blocking state
2025-01-20T12:25:35.839682+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered forwarding state
2025-01-20T12:25:36.017483+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: audit: type=1400 audit(1737375936.015:129): apparmor="DENIED" operation="open" class="file" profile="unix-chkpwd" name="/dev/console" pid=84777 comm="unix_chkpwd" requested_mask="w" denied_mask="w" fsuid=0 ouid=0
2025-01-20T12:25:36.101077+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered disabled state
2025-01-20T12:25:36.102616+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7108517: renamed from eth0
2025-01-20T12:25:36.114763+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered disabled state
2025-01-20T12:25:36.114782+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: vethc5afc37 (unregistering): left allmulticast mode
2025-01-20T12:25:36.114784+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: vethc5afc37 (unregistering): left promiscuous mode
2025-01-20T12:25:36.114786+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(vethc5afc37) entered disabled state
2025-01-20T12:25:53.657476+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered blocking state
2025-01-20T12:25:53.657495+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered disabled state
2025-01-20T12:25:53.657496+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7f83543: entered allmulticast mode
2025-01-20T12:25:53.657497+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7f83543: entered promiscuous mode
2025-01-20T12:25:53.886468+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: eth0: renamed from veth4c7a0fd
2025-01-20T12:25:53.889491+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered blocking state
2025-01-20T12:25:53.889508+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered forwarding state
2025-01-20T12:25:54.105489+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: audit: type=1400 audit(1737375954.103:130): apparmor="DENIED" operation="open" class="file" profile="unix-chkpwd" name="/dev/console" pid=84859 comm="unix_chkpwd" requested_mask="w" denied_mask="w" fsuid=0 ouid=0
2025-01-20T12:25:54.172473+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered disabled state
2025-01-20T12:25:54.172504+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth4c7a0fd: renamed from eth0
2025-01-20T12:25:54.186495+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered disabled state
2025-01-20T12:25:54.186514+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7f83543 (unregistering): left allmulticast mode
2025-01-20T12:25:54.186516+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7f83543 (unregistering): left promiscuous mode
2025-01-20T12:25:54.186518+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7f83543) entered disabled state
2025-01-20T12:26:05.074481+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered blocking state
2025-01-20T12:26:05.074499+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered disabled state
2025-01-20T12:26:05.074500+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7d3aef0: entered allmulticast mode
2025-01-20T12:26:05.074501+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7d3aef0: entered promiscuous mode
2025-01-20T12:26:05.283530+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: eth0: renamed from vetha46b3df
2025-01-20T12:26:05.287526+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered blocking state
2025-01-20T12:26:05.287554+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered forwarding state
2025-01-20T12:26:05.409485+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: audit: type=1400 audit(1737375965.407:131): apparmor="DENIED" operation="open" class="file" profile="unix-chkpwd" name="/dev/console" pid=84940 comm="unix_chkpwd" requested_mask="w" denied_mask="w" fsuid=0 ouid=0
2025-01-20T12:26:05.476518+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered disabled state
2025-01-20T12:26:05.476537+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: vetha46b3df: renamed from eth0
2025-01-20T12:26:05.490473+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered disabled state
2025-01-20T12:26:05.491492+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7d3aef0 (unregistering): left allmulticast mode
2025-01-20T12:26:05.491507+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: veth7d3aef0 (unregistering): left promiscuous mode
2025-01-20T12:26:05.491510+00:00 ubuntu-s-1vcpu-1gb-ams3-01 kernel: docker0: port 1(veth7d3aef0) entered disabled state

Which makes me consider this could be similar to …, and related to changes in Ubuntu no longer allowing "unconfined" processes, but requiring any process to be assigned a profile.
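One way to confirm the unix-chkpwd profile is the culprit (a hedged sketch, not run as part of this thread) is to switch that profile to complain mode on the host and re-run the repro:

# requires the apparmor-utils package; complain mode logs accesses instead of denying them
sudo aa-complain /etc/apparmor.d/unix-chkpwd
docker run -it --rm registry.fedoraproject.org/fedora:latest sudo whoami
# expected: root; revert afterwards with `sudo aa-enforce /etc/apparmor.d/unix-chkpwd`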
Did some further testing:

✅ Docker 27.5.0 on Docker Desktop works;
✅ Docker 27.5.0 on Ubuntu 20.04 works;
✅ Docker 27.5.0 on Ubuntu 22.04 works;
❌ Docker 27.5.0 on Ubuntu 24.04 doesn't work;
❌ Docker 27.5.0 on Ubuntu 24.10 doesn't work;
Running an ubuntu:24.04 container works, though:

docker run -it --quiet --rm ubuntu:24.04
# inside the container:
apt-get update && apt-get install -y sudo
sudo whoami
root

Location of the unix_chkpwd helper in both images:

docker run -it --quiet --rm ubuntu:24.04 sh -c 'command -v unix_chkpwd'
/usr/sbin/unix_chkpwd
docker run -it --quiet --rm ubuntu:24.04 ls -la /usr/sbin/unix_chkpwd
-rwxr-sr-x 1 root shadow 31040 May 2 2024 /usr/sbin/unix_chkpwd
docker run -it --quiet --rm registry.fedoraproject.org/fedora:41 sh -c 'command -v unix_chkpwd'
/usr/sbin/unix_chkpwd
docker run -it --quiet --rm registry.fedoraproject.org/fedora:41 ls -la /usr/sbin/unix_chkpwd
-rwsr-xr-x 1 root root 32560 Nov 25 00:00 /usr/sbin/unix_chkpwd
Comparing apparmor_status across Ubuntu versions. Ubuntu 20.04:

apparmor_status
apparmor module is loaded.
29 profiles are loaded.
29 profiles are in enforce mode.
/snap/snapd/16292/usr/lib/snapd/snap-confine
/snap/snapd/16292/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/lib/snapd/snap-confine
/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
/usr/sbin/tcpdump
/{,usr/}sbin/dhclient
docker-default
lsb_release
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
snap-update-ns.lxd
snap.lxd.activate
snap.lxd.benchmark
snap.lxd.buginfo
snap.lxd.check-kernel
snap.lxd.daemon
snap.lxd.hook.configure
snap.lxd.hook.install
snap.lxd.hook.remove
snap.lxd.lxc
snap.lxd.lxc-to-lxd
snap.lxd.lxd
snap.lxd.migrate
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Ubuntu 22.04:

apparmor_status
apparmor module is loaded.
40 profiles are loaded.
40 profiles are in enforce mode.
/snap/snapd/21759/usr/lib/snapd/snap-confine
/snap/snapd/21759/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/lib/snapd/snap-confine
/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
/{,usr/}sbin/dhclient
docker-default
lsb_release
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
snap-update-ns.lxd
snap.lxd.activate
snap.lxd.benchmark
snap.lxd.buginfo
snap.lxd.check-kernel
snap.lxd.daemon
snap.lxd.hook.configure
snap.lxd.hook.install
snap.lxd.hook.remove
snap.lxd.lxc
snap.lxd.lxc-to-lxd
snap.lxd.lxd
snap.lxd.migrate
snap.lxd.user-daemon
tcpdump
ubuntu_pro_apt_news
ubuntu_pro_esm_cache
ubuntu_pro_esm_cache//apt_methods
ubuntu_pro_esm_cache//apt_methods_gpgv
ubuntu_pro_esm_cache//cloud_id
ubuntu_pro_esm_cache//dpkg
ubuntu_pro_esm_cache//ps
ubuntu_pro_esm_cache//ubuntu_distro_info
ubuntu_pro_esm_cache_systemctl
ubuntu_pro_esm_cache_systemd_detect_virt
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.

Ubuntu 24.04:

apparmor_status
apparmor module is loaded.
120 profiles are loaded.
25 profiles are in enforce mode.
/usr/bin/man
/usr/lib/snapd/snap-confine
/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
docker-default
lsb_release
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
plasmashell
plasmashell//QtWebEngineProcess
rsyslogd
tcpdump
ubuntu_pro_apt_news
ubuntu_pro_esm_cache
ubuntu_pro_esm_cache//apt_methods
ubuntu_pro_esm_cache//apt_methods_gpgv
ubuntu_pro_esm_cache//cloud_id
ubuntu_pro_esm_cache//dpkg
ubuntu_pro_esm_cache//ps
ubuntu_pro_esm_cache//ubuntu_distro_info
ubuntu_pro_esm_cache_systemctl
ubuntu_pro_esm_cache_systemd_detect_virt
unix-chkpwd
unprivileged_userns
4 profiles are in complain mode.
transmission-cli
transmission-daemon
transmission-gtk
transmission-qt
0 profiles are in prompt mode.
0 profiles are in kill mode.
91 profiles are in unconfined mode.
1password
Discord
MongoDB Compass
QtWebEngineProcess
balena-etcher
brave
buildah
busybox
cam
ch-checkns
ch-run
chrome
crun
devhelp
element-desktop
epiphany
evolution
firefox
flatpak
foliate
geary
github-desktop
goldendict
ipa_verify
kchmviewer
keybase
lc-compliance
libcamerify
linux-sandbox
loupe
lxc-attach
lxc-create
lxc-destroy
lxc-execute
lxc-stop
lxc-unshare
lxc-usernsexec
mmdebstrap
msedge
nautilus
notepadqq
obsidian
opam
opera
pageedit
podman
polypane
privacybrowser
qcam
qmapshack
qutebrowser
rootlesskit
rpm
rssguard
runc
sbuild
sbuild-abort
sbuild-adduser
sbuild-apt
sbuild-checkpackages
sbuild-clean
sbuild-createchroot
sbuild-destroychroot
sbuild-distupgrade
sbuild-hold
sbuild-shell
sbuild-unhold
sbuild-update
sbuild-upgrade
scide
signal-desktop
slack
slirp4netns
steam
stress-ng
surfshark
systemd-coredump
thunderbird
toybox
trinity
tup
tuxedo-control-center
userbindmount
uwsgi-core
vdens
virtiofsd
vivaldi-bin
vpnns
vscode
wike
wpcom
1 processes have profiles defined.
1 processes are in enforce mode.
/usr/sbin/rsyslogd (890) rsyslogd
0 processes are in complain mode.
0 processes are in prompt mode.
0 processes are in kill mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.

In Ubuntu 24.04, there are many more profiles loaded, and I see a profile for unix-chkpwd:

cat /etc/apparmor.d/unix-chkpwd
# apparmor.d - Full set of apparmor profiles
# Copyright (C) 2019-2021 Mikhail Morfikov
# SPDX-License-Identifier: GPL-2.0-only
# The apparmor.d project comes with several variables and abstractions
# that are not part of upstream AppArmor yet. Therefore this profile was
# adopted to use abstractions and variables that are available.
# Copyright (C) Christian Boltz 2024
abi <abi/4.0>,

include <tunables/global>

profile unix-chkpwd /{,usr/}{,s}bin/unix_chkpwd {
  include <abstractions/base>
  include <abstractions/nameservice>

  # To write records to the kernel auditing log.
  capability audit_write,

  network netlink raw,

  /{,usr/}{,s}bin/unix_chkpwd mr,

  /etc/shadow r,

  # systemd userdb, used in nspawn
  /run/host/userdb/*.user r,
  /run/host/userdb/*.user-privileged r,

  # file_inherit
  owner /dev/tty[0-9]* rw,

  include if exists <local/unix-chkpwd>
}
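Since the profile ends with include if exists <local/unix-chkpwd>, a possible host-side workaround (an untested sketch, based on the denial above being a write to /dev/console) would be a local override plus a profile reload:

# allow the write that AppArmor denied (audit showed name="/dev/console" requested_mask="w");
# /etc/apparmor.d/local/unix-chkpwd is pulled in by the "include if exists" line above
echo '/dev/console w,' | sudo tee -a /etc/apparmor.d/local/unix-chkpwd
# reload the profile so the override takes effect
sudo apparmor_parser -r /etc/apparmor.d/unix-chkpwd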
Description
Imagine a simple container like the one shown above (the Fedora Dockerfile ending in RUN sudo whoami).
This will fail with a PAM error.
Expected behaviour
sudo executes successfully in the container.
Actual behaviour
The build fails with a PAM error (see above).
Repository URL
https://github.com/junghans/test-actions/tree/PAM_error
Workflow run URL
https://github.com/junghans/test-actions/actions/runs/12834771076
Additional info
The build from the same Dockerfile worked a couple of months ago.