nextcloud-nginx container crashlooping after securityContext update; /var/www/html/config always owned by root #335

Comments
Adding my experience:
@FrankelJb are you also using nginx? Which security contexts are you setting? There are a few that you can set. If we could see the security context settings from your values.yaml, that would help in comparing states. Thank you for sharing!
@jessebot I'm not using Nginx. I'm almost ready to give up on NC in kubernetes (I can't upgrade now). I've managed to solve this issue. I was trying to use a single redis cluster for all my services. However, I had to give up on that dream because NC refused to connect without a password. I'm not sure if that's caused by a config in the helm chart or my configuration error. Thanks for being so responsive :)
I'm sorry you're having a bad time with this. I also had a bad time with this at first and then became sort of obsessed with trying to fix it for others too 😅 If you can post your values.yaml (after removing sensitive info) I can help troubleshoot it for you :)
UID 82 comes from the Nextcloud fpm-alpine image. If you use another image instead of alpine, I believe the user is 33. The NGINX container you use is an alpine-based image, so you have to make sure the group and fsGroup match for both containers. For example, my (abbreviated) deployment:

You can see I use a distroless NGINX container image, but the principle is the same.
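As a generic sketch of that principle (the images and IDs below are illustrative, not the commenter's actual manifest): both containers share the pod-level fsGroup, and each runs as its own non-root user.

```yaml
# Illustrative pod template fragment, not the commenter's deployment:
# the pod-level fsGroup (82 = www-data in the alpine-based images) is applied
# to the shared volume, so both non-root containers can access the same files.
spec:
  securityContext:
    fsGroup: 82
  containers:
    - name: nextcloud
      image: nextcloud:fpm-alpine
      securityContext:
        runAsUser: 82
        runAsGroup: 82
    - name: nginx
      image: nginxinc/nginx-unprivileged:alpine
      securityContext:
        runAsUser: 101   # default user of the unprivileged nginx image
        runAsGroup: 101
```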
@FrankelJb, for Argo CD, I detailed some of my trials in #336 (comment) if that's at all helpful. For this owned-by-root issue, also discussed in #114, I finally got around to testing it (after battling Argo 😅), and I've noted that some of the directories are still always created as root. I don't know why though. At first, I thought it was a persistence thing, but then I disabled persistence and it's still an issue.

@provokateurin or @tvories, have you been able to get this to work? I can get every other directory created as any other user, but a few directories (the same ones each time) always seem to end up owned by root. You can see my values.yaml here, but I don't know what else we need to set 🤔 Are there security contexts for persistent volumes? Or perhaps mount options we need to set for the configmap when it gets mounted? It's been months, albeit in my off hours, but I'm still so confused.
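On the configmap mount-option question: plain Kubernetes only exposes file *modes* for configMap volumes (`defaultMode` / `items[].mode`), not ownership; group ownership of those files follows the pod's `fsGroup`. A minimal sketch outside the chart (the pod and ConfigMap names here are hypothetical):

```yaml
# Hypothetical pod showing configMap mount options; Kubernetes only lets you
# set the file mode here, not the owner. The group follows fsGroup.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mount-demo
spec:
  securityContext:
    fsGroup: 82               # mounted files get group 82
  containers:
    - name: app
      image: nextcloud:fpm-alpine
      volumeMounts:
        - name: nextcloud-config
          mountPath: /var/www/html/config
  volumes:
    - name: nextcloud-config
      configMap:
        name: nextcloud-config   # hypothetical ConfigMap name
        defaultMode: 0440        # read-only for owner and group
```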
@Jeroen0494, I switched to the fpm-alpine image. Here's the relevant part of my values.yaml:

```yaml
image:
  repository: nextcloud
  flavor: fpm-alpine
  pullPolicy: Always

nextcloud:
  # Set securityContext parameters. For example, you may need to define runAsNonRoot directive
  securityContext:
    runAsUser: 82
    runAsGroup: 82
    runAsNonRoot: true
    readOnlyRootFilesystem: false
    allowPrivilegeEscalation: false
    privileged: false
    capabilities:
      drop:
        - ALL
  podSecurityContext:
    fsGroup: 82

...

# this is deprecated, but I figured why not, anything to change that one config directory from root (but it didn't work)
securityContext:
  fsGroup: 82
```

I can't figure out what else it would be. Maybe a script in the container itself? 🤔 Are you using the helm chart and using persistence? Is your seccompProfile a factor here?

```yaml
seccompProfile:
  type: Localhost
  localhostProfile: operator/nextcloud/nextcloud-seccomp-profile.json
```

I see it described in the k8s API docs, but it doesn't link further for what goes in `localhostProfile`.
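For what goes in `localhostProfile`: it's a path to a seccomp profile JSON file on the node, relative to the kubelet's seccomp directory (typically `/var/lib/kubelet/seccomp`), and the file has to exist on every node the pod can schedule to. The security-profiles-operator syncs its profiles under an `operator/<namespace>/` subdirectory there, which matches the path above. A minimal sketch with a hypothetical profile path:

```yaml
# Hypothetical example: /var/lib/kubelet/seccomp/profiles/nextcloud.json
# must already exist on each node before the pod can start.
securityContext:
  seccompProfile:
    type: Localhost
    localhostProfile: profiles/nextcloud.json
# If no custom profile is needed, RuntimeDefault is the simpler choice:
#   seccompProfile:
#     type: RuntimeDefault
```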
> /var/www/html/config always owned by root

@jessebot Not sure if it is the same issue, but maybe it will help. I'm using pre-created hostPath persistent volumes on K3s with a custom runAsUser/fsGroup (full manifests in my comment further down). I resolved it by manually changing the ownership of the subdirs on the host to the same uid.
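For anyone on k3s with the default local-path provisioner, that host-side fix looks roughly like this (the storage path is the k3s default; the exact pvc-... directory name and the UID/GID to use depend on your setup):

```bash
# Rough sketch: fix ownership of the provisioned directory on the k3s node.
# /var/lib/rancher/k3s/storage is the local-path provisioner default on k3s;
# use 82:82 for the alpine-based images or 33:33 for the Debian-based ones.
sudo ls /var/lib/rancher/k3s/storage/   # find the pvc-..._nextcloud_... directory
sudo chown -R 82:82 /var/lib/rancher/k3s/storage/pvc-<id>_nextcloud_nextcloud-files
```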
@tomasodehnal, thanks for popping in to help (in fact, thank you to everyone who has tried to pop in and help with this weird issue 😁). I will take a peek at that. A few questions: Are you using k3s, or another k8s on metal? Could you post your full PV/PVC manifests, or the section of your values.yaml with that info? The reason I'm asking is that I'm wondering if it's actually a storage driver problem that has nothing to do with nextcloud. It only seems to be happening consistently for a few directories, and those seem to be mounts from persistent volumes. Here's one of my PVCs, which uses the local path provisioner, since I'm using k3s:

```yaml
# Dynamic persistent volume claim for nextcloud data (/var/www/html) to persist
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: nextcloud
  name: nextcloud-files
  annotations:
    k8up.io/backup: "true"
    volumeType: local
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Still looking if there's anything that can be done here, but from my research, this might just be something that needs to be solved in an init container, which I might have to make a PR for :(

Update: tested without any sort of values.yaml, using all default settings on k3s with the current chart version, and I still see:

```
-rw-r--r-- 1 root www-data 0 Apr 16 09:01 nextcloud-init-sync.lock
```

but that's without any persistence or configurations enabled 🤔
@jessebot are you experiencing these storage permission errors only on already existing storage, or also when using an emptyDir, for example? When using existing storage where the owner of the files is root, a non-root container wouldn't be able to change the owner; you'd have to change the owner on the storage medium itself with a `chown`. Does the issue exist when using no attached storage? And when using emptyDir? And when using a PVC template with local-path-provisioner?
I'm using the security profiles operator and have written my own seccomp profile. You may ignore this line, or switch `type` to `RuntimeDefault`. Currently I'm not using the Helm chart, because I require certain changes (that I've created a PR for). But all my YAMLs are based on the Helm chart.
Thanks for getting back to me, @Jeroen0494 🙏
Commented on that PR and will take another look after conflicts are resolved :) Will still probably ping Kate in though, as the PR is large.
Let me try with emptyDir actually. 🤔 I've been doing this on a fresh k3s cluster each time; I completely destroy the cluster and its storage before testing a new cluster.

No, the issue doesn't exist when I don't use any persistence. Well, except for the `nextcloud-init-sync.lock` file mentioned above.
Could you also try with a local mount, instead of using the local path provisioner? For example, my PV:
Here's what else I tried recently. I do not know how to set an emptyDir with the current values.yaml 🤔 Creating a Persistent Volume with
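Since the chart doesn't obviously expose an emptyDir option, one way to separate the storage question from the chart is a throwaway pod; a minimal sketch (the pod name, image tag, and IDs are just for illustration):

```yaml
# Hypothetical throwaway pod to check whether fsGroup is honoured on an emptyDir,
# independent of the Helm chart and of any storage provisioner. The command
# override skips the image entrypoint, so only volume ownership is tested.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-emptydir-test
spec:
  securityContext:
    runAsUser: 82
    runAsGroup: 82
    fsGroup: 82
  containers:
    - name: check
      image: nextcloud:fpm-alpine
      command: ["sh", "-c", "ls -lan /var/www/html && sleep 3600"]
      volumeMounts:
        - name: html
          mountPath: /var/www/html
  volumes:
    - name: html
      emptyDir: {}
```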
For context, this is the securityContext logic in the chart's deployment template:

```yaml
securityContext:
  {{- if .Values.nextcloud.podSecurityContext }}
  {{- with .Values.nextcloud.podSecurityContext }}
  {{- toYaml . | nindent 8 }}
  {{- end }}
  {{- else }}
  {{- if .Values.nginx.enabled }}
  # Will mount configuration files as www-data (id: 82) for nextcloud
  fsGroup: 82
  {{- else }}
  # Will mount configuration files as www-data (id: 33) for nextcloud
  fsGroup: 33
  {{- end }}
```
Submitted PR here: #379 (but that would only fix the group ownership, not the user ownership).

Current thoughts...

Perhaps, since bitnami's postgres chart also provides an init container to get around this, we should just provide that as well, since k3s and rancher are pretty popular. It's not pretty, but I don't really see a way around this so far. (There is a beta rootless mode for k3s, but I haven't dug into that yet.)
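For reference, a rough sketch of what such an init container could look like; this is not an existing chart option, and the volume name, image, and target directories below are assumptions:

```yaml
# Hypothetical init container (similar in spirit to bitnami's volumePermissions):
# it runs as root once, fixes ownership of the directories that keep ending up
# root-owned, and then the normal non-root containers take over.
# Note: chown -R on a large data directory can take a while.
initContainers:
  - name: volume-permissions
    image: busybox:1.36
    command:
      - sh
      - -c
      - >-
        mkdir -p /var/www/html/config /var/www/html/data /var/www/html/custom_apps &&
        chown -R 82:82 /var/www/html/config /var/www/html/data /var/www/html/custom_apps
    securityContext:
      runAsUser: 0
      runAsNonRoot: false
    volumeMounts:
      - name: nextcloud-main     # assumed to match the chart's main volume name
        mountPath: /var/www/html
```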
@jessebot It's K3s on a Ubuntu VM on ESXi. This is the manifest I use for the PV and PVC:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud
  labels:
    type: local
spec:
  storageClassName: nextcloud
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/nextcloud"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud
  namespace: nextcloud
spec:
  storageClassName: nextcloud
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

And the respective excerpt from the values.yaml:

```yaml
nextcloud:
  podSecurityContext:
    runAsUser: 1003
    runAsGroup: 1003
    runAsNonRoot: true
    fsGroup: 1003

persistence:
  enabled: true
  existingClaim: nextcloud
```

I was testing with a fresh install, without existing claims, and I would say it works as expected.

Looking into your manifest, there is one thing I noticed: you use the local path provisioner, whereas I pre-create a hostPath volume. I think the issue lies in the storage provider one uses and is not related to nextcloud.

If you want to resolve it regardless of the storage used, I would say an init container is the safe bet, but it will need to have privileged permissions. One other observation -
Popping in very quickly to say I tested this on GKE too, with the same result:

```console
root@nextcloud-web-app-68f6bb8fb6-nblkq:/var/www/html# ls -hal
total 196K
drwxrwsr-x 15 www-data www-data 4.0K Apr 23 14:52 .
drwxrwsr-x  4 root     82       4.0K Apr 23 14:52 ..
-rw-r--r--  1 www-data www-data 3.2K Apr 23 14:52 .htaccess
-rw-r--r--  1 www-data www-data  101 Apr 23 14:52 .user.ini
drwxr-sr-x 45 www-data www-data 4.0K Apr 23 14:52 3rdparty
-rw-r--r--  1 www-data www-data  19K Apr 23 14:52 AUTHORS
-rw-r--r--  1 www-data www-data  34K Apr 23 14:52 COPYING
drwxr-sr-x 50 www-data www-data 4.0K Apr 23 14:52 apps
drwxrwsr-x  2 root     82       4.0K Apr 23 14:52 config
-rw-r--r--  1 www-data www-data 4.0K Apr 23 14:52 console.php
drwxr-sr-x 24 www-data www-data 4.0K Apr 23 14:52 core
-rw-r--r--  1 www-data www-data 6.2K Apr 23 14:52 cron.php
drwxrwsr-x  2 www-data www-data 4.0K Apr 23 14:52 custom_apps
drwxrwsr-x  2 www-data www-data 4.0K Apr 23 14:52 data
drwxr-sr-x  2 www-data www-data  12K Apr 23 14:52 dist
-rw-r--r--  1 www-data www-data  156 Apr 23 14:52 index.html
-rw-r--r--  1 www-data www-data 3.4K Apr 23 14:52 index.php
drwxr-sr-x  6 www-data www-data 4.0K Apr 23 14:52 lib
-rw-r--r--  1 root     82          0 Apr 23 14:52 nextcloud-init-sync.lock
-rw-r-----  1 www-data www-data  14K Apr 23 14:54 nextcloud.log
-rwxr-xr-x  1 www-data www-data  283 Apr 23 14:52 occ
drwxr-sr-x  2 www-data www-data 4.0K Apr 23 14:52 ocm-provider
drwxr-sr-x  2 www-data www-data 4.0K Apr 23 14:52 ocs
drwxr-sr-x  2 www-data www-data 4.0K Apr 23 14:52 ocs-provider
-rw-r--r--  1 www-data www-data 3.1K Apr 23 14:52 public.php
-rw-r--r--  1 www-data www-data 5.5K Apr 23 14:52 remote.php
drwxr-sr-x  4 www-data www-data 4.0K Apr 23 14:52 resources
-rw-r--r--  1 www-data www-data   26 Apr 23 14:52 robots.txt
-rw-r--r--  1 www-data www-data 2.4K Apr 23 14:52 status.php
drwxrwsr-x  3 www-data www-data 4.0K Apr 23 14:52 themes
-rw-r--r--  1 www-data www-data  384 Apr 23 14:52 version.php
```

I don't think this is specific to k3s anymore 🤔
I also stumbled onto this situation, where mounting a persistent volume leaves the config directory owned by root; I observed the same behaviour with the fpm image.

Anyway, at least for my use case, I was able to get a non-root nextcloud container running by setting the PHP config option `check_data_directory_permissions` to `false`. Below is a partial extract from my values.yaml file; maybe this can help someone in the same boat?

```yaml
image:
  flavor: fpm

persistence:
  enabled: true
  existingClaim: nextcloud-pvc

nextcloud:
  ...
  podSecurityContext:
    runAsUser: 33
    runAsGroup: 33
    runAsNonRoot: true
    readOnlyRootFilesystem: false
  configs:
    custom.config.php: |
      <?php
      $CONFIG = array(
        'check_data_directory_permissions' => false, # https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/
      );

nginx:
  enabled: true
  image:
    repository: nginxinc/nginx-unprivileged
    tag: alpine
    pullPolicy: IfNotPresent
  securityContext:
    runAsUser: 101
    runAsGroup: 101
    runAsNonRoot: true
    readOnlyRootFilesystem: false
...
```

PVC definition:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-pvc
  namespace: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
  storageClassName: local-path
  volumeMode: Filesystem
```
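If it helps anyone verifying a setup like this, the effective user and the ownership of `config` can be checked in the running pod with something like the following (the deployment and container names depend on the release name and chart defaults):

```bash
# Hypothetical verification; adjust the names to your release.
kubectl -n nextcloud exec deploy/nextcloud -c nextcloud -- id
kubectl -n nextcloud exec deploy/nextcloud -c nextcloud -- ls -lan /var/www/html/config
```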
I am also having this issue and it's a pretty big showstopper for me. I'm using a RWX persistent volume that needs to have a group set in one pod and a different group set in nextcloud. But nextcloud REQUIRES all of the files in the container to be owned by `www-data`, and as soon as I set a different group, things break. Nextcloud is frustrating to say the least. I'm not sure why so many people like this thing; I've only experienced headaches and problems with Nextcloud so far...
Hi, I had a similar problem, and for me the fix was to create, on the host, all the folders that are needed by the nextcloud container. I'm using pre-created persistent volumes. See below the commands I'm running on my node:

```bash
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server
sudo chown 1000:1000 -R /ext/persistent/nextcloud-staging/server/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/config
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/config/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/custom_apps
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/custom_apps/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/data
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/data/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/html
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/html/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/root
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/root/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/themes
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/themes/
sudo mkdir --mode 0755 -p /ext/persistent/nextcloud-staging/server/tmp
sudo chown www-data:www-data -R /ext/persistent/nextcloud-staging/server/tmp/
```

It might be the problem that these folders cannot be created by a non-root container in the first place. Maybe this helps someone.
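One thing worth double-checking with a host-side `chown www-data:www-data` is that the node's `www-data` resolves to the same numeric UID/GID the container runs as, since it's the numbers that matter once the volume is mounted; a quick check on the node:

```bash
# Show the numeric UID/GID the node maps www-data to; it must match the
# container's www-data (33 for the Debian-based images, 82 for alpine).
id www-data
getent passwd www-data
```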
Description
I've edited this for full context of how we got here, as this issue is getting kind of long, because it needed to be tested in a lot of different ways, which led me in several directions.
This issue is a continuation of the conversation started after #269 was merged. I was originally trying to change `podSecurityContext.runAsUser` and `podSecurityContext.runAsGroup` to `33`, because I was trying to diagnose why the `/var/www/html/config` directory was always owned by root. I am deploying the nextcloud helm chart using persistent volumes on k3s with the default local path provisioner.

I learned that `podSecurityContext.fsGroup` was always being set to `82` anytime you used `nginx.enabled` and didn't set `podSecurityContext.fsGroup` explicitly, so I submitted a draft PR to fix it so that it checks `image.flavor` for `alpine`: #379

Through the comments here you can see other things I'm currently testing, because I'm still not sure if it's just the local path provisioner on k3s, or k3s itself, or what, but the best I can get is 🤷 I'll update this issue description with more clarity as it comes.
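Until something like that is merged, the workaround implied above is to set the IDs explicitly to match the image flavour; a minimal values.yaml sketch (assuming the `fpm-alpine` flavour, hence UID/GID 82; use 33 for the non-alpine images):

```yaml
# Workaround sketch: pin the security context to the image's www-data IDs
# instead of relying on the chart's fsGroup default.
image:
  flavor: fpm-alpine
nextcloud:
  securityContext:
    runAsUser: 82
    runAsGroup: 82
    runAsNonRoot: true
  podSecurityContext:
    fsGroup: 82
```

Note that this only addresses the group side; as the comments above show, the user ownership of `config` can still end up as root.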
Original Issue that was opened on Jan 23
The nginx container in the nextcloud pod won't start and complains of a readonly file system, even if I try to only set `nextcloud.securityContext`.

I have created a new cluster and deployed nextcloud with the securityContext parameters from the values.yaml of this repo, including the nginx security context. My entire `values.yaml` is here, but the parts that matter are:

securityContext parameters in my old `values.yaml`

The nextcloud pod is in a CrashLoopBackOff state, with the offending container being nginx; these are the logs:

This is the resulting deployment.yaml when I do a `kubectl get deployment -n nextcloud nextcloud-web-app > deployment.yaml`:

Click me for the nextcloud `deployment.yaml`
Where does the UID 82 come from? (Edit: it comes from the alpine nextcloud and nginx images - that's www-data)
I set that to 33 (nextcloud's www-data user) to test, but it didn't seem to make a difference. Just so it's clear: without editing any of the security contexts, everything works. But I would like the security context to work, because otherwise my restores from backups fail: the `/var/www/html/config` directory is always created with root ownership, which means that if the restores run as www-data, they can't restore that particular directory, which is important. I'm hoping the security context fixes that, so that nothing has to run as root in this stack.

I'm deploying the `3.4.1` nextcloud helm chart via Argo CD onto k3s on Ubuntu 22.04.

Update: problem still present in the `3.5.7` helm chart.