
[draft] how to reproduce a race condition issue between kubelet and docker #40

zhouhaibing089 opened this issue Apr 29, 2023 · 7 comments

zhouhaibing089 commented Apr 29, 2023

This is just a draft, so it is not meant to be well organized. (A better version will be shared later.)

minikube

We need to build a specialized minikube iso image with the following diff on top of minikube HEAD (at the time of writing).

diff --git a/deploy/iso/minikube-iso/arch/x86_64/package/containerd-bin/containerd-bin.mk b/deploy/iso/minikube-iso/arch/x86_64/package/containerd-bin/containerd-bin.mk
index ca603e419..b4b71a0bb 100644
--- a/deploy/iso/minikube-iso/arch/x86_64/package/containerd-bin/containerd-bin.mk
+++ b/deploy/iso/minikube-iso/arch/x86_64/package/containerd-bin/containerd-bin.mk
@@ -3,8 +3,8 @@
 # containerd
 #
 ################################################################################
-CONTAINERD_BIN_VERSION = v1.7.0
-CONTAINERD_BIN_COMMIT = 1fbd70374134b891f97ce19c70b6e50c7b9f4e0d
+CONTAINERD_BIN_VERSION = v1.2.13
+CONTAINERD_BIN_COMMIT = 7ad184331fa3e55e52b890ea95e65ba581ae3429
 CONTAINERD_BIN_SITE = https://github.com/containerd/containerd/archive
 CONTAINERD_BIN_SOURCE = $(CONTAINERD_BIN_VERSION).tar.gz
 CONTAINERD_BIN_DEPENDENCIES = host-go libgpgme
@@ -45,9 +45,6 @@ define CONTAINERD_BIN_INSTALL_TARGET_CMDS
 	$(INSTALL) -Dm755 \
 		$(@D)/bin/containerd-shim-runc-v1 \
 		$(TARGET_DIR)/usr/bin
-	$(INSTALL) -Dm755 \
-		$(@D)/bin/containerd-shim-runc-v2 \
-		$(TARGET_DIR)/usr/bin
 	$(INSTALL) -Dm755 \
 		$(@D)/bin/ctr \
 		$(TARGET_DIR)/usr/bin
diff --git a/deploy/iso/minikube-iso/arch/x86_64/package/docker-bin/docker-bin.mk b/deploy/iso/minikube-iso/arch/x86_64/package/docker-bin/docker-bin.mk
index a5eab9b93..c2864348b 100644
--- a/deploy/iso/minikube-iso/arch/x86_64/package/docker-bin/docker-bin.mk
+++ b/deploy/iso/minikube-iso/arch/x86_64/package/docker-bin/docker-bin.mk
@@ -4,7 +4,7 @@
 #
 ################################################################################
 
-DOCKER_BIN_VERSION = 20.10.23
+DOCKER_BIN_VERSION = 19.03.11
 DOCKER_BIN_SITE = https://download.docker.com/linux/static/stable/x86_64
 DOCKER_BIN_SOURCE = docker-$(DOCKER_BIN_VERSION).tgz

Then run the following commands to build the iso image:

$ make buildroot-image
$ make out/minikube-amd64.iso

If the commands above run successfully, we should see a new iso image built:

$ ls out
buildroot  minikube-amd64.iso

Once we have it, we can launch a k8s cluster with the following command:

$ minikube start --driver=virtualbox --cpus=4 --memory=8g --iso-url=file://$(pwd)/out/minikube-amd64.iso --kubernetes-version=v1.21.14

apply the following pod specs

I have crafted an example app here - https://github.com/zhouhaibing089/goapp. It is a simple Go app that listens on the provided port and, when requested, echoes something like "Hello from port 1234".

$ git clone [email protected]:zhouhaibing089/goapp.git
$ cd goapp
$ kubectl create -f pods/

Note that I injected a 10% failure rate to simulate container restarts.
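
For reference, each container in the pod runs something along the lines of the sketch below. This is a minimal approximation rather than the actual goapp source: the --port flag matches what the containers are started with, but the exact failure-injection mechanism in the real repo may differ.

package main

import (
	"flag"
	"fmt"
	"log"
	"math/rand"
	"net/http"
	"time"
)

func main() {
	port := flag.Int("port", 1234, "port to listen on")
	flag.Parse()

	// Assumed failure injection: exit non-zero roughly 10% of the time at
	// startup so that kubelet keeps restarting individual containers.
	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
	if rng.Float64() < 0.1 {
		log.Fatalf("injected failure, refusing to serve on port %d", *port)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello from port %d\n", *port)
	})
	// When a duplicate container comes up while the old one is still
	// listening, this is where "bind: address already in use" comes from.
	log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", *port), nil))
}

With 30 such containers per pod and a 10% startup failure rate, individual containers keep restarting, which is exactly the churn needed later on.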

see the issues happening

Unless you are super unlucky (in other words, this is easy to reproduce), you should see something like:

$ kubectl get pod
NAME       READY   STATUS             RESTARTS   AGE
goapp      30/30   Running            2          6m42s
goapp-1    30/30   Running            6          6m42s
goapp-10   30/30   Running            3          6m42s
goapp-11   30/30   Running            3          6m42s
goapp-12   30/30   Running            4          6m42s
goapp-13   30/30   Running            3          6m42s
goapp-14   30/30   Running            4          6m42s
goapp-15   30/30   Running            4          6m42s
goapp-16   30/30   Running            0          6m42s
goapp-17   30/30   Running            4          6m42s
goapp-18   30/30   Running            5          6m42s
goapp-19   29/30   CrashLoopBackOff   13         6m42s
goapp-2    30/30   Running            2          6m42s
goapp-20   30/30   Running            2          6m42s
goapp-3    27/30   CrashLoopBackOff   28         6m42s
goapp-4    30/30   Running            5          6m42s
goapp-5    30/30   Running            3          6m42s
goapp-6    30/30   Running            2          6m42s
goapp-7    30/30   Running            4          6m42s
goapp-8    30/30   Running            7          6m42s
goapp-9    30/30   Running            2          6m42s

One or a few of them simply can't start. For example, if we look at goapp-19 above:

$ kubectl get pod goapp-19 -o yaml
  - containerID: docker://fe5747627a26541c1ed2d61ab3a166b41e6f42f9e25fcfb28ef2be615a95b0a4
    image: haibzhou/goapp:0.1
    imageID: docker-pullable://haibzhou/goapp@sha256:045cf3c5e47e76d5615e429b90dfe3d10b51479e13e3f112820175801e8fa17f
    lastState:
      terminated:
        containerID: docker://fe5747627a26541c1ed2d61ab3a166b41e6f42f9e25fcfb28ef2be615a95b0a4
        exitCode: 1
        finishedAt: "2023-04-29T22:59:17Z"
        reason: Error
        startedAt: "2023-04-29T22:59:17Z"
    name: p1263
    ready: false
    restartCount: 7
    started: false
    state:
      waiting:
        message: back-off 5m0s restarting failed container=p1263 pod=goapp-19_default(5b7c5db5-7a22-407f-bf5d-b1c597b9feab)
        reason: CrashLoopBackOff

And if we try to get the log for container p1263:

$ kubectl logs goapp-19 -c p1263
2023/04/29 22:59:17 listen tcp :1263: bind: address already in use

This is pretty interesting: how could the port already be in use? All containers in a pod share a network namespace, so this error means some other process in the pod is already bound to port 1263. If we log in to the host and run docker ps:

$ minikube ssh
$ docker ps -a | grep goapp-19 | grep p1263
fb75cd1c6cab        731e2216ed4e             "/usr/local/bin/goap…"   34 seconds ago       Exited (1) 34 seconds ago                           k8s_p1263_goapp-19_default_5b7c5db5-7a22-407f-bf5d-b1c597b9feab_4
d56c4e919aa4        731e2216ed4e             "/usr/local/bin/goap…"   About a minute ago   Up About a minute                                   k8s_p1263_goapp-19_default_5b7c5db5-7a22-407f-bf5d-b1c597b9feab_0

It appears that there is already a container running for goapp-19/p1263. So why did kubelet try to create another container?

add logs to kubelet

We can apply the following diff on top of kubelet (from v1.21.14) to inspect container status during each pod sync loop:

diff --git a/pkg/kubelet/kuberuntime/kuberuntime_manager.go b/pkg/kubelet/kuberuntime/kuberuntime_manager.go
index d83e75e300a..92f0c31c95f 100644
--- a/pkg/kubelet/kuberuntime/kuberuntime_manager.go
+++ b/pkg/kubelet/kuberuntime/kuberuntime_manager.go
@@ -611,6 +611,12 @@ func (m *kubeGenericRuntimeManager) computePodActions(pod *v1.Pod, podStatus *ku
 	// check the status of containers.
 	for idx, container := range pod.Spec.Containers {
 		containerStatus := podStatus.FindContainerStatusByName(container.Name)
+		if containerStatus != nil {
+			klog.InfoS("Container status", "containerName", container.Name, "pod", klog.KObj(pod),
+				"containerID", containerStatus.ID.ID, "state", containerStatus.State)
+		} else {
+			klog.InfoS("Container status is nil", "containerName", container.Name, "pod", klog.KObj(pod))
+		}
 
 		// Call internal container post-stop lifecycle hook for any non-running container so that any
 		// allocated cpus are released immediately. If the container is restarted, cpus will be re-allocated

Then compile a new kubelet binary and copy it into minikube:

$ CGO_ENABLED=0 make WHAT=cmd/kubelet
$ minikube cp _output/bin/kubelet /tmp/kubelet

With the new binary, we can restart kubelet inside minikube with:

$ minikube ssh
$ sudo systemctl stop kubelet
$ sudo cp /tmp/kubelet /var/lib/minikube/binaries/v1.21.14/kubelet
$ sudo systemctl start kubelet

see logs in action

Now try to reproduce it again (see above).

$ kubectl get pod goapp
NAME    READY   STATUS   RESTARTS   AGE
goapp   29/30   Error    9          118s
$ kubectl logs goapp -c p1251
2023/04/29 23:31:32 listen tcp :1251: bind: address already in use

And, same as the previous observation, we can see there is already a container running:

$ docker ps -a | grep goapp_default | grep p1251
0207c985fa7c        731e2216ed4e             "/usr/local/bin/goap…"   19 seconds ago      Exited (1) 18 seconds ago                          k8s_p1251_goapp_default_c19bc8c4-6d62-4146-864e-b240015a0b36_6
c50f265a3b7c        731e2216ed4e             "/usr/local/bin/goap…"   3 minutes ago       Up 3 minutes                                       k8s_p1251_goapp_default_c19bc8c4-6d62-4146-864e-b240015a0b36_0

logs that we have added

Here are the logs that we added earlier:

Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650407   99564 kuberuntime_manager.go:615] "Container status" containerName="p1234" pod="default/goapp" containerID="7ee98c3f7baa434f6f8f96e1c1f8fa201feb1cb2f898852ebebb7d988bafe465" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650468   99564 kuberuntime_manager.go:615] "Container status" containerName="p1235" pod="default/goapp" containerID="d409e600ab5a0bbda659942b9c5911d56fbc2494fd6efe91b7d808106e8a2e3b" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650497   99564 kuberuntime_manager.go:615] "Container status" containerName="p1236" pod="default/goapp" containerID="e6b4f64b56af835b6acc5bd1493000e1a3316255f8e9dad99f2fab3f982a8c57" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650526   99564 kuberuntime_manager.go:615] "Container status" containerName="p1237" pod="default/goapp" containerID="517c5b963b4b433de46c1fb3e63e92cadfa85b99fe029dd7ea9f7805433edbe1" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650555   99564 kuberuntime_manager.go:615] "Container status" containerName="p1238" pod="default/goapp" containerID="5d31c45c3a345b5b2055901597e17ef72dfac932e68d551eecbb04c952ad953b" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650590   99564 kuberuntime_manager.go:615] "Container status" containerName="p1239" pod="default/goapp" containerID="a0cc7c8245dbf7fa8418524abeb68615d64e420e8a2c3fc46d4d075df2d8277e" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650622   99564 kuberuntime_manager.go:615] "Container status" containerName="p1240" pod="default/goapp" containerID="9b38f46d4280cf12a81a745b56fbb037620b4f3539fb36f43d32c246cfc98619" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650659   99564 kuberuntime_manager.go:615] "Container status" containerName="p1241" pod="default/goapp" containerID="688176cedb4e7628c24d56f2e891a542e8f0bc4cc84c9033e83f570f78fbdd20" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650692   99564 kuberuntime_manager.go:615] "Container status" containerName="p1242" pod="default/goapp" containerID="8b64d43840ddc5245f13db0cd697dbce764bdd212f97e4e180825d89e25178fe" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650726   99564 kuberuntime_manager.go:615] "Container status" containerName="p1243" pod="default/goapp" containerID="aec95dc524f98bd45563d9ab65716de30b8025fc0af4bd2bae226769fc691e02" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650752   99564 kuberuntime_manager.go:615] "Container status" containerName="p1244" pod="default/goapp" containerID="e77ec5462ae2fd9f4ce47a45b1acaad146efb1ebf7a4b58744a4c05b23f7e9b2" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650780   99564 kuberuntime_manager.go:615] "Container status" containerName="p1245" pod="default/goapp" containerID="fbb333adf89ad7cdd7c393bdec6a1e66f342bc896e61da7447db8241a82ff25d" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650858   99564 kuberuntime_manager.go:615] "Container status" containerName="p1246" pod="default/goapp" containerID="e8e7db57a3b069dc3a8f338810325e91e068f9a7960731d058b0aaef41a9cc93" state=exited
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650895   99564 kuberuntime_manager.go:615] "Container status" containerName="p1247" pod="default/goapp" containerID="c8db7f2c86e422d1bd216a94510f271d2b9640f8712ce5441f6624441f5fec43" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650929   99564 kuberuntime_manager.go:615] "Container status" containerName="p1248" pod="default/goapp" containerID="d7393d7e9c0b00a96df9be96bf00482363a54ddcb303809934eb8406154ece08" state=exited
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650955   99564 kuberuntime_manager.go:615] "Container status" containerName="p1249" pod="default/goapp" containerID="bbc665fe6290c477e40f2f6c15491ed2a67403300b8597a1a4e1a460190e3ecb" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.650982   99564 kuberuntime_manager.go:615] "Container status" containerName="p1250" pod="default/goapp" containerID="bcc81e206f08de0b0f329dc20a4ca70a3580c99301b08666fa41c38de297830e" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651016   99564 kuberuntime_manager.go:618] "Container status is nil" containerName="p1251" pod="default/goapp"
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651027   99564 kuberuntime_manager.go:615] "Container status" containerName="p1252" pod="default/goapp" containerID="d7f2109ecc4e0bfe6699a49be37c7c0b826b6a01771b47ef47f8799d1e864f96" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651051   99564 kuberuntime_manager.go:615] "Container status" containerName="p1253" pod="default/goapp" containerID="6a4df7a645a0cac83c5b8161830139c797c8b0bb4d5d92b1d5adfe733eec7998" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651085   99564 kuberuntime_manager.go:615] "Container status" containerName="p1254" pod="default/goapp" containerID="64352683bd77ff70c6a314c8c6739920c83701e8d9f62dd253b76690b3154909" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651119   99564 kuberuntime_manager.go:615] "Container status" containerName="p1255" pod="default/goapp" containerID="b65c85557e15727528e49fb81df5b2b92f4d2be807b1b49b0e0078635ef2b34c" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651152   99564 kuberuntime_manager.go:615] "Container status" containerName="p1256" pod="default/goapp" containerID="cff43da0f68f65554df54a73d3a3d4797c6ef60fcd4bbe29ad33882a8dd6f7aa" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651185   99564 kuberuntime_manager.go:615] "Container status" containerName="p1257" pod="default/goapp" containerID="8a47c95ee2d01abb9e243ff6a5ffe127007640edbf7d042b4d330243c060745a" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651215   99564 kuberuntime_manager.go:615] "Container status" containerName="p1258" pod="default/goapp" containerID="af8f9d390deddc58559ddaeefd34f7329702fb464f25f6736ee9cec58e7c9a96" state=exited
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651237   99564 kuberuntime_manager.go:615] "Container status" containerName="p1259" pod="default/goapp" containerID="4559ce19cd154465f5ed8efddc78126d3362e4e7594bc0ee0a36712c0afe19d1" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651267   99564 kuberuntime_manager.go:615] "Container status" containerName="p1260" pod="default/goapp" containerID="77e30a5793ad4d31b796becbfde407655341f823c59dd1c87f74ddd95a7f4cdc" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651297   99564 kuberuntime_manager.go:615] "Container status" containerName="p1261" pod="default/goapp" containerID="af8244e4353f3fc66874172f9a9d49af37733835bc0b85edacd456e728dbb7ff" state=running
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651330   99564 kuberuntime_manager.go:615] "Container status" containerName="p1262" pod="default/goapp" containerID="dbef43104cd862e15d72ee2b5ec834bec869b3ee6983f2dfc2b949ea972d3351" state=exited
Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651356   99564 kuberuntime_manager.go:615] "Container status" containerName="p1263" pod="default/goapp" containerID="73b0f5218c36f22435957702a1f7202cc82775a2bd04ee0a210a565d76f2bb4b" state=running

In this loop, from container p1234 to p1263, only p1251 has a nil container status:

Apr 29 23:30:08 minikube kubelet[99564]: I0429 23:30:08.651016   99564 kuberuntime_manager.go:618] "Container status is nil" containerName="p1251" pod="default/goapp"

Now if we take a closer look at the running container for p1251 (see above):

$ docker inspect c50f265a3b7c
[
    {
        "Id": "c50f265a3b7cd6c7a5853a03b99471d8f36504026440fc05358a3b26482a493d",
        "Created": "2023-04-29T23:29:50.726134045Z",
        "Path": "/usr/local/bin/goapp",
        "Args": [
            "--port=1251"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 124520,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2023-04-29T23:29:51.432835852Z",
            "FinishedAt": "0001-01-01T00:00:00Z"

It was created at "2023-04-29T23:29:50.726134045Z" and started at "2023-04-29T23:29:51.432835852Z", yet in the sync loop above, kubelet did not see any status for it at all.
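
As an aside on what "Container status is nil" means mechanically: the log line fires when FindContainerStatusByName returns nil, and that helper (in pkg/kubelet/container) is just a linear scan over the ContainerStatuses slice kubelet holds for the pod. Paraphrased, with the field lists trimmed:

// Paraphrase of the lookup helper in pkg/kubelet/container; fields trimmed.
package kubecontainer

// Status is the kubelet-internal status of a single container.
type Status struct {
	Name string
	// ID, State, CreatedAt, ... (trimmed)
}

// PodStatus holds the statuses of all containers kubelet knows about for a pod.
type PodStatus struct {
	ContainerStatuses []*Status
	// other fields (trimmed)
}

// FindContainerStatusByName returns the first status whose Name matches, or
// nil when this snapshot has no entry at all for that container.
func (podStatus *PodStatus) FindContainerStatusByName(containerName string) *Status {
	for _, containerStatus := range podStatus.ContainerStatuses {
		if containerStatus.Name == containerName {
			return containerStatus
		}
	}
	return nil
}

So a nil result does not necessarily mean the runtime never reported the container; it only means that, at the moment of the scan, no entry with that name was found in the slice.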

zhouhaibing089 commented Apr 30, 2023

This is also reproducible with a newer docker version [docker 20.10.23 + k8s 1.21.14 + dockershim]:

$ docker version
Client:
 Version:           20.10.23
 API version:       1.41
 Go version:        go1.18.10
 Git commit:        7155243
 Built:             Thu Jan 19 17:30:35 2023
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.23
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.10
  Git commit:       6051f14
  Built:            Thu Jan 19 17:36:08 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.0
  GitCommit:        1fbd70374134b891f97ce19c70b6e50c7b9f4e0d
 runc:
  Version:          1.1.5
  GitCommit:        f19387a6bec4944c770f7668ab51c4348d9c2f38
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

(without the minikube iso change)

zhouhaibing089 commented Apr 30, 2023

This does not seem to be reproducible with [docker 19.03.11 + k8s 1.26.3 + cri-dockerd] (so far):

$ docker version
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:09:53 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:16:24 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.1.5
  GitCommit:        f19387a6bec4944c770f7668ab51c4348d9c2f38
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:12Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}

Also reproducible with [docker 19.03.11 + k8s 1.21.14 + cri-dockerd]

Non-reproducible with [docker 19.03.11 + k8s 1.22.17 + dockershim]

Non-reproducible with [docker 19.03.11 + k8s 1.20.15 + dockershim]

With a diff like the one below:

diff --git a/pkg/kubelet/kuberuntime/kuberuntime_manager.go b/pkg/kubelet/kuberuntime/kuberuntime_manager.go
index d83e75e300a..f5ef1aebf25 100644
--- a/pkg/kubelet/kuberuntime/kuberuntime_manager.go
+++ b/pkg/kubelet/kuberuntime/kuberuntime_manager.go
@@ -606,11 +606,21 @@ func (m *kubeGenericRuntimeManager) computePodActions(pod *v1.Pod, podStatus *ku
 		return changes
 	}
 
+	for _, cs := range podStatus.ContainerStatuses {
+		klog.InfoS("computePodActions", "podUID", pod.UID, "containerName", cs.Name, "state", cs.State, "id", cs.ID.ID)
+	}
+
 	// Number of running containers to keep.
 	keepCount := 0
 	// check the status of containers.
 	for idx, container := range pod.Spec.Containers {
 		containerStatus := podStatus.FindContainerStatusByName(container.Name)
+		if containerStatus != nil {
+			klog.InfoS("Container status", "containerName", container.Name, "pod", klog.KObj(pod),
+				"containerID", containerStatus.ID.ID, "state", containerStatus.State, "total", len(podStatus.ContainerStatuses))
+		} else {
+			klog.InfoS("Container status is nil", "containerName", container.Name, "pod", klog.KObj(pod), "total", len(podStatus.ContainerStatuses))
+		}
 
 		// Call internal container post-stop lifecycle hook for any non-running container so that any
 		// allocated cpus are released immediately. If the container is restarted, cpus will be re-allocated
@@ -1019,6 +1029,9 @@ func (m *kubeGenericRuntimeManager) GetPodStatus(uid kubetypes.UID, name, namesp
 		}
 		return nil, err
 	}
+	for _, cs := range containerStatuses {
+		klog.InfoS("GetPodStatus", "podUID", uid, "containerName", cs.Name, "state", cs.State, "id", cs.ID.ID)
+	}
 	m.logReduction.ClearID(podFullName)
 
 	return &kubecontainer.PodStatus{
diff --git a/pkg/kubelet/pleg/generic.go b/pkg/kubelet/pleg/generic.go
index 3b9ba505e7b..3f1712be056 100644
--- a/pkg/kubelet/pleg/generic.go
+++ b/pkg/kubelet/pleg/generic.go
@@ -386,7 +386,7 @@ func (g *GenericPLEG) updateCache(pod *kubecontainer.Pod, pid types.UID) error {
 	if klog.V(6).Enabled() {
 		klog.V(6).ErrorS(err, "PLEG: Write status", "pod", klog.KRef(pod.Namespace, pod.Name), "podStatus", status)
 	} else {
-		klog.V(4).ErrorS(err, "PLEG: Write status", "pod", klog.KRef(pod.Namespace, pod.Name))
+		klog.ErrorS(err, "PLEG: Write status", "pod", klog.KRef(pod.Namespace, pod.Name), "timestamp", timestamp.String())
 	}
 	if err == nil {
 		// Preserve the pod IP across cache updates if the new IP is empty.
diff --git a/pkg/kubelet/pod_workers.go b/pkg/kubelet/pod_workers.go
index fcf059f56e3..982f4a4d69a 100644
--- a/pkg/kubelet/pod_workers.go
+++ b/pkg/kubelet/pod_workers.go
@@ -22,7 +22,7 @@ import (
 	"sync"
 	"time"
 
-	"k8s.io/api/core/v1"
+	v1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/apimachinery/pkg/util/runtime"
 	"k8s.io/apimachinery/pkg/util/sets"
@@ -171,6 +171,7 @@ func (p *podWorkers) managePodLoop(podUpdates <-chan UpdatePodOptions) {
 				p.recorder.Eventf(update.Pod, v1.EventTypeWarning, events.FailedSync, "error determining status: %v", err)
 				return err
 			}
+			klog.InfoS("managePodLoop", "podUID", podUID, "lastSyncTime", lastSyncTime.String(), "containers", len(status.ContainerStatuses))
 			err = p.syncPodFn(syncPodOptions{
 				mirrorPod:      update.MirrorPod,
 				pod:            update.Pod,

In GetPodStatus:


Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086932  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1258" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086953  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1257" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086958  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1251" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086961  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1240" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086988  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1237" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086992  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1263" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087000  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1262" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087004  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1261" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087008  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1260" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087012  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1259" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087015  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1258" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087019  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1257" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087022  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1256" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087025  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1255" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087029  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1254" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087032  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1253" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087037  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1252" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087040  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1251" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087044  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1250" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087048  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1249" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087052  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1248" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087055  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1247" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087062  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1246" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087065  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1245" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087069  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1244" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087072  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1243" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087075  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1242" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087079  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1241" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087083  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1240" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087087  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1239" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087091  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1238" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087095  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1237" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087100  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1236" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087104  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1235" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087108  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1234" state=running

and in computePodActions:

Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087449  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1258" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087460  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1257" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087464  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1240" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087474  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1237" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087479  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1237" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087482  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1262" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087485  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1262" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087489  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1261" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087492  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1260" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087495  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1259" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087499  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1258" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087511  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1257" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087514  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1256" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087519  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1255" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087523  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1254" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087532  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1253" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087536  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1252" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087539  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1251" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087542  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1250" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087546  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1249" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087549  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1248" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087552  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1247" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087557  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1246" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087560  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1245" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087563  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1244" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087567  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1243" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087570  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1242" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087573  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1241" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087577  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1240" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087580  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1239" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087583  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1238" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087586  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1237" state=exited
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087593  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1236" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087596  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1235" state=running
Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087600  616628 kuberuntime_manager.go:610] "computePodActions" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1234" state=running

and the overall timeline:

Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.086992  616628 kuberuntime_manager.go:1033] "GetPodStatus" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 containerName="p1263" state=running

Apr 30 05:41:47 minikube kubelet[616628]: E0430 05:41:47.087121  616628 generic.go:389] "PLEG: Write status" pod="default/goapp-15" timestamp="2023-04-30 05:41:47.044355161 +0000 UTC m=+137.204175614"

Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.087165  616628 pod_workers.go:174] "managePodLoop" podUID=1b02f845-00a1-4b69-b7a9-06fcde5496e3 lastSyncTime="2023-04-30 05:41:46.372355805 +0000 UTC m=+136.532176258" containers=35

Apr 30 05:41:47 minikube kubelet[616628]: I0430 05:41:47.088584  616628 kuberuntime_manager.go:622] "Container status is nil" containerName="p1263" pod="default/goapp-15" total=35

We do see p1263 in GetPodStatus, but then we lose it in computePodActions. And if we compare the containerStatuses logged in GetPodStatus against those logged in computePodActions (same length, 90+% identical), they look almost exactly the same except for the first few entries. This means docker isn't the problem here.

Somewhere in between, podStatus was mutated.
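
One way a name can vanish from that slice mid-scan is if another goroutine sorts or otherwise rearranges podStatus.ContainerStatuses in place while FindContainerStatusByName is walking it: during a swap, an element briefly lives only in the sort's temporary and is not at any index of the slice. The toy program below is not kubelet code, only an illustration of that mechanism: one goroutine keeps re-sorting a shared slice of 30 statuses while another keeps scanning for p1251.

package main

import (
	"fmt"
	"sort"
	"sync/atomic"
)

// status stands in for kubecontainer.Status; only the name matters here.
type status struct{ name string }

// find mimics FindContainerStatusByName: a plain linear scan.
func find(statuses []status, name string) bool {
	for i := range statuses {
		if statuses[i].name == name {
			return true
		}
	}
	return false
}

func main() {
	// 30 containers, p1234..p1263, like the goapp pods above.
	statuses := make([]status, 30)
	for i := range statuses {
		statuses[i] = status{name: fmt.Sprintf("p%d", 1234+i)}
	}

	var scans, misses int64
	done := make(chan struct{})

	// Reader: keeps scanning for a name that is always logically present,
	// like computePodActions looking up each container by name.
	go func() {
		defer close(done)
		for i := 0; i < 1_000_000; i++ {
			atomic.AddInt64(&scans, 1)
			if !find(statuses, "p1251") {
				atomic.AddInt64(&misses, 1)
			}
		}
	}()

	// Writer: keeps re-sorting the same slice in place (alternating order to
	// force swaps), like convertToAPIContainerStatuses sorting
	// podStatus.ContainerStatuses. This is an intentional data race.
	asc := true
	for {
		select {
		case <-done:
			fmt.Printf("%d of %d scans missed p1251\n",
				atomic.LoadInt64(&misses), atomic.LoadInt64(&scans))
			return
		default:
		}
		if asc {
			sort.Slice(statuses, func(i, j int) bool { return statuses[i].name < statuses[j].name })
		} else {
			sort.Slice(statuses, func(i, j int) bool { return statuses[i].name > statuses[j].name })
		}
		asc = !asc
	}
}

On a multi-core machine some scans usually come back empty even though p1251 is always logically present, and go run -race flags the concurrent read/write immediately.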

This seems to be the solution:

diff --git a/pkg/kubelet/kubelet_pods.go b/pkg/kubelet/kubelet_pods.go
index b610b47ff7f..5aabf594500 100644
--- a/pkg/kubelet/kubelet_pods.go
+++ b/pkg/kubelet/kubelet_pods.go
@@ -1819,7 +1819,7 @@ func (kl *Kubelet) convertToAPIContainerStatuses(pod *v1.Pod, podStatus *kubecon
 	}
 
 	// Make the latest container status comes first.
-	sort.Sort(sort.Reverse(kubecontainer.SortContainerStatusesByCreationTime(podStatus.ContainerStatuses)))
+	// sort.Sort(sort.Reverse(kubecontainer.SortContainerStatusesByCreationTime(podStatus.ContainerStatuses)))
 	// Set container statuses according to the statuses seen in pod status
 	containerSeen := map[string]int{}
 	for _, cStatus := range podStatus.ContainerStatuses {
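
Commenting out the sort makes the symptom go away, which is consistent with the analysis above: convertToAPIContainerStatuses sorts podStatus.ContainerStatuses in place while the pod worker may be scanning the very same slice in computePodActions. Dropping the sort does change behavior, though, since it is there so that the newest status for a container comes first. An untested sketch of a less invasive fix would be to sort a private copy instead of the shared slice (the sortedStatuses name is made up here):

diff --git a/pkg/kubelet/kubelet_pods.go b/pkg/kubelet/kubelet_pods.go
--- a/pkg/kubelet/kubelet_pods.go
+++ b/pkg/kubelet/kubelet_pods.go
@@ -1819,7 +1819,11 @@ func (kl *Kubelet) convertToAPIContainerStatuses(pod *v1.Pod, podStatus *kubecon
 	}
 
 	// Make the latest container status comes first.
-	sort.Sort(sort.Reverse(kubecontainer.SortContainerStatusesByCreationTime(podStatus.ContainerStatuses)))
+	// Sort a copy: podStatus is shared with the pod worker, which may be
+	// scanning ContainerStatuses in computePodActions at the same time.
+	sortedStatuses := make([]*kubecontainer.Status, len(podStatus.ContainerStatuses))
+	copy(sortedStatuses, podStatus.ContainerStatuses)
+	sort.Sort(sort.Reverse(kubecontainer.SortContainerStatusesByCreationTime(sortedStatuses)))
 	// Set container statuses according to the statuses seen in pod status
 	containerSeen := map[string]int{}
-	for _, cStatus := range podStatus.ContainerStatuses {
+	for _, cStatus := range sortedStatuses {

Whether sorting a copy is the right long-term answer, or whether the runtime cache should hand out deep copies of PodStatus instead, is a separate question.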
