Check whether the docker/containerd.sock file is directly mounted to the pods on a node. During an upgrade, Docker or containerd restarts and the sock file on the host is recreated, but the sock file already mounted to the pods does not change with it. As a result, your services cannot access Docker or containerd because of this sock file inconsistency. After the pods are rebuilt, the new sock file is mounted to them and the issue is resolved.
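For reference, a workload that fails this check typically mounts the sock file itself through a hostPath volume, as in the following sketch (the workload, container, and volume names here are hypothetical). When Docker or containerd restarts during the upgrade, the host recreates docker.sock, but the pod keeps the stale file mounted:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: test                  # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-1
          image: nginx
          volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock   # the sock file itself is mounted
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock          # hostPath points at the file, not its directory
            type: Socket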
Kubernetes cluster users typically access Docker or containerd from pods through a mounted sock file. Use either of the following methods to resolve this issue:
Method 1: Mount the directory that contains the sock file instead of the sock file itself. For example, if the sock file is stored in /var/run/docker.sock on the host, mount the /var/run directory into the pod, as shown in the following manifest. Because the sock path is resolved inside the mounted directory at access time, the pod automatically picks up the sock file that Docker or containerd recreates after restarting. (These modifications trigger pod rebuilding.)
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:                      # the selector above must match these labels
        app: nginx
    spec:
      containers:
        - name: container-1
          image: 'nginx'
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: sock-dir
              mountPath: /var/run  # mount the directory, not the sock file itself
      imagePullSecrets:
        - name: default-secret
      volumes:
        - name: sock-dir
          hostPath:
            path: /var/run         # host directory that contains docker.sock
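After you update the volume definition (for example, by running kubectl apply on the modified manifest or editing the workload with kubectl edit), the Deployment performs a rolling update, and the rebuilt pods mount the directory rather than the stale sock file.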
Method 2: Skip this check item and perform the check again. After the cluster is upgraded, delete the existing pods (for example, with kubectl delete pod, or by running kubectl rollout restart on the workload) to trigger pod rebuilding. The new sock file is then mounted to the rebuilt pods, and access to Docker or containerd is restored.