On the details page of a workload, if an event indicates that the container fails to start, perform the following steps to locate the fault:
docker ps -a | grep $podName
docker logs $containerID
Rectify the workload fault based on the logs.
cat /var/log/messages | grep $containerID | grep oom
Check the logs to determine whether a system OOM has been triggered.
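The steps above can be sketched as a small pipeline. The helper name `extract_container_id` is hypothetical, and the sketch assumes a Docker runtime whose `docker ps -a` output lists the container ID in the first column:

```shell
# Hypothetical helper: read `docker ps -a` output on stdin and print the ID
# (first column) of the first line matching the given pod name.
extract_container_id() {
  grep "$1" | awk '{print $1}' | head -n1
}

# On the node, the fault-location steps then become:
#   containerID=$(docker ps -a | extract_container_id "$podName")
#   docker logs "$containerID"
#   cat /var/log/messages | grep "$containerID" | grep oom
```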
Determine the cause based on the event information, as listed in Table 1.
| Log or Event | Cause and Solution |
|---|---|
| The log contains exit(0). | No process exists in the container. Check whether the container is running properly. See Check Item 1: Whether There Are Processes that Keep Running in the Container (Exit Code: 0). |
| Event information: Liveness probe failed: Get http... The log contains exit(137). | The health check fails. See Check Item 2: Whether Health Check Fails to Be Performed (Exit Code: 137). |
| Event information: Thin Pool has 15991 free data blocks which are less than minimum required 16383 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior | The disk space is insufficient. Clear the disk space. See Check Item 3: Whether the Container Disk Space Is Insufficient. |
| The keyword OOM exists in the log. | The memory is insufficient. See Check Item 4: Whether the Upper Limit of Container Resources Has Been Reached and Check Item 5: Whether the Resource Limits Are Improperly Configured for the Container. |
| Address already in use | A conflict occurs between container ports in the pod. See Check Item 6: Whether the Container Ports in the Same Pod Conflict with Each Other. |
| Error: failed to start container "filebeat": Error response from daemon: OCI runtime create failed: container_linux.go:330: starting container process caused "process_linux.go:381: container init caused \"setenv: invalid argument\"": unknown | A secret is mounted to the workload, and the value of the secret is not encoded using Base64. See Check Item 7: Whether the Value of the Secret Mounted to the Workload Meets Requirements. |
In addition to the preceding causes, there are other possibilities:
Run the following command to check the container status:

docker ps -a | grep $podName

If no running process exists in the container, the status code Exited (0) is displayed.
The health check configured for a workload is performed on services periodically. If an exception occurs, the pod reports an event and the pod is restarted.
If the liveness-type (workload liveness probe) health check is configured for the workload and the number of health check failures exceeds the threshold, the containers in the pod will be restarted. On the workload details page, if Kubernetes events contain Liveness probe failed: Get http..., the health check fails.
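For reference, a liveness probe of this kind is configured in the pod spec. The fragment below is illustrative only: the pod name, image, health endpoint, and threshold values are assumptions, not taken from this document.

```yaml
# Illustrative only: an HTTP liveness probe. After failureThreshold
# consecutive failures, kubelet restarts the container (the SIGKILL
# termination surfaces as exit code 137).
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:alpine     # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz      # assumed health endpoint
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
```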
Solution
Click the workload name to go to the workload details page and click the Containers tab. Then select Health Check and check whether the health check policy is properly configured and whether services are running properly.
The following message concerns the thin pool disk, which is allocated from the Docker disk selected during node creation. Run the lvs command as user root to view the current disk usage.
Thin Pool has 15991 free data blocks which are less than minimum required 16383 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
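As a sketch, the lvs output can be checked for a nearly full thin pool. The helper name `thinpool_usage_high` and the 80% threshold are assumptions; the pipeline relies on `lvs --noheadings -o lv_name,data_percent` printing the LV name and its Data% usage:

```shell
# Hypothetical helper: read `lvs --noheadings -o lv_name,data_percent` output
# on stdin and warn when the thin pool's Data% exceeds a threshold.
thinpool_usage_high() {
  awk -v limit="$1" '$1 == "thinpool" && $2 + 0 > limit {
    print "thinpool usage " $2 "% exceeds " limit "%"
  }'
}

# On the node, as root:
#   lvs --noheadings -o lv_name,data_percent | thinpool_usage_high 80
```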
Solution
Solution 1: Clearing images
On nodes that use containerd as the container engine:

crictl images -v
crictl rmi <image_ID>

On nodes that use Docker as the container engine:

docker images
docker rmi <image_ID>
Do not delete system images such as the cce-pause image. Otherwise, pods may fail to be created.
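The cleanup can be sketched with a filter that excludes protected system images before deletion. The helper name `filter_removable_images` is hypothetical, and the sketch assumes a Docker node; extend the exclusion pattern for any other system images your cluster needs:

```shell
# Hypothetical filter: given "repository id" lines (as produced by
# `docker images --format '{{.Repository}} {{.ID}}'`), drop protected system
# images such as cce-pause and print only the IDs considered safe to remove.
filter_removable_images() {
  grep -v 'cce-pause' | awk '{print $2}'
}

# On a Docker node:
#   docker images --format '{{.Repository}} {{.ID}}' \
#     | filter_removable_images | xargs -r docker rmi
```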
Solution 2: Expanding the disk capacity
To expand a disk capacity, perform the following steps:
How the data disk is partitioned depends on the container storage Rootfs:
# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                     8:0    0   50G  0 disk
└─vda1                  8:1    0   50G  0 part /
vdb                     8:16   0  200G  0 disk
├─vgpaas-dockersys    253:0    0   90G  0 lvm  /var/lib/docker              # Space used by the container engine
└─vgpaas-kubernetes   253:1    0   10G  0 lvm  /mnt/paas/kubernetes/kubelet # Space used by Kubernetes
Run the following commands on the node to add the new disk capacity to the dockersys disk:
pvresize /dev/vdb
lvextend -l+100%FREE -n vgpaas/dockersys
resize2fs /dev/vgpaas/dockersys
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                         8:0    0   50G  0 disk
└─vda1                      8:1    0   50G  0 part /
vdb                         8:16   0  200G  0 disk
├─vgpaas-dockersys        253:0    0   18G  0 lvm  /var/lib/docker
├─vgpaas-thinpool_tmeta   253:1    0    3G  0 lvm
│ └─vgpaas-thinpool       253:3    0   67G  0 lvm                           # Space used by thinpool
├─vgpaas-thinpool_tdata   253:2    0   67G  0 lvm
│ └─vgpaas-thinpool       253:3    0   67G  0 lvm
└─vgpaas-kubernetes       253:4    0   10G  0 lvm  /mnt/paas/kubernetes/kubelet
Run the following commands on the node to add the new disk capacity to the thinpool disk:

pvresize /dev/vdb
lvextend -l+100%FREE -n vgpaas/thinpool
Run the following commands on the node to add the new disk capacity to the dockersys disk:

pvresize /dev/vdb
lvextend -l+100%FREE -n vgpaas/dockersys
resize2fs /dev/vgpaas/dockersys
If the upper limit of container resources has been reached, OOM will be displayed in the event details as well as in the log:
cat /var/log/messages | grep 96feb0a425d6 | grep oom
If the resources used by a workload exceed the configured upper limit, a system OOM is triggered and the container exits unexpectedly.
If the resource limits set for the container during workload creation are lower than required, the container fails to restart.
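Resource limits are set per container in the pod spec. The fragment below is illustrative only: the container name, image, and numbers are assumptions, and the point is that a memory limit below what the process actually needs triggers an OOM kill on every start.

```yaml
# Illustrative only: if limits.memory is below what the process needs,
# the kernel OOM killer terminates the container each time it starts.
spec:
  containers:
  - name: app               # hypothetical container
    image: my-app:latest    # hypothetical image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
```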
docker ps -a | grep $podName
docker logs $containerID
Rectify the workload fault based on the logs. If container ports in the same pod conflict (for example, the logs contain "Address already in use"), the container fails to start.
Solution
Re-create the workload and set a port number that is not used by any other pod.
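Containers in one pod share a network namespace, so each container must listen on a distinct port. The fragment below is illustrative only; the container names and images are assumptions.

```yaml
# Illustrative only: two containers in the same pod must not listen
# on the same port, because they share a network namespace.
spec:
  containers:
  - name: web                 # hypothetical
    image: nginx:alpine
    ports:
    - containerPort: 80
  - name: sidecar             # hypothetical
    image: my-sidecar:latest
    ports:
    - containerPort: 8080     # must differ from the other container's port
```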
Information similar to the following is displayed in the event:
Error: failed to start container "filebeat": Error response from daemon: OCI runtime create failed: container_linux.go:330: starting container process caused "process_linux.go:381: container init caused \"setenv: invalid argument\"": unknown
The root cause is that a secret is mounted to the workload, but the value of the secret is not encoded using Base64.
Solution:
Create a secret on the console. The value of the secret is automatically encoded using Base64.
If you use YAML to create a secret, manually encode its value using Base64:
# echo -n "Content to be encoded" | base64
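As a quick check, you can encode a sample value and decode it back. The value 3306 below is only an example:

```shell
# Example: Base64-encode a sample secret value and verify it decodes back.
encoded=$(echo -n "3306" | base64)
echo "$encoded"                     # MzMwNg==
echo -n "$encoded" | base64 -d      # 3306
```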
Solution
Click the workload name to go to the workload details page and click the Containers tab. Choose Lifecycle, click Startup Command, and check that the startup command is correct.
Check whether the workload startup command is correctly executed or whether the workload has a bug.
docker ps -a | grep $podName
docker logs $containerID
Note: In the preceding commands, $containerID indicates the ID of the container that has exited.
If the logs show that the container failed to start because of an incorrect startup command, correct the command. For other errors, rectify the bugs based on the logs.
Solution
Create a new workload and configure a correct startup command.
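In the pod spec, the console's startup command maps to the container's command and args fields. The fragment below is illustrative only; the container name, image, and command are assumptions.

```yaml
# Illustrative only: the startup command maps to `command` (entrypoint)
# and `args`. A typo here makes the container exit right after starting.
spec:
  containers:
  - name: app                                # hypothetical
    image: my-app:latest                     # hypothetical image
    command: ["/bin/sh", "-c"]
    args: ["exec /app/server --port=8080"]   # hypothetical command
```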