Dynamic mounting is available only when a StatefulSet is created. It is implemented through a volume claim template (the volumeClaimTemplates field) and relies on a storage class to dynamically provision PVs. In this mode, each pod of a multi-pod StatefulSet is bound to its own unique PVC and PV, so after a pod is rescheduled, its original data can still be mounted to it based on the PVC name. By contrast, in the common mounting mode for a Deployment, if ReadWriteMany is supported, all pods of the Deployment mount the same underlying storage.
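The mechanism can be condensed into a minimal volumeClaimTemplates fragment (a sketch only; the names are illustrative, and a complete manifest appears later in this section):

```yaml
# Sketch only: with volumeClaimTemplates, the StatefulSet controller creates
# one PVC (and, through the storage class, one PV) for each replica, so a
# rescheduled pod re-attaches to its own data by PVC name.
volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-local                        # illustrative template name
    spec:
      accessModes:
        - ReadWriteOnce                      # local PVs support only ReadWriteOnce
      storageClassName: csi-local-topology   # storage class that provisions the PVs
      resources:
        requests:
          storage: 10Gi
```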
| Parameter | Description |
| --- | --- |
| PVC Type | In this section, select Local PV. |
| PVC Name | Enter the name of the PVC. After a PVC is created, a suffix is automatically added based on the number of pods. The format is <Custom PVC name>-<Serial number>, for example, example-0. |
| Creation Method | You can only select Dynamically provision to create a PVC, PV, and underlying storage on the console in cascading mode. |
| Storage Classes | The storage class of local PVs is csi-local-topology. |
| Access Mode | Local PVs support only ReadWriteOnce, indicating that a storage volume can be mounted to one node in read/write mode. For details, see Volume Access Modes. |
| Storage Pool | View the imported storage pool. For details about how to import a new data volume to the storage pool, see Importing a PV to a Storage Pool. |
| Capacity (GiB) | Capacity of the requested storage volume. |
| Parameter | Description |
| --- | --- |
| Mount Path | Enter a mount path, for example, /tmp. This parameter indicates the container path to which the data volume will be mounted. Do not mount the volume to a system directory such as / or /var/run; otherwise, containers will malfunction. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, the files will be replaced, causing container startup or workload creation failures. NOTICE: If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. |
| Subpath | Enter the subpath of the storage volume to mount a path in the storage volume to the container. In this way, different folders of the same storage volume can be used in a single pod. For example, tmp indicates that data in the container's mount path is stored in the tmp folder of the storage volume. If this parameter is left blank, the root path is used by default. |
| Permission | |
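In a YAML manifest, the Subpath setting corresponds to the subPath field of a volume mount. A minimal, hedged sketch (the volume name and paths are illustrative):

```yaml
# Illustrative fragment: mount only the "tmp" folder of the volume at /data.
volumeMounts:
  - name: pvc-local    # name of the volume or volume claim template
    mountPath: /data   # path inside the container
    subPath: tmp       # folder inside the storage volume; omit to mount the root path
```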
In this example, the disk is mounted to the /data path of the container. The container data generated in this path is stored in the local PV.
After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-local
  namespace: default
spec:
  selector:
    matchLabels:
      app: statefulset-local
  template:
    metadata:
      labels:
        app: statefulset-local
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          volumeMounts:
            - name: pvc-local    # The value must be the same as that in the volumeClaimTemplates field.
              mountPath: /data   # Location where the storage volume is mounted.
      imagePullSecrets:
        - name: default-secret
  serviceName: statefulset-local # Headless Service name.
  replicas: 2
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-local
        namespace: default
      spec:
        accessModes:
          - ReadWriteOnce        # The value must be ReadWriteOnce for local PVs.
        resources:
          requests:
            storage: 10Gi        # Storage volume capacity.
        storageClassName: csi-local-topology   # The storage class of local PVs.
---
apiVersion: v1
kind: Service
metadata:
  name: statefulset-local        # Headless Service name.
  namespace: default
  labels:
    app: statefulset-local
spec:
  selector:
    app: statefulset-local
  clusterIP: None
  ports:
    - name: statefulset-local
      targetPort: 80
      nodePort: 0
      port: 80
      protocol: TCP
  type: ClusterIP
```
| Parameter | Mandatory | Description |
| --- | --- | --- |
| storage | Yes | Requested capacity in the PVC, in Gi. |
| storageClassName | Yes | The storage class of local PVs is csi-local-topology. |
```bash
kubectl apply -f statefulset-local.yaml
```
After the workload is created, you can try Verifying Data Persistence.
```bash
kubectl get pod | grep statefulset-local
```
Expected output:
```
statefulset-local-0   1/1     Running   0          45s
statefulset-local-1   1/1     Running   0          28s
```
```bash
kubectl exec statefulset-local-0 -- df | grep data
```
Expected output:
```
/dev/mapper/vg--everest--localvolume--persistent-pvc-local 10255636 36888 10202364 0% /data
```
```bash
kubectl exec statefulset-local-0 -- ls /data
```
Expected output:
```
lost+found
```
```bash
kubectl exec statefulset-local-0 -- touch /data/static
kubectl exec statefulset-local-0 -- ls /data
```
Expected output:
```
lost+found
static
```
```bash
kubectl delete pod statefulset-local-0
```
Expected output:
```
pod "statefulset-local-0" deleted
```
After the pod is rebuilt, check the mount directory again:
```bash
kubectl exec statefulset-local-0 -- ls /data
```
Expected output:
```
lost+found
static
```
If the static file still exists, the data in the local PV is stored persistently.
| Operation | Description | Procedure |
| --- | --- | --- |
| Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | |
| Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | |