SFS is a network-attached storage (NAS) service that provides shared, scalable, and high-performance file storage. It is suitable for services that require large-capacity expansion and are cost-sensitive. This section describes how to use an existing SFS file system to statically create PVs and PVCs for data persistence and sharing in workloads.
| Parameter | Description |
| --- | --- |
| PVC Type | In this example, select SFS. |
| PVC Name | Enter the PVC name, which must be unique in the same namespace. |
| Creation Method | In this example, select Create new to create a PV and a PVC at the same time on the console. |
| PV (a) | Select an existing PV in the cluster. Create a PV in advance. For details, see "Creating a storage volume" in Related Operations. You do not need to specify this parameter in this example. |
| SFS (b) | Click Select SFS. On the displayed page, select the SFS file system that meets your requirements and click OK. NOTE: Currently, only SFS 3.0 Capacity-Oriented is supported. |
| PV Name (b) | Enter the PV name, which must be unique in the same cluster. |
| Access Mode (b) | SFS volumes support only ReadWriteMany, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see Volume Access Modes. |
| Reclaim Policy (b) | Select Delete or Retain to specify the reclaim policy of the underlying storage when the PVC is deleted. For details, see PV Reclaim Policy. NOTE: If multiple PVs use the same underlying storage volume, use Retain to avoid cascading deletion of underlying volumes. |

(a): This parameter is available only when Creation Method is set to Use existing.

(b): This parameter is available only when Creation Method is set to Create new.
You can choose Storage in the navigation pane and view the created PVC and PV on the PVCs and PVs tab pages, respectively.
| Parameter | Description |
| --- | --- |
| PVC | Select an existing SFS volume. |
| Mount Path | Enter a mount path, for example, /tmp. This is the container path to which the data volume will be mounted. Do not mount the volume to a system directory such as / or /var/run; otherwise, containers will malfunction. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, the files will be replaced, causing container startup failures or workload creation failures. NOTICE: If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. |
| Subpath | Enter the subpath of the storage volume to mount only that path into the container. In this way, different folders of the same storage volume can be used in a single pod. For example, tmp indicates that data in the container's mount path is stored in the tmp folder of the storage volume. If this parameter is left blank, the root path is used by default. |
| Permission | |
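To illustrate the Subpath setting and the least-privilege advice in the notice above, here is a minimal sketch of the container portion of a workload spec. The container and volume names are illustrative placeholders, not values from the console:

```yaml
# Sketch only: mounts the tmp subfolder of an SFS volume and starts the
# container as a non-root user so that high-risk host files cannot be damaged.
containers:
  - name: container-1            # Illustrative container name.
    image: nginx:latest
    securityContext:
      runAsUser: 1000            # Run with a least-privilege account.
      runAsNonRoot: true
    volumeMounts:
      - name: pvc-sfs-volume     # Illustrative volume name.
        mountPath: /data         # Avoid system directories such as / or /var/run.
        subPath: tmp             # Only the tmp folder of the volume appears at /data.
```

With subPath set this way, files written to /data in the container land in the tmp folder of the SFS volume rather than in its root path.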
In this example, the volume is mounted to the /data path of the container. The container data generated in this path is stored in the SFS file system.
After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: everest-csi-provisioner
    everest.io/reclaim-policy: retain-volume-only   # (Optional) The PV is deleted while the underlying volume is retained.
  name: pv-sfs                      # PV name.
spec:
  accessModes:
    - ReadWriteMany                 # Access mode. The value must be ReadWriteMany for SFS.
  capacity:
    storage: 1Gi                    # SFS volume capacity.
  csi:
    driver: disk.csi.everest.io     # Dependent storage driver for the mounting.
    fsType: nfs
    volumeHandle: <your_volume_id>  # SFS Capacity-Oriented volume ID.
    volumeAttributes:
      everest.io/share-export-location: <your_location>   # Shared path of the SFS volume.
      storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
  persistentVolumeReclaimPolicy: Retain   # Reclaim policy.
  storageClassName: csi-nas               # Storage class name. csi-nas indicates that SFS Capacity-Oriented is used.
  mountOptions: []                        # Mount options.
```
| Parameter | Mandatory | Description |
| --- | --- | --- |
| everest.io/reclaim-policy: retain-volume-only | No | Currently, only retain-volume-only is supported. This field is valid only when the Everest version is 1.2.9 or later and the reclaim policy is Delete. If the reclaim policy is Delete and this value is retain-volume-only, the associated PV is deleted when a PVC is deleted, while the underlying storage volume is retained. |
| volumeHandle | Yes | Volume ID of the SFS Capacity-Oriented file system. |
| everest.io/share-export-location | Yes | Shared path of the file system. |
| mountOptions | Yes | Mount options. If not specified, the following configurations are used by default: vers=3, timeo=600, nolock, and hard. For details, see Configuring SFS Volume Mount Options. |
| persistentVolumeReclaimPolicy | Yes | A reclaim policy is supported when the cluster version is 1.19.10 or later and the Everest version is 1.2.9 or later. The Delete and Retain reclaim policies are supported. For details, see PV Reclaim Policy. If multiple PVs use the same SFS volume, use Retain to prevent the underlying volume from being deleted with a PV. Delete: When a PVC is deleted, the PV and the underlying storage resources are deleted. Retain: When a PVC is deleted, the PV and underlying storage resources are not deleted; you must delete them manually. After the PVC is deleted, the PV is in the Released status and cannot be bound to a PVC again. |
| storage | Yes | Requested capacity in the PVC, in Gi. For SFS, this field is used only for verification (it cannot be empty or 0). Its value is fixed at 1, and any value you set does not take effect for SFS file systems. |
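As the retain-volume-only row above describes, a Delete reclaim policy combined with that annotation deletes only the PV and keeps the underlying file system. A minimal sketch of the relevant PV fields (the PV name is a placeholder; Everest 1.2.9 or later is required):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sfs                                      # Placeholder PV name.
  annotations:
    pv.kubernetes.io/provisioned-by: everest-csi-provisioner
    everest.io/reclaim-policy: retain-volume-only   # Keep the underlying volume.
spec:
  persistentVolumeReclaimPolicy: Delete             # Deleting the PVC deletes only the PV.
  # Other spec fields (capacity, csi, storageClassName, and so on) are the
  # same as in the full PV manifest shown earlier in this section.
```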
```bash
kubectl apply -f pv-sfs.yaml
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfs
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner
spec:
  accessModes:
    - ReadWriteMany             # The value must be ReadWriteMany for SFS.
  resources:
    requests:
      storage: 1Gi              # SFS volume capacity.
  storageClassName: csi-nas     # Storage class name, which must be the same as the PV's storage class.
  volumeName: pv-sfs            # PV name.
```
| Parameter | Mandatory | Description |
| --- | --- | --- |
| storage | Yes | Requested capacity in the PVC, in Gi. The value must be the same as the storage size of the existing PV. |
| volumeName | Yes | PV name, which must be the same as the PV name in step 1. |
```bash
kubectl apply -f pvc-sfs.yaml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          volumeMounts:
            - name: pvc-sfs-volume    # Volume name, which must be the same as the volume name in the volumes field.
              mountPath: /data        # Location where the storage volume is mounted.
      imagePullSecrets:
        - name: default-secret
      volumes:
        - name: pvc-sfs-volume        # Volume name, which can be customized.
          persistentVolumeClaim:
            claimName: pvc-sfs        # Name of the created PVC.
```
```bash
kubectl apply -f web-demo.yaml
```
After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
```bash
kubectl get pod | grep web-demo
```
Expected output:
```
web-demo-846b489584-mjhm9   1/1     Running   0     46s
web-demo-846b489584-wvv5s   1/1     Running   0     46s
```
```bash
kubectl exec web-demo-846b489584-mjhm9 -- ls /data
kubectl exec web-demo-846b489584-wvv5s -- ls /data
```
If neither command returns any output, no files exist in the /data path.
```bash
kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static
kubectl exec web-demo-846b489584-mjhm9 -- ls /data
```
Expected output:
```
static
```
```bash
kubectl delete pod web-demo-846b489584-mjhm9
```
Expected output:
```
pod "web-demo-846b489584-mjhm9" deleted
```
After the deletion, the Deployment controller automatically creates a replica.
```bash
kubectl get pod | grep web-demo
```
Expected output:
```
web-demo-846b489584-d4d4j   1/1     Running   0     110s
web-demo-846b489584-wvv5s   1/1     Running   0     7m50s
```
```bash
kubectl exec web-demo-846b489584-d4d4j -- ls /data
```
Expected output:
```
static
```
If the static file still exists, the data is stored persistently.
```bash
kubectl get pod | grep web-demo
```
Expected output:
```
web-demo-846b489584-d4d4j   1/1     Running   0     7m
web-demo-846b489584-wvv5s   1/1     Running   0     13m
```
```bash
kubectl exec web-demo-846b489584-d4d4j -- touch /data/share
kubectl exec web-demo-846b489584-d4d4j -- ls /data
```
Expected output:
```
share
static
```
```bash
kubectl exec web-demo-846b489584-wvv5s -- ls /data
```
Expected output:
```
share
static
```
If a file created in the /data path of one pod also appears in the /data path of the other pod, the two pods share the same volume.
| Operation | Description | Procedure |
| --- | --- | --- |
| Creating a storage volume (PV) | Create a PV on the CCE console. | |
| Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | |
| Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | |