Overview

Container Storage

CCE container storage is implemented based on the Kubernetes Container Storage Interface (CSI). CCE integrates multiple types of cloud storage to cover different application scenarios, and it is fully compatible with Kubernetes-native storage, such as emptyDir, hostPath, secret, and ConfigMap volumes.
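Because CCE supports the Kubernetes-native volume types as-is, a standard pod spec works unchanged. A minimal sketch (the pod and ConfigMap names are illustrative):

```yaml
# A pod combining two native volume types: emptyDir for scratch space
# and a ConfigMap ("demo-config" is assumed to exist) for configuration.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: scratch            # temporary scratch space
          mountPath: /tmp/scratch
        - name: config             # configuration files from a ConfigMap
          mountPath: /etc/app
          readOnly: true
  volumes:
    - name: scratch
      emptyDir: {}
    - name: config
      configMap:
        name: demo-config
```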

Figure 1 Container storage types

CCE allows workload pods to use multiple types of storage:

Cloud Storage Comparison

Item-by-item comparison of Elastic Volume Service (EVS), Scalable File Service (SFS), SFS Turbo, and Object Storage Service (OBS):

Definition

EVS: EVS offers scalable block storage for cloud servers. With high reliability, high performance, and rich specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications.

SFS: Expandable to petabytes, SFS provides fully hosted shared file storage. It is highly available and stable, suitable for data- and bandwidth-intensive applications in HPC, media processing, file sharing, content management, and web services.

SFS Turbo: Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage. It is highly available and stable, supporting small files and applications that require low latency and high IOPS. You can use SFS Turbo for high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications.

OBS: Object Storage Service (OBS) provides massive, secure, and cost-effective storage for data of any type and size. You can use it for enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios.

Data storage logic

EVS: Stores binary data and cannot store files directly. To store files, create a file system on the disk first.

SFS: Stores files and organizes data in a hierarchy of files and folders.

SFS Turbo: Stores files and organizes data in a hierarchy of files and folders.

OBS: Stores objects. When a file is uploaded, system metadata is generated automatically, and you can also define custom metadata.

Access mode

EVS: Accessible only after being attached to an ECS or BMS and initialized.

SFS: Mounted to ECSs or BMSs over network protocols. A network address must be specified or mapped to a local directory for access.

SFS Turbo: Supports the Network File System (NFS) protocol (NFSv3 only). You can seamlessly integrate existing applications and tools with SFS Turbo.

OBS: Accessible over the Internet or Direct Connect (DC). Specify the bucket address and use a transfer protocol such as HTTP or HTTPS.

Static storage volumes

EVS: Supported. For details, see Using an Existing EVS Disk Through a Static PV.

SFS: Supported. For details, see Using an Existing SFS File System Through a Static PV.

SFS Turbo: Supported. For details, see Using an Existing SFS Turbo File System Through a Static PV.

OBS: Supported. For details, see Using an Existing OBS Bucket Through a Static PV.
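Static provisioning means you create the PersistentVolume yourself and point it at a storage resource that already exists. A sketch for an existing EVS disk, assuming CCE's Everest CSI add-on (the driver name disk.csi.everest.io and all field values below are illustrative; follow the linked guide for the exact required fields):

```yaml
# Statically bind an existing EVS disk: a PV describing the disk,
# plus a PVC that binds to that PV by name.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-evs-existing
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce               # EVS is non-shared block storage
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.everest.io   # assumed Everest EVS driver name
    volumeHandle: <evs-disk-id>   # ID of the existing EVS disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-existing
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-evs-existing     # bind to the PV above
```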

Dynamic storage volumes

EVS: Supported. For details, see Using an EVS Disk Through a Dynamic PV.

SFS: Supported. For details, see Using an SFS File System Through a Dynamic PV.

SFS Turbo: Not supported.

OBS: Supported. For details, see Using an OBS Bucket Through a Dynamic PV.
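With dynamic provisioning, you only create a PersistentVolumeClaim that references a StorageClass, and the volume is created on demand. A sketch (the class name csi-disk is an assumption for EVS; list the classes actually available in your cluster with `kubectl get sc`):

```yaml
# Dynamically provision an EVS-backed volume through a StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-dynamic
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-disk   # assumed EVS StorageClass name
  resources:
    requests:
      storage: 10Gi
```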

Features

EVS: Non-shared storage. Each volume can be mounted to only one node.

SFS: Shared storage featuring high performance and throughput.

SFS Turbo: Shared storage featuring high performance and bandwidth.

OBS: Shared, user-mode file system.

Application scenarios

EVS: HPC, enterprise core cluster applications, enterprise application systems, and dev/test.

NOTE: HPC here refers to applications that require high-speed, high-IOPS storage, such as industrial design and energy exploration.

SFS: HPC, media processing, content management, web services, big data, and analytics applications.

NOTE: HPC here refers to applications that require high bandwidth and shared file storage, such as gene sequencing and image rendering.

SFS Turbo: High-traffic websites, log storage, DevOps, and enterprise OA.

OBS: Big data analytics, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud drives (web disks).

Capacity

EVS: TB-level

SFS: SFS 1.0: PB-level

SFS Turbo: General-purpose: TB-level

OBS: EB-level

Latency

EVS: 1–2 ms

SFS: SFS 1.0: 3–20 ms

SFS Turbo: General-purpose: 1–5 ms

OBS: 10 ms

Max. IOPS

EVS: 2,200–256,000, depending on the flavor

SFS: SFS 1.0: 2,000

SFS Turbo: General-purpose: up to 100,000

OBS: Tens of millions

Bandwidth

EVS: MB/s-level

SFS: SFS 1.0: GB/s-level

SFS Turbo: General-purpose: up to GB/s-level

OBS: TB/s-level

Local Storage Comparison

Item-by-item comparison of Local PVs, local ephemeral volumes (Local EVs), emptyDir, and hostPath:

Definition

Local PV: A node's local disks form a storage pool (volume group) through LVM. LVM divides the pool into logical volumes (LVs), which are mounted to pods.

Local EV: Kubernetes-native emptyDir whose storage medium is a local LV: the node's local disks form a storage pool (volume group) through LVM, and LVs are created as the emptyDir media and mounted to pods. LVs deliver better performance than the default emptyDir medium.

emptyDir: Kubernetes-native emptyDir. Its lifecycle is the same as that of the pod, and memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost.

hostPath: Mounts a file or directory of the host where the pod runs to a specified mount point in the pod.
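The emptyDir option of using memory as the storage medium can be sketched as follows (the pod name and size limit are illustrative):

```yaml
# An emptyDir volume backed by memory (tmpfs); as described above,
# the data is lost when the pod is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: memory-scratch-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir:
        medium: Memory   # back the volume with RAM instead of disk
        sizeLimit: 64Mi  # cap usage, since memory counts against the pod
```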

Features

Local PV: Low-latency, high-I/O, non-HA persistent volumes. These volumes are non-shared and bound to nodes through labels, so each volume can be mounted to only a single pod.

Local EV: Local temporary volume. The storage space comes from local LVs.

emptyDir: Local temporary volume. The storage space comes from the local kubelet root directory or memory.

hostPath: Mounts files or directories of the host file system. Host directories can be created automatically. Pods can be migrated (they are not bound to nodes).

Storage volume mounting

Local PV: Static storage volumes are not supported. Dynamic storage volumes are supported; for details, see Using a Local PV Through a Dynamic PV.

Local EV: For details, see Using a Local EV.

emptyDir: For details, see Using a Temporary Path.

hostPath: For details, see hostPath.

Application scenarios

Local PV: Applications with high I/O requirements and their own HA mechanisms, for example, MySQL deployed in HA mode.

Local EV:
  • Scratch space, such as for a disk-based merge sort
  • Checkpointing a long computation for recovery from crashes
  • Holding files that a content manager container fetches while a web server container serves the data

emptyDir:
  • Scratch space, such as for a disk-based merge sort
  • Checkpointing a long computation for recovery from crashes
  • Holding files that a content manager container fetches while a web server container serves the data

hostPath: Scenarios that require access to node files. For example, if Docker is used as the container engine, hostPath can mount the node's /var/lib/docker directory.

NOTICE:

Avoid hostPath volumes whenever possible, as they pose security risks. If hostPath volumes must be used, restrict them to the required files or directories and mount them in read-only mode.
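The restricted usage the notice recommends can be sketched as follows: mount only a specific host directory, read-only (the pod name and the /var/log path are illustrative):

```yaml
# A hostPath volume limited to one directory and mounted read-only,
# as the notice above advises.
apiVersion: v1
kind: Pod
metadata:
  name: log-reader
spec:
  containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: host-logs
          mountPath: /host-logs
          readOnly: true        # read-only, per the notice
  volumes:
    - name: host-logs
      hostPath:
        path: /var/log
        type: Directory         # fail if the path does not exist
```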
