The Object Storage Service (OBS) is an object-based cloud storage service that provides secure, reliable, and cost-effective data storage. OBS offers large storage capacity for files of any type.
GaussDB(DWS), a data warehouse service, uses OBS as a platform for exchanging data between the cluster and external sources, meeting the requirements for secure, reliable, and cost-effective storage.
You can import data in TXT, CSV, ORC, or CarbonData format from OBS into GaussDB(DWS) for query, or read data on OBS remotely. You are advised to import frequently accessed hot data into GaussDB(DWS) to speed up queries, and to keep cold data on OBS for remote reads to reduce cost.
Currently, data can be imported using the following method:
During data migration and Extract-Transform-Load (ETL), massive volumes of data must be imported into GaussDB(DWS) in parallel, and the common import mode is time-consuming. When you import data in parallel using OBS foreign tables, the source data files are identified based on the import URL and data format specified in the tables, and data is imported in parallel through the DNs into GaussDB(DWS), which improves overall import performance.
Disadvantage:
You need to create OBS foreign tables and store the data to be imported on OBS.
Application Scenario:
A large volume of local data is imported concurrently on many DNs.
Generally, objects are managed as files. However, OBS has no file system concepts such as files and folders. To let users manage data easily, OBS allows them to simulate folders: a slash (/) can be added to the object name, for example, tpcds1000/stock.csv. In this name, tpcds1000 is regarded as the folder name and stock.csv as the file name. The key (object name) is still tpcds1000/stock.csv, and the content of the object is the content of the stock.csv file.
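For illustration, the following minimal Python sketch (the key name follows the example above) shows how a flat object key is interpreted as a simulated folder and file name:

```python
# OBS stores flat object keys; "folders" are only a naming convention.
# Splitting the key on its last slash recovers the simulated folder and file name.
key = "tpcds1000/stock.csv"  # object name (key) from the example above

folder, _, filename = key.rpartition("/")
print(folder)    # tpcds1000  (simulated folder)
print(filename)  # stock.csv  (simulated file)
```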
Figure 1 shows how data is imported from OBS. The CN plans the data import and delivers import tasks to the DNs file by file.
The delivery method is as follows:
In Figure 1, there are four DNs (DN0 to DN3) and OBS stores six files numbered from t1.data.0 to t1.data.5. The files are delivered as follows:
t1.data.0 -> DN0
t1.data.1 -> DN1
t1.data.2 -> DN2
t1.data.3 -> DN3
t1.data.4 -> DN0
t1.data.5 -> DN1
DN0 and DN1 each receive two files, and each of the remaining DNs receives one file.
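The delivery above is a simple round-robin assignment; it can be sketched in Python as follows (DN and file names as in the example):

```python
# Round-robin delivery of OBS files to DNs, matching the example above:
# file i goes to DN (i mod number_of_DNs).
dns = ["DN0", "DN1", "DN2", "DN3"]
files = [f"t1.data.{i}" for i in range(6)]

assignment = {dn: [] for dn in dns}
for i, f in enumerate(files):
    assignment[dns[i % len(dns)]].append(f)

print(assignment)
# DN0 and DN1 each receive two files; DN2 and DN3 receive one each.
```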
Import performance is best when each DN receives one OBS file and all the files are the same size. To improve the performance of loading data from OBS, split the data into multiple files of as even a size as possible before uploading them to OBS. The recommended number of files is an integer multiple of the DN count.
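Following that recommendation, a text data file can be pre-split into evenly sized parts before upload. A minimal sketch (the splitting is line-based; the row data is made up for illustration):

```python
# Split a list of data lines into n roughly equal chunks before uploading to OBS,
# where n is an integer multiple of the DN count (e.g. 4 DNs -> 4 or 8 files).
def split_lines(lines, n_parts):
    """Distribute lines into n_parts chunks whose sizes differ by at most one."""
    base, extra = divmod(len(lines), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)  # first `extra` chunks get one more line
        chunks.append(lines[start:start + size])
        start += size
    return chunks

lines = [f"row-{i}" for i in range(10)]   # hypothetical data rows
chunks = split_lines(lines, 4)            # 4 DNs -> 4 files
print([len(c) for c in chunks])           # [3, 3, 2, 2]
```

Each chunk would then be uploaded as its own OBS object so that every DN can load one file in parallel.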
| Procedure | Description | Subtask |
|---|---|---|
| Upload data to OBS. | Plan the storage path on the OBS server and upload the data files. For details, see Uploading Data to OBS. | - |
| Create an OBS foreign table. | Create a foreign table to identify the source data files on the OBS server. The OBS foreign table stores data source information such as the bucket name, object name, file format, storage location, encoding format, and delimiter. For details, see Creating an OBS Foreign Table. | - |
| Import data. | After creating the foreign table, run the INSERT statement to efficiently import data into the target tables. For details, see Importing Data. | - |
| Handle import errors. | If errors occur during data import, handle them based on the error information described in Handling Import Errors to ensure data integrity. | - |
| Improve query efficiency. | After the import, run the ANALYZE statement to generate table statistics, which are stored in the PG_STATISTIC system catalog. The plan generator uses these statistics to produce efficient query execution plans. | - |
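The SQL steps of the procedure can be sketched end to end. The statements below are illustrative only: the table names, column definitions, server name, and options are assumptions, and the exact foreign-table syntax varies by product version, so consult the CREATE FOREIGN TABLE reference before use.

```python
# Illustrative statement sequence for the OBS import procedure.
# All identifiers (sales, sales_ext, gsmpp_server, the OBS path) are
# hypothetical examples, not verbatim product syntax.
stmts = [
    # Step 2: create a foreign table identifying the source files on OBS.
    """CREATE FOREIGN TABLE sales_ext (item_id int, price numeric)
       SERVER gsmpp_server
       OPTIONS (location 'obs://bucket/tpcds1000/', format 'csv',
                encoding 'utf8', delimiter ',')
       READ ONLY;""",
    # Step 3: load the data in parallel into the target table.
    "INSERT INTO sales SELECT * FROM sales_ext;",
    # Step 5: refresh statistics so the planner can build efficient plans.
    "ANALYZE sales;",
]
for s in stmts:
    print(s.split()[0])  # statement keyword: CREATE, INSERT, ANALYZE
```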