diff --git a/docs/dws/dev/ALL_META.TXT.json b/docs/dws/dev/ALL_META.TXT.json new file mode 100644 index 00000000..b69d9060 --- /dev/null +++ b/docs/dws/dev/ALL_META.TXT.json @@ -0,0 +1,9452 @@ +[ + { + "uri":"dws_04_1000.html", + "product_code":"dws", + "code":"1", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Developer Guide", + "title":"Developer Guide", + "githuburl":"" + }, + { + "uri":"dws_04_0001.html", + "product_code":"dws", + "code":"2", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Welcome", + "title":"Welcome", + "githuburl":"" + }, + { + "uri":"dws_04_0002.html", + "product_code":"dws", + "code":"3", + "des":"This document is intended for database designers, application developers, and database administrators, and provides information required for designing, building, querying", + "doc_type":"devg", + "kw":"Target Readers,Welcome,Developer Guide", + "title":"Target Readers", + "githuburl":"" + }, + { + "uri":"dws_04_0004.html", + "product_code":"dws", + "code":"4", + "des":"If you are a new GaussDB(DWS) user, you are advised to read the following contents first:Sections describing the features, functions, and application scenarios of GaussDB", + "doc_type":"devg", + "kw":"Reading Guide,Welcome,Developer Guide", + "title":"Reading Guide", + "githuburl":"" + }, + { + "uri":"dws_04_0005.html", + "product_code":"dws", + "code":"5", + "des":"SQL examples in this manual are developed based on the TPC-DS model. Before you execute the examples, install the TPC-DS benchmark by following the instructions on the of", + "doc_type":"devg", + "kw":"Conventions,Welcome,Developer Guide", + "title":"Conventions", + "githuburl":"" + }, + { + "uri":"dws_04_0006.html", + "product_code":"dws", + "code":"6", + "des":"Complete the following tasks before you perform operations described in this document:Create a GaussDB(DWS) cluster.Install an SQL client.Connect the SQL client to the de", + "doc_type":"devg", + "kw":"Prerequisites,Welcome,Developer Guide", + "title":"Prerequisites", + "githuburl":"" + }, + { + "uri":"dws_04_0007.html", + "product_code":"dws", + "code":"7", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"System Overview", + "title":"System Overview", + "githuburl":"" + }, + { + "uri":"dws_04_0011.html", + "product_code":"dws", + "code":"8", + "des":"GaussDB(DWS) manages cluster transactions, the basis of HA and failovers. 
This ensures speedy fault recovery, guarantees the Atomicity, Consistency, Isolation, Durability", + "doc_type":"devg", + "kw":"Highly Reliable Transaction Processing,System Overview,Developer Guide", + "title":"Highly Reliable Transaction Processing", + "githuburl":"" + }, + { + "uri":"dws_04_0012.html", + "product_code":"dws", + "code":"9", + "des":"The following GaussDB(DWS) features help achieve high query performance.GaussDB(DWS) is an MPP system with the shared-nothing architecture. It consists of multiple indepe", + "doc_type":"devg", + "kw":"High Query Performance,System Overview,Developer Guide", + "title":"High Query Performance", + "githuburl":"" + }, + { + "uri":"dws_04_0015.html", + "product_code":"dws", + "code":"10", + "des":"A database manages data objects and is isolated from other databases. While creating a database, you can specify a tablespace. If you do not specify it, database objects ", + "doc_type":"devg", + "kw":"Related Concepts,System Overview,Developer Guide", + "title":"Related Concepts", + "githuburl":"" + }, + { + "uri":"dws_04_0985.html", + "product_code":"dws", + "code":"11", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Data Migration", + "title":"Data Migration", + "githuburl":"" + }, + { + "uri":"dws_04_0180.html", + "product_code":"dws", + "code":"12", + "des":"GaussDB(DWS) provides flexible methods for importing data. You can import data from different sources to GaussDB(DWS). The features of each method are listed in Table 1. ", + "doc_type":"devg", + "kw":"Data Migration to GaussDB(DWS),Data Migration,Developer Guide", + "title":"Data Migration to GaussDB(DWS)", + "githuburl":"" + }, + { + "uri":"dws_04_0179.html", + "product_code":"dws", + "code":"13", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Data Import", + "title":"Data Import", + "githuburl":"" + }, + { + "uri":"dws_04_0181.html", + "product_code":"dws", + "code":"14", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Importing Data from OBS in Parallel", + "title":"Importing Data from OBS in Parallel", + "githuburl":"" + }, + { + "uri":"dws_04_0182.html", + "product_code":"dws", + "code":"15", + "des":"The object storage service (OBS) is an object-based cloud storage service, featuring data storage of high security, proven reliability, and cost-effectiveness. OBS provid", + "doc_type":"devg", + "kw":"About Parallel Data Import from OBS,Importing Data from OBS in Parallel,Developer Guide", + "title":"About Parallel Data Import from OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0154.html", + "product_code":"dws", + "code":"16", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Importing CSV/TXT Data from the OBS", + "title":"Importing CSV/TXT Data from the OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0183.html", + "product_code":"dws", + "code":"17", + "des":"In this example, OBS data is imported to GaussDB(DWS) databases. When users who have registered with the cloud platform access OBS using clients, call APIs, or SDKs, acce", + "doc_type":"devg", + "kw":"Creating Access Keys (AK and SK),Importing CSV/TXT Data from the OBS,Developer Guide", + "title":"Creating Access Keys (AK and SK)", + "githuburl":"" + }, + { + "uri":"dws_04_0184.html", + "product_code":"dws", + "code":"18", + "des":"Before importing data from OBS to a cluster, prepare source data files and upload these files to OBS. If the data files have been stored on OBS, you only need to complete", + "doc_type":"devg", + "kw":"Uploading Data to OBS,Importing CSV/TXT Data from the OBS,Developer Guide", + "title":"Uploading Data to OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0185.html", + "product_code":"dws", + "code":"19", + "des":"format: format of the source data file in the foreign table. OBS foreign tables support CSV and TEXT formats. The default value is TEXT.header: Whether the data file cont", + "doc_type":"devg", + "kw":"Creating an OBS Foreign Table,Importing CSV/TXT Data from the OBS,Developer Guide", + "title":"Creating an OBS Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0186.html", + "product_code":"dws", + "code":"20", + "des":"Before importing data, you are advised to optimize your design and deployment based on the following excellent practices, helping maximize system resource utilization and", + "doc_type":"devg", + "kw":"Importing Data,Importing CSV/TXT Data from the OBS,Developer Guide", + "title":"Importing Data", + "githuburl":"" + }, + { + "uri":"dws_04_0187.html", + "product_code":"dws", + "code":"21", + "des":"Handle errors that occurred during data import.Errors that occur when data is imported are divided into data format errors and non-data format errors.Data format errorWhe", + "doc_type":"devg", + "kw":"Handling Import Errors,Importing CSV/TXT Data from the OBS,Developer Guide", + "title":"Handling Import Errors", + "githuburl":"" + }, + { + "uri":"dws_04_0155.html", + "product_code":"dws", + "code":"22", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Importing ORC/CarbonData Data from OBS", + "title":"Importing ORC/CarbonData Data from OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0243.html", + "product_code":"dws", + "code":"23", + "des":"Before you use the SQL on OBS feature to query OBS data:You have stored the ORC data on OBS.For example, the ORC table has been created when you use the Hive or Spark com", + "doc_type":"devg", + "kw":"Preparing Data on OBS,Importing ORC/CarbonData Data from OBS,Developer Guide", + "title":"Preparing Data on OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0244.html", + "product_code":"dws", + "code":"24", + "des":"This section describes how to create a foreign server that is used to define the information about OBS servers and is invoked by foreign tables. 
For details about the syn", + "doc_type":"devg", + "kw":"Creating a Foreign Server,Importing ORC/CarbonData Data from OBS,Developer Guide", + "title":"Creating a Foreign Server", + "githuburl":"" + }, + { + "uri":"dws_04_0245.html", + "product_code":"dws", + "code":"25", + "des":"After performing steps in Creating a Foreign Server, create an OBS foreign table in the GaussDB(DWS) database to access the data stored in OBS. An OBS foreign table is re", + "doc_type":"devg", + "kw":"Creating a Foreign Table,Importing ORC/CarbonData Data from OBS,Developer Guide", + "title":"Creating a Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0246.html", + "product_code":"dws", + "code":"26", + "des":"If the data amount is small, you can directly run SELECT to query the foreign table and view the data on OBS.If the query result is the same as the data in Original Data,", + "doc_type":"devg", + "kw":"Querying Data on OBS Through Foreign Tables,Importing ORC/CarbonData Data from OBS,Developer Guide", + "title":"Querying Data on OBS Through Foreign Tables", + "githuburl":"" + }, + { + "uri":"dws_04_0247.html", + "product_code":"dws", + "code":"27", + "des":"After completing operations in this tutorial, if you no longer need to use the resources created during the operations, you can delete them to avoid resource waste or quo", + "doc_type":"devg", + "kw":"Deleting Resources,Importing ORC/CarbonData Data from OBS,Developer Guide", + "title":"Deleting Resources", + "githuburl":"" + }, + { + "uri":"dws_04_0156.html", + "product_code":"dws", + "code":"28", + "des":"In the big data field, the mainstream file format is ORC, which is supported by GaussDB(DWS). You can use Hive to export data to an ORC file and use a read-only foreign t", + "doc_type":"devg", + "kw":"Supported Data Types,Importing ORC/CarbonData Data from OBS,Developer Guide", + "title":"Supported Data Types", + "githuburl":"" + }, + { + "uri":"dws_04_0189.html", + "product_code":"dws", + "code":"29", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Using GDS to Import Data from a Remote Server", + "title":"Using GDS to Import Data from a Remote Server", + "githuburl":"" + }, + { + "uri":"dws_04_0190.html", + "product_code":"dws", + "code":"30", + "des":"INSERT and COPY statements are serially executed to import a small volume of data. To import a large volume of data to GaussDB(DWS), you can use GDS to import data in par", + "doc_type":"devg", + "kw":"Importing Data In Parallel Using GDS,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Importing Data In Parallel Using GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0192.html", + "product_code":"dws", + "code":"31", + "des":"Generally, the data to be imported has been uploaded to the data server. In this case, you only need to check the communication between the data server and GaussDB(DWS), ", + "doc_type":"devg", + "kw":"Preparing Source Data,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Preparing Source Data", + "githuburl":"" + }, + { + "uri":"dws_04_0193.html", + "product_code":"dws", + "code":"32", + "des":"GaussDB(DWS) uses GDS to allocate the source data for parallel data import. 
Deploy GDS on the data server.If a large volume of data is stored on multiple data servers, in", + "doc_type":"devg", + "kw":"Installing, Configuring, and Starting GDS,Using GDS to Import Data from a Remote Server,Developer Gu", + "title":"Installing, Configuring, and Starting GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0194.html", + "product_code":"dws", + "code":"33", + "des":"The source data information and GDS access information are configured in a foreign table. Then, GaussDB(DWS) can import data from a data server to a database table based ", + "doc_type":"devg", + "kw":"Creating a GDS Foreign Table,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Creating a GDS Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0195.html", + "product_code":"dws", + "code":"34", + "des":"This section describes how to create tables in GaussDB(DWS) and import data to the tables.Before importing all the data from a table containing over 10 million records, y", + "doc_type":"devg", + "kw":"Importing Data,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Importing Data", + "githuburl":"" + }, + { + "uri":"dws_04_0196.html", + "product_code":"dws", + "code":"35", + "des":"Handle errors that occurred during data import.Errors that occur when data is imported are divided into data format errors and non-data format errors.Data format errorWhe", + "doc_type":"devg", + "kw":"Handling Import Errors,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Handling Import Errors", + "githuburl":"" + }, + { + "uri":"dws_04_0197.html", + "product_code":"dws", + "code":"36", + "des":"Stop GDS after data is imported successfully.If GDS is started using the gds command, perform the following operations to stop GDS:Query the GDS process ID:ps -ef|grep gd", + "doc_type":"devg", + "kw":"Stopping GDS,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Stopping GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0198.html", + "product_code":"dws", + "code":"37", + "des":"The data servers and the cluster reside on the same intranet. The IP addresses are 192.168.0.90 and 192.168.0.91. Source data files are in CSV format.Create the target ta", + "doc_type":"devg", + "kw":"Example of Importing Data Using GDS,Using GDS to Import Data from a Remote Server,Developer Guide", + "title":"Example of Importing Data Using GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0210.html", + "product_code":"dws", + "code":"38", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Importing Data from MRS to a Cluster", + "title":"Importing Data from MRS to a Cluster", + "githuburl":"" + }, + { + "uri":"dws_04_0066.html", + "product_code":"dws", + "code":"39", + "des":"MRS is a big data cluster running based on the open-source Hadoop ecosystem. 
It provides the industry's latest cutting-edge storage and analytical capabilities of massive", + "doc_type":"devg", + "kw":"Overview,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_04_0212.html", + "product_code":"dws", + "code":"40", + "des":"Before importing data from MRS to a GaussDB(DWS) cluster, you must have:Created an MRS cluster.Created the Hive/Spark ORC table in the MRS cluster and stored the table da", + "doc_type":"devg", + "kw":"Preparing Data in an MRS Cluster,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Preparing Data in an MRS Cluster", + "githuburl":"" + }, + { + "uri":"dws_04_0213.html", + "product_code":"dws", + "code":"41", + "des":"In the syntax CREATE FOREIGN TABLE (SQL on Hadoop or OBS) for creating a foreign table, you need to specify a foreign server associated with the MRS data source connectio", + "doc_type":"devg", + "kw":"Manually Creating a Foreign Server,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Manually Creating a Foreign Server", + "githuburl":"" + }, + { + "uri":"dws_04_0214.html", + "product_code":"dws", + "code":"42", + "des":"This section describes how to create a Hadoop foreign table in the GaussDB(DWS) database to access the Hadoop structured data stored on MRS HDFS. A Hadoop foreign table i", + "doc_type":"devg", + "kw":"Creating a Foreign Table,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Creating a Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0215.html", + "product_code":"dws", + "code":"43", + "des":"If the data amount is small, you can directly run SELECT to query the foreign table and view the data in the MRS data source.If the query result is the same as the data i", + "doc_type":"devg", + "kw":"Importing Data,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Importing Data", + "githuburl":"" + }, + { + "uri":"dws_04_0216.html", + "product_code":"dws", + "code":"44", + "des":"After completing operations in this tutorial, if you no longer need to use the resources created during the operations, you can delete them to avoid resource waste or quo", + "doc_type":"devg", + "kw":"Deleting Resources,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Deleting Resources", + "githuburl":"" + }, + { + "uri":"dws_04_0217.html", + "product_code":"dws", + "code":"45", + "des":"The following error information indicates that GaussDB(DWS) is to read an ORC data file but the actual file is in text format. 
Therefore, create a table of the Hive ORC t", + "doc_type":"devg", + "kw":"Error Handling,Importing Data from MRS to a Cluster,Developer Guide", + "title":"Error Handling", + "githuburl":"" + }, + { + "uri":"dws_04_0949.html", + "product_code":"dws", + "code":"46", + "des":"You can create foreign tables to perform associated queries and import data between clusters.Import data from one GaussDB(DWS) cluster to another.Perform associated queri", + "doc_type":"devg", + "kw":"Importing Data from One GaussDB(DWS) Cluster to Another,Data Import,Developer Guide", + "title":"Importing Data from One GaussDB(DWS) Cluster to Another", + "githuburl":"" + }, + { + "uri":"dws_04_0208.html", + "product_code":"dws", + "code":"47", + "des":"The gsql tool of GaussDB(DWS) provides the \\copy meta-command to import data.For details about the \\copy command, see Table 1.tableSpecifies the name (possibly schema-qua", + "doc_type":"devg", + "kw":"Using the gsql Meta-Command \\COPY to Import Data,Data Import,Developer Guide", + "title":"Using the gsql Meta-Command \\COPY to Import Data", + "githuburl":"" + }, + { + "uri":"dws_04_0203.html", + "product_code":"dws", + "code":"48", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Running the COPY FROM STDIN Statement to Import Data", + "title":"Running the COPY FROM STDIN Statement to Import Data", + "githuburl":"" + }, + { + "uri":"dws_04_0204.html", + "product_code":"dws", + "code":"49", + "des":"This method is applicable to low-concurrency scenarios where a small volume of data is to be imported.Use either of the following methods to write data to GaussDB(DWS) us", + "doc_type":"devg", + "kw":"Data Import Using COPY FROM STDIN,Running the COPY FROM STDIN Statement to Import Data,Developer Gui", + "title":"Data Import Using COPY FROM STDIN", + "githuburl":"" + }, + { + "uri":"dws_04_0205.html", + "product_code":"dws", + "code":"50", + "des":"CopyManager is an API interface class provided by the JDBC driver in GaussDB(DWS). 
It is used to import data to GaussDB(DWS) in batches.The CopyManager class is in the or", + "doc_type":"devg", + "kw":"Introduction to the CopyManager Class,Running the COPY FROM STDIN Statement to Import Data,Developer", + "title":"Introduction to the CopyManager Class", + "githuburl":"" + }, + { + "uri":"dws_04_0206.html", + "product_code":"dws", + "code":"51", + "des":"When the JAVA language is used for secondary development based on GaussDB(DWS), you can use the CopyManager interface to export data from the database to a local file or ", + "doc_type":"devg", + "kw":"Example: Importing and Exporting Data Through Local Files,Running the COPY FROM STDIN Statement to I", + "title":"Example: Importing and Exporting Data Through Local Files", + "githuburl":"" + }, + { + "uri":"dws_04_0207.html", + "product_code":"dws", + "code":"52", + "des":"The following example shows how to use CopyManager to migrate data from MySQL to GaussDB(DWS).", + "doc_type":"devg", + "kw":"Example: Migrating Data from MySQL to GaussDB(DWS),Running the COPY FROM STDIN Statement to Import D", + "title":"Example: Migrating Data from MySQL to GaussDB(DWS)", + "githuburl":"" + }, + { + "uri":"dws_04_0986.html", + "product_code":"dws", + "code":"53", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Full Database Migration", + "title":"Full Database Migration", + "githuburl":"" + }, + { + "uri":"dws_04_0219.html", + "product_code":"dws", + "code":"54", + "des":"You can use CDM to migrate data from other data sources (for example, MySQL) to the databases in clusters on GaussDB(DWS).For details about scenarios where CDM is used to", + "doc_type":"devg", + "kw":"Using CDM to Migrate Data to GaussDB(DWS),Full Database Migration,Developer Guide", + "title":"Using CDM to Migrate Data to GaussDB(DWS)", + "githuburl":"" + }, + { + "uri":"dws_01_0127.html", + "product_code":"dws", + "code":"55", + "des":"The DSC is a CLI tool running on the Linux or Windows OS. It is dedicated to providing customers with simple, fast, and reliable application SQL script migration services", + "doc_type":"devg", + "kw":"Using DSC to Migrate SQL Scripts,Full Database Migration,Developer Guide", + "title":"Using DSC to Migrate SQL Scripts", + "githuburl":"" + }, + { + "uri":"dws_04_0987.html", + "product_code":"dws", + "code":"56", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Metadata Migration", + "title":"Metadata Migration", + "githuburl":"" + }, + { + "uri":"dws_04_0269.html", + "product_code":"dws", + "code":"57", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Using gs_dump and gs_dumpall to Export Metadata", + "title":"Using gs_dump and gs_dumpall to Export Metadata", + "githuburl":"" + }, + { + "uri":"dws_04_0270.html", + "product_code":"dws", + "code":"58", + "des":"GaussDB(DWS) provides gs_dump and gs_dumpall to export required database objects and related information. To migrate database information, you can use a tool to import th", + "doc_type":"devg", + "kw":"Overview,Using gs_dump and gs_dumpall to Export Metadata,Developer Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_04_0271.html", + "product_code":"dws", + "code":"59", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Exporting a Single Database", + "title":"Exporting a Single Database", + "githuburl":"" + }, + { + "uri":"dws_04_0272.html", + "product_code":"dws", + "code":"60", + "des":"You can use gs_dump to export data and all object definitions of a database from GaussDB(DWS). You can specify the information to be exported as follows:Export full infor", + "doc_type":"devg", + "kw":"Exporting a Database,Exporting a Single Database,Developer Guide", + "title":"Exporting a Database", + "githuburl":"" + }, + { + "uri":"dws_04_0273.html", + "product_code":"dws", + "code":"61", + "des":"You can use gs_dump to export data and all object definitions of a schema from GaussDB(DWS). You can export one or more specified schemas as needed. You can specify the i", + "doc_type":"devg", + "kw":"Exporting a Schema,Exporting a Single Database,Developer Guide", + "title":"Exporting a Schema", + "githuburl":"" + }, + { + "uri":"dws_04_0274.html", + "product_code":"dws", + "code":"62", + "des":"You can use gs_dump to export data and all object definitions of a table-level object from GaussDB(DWS). Views, sequences, and foreign tables are special tables. You can ", + "doc_type":"devg", + "kw":"Exporting a Table,Exporting a Single Database,Developer Guide", + "title":"Exporting a Table", + "githuburl":"" + }, + { + "uri":"dws_04_0275.html", + "product_code":"dws", + "code":"63", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Exporting All Databases", + "title":"Exporting All Databases", + "githuburl":"" + }, + { + "uri":"dws_04_0276.html", + "product_code":"dws", + "code":"64", + "des":"You can use gs_dumpall to export full information of all databases in a cluster from GaussDB(DWS), including information about each database and global objects in the clu", + "doc_type":"devg", + "kw":"Exporting All Databases,Exporting All Databases,Developer Guide", + "title":"Exporting All Databases", + "githuburl":"" + }, + { + "uri":"dws_04_0277.html", + "product_code":"dws", + "code":"65", + "des":"You can use gs_dumpall to export global objects from GaussDB(DWS), including database users, user groups, tablespaces, and attributes (for example, global access permissi", + "doc_type":"devg", + "kw":"Exporting Global Objects,Exporting All Databases,Developer Guide", + "title":"Exporting Global Objects", + "githuburl":"" + }, + { + "uri":"dws_04_0278.html", + "product_code":"dws", + "code":"66", + "des":"gs_dump and gs_dumpall use -U to specify the user that performs the export. If the specified user does not have the required permission, data cannot be exported. In this ", + "doc_type":"devg", + "kw":"Data Export By a User Without Required Permissions,Using gs_dump and gs_dumpall to Export Metadata,D", + "title":"Data Export By a User Without Required Permissions", + "githuburl":"" + }, + { + "uri":"dws_04_0209.html", + "product_code":"dws", + "code":"67", + "des":"gs_restore is an import tool provided by GaussDB(DWS). You can use gs_restore to import the files exported by gs_dump to a database. gs_restore can import the files in .t", + "doc_type":"devg", + "kw":"Using gs_restore to Import Data,Metadata Migration,Developer Guide", + "title":"Using gs_restore to Import Data", + "githuburl":"" + }, + { + "uri":"dws_04_0249.html", + "product_code":"dws", + "code":"68", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Data Export", + "title":"Data Export", + "githuburl":"" + }, + { + "uri":"dws_04_0250.html", + "product_code":"dws", + "code":"69", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Exporting Data to OBS", + "title":"Exporting Data to OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0251.html", + "product_code":"dws", + "code":"70", + "des":"GaussDB(DWS) databases allow you to export data in parallel using OBS foreign tables, in which the export mode and the exported data format are specified. Data is exporte", + "doc_type":"devg", + "kw":"Parallel OBS Data Export,Exporting Data to OBS,Developer Guide", + "title":"Parallel OBS Data Export", + "githuburl":"" + }, + { + "uri":"dws_04_0157.html", + "product_code":"dws", + "code":"71", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Exporting CSV/TXT Data to OBS", + "title":"Exporting CSV/TXT Data to OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0252.html", + "product_code":"dws", + "code":"72", + "des":"Plan the storage location of exported data in OBS.You need to specify the OBS path (to directory) for storing data that you want to export. The exported data can be saved", + "doc_type":"devg", + "kw":"Planning Data Export,Exporting CSV/TXT Data to OBS,Developer Guide", + "title":"Planning Data Export", + "githuburl":"" + }, + { + "uri":"dws_04_0253.html", + "product_code":"dws", + "code":"73", + "des":"To obtain access keys, log in to the management console, click the username in the upper right corner, and select My Credential from the menu. Then choose Access Keys in ", + "doc_type":"devg", + "kw":"Creating an OBS Foreign Table,Exporting CSV/TXT Data to OBS,Developer Guide", + "title":"Creating an OBS Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0254.html", + "product_code":"dws", + "code":"74", + "des":"Example 1: Export data from table product_info_output to a data file through the product_info_output_ext foreign table.INSERT INTO product_info_output_ext SELECT * FROM p", + "doc_type":"devg", + "kw":"Exporting Data,Exporting CSV/TXT Data to OBS,Developer Guide", + "title":"Exporting Data", + "githuburl":"" + }, + { + "uri":"dws_04_0255.html", + "product_code":"dws", + "code":"75", + "des":"Create two foreign tables and use them to export tables from a database to two buckets in OBS.OBS and the database are in the same region. The example GaussDB(DWS) table ", + "doc_type":"devg", + "kw":"Examples,Exporting CSV/TXT Data to OBS,Developer Guide", + "title":"Examples", + "githuburl":"" + }, + { + "uri":"dws_04_0256.html", + "product_code":"dws", + "code":"76", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Exporting ORC Data to OBS", + "title":"Exporting ORC Data to OBS", + "githuburl":"" + }, + { + "uri":"dws_04_0258.html", + "product_code":"dws", + "code":"77", + "des":"For details about exporting data to OBS, see Planning Data Export.For details about the data types that can be exported to OBS, see Table 2.For details about HDFS data ex", + "doc_type":"devg", + "kw":"Planning Data Export,Exporting ORC Data to OBS,Developer Guide", + "title":"Planning Data Export", + "githuburl":"" + }, + { + "uri":"dws_04_0259.html", + "product_code":"dws", + "code":"78", + "des":"For details about creating a foreign server on OBS, see Creating a Foreign Server.For details about creating a foreign server in HDFS, see Manually Creating a Foreign Ser", + "doc_type":"devg", + "kw":"Creating a Foreign Server,Exporting ORC Data to OBS,Developer Guide", + "title":"Creating a Foreign Server", + "githuburl":"" + }, + { + "uri":"dws_04_0260.html", + "product_code":"dws", + "code":"79", + "des":"After operations in Creating a Foreign Server are complete, create an OBS/HDFS write-only foreign table in the GaussDB(DWS) database to access data stored in OBS/HDFS. 
Th", + "doc_type":"devg", + "kw":"Creating a Foreign Table,Exporting ORC Data to OBS,Developer Guide", + "title":"Creating a Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0158.html", + "product_code":"dws", + "code":"80", + "des":"Example 1: Export data from table product_info_output to a data file using the product_info_output_ext foreign table.INSERT INTO product_info_output_ext SELECT * FROM pro", + "doc_type":"devg", + "kw":"Exporting Data,Exporting ORC Data to OBS,Developer Guide", + "title":"Exporting Data", + "githuburl":"" + }, + { + "uri":"dws_04_0159.html", + "product_code":"dws", + "code":"81", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Exporting ORC Data to MRS", + "title":"Exporting ORC Data to MRS", + "githuburl":"" + }, + { + "uri":"dws_04_0160.html", + "product_code":"dws", + "code":"82", + "des":"GaussDB(DWS) allows you to export ORC data to MRS using an HDFS foreign table. You can specify the export mode and export data format in the foreign table. Data is export", + "doc_type":"devg", + "kw":"Overview,Exporting ORC Data to MRS,Developer Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_04_0161.html", + "product_code":"dws", + "code":"83", + "des":"For details about the data types that can be exported to MRS, see Table 2.For details about HDFS data export or MRS configuration, see the MapReduce Service User Guide.", + "doc_type":"devg", + "kw":"Planning Data Export,Exporting ORC Data to MRS,Developer Guide", + "title":"Planning Data Export", + "githuburl":"" + }, + { + "uri":"dws_04_0162.html", + "product_code":"dws", + "code":"84", + "des":"For details about creating a foreign server on HDFS, see Manually Creating a Foreign Server.", + "doc_type":"devg", + "kw":"Creating a Foreign Server,Exporting ORC Data to MRS,Developer Guide", + "title":"Creating a Foreign Server", + "githuburl":"" + }, + { + "uri":"dws_04_0163.html", + "product_code":"dws", + "code":"85", + "des":"After operations in Creating a Foreign Server are complete, create an HDFS write-only foreign table in the GaussDB(DWS) database to access data stored in HDFS. The foreig", + "doc_type":"devg", + "kw":"Creating a Foreign Table,Exporting ORC Data to MRS,Developer Guide", + "title":"Creating a Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0164.html", + "product_code":"dws", + "code":"86", + "des":"Example 1: Export data from table product_info_output to a data file using the product_info_output_ext foreign table.INSERT INTO product_info_output_ext SELECT * FROM pro", + "doc_type":"devg", + "kw":"Exporting Data,Exporting ORC Data to MRS,Developer Guide", + "title":"Exporting Data", + "githuburl":"" + }, + { + "uri":"dws_04_0261.html", + "product_code":"dws", + "code":"87", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Using GDS to Export Data to a Remote Server", + "title":"Using GDS to Export Data to a Remote Server", + "githuburl":"" + }, + { + "uri":"dws_04_0262.html", + "product_code":"dws", + "code":"88", + "des":"In high-concurrency scenarios, you can use GDS to export data from a database to a common file system.In the current GDS version, data can be exported from a database to ", + "doc_type":"devg", + "kw":"Exporting Data In Parallel Using GDS,Using GDS to Export Data to a Remote Server,Developer Guide", + "title":"Exporting Data In Parallel Using GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0263.html", + "product_code":"dws", + "code":"89", + "des":"Before you use GDS to export data from a cluster, prepare data to be exported and plan the export path.Remote modeIf the following information is displayed, the user and ", + "doc_type":"devg", + "kw":"Planning Data Export,Using GDS to Export Data to a Remote Server,Developer Guide", + "title":"Planning Data Export", + "githuburl":"" + }, + { + "uri":"dws_04_0264.html", + "product_code":"dws", + "code":"90", + "des":"GDS is a data service tool provided by GaussDB(DWS). Using the foreign table mechanism, this tool helps export data at a high speed.For details, see Installing, Configuri", + "doc_type":"devg", + "kw":"Installing, Configuring, and Starting GDS,Using GDS to Export Data to a Remote Server,Developer Guid", + "title":"Installing, Configuring, and Starting GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0265.html", + "product_code":"dws", + "code":"91", + "des":"Remote modeSet the location parameter to the URL of the directory that stores the data files.You do not need to specify any file.For example:The IP address of the GDS dat", + "doc_type":"devg", + "kw":"Creating a GDS Foreign Table,Using GDS to Export Data to a Remote Server,Developer Guide", + "title":"Creating a GDS Foreign Table", + "githuburl":"" + }, + { + "uri":"dws_04_0266.html", + "product_code":"dws", + "code":"92", + "des":"Ensure that the IP addresses and ports of servers where CNs and DNs are deployed can connect to those of the GDS server.Create batch processing scripts to export data in ", + "doc_type":"devg", + "kw":"Exporting Data,Using GDS to Export Data to a Remote Server,Developer Guide", + "title":"Exporting Data", + "githuburl":"" + }, + { + "uri":"dws_04_0267.html", + "product_code":"dws", + "code":"93", + "des":"GDS is a data service tool provided by GaussDB(DWS). Using the foreign table mechanism, this tool helps export data at a high speed.For details, see Stopping GDS.", + "doc_type":"devg", + "kw":"Stopping GDS,Using GDS to Export Data to a Remote Server,Developer Guide", + "title":"Stopping GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0268.html", + "product_code":"dws", + "code":"94", + "des":"The data server and the cluster reside on the same intranet, the IP address of the data server is 192.168.0.90, and data source files are in CSV format. In this scenario,", + "doc_type":"devg", + "kw":"Examples of Exporting Data Using GDS,Using GDS to Export Data to a Remote Server,Developer Guide", + "title":"Examples of Exporting Data Using GDS", + "githuburl":"" + }, + { + "uri":"dws_04_0988.html", + "product_code":"dws", + "code":"95", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Other Operations", + "title":"Other Operations", + "githuburl":"" + }, + { + "uri":"dws_04_0279.html", + "product_code":"dws", + "code":"96", + "des":"GDS supports concurrent import and export. The gds -t parameter is used to set the size of the thread pool and control the maximum number of concurrent working threads. B", + "doc_type":"devg", + "kw":"GDS Pipe FAQs,Other Operations,Developer Guide", + "title":"GDS Pipe FAQs", + "githuburl":"" + }, + { + "uri":"dws_04_0228.html", + "product_code":"dws", + "code":"97", + "des":"Data skew causes the query performance to deteriorate. Before importing all the data from a table consisting of over 10 million records, you are advised to import some of", + "doc_type":"devg", + "kw":"Checking for Data Skew,Other Operations,Developer Guide", + "title":"Checking for Data Skew", + "githuburl":"" + }, + { + "uri":"dws_04_0042.html", + "product_code":"dws", + "code":"98", + "des":"GaussDB(DWS) is compatible with Oracle, Teradata, and MySQL syntax, though the syntax behavior differs among them.", + "doc_type":"devg", + "kw":"Syntax Compatibility Differences Among Oracle, Teradata, and MySQL,Developer Guide,Developer Guide", + "title":"Syntax Compatibility Differences Among Oracle, Teradata, and MySQL", + "githuburl":"" + }, + { + "uri":"dws_04_0043.html", + "product_code":"dws", + "code":"99", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Database Security Management", + "title":"Database Security Management", + "githuburl":"" + }, + { + "uri":"dws_04_0053.html", + "product_code":"dws", + "code":"100", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Managing Users and Their Permissions", + "title":"Managing Users and Their Permissions", + "githuburl":"" + }, + { + "uri":"dws_04_0054.html", + "product_code":"dws", + "code":"101", + "des":"A user who creates an object is the owner of this object. By default, Separation of Permissions is disabled after cluster installation. A database system administrator ha", + "doc_type":"devg", + "kw":"Default Permission Mechanism,Managing Users and Their Permissions,Developer Guide", + "title":"Default Permission Mechanism", + "githuburl":"" + }, + { + "uri":"dws_04_0055.html", + "product_code":"dws", + "code":"102", + "des":"A system administrator is an account with the SYSADMIN permission. After a cluster is installed, a system administrator has the permissions of all object owners by defaul", + "doc_type":"devg", + "kw":"System Administrator,Managing Users and Their Permissions,Developer Guide", + "title":"System Administrator", + "githuburl":"" + }, + { + "uri":"dws_04_0056.html", + "product_code":"dws", + "code":"103", + "des":"Descriptions in Default Permission Mechanism and System Administrator are about the initial situation after a cluster is created. 
By default, a system administrator with ", + "doc_type":"devg", + "kw":"Separation of Permissions,Managing Users and Their Permissions,Developer Guide", + "title":"Separation of Permissions", + "githuburl":"" + }, + { + "uri":"dws_04_0057.html", + "product_code":"dws", + "code":"104", + "des":"You can use CREATE USER and ALTER USER to create and manage database users, respectively. The database cluster has one or more named databases. Users and roles are shared", + "doc_type":"devg", + "kw":"Users,Managing Users and Their Permissions,Developer Guide", + "title":"Users", + "githuburl":"" + }, + { + "uri":"dws_04_0058.html", + "product_code":"dws", + "code":"105", + "des":"A role is a set of permissions. After a role is granted to a user through GRANT, the user will have all the permissions of the role. It is recommended that roles be used ", + "doc_type":"devg", + "kw":"Roles,Managing Users and Their Permissions,Developer Guide", + "title":"Roles", + "githuburl":"" + }, + { + "uri":"dws_04_0059.html", + "product_code":"dws", + "code":"106", + "des":"Schemas function as models. Schema management allows multiple users to use the same database without mutual impacts, to organize database objects as manageable logical gr", + "doc_type":"devg", + "kw":"Schema,Managing Users and Their Permissions,Developer Guide", + "title":"Schema", + "githuburl":"" + }, + { + "uri":"dws_04_0060.html", + "product_code":"dws", + "code":"107", + "des":"To grant the permission for an object directly to a user, use GRANT.When permissions for a table or view in a schema are granted to a user or role, the USAGE permission o", + "doc_type":"devg", + "kw":"User Permission Setting,Managing Users and Their Permissions,Developer Guide", + "title":"User Permission Setting", + "githuburl":"" + }, + { + "uri":"dws_04_0063.html", + "product_code":"dws", + "code":"108", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Setting Security Policies", + "title":"Setting Security Policies", + "githuburl":"" + }, + { + "uri":"dws_04_0064.html", + "product_code":"dws", + "code":"109", + "des":"For data security purposes, GaussDB(DWS) provides a series of security measures, such as automatically locking and unlocking accounts, manually locking and unlocking abno", + "doc_type":"devg", + "kw":"Setting Account Security Policies,Setting Security Policies,Developer Guide", + "title":"Setting Account Security Policies", + "githuburl":"" + }, + { + "uri":"dws_04_0065.html", + "product_code":"dws", + "code":"110", + "des":"When creating a user, you need to specify the validity period of the user, including the start time and end time.To enable a user not within the validity period to use it", + "doc_type":"devg", + "kw":"Setting the Validity Period of an Account,Setting Security Policies,Developer Guide", + "title":"Setting the Validity Period of an Account", + "githuburl":"" + }, + { + "uri":"dws_04_0067.html", + "product_code":"dws", + "code":"111", + "des":"User passwords are stored in the system catalog pg_authid. 
To prevent password leakage, GaussDB(DWS) encrypts and stores the user passwords.Password complexityThe passwor", + "doc_type":"devg", + "kw":"Setting a User Password,Setting Security Policies,Developer Guide", + "title":"Setting a User Password", + "githuburl":"" + }, + { + "uri":"dws_04_0994.html", + "product_code":"dws", + "code":"112", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Sensitive Data Management", + "title":"Sensitive Data Management", + "githuburl":"" + }, + { + "uri":"dws_04_0061.html", + "product_code":"dws", + "code":"113", + "des":"The row-level access control feature enables database access control to be accurate to each row of data tables. In this way, the same SQL query may return different resul", + "doc_type":"devg", + "kw":"Row-Level Access Control,Sensitive Data Management,Developer Guide", + "title":"Row-Level Access Control", + "githuburl":"" + }, + { + "uri":"dws_04_0062.html", + "product_code":"dws", + "code":"114", + "des":"GaussDB(DWS) provides the column-level dynamic data masking (DDM) function. For sensitive data, such as the ID card number, mobile number, and bank card number, the DDM f", + "doc_type":"devg", + "kw":"Data Redaction,Sensitive Data Management,Developer Guide", + "title":"Data Redaction", + "githuburl":"" + }, + { + "uri":"dws_04_0995.html", + "product_code":"dws", + "code":"115", + "des":"GaussDB(DWS) supports encryption and decryption of strings using the following functions:gs_encrypt(encryptstr, keystr, cryptotype, cryptomode, hashmethod)Description: En", + "doc_type":"devg", + "kw":"Using Functions for Encryption and Decryption,Sensitive Data Management,Developer Guide", + "title":"Using Functions for Encryption and Decryption", + "githuburl":"" + }, + { + "uri":"dws_04_0074.html", + "product_code":"dws", + "code":"116", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Development and Design Proposal", + "title":"Development and Design Proposal", + "githuburl":"" + }, + { + "uri":"dws_04_0075.html", + "product_code":"dws", + "code":"117", + "des":"This chapter describes the design specifications for database modeling and application development. Modeling compliant with these specifications fits the distributed proc", + "doc_type":"devg", + "kw":"Development and Design Proposal,Development and Design Proposal,Developer Guide", + "title":"Development and Design Proposal", + "githuburl":"" + }, + { + "uri":"dws_04_0076.html", + "product_code":"dws", + "code":"118", + "des":"The name of a database object must contain 1 to 63 characters, start with a letter or underscore (_), and can contain letters, digits, underscores (_), dollar signs ($), ", + "doc_type":"devg", + "kw":"Database Object Naming Conventions,Development and Design Proposal,Developer Guide", + "title":"Database Object Naming Conventions", + "githuburl":"" + }, + { + "uri":"dws_04_0077.html", + "product_code":"dws", + "code":"119", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Database Object Design", + "title":"Database Object Design", + "githuburl":"" + }, + { + "uri":"dws_04_0078.html", + "product_code":"dws", + "code":"120", + "des":"In GaussDB(DWS), services can be isolated by databases and schemas. Databases share little resources and cannot directly access each other. Connections to and permissions", + "doc_type":"devg", + "kw":"Database and Schema Design,Database Object Design,Developer Guide", + "title":"Database and Schema Design", + "githuburl":"" + }, + { + "uri":"dws_04_0079.html", + "product_code":"dws", + "code":"121", + "des":"GaussDB(DWS) uses a distributed architecture. Data is distributed on DNs. Comply with the following principles to properly design a table:[Notice] Evenly distribute data ", + "doc_type":"devg", + "kw":"Table Design,Database Object Design,Developer Guide", + "title":"Table Design", + "githuburl":"" + }, + { + "uri":"dws_04_0080.html", + "product_code":"dws", + "code":"122", + "des":"Comply with the following rules to improve query efficiency when you design columns:[Proposal] Use the most efficient data types allowed.If all of the following number ty", + "doc_type":"devg", + "kw":"Column Design,Database Object Design,Developer Guide", + "title":"Column Design", + "githuburl":"" + }, + { + "uri":"dws_04_0081.html", + "product_code":"dws", + "code":"123", + "des":"[Proposal] If all the column values can be obtained from services, you are not advised to use the DEFAULT constraint, because doing so will generate unexpected results du", + "doc_type":"devg", + "kw":"Constraint Design,Database Object Design,Developer Guide", + "title":"Constraint Design", + "githuburl":"" + }, + { + "uri":"dws_04_0082.html", + "product_code":"dws", + "code":"124", + "des":"[Proposal] Do not nest views unless they have strong dependency on each other.[Proposal] Try to avoid sort operations in a view definition.[Proposal] Minimize joined colu", + "doc_type":"devg", + "kw":"View and Joined Table Design,Database Object Design,Developer Guide", + "title":"View and Joined Table Design", + "githuburl":"" + }, + { + "uri":"dws_04_0083.html", + "product_code":"dws", + "code":"125", + "des":"Currently, third-party tools are connected to GaussDB(DWS) through JDBC. This section describes the precautions for configuring the tools.[Notice] When a third-party tool ", + "doc_type":"devg", + "kw":"JDBC Configuration,Development and Design Proposal,Developer Guide", + "title":"JDBC Configuration", + "githuburl":"" + }, + { + "uri":"dws_04_0084.html", + "product_code":"dws", + "code":"126", + "des":"[Proposal] In GaussDB(DWS), you are advised to execute DDL operations, such as creating tables or making comments, separately from batch processing jobs to avoid performan", + "doc_type":"devg", + "kw":"SQL Compilation,Development and Design Proposal,Developer Guide", + "title":"SQL Compilation", + "githuburl":"" + }, + { + "uri":"dws_04_0971.html", + "product_code":"dws", + "code":"127", + "des":"[Notice] Java UDFs can perform some Java logic calculation. 
Do not encapsulate services in Java UDFs.[Notice] Do not connect to a database in any way (for example, by usi", + "doc_type":"devg", + "kw":"PL/Java Usage,Development and Design Proposal,Developer Guide", + "title":"PL/Java Usage", + "githuburl":"" + }, + { + "uri":"dws_04_0972.html", + "product_code":"dws", + "code":"128", + "des":"Development shall strictly comply with design documents.Program modules shall be highly cohesive and loosely coupled.Proper, comprehensive troubleshooting measures shall ", + "doc_type":"devg", + "kw":"PL/pgSQL Usage,Development and Design Proposal,Developer Guide", + "title":"PL/pgSQL Usage", + "githuburl":"" + }, + { + "uri":"dws_04_0085.html", + "product_code":"dws", + "code":"129", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Guide: JDBC- or ODBC-Based Development", + "title":"Guide: JDBC- or ODBC-Based Development", + "githuburl":"" + }, + { + "uri":"dws_04_0086.html", + "product_code":"dws", + "code":"130", + "des":"If the connection pool mechanism is used during application development, comply with the following specifications:If GUC parameters are set in the connection, before you ", + "doc_type":"devg", + "kw":"Development Specifications,Guide: JDBC- or ODBC-Based Development,Developer Guide", + "title":"Development Specifications", + "githuburl":"" + }, + { + "uri":"dws_04_0087.html", + "product_code":"dws", + "code":"131", + "des":"For details, see section \"Downloading the JDBC or ODBC Driver\" in the Data Warehouse Service User Guide.", + "doc_type":"devg", + "kw":"Downloading Drivers,Guide: JDBC- or ODBC-Based Development,Developer Guide", + "title":"Downloading Drivers", + "githuburl":"" + }, + { + "uri":"dws_04_0088.html", + "product_code":"dws", + "code":"132", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"JDBC-Based Development", + "title":"JDBC-Based Development", + "githuburl":"" + }, + { + "uri":"dws_04_0090.html", + "product_code":"dws", + "code":"133", + "des":"Obtain the package dws_8.1.x_jdbc_driver.zip from the management console. For details, see Downloading Drivers.Compressed in it is the JDBC driver JAR package:gsjdbc4.jar", + "doc_type":"devg", + "kw":"JDBC Package and Driver Class,JDBC-Based Development,Developer Guide", + "title":"JDBC Package and Driver Class", + "githuburl":"" + }, + { + "uri":"dws_04_0091.html", + "product_code":"dws", + "code":"134", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Development Process,JDBC-Based Development,Developer Guide", + "title":"Development Process", + "githuburl":"" + }, + { + "uri":"dws_04_0092.html", + "product_code":"dws", + "code":"135", + "des":"Load the database driver before creating a database connection.You can load the driver in the following ways:Implicitly loading the driver before creating a connection in", + "doc_type":"devg", + "kw":"Loading a Driver,JDBC-Based Development,Developer Guide", + "title":"Loading a Driver", + "githuburl":"" + }, + { + "uri":"dws_04_0093.html", + "product_code":"dws", + "code":"136", + "des":"After a database is connected, you can execute SQL statements in the database.If you use an open-source Java Database Connectivity (JDBC) driver, ensure that the database", + "doc_type":"devg", + "kw":"Connecting to a Database,JDBC-Based Development,Developer Guide", + "title":"Connecting to a Database", + "githuburl":"" + }, + { + "uri":"dws_04_0095.html", + "product_code":"dws", + "code":"137", + "des":"The application operates on data in the database by running SQL statements (parameter statements do not need to be transferred); to do so, perform the following steps:", + "doc_type":"devg", + "kw":"Executing SQL Statements,JDBC-Based Development,Developer Guide", + "title":"Executing SQL Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0096.html", + "product_code":"dws", + "code":"138", + "des":"Different types of result sets are applicable to different application scenarios. Applications select proper types of result sets based on requirements. Before executing ", + "doc_type":"devg", + "kw":"Processing Data in a Result Set,JDBC-Based Development,Developer Guide", + "title":"Processing Data in a Result Set", + "githuburl":"" + }, + { + "uri":"dws_04_0097.html", + "product_code":"dws", + "code":"139", + "des":"After you complete required data operations in the database, close the database connection.Call the close method to close the connection, for example, conn.close().", + "doc_type":"devg", + "kw":"Closing the Connection,JDBC-Based Development,Developer Guide", + "title":"Closing the Connection", + "githuburl":"" + }, + { + "uri":"dws_04_0098.html", + "product_code":"dws", + "code":"140", + "des":"Before completing the following example, you need to create a stored procedure.This example illustrates how to develop applications based on the GaussDB(DWS) JDBC interfa", + "doc_type":"devg", + "kw":"Example: Common Operations,JDBC-Based Development,Developer Guide", + "title":"Example: Common Operations", + "githuburl":"" + }, + { + "uri":"dws_04_0099.html", + "product_code":"dws", + "code":"141", + "des":"If the primary DN is faulty and cannot be restored within 40s, its standby is automatically promoted to primary to ensure the normal running of the cluster. 
Jobs running ", + "doc_type":"devg", + "kw":"Example: Retrying SQL Queries for Applications,JDBC-Based Development,Developer Guide", + "title":"Example: Retrying SQL Queries for Applications", + "githuburl":"" + }, + { + "uri":"dws_04_0100.html", + "product_code":"dws", + "code":"142", + "des":"When the Java language is used for secondary development based on GaussDB(DWS), you can use the CopyManager interface to export data from the database to a local file or ", + "doc_type":"devg", + "kw":"Example: Importing and Exporting Data Through Local Files,JDBC-Based Development,Developer Guide", + "title":"Example: Importing and Exporting Data Through Local Files", + "githuburl":"" + }, + { + "uri":"dws_04_0101.html", + "product_code":"dws", + "code":"143", + "des":"The following example shows how to use CopyManager to migrate data from MySQL to GaussDB(DWS).", + "doc_type":"devg", + "kw":"Example: Migrating Data from MySQL to GaussDB(DWS),JDBC-Based Development,Developer Guide", + "title":"Example: Migrating Data from MySQL to GaussDB(DWS)", + "githuburl":"" + }, + { + "uri":"dws_04_0102.html", + "product_code":"dws", + "code":"144", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"JDBC Interface Reference", + "title":"JDBC Interface Reference", + "githuburl":"" + }, + { + "uri":"dws_04_0103.html", + "product_code":"dws", + "code":"145", + "des":"This section describes java.sql.Connection, the interface for connecting to a database.The AutoCommit mode is used by default within the interface. If you disable it runn", + "doc_type":"devg", + "kw":"java.sql.Connection,JDBC Interface Reference,Developer Guide", + "title":"java.sql.Connection", + "githuburl":"" + }, + { + "uri":"dws_04_0104.html", + "product_code":"dws", + "code":"146", + "des":"This section describes java.sql.CallableStatement, the stored procedure execution interface.The batch operation of statements containing OUT parameters is not allowed.The ", + "doc_type":"devg", + "kw":"java.sql.CallableStatement,JDBC Interface Reference,Developer Guide", + "title":"java.sql.CallableStatement", + "githuburl":"" + }, + { + "uri":"dws_04_0105.html", + "product_code":"dws", + "code":"147", + "des":"This section describes java.sql.DatabaseMetaData, the interface for obtaining database metadata.", + "doc_type":"devg", + "kw":"java.sql.DatabaseMetaData,JDBC Interface Reference,Developer Guide", + "title":"java.sql.DatabaseMetaData", + "githuburl":"" + }, + { + "uri":"dws_04_0106.html", + "product_code":"dws", + "code":"148", + "des":"This section describes java.sql.Driver, the database driver interface.", + "doc_type":"devg", + "kw":"java.sql.Driver,JDBC Interface Reference,Developer Guide", + "title":"java.sql.Driver", + "githuburl":"" + }, + { + "uri":"dws_04_0107.html", + "product_code":"dws", + "code":"149", + "des":"This section describes java.sql.PreparedStatement, the interface for preparing statements.Execute addBatch() and execute() only after running clearBatch().Batch is not cl", + "doc_type":"devg", + "kw":"java.sql.PreparedStatement,JDBC Interface Reference,Developer Guide", + "title":"java.sql.PreparedStatement", + "githuburl":"" + }, + { + "uri":"dws_04_0108.html", + "product_code":"dws", + "code":"150", + "des":"This section describes java.sql.ResultSet, the interface for execution result 
sets.One Statement cannot have multiple open ResultSets.The cursor that is used for traversi", + "doc_type":"devg", + "kw":"java.sql.ResultSet,JDBC Interface Reference,Developer Guide", + "title":"java.sql.ResultSet", + "githuburl":"" + }, + { + "uri":"dws_04_0109.html", + "product_code":"dws", + "code":"151", + "des":"This section describes java.sql.ResultSetMetaData, which provides details about ResultSet object information.", + "doc_type":"devg", + "kw":"java.sql.ResultSetMetaData,JDBC Interface Reference,Developer Guide", + "title":"java.sql.ResultSetMetaData", + "githuburl":"" + }, + { + "uri":"dws_04_0110.html", + "product_code":"dws", + "code":"152", + "des":"This section describes java.sql.Statement, the interface for executing SQL statements.Using setFetchSize can reduce the memory occupied by result sets on the client. Resu", + "doc_type":"devg", + "kw":"java.sql.Statement,JDBC Interface Reference,Developer Guide", + "title":"java.sql.Statement", + "githuburl":"" + }, + { + "uri":"dws_04_0111.html", + "product_code":"dws", + "code":"153", + "des":"This section describes javax.sql.ConnectionPoolDataSource, the interface for data source connection pools.", + "doc_type":"devg", + "kw":"javax.sql.ConnectionPoolDataSource,JDBC Interface Reference,Developer Guide", + "title":"javax.sql.ConnectionPoolDataSource", + "githuburl":"" + }, + { + "uri":"dws_04_0112.html", + "product_code":"dws", + "code":"154", + "des":"This section describes javax.sql.DataSource, the interface for data sources.", + "doc_type":"devg", + "kw":"javax.sql.DataSource,JDBC Interface Reference,Developer Guide", + "title":"javax.sql.DataSource", + "githuburl":"" + }, + { + "uri":"dws_04_0113.html", + "product_code":"dws", + "code":"155", + "des":"This section describes javax.sql.PooledConnection, the connection interface created by a connection pool.", + "doc_type":"devg", + "kw":"javax.sql.PooledConnection,JDBC Interface Reference,Developer Guide", + "title":"javax.sql.PooledConnection", + "githuburl":"" + }, + { + "uri":"dws_04_0114.html", + "product_code":"dws", + "code":"156", + "des":"This section describes javax.naming.Context, the context interface for connection configuration.", + "doc_type":"devg", + "kw":"javax.naming.Context,JDBC Interface Reference,Developer Guide", + "title":"javax.naming.Context", + "githuburl":"" + }, + { + "uri":"dws_04_0115.html", + "product_code":"dws", + "code":"157", + "des":"This section describes javax.naming.spi.InitialContextFactory, the initial context factory interface.", + "doc_type":"devg", + "kw":"javax.naming.spi.InitialContextFactory,JDBC Interface Reference,Developer Guide", + "title":"javax.naming.spi.InitialContextFactory", + "githuburl":"" + }, + { + "uri":"dws_04_0116.html", + "product_code":"dws", + "code":"158", + "des":"CopyManager is an API interface class provided by the JDBC driver in GaussDB(DWS). It is used to import data to GaussDB(DWS) in batches.The CopyManager class is in the or", + "doc_type":"devg", + "kw":"CopyManager,JDBC Interface Reference,Developer Guide", + "title":"CopyManager", + "githuburl":"" + }, + { + "uri":"dws_04_0117.html", + "product_code":"dws", + "code":"159", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"ODBC-Based Development", + "title":"ODBC-Based Development", + "githuburl":"" + }, + { + "uri":"dws_04_0118.html", + "product_code":"dws", + "code":"160", + "des":"Obtain the dws_8.1.x_odbc_driver_for_xxx_xxx.zip package from the release package. In the Linux OS, header files (including sql.h and sqlext.h) and library (libodbc.so) a", + "doc_type":"devg", + "kw":"ODBC Package and Its Dependent Libraries and Header Files,ODBC-Based Development,Developer Guide", + "title":"ODBC Package and Its Dependent Libraries and Header Files", + "githuburl":"" + }, + { + "uri":"dws_04_0119.html", + "product_code":"dws", + "code":"161", + "des":"The ODBC DRIVER (psqlodbcw.so) provided by GaussDB(DWS) can be used after it has been configured in the data source. To configure data sources, users must configure the o", + "doc_type":"devg", + "kw":"Configuring a Data Source in the Linux OS,ODBC-Based Development,Developer Guide", + "title":"Configuring a Data Source in the Linux OS", + "githuburl":"" + }, + { + "uri":"dws_04_0120.html", + "product_code":"dws", + "code":"162", + "des":"Configure the ODBC data source using the ODBC data source manager preinstalled in the Windows OS.Decompress GaussDB-8.1.1-Windows-Odbc.tar.gz and install psqlodbc.msi (fo", + "doc_type":"devg", + "kw":"Configuring a Data Source in the Windows OS,ODBC-Based Development,Developer Guide", + "title":"Configuring a Data Source in the Windows OS", + "githuburl":"" + }, + { + "uri":"dws_04_0123.html", + "product_code":"dws", + "code":"163", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"ODBC Development Example,ODBC-Based Development,Developer Guide", + "title":"ODBC Development Example", + "githuburl":"" + }, + { + "uri":"dws_04_0124.html", + "product_code":"dws", + "code":"164", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"ODBC Interfaces", + "title":"ODBC Interfaces", + "githuburl":"" + }, + { + "uri":"dws_04_0125.html", + "product_code":"dws", + "code":"165", + "des":"In ODBC 3.x, SQLAllocEnv (an ODBC 2.x function) was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.", + "doc_type":"devg", + "kw":"SQLAllocEnv,ODBC Interfaces,Developer Guide", + "title":"SQLAllocEnv", + "githuburl":"" + }, + { + "uri":"dws_04_0126.html", + "product_code":"dws", + "code":"166", + "des":"In ODBC 3.x, SQLAllocConnect (an ODBC 2.x function) was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.", + "doc_type":"devg", + "kw":"SQLAllocConnect,ODBC Interfaces,Developer Guide", + "title":"SQLAllocConnect", + "githuburl":"" + }, + { + "uri":"dws_04_0127.html", + "product_code":"dws", + "code":"167", + "des":"SQLAllocHandle allocates environment, connection, or statement handles. 
This function is a generic function for allocating handles that replaces the deprecated ODBC 2.x f", + "doc_type":"devg", + "kw":"SQLAllocHandle,ODBC Interfaces,Developer Guide", + "title":"SQLAllocHandle", + "githuburl":"" + }, + { + "uri":"dws_04_0128.html", + "product_code":"dws", + "code":"168", + "des":"In ODBC 3.x, SQLAllocStmt was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.", + "doc_type":"devg", + "kw":"SQLAllocStmt,ODBC Interfaces,Developer Guide", + "title":"SQLAllocStmt", + "githuburl":"" + }, + { + "uri":"dws_04_0129.html", + "product_code":"dws", + "code":"169", + "des":"SQLBindCol is used to associate (bind) columns in a result set to an application data buffer.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates", + "doc_type":"devg", + "kw":"SQLBindCol,ODBC Interfaces,Developer Guide", + "title":"SQLBindCol", + "githuburl":"" + }, + { + "uri":"dws_04_0130.html", + "product_code":"dws", + "code":"170", + "des":"SQLBindParameter is used to associate (bind) parameter markers in an SQL statement to a buffer.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicat", + "doc_type":"devg", + "kw":"SQLBindParameter,ODBC Interfaces,Developer Guide", + "title":"SQLBindParameter", + "githuburl":"" + }, + { + "uri":"dws_04_0131.html", + "product_code":"dws", + "code":"171", + "des":"SQLColAttribute returns the descriptor information about a column in the result set.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some war", + "doc_type":"devg", + "kw":"SQLColAttribute,ODBC Interfaces,Developer Guide", + "title":"SQLColAttribute", + "githuburl":"" + }, + { + "uri":"dws_04_0132.html", + "product_code":"dws", + "code":"172", + "des":"SQLConnect establishes a connection between a driver and a data source. After the connection, the connection handle can be used to access all information about the data s", + "doc_type":"devg", + "kw":"SQLConnect,ODBC Interfaces,Developer Guide", + "title":"SQLConnect", + "githuburl":"" + }, + { + "uri":"dws_04_0133.html", + "product_code":"dws", + "code":"173", + "des":"SQLDisconnect closes the connection associated with the database connection handle.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warn", + "doc_type":"devg", + "kw":"SQLDisconnect,ODBC Interfaces,Developer Guide", + "title":"SQLDisconnect", + "githuburl":"" + }, + { + "uri":"dws_04_0134.html", + "product_code":"dws", + "code":"174", + "des":"SQLExecDirect executes a prepared SQL statement specified in this parameter. This is the fastest execution method for executing only one SQL statement at a time.SQL_SUCCE", + "doc_type":"devg", + "kw":"SQLExecDirect,ODBC Interfaces,Developer Guide", + "title":"SQLExecDirect", + "githuburl":"" + }, + { + "uri":"dws_04_0135.html", + "product_code":"dws", + "code":"175", + "des":"The SQLExecute function executes a prepared SQL statement using SQLPrepare. 
The statement is executed using the current value of any application variables that were bound", + "doc_type":"devg", + "kw":"SQLExecute,ODBC Interfaces,Developer Guide", + "title":"SQLExecute", + "githuburl":"" + }, + { + "uri":"dws_04_0136.html", + "product_code":"dws", + "code":"176", + "des":"SQLFetch advances the cursor to the next row of the result set and retrieves any bound columns.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicat", + "doc_type":"devg", + "kw":"SQLFetch,ODBC Interfaces,Developer Guide", + "title":"SQLFetch", + "githuburl":"" + }, + { + "uri":"dws_04_0137.html", + "product_code":"dws", + "code":"177", + "des":"In ODBC 3.x, SQLFreeStmt (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.", + "doc_type":"devg", + "kw":"SQLFreeStmt,ODBC Interfaces,Developer Guide", + "title":"SQLFreeStmt", + "githuburl":"" + }, + { + "uri":"dws_04_0138.html", + "product_code":"dws", + "code":"178", + "des":"In ODBC 3.x, SQLFreeConnect (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.", + "doc_type":"devg", + "kw":"SQLFreeConnect,ODBC Interfaces,Developer Guide", + "title":"SQLFreeConnect", + "githuburl":"" + }, + { + "uri":"dws_04_0139.html", + "product_code":"dws", + "code":"179", + "des":"SQLFreeHandle releases resources associated with a specific environment, connection, or statement handle. It replaces the ODBC 2.x functions: SQLFreeEnv, SQLFreeConnect, ", + "doc_type":"devg", + "kw":"SQLFreeHandle,ODBC Interfaces,Developer Guide", + "title":"SQLFreeHandle", + "githuburl":"" + }, + { + "uri":"dws_04_0140.html", + "product_code":"dws", + "code":"180", + "des":"In ODBC 3.x, SQLFreeEnv (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.", + "doc_type":"devg", + "kw":"SQLFreeEnv,ODBC Interfaces,Developer Guide", + "title":"SQLFreeEnv", + "githuburl":"" + }, + { + "uri":"dws_04_0141.html", + "product_code":"dws", + "code":"181", + "des":"SQLPrepare prepares an SQL statement to be executed.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.SQ", + "doc_type":"devg", + "kw":"SQLPrepare,ODBC Interfaces,Developer Guide", + "title":"SQLPrepare", + "githuburl":"" + }, + { + "uri":"dws_04_0142.html", + "product_code":"dws", + "code":"182", + "des":"SQLGetData retrieves data for a single column in the current row of the result set. 
It can be called multiple times to retrieve data of variable lengths.SQL_SUCCESS indic", + "doc_type":"devg", + "kw":"SQLGetData,ODBC Interfaces,Developer Guide", + "title":"SQLGetData", + "githuburl":"" + }, + { + "uri":"dws_04_0143.html", + "product_code":"dws", + "code":"183", + "des":"SQLGetDiagRec returns the current values of multiple fields of a diagnostic record that contains error, warning, and status information.SQL_SUCCESS indicates that the cal", + "doc_type":"devg", + "kw":"SQLGetDiagRec,ODBC Interfaces,Developer Guide", + "title":"SQLGetDiagRec", + "githuburl":"" + }, + { + "uri":"dws_04_0144.html", + "product_code":"dws", + "code":"184", + "des":"SQLSetConnectAttr sets connection attributes.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.SQL_ERROR", + "doc_type":"devg", + "kw":"SQLSetConnectAttr,ODBC Interfaces,Developer Guide", + "title":"SQLSetConnectAttr", + "githuburl":"" + }, + { + "uri":"dws_04_0145.html", + "product_code":"dws", + "code":"185", + "des":"SQLSetEnvAttr sets environment attributes.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.SQL_ERROR in", + "doc_type":"devg", + "kw":"SQLSetEnvAttr,ODBC Interfaces,Developer Guide", + "title":"SQLSetEnvAttr", + "githuburl":"" + }, + { + "uri":"dws_04_0146.html", + "product_code":"dws", + "code":"186", + "des":"SQLSetStmtAttr sets attributes related to a statement.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.", + "doc_type":"devg", + "kw":"SQLSetStmtAttr,ODBC Interfaces,Developer Guide", + "title":"SQLSetStmtAttr", + "githuburl":"" + }, + { + "uri":"dws_04_0301.html", + "product_code":"dws", + "code":"187", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"PostGIS Extension", + "title":"PostGIS Extension", + "githuburl":"" + }, + { + "uri":"dws_04_0302.html", + "product_code":"dws", + "code":"188", + "des":"The third-party software that the PostGIS Extension depends on needs to be installed separately. If you need to use PostGIS, submit a service ticket or contact technical ", + "doc_type":"devg", + "kw":"PostGIS,PostGIS Extension,Developer Guide", + "title":"PostGIS", + "githuburl":"" + }, + { + "uri":"dws_04_0304.html", + "product_code":"dws", + "code":"189", + "des":"The third-party software that the PostGIS Extension depends on needs to be installed separately. If you need to use PostGIS, submit a service ticket or contact technical ", + "doc_type":"devg", + "kw":"Using PostGIS,PostGIS Extension,Developer Guide", + "title":"Using PostGIS", + "githuburl":"" + }, + { + "uri":"dws_04_0305.html", + "product_code":"dws", + "code":"190", + "des":"In GaussDB(DWS), the PostGIS Extension supports the following data types: box2d, box3d, geometry_dump, geometry, geography, and raster. If PostGIS is used by a user other than the creator of t", + "doc_type":"devg", + "kw":"PostGIS Support and Constraints,PostGIS Extension,Developer Guide", + "title":"PostGIS Support and Constraints", + "githuburl":"" + }, + { + "uri":"dws_04_0306.html", + "product_code":"dws", + "code":"191", + "des":"This document contains open source software notice for the product. 
And this document is confidential information of copyright holder. Recipient shall protect it in due c", + "doc_type":"devg", + "kw":"OPEN SOURCE SOFTWARE NOTICE (For PostGIS),PostGIS Extension,Developer Guide", + "title":"OPEN SOURCE SOFTWARE NOTICE (For PostGIS)", + "githuburl":"" + }, + { + "uri":"dws_04_0393.html", + "product_code":"dws", + "code":"192", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Resource Monitoring", + "title":"Resource Monitoring", + "githuburl":"" + }, + { + "uri":"dws_04_0394.html", + "product_code":"dws", + "code":"193", + "des":"In the multi-tenant management framework, you can query the real-time or historical usage of all user resources (including memory, CPU cores, storage space, temporary spa", + "doc_type":"devg", + "kw":"User Resource Query,Resource Monitoring,Developer Guide", + "title":"User Resource Query", + "githuburl":"" + }, + { + "uri":"dws_04_0395.html", + "product_code":"dws", + "code":"194", + "des":"GaussDB(DWS) provides a view for monitoring the memory usage of the entire cluster.Query the pgxc_total_memory_detail view as a user with sysadmin permissions.SELECT * FR", + "doc_type":"devg", + "kw":"Monitoring Memory Resources,Resource Monitoring,Developer Guide", + "title":"Monitoring Memory Resources", + "githuburl":"" + }, + { + "uri":"dws_04_0396.html", + "product_code":"dws", + "code":"195", + "des":"GaussDB(DWS) provides system catalogs for monitoring the resource usage of CNs and DNs (including memory, CPU usage, disk I/O, process physical I/O, and process logical I", + "doc_type":"devg", + "kw":"Instance Resource Monitoring,Resource Monitoring,Developer Guide", + "title":"Instance Resource Monitoring", + "githuburl":"" + }, + { + "uri":"dws_04_0397.html", + "product_code":"dws", + "code":"196", + "des":"You can query real-time Top SQL in real-time resource monitoring views at different levels. The real-time resource monitoring view records the resource usage (including m", + "doc_type":"devg", + "kw":"Real-time TopSQL,Resource Monitoring,Developer Guide", + "title":"Real-time TopSQL", + "githuburl":"" + }, + { + "uri":"dws_04_0398.html", + "product_code":"dws", + "code":"197", + "des":"You can query historical Top SQL in historical resource monitoring views. The historical resource monitoring view records the resource usage (of memory, disk, CPU time, a", + "doc_type":"devg", + "kw":"Historical TopSQL,Resource Monitoring,Developer Guide", + "title":"Historical TopSQL", + "githuburl":"" + }, + { + "uri":"dws_04_0399.html", + "product_code":"dws", + "code":"198", + "des":"In this section, TPC-DS sample data is used as an example to describe how to query Real-time TopSQL and Historical TopSQL.To query for historical or archived resource mon", + "doc_type":"devg", + "kw":"TopSQL Query Example,Resource Monitoring,Developer Guide", + "title":"TopSQL Query Example", + "githuburl":"" + }, + { + "uri":"dws_04_0400.html", + "product_code":"dws", + "code":"199", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Query Performance Optimization", + "title":"Query Performance Optimization", + "githuburl":"" + }, + { + "uri":"dws_04_0402.html", + "product_code":"dws", + "code":"200", + "des":"The aim of SQL optimization is to maximize the utilization of resources, including CPU, memory, disk I/O, and network I/O. To maximize resource utilization is to run SQL ", + "doc_type":"devg", + "kw":"Overview of Query Performance Optimization,Query Performance Optimization,Developer Guide", + "title":"Overview of Query Performance Optimization", + "githuburl":"" + }, + { + "uri":"dws_04_0403.html", + "product_code":"dws", + "code":"201", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Query Analysis", + "title":"Query Analysis", + "githuburl":"" + }, + { + "uri":"dws_04_0409.html", + "product_code":"dws", + "code":"202", + "des":"The process from receiving SQL statements to the statement execution by the SQL engine is shown in Figure 1 and Table 1. The texts in red are steps where database adminis", + "doc_type":"devg", + "kw":"Query Execution Process,Query Analysis,Developer Guide", + "title":"Query Execution Process", + "githuburl":"" + }, + { + "uri":"dws_04_0410.html", + "product_code":"dws", + "code":"203", + "des":"The SQL execution plan is a node tree, which displays detailed procedure when GaussDB(DWS) runs an SQL statement. A database operator indicates one step.You can run the E", + "doc_type":"devg", + "kw":"Overview of the SQL Execution Plan,Query Analysis,Developer Guide", + "title":"Overview of the SQL Execution Plan", + "githuburl":"" + }, + { + "uri":"dws_04_0411.html", + "product_code":"dws", + "code":"204", + "des":"As described in Overview of the SQL Execution Plan, EXPLAIN displays the execution plan, but will not actually run SQL statements. EXPLAIN ANALYZE and EXPLAIN PERFORMANCE", + "doc_type":"devg", + "kw":"Deep Dive on the SQL Execution Plan,Query Analysis,Developer Guide", + "title":"Deep Dive on the SQL Execution Plan", + "githuburl":"" + }, + { + "uri":"dws_04_0412.html", + "product_code":"dws", + "code":"205", + "des":"This section describes how to query SQL statements whose execution takes a long time, leading to poor system performance.After the query, query statements are returned as", + "doc_type":"devg", + "kw":"Querying SQL Statements That Affect Performance Most,Query Analysis,Developer Guide", + "title":"Querying SQL Statements That Affect Performance Most", + "githuburl":"" + }, + { + "uri":"dws_04_0413.html", + "product_code":"dws", + "code":"206", + "des":"During database running, query statements are blocked in some service scenarios and run for an excessively long time. In this case, you can forcibly terminate the faulty ", + "doc_type":"devg", + "kw":"Checking Blocked Statements,Query Analysis,Developer Guide", + "title":"Checking Blocked Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0430.html", + "product_code":"dws", + "code":"207", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Query Improvement", + "title":"Query Improvement", + "githuburl":"" + }, + { + "uri":"dws_04_0435.html", + "product_code":"dws", + "code":"208", + "des":"You can analyze slow SQL statements to optimize them.", + "doc_type":"devg", + "kw":"Optimization Process,Query Improvement,Developer Guide", + "title":"Optimization Process", + "githuburl":"" + }, + { + "uri":"dws_04_0436.html", + "product_code":"dws", + "code":"209", + "des":"In a database, statistics indicate the source data of a plan generated by a planner. If statistics are not collected or are out of date, the execution plan may seri", + "doc_type":"devg", + "kw":"Updating Statistics,Query Improvement,Developer Guide", + "title":"Updating Statistics", + "githuburl":"" + }, + { + "uri":"dws_04_0437.html", + "product_code":"dws", + "code":"210", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Reviewing and Modifying a Table Definition", + "title":"Reviewing and Modifying a Table Definition", + "githuburl":"" + }, + { + "uri":"dws_04_0438.html", + "product_code":"dws", + "code":"211", + "des":"In a distributed framework, data is distributed on DNs. Data on one or more DNs is stored on a physical storage device. To properly define a table, you must:Evenly distri", + "doc_type":"devg", + "kw":"Reviewing and Modifying a Table Definition,Reviewing and Modifying a Table Definition,Developer Guid", + "title":"Reviewing and Modifying a Table Definition", + "githuburl":"" + }, + { + "uri":"dws_04_0439.html", + "product_code":"dws", + "code":"212", + "des":"During database design, some key factors about table design will greatly affect the subsequent query performance of the database. Table design affects data storage as wel", + "doc_type":"devg", + "kw":"Selecting a Storage Model,Reviewing and Modifying a Table Definition,Developer Guide", + "title":"Selecting a Storage Model", + "githuburl":"" + }, + { + "uri":"dws_04_0440.html", + "product_code":"dws", + "code":"213", + "des":"In replication mode, full data in a table is copied to each DN in the cluster. This mode is used for tables containing a small volume of data. Full data in a table stored", + "doc_type":"devg", + "kw":"Selecting a Distribution Mode,Reviewing and Modifying a Table Definition,Developer Guide", + "title":"Selecting a Distribution Mode", + "githuburl":"" + }, + { + "uri":"dws_04_0441.html", + "product_code":"dws", + "code":"214", + "des":"The distribution column in a hash table must meet the following requirements, which are ranked by priority in descending order:The value of the distribution column should", + "doc_type":"devg", + "kw":"Selecting a Distribution Column,Reviewing and Modifying a Table Definition,Developer Guide", + "title":"Selecting a Distribution Column", + "githuburl":"" + }, + { + "uri":"dws_04_0442.html", + "product_code":"dws", + "code":"215", + "des":"Partial Cluster Key is a column-based technology. It can minimize or maximize sparse indexes to quickly filter base tables. 
Partial cluster key can specify multiple col", + "doc_type":"devg", + "kw":"Using Partial Clustering,Reviewing and Modifying a Table Definition,Developer Guide", + "title":"Using Partial Clustering", + "githuburl":"" + }, + { + "uri":"dws_04_0443.html", + "product_code":"dws", + "code":"216", + "des":"Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The table based on the logic is called a partit", + "doc_type":"devg", + "kw":"Using Partitioned Tables,Reviewing and Modifying a Table Definition,Developer Guide", + "title":"Using Partitioned Tables", + "githuburl":"" + }, + { + "uri":"dws_04_0444.html", + "product_code":"dws", + "code":"217", + "des":"Use the following principles to obtain efficient data types:Using the data type that can be efficiently executedGenerally, calculation of integers (including common compa", + "doc_type":"devg", + "kw":"Selecting a Data Type,Reviewing and Modifying a Table Definition,Developer Guide", + "title":"Selecting a Data Type", + "githuburl":"" + }, + { + "uri":"dws_04_0445.html", + "product_code":"dws", + "code":"218", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Typical SQL Optimization Methods", + "title":"Typical SQL Optimization Methods", + "githuburl":"" + }, + { + "uri":"dws_04_0446.html", + "product_code":"dws", + "code":"219", + "des":"Performance issues may occur when you query data or run the INSERT, DELETE, UPDATE, or CREATE TABLE AS statement. You can query the warning column in the GS_WLM_SESSION_S", + "doc_type":"devg", + "kw":"SQL Self-Diagnosis,Typical SQL Optimization Methods,Developer Guide", + "title":"SQL Self-Diagnosis", + "githuburl":"" + }, + { + "uri":"dws_04_0447.html", + "product_code":"dws", + "code":"220", + "des":"Currently, the GaussDB(DWS) optimizer can use three methods to develop statement execution policies in the distributed framework: generating a statement pushdown plan, a ", + "doc_type":"devg", + "kw":"Optimizing Statement Pushdown,Typical SQL Optimization Methods,Developer Guide", + "title":"Optimizing Statement Pushdown", + "githuburl":"" + }, + { + "uri":"dws_04_0448.html", + "product_code":"dws", + "code":"221", + "des":"When an application runs an SQL statement to operate the database, a large number of subqueries are used because they are clearer than table joins. Especially in complic", + "doc_type":"devg", + "kw":"Optimizing Subqueries,Typical SQL Optimization Methods,Developer Guide", + "title":"Optimizing Subqueries", + "githuburl":"" + }, + { + "uri":"dws_04_0449.html", + "product_code":"dws", + "code":"222", + "des":"GaussDB(DWS) generates optimal execution plans based on the cost estimation. Optimizers need to estimate the number of data rows and the cost based on statistics collecte", + "doc_type":"devg", + "kw":"Optimizing Statistics,Typical SQL Optimization Methods,Developer Guide", + "title":"Optimizing Statistics", + "githuburl":"" + }, + { + "uri":"dws_04_0450.html", + "product_code":"dws", + "code":"223", + "des":"A query statement needs to go through multiple operator procedures to generate the final result. 
Sometimes, the overall query performance deteriorates due to long executi", + "doc_type":"devg", + "kw":"Optimizing Operators,Typical SQL Optimization Methods,Developer Guide", + "title":"Optimizing Operators", + "githuburl":"" + }, + { + "uri":"dws_04_0451.html", + "product_code":"dws", + "code":"224", + "des":"Data skew breaks the balance among nodes in the distributed MPP architecture. If the amount of data stored or processed by a node is much greater than that by other nodes", + "doc_type":"devg", + "kw":"Optimizing Data Skew,Typical SQL Optimization Methods,Developer Guide", + "title":"Optimizing Data Skew", + "githuburl":"" + }, + { + "uri":"dws_04_0452.html", + "product_code":"dws", + "code":"225", + "des":"Based on the database SQL execution mechanism and a large number of practices, it is found that SQL statements can be rewritten according to certain rules so that t", + "doc_type":"devg", + "kw":"Experience in Rewriting SQL Statements,Query Improvement,Developer Guide", + "title":"Experience in Rewriting SQL Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0453.html", + "product_code":"dws", + "code":"226", + "des":"This section describes the key CN parameters that affect GaussDB(DWS) SQL tuning performance. For details about how to configure these parameters, see Configuring GUC Par", + "doc_type":"devg", + "kw":"Adjusting Key Parameters During SQL Tuning,Query Improvement,Developer Guide", + "title":"Adjusting Key Parameters During SQL Tuning", + "githuburl":"" + }, + { + "uri":"dws_04_0454.html", + "product_code":"dws", + "code":"227", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Hint-based Tuning", + "title":"Hint-based Tuning", + "githuburl":"" + }, + { + "uri":"dws_04_0455.html", + "product_code":"dws", + "code":"228", + "des":"In plan hints, you can specify a join order, join, stream, and scan operations, the number of rows in a result, and redistribution skew information to tune an execution p", + "doc_type":"devg", + "kw":"Plan Hint Optimization,Hint-based Tuning,Developer Guide", + "title":"Plan Hint Optimization", + "githuburl":"" + }, + { + "uri":"dws_04_0456.html", + "product_code":"dws", + "code":"229", + "des":"These hints specify the join order and outer/inner tables.Specify only the join order.Specify the join order and outer/inner tables. The outer/inner tables are specified", + "doc_type":"devg", + "kw":"Join Order Hints,Hint-based Tuning,Developer Guide", + "title":"Join Order Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0457.html", + "product_code":"dws", + "code":"230", + "des":"Specifies the join method. It can be nested loop join, hash join, or merge join.no indicates that the specified hint will not be used for a join.table_list specifies the ", + "doc_type":"devg", + "kw":"Join Operation Hints,Hint-based Tuning,Developer Guide", + "title":"Join Operation Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0458.html", + "product_code":"dws", + "code":"231", + "des":"These hints specify the number of rows in an intermediate result set. 
Both absolute values and relative values are supported.#,+,-, and * are operators used for hinting t", + "doc_type":"devg", + "kw":"Rows Hints,Hint-based Tuning,Developer Guide", + "title":"Rows Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0459.html", + "product_code":"dws", + "code":"232", + "des":"These hints specify a stream operation, which can be broadcast or redistribute.no indicates that the specified hint will not be used for a join.table_list specifies the t", + "doc_type":"devg", + "kw":"Stream Operation Hints,Hint-based Tuning,Developer Guide", + "title":"Stream Operation Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0460.html", + "product_code":"dws", + "code":"233", + "des":"These hints specify a scan operation, which can be tablescan, indexscan, or indexonlyscan.no indicates that the specified hint will not be used for a join.table specifies", + "doc_type":"devg", + "kw":"Scan Operation Hints,Hint-based Tuning,Developer Guide", + "title":"Scan Operation Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0461.html", + "product_code":"dws", + "code":"234", + "des":"These hints specify the name of a sublink block.table indicates the name you have specified for a sublink block.This hint is used by an outer query only when a sublink is", + "doc_type":"devg", + "kw":"Sublink Name Hints,Hint-based Tuning,Developer Guide", + "title":"Sublink Name Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0462.html", + "product_code":"dws", + "code":"235", + "des":"These hints specify redistribution keys containing skew data and skew values, and are used to optimize redistribution involving Join or HashAgg.Specify single-table skew", + "doc_type":"devg", + "kw":"Skew Hints,Hint-based Tuning,Developer Guide", + "title":"Skew Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0463.html", + "product_code":"dws", + "code":"236", + "des":"A hint, or a GUC hint, specifies a configuration parameter value when a plan is generated. Currently, only the following parameters are supported:agg_redistribute_enhance", + "doc_type":"devg", + "kw":"Configuration Parameter Hints,Hint-based Tuning,Developer Guide", + "title":"Configuration Parameter Hints", + "githuburl":"" + }, + { + "uri":"dws_04_0464.html", + "product_code":"dws", + "code":"237", + "des":"Plan hints change an execution plan. You can run EXPLAIN to view the changes.Hints containing errors are invalid and do not affect statement execution. The errors will be", + "doc_type":"devg", + "kw":"Hint Errors, Conflicts, and Other Warnings,Hint-based Tuning,Developer Guide", + "title":"Hint Errors, Conflicts, and Other Warnings", + "githuburl":"" + }, + { + "uri":"dws_04_0465.html", + "product_code":"dws", + "code":"238", + "des":"This section takes the statements in TPC-DS (Q24) as an example to describe how to optimize an execution plan by using hints in 1000X+24DN environments. 
For example:The o", + "doc_type":"devg", + "kw":"Plan Hint Cases,Hint-based Tuning,Developer Guide", + "title":"Plan Hint Cases", + "githuburl":"" + }, + { + "uri":"dws_04_0466.html", + "product_code":"dws", + "code":"239", + "des":"To ensure proper database running, after INSERT and DELETE operations, you need to routinely perform VACUUM FULL and ANALYZE as appropriate for customer scenarios and update s", + "doc_type":"devg", + "kw":"Routinely Maintaining Tables,Query Improvement,Developer Guide", + "title":"Routinely Maintaining Tables", + "githuburl":"" + }, + { + "uri":"dws_04_0467.html", + "product_code":"dws", + "code":"240", + "des":"When data deletion is repeatedly performed in the database, index keys will be deleted from the index page, resulting in index bloat. Recreating an index routinely i", + "doc_type":"devg", + "kw":"Routinely Recreating an Index,Query Improvement,Developer Guide", + "title":"Routinely Recreating an Index", + "githuburl":"" + }, + { + "uri":"dws_04_0468.html", + "product_code":"dws", + "code":"241", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Configuring the SMP", + "title":"Configuring the SMP", + "githuburl":"" + }, + { + "uri":"dws_04_0469.html", + "product_code":"dws", + "code":"242", + "des":"The SMP feature improves performance through operator parallelism and occupies more system resources, including CPU, memory, network, and I/O. Actually, SMP is a meth", + "doc_type":"devg", + "kw":"Application Scenarios and Restrictions,Configuring the SMP,Developer Guide", + "title":"Application Scenarios and Restrictions", + "githuburl":"" + }, + { + "uri":"dws_04_0470.html", + "product_code":"dws", + "code":"243", + "des":"The SMP architecture trades abundant resources for time. After the plan parallelism is executed, resource consumption increases, including the CPU, memory, I/O, an", + "doc_type":"devg", + "kw":"Resource Impact on SMP Performance,Configuring the SMP,Developer Guide", + "title":"Resource Impact on SMP Performance", + "githuburl":"" + }, + { + "uri":"dws_04_0471.html", + "product_code":"dws", + "code":"244", + "des":"Besides resource factors, there are other factors that impact the SMP parallelism performance, such as uneven data distribution in a partitioned table and system paralle", + "doc_type":"devg", + "kw":"Other Factors Affecting SMP Performance,Configuring the SMP,Developer Guide", + "title":"Other Factors Affecting SMP Performance", + "githuburl":"" + }, + { + "uri":"dws_04_0472.html", + "product_code":"dws", + "code":"245", + "des":"Starting from this version, SMP auto adaptation is enabled. For newly deployed clusters, the default value of query_dop is 0, and SMP parameters have been adjusted. To en", + "doc_type":"devg", + "kw":"Suggestions for SMP Parameter Settings,Configuring the SMP,Developer Guide", + "title":"Suggestions for SMP Parameter Settings", + "githuburl":"" + }, + { + "uri":"dws_04_0473.html", + "product_code":"dws", + "code":"246", + "des":"To manually optimize SMP, you need to be familiar with Suggestions for SMP Parameter Settings. 
This section describes how to optimize SMP.The CPU, memory, I/O, and networ", + "doc_type":"devg", + "kw":"SMP Manual Optimization Suggestions,Configuring the SMP,Developer Guide", + "title":"SMP Manual Optimization Suggestions", + "githuburl":"" + }, + { + "uri":"dws_04_0474.html", + "product_code":"dws", + "code":"247", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Optimization Cases", + "title":"Optimization Cases", + "githuburl":"" + }, + { + "uri":"dws_04_0475.html", + "product_code":"dws", + "code":"248", + "des":"Tables are defined as follows:The following query is executed:If a is the distribution column of t1 and t2:Then Streaming exists in the execution plan and the data volume", + "doc_type":"devg", + "kw":"Case: Selecting an Appropriate Distribution Column,Optimization Cases,Developer Guide", + "title":"Case: Selecting an Appropriate Distribution Column", + "githuburl":"" + }, + { + "uri":"dws_04_0476.html", + "product_code":"dws", + "code":"249", + "des":"Query the information about all personnel in the sales department.The original execution plan is as follows before creating the places.place_id and states.state_id indexe", + "doc_type":"devg", + "kw":"Case: Creating an Appropriate Index,Optimization Cases,Developer Guide", + "title":"Case: Creating an Appropriate Index", + "githuburl":"" + }, + { + "uri":"dws_04_0477.html", + "product_code":"dws", + "code":"250", + "des":"Figure 1 shows the execution plan.As shown in Figure 1, the sequential scan phase is time consuming.The JOIN performance is poor because a large number of null values exi", + "doc_type":"devg", + "kw":"Case: Adding NOT NULL for JOIN Columns,Optimization Cases,Developer Guide", + "title":"Case: Adding NOT NULL for JOIN Columns", + "githuburl":"" + }, + { + "uri":"dws_04_0478.html", + "product_code":"dws", + "code":"251", + "des":"In an execution plan, more than 95% of the execution time is spent on window agg performed on the CN. In this case, sum is performed for the two columns separately, and t", + "doc_type":"devg", + "kw":"Case: Pushing Down Sort Operations to DNs,Optimization Cases,Developer Guide", + "title":"Case: Pushing Down Sort Operations to DNs", + "githuburl":"" + }, + { + "uri":"dws_04_0479.html", + "product_code":"dws", + "code":"252", + "des":"If bit0 of cost_param is set to 1, an improved mechanism is used for estimating the selection rate of non-equi-joins. 
This method is more accurate for estimating the sele", + "doc_type":"devg", + "kw":"Case: Configuring cost_param for Better Query Performance,Optimization Cases,Developer Guide", + "title":"Case: Configuring cost_param for Better Query Performance", + "githuburl":"" + }, + { + "uri":"dws_04_0480.html", + "product_code":"dws", + "code":"253", + "des":"During a site test, the information is displayed after EXPLAIN ANALYZE is executed:According to the execution information, HashJoin becomes the performance bottleneck of ", + "doc_type":"devg", + "kw":"Case: Adjusting the Distribution Key,Optimization Cases,Developer Guide", + "title":"Case: Adjusting the Distribution Key", + "githuburl":"" + }, + { + "uri":"dws_04_0481.html", + "product_code":"dws", + "code":"254", + "des":"Information on the EXPLAIN PERFORMANCE at a site is as follows: As shown in the red boxes, two performance bottlenecks are scan operations in a table.After further analys", + "doc_type":"devg", + "kw":"Case: Adjusting the Partial Clustering Key,Optimization Cases,Developer Guide", + "title":"Case: Adjusting the Partial Clustering Key", + "githuburl":"" + }, + { + "uri":"dws_04_0482.html", + "product_code":"dws", + "code":"255", + "des":"In the GaussDB(DWS) database, row-store tables use the row execution engine, and column-store tables use the column execution engine. If both row-store table and column-s", + "doc_type":"devg", + "kw":"Case: Adjusting the Table Storage Mode in a Medium Table,Optimization Cases,Developer Guide", + "title":"Case: Adjusting the Table Storage Mode in a Medium Table", + "githuburl":"" + }, + { + "uri":"dws_04_0483.html", + "product_code":"dws", + "code":"256", + "des":"During the test at a site, if the following execution plan is performed, the customer expects that the performance can be improved and the result can be returned within 3", + "doc_type":"devg", + "kw":"Case: Adjusting the Local Clustering Column,Optimization Cases,Developer Guide", + "title":"Case: Adjusting the Local Clustering Column", + "githuburl":"" + }, + { + "uri":"dws_04_0484.html", + "product_code":"dws", + "code":"257", + "des":"In the following simple SQL statements, the performance bottlenecks exist in the scan operation of dwcjk.Obviously, there are date features in the cjrq field of table dat", + "doc_type":"devg", + "kw":"Case: Reconstructing Partition Tables,Optimization Cases,Developer Guide", + "title":"Case: Reconstructing Partition Tables", + "githuburl":"" + }, + { + "uri":"dws_04_0485.html", + "product_code":"dws", + "code":"258", + "des":"The t1 table is defined as follows:Assume that the distribution column of the result set provided by the agg lower-layer operator is setA, and the group by column of the ", + "doc_type":"devg", + "kw":"Case: Adjusting the GUC Parameter best_agg_plan,Optimization Cases,Developer Guide", + "title":"Case: Adjusting the GUC Parameter best_agg_plan", + "githuburl":"" + }, + { + "uri":"dws_04_0486.html", + "product_code":"dws", + "code":"259", + "des":"This SQL performance is poor. SubPlan exists in the execution plan as follows:The core of this optimization is to eliminate subqueries. 
Based on the service scenario anal", + "doc_type":"devg", + "kw":"Case: Rewriting SQL and Deleting Subqueries (Case 1),Optimization Cases,Developer Guide", + "title":"Case: Rewriting SQL and Deleting Subqueries (Case 1)", + "githuburl":"" + }, + { + "uri":"dws_04_0487.html", + "product_code":"dws", + "code":"260", + "des":"On a site, the customer reported that the following SQL statements had been running for over one day without ending:The corresponding execution p", + "doc_type":"devg", + "kw":"Case: Rewriting SQL and Deleting Subqueries (Case 2),Optimization Cases,Developer Guide", + "title":"Case: Rewriting SQL and Deleting Subqueries (Case 2)", + "githuburl":"" + }, + { + "uri":"dws_04_0488.html", + "product_code":"dws", + "code":"261", + "des":"In a test at a site, ddw_f10_op_cust_asset_mon is a partitioned table and the partition key is year_mth whose value is a combined string of month and year values.The foll", + "doc_type":"devg", + "kw":"Case: Rewriting SQL Statements and Eliminating Prune Interference,Optimization Cases,Developer Guide", + "title":"Case: Rewriting SQL Statements and Eliminating Prune Interference", + "githuburl":"" + }, + { + "uri":"dws_04_0489.html", + "product_code":"dws", + "code":"262", + "des":"in-clause/any-clause is a common SQL statement constraint. Sometimes, the clause following in or any is a constant. For example:orSome special usages are as follows:Where", + "doc_type":"devg", + "kw":"Case: Rewriting SQL Statements and Deleting in-clause,Optimization Cases,Developer Guide", + "title":"Case: Rewriting SQL Statements and Deleting in-clause", + "githuburl":"" + }, + { + "uri":"dws_04_0490.html", + "product_code":"dws", + "code":"263", + "des":"You can add PARTIAL CLUSTER KEY(column_name[,...]) to the definition of a column-store table to set one or more columns of this table as partial cluster keys. In this way", + "doc_type":"devg", + "kw":"Case: Setting Partial Cluster Keys,Optimization Cases,Developer Guide", + "title":"Case: Setting Partial Cluster Keys", + "githuburl":"" + }, + { + "uri":"dws_04_0491.html", + "product_code":"dws", + "code":"264", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"SQL Execution Troubleshooting", + "title":"SQL Execution Troubleshooting", + "githuburl":"" + }, + { + "uri":"dws_04_0492.html", + "product_code":"dws", + "code":"265", + "des":"A query task that used to take a few milliseconds to complete is now requiring several seconds, and one that used to take several seconds is now requiring even half an hour. ", + "doc_type":"devg", + "kw":"Low Query Efficiency,SQL Execution Troubleshooting,Developer Guide", + "title":"Low Query Efficiency", + "githuburl":"" + }, + { + "uri":"dws_04_0494.html", + "product_code":"dws", + "code":"266", + "des":"DROP TABLE fails to be executed in the following scenarios:A user runs the \\dt+ command using gsql and finds that the table_name table does not exist. 
When the user runs ", + "doc_type":"devg", + "kw":"DROP TABLE Fails to Be Executed,SQL Execution Troubleshooting,Developer Guide", + "title":"DROP TABLE Fails to Be Executed", + "githuburl":"" + }, + { + "uri":"dws_04_0495.html", + "product_code":"dws", + "code":"267", + "des":"Two users log in to the same database human_resource and run the select count(*) from areas statement separately to query the areas table, but obtain different results.Ch", + "doc_type":"devg", + "kw":"Different Data Is Displayed for the Same Table Queried By Multiple Users,SQL Execution Troubleshooti", + "title":"Different Data Is Displayed for the Same Table Queried By Multiple Users", + "githuburl":"" + }, + { + "uri":"dws_04_0496.html", + "product_code":"dws", + "code":"268", + "des":"The following error is reported during the integer conversion:Some data types cannot be converted to the target data type.Gradually narrow down the range of SQL statement", + "doc_type":"devg", + "kw":"An Error Occurs During the Integer Conversion,SQL Execution Troubleshooting,Developer Guide", + "title":"An Error Occurs During the Integer Conversion", + "githuburl":"" + }, + { + "uri":"dws_04_0497.html", + "product_code":"dws", + "code":"269", + "des":"With automatic retry (referred to as CN retry), GaussDB(DWS) retries an SQL statement when the execution of this statement fails. If an SQL statement sent from the gsql c", + "doc_type":"devg", + "kw":"Automatic Retry upon SQL Statement Execution Errors,SQL Execution Troubleshooting,Developer Guide", + "title":"Automatic Retry upon SQL Statement Execution Errors", + "githuburl":"" + }, + { + "uri":"dws_04_0970.html", + "product_code":"dws", + "code":"270", + "des":"To improve the cluster performance, you can use multiple methods to optimize the database, including hardware configuration, software driver upgrade, and internal paramet", + "doc_type":"devg", + "kw":"Common Performance Parameter Optimization Design,Query Performance Optimization,Developer Guide", + "title":"Common Performance Parameter Optimization Design", + "githuburl":"" + }, + { + "uri":"dws_04_0507.html", + "product_code":"dws", + "code":"271", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"User-Defined Functions", + "title":"User-Defined Functions", + "githuburl":"" + }, + { + "uri":"dws_04_0509.html", + "product_code":"dws", + "code":"272", + "des":"With the GaussDB(DWS) PL/Java functions, you can choose your favorite Java IDE to write Java methods and install the JAR files containing these methods into the GaussDB(D", + "doc_type":"devg", + "kw":"PL/Java Functions,User-Defined Functions,Developer Guide", + "title":"PL/Java Functions", + "githuburl":"" + }, + { + "uri":"dws_04_0511.html", + "product_code":"dws", + "code":"273", + "des":"PL/pgSQL is similar to PL/SQL of Oracle. It is a loadable procedural language.The functions created using PL/pgSQL can be used in any place where you can use built-in fun", + "doc_type":"devg", + "kw":"PL/pgSQL Functions,User-Defined Functions,Developer Guide", + "title":"PL/pgSQL Functions", + "githuburl":"" + }, + { + "uri":"dws_04_0512.html", + "product_code":"dws", + "code":"274", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Stored Procedures", + "title":"Stored Procedures", + "githuburl":"" + }, + { + "uri":"dws_04_0513.html", + "product_code":"dws", + "code":"275", + "des":"In GaussDB(DWS), business rules and logic are saved as stored procedures.A stored procedure is a combination of SQL, PL/SQL, and Java statements, enabling business rule ", + "doc_type":"devg", + "kw":"Stored Procedure,Stored Procedures,Developer Guide", + "title":"Stored Procedure", + "githuburl":"" + }, + { + "uri":"dws_04_0514.html", + "product_code":"dws", + "code":"276", + "des":"A data type refers to a value set and an operation set defined on the value set. A GaussDB(DWS) database consists of tables, each of which is defined by its own columns. ", + "doc_type":"devg", + "kw":"Data Types,Stored Procedures,Developer Guide", + "title":"Data Types", + "githuburl":"" + }, + { + "uri":"dws_04_0515.html", + "product_code":"dws", + "code":"277", + "des":"Certain data types in the database support implicit data type conversions, such as assignments and parameters invoked by functions. For other data types, you can use the ", + "doc_type":"devg", + "kw":"Data Type Conversion,Stored Procedures,Developer Guide", + "title":"Data Type Conversion", + "githuburl":"" + }, + { + "uri":"dws_04_0516.html", + "product_code":"dws", + "code":"278", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Arrays and Records", + "title":"Arrays and Records", + "githuburl":"" + }, + { + "uri":"dws_04_0517.html", + "product_code":"dws", + "code":"279", + "des":"Before using arrays, you need to define an array type:Define an array type immediately after the AS keyword in a stored procedure. Run the following statement:TYPE ", + "doc_type":"devg", + "kw":"Arrays,Arrays and Records,Developer Guide", + "title":"Arrays", + "githuburl":"" + }, + { + "uri":"dws_04_0518.html", + "product_code":"dws", + "code":"280", + "des":"Perform the following operations to create a record variable:Define a record type and use this type to declare a variable.For the syntax of the record type, see Figure 1.", + "doc_type":"devg", + "kw":"record,Arrays and Records,Developer Guide", + "title":"record", + "githuburl":"" + }, + { + "uri":"dws_04_0519.html", + "product_code":"dws", + "code":"281", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Syntax", + "title":"Syntax", + "githuburl":"" + }, + { + "uri":"dws_04_0520.html", + "product_code":"dws", + "code":"282", + "des":"A PL/SQL block can contain a sub-block which can be placed in any section. The following describes the architecture of a PL/SQL block:DECLARE: declares variables, types, ", + "doc_type":"devg", + "kw":"Basic Structure,Syntax,Developer Guide", + "title":"Basic Structure", + "githuburl":"" + }, + { + "uri":"dws_04_0521.html", + "product_code":"dws", + "code":"283", + "des":"An anonymous block is used for a script that is executed infrequently or for a one-off activity. 
An anonymous block is executed in a session and is not stored.Figure 1 shows the synta", + "doc_type":"devg", + "kw":"Anonymous Block,Syntax,Developer Guide", + "title":"Anonymous Block", + "githuburl":"" + }, + { + "uri":"dws_04_0522.html", + "product_code":"dws", + "code":"284", + "des":"A subprogram stores stored procedures, functions, operators, and advanced packages. A subprogram created in a database can be called by other programs.", + "doc_type":"devg", + "kw":"Subprogram,Syntax,Developer Guide", + "title":"Subprogram", + "githuburl":"" + }, + { + "uri":"dws_04_0523.html", + "product_code":"dws", + "code":"285", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Basic Statements", + "title":"Basic Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0524.html", + "product_code":"dws", + "code":"286", + "des":"This section describes the declaration of variables in PL/SQL and the scope of these variables in code.For details about the variable declaration syntax, see Figure 1.", + "doc_type":"devg", + "kw":"Variable Definition Statement,Basic Statements,Developer Guide", + "title":"Variable Definition Statement", + "githuburl":"" + }, + { + "uri":"dws_04_0525.html", + "product_code":"dws", + "code":"287", + "des":"Figure 1 shows the syntax diagram for assigning a value to a variable.The above syntax diagram is explained as follows:variable_name indicates the name of a variable.valu", + "doc_type":"devg", + "kw":"Assignment Statement,Basic Statements,Developer Guide", + "title":"Assignment Statement", + "githuburl":"" + }, + { + "uri":"dws_04_0526.html", + "product_code":"dws", + "code":"288", + "des":"Figure 1 shows the syntax diagram for calling a stored procedure.The above syntax diagram is explained as follows:procedure_name specifies the name of a stored procedure.parameter ", + "doc_type":"devg", + "kw":"Call Statement,Basic Statements,Developer Guide", + "title":"Call Statement", + "githuburl":"" + }, + { + "uri":"dws_04_0527.html", + "product_code":"dws", + "code":"289", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Dynamic Statements", + "title":"Dynamic Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0528.html", + "product_code":"dws", + "code":"290", + "des":"You can perform dynamic queries using EXECUTE IMMEDIATE or OPEN FOR in GaussDB(DWS). 
EXECUTE IMMEDIATE dynamically executes SELECT statements and OPEN FOR combines use of", + "doc_type":"devg", + "kw":"Executing Dynamic Query Statements,Dynamic Statements,Developer Guide", + "title":"Executing Dynamic Query Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0529.html", + "product_code":"dws", + "code":"291", + "des":"Figure 1 shows the syntax diagram.Figure 2 shows the syntax diagram for using_clause.The above syntax diagram is explained as follows:USING IN bind_argument is used to sp", + "doc_type":"devg", + "kw":"Executing Dynamic Non-query Statements,Dynamic Statements,Developer Guide", + "title":"Executing Dynamic Non-query Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0530.html", + "product_code":"dws", + "code":"292", + "des":"This section describes how to dynamically call stored procedures. You must use anonymous statement blocks to package stored procedures or statement blocks and append IN an", + "doc_type":"devg", + "kw":"Dynamically Calling Stored Procedures,Dynamic Statements,Developer Guide", + "title":"Dynamically Calling Stored Procedures", + "githuburl":"" + }, + { + "uri":"dws_04_0531.html", + "product_code":"dws", + "code":"293", + "des":"This section describes how to execute anonymous blocks in dynamic statements. Append IN and OUT after the EXECUTE IMMEDIATE...USING statement to input and output paramet", + "doc_type":"devg", + "kw":"Dynamically Calling Anonymous Blocks,Dynamic Statements,Developer Guide", + "title":"Dynamically Calling Anonymous Blocks", + "githuburl":"" + }, + { + "uri":"dws_04_0532.html", + "product_code":"dws", + "code":"294", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Control Statements", + "title":"Control Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0533.html", + "product_code":"dws", + "code":"295", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"RETURN Statements", + "title":"RETURN Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0534.html", + "product_code":"dws", + "code":"296", + "des":"Figure 1 shows the syntax diagram for a return statement.The syntax details are as follows:This statement returns control from a stored procedure or function to a caller.", + "doc_type":"devg", + "kw":"RETURN,RETURN Statements,Developer Guide", + "title":"RETURN", + "githuburl":"" + }, + { + "uri":"dws_04_0535.html", + "product_code":"dws", + "code":"297", + "des":"When creating a function, specify SETOF datatype for the return values.return_next_clause::=return_query_clause::=The syntax details are as follows:If a function needs to", + "doc_type":"devg", + "kw":"RETURN NEXT and RETURN QUERY,RETURN Statements,Developer Guide", + "title":"RETURN NEXT and RETURN QUERY", + "githuburl":"" + }, + { + "uri":"dws_04_0536.html", + "product_code":"dws", + "code":"298", + "des":"Conditional statements are used to decide whether given conditions are met. 
Operations are executed based on the decisions made.GaussDB(DWS) supports five usages of IF:IF", + "doc_type":"devg", + "kw":"Conditional Statements,Control Statements,Developer Guide", + "title":"Conditional Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0537.html", + "product_code":"dws", + "code":"299", + "des":"The syntax diagram is as follows.Example:The loop must be used together with EXIT; otherwise, an infinite loop occurs.The syntax diagram is as follows.If the conditional ", + "doc_type":"devg", + "kw":"Loop Statements,Control Statements,Developer Guide", + "title":"Loop Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0538.html", + "product_code":"dws", + "code":"300", + "des":"Figure 1 shows the syntax diagram.Figure 2 shows the syntax diagram for when_clause.Parameter description:case_expression: specifies the variable or expression.when_expre", + "doc_type":"devg", + "kw":"Branch Statements,Control Statements,Developer Guide", + "title":"Branch Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0539.html", + "product_code":"dws", + "code":"301", + "des":"In PL/SQL programs, NULL statements are used to indicate \"nothing should be done\", equal to placeholders. They give meaning to some statements and improve program reada", + "doc_type":"devg", + "kw":"NULL Statements,Control Statements,Developer Guide", + "title":"NULL Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0540.html", + "product_code":"dws", + "code":"302", + "des":"By default, any error occurring in a PL/SQL function aborts execution of the function, and indeed of the surrounding transaction as well. You can trap errors and restore ", + "doc_type":"devg", + "kw":"Error Trapping Statements,Control Statements,Developer Guide", + "title":"Error Trapping Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0541.html", + "product_code":"dws", + "code":"303", + "des":"The GOTO statement unconditionally transfers the control from the current statement to a labeled statement. The GOTO statement changes the execution logic. Therefore, use", + "doc_type":"devg", + "kw":"GOTO Statements,Control Statements,Developer Guide", + "title":"GOTO Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0542.html", + "product_code":"dws", + "code":"304", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Other Statements", + "title":"Other Statements", + "githuburl":"" + }, + { + "uri":"dws_04_0543.html", + "product_code":"dws", + "code":"305", + "des":"GaussDB(DWS) provides multiple lock modes to control concurrent accesses to table data. These modes are used when Multi-Version Concurrency Control (MVCC) cannot give exp", + "doc_type":"devg", + "kw":"Lock Operations,Other Statements,Developer Guide", + "title":"Lock Operations", + "githuburl":"" + }, + { + "uri":"dws_04_0544.html", + "product_code":"dws", + "code":"306", + "des":"GaussDB(DWS) provides cursors as a data buffer for users to store execution results of SQL statements. Each cursor region has a name. 
Users can use SQL statements to obta", + "doc_type":"devg", + "kw":"Cursor Operations,Other Statements,Developer Guide", + "title":"Cursor Operations", + "githuburl":"" + }, + { + "uri":"dws_04_0545.html", + "product_code":"dws", + "code":"307", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Cursors", + "title":"Cursors", + "githuburl":"" + }, + { + "uri":"dws_04_0546.html", + "product_code":"dws", + "code":"308", + "des":"To process SQL statements, the stored procedure process assigns a memory segment to store context association. Cursors are handles or pointers to context areas. With curs", + "doc_type":"devg", + "kw":"Overview,Cursors,Developer Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_04_0547.html", + "product_code":"dws", + "code":"309", + "des":"An explicit cursor is used to process query statements, particularly when the query results contain multiple records.An explicit cursor performs the following six PL/SQL ", + "doc_type":"devg", + "kw":"Explicit Cursor,Cursors,Developer Guide", + "title":"Explicit Cursor", + "githuburl":"" + }, + { + "uri":"dws_04_0548.html", + "product_code":"dws", + "code":"310", + "des":"The system automatically sets implicit cursors for non-query statements, such as ALTER and DROP, and creates work areas for these statements. These implicit cursors are n", + "doc_type":"devg", + "kw":"Implicit Cursor,Cursors,Developer Guide", + "title":"Implicit Cursor", + "githuburl":"" + }, + { + "uri":"dws_04_0549.html", + "product_code":"dws", + "code":"311", + "des":"The use of cursors in WHILE and LOOP statements is called a cursor loop. Generally, OPEN, FETCH, and CLOSE statements are needed in cursor loop. The following describes a", + "doc_type":"devg", + "kw":"Cursor Loop,Cursors,Developer Guide", + "title":"Cursor Loop", + "githuburl":"" + }, + { + "uri":"dws_04_0550.html", + "product_code":"dws", + "code":"312", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Advanced Packages", + "title":"Advanced Packages", + "githuburl":"" + }, + { + "uri":"dws_04_0551.html", + "product_code":"dws", + "code":"313", + "des":"Table 1 provides all interfaces supported by the DBMS_LOB package.DBMS_LOB.GETLENGTHSpecifies the length of a LOB type object obtained and returned by the stored procedur", + "doc_type":"devg", + "kw":"DBMS_LOB,Advanced Packages,Developer Guide", + "title":"DBMS_LOB", + "githuburl":"" + }, + { + "uri":"dws_04_0552.html", + "product_code":"dws", + "code":"314", + "des":"Table 1 provides all interfaces supported by the DBMS_RANDOM package.DBMS_RANDOM.SEEDThe stored procedure SEED is used to set a seed for a random number. 
The DBMS_RANDOM.", + "doc_type":"devg", + "kw":"DBMS_RANDOM,Advanced Packages,Developer Guide", + "title":"DBMS_RANDOM", + "githuburl":"" + }, + { + "uri":"dws_04_0553.html", + "product_code":"dws", + "code":"315", + "des":"Table 1 provides all interfaces supported by the DBMS_OUTPUT package.DBMS_OUTPUT.PUT_LINEThe PUT_LINE procedure writes a row of text carrying a line end symbol in the buf", + "doc_type":"devg", + "kw":"DBMS_OUTPUT,Advanced Packages,Developer Guide", + "title":"DBMS_OUTPUT", + "githuburl":"" + }, + { + "uri":"dws_04_0554.html", + "product_code":"dws", + "code":"316", + "des":"Table 1 provides all interfaces supported by the UTL_RAW package.The external representation of the RAW type data is hexadecimal and its internal storage form is binary. ", + "doc_type":"devg", + "kw":"UTL_RAW,Advanced Packages,Developer Guide", + "title":"UTL_RAW", + "githuburl":"" + }, + { + "uri":"dws_04_0555.html", + "product_code":"dws", + "code":"317", + "des":"Table 1 lists all interfaces supported by the DBMS_JOB package.DBMS_JOB.SUBMITThe stored procedure SUBMIT submits a job provided by the system.A prototype of the DBMS_JOB", + "doc_type":"devg", + "kw":"DBMS_JOB,Advanced Packages,Developer Guide", + "title":"DBMS_JOB", + "githuburl":"" + }, + { + "uri":"dws_04_0556.html", + "product_code":"dws", + "code":"318", + "des":"Table 1 lists interfaces supported by the DBMS_SQL package.You are advised to use dbms_sql.define_column and dbms_sql.column_value to define columns.If the size of the re", + "doc_type":"devg", + "kw":"DBMS_SQL,Advanced Packages,Developer Guide", + "title":"DBMS_SQL", + "githuburl":"" + }, + { + "uri":"dws_04_0558.html", + "product_code":"dws", + "code":"319", + "des":"RAISE has the following five syntax formats:Parameter description:The level option is used to specify the error level, that is, DEBUG, LOG, INFO, NOTICE, WARNING, or EXCE", + "doc_type":"devg", + "kw":"Debugging,Stored Procedures,Developer Guide", + "title":"Debugging", + "githuburl":"" + }, + { + "uri":"dws_04_0559.html", + "product_code":"dws", + "code":"320", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"System Catalogs and System Views", + "title":"System Catalogs and System Views", + "githuburl":"" + }, + { + "uri":"dws_04_0560.html", + "product_code":"dws", + "code":"321", + "des":"System catalogs are used by GaussDB(DWS) to store structure metadata. They are a core component of the GaussDB(DWS) database system and provide control information for the d", + "doc_type":"devg", + "kw":"Overview of System Catalogs and System Views,System Catalogs and System Views,Developer Guide", + "title":"Overview of System Catalogs and System Views", + "githuburl":"" + }, + { + "uri":"dws_04_0561.html", + "product_code":"dws", + "code":"322", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"System Catalogs", + "title":"System Catalogs", + "githuburl":"" + }, + { + "uri":"dws_04_0562.html", + "product_code":"dws", + "code":"323", + "des":"GS_OBSSCANINFO defines the OBS runtime information scanned in cluster acceleration scenarios. 
Each record corresponds to a piece of runtime information of a foreign table", + "doc_type":"devg", + "kw":"GS_OBSSCANINFO,System Catalogs,Developer Guide", + "title":"GS_OBSSCANINFO", + "githuburl":"" + }, + { + "uri":"dws_04_0564.html", + "product_code":"dws", + "code":"324", + "des":"The GS_WLM_INSTANCE_HISTORY system catalog stores information about resource usage related to CN or DN instances. Each record in the system table indicates the resource u", + "doc_type":"devg", + "kw":"GS_WLM_INSTANCE_HISTORY,System Catalogs,Developer Guide", + "title":"GS_WLM_INSTANCE_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0565.html", + "product_code":"dws", + "code":"325", + "des":"GS_WLM_OPERATOR_INFO records operators of completed jobs. The data is dumped from the kernel to a system catalog.This system catalog's schema is dbms_om.This system catal", + "doc_type":"devg", + "kw":"GS_WLM_OPERATOR_INFO,System Catalogs,Developer Guide", + "title":"GS_WLM_OPERATOR_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0566.html", + "product_code":"dws", + "code":"326", + "des":"GS_WLM_SESSION_INFO records load management information about a completed job executed on all CNs. The data is dumped from the kernel to a system catalog.This system cata", + "doc_type":"devg", + "kw":"GS_WLM_SESSION_INFO,System Catalogs,Developer Guide", + "title":"GS_WLM_SESSION_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0567.html", + "product_code":"dws", + "code":"327", + "des":"The GS_WLM_USER_RESOURCE_HISTORY system table stores information about resources used by users and is valid only on CNs. Each record in the system table indicates the res", + "doc_type":"devg", + "kw":"GS_WLM_USER_RESOURCE_HISTORY,System Catalogs,Developer Guide", + "title":"GS_WLM_USER_RESOURCE_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0568.html", + "product_code":"dws", + "code":"328", + "des":"pg_aggregate records information about aggregation functions. Each entry in pg_aggregate is an extension of an entry in pg_proc. The pg_proc entry carries the aggregate's", + "doc_type":"devg", + "kw":"PG_AGGREGATE,System Catalogs,Developer Guide", + "title":"PG_AGGREGATE", + "githuburl":"" + }, + { + "uri":"dws_04_0569.html", + "product_code":"dws", + "code":"329", + "des":"PG_AM records information about index access methods. There is one row for each index access method supported by the system.", + "doc_type":"devg", + "kw":"PG_AM,System Catalogs,Developer Guide", + "title":"PG_AM", + "githuburl":"" + }, + { + "uri":"dws_04_0570.html", + "product_code":"dws", + "code":"330", + "des":"PG_AMOP records information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A ", + "doc_type":"devg", + "kw":"PG_AMOP,System Catalogs,Developer Guide", + "title":"PG_AMOP", + "githuburl":"" + }, + { + "uri":"dws_04_0571.html", + "product_code":"dws", + "code":"331", + "des":"PG_AMPROC records information about the support procedures associated with the access method operator families. 
There is one row for each support procedure belonging to a", + "doc_type":"devg", + "kw":"PG_AMPROC,System Catalogs,Developer Guide", + "title":"PG_AMPROC", + "githuburl":"" + }, + { + "uri":"dws_04_0572.html", + "product_code":"dws", + "code":"332", + "des":"PG_ATTRDEF stores default values of columns.", + "doc_type":"devg", + "kw":"PG_ATTRDEF,System Catalogs,Developer Guide", + "title":"PG_ATTRDEF", + "githuburl":"" + }, + { + "uri":"dws_04_0573.html", + "product_code":"dws", + "code":"333", + "des":"PG_ATTRIBUTE records information about table columns.", + "doc_type":"devg", + "kw":"PG_ATTRIBUTE,System Catalogs,Developer Guide", + "title":"PG_ATTRIBUTE", + "githuburl":"" + }, + { + "uri":"dws_04_0574.html", + "product_code":"dws", + "code":"334", + "des":"PG_AUTHID records information about the database authentication identifiers (roles). The concept of users is contained in that of roles. A user is actually a role whose r", + "doc_type":"devg", + "kw":"PG_AUTHID,System Catalogs,Developer Guide", + "title":"PG_AUTHID", + "githuburl":"" + }, + { + "uri":"dws_04_0575.html", + "product_code":"dws", + "code":"335", + "des":"PG_AUTH_HISTORY records the authentication history of the role. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"PG_AUTH_HISTORY,System Catalogs,Developer Guide", + "title":"PG_AUTH_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0576.html", + "product_code":"dws", + "code":"336", + "des":"PG_AUTH_MEMBERS records the membership relations between roles.", + "doc_type":"devg", + "kw":"PG_AUTH_MEMBERS,System Catalogs,Developer Guide", + "title":"PG_AUTH_MEMBERS", + "githuburl":"" + }, + { + "uri":"dws_04_0577.html", + "product_code":"dws", + "code":"337", + "des":"PG_CAST records conversion relationships between data types.", + "doc_type":"devg", + "kw":"PG_CAST,System Catalogs,Developer Guide", + "title":"PG_CAST", + "githuburl":"" + }, + { + "uri":"dws_04_0578.html", + "product_code":"dws", + "code":"338", + "des":"PG_CLASS records database objects and their relations.View the OID and relfilenode of a table.Count row-store tables.Count column-store tables.", + "doc_type":"devg", + "kw":"PG_CLASS,System Catalogs,Developer Guide", + "title":"PG_CLASS", + "githuburl":"" + }, + { + "uri":"dws_04_0579.html", + "product_code":"dws", + "code":"339", + "des":"PG_COLLATION records the available collations, which are essentially mappings from an SQL name to operating system locale categories.", + "doc_type":"devg", + "kw":"PG_COLLATION,System Catalogs,Developer Guide", + "title":"PG_COLLATION", + "githuburl":"" + }, + { + "uri":"dws_04_0580.html", + "product_code":"dws", + "code":"340", + "des":"PG_CONSTRAINT records check, primary key, unique, and foreign key constraints on the tables.consrc is not updated when referenced objects change; for example, it will not", + "doc_type":"devg", + "kw":"PG_CONSTRAINT,System Catalogs,Developer Guide", + "title":"PG_CONSTRAINT", + "githuburl":"" + }, + { + "uri":"dws_04_0581.html", + "product_code":"dws", + "code":"341", + "des":"PG_CONVERSION records encoding conversion information.", + "doc_type":"devg", + "kw":"PG_CONVERSION,System Catalogs,Developer Guide", + "title":"PG_CONVERSION", + "githuburl":"" + }, + { + "uri":"dws_04_0582.html", + "product_code":"dws", + "code":"342", + "des":"PG_DATABASE records information about the available databases.", + "doc_type":"devg", + "kw":"PG_DATABASE,System Catalogs,Developer Guide", + "title":"PG_DATABASE", + "githuburl":"" + }, + { + 
"uri":"dws_04_0583.html", + "product_code":"dws", + "code":"343", + "des":"PG_DB_ROLE_SETTING records the default values of configuration items bonded to each role and database when the database is running.", + "doc_type":"devg", + "kw":"PG_DB_ROLE_SETTING,System Catalogs,Developer Guide", + "title":"PG_DB_ROLE_SETTING", + "githuburl":"" + }, + { + "uri":"dws_04_0584.html", + "product_code":"dws", + "code":"344", + "des":"PG_DEFAULT_ACL records the initial privileges assigned to the newly created objects.Run the following command to view the initial permissions of the new user role1:You ca", + "doc_type":"devg", + "kw":"PG_DEFAULT_ACL,System Catalogs,Developer Guide", + "title":"PG_DEFAULT_ACL", + "githuburl":"" + }, + { + "uri":"dws_04_0585.html", + "product_code":"dws", + "code":"345", + "des":"PG_DEPEND records the dependency relationships between database objects. This information allows DROP commands to find which other objects must be dropped by DROP CASCADE", + "doc_type":"devg", + "kw":"PG_DEPEND,System Catalogs,Developer Guide", + "title":"PG_DEPEND", + "githuburl":"" + }, + { + "uri":"dws_04_0586.html", + "product_code":"dws", + "code":"346", + "des":"PG_DESCRIPTION records optional descriptions (comments) for each database object. Descriptions of many built-in system objects are provided in the initial contents of PG_", + "doc_type":"devg", + "kw":"PG_DESCRIPTION,System Catalogs,Developer Guide", + "title":"PG_DESCRIPTION", + "githuburl":"" + }, + { + "uri":"dws_04_0588.html", + "product_code":"dws", + "code":"347", + "des":"PG_ENUM records entries showing the values and labels for each enum type. The internal representation of a given enum value is actually the OID of its associated row in p", + "doc_type":"devg", + "kw":"PG_ENUM,System Catalogs,Developer Guide", + "title":"PG_ENUM", + "githuburl":"" + }, + { + "uri":"dws_04_0589.html", + "product_code":"dws", + "code":"348", + "des":"PG_EXTENSION records information about the installed extensions. By default, GaussDB(DWS) has 12 extensions, that is, PLPGSQL, DIST_FDW, FILE_FDW, HDFS_FDW, HSTORE, PLDBG", + "doc_type":"devg", + "kw":"PG_EXTENSION,System Catalogs,Developer Guide", + "title":"PG_EXTENSION", + "githuburl":"" + }, + { + "uri":"dws_04_0590.html", + "product_code":"dws", + "code":"349", + "des":"PG_EXTENSION_DATA_SOURCE records information about external data source. An external data source contains information about an external database, such as its password enc", + "doc_type":"devg", + "kw":"PG_EXTENSION_DATA_SOURCE,System Catalogs,Developer Guide", + "title":"PG_EXTENSION_DATA_SOURCE", + "githuburl":"" + }, + { + "uri":"dws_04_0591.html", + "product_code":"dws", + "code":"350", + "des":"PG_FOREIGN_DATA_WRAPPER records foreign-data wrapper definitions. A foreign-data wrapper is the mechanism by which external data, residing on foreign servers, is accessed", + "doc_type":"devg", + "kw":"PG_FOREIGN_DATA_WRAPPER,System Catalogs,Developer Guide", + "title":"PG_FOREIGN_DATA_WRAPPER", + "githuburl":"" + }, + { + "uri":"dws_04_0592.html", + "product_code":"dws", + "code":"351", + "des":"PG_FOREIGN_SERVER records the foreign server definitions. A foreign server describes a source of external data, such as a remote server. 
Foreign servers are accessed via ", + "doc_type":"devg", + "kw":"PG_FOREIGN_SERVER,System Catalogs,Developer Guide", + "title":"PG_FOREIGN_SERVER", + "githuburl":"" + }, + { + "uri":"dws_04_0593.html", + "product_code":"dws", + "code":"352", + "des":"PG_FOREIGN_TABLE records auxiliary information about foreign tables.", + "doc_type":"devg", + "kw":"PG_FOREIGN_TABLE,System Catalogs,Developer Guide", + "title":"PG_FOREIGN_TABLE", + "githuburl":"" + }, + { + "uri":"dws_04_0594.html", + "product_code":"dws", + "code":"353", + "des":"PG_INDEX records part of the information about indexes. The rest is mostly in PG_CLASS.", + "doc_type":"devg", + "kw":"PG_INDEX,System Catalogs,Developer Guide", + "title":"PG_INDEX", + "githuburl":"" + }, + { + "uri":"dws_04_0595.html", + "product_code":"dws", + "code":"354", + "des":"PG_INHERITS records information about table inheritance hierarchies. There is one entry for each direct child table in the database. Indirect inheritance can be determine", + "doc_type":"devg", + "kw":"PG_INHERITS,System Catalogs,Developer Guide", + "title":"PG_INHERITS", + "githuburl":"" + }, + { + "uri":"dws_04_0596.html", + "product_code":"dws", + "code":"355", + "des":"PG_JOBS records detailed information about jobs created by users. Dedicated threads poll the pg_jobs table and trigger jobs based on scheduled job execution time. This ta", + "doc_type":"devg", + "kw":"PG_JOBS,System Catalogs,Developer Guide", + "title":"PG_JOBS", + "githuburl":"" + }, + { + "uri":"dws_04_0597.html", + "product_code":"dws", + "code":"356", + "des":"PG_LANGUAGE records programming languages. You can use these languages and their interfaces to write functions or stored procedures.", + "doc_type":"devg", + "kw":"PG_LANGUAGE,System Catalogs,Developer Guide", + "title":"PG_LANGUAGE", + "githuburl":"" + }, + { + "uri":"dws_04_0598.html", + "product_code":"dws", + "code":"357", + "des":"PG_LARGEOBJECT records the data making up large objects. A large object is identified by an OID assigned when it is created. Each large object is broken into segments or \"", + "doc_type":"devg", + "kw":"PG_LARGEOBJECT,System Catalogs,Developer Guide", + "title":"PG_LARGEOBJECT", + "githuburl":"" + }, + { + "uri":"dws_04_0599.html", + "product_code":"dws", + "code":"358", + "des":"PG_LARGEOBJECT_METADATA records metadata associated with large objects. The actual large object data is stored in PG_LARGEOBJECT.", + "doc_type":"devg", + "kw":"PG_LARGEOBJECT_METADATA,System Catalogs,Developer Guide", + "title":"PG_LARGEOBJECT_METADATA", + "githuburl":"" + }, + { + "uri":"dws_04_0600.html", + "product_code":"dws", + "code":"359", + "des":"PG_NAMESPACE records the namespaces, that is, schema-related information.", + "doc_type":"devg", + "kw":"PG_NAMESPACE,System Catalogs,Developer Guide", + "title":"PG_NAMESPACE", + "githuburl":"" + }, + { + "uri":"dws_04_0601.html", + "product_code":"dws", + "code":"360", + "des":"PG_OBJECT records the creator, creation time, last modification time, and last analyzing time of objects of specified types (types existing in object_type).Only nor", + "doc_type":"devg", + "kw":"PG_OBJECT,System Catalogs,Developer Guide", + "title":"PG_OBJECT", + "githuburl":"" + }, + { + "uri":"dws_04_0602.html", + "product_code":"dws", + "code":"361", + "des":"PG_OBSSCANINFO defines the OBS runtime information scanned in cluster acceleration scenarios. 
Each record corresponds to a piece of runtime information of a foreign table", + "doc_type":"devg", + "kw":"PG_OBSSCANINFO,System Catalogs,Developer Guide", + "title":"PG_OBSSCANINFO", + "githuburl":"" + }, + { + "uri":"dws_04_0603.html", + "product_code":"dws", + "code":"362", + "des":"PG_OPCLASS defines index access method operator classes.Each operator class defines semantics for index columns of a particular data type and a particular index access me", + "doc_type":"devg", + "kw":"PG_OPCLASS,System Catalogs,Developer Guide", + "title":"PG_OPCLASS", + "githuburl":"" + }, + { + "uri":"dws_04_0604.html", + "product_code":"dws", + "code":"363", + "des":"PG_OPERATOR records information about operators.", + "doc_type":"devg", + "kw":"PG_OPERATOR,System Catalogs,Developer Guide", + "title":"PG_OPERATOR", + "githuburl":"" + }, + { + "uri":"dws_04_0605.html", + "product_code":"dws", + "code":"364", + "des":"PG_OPFAMILY defines operator families.Each operator family is a collection of operators and associated support routines that implement the semantics specified for a parti", + "doc_type":"devg", + "kw":"PG_OPFAMILY,System Catalogs,Developer Guide", + "title":"PG_OPFAMILY", + "githuburl":"" + }, + { + "uri":"dws_04_0606.html", + "product_code":"dws", + "code":"365", + "des":"PG_PARTITION records all partitioned tables, table partitions, toast tables on table partitions, and index partitions in the database. Partitioned index information is no", + "doc_type":"devg", + "kw":"PG_PARTITION,System Catalogs,Developer Guide", + "title":"PG_PARTITION", + "githuburl":"" + }, + { + "uri":"dws_04_0607.html", + "product_code":"dws", + "code":"366", + "des":"PG_PLTEMPLATE records template information for procedural languages.", + "doc_type":"devg", + "kw":"PG_PLTEMPLATE,System Catalogs,Developer Guide", + "title":"PG_PLTEMPLATE", + "githuburl":"" + }, + { + "uri":"dws_04_0608.html", + "product_code":"dws", + "code":"367", + "des":"PG_PROC records information about functions or procedures.Query the OID of a specified function. 
For example, obtain the OID 1295 of the justify_days function.Query wheth", + "doc_type":"devg", + "kw":"PG_PROC,System Catalogs,Developer Guide", + "title":"PG_PROC", + "githuburl":"" + }, + { + "uri":"dws_04_0609.html", + "product_code":"dws", + "code":"368", + "des":"PG_RANGE records information about range types.This is in addition to the types' entries in PG_TYPE.rngsubopc (plus rngcollation, if the element type is collatable) deter", + "doc_type":"devg", + "kw":"PG_RANGE,System Catalogs,Developer Guide", + "title":"PG_RANGE", + "githuburl":"" + }, + { + "uri":"dws_04_0610.html", + "product_code":"dws", + "code":"369", + "des":"PG_REDACTION_COLUMN records the information about the redacted columns.", + "doc_type":"devg", + "kw":"PG_REDACTION_COLUMN,System Catalogs,Developer Guide", + "title":"PG_REDACTION_COLUMN", + "githuburl":"" + }, + { + "uri":"dws_04_0611.html", + "product_code":"dws", + "code":"370", + "des":"PG_REDACTION_POLICY records information about the object to be redacted.", + "doc_type":"devg", + "kw":"PG_REDACTION_POLICY,System Catalogs,Developer Guide", + "title":"PG_REDACTION_POLICY", + "githuburl":"" + }, + { + "uri":"dws_04_0612.html", + "product_code":"dws", + "code":"371", + "des":"PG_RLSPOLICY displays the information about row-level access control policies.", + "doc_type":"devg", + "kw":"PG_RLSPOLICY,System Catalogs,Developer Guide", + "title":"PG_RLSPOLICY", + "githuburl":"" + }, + { + "uri":"dws_04_0613.html", + "product_code":"dws", + "code":"372", + "des":"PG_RESOURCE_POOL records the information about database resource pool.", + "doc_type":"devg", + "kw":"PG_RESOURCE_POOL,System Catalogs,Developer Guide", + "title":"PG_RESOURCE_POOL", + "githuburl":"" + }, + { + "uri":"dws_04_0614.html", + "product_code":"dws", + "code":"373", + "des":"PG_REWRITE records rewrite rules defined for tables and views.", + "doc_type":"devg", + "kw":"PG_REWRITE,System Catalogs,Developer Guide", + "title":"PG_REWRITE", + "githuburl":"" + }, + { + "uri":"dws_04_0615.html", + "product_code":"dws", + "code":"374", + "des":"PG_SECLABEL records security labels on database objects.See also PG_SHSECLABEL, which performs a similar function for security labels of database objects that are shared ", + "doc_type":"devg", + "kw":"PG_SECLABEL,System Catalogs,Developer Guide", + "title":"PG_SECLABEL", + "githuburl":"" + }, + { + "uri":"dws_04_0616.html", + "product_code":"dws", + "code":"375", + "des":"PG_SHDEPEND records the dependency relationships between database objects and shared objects, such as roles. This information allows GaussDB(DWS) to ensure that those obj", + "doc_type":"devg", + "kw":"PG_SHDEPEND,System Catalogs,Developer Guide", + "title":"PG_SHDEPEND", + "githuburl":"" + }, + { + "uri":"dws_04_0617.html", + "product_code":"dws", + "code":"376", + "des":"PG_SHDESCRIPTION records optional comments for shared database objects. Descriptions can be manipulated with the COMMENT command and viewed with psql's \\d commands.See al", + "doc_type":"devg", + "kw":"PG_SHDESCRIPTION,System Catalogs,Developer Guide", + "title":"PG_SHDESCRIPTION", + "githuburl":"" + }, + { + "uri":"dws_04_0618.html", + "product_code":"dws", + "code":"377", + "des":"PG_SHSECLABEL records security labels on shared database objects. 
Security labels can be manipulated with the SECURITY LABEL command.For an easier way to view security la", + "doc_type":"devg", + "kw":"PG_SHSECLABEL,System Catalogs,Developer Guide", + "title":"PG_SHSECLABEL", + "githuburl":"" + }, + { + "uri":"dws_04_0619.html", + "product_code":"dws", + "code":"378", + "des":"PG_STATISTIC records statistics about tables and index columns in a database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"PG_STATISTIC,System Catalogs,Developer Guide", + "title":"PG_STATISTIC", + "githuburl":"" + }, + { + "uri":"dws_04_0620.html", + "product_code":"dws", + "code":"379", + "des":"PG_STATISTIC_EXT records the extended statistics of tables in a database, such as statistics of multiple columns. Statistics of expressions will be supported later. You c", + "doc_type":"devg", + "kw":"PG_STATISTIC_EXT,System Catalogs,Developer Guide", + "title":"PG_STATISTIC_EXT", + "githuburl":"" + }, + { + "uri":"dws_04_0621.html", + "product_code":"dws", + "code":"380", + "des":"PG_SYNONYM records the mapping between synonym object names and other database object names.", + "doc_type":"devg", + "kw":"PG_SYNONYM,System Catalogs,Developer Guide", + "title":"PG_SYNONYM", + "githuburl":"" + }, + { + "uri":"dws_04_0622.html", + "product_code":"dws", + "code":"381", + "des":"PG_TABLESPACE records tablespace information.", + "doc_type":"devg", + "kw":"PG_TABLESPACE,System Catalogs,Developer Guide", + "title":"PG_TABLESPACE", + "githuburl":"" + }, + { + "uri":"dws_04_0623.html", + "product_code":"dws", + "code":"382", + "des":"PG_TRIGGER records the trigger information.", + "doc_type":"devg", + "kw":"PG_TRIGGER,System Catalogs,Developer Guide", + "title":"PG_TRIGGER", + "githuburl":"" + }, + { + "uri":"dws_04_0624.html", + "product_code":"dws", + "code":"383", + "des":"PG_TS_CONFIG records entries representing text search configurations. A configuration specifies a particular text search parser and a list of dictionaries to use for each", + "doc_type":"devg", + "kw":"PG_TS_CONFIG,System Catalogs,Developer Guide", + "title":"PG_TS_CONFIG", + "githuburl":"" + }, + { + "uri":"dws_04_0625.html", + "product_code":"dws", + "code":"384", + "des":"PG_TS_CONFIG_MAP records entries showing which text search dictionaries should be consulted, and in what order, for each output token type of each text search configurati", + "doc_type":"devg", + "kw":"PG_TS_CONFIG_MAP,System Catalogs,Developer Guide", + "title":"PG_TS_CONFIG_MAP", + "githuburl":"" + }, + { + "uri":"dws_04_0626.html", + "product_code":"dws", + "code":"385", + "des":"PG_TS_DICT records entries that define text search dictionaries. A dictionary depends on a text search template, which specifies all the implementation functions needed. ", + "doc_type":"devg", + "kw":"PG_TS_DICT,System Catalogs,Developer Guide", + "title":"PG_TS_DICT", + "githuburl":"" + }, + { + "uri":"dws_04_0627.html", + "product_code":"dws", + "code":"386", + "des":"PG_TS_PARSER records entries defining text search parsers. A parser splits input text into lexemes and assigns a token type to each lexeme. Since a parser must be impleme", + "doc_type":"devg", + "kw":"PG_TS_PARSER,System Catalogs,Developer Guide", + "title":"PG_TS_PARSER", + "githuburl":"" + }, + { + "uri":"dws_04_0628.html", + "product_code":"dws", + "code":"387", + "des":"PG_TS_TEMPLATE records entries defining text search templates. A template provides a framework for text search dictionaries. 
Since a template must be implemented by C fun", + "doc_type":"devg", + "kw":"PG_TS_TEMPLATE,System Catalogs,Developer Guide", + "title":"PG_TS_TEMPLATE", + "githuburl":"" + }, + { + "uri":"dws_04_0629.html", + "product_code":"dws", + "code":"388", + "des":"PG_TYPE records the information about data types.", + "doc_type":"devg", + "kw":"PG_TYPE,System Catalogs,Developer Guide", + "title":"PG_TYPE", + "githuburl":"" + }, + { + "uri":"dws_04_0630.html", + "product_code":"dws", + "code":"389", + "des":"PG_USER_MAPPING records the mappings from local users to remote users.It is accessible only to users with system administrator rights. You can use the view PG_USER_MAPPINGS to quer", + "doc_type":"devg", + "kw":"PG_USER_MAPPING,System Catalogs,Developer Guide", + "title":"PG_USER_MAPPING", + "githuburl":"" + }, + { + "uri":"dws_04_0631.html", + "product_code":"dws", + "code":"390", + "des":"PG_USER_STATUS records the states of users that access the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"PG_USER_STATUS,System Catalogs,Developer Guide", + "title":"PG_USER_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0632.html", + "product_code":"dws", + "code":"391", + "des":"PG_WORKLOAD_ACTION records information about query_band.", + "doc_type":"devg", + "kw":"PG_WORKLOAD_ACTION,System Catalogs,Developer Guide", + "title":"PG_WORKLOAD_ACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0633.html", + "product_code":"dws", + "code":"392", + "des":"PGXC_CLASS records the replication or distribution information for each table.", + "doc_type":"devg", + "kw":"PGXC_CLASS,System Catalogs,Developer Guide", + "title":"PGXC_CLASS", + "githuburl":"" + }, + { + "uri":"dws_04_0634.html", + "product_code":"dws", + "code":"393", + "des":"PGXC_GROUP records information about node groups.", + "doc_type":"devg", + "kw":"PGXC_GROUP,System Catalogs,Developer Guide", + "title":"PGXC_GROUP", + "githuburl":"" + }, + { + "uri":"dws_04_0635.html", + "product_code":"dws", + "code":"394", + "des":"PGXC_NODE records information about cluster nodes.Query the CN and DN information of the cluster:", + "doc_type":"devg", + "kw":"PGXC_NODE,System Catalogs,Developer Guide", + "title":"PGXC_NODE", + "githuburl":"" + }, + { + "uri":"dws_04_0639.html", + "product_code":"dws", + "code":"395", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"System Views", + "title":"System Views", + "githuburl":"" + }, + { + "uri":"dws_04_0640.html", + "product_code":"dws", + "code":"396", + "des":"ALL_ALL_TABLES displays the tables or views accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_ALL_TABLES,System Views,Developer Guide", + "title":"ALL_ALL_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0641.html", + "product_code":"dws", + "code":"397", + "des":"ALL_CONSTRAINTS displays information about constraints accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_CONSTRAINTS,System Views,Developer Guide", + "title":"ALL_CONSTRAINTS", + "githuburl":"" + }, + { + "uri":"dws_04_0642.html", + "product_code":"dws", + "code":"398", + "des":"ALL_CONS_COLUMNS displays information about constraint columns accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_CONS_COLUMNS,System Views,Developer Guide", + "title":"ALL_CONS_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0643.html", + "product_code":"dws", + "code":"399", + "des":"ALL_COL_COMMENTS displays the comment information about table columns accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_COL_COMMENTS,System Views,Developer Guide", + "title":"ALL_COL_COMMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0644.html", + "product_code":"dws", + "code":"400", + "des":"ALL_DEPENDENCIES displays dependencies between functions and advanced packages accessible to the current user.Currently in GaussDB(DWS), this table is empty without any r", + "doc_type":"devg", + "kw":"ALL_DEPENDENCIES,System Views,Developer Guide", + "title":"ALL_DEPENDENCIES", + "githuburl":"" + }, + { + "uri":"dws_04_0645.html", + "product_code":"dws", + "code":"401", + "des":"ALL_IND_COLUMNS displays all index columns accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_IND_COLUMNS,System Views,Developer Guide", + "title":"ALL_IND_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0646.html", + "product_code":"dws", + "code":"402", + "des":"ALL_IND_EXPRESSIONS displays information about the expression indexes accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_IND_EXPRESSIONS,System Views,Developer Guide", + "title":"ALL_IND_EXPRESSIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0647.html", + "product_code":"dws", + "code":"403", + "des":"ALL_INDEXES displays information about indexes accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_INDEXES,System Views,Developer Guide", + "title":"ALL_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0648.html", + "product_code":"dws", + "code":"404", + "des":"ALL_OBJECTS displays all database objects accessible to the current user.For details about the value range of last_ddl_time, see PG_OBJECT.", + "doc_type":"devg", + "kw":"ALL_OBJECTS,System Views,Developer Guide", + "title":"ALL_OBJECTS", + "githuburl":"" + }, + { + "uri":"dws_04_0649.html", + "product_code":"dws", + "code":"405", + "des":"ALL_PROCEDURES displays information about all stored procedures or functions accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_PROCEDURES,System Views,Developer Guide", + "title":"ALL_PROCEDURES", + "githuburl":"" + }, + { + "uri":"dws_04_0650.html", + "product_code":"dws", + "code":"406", + "des":"ALL_SEQUENCES displays all sequences accessible to the current user.", +
"doc_type":"devg", + "kw":"ALL_SEQUENCES,System Views,Developer Guide", + "title":"ALL_SEQUENCES", + "githuburl":"" + }, + { + "uri":"dws_04_0651.html", + "product_code":"dws", + "code":"407", + "des":"ALL_SOURCE displays information about stored procedures or functions accessible to the current user, and provides the columns defined by the stored procedures and functio", + "doc_type":"devg", + "kw":"ALL_SOURCE,System Views,Developer Guide", + "title":"ALL_SOURCE", + "githuburl":"" + }, + { + "uri":"dws_04_0652.html", + "product_code":"dws", + "code":"408", + "des":"ALL_SYNONYMS displays all synonyms accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_SYNONYMS,System Views,Developer Guide", + "title":"ALL_SYNONYMS", + "githuburl":"" + }, + { + "uri":"dws_04_0653.html", + "product_code":"dws", + "code":"409", + "des":"ALL_TAB_COLUMNS displays description information about columns of the tables accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_TAB_COLUMNS,System Views,Developer Guide", + "title":"ALL_TAB_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0654.html", + "product_code":"dws", + "code":"410", + "des":"ALL_TAB_COMMENTS displays comments about all tables and views accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_TAB_COMMENTS,System Views,Developer Guide", + "title":"ALL_TAB_COMMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0655.html", + "product_code":"dws", + "code":"411", + "des":"ALL_TABLES displays all the tables accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_TABLES,System Views,Developer Guide", + "title":"ALL_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0656.html", + "product_code":"dws", + "code":"412", + "des":"ALL_USERS displays all users of the database visible to the current user, however, it does not describe the users.", + "doc_type":"devg", + "kw":"ALL_USERS,System Views,Developer Guide", + "title":"ALL_USERS", + "githuburl":"" + }, + { + "uri":"dws_04_0657.html", + "product_code":"dws", + "code":"413", + "des":"ALL_VIEWS displays the description about all views accessible to the current user.", + "doc_type":"devg", + "kw":"ALL_VIEWS,System Views,Developer Guide", + "title":"ALL_VIEWS", + "githuburl":"" + }, + { + "uri":"dws_04_0658.html", + "product_code":"dws", + "code":"414", + "des":"DBA_DATA_FILES displays the description of database files. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_DATA_FILES,System Views,Developer Guide", + "title":"DBA_DATA_FILES", + "githuburl":"" + }, + { + "uri":"dws_04_0659.html", + "product_code":"dws", + "code":"415", + "des":"DBA_USERS displays all user names in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_USERS,System Views,Developer Guide", + "title":"DBA_USERS", + "githuburl":"" + }, + { + "uri":"dws_04_0660.html", + "product_code":"dws", + "code":"416", + "des":"DBA_COL_COMMENTS displays information about table colum comments in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_COL_COMMENTS,System Views,Developer Guide", + "title":"DBA_COL_COMMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0661.html", + "product_code":"dws", + "code":"417", + "des":"DBA_CONSTRAINTS displays information about table constraints in database. 
It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_CONSTRAINTS,System Views,Developer Guide", + "title":"DBA_CONSTRAINTS", + "githuburl":"" + }, + { + "uri":"dws_04_0662.html", + "product_code":"dws", + "code":"418", + "des":"DBA_CONS_COLUMNS displays information about constraint columns in database tables. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_CONS_COLUMNS,System Views,Developer Guide", + "title":"DBA_CONS_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0663.html", + "product_code":"dws", + "code":"419", + "des":"DBA_IND_COLUMNS displays column information about all indexes in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_IND_COLUMNS,System Views,Developer Guide", + "title":"DBA_IND_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0664.html", + "product_code":"dws", + "code":"420", + "des":"DBA_IND_EXPRESSIONS displays the information about expression indexes in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_IND_EXPRESSIONS,System Views,Developer Guide", + "title":"DBA_IND_EXPRESSIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0665.html", + "product_code":"dws", + "code":"421", + "des":"DBA_IND_PARTITIONS displays information about all index partitions in the database. Each index partition of a partitioned table in the database, if present, has a row of ", + "doc_type":"devg", + "kw":"DBA_IND_PARTITIONS,System Views,Developer Guide", + "title":"DBA_IND_PARTITIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0666.html", + "product_code":"dws", + "code":"422", + "des":"DBA_INDEXES displays all indexes in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_INDEXES,System Views,Developer Guide", + "title":"DBA_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0667.html", + "product_code":"dws", + "code":"423", + "des":"DBA_OBJECTS displays all database objects in the database. It is accessible only to users with system administrator rights.For details about the value ranges of last_ddl_", + "doc_type":"devg", + "kw":"DBA_OBJECTS,System Views,Developer Guide", + "title":"DBA_OBJECTS", + "githuburl":"" + }, + { + "uri":"dws_04_0668.html", + "product_code":"dws", + "code":"424", + "des":"DBA_PART_INDEXES displays information about all partitioned table indexes in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_PART_INDEXES,System Views,Developer Guide", + "title":"DBA_PART_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0669.html", + "product_code":"dws", + "code":"425", + "des":"DBA_PART_TABLES displays information about all partitioned tables in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_PART_TABLES,System Views,Developer Guide", + "title":"DBA_PART_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0670.html", + "product_code":"dws", + "code":"426", + "des":"DBA_PROCEDURES displays information about all stored procedures and functions in the database. 
It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_PROCEDURES,System Views,Developer Guide", + "title":"DBA_PROCEDURES", + "githuburl":"" + }, + { + "uri":"dws_04_0671.html", + "product_code":"dws", + "code":"427", + "des":"DBA_SEQUENCES displays information about all sequences in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_SEQUENCES,System Views,Developer Guide", + "title":"DBA_SEQUENCES", + "githuburl":"" + }, + { + "uri":"dws_04_0672.html", + "product_code":"dws", + "code":"428", + "des":"DBA_SOURCE displays all stored procedures or functions in the database, and it provides the columns defined by the stored procedures or functions. It is accessible only t", + "doc_type":"devg", + "kw":"DBA_SOURCE,System Views,Developer Guide", + "title":"DBA_SOURCE", + "githuburl":"" + }, + { + "uri":"dws_04_0673.html", + "product_code":"dws", + "code":"429", + "des":"DBA_SYNONYMS displays all synonyms in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_SYNONYMS,System Views,Developer Guide", + "title":"DBA_SYNONYMS", + "githuburl":"" + }, + { + "uri":"dws_04_0674.html", + "product_code":"dws", + "code":"430", + "des":"DBA_TAB_COLUMNS displays the columns of tables. Each column of a table in the database has a row in DBA_TAB_COLUMNS. It is accessible only to users with system administra", + "doc_type":"devg", + "kw":"DBA_TAB_COLUMNS,System Views,Developer Guide", + "title":"DBA_TAB_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0675.html", + "product_code":"dws", + "code":"431", + "des":"DBA_TAB_COMMENTS displays comments about all tables and views in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_TAB_COMMENTS,System Views,Developer Guide", + "title":"DBA_TAB_COMMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0676.html", + "product_code":"dws", + "code":"432", + "des":"DBA_TAB_PARTITIONS displays information about all partitions in the database.", + "doc_type":"devg", + "kw":"DBA_TAB_PARTITIONS,System Views,Developer Guide", + "title":"DBA_TAB_PARTITIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0677.html", + "product_code":"dws", + "code":"433", + "des":"DBA_TABLES displays all tables in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_TABLES,System Views,Developer Guide", + "title":"DBA_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0678.html", + "product_code":"dws", + "code":"434", + "des":"DBA_TABLESPACES displays information about available tablespaces. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_TABLESPACES,System Views,Developer Guide", + "title":"DBA_TABLESPACES", + "githuburl":"" + }, + { + "uri":"dws_04_0679.html", + "product_code":"dws", + "code":"435", + "des":"DBA_TRIGGERS displays information about triggers in the database. It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_TRIGGERS,System Views,Developer Guide", + "title":"DBA_TRIGGERS", + "githuburl":"" + }, + { + "uri":"dws_04_0680.html", + "product_code":"dws", + "code":"436", + "des":"DBA_VIEWS displays views in the database. 
It is accessible only to users with system administrator rights.", + "doc_type":"devg", + "kw":"DBA_VIEWS,System Views,Developer Guide", + "title":"DBA_VIEWS", + "githuburl":"" + }, + { + "uri":"dws_04_0681.html", + "product_code":"dws", + "code":"437", + "des":"DUAL is automatically created by the database based on the data dictionary. It has only one text column in only one row for storing expression calculation results. It is ", + "doc_type":"devg", + "kw":"DUAL,System Views,Developer Guide", + "title":"DUAL", + "githuburl":"" + }, + { + "uri":"dws_04_0682.html", + "product_code":"dws", + "code":"438", + "des":"GLOBAL_REDO_STAT displays the total statistics of XLOG redo operations on all nodes in a cluster. Except the avgiotim column (indicating the average redo write time of al", + "doc_type":"devg", + "kw":"GLOBAL_REDO_STAT,System Views,Developer Guide", + "title":"GLOBAL_REDO_STAT", + "githuburl":"" + }, + { + "uri":"dws_04_0683.html", + "product_code":"dws", + "code":"439", + "des":"GLOBAL_REL_IOSTAT displays the total disk I/O statistics of all nodes in a cluster. The name of each column in this view is the same as that in the GS_REL_IOSTAT view, bu", + "doc_type":"devg", + "kw":"GLOBAL_REL_IOSTAT,System Views,Developer Guide", + "title":"GLOBAL_REL_IOSTAT", + "githuburl":"" + }, + { + "uri":"dws_04_0684.html", + "product_code":"dws", + "code":"440", + "des":"GLOBAL_STAT_DATABASE displays the status and statistics of databases on all nodes in a cluster. When you query the GLOBAL_STAT_DATABASE view on a CN, the respective values", + "doc_type":"devg", + "kw":"GLOBAL_STAT_DATABASE,System Views,Developer Guide", + "title":"GLOBAL_STAT_DATABASE", + "githuburl":"" + }, + { + "uri":"dws_04_0685.html", + "product_code":"dws", + "code":"441", + "des":"GLOBAL_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in all workload Cgroups in a cluster, including the number of SELECT, UPDATE, INSER", + "doc_type":"devg", + "kw":"GLOBAL_WORKLOAD_SQL_COUNT,System Views,Developer Guide", + "title":"GLOBAL_WORKLOAD_SQL_COUNT", + "githuburl":"" + }, + { + "uri":"dws_04_0686.html", + "product_code":"dws", + "code":"442", + "des":"GLOBAL_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in all workload Cgroups in a cluster, including the maximum, minimum, average, ", + "doc_type":"devg", + "kw":"GLOBAL_WORKLOAD_SQL_ELAPSE_TIME,System Views,Developer Guide", + "title":"GLOBAL_WORKLOAD_SQL_ELAPSE_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0687.html", + "product_code":"dws", + "code":"443", + "des":"GLOBAL_WORKLOAD_TRANSACTION provides the total transaction information about workload Cgroups on all CNs in the cluster. 
This view is accessible only to users with system", + "doc_type":"devg", + "kw":"GLOBAL_WORKLOAD_TRANSACTION,System Views,Developer Guide", + "title":"GLOBAL_WORKLOAD_TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0688.html", + "product_code":"dws", + "code":"444", + "des":"GS_ALL_CONTROL_GROUP_INFO displays all Cgroup information in a database.", + "doc_type":"devg", + "kw":"GS_ALL_CONTROL_GROUP_INFO,System Views,Developer Guide", + "title":"GS_ALL_CONTROL_GROUP_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0689.html", + "product_code":"dws", + "code":"445", + "des":"GS_CLUSTER_RESOURCE_INFO displays a DN resource summary.", + "doc_type":"devg", + "kw":"GS_CLUSTER_RESOURCE_INFO,System Views,Developer Guide", + "title":"GS_CLUSTER_RESOURCE_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0690.html", + "product_code":"dws", + "code":"446", + "des":"The database parses each received SQL text string and generates an internal parsing tree. The database traverses the parsing tree and ignores constant values in the parsi", + "doc_type":"devg", + "kw":"GS_INSTR_UNIQUE_SQL,System Views,Developer Guide", + "title":"GS_INSTR_UNIQUE_SQL", + "githuburl":"" + }, + { + "uri":"dws_04_0691.html", + "product_code":"dws", + "code":"447", + "des":"GS_REL_IOSTAT displays disk I/O statistics on the current node. In the current version, only one page is read or written in each read or write operation. Therefore, the n", + "doc_type":"devg", + "kw":"GS_REL_IOSTAT,System Views,Developer Guide", + "title":"GS_REL_IOSTAT", + "githuburl":"" + }, + { + "uri":"dws_04_0692.html", + "product_code":"dws", + "code":"448", + "des":"The GS_NODE_STAT_RESET_TIME view provides the reset time of statistics on the current node and returns the timestamp with the time zone. For details, see the get_node_sta", + "doc_type":"devg", + "kw":"GS_NODE_STAT_RESET_TIME,System Views,Developer Guide", + "title":"GS_NODE_STAT_RESET_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0693.html", + "product_code":"dws", + "code":"449", + "des":"GS_SESSION_CPU_STATISTICS displays load management information about CPU usage of ongoing complex jobs executed by the current user.", + "doc_type":"devg", + "kw":"GS_SESSION_CPU_STATISTICS,System Views,Developer Guide", + "title":"GS_SESSION_CPU_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0694.html", + "product_code":"dws", + "code":"450", + "des":"GS_SESSION_MEMORY_STATISTICS displays load management information about memory usage of ongoing complex jobs executed by the current user.", + "doc_type":"devg", + "kw":"GS_SESSION_MEMORY_STATISTICS,System Views,Developer Guide", + "title":"GS_SESSION_MEMORY_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0695.html", + "product_code":"dws", + "code":"451", + "des":"GS_SQL_COUNT displays statistics about the five types of statements (SELECT, INSERT, UPDATE, DELETE, and MERGE INTO) executed on the current node of the database, includi", + "doc_type":"devg", + "kw":"GS_SQL_COUNT,System Views,Developer Guide", + "title":"GS_SQL_COUNT", + "githuburl":"" + }, + { + "uri":"dws_04_0696.html", + "product_code":"dws", + "code":"452", + "des":"GS_WAIT_EVENTS displays statistics about waiting status and events on the current node. The values of statistical columns in this view are accumulated only when the enable", + "doc_type":"devg", + "kw":"GS_WAIT_EVENTS,System Views,Developer Guide", + "title":"GS_WAIT_EVENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0701.html", + "product_code":"dws", + "code":"453", + "des":"This view displays the 
execution information about operators in the query statements that have been executed on the current CN. The information comes from the system cata", + "doc_type":"devg", + "kw":"GS_WLM_OPERAROR_INFO,System Views,Developer Guide", + "title":"GS_WLM_OPERAROR_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0702.html", + "product_code":"dws", + "code":"454", + "des":"This view displays the records of operators in jobs that have been executed by the current user on the current CN. This view is used by Database Manager to query data from", + "doc_type":"devg", + "kw":"GS_WLM_OPERATOR_HISTORY,System Views,Developer Guide", + "title":"GS_WLM_OPERATOR_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0703.html", + "product_code":"dws", + "code":"455", + "des":"GS_WLM_OPERATOR_STATISTICS displays the operators of the jobs that are being executed by the current user.", + "doc_type":"devg", + "kw":"GS_WLM_OPERATOR_STATISTICS,System Views,Developer Guide", + "title":"GS_WLM_OPERATOR_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0704.html", + "product_code":"dws", + "code":"456", + "des":"This view displays the execution information about the query statements that have been executed on the current CN. The information comes from the system catalog dbms_om. ", + "doc_type":"devg", + "kw":"GS_WLM_SESSION_INFO,System Views,Developer Guide", + "title":"GS_WLM_SESSION_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0705.html", + "product_code":"dws", + "code":"457", + "des":"GS_WLM_SESSION_HISTORY displays load management information about a completed job executed by the current user on the current CN. This view is used by Database Manager to", + "doc_type":"devg", + "kw":"GS_WLM_SESSION_HISTORY,System Views,Developer Guide", + "title":"GS_WLM_SESSION_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0706.html", + "product_code":"dws", + "code":"458", + "des":"GS_WLM_SESSION_STATISTICS displays load management information about jobs being executed by the current user on the current CN.", + "doc_type":"devg", + "kw":"GS_WLM_SESSION_STATISTICS,System Views,Developer Guide", + "title":"GS_WLM_SESSION_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0708.html", + "product_code":"dws", + "code":"459", + "des":"GS_WLM_SQL_ALLOW displays the configured resource management SQL whitelist, including the default SQL whitelist and the SQL whitelist configured using the GUC parameter w", + "doc_type":"devg", + "kw":"GS_WLM_SQL_ALLOW,System Views,Developer Guide", + "title":"GS_WLM_SQL_ALLOW", + "githuburl":"" + }, + { + "uri":"dws_04_0709.html", + "product_code":"dws", + "code":"460", + "des":"GS_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in workload Cgroups on the current node, including the number of SELECT, UPDATE, INSERT", + "doc_type":"devg", + "kw":"GS_WORKLOAD_SQL_COUNT,System Views,Developer Guide", + "title":"GS_WORKLOAD_SQL_COUNT", + "githuburl":"" + }, + { + "uri":"dws_04_0710.html", + "product_code":"dws", + "code":"461", + "des":"GS_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in workload Cgroups on the current node, including the maximum, minimum, average, a", + "doc_type":"devg", + "kw":"GS_WORKLOAD_SQL_ELAPSE_TIME,System Views,Developer Guide", + "title":"GS_WORKLOAD_SQL_ELAPSE_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0711.html", + "product_code":"dws", + "code":"462", + "des":"GS_WORKLOAD_TRANSACTION provides transaction information about workload Cgroups on a single CN. 
The database records the number of times that each workload Cgroup commits", + "doc_type":"devg", + "kw":"GS_WORKLOAD_TRANSACTION,System Views,Developer Guide", + "title":"GS_WORKLOAD_TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0712.html", + "product_code":"dws", + "code":"463", + "des":"GS_STAT_DB_CU displays CU hits in a database and in each node in a cluster. You can clear it using gs_stat_reset().", + "doc_type":"devg", + "kw":"GS_STAT_DB_CU,System Views,Developer Guide", + "title":"GS_STAT_DB_CU", + "githuburl":"" + }, + { + "uri":"dws_04_0713.html", + "product_code":"dws", + "code":"464", + "des":"GS_STAT_SESSION_CU displays the CU hit rate of running sessions on each node in a cluster. This data about a session is cleared when you exit this session or restart the ", + "doc_type":"devg", + "kw":"GS_STAT_SESSION_CU,System Views,Developer Guide", + "title":"GS_STAT_SESSION_CU", + "githuburl":"" + }, + { + "uri":"dws_04_0714.html", + "product_code":"dws", + "code":"465", + "des":"GS_TOTAL_NODEGROUP_MEMORY_DETAIL displays statistics about memory usage of the logical cluster that the current database belongs to in the unit of MB.", + "doc_type":"devg", + "kw":"GS_TOTAL_NODEGROUP_MEMORY_DETAIL,System Views,Developer Guide", + "title":"GS_TOTAL_NODEGROUP_MEMORY_DETAIL", + "githuburl":"" + }, + { + "uri":"dws_04_0715.html", + "product_code":"dws", + "code":"466", + "des":"GS_USER_TRANSACTION provides transaction information about users on a single CN. The database records the number of times that each user commits and rolls back transactio", + "doc_type":"devg", + "kw":"GS_USER_TRANSACTION,System Views,Developer Guide", + "title":"GS_USER_TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0716.html", + "product_code":"dws", + "code":"467", + "des":"GS_VIEW_DEPENDENCY allows you to query the direct dependencies of all views visible to the current user.", + "doc_type":"devg", + "kw":"GS_VIEW_DEPENDENCY,System Views,Developer Guide", + "title":"GS_VIEW_DEPENDENCY", + "githuburl":"" + }, + { + "uri":"dws_04_0948.html", + "product_code":"dws", + "code":"468", + "des":"GS_VIEW_DEPENDENCY_PATH allows you to query the direct dependencies of all views visible to the current user. If the base table on which the view depends exists and the d", + "doc_type":"devg", + "kw":"GS_VIEW_DEPENDENCY_PATH,System Views,Developer Guide", + "title":"GS_VIEW_DEPENDENCY_PATH", + "githuburl":"" + }, + { + "uri":"dws_04_0717.html", + "product_code":"dws", + "code":"469", + "des":"GS_VIEW_INVALID queries all unavailable views visible to the current user. 
If the base table, function, or synonym that the view depends on is abnormal, the validtype col", + "doc_type":"devg", + "kw":"GS_VIEW_INVALID,System Views,Developer Guide", + "title":"GS_VIEW_INVALID", + "githuburl":"" + }, + { + "uri":"dws_04_0998.html", + "product_code":"dws", + "code":"470", + "des":"MPP_TABLES displays information about tables in PGXC_CLASS.", + "doc_type":"devg", + "kw":"MPP_TABLES,System Views,Developer Guide", + "title":"MPP_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0718.html", + "product_code":"dws", + "code":"471", + "des":"PG_AVAILABLE_EXTENSION_VERSIONS displays the extension versions of certain database features.", + "doc_type":"devg", + "kw":"PG_AVAILABLE_EXTENSION_VERSIONS,System Views,Developer Guide", + "title":"PG_AVAILABLE_EXTENSION_VERSIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0719.html", + "product_code":"dws", + "code":"472", + "des":"PG_AVAILABLE_EXTENSIONS displays the extended information about certain database features.", + "doc_type":"devg", + "kw":"PG_AVAILABLE_EXTENSIONS,System Views,Developer Guide", + "title":"PG_AVAILABLE_EXTENSIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0720.html", + "product_code":"dws", + "code":"473", + "des":"On any normal node in a cluster, PG_BULKLOAD_STATISTICS displays the execution status of the import and export services. Each import or export service corresponds to a re", + "doc_type":"devg", + "kw":"PG_BULKLOAD_STATISTICS,System Views,Developer Guide", + "title":"PG_BULKLOAD_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0721.html", + "product_code":"dws", + "code":"474", + "des":"PG_COMM_CLIENT_INFO stores the client connection information of a single node. (You can query this view on a DN to view the information about the connection between the C", + "doc_type":"devg", + "kw":"PG_COMM_CLIENT_INFO,System Views,Developer Guide", + "title":"PG_COMM_CLIENT_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0722.html", + "product_code":"dws", + "code":"475", + "des":"PG_COMM_DELAY displays the communication library delay status for a single DN.", + "doc_type":"devg", + "kw":"PG_COMM_DELAY,System Views,Developer Guide", + "title":"PG_COMM_DELAY", + "githuburl":"" + }, + { + "uri":"dws_04_0723.html", + "product_code":"dws", + "code":"476", + "des":"PG_COMM_STATUS displays the communication library status for a single DN.", + "doc_type":"devg", + "kw":"PG_COMM_STATUS,System Views,Developer Guide", + "title":"PG_COMM_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0724.html", + "product_code":"dws", + "code":"477", + "des":"PG_COMM_RECV_STREAM displays the receiving stream status of all the communication libraries for a single DN.", + "doc_type":"devg", + "kw":"PG_COMM_RECV_STREAM,System Views,Developer Guide", + "title":"PG_COMM_RECV_STREAM", + "githuburl":"" + }, + { + "uri":"dws_04_0725.html", + "product_code":"dws", + "code":"478", + "des":"PG_COMM_SEND_STREAM displays the sending stream status of all the communication libraries for a single DN.", + "doc_type":"devg", + "kw":"PG_COMM_SEND_STREAM,System Views,Developer Guide", + "title":"PG_COMM_SEND_STREAM", + "githuburl":"" + }, + { + "uri":"dws_04_0726.html", + "product_code":"dws", + "code":"479", + "des":"PG_CONTROL_GROUP_CONFIG displays the Cgroup configuration information in the system.", + "doc_type":"devg", + "kw":"PG_CONTROL_GROUP_CONFIG,System Views,Developer Guide", + "title":"PG_CONTROL_GROUP_CONFIG", + "githuburl":"" + }, + { + "uri":"dws_04_0727.html", + "product_code":"dws", + "code":"480", + "des":"PG_CURSORS displays 
the cursors that are currently available.", + "doc_type":"devg", + "kw":"PG_CURSORS,System Views,Developer Guide", + "title":"PG_CURSORS", + "githuburl":"" + }, + { + "uri":"dws_04_0728.html", + "product_code":"dws", + "code":"481", + "des":"PG_EXT_STATS displays extension statistics stored in the PG_STATISTIC_EXT table. Extension statistics are statistics on multiple columns.", + "doc_type":"devg", + "kw":"PG_EXT_STATS,System Views,Developer Guide", + "title":"PG_EXT_STATS", + "githuburl":"" + }, + { + "uri":"dws_04_0729.html", + "product_code":"dws", + "code":"482", + "des":"PG_GET_INVALID_BACKENDS displays the information about backend threads on the CN that are connected to the current standby DN.", + "doc_type":"devg", + "kw":"PG_GET_INVALID_BACKENDS,System Views,Developer Guide", + "title":"PG_GET_INVALID_BACKENDS", + "githuburl":"" + }, + { + "uri":"dws_04_0730.html", + "product_code":"dws", + "code":"483", + "des":"PG_GET_SENDERS_CATCHUP_TIME displays the catchup information of the currently active primary/standby instance sending thread on a single DN.", + "doc_type":"devg", + "kw":"PG_GET_SENDERS_CATCHUP_TIME,System Views,Developer Guide", + "title":"PG_GET_SENDERS_CATCHUP_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0731.html", + "product_code":"dws", + "code":"484", + "des":"PG_GROUP displays the database role authentication and the relationship between roles.", + "doc_type":"devg", + "kw":"PG_GROUP,System Views,Developer Guide", + "title":"PG_GROUP", + "githuburl":"" + }, + { + "uri":"dws_04_0732.html", + "product_code":"dws", + "code":"485", + "des":"PG_INDEXES provides access to useful information about each index in the database.", + "doc_type":"devg", + "kw":"PG_INDEXES,System Views,Developer Guide", + "title":"PG_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0733.html", + "product_code":"dws", + "code":"486", + "des":"The PG_JOB view replaces the PG_JOB system catalog in earlier versions and provides forward compatibility with earlier versions. The original PG_JOB system catalog is cha", + "doc_type":"devg", + "kw":"PG_JOB,System Views,Developer Guide", + "title":"PG_JOB", + "githuburl":"" + }, + { + "uri":"dws_04_0734.html", + "product_code":"dws", + "code":"487", + "des":"The PG_JOB_PROC view replaces the PG_JOB_PROC system catalog in earlier versions and provides forward compatibility with earlier versions. 
The original PG_JOB_PROC and PG", + "doc_type":"devg", + "kw":"PG_JOB_PROC,System Views,Developer Guide", + "title":"PG_JOB_PROC", + "githuburl":"" + }, + { + "uri":"dws_04_0735.html", + "product_code":"dws", + "code":"488", + "des":"PG_JOB_SINGLE displays job information about the current node.", + "doc_type":"devg", + "kw":"PG_JOB_SINGLE,System Views,Developer Guide", + "title":"PG_JOB_SINGLE", + "githuburl":"" + }, + { + "uri":"dws_04_0736.html", + "product_code":"dws", + "code":"489", + "des":"PG_LIFECYCLE_DATA_DISTRIBUTE displays the distribution of cold and hot data in a multi-temperature table of OBS.", + "doc_type":"devg", + "kw":"PG_LIFECYCLE_DATA_DISTRIBUTE,System Views,Developer Guide", + "title":"PG_LIFECYCLE_DATA_DISTRIBUTE", + "githuburl":"" + }, + { + "uri":"dws_04_0737.html", + "product_code":"dws", + "code":"490", + "des":"PG_LOCKS displays information about the locks held by open transactions.", + "doc_type":"devg", + "kw":"PG_LOCKS,System Views,Developer Guide", + "title":"PG_LOCKS", + "githuburl":"" + }, + { + "uri":"dws_04_0738.html", + "product_code":"dws", + "code":"491", + "des":"PG_NODE_ENV displays the environmental variable information about the current node.", + "doc_type":"devg", + "kw":"PG_NODE_ENV,System Views,Developer Guide", + "title":"PG_NODE_ENV", + "githuburl":"" + }, + { + "uri":"dws_04_0739.html", + "product_code":"dws", + "code":"492", + "des":"PG_OS_THREADS displays the status information about all the threads under the current node.", + "doc_type":"devg", + "kw":"PG_OS_THREADS,System Views,Developer Guide", + "title":"PG_OS_THREADS", + "githuburl":"" + }, + { + "uri":"dws_04_0740.html", + "product_code":"dws", + "code":"493", + "des":"PG_POOLER_STATUS displays the cache connection status in the pooler. PG_POOLER_STATUS can be queried only on the CN, and displays the connection cache information about the po", + "doc_type":"devg", + "kw":"PG_POOLER_STATUS,System Views,Developer Guide", + "title":"PG_POOLER_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0741.html", + "product_code":"dws", + "code":"494", + "des":"PG_PREPARED_STATEMENTS displays all prepared statements that are available in the current session.", + "doc_type":"devg", + "kw":"PG_PREPARED_STATEMENTS,System Views,Developer Guide", + "title":"PG_PREPARED_STATEMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0742.html", + "product_code":"dws", + "code":"495", + "des":"PG_PREPARED_XACTS displays information about transactions that are currently prepared for two-phase commit.", + "doc_type":"devg", + "kw":"PG_PREPARED_XACTS,System Views,Developer Guide", + "title":"PG_PREPARED_XACTS", + "githuburl":"" + }, + { + "uri":"dws_04_0743.html", + "product_code":"dws", + "code":"496", + "des":"PG_QUERYBAND_ACTION displays information about the object associated with query_band and the query_band query order.", + "doc_type":"devg", + "kw":"PG_QUERYBAND_ACTION,System Views,Developer Guide", + "title":"PG_QUERYBAND_ACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0744.html", + "product_code":"dws", + "code":"497", + "des":"PG_REPLICATION_SLOTS displays the replication node information.", + "doc_type":"devg", + "kw":"PG_REPLICATION_SLOTS,System Views,Developer Guide", + "title":"PG_REPLICATION_SLOTS", + "githuburl":"" + }, + { + "uri":"dws_04_0745.html", + "product_code":"dws", + "code":"498", + "des":"PG_ROLES displays information about database roles.", + "doc_type":"devg", + "kw":"PG_ROLES,System Views,Developer Guide", + "title":"PG_ROLES", + "githuburl":"" + }, + { + "uri":"dws_04_0746.html", 
"product_code":"dws", + "code":"499", + "des":"PG_RULES displays information about rewrite rules.", + "doc_type":"devg", + "kw":"PG_RULES,System Views,Developer Guide", + "title":"PG_RULES", + "githuburl":"" + }, + { + "uri":"dws_04_0747.html", + "product_code":"dws", + "code":"500", + "des":"PG_RUNNING_XACTS displays the running transaction information on the current node.", + "doc_type":"devg", + "kw":"PG_RUNNING_XACTS,System Views,Developer Guide", + "title":"PG_RUNNING_XACTS", + "githuburl":"" + }, + { + "uri":"dws_04_0748.html", + "product_code":"dws", + "code":"501", + "des":"PG_SECLABELS displays information about security labels.", + "doc_type":"devg", + "kw":"PG_SECLABELS,System Views,Developer Guide", + "title":"PG_SECLABELS", + "githuburl":"" + }, + { + "uri":"dws_04_0749.html", + "product_code":"dws", + "code":"502", + "des":"PG_SESSION_WLMSTAT displays the corresponding load management information about the task currently executed by the user.", + "doc_type":"devg", + "kw":"PG_SESSION_WLMSTAT,System Views,Developer Guide", + "title":"PG_SESSION_WLMSTAT", + "githuburl":"" + }, + { + "uri":"dws_04_0750.html", + "product_code":"dws", + "code":"503", + "des":"PG_SESSION_IOSTAT displays the I/O load management information about the task currently executed by the user.IOPS is counted by ones for column storage and by thousands f", + "doc_type":"devg", + "kw":"PG_SESSION_IOSTAT,System Views,Developer Guide", + "title":"PG_SESSION_IOSTAT", + "githuburl":"" + }, + { + "uri":"dws_04_0751.html", + "product_code":"dws", + "code":"504", + "des":"PG_SETTINGS displays information about parameters of the running database.", + "doc_type":"devg", + "kw":"PG_SETTINGS,System Views,Developer Guide", + "title":"PG_SETTINGS", + "githuburl":"" + }, + { + "uri":"dws_04_0752.html", + "product_code":"dws", + "code":"505", + "des":"PG_SHADOW displays properties of all roles that are marked as rolcanlogin in PG_AUTHID.The name stems from the fact that this table should not be readable by the public s", + "doc_type":"devg", + "kw":"PG_SHADOW,System Views,Developer Guide", + "title":"PG_SHADOW", + "githuburl":"" + }, + { + "uri":"dws_04_0753.html", + "product_code":"dws", + "code":"506", + "des":"PG_SHARED_MEMORY_DETAIL displays usage information about all the shared memory contexts.", + "doc_type":"devg", + "kw":"PG_SHARED_MEMORY_DETAIL,System Views,Developer Guide", + "title":"PG_SHARED_MEMORY_DETAIL", + "githuburl":"" + }, + { + "uri":"dws_04_0754.html", + "product_code":"dws", + "code":"507", + "des":"PG_STATS displays the single-column statistics stored in the pg_statistic table.", + "doc_type":"devg", + "kw":"PG_STATS,System Views,Developer Guide", + "title":"PG_STATS", + "githuburl":"" + }, + { + "uri":"dws_04_0755.html", + "product_code":"dws", + "code":"508", + "des":"PG_STAT_ACTIVITY displays information about the current user's queries.", + "doc_type":"devg", + "kw":"PG_STAT_ACTIVITY,System Views,Developer Guide", + "title":"PG_STAT_ACTIVITY", + "githuburl":"" + }, + { + "uri":"dws_04_0757.html", + "product_code":"dws", + "code":"509", + "des":"PG_STAT_ALL_INDEXES displays access informaton about all indexes in the database, with information about each index displayed in a row.Indexes can be used via either simp", + "doc_type":"devg", + "kw":"PG_STAT_ALL_INDEXES,System Views,Developer Guide", + "title":"PG_STAT_ALL_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0758.html", + "product_code":"dws", + "code":"510", + "des":"PG_STAT_ALL_TABLES displays access information about all rows in all 
tables (including TOAST tables) in the database.", + "doc_type":"devg", + "kw":"PG_STAT_ALL_TABLES,System Views,Developer Guide", + "title":"PG_STAT_ALL_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0759.html", + "product_code":"dws", + "code":"511", + "des":"PG_STAT_BAD_BLOCK displays statistics about page or CU verification failures after a node is started.", + "doc_type":"devg", + "kw":"PG_STAT_BAD_BLOCK,System Views,Developer Guide", + "title":"PG_STAT_BAD_BLOCK", + "githuburl":"" + }, + { + "uri":"dws_04_0760.html", + "product_code":"dws", + "code":"512", + "des":"PG_STAT_BGWRITER displays statistics about the background writer process's activity.", + "doc_type":"devg", + "kw":"PG_STAT_BGWRITER,System Views,Developer Guide", + "title":"PG_STAT_BGWRITER", + "githuburl":"" + }, + { + "uri":"dws_04_0761.html", + "product_code":"dws", + "code":"513", + "des":"PG_STAT_DATABASE displays the status and statistics of each database on the current node.", + "doc_type":"devg", + "kw":"PG_STAT_DATABASE,System Views,Developer Guide", + "title":"PG_STAT_DATABASE", + "githuburl":"" + }, + { + "uri":"dws_04_0762.html", + "product_code":"dws", + "code":"514", + "des":"PG_STAT_DATABASE_CONFLICTS displays statistics about database conflicts.", + "doc_type":"devg", + "kw":"PG_STAT_DATABASE_CONFLICTS,System Views,Developer Guide", + "title":"PG_STAT_DATABASE_CONFLICTS", + "githuburl":"" + }, + { + "uri":"dws_04_0763.html", + "product_code":"dws", + "code":"515", + "des":"PG_STAT_GET_MEM_MBYTES_RESERVED displays the current activity information of a thread stored in memory. You need to specify the thread ID (pid in PG_STAT_ACTIVITY) for qu", + "doc_type":"devg", + "kw":"PG_STAT_GET_MEM_MBYTES_RESERVED,System Views,Developer Guide", + "title":"PG_STAT_GET_MEM_MBYTES_RESERVED", + "githuburl":"" + }, + { + "uri":"dws_04_0764.html", + "product_code":"dws", + "code":"516", + "des":"PG_STAT_USER_FUNCTIONS displays user-defined function status information in the namespace. 
(The language of the function is a non-internal language.)", + "doc_type":"devg", + "kw":"PG_STAT_USER_FUNCTIONS,System Views,Developer Guide", + "title":"PG_STAT_USER_FUNCTIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0765.html", + "product_code":"dws", + "code":"517", + "des":"PG_STAT_USER_INDEXES displays information about the index status of user-defined ordinary tables and TOAST tables.", + "doc_type":"devg", + "kw":"PG_STAT_USER_INDEXES,System Views,Developer Guide", + "title":"PG_STAT_USER_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0766.html", + "product_code":"dws", + "code":"518", + "des":"PG_STAT_USER_TABLES displays status information about user-defined ordinary tables and TOAST tables in all namespaces.", + "doc_type":"devg", + "kw":"PG_STAT_USER_TABLES,System Views,Developer Guide", + "title":"PG_STAT_USER_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0767.html", + "product_code":"dws", + "code":"519", + "des":"PG_STAT_REPLICATION displays information about log synchronization status, such as the locations of the sender sending logs and the receiver receiving logs.", + "doc_type":"devg", + "kw":"PG_STAT_REPLICATION,System Views,Developer Guide", + "title":"PG_STAT_REPLICATION", + "githuburl":"" + }, + { + "uri":"dws_04_0768.html", + "product_code":"dws", + "code":"520", + "des":"PG_STAT_SYS_INDEXES displays the index status information about all the system catalogs in the pg_catalog and information_schema schemas.", + "doc_type":"devg", + "kw":"PG_STAT_SYS_INDEXES,System Views,Developer Guide", + "title":"PG_STAT_SYS_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0769.html", + "product_code":"dws", + "code":"521", + "des":"PG_STAT_SYS_TABLES displays the statistics about the system catalogs of all the namespaces in pg_catalog and information_schema schemas.", + "doc_type":"devg", + "kw":"PG_STAT_SYS_TABLES,System Views,Developer Guide", + "title":"PG_STAT_SYS_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0770.html", + "product_code":"dws", + "code":"522", + "des":"PG_STAT_XACT_ALL_TABLES displays the transaction status information about all ordinary tables and TOAST tables in the namespaces.", + "doc_type":"devg", + "kw":"PG_STAT_XACT_ALL_TABLES,System Views,Developer Guide", + "title":"PG_STAT_XACT_ALL_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0771.html", + "product_code":"dws", + "code":"523", + "des":"PG_STAT_XACT_SYS_TABLES displays the transaction status information of the system catalog in the namespace.", + "doc_type":"devg", + "kw":"PG_STAT_XACT_SYS_TABLES,System Views,Developer Guide", + "title":"PG_STAT_XACT_SYS_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0772.html", + "product_code":"dws", + "code":"524", + "des":"PG_STAT_XACT_USER_FUNCTIONS displays statistics about function executions, with statistics about each execution displayed in a row.", + "doc_type":"devg", + "kw":"PG_STAT_XACT_USER_FUNCTIONS,System Views,Developer Guide", + "title":"PG_STAT_XACT_USER_FUNCTIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0773.html", + "product_code":"dws", + "code":"525", + "des":"PG_STAT_XACT_USER_TABLES displays the transaction status information of the user table in the namespace.", + "doc_type":"devg", + "kw":"PG_STAT_XACT_USER_TABLES,System Views,Developer Guide", + "title":"PG_STAT_XACT_USER_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0774.html", + "product_code":"dws", + "code":"526", + "des":"PG_STATIO_ALL_INDEXES contains one row for each index in the current database, showing I/O statistics about accesses to that 
specific index.", + "doc_type":"devg", + "kw":"PG_STATIO_ALL_INDEXES,System Views,Developer Guide", + "title":"PG_STATIO_ALL_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0775.html", + "product_code":"dws", + "code":"527", + "des":"PG_STATIO_ALL_SEQUENCES contains one row for each sequence in the current database, showing I/O statistics about accesses to that specific sequence.", + "doc_type":"devg", + "kw":"PG_STATIO_ALL_SEQUENCES,System Views,Developer Guide", + "title":"PG_STATIO_ALL_SEQUENCES", + "githuburl":"" + }, + { + "uri":"dws_04_0776.html", + "product_code":"dws", + "code":"528", + "des":"PG_STATIO_ALL_TABLES contains one row for each table in the current database (including TOAST tables), showing I/O statistics about accesses to that specific table.", + "doc_type":"devg", + "kw":"PG_STATIO_ALL_TABLES,System Views,Developer Guide", + "title":"PG_STATIO_ALL_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0777.html", + "product_code":"dws", + "code":"529", + "des":"PG_STATIO_SYS_INDEXES displays the I/O status information about all system catalog indexes in the namespace.", + "doc_type":"devg", + "kw":"PG_STATIO_SYS_INDEXES,System Views,Developer Guide", + "title":"PG_STATIO_SYS_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0778.html", + "product_code":"dws", + "code":"530", + "des":"PG_STATIO_SYS_SEQUENCES displays the I/O status information about all the system sequences in the namespace.", + "doc_type":"devg", + "kw":"PG_STATIO_SYS_SEQUENCES,System Views,Developer Guide", + "title":"PG_STATIO_SYS_SEQUENCES", + "githuburl":"" + }, + { + "uri":"dws_04_0779.html", + "product_code":"dws", + "code":"531", + "des":"PG_STATIO_SYS_TABLES displays the I/O status information about all the system catalogs in the namespace.", + "doc_type":"devg", + "kw":"PG_STATIO_SYS_TABLES,System Views,Developer Guide", + "title":"PG_STATIO_SYS_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0780.html", + "product_code":"dws", + "code":"532", + "des":"PG_STATIO_USER_INDEXES displays the I/O status information about all the user relationship table indexes in the namespace.", + "doc_type":"devg", + "kw":"PG_STATIO_USER_INDEXES,System Views,Developer Guide", + "title":"PG_STATIO_USER_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0781.html", + "product_code":"dws", + "code":"533", + "des":"PG_STATIO_USER_SEQUENCES displays the I/O status information about all the user relation table sequences in the namespace.", + "doc_type":"devg", + "kw":"PG_STATIO_USER_SEQUENCES,System Views,Developer Guide", + "title":"PG_STATIO_USER_SEQUENCES", + "githuburl":"" + }, + { + "uri":"dws_04_0782.html", + "product_code":"dws", + "code":"534", + "des":"PG_STATIO_USER_TABLES displays the I/O status information about all the user relation tables in the namespace.", + "doc_type":"devg", + "kw":"PG_STATIO_USER_TABLES,System Views,Developer Guide", + "title":"PG_STATIO_USER_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0783.html", + "product_code":"dws", + "code":"535", + "des":"PG_THREAD_WAIT_STATUS allows you to query the block waiting status of the backend threads and auxiliary threads of the current instance. The waiting statuses in the wait_s", + "doc_type":"devg", + "kw":"PG_THREAD_WAIT_STATUS,System Views,Developer Guide", + "title":"PG_THREAD_WAIT_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0784.html", + "product_code":"dws", + "code":"536", + "des":"PG_TABLES provides access to information about each table in the database.", + "doc_type":"devg", + "kw":"PG_TABLES,System Views,Developer Guide", 
"title":"PG_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0785.html", + "product_code":"dws", + "code":"537", + "des":"PG_TDE_INFO displays the encryption information about the current cluster.Check whether the current cluster is encrypted, and check the encryption algorithm (if any) used", + "doc_type":"devg", + "kw":"PG_TDE_INFO,System Views,Developer Guide", + "title":"PG_TDE_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0786.html", + "product_code":"dws", + "code":"538", + "des":"PG_TIMEZONE_ABBREVS displays all time zone abbreviations that can be recognized by the input routines.", + "doc_type":"devg", + "kw":"PG_TIMEZONE_ABBREVS,System Views,Developer Guide", + "title":"PG_TIMEZONE_ABBREVS", + "githuburl":"" + }, + { + "uri":"dws_04_0787.html", + "product_code":"dws", + "code":"539", + "des":"PG_TIMEZONE_NAMES displays all time zone names that can be recognized by SET TIMEZONE, along with their associated abbreviations, UTC offsets, and daylight saving time st", + "doc_type":"devg", + "kw":"PG_TIMEZONE_NAMES,System Views,Developer Guide", + "title":"PG_TIMEZONE_NAMES", + "githuburl":"" + }, + { + "uri":"dws_04_0788.html", + "product_code":"dws", + "code":"540", + "des":"PG_TOTAL_MEMORY_DETAIL displays the memory usage of a certain node in the database.", + "doc_type":"devg", + "kw":"PG_TOTAL_MEMORY_DETAIL,System Views,Developer Guide", + "title":"PG_TOTAL_MEMORY_DETAIL", + "githuburl":"" + }, + { + "uri":"dws_04_0789.html", + "product_code":"dws", + "code":"541", + "des":"PG_TOTAL_SCHEMA_INFO displays the storage usage of all schemas in each database. This view is valid only if use_workload_manager is set to on.", + "doc_type":"devg", + "kw":"PG_TOTAL_SCHEMA_INFO,System Views,Developer Guide", + "title":"PG_TOTAL_SCHEMA_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0790.html", + "product_code":"dws", + "code":"542", + "des":"PG_TOTAL_USER_RESOURCE_INFO displays the resource usage of all users. Only administrators can query this view. 
This view is valid only if use_workload_manager is set to o", + "doc_type":"devg", + "kw":"PG_TOTAL_USER_RESOURCE_INFO,System Views,Developer Guide", + "title":"PG_TOTAL_USER_RESOURCE_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0791.html", + "product_code":"dws", + "code":"543", + "des":"PG_USER displays information about users who can access the database.", + "doc_type":"devg", + "kw":"PG_USER,System Views,Developer Guide", + "title":"PG_USER", + "githuburl":"" + }, + { + "uri":"dws_04_0792.html", + "product_code":"dws", + "code":"544", + "des":"PG_USER_MAPPINGS displays information about user mappings. This is essentially a publicly readable view of PG_USER_MAPPING that leaves out the options column if the user h", + "doc_type":"devg", + "kw":"PG_USER_MAPPINGS,System Views,Developer Guide", + "title":"PG_USER_MAPPINGS", + "githuburl":"" + }, + { + "uri":"dws_04_0793.html", + "product_code":"dws", + "code":"545", + "des":"PG_VIEWS displays basic information about each view in the database.", + "doc_type":"devg", + "kw":"PG_VIEWS,System Views,Developer Guide", + "title":"PG_VIEWS", + "githuburl":"" + }, + { + "uri":"dws_04_0794.html", + "product_code":"dws", + "code":"546", + "des":"PG_WLM_STATISTICS displays information about workload management after the task is complete or the exception has been handled.", + "doc_type":"devg", + "kw":"PG_WLM_STATISTICS,System Views,Developer Guide", + "title":"PG_WLM_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0795.html", + "product_code":"dws", + "code":"547", + "des":"PGXC_BULKLOAD_PROGRESS displays the progress of the service import. Only GDS common files can be imported. This view is accessible only to users with system administrator", + "doc_type":"devg", + "kw":"PGXC_BULKLOAD_PROGRESS,System Views,Developer Guide", + "title":"PGXC_BULKLOAD_PROGRESS", + "githuburl":"" + }, + { + "uri":"dws_04_0796.html", + "product_code":"dws", + "code":"548", + "des":"PGXC_BULKLOAD_STATISTICS displays real-time statistics about service execution, such as GDS, COPY, and \\COPY, on a CN. This view summarizes the real-time execution status", + "doc_type":"devg", + "kw":"PGXC_BULKLOAD_STATISTICS,System Views,Developer Guide", + "title":"PGXC_BULKLOAD_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0797.html", + "product_code":"dws", + "code":"549", + "des":"PGXC_COMM_CLIENT_INFO stores the client connection information of all nodes. 
(You can query this view on a DN to view the information about the connection between the CN ", + "doc_type":"devg", + "kw":"PGXC_COMM_CLIENT_INFO,System Views,Developer Guide", + "title":"PGXC_COMM_CLIENT_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0798.html", + "product_code":"dws", + "code":"550", + "des":"PGXC_COMM_DELAY displays the communication library delay status for all the DNs.", + "doc_type":"devg", + "kw":"PGXC_COMM_DELAY,System Views,Developer Guide", + "title":"PGXC_COMM_DELAY", + "githuburl":"" + }, + { + "uri":"dws_04_0799.html", + "product_code":"dws", + "code":"551", + "des":"PGXC_COMM_RECV_STREAM displays the receiving stream status of the communication libraries for all the DNs.", + "doc_type":"devg", + "kw":"PGXC_COMM_RECV_STREAM,System Views,Developer Guide", + "title":"PGXC_COMM_RECV_STREAM", + "githuburl":"" + }, + { + "uri":"dws_04_0800.html", + "product_code":"dws", + "code":"552", + "des":"PGXC_COMM_SEND_STREAM displays the sending stream status of the communication libraries for all the DNs.", + "doc_type":"devg", + "kw":"PGXC_COMM_SEND_STREAM,System Views,Developer Guide", + "title":"PGXC_COMM_SEND_STREAM", + "githuburl":"" + }, + { + "uri":"dws_04_0801.html", + "product_code":"dws", + "code":"553", + "des":"PGXC_COMM_STATUS displays the communication library status for all the DNs.", + "doc_type":"devg", + "kw":"PGXC_COMM_STATUS,System Views,Developer Guide", + "title":"PGXC_COMM_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0802.html", + "product_code":"dws", + "code":"554", + "des":"PGXC_DEADLOCK displays lock wait information generated due to distributed deadlocks. Currently, PGXC_DEADLOCK collects only lock wait information about locks whose locktyp", + "doc_type":"devg", + "kw":"PGXC_DEADLOCK,System Views,Developer Guide", + "title":"PGXC_DEADLOCK", + "githuburl":"" + }, + { + "uri":"dws_04_0803.html", + "product_code":"dws", + "code":"555", + "des":"PGXC_GET_STAT_ALL_TABLES displays information about insertion, update, and deletion operations on tables and the dirty page rate of tables. Before running VACUUM FULL to a", + "doc_type":"devg", + "kw":"PGXC_GET_STAT_ALL_TABLES,System Views,Developer Guide", + "title":"PGXC_GET_STAT_ALL_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0804.html", + "product_code":"dws", + "code":"556", + "des":"PGXC_GET_STAT_ALL_PARTITIONS displays information about insertion, update, and deletion operations on partitions of partitioned tables and the dirty page rate of tables. T", + "doc_type":"devg", + "kw":"PGXC_GET_STAT_ALL_PARTITIONS,System Views,Developer Guide", + "title":"PGXC_GET_STAT_ALL_PARTITIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0805.html", + "product_code":"dws", + "code":"557", + "des":"PGXC_GET_TABLE_SKEWNESS displays the data skew on tables in the current database.", + "doc_type":"devg", + "kw":"PGXC_GET_TABLE_SKEWNESS,System Views,Developer Guide", + "title":"PGXC_GET_TABLE_SKEWNESS", + "githuburl":"" + }, + { + "uri":"dws_04_0806.html", + "product_code":"dws", + "code":"558", + "des":"PGXC_GTM_SNAPSHOT_STATUS displays transaction information on the current GTM.", + "doc_type":"devg", + "kw":"PGXC_GTM_SNAPSHOT_STATUS,System Views,Developer Guide", + "title":"PGXC_GTM_SNAPSHOT_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0807.html", + "product_code":"dws", + "code":"559", + "des":"PGXC_INSTANCE_TIME displays the running time of processes on each node in the cluster and the time consumed in each execution phase. 
Except the node_name column, the othe", + "doc_type":"devg", + "kw":"PGXC_INSTANCE_TIME,System Views,Developer Guide", + "title":"PGXC_INSTANCE_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0808.html", + "product_code":"dws", + "code":"560", + "des":"PGXC_INSTR_UNIQUE_SQL displays the complete Unique SQL statistics of all CN nodes in the cluster. Only the system administrator can access this view. For details about the", + "doc_type":"devg", + "kw":"PGXC_INSTR_UNIQUE_SQL,System Views,Developer Guide", + "title":"PGXC_INSTR_UNIQUE_SQL", + "githuburl":"" + }, + { + "uri":"dws_04_0809.html", + "product_code":"dws", + "code":"561", + "des":"PGXC_LOCK_CONFLICTS displays information about conflicting locks in the cluster. When a lock is waiting for another lock or another lock is waiting for this one, a lock co", + "doc_type":"devg", + "kw":"PGXC_LOCK_CONFLICTS,System Views,Developer Guide", + "title":"PGXC_LOCK_CONFLICTS", + "githuburl":"" + }, + { + "uri":"dws_04_0810.html", + "product_code":"dws", + "code":"562", + "des":"PGXC_NODE_ENV displays the environmental variable information about all nodes in a cluster.", + "doc_type":"devg", + "kw":"PGXC_NODE_ENV,System Views,Developer Guide", + "title":"PGXC_NODE_ENV", + "githuburl":"" + }, + { + "uri":"dws_04_0811.html", + "product_code":"dws", + "code":"563", + "des":"PGXC_NODE_STAT_RESET_TIME displays the time when statistics of each node in the cluster are reset. All columns except node_name are the same as those in the GS_NODE_STAT_", + "doc_type":"devg", + "kw":"PGXC_NODE_STAT_RESET_TIME,System Views,Developer Guide", + "title":"PGXC_NODE_STAT_RESET_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0812.html", + "product_code":"dws", + "code":"564", + "des":"PGXC_OS_RUN_INFO displays the OS running status of each node in the cluster. All columns except node_name are the same as those in the PV_OS_RUN_INFO view. This view is a", + "doc_type":"devg", + "kw":"PGXC_OS_RUN_INFO,System Views,Developer Guide", + "title":"PGXC_OS_RUN_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0813.html", + "product_code":"dws", + "code":"565", + "des":"PGXC_OS_THREADS displays thread status information under all normal nodes in the current cluster.", + "doc_type":"devg", + "kw":"PGXC_OS_THREADS,System Views,Developer Guide", + "title":"PGXC_OS_THREADS", + "githuburl":"" + }, + { + "uri":"dws_04_0814.html", + "product_code":"dws", + "code":"566", + "des":"PGXC_PREPARED_XACTS displays the two-phase transactions in the prepared phase.", + "doc_type":"devg", + "kw":"PGXC_PREPARED_XACTS,System Views,Developer Guide", + "title":"PGXC_PREPARED_XACTS", + "githuburl":"" + }, + { + "uri":"dws_04_0815.html", + "product_code":"dws", + "code":"567", + "des":"PGXC_REDO_STAT displays statistics on redoing Xlogs of each node in the cluster. All columns except node_name are the same as those in the PV_REDO_STAT view. This view is", + "doc_type":"devg", + "kw":"PGXC_REDO_STAT,System Views,Developer Guide", + "title":"PGXC_REDO_STAT", + "githuburl":"" + }, + { + "uri":"dws_04_0816.html", + "product_code":"dws", + "code":"568", + "des":"PGXC_REL_IOSTAT displays statistics on disk read and write of each node in the cluster. All columns except node_name are the same as those in the GS_REL_IOSTAT view. 
This", + "doc_type":"devg", + "kw":"PGXC_REL_IOSTAT,System Views,Developer Guide", + "title":"PGXC_REL_IOSTAT", + "githuburl":"" + }, + { + "uri":"dws_04_0817.html", + "product_code":"dws", + "code":"569", + "des":"PGXC_REPLICATION_SLOTS displays the replication information of DNs in the cluster. All columns except node_name are the same as those in the PG_REPLICATION_SLOTS view. Th", + "doc_type":"devg", + "kw":"PGXC_REPLICATION_SLOTS,System Views,Developer Guide", + "title":"PGXC_REPLICATION_SLOTS", + "githuburl":"" + }, + { + "uri":"dws_04_0818.html", + "product_code":"dws", + "code":"570", + "des":"PGXC_RUNNING_XACTS displays information about running transactions on each node in the cluster. The content is the same as that displayed in PG_RUNNING_XACTS.", + "doc_type":"devg", + "kw":"PGXC_RUNNING_XACTS,System Views,Developer Guide", + "title":"PGXC_RUNNING_XACTS", + "githuburl":"" + }, + { + "uri":"dws_04_0819.html", + "product_code":"dws", + "code":"571", + "des":"PGXC_SETTINGS displays the database running status of each node in the cluster. All columns except node_name are the same as those in the PG_SETTINGS view. This view is a", + "doc_type":"devg", + "kw":"PGXC_SETTINGS,System Views,Developer Guide", + "title":"PGXC_SETTINGS", + "githuburl":"" + }, + { + "uri":"dws_04_0820.html", + "product_code":"dws", + "code":"572", + "des":"PGXC_STAT_ACTIVITY displays information about the queries performed by the current user on all the CNs in the current cluster. Run the following command to view blocked quer", + "doc_type":"devg", + "kw":"PGXC_STAT_ACTIVITY,System Views,Developer Guide", + "title":"PGXC_STAT_ACTIVITY", + "githuburl":"" + }, + { + "uri":"dws_04_0821.html", + "product_code":"dws", + "code":"573", + "des":"PGXC_STAT_BAD_BLOCK displays statistics about page or CU verification failures after all nodes in a cluster are started.", + "doc_type":"devg", + "kw":"PGXC_STAT_BAD_BLOCK,System Views,Developer Guide", + "title":"PGXC_STAT_BAD_BLOCK", + "githuburl":"" + }, + { + "uri":"dws_04_0822.html", + "product_code":"dws", + "code":"574", + "des":"PGXC_STAT_BGWRITER displays statistics on the background writer of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_BGWRITER vi", + "doc_type":"devg", + "kw":"PGXC_STAT_BGWRITER,System Views,Developer Guide", + "title":"PGXC_STAT_BGWRITER", + "githuburl":"" + }, + { + "uri":"dws_04_0823.html", + "product_code":"dws", + "code":"575", + "des":"PGXC_STAT_DATABASE displays the database status and statistics of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_DATABASE vie", + "doc_type":"devg", + "kw":"PGXC_STAT_DATABASE,System Views,Developer Guide", + "title":"PGXC_STAT_DATABASE", + "githuburl":"" + }, + { + "uri":"dws_04_0824.html", + "product_code":"dws", + "code":"576", + "des":"PGXC_STAT_REPLICATION displays the log synchronization status of each node in the cluster. 
All columns except node_name are the same as those in the PG_STAT_REPLICATION v", + "doc_type":"devg", + "kw":"PGXC_STAT_REPLICATION,System Views,Developer Guide", + "title":"PGXC_STAT_REPLICATION", + "githuburl":"" + }, + { + "uri":"dws_04_0825.html", + "product_code":"dws", + "code":"577", + "des":"PGXC_SQL_COUNT displays the node-level and user-level statistics for the SQL statements of SELECT, INSERT, UPDATE, DELETE, and MERGE INTO and DDL, DML, and DCL statements", + "doc_type":"devg", + "kw":"PGXC_SQL_COUNT,System Views,Developer Guide", + "title":"PGXC_SQL_COUNT", + "githuburl":"" + }, + { + "uri":"dws_04_0826.html", + "product_code":"dws", + "code":"578", + "des":"PGXC_THREAD_WAIT_STATUS displays all the call layer hierarchy relationships between threads of the SQL statements on all the nodes in a cluster, and the waiting status of ", + "doc_type":"devg", + "kw":"PGXC_THREAD_WAIT_STATUS,System Views,Developer Guide", + "title":"PGXC_THREAD_WAIT_STATUS", + "githuburl":"" + }, + { + "uri":"dws_04_0827.html", + "product_code":"dws", + "code":"579", + "des":"PGXC_TOTAL_MEMORY_DETAIL displays the memory usage in the cluster.", + "doc_type":"devg", + "kw":"PGXC_TOTAL_MEMORY_DETAIL,System Views,Developer Guide", + "title":"PGXC_TOTAL_MEMORY_DETAIL", + "githuburl":"" + }, + { + "uri":"dws_04_0828.html", + "product_code":"dws", + "code":"580", + "des":"PGXC_TOTAL_SCHEMA_INFO displays the schema space information of all instances in the cluster, providing visibility into the schema space usage of each instance. This view", + "doc_type":"devg", + "kw":"PGXC_TOTAL_SCHEMA_INFO,System Views,Developer Guide", + "title":"PGXC_TOTAL_SCHEMA_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0829.html", + "product_code":"dws", + "code":"581", + "des":"PGXC_TOTAL_SCHEMA_INFO_ANALYZE displays the overall schema space information of the cluster, including the total cluster space, average space of instances, skew ratio, ma", + "doc_type":"devg", + "kw":"PGXC_TOTAL_SCHEMA_INFO_ANALYZE,System Views,Developer Guide", + "title":"PGXC_TOTAL_SCHEMA_INFO_ANALYZE", + "githuburl":"" + }, + { + "uri":"dws_04_0830.html", + "product_code":"dws", + "code":"582", + "des":"PGXC_USER_TRANSACTION provides transaction information about users on all CNs. It is accessible only to users with system administrator rights. This view is valid only wh", + "doc_type":"devg", + "kw":"PGXC_USER_TRANSACTION,System Views,Developer Guide", + "title":"PGXC_USER_TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0831.html", + "product_code":"dws", + "code":"583", + "des":"PGXC_VARIABLE_INFO displays information about transaction IDs and OIDs of all nodes in a cluster.", + "doc_type":"devg", + "kw":"PGXC_VARIABLE_INFO,System Views,Developer Guide", + "title":"PGXC_VARIABLE_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0832.html", + "product_code":"dws", + "code":"584", + "des":"PGXC_WAIT_EVENTS displays statistics on the waiting status and events of each node in the cluster. The content is the same as that displayed in GS_WAIT_EVENTS. This view ", + "doc_type":"devg", + "kw":"PGXC_WAIT_EVENTS,System Views,Developer Guide", + "title":"PGXC_WAIT_EVENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0836.html", + "product_code":"dws", + "code":"585", + "des":"PGXC_WLM_OPERATOR_HISTORY displays the operator information of completed jobs executed on all CNs. This view is used by Database Manager to query data from a database. 
Dat", + "doc_type":"devg", + "kw":"PGXC_WLM_OPERATOR_HISTORY,System Views,Developer Guide", + "title":"PGXC_WLM_OPERATOR_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0837.html", + "product_code":"dws", + "code":"586", + "des":"PGXC_WLM_OPERATOR_INFO displays the operator information of completed jobs executed on CNs. The data in this view is obtained from GS_WLM_OPERATOR_INFO.This view is acces", + "doc_type":"devg", + "kw":"PGXC_WLM_OPERATOR_INFO,System Views,Developer Guide", + "title":"PGXC_WLM_OPERATOR_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0838.html", + "product_code":"dws", + "code":"587", + "des":"PGXC_WLM_OPERATOR_STATISTICS displays the operator information of jobs being executed on CNs.This view is accessible only to users with system administrators rights. For ", + "doc_type":"devg", + "kw":"PGXC_WLM_OPERATOR_STATISTICS,System Views,Developer Guide", + "title":"PGXC_WLM_OPERATOR_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0839.html", + "product_code":"dws", + "code":"588", + "des":"PGXC_WLM_SESSION_INFO displays load management information for completed jobs executed on all CNs. The data in this view is obtained from GS_WLM_SESSION_INFO.This view is", + "doc_type":"devg", + "kw":"PGXC_WLM_SESSION_INFO,System Views,Developer Guide", + "title":"PGXC_WLM_SESSION_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0840.html", + "product_code":"dws", + "code":"589", + "des":"PGXC_WLM_SESSION_HISTORY displays load management information for completed jobs executed on all CNs. This view is used by Data Manager to query data from a database. Dat", + "doc_type":"devg", + "kw":"PGXC_WLM_SESSION_HISTORY,System Views,Developer Guide", + "title":"PGXC_WLM_SESSION_HISTORY", + "githuburl":"" + }, + { + "uri":"dws_04_0841.html", + "product_code":"dws", + "code":"590", + "des":"PGXC_WLM_SESSION_STATISTICS displays load management information about jobs that are being executed on CNs.This view is accessible only to users with system administrator", + "doc_type":"devg", + "kw":"PGXC_WLM_SESSION_STATISTICS,System Views,Developer Guide", + "title":"PGXC_WLM_SESSION_STATISTICS", + "githuburl":"" + }, + { + "uri":"dws_04_0842.html", + "product_code":"dws", + "code":"591", + "des":"PGXC_WLM_WORKLOAD_RECORDS displays the status of job executed by the current user on CNs. It is accessible only to users with system administrator rights. This view is av", + "doc_type":"devg", + "kw":"PGXC_WLM_WORKLOAD_RECORDS,System Views,Developer Guide", + "title":"PGXC_WLM_WORKLOAD_RECORDS", + "githuburl":"" + }, + { + "uri":"dws_04_0843.html", + "product_code":"dws", + "code":"592", + "des":"PGXC_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in workload Cgroups on all CNs in a cluster, including the number of SELECT, UPDATE, ", + "doc_type":"devg", + "kw":"PGXC_WORKLOAD_SQL_COUNT,System Views,Developer Guide", + "title":"PGXC_WORKLOAD_SQL_COUNT", + "githuburl":"" + }, + { + "uri":"dws_04_0844.html", + "product_code":"dws", + "code":"593", + "des":"PGXC_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in workload Cgroups on all CNs in a cluster, including the maximum, minimum, aver", + "doc_type":"devg", + "kw":"PGXC_WORKLOAD_SQL_ELAPSE_TIME,System Views,Developer Guide", + "title":"PGXC_WORKLOAD_SQL_ELAPSE_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0845.html", + "product_code":"dws", + "code":"594", + "des":"PGXC_WORKLOAD_TRANSACTION provides transaction information about workload Cgroups on all CNs. 
It is accessible only to users with system administrator rights. This view i", + "doc_type":"devg", + "kw":"PGXC_WORKLOAD_TRANSACTION,System Views,Developer Guide", + "title":"PGXC_WORKLOAD_TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_04_0846.html", + "product_code":"dws", + "code":"595", + "des":"PLAN_TABLE displays the plan information collected by EXPLAIN PLAN. Plan information has a session-level life cycle; after the session exits, the data is deleted. ", + "doc_type":"devg", + "kw":"PLAN_TABLE,System Views,Developer Guide", + "title":"PLAN_TABLE", + "githuburl":"" + }, + { + "uri":"dws_04_0847.html", + "product_code":"dws", + "code":"596", + "des":"PLAN_TABLE_DATA displays the plan information collected by EXPLAIN PLAN. Different from the PLAN_TABLE view, the system catalog PLAN_TABLE_DATA stores the plan informatio", + "doc_type":"devg", + "kw":"PLAN_TABLE_DATA,System Views,Developer Guide", + "title":"PLAN_TABLE_DATA", + "githuburl":"" + }, + { + "uri":"dws_04_0848.html", + "product_code":"dws", + "code":"597", + "des":"By collecting statistics about data file I/O, PV_FILE_STAT displays the I/O performance of the data to help detect performance problems, such as abnormal I/O operatio", + "doc_type":"devg", + "kw":"PV_FILE_STAT,System Views,Developer Guide", + "title":"PV_FILE_STAT", + "githuburl":"" + }, + { + "uri":"dws_04_0849.html", + "product_code":"dws", + "code":"598", + "des":"PV_INSTANCE_TIME collects statistics on the running time of processes and the time consumed in each execution phase, in microseconds. PV_INSTANCE_TIME records time consump", + "doc_type":"devg", + "kw":"PV_INSTANCE_TIME,System Views,Developer Guide", + "title":"PV_INSTANCE_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0850.html", + "product_code":"dws", + "code":"599", + "des":"PV_OS_RUN_INFO displays the running status of the current operating system.", + "doc_type":"devg", + "kw":"PV_OS_RUN_INFO,System Views,Developer Guide", + "title":"PV_OS_RUN_INFO", + "githuburl":"" + }, + { + "uri":"dws_04_0851.html", + "product_code":"dws", + "code":"600", + "des":"PV_SESSION_MEMORY displays statistics about memory usage at the session level in the unit of MB, including all the memory allocated to Postgres and Stream threads on DNs ", + "doc_type":"devg", + "kw":"PV_SESSION_MEMORY,System Views,Developer Guide", + "title":"PV_SESSION_MEMORY", + "githuburl":"" + }, + { + "uri":"dws_04_0852.html", + "product_code":"dws", + "code":"601", + "des":"PV_SESSION_MEMORY_DETAIL displays statistics about thread memory usage by memory context. The memory context TempSmallContextGroup collects information about all memory co", + "doc_type":"devg", + "kw":"PV_SESSION_MEMORY_DETAIL,System Views,Developer Guide", + "title":"PV_SESSION_MEMORY_DETAIL", + "githuburl":"" + }, + { + "uri":"dws_04_0853.html", + "product_code":"dws", + "code":"602", + "des":"PV_SESSION_STAT displays session state statistics based on session threads or the AutoVacuum thread.", + "doc_type":"devg", + "kw":"PV_SESSION_STAT,System Views,Developer Guide", + "title":"PV_SESSION_STAT", + "githuburl":"" + }, + { + "uri":"dws_04_0854.html", + "product_code":"dws", + "code":"603", + "des":"PV_SESSION_TIME displays statistics about the running time of session threads and time consumed in each execution phase, in microseconds.", + "doc_type":"devg", + "kw":"PV_SESSION_TIME,System Views,Developer Guide", + "title":"PV_SESSION_TIME", + "githuburl":"" + }, + { + "uri":"dws_04_0855.html", + "product_code":"dws", + "code":"604", 
"des":"PV_TOTAL_MEMORY_DETAIL displays statistics about memory usage of the current database node in the unit of MB.", + "doc_type":"devg", + "kw":"PV_TOTAL_MEMORY_DETAIL,System Views,Developer Guide", + "title":"PV_TOTAL_MEMORY_DETAIL", + "githuburl":"" + }, + { + "uri":"dws_04_0856.html", + "product_code":"dws", + "code":"605", + "des":"PV_REDO_STAT displays statistics on redoing Xlogs on the current node.", + "doc_type":"devg", + "kw":"PV_REDO_STAT,System Views,Developer Guide", + "title":"PV_REDO_STAT", + "githuburl":"" + }, + { + "uri":"dws_04_0857.html", + "product_code":"dws", + "code":"606", + "des":"REDACTION_COLUMNS displays information about all redaction columns in the current database.", + "doc_type":"devg", + "kw":"REDACTION_COLUMNS,System Views,Developer Guide", + "title":"REDACTION_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0858.html", + "product_code":"dws", + "code":"607", + "des":"REDACTION_POLICIES displays information about all redaction objects in the current database.", + "doc_type":"devg", + "kw":"REDACTION_POLICIES,System Views,Developer Guide", + "title":"REDACTION_POLICIES", + "githuburl":"" + }, + { + "uri":"dws_04_0859.html", + "product_code":"dws", + "code":"608", + "des":"USER_COL_COMMENTS displays the column comments of the table accessible to the current user.", + "doc_type":"devg", + "kw":"USER_COL_COMMENTS,System Views,Developer Guide", + "title":"USER_COL_COMMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0860.html", + "product_code":"dws", + "code":"609", + "des":"USER_CONSTRAINTS displays the table constraint information accessible to the current user.", + "doc_type":"devg", + "kw":"USER_CONSTRAINTS,System Views,Developer Guide", + "title":"USER_CONSTRAINTS", + "githuburl":"" + }, + { + "uri":"dws_04_0861.html", + "product_code":"dws", + "code":"610", + "des":"USER_CONSTRAINTS displays the information about constraint columns of the tables accessible to the current user.", + "doc_type":"devg", + "kw":"USER_CONS_COLUMNS,System Views,Developer Guide", + "title":"USER_CONS_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0862.html", + "product_code":"dws", + "code":"611", + "des":"USER_INDEXES displays index information in the current schema.", + "doc_type":"devg", + "kw":"USER_INDEXES,System Views,Developer Guide", + "title":"USER_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0863.html", + "product_code":"dws", + "code":"612", + "des":"USER_IND_COLUMNS displays column information about all indexes accessible to the current user.", + "doc_type":"devg", + "kw":"USER_IND_COLUMNS,System Views,Developer Guide", + "title":"USER_IND_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0864.html", + "product_code":"dws", + "code":"613", + "des":"USER_IND_EXPRESSIONSdisplays information about the function-based expression index accessible to the current user.", + "doc_type":"devg", + "kw":"USER_IND_EXPRESSIONS,System Views,Developer Guide", + "title":"USER_IND_EXPRESSIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0865.html", + "product_code":"dws", + "code":"614", + "des":"USER_IND_PARTITIONS displays information about index partitions accessible to the current user.", + "doc_type":"devg", + "kw":"USER_IND_PARTITIONS,System Views,Developer Guide", + "title":"USER_IND_PARTITIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0866.html", + "product_code":"dws", + "code":"615", + "des":"USER_JOBS displays all jobs owned by the user.", + "doc_type":"devg", + "kw":"USER_JOBS,System Views,Developer Guide", + "title":"USER_JOBS", + "githuburl":"" + }, 
+ { + "uri":"dws_04_0867.html", + "product_code":"dws", + "code":"616", + "des":"USER_OBJECTS displays all database objects accessible to the current user.For details about the value ranges of last_ddl_time and last_ddl_time, see PG_OBJECT.", + "doc_type":"devg", + "kw":"USER_OBJECTS,System Views,Developer Guide", + "title":"USER_OBJECTS", + "githuburl":"" + }, + { + "uri":"dws_04_0868.html", + "product_code":"dws", + "code":"617", + "des":"USER_PART_INDEXES displays information about partitioned table indexes accessible to the current user.", + "doc_type":"devg", + "kw":"USER_PART_INDEXES,System Views,Developer Guide", + "title":"USER_PART_INDEXES", + "githuburl":"" + }, + { + "uri":"dws_04_0869.html", + "product_code":"dws", + "code":"618", + "des":"USER_PART_TABLES displays information about partitioned tables accessible to the current user.", + "doc_type":"devg", + "kw":"USER_PART_TABLES,System Views,Developer Guide", + "title":"USER_PART_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0870.html", + "product_code":"dws", + "code":"619", + "des":"USER_PROCEDURES displays information about all stored procedures and functions in the current schema.", + "doc_type":"devg", + "kw":"USER_PROCEDURES,System Views,Developer Guide", + "title":"USER_PROCEDURES", + "githuburl":"" + }, + { + "uri":"dws_04_0871.html", + "product_code":"dws", + "code":"620", + "des":"USER_SEQUENCES displays sequence information in the current schema.", + "doc_type":"devg", + "kw":"USER_SEQUENCES,System Views,Developer Guide", + "title":"USER_SEQUENCES", + "githuburl":"" + }, + { + "uri":"dws_04_0872.html", + "product_code":"dws", + "code":"621", + "des":"USER_SOURCE displays information about stored procedures or functions in this mode, and provides the columns defined by the stored procedures or the functions.", + "doc_type":"devg", + "kw":"USER_SOURCE,System Views,Developer Guide", + "title":"USER_SOURCE", + "githuburl":"" + }, + { + "uri":"dws_04_0873.html", + "product_code":"dws", + "code":"622", + "des":"USER_SYNONYMS displays synonyms accessible to the current user.", + "doc_type":"devg", + "kw":"USER_SYNONYMS,System Views,Developer Guide", + "title":"USER_SYNONYMS", + "githuburl":"" + }, + { + "uri":"dws_04_0874.html", + "product_code":"dws", + "code":"623", + "des":"USER_TAB_COLUMNS displays information about table columns accessible to the current user.", + "doc_type":"devg", + "kw":"USER_TAB_COLUMNS,System Views,Developer Guide", + "title":"USER_TAB_COLUMNS", + "githuburl":"" + }, + { + "uri":"dws_04_0875.html", + "product_code":"dws", + "code":"624", + "des":"USER_TAB_COMMENTS displays comments about all tables and views accessible to the current user.", + "doc_type":"devg", + "kw":"USER_TAB_COMMENTS,System Views,Developer Guide", + "title":"USER_TAB_COMMENTS", + "githuburl":"" + }, + { + "uri":"dws_04_0876.html", + "product_code":"dws", + "code":"625", + "des":"USER_TAB_PARTITIONS displays all table partitions accessible to the current user. 
Each partition of a partitioned table accessible to the current user has a piece of reco", + "doc_type":"devg", + "kw":"USER_TAB_PARTITIONS,System Views,Developer Guide", + "title":"USER_TAB_PARTITIONS", + "githuburl":"" + }, + { + "uri":"dws_04_0877.html", + "product_code":"dws", + "code":"626", + "des":"USER_TABLES displays table information in the current schema.", + "doc_type":"devg", + "kw":"USER_TABLES,System Views,Developer Guide", + "title":"USER_TABLES", + "githuburl":"" + }, + { + "uri":"dws_04_0878.html", + "product_code":"dws", + "code":"627", + "des":"USER_TRIGGERS displays the information about triggers accessible to the current user.", + "doc_type":"devg", + "kw":"USER_TRIGGERS,System Views,Developer Guide", + "title":"USER_TRIGGERS", + "githuburl":"" + }, + { + "uri":"dws_04_0879.html", + "product_code":"dws", + "code":"628", + "des":"USER_VIEWS displays information about all views in the current schema.", + "doc_type":"devg", + "kw":"USER_VIEWS,System Views,Developer Guide", + "title":"USER_VIEWS", + "githuburl":"" + }, + { + "uri":"dws_04_0880.html", + "product_code":"dws", + "code":"629", + "des":"V$SESSION displays all session information about the current session.", + "doc_type":"devg", + "kw":"V$SESSION,System Views,Developer Guide", + "title":"V$SESSION", + "githuburl":"" + }, + { + "uri":"dws_04_0881.html", + "product_code":"dws", + "code":"630", + "des":"V$SESSION_LONGOPS displays the progress of ongoing operations.", + "doc_type":"devg", + "kw":"V$SESSION_LONGOPS,System Views,Developer Guide", + "title":"V$SESSION_LONGOPS", + "githuburl":"" + }, + { + "uri":"dws_04_0883.html", + "product_code":"dws", + "code":"631", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"GUC Parameters", + "title":"GUC Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0884.html", + "product_code":"dws", + "code":"632", + "des":"GaussDB(DWS) GUC parameters can control database system behaviors. You can check and adjust the GUC parameters based on your business scenario and data volume.After a clu", + "doc_type":"devg", + "kw":"Viewing GUC Parameters,GUC Parameters,Developer Guide", + "title":"Viewing GUC Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0885.html", + "product_code":"dws", + "code":"633", + "des":"To ensure the optimal performance of GaussDB(DWS), you can adjust the GUC parameters in the database.The GUC parameters of GaussDB(DWS) are classified into the following ", + "doc_type":"devg", + "kw":"Configuring GUC Parameters,GUC Parameters,Developer Guide", + "title":"Configuring GUC Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0886.html", + "product_code":"dws", + "code":"634", + "des":"The database provides many operation parameters. Configuration of these parameters affects the behavior of the database system. Before modifying these parameters, learn t", + "doc_type":"devg", + "kw":"GUC Parameter Usage,GUC Parameters,Developer Guide", + "title":"GUC Parameter Usage", + "githuburl":"" + }, + { + "uri":"dws_04_0888.html", + "product_code":"dws", + "code":"635", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Connection and Authentication", + "title":"Connection and Authentication", + "githuburl":"" + }, + { + "uri":"dws_04_0889.html", + "product_code":"dws", + "code":"636", + "des":"This section describes parameters related to the connection mode between the client and server.Parameter description: Specifies the maximum number of allowed parallel con", + "doc_type":"devg", + "kw":"Connection Settings,Connection and Authentication,Developer Guide", + "title":"Connection Settings", + "githuburl":"" + }, + { + "uri":"dws_04_0890.html", + "product_code":"dws", + "code":"637", + "des":"This section describes parameters about how to securely authenticate the client and server.Parameter description: Specifies the longest duration to wait before the client", + "doc_type":"devg", + "kw":"Security and Authentication (postgresql.conf),Connection and Authentication,Developer Guide", + "title":"Security and Authentication (postgresql.conf)", + "githuburl":"" + }, + { + "uri":"dws_04_0891.html", + "product_code":"dws", + "code":"638", + "des":"This section describes parameter settings and value ranges for communication libraries.Parameter description: Specifies whether the communication library uses the TCP or ", + "doc_type":"devg", + "kw":"Communication Library Parameters,Connection and Authentication,Developer Guide", + "title":"Communication Library Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0892.html", + "product_code":"dws", + "code":"639", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Resource Consumption", + "title":"Resource Consumption", + "githuburl":"" + }, + { + "uri":"dws_04_0893.html", + "product_code":"dws", + "code":"640", + "des":"This section describes memory parameters.Parameters described in this section take effect only after the database service restarts.Parameter description: Specifies whethe", + "doc_type":"devg", + "kw":"Memory,Resource Consumption,Developer Guide", + "title":"Memory", + "githuburl":"" + }, + { + "uri":"dws_04_0894.html", + "product_code":"dws", + "code":"641", + "des":"This section describes parameters related to statement disk space control, which are used to limit the disk space usage of statements.Parameter description: Specifies the", + "doc_type":"devg", + "kw":"Statement Disk Space Control,Resource Consumption,Developer Guide", + "title":"Statement Disk Space Control", + "githuburl":"" + }, + { + "uri":"dws_04_0895.html", + "product_code":"dws", + "code":"642", + "des":"This section describes kernel resource parameters. Whether these parameters take effect depends on OS settings.Parameter description: Specifies the maximum number of simu", + "doc_type":"devg", + "kw":"Kernel Resources,Resource Consumption,Developer Guide", + "title":"Kernel Resources", + "githuburl":"" + }, + { + "uri":"dws_04_0896.html", + "product_code":"dws", + "code":"643", + "des":"This feature allows administrators to reduce the I/O impact of the VACUUM and ANALYZE statements on concurrent database activities. 
It is often more important to prevent ", + "doc_type":"devg", + "kw":"Cost-based Vacuum Delay,Resource Consumption,Developer Guide", + "title":"Cost-based Vacuum Delay", + "githuburl":"" + }, + { + "uri":"dws_04_0898.html", + "product_code":"dws", + "code":"644", + "des":"Parameter description: Specifies whether O&M personnel are allowed to generate some ADIO logs to locate ADIO issues. This parameter is used only by developers. Common use", + "doc_type":"devg", + "kw":"Asynchronous I/O Operations,Resource Consumption,Developer Guide", + "title":"Asynchronous I/O Operations", + "githuburl":"" + }, + { + "uri":"dws_04_0899.html", + "product_code":"dws", + "code":"645", + "des":"GaussDB(DWS) provides a parallel data import function that enables a large amount of data to be imported in a fast and efficient manner. This section describes parameters", + "doc_type":"devg", + "kw":"Parallel Data Import,GUC Parameters,Developer Guide", + "title":"Parallel Data Import", + "githuburl":"" + }, + { + "uri":"dws_04_0900.html", + "product_code":"dws", + "code":"646", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Write Ahead Logs", + "title":"Write Ahead Logs", + "githuburl":"" + }, + { + "uri":"dws_04_0901.html", + "product_code":"dws", + "code":"647", + "des":"Parameter description: Specifies the level of the information that is written to WALs.Type: POSTMASTERValue range: enumerated valuesminimalAdvantages: Certain bulk operat", + "doc_type":"devg", + "kw":"Settings,Write Ahead Logs,Developer Guide", + "title":"Settings", + "githuburl":"" + }, + { + "uri":"dws_04_0902.html", + "product_code":"dws", + "code":"648", + "des":"Parameter description: Specifies the minimum number of WAL segment files in the period specified by checkpoint_timeout. The size of each log file is 16 MB.Type: SIGHUPVal", + "doc_type":"devg", + "kw":"Checkpoints,Write Ahead Logs,Developer Guide", + "title":"Checkpoints", + "githuburl":"" + }, + { + "uri":"dws_04_0903.html", + "product_code":"dws", + "code":"649", + "des":"Parameter description: When archive_mode is enabled, completed WAL segments are sent to archive storage by setting archive_command.Type: SIGHUPValue range: Booleanon: The", + "doc_type":"devg", + "kw":"Archiving,Write Ahead Logs,Developer Guide", + "title":"Archiving", + "githuburl":"" + }, + { + "uri":"dws_04_0904.html", + "product_code":"dws", + "code":"650", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"HA Replication", + "title":"HA Replication", + "githuburl":"" + }, + { + "uri":"dws_04_0905.html", + "product_code":"dws", + "code":"651", + "des":"Parameter description: Specifies the number of Xlog file segments. Specifies the minimum number of transaction log files stored in the pg_xlog directory. 
The standby serv", + "doc_type":"devg", + "kw":"Sending Server,HA Replication,Developer Guide", + "title":"Sending Server", + "githuburl":"" + }, + { + "uri":"dws_04_0906.html", + "product_code":"dws", + "code":"652", + "des":"Parameter description: Specifies the number of transactions by which VACUUM will defer the cleanup of invalid row-store table records, so that VACUUM and VACUUM FULL do n", + "doc_type":"devg", + "kw":"Primary Server,HA Replication,Developer Guide", + "title":"Primary Server", + "githuburl":"" + }, + { + "uri":"dws_04_0908.html", + "product_code":"dws", + "code":"653", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Query Planning", + "title":"Query Planning", + "githuburl":"" + }, + { + "uri":"dws_04_0909.html", + "product_code":"dws", + "code":"654", + "des":"These configuration parameters provide a crude method of influencing the query plans chosen by the query optimizer. If the default plan chosen by the optimizer for a part", + "doc_type":"devg", + "kw":"Optimizer Method Configuration,Query Planning,Developer Guide", + "title":"Optimizer Method Configuration", + "githuburl":"" + }, + { + "uri":"dws_04_0910.html", + "product_code":"dws", + "code":"655", + "des":"This section describes the optimizer cost constants. The cost variables described in this section are measured on an arbitrary scale. Only their relative values matter, t", + "doc_type":"devg", + "kw":"Optimizer Cost Constants,Query Planning,Developer Guide", + "title":"Optimizer Cost Constants", + "githuburl":"" + }, + { + "uri":"dws_04_0911.html", + "product_code":"dws", + "code":"656", + "des":"This section describes parameters related to genetic query optimizer. The genetic query optimizer (GEQO) is an algorithm that plans queries by using heuristic searching. ", + "doc_type":"devg", + "kw":"Genetic Query Optimizer,Query Planning,Developer Guide", + "title":"Genetic Query Optimizer", + "githuburl":"" + }, + { + "uri":"dws_04_0912.html", + "product_code":"dws", + "code":"657", + "des":"Parameter description: Specifies the default statistics target for table columns without a column-specific target set via ALTER TABLE SET STATISTICS. If this parameter is", + "doc_type":"devg", + "kw":"Other Optimizer Options,Query Planning,Developer Guide", + "title":"Other Optimizer Options", + "githuburl":"" + }, + { + "uri":"dws_04_0913.html", + "product_code":"dws", + "code":"658", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Error Reporting and Logging", + "title":"Error Reporting and Logging", + "githuburl":"" + }, + { + "uri":"dws_04_0914.html", + "product_code":"dws", + "code":"659", + "des":"Parameter description: Specifies the writing mode of the log files when logging_collector is set to on.Type: SIGHUPValue range: Booleanon indicates that GaussDB(DWS) over", + "doc_type":"devg", + "kw":"Logging Destination,Error Reporting and Logging,Developer Guide", + "title":"Logging Destination", + "githuburl":"" + }, + { + "uri":"dws_04_0915.html", + "product_code":"dws", + "code":"660", + "des":"Parameter description: Specifies which level of messages are sent to the client. Each level covers all the levels following it. The lower the level is, the fewer messages", + "doc_type":"devg", + "kw":"Logging Time,Error Reporting and Logging,Developer Guide", + "title":"Logging Time", + "githuburl":"" + }, + { + "uri":"dws_04_0916.html", + "product_code":"dws", + "code":"661", + "des":"Parameter description: Specifies whether to print parsing tree results.Type: SIGHUPValue range: Booleanon indicates the printing result function is enabled.off indicates ", + "doc_type":"devg", + "kw":"Logging Content,Error Reporting and Logging,Developer Guide", + "title":"Logging Content", + "githuburl":"" + }, + { + "uri":"dws_04_0918.html", + "product_code":"dws", + "code":"662", + "des":"During cluster running, error scenarios can be detected in a timely manner to inform users as soon as possible.Parameter description: Enables the alarm detection thread t", + "doc_type":"devg", + "kw":"Alarm Detection,GUC Parameters,Developer Guide", + "title":"Alarm Detection", + "githuburl":"" + }, + { + "uri":"dws_04_0919.html", + "product_code":"dws", + "code":"663", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Statistics During the Database Running", + "title":"Statistics During the Database Running", + "githuburl":"" + }, + { + "uri":"dws_04_0920.html", + "product_code":"dws", + "code":"664", + "des":"The query and index statistics collector is used to collect statistics during database running. The statistics include the times of inserting and updating a table and an ", + "doc_type":"devg", + "kw":"Query and Index Statistics Collector,Statistics During the Database Running,Developer Guide", + "title":"Query and Index Statistics Collector", + "githuburl":"" + }, + { + "uri":"dws_04_0921.html", + "product_code":"dws", + "code":"665", + "des":"During the running of the database, the lock access, disk I/O operation, and invalid message process are involved. All these operations are the bottleneck of the database", + "doc_type":"devg", + "kw":"Performance Statistics,Statistics During the Database Running,Developer Guide", + "title":"Performance Statistics", + "githuburl":"" + }, + { + "uri":"dws_04_0922.html", + "product_code":"dws", + "code":"666", + "des":"If database resource usage is not controlled, concurrent tasks easily preempt resources. 
As a result, the OS will be overloaded and cannot respond to user tasks; or even ", + "doc_type":"devg", + "kw":"Workload Management,GUC Parameters,Developer Guide", + "title":"Workload Management", + "githuburl":"" + }, + { + "uri":"dws_04_0923.html", + "product_code":"dws", + "code":"667", + "des":"The automatic cleanup process (autovacuum) in the system automatically runs the VACUUM and ANALYZE commands to recycle the record space marked by the deleted status and u", + "doc_type":"devg", + "kw":"Automatic Cleanup,GUC Parameters,Developer Guide", + "title":"Automatic Cleanup", + "githuburl":"" + }, + { + "uri":"dws_04_0924.html", + "product_code":"dws", + "code":"668", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Default Settings of Client Connection", + "title":"Default Settings of Client Connection", + "githuburl":"" + }, + { + "uri":"dws_04_0925.html", + "product_code":"dws", + "code":"669", + "des":"This section describes related default parameters involved in the execution of SQL statements.Parameter description: Specifies the order in which schemas are searched whe", + "doc_type":"devg", + "kw":"Statement Behavior,Default Settings of Client Connection,Developer Guide", + "title":"Statement Behavior", + "githuburl":"" + }, + { + "uri":"dws_04_0926.html", + "product_code":"dws", + "code":"670", + "des":"This section describes parameters related to the time format setting.Parameter description: Specifies the display format for date and time values, as well as the rules fo", + "doc_type":"devg", + "kw":"Zone and Formatting,Default Settings of Client Connection,Developer Guide", + "title":"Zone and Formatting", + "githuburl":"" + }, + { + "uri":"dws_04_0927.html", + "product_code":"dws", + "code":"671", + "des":"This section describes the default database loading parameters of the database system.Parameter description: Specifies the path for saving the shared database files that ", + "doc_type":"devg", + "kw":"Other Default Parameters,Default Settings of Client Connection,Developer Guide", + "title":"Other Default Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0928.html", + "product_code":"dws", + "code":"672", + "des":"In GaussDB(DWS), a deadlock may occur when concurrently executed transactions compete for resources. This section describes parameters used for managing transaction lock ", + "doc_type":"devg", + "kw":"Lock Management,GUC Parameters,Developer Guide", + "title":"Lock Management", + "githuburl":"" + }, + { + "uri":"dws_04_0929.html", + "product_code":"dws", + "code":"673", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Version and Platform Compatibility", + "title":"Version and Platform Compatibility", + "githuburl":"" + }, + { + "uri":"dws_04_0930.html", + "product_code":"dws", + "code":"674", + "des":"This section describes the parameter control of the downward compatibility and external compatibility features of GaussDB(DWS). 
Backward compatibility of the database sys", + "doc_type":"devg", + "kw":"Compatibility with Earlier Versions,Version and Platform Compatibility,Developer Guide", + "title":"Compatibility with Earlier Versions", + "githuburl":"" + }, + { + "uri":"dws_04_0931.html", + "product_code":"dws", + "code":"675", + "des":"Many platforms use the database system. External compatibility of the database system provides a lot of convenience for platforms.Parameter description: Determines whethe", + "doc_type":"devg", + "kw":"Platform and Client Compatibility,Version and Platform Compatibility,Developer Guide", + "title":"Platform and Client Compatibility", + "githuburl":"" + }, + { + "uri":"dws_04_0932.html", + "product_code":"dws", + "code":"676", + "des":"This section describes parameters used for controlling the methods that the server processes an error occurring in the database system.Parameter description: Specifies wh", + "doc_type":"devg", + "kw":"Fault Tolerance,GUC Parameters,Developer Guide", + "title":"Fault Tolerance", + "githuburl":"" + }, + { + "uri":"dws_04_0933.html", + "product_code":"dws", + "code":"677", + "des":"When a connection pool is used to access the database, database connections are established and then stored in the memory as objects during system running. When you need ", + "doc_type":"devg", + "kw":"Connection Pool Parameters,GUC Parameters,Developer Guide", + "title":"Connection Pool Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0934.html", + "product_code":"dws", + "code":"678", + "des":"This section describes the settings and value ranges of cluster transaction parameters.Parameter description: Specifies the isolation level of the current transaction.Typ", + "doc_type":"devg", + "kw":"Cluster Transaction Parameters,GUC Parameters,Developer Guide", + "title":"Cluster Transaction Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0936.html", + "product_code":"dws", + "code":"679", + "des":"Parameter description: Specifies whether to enable the lightweight column-store update.Type: USERSETValue range: Booleanon indicates that the lightweight column-store upd", + "doc_type":"devg", + "kw":"Developer Operations,GUC Parameters,Developer Guide", + "title":"Developer Operations", + "githuburl":"" + }, + { + "uri":"dws_04_0937.html", + "product_code":"dws", + "code":"680", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Auditing", + "title":"Auditing", + "githuburl":"" + }, + { + "uri":"dws_04_0938.html", + "product_code":"dws", + "code":"681", + "des":"Parameter description: Specifies whether to enable or disable the audit process. After the audit process is enabled, the auditing information written by the background pr", + "doc_type":"devg", + "kw":"Audit Switch,Auditing,Developer Guide", + "title":"Audit Switch", + "githuburl":"" + }, + { + "uri":"dws_04_0940.html", + "product_code":"dws", + "code":"682", + "des":"Parameter description: Specifies whether to audit successful operations in GaussDB(DWS). 
Set this parameter as required.Type: SIGHUPValue range: a stringnone: indicates t", + "doc_type":"devg", + "kw":"Operation Audit,Auditing,Developer Guide", + "title":"Operation Audit", + "githuburl":"" + }, + { + "uri":"dws_04_0941.html", + "product_code":"dws", + "code":"683", + "des":"The automatic rollback transaction can be monitored and its statement problems can be located by setting the transaction timeout warning. In addition, the statements with", + "doc_type":"devg", + "kw":"Transaction Monitoring,GUC Parameters,Developer Guide", + "title":"Transaction Monitoring", + "githuburl":"" + }, + { + "uri":"dws_04_0945.html", + "product_code":"dws", + "code":"684", + "des":"Parameter description: If an SQL statement involves tables belonging to different groups, you can enable this parameter to push the execution plan of the statement to imp", + "doc_type":"devg", + "kw":"Miscellaneous Parameters,GUC Parameters,Developer Guide", + "title":"Miscellaneous Parameters", + "githuburl":"" + }, + { + "uri":"dws_04_0946.html", + "product_code":"dws", + "code":"685", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Glossary,Developer Guide,Developer Guide", + "title":"Glossary", + "githuburl":"" + }, + { + "uri":"dws_04_2000.html", + "product_code":"dws", + "code":"686", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"SQL Syntax Reference", + "title":"SQL Syntax Reference", + "githuburl":"" + }, + { + "uri":"dws_06_0001.html", + "product_code":"dws", + "code":"687", + "des":"SQL is a standard computer language used to control the access to databases and manage data in databases.SQL provides different statements to enable you to:Query data.Ins", + "doc_type":"devg", + "kw":"GaussDB(DWS) SQL,SQL Syntax Reference,Developer Guide", + "title":"GaussDB(DWS) SQL", + "githuburl":"" + }, + { + "uri":"dws_06_0002.html", + "product_code":"dws", + "code":"688", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Differences Between GaussDB(DWS) and PostgreSQL", + "title":"Differences Between GaussDB(DWS) and PostgreSQL", + "githuburl":"" + }, + { + "uri":"dws_06_0003.html", + "product_code":"dws", + "code":"689", + "des":"GaussDB(DWS) gsql differs from PostgreSQL psql in that the former has made the following changes to enhance security:User passwords cannot be set by running the \\password", + "doc_type":"devg", + "kw":"GaussDB(DWS) gsql, PostgreSQL psql, and libpq,Differences Between GaussDB(DWS) and PostgreSQL,Develo", + "title":"GaussDB(DWS) gsql, PostgreSQL psql, and libpq", + "githuburl":"" + }, + { + "uri":"dws_06_0004.html", + "product_code":"dws", + "code":"690", + "des":"For details about supported data types by GaussDB(DWS), see Data Types.The following PostgreSQL data type is not supported:Lines, a geometric typepg_node_tree", + "doc_type":"devg", + "kw":"Data Type Differences,Differences Between GaussDB(DWS) and PostgreSQL,Developer Guide", + "title":"Data Type Differences", + "githuburl":"" + }, + { + "uri":"dws_06_0005.html", + "product_code":"dws", + "code":"691", + "des":"For details about the functions supported by GaussDB(DWS), see Functions and Operators.The following PostgreSQL functions are not supported:Enum support functionsAccess p", + "doc_type":"devg", + "kw":"Function Differences,Differences Between GaussDB(DWS) and PostgreSQL,Developer Guide", + "title":"Function Differences", + "githuburl":"" + }, + { + "uri":"dws_06_0006.html", + "product_code":"dws", + "code":"692", + "des":"Table inheritanceTable creation features:Use REFERENCES reftable [ (refcolumn) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] t", + "doc_type":"devg", + "kw":"PostgreSQL Features Unsupported by GaussDB(DWS),Differences Between GaussDB(DWS) and PostgreSQL,Deve", + "title":"PostgreSQL Features Unsupported by GaussDB(DWS)", + "githuburl":"" + }, + { + "uri":"dws_06_0007.html", + "product_code":"dws", + "code":"693", + "des":"The SQL contains reserved and non-reserved words. Standards require that reserved keywords not be used as other identifiers. Non-reserved keywords have special meanings o", + "doc_type":"devg", + "kw":"Keyword,SQL Syntax Reference,Developer Guide", + "title":"Keyword", + "githuburl":"" + }, + { + "uri":"dws_06_0008.html", + "product_code":"dws", + "code":"694", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Data Types", + "title":"Data Types", + "githuburl":"" + }, + { + "uri":"dws_06_0009.html", + "product_code":"dws", + "code":"695", + "des":"Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and selectable-precision decimals.For details about numeric op", + "doc_type":"devg", + "kw":"Numeric Types,Data Types,Developer Guide", + "title":"Numeric Types", + "githuburl":"" + }, + { + "uri":"dws_06_0010.html", + "product_code":"dws", + "code":"696", + "des":"The money type stores a currency amount with fixed fractional precision. The range shown in Table 1 assumes there are two fractional digits. 
Input is accepted in a variet", + "doc_type":"devg", + "kw":"Monetary Types,Data Types,Developer Guide", + "title":"Monetary Types", + "githuburl":"" + }, + { + "uri":"dws_06_0011.html", + "product_code":"dws", + "code":"697", + "des":"Valid literal values for the \"true\" state are:TRUE, 't', 'true', 'y', 'yes', '1'Valid literal values for the \"false\" state include:FALSE, 'f', 'false', 'n', 'no', '0'TRUE", + "doc_type":"devg", + "kw":"Boolean Type,Data Types,Developer Guide", + "title":"Boolean Type", + "githuburl":"" + }, + { + "uri":"dws_06_0012.html", + "product_code":"dws", + "code":"698", + "des":"Table 1 lists the character types that can be used in GaussDB(DWS). For string operators and related built-in functions, see Character Processing Functions and Operators.", + "doc_type":"devg", + "kw":"Character Types,Data Types,Developer Guide", + "title":"Character Types", + "githuburl":"" + }, + { + "uri":"dws_06_0013.html", + "product_code":"dws", + "code":"699", + "des":"Table 1 lists the binary data types that can be used in GaussDB(DWS).In addition to the size limitation on each column, the total size of each tuple is 8203 bytes less th", + "doc_type":"devg", + "kw":"Binary Data Types,Data Types,Developer Guide", + "title":"Binary Data Types", + "githuburl":"" + }, + { + "uri":"dws_06_0014.html", + "product_code":"dws", + "code":"700", + "des":"Table 1 lists date and time types supported by GaussDB(DWS). For the operators and built-in functions of the types, see Date and Time Processing Functions and Operators.I", + "doc_type":"devg", + "kw":"Date/Time Types,Data Types,Developer Guide", + "title":"Date/Time Types", + "githuburl":"" + }, + { + "uri":"dws_06_0015.html", + "product_code":"dws", + "code":"701", + "des":"Table 1 lists the geometric types that can be used in GaussDB(DWS). The most fundamental type, the point, forms the basis for all of the other types.A rich set of functio", + "doc_type":"devg", + "kw":"Geometric Types,Data Types,Developer Guide", + "title":"Geometric Types", + "githuburl":"" + }, + { + "uri":"dws_06_0016.html", + "product_code":"dws", + "code":"702", + "des":"GaussDB(DWS) offers data types to store IPv4, IPv6, and MAC addresses.It is better to use network address types instead of plaintext types to store IPv4, IPv6, and MAC ad", + "doc_type":"devg", + "kw":"Network Address Types,Data Types,Developer Guide", + "title":"Network Address Types", + "githuburl":"" + }, + { + "uri":"dws_06_0017.html", + "product_code":"dws", + "code":"703", + "des":"Bit strings are strings of 1's and 0's. They can be used to store bit masks.GaussDB(DWS) supports two SQL bit types: bit(n) and bit varying(n), where n is a positive inte", + "doc_type":"devg", + "kw":"Bit String Types,Data Types,Developer Guide", + "title":"Bit String Types", + "githuburl":"" + }, + { + "uri":"dws_06_0018.html", + "product_code":"dws", + "code":"704", + "des":"GaussDB(DWS) offers two data types that are designed to support full text search. The tsvector type represents a document in a form optimized for text search. The tsquery", + "doc_type":"devg", + "kw":"Text Search Types,Data Types,Developer Guide", + "title":"Text Search Types", + "githuburl":"" + }, + { + "uri":"dws_06_0019.html", + "product_code":"dws", + "code":"705", + "des":"The data type UUID stores Universally Unique Identifiers (UUID) as defined by RFC 4122, ISO/IEC 9834-8:2005, and related standards. 
This identifier is a 128-bit quantity ", + "doc_type":"devg", + "kw":"UUID Type,Data Types,Developer Guide", + "title":"UUID Type", + "githuburl":"" + }, + { + "uri":"dws_06_0020.html", + "product_code":"dws", + "code":"706", + "des":"JSON data types are for storing JavaScript Object Notation (JSON) data. Such data can also be stored as TEXT, but the JSON data type has the advantage of checking that ea", + "doc_type":"devg", + "kw":"JSON Types,Data Types,Developer Guide", + "title":"JSON Types", + "githuburl":"" + }, + { + "uri":"dws_06_0021.html", + "product_code":"dws", + "code":"707", + "des":"HyperLoglog (HLL) is an approximation algorithm for efficiently counting the number of distinct values in a data set. It features faster computing and lower space usage. ", + "doc_type":"devg", + "kw":"HLL Data Types,Data Types,Developer Guide", + "title":"HLL Data Types", + "githuburl":"" + }, + { + "uri":"dws_06_0022.html", + "product_code":"dws", + "code":"708", + "des":"Object identifiers (OIDs) are used internally by GaussDB(DWS) as primary keys for various system catalogs. OIDs are not added to user-created tables by the system. The OI", + "doc_type":"devg", + "kw":"Object Identifier Types,Data Types,Developer Guide", + "title":"Object Identifier Types", + "githuburl":"" + }, + { + "uri":"dws_06_0023.html", + "product_code":"dws", + "code":"709", + "des":"GaussDB(DWS) has a number of special-purpose entries that are collectively called pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to ", + "doc_type":"devg", + "kw":"Pseudo-Types,Data Types,Developer Guide", + "title":"Pseudo-Types", + "githuburl":"" + }, + { + "uri":"dws_06_0024.html", + "product_code":"dws", + "code":"710", + "des":"Table 1 lists the data types supported by column-store tables.", + "doc_type":"devg", + "kw":"Data Types Supported by Column-Store Tables,Data Types,Developer Guide", + "title":"Data Types Supported by Column-Store Tables", + "githuburl":"" + }, + { + "uri":"dws_06_0025.html", + "product_code":"dws", + "code":"711", + "des":"XML data type stores Extensible Markup Language (XML) formatted data. Such data can also be stored as text, but the advantage of the XML data type is that it checks wheth", + "doc_type":"devg", + "kw":"XML,Data Types,Developer Guide", + "title":"XML", + "githuburl":"" + }, + { + "uri":"dws_06_0026.html", + "product_code":"dws", + "code":"712", + "des":"Table 1 lists the constants and macros that can be used in GaussDB(DWS).", + "doc_type":"devg", + "kw":"Constant and Macro,SQL Syntax Reference,Developer Guide", + "title":"Constant and Macro", + "githuburl":"" + }, + { + "uri":"dws_06_0027.html", + "product_code":"dws", + "code":"713", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Functions and Operators", + "title":"Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0028.html", + "product_code":"dws", + "code":"714", + "des":"The usual logical operators include AND, OR, and NOT. SQL uses a three-valued logical system with true, false, and null, which represents \"unknown\". 
Their priorities are ", + "doc_type":"devg", + "kw":"Logical Operators,Functions and Operators,Developer Guide", + "title":"Logical Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0029.html", + "product_code":"dws", + "code":"715", + "des":"Comparison operators are available for all data types and return Boolean values.All comparison operators are binary operators. Only data types that are the same or can be", + "doc_type":"devg", + "kw":"Comparison Operators,Functions and Operators,Developer Guide", + "title":"Comparison Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0030.html", + "product_code":"dws", + "code":"716", + "des":"String functions and operators provided by GaussDB(DWS) are for concatenating strings with each other, concatenating strings with non-strings, and matching the patterns o", + "doc_type":"devg", + "kw":"Character Processing Functions and Operators,Functions and Operators,Developer Guide", + "title":"Character Processing Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0031.html", + "product_code":"dws", + "code":"717", + "des":"SQL defines some string functions that use keywords, rather than commas, to separate arguments.octet_length(string)Description: Number of bytes in binary stringReturn typ", + "doc_type":"devg", + "kw":"Binary String Functions and Operators,Functions and Operators,Developer Guide", + "title":"Binary String Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0032.html", + "product_code":"dws", + "code":"718", + "des":"Aside from the usual comparison operators, the following operators can be used. Bit string operands of &, |, and # must be of equal length. When bit shifting, the origina", + "doc_type":"devg", + "kw":"Bit String Functions and Operators,Functions and Operators,Developer Guide", + "title":"Bit String Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0033.html", + "product_code":"dws", + "code":"719", + "des":"There are three separate approaches to pattern matching provided by the database: the traditional SQL LIKE operator, the more recent SIMILAR TO operator, and POSIX-style ", + "doc_type":"devg", + "kw":"Pattern Matching Operators,Functions and Operators,Developer Guide", + "title":"Pattern Matching Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0034.html", + "product_code":"dws", + "code":"720", + "des":"+Description: AdditionFor example:SELECT 2+3 AS RESULT;\n result \n--------\n 5\n(1 row)Description: AdditionFor example:-Description: SubtractionFor example:SELECT 2-3 ", + "doc_type":"devg", + "kw":"Mathematical Functions and Operators,Functions and Operators,Developer Guide", + "title":"Mathematical Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0035.html", + "product_code":"dws", + "code":"721", + "des":"When the user uses date/time operators, explicit type prefixes are modified for corresponding operands to ensure that the operands parsed by the database are consistent w", + "doc_type":"devg", + "kw":"Date and Time Processing Functions and Operators,Functions and Operators,Developer Guide", + "title":"Date and Time Processing Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0036.html", + "product_code":"dws", + "code":"722", + "des":"cast(x as y)Description: Converts x into the type specified by y.For example:SELECT cast('22-oct-1997' as timestamp);\n timestamp \n---------------------\n 1997-10", + "doc_type":"devg", + "kw":"Type Conversion Functions,Functions and Operators,Developer Guide", + "title":"Type 
Conversion Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0037.html", + "product_code":"dws", + "code":"723", + "des":"+Description: TranslationFor example:SELECT box '((0,0),(1,1))' + point '(2.0,0)' AS RESULT;\n result \n-------------\n (3,1),(2,0)\n(1 row)Description: TranslationFor e", + "doc_type":"devg", + "kw":"Geometric Functions and Operators,Functions and Operators,Developer Guide", + "title":"Geometric Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0038.html", + "product_code":"dws", + "code":"724", + "des":"The operators <<, <<=, >>, and >>= test for subnet inclusion. They consider only the network parts of the two addresses (ignoring any host part) and determine whether one", + "doc_type":"devg", + "kw":"Network Address Functions and Operators,Functions and Operators,Developer Guide", + "title":"Network Address Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0039.html", + "product_code":"dws", + "code":"725", + "des":"@@Description: Specifies whether the tsvector-typed words match the tsquery-typed words.For example:SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') AS ", + "doc_type":"devg", + "kw":"Text Search Functions and Operators,Functions and Operators,Developer Guide", + "title":"Text Search Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0040.html", + "product_code":"dws", + "code":"726", + "des":"UUID functions are used to generate UUID data (see UUID Type).uuid_generate_v1()Description: Generates a UUID sequence number.Return type: UUIDExample:SELECT uuid_generat", + "doc_type":"devg", + "kw":"UUID Functions,Functions and Operators,Developer Guide", + "title":"UUID Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0041.html", + "product_code":"dws", + "code":"727", + "des":"JSON functions are used to generate JSON data (see JSON Types).array_to_json(anyarray [, pretty_bool])Description: Returns the array as JSON. 
A multi-dimensional array be", + "doc_type":"devg", + "kw":"JSON Functions,Functions and Operators,Developer Guide", + "title":"JSON Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0042.html", + "product_code":"dws", + "code":"728", + "des":"hll_hash_boolean(bool)Description: Hashes data of the bool type.Return type: hll_hashvalFor example:SELECT hll_hash_boolean(FALSE);\n hll_hash_boolean \n----------------", + "doc_type":"devg", + "kw":"HLL Functions and Operators,Functions and Operators,Developer Guide", + "title":"HLL Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0043.html", + "product_code":"dws", + "code":"729", + "des":"The sequence functions provide a simple, multiuser-safe method for obtaining sequence values from sequence objects.The hybrid data warehouse (s", + "doc_type":"devg", + "kw":"SEQUENCE Functions,Functions and Operators,Developer Guide", + "title":"SEQUENCE Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0044.html", + "product_code":"dws", + "code":"730", + "des":"=Description: Specifies whether two arrays are equal.For example:SELECT ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3] AS RESULT ;\n result \n--------\n t\n(1 row)Description: Spec", + "doc_type":"devg", + "kw":"Array Functions and Operators,Functions and Operators,Developer Guide", + "title":"Array Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0045.html", + "product_code":"dws", + "code":"731", + "des":"=Description: EqualsFor example:SELECT int4range(1,5) = '[1,4]'::int4range AS RESULT;\n result\n--------\n t\n(1 row)Description: EqualsFor example:<>Description: Does not eq", + "doc_type":"devg", + "kw":"Range Functions and Operators,Functions and Operators,Developer Guide", + "title":"Range Functions and Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0046.html", + "product_code":"dws", + "code":"732", + "des":"sum(expression)Description: Sum of expression across all input valuesReturn type:Generally, same as the argument data type. In the following cases, type conversion occurs", + "doc_type":"devg", + "kw":"Aggregate Functions,Functions and Operators,Developer Guide", + "title":"Aggregate Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0047.html", + "product_code":"dws", + "code":"733", + "des":"Regular aggregate functions return a single value calculated from values in a row, or group all rows into a single output row. Window functions perform a calculation acro", + "doc_type":"devg", + "kw":"Window Functions,Functions and Operators,Developer Guide", + "title":"Window Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0048.html", + "product_code":"dws", + "code":"734", + "des":"gs_password_deadline()Description: Indicates the number of remaining days before the password of the current user expires. 
After the password expires, the system prompts ", + "doc_type":"devg", + "kw":"Security Functions,Functions and Operators,Developer Guide", + "title":"Security Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0049.html", + "product_code":"dws", + "code":"735", + "des":"generate_series(start, stop)Description: Generates a series of values, from start to stop with a step size of one.Parameter type: int, bigint, or numericReturn type: seto", + "doc_type":"devg", + "kw":"Set Returning Functions,Functions and Operators,Developer Guide", + "title":"Set Returning Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0050.html", + "product_code":"dws", + "code":"736", + "des":"coalesce(expr1, expr2, ..., exprn)Description: Returns the first argument that is not NULL in the argument list.COALESCE(expr1, expr2) is equivalent to CASE WHEN expr1 IS", + "doc_type":"devg", + "kw":"Conditional Expression Functions,Functions and Operators,Developer Guide", + "title":"Conditional Expression Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0051.html", + "product_code":"dws", + "code":"737", + "des":"current_catalogDescription: Name of the current database (called \"catalog\" in the SQL standard)Return type: nameFor example:SELECT current_catalog;\n current_database\n----", + "doc_type":"devg", + "kw":"System Information Functions,Functions and Operators,Developer Guide", + "title":"System Information Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0052.html", + "product_code":"dws", + "code":"738", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"System Administration Functions", + "title":"System Administration Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0053.html", + "product_code":"dws", + "code":"739", + "des":"Configuration setting functions are used for querying and modifying configuration parameters during running.current_setting(setting_name)Description: Specifies the curren", + "doc_type":"devg", + "kw":"Configuration Settings Functions,System Administration Functions,Developer Guide", + "title":"Configuration Settings Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0054.html", + "product_code":"dws", + "code":"740", + "des":"Universal file access functions provide local access interfaces for files on a database server. Only files in the database cluster directory and the log_directory directo", + "doc_type":"devg", + "kw":"Universal File Access Functions,System Administration Functions,Developer Guide", + "title":"Universal File Access Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0055.html", + "product_code":"dws", + "code":"741", + "des":"Server signaling functions send control signals to other server processes. 
Only system administrators can use these functions.pg_cancel_backend(pid int)Description: Cance", + "doc_type":"devg", + "kw":"Server Signaling Functions,System Administration Functions,Developer Guide", + "title":"Server Signaling Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0056.html", + "product_code":"dws", + "code":"742", + "des":"Backup control functions help online backup.pg_create_restore_point(name text)Description: Creates a named point for performing the restore operation (restricted to syste", + "doc_type":"devg", + "kw":"Backup and Restoration Control Functions,System Administration Functions,Developer Guide", + "title":"Backup and Restoration Control Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0057.html", + "product_code":"dws", + "code":"743", + "des":"Snapshot synchronization functions save the current snapshot and return its identifier.pg_export_snapshot()Description: Saves the current snapshot and returns its identif", + "doc_type":"devg", + "kw":"Snapshot Synchronization Functions,System Administration Functions,Developer Guide", + "title":"Snapshot Synchronization Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0058.html", + "product_code":"dws", + "code":"744", + "des":"Database object size functions calculate the actual disk space used by database objects.pg_column_size(any)Description: Specifies the number of bytes used to store a part", + "doc_type":"devg", + "kw":"Database Object Functions,System Administration Functions,Developer Guide", + "title":"Database Object Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0059.html", + "product_code":"dws", + "code":"745", + "des":"Advisory lock functions manage advisory locks. These functions are only for internal use currently.pg_advisory_lock(key bigint)Description: Obtains an exclusive session-l", + "doc_type":"devg", + "kw":"Advisory Lock Functions,System Administration Functions,Developer Guide", + "title":"Advisory Lock Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0060.html", + "product_code":"dws", + "code":"746", + "des":"pg_get_residualfiles()Description: Obtains all residual file records of the current node. This function is an instance-level function and is irrelevant to the current dat", + "doc_type":"devg", + "kw":"Residual File Management Functions,System Administration Functions,Developer Guide", + "title":"Residual File Management Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0061.html", + "product_code":"dws", + "code":"747", + "des":"A replication function synchronizes logs and data between instances. 
It is a statistical or operational method provided by the system to implement HA.Replication functions e", + "doc_type":"devg", + "kw":"Replication Functions,System Administration Functions,Developer Guide", + "title":"Replication Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0062.html", + "product_code":"dws", + "code":"748", + "des":"pgxc_pool_check()Description: Checks whether the connection data buffered in the pool is consistent with pgxc_node.Return type: booleanDescription: Checks whether the con", + "doc_type":"devg", + "kw":"Other Functions,System Administration Functions,Developer Guide", + "title":"Other Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0063.html", + "product_code":"dws", + "code":"749", + "des":"This section describes the functions of the resource management module.gs_wlm_readjust_user_space(oid)Description: This function calibrates the permanent storage space of", + "doc_type":"devg", + "kw":"Resource Management Functions,System Administration Functions,Developer Guide", + "title":"Resource Management Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0064.html", + "product_code":"dws", + "code":"750", + "des":"Data redaction functions are used to mask and protect sensitive data. Generally, you are advised to bind these functions to the columns to be redacted based on the data r", + "doc_type":"devg", + "kw":"Data Redaction Functions,Functions and Operators,Developer Guide", + "title":"Data Redaction Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0065.html", + "product_code":"dws", + "code":"751", + "des":"Statistics information functions are divided into the following two categories: functions that access databases, using the OID of each table or index in a database to mar", + "doc_type":"devg", + "kw":"Statistics Information Functions,Functions and Operators,Developer Guide", + "title":"Statistics Information Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0066.html", + "product_code":"dws", + "code":"752", + "des":"pg_get_triggerdef(oid)Description: Obtains the definition information of a trigger.Parameter: OID of the trigger to be queriedReturn type: textExample:select pg_get_trigg", + "doc_type":"devg", + "kw":"Trigger Functions,Functions and Operators,Developer Guide", + "title":"Trigger Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0067.html", + "product_code":"dws", + "code":"753", + "des":"XMLPARSE ( { DOCUMENT | CONTENT } value)Description: Generates an XML value from character data.Return type: XMLExample:XMLSERIALIZE ( { DOCUMENT | CONTENT } value AS typ", + "doc_type":"devg", + "kw":"XML Functions,Functions and Operators,Developer Guide", + "title":"XML Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0068.html", + "product_code":"dws", + "code":"754", + "des":"The pv_memory_profiling(type int) and environment variable MALLOC_CONF are used by GaussDB(DWS) to control the enabling and disabling of the memory allocation call stack ", + "doc_type":"devg", + "kw":"Call Stack Recording Functions,Functions and Operators,Developer Guide", + "title":"Call Stack Recording Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0069.html", + "product_code":"dws", + "code":"755", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Expressions", + "title":"Expressions", + "githuburl":"" + }, + { + "uri":"dws_06_0070.html", + "product_code":"dws", + "code":"756", + "des":"Logical Operators lists the operators and calculation rules of logical expressions.Comparison Operators lists the common comparative operators.In addition to comparative ", + "doc_type":"devg", + "kw":"Simple Expressions,Expressions,Developer Guide", + "title":"Simple Expressions", + "githuburl":"" + }, + { + "uri":"dws_06_0071.html", + "product_code":"dws", + "code":"757", + "des":"Data that meets the requirements specified by conditional expressions is filtered during SQL statement execution.Conditional expressions include the following types:CASE", + "doc_type":"devg", + "kw":"Conditional Expressions,Expressions,Developer Guide", + "title":"Conditional Expressions", + "githuburl":"" + }, + { + "uri":"dws_06_0072.html", + "product_code":"dws", + "code":"758", + "des":"Subquery expressions include the following types:EXISTS/NOT EXISTSFigure 1 shows the syntax of an EXISTS/NOT EXISTS expression.EXISTS/NOT EXISTS::=The parameter of an EXI", + "doc_type":"devg", + "kw":"Subquery Expressions,Expressions,Developer Guide", + "title":"Subquery Expressions", + "githuburl":"" + }, + { + "uri":"dws_06_0073.html", + "product_code":"dws", + "code":"759", + "des":"expressionIN(value [, ...])The parentheses on the right contain an expression list. The expression result on the left is compared with the content in the expression list.", + "doc_type":"devg", + "kw":"Array Expressions,Expressions,Developer Guide", + "title":"Array Expressions", + "githuburl":"" + }, + { + "uri":"dws_06_0074.html", + "product_code":"dws", + "code":"760", + "des":"Syntax:row_constructor operator row_constructorBoth sides of the row expression are row constructors. The values of both rows must have the same number of fields and they", + "doc_type":"devg", + "kw":"Row Expressions,Expressions,Developer Guide", + "title":"Row Expressions", + "githuburl":"" + }, + { + "uri":"dws_06_0075.html", + "product_code":"dws", + "code":"761", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Type Conversion", + "title":"Type Conversion", + "githuburl":"" + }, + { + "uri":"dws_06_0076.html", + "product_code":"dws", + "code":"762", + "des":"SQL is a typed language. That is, every data item has an associated data type which determines its behavior and allowed usage. GaussDB(DWS) has an extensible type system ", + "doc_type":"devg", + "kw":"Overview,Type Conversion,Developer Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_06_0077.html", + "product_code":"dws", + "code":"763", + "des":"Select the operators to be considered from the pg_operator system catalog. Considered operators are those with the matching name and argument count. If the search path fi", + "doc_type":"devg", + "kw":"Operators,Type Conversion,Developer Guide", + "title":"Operators", + "githuburl":"" + }, + { + "uri":"dws_06_0078.html", + "product_code":"dws", + "code":"764", + "des":"Select the functions to be considered from the pg_proc system catalog. 
If a non-schema-qualified function name was used, the functions in the current search path are cons", + "doc_type":"devg", + "kw":"Functions,Type Conversion,Developer Guide", + "title":"Functions", + "githuburl":"" + }, + { + "uri":"dws_06_0079.html", + "product_code":"dws", + "code":"765", + "des":"Search for an exact match with the target column.Try to convert the expression to the target type. This will succeed if there is a registered cast between the two types. ", + "doc_type":"devg", + "kw":"Value Storage,Type Conversion,Developer Guide", + "title":"Value Storage", + "githuburl":"" + }, + { + "uri":"dws_06_0080.html", + "product_code":"dws", + "code":"766", + "des":"SQL UNION constructs must match up possibly dissimilar types to become a single result set. Since all query results from a SELECT UNION statement must appear in a single ", + "doc_type":"devg", + "kw":"UNION, CASE, and Related Constructs,Type Conversion,Developer Guide", + "title":"UNION, CASE, and Related Constructs", + "githuburl":"" + }, + { + "uri":"dws_06_0081.html", + "product_code":"dws", + "code":"767", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Full Text Search", + "title":"Full Text Search", + "githuburl":"" + }, + { + "uri":"dws_06_0082.html", + "product_code":"dws", + "code":"768", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Introduction", + "title":"Introduction", + "githuburl":"" + }, + { + "uri":"dws_06_0083.html", + "product_code":"dws", + "code":"769", + "des":"Textual search operators have been used in databases for years. GaussDB(DWS) has ~, ~*, LIKE, and ILIKE operators for textual data types, but they lack many essential pro", + "doc_type":"devg", + "kw":"Full-Text Retrieval,Introduction,Developer Guide", + "title":"Full-Text Retrieval", + "githuburl":"" + }, + { + "uri":"dws_06_0084.html", + "product_code":"dws", + "code":"770", + "des":"A document is the unit of searching in a full text search system; for example, a magazine article or email message. The text search engine must be able to parse documents", + "doc_type":"devg", + "kw":"What Is a Document?,Introduction,Developer Guide", + "title":"What Is a Document?", + "githuburl":"" + }, + { + "uri":"dws_06_0085.html", + "product_code":"dws", + "code":"771", + "des":"Full text search in GaussDB(DWS) is based on the match operator @@, which returns true if a tsvector (document) matches a tsquery (query). 
It does not matter which data t", + "doc_type":"devg", + "kw":"Basic Text Matching,Introduction,Developer Guide", + "title":"Basic Text Matching", + "githuburl":"" + }, + { + "uri":"dws_06_0086.html", + "product_code":"dws", + "code":"772", + "des":"Full text search functionality includes the ability to do many more things: skip indexing certain words (stop words), process synonyms, and use sophisticated parsing, for", + "doc_type":"devg", + "kw":"Configurations,Introduction,Developer Guide", + "title":"Configurations", + "githuburl":"" + }, + { + "uri":"dws_06_0087.html", + "product_code":"dws", + "code":"773", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Table and index", + "title":"Table and index", + "githuburl":"" + }, + { + "uri":"dws_06_0088.html", + "product_code":"dws", + "code":"774", + "des":"It is possible to do a full text search without an index.A simple query to print each row that contains the word science in its body column is as follows:DROP SCHEMA IF E", + "doc_type":"devg", + "kw":"Searching a Table,Table and index,Developer Guide", + "title":"Searching a Table", + "githuburl":"" + }, + { + "uri":"dws_06_0089.html", + "product_code":"dws", + "code":"775", + "des":"You can create a GIN index to speed up text searches:The to_tsvector() function accepts one or two arguments.If the one-argument version of the index is used, the system wi", + "doc_type":"devg", + "kw":"Creating an Index,Table and index,Developer Guide", + "title":"Creating an Index", + "githuburl":"" + }, + { + "uri":"dws_06_0090.html", + "product_code":"dws", + "code":"776", + "des":"The following is an example of using an index. Run the following statements in a database that uses the UTF-8 or GBK encoding:In this example, table1 has two GIN indexes ", + "doc_type":"devg", + "kw":"Constraints on Index Use,Table and index,Developer Guide", + "title":"Constraints on Index Use", + "githuburl":"" + }, + { + "uri":"dws_06_0091.html", + "product_code":"dws", + "code":"777", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Controlling Text Search", + "title":"Controlling Text Search", + "githuburl":"" + }, + { + "uri":"dws_06_0092.html", + "product_code":"dws", + "code":"778", + "des":"GaussDB(DWS) provides the function to_tsvector for converting a document to the tsvector data type.to_tsvector parses a textual document into tokens, reduces the tokens to le", + "doc_type":"devg", + "kw":"Parsing Documents,Controlling Text Search,Developer Guide", + "title":"Parsing Documents", + "githuburl":"" + }, + { + "uri":"dws_06_0093.html", + "product_code":"dws", + "code":"779", + "des":"GaussDB(DWS) provides the functions to_tsquery and plainto_tsquery for converting a query to the tsquery data type. 
to_tsquery offers access to more features than plainto_tsq", + "doc_type":"devg", + "kw":"Parsing Queries,Controlling Text Search,Developer Guide", + "title":"Parsing Queries", + "githuburl":"" + }, + { + "uri":"dws_06_0094.html", + "product_code":"dws", + "code":"780", + "des":"Ranking attempts to measure how relevant documents are to a particular query, so that when there are many matches the most relevant ones can be shown first. GaussDB(DWS) ", + "doc_type":"devg", + "kw":"Ranking Search Results,Controlling Text Search,Developer Guide", + "title":"Ranking Search Results", + "githuburl":"" + }, + { + "uri":"dws_06_0095.html", + "product_code":"dws", + "code":"781", + "des":"To present search results it is ideal to show a part of each document and how it is related to the query. Usually, search engines show fragments of the document with mark", + "doc_type":"devg", + "kw":"Highlighting Results,Controlling Text Search,Developer Guide", + "title":"Highlighting Results", + "githuburl":"" + }, + { + "uri":"dws_06_0096.html", + "product_code":"dws", + "code":"782", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Additional Features", + "title":"Additional Features", + "githuburl":"" + }, + { + "uri":"dws_06_0097.html", + "product_code":"dws", + "code":"783", + "des":"GaussDB(DWS) provides functions and operators that can be used to manipulate documents that are already in tsvector type.tsvector || tsvectorThe tsvector concatenation op", + "doc_type":"devg", + "kw":"Manipulating tsvector,Additional Features,Developer Guide", + "title":"Manipulating tsvector", + "githuburl":"" + }, + { + "uri":"dws_06_0098.html", + "product_code":"dws", + "code":"784", + "des":"GaussDB(DWS) provides functions and operators that can be used to manipulate queries that are already in tsquery type.tsquery && tsqueryReturns the AND-combination of the", + "doc_type":"devg", + "kw":"Manipulating Queries,Additional Features,Developer Guide", + "title":"Manipulating Queries", + "githuburl":"" + }, + { + "uri":"dws_06_0099.html", + "product_code":"dws", + "code":"785", + "des":"The ts_rewrite family of functions searches a given tsquery for occurrences of a target subquery, and replaces each occurrence with a substitute subquery. 
In essence this ", + "doc_type":"devg", + "kw":"Rewriting Queries,Additional Features,Developer Guide", + "title":"Rewriting Queries", + "githuburl":"" + }, + { + "uri":"dws_06_0100.html", + "product_code":"dws", + "code":"786", + "des":"The function ts_stat is useful for checking your configuration and for finding stop-word candidates.sqlquery is a text value containing an SQL query which must return a s", + "doc_type":"devg", + "kw":"Gathering Document Statistics,Additional Features,Developer Guide", + "title":"Gathering Document Statistics", + "githuburl":"" + }, + { + "uri":"dws_06_0101.html", + "product_code":"dws", + "code":"787", + "des":"Text search parsers are responsible for splitting raw document text into tokens and identifying each token's type, where the set of types is defined by the parser itself.", + "doc_type":"devg", + "kw":"Parsers,Full Text Search,Developer Guide", + "title":"Parsers", + "githuburl":"" + }, + { + "uri":"dws_06_0102.html", + "product_code":"dws", + "code":"788", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Dictionaries", + "title":"Dictionaries", + "githuburl":"" + }, + { + "uri":"dws_06_0103.html", + "product_code":"dws", + "code":"789", + "des":"A dictionary is used to define stop words, that is, words to be ignored in full-text retrieval.A dictionary can also be used to normalize words so that different derived ", + "doc_type":"devg", + "kw":"Overview,Dictionaries,Developer Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_06_0104.html", + "product_code":"dws", + "code":"790", + "des":"Stop words are words that are very common, appear in almost every document, and have no discrimination value. Therefore, they can be ignored in the context of full text s", + "doc_type":"devg", + "kw":"Stop Words,Dictionaries,Developer Guide", + "title":"Stop Words", + "githuburl":"" + }, + { + "uri":"dws_06_0105.html", + "product_code":"dws", + "code":"791", + "des":"A Simple dictionary operates by converting the input token to lower case and checking it against a list of stop words. If the token is found in the list, an empty array w", + "doc_type":"devg", + "kw":"Simple Dictionary,Dictionaries,Developer Guide", + "title":"Simple Dictionary", + "githuburl":"" + }, + { + "uri":"dws_06_0106.html", + "product_code":"dws", + "code":"792", + "des":"A synonym dictionary is used to define, identify, and convert synonyms of tokens. Phrases are not supported (use the thesaurus dictionary in Thesaurus Dictionary).A synon", + "doc_type":"devg", + "kw":"Synonym Dictionary,Dictionaries,Developer Guide", + "title":"Synonym Dictionary", + "githuburl":"" + }, + { + "uri":"dws_06_0107.html", + "product_code":"dws", + "code":"793", + "des":"A thesaurus dictionary (sometimes abbreviated as TZ) is a collection of words that include relationships between words and phrases, such as broader terms (BT), narrower t", + "doc_type":"devg", + "kw":"Thesaurus Dictionary,Dictionaries,Developer Guide", + "title":"Thesaurus Dictionary", + "githuburl":"" + }, + { + "uri":"dws_06_0108.html", + "product_code":"dws", + "code":"794", + "des":"The Ispell dictionary template supports morphological dictionaries, which can normalize many different linguistic forms of a word into the same lexeme. 
For example, an En", + "doc_type":"devg", + "kw":"Ispell Dictionary,Dictionaries,Developer Guide", + "title":"Ispell Dictionary", + "githuburl":"" + }, + { + "uri":"dws_06_0109.html", + "product_code":"dws", + "code":"795", + "des":"A Snowball dictionary is based on a project by Martin Porter and is used for stem analysis, providing stemming algorithms for many languages. GaussDB(DWS) provides predef", + "doc_type":"devg", + "kw":"Snowball Dictionary,Dictionaries,Developer Guide", + "title":"Snowball Dictionary", + "githuburl":"" + }, + { + "uri":"dws_06_0110.html", + "product_code":"dws", + "code":"796", + "des":"Text search configuration specifies the following components required for converting a document into a tsvector:A parser, which decomposes a text into tokens.Dictionary list, c", + "doc_type":"devg", + "kw":"Configuration Examples,Full Text Search,Developer Guide", + "title":"Configuration Examples", + "githuburl":"" + }, + { + "uri":"dws_06_0111.html", + "product_code":"dws", + "code":"797", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Testing and Debugging Text Search", + "title":"Testing and Debugging Text Search", + "githuburl":"" + }, + { + "uri":"dws_06_0112.html", + "product_code":"dws", + "code":"798", + "des":"The function ts_debug allows easy testing of a text search configuration.ts_debug displays information about every token of document as produced by the parser and process", + "doc_type":"devg", + "kw":"Testing a Configuration,Testing and Debugging Text Search,Developer Guide", + "title":"Testing a Configuration", + "githuburl":"" + }, + { + "uri":"dws_06_0113.html", + "product_code":"dws", + "code":"799", + "des":"The ts_parse function allows direct testing of a text search parser.ts_parse parses the given document and returns a series of records, one for each token produced by par", + "doc_type":"devg", + "kw":"Testing a Parser,Testing and Debugging Text Search,Developer Guide", + "title":"Testing a Parser", + "githuburl":"" + }, + { + "uri":"dws_06_0114.html", + "product_code":"dws", + "code":"800", + "des":"The ts_lexize function facilitates dictionary testing.ts_lexize(dict regdictionary, token text) returns text[] ts_lexize returns an array of lexemes if the input token is", + "doc_type":"devg", + "kw":"Testing a Dictionary,Testing and Debugging Text Search,Developer Guide", + "title":"Testing a Dictionary", + "githuburl":"" + }, + { + "uri":"dws_06_0115.html", + "product_code":"dws", + "code":"801", + "des":"The current limitations of GaussDB(DWS)'s full text search are:The length of each lexeme must be less than 2 KB.The length of a tsvector (lexemes + positions) must be les", + "doc_type":"devg", + "kw":"Limitations,Full Text Search,Developer Guide", + "title":"Limitations", + "githuburl":"" + }, + { + "uri":"dws_06_0116.html", + "product_code":"dws", + "code":"802", + "des":"GaussDB(DWS) runs SQL statements to perform different system operations, such as setting variables, displaying the execution plan, and collecting garbage data.For details", + "doc_type":"devg", + "kw":"System Operation,SQL Syntax Reference,Developer Guide", + "title":"System Operation", + "githuburl":"" + }, + { + "uri":"dws_06_0117.html", + "product_code":"dws", + "code":"803", + "des":"A transaction is a user-defined sequence of database 
operations, which form an integral unit of work.GaussDB(DWS) starts a transaction using START TRANSACTION and BEGIN. ", + "doc_type":"devg", + "kw":"Controlling Transactions,SQL Syntax Reference,Developer Guide", + "title":"Controlling Transactions", + "githuburl":"" + }, + { + "uri":"dws_06_0118.html", + "product_code":"dws", + "code":"804", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"DDL Syntax", + "title":"DDL Syntax", + "githuburl":"" + }, + { + "uri":"dws_06_0119.html", + "product_code":"dws", + "code":"805", + "des":"Data definition language (DDL) is used to define or modify an object in a database, such as a table, index, or view.GaussDB(DWS) does not support DDL if its CN is unavail", + "doc_type":"devg", + "kw":"DDL Syntax Overview,DDL Syntax,Developer Guide", + "title":"DDL Syntax Overview", + "githuburl":"" + }, + { + "uri":"dws_06_0120.html", + "product_code":"dws", + "code":"806", + "des":"This command is used to modify the attributes of a database, including the database name, owner, maximum number of connections, and object isolation attribute.Only the ow", + "doc_type":"devg", + "kw":"ALTER DATABASE,DDL Syntax,Developer Guide", + "title":"ALTER DATABASE", + "githuburl":"" + }, + { + "uri":"dws_06_0123.html", + "product_code":"dws", + "code":"807", + "des":"ALTER FOREIGN TABLE modifies a foreign table.NoneSet the attributes of a foreign table.ALTER FOREIGN TABLE [ IF EXISTS ] table_name\n OPTIONS ( {[ ADD | SET | DROP ] o", + "doc_type":"devg", + "kw":"ALTER FOREIGN TABLE (for GDS),DDL Syntax,Developer Guide", + "title":"ALTER FOREIGN TABLE (for GDS)", + "githuburl":"" + }, + { + "uri":"dws_06_0124.html", + "product_code":"dws", + "code":"808", + "des":"ALTER FOREIGN TABLE modifies an HDFS or OBS foreign table.NoneSet a foreign table's attributes.ALTER FOREIGN TABLE [ IF EXISTS ] table_name\n OPTIONS ( {[ ADD | SET | ", + "doc_type":"devg", + "kw":"ALTER FOREIGN TABLE (for HDFS or OBS),DDL Syntax,Developer Guide", + "title":"ALTER FOREIGN TABLE (for HDFS or OBS)", + "githuburl":"" + }, + { + "uri":"dws_06_0126.html", + "product_code":"dws", + "code":"809", + "des":"ALTER FUNCTION modifies the attributes of a customized function.Only the owner of a function or a system administrator can run this statement. If a function involves oper", + "doc_type":"devg", + "kw":"ALTER FUNCTION,DDL Syntax,Developer Guide", + "title":"ALTER FUNCTION", + "githuburl":"" + }, + { + "uri":"dws_06_0127.html", + "product_code":"dws", + "code":"810", + "des":"ALTER GROUP modifies the attributes of a user group.ALTER GROUP is an alias for ALTER ROLE; it is not a standard SQL command and is not recommended. Users can use ALTER ", + "doc_type":"devg", + "kw":"ALTER GROUP,DDL Syntax,Developer Guide", + "title":"ALTER GROUP", + "githuburl":"" + }, + { + "uri":"dws_06_0128.html", + "product_code":"dws", + "code":"811", + "des":"ALTER INDEX modifies the definition of an existing index.There are several sub-forms:IF EXISTSIf the specified index does not exist, a notice instead of an error is sent.", + "doc_type":"devg", + "kw":"ALTER INDEX,DDL Syntax,Developer Guide", + "title":"ALTER INDEX", + "githuburl":"" + }, + { + "uri":"dws_06_0129.html", + "product_code":"dws", + "code":"812", + "des":"ALTER LARGE OBJECT modifies the definition of a large object. 
It can only assign a new owner to a large object.Only the administrator or the owner of the to-be-modified l", + "doc_type":"devg", + "kw":"ALTER LARGE OBJECT,DDL Syntax,Developer Guide", + "title":"ALTER LARGE OBJECT", + "githuburl":"" + }, + { + "uri":"dws_06_0132.html", + "product_code":"dws", + "code":"813", + "des":"ALTER REDACTION POLICY modifies a data redaction policy applied to a specified table.Only the owner of the table to which the redaction policy is applied has the permissi", + "doc_type":"devg", + "kw":"ALTER REDACTION POLICY,DDL Syntax,Developer Guide", + "title":"ALTER REDACTION POLICY", + "githuburl":"" + }, + { + "uri":"dws_06_0133.html", + "product_code":"dws", + "code":"814", + "des":"ALTER RESOURCE POOL changes the Cgroup of a resource pool.Users having the ALTER permission can modify resource pools.pool_nameSpecifies the name of the resource pool.The", + "doc_type":"devg", + "kw":"ALTER RESOURCE POOL,DDL Syntax,Developer Guide", + "title":"ALTER RESOURCE POOL", + "githuburl":"" + }, + { + "uri":"dws_06_0134.html", + "product_code":"dws", + "code":"815", + "des":"ALTER ROLE changes the attributes of a role.NoneModifying the Rights of a RoleALTER ROLE role_name [ [ WITH ] option [ ... ] ];The option clause for granting rights is as", + "doc_type":"devg", + "kw":"ALTER ROLE,DDL Syntax,Developer Guide", + "title":"ALTER ROLE", + "githuburl":"" + }, + { + "uri":"dws_06_0135.html", + "product_code":"dws", + "code":"816", + "des":"ALTER ROW LEVEL SECURITY POLICY modifies an existing row-level access control policy, including the policy name and the users and expressions affected by the policy.Only ", + "doc_type":"devg", + "kw":"ALTER ROW LEVEL SECURITY POLICY,DDL Syntax,Developer Guide", + "title":"ALTER ROW LEVEL SECURITY POLICY", + "githuburl":"" + }, + { + "uri":"dws_06_0136.html", + "product_code":"dws", + "code":"817", + "des":"ALTER SCHEMA changes the attributes of a schema.Only the owner of the schema or a system administrator can run this statement.Rename a schema.ALTER SCHEMA schema_name \n ", + "doc_type":"devg", + "kw":"ALTER SCHEMA,DDL Syntax,Developer Guide", + "title":"ALTER SCHEMA", + "githuburl":"" + }, + { + "uri":"dws_06_0137.html", + "product_code":"dws", + "code":"818", + "des":"ALTER SEQUENCE modifies the parameters of an existing sequence.You must be the owner of the sequence to use ALTER SEQUENCE.In the current version, you can modify only the", + "doc_type":"devg", + "kw":"ALTER SEQUENCE,DDL Syntax,Developer Guide", + "title":"ALTER SEQUENCE", + "githuburl":"" + }, + { + "uri":"dws_06_0138.html", + "product_code":"dws", + "code":"819", + "des":"ALTER SERVER adds, modifies, or deletes the parameters of an existing server. You can query existing servers from the pg_foreign_server system catalog.Only the owner of a", + "doc_type":"devg", + "kw":"ALTER SERVER,DDL Syntax,Developer Guide", + "title":"ALTER SERVER", + "githuburl":"" + }, + { + "uri":"dws_06_0139.html", + "product_code":"dws", + "code":"820", + "des":"ALTER SESSION defines or modifies the conditions or parameters that affect the current session. 
Modified session parameters are kept until the current session is disconne", + "doc_type":"devg", + "kw":"ALTER SESSION,DDL Syntax,Developer Guide", + "title":"ALTER SESSION", + "githuburl":"" + }, + { + "uri":"dws_06_0140.html", + "product_code":"dws", + "code":"821", + "des":"ALTER SYNONYM is used to modify the attribute of a synonym.Only the synonym owner can be changed.Only the system administrator and the synonym owner have the permission to", + "doc_type":"devg", + "kw":"ALTER SYNONYM,DDL Syntax,Developer Guide", + "title":"ALTER SYNONYM", + "githuburl":"" + }, + { + "uri":"dws_06_0141.html", + "product_code":"dws", + "code":"822", + "des":"ALTER SYSTEM KILL SESSION ends a session.Nonesession_sid, serialSpecifies SID and SERIAL of a session (see examples for format).Value range: The SIDs and SERIALs of all s", + "doc_type":"devg", + "kw":"ALTER SYSTEM KILL SESSION,DDL Syntax,Developer Guide", + "title":"ALTER SYSTEM KILL SESSION", + "githuburl":"" + }, + { + "uri":"dws_06_0142.html", + "product_code":"dws", + "code":"823", + "des":"ALTER TABLE is used to modify tables, including modifying table definitions, renaming tables, renaming specified columns in tables, renaming table constraints, setting ta", + "doc_type":"devg", + "kw":"ALTER TABLE,DDL Syntax,Developer Guide", + "title":"ALTER TABLE", + "githuburl":"" + }, + { + "uri":"dws_06_0143.html", + "product_code":"dws", + "code":"824", + "des":"ALTER TABLE PARTITION modifies table partitioning, including adding, deleting, splitting, merging partitions, and modifying partition attributes.The name of the added par", + "doc_type":"devg", + "kw":"ALTER TABLE PARTITION,DDL Syntax,Developer Guide", + "title":"ALTER TABLE PARTITION", + "githuburl":"" + }, + { + "uri":"dws_06_0145.html", + "product_code":"dws", + "code":"825", + "des":"ALTER TEXT SEARCH CONFIGURATION modifies the definition of a text search configuration. You can modify its mappings from token types to dictionaries, change the configura", + "doc_type":"devg", + "kw":"ALTER TEXT SEARCH CONFIGURATION,DDL Syntax,Developer Guide", + "title":"ALTER TEXT SEARCH CONFIGURATION", + "githuburl":"" + }, + { + "uri":"dws_06_0146.html", + "product_code":"dws", + "code":"826", + "des":"ALTER TEXT SEARCH DICTIONARY modifies the definition of a full-text retrieval dictionary, including its parameters, name, owner, and schema.ALTER is not supported by pred", + "doc_type":"devg", + "kw":"ALTER TEXT SEARCH DICTIONARY,DDL Syntax,Developer Guide", + "title":"ALTER TEXT SEARCH DICTIONARY", + "githuburl":"" + }, + { + "uri":"dws_06_0147.html", + "product_code":"dws", + "code":"827", + "des":"ALTER TRIGGER modifies the definition of a trigger.Only the owner of a table where a trigger is created and system administrators can run the ALTER TRIGGER statement.trig", + "doc_type":"devg", + "kw":"ALTER TRIGGER,DDL Syntax,Developer Guide", + "title":"ALTER TRIGGER", + "githuburl":"" + }, + { + "uri":"dws_06_0148.html", + "product_code":"dws", + "code":"828", + "des":"ALTER TYPE modifies the definition of a type.Modify a type.ALTER TYPE name action [, ... 
]\nALTER TYPE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }\nALTER TYPE", + "doc_type":"devg", + "kw":"ALTER TYPE,DDL Syntax,Developer Guide", + "title":"ALTER TYPE", + "githuburl":"" + }, + { + "uri":"dws_06_0149.html", + "product_code":"dws", + "code":"829", + "des":"ALTER USER modifies the attributes of a database user.Session parameters modified by ALTER USER apply to a specified user and take effect in the next session.Modify user ", + "doc_type":"devg", + "kw":"ALTER USER,DDL Syntax,Developer Guide", + "title":"ALTER USER", + "githuburl":"" + }, + { + "uri":"dws_06_0150.html", + "product_code":"dws", + "code":"830", + "des":"ALTER VIEW modifies all auxiliary attributes of a view. (To modify the query definition of a view, use CREATE OR REPLACE VIEW.)Only the view owner can modify a view by ru", + "doc_type":"devg", + "kw":"ALTER VIEW,DDL Syntax,Developer Guide", + "title":"ALTER VIEW", + "githuburl":"" + }, + { + "uri":"dws_06_0151.html", + "product_code":"dws", + "code":"831", + "des":"CLEAN CONNECTION clears database connections when a database is abnormal. You may use this statement to delete a specific user's connections to a specified database.NoneC", + "doc_type":"devg", + "kw":"CLEAN CONNECTION,DDL Syntax,Developer Guide", + "title":"CLEAN CONNECTION", + "githuburl":"" + }, + { + "uri":"dws_06_0152.html", + "product_code":"dws", + "code":"832", + "des":"CLOSE frees the resources associated with an open cursor.After a cursor is closed, no subsequent operations are allowed on it.A cursor should be closed when it is no long", + "doc_type":"devg", + "kw":"CLOSE,DDL Syntax,Developer Guide", + "title":"CLOSE", + "githuburl":"" + }, + { + "uri":"dws_06_0153.html", + "product_code":"dws", + "code":"833", + "des":"Cluster a table according to an index.CLUSTER instructs GaussDB(DWS) to cluster the table specified by table_name based on the index specified by index_name. The index mu", + "doc_type":"devg", + "kw":"CLUSTER,DDL Syntax,Developer Guide", + "title":"CLUSTER", + "githuburl":"" + }, + { + "uri":"dws_06_0154.html", + "product_code":"dws", + "code":"834", + "des":"COMMENT defines or changes the comment of an object.Only one comment string is stored for each object. To modify a comment, issue a new COMMENT command for the same objec", + "doc_type":"devg", + "kw":"COMMENT,DDL Syntax,Developer Guide", + "title":"COMMENT", + "githuburl":"" + }, + { + "uri":"dws_06_0155.html", + "product_code":"dws", + "code":"835", + "des":"Creates a barrier for cluster nodes. The barrier can be used for data restoration.Before creating a barrier, ensure that gtm_backup_barrier and enable_cbm_tracking are se", + "doc_type":"devg", + "kw":"CREATE BARRIER,DDL Syntax,Developer Guide", + "title":"CREATE BARRIER", + "githuburl":"" + }, + { + "uri":"dws_06_0156.html", + "product_code":"dws", + "code":"836", + "des":"CREATE DATABASE creates a database. By default, the new database will be created by cloning the standard system database template1. A different template can be specified ", + "doc_type":"devg", + "kw":"CREATE DATABASE,DDL Syntax,Developer Guide", + "title":"CREATE DATABASE", + "githuburl":"" + }, + { + "uri":"dws_06_0159.html", + "product_code":"dws", + "code":"837", + "des":"CREATE FOREIGN TABLE creates a GDS foreign table.CREATE FOREIGN TABLE creates a GDS foreign table in the current database for concurrent data import and export. 
The GDS f", + "doc_type":"devg", + "kw":"CREATE FOREIGN TABLE (for GDS Import and Export),DDL Syntax,Developer Guide", + "title":"CREATE FOREIGN TABLE (for GDS Import and Export)", + "githuburl":"" + }, + { + "uri":"dws_06_0161.html", + "product_code":"dws", + "code":"838", + "des":"CREATE FOREIGN TABLE creates an HDFS or OBS foreign table in the current database to access or export structured data stored on HDFS or OBS. You can also export data in O", + "doc_type":"devg", + "kw":"CREATE FOREIGN TABLE (SQL on OBS or Hadoop),DDL Syntax,Developer Guide", + "title":"CREATE FOREIGN TABLE (SQL on OBS or Hadoop)", + "githuburl":"" + }, + { + "uri":"dws_06_0160.html", + "product_code":"dws", + "code":"839", + "des":"CREATE FOREIGN TABLE creates a foreign table in the current database for parallel data import and export of OBS data. The server used is gsmpp_server, which is created by", + "doc_type":"devg", + "kw":"CREATE FOREIGN TABLE (for OBS Import and Export),DDL Syntax,Developer Guide", + "title":"CREATE FOREIGN TABLE (for OBS Import and Export)", + "githuburl":"" + }, + { + "uri":"dws_06_0163.html", + "product_code":"dws", + "code":"840", + "des":"CREATE FUNCTION creates a function.The precision values (if any) of the parameters or return values of a function are not checked.When creating a function, you are advise", + "doc_type":"devg", + "kw":"CREATE FUNCTION,DDL Syntax,Developer Guide", + "title":"CREATE FUNCTION", + "githuburl":"" + }, + { + "uri":"dws_06_0164.html", + "product_code":"dws", + "code":"841", + "des":"CREATE GROUP creates a user group.CREATE GROUP is an alias for CREATE ROLE; it is not a standard SQL command and is not recommended. Users can use CREATE ROLE directly.T", + "doc_type":"devg", + "kw":"CREATE GROUP,DDL Syntax,Developer Guide", + "title":"CREATE GROUP", + "githuburl":"" + }, + { + "uri":"dws_06_0165.html", + "product_code":"dws", + "code":"842", + "des":"CREATE INDEX defines a new index.Indexes are primarily used to enhance database performance (though inappropriate use can result in slower database performance). 
You ", + "doc_type":"devg", + "kw":"CREATE INDEX,DDL Syntax,Developer Guide", + "title":"CREATE INDEX", + "githuburl":"" + }, + { + "uri":"dws_06_0168.html", + "product_code":"dws", + "code":"843", + "des":"CREATE REDACTION POLICY creates a data redaction policy for a table.Only the table owner has the permission to create a data redaction policy.You can create data redactio", + "doc_type":"devg", + "kw":"CREATE REDACTION POLICY,DDL Syntax,Developer Guide", + "title":"CREATE REDACTION POLICY", + "githuburl":"" + }, + { + "uri":"dws_06_0169.html", + "product_code":"dws", + "code":"844", + "des":"CREATE ROW LEVEL SECURITY POLICY creates a row-level access control policy for a table.The policy takes effect only after row-level access control is enabled (by running ", + "doc_type":"devg", + "kw":"CREATE ROW LEVEL SECURITY POLICY,DDL Syntax,Developer Guide", + "title":"CREATE ROW LEVEL SECURITY POLICY", + "githuburl":"" + }, + { + "uri":"dws_06_0170.html", + "product_code":"dws", + "code":"845", + "des":"CREATE PROCEDURE creates a stored procedure.The precision values (if any) of the parameters or return values of a stored procedure are not checked.When creating a stored ", + "doc_type":"devg", + "kw":"CREATE PROCEDURE,DDL Syntax,Developer Guide", + "title":"CREATE PROCEDURE", + "githuburl":"" + }, + { + "uri":"dws_06_0171.html", + "product_code":"dws", + "code":"846", + "des":"CREATE RESOURCE POOL creates a resource pool and specifies the Cgroup for the resource pool.As long as the current user has CREATE permission, it can create a resource po", + "doc_type":"devg", + "kw":"CREATE RESOURCE POOL,DDL Syntax,Developer Guide", + "title":"CREATE RESOURCE POOL", + "githuburl":"" + }, + { + "uri":"dws_06_0172.html", + "product_code":"dws", + "code":"847", + "des":"Create a role.A role is an entity that has own database objects and permissions. In different environments, a role can be considered a user, a group, or both.CREATE ROLE ", + "doc_type":"devg", + "kw":"CREATE ROLE,DDL Syntax,Developer Guide", + "title":"CREATE ROLE", + "githuburl":"" + }, + { + "uri":"dws_06_0173.html", + "product_code":"dws", + "code":"848", + "des":"CREATE SCHEMA creates a schema.Named objects are accessed either by \"qualifying\" their names with the schema name as a prefix, or by setting a search path that includes t", + "doc_type":"devg", + "kw":"CREATE SCHEMA,DDL Syntax,Developer Guide", + "title":"CREATE SCHEMA", + "githuburl":"" + }, + { + "uri":"dws_06_0174.html", + "product_code":"dws", + "code":"849", + "des":"CREATE SEQUENCE adds a sequence to the current database. The owner of a sequence is the user who creates the sequence.A sequence is a special table that stores arithmetic", + "doc_type":"devg", + "kw":"CREATE SEQUENCE,DDL Syntax,Developer Guide", + "title":"CREATE SEQUENCE", + "githuburl":"" + }, + { + "uri":"dws_06_0175.html", + "product_code":"dws", + "code":"850", + "des":"CREATE SERVER creates an external server.An external server stores information of HDFS clusters, OBS servers, DLI connections, or other homogeneous clusters.By default, o", + "doc_type":"devg", + "kw":"CREATE SERVER,DDL Syntax,Developer Guide", + "title":"CREATE SERVER", + "githuburl":"" + }, + { + "uri":"dws_06_0176.html", + "product_code":"dws", + "code":"851", + "des":"CREATE SYNONYM is used to create a synonym object. A synonym is an alias of a database object and is used to record the mapping between database object names. 
You can use", + "doc_type":"devg", + "kw":"CREATE SYNONYM,DDL Syntax,Developer Guide", + "title":"CREATE SYNONYM", + "githuburl":"" + }, + { + "uri":"dws_06_0177.html", + "product_code":"dws", + "code":"852", + "des":"CREATE TABLE creates a table in the current database. The table will be owned by the user who created it.For details about the data types supported by column-store tables", + "doc_type":"devg", + "kw":"CREATE TABLE,DDL Syntax,Developer Guide", + "title":"CREATE TABLE", + "githuburl":"" + }, + { + "uri":"dws_06_0178.html", + "product_code":"dws", + "code":"853", + "des":"CREATE TABLE AS creates a table based on the results of a query.It creates a table and fills it with data obtained using SELECT. The table columns have the names and data", + "doc_type":"devg", + "kw":"CREATE TABLE AS,DDL Syntax,Developer Guide", + "title":"CREATE TABLE AS", + "githuburl":"" + }, + { + "uri":"dws_06_0179.html", + "product_code":"dws", + "code":"854", + "des":"CREATE TABLE PARTITION creates a partitioned table. Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific sche", + "doc_type":"devg", + "kw":"CREATE TABLE PARTITION,DDL Syntax,Developer Guide", + "title":"CREATE TABLE PARTITION", + "githuburl":"" + }, + { + "uri":"dws_06_0182.html", + "product_code":"dws", + "code":"855", + "des":"CREATE TEXT SEARCH CONFIGURATION creates a text search configuration. A text search configuration specifies a text search parser that can divide a string into tokens, plu", + "doc_type":"devg", + "kw":"CREATE TEXT SEARCH CONFIGURATION,DDL Syntax,Developer Guide", + "title":"CREATE TEXT SEARCH CONFIGURATION", + "githuburl":"" + }, + { + "uri":"dws_06_0183.html", + "product_code":"dws", + "code":"856", + "des":"CREATE TEXT SEARCH DICTIONARY creates a full-text search dictionary. A dictionary is used to identify and process specified words during full-text search.Dictionaries are", + "doc_type":"devg", + "kw":"CREATE TEXT SEARCH DICTIONARY,DDL Syntax,Developer Guide", + "title":"CREATE TEXT SEARCH DICTIONARY", + "githuburl":"" + }, + { + "uri":"dws_06_0184.html", + "product_code":"dws", + "code":"857", + "des":"CREATE TRIGGER creates a trigger. The trigger will be associated with a specified table or view, and will execute a specified function when certain events occur.Currently", + "doc_type":"devg", + "kw":"CREATE TRIGGER,DDL Syntax,Developer Guide", + "title":"CREATE TRIGGER", + "githuburl":"" + }, + { + "uri":"dws_06_0185.html", + "product_code":"dws", + "code":"858", + "des":"CREATE TYPE defines a new data type in the current database. The user who defines a new data type becomes its owner. Types are designed only for row-store tables.Four typ", + "doc_type":"devg", + "kw":"CREATE TYPE,DDL Syntax,Developer Guide", + "title":"CREATE TYPE", + "githuburl":"" + }, + { + "uri":"dws_06_0186.html", + "product_code":"dws", + "code":"859", + "des":"CREATE USER creates a user.A user created using the CREATE USER statement has the LOGIN permission by default.A schema named after the user is automatically created in th", + "doc_type":"devg", + "kw":"CREATE USER,DDL Syntax,Developer Guide", + "title":"CREATE USER", + "githuburl":"" + }, + { + "uri":"dws_06_0187.html", + "product_code":"dws", + "code":"860", + "des":"CREATE VIEW creates a view. A view is a virtual table, not a base table. A database only stores the definition of a view and does not store its data. 
The data is still st", + "doc_type":"devg", + "kw":"CREATE VIEW,DDL Syntax,Developer Guide", + "title":"CREATE VIEW", + "githuburl":"" + }, + { + "uri":"dws_06_0188.html", + "product_code":"dws", + "code":"861", + "des":"CURSOR defines a cursor. This command retrieves a few rows of data in a query.To process SQL statements, the stored procedure process assigns a memory segment to store cont", + "doc_type":"devg", + "kw":"CURSOR,DDL Syntax,Developer Guide", + "title":"CURSOR", + "githuburl":"" + }, + { + "uri":"dws_06_0189.html", + "product_code":"dws", + "code":"862", + "des":"DROP DATABASE deletes a database.Only the owner of a database or a system administrator has the permission to run the DROP DATABASE command.DROP DATABASE does not take ef", + "doc_type":"devg", + "kw":"DROP DATABASE,DDL Syntax,Developer Guide", + "title":"DROP DATABASE", + "githuburl":"" + }, + { + "uri":"dws_06_0192.html", + "product_code":"dws", + "code":"863", + "des":"DROP FOREIGN TABLE deletes a specified foreign table.DROP FOREIGN TABLE forcibly deletes a specified table. After a table is deleted, any indexes that exist for the table", + "doc_type":"devg", + "kw":"DROP FOREIGN TABLE,DDL Syntax,Developer Guide", + "title":"DROP FOREIGN TABLE", + "githuburl":"" + }, + { + "uri":"dws_06_0193.html", + "product_code":"dws", + "code":"864", + "des":"DROP FUNCTION deletes an existing function.If a function involves operations on temporary tables, the function cannot be deleted by running DROP FUNCTION.IF EXISTSSends a", + "doc_type":"devg", + "kw":"DROP FUNCTION,DDL Syntax,Developer Guide", + "title":"DROP FUNCTION", + "githuburl":"" + }, + { + "uri":"dws_06_0194.html", + "product_code":"dws", + "code":"865", + "des":"DROP GROUP deletes a user group.DROP GROUP is an alias for DROP ROLE.DROP GROUP is an internal interface encapsulated in the gs_om tool. 
You are not advised to use this", + "doc_type":"devg", + "kw":"DROP GROUP,DDL Syntax,Developer Guide", + "title":"DROP GROUP", + "githuburl":"" + }, + { + "uri":"dws_06_0195.html", + "product_code":"dws", + "code":"866", + "des":"DROP INDEX deletes an index.Only the owner of an index or a system administrator can run DROP INDEX command.IF EXISTSSends a notice instead of an error if the specified i", + "doc_type":"devg", + "kw":"DROP INDEX,DDL Syntax,Developer Guide", + "title":"DROP INDEX", + "githuburl":"" + }, + { + "uri":"dws_06_0198.html", + "product_code":"dws", + "code":"867", + "des":"DROP OWNED deletes the database objects of a database role.The role's permissions on all the database objects in the current database and shared objects (databases and ta", + "doc_type":"devg", + "kw":"DROP OWNED,DDL Syntax,Developer Guide", + "title":"DROP OWNED", + "githuburl":"" + }, + { + "uri":"dws_06_0199.html", + "product_code":"dws", + "code":"868", + "des":"DROP REDACTION POLICY deletes a data redaction policy applied to a specified table.Only the table owner has the permission to delete a data redaction policy.IF EXISTSSend", + "doc_type":"devg", + "kw":"DROP REDACTION POLICY,DDL Syntax,Developer Guide", + "title":"DROP REDACTION POLICY", + "githuburl":"" + }, + { + "uri":"dws_06_0200.html", + "product_code":"dws", + "code":"869", + "des":"DROP ROW LEVEL SECURITY POLICY deletes a row-level access control policy from a table.Only the table owner or administrators can delete a row-level access control policy ", + "doc_type":"devg", + "kw":"DROP ROW LEVEL SECURITY POLICY,DDL Syntax,Developer Guide", + "title":"DROP ROW LEVEL SECURITY POLICY", + "githuburl":"" + }, + { + "uri":"dws_06_0201.html", + "product_code":"dws", + "code":"870", + "des":"DROP PROCEDURE deletes an existing stored procedure.None.IF EXISTSSends a notice instead of an error if the stored procedure does not exist.Sends a notice instead of an e", + "doc_type":"devg", + "kw":"DROP PROCEDURE,DDL Syntax,Developer Guide", + "title":"DROP PROCEDURE", + "githuburl":"" + }, + { + "uri":"dws_06_0202.html", + "product_code":"dws", + "code":"871", + "des":"DROP RESOURCE POOL deletes a resource pool.The resource pool cannot be deleted if it is associated with a role.The user must have the DROP permission in order to delete a", + "doc_type":"devg", + "kw":"DROP RESOURCE POOL,DDL Syntax,Developer Guide", + "title":"DROP RESOURCE POOL", + "githuburl":"" + }, + { + "uri":"dws_06_0203.html", + "product_code":"dws", + "code":"872", + "des":"DROP ROLE deletes a specified role.If a \"role is being used by other users\" error is displayed when you run DROP ROLE, it might be that threads cannot respond to signals ", + "doc_type":"devg", + "kw":"DROP ROLE,DDL Syntax,Developer Guide", + "title":"DROP ROLE", + "githuburl":"" + }, + { + "uri":"dws_06_0204.html", + "product_code":"dws", + "code":"873", + "des":"DROP SCHEMA deletes a schema in a database.Only a schema owner or a system administrator can run the DROP SCHEMA command.IF EXISTSSends a notice instead of an error if th", + "doc_type":"devg", + "kw":"DROP SCHEMA,DDL Syntax,Developer Guide", + "title":"DROP SCHEMA", + "githuburl":"" + }, + { + "uri":"dws_06_0205.html", + "product_code":"dws", + "code":"874", + "des":"DROP SEQUENCE deletes a sequence from the current database.Only a sequence owner or a system administrator can delete a sequence.IF EXISTSSends a notice instead of an err", + "doc_type":"devg", + "kw":"DROP SEQUENCE,DDL Syntax,Developer Guide", + "title":"DROP SEQUENCE", + 
"githuburl":"" + }, + { + "uri":"dws_06_0206.html", + "product_code":"dws", + "code":"875", + "des":"DROP SERVER deletes an existing data server.Only the server owner can delete a server.IF EXISTSSends a notice instead of an error if the specified table does not exist.Se", + "doc_type":"devg", + "kw":"DROP SERVER,DDL Syntax,Developer Guide", + "title":"DROP SERVER", + "githuburl":"" + }, + { + "uri":"dws_06_0207.html", + "product_code":"dws", + "code":"876", + "des":"DROP SYNONYM is used to delete a synonym object.Only a synonym owner or a system administrator can run the DROP SYNONYM command.IF EXISTSSend a notice instead of reportin", + "doc_type":"devg", + "kw":"DROP SYNONYM,DDL Syntax,Developer Guide", + "title":"DROP SYNONYM", + "githuburl":"" + }, + { + "uri":"dws_06_0208.html", + "product_code":"dws", + "code":"877", + "des":"DROP TABLE deletes a specified table.Only the table owner, schema owner, and system administrator have the permission to delete a table. To delete all the rows in a table", + "doc_type":"devg", + "kw":"DROP TABLE,DDL Syntax,Developer Guide", + "title":"DROP TABLE", + "githuburl":"" + }, + { + "uri":"dws_06_0210.html", + "product_code":"dws", + "code":"878", + "des":"DROP TEXT SEARCH CONFIGURATION deletes an existing text search configuration.To run the DROP TEXT SEARCH CONFIGURATION command, you must be the owner of the text search c", + "doc_type":"devg", + "kw":"DROP TEXT SEARCH CONFIGURATION,DDL Syntax,Developer Guide", + "title":"DROP TEXT SEARCH CONFIGURATION", + "githuburl":"" + }, + { + "uri":"dws_06_0211.html", + "product_code":"dws", + "code":"879", + "des":"DROPTEXT SEARCHDICTIONARY deletes a full-text retrieval dictionary.DROP is not supported by predefined dictionaries.Only the owner of a dictionary can do DROP to the dict", + "doc_type":"devg", + "kw":"DROP TEXT SEARCH DICTIONARY,DDL Syntax,Developer Guide", + "title":"DROP TEXT SEARCH DICTIONARY", + "githuburl":"" + }, + { + "uri":"dws_06_0212.html", + "product_code":"dws", + "code":"880", + "des":"DROP TRIGGER deletes a trigger.Only the owner of a trigger and system administrators can run the DROP TRIGGER statement.IF EXISTSSends a notice instead of an error if the", + "doc_type":"devg", + "kw":"DROP TRIGGER,DDL Syntax,Developer Guide", + "title":"DROP TRIGGER", + "githuburl":"" + }, + { + "uri":"dws_06_0213.html", + "product_code":"dws", + "code":"881", + "des":"DROP TYPE deletes a user-defined data type. Only the type owner has permission to run this statement.IF EXISTSSends a notice instead of an error if the specified type doe", + "doc_type":"devg", + "kw":"DROP TYPE,DDL Syntax,Developer Guide", + "title":"DROP TYPE", + "githuburl":"" + }, + { + "uri":"dws_06_0214.html", + "product_code":"dws", + "code":"882", + "des":"Deleting a user will also delete the schema having the same name as the user.CASCADE is used to delete objects (excluding databases) that depend on the user. 
CASCADE cann", + "doc_type":"devg", + "kw":"DROP USER,DDL Syntax,Developer Guide", + "title":"DROP USER", + "githuburl":"" + }, + { + "uri":"dws_06_0215.html", + "product_code":"dws", + "code":"883", + "des":"DROP VIEW forcibly deletes an existing view in a database.Only a view owner or a system administrator can run DROP VIEW command.IF EXISTSSends a notice instead of an erro", + "doc_type":"devg", + "kw":"DROP VIEW,DDL Syntax,Developer Guide", + "title":"DROP VIEW", + "githuburl":"" + }, + { + "uri":"dws_06_0216.html", + "product_code":"dws", + "code":"884", + "des":"FETCH retrieves data using a previously-created cursor.A cursor has an associated position, which is used by FETCH. The cursor position can be before the first row of the", + "doc_type":"devg", + "kw":"FETCH,DDL Syntax,Developer Guide", + "title":"FETCH", + "githuburl":"" + }, + { + "uri":"dws_06_0217.html", + "product_code":"dws", + "code":"885", + "des":"MOVE repositions a cursor without retrieving any data. MOVE works exactly like the FETCH command, except it only repositions the cursor and does not return rows.NoneThe d", + "doc_type":"devg", + "kw":"MOVE,DDL Syntax,Developer Guide", + "title":"MOVE", + "githuburl":"" + }, + { + "uri":"dws_06_0218.html", + "product_code":"dws", + "code":"886", + "des":"REINDEX rebuilds an index using the data stored in the index's table, replacing the old copy of the index.There are several scenarios in which REINDEX can be used:An inde", + "doc_type":"devg", + "kw":"REINDEX,DDL Syntax,Developer Guide", + "title":"REINDEX", + "githuburl":"" + }, + { + "uri":"dws_06_0219.html", + "product_code":"dws", + "code":"887", + "des":"RESET restores run-time parameters to their default values. The default values are parameter default values compiled in the postgresql.conf configuration file.RESET is an", + "doc_type":"devg", + "kw":"RESET,DDL Syntax,Developer Guide", + "title":"RESET", + "githuburl":"" + }, + { + "uri":"dws_06_0220.html", + "product_code":"dws", + "code":"888", + "des":"SET modifies a run-time parameter.Most run-time parameters can be modified by executing SET. Some parameters cannot be modified after a server or session starts.Set the s", + "doc_type":"devg", + "kw":"SET,DDL Syntax,Developer Guide", + "title":"SET", + "githuburl":"" + }, + { + "uri":"dws_06_0221.html", + "product_code":"dws", + "code":"889", + "des":"SET CONSTRAINTS sets the behavior of constraint checking within the current transaction.IMMEDIATE constraints are checked at the end of each statement. 
DEFERRED constrain", + "doc_type":"devg", + "kw":"SET CONSTRAINTS,DDL Syntax,Developer Guide", + "title":"SET CONSTRAINTS", + "githuburl":"" + }, + { + "uri":"dws_06_0222.html", + "product_code":"dws", + "code":"890", + "des":"SET ROLE sets the current user identifier of the current session.Users of the current session must be members of the specified rolename, but the system administrator can choo", + "doc_type":"devg", + "kw":"SET ROLE,DDL Syntax,Developer Guide", + "title":"SET ROLE", + "githuburl":"" + }, + { + "uri":"dws_06_0223.html", + "product_code":"dws", + "code":"891", + "des":"SET SESSION AUTHORIZATION sets the session user identifier and the current user identifier of the current SQL session to a specified user.The session identifier can be ch", + "doc_type":"devg", + "kw":"SET SESSION AUTHORIZATION,DDL Syntax,Developer Guide", + "title":"SET SESSION AUTHORIZATION", + "githuburl":"" + }, + { + "uri":"dws_06_0224.html", + "product_code":"dws", + "code":"892", + "des":"SHOW shows the current value of a run-time parameter. You can use the SET statement to set these parameters.Some parameters that can be viewed by SHOW are read-only. You ", + "doc_type":"devg", + "kw":"SHOW,DDL Syntax,Developer Guide", + "title":"SHOW", + "githuburl":"" + }, + { + "uri":"dws_06_0225.html", + "product_code":"dws", + "code":"893", + "des":"TRUNCATE quickly removes all rows from a database table.It has the same effect as an unqualified DELETE on each table, but it is faster since it does not actually scan th", + "doc_type":"devg", + "kw":"TRUNCATE,DDL Syntax,Developer Guide", + "title":"TRUNCATE", + "githuburl":"" + }, + { + "uri":"dws_06_0226.html", + "product_code":"dws", + "code":"894", + "des":"VACUUM reclaims storage space occupied by tables or B-tree indexes. In normal database operation, rows that have been deleted or obsoleted by an update are not physically", + "doc_type":"devg", + "kw":"VACUUM,DDL Syntax,Developer Guide", + "title":"VACUUM", + "githuburl":"" + }, + { + "uri":"dws_06_0227.html", + "product_code":"dws", + "code":"895", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"DML Syntax", + "title":"DML Syntax", + "githuburl":"" + }, + { + "uri":"dws_06_0228.html", + "product_code":"dws", + "code":"896", + "des":"Data Manipulation Language (DML) is used to perform operations on data in database tables, such as inserting, updating, querying, or deleting data.Inserting data refers t", + "doc_type":"devg", + "kw":"DML Syntax Overview,DML Syntax,Developer Guide", + "title":"DML Syntax Overview", + "githuburl":"" + }, + { + "uri":"dws_06_0229.html", + "product_code":"dws", + "code":"897", + "des":"CALL calls defined functions or stored procedures.NoneschemaSpecifies the name of the schema where a function or stored procedure is located.Specifies the name of the sch", + "doc_type":"devg", + "kw":"CALL,DML Syntax,Developer Guide", + "title":"CALL", + "githuburl":"" + }, + { + "uri":"dws_06_0230.html", + "product_code":"dws", + "code":"898", + "des":"COPY copies data between tables and files.COPY FROM copies data from a file to a table. 
COPY TO copies data from a table to a file.If CNs and DNs are enabled in security ", + "doc_type":"devg", + "kw":"COPY,DML Syntax,Developer Guide", + "title":"COPY", + "githuburl":"" + }, + { + "uri":"dws_06_0231.html", + "product_code":"dws", + "code":"899", + "des":"DELETE deletes rows that satisfy the WHERE clause from the specified table. If the WHERE clause does not exist, all rows in the table will be deleted. The result is a val", + "doc_type":"devg", + "kw":"DELETE,DML Syntax,Developer Guide", + "title":"DELETE", + "githuburl":"" + }, + { + "uri":"dws_06_0232.html", + "product_code":"dws", + "code":"900", + "des":"EXPLAIN shows the execution plan of an SQL statement.The execution plan shows how the tables referenced by the SQL statement will be scanned, for example, by plain sequen", + "doc_type":"devg", + "kw":"EXPLAIN,DML Syntax,Developer Guide", + "title":"EXPLAIN", + "githuburl":"" + }, + { + "uri":"dws_06_0233.html", + "product_code":"dws", + "code":"901", + "des":"You can run the EXPLAIN PLAN statement to save the information about an execution plan to the PLAN_TABLE table. Different from the EXPLAIN statement, EXPLAIN PLAN only st", + "doc_type":"devg", + "kw":"EXPLAIN PLAN,DML Syntax,Developer Guide", + "title":"EXPLAIN PLAN", + "githuburl":"" + }, + { + "uri":"dws_06_0234.html", + "product_code":"dws", + "code":"902", + "des":"LOCK TABLE obtains a table-level lock.GaussDB(DWS) always tries to select the lock mode with minimum constraints when automatically requesting a lock for a command refere", + "doc_type":"devg", + "kw":"LOCK,DML Syntax,Developer Guide", + "title":"LOCK", + "githuburl":"" + }, + { + "uri":"dws_06_0235.html", + "product_code":"dws", + "code":"903", + "des":"The MERGE INTO statement is used to conditionally match data in a target table with that in a source table. If data matches, UPDATE is executed on the target table; if da", + "doc_type":"devg", + "kw":"MERGE INTO,DML Syntax,Developer Guide", + "title":"MERGE INTO", + "githuburl":"" + }, + { + "uri":"dws_06_0275.html", + "product_code":"dws", + "code":"904", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"INSERT and UPSERT", + "title":"INSERT and UPSERT", + "githuburl":"" + }, + { + "uri":"dws_06_0236.html", + "product_code":"dws", + "code":"905", + "des":"INSERT inserts new rows into a table.You must have the INSERT permission on a table in order to insert into it.Use of the RETURNING clause requires the SELECT permission ", + "doc_type":"devg", + "kw":"INSERT,INSERT and UPSERT,Developer Guide", + "title":"INSERT", + "githuburl":"" + }, + { + "uri":"dws_06_0237.html", + "product_code":"dws", + "code":"906", + "des":"UPSERT inserts rows into a table. When a row duplicates an existing primary key or unique key value, the row will be ignored or updated.The UPSERT syntax is supported onl", + "doc_type":"devg", + "kw":"UPSERT,INSERT and UPSERT,Developer Guide", + "title":"UPSERT", + "githuburl":"" + }, + { + "uri":"dws_06_0240.html", + "product_code":"dws", + "code":"907", + "des":"UPDATE updates data in a table. UPDATE changes the values of the specified columns in all rows that satisfy the condition. The WHERE clause specifies the conditions. 
The colu", "doc_type":"devg", "kw":"UPDATE,DML Syntax,Developer Guide", "title":"UPDATE", "githuburl":"" }, { "uri":"dws_06_0241.html", "product_code":"dws", "code":"908", "des":"VALUES computes a row or a set of rows based on given values. It is most commonly used to generate a constant table within a large command.VALUES lists with large numbers", "doc_type":"devg", "kw":"VALUES,DML Syntax,Developer Guide", "title":"VALUES", "githuburl":"" }, { "uri":"dws_06_0242.html", "product_code":"dws", "code":"909", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"devg", "kw":"DCL Syntax", "title":"DCL Syntax", "githuburl":"" }, { "uri":"dws_06_0243.html", "product_code":"dws", "code":"910", "des":"Data control language (DCL) is used to set or modify the permissions of database users or roles.GaussDB(DWS) provides a statement for granting rights on data objects to roles. For de", "doc_type":"devg", "kw":"DCL Syntax Overview,DCL Syntax,Developer Guide", "title":"DCL Syntax Overview", "githuburl":"" }, { "uri":"dws_06_0244.html", "product_code":"dws", "code":"911", "des":"ALTER DEFAULT PRIVILEGES allows you to set the permissions that will be used for objects to be created. It does not affect permissions assigned to existing objects.To iso", "doc_type":"devg", "kw":"ALTER DEFAULT PRIVILEGES,DCL Syntax,Developer Guide", "title":"ALTER DEFAULT PRIVILEGES", "githuburl":"" }, { "uri":"dws_06_0245.html", "product_code":"dws", "code":"912", "des":"ANALYZE collects statistics about ordinary tables in a database, and stores the results in the PG_STATISTIC system catalog. The execution plan generator uses these statis", "doc_type":"devg", "kw":"ANALYZE | ANALYSE,DCL Syntax,Developer Guide", "title":"ANALYZE | ANALYSE", "githuburl":"" }, { "uri":"dws_06_0246.html", "product_code":"dws", "code":"913", "des":"DEALLOCATE deallocates a previously prepared statement. If you do not explicitly deallocate a prepared statement, it is deallocated when the session ends.The PREPARE key ", "doc_type":"devg", "kw":"DEALLOCATE,DCL Syntax,Developer Guide", "title":"DEALLOCATE", "githuburl":"" }, { "uri":"dws_06_0247.html", "product_code":"dws", "code":"914", "des":"DO executes an anonymous code block.A code block is a function body without parameters that returns void. It is analyzed and executed at the same time.Before using a prog", "doc_type":"devg", "kw":"DO,DCL Syntax,Developer Guide", "title":"DO", "githuburl":"" }, { "uri":"dws_06_0248.html", "product_code":"dws", "code":"915", "des":"EXECUTE executes a prepared statement. A prepared statement only exists in the lifecycle of a session. Therefore, only prepared statements created using PREPARE earlier i", "doc_type":"devg", "kw":"EXECUTE,DCL Syntax,Developer Guide", "title":"EXECUTE", "githuburl":"" }, { "uri":"dws_06_0249.html", "product_code":"dws", "code":"916", "des":"EXECUTE DIRECT executes an SQL statement on a specified node. Generally, the cluster automatically allocates an SQL statement to proper nodes. 
EXECUTE DIRECT is mainly us", "doc_type":"devg", "kw":"EXECUTE DIRECT,DCL Syntax,Developer Guide", "title":"EXECUTE DIRECT", "githuburl":"" }, { "uri":"dws_06_0250.html", "product_code":"dws", "code":"917", "des":"GRANT grants permissions to roles and users.GRANT is used in the following scenarios:Granting system permissions to roles or usersSystem permissions are also called user ", "doc_type":"devg", "kw":"GRANT,DCL Syntax,Developer Guide", "title":"GRANT", "githuburl":"" }, { "uri":"dws_06_0251.html", "product_code":"dws", "code":"918", "des":"PREPARE creates a prepared statement.A prepared statement is a performance optimizing object on the server. When the PREPARE statement is executed, the specified query is", "doc_type":"devg", "kw":"PREPARE,DCL Syntax,Developer Guide", "title":"PREPARE", "githuburl":"" }, { "uri":"dws_06_0252.html", "product_code":"dws", "code":"919", "des":"REASSIGN OWNED changes the ownership of database objects owned by a role.REASSIGN OWNED requires that the system change owners of all the database objects owned by old_roles to new_role.REASSIGN O", "doc_type":"devg", "kw":"REASSIGN OWNED,DCL Syntax,Developer Guide", "title":"REASSIGN OWNED", "githuburl":"" }, { "uri":"dws_06_0253.html", "product_code":"dws", "code":"920", "des":"REVOKE revokes rights from one or more roles.If a non-owner user of an object attempts to REVOKE rights on the object, the command is executed based on the following rule", "doc_type":"devg", "kw":"REVOKE,DCL Syntax,Developer Guide", "title":"REVOKE", "githuburl":"" }, { "uri":"dws_06_0276.html", "product_code":"dws", "code":"921", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "doc_type":"devg", "kw":"DQL Syntax", "title":"DQL Syntax", "githuburl":"" }, { "uri":"dws_06_0277.html", "product_code":"dws", "code":"922", "des":"Data Query Language (DQL) can obtain data from tables or views.GaussDB(DWS) provides statements for obtaining data from tables or views. For details, see SELECT.GaussDB(D", "doc_type":"devg", "kw":"DQL Syntax Overview,DQL Syntax,Developer Guide", "title":"DQL Syntax Overview", "githuburl":"" }, { "uri":"dws_06_0238.html", "product_code":"dws", "code":"923", "des":"SELECT retrieves data from a table or view.Serving as an overlaid filter for a database table, SELECT using SQL keywords retrieves required data from data tables.Using SE", "doc_type":"devg", "kw":"SELECT,DQL Syntax,Developer Guide", "title":"SELECT", "githuburl":"" }, { "uri":"dws_06_0239.html", "product_code":"dws", "code":"924", "des":"SELECT INTO defines a new table based on a query result and inserts the data obtained by the query into the new table.Different from SELECT, data found by SELECT INTO is not returne", "doc_type":"devg", "kw":"SELECT INTO,DQL Syntax,Developer Guide", "title":"SELECT INTO", "githuburl":"" }, { "uri":"dws_06_0254.html", "product_code":"dws", "code":"925", "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"TCL Syntax", + "title":"TCL Syntax", + "githuburl":"" + }, + { + "uri":"dws_06_0255.html", + "product_code":"dws", + "code":"926", + "des":"Transaction Control Language (TCL) controls the time and effect of database transactions and monitors the database.GaussDB(DWS) uses the COMMIT or END statement to commit", + "doc_type":"devg", + "kw":"TCL Syntax Overview,TCL Syntax,Developer Guide", + "title":"TCL Syntax Overview", + "githuburl":"" + }, + { + "uri":"dws_06_0256.html", + "product_code":"dws", + "code":"927", + "des":"ABORT rolls back the current transaction and cancels the changes in the transaction.This command is equivalent to ROLLBACK, and is present only for historical reasons. No", + "doc_type":"devg", + "kw":"ABORT,TCL Syntax,Developer Guide", + "title":"ABORT", + "githuburl":"" + }, + { + "uri":"dws_06_0257.html", + "product_code":"dws", + "code":"928", + "des":"BEGIN may be used to initiate an anonymous block or a single transaction. This section describes the syntax of BEGIN used to initiate an anonymous block. For details abou", + "doc_type":"devg", + "kw":"BEGIN,TCL Syntax,Developer Guide", + "title":"BEGIN", + "githuburl":"" + }, + { + "uri":"dws_06_0258.html", + "product_code":"dws", + "code":"929", + "des":"A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect the information in the log. All data files will be flushed to", + "doc_type":"devg", + "kw":"CHECKPOINT,TCL Syntax,Developer Guide", + "title":"CHECKPOINT", + "githuburl":"" + }, + { + "uri":"dws_06_0259.html", + "product_code":"dws", + "code":"930", + "des":"COMMIT or END commits all operations of a transaction.Only the transaction creators or system administrators can run the COMMIT command. The creation and commit operation", + "doc_type":"devg", + "kw":"COMMIT | END,TCL Syntax,Developer Guide", + "title":"COMMIT | END", + "githuburl":"" + }, + { + "uri":"dws_06_0260.html", + "product_code":"dws", + "code":"931", + "des":"COMMIT PREPARED commits a prepared two-phase transaction.The function is only available in maintenance mode (when GUC parameter xc_maintenance_mode is on). Exercise cauti", + "doc_type":"devg", + "kw":"COMMIT PREPARED,TCL Syntax,Developer Guide", + "title":"COMMIT PREPARED", + "githuburl":"" + }, + { + "uri":"dws_06_0262.html", + "product_code":"dws", + "code":"932", + "des":"PREPARE TRANSACTION prepares the current transaction for two-phase commit.After this command, the transaction is no longer associated with the current session; instead, i", + "doc_type":"devg", + "kw":"PREPARE TRANSACTION,TCL Syntax,Developer Guide", + "title":"PREPARE TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_06_0263.html", + "product_code":"dws", + "code":"933", + "des":"SAVEPOINT establishes a new savepoint within the current transaction.A savepoint is a special mark inside a transaction that rolls back all commands that are executed aft", + "doc_type":"devg", + "kw":"SAVEPOINT,TCL Syntax,Developer Guide", + "title":"SAVEPOINT", + "githuburl":"" + }, + { + "uri":"dws_06_0264.html", + "product_code":"dws", + "code":"934", + "des":"SET TRANSACTION sets the characteristics of the current transaction. It has no effect on any subsequent transactions. 
Available transaction characteristics include the tr", + "doc_type":"devg", + "kw":"SET TRANSACTION,TCL Syntax,Developer Guide", + "title":"SET TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_06_0265.html", + "product_code":"dws", + "code":"935", + "des":"START TRANSACTION starts a transaction. If the isolation level, read/write mode, or deferrable mode is specified, a new transaction will have those characteristics. You c", + "doc_type":"devg", + "kw":"START TRANSACTION,TCL Syntax,Developer Guide", + "title":"START TRANSACTION", + "githuburl":"" + }, + { + "uri":"dws_06_0266.html", + "product_code":"dws", + "code":"936", + "des":"Rolls back the current transaction and backs out all updates in the transaction.ROLLBACK backs out of all changes that a transaction makes to a database if the transactio", + "doc_type":"devg", + "kw":"ROLLBACK,TCL Syntax,Developer Guide", + "title":"ROLLBACK", + "githuburl":"" + }, + { + "uri":"dws_06_0267.html", + "product_code":"dws", + "code":"937", + "des":"RELEASE SAVEPOINT destroys a savepoint previously defined in the current transaction.Destroying a savepoint makes it unavailable as a rollback point, but it has no other ", + "doc_type":"devg", + "kw":"RELEASE SAVEPOINT,TCL Syntax,Developer Guide", + "title":"RELEASE SAVEPOINT", + "githuburl":"" + }, + { + "uri":"dws_06_0268.html", + "product_code":"dws", + "code":"938", + "des":"ROLLBACK PREPARED cancels a transaction ready for two-phase committing.The function is only available in maintenance mode (when GUC parameter xc_maintenance_mode is on). ", + "doc_type":"devg", + "kw":"ROLLBACK PREPARED,TCL Syntax,Developer Guide", + "title":"ROLLBACK PREPARED", + "githuburl":"" + }, + { + "uri":"dws_06_0269.html", + "product_code":"dws", + "code":"939", + "des":"ROLLBACK TO SAVEPOINT rolls back to a savepoint. It implicitly destroys all savepoints that were established after the named savepoint.Rolls back all commands that were e", + "doc_type":"devg", + "kw":"ROLLBACK TO SAVEPOINT,TCL Syntax,Developer Guide", + "title":"ROLLBACK TO SAVEPOINT", + "githuburl":"" + }, + { + "uri":"dws_06_0270.html", + "product_code":"dws", + "code":"940", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"GIN Indexes", + "title":"GIN Indexes", + "githuburl":"" + }, + { + "uri":"dws_06_0271.html", + "product_code":"dws", + "code":"941", + "des":"Generalized Inverted Index (GIN) is designed for handling cases where the items to be indexed are composite values, and the queries to be handled by the index need to sea", + "doc_type":"devg", + "kw":"Introduction,GIN Indexes,Developer Guide", + "title":"Introduction", + "githuburl":"" + }, + { + "uri":"dws_06_0272.html", + "product_code":"dws", + "code":"942", + "des":"The GIN interface has a high level of abstraction, requiring the access method implementer only to implement the semantics of the data type being accessed. 
The GIN layer ", + "doc_type":"devg", + "kw":"Scalability,GIN Indexes,Developer Guide", + "title":"Scalability", + "githuburl":"" + }, + { + "uri":"dws_06_0273.html", + "product_code":"dws", + "code":"943", + "des":"Internally, a GIN index contains a B-tree index constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and w", + "doc_type":"devg", + "kw":"Implementation,GIN Indexes,Developer Guide", + "title":"Implementation", + "githuburl":"" + }, + { + "uri":"dws_06_0274.html", + "product_code":"dws", + "code":"944", + "des":"Create vs. InsertInsertion into a GIN index can be slow due to the likelihood of many keys being inserted for each item. So, for bulk insertions into a table, it is advis", + "doc_type":"devg", + "kw":"GIN Tips and Tricks,GIN Indexes,Developer Guide", + "title":"GIN Tips and Tricks", + "githuburl":"" + }, + { + "uri":"dws_04_3333.html", + "product_code":"dws", + "code":"945", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"devg", + "kw":"Change History,Developer Guide", + "title":"Change History", + "githuburl":"" + } +] \ No newline at end of file diff --git a/docs/dws/dev/CLASS.TXT.json b/docs/dws/dev/CLASS.TXT.json new file mode 100644 index 00000000..0dc84935 --- /dev/null +++ b/docs/dws/dev/CLASS.TXT.json @@ -0,0 +1,8507 @@ +[ + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Developer Guide", + "uri":"dws_04_1000.html", + "doc_type":"devg", + "p_code":"", + "code":"1" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Welcome", + "uri":"dws_04_0001.html", + "doc_type":"devg", + "p_code":"1", + "code":"2" + }, + { + "desc":"This document is intended for database designers, application developers, and database administrators, and provides information required for designing, building, querying", + "product_code":"dws", + "title":"Target Readers", + "uri":"dws_04_0002.html", + "doc_type":"devg", + "p_code":"2", + "code":"3" + }, + { + "desc":"If you are a new GaussDB(DWS) user, you are advised to read the following contents first:Sections describing the features, functions, and application scenarios of GaussDB", + "product_code":"dws", + "title":"Reading Guide", + "uri":"dws_04_0004.html", + "doc_type":"devg", + "p_code":"2", + "code":"4" + }, + { + "desc":"SQL examples in this manual are developed based on the TPC-DS model. 
Before you execute the examples, install the TPC-DS benchmark by following the instructions on the of", + "product_code":"dws", + "title":"Conventions", + "uri":"dws_04_0005.html", + "doc_type":"devg", + "p_code":"2", + "code":"5" + }, + { + "desc":"Complete the following tasks before you perform operations described in this document:Create a GaussDB(DWS) cluster.Install an SQL client.Connect the SQL client to the de", + "product_code":"dws", + "title":"Prerequisites", + "uri":"dws_04_0006.html", + "doc_type":"devg", + "p_code":"2", + "code":"6" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"System Overview", + "uri":"dws_04_0007.html", + "doc_type":"devg", + "p_code":"1", + "code":"7" + }, + { + "desc":"GaussDB(DWS) manages cluster transactions, the basis of HA and failovers. This ensures speedy fault recovery, guarantees the Atomicity, Consistency, Isolation, Durability", + "product_code":"dws", + "title":"Highly Reliable Transaction Processing", + "uri":"dws_04_0011.html", + "doc_type":"devg", + "p_code":"7", + "code":"8" + }, + { + "desc":"The following GaussDB(DWS) features help achieve high query performance.GaussDB(DWS) is an MPP system with the shared-nothing architecture. It consists of multiple indepe", + "product_code":"dws", + "title":"High Query Performance", + "uri":"dws_04_0012.html", + "doc_type":"devg", + "p_code":"7", + "code":"9" + }, + { + "desc":"A database manages data objects and is isolated from other databases. While creating a database, you can specify a tablespace. If you do not specify it, database objects ", + "product_code":"dws", + "title":"Related Concepts", + "uri":"dws_04_0015.html", + "doc_type":"devg", + "p_code":"7", + "code":"10" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Data Migration", + "uri":"dws_04_0985.html", + "doc_type":"devg", + "p_code":"1", + "code":"11" + }, + { + "desc":"GaussDB(DWS) provides flexible methods for importing data. You can import data from different sources to GaussDB(DWS). The features of each method are listed in Table 1. ", + "product_code":"dws", + "title":"Data Migration to GaussDB(DWS)", + "uri":"dws_04_0180.html", + "doc_type":"devg", + "p_code":"11", + "code":"12" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Data Import", + "uri":"dws_04_0179.html", + "doc_type":"devg", + "p_code":"11", + "code":"13" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Importing Data from OBS in Parallel", + "uri":"dws_04_0181.html", + "doc_type":"devg", + "p_code":"13", + "code":"14" + }, + { + "desc":"The object storage service (OBS) is an object-based cloud storage service, featuring data storage of high security, proven reliability, and cost-effectiveness. OBS provid", + "product_code":"dws", + "title":"About Parallel Data Import from OBS", + "uri":"dws_04_0182.html", + "doc_type":"devg", + "p_code":"14", + "code":"15" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Importing CSV/TXT Data from the OBS", + "uri":"dws_04_0154.html", + "doc_type":"devg", + "p_code":"14", + "code":"16" + }, + { + "desc":"In this example, OBS data is imported to GaussDB(DWS) databases. When users who have registered with the cloud platform access OBS using clients, call APIs, or SDKs, acce", + "product_code":"dws", + "title":"Creating Access Keys (AK and SK)", + "uri":"dws_04_0183.html", + "doc_type":"devg", + "p_code":"16", + "code":"17" + }, + { + "desc":"Before importing data from OBS to a cluster, prepare source data files and upload these files to OBS. If the data files have been stored on OBS, you only need to complete", + "product_code":"dws", + "title":"Uploading Data to OBS", + "uri":"dws_04_0184.html", + "doc_type":"devg", + "p_code":"16", + "code":"18" + }, + { + "desc":"format: format of the source data file in the foreign table. OBS foreign tables support CSV and TEXT formats. The default value is TEXT.header: Whether the data file cont", + "product_code":"dws", + "title":"Creating an OBS Foreign Table", + "uri":"dws_04_0185.html", + "doc_type":"devg", + "p_code":"16", + "code":"19" + }, + { + "desc":"Before importing data, you are advised to optimize your design and deployment based on the following excellent practices, helping maximize system resource utilization and", + "product_code":"dws", + "title":"Importing Data", + "uri":"dws_04_0186.html", + "doc_type":"devg", + "p_code":"16", + "code":"20" + }, + { + "desc":"Handle errors that occurred during data import.Errors that occur when data is imported are divided into data format errors and non-data format errors.Data format errorWhe", + "product_code":"dws", + "title":"Handling Import Errors", + "uri":"dws_04_0187.html", + "doc_type":"devg", + "p_code":"16", + "code":"21" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Importing ORC/CarbonData Data from OBS", + "uri":"dws_04_0155.html", + "doc_type":"devg", + "p_code":"14", + "code":"22" + }, + { + "desc":"Before you use the SQL on OBS feature to query OBS data:You have stored the ORC data on OBS.For example, the ORC table has been created when you use the Hive or Spark com", + "product_code":"dws", + "title":"Preparing Data on OBS", + "uri":"dws_04_0243.html", + "doc_type":"devg", + "p_code":"22", + "code":"23" + }, + { + "desc":"This section describes how to create a foreign server that is used to define the information about OBS servers and is invoked by foreign tables. For details about the syn", + "product_code":"dws", + "title":"Creating a Foreign Server", + "uri":"dws_04_0244.html", + "doc_type":"devg", + "p_code":"22", + "code":"24" + }, + { + "desc":"After performing steps in Creating a Foreign Server, create an OBS foreign table in the GaussDB(DWS) database to access the data stored in OBS. An OBS foreign table is re", + "product_code":"dws", + "title":"Creating a Foreign Table", + "uri":"dws_04_0245.html", + "doc_type":"devg", + "p_code":"22", + "code":"25" + }, + { + "desc":"If the data amount is small, you can directly run SELECT to query the foreign table and view the data on OBS.If the query result is the same as the data in Original Data,", + "product_code":"dws", + "title":"Querying Data on OBS Through Foreign Tables", + "uri":"dws_04_0246.html", + "doc_type":"devg", + "p_code":"22", + "code":"26" + }, + { + "desc":"After completing operations in this tutorial, if you no longer need to use the resources created during the operations, you can delete them to avoid resource waste or quo", + "product_code":"dws", + "title":"Deleting Resources", + "uri":"dws_04_0247.html", + "doc_type":"devg", + "p_code":"22", + "code":"27" + }, + { + "desc":"In the big data field, the mainstream file format is ORC, which is supported by GaussDB(DWS). You can use Hive to export data to an ORC file and use a read-only foreign t", + "product_code":"dws", + "title":"Supported Data Types", + "uri":"dws_04_0156.html", + "doc_type":"devg", + "p_code":"22", + "code":"28" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Using GDS to Import Data from a Remote Server", + "uri":"dws_04_0189.html", + "doc_type":"devg", + "p_code":"13", + "code":"29" + }, + { + "desc":"INSERT and COPY statements are serially executed to import a small volume of data. To import a large volume of data to GaussDB(DWS), you can use GDS to import data in par", + "product_code":"dws", + "title":"Importing Data In Parallel Using GDS", + "uri":"dws_04_0190.html", + "doc_type":"devg", + "p_code":"29", + "code":"30" + }, + { + "desc":"Generally, the data to be imported has been uploaded to the data server. In this case, you only need to check the communication between the data server and GaussDB(DWS), ", + "product_code":"dws", + "title":"Preparing Source Data", + "uri":"dws_04_0192.html", + "doc_type":"devg", + "p_code":"29", + "code":"31" + }, + { + "desc":"GaussDB(DWS) uses GDS to allocate the source data for parallel data import. 
Deploy GDS on the data server.If a large volume of data is stored on multiple data servers, in", + "product_code":"dws", + "title":"Installing, Configuring, and Starting GDS", + "uri":"dws_04_0193.html", + "doc_type":"devg", + "p_code":"29", + "code":"32" + }, + { + "desc":"The source data information and GDS access information are configured in a foreign table. Then, GaussDB(DWS) can import data from a data server to a database table based ", + "product_code":"dws", + "title":"Creating a GDS Foreign Table", + "uri":"dws_04_0194.html", + "doc_type":"devg", + "p_code":"29", + "code":"33" + }, + { + "desc":"This section describes how to create tables in GaussDB(DWS) and import data to the tables.Before importing all the data from a table containing over 10 million records, y", + "product_code":"dws", + "title":"Importing Data", + "uri":"dws_04_0195.html", + "doc_type":"devg", + "p_code":"29", + "code":"34" + }, + { + "desc":"Handle errors that occurred during data import.Errors that occur when data is imported are divided into data format errors and non-data format errors.Data format errorWhe", + "product_code":"dws", + "title":"Handling Import Errors", + "uri":"dws_04_0196.html", + "doc_type":"devg", + "p_code":"29", + "code":"35" + }, + { + "desc":"Stop GDS after data is imported successfully.If GDS is started using the gds command, perform the following operations to stop GDS:Query the GDS process ID:ps -ef|grep gd", + "product_code":"dws", + "title":"Stopping GDS", + "uri":"dws_04_0197.html", + "doc_type":"devg", + "p_code":"29", + "code":"36" + }, + { + "desc":"The data servers and the cluster reside on the same intranet. The IP addresses are 192.168.0.90 and 192.168.0.91. Source data files are in CSV format.Create the target ta", + "product_code":"dws", + "title":"Example of Importing Data Using GDS", + "uri":"dws_04_0198.html", + "doc_type":"devg", + "p_code":"29", + "code":"37" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Importing Data from MRS to a Cluster", + "uri":"dws_04_0210.html", + "doc_type":"devg", + "p_code":"13", + "code":"38" + }, + { + "desc":"MRS is a big data cluster running based on the open-source Hadoop ecosystem. It provides the industry's latest cutting-edge storage and analytical capabilities of massive", + "product_code":"dws", + "title":"Overview", + "uri":"dws_04_0066.html", + "doc_type":"devg", + "p_code":"38", + "code":"39" + }, + { + "desc":"Before importing data from MRS to a GaussDB(DWS) cluster, you must have:Created an MRS cluster.Created the Hive/Spark ORC table in the MRS cluster and stored the table da", + "product_code":"dws", + "title":"Preparing Data in an MRS Cluster", + "uri":"dws_04_0212.html", + "doc_type":"devg", + "p_code":"38", + "code":"40" + }, + { + "desc":"In the syntax CREATE FOREIGN TABLE (SQL on Hadoop or OBS) for creating a foreign table, you need to specify a foreign server associated with the MRS data source connectio", + "product_code":"dws", + "title":"Manually Creating a Foreign Server", + "uri":"dws_04_0213.html", + "doc_type":"devg", + "p_code":"38", + "code":"41" + }, + { + "desc":"This section describes how to create a Hadoop foreign table in the GaussDB(DWS) database to access the Hadoop structured data stored on MRS HDFS. 
A Hadoop foreign table i", + "product_code":"dws", + "title":"Creating a Foreign Table", + "uri":"dws_04_0214.html", + "doc_type":"devg", + "p_code":"38", + "code":"42" + }, + { + "desc":"If the data amount is small, you can directly run SELECT to query the foreign table and view the data in the MRS data source.If the query result is the same as the data i", + "product_code":"dws", + "title":"Importing Data", + "uri":"dws_04_0215.html", + "doc_type":"devg", + "p_code":"38", + "code":"43" + }, + { + "desc":"After completing operations in this tutorial, if you no longer need to use the resources created during the operations, you can delete them to avoid resource waste or quo", + "product_code":"dws", + "title":"Deleting Resources", + "uri":"dws_04_0216.html", + "doc_type":"devg", + "p_code":"38", + "code":"44" + }, + { + "desc":"The following error information indicates that GaussDB(DWS) is to read an ORC data file but the actual file is in text format. Therefore, create a table of the Hive ORC t", + "product_code":"dws", + "title":"Error Handling", + "uri":"dws_04_0217.html", + "doc_type":"devg", + "p_code":"38", + "code":"45" + }, + { + "desc":"You can create foreign tables to perform associated queries and import data between clusters.Import data from one GaussDB(DWS) cluster to another.Perform associated queri", + "product_code":"dws", + "title":"Importing Data from One GaussDB(DWS) Cluster to Another", + "uri":"dws_04_0949.html", + "doc_type":"devg", + "p_code":"13", + "code":"46" + }, + { + "desc":"The gsql tool of GaussDB(DWS) provides the \\copy meta-command to import data.For details about the \\copy command, see Table 1.tableSpecifies the name (possibly schema-qua", + "product_code":"dws", + "title":"Using the gsql Meta-Command \\COPY to Import Data", + "uri":"dws_04_0208.html", + "doc_type":"devg", + "p_code":"13", + "code":"47" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Running the COPY FROM STDIN Statement to Import Data", + "uri":"dws_04_0203.html", + "doc_type":"devg", + "p_code":"13", + "code":"48" + }, + { + "desc":"This method is applicable to low-concurrency scenarios where a small volume of data is to be imported.Use either of the following methods to write data to GaussDB(DWS) us", + "product_code":"dws", + "title":"Data Import Using COPY FROM STDIN", + "uri":"dws_04_0204.html", + "doc_type":"devg", + "p_code":"48", + "code":"49" + }, + { + "desc":"CopyManager is an API interface class provided by the JDBC driver in GaussDB(DWS). 
It is used to import data to GaussDB(DWS) in batches.The CopyManager class is in the or", + "product_code":"dws", + "title":"Introduction to the CopyManager Class", + "uri":"dws_04_0205.html", + "doc_type":"devg", + "p_code":"48", + "code":"50" + }, + { + "desc":"When the JAVA language is used for secondary development based on GaussDB(DWS), you can use the CopyManager interface to export data from the database to a local file or ", + "product_code":"dws", + "title":"Example: Importing and Exporting Data Through Local Files", + "uri":"dws_04_0206.html", + "doc_type":"devg", + "p_code":"48", + "code":"51" + }, + { + "desc":"The following example shows how to use CopyManager to migrate data from MySQL to GaussDB(DWS).", + "product_code":"dws", + "title":"Example: Migrating Data from MySQL to GaussDB(DWS)", + "uri":"dws_04_0207.html", + "doc_type":"devg", + "p_code":"48", + "code":"52" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Full Database Migration", + "uri":"dws_04_0986.html", + "doc_type":"devg", + "p_code":"11", + "code":"53" + }, + { + "desc":"You can use CDM to migrate data from other data sources (for example, MySQL) to the databases in clusters on GaussDB(DWS).For details about scenarios where CDM is used to", + "product_code":"dws", + "title":"Using CDM to Migrate Data to GaussDB(DWS)", + "uri":"dws_04_0219.html", + "doc_type":"devg", + "p_code":"53", + "code":"54" + }, + { + "desc":"The DSC is a CLI tool running on the Linux or Windows OS. It is dedicated to providing customers with simple, fast, and reliable application SQL script migration services", + "product_code":"dws", + "title":"Using DSC to Migrate SQL Scripts", + "uri":"dws_01_0127.html", + "doc_type":"devg", + "p_code":"53", + "code":"55" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Metadata Migration", + "uri":"dws_04_0987.html", + "doc_type":"devg", + "p_code":"11", + "code":"56" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Using gs_dump and gs_dumpall to Export Metadata", + "uri":"dws_04_0269.html", + "doc_type":"devg", + "p_code":"56", + "code":"57" + }, + { + "desc":"GaussDB(DWS) provides gs_dump and gs_dumpall to export required database objects and related information. To migrate database information, you can use a tool to import th", + "product_code":"dws", + "title":"Overview", + "uri":"dws_04_0270.html", + "doc_type":"devg", + "p_code":"57", + "code":"58" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Exporting a Single Database", + "uri":"dws_04_0271.html", + "doc_type":"devg", + "p_code":"57", + "code":"59" + }, + { + "desc":"You can use gs_dump to export data and all object definitions of a database from GaussDB(DWS). You can specify the information to be exported as follows:Export full infor", + "product_code":"dws", + "title":"Exporting a Database", + "uri":"dws_04_0272.html", + "doc_type":"devg", + "p_code":"59", + "code":"60" + }, + { + "desc":"You can use gs_dump to export data and all object definitions of a schema from GaussDB(DWS). You can export one or more specified schemas as needed. You can specify the i", + "product_code":"dws", + "title":"Exporting a Schema", + "uri":"dws_04_0273.html", + "doc_type":"devg", + "p_code":"59", + "code":"61" + }, + { + "desc":"You can use gs_dump to export data and all object definitions of a table-level object from GaussDB(DWS). Views, sequences, and foreign tables are special tables. You can ", + "product_code":"dws", + "title":"Exporting a Table", + "uri":"dws_04_0274.html", + "doc_type":"devg", + "p_code":"59", + "code":"62" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Exporting All Databases", + "uri":"dws_04_0275.html", + "doc_type":"devg", + "p_code":"57", + "code":"63" + }, + { + "desc":"You can use gs_dumpall to export full information of all databases in a cluster from GaussDB(DWS), including information about each database and global objects in the clu", + "product_code":"dws", + "title":"Exporting All Databases", + "uri":"dws_04_0276.html", + "doc_type":"devg", + "p_code":"63", + "code":"64" + }, + { + "desc":"You can use gs_dumpall to export global objects from GaussDB(DWS), including database users, user groups, tablespaces, and attributes (for example, global access permissi", + "product_code":"dws", + "title":"Exporting Global Objects", + "uri":"dws_04_0277.html", + "doc_type":"devg", + "p_code":"63", + "code":"65" + }, + { + "desc":"gs_dump and gs_dumpall use -U to specify the user that performs the export. If the specified user does not have the required permission, data cannot be exported. In this ", + "product_code":"dws", + "title":"Data Export By a User Without Required Permissions", + "uri":"dws_04_0278.html", + "doc_type":"devg", + "p_code":"57", + "code":"66" + }, + { + "desc":"gs_restore is an import tool provided by GaussDB(DWS). You can use gs_restore to import the files exported by gs_dump to a database. gs_restore can import the files in .t", + "product_code":"dws", + "title":"Using gs_restore to Import Data", + "uri":"dws_04_0209.html", + "doc_type":"devg", + "p_code":"56", + "code":"67" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Data Export", + "uri":"dws_04_0249.html", + "doc_type":"devg", + "p_code":"11", + "code":"68" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Exporting Data to OBS", + "uri":"dws_04_0250.html", + "doc_type":"devg", + "p_code":"68", + "code":"69" + }, + { + "desc":"GaussDB(DWS) databases allow you to export data in parallel using OBS foreign tables, in which the export mode and the exported data format are specified. Data is exporte", + "product_code":"dws", + "title":"Parallel OBS Data Export", + "uri":"dws_04_0251.html", + "doc_type":"devg", + "p_code":"69", + "code":"70" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Exporting CSV/TXT Data to OBS", + "uri":"dws_04_0157.html", + "doc_type":"devg", + "p_code":"69", + "code":"71" + }, + { + "desc":"Plan the storage location of exported data in OBS.You need to specify the OBS path (to directory) for storing data that you want to export. The exported data can be saved", + "product_code":"dws", + "title":"Planning Data Export", + "uri":"dws_04_0252.html", + "doc_type":"devg", + "p_code":"71", + "code":"72" + }, + { + "desc":"To obtain access keys, log in to the management console, click the username in the upper right corner, and select My Credential from the menu. Then choose Access Keys in ", + "product_code":"dws", + "title":"Creating an OBS Foreign Table", + "uri":"dws_04_0253.html", + "doc_type":"devg", + "p_code":"71", + "code":"73" + }, + { + "desc":"Example 1: Export data from table product_info_output to a data file through the product_info_output_ext foreign table.INSERT INTO product_info_output_ext SELECT * FROM p", + "product_code":"dws", + "title":"Exporting Data", + "uri":"dws_04_0254.html", + "doc_type":"devg", + "p_code":"71", + "code":"74" + }, + { + "desc":"Create two foreign tables and use them to export tables from a database to two buckets in OBS.OBS and the database are in the same region. The example GaussDB(DWS) table ", + "product_code":"dws", + "title":"Examples", + "uri":"dws_04_0255.html", + "doc_type":"devg", + "p_code":"71", + "code":"75" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Exporting ORC Data to OBS", + "uri":"dws_04_0256.html", + "doc_type":"devg", + "p_code":"69", + "code":"76" + }, + { + "desc":"For details about exporting data to OBS, see Planning Data Export.For details about the data types that can be exported to OBS, see Table 2.For details about HDFS data ex", + "product_code":"dws", + "title":"Planning Data Export", + "uri":"dws_04_0258.html", + "doc_type":"devg", + "p_code":"76", + "code":"77" + }, + { + "desc":"For details about creating a foreign server on OBS, see Creating a Foreign Server.For details about creating a foreign server in HDFS, see Manually Creating a Foreign Ser", + "product_code":"dws", + "title":"Creating a Foreign Server", + "uri":"dws_04_0259.html", + "doc_type":"devg", + "p_code":"76", + "code":"78" + }, + { + "desc":"After operations in Creating a Foreign Server are complete, create an OBS/HDFS write-only foreign table in the GaussDB(DWS) database to access data stored in OBS/HDFS. Th", + "product_code":"dws", + "title":"Creating a Foreign Table", + "uri":"dws_04_0260.html", + "doc_type":"devg", + "p_code":"76", + "code":"79" + }, + { + "desc":"Example 1: Export data from table product_info_output to a data file using the product_info_output_ext foreign table.INSERT INTO product_info_output_ext SELECT * FROM pro", + "product_code":"dws", + "title":"Exporting Data", + "uri":"dws_04_0158.html", + "doc_type":"devg", + "p_code":"76", + "code":"80" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Exporting ORC Data to MRS", + "uri":"dws_04_0159.html", + "doc_type":"devg", + "p_code":"68", + "code":"81" + }, + { + "desc":"GaussDB(DWS) allows you to export ORC data to MRS using an HDFS foreign table. You can specify the export mode and export data format in the foreign table. Data is export", + "product_code":"dws", + "title":"Overview", + "uri":"dws_04_0160.html", + "doc_type":"devg", + "p_code":"81", + "code":"82" + }, + { + "desc":"For details about the data types that can be exported to MRS, see Table 2.For details about HDFS data export or MRS configuration, see the MapReduce Service User Guide.", + "product_code":"dws", + "title":"Planning Data Export", + "uri":"dws_04_0161.html", + "doc_type":"devg", + "p_code":"81", + "code":"83" + }, + { + "desc":"For details about creating a foreign server on HDFS, see Manually Creating a Foreign Server.", + "product_code":"dws", + "title":"Creating a Foreign Server", + "uri":"dws_04_0162.html", + "doc_type":"devg", + "p_code":"81", + "code":"84" + }, + { + "desc":"After operations in Creating a Foreign Server are complete, create an HDFS write-only foreign table in the GaussDB(DWS) database to access data stored in HDFS. 
The foreig", + "product_code":"dws", + "title":"Creating a Foreign Table", + "uri":"dws_04_0163.html", + "doc_type":"devg", + "p_code":"81", + "code":"85" + }, + { + "desc":"Example 1: Export data from table product_info_output to a data file using the product_info_output_ext foreign table.INSERT INTO product_info_output_ext SELECT * FROM pro", + "product_code":"dws", + "title":"Exporting Data", + "uri":"dws_04_0164.html", + "doc_type":"devg", + "p_code":"81", + "code":"86" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Using GDS to Export Data to a Remote Server", + "uri":"dws_04_0261.html", + "doc_type":"devg", + "p_code":"68", + "code":"87" + }, + { + "desc":"In high-concurrency scenarios, you can use GDS to export data from a database to a common file system.In the current GDS version, data can be exported from a database to ", + "product_code":"dws", + "title":"Exporting Data In Parallel Using GDS", + "uri":"dws_04_0262.html", + "doc_type":"devg", + "p_code":"87", + "code":"88" + }, + { + "desc":"Before you use GDS to export data from a cluster, prepare data to be exported and plan the export path.Remote modeIf the following information is displayed, the user and ", + "product_code":"dws", + "title":"Planning Data Export", + "uri":"dws_04_0263.html", + "doc_type":"devg", + "p_code":"87", + "code":"89" + }, + { + "desc":"GDS is a data service tool provided by GaussDB(DWS). Using the foreign table mechanism, this tool helps export data at a high speed.For details, see Installing, Configuri", + "product_code":"dws", + "title":"Installing, Configuring, and Starting GDS", + "uri":"dws_04_0264.html", + "doc_type":"devg", + "p_code":"87", + "code":"90" + }, + { + "desc":"Remote modeSet the location parameter to the URL of the directory that stores the data files.You do not need to specify any file.For example:The IP address of the GDS dat", + "product_code":"dws", + "title":"Creating a GDS Foreign Table", + "uri":"dws_04_0265.html", + "doc_type":"devg", + "p_code":"87", + "code":"91" + }, + { + "desc":"Ensure that the IP addresses and ports of servers where CNs and DNs are deployed can connect to those of the GDS server.Create batch processing scripts to export data in ", + "product_code":"dws", + "title":"Exporting Data", + "uri":"dws_04_0266.html", + "doc_type":"devg", + "p_code":"87", + "code":"92" + }, + { + "desc":"GDS is a data service tool provided by GaussDB(DWS). Using the foreign table mechanism, this tool helps export data at a high speed.For details, see Stopping GDS.", + "product_code":"dws", + "title":"Stopping GDS", + "uri":"dws_04_0267.html", + "doc_type":"devg", + "p_code":"87", + "code":"93" + }, + { + "desc":"The data server and the cluster reside on the same intranet, the IP address of the data server is 192.168.0.90, and data source files are in CSV format. In this scenario,", + "product_code":"dws", + "title":"Examples of Exporting Data Using GDS", + "uri":"dws_04_0268.html", + "doc_type":"devg", + "p_code":"87", + "code":"94" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Other Operations", "uri":"dws_04_0988.html", "doc_type":"devg", "p_code":"11", "code":"95" }, { "desc":"GDS supports concurrent import and export. The gds -t parameter is used to set the size of the thread pool and control the maximum number of concurrent working threads. B", "product_code":"dws", "title":"GDS Pipe FAQs", "uri":"dws_04_0279.html", "doc_type":"devg", "p_code":"95", "code":"96" }, { "desc":"Data skew causes the query performance to deteriorate. Before importing all the data from a table consisting of over 10 million records, you are advised to import some of", "product_code":"dws", "title":"Checking for Data Skew", "uri":"dws_04_0228.html", "doc_type":"devg", "p_code":"95", "code":"97" }, { "desc":"GaussDB(DWS) is compatible with Oracle, Teradata, and MySQL syntax, though their syntax behaviors differ.", "product_code":"dws", "title":"Syntax Compatibility Differences Among Oracle, Teradata, and MySQL", "uri":"dws_04_0042.html", "doc_type":"devg", "p_code":"1", "code":"98" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Database Security Management", "uri":"dws_04_0043.html", "doc_type":"devg", "p_code":"1", "code":"99" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Managing Users and Their Permissions", "uri":"dws_04_0053.html", "doc_type":"devg", "p_code":"99", "code":"100" }, { "desc":"A user who creates an object is the owner of this object. By default, Separation of Permissions is disabled after cluster installation. A database system administrator ha", "product_code":"dws", "title":"Default Permission Mechanism", "uri":"dws_04_0054.html", "doc_type":"devg", "p_code":"100", "code":"101" }, { "desc":"A system administrator is an account with the SYSADMIN permission. After a cluster is installed, a system administrator has the permissions of all object owners by defaul", "product_code":"dws", "title":"System Administrator", "uri":"dws_04_0055.html", "doc_type":"devg", "p_code":"100", "code":"102" }, { "desc":"Descriptions in Default Permission Mechanism and System Administrator are about the initial situation after a cluster is created. By default, a system administrator with ", "product_code":"dws", "title":"Separation of Permissions", "uri":"dws_04_0056.html", "doc_type":"devg", "p_code":"100", "code":"103" }, { "desc":"You can use CREATE USER and ALTER USER to create and manage database users, respectively. The database cluster has one or more named databases. Users and roles are shared", "product_code":"dws", "title":"Users", "uri":"dws_04_0057.html", "doc_type":"devg", "p_code":"100", "code":"104" }, { "desc":"A role is a set of permissions. 
After a role is granted to a user through GRANT, the user will have all the permissions of the role. It is recommended that roles be used ", + "product_code":"dws", + "title":"Roles", + "uri":"dws_04_0058.html", + "doc_type":"devg", + "p_code":"100", + "code":"105" + }, + { + "desc":"Schemas function as models. Schema management allows multiple users to use the same database without mutual impacts, to organize database objects as manageable logical gr", + "product_code":"dws", + "title":"Schema", + "uri":"dws_04_0059.html", + "doc_type":"devg", + "p_code":"100", + "code":"106" + }, + { + "desc":"To grant the permission for an object directly to a user, use GRANT.When permissions for a table or view in a schema are granted to a user or role, the USAGE permission o", + "product_code":"dws", + "title":"User Permission Setting", + "uri":"dws_04_0060.html", + "doc_type":"devg", + "p_code":"100", + "code":"107" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Setting Security Policies", + "uri":"dws_04_0063.html", + "doc_type":"devg", + "p_code":"100", + "code":"108" + }, + { + "desc":"For data security purposes, GaussDB(DWS) provides a series of security measures, such as automatically locking and unlocking accounts, manually locking and unlocking abno", + "product_code":"dws", + "title":"Setting Account Security Policies", + "uri":"dws_04_0064.html", + "doc_type":"devg", + "p_code":"108", + "code":"109" + }, + { + "desc":"When creating a user, you need to specify the validity period of the user, including the start time and end time.To enable a user not within the validity period to use it", + "product_code":"dws", + "title":"Setting the Validity Period of an Account", + "uri":"dws_04_0065.html", + "doc_type":"devg", + "p_code":"108", + "code":"110" + }, + { + "desc":"User passwords are stored in the system catalog pg_authid. To prevent password leakage, GaussDB(DWS) encrypts and stores the user passwords.Password complexityThe passwor", + "product_code":"dws", + "title":"Setting a User Password", + "uri":"dws_04_0067.html", + "doc_type":"devg", + "p_code":"108", + "code":"111" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Sensitive Data Management", + "uri":"dws_04_0994.html", + "doc_type":"devg", + "p_code":"99", + "code":"112" + }, + { + "desc":"The row-level access control feature enables database access control to be accurate to each row of data tables. In this way, the same SQL query may return different resul", + "product_code":"dws", + "title":"Row-Level Access Control", + "uri":"dws_04_0061.html", + "doc_type":"devg", + "p_code":"112", + "code":"113" + }, + { + "desc":"GaussDB(DWS) provides the column-level dynamic data masking (DDM) function. 
For sensitive data, such as the ID card number, mobile number, and bank card number, the DDM f", + "product_code":"dws", + "title":"Data Redaction", + "uri":"dws_04_0062.html", + "doc_type":"devg", + "p_code":"112", + "code":"114" + }, + { + "desc":"GaussDB(DWS) supports encryption and decryption of strings using the following functions:gs_encrypt(encryptstr, keystr, cryptotype, cryptomode, hashmethod)Description: En", + "product_code":"dws", + "title":"Using Functions for Encryption and Decryption", + "uri":"dws_04_0995.html", + "doc_type":"devg", + "p_code":"112", + "code":"115" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Development and Design Proposal", + "uri":"dws_04_0074.html", + "doc_type":"devg", + "p_code":"1", + "code":"116" + }, + { + "desc":"This chapter describes the design specifications for database modeling and application development. Modeling compliant with these specifications fits the distributed proc", + "product_code":"dws", + "title":"Development and Design Proposal", + "uri":"dws_04_0075.html", + "doc_type":"devg", + "p_code":"116", + "code":"117" + }, + { + "desc":"The name of a database object must contain 1 to 63 characters, start with a letter or underscore (_), and can contain letters, digits, underscores (_), dollar signs ($), ", + "product_code":"dws", + "title":"Database Object Naming Conventions", + "uri":"dws_04_0076.html", + "doc_type":"devg", + "p_code":"116", + "code":"118" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Database Object Design", + "uri":"dws_04_0077.html", + "doc_type":"devg", + "p_code":"116", + "code":"119" + }, + { + "desc":"In GaussDB(DWS), services can be isolated by databases and schemas. Databases share little resources and cannot directly access each other. Connections to and permissions", + "product_code":"dws", + "title":"Database and Schema Design", + "uri":"dws_04_0078.html", + "doc_type":"devg", + "p_code":"119", + "code":"120" + }, + { + "desc":"GaussDB(DWS) uses a distributed architecture. Data is distributed on DNs. 
Comply with the following principles to properly design a table:[Notice] Evenly distribute data ", "product_code":"dws", "title":"Table Design", "uri":"dws_04_0079.html", "doc_type":"devg", "p_code":"119", "code":"121" }, { "desc":"Comply with the following rules to improve query efficiency when you design columns:[Proposal] Use the most efficient data types allowed.If all of the following number ty", "product_code":"dws", "title":"Column Design", "uri":"dws_04_0080.html", "doc_type":"devg", "p_code":"119", "code":"122" }, { "desc":"[Proposal] If all the column values can be obtained from services, you are not advised to use the DEFAULT constraint, because doing so will generate unexpected results du", "product_code":"dws", "title":"Constraint Design", "uri":"dws_04_0081.html", "doc_type":"devg", "p_code":"119", "code":"123" }, { "desc":"[Proposal] Do not nest views unless they have strong dependency on each other.[Proposal] Try to avoid sort operations in a view definition.[Proposal] Minimize joined colu", "product_code":"dws", "title":"View and Joined Table Design", "uri":"dws_04_0082.html", "doc_type":"devg", "p_code":"119", "code":"124" }, { "desc":"Currently, third-party tools are connected to GaussDB(DWS) through JDBC. This section describes the precautions for configuring the tools.[Notice] When a third-party tool ", "product_code":"dws", "title":"JDBC Configuration", "uri":"dws_04_0083.html", "doc_type":"devg", "p_code":"116", "code":"125" }, { "desc":"[Proposal] In GaussDB(DWS), you are advised to execute DDL operations, such as creating tables or making comments, separately from batch processing jobs to avoid performan", "product_code":"dws", "title":"SQL Compilation", "uri":"dws_04_0084.html", "doc_type":"devg", "p_code":"116", "code":"126" }, { "desc":"[Notice] Java UDFs can perform some Java logic calculations. Do not encapsulate services in Java UDFs.[Notice] Do not connect to a database in any way (for example, by usi", "product_code":"dws", "title":"PL/Java Usage", "uri":"dws_04_0971.html", "doc_type":"devg", "p_code":"116", "code":"127" }, { "desc":"Development shall strictly comply with design documents.Program modules shall be highly cohesive and loosely coupled.Proper, comprehensive troubleshooting measures shall ", "product_code":"dws", "title":"PL/pgSQL Usage", "uri":"dws_04_0972.html", "doc_type":"devg", "p_code":"116", "code":"128" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Guide: JDBC- or ODBC-Based Development", "uri":"dws_04_0085.html", "doc_type":"devg", "p_code":"1", "code":"129" }, { "desc":"If the connection pool mechanism is used during application development, comply with the following specifications:If GUC parameters are set in the connection, before you ", "product_code":"dws", "title":"Development Specifications", "uri":"dws_04_0086.html", "doc_type":"devg", "p_code":"129", "code":"130" }, { "desc":"For details, see section \"Downloading the JDBC or ODBC Driver\" in the Data Warehouse Service User Guide.", "product_code":"dws", "title":"Downloading Drivers", "uri":"dws_04_0087.html", "doc_type":"devg", "p_code":"129", "code":"131" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"JDBC-Based Development", "uri":"dws_04_0088.html", "doc_type":"devg", "p_code":"129", "code":"132" }, { "desc":"Obtain the package dws_8.1.x_jdbc_driver.zip from the management console. For details, see Downloading Drivers.Compressed in it is the JDBC driver JAR package:gsjdbc4.jar", "product_code":"dws", "title":"JDBC Package and Driver Class", "uri":"dws_04_0090.html", "doc_type":"devg", "p_code":"132", "code":"133" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Development Process", "uri":"dws_04_0091.html", "doc_type":"devg", "p_code":"132", "code":"134" }, { "desc":"Load the database driver before creating a database connection.You can load the driver in the following ways:Implicitly loading the driver before creating a connection in", "product_code":"dws", "title":"Loading a Driver", "uri":"dws_04_0092.html", "doc_type":"devg", "p_code":"132", "code":"135" }, { "desc":"After a database is connected, you can execute SQL statements in the database.If you use an open-source Java Database Connectivity (JDBC) driver, ensure that the database", "product_code":"dws", "title":"Connecting to a Database", "uri":"dws_04_0093.html", "doc_type":"devg", "p_code":"132", "code":"136" }, { "desc":"The application operates data in the database by running SQL statements (statements that do not need to transfer parameters), and you need to perform the following steps:", "product_code":"dws", "title":"Executing SQL Statements", "uri":"dws_04_0095.html", "doc_type":"devg", "p_code":"132", "code":"137" }, { "desc":"Different types of result sets are applicable to different application scenarios. Applications select proper types of result sets based on requirements. 
Before executing ", "product_code":"dws", "title":"Processing Data in a Result Set", "uri":"dws_04_0096.html", "doc_type":"devg", "p_code":"132", "code":"138" }, { "desc":"After you complete required data operations in the database, close the database connection.Call the close method to close the connection, for example, conn.close().", "product_code":"dws", "title":"Closing the Connection", "uri":"dws_04_0097.html", "doc_type":"devg", "p_code":"132", "code":"139" }, { "desc":"Before completing the following example, you need to create a stored procedure.This example illustrates how to develop applications based on the GaussDB(DWS) JDBC interfa", "product_code":"dws", "title":"Example: Common Operations", "uri":"dws_04_0098.html", "doc_type":"devg", "p_code":"132", "code":"140" }, { "desc":"If the primary DN is faulty and cannot be restored within 40s, its standby is automatically promoted to primary to ensure the normal running of the cluster. Jobs running ", "product_code":"dws", "title":"Example: Retrying SQL Queries for Applications", "uri":"dws_04_0099.html", "doc_type":"devg", "p_code":"132", "code":"141" }, { "desc":"When the Java language is used for secondary development based on GaussDB(DWS), you can use the CopyManager interface to export data from the database to a local file or ", "product_code":"dws", "title":"Example: Importing and Exporting Data Through Local Files", "uri":"dws_04_0100.html", "doc_type":"devg", "p_code":"132", "code":"142" }, { "desc":"The following example shows how to use CopyManager to migrate data from MySQL to GaussDB(DWS).", "product_code":"dws", "title":"Example: Migrating Data from MySQL to GaussDB(DWS)", "uri":"dws_04_0101.html", "doc_type":"devg", "p_code":"132", "code":"143" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"JDBC Interface Reference", "uri":"dws_04_0102.html", "doc_type":"devg", "p_code":"132", "code":"144" }, { "desc":"This section describes java.sql.Connection, the interface for connecting to a database.The AutoCommit mode is used by default within the interface. 
If you disable it runn", + "product_code":"dws", + "title":"java.sql.Connection", + "uri":"dws_04_0103.html", + "doc_type":"devg", + "p_code":"144", + "code":"145" + }, + { + "desc":"This section describes java.sql.CallableStatement, the stored procedure execution interface.The batch operation of statements containing OUT parameter is not allowed.The ", + "product_code":"dws", + "title":"java.sql.CallableStatement", + "uri":"dws_04_0104.html", + "doc_type":"devg", + "p_code":"144", + "code":"146" + }, + { + "desc":"This section describes java.sql.DatabaseMetaData, the interface for defining database objects.", + "product_code":"dws", + "title":"java.sql.DatabaseMetaData", + "uri":"dws_04_0105.html", + "doc_type":"devg", + "p_code":"144", + "code":"147" + }, + { + "desc":"This section describes java.sql.Driver, the database driver interface.", + "product_code":"dws", + "title":"java.sql.Driver", + "uri":"dws_04_0106.html", + "doc_type":"devg", + "p_code":"144", + "code":"148" + }, + { + "desc":"This section describes java.sql.PreparedStatement, the interface for preparing statements.Execute addBatch() and execute() only after running clearBatch().Batch is not cl", + "product_code":"dws", + "title":"java.sql.PreparedStatement", + "uri":"dws_04_0107.html", + "doc_type":"devg", + "p_code":"144", + "code":"149" + }, + { + "desc":"This section describes java.sql.ResultSet, the interface for execution result sets.One Statement cannot have multiple open ResultSets.The cursor that is used for traversi", + "product_code":"dws", + "title":"java.sql.ResultSet", + "uri":"dws_04_0108.html", + "doc_type":"devg", + "p_code":"144", + "code":"150" + }, + { + "desc":"This section describes java.sql.ResultSetMetaData, which provides details about ResultSet object information.", + "product_code":"dws", + "title":"java.sql.ResultSetMetaData", + "uri":"dws_04_0109.html", + "doc_type":"devg", + "p_code":"144", + "code":"151" + }, + { + "desc":"This section describes java.sql.Statement, the interface for executing SQL statements.Using setFetchSize can reduce the memory occupied by result sets on the client. 
Resu", + "product_code":"dws", + "title":"java.sql.Statement", + "uri":"dws_04_0110.html", + "doc_type":"devg", + "p_code":"144", + "code":"152" + }, + { + "desc":"This section describes javax.sql.ConnectionPoolDataSource, the interface for data source connection pools.", + "product_code":"dws", + "title":"javax.sql.ConnectionPoolDataSource", + "uri":"dws_04_0111.html", + "doc_type":"devg", + "p_code":"144", + "code":"153" + }, + { + "desc":"This section describes javax.sql.DataSource, the interface for data sources.", + "product_code":"dws", + "title":"javax.sql.DataSource", + "uri":"dws_04_0112.html", + "doc_type":"devg", + "p_code":"144", + "code":"154" + }, + { + "desc":"This section describes javax.sql.PooledConnection, the connection interface created by a connection pool.", + "product_code":"dws", + "title":"javax.sql.PooledConnection", + "uri":"dws_04_0113.html", + "doc_type":"devg", + "p_code":"144", + "code":"155" + }, + { + "desc":"This section describes javax.naming.Context, the context interface for connection configuration.", + "product_code":"dws", + "title":"javax.naming.Context", + "uri":"dws_04_0114.html", + "doc_type":"devg", + "p_code":"144", + "code":"156" + }, + { + "desc":"This section describes javax.naming.spi.InitialContextFactory, the initial context factory interface.", + "product_code":"dws", + "title":"javax.naming.spi.InitialContextFactory", + "uri":"dws_04_0115.html", + "doc_type":"devg", + "p_code":"144", + "code":"157" + }, + { + "desc":"CopyManager is an API interface class provided by the JDBC driver in GaussDB(DWS). It is used to import data to GaussDB(DWS) in batches.The CopyManager class is in the or", + "product_code":"dws", + "title":"CopyManager", + "uri":"dws_04_0116.html", + "doc_type":"devg", + "p_code":"144", + "code":"158" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"ODBC-Based Development", + "uri":"dws_04_0117.html", + "doc_type":"devg", + "p_code":"129", + "code":"159" + }, + { + "desc":"Obtain the dws_8.1.x_odbc_driver_for_xxx_xxx.zip package from the release package. In the Linux OS, header files (including sql.h and sqlext.h) and library (libodbc.so) a", + "product_code":"dws", + "title":"ODBC Package and Its Dependent Libraries and Header Files", + "uri":"dws_04_0118.html", + "doc_type":"devg", + "p_code":"159", + "code":"160" + }, + { + "desc":"The ODBC DRIVER (psqlodbcw.so) provided by GaussDB(DWS) can be used after it has been configured in the data source. To configure data sources, users must configure the o", + "product_code":"dws", + "title":"Configuring a Data Source in the Linux OS", + "uri":"dws_04_0119.html", + "doc_type":"devg", + "p_code":"159", + "code":"161" + }, + { + "desc":"Configure the ODBC data source using the ODBC data source manager preinstalled in the Windows OS.Decompress GaussDB-8.1.1-Windows-Odbc.tar.gz and install psqlodbc.msi (fo", + "product_code":"dws", + "title":"Configuring a Data Source in the Windows OS", + "uri":"dws_04_0120.html", + "doc_type":"devg", + "p_code":"159", + "code":"162" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"ODBC Development Example", + "uri":"dws_04_0123.html", + "doc_type":"devg", + "p_code":"159", + "code":"163" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"ODBC Interfaces", + "uri":"dws_04_0124.html", + "doc_type":"devg", + "p_code":"159", + "code":"164" + }, + { + "desc":"In ODBC 3.x, SQLAllocEnv (an ODBC 2.x function) was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.", + "product_code":"dws", + "title":"SQLAllocEnv", + "uri":"dws_04_0125.html", + "doc_type":"devg", + "p_code":"164", + "code":"165" + }, + { + "desc":"In ODBC 3.x, SQLAllocConnect (an ODBC 2.x function) was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.", + "product_code":"dws", + "title":"SQLAllocConnect", + "uri":"dws_04_0126.html", + "doc_type":"devg", + "p_code":"164", + "code":"166" + }, + { + "desc":"SQLAllocHandle allocates environment, connection, or statement handles. This function is a generic function for allocating handles that replaces the deprecated ODBC 2.x f", + "product_code":"dws", + "title":"SQLAllocHandle", + "uri":"dws_04_0127.html", + "doc_type":"devg", + "p_code":"164", + "code":"167" + }, + { + "desc":"In ODBC 3.x, SQLAllocStmt was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.", + "product_code":"dws", + "title":"SQLAllocStmt", + "uri":"dws_04_0128.html", + "doc_type":"devg", + "p_code":"164", + "code":"168" + }, + { + "desc":"SQLBindCol is used to associate (bind) columns in a result set to an application data buffer.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates", + "product_code":"dws", + "title":"SQLBindCol", + "uri":"dws_04_0129.html", + "doc_type":"devg", + "p_code":"164", + "code":"169" + }, + { + "desc":"SQLBindParameter is used to associate (bind) parameter markers in an SQL statement to a buffer.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicat", + "product_code":"dws", + "title":"SQLBindParameter", + "uri":"dws_04_0130.html", + "doc_type":"devg", + "p_code":"164", + "code":"170" + }, + { + "desc":"SQLColAttribute returns the descriptor information about a column in the result set.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some war", + "product_code":"dws", + "title":"SQLColAttribute", + "uri":"dws_04_0131.html", + "doc_type":"devg", + "p_code":"164", + "code":"171" + }, + { + "desc":"SQLConnect establishes a connection between a driver and a data source. 
After the connection, the connection handle can be used to access all information about the data s", "product_code":"dws", "title":"SQLConnect", "uri":"dws_04_0132.html", "doc_type":"devg", "p_code":"164", "code":"172" }, { "desc":"SQLDisconnect closes the connection associated with the database connection handle.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warn", "product_code":"dws", "title":"SQLDisconnect", "uri":"dws_04_0133.html", "doc_type":"devg", "p_code":"164", "code":"173" }, { "desc":"SQLExecDirect executes a prepared SQL statement specified in this parameter. This is the fastest execution method for executing only one SQL statement at a time.SQL_SUCCE", "product_code":"dws", "title":"SQLExecDirect", "uri":"dws_04_0134.html", "doc_type":"devg", "p_code":"164", "code":"174" }, { "desc":"The SQLExecute function executes an SQL statement prepared using SQLPrepare. The statement is executed using the current value of any application variables that were bound", "product_code":"dws", "title":"SQLExecute", "uri":"dws_04_0135.html", "doc_type":"devg", "p_code":"164", "code":"175" }, { "desc":"SQLFetch advances the cursor to the next row of the result set and retrieves any bound columns.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicat", "product_code":"dws", "title":"SQLFetch", "uri":"dws_04_0136.html", "doc_type":"devg", "p_code":"164", "code":"176" }, { "desc":"In ODBC 3.x, SQLFreeStmt (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.", "product_code":"dws", "title":"SQLFreeStmt", "uri":"dws_04_0137.html", "doc_type":"devg", "p_code":"164", "code":"177" }, { "desc":"In ODBC 3.x, SQLFreeConnect (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.", "product_code":"dws", "title":"SQLFreeConnect", "uri":"dws_04_0138.html", "doc_type":"devg", "p_code":"164", "code":"178" }, { "desc":"SQLFreeHandle releases resources associated with a specific environment, connection, or statement handle. It replaces the ODBC 2.x functions: SQLFreeEnv, SQLFreeConnect, ", "product_code":"dws", "title":"SQLFreeHandle", "uri":"dws_04_0139.html", "doc_type":"devg", "p_code":"164", "code":"179" }, { "desc":"In ODBC 3.x, SQLFreeEnv (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.", "product_code":"dws", "title":"SQLFreeEnv", "uri":"dws_04_0140.html", "doc_type":"devg", "p_code":"164", "code":"180" }, { "desc":"SQLPrepare prepares an SQL statement to be executed.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.SQ", "product_code":"dws", "title":"SQLPrepare", "uri":"dws_04_0141.html", "doc_type":"devg", "p_code":"164", "code":"181" }, { "desc":"SQLGetData retrieves data for a single column in the current row of the result set. 
It can be called multiple times to retrieve data of variable lengths.SQL_SUCCESS indic", "product_code":"dws", "title":"SQLGetData", "uri":"dws_04_0142.html", "doc_type":"devg", "p_code":"164", "code":"182" }, { "desc":"SQLGetDiagRec returns the current values of multiple fields of a diagnostic record that contains error, warning, and status information.SQL_SUCCESS indicates that the cal", "product_code":"dws", "title":"SQLGetDiagRec", "uri":"dws_04_0143.html", "doc_type":"devg", "p_code":"164", "code":"183" }, { "desc":"SQLSetConnectAttr sets connection attributes.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.SQL_ERROR", "product_code":"dws", "title":"SQLSetConnectAttr", "uri":"dws_04_0144.html", "doc_type":"devg", "p_code":"164", "code":"184" }, { "desc":"SQLSetEnvAttr sets environment attributes.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.SQL_ERROR in", "product_code":"dws", "title":"SQLSetEnvAttr", "uri":"dws_04_0145.html", "doc_type":"devg", "p_code":"164", "code":"185" }, { "desc":"SQLSetStmtAttr sets attributes related to a statement.SQL_SUCCESS indicates that the call succeeded.SQL_SUCCESS_WITH_INFO indicates some warning information is displayed.", "product_code":"dws", "title":"SQLSetStmtAttr", "uri":"dws_04_0146.html", "doc_type":"devg", "p_code":"164", "code":"186" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"PostGIS Extension", "uri":"dws_04_0301.html", "doc_type":"devg", "p_code":"1", "code":"187" }, { "desc":"The third-party software that the PostGIS Extension depends on needs to be installed separately. If you need to use PostGIS, submit a service ticket or contact technical ", "product_code":"dws", "title":"PostGIS", "uri":"dws_04_0302.html", "doc_type":"devg", "p_code":"187", "code":"188" }, { "desc":"The third-party software that the PostGIS Extension depends on needs to be installed separately. If you need to use PostGIS, submit a service ticket or contact technical ", "product_code":"dws", "title":"Using PostGIS", "uri":"dws_04_0304.html", "doc_type":"devg", "p_code":"187", "code":"189" }, { "desc":"In GaussDB(DWS), PostGIS Extension supports the following data types:box2dbox3dgeometry_dumpgeometrygeographyrasterIf PostGIS is used by a user other than the creator of t", "product_code":"dws", "title":"PostGIS Support and Constraints", "uri":"dws_04_0305.html", "doc_type":"devg", "p_code":"187", "code":"190" }, { "desc":"This document contains open source software notice for the product. And this document is confidential information of copyright holder. Recipient shall protect it in due c", "product_code":"dws", "title":"OPEN SOURCE SOFTWARE NOTICE (For PostGIS)", "uri":"dws_04_0306.html", "doc_type":"devg", "p_code":"187", "code":"191" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Resource Monitoring", + "uri":"dws_04_0393.html", + "doc_type":"devg", + "p_code":"1", + "code":"192" + }, + { + "desc":"In the multi-tenant management framework, you can query the real-time or historical usage of all user resources (including memory, CPU cores, storage space, temporary spa", + "product_code":"dws", + "title":"User Resource Query", + "uri":"dws_04_0394.html", + "doc_type":"devg", + "p_code":"192", + "code":"193" + }, + { + "desc":"GaussDB(DWS) provides a view for monitoring the memory usage of the entire cluster.Query the pgxc_total_memory_detail view as a user with sysadmin permissions.SELECT * FR", + "product_code":"dws", + "title":"Monitoring Memory Resources", + "uri":"dws_04_0395.html", + "doc_type":"devg", + "p_code":"192", + "code":"194" + }, + { + "desc":"GaussDB(DWS) provides system catalogs for monitoring the resource usage of CNs and DNs (including memory, CPU usage, disk I/O, process physical I/O, and process logical I", + "product_code":"dws", + "title":"Instance Resource Monitoring", + "uri":"dws_04_0396.html", + "doc_type":"devg", + "p_code":"192", + "code":"195" + }, + { + "desc":"You can query real-time Top SQL in real-time resource monitoring views at different levels. The real-time resource monitoring view records the resource usage (including m", + "product_code":"dws", + "title":"Real-time TopSQL", + "uri":"dws_04_0397.html", + "doc_type":"devg", + "p_code":"192", + "code":"196" + }, + { + "desc":"You can query historical Top SQL in historical resource monitoring views. The historical resource monitoring view records the resource usage (of memory, disk, CPU time, a", + "product_code":"dws", + "title":"Historical TopSQL", + "uri":"dws_04_0398.html", + "doc_type":"devg", + "p_code":"192", + "code":"197" + }, + { + "desc":"In this section, TPC-DS sample data is used as an example to describe how to query Real-time TopSQL and Historical TopSQL.To query for historical or archived resource mon", + "product_code":"dws", + "title":"TopSQL Query Example", + "uri":"dws_04_0399.html", + "doc_type":"devg", + "p_code":"192", + "code":"198" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Query Performance Optimization", + "uri":"dws_04_0400.html", + "doc_type":"devg", + "p_code":"1", + "code":"199" + }, + { + "desc":"The aim of SQL optimization is to maximize the utilization of resources, including CPU, memory, disk I/O, and network I/O. To maximize resource utilization is to run SQL ", + "product_code":"dws", + "title":"Overview of Query Performance Optimization", + "uri":"dws_04_0402.html", + "doc_type":"devg", + "p_code":"199", + "code":"200" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Query Analysis", "uri":"dws_04_0403.html", "doc_type":"devg", "p_code":"199", "code":"201" }, { "desc":"The process from receiving SQL statements to the statement execution by the SQL engine is shown in Figure 1 and Table 1. The texts in red are steps where database adminis", "product_code":"dws", "title":"Query Execution Process", "uri":"dws_04_0409.html", "doc_type":"devg", "p_code":"201", "code":"202" }, { "desc":"The SQL execution plan is a node tree, which displays the detailed procedure when GaussDB(DWS) runs an SQL statement. A database operator indicates one step.You can run the E", "product_code":"dws", "title":"Overview of the SQL Execution Plan", "uri":"dws_04_0410.html", "doc_type":"devg", "p_code":"201", "code":"203" }, { "desc":"As described in Overview of the SQL Execution Plan, EXPLAIN displays the execution plan, but will not actually run SQL statements. EXPLAIN ANALYZE and EXPLAIN PERFORMANCE", "product_code":"dws", "title":"Deep Dive on the SQL Execution Plan", "uri":"dws_04_0411.html", "doc_type":"devg", "p_code":"201", "code":"204" }, { "desc":"This section describes how to query SQL statements whose execution takes a long time, leading to poor system performance.After the query, query statements are returned as", "product_code":"dws", "title":"Querying SQL Statements That Affect Performance Most", "uri":"dws_04_0412.html", "doc_type":"devg", "p_code":"201", "code":"205" }, { "desc":"During database running, query statements are blocked in some service scenarios and run for an excessively long time. In this case, you can forcibly terminate the faulty ", "product_code":"dws", "title":"Checking Blocked Statements", "uri":"dws_04_0413.html", "doc_type":"devg", "p_code":"201", "code":"206" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Query Improvement", "uri":"dws_04_0430.html", "doc_type":"devg", "p_code":"199", "code":"207" }, { "desc":"You can analyze slow SQL statements to optimize them.", "product_code":"dws", "title":"Optimization Process", "uri":"dws_04_0435.html", "doc_type":"devg", "p_code":"207", "code":"208" }, { "desc":"In a database, statistics indicate the source data of a plan generated by a planner. If no statistics are available or they are out of date, the execution plan may seri", "product_code":"dws", "title":"Updating Statistics", "uri":"dws_04_0436.html", "doc_type":"devg", "p_code":"207", "code":"209" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Reviewing and Modifying a Table Definition", "uri":"dws_04_0437.html", "doc_type":"devg", "p_code":"207", "code":"210" }, { "desc":"In a distributed framework, data is distributed on DNs. Data on one or more DNs is stored on a physical storage device. 
To properly define a table, you must:Evenly distri", "product_code":"dws", "title":"Reviewing and Modifying a Table Definition", "uri":"dws_04_0438.html", "doc_type":"devg", "p_code":"210", "code":"211" }, { "desc":"During database design, some key factors about table design will greatly affect the subsequent query performance of the database. Table design affects data storage as wel", "product_code":"dws", "title":"Selecting a Storage Model", "uri":"dws_04_0439.html", "doc_type":"devg", "p_code":"210", "code":"212" }, { "desc":"In replication mode, full data in a table is copied to each DN in the cluster. This mode is used for tables containing a small volume of data. Full data in a table stored", "product_code":"dws", "title":"Selecting a Distribution Mode", "uri":"dws_04_0440.html", "doc_type":"devg", "p_code":"210", "code":"213" }, { "desc":"The distribution column in a hash table must meet the following requirements, which are ranked by priority in descending order:The value of the distribution column should", "product_code":"dws", "title":"Selecting a Distribution Column", "uri":"dws_04_0441.html", "doc_type":"devg", "p_code":"210", "code":"214" }, { "desc":"Partial Cluster Key is a column-based technology. It can minimize or maximize sparse indexes to quickly filter base tables. Partial cluster key can specify multiple col", "product_code":"dws", "title":"Using Partial Clustering", "uri":"dws_04_0442.html", "doc_type":"devg", "p_code":"210", "code":"215" }, { "desc":"Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The table based on the logic is called a partit", "product_code":"dws", "title":"Using Partitioned Tables", "uri":"dws_04_0443.html", "doc_type":"devg", "p_code":"210", "code":"216" }, { "desc":"Use the following principles to obtain efficient data types:Using the data type that can be efficiently executedGenerally, calculation of integers (including common compa", "product_code":"dws", "title":"Selecting a Data Type", "uri":"dws_04_0444.html", "doc_type":"devg", "p_code":"210", "code":"217" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Typical SQL Optimization Methods", "uri":"dws_04_0445.html", "doc_type":"devg", "p_code":"207", "code":"218" }, { "desc":"Performance issues may occur when you query data or run the INSERT, DELETE, UPDATE, or CREATE TABLE AS statement. You can query the warning column in the GS_WLM_SESSION_S", "product_code":"dws", "title":"SQL Self-Diagnosis", "uri":"dws_04_0446.html", "doc_type":"devg", "p_code":"218", "code":"219" }, { "desc":"Currently, the GaussDB(DWS) optimizer can use three methods to develop statement execution policies in the distributed framework: generating a statement pushdown plan, a ", "product_code":"dws", "title":"Optimizing Statement Pushdown", "uri":"dws_04_0447.html", "doc_type":"devg", "p_code":"218", "code":"220" }, { "desc":"When an application runs an SQL statement to operate the database, a large number of subqueries are used because they are clearer than table joins. 
Especially in complic", "product_code":"dws", "title":"Optimizing Subqueries", "uri":"dws_04_0448.html", "doc_type":"devg", "p_code":"218", "code":"221" }, { "desc":"GaussDB(DWS) generates optimal execution plans based on the cost estimation. Optimizers need to estimate the number of data rows and the cost based on statistics collecte", "product_code":"dws", "title":"Optimizing Statistics", "uri":"dws_04_0449.html", "doc_type":"devg", "p_code":"218", "code":"222" }, { "desc":"A query statement needs to go through multiple operator procedures to generate the final result. Sometimes, the overall query performance deteriorates due to long executi", "product_code":"dws", "title":"Optimizing Operators", "uri":"dws_04_0450.html", "doc_type":"devg", "p_code":"218", "code":"223" }, { "desc":"Data skew breaks the balance among nodes in the distributed MPP architecture. If the amount of data stored or processed by a node is much greater than that by other nodes", "product_code":"dws", "title":"Optimizing Data Skew", "uri":"dws_04_0451.html", "doc_type":"devg", "p_code":"218", "code":"224" }, { "desc":"Based on the database SQL execution mechanism and a large number of practices, it is found that SQL statements can be rewritten according to certain rules so that t", "product_code":"dws", "title":"Experience in Rewriting SQL Statements", "uri":"dws_04_0452.html", "doc_type":"devg", "p_code":"207", "code":"225" }, { "desc":"This section describes the key CN parameters that affect GaussDB(DWS) SQL tuning performance. For details about how to configure these parameters, see Configuring GUC Par", "product_code":"dws", "title":"Adjusting Key Parameters During SQL Tuning", "uri":"dws_04_0453.html", "doc_type":"devg", "p_code":"207", "code":"226" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Hint-based Tuning", "uri":"dws_04_0454.html", "doc_type":"devg", "p_code":"207", "code":"227" }, { "desc":"In plan hints, you can specify a join order, join, stream, and scan operations, the number of rows in a result, and redistribution skew information to tune an execution p", "product_code":"dws", "title":"Plan Hint Optimization", "uri":"dws_04_0455.html", "doc_type":"devg", "p_code":"227", "code":"228" }, { "desc":"These hints specify the join order and outer/inner tables.Specify only the join order.Specify the join order and outer/inner tables. The outer/inner tables are specified", "product_code":"dws", "title":"Join Order Hints", "uri":"dws_04_0456.html", "doc_type":"devg", "p_code":"227", "code":"229" }, { "desc":"Specifies the join method. It can be nested loop join, hash join, or merge join.no indicates that the specified hint will not be used for a join.table_list specifies the ", "product_code":"dws", "title":"Join Operation Hints", "uri":"dws_04_0457.html", "doc_type":"devg", "p_code":"227", "code":"230" }, { "desc":"These hints specify the number of rows in an intermediate result set. 
Both absolute values and relative values are supported.#,+,-, and * are operators used for hinting t", "product_code":"dws", "title":"Rows Hints", "uri":"dws_04_0458.html", "doc_type":"devg", "p_code":"227", "code":"231" }, { "desc":"These hints specify a stream operation, which can be broadcast or redistribute.no indicates that the specified hint will not be used for a join.table_list specifies the t", "product_code":"dws", "title":"Stream Operation Hints", "uri":"dws_04_0459.html", "doc_type":"devg", "p_code":"227", "code":"232" }, { "desc":"These hints specify a scan operation, which can be tablescan, indexscan, or indexonlyscan.no indicates that the specified hint will not be used for a join.table specifies", "product_code":"dws", "title":"Scan Operation Hints", "uri":"dws_04_0460.html", "doc_type":"devg", "p_code":"227", "code":"233" }, { "desc":"These hints specify the name of a sublink block.table indicates the name you have specified for a sublink block.This hint is used by an outer query only when a sublink is", "product_code":"dws", "title":"Sublink Name Hints", "uri":"dws_04_0461.html", "doc_type":"devg", "p_code":"227", "code":"234" }, { "desc":"These hints specify redistribution keys containing skew data and skew values, and are used to optimize redistribution involving Join or HashAgg.Specify single-table skew", "product_code":"dws", "title":"Skew Hints", "uri":"dws_04_0462.html", "doc_type":"devg", "p_code":"227", "code":"235" }, { "desc":"A hint, or a GUC hint, specifies a configuration parameter value when a plan is generated. Currently, only the following parameters are supported:agg_redistribute_enhance", "product_code":"dws", "title":"Configuration Parameter Hints", "uri":"dws_04_0463.html", "doc_type":"devg", "p_code":"227", "code":"236" }, { "desc":"Plan hints change an execution plan. You can run EXPLAIN to view the changes.Hints containing errors are invalid and do not affect statement execution. The errors will be", "product_code":"dws", "title":"Hint Errors, Conflicts, and Other Warnings", "uri":"dws_04_0464.html", "doc_type":"devg", "p_code":"227", "code":"237" }, { "desc":"This section takes the statements in TPC-DS (Q24) as an example to describe how to optimize an execution plan by using hints in 1000X+24DN environments. For example:The o", "product_code":"dws", "title":"Plan Hint Cases", "uri":"dws_04_0465.html", "doc_type":"devg", "p_code":"227", "code":"238" }, { "desc":"To ensure proper database running, after INSERT and DELETE operations, you need to routinely do VACUUM FULL and ANALYZE as appropriate for customer scenarios and update s", "product_code":"dws", "title":"Routinely Maintaining Tables", "uri":"dws_04_0466.html", "doc_type":"devg", "p_code":"207", "code":"239" }, { "desc":"When data deletion is repeatedly performed in the database, index keys will be deleted from the index page, resulting in index distention. Recreating an index routinely i", "product_code":"dws", "title":"Routinely Recreating an Index", "uri":"dws_04_0467.html", "doc_type":"devg", "p_code":"207", "code":"240" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"Configuring the SMP", "uri":"dws_04_0468.html", "doc_type":"devg", "p_code":"207", "code":"241" }, { "desc":"The SMP feature improves performance through operator parallelism and occupies more system resources, including CPU, memory, network, and I/O. Actually, SMP is a meth", "product_code":"dws", "title":"Application Scenarios and Restrictions", "uri":"dws_04_0469.html", "doc_type":"devg", "p_code":"241", "code":"242" }, { "desc":"The SMP architecture trades abundant resources for time. After a plan is executed in parallel, resource consumption increases, including the CPU, memory, I/O, an", "product_code":"dws", "title":"Resource Impact on SMP Performance", "uri":"dws_04_0470.html", "doc_type":"devg", "p_code":"241", "code":"243" }, { "desc":"Besides resource factors, there are other factors that impact the SMP parallelism performance, such as uneven data distribution in a partitioned table and system paralle", "product_code":"dws", "title":"Other Factors Affecting SMP Performance", "uri":"dws_04_0471.html", "doc_type":"devg", "p_code":"241", "code":"244" }, { "desc":"Starting from this version, SMP auto adaptation is enabled. For newly deployed clusters, the default value of query_dop is 0, and SMP parameters have been adjusted. To en", "product_code":"dws", "title":"Suggestions for SMP Parameter Settings", "uri":"dws_04_0472.html", "doc_type":"devg", "p_code":"241", "code":"245" }, { "desc":"To manually optimize SMP, you need to be familiar with Suggestions for SMP Parameter Settings. This section describes how to optimize SMP.The CPU, memory, I/O, and networ", "product_code":"dws", "title":"SMP Manual Optimization Suggestions", "uri":"dws_04_0473.html", "doc_type":"devg", "p_code":"241", "code":"246" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Optimization Cases", + "uri":"dws_04_0474.html", + "doc_type":"devg", + "p_code":"199", + "code":"247" + }, + { + "desc":"Tables are defined as follows:The following query is executed:If a is the distribution column of t1 and t2:Then Streaming exists in the execution plan and the data volume", + "product_code":"dws", + "title":"Case: Selecting an Appropriate Distribution Column", + "uri":"dws_04_0475.html", + "doc_type":"devg", + "p_code":"247", + "code":"248" + }, + { + "desc":"Query the information about all personnel in the sales department.The original execution plan is as follows before creating the places.place_id and states.state_id indexe", + "product_code":"dws", + "title":"Case: Creating an Appropriate Index", + "uri":"dws_04_0476.html", + "doc_type":"devg", + "p_code":"247", + "code":"249" + }, + { + "desc":"Figure 1 shows the execution plan.As shown in Figure 1, the sequential scan phase is time consuming.The JOIN performance is poor because a large number of null values exi", + "product_code":"dws", + "title":"Case: Adding NOT NULL for JOIN Columns", + "uri":"dws_04_0477.html", + "doc_type":"devg", + "p_code":"247", + "code":"250" + }, + { + "desc":"In an execution plan, more than 95% of the execution time is spent on window agg performed on the CN. In this case, sum is performed for the two columns separately, and t", + "product_code":"dws", + "title":"Case: Pushing Down Sort Operations to DNs", + "uri":"dws_04_0478.html", + "doc_type":"devg", + "p_code":"247", + "code":"251" + }, + { + "desc":"If bit0 of cost_param is set to 1, an improved mechanism is used for estimating the selection rate of non-equi-joins. This method is more accurate for estimating the sele", + "product_code":"dws", + "title":"Case: Configuring cost_param for Better Query Performance", + "uri":"dws_04_0479.html", + "doc_type":"devg", + "p_code":"247", + "code":"252" + }, + { + "desc":"During a site test, the information is displayed after EXPLAIN ANALYZE is executed:According to the execution information, HashJoin becomes the performance bottleneck of ", + "product_code":"dws", + "title":"Case: Adjusting the Distribution Key", + "uri":"dws_04_0480.html", + "doc_type":"devg", + "p_code":"247", + "code":"253" + }, + { + "desc":"Information on the EXPLAIN PERFORMANCE at a site is as follows: As shown in the red boxes, two performance bottlenecks are scan operations in a table.After further analys", + "product_code":"dws", + "title":"Case: Adjusting the Partial Clustering Key", + "uri":"dws_04_0481.html", + "doc_type":"devg", + "p_code":"247", + "code":"254" + }, + { + "desc":"In the GaussDB(DWS) database, row-store tables use the row execution engine, and column-store tables use the column execution engine. 
If both row-store table and column-s", "product_code":"dws", "title":"Case: Adjusting the Table Storage Mode in a Medium Table", "uri":"dws_04_0482.html", "doc_type":"devg", "p_code":"247", "code":"255" }, { "desc":"During the test at a site, if the following execution plan is performed, the customer expects that the performance can be improved and the result can be returned within 3", "product_code":"dws", "title":"Case: Adjusting the Local Clustering Column", "uri":"dws_04_0483.html", "doc_type":"devg", "p_code":"247", "code":"256" }, { "desc":"In the following simple SQL statements, the performance bottlenecks exist in the scan operation of dwcjk.Obviously, there are date features in the cjrq field of table dat", "product_code":"dws", "title":"Case: Reconstructing Partition Tables", "uri":"dws_04_0484.html", "doc_type":"devg", "p_code":"247", "code":"257" }, { "desc":"The t1 table is defined as follows:Assume that the distribution column of the result set provided by the agg lower-layer operator is setA, and the group by column of the ", "product_code":"dws", "title":"Case: Adjusting the GUC Parameter best_agg_plan", "uri":"dws_04_0485.html", "doc_type":"devg", "p_code":"247", "code":"258" }, { "desc":"The performance of this SQL statement is poor. SubPlan exists in the execution plan as follows:The core of this optimization is to eliminate subqueries. Based on the service scenario anal", "product_code":"dws", "title":"Case: Rewriting SQL and Deleting Subqueries (Case 1)", "uri":"dws_04_0486.html", "doc_type":"devg", "p_code":"247", "code":"259" }, { "desc":"On a site, the customer reported that the execution of the following SQL statements lasted over one day and did not end:The corresponding execution p", "product_code":"dws", "title":"Case: Rewriting SQL and Deleting Subqueries (Case 2)", "uri":"dws_04_0487.html", "doc_type":"devg", "p_code":"247", "code":"260" }, { "desc":"In a test at a site, ddw_f10_op_cust_asset_mon is a partitioned table and the partition key is year_mth whose value is a combined string of month and year values.The foll", "product_code":"dws", "title":"Case: Rewriting SQL Statements and Eliminating Prune Interference", "uri":"dws_04_0488.html", "doc_type":"devg", "p_code":"247", "code":"261" }, { "desc":"in-clause/any-clause is a common SQL statement constraint. Sometimes, the clause following in or any is a constant. For example:orSome special usages are as follows:Where", "product_code":"dws", "title":"Case: Rewriting SQL Statements and Deleting in-clause", "uri":"dws_04_0489.html", "doc_type":"devg", "p_code":"247", "code":"262" }, { "desc":"You can add PARTIAL CLUSTER KEY(column_name[,...]) to the definition of a column-store table to set one or more columns of this table as partial cluster keys. In this way", "product_code":"dws", "title":"Case: Setting Partial Cluster Keys", "uri":"dws_04_0490.html", "doc_type":"devg", "p_code":"247", "code":"263" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"SQL Execution Troubleshooting", "uri":"dws_04_0491.html", "doc_type":"devg", "p_code":"199", "code":"264" }, { "desc":"A query task that used to take a few milliseconds to complete now requires several seconds, and one that used to take several seconds now requires even half an hour. ", "product_code":"dws", "title":"Low Query Efficiency", "uri":"dws_04_0492.html", "doc_type":"devg", "p_code":"264", "code":"265" }, { "desc":"DROP TABLE fails to be executed in the following scenarios:A user runs the \\dt+ command using gsql and finds that the table_name table does not exist. When the user runs ", "product_code":"dws", "title":"DROP TABLE Fails to Be Executed", "uri":"dws_04_0494.html", "doc_type":"devg", "p_code":"264", "code":"266" }, { "desc":"Two users log in to the same database human_resource and run the select count(*) from areas statement separately to query the areas table, but obtain different results.Ch", "product_code":"dws", "title":"Different Data Is Displayed for the Same Table Queried By Multiple Users", "uri":"dws_04_0495.html", "doc_type":"devg", "p_code":"264", "code":"267" }, { "desc":"The following error is reported during the integer conversion:Some data types cannot be converted to the target data type.Gradually narrow down the range of SQL statement", "product_code":"dws", "title":"An Error Occurs During the Integer Conversion", "uri":"dws_04_0496.html", "doc_type":"devg", "p_code":"264", "code":"268" }, { "desc":"With automatic retry (referred to as CN retry), GaussDB(DWS) retries an SQL statement when the execution of this statement fails. If an SQL statement sent from the gsql c", "product_code":"dws", "title":"Automatic Retry upon SQL Statement Execution Errors", "uri":"dws_04_0497.html", "doc_type":"devg", "p_code":"264", "code":"269" }, { "desc":"To improve the cluster performance, you can use multiple methods to optimize the database, including hardware configuration, software driver upgrade, and internal paramet", "product_code":"dws", "title":"Common Performance Parameter Optimization Design", "uri":"dws_04_0970.html", "doc_type":"devg", "p_code":"199", "code":"270" }, { "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", "product_code":"dws", "title":"User-Defined Functions", "uri":"dws_04_0507.html", "doc_type":"devg", "p_code":"1", "code":"271" }, { "desc":"With the GaussDB(DWS) PL/Java functions, you can choose your favorite Java IDE to write Java methods and install the JAR files containing these methods into the GaussDB(D", "product_code":"dws", "title":"PL/Java Functions", "uri":"dws_04_0509.html", "doc_type":"devg", "p_code":"271", "code":"272" }, { "desc":"PL/pgSQL is similar to PL/SQL of Oracle. 
It is a loadable procedural language.The functions created using PL/pgSQL can be used in any place where you can use built-in fun", + "product_code":"dws", + "title":"PL/pgSQL Functions", + "uri":"dws_04_0511.html", + "doc_type":"devg", + "p_code":"271", + "code":"273" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Stored Procedures", + "uri":"dws_04_0512.html", + "doc_type":"devg", + "p_code":"1", + "code":"274" + }, + { + "desc":"In GaussDB(DWS), business rules and logics are saved as stored procedures.A stored procedure is a combination of SQL, PL/SQL, and Java statements, enabling business rule ", + "product_code":"dws", + "title":"Stored Procedure", + "uri":"dws_04_0513.html", + "doc_type":"devg", + "p_code":"274", + "code":"275" + }, + { + "desc":"A data type refers to a value set and an operation set defined on the value set. A GaussDB(DWS) database consists of tables, each of which is defined by its own columns. ", + "product_code":"dws", + "title":"Data Types", + "uri":"dws_04_0514.html", + "doc_type":"devg", + "p_code":"274", + "code":"276" + }, + { + "desc":"Certain data types in the database support implicit data type conversions, such as assignments and parameters invoked by functions. For other data types, you can use the ", + "product_code":"dws", + "title":"Data Type Conversion", + "uri":"dws_04_0515.html", + "doc_type":"devg", + "p_code":"274", + "code":"277" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Arrays and Records", + "uri":"dws_04_0516.html", + "doc_type":"devg", + "p_code":"274", + "code":"278" + }, + { + "desc":"Before the use of arrays, an array type needs to be defined:Define an array type immediately after the AS keyword in a stored procedure. Run the following statement:TYPE ", + "product_code":"dws", + "title":"Arrays", + "uri":"dws_04_0517.html", + "doc_type":"devg", + "p_code":"278", + "code":"279" + }, + { + "desc":"Perform the following operations to create a record variable:Define a record type and use this type to declare a variable.For the syntax of the record type, see Figure 1.", + "product_code":"dws", + "title":"record", + "uri":"dws_04_0518.html", + "doc_type":"devg", + "p_code":"278", + "code":"280" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Syntax", + "uri":"dws_04_0519.html", + "doc_type":"devg", + "p_code":"274", + "code":"281" + }, + { + "desc":"A PL/SQL block can contain a sub-block which can be placed in any section. The following describes the architecture of a PL/SQL block:DECLARE: declares variables, types, ", + "product_code":"dws", + "title":"Basic Structure", + "uri":"dws_04_0520.html", + "doc_type":"devg", + "p_code":"281", + "code":"282" + }, + { + "desc":"An anonymous block applies to a script infrequently executed or a one-off activity. 
An anonymous block is executed in a session and is not stored.Figure 1 shows the synta", + "product_code":"dws", + "title":"Anonymous Block", + "uri":"dws_04_0521.html", + "doc_type":"devg", + "p_code":"281", + "code":"283" + }, + { + "desc":"A subprogram stores stored procedures, functions, operators, and advanced packages. A subprogram created in a database can be called by other programs.", + "product_code":"dws", + "title":"Subprogram", + "uri":"dws_04_0522.html", + "doc_type":"devg", + "p_code":"281", + "code":"284" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Basic Statements", + "uri":"dws_04_0523.html", + "doc_type":"devg", + "p_code":"274", + "code":"285" + }, + { + "desc":"This section describes the declaration of variables in the PL/SQL and the scope of this variable in codes.For details about the variable declaration syntax, see Figure 1.", + "product_code":"dws", + "title":"Variable Definition Statement", + "uri":"dws_04_0524.html", + "doc_type":"devg", + "p_code":"285", + "code":"286" + }, + { + "desc":"Figure 1 shows the syntax diagram for assigning a value to a variable.The above syntax diagram is explained as follows:variable_name indicates the name of a variable.valu", + "product_code":"dws", + "title":"Assignment Statement", + "uri":"dws_04_0525.html", + "doc_type":"devg", + "p_code":"285", + "code":"287" + }, + { + "desc":"Figure 1 shows the syntax diagram for calling a clause.The above syntax diagram is explained as follows:procedure_name specifies the name of a stored procedure.parameter ", + "product_code":"dws", + "title":"Call Statement", + "uri":"dws_04_0526.html", + "doc_type":"devg", + "p_code":"285", + "code":"288" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Dynamic Statements", + "uri":"dws_04_0527.html", + "doc_type":"devg", + "p_code":"274", + "code":"289" + }, + { + "desc":"You can perform dynamic queries using EXECUTE IMMEDIATE or OPEN FOR in GaussDB(DWS). EXECUTE IMMEDIATE dynamically executes SELECT statements and OPEN FOR combines use of", + "product_code":"dws", + "title":"Executing Dynamic Query Statements", + "uri":"dws_04_0528.html", + "doc_type":"devg", + "p_code":"289", + "code":"290" + }, + { + "desc":"Figure 1 shows the syntax diagram.Figure 2 shows the syntax diagram for using_clause.The above syntax diagram is explained as follows:USING IN bind_argument is used to sp", + "product_code":"dws", + "title":"Executing Dynamic Non-query Statements", + "uri":"dws_04_0529.html", + "doc_type":"devg", + "p_code":"289", + "code":"291" + }, + { + "desc":"This section describes how to dynamically call store procedures. You must use anonymous statement blocks to package stored procedures or statement blocks and append IN an", + "product_code":"dws", + "title":"Dynamically Calling Stored Procedures", + "uri":"dws_04_0530.html", + "doc_type":"devg", + "p_code":"289", + "code":"292" + }, + { + "desc":"This section describes how to execute anonymous blocks in dynamic statements. 
Append IN and OUT after the EXECUTE IMMEDIATE...USING statement to input and output paramet", + "product_code":"dws", + "title":"Dynamically Calling Anonymous Blocks", + "uri":"dws_04_0531.html", + "doc_type":"devg", + "p_code":"289", + "code":"293" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Control Statements", + "uri":"dws_04_0532.html", + "doc_type":"devg", + "p_code":"274", + "code":"294" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"RETURN Statements", + "uri":"dws_04_0533.html", + "doc_type":"devg", + "p_code":"294", + "code":"295" + }, + { + "desc":"Figure 1 shows the syntax diagram for a return statement. The syntax details are as follows: This statement returns control from a stored procedure or function to a caller.", + "product_code":"dws", + "title":"RETURN", + "uri":"dws_04_0534.html", + "doc_type":"devg", + "p_code":"295", + "code":"296" + }, + { + "desc":"When creating a function, specify SETOF datatype for the return values. return_next_clause::= return_query_clause::= The syntax details are as follows: If a function needs to", + "product_code":"dws", + "title":"RETURN NEXT and RETURN QUERY", + "uri":"dws_04_0535.html", + "doc_type":"devg", + "p_code":"295", + "code":"297" + }, + { + "desc":"Conditional statements are used to decide whether given conditions are met. Operations are executed based on the decisions made. GaussDB(DWS) supports five usages of IF: IF", + "product_code":"dws", + "title":"Conditional Statements", + "uri":"dws_04_0536.html", + "doc_type":"devg", + "p_code":"294", + "code":"298" + }, + { + "desc":"The syntax diagram is as follows. Example: The loop must be used together with EXIT; otherwise, an infinite loop occurs. The syntax diagram is as follows. If the conditional ", + "product_code":"dws", + "title":"Loop Statements", + "uri":"dws_04_0537.html", + "doc_type":"devg", + "p_code":"294", + "code":"299" + }, + { + "desc":"Figure 1 shows the syntax diagram. Figure 2 shows the syntax diagram for when_clause. Parameter description: case_expression: specifies the variable or expression. when_expre", + "product_code":"dws", + "title":"Branch Statements", + "uri":"dws_04_0538.html", + "doc_type":"devg", + "p_code":"294", + "code":"300" + }, + { + "desc":"In PL/SQL programs, NULL statements are used to indicate \"nothing should be done\", acting as placeholders. They give meaning to some statements and improve program reada", + "product_code":"dws", + "title":"NULL Statements", + "uri":"dws_04_0539.html", + "doc_type":"devg", + "p_code":"294", + "code":"301" + }, + { + "desc":"By default, any error occurring in a PL/SQL function aborts execution of the function, and indeed of the surrounding transaction as well. You can trap errors and restore ", + "product_code":"dws", + "title":"Error Trapping Statements", + "uri":"dws_04_0540.html", + "doc_type":"devg", + "p_code":"294", + "code":"302" + }, + { + "desc":"The GOTO statement unconditionally transfers control from the current statement to a labeled statement. 
The GOTO statement changes the execution logic. Therefore, use", + "product_code":"dws", + "title":"GOTO Statements", + "uri":"dws_04_0541.html", + "doc_type":"devg", + "p_code":"294", + "code":"303" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Other Statements", + "uri":"dws_04_0542.html", + "doc_type":"devg", + "p_code":"274", + "code":"304" + }, + { + "desc":"GaussDB(DWS) provides multiple lock modes to control concurrent access to table data. These modes are used when Multi-Version Concurrency Control (MVCC) cannot give exp", + "product_code":"dws", + "title":"Lock Operations", + "uri":"dws_04_0543.html", + "doc_type":"devg", + "p_code":"304", + "code":"305" + }, + { + "desc":"GaussDB(DWS) provides cursors as a data buffer for users to store execution results of SQL statements. Each cursor region has a name. Users can use SQL statements to obta", + "product_code":"dws", + "title":"Cursor Operations", + "uri":"dws_04_0544.html", + "doc_type":"devg", + "p_code":"304", + "code":"306" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Cursors", + "uri":"dws_04_0545.html", + "doc_type":"devg", + "p_code":"274", + "code":"307" + }, + { + "desc":"To process SQL statements, the stored procedure process assigns a memory segment to store context associations. Cursors are handles or pointers to context areas. With curs", + "product_code":"dws", + "title":"Overview", + "uri":"dws_04_0546.html", + "doc_type":"devg", + "p_code":"307", + "code":"308" + }, + { + "desc":"An explicit cursor is used to process query statements, particularly when the query results contain multiple records. An explicit cursor performs the following six PL/SQL ", + "product_code":"dws", + "title":"Explicit Cursor", + "uri":"dws_04_0547.html", + "doc_type":"devg", + "p_code":"307", + "code":"309" + }, + { + "desc":"The system automatically sets implicit cursors for non-query statements, such as ALTER and DROP, and creates work areas for these statements. These implicit cursors are n", + "product_code":"dws", + "title":"Implicit Cursor", + "uri":"dws_04_0548.html", + "doc_type":"devg", + "p_code":"307", + "code":"310" + }, + { + "desc":"The use of cursors in WHILE and LOOP statements is called a cursor loop. Generally, OPEN, FETCH, and CLOSE statements are needed in a cursor loop. The following describes a", + "product_code":"dws", + "title":"Cursor Loop", + "uri":"dws_04_0549.html", + "doc_type":"devg", + "p_code":"307", + "code":"311" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Advanced Packages", + "uri":"dws_04_0550.html", + "doc_type":"devg", + "p_code":"274", + "code":"312" + }, + { + "desc":"Table 1 provides all interfaces supported by the DBMS_LOB package. DBMS_LOB.GETLENGTH: Specifies the length of a LOB type object obtained and returned by the stored procedur", + "product_code":"dws", + "title":"DBMS_LOB", + "uri":"dws_04_0551.html", + "doc_type":"devg", + "p_code":"312", + "code":"313" + }, + { + "desc":"Table 1 provides all interfaces supported by the DBMS_RANDOM package. DBMS_RANDOM.SEED: The stored procedure SEED is used to set a seed for a random number. The DBMS_RANDOM.", + "product_code":"dws", + "title":"DBMS_RANDOM", + "uri":"dws_04_0552.html", + "doc_type":"devg", + "p_code":"312", + "code":"314" + }, + { + "desc":"Table 1 provides all interfaces supported by the DBMS_OUTPUT package. DBMS_OUTPUT.PUT_LINE: The PUT_LINE procedure writes a row of text carrying a line end symbol in the buf", + "product_code":"dws", + "title":"DBMS_OUTPUT", + "uri":"dws_04_0553.html", + "doc_type":"devg", + "p_code":"312", + "code":"315" + }, + { + "desc":"Table 1 provides all interfaces supported by the UTL_RAW package. The external representation of the RAW type data is hexadecimal and its internal storage form is binary. ", + "product_code":"dws", + "title":"UTL_RAW", + "uri":"dws_04_0554.html", + "doc_type":"devg", + "p_code":"312", + "code":"316" + }, + { + "desc":"Table 1 lists all interfaces supported by the DBMS_JOB package. DBMS_JOB.SUBMIT: The stored procedure SUBMIT submits a job provided by the system. A prototype of the DBMS_JOB", + "product_code":"dws", + "title":"DBMS_JOB", + "uri":"dws_04_0555.html", + "doc_type":"devg", + "p_code":"312", + "code":"317" + }, + { + "desc":"Table 1 lists interfaces supported by the DBMS_SQL package. You are advised to use dbms_sql.define_column and dbms_sql.column_value to define columns. If the size of the re", + "product_code":"dws", + "title":"DBMS_SQL", + "uri":"dws_04_0556.html", + "doc_type":"devg", + "p_code":"312", + "code":"318" + }, + { + "desc":"RAISE has the following five syntax formats: Parameter description: The level option is used to specify the error level, that is, DEBUG, LOG, INFO, NOTICE, WARNING, or EXCE", + "product_code":"dws", + "title":"Debugging", + "uri":"dws_04_0558.html", + "doc_type":"devg", + "p_code":"274", + "code":"319" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"System Catalogs and System Views", + "uri":"dws_04_0559.html", + "doc_type":"devg", + "p_code":"1", + "code":"320" + }, + { + "desc":"System catalogs are used by GaussDB(DWS) to store structural metadata. They are a core component of the GaussDB(DWS) database system and provide control information for the d", + "product_code":"dws", + "title":"Overview of System Catalogs and System Views", + "uri":"dws_04_0560.html", + "doc_type":"devg", + "p_code":"320", + "code":"321" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"System Catalogs", + "uri":"dws_04_0561.html", + "doc_type":"devg", + "p_code":"320", + "code":"322" + }, + { + "desc":"GS_OBSSCANINFO defines the OBS runtime information scanned in cluster acceleration scenarios. Each record corresponds to a piece of runtime information of a foreign table", + "product_code":"dws", + "title":"GS_OBSSCANINFO", + "uri":"dws_04_0562.html", + "doc_type":"devg", + "p_code":"322", + "code":"323" + }, + { + "desc":"The GS_WLM_INSTANCE_HISTORY system catalog stores information about resource usage related to CN or DN instances. Each record in the system table indicates the resource u", + "product_code":"dws", + "title":"GS_WLM_INSTANCE_HISTORY", + "uri":"dws_04_0564.html", + "doc_type":"devg", + "p_code":"322", + "code":"324" + }, + { + "desc":"GS_WLM_OPERATOR_INFO records operators of completed jobs. The data is dumped from the kernel to a system catalog. This system catalog's schema is dbms_om. This system catal", + "product_code":"dws", + "title":"GS_WLM_OPERATOR_INFO", + "uri":"dws_04_0565.html", + "doc_type":"devg", + "p_code":"322", + "code":"325" + }, + { + "desc":"GS_WLM_SESSION_INFO records load management information about a completed job executed on all CNs. The data is dumped from the kernel to a system catalog. This system cata", + "product_code":"dws", + "title":"GS_WLM_SESSION_INFO", + "uri":"dws_04_0566.html", + "doc_type":"devg", + "p_code":"322", + "code":"326" + }, + { + "desc":"The GS_WLM_USER_RESOURCE_HISTORY system table stores information about resources used by users and is valid only on CNs. Each record in the system table indicates the res", + "product_code":"dws", + "title":"GS_WLM_USER_RESOURCE_HISTORY", + "uri":"dws_04_0567.html", + "doc_type":"devg", + "p_code":"322", + "code":"327" + }, + { + "desc":"pg_aggregate records information about aggregation functions. Each entry in pg_aggregate is an extension of an entry in pg_proc. The pg_proc entry carries the aggregate's", + "product_code":"dws", + "title":"PG_AGGREGATE", + "uri":"dws_04_0568.html", + "doc_type":"devg", + "p_code":"322", + "code":"328" + }, + { + "desc":"PG_AM records information about index access methods. There is one row for each index access method supported by the system.", + "product_code":"dws", + "title":"PG_AM", + "uri":"dws_04_0569.html", + "doc_type":"devg", + "p_code":"322", + "code":"329" + }, + { + "desc":"PG_AMOP records information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A ", + "product_code":"dws", + "title":"PG_AMOP", + "uri":"dws_04_0570.html", + "doc_type":"devg", + "p_code":"322", + "code":"330" + }, + { + "desc":"PG_AMPROC records information about the support procedures associated with the access method operator families. 
There is one row for each support procedure belonging to a", + "product_code":"dws", + "title":"PG_AMPROC", + "uri":"dws_04_0571.html", + "doc_type":"devg", + "p_code":"322", + "code":"331" + }, + { + "desc":"PG_ATTRDEF stores default values of columns.", + "product_code":"dws", + "title":"PG_ATTRDEF", + "uri":"dws_04_0572.html", + "doc_type":"devg", + "p_code":"322", + "code":"332" + }, + { + "desc":"PG_ATTRIBUTE records information about table columns.", + "product_code":"dws", + "title":"PG_ATTRIBUTE", + "uri":"dws_04_0573.html", + "doc_type":"devg", + "p_code":"322", + "code":"333" + }, + { + "desc":"PG_AUTHID records information about the database authentication identifiers (roles). The concept of users is contained in that of roles. A user is actually a role whose r", + "product_code":"dws", + "title":"PG_AUTHID", + "uri":"dws_04_0574.html", + "doc_type":"devg", + "p_code":"322", + "code":"334" + }, + { + "desc":"PG_AUTH_HISTORY records the authentication history of the role. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"PG_AUTH_HISTORY", + "uri":"dws_04_0575.html", + "doc_type":"devg", + "p_code":"322", + "code":"335" + }, + { + "desc":"PG_AUTH_MEMBERS records the membership relations between roles.", + "product_code":"dws", + "title":"PG_AUTH_MEMBERS", + "uri":"dws_04_0576.html", + "doc_type":"devg", + "p_code":"322", + "code":"336" + }, + { + "desc":"PG_CAST records conversion relationships between data types.", + "product_code":"dws", + "title":"PG_CAST", + "uri":"dws_04_0577.html", + "doc_type":"devg", + "p_code":"322", + "code":"337" + }, + { + "desc":"PG_CLASS records database objects and their relations. View the OID and relfilenode of a table. Count row-store tables. Count column-store tables.", + "product_code":"dws", + "title":"PG_CLASS", + "uri":"dws_04_0578.html", + "doc_type":"devg", + "p_code":"322", + "code":"338" + }, + { + "desc":"PG_COLLATION records the available collations, which are essentially mappings from an SQL name to operating system locale categories.", + "product_code":"dws", + "title":"PG_COLLATION", + "uri":"dws_04_0579.html", + "doc_type":"devg", + "p_code":"322", + "code":"339" + }, + { + "desc":"PG_CONSTRAINT records check, primary key, unique, and foreign key constraints on the tables. consrc is not updated when referenced objects change; for example, it will not", + "product_code":"dws", + "title":"PG_CONSTRAINT", + "uri":"dws_04_0580.html", + "doc_type":"devg", + "p_code":"322", + "code":"340" + }, + { + "desc":"PG_CONVERSION records encoding conversion information.", + "product_code":"dws", + "title":"PG_CONVERSION", + "uri":"dws_04_0581.html", + "doc_type":"devg", + "p_code":"322", + "code":"341" + }, + { + "desc":"PG_DATABASE records information about the available databases.", + "product_code":"dws", + "title":"PG_DATABASE", + "uri":"dws_04_0582.html", + "doc_type":"devg", + "p_code":"322", + "code":"342" + }, + { + "desc":"PG_DB_ROLE_SETTING records the default values of configuration items bound to each role and database when the database is running.", + "product_code":"dws", + "title":"PG_DB_ROLE_SETTING", + "uri":"dws_04_0583.html", + "doc_type":"devg", + "p_code":"322", + "code":"343" + }, + { + "desc":"PG_DEFAULT_ACL records the initial privileges assigned to the newly created objects. Run the following command to view the initial permissions of the new user role1: You ca", + "product_code":"dws", + "title":"PG_DEFAULT_ACL", + "uri":"dws_04_0584.html", + "doc_type":"devg", + 
"p_code":"322", + "code":"344" + }, + { + "desc":"PG_DEPEND records the dependency relationships between database objects. This information allows DROP commands to find which other objects must be dropped by DROP CASCADE", + "product_code":"dws", + "title":"PG_DEPEND", + "uri":"dws_04_0585.html", + "doc_type":"devg", + "p_code":"322", + "code":"345" + }, + { + "desc":"PG_DESCRIPTION records optional descriptions (comments) for each database object. Descriptions of many built-in system objects are provided in the initial contents of PG_", + "product_code":"dws", + "title":"PG_DESCRIPTION", + "uri":"dws_04_0586.html", + "doc_type":"devg", + "p_code":"322", + "code":"346" + }, + { + "desc":"PG_ENUM records entries showing the values and labels for each enum type. The internal representation of a given enum value is actually the OID of its associated row in p", + "product_code":"dws", + "title":"PG_ENUM", + "uri":"dws_04_0588.html", + "doc_type":"devg", + "p_code":"322", + "code":"347" + }, + { + "desc":"PG_EXTENSION records information about the installed extensions. By default, GaussDB(DWS) has 12 extensions, that is, PLPGSQL, DIST_FDW, FILE_FDW, HDFS_FDW, HSTORE, PLDBG", + "product_code":"dws", + "title":"PG_EXTENSION", + "uri":"dws_04_0589.html", + "doc_type":"devg", + "p_code":"322", + "code":"348" + }, + { + "desc":"PG_EXTENSION_DATA_SOURCE records information about external data source. An external data source contains information about an external database, such as its password enc", + "product_code":"dws", + "title":"PG_EXTENSION_DATA_SOURCE", + "uri":"dws_04_0590.html", + "doc_type":"devg", + "p_code":"322", + "code":"349" + }, + { + "desc":"PG_FOREIGN_DATA_WRAPPER records foreign-data wrapper definitions. A foreign-data wrapper is the mechanism by which external data, residing on foreign servers, is accessed", + "product_code":"dws", + "title":"PG_FOREIGN_DATA_WRAPPER", + "uri":"dws_04_0591.html", + "doc_type":"devg", + "p_code":"322", + "code":"350" + }, + { + "desc":"PG_FOREIGN_SERVER records the foreign server definitions. A foreign server describes a source of external data, such as a remote server. Foreign servers are accessed via ", + "product_code":"dws", + "title":"PG_FOREIGN_SERVER", + "uri":"dws_04_0592.html", + "doc_type":"devg", + "p_code":"322", + "code":"351" + }, + { + "desc":"PG_FOREIGN_TABLE records auxiliary information about foreign tables.", + "product_code":"dws", + "title":"PG_FOREIGN_TABLE", + "uri":"dws_04_0593.html", + "doc_type":"devg", + "p_code":"322", + "code":"352" + }, + { + "desc":"PG_INDEX records part of the information about indexes. The rest is mostly in PG_CLASS.", + "product_code":"dws", + "title":"PG_INDEX", + "uri":"dws_04_0594.html", + "doc_type":"devg", + "p_code":"322", + "code":"353" + }, + { + "desc":"PG_INHERITS records information about table inheritance hierarchies. There is one entry for each direct child table in the database. Indirect inheritance can be determine", + "product_code":"dws", + "title":"PG_INHERITS", + "uri":"dws_04_0595.html", + "doc_type":"devg", + "p_code":"322", + "code":"354" + }, + { + "desc":"PG_JOBS records detailed information about jobs created by users. Dedicated threads poll the pg_jobs table and trigger jobs based on scheduled job execution time. This ta", + "product_code":"dws", + "title":"PG_JOBS", + "uri":"dws_04_0596.html", + "doc_type":"devg", + "p_code":"322", + "code":"355" + }, + { + "desc":"PG_LANGUAGE records programming languages. 
You can use these languages and their interfaces to write functions or stored procedures.", + "product_code":"dws", + "title":"PG_LANGUAGE", + "uri":"dws_04_0597.html", + "doc_type":"devg", + "p_code":"322", + "code":"356" + }, + { + "desc":"PG_LARGEOBJECT records the data making up large objects. A large object is identified by an OID assigned when it is created. Each large object is broken into segments or \"", + "product_code":"dws", + "title":"PG_LARGEOBJECT", + "uri":"dws_04_0598.html", + "doc_type":"devg", + "p_code":"322", + "code":"357" + }, + { + "desc":"PG_LARGEOBJECT_METADATA records metadata associated with large objects. The actual large object data is stored in PG_LARGEOBJECT.", + "product_code":"dws", + "title":"PG_LARGEOBJECT_METADATA", + "uri":"dws_04_0599.html", + "doc_type":"devg", + "p_code":"322", + "code":"358" + }, + { + "desc":"PG_NAMESPACE records the namespaces, that is, schema-related information.", + "product_code":"dws", + "title":"PG_NAMESPACE", + "uri":"dws_04_0600.html", + "doc_type":"devg", + "p_code":"322", + "code":"359" + }, + { + "desc":"PG_OBJECT records the user creation, creation time, last modification time, and last analyzing time of objects of specified types (types existing in object_type). Only nor", + "product_code":"dws", + "title":"PG_OBJECT", + "uri":"dws_04_0601.html", + "doc_type":"devg", + "p_code":"322", + "code":"360" + }, + { + "desc":"PG_OBSSCANINFO defines the OBS runtime information scanned in cluster acceleration scenarios. Each record corresponds to a piece of runtime information of a foreign table", + "product_code":"dws", + "title":"PG_OBSSCANINFO", + "uri":"dws_04_0602.html", + "doc_type":"devg", + "p_code":"322", + "code":"361" + }, + { + "desc":"PG_OPCLASS defines index access method operator classes. Each operator class defines semantics for index columns of a particular data type and a particular index access me", + "product_code":"dws", + "title":"PG_OPCLASS", + "uri":"dws_04_0603.html", + "doc_type":"devg", + "p_code":"322", + "code":"362" + }, + { + "desc":"PG_OPERATOR records information about operators.", + "product_code":"dws", + "title":"PG_OPERATOR", + "uri":"dws_04_0604.html", + "doc_type":"devg", + "p_code":"322", + "code":"363" + }, + { + "desc":"PG_OPFAMILY defines operator families. Each operator family is a collection of operators and associated support routines that implement the semantics specified for a parti", + "product_code":"dws", + "title":"PG_OPFAMILY", + "uri":"dws_04_0605.html", + "doc_type":"devg", + "p_code":"322", + "code":"364" + }, + { + "desc":"PG_PARTITION records all partitioned tables, table partitions, toast tables on table partitions, and index partitions in the database. Partitioned index information is no", + "product_code":"dws", + "title":"PG_PARTITION", + "uri":"dws_04_0606.html", + "doc_type":"devg", + "p_code":"322", + "code":"365" + }, + { + "desc":"PG_PLTEMPLATE records template information for procedural languages.", + "product_code":"dws", + "title":"PG_PLTEMPLATE", + "uri":"dws_04_0607.html", + "doc_type":"devg", + "p_code":"322", + "code":"366" + }, + { + "desc":"PG_PROC records information about functions or procedures. Query the OID of a specified function. 
For example, obtain the OID 1295 of the justify_days function. Query wheth", + "product_code":"dws", + "title":"PG_PROC", + "uri":"dws_04_0608.html", + "doc_type":"devg", + "p_code":"322", + "code":"367" + }, + { + "desc":"PG_RANGE records information about range types. This is in addition to the types' entries in PG_TYPE. rngsubopc (plus rngcollation, if the element type is collatable) deter", + "product_code":"dws", + "title":"PG_RANGE", + "uri":"dws_04_0609.html", + "doc_type":"devg", + "p_code":"322", + "code":"368" + }, + { + "desc":"PG_REDACTION_COLUMN records the information about the redacted columns.", + "product_code":"dws", + "title":"PG_REDACTION_COLUMN", + "uri":"dws_04_0610.html", + "doc_type":"devg", + "p_code":"322", + "code":"369" + }, + { + "desc":"PG_REDACTION_POLICY records information about the objects to be redacted.", + "product_code":"dws", + "title":"PG_REDACTION_POLICY", + "uri":"dws_04_0611.html", + "doc_type":"devg", + "p_code":"322", + "code":"370" + }, + { + "desc":"PG_RLSPOLICY displays the information about row-level access control policies.", + "product_code":"dws", + "title":"PG_RLSPOLICY", + "uri":"dws_04_0612.html", + "doc_type":"devg", + "p_code":"322", + "code":"371" + }, + { + "desc":"PG_RESOURCE_POOL records the information about database resource pools.", + "product_code":"dws", + "title":"PG_RESOURCE_POOL", + "uri":"dws_04_0613.html", + "doc_type":"devg", + "p_code":"322", + "code":"372" + }, + { + "desc":"PG_REWRITE records rewrite rules defined for tables and views.", + "product_code":"dws", + "title":"PG_REWRITE", + "uri":"dws_04_0614.html", + "doc_type":"devg", + "p_code":"322", + "code":"373" + }, + { + "desc":"PG_SECLABEL records security labels on database objects. See also PG_SHSECLABEL, which performs a similar function for security labels of database objects that are shared ", + "product_code":"dws", + "title":"PG_SECLABEL", + "uri":"dws_04_0615.html", + "doc_type":"devg", + "p_code":"322", + "code":"374" + }, + { + "desc":"PG_SHDEPEND records the dependency relationships between database objects and shared objects, such as roles. This information allows GaussDB(DWS) to ensure that those obj", + "product_code":"dws", + "title":"PG_SHDEPEND", + "uri":"dws_04_0616.html", + "doc_type":"devg", + "p_code":"322", + "code":"375" + }, + { + "desc":"PG_SHDESCRIPTION records optional comments for shared database objects. Descriptions can be manipulated with the COMMENT command and viewed with psql's \\d commands. See al", + "product_code":"dws", + "title":"PG_SHDESCRIPTION", + "uri":"dws_04_0617.html", + "doc_type":"devg", + "p_code":"322", + "code":"376" + }, + { + "desc":"PG_SHSECLABEL records security labels on shared database objects. Security labels can be manipulated with the SECURITY LABEL command. For an easier way to view security la", + "product_code":"dws", + "title":"PG_SHSECLABEL", + "uri":"dws_04_0618.html", + "doc_type":"devg", + "p_code":"322", + "code":"377" + }, + { + "desc":"PG_STATISTIC records statistics about tables and index columns in a database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"PG_STATISTIC", + "uri":"dws_04_0619.html", + "doc_type":"devg", + "p_code":"322", + "code":"378" + }, + { + "desc":"PG_STATISTIC_EXT records the extended statistics of tables in a database, such as statistics of multiple columns. Statistics of expressions will be supported later. 
You c", + "product_code":"dws", + "title":"PG_STATISTIC_EXT", + "uri":"dws_04_0620.html", + "doc_type":"devg", + "p_code":"322", + "code":"379" + }, + { + "desc":"PG_SYNONYM records the mapping between synonym object names and other database object names.", + "product_code":"dws", + "title":"PG_SYNONYM", + "uri":"dws_04_0621.html", + "doc_type":"devg", + "p_code":"322", + "code":"380" + }, + { + "desc":"PG_TABLESPACE records tablespace information.", + "product_code":"dws", + "title":"PG_TABLESPACE", + "uri":"dws_04_0622.html", + "doc_type":"devg", + "p_code":"322", + "code":"381" + }, + { + "desc":"PG_TRIGGER records the trigger information.", + "product_code":"dws", + "title":"PG_TRIGGER", + "uri":"dws_04_0623.html", + "doc_type":"devg", + "p_code":"322", + "code":"382" + }, + { + "desc":"PG_TS_CONFIG records entries representing text search configurations. A configuration specifies a particular text search parser and a list of dictionaries to use for each", + "product_code":"dws", + "title":"PG_TS_CONFIG", + "uri":"dws_04_0624.html", + "doc_type":"devg", + "p_code":"322", + "code":"383" + }, + { + "desc":"PG_TS_CONFIG_MAP records entries showing which text search dictionaries should be consulted, and in what order, for each output token type of each text search configurati", + "product_code":"dws", + "title":"PG_TS_CONFIG_MAP", + "uri":"dws_04_0625.html", + "doc_type":"devg", + "p_code":"322", + "code":"384" + }, + { + "desc":"PG_TS_DICT records entries that define text search dictionaries. A dictionary depends on a text search template, which specifies all the implementation functions needed. ", + "product_code":"dws", + "title":"PG_TS_DICT", + "uri":"dws_04_0626.html", + "doc_type":"devg", + "p_code":"322", + "code":"385" + }, + { + "desc":"PG_TS_PARSER records entries defining text search parsers. A parser splits input text into lexemes and assigns a token type to each lexeme. Since a parser must be impleme", + "product_code":"dws", + "title":"PG_TS_PARSER", + "uri":"dws_04_0627.html", + "doc_type":"devg", + "p_code":"322", + "code":"386" + }, + { + "desc":"PG_TS_TEMPLATE records entries defining text search templates. A template provides a framework for text search dictionaries. Since a template must be implemented by C fun", + "product_code":"dws", + "title":"PG_TS_TEMPLATE", + "uri":"dws_04_0628.html", + "doc_type":"devg", + "p_code":"322", + "code":"387" + }, + { + "desc":"PG_TYPE records the information about data types.", + "product_code":"dws", + "title":"PG_TYPE", + "uri":"dws_04_0629.html", + "doc_type":"devg", + "p_code":"322", + "code":"388" + }, + { + "desc":"PG_USER_MAPPING records the mappings from local users to remote.It is accessible only to users with system administrator rights. You can use view PG_USER_MAPPINGS to quer", + "product_code":"dws", + "title":"PG_USER_MAPPING", + "uri":"dws_04_0630.html", + "doc_type":"devg", + "p_code":"322", + "code":"389" + }, + { + "desc":"PG_USER_STATUS records the states of users that access to the database. 
It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"PG_USER_STATUS", + "uri":"dws_04_0631.html", + "doc_type":"devg", + "p_code":"322", + "code":"390" + }, + { + "desc":"PG_WORKLOAD_ACTION records information about query_band.", + "product_code":"dws", + "title":"PG_WORKLOAD_ACTION", + "uri":"dws_04_0632.html", + "doc_type":"devg", + "p_code":"322", + "code":"391" + }, + { + "desc":"PGXC_CLASS records the replicated or distributed information for each table.", + "product_code":"dws", + "title":"PGXC_CLASS", + "uri":"dws_04_0633.html", + "doc_type":"devg", + "p_code":"322", + "code":"392" + }, + { + "desc":"PGXC_GROUP records information about node groups.", + "product_code":"dws", + "title":"PGXC_GROUP", + "uri":"dws_04_0634.html", + "doc_type":"devg", + "p_code":"322", + "code":"393" + }, + { + "desc":"PGXC_NODE records information about cluster nodes. Query the CN and DN information of the cluster:", + "product_code":"dws", + "title":"PGXC_NODE", + "uri":"dws_04_0635.html", + "doc_type":"devg", + "p_code":"322", + "code":"394" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"System Views", + "uri":"dws_04_0639.html", + "doc_type":"devg", + "p_code":"320", + "code":"395" + }, + { + "desc":"ALL_ALL_TABLES displays the tables or views accessible to the current user.", + "product_code":"dws", + "title":"ALL_ALL_TABLES", + "uri":"dws_04_0640.html", + "doc_type":"devg", + "p_code":"395", + "code":"396" + }, + { + "desc":"ALL_CONSTRAINTS displays information about constraints accessible to the current user.", + "product_code":"dws", + "title":"ALL_CONSTRAINTS", + "uri":"dws_04_0641.html", + "doc_type":"devg", + "p_code":"395", + "code":"397" + }, + { + "desc":"ALL_CONS_COLUMNS displays information about constraint columns accessible to the current user.", + "product_code":"dws", + "title":"ALL_CONS_COLUMNS", + "uri":"dws_04_0642.html", + "doc_type":"devg", + "p_code":"395", + "code":"398" + }, + { + "desc":"ALL_COL_COMMENTS displays the comment information about table columns accessible to the current user.", + "product_code":"dws", + "title":"ALL_COL_COMMENTS", + "uri":"dws_04_0643.html", + "doc_type":"devg", + "p_code":"395", + "code":"399" + }, + { + "desc":"ALL_DEPENDENCIES displays dependencies between functions and advanced packages accessible to the current user. Currently in GaussDB(DWS), this table is empty without any r", + "product_code":"dws", + "title":"ALL_DEPENDENCIES", + "uri":"dws_04_0644.html", + "doc_type":"devg", + "p_code":"395", + "code":"400" + }, + { + "desc":"ALL_IND_COLUMNS displays all index columns accessible to the current user.", + "product_code":"dws", + "title":"ALL_IND_COLUMNS", + "uri":"dws_04_0645.html", + "doc_type":"devg", + "p_code":"395", + "code":"401" + }, + { + "desc":"ALL_IND_EXPRESSIONS displays information about the expression indexes accessible to the current user.", + "product_code":"dws", + "title":"ALL_IND_EXPRESSIONS", + "uri":"dws_04_0646.html", + "doc_type":"devg", + "p_code":"395", + "code":"402" + }, + { + "desc":"ALL_INDEXES displays information about indexes accessible to the current user.", + "product_code":"dws", + "title":"ALL_INDEXES", + "uri":"dws_04_0647.html", + "doc_type":"devg", + "p_code":"395", + "code":"403" + }, + { + 
"desc":"ALL_OBJECTS displays all database objects accessible to the current user.For details about the value ranges of last_ddl_time and last_ddl_time, see PG_OBJECT.", + "product_code":"dws", + "title":"ALL_OBJECTS", + "uri":"dws_04_0648.html", + "doc_type":"devg", + "p_code":"395", + "code":"404" + }, + { + "desc":"ALL_PROCEDURES displays information about all stored procedures or functions accessible to the current user.", + "product_code":"dws", + "title":"ALL_PROCEDURES", + "uri":"dws_04_0649.html", + "doc_type":"devg", + "p_code":"395", + "code":"405" + }, + { + "desc":"ALL_SEQUENCES displays all sequences accessible to the current user.", + "product_code":"dws", + "title":"ALL_SEQUENCES", + "uri":"dws_04_0650.html", + "doc_type":"devg", + "p_code":"395", + "code":"406" + }, + { + "desc":"ALL_SOURCE displays information about stored procedures or functions accessible to the current user, and provides the columns defined by the stored procedures and functio", + "product_code":"dws", + "title":"ALL_SOURCE", + "uri":"dws_04_0651.html", + "doc_type":"devg", + "p_code":"395", + "code":"407" + }, + { + "desc":"ALL_SYNONYMS displays all synonyms accessible to the current user.", + "product_code":"dws", + "title":"ALL_SYNONYMS", + "uri":"dws_04_0652.html", + "doc_type":"devg", + "p_code":"395", + "code":"408" + }, + { + "desc":"ALL_TAB_COLUMNS displays description information about columns of the tables accessible to the current user.", + "product_code":"dws", + "title":"ALL_TAB_COLUMNS", + "uri":"dws_04_0653.html", + "doc_type":"devg", + "p_code":"395", + "code":"409" + }, + { + "desc":"ALL_TAB_COMMENTS displays comments about all tables and views accessible to the current user.", + "product_code":"dws", + "title":"ALL_TAB_COMMENTS", + "uri":"dws_04_0654.html", + "doc_type":"devg", + "p_code":"395", + "code":"410" + }, + { + "desc":"ALL_TABLES displays all the tables accessible to the current user.", + "product_code":"dws", + "title":"ALL_TABLES", + "uri":"dws_04_0655.html", + "doc_type":"devg", + "p_code":"395", + "code":"411" + }, + { + "desc":"ALL_USERS displays all users of the database visible to the current user, however, it does not describe the users.", + "product_code":"dws", + "title":"ALL_USERS", + "uri":"dws_04_0656.html", + "doc_type":"devg", + "p_code":"395", + "code":"412" + }, + { + "desc":"ALL_VIEWS displays the description about all views accessible to the current user.", + "product_code":"dws", + "title":"ALL_VIEWS", + "uri":"dws_04_0657.html", + "doc_type":"devg", + "p_code":"395", + "code":"413" + }, + { + "desc":"DBA_DATA_FILES displays the description of database files. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_DATA_FILES", + "uri":"dws_04_0658.html", + "doc_type":"devg", + "p_code":"395", + "code":"414" + }, + { + "desc":"DBA_USERS displays all user names in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_USERS", + "uri":"dws_04_0659.html", + "doc_type":"devg", + "p_code":"395", + "code":"415" + }, + { + "desc":"DBA_COL_COMMENTS displays information about table colum comments in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_COL_COMMENTS", + "uri":"dws_04_0660.html", + "doc_type":"devg", + "p_code":"395", + "code":"416" + }, + { + "desc":"DBA_CONSTRAINTS displays information about table constraints in database. 
It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_CONSTRAINTS", + "uri":"dws_04_0661.html", + "doc_type":"devg", + "p_code":"395", + "code":"417" + }, + { + "desc":"DBA_CONS_COLUMNS displays information about constraint columns in database tables. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_CONS_COLUMNS", + "uri":"dws_04_0662.html", + "doc_type":"devg", + "p_code":"395", + "code":"418" + }, + { + "desc":"DBA_IND_COLUMNS displays column information about all indexes in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_IND_COLUMNS", + "uri":"dws_04_0663.html", + "doc_type":"devg", + "p_code":"395", + "code":"419" + }, + { + "desc":"DBA_IND_EXPRESSIONS displays the information about expression indexes in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_IND_EXPRESSIONS", + "uri":"dws_04_0664.html", + "doc_type":"devg", + "p_code":"395", + "code":"420" + }, + { + "desc":"DBA_IND_PARTITIONS displays information about all index partitions in the database. Each index partition of a partitioned table in the database, if present, has a row of ", + "product_code":"dws", + "title":"DBA_IND_PARTITIONS", + "uri":"dws_04_0665.html", + "doc_type":"devg", + "p_code":"395", + "code":"421" + }, + { + "desc":"DBA_INDEXES displays all indexes in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_INDEXES", + "uri":"dws_04_0666.html", + "doc_type":"devg", + "p_code":"395", + "code":"422" + }, + { + "desc":"DBA_OBJECTS displays all database objects in the database. It is accessible only to users with system administrator rights. For details about the value ranges of last_ddl_", + "product_code":"dws", + "title":"DBA_OBJECTS", + "uri":"dws_04_0667.html", + "doc_type":"devg", + "p_code":"395", + "code":"423" + }, + { + "desc":"DBA_PART_INDEXES displays information about all partitioned table indexes in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_PART_INDEXES", + "uri":"dws_04_0668.html", + "doc_type":"devg", + "p_code":"395", + "code":"424" + }, + { + "desc":"DBA_PART_TABLES displays information about all partitioned tables in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_PART_TABLES", + "uri":"dws_04_0669.html", + "doc_type":"devg", + "p_code":"395", + "code":"425" + }, + { + "desc":"DBA_PROCEDURES displays information about all stored procedures and functions in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_PROCEDURES", + "uri":"dws_04_0670.html", + "doc_type":"devg", + "p_code":"395", + "code":"426" + }, + { + "desc":"DBA_SEQUENCES displays information about all sequences in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_SEQUENCES", + "uri":"dws_04_0671.html", + "doc_type":"devg", + "p_code":"395", + "code":"427" + }, + { + "desc":"DBA_SOURCE displays all stored procedures or functions in the database, and it provides the columns defined by the stored procedures or functions. 
It is accessible only t", + "product_code":"dws", + "title":"DBA_SOURCE", + "uri":"dws_04_0672.html", + "doc_type":"devg", + "p_code":"395", + "code":"428" + }, + { + "desc":"DBA_SYNONYMS displays all synonyms in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_SYNONYMS", + "uri":"dws_04_0673.html", + "doc_type":"devg", + "p_code":"395", + "code":"429" + }, + { + "desc":"DBA_TAB_COLUMNS displays the columns of tables. Each column of a table in the database has a row in DBA_TAB_COLUMNS. It is accessible only to users with system administra", + "product_code":"dws", + "title":"DBA_TAB_COLUMNS", + "uri":"dws_04_0674.html", + "doc_type":"devg", + "p_code":"395", + "code":"430" + }, + { + "desc":"DBA_TAB_COMMENTS displays comments about all tables and views in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_TAB_COMMENTS", + "uri":"dws_04_0675.html", + "doc_type":"devg", + "p_code":"395", + "code":"431" + }, + { + "desc":"DBA_TAB_PARTITIONS displays information about all partitions in the database.", + "product_code":"dws", + "title":"DBA_TAB_PARTITIONS", + "uri":"dws_04_0676.html", + "doc_type":"devg", + "p_code":"395", + "code":"432" + }, + { + "desc":"DBA_TABLES displays all tables in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_TABLES", + "uri":"dws_04_0677.html", + "doc_type":"devg", + "p_code":"395", + "code":"433" + }, + { + "desc":"DBA_TABLESPACES displays information about available tablespaces. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_TABLESPACES", + "uri":"dws_04_0678.html", + "doc_type":"devg", + "p_code":"395", + "code":"434" + }, + { + "desc":"DBA_TRIGGERS displays information about triggers in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_TRIGGERS", + "uri":"dws_04_0679.html", + "doc_type":"devg", + "p_code":"395", + "code":"435" + }, + { + "desc":"DBA_VIEWS displays views in the database. It is accessible only to users with system administrator rights.", + "product_code":"dws", + "title":"DBA_VIEWS", + "uri":"dws_04_0680.html", + "doc_type":"devg", + "p_code":"395", + "code":"436" + }, + { + "desc":"DUAL is automatically created by the database based on the data dictionary. It has only one text column in only one row for storing expression calculation results. It is ", + "product_code":"dws", + "title":"DUAL", + "uri":"dws_04_0681.html", + "doc_type":"devg", + "p_code":"395", + "code":"437" + }, + { + "desc":"GLOBAL_REDO_STAT displays the total statistics of XLOG redo operations on all nodes in a cluster. Except the avgiotim column (indicating the average redo write time of al", + "product_code":"dws", + "title":"GLOBAL_REDO_STAT", + "uri":"dws_04_0682.html", + "doc_type":"devg", + "p_code":"395", + "code":"438" + }, + { + "desc":"GLOBAL_REL_IOSTAT displays the total disk I/O statistics of all nodes in a cluster. 
The name of each column in this view is the same as that in the GS_REL_IOSTAT view, bu", + "product_code":"dws", + "title":"GLOBAL_REL_IOSTAT", + "uri":"dws_04_0683.html", + "doc_type":"devg", + "p_code":"395", + "code":"439" + }, + { + "desc":"GLOBAL_STAT_DATABASE displays the status and statistics of databases on all nodes in a cluster. When you query the GLOBAL_STAT_DATABASE view on a CN, the respective values", + "product_code":"dws", + "title":"GLOBAL_STAT_DATABASE", + "uri":"dws_04_0684.html", + "doc_type":"devg", + "p_code":"395", + "code":"440" + }, + { + "desc":"GLOBAL_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in all workload Cgroups in a cluster, including the number of SELECT, UPDATE, INSER", + "product_code":"dws", + "title":"GLOBAL_WORKLOAD_SQL_COUNT", + "uri":"dws_04_0685.html", + "doc_type":"devg", + "p_code":"395", + "code":"441" + }, + { + "desc":"GLOBAL_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in all workload Cgroups in a cluster, including the maximum, minimum, average, ", + "product_code":"dws", + "title":"GLOBAL_WORKLOAD_SQL_ELAPSE_TIME", + "uri":"dws_04_0686.html", + "doc_type":"devg", + "p_code":"395", + "code":"442" + }, + { + "desc":"GLOBAL_WORKLOAD_TRANSACTION provides the total transaction information about workload Cgroups on all CNs in the cluster. This view is accessible only to users with system", + "product_code":"dws", + "title":"GLOBAL_WORKLOAD_TRANSACTION", + "uri":"dws_04_0687.html", + "doc_type":"devg", + "p_code":"395", + "code":"443" + }, + { + "desc":"GS_ALL_CONTROL_GROUP_INFO displays all Cgroup information in a database.", + "product_code":"dws", + "title":"GS_ALL_CONTROL_GROUP_INFO", + "uri":"dws_04_0688.html", + "doc_type":"devg", + "p_code":"395", + "code":"444" + }, + { + "desc":"GS_CLUSTER_RESOURCE_INFO displays a DN resource summary.", + "product_code":"dws", + "title":"GS_CLUSTER_RESOURCE_INFO", + "uri":"dws_04_0689.html", + "doc_type":"devg", + "p_code":"395", + "code":"445" + }, + { + "desc":"The database parses each received SQL text string and generates an internal parsing tree. The database traverses the parsing tree and ignores constant values in the parsi", + "product_code":"dws", + "title":"GS_INSTR_UNIQUE_SQL", + "uri":"dws_04_0690.html", + "doc_type":"devg", + "p_code":"395", + "code":"446" + }, + { + "desc":"GS_REL_IOSTAT displays disk I/O statistics on the current node. In the current version, only one page is read or written in each read or write operation. Therefore, the n", + "product_code":"dws", + "title":"GS_REL_IOSTAT", + "uri":"dws_04_0691.html", + "doc_type":"devg", + "p_code":"395", + "code":"447" + }, + { + "desc":"The GS_NODE_STAT_RESET_TIME view provides the reset time of statistics on the current node and returns the timestamp with the time zone. 
For details, see the get_node_sta", + "product_code":"dws", + "title":"GS_NODE_STAT_RESET_TIME", + "uri":"dws_04_0692.html", + "doc_type":"devg", + "p_code":"395", + "code":"448" + }, + { + "desc":"GS_SESSION_CPU_STATISTICS displays load management information about CPU usage of ongoing complex jobs executed by the current user.", + "product_code":"dws", + "title":"GS_SESSION_CPU_STATISTICS", + "uri":"dws_04_0693.html", + "doc_type":"devg", + "p_code":"395", + "code":"449" + }, + { + "desc":"GS_SESSION_MEMORY_STATISTICS displays load management information about memory usage of ongoing complex jobs executed by the current user.", + "product_code":"dws", + "title":"GS_SESSION_MEMORY_STATISTICS", + "uri":"dws_04_0694.html", + "doc_type":"devg", + "p_code":"395", + "code":"450" + }, + { + "desc":"GS_SQL_COUNT displays statistics about the five types of statements (SELECT, INSERT, UPDATE, DELETE, and MERGE INTO) executed on the current node of the database, includi", + "product_code":"dws", + "title":"GS_SQL_COUNT", + "uri":"dws_04_0695.html", + "doc_type":"devg", + "p_code":"395", + "code":"451" + }, + { + "desc":"GS_WAIT_EVENTS displays statistics about waiting status and events on the current node. The values of statistical columns in this view are accumulated only when the enable", + "product_code":"dws", + "title":"GS_WAIT_EVENTS", + "uri":"dws_04_0696.html", + "doc_type":"devg", + "p_code":"395", + "code":"452" + }, + { + "desc":"This view displays the execution information about operators in the query statements that have been executed on the current CN. The information comes from the system cata", + "product_code":"dws", + "title":"GS_WLM_OPERAROR_INFO", + "uri":"dws_04_0701.html", + "doc_type":"devg", + "p_code":"395", + "code":"453" + }, + { + "desc":"This view displays the records of operators in jobs that have been executed by the current user on the current CN. This view is used by Database Manager to query data from", + "product_code":"dws", + "title":"GS_WLM_OPERATOR_HISTORY", + "uri":"dws_04_0702.html", + "doc_type":"devg", + "p_code":"395", + "code":"454" + }, + { + "desc":"GS_WLM_OPERATOR_STATISTICS displays the operators of the jobs that are being executed by the current user.", + "product_code":"dws", + "title":"GS_WLM_OPERATOR_STATISTICS", + "uri":"dws_04_0703.html", + "doc_type":"devg", + "p_code":"395", + "code":"455" + }, + { + "desc":"This view displays the execution information about the query statements that have been executed on the current CN. The information comes from the system catalog dbms_om. ", + "product_code":"dws", + "title":"GS_WLM_SESSION_INFO", + "uri":"dws_04_0704.html", + "doc_type":"devg", + "p_code":"395", + "code":"456" + }, + { + "desc":"GS_WLM_SESSION_HISTORY displays load management information about a completed job executed by the current user on the current CN. 
This view is used by Database Manager to", + "product_code":"dws", + "title":"GS_WLM_SESSION_HISTORY", + "uri":"dws_04_0705.html", + "doc_type":"devg", + "p_code":"395", + "code":"457" + }, + { + "desc":"GS_WLM_SESSION_STATISTICS displays load management information about jobs being executed by the current user on the current CN.", + "product_code":"dws", + "title":"GS_WLM_SESSION_STATISTICS", + "uri":"dws_04_0706.html", + "doc_type":"devg", + "p_code":"395", + "code":"458" + }, + { + "desc":"GS_WLM_SQL_ALLOW displays the configured resource management SQL whitelist, including the default SQL whitelist and the SQL whitelist configured using the GUC parameter w", + "product_code":"dws", + "title":"GS_WLM_SQL_ALLOW", + "uri":"dws_04_0708.html", + "doc_type":"devg", + "p_code":"395", + "code":"459" + }, + { + "desc":"GS_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in workload Cgroups on the current node, including the number of SELECT, UPDATE, INSERT", + "product_code":"dws", + "title":"GS_WORKLOAD_SQL_COUNT", + "uri":"dws_04_0709.html", + "doc_type":"devg", + "p_code":"395", + "code":"460" + }, + { + "desc":"GS_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in workload Cgroups on the current node, including the maximum, minimum, average, a", + "product_code":"dws", + "title":"GS_WORKLOAD_SQL_ELAPSE_TIME", + "uri":"dws_04_0710.html", + "doc_type":"devg", + "p_code":"395", + "code":"461" + }, + { + "desc":"GS_WORKLOAD_TRANSACTION provides transaction information about workload Cgroups on a single CN. The database records the number of times that each workload Cgroup commits", + "product_code":"dws", + "title":"GS_WORKLOAD_TRANSACTION", + "uri":"dws_04_0711.html", + "doc_type":"devg", + "p_code":"395", + "code":"462" + }, + { + "desc":"GS_STAT_DB_CU displays CU hits in a database and in each node in a cluster. You can clear it using gs_stat_reset().", + "product_code":"dws", + "title":"GS_STAT_DB_CU", + "uri":"dws_04_0712.html", + "doc_type":"devg", + "p_code":"395", + "code":"463" + }, + { + "desc":"GS_STAT_SESSION_CU displays the CU hit rate of running sessions on each node in a cluster. This data about a session is cleared when you exit this session or restart the ", + "product_code":"dws", + "title":"GS_STAT_SESSION_CU", + "uri":"dws_04_0713.html", + "doc_type":"devg", + "p_code":"395", + "code":"464" + }, + { + "desc":"GS_TOTAL_NODEGROUP_MEMORY_DETAIL displays statistics about memory usage of the logical cluster that the current database belongs to in the unit of MB.", + "product_code":"dws", + "title":"GS_TOTAL_NODEGROUP_MEMORY_DETAIL", + "uri":"dws_04_0714.html", + "doc_type":"devg", + "p_code":"395", + "code":"465" + }, + { + "desc":"GS_USER_TRANSACTION provides transaction information about users on a single CN. The database records the number of times that each user commits and rolls back transactio", + "product_code":"dws", + "title":"GS_USER_TRANSACTION", + "uri":"dws_04_0715.html", + "doc_type":"devg", + "p_code":"395", + "code":"466" + }, + { + "desc":"GS_VIEW_DEPENDENCY allows you to query the direct dependencies of all views visible to the current user.", + "product_code":"dws", + "title":"GS_VIEW_DEPENDENCY", + "uri":"dws_04_0716.html", + "doc_type":"devg", + "p_code":"395", + "code":"467" + }, + { + "desc":"GS_VIEW_DEPENDENCY_PATH allows you to query the direct dependencies of all views visible to the current user. 
If the base table on which the view depends exists and the d", + "product_code":"dws", + "title":"GS_VIEW_DEPENDENCY_PATH", + "uri":"dws_04_0948.html", + "doc_type":"devg", + "p_code":"395", + "code":"468" + }, + { + "desc":"GS_VIEW_INVALID queries all unavailable views visible to the current user. If the base table, function, or synonym that the view depends on is abnormal, the validtype col", + "product_code":"dws", + "title":"GS_VIEW_INVALID", + "uri":"dws_04_0717.html", + "doc_type":"devg", + "p_code":"395", + "code":"469" + }, + { + "desc":"MPP_TABLES displays information about tables in PGXC_CLASS.", + "product_code":"dws", + "title":"MPP_TABLES", + "uri":"dws_04_0998.html", + "doc_type":"devg", + "p_code":"395", + "code":"470" + }, + { + "desc":"PG_AVAILABLE_EXTENSION_VERSIONS displays the extension versions of certain database features.", + "product_code":"dws", + "title":"PG_AVAILABLE_EXTENSION_VERSIONS", + "uri":"dws_04_0718.html", + "doc_type":"devg", + "p_code":"395", + "code":"471" + }, + { + "desc":"PG_AVAILABLE_EXTENSIONS displays the extended information about certain database features.", + "product_code":"dws", + "title":"PG_AVAILABLE_EXTENSIONS", + "uri":"dws_04_0719.html", + "doc_type":"devg", + "p_code":"395", + "code":"472" + }, + { + "desc":"On any normal node in a cluster, PG_BULKLOAD_STATISTICS displays the execution status of the import and export services. Each import or export service corresponds to a re", + "product_code":"dws", + "title":"PG_BULKLOAD_STATISTICS", + "uri":"dws_04_0720.html", + "doc_type":"devg", + "p_code":"395", + "code":"473" + }, + { + "desc":"PG_COMM_CLIENT_INFO stores the client connection information of a single node. (You can query this view on a DN to view the information about the connection between the C", + "product_code":"dws", + "title":"PG_COMM_CLIENT_INFO", + "uri":"dws_04_0721.html", + "doc_type":"devg", + "p_code":"395", + "code":"474" + }, + { + "desc":"PG_COMM_DELAY displays the communication library delay status for a single DN.", + "product_code":"dws", + "title":"PG_COMM_DELAY", + "uri":"dws_04_0722.html", + "doc_type":"devg", + "p_code":"395", + "code":"475" + }, + { + "desc":"PG_COMM_STATUS displays the communication library status for a single DN.", + "product_code":"dws", + "title":"PG_COMM_STATUS", + "uri":"dws_04_0723.html", + "doc_type":"devg", + "p_code":"395", + "code":"476" + }, + { + "desc":"PG_COMM_RECV_STREAM displays the receiving stream status of all the communication libraries for a single DN.", + "product_code":"dws", + "title":"PG_COMM_RECV_STREAM", + "uri":"dws_04_0724.html", + "doc_type":"devg", + "p_code":"395", + "code":"477" + }, + { + "desc":"PG_COMM_SEND_STREAM displays the sending stream status of all the communication libraries for a single DN.", + "product_code":"dws", + "title":"PG_COMM_SEND_STREAM", + "uri":"dws_04_0725.html", + "doc_type":"devg", + "p_code":"395", + "code":"478" + }, + { + "desc":"PG_CONTROL_GROUP_CONFIG displays the Cgroup configuration information in the system.", + "product_code":"dws", + "title":"PG_CONTROL_GROUP_CONFIG", + "uri":"dws_04_0726.html", + "doc_type":"devg", + "p_code":"395", + "code":"479" + }, + { + "desc":"PG_CURSORS displays the cursors that are currently available.", + "product_code":"dws", + "title":"PG_CURSORS", + "uri":"dws_04_0727.html", + "doc_type":"devg", + "p_code":"395", + "code":"480" + }, + { + "desc":"PG_EXT_STATS displays extension statistics stored in the PG_STATISTIC_EXT table. 
Extension statistics refer to statistics on multiple columns.", + "product_code":"dws", + "title":"PG_EXT_STATS", + "uri":"dws_04_0728.html", + "doc_type":"devg", + "p_code":"395", + "code":"481" + }, + { + "desc":"PG_GET_INVALID_BACKENDS displays the information about backend threads on the CN that are connected to the current standby DN.", + "product_code":"dws", + "title":"PG_GET_INVALID_BACKENDS", + "uri":"dws_04_0729.html", + "doc_type":"devg", + "p_code":"395", + "code":"482" + }, + { + "desc":"PG_GET_SENDERS_CATCHUP_TIME displays the catchup information of the currently active primary/standby instance sending thread on a single DN.", + "product_code":"dws", + "title":"PG_GET_SENDERS_CATCHUP_TIME", + "uri":"dws_04_0730.html", + "doc_type":"devg", + "p_code":"395", + "code":"483" + }, + { + "desc":"PG_GROUP displays the database role authentication and the relationship between roles.", + "product_code":"dws", + "title":"PG_GROUP", + "uri":"dws_04_0731.html", + "doc_type":"devg", + "p_code":"395", + "code":"484" + }, + { + "desc":"PG_INDEXES provides access to useful information about each index in the database.", + "product_code":"dws", + "title":"PG_INDEXES", + "uri":"dws_04_0732.html", + "doc_type":"devg", + "p_code":"395", + "code":"485" + }, + { + "desc":"The PG_JOB view replaces the PG_JOB system catalog in earlier versions and provides forward compatibility with earlier versions. The original PG_JOB system catalog is cha", + "product_code":"dws", + "title":"PG_JOB", + "uri":"dws_04_0733.html", + "doc_type":"devg", + "p_code":"395", + "code":"486" + }, + { + "desc":"The PG_JOB_PROC view replaces the PG_JOB_PROC system catalog in earlier versions and provides forward compatibility with earlier versions. The original PG_JOB_PROC and PG", + "product_code":"dws", + "title":"PG_JOB_PROC", + "uri":"dws_04_0734.html", + "doc_type":"devg", + "p_code":"395", + "code":"487" + }, + { + "desc":"PG_JOB_SINGLE displays job information about the current node.", + "product_code":"dws", + "title":"PG_JOB_SINGLE", + "uri":"dws_04_0735.html", + "doc_type":"devg", + "p_code":"395", + "code":"488" + }, + { + "desc":"PG_LIFECYCLE_DATA_DISTRIBUTE displays the distribution of cold and hot data in a multi-temperature table of OBS.", + "product_code":"dws", + "title":"PG_LIFECYCLE_DATA_DISTRIBUTE", + "uri":"dws_04_0736.html", + "doc_type":"devg", + "p_code":"395", + "code":"489" + }, + { + "desc":"PG_LOCKS displays information about the locks held by open transactions.", + "product_code":"dws", + "title":"PG_LOCKS", + "uri":"dws_04_0737.html", + "doc_type":"devg", + "p_code":"395", + "code":"490" + }, + { + "desc":"PG_NODE_ENV displays the environment variable information about the current node.", + "product_code":"dws", + "title":"PG_NODE_ENV", + "uri":"dws_04_0738.html", + "doc_type":"devg", + "p_code":"395", + "code":"491" + }, + { + "desc":"PG_OS_THREADS displays the status information about all the threads under the current node.", + "product_code":"dws", + "title":"PG_OS_THREADS", + "uri":"dws_04_0739.html", + "doc_type":"devg", + "p_code":"395", + "code":"492" + }, + { + "desc":"PG_POOLER_STATUS displays the cache connection status in the pooler. 
PG_POOLER_STATUS can be queried only on the CN, and displays the connection cache information about the po", + "product_code":"dws", + "title":"PG_POOLER_STATUS", + "uri":"dws_04_0740.html", + "doc_type":"devg", + "p_code":"395", + "code":"493" + }, + { + "desc":"PG_PREPARED_STATEMENTS displays all prepared statements that are available in the current session.", + "product_code":"dws", + "title":"PG_PREPARED_STATEMENTS", + "uri":"dws_04_0741.html", + "doc_type":"devg", + "p_code":"395", + "code":"494" + }, + { + "desc":"PG_PREPARED_XACTS displays information about transactions that are currently prepared for two-phase commit.", + "product_code":"dws", + "title":"PG_PREPARED_XACTS", + "uri":"dws_04_0742.html", + "doc_type":"devg", + "p_code":"395", + "code":"495" + }, + { + "desc":"PG_QUERYBAND_ACTION displays information about the object associated with query_band and the query_band query order.", + "product_code":"dws", + "title":"PG_QUERYBAND_ACTION", + "uri":"dws_04_0743.html", + "doc_type":"devg", + "p_code":"395", + "code":"496" + }, + { + "desc":"PG_REPLICATION_SLOTS displays the replication node information.", + "product_code":"dws", + "title":"PG_REPLICATION_SLOTS", + "uri":"dws_04_0744.html", + "doc_type":"devg", + "p_code":"395", + "code":"497" + }, + { + "desc":"PG_ROLES displays information about database roles.", + "product_code":"dws", + "title":"PG_ROLES", + "uri":"dws_04_0745.html", + "doc_type":"devg", + "p_code":"395", + "code":"498" + }, + { + "desc":"PG_RULES displays information about rewrite rules.", + "product_code":"dws", + "title":"PG_RULES", + "uri":"dws_04_0746.html", + "doc_type":"devg", + "p_code":"395", + "code":"499" + }, + { + "desc":"PG_RUNNING_XACTS displays the running transaction information on the current node.", + "product_code":"dws", + "title":"PG_RUNNING_XACTS", + "uri":"dws_04_0747.html", + "doc_type":"devg", + "p_code":"395", + "code":"500" + }, + { + "desc":"PG_SECLABELS displays information about security labels.", + "product_code":"dws", + "title":"PG_SECLABELS", + "uri":"dws_04_0748.html", + "doc_type":"devg", + "p_code":"395", + "code":"501" + }, + { + "desc":"PG_SESSION_WLMSTAT displays the load management information about the task currently executed by the user.", + "product_code":"dws", + "title":"PG_SESSION_WLMSTAT", + "uri":"dws_04_0749.html", + "doc_type":"devg", + "p_code":"395", + "code":"502" + }, + { + "desc":"PG_SESSION_IOSTAT displays the I/O load management information about the task currently executed by the user. IOPS is counted by ones for column storage and by thousands f", + "product_code":"dws", + "title":"PG_SESSION_IOSTAT", + "uri":"dws_04_0750.html", + "doc_type":"devg", + "p_code":"395", + "code":"503" + }, + { + "desc":"PG_SETTINGS displays information about parameters of the running database.", + "product_code":"dws", + "title":"PG_SETTINGS", + "uri":"dws_04_0751.html", + "doc_type":"devg", + "p_code":"395", + "code":"504" + }, + { + "desc":"PG_SHADOW displays properties of all roles that are marked as rolcanlogin in PG_AUTHID. The name stems from the fact that this table should not be readable by the public s", + "product_code":"dws", + "title":"PG_SHADOW", + "uri":"dws_04_0752.html", + "doc_type":"devg", + "p_code":"395", + "code":"505" + }, + { + "desc":"PG_SHARED_MEMORY_DETAIL displays usage information about all the shared memory contexts.", + "product_code":"dws", + "title":"PG_SHARED_MEMORY_DETAIL", + "uri":"dws_04_0753.html", + "doc_type":"devg", + "p_code":"395", + "code":"506" + }, + { + 
"desc":"PG_STATS displays the single-column statistics stored in the pg_statistic table.", + "product_code":"dws", + "title":"PG_STATS", + "uri":"dws_04_0754.html", + "doc_type":"devg", + "p_code":"395", + "code":"507" + }, + { + "desc":"PG_STAT_ACTIVITY displays information about the current user's queries.", + "product_code":"dws", + "title":"PG_STAT_ACTIVITY", + "uri":"dws_04_0755.html", + "doc_type":"devg", + "p_code":"395", + "code":"508" + }, + { + "desc":"PG_STAT_ALL_INDEXES displays access informaton about all indexes in the database, with information about each index displayed in a row.Indexes can be used via either simp", + "product_code":"dws", + "title":"PG_STAT_ALL_INDEXES", + "uri":"dws_04_0757.html", + "doc_type":"devg", + "p_code":"395", + "code":"509" + }, + { + "desc":"PG_STAT_ALL_TABLES displays access information about all rows in all tables (including TOAST tables) in the database.", + "product_code":"dws", + "title":"PG_STAT_ALL_TABLES", + "uri":"dws_04_0758.html", + "doc_type":"devg", + "p_code":"395", + "code":"510" + }, + { + "desc":"PG_STAT_BAD_BLOCK displays statistics about page or CU verification failures after a node is started.", + "product_code":"dws", + "title":"PG_STAT_BAD_BLOCK", + "uri":"dws_04_0759.html", + "doc_type":"devg", + "p_code":"395", + "code":"511" + }, + { + "desc":"PG_STAT_BGWRITER displays statistics about the background writer process's activity.", + "product_code":"dws", + "title":"PG_STAT_BGWRITER", + "uri":"dws_04_0760.html", + "doc_type":"devg", + "p_code":"395", + "code":"512" + }, + { + "desc":"PG_STAT_DATABASE displays the status and statistics of each database on the current node.", + "product_code":"dws", + "title":"PG_STAT_DATABASE", + "uri":"dws_04_0761.html", + "doc_type":"devg", + "p_code":"395", + "code":"513" + }, + { + "desc":"PG_STAT_DATABASE_CONFLICTS displays statistics about database conflicts.", + "product_code":"dws", + "title":"PG_STAT_DATABASE_CONFLICTS", + "uri":"dws_04_0762.html", + "doc_type":"devg", + "p_code":"395", + "code":"514" + }, + { + "desc":"PG_STAT_GET_MEM_MBYTES_RESERVED displays the current activity information of a thread stored in memory. You need to specify the thread ID (pid in PG_STAT_ACTIVITY) for qu", + "product_code":"dws", + "title":"PG_STAT_GET_MEM_MBYTES_RESERVED", + "uri":"dws_04_0763.html", + "doc_type":"devg", + "p_code":"395", + "code":"515" + }, + { + "desc":"PG_STAT_USER_FUNCTIONS displays user-defined function status information in the namespace. 
(The function language is a non-internal language.)", + "product_code":"dws", + "title":"PG_STAT_USER_FUNCTIONS", + "uri":"dws_04_0764.html", + "doc_type":"devg", + "p_code":"395", + "code":"516" + }, + { + "desc":"PG_STAT_USER_INDEXES displays information about the index status of user-defined ordinary tables and TOAST tables.", + "product_code":"dws", + "title":"PG_STAT_USER_INDEXES", + "uri":"dws_04_0765.html", + "doc_type":"devg", + "p_code":"395", + "code":"517" + }, + { + "desc":"PG_STAT_USER_TABLES displays status information about user-defined ordinary tables and TOAST tables in all namespaces.", + "product_code":"dws", + "title":"PG_STAT_USER_TABLES", + "uri":"dws_04_0766.html", + "doc_type":"devg", + "p_code":"395", + "code":"518" + }, + { + "desc":"PG_STAT_REPLICATION displays information about log synchronization status, such as the locations of the sender sending logs and the receiver receiving logs.", + "product_code":"dws", + "title":"PG_STAT_REPLICATION", + "uri":"dws_04_0767.html", + "doc_type":"devg", + "p_code":"395", + "code":"519" + }, + { + "desc":"PG_STAT_SYS_INDEXES displays the index status information about all the system catalogs in the pg_catalog and information_schema schemas.", + "product_code":"dws", + "title":"PG_STAT_SYS_INDEXES", + "uri":"dws_04_0768.html", + "doc_type":"devg", + "p_code":"395", + "code":"520" + }, + { + "desc":"PG_STAT_SYS_TABLES displays the statistics about the system catalogs of all the namespaces in pg_catalog and information_schema schemas.", + "product_code":"dws", + "title":"PG_STAT_SYS_TABLES", + "uri":"dws_04_0769.html", + "doc_type":"devg", + "p_code":"395", + "code":"521" + }, + { + "desc":"PG_STAT_XACT_ALL_TABLES displays the transaction status information about all ordinary tables and TOAST tables in the namespaces.", + "product_code":"dws", + "title":"PG_STAT_XACT_ALL_TABLES", + "uri":"dws_04_0770.html", + "doc_type":"devg", + "p_code":"395", + "code":"522" + }, + { + "desc":"PG_STAT_XACT_SYS_TABLES displays the transaction status information of the system catalog in the namespace.", + "product_code":"dws", + "title":"PG_STAT_XACT_SYS_TABLES", + "uri":"dws_04_0771.html", + "doc_type":"devg", + "p_code":"395", + "code":"523" + }, + { + "desc":"PG_STAT_XACT_USER_FUNCTIONS displays statistics about function executions, with statistics about each execution displayed in a row.", + "product_code":"dws", + "title":"PG_STAT_XACT_USER_FUNCTIONS", + "uri":"dws_04_0772.html", + "doc_type":"devg", + "p_code":"395", + "code":"524" + }, + { + "desc":"PG_STAT_XACT_USER_TABLES displays the transaction status information of the user table in the namespace.", + "product_code":"dws", + "title":"PG_STAT_XACT_USER_TABLES", + "uri":"dws_04_0773.html", + "doc_type":"devg", + "p_code":"395", + "code":"525" + }, + { + "desc":"PG_STATIO_ALL_INDEXES contains one row for each index in the current database, showing I/O statistics about accesses to that specific index.", + "product_code":"dws", + "title":"PG_STATIO_ALL_INDEXES", + "uri":"dws_04_0774.html", + "doc_type":"devg", + "p_code":"395", + "code":"526" + }, + { + "desc":"PG_STATIO_ALL_SEQUENCES contains one row for each sequence in the current database, showing I/O statistics about accesses to that specific sequence.", + "product_code":"dws", + "title":"PG_STATIO_ALL_SEQUENCES", + "uri":"dws_04_0775.html", + "doc_type":"devg", + "p_code":"395", + "code":"527" + }, + { + "desc":"PG_STATIO_ALL_TABLES contains one row for each table in the current database (including TOAST tables), showing I/O 
statistics about accesses to that specific table.", + "product_code":"dws", + "title":"PG_STATIO_ALL_TABLES", + "uri":"dws_04_0776.html", + "doc_type":"devg", + "p_code":"395", + "code":"528" + }, + { + "desc":"PG_STATIO_SYS_INDEXES displays the I/O status information about all system catalog indexes in the namespace.", + "product_code":"dws", + "title":"PG_STATIO_SYS_INDEXES", + "uri":"dws_04_0777.html", + "doc_type":"devg", + "p_code":"395", + "code":"529" + }, + { + "desc":"PG_STATIO_SYS_SEQUENCES displays the I/O status information about all the system sequences in the namespace.", + "product_code":"dws", + "title":"PG_STATIO_SYS_SEQUENCES", + "uri":"dws_04_0778.html", + "doc_type":"devg", + "p_code":"395", + "code":"530" + }, + { + "desc":"PG_STATIO_SYS_TABLES displays the I/O status information about all the system catalogs in the namespace.", + "product_code":"dws", + "title":"PG_STATIO_SYS_TABLES", + "uri":"dws_04_0779.html", + "doc_type":"devg", + "p_code":"395", + "code":"531" + }, + { + "desc":"PG_STATIO_USER_INDEXES displays the I/O status information about all the user relationship table indexes in the namespace.", + "product_code":"dws", + "title":"PG_STATIO_USER_INDEXES", + "uri":"dws_04_0780.html", + "doc_type":"devg", + "p_code":"395", + "code":"532" + }, + { + "desc":"PG_STATIO_USER_SEQUENCES displays the I/O status information about all the user relation table sequences in the namespace.", + "product_code":"dws", + "title":"PG_STATIO_USER_SEQUENCES", + "uri":"dws_04_0781.html", + "doc_type":"devg", + "p_code":"395", + "code":"533" + }, + { + "desc":"PG_STATIO_USER_TABLES displays the I/O status information about all the user relation tables in the namespace.", + "product_code":"dws", + "title":"PG_STATIO_USER_TABLES", + "uri":"dws_04_0782.html", + "doc_type":"devg", + "p_code":"395", + "code":"534" + }, + { + "desc":"PG_THREAD_WAIT_STATUS allows you to check the block waiting status of the backend thread and auxiliary thread of the current instance. The waiting statuses in the wait_s", + "product_code":"dws", + "title":"PG_THREAD_WAIT_STATUS", + "uri":"dws_04_0783.html", + "doc_type":"devg", + "p_code":"395", + "code":"535" + }, + { + "desc":"PG_TABLES displays access to useful information about each table in the database.", + "product_code":"dws", + "title":"PG_TABLES", + "uri":"dws_04_0784.html", + "doc_type":"devg", + "p_code":"395", + "code":"536" + }, + { + "desc":"PG_TDE_INFO displays the encryption information about the current cluster. Check whether the current cluster is encrypted, and check the encryption algorithm (if any) used", + "product_code":"dws", + "title":"PG_TDE_INFO", + "uri":"dws_04_0785.html", + "doc_type":"devg", + "p_code":"395", + "code":"537" + }, + { + "desc":"PG_TIMEZONE_ABBREVS displays all time zone abbreviations that can be recognized by the input routines.", + "product_code":"dws", + "title":"PG_TIMEZONE_ABBREVS", + "uri":"dws_04_0786.html", + "doc_type":"devg", + "p_code":"395", + "code":"538" + }, + { + "desc":"PG_TIMEZONE_NAMES displays all time zone names that can be recognized by SET TIMEZONE, along with their associated abbreviations, UTC offsets, and daylight saving time st", + "product_code":"dws", + "title":"PG_TIMEZONE_NAMES", + "uri":"dws_04_0787.html", + "doc_type":"devg", + "p_code":"395", + "code":"539" + }, + { + "desc":"PG_TOTAL_MEMORY_DETAIL displays the memory usage of a certain node in the database.", + "product_code":"dws", + "title":"PG_TOTAL_MEMORY_DETAIL", + "uri":"dws_04_0788.html", + "doc_type":"devg", + "p_code":"395", + "code":"540" + 
}, + { + "desc":"PG_TOTAL_SCHEMA_INFO displays the storage usage of all schemas in each database. This view is valid only if use_workload_manager is set to on.", + "product_code":"dws", + "title":"PG_TOTAL_SCHEMA_INFO", + "uri":"dws_04_0789.html", + "doc_type":"devg", + "p_code":"395", + "code":"541" + }, + { + "desc":"PG_TOTAL_USER_RESOURCE_INFO displays the resource usage of all users. Only administrators can query this view. This view is valid only if use_workload_manager is set to o", + "product_code":"dws", + "title":"PG_TOTAL_USER_RESOURCE_INFO", + "uri":"dws_04_0790.html", + "doc_type":"devg", + "p_code":"395", + "code":"542" + }, + { + "desc":"PG_USER displays information about users who can access the database.", + "product_code":"dws", + "title":"PG_USER", + "uri":"dws_04_0791.html", + "doc_type":"devg", + "p_code":"395", + "code":"543" + }, + { + "desc":"PG_USER_MAPPINGS displays information about user mappings.This is essentially a publicly readable view of PG_USER_MAPPING that leaves out the options column if the user h", + "product_code":"dws", + "title":"PG_USER_MAPPINGS", + "uri":"dws_04_0792.html", + "doc_type":"devg", + "p_code":"395", + "code":"544" + }, + { + "desc":"PG_VIEWS displays basic information about each view in the database.", + "product_code":"dws", + "title":"PG_VIEWS", + "uri":"dws_04_0793.html", + "doc_type":"devg", + "p_code":"395", + "code":"545" + }, + { + "desc":"PG_WLM_STATISTICS displays information about workload management after the task is complete or the exception has been handled.", + "product_code":"dws", + "title":"PG_WLM_STATISTICS", + "uri":"dws_04_0794.html", + "doc_type":"devg", + "p_code":"395", + "code":"546" + }, + { + "desc":"PGXC_BULKLOAD_PROGRESS displays the progress of the service import. Only GDS common files can be imported. This view is accessible only to users with system administrator", + "product_code":"dws", + "title":"PGXC_BULKLOAD_PROGRESS", + "uri":"dws_04_0795.html", + "doc_type":"devg", + "p_code":"395", + "code":"547" + }, + { + "desc":"PGXC_BULKLOAD_STATISTICS displays real-time statistics about service execution, such as GDS, COPY, and \\COPY, on a CN. This view summarizes the real-time execution status", + "product_code":"dws", + "title":"PGXC_BULKLOAD_STATISTICS", + "uri":"dws_04_0796.html", + "doc_type":"devg", + "p_code":"395", + "code":"548" + }, + { + "desc":"PGXC_COMM_CLIENT_INFO stores the client connection information of all nodes. 
(You can query this view on a DN to view the information about the connection between the CN ", + "product_code":"dws", + "title":"PGXC_COMM_CLIENT_INFO", + "uri":"dws_04_0797.html", + "doc_type":"devg", + "p_code":"395", + "code":"549" + }, + { + "desc":"PGXC_COMM_DELAY displays the communication library delay status for all the DNs.", + "product_code":"dws", + "title":"PGXC_COMM_DELAY", + "uri":"dws_04_0798.html", + "doc_type":"devg", + "p_code":"395", + "code":"550" + }, + { + "desc":"PGXC_COMM_RECV_STREAM displays the receiving stream status of the communication libraries for all the DNs.", + "product_code":"dws", + "title":"PGXC_COMM_RECV_STREAM", + "uri":"dws_04_0799.html", + "doc_type":"devg", + "p_code":"395", + "code":"551" + }, + { + "desc":"PGXC_COMM_SEND_STREAM displays the sending stream status of the communication libraries for all the DNs.", + "product_code":"dws", + "title":"PGXC_COMM_SEND_STREAM", + "uri":"dws_04_0800.html", + "doc_type":"devg", + "p_code":"395", + "code":"552" + }, + { + "desc":"PGXC_COMM_STATUS displays the communication library status for all the DNs.", + "product_code":"dws", + "title":"PGXC_COMM_STATUS", + "uri":"dws_04_0801.html", + "doc_type":"devg", + "p_code":"395", + "code":"553" + }, + { + "desc":"PGXC_DEADLOCK displays lock wait information generated due to distributed deadlocks. Currently, PGXC_DEADLOCK collects only lock wait information about locks whose locktyp", + "product_code":"dws", + "title":"PGXC_DEADLOCK", + "uri":"dws_04_0802.html", + "doc_type":"devg", + "p_code":"395", + "code":"554" + }, + { + "desc":"PGXC_GET_STAT_ALL_TABLES displays information about insertion, update, and deletion operations on tables and the dirty page rate of tables. Before running VACUUM FULL to a", + "product_code":"dws", + "title":"PGXC_GET_STAT_ALL_TABLES", + "uri":"dws_04_0803.html", + "doc_type":"devg", + "p_code":"395", + "code":"555" + }, + { + "desc":"PGXC_GET_STAT_ALL_PARTITIONS displays information about insertion, update, and deletion operations on partitions of partitioned tables and the dirty page rate of tables. T", + "product_code":"dws", + "title":"PGXC_GET_STAT_ALL_PARTITIONS", + "uri":"dws_04_0804.html", + "doc_type":"devg", + "p_code":"395", + "code":"556" + }, + { + "desc":"PGXC_GET_TABLE_SKEWNESS displays the data skew on tables in the current database.", + "product_code":"dws", + "title":"PGXC_GET_TABLE_SKEWNESS", + "uri":"dws_04_0805.html", + "doc_type":"devg", + "p_code":"395", + "code":"557" + }, + { + "desc":"PGXC_GTM_SNAPSHOT_STATUS displays transaction information on the current GTM.", + "product_code":"dws", + "title":"PGXC_GTM_SNAPSHOT_STATUS", + "uri":"dws_04_0806.html", + "doc_type":"devg", + "p_code":"395", + "code":"558" + }, + { + "desc":"PGXC_INSTANCE_TIME displays the running time of processes on each node in the cluster and the time consumed in each execution phase. Except the node_name column, the othe", + "product_code":"dws", + "title":"PGXC_INSTANCE_TIME", + "uri":"dws_04_0807.html", + "doc_type":"devg", + "p_code":"395", + "code":"559" + }, + { + "desc":"PGXC_INSTR_UNIQUE_SQL displays the complete Unique SQL statistics of all CN nodes in the cluster. Only the system administrator can access this view. 
For details about the", + "product_code":"dws", + "title":"PGXC_INSTR_UNIQUE_SQL", + "uri":"dws_04_0808.html", + "doc_type":"devg", + "p_code":"395", + "code":"560" + }, + { + "desc":"PGXC_LOCK_CONFLICTS displays information about conflicting locks in the cluster. When a lock is waiting for another lock or another lock is waiting for this one, a lock co", + "product_code":"dws", + "title":"PGXC_LOCK_CONFLICTS", + "uri":"dws_04_0809.html", + "doc_type":"devg", + "p_code":"395", + "code":"561" + }, + { + "desc":"PGXC_NODE_ENV displays the environmental variable information about all nodes in a cluster.", + "product_code":"dws", + "title":"PGXC_NODE_ENV", + "uri":"dws_04_0810.html", + "doc_type":"devg", + "p_code":"395", + "code":"562" + }, + { + "desc":"PGXC_NODE_STAT_RESET_TIME displays the time when statistics of each node in the cluster are reset. All columns except node_name are the same as those in the GS_NODE_STAT_", + "product_code":"dws", + "title":"PGXC_NODE_STAT_RESET_TIME", + "uri":"dws_04_0811.html", + "doc_type":"devg", + "p_code":"395", + "code":"563" + }, + { + "desc":"PGXC_OS_RUN_INFO displays the OS running status of each node in the cluster. All columns except node_name are the same as those in the PV_OS_RUN_INFO view. This view is a", + "product_code":"dws", + "title":"PGXC_OS_RUN_INFO", + "uri":"dws_04_0812.html", + "doc_type":"devg", + "p_code":"395", + "code":"564" + }, + { + "desc":"PGXC_OS_THREADS displays thread status information under all normal nodes in the current cluster.", + "product_code":"dws", + "title":"PGXC_OS_THREADS", + "uri":"dws_04_0813.html", + "doc_type":"devg", + "p_code":"395", + "code":"565" + }, + { + "desc":"PGXC_PREPARED_XACTS displays the two-phase transactions in the prepared phase.", + "product_code":"dws", + "title":"PGXC_PREPARED_XACTS", + "uri":"dws_04_0814.html", + "doc_type":"devg", + "p_code":"395", + "code":"566" + }, + { + "desc":"PGXC_REDO_STAT displays statistics on redoing Xlogs of each node in the cluster. All columns except node_name are the same as those in the PV_REDO_STAT view. This view is", + "product_code":"dws", + "title":"PGXC_REDO_STAT", + "uri":"dws_04_0815.html", + "doc_type":"devg", + "p_code":"395", + "code":"567" + }, + { + "desc":"PGXC_REL_IOSTAT displays statistics on disk read and write of each node in the cluster. All columns except node_name are the same as those in the GS_REL_IOSTAT view. This", + "product_code":"dws", + "title":"PGXC_REL_IOSTAT", + "uri":"dws_04_0816.html", + "doc_type":"devg", + "p_code":"395", + "code":"568" + }, + { + "desc":"PGXC_REPLICATION_SLOTS displays the replication information of DNs in the cluster. All columns except node_name are the same as those in the PG_REPLICATION_SLOTS view. Th", + "product_code":"dws", + "title":"PGXC_REPLICATION_SLOTS", + "uri":"dws_04_0817.html", + "doc_type":"devg", + "p_code":"395", + "code":"569" + }, + { + "desc":"PGXC_RUNNING_XACTS displays information about running transactions on each node in the cluster. The content is the same as that displayed in PG_RUNNING_XACTS.", + "product_code":"dws", + "title":"PGXC_RUNNING_XACTS", + "uri":"dws_04_0818.html", + "doc_type":"devg", + "p_code":"395", + "code":"570" + }, + { + "desc":"PGXC_SETTINGS displays the database running status of each node in the cluster. All columns except node_name are the same as those in the PG_SETTINGS view. 
This view is a", + "product_code":"dws", + "title":"PGXC_SETTINGS", + "uri":"dws_04_0819.html", + "doc_type":"devg", + "p_code":"395", + "code":"571" + }, + { + "desc":"PGXC_STAT_ACTIVITY displays information about the query performed by the current user on all the CNs in the current cluster. Run the following command to view blocked quer", + "product_code":"dws", + "title":"PGXC_STAT_ACTIVITY", + "uri":"dws_04_0820.html", + "doc_type":"devg", + "p_code":"395", + "code":"572" + }, + { + "desc":"PGXC_STAT_BAD_BLOCK displays statistics about page or CU verification failures after all nodes in a cluster are started.", + "product_code":"dws", + "title":"PGXC_STAT_BAD_BLOCK", + "uri":"dws_04_0821.html", + "doc_type":"devg", + "p_code":"395", + "code":"573" + }, + { + "desc":"PGXC_STAT_BGWRITER displays statistics on the background writer of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_BGWRITER vi", + "product_code":"dws", + "title":"PGXC_STAT_BGWRITER", + "uri":"dws_04_0822.html", + "doc_type":"devg", + "p_code":"395", + "code":"574" + }, + { + "desc":"PGXC_STAT_DATABASE displays the database status and statistics of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_DATABASE vie", + "product_code":"dws", + "title":"PGXC_STAT_DATABASE", + "uri":"dws_04_0823.html", + "doc_type":"devg", + "p_code":"395", + "code":"575" + }, + { + "desc":"PGXC_STAT_REPLICATION displays the log synchronization status of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_REPLICATION v", + "product_code":"dws", + "title":"PGXC_STAT_REPLICATION", + "uri":"dws_04_0824.html", + "doc_type":"devg", + "p_code":"395", + "code":"576" + }, + { + "desc":"PGXC_SQL_COUNT displays the node-level and user-level statistics for the SQL statements of SELECT, INSERT, UPDATE, DELETE, and MERGE INTO, and DDL, DML, and DCL statements", + "product_code":"dws", + "title":"PGXC_SQL_COUNT", + "uri":"dws_04_0825.html", + "doc_type":"devg", + "p_code":"395", + "code":"577" + }, + { + "desc":"PGXC_THREAD_WAIT_STATUS displays the call hierarchy relationships between threads of the SQL statements on all the nodes in a cluster, and the waiting status of ", + "product_code":"dws", + "title":"PGXC_THREAD_WAIT_STATUS", + "uri":"dws_04_0826.html", + "doc_type":"devg", + "p_code":"395", + "code":"578" + }, + { + "desc":"PGXC_TOTAL_MEMORY_DETAIL displays the memory usage in the cluster.", + "product_code":"dws", + "title":"PGXC_TOTAL_MEMORY_DETAIL", + "uri":"dws_04_0827.html", + "doc_type":"devg", + "p_code":"395", + "code":"579" + }, + { + "desc":"PGXC_TOTAL_SCHEMA_INFO displays the schema space information of all instances in the cluster, providing visibility into the schema space usage of each instance. This view", + "product_code":"dws", + "title":"PGXC_TOTAL_SCHEMA_INFO", + "uri":"dws_04_0828.html", + "doc_type":"devg", + "p_code":"395", + "code":"580" + }, + { + "desc":"PGXC_TOTAL_SCHEMA_INFO_ANALYZE displays the overall schema space information of the cluster, including the total cluster space, average space of instances, skew ratio, ma", + "product_code":"dws", + "title":"PGXC_TOTAL_SCHEMA_INFO_ANALYZE", + "uri":"dws_04_0829.html", + "doc_type":"devg", + "p_code":"395", + "code":"581" + }, + { + "desc":"PGXC_USER_TRANSACTION provides transaction information about users on all CNs. It is accessible only to users with system administrator rights. 
This view is valid only wh", + "product_code":"dws", + "title":"PGXC_USER_TRANSACTION", + "uri":"dws_04_0830.html", + "doc_type":"devg", + "p_code":"395", + "code":"582" + }, + { + "desc":"PGXC_VARIABLE_INFO displays information about transaction IDs and OIDs of all nodes in a cluster.", + "product_code":"dws", + "title":"PGXC_VARIABLE_INFO", + "uri":"dws_04_0831.html", + "doc_type":"devg", + "p_code":"395", + "code":"583" + }, + { + "desc":"PGXC_WAIT_EVENTS displays statistics on the waiting status and events of each node in the cluster. The content is the same as that displayed in GS_WAIT_EVENTS. This view ", + "product_code":"dws", + "title":"PGXC_WAIT_EVENTS", + "uri":"dws_04_0832.html", + "doc_type":"devg", + "p_code":"395", + "code":"584" + }, + { + "desc":"PGXC_WLM_OPERATOR_HISTORY displays the operator information of completed jobs executed on all CNs. This view is used by Database Manager to query data from a database. Dat", + "product_code":"dws", + "title":"PGXC_WLM_OPERATOR_HISTORY", + "uri":"dws_04_0836.html", + "doc_type":"devg", + "p_code":"395", + "code":"585" + }, + { + "desc":"PGXC_WLM_OPERATOR_INFO displays the operator information of completed jobs executed on CNs. The data in this view is obtained from GS_WLM_OPERATOR_INFO. This view is acces", + "product_code":"dws", + "title":"PGXC_WLM_OPERATOR_INFO", + "uri":"dws_04_0837.html", + "doc_type":"devg", + "p_code":"395", + "code":"586" + }, + { + "desc":"PGXC_WLM_OPERATOR_STATISTICS displays the operator information of jobs being executed on CNs. This view is accessible only to users with system administrator rights. For ", + "product_code":"dws", + "title":"PGXC_WLM_OPERATOR_STATISTICS", + "uri":"dws_04_0838.html", + "doc_type":"devg", + "p_code":"395", + "code":"587" + }, + { + "desc":"PGXC_WLM_SESSION_INFO displays load management information for completed jobs executed on all CNs. The data in this view is obtained from GS_WLM_SESSION_INFO. This view is", + "product_code":"dws", + "title":"PGXC_WLM_SESSION_INFO", + "uri":"dws_04_0839.html", + "doc_type":"devg", + "p_code":"395", + "code":"588" + }, + { + "desc":"PGXC_WLM_SESSION_HISTORY displays load management information for completed jobs executed on all CNs. This view is used by Data Manager to query data from a database. Dat", + "product_code":"dws", + "title":"PGXC_WLM_SESSION_HISTORY", + "uri":"dws_04_0840.html", + "doc_type":"devg", + "p_code":"395", + "code":"589" + }, + { + "desc":"PGXC_WLM_SESSION_STATISTICS displays load management information about jobs that are being executed on CNs. This view is accessible only to users with system administrator", + "product_code":"dws", + "title":"PGXC_WLM_SESSION_STATISTICS", + "uri":"dws_04_0841.html", + "doc_type":"devg", + "p_code":"395", + "code":"590" + }, + { + "desc":"PGXC_WLM_WORKLOAD_RECORDS displays the status of jobs executed by the current user on CNs. It is accessible only to users with system administrator rights. 
This view is av", + "product_code":"dws", + "title":"PGXC_WLM_WORKLOAD_RECORDS", + "uri":"dws_04_0842.html", + "doc_type":"devg", + "p_code":"395", + "code":"591" + }, + { + "desc":"PGXC_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in workload Cgroups on all CNs in a cluster, including the number of SELECT, UPDATE, ", + "product_code":"dws", + "title":"PGXC_WORKLOAD_SQL_COUNT", + "uri":"dws_04_0843.html", + "doc_type":"devg", + "p_code":"395", + "code":"592" + }, + { + "desc":"PGXC_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in workload Cgroups on all CNs in a cluster, including the maximum, minimum, aver", + "product_code":"dws", + "title":"PGXC_WORKLOAD_SQL_ELAPSE_TIME", + "uri":"dws_04_0844.html", + "doc_type":"devg", + "p_code":"395", + "code":"593" + }, + { + "desc":"PGXC_WORKLOAD_TRANSACTION provides transaction information about workload Cgroups on all CNs. It is accessible only to users with system administrator rights. This view i", + "product_code":"dws", + "title":"PGXC_WORKLOAD_TRANSACTION", + "uri":"dws_04_0845.html", + "doc_type":"devg", + "p_code":"395", + "code":"594" + }, + { + "desc":"PLAN_TABLE displays the plan information collected by EXPLAIN PLAN. Plan information has a session-level life cycle. After the session exits, the data will be deleted. ", + "product_code":"dws", + "title":"PLAN_TABLE", + "uri":"dws_04_0846.html", + "doc_type":"devg", + "p_code":"395", + "code":"595" + }, + { + "desc":"PLAN_TABLE_DATA displays the plan information collected by EXPLAIN PLAN. Different from the PLAN_TABLE view, the system catalog PLAN_TABLE_DATA stores the plan informatio", + "product_code":"dws", + "title":"PLAN_TABLE_DATA", + "uri":"dws_04_0847.html", + "doc_type":"devg", + "p_code":"395", + "code":"596" + }, + { + "desc":"By collecting statistics about the data file I/Os, PV_FILE_STAT displays the I/O performance of the data files to help detect performance problems, such as abnormal I/O operatio", + "product_code":"dws", + "title":"PV_FILE_STAT", + "uri":"dws_04_0848.html", + "doc_type":"devg", + "p_code":"395", + "code":"597" + }, + { + "desc":"PV_INSTANCE_TIME collects statistics on the running time of processes and the time consumed in each execution phase, in microseconds. PV_INSTANCE_TIME records time consump", + "product_code":"dws", + "title":"PV_INSTANCE_TIME", + "uri":"dws_04_0849.html", + "doc_type":"devg", + "p_code":"395", + "code":"598" + }, + { + "desc":"PV_OS_RUN_INFO displays the running status of the current operating system.", + "product_code":"dws", + "title":"PV_OS_RUN_INFO", + "uri":"dws_04_0850.html", + "doc_type":"devg", + "p_code":"395", + "code":"599" + }, + { + "desc":"PV_SESSION_MEMORY displays statistics about memory usage at the session level in the unit of MB, including all the memory allocated to Postgres and Stream threads on DNs ", + "product_code":"dws", + "title":"PV_SESSION_MEMORY", + "uri":"dws_04_0851.html", + "doc_type":"devg", + "p_code":"395", + "code":"600" + }, + { + "desc":"PV_SESSION_MEMORY_DETAIL displays statistics about thread memory usage by memory context. The memory context TempSmallContextGroup collects information about all memory co", + "product_code":"dws", + "title":"PV_SESSION_MEMORY_DETAIL", + "uri":"dws_04_0852.html", + "doc_type":"devg", + "p_code":"395", + "code":"601" + }, + { + "desc":"PV_SESSION_STAT displays session state statistics based on session threads or the AutoVacuum thread.", + "product_code":"dws", + "title":"PV_SESSION_STAT", + 
"uri":"dws_04_0853.html", + "doc_type":"devg", + "p_code":"395", + "code":"602" + }, + { + "desc":"PV_SESSION_TIME displays statistics about the running time of session threads and time consumed in each execution phase, in microseconds.", + "product_code":"dws", + "title":"PV_SESSION_TIME", + "uri":"dws_04_0854.html", + "doc_type":"devg", + "p_code":"395", + "code":"603" + }, + { + "desc":"PV_TOTAL_MEMORY_DETAIL displays statistics about memory usage of the current database node in the unit of MB.", + "product_code":"dws", + "title":"PV_TOTAL_MEMORY_DETAIL", + "uri":"dws_04_0855.html", + "doc_type":"devg", + "p_code":"395", + "code":"604" + }, + { + "desc":"PV_REDO_STAT displays statistics on redoing Xlogs on the current node.", + "product_code":"dws", + "title":"PV_REDO_STAT", + "uri":"dws_04_0856.html", + "doc_type":"devg", + "p_code":"395", + "code":"605" + }, + { + "desc":"REDACTION_COLUMNS displays information about all redaction columns in the current database.", + "product_code":"dws", + "title":"REDACTION_COLUMNS", + "uri":"dws_04_0857.html", + "doc_type":"devg", + "p_code":"395", + "code":"606" + }, + { + "desc":"REDACTION_POLICIES displays information about all redaction objects in the current database.", + "product_code":"dws", + "title":"REDACTION_POLICIES", + "uri":"dws_04_0858.html", + "doc_type":"devg", + "p_code":"395", + "code":"607" + }, + { + "desc":"USER_COL_COMMENTS displays the column comments of the table accessible to the current user.", + "product_code":"dws", + "title":"USER_COL_COMMENTS", + "uri":"dws_04_0859.html", + "doc_type":"devg", + "p_code":"395", + "code":"608" + }, + { + "desc":"USER_CONSTRAINTS displays the table constraint information accessible to the current user.", + "product_code":"dws", + "title":"USER_CONSTRAINTS", + "uri":"dws_04_0860.html", + "doc_type":"devg", + "p_code":"395", + "code":"609" + }, + { + "desc":"USER_CONSTRAINTS displays the information about constraint columns of the tables accessible to the current user.", + "product_code":"dws", + "title":"USER_CONS_COLUMNS", + "uri":"dws_04_0861.html", + "doc_type":"devg", + "p_code":"395", + "code":"610" + }, + { + "desc":"USER_INDEXES displays index information in the current schema.", + "product_code":"dws", + "title":"USER_INDEXES", + "uri":"dws_04_0862.html", + "doc_type":"devg", + "p_code":"395", + "code":"611" + }, + { + "desc":"USER_IND_COLUMNS displays column information about all indexes accessible to the current user.", + "product_code":"dws", + "title":"USER_IND_COLUMNS", + "uri":"dws_04_0863.html", + "doc_type":"devg", + "p_code":"395", + "code":"612" + }, + { + "desc":"USER_IND_EXPRESSIONSdisplays information about the function-based expression index accessible to the current user.", + "product_code":"dws", + "title":"USER_IND_EXPRESSIONS", + "uri":"dws_04_0864.html", + "doc_type":"devg", + "p_code":"395", + "code":"613" + }, + { + "desc":"USER_IND_PARTITIONS displays information about index partitions accessible to the current user.", + "product_code":"dws", + "title":"USER_IND_PARTITIONS", + "uri":"dws_04_0865.html", + "doc_type":"devg", + "p_code":"395", + "code":"614" + }, + { + "desc":"USER_JOBS displays all jobs owned by the user.", + "product_code":"dws", + "title":"USER_JOBS", + "uri":"dws_04_0866.html", + "doc_type":"devg", + "p_code":"395", + "code":"615" + }, + { + "desc":"USER_OBJECTS displays all database objects accessible to the current user.For details about the value ranges of last_ddl_time and last_ddl_time, see PG_OBJECT.", + "product_code":"dws", + 
"title":"USER_OBJECTS", + "uri":"dws_04_0867.html", + "doc_type":"devg", + "p_code":"395", + "code":"616" + }, + { + "desc":"USER_PART_INDEXES displays information about partitioned table indexes accessible to the current user.", + "product_code":"dws", + "title":"USER_PART_INDEXES", + "uri":"dws_04_0868.html", + "doc_type":"devg", + "p_code":"395", + "code":"617" + }, + { + "desc":"USER_PART_TABLES displays information about partitioned tables accessible to the current user.", + "product_code":"dws", + "title":"USER_PART_TABLES", + "uri":"dws_04_0869.html", + "doc_type":"devg", + "p_code":"395", + "code":"618" + }, + { + "desc":"USER_PROCEDURES displays information about all stored procedures and functions in the current schema.", + "product_code":"dws", + "title":"USER_PROCEDURES", + "uri":"dws_04_0870.html", + "doc_type":"devg", + "p_code":"395", + "code":"619" + }, + { + "desc":"USER_SEQUENCES displays sequence information in the current schema.", + "product_code":"dws", + "title":"USER_SEQUENCES", + "uri":"dws_04_0871.html", + "doc_type":"devg", + "p_code":"395", + "code":"620" + }, + { + "desc":"USER_SOURCE displays information about stored procedures or functions in this mode, and provides the columns defined by the stored procedures or the functions.", + "product_code":"dws", + "title":"USER_SOURCE", + "uri":"dws_04_0872.html", + "doc_type":"devg", + "p_code":"395", + "code":"621" + }, + { + "desc":"USER_SYNONYMS displays synonyms accessible to the current user.", + "product_code":"dws", + "title":"USER_SYNONYMS", + "uri":"dws_04_0873.html", + "doc_type":"devg", + "p_code":"395", + "code":"622" + }, + { + "desc":"USER_TAB_COLUMNS displays information about table columns accessible to the current user.", + "product_code":"dws", + "title":"USER_TAB_COLUMNS", + "uri":"dws_04_0874.html", + "doc_type":"devg", + "p_code":"395", + "code":"623" + }, + { + "desc":"USER_TAB_COMMENTS displays comments about all tables and views accessible to the current user.", + "product_code":"dws", + "title":"USER_TAB_COMMENTS", + "uri":"dws_04_0875.html", + "doc_type":"devg", + "p_code":"395", + "code":"624" + }, + { + "desc":"USER_TAB_PARTITIONS displays all table partitions accessible to the current user. 
Each partition of a partitioned table accessible to the current user has a piece of reco", + "product_code":"dws", + "title":"USER_TAB_PARTITIONS", + "uri":"dws_04_0876.html", + "doc_type":"devg", + "p_code":"395", + "code":"625" + }, + { + "desc":"USER_TABLES displays table information in the current schema.", + "product_code":"dws", + "title":"USER_TABLES", + "uri":"dws_04_0877.html", + "doc_type":"devg", + "p_code":"395", + "code":"626" + }, + { + "desc":"USER_TRIGGERS displays the information about triggers accessible to the current user.", + "product_code":"dws", + "title":"USER_TRIGGERS", + "uri":"dws_04_0878.html", + "doc_type":"devg", + "p_code":"395", + "code":"627" + }, + { + "desc":"USER_VIEWS displays information about all views in the current schema.", + "product_code":"dws", + "title":"USER_VIEWS", + "uri":"dws_04_0879.html", + "doc_type":"devg", + "p_code":"395", + "code":"628" + }, + { + "desc":"V$SESSION displays all session information about the current session.", + "product_code":"dws", + "title":"V$SESSION", + "uri":"dws_04_0880.html", + "doc_type":"devg", + "p_code":"395", + "code":"629" + }, + { + "desc":"V$SESSION_LONGOPS displays the progress of ongoing operations.", + "product_code":"dws", + "title":"V$SESSION_LONGOPS", + "uri":"dws_04_0881.html", + "doc_type":"devg", + "p_code":"395", + "code":"630" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"GUC Parameters", + "uri":"dws_04_0883.html", + "doc_type":"devg", + "p_code":"1", + "code":"631" + }, + { + "desc":"GaussDB(DWS) GUC parameters can control database system behaviors. You can check and adjust the GUC parameters based on your business scenario and data volume.After a clu", + "product_code":"dws", + "title":"Viewing GUC Parameters", + "uri":"dws_04_0884.html", + "doc_type":"devg", + "p_code":"631", + "code":"632" + }, + { + "desc":"To ensure the optimal performance of GaussDB(DWS), you can adjust the GUC parameters in the database.The GUC parameters of GaussDB(DWS) are classified into the following ", + "product_code":"dws", + "title":"Configuring GUC Parameters", + "uri":"dws_04_0885.html", + "doc_type":"devg", + "p_code":"631", + "code":"633" + }, + { + "desc":"The database provides many operation parameters. Configuration of these parameters affects the behavior of the database system. Before modifying these parameters, learn t", + "product_code":"dws", + "title":"GUC Parameter Usage", + "uri":"dws_04_0886.html", + "doc_type":"devg", + "p_code":"631", + "code":"634" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Connection and Authentication", + "uri":"dws_04_0888.html", + "doc_type":"devg", + "p_code":"631", + "code":"635" + }, + { + "desc":"This section describes parameters related to the connection mode between the client and server.Parameter description: Specifies the maximum number of allowed parallel con", + "product_code":"dws", + "title":"Connection Settings", + "uri":"dws_04_0889.html", + "doc_type":"devg", + "p_code":"635", + "code":"636" + }, + { + "desc":"This section describes parameters about how to securely authenticate the client and server.Parameter description: Specifies the longest duration to wait before the client", + "product_code":"dws", + "title":"Security and Authentication (postgresql.conf)", + "uri":"dws_04_0890.html", + "doc_type":"devg", + "p_code":"635", + "code":"637" + }, + { + "desc":"This section describes parameter settings and value ranges for communication libraries.Parameter description: Specifies whether the communication library uses the TCP or ", + "product_code":"dws", + "title":"Communication Library Parameters", + "uri":"dws_04_0891.html", + "doc_type":"devg", + "p_code":"635", + "code":"638" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Resource Consumption", + "uri":"dws_04_0892.html", + "doc_type":"devg", + "p_code":"631", + "code":"639" + }, + { + "desc":"This section describes memory parameters.Parameters described in this section take effect only after the database service restarts.Parameter description: Specifies whethe", + "product_code":"dws", + "title":"Memory", + "uri":"dws_04_0893.html", + "doc_type":"devg", + "p_code":"639", + "code":"640" + }, + { + "desc":"This section describes parameters related to statement disk space control, which are used to limit the disk space usage of statements.Parameter description: Specifies the", + "product_code":"dws", + "title":"Statement Disk Space Control", + "uri":"dws_04_0894.html", + "doc_type":"devg", + "p_code":"639", + "code":"641" + }, + { + "desc":"This section describes kernel resource parameters. Whether these parameters take effect depends on OS settings.Parameter description: Specifies the maximum number of simu", + "product_code":"dws", + "title":"Kernel Resources", + "uri":"dws_04_0895.html", + "doc_type":"devg", + "p_code":"639", + "code":"642" + }, + { + "desc":"This feature allows administrators to reduce the I/O impact of the VACUUM and ANALYZE statements on concurrent database activities. It is often more important to prevent ", + "product_code":"dws", + "title":"Cost-based Vacuum Delay", + "uri":"dws_04_0896.html", + "doc_type":"devg", + "p_code":"639", + "code":"643" + }, + { + "desc":"Parameter description: Specifies whether O&M personnel are allowed to generate some ADIO logs to locate ADIO issues. This parameter is used only by developers. 
Common use", + "product_code":"dws", + "title":"Asynchronous I/O Operations", + "uri":"dws_04_0898.html", + "doc_type":"devg", + "p_code":"639", + "code":"644" + }, + { + "desc":"GaussDB(DWS) provides a parallel data import function that enables a large amount of data to be imported in a fast and efficient manner. This section describes parameters", + "product_code":"dws", + "title":"Parallel Data Import", + "uri":"dws_04_0899.html", + "doc_type":"devg", + "p_code":"631", + "code":"645" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Write Ahead Logs", + "uri":"dws_04_0900.html", + "doc_type":"devg", + "p_code":"631", + "code":"646" + }, + { + "desc":"Parameter description: Specifies the level of the information that is written to WALs.Type: POSTMASTERValue range: enumerated valuesminimalAdvantages: Certain bulk operat", + "product_code":"dws", + "title":"Settings", + "uri":"dws_04_0901.html", + "doc_type":"devg", + "p_code":"646", + "code":"647" + }, + { + "desc":"Parameter description: Specifies the minimum number of WAL segment files in the period specified by checkpoint_timeout. The size of each log file is 16 MB.Type: SIGHUPVal", + "product_code":"dws", + "title":"Checkpoints", + "uri":"dws_04_0902.html", + "doc_type":"devg", + "p_code":"646", + "code":"648" + }, + { + "desc":"Parameter description: When archive_mode is enabled, completed WAL segments are sent to archive storage by setting archive_command.Type: SIGHUPValue range: Booleanon: The", + "product_code":"dws", + "title":"Archiving", + "uri":"dws_04_0903.html", + "doc_type":"devg", + "p_code":"646", + "code":"649" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"HA Replication", + "uri":"dws_04_0904.html", + "doc_type":"devg", + "p_code":"631", + "code":"650" + }, + { + "desc":"Parameter description: Specifies the number of Xlog file segments. Specifies the minimum number of transaction log files stored in the pg_xlog directory. The standby serv", + "product_code":"dws", + "title":"Sending Server", + "uri":"dws_04_0905.html", + "doc_type":"devg", + "p_code":"650", + "code":"651" + }, + { + "desc":"Parameter description: Specifies the number of transactions by which VACUUM will defer the cleanup of invalid row-store table records, so that VACUUM and VACUUM FULL do n", + "product_code":"dws", + "title":"Primary Server", + "uri":"dws_04_0906.html", + "doc_type":"devg", + "p_code":"650", + "code":"652" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Query Planning", + "uri":"dws_04_0908.html", + "doc_type":"devg", + "p_code":"631", + "code":"653" + }, + { + "desc":"These configuration parameters provide a crude method of influencing the query plans chosen by the query optimizer. 
If the default plan chosen by the optimizer for a part", + "product_code":"dws", + "title":"Optimizer Method Configuration", + "uri":"dws_04_0909.html", + "doc_type":"devg", + "p_code":"653", + "code":"654" + }, + { + "desc":"This section describes the optimizer cost constants. The cost variables described in this section are measured on an arbitrary scale. Only their relative values matter, t", + "product_code":"dws", + "title":"Optimizer Cost Constants", + "uri":"dws_04_0910.html", + "doc_type":"devg", + "p_code":"653", + "code":"655" + }, + { + "desc":"This section describes parameters related to genetic query optimizer. The genetic query optimizer (GEQO) is an algorithm that plans queries by using heuristic searching. ", + "product_code":"dws", + "title":"Genetic Query Optimizer", + "uri":"dws_04_0911.html", + "doc_type":"devg", + "p_code":"653", + "code":"656" + }, + { + "desc":"Parameter description: Specifies the default statistics target for table columns without a column-specific target set via ALTER TABLE SET STATISTICS. If this parameter is", + "product_code":"dws", + "title":"Other Optimizer Options", + "uri":"dws_04_0912.html", + "doc_type":"devg", + "p_code":"653", + "code":"657" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Error Reporting and Logging", + "uri":"dws_04_0913.html", + "doc_type":"devg", + "p_code":"631", + "code":"658" + }, + { + "desc":"Parameter description: Specifies the writing mode of the log files when logging_collector is set to on.Type: SIGHUPValue range: Booleanon indicates that GaussDB(DWS) over", + "product_code":"dws", + "title":"Logging Destination", + "uri":"dws_04_0914.html", + "doc_type":"devg", + "p_code":"658", + "code":"659" + }, + { + "desc":"Parameter description: Specifies which level of messages are sent to the client. Each level covers all the levels following it. The lower the level is, the fewer messages", + "product_code":"dws", + "title":"Logging Time", + "uri":"dws_04_0915.html", + "doc_type":"devg", + "p_code":"658", + "code":"660" + }, + { + "desc":"Parameter description: Specifies whether to print parsing tree results.Type: SIGHUPValue range: Booleanon indicates the printing result function is enabled.off indicates ", + "product_code":"dws", + "title":"Logging Content", + "uri":"dws_04_0916.html", + "doc_type":"devg", + "p_code":"658", + "code":"661" + }, + { + "desc":"During cluster running, error scenarios can be detected in a timely manner to inform users as soon as possible.Parameter description: Enables the alarm detection thread t", + "product_code":"dws", + "title":"Alarm Detection", + "uri":"dws_04_0918.html", + "doc_type":"devg", + "p_code":"631", + "code":"662" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Statistics During the Database Running", + "uri":"dws_04_0919.html", + "doc_type":"devg", + "p_code":"631", + "code":"663" + }, + { + "desc":"The query and index statistics collector is used to collect statistics during database running. 
The statistics include the times of inserting and updating a table and an ", + "product_code":"dws", + "title":"Query and Index Statistics Collector", + "uri":"dws_04_0920.html", + "doc_type":"devg", + "p_code":"663", + "code":"664" + }, + { + "desc":"During the running of the database, the lock access, disk I/O operation, and invalid message process are involved. All these operations are the bottleneck of the database", + "product_code":"dws", + "title":"Performance Statistics", + "uri":"dws_04_0921.html", + "doc_type":"devg", + "p_code":"663", + "code":"665" + }, + { + "desc":"If database resource usage is not controlled, concurrent tasks easily preempt resources. As a result, the OS will be overloaded and cannot respond to user tasks; or even ", + "product_code":"dws", + "title":"Workload Management", + "uri":"dws_04_0922.html", + "doc_type":"devg", + "p_code":"631", + "code":"666" + }, + { + "desc":"The automatic cleanup process (autovacuum) in the system automatically runs the VACUUM and ANALYZE commands to recycle the record space marked by the deleted status and u", + "product_code":"dws", + "title":"Automatic Cleanup", + "uri":"dws_04_0923.html", + "doc_type":"devg", + "p_code":"631", + "code":"667" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Default Settings of Client Connection", + "uri":"dws_04_0924.html", + "doc_type":"devg", + "p_code":"631", + "code":"668" + }, + { + "desc":"This section describes related default parameters involved in the execution of SQL statements.Parameter description: Specifies the order in which schemas are searched whe", + "product_code":"dws", + "title":"Statement Behavior", + "uri":"dws_04_0925.html", + "doc_type":"devg", + "p_code":"668", + "code":"669" + }, + { + "desc":"This section describes parameters related to the time format setting.Parameter description: Specifies the display format for date and time values, as well as the rules fo", + "product_code":"dws", + "title":"Zone and Formatting", + "uri":"dws_04_0926.html", + "doc_type":"devg", + "p_code":"668", + "code":"670" + }, + { + "desc":"This section describes the default database loading parameters of the database system.Parameter description: Specifies the path for saving the shared database files that ", + "product_code":"dws", + "title":"Other Default Parameters", + "uri":"dws_04_0927.html", + "doc_type":"devg", + "p_code":"668", + "code":"671" + }, + { + "desc":"In GaussDB(DWS), a deadlock may occur when concurrently executed transactions compete for resources. This section describes parameters used for managing transaction lock ", + "product_code":"dws", + "title":"Lock Management", + "uri":"dws_04_0928.html", + "doc_type":"devg", + "p_code":"631", + "code":"672" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Version and Platform Compatibility", + "uri":"dws_04_0929.html", + "doc_type":"devg", + "p_code":"631", + "code":"673" + }, + { + "desc":"This section describes the parameter control of the downward compatibility and external compatibility features of GaussDB(DWS). Backward compatibility of the database sys", + "product_code":"dws", + "title":"Compatibility with Earlier Versions", + "uri":"dws_04_0930.html", + "doc_type":"devg", + "p_code":"673", + "code":"674" + }, + { + "desc":"Many platforms use the database system. External compatibility of the database system provides a lot of convenience for platforms.Parameter description: Determines whethe", + "product_code":"dws", + "title":"Platform and Client Compatibility", + "uri":"dws_04_0931.html", + "doc_type":"devg", + "p_code":"673", + "code":"675" + }, + { + "desc":"This section describes parameters used for controlling the methods that the server processes an error occurring in the database system.Parameter description: Specifies wh", + "product_code":"dws", + "title":"Fault Tolerance", + "uri":"dws_04_0932.html", + "doc_type":"devg", + "p_code":"631", + "code":"676" + }, + { + "desc":"When a connection pool is used to access the database, database connections are established and then stored in the memory as objects during system running. When you need ", + "product_code":"dws", + "title":"Connection Pool Parameters", + "uri":"dws_04_0933.html", + "doc_type":"devg", + "p_code":"631", + "code":"677" + }, + { + "desc":"This section describes the settings and value ranges of cluster transaction parameters.Parameter description: Specifies the isolation level of the current transaction.Typ", + "product_code":"dws", + "title":"Cluster Transaction Parameters", + "uri":"dws_04_0934.html", + "doc_type":"devg", + "p_code":"631", + "code":"678" + }, + { + "desc":"Parameter description: Specifies whether to enable the lightweight column-store update.Type: USERSETValue range: Booleanon indicates that the lightweight column-store upd", + "product_code":"dws", + "title":"Developer Operations", + "uri":"dws_04_0936.html", + "doc_type":"devg", + "p_code":"631", + "code":"679" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Auditing", + "uri":"dws_04_0937.html", + "doc_type":"devg", + "p_code":"631", + "code":"680" + }, + { + "desc":"Parameter description: Specifies whether to enable or disable the audit process. After the audit process is enabled, the auditing information written by the background pr", + "product_code":"dws", + "title":"Audit Switch", + "uri":"dws_04_0938.html", + "doc_type":"devg", + "p_code":"680", + "code":"681" + }, + { + "desc":"Parameter description: Specifies whether to audit successful operations in GaussDB(DWS). Set this parameter as required.Type: SIGHUPValue range: a stringnone: indicates t", + "product_code":"dws", + "title":"Operation Audit", + "uri":"dws_04_0940.html", + "doc_type":"devg", + "p_code":"680", + "code":"682" + }, + { + "desc":"The automatic rollback transaction can be monitored and its statement problems can be located by setting the transaction timeout warning. 
In addition, the statements with", + "product_code":"dws", + "title":"Transaction Monitoring", + "uri":"dws_04_0941.html", + "doc_type":"devg", + "p_code":"631", + "code":"683" + }, + { + "desc":"Parameter description: If an SQL statement involves tables belonging to different groups, you can enable this parameter to push the execution plan of the statement to imp", + "product_code":"dws", + "title":"Miscellaneous Parameters", + "uri":"dws_04_0945.html", + "doc_type":"devg", + "p_code":"631", + "code":"684" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Glossary", + "uri":"dws_04_0946.html", + "doc_type":"devg", + "p_code":"1", + "code":"685" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"SQL Syntax Reference", + "uri":"dws_04_2000.html", + "doc_type":"devg", + "p_code":"", + "code":"686" + }, + { + "desc":"SQL is a standard computer language used to control the access to databases and manage data in databases.SQL provides different statements to enable you to:Query data.Ins", + "product_code":"dws", + "title":"GaussDB(DWS) SQL", + "uri":"dws_06_0001.html", + "doc_type":"devg", + "p_code":"686", + "code":"687" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Differences Between GaussDB(DWS) and PostgreSQL", + "uri":"dws_06_0002.html", + "doc_type":"devg", + "p_code":"686", + "code":"688" + }, + { + "desc":"GaussDB(DWS) gsql differs from PostgreSQL psql in that the former has made the following changes to enhance security:User passwords cannot be set by running the \\password", + "product_code":"dws", + "title":"GaussDB(DWS) gsql, PostgreSQL psql, and libpq", + "uri":"dws_06_0003.html", + "doc_type":"devg", + "p_code":"688", + "code":"689" + }, + { + "desc":"For details about supported data types by GaussDB(DWS), see Data Types.The following PostgreSQL data type is not supported:Lines, a geometric typepg_node_tree", + "product_code":"dws", + "title":"Data Type Differences", + "uri":"dws_06_0004.html", + "doc_type":"devg", + "p_code":"688", + "code":"690" + }, + { + "desc":"For details about the functions supported by GaussDB(DWS), see Functions and Operators.The following PostgreSQL functions are not supported:Enum support functionsAccess p", + "product_code":"dws", + "title":"Function Differences", + "uri":"dws_06_0005.html", + "doc_type":"devg", + "p_code":"688", + "code":"691" + }, + { + "desc":"Table inheritanceTable creation features:Use REFERENCES reftable [ (refcolumn) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] t", + "product_code":"dws", + "title":"PostgreSQL Features Unsupported by GaussDB(DWS)", + "uri":"dws_06_0006.html", + "doc_type":"devg", + "p_code":"688", + "code":"692" + }, + { + "desc":"The SQL contains reserved and non-reserved words. 
Standards require that reserved keywords not be used as other identifiers. Non-reserved keywords have special meanings o", + "product_code":"dws", + "title":"Keyword", + "uri":"dws_06_0007.html", + "doc_type":"devg", + "p_code":"686", + "code":"693" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Data Types", + "uri":"dws_06_0008.html", + "doc_type":"devg", + "p_code":"686", + "code":"694" + }, + { + "desc":"Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and selectable-precision decimals.For details about numeric op", + "product_code":"dws", + "title":"Numeric Types", + "uri":"dws_06_0009.html", + "doc_type":"devg", + "p_code":"694", + "code":"695" + }, + { + "desc":"The money type stores a currency amount with fixed fractional precision. The range shown in Table 1 assumes there are two fractional digits. Input is accepted in a variet", + "product_code":"dws", + "title":"Monetary Types", + "uri":"dws_06_0010.html", + "doc_type":"devg", + "p_code":"694", + "code":"696" + }, + { + "desc":"Valid literal values for the \"true\" state are:TRUE, 't', 'true', 'y', 'yes', '1'Valid literal values for the \"false\" state include:FALSE, 'f', 'false', 'n', 'no', '0'TRUE", + "product_code":"dws", + "title":"Boolean Type", + "uri":"dws_06_0011.html", + "doc_type":"devg", + "p_code":"694", + "code":"697" + }, + { + "desc":"Table 1 lists the character types that can be used in GaussDB(DWS). For string operators and related built-in functions, see Character Processing Functions and Operators.", + "product_code":"dws", + "title":"Character Types", + "uri":"dws_06_0012.html", + "doc_type":"devg", + "p_code":"694", + "code":"698" + }, + { + "desc":"Table 1 lists the binary data types that can be used in GaussDB(DWS).In addition to the size limitation on each column, the total size of each tuple is 8203 bytes less th", + "product_code":"dws", + "title":"Binary Data Types", + "uri":"dws_06_0013.html", + "doc_type":"devg", + "p_code":"694", + "code":"699" + }, + { + "desc":"Table 1 lists date and time types supported by GaussDB(DWS). For the operators and built-in functions of the types, see Date and Time Processing Functions and Operators.I", + "product_code":"dws", + "title":"Date/Time Types", + "uri":"dws_06_0014.html", + "doc_type":"devg", + "p_code":"694", + "code":"700" + }, + { + "desc":"Table 1 lists the geometric types that can be used in GaussDB(DWS). The most fundamental type, the point, forms the basis for all of the other types.A rich set of functio", + "product_code":"dws", + "title":"Geometric Types", + "uri":"dws_06_0015.html", + "doc_type":"devg", + "p_code":"694", + "code":"701" + }, + { + "desc":"GaussDB(DWS) offers data types to store IPv4, IPv6, and MAC addresses.It is better to use network address types instead of plaintext types to store IPv4, IPv6, and MAC ad", + "product_code":"dws", + "title":"Network Address Types", + "uri":"dws_06_0016.html", + "doc_type":"devg", + "p_code":"694", + "code":"702" + }, + { + "desc":"Bit strings are strings of 1's and 0's. 
They can be used to store bit masks.GaussDB(DWS) supports two SQL bit types: bit(n) and bit varying(n), where n is a positive inte",
"product_code":"dws",
"title":"Bit String Types",
"uri":"dws_06_0017.html",
"doc_type":"devg",
"p_code":"694",
"code":"703"
},
{
"desc":"GaussDB(DWS) offers two data types that are designed to support full text search. The tsvector type represents a document in a form optimized for text search. The tsquery",
"product_code":"dws",
"title":"Text Search Types",
"uri":"dws_06_0018.html",
"doc_type":"devg",
"p_code":"694",
"code":"704"
},
{
"desc":"The data type UUID stores Universally Unique Identifiers (UUID) as defined by RFC 4122, ISO/IEC 9834-8:2005, and related standards. This identifier is a 128-bit quantity ",
"product_code":"dws",
"title":"UUID Type",
"uri":"dws_06_0019.html",
"doc_type":"devg",
"p_code":"694",
"code":"705"
},
{
"desc":"JSON data types are for storing JavaScript Object Notation (JSON) data. Such data can also be stored as TEXT, but the JSON data type has the advantage of checking that ea",
"product_code":"dws",
"title":"JSON Types",
"uri":"dws_06_0020.html",
"doc_type":"devg",
"p_code":"694",
"code":"706"
},
{
"desc":"HyperLogLog (HLL) is an approximation algorithm for efficiently counting the number of distinct values in a data set. It features faster computing and lower space usage. ",
"product_code":"dws",
"title":"HLL Data Types",
"uri":"dws_06_0021.html",
"doc_type":"devg",
"p_code":"694",
"code":"707"
},
{
"desc":"Object identifiers (OIDs) are used internally by GaussDB(DWS) as primary keys for various system catalogs. OIDs are not added to user-created tables by the system. The OI",
"product_code":"dws",
"title":"Object Identifier Types",
"uri":"dws_06_0022.html",
"doc_type":"devg",
"p_code":"694",
"code":"708"
},
{
"desc":"GaussDB(DWS) has a number of special-purpose entries that are collectively called pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to ",
"product_code":"dws",
"title":"Pseudo-Types",
"uri":"dws_06_0023.html",
"doc_type":"devg",
"p_code":"694",
"code":"709"
},
{
"desc":"Table 1 lists the data types supported by column-store tables.",
"product_code":"dws",
"title":"Data Types Supported by Column-Store Tables",
"uri":"dws_06_0024.html",
"doc_type":"devg",
"p_code":"694",
"code":"710"
},
{
"desc":"XML data type stores Extensible Markup Language (XML) formatted data. Such data can also be stored as text, but the advantage of the XML data type is that it checks wheth",
"product_code":"dws",
"title":"XML",
"uri":"dws_06_0025.html",
"doc_type":"devg",
"p_code":"694",
"code":"711"
},
{
"desc":"Table 1 lists the constants and macros that can be used in GaussDB(DWS).",
"product_code":"dws",
"title":"Constant and Macro",
"uri":"dws_06_0026.html",
"doc_type":"devg",
"p_code":"686",
"code":"712"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"dws",
"title":"Functions and Operators",
"uri":"dws_06_0027.html",
"doc_type":"devg",
"p_code":"686",
"code":"713"
},
{
"desc":"The usual logical operators include AND, OR, and NOT. 
SQL uses a three-valued logical system with true, false, and null, which represents \"unknown\". Their priorities are ", + "product_code":"dws", + "title":"Logical Operators", + "uri":"dws_06_0028.html", + "doc_type":"devg", + "p_code":"713", + "code":"714" + }, + { + "desc":"Comparison operators are available for all data types and return Boolean values.All comparison operators are binary operators. Only data types that are the same or can be", + "product_code":"dws", + "title":"Comparison Operators", + "uri":"dws_06_0029.html", + "doc_type":"devg", + "p_code":"713", + "code":"715" + }, + { + "desc":"String functions and operators provided by GaussDB(DWS) are for concatenating strings with each other, concatenating strings with non-strings, and matching the patterns o", + "product_code":"dws", + "title":"Character Processing Functions and Operators", + "uri":"dws_06_0030.html", + "doc_type":"devg", + "p_code":"713", + "code":"716" + }, + { + "desc":"SQL defines some string functions that use keywords, rather than commas, to separate arguments.octet_length(string)Description: Number of bytes in binary stringReturn typ", + "product_code":"dws", + "title":"Binary String Functions and Operators", + "uri":"dws_06_0031.html", + "doc_type":"devg", + "p_code":"713", + "code":"717" + }, + { + "desc":"Aside from the usual comparison operators, the following operators can be used. Bit string operands of &, |, and # must be of equal length. When bit shifting, the origina", + "product_code":"dws", + "title":"Bit String Functions and Operators", + "uri":"dws_06_0032.html", + "doc_type":"devg", + "p_code":"713", + "code":"718" + }, + { + "desc":"There are three separate approaches to pattern matching provided by the database: the traditional SQL LIKE operator, the more recent SIMILAR TO operator, and POSIX-style ", + "product_code":"dws", + "title":"Pattern Matching Operators", + "uri":"dws_06_0033.html", + "doc_type":"devg", + "p_code":"713", + "code":"719" + }, + { + "desc":"+Description: AdditionFor example:SELECT 2+3 AS RESULT;\n result \n--------\n 5\n(1 row)Description: AdditionFor example:-Description: SubtractionFor example:SELECT 2-3 ", + "product_code":"dws", + "title":"Mathematical Functions and Operators", + "uri":"dws_06_0034.html", + "doc_type":"devg", + "p_code":"713", + "code":"720" + }, + { + "desc":"When the user uses date/time operators, explicit type prefixes are modified for corresponding operands to ensure that the operands parsed by the database are consistent w", + "product_code":"dws", + "title":"Date and Time Processing Functions and Operators", + "uri":"dws_06_0035.html", + "doc_type":"devg", + "p_code":"713", + "code":"721" + }, + { + "desc":"cast(x as y)Description: Converts x into the type specified by y.For example:SELECT cast('22-oct-1997' as timestamp);\n timestamp \n---------------------\n 1997-10", + "product_code":"dws", + "title":"Type Conversion Functions", + "uri":"dws_06_0036.html", + "doc_type":"devg", + "p_code":"713", + "code":"722" + }, + { + "desc":"+Description: TranslationFor example:SELECT box '((0,0),(1,1))' + point '(2.0,0)' AS RESULT;\n result \n-------------\n (3,1),(2,0)\n(1 row)Description: TranslationFor e", + "product_code":"dws", + "title":"Geometric Functions and Operators", + "uri":"dws_06_0037.html", + "doc_type":"devg", + "p_code":"713", + "code":"723" + }, + { + "desc":"The operators <<, <<=, >>, and >>= test for subnet inclusion. 
They consider only the network parts of the two addresses (ignoring any host part) and determine whether one", + "product_code":"dws", + "title":"Network Address Functions and Operators", + "uri":"dws_06_0038.html", + "doc_type":"devg", + "p_code":"713", + "code":"724" + }, + { + "desc":"@@Description: Specifies whether the tsvector-typed words match the tsquery-typed words.For example:SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') AS ", + "product_code":"dws", + "title":"Text Search Functions and Operators", + "uri":"dws_06_0039.html", + "doc_type":"devg", + "p_code":"713", + "code":"725" + }, + { + "desc":"UUID functions are used to generate UUID data (see UUID Type).uuid_generate_v1()Description: Generates a UUID sequence number.Return type: UUIDExample:SELECT uuid_generat", + "product_code":"dws", + "title":"UUID Functions", + "uri":"dws_06_0040.html", + "doc_type":"devg", + "p_code":"713", + "code":"726" + }, + { + "desc":"JSON functions are used to generate JSON data (see JSON Types).array_to_json(anyarray [, pretty_bool])Description: Returns the array as JSON. A multi-dimensional array be", + "product_code":"dws", + "title":"JSON Functions", + "uri":"dws_06_0041.html", + "doc_type":"devg", + "p_code":"713", + "code":"727" + }, + { + "desc":"hll_hash_boolean(bool)Description: Hashes data of the bool type.Return type: hll_hashvalFor example:SELECT hll_hash_boolean(FALSE);\n hll_hash_boolean \n----------------", + "product_code":"dws", + "title":"HLL Functions and Operators", + "uri":"dws_06_0042.html", + "doc_type":"devg", + "p_code":"713", + "code":"728" + }, + { + "desc":"The sequence functions provide a simple method to ensure security of multiple users for users to obtain sequence values from sequence objects.The hybrid data warehouse (s", + "product_code":"dws", + "title":"SEQUENCE Functions", + "uri":"dws_06_0043.html", + "doc_type":"devg", + "p_code":"713", + "code":"729" + }, + { + "desc":"=Description: Specifies whether two arrays are equal.For example:SELECT ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3] AS RESULT ;\n result \n--------\n t\n(1 row)Description: Spec", + "product_code":"dws", + "title":"Array Functions and Operators", + "uri":"dws_06_0044.html", + "doc_type":"devg", + "p_code":"713", + "code":"730" + }, + { + "desc":"=Description: EqualsFor example:SELECT int4range(1,5) = '[1,4]'::int4range AS RESULT;\n result\n--------\n t\n(1 row)Description: EqualsFor example:<>Description: Does not eq", + "product_code":"dws", + "title":"Range Functions and Operators", + "uri":"dws_06_0045.html", + "doc_type":"devg", + "p_code":"713", + "code":"731" + }, + { + "desc":"sum(expression)Description: Sum of expression across all input valuesReturn type:Generally, same as the argument data type. In the following cases, type conversion occurs", + "product_code":"dws", + "title":"Aggregate Functions", + "uri":"dws_06_0046.html", + "doc_type":"devg", + "p_code":"713", + "code":"732" + }, + { + "desc":"Regular aggregate functions return a single value calculated from values in a row, or group all rows into a single output row. Window functions perform a calculation acro", + "product_code":"dws", + "title":"Window Functions", + "uri":"dws_06_0047.html", + "doc_type":"devg", + "p_code":"713", + "code":"733" + }, + { + "desc":"gs_password_deadline()Description: Indicates the number of remaining days before the password of the current user expires. 
After the password expires, the system prompts ", + "product_code":"dws", + "title":"Security Functions", + "uri":"dws_06_0048.html", + "doc_type":"devg", + "p_code":"713", + "code":"734" + }, + { + "desc":"generate_series(start, stop)Description: Generates a series of values, from start to stop with a step size of one.Parameter type: int, bigint, or numericReturn type: seto", + "product_code":"dws", + "title":"Set Returning Functions", + "uri":"dws_06_0049.html", + "doc_type":"devg", + "p_code":"713", + "code":"735" + }, + { + "desc":"coalesce(expr1, expr2, ..., exprn)Description: Returns the first argument that is not NULL in the argument list.COALESCE(expr1, expr2) is equivalent to CASE WHEN expr1 IS", + "product_code":"dws", + "title":"Conditional Expression Functions", + "uri":"dws_06_0050.html", + "doc_type":"devg", + "p_code":"713", + "code":"736" + }, + { + "desc":"current_catalogDescription: Name of the current database (called \"catalog\" in the SQL standard)Return type: nameFor example:SELECT current_catalog;\n current_database\n----", + "product_code":"dws", + "title":"System Information Functions", + "uri":"dws_06_0051.html", + "doc_type":"devg", + "p_code":"713", + "code":"737" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"System Administration Functions", + "uri":"dws_06_0052.html", + "doc_type":"devg", + "p_code":"713", + "code":"738" + }, + { + "desc":"Configuration setting functions are used for querying and modifying configuration parameters during running.current_setting(setting_name)Description: Specifies the curren", + "product_code":"dws", + "title":"Configuration Settings Functions", + "uri":"dws_06_0053.html", + "doc_type":"devg", + "p_code":"738", + "code":"739" + }, + { + "desc":"Universal file access functions provide local access interfaces for files on a database server. Only files in the database cluster directory and the log_directory directo", + "product_code":"dws", + "title":"Universal File Access Functions", + "uri":"dws_06_0054.html", + "doc_type":"devg", + "p_code":"738", + "code":"740" + }, + { + "desc":"Server signaling functions send control signals to other server processes. 
Only system administrators can use these functions.pg_cancel_backend(pid int)Description: Cance", + "product_code":"dws", + "title":"Server Signaling Functions", + "uri":"dws_06_0055.html", + "doc_type":"devg", + "p_code":"738", + "code":"741" + }, + { + "desc":"Backup control functions help online backup.pg_create_restore_point(name text)Description: Creates a named point for performing the restore operation (restricted to syste", + "product_code":"dws", + "title":"Backup and Restoration Control Functions", + "uri":"dws_06_0056.html", + "doc_type":"devg", + "p_code":"738", + "code":"742" + }, + { + "desc":"Snapshot synchronization functions save the current snapshot and return its identifier.pg_export_snapshot()Description: Saves the current snapshot and returns its identif", + "product_code":"dws", + "title":"Snapshot Synchronization Functions", + "uri":"dws_06_0057.html", + "doc_type":"devg", + "p_code":"738", + "code":"743" + }, + { + "desc":"Database object size functions calculate the actual disk space used by database objects.pg_column_size(any)Description: Specifies the number of bytes used to store a part", + "product_code":"dws", + "title":"Database Object Functions", + "uri":"dws_06_0058.html", + "doc_type":"devg", + "p_code":"738", + "code":"744" + }, + { + "desc":"Advisory lock functions manage advisory locks. These functions are only for internal use currently.pg_advisory_lock(key bigint)Description: Obtains an exclusive session-l", + "product_code":"dws", + "title":"Advisory Lock Functions", + "uri":"dws_06_0059.html", + "doc_type":"devg", + "p_code":"738", + "code":"745" + }, + { + "desc":"pg_get_residualfiles()Description: Obtains all residual file records of the current node. This function is an instance-level function and is irrelevant to the current dat", + "product_code":"dws", + "title":"Residual File Management Functions", + "uri":"dws_06_0060.html", + "doc_type":"devg", + "p_code":"738", + "code":"746" + }, + { + "desc":"A replication function synchronizes logs and data between instances. It is a statistics or operation method provided by the system to implement HA.Replication functions e", + "product_code":"dws", + "title":"Replication Functions", + "uri":"dws_06_0061.html", + "doc_type":"devg", + "p_code":"738", + "code":"747" + }, + { + "desc":"pgxc_pool_check()Description: Checks whether the connection data buffered in the pool is consistent with pgxc_node.Return type: booleanDescription: Checks whether the con", + "product_code":"dws", + "title":"Other Functions", + "uri":"dws_06_0062.html", + "doc_type":"devg", + "p_code":"738", + "code":"748" + }, + { + "desc":"This section describes the functions of the resource management module.gs_wlm_readjust_user_space(oid)Description: This function calibrates the permanent storage space of", + "product_code":"dws", + "title":"Resource Management Functions", + "uri":"dws_06_0063.html", + "doc_type":"devg", + "p_code":"738", + "code":"749" + }, + { + "desc":"Data redaction functions are used to mask and protect sensitive data. 
Generally, you are advised to bind these functions to the columns to be redacted based on the data r", + "product_code":"dws", + "title":"Data Redaction Functions", + "uri":"dws_06_0064.html", + "doc_type":"devg", + "p_code":"713", + "code":"750" + }, + { + "desc":"Statistics information functions are divided into the following two categories: functions that access databases, using the OID of each table or index in a database to mar", + "product_code":"dws", + "title":"Statistics Information Functions", + "uri":"dws_06_0065.html", + "doc_type":"devg", + "p_code":"713", + "code":"751" + }, + { + "desc":"pg_get_triggerdef(oid)Description: Obtains the definition information of a trigger.Parameter: OID of the trigger to be queriedReturn type: textExample:select pg_get_trigg", + "product_code":"dws", + "title":"Trigger Functions", + "uri":"dws_06_0066.html", + "doc_type":"devg", + "p_code":"713", + "code":"752" + }, + { + "desc":"XMLPARSE ( { DOCUMENT | CONTENT } value)Description: Generates an XML value from character data.Return type: XMLExample:XMLSERIALIZE ( { DOCUMENT | CONTENT } value AS typ", + "product_code":"dws", + "title":"XML Functions", + "uri":"dws_06_0067.html", + "doc_type":"devg", + "p_code":"713", + "code":"753" + }, + { + "desc":"The pv_memory_profiling(type int) and environment variable MALLOC_CONF are used by GaussDB(DWS) to control the enabling and disabling of the memory allocation call stack ", + "product_code":"dws", + "title":"Call Stack Recording Functions", + "uri":"dws_06_0068.html", + "doc_type":"devg", + "p_code":"713", + "code":"754" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Expressions", + "uri":"dws_06_0069.html", + "doc_type":"devg", + "p_code":"686", + "code":"755" + }, + { + "desc":"Logical Operators lists the operators and calculation rules of logical expressions.Comparison Operators lists the common comparative operators.In addition to comparative ", + "product_code":"dws", + "title":"Simple Expressions", + "uri":"dws_06_0070.html", + "doc_type":"devg", + "p_code":"755", + "code":"756" + }, + { + "desc":"Data that meets the requirements specified by conditional expressions are filtered during SQL statement execution.Conditional expressions include the following types:CASE", + "product_code":"dws", + "title":"Conditional Expressions", + "uri":"dws_06_0071.html", + "doc_type":"devg", + "p_code":"755", + "code":"757" + }, + { + "desc":"Subquery expressions include the following types:EXISTS/NOT EXISTSFigure 1 shows the syntax of an EXISTS/NOT EXISTS expression.EXISTS/NOT EXISTS::=The parameter of an EXI", + "product_code":"dws", + "title":"Subquery Expressions", + "uri":"dws_06_0072.html", + "doc_type":"devg", + "p_code":"755", + "code":"758" + }, + { + "desc":"expressionIN(value [, ...])The parentheses on the right contain an expression list. The expression result on the left is compared with the content in the expression list.", + "product_code":"dws", + "title":"Array Expressions", + "uri":"dws_06_0073.html", + "doc_type":"devg", + "p_code":"755", + "code":"759" + }, + { + "desc":"Syntax:row_constructor operator row_constructorBoth sides of the row expression are row constructors. 
The values of both rows must have the same number of fields and they", + "product_code":"dws", + "title":"Row Expressions", + "uri":"dws_06_0074.html", + "doc_type":"devg", + "p_code":"755", + "code":"760" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Type Conversion", + "uri":"dws_06_0075.html", + "doc_type":"devg", + "p_code":"686", + "code":"761" + }, + { + "desc":"SQL is a typed language. That is, every data item has an associated data type which determines its behavior and allowed usage. GaussDB(DWS) has an extensible type system ", + "product_code":"dws", + "title":"Overview", + "uri":"dws_06_0076.html", + "doc_type":"devg", + "p_code":"761", + "code":"762" + }, + { + "desc":"Select the operators to be considered from the pg_operator system catalog. Considered operators are those with the matching name and argument count. If the search path fi", + "product_code":"dws", + "title":"Operators", + "uri":"dws_06_0077.html", + "doc_type":"devg", + "p_code":"761", + "code":"763" + }, + { + "desc":"Select the functions to be considered from the pg_proc system catalog. If a non-schema-qualified function name was used, the functions in the current search path are cons", + "product_code":"dws", + "title":"Functions", + "uri":"dws_06_0078.html", + "doc_type":"devg", + "p_code":"761", + "code":"764" + }, + { + "desc":"Search for an exact match with the target column.Try to convert the expression to the target type. This will succeed if there is a registered cast between the two types. ", + "product_code":"dws", + "title":"Value Storage", + "uri":"dws_06_0079.html", + "doc_type":"devg", + "p_code":"761", + "code":"765" + }, + { + "desc":"SQL UNION constructs must match up possibly dissimilar types to become a single result set. Since all query results from a SELECT UNION statement must appear in a single ", + "product_code":"dws", + "title":"UNION, CASE, and Related Constructs", + "uri":"dws_06_0080.html", + "doc_type":"devg", + "p_code":"761", + "code":"766" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Full Text Search", + "uri":"dws_06_0081.html", + "doc_type":"devg", + "p_code":"686", + "code":"767" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Introduction", + "uri":"dws_06_0082.html", + "doc_type":"devg", + "p_code":"767", + "code":"768" + }, + { + "desc":"Textual search operators have been used in databases for years. GaussDB(DWS) has ~, ~*, LIKE, and ILIKE operators for textual data types, but they lack many essential pro", + "product_code":"dws", + "title":"Full-Text Retrieval", + "uri":"dws_06_0083.html", + "doc_type":"devg", + "p_code":"768", + "code":"769" + }, + { + "desc":"A document is the unit of searching in a full text search system; for example, a magazine article or email message. 
The text search engine must be able to parse documents",
"product_code":"dws",
"title":"What Is a Document?",
"uri":"dws_06_0084.html",
"doc_type":"devg",
"p_code":"768",
"code":"770"
},
{
"desc":"Full text search in GaussDB(DWS) is based on the match operator @@, which returns true if a tsvector (document) matches a tsquery (query). It does not matter which data t",
"product_code":"dws",
"title":"Basic Text Matching",
"uri":"dws_06_0085.html",
"doc_type":"devg",
"p_code":"768",
"code":"771"
},
{
"desc":"Full text search functionality includes the ability to do many more things: skip indexing certain words (stop words), process synonyms, and use sophisticated parsing, for",
"product_code":"dws",
"title":"Configurations",
"uri":"dws_06_0086.html",
"doc_type":"devg",
"p_code":"768",
"code":"772"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"dws",
"title":"Table and index",
"uri":"dws_06_0087.html",
"doc_type":"devg",
"p_code":"767",
"code":"773"
},
{
"desc":"It is possible to do a full text search without an index.A simple query to print each row that contains the word science in its body column is as follows:DROP SCHEMA IF E",
"product_code":"dws",
"title":"Searching a Table",
"uri":"dws_06_0088.html",
"doc_type":"devg",
"p_code":"773",
"code":"774"
},
{
"desc":"You can create a GIN index to speed up text searches:The to_tsvector() function accepts one or two arguments.If the one-argument version of the index is used, the system wi",
"product_code":"dws",
"title":"Creating an Index",
"uri":"dws_06_0089.html",
"doc_type":"devg",
"p_code":"773",
"code":"775"
},
{
"desc":"The following is an example of using an index. Run the following statements in a database that uses the UTF-8 or GBK encoding:In this example, table1 has two GIN indexes ",
"product_code":"dws",
"title":"Constraints on Index Use",
"uri":"dws_06_0090.html",
"doc_type":"devg",
"p_code":"773",
"code":"776"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"dws",
"title":"Controlling Text Search",
"uri":"dws_06_0091.html",
"doc_type":"devg",
"p_code":"767",
"code":"777"
},
{
"desc":"GaussDB(DWS) provides function to_tsvector for converting a document to the tsvector data type.to_tsvector parses a textual document into tokens, reduces the tokens to le",
"product_code":"dws",
"title":"Parsing Documents",
"uri":"dws_06_0092.html",
"doc_type":"devg",
"p_code":"777",
"code":"778"
},
{
"desc":"GaussDB(DWS) provides functions to_tsquery and plainto_tsquery for converting a query to the tsquery data type. to_tsquery offers access to more features than plainto_tsq",
"product_code":"dws",
"title":"Parsing Queries",
"uri":"dws_06_0093.html",
"doc_type":"devg",
"p_code":"777",
"code":"779"
},
{
"desc":"Ranking attempts to measure how relevant documents are to a particular query, so that when there are many matches the most relevant ones can be shown first. 
GaussDB(DWS) ", + "product_code":"dws", + "title":"Ranking Search Results", + "uri":"dws_06_0094.html", + "doc_type":"devg", + "p_code":"777", + "code":"780" + }, + { + "desc":"To present search results it is ideal to show a part of each document and how it is related to the query. Usually, search engines show fragments of the document with mark", + "product_code":"dws", + "title":"Highlighting Results", + "uri":"dws_06_0095.html", + "doc_type":"devg", + "p_code":"777", + "code":"781" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Additional Features", + "uri":"dws_06_0096.html", + "doc_type":"devg", + "p_code":"767", + "code":"782" + }, + { + "desc":"GaussDB(DWS) provides functions and operators that can be used to manipulate documents that are already in tsvector type.tsvector || tsvectorThe tsvector concatenation op", + "product_code":"dws", + "title":"Manipulating tsvector", + "uri":"dws_06_0097.html", + "doc_type":"devg", + "p_code":"782", + "code":"783" + }, + { + "desc":"GaussDB(DWS) provides functions and operators that can be used to manipulate queries that are already in tsquery type.tsquery && tsqueryReturns the AND-combination of the", + "product_code":"dws", + "title":"Manipulating Queries", + "uri":"dws_06_0098.html", + "doc_type":"devg", + "p_code":"782", + "code":"784" + }, + { + "desc":"The ts_rewrite family of functions searches a given tsquery for occurrences of a target subquery, and replace each occurrence with a substitute subquery. In essence this ", + "product_code":"dws", + "title":"Rewriting Queries", + "uri":"dws_06_0099.html", + "doc_type":"devg", + "p_code":"782", + "code":"785" + }, + { + "desc":"The function ts_stat is useful for checking your configuration and for finding stop-word candidates.sqlquery is a text value containing an SQL query which must return a s", + "product_code":"dws", + "title":"Gathering Document Statistics", + "uri":"dws_06_0100.html", + "doc_type":"devg", + "p_code":"782", + "code":"786" + }, + { + "desc":"Text search parsers are responsible for splitting raw document text into tokens and identifying each token's type, where the set of types is defined by the parser itself.", + "product_code":"dws", + "title":"Parsers", + "uri":"dws_06_0101.html", + "doc_type":"devg", + "p_code":"767", + "code":"787" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Dictionaries", + "uri":"dws_06_0102.html", + "doc_type":"devg", + "p_code":"767", + "code":"788" + }, + { + "desc":"A dictionary is used to define stop words, that is, words to be ignored in full-text retrieval.A dictionary can also be used to normalize words so that different derived ", + "product_code":"dws", + "title":"Overview", + "uri":"dws_06_0103.html", + "doc_type":"devg", + "p_code":"788", + "code":"789" + }, + { + "desc":"Stop words are words that are very common, appear in almost every document, and have no discrimination value. 
Therefore, they can be ignored in the context of full text s", + "product_code":"dws", + "title":"Stop Words", + "uri":"dws_06_0104.html", + "doc_type":"devg", + "p_code":"788", + "code":"790" + }, + { + "desc":"A Simple dictionary operates by converting the input token to lower case and checking it against a list of stop words. If the token is found in the list, an empty array w", + "product_code":"dws", + "title":"Simple Dictionary", + "uri":"dws_06_0105.html", + "doc_type":"devg", + "p_code":"788", + "code":"791" + }, + { + "desc":"A synonym dictionary is used to define, identify, and convert synonyms of tokens. Phrases are not supported (use the thesaurus dictionary in Thesaurus Dictionary).A synon", + "product_code":"dws", + "title":"Synonym Dictionary", + "uri":"dws_06_0106.html", + "doc_type":"devg", + "p_code":"788", + "code":"792" + }, + { + "desc":"A thesaurus dictionary (sometimes abbreviated as TZ) is a collection of words that include relationships between words and phrases, such as broader terms (BT), narrower t", + "product_code":"dws", + "title":"Thesaurus Dictionary", + "uri":"dws_06_0107.html", + "doc_type":"devg", + "p_code":"788", + "code":"793" + }, + { + "desc":"The Ispell dictionary template supports morphological dictionaries, which can normalize many different linguistic forms of a word into the same lexeme. For example, an En", + "product_code":"dws", + "title":"Ispell Dictionary", + "uri":"dws_06_0108.html", + "doc_type":"devg", + "p_code":"788", + "code":"794" + }, + { + "desc":"A Snowball dictionary is based on a project by Martin Porter and is used for stem analysis, providing stemming algorithms for many languages. GaussDB(DWS) provides predef", + "product_code":"dws", + "title":"Snowball Dictionary", + "uri":"dws_06_0109.html", + "doc_type":"devg", + "p_code":"788", + "code":"795" + }, + { + "desc":"Text search configuration specifies the following components required for converting a document into a tsvector:A parser, decomposes a text into tokens.Dictionary list, c", + "product_code":"dws", + "title":"Configuration Examples", + "uri":"dws_06_0110.html", + "doc_type":"devg", + "p_code":"767", + "code":"796" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Testing and Debugging Text Search", + "uri":"dws_06_0111.html", + "doc_type":"devg", + "p_code":"767", + "code":"797" + }, + { + "desc":"The function ts_debug allows easy testing of a text search configuration.ts_debug displays information about every token of document as produced by the parser and process", + "product_code":"dws", + "title":"Testing a Configuration", + "uri":"dws_06_0112.html", + "doc_type":"devg", + "p_code":"797", + "code":"798" + }, + { + "desc":"The ts_parse function allows direct testing of a text search parser.ts_parse parses the given document and returns a series of records, one for each token produced by par", + "product_code":"dws", + "title":"Testing a Parser", + "uri":"dws_06_0113.html", + "doc_type":"devg", + "p_code":"797", + "code":"799" + }, + { + "desc":"The ts_lexize function facilitates dictionary testing.ts_lexize(dict regdictionary, token text) returns text[] ts_lexize returns an array of lexemes if the input token is", + "product_code":"dws", + "title":"Testing a Dictionary", + "uri":"dws_06_0114.html", + "doc_type":"devg", + "p_code":"797", + "code":"800" + }, + { + "desc":"The current limitations of GaussDB(DWS)'s full text search are:The length of each lexeme must be less than 2 KB.The length of a tsvector (lexemes + positions) must be les", + "product_code":"dws", + "title":"Limitations", + "uri":"dws_06_0115.html", + "doc_type":"devg", + "p_code":"767", + "code":"801" + }, + { + "desc":"GaussDB(DWS) runs SQL statements to perform different system operations, such as setting variables, displaying the execution plan, and collecting garbage data.For details", + "product_code":"dws", + "title":"System Operation", + "uri":"dws_06_0116.html", + "doc_type":"devg", + "p_code":"686", + "code":"802" + }, + { + "desc":"A transaction is a user-defined sequence of database operations, which form an integral unit of work.GaussDB(DWS) starts a transaction using START TRANSACTION and BEGIN. ", + "product_code":"dws", + "title":"Controlling Transactions", + "uri":"dws_06_0117.html", + "doc_type":"devg", + "p_code":"686", + "code":"803" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"DDL Syntax", + "uri":"dws_06_0118.html", + "doc_type":"devg", + "p_code":"686", + "code":"804" + }, + { + "desc":"Data definition language (DDL) is used to define or modify an object in a database, such as a table, index, or view.GaussDB(DWS) does not support DDL if its CN is unavail", + "product_code":"dws", + "title":"DDL Syntax Overview", + "uri":"dws_06_0119.html", + "doc_type":"devg", + "p_code":"804", + "code":"805" + }, + { + "desc":"This command is used to modify the attributes of a database, including the database name, owner, maximum number of connections, and object isolation attribute.Only the ow", + "product_code":"dws", + "title":"ALTER DATABASE", + "uri":"dws_06_0120.html", + "doc_type":"devg", + "p_code":"804", + "code":"806" + }, + { + "desc":"ALTER FOREIGN TABLE modifies a foreign table.NoneSet the attributes of a foreign table.ALTER FOREIGN TABLE [ IF EXISTS ] table_name\n OPTIONS ( {[ ADD | SET | DROP ] o", + "product_code":"dws", + "title":"ALTER FOREIGN TABLE (for GDS)", + "uri":"dws_06_0123.html", + "doc_type":"devg", + "p_code":"804", + "code":"807" + }, + { + "desc":"ALTER FOREIGN TABLE modifies an HDFS or OBS foreign table.NoneSet a foreign table's attributes.ALTER FOREIGN TABLE [ IF EXISTS ] table_name\n OPTIONS ( {[ ADD | SET | ", + "product_code":"dws", + "title":"ALTER FOREIGN TABLE (for HDFS or OBS)", + "uri":"dws_06_0124.html", + "doc_type":"devg", + "p_code":"804", + "code":"808" + }, + { + "desc":"ALTER FUNCTION modifies the attributes of a customized function.Only the owner of a function or a system administrator can run this statement. If a function involves oper", + "product_code":"dws", + "title":"ALTER FUNCTION", + "uri":"dws_06_0126.html", + "doc_type":"devg", + "p_code":"804", + "code":"809" + }, + { + "desc":"ALTER GROUP modifies the attributes of a user group.ALTER GROUP is an alias for ALTER ROLE, and it is not a standard SQL command and not recommended. Users can use ALTER ", + "product_code":"dws", + "title":"ALTER GROUP", + "uri":"dws_06_0127.html", + "doc_type":"devg", + "p_code":"804", + "code":"810" + }, + { + "desc":"ALTER INDEX modifies the definition of an existing index.There are several sub-forms:IF EXISTSIf the specified index does not exist, a notice instead of an error is sent.", + "product_code":"dws", + "title":"ALTER INDEX", + "uri":"dws_06_0128.html", + "doc_type":"devg", + "p_code":"804", + "code":"811" + }, + { + "desc":"ALTER LARGE OBJECT modifies the definition of a large object. 
It can only assign a new owner to a large object.Only the administrator or the owner of the to-be-modified l",
"product_code":"dws",
"title":"ALTER LARGE OBJECT",
"uri":"dws_06_0129.html",
"doc_type":"devg",
"p_code":"804",
"code":"812"
},
{
"desc":"ALTER REDACTION POLICY modifies a data redaction policy applied to a specified table.Only the owner of the table to which the redaction policy is applied has the permissi",
"product_code":"dws",
"title":"ALTER REDACTION POLICY",
"uri":"dws_06_0132.html",
"doc_type":"devg",
"p_code":"804",
"code":"813"
},
{
"desc":"ALTER RESOURCE POOL changes the Cgroup of a resource pool.Users having the ALTER permission can modify resource pools.pool_nameSpecifies the name of the resource pool.The",
"product_code":"dws",
"title":"ALTER RESOURCE POOL",
"uri":"dws_06_0133.html",
"doc_type":"devg",
"p_code":"804",
"code":"814"
},
{
"desc":"ALTER ROLE changes the attributes of a role.NoneModifying the Rights of a RoleALTER ROLE role_name [ [ WITH ] option [ ... ] ];The option clause for granting rights is as",
"product_code":"dws",
"title":"ALTER ROLE",
"uri":"dws_06_0134.html",
"doc_type":"devg",
"p_code":"804",
"code":"815"
},
{
"desc":"ALTER ROW LEVEL SECURITY POLICY modifies an existing row-level access control policy, including the policy name and the users and expressions affected by the policy.Only ",
"product_code":"dws",
"title":"ALTER ROW LEVEL SECURITY POLICY",
"uri":"dws_06_0135.html",
"doc_type":"devg",
"p_code":"804",
"code":"816"
},
{
"desc":"ALTER SCHEMA changes the attributes of a schema.Only the owner of the schema or a system administrator can run this statement.Rename a schema.ALTER SCHEMA schema_name \n ",
"product_code":"dws",
"title":"ALTER SCHEMA",
"uri":"dws_06_0136.html",
"doc_type":"devg",
"p_code":"804",
"code":"817"
},
{
"desc":"ALTER SEQUENCE modifies the parameters of an existing sequence.You must be the owner of the sequence to use ALTER SEQUENCE.In the current version, you can modify only the",
"product_code":"dws",
"title":"ALTER SEQUENCE",
"uri":"dws_06_0137.html",
"doc_type":"devg",
"p_code":"804",
"code":"818"
},
{
"desc":"ALTER SERVER adds, modifies, or deletes the parameters of an existing server. You can query existing servers from the pg_foreign_server system catalog.Only the owner of a",
"product_code":"dws",
"title":"ALTER SERVER",
"uri":"dws_06_0138.html",
"doc_type":"devg",
"p_code":"804",
"code":"819"
},
{
"desc":"ALTER SESSION defines or modifies the conditions or parameters that affect the current session. 
Modified session parameters are kept until the current session is disconne",
"product_code":"dws",
"title":"ALTER SESSION",
"uri":"dws_06_0139.html",
"doc_type":"devg",
"p_code":"804",
"code":"820"
},
{
"desc":"ALTER SYNONYM is used to modify the attribute of a synonym.Only the synonym owner can be changed.Only the system administrator and the synonym owner have the permission to",
"product_code":"dws",
"title":"ALTER SYNONYM",
"uri":"dws_06_0140.html",
"doc_type":"devg",
"p_code":"804",
"code":"821"
},
{
"desc":"ALTER SYSTEM KILL SESSION ends a session.Nonesession_sid, serialSpecifies SID and SERIAL of a session (see examples for format).Value range: The SIDs and SERIALs of all s",
"product_code":"dws",
"title":"ALTER SYSTEM KILL SESSION",
"uri":"dws_06_0141.html",
"doc_type":"devg",
"p_code":"804",
"code":"822"
},
{
"desc":"ALTER TABLE is used to modify tables, including modifying table definitions, renaming tables, renaming specified columns in tables, renaming table constraints, setting ta",
"product_code":"dws",
"title":"ALTER TABLE",
"uri":"dws_06_0142.html",
"doc_type":"devg",
"p_code":"804",
"code":"823"
},
{
"desc":"ALTER TABLE PARTITION modifies table partitioning, including adding, deleting, splitting, merging partitions, and modifying partition attributes.The name of the added par",
"product_code":"dws",
"title":"ALTER TABLE PARTITION",
"uri":"dws_06_0143.html",
"doc_type":"devg",
"p_code":"804",
"code":"824"
},
{
"desc":"ALTER TEXT SEARCH CONFIGURATION modifies the definition of a text search configuration. You can modify its mappings from token types to dictionaries, change the configura",
"product_code":"dws",
"title":"ALTER TEXT SEARCH CONFIGURATION",
"uri":"dws_06_0145.html",
"doc_type":"devg",
"p_code":"804",
"code":"825"
},
{
"desc":"ALTER TEXT SEARCH DICTIONARY modifies the definition of a full-text retrieval dictionary, including its parameters, name, owner, and schema.ALTER is not supported by pred",
"product_code":"dws",
"title":"ALTER TEXT SEARCH DICTIONARY",
"uri":"dws_06_0146.html",
"doc_type":"devg",
"p_code":"804",
"code":"826"
},
{
"desc":"ALTER TRIGGER modifies the definition of a trigger.Only the owner of a table where a trigger is created and system administrators can run the ALTER TRIGGER statement.trig",
"product_code":"dws",
"title":"ALTER TRIGGER",
"uri":"dws_06_0147.html",
"doc_type":"devg",
"p_code":"804",
"code":"827"
},
{
"desc":"ALTER TYPE modifies the definition of a type.Modify a type.ALTER TYPE name action [, ... ]\nALTER TYPE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }\nALTER TYPE",
"product_code":"dws",
"title":"ALTER TYPE",
"uri":"dws_06_0148.html",
"doc_type":"devg",
"p_code":"804",
"code":"828"
},
{
"desc":"ALTER USER modifies the attributes of a database user.Session parameters modified by ALTER USER apply to a specified user and take effect in the next session.Modify user ",
"product_code":"dws",
"title":"ALTER USER",
"uri":"dws_06_0149.html",
"doc_type":"devg",
"p_code":"804",
"code":"829"
},
{
"desc":"ALTER VIEW modifies all auxiliary attributes of a view. 
(To modify the query definition of a view, use CREATE OR REPLACE VIEW.)Only the view owner can modify a view by ru", + "product_code":"dws", + "title":"ALTER VIEW", + "uri":"dws_06_0150.html", + "doc_type":"devg", + "p_code":"804", + "code":"830" + }, + { + "desc":"CLEAN CONNECTION clears database connections when a database is abnormal. You may use this statement to delete a specific user's connections to a specified database.NoneC", + "product_code":"dws", + "title":"CLEAN CONNECTION", + "uri":"dws_06_0151.html", + "doc_type":"devg", + "p_code":"804", + "code":"831" + }, + { + "desc":"CLOSE frees the resources associated with an open cursor.After a cursor is closed, no subsequent operations are allowed on it.A cursor should be closed when it is no long", + "product_code":"dws", + "title":"CLOSE", + "uri":"dws_06_0152.html", + "doc_type":"devg", + "p_code":"804", + "code":"832" + }, + { + "desc":"Cluster a table according to an index.CLUSTER instructs GaussDB(DWS) to cluster the table specified by table_name based on the index specified by index_name. The index mu", + "product_code":"dws", + "title":"CLUSTER", + "uri":"dws_06_0153.html", + "doc_type":"devg", + "p_code":"804", + "code":"833" + }, + { + "desc":"COMMENT defines or changes the comment of an object.Only one comment string is stored for each object. To modify a comment, issue a new COMMENT command for the same objec", + "product_code":"dws", + "title":"COMMENT", + "uri":"dws_06_0154.html", + "doc_type":"devg", + "p_code":"804", + "code":"834" + }, + { + "desc":"Creates a barrier for cluster nodes. The barrier can be used for data restoration.Before creating a barrier, ensure that gtm_backup_barrier and enable_cbm_tracking are se", + "product_code":"dws", + "title":"CREATE BARRIER", + "uri":"dws_06_0155.html", + "doc_type":"devg", + "p_code":"804", + "code":"835" + }, + { + "desc":"CREATE DATABASE creates a database. By default, the new database will be created by cloning the standard system database template1. A different template can be specified ", + "product_code":"dws", + "title":"CREATE DATABASE", + "uri":"dws_06_0156.html", + "doc_type":"devg", + "p_code":"804", + "code":"836" + }, + { + "desc":"CREATE FOREIGN TABLE creates a GDS foreign table.CREATE FOREIGN TABLE creates a GDS foreign table in the current database for concurrent data import and export. The GDS f", + "product_code":"dws", + "title":"CREATE FOREIGN TABLE (for GDS Import and Export)", + "uri":"dws_06_0159.html", + "doc_type":"devg", + "p_code":"804", + "code":"837" + }, + { + "desc":"CREATE FOREIGN TABLE creates an HDFS or OBS foreign table in the current database to access or export structured data stored on HDFS or OBS. You can also export data in O", + "product_code":"dws", + "title":"CREATE FOREIGN TABLE (SQL on OBS or Hadoop)", + "uri":"dws_06_0161.html", + "doc_type":"devg", + "p_code":"804", + "code":"838" + }, + { + "desc":"CREATE FOREIGN TABLE creates a foreign table in the current database for parallel data import and export of OBS data. 
The server used is gsmpp_server, which is created by",
"product_code":"dws",
"title":"CREATE FOREIGN TABLE (for OBS Import and Export)",
"uri":"dws_06_0160.html",
"doc_type":"devg",
"p_code":"804",
"code":"839"
},
{
"desc":"CREATE FUNCTION creates a function.The precision values (if any) of the parameters or return values of a function are not checked.When creating a function, you are advise",
"product_code":"dws",
"title":"CREATE FUNCTION",
"uri":"dws_06_0163.html",
"doc_type":"devg",
"p_code":"804",
"code":"840"
},
{
"desc":"CREATE GROUP creates a user group.CREATE GROUP is an alias for CREATE ROLE, and it is not a standard SQL command and not recommended. Users can use CREATE ROLE directly.T",
"product_code":"dws",
"title":"CREATE GROUP",
"uri":"dws_06_0164.html",
"doc_type":"devg",
"p_code":"804",
"code":"841"
},
{
"desc":"CREATE INDEX defines a new index.Indexes are primarily used to enhance database performance (though inappropriate use can result in slower database performance). You ",
"product_code":"dws",
"title":"CREATE INDEX",
"uri":"dws_06_0165.html",
"doc_type":"devg",
"p_code":"804",
"code":"842"
},
{
"desc":"CREATE REDACTION POLICY creates a data redaction policy for a table.Only the table owner has the permission to create a data redaction policy.You can create data redactio",
"product_code":"dws",
"title":"CREATE REDACTION POLICY",
"uri":"dws_06_0168.html",
"doc_type":"devg",
"p_code":"804",
"code":"843"
},
{
"desc":"CREATE ROW LEVEL SECURITY POLICY creates a row-level access control policy for a table.The policy takes effect only after row-level access control is enabled (by running ",
"product_code":"dws",
"title":"CREATE ROW LEVEL SECURITY POLICY",
"uri":"dws_06_0169.html",
"doc_type":"devg",
"p_code":"804",
"code":"844"
},
{
"desc":"CREATE PROCEDURE creates a stored procedure.The precision values (if any) of the parameters or return values of a stored procedure are not checked.When creating a stored ",
"product_code":"dws",
"title":"CREATE PROCEDURE",
"uri":"dws_06_0170.html",
"doc_type":"devg",
"p_code":"804",
"code":"845"
},
{
"desc":"CREATE RESOURCE POOL creates a resource pool and specifies the Cgroup for the resource pool.As long as the current user has CREATE permission, it can create a resource po",
"product_code":"dws",
"title":"CREATE RESOURCE POOL",
"uri":"dws_06_0171.html",
"doc_type":"devg",
"p_code":"804",
"code":"846"
},
{
"desc":"Create a role.A role is an entity that owns database objects and permissions. In different environments, a role can be considered a user, a group, or both.CREATE ROLE ",
"product_code":"dws",
"title":"CREATE ROLE",
"uri":"dws_06_0172.html",
"doc_type":"devg",
"p_code":"804",
"code":"847"
},
{
"desc":"CREATE SCHEMA creates a schema.Named objects are accessed either by \"qualifying\" their names with the schema name as a prefix, or by setting a search path that includes t",
"product_code":"dws",
"title":"CREATE SCHEMA",
"uri":"dws_06_0173.html",
"doc_type":"devg",
"p_code":"804",
"code":"848"
},
{
"desc":"CREATE SEQUENCE adds a sequence to the current database. 
The owner of a sequence is the user who creates the sequence.A sequence is a special table that stores arithmetic", + "product_code":"dws", + "title":"CREATE SEQUENCE", + "uri":"dws_06_0174.html", + "doc_type":"devg", + "p_code":"804", + "code":"849" + }, + { + "desc":"CREATE SERVER creates an external server.An external server stores information of HDFS clusters, OBS servers, DLI connections, or other homogeneous clusters.By default, o", + "product_code":"dws", + "title":"CREATE SERVER", + "uri":"dws_06_0175.html", + "doc_type":"devg", + "p_code":"804", + "code":"850" + }, + { + "desc":"CREATE SYNONYM is used to create a synonym object. A synonym is an alias of a database object and is used to record the mapping between database object names. You can use", + "product_code":"dws", + "title":"CREATE SYNONYM", + "uri":"dws_06_0176.html", + "doc_type":"devg", + "p_code":"804", + "code":"851" + }, + { + "desc":"CREATE TABLE creates a table in the current database. The table will be owned by the user who created it.For details about the data types supported by column-store tables", + "product_code":"dws", + "title":"CREATE TABLE", + "uri":"dws_06_0177.html", + "doc_type":"devg", + "p_code":"804", + "code":"852" + }, + { + "desc":"CREATE TABLE AS creates a table based on the results of a query.It creates a table and fills it with data obtained using SELECT. The table columns have the names and data", + "product_code":"dws", + "title":"CREATE TABLE AS", + "uri":"dws_06_0178.html", + "doc_type":"devg", + "p_code":"804", + "code":"853" + }, + { + "desc":"CREATE TABLE PARTITION creates a partitioned table. Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific sche", + "product_code":"dws", + "title":"CREATE TABLE PARTITION", + "uri":"dws_06_0179.html", + "doc_type":"devg", + "p_code":"804", + "code":"854" + }, + { + "desc":"CREATE TEXT SEARCH CONFIGURATION creates a text search configuration. A text search configuration specifies a text search parser that can divide a string into tokens, plu", + "product_code":"dws", + "title":"CREATE TEXT SEARCH CONFIGURATION", + "uri":"dws_06_0182.html", + "doc_type":"devg", + "p_code":"804", + "code":"855" + }, + { + "desc":"CREATE TEXT SEARCH DICTIONARY creates a full-text search dictionary. A dictionary is used to identify and process specified words during full-text search.Dictionaries are", + "product_code":"dws", + "title":"CREATE TEXT SEARCH DICTIONARY", + "uri":"dws_06_0183.html", + "doc_type":"devg", + "p_code":"804", + "code":"856" + }, + { + "desc":"CREATE TRIGGER creates a trigger. The trigger will be associated with a specified table or view, and will execute a specified function when certain events occur.Currently", + "product_code":"dws", + "title":"CREATE TRIGGER", + "uri":"dws_06_0184.html", + "doc_type":"devg", + "p_code":"804", + "code":"857" + }, + { + "desc":"CREATE TYPE defines a new data type in the current database. The user who defines a new data type becomes its owner. 
Types are designed only for row-store tables.Four typ", + "product_code":"dws", + "title":"CREATE TYPE", + "uri":"dws_06_0185.html", + "doc_type":"devg", + "p_code":"804", + "code":"858" + }, + { + "desc":"CREATE USER creates a user.A user created using the CREATE USER statement has the LOGIN permission by default.A schema named after the user is automatically created in th", + "product_code":"dws", + "title":"CREATE USER", + "uri":"dws_06_0186.html", + "doc_type":"devg", + "p_code":"804", + "code":"859" + }, + { + "desc":"CREATE VIEW creates a view. A view is a virtual table, not a base table. A database only stores the definition of a view and does not store its data. The data is still st", + "product_code":"dws", + "title":"CREATE VIEW", + "uri":"dws_06_0187.html", + "doc_type":"devg", + "p_code":"804", + "code":"860" + }, + { + "desc":"CURSOR defines a cursor. This command retrieves few rows of data in a query.To process SQL statements, the stored procedure process assigns a memory segment to store cont", + "product_code":"dws", + "title":"CURSOR", + "uri":"dws_06_0188.html", + "doc_type":"devg", + "p_code":"804", + "code":"861" + }, + { + "desc":"DROP DATABASE deletes a database.Only the owner of a database or a system administrator has the permission to run the DROP DATABASE command.DROP DATABASE does not take ef", + "product_code":"dws", + "title":"DROP DATABASE", + "uri":"dws_06_0189.html", + "doc_type":"devg", + "p_code":"804", + "code":"862" + }, + { + "desc":"DROP FOREIGN TABLE deletes a specified foreign table.DROP FOREIGN TABLE forcibly deletes a specified table. After a table is deleted, any indexes that exist for the table", + "product_code":"dws", + "title":"DROP FOREIGN TABLE", + "uri":"dws_06_0192.html", + "doc_type":"devg", + "p_code":"804", + "code":"863" + }, + { + "desc":"DROP FUNCTION deletes an existing function.If a function involves operations on temporary tables, the function cannot be deleted by running DROP FUNCTION.IF EXISTSSends a", + "product_code":"dws", + "title":"DROP FUNCTION", + "uri":"dws_06_0193.html", + "doc_type":"devg", + "p_code":"804", + "code":"864" + }, + { + "desc":"DROP GROUP deletes a user group.DROP GROUP is the alias for DROP ROLE.DROP GROUP is the internal interface encapsulated in the gs_om tool. 
You are not advised to use this", + "product_code":"dws", + "title":"DROP GROUP", + "uri":"dws_06_0194.html", + "doc_type":"devg", + "p_code":"804", + "code":"865" + }, + { + "desc":"DROP INDEX deletes an index.Only the owner of an index or a system administrator can run DROP INDEX command.IF EXISTSSends a notice instead of an error if the specified i", + "product_code":"dws", + "title":"DROP INDEX", + "uri":"dws_06_0195.html", + "doc_type":"devg", + "p_code":"804", + "code":"866" + }, + { + "desc":"DROP OWNED deletes the database objects of a database role.The role's permissions on all the database objects in the current database and shared objects (databases and ta", + "product_code":"dws", + "title":"DROP OWNED", + "uri":"dws_06_0198.html", + "doc_type":"devg", + "p_code":"804", + "code":"867" + }, + { + "desc":"DROP REDACTION POLICY deletes a data redaction policy applied to a specified table.Only the table owner has the permission to delete a data redaction policy.IF EXISTSSend", + "product_code":"dws", + "title":"DROP REDACTION POLICY", + "uri":"dws_06_0199.html", + "doc_type":"devg", + "p_code":"804", + "code":"868" + }, + { + "desc":"DROP ROW LEVEL SECURITY POLICY deletes a row-level access control policy from a table.Only the table owner or administrators can delete a row-level access control policy ", + "product_code":"dws", + "title":"DROP ROW LEVEL SECURITY POLICY", + "uri":"dws_06_0200.html", + "doc_type":"devg", + "p_code":"804", + "code":"869" + }, + { + "desc":"DROP PROCEDURE deletes an existing stored procedure.None.IF EXISTSSends a notice instead of an error if the stored procedure does not exist.Sends a notice instead of an e", + "product_code":"dws", + "title":"DROP PROCEDURE", + "uri":"dws_06_0201.html", + "doc_type":"devg", + "p_code":"804", + "code":"870" + }, + { + "desc":"DROP RESOURCE POOL deletes a resource pool.The resource pool cannot be deleted if it is associated with a role.The user must have the DROP permission in order to delete a", + "product_code":"dws", + "title":"DROP RESOURCE POOL", + "uri":"dws_06_0202.html", + "doc_type":"devg", + "p_code":"804", + "code":"871" + }, + { + "desc":"DROP ROLE deletes a specified role.If a \"role is being used by other users\" error is displayed when you run DROP ROLE, it might be that threads cannot respond to signals ", + "product_code":"dws", + "title":"DROP ROLE", + "uri":"dws_06_0203.html", + "doc_type":"devg", + "p_code":"804", + "code":"872" + }, + { + "desc":"DROP SCHEMA deletes a schema in a database.Only a schema owner or a system administrator can run the DROP SCHEMA command.IF EXISTSSends a notice instead of an error if th", + "product_code":"dws", + "title":"DROP SCHEMA", + "uri":"dws_06_0204.html", + "doc_type":"devg", + "p_code":"804", + "code":"873" + }, + { + "desc":"DROP SEQUENCE deletes a sequence from the current database.Only a sequence owner or a system administrator can delete a sequence.IF EXISTSSends a notice instead of an err", + "product_code":"dws", + "title":"DROP SEQUENCE", + "uri":"dws_06_0205.html", + "doc_type":"devg", + "p_code":"804", + "code":"874" + }, + { + "desc":"DROP SERVER deletes an existing data server.Only the server owner can delete a server.IF EXISTSSends a notice instead of an error if the specified table does not exist.Se", + "product_code":"dws", + "title":"DROP SERVER", + "uri":"dws_06_0206.html", + "doc_type":"devg", + "p_code":"804", + "code":"875" + }, + { + "desc":"DROP SYNONYM is used to delete a synonym object.Only a synonym owner or a system administrator can run 
the DROP SYNONYM command.IF EXISTSSend a notice instead of reportin", + "product_code":"dws", + "title":"DROP SYNONYM", + "uri":"dws_06_0207.html", + "doc_type":"devg", + "p_code":"804", + "code":"876" + }, + { + "desc":"DROP TABLE deletes a specified table.Only the table owner, schema owner, and system administrator have the permission to delete a table. To delete all the rows in a table", + "product_code":"dws", + "title":"DROP TABLE", + "uri":"dws_06_0208.html", + "doc_type":"devg", + "p_code":"804", + "code":"877" + }, + { + "desc":"DROP TEXT SEARCH CONFIGURATION deletes an existing text search configuration.To run the DROP TEXT SEARCH CONFIGURATION command, you must be the owner of the text search c", + "product_code":"dws", + "title":"DROP TEXT SEARCH CONFIGURATION", + "uri":"dws_06_0210.html", + "doc_type":"devg", + "p_code":"804", + "code":"878" + }, + { + "desc":"DROPTEXT SEARCHDICTIONARY deletes a full-text retrieval dictionary.DROP is not supported by predefined dictionaries.Only the owner of a dictionary can do DROP to the dict", + "product_code":"dws", + "title":"DROP TEXT SEARCH DICTIONARY", + "uri":"dws_06_0211.html", + "doc_type":"devg", + "p_code":"804", + "code":"879" + }, + { + "desc":"DROP TRIGGER deletes a trigger.Only the owner of a trigger and system administrators can run the DROP TRIGGER statement.IF EXISTSSends a notice instead of an error if the", + "product_code":"dws", + "title":"DROP TRIGGER", + "uri":"dws_06_0212.html", + "doc_type":"devg", + "p_code":"804", + "code":"880" + }, + { + "desc":"DROP TYPE deletes a user-defined data type. Only the type owner has permission to run this statement.IF EXISTSSends a notice instead of an error if the specified type doe", + "product_code":"dws", + "title":"DROP TYPE", + "uri":"dws_06_0213.html", + "doc_type":"devg", + "p_code":"804", + "code":"881" + }, + { + "desc":"Deleting a user will also delete the schema having the same name as the user.CASCADE is used to delete objects (excluding databases) that depend on the user. CASCADE cann", + "product_code":"dws", + "title":"DROP USER", + "uri":"dws_06_0214.html", + "doc_type":"devg", + "p_code":"804", + "code":"882" + }, + { + "desc":"DROP VIEW forcibly deletes an existing view in a database.Only a view owner or a system administrator can run DROP VIEW command.IF EXISTSSends a notice instead of an erro", + "product_code":"dws", + "title":"DROP VIEW", + "uri":"dws_06_0215.html", + "doc_type":"devg", + "p_code":"804", + "code":"883" + }, + { + "desc":"FETCH retrieves data using a previously-created cursor.A cursor has an associated position, which is used by FETCH. The cursor position can be before the first row of the", + "product_code":"dws", + "title":"FETCH", + "uri":"dws_06_0216.html", + "doc_type":"devg", + "p_code":"804", + "code":"884" + }, + { + "desc":"MOVE repositions a cursor without retrieving any data. MOVE works exactly like the FETCH command, except it only repositions the cursor and does not return rows.NoneThe d", + "product_code":"dws", + "title":"MOVE", + "uri":"dws_06_0217.html", + "doc_type":"devg", + "p_code":"804", + "code":"885" + }, + { + "desc":"REINDEX rebuilds an index using the data stored in the index's table, replacing the old copy of the index.There are several scenarios in which REINDEX can be used:An inde", + "product_code":"dws", + "title":"REINDEX", + "uri":"dws_06_0218.html", + "doc_type":"devg", + "p_code":"804", + "code":"886" + }, + { + "desc":"RESET restores run-time parameters to their default values. 
The default values are parameter default values complied in the postgresql.conf configuration file.RESET is an", + "product_code":"dws", + "title":"RESET", + "uri":"dws_06_0219.html", + "doc_type":"devg", + "p_code":"804", + "code":"887" + }, + { + "desc":"SET modifies a run-time parameter.Most run-time parameters can be modified by executing SET. Some parameters cannot be modified after a server or session starts.Set the s", + "product_code":"dws", + "title":"SET", + "uri":"dws_06_0220.html", + "doc_type":"devg", + "p_code":"804", + "code":"888" + }, + { + "desc":"SET CONSTRAINTS sets the behavior of constraint checking within the current transaction.IMMEDIATE constraints are checked at the end of each statement. DEFERRED constrain", + "product_code":"dws", + "title":"SET CONSTRAINTS", + "uri":"dws_06_0221.html", + "doc_type":"devg", + "p_code":"804", + "code":"889" + }, + { + "desc":"SET ROLE sets the current user identifier of the current session.Users of the current session must be members of specified rolename, but the system administrator can choo", + "product_code":"dws", + "title":"SET ROLE", + "uri":"dws_06_0222.html", + "doc_type":"devg", + "p_code":"804", + "code":"890" + }, + { + "desc":"SET SESSION AUTHORIZATION sets the session user identifier and the current user identifier of the current SQL session to a specified user.The session identifier can be ch", + "product_code":"dws", + "title":"SET SESSION AUTHORIZATION", + "uri":"dws_06_0223.html", + "doc_type":"devg", + "p_code":"804", + "code":"891" + }, + { + "desc":"SHOW shows the current value of a run-time parameter. You can use the SET statement to set these parameters.Some parameters that can be viewed by SHOW are read-only. You ", + "product_code":"dws", + "title":"SHOW", + "uri":"dws_06_0224.html", + "doc_type":"devg", + "p_code":"804", + "code":"892" + }, + { + "desc":"TRUNCATE quickly removes all rows from a database table.It has the same effect as an unqualified DELETE on each table, but it is faster since it does not actually scan th", + "product_code":"dws", + "title":"TRUNCATE", + "uri":"dws_06_0225.html", + "doc_type":"devg", + "p_code":"804", + "code":"893" + }, + { + "desc":"VACUUM reclaims storage space occupied by tables or B-tree indexes. In normal database operation, rows that have been deleted or obsoleted by an update are not physically", + "product_code":"dws", + "title":"VACUUM", + "uri":"dws_06_0226.html", + "doc_type":"devg", + "p_code":"804", + "code":"894" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"DML Syntax", + "uri":"dws_06_0227.html", + "doc_type":"devg", + "p_code":"686", + "code":"895" + }, + { + "desc":"Data Manipulation Language (DML) is used to perform operations on data in database tables, such as inserting, updating, querying, or deleting data.Inserting data refers t", + "product_code":"dws", + "title":"DML Syntax Overview", + "uri":"dws_06_0228.html", + "doc_type":"devg", + "p_code":"895", + "code":"896" + }, + { + "desc":"CALL calls defined functions or stored procedures.NoneschemaSpecifies the name of the schema where a function or stored procedure is located.Specifies the name of the sch", + "product_code":"dws", + "title":"CALL", + "uri":"dws_06_0229.html", + "doc_type":"devg", + "p_code":"895", + "code":"897" + }, + { + "desc":"COPY copies data between tables and files.COPY FROM copies data from a file to a table. COPY TO copies data from a table to a file.If CNs and DNs are enabled in security ", + "product_code":"dws", + "title":"COPY", + "uri":"dws_06_0230.html", + "doc_type":"devg", + "p_code":"895", + "code":"898" + }, + { + "desc":"DELETE deletes rows that satisfy the WHERE clause from the specified table. If the WHERE clause does not exist, all rows in the table will be deleted. The result is a val", + "product_code":"dws", + "title":"DELETE", + "uri":"dws_06_0231.html", + "doc_type":"devg", + "p_code":"895", + "code":"899" + }, + { + "desc":"EXPLAIN shows the execution plan of an SQL statement.The execution plan shows how the tables referenced by the SQL statement will be scanned, for example, by plain sequen", + "product_code":"dws", + "title":"EXPLAIN", + "uri":"dws_06_0232.html", + "doc_type":"devg", + "p_code":"895", + "code":"900" + }, + { + "desc":"You can run the EXPLAIN PLAN statement to save the information about an execution plan to the PLAN_TABLE table. Different from the EXPLAIN statement, EXPLAIN PLAN only st", + "product_code":"dws", + "title":"EXPLAIN PLAN", + "uri":"dws_06_0233.html", + "doc_type":"devg", + "p_code":"895", + "code":"901" + }, + { + "desc":"LOCK TABLE obtains a table-level lock.GaussDB(DWS) always tries to select the lock mode with minimum constraints when automatically requesting a lock for a command refere", + "product_code":"dws", + "title":"LOCK", + "uri":"dws_06_0234.html", + "doc_type":"devg", + "p_code":"895", + "code":"902" + }, + { + "desc":"The MERGE INTO statement is used to conditionally match data in a target table with that in a source table. If data matches, UPDATE is executed on the target table; if da", + "product_code":"dws", + "title":"MERGE INTO", + "uri":"dws_06_0235.html", + "doc_type":"devg", + "p_code":"895", + "code":"903" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"INSERT and UPSERT", + "uri":"dws_06_0275.html", + "doc_type":"devg", + "p_code":"895", + "code":"904" + }, + { + "desc":"INSERT inserts new rows into a table.You must have the INSERT permission on a table in order to insert into it.Use of the RETURNING clause requires the SELECT permission ", + "product_code":"dws", + "title":"INSERT", + "uri":"dws_06_0236.html", + "doc_type":"devg", + "p_code":"904", + "code":"905" + }, + { + "desc":"UPSERT inserts rows into a table. When a row duplicates an existing primary key or unique key value, the row will be ignored or updated.The UPSERT syntax is supported onl", + "product_code":"dws", + "title":"UPSERT", + "uri":"dws_06_0237.html", + "doc_type":"devg", + "p_code":"904", + "code":"906" + }, + { + "desc":"UPDATE updates data in a table. UPDATE changes the values of the specified columns in all rows that satisfy the condition. The WHERE clause clarifies conditions. The colu", + "product_code":"dws", + "title":"UPDATE", + "uri":"dws_06_0240.html", + "doc_type":"devg", + "p_code":"895", + "code":"907" + }, + { + "desc":"VALUES computes a row or a set of rows based on given values. It is most commonly used to generate a constant table within a large command.VALUES lists with large numbers", + "product_code":"dws", + "title":"VALUES", + "uri":"dws_06_0241.html", + "doc_type":"devg", + "p_code":"895", + "code":"908" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"DCL Syntax", + "uri":"dws_06_0242.html", + "doc_type":"devg", + "p_code":"686", + "code":"909" + }, + { + "desc":"Data control language (DCL) is used to set or modify database users or role rights.GaussDB(DWS) provides a statement for granting rights to data objects and roles. For de", + "product_code":"dws", + "title":"DCL Syntax Overview", + "uri":"dws_06_0243.html", + "doc_type":"devg", + "p_code":"909", + "code":"910" + }, + { + "desc":"ALTER DEFAULT PRIVILEGES allows you to set the permissions that will be used for objects to be created. It does not affect permissions assigned to existing objects.To iso", + "product_code":"dws", + "title":"ALTER DEFAULT PRIVILEGES", + "uri":"dws_06_0244.html", + "doc_type":"devg", + "p_code":"909", + "code":"911" + }, + { + "desc":"ANALYZE collects statistics about ordinary tables in a database, and stores the results in the PG_STATISTIC system catalog. The execution plan generator uses these statis", + "product_code":"dws", + "title":"ANALYZE | ANALYSE", + "uri":"dws_06_0245.html", + "doc_type":"devg", + "p_code":"909", + "code":"912" + }, + { + "desc":"DEALLOCATE deallocates a previously prepared statement. If you do not explicitly deallocate a prepared statement, it is deallocated when the session ends.The PREPARE key ", + "product_code":"dws", + "title":"DEALLOCATE", + "uri":"dws_06_0246.html", + "doc_type":"devg", + "p_code":"909", + "code":"913" + }, + { + "desc":"DO executes an anonymous code block.A code block is a function body without parameters that returns void. 
It is analyzed and executed at the same time.Before using a prog", + "product_code":"dws", + "title":"DO", + "uri":"dws_06_0247.html", + "doc_type":"devg", + "p_code":"909", + "code":"914" + }, + { + "desc":"EXECUTE executes a prepared statement. A prepared statement only exists in the lifecycle of a session. Therefore, only prepared statements created using PREPARE earlier i", + "product_code":"dws", + "title":"EXECUTE", + "uri":"dws_06_0248.html", + "doc_type":"devg", + "p_code":"909", + "code":"915" + }, + { + "desc":"EXECUTE DIRECT executes an SQL statement on a specified node. Generally, the cluster automatically allocates an SQL statement to proper nodes. EXECUTE DIRECT is mainly us", + "product_code":"dws", + "title":"EXECUTE DIRECT", + "uri":"dws_06_0249.html", + "doc_type":"devg", + "p_code":"909", + "code":"916" + }, + { + "desc":"GRANT grants permissions to roles and users.GRANT is used in the following scenarios:Granting system permissions to roles or usersSystem permissions are also called user ", + "product_code":"dws", + "title":"GRANT", + "uri":"dws_06_0250.html", + "doc_type":"devg", + "p_code":"909", + "code":"917" + }, + { + "desc":"PREPARE creates a prepared statement.A prepared statement is a performance optimizing object on the server. When the PREPARE statement is executed, the specified query is", + "product_code":"dws", + "title":"PREPARE", + "uri":"dws_06_0251.html", + "doc_type":"devg", + "p_code":"909", + "code":"918" + }, + { + "desc":"REASSIGN OWNED changes the owner of a database.REASSIGN OWNED requires that the system change owners of all the database objects owned by old_roles to new_role.REASSIGN O", + "product_code":"dws", + "title":"REASSIGN OWNED", + "uri":"dws_06_0252.html", + "doc_type":"devg", + "p_code":"909", + "code":"919" + }, + { + "desc":"REVOKE revokes rights from one or more roles.If a non-owner user of an object attempts to REVOKE rights on the object, the command is executed based on the following rule", + "product_code":"dws", + "title":"REVOKE", + "uri":"dws_06_0253.html", + "doc_type":"devg", + "p_code":"909", + "code":"920" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"DQL Syntax", + "uri":"dws_06_0276.html", + "doc_type":"devg", + "p_code":"686", + "code":"921" + }, + { + "desc":"Data Query Language (DQL) can obtain data from tables or views.GaussDB(DWS) provides statements for obtaining data from tables or views. 
For details, see SELECT.GaussDB(D", + "product_code":"dws", + "title":"DQL Syntax Overview", + "uri":"dws_06_0277.html", + "doc_type":"devg", + "p_code":"921", + "code":"922" + }, + { + "desc":"SELECT retrieves data from a table or view.Serving as an overlaid filter for a database table, SELECT using SQL keywords retrieves required data from data tables.Using SE", + "product_code":"dws", + "title":"SELECT", + "uri":"dws_06_0238.html", + "doc_type":"devg", + "p_code":"921", + "code":"923" + }, + { + "desc":"SELECT INTO defines a new table based on a query result and insert data obtained by query to the new table.Different from SELECT, data found by SELECT INTO is not returne", + "product_code":"dws", + "title":"SELECT INTO", + "uri":"dws_06_0239.html", + "doc_type":"devg", + "p_code":"921", + "code":"924" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"TCL Syntax", + "uri":"dws_06_0254.html", + "doc_type":"devg", + "p_code":"686", + "code":"925" + }, + { + "desc":"Transaction Control Language (TCL) controls the time and effect of database transactions and monitors the database.GaussDB(DWS) uses the COMMIT or END statement to commit", + "product_code":"dws", + "title":"TCL Syntax Overview", + "uri":"dws_06_0255.html", + "doc_type":"devg", + "p_code":"925", + "code":"926" + }, + { + "desc":"ABORT rolls back the current transaction and cancels the changes in the transaction.This command is equivalent to ROLLBACK, and is present only for historical reasons. No", + "product_code":"dws", + "title":"ABORT", + "uri":"dws_06_0256.html", + "doc_type":"devg", + "p_code":"925", + "code":"927" + }, + { + "desc":"BEGIN may be used to initiate an anonymous block or a single transaction. This section describes the syntax of BEGIN used to initiate an anonymous block. For details abou", + "product_code":"dws", + "title":"BEGIN", + "uri":"dws_06_0257.html", + "doc_type":"devg", + "p_code":"925", + "code":"928" + }, + { + "desc":"A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect the information in the log. All data files will be flushed to", + "product_code":"dws", + "title":"CHECKPOINT", + "uri":"dws_06_0258.html", + "doc_type":"devg", + "p_code":"925", + "code":"929" + }, + { + "desc":"COMMIT or END commits all operations of a transaction.Only the transaction creators or system administrators can run the COMMIT command. The creation and commit operation", + "product_code":"dws", + "title":"COMMIT | END", + "uri":"dws_06_0259.html", + "doc_type":"devg", + "p_code":"925", + "code":"930" + }, + { + "desc":"COMMIT PREPARED commits a prepared two-phase transaction.The function is only available in maintenance mode (when GUC parameter xc_maintenance_mode is on). 
Exercise cauti", + "product_code":"dws", + "title":"COMMIT PREPARED", + "uri":"dws_06_0260.html", + "doc_type":"devg", + "p_code":"925", + "code":"931" + }, + { + "desc":"PREPARE TRANSACTION prepares the current transaction for two-phase commit.After this command, the transaction is no longer associated with the current session; instead, i", + "product_code":"dws", + "title":"PREPARE TRANSACTION", + "uri":"dws_06_0262.html", + "doc_type":"devg", + "p_code":"925", + "code":"932" + }, + { + "desc":"SAVEPOINT establishes a new savepoint within the current transaction.A savepoint is a special mark inside a transaction that rolls back all commands that are executed aft", + "product_code":"dws", + "title":"SAVEPOINT", + "uri":"dws_06_0263.html", + "doc_type":"devg", + "p_code":"925", + "code":"933" + }, + { + "desc":"SET TRANSACTION sets the characteristics of the current transaction. It has no effect on any subsequent transactions. Available transaction characteristics include the tr", + "product_code":"dws", + "title":"SET TRANSACTION", + "uri":"dws_06_0264.html", + "doc_type":"devg", + "p_code":"925", + "code":"934" + }, + { + "desc":"START TRANSACTION starts a transaction. If the isolation level, read/write mode, or deferrable mode is specified, a new transaction will have those characteristics. You c", + "product_code":"dws", + "title":"START TRANSACTION", + "uri":"dws_06_0265.html", + "doc_type":"devg", + "p_code":"925", + "code":"935" + }, + { + "desc":"Rolls back the current transaction and backs out all updates in the transaction.ROLLBACK backs out of all changes that a transaction makes to a database if the transactio", + "product_code":"dws", + "title":"ROLLBACK", + "uri":"dws_06_0266.html", + "doc_type":"devg", + "p_code":"925", + "code":"936" + }, + { + "desc":"RELEASE SAVEPOINT destroys a savepoint previously defined in the current transaction.Destroying a savepoint makes it unavailable as a rollback point, but it has no other ", + "product_code":"dws", + "title":"RELEASE SAVEPOINT", + "uri":"dws_06_0267.html", + "doc_type":"devg", + "p_code":"925", + "code":"937" + }, + { + "desc":"ROLLBACK PREPARED cancels a transaction ready for two-phase committing.The function is only available in maintenance mode (when GUC parameter xc_maintenance_mode is on). ", + "product_code":"dws", + "title":"ROLLBACK PREPARED", + "uri":"dws_06_0268.html", + "doc_type":"devg", + "p_code":"925", + "code":"938" + }, + { + "desc":"ROLLBACK TO SAVEPOINT rolls back to a savepoint. It implicitly destroys all savepoints that were established after the named savepoint.Rolls back all commands that were e", + "product_code":"dws", + "title":"ROLLBACK TO SAVEPOINT", + "uri":"dws_06_0269.html", + "doc_type":"devg", + "p_code":"925", + "code":"939" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"GIN Indexes", + "uri":"dws_06_0270.html", + "doc_type":"devg", + "p_code":"686", + "code":"940" + }, + { + "desc":"Generalized Inverted Index (GIN) is designed for handling cases where the items to be indexed are composite values, and the queries to be handled by the index need to sea", + "product_code":"dws", + "title":"Introduction", + "uri":"dws_06_0271.html", + "doc_type":"devg", + "p_code":"940", + "code":"941" + }, + { + "desc":"The GIN interface has a high level of abstraction, requiring the access method implementer only to implement the semantics of the data type being accessed. The GIN layer ", + "product_code":"dws", + "title":"Scalability", + "uri":"dws_06_0272.html", + "doc_type":"devg", + "p_code":"940", + "code":"942" + }, + { + "desc":"Internally, a GIN index contains a B-tree index constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and w", + "product_code":"dws", + "title":"Implementation", + "uri":"dws_06_0273.html", + "doc_type":"devg", + "p_code":"940", + "code":"943" + }, + { + "desc":"Create vs. InsertInsertion into a GIN index can be slow due to the likelihood of many keys being inserted for each item. So, for bulk insertions into a table, it is advis", + "product_code":"dws", + "title":"GIN Tips and Tricks", + "uri":"dws_06_0274.html", + "doc_type":"devg", + "p_code":"940", + "code":"944" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Change History", + "uri":"dws_04_3333.html", + "doc_type":"devg", + "p_code":"", + "code":"945" + } +] \ No newline at end of file diff --git a/docs/dws/dev/PARAMETERS.txt b/docs/dws/dev/PARAMETERS.txt new file mode 100644 index 00000000..6da8d5f0 --- /dev/null +++ b/docs/dws/dev/PARAMETERS.txt @@ -0,0 +1,3 @@ +version="" +language="en-us" +type="" \ No newline at end of file diff --git a/docs/dws/dev/dws_01_0127.html b/docs/dws/dev/dws_01_0127.html new file mode 100644 index 00000000..9681f16c --- /dev/null +++ b/docs/dws/dev/dws_01_0127.html @@ -0,0 +1,20 @@ + + +
The DSC is a CLI tool that runs on Linux or Windows and provides simple, fast, and reliable migration of application SQL scripts. It parses the SQL scripts of source database applications using built-in syntax migration logic and converts them into SQL scripts that run on GaussDB(DWS) databases. The DSC does not need to connect to a database: it migrates scripts offline, without service interruption. In GaussDB(DWS), you can run the migrated SQL scripts to rebuild the database, easily migrating offline databases to the cloud.
+The DSC can migrate SQL scripts of Teradata, Oracle, Netezza, MySQL, and DB2 databases.
+If you have clusters of different versions, the system displays a dialog box prompting you to select a cluster version and download the matching client. To view a cluster's version, click the cluster name in the cluster list on the Cluster Management page and then click the Basic Information tab.
+The user who uploads the tool must have the full control permission on the target directory of the Linux host.
+For details, see "DSC - SQL Syntax Migration Tool" in the Data Warehouse Service Tool Guide.
+This document is intended for database designers, application developers, and database administrators, and provides information required for designing, building, querying and maintaining data warehouses.
+As a database administrator or application developer, you need to be familiar with:
+The GaussDB(DWS) documentation strives to provide guidance from the perspective of commercial use, application scenarios, and task completion. Even so, the documentation may still reference PostgreSQL content, and the following PostgreSQL copyright applies to such content:
+Postgres-XC is Copyright © 1996-2013 by the PostgreSQL Global Development Group.
+PostgreSQL is Copyright © 1996-2013 by the PostgreSQL Global Development Group.
+Postgres95 is Copyright © 1994-5 by the Regents of the University of California.
+IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS-IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+If you are a new GaussDB(DWS) user, you are advised to read the following contents first:
+If you are migrating or planning to migrate applications from other data warehouses to GaussDB(DWS), you might want to know how GaussDB(DWS) differs from them.
+The following table points you to useful information for GaussDB(DWS) database application development.
+ +If you want to... + |
+Query Suggestions + |
+
---|---|
Quickly get started with GaussDB(DWS). + |
+Deploy a cluster, connect to the database, and perform some queries by following the instructions provided in "Getting Started" in the Data Warehouse Service (DWS) User Guide. +When you are ready to construct a database, load data into tables and write queries to operate on the data in the data warehouse. Then, you can return to the Data Warehouse Service Database Developer Guide. + |
+
Understand the internal architecture of a GaussDB(DWS) data warehouse. + |
+To know more about GaussDB(DWS), go to the GaussDB(DWS) home page. + |
+
Learn how to design tables for excellent performance. + |
+Development and Design Proposal introduces the design specifications to follow when developing database applications. Models that comply with these specifications fit the distributed processing architecture of GaussDB(DWS) and help produce efficient SQL code. +To optimize how your services run, refer to Query Performance Optimization. Successful performance optimization depends more on database administrators' experience and judgment than on instructions and explanations, but Query Performance Optimization still systematically illustrates performance optimization methods for application developers and new GaussDB(DWS) database administrators. + |
+
Load data. + |
+Data Import describes how to import data to GaussDB(DWS). + |
+
Manage users, groups, and database security. + |
+Database Security Management covers database security topics. + |
+
Monitor and optimize system performance. + |
+System Catalogs and System Views describes the system catalogs and views you can query to check the database status and monitor query content and progress. +You can learn how to check the system running status and monitoring metrics on the GaussDB(DWS) console by referring to "Monitoring Clusters" in the Data Warehouse Service (DWS) User Guide. + |
+
Example + |
+Description + |
+
---|---|
dbadmin + |
+Indicates the database administrator appointed when the cluster is created, who operates and maintains GaussDB(DWS). + |
+
8000 + |
+Indicates the port number on which GaussDB(DWS) listens for connection requests from clients. + |
+
SQL examples in this manual are developed based on the TPC-DS model. Before you execute the examples, install the TPC-DS benchmark by following the instructions on the official website https://www.tpc.org/tpcds/.
+To better understand the syntax usage, you can refer to the SQL syntax text conventions described as follows:
+ +Format + |
+Description + |
+
---|---|
Uppercase characters + |
+Indicates that keywords must be in uppercase. + |
+
Lowercase characters + |
+Indicates that parameters must be in lowercase. + |
+
[ ] + |
+Indicates that the items in brackets [] are optional. + |
+
... + |
+Indicates that preceding elements can appear repeatedly. + |
+
[ x | y | ... ] + |
+Indicates that one item is selected from two or more options or no item is selected. + |
+
{ x | y | ... } + |
+Indicates that one item is selected from two or more options. + |
+
[ x | y | ... ] [ ... ] + |
+Indicates that multiple parameters or no parameter can be selected. If multiple parameters are selected, separate them with spaces. + |
+
[ x | y | ... ] [ ,... ] + |
+Indicates that multiple parameters or no parameter can be selected. If multiple parameters are selected, separate them with commas (,). + |
+
{ x | y | ... } [ ... ] + |
+Indicates that at least one parameter must be selected. If multiple parameters are selected, separate them with spaces. + |
+
{ x | y | ... } [ ,... ] + |
+Indicates that at least one parameter must be selected. If multiple parameters are selected, separate them with commas (,). + |
+
Complete the following tasks before you perform operations described in this document:
+For details about the preceding tasks, see "Getting Started" in the Data Warehouse Service (DWS) User Guide.
+GaussDB(DWS) manages cluster transactions, the basis of HA and failovers. This ensures speedy fault recovery, guarantees the atomicity, consistency, isolation, and durability (ACID) properties of transactions before and after recovery, and enables concurrency control.
+Fault Rectification
+GaussDB(DWS) provides an HA mechanism to reduce the service interruption time when a cluster is faulty. It keeps key user applications running and continuously providing external services, minimizing the impact of hardware, software, and human faults and ensuring service continuity.
+Transaction Management
+The following GaussDB(DWS) features help achieve high query performance.
+GaussDB(DWS) is an MPP system with the shared-nothing architecture. It consists of multiple independent logical nodes that do not share system resources, such as the CPU, memory, and storage units. In such a system architecture, service data is separately stored on numerous nodes. Data analysis tasks are executed in parallel on the nodes where data is stored. The massively parallel data processing significantly improves response speed.
+In addition, GaussDB(DWS) improves data query performance by executing operators in parallel, executing commands in registers in parallel, and using LLVM to dynamically compile logical conditions and prune redundant ones.
+GaussDB(DWS) supports both the row and column storage models. You can choose a row- or column-store table as needed.
+The hybrid row-column storage engine delivers a higher data compression ratio (column storage), better index performance (column storage), and better point update and point query performance (row storage).
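+As a minimal sketch (the table names here are hypothetical), the storage model is chosen when a table is created: row storage is the default, and the ORIENTATION option selects column storage.
1 +2 +3 +4 | -- Row-store table (default storage model). +CREATE TABLE sales_info_row(sale_id int, amount numeric(10,2)); +-- Column-store table, selected with the ORIENTATION option. +CREATE TABLE sales_info_col(sale_id int, amount numeric(10,2)) WITH (ORIENTATION = COLUMN); + |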
+You can compress old, inactive data to free up space, reducing procurement and O&M costs.
+In GaussDB(DWS), data can be compressed using the Delta Value Encoding, Dictionary, RLE, LZ4, and ZLIB algorithms. The system automatically selects a compression algorithm based on data characteristics. The average compression ratio is 7:1. Compressed data can be directly accessed and is transparent to services, greatly reducing the preparation time before accessing historical data.
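+For example, a column-store table can be created with a specific compression level (the table name below is hypothetical; for column-store tables, COMPRESSION accepts values such as LOW, MIDDLE, and HIGH):
1 +2 | -- Column-store table with a high compression level. +CREATE TABLE sales_history(sale_id int, sale_date date, amount numeric(10,2)) WITH (ORIENTATION = COLUMN, COMPRESSION = HIGH); + |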
+A database manages data objects and is isolated from other databases. While creating a database, you can specify a tablespace. If you do not specify one, database objects are saved to the PG_DEFAULT tablespace by default. Objects managed by a database can be distributed across multiple tablespaces.
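+For example, the following statement (with a hypothetical database name) creates a database whose objects are stored in the default tablespace; a TABLESPACE clause could specify a different one:
1 +2 | -- Create a database in the default tablespace (PG_DEFAULT). +CREATE DATABASE sales_db; + |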
+In GaussDB(DWS), an instance is a group of database processes running in memory. An instance can manage one or more databases, which form a cluster. A cluster is a storage area on disk, initialized during installation and consisting of a single directory, called the data directory, which stores all data and is created by initdb. Theoretically, one server can start multiple instances on different ports, but GaussDB(DWS) manages only one instance at a time. Starting and stopping an instance depend on its data directory. For compatibility purposes, the concept of an instance name may be introduced.
+In GaussDB(DWS), a tablespace is a directory storing physical files of the databases the tablespace contains. Multiple tablespaces can coexist. Files are physically isolated using tablespaces and managed by a file system.
+GaussDB(DWS) schemas logically divide a database. All database objects are created under specific schemas. In GaussDB(DWS), schemas and users are loosely bound: when you create a user, a schema with the same name is created automatically. You can also create a separate schema or specify another schema.
+GaussDB(DWS) uses users and roles to control the access to databases. A role can be a database user or a group of database users, depending on role settings. In GaussDB(DWS), the difference between roles and users is that a role does not have the LOGIN permission by default. In GaussDB(DWS), one user can have only one role, but you can put a user's role under a parent role to grant multiple permissions to the user.
+In GaussDB(DWS), transactions are managed by multi-version concurrency control (MVCC) and two-phase locking (2PL), which enables smooth concurrent reads and writes. In GaussDB(DWS), MVCC saves historical version data together with the current tuple version, and the VACUUM process, instead of rollback segments, routinely deletes historical version data. You do not need to pay attention to the VACUUM process except when tuning performance. Transactions are automatically committed in GaussDB(DWS).
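+Because statements are committed automatically, an explicit transaction block is needed when several statements must commit or roll back as a unit. A minimal sketch, assuming a table t1 with one integer column already exists:
1 +2 +3 +4 | -- Group statements that must succeed or fail together. +START TRANSACTION; +INSERT INTO t1 VALUES (1); +COMMIT; + |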
+GaussDB(DWS) is compatible with Oracle, Teradata, and MySQL syntax, but the syntax behavior differs among the three compatibility modes.
+ +Compatibility Item + |
+Oracle + |
+Teradata + |
+MySQL + |
+
---|---|---|---|
Empty string + |
+An empty string is treated as NULL. + |
+An empty string is distinguished from NULL. + |
+An empty string is distinguished from NULL. + |
+
Conversion of an empty string to a number + |
+NULL + |
+0 + |
+0 + |
+
Automatic truncation of overlong characters + |
+Not supported + |
+Supported (set GUC parameter td_compatible_truncation to ON) + |
+Not supported + |
+
NULL concatenation + |
+Returns a non-NULL object after combining a non-NULL object with NULL. +For example, 'abc'||NULL returns 'abc'. + |
+The strict_text_concat_td option of the GUC parameter behavior_compat_options provides compatibility with the Teradata behavior: concatenating a string with NULL returns NULL. +For example, 'abc'||NULL returns NULL. + |
+Is compatible with the MySQL behavior: concatenating a string with NULL returns NULL. +For example, 'abc'||NULL returns NULL. + |
+
Concatenation of the char(n) type + |
+Removes spaces and placeholders on the right when the char(n) type is concatenated. +For example, cast('a' as char(3))||'b' returns 'ab'. + |
+With the bpchar_text_without_rtrim option of the GUC parameter behavior_compat_options, concatenating the char(n) type retains trailing spaces and pads the value to the specified length n. +Currently, ignoring trailing spaces during string comparison is not supported; if the concatenated string has trailing spaces, the comparison is space-sensitive. +For example, cast('a' as char(3))||'b' returns 'a b'. + |
+Removes spaces and placeholders on the right. + |
+
concat(str1,str2) + |
+Returns the concatenation of all non-NULL strings. + |
+Returns the concatenation of all non-NULL strings. + |
+If an input parameter is NULL, NULL is returned. + |
+
left and right processing of negative values + |
+Returns all characters except the first and last |n| characters. + |
+Returns all characters except the first and last |n| characters. + |
+Returns an empty string. + |
+
lpad(string text, length int [, fill text]) +rpad(string text, length int [, fill text]) + |
+Fills up the string to the specified length by appending the fill characters (a space by default). If the string is already longer than length then it is truncated (on the right). If fill is an empty string or length is a negative number, null is returned. + |
+If fill is an empty string and the string length is less than the specified length, the original string is returned. If length is a negative number, an empty string is returned. + |
+If fill is an empty string and the string length is less than the specified length, an empty string is returned. If length is a negative number, null is returned. + |
+
log(x) + |
+Returns the logarithm with 10 as the base. + |
+Returns the logarithm with 10 as the base. + |
+Returns the natural logarithm. + |
+
mod(x, 0) + |
+Returns x if the divisor is 0. + |
+Returns x if the divisor is 0. + |
+Reports an error if the divisor is 0. + |
+
Data type DATE + |
+Converts the DATE data type to the TIMESTAMP data type which stores year, month, day, hour, minute, and second values. + |
+Stores year and month values. + |
+Stores year and month values. + |
+
to_char(date) + |
+The maximum value of the input parameter can only be the maximum value of the timestamp type. The maximum value of the date type is not supported. The return value is of the timestamp type. + |
+The maximum value of the input parameter can only be the maximum value of the timestamp type. The maximum value of the date type is not supported. The return value is of the date type in YYYY/MM/DD format. (The GUC parameter convert_empty_str_to_null_td is enabled.) + |
+The maximum value of the input parameter can only be the maximum value of the timestamp type. The maximum value of the date type is not supported. The return value is of the date type. + |
+
to_date, to_timestamp, and to_number processing of empty strings + |
+Returns NULL. + |
+Returns NULL. (The convert_empty_str_to_null_td parameter is enabled.) + |
+to_date and to_timestamp return NULL. If the parameter passed to to_number is an empty string, 0 is returned. + |
+
Return value types of last_day and next_day + |
+Returns values of the timestamp type. + |
+Returns values of the timestamp type. + |
+Returns values of the date type. + |
+
Return value type of add_months + |
+Returns values of the timestamp type. + |
+Returns values of the timestamp type. + |
+If the input parameter is of the date type, the return value is of the date type. +If the input parameter is of the timestamp type, the return value is of the timestamp type. +If the input parameter is of the timestamptz type, the return value is of the timestamptz type. + |
+
CURRENT_TIME +CURRENT_TIME(p) + |
+Obtains the time of the current transaction. The return value type is timetz. + |
+Obtains the time of the current transaction. The return value type is timetz. + |
+Obtains the execution time of the current statement. The return value type is time. + |
+
CURRENT_TIMESTAMP +CURRENT_TIMESTAMP(p) + |
+Obtains the execution time of the current statement. The return value type is timestamptz. + |
+Obtains the execution time of the current statement. The return value type is timestamptz. + |
+Obtains the execution time of the current statement. The return value type is timestamp. + |
+
LOCALTIME +LOCALTIME(p) + |
+Obtains the time of the current transaction. The return value type is time. + |
+Obtains the time of the current transaction. The return value type is time. + |
+Obtains the execution time of the current statement. The return value type is time. + |
+
LOCALTIMESTAMP +LOCALTIMESTAMP(p) + |
+Obtains the time of the current transaction. The return value type is timestamp. + |
+Obtains the time of the current transaction. The return value type is timestamp. + |
+Obtains the execution time of the current statement. The return value type is timestamp. + |
+
SYSDATE +SYSDATE(p) + |
+Obtains the execution time of the current statement. The return value type is timestamp(0). + |
+Obtains the execution time of the current statement. The return value type is timestamp(0). + |
+Obtains the current system time. The return value type is timestamp(0). + |
+
NOW() + |
+Obtains the time of the current transaction. The return value type is timestamptz. + |
+Obtains the time of the current transaction. The return value type is timestamptz. + |
+Obtains the statement execution time. The return value type is timestamptz. + |
+
Operator ^ + |
+Performs exponentiation. + |
+Performs exponentiation. + |
+Performs the exclusive OR operation. + |
+
Different input parameter types of CASE, COALESCE, IF, and IFNULL expressions + |
+Reports an error. + |
+Is compatible with the Teradata behavior and supports type conversion between numbers and strings. For example, if input parameters for COALESCE are of the INT and VARCHAR types, the parameters are resolved as the VARCHAR type. + |
+Is compatible with the MySQL behavior and supports type conversion between strings and other types. For example, if input parameters for COALESCE are of the DATE, INT, and VARCHAR types, the parameters are resolved as the VARCHAR type. + |
+
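+Which column of the table above applies is determined by the database compatibility mode, selected with the DBCOMPATIBILITY clause when the database is created (some behaviors additionally require the GUC options noted above). A minimal sketch with a hypothetical database name:
1 +2 | -- Create a database in Teradata compatibility mode ('TD'). +CREATE DATABASE td_compatible_db DBCOMPATIBILITY 'TD'; + |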
A user who creates an object is the owner of this object. By default, Separation of Permissions is disabled after cluster installation. A database system administrator has the same permissions as object owners. After an object is created, only the object owner or system administrator can query, modify, and delete the object, and grant permissions for the object to other users through GRANT by default.
+To enable another user to use the object, grant required permissions to the user or the role that contains the user.
+GaussDB(DWS) supports the following permissions: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, CREATE, CONNECT, EXECUTE, USAGE, and ANALYZE|ANALYSE. Permission types are associated with object types. For permission details, see GRANT.
+To remove permissions, use REVOKE. Object owner permissions such as ALTER, DROP, GRANT, and REVOKE are implicit and cannot be granted or revoked. That is, you have the implicit permissions for an object if you are the owner of the object. Object owners can remove their own common permissions, for example, making tables read-only to themselves or others.
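+For example (with hypothetical table and user names), an owner can grant a permission and later revoke it:
1 +2 +3 | -- The owner grants, then revokes, query permission on a table. +GRANT SELECT ON TABLE sales_info TO joe; +REVOKE SELECT ON TABLE sales_info FROM joe; + |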
+Some system catalogs and views are visible only to system administrators, while others are visible to all users. System catalogs and views that require system administrator permissions can be queried only by system administrators. For details, see System Catalogs and System Views.
+The database provides the object isolation feature. If this feature is enabled, users can view only the objects (tables, views, columns, and functions) that they have the permission to access. System administrators are not affected by this feature. For details, see ALTER DATABASE.
+A system administrator is an account with the SYSADMIN permission. After a cluster is installed, a system administrator has the permissions of all object owners by default.
+The user dbadmin created upon GaussDB(DWS) startup is a system administrator.
+To create a database administrator, connect to the database as an administrator and run the CREATE USER or ALTER USER statement with SYSADMIN specified.
+1 | CREATE USER sysadmin WITH SYSADMIN password 'password'; + |
Alternatively, you can run the following statement:
+1 | ALTER USER joe SYSADMIN; + |
To run the ALTER USER statement, the user must exist.
+Descriptions in Default Permission Mechanism and System Administrator are about the initial situation after a cluster is created. By default, a system administrator with the SYSADMIN attribute has the highest-level permissions.
+To avoid risks caused by centralized permissions, you can enable the separation of permissions to delegate system administrator permissions to security administrators and audit administrators.
+After the separation of permissions is enabled, a system administrator does not have the CREATEROLE attribute (security administrator) or the AUDITADMIN attribute (audit administrator). That is, the system administrator can neither create roles and users nor view and maintain database audit logs. For details about the CREATEROLE and AUDITADMIN attributes, see CREATE ROLE.
+After the separation of permissions is enabled, system administrators have permissions only for the objects they own.
+For details, see Separating Rights of Roles.
+For details about permission changes before and after enabling the separation of permissions, see Table 1 and Table 2.
+ +Object + |
+System Administrator + |
+Security Administrator + |
+Audit Administrator + |
+Common User + |
+
---|---|---|---|---|
Tablespace + |
+Can create, modify, delete, access, and allocate tablespaces. + |
+Cannot create, modify, delete, or allocate tablespaces, with authorization required for accessing tablespaces. + |
+||
Table + |
+Has permissions for all tables. + |
+Has permissions for its own tables, but does not have permissions for other users' tables. + |
+||
Index + |
+Can create indexes on all tables. + |
+Can create indexes on their own tables. + |
+||
Schema + |
+Has permissions for all schemas. + |
+Has all permissions for its own schemas, but does not have permissions for other users' schemas. + |
+||
Function + |
+Has permissions for all functions. + |
+Has permissions for its own functions, has the call permission for other users' functions in the public schema, but does not have permissions for other users' functions in other schemas. + |
+||
Customized view + |
+Has permissions for all views. + |
+Has permissions for its own views, but does not have permissions for other users' views. + |
+||
System catalog and system view + |
+Has permissions for querying all system catalogs and views. + |
+Has permissions for querying only some system catalogs and views. For details, see System Catalogs and System Views. + |
+
Object + |
+System Administrator + |
+Security Administrator + |
+Audit Administrator + |
+Common User + |
+
---|---|---|---|---|
Tablespace + |
+No change + |
+No change + |
+||
Table + |
+Permissions reduced +Has all permissions for its own tables, but does not have permissions for other users' tables in their schemas. + |
+No change + |
+||
Index + |
+Permissions reduced +Can create indexes on its own tables. + |
+No change + |
+||
Schema + |
+Permissions reduced +Has all permissions for its own schemas, but does not have permissions for other users' schemas. + |
+No change + |
+||
Function + |
+Permissions reduced +Has all permissions for its own functions, but does not have permissions for other users' functions in their schemas. + |
+No change + |
+||
Customized view + |
+Permissions reduced +Has all permissions for its own views and other users' views in the public schema, but does not have permissions for other users' views in their schemas. + |
+No change + |
+||
System catalog and system view + |
+No change + |
+No change + |
+No change + |
+Has no permission for viewing any system catalogs or views. + |
+
You can use CREATE USER and ALTER USER to create and manage database users. The database cluster has one or more named databases. Users and roles are shared within the entire cluster, but their data is not shared. That is, a user can connect to any database, but after the connection is established, the user can access only the database declared in the connection request.
+In non-separation-of-duty scenarios, a GaussDB(DWS) user account can be created and deleted only by a system administrator or a security administrator with the CREATEROLE attribute. In separation-of-duty scenarios, a user account can be created only by a security administrator.
+When a user logs in, GaussDB(DWS) authenticates the user. A user can own databases and database objects (such as tables), and grant permissions of these objects to other users and roles. In addition to system administrators, users with the CREATEDB attribute can create databases and grant permissions to these databases.
+1 | SELECT * FROM pg_user; + |
1 | SELECT * FROM pg_authid; + |
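+For example, the following sketch (the user name is hypothetical) creates a user who can log in and create databases:
1 +2 | -- A user with LOGIN (the CREATE USER default) and CREATEDB. +CREATE USER data_owner WITH CREATEDB PASSWORD 'password'; + |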
Suppose multiple service departments use different database user accounts to perform service operations, while a database maintenance department at the same level uses database administrator accounts to perform maintenance operations. The service departments may require that, without specific authorization, database administrators be able to manage (DROP, ALTER, and TRUNCATE) their data but not access (INSERT, DELETE, UPDATE, SELECT, and COPY) it. That is, the management permissions that database administrators have on tables need to be isolated from their access permissions, to improve the data security of common users.
+In Separation of Permissions mode, a database administrator does not have permissions for the tables in schemas of other users. In this case, database administrators have neither management permissions nor access permissions, which does not meet the requirements of the service departments mentioned above. Therefore, GaussDB(DWS) provides private users to solve the problem. That is, create private users with the INDEPENDENT attribute in non-separation-of-duties mode.
+1 | CREATE USER user_independent WITH INDEPENDENT IDENTIFIED BY "password"; + |
Database administrators can manage (DROP, ALTER, and TRUNCATE) objects of private users but cannot access (INSERT, DELETE, SELECT, UPDATE, COPY, GRANT, REVOKE, and ALTER OWNER) the objects before being authorized.
+A role is a set of permissions. After a role is granted to a user through GRANT, the user has all the permissions of the role. Using roles to grant permissions is recommended for efficiency: for example, you can create roles for design, development, and maintenance personnel, grant these roles to users, and then grant the specific data permissions each user requires. When permissions are granted or revoked at the role level, the changes take effect on all members of the role.
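+A minimal sketch of this pattern (the role, table, and user names are hypothetical):
1 +2 +3 +4 | -- Bundle permissions in a role, then grant the role to a user. +CREATE ROLE developer PASSWORD 'password'; +GRANT SELECT, INSERT ON TABLE sales_info TO developer; +GRANT developer TO joe; + |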
+GaussDB(DWS) provides an implicitly defined group PUBLIC that contains all roles. By default, all new users and roles have the permissions of PUBLIC. For details about the default permissions of PUBLIC, see GRANT. To revoke permissions of PUBLIC from a user or role, or re-grant these permissions to them, add the PUBLIC keyword in the REVOKE or GRANT statement.
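For example, a minimal sketch of adjusting PUBLIC permissions (the schema is the default public schema; adapt to your own defaults):

-- Revoke the CREATE permission on the public schema from PUBLIC.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
-- Grant it back later if needed.
GRANT CREATE ON SCHEMA public TO PUBLIC;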
+To view all roles, query the system catalog PG_ROLES.
SELECT * FROM PG_ROLES;
In non-separation-of-duty scenarios, a role can be created, modified, and deleted only by a system administrator or a user with the CREATEROLE attribute. In separation-of-duty scenarios, a role can be created, modified, and deleted only by a user with the CREATEROLE attribute.
Schemas function as namespaces. Schema management allows multiple users to use the same database without mutual impact, organizes database objects into manageable logical groups, and lets third-party applications be added to the same schema without causing conflicts.
+Each database has one or more schemas. Each schema contains tables and other types of objects. When a database is created, a schema named public is created by default, and all users have permissions for this schema. You can group database objects by schema. A schema is similar to an OS directory but cannot be nested.
+The same database object name can be used in different schemas of the same database without causing conflicts. For example, both a_schema and b_schema can contain a table named mytable. Users with required permissions can access objects across multiple schemas of the same database.
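A minimal sketch of the idea (the schema and table names are illustrative):

-- The same table name can coexist in two schemas.
CREATE SCHEMA a_schema;
CREATE SCHEMA b_schema;
CREATE TABLE a_schema.mytable(id int);
CREATE TABLE b_schema.mytable(id int);
-- Qualify the name with the schema to access a specific table.
SELECT * FROM b_schema.mytable;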
+If a user is created, a schema named after the user will also be created in the current database.
+Database objects are generally created in the first schema in a database search path. For details about the first schema and how to change the schema order, see Search Path.
To view the owner of a schema, perform the following join query on the system catalogs pg_namespace and pg_user (replace schema_name with the name of the schema to query):

SELECT s.nspname, u.usename AS nspowner FROM pg_namespace s, pg_user u WHERE s.nspname = 'schema_name' AND s.nspowner = u.usesysid;

To view a list of all schemas, query the system catalog pg_namespace:

SELECT * FROM pg_namespace;

To view the tables contained in a schema, for example, the system catalog schema pg_catalog, query the system catalog pg_tables:

SELECT DISTINCT(tablename), schemaname FROM pg_tables WHERE schemaname = 'pg_catalog';
A search path is defined in the search_path parameter. The parameter value is a list of schema names separated by commas (,). If no target schema is specified during object creation, the object will be added to the first schema listed in the search path. If there are objects with the same name across different schemas and no schema is specified for an object query, the object will be returned from the first schema containing the object in the search path.
SHOW SEARCH_PATH;
 search_path
----------------
 "$user",public
(1 row)

The default value of search_path is "$user",public. $user indicates the schema whose name is the same as that of the current session user; if no such schema exists, $user is ignored. By default, after a user connects to a database, if a schema with the same name as the user exists, new objects are added to that schema; otherwise, new objects are added to the public schema.

SET SEARCH_PATH TO myschema, public;
SET
When permissions for a table or view in a schema are granted to a user or role, the USAGE permission of the schema must be granted together. Otherwise, the user or role can only see the names of the objects but cannot actually access them.
+In the following example, permissions for the schema tpcds are first granted to the user joe, and then the SELECT permission for the tpcds.web_returns table is also granted.
GRANT USAGE ON SCHEMA tpcds TO joe;
GRANT SELECT ON TABLE tpcds.web_returns TO joe;
Create a role lily and grant the system permission CREATEDB to the role.
CREATE ROLE lily WITH CREATEDB PASSWORD 'password';
For example, first grant permissions for the schema tpcds to the role lily, and then grant the SELECT permission of the tpcds.web_returns table to lily.
GRANT USAGE ON SCHEMA tpcds TO lily;
GRANT SELECT ON TABLE tpcds.web_returns TO lily;

To grant the permissions of the role lily to the user joe, run:

GRANT lily TO joe;
When the permissions of a role are granted to a user, the attributes of the role are not transferred together.
+The row-level access control feature enables database access control to be accurate to each row of data tables. In this way, the same SQL query may return different results for different users.
You can create a row-level access control policy for a data table. The policy defines an expression that takes effect only for specific database users and SQL operations. When a database user accesses the data table, if an SQL statement matches the specified row-level access control policies of the table, the expressions that meet the specified condition are combined using AND or OR based on the attribute type (PERMISSIVE | RESTRICTIVE) and applied to the execution plan in the query optimization phase.
+Row-level access control is used to control the visibility of row-level data in tables. By predefining filters for data tables, the expressions that meet the specified condition can be applied to execution plans in the query optimization phase, which will affect the final execution result. Currently, the SQL statements that can be affected include SELECT, UPDATE, and DELETE.
+Scenario 1: A table summarizes the data of different users. Users can view only their own data.
-- Create users alice, bob, and peter.
CREATE ROLE alice PASSWORD 'password';
CREATE ROLE bob PASSWORD 'password';
CREATE ROLE peter PASSWORD 'password';

-- Create the public.all_data table that contains user information.
CREATE TABLE public.all_data(id int, role varchar(100), data varchar(100));

-- Insert data into the data table.
INSERT INTO all_data VALUES(1, 'alice', 'alice data');
INSERT INTO all_data VALUES(2, 'bob', 'bob data');
INSERT INTO all_data VALUES(3, 'peter', 'peter data');

-- Grant the read permission for the all_data table to users alice, bob, and peter.
GRANT SELECT ON all_data TO alice, bob, peter;

-- Enable row-level access control.
ALTER TABLE all_data ENABLE ROW LEVEL SECURITY;

-- Create a row-level access control policy to specify that the current user can view only their own data.
CREATE ROW LEVEL SECURITY POLICY all_data_rls ON all_data USING(role = CURRENT_USER);

-- View table details.
\d+ all_data
                               Table "public.all_data"
 Column |          Type          | Modifiers | Storage  | Stats target | Description
--------+------------------------+-----------+----------+--------------+-------------
 id     | integer                |           | plain    |              |
 role   | character varying(100) |           | extended |              |
 data   | character varying(100) |           | extended |              |
Row Level Security Policies:
    POLICY "all_data_rls"
      USING (((role)::name = "current_user"()))
Has OIDs: no
Distribute By: HASH(id)
Location Nodes: ALL DATANODES
Options: orientation=row, compression=no, enable_rowsecurity=true

-- Switch to user alice and run SELECT * FROM all_data.
SET ROLE alice PASSWORD 'password';
SELECT * FROM all_data;
 id | role  |    data
----+-------+------------
  1 | alice | alice data
(1 row)

EXPLAIN(COSTS OFF) SELECT * FROM all_data;
                           QUERY PLAN
----------------------------------------------------------------
  id |          operation
 ----+------------------------------
   1 | ->  Streaming (type: GATHER)
   2 |    ->  Seq Scan on all_data

         Predicate Information (identified by plan id)
 --------------------------------------------------------------
   2 --Seq Scan on all_data
         Filter: ((role)::name = 'alice'::name)
 Notice: This query is influenced by row level security feature
(10 rows)

-- Switch to user peter and run SELECT * FROM all_data.
SET ROLE peter PASSWORD 'password';
SELECT * FROM all_data;
 id | role  |    data
----+-------+------------
  3 | peter | peter data
(1 row)

EXPLAIN(COSTS OFF) SELECT * FROM all_data;
                           QUERY PLAN
----------------------------------------------------------------
  id |          operation
 ----+------------------------------
   1 | ->  Streaming (type: GATHER)
   2 |    ->  Seq Scan on all_data

         Predicate Information (identified by plan id)
 --------------------------------------------------------------
   2 --Seq Scan on all_data
         Filter: ((role)::name = 'peter'::name)
 Notice: This query is influenced by row level security feature
(10 rows)
GaussDB(DWS) provides the column-level dynamic data masking (DDM) function. For sensitive data, such as the ID card number, mobile number, and bank card number, the DDM function is used to redact the original data to protect data security and user privacy.
+The following uses the employee table emp, administrator alice, and common users matu and july as examples to describe the data redaction process. The user alice is the owner of the emp table. The emp table contains private data such as the employee name, mobile number, email address, bank card number, and salary.
Create the administrator alice and the common users matu and july:

CREATE ROLE alice PASSWORD 'password';
CREATE ROLE matu PASSWORD 'password';
CREATE ROLE july PASSWORD 'password';

Create the emp table (owned by alice) and insert employee data:

CREATE TABLE emp(id int, name varchar(20), phone_no varchar(11), card_no number, card_string varchar(19), email text, salary numeric(100, 4), birthday date);

INSERT INTO emp VALUES(1, 'anny', '13420002340', 1234123412341234, '1234-1234-1234-1234', 'smithWu@163.com', 10000.00, '1999-10-02');
INSERT INTO emp VALUES(2, 'bob', '18299023211', 3456345634563456, '3456-3456-3456-3456', '66allen_mm@qq.com', 9999.99, '1989-12-12');
INSERT INTO emp VALUES(3, 'cici', '15512231233', NULL, NULL, 'jonesishere@sina.com', NULL, '1992-11-06');

Grant the SELECT permission on the emp table to matu and july:

GRANT SELECT ON emp TO matu, july;

Create the redaction policy mask_emp on emp so that matu and july see redacted card_no, card_string, and salary values:

CREATE REDACTION POLICY mask_emp ON emp WHEN (current_user IN ('matu', 'july'))
ADD COLUMN card_no WITH mask_full(card_no),
ADD COLUMN card_string WITH mask_partial(card_string, 'VVVVFVVVVFVVVVFVVVV','VVVV-VVVV-VVVV-VVVV','#',1,12),
ADD COLUMN salary WITH mask_partial(salary, '9', 1, length(salary) - 2);

Switch to matu and then july and query emp; both users see the redacted data:

SET ROLE matu PASSWORD 'password';
SELECT * FROM emp;
 id | name |  phone_no   | card_no |     card_string     |        email         |   salary   |      birthday
----+------+-------------+---------+---------------------+----------------------+------------+---------------------
  1 | anny | 13420002340 |       0 | ####-####-####-1234 | smithWu@163.com      | 99999.9990 | 1999-10-02 00:00:00
  2 | bob  | 18299023211 |       0 | ####-####-####-3456 | 66allen_mm@qq.com    |  9999.9990 | 1989-12-12 00:00:00
  3 | cici | 15512231233 |         |                     | jonesishere@sina.com |            | 1992-11-06 00:00:00
(3 rows)

SET ROLE july PASSWORD 'password';
SELECT * FROM emp;
 id | name |  phone_no   | card_no |     card_string     |        email         |   salary   |      birthday
----+------+-------------+---------+---------------------+----------------------+------------+---------------------
  1 | anny | 13420002340 |       0 | ####-####-####-1234 | smithWu@163.com      | 99999.9990 | 1999-10-02 00:00:00
  2 | bob  | 18299023211 |       0 | ####-####-####-3456 | 66allen_mm@qq.com    |  9999.9990 | 1989-12-12 00:00:00
  3 | cici | 15512231233 |         |                     | jonesishere@sina.com |            | 1992-11-06 00:00:00
(3 rows)

Modify the policy so that it takes effect only for july:

ALTER REDACTION POLICY mask_emp ON emp WHEN(current_user = 'july');

Query emp again as both users. matu now sees the original data, while july still sees the redacted data:

SET ROLE matu PASSWORD 'password';
SELECT * FROM emp;
 id | name |  phone_no   |     card_no      |     card_string     |        email         |   salary   |      birthday
----+------+-------------+------------------+---------------------+----------------------+------------+---------------------
  1 | anny | 13420002340 | 1234123412341234 | 1234-1234-1234-1234 | smithWu@163.com      | 10000.0000 | 1999-10-02 00:00:00
  2 | bob  | 18299023211 | 3456345634563456 | 3456-3456-3456-3456 | 66allen_mm@qq.com    |  9999.9900 | 1989-12-12 00:00:00
  3 | cici | 15512231233 |                  |                     | jonesishere@sina.com |            | 1992-11-06 00:00:00
(3 rows)

SET ROLE july PASSWORD 'password';
SELECT * FROM emp;
 id | name |  phone_no   | card_no |     card_string     |        email         |   salary   |      birthday
----+------+-------------+---------+---------------------+----------------------+------------+---------------------
  1 | anny | 13420002340 |       0 | ####-####-####-1234 | smithWu@163.com      | 99999.9990 | 1999-10-02 00:00:00
  2 | bob  | 18299023211 |       0 | ####-####-####-3456 | 66allen_mm@qq.com    |  9999.9990 | 1989-12-12 00:00:00
  3 | cici | 15512231233 |         |                     | jonesishere@sina.com |            | 1992-11-06 00:00:00
(3 rows)

Add redaction columns for phone_no, email, and birthday to the policy:

ALTER REDACTION POLICY mask_emp ON emp ADD COLUMN phone_no WITH mask_partial(phone_no, '*', 4);
ALTER REDACTION POLICY mask_emp ON emp ADD COLUMN email WITH mask_partial(email, '*', 1, position('@' in email));
ALTER REDACTION POLICY mask_emp ON emp ADD COLUMN birthday WITH mask_full(birthday);

Query emp as july; the newly added columns are redacted as well:

SET ROLE july PASSWORD 'password';
SELECT * FROM emp;
 id | name |  phone_no   | card_no |     card_string     |        email         |   salary   |      birthday
----+------+-------------+---------+---------------------+----------------------+------------+---------------------
  1 | anny | 134******** |       0 | ####-####-####-1234 | ********163.com      | 99999.9990 | 1970-01-01 00:00:00
  2 | bob  | 182******** |       0 | ####-####-####-3456 | ***********qq.com    |  9999.9990 | 1970-01-01 00:00:00
  3 | cici | 155******** |         |                     | ************sina.com |            | 1970-01-01 00:00:00
(3 rows)

Query the system views redaction_policies and redaction_columns for details about the policy and the redacted columns:

SELECT * FROM redaction_policies;
 object_schema | object_owner | object_name | policy_name |            expression             | enable | policy_description
---------------+--------------+-------------+-------------+-----------------------------------+--------+--------------------
 public        | alice        | emp         | mask_emp    | ("current_user"() = 'july'::name) | t      |
(1 row)

SELECT object_name, column_name, function_info FROM redaction_columns;
 object_name | column_name |                                             function_info
-------------+-------------+---------------------------------------------------------------------------------------------------------
 emp         | card_no     | mask_full(card_no)
 emp         | card_string | mask_partial(card_string, 'VVVVFVVVVFVVVVFVVVV'::text, 'VVVV-VVVV-VVVV-VVVV'::text, '#'::text, 1, 12)
 emp         | email       | mask_partial(email, '*'::text, 1, "position"(email, '@'::text))
 emp         | salary      | mask_partial(salary, '9'::text, 1, (length((salary)::text) - 2))
 emp         | birthday    | mask_full(birthday)
 emp         | phone_no    | mask_partial(phone_no, '*'::text, 4)
(6 rows)

Redaction functions can also be user-defined. The following adds a salary_info column and redacts it with a custom regular-expression-based function:

ALTER TABLE emp ADD COLUMN salary_info TEXT;
UPDATE emp SET salary_info = salary::text;

CREATE FUNCTION mask_regexp_salary(salary_info text) RETURNS text AS
$$
    SELECT regexp_replace($1, '[0-9]+', '*', 'g');
$$
LANGUAGE SQL
STRICT SHIPPABLE;

ALTER REDACTION POLICY mask_emp ON emp ADD COLUMN salary_info WITH mask_regexp_salary(salary_info);

SET ROLE july PASSWORD 'password';
SELECT id, name, salary_info FROM emp;
 id | name | salary_info
----+------+-------------
  1 | anny | *.*
  2 | bob  | *.*
  3 | cici |
(3 rows)

Drop the redaction policy when it is no longer needed:

DROP REDACTION POLICY mask_emp ON emp;
For data security purposes, GaussDB(DWS) provides a series of security measures, such as automatically locking and unlocking accounts, manually locking and unlocking abnormal accounts, and deleting accounts that are no longer used.
+If administrators detect an abnormal account that may be stolen or illegally accesses the database, they can manually lock the account.
+The administrator can also manually unlock the account if the account becomes normal again.
+For details about how to create a user, see Users. To manually lock and unlock user joe, run commands in the following format:
-- Lock the account.
ALTER USER joe ACCOUNT LOCK;

-- Unlock the account.
ALTER USER joe ACCOUNT UNLOCK;
An administrator can delete an account that is no longer used. This operation cannot be rolled back.
+When an account to be deleted is in the active state, it is deleted after the session is disconnected.
+For example, if you want to delete account joe, run the command in the following format:
DROP USER joe CASCADE;
When creating a user, you can specify the validity period of the account, including the start time and the end time.
+To enable a user not within the validity period to use its account, set a new validity period.
CREATE USER joe WITH PASSWORD 'password' VALID BEGIN '2015-10-10 08:00:00' VALID UNTIL '2016-10-10 08:00:00';

ALTER USER joe WITH VALID BEGIN '2016-11-10 08:00:00' VALID UNTIL '2017-11-10 08:00:00';

If VALID BEGIN is not specified in the CREATE ROLE or ALTER ROLE statement, the start time of the validity period is not limited. If VALID UNTIL is not specified, the end time of the validity period is not limited. If neither parameter is specified, the user is always valid.
+MRS is a big data cluster running based on the open-source Hadoop ecosystem. It provides the industry's latest cutting-edge storage and analytical capabilities of massive volumes of data, satisfying your data storage and processing requirements. For details, see the MapReduce Service User Guide.
+You can use Hive/Spark (analysis cluster of MRS) to store massive volumes of service data. Hive/Spark data files are stored on HDFS. On GaussDB(DWS), you can connect a GaussDB(DWS) cluster to an MRS cluster, read data from HDFS files, and write the data to GaussDB(DWS) when the clusters are on the same network.
+Ensure that MRS can communicate with DWS:
+Scenario 1: If MRS and DWS are in the same region and VPC, they can communicate with each other by default.
+Scenario 2: If MRS and DWS are in the same region but in different VPCs, you need to create a VPC peering connection. For details, see "VPC Peering Connection Overview" in Virtual Private Cloud User Guide.
Scenario 3: If MRS and DWS are in different regions, use Cloud Connect (CC) to create network connections. For details, see the user guide of the corresponding service.
+Scenario 4: If MRS is deployed on-premises, you need to use Direct Connect (DC) or Virtual Private Network (VPN) to create network connections. For details, see the user guide of the corresponding service.
+User passwords are stored in the system catalog pg_authid. To prevent password leakage, GaussDB(DWS) encrypts and stores the user passwords.
The password complexity requirements are as follows:
A password must contain at least three of the four character types (uppercase letters, lowercase letters, digits, and special characters).
When changing the password, a user can reuse an old password only if it has not been used in the last 60 days.
+A validity period (90 days by default) is set for each database user password. If the password is about to expire (in seven days), the system displays a message reminding the user to change it upon login.
+Considering the usage and service continuity of a database, the database still allows a user to log in after the password expires. A password change notification is displayed every time the user logs in to the database until the password is changed.
+Change the password as prompted.
+For example, to change the password of the user user1, connect to the database as the administrator and run the following command:
ALTER USER user1 IDENTIFIED BY "1234@abc" REPLACE "5678@def";
1234@abc and 5678@def represent the new password and the original password of the user user1, respectively. The new password must conform to the complexity rules. Otherwise, the new password is invalid.
+To change the password of the user joe, run the following command:
ALTER USER joe IDENTIFIED BY 'password';
Password verification is required when you set the user or role in the current session. If the entered password is inconsistent with the stored password of the user, an error is reported.
+To set the password of the user joe, run the following command:
SET ROLE joe PASSWORD 'password';
The following table lists the special characters supported in passwords.

No. | Character | No. | Character | No. | Character | No. | Character
---|---|---|---|---|---|---|---
1 | ~ | 9 | * | 17 | \| | 25 | <
2 | ! | 10 | ( | 18 | [ | 26 | .
3 | @ | 11 | ) | 19 | { | 27 | >
4 | # | 12 | - | 20 | } | 28 | /
5 | $ | 13 | _ | 21 | ] | 29 | ?
6 | % | 14 | = | 22 | ; | - | -
7 | ^ | 15 | + | 23 | : | - | -
8 | & | 16 | \ | 24 | , | - | -
This chapter describes the design specifications for database modeling and application development. Modeling compliant with these specifications fits the distributed processing architecture of GaussDB(DWS) and provides efficient SQL code.
In this chapter, "Proposal" indicates a recommended practice, and "Notice" indicates a rule that must be followed to avoid functional or performance problems.
+The name of a database object must contain 1 to 63 characters, start with a letter or underscore (_), and can contain letters, digits, underscores (_), dollar signs ($), and number signs (#).
+To query the keywords of GaussDB(DWS), run select * from pg_get_keywords() or refer to section "Keyword."
+In GaussDB(DWS), services can be isolated by databases and schemas. Databases share little resources and cannot directly access each other. Connections to and permissions on them are also isolated. Schemas share more resources than databases do. User permissions on schemas and subordinate objects can be controlled using the GRANT and REVOKE syntax.
+GaussDB(DWS) uses a distributed architecture. Data is distributed on DNs. Comply with the following principles to properly design a table:
+[Proposal] Selecting a storage mode is the first step in defining a table. The storage mode mainly depends on the customer's service type. For details, see Table 1.
Storage Mode | Application Scenarios
---|---
Row storage | Point queries (simple, index-based queries that return only a few records) and scenarios with frequent insertions, deletions, and updates.
Column storage | Statistical analysis queries (involving many join and group-by operations) and ad hoc queries (with uncertain query conditions, where row-store indexes are of little help).
Distribution Mode | Description | Application Scenarios
---|---|---
Hash | Table data is distributed on all DNs in a cluster by hash. | Fact tables containing a large amount of data
Replication | Full data in a table is stored on every DN in a cluster. | Dimension tables and fact tables containing a small amount of data
Comply with the following rules to partition a table containing a large amount of data:
An example partitioned table definition is as follows:

CREATE TABLE staffS_p1
(
  staff_ID       NUMBER(6) not null,
  FIRST_NAME     VARCHAR2(20),
  LAST_NAME      VARCHAR2(25),
  EMAIL          VARCHAR2(25),
  PHONE_NUMBER   VARCHAR2(20),
  HIRE_DATE      DATE,
  employment_ID  VARCHAR2(10),
  SALARY         NUMBER(8,2),
  COMMISSION_PCT NUMBER(4,2),
  MANAGER_ID     NUMBER(6),
  section_ID     NUMBER(4)
)
PARTITION BY RANGE (HIRE_DATE)
(
  PARTITION HIRE_19950501 VALUES LESS THAN ('1995-05-01 00:00:00'),
  PARTITION HIRE_19950502 VALUES LESS THAN ('1995-05-02 00:00:00'),
  PARTITION HIRE_maxvalue VALUES LESS THAN (MAXVALUE)
);
Selecting a distribution key is important for a hash table. An improper distribution key may cause data skew. As a result, the I/O load is heavy on several DNs, affecting the overall query performance. After you select a distribution policy for a hash table, check for data skew to ensure that data is evenly distributed. Comply with the following rules to select a distribution key:
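A minimal sketch of such a skew check, assuming a hypothetical hash-distributed table my_fact (GaussDB(DWS) also provides the table_skewness() function for the same purpose):

-- Count the rows stored on each DN; a markedly uneven distribution indicates skew.
SELECT xc_node_id, count(1)
FROM my_fact
GROUP BY xc_node_id
ORDER BY count(1) DESC;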
+Comply with the following rules to improve query efficiency when you design columns:
+If all of the following number types provide the required service precision, they are recommended in descending order of priority: integer, floating point, and numeric.
+For details about string types, see Common String Types.
+Every column requires a data type suitable for its data characteristics. The following table lists common string types in GaussDB(DWS).
Parameter | Description | Max. Storage Capacity
---|---|---
CHAR(n) | Fixed-length string, where n indicates the number of stored bytes. If the length of an input string is smaller than n, the string is automatically padded to n bytes using NULL characters. | 10 MB
CHARACTER(n) | Fixed-length string, where n indicates the number of stored bytes. If the length of an input string is smaller than n, the string is automatically padded to n bytes using NULL characters. | 10 MB
NCHAR(n) | Fixed-length string, where n indicates the number of stored bytes. If the length of an input string is smaller than n, the string is automatically padded to n bytes using NULL characters. | 10 MB
BPCHAR(n) | Fixed-length string, where n indicates the number of stored bytes. If the length of an input string is smaller than n, the string is automatically padded to n bytes using NULL characters. | 10 MB
VARCHAR(n) | Variable-length string, where n indicates the maximum number of bytes that can be stored. | 10 MB
CHARACTER VARYING(n) | Variable-length string, where n indicates the maximum number of bytes that can be stored. This data type and VARCHAR(n) are different representations of the same data type. | 10 MB
VARCHAR2(n) | Variable-length string, where n indicates the maximum number of bytes that can be stored. This data type is added for compatibility with the Oracle database, and its behavior is the same as that of VARCHAR(n). | 10 MB
NVARCHAR2(n) | Variable-length string, where n indicates the maximum number of bytes that can be stored. | 10 MB
TEXT | Variable-length string. | 1 GB minus 8203 bytes
A partial cluster key (PCK) is a local clustering technology used for column-store tables. After creating a PCK, you can quickly filter and scan fact tables using min or max sparse indexes in GaussDB(DWS). Comply with the following rules to create a PCK:
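For example, a minimal sketch of a PCK definition on a hypothetical column-store fact table (the table and column names are illustrative):

CREATE TABLE sales_fact
(
    sale_date date,
    store_id  int,
    amount    numeric(12,2),
    PARTIAL CLUSTER KEY(sale_date)
)
WITH (orientation = column);

-- Queries that filter on sale_date can then skip storage units via min/max sparse indexes.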
Currently, third-party tools connect to GaussDB(DWS) through JDBC. This section describes the precautions for configuring such tools.
When a JDBC connection is established, the driver sets the following session parameters by default:

params = {
  { "user", user },
  { "database", database },
  { "client_encoding", "UTF8" },
  { "DateStyle", "ISO" },
  { "extra_float_digits", "2" },
  { "TimeZone", createPostgresTimeZone() },
};

These parameters may cause the JDBC and gsql clients to display data inconsistently, for example, in the date display format, floating-point precision, and time zone.
If a result is not as expected, explicitly set these parameters in the Java connection settings.
+[Notice] To use fetchsize in applications, disable the autocommit switch. Enabling the autocommit switch makes the fetchsize configuration invalid.
[Proposal] It is recommended that you enable the autocommit switch in the code that connects to GaussDB(DWS) through JDBC. If autocommit must be disabled to improve performance or for other purposes, the application has to ensure that its transactions are committed: for example, explicitly commit each transaction after its service SQL statements run. In particular, ensure that all transactions are committed before the client exits.
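A minimal sketch of the manual-commit pattern described above (assumes an open java.sql.Connection conn inside a method that declares throws SQLException; the table is the task example used later in this section):

conn.setAutoCommit(false);
try {
    Statement st = conn.createStatement();
    st.executeUpdate("INSERT INTO task(name,id,comment) VALUES ('task1','100','100th task')");
    st.close();
    conn.commit();   //Explicitly commit the transaction before the client exits.
} catch (SQLException e) {
    conn.rollback(); //Roll back the unfinished transaction on failure.
}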
+[Proposal] You are advised to use connection pools to limit the number of connections from applications. Do not connect to a database every time you run an SQL statement.
+[Proposal] After an application completes its tasks, disconnect its connection to GaussDB(DWS) to release occupied resources. You are advised to set the session timeout interval in the task.
+[Proposal] Reset the session environment before releasing connections to the JDBC connection tool. Otherwise, historical session information may cause object conflicts.
[Proposal] In the scenario where an ETL tool is not used and real-time data import is required, it is recommended that you use the CopyManager interface of the GaussDB(DWS) JDBC driver to import data in batches during application development. For details about how to use CopyManager, see CopyManager.
[Proposal] Explicitly list the target columns in INSERT statements, for example:

INSERT INTO task(name,id,comment) VALUES ('task1','100','100th task');

[Notice] Avoid implicit type conversion between a column and a constant, as in the following query:

SELECT * FROM test WHERE timestamp_col = 20000101;
In the preceding example, if timestamp_col is the timestamp type, the system first searches for the function that supports the "equal" operation of the timestamp and int types (constant numbers are considered as the int type). If no such function is found, the timestamp_col data and constant numbers are implicitly converted into the text type for calculation.
[Proposal] Avoid scalar subqueries, that is, subqueries embedded in the SELECT list that return exactly one value per row, such as:

SELECT id, (SELECT COUNT(*) FROM films f WHERE f.did = s.id) FROM staffs_p1 s;
Scalar subqueries often result in query performance deterioration. During application development, scalar subqueries need to be converted into equivalent table associations based on the service logic.
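A minimal sketch of such a conversion for the example above, assuming id is unique in staffs_p1: the scalar subquery becomes an outer join plus aggregation, and COUNT(f.did) still returns 0 for staff with no matching films.

SELECT s.id, COUNT(f.did)
FROM staffs_p1 s
LEFT JOIN films f ON f.did = s.id
GROUP BY s.id;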
[Proposal] Do not apply calculations to the filter column; move them to the constant side of the condition instead. The following query applies an expression to the time column:

SELECT id, from_image_id, from_person_id, from_video_id FROM face_data WHERE current_timestamp(6) - time < '1 days'::interval;
The modification is as follows:
SELECT id, from_image_id, from_person_id, from_video_id FROM face_data WHERE time > current_timestamp(6) - '1 days'::interval;
[Proposal] Rewrite a query whose filter consists of multiple OR-connected branch conditions as a UNION ALL of the branches, for example:

SELECT * FROM scdc.pub_menu
WHERE (cdp= 300 AND inline=301) OR (cdp= 301 AND inline=302) OR (cdp= 302 AND inline=301);
Convert the statement to the following:
SELECT * FROM scdc.pub_menu
WHERE (cdp= 300 AND inline=301)
UNION ALL
SELECT * FROM scdc.pub_menu
WHERE (cdp= 301 AND inline=302)
UNION ALL
SELECT * FROM scdc.pub_menu
WHERE (cdp= 302 AND inline=301);
[Proposal] Replace NOT IN with NOT EXISTS where the service logic allows, because NOT EXISTS usually produces a more efficient plan. For example, rewrite the following statement:

SELECT * FROM T1 WHERE T1.C1 NOT IN (SELECT T2.C2 FROM T2);
Rewrite the statement as follows:
SELECT * FROM T1 WHERE NOT EXISTS (SELECT 1 FROM T2 WHERE T2.C2 = T1.C1);

Note that NOT IN and NOT EXISTS are not strictly equivalent when T2.C2 contains NULL values; confirm the service logic before rewriting.
If the connection pool mechanism is used during application development, comply with the following specifications:
If you do not reset them, session state left on pooled connections will persist, which may affect subsequent operations that reuse those connections.
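A minimal sketch of such a reset, using statements that GaussDB(DWS) inherits from PostgreSQL (verify against your cluster version):

-- Run these on a connection before returning it to the pool.
SET SESSION AUTHORIZATION DEFAULT;
RESET ALL;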
+For details, see section "Downloading the JDBC or ODBC Driver" in the Data Warehouse Service User Guide.
+Java Database Connectivity (JDBC) is a Java API for executing SQL statements, providing a unified access interface for different relational databases, based on which applications process data. GaussDB(DWS) supports JDBC 4.0 and requires JDK 1.6 or later for code compiling. It does not support JDBC-ODBC Bridge.
+Obtain the package dws_8.1.x_jdbc_driver.zip from the management console. For details, see Downloading Drivers.
The package contains the JDBC driver JAR package gsjdbc4.jar.
gsjdbc4.jar: driver package compatible with PostgreSQL. The class names and class structure in the driver are the same as those in the PostgreSQL driver, so applications that run on PostgreSQL can be smoothly migrated to the current system.
Before creating a database connection, load the database driver class org.postgresql.Driver (decompressed from gsjdbc4.jar). You can load the driver in either of the following ways:
- Implicitly loading it in the code before creating the connection, for example, Class.forName("org.postgresql.Driver");
- Passing a parameter when starting the JVM, for example, java -Djdbc.drivers=org.postgresql.Driver jdbctest
+After a database is connected, you can execute SQL statements in the database.
If you use the open-source PostgreSQL Java Database Connectivity (JDBC) driver, ensure that the database parameter password_encryption_type is set to 1; otherwise the connection may fail, typically with the error "none of the server's SASL authentication mechanisms are supported." The open-source driver supports only MD5 password authentication, so after changing password_encryption_type to 1 you must reset the password of the connecting user (or create a new user) so that an MD5-compatible password hash is stored.
JDBC provides the following three database connection methods:
- DriverManager.getConnection(String url);
- DriverManager.getConnection(String url, Properties info);
- DriverManager.getConnection(String url, String user, String password);
Parameter | Description
---|---
url | gsjdbc4.jar database connection descriptor. The format can be jdbc:postgresql:database, jdbc:postgresql://host/database, jdbc:postgresql://host:port/database, or jdbc:postgresql://host:port/database?currentSchema=schema. NOTE: If gsjdbc200.jar is used, replace jdbc:postgresql with jdbc:gaussdb.
info | Database connection properties, passed as a java.util.Properties object. Common properties include user (the database user) and password (the user's password).
user | Database user.
password | Password of the database user.
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 | //gsjdbc4.jar is used as an example. +//The following code encapsulates database connection operations into an interface. The database can then be connected using an authorized username and password. + +public static Connection GetConnection(String username, String passwd) + { + //Set the driver class. + String driver = "org.postgresql.Driver"; + //Set the database connection descriptor. + String sourceURL = "jdbc:postgresql://10.10.0.13:8000/postgres?currentSchema=test"; + Connection conn = null; + + try + { + //Load the driver. + Class.forName(driver); + } + catch( Exception e ) + { + e.printStackTrace(); + return null; + } + + try + { + //Create a connection. + conn = DriverManager.getConnection(sourceURL, username, passwd); + System.out.println("Connection succeed!"); + } + catch(Exception e) + { + e.printStackTrace(); + return null; + } + + return conn; + }; + |
The application operates on data in the database by running SQL statements without bind parameters. Perform the following steps:
Create a statement object by calling the createStatement method of Connection:

Statement stmt = con.createStatement();

Run the SQL statement by calling the executeUpdate method of Statement:

int rc = stmt.executeUpdate("CREATE TABLE customer_t1(c_customer_sk INTEGER, c_customer_name VARCHAR(32));");

If an execution request (not in a transaction block) received in the database contains multiple statements, the request is packed into a transaction. VACUUM is not supported in a transaction block. If one of the statements fails, the entire request is rolled back.
Close the statement object:

stmt.close();
A precompiled statement is compiled and optimized once and can then be run many times with different parameters, which greatly improves execution efficiency. If you want to execute a statement several times, use a precompiled statement and perform the following procedure:
Create a precompiled statement object by calling the prepareStatement method of Connection:

PreparedStatement pstmt = con.prepareStatement("UPDATE customer_t1 SET c_customer_name = ? WHERE c_customer_sk = 1");

Set the parameters by calling the setter methods of PreparedStatement:

pstmt.setShort(1, (short)2);

Run the precompiled statement by calling the executeUpdate method of PreparedStatement:

int rowcount = pstmt.executeUpdate();

Close the precompiled statement object:

pstmt.close();
Perform the following steps to call existing stored procedures through the JDBC interface in GaussDB(DWS):
Create a call statement object by calling the prepareCall method of Connection:

CallableStatement cstmt = myConn.prepareCall("{? = CALL TESTPROC(?,?,?)}");

Set the parameters by calling the setter methods of CallableStatement:

cstmt.setInt(2, 50);
cstmt.setInt(1, 20);
cstmt.setInt(3, 90);

Register the output parameter by calling the registerOutParameter method of CallableStatement:

cstmt.registerOutParameter(4, Types.INTEGER); //Register an OUT parameter as an integer.

Run the call statement by calling the execute method of CallableStatement:

cstmt.execute();

Obtain the output parameter by calling the getter methods of CallableStatement:

int out = cstmt.getInt(4); //Obtain the OUT parameter.
For example:

//The following stored procedure has been created with the OUT parameter:
create or replace procedure testproc
(
    psv_in1 in integer,
    psv_in2 in integer,
    psv_inout in out integer
)
as
begin
    psv_inout := psv_in1 + psv_in2 + psv_inout;
end;
/

Close the call statement object:

cstmt.close();
When a prepared statement batch processes multiple pieces of similar data, the database creates only one execution plan. This improves the compilation and optimization efficiency. Perform the following procedure:
Create a precompiled statement object by calling the prepareStatement method of Connection:

PreparedStatement pstmt = con.prepareStatement("INSERT INTO customer_t1 VALUES (?)");

For each record, set the parameters and add the record to the batch by calling addBatch:

pstmt.setShort(1, (short)2);
pstmt.addBatch();

Run the batch by calling the executeBatch method of PreparedStatement:

int[] rowcount = pstmt.executeBatch();

Close the precompiled statement object:

pstmt.close();

Do not terminate a batch processing action while it is ongoing; otherwise, database performance may deteriorate. Disable automatic commit during batch processing and commit manually every few lines, as shown in the sketch below. The statement for disabling automatic commit is conn.setAutoCommit(false).
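A minimal sketch of chunked batch commits (assumes an open Connection conn and the customer_t1 table from the earlier examples; the chunk size is illustrative):

conn.setAutoCommit(false);
PreparedStatement pst = conn.prepareStatement("INSERT INTO customer_t1 VALUES (?,?)");
final int CHUNK = 1000;
for (int i = 0; i < 100000; i++) {
    pst.setInt(1, i);
    pst.setString(2, "data " + i);
    pst.addBatch();
    if ((i + 1) % CHUNK == 0) {
        pst.executeBatch(); //Flush the current chunk...
        conn.commit();      //...and commit it manually.
    }
}
pst.executeBatch();         //Flush the remainder...
conn.commit();              //...and commit it.
pst.close();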
+Different types of result sets are applicable to different application scenarios. Applications select proper types of result sets based on requirements. Before executing an SQL statement, you must create a statement object. Some methods of creating statement objects can set the type of a result set. Table 1 lists result set parameters. The related Connection methods are as follows:
//Create a Statement object. This object will generate a ResultSet object with a specified type and concurrency.
createStatement(int resultSetType, int resultSetConcurrency);

//Create a PreparedStatement object. This object will generate a ResultSet object with a specified type and concurrency.
prepareStatement(String sql, int resultSetType, int resultSetConcurrency);

//Create a CallableStatement object. This object will generate a ResultSet object with a specified type and concurrency.
prepareCall(String sql, int resultSetType, int resultSetConcurrency);
Parameter | Description
---|---
resultSetType | Type of the result set. There are three types: ResultSet.TYPE_FORWARD_ONLY (the cursor can move only forward), ResultSet.TYPE_SCROLL_SENSITIVE (the cursor is scrollable), and ResultSet.TYPE_SCROLL_INSENSITIVE (the cursor is scrollable and the result set is insensitive to changes in the underlying data). NOTE: After a result set has obtained data from the database, the result set is insensitive to data changes made by other transactions, even if the result set type is ResultSet.TYPE_SCROLL_SENSITIVE. To obtain up-to-date data of the record pointed to by the cursor, call the refreshRow() method of the ResultSet object.
resultSetConcurrency | Concurrency type of the result set. There are two types: ResultSet.CONCUR_READ_ONLY (the result set is read-only) and ResultSet.CONCUR_UPDATABLE (the result set can be updated).

ResultSet objects include a cursor pointing to the current data row. The cursor is initially positioned before the first row. The next method moves the cursor to the next row from its current position; when a ResultSet object has no next row, a call to next returns false, so next is typically used in a while loop to iterate over a result set. For scrollable result sets, the JDBC driver provides additional cursor positioning methods that move the cursor to a specified row. Table 2 lists these methods.
Method | Description
---|---
next() | Moves the cursor to the next row from its current position.
previous() | Moves the cursor to the previous row from its current position.
beforeFirst() | Places the cursor before the first row.
afterLast() | Places the cursor after the last row.
first() | Places the cursor on the first row.
last() | Places the cursor on the last row.
absolute(int) | Places the cursor on a specified row.
relative(int) | Moves the cursor forward or backward a specified number of rows.
These positioning methods change the cursor position only for scrollable result sets. The JDBC driver also provides methods for checking the current cursor position in a result set; Table 3 lists them.
Method | Description
---|---
isFirst() | Checks whether the cursor is on the first row.
isLast() | Checks whether the cursor is on the last row.
isBeforeFirst() | Checks whether the cursor is before the first row.
isAfterLast() | Checks whether the cursor is after the last row.
getRow() | Gets the current row number of the cursor.
ResultSet objects provide a variety of methods to obtain data from a result set. Table 4 lists the common methods for obtaining data. If you want to know more about other methods, see JDK official documents.
Method | Description
---|---
int getInt(int columnIndex) | Retrieves the value of the column designated by a column index in the current row as an int.
int getInt(String columnLabel) | Retrieves the value of the column designated by a column label in the current row as an int.
String getString(int columnIndex) | Retrieves the value of the column designated by a column index in the current row as a String.
String getString(String columnLabel) | Retrieves the value of the column designated by a column label in the current row as a String.
Date getDate(int columnIndex) | Retrieves the value of the column designated by a column index in the current row as a Date.
Date getDate(String columnLabel) | Retrieves the value of the column designated by a column name in the current row as a Date.
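A minimal sketch combining the methods above (assumes an open Connection conn and the customer_t1 table used earlier in this section):

Statement st = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = st.executeQuery("SELECT c_customer_sk, c_customer_name FROM customer_t1");
rs.last();                   //Jump to the last row...
int rowCount = rs.getRow();  //...so getRow() returns the total row count.
rs.beforeFirst();            //Rewind before iterating.
while (rs.next()) {
    int sk = rs.getInt(1);                          //Fetch by column index.
    String name = rs.getString("c_customer_name");  //Fetch by column label.
}
rs.close();
st.close();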
After you complete the required data operations in the database, close the database connection by calling the close method, for example, conn.close().
+Before completing the following example, you need to create a stored procedure.
create or replace procedure testproc
(
    psv_in1 in integer,
    psv_in2 in integer,
    psv_inout in out integer
)
as
begin
    psv_inout := psv_in1 + psv_in2 + psv_inout;
end;
/
This example illustrates how to develop applications based on the GaussDB(DWS) JDBC interface.
//DBtest.java
//gsjdbc4.jar is used as an example.
//This example illustrates the main processes of JDBC-based development, covering database connection creation, table creation, and data insertion.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.CallableStatement;
import java.sql.Types; //Required for Types.INTEGER when registering the OUT parameter.

public class DBTest {

    //Establish a connection to the database.
    public static Connection GetConnection(String username, String passwd) {
        String driver = "org.postgresql.Driver";
        String sourceURL = "jdbc:postgresql://localhost:8000/gaussdb";
        Connection conn = null;
        try {
            //Load the database driver.
            Class.forName(driver).newInstance();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }

        try {
            //Establish a connection to the database.
            conn = DriverManager.getConnection(sourceURL, username, passwd);
            System.out.println("Connection succeed!");
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }

        return conn;
    };

    //Run an ordinary SQL statement. Create a customer_t1 table.
    public static void CreateTable(Connection conn) {
        Statement stmt = null;
        try {
            stmt = conn.createStatement();

            //Run an ordinary SQL statement.
            int rc = stmt.executeUpdate("CREATE TABLE customer_t1(c_customer_sk INTEGER, c_customer_name VARCHAR(32));");

            stmt.close();
        } catch (SQLException e) {
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }
    }

    //Run the preprocessing statement to insert data in batches.
    public static void BatchInsertData(Connection conn) {
        PreparedStatement pst = null;

        try {
            //Generate a prepared statement.
            pst = conn.prepareStatement("INSERT INTO customer_t1 VALUES (?,?)");
            for (int i = 0; i < 3; i++) {
                //Add parameters.
                pst.setInt(1, i);
                pst.setString(2, "data " + i);
                pst.addBatch();
            }
            //Run batch processing.
            pst.executeBatch();
            pst.close();
        } catch (SQLException e) {
            if (pst != null) {
                try {
                    pst.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }
    }

    //Run the precompilation statement to update data.
    public static void ExecPreparedSQL(Connection conn) {
        PreparedStatement pstmt = null;
        try {
            pstmt = conn.prepareStatement("UPDATE customer_t1 SET c_customer_name = ? WHERE c_customer_sk = 1");
            pstmt.setString(1, "new Data");
            int rowcount = pstmt.executeUpdate();
            pstmt.close();
        } catch (SQLException e) {
            if (pstmt != null) {
                try {
                    pstmt.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }
    }

    //Run a stored procedure.
    public static void ExecCallableSQL(Connection conn) {
        CallableStatement cstmt = null;
        try {
            cstmt = conn.prepareCall("{? = CALL TESTPROC(?,?,?)}");
            cstmt.setInt(2, 50);
            cstmt.setInt(1, 20);
            cstmt.setInt(3, 90);
            cstmt.registerOutParameter(4, Types.INTEGER); //Register an OUT parameter as an integer.
            cstmt.execute();
            int out = cstmt.getInt(4); //Obtain the OUT parameter value.
            System.out.println("The CallableStatment TESTPROC returns:" + out);
            cstmt.close();
        } catch (SQLException e) {
            if (cstmt != null) {
                try {
                    cstmt.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }
    }

    /**
     * Main process. Call static methods one by one.
     * @param args
     */
    public static void main(String[] args) {
        //Establish a connection to the database.
        Connection conn = GetConnection("tester", "password");

        //Create a table.
        CreateTable(conn);

        //Insert data in batches.
        BatchInsertData(conn);

        //Run the precompilation statement to update data.
        ExecPreparedSQL(conn);

        //Run a stored procedure.
        ExecCallableSQL(conn);

        //Close the connection to the database.
        try {
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }

    }

}
In this example, setFetchSize adjusts the memory usage of the client by using a database cursor to obtain server data in batches. This may increase the number of network interactions and slightly reduce performance.
+The cursor is valid within a transaction. Therefore, you need to disable the autocommit function.
// Disable the autocommit function.
conn.setAutoCommit(false);
Statement st = conn.createStatement();

// Open the cursor and obtain 50 lines of data each time.
st.setFetchSize(50);
ResultSet rs = st.executeQuery("SELECT * FROM mytable");
while (rs.next())
{
    System.out.print("a row was returned.");
}
rs.close();

// Disable the server cursor.
st.setFetchSize(0);
rs = st.executeQuery("SELECT * FROM mytable");
while (rs.next())
{
    System.out.print("many rows were returned.");
}
rs.close();

// Close the statement.
st.close();
If the primary DN is faulty and cannot be restored within 40 seconds, its standby is automatically promoted to primary to ensure the normal running of the cluster. Jobs running during the failover fail, while jobs started after the failover are not affected. To protect upper-layer services from the failover, refer to the following example to construct an SQL retry mechanism at the service layer.
//gsjdbc4.jar is used as an example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class ExitHandler extends Thread {
    private Statement cancel_stmt = null;

    public ExitHandler(Statement stmt) {
        super("Exit Handler");
        this.cancel_stmt = stmt;
    }
    public void run() {
        System.out.println("exit handle");
        try {
            this.cancel_stmt.cancel();
        } catch (SQLException e) {
            System.out.println("cancel query failed.");
            e.printStackTrace();
        }
    }
}

public class SQLRetry {
    //Establish a connection to the database.
    public static Connection GetConnection(String username, String passwd) {
        String driver = "org.postgresql.Driver";
        String sourceURL = "jdbc:postgresql://10.131.72.136:8000/gaussdb";
        Connection conn = null;
        try {
            //Load the database driver.
            Class.forName(driver).newInstance();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }

        try {
            //Establish a connection to the database.
            conn = DriverManager.getConnection(sourceURL, username, passwd);
            System.out.println("Connection succeed!");
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }

        return conn;
    }

    //Run an ordinary SQL statement. Create a jdbc_test1 table.
    public static void CreateTable(Connection conn) {
        Statement stmt = null;
        try {
            stmt = conn.createStatement();

            //Add a Ctrl+C handler that cancels the running query on exit.
            Runtime.getRuntime().addShutdownHook(new ExitHandler(stmt));

            //Run ordinary SQL statements.
            int rc2 = stmt.executeUpdate("DROP TABLE if exists jdbc_test1;");

            int rc1 = stmt.executeUpdate("CREATE TABLE jdbc_test1(col1 INTEGER, col2 VARCHAR(10));");

            stmt.close();
        } catch (SQLException e) {
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }
    }

    //Run the preprocessing statement to insert data in batches.
    public static void BatchInsertData(Connection conn) {
        PreparedStatement pst = null;

        try {
            //Generate a prepared statement.
            pst = conn.prepareStatement("INSERT INTO jdbc_test1 VALUES (?,?)");
            for (int i = 0; i < 100; i++) {
                //Add parameters.
                pst.setInt(1, i);
                pst.setString(2, "data " + i);
                pst.addBatch();
            }
            //Perform batch processing.
            pst.executeBatch();
            pst.close();
        } catch (SQLException e) {
            if (pst != null) {
                try {
                    pst.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }
    }

    //Run the query and print the result; return true on success.
    private static boolean QueryRedo(Connection conn){
        PreparedStatement pstmt = null;
        boolean retValue = false;
        try {
            pstmt = conn.prepareStatement("SELECT col1 FROM jdbc_test1 WHERE col2 = ?");

            pstmt.setString(1, "data 10");
            ResultSet rs = pstmt.executeQuery();

            while (rs.next()) {
                System.out.println("col1 = " + rs.getString("col1"));
            }
            rs.close();

            pstmt.close();
            retValue = true;
        } catch (SQLException e) {
            System.out.println("catch...... retValue " + retValue);
            if (pstmt != null) {
                try {
                    pstmt.close();
                } catch (SQLException e1) {
                    e1.printStackTrace();
                }
            }
            e.printStackTrace();
        }

        System.out.println("finish......");
        return retValue;
    }

    //Run a query statement and retry upon a failure. The number of retries is configurable.
    public static void ExecPreparedSQL(Connection conn) throws InterruptedException {
        int maxRetryTime = 50;
        int time = 0;
        String result = null;
        do {
            time++;
            try {
                System.out.println("time:" + time);
                boolean ret = QueryRedo(conn);
                if (ret == false) {
                    System.out.println("retry, time:" + time);
                    Thread.sleep(10000);
                    ret = QueryRedo(conn);
                }
                if (ret) {
                    //Record the success so that the loop exits instead of retrying again.
                    result = "success";
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        } while (null == result && time < maxRetryTime);

    }

    /**
     * Main process. Call static methods one by one.
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        //Establish a connection to the database.
        Connection conn = GetConnection("testuser", "test@123");

        //Create a table.
        CreateTable(conn);

        //Insert data in batches.
        BatchInsertData(conn);

        //Run the query with retry.
        ExecPreparedSQL(conn);

        //Disconnect from the database.
        try {
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }

    }

}
When the JAVA language is used for secondary development based on GaussDB(DWS), you can use the CopyManager interface to export data from the database to a local file or import a local file to the database by streaming. The file can be in CSV or TEXT format.
+The sample program is as follows. Load the GaussDB(DWS) JDBC driver before running it.
//gsjdbc4.jar is used as an example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.io.IOException;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.sql.SQLException;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class Copy{

    public static void main(String[] args)
    {
        String urls = new String("jdbc:postgresql://10.180.155.74:8000/gaussdb"); //Database URL
        String username = new String("jack"); //Username
        String password = new String("********"); //Password
        String tablename = new String("migration_table"); //Define table information.
        String tablename1 = new String("migration_table_1"); //Define table information.
        String driver = "org.postgresql.Driver";
        Connection conn = null;

        try {
            Class.forName(driver);
            conn = DriverManager.getConnection(urls, username, password);
        } catch (ClassNotFoundException e) {
            e.printStackTrace(System.out);
        } catch (SQLException e) {
            e.printStackTrace(System.out);
        }

        //Export the query result of SELECT * FROM migration_table to the local file d:/data.txt.
        try {
            copyToFile(conn, "d:/data.txt", "(SELECT * FROM migration_table)");
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }

        //Import data from the d:/data.txt file to the migration_table_1 table.
        try {
            copyFromFile(conn, "d:/data.txt", tablename1);
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }

        //Export the data from the migration_table_1 table to the d:/data1.txt file.
        try {
            copyToFile(conn, "d:/data1.txt", tablename1);
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void copyFromFile(Connection connection, String filePath, String tableName)
            throws SQLException, IOException {

        FileInputStream fileInputStream = null;

        try {
            CopyManager copyManager = new CopyManager((BaseConnection)connection);
            fileInputStream = new FileInputStream(filePath);
            copyManager.copyIn("COPY " + tableName + " FROM STDIN", fileInputStream);
        } finally {
            if (fileInputStream != null) {
                try {
                    fileInputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void copyToFile(Connection connection, String filePath, String tableOrQuery)
            throws SQLException, IOException {

        FileOutputStream fileOutputStream = null;

        try {
            CopyManager copyManager = new CopyManager((BaseConnection)connection);
            fileOutputStream = new FileOutputStream(filePath);
            copyManager.copyOut("COPY " + tableOrQuery + " TO STDOUT", fileOutputStream);
        } finally {
            if (fileOutputStream != null) {
                try {
                    fileOutputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
The following example shows how to use CopyManager to migrate data from MySQL to GaussDB(DWS).
//gsjdbc4.jar is used as an example.
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class Migration{

    public static void main(String[] args) {
        String url = new String("jdbc:postgresql://10.180.155.74:8000/gaussdb"); //Database URL
        String user = new String("jack"); //Database username
        String pass = new String("********"); //Database password
        String tablename = new String("migration_table"); //Define table information.
        String delimiter = new String("|"); //Define a delimiter.
        String encoding = new String("UTF8"); //Define a character set.
        String driver = "org.postgresql.Driver";
        StringBuffer buffer = new StringBuffer(); //Define the buffer to store formatted data.

        try {
            //Obtain the query result set of the source database.
            ResultSet rs = getDataSet();

            //Traverse the result set and obtain records row by row.
            //The values of columns in each record are separated by the specified delimiter and end with a newline character to form strings.
            //Add the strings to the buffer.
            while (rs.next()) {
                buffer.append(rs.getString(1) + delimiter
                        + rs.getString(2) + delimiter
                        + rs.getString(3) + delimiter
                        + rs.getString(4)
                        + "\n");
            }
            rs.close();

            try {
                //Connect to the target database.
                Class.forName(driver);
                Connection conn = DriverManager.getConnection(url, user, pass);
                BaseConnection baseConn = (BaseConnection) conn;
                baseConn.setAutoCommit(false);

                //Initialize table information.
                String sql = "Copy " + tablename + " from STDIN DELIMITER " + "'" + delimiter + "'" + " ENCODING " + "'" + encoding + "'";

                //Submit data in the buffer.
                CopyManager cp = new CopyManager(baseConn);
                StringReader reader = new StringReader(buffer.toString());
                cp.copyIn(sql, reader);
                baseConn.commit();
                reader.close();
                baseConn.close();
            } catch (ClassNotFoundException e) {
                e.printStackTrace(System.out);
            } catch (SQLException e) {
                e.printStackTrace(System.out);
            }

        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    //Return the query result set from the source database.
    private static ResultSet getDataSet() {
        ResultSet rs = null;
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            Connection conn = DriverManager.getConnection("jdbc:mysql://10.119.179.227:3306/jack?useSSL=false&allowPublicKeyRetrieval=true", "jack", "********");
            Statement stmt = conn.createStatement();
            rs = stmt.executeQuery("select * from migration_table");
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return rs;
    }
}
The JDBC interface is a set of API methods provided for users. This section describes some of its common interfaces. For details about other interfaces, see the JDK 1.6 and JDBC 4.0 documentation.
+This section describes java.sql.Connection, the interface for connecting to a database.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
close() + |
+void + |
+Yes + |
+
commit() + |
+void + |
+Yes + |
+
createStatement() + |
+Statement + |
+Yes + |
+
getAutoCommit() + |
+boolean + |
+Yes + |
+
getClientInfo() + |
+Properties + |
+Yes + |
+
getClientInfo(String name) + |
+String + |
+Yes + |
+
getTransactionIsolation() + |
+int + |
+Yes + |
+
isClosed() + |
+boolean + |
+Yes + |
+
isReadOnly() + |
+boolean + |
+Yes + |
+
prepareStatement(String sql) + |
+PreparedStatement + |
+Yes + |
+
rollback() + |
+void + |
+Yes + |
+
setAutoCommit(boolean autoCommit) + |
+void + |
+Yes + |
+
setClientInfo(Properties properties) + |
+void + |
+Yes + |
+
setClientInfo(String name,String value) + |
+void + |
+Yes + |
+
AutoCommit mode is enabled by default within the interface. If you disable it by calling setAutoCommit(false), all statements executed later are packaged into explicit transactions, and statements that cannot run inside a transaction can no longer be executed.
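As an illustration, the following minimal sketch groups two statements into one explicit transaction after disabling AutoCommit. The connection URL, user, password, and table t1 are placeholders; replace them with your own values.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionDemo {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://host:8000/gaussdb", "dbadmin", "********");
        try {
            conn.setAutoCommit(false);               // Start using explicit transactions.
            Statement stmt = conn.createStatement();
            stmt.executeUpdate("INSERT INTO t1 VALUES (1)");
            stmt.executeUpdate("INSERT INTO t1 VALUES (2)");
            conn.commit();                           // Both rows become visible together.
            stmt.close();
        } catch (SQLException e) {
            conn.rollback();                         // Undo the whole transaction on failure.
            throw e;
        } finally {
            conn.close();
        }
    }
}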
+This section describes java.sql.CallableStatement, the stored procedure execution interface.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
registerOutParameter(int parameterIndex, int type) + |
+void + |
+Yes + |
+
wasNull() + |
+boolean + |
+Yes + |
+
getString(int parameterIndex) + |
+String + |
+Yes + |
+
getBoolean(int parameterIndex) + |
+boolean + |
+Yes + |
+
getByte(int parameterIndex) + |
+byte + |
+Yes + |
+
getShort(int parameterIndex) + |
+short + |
+Yes + |
+
getInt(int parameterIndex) + |
+int + |
+Yes + |
+
getLong(int parameterIndex) + |
+long + |
+Yes + |
+
getFloat(int parameterIndex) + |
+float + |
+Yes + |
+
getDouble(int parameterIndex) + |
+double + |
+Yes + |
+
getBigDecimal(int parameterIndex) + |
+BigDecimal + |
+Yes + |
+
getBytes(int parameterIndex) + |
+byte[] + |
+Yes + |
+
getDate(int parameterIndex) + |
+Date + |
+Yes + |
+
getTime(int parameterIndex) + |
+Time + |
+Yes + |
+
getTimestamp(int parameterIndex) + |
+Timestamp + |
+Yes + |
+
getObject(int parameterIndex) + |
+Object + |
+Yes + |
+
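As an illustration, the following minimal sketch invokes a stored procedure through CallableStatement. It assumes that conn is an open connection and that a procedure proc_add(IN a int, IN b int, OUT result int) already exists; both names are placeholders.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class CallableDemo {
    static int callProcAdd(Connection conn, int a, int b) throws SQLException {
        CallableStatement cs = conn.prepareCall("{call proc_add(?, ?, ?)}");
        try {
            cs.setInt(1, a);                            // Bind the IN parameters.
            cs.setInt(2, b);
            cs.registerOutParameter(3, Types.INTEGER);  // Declare the OUT parameter.
            cs.execute();
            return cs.getInt(3);                        // Read the OUT parameter.
        } finally {
            cs.close();
        }
    }
}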
This section describes java.sql.DatabaseMetaData, the interface for obtaining database metadata.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
getTables(String catalog, String schemaPattern, String tableNamePattern, String[] types) + |
+ResultSet + |
+Yes + |
+
getColumns(String catalog, String schemaPattern, String tableNamePattern, String columnNamePattern) + |
+ResultSet + |
+Yes + |
+
getTableTypes() + |
+ResultSet + |
+Yes + |
+
getUserName() + |
+String + |
+Yes + |
+
isReadOnly() + |
+boolean + |
+Yes + |
+
nullsAreSortedHigh() + |
+boolean + |
+Yes + |
+
nullsAreSortedLow() + |
+boolean + |
+Yes + |
+
nullsAreSortedAtStart() + |
+boolean + |
+Yes + |
+
nullsAreSortedAtEnd() + |
+boolean + |
+Yes + |
+
getDatabaseProductName() + |
+String + |
+Yes + |
+
getDatabaseProductVersion() + |
+String + |
+Yes + |
+
getDriverName() + |
+String + |
+Yes + |
+
getDriverVersion() + |
+String + |
+Yes + |
+
getDriverMajorVersion() + |
+int + |
+Yes + |
+
getDriverMinorVersion() + |
+int + |
+Yes + |
+
usesLocalFiles() + |
+boolean + |
+Yes + |
+
usesLocalFilePerTable() + |
+boolean + |
+Yes + |
+
supportsMixedCaseIdentifiers() + |
+boolean + |
+Yes + |
+
storesUpperCaseIdentifiers() + |
+boolean + |
+Yes + |
+
storesLowerCaseIdentifiers() + |
+boolean + |
+Yes + |
+
supportsMixedCaseQuotedIdentifiers() + |
+boolean + |
+Yes + |
+
storesUpperCaseQuotedIdentifiers() + |
+boolean + |
+Yes + |
+
storesLowerCaseQuotedIdentifiers() + |
+boolean + |
+Yes + |
+
storesMixedCaseQuotedIdentifiers() + |
+boolean + |
+Yes + |
+
supportsAlterTableWithAddColumn() + |
+boolean + |
+Yes + |
+
supportsAlterTableWithDropColumn() + |
+boolean + |
+Yes + |
+
supportsColumnAliasing() + |
+boolean + |
+Yes + |
+
nullPlusNonNullIsNull() + |
+boolean + |
+Yes + |
+
supportsConvert() + |
+boolean + |
+Yes + |
+
supportsConvert(int fromType, int toType) + |
+boolean + |
+Yes + |
+
supportsTableCorrelationNames() + |
+boolean + |
+Yes + |
+
supportsDifferentTableCorrelationNames() + |
+boolean + |
+Yes + |
+
supportsExpressionsInOrderBy() + |
+boolean + |
+Yes + |
+
supportsOrderByUnrelated() + |
+boolean + |
+Yes + |
+
supportsGroupBy() + |
+boolean + |
+Yes + |
+
supportsGroupByUnrelated() + |
+boolean + |
+Yes + |
+
supportsGroupByBeyondSelect() + |
+boolean + |
+Yes + |
+
supportsLikeEscapeClause() + |
+boolean + |
+Yes + |
+
supportsMultipleResultSets() + |
+boolean + |
+Yes + |
+
supportsMultipleTransactions() + |
+boolean + |
+Yes + |
+
supportsNonNullableColumns() + |
+boolean + |
+Yes + |
+
supportsMinimumSQLGrammar() + |
+boolean + |
+Yes + |
+
supportsCoreSQLGrammar() + |
+boolean + |
+Yes + |
+
supportsExtendedSQLGrammar() + |
+boolean + |
+Yes + |
+
supportsANSI92EntryLevelSQL() + |
+boolean + |
+Yes + |
+
supportsANSI92IntermediateSQL() + |
+boolean + |
+Yes + |
+
supportsANSI92FullSQL() + |
+boolean + |
+Yes + |
+
supportsIntegrityEnhancementFacility() + |
+boolean + |
+Yes + |
+
supportsOuterJoins() + |
+boolean + |
+Yes + |
+
supportsFullOuterJoins() + |
+boolean + |
+Yes + |
+
supportsLimitedOuterJoins() + |
+boolean + |
+Yes + |
+
isCatalogAtStart() + |
+boolean + |
+Yes + |
+
supportsSchemasInDataManipulation() + |
+boolean + |
+Yes + |
+
supportsSavepoints() + |
+boolean + |
+Yes + |
+
supportsResultSetHoldability(int holdability) + |
+boolean + |
+Yes + |
+
getResultSetHoldability() + |
+int + |
+Yes + |
+
getDatabaseMajorVersion() + |
+int + |
+Yes + |
+
getDatabaseMinorVersion() + |
+int + |
+Yes + |
+
getJDBCMajorVersion() + |
+int + |
+Yes + |
+
getJDBCMinorVersion() + |
+int + |
+Yes + |
+
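As an illustration, the following minimal sketch uses DatabaseMetaData to print the database version and list tables. It assumes that conn is an open connection; the schema name public is a placeholder.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MetaDataDemo {
    static void listTables(Connection conn) throws SQLException {
        DatabaseMetaData meta = conn.getMetaData();
        System.out.println(meta.getDatabaseProductName() + " " + meta.getDatabaseProductVersion());
        ResultSet rs = meta.getTables(null, "public", "%", new String[] {"TABLE"});
        while (rs.next()) {
            System.out.println(rs.getString("TABLE_NAME")); // TABLE_NAME is column 3 of the result set.
        }
        rs.close();
    }
}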
This section describes java.sql.Driver, the database driver interface.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
acceptsURL(String url) + |
+boolean + |
+Yes + |
+
connect(String url, Properties info) + |
+Connection + |
+Yes + |
+
jdbcCompliant() + |
+boolean + |
+Yes + |
+
getMajorVersion() + |
+int + |
+Yes + |
+
getMinorVersion() + |
+int + |
+Yes + |
+
This section describes java.sql.PreparedStatement, the interface for preparing statements.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
clearParameters() + |
+void + |
+Yes + |
+
execute() + |
+boolean + |
+Yes + |
+
executeQuery() + |
+ResultSet + |
+Yes + |
+
executeUpdate() + |
+int + |
+Yes + |
+
getMetaData() + |
+ResultSetMetaData + |
+Yes + |
+
setBoolean(int parameterIndex, boolean x) + |
+void + |
+Yes + |
+
setBigDecimal(int parameterIndex, BigDecimal x) + |
+void + |
+Yes + |
+
setByte(int parameterIndex, byte x) + |
+void + |
+Yes + |
+
setBytes(int parameterIndex, byte[] x) + |
+void + |
+Yes + |
+
setDate(int parameterIndex, Date x) + |
+void + |
+Yes + |
+
setDouble(int parameterIndex, double x) + |
+void + |
+Yes + |
+
setFloat(int parameterIndex, float x) + |
+void + |
+Yes + |
+
setInt(int parameterIndex, int x) + |
+void + |
+Yes + |
+
setLong(int parameterIndex, long x) + |
+void + |
+Yes + |
+
setNString(int parameterIndex, String value) + |
+void + |
+Yes + |
+
setShort(int parameterIndex, short x) + |
+void + |
+Yes + |
+
setString(int parameterIndex, String x) + |
+void + |
+Yes + |
+
addBatch() + |
+void + |
+Yes + |
+
executeBatch() + |
+int[] + |
+Yes + |
+
clearBatch() + |
+void + |
+Yes + |
+
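As an illustration, the following minimal sketch binds parameters and inserts rows in batches through PreparedStatement. It assumes that conn is an open connection and that a table t1(id int, name varchar(32)) exists; both names are placeholders.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertDemo {
    static void insertRows(Connection conn, int rowCount) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("INSERT INTO t1 VALUES (?, ?)");
        try {
            for (int i = 0; i < rowCount; i++) {
                ps.setInt(1, i);
                ps.setString(2, "name_" + i);
                ps.addBatch();                 // Queue one set of bound parameters.
            }
            int[] results = ps.executeBatch(); // Send all queued rows in one round trip.
            System.out.println("Inserted " + results.length + " rows");
        } finally {
            ps.close();
        }
    }
}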
This section describes java.sql.ResultSet, the interface for execution result sets.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
findColumn(String columnLabel) + |
+int + |
+Yes + |
+
getBigDecimal(int columnIndex) + |
+BigDecimal + |
+Yes + |
+
getBigDecimal(String columnLabel) + |
+BigDecimal + |
+Yes + |
+
getBoolean(int columnIndex) + |
+boolean + |
+Yes + |
+
getBoolean(String columnLabel) + |
+boolean + |
+Yes + |
+
getByte(int columnIndex) + |
+byte + |
+Yes + |
+
getBytes(int columnIndex) + |
+byte[] + |
+Yes + |
+
getByte(String columnLabel) + |
+byte + |
+Yes + |
+
getBytes(String columnLabel) + |
+byte[] + |
+Yes + |
+
getDate(int columnIndex) + |
+Date + |
+Yes + |
+
getDate(String columnLabel) + |
+Date + |
+Yes + |
+
getDouble(int columnIndex) + |
+double + |
+Yes + |
+
getDouble(String columnLabel) + |
+double + |
+Yes + |
+
getFloat(int columnIndex) + |
+float + |
+Yes + |
+
getFloat(String columnLabel) + |
+float + |
+Yes + |
+
getInt(int columnIndex) + |
+int + |
+Yes + |
+
getInt(String columnLabel) + |
+int + |
+Yes + |
+
getLong(int columnIndex) + |
+long + |
+Yes + |
+
getLong(String columnLabel) + |
+long + |
+Yes + |
+
getShort(int columnIndex) + |
+short + |
+Yes + |
+
getShort(String columnLabel) + |
+short + |
+Yes + |
+
getString(int columnIndex) + |
+String + |
+Yes + |
+
getString(String columnLabel) + |
+String + |
+Yes + |
+
getTime(int columnIndex) + |
+Time + |
+Yes + |
+
getTime(String columnLabel) + |
+Time + |
+Yes + |
+
getTimestamp(int columnIndex) + |
+Timestamp + |
+Yes + |
+
getTimestamp(String columnLabel) + |
+Timestamp + |
+Yes + |
+
isAfterLast() + |
+boolean + |
+Yes + |
+
isBeforeFirst() + |
+boolean + |
+Yes + |
+
isFirst() + |
+boolean + |
+Yes + |
+
next() + |
+boolean + |
+Yes + |
+
This section describes java.sql.ResultSetMetaData, the interface that provides metadata about a ResultSet object.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
getColumnCount() + |
+int + |
+Yes + |
+
getColumnName(int column) + |
+String + |
+Yes + |
+
getColumnType(int column) + |
+int + |
+Yes + |
+
getColumnTypeName(int column) + |
+String + |
+Yes + |
+
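As an illustration, the following minimal sketch combines the two interfaces: it prints the column metadata of a result set through ResultSetMetaData and then traverses the rows through ResultSet. It assumes that conn is an open connection; the table t1 is a placeholder.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class ResultSetDemo {
    static void dumpTable(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM t1");
        ResultSetMetaData md = rs.getMetaData();
        for (int i = 1; i <= md.getColumnCount(); i++) {  // Columns are 1-indexed.
            System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
        }
        while (rs.next()) {                               // Advance the cursor row by row.
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
    }
}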
This section describes java.sql.Statement, the interface for executing SQL statements.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
close() + |
+void + |
+Yes + |
+
execute(String sql) + |
+boolean + |
+Yes + |
+
executeQuery(String sql) + |
+ResultSet + |
+Yes + |
+
executeUpdate(String sql) + |
+int + |
+Yes + |
+
getConnection() + |
+Connection + |
+Yes + |
+
getResultSet() + |
+ResultSet + |
+Yes + |
+
getQueryTimeout() + |
+int + |
+Yes + |
+
getUpdateCount() + |
+int + |
+Yes + |
+
isClosed() + |
+boolean + |
+Yes + |
+
setQueryTimeout(int seconds) + |
+void + |
+Yes + |
+
setFetchSize(int rows) + |
+void + |
+Yes + |
+
cancel() + |
+void + |
+Yes + |
+
Using setFetchSize reduces the memory occupied by result sets on the client. The result set is wrapped in a cursor and fetched in segments, which increases the communication traffic between the database and the client and may affect performance.
+Database cursors are valid only within their transaction. Therefore, if setFetchSize is set, also call setAutoCommit(false) and commit the transaction on the connection to flush service data to the database.
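As an illustration, the following minimal sketch applies this pattern. It assumes that conn is an open connection; big_table is a placeholder table name.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeDemo {
    static long countRows(Connection conn) throws SQLException {
        conn.setAutoCommit(false);          // Required: the server-side cursor lives inside a transaction.
        Statement stmt = conn.createStatement();
        stmt.setFetchSize(1000);            // Pull at most 1000 rows per round trip.
        ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
        long n = 0;
        while (rs.next()) {
            n++;                            // Only one segment is held in client memory at a time.
        }
        rs.close();
        stmt.close();
        conn.commit();                      // Commit to end the transaction that owns the cursor.
        return n;
    }
}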
+This section describes javax.sql.ConnectionPoolDataSource, the interface for data source connection pools.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
getLoginTimeout() + |
+int + |
+Yes + |
+
getLogWriter() + |
+PrintWriter + |
+Yes + |
+
getPooledConnection() + |
+PooledConnection + |
+Yes + |
+
getPooledConnection(String user,String password) + |
+PooledConnection + |
+Yes + |
+
setLoginTimeout(int seconds) + |
+void + |
+Yes + |
+
setLogWriter(PrintWriter out) + |
+void + |
+Yes + |
+
This section describes javax.sql.DataSource, the interface for data sources.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
getConnection() + |
+Connection + |
+Yes + |
+
getConnection(String username,String password) + |
+Connection + |
+Yes + |
+
getLoginTimeout() + |
+int + |
+Yes + |
+
getLogWriter() + |
+PrintWriter + |
+Yes + |
+
setLoginTimeout(int seconds) + |
+void + |
+Yes + |
+
setLogWriter(PrintWriter out) + |
+void + |
+Yes + |
+
This section describes javax.sql.PooledConnection, the connection interface created by a connection pool.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
addConnectionEventListener (ConnectionEventListener listener) + |
+void + |
+Yes + |
+
close() + |
+void + |
+Yes + |
+
getConnection() + |
+Connection + |
+Yes + |
+
removeConnectionEventListener (ConnectionEventListener listener) + |
+void + |
+Yes + |
+
addStatementEventListener (StatementEventListener listener) + |
+void + |
+Yes + |
+
removeStatementEventListener (StatementEventListener listener) + |
+void + |
+Yes + |
+
This section describes javax.naming.Context, the context interface for connection configuration.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
bind(Name name, Object obj) + |
+void + |
+Yes + |
+
bind(String name, Object obj) + |
+void + |
+Yes + |
+
lookup(Name name) + |
+Object + |
+Yes + |
+
lookup(String name) + |
+Object + |
+Yes + |
+
rebind(Name name, Object obj) + |
+void + |
+Yes + |
+
rebind(String name, Object obj) + |
+void + |
+Yes + |
+
rename(Name oldName, Name newName) + |
+void + |
+Yes + |
+
rename(String oldName, String newName) + |
+void + |
+Yes + |
+
unbind(Name name) + |
+void + |
+Yes + |
+
unbind(String name) + |
+void + |
+Yes + |
+
This section describes javax.naming.spi.InitialContextFactory, the initial context factory interface.
+ +Method Name + |
+Return Type + |
+Support JDBC 4 + |
+
---|---|---|
getInitialContext(Hashtable<?,?> environment) + |
+Context + |
+Yes + |
+
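As an illustration, the following minimal sketch binds a DataSource under a JNDI name and looks it up again. The JNDI provider configuration and the name jdbc/gaussdb are placeholders; in an application server, the bind step is usually performed by the container.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupDemo {
    static Connection getConnection(DataSource ds) throws NamingException, SQLException {
        Context ctx = new InitialContext();
        ctx.rebind("jdbc/gaussdb", ds);                   // Register the data source under a logical name.
        DataSource bound = (DataSource) ctx.lookup("jdbc/gaussdb");
        return bound.getConnection();                     // Obtain a connection from the looked-up source.
    }
}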
CopyManager is an API interface class provided by the JDBC driver in GaussDB(DWS). It is used to import data to GaussDB(DWS) in batches.
+The CopyManager class is in the org.postgresql.copy package and inherits from the java.lang.Object class. The declaration of the class is as follows:
public class CopyManager
extends Object
public CopyManager(BaseConnection connection)
+throws SQLException
+Return Value + |
+Method + |
+Description + |
+throws + |
+
---|---|---|---|
CopyIn + |
+copyIn(String sql) + |
+- + |
+SQLException + |
+
long + |
+copyIn(String sql, InputStream from) + |
+Uses COPY FROM STDIN to quickly load data to tables in the database from InputStream. + |
+SQLException,IOException + |
+
long + |
+copyIn(String sql, InputStream from, int bufferSize) + |
+Uses COPY FROM STDIN to quickly load data to tables in the database from InputStream. + |
+SQLException,IOException + |
+
long + |
+copyIn(String sql, Reader from) + |
+Uses COPY FROM STDIN to quickly load data to tables in the database from Reader. + |
+SQLException,IOException + |
+
long + |
+copyIn(String sql, Reader from, int bufferSize) + |
+Uses COPY FROM STDIN to quickly load data to tables in the database from Reader. + |
+SQLException,IOException + |
+
CopyOut + |
+copyOut(String sql) + |
+- + |
+SQLException + |
+
long + |
+copyOut(String sql, OutputStream to) + |
+Sends the result set of COPY TO STDOUT from the database to the OutputStream class. + |
+SQLException,IOException + |
+
long + |
+copyOut(String sql, Writer to) + |
+Sends the result set of COPY TO STDOUT from the database to the Writer class. + |
+SQLException,IOException + |
+
Open Database Connectivity (ODBC) is a Microsoft API for accessing databases based on the X/Open CLI. The ODBC API frees applications from operating directly on databases, and enhances their portability, extensibility, and maintainability.
+Figure 1 shows the system structure of ODBC.
+ +GaussDB(DWS) supports ODBC 3.5 in the following environments.
+ +OS + |
+Platform + |
+
---|---|
SUSE Linux Enterprise Server 11 SP1/SP2/SP3/SP4 +SUSE Linux Enterprise Server 12 and SP1/SP2/SP3/SP5 + |
+x86_64 + |
+
Red Hat Enterprise Linux 6.4/6.5/6.6/6.7/6.8/6.9/7.0/7.1/7.2/7.3/7.4/7.5 + |
+x86_64 + |
+
Red Hat Enterprise Linux 7.5 + |
+ARM64 + |
+
CentOS 6.4/6.5/6.6/6.7/6.8/6.9/7.0/7.1/7.2/7.3/7.4 + |
+x86_64 + |
+
CentOS 7.6 + |
+ARM64 + |
+
EulerOS 2.0 SP2/SP3 + |
+x86_64 + |
+
EulerOS 2.0 SP8 + |
+ARM64 + |
+
NeoKylin 7.5/7.6 + |
+ARM64 + |
+
Oracle Linux R7U4 + |
+x86_64 + |
+
Windows 7 + |
+32-bit + |
+
Windows 7 + |
+64-bit + |
+
Windows Server 2008 + |
+32-bit + |
+
Windows Server 2008 + |
+64-bit + |
+
The operating systems listed above refer to the operating systems on which the ODBC program runs. They can be different from the operating systems where databases are deployed.
+The ODBC Driver Manager running on UNIX or Linux can be unixODBC or iODBC. unixODBC 2.3.0 is used here as the component for connecting to the database.
+Windows has a native ODBC Driver Manager. You can locate Data Sources (ODBC) by choosing Control Panel > Administrative Tools.
+The current database ODBC driver is based on an open source version and may be incompatible with vendor-unique data types, such as tinyint, smalldatetime, and nvarchar2.
+Obtain the dws_8.1.x_odbc_driver_for_xxx_xxx.zip package from the release package. In the Linux OS, header files (including sql.h and sqlext.h) and library (libodbc.so) are required in application development. These header files and libraries can be obtained from the unixODBC-2.3.0 installation package.
+Obtain the dws_8.1.x_odbc_driver_for_windows.zip package from the release package. In the Windows OS, the required header files and library files are system-resident.
+The ODBC driver (psqlodbcw.so) provided by GaussDB(DWS) can be used after it is configured in a data source. To configure a data source, you must configure the odbc.ini and odbcinst.ini files on the server. The two files are generated during unixODBC compilation and installation and are saved in the /usr/local/etc directory by default.
+https://sourceforge.net/projects/unixodbc/files/unixODBC/2.3.0/unixODBC-2.3.0.tar.gz/download
+tar zxvf unixODBC-2.3.0.tar.gz +cd unixODBC-2.3.0 +# Open the configure file. If it does not exist, open the configure.ac file. Find LIB_VERSION. +# Change the value of LIB_VERSION to 1:0:0 to compile a *.so.1 dynamic library with the same dependency on psqlodbcw.so. +vim configure + +./configure --enable-gui=no # To perform the compilation on a TaiShan server, add the configure parameter --build=aarch64-unknown-linux-gnu. +make +# The installation may require root permissions. +make install+
Install unixODBC. If another version of unixODBC has been installed, it will be overwritten after installation.
+Decompress the dws_8.1.x_odbc_driver_for_xxx_xxx.zip package.
+Add the following content to the end of the /usr/local/etc/odbcinst.ini file:
+[GaussMPP] +Driver64=/usr/local/lib/psqlodbcw.so +setup=/usr/local/lib/psqlodbcw.so+
For descriptions of the parameters in the odbcinst.ini file, see Table 1.
+ +Parameter + |
+Description + |
+Example + |
+
---|---|---|
[DriverName] + |
+Driver name, corresponding to Driver in DSN. + |
+[DRIVER_N] + |
+
Driver64 + |
+Path of the dynamic driver library + |
+Driver64=/xxx/odbc/lib/psqlodbcw.so + |
+
setup + |
+Driver installation path, which is the same as the dynamic library path in Driver64. + |
+setup=/xxx/odbc/lib/psqlodbcw.so + |
+
Add the following content to the end of the /usr/local/etc/odbc.ini file:
+[MPPODBC] +Driver=GaussMPP +Servername=10.10.0.13 (database server IP address) +Database=gaussdb (database name) +Username=dbadmin (database username) +Password= (database user password) +Port=8000 (database listening port) +Sslmode=allow+
For descriptions of the parameters in the odbc.ini file, see Table 2.
+ +Parameter + |
+Description + |
+Example + |
+
---|---|---|
[DSN] + |
+Data source name + |
+[MPPODBC] + |
+
Driver + |
+Driver name, corresponding to DriverName in odbcinst.ini + |
+Driver=DRIVER_N + |
+
Servername + |
+IP address of the server + |
+Servername=10.145.130.26 + |
+
Database + |
+Name of the database to connect to + |
+Database=gaussdb + |
+
Username + |
+Name of the database user + |
+Username=dbadmin + |
+
Password + |
+Password of the database user + |
+Password= + NOTE:
+After a user establishes a connection, the ODBC driver automatically clears the password stored in memory. +However, if this parameter is configured, unixODBC caches data source files, which may cause the password to remain in memory for a long time. +When connecting from an application, you are advised to pass the password through an API instead of writing it into the data source configuration file, and to clear the memory segment storing the password immediately after the connection is established. + |
+
Port + |
+Port ID of the server + |
+Port=8000 + |
+
Sslmode + |
+Whether to enable the SSL mode + |
+Sslmode=allow + |
+
UseServerSidePrepare + |
+Whether to enable the extended query protocol for the database. +The value can be 0 or 1. The default value is 1, indicating that the extended query protocol is enabled. + |
+UseServerSidePrepare=1 + |
+
UseBatchProtocol + |
+Whether to enable the batch query protocol. If it is enabled, the DML performance can be improved. The value can be 0 or 1. The default value is 1. +If this parameter is set to 0, the batch query protocol is disabled (mainly for communication with earlier database versions). +If this parameter is set to 1 and the support_batch_bind parameter is set to on, the batch query protocol is enabled. + |
+UseBatchProtocol=1 + |
+
ConnectionExtraInfo + |
+Whether to display the driver deployment path and process owner in the connection_info parameter. + |
+ConnectionExtraInfo=1 + NOTE:
+The default value is 0. If this parameter is set to 1, the ODBC driver reports the driver deployment path and process owner to the database and displays the information in the connection_info parameter (see connection_info). In this case, you can query the information from PG_STAT_ACTIVITY or PGXC_STAT_ACTIVITY. + |
+
ForExtensionConnector + |
+ETL tool performance optimization parameter. It can be used to optimize the memory and reduce the memory usage by the peer CN, to avoid system instability caused by excessive CN memory usage. +The value can be 0 or 1. The default value is 0, indicating that the optimization item is disabled. +Do not set this parameter for other services outside the database system. Otherwise, the service correctness may be affected. + |
+ForExtensionConnector=1 + |
+
KeepDisallowPremature + |
+Specifies whether the cursor in the SQL statement has the with hold attribute when the following conditions are met: UseDeclareFetch is set to 1, and the application invokes SQLNumResultCols, SQLDescribeCol, or SQLColAttribute after invoking SQLPrepare to obtain the column information of the result set. +The value can be 0 or 1. 0 indicates that the with hold attribute is supported, and 1 indicates that the with hold attribute is not supported. The default value is 0. + |
+KeepDisallowPremature=1 + NOTE:
+
|
+
The valid values of sslmode are as follows:
+ +sslmode + |
+Whether SSL Encryption Is Enabled + |
+Description + |
+
---|---|---|
disable + |
+No + |
+The SSL secure connection is not used. + |
+
allow + |
+Probably + |
+The SSL secure encrypted connection is used if required by the database server, but does not check the authenticity of the server. + |
+
prefer + |
+Probably + |
+The SSL secure encrypted connection is used as a preferred mode if supported by the database, but does not check the authenticity of the server. + |
+
require + |
+Yes + |
+The SSL secure connection must be used, but it only encrypts data and does not check the authenticity of the server. + |
+
verify-ca + |
+Yes + |
+The SSL secure connection must be used, and it checks whether the database has certificates issued by a trusted CA. + |
+
verify-full + |
+Yes + |
+The SSL secure connection must be used. In addition to the check scope specified by verify-ca, it checks whether the name of the host where the database resides is the same as that on the certificate. This mode is not supported. + |
+
To use SSL certificates for connection, decompress the certificate package contained in the GaussDB(DWS) installation package, and run the source sslcert_env.sh file in a shell environment to deploy certificates in the default location of the current session.
+Or manually declare the following environment variables and ensure that the permission for the client.key* series files is set to 600.
+export PGSSLCERT= "/YOUR/PATH/OF/client.crt" # Change the path to the absolute path of client.crt. +export PGSSLKEY= "/YOUR/PATH/OF/client.key" # Change the path to the absolute path of client.key.+
In addition, change the value of Sslmode in the data source to verify-ca.
+vim ~/.bashrc+
Add the following content to the end of the configuration file:
+export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH +export ODBCSYSINI=/usr/local/etc +export ODBCINI=/usr/local/etc/odbc.ini+
source ~/.bashrc+
Run the isql -v GaussODBC command (GaussODBC is the data source name).
++---------------------------------------+ +| Connected! | +| | +| sql-statement | +| help [tablename] | +| quit | +| | ++---------------------------------------+ +SQL>+
Run ls to check the path in the error information, ensuring that the psqlodbcw.so file exists and you have execution permissions on it.
+Run ldd to check the path in the error information. If libodbc.so.1 or other unixODBC libraries are lacking, configure unixODBC again following the procedure provided in this section, and add the lib directory under its installation directory to LD_LIBRARY_PATH. If other libraries are lacking, add the lib directory under the ODBC driver package to LD_LIBRARY_PATH.
+Check the Servername and Port configuration items in data sources.
+If Servername and Port are correctly configured, ensure the proper network adapter and port are monitored based on database server configurations in the procedure in this section.
+Check firewall settings, ensuring that the database communication port is trusted.
+Check to ensure network gatekeeper settings are proper (if any).
+The sslmode configuration item is not configured in the data sources.
+Solution:
+Set it to allow or a higher level. For more details, see Table 3.
+When verify-full is used for SSL encryption, the driver checks whether the host name in certificates is the same as the actual one.
+Solution:
+To solve this problem, use verify-ca to stop checking host names, or generate a set of CA certificates containing the actual host names.
+The executable file (such as the isql tool of unixODBC) and the database driver (psqlodbcw.so) depend on different library versions of ODBC, such as libodbc.so.1 and libodbc.so.2. You can verify this problem by using the following method:
+ldd `which isql` | grep odbc +ldd psqlodbcw.so | grep odbc+
If the suffix digits of the libodbc.so files in the two outputs differ, or the two outputs point to different physical files, this problem exists. Both isql and psqlodbcw.so load libodbc.so; if they load different physical files, two ODBC libraries with the same function list conflict with each other in a visible domain. As a result, the database driver cannot be loaded.
+Solution:
+Uninstall the unnecessary unixODBC, such as libodbc.so.2, and create a soft link with the same name and the .so.2 suffix for the remaining libodbc.so.1 library.
+For security purposes, the CN forbids access from other nodes in the cluster without authentication.
+To access the CN from inside the cluster, deploy the ODBC program on the machine where the CN is located and use 127.0.0.1 as the server address. It is recommended that the service system be deployed outside the cluster. If it is deployed inside, the database performance may be affected.
+This problem may occur if the unixODBC version in use is not the recommended one. You are advised to run the odbcinst --version command to check the unixODBC version.
+If this error occurs on an open source client, the cause may be:
+The database stores only the SHA-256 hash of the password, but the open source client supports only MD5 hashes.
+To solve this problem, you can update the user password. For details, see "ALTER USER" in the SQL Syntax. Alternatively, create a user (see "CREATE USER" in the SQL Syntax), assign the same permissions to the user, and use the new user to connect to the database.
+The database version is too early or the database is an open-source database. Use the driver of the required version to connect to the database.
+Configure the ODBC data source using the ODBC data source manager preinstalled in the Windows OS.
+Decompress GaussDB-8.1.1-Windows-Odbc.tar.gz and install psqlodbc.msi (for 32-bit OS) or psqlodbc_x64.msi (for 64-bit OS).
+Use the Driver Manager suitable for your OS to configure the data source. (Assume the Windows system drive is drive C.)
+Do not open Driver Manager by choosing Control Panel, clicking Administrative Tools, and clicking Data Sources (ODBC).
+WoW64 is the acronym for "Windows 32-bit on Windows 64-bit". C:\Windows\SysWOW64\ stores the 32-bit environment on a 64-bit system. C:\Windows\System32\ stores the environment consistent with the current OS. For technical details, see Windows technical documents.
+In the Windows OS, click Computer, and choose Control Panel. Click Administrative Tools and click Data Sources (ODBC).
+On the User DSN tab, click Add, and choose PostgreSQL Unicode for setup. (An identifier will be displayed for the 64-bit OS.)
+ +The entered username and password will be recorded in the Windows registry and you do not need to enter them again when connecting to the database next time. For security purposes, you are advised to delete sensitive information before clicking Save and enter the required username and password again when using ODBC APIs to connect to the database.
+To use SSL certificates for connection, decompress the certificate package contained in the GaussDB(DWS) installation package, and double-click the sslcert_env.bat file to deploy certificates in the default location.
+The sslcert_env.bat file ensures the purity of the certificate environment. When the %APPDATA%\postgresql directory exists, a message will be prompted asking you whether you want to remove related directories. If you want to remove related directories, back up files in the directory.
+Alternatively, you can copy the client.crt, client.key, client.key.cipher, and client.key.rand files in the certificate file folder to the manually created %APPDATA%\postgresql directory. Change client in the file names to postgres, for example, change client.key to postgres.key. Copy the cacert.pem file to the %APPDATA%\postgresql directory and change its name to root.crt.
+Change the value of SSL Mode in step 2 to verify-ca.
+ +sslmode + |
+Whether SSL Encryption Is Enabled + |
+Description + |
+
---|---|---|
disable + |
+No + |
+The SSL secure connection is not used. + |
+
allow + |
+Probably + |
+The SSL secure encrypted connection is used if required by the database server, but does not check the authenticity of the server. + |
+
prefer + |
+Probably + |
+The SSL secure encrypted connection is used as a preferred mode if supported by the database, but does not check the authenticity of the server. + |
+
require + |
+Yes + |
+The SSL secure connection must be used, but it only encrypts data and does not check the authenticity of the server. + |
+
verify-ca + |
+Yes + |
+The SSL secure connection must be used, and it checks whether the database has certificates issued by a trusted CA. + |
+
verify-full + |
+Yes + |
+The SSL secure connection must be used. In addition to the check scope specified by verify-ca, it checks whether the name of the host where the database resides is the same as that on the certificate. + NOTE:
+This mode cannot be used. + |
+
Click Test.
+ +This problem occurs because when verify-full is used for SSL encryption, the driver checks whether the host name in certificates is the same as the actual one. To solve this problem, use verify-ca to stop checking host names, or generate a set of CA certificates containing the actual host names.
+Check the Servername and Port configuration items in data sources.
+If Servername and Port are correctly configured, ensure the proper network adapter and port are monitored based on database server configurations in the procedure in this section.
+Check firewall settings, ensuring that the database communication port is trusted.
+Check to ensure network gatekeeper settings are proper (if any).
+Possible cause: The bit versions of the driver and the program are different.
+C:\Windows\SysWOW64\odbcad32.exe is the 32-bit ODBC Driver Manager.
+C:\Windows\System32\odbcad32.exe is the 64-bit ODBC Driver Manager.
+sslmode is not configured for the data source. Set this configuration item to allow or a higher level to enable SSL connections. For details about sslmode, see Table 1.
+If this error occurs on an open source client, the cause may be:
+The database stores only the SHA-256 hash of the password, but the open source client supports only MD5 hashes.
+To solve this problem, you can update the user password. For details, see "ALTER USER" in the SQL Syntax. Alternatively, create a user (see "CREATE USER" in the SQL Syntax), assign the same permissions to the user, and use the new user to connect to the database.
+The database version is too early or the database is an open-source database. Use the driver of the required version to connect to the database.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 +37 +38 +39 +40 +41 +42 +43 +44 +45 +46 +47 +48 +49 +50 +51 +52 +53 +54 +55 +56 +57 +58 +59 +60 +61 +62 +63 +64 +65 +66 +67 +68 +69 +70 +71 +72 +73 +74 +75 +76 +77 +78 +79 +80 +81 +82 +83 +84 +85 | // The following example shows how to obtain data from GaussDB(DWS) through the ODBC interface. +// DBtest.c (compile with: libodbc.so) +#include <stdlib.h> +#include <stdio.h> +#include <sqlext.h> +#ifdef WIN32 +#include <windows.h> +#endif +SQLHENV V_OD_Env; // Handle ODBC environment +SQLHSTMT V_OD_hstmt; // Handle statement +SQLHDBC V_OD_hdbc; // Handle connection +char typename[100]; +SQLINTEGER value = 100; +SQLINTEGER V_OD_erg,V_OD_buffer,V_OD_err,V_OD_id; +SQLLEN V_StrLen_or_IndPtr; +int main(int argc,char *argv[]) +{ + // 1. Apply for an environment handle. + V_OD_erg = SQLAllocHandle(SQL_HANDLE_ENV,SQL_NULL_HANDLE,&V_OD_Env); + if ((V_OD_erg != SQL_SUCCESS) && (V_OD_erg != SQL_SUCCESS_WITH_INFO)) + { + printf("Error AllocHandle\n"); + exit(0); + } + // 2. Set environment attributes (version information) + SQLSetEnvAttr(V_OD_Env, SQL_ATTR_ODBC_VERSION, (void*)SQL_OV_ODBC3, 0); + // 3. Apply for a connection handle. + V_OD_erg = SQLAllocHandle(SQL_HANDLE_DBC, V_OD_Env, &V_OD_hdbc); + if ((V_OD_erg != SQL_SUCCESS) && (V_OD_erg != SQL_SUCCESS_WITH_INFO)) + { + SQLFreeHandle(SQL_HANDLE_ENV, V_OD_Env); + exit(0); + } + // 4. Set connection attributes. + SQLSetConnectAttr(V_OD_hdbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_ON, 0); +// 5. Connect to the data source. userName and password indicate the username and password for connecting to the database. Set them as needed. +// If the username and password have been set in the odbc.ini file, you do not need to set userName or password here, retaining "" for them. However, you are not advised to do so because the username and password will be disclosed if the permission for odbc.ini is abused. + V_OD_erg = SQLConnect(V_OD_hdbc, (SQLCHAR*) "gaussdb", SQL_NTS, + (SQLCHAR*) "userName", SQL_NTS, (SQLCHAR*) "password", SQL_NTS); + if ((V_OD_erg != SQL_SUCCESS) && (V_OD_erg != SQL_SUCCESS_WITH_INFO)) + { + printf("Error SQLConnect %d\n",V_OD_erg); + SQLFreeHandle(SQL_HANDLE_ENV, V_OD_Env); + exit(0); + } + printf("Connected !\n"); + // 6. Set statement attributes + SQLSetStmtAttr(V_OD_hstmt,SQL_ATTR_QUERY_TIMEOUT,(SQLPOINTER *)3,0); + // 7. Apply for a statement handle + SQLAllocHandle(SQL_HANDLE_STMT, V_OD_hdbc, &V_OD_hstmt); + // 8. Executes an SQL statement directly + SQLExecDirect(V_OD_hstmt,"drop table IF EXISTS customer_t1",SQL_NTS); + SQLExecDirect(V_OD_hstmt,"CREATE TABLE customer_t1(c_customer_sk INTEGER, c_customer_name VARCHAR(32));",SQL_NTS); + SQLExecDirect(V_OD_hstmt,"insert into customer_t1 values(25,'li')",SQL_NTS); + // 9. Prepare for execution + SQLPrepare(V_OD_hstmt,"insert into customer_t1 values(?)",SQL_NTS); + // 10. Bind parameters + SQLBindParameter(V_OD_hstmt,1,SQL_PARAM_INPUT,SQL_C_SLONG,SQL_INTEGER,0,0, + &value,0,NULL); + // 11. Execute the ready statement + SQLExecute(V_OD_hstmt); + SQLExecDirect(V_OD_hstmt,"select id from testtable",SQL_NTS); + // 12. Obtain the attributes of a certain column in the result set + SQLColAttribute(V_OD_hstmt,1,SQL_DESC_TYPE_NAME,typename,sizeof(typename),NULL,NULL); + printf("SQLColAtrribute %s\n",typename); + // 13. Bind the result set + SQLBindCol(V_OD_hstmt,1,SQL_C_SLONG, (SQLPOINTER)&V_OD_buffer,150, + (SQLLEN *)&V_StrLen_or_IndPtr); + // 14. 
Collect data using SQLFetch + V_OD_erg=SQLFetch(V_OD_hstmt); + // 15. Obtain and return data using SQLGetData + while(V_OD_erg != SQL_NO_DATA) + { + SQLGetData(V_OD_hstmt,1,SQL_C_SLONG,(SQLPOINTER)&V_OD_id,0,NULL); + printf("SQLGetData ----ID = %d\n",V_OD_id); + V_OD_erg=SQLFetch(V_OD_hstmt); + }; + printf("Done !\n"); + // 16. Disconnect from the data source and release handles + SQLFreeHandle(SQL_HANDLE_STMT,V_OD_hstmt); + SQLDisconnect(V_OD_hdbc); + SQLFreeHandle(SQL_HANDLE_DBC,V_OD_hdbc); + SQLFreeHandle(SQL_HANDLE_ENV, V_OD_Env); + return(0); + } + |
/**********************************************************************
* Set UseBatchProtocol to 1 in the data source and set the database parameter support_batch_bind
* to on.
* The CHECK_ERROR command is used to check and print error information.
* This example is used to interactively obtain the DSN, data volume to be processed, and volume of ignored data from users, and insert required data into the test_odbc_batch_insert table.
***********************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <sql.h>
#include <sqlext.h>
#include <string.h>

#include "util.c"

void Exec(SQLHDBC hdbc, SQLCHAR* sql)
{
    SQLRETURN retcode;                  // Return status
    SQLHSTMT hstmt = SQL_NULL_HSTMT;    // Statement handle
    SQLCHAR loginfo[2048];

    // Allocate Statement Handle
    retcode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
    CHECK_ERROR(retcode, "SQLAllocHandle(SQL_HANDLE_STMT)",
                hstmt, SQL_HANDLE_STMT);

    // Prepare Statement
    retcode = SQLPrepare(hstmt, (SQLCHAR*) sql, SQL_NTS);
    sprintf((char*)loginfo, "SQLPrepare log: %s", (char*)sql);
    CHECK_ERROR(retcode, loginfo, hstmt, SQL_HANDLE_STMT);

    retcode = SQLExecute(hstmt);
    sprintf((char*)loginfo, "SQLExecute stmt log: %s", (char*)sql);
    CHECK_ERROR(retcode, loginfo, hstmt, SQL_HANDLE_STMT);

    retcode = SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
    sprintf((char*)loginfo, "SQLFreeHandle stmt log: %s", (char*)sql);
    CHECK_ERROR(retcode, loginfo, hstmt, SQL_HANDLE_STMT);
}

int main ()
{
    SQLHENV henv = SQL_NULL_HENV;
    SQLHDBC hdbc = SQL_NULL_HDBC;
    int batchCount = 1000;
    SQLLEN rowsCount = 0;
    int ignoreCount = 0;

    SQLRETURN retcode;
    SQLCHAR dsn[1024] = {'\0'};
    SQLCHAR loginfo[2048];

    // Interactively obtain data source names.
    getStr("Please input your DSN", (char*)dsn, sizeof(dsn), 'N');
    // Interactively obtain the amount of data to be batch processed.
    getInt("batchCount", &batchCount, 'N', 1);
    do
    {
        // Interactively obtain the amount of batch processing data that is not inserted into the database.
+ getInt("ignoreCount", &ignoreCount, 'N', 1); + if (ignoreCount > batchCount) + { + printf("ignoreCount(%d) should be less than batchCount(%d)\n", ignoreCount, batchCount); + } + }while(ignoreCount > batchCount); + + retcode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv); + CHECK_ERROR(retcode, "SQLAllocHandle(SQL_HANDLE_ENV)", + henv, SQL_HANDLE_ENV); + + // Set ODBC Verion + retcode = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, + (SQLPOINTER*)SQL_OV_ODBC3, 0); + CHECK_ERROR(retcode, "SQLSetEnvAttr(SQL_ATTR_ODBC_VERSION)", + henv, SQL_HANDLE_ENV); + + // Allocate Connection + retcode = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc); + CHECK_ERROR(retcode, "SQLAllocHandle(SQL_HANDLE_DBC)", + henv, SQL_HANDLE_DBC); + + // Set Login Timeout + retcode = SQLSetConnectAttr(hdbc, SQL_LOGIN_TIMEOUT, (SQLPOINTER)5, 0); + CHECK_ERROR(retcode, "SQLSetConnectAttr(SQL_LOGIN_TIMEOUT)", + hdbc, SQL_HANDLE_DBC); + + // Set Auto Commit + retcode = SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, + (SQLPOINTER)(1), 0); + CHECK_ERROR(retcode, "SQLSetConnectAttr(SQL_ATTR_AUTOCOMMIT)", + hdbc, SQL_HANDLE_DBC); + + // Connect to DSN + sprintf(loginfo, "SQLConnect(DSN:%s)", dsn); + retcode = SQLConnect(hdbc, (SQLCHAR*) dsn, SQL_NTS, + (SQLCHAR*) NULL, 0, NULL, 0); + CHECK_ERROR(retcode, loginfo, hdbc, SQL_HANDLE_DBC); + + // init table info. + Exec(hdbc, "drop table if exists test_odbc_batch_insert"); + Exec(hdbc, "create table test_odbc_batch_insert(id int primary key, col varchar2(50))"); + +// The following code constructs the data to be inserted based on the data volume entered by users: + { + SQLRETURN retcode; + SQLHSTMT hstmtinesrt = SQL_NULL_HSTMT; + int i; + SQLCHAR *sql = NULL; + SQLINTEGER *ids = NULL; + SQLCHAR *cols = NULL; + SQLLEN *bufLenIds = NULL; + SQLLEN *bufLenCols = NULL; + SQLUSMALLINT *operptr = NULL; + SQLUSMALLINT *statusptr = NULL; + SQLULEN process = 0; + +// Data is constructed by column. Each column is stored continuously. + ids = (SQLINTEGER*)malloc(sizeof(ids[0]) * batchCount); + cols = (SQLCHAR*)malloc(sizeof(cols[0]) * batchCount * 50); +// Data size in each row for a column + bufLenIds = (SQLLEN*)malloc(sizeof(bufLenIds[0]) * batchCount); + bufLenCols = (SQLLEN*)malloc(sizeof(bufLenCols[0]) * batchCount); +// Whether this row needs to be processed. The value is SQL_PARAM_IGNORE or SQL_PARAM_PROCEED. + operptr = (SQLUSMALLINT*)malloc(sizeof(operptr[0]) * batchCount); + memset(operptr, 0, sizeof(operptr[0]) * batchCount); +// Processing result of the row +// Note: In the database, a statement belongs to one transaction. Therefore, data is processed as a unit. That is, either all data is inserted successfully or all data fails to be inserted. + statusptr = (SQLUSMALLINT*)malloc(sizeof(statusptr[0]) * batchCount); + memset(statusptr, 88, sizeof(statusptr[0]) * batchCount); + + if (NULL == ids || NULL == cols || NULL == bufLenCols || NULL == bufLenIds) + { + fprintf(stderr, "FAILED:\tmalloc data memory failed\n"); + goto exit; + } + + for (int i = 0; i < batchCount; i++) + { + ids[i] = i; + sprintf(cols + 50 * i, "column test value %d", i); + bufLenIds[i] = sizeof(ids[i]); + bufLenCols[i] = strlen(cols + 50 * i); + operptr[i] = (i < ignoreCount) ? 
SQL_PARAM_IGNORE : SQL_PARAM_PROCEED; + } + + // Allocate Statement Handle + retcode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmtinesrt); + CHECK_ERROR(retcode, "SQLAllocHandle(SQL_HANDLE_STMT)", + hstmtinesrt, SQL_HANDLE_STMT); + + // Prepare Statement + sql = (SQLCHAR*)"insert into test_odbc_batch_insert values(?, ?)"; + retcode = SQLPrepare(hstmtinesrt, (SQLCHAR*) sql, SQL_NTS); + sprintf((char*)loginfo, "SQLPrepare log: %s", (char*)sql); + CHECK_ERROR(retcode, loginfo, hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLSetStmtAttr(hstmtinesrt, SQL_ATTR_PARAMSET_SIZE, (SQLPOINTER)batchCount, sizeof(batchCount)); + CHECK_ERROR(retcode, "SQLSetStmtAttr", hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLBindParameter(hstmtinesrt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER, sizeof(ids[0]), 0,&(ids[0]), 0, bufLenIds); + CHECK_ERROR(retcode, "SQLBindParameter for id", hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLBindParameter(hstmtinesrt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 50, 50, cols, 50, bufLenCols); + CHECK_ERROR(retcode, "SQLBindParameter for cols", hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLSetStmtAttr(hstmtinesrt, SQL_ATTR_PARAMS_PROCESSED_PTR, (SQLPOINTER)&process, sizeof(process)); + CHECK_ERROR(retcode, "SQLSetStmtAttr for SQL_ATTR_PARAMS_PROCESSED_PTR", hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLSetStmtAttr(hstmtinesrt, SQL_ATTR_PARAM_STATUS_PTR, (SQLPOINTER)statusptr, sizeof(statusptr[0]) * batchCount); + CHECK_ERROR(retcode, "SQLSetStmtAttr for SQL_ATTR_PARAM_STATUS_PTR", hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLSetStmtAttr(hstmtinesrt, SQL_ATTR_PARAM_OPERATION_PTR, (SQLPOINTER)operptr, sizeof(operptr[0]) * batchCount); + CHECK_ERROR(retcode, "SQLSetStmtAttr for SQL_ATTR_PARAM_OPERATION_PTR", hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLExecute(hstmtinesrt); + sprintf((char*)loginfo, "SQLExecute stmt log: %s", (char*)sql); + CHECK_ERROR(retcode, loginfo, hstmtinesrt, SQL_HANDLE_STMT); + + retcode = SQLRowCount(hstmtinesrt, &rowsCount); + CHECK_ERROR(retcode, "SQLRowCount execution", hstmtinesrt, SQL_HANDLE_STMT); + + if (rowsCount != (batchCount - ignoreCount)) + { + sprintf(loginfo, "(batchCount - ignoreCount)(%d) != rowsCount(%d)", (batchCount - ignoreCount), rowsCount); + CHECK_ERROR(SQL_ERROR, loginfo, NULL, SQL_HANDLE_STMT); + } + else + { + sprintf(loginfo, "(batchCount - ignoreCount)(%d) == rowsCount(%d)", (batchCount - ignoreCount), rowsCount); + CHECK_ERROR(SQL_SUCCESS, loginfo, NULL, SQL_HANDLE_STMT); + } + + if (rowsCount != process) + { + sprintf(loginfo, "process(%d) != rowsCount(%d)", process, rowsCount); + CHECK_ERROR(SQL_ERROR, loginfo, NULL, SQL_HANDLE_STMT); + } + else + { + sprintf(loginfo, "process(%d) == rowsCount(%d)", process, rowsCount); + CHECK_ERROR(SQL_SUCCESS, loginfo, NULL, SQL_HANDLE_STMT); + } + + for (int i = 0; i < batchCount; i++) + { + if (i < ignoreCount) + { + if (statusptr[i] != SQL_PARAM_UNUSED) + { + sprintf(loginfo, "statusptr[%d](%d) != SQL_PARAM_UNUSED", i, statusptr[i]); + CHECK_ERROR(SQL_ERROR, loginfo, NULL, SQL_HANDLE_STMT); + } + } + else if (statusptr[i] != SQL_PARAM_SUCCESS) + { + sprintf(loginfo, "statusptr[%d](%d) != SQL_PARAM_SUCCESS", i, statusptr[i]); + CHECK_ERROR(SQL_ERROR, loginfo, NULL, SQL_HANDLE_STMT); + } + } + + retcode = SQLFreeHandle(SQL_HANDLE_STMT, hstmtinesrt); + sprintf((char*)loginfo, "SQLFreeHandle hstmtinesrt"); + CHECK_ERROR(retcode, loginfo, hstmtinesrt, SQL_HANDLE_STMT); + } + + +exit: + printf ("\nComplete.\n"); + + // Connection + if (hdbc != SQL_NULL_HDBC) { + 
SQLDisconnect(hdbc); + SQLFreeHandle(SQL_HANDLE_DBC, hdbc); + } + + // Environment + if (henv != SQL_NULL_HENV) + SQLFreeHandle(SQL_HANDLE_ENV, henv); + + return 0; +} + |
The ODBC interface is a set of API functions provided to users. This chapter describes its common interfaces. For details on other interfaces, see "ODBC Programmer's Reference" at MSDN (https://msdn.microsoft.com/en-us/library/windows/desktop/ms714177(v=vs.85).aspx).
+In ODBC 3.x, SQLAllocEnv (an ODBC 2.x function) was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.
+In ODBC 3.x, SQLAllocConnect (an ODBC 2.x function) was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.
+SQLAllocHandle allocates environment, connection, or statement handles. This function is a generic function for allocating handles that replaces the deprecated ODBC 2.x functions SQLAllocEnv, SQLAllocConnect, and SQLAllocStmt.
SQLRETURN SQLAllocHandle(SQLSMALLINT HandleType,
                         SQLHANDLE InputHandle,
                         SQLHANDLE *OutputHandlePtr);
Keyword + |
+Description + |
+
---|---|
HandleType + |
+The type of handle to be allocated by SQLAllocHandle. The value must be one of the following:
+SQL_HANDLE_ENV (environment handle)
+SQL_HANDLE_DBC (connection handle)
+SQL_HANDLE_STMT (statement handle)
+SQL_HANDLE_DESC (descriptor handle)
The handle application sequence is: SQL_HANDLE_ENV > SQL_HANDLE_DBC > SQL_HANDLE_STMT. A handle applied for later depends on the handle applied for before it. + |
+
InputHandle + |
+Existing handle to use as a context for the new handle being allocated.
+If HandleType is SQL_HANDLE_ENV, this is SQL_NULL_HANDLE.
+If HandleType is SQL_HANDLE_DBC, this must be an environment handle.
+If HandleType is SQL_HANDLE_STMT or SQL_HANDLE_DESC, this must be a connection handle.
|
+
OutputHandlePtr + |
+Output parameter: Pointer to a buffer in which to return the handle to the newly allocated data structure. + |
+
+When allocating a non-environment handle, if SQLAllocHandle returns SQL_ERROR, it sets OutputHandlePtr to SQL_NULL_HENV, SQL_NULL_HDBC, SQL_NULL_HSTMT, or SQL_NULL_HDESC, depending on the handle type. The application can then call SQLGetDiagRec, with HandleType and Handle set to those of InputHandle, to obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+In ODBC 3.x, SQLAllocStmt was deprecated and replaced with SQLAllocHandle. For details, see SQLAllocHandle.
+SQLBindCol is used to associate (bind) columns in a result set to an application data buffer.
+1 +2 +3 +4 +5 +6 | SQLRETURN SQLBindCol(SQLHSTMT StatementHandle, + SQLUSMALLINT ColumnNumber, + SQLSMALLINT TargetType, + SQLPOINTER TargetValuePtr, + SQLLEN BufferLength, + SQLLEN *StrLen_or_IndPtr); + |
Keyword + |
+Description + |
+
---|---|
StatementHandle + |
+Statement handle. + |
+
ColumnNumber + |
+Number of the column to be bound. The column number starts with 0 and increases in ascending order. Column 0 is the bookmark column. If no bookmark column is set, column numbers start at 1. + |
+
TargetType + |
+The C data type in the buffer. + |
+
TargetValuePtr + |
+Output parameter: pointer to the buffer bound with the column. The SQLFetch function returns data in the buffer. If TargetValuePtr is null, StrLen_or_IndPtr is a valid value. + |
+
BufferLength + |
+Size of the TargetValuePtr buffer in bytes available to store the column data. + |
+
StrLen_or_IndPtr + |
+Output parameter: pointer to the length or indicator of the buffer. If StrLen_or_IndPtr is null, no length or indicator is used. + |
+
If SQLBindCol returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, with HandleType and Handle set to SQL_HANDLE_STMT and StatementHandle, respectively, to obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+SQLBindParameter is used to associate (bind) parameter markers in an SQL statement to a buffer.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 | SQLRETURN SQLBindParameter(SQLHSTMT StatementHandle, + SQLUSMALLINT ParameterNumber, + SQLSMALLINT InputOutputType, + SQLSMALLINT ValuetType, + SQLSMALLINT ParameterType, + SQLULEN ColumnSize, + SQLSMALLINT DecimalDigits, + SQLPOINTER ParameterValuePtr, + SQLLEN BufferLength, + SQLLEN *StrLen_or_IndPtr); + |
Keyword + |
+Description + |
+
---|---|
StatementHandle + |
+Statement handle. + |
+
ParameterNumber + |
+Parameter marker number, starting at 1 and increasing in an ascending order. + |
+
InputOutputType + |
+Input/output type of the parameter. + |
+
ValueType + |
+C data type of the parameter. + |
+
ParameterType + |
+SQL data type of the parameter. + |
+
ColumnSize + |
+Size of the column or expression of the corresponding parameter marker. + |
+
DecimalDigits + |
+Number of decimal digits of the column or expression of the corresponding parameter marker. + |
+
ParameterValuePtr + |
+Pointer to the storage parameter buffer. + |
+
BufferLength + |
+Size of the ParameterValuePtr buffer in bytes. + |
+
StrLen_or_IndPtr + |
+Pointer to the length or indicator of the buffer. If StrLen_or_IndPtr is null, no length or indicator is used. + |
+
+If SQLBindParameter returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, with HandleType and Handle set to SQL_HANDLE_STMT and StatementHandle, respectively, to obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+SQLColAttribute returns the descriptor information about a column in the result set.
+1 +2 +3 +4 +5 +6 +7 | SQLRETURN SQLColAttribute(SQLHSTMT StatementHandle, + SQLUSMALLINT ColumnNumber, + SQLUSMALLINT FieldIdentifier, + SQLPOINTER CharacterAtrriburePtr, + SQLSMALLINT BufferLength, + SQLSMALLINT *StringLengthPtr, + SQLPOINTER NumericAttributePtr); + |
Keyword + |
+Description + |
+
---|---|
StatementHandle + |
+Statement handle. + |
+
ColumnNumber + |
+Column number of the field to be queried, starting at 1 and increasing in an ascending order. + |
+
FieldIdentifier + |
+Field identifier of ColumnNumber in IRD. + |
+
CharacterAttributePtr + |
+Output parameter: pointer to the buffer that returns FieldIdentifier field value. + |
+
BufferLength + |
+Length in bytes of the *CharacterAttributePtr buffer if the field is a character string; otherwise, this parameter is ignored.
|
+
StringLengthPtr + |
+Output parameter: pointer to a buffer in which the total number of valid bytes (for string data) is stored in *CharacterAttributePtr. Ignore the value of BufferLength if the data is not a string. + |
+
NumericAttributePtr + |
+Output parameter: pointer to an integer buffer in which the value of the FieldIdentifier field in the ColumnNumber row of the IRD is returned. + |
+
If SQLColAttribute returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, set HandleType and Handle to SQL_HANDLE_STMT and StatementHandle, and obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+SQLConnect establishes a connection between a driver and a data source. After the connection, the connection handle can be used to access all information about the data source, including its application operating status, transaction processing status, and error information.
+1 +2 +3 +4 +5 +6 +7 | SQLRETURN SQLConnect(SQLHDBC ConnectionHandle, + SQLCHAR *ServerName, + SQLSMALLINT NameLength1, + SQLCHAR *UserName, + SQLSMALLINT NameLength2, + SQLCHAR *Authentication, + SQLSMALLINT NameLength3); + |
Keyword + |
+Description + |
+
---|---|
ConnectionHandle + |
+Connection handle, obtained from SQLAllocHandle. + |
+
ServerName + |
+Name of the data source to connect to. + |
+
NameLength1 + |
+Length of ServerName. + |
+
UserName + |
+User name of the database in the data source. + |
+
NameLength2 + |
+Length of UserName. + |
+
Authentication + |
+User password of the database in the data source. + |
+
NameLength3 + |
+Length of Authentication. + |
+
If SQLConnect returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, set HandleType and Handle to SQL_HANDLE_DBC and ConnectionHandle, and obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+SQLDisconnect closes the connection associated with the database connection handle.
SQLRETURN SQLDisconnect(SQLHDBC ConnectionHandle);
Keyword + |
+Description + |
+
---|---|
ConnectionHandle + |
+Connection handle, obtained from SQLAllocHandle. + |
+
If SQLDisconnect returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, set HandleType and Handle to SQL_HANDLE_DBC and ConnectionHandle, and obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+SQLExecDirect directly executes the SQL statement specified by StatementText. This is the fastest way to execute an SQL statement that is submitted only once.
+1 +2 +3 | SQLRETURN SQLExecDirect(SQLHSTMT StatementHandle, + SQLCHAR *StatementText, + SQLINTEGER TextLength); + |
Keyword + |
+Description + |
+
---|---|
StatementHandle + |
+Statement handle, obtained from SQLAllocHandle. + |
+
StatementText + |
+SQL statement to be executed. One SQL statement can be executed at a time. + |
+
TextLength + |
+Length of StatementText. + |
+
If SQLExecDirect returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, set HandleType and Handle to SQL_HANDLE_STMT and StatementHandle, and obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
+SQLExecute executes a statement prepared by SQLPrepare. The statement is executed using the current values of any application variables that were bound to its parameter markers by SQLBindParameter.
+1 | SQLRETURN SQLExecute(SQLHSTMT StatementHandle); + |
Keyword + |
+Description + |
+
---|---|
StatementHandle + |
+Statement handle to be executed. + |
+
If SQLExecute returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can then call SQLGetDiagRec, set HandleType and Handle to SQL_HANDLE_STMT and StatementHandle, and obtain the SQLSTATE value. The SQLSTATE value provides the detailed function calling information.
+See Examples.
SQLFetch advances the cursor to the next row of the result set and retrieves any bound columns.

```c
SQLRETURN SQLFetch(SQLHSTMT StatementHandle);
```

| Keyword | Description |
|---|---|
| StatementHandle | Statement handle, obtained from SQLAllocHandle. |

If SQLFetch returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec with HandleType and Handle set to SQL_HANDLE_STMT and StatementHandle to obtain the SQLSTATE value, which provides detailed information about the function call.

See Examples.
In ODBC 3.x, SQLFreeStmt (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.

In ODBC 3.x, SQLFreeConnect (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.

SQLFreeHandle releases resources associated with a specific environment, connection, or statement handle. It replaces the ODBC 2.x functions SQLFreeEnv, SQLFreeConnect, and SQLFreeStmt.

```c
SQLRETURN SQLFreeHandle(SQLSMALLINT HandleType,
                        SQLHANDLE   Handle);
```

| Keyword | Description |
|---|---|
| HandleType | The type of handle to be freed by SQLFreeHandle. The value must be one of SQL_HANDLE_ENV, SQL_HANDLE_DBC, SQL_HANDLE_STMT, or SQL_HANDLE_DESC. If HandleType is not one of these values, SQLFreeHandle returns SQL_INVALID_HANDLE. |
| Handle | The handle to be freed. |

If SQLFreeHandle returns SQL_ERROR, the handle is still valid.

See Examples.
In ODBC 3.x, SQLFreeEnv (an ODBC 2.x function) was deprecated and replaced with SQLFreeHandle. For details, see SQLFreeHandle.

SQLPrepare prepares an SQL statement for execution.

```c
SQLRETURN SQLPrepare(SQLHSTMT    StatementHandle,
                     SQLCHAR    *StatementText,
                     SQLINTEGER  TextLength);
```

| Keyword | Description |
|---|---|
| StatementHandle | Statement handle. |
| StatementText | SQL text string. |
| TextLength | Length of StatementText. |

If SQLPrepare returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec with HandleType and Handle set to SQL_HANDLE_STMT and StatementHandle to obtain the SQLSTATE value, which provides detailed information about the function call.

See Examples.
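As an illustration of the prepare-bind-execute flow, here is a minimal C sketch. It assumes a statement handle hstmt that is already allocated on an open connection and a hypothetical table test_table with one integer column; return-code checks are omitted for brevity, and SQLBindParameter is described in the standard ODBC reference rather than in this section.

```c
#include <sql.h>
#include <sqlext.h>

/* Prepare a parameterized INSERT once, then execute it twice with
   different parameter values bound through the variable 'id'. */
void insert_two_rows(SQLHSTMT hstmt)
{
    SQLINTEGER id  = 0;
    SQLLEN     ind = 0;

    SQLPrepare(hstmt,
               (SQLCHAR *)"INSERT INTO test_table VALUES (?)",
               SQL_NTS);

    /* Bind the C variable 'id' to the first parameter marker. */
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
                     0, 0, &id, 0, &ind);

    id = 1;
    SQLExecute(hstmt);   /* inserts the row (1) */
    id = 2;
    SQLExecute(hstmt);   /* inserts the row (2) */
}
```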
SQLGetData retrieves data for a single column in the current row of the result set. It can be called multiple times to retrieve variable-length data in parts.

```c
SQLRETURN SQLGetData(SQLHSTMT      StatementHandle,
                     SQLUSMALLINT  Col_or_Param_Num,
                     SQLSMALLINT   TargetType,
                     SQLPOINTER    TargetValuePtr,
                     SQLLEN        BufferLength,
                     SQLLEN       *StrLen_or_IndPtr);
```

| Keyword | Description |
|---|---|
| StatementHandle | Statement handle, obtained from SQLAllocHandle. |
| Col_or_Param_Num | Column number for which the data retrieval is requested. Column numbers start at 1 and increase in ascending order; the bookmark column is column 0. |
| TargetType | C data type of the TargetValuePtr buffer. If TargetType is SQL_ARD_TYPE, the driver uses the data type of the SQL_DESC_CONCISE_TYPE field in the ARD. If TargetType is SQL_C_DEFAULT, the driver selects a default data type according to the source SQL data type. |
| TargetValuePtr | Output parameter: pointer to the buffer in which the data is returned. |
| BufferLength | Size of the buffer pointed to by TargetValuePtr. |
| StrLen_or_IndPtr | Output parameter: pointer to the buffer in which the length or an indicator value is returned. |

If SQLGetData returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec with HandleType and Handle set to SQL_HANDLE_STMT and StatementHandle to obtain the SQLSTATE value, which provides detailed information about the function call.

See Examples.
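Because SQLGetData can be called repeatedly for the same column, long values can be read in fixed-size chunks. A minimal sketch, assuming hstmt is already positioned on a row by a previous SQLFetch call and column 1 holds character data:

```c
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

/* Read a long character column in 256-byte chunks until the driver
   reports SQL_NO_DATA; each returned chunk is null-terminated. */
void read_long_column(SQLHSTMT hstmt)
{
    SQLCHAR   buf[256];
    SQLLEN    ind;
    SQLRETURN rc;

    while ((rc = SQLGetData(hstmt, 1, SQL_C_CHAR, buf,
                            sizeof(buf), &ind)) != SQL_NO_DATA) {
        if (!SQL_SUCCEEDED(rc))
            break;                       /* call SQLGetDiagRec for details */
        fputs((char *)buf, stdout);      /* emit this chunk */
    }
}
```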
SQLGetDiagRec returns the current values of multiple fields of a diagnostic record that contains error, warning, and status information.

```c
SQLRETURN SQLGetDiagRec(SQLSMALLINT  HandleType,
                        SQLHANDLE    Handle,
                        SQLSMALLINT  RecNumber,
                        SQLCHAR     *SQLState,
                        SQLINTEGER  *NativeErrorPtr,
                        SQLCHAR     *MessageText,
                        SQLSMALLINT  BufferLength,
                        SQLSMALLINT *TextLengthPtr);
```

| Keyword | Description |
|---|---|
| HandleType | A handle-type identifier that describes the type of handle for which diagnostics are desired. The value must be one of SQL_HANDLE_ENV, SQL_HANDLE_DBC, SQL_HANDLE_STMT, or SQL_HANDLE_DESC. |
| Handle | A handle for the diagnostic data structure, of the type indicated by HandleType. If HandleType is SQL_HANDLE_ENV, Handle may be a shared or unshared environment handle. |
| RecNumber | The status record from which the application seeks information. RecNumber starts at 1. |
| SQLState | Output parameter: pointer to a buffer that receives the 5-character SQLSTATE code pertaining to RecNumber. |
| NativeErrorPtr | Output parameter: pointer to a buffer that receives the native error code. |
| MessageText | Pointer to a buffer that receives the diagnostic message text. |
| BufferLength | Length of MessageText. |
| TextLengthPtr | Output parameter: pointer to a buffer that receives the total number of bytes available to return in MessageText. If this number is greater than BufferLength, the diagnostic message text is truncated to BufferLength minus the length of the null terminator. |

SQLGetDiagRec does not post diagnostic records for itself; it uses its own return value to report the result of its execution.

If an ODBC function returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec to obtain the SQLSTATE value. The possible SQLSTATE values are listed as follows:

| SQLSTATE | Error | Description |
|---|---|---|
| HY000 | General error | An error occurred for which there is no specific SQLSTATE. |
| HY001 | Memory allocation error | The driver is unable to allocate the memory required to support execution or completion of the function. |
| HY008 | Operation canceled | SQLCancel was called to cancel the statement execution, and the function was then called again on StatementHandle. |
| HY010 | Function sequence error | The function was called out of sequence, for example before data was sent for data-at-execution parameters or columns. |
| HY013 | Memory management error | The function call could not be processed, possibly because of low memory conditions. |
| HYT01 | Connection timed out | The timeout period expired before the application was able to connect to the data source. |
| IM001 | Function not supported by the driver | The driver associated with StatementHandle does not support the function. |
See Examples.
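A minimal C sketch of how an application might drain all diagnostic records from a handle after a failed call; the handle and its type are whatever the failing function used:

```c
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

/* Print every diagnostic record attached to a handle, starting at
   RecNumber 1 and stopping when no more records are available. */
void print_diagnostics(SQLSMALLINT handle_type, SQLHANDLE handle)
{
    SQLCHAR     sqlstate[6], message[256];
    SQLINTEGER  native_error;
    SQLSMALLINT length;
    SQLSMALLINT rec = 1;

    while (SQL_SUCCEEDED(SQLGetDiagRec(handle_type, handle, rec++,
                                       sqlstate, &native_error,
                                       message, sizeof(message),
                                       &length))) {
        printf("SQLSTATE %s (native %d): %s\n",
               sqlstate, (int)native_error, message);
    }
}
```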
SQLSetConnectAttr sets connection attributes.

```c
SQLRETURN SQLSetConnectAttr(SQLHDBC    ConnectionHandle,
                            SQLINTEGER Attribute,
                            SQLPOINTER ValuePtr,
                            SQLINTEGER StringLength);
```

| Keyword | Description |
|---|---|
| ConnectionHandle | Connection handle. |
| Attribute | Attribute to set. |
| ValuePtr | Pointer to the value to set for Attribute. Depending on Attribute, this is a 32-bit unsigned integer or a pointer to a null-terminated string. For driver-specific attributes, the value may also be a signed integer. |
| StringLength | If ValuePtr points to a string or a binary buffer, this parameter is the length of *ValuePtr. If ValuePtr points to an integer, StringLength is ignored. |

If SQLSetConnectAttr returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec with HandleType and Handle set to SQL_HANDLE_DBC and ConnectionHandle to obtain the SQLSTATE value, which provides detailed information about the function call.

See Examples.
SQLSetEnvAttr sets environment attributes.

```c
SQLRETURN SQLSetEnvAttr(SQLHENV    EnvironmentHandle,
                        SQLINTEGER Attribute,
                        SQLPOINTER ValuePtr,
                        SQLINTEGER StringLength);
```

| Keyword | Description |
|---|---|
| EnvironmentHandle | Environment handle. |
| Attribute | Environment attribute to set. The value must be one of SQL_ATTR_ODBC_VERSION, SQL_ATTR_CONNECTION_POOLING, SQL_ATTR_CP_MATCH, or SQL_ATTR_OUTPUT_NTS. |
| ValuePtr | Pointer to the value to set for Attribute. Depending on Attribute, this is a 32-bit integer or a pointer to a null-terminated string. |
| StringLength | If ValuePtr points to a string or a binary buffer, this parameter is the length of *ValuePtr. If ValuePtr points to an integer, StringLength is ignored. |

If SQLSetEnvAttr returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec with HandleType and Handle set to SQL_HANDLE_ENV and EnvironmentHandle to obtain the SQLSTATE value, which provides detailed information about the function call.

See Examples.
SQLSetStmtAttr sets attributes related to a statement.

```c
SQLRETURN SQLSetStmtAttr(SQLHSTMT   StatementHandle,
                         SQLINTEGER Attribute,
                         SQLPOINTER ValuePtr,
                         SQLINTEGER StringLength);
```

| Keyword | Description |
|---|---|
| StatementHandle | Statement handle. |
| Attribute | Attribute to set. |
| ValuePtr | Pointer to the value to set for Attribute. Depending on Attribute, this is a 32-bit unsigned integer or a pointer to a null-terminated string, a binary buffer, or a driver-defined value. For driver-specific attributes, the value may also be a signed integer. |
| StringLength | If ValuePtr points to a string or a binary buffer, this parameter is the length of *ValuePtr. If ValuePtr points to an integer, StringLength is ignored. |

If SQLSetStmtAttr returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, the application can call SQLGetDiagRec with HandleType and Handle set to SQL_HANDLE_STMT and StatementHandle to obtain the SQLSTATE value, which provides detailed information about the function call.

See Examples.
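The functions above are typically combined as follows. This is a minimal sketch, not the guide's official example: the data source name gaussdb and the credentials dbuser/password are hypothetical, and error handling is reduced to a single check macro.

```c
#include <sql.h>
#include <sqlext.h>

#define CHECK(rc) if (!SQL_SUCCEEDED(rc)) goto cleanup;

/* Allocate handles, connect, run a query, fetch results, and free
   everything in reverse order. */
int run_query(void)
{
    SQLHENV    env  = SQL_NULL_HENV;
    SQLHDBC    dbc  = SQL_NULL_HDBC;
    SQLHSTMT   stmt = SQL_NULL_HSTMT;
    SQLINTEGER id;
    SQLLEN     ind;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    CHECK(SQLConnect(dbc, (SQLCHAR *)"gaussdb", SQL_NTS,
                     (SQLCHAR *)"dbuser", SQL_NTS,
                     (SQLCHAR *)"password", SQL_NTS));
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    CHECK(SQLExecDirect(stmt, (SQLCHAR *)"SELECT 1", SQL_NTS));
    while (SQL_SUCCEEDED(SQLFetch(stmt)))
        SQLGetData(stmt, 1, SQL_C_LONG, &id, 0, &ind);

cleanup:
    if (stmt) SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    if (dbc)  { SQLDisconnect(dbc); SQLFreeHandle(SQL_HANDLE_DBC, dbc); }
    if (env)  SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```

On failure, calling SQLGetDiagRec on the relevant handle (as shown earlier) retrieves the SQLSTATE and message text.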
In the big data field, the mainstream file format is ORC, which GaussDB(DWS) supports. You can use Hive to export data to an ORC file and then use a read-only foreign table to query and analyze the data in that file. To do so, the data types supported by the ORC file format must be mapped to the data types supported by GaussDB(DWS); see Table 1. Similarly, GaussDB(DWS) exports data through a write-only foreign table and stores it in ORC format, and reading the ORC file content with Hive also requires matching data types; Table 2 shows this mapping.
| Type | Type Supported by GaussDB(DWS) Foreign Tables | Hive Table Type |
|---|---|---|
| 1-byte integer | TINYINT (not recommended) | TINYINT |
| 1-byte integer | SMALLINT (recommended) | TINYINT |
| 2-byte integer | SMALLINT | SMALLINT |
| 4-byte integer | INTEGER | INT |
| 8-byte integer | BIGINT | BIGINT |
| Single-precision floating point number | FLOAT4 (REAL) | FLOAT |
| Double-precision floating point number | FLOAT8 (DOUBLE PRECISION) | DOUBLE |
| Scientific data type | DECIMAL[p (,s)] (maximum precision 38) | DECIMAL (maximum precision 38) (Hive 0.11) |
| Date type | DATE | DATE |
| Time type | TIMESTAMP | TIMESTAMP |
| Boolean type | BOOLEAN | BOOLEAN |
| CHAR type | CHAR(n) | CHAR(n) |
| VARCHAR type | VARCHAR(n) | VARCHAR(n) |
| String (large text object) | TEXT (CLOB) | STRING |
| Type | Type Supported by GaussDB(DWS) Internal Tables (Data Source Table) | Type Supported by GaussDB(DWS) Write-only Foreign Tables | Hive Table Type |
|---|---|---|---|
| 1-byte integer | TINYINT | TINYINT (not recommended) | SMALLINT |
| 1-byte integer | TINYINT | SMALLINT (recommended) | SMALLINT |
| 2-byte integer | SMALLINT | SMALLINT | SMALLINT |
| 4-byte integer | INTEGER, BINARY_INTEGER | INTEGER | INT |
| 8-byte integer | BIGINT | BIGINT | BIGINT |
| Single-precision floating point number | FLOAT4, REAL | FLOAT4, REAL | FLOAT |
| Double-precision floating point number | DOUBLE PRECISION, FLOAT8, BINARY_DOUBLE | DOUBLE PRECISION, FLOAT8, BINARY_DOUBLE | DOUBLE |
| Scientific data type | DECIMAL, NUMERIC | DECIMAL[p (,s)] (maximum precision 38) | precision ≤ 38: DECIMAL; precision > 38: STRING |
| Date type | DATE | TIMESTAMP[(p)] [WITHOUT TIME ZONE] | TIMESTAMP |
| Time type | TIME [(p)] [WITHOUT TIME ZONE], TIME [(p)] [WITH TIME ZONE] | TEXT | STRING |
| Time type | TIMESTAMP[(p)] [WITHOUT TIME ZONE], TIMESTAMP[(p)] [WITH TIME ZONE], SMALLDATETIME | TIMESTAMP[(p)] [WITHOUT TIME ZONE] | TIMESTAMP |
| Time type | INTERVAL DAY (l) TO SECOND (p), INTERVAL [FIELDS] [(p)] | VARCHAR(n) | VARCHAR(n) |
| Boolean type | BOOLEAN | BOOLEAN | BOOLEAN |
| CHAR type | CHAR(n), CHARACTER(n), NCHAR(n) | CHAR(n), CHARACTER(n), NCHAR(n) | n ≤ 255: CHAR(n); n > 255: STRING |
| VARCHAR type | VARCHAR(n), CHARACTER VARYING(n), VARCHAR2(n) | VARCHAR(n) | n ≤ 65535: VARCHAR(n); n > 65535: STRING |
| VARCHAR type | NVARCHAR2(n) | TEXT | STRING |
| String (large text object) | TEXT, CLOB | TEXT, CLOB | STRING |
| Monetary type | MONEY | NUMERIC | BIGINT |
The syntax for exporting data through a foreign table is as follows:

```sql
INSERT INTO [foreign_table_name] SELECT * FROM [source_table_name];
```

For example, export all data from the source table product_info_output through the write-only foreign table product_info_output_ext:

```sql
INSERT INTO product_info_output_ext SELECT * FROM product_info_output;
```

If information similar to the following is displayed, the data has been exported:

```
INSERT 0 10
```

You can also export part of the data by adding a filter condition:

```sql
INSERT INTO product_info_output_ext SELECT * FROM product_info_output WHERE product_price > 500;
```

Data of a special type, such as RAW, is exported as binary content that the import tool cannot parse. Use the RAWTOHEX() function to convert such data to hexadecimal format before export.
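A minimal sketch of that conversion, assuming a source table raw_src with a RAW column raw_data and a matching write-only foreign table raw_out_ext whose corresponding column is a text type (all names here are hypothetical):

```sql
-- Convert the RAW column to its hexadecimal text form during export.
INSERT INTO raw_out_ext
SELECT id, RAWTOHEX(raw_data)
FROM raw_src;
```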
For details about the data types that can be exported to MRS, see Table 2. For details about HDFS data export or MRS configuration, see the MapReduce Service User Guide. For details about creating a foreign server on HDFS, see Manually Creating a Foreign Server.

After the operations in Creating a Foreign Server are complete, create an HDFS write-only foreign table in the GaussDB(DWS) database to access data stored in HDFS. The foreign table is write-only and can be used only for data export.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 | CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name +( [ { column_name type_name + [ { [CONSTRAINT constraint_name] NULL | + [CONSTRAINT constraint_name] NOT NULL | + column_constraint [...]} ] | + table_constraint [, ...]} [, ...] ] ) + SERVER dfs_server + OPTIONS ( { option_name ' value ' } [, ...] ) + [ {WRITE ONLY }] + DISTRIBUTE BY {ROUNDROBIN | REPLICATION} + [ PARTITION BY ( column_name ) [ AUTOMAPPED ] ] ; + |
For example, when creating a foreign table product_info_ext_obs, configure the parameters in the syntax as follows.
+Specifies the name of the foreign table.
+Multiple columns are separate by commas (,).
+Specifies the foreign server name of the foreign table. This server must exist. The foreign table connects to OBS/HDFS to read data through the foreign server.
+Enter the name of the foreign server created in Creating a Foreign Server.
+These parameters are associated with the foreign table. The key parameters are as follows:
- filesize: (optional) specifies the file size of a write-only foreign table. If this parameter is not specified, the file size configured in the distributed file system is used by default. This option is available only for write-only foreign tables. Value range: an integer from 1 to 1024. The filesize parameter is valid only for write-only HDFS foreign tables in ORC format.
- compression: (optional) specifies the compression mode of ORC files. This option is available only for write-only foreign tables. Value range: zlib, snappy, and lz4. The default value is snappy.
- version: (optional) specifies the ORC version number. This option is available only for write-only foreign tables. Value range: only 0.12 is supported, which is also the default value.
- dataencoding: (optional) specifies the data encoding of the data table to be exported when the database encoding differs from the encoding of the data table. For example, the database encoding is Latin-1, but the data in the exported table is encoded in UTF-8. If this parameter is not specified, the database encoding is used by default. Value range: data encoding types supported by the database encoding. The dataencoding parameter is valid only for write-only HDFS foreign tables in ORC format.

Other parameters are optional and can be configured as required; they are not needed in this example. For details, see CREATE FOREIGN TABLE (SQL on Hadoop or OBS).

Based on the preceding settings, the command for creating the foreign table is as follows:

```sql
DROP FOREIGN TABLE IF EXISTS product_info_ext_obs;

-- Create an HDFS write-only foreign table without partition columns.
-- The foreign server associated with the table is hdfs_server, the files
-- are in ORC format, and the data storage path is
-- /user/hive/warehouse/product_info_orc/.
CREATE FOREIGN TABLE product_info_ext_obs
(
    product_price                integer       ,
    product_id                   char(30)      ,
    product_time                 date          ,
    product_level                char(10)      ,
    product_name                 varchar(200)  ,
    product_type1                varchar(20)   ,
    product_type2                char(10)      ,
    product_monthly_sales_cnt    integer       ,
    product_comment_time         date          ,
    product_comment_num          integer       ,
    product_comment_content      varchar(200)
)
SERVER hdfs_server
OPTIONS (
    format 'orc',
    foldername '/user/hive/warehouse/product_info_orc/',
    compression 'snappy',
    version '0.12'
) WRITE ONLY;
```
The syntax for exporting data through the foreign table is as follows:

```sql
INSERT INTO [foreign_table_name] SELECT * FROM [source_table_name];
```

For example, export all data from the source table product_info_output:

```sql
INSERT INTO product_info_output_ext SELECT * FROM product_info_output;
```

If information similar to the following is displayed, the data has been exported:

```
INSERT 0 10
```

You can also export part of the data by adding a filter condition:

```sql
INSERT INTO product_info_output_ext SELECT * FROM product_info_output WHERE product_price > 500;
```

Data of a special type, such as RAW, is exported as binary content that the import tool cannot parse. Use the RAWTOHEX() function to convert such data to hexadecimal format before export.
GaussDB(DWS) provides flexible methods for importing data from different sources. The features of each method are listed in Table 1; select one as required. You are advised to use GaussDB(DWS) together with Cloud Data Migration (CDM) and Data Lake Factory (DLF): CDM handles batch data migration, and DLF orchestrates and schedules the entire ETL process in a visualized development environment.
| Data Migration Mode | Supported Data Source/Database | Description | Advantage |
|---|---|---|---|
| Importing data from OBS in parallel | TEXT, CSV, ORC, and CarbonData data file formats | You can import data in TEXT, CSV, ORC, or CarbonData format from OBS to GaussDB(DWS) for query, and can remotely read data from OBS. This method is recommended for GaussDB(DWS). | High performance and flexible scale-out. |
| Using GDS to import data in parallel | TEXT and CSV data file formats | You can use the GDS tool provided by GaussDB(DWS) to import data from a remote server to GaussDB(DWS) in parallel. Multiple DNs are used for the import, which is efficient and suitable for importing a large amount of data. | High performance and flexible scale-out. |
| Importing data from MRS (HDFS) | - | You can configure a GaussDB(DWS) cluster to connect to an MRS cluster and read data from the HDFS of MRS into GaussDB(DWS). NOTE: This import method is not supported currently. | High performance and flexible scale-out. |
| Importing data from another GaussDB(DWS) cluster | - | Data communication between two GaussDB(DWS) clusters is supported. You can use foreign tables to access and import data across GaussDB(DWS) clusters. | Suitable for data synchronization between multiple GaussDB(DWS) clusters. |
| Running the \copy command of gsql | Local files | Unlike the SQL COPY statement, the \copy command can read data from or write data to only local files on a gsql client. | Easy to operate; suitable for importing a small amount of data. |
| Using the CopyManager interface of the JDBC driver | Other files or databases | When you develop applications in Java, the CopyManager interface of the JDBC driver is invoked to write data from files or other databases to GaussDB(DWS). | Data is written directly from other databases to GaussDB(DWS); service data does not need to be stored in files. |
| Using CDM to migrate data | - | CDM can migrate various types of data in batches between homogeneous and heterogeneous data sources. CDM migrates data to GaussDB(DWS) using the COPY statement or the GDS parallel import method. | Supports abundant data sources and is easy to operate. |
| Using third-party ETL tools | Databases, NoSQL, file systems, and big data platforms | For details, see the documentation of the third-party ETL tool. GaussDB(DWS) provides the DSC tool to migrate Teradata/Oracle scripts to GaussDB(DWS). | Supports abundant data sources and provides powerful data conversion through OBS. |
| Using gs_dump and gs_dumpall to export metadata | - | gs_dump exports a single database or its objects; gs_dumpall exports all databases or global objects in a cluster. To migrate database information, you can use a tool to import the exported metadata into a target database. | Suitable for metadata migration. |
| Using gs_restore to import data | SQL, TMP, and TAR file formats | During database migration, you can use the gs_restore tool to import files exported by gs_dump into a GaussDB(DWS) cluster, restoring metadata such as table definitions and database object definitions. | Suitable for metadata migration. |
The Object Storage Service (OBS) is an object-based cloud storage service featuring secure, reliable, and cost-effective storage. OBS provides large storage capacity for you to store files of any type.

GaussDB(DWS), a data warehouse service, uses OBS as a platform for exchanging cluster data and external data, satisfying the requirements for secure, reliable, and cost-effective storage.

You can import data in TXT, CSV, ORC, or CarbonData format from OBS to GaussDB(DWS) for query, and can remotely read data from OBS. You are advised to import frequently accessed hot data into GaussDB(DWS) to facilitate queries, and to store cold data in OBS for remote reads to reduce costs.

Currently, data is imported using OBS foreign tables. During data migration and Extract-Transform-Load (ETL), a massive volume of data needs to be imported to GaussDB(DWS) in parallel, and the common import mode is time-consuming. When you import data in parallel using OBS foreign tables, the source data files to be imported are identified based on the import URL and the data formats specified in the tables, and data is imported in parallel through the DNs, which improves the overall import performance.

Disadvantage: you need to create OBS foreign tables and store the data to be imported on OBS.

Application scenario: a large volume of local data is imported concurrently on many DNs.

Generally, objects are managed as files. However, OBS has no file-system concepts such as files and folders. To let users manage data easily, OBS allows them to simulate folders by adding a slash (/) to the object name, for example, tpcds1000/stock.csv. In this name, tpcds1000 is regarded as the folder name and stock.csv as the file name; the key (object name) is still tpcds1000/stock.csv, and the content of the object is the content of the stock.csv file.
Figure 1 shows how data is imported from OBS. The CN plans the data import tasks and delivers them to the DNs file by file.

The delivery method is as follows. In Figure 1, there are four DNs (DN0 to DN3), and OBS stores six files numbered t1.data.0 to t1.data.5. The files are delivered as follows:

t1.data.0 -> DN0
t1.data.1 -> DN1
t1.data.2 -> DN2
t1.data.3 -> DN3
t1.data.4 -> DN0
t1.data.5 -> DN1

DN0 and DN1 each receive two files; each of the other DNs receives one file.

Import performance is best when one OBS file is delivered to each DN and all files have the same size. To improve the performance of loading data from OBS, split the data file into multiple files of as even a size as possible before storing it to OBS; the recommended number of files is an integer multiple of the DN quantity, as in the split example below.
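For example, on a Linux host you might pre-split a large source file into six line-aligned pieces for a hypothetical six-DN cluster before uploading; the file name and DN count here are assumptions, and split -n is GNU coreutils syntax:

```shell
# Split t1.data into six pieces of roughly equal size without breaking
# lines, producing t1.data.00 ... t1.data.05 for upload to OBS.
split -n l/6 -d t1.data t1.data.
```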
| Procedure | Description | Subtask |
|---|---|---|
| Upload data to OBS. | Plan the storage path on the OBS server and upload the data files. For details, see Uploading Data to OBS. | - |
| Create an OBS foreign table. | Create a foreign table to identify the source data files on the OBS server. The OBS foreign table stores data source information, such as its bucket name, object name, file format, storage location, encoding format, and delimiter. For details, see Creating an OBS Foreign Table. | - |
| Import data. | After creating the foreign table, run the INSERT statement to efficiently import data to the target tables. For details, see Importing Data. | - |
| Handle import errors. | If errors occur during data import, handle them based on the displayed error information described in Handling Import Errors to ensure data integrity. | - |
| Improve query efficiency. | After data is imported, run the ANALYZE statement to generate table statistics. The statistics are stored in the PG_STATISTIC system catalog and help the planner generate efficient query execution plans. | - |
In this example, OBS data is imported into GaussDB(DWS) databases. When users who have registered with the cloud platform access OBS through clients, API calls, or SDKs, access keys (AK/SK) are required for user authentication. Therefore, if you want to connect to the GaussDB(DWS) database through a client or a JDBC/ODBC application to access OBS, obtain the access keys (AK and SK) first.

Before creating an AK/SK pair, ensure that your account (used to log in to the management console) has passed real-name authentication.

To create an AK/SK pair on the management console, perform the following steps. If an access key already exists in the access key list, you can use it directly; however, only the Access Key ID is visible in the list, and the key file containing both the AK and SK can be downloaded only when the key is created. If you do not have the key file, click Create Access Key to create one.

If your AK/SK pair is used abnormally (for example, it is lost or leaked) or will no longer be used, delete it in the access key list or contact the administrator to reset it.

When deleting an access key, you need to enter the login password and either an email or mobile verification code.

Deleted AK/SK pairs cannot be restored.
Before importing data from OBS to a cluster, prepare the source data files and upload them to OBS. If the data files are already stored on OBS, you only need to complete steps 2 to 3 in Uploading Data to OBS.

Prepare the source data files to be uploaded to OBS. GaussDB(DWS) supports only source data files in CSV, TXT, ORC, or CarbonData format. If user data cannot be saved in CSV format, store the data as any text file.

According to How Data Is Imported, when the source data file contains a large volume of data, split the file evenly into multiple files before storing it to OBS. Import performance is better when the number of files is an integer multiple of the DN quantity.

Assume that you have stored the following three CSV files in OBS:

product_info.0, which contains the following data:

```
100,XHDK-A-1293-#fJ3,2017-09-01,A,2017 Autumn New Shirt Women,red,M,328,2017-09-04,715,good!
205,KDKE-B-9947-#kL5,2017-09-01,A,2017 Autumn New Knitwear Women,pink,L,584,2017-09-05,406,very good!
300,JODL-X-1937-#pV7,2017-09-01,A,2017 autumn new T-shirt men,red,XL,1245,2017-09-03,502,Bad.
310,QQPX-R-3956-#aD8,2017-09-02,B,2017 autumn new jacket women,red,L,411,2017-09-05,436,It's really super nice.
150,ABEF-C-1820-#mC6,2017-09-03,B,2017 Autumn New Jeans Women,blue,M,1223,2017-09-06,1200,The seller's packaging is exquisite.
```

product_info.1, which contains the following data:

```
200,BCQP-E-2365-#qE4,2017-09-04,B,2017 autumn new casual pants men,black,L,997,2017-09-10,301,The clothes are of good quality.
250,EABE-D-1476-#oB1,2017-09-10,A,2017 autumn new dress women,black,S,841,2017-09-15,299,Follow the store for a long time.
108,CDXK-F-1527-#pL2,2017-09-11,A,2017 autumn new dress women,red,M,85,2017-09-14,22,It's really amazing to buy.
450,MMCE-H-4728-#nP9,2017-09-11,A,2017 autumn new jacket women,white,M,114,2017-09-14,22,Open the package and the clothes have no odor.
260,OCDA-G-2817-#bD3,2017-09-12,B,2017 autumn new woolen coat women,red,L,2004,2017-09-15,826,Very favorite clothes.
```

product_info.2, which contains the following data:

```
980,"ZKDS-J",2017-09-13,"B","2017 Women's Cotton Clothing","red","M",112,,,
98,"FKQB-I",2017-09-15,"B","2017 new shoes men","red","M",4345,2017-09-18,5473
50,"DMQY-K",2017-09-21,"A","2017 pants men","red","37",28,2017-09-25,58,"good","good","good"
80,"GKLW-l",2017-09-22,"A","2017 Jeans Men","red","39",58,2017-09-25,72,"Very comfortable."
30,"HWEC-L",2017-09-23,"A","2017 shoes women","red","M",403,2017-09-26,607,"good!"
40,"IQPD-M",2017-09-24,"B","2017 new pants Women","red","M",35,2017-09-27,52,"very good."
50,"LPEC-N",2017-09-25,"B","2017 dress Women","red","M",29,2017-09-28,47,"not good at all."
60,"NQAB-O",2017-09-26,"B","2017 jacket women","red","S",69,2017-09-29,70,"It's beautiful."
70,"HWNB-P",2017-09-27,"B","2017 jacket women","red","L",30,2017-09-30,55,"I like it so much"
80,"JKHU-Q",2017-09-29,"C","2017 T-shirt","red","M",90,2017-10-02,82,"very good."
```
Store the source data files to be imported in OBS buckets in advance:

1. Log in to the OBS management console: click Service List and choose Object Storage Service.
2. Create buckets. For details, see "OBS Console Operation Guide > Managing Buckets > Creating a Bucket" in the Object Storage Service User Guide. For example, create two buckets named mybucket and mybucket02.
3. Create folders. For details, see "OBS Console Operation Guide > Managing Objects > Creating a Folder" in the Object Storage Service User Guide. For example, create a folder named input_data in each bucket.
4. Upload the files. For details, see "OBS Console Operation Guide > Managing Objects > Uploading a File" in the Object Storage Service User Guide. For example, upload the following two files to input_data in mybucket:

```
product_info.0
product_info.1
```

and upload the following file to input_data in mybucket02:

```
product_info.2
```

After the source data files are uploaded to an OBS bucket, a globally unique access path is generated. The OBS path of the source data files is the value of the location parameter used for creating a foreign table. The OBS path in the location parameter is in the format obs://bucket_name/file_path/. For example, the OBS paths are as follows:

```
obs://mybucket/input_data/product_info.0
obs://mybucket/input_data/product_info.1
obs://mybucket02/input_data/product_info.2
```
When importing data from OBS to a cluster, the user must have the read permission for the OBS buckets where the source data files are located. You can configure the ACL of the OBS buckets to grant the read permission to a specific user. For details, see "OBS Console Operation Guide > Permission Control > Configuring a Bucket ACL" in the Object Storage Service User Guide.

Create a foreign table in the GaussDB(DWS) database. Parameters are described as follows:

- The values of access_key and secret_access_key are examples only.
- When exporting data from OBS, this parameter cannot be set to true; use the default value false.

Based on the preceding settings, the foreign table is created using the following statements:
```sql
DROP FOREIGN TABLE product_info_ext;

CREATE FOREIGN TABLE product_info_ext
(
    product_price                integer        not null,
    product_id                   char(30)       not null,
    product_time                 date           ,
    product_level                char(10)       ,
    product_name                 varchar(200)   ,
    product_type1                varchar(20)    ,
    product_type2                char(10)       ,
    product_monthly_sales_cnt    integer        ,
    product_comment_time         date           ,
    product_comment_num          integer        ,
    product_comment_content      varchar(200)
)
SERVER gsmpp_server
OPTIONS(
    LOCATION 'obs://mybucket/input_data/product_info | obs://mybucket02/input_data/product_info',
    FORMAT 'CSV',
    DELIMITER ',',
    ENCODING 'utf8',
    HEADER 'false',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced',
    FILL_MISSING_FIELDS 'true',
    IGNORE_EXTRA_DATA 'true'
)
READ ONLY
LOG INTO product_info_err
PER NODE REJECT LIMIT 'unlimited';
```

If the following information is displayed, the foreign table has been created:

```
CREATE FOREIGN TABLE
```
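Because the foreign table is READ ONLY, you can query it directly to confirm that the OBS files are reachable and parsed as expected before loading them; a minimal check:

```sql
-- Preview a few rows read remotely from OBS through the foreign table.
SELECT * FROM product_info_ext LIMIT 5;
-- Confirm the total record count matches the source files.
SELECT count(*) FROM product_info_ext;
```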
Before importing data, you are advised to optimize your design and deployment based on the following best practices to maximize system resource utilization and improve data import performance.

The structure of the target table must be consistent with that of the fields in the source data file; that is, the number of fields and their data types must match. In addition, the structure of the target table must be the same as that of the foreign table; the field names can differ.

The import syntax is:

```sql
INSERT INTO [target_table_name] SELECT * FROM [foreign_table_name];
```

If information similar to the following is displayed, the data has been imported:

```
INSERT 0 20
```

For example, create a table named product_info:

```sql
DROP TABLE IF EXISTS product_info;
CREATE TABLE product_info
(
    product_price                integer        not null,
    product_id                   char(30)       not null,
    product_time                 date           ,
    product_level                char(10)       ,
    product_name                 varchar(200)   ,
    product_type1                varchar(20)    ,
    product_type2                char(10)       ,
    product_monthly_sales_cnt    integer        ,
    product_comment_time         date           ,
    product_comment_num          integer        ,
    product_comment_content      varchar(200)
)
WITH (
    orientation = column,
    compression = middle
)
DISTRIBUTE BY HASH (product_id);
```
Run the following statement to import data from the product_info_ext foreign table into the product_info table:

```sql
INSERT INTO product_info SELECT * FROM product_info_ext;
```
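After the import, the "Improve query efficiency" step described earlier comes down to a single statement for this example's table:

```sql
-- Generate table statistics so the planner can build efficient plans.
ANALYZE product_info;
```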
Handle errors that occurred during data import. Such errors are divided into data format errors and non-data format errors.

When creating a foreign table, specify LOG INTO error_table_name. Data format errors occurring during the import are written into the specified table. You can run the following SQL statement to query error details:

```sql
SELECT * FROM error_table_name;
```

| Column | Type | Description |
|---|---|---|
| nodeid | integer | ID of the node where the error is reported |
| begintime | timestamp with time zone | Time when the data format error is reported |
| filename | character varying | Name of the source data file where the data format error occurs |
| rownum | bigint | Number of the row where the error occurs in the source data file |
| rawrecord | text | Raw record of the data format error in the source data file |
| detail | text | Error details |
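A sketch of a typical follow-up query, assuming the error table product_info_err specified with LOG INTO in the foreign-table example above:

```sql
-- List the most recent load errors with enough context to locate
-- the offending rows in the source files.
SELECT begintime, filename, rownum, rawrecord, detail
FROM product_info_err
ORDER BY begintime DESC
LIMIT 20;
```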
A non-data format error causes the entire data import task to fail. You can locate and troubleshoot a non-data format error based on the error message displayed during the import.

Troubleshoot data import errors based on the error information obtained and the description in the following table.

| Error Information | Cause | Solution |
|---|---|---|
| missing data for column "r_reason_desc" | The number of columns in the source data file is less than that in the foreign table. | If the source file is correct, set fill_missing_fields to 'true' in the foreign table options so that missing trailing columns are filled with NULL; otherwise, add the missing columns to the source data file. |
| extra data after last expected column | The number of columns in the source data file is greater than that in the foreign table. | If the source file is correct, set ignore_extra_data to 'true' in the foreign table options so that extra columns are ignored; otherwise, remove the extra columns from the source data file. |
| invalid input syntax for type numeric: "a" | The data type is incorrect. | In the source data file, change the data type of the columns to be imported. If this error information is displayed, change the data type to numeric. |
| null value in column "staff_id" violates not-null constraint | The not-null constraint is violated. | In the source data file, add values to the specified columns. If this error information is displayed, add values to the staff_id column. |
| duplicate key value violates unique constraint "reg_id_pk" | The unique constraint is violated. | Remove the duplicate rows from the source data file and import the data again. |
| value too long for type character varying(16) | The column length exceeds the upper limit. | In the source data file, change the column length. If this error information is displayed, reduce the column length to no more than 16 bytes (VARCHAR2). |
INSERT and COPY statements are executed serially and suit small data volumes. To import a large volume of data to GaussDB(DWS), you can use GDS to import data in parallel through a foreign table.

In the current GDS version, you can also import data to databases from pipe files.

You can import data in parallel from the common file system (excluding HDFS) of a server to GaussDB(DWS). The data files to be imported are specified based on the import policy and data formats set in a foreign table, and data is imported in parallel through multiple DNs from the source data files to the database, which improves the overall import performance. Figure 1 shows an example.

The number of GDS processes cannot exceed the number of DNs. If multiple GDS processes are connected to one DN, some of the processes may become abnormal.

GDS determines the number of threads based on the number of concurrent import transactions; even if multi-thread import is configured before GDS startup, the import of a single transaction will not be accelerated. By default, an INSERT statement is one import transaction.

Multi-thread concurrent import works as follows: table data is split into multiple data files, and multiple foreign tables import the files at the same time. Ensure that each data file is read by only one foreign table.

| Process | Description |
|---|---|
| Prepare source data. | Prepare the source data files to be imported to the database and upload them to the data server. For details, see Preparing Source Data. |
| Start GDS. | Install, configure, and start GDS on the data server. For details, see Installing, Configuring, and Starting GDS. |
| Create a foreign table. | A foreign table is used to identify source files. It stores information about a source data file, such as its location, format, destination location, encoding format, and data delimiter. For details, see Creating a GDS Foreign Table. |
| Import data. | After creating the foreign table, run the INSERT statement to quickly import data to the target table. For details, see Importing Data. |
| Handle import errors. | If errors occur during parallel data import, handle them based on the error information to ensure data integrity. For details, see Handling Import Errors. |
| Improve query efficiency. | After data is imported, run the ANALYZE statement to generate table statistics. The statistics are stored in the PG_STATISTIC system catalog and help the planner generate efficient query execution plans. |
| Stop GDS. | After data is imported, log in to each data server and stop GDS. For details, see Stopping GDS. |
Generally, the data to be imported has already been uploaded to the data server. In this case, you only need to check the communication between the data server and GaussDB(DWS), and record the data storage directory on the data server before the import.

If the data has not been uploaded, perform the following operations to upload it. For example, create the directory /input_data for the source data files on the data server:

```shell
mkdir -p /input_data
```

GDS parallel import supports source data only in CSV or TEXT format.
GaussDB(DWS) uses GDS to allocate the source data for parallel data import. Deploy GDS on the data server.

If a large volume of data is stored on multiple data servers, install, configure, and start GDS on each server so that data on all the servers can be imported in parallel. The procedure is the same on each data server; this section describes it for one data server.

Use the latest version of GDS. After the database is upgraded, download the latest version of GaussDB(DWS) GDS as instructed in Procedure. When an import or export starts, GaussDB(DWS) checks the GDS version; if the versions do not match, an error message is displayed and the operation is terminated.

To obtain the version number of GDS, run the following command in the GDS decompression directory:

```shell
gds -V
```

To view the database version, run the following SQL statement after connecting to the database:

```sql
SELECT version();
```
Create the GDS installation directory:

```shell
mkdir -p /opt/bin/dws
```

Upload the GDS package to the directory created in the previous step. Using the SUSE Linux package dws_client_8.1.x_suse_x64.zip as an example, decompress it:

```shell
cd /opt/bin/dws
unzip dws_client_8.1.x_suse_x64.zip
```

Create the GDS user and the user group to which the user belongs:

```shell
groupadd gdsgrp
useradd -g gdsgrp gds_user
```

Change the owner of the GDS directory and the source data file directory to the GDS user:

```shell
chown -R gds_user:gdsgrp /opt/bin/dws/gds
chown -R gds_user:gdsgrp /input_data
```

Switch to user gds_user:

```shell
su - gds_user
```

If the current cluster version is 8.0.x or earlier, skip 9 and go to 10. If the current cluster version is 8.1.x, go to the next step.

Execute the environment variable script (cluster version 8.1.x):

```shell
cd /opt/bin/dws/gds/bin
source gds_env
```
GDS is green software and can be started after being decompressed. There are two ways to start GDS: run the gds command with startup parameters, or write the startup parameters into the gds.conf configuration file and start GDS with gds_ctl.py.

Method 1: run the gds command.

```shell
gds -d dir -p ip:port -H address_string -l log_file -D -t worker_num
```

Example:

```shell
/opt/bin/dws/gds/bin/gds -d /input_data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 -l /opt/bin/dws/gds/gds_log.txt -D -t 2
```

To enable SSL, add the --enable-ssl and --ssl-dir options:

```shell
gds -d dir -p ip:port -H address_string -l log_file -D -t worker_num --enable-ssl --ssl-dir Cert_file
```

Example:

```shell
/opt/bin/dws/gds/bin/gds -d /input_data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 -l /opt/bin/dws/gds/gds_log.txt -D --enable-ssl --ssl-dir /opt/bin/
```

Replace the example values as required.

GDS determines the number of threads based on the number of concurrent import transactions; even if multi-thread import is configured before GDS startup, the import of a single transaction will not be accelerated. By default, an INSERT statement is one import transaction.

Method 2: use the gds.conf configuration file and gds_ctl.py. Edit the configuration file:

```shell
vim /opt/bin/dws/gds/config/gds.conf
```

Example gds.conf:

```xml
<?xml version="1.0"?>
<config>
<gds name="gds1" ip="192.168.0.90" port="5000" data_dir="/input_data/" err_dir="/err" data_seg="100MB" err_seg="100MB" log_file="/log/gds_log.txt" host="10.10.0.1/24" daemon='true' recursive="true" parallel="32"></gds>
</config>
```

The attributes in the configuration file are described in the table below. Start GDS:

```shell
python3 gds_ctl.py start
```

Example:

```shell
cd /opt/bin/dws/gds/bin
python3 gds_ctl.py start
Start GDS gds1                [OK]
```
The gds command options are as follows:

```
gds [options]:
 -d dir            Set data directory.
 -p port           Set GDS listening port.
    ip:port        Set GDS listening ip address and port.
 -l log_file       Set log file.
 -H secure_ip_range
                   Set secure IP checklist in CIDR notation. Required for GDS to start.
 -e dir            Set error log directory.
 -E size           Set size of per error log segment. (0 < size < 1TB)
 -S size           Set size of data segment. (1MB < size < 100TB)
 -t worker_num     Set number of worker threads in multi-thread mode; the upper limit is 32. Default: 1.
 -s status_file    Enable GDS status report.
 -D                Run GDS as a daemon process.
 -r                Read the working directory recursively.
 -h                Display usage.
```

| Attribute | Description | Value Range |
|---|---|---|
| name | Identifier of the GDS instance. | - |
| ip | Listening IP address. | The IP address must be valid. Default value: 127.0.0.1 |
| port | Listening port. | 1024 to 65535 (integer). Default value: 8098 |
| data_dir | Data file directory. | - |
| err_dir | Error log file directory. | Default value: the data file directory |
| log_file | Log file path. | - |
| host | Host IP addresses allowed to connect to GDS. The value must be in CIDR format; this attribute is available on Linux only. | - |
| recursive | Whether to read the data file directories recursively. | true or false. Default value: false |
| daemon | Whether the process runs in daemon mode. | true or false. Default value: false |
| parallel | Number of concurrent data import threads. | 0 to 32 (integer). Default value: 1 |
The source data information and the GDS access information are configured in a foreign table. GaussDB(DWS) can then import data from the data server into a database table based on the configuration in the foreign table.

You need to collect the source data information and the GDS access information. The key item is location, the GDS URL. Using the GDS information in Installing, Configuring, and Starting GDS as an example, location is set to gsfs://192.168.0.90:5000//input_data/ in non-SSL mode, or gsfss://192.168.0.90:5000//input_data/ in SSL mode. Here, 192.168.0.90:5000 is the IP address and port number of GDS, and input_data is the path of the data source files managed by GDS. Replace the values as required.
For example:

```sql
CREATE FOREIGN TABLE foreign_tpcds_reasons
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
)
SERVER gsmpp_server
OPTIONS
(
LOCATION 'gsfs://192.168.0.90:5000/input_data | gsfs://192.168.0.91:5000/input_data',
FORMAT 'CSV',
DELIMITER ',',
ENCODING 'utf8',
HEADER 'false',
FILL_MISSING_FIELDS 'true',
IGNORE_EXTRA_DATA 'true'
)
LOG INTO product_info_err
PER NODE REJECT LIMIT 'unlimited';
```
For details about the CREATE FOREIGN TABLE syntax, see CREATE FOREIGN TABLE (for GDS Import and Export). For more examples, see Example of Importing Data Using GDS.
+1 +2 +3 +4 +5 +6 +7 | CREATE FOREIGN TABLE foreign_tpcds_reasons +( + r_reason_sk integer not null, + r_reason_id char(16) not null, + r_reason_desc char(100) +) + SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/* | gsfs://192.168.0.91:5000/*', FORMAT 'CSV',MODE 'Normal', ENCODING 'utf8', DELIMITER E'\x08', QUOTE E'\x1b', NULL ''); + |
1 +2 +3 +4 +5 +6 +7 | CREATE FOREIGN TABLE foreign_tpcds_reasons_SSL +( + r_reason_sk integer not null, + r_reason_id char(16) not null, + r_reason_desc char(100) +) + SERVER gsmpp_server OPTIONS (location 'gsfss://192.168.0.90:5000/* | gsfss://192.168.0.91:5000/*', FORMAT 'CSV',MODE 'Normal', ENCODING 'utf8', DELIMITER E'\x08', QUOTE E'\x1b', NULL ''); + |
1 +2 +3 +4 +5 +6 | CREATE FOREIGN TABLE foreign_tpcds_reasons +( + r_reason_sk integer not null, + r_reason_id char(16) not null, + r_reason_desc char(100) +) SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/* | gsfs://192.168.0.91:5000/*', FORMAT 'TEXT', delimiter E'\x08', null '',reject_limit '2',EOL '0x0D') WITH err_foreign_tpcds_reasons; + |
This section describes how to create tables in GaussDB(DWS) and import data into them.

Before importing all the data from a table containing more than 10 million records, you are advised to import part of the data first, check whether there is data skew, and check whether the distribution keys need to be changed; troubleshoot any skew found. It is costly to address data skew and change distribution keys after a large amount of data has been imported. A sketch of a skew check follows.
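One way to check for skew after the partial load, assuming the reasons table created below; rows should spread roughly evenly across the DNs:

```sql
-- Count rows per DN through the xc_node_id system column; a node
-- holding far more rows than the others indicates skew on the
-- current distribution key.
SELECT xc_node_id, count(1) AS row_count
FROM reasons
GROUP BY xc_node_id
ORDER BY row_count DESC;
```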
Prerequisite: the GDS server can communicate with GaussDB(DWS).
The import syntax is:

```sql
INSERT INTO [target_table_name] SELECT * FROM [foreign_table_name];
```

If information similar to the following is displayed, the data has been imported:

```
INSERT 0 9
```

For example, create the target table reasons:

```sql
CREATE TABLE reasons
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
)
DISTRIBUTE BY HASH (r_reason_sk);
```

Import data from the foreign table foreign_tpcds_reasons into the reasons table:

```sql
INSERT INTO reasons SELECT * FROM foreign_tpcds_reasons;
```

After the import, create indexes as required, for example:

```sql
CREATE INDEX reasons_idx ON reasons(r_reason_id);
```
Handle errors that occurred during data import. Such errors are divided into data format errors and non-data format errors.

When creating a foreign table, specify LOG INTO error_table_name. Data format errors occurring during the import are written into the specified table. You can run the following SQL statement to query error details:

```sql
SELECT * FROM error_table_name;
```

| Column | Type | Description |
|---|---|---|
| nodeid | integer | ID of the node where the error is reported |
| begintime | timestamp with time zone | Time when the data format error is reported |
| filename | character varying | Name of the source data file where the data format error occurs. If GDS is used for the import, the error information includes the IP address and port number of the GDS server. |
| rownum | bigint | Number of the row where the error occurs in the source data file |
| rawrecord | text | Raw record of the data format error in the source data file |
| detail | text | Error details |
A non-data format error causes the entire data import task to fail. You can locate and troubleshoot a non-data format error based on the error message displayed during the import.

Troubleshoot data import errors based on the error information obtained and the description in the following table.

| Error Information | Cause | Solution |
|---|---|---|
| missing data for column "r_reason_desc" | The number of columns in the source data file is less than that in the foreign table. | If the source file is correct, set fill_missing_fields to 'true' in the foreign table options so that missing trailing columns are filled with NULL; otherwise, add the missing columns to the source data file. |
| extra data after last expected column | The number of columns in the source data file is greater than that in the foreign table. | If the source file is correct, set ignore_extra_data to 'true' in the foreign table options so that extra columns are ignored; otherwise, remove the extra columns from the source data file. |
| invalid input syntax for type numeric: "a" | The data type is incorrect. | In the source data file, change the data type of the columns to be imported. If this error information is displayed, change the data type to numeric. |
| null value in column "staff_id" violates not-null constraint | The not-null constraint is violated. | In the source data file, add values to the specified columns. If this error information is displayed, add values to the staff_id column. |
| duplicate key value violates unique constraint "reg_id_pk" | The unique constraint is violated. | Remove the duplicate rows from the source data file and import the data again. |
| value too long for type character varying(16) | The column length exceeds the upper limit. | In the source data file, change the column length. If this error information is displayed, reduce the column length to no more than 16 bytes (VARCHAR2). |
Stop GDS after data is imported successfully.

Query the GDS process ID:

```shell
ps -ef | grep gds
gds_user 128954      1  0 15:03 ?        00:00:00 gds -d /input_data/ -p 192.168.0.90:5000 -l /log/gds_log.txt -D
gds_user 129003 118723  0 15:04 pts/0    00:00:00 grep gds
```

In this example, the GDS process ID is 128954. Terminate the process:

```shell
kill -9 128954
```
The data servers and the cluster reside on the same intranet; the data server IP addresses are 192.168.0.90 and 192.168.0.91. Source data files are in CSV format.

Create the target table tpcds.reasons:

```sql
CREATE TABLE tpcds.reasons
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
);
```

On each data server, create the directory for the source data files:

```shell
mkdir -p /input_data
```

Create the GDS user and user group, and change the owner of the source data directory:

```shell
groupadd gdsgrp
useradd -g gdsgrp gds_user
chown -R gds_user:gdsgrp /input_data
```

The GDS installation path is /opt/bin/dws/gds, the source data files are stored in /input_data/, the GDS listening port is 5000, and GDS runs in daemon mode. Start GDS on the data server whose IP address is 192.168.0.90:

```shell
/opt/bin/dws/gds/bin/gds -d /input_data -p 192.168.0.90:5000 -H 10.10.0.1/24 -D
```

Start GDS on the data server whose IP address is 192.168.0.91:

```shell
/opt/bin/dws/gds/bin/gds -d /input_data -p 192.168.0.91:5000 -H 10.10.0.1/24 -D
```
Configure the foreign table: set the data import mode, the data format parameters (which must match those specified when the data was exported), and the import error tolerance parameters. Based on these settings, the foreign table is created using the following statement:

```sql
CREATE FOREIGN TABLE tpcds.foreign_tpcds_reasons
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
)
SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/* | gsfs://192.168.0.91:5000/*', format 'CSV', mode 'Normal', encoding 'utf8', delimiter E'\x08', quote E'\x1b', null '', fill_missing_fields 'false') LOG INTO err_tpcds_reasons PER NODE REJECT LIMIT 'unlimited';
```
Import the data:

```sql
INSERT INTO tpcds.reasons SELECT * FROM tpcds.foreign_tpcds_reasons;
```

If errors occur, query the error table:

```sql
SELECT * FROM err_tpcds_reasons;
```

After the import is complete, stop GDS on each data server:

```shell
ps -ef | grep gds
gds_user 128954      1  0 15:03 ?        00:00:00 gds -d /input_data -p 192.168.0.90:5000 -D
gds_user 129003 118723  0 15:04 pts/0    00:00:00 grep gds
kill -9 128954
```
The data server and the cluster reside on the same intranet; the server IP address is 192.168.0.90. Source data files are in CSV format. Data will be imported into two tables concurrently using multiple threads in Normal mode.

Create the target tables tpcds.reasons1 and tpcds.reasons2:

```sql
CREATE TABLE tpcds.reasons1
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
);

CREATE TABLE tpcds.reasons2
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
);
```

Create the source data directory and the GDS user, and change the directory owner:

```shell
mkdir -p /input_data
groupadd gdsgrp
useradd -g gdsgrp gds_user
chown -R gds_user:gdsgrp /input_data
```

Start GDS with two worker threads (-t 2) and recursive directory reading (-r):

```shell
/gds/gds -d /input_data -p 192.168.0.90:5000 -H 10.10.0.1/24 -D -t 2 -r
```
The foreign table tpcds.foreign_tpcds_reasons1 is used as an example to describe how to configure foreign table parameters: set the data import mode, the data format parameters (matching those specified when the data was exported), and the import error tolerance parameters.

Based on these settings, the foreign table tpcds.foreign_tpcds_reasons1 is created using the following statement:

```sql
CREATE FOREIGN TABLE tpcds.foreign_tpcds_reasons1
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
) SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/import1/*', format 'CSV', mode 'Normal', encoding 'utf8', delimiter E'\x08', quote E'\x1b', null '', fill_missing_fields 'on') LOG INTO err_tpcds_reasons1 PER NODE REJECT LIMIT 'unlimited';
```

Based on the same settings, the foreign table tpcds.foreign_tpcds_reasons2 is created using the following statement:

```sql
CREATE FOREIGN TABLE tpcds.foreign_tpcds_reasons2
(
  r_reason_sk   integer  not null,
  r_reason_id   char(16) not null,
  r_reason_desc char(100)
) SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/import2/*', format 'CSV', mode 'Normal', encoding 'utf8', delimiter E'\x08', quote E'\x1b', null '', fill_missing_fields 'on') LOG INTO err_tpcds_reasons2 PER NODE REJECT LIMIT 'unlimited';
```
Import the data through both foreign tables at the same time:

```sql
INSERT INTO tpcds.reasons1 SELECT * FROM tpcds.foreign_tpcds_reasons1;
INSERT INTO tpcds.reasons2 SELECT * FROM tpcds.foreign_tpcds_reasons2;
```

If errors occur, query the error tables:

```sql
SELECT * FROM err_tpcds_reasons1;
SELECT * FROM err_tpcds_reasons2;
```

After the import is complete, stop GDS:

```shell
ps -ef | grep gds
gds_user 128954      1  0 15:03 ?        00:00:00 gds -d /input_data -p 192.168.0.90:5000 -D -t 2 -r
gds_user 129003 118723  0 15:04 pts/0    00:00:00 grep gds
kill -9 128954
```
Start GDS:

```shell
gds -d /***/gds_data/ -D -p 192.168.0.1:7789 -l /***/gds_log/aa.log -H 0/0 -t 10 -D
```

If you need to set the timeout interval of a pipe, use the --pipe-timeout parameter.

Create the target table:

```sql
CREATE TABLE test_pipe_1 (id integer not null, sex text not null, name text);
```

Create the read-only foreign table, specifying file_type 'pipe' and auto_create_pipe 'false':

```sql
CREATE FOREIGN TABLE foreign_test_pipe_tr (like test_pipe) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.1:7789/foreign_test_pipe.pipe', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false');
```

Execute the import statement; it waits until data is written to the pipe:

```sql
INSERT INTO test_pipe_1 SELECT * FROM foreign_test_pipe_tr;
```

Go to the GDS data directory and create the pipe:

```shell
cd /***/gds_data/
mkfifo foreign_test_pipe.pipe
```

A pipe is automatically cleared after an operation is complete. To perform another operation, create the pipe again.

Write data to the pipe, for example from a local file:

```shell
cat postgres_public_foreign_test_pipe_tw.txt > foreign_test_pipe.pipe
```

from a compressed file:

```shell
gzip -d < out.gz > foreign_test_pipe.pipe
```

or from HDFS:

```shell
hdfs dfs -cat - /user/hive/***/test_pipe.txt > foreign_test_pipe.pipe
```

View the import result:

```
INSERT INTO test_pipe_1 select * from foreign_test_pipe_tr;
INSERT 0 4
SELECT * FROM test_pipe_1;
 id | sex |      name
----+-----+----------------
  3 | 2   | 11111111111111
  1 | 2   | 11111111111111
  2 | 2   | 11111111111111
  4 | 2   | 11111111111111
(4 rows)
```
GDS also supports importing data through multi-process pipes. That is, one foreign table corresponds to multiple GDSs.
+The following takes importing a local file as an example.
+gds -d /***/gds_data/ -D -p 192.168.0.1:7789 -l /***/gds_log/aa.log -H 0/0 -t 10 -D +gds -d /***/gds_data_1/ -D -p 192.168.0.1:7790 -l /***/gds_log_1/aa.log -H 0/0 -t 10 -D+
If you need to set the timeout interval of a pipe, use the --pipe-timeout parameter.
CREATE TABLE test_pipe_1 (id integer not null, sex text not null, name text);
CREATE FOREIGN TABLE foreign_test_pipe_tr (LIKE test_pipe_1) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.1:7789/foreign_test_pipe.pipe|gsfs://192.168.0.1:7790/foreign_test_pipe.pipe', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false');
INSERT INTO test_pipe_1 SELECT * FROM foreign_test_pipe_tr;
cd /***/gds_data/
mkfifo foreign_test_pipe.pipe
cd /***/gds_data_1/
mkfifo foreign_test_pipe.pipe
cat postgres_public_foreign_test_pipe_tw.txt > foreign_test_pipe.pipe
INSERT INTO test_pipe_1 SELECT * FROM foreign_test_pipe_tr;
INSERT 0 4
SELECT * FROM test_pipe_1;
 id | sex |      name
----+-----+----------------
  3 | 2   | 11111111111111
  1 | 2   | 11111111111111
  2 | 2   | 11111111111111
  4 | 2   | 11111111111111
(4 rows)
gds -d /***/gds_data/ -D -p GDS_IP:GDS_PORT -l /***/gds_log/aa.log -H 0/0 -t 10
If you need to set the timeout interval of a pipe, use the --pipe-timeout parameter.
CREATE TABLE test_pipe (id integer not null, sex text not null, name text);
INSERT INTO test_pipe VALUES (1, 2, '11111111111111');
INSERT INTO test_pipe VALUES (2, 2, '11111111111111');
INSERT INTO test_pipe VALUES (3, 2, '11111111111111');
INSERT INTO test_pipe VALUES (4, 2, '11111111111111');
INSERT INTO test_pipe VALUES (5, 2, '11111111111111');
CREATE FOREIGN TABLE foreign_test_pipe (id integer not null, age text not null, name text) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://GDS_IP:GDS_PORT/', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe') WRITE ONLY;
INSERT INTO foreign_test_pipe SELECT * FROM test_pipe;
CREATE TABLE test_pipe (id integer not null, sex text not null, name text);
CREATE FOREIGN TABLE foreign_test_pipe (LIKE test_pipe) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://GDS_IP:GDS_PORT/', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false');
INSERT INTO test_pipe SELECT * FROM foreign_test_pipe;
SELECT * FROM test_pipe;
 id | sex |      name
----+-----+----------------
  3 | 2   | 11111111111111
  6 | 2   | 11111111111111
  7 | 2   | 11111111111111
  1 | 2   | 11111111111111
  2 | 2   | 11111111111111
  4 | 2   | 11111111111111
  5 | 2   | 11111111111111
  8 | 2   | 11111111111111
  9 | 2   | 11111111111111
(9 rows)
By default, the pipe file exported from or imported to GDS is named in the format Database name_Schema name_Foreign table name.pipe. Therefore, the database name and schema name of the target cluster must be the same as those of the source cluster. If the database or schema differs, you can specify the same pipe file in the location URL of both foreign tables.
For example:
CREATE FOREIGN TABLE foreign_test_pipe (id integer not null, age text not null, name text) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://GDS_IP:GDS_PORT/foreign_test_pipe.pipe', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe') WRITE ONLY;
CREATE FOREIGN TABLE foreign_test_pipe (LIKE test_pipe) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://GDS_IP:GDS_PORT/foreign_test_pipe.pipe', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false');
This method is applicable to low-concurrency scenarios where a small volume of data is to be imported.
Use either of the following methods to write data to GaussDB(DWS) using the COPY FROM STDIN statement:
CopyManager is an API class provided by the JDBC driver in GaussDB(DWS). It is used to import data to GaussDB(DWS) in batches.
The CopyManager class is in the org.postgresql.copy package and inherits from the java.lang.Object class. The declaration of the class is as follows:
public class CopyManager
extends Object
The constructor is declared as follows:
public CopyManager(BaseConnection connection)
throws SQLException
| Return Value | Method | Description | Throws |
| --- | --- | --- | --- |
| CopyIn | copyIn(String sql) | - | SQLException |
| long | copyIn(String sql, InputStream from) | Uses COPY FROM STDIN to quickly import data to tables in a database from InputStream. | SQLException, IOException |
| long | copyIn(String sql, InputStream from, int bufferSize) | Uses COPY FROM STDIN to quickly import data to tables in a database from InputStream. | SQLException, IOException |
| long | copyIn(String sql, Reader from) | Uses COPY FROM STDIN to quickly import data to tables in a database from Reader. | SQLException, IOException |
| long | copyIn(String sql, Reader from, int bufferSize) | Uses COPY FROM STDIN to quickly import data to tables in a database from Reader. | SQLException, IOException |
| CopyOut | copyOut(String sql) | - | SQLException |
| long | copyOut(String sql, OutputStream to) | Sends the result set of COPY TO STDOUT from a database to the OutputStream class. | SQLException, IOException |
| long | copyOut(String sql, Writer to) | Sends the result set of COPY TO STDOUT from a database to the Writer class. | SQLException, IOException |
When using Java for secondary development based on GaussDB(DWS), you can use the CopyManager interface to export data from the database to a local file or import a local file into the database by streaming. The file can be in CSV or TEXT format.
The sample program is as follows. Load the GaussDB(DWS) JDBC driver before running it.
//gsjdbc4.jar is used as an example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.io.IOException;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.sql.SQLException;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class Copy {

    public static void main(String[] args)
    {
        String urls = new String("jdbc:postgresql://10.180.155.74:8000/gaussdb"); //Database URL
        String username = new String("jack");                 //Username
        String password = new String("********");             //Password
        String tablename = new String("migration_table");     //Define table information.
        String tablename1 = new String("migration_table_1");  //Define table information.
        String driver = "org.postgresql.Driver";
        Connection conn = null;

        try {
            Class.forName(driver);
            conn = DriverManager.getConnection(urls, username, password);
        } catch (ClassNotFoundException e) {
            e.printStackTrace(System.out);
        } catch (SQLException e) {
            e.printStackTrace(System.out);
        }

        //Export the query result of SELECT * FROM migration_table to the local file d:/data.txt.
        try {
            copyToFile(conn, "d:/data.txt", "(SELECT * FROM migration_table)");
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }

        //Import data from the d:/data.txt file to the migration_table_1 table.
        try {
            copyFromFile(conn, "d:/data.txt", tablename1);
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }

        //Export the data from the migration_table_1 table to the d:/data1.txt file.
        try {
            copyToFile(conn, "d:/data1.txt", tablename1);
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void copyFromFile(Connection connection, String filePath, String tableName)
            throws SQLException, IOException {

        FileInputStream fileInputStream = null;

        try {
            CopyManager copyManager = new CopyManager((BaseConnection) connection);
            fileInputStream = new FileInputStream(filePath);
            copyManager.copyIn("COPY " + tableName + " FROM STDIN", fileInputStream);
        } finally {
            if (fileInputStream != null) {
                try {
                    fileInputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void copyToFile(Connection connection, String filePath, String tableOrQuery)
            throws SQLException, IOException {

        FileOutputStream fileOutputStream = null;

        try {
            CopyManager copyManager = new CopyManager((BaseConnection) connection);
            fileOutputStream = new FileOutputStream(filePath);
            copyManager.copyOut("COPY " + tableOrQuery + " TO STDOUT", fileOutputStream);
        } finally {
            if (fileOutputStream != null) {
                try {
                    fileOutputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
The following example shows how to use CopyManager to migrate data from MySQL to GaussDB(DWS).
//gsjdbc4.jar is used as an example.
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class Migration {

    public static void main(String[] args) {
        String url = new String("jdbc:postgresql://10.180.155.74:8000/gaussdb"); //Database URL
        String user = new String("jack");                  //Database username
        String pass = new String("********");              //Database password
        String tablename = new String("migration_table");  //Define table information.
        String delimiter = new String("|");                //Define a delimiter.
        String encoding = new String("UTF8");              //Define a character set.
        String driver = "org.postgresql.Driver";
        StringBuffer buffer = new StringBuffer();          //Define the buffer to store formatted data.

        try {
            //Obtain the query result set of the source database.
            ResultSet rs = getDataSet();

            //Traverse the result set and obtain records row by row.
            //The values of columns in each record are separated by the specified delimiter
            //and end with a newline character to form strings, which are added to the buffer.
            while (rs.next()) {
                buffer.append(rs.getString(1) + delimiter
                        + rs.getString(2) + delimiter
                        + rs.getString(3) + delimiter
                        + rs.getString(4)
                        + "\n");
            }
            rs.close();

            try {
                //Connect to the target database.
                Class.forName(driver);
                Connection conn = DriverManager.getConnection(url, user, pass);
                BaseConnection baseConn = (BaseConnection) conn;
                baseConn.setAutoCommit(false);

                //Initialize table information.
                String sql = "Copy " + tablename + " from STDIN DELIMITER " + "'" + delimiter + "'" + " ENCODING " + "'" + encoding + "'";

                //Submit data in the buffer.
                CopyManager cp = new CopyManager(baseConn);
                StringReader reader = new StringReader(buffer.toString());
                cp.copyIn(sql, reader);
                baseConn.commit();
                reader.close();
                baseConn.close();
            } catch (ClassNotFoundException e) {
                e.printStackTrace(System.out);
            } catch (SQLException e) {
                e.printStackTrace(System.out);
            }

        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    //Return the query result set from the source database.
    private static ResultSet getDataSet() {
        ResultSet rs = null;
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            Connection conn = DriverManager.getConnection("jdbc:mysql://10.119.179.227:3306/jack?useSSL=false&allowPublicKeyRetrieval=true", "jack", "********");
            Statement stmt = conn.createStatement();
            rs = stmt.executeQuery("select * from migration_table");
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return rs;
    }
}
The gsql tool of GaussDB(DWS) provides the \copy meta-command to import data.
For details about the \copy command, see Table 1.

Table 1 \copy meta-command

Syntax:

\copy { table [ ( column_list ) ] | ( query ) } { from | to } { filename | stdin | stdout | pstdin | pstdout } [ with ] [ binary ] [ oids ] [ delimiter [ as ] 'character' ] [ null [ as ] 'string' ] [ csv [ header ] [ quote [ as ] 'character' ] [ escape [ as ] 'character' ] [ force quote column_list | * ] [ force not null column_list ] ]

Description: You can run this command to import or export data after logging in to the database on any gsql client. Different from the COPY statement in SQL, this command performs read/write operations on local files rather than files on database servers; the accessibility and permissions of the local files are restricted to local users.

NOTE: \copy applies only to small-batch data import with a uniform format, and it provides poor error tolerance. GDS or COPY is preferred for data import.
Parameter description:
table: Specifies the name (possibly schema-qualified) of an existing table.
Value range: an existing table name.
column_list: Specifies an optional list of columns to be copied.
Value range: any field in the table. If no column list is specified, all columns in the table will be copied.
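For example, a minimal sketch that copies only the first two columns of a table to the client's standard output (the table and column names follow the copy_example table used later in this section):

\copy copy_example (col_1, col_2) to stdout;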
query: Specifies that the results are to be copied.
Valid value: a SELECT or VALUES command in parentheses.
filename: Specifies the absolute path of a file. To run the \copy command, the user must have the write permission on this path.
binary: Specifies that data is stored and read in binary mode instead of text mode. In binary mode, you cannot declare DELIMITER, NULL, or CSV. After binary is specified, CSV, FIXED, and TEXT cannot be specified through option or copy_option.
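A minimal sketch of binary mode, assuming the copy_example table used later in this section and a hypothetical local path:

\copy copy_example to '/local/data/example.bin' binary;
\copy copy_example from '/local/data/example.bin' binary;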
oids: Specifies that the internal OID of each row is copied. An error is raised if OIDs are specified for a table that does not have OIDs, or when a query is being copied.
Valid values: true, on, false, and off.
Default value: false
delimiter: Specifies the character that separates columns within each row of the data file.
Value range: a multi-character delimiter within 10 bytes.
Default value: a tab character in text format and a comma in CSV format.
null: Specifies the string that represents a null value in the data file.
Default value: an empty string without quotation marks in CSV format; \N in text format.
header: Specifies whether the data file contains a table header. header is available only for CSV and FIXED files.
When data is imported, if header is on, the first row of the data file is identified as the header and ignored; if header is off, the first row is identified as a data row.
When data is exported, if header is on, fileheader must also be specified (fileheader specifies a file that provides the header content); if header is off, the exported file does not contain a header.
Valid values: true, on, false, and off.
Default value: false
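A minimal export sketch combining header and fileheader, with hypothetical local paths (the header file is assumed to contain the single header line to write):

\copy copy_example to '/local/data/example.csv' with (format 'csv', header 'on', fileheader '/local/data/header.csv');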
quote: Specifies the quote character for a CSV file.
Default value: double quotation mark (").
escape: Specifies the escape character for a CSV file. This option is allowed only in CSV format and must be a single one-byte character.
Default value: double quotation mark ("). If the value is the same as the quote value, it is replaced with \0.
force quote: In CSV COPY TO mode, forces quoting of all non-null values in each specified column. NULL values are not quoted.
Value range: existing columns.
force not null: In CSV COPY FROM mode, processes each specified column as though it were quoted and hence not a null value.
Value range: existing columns.
create table copy_example
(
    col_1 integer,
    col_2 text,
    col_3 varchar(12),
    col_4 date,
    col_5 time
);

\copy copy_example from stdin csv;

Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1,"iamtext","iamvarchar",2006-07-07,12:00:00
>> \.
Assume the local file example.csv contains a header line and the following data:
iamheader
1|"iamtext"|"iamvarchar"|2006-07-07|12:00:00
2|"iamtext"|"iamvarchar"|2022-07-07|19:00:02
Import it into the target table copy_example, ignoring the header and using '|' as the delimiter:
\copy copy_example from '/local/data/example.csv' with(header 'on', format 'csv', delimiter '|', date_format 'yyyy-mm-dd', time_format 'hh24:mi:ss');
Assume the local file example.csv contains the following data, where the first row is missing a field and the second row has an extra field:
1,"iamtext","iamvarchar",2006-07-07
2,"iamtext","iamvarchar",2022-07-07,19:00:02,12:00:00
Import data from the local file example.csv to the target table copy_example. The default CSV delimiter is a comma (,), so the delimiter does not need to be specified. Because the fault tolerance parameters IGNORE_EXTRA_DATA and FILL_MISSING_FIELDS are specified, missing fields are filled with NULL and extra fields are ignored.
\copy copy_example from '/local/data/example.csv' with( format 'csv', date_format 'yyyy-mm-dd', time_format 'hh24:mi:ss', IGNORE_EXTRA_DATA 'true', FILL_MISSING_FIELDS 'true');
Export data from copy_example to stdout in CSV format, forcing quotes on columns col_4 and col_5:
\copy copy_example to stdout CSV quote as '"' force quote col_4,col_5;
gs_restore is an import tool provided by GaussDB(DWS). You can use gs_restore to import the files exported by gs_dump to a database. gs_restore can import the files in .tar, custom, or directory format.
gs_restore can:
- Import data to a database if a database is specified. If multiple databases are specified, the password for connecting to each database also needs to be specified.
- Create a script containing the SQL statements required to recreate the database and write it to a file or standard output if no database is specified. This script output is equivalent to the plain-text output of gs_dump.
- Specify and sort the data to be imported.
gs_restore imports data incrementally by default. To prevent data exceptions caused by consecutive imports, use the -e and -c parameters for each import. In this way, existing objects are deleted from the target database before each import, and the import task exits with an error when an SQL statement fails and then proceeds with the next task (error messages are displayed after the import process is complete).
cd /opt/bin
gs_restore -W password -U jack /home//backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d backupdb -s -e -c
| Parameter | Description | Example Value |
| --- | --- | --- |
| -U | Username for database connection. | -U jack |
| -W | User password for database connection. | -W Password |
| -d | Database to which data will be imported. | -d backupdb |
| -p | TCP port on which the server is listening, or the file name extension of the local Unix domain socket. This parameter is set to ensure connections. | -p 8000 |
| -h | Cluster address: If a public network address is used for connection, set this parameter to the public network address or public network domain name. If a private network address is used for connection, set this parameter to the private network address or private network domain name. | -h 10.10.10.100 |
| -e | Exits the current import task and proceeds with the next one if an error occurs when an SQL statement is sent in the current task. Error messages are displayed after the import process is complete. | - |
| -c | Cleans existing objects from the target database before the import. | - |
| -s | Imports only object definitions in schemas and does not import data. Sequence values will also not be imported. | - |
For details about other parameters, see "Server Tools > gs_restore" in the Tool Reference.
Example 1: Use gs_restore to import data and all object definitions of the gaussdb database from the MPPDB_backup.dmp file (custom format) into the backupdb database.
gs_restore -W password backup/MPPDB_backup.dmp -p 8000 -h 10.10.10.100 -d backupdb
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore: total time: 13053 ms
Example 2: Use gs_restore to import data and all object definitions of the gaussdb database from the MPPDB_backup.tar file into the backupdb database.
gs_restore backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d backupdb
gs_restore[2017-07-21 19:21:32]: restore operation successful
gs_restore[2017-07-21 19:21:32]: total time: 21203 ms
Example 3: Use gs_restore to import data and all object definitions of the gaussdb database from the MPPDB_backup directory into the backupdb database.
gs_restore backup/MPPDB_backup -p 8000 -h 10.10.10.100 -d backupdb
gs_restore[2017-07-21 19:26:46]: restore operation successful
gs_restore[2017-07-21 19:26:46]: total time: 21003 ms
Example 4: Use gs_restore to import all object definitions of the gaussdb database into the backupdb database. Before the import, the gaussdb database has complete definitions and data. After the import, all object definitions exist in the backupdb database, but the tables contain no data.
gs_restore -W password /home//backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d backupdb -s -e -c
gs_restore[2017-07-21 19:46:27]: restore operation successful
gs_restore[2017-07-21 19:46:27]: total time: 32993 ms
Example 5: Use gs_restore to import data and all definitions in the PUBLIC schema from the MPPDB_backup.dmp file. Existing objects are deleted from the target database before the import. If an existing object references an object in another schema, you need to manually delete the referenced object first.
gs_restore backup/MPPDB_backup.dmp -p 8000 -h 10.10.10.100 -d backupdb -e -c -n PUBLIC
gs_restore: [archiver (db)] Error while PROCESSING TOC:
gs_restore: [archiver (db)] Error from TOC entry 313; 1259 337399 TABLE table1 gaussdba
gs_restore: [archiver (db)] could not execute query: ERROR: cannot drop table table1 because other objects depend on it
DETAIL: view t1.v1 depends on table table1
HINT: Use DROP ... CASCADE to drop the dependent objects too.
Command was: DROP TABLE public.table1;
Manually delete the referenced object, and create it again after the import is complete.
gs_restore backup/MPPDB_backup.dmp -p 8000 -h 10.10.10.100 -d backupdb -e -c -n PUBLIC
gs_restore[2017-07-21 19:52:26]: restore operation successful
gs_restore[2017-07-21 19:52:26]: total time: 2203 ms
Example 6: Use gs_restore to import the definition of the hr.staffs table in the PUBLIC schema from the MPPDB_backup.dmp file. Before the import, the hr.staffs table does not exist.
gs_restore backup/MPPDB_backup.dmp -p 8000 -h 10.10.10.100 -d backupdb -e -c -s -n PUBLIC -t hr.staffs
gs_restore[2017-07-21 19:56:29]: restore operation successful
gs_restore[2017-07-21 19:56:29]: total time: 21000 ms
Example 7: Use gs_restore to import data of the hr.staffs table in the PUBLIC schema from the MPPDB_backup.dmp file. Before the import, the hr.staffs table is empty.
gs_restore backup/MPPDB_backup.dmp -p 8000 -h 10.10.10.100 -d backupdb -e -a -n PUBLIC -t hr.staffs
gs_restore[2017-07-21 20:12:32]: restore operation successful
gs_restore[2017-07-21 20:12:32]: total time: 20203 ms
human_resource=# select * from hr.staffs;
 staff_id | first_name  | last_name   | email    | phone_number       | hire_date           | employment_id | salary   | commission_pct | manager_id | section_id
----------+-------------+-------------+----------+--------------------+---------------------+---------------+----------+----------------+------------+------------
      200 | Jennifer    | Whalen      | JWHALEN  | 515.123.4444       | 1987-09-17 00:00:00 | AD_ASST       |  4400.00 |                |        101 |         10
      201 | Michael     | Hartstein   | MHARTSTE | 515.123.5555       | 1996-02-17 00:00:00 | MK_MAN        | 13000.00 |                |        100 |         20

gsql -d human_resource -p 8000
gsql ((GaussDB 8.1.1 build af002019) compiled at 2020-01-10 05:43:20 commit 6995 last mr 11566 )
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.

human_resource=# drop table hr.staffs CASCADE;
NOTICE: drop cascades to view hr.staff_details_view
DROP TABLE

gs_restore -W password /home//backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d human_resource -n hr -t staffs -s -e
restore operation successful
total time: 904 ms

human_resource=# select * from hr.staffs;
 staff_id | first_name | last_name | email | phone_number | hire_date | employment_id | salary | commission_pct | manager_id | section_id
----------+------------+-----------+-------+--------------+-----------+---------------+--------+----------------+------------+------------
(0 rows)
human_resource=# \d
                          List of relations
 Schema |        Name        | Type  |  Owner   |             Storage
--------+--------------------+-------+----------+----------------------------------
 hr     | employment_history | table |          | {orientation=row,compression=no}
 hr     | employments        | table |          | {orientation=row,compression=no}
 hr     | places             | table |          | {orientation=row,compression=no}
 hr     | sections           | table |          | {orientation=row,compression=no}
 hr     | states             | table |          | {orientation=row,compression=no}
(5 rows)

gs_restore -W password /home/mppdb/backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d human_resource -n hr -t staffs -n hr -t areas
restore operation successful
total time: 724 ms

human_resource=# \d
                          List of relations
 Schema |        Name        | Type  |  Owner   |             Storage
--------+--------------------+-------+----------+----------------------------------
 hr     | areas              | table |          | {orientation=row,compression=no}
 hr     | employment_history | table |          | {orientation=row,compression=no}
 hr     | employments        | table |          | {orientation=row,compression=no}
 hr     | places             | table |          | {orientation=row,compression=no}
 hr     | sections           | table |          | {orientation=row,compression=no}
 hr     | staffs             | table |          | {orientation=row,compression=no}
 hr     | states             | table |          | {orientation=row,compression=no}
(7 rows)

human_resource=# select * from hr.areas;
 area_id | area_name
---------+-----------
       4 | Iron
       1 | Wood
       2 | Lake
       3 | Desert
(4 rows)
gs_restore -W password /home//backup/MPPDB_backup1.sql -p 8000 -h 10.10.10.100 -d backupdb -n hr -e -c
restore operation successful
total time: 702 ms

gs_restore -W password /home//backup/MPPDB_backup2.dmp -p 8000 -h 10.10.10.100 -d backupdb -n hr -n hr1 -s
restore operation successful
total time: 665 ms
Example 12: Use gs_restore to decrypt the files exported from the human_resource database and import them into the backupdb database.
create database backupdb;
CREATE DATABASE

gs_restore /home//backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d backupdb --with-key=1234567812345678
restore operation successful
total time: 23472 ms

gsql -d backupdb -p 8000 -r
gsql ((GaussDB 8.1.1 build af002019) compiled at 2020-01-10 05:43:20 commit 6995 last mr 11566 )
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.

backupdb=# select * from hr.areas;
 area_id | area_name
---------+-----------
       4 | Iron
       1 | Wood
       2 | Lake
       3 | Desert
(4 rows)
Example 13: user1 does not have the permission to import data from an exported file into the backupdb database, while role1 has this permission. To import the exported data into the backupdb database, set --role to role1 in the import command.
human_resource=# CREATE USER user1 IDENTIFIED BY 'password';

gs_restore -U user1 -W password /home//backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 -d backupdb --role role1 --rolepassword password
restore operation successful
total time: 554 ms

gsql -d backupdb -p 8000 -r
gsql ((GaussDB 8.1.1 build af002019) compiled at 2020-01-10 05:43:20 commit 6995 last mr 11566 )
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.

backupdb=# select * from hr.areas;
 area_id | area_name
---------+-----------
       4 | Iron
       1 | Wood
       2 | Lake
       3 | Desert
(4 rows)
Before importing data from MRS to a GaussDB(DWS) cluster, you must have:
If you have completed the preparations, skip this section.
In this tutorial, a Hive ORC table is created in the MRS cluster as an example to complete the preparations. The process and SQL syntax for creating a Spark ORC table in the MRS cluster are similar to those of Hive.
The sample data of the product_info.txt data file is as follows:
100,XHDK-A-1293-#fJ3,2017-09-01,A,2017 Autumn New Shirt Women,red,M,328,2017-09-04,715,good
205,KDKE-B-9947-#kL5,2017-09-01,A,2017 Autumn New Knitwear Women,pink,L,584,2017-09-05,406,very good!
300,JODL-X-1937-#pV7,2017-09-01,A,2017 autumn new T-shirt men,red,XL,1245,2017-09-03,502,Bad.
310,QQPX-R-3956-#aD8,2017-09-02,B,2017 autumn new jacket women,red,L,411,2017-09-05,436,It's really super nice
150,ABEF-C-1820-#mC6,2017-09-03,B,2017 Autumn New Jeans Women,blue,M,1223,2017-09-06,1200,The seller's packaging is exquisite
200,BCQP-E-2365-#qE4,2017-09-04,B,2017 autumn new casual pants men,black,L,997,2017-09-10,301,The clothes are of good quality.
250,EABE-D-1476-#oB1,2017-09-10,A,2017 autumn new dress women,black,S,841,2017-09-15,299,Follow the store for a long time.
108,CDXK-F-1527-#pL2,2017-09-11,A,2017 autumn new dress women,red,M,85,2017-09-14,22,It's really amazing to buy
450,MMCE-H-4728-#nP9,2017-09-11,A,2017 autumn new jacket women,white,M,114,2017-09-14,22,Open the package and the clothes have no odor
260,OCDA-G-2817-#bD3,2017-09-12,B,2017 autumn new woolen coat women,red,L,2004,2017-09-15,826,Very favorite clothes
980,ZKDS-J-5490-#cW4,2017-09-13,B,2017 Autumn New Women's Cotton Clothing,red,M,112,2017-09-16,219,The clothes are small
98,FKQB-I-2564-#dA5,2017-09-15,B,2017 autumn new shoes men,green,M,4345,2017-09-18,5473,The clothes are thick and it's better this winter.
150,DMQY-K-6579-#eS6,2017-09-21,A,2017 autumn new underwear men,yellow,37,2840,2017-09-25,5831,This price is very cost effective
200,GKLW-l-2897-#wQ7,2017-09-22,A,2017 Autumn New Jeans Men,blue,39,5879,2017-09-25,7200,The clothes are very comfortable to wear
300,HWEC-L-2531-#xP8,2017-09-23,A,2017 autumn new shoes women,brown,M,403,2017-09-26,607,good
100,IQPD-M-3214-#yQ1,2017-09-24,B,2017 Autumn New Wide Leg Pants Women,black,M,3045,2017-09-27,5021,very good.
350,LPEC-N-4572-#zX2,2017-09-25,B,2017 Autumn New Underwear Women,red,M,239,2017-09-28,407,The seller's service is very good
110,NQAB-O-3768-#sM3,2017-09-26,B,2017 autumn new underwear women,red,S,6089,2017-09-29,7021,The color is very good
210,HWNB-P-7879-#tN4,2017-09-27,B,2017 autumn new underwear women,red,L,3201,2017-09-30,4059,I like it very much and the quality is good.
230,JKHU-Q-8865-#uO5,2017-09-29,C,2017 Autumn New Clothes with Chiffon Shirt,black,M,2056,2017-10-02,3842,very good
For details, see "Creating a Cluster > Custom Creation of a Cluster" in the MapReduce Service User Guide.
For details, see "Remote Login Guide > Logging In to a Master Node" in the MapReduce Service User Guide.
sudo su - omm
cd /opt/client
source bigdata_env
kinit <MRS cluster user>
Example: kinit hiveuser
beeline
Run the following command to create the database demo:
CREATE DATABASE demo;
Run the following command to switch to the database demo:
USE demo;
Run the following command to create table product_info and define the table fields based on data in the Data File.
DROP TABLE product_info;

CREATE TABLE product_info
(
    product_price                int,
    product_id                   char(30),
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    int,
    product_comment_time         date,
    product_comment_num          int,
    product_comment_content      varchar(200)
)
row format delimited fields terminated by ','
stored as TEXTFILE;
For details about how to import data to an MRS cluster, see "Cluster Operation Guide > Managing Active Clusters > Managing Data Files" in the MapReduce Service User Guide.
Run the following command to create the Hive ORC table product_info_orc. The table fields are the same as those of the product_info table created in the previous step.
DROP TABLE product_info_orc;

CREATE TABLE product_info_orc
(
    product_price                int,
    product_id                   char(30),
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    int,
    product_comment_time         date,
    product_comment_num          int,
    product_comment_content      varchar(200)
)
row format delimited fields terminated by ','
stored as orc;

Insert the data in the product_info table into the Hive ORC table product_info_orc:
insert into product_info_orc select * from product_info;
Query table product_info_orc.
select * from product_info_orc;
If the data displayed in the Data File can be queried, the data has been successfully inserted into the ORC table.
+In the syntax CREATE FOREIGN TABLE (SQL on Hadoop or OBS) for creating a foreign table, you need to specify a foreign server associated with the MRS data source connection.
When you create an MRS data source connection on the GaussDB(DWS) management console, the database administrator dbadmin automatically creates a foreign server in the default database postgres. If you want to create a foreign table in the default database postgres to read MRS data, skip this section.
To allow a common user to create a foreign table in a user-defined database to read MRS data, you must manually create a foreign server in that database. This section describes how a common user creates a foreign server in a user-defined database. The procedure is as follows:
For details, see "Managing MRS Data Sources > Creating an MRS Data Source Connection" in the Data Warehouse Service User Guide.
If you no longer need to read data from the MRS data source and have deleted the MRS data source on the GaussDB(DWS) management console, only the foreign server automatically created in the default database postgres is deleted; foreign servers created manually must be deleted manually. For details, see Deleting the Manually Created Foreign Server.
In the following example, a common user dbuser and a database mydatabase are created. Then, an administrator account is used to grant foreign table permissions to user dbuser.
+For example, use the gsql client to connect to the database by running the following command:
gsql -d postgres -h 192.168.2.30 -U dbadmin -p 8000 -W password -r
Enter your password as prompted.
Create a user named dbuser that has the permission to create databases:
CREATE USER dbuser WITH CREATEDB PASSWORD 'password';
Switch to the new user:
SET ROLE dbuser PASSWORD 'password';
Create the database:
CREATE DATABASE mydatabase;
Query the database.
SELECT * FROM pg_database;
The database is successfully created if the returned result contains information about mydatabase.
  datname   | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datcompatibility |                         datacl
------------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+---------------+------------------+--------------------------------------------------------
 template1  |     10 |        0 | C          | C        | t             | t            |           -1 |         14146 |         1351 |          1663 | ORA              | {=c/Ruby,omm=CTc/Ruby}
 template0  |     10 |        0 | C          | C        | t             | f            |           -1 |         14146 |         1350 |          1663 | ORA              | {=c/Ruby,Ruby=CTc/Ruby}
 postgres   |     10 |        0 | C          | C        | f             | t            |           -1 |         14146 |         1352 |          1663 | ORA              | {=Tc/Ruby,Ruby=CTc/Ruby,chaojun=C/Ruby,huobinru=C/Ruby}
 mydatabase |  17000 |        0 | C          | C        | f             | t            |           -1 |         14146 |         1351 |          1663 | ORA              |
(4 rows)
Connect to the new database as a database administrator:
\c mydatabase dbadmin;
Enter the password as prompted.
Note that you must use the administrator account to connect to the database where the foreign server is to be created and the foreign tables will be used, and then grant permissions to the common user.
Grant dbuser the permission to use the foreign data wrapper:
GRANT ALL ON FOREIGN DATA WRAPPER hdfs_fdw TO dbuser;
The name of the FOREIGN DATA WRAPPER must be hdfs_fdw. dbuser is the user who will create the SERVER.
Run the following command to grant the user the permission to use foreign tables:
ALTER USER dbuser USEFT;
Query for the user.
SELECT r.rolname, r.rolsuper, r.rolinherit,
       r.rolcreaterole, r.rolcreatedb, r.rolcanlogin,
       r.rolconnlimit, r.rolvalidbegin, r.rolvaliduntil,
       ARRAY(SELECT b.rolname
             FROM pg_catalog.pg_auth_members m
             JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid)
             WHERE m.member = r.oid) as memberof,
       r.rolreplication,
       r.rolauditadmin,
       r.rolsystemadmin,
       r.roluseft
FROM pg_catalog.pg_roles r
ORDER BY 1;

The authorization is successful if the dbuser information in the returned result contains the UseFT permission.

 rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolconnlimit | rolvalidbegin | rolvaliduntil | memberof | rolreplication | rolauditadmin | rolsystemadmin | roluseft
---------+----------+------------+---------------+-------------+-------------+--------------+---------------+---------------+----------+----------------+---------------+----------------+----------
 dbuser  | f        | t          | f             | t           | t           |           -1 |               |               | {}       | f              | f             | f              | t
 lily    | f        | t          | f             | f           | t           |           -1 |               |               | {}       | f              | f             | f              | f
 Ruby    | t        | t          | t             | t           | t           |           -1 |               |               | {}       | t              | t             | t              | t
+You can use the gsql client to log in to the database in either of the following ways:
+You can use either of the following methods to create the connection:
\c postgres dbadmin;
Enter the password as prompted.
gsql -d postgres -h 192.168.2.30 -U dbadmin -p 8000 -W password -r
SELECT * FROM pg_foreign_server;
The returned result is as follows:
                     srvname                      | srvowner | srvfdw | srvtype | srvversion | srvacl |                                                     srvoptions
--------------------------------------------------+----------+--------+---------+------------+--------+---------------------------------------------------------------------------------------------------------------------
 gsmpp_server                                     |       10 |  13673 |         |            |        |
 gsmpp_errorinfo_server                           |       10 |  13678 |         |            |        |
 hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca |    16476 |  13685 |         |            |        | {"address=192.168.1.245:25000,192.168.1.218:25000",hdfscfgpath=/MRS/8f79ada0-d998-4026-9020-80d6de2692ca,type=hdfs}
(3 rows)
In the query result, each row contains the information about a foreign server. The foreign server associated with the MRS data source connection contains the following information:
You can find the foreign server you want based on the above information and record the values of its srvname and srvoptions.
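A minimal sketch that narrows the catalog query to the MRS-related foreign server (the hdfs_server_ name prefix is an assumption based on the naming shown above):

SELECT srvname, srvoptions
FROM pg_foreign_server
WHERE srvname LIKE 'hdfs_server_%';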
\c mydatabase dbuser;
For details about the syntax for creating foreign servers, see CREATE SERVER. For example:
CREATE SERVER hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca FOREIGN DATA WRAPPER HDFS_FDW
OPTIONS
(
address '192.168.1.245:25000,192.168.1.218:25000',
hdfscfgpath '/MRS/8f79ada0-d998-4026-9020-80d6de2692ca',
type 'hdfs'
);
Mandatory parameters are described as follows:
Server name: You can customize a name. In this example, set the name to the value of the srvname field recorded in 2, for example, hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca. Resources in different databases are isolated, so foreign servers in different databases can have the same name.
FOREIGN DATA WRAPPER: This parameter can only be set to HDFS_FDW, which already exists in the database.
address: Specifies the IP addresses and port numbers of the primary and standby nodes of the HDFS cluster.
hdfscfgpath: Specifies the configuration file path of the HDFS cluster. This parameter is available only when type is HDFS. You can set only one path.
type: Its value is hdfs, which indicates that HDFS_FDW connects to HDFS.
SELECT * FROM pg_foreign_server WHERE srvname='hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca';
The server is successfully created if the returned result is as follows:
                     srvname                      | srvowner | srvfdw | srvtype | srvversion | srvacl |                                                     srvoptions
--------------------------------------------------+----------+--------+---------+------------+--------+---------------------------------------------------------------------------------------------------------------------
 hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca |    16476 |  13685 |         |            |        | {"address=192.168.1.245:25000,192.168.1.218:25000",hdfscfgpath=/MRS/8f79ada0-d998-4026-9020-80d6de2692ca,type=hdfs}
(1 row)
This section describes how to create a Hadoop foreign table in the GaussDB(DWS) database to access the Hadoop structured data stored on MRS HDFS. A Hadoop foreign table is read-only. It can only be queried using SELECT.
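Because the table is read-only, only queries succeed. A short sketch, using the foreign_product_info table created later in this section (the exact error message may vary by version):

SELECT count(*) FROM foreign_product_info;                  -- allowed
INSERT INTO foreign_product_info (product_id) VALUES ('X'); -- rejected: write operations are not supported on a read-only foreign table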
For details, see Preparing Data in an MRS Cluster.
For details, see "Managing MRS Data Sources > Creating an MRS Data Source Connection" in the Data Warehouse Service User Guide.
+There are two methods for you to obtain the HDFS path.
For Hive data, log in to the Hive client of MRS (see 2), run the following commands to view the detailed information about the table, and record the data storage path in the location parameter:
use <database_name>;
desc formatted <table_name>;
For example, if the value of the location parameter in the returned result is hdfs://hacluster/user/hive/warehouse/demo.db/product_info_orc/, the HDFS path is /user/hive/warehouse/demo.db/product_info_orc/.
+Determine whether to use a common user to create a foreign table in the customized database based on requirements.
\c mydatabase dbuser;
Enter your password as prompted.
When you create an MRS data source connection on the GaussDB(DWS) management console, the database administrator dbadmin automatically creates a foreign server in the default database postgres. If you create a foreign table in the default database postgres as the database administrator dbadmin, you need to connect to the database using the database client tool provided by GaussDB(DWS). For example, use the gsql client to connect to the database by running the following command:
gsql -d postgres -h 192.168.2.30 -U dbadmin -p 8000 -W password -r
Enter your password as prompted.
SELECT * FROM pg_foreign_server;
You can also run the \des+ meta-command to view information about foreign servers.
The returned result is as follows:
                     srvname                      | srvowner | srvfdw | srvtype | srvversion | srvacl |                                                     srvoptions
--------------------------------------------------+----------+--------+---------+------------+--------+---------------------------------------------------------------------------------------------------------------------
 gsmpp_server                                     |       10 |  13673 |         |            |        |
 gsmpp_errorinfo_server                           |       10 |  13678 |         |            |        |
 hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca |    16476 |  13685 |         |            |        | {"address=192.168.1.245:25000,192.168.1.218:25000",hdfscfgpath=/MRS/8f79ada0-d998-4026-9020-80d6de2692ca,type=hdfs}
(3 rows)
In the query result, each row contains the information about a foreign server. The foreign server associated with the MRS data source connection contains the following information:
You can find the foreign server you want based on the above information and record the values of its srvname and srvoptions.
After Obtaining Information About the Foreign Server Connected to the MRS Data Source and Obtaining the HDFS Path of the MRS Data Source are complete, you can create a foreign table to read data from the MRS data source.
The syntax for creating a foreign table is as follows. For details, see the syntax CREATE FOREIGN TABLE (SQL on Hadoop or OBS).
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
( [ { column_name type_name
      [ { [CONSTRAINT constraint_name] NULL |
          [CONSTRAINT constraint_name] NOT NULL |
          column_constraint [...] } ] |
      table_constraint [, ...] } [, ...] ] )
    SERVER dfs_server
    OPTIONS ( { option_name ' value ' } [, ...] )
    DISTRIBUTE BY {ROUNDROBIN | REPLICATION}
    [ PARTITION BY ( column_name ) [ AUTOMAPPED ] ];
For example, when creating a foreign table named foreign_product_info, set the parameters in the syntax as follows:
table_name: Mandatory. This parameter specifies the name of the foreign table to be created.
Column definitions: Multiple columns are separated by commas (,). The number of columns and the column types in the foreign table must be the same as those of the data stored on MRS. Learn Data Type Conversion before defining column data types.
SERVER dfs_server: This parameter specifies the foreign server name of the foreign table. This server must exist. The foreign table reads data from the MRS cluster by connecting to the MRS data source through the foreign server. Enter the value of the srvname field queried in Obtaining Information About the Foreign Server Connected to the MRS Data Source.
OPTIONS parameters: These are parameters associated with the foreign table. The key parameter is foldername, the HDFS directory that stores the data. Follow the steps in Obtaining the HDFS Path of the MRS Data Source to obtain the HDFS path, which is the value of the foldername parameter. If the MRS analysis cluster has Kerberos authentication enabled, ensure that the MRS user bound to the MRS data source connection has read and write permissions on the directory.
Other parameters are optional. You can set them as required. In this example, they do not need to be set.
Based on the above settings, the foreign table is created using the following statements:
DROP FOREIGN TABLE IF EXISTS foreign_product_info;

CREATE FOREIGN TABLE foreign_product_info
(
    product_price                integer,
    product_id                   char(30),
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200)
) SERVER hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca
OPTIONS (
format 'orc',
encoding 'utf8',
foldername '/user/hive/warehouse/demo.db/product_info_orc/'
)
DISTRIBUTE BY ROUNDROBIN;
The data imported through Hive/Spark is stored on HDFS in ORC format. GaussDB(DWS) actually reads the ORC files on HDFS and queries and analyzes the data in these files.
Data types supported by Hive/Spark are different from those supported by GaussDB(DWS), so you need to learn the mapping between them. Table 1 describes the mapping in detail.
| Type | Column Type Supported by an HDFS/OBS Foreign Table of GaussDB(DWS) | Column Type Supported by a Hive Table | Column Type Supported by a Spark Table |
| --- | --- | --- | --- |
| Integer in two bytes | SMALLINT | SMALLINT | SMALLINT |
| Integer in four bytes | INTEGER | INT | INT |
| Integer in eight bytes | BIGINT | BIGINT | BIGINT |
| Single-precision floating point number | FLOAT4 (REAL) | FLOAT | FLOAT |
| Double-precision floating point number | FLOAT8 (DOUBLE PRECISION) | DOUBLE | FLOAT |
| Scientific data type | DECIMAL[p (,s)] (maximum precision up to 38) | DECIMAL (maximum precision up to 38 in Hive 0.11) | DECIMAL |
| Date type | DATE | DATE | DATE |
| Time type | TIMESTAMP | TIMESTAMP | TIMESTAMP |
| BOOLEAN type | BOOLEAN | BOOLEAN | BOOLEAN |
| CHAR type | CHAR(n) | CHAR(n) | STRING |
| VARCHAR type | VARCHAR(n) | VARCHAR(n) | VARCHAR(n) |
| String | TEXT (CLOB) | STRING | STRING |
If the data amount is small, you can directly run SELECT to query the foreign table and view the data in the MRS data source.
SELECT * FROM foreign_product_info;
If the query result is the same as the data in the Data File, the import is successful. The following information is displayed at the end of the query result:
(20 rows)
After the data is queried, you can insert it into common tables in the database.
You can query the MRS data after importing it to GaussDB(DWS).
The target table structure must be the same as the structure of the foreign table created in Creating a Foreign Table. That is, both tables must have the same number of columns and the same column types.
For example, create a table named product_info. The table example is as follows:
DROP TABLE IF EXISTS product_info;
CREATE TABLE product_info
(
    product_price                integer,
    product_id                   char(30),
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200)
)
with (
orientation = column,
compression = middle
)
DISTRIBUTE BY HASH (product_id);
Example:
INSERT INTO product_info SELECT * FROM foreign_product_info;
INSERT 0 20
SELECT * FROM product_info;
If the query result is the same as the data in the Data File, the import is successful. The following information is displayed at the end of the query result:
(20 rows)
After completing operations in this tutorial, if you no longer need to use the resources created during the operations, you can delete them to avoid resource waste or quota occupation.
DROP TABLE product_info;
DROP FOREIGN TABLE foreign_product_info;
If operations in Manually Creating a Foreign Server have been performed, perform the following steps to delete the foreign server, database, and user:
+You can use the gsql client to log in to the database in either of the following ways:
\c mydatabase dbuser;
Enter the password as prompted.
gsql -d mydatabase -h 192.168.2.30 -U dbuser -p 8000 -r
Enter the password as prompted.
Run the following command to delete the foreign server. For details about the syntax, see DROP SERVER.
DROP SERVER hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca;
The foreign server is deleted if the following information is displayed:
DROP SERVER
View the foreign server.
+SELECT * FROM pg_foreign_server WHERE srvname='hdfs_server_8f79ada0_d998_4026_9020_80d6de2692ca';
The server is successfully deleted if the returned result is as follows:
 srvname | srvowner | srvfdw | srvtype | srvversion | srvacl | srvoptions
---------+----------+--------+---------+------------+--------+------------
(0 rows)
Connect to the default database gaussdb through the database client tool provided by GaussDB(DWS).
If you have logged in to the database using the gsql client, run the following command to switch the database and user:
\c gaussdb
Enter your password as prompted.
Run the following command to delete the user-defined database:
DROP DATABASE mydatabase;
The database is deleted if the following information is displayed:
DROP DATABASE
Connect to the database as a database administrator through the database client tool provided by GaussDB(DWS).
If you have logged in to the database using the gsql client, run the following command to switch the database and user:
\c gaussdb dbadmin
Revoke the foreign-table permissions granted to the common user:
REVOKE ALL ON FOREIGN DATA WRAPPER hdfs_fdw FROM dbuser;
The name of the FOREIGN DATA WRAPPER must be hdfs_fdw. dbuser is the user who created the SERVER.
Run the following command to delete the user:
DROP USER dbuser;
You can run the \du command to query for the user and check whether the user has been deleted.
The following error information indicates that GaussDB(DWS) attempted to read an ORC data file while the actual file is in text format. Therefore, create a Hive table of the ORC type and store the data in that table.
ERROR: dn_6009_6010: Error occurs while creating an orc reader for file /user/hive/warehouse/products_info.txt, detail can be found in dn log of dn_6009_6010.
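One way to resolve this is to rewrite the text data as ORC on the Hive side and point the foreign table at the resulting directory. A minimal Hive SQL sketch, with hypothetical table names:

-- Run on the Hive/beeline client: store the text table's data in ORC format.
CREATE TABLE products_info_orc STORED AS ORC AS SELECT * FROM products_info;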
You can use CDM to migrate data from other data sources (for example, MySQL) to the databases in clusters on GaussDB(DWS).
For details about scenarios where CDM is used to migrate data to GaussDB(DWS), see the following sections of the Cloud Data Migration User Guide:
+Data skew causes the query performance to deteriorate. Before importing all the data from a table consisting of over 10 million records, you are advised to import some of the data and check whether data skew occurs and whether the distribution keys need to be changed. Troubleshoot the problems if any. It is costly to address data skew and change the distribution keys after a large amount of data has been imported.
GaussDB(DWS) uses a massively parallel processing (MPP) system with a shared-nothing architecture. The MPP system performs horizontal partitioning to store tuples of service data tables on all DNs using proper distribution policies.
The following user table distribution policies are supported:
If an inappropriate distribution key is used, data skew may occur when you use the hash policy. Check for data skew when you use the hash distribution policy so that data can be evenly distributed to each DN. You are advised to use a column with few duplicate values as the distribution key.
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name
    ({ column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ]
        | table_constraint
        | LIKE source_table [ like_option [...] ] }
        [, ... ])
    [ WITH ( {storage_parameter = value} [, ... ] ) ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
    [ COMPRESS | NOCOMPRESS ]
    [ TABLESPACE tablespace_name ]
    [ DISTRIBUTE BY { REPLICATION | { HASH ( column_name [,...] ) } } ];
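As an illustration of the two policies, a minimal sketch (the table and column names are invented): a small dimension table is replicated to every DN, while a large fact table is hash-distributed on a high-cardinality key.

CREATE TABLE dim_region
(
    region_id   integer,
    region_name varchar(32)
) DISTRIBUTE BY REPLICATION;

CREATE TABLE fact_sales
(
    sale_id   bigint,
    region_id integer,
    amount    numeric(12,2)
) DISTRIBUTE BY HASH (sale_id);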
When importing a single data file, you can evenly split this file and import a part of it to check for the data skew in the target table.
SELECT a.count, b.node_name FROM (SELECT count(*) AS count, xc_node_id FROM table_name GROUP BY xc_node_id) a, pgxc_node b WHERE a.xc_node_id = b.node_id ORDER BY a.count DESC;
If the data distribution deviation across DNs is greater than or equal to 10%, data skew occurs. Remove this distribution key from the candidates in 1, delete the target table, and repeat 2 through 5.
The data distribution deviation is the difference between the actual data volume on a DN and the average data volume across all DNs, expressed relative to the average.
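A sketch that computes the deviation percentage per DN directly, building on the row-count query above (replace table_name with your table):

SELECT b.node_name,
       a.count,
       round(100.0 * abs(a.count - avg(a.count) OVER ()) / avg(a.count) OVER (), 2) AS deviation_pct
FROM (SELECT count(*) AS count, xc_node_id FROM table_name GROUP BY xc_node_id) a,
     pgxc_node b
WHERE a.xc_node_id = b.node_id
ORDER BY deviation_pct DESC;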
Assume you want to select an appropriate distribution key for the staffs table.
CREATE TABLE staffs
(
    staff_ID       NUMBER(6) not null,
    FIRST_NAME     VARCHAR2(20),
    LAST_NAME      VARCHAR2(25),
    EMAIL          VARCHAR2(25),
    PHONE_NUMBER   VARCHAR2(20),
    HIRE_DATE      DATE,
    employment_ID  VARCHAR2(10),
    SALARY         NUMBER(8,2),
    COMMISSION_PCT NUMBER(2,2),
    MANAGER_ID     NUMBER(6),
    section_ID     NUMBER(4)
)
DISTRIBUTE BY hash(staff_ID);

SELECT count(*) FROM pgxc_node where node_type='D';
 count
-------
     8
(1 row)

SELECT a.count, b.node_name FROM (select count(*) as count, xc_node_id FROM staffs GROUP BY xc_node_id) a, pgxc_node b WHERE a.xc_node_id = b.node_id ORDER BY a.count desc;
 count | node_name
-------+-----------
 11010 | datanode4
 10000 | datanode3
 12001 | datanode2
  8995 | datanode1
 10000 | datanode5
  7999 | datanode6
  9995 | datanode7
 10000 | datanode8
(8 rows)

DROP TABLE staffs;

CREATE TABLE staffs
(
    staff_ID       NUMBER(6) not null,
    FIRST_NAME     VARCHAR2(20),
    LAST_NAME      VARCHAR2(25),
    EMAIL          VARCHAR2(25),
    PHONE_NUMBER   VARCHAR2(20),
    HIRE_DATE      DATE,
    employment_ID  VARCHAR2(10),
    SALARY         NUMBER(8,2),
    COMMISSION_PCT NUMBER(2,2),
    MANAGER_ID     NUMBER(6),
    section_ID     NUMBER(4)
)
DISTRIBUTE BY hash(staff_ID, FIRST_NAME, LAST_NAME);

SELECT a.count, b.node_name FROM (select count(*) as count, xc_node_id FROM staffs GROUP BY xc_node_id) a, pgxc_node b WHERE a.xc_node_id = b.node_id ORDER BY a.count desc;
 count | node_name
-------+-----------
 10010 | datanode4
 10000 | datanode3
 10001 | datanode2
  9995 | datanode1
 10000 | datanode5
  9999 | datanode6
  9995 | datanode7
 10000 | datanode8
(8 rows)

TRUNCATE TABLE staffs;
Before you use the SQL on OBS feature to query OBS data:
For example, an ORC table has been created by using the Hive or Spark component, and the ORC data has been stored on OBS.
Assume that there are two ORC data files, named product_info.0 and product_info.1, whose original data is stored in the demo.db/product_info_orc/ directory of the mybucket OBS bucket. You can view their original data in Original Data.
This section uses the ORC format as an example to describe how to import data. The method for importing CarbonData data is similar.
Assume that you have stored the two ORC data files on OBS and their original data is as follows:
The product_info.0 file contains the following data:
100,XHDK-A-1293-#fJ3,2017-09-01,A,2017 Autumn New Shirt Women,red,M,328,2017-09-04,715,good!
205,KDKE-B-9947-#kL5,2017-09-01,A,2017 Autumn New Knitwear Women,pink,L,584,2017-09-05,406,very good!
300,JODL-X-1937-#pV7,2017-09-01,A,2017 autumn new T-shirt men,red,XL,1245,2017-09-03,502,Bad.
310,QQPX-R-3956-#aD8,2017-09-02,B,2017 autumn new jacket women,red,L,411,2017-09-05,436,It's really super nice.
150,ABEF-C-1820-#mC6,2017-09-03,B,2017 Autumn New Jeans Women,blue,M,1223,2017-09-06,1200,The seller's packaging is exquisite.
The product_info.1 file contains the following data:
200,BCQP-E-2365-#qE4,2017-09-04,B,2017 autumn new casual pants men,black,L,997,2017-09-10,301,The clothes are of good quality.
250,EABE-D-1476-#oB1,2017-09-10,A,2017 autumn new dress women,black,S,841,2017-09-15,299,Follow the store for a long time.
108,CDXK-F-1527-#pL2,2017-09-11,A,2017 autumn new dress women,red,M,85,2017-09-14,22,It's really amazing to buy.
450,MMCE-H-4728-#nP9,2017-09-11,A,2017 autumn new jacket women,white,M,114,2017-09-14,22,Open the package and the clothes have no odor.
260,OCDA-G-2817-#bD3,2017-09-12,B,2017 autumn new woolen coat women,red,L,2004,2017-09-15,826,Very favorite clothes.
Click Service List and choose Object Storage Service to open the OBS management console.
+After the source data files are uploaded to an OBS bucket, a globally unique access path is generated. You need to specify the OBS paths of source data files when creating a foreign table.
+For details about how to view an OBS path, see "OBS Console Operation Guide > Managing Objects > Accessing an Object Using Its Object URL" in the Object Storage Service User Guide.
+For example, the OBS paths are as follows:
https://obs.xxx.com/mybucket/demo.db/product_info_orc/product_info.0
https://obs.xxx.com/mybucket/demo.db/product_info_orc/product_info.1
The user who executes the SQL on OBS function needs to obtain the read permission on the OBS bucket where the source data file is located. You can configure the ACL for the OBS buckets to grant the read permission to a specific user.
+For details, see "OBS Console Operation Guide > Permission Control > Configuring a Bucket ACL" in the Object Storage Service User Guide.
+This section describes how to create a foreign server that is used to define the information about OBS servers and is invoked by foreign tables. For details about the syntax for creating foreign servers, see CREATE SERVER.
+Common users do not have permissions to create foreign servers and tables. If you want to use a common user to create foreign servers and tables in a customized database, perform the following steps to create a user and a database, and grant the user foreign table permissions.
+In the following example, a common user dbuser and a database mydatabase are created. Then, an administrator is used to grant foreign table permissions to user dbuser.
+For example, use the gsql client to connect to the database by running the following command:
gsql -d gaussdb -h 192.168.2.30 -U dbadmin -p 8000 -W password -r
Create a user named dbuser that has the permission to create databases.
CREATE USER dbuser WITH CREATEDB PASSWORD 'password';
SET ROLE dbuser PASSWORD 'password';
CREATE DATABASE mydatabase;
Query the database.
SELECT * FROM pg_database;
The database is successfully created if the returned result contains information about mydatabase.
  datname   | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datcompatibility |                         datacl
------------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+---------------+------------------+---------------------------------------------------------
 template1  |     10 |        0 | C          | C        | t             | t            |           -1 |         14146 |         1351 |          1663 | ORA              | {=c/Ruby,Ruby=CTc/Ruby}
 template0  |     10 |        0 | C          | C        | t             | f            |           -1 |         14146 |         1350 |          1663 | ORA              | {=c/Ruby,Ruby=CTc/Ruby}
 gaussdb    |     10 |        0 | C          | C        | f             | t            |           -1 |         14146 |         1352 |          1663 | ORA              | {=Tc/Ruby,Ruby=CTc/Ruby,chaojun=C/Ruby,huobinru=C/Ruby}
 mydatabase |  17000 |        0 | C          | C        | f             | t            |           -1 |         14146 |         1351 |          1663 | ORA              |
(4 rows)
Connect to the new database as a database administrator through the database client tool provided by GaussDB(DWS).
+You can use the gsql client to run the following command to switch to an administrator user and connect to the new database:
\c mydatabase dbadmin;
Enter the password of the system administrator as prompted.
Note that you must use the administrator account to connect to the database where a foreign server is to be created and foreign tables are used, and then grant permissions to the common user.
GRANT ALL ON SCHEMA public TO dbuser;
GRANT ALL ON FOREIGN DATA WRAPPER dfs_fdw TO dbuser;
In the preceding commands, the FOREIGN DATA WRAPPER name can be hdfs_fdw or dfs_fdw, and dbuser is the name of the user who creates the SERVER.
+Run the following command to grant the user the permission to use foreign tables:
ALTER USER dbuser USEFT;
Query for the user.
SELECT r.rolname, r.rolsuper, r.rolinherit,
       r.rolcreaterole, r.rolcreatedb, r.rolcanlogin,
       r.rolconnlimit, r.rolvalidbegin, r.rolvaliduntil,
       ARRAY(SELECT b.rolname
             FROM pg_catalog.pg_auth_members m
             JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid)
             WHERE m.member = r.oid) as memberof,
       r.rolreplication,
       r.rolauditadmin,
       r.rolsystemadmin,
       r.roluseft
FROM pg_catalog.pg_roles r
ORDER BY 1;
The authorization is successful if the dbuser information in the returned result contains the UseFT permission.
 rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolconnlimit | rolvalidbegin | rolvaliduntil | memberof | rolreplication | rolauditadmin | rolsystemadmin | roluseft
---------+----------+------------+---------------+-------------+-------------+--------------+---------------+---------------+----------+----------------+---------------+----------------+----------
 dbuser  | f        | t          | f             | t           | t           |           -1 |               |               | {}       | f              | f             | f              | t
 lily    | f        | t          | f             | f           | t           |           -1 |               |               | {}       | f              | f             | f              | f
 Ruby    | t        | t          | t             | t           | t           |           -1 |               |               | {}       | t              | t             | t              | t
In this example, use common user dbuser created in (Optional) Creating a User and a Database and Granting the User Foreign Table Permissions to connect to mydatabase created by the user. You need to connect to the database through the database client tool provided by GaussDB(DWS).
+You can use the gsql client to log in to the database in either of the following ways:
\c mydatabase dbuser;
Enter the password as prompted.
gsql -d mydatabase -h 192.168.2.30 -U dbuser -p 8000 -r
Enter the password as prompted.
+For details about the syntax for creating foreign servers, see CREATE SERVER.
+For example, run the following command to create a foreign server named obs_server.
CREATE SERVER obs_server FOREIGN DATA WRAPPER dfs_fdw
OPTIONS (
    address 'obs.otc.t-systems.com',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced',
    encrypt 'on',
    type 'obs'
);
Mandatory parameters are described as follows:
+You can customize a name.
+In this example, the name is set to obs_server.
+fdw_name can be hdfs_fdw or dfs_fdw, which already exists in the database.
+Specifies the endpoint of the OBS service.
+Obtain the address as follows:
+For details about how to obtain the access keys, see Creating Access Keys (AK and SK).
+Its value is obs, which indicates that dfs_fdw connects to OBS.
SELECT * FROM pg_foreign_server WHERE srvname='obs_server';
The server is successfully created if the returned result is as follows:
  srvname   | srvowner | srvfdw | srvtype | srvversion | srvacl |                                                      srvoptions
------------+----------+--------+---------+------------+--------+---------------------------------------------------------------------------------------------------------------------
 obs_server |    24661 |  13686 |         |            |        | {address=xxx.xxx.x.xxx,access_key=xxxxxxxxxxxxxxxxxxxx,type=obs,secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
(1 row)
After performing steps in Creating a Foreign Server, create an OBS foreign table in the GaussDB(DWS) database to access the data stored in OBS. An OBS foreign table is read-only. It can only be queried using SELECT.
+The syntax for creating a foreign table is as follows. For details, see the syntax CREATE FOREIGN TABLE (SQL on Hadoop or OBS).
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
( [ { column_name type_name
    [ { [CONSTRAINT constraint_name] NULL |
        [CONSTRAINT constraint_name] NOT NULL |
        column_constraint [...]} ] |
    table_constraint [, ...]} [, ...] ] )
    SERVER dfs_server
    OPTIONS ( { option_name ' value ' } [, ...] )
    DISTRIBUTE BY {ROUNDROBIN | REPLICATION}
    [ PARTITION BY ( column_name ) [ AUTOMAPPED ] ] ;
For example, when creating a foreign table named product_info_ext_obs, set parameters in the syntax as follows:
+Specifies the name of the foreign table to be created.
Multiple columns are separated by commas (,).
+The number of fields and field types in the foreign table must be the same as those in the data stored on OBS.
This parameter specifies the foreign server name of the foreign table. This server must exist. The foreign table connects to OBS to read data through the foreign server.
+Enter the name of the foreign server created by following steps in Creating a Foreign Server.
+These are parameters associated with the foreign table. The key parameters are as follows:
You can perform 2 in Preparing Data on OBS to obtain the complete OBS path of the source data files. The foldername value is the bucket and directory part of that path, that is, the part following the OBS endpoint.
+This clause is mandatory. Currently, OBS foreign tables support only the ROUNDROBIN distribution mode.
It indicates that when a foreign table reads data from the data source, each node in the GaussDB(DWS) cluster reads a random portion of the data, and the portions together form the complete data set.
+Other parameters are optional. You can set them as required. In this example, you do not need to set these parameters.
+Based on the preceding settings, the command for creating the foreign table is as follows:
DROP FOREIGN TABLE IF EXISTS product_info_ext_obs;
CREATE FOREIGN TABLE product_info_ext_obs
(
    product_price                integer        not null,
    product_id                   char(30)       not null,
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200)
) SERVER obs_server
OPTIONS (
    format 'orc',
    foldername '/mybucket/demo.db/product_info_orc/',
    encoding 'utf8',
    totalrows '10'
)
DISTRIBUTE BY ROUNDROBIN;
Create an OBS foreign table that contains partition columns. The product_info_ext_obs foreign table uses the product_manufacturer column as the partition key, and the following partition directories exist under the OBS directory /mybucket/demo.db/product_info_orc/:
+Partition directory 1: product_manufacturer=10001
+Partition directory 2: product_manufacturer=10010
+Partition directory 3: product_manufacturer=10086
DROP FOREIGN TABLE IF EXISTS product_info_ext_obs;
CREATE FOREIGN TABLE product_info_ext_obs
(
    product_price                integer        not null,
    product_id                   char(30)       not null,
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200),
    product_manufacturer         integer
) SERVER obs_server
OPTIONS (
    format 'orc',
    foldername '/mybucket/demo.db/product_info_orc/',
    encoding 'utf8',
    totalrows '10'
)
DISTRIBUTE BY ROUNDROBIN
PARTITION BY (product_manufacturer) AUTOMAPPED;
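For illustration, a query that filters on the partition column; given the directory layout above, only the matching partition directory should need to be read (an assumption based on the partitioning scheme, not stated explicitly in this guide):

SELECT count(*) FROM product_info_ext_obs WHERE product_manufacturer = 10086;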
If the data amount is small, you can directly run SELECT to query the foreign table and view the data on OBS.
SELECT * FROM product_info_ext_obs;
If the query result is the same as the data in Original Data, the import is successful. The following information is displayed at the end of the query result:
(10 rows)
After data is queried, you can insert the data to common tables in the database.
+The target table structure must be the same as the structure of the foreign table created in Creating a Foreign Table. That is, both tables must have the same number of columns and column types.
+For example, create a table named product_info. The table example is as follows:
DROP TABLE IF EXISTS product_info;

CREATE TABLE product_info
(
    product_price                integer        not null,
    product_id                   char(30)       not null,
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200)
)
WITH (
    orientation = column,
    compression = middle
)
DISTRIBUTE BY HASH (product_id);
Example:
INSERT INTO product_info SELECT * FROM product_info_ext_obs;
INSERT 0 10

SELECT * FROM product_info;
If the query result is the same as the data in Original Data, the import is successful. The following information is displayed at the end of the query result:
(10 rows)
After completing operations in this tutorial, if you no longer need to use the resources created during the operations, you can delete them to avoid resource waste or quota occupation. The procedure is as follows:
+If you have performed steps in (Optional) Creating a User and a Database and Granting the User Foreign Table Permissions, delete the database and the user to which the database belongs.
DROP TABLE product_info;

If the following information is displayed, the table has been deleted.

DROP TABLE

DROP FOREIGN TABLE product_info_ext_obs;

If the following information is displayed, the foreign table has been deleted.

DROP FOREIGN TABLE
In this example, common user dbuser is used to create the foreign server in mydatabase. You need to connect to the database through the database client tool provided by GaussDB(DWS). You can use the gsql client to log in to the database in either of the following ways:
\c mydatabase dbuser;

Enter the password as prompted.

gsql -d mydatabase -h 192.168.2.30 -U dbuser -p 8000 -r
Enter the password as prompted.
+Run the following command to delete the server. For details about the syntax, see DROP SERVER.
DROP SERVER obs_server;

The server is deleted if the following information is displayed:

DROP SERVER
View the foreign server.
SELECT * FROM pg_foreign_server WHERE srvname='obs_server';
The server is successfully deleted if the returned result is as follows:
 srvname | srvowner | srvfdw | srvtype | srvversion | srvacl | srvoptions
---------+----------+--------+---------+------------+--------+------------
(0 rows)
If you have performed steps in (Optional) Creating a User and a Database and Granting the User Foreign Table Permissions, perform the following steps to delete the database and the user to which the database belongs.
+Connect to the default database gaussdb through the database client tool provided by GaussDB(DWS).
+If you have logged in to the database using the gsql client, run the following command to switch the database and user:
+Switch to the default database.
+\c gaussdb
+Enter your password as prompted.
+Run the following command to delete the customized database:
DROP DATABASE mydatabase;

The database is deleted if the following information is displayed:

DROP DATABASE
Connect to the database as a database administrator through the database client tool provided by GaussDB(DWS).
+If you have logged in to the database using the gsql client, run the following command to switch the database and user:
+\c gaussdb dbadmin
REVOKE ALL ON FOREIGN DATA WRAPPER dfs_fdw FROM dbuser;

The name of the FOREIGN DATA WRAPPER must be dfs_fdw, and dbuser is the user who created the SERVER.
+Run the following command to delete the user:
DROP USER dbuser;
You can run the \du command to query for the user and check whether the user has been deleted.
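For example, assuming gsql supports the psql-style pattern argument to \du, an empty result for the pattern confirms the deletion:

\du dbuser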
Generally, objects are managed as files. However, OBS has no file system-related concepts, such as files and folders. To let users easily manage data, OBS allows them to simulate folders: a slash (/) can be added to an object name, for example, tpcds1000/stock.csv. In this name, tpcds1000 is regarded as the folder name and stock.csv as the file name. The value of key (the object name) is still tpcds1000/stock.csv, and the content of the object is the content of the stock.csv file.
+The following describes the principles of exporting data from a cluster to OBS by using a distributed hash table or a replication table.
A distributed hash table stores data in hash mode. Figure 1 shows an example of exporting data from table T2 to OBS.
+During table data storage, the col2 hash column in table T2 is hashed, and a hash value is generated. The tuple is distributed to corresponding DNs for storage according to the mapping between the DNs and the hash value.
+When data is exported to OBS, DNs that store the exported data of T2 directly export their data files to OBS. Original data on multiple nodes will be exported in parallel.
A replication table stores a complete copy of the table data on each GaussDB(DWS) node. When exporting data to OBS, GaussDB(DWS) randomly selects a DN to perform the export.
+Rules for naming the files exported from GaussDB(DWS) to OBS are as follows:
+For example, the data of table t1 on datanode3 will be exported as t1_datanode3_segment.0, t1_datanode3_segment.1, and so on.
+You are advised to export data from different clusters or databases to different OBS buckets or different paths of the same OBS bucket.
Assume that a segment already stores 100 tuples (1023 MB) when datanode3 exports data from t1 to OBS. When a 5 MB tuple is inserted, the segment grows to 1028 MB. In this case, file t1_datanode3_segment.0 (1023 MB) is generated and stored on OBS, and the new tuple is stored on OBS as file t1_datanode3_segment.1.
+For example, a cluster has DataNode1, DataNode2, DataNode3, DataNode4, DataNode5, and DataNode6, which store 1.5 GB, 0.7 GB, 0.6 GB, 0.8 GB, 0.4 GB, and 0.5 GB data, respectively. Seven OBS segment files will be generated during data export because DataNode1 will generate two segment files, which store 1 GB and 0.5 GB data, respectively.
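Before exporting, you can estimate how many segment files to expect by checking how the table's rows are spread across DNs, reusing the xc_node_id technique shown earlier in this guide (t1 stands in for your table; row counts only approximate data volume):

SELECT b.node_name, a.count
FROM (SELECT count(*) AS count, xc_node_id FROM t1 GROUP BY xc_node_id) a, pgxc_node b
WHERE a.xc_node_id = b.node_id
ORDER BY a.count DESC;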
| Procedure | Description | Subtask |
|---|---|---|
| Plan data export. | Create an OBS bucket and a folder in the OBS bucket as the directory for storing exported data files. For details, see Planning Data Export. | - |
| Create an OBS foreign table. | Create a foreign table to help OBS specify information about the data files to be exported. The foreign table stores information such as the destination location, format, encoding, and data delimiter of a source data file. For details, see Creating an OBS Foreign Table. | - |
| Export data. | After the foreign table is created, run the INSERT statement to efficiently export data to data files. For details, see Exporting Data. | - |
Plan the storage location of exported data in OBS.
You need to specify the OBS path (to a directory) for storing the data that you want to export. The exported data can be saved to a file in CSV format. TEXT format is also supported, so that you can import the exported data to various applications.
+The target directory cannot contain any files.
The user who exports the data must have the write permission on the OBS bucket where the export path is located. You can configure the ACL for the OBS bucket to grant the write permission to a specific user.
+For details, see Granting Write Permission to OBS Storage Location and OBS Bucket as Planned.
You must prepare the data to be exported in the database table, and the data volume per row must be less than 1 GB. Based on the data to be exported, plan foreign tables whose attributes, such as columns, column types, and lengths, match the user data.
+Click Service List and choose Object Storage Service to open the OBS management console.
For details about how to create an OBS bucket, see "OBS Console Operation Guide > Managing Buckets > Creating a Bucket" in the Object Storage Service User Guide.
+For example, create a bucket named mybucket.
+In the OBS bucket, create a folder for storing exported data.
For details, see "OBS Console Operation Guide > Managing Objects > Creating a Folder" in the Object Storage Service User Guide.
+For example, create a folder named output_data in the created mybucket OBS bucket.
+Specify the OBS path for storing exported data files. This path is the value of the location parameter used for creating a foreign table.
+The OBS folder path in the location parameter consists of obs://, a bucket name, and a file path.
+In this example, the OBS folder path is as follows:
obs://mybucket/output_data/
The OBS directory to be used for storing data files must be empty.
+When exporting data, a user must have the write permission on the OBS bucket where the data export path is located. You can configure ACL permissions for the OBS bucket to grant the write permission to a specific user.
+For details, see "OBS Console Operation Guide > Permission Control > Configuring a Bucket ACL" in the Object Storage Service User Guide.
+To obtain access keys, log in to the management console, click the username in the upper right corner, and select My Credential from the menu. Then choose Access Keys in the navigation tree on the left. On the Access Keys page, you can view the existing AKs or click Add Access Key to create and download access keys.
+For example, in the GaussDB(DWS) database, create a write-only foreign table with the format parameter as text to export text files. Set parameters as follows:
+The OBS path of the source data file has been obtained in Obtain the OBS path for storing source data files in Planning Data Export.
+For example, set location as follows:
location 'obs://mybucket/output_data/',
access_key and secret_access_key have been obtained during user creation. Replace the italic part with the actual keys.
+Based on the above settings, the foreign table is created using the following statement:
DROP FOREIGN TABLE IF EXISTS product_info_output_ext1;
CREATE FOREIGN TABLE product_info_output_ext1
(
    c_bigint       bigint,
    c_char         char(30),
    c_varchar      varchar(30),
    c_nvarchar2    nvarchar2(30),
    c_data         date,
    c_time         time,
    c_test         varchar(30)
)
SERVER gsmpp_server
OPTIONS (
    LOCATION 'obs://mybucket/output_data/',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced',
    format 'text',
    delimiter '|',
    encoding 'utf-8',
    encrypt 'on'
)
WRITE ONLY;
If the following information is displayed, the foreign table has been created:
CREATE FOREIGN TABLE
For example, in the GaussDB(DWS) database, create a write-only foreign table with the format parameter as CSV to export CSV files. Set parameters as follows:
+The OBS path of the source data file has been obtained in Obtain the OBS path for storing source data files in Planning Data Export.
+For example, set location as follows:
location 'obs://mybucket/output_data/',
access_key and secret_access_key have been obtained during user creation. Replace the italic part with the actual keys.
+Specifies whether a file contains a header with the names of each column in the file.
When exporting data, this parameter cannot be set to true. Use the default value false, indicating that the first row of the exported data file is not a header.
+Based on the preceding settings, the foreign table is created using the following statements:
DROP FOREIGN TABLE IF EXISTS product_info_output_ext2;
CREATE FOREIGN TABLE product_info_output_ext2
(
    product_price                integer        not null,
    product_id                   char(30)       not null,
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200)
)
SERVER gsmpp_server
OPTIONS (
    location 'obs://mybucket/output_data/',
    FORMAT 'CSV',
    DELIMITER ',',
    encoding 'utf8',
    header 'false',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced'
)
WRITE ONLY;
If the following information is displayed, the foreign table has been created:
CREATE FOREIGN TABLE
INSERT INTO [Foreign table name] SELECT * FROM [Source table name];

INSERT INTO product_info_output_ext SELECT * FROM product_info_output;
INSERT 0 10

INSERT INTO product_info_output_ext SELECT * FROM product_info_output WHERE product_price>500;
Create two foreign tables and use them to export tables from a database to two buckets in OBS.
+OBS and the database are in the same region. The example GaussDB(DWS) table to be exported is tpcds.customer_address.
+Export information is set as follows:
+Information about data formats is set based on the detailed data format parameters specified during data export from a database. The parameter settings are as follows:
+access_key and secret_access_key have been obtained during user creation. Replace the italic part with the actual keys.
+Based on the preceding settings, the foreign table is created using the following statements:
CREATE FOREIGN TABLE tpcds.customer_address_ext1
(
    ca_address_sk       integer,
    ca_address_id       char(16),
    ca_street_number    char(10),
    ca_street_name      varchar(60),
    ca_street_type      char(15),
    ca_suite_number     char(10),
    ca_city             varchar(60),
    ca_county           varchar(30),
    ca_state            char(2),
    ca_zip              char(10),
    ca_country          varchar(20),
    ca_gmt_offset       decimal(5,2),
    ca_location_type    char(20)
)
SERVER gsmpp_server
OPTIONS (
    LOCATION 'obs://input-data1/data/',
    FORMAT 'CSV',
    ENCODING 'utf8',
    DELIMITER E'\x08',
    ENCRYPT 'off',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced'
) WRITE ONLY;

CREATE FOREIGN TABLE tpcds.customer_address_ext2
(
    ca_address_sk       integer,
    ca_address_id       char(16),
    ca_street_number    char(10),
    ca_street_name      varchar(60),
    ca_street_type      char(15),
    ca_suite_number     char(10),
    ca_city             varchar(60),
    ca_county           varchar(30),
    ca_state            char(2),
    ca_zip              char(10),
    ca_country          varchar(20),
    ca_gmt_offset       decimal(5,2),
    ca_location_type    char(20)
)
SERVER gsmpp_server
OPTIONS (
    LOCATION 'obs://input-data2/data/',
    FORMAT 'CSV',
    ENCODING 'utf8',
    DELIMITER E'\x08',
    ENCRYPT 'off',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced'
) WRITE ONLY;

INSERT INTO tpcds.customer_address_ext1 SELECT * FROM tpcds.customer_address;

INSERT INTO tpcds.customer_address_ext2 SELECT * FROM tpcds.customer_address;
By design, OBS foreign tables cannot export files to a non-empty path; in concurrent export scenarios, however, multiple files are written to the same path, which causes an error.
Assume that two sessions concurrently export data from the same table through the same OBS foreign table, and that one SQL statement starts while the other is still running and has not yet generated any file on the OBS server. Although both SQL statements execute successfully, some data is overwritten. Therefore, you are advised not to export data concurrently to the same OBS foreign table.
+Use the two foreign tables to export tables from the database to two buckets in OBS.
+OBS and the database are in the same region. Tables to be exported are tpcds.customer_address and tpcds.customer_demographics.
+Information about data formats is set based on the detailed data format parameters specified during data export from GaussDB(DWS). The parameter settings are as follows:
+access_key and secret_access_key have been obtained during user creation. Replace the italic part with the actual keys.
+Based on the preceding settings, the foreign table is created using the following statements:
CREATE FOREIGN TABLE tpcds.customer_address_ext1
(
    ca_address_sk       integer,
    ca_address_id       char(16),
    ca_street_number    char(10),
    ca_street_name      varchar(60),
    ca_street_type      char(15),
    ca_suite_number     char(10),
    ca_city             varchar(60),
    ca_county           varchar(30),
    ca_state            char(2),
    ca_zip              char(10),
    ca_country          varchar(20),
    ca_gmt_offset       decimal(5,2),
    ca_location_type    char(20)
)
SERVER gsmpp_server
OPTIONS (
    LOCATION 'obs://input-data1/data/',
    FORMAT 'CSV',
    ENCODING 'utf8',
    DELIMITER E'\x08',
    ENCRYPT 'off',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced'
) WRITE ONLY;

CREATE FOREIGN TABLE tpcds.customer_address_ext2
(
    ca_address_sk       integer,
    ca_address_id       char(16),
    ca_address_name     varchar(20),
    ca_address_code     integer,
    ca_street_number    char(10),
    ca_street_name      varchar(60),
    ca_street_type      char(15),
    ca_suite_number     char(10),
    ca_city             varchar(60),
    ca_county           varchar(30),
    ca_state            char(2),
    ca_zip              char(10),
    ca_country          varchar(20),
    ca_gmt_offset       decimal(5,2)
)
SERVER gsmpp_server
OPTIONS (
    LOCATION 'obs://input_data2/data/',
    FORMAT 'CSV',
    ENCODING 'utf8',
    DELIMITER E'\x08',
    QUOTE E'\x1b',
    ENCRYPT 'off',
    ACCESS_KEY 'access_key_value_to_be_replaced',
    SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced'
) WRITE ONLY;

INSERT INTO tpcds.customer_address_ext1 SELECT * FROM tpcds.customer_address;

INSERT INTO tpcds.customer_address_ext2 SELECT * FROM tpcds.warehouse;
For details about exporting data to OBS, see Planning Data Export.
+For details about the data types that can be exported to OBS, see Table 2.
+For details about HDFS data export or MRS configuration, see the MapReduce Service User Guide.
+For details about creating a foreign server on OBS, see Creating a Foreign Server.
+For details about creating a foreign server in HDFS, see Manually Creating a Foreign Server.
+After operations in Creating a Foreign Server are complete, create an OBS/HDFS write-only foreign table in the GaussDB(DWS) database to access data stored in OBS/HDFS. The foreign table is write-only and can be used only for data export.
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
( [ { column_name type_name
    [ { [CONSTRAINT constraint_name] NULL |
        [CONSTRAINT constraint_name] NOT NULL |
        column_constraint [...]} ] |
    table_constraint [, ...]} [, ...] ] )
    SERVER dfs_server
    OPTIONS ( { option_name ' value ' } [, ...] )
    [ {WRITE ONLY }]
    DISTRIBUTE BY {ROUNDROBIN | REPLICATION}
    [ PARTITION BY ( column_name ) [ AUTOMAPPED ] ] ;
For example, when creating a foreign table named product_info_ext_obs, set parameters in the syntax as follows:
+Specifies the name of the foreign table to be created.
Multiple columns are separated by commas (,).
+Specifies the foreign server name of the foreign table. This server must exist. The foreign table connects to OBS/HDFS to read data through the foreign server.
Set this parameter to the name of the foreign server created in Creating a Foreign Server.
+These are parameters associated with the foreign table. The key parameters are as follows:
+(Optional) Specifies the file size of a write-only foreign table. If this parameter is not specified, the file size in the distributed file system configuration is used by default. This syntax is available only for the write-only foreign table.
+Value range: an integer ranging from 1 to 1024
+The filesize parameter is valid only for the ORC-formatted write-only HDFS foreign table.
+(Optional) Specifies the compression mode of ORC files. This syntax is available only for the write-only foreign table.
Value range: zlib, snappy, and lz4. The default value is snappy.
+(Optional) Specifies the ORC version number. This syntax is available only for the write-only foreign table.
+Value range: Only 0.12 is supported. The default value is 0.12.
(Optional) Specifies the encoding of the exported data when the database encoding differs from the encoding of the data table to be exported. For example, the database encoding is Latin-1, but the data in the exported table is in UTF-8 format. If this parameter is not specified, the database encoding is used by default. This syntax is valid only for the write-only HDFS foreign table.
Value range: data encodings supported by the database
+The dataencoding parameter is valid only for the ORC-formatted write-only HDFS foreign table.
+Other parameters are optional. You can set them as required. In this example, you do not need to set these parameters.
+Based on the preceding settings, the command for creating the foreign table is as follows:
DROP FOREIGN TABLE IF EXISTS product_info_ext_obs;

-- Create an OBS foreign table that does not contain partition columns. The foreign server
-- associated with the table is obs_server, the file format on OBS is ORC, and the data
-- storage path on OBS is /mybucket/demo.db/product_info_orc/.

CREATE FOREIGN TABLE product_info_ext_obs
(
    product_price                integer,
    product_id                   char(30),
    product_time                 date,
    product_level                char(10),
    product_name                 varchar(200),
    product_type1                varchar(20),
    product_type2                char(10),
    product_monthly_sales_cnt    integer,
    product_comment_time         date,
    product_comment_num          integer,
    product_comment_content      varchar(200)
) SERVER obs_server
OPTIONS (
    format 'orc',
    foldername '/mybucket/demo.db/product_info_orc/',
    compression 'snappy',
    version '0.12'
) WRITE ONLY;
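For the HDFS case, a minimal sketch combining the write-only-only options described above (filesize, compression, version, dataencoding); hdfs_server, the folder path, and the option values are placeholders and assumptions for illustration:

CREATE FOREIGN TABLE product_info_ext_hdfs
(
    product_price    integer,
    product_id       char(30),
    product_name     varchar(200)
) SERVER hdfs_server
OPTIONS (
    format 'orc',
    foldername '/user/hive/warehouse/product_info_orc/',  -- placeholder HDFS path
    filesize '32',          -- target size of each exported ORC file
    compression 'zlib',     -- one of zlib, snappy, lz4
    version '0.12',
    dataencoding 'utf8'     -- export encoding if it differs from the database encoding
) WRITE ONLY;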
In high-concurrency scenarios, you can use GDS to export data from a database to a common file system.
+In the current GDS version, data can be exported from a database to a pipe file.
Data can be exported from GaussDB(DWS) in Remote mode.
| Process | Description | Subtask |
|---|---|---|
| Plan data export. | Prepare the data to be exported and plan the export path for the mode to be selected. For details, see Planning Data Export. | - |
| Start GDS. | If the Remote mode is selected, install, configure, and start GDS on the data servers. For details, see Installing, Configuring, and Starting GDS. | - |
| Create a foreign table. | Create a foreign table to help GDS specify information about a data file. The foreign table stores information such as the location, format, encoding, and inter-data delimiter of a data file. For details, see Creating a GDS Foreign Table. | - |
| Export data. | After the foreign table is created, run the INSERT statement to efficiently export data to data files. For details, see Exporting Data. | - |
| Stop GDS. | Stop GDS after data is exported. For details, see Stopping GDS. | - |
Before you use GDS to export data from a cluster, prepare data to be exported and plan the export path.
mkdir -p /output_data

groupadd gdsgrp
useradd -g gdsgrp gdsuser

If the following information is displayed, the user and user group already exist. Skip this step.

useradd: Account 'gdsuser' already exists.
groupadd: Group 'gdsgrp' already exists.

chown -R gdsuser:gdsgrp /output_data
GDS is a data service tool provided by GaussDB(DWS). Using the foreign table mechanism, this tool helps export data at a high speed.
+For details, see Installing, Configuring, and Starting GDS.
+Set the location parameter to the URL of the directory that stores the data files.
+You do not need to specify any file.
+For example:
+The IP address of the GDS data server is 192.168.0.90. The listening port number set during GDS startup is 5000. The directory for storing data files is /output_data.
+In this case, set the location parameter to gsfs://192.168.0.90:5000/.
+Data export mode settings are as follows:
+The data server resides on the same intranet as the cluster. The IP address of the data server is 192.168.0.90. Data is to be exported as CSV files. The Remote mode is selected for parallel data export.
+Assume that the directory for storing data files is /output_data/ and the GDS listening port is 5000 when GDS is started. Therefore, the location parameter is set to gsfs://192.168.0.90:5000/.
+Data format parameter settings are as follows:
CREATE FOREIGN TABLE foreign_tpcds_reasons
(
    r_reason_sk      integer     not null,
    r_reason_id      char(16)    not null,
    r_reason_desc    char(100)
)
SERVER gsmpp_server
OPTIONS (
    LOCATION 'gsfs://192.168.0.90:5000/',
    FORMAT 'CSV',
    DELIMITER E'\x08',
    QUOTE E'\x1b',
    NULL '',
    EOL '0x0a'
)
WRITE ONLY;
Ensure that the IP addresses and ports of servers where CNs and DNs are deployed can connect to those of the GDS server.
INSERT INTO [Foreign table name] SELECT * FROM [Source table name];
Create batch processing scripts to export data in parallel. The degree of parallelism depends on the server resource usage; you can test with several tables and monitor resource usage to decide whether to raise or lower it. Common resource monitoring commands include top (memory and CPU usage), iostat (I/O usage), and sar (network). For details about application cases, see Exporting Data Using Multiple Threads. A minimal sketch of such a script follows.
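The sketch below assumes a reachable gsql client and two write-only foreign tables such as foreign_tpcds_reasons1 and foreign_tpcds_reasons2 created later in this section; the connection values are placeholders:

#!/bin/bash
# Run two export statements in parallel, one gsql session each.
gsql -d mydatabase -h 192.168.0.90 -p 8000 -U dbuser -W password \
  -c "INSERT INTO foreign_tpcds_reasons1 SELECT * FROM tpcds.reason;" &
gsql -d mydatabase -h 192.168.0.90 -p 8000 -U dbuser -W password \
  -c "INSERT INTO foreign_tpcds_reasons2 SELECT * FROM tpcds.reason;" &
wait  # block until both exports complete
echo "All exports finished."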
+GDS is a data service tool provided by GaussDB(DWS). Using the foreign table mechanism, this tool helps export data at a high speed.
+For details, see Stopping GDS.
+The data server and the cluster reside on the same intranet, the IP address of the data server is 192.168.0.90, and data source files are in CSV format. In this scenario, data is exported in parallel in Remote mode.
+To export data in parallel in Remote mode, perform the following operations:
mkdir -p /output_data

groupadd gdsgrp
useradd -g gdsgrp gds_user

chown -R gds_user:gdsgrp /output_data

/opt/bin/dws/gds/bin/gds -d /output_data -p 192.168.0.90:5000 -H 10.10.0.1/24 -D
Data export mode settings are as follows:
+Data format parameter settings are as follows:
+Based on the above settings, the foreign table is created using the following statement:
CREATE FOREIGN TABLE foreign_tpcds_reasons
(
    r_reason_sk      integer     not null,
    r_reason_id      char(16)    not null,
    r_reason_desc    char(100)
) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.90:5000/', FORMAT 'CSV', ENCODING 'utf8', DELIMITER E'\x08', QUOTE E'\x1b', NULL '') WRITE ONLY;

INSERT INTO foreign_tpcds_reasons SELECT * FROM tpcds.reason;

ps -ef | grep gds
gds_user 128954      1 0 15:03 ?      00:00:00 gds -d /output_data -p 192.168.0.90:5000 -D
gds_user 129003 118723 0 15:04 pts/0  00:00:00 grep gds
kill -9 128954
The data server and the cluster reside on the same intranet, the IP address of the data server is 192.168.0.90, and data source files are in CSV format. In this scenario, data is concurrently exported to two target tables using multiple threads in Remote mode.
+To concurrently export data using multiple threads in Remote mode, perform the following operations:
mkdir -p /output_data
groupadd gdsgrp
useradd -g gdsgrp gds_user

chown -R gds_user:gdsgrp /output_data

/opt/bin/dws/gds/bin/gds -d /output_data -p 192.168.0.90:5000 -H 10.10.0.1/24 -D -t 2
Based on the preceding settings, the foreign table foreign_tpcds_reasons1 is created using the following statement:
CREATE FOREIGN TABLE foreign_tpcds_reasons1
(
    r_reason_sk      integer     not null,
    r_reason_id      char(16)    not null,
    r_reason_desc    char(100)
) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.90:5000/', FORMAT 'CSV', ENCODING 'utf8', DELIMITER E'\x08', QUOTE E'\x1b', NULL '') WRITE ONLY;
Based on the preceding settings, the foreign table foreign_tpcds_reasons2 is created using the following statement:
+1 +2 +3 +4 +5 +6 | CREATE FOREIGN TABLE foreign_tpcds_reasons2 +( + r_reason_sk integer not null, + r_reason_id char(16) not null, + r_reason_desc char(100) +) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.90:5000/', FORMAT 'CSV', DELIMITER E'\x08', QUOTE E'\x1b', NULL '') WRITE ONLY; + |
1 | INSERT INTO foreign_tpcds_reasons1 SELECT * FROM tpcds.reason; + |
1 | INSERT INTO foreign_tpcds_reasons2 SELECT * FROM tpcds.reason; + |
ps -ef|grep gds +gds_user 128954 1 0 15:03 ? 00:00:00 gds -d /output_data -p 192.168.0.90:5000 -D -t 2 +gds_user 129003 118723 0 15:04 pts/0 00:00:00 grep gds +kill -9 128954+
gds -d /***/gds_data/ -p 192.168.0.1:7789 -l /***/gds_log/aa.log -H 0/0 -t 10 -D
If you need to set the timeout interval of a pipe, use the --pipe-timeout parameter.
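For example, a hypothetical invocation with the flag appended; the value and its unit (assumed here to be seconds) are illustrative, so check the GDS help output for specifics:

gds -d /***/gds_data/ -p 192.168.0.1:7789 -H 0/0 --pipe-timeout 600 -D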
CREATE TABLE test_pipe( id integer not null, sex text not null, name text );

INSERT INTO test_pipe values(1,2,'11111111111111');
INSERT INTO test_pipe values(2,2,'11111111111111');
INSERT INTO test_pipe values(3,2,'11111111111111');
INSERT INTO test_pipe values(4,2,'11111111111111');

CREATE FOREIGN TABLE foreign_test_pipe_tw( id integer not null, age text not null, name text ) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.1:7789/', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false') WRITE ONLY;

INSERT INTO foreign_test_pipe_tw SELECT * FROM test_pipe;

cd /***/gds_data/

mkfifo postgres_public_foreign_test_pipe_tw.pipe
A pipe will be automatically cleared after an operation is complete. To perform another operation, create a pipe again.
cat postgres_public_foreign_test_pipe_tw.pipe > postgres_public_foreign_test_pipe_tw.txt

gzip -9 -c < postgres_public_foreign_test_pipe_tw.pipe > out.gz

cat postgres_public_foreign_test_pipe_tw.pipe | hdfs dfs -put - /user/hive/***/test_pipe.txt

cat postgres_public_foreign_test_pipe_tw.txt
3,2,11111111111111
1,2,11111111111111
2,2,11111111111111
4,2,11111111111111

vim out.gz
3,2,11111111111111
1,2,11111111111111
2,2,11111111111111
4,2,11111111111111

hdfs dfs -cat /user/hive/***/test_pipe.txt
3,2,11111111111111
1,2,11111111111111
2,2,11111111111111
4,2,11111111111111
GDS also supports importing and exporting data through multi-process pipes. That is, one foreign table corresponds to multiple GDSs.
+The following takes exporting a local file as an example.
gds -d /***/gds_data/ -p 192.168.0.1:7789 -l /***/gds_log/aa.log -H 0/0 -t 10 -D
gds -d /***/gds_data_1/ -p 192.168.0.1:7790 -l /***/gds_log/aa.log -H 0/0 -t 10 -D
If you need to set the timeout interval of a pipe, use the --pipe-timeout parameter.
CREATE TABLE test_pipe (id integer not null, sex text not null, name text);

INSERT INTO test_pipe values(1,2,'11111111111111');
INSERT INTO test_pipe values(2,2,'11111111111111');
INSERT INTO test_pipe values(3,2,'11111111111111');
INSERT INTO test_pipe values(4,2,'11111111111111');

CREATE FOREIGN TABLE foreign_test_pipe_tw( id integer not null, age text not null, name text ) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.1:7789/|gsfs://192.168.0.1:7790/', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false') WRITE ONLY;

INSERT INTO foreign_test_pipe_tw SELECT * FROM test_pipe;

cd /***/gds_data/
cd /***/gds_data_1/

mkfifo postgres_public_foreign_test_pipe_tw.pipe

cat postgres_public_foreign_test_pipe_tw.pipe > postgres_public_foreign_test_pipe_tw.txt

cat /***/gds_data/postgres_public_foreign_test_pipe_tw.txt
3,2,11111111111111

cat /***/gds_data_1/postgres_public_foreign_test_pipe_tw.txt
1,2,11111111111111
2,2,11111111111111
4,2,11111111111111
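Because each GDS instance receives only part of the rows, you may want to concatenate the per-instance files afterwards; a simple sketch (merged.txt is a hypothetical output name):

cat /***/gds_data/postgres_public_foreign_test_pipe_tw.txt \
    /***/gds_data_1/postgres_public_foreign_test_pipe_tw.txt > merged.txt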
GaussDB(DWS) provides gs_dump and gs_dumpall to export required database objects and related information. To migrate database information, you can use a tool to import the exported metadata to a target database. gs_dump exports a single database or its objects. gs_dumpall exports all databases or global objects in a cluster. For details, see Table 1.
| Application Scenario | Export Granularity | Export Format | Import Method |
|---|---|---|---|
| Exporting a single database | Database-level export, schema-level export, and table-level export | Plain text, custom, directory, or .tar | - |
| Exporting all databases in a cluster | Cluster-level export and global object export | Plain text | For details about how to import data files, see Using the gsql Meta-Command \COPY to Import Data. |
gs_dump and gs_dumpall use -U to specify the user that performs the export. If the specified user does not have the required permission, data cannot be exported. In this case, you can set --role in the export command to the role that has the permission. Then, gs_dump or gs_dumpall uses the specified role to export data. See Table 1 for application scenarios and Data Export By a User Without Required Permissions for operation details.
gs_dump and gs_dumpall can encrypt the exported data files to prevent data disclosure and improve database security. Encrypted files must be decrypted before they are imported.
+When gs_dump or gs_dumpall is used to export data from a cluster, other users can still access (read data from and write data to) databases in the cluster.
+gs_dump and gs_dumpall can export complete, consistent data. For example, if gs_dump is used to export database A or gs_dumpall is used to export all databases from a cluster at T1, data of database A or all databases in the cluster at that time point will be exported, and modifications on the databases after that time point will not be exported.
+Obtain gs_dump and gs_dumpall by decompressing the gsql CLI client package.
+You can use gs_dump to export data and all object definitions of a database from GaussDB(DWS). You can specify the information to be exported as follows:
You can use the exported information to create an identical database containing the same data as the current one.
You can use the exported object definitions to quickly create an identical database, without the data.
+The user who uploads the client must have the full control permission on the target directory on the host to which the client is uploaded.
cd <Path_for_storing_the_client>
unzip dws_client_8.1.x_redhat_x64.zip
Where,
source gsql_env.sh
If the following information is displayed, the GaussDB(DWS) client is successfully configured:
All things done.
gs_dump -W password -U jack -f /home//backup/postgres_backup.tar -p 8000 gaussdb -h 10.10.10.100 -F t
| Parameter | Description | Example Value |
|---|---|---|
| -U | Username for connecting to the database. If this parameter is not configured, the username of the connected database is used. | -U jack |
| -W | User password for database connection. | -W Password |
| -f | Folder to store exported files. If this parameter is not specified, the exported files are stored in the standard output. | -f /home//backup/postgres_backup.tar |
| -p | Name extension of the TCP port on which the server is listening or the local Unix domain socket. This parameter is configured to ensure connections. | -p 8000 |
| -h | Cluster address: If a public network address is used for connection, set this parameter to Public Network Address or Public Network Domain Name. If a private network address is used for connection, set this parameter to Private Network Address or Private Network Domain Name. | -h 10.10.10.100 |
| dbname | Name of the database to be exported. | gaussdb |
| -F | Format of exported files: p (plain text), c (custom), d (directory), or t (.tar). | -F t |
For details about other parameters, see "gs_dump" in the Tool Guide.
Example 1: Run gs_dump to export the full information of the gaussdb database. The exported file is in plain-text (SQL) format and compressed at level 8.

gs_dump -W password -U jack -f /home//backup/postgres_backup.sql -p 8000 -h 10.10.10.100 gaussdb -Z 8 -F p
gs_dump[port=''][gaussdb][2017-07-21 15:36:13]: dump database gaussdb successfully
gs_dump[port=''][gaussdb][2017-07-21 15:36:13]: total time: 3793 ms
Example 2: Use gs_dump to run the following command to export data of the database gaussdb, excluding object definitions. The exported files are in a custom format.
gs_dump -W Password -U jack -f /home//backup/postgres_data_backup.dmp -p 8000 -h 10.10.10.100 gaussdb -a -F c
gs_dump[port=''][gaussdb][2017-07-21 15:36:13]: dump database gaussdb successfully
gs_dump[port=''][gaussdb][2017-07-21 15:36:13]: total time: 3793 ms
Example 3: Use gs_dump to run the following command to export object definitions of the database gaussdb. The exported files are in SQL format.
--Before the export, the nation table contains data.
select n_nationkey,n_name,n_regionkey from nation limit 3;
 n_nationkey |          n_name           | n_regionkey
-------------+---------------------------+-------------
           0 | ALGERIA                   |           0
           3 | CANADA                    |           1
          11 | IRAQ                      |           4
(3 rows)

gs_dump -W password -U jack -f /home//backup/postgres_def_backup.sql -p 8000 -h 10.10.10.100 gaussdb -s -F p
gs_dump[port=''][gaussdb][2017-07-20 15:04:14]: dump database gaussdb successfully
gs_dump[port=''][gaussdb][2017-07-20 15:04:14]: total time: 472 ms
Example 4: Use gs_dump to run the following command to export object definitions of the database gaussdb. The exported files are in text format and are encrypted.
gs_dump -W password -U jack -f /home//backup/postgres_def_backup.sql -p 8000 -h 10.10.10.100 gaussdb --with-encryption AES128 --with-key 1234567812345678 -s -F p
gs_dump[port=''][gaussdb][2018-11-14 11:25:18]: dump database gaussdb successfully
gs_dump[port=''][gaussdb][2018-11-14 11:25:18]: total time: 1161 ms
You can use gs_dump to export data and all object definitions of a schema from GaussDB(DWS). You can export one or more specified schemas as needed. You can specify the information to be exported as follows:
+The user who uploads the client must have the full control permission on the target directory on the host to which the client is uploaded.
cd <Path_for_storing_the_client>
unzip dws_client_8.1.x_redhat_x64.zip
Where,
source gsql_env.sh
If the following information is displayed, the GaussDB(DWS) client is successfully configured:
All things done.
gs_dump -W Password -U jack -f /home//backup/MPPDB_schema_backup -p 8000 -h 10.10.10.100 human_resource -n hr -F d
| Parameter | Description | Example Value |
|---|---|---|
| -U | Username for connecting to the database. If this parameter is not configured, the username of the connected database is used. | -U jack |
| -W | User password for database connection. | -W Password |
| -f | Folder to store exported files. If this parameter is not specified, the exported files are stored in the standard output. | -f /home//backup/MPPDB_schema_backup |
| -p | Name extension of the TCP port on which the server is listening or the local Unix domain socket. This parameter is configured to ensure connections. | -p 8000 |
| -h | Cluster address: If a public network address is used for connection, set this parameter to Public Network Address or Public Network Domain Name. If a private network address is used for connection, set this parameter to Private Network Address or Private Network Domain Name. | -h 10.10.10.100 |
| dbname | Name of the database to be exported. | human_resource |
| -n | Names of the schemas to be exported. Data of the specified schemas will also be exported. You can specify multiple schemas by repeating -n. | -n hr |
| -F | Format of exported files: p (plain text), c (custom), d (directory), or t (.tar). | -F d |
For details about other parameters, see "gs_dump" in the Tool Guide.
gs_dump -W password -U jack -f /home//backup/MPPDB_schema_backup.sql -p 8000 -h 10.10.10.100 human_resource -n hr -Z 6 -F p
gs_dump[port=''][human_resource][2017-07-21 16:05:55]: dump database human_resource successfully
gs_dump[port=''][human_resource][2017-07-21 16:05:55]: total time: 2425 ms

gs_dump -W password -U jack -f /home//backup/MPPDB_schema_data_backup.tar -p 8000 -h 10.10.10.100 human_resource -n hr -a -F t
gs_dump[port=''][human_resource][2018-11-14 15:07:16]: dump database human_resource successfully
gs_dump[port=''][human_resource][2018-11-14 15:07:16]: total time: 1865 ms

gs_dump -W password -U jack -f /home//backup/MPPDB_schema_def_backup -p 8000 -h 10.10.10.100 human_resource -n hr -s -F d
gs_dump[port=''][human_resource][2018-11-14 15:11:34]: dump database human_resource successfully
gs_dump[port=''][human_resource][2018-11-14 15:11:34]: total time: 1652 ms

gs_dump -W password -U jack -f /home//backup/MPPDB_schema_backup.dmp -p 8000 -h 10.10.10.100 human_resource -N hr -F c
gs_dump[port=''][human_resource][2017-07-21 16:06:31]: dump database human_resource successfully
gs_dump[port=''][human_resource][2017-07-21 16:06:31]: total time: 2522 ms

gs_dump -W password -U jack -f /home//backup/MPPDB_schema_backup1.tar -p 8000 -h 10.10.10.100 human_resource -n hr -n public -s --with-encryption AES128 --with-key 1234567812345678 -F t
gs_dump[port=''][human_resource][2017-07-21 16:07:16]: dump database human_resource successfully
gs_dump[port=''][human_resource][2017-07-21 16:07:16]: total time: 2132 ms

gs_dump -W password -U jack -f /home//backup/MPPDB_schema_backup2.dmp -p 8000 -h 10.10.10.100 human_resource -N hr -N public -F c
gs_dump[port=''][human_resource][2017-07-21 16:07:55]: dump database human_resource successfully
gs_dump[port=''][human_resource][2017-07-21 16:07:55]: total time: 2296 ms
+Example 7: Use gs_dump to run the following command to export all tables, including views, sequences, and foreign tables, in the public schema, and the staffs table in the hr schema, including data and table definition. The exported files are in a custom format.
gs_dump -W password -U jack -f /home//backup/MPPDB_backup3.dmp -p 8000 -h 10.10.10.100 human_resource -t public.* -t hr.staffs -F c
gs_dump[port=''][human_resource][2018-12-13 09:40:24]: dump database human_resource successfully
gs_dump[port=''][human_resource][2018-12-13 09:40:24]: total time: 896 ms
+You can use gs_dump to export data and all object definitions of a table-level object from GaussDB(DWS). Views, sequences, and foreign tables are special tables. You can export one or more specified tables as needed. You can specify the information to be exported as follows:
+The user who uploads the client must have the full control permission on the target directory on the host to which the client is uploaded.
cd <Path_for_storing_the_client>
unzip dws_client_8.1.x_redhat_x64.zip
Where,
source gsql_env.sh
If the following information is displayed, the GaussDB(DWS) client is successfully configured:
All things done.
gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup -p 8000 -h 10.10.10.100 human_resource -t hr.staffs -F d
| Parameter | Description | Example Value |
|---|---|---|
| -U | Username for connecting to the database. If this parameter is not configured, the username of the connected database is used. | -U jack |
| -W | User password for database connection. | -W password |
| -f | Folder to store exported files. If this parameter is not specified, the exported files are stored in the standard output. | -f /home//backup/MPPDB_table_backup |
| -p | Name extension of the TCP port on which the server is listening or the local Unix domain socket. This parameter is configured to ensure connections. | -p 8000 |
| -h | Cluster address: If a public network address is used for connection, set this parameter to Public Network Address or Public Network Domain Name. If a private network address is used for connection, set this parameter to Private Network Address or Private Network Domain Name. | -h 10.10.10.100 |
| dbname | Name of the database to be exported. | human_resource |
| -t | Table (or view, sequence, foreign table) to be exported. You can specify multiple tables by listing them or using wildcard characters. When you use wildcard characters, quote wildcard patterns with single quotation marks ('') to prevent the shell from expanding them. | -t hr.staffs |
| -F | Format of exported files: p (plain text), c (custom), d (directory), or t (.tar). | -F d |
For details about other parameters, see "gs_dump" in the Tool Guide.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup.sql -p 8000 -h 10.10.10.100 human_resource -t hr.staffs -Z 6 -F p
+gs_dump[port=''][human_resource][2017-07-21 17:05:10]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2017-07-21 17:05:10]: total time: 3116 ms
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_data_backup.tar -p 8000 -h 10.10.10.100 human_resource -t hr.staffs -a -F t
+gs_dump[port=''][human_resource][2017-07-21 17:04:26]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2017-07-21 17:04:26]: total time: 2570 ms
+Example 3: Use gs_dump to run the following command to export only the definition of the hr.staffs table. The exported files are in directory format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_def_backup -p 8000 -h 10.10.10.100 human_resource -t hr.staffs -s -F d
+gs_dump[port=''][human_resource][2017-07-21 17:03:09]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2017-07-21 17:03:09]: total time: 2297 ms
+Example 4: Use gs_dump to run the following command to export the human_resource database, excluding the hr.staffs table. The exported files are in a custom format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup4.dmp -p 8000 -h 10.10.10.100 human_resource -T hr.staffs -F c
+gs_dump[port=''][human_resource][2017-07-21 17:14:11]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2017-07-21 17:14:11]: total time: 2450 ms
+Example 5: Use gs_dump to run the following command to export the hr.staffs and hr.employments tables. The exported files are in text format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup1.sql -p 8000 -h 10.10.10.100 human_resource -t hr.staffs -t hr.employments -F p
+gs_dump[port=''][human_resource][2017-07-21 17:19:42]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2017-07-21 17:19:42]: total time: 2414 ms
+Example 6: Use gs_dump to run the following command to export the human_resource database, excluding the hr.staffs and hr.employments tables. The exported files are in text format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup2.sql -p 8000 -h 10.10.10.100 human_resource -T hr.staffs -T hr.employments -F p
+gs_dump[port=''][human_resource][2017-07-21 17:21:02]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2017-07-21 17:21:02]: total time: 3165 ms
+Example 7: Use gs_dump to run the following command to export data and definition of the hr.staffs table, and the definition of the hr.employments table. The exported files are in .tar format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup3.tar -p 8000 -h 10.10.10.100 human_resource -t hr.staffs -t hr.employments --exclude-table-data hr.employments -F t
+gs_dump[port=''][human_resource][2018-11-14 11:32:02]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2018-11-14 11:32:02]: total time: 1645 ms
+Example 8: Use gs_dump to run the following command to export data and definition of the hr.staffs table, encrypt the exported files, and store them in text format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup4.sql -p 8000 -h 10.10.10.100 human_resource -t hr.staffs --with-encryption AES128 --with-key 1212121212121212 -F p
+gs_dump[port=''][human_resource][2018-11-14 11:35:30]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2018-11-14 11:35:30]: total time: 6708 ms
+Example 9: Use gs_dump to run the following command to export all tables, including views, sequences, and foreign tables, in the public schema, and the staffs table in the hr schema, including data and table definition. The exported files are in a custom format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_table_backup5.dmp -p 8000 -h 10.10.10.100 human_resource -t public.* -t hr.staffs -F c
+gs_dump[port=''][human_resource][2018-12-13 09:40:24]: dump database human_resource successfully
+gs_dump[port=''][human_resource][2018-12-13 09:40:24]: total time: 896 ms
+Example 10: Use gs_dump to run the following command to export the definition of the view that references the test1 table in the t1 schema. The exported files are in directory format.
+gs_dump -W password -U jack -f /home//backup/MPPDB_view_backup6 -p 8000 -h 10.10.10.100 human_resource -t t1.test1 --include-depend-objs --exclude-self -F d
+gs_dump[port=''][jack][2018-11-14 17:21:18]: dump database human_resource successfully
+gs_dump[port=''][jack][2018-11-14 17:21:23]: total time: 4239 ms
+You can use gs_dumpall to export the full information of all databases in a cluster from GaussDB(DWS), including information about each database and the global objects in the cluster. You can specify the information to be exported as follows:
+Full information of all databases, which you can use to create an identical cluster containing the same databases, global objects, and data as the current one.
+Object definitions of all databases only, which you can use to quickly create an identical cluster containing the same databases and tablespaces but without data.
+Run the following command to export information of all databases:
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_backup.sql -p 8000 -h 10.10.10.100
+| Parameter | Description | Example Value |
+|---|---|---|
+| -U | Username for database connection. The user must be a cluster administrator. | -U dbadmin |
+| -W | User password for database connection. | -W password |
+| -f | Folder to store exported files. If this parameter is not specified, the exported files are sent to the standard output. | -f /home/dbadmin/backup/MPPDB_backup.sql |
+| -p | TCP port on which the server is listening, or the local Unix domain socket file name extension, used to establish the connection. | -p 8000 |
+| -h | Cluster address. If a public network address is used for connection, set this parameter to Public Network Address or Public Network Domain Name. If a private network address is used for connection, set this parameter to Private Network Address or Private Network Domain Name. | -h 10.10.10.100 |
For details about other parameters, see "gs_dumpall" in the Tool Guide.
+Example 1: Use gs_dumpall to run the following command as the cluster administrator dbadmin to export information of all databases in a cluster. The exported files are in text format. After the command is executed, a large amount of output information will be displayed. total time will be displayed at the end of the information, indicating that the export is successful. In this example, only related output information is included.
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_backup.sql -p 8000 -h 10.10.10.100
+gs_dumpall[port=''][2017-07-21 15:57:31]: dumpall operation successful
+gs_dumpall[port=''][2017-07-21 15:57:31]: total time: 9627 ms
Example 2: Use gs_dumpall to run the following command as the cluster administrator dbadmin to export definitions of all databases in a cluster. The exported files are in text format. After the command is executed, a large amount of output information will be displayed. total time will be displayed at the end of the information, indicating that the export is successful. In this example, only related output information is included.
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_backup.sql -p 8000 -h 10.10.10.100 -s
+gs_dumpall[port=''][2018-11-14 11:28:14]: dumpall operation successful
+gs_dumpall[port=''][2018-11-14 11:28:14]: total time: 4147 ms
+Example 3: Use gs_dumpall to run the following command to export the data of all databases in a cluster, encrypt the exported files, and store them in text format. After the command is executed, a large amount of output information will be displayed. total time will be displayed at the end of the information, indicating that the export is successful. In this example, only related output information is included.
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_backup.sql -p 8000 -h 10.10.10.100 -a --with-encryption AES128 --with-key 1234567812345678
+gs_dumpall[port=''][2018-11-14 11:32:26]: dumpall operation successful
+gs_dumpall[port=''][2018-11-14 11:23:26]: total time: 4147 ms
You can use gs_dumpall to export global objects from GaussDB(DWS), including database users, user groups, tablespaces, and attributes (for example, global access permissions).
+Run the following command to export tablespace information:
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_tablespace.sql -p 8000 -h 10.10.10.100 -t
+| Parameter | Description | Example Value |
+|---|---|---|
+| -U | Username for database connection. The user must be a cluster administrator. | -U dbadmin |
+| -W | User password for database connection. | -W password |
+| -f | Folder to store exported files. If this parameter is not specified, the exported files are sent to the standard output. | -f /home/dbadmin/backup/MPPDB_tablespace.sql |
+| -p | TCP port on which the server is listening, or the local Unix domain socket file name extension, used to establish the connection. | -p 8000 |
+| -h | Cluster address. If a public network address is used for connection, set this parameter to Public Network Address or Public Network Domain Name. If a private network address is used for connection, set this parameter to Private Network Address or Private Network Domain Name. | -h 10.10.10.100 |
+| -t | Dumps only tablespaces. Alternatively, you can use --tablespaces-only. | - |
For details about other parameters, see "gs_dumpall" in the Tool Guide.
+Example 1: Use gs_dumpall to run the following command as the cluster administrator dbadmin to export information of global tablespaces and users in a cluster. The exported files are in text format.
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_globals.sql -p 8000 -h 10.10.10.100 -g
+gs_dumpall[port=''][2018-11-14 19:06:24]: dumpall operation successful
+gs_dumpall[port=''][2018-11-14 19:06:24]: total time: 1150 ms
Example 2: Use gs_dumpall to run the following command as the cluster administrator dbadmin to export global tablespaces in a cluster, encrypt the exported files, and store them in text format.
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_tablespace.sql -p 8000 -h 10.10.10.100 -t --with-encryption AES128 --with-key 1212121212121212
+gs_dumpall[port=''][2018-11-14 19:00:58]: dumpall operation successful
+gs_dumpall[port=''][2018-11-14 19:00:58]: total time: 186 ms
Example 3: Use gs_dumpall to run the following command as the cluster administrator dbadmin to export information of global users in a cluster. The exported files are in text format.
+gs_dumpall -W password -U dbadmin -f /home/dbadmin/backup/MPPDB_user.sql -p 8000 -h 10.10.10.100 -r
+gs_dumpall[port=''][2018-11-14 19:03:18]: dumpall operation successful
+gs_dumpall[port=''][2018-11-14 19:03:18]: total time: 162 ms
gs_dump and gs_dumpall use -U to specify the user that performs the export. If the specified user does not have the required permission, data cannot be exported. In this case, you can set --role in the export command to the role that has the permission. Then, gs_dump or gs_dumpall uses the specified role to export data.
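+For reference, the switch that gs_dump or gs_dumpall performs after connecting is roughly equivalent to issuing the following statement in the session. This is an illustrative sketch of the mechanism only; the tools issue it themselves when --role and --rolepassword are set:
+-- Issued automatically by gs_dump/gs_dumpall when --role and --rolepassword are set.
+SET ROLE role1 PASSWORD 'password';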
+Run the following command to export the human_resource database as user jack, switching to role role1:
+gs_dump -U jack -W password -f /home//backup/MPPDB_backup.tar -p 8000 -h 10.10.10.100 human_resource --role role1 --rolepassword password -F t
+| Parameter | Description | Example Value |
+|---|---|---|
+| -U | Username for database connection. | -U jack |
+| -W | User password for database connection. | -W password |
+| -f | Folder to store exported files. If this parameter is not specified, the exported files are sent to the standard output. | -f /home//backup/MPPDB_backup.tar |
+| -p | TCP port on which the server is listening, or the local Unix domain socket file name extension, used to establish the connection. | -p 8000 |
+| -h | Cluster address. If a public network address is used for connection, set this parameter to Public Network Address or Public Network Domain Name. If a private network address is used for connection, set this parameter to Private Network Address or Private Network Domain Name. | -h 10.10.10.100 |
+| dbname | Name of the database to be exported. | human_resource |
+| --role | Role name for the export operation. After this parameter is set and gs_dump or gs_dumpall connects to the database, the SET ROLE command will be issued. When the user specified by -U does not have the permissions required by gs_dump or gs_dumpall, this parameter allows the user to switch to a role with the required permissions. | --role role1 |
+| --rolepassword | Role password. | --rolepassword password |
+| -F | Format of exported files. The values of -F are: p (plain text), c (custom), d (directory), and t (tar). | -F t |
For details about other parameters, see "gs_dump" or "gs_dumpall" in the Tool Guide.
+Example 1: User jack does not have the permission for exporting data of the human_resource database and the role role1 has this permission. To export data of the human_resource database, you can set --role to role1 in the export command. The exported files are in .tar format.
+human_resource=# CREATE USER jack IDENTIFIED BY "password";
+gs_dump -U jack -W password -f /home//backup/MPPDB_backup11.tar -p 8000 -h 10.10.10.100 human_resource --role role1 --rolepassword password -F t
+gs_dump[port='8000'][human_resource][2017-07-21 16:21:10]: dump database human_resource successfully
+gs_dump[port='8000'][human_resource][2017-07-21 16:21:10]: total time: 4239 ms
Example 2: User jack does not have the permission for exporting the public schema and the role role1 has this permission. To export the public schema, you can set --role to role1 in the export command. The exported files are in .tar format.
+human_resource=# CREATE USER jack IDENTIFIED BY "password";
+gs_dump -U jack -W password -f /home//backup/MPPDB_backup12.tar -p 8000 -h 10.10.10.100 human_resource -n public --role role1 --rolepassword password -F t
+gs_dump[port='8000'][human_resource][2017-07-21 16:21:10]: dump database human_resource successfully
+gs_dump[port='8000'][human_resource][2017-07-21 16:21:10]: total time: 3278 ms
Example 3: User jack does not have the permission for exporting all databases in a cluster and the role role1 has this permission. To export all databases, you can set --role to role1 in the export command. The exported files are in text format.
+human_resource=# CREATE USER jack IDENTIFIED BY "password";
+gs_dumpall -U jack -W password -f /home//backup/MPPDB_backup.sql -p 8000 -h 10.10.10.100 --role role1 --rolepassword password
+gs_dumpall[port='8000'][human_resource][2018-11-14 17:26:18]: dumpall operation successful
+gs_dumpall[port='8000'][human_resource][2018-11-14 17:26:18]: total time: 6437 ms
+CREATE FOREIGN TABLE foreign_test_pipe_tr( like test_pipe ) SERVER gsmpp_server OPTIONS (LOCATION 'gsfs://192.168.0.1:7789/foreign_test_*', FORMAT 'text', DELIMITER ',', NULL '', EOL '0x0a', file_type 'pipe', auto_create_pipe 'false');
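+For reference, a minimal import sketch through this foreign table might look as follows; it assumes the local table test_pipe exists and that a writer process is attached to the pipe (required here because auto_create_pipe is 'false'):
+-- Reading from the GDS pipe foreign table pulls rows from the pipe into the local table.
+INSERT INTO test_pipe SELECT * FROM foreign_test_pipe_tr;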
Locating method: The type of the GDS foreign table file_type is pipe, but the operated file is a common file. Check whether the postgres_public_foreign_test_pipe_tr.pipe file is a pipe file.
+Locating method: GDS does not have the permission to open the pipe file.
+Locating method: Opening the pipe times out when GDS is used to export data. This occurs because the pipe is not created within 300 seconds after auto_create_pipe is set to 'false', or the pipe is created but is not read by any program within 300 seconds.
+Locating method: Opening the pipe times out when GDS is used to import data. This occurs because the pipe is not created within 300 seconds after auto_create_pipe is set to 'false', or the pipe is created but is not written by any program within 300 seconds.
+Locating method: If the GDS does not receive any write event on the pipe within 300 seconds during data export, the pipe is not read for more than 300 seconds.
+Locating method: If the GDS does not receive any read event on the pipe within 300 seconds during data import, the pipe is not written for more than 300 seconds.
+Locating method: It indicates that the /***/postgres_public_foreign_test_pipe_tw.pipe file is not read by any program. As a result, GDS cannot open the pipe file for writing.
+GaussDB(DWS) provides PostGIS Extension (PostGIS-2.4.2). PostGIS Extension is a spatial database extender for PostgreSQL. It provides the following spatial information services: spatial objects, spatial indexes, spatial functions, and spatial operators. PostGIS Extension complies with the OpenGIS specifications.
+In GaussDB(DWS), PostGIS Extension depends on third-party open-source software, such as Geos (see the open source software notice at the end of this section).
+Run the CREATE EXTENSION command to create PostGIS Extension.
+CREATE EXTENSION postgis;
+Invoke a PostGIS function as follows:
+SELECT GisFunction(Param1, Param2, ...);
+GisFunction is the function, and Param1 and Param2 are its parameters. The following SQL statements are a simple illustration of PostGIS usage. For details about related functions, see the PostGIS 2.4.2 Manual.
+Example 1: Create a geometry table.
+CREATE TABLE cities ( id integer, city_name varchar(50) );
+SELECT AddGeometryColumn('cities', 'position', 4326, 'POINT', 2);
Example 2: Insert geometry data.
+INSERT INTO cities (id, position, city_name) VALUES (1, ST_GeomFromText('POINT(-9.5 23)',4326), 'CityA');
+INSERT INTO cities (id, position, city_name) VALUES (2, ST_GeomFromText('POINT(-10.6 40.3)',4326), 'CityB');
+INSERT INTO cities (id, position, city_name) VALUES (3, ST_GeomFromText('POINT(20.8 30.3)',4326), 'CityC');
Example 3: Calculate the distance between any two cities among three cities.
+SELECT p1.city_name, p2.city_name, ST_Distance(p1.position, p2.position) FROM cities AS p1, cities AS p2 WHERE p1.id > p2.id;
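+To restrict the result to city pairs within a given distance, you can use ST_DWithin (listed in the function table below). The distance threshold of 15, in the units of the coordinate system, is an assumed value for illustration:
+-- Return only city pairs whose positions lie within 15 units of each other.
+SELECT p1.city_name, p2.city_name FROM cities AS p1, cities AS p2 WHERE p1.id > p2.id AND ST_DWithin(p1.position, p2.position, 15);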
Run the following command to delete PostGIS Extension from GaussDB(DWS):
+DROP EXTENSION postgis [CASCADE];
+If other objects (for example, geometry tables) depend on PostGIS Extension, add the CASCADE keyword to delete all these objects together with the extension.
+In GaussDB(DWS), PostGIS Extension supports the following data types: box2d, box3d, geometry, geometry_dump, and geography.
+If you call PostGIS functions without schema qualification and the functions are not in the current schema, set the following option so that they are searched for in all schemas of search_path:
+SET behavior_compat_options = 'bind_procedure_searchpath';
+| Category | Function |
+|---|---|
+| Management functions | AddGeometryColumn, DropGeometryColumn, DropGeometryTable, PostGIS_Full_Version, PostGIS_GEOS_Version, PostGIS_Liblwgeom_Version, PostGIS_Lib_Build_Date, PostGIS_Lib_Version, PostGIS_PROJ_Version, PostGIS_Scripts_Build_Date, PostGIS_Scripts_Installed, PostGIS_Version, PostGIS_LibXML_Version, PostGIS_Scripts_Released, Populate_Geometry_Columns, UpdateGeometrySRID |
+| Geometry constructors | ST_BdPolyFromText, ST_BdMPolyFromText, ST_Box2dFromGeoHash, ST_GeogFromText, ST_GeographyFromText, ST_GeogFromWKB, ST_GeomCollFromText, ST_GeomFromEWKB, ST_GeomFromEWKT, ST_GeometryFromText, ST_GeomFromGeoHash, ST_GeomFromGML, ST_GeomFromGeoJSON, ST_GeomFromKML, ST_GMLToSQL, ST_GeomFromText, ST_GeomFromWKB, ST_LineFromMultiPoint, ST_LineFromText, ST_LineFromWKB, ST_LinestringFromWKB, ST_MakeBox2D, ST_3DMakeBox, ST_MakeEnvelope, ST_MakePolygon, ST_MakePoint, ST_MakePointM, ST_MLineFromText, ST_MPointFromText, ST_MPolyFromText, ST_Point, ST_PointFromGeoHash, ST_PointFromText, ST_PointFromWKB, ST_Polygon, ST_PolygonFromText, ST_WKBToSQL, ST_WKTToSQL |
+| Geometry accessors | GeometryType, ST_Boundary, ST_CoordDim, ST_Dimension, ST_EndPoint, ST_Envelope, ST_ExteriorRing, ST_GeometryN, ST_GeometryType, ST_InteriorRingN, ST_IsClosed, ST_IsCollection, ST_IsEmpty, ST_IsRing, ST_IsSimple, ST_IsValid, ST_IsValidReason, ST_IsValidDetail, ST_M, ST_NDims, ST_NPoints, ST_NRings, ST_NumGeometries, ST_NumInteriorRings, ST_NumInteriorRing, ST_NumPatches, ST_NumPoints, ST_PatchN, ST_PointN, ST_SRID, ST_StartPoint, ST_Summary, ST_X, ST_XMax, ST_XMin, ST_Y, ST_YMax, ST_YMin, ST_Z, ST_ZMax, ST_Zmflag, ST_ZMin |
+| Geometry editors | ST_AddPoint, ST_Affine, ST_Force2D, ST_Force3D, ST_Force3DZ, ST_Force3DM, ST_Force4D, ST_ForceCollection, ST_ForceSFS, ST_ForceRHR, ST_LineMerge, ST_CollectionExtract, ST_CollectionHomogenize, ST_Multi, ST_RemovePoint, ST_Reverse, ST_Rotate, ST_RotateX, ST_RotateY, ST_RotateZ, ST_Scale, ST_Segmentize, ST_SetPoint, ST_SetSRID, ST_SnapToGrid, ST_Snap, ST_Transform, ST_Translate, ST_TransScale |
+| Geometry outputs | ST_AsBinary, ST_AsEWKB, ST_AsEWKT, ST_AsGeoJSON, ST_AsGML, ST_AsHEXEWKB, ST_AsKML, ST_AsLatLonText, ST_AsSVG, ST_AsText, ST_AsX3D, ST_GeoHash |
+| Operators | &&, &&&, &<, &<\|, &>, <<, <<\|, =, >>, @, \|&>, \|>>, ~, ~=, <->, <#> |
+| Spatial relationships and measurements | ST_3DClosestPoint, ST_3DDistance, ST_3DDWithin, ST_3DDFullyWithin, ST_3DIntersects, ST_3DLongestLine, ST_3DMaxDistance, ST_3DShortestLine, ST_Area, ST_Azimuth, ST_Centroid, ST_ClosestPoint, ST_Contains, ST_ContainsProperly, ST_Covers, ST_CoveredBy, ST_Crosses, ST_LineCrossingDirection, ST_Disjoint, ST_Distance, ST_HausdorffDistance, ST_MaxDistance, ST_DistanceSphere, ST_DistanceSpheroid, ST_DFullyWithin, ST_DWithin, ST_Equals, ST_HasArc, ST_Intersects, ST_Length, ST_Length2D, ST_3DLength, ST_Length_Spheroid, ST_Length2D_Spheroid, ST_3DLength_Spheroid, ST_LongestLine, ST_OrderingEquals, ST_Overlaps, ST_Perimeter, ST_Perimeter2D, ST_3DPerimeter, ST_PointOnSurface, ST_Project, ST_Relate, ST_RelateMatch, ST_ShortestLine, ST_Touches, ST_Within |
+| Geometry processing | ST_Buffer, ST_BuildArea, ST_Collect, ST_ConcaveHull, ST_ConvexHull, ST_CurveToLine, ST_DelaunayTriangles, ST_Difference, ST_Dump, ST_DumpPoints, ST_DumpRings, ST_FlipCoordinates, ST_Intersection, ST_LineToCurve, ST_MakeValid, ST_MemUnion, ST_MinimumBoundingCircle, ST_Polygonize, ST_Node, ST_OffsetCurve, ST_RemoveRepeatedPoints, ST_SharedPaths, ST_Shift_Longitude, ST_Simplify, ST_SimplifyPreserveTopology, ST_Split, ST_SymDifference, ST_Union, ST_UnaryUnion |
+| Linear referencing | ST_LineInterpolatePoint, ST_LineLocatePoint, ST_LineSubstring, ST_LocateAlong, ST_LocateBetween, ST_LocateBetweenElevations, ST_InterpolatePoint, ST_AddMeasure |
+| Miscellaneous functions | ST_Accum, Box2D, Box3D, ST_Expand, ST_Extent, ST_3Dextent, Find_SRID, ST_MemSize |
+| Exceptional functions | PostGIS_AddBBox, PostGIS_DropBBox, PostGIS_HasBBox |
+| Raster management functions | AddRasterConstraints, DropRasterConstraints, AddOverviewConstraints, DropOverviewConstraints, PostGIS_GDAL_Version, PostGIS_Raster_Lib_Build_Date, PostGIS_Raster_Lib_Version, ST_GDALDrivers, UpdateRasterSRID |
+| Raster constructors | ST_AddBand, ST_AsRaster, ST_Band, ST_MakeEmptyRaster, ST_Tile, ST_FromGDALRaster |
+| Raster accessors | ST_GeoReference, ST_Height, ST_IsEmpty, ST_MetaData, ST_NumBands, ST_PixelHeight, ST_PixelWidth, ST_ScaleX, ST_ScaleY, ST_RasterToWorldCoord, ST_RasterToWorldCoordX, ST_RasterToWorldCoordY, ST_Rotation, ST_SkewX, ST_SkewY, ST_SRID, ST_Summary, ST_UpperLeftX, ST_UpperLeftY, ST_Width, ST_WorldToRasterCoord, ST_WorldToRasterCoordX, ST_WorldToRasterCoordY |
+| Raster band accessors | ST_BandMetaData, ST_BandNoDataValue, ST_BandIsNoData, ST_BandPath, ST_BandPixelType, ST_HasNoBand |
+| Raster pixel accessors and setters | ST_PixelAsPolygon, ST_PixelAsPolygons, ST_PixelAsPoint, ST_PixelAsPoints, ST_PixelAsCentroid, ST_PixelAsCentroids, ST_Value, ST_NearestValue, ST_Neighborhood, ST_SetValue, ST_SetValues, ST_DumpValues, ST_PixelOfValue |
+| Raster editors | ST_SetGeoReference, ST_SetRotation, ST_SetScale, ST_SetSkew, ST_SetSRID, ST_SetUpperLeft, ST_Resample, ST_Rescale, ST_Reskew, ST_SnapToGrid, ST_Resize, ST_Transform |
+| Raster band editors | ST_SetBandNoDataValue, ST_SetBandIsNoData |
+| Raster band statistics and analytics | ST_Count, ST_CountAgg, ST_Histogram, ST_Quantile, ST_SummaryStats, ST_SummaryStatsAgg, ST_ValueCount |
+| Raster outputs | ST_AsBinary, ST_AsGDALRaster, ST_AsJPEG, ST_AsPNG, ST_AsTIFF |
+| Raster processing | ST_Clip, ST_ColorMap, ST_Intersection, ST_MapAlgebra, ST_Reclass, ST_Union, ST_Distinct4ma, ST_InvDistWeight4ma, ST_Max4ma, ST_Mean4ma, ST_Min4ma, ST_MinDist4ma, ST_Range4ma, ST_StdDev4ma, ST_Sum4ma, ST_Aspect, ST_HillShade, ST_Roughness, ST_Slope, ST_TPI, ST_TRI, Box3D, ST_ConvexHull, ST_DumpAsPolygons, ST_Envelope, ST_MinConvexHull, ST_Polygon, ST_Contains, ST_ContainsProperly, ST_Covers, ST_CoveredBy, ST_Disjoint, ST_Intersects, ST_Overlaps, ST_Touches, ST_SameAlignment, ST_NotSameAlignmentReason, ST_Within, ST_DWithin, ST_DFullyWithin |
+| Raster operators | &&, &<, &>, =, @, ~=, ~ |
+In GaussDB(DWS), PostGIS Extension supports Generalized Search Tree (GiST) spatial indexes. This index type is inapplicable to partitioned tables. Different from B-tree indexes, GiST indexes are adaptable to all kinds of irregular data structures, which can effectively improve retrieval efficiency for geometry and geographic data.
+Run the following command to create a GiST index:
+CREATE INDEX indexname ON tablename USING GIST ( geometryfield );
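+For example, a bounding-box filter such as the following can use the GiST index. This is a minimal sketch against the cities table created earlier; the index name and envelope coordinates are illustrative assumptions:
+CREATE INDEX cities_position_gist ON cities USING GIST (position);
+-- The && operator compares bounding boxes and can be accelerated by the GiST index.
+SELECT city_name FROM cities WHERE position && ST_MakeEnvelope(-20, 0, 30, 50, 4326);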
This document contains open source software notice for the product. And this document is confidential information of copyright holder. Recipient shall protect it in due care and shall not disseminate it without permission.
+ +Warranty Disclaimer
+This document is provided "as is" without any warranty whatsoever, including the accuracy or comprehensiveness. Copyright holder of this document may change the contents of this document at any time without prior notice, and copyright holder disclaims any liability in relation to recipient's use of this document.
+Open source software is provided by the author "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the author be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of data or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of open source software, even if advised of the possibility of such damage.
+ +Copyright Notice And License Texts
+Software: postgis-2.4.2
+Copyright notice:
+"Copyright (C) 1996-2015 Free Software Foundation, Inc.
+Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+Copyright 2008 Kevin Neufeld
+Copyright (c) 2009 Walter Bruce Sinclair
+Copyright 2006-2013 Stephen Woodbridge.
+Copyright (c) 2008 Walter Bruce Sinclair
+Copyright (c) 2012 TJ Holowaychuk <tj@vision-media.ca>
+Copyright (c) 2008, by Attractive Chaos <attractivechaos@aol.co.uk>
+Copyright (c) 2001-2012 Walter Bruce Sinclair
+Copyright (c) 2010 Walter Bruce Sinclair
+Copyright 2006 Stephen Woodbridge
+Copyright 2006-2010 Stephen Woodbridge.
+Copyright (c) 2006-2014 Stephen Woodbridge.
+Copyright (c) 2017, Even Rouault <even.rouault at spatialys.com>
+Copyright (C) 2004-2015 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2008-2011 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2008 Mark Cave-Ayland <mark.cave-ayland@siriusit.co.uk>
+Copyright 2015 Nicklas Avén <nicklas.aven@jordogskog.no>
+Copyright 2008 Paul Ramsey
+Copyright (C) 2012 Sandro Santilli <strk@kbt.io>
+Copyright 2012 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2014 Sandro Santilli <strk@kbt.io>
+Copyright 2013 Olivier Courtin <olivier.courtin@oslandia.com>
+Copyright 2009 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright 2008 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright 2011 Sandro Santilli <strk@kbt.io>
+Copyright 2015 Daniel Baston
+Copyright 2009 Olivier Courtin <olivier.courtin@oslandia.com>
+Copyright 2014 Kashif Rasul <kashif.rasul@gmail.com> and
+Shoaib Burq <saburq@gmail.com>
+Copyright 2013 Sandro Santilli <strk@kbt.io>
+Copyright 2010 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2017 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2015 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2009 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2011 Sandro Santilli <strk@kbt.io>
+Copyright 2010 Olivier Courtin <olivier.courtin@oslandia.com>
+Copyright 2014 Nicklas Avén
+Copyright 2011-2016 Regina Obe
+Copyright (C) 2008 Paul Ramsey
+Copyright (C) 2011-2015 Sandro Santilli <strk@kbt.io>
+Copyright 2010-2012 Olivier Courtin <olivier.courtin@oslandia.com>
+Copyright (C) 2015 Daniel Baston <dbaston@gmail.com>
+Copyright (C) 2013 Nicklas Avén
+Copyright (C) 2016 Sandro Santilli <strk@kbt.io>
+Copyright 2017 Darafei Praliaskouski <me@komzpa.net>
+Copyright (c) 2016, Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2011-2012 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2011 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2007-2008 Mark Cave-Ayland
+Copyright (C) 2001-2006 Refractions Research Inc.
+Copyright 2015 Daniel Baston <dbaston@gmail.com>
+Copyright 2009 David Skea <David.Skea@gov.bc.ca>
+Copyright (C) 2012-2015 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2012-2015 Sandro Santilli <strk@kbt.io>
+Copyright 2001-2006 Refractions Research Inc.
+Copyright (C) 2004 Refractions Research Inc.
+Copyright 2011-2014 Sandro Santilli <strk@kbt.io>
+Copyright 2009-2010 Sandro Santilli <strk@kbt.io>
+Copyright 2015-2016 Daniel Baston <dbaston@gmail.com>
+Copyright 2011-2015 Sandro Santilli <strk@kbt.io>
+Copyright 2007-2008 Mark Cave-Ayland
+Copyright 2012-2013 Oslandia <infos@oslandia.com>
+Copyright (C) 2015-2017 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2001-2003 Refractions Research Inc.
+Copyright 2016 Sandro Santilli <strk@kbt.io>
+Copyright 2011 Kashif Rasul <kashif.rasul@gmail.com>
+Copyright (C) 2014 Nicklas Avén
+Copyright (C) 2010 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2010-2015 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2011 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2011-2014 Sandro Santilli <strk@kbt.io>
+Copyright (C) 1984, 1989-1990, 2000-2015 Free Software Foundation, Inc.
+Copyright (C) 2011 Paul Ramsey
+Copyright 2001-2003 Refractions Research Inc.
+Copyright 2009-2010 Olivier Courtin <olivier.courtin@oslandia.com>
+Copyright 2010-2012 Oslandia
+Copyright 2006 Corporacion Autonoma Regional de Santander
+Copyright 2013 Nicklas Avén
+Copyright 2011-2016 Arrival 3D, Regina Obe
+Copyright (C) 2009 David Skea <David.Skea@gov.bc.ca>
+Copyright (C) 2017 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2009-2012 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2010 - Oslandia
+Copyright (C) 2006 Mark Leslie <mark.leslie@lisasoft.com>
+Copyright (C) 2008-2009 Mark Cave-Ayland <mark.cave-ayland@siriusit.co.uk>
+Copyright (C) 2009-2015 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2010 Olivier Courtin <olivier.courtin@camptocamp.com>
+Copyright 2010 Nicklas Avén
+Copyright 2012 Paul Ramsey
+Copyright 2011 Nicklas Avén
+Copyright 2002 Thamer Alharbash
+Copyright 2011 OSGeo
+Copyright (C) 2009-2011 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2008 Mark Cave-Ayland <mark.cave-ayland@siriusit.co.uk>
+Copyright (C) 2004-2007 Refractions Research Inc.
+Copyright 2010 LISAsoft Pty Ltd
+Copyright 2010 Mark Leslie
+Copyright (c) 1999, Frank Warmerdam
+Copyright 2009 Mark Cave-Ayland <mark.cave-ayland@siriusit.co.uk>
+Copyright (c) 2007, Frank Warmerdam
+Copyright 2008 OpenGeo.org
+Copyright (C) 2008 OpenGeo.org
+Copyright (C) 2009 Mark Cave-Ayland <mark.cave-ayland@siriusit.co.uk>
+Copyright 2010 LISAsoft
+Copyright (C) 2010 Mark Cave-Ayland <mark.cave-ayland@siriusit.co.uk>
+Copyright (c) 1999, 2001, Frank Warmerdam
+Copyright (C) 2016-2017 Björn Harrtell <bjorn@wololo.org>
+Copyright (C) 2017 Danny Götte <danny.goette@fem.tu-ilmenau.de>
+Copyright 2009-2011 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright 2012 (C) Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2006 Refractions Research Inc.
+Copyright 2009 Paul Ramsey <pramsey@opengeo.org>
+Copyright 2001-2009 Refractions Research Inc.
+Copyright (C) 2010 Olivier Courtin <olivier.courtin@oslandia.com>
+By Nathan Wagner, copyright disclaimed,
+this entire file is in the public domain
+Copyright 2009-2011 Olivier Courtin <olivier.courtin@oslandia.com>
+Copyright (C) 2001-2005 Refractions Research Inc.
+Copyright 2001-2011 Refractions Research Inc.
+Copyright 2009-2014 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2008 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright (C) 2007 Refractions Research Inc.
+Copyright (C) 2010 Sandro Santilli <strk@kbt.io>
+Copyright 2012 J Smith <dark.panda@gmail.com>
+Copyright 2009 - 2010 Oslandia
+Copyright 2009 Oslandia
+Copyright 2001-2005 Refractions Research Inc.
+Copyright 2016 Paul Ramsey <pramsey@cleverelephant.ca>
+Copyright 2016 Daniel Baston <dbaston@gmail.com>
+Copyright (C) 2011 OpenGeo.org
+Copyright (c) 2003-2017, Troy D. Hanson http://troydhanson.github.com/uthash/
+Copyright (C) 2011 Regents of the University of California
+Copyright (C) 2011-2013 Regents of the University of California
+Copyright (C) 2010-2011 Jorge Arevalo <jorge.arevalo@deimos-space.com>
+Copyright (C) 2010-2011 David Zwarg <dzwarg@azavea.com>
+Copyright (C) 2009-2011 Pierre Racine <pierre.racine@sbf.ulaval.ca>
+Copyright (C) 2009-2011 Mateusz Loskot <mateusz@loskot.net>
+Copyright (C) 2008-2009 Sandro Santilli <strk@kbt.io>
+Copyright (C) 2013 Nathaneil Hunter Clay <clay.nathaniel@gmail.com>
+Copyright (C) 2013 Nathaniel Hunter Clay <clay.nathaniel@gmail.com>
+Copyright (C) 2013 Bborie Park <dustymugs@gmail.com>
+Copyright (C) 2013 Nathaniel Hunter Clay <clay.nathaniel@gmail.com>
+(C) 2009 Mateusz Loskot <mateusz@loskot.net>
+Copyright (C) 2009 Mateusz Loskot <mateusz@loskot.net>
+Copyright (C) 2009-2010 Mateusz Loskot <mateusz@loskot.net>
+Copyright (C) 2009-2010 Jorge Arevalo <jorge.arevalo@deimos-space.com>
+Copyright (C) 2012 Regents of the University of California
+Copyright (C) 2013 Regents of the University of California
+Copyright (C) 2012-2013 Regents of the University of California
+Copyright (C) 2009 Sandro Santilli <strk@kbt.io>
+"
+License: The GPL v2 License.
+GNU GENERAL PUBLIC LICENSE
+Version 2, June 1991
+ +Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
+ +Preamble
+ +The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.
+ +When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
+ +To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
+ +For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
+ +We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.
+ +Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.
+ +Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.
+The precise terms and conditions for copying, distribution and modification follow.
+GNU GENERAL PUBLIC LICENSE
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+ +0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".
+ +Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.
+ +1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.
+ +You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
+ +2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
+ +a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.
+ +b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.
+ +c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)
+These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
+ +Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.
+ +In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
+ +3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:
+ +a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
+ +b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
+ +c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)
+ +The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
+ +If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.
+ +4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
+ +5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.
+ +6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.
+ +7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.
+ +If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.
+ +It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
+ +This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
+ +8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
+ +9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
+ +Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.
+ +10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
+ +NO WARRANTY
+ +11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+ +12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+ +END OF TERMS AND CONDITIONS
+How to Apply These Terms to Your New Programs
+ +If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
+ +To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
+ +<one line to give the program's name and a brief idea of what it does.>
+Copyright (C) <year> <name of author>
+ +This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
+ +This program is distributed in the hope that it will be useful,but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+ +You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+Also add information on how to contact you by electronic and paper mail.
+ +If the program is interactive, make it output a short notice like this when it starts in an interactive mode:
+ +Gnomovision version 69, Copyright (C) year name of author
+Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
+ +The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.
+ +You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:
+ +Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.
+ +<signature of Ty Coon>, 1 April 1989 Ty Coon, President of Vice
+ +This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.
+ +Software: Geos
+Copyright notice:
+Copyright (C) 2009 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2006 Refractions Research Inc.
+Copyright (C) 2013 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2009 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2005-2011 Refractions Research Inc.
+Copyright (C) 2009 Ragi Y. Burhum <ragi@burhum.com>
+Copyright (C) 2010 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2009 2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2005 2006 Refractions Research Inc.
+Copyright (C) 2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2006-2011 Refractions Research Inc.
+Copyright (C) 2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2009-2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2016 Daniel Baston
+Copyright (C) 2008 Sean Gillies
+Copyright (C) 2009 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2006 Refractions Research Inc.
+Copyright (C) 2012 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2009 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2008-2010 Safe Software Inc.
+Copyright (C) 2006-2007 Refractions Research Inc.
+Copyright (C) 2005-2007 Refractions Research Inc.
+Copyright (C) 2007 Refractions Research Inc.
+Copyright (C) 2014 Mika Heiskanen <mika.heiskanen@fmi.fi>
+Copyright (C) 2009-2010 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2009 2011 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2010 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2009 Mateusz Loskot
+Copyright (C) 2005-2009 Refractions Research Inc.
+Copyright (C) 2001-2009 Vivid Solutions Inc.
+Copyright (C) 2012 Sandro Santilli <strk@keybit.net>
+Copyright (C) 2006 Wu Yongwei
+Copyright (C) 2012 Excensus LLC.
+Copyright (C) 1996-2015 Free Software Foundation, Inc.
+Copyright (c) 1995 Olivier Devillers <Olivier.Devillers@sophia.inria.fr>
+Copyright (C) 2007-2010 Safe Software Inc.
+Copyright (C) 2010 Safe Software Inc.
+Copyright (C) 2006 Refractions Research
+Copyright 2004 Sean Gillies, sgillies@frii.com
+Copyright (C) 2011 Mateusz Loskot <mateusz@loskot.net>
+Copyright (C) 2015 Nyall Dawson <nyall dot dawson at gmail dot com>
+Original code (2.0 and earlier )copyright (c) 2000-2006 Lee Thomason (www.grinninglizard.com)
+Original code (2.0 and earlier )copyright (c) 2000-2002 Lee Thomason (www.grinninglizard.com)
+ +License: LGPL V2.1
+ +GNU LESSER GENERAL PUBLIC LICENSE
+Version 2.1, February 1999
+Copyright (C) 1991, 1999 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
+ +[This is the first released version of the Lesser GPL. It also counts as the successor of the GNU Library Public License, version 2, hence the version number 2.1.]
+ +Preamble
+ +The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.
+ +This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below.
+ +When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things.
+ +To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it.
+ +For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights.
+ +We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library.
+ +To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others.
+ +Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license.
+ +Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs.
+ +When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library.
+ +We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances.
+ +For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License.
+ +In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.
+ +Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library.
+The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The
+former contains code derived from the library, whereas the latter must be combined with the library in order to run.
+ +GNU LESSER GENERAL PUBLIC LICENSE
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+ +0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you".
+ +A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables.
+ +The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".)
+ +"Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library.
+ +Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does.
+ +1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the
+Library.
+ +You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
+ +2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+ +a) The modified work must itself be a software library.
+ +b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change.
+ +c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License.
+ +d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of
+its purpose remains meaningful.
+ +(For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.)
+ +These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
+ +Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or
+collective works based on the Library.
+ +In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
+ +3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices.
+ +Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy.
+ +This option is useful when you wish to copy part of the code of the Library into a program that is not a library.
+ +4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange.
+ +If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not compelled to copy the source along with the object code.
+ +5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a
+work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License.
+ +However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+ +When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law.
+ +If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.)
+ +Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+ +6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications.
+ +You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things:
+ +a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.)
+ +b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system,
+rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with.
+ +c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution.
+ +d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place.
+ +e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy.
+ +For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception,
+the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies the executable.
+ +It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot
+use both them and the Library together in an executable that you distribute.
+ +7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things:
+ +a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above.
+ +b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.
+ +8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
+ +9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it.
+ +10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with this License.
+ +11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library.
+ +If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances.
+ +It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot
+impose that choice.
+ +This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
+ +12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
+ +13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
+ +Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation.
+ +14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is
+copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing and reuse of software generally.
+ +NO WARRANTY
+ +15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+ +16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+ +END OF TERMS AND CONDITIONS
+ +How to Apply These Terms to Your New Libraries
+ +If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License).
+ +To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
+ +<one line to give the library's name and a brief idea of what it does.>
+Copyright (C) <year> <name of author>
+ +This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
+ +This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+Lesser General Public License for more details.
+ +You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ +Also add information on how to contact you by electronic and paper mail.
+ +You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. Here is a sample; alter the names:
+ +Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker.
+ +<signature of Ty Coon>, 1 April 1990
+Ty Coon, President of Vice
+ +That's all there is to it!
+ +Software: JSON-C
+ +Copyright notice:
+Copyright (c) 2004, 2005 Metaparadigm Pte. Ltd.
+Copyright (c) 2009-2012 Eric Haszlakiewicz
+Copyright (c) 2004, 2005 Metaparadigm Pte Ltd
+Copyright (c) 2009 Hewlett-Packard Development Company, L.P.
+Copyright 2011, John Resig
+Copyright 2011, The Dojo Foundation
+Copyright (c) 2012 Eric Haszlakiewicz
+Copyright (c) 2009-2012 Hewlett-Packard Development Company, L.P.
+Copyright (c) 2008-2009 Yahoo! Inc. All rights reserved.
+Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
+2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
+Copyright (c) 2013 Metaparadigm Pte. Ltd.
+ +License: MIT License
+ +Copyright (c) 2009-2012 Eric Haszlakiewicz
+ +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+ +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ +----------------------------------------------------------------
+ +Copyright (c) 2004, 2005 Metaparadigm Pte Ltd
+ +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+ +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ +Software: proj
+Copyright notice:
+"Copyright (C) 2010 Mateusz Loskot <mateusz@loskot.net>
+Copyright (C) 2007 Douglas Gregor <doug.gregor@gmail.com>
+Copyright (C) 2007 Troy Straszheim
+CMake, Copyright (C) 2009-2010 Mateusz Loskot <mateusz@loskot.net> )
+Copyright (C) 2011 Nicolas David <nicolas.david@ign.fr>
+Copyright (c) 2000, Frank Warmerdam
+Copyright (c) 2011, Open Geospatial Consortium, Inc.
+Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
+2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
+Copyright (c) Charles Karney (2012-2015) <charles@karney.com> and licensed
+Copyright (c) 2005, Antonello Andrea
+Copyright (c) 2010, Frank Warmerdam
+Copyright (c) 1995, Gerald Evenden
+Copyright (c) 2000, Frank Warmerdam <warmerdam@pobox.com>
+Copyright (c) 2010, Frank Warmerdam <warmerdam@pobox.com>
+Copyright (c) 2013, Frank Warmerdam
+Copyright (c) 2003 Gerald I. Evenden
+Copyright (c) 2012, Frank Warmerdam <warmerdam@pobox.com>
+Copyright (c) 2002, Frank Warmerdam
+Copyright (c) 2004 Gerald I. Evenden
+Copyright (c) 2012 Martin Raspaud
+Copyright (c) 2001, Thomas Flemming, tf@ttqv.com
+Copyright (c) 2002, Frank Warmerdam <warmerdam@pobox.com>
+Copyright (c) 2009, Frank Warmerdam
+Copyright (c) 2003, 2006 Gerald I. Evenden
+Copyright (c) 2011, 2012 Martin Lambers <marlam@marlam.de>
+Copyright (c) 2006, Andrey Kiselev
+Copyright (c) 2008-2012, Even Rouault <even dot rouault at mines-paris dot org>
+Copyright (c) 2001, Frank Warmerdam
+Copyright (c) 2001, Frank Warmerdam <warmerdam@pobox.com>
+Copyright (c) 2008 Gerald I. Evenden
+"
+ +License: MIT License
+Please see above
+ +Software: libxml2
+Copyright notice:
+ +"See Copyright for the status of this software.
+Copyright (C) 1998-2003 Daniel Veillard. All Rights Reserved.
+Copyright (C) 2003 Daniel Veillard.
+copy: see Copyright for the status of this software.
+copy: see Copyright for the status of this software
+copy: see Copyright for the status of this software.
+Copyright (C) 2000 Bjorn Reese and Daniel Veillard.
+Copy: See Copyright for the status of this software.
+See COPYRIGHT for the status of this software
+Copyright (C) 2000 Gary Pennington and Daniel Veillard.
+Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
+2007 Free Software Foundation, Inc.
+Copyright (C) 1998 Bjorn Reese and Daniel Stenberg.
+Copyright (C) 2001 Bjorn Reese <breese@users.sourceforge.net>
+Copyright (C) 2000 Bjorn Reese and Daniel Stenberg.
+Copyright (C) 2001 Bjorn Reese and Daniel Stenberg.
+See Copyright for the status of this software
+"
+License: MIT License
+Please see above
GaussDB(DWS) provides multi-dimensional resource monitoring views that show the real-time and historical resource usage of tasks.
+In the multi-tenant management framework, you can query the real-time or historical usage of all user resources (including memory, CPU cores, storage space, temporary space, and I/Os).
SELECT * FROM PG_TOTAL_USER_RESOURCE_INFO;
The query result is as follows:
username | used_memory | total_memory | used_cpu | total_cpu | used_space | total_space | used_temp_space | total_temp_space | used_spill_space | total_spill_space | read_kbytes | write_kbytes | read_counts | write_counts | read_speed | write_speed
----------+-------------+--------------+----------+-----------+------------+-------------+-----------------+------------------+------------------+-------------------+-------------+--------------+-------------+--------------+------------+-------------
perfadm | 0 | 17250 | 0 | 0 | 0 | -1 | 0 | -1 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0
usern | 0 | 17250 | 0 | 48 | 0 | -1 | 0 | -1 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0
userg | 34 | 15525 | 23.53 | 48 | 0 | -1 | 0 | -1 | 814955731 | -1 | 6111952 | 1145864 | 763994 | 143233 | 42678 | 8001
userg1 | 34 | 13972 | 23.53 | 48 | 0 | -1 | 0 | -1 | 814972419 | -1 | 6111952 | 1145864 | 763994 | 143233 | 42710 | 8007
(4 rows)
The I/O resource monitoring fields (read_kbytes, write_kbytes, read_counts, write_counts, read_speed, and write_speed) are available only when the GUC parameter enable_user_metric_persistent is enabled.
+For details about each column, see PG_TOTAL_USER_RESOURCE_INFO.
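For example, a minimal sketch (reusing the columns shown in the result above) that lists the users currently consuming the most memory:
SELECT username, used_memory, used_cpu
FROM PG_TOTAL_USER_RESOURCE_INFO
ORDER BY used_memory DESC
LIMIT 5;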
SELECT * FROM GS_WLM_USER_RESOURCE_INFO('username');
The query result is as follows:
userid | used_memory | total_memory | used_cpu | total_cpu | used_space | total_space | used_temp_space | total_temp_space | used_spill_space | total_spill_space | read_kbytes | write_kbytes | read_counts | write_counts | read_speed | write_speed
--------+-------------+--------------+----------+-----------+------------+-------------+-----------------+------------------+------------------+-------------------+-------------+--------------+-------------+--------------+------------+-------------
16407 | 18 | 1655 | 6 | 19 | 13787176 | -1 | 0 | -1 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0
(1 row)
SELECT * FROM GS_WLM_USER_RESOURCE_HISTORY;
The query result is as follows:
username | timestamp | used_memory | total_memory | used_cpu | total_cpu | used_space | total_space | used_temp_space | total_temp_space | used_spill_space | total_spill_space | read_kbytes | write_kbytes | read_counts | write_counts | read_speed | write_speed
-----------+-------------------------------+-------------+--------------+----------+-----------+------------+-------------+-----------------+------------------+------------------+-------------------+-------------+--------------+-------------+--------------+------------+-------------
usern | 2020-01-08 22:56:06.456855+08 | 0 | 17250 | 0 | 48 | 0 | -1 | 0 | -1 | 88349078 | -1 | 45680 | 34 | 5710 | 8 | 320 | 0
userg | 2020-01-08 22:56:06.458659+08 | 0 | 15525 | 33.48 | 48 | 0 | -1 | 0 | -1 | 110169581 | -1 | 17648 | 23 | 2206 | 5 | 123 | 0
userg1 | 2020-01-08 22:56:06.460252+08 | 0 | 13972 | 33.48 | 48 | 0 | -1 | 0 | -1 | 136106277 | -1 | 17648 | 23 | 2206 | 5 | 123 | 0
Data in the PG_TOTAL_USER_RESOURCE_INFO view is periodically saved to the system catalog GS_WLM_USER_RESOURCE_HISTORY only when the GUC parameter enable_user_metric_persistent is enabled.
+For details about each column, see GS_WLM_USER_RESOURCE_HISTORY.
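As a further illustration, a minimal sketch (assuming a user named userg exists and enable_user_metric_persistent is on) that tracks one user's memory usage over the last hour:
SELECT timestamp, used_memory, used_cpu
FROM GS_WLM_USER_RESOURCE_HISTORY
WHERE username = 'userg'
  AND timestamp > now() - interval '1 hour'
ORDER BY timestamp;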
SELECT * FROM pg_user_iostat('username');
The query result is as follows:
userid | min_curr_iops | max_curr_iops | min_peak_iops | max_peak_iops | io_limits | io_priority
--------+---------------+---------------+---------------+---------------+-----------+-------------
     10 |             0 |             0 |             0 |             0 |         0 | None
(1 row)
GaussDB(DWS) provides a view for monitoring the memory usage of the entire cluster.
SELECT * FROM pgxc_total_memory_detail;
If the memory protection feature is disabled, the view cannot be queried:
SELECT * FROM pgxc_total_memory_detail;
ERROR: unsupported view for memory protection feature is disabled.
CONTEXT: PL/pgSQL function pgxc_total_memory_detail() line 12 at FOR over EXECUTE statement
You can query the context information about the shared memory in the pg_shared_memory_detail view.
SELECT * FROM pg_shared_memory_detail;
           contextname           | level |             parent              | totalsize | freesize | usedsize
---------------------------------+-------+---------------------------------+-----------+----------+----------
 ProcessMemory                   |     0 |                                 |     24576 |     9840 |    14736
 Workload manager memory context |     1 | ProcessMemory                   |   2105400 |     7304 |  2098096
 wlm collector hash table        |     2 | Workload manager memory context |      8192 |     3736 |     4456
 Resource pool hash table        |     2 | Workload manager memory context |     24576 |    15968 |     8608
 wlm cgroup hash table           |     2 | Workload manager memory context |     24576 |    15968 |     8608
(5 rows)
This view lists each memory context's name, level, parent memory context, and the total, free, and used sizes of the shared memory.
In the database, the GUC parameter memory_tracking_mode configures the memory statistics collection mode and supports the following options:
When the parameter is set to executor, CSV files are generated in the pg_log directory of the DN process. The file names are in the format memory_track_<DN name>_query_<queryid>.csv. During task execution, the information about the operators executed by the postgres thread of the executor and by all stream threads is written to these files.
The file content is similar to the following:
0, 0, ExecutorState, 0, PortalHeapMemory, 0, 40K, 602K, 23
1, 3, CStoreScan_29360131_25, 0, ExecutorState, 1, 265K, 554K, 23
2, 128, cstore scan per scan memory context, 1, CStoreScan_29360131_25, 2, 24K, 24K, 23
3, 127, cstore scan memory context, 1, CStoreScan_29360131_25, 2, 264K, 264K, 23
4, 7, InitPartitionMapTmpMemoryContext, 1, CStoreScan_29360131_25, 2, 31K, 31K, 23
5, 2, VecPartIterator_29360131_24, 0, ExecutorState, 1, 16K, 16K, 23
0, 0, ExecutorState, 0, PortalHeapMemory, 0, 24K, 1163K, 20
1, 3, CStoreScan_29360131_22, 0, ExecutorState, 1, 390K, 1122K, 20
2, 20, cstore scan per scan memory context, 1, CStoreScan_29360131_22, 2, 476K, 476K, 20
3, 19, cstore scan memory context, 1, CStoreScan_29360131_22, 2, 264K, 264K, 20
4, 7, InitPartitionMapTmpMemoryContext, 1, CStoreScan_29360131_22, 2, 23K, 23K, 20
5, 2, VecPartIterator_29360131_21, 0, ExecutorState, 1, 16K, 16K, 20
Each record contains the following fields: the output sequence number, the sequence number of the memory allocation context within the thread, the name of the current memory context, the output sequence number of the parent memory context, the name of the parent memory context, the tree level of the memory context, the peak memory used by the current memory context, the peak memory used by the current memory context and all of its child memory contexts, and the plan node ID of the query in which the thread is executed.
+In this example, the record "1, 3, CStoreScan_29360131_22, 0, ExecutorState, 1, 390K, 1122K, 20" represents the following information about Explain Analyze:
If the parameter is set to fullexec, the output is similar to that of executor, except that information about every memory context allocation request is printed, whether the request succeeded or not. As only the allocation requests are recorded, the peak memory used by each memory context is recorded as 0.
+GaussDB(DWS) provides system catalogs for monitoring the resource usage of CNs and DNs (including memory, CPU usage, disk I/O, process physical I/O, and process logical I/O), and system catalogs for monitoring the resource usage of the entire cluster.
+For details about the system catalog GS_WLM_INSTANCE_HISTORY, see GS_WLM_INSTANCE_HISTORY.
Data in the system catalog GS_WLM_INSTANCE_HISTORY is distributed across the corresponding instances: CN monitoring data is stored on the CN instance, and DN monitoring data is stored on the DN instance. Each DN has a standby node, so when the primary DN is abnormal, the DN's monitoring data can be restored from the standby node. A CN, however, has no standby node; if a CN becomes abnormal and is then restored, its monitoring data is lost.
SELECT * FROM GS_WLM_INSTANCE_HISTORY ORDER BY TIMESTAMP DESC;
The query result is as follows:
instancename | timestamp | used_cpu | free_mem | used_mem | io_await | io_util | disk_read | disk_write | process_read | process_write | logical_read | logical_write | read_counts | write_counts
--------------+-------------------------------+----------+----------+----------+----------+----------+-----------+------------+--------------+---------------+--------------+---------------+-------------+--------------
dn_6015_6016 | 2020-01-10 17:29:17.329495+08 | 0 | 14570 | 8982 | 662.923 | 99.9601 | 697666 | 93655.5 | 183104 | 30082 | 285659 | 30079 | 357717 | 37667
dn_6015_6016 | 2020-01-10 17:29:07.312049+08 | 0 | 14578 | 8974 | 883.102 | 99.9801 | 756228 | 81417.4 | 189722 | 30786 | 285681 | 30780 | 358103 | 38584
dn_6015_6016 | 2020-01-10 17:28:57.284472+08 | 0 | 14583 | 8969 | 727.135 | 99.9801 | 648581 | 88799.6 | 177120 | 31176 | 252161 | 31175 | 316085 | 39079
dn_6015_6016 | 2020-01-10 17:28:47.256613+08 | 0 | 14591 | 8961 | 679.534 | 100.08 | 655360 | 169962 | 179404 | 30424 | 242002 | 30422 | 303351 | 38136
SELECT * FROM GS_WLM_INSTANCE_HISTORY WHERE TIMESTAMP > '2020-01-10' AND TIMESTAMP < '2020-01-11' ORDER BY TIMESTAMP DESC;
The query result is as follows:
instancename | timestamp | used_cpu | free_mem | used_mem | io_await | io_util | disk_read | disk_write | process_read | process_write | logical_read | logical_write | read_counts | write_counts
--------------+-------------------------------+----------+----------+----------+----------+----------+-----------+------------+--------------+---------------+--------------+---------------+-------------+--------------
dn_6015_6016 | 2020-01-10 17:29:17.329495+08 | 0 | 14570 | 8982 | 662.923 | 99.9601 | 697666 | 93655.5 | 183104 | 30082 | 285659 | 30079 | 357717 | 37667
dn_6015_6016 | 2020-01-10 17:29:07.312049+08 | 0 | 14578 | 8974 | 883.102 | 99.9801 | 756228 | 81417.4 | 189722 | 30786 | 285681 | 30780 | 358103 | 38584
dn_6015_6016 | 2020-01-10 17:28:57.284472+08 | 0 | 14583 | 8969 | 727.135 | 99.9801 | 648581 | 88799.6 | 177120 | 31176 | 252161 | 31175 | 316085 | 39079
dn_6015_6016 | 2020-01-10 17:28:47.256613+08 | 0 | 14591 | 8961 | 679.534 | 100.08 | 655360 | 169962 | 179404 | 30424 | 242002 | 30422 | 303351 | 38136
SELECT * FROM pgxc_get_wlm_current_instance_info('ALL');
The query result is as follows:
instancename | timestamp | used_cpu | free_mem | used_mem | io_await | io_util | disk_read | disk_write | process_read | process_write | logical_read | logical_write | read_counts | write_counts
--------------+-------------------------------+----------+----------+----------+----------+---------+-----------+------------+--------------+---------------+--------------+---------------+-------------+--------------
coordinator2 | 2020-01-14 21:58:29.290894+08 | 0 | 12010 | 278 | 16.0445 | 7.19561 | 184.431 | 27959.3 | 0 | 10 | 0 | 0 | 0 | 0
coordinator3 | 2020-01-14 21:58:27.567655+08 | 0 | 12000 | 288 | .964557 | 3.40659 | 332.468 | 3375.02 | 26 | 13 | 0 | 0 | 0 | 0
datanode1 | 2020-01-14 21:58:23.900321+08 | 0 | 11899 | 389 | 1.17296 | 3.25 | 329.6 | 2870.4 | 28 | 8 | 13 | 3 | 18 | 6
datanode2 | 2020-01-14 21:58:32.832989+08 | 0 | 11904 | 384 | 17.948 | 8.52148 | 214.186 | 25894.1 | 28 | 10 | 13 | 3 | 18 | 6
datanode3 | 2020-01-14 21:58:24.826694+08 | 0 | 11894 | 394 | 1.16088 | 3.15 | 328 | 2868.8 | 25 | 10 | 13 | 3 | 18 | 6
coordinator1 | 2020-01-14 21:58:33.367649+08 | 0 | 11988 | 300 | 9.53286 | 10.05 | 43.2 | 55232 | 0 | 0 | 0 | 0 | 0 | 0
coordinator1 | 2020-01-14 21:58:23.216645+08 | 0 | 11988 | 300 | 1.17085 | 3.21182 | 324.729 | 2831.13 | 8 | 13 | 0 | 0 | 0 | 0
(7 rows)
SELECT * FROM pgxc_get_wlm_history_instance_info('ALL', '2020-01-14 21:00:00', '2020-01-14 22:00:00', 3);
The query result is as follows:
instancename | timestamp | used_cpu | free_mem | used_mem | io_await | io_util | disk_read | disk_write | process_read | process_write | logical_read | logical_write | read_counts | write_counts
--------------+-------------------------------+----------+----------+----------+----------+-----------+-----------+------------+--------------+---------------+--------------+---------------+-------------+--------------
coordinator2 | 2020-01-14 21:50:49.778902+08 | 0 | 12020 | 268 | .127371 | .789211 | 15.984 | 3994.41 | 0 | 0 | 0 | 0 | 0 | 0
coordinator2 | 2020-01-14 21:53:49.043646+08 | 0 | 12018 | 270 | 30.2902 | 8.65404 | 276.77 | 16741.8 | 3 | 1 | 0 | 0 | 0 | 0
coordinator2 | 2020-01-14 21:57:09.202654+08 | 0 | 12018 | 270 | .16051 | .979021 | 59.9401 | 5596 | 0 | 0 | 0 | 0 | 0 | 0
coordinator3 | 2020-01-14 21:38:48.948646+08 | 0 | 12012 | 276 | .0769231 | .00999001 | 0 | 35.1648 | 0 | 1 | 0 | 0 | 0 | 0
coordinator3 | 2020-01-14 21:40:29.061178+08 | 0 | 12012 | 276 | .118421 | .0199601 | 0 | 970.858 | 0 | 0 | 0 | 0 | 0 | 0
coordinator3 | 2020-01-14 21:50:19.612777+08 | 0 | 12010 | 278 | 24.411 | 11.7665 | 8.78244 | 44641.1 | 0 | 0 | 0 | 0 | 0 | 0
datanode1 | 2020-01-14 21:49:42.758649+08 | 0 | 11909 | 379 | .798776 | 8.02 | 51.2 | 20924.8 | 0 | 0 | 0 | 0 | 0 | 0
datanode1 | 2020-01-14 21:49:52.760188+08 | 0 | 11909 | 379 | 23.8972 | 14.1 | 0 | 74760 | 0 | 0 | 0 | 0 | 0 | 0
datanode1 | 2020-01-14 21:50:22.769226+08 | 0 | 11909 | 379 | 39.5868 | 7.4 | 0 | 19760.8 | 0 | 0 | 0 | 0 | 0 | 0
datanode2 | 2020-01-14 21:58:02.826185+08 | 0 | 11905 | 383 | .351648 | .32 | 20.8 | 504.8 | 0 | 0 | 0 | 0 | 0 | 0
datanode2 | 2020-01-14 21:56:42.80793+08 | 0 | 11906 | 382 | .559748 | .04 | 0 | 326.4 | 0 | 0 | 0 | 0 | 0 | 0
datanode2 | 2020-01-14 21:45:21.632407+08 | 0 | 11901 | 387 | 12.1313 | 4.55544 | 3.1968 | 45177.2 | 0 | 0 | 0 | 0 | 0 | 0
datanode3 | 2020-01-14 21:58:14.823317+08 | 0 | 11898 | 390 | .378205 | .99 | 48 | 23353.6 | 0 | 0 | 0 | 0 | 0 | 0
datanode3 | 2020-01-14 21:47:50.665028+08 | 0 | 11901 | 387 | 1.07494 | 1.19 | 0 | 15506.4 | 0 | 0 | 0 | 0 | 0 | 0
datanode3 | 2020-01-14 21:51:21.720117+08 | 0 | 11903 | 385 | 10.2795 | 3.11 | 0 | 11031.2 | 0 | 0 | 0 | 0 | 0 | 0
coordinator1 | 2020-01-14 21:42:59.121945+08 | 0 | 12020 | 268 | .0857143 | .0699301 | 0 | 6579.02 | 0 | 0 | 0 | 0 | 0 | 0
coordinator1 | 2020-01-14 21:41:49.042646+08 | 0 | 12020 | 268 | 20.9039 | 11.3786 | 6042.76 | 57903.7 | 0 | 0 | 0 | 0 | 0 | 0
coordinator1 | 2020-01-14 21:41:09.007652+08 | 0 | 12020 | 268 | .0446429 | .03996 | 0 | 1109.29 | 0 | 0 | 0 | 0 | 0 | 0
(18 rows)
You can query real-time Top SQL in real-time resource monitoring views at different levels. The real-time resource monitoring view records the resource usage (including memory, disk, CPU time, and I/O) and performance alarm information during job running.
+The following table describes the external interfaces of the real-time views.
Level | Monitored Node | View
---|---|---
Query level/perf level | Current CN | gs_wlm_session_statistics
Query level/perf level | All CNs | pgxc_wlm_session_statistics
Operator level | Current CN | gs_wlm_operator_statistics
Operator level | All CNs | pgxc_wlm_operator_statistics
In the preceding prerequisites, enable_resource_track is a system-level parameter that specifies whether to enable resource monitoring. resource_track_level is a session-level parameter. You can set the resource monitoring level of a session as needed. The following table describes the values of the two parameters.
enable_resource_track | resource_track_level | Query-Level Information | Operator-Level Information
---|---|---|---
on (default) | none | Not collected | Not collected
on (default) | query (default) | Collected | Not collected
on (default) | perf | Collected | Not collected
on (default) | operator | Collected | Collected
off | none/query/operator | Not collected | Not collected
To query real-time Top SQL, run the following statements as needed:
SELECT * FROM gs_session_cpu_statistics;
SELECT * FROM gs_session_memory_statistics;
SELECT * FROM gs_wlm_session_statistics;
SELECT * FROM pgxc_wlm_session_statistics;
SELECT * FROM gs_wlm_operator_statistics;
SELECT * FROM pgxc_wlm_operator_statistics;
SELECT * FROM pg_session_wlmstat;
SELECT * FROM pgxc_wlm_workload_records;
You can query historical Top SQL in historical resource monitoring views. The historical resource monitoring views record the resource usage (of memory, disk, CPU time, and I/O), running status (including errors, termination, and exceptions), and performance alarm information during job running. Queries that terminate abnormally due to FATAL or PANIC errors are displayed with the status aborted, and no detailed information is recorded for them. Status information about query parsing in the optimization phase cannot be monitored.
+The following table describes the external interfaces of the historical views.
Level | Monitored Node | Interface | View
---|---|---|---
Query level/perf level | Current CN | History (Database Manager interface) | gs_wlm_session_history
Query level/perf level | Current CN | History (internal dump interface) | gs_wlm_session_info
Query level/perf level | All CNs | History (Database Manager interface) | pgxc_wlm_session_history
Query level/perf level | All CNs | History (internal dump interface) | pgxc_wlm_session_info
Operator level | Current CN | History (Database Manager interface) | gs_wlm_operator_history
Operator level | Current CN | History (internal dump interface) | gs_wlm_operator_info
Operator level | All CNs | History (Database Manager interface) | pgxc_wlm_operator_history
Operator level | All CNs | History (internal dump interface) | pgxc_wlm_operator_info
To query historical Top SQL, run the following statements as needed:
SELECT * FROM gs_wlm_session_history;
SELECT * FROM pgxc_wlm_session_history;
SELECT * FROM gs_wlm_session_info;
-- Top 10 queries by peak memory:
SELECT * FROM gs_wlm_session_info ORDER BY max_peak_memory DESC LIMIT 10;
-- Top 10 queries by peak memory within a time window:
SELECT * FROM gs_wlm_session_info WHERE start_time >= '2022-05-15 21:00:00' AND finish_time <= '2022-05-15 23:30:00' ORDER BY max_peak_memory DESC LIMIT 10;
-- Top 10 queries by CPU time:
SELECT * FROM gs_wlm_session_info ORDER BY total_cpu_time DESC LIMIT 10;
-- Top 10 queries by CPU time within a time window:
SELECT * FROM gs_wlm_session_info WHERE start_time >= '2022-05-15 21:00:00' AND finish_time <= '2022-05-15 23:30:00' ORDER BY total_cpu_time DESC LIMIT 10;
SELECT * FROM pgxc_wlm_session_info;
SELECT * FROM pgxc_wlm_session_info ORDER BY duration DESC LIMIT 10;
SELECT * FROM pgxc_wlm_session_info WHERE start_time >= '2022-05-15 21:00:00' AND finish_time <= '2022-05-15 23:30:00' ORDER BY nodename, max_peak_memory DESC LIMIT 10;
A GaussDB(DWS) cluster uses UTC time by default, which is eight hours behind the system time in UTC+8 time zones. Before querying, ensure that the database time is consistent with the system time.
SELECT * FROM pgxc_get_wlm_session_info_bytime('start_time', '2019-09-10 15:30:00', '2019-09-10 15:35:00', 10);
SELECT * FROM pgxc_get_wlm_session_info_bytime('finish_time', '2019-09-10 15:30:00', '2019-09-10 15:35:00', 10);
SELECT * FROM gs_wlm_operator_history;
SELECT * FROM pgxc_wlm_operator_history;
SELECT * FROM gs_wlm_operator_info;
SELECT * FROM pgxc_wlm_operator_info;
In this section, TPC-DS sample data is used as an example to describe how to query Real-time TopSQL and Historical TopSQL.
+To query for historical or archived resource monitoring information about jobs of top SQLs, you need to set related GUC parameters first. The procedure is as follows:
If enable_resource_record is set to on, storage space may expand, which slightly affects performance. Therefore, set it to off if record archiving is unnecessary.
+The TPC-DS sample data is used as an example.
+By default, only resources of a query whose execution cost is greater than the value (default: 100000) of resource_track_cost are monitored and can be queried by users.
+For example, run the following statements to query for the estimated execution cost of the SQL statement:
SET CURRENT_SCHEMA = tpcds;
EXPLAIN WITH customer_total_return AS
(SELECT sr_customer_sk AS ctr_customer_sk,
        sr_store_sk AS ctr_store_sk,
        sum(SR_FEE) AS ctr_total_return
 FROM store_returns, date_dim
 WHERE sr_returned_date_sk = d_date_sk AND d_year = 2000
 GROUP BY sr_customer_sk, sr_store_sk)
SELECT c_customer_id
FROM customer_total_return ctr1, store, customer
WHERE ctr1.ctr_total_return > (SELECT avg(ctr_total_return) * 1.2
                               FROM customer_total_return ctr2
                               WHERE ctr1.ctr_store_sk = ctr2.ctr_store_sk)
  AND s_store_sk = ctr1.ctr_store_sk
  AND s_state = 'TN'
  AND ctr1.ctr_customer_sk = c_customer_sk
ORDER BY c_customer_id
LIMIT 100;
In the following query result, the value in the first row of the E-costs column is the estimated cost of the SQL statement.
+In this example, to demonstrate the resource monitoring function of top SQLs, you need to set resource_track_cost to a value smaller than the estimated cost in the EXPLAIN result, for example, 100. For details about the parameter setting, see resource_track_cost.
+After completing this example, you still need to reset resource_track_cost to its default value 100000 or a proper value. An overly small parameter value will compromise the database performance.
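A minimal sketch of lowering the threshold for this example and restoring it afterwards (session-level settings; the values follow the guidance above):
SET resource_track_cost = 100;      -- lower the threshold so the example query is tracked
-- ... run the example query and check the monitoring views ...
SET resource_track_cost = 100000;   -- restore the default value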
SET CURRENT_SCHEMA = tpcds;
WITH customer_total_return AS
(SELECT sr_customer_sk AS ctr_customer_sk,
        sr_store_sk AS ctr_store_sk,
        sum(SR_FEE) AS ctr_total_return
 FROM store_returns, date_dim
 WHERE sr_returned_date_sk = d_date_sk
   AND d_year = 2000
 GROUP BY sr_customer_sk, sr_store_sk)
SELECT c_customer_id
FROM customer_total_return ctr1, store, customer
WHERE ctr1.ctr_total_return > (SELECT avg(ctr_total_return) * 1.2
                               FROM customer_total_return ctr2
                               WHERE ctr1.ctr_store_sk = ctr2.ctr_store_sk)
  AND s_store_sk = ctr1.ctr_store_sk
  AND s_state = 'TN'
  AND ctr1.ctr_customer_sk = c_customer_sk
ORDER BY c_customer_id
LIMIT 100;
SELECT query, max_peak_memory, average_peak_memory, memory_skew_percent FROM gs_wlm_session_statistics ORDER BY start_time DESC;
The preceding command queries query-level real-time peak information, including the maximum per-second memory peak among all DNs, the average per-second memory peak among all DNs, and the memory usage skew across DNs.
+For more examples of querying for the real-time resource monitoring information of top SQLs, see Real-time TopSQL.
SELECT query, start_time, finish_time, duration, status FROM gs_wlm_session_history ORDER BY start_time DESC;
The preceding command queries query-level historical information, including the execution start time, the execution duration (in milliseconds), and the execution status.
+For more examples of querying for the historical resource monitoring information of top SQLs, see Historical TopSQL.
+If enable_resource_record is set to on and the execution time of the SQL statement in 3 is no less than the value of resource_track_duration, historical information about the SQL statement will be archived to the gs_wlm_session_info view 3 minutes after the execution of the SQL statement is complete.
+The info view can be queried only when the gaussdb database is connected. Therefore, switch to the gaussdb database before running the following statement:
SELECT query, start_time, finish_time, duration, status FROM gs_wlm_session_info ORDER BY start_time DESC;
The aim of SQL optimization is to maximize the utilization of resources, including CPU, memory, disk I/O, and network I/O; that is, to run SQL statements as efficiently as possible and achieve the highest performance at the lowest cost. For example, a typical point query can be executed with a sequential scan plus a filter (reading every tuple and checking it against the query conditions), or with an index scan, which produces the same result at a much lower cost.
+This chapter describes how to analyze and improve query performance, and provides common cases and troubleshooting methods.
+The process from receiving SQL statements to the statement execution by the SQL engine is shown in Figure 1 and Table 1. The texts in red are steps where database administrators can optimize queries.
Procedure | Description
---|---
1. Perform syntax and lexical parsing. | Converts the input SQL statement from a string into the formatted structure stmt based on the SQL language rules.
2. Perform semantic parsing. | Converts the formatted structure obtained in the previous step into objects that the database can recognize.
3. Rewrite the query statement. | Converts the output of the previous step into a structure that optimizes query execution.
4. Optimize the query. | Determines the execution mode of the SQL statement (the execution plan) based on the result of the previous step and internal database statistics. For details about the impact of statistics and GUC parameters on query optimization (execution plan), see Optimizing Queries Using Statistics and Optimizing Queries Using GUC parameters.
5. Perform the query. | Executes the SQL statement based on the execution path determined in the previous step. Selecting a proper underlying storage mode improves query execution efficiency. For details, see Optimizing Queries Using the Underlying Storage.
The GaussDB(DWS) optimizer is a typical cost-based optimizer (CBO). For each candidate execution plan, the database estimates the number of tuples and the execution cost of every step based on the number of table tuples, column widths, NULL record ratios, and characteristic values such as distinct, MCV, and histogram values, together with its cost calculation methods. It then selects the plan with the lowest cost for overall execution or for returning the first tuple. These characteristic values are the statistics, which are the core of query optimization. Accurate statistics help the planner select the most appropriate query plan. Generally, you can collect statistics of a table or of some columns in a table using ANALYZE. You are advised to execute ANALYZE periodically, or immediately after a large portion of a table's contents has been modified.
+Optimizing queries aims to select an efficient execution mode.
+Take the following statement as an example:
SELECT count(1)
FROM customer INNER JOIN store_sales ON (ss_customer_sk = c_customer_sk);
During the execution of customer INNER JOIN store_sales, GaussDB(DWS) supports nested loop, merge join, and hash join. The optimizer estimates the result set size and the execution cost of each join mode based on the statistics of the customer and store_sales tables, and selects the execution plan with the lowest cost.
+As described in the preceding content, the execution cost is calculated based on certain methods and statistics. If the actual execution cost cannot be accurately estimated, you need to optimize the execution plan by setting the GUC parameters.
GaussDB(DWS) supports both row-store and column-store tables. The choice of the underlying storage mode strongly depends on the specific customer business scenario. You are advised to use column-store tables for analytical scenarios (mainly involving association and aggregation operations) and row-store tables for scenarios such as point queries and massive UPDATE or DELETE executions.
+Optimization methods of each storage mode will be described in details in the performance optimization chapter.
+Besides the preceding methods that improve the performance of the execution plan generated by the SQL engine, database administrators can also enhance SQL statement performance by rewriting SQL statements while retaining the original service logic based on the execution mechanism of the database and abundant practical experience.
+This requires that the system administrators know the customer business well and have professional knowledge of SQL statements.
The SQL execution plan is a node tree that shows in detail how GaussDB(DWS) runs an SQL statement. Each database operator is one step of the plan.
You can run the EXPLAIN command to view the execution plan the optimizer generates for a query. EXPLAIN outputs one row for each execution node, showing the basic node type and the cost estimate the optimizer made for executing that node. See Figure 1.
+ +GaussDB(DWS) provides four display formats: normal, pretty, summary, and run.
+You can change the display format of execution plans by setting explain_perf_mode. Later examples use the pretty format by default.
+In addition to setting different display formats for an execution plan, you can use different EXPLAIN syntax to display execution plan information in details. The following lists the common EXPLAIN syntax. For details, see EXPLAIN.
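For instance, a minimal sketch of setting the display format and viewing a plan (t1 is an illustrative table name):
SET explain_perf_mode = pretty;
EXPLAIN SELECT * FROM t1;           -- shows the plan without executing the statement
EXPLAIN ANALYZE SELECT * FROM t1;   -- executes the statement and reports actual run times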
To measure the runtime cost of each node in the execution plan, EXPLAIN ANALYZE and EXPLAIN PERFORMANCE add profiling overhead to query execution. Running EXPLAIN ANALYZE or EXPLAIN PERFORMANCE on a query can therefore take longer than executing the query normally. The amount of overhead depends on the nature of the query as well as the platform being used.
Therefore, if an SQL statement has been running for a long time without finishing, run EXPLAIN to view the execution plan and locate the fault. If the SQL statement finishes properly, run EXPLAIN ANALYZE or EXPLAIN PERFORMANCE to check the execution plan and the actual execution information to locate the fault.
+The EXPLAIN PERFORMANCE lightweight execution is consistent with EXPLAIN PERFORMANCE but greatly reduces the time spent on performance analysis.
+As described in Overview of the SQL Execution Plan, EXPLAIN displays the execution plan, but will not actually run SQL statements. EXPLAIN ANALYZE and EXPLAIN PERFORMANCE both will actually run SQL statements and return the execution information. In this section, detailed execution plan and execution information are described.
+The following SQL statement is used as an example:
SELECT cjxh, count(1)
FROM dwcjk
GROUP BY cjxh;
Run the EXPLAIN command and the output is as follows:
+Interpretation of the execution plan column (horizontal):
An operator with the Vector prefix is a vectorized execution engine operator, which appears in queries that involve column-store tables.
+Interpretation of the execution plan level (vertical):
The table scan operator scans the table dwcjk using CStore Scan. The function of this layer is to read the data of table dwcjk from the buffer or disks and transfer it to the upper-layer node to participate in the calculation.
Aggregation operators perform aggregation operations (GROUP BY) on the results produced by lower-layer operators.
+The GATHER-typed Shuffle operator aggregates data from DNs to the CN.
The storage format conversion operator converts in-memory columnar data to row data for client display.
Note that when the top-layer operator is Data Node Scan, you need to set enable_fast_query_shipping to off to view the detailed execution plan, as follows:
+After enable_fast_query_shipping is set, the execution plan is displayed as follows:
+Keywords in the execution plan:
The optimizer uses a two-step plan: the child plan node visits an index to find the locations of the rows matching the index condition, and the upper plan node then fetches those rows from the table itself. Fetching rows separately is much more expensive than reading them sequentially, but because not all pages of the table have to be visited, this is still cheaper than a sequential scan. The upper-layer plan node sorts the index-identified row locations into physical order before reading them, which minimizes the cost of the separate fetches.
+If there are separate indexes on multiple columns referenced in WHERE, the optimizer might choose to use an AND or OR combination of the indexes. However, this requires the visiting of both indexes, so it is not necessarily a win compared to using just one index and treating the other condition as a filter.
+The following Index scans featured with different sorting mechanisms are involved:
Fetches table rows in index order, which makes them even more expensive to read. However, this plan type is used when the rows are so few that the extra cost of sorting the row locations is not worthwhile: mainly for queries fetching just a single row, and for queries with an ORDER BY condition that matches the index order, because no extra sorting step is needed to satisfy the ORDER BY.
Nested loop is used for queries that join small data sets. In a nested-loop join, the outer table drives the inner table, and each row returned from the outer table is matched against the inner table. The result sets of such queries should contain fewer than 10,000 rows. The table returning the smaller subset should be the outer table, and indexes are recommended on the join columns of the inner table.
A hash join is used for large tables. The optimizer builds an in-memory hash table from the rows of one table, then scans the other table and probes the hash table for a match to each row. Sonic and non-Sonic hash joins differ in their hash table structures; the difference does not affect the execution result set.
+In a merge join, data in the two joined tables is sorted by join columns. Then, data is extracted from the two tables to a sorted table for matching.
+Merge join requires more resources for sorting and its performance is lower than that of hash join. If the source data has been sorted, it does not need to be sorted again when merge join is performed. In this case, the performance of merge join is better than that of hash join.
+The EXPLAIN output shows the WHERE clause being applied as a Filter condition attached to the Seq Scan plan node. This means that the plan node checks the condition for each row it scans, and returns only the ones that meet the condition. The estimated number of output rows has been reduced because of the WHERE clause. However, the scan will still have to visit all 10000 rows. As a result, the cost is not decreased. It increases a bit (by 10000 x cpu_operator_cost) to reflect the extra CPU time spent on checking the WHERE condition.
+LIMIT limits the number of output execution results. If a LIMIT condition is added, not all rows are retrieved.
+You can use EXPLAIN ANALYZE or EXPLAIN PERFORMANCE to check the SQL statement execution information and compare the actual execution and the optimizer's estimation to find what to optimize. EXPLAIN PERFORMANCE provides the execution information on each DN, whereas EXPLAIN ANALYZE does not.
+The following SQL statement is used as an example:
SELECT count(1) FROM tb1;
The output of running EXPLAIN PERFORMANCE is as follows:
As shown in the figure, the execution information is classified into the following seven aspects.
+This part displays the static information that does not change during the plan execution process, such as some join conditions and filter information.
This part displays the memory usage information printed by certain operators (mainly Hash and Sort), including peak memory, control memory, operator memory, width, auto spread num, and early spilled, as well as spill details, including spill Time(s), inner/outer partition spill num, temp file num, split data volume, and written disk IO [min, max]. The Sort operator does not display the number of files written to disks, and displays disk information only when showing the sorting method.
+This part displays the target columns provided by each operator.
+This part displays the execution time of each operator (including the execution time of filtering and projection, if any), CPU usage, and buffer usage.
+This part displays CNs and DNs, DN and DN connection time, and some execution information in the storage layer.
This part displays the total execution time and network traffic, including the maximum and minimum execution time of the initialization and end phases on each DN, the initialization, execution, and end-phase time on each CN, the system memory available during statement execution, and the statement's estimated memory.
+This section describes how to query SQL statements whose execution takes a long time, leading to poor system performance.
SELECT current_timestamp - query_start AS runtime, datname, usename, query FROM pg_stat_activity WHERE state != 'idle' ORDER BY 1 DESC;
The query returns a list of statements ranked by execution time in descending order; the first result is the statement that has been running the longest in the system. The returned results include SQL statements invoked internally by the system as well as those run by users. Find the user-run statements that have been executing for a long time.
SELECT query FROM pg_stat_activity WHERE current_timestamp - query_start > interval '1 days';
SET track_activities = on;
The database collects the running information about active queries only if the parameter is set to on.
+Viewing pg_stat_activity is used as an example here.
SELECT datname, usename, state FROM pg_stat_activity;
 datname  | usename | state
----------+---------+--------
 postgres | omm     | idle
 postgres | omm     | active
(2 rows)
If the state column is idle, the connection is idle and requires a user to enter a command.
+To identify only active query statements, run the following command:
SELECT datname, usename, state FROM pg_stat_activity WHERE state != 'idle';
SELECT datname, usename, state, query FROM pg_stat_activity WHERE waiting = true;
The command output lists a query statement in the block state. The lock resource requested by this query statement is occupied by another session, so this query statement is waiting for the session to release the lock resource.
The waiting field is true only when a query is blocked by internal lock resources. In most cases, blocking happens when query statements are waiting for lock resources to be released. However, queries may also be blocked while waiting for file writes or timers; such blocked queries are not displayed in the pg_stat_activity view.
+During database running, query statements are blocked in some service scenarios and run for an excessively long time. In this case, you can forcibly terminate the faulty session.
SELECT w.query AS waiting_query,
       w.pid AS w_pid,
       w.usename AS w_user,
       l.query AS locking_query,
       l.pid AS l_pid,
       l.usename AS l_user,
       t.schemaname || '.' || t.relname AS tablename
FROM pg_stat_activity w
JOIN pg_locks l1 ON w.pid = l1.pid AND NOT l1.granted
JOIN pg_locks l2 ON l1.relation = l2.relation AND l2.granted
JOIN pg_stat_activity l ON l2.pid = l.pid
JOIN pg_stat_user_tables t ON l1.relation = t.relid
WHERE w.waiting;
The thread ID, user information, query status, as well as information about the tables and schemas that block the query statements are returned.
SELECT PG_TERMINATE_BACKEND(139834762094352);
If information similar to the following is displayed, the session is successfully terminated:
 PG_TERMINATE_BACKEND
----------------------
 t
(1 row)
If a command output similar to the following is displayed, the user is attempting to terminate the current session, and the client will be reconnected rather than logged out.
FATAL: terminating connection due to administrator command
FATAL: terminating connection due to administrator command
The connection to the server was lost. Attempting reset: Succeeded.
If the PG_TERMINATE_BACKEND function is used to terminate the background threads of the current session, the gsql client reconnects rather than logs out.
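A minimal sketch that combines finding and terminating sessions in one statement (the usename filter is illustrative; double-check the target sessions before terminating them):
SELECT PG_TERMINATE_BACKEND(pid)
FROM pg_stat_activity
WHERE usename = 'username' AND state != 'idle';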
+You can analyze slow SQL statements to optimize them.
In a database, statistics are the source data for the plans generated by the planner. If statistics are unavailable or outdated, the execution plan may seriously deteriorate, leading to low performance.
The ANALYZE statement collects statistics about table contents in the database and stores them in the system catalog PG_STATISTIC. The query optimizer then uses the statistics to work out the most efficient execution plan.
After executing batch insertions or deletions, you are advised to run the ANALYZE statement on the affected tables or the entire database to update the statistics. By default, 30,000 rows are sampled; that is, the default value of the GUC parameter default_statistics_target is 100. If the total number of rows in a table exceeds 1,600,000, you are advised to set default_statistics_target to -2, indicating that 2% of the rows are sampled.
+For an intermediate table generated during the execution of a batch script or stored procedure, you also need to run the ANALYZE statement.
+If there are multiple inter-related columns in a table and the conditions or grouping operations based on these columns are involved in the query, collect statistics about these columns so that the query optimizer can accurately estimate the number of rows and generate an effective execution plan.
+Run the following commands to update the statistics about a table or the entire database:
ANALYZE tablename;   -- Update statistics about a table.
ANALYZE;             -- Update statistics about the entire database.
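For the large-table case described earlier, a minimal sketch of raising the sampling ratio before collecting statistics (tablename is illustrative):
SET default_statistics_target = -2;   -- sample 2% of the rows
ANALYZE tablename;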
Run the following statements to perform statistics-related operations on multiple columns:
ANALYZE tablename ((column_1, column_2));                        -- Collect statistics about column_1 and column_2 of tablename.

ALTER TABLE tablename ADD STATISTICS ((column_1, column_2));     -- Declare statistics about column_1 and column_2 of tablename.
ANALYZE tablename;                                               -- Collect statistics about one or more columns.

ALTER TABLE tablename DELETE STATISTICS ((column_1, column_2));  -- Delete statistics about column_1 and column_2 of tablename or their statistics declaration.
After the statistics are declared for multiple columns by running the ALTER TABLE tablename ADD STATISTICS statement, the system collects the statistics about these columns next time ANALYZE is performed on the table or the entire database.
+To collect the statistics, run the ANALYZE statement.
+Use EXPLAIN to show the execution plan of each SQL statement. If rows=10 (the default value, probably indicating the table has not been analyzed) is displayed in the SEQ SCAN output of a table, run the ANALYZE statement for this table.
+In a distributed framework, data is distributed on DNs. Data on one or more DNs is stored on a physical storage device. To properly define a table, you must:
+The distribution column is the core for defining a table. The following figure shows the procedure of defining a table. The table definition is created during the database design and is reviewed and modified during the SQL statement optimization.
+During database design, some key factors about table design will greatly affect the subsequent query performance of the database. Table design affects data storage as well. Scientific table design reduces I/O operations and minimizes memory usage, improving the query performance.
+Selecting a model for table storage is the first step of table definition. Select a proper storage model for your service based on the following table.
Storage Model | Application Scenario
---|---
Row storage | Point queries (simple index-based queries that return only a few records), and queries involving many INSERT, UPDATE, and DELETE operations.
Column storage | Statistical analysis queries, in which operations such as GROUP BY and JOIN are performed many times.
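A minimal sketch of choosing the storage model at table creation time (table and column names are illustrative):
CREATE TABLE orders_row (order_id int, status int) WITH (orientation = row);          -- point queries, frequent updates
CREATE TABLE sales_col (sale_date date, amount numeric) WITH (orientation = column);  -- analytical queries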
In replication mode, full data in a table is copied to each DN in the cluster. This mode is used for tables containing a small volume of data. Full data in a table stored on each DN avoids data redistribution during the JOIN operation. This reduces network costs and plan segments (each with a thread), but generates much redundant data. Generally, replication is only used for small dimension tables.
+In hash mode, hash values are generated for one or more columns. You can obtain the storage location of a tuple based on the mapping between DNs and the hash values. In a hash table, I/O resources on each node can be used for data read/write, which greatly accelerates the read/write of a table. Generally, a table containing a large amount of data is defined as a hash table.
Policy | Description | Scenario
---|---|---
Hash | Table data is distributed across all DNs in the cluster. | Fact tables containing a large amount of data
Replication | Full table data is stored on every DN in the cluster. | Small tables and dimension tables
As shown in Figure 1, T1 is a replication table and T2 is a hash table.
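A minimal sketch of declaring the two policies (table and column names are illustrative):
CREATE TABLE t1 (id int, name varchar(32)) DISTRIBUTE BY REPLICATION;             -- small dimension table
CREATE TABLE t2 (id int, sale_date date, amount numeric) DISTRIBUTE BY HASH(id);  -- large fact table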
+ +The distribution column in a hash table must meet the following requirements, which are ranked by priority in descending order:
+For a hash table, an improper distribution key may cause data skew or poor I/O performance on certain DNs. Therefore, you need to check the table to ensure that data is evenly distributed on each DN. You can run the following SQL statements to check data skew:
SELECT xc_node_id, count(1)
FROM tablename
GROUP BY xc_node_id
ORDER BY xc_node_id DESC;
xc_node_id corresponds to a DN. Generally, over 5% difference between the amount of data on different DNs is regarded as data skew. If the difference is over 10%, choose another distribution column.
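A minimal sketch of the same check that reports each DN's share of the rows directly (a window function applied over the aggregate):
SELECT xc_node_id, count(1) AS cnt,
       round(100.0 * count(1) / sum(count(1)) OVER (), 2) AS pct
FROM tablename
GROUP BY xc_node_id
ORDER BY pct DESC;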
+Multiple distribution columns can be selected in GaussDB(DWS) to evenly distribute data.
Partial Cluster Key (PCK) is a column-store technology. It can minimize or maximize sparse indexes to quickly filter base tables. A partial cluster key can specify multiple columns, but you are advised to specify no more than two. Use the following principles to specify columns:
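A minimal sketch of declaring a partial cluster key on a column-store table (table and column names are illustrative):
CREATE TABLE sales_fact
(
    sale_date date,
    region    int,
    amount    numeric,
    PARTIAL CLUSTER KEY(sale_date)
) WITH (orientation = column);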
+Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The table based on the logic is called a partitioned table, and a physical piece is called a partition. Data is stored on these smaller physical pieces, namely, partitions, instead of the larger logical partitioned table. A partitioned table has the following advantages over an ordinary table:
+GaussDB(DWS) supports range-partitioned tables.
+Range-partitioned table: Data within a specific range is mapped onto each partition. The range is determined by the partition key specified during the partitioned table creation. The partition key is usually a date. For example, sales data is partitioned by month.
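A minimal sketch of a table range-partitioned by month (names and boundary values are illustrative):
CREATE TABLE sales
(
    sale_id   int,
    sale_date date,
    amount    numeric
)
DISTRIBUTE BY HASH(sale_id)
PARTITION BY RANGE(sale_date)
(
    PARTITION p202001 VALUES LESS THAN ('2020-02-01'),
    PARTITION p202002 VALUES LESS THAN ('2020-03-01'),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);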
+Use the following principles to obtain efficient data types:
Generally, calculations on integers (including common comparison operations such as =, >, <, >=, <=, and !=, as well as GROUP BY) are more efficient than calculations on strings and floating-point numbers. For example, for a column-store table serving point queries that filtered on a column of numeric data, execution took over 10s; after the column's data type was changed from NUMERIC to INT, the execution time dropped to 1.8s.
+Using the data type with a shorter length reduces both the data file size and the memory used for computing, improving the I/O and computing performance. For example, use SMALLINT instead of INT, and INT instead of BIGINT.
+Use the same data type for associated columns. If columns having different data types are associated, the database must dynamically convert the different data types into the same ones for comparison. The conversion results in performance overheads.
+SQL optimization involves continuous analysis and adjustment. You need to test-run a query, locate and fix its performance issues (if any) based on its execution plan, and run it again, until the execution performance meet your requirements.
+Performance issues may occur when you query data or run the INSERT, DELETE, UPDATE, or CREATE TABLE AS statement. You can query the warning column in the GS_WLM_SESSION_STATISTICS, GS_WLM_SESSION_HISTORY, and GS_WLM_SESSION_INFO views to obtain performance diagnosis information for tuning.
+Alarms that can trigger SQL self-diagnosis depend on the settings of resource_track_level. If resource_track_level is set to query or perf, you can diagnose alarms indicating that statistics of multiple columns or a single column are not collected or SQL statements are not pushed down. If resource_track_level is set to operator, all alarm scenarios can be diagnosed.
Whether an SQL plan will be diagnosed depends on the setting of resource_track_cost. An SQL plan is diagnosed only if its execution cost is greater than resource_track_cost. You can use the EXPLAIN keyword to check the plan's execution cost.
+Currently, the following performance alarms will be reported:
+If statistics of a single column or multiple columns are not collected, an alarm is reported. For details about the optimization, see Updating Statistics and Optimizing Statistics.
If no statistics are collected for an OBS or HDFS foreign table in the query statement, an alarm indicating that statistics are not collected will be reported. Because the ANALYZE performance of OBS and HDFS foreign tables is poor, you are advised not to run ANALYZE on these tables. Instead, use the ALTER FOREIGN TABLE syntax to modify the totalrows attribute of the foreign table to correct the estimated number of rows, as sketched below.
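A sketch of correcting the estimated row count of a foreign table (the table name is hypothetical, and this assumes totalrows is modified through the OPTIONS clause):

-- Correct the optimizer's row estimate instead of running ANALYZE
-- on the OBS/HDFS foreign table.
ALTER FOREIGN TABLE obs_sales_ft OPTIONS (SET totalrows '1000000');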
+Example alarms:
The statistics about a table are not collected.
Statistic Not Collect: schema_test.t1
The statistics about a single column are not collected.
Statistic Not Collect: schema_test.t2(c1)
The statistics about multiple columns are not collected.
Statistic Not Collect: schema_test.t3((c1,c2))
The statistics about a single column and multiple columns are not collected.
Statistic Not Collect: schema_test.t4(c1) schema_test.t5((c1,c2))
If an SQL statement cannot be pushed down, an alarm stating the cause is reported. Example alarms:
SQL is not plan-shipping, reason : "enable_stream_operator is off"
SQL is not plan-shipping, reason : "Distinct On can not be shipped"
SQL is not plan-shipping, reason : "v_test_unshipping_log is VIEW that will be treated as Record type can't be shipped"
An alarm will be reported if the number of rows in the inner table of a hash join reaches or exceeds 10 times that in the outer table, more than 100,000 inner-table rows are processed on each DN on average, and data has been flushed to disks. You can view the query_plan column in GS_WLM_SESSION_HISTORY to check whether hash joins are used. In this scenario, adjust the order of the inner and outer tables of the hash join. For details, see Join Order Hints.
+Example alarm:
PlanNode[7] Large Table is INNER in HashJoin "Vector Hash Aggregate"
In the preceding alarm, 7 indicates the operator whose ID is 7 in the query_plan column.
An alarm will be reported if a nested loop is used in an equi-join where more than 100,000 larger-table rows are processed on each DN on average. You can view the query_plan column of GS_WLM_SESSION_HISTORY to check whether a nested loop is used. In this scenario, adjust the table join mode and disable the nested-loop join between the current inner and outer tables. For details, see Join Operation Hints.
+Example alarm:
PlanNode[5] Large Table with Equal-Condition use Nestloop "Nested Loop"
An alarm will be reported if more than 100,000 rows are broadcast on each DN on average. In this scenario, the broadcast operation performed by the lower-layer operator of the Streaming (BROADCAST) node needs to be disabled. For details about the optimization, see Stream Operation Hints.
+Example alarm:
PlanNode[5] Large Table in Broadcast "Streaming(type: BROADCAST dop: 1/2)"
An alarm will be reported if the number of rows processed on any DN exceeds 100,000, and the number of rows processed on one DN reaches or exceeds 10 times that processed on another DN. Generally, this alarm is caused by skew at the storage layer or computing layer. For details about the optimization, see Optimizing Data Skew.
+Example alarm:
PlanNode[6] DataSkew:"Seq Scan", min_dn_tuples:0, max_dn_tuples:524288
During base table scanning, an alarm is reported if the following conditions are met:
+For details about the optimization, see Optimizing Operators. You can also refer to Case: Creating an Appropriate Index and Case: Setting Partial Cluster Keys.
+Example alarms:
PlanNode[4] Indexscan is not properly used:"Index Only Scan", output:524288, filtered:0, rate:1.00000
PlanNode[5] Indexscan is ought to be used:"Seq Scan", output:1, filtered:524288, rate:0.00000
An alarm will be reported if the actual or estimated maximum number of rows processed on a DN exceeds 100,000 and the larger of the two reaches or exceeds 10 times the smaller one. In this scenario, you can refer to Rows Hints to correct the row count estimation, so that the optimizer can re-plan based on the correct number.
+Example alarm:
PlanNode[5] Inaccurate Estimation-Rows: "Hash Join" A-Rows:0, E-Rows:52488
WARNING, "Planner issue report is truncated, the rest of planner issues will be skipped"+
Currently, the GaussDB(DWS) optimizer can use three methods to develop statement execution policies in the distributed framework: generating a statement pushdown plan, a distributed execution plan, or a distributed execution plan for sending statements.
The third policy sends many intermediate results from DNs to a CN for further processing. Statements that cannot be pushed down to DNs therefore make the CN a performance bottleneck (in bandwidth, storage, and computing). You are advised not to use query statements to which only the third policy applies.
Statements cannot be pushed down to DNs if they contain Functions That Do Not Support Pushdown or Syntax That Does Not Support Pushdown. Generally, you can rewrite such statements to solve the problem.
+Perform the following procedure to quickly determine whether the execution plan can be pushed down to DNs:
SET enable_fast_query_shipping = off;
If the execution plan contains Data Node Scan, the SQL statements cannot be pushed down to DNs. If the execution plan contains Streaming, the SQL statements can be pushed down to DNs.
+For example:
select
count(ss.ss_sold_date_sk order by ss.ss_sold_date_sk) c1
from store_sales ss, store_returns sr
where
sr.sr_customer_sk = ss.ss_customer_sk;
The execution plan is as follows, which indicates that the SQL statement cannot be pushed down.
                            QUERY PLAN
--------------------------------------------------------------------------
 Aggregate
   ->  Hash Join
         Hash Cond: (ss.ss_customer_sk = sr.sr_customer_sk)
         ->  Data Node Scan on store_sales "_REMOTE_TABLE_QUERY_"
               Node/s: All datanodes
         ->  Hash
               ->  Data Node Scan on store_returns "_REMOTE_TABLE_QUERY_"
                     Node/s: All datanodes
(8 rows)
SQL syntax that does not support pushdown is described using the following table definition examples:
CREATE TABLE CUSTOMER1
(
    C_CUSTKEY    BIGINT NOT NULL
  , C_NAME       VARCHAR(25) NOT NULL
  , C_ADDRESS    VARCHAR(40) NOT NULL
  , C_NATIONKEY  INT NOT NULL
  , C_PHONE      CHAR(15) NOT NULL
  , C_ACCTBAL    DECIMAL(15,2) NOT NULL
  , C_MKTSEGMENT CHAR(10) NOT NULL
  , C_COMMENT    VARCHAR(117) NOT NULL
)
DISTRIBUTE BY hash(C_CUSTKEY);

CREATE TABLE test_stream(a int, b float);  -- float does not support redistribution.
CREATE TABLE sal_emp ( c1 integer[] ) DISTRIBUTE BY replication;
The RETURNING clause cannot be pushed down:

explain update customer1 set C_NAME = 'a' returning c_name;
                            QUERY PLAN
------------------------------------------------------------------
 Update on customer1  (cost=0.00..0.00 rows=30 width=187)
   Node/s: All datanodes
   Node expr: c_custkey
   ->  Data Node Scan on customer1 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=30 width=187)
         Node/s: All datanodes
(5 rows)
An aggregate with DISTINCT on a column whose data type (float) does not support redistribution cannot be pushed down:

explain verbose select count(distinct b) from test_stream;
                            QUERY PLAN
------------------------------------------------------------------
 Aggregate  (cost=2.50..2.51 rows=1 width=8)
   Output: count(DISTINCT test_stream.b)
   ->  Data Node Scan on test_stream "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=30 width=8)
         Output: test_stream.b
         Node/s: All datanodes
         Remote query: SELECT b FROM ONLY public.test_stream WHERE true
(6 rows)
DISTINCT ON cannot be pushed down:

explain verbose select distinct on (c_custkey) c_custkey from customer1 order by c_custkey;
                            QUERY PLAN
------------------------------------------------------------------
 Unique  (cost=49.83..54.83 rows=30 width=8)
   Output: customer1.c_custkey
   ->  Sort  (cost=49.83..52.33 rows=30 width=8)
         Output: customer1.c_custkey
         Sort Key: customer1.c_custkey
         ->  Data Node Scan on customer1 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=30 width=8)
               Output: customer1.c_custkey
               Node/s: All datanodes
               Remote query: SELECT c_custkey FROM ONLY public.customer1 WHERE true
(9 rows)
A full join whose join condition involves a column type that does not support redistribution (float) cannot be pushed down:

explain select * from test_stream t1 full join test_stream t2 on t1.a=t2.b;
                            QUERY PLAN
------------------------------------------------------------------
 Hash Full Join  (cost=0.38..0.82 rows=30 width=24)
   Hash Cond: ((t1.a)::double precision = t2.b)
   ->  Data Node Scan on test_stream "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=30 width=12)
         Node/s: All datanodes
   ->  Hash  (cost=0.00..0.00 rows=30 width=12)
         ->  Data Node Scan on test_stream "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=30 width=12)
               Node/s: All datanodes
(7 rows)
The following statement containing an array expression is not fully pushed down:

explain verbose select array[c_custkey,1] from customer1 order by c_custkey;
                            QUERY PLAN
------------------------------------------------------------------
 Sort  (cost=49.83..52.33 rows=30 width=8)
   Output: (ARRAY[customer1.c_custkey, 1::bigint]), customer1.c_custkey
   Sort Key: customer1.c_custkey
   ->  Data Node Scan on "__REMOTE_SORT_QUERY__"  (cost=0.00..0.00 rows=30 width=8)
         Output: (ARRAY[customer1.c_custkey, 1::bigint]), customer1.c_custkey
         Node/s: All datanodes
         Remote query: SELECT ARRAY[c_custkey, 1::bigint], c_custkey FROM ONLY public.customer1 WHERE true ORDER BY 2
(7 rows)
The following lists typical scenarios in which WITH RECURSIVE statements cannot be pushed down, together with the corresponding log messages (the LOG text describes the cause of not supporting pushdown):

Scenario 1: The query contains foreign tables or HDFS tables.
LOG: SQL can't be shipped, reason: RecursiveUnion contains HDFS Table or ForeignScan is not shippable
In the current version, queries containing foreign tables or HDFS tables do not support pushdown.

Scenario 2: Multiple Node Groups are involved.
LOG: SQL can't be shipped, reason: With-Recursive under multi-nodegroup scenario is not shippable
In the current version, pushdown is supported only when all base tables are stored and computed in the same Node Group.

Scenario 3: ALL is not used for UNION, so the returned result is deduplicated.
WITH recursive t_result AS (
SELECT dm,sj_dm,name,1 as level
FROM test_rec_part
WHERE sj_dm > 10
UNION
SELECT t2.dm,t2.sj_dm,t2.name||' > '||t1.name,t1.level+1
FROM t_result t1
JOIN test_rec_part t2 ON t2.sj_dm = t1.dm
)
SELECT * FROM t_result t;
LOG: SQL can't be shipped, reason: With-Recursive does not contain "ALL" to bind recursive & none-recursive branches

Scenario 4: A base table is a system catalog.
WITH RECURSIVE x(id) AS
(
select count(1) from pg_class where oid=1247
UNION ALL
SELECT id+1 FROM x WHERE id < 5
), y(id) AS
(
select count(1) from pg_class where oid=1247
UNION ALL
SELECT id+1 FROM x WHERE id < 10
)
SELECT y.*, x.* FROM y LEFT JOIN x USING (id) ORDER BY 1;
LOG: SQL can't be shipped, reason: With-Recursive contains system table is not shippable

Scenario 5: Only VALUES is used for scanning base tables. In this case, the statement can be executed on the CN, and DNs are unnecessary.
WITH RECURSIVE t(n) AS (
VALUES (1)
UNION ALL
SELECT n+1 FROM t WHERE n < 100
)
SELECT sum(n) FROM t;
LOG: SQL can't be shipped, reason: With-Recursive contains only values rte is not shippable

Scenario 6: The correlation conditions of a correlated subquery exist only in the recursive part; the non-recursive part has no correlation condition.
select a.ID,a.Name,
(
with recursive cte as (
select ID, PID, NAME from b where b.ID = 1
union all
select parent.ID,parent.PID,parent.NAME
from cte as child join b as parent on child.pid=parent.id
where child.ID = a.ID
)
select NAME from cte limit 1
) cName
from
(
select id, name, count(*) as cnt
from a group by id,name
) a order by 1,2;
LOG: SQL can't be shipped, reason: With-Recursive recursive term correlated only is not shippable

Scenario 7: A replicate plan is used for LIMIT in the non-recursive part but a hash plan is used in the recursive part, resulting in a distribution conflict.
WITH recursive t_result AS (
select * from(
SELECT dm,sj_dm,name,1 as level
FROM test_rec_part
WHERE sj_dm < 10 order by dm limit 6 offset 2)
UNION all
SELECT t2.dm,t2.sj_dm,t2.name||' > '||t1.name,t1.level+1
FROM t_result t1
JOIN test_rec_part t2 ON t2.sj_dm = t1.dm
)
SELECT * FROM t_result t;
LOG: SQL can't be shipped, reason: With-Recursive contains conflict distribution in none-recursive(Replicate) recursive(Hash)

Scenario 8: Multiple layers of recursive CTEs are nested, that is, a recursive CTE is nested in the recursive part of another recursive CTE.
with recursive cte as
(
select * from rec_tb4 where id<4
union all
select h.id,h.parentID,h.name from
(
with recursive cte as
(
select * from rec_tb4 where id<4
union all
select h.id,h.parentID,h.name from rec_tb4 h inner join cte c on h.id=c.parentID
)
SELECT id ,parentID,name from cte order by parentID
) h
inner join cte c on h.id=c.parentID
)
SELECT id ,parentID,name from cte order by parentID,1,2,3;
LOG: SQL can't be shipped, reason: Recursive CTE references recursive CTE "cte"
This section describes the volatility of functions. In GaussDB(DWS), a function can be one of the following:

IMMUTABLE: indicates that the function always returns the same result for the same parameter values.

STABLE: indicates that the function cannot modify the database, and that within a single table scan it consistently returns the same result for the same parameter values, but its result may vary across SQL statements.

VOLATILE: indicates that the function value can change even within a single table scan, so no optimizations can be made.
The volatility of a function can be obtained by querying its provolatile column in pg_proc. The value i indicates IMMUTABLE, s indicates STABLE, and v indicates VOLATILE. The valid values of the proshippable column in pg_proc are t, f, and NULL. The proshippable and provolatile columns together describe whether a function can be pushed down.
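For example, the following query (a sketch; the function name is from the example below) checks the volatility and shippability of a function:

-- i/s/v in provolatile stands for IMMUTABLE/STABLE/VOLATILE;
-- proshippable indicates whether the function can be shipped to DNs.
SELECT proname, provolatile, proshippable
FROM pg_proc
WHERE proname = 'func_percent_1';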
+For a UDF, you can specify the values of provolatile and proshippable during its creation. For details, see CREATE FUNCTION.
+In scenarios where a function does not support pushdown, perform one of the following as required:
+Define a user-defined function that generates fixed output for a certain input as the immutable type.
+Take the sales information of TPCDS as an example. If you want to write a function to calculate the discount data of a product, you can define the function as follows:
CREATE FUNCTION func_percent_2 (NUMERIC, NUMERIC) RETURNS NUMERIC
AS 'SELECT $1 / $2 WHERE $2 > 0.01'
LANGUAGE SQL
VOLATILE;
Run the following statement:
SELECT func_percent_2(ss_sales_price, ss_list_price)
FROM store_sales;
The execution plan is as follows:
func_percent_2 is not pushed down, and the expression over ss_sales_price and ss_list_price is evaluated on a CN. In this case, a large amount of CN resources is consumed, and the performance deteriorates as a result.
In this example, the function returns the same output for the same input. Therefore, you can modify the function as follows:
CREATE FUNCTION func_percent_1 (NUMERIC, NUMERIC) RETURNS NUMERIC
AS 'SELECT $1 / $2 WHERE $2 > 0.01'
LANGUAGE SQL
IMMUTABLE;
Run the following statement:
SELECT func_percent_1(ss_sales_price, ss_list_price)
FROM store_sales;
The execution plan is as follows:
func_percent_1 is pushed down to DNs for quicker execution. (In TPC-DS 1000X, with three CNs and 18 DNs, the query efficiency is improved by over 100 times.)
+For details, see Case: Pushing Down Sort Operations to DNs.
When an application runs SQL statements to operate the database, a large number of subqueries are used because they are clearer than table joins. Especially in complicated queries, subqueries have more complete and independent semantics, which makes SQL statements clearer and easier to understand. Therefore, subqueries are widely used.
+In GaussDB(DWS), subqueries can also be called sublinks based on the location of subqueries in SQL statements.
The sublinks commonly used in OLAP and HTAP are exist_sublink and any_sublink, which can be pulled up by the GaussDB(DWS) optimization engine. Because subqueries are used flexibly in SQL statements, complex subqueries may affect query performance. Subqueries are classified into non-correlated subqueries and correlated subqueries.
Non-correlated subquery: The execution of the subquery is independent of any attribute of the outer query, so the subquery can be executed before the outer query.
+Example:
select t1.c1,t1.c2
from t1
where t1.c1 in (
    select c2
    from t2
    where t2.c2 IN (2,3,4)
);
                          QUERY PLAN
---------------------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Hash Right Semi Join
         Hash Cond: (t2.c2 = t1.c1)
         ->  Streaming(type: REDISTRIBUTE)
               Spawn on: All datanodes
               ->  Seq Scan on t2
                     Filter: (c2 = ANY ('{2,3,4}'::integer[]))
         ->  Hash
               ->  Seq Scan on t1
(10 rows)
Correlated subquery: The execution of the subquery depends on some attributes of the outer query, which are used as AND conditions of the subquery. In the following example, t1.c1 in the condition t2.c1 = t1.c1 is a dependent attribute. Such a subquery depends on the outer query and needs to be executed once for each row of the outer query.
+Example:
select t1.c1,t1.c2
from t1
where t1.c1 in (
    select c2
    from t2
    where t2.c1 = t1.c1 AND t2.c2 in (2,3,4)
);
                          QUERY PLAN
-----------------------------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Seq Scan on t1
         Filter: (SubPlan 1)
         SubPlan 1
           ->  Result
                 Filter: (t2.c1 = t1.c1)
                 ->  Materialize
                       ->  Streaming(type: BROADCAST)
                             Spawn on: All datanodes
                             ->  Seq Scan on t2
                                   Filter: (c2 = ANY ('{2,3,4}'::integer[]))
(12 rows)
A subquery can be pulled up to join with tables in the outer query, preventing the subquery from being converted into the combination of a subplan and broadcast. You can run the EXPLAIN statement to check whether a subquery has been converted in this way.
+Example:
The WHERE condition of the subquery must contain a column from the outer query, and equivalence comparison must be performed between this column and columns in the subquery's tables. These conditions must be connected using AND. Other parts of the subquery cannot contain the column from the outer query. The following examples show further restrictions.
The following subquery can be pulled up because it contains an aggregation function:

select * from t1 where c1 >(
    select max(t2.c1) from t2 where t2.c1=t1.c1
);
The following subquery cannot be pulled up because the subquery has no aggregation function.
select * from t1 where c1 >(
    select t2.c1 from t2 where t2.c1=t1.c1
);
The following subquery cannot be pulled up because the subquery has two output columns:
select * from t1 where (c1,c2) >(
    select max(t2.c1),min(t2.c2) from t2 where t2.c1=t1.c1
);
The following nested subquery can be pulled up:

select * from t3 where t3.c1=(
    select t1.c1
    from t1 where c1 >(
        select max(t2.c1) from t2 where t2.c1=t1.c1
));
If another condition is added to the innermost subquery in the previous example, the subquery cannot be pulled up because it references a column of the outermost query. Example:
select * from t3 where t3.c1=(
    select t1.c1
    from t1 where c1 >(
        select max(t2.c1) from t2 where t2.c1=t1.c1 and t3.c1>t2.c2
));
If the WHERE condition contains an EXISTS-related sublink connected by OR, for example:
select a, c from t1
where t1.a = (select avg(a) from t3 where t1.b = t3.b) or
exists (select * from t4 where t1.c = t4.c);
the process of pulling up such a sublink is as follows:
First, pull up the aggregation sublink to a left join:

select a, c
from t1 left join (select avg(a) avg, t3.b from t3 group by t3.b) as t3 on (t1.a = avg and t1.b = t3.b)
where t3.b is not null or exists (select * from t4 where t1.c = t4.c);
Then, pull up the EXISTS sublink to another left join:

select a, c
from t1 left join (select avg(a) avg, t3.b from t3 group by t3.b) as t3 on (t1.a = avg and t1.b = t3.b)
left join (select t4.c from t4 group by t4.c) as t4 on (t1.c = t4.c)
where t3.b is not null or t4.c is not null;
Except the sublinks described above, no other sublinks can be pulled up. In this case, a correlated subquery is planned as the combination of a subplan and broadcast. As a result, if the tables in the subquery contain a large amount of data, query performance may be poor.
+If a correlated subquery joins with two tables in outer queries, the subquery cannot be pulled up. You need to change the parent query into a WITH clause and then perform the join.
+Example:
select distinct t1.a, t2.a
from t1 left join t2 on t1.a=t2.a and not exists (select a,b from test1 where test1.a=t1.a and test1.b=t2.a);
The parent query is changed into:
with temp as
(
    select * from (select t1.a as a, t2.a as b from t1 left join t2 on t1.a=t2.a) as t
)
select distinct a,b
from temp
where not exists (select a,b from test1 where temp.a=test1.a and temp.b=test1.b);
Example:
explain (costs off)
select (select c2 from t2 where t1.c1 = t2.c1) ssq, t1.c2
from t1
where t1.c2 > 10;
The execution plan is as follows:
                      QUERY PLAN
------------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Seq Scan on t1
         Filter: (c2 > 10)
         SubPlan 1
           ->  Result
                 Filter: (t1.c1 = t2.c1)
                 ->  Materialize
                       ->  Streaming(type: BROADCAST)
                             Spawn on: All datanodes
                             ->  Seq Scan on t2
(11 rows)
The correlated subquery is displayed in the target list (query return list). Values need to be returned even if the condition t1.c1=t2.c1 is not met. Therefore, use left outer join to join T1 and T2 so that SSQ can return padding values when the condition t1.c1=t2.c1 is not met.
ScalarSubQuery (SSQ) and Correlated-ScalarSubQuery (CSSQ) are described as follows:
SSQ: a sublink that returns only one row with a single column (a scalar value).
CSSQ: an SSQ whose WHERE condition contains correlation conditions referencing the outer query.
+The preceding SQL statement can be changed into:
with ssq as
(
    select t2.c2 from t2
)
select ssq.c2, t1.c2
from t1 left join ssq on t1.c1 = ssq.c2
where t1.c2 > 10;
The execution plan after the change is as follows:
                 QUERY PLAN
-------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Hash Right Join
         Hash Cond: (t2.c2 = t1.c1)
         ->  Streaming(type: REDISTRIBUTE)
               Spawn on: All datanodes
               ->  Seq Scan on t2
         ->  Hash
               ->  Seq Scan on t1
                     Filter: (c2 > 10)
(10 rows)
In the preceding example, the SSQ is pulled up to right join, preventing poor performance caused by the combination of a subplan and broadcast when the table (T2) in the subquery is too large.
+Example:
select (select count(*) from t2 where t2.c1=t1.c1) cnt, t1.c1, t3.c1
from t1,t3
where t1.c1=t3.c1 order by cnt, t1.c1;
The execution plan is as follows:
                            QUERY PLAN
------------------------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Sort
         Sort Key: ((SubPlan 1)), t1.c1
         ->  Hash Join
               Hash Cond: (t1.c1 = t3.c1)
               ->  Seq Scan on t1
               ->  Hash
                     ->  Seq Scan on t3
               SubPlan 1
                 ->  Aggregate
                       ->  Result
                             Filter: (t2.c1 = t1.c1)
                             ->  Materialize
                                   ->  Streaming(type: BROADCAST)
                                         Spawn on: All datanodes
                                         ->  Seq Scan on t2
(17 rows)
The correlated subquery appears in the target list (the query's return list), and a value must be returned even when the condition t1.c1=t2.c1 is not met. Therefore, a left outer join is used to join T1 and T2 so that the SSQ returns padding values when there is no match. However, COUNT must return 0 rather than NULL when there is no match, so a CASE WHEN ... IS NULL THEN 0 ELSE count END expression is used.
+The preceding SQL statement can be changed into:
with ssq as
(
    select count(*) cnt, c1 from t2 group by c1
)
select case when
    ssq.cnt is null then 0
    else ssq.cnt
    end cnt, t1.c1, t3.c1
from t1 left join ssq on ssq.c1 = t1.c1,t3
where t1.c1 = t3.c1
order by ssq.cnt, t1.c1;
The execution plan after the change is as follows:
                      QUERY PLAN
-----------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Sort
         Sort Key: (count(*)), t1.c1
         ->  Hash Join
               Hash Cond: (t1.c1 = t3.c1)
               ->  Hash Left Join
                     Hash Cond: (t1.c1 = t2.c1)
                     ->  Seq Scan on t1
                     ->  Hash
                           ->  HashAggregate
                                 Group By Key: t2.c1
                                 ->  Seq Scan on t2
               ->  Hash
                     ->  Seq Scan on t3
(15 rows)
Example of a nonequivalent correlated subquery (agg() denotes an aggregation function, for example, max()):

select t1.c1, t1.c2
from t1
where t1.c1 = (select agg() from t2 where t2.c2 > t1.c2);
Nonequivalent correlated subqueries cannot be pulled up. You can rewrite the statement by joining twice (a correlation-key join plus a rowid self-join) to achieve an equivalent result.
+You can rewrite the statement in either of the following ways:
select t1.c1, t1.c2
from t1, (
    select t1.rowid, agg() aggref
    from t1,t2
    where t1.c2 > t2.c2 group by t1.rowid
) dt /* derived table */
where t1.rowid = dt.rowid AND t1.c1 = dt.aggref;
WITH dt as
(
    select t1.rowid, agg() aggref
    from t1,t2
    where t1.c2 > t2.c2 group by t1.rowid
)
select t1.c1, t1.c2
from t1, dt
where t1.rowid = dt.rowid AND
t1.c1 = dt.aggref;
1. Change the base table to a replication table and create an index on the filter column.
create table master_table (a int);
create table sub_table(a int, b int);
select a from master_table group by a having a in (select a from sub_table);
In this example, a correlated subquery is contained. To improve the query performance, you can change sub_table to a replication table and create an index on the a column.
2. Modify the SELECT statement: change the subquery to a join between its table and the outer query, or otherwise modify the subquery to improve the query performance. Ensure that the rewritten statement is semantically equivalent.
explain (costs off) select * from master_table as t1 where t1.a in (select t2.a from sub_table as t2 where t1.a = t2.b);
                    QUERY PLAN
----------------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Seq Scan on master_table t1
         Filter: (SubPlan 1)
         SubPlan 1
           ->  Result
                 Filter: (t1.a = t2.b)
                 ->  Materialize
                       ->  Streaming(type: BROADCAST)
                             Spawn on: All datanodes
                             ->  Seq Scan on sub_table t2
(11 rows)
In the preceding example, a subplan is used. To remove the subplan, you can modify the statement as follows:
explain (costs off) select * from master_table as t1 where exists (select t2.a from sub_table as t2 where t1.a = t2.b and t1.a = t2.a);
                    QUERY PLAN
--------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Hash Semi Join
         Hash Cond: (t1.a = t2.b)
         ->  Seq Scan on master_table t1
         ->  Hash
               ->  Streaming(type: REDISTRIBUTE)
                     Spawn on: All datanodes
                     ->  Seq Scan on sub_table t2
(9 rows)
In this way, the subplan is replaced by the semi-join between the two tables, greatly improving the execution efficiency.
GaussDB(DWS) generates optimal execution plans based on cost estimation. The optimizer estimates the number of data rows and the cost based on statistics collected using ANALYZE, so the statistics are vital to these estimations. Global statistics collected using ANALYZE include relpages and reltuples in the pg_class catalog, and stadistinct, stanullfrac, stanumbersN, stavaluesN, and histogram_bounds in the pg_statistic catalog.
+In most cases, the lack of statistics in tables or columns involved in the query greatly affects the query performance.
+The table structure is as follows:
CREATE TABLE LINEITEM
(
L_ORDERKEY       BIGINT NOT NULL
, L_PARTKEY      BIGINT NOT NULL
, L_SUPPKEY      BIGINT NOT NULL
, L_LINENUMBER   BIGINT NOT NULL
, L_QUANTITY     DECIMAL(15,2) NOT NULL
, L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL
, L_DISCOUNT     DECIMAL(15,2) NOT NULL
, L_TAX          DECIMAL(15,2) NOT NULL
, L_RETURNFLAG   CHAR(1) NOT NULL
, L_LINESTATUS   CHAR(1) NOT NULL
, L_SHIPDATE     DATE NOT NULL
, L_COMMITDATE   DATE NOT NULL
, L_RECEIPTDATE  DATE NOT NULL
, L_SHIPINSTRUCT CHAR(25) NOT NULL
, L_SHIPMODE     CHAR(10) NOT NULL
, L_COMMENT      VARCHAR(44) NOT NULL
) with (orientation = column, COMPRESSION = MIDDLE) distribute by hash(L_ORDERKEY);

CREATE TABLE ORDERS
(
O_ORDERKEY        BIGINT NOT NULL
, O_CUSTKEY       BIGINT NOT NULL
, O_ORDERSTATUS   CHAR(1) NOT NULL
, O_TOTALPRICE    DECIMAL(15,2) NOT NULL
, O_ORDERDATE     DATE NOT NULL
, O_ORDERPRIORITY CHAR(15) NOT NULL
, O_CLERK         CHAR(15) NOT NULL
, O_SHIPPRIORITY  BIGINT NOT NULL
, O_COMMENT       VARCHAR(79) NOT NULL
) with (orientation = column, COMPRESSION = MIDDLE) distribute by hash(O_ORDERKEY);
The query statements are as follows:
explain verbose select
count(*) as numwait
from
lineitem l1,
orders
where
o_orderkey = l1.l_orderkey
and o_orderstatus = 'F'
and l1.l_receiptdate > l1.l_commitdate
and not exists (
    select *
    from lineitem l3
    where l3.l_orderkey = l1.l_orderkey
    and l3.l_suppkey <> l1.l_suppkey
    and l3.l_receiptdate > l3.l_commitdate
)
order by numwait desc;
If such an issue occurs, use either of the following methods to check whether statistics of the tables or columns involved in the query have been collected using ANALYZE.
WARNING:Statistics in some tables or columns(public.lineitem.l_receiptdate, public.lineitem.l_commitdate, public.lineitem.l_orderkey, public.lineitem.l_suppkey, public.orders.o_orderstatus, public.orders.o_orderkey) are not collected.
HINT:Do analyze for them in order to generate optimized plan.
2017-06-14 17:28:30.336 CST 140644024579856 20971684 [BACKEND] LOG:Statistics in some tables or columns(public.lineitem.l_receiptdate, public.lineitem.l_commitdate, public.lineitem.l_orderkey, public.lineitem.l_suppkey, public.orders.o_orderstatus, public.orders.o_orderkey) are not collected.
2017-06-14 17:28:30.336 CST 140644024579856 20971684 [BACKEND] HINT:Do analyze for them in order to generate optimized plan.
Using either of the preceding methods, you can identify the tables or columns whose statistics have not been collected. Run ANALYZE on the tables and columns reported in the warnings or logs to resolve the problem, for example:
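-- Collect statistics on the tables reported in the warning above.
ANALYZE public.lineitem;
ANALYZE public.orders;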
+For details, see Case: Configuring cost_param for Better Query Performance.
Symptom: Query the personnel who checked in at an Internet cafe within 15 minutes before or after the check-in of a specified person.
SELECT
    C.WBM,
    C.DZQH,
    C.DZ,
    B.ZJHM,
    B.SWKSSJ,
    B.XWSJ
FROM
    b_zyk_wbswxx A,
    b_zyk_wbswxx B,
    b_zyk_wbcs C
WHERE
    A.ZJHM = '522522******3824'
    AND A.WBDM = B.WBDM
    AND A.WBDM = C.WBDM
    AND abs(to_date(A.SWKSSJ,'yyyymmddHH24MISS') - to_date(B.SWKSSJ,'yyyymmddHH24MISS')) < INTERVAL '15 MINUTES'
ORDER BY
    B.SWKSSJ,
    B.ZJHM
LIMIT 10 OFFSET 0;
Figure 1 shows the execution plan. This query takes about 12s.
Optimization analysis:
-- Create a temporary unlogged table.
CREATE UNLOGGED TABLE temp_tsw
(
ZJHM NVARCHAR2(18),
WBDM NVARCHAR2(14),
SWKSSJ_START NVARCHAR2(14),
SWKSSJ_END NVARCHAR2(14),
WBM NVARCHAR2(70),
DZQH NVARCHAR2(6),
DZ NVARCHAR2(70),
IPDZ NVARCHAR2(39)
);

-- Insert the Internet access record of the specified person, and compute
-- the start time and end time of the 15-minute window.
INSERT INTO temp_tsw
SELECT
A.ZJHM,
A.WBDM,
to_char((to_date(A.SWKSSJ,'yyyymmddHH24MISS') - INTERVAL '15 MINUTES'),'yyyymmddHH24MISS'),
to_char((to_date(A.SWKSSJ,'yyyymmddHH24MISS') + INTERVAL '15 MINUTES'),'yyyymmddHH24MISS'),
B.WBM,B.DZQH,B.DZ,B.IPDZ
FROM
b_zyk_wbswxx A,
b_zyk_wbcs B
WHERE
A.ZJHM='522522******3824' AND A.WBDM = B.WBDM;

-- Query the personnel who checked in at an Internet cafe within 15 minutes
-- before or after the check-in of the specified person. Convert the time
-- strings to int8 for comparison.
SELECT
A.WBM,
A.DZQH,
A.DZ,
A.IPDZ,
B.ZJHM,
B.XM,
to_date(B.SWKSSJ,'yyyymmddHH24MISS') as SWKSSJ,
to_date(B.XWSJ,'yyyymmddHH24MISS') as XWSJ,
B.SWZDH
FROM temp_tsw A,
b_zyk_wbswxx B
WHERE
A.ZJHM <> B.ZJHM
AND A.WBDM = B.WBDM
AND (B.SWKSSJ)::int8 > (A.swkssj_start)::int8
AND (B.SWKSSJ)::int8 < (A.swkssj_end)::int8
order by
B.SWKSSJ,
B.ZJHM
limit 10 offset 0;
The query takes about 7s. Figure 2 shows the execution plan.
temp_tsw contains only hundreds of records, and an equi-join is performed between temp_tsw and b_zyk_wbswxx on wbdm (the Internet cafe code). Therefore, if the join is changed to a nested-loop join, index scan can be used for table scanning, and the performance will be boosted.
SET enable_hashjoin = off;
Figure 3 shows the execution plan. The query takes about 3s.
If paging display needs to be implemented on the upper-layer application page, the offset value is changed to locate the result set of the target page. In this way, the preceding query statement is executed after every page turn, causing a long response latency.
+To resolve this problem, you are advised to use the unlogged table to save the result set.
-- Create an unlogged table to save the result set.
CREATE UNLOGGED TABLE temp_result
(
WBM NVARCHAR2(70),
DZQH NVARCHAR2(6),
DZ NVARCHAR2(70),
IPDZ NVARCHAR2(39),
ZJHM NVARCHAR2(18),
XM NVARCHAR2(30),
SWKSSJ date,
XWSJ date,
SWZDH NVARCHAR2(32)
);

-- Insert the result set into the unlogged table. The insertion takes about 3s.
INSERT INTO temp_result
SELECT
A.WBM,
A.DZQH,
A.DZ,
A.IPDZ,
B.ZJHM,
B.XM,
to_date(B.SWKSSJ,'yyyymmddHH24MISS') as SWKSSJ,
to_date(B.XWSJ,'yyyymmddHH24MISS') as XWSJ,
B.SWZDH
FROM temp_tsw A,
b_zyk_wbswxx B
WHERE
A.ZJHM <> B.ZJHM
AND A.WBDM = B.WBDM
AND (B.SWKSSJ)::int8 > (A.swkssj_start)::int8
AND (B.SWKSSJ)::int8 < (A.swkssj_end)::int8;

-- Perform paging queries on the result set. Each paging query takes about 10 ms.
SELECT *
FROM temp_result
ORDER BY
SWKSSJ,
ZJHM
LIMIT 10 OFFSET 0;
Collecting global statistics using ANALYZE improves query performance.
+If a performance problem occurs, you can use plan hint to adjust the query plan to the previous one. For details, see Hint-based Tuning.
+A query statement needs to go through multiple operator procedures to generate the final result. Sometimes, the overall query performance deteriorates due to long execution time of certain operators, which are regarded as bottleneck operators. In this case, you need to execute the EXPLAIN ANALYZE/PERFORMANCE command to view the bottleneck operators, and then perform optimization.
+For example, in the following execution process, the execution time of the Hashagg operator accounts for about 66% [(51016-13535)/56476 ≈ 66%] of the total execution time. Therefore, the Hashagg operator is the bottleneck operator for this query. Optimize this operator first.
1. Scan the base table. For queries that filter out a large amount of data, such as point queries or range queries, a full table scan using SeqScan takes a long time. To facilitate scanning, you can create an index on the condition column and select IndexScan for index scanning.
explain (analyze on, costs off) select * from store_sales where ss_sold_date_sk = 2450944;
 id |           operation            |       A-time        | A-rows | Peak Memory  | A-width
----+--------------------------------+---------------------+--------+--------------+---------
  1 | ->  Streaming (type: GATHER)   | 3666.020            |   3360 | 195KB        |
  2 |    ->  Seq Scan on store_sales | [3594.611,3594.611] |   3360 | [34KB, 34KB] |

 Predicate Information (identified by plan id)
-----------------------------------------------
   2 --Seq Scan on store_sales
         Filter: (ss_sold_date_sk = 2450944)
         Rows Removed by Filter: 4968936
create index idx on store_sales_row(ss_sold_date_sk);
CREATE INDEX
explain (analyze on, costs off) select * from store_sales_row where ss_sold_date_sk = 2450944;
 id |                   operation                    |     A-time      | A-rows | Peak Memory  | A-width
----+------------------------------------------------+-----------------+--------+--------------+----------
  1 | ->  Streaming (type: GATHER)                   | 81.524          |   3360 | 195KB        |
  2 |    ->  Index Scan using idx on store_sales_row | [13.352,13.352] |   3360 | [34KB, 34KB] |
In this example, the full table scan filters much data and returns 3360 records. After an index has been created on the ss_sold_date_sk column, the scanning efficiency is significantly boosted from 3.6s to 13 ms by using IndexScan.
2. If NestLoop is used for joining tables with a large number of rows, the join may take a long time. In the following example, NestLoop takes 181s. After enable_mergejoin=off is set to disable merge join and enable_nestloop=off is set to disable NestLoop, the optimizer selects hash join, and the join takes just over 200 ms.
3. Generally, query performance can be improved by selecting HashAgg. If Sort and GroupAgg are used for a large result set, you can set work_mem to a larger value so that a HashAgg plan is generated; HashAgg requires no sorting and consumes less time than Sort plus GroupAgg. The session-level settings for items 2 and 3 are sketched below.
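A sketch of the session-level settings mentioned above (the work_mem value is hypothetical and depends on the available memory):

SET enable_mergejoin = off;  -- disable merge join
SET enable_nestloop = off;   -- disable nested loop, so hash join is chosen
SET work_mem = '512MB';      -- allow HashAgg instead of Sort + GroupAgg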
+Data skew breaks the balance among nodes in the distributed MPP architecture. If the amount of data stored or processed by a node is much greater than that by other nodes, the following problems may occur:
+GaussDB(DWS) provides a complete solution for data skew, including storage and computing skew.
In the GaussDB(DWS) database, data is distributed and stored across the DNs, and distributed execution improves query efficiency. However, if data skew occurs, bottlenecks exist on some DNs during distributed execution, affecting query performance. This is usually because the distribution column was not properly selected, and the problem can be solved by adjusting the distribution column.
+For example:
explain performance select count(*) from inventory;
 5 --CStore Scan on lmz.inventory
         dn_6001_6002 (actual time=0.444..83.127 rows=42000000 loops=1)
         dn_6003_6004 (actual time=0.512..63.554 rows=27000000 loops=1)
         dn_6005_6006 (actual time=0.722..99.033 rows=45000000 loops=1)
         dn_6007_6008 (actual time=0.529..100.379 rows=51000000 loops=1)
         dn_6009_6010 (actual time=0.382..71.341 rows=36000000 loops=1)
         dn_6011_6012 (actual time=0.547..100.274 rows=51000000 loops=1)
         dn_6013_6014 (actual time=0.596..118.289 rows=60000000 loops=1)
         dn_6015_6016 (actual time=1.057..132.346 rows=63000000 loops=1)
         dn_6017_6018 (actual time=0.940..110.310 rows=54000000 loops=1)
         dn_6019_6020 (actual time=0.231..41.198 rows=21000000 loops=1)
         dn_6021_6022 (actual time=0.927..114.538 rows=54000000 loops=1)
         dn_6023_6024 (actual time=0.637..118.385 rows=60000000 loops=1)
         dn_6025_6026 (actual time=0.288..32.240 rows=15000000 loops=1)
         dn_6027_6028 (actual time=0.566..118.096 rows=60000000 loops=1)
         dn_6029_6030 (actual time=0.423..82.913 rows=42000000 loops=1)
         dn_6031_6032 (actual time=0.395..78.103 rows=39000000 loops=1)
         dn_6033_6034 (actual time=0.376..51.052 rows=24000000 loops=1)
         dn_6035_6036 (actual time=0.569..79.463 rows=39000000 loops=1)
In the performance information, you can view the number of rows scanned on each DN of the inventory table. The numbers differ greatly, the largest being 63,000,000 and the smallest 15,000,000. This difference is acceptable for the performance of data scanning, but if a join operator exists in the upper layer, its impact on the performance cannot be ignored.
Generally, a data table is hash-distributed across the DNs, so it is important to choose a proper distribution column. Run table_skewness() to view the data skew of the inventory table on each DN. The query result is as follows:
select table_skewness('inventory');
              table_skewness
------------------------------------------
 ("dn_6015_6016 ",63000000,8.046%)
 ("dn_6013_6014 ",60000000,7.663%)
 ("dn_6023_6024 ",60000000,7.663%)
 ("dn_6027_6028 ",60000000,7.663%)
 ("dn_6017_6018 ",54000000,6.897%)
 ("dn_6021_6022 ",54000000,6.897%)
 ("dn_6007_6008 ",51000000,6.513%)
 ("dn_6011_6012 ",51000000,6.513%)
 ("dn_6005_6006 ",45000000,5.747%)
 ("dn_6001_6002 ",42000000,5.364%)
 ("dn_6029_6030 ",42000000,5.364%)
 ("dn_6031_6032 ",39000000,4.981%)
 ("dn_6035_6036 ",39000000,4.981%)
 ("dn_6009_6010 ",36000000,4.598%)
 ("dn_6003_6004 ",27000000,3.448%)
 ("dn_6033_6034 ",24000000,3.065%)
 ("dn_6019_6020 ",21000000,2.682%)
 ("dn_6025_6026 ",15000000,1.916%)
(18 rows)
The table definition indicates that the table uses the inv_date_sk column as the distribution column, which causes a data skew. Based on the data distribution of each column, change the distribution column to inv_item_sk. The skew status is as follows:
select table_skewness('inventory');
              table_skewness
------------------------------------------
 ("dn_6001_6002 ",43934200,5.611%)
 ("dn_6007_6008 ",43829420,5.598%)
 ("dn_6003_6004 ",43781960,5.592%)
 ("dn_6031_6032 ",43773880,5.591%)
 ("dn_6033_6034 ",43763280,5.589%)
 ("dn_6011_6012 ",43683600,5.579%)
 ("dn_6013_6014 ",43551660,5.562%)
 ("dn_6027_6028 ",43546340,5.561%)
 ("dn_6009_6010 ",43508700,5.557%)
 ("dn_6023_6024 ",43484540,5.554%)
 ("dn_6019_6020 ",43466800,5.551%)
 ("dn_6021_6022 ",43458500,5.550%)
 ("dn_6017_6018 ",43448040,5.549%)
 ("dn_6015_6016 ",43247700,5.523%)
 ("dn_6005_6006 ",43200240,5.517%)
 ("dn_6029_6030 ",43181360,5.515%)
 ("dn_6025_6026 ",43179700,5.515%)
 ("dn_6035_6036 ",42960080,5.487%)
(18 rows)
Data skew is solved.
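A sketch of adjusting the distribution column (assuming the cluster version supports changing it through ALTER TABLE; otherwise, create a new table with the desired distribution column and reload the data):

-- Redistribute the table on a column whose values are spread evenly.
ALTER TABLE inventory DISTRIBUTE BY hash(inv_item_sk);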
In addition to the table_skewness() function, you can use the table_distribution function and the PGXC_GET_TABLE_SKEWNESS view to efficiently query the data skew of each table.
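For example (a sketch; the exact signatures and view columns may vary by version):

-- Data distribution of one table across the DNs.
SELECT table_distribution('public', 'inventory');
-- Skew overview of all tables, the most skewed first.
SELECT * FROM pgxc_get_table_skewness ORDER BY skewratio DESC;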
Even if data is balanced across nodes after you change the distribution column of a table, data skew may still occur during a query. If the result set of an operator on a DN is skewed, the computation involving that operator will also be skewed. Generally, this is caused by data redistribution during execution.
During a query, if JOIN keys or GROUP BY keys are not the distribution columns, data is redistributed among DNs based on the hash values of the keys; the redistribution is implemented by the Redistribute operator in the execution plan. Data skew in the redistribution columns leads to data skew during execution: after the redistribution, some nodes hold and process much more data, and their performance is much lower than that of the other nodes.
In the following example, the s and t tables are joined, and the s.x and t.x columns in the join condition are not their distribution columns, so table data is redistributed using the REDISTRIBUTE operator. Data skew occurs in the s.x column but not in the t.x column. The result set of the Streaming operator (ID 6) on datanode2 is about three times that of the other DNs, causing a computing skew.
select * from skew s,test t where s.x = t.x order by s.a limit 1;
 id |                      operation                      |        A-time
----+-----------------------------------------------------+-----------------------
  1 | ->  Limit                                           | 52622.382
  2 |    ->  Streaming (type: GATHER)                     | 52622.374
  3 |       ->  Limit                                     | [30138.494,52598.994]
  4 |          ->  Sort                                   | [30138.486,52598.986]
  5 |             ->  Hash Join (6,8)                     | [30127.013,41483.275]
  6 |                ->  Streaming(type: REDISTRIBUTE)    | [11365.110,22024.845]
  7 |                   ->  Seq Scan on public.skew s     | [2019.168,2175.369]
  8 |                ->  Hash                             | [2460.108,2499.850]
  9 |                   ->  Streaming(type: REDISTRIBUTE) | [1056.214,1121.887]
 10 |                      ->  Seq Scan on public.test t  | [310.848,325.569]

 6 --Streaming(type: REDISTRIBUTE)
         datanode1 (rows=5050368)
         datanode2 (rows=15276032)
         datanode3 (rows=5174272)
         datanode4 (rows=5219328)
It is more difficult to detect skew in computing than in storage. To solve skew in computing, GaussDB provides the Runtime Load Balance Technology (RLBT) solution controlled by the skew_option parameter. The RLBT solution addresses how to detect and solve data skew.
+The solution first checks whether skew data exists in redistribution columns used for computing. RLBT can detect data skew based on statistics, specified hints, or rules.
+Run the ANALYZE statement to collect statistics on tables. The optimizer will automatically identify skew data on redistribution keys based on the statistics and generate optimization plans for queries having potential skew. When the redistribution key has multiple columns, statistics information can be used for identification only when all columns belong to the same base table.
The statistics can only describe skew in base tables. When the skewed column of a base table is filtered by conditions on other columns or joined with other tables, it cannot be determined whether the skewed data still remains on the skewed column. If skew_option is set to normal, the optimizer assumes the data skew persists and optimizes such queries; if skew_option is set to lazy, the optimizer assumes the skew has been eliminated and stops the optimization.
The intermediate results of complex queries are difficult to estimate based on statistics. In this case, you can specify hints to provide the skew information, based on which the optimizer optimizes queries (a sketch is given after this list). For details about the syntax of hints, see Skew Hints.
+In a business intelligence (BI) system, a large number of SQL statements having outer joins (including left joins, right joins, and full joins) are generated, and many NULL values will be generated in empty columns that have no match for outer joins. If JOIN or GROUP BY operations are performed on the columns, data skew will occur. RLBT can automatically identify this scenario and generate an optimization plan for NULL value skew.
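For instance, a skew hint for the earlier join example might look as follows (a sketch; see Skew Hints for the full syntax):

-- Tell the optimizer that value 0 of column s.x is skewed.
SELECT /*+ skew(s (x) (0)) */ *
FROM skew s, test t
WHERE s.x = t.x
ORDER BY s.a LIMIT 1;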
Skew and non-skew data are processed separately. Details are as follows:

For a join in which both sides need to be redistributed:
Use PART_REDISTRIBUTE_PART_ROUNDROBIN on the skewed side. Specifically, perform round-robin on skew data and redistribution on non-skew data.
Use PART_REDISTRIBUTE_PART_BROADCAST on the non-skewed side. Specifically, perform broadcast on skew data and redistribution on non-skew data.

For a join in which only one side needs to be redistributed:
Use PART_REDISTRIBUTE_PART_ROUNDROBIN on the side where redistribution is required.
Use PART_LOCAL_PART_BROADCAST on the side where redistribution is not required. Specifically, perform broadcast on skew data and retain other data locally.

For NULL value skew caused by outer joins:
Use PART_REDISTRIBUTE_PART_LOCAL on the table. Specifically, retain the NULL values locally and perform redistribution on other data.
In the example query, the s.x column contains skewed data, whose value is 0. The optimizer identifies the skew from the statistics and generates the following optimized plan:
 id |                         operation                          |        A-time
----+------------------------------------------------------------+-----------------------
  1 | ->  Limit                                                  | 23642.049
  2 |    ->  Streaming (type: GATHER)                            | 23642.041
  3 |       ->  Limit                                            | [23310.768,23618.021]
  4 |          ->  Sort                                          | [23310.761,23618.012]
  5 |             ->  Hash Join (6,8)                            | [20898.341,21115.272]
  6 |                ->  Streaming(type: PART REDISTRIBUTE PART ROUNDROBIN) | [7125.834,7472.111]
  7 |                   ->  Seq Scan on public.skew s            | [1837.079,1911.025]
  8 |                ->  Hash                                    | [2612.484,2640.572]
  9 |                   ->  Streaming(type: PART REDISTRIBUTE PART BROADCAST) | [1193.548,1297.894]
 10 |                      ->  Seq Scan on public.test t         | [314.343,328.707]

 5 --Vector Hash Join (6,8)
         Hash Cond: s.x = t.x
         Skew Join Optimized by Statistic
 6 --Streaming(type: PART REDISTRIBUTE PART ROUNDROBIN)
         datanode1 (rows=7635968)
         datanode2 (rows=7517184)
         datanode3 (rows=7748608)
         datanode4 (rows=7818240)
In the preceding execution plan, Skew Join Optimized by Statistic indicates that this is an optimized plan for handling data skew. The Statistic keyword indicates that the optimization is based on statistics; Hint indicates that it is based on hints; Rule indicates that it is based on rules. In this plan, skew and non-skew data are processed separately: non-skew data in the s table is redistributed based on its hash values, while skew data (whose value is 0) is evenly distributed across all nodes in round-robin mode. In this way, the data skew is eliminated.
+To ensure result correctness, the t table also needs to be processed. In the t table, the data whose value is 0 (skew value in the s.x table) is broadcast and other data is redistributed based on its hash values.
In this way, data skew in join operations is solved. The preceding result shows that the output of the Streaming operator (ID 6) is balanced and the end-to-end performance of the query is doubled.
+If the stream operator type in the execution plan is HYBRID, the stream mode varies depending on the skew data. The following plan is an example:
EXPLAIN (nodes OFF, costs OFF) SELECT COUNT(*) FROM skew_scol s, skew_scol1 s1 WHERE s.b = s1.c;
 id |              operation
----+------------------------------------------
  1 | ->  Aggregate
  2 |    ->  Streaming (type: GATHER)
  3 |       ->  Aggregate
  4 |          ->  Hash Join (5,7)
  5 |             ->  Streaming(type: HYBRID)
  6 |                ->  Seq Scan on skew_scol s
  7 |             ->  Hash
  8 |                ->  Streaming(type: HYBRID)
  9 |                   ->  Seq Scan on skew_scol1 s1

 Predicate Information (identified by plan id)
-----------------------------------------------
 4 --Hash Join (5,7)
         Hash Cond: (s.b = s1.c)
         Skew Join Optimized by Statistic
 5 --Streaming(type: HYBRID)
         Skew Filter: (b = 1)
         Skew Filter: (b = 0)
 8 --Streaming(type: HYBRID)
         Skew Filter: (c = 0)
         Skew Filter: (c = 1)
Value 1 is skewed in the skew_scol table: round-robin is performed on the skew data and redistribution on the non-skew data.

Value 0 is the skew value of the other, non-skewed side: broadcast is performed on that skew data and redistribution on the non-skew data.

As shown in the preceding plan, the two stream modes are PART REDISTRIBUTE PART ROUNDROBIN and PART REDISTRIBUTE PART BROADCAST; because both are needed at once, the stream type displayed is HYBRID.
+For aggregation, data on each DN is deduplicated based on the GROUP BY key and then redistributed. After the deduplication on DNs, the global occurrences of each value will not be greater than the number of DNs. Therefore, no serious data skew will occur. Take the following query as an example:
select c1, c2, c3, c4, c5, c6, c7, c8, c9, count(*) from t group by c1, c2, c3, c4, c5, c6, c7, c8, c9 limit 10;
The command output is as follows:
 id |                 operation                  |         A-time         |  A-rows
----+--------------------------------------------+------------------------+----------
  1 | ->  Streaming (type: GATHER)               | 130621.783             |       12
  2 |    ->  GroupAggregate                      | [85499.711,130432.341] |       12
  3 |       ->  Sort                             | [85499.509,103145.632] | 36679237
  4 |          ->  Streaming(type: REDISTRIBUTE) | [25668.897,85499.050]  | 36679237
  5 |             ->  Seq Scan on public.t       | [9835.069,10416.388]   | 36679237

 4 --Streaming(type: REDISTRIBUTE)
         datanode1 (rows=36678837)
         datanode2 (rows=100)
         datanode3 (rows=100)
         datanode4 (rows=200)
A large amount of skewed data exists. As a result, after data is redistributed based on the GROUP BY key, the data volume on datanode1 is hundreds of thousands of times that of the others. After optimization, a GROUP BY operation is performed on each DN first to deduplicate data, and no skew occurs after the redistribution.
 id |                 operation                  |        A-time
----+--------------------------------------------+-----------------------
  1 | ->  Streaming (type: GATHER)               | 10961.337
  2 |    ->  HashAggregate                       | [10953.014,10953.705]
  3 |       ->  HashAggregate                    | [10952.957,10953.632]
  4 |          ->  Streaming(type: REDISTRIBUTE) | [10952.859,10953.502]
  5 |             ->  HashAggregate              | [10084.280,10947.139]
  6 |                ->  Seq Scan on public.t    | [4757.031,5201.168]

 Predicate Information (identified by plan id)
-----------------------------------------------
 3 --HashAggregate
         Skew Agg Optimized by Statistic

 4 --Streaming(type: REDISTRIBUTE)
         datanode1 (rows=17)
         datanode2 (rows=8)
         datanode3 (rows=8)
         datanode4 (rows=14)
UNION eliminates duplicate rows while merging two result sets, but UNION ALL merges the two result sets without deduplication. Therefore, replace UNION with UNION ALL if you are sure, based on the service logic, that the two result sets do not contain duplicate rows.
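For example (hypothetical tables whose rows are known to be disjoint):

-- The two branches cannot produce duplicate rows, so skip deduplication.
SELECT order_id FROM orders_2023
UNION ALL
SELECT order_id FROM orders_2024;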
+If there are many NULL values in the JOIN columns, you can add the filter criterion IS NOT NULL to filter data in advance to improve the JOIN efficiency.
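For example (a sketch; NULL keys can never match, but filtering them early reduces the data that is redistributed and probed):

SELECT *
FROM t1 JOIN t2 ON t1.a = t2.a
WHERE t1.a IS NOT NULL
  AND t2.a IS NOT NULL;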
A nested-loop anti join must be used to implement NOT IN, while a hash anti join can be used for NOT EXISTS. If no NULL value exists in the join columns, NOT IN is equivalent to NOT EXISTS. Therefore, if you are sure that no NULL value exists, convert NOT IN to NOT EXISTS to generate hash joins and improve the query performance.
In the following example, the t2.d2 column does not contain NULL values (it is defined as NOT NULL) and NOT EXISTS is used for the query.
SELECT * FROM t1 WHERE NOT EXISTS (SELECT * FROM t2 WHERE t1.c1=t2.d2);
The generated execution plan uses a hash anti join.
If a plan generated by a GROUP BY statement involves GroupAgg and Sort operations and performs poorly, you can set work_mem to a larger value so that a HashAgg plan, which requires no sorting, is generated and the performance improves.
GaussDB(DWS) performance greatly deteriorates if a large number of functions are called. In this case, you can rewrite the pushed-down functions as CASE expressions.
Using functions or expressions on indexed columns prevents indexes from being used and causes full table scans.
+You can split an SQL statement into several ones and save the execution result to a temporary table if the SQL statement is too complex to be tuned using the solutions above, including but not limited to the following scenarios:
+This section describes the key CN parameters that affect GaussDB(DWS) SQL tuning performance. For details about how to configure these parameters, see Configuring GUC Parameters.
Parameter (reference value) and description:

enable_nestloop=on
Specifies how the optimizer uses nested-loop join. If this parameter is set to on, the optimizer preferentially uses nested-loop join; if it is set to off, the optimizer preferentially uses other methods, if any.
NOTE: To temporarily change the value in the current session only, run: SET enable_nestloop = off;
The default value is on. Change it as required. Nested-loop join generally has the poorest performance among the three join methods (nested-loop join, merge join, and hash join), so you are advised to set this parameter to off.

enable_bitmapscan=on
Specifies whether the optimizer uses bitmap scanning. If the value is on, bitmap scanning is used; if the value is off, it is not.
NOTE: To temporarily change the value in the current session only, run: SET enable_bitmapscan = off;
Bitmap scanning applies only to queries with conditions such as "a > 1 and b > 1" where indexes are created on columns a and b. During performance tuning, if query performance is poor and bitmap scan operators appear in the execution plan, set this parameter to off and check whether performance improves.

enable_fast_query_shipping=on
Specifies whether the optimizer uses the distributed framework. If the value is on, the execution plan is generated on both CNs and DNs. If the value is off, the distributed framework is used, that is, the execution plan is generated on the CN and then sent to DNs for execution.
NOTE: To temporarily change the value in the current session only, run: SET enable_fast_query_shipping = off;

enable_hashagg=on
Specifies whether the optimizer may use hash-aggregation plan types.

enable_hashjoin=on
Specifies whether the optimizer may use hash-join plan types.

enable_mergejoin=on
Specifies whether the optimizer may use merge-join plan types.

enable_indexscan=on
Specifies whether the optimizer may use index-scan plan types.

enable_indexonlyscan=on
Specifies whether the optimizer may use index-only-scan plan types.

enable_seqscan=on
Specifies whether the optimizer uses sequential scanning. Sequential scans cannot be suppressed entirely, but setting this variable to off makes the optimizer preferentially choose other methods if available.

enable_sort=on
Specifies whether the optimizer uses sort operations. Explicit sorts cannot be suppressed entirely, but setting this variable to off makes the optimizer preferentially choose other methods if available.

enable_broadcast=on
Specifies whether the optimizer may broadcast data. Broadcasting transfers a large amount of data over the network. When the number of stream nodes is large and row-count estimation is inaccurate, set this parameter to off and check whether performance improves.

rewrite_rule
Specifies whether the optimizer enables the LAZY_AGG and MAGIC_SET rewriting rules.
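For example, a minimal sketch of checking and then overriding one of these parameters for the current session only (standard SHOW/SET syntax; the parameter choice is illustrative):

SHOW enable_nestloop;        -- view the current value
SET enable_nestloop = off;   -- affects only the current session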
In plan hints, you can specify the join order; join, stream, and scan operations; the number of rows in a result set; and redistribution skew information to tune an execution plan and improve query performance.

The hint syntax must follow immediately after the SELECT keyword and is written in the following format:

/*+ <plan_hint> */
You can specify multiple hints for a query plan and separate them by spaces. A hint specified for a query plan does not apply to its subquery plans. To specify a hint for a subquery, add the hint following the SELECT of this subquery.
For example:

select /*+ <plan_hint1> <plan_hint2> */ * from t1, (select /*+ <plan_hint3> */ * from t2) where 1=1;

In the preceding statement, <plan_hint1> and <plan_hint2> are hints of the outer query, and <plan_hint3> is a hint of the subquery.
If a hint is specified in a CREATE VIEW statement, the hint is applied each time the view is used (see the sketch after these notes).

If the random plan function is enabled (plan_mode_seed is set to a value other than 0), the specified hints are not used.
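A minimal sketch of a hint embedded in a view definition (the view name v1 is hypothetical; t1 and t2 are as defined later in this section):

CREATE VIEW v1 AS
SELECT /*+ nestloop(t1 t2) */ t1.a, t2.b
FROM t1, t2 WHERE t1.a = t2.b;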
Currently, the following hints are supported: join order hints (leading), join method hints, rows hints, stream operation hints, scan operation hints, sublink block name hints, skew hints, and GUC hints. Each of them is described below.

The following query is used as the example for the hint demonstrations in this section:

explain
select i_product_name product_name
,i_item_sk item_sk
,s_store_name store_name
,s_zip store_zip
,ad2.ca_street_number c_street_number
,ad2.ca_street_name c_street_name
,ad2.ca_city c_city
,ad2.ca_zip c_zip
,count(*) cnt
,sum(ss_wholesale_cost) s1
,sum(ss_list_price) s2
,sum(ss_coupon_amt) s3
FROM store_sales
,store_returns
,store
,customer
,promotion
,customer_address ad2
,item
WHERE ss_store_sk = s_store_sk AND
ss_customer_sk = c_customer_sk AND
ss_item_sk = i_item_sk and
ss_item_sk = sr_item_sk and
ss_ticket_number = sr_ticket_number and
c_current_addr_sk = ad2.ca_address_sk and
ss_promo_sk = p_promo_sk and
i_color in ('maroon','burnished','dim','steel','navajo','chocolate') and
i_current_price between 35 and 35 + 10 and
i_current_price between 35 + 1 and 35 + 15
group by i_product_name
,i_item_sk
,s_store_name
,s_zip
,ad2.ca_street_number
,ad2.ca_street_name
,ad2.ca_city
,ad2.ca_zip
;
These hints specify the join order and the outer/inner tables.

leading(join_table_list)
leading((join_table_list))
join_table_list specifies the tables to be joined. The values can be table names or table aliases; if a subquery is pulled up, the value can also be the subquery alias. Separate the values with spaces. You can add parentheses to specify the join priorities of tables.

A table name or alias can only be a string without a schema name.

An alias (if any) is used to represent a table.

To prevent semantic errors, tables in the list must meet the following requirements:

For example:
leading(t1 t2 t3 t4 t5): t1, t2, t3, t4, and t5 are joined. The join order and the outer/inner tables are not specified.

leading((t1 t2 t3 t4 t5)): t1, t2, t3, t4, and t5 are joined in sequence, and the table on the right is used as the inner table in each join.

leading(t1 (t2 t3 t4) t5): First, t2, t3, and t4 are joined and the outer/inner tables are not specified. Then, the result is joined with t1 and t5, and the outer/inner tables are not specified.

leading((t1 (t2 t3 t4) t5)): First, t2, t3, and t4 are joined and the outer/inner tables are not specified. Then, the result is joined with t1, and (t2 t3 t4) is used as the inner table. Finally, the result is joined with t5, and t5 is used as the inner table.

leading((t1 (t2 t3) t4 t5)) leading((t3 t2)): First, t2 and t3 are joined and t2 is used as the inner table. Then, the result is joined with t1, and (t2 t3) is used as the inner table. Finally, the result is joined with t4 and then t5, with the table on the right used as the inner table in each join.
Hint the query plan in Examples as follows:

explain
select /*+ leading((((((store_sales store) promotion) item) customer) ad2) store_returns) leading((store store_sales))*/ i_product_name product_name ...

First, store_sales and store are joined, with store_sales as the inner table. The result is then joined with promotion, item, customer, ad2, and store_returns in sequence. The optimized plan is as follows:

For details about the warning at the top of the plan, see Hint Errors, Conflicts, and Other Warnings.
These hints specify the join method, which can be nested-loop join, hash join, or merge join.

[no] nestloop|hashjoin|mergejoin(table_list)

For example:
no nestloop(t1 t2 t3): nestloop is not used for the last join among t1, t2, and t3. The three tables may be joined in either of two ways: join t2 and t3 first and then t1, or join t1 and t2 first and then t3. This hint takes effect only for the last join. If necessary, hint the other joins as well; for example, add no nestloop(t2 t3) to forbid nestloop when t2 and t3 are joined first.
Hint the query plan in Examples as follows:

explain
select /*+ nestloop(store_sales store_returns item) */ i_product_name product_name ...

nestloop is used for the last join among store_sales, store_returns, and item. The optimized plan is as follows:
These hints specify the number of rows in an intermediate result set. Both absolute values and relative values are supported.

rows(table_list #|+|-|* const)

For example:

rows(t1 #5): The result set of t1 is five rows.

rows(t1 t2 t3 *1000): Multiply the result set of the join of t1, t2, and t3 by 1000.
Hint the query plan in Examples as follows:

explain
select /*+ rows(store_sales store_returns *50) */ i_product_name product_name ...

Multiply the result set of the join of store_sales and store_returns by 50. The optimized plan is as follows:
In row 11 of the plan, the estimated value after the hint is 360; the original estimate, 7.2, was rounded off to 7.
These hints specify a stream operation, which can be broadcast or redistribute.

[no] broadcast|redistribute(table_list)

Hint the query plan in Examples as follows:

explain
select /*+ no redistribute(store_sales store_returns item store) leading(((store_sales store_returns item store) customer)) */ i_product_name product_name ...

In the original plan, the join result of store_sales, store_returns, item, and store is redistributed before it is joined with customer. After the hint is applied, the redistribution is disabled and the join order is retained. The optimized plan is as follows:
These hints specify a scan operation, which can be tablescan, indexscan, or indexonlyscan.

[no] tablescan|indexscan|indexonlyscan(table [index])

The indexscan and indexonlyscan hints can be used only when the specified index belongs to the table.

Scan operation hints can be used for row-store tables, column-store tables, HDFS tables, HDFS foreign tables, OBS tables, and subquery tables. HDFS tables include primary tables and delta tables; the delta tables are invisible to users, so scan operation hints apply only to primary tables.

To specify an index-based hint for a scan, create an index named i on the i_item_sk column of the item table:

create index i on item(i_item_sk);

Hint the query plan in Examples as follows:

explain
select /*+ indexscan(item i) */ i_product_name product_name ...

item is scanned based on the index. The optimized plan is as follows:
These hints specify the name of a sublink block.

blockname (table)

explain select /*+nestloop(store_sales tt) */ * from store_sales where ss_item_sk in (select /*+blockname(tt)*/ i_item_sk from item group by 1);

tt indicates the sublink block name. After the sublink is pulled up, it is joined with the outer-query table store_sales by using nestloop. The optimized plan is as follows:
These hints specify redistribution keys containing skew data and skew values; they are used to optimize redistribution involving Join or HashAgg.

skew(table (column) [(value)])
skew((join_rel) (column) [(value)])
Example:

Each skew hint describes the skew information of one table relationship. To describe the skews of multiple table relationships in a query, specify multiple skew hints.

Skew hints have the following formats:
skew(t (c1) (v1))
Description: The v1 value in the c1 column of table relationship t causes skew in query execution.

skew(t (c1) (v1 v2 v3))
Description: Values v1, v2, and v3 in the c1 column of table relationship t cause skew in query execution.

skew(t (c1 c2) ((v1 v2)))
Description: The v1 value in the c1 column and the v2 value in the c2 column of table relationship t cause skew in query execution.

skew(t (c1 c2) ((v1 v2) (v3 v4) (v5 v6)))
Description: Values v1, v3, and v5 in the c1 column and values v2, v4, and v6 in the c2 column of table relationship t cause skew in query execution.
In the last format, the parentheses around skew value groups can be omitted, for example, skew(t (c1 c2) (v1 v2 v3 v4 v5 v6 ...)). Within one skew hint, either use parentheses for all skew value groups or for none of them; otherwise, a syntax error is generated. For example, skew(t (c1 c2) (v1 v2 v3 v4 (v5 v6) ...)) generates an error.
If data skew occurs not in base tables but in an intermediate result during query execution, specify skew hints on the intermediate result, in the format skew((t1 t2) (c1) (v1)).

Description: Data skew occurs after table relationships t1 and t2 are joined; the c1 column of table t1 contains skew data, and its skew value is v1.
c1 can exist in only one table relationship of join_rel. If another column has the same name, use aliases to avoid ambiguity.
Specify single-table skew.

For example, the original query is as follows:

explain
with customer_total_return as
(select sr_customer_sk as ctr_customer_sk
,sr_store_sk as ctr_store_sk
,sum(SR_FEE) as ctr_total_return
from store_returns
,date_dim
where sr_returned_date_sk = d_date_sk
and d_year =2000
group by sr_customer_sk
,sr_store_sk)
select c_customer_id
from customer_total_return ctr1
,store
,customer
where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
from customer_total_return ctr2
where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
and s_store_sk = ctr1.ctr_store_sk
and s_state = 'NM'
and ctr1.ctr_customer_sk = c_customer_sk
order by c_customer_id
limit 100;

Specify the hints for the HashAgg in the inner with clause and for the outer Hash Join. The query containing hints is as follows:

explain
with customer_total_return as
(select /*+ skew(store_returns(sr_store_sk sr_customer_sk)) */ sr_customer_sk as ctr_customer_sk
,sr_store_sk as ctr_store_sk
,sum(SR_FEE) as ctr_total_return
from store_returns
,date_dim
where sr_returned_date_sk = d_date_sk
and d_year =2000
group by sr_customer_sk
,sr_store_sk)
select /*+ skew(ctr1(ctr_customer_sk)(11)) */ c_customer_id
from customer_total_return ctr1
,store
,customer
where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
from customer_total_return ctr2
where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
and s_store_sk = ctr1.ctr_store_sk
and s_state = 'NM'
and ctr1.ctr_customer_sk = c_customer_sk
order by c_customer_id
limit 100;
The hints indicate that the GROUP BY in the inner with clause contains skew data during redistribution by HashAgg (corresponding to Hash Agg operators 10 and 21 in the original plan), and that the ctr_customer_sk column of the outer ctr1 table contains skew data during redistribution by Hash Join (corresponding to operator 6 in the original plan). The optimized plan is as follows:

To resolve the data skew during redistribution, the optimized plan changes Hash Agg into double-level Agg operators and changes the redistribution operators used by Hash Join.
For example, the original query and its plan are as follows:

explain select count(*) from store_sales_1 group by round(ss_list_price);

Columns in hints do not support expressions. To specify a skew hint, rewrite the query with a subquery so that the expression becomes a plain column. The rewritten query and its plan are as follows:

explain
select count(*)
from (select round(ss_list_price),ss_hdemo_sk
from store_sales_1)tmp(a,ss_hdemo_sk)
group by a;
Ensure that the service logic is not changed by the rewriting.

Specify hints in the rewritten query as follows:

explain
select /*+ skew(tmp(a)) */ count(*)
from (select round(ss_list_price),ss_hdemo_sk
from store_sales_1)tmp(a,ss_hdemo_sk)
group by a;
The plan shows that after Hash Agg is changed into double-level Agg operators, the amount of redistributed data is greatly reduced, shortening the redistribution time.
You can also specify hints on columns defined in a subquery, for example:

explain
select /*+ skew(tmp(b)) */ count(*)
from (select round(ss_list_price) b,ss_hdemo_sk
from store_sales_1)tmp(a,ss_hdemo_sk)
group by a;
A GUC hint specifies a configuration parameter value to be used while the plan is generated. Currently, only the following parameters are supported:

set [global](guc_name guc_value)

Hint the query plan in Examples as follows:

explain
select /*+ set global(query_dop 0) */ i_product_name product_name
...
This hint sets the query_dop parameter to 0 while the plan for the statement is generated, enabling SMP auto adaptation. The generated plan is as follows:
Plan hints change an execution plan. You can run EXPLAIN to view the changes.

Hints containing errors are invalid and do not affect statement execution. Errors are reported differently depending on the statement type: hint errors in an EXPLAIN statement are displayed as warnings, while hint errors in other statements are recorded in debug1-level logs containing the PLANHINT keyword.
An error is reported if the syntax tree fails to be reduced; the number of the row generating the error is given in the error details.

Typical causes are an incorrect hint keyword, a leading or join hint that specifies no table or only one table, or another hint that specifies no table. Parsing of a hint terminates immediately after a syntax error is detected, and only the hints parsed successfully up to that point are valid.

For example:

leading((t1 t2)) nestloop(t1) rows(t1 t2 #10)

The syntax of nestloop(t1) is wrong, so parsing terminates there. Only leading((t1 t2)), which was successfully parsed before nestloop(t1), is valid.
If hints are duplicated or conflict, only the first hint takes effect, and a message describes the situation.

The table list in a leading hint is disassembled. For example, leading(t1 t2 t3) is disassembled into leading(t1 t2) leading((t1 t2) t3), which conflicts with leading(t2 t1) if the latter is also specified; in that case the later leading(t2 t1) becomes invalid. If two hints use the same table list and only one of them specifies the outer/inner tables, the hint without specified outer/inner tables becomes invalid.

In such cases a message is displayed. Generally, this kind of invalidation occurs when a sublink contains multiple tables to be joined, because the table list in the sublink becomes invalid after the sublink is pulled up.
This section takes a TPC-DS statement (Q24) as an example to describe how to optimize an execution plan by using hints in a 1000X, 24-DN environment. The original statement is as follows:

select avg(netpaid) from
(select c_last_name
,c_first_name
,s_store_name
,ca_state
,s_state
,i_color
,i_current_price
,i_manager_id
,i_units
,i_size
,sum(ss_sales_price) netpaid
from store_sales
,store_returns
,store
,item
,customer
,customer_address
where ss_ticket_number = sr_ticket_number
and ss_item_sk = sr_item_sk
and ss_customer_sk = c_customer_sk
and ss_item_sk = i_item_sk
and ss_store_sk = s_store_sk
and c_birth_country = upper(ca_country)
and s_zip = ca_zip
and s_market_id=7
group by c_last_name
,c_first_name
,s_store_name
,ca_state
,s_state
,i_color
,i_current_price
,i_manager_id
,i_units
,i_size);
1. In this plan, the layer-10 broadcast performs poorly because the estimate at layer 11 is 2140 rows, far fewer than the actual row count. The inaccurate estimate is mainly caused by the underestimated row count of the layer-13 hash join, where store_sales and store_returns are joined on the ss_ticket_number and ss_item_sk columns of store_sales and the sr_ticket_number and sr_item_sk columns of store_returns, without considering the correlation between the columns.
2. After the rows hint is used for optimization, the plan is as follows and the statement execution takes 318s:

select avg(netpaid) from
(select /*+rows(store_sales store_returns * 11270)*/ c_last_name ...
The execution takes longer because the layer-9 redistribute is slow. Since no data skew occurs at the layer-9 redistribute, its slowness comes from the slow layer-8 hash join, which in turn is caused by data skew at the layer-18 redistribute.

3. Data skew occurs at the layer-18 redistribute because customer_address has few distinct values in its two join keys. Therefore, plan customer_address as the last table to be joined. After the hint is used for optimization, the plan is as follows and the statement execution takes 116s:
select avg(netpaid) from
(select /*+rows(store_sales store_returns *11270)
leading((store_sales store_returns store item customer) customer_address)*/
c_last_name ...

Most of the time is now spent on the layer-6 redistribute, so the plan needs further optimization.
4. Most of the time is spent on the layer-6 redistribute because of data skew. To avoid the skew, plan the item table as the last one to be joined, because the number of rows is not reduced after item is joined. After the hint is used for optimization, the plan is as follows and the statement execution takes 120s:

select avg(netpaid) from
(select /*+rows(store_sales store_returns *11270)
leading((customer_address (store_sales store_returns store customer) item))*/
c_last_name ...
Data skew now occurs after the join of item and customer_address because item is broadcast at layer 22. As a result, the layer-6 redistribute is still slow.

5. Add a hint to disable broadcast for item, or add a redistribute hint for the join result of item and customer_address. After the hint is used for optimization, the plan is as follows and the statement execution takes 105s:

select avg(netpaid) from
(select /*+rows(store_sales store_returns *11270)
leading((customer_address (store_sales store_returns store customer) item))
no broadcast(item)*/
c_last_name ...

6. The last layer uses a single-level Agg, and the number of rows is greatly reduced. Set best_agg_plan to 3 to change the single-level Agg into a double-level Agg. The plan is as follows, the statement execution takes 94s, and the optimization is complete.
If query performance deteriorates due to statistics changes, you can use hints to restore the earlier query plan. Take TPCH Q17 as an example: query performance deteriorated after the value of default_statistics_target was changed from the default to -2 for statistics collection.

1. If default_statistics_target is set to the default value 100, the plan is as follows:

2. If default_statistics_target is set to -2, the plan is as follows:

3. Analysis shows that the cause is that the stream type used when the lineitem and part tables are joined changed from BroadCast to Redistribute. You can use a hint to change the stream type back to BroadCast. For example:
To keep the database running properly, you need to routinely run VACUUM FULL and ANALYZE after INSERT and DELETE operations, as appropriate for your scenario, to reclaim space and update statistics for better performance.

You need to routinely run VACUUM, VACUUM FULL, and ANALYZE to maintain tables, because:
VACUUM customer;
VACUUM

This command can be executed concurrently with other database operations such as SELECT, INSERT, UPDATE, and DELETE, but not with ALTER TABLE.

Run VACUUM on the partitioned table:

VACUUM customer_par PARTITION ( P1 );
VACUUM

VACUUM FULL customer;
VACUUM

VACUUM FULL takes an exclusive lock on the table it operates on; all other operations on the table must be suspended while it runs.
When reclaiming disk space, you can query for the sessions corresponding to the earliest transactions in the cluster and then, as needed, end the earliest long-running transactions so that disk space can be fully reclaimed.

select * from pgxc_gtm_snapshot_status();

select * from pgxc_running_xacts() where xmin=1400202010;
ANALYZE customer;
ANALYZE

Run ANALYZE VERBOSE to update statistics and display table information:

ANALYZE VERBOSE customer;
ANALYZE

You can run VACUUM ANALYZE to do both at once and optimize queries:

VACUUM ANALYZE customer;
VACUUM
VACUUM and ANALYZE cause a substantial increase in I/O traffic, which may degrade the performance of other active sessions. Therefore, you are advised to throttle them by setting the vacuum_cost_delay parameter.
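A minimal sketch (the value is illustrative):

SET vacuum_cost_delay = 10;   -- sleep, in milliseconds, each time the internal vacuum cost limit is reached
VACUUM ANALYZE customer;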
DROP TABLE customer;
DROP TABLE customer_par;
DROP TABLE part;

If the following output is displayed, the tables have been deleted:

DROP TABLE
When data is repeatedly deleted in the database, index keys are removed from index pages, causing index bloat. Routinely recreating indexes improves query efficiency.

The database supports B-tree, GIN, and psort indexes.

Use either of the following methods to recreate an index:

When you drop an index, a temporary exclusive lock is taken on its parent table, blocking related read and write operations. When you create an index, write operations are locked but read operations are not; data is read and scanned sequentially.
DROP INDEX areaS_idx;
DROP INDEX

CREATE INDEX areaS_idx ON areaS (area_id);
CREATE INDEX

REINDEX TABLE areaS;
REINDEX

REINDEX INTERNAL TABLE areaS;
REINDEX
This section describes the usage restrictions, application scenarios, and configuration guide of symmetric multiprocessing (SMP).
The SMP feature improves performance through operator parallelism at the cost of more system resources, including CPU, memory, network, and I/O. SMP essentially trades resources for time: it improves performance in appropriate scenarios when resources are sufficient, but may deteriorate performance otherwise. In addition, compared with serial processing, SMP generates more candidate plans, which takes longer and may also hurt performance.
The execution plan contains the following operators:

To execute queries in parallel, Stream operators are added for SMP data exchange. These new operators can be considered subtypes of the Stream operator.

Among these operators, Local operators exchange data between parallel threads within a DN, and non-Local operators exchange data across DNs.

The TPCH Q1 parallel plan is used as an example.

In this plan, the Hdfs Scan and HashAgg operators are parallelized, and the Local Gather and Split Redistribute data exchange operators are added.
In this example, operator 6 is Split Redistribute; the dop: 4/4 next to it indicates that the degree of parallelism is 4 on both the sending and the receiving side. Operator 4 is Local Gather, marked dop: 1/4: its sending side runs with a parallelism of 4 while the receiving side runs serially. That is, the lower-layer Hash Aggregate (operator 5) runs with a parallelism of 4, the upper-layer operators 1 to 3 run serially, and operator 4 gathers the data of the concurrent threads within the DN.

You can view the parallelism of each operator in the dop information.
The SMP architecture trades resources for time. After a plan is parallelized, resource consumption increases, including CPU, memory, I/O, and network bandwidth, and the higher the degree of parallelism, the higher the consumption. If any of these resources becomes a bottleneck, SMP cannot improve performance and may even degrade overall cluster performance. Adaptive SMP is provided to dynamically select the optimal degree of parallelism for each query based on resource usage and query requirements. The following describes how SMP affects these resources:
In typical customer scenarios, system CPU usage is not high, and SMP parallelism makes full use of CPU resources to improve performance. If the database server has too few CPU cores and CPU usage is already high, enabling SMP parallelism may degrade performance due to resource competition between threads.

Parallel queries increase memory usage, but the memory upper limit of each operator is still restricted by work_mem. For example, if work_mem is 4 GB and the degree of parallelism is 2, the memory upper limit of each concurrent thread is 2 GB. When work_mem is small or system memory is insufficient, SMP parallelism may cause data to spill to disk, degrading query performance.

To execute a query in parallel, data exchange operators are added. Local Stream operators exchange data between threads within a DN; data is exchanged in memory, so network performance is not affected. Non-Local operators exchange data over the network and increase network load. If network capacity becomes a bottleneck, parallelism may further increase the network load.

A parallel scan increases I/O resource consumption. It improves performance only when I/O resources are sufficient.
Besides resources, other factors affect SMP performance, such as uneven data distribution in a partitioned table and the system's degree of parallelism.

Serious data skew deteriorates parallel execution. For example, if one value in the join column has far more rows than the others, one parallel thread will process far more data than the others after hash-based redistribution, causing a long-tail problem and poor parallel performance.

SMP uses more resources, and in high-concurrency scenarios there are fewer idle resources. Enabling SMP parallelism in such scenarios causes serious resource competition among queries; once competition arises on CPU, I/O, memory, or network, overall performance deteriorates. Therefore, in high-concurrency scenarios, enabling SMP does not improve performance and may even degrade it.
Starting from this version, SMP auto adaptation is enabled. For newly deployed clusters, the default value of query_dop is 0, and SMP parameters have been adjusted accordingly. To ensure forward compatibility, the value of query_dop remains unchanged after an existing cluster is upgraded.

For an upgraded cluster, if you want to set query_dop to 0 and enable SMP parallelism, modify the following parameters to obtain better dop choices:
If the system memory is large, the value of max_process_memory is large. In this case, you are advised to set this parameter to 5% of max_process_memory, that is, 4 GB by default.

The recommended value of comm_max_stream is calculated as follows: comm_max_stream = Min(dop_limit x dop_limit x 20 x 2, max_process_memory (in bytes) x 0.025/Number of DNs/260). The value must be within the valid range of comm_max_stream.

The recommended value of max_connections is calculated as follows: max_connections = dop_limit x 20 x 6 + 24. The value must be within the valid range of max_connections.

In the preceding formulas, dop_limit indicates the number of CPU cores available to each DN in the cluster: dop_limit = Number of logical CPU cores of a single server/Number of DNs on a single server.
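For example (illustrative numbers): on servers with 96 logical CPU cores and 4 DNs each, dop_limit = 96/4 = 24, so the recommended max_connections = 24 x 20 x 6 + 24 = 2904, and the first term of the comm_max_stream formula is 24 x 24 x 20 x 2 = 23040.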
To manually optimize SMP, you need to be familiar with Suggestions for SMP Parameter Settings. This section describes how to optimize SMP.

SMP requires sufficient CPU, memory, I/O, and network bandwidth; it trades these resources for time, and resource consumption increases once a plan is parallelized. When any of these resources becomes a bottleneck, SMP may degrade rather than improve performance. In addition, generating an SMP plan takes longer than generating a serial plan. Therefore, for TP services that mainly involve short queries, or when resources are insufficient, you are advised to disable SMP by setting query_dop to 1.

SET query_dop = 0;
SELECT COUNT(*) FROM t1 GROUP BY a;
......
SET query_dop = 1;
Tables are defined as follows:

CREATE TABLE t1 (a int, b int);
CREATE TABLE t2 (a int, b int);

The following query is executed:

SELECT * FROM t1, t2 WHERE t1.a = t2.b;
If a is the distribution column of both t1 and t2:

CREATE TABLE t1 (a int, b int) DISTRIBUTE BY HASH (a);
CREATE TABLE t2 (a int, b int) DISTRIBUTE BY HASH (a);

then Streaming exists in the execution plan and a large volume of data is exchanged among DNs, as shown in Figure 1.
If a is the distribution column of t1 and b is the distribution column of t2:

CREATE TABLE t1 (a int, b int) DISTRIBUTE BY HASH (a);
CREATE TABLE t2 (a int, b int) DISTRIBUTE BY HASH (b);

then Streaming does not exist in the execution plan, less data is exchanged among DNs, and query performance improves, as shown in Figure 2.
Query the information about all personnel in the sales department.

SELECT staff_id,first_name,last_name,employment_id,state_name,city
FROM staffs,sections,states,places
WHERE sections.section_name='Sales'
AND staffs.section_id = sections.section_id
AND sections.place_id = places.place_id
AND places.state_id = states.state_id
ORDER BY staff_id;
The original execution plan, before indexes are created on the places.place_id and states.state_id columns, is as follows:

The optimized execution plan, after the two indexes are created on the places.place_id and states.state_id columns, is as follows:
SELECT
  *
FROM
( ( SELECT
    STARTTIME STTIME,
    SUM(NVL(PAGE_DELAY_MSEL,0)) PAGE_DELAY_MSEL,
    SUM(NVL(PAGE_SUCCEED_TIMES,0)) PAGE_SUCCEED_TIMES,
    SUM(NVL(FST_PAGE_REQ_NUM,0)) FST_PAGE_REQ_NUM,
    SUM(NVL(PAGE_AVG_SIZE,0)) PAGE_AVG_SIZE,
    SUM(NVL(FST_PAGE_ACK_NUM,0)) FST_PAGE_ACK_NUM,
    SUM(NVL(DATATRANS_DW_DURATION,0)) DATATRANS_DW_DURATION,
    SUM(NVL(PAGE_SR_DELAY_MSEL,0)) PAGE_SR_DELAY_MSEL
  FROM
    PS.SDR_WEB_BSCRNC_1DAY SDR
    INNER JOIN (SELECT
        BSCRNC_ID,
        BSCRNC_NAME,
        ACCESS_TYPE,
        ACCESS_TYPE_ID
      FROM
        nethouse.DIM_LOC_BSCRNC
      GROUP BY
        BSCRNC_ID,
        BSCRNC_NAME,
        ACCESS_TYPE,
        ACCESS_TYPE_ID) DIM
    ON SDR.BSCRNC_ID = DIM.BSCRNC_ID
    AND DIM.ACCESS_TYPE_ID IN (0,1,2)
    INNER JOIN nethouse.DIM_RAT_MAPPING RAT
    ON (RAT.RAT = SDR.RAT)
  WHERE
    ( (STARTTIME >= 1461340800
    AND STARTTIME < 1461427200) )
    AND RAT.ACCESS_TYPE_ID IN (0,1,2)
    --and SDR.BSCRNC_ID is not null
  GROUP BY
    STTIME ) ) ;
Figure 1 shows the execution plan.
Therefore, you are advised to manually add NOT NULL for the JOIN columns in the statement, as shown below:

SELECT
  *
FROM
( ( SELECT
    STARTTIME STTIME,
    SUM(NVL(PAGE_DELAY_MSEL,0)) PAGE_DELAY_MSEL,
    SUM(NVL(PAGE_SUCCEED_TIMES,0)) PAGE_SUCCEED_TIMES,
    SUM(NVL(FST_PAGE_REQ_NUM,0)) FST_PAGE_REQ_NUM,
    SUM(NVL(PAGE_AVG_SIZE,0)) PAGE_AVG_SIZE,
    SUM(NVL(FST_PAGE_ACK_NUM,0)) FST_PAGE_ACK_NUM,
    SUM(NVL(DATATRANS_DW_DURATION,0)) DATATRANS_DW_DURATION,
    SUM(NVL(PAGE_SR_DELAY_MSEL,0)) PAGE_SR_DELAY_MSEL
  FROM
    PS.SDR_WEB_BSCRNC_1DAY SDR
    INNER JOIN (SELECT
        BSCRNC_ID,
        BSCRNC_NAME,
        ACCESS_TYPE,
        ACCESS_TYPE_ID
      FROM
        nethouse.DIM_LOC_BSCRNC
      GROUP BY
        BSCRNC_ID,
        BSCRNC_NAME,
        ACCESS_TYPE,
        ACCESS_TYPE_ID) DIM
    ON SDR.BSCRNC_ID = DIM.BSCRNC_ID
    AND DIM.ACCESS_TYPE_ID IN (0,1,2)
    INNER JOIN nethouse.DIM_RAT_MAPPING RAT
    ON (RAT.RAT = SDR.RAT)
  WHERE
    ( (STARTTIME >= 1461340800
    AND STARTTIME < 1461427200) )
    AND RAT.ACCESS_TYPE_ID IN (0,1,2)
    and SDR.BSCRNC_ID is not null
  GROUP BY
    STTIME ) ) A;
Figure 2 shows the execution plan.
In this execution plan, more than 95% of the execution time is spent on the window agg performed on the CN: sum is computed for the two columns separately, another sum is computed over the two results, and then trunc and sorting are performed in sequence.
The table structure is as follows:

CREATE TABLE public.test(imsi int,L4_DW_THROUGHPUT int,L4_UL_THROUGHPUT int)
with (orientation = column) DISTRIBUTE BY hash(imsi);

The query statement is as follows:

SELECT COUNT(1) over() AS DATACNT,
IMSI AS IMSI_IMSI,
CAST(TRUNC(((SUM(L4_UL_THROUGHPUT) + SUM(L4_DW_THROUGHPUT))), 0) AS
DECIMAL(20)) AS TOTAL_VOLOME_KPIID
FROM public.test AS test
GROUP BY IMSI
order by TOTAL_VOLOME_KPIID DESC;

The execution plan is as follows:

Row Adapter (cost=10.70..10.70 rows=10 width=12)
  -> Vector Sort (cost=10.68..10.70 rows=10 width=12)
     Sort Key: ((trunc((((sum(l4_ul_throughput)) + (sum(l4_dw_throughput))))::numeric, 0))::numeric(20,0))
     -> Vector WindowAgg (cost=10.09..10.51 rows=10 width=12)
        -> Vector Streaming (type: GATHER) (cost=242.04..246.84 rows=240 width=12)
           Node/s: All datanodes
           -> Vector Hash Aggregate (cost=10.09..10.29 rows=10 width=12)
              Group By Key: imsi
              -> CStore Scan on test (cost=0.00..10.01 rows=10 width=12)
As we can see, both the window agg and the sort are performed on the CN, which is time-consuming.

Modify the statement into a subquery, as shown below:

SELECT COUNT(1) over() AS DATACNT, IMSI_IMSI, TOTAL_VOLOME_KPIID
FROM (SELECT IMSI AS IMSI_IMSI,
CAST(TRUNC(((SUM(L4_UL_THROUGHPUT) + SUM(L4_DW_THROUGHPUT))),
0) AS DECIMAL(20)) AS TOTAL_VOLOME_KPIID
FROM public.test AS test
GROUP BY IMSI
ORDER BY TOTAL_VOLOME_KPIID DESC);

The sum of the trunc results of the two columns is computed in the subquery, and the window agg is then performed on the subquery, pushing the sorting down to the DNs, as shown below:

Row Adapter (cost=10.70..10.70 rows=10 width=24)
  -> Vector WindowAgg (cost=10.45..10.70 rows=10 width=24)
     -> Vector Streaming (type: GATHER) (cost=250.83..253.83 rows=240 width=24)
        Node/s: All datanodes
        -> Vector Sort (cost=10.45..10.48 rows=10 width=12)
           Sort Key: ((trunc(((sum(test.l4_ul_throughput) + sum(test.l4_dw_throughput)))::numeric, 0))::numeric(20,0))
           -> Vector Hash Aggregate (cost=10.09..10.29 rows=10 width=12)
              Group By Key: test.imsi
              -> CStore Scan on test (cost=0.00..10.01 rows=10 width=12)
The optimized SQL statement greatly improves performance, reducing the execution time from 120s to 7s.
If bit0 of cost_param is set to 1, an improved mechanism is used for estimating the selectivity of non-equi-joins. This method is more accurate for joins between two identical tables. The following example describes the optimization scenario when bit0 of cost_param is 1. In V300R002C00 and later, cost_param & 1 = 0 is no longer used; the optimized formula is always selected.

Note: Selectivity indicates the percentage of rows meeting the join conditions among the join results when two tables are joined.
The table structure is as follows:

CREATE TABLE LINEITEM
(
L_ORDERKEY BIGINT NOT NULL
, L_PARTKEY BIGINT NOT NULL
, L_SUPPKEY BIGINT NOT NULL
, L_LINENUMBER BIGINT NOT NULL
, L_QUANTITY DECIMAL(15,2) NOT NULL
, L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL
, L_DISCOUNT DECIMAL(15,2) NOT NULL
, L_TAX DECIMAL(15,2) NOT NULL
, L_RETURNFLAG CHAR(1) NOT NULL
, L_LINESTATUS CHAR(1) NOT NULL
, L_SHIPDATE DATE NOT NULL
, L_COMMITDATE DATE NOT NULL
, L_RECEIPTDATE DATE NOT NULL
, L_SHIPINSTRUCT CHAR(25) NOT NULL
, L_SHIPMODE CHAR(10) NOT NULL
, L_COMMENT VARCHAR(44) NOT NULL
) with (orientation = column, COMPRESSION = MIDDLE) distribute by hash(L_ORDERKEY);

CREATE TABLE ORDERS
(
O_ORDERKEY BIGINT NOT NULL
, O_CUSTKEY BIGINT NOT NULL
, O_ORDERSTATUS CHAR(1) NOT NULL
, O_TOTALPRICE DECIMAL(15,2) NOT NULL
, O_ORDERDATE DATE NOT NULL
, O_ORDERPRIORITY CHAR(15) NOT NULL
, O_CLERK CHAR(15) NOT NULL
, O_SHIPPRIORITY BIGINT NOT NULL
, O_COMMENT VARCHAR(79) NOT NULL
) with (orientation = column, COMPRESSION = MIDDLE) distribute by hash(O_ORDERKEY);

The query statement is as follows:

explain verbose select
count(*) as numwait
from
lineitem l1,
orders
where
o_orderkey = l1.l_orderkey
and o_orderstatus = 'F'
and l1.l_receiptdate > l1.l_commitdate
and not exists (
select
*
from
lineitem l3
where
l3.l_orderkey = l1.l_orderkey
and l3.l_suppkey <> l1.l_suppkey
and l3.l_receiptdate > l3.l_commitdate
)
order by
numwait desc;
The following figure shows the execution plan. (When verbose is used, distinct-value estimation is displayed for columns, controlled by cost off/on; the hash join rows show the estimated number of distinct values while the other rows do not.)

The query contains an Anti Join on the lineitem table. When cost_param & bit0 is 0, the estimated Anti Join row count differs greatly from the actual one, and query performance deteriorates. Setting cost_param & bit0 to 1 estimates the Anti Join row count more accurately and improves query performance. The optimized execution plan is as follows:

If bit1 of cost_param is set to 1 (set cost_param=2), the selectivity under multiple filter criteria is estimated as the lowest selectivity among all criteria, rather than as the product of the selectivities under the individual criteria. This method is more accurate when the filtered columns are closely correlated. The following example describes the optimization scenario when cost_param & bit1 is 1.
The table structure is as follows:

CREATE TABLE NATION
(
N_NATIONKEY INT NOT NULL
, N_NAME CHAR(25) NOT NULL
, N_REGIONKEY INT NOT NULL
, N_COMMENT VARCHAR(152)
) distribute by replication;

CREATE TABLE SUPPLIER
(
S_SUPPKEY BIGINT NOT NULL
, S_NAME CHAR(25) NOT NULL
, S_ADDRESS VARCHAR(40) NOT NULL
, S_NATIONKEY INT NOT NULL
, S_PHONE CHAR(15) NOT NULL
, S_ACCTBAL DECIMAL(15,2) NOT NULL
, S_COMMENT VARCHAR(101) NOT NULL
) distribute by hash(S_SUPPKEY);

CREATE TABLE PARTSUPP
(
PS_PARTKEY BIGINT NOT NULL
, PS_SUPPKEY BIGINT NOT NULL
, PS_AVAILQTY BIGINT NOT NULL
, PS_SUPPLYCOST DECIMAL(15,2) NOT NULL
, PS_COMMENT VARCHAR(199) NOT NULL
) distribute by hash(PS_PARTKEY);

The query statement is as follows:

set cost_param=2;
explain verbose select
nation,
sum(amount) as sum_profit
from
(
select
n_name as nation,
l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
from
supplier,
lineitem,
partsupp,
nation
where
s_suppkey = l_suppkey
and ps_suppkey = l_suppkey
and ps_partkey = l_partkey
and s_nationkey = n_nationkey
) as profit
group by nation
order by nation;
When bit1 of cost_param is 0, the execution plan is as follows:

In the preceding query, the hash join conditions for the supplier, lineitem, and partsupp tables are lineitem.l_suppkey = supplier.s_suppkey and lineitem.l_partkey = partsupp.ps_partkey, so the hash join involves two filter criteria. lineitem.l_suppkey in the first criterion and lineitem.l_partkey in the second are strongly correlated columns of the lineitem table. In this situation, when cost_param & bit1 is 1, the selectivity of the hash join conditions is estimated as the lowest selectivity among all criteria rather than as the product of the per-criterion selectivities, which is more accurate for closely correlated columns. The plan after optimization is as follows:
During a site test, the following information is displayed after EXPLAIN ANALYZE is executed:

According to the execution information, HashJoin is the performance bottleneck of the whole plan. Its execution time range [2657.406, 93339.924] shows severe skew across DNs during the HashJoin operation.

The memory information (as shown in the following figure) also shows data skew in the memory usage of the nodes.

The preceding two symptoms indicate serious computing skew in this SQL statement. Further analysis of the operators below the HashJoin shows serious computing skew [38.885, 2940.983] in Seq Scan on s_riskrate_setting. Based on the Scan description, we can infer that the performance problem lies in data skew in the s_riskrate_setting table, which was later confirmed. After optimization, the execution time was reduced from 94s to 50s.
The EXPLAIN PERFORMANCE information collected at a site is as follows. As shown in the red boxes, the two performance bottlenecks are table scan operations.

Further analysis found that both tables have the filter condition acct_id = 'A012709548'::bpchar.

Add a partial cluster key on the acct_id column of the two tables, and then run VACUUM FULL to make the local clustering take effect. The scan performance improves.
In GaussDB(DWS), row-store tables use the row execution engine, and column-store tables use the column execution engine. If an SQL statement involves both row-store and column-store tables, the system automatically selects the row execution engine. Because the performance of the column execution engine is far better (except for indexscan-related operators), column-store tables are recommended. This is especially important for medium-sized result dumping tables, so choose the table storage type carefully.

In a site test, the customer expected the following execution plan to improve and return results within 3s:

Analysis found that the row engine was used, because both the temporary plan table input_acct_id_tbl and the intermediate result dumping table row_unlogged_table were row-store tables.

After the two tables were changed to column-store tables, performance improved and the result was returned within 1.6s.
In another site test, the customer expected the following execution plan to improve and return results within 3s:

Analysis shows that the performance bottleneck of this plan is lfbank.f_ev_dp_kdpl_zhmin. The scan condition of this table is as follows:

Modify the lfbank.f_ev_dp_kdpl_zhmin table to a column-store table, create a partial cluster key (PCK) on the yezdminc column, and set PARTIAL_CLUSTER_ROWS to 100000000. The execution plan after optimization is as follows:
In the following simple SQL statement, the performance bottleneck is the scan of dwcjk:

The cjrq field of the table data at the service layer contains date values, which fits the characteristics of a partition key. Redefine the dwcjk table as the partitioned table dwcjk_part, with cjrq as the partition key and day as the interval unit (a sketch follows). After the modification, the performance nearly doubles.
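A minimal sketch of such a definition (all columns other than cjrq and the partition boundaries are illustrative):

CREATE TABLE dwcjk_part
(
    cjrq date,
    cjbh bigint,
    cjje decimal(15,2)
)
DISTRIBUTE BY HASH (cjbh)
PARTITION BY RANGE (cjrq)
(
    PARTITION p20230101 VALUES LESS THAN ('2023-01-02'),
    PARTITION p20230102 VALUES LESS THAN ('2023-01-03')
);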
The t1 table is defined as follows:

create table t1(a int, b int, c int) distribute by hash(a);
Assume that setA is the set of distribution columns of the result set produced by the operator below the agg, and setB is the set of GROUP BY columns of the agg operation. Within the stream framework, the agg operation can then be performed in two scenarios.

In the first scenario, setA is a subset of setB, so the aggregation result of the lower-layer result set is already correct and can be used directly by the upper-layer operator. See the following example:
explain select a, count(1) from t1 group by a;
 id |          operation           | E-rows | E-width | E-costs
----+------------------------------+--------+---------+---------
  1 | ->  Streaming (type: GATHER) |     30 |       4 | 15.56
  2 |   ->  HashAggregate          |     30 |       4 | 14.31
  3 |     ->  Seq Scan on t1       |     30 |       4 | 14.14
(3 rows)
In the second scenario, the Stream execution framework can generate the following three plans:

hashagg+gather(redistribute)+hashagg
redistribute+hashagg(+gather)
hashagg+redistribute+hashagg(+gather)

GaussDB(DWS) provides the GUC parameter best_agg_plan to intervene in plan generation and force one of these plans. The parameter can be set to 0, 1, 2, or 3: values 1, 2, and 3 force the first, second, and third plan respectively, while 0 lets the optimizer choose by cost.
For details, see the following examples.

set best_agg_plan to 1;
SET
explain select b,count(1) from t1 group by b;
 id |            operation            | E-rows | E-width | E-costs
----+---------------------------------+--------+---------+---------
  1 | ->  HashAggregate               |      8 |       4 | 15.83
  2 |   ->  Streaming (type: GATHER)  |     25 |       4 | 15.83
  3 |     ->  HashAggregate           |     25 |       4 | 14.33
  4 |       ->  Seq Scan on t1        |     30 |       4 | 14.14
(4 rows)
set best_agg_plan to 2;
SET
explain select b,count(1) from t1 group by b;
 id |                operation                | E-rows | E-width | E-costs
----+-----------------------------------------+--------+---------+---------
  1 | ->  Streaming (type: GATHER)            |     30 |       4 | 15.85
  2 |   ->  HashAggregate                     |     30 |       4 | 14.60
  3 |     ->  Streaming(type: REDISTRIBUTE)   |     30 |       4 | 14.45
  4 |       ->  Seq Scan on t1                |     30 |       4 | 14.14
(4 rows)
set best_agg_plan to 3;
SET
explain select b,count(1) from t1 group by b;
 id |                operation                | E-rows | E-width | E-costs
----+-----------------------------------------+--------+---------+---------
  1 | ->  Streaming (type: GATHER)            |     30 |       4 | 15.84
  2 |   ->  HashAggregate                     |     30 |       4 | 14.59
  3 |     ->  Streaming(type: REDISTRIBUTE)   |     25 |       4 | 14.59
  4 |       ->  HashAggregate                 |     25 |       4 | 14.33
  5 |         ->  Seq Scan on t1              |     30 |       4 | 14.14
(5 rows)
Generally, the optimizer chooses the optimal execution plan, but cost estimation, especially of intermediate result sets, can deviate widely, leading to large deviations in agg computation. In such cases, use best_agg_plan to adjust the agg plan.

When the aggregation convergence ratio is very small, that is, the number of rows does not decrease noticeably after the agg operation (a factor of 5 is the critical point), select the redistribute+hashagg or hashagg+redistribute+hashagg plan.
select
    1,
    (select count(*) from customer_address_001 a4 where a4.ca_address_sk = a.ca_address_sk) as GZCS
from customer_address_001 a;

The performance of this SQL statement is poor, and a SubPlan appears in its execution plan as follows:

The key to this optimization is eliminating the subquery. Service analysis shows that a.ca_address_sk is never NULL, so in terms of SQL syntax the statement can be rewritten as follows:

select
count(*)
from customer_address_001 a4, customer_address_001 a
where a4.ca_address_sk = a.ca_address_sk
group by a.ca_address_sk;

To ensure that the rewritten statement is functionally identical, NOT NULL is added to customer_address_001.ca_address_sk.
At one site, the customer reported that the following SQL statement had run for more than one day without finishing:

UPDATE calc_empfyc_c_cusr1 t1
SET ln_rec_count =
    (
    SELECT CASE WHEN current_date - ln_process_date + 1 <= 12 THEN 0 ELSE t2.ln_rec_count END
    FROM calc_empfyc_c1_policysend_tmp t2
    WHERE t1.ln_branch = t2.ln_branch AND t1.ls_policyno_cusr1 = t2.ls_policyno_cusr1
)
WHERE dsign = '1'
AND flag = '1'
AND EXISTS
    (SELECT 1
    FROM calc_empfyc_c1_policysend_tmp t2
    WHERE t1.ln_branch = t2.ln_branch AND t1.ls_policyno_cusr1 = t2.ls_policyno_cusr1
    );

The corresponding execution plan is as follows:

A SubPlan exists in the execution plan, and its computation accounts for a large proportion of the total. That is, the SubPlan is the performance bottleneck.

Based on the SQL syntax, the statement can be rewritten to remove the SubPlan:

UPDATE calc_empfyc_c_cusr1 t1
SET ln_rec_count = CASE WHEN current_date - ln_process_date + 1 <= 12 THEN 0 ELSE t2.ln_rec_count END
FROM calc_empfyc_c1_policysend_tmp t2
WHERE
t1.dsign = '1' AND t1.flag = '1'
AND t1.ln_branch = t2.ln_branch AND t1.ls_policyno_cusr1 = t2.ls_policyno_cusr1;

The modified SQL statement completes within 50s.
In a site test, ddw_f10_op_cust_asset_mon is a partitioned table whose partition key year_mth is a string combining year and month values.

The tested SQL statement is as follows:

select
    count(1)
from t_ddw_f10_op_cust_asset_mon b1
where b1.year_mth between to_char(add_months(to_date('20170222','yyyymmdd'), -11),'yyyymm') and substr('20170222',1,6);

The test result shows that the table scan takes 135s, which is the likely performance bottleneck.
add_months is a locally adapted function:

CREATE OR REPLACE FUNCTION ADD_MONTHS(date, integer) RETURNS date
    AS $$
    SELECT
    CASE
    WHEN (EXTRACT(day FROM $1) = EXTRACT(day FROM (date_trunc('month', $1) + INTERVAL '1 month - 1 day'))) THEN
        date_trunc('month', $1) + CAST($2 + 1 || ' month - 1 day' as interval)
    ELSE
        $1 + CAST($2 || ' month' as interval)
    END
    $$
    LANGUAGE SQL
    IMMUTABLE;

According to the execution plan of the statement, the base-table filter is as follows:

Filter: (((year_mth)::text <= '201702'::text) AND ((year_mth)::text >= to_char(add_months(to_date('20170222'::text, 'YYYYMMDD'::text), (-11)), 'YYYYMM'::text)))
The filter condition contains the expression to_char(add_months(to_date('20170222','yyyymmdd'), -11),'yyyymm'). Because this expression is not a constant, it cannot be used for partition pruning, so the query scans all data in the partitioned table.

As pg_proc shows, to_date and to_char are stable functions. According to the function volatility rules described in PostgreSQL, functions of this level cannot be converted into constants in the preprocessing phase, which is the root cause preventing partition pruning.

Based on the preceding analysis, rewriting the expression so that it can be used for partition pruning is the key to this optimization. The original SQL statement can be rewritten as follows:

select
    count(1)
from t_ddw_f10_op_cust_asset_mon b1
where b1.year_mth between (substr(ADD_MONTHS('20170222'::date, -11), 1, 4)||substr(ADD_MONTHS('20170222'::date, -11), 6, 2)) and substr('20170222',1,6);

The execution time of the modified SQL statement is reduced from 135s to 18s.
in-clause/any-clause is a common SQL constraint. Sometimes the clause following IN or ANY is a list of constants, for example:

select
count(1)
from calc_empfyc_c1_result_tmp_t1
where ls_pid_cusr1 in ('20120405', '20130405');

or

select
count(1)
from calc_empfyc_c1_result_tmp_t1
where ls_pid_cusr1 = any(values('20120405'), ('20130405'));
Some special usages are as follows:

SELECT
ls_pid_cusr1,COALESCE(max(round((current_date-bthdate)/365)),0)
FROM calc_empfyc_c1_result_tmp_t1 t1,p10_md_tmp_t2 t2
WHERE t1.ls_pid_cusr1 = any(values(id),(id15))
GROUP BY ls_pid_cusr1;
Here, id and id15 are columns of p10_md_tmp_t2, and t1.ls_pid_cusr1 = any(values(id),(id15)) is equivalent to t1.ls_pid_cusr1 = id OR t1.ls_pid_cusr1 = id15.

The join condition is therefore not a simple equality, and nestloop must be used for this join. The execution plan is as follows:

The test result shows that both result sets are large, so the nestloop takes more than one hour to return results. The key to optimization is therefore eliminating the nestloop in favor of the more efficient hashjoin. Since the condition is semantically a disjunction of two equalities, the SQL statement can be rewritten as follows:
select
ls_pid_cusr1,COALESCE(max(round(ym/365)),0)
from
(
    (
        SELECT
        ls_pid_cusr1,(current_date-bthdate) as ym
        FROM calc_empfyc_c1_result_tmp_t1 t1,p10_md_tmp_t2 t2
        WHERE t1.ls_pid_cusr1 = t2.id and t1.ls_pid_cusr1 != t2.id15
    )
    union all
    (
        SELECT
        ls_pid_cusr1,(current_date-bthdate) as ym
        FROM calc_empfyc_c1_result_tmp_t1 t1,p10_md_tmp_t2 t2
        WHERE t1.ls_pid_cusr1 = id15
    )
)
GROUP BY ls_pid_cusr1;
The optimized query consists of two equi-join subqueries, each of which can use hashjoin in this scenario. The optimized execution plan is as follows:

Before optimization, no result was returned after more than one hour; after optimization, the result is returned within 7s.
You can add PARTIAL CLUSTER KEY(column_name[,...]) to the definition of a column-store table to set one or more columns as partial cluster keys. Then, by default, every 70 CUs (4.2 million rows) are sorted by the cluster keys during data import, which narrows the value range of each of the 70 new CUs. If the WHERE condition of a query references these columns, filtering performance improves.
CREATE TABLE lineitem
(
L_ORDERKEY BIGINT NOT NULL
, L_PARTKEY BIGINT NOT NULL
, L_SUPPKEY BIGINT NOT NULL
, L_LINENUMBER BIGINT NOT NULL
, L_QUANTITY DECIMAL(15,2) NOT NULL
, L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL
, L_DISCOUNT DECIMAL(15,2) NOT NULL
, L_TAX DECIMAL(15,2) NOT NULL
, L_RETURNFLAG CHAR(1) NOT NULL
, L_LINESTATUS CHAR(1) NOT NULL
, L_SHIPDATE DATE NOT NULL
, L_COMMITDATE DATE NOT NULL
, L_RECEIPTDATE DATE NOT NULL
, L_SHIPINSTRUCT CHAR(25) NOT NULL
, L_SHIPMODE CHAR(10) NOT NULL
, L_COMMENT VARCHAR(44) NOT NULL
)
with (orientation = column)
distribute by hash(L_ORDERKEY);

select
sum(l_extendedprice * l_discount) as revenue
from
lineitem
where
l_shipdate >= '1994-01-01'::date
and l_shipdate < '1994-01-01'::date + interval '1 year'
and l_discount between 0.06 - 0.01 and 0.06 + 0.01
and l_quantity < 24;
In the where condition, both the l_shipdate and l_quantity columns have a few distinct values, and their values can be used for min/max filtering. Therefore, modify the table definition as follows:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 | CREATE TABLE lineitem +( +L_ORDERKEY BIGINT NOT NULL +, L_PARTKEY BIGINT NOT NULL +, L_SUPPKEY BIGINT NOT NULL +, L_LINENUMBER BIGINT NOT NULL +, L_QUANTITY DECIMAL(15,2) NOT NULL +, L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL +, L_DISCOUNT DECIMAL(15,2) NOT NULL +, L_TAX DECIMAL(15,2) NOT NULL +, L_RETURNFLAG CHAR(1) NOT NULL +, L_LINESTATUS CHAR(1) NOT NULL +, L_SHIPDATE DATE NOT NULL +, L_COMMITDATE DATE NOT NULL +, L_RECEIPTDATE DATE NOT NULL +, L_SHIPINSTRUCT CHAR(25) NOT NULL +, L_SHIPMODE CHAR(10) NOT NULL +, L_COMMENT VARCHAR(44) NOT NULL +, partial cluster key(l_shipdate, l_quantity) +) +with (orientation = column) +distribute by hash(L_ORDERKEY); + |
Import the data again and run the query statement. Then, compare the execution time before and after partial cluster keys are used.
After partial cluster keys are used, the execution time of the operator 5 --CStore Scan on public.lineitem decreases by 1.2s because 84 CUs are filtered out.

After partial cluster keys are defined, data is sorted when it is imported, which affects import performance. If all the data can be sorted in memory, the keys have little impact on import. If some data cannot be sorted in memory and must be written to a temporary file for sorting, import performance deteriorates significantly.
The memory available for this sort is specified by the psort_work_mem parameter. Setting it to a larger value reduces the impact of sorting on import performance.
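As a minimal illustration, the parameter can be raised for the current session before a bulk import (the value here is an assumption; size it to your workload and memory budget):

```
-- Illustrative value only: give the partial-cluster-key sort more memory for this session.
SET psort_work_mem = '1GB';
```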
The volume of data sorted at a time is specified by the PARTIAL_CLUSTER_ROWS storage parameter of the table. Decreasing this value reduces the amount of data sorted in each batch. PARTIAL_CLUSTER_ROWS is usually used together with the MAX_BATCHROW parameter, and its value must be an integer multiple of the MAX_BATCHROW value, which specifies the maximum number of rows in a CU.
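A sketch of setting the two storage parameters together at table creation time; the values are illustrative and only need to keep PARTIAL_CLUSTER_ROWS an integer multiple of MAX_BATCHROW:

```
-- Illustrative values: each CU holds up to 60000 rows, and each
-- partial-cluster sort batch covers 10 CUs (600000 rows).
CREATE TABLE lineitem_pck_demo
(
    l_shipdate DATE,
    l_quantity DECIMAL(15,2),
    partial cluster key(l_shipdate, l_quantity)
)
WITH (orientation = column, max_batchrow = 60000, partial_cluster_rows = 600000);
```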
A query that used to complete in milliseconds may now take several seconds, and one that used to take several seconds may now take half an hour. This section describes how to analyze and rectify such low-efficiency issues.

Perform the following procedure to locate the cause of the fault.
The ANALYZE command updates data statistics, such as table sizes and column characteristics, that the optimizer relies on. It is a lightweight command and can be executed frequently. If query efficiency is improved or restored after it is executed, the autovacuum process is not functioning well and requires further analysis.
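For example, statistics can be refreshed for the whole database or for a single suspect table (the table name is illustrative):

```
ANALYZE;           -- update statistics for all tables in the current database
ANALYZE lineitem;  -- or for a single table
```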
For example, if only the first 10 records of a table are needed but the query statement scans all records, query efficiency is acceptable for a table containing 50 records but very low for a table containing 50,000.

If an application requires only part of the data but the query statement returns all of it, add a LIMIT clause to restrict the number of returned records. This gives the database optimizer room to optimize and improves query efficiency.
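A minimal sketch (the table and column names are hypothetical):

```
-- Fetch only the rows the application actually needs,
-- instead of returning the whole table to the client.
SELECT id, name FROM customer_t ORDER BY id LIMIT 10;
```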
Run the query statement when there are no or only a few other query requests in the database, and observe the query efficiency. If the efficiency is high, the issue was probably caused by a heavily loaded host in the database system or an inefficient execution plan.

One major cause of reduced query efficiency is that the required data is not cached in memory, or has been evicted by other query requests because of insufficient memory resources.

Run the same query statement repeatedly. If the query efficiency increases gradually, this was probably the cause.
DROP TABLE fails to be executed in the following scenarios:

The table_name table exists on some nodes only.

In these scenarios, if DROP TABLE table_name fails, run DROP TABLE IF EXISTS table_name to drop table_name successfully.
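For example:

```
DROP TABLE IF EXISTS table_name;
```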
Two users log in to the same database human_resource and run the select count(*) from areas statement separately to query the areas table, but obtain different results.

Check whether the two users are really querying the same table. In a relational database, a table is identified by three elements: database, schema, and table. In this case, the database is human_resource and the table is areas, so the schemas must be compared. Logging in as users dbadmin and user01 separately shows that search_path is public for dbadmin and $user for user01. By default, no schema with the same name as the cluster administrator dbadmin is created, so tables created by dbadmin go into public when no schema is specified. When a common user such as user01 is created, however, a schema with the same name (user01) is created by default, and tables created by that user go into user01 when no schema is specified. In conclusion, the two users are operating on same-name tables that are not actually the same table.

Use schema.table to specify exactly which table to query.
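For example, each user can check the effective schema and then qualify the table explicitly (the queries are illustrative):

```
SHOW search_path;                   -- public for dbadmin, "$user" for user01
SELECT count(*) FROM public.areas;  -- the table dbadmin was counting
SELECT count(*) FROM user01.areas;  -- the table user01 was counting
```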
The following error is reported during integer conversion:

```
Invalid input syntax for integer: "13."
```
Some data types cannot be converted to the target data type.
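A minimal sketch of the failure and two explicit workarounds (standard PostgreSQL-style casting, which applies here):

```
SELECT '13.'::integer;           -- fails: invalid input syntax for integer
SELECT '13.'::numeric::integer;  -- works: cast to numeric first, then to integer
SELECT CAST('13' AS integer);    -- works: the string is a valid integer literal
```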
Gradually narrow down the range of SQL statements to locate the fault.
With automatic retry (CN retry), GaussDB(DWS) retries an SQL statement when its execution fails. If an SQL statement sent from the gsql client, JDBC driver, or ODBC driver fails to be executed, the CN can automatically identify the error reported during execution and re-deliver the task to retry it.

The restrictions of this function are as follows:

Only the error types in Table 1 are supported.

Single statements, stored procedures, functions, and anonymous blocks support CN retry. Statements in transaction blocks are not supported.

Table 1 lists the error types supported by CN retry and the corresponding error codes. You can use the GUC parameter retry_ecode_list to set the list of error types supported by CN retry. You are not advised to modify this parameter; to modify it, contact technical support.
Table 1 Error types supported by CN retry

| Error Type | Error Code | Remarks |
|---|---|---|
| CONNECTION_RESET_BY_PEER | YY001 | TCP communication error: Connection reset by peer (communication between the CN and DNs) |
| STREAM_CONNECTION_RESET_BY_PEER | YY002 | TCP communication error: Stream connection reset by peer (communication between DNs) |
| LOCK_WAIT_TIMEOUT | YY003 | Lock wait timeout |
| CONNECTION_TIMED_OUT | YY004 | TCP communication error: Connection timed out |
| SET_QUERY_ERROR | YY005 | Failed to deliver the SET command: Set query |
| OUT_OF_LOGICAL_MEMORY | YY006 | Failed to apply for memory: Out of logical memory |
| SCTP_MEMORY_ALLOC | YY007 | SCTP communication error: Memory allocate error |
| SCTP_NO_DATA_IN_BUFFER | YY008 | SCTP communication error: SCTP no data in buffer |
| SCTP_RELEASE_MEMORY_CLOSE | YY009 | SCTP communication error: Release memory close |
| SCTP_TCP_DISCONNECT | YY010 | SCTP communication error: TCP disconnect |
| SCTP_DISCONNECT | YY011 | SCTP communication error: SCTP disconnect |
| SCTP_REMOTE_CLOSE | YY012 | SCTP communication error: Stream closed by remote |
| SCTP_WAIT_POLL_UNKNOW | YY013 | Waiting for an unknown poll: SCTP wait poll unknown |
| SNAPSHOT_INVALID | YY014 | Snapshot invalid |
| ERRCODE_CONNECTION_RECEIVE_WRONG | YY015 | Connection receive wrong |
| OUT_OF_MEMORY | 53200 | Out of memory |
| CONNECTION_FAILURE | 08006 | GTM error: Connection failure |
| CONNECTION_EXCEPTION | 08000 | Failed to communicate with DNs due to connection errors: Connection exception |
| ADMIN_SHUTDOWN | 57P01 | System shutdown by administrators: Admin shutdown |
| STREAM_REMOTE_CLOSE_SOCKET | XX003 | Remote socket disabled: Stream remote close socket |
| ERRCODE_STREAM_DUPLICATE_QUERY_ID | XX009 | Duplicate query id |
| ERRCODE_STREAM_CONCURRENT_UPDATE | YY016 | Stream concurrent update |
| ERRCODE_LLVM_BAD_ALLOC_ERROR | CG003 | Memory allocation error: Allocate error |
| ERRCODE_LLVM_FATAL_ERROR | CG004 | Fatal error |
| ERRCODE_HASHJOIN_TEMP_FILE_ERROR | F0011 | HashJoin temporary file read error: File error |
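To check the currently configured retry list without modifying it, you can simply read the GUC parameter (a minimal check; the output format depends on your cluster version):

```
SHOW retry_ecode_list;
```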
With GaussDB(DWS) PL/Java functions, you can choose your favorite Java IDE to write Java methods and install the JAR files containing these methods into the GaussDB(DWS) database before invoking them. GaussDB(DWS) PL/Java is developed based on open-source PL/Java 1.5.5 and uses JDK 1.8.0_292.

Java UDFs can be used for some Java logical computing. You are not advised to encapsulate whole services in Java UDFs.

Before using PL/Java, pack the implementation of the Java methods into a JAR package and deploy it into the database. Then, create the functions as a database administrator. For compatibility purposes, use JDK 1.8.0_262 for compilation.

Java method implementation and JAR package archiving can be done in an integrated development environment (IDE). The following is a simple example of compiling and archiving through command lines; you can create a JAR package that contains a single method in a similar way.

First, prepare an Example.java file that contains a method for converting substrings to uppercase. In the following example, Example is the class name and upperString is the method name:

```
public class Example
{
    public static String upperString(String text, int beginIndex, int endIndex)
    {
        return text.substring(beginIndex, endIndex).toUpperCase();
    }
}
```
Then, create a manifest.txt file containing the following content:
```
Manifest-Version: 1.0
Main-Class: Example
Specification-Title: "Example"
Specification-Version: "1.0"
Created-By: 1.6.0_35-b10-428-11M3811
Build-Date: 08/14/2018 10:09 AM
```
Manifest-Version specifies the version of the manifest file. Main-Class specifies the main class used by the .jar file. Specification-Title and Specification-Version are the extended attributes of the package. Specification-Title specifies the title of the extended specification and Specification-Version specifies the version of the extended specification. Created-By specifies the person who created the file. Build-Date specifies the date when the file was created.
Finally, compile the .java file and package the class into javaudf-example.jar:

```
javac Example.java
jar cfm javaudf-example.jar manifest.txt Example.class
```
JAR package names must comply with JDK rules. If a name contains invalid characters, an error occurs when a function is deployed or used.
First, store the JAR package on an OBS server. For details, see "Uploading a File" in Object Storage Service Console Operation Guide. Then, create the access key AK/SK. For details about how to create access keys, see "Creating an Access Key (AK and SK)" in Data Warehouse Service User Guide. After that, log in to the database and run the gs_extend_library function to import the package into GaussDB(DWS):

```
SELECT gs_extend_library('addjar', 'obs://bucket/path/javaudf-example.jar accesskey=access_key_value_to_be_replaced secretkey=secret_access_key_value_to_be_replaced region=region_name libraryname=example');
```
For details about how to use the gs_extend_library function, see Manage JAR packages and files. Change the values of AK and SK as needed. Replace region_name with an actual region name.
Log in to the database as a user who has the sysadmin permission (for example, dbadmin) and create the java_upperstring function:

```
CREATE FUNCTION java_upperstring(VARCHAR, INTEGER, INTEGER)
    RETURNS VARCHAR
    AS 'Example.upperString'
LANGUAGE JAVA;
```
Execute the java_upperstring function.
```
SELECT java_upperstring('test', 0, 1);
```
The expected result is as follows:
```
 java_upperstring
-------------------
 T
(1 row)
```
Create a common user named udf_user.
```
CREATE USER udf_user PASSWORD 'password';
```

Grant user udf_user the permission to use the java_upperstring function. Note that the user can use the function only if it also has permission on the schema that contains the function.

```
GRANT ALL PRIVILEGES ON SCHEMA public TO udf_user;
GRANT ALL PRIVILEGES ON FUNCTION java_upperstring(VARCHAR, INTEGER, INTEGER) TO udf_user;
```
Log in to the database as user udf_user.
```
SET SESSION SESSION AUTHORIZATION udf_user PASSWORD 'password';
```
Execute the java_upperstring function.
```
SELECT public.java_upperstring('test', 0, 1);
```
The expected result is as follows:
```
 java_upperstring
-------------------
 T
(1 row)
```

Delete the function.

```
DROP FUNCTION java_upperstring;
```
Use the gs_extend_library function to uninstall the JAR package.
```
SELECT gs_extend_library('rmjar', 'libraryname=example');
```

A database user with the sysadmin permission can use the gs_extend_library function to deploy, view, and delete JAR packages in the database. The syntax of the function is as follows:

```
SELECT gs_extend_library('[action]', '[operation]');
```

For the addjar action shown earlier, the operation string takes the following form:

```
obs://[bucket]/[source_filepath] accesskey=[accesskey] secretkey=[secretkey] region=[region] libraryname=[libraryname]
```
PL/Java functions are created using the CREATE FUNCTION syntax and are defined with LANGUAGE JAVA, including the RETURNS and AS clauses.

```
CREATE [ OR REPLACE ] FUNCTION function_name
( [ { argname [ argmode ] argtype [ { DEFAULT | := | = } expression ]} [, ...] ])
[ RETURNS rettype [ DETERMINISTIC ] ]
LANGUAGE JAVA
[
    { IMMUTABLE | STABLE | VOLATILE }
    | [ NOT ] LEAKPROOF
    | WINDOW
    | { CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT }
    | { [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER | AUTHID DEFINER | AUTHID CURRENT_USER }
    | { FENCED }
    | COST execution_cost
    | ROWS result_rows
    | SET configuration_parameter { {TO | =} value | FROM CURRENT }
] [...]
{
    AS 'class_name.method_name' ( { argtype } [, ...] )
}
```
During execution, PL/Java searches for the Java class specified by a function among all the deployed JAR packages, which are ranked by name in alphabetical order, invokes the Java method in the first found class, and returns results.
PL/Java functions can be deleted using the DROP FUNCTION syntax. For details about the syntax, see DROP FUNCTION.

```
DROP FUNCTION [ IF EXISTS ] function_name [ ( [ {[ argmode ] [ argname ] argtype} [, ...] ] ) ] [ CASCADE | RESTRICT ];
```
To delete an overloaded function (for details, see Overloaded Functions), specify argtype in the function. To delete other functions, simply specify function_name.
Only users with the sysadmin permission can create PL/Java functions. They can also grant other users the permission to use the PL/Java functions. For details about the syntax, see GRANT.

```
GRANT { EXECUTE | ALL [ PRIVILEGES ] }
    ON { FUNCTION {function_name ( [ {[ argmode ] [ arg_name ] arg_type} [, ...] ] )} [, ...]
       | ALL FUNCTIONS IN SCHEMA schema_name [, ...] }
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];
```
The following table lists the default mapping between GaussDB(DWS) and Java data types.

| GaussDB(DWS) | Java |
|---|---|
| BOOLEAN | boolean |
| "char" | byte |
| bytea | byte[] |
| SMALLINT | short |
| INTEGER | int |
| BIGINT | long |
| FLOAT4 | float |
| FLOAT8 | double |
| CHAR | java.lang.String |
| VARCHAR | java.lang.String |
| TEXT | java.lang.String |
| name | java.lang.String |
| DATE | java.sql.Timestamp |
| TIME | java.sql.Time (stored value treated as local time) |
| TIMETZ | java.sql.Time |
| TIMESTAMP | java.sql.Timestamp |
| TIMESTAMPTZ | java.sql.Timestamp |

GaussDB(DWS) can convert basic array types. You only need to append a pair of square brackets ([]) to the data type when creating a function.
```
CREATE FUNCTION java_arrayLength(INTEGER[])
    RETURNS INTEGER
    AS 'Example.getArrayLength'
LANGUAGE JAVA;
```
Java code is similar to the following:
```
public class Example
{
    public static int getArrayLength(Integer[] intArray)
    {
        return intArray.length;
    }
}
```
Invoke the following statement:
```
SELECT java_arrayLength(ARRAY[1, 2, 3]);
```
The expected result is as follows:
```
 java_arrayLength
------------------
 3
(1 row)
```
By default, GaussDB(DWS) data types are mapped to simple Java types, which cannot represent NULL values. To receive and process NULL values passed from GaussDB(DWS) in a Java function, specify the Java wrapper class in the AS clause as follows:

```
CREATE FUNCTION java_countnulls(INTEGER[])
    RETURNS INTEGER
    AS 'Example.countNulls(java.lang.Integer[])'
LANGUAGE JAVA;
```
Java code is similar to the following:
```
public class Example
{
    public static int countNulls(Integer[] intArray)
    {
        int nullCount = 0;
        for (int idx = 0; idx < intArray.length; ++idx)
        {
            if (intArray[idx] == null)
                nullCount++;
        }
        return nullCount;
    }
}
```
Invoke the following statement:
```
SELECT java_countNulls(ARRAY[null, 1, null, 2, null]);
```
The expected result is as follows:
```
 java_countNulls
-----------------
 3
(1 row)
```
PL/Java supports overloaded functions. You can create functions with the same name or invoke overloaded functions from Java code. The procedure is as follows:

For example, create two Java methods with the same name, dummy(int) and dummy(String), whose parameter types differ.

```
public class Example
{
    public static int dummy(int value)
    {
        return value * 2;
    }
    public static String dummy(String value)
    {
        return value;
    }
}
```

In addition, create two functions with the same name in GaussDB(DWS), referring to these two methods.

```
CREATE FUNCTION java_dummy(INTEGER)
    RETURNS INTEGER
    AS 'Example.dummy'
LANGUAGE JAVA;

CREATE FUNCTION java_dummy(VARCHAR)
    RETURNS VARCHAR
    AS 'Example.dummy'
LANGUAGE JAVA;
```
GaussDB(DWS) invokes the function that matches the specified parameter type. The results of invoking the above two functions are as follows:

```
SELECT java_dummy(5);
 java_dummy
------------
 10
(1 row)

SELECT java_dummy('5');
 java_dummy
------------
 5
(1 row)
```
Note that GaussDB(DWS) may implicitly convert data types. Therefore, you are advised to specify the parameter type when invoking an overloaded function.

```
SELECT java_dummy(5::varchar);
 java_dummy
------------
 5
(1 row)
```
In this case, the specified parameter type is preferentially used for matching. If no Java method matches the specified parameter type, the system implicitly converts the parameter and searches for a Java method based on the conversion result.

```
SELECT java_dummy(5::INTEGER);
 java_dummy
------------
 10
(1 row)

DROP FUNCTION java_dummy(INTEGER);

SELECT java_dummy(5::INTEGER);
 java_dummy
------------
 5
(1 row)
```
Data types supporting implicit conversion are as follows:

To delete an overloaded function, specify the parameter type for the function. Otherwise, the function cannot be deleted.

```
DROP FUNCTION java_dummy(INTEGER);
```
pljava_vmoptions is a session-level GUC parameter used to set JVM startup parameters.

```
SET pljava_vmoptions='-Xmx64m -Xms2m -XX:MaxMetaspaceSize=8m';
```

pljava_vmoptions supports:

You are not advised to set any parameters that contain directories, because such settings may lead to unpredictable behavior.

If pljava_vmoptions is set to a value beyond the valid range, an error is reported during function invoking.

```
SET pljava_vmoptions=' illegal.option';
SET
SELECT java_dummy(5::int);
ERROR:  UDF Error:cannot use PL/Java before successfully completing its setup.Please check if your pljava_vmoption is set correctly,since we do not ignore illegal parameters.Or check the log for more messages.
```
FencedUDFMemoryLimit is a session-level GUC parameter used to specify the maximum virtual memory used by a single fenced UDF worker process started by a session.

```
SET FencedUDFMemoryLimit='512MB';
```

The value range of this parameter is (150MB, 1GB]. If the value is greater than 1GB, an error is reported immediately. If the value is less than or equal to 150MB, an error is reported during function invoking.
If an exception occurs in the JVM, PL/Java exports the JVM stack information at the time of the exception to the client.

PL/Java uses the standard Java Logger. Therefore, you can record logs as follows:

```
Logger.getAnonymousLogger().config("Time is " + new Date(System.currentTimeMillis()));
```
An initialized Java Logger class is set to the CONFIG level by default, which corresponds to the LOG level in GaussDB(DWS). Log messages generated by the Java Logger are all redirected to the GaussDB(DWS) backend, where they are written into server logs or displayed on the user interface. MPPDB server logs record information at the LOG, WARNING, and ERROR levels, while the SQL user interface displays logs at the WARNING and ERROR levels. The following table lists the mapping between Java Logger levels and GaussDB(DWS) log levels.
| java.util.logging.Level | GaussDB(DWS) Log Level |
|---|---|
| SEVERE | ERROR |
| WARNING | WARNING |
| CONFIG | LOG |
| INFO | INFO |
| FINE | DEBUG1 |
| FINER | DEBUG2 |
| FINEST | DEBUG3 |

You can change the Java Logger level. For example, if the Java Logger level is changed to SEVERE by the following Java code, WARNING-level messages (msg) will no longer be recorded in GaussDB(DWS) logs:

```
Logger log = Logger.getAnonymousLogger();
log.setLevel(Level.SEVERE);
log.log(Level.WARNING, msg);
```

In GaussDB(DWS), PL/Java is an untrusted language. Only users with the sysadmin permission can create PL/Java functions, and they can grant other users the permission to use those functions. For details, see Authorize permissions for functions.

In addition, PL/Java controls user access to the file system: Java methods are forbidden from reading most system files and from writing, deleting, or executing any system files.
PL/pgSQL is similar to PL/SQL of Oracle. It is a loadable procedural language.

The functions created using PL/pgSQL can be used anywhere built-in functions can be used. For example, you can create calculation functions with complex conditions and use them to define operators or index expressions.

SQL is the query language most databases use. It is portable and easy to learn, but every SQL statement must be executed independently by the database server.

Under that model, when a client application sends a query to the server, it must wait for the query to be processed, receive and process the result, and then perform some computation before sending more queries. If the client and server are on different machines, each round trip causes inter-process communication and increases network load.

PL/pgSQL groups a block of computation and a series of queries inside the database server. This adds procedural capabilities to SQL, makes SQL easier to use, and saves client/server communication costs.

PL/pgSQL can use all data types, operators, and functions in SQL.
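As a minimal sketch of this grouping (the table names are hypothetical), three statements that would otherwise require three client/server round trips run in a single server-side call:

```
CREATE OR REPLACE FUNCTION settle_order(p_order_id int) RETURNS void AS $$
BEGIN
    -- All three statements execute inside the server in one call.
    UPDATE orders_t SET status = 'settled' WHERE order_id = p_order_id;
    INSERT INTO order_log_t VALUES (p_order_id, current_timestamp);
    DELETE FROM pending_orders_t WHERE order_id = p_order_id;
END;
$$ LANGUAGE plpgsql;

-- One round trip instead of three:
-- CALL settle_order(42);
```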
For details about the PL/pgSQL syntax for creating functions, see CREATE FUNCTION. As mentioned earlier, PL/pgSQL is similar to Oracle PL/SQL and is a loadable procedural language. Its usage is similar to that of stored procedures, with one difference: stored procedures have no return values while functions do.

In GaussDB(DWS), business rules and logic are saved as stored procedures.

A stored procedure is a combination of SQL, PL/SQL, and Java statements, enabling business rule code to be moved from applications to databases and reused by multiple programs.

For details about how to create and invoke a stored procedure, see section "CREATE PROCEDURE" in SQL Syntax.

The functions and stored procedures created using PL/pgSQL in PL/pgSQL Functions are applicable to all the following sections.

A data type refers to a value set and an operation set defined on the value set. A GaussDB(DWS) database consists of tables, each of which is defined by its own columns. Each column corresponds to a data type, and GaussDB(DWS) uses functions appropriate to each data type to operate on the data. For example, GaussDB(DWS) can perform addition, subtraction, multiplication, and division on numeric data.
Certain data types in the database support implicit conversion, for example in assignments and in parameters passed to functions. For other data types, you can use the type conversion functions provided by GaussDB(DWS), such as the CAST function, to convert them explicitly.
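A minimal sketch of both paths (the table name is hypothetical):

```
CREATE TABLE conv_demo (n integer);
INSERT INTO conv_demo VALUES ('42');  -- implicit conversion in an assignment context
SELECT CAST('123' AS numeric);        -- explicit conversion with CAST
SELECT '2023-01-01'::date;            -- explicit conversion with the :: operator
DROP TABLE conv_demo;
```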
Table 1 lists common implicit data type conversions in GaussDB(DWS).

The valid value range of DATE supported by GaussDB(DWS) is from 4713 B.C. to 294276 A.D.
Table 1 Common implicit data type conversions

| Raw Data Type | Target Data Type | Remarks |
|---|---|---|
| CHAR | VARCHAR2 | - |
| CHAR | NUMBER | Raw data must consist of digits. |
| CHAR | DATE | Raw data cannot exceed the valid date range. |
| CHAR | RAW | - |
| CHAR | CLOB | - |
| VARCHAR2 | CHAR | - |
| VARCHAR2 | NUMBER | Raw data must consist of digits. |
| VARCHAR2 | DATE | Raw data cannot exceed the valid date range. |
| VARCHAR2 | CLOB | - |
| NUMBER | CHAR | - |
| NUMBER | VARCHAR2 | - |
| DATE | CHAR | - |
| DATE | VARCHAR2 | - |
| RAW | CHAR | - |
| RAW | VARCHAR2 | - |
| CLOB | CHAR | - |
| CLOB | VARCHAR2 | - |
| CLOB | NUMBER | Raw data must consist of digits. |
| INT4 | CHAR | - |
Before using an array, define an array type:

```
TYPE array_type IS VARRAY(size) OF data_type [NOT NULL];
```

Its parameters are as follows:
In GaussDB(DWS) 8.1.0 and earlier versions, the system does not verify array element length or out-of-bounds writes, because arrays grow automatically. This version adds related constraints for compatibility with Oracle databases. If out-of-bounds writes exist in legacy code, you can add varray_verification to the behavior_compat_options parameter to stay compatible with the previously unverified behavior.
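A hedged sketch of enabling the compatibility switch (the option name comes from the text above; check your version's documentation for the full list of values this parameter accepts):

```
-- Assumption: varray_verification is accepted as a behavior_compat_options value.
SET behavior_compat_options = 'varray_verification';
```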
Example:

```
-- Declare an array in a stored procedure.
CREATE OR REPLACE PROCEDURE array_proc
AS
    TYPE ARRAY_INTEGER IS VARRAY(1024) OF INTEGER;                    -- Define the array type.
    TYPE ARRAY_INTEGER_NOT_NULL IS VARRAY(1024) OF INTEGER NOT NULL;  -- Define a non-null array type.
    ARRINT ARRAY_INTEGER := ARRAY_INTEGER();                          -- Declare a variable of the array type.
BEGIN
    ARRINT.extend(10);
    FOR I IN 1..10 LOOP
        ARRINT(I) := I;
    END LOOP;
    DBMS_OUTPUT.PUT_LINE(ARRINT.COUNT);
    DBMS_OUTPUT.PUT_LINE(ARRINT(1));
    DBMS_OUTPUT.PUT_LINE(ARRINT(10));
    DBMS_OUTPUT.PUT_LINE(ARRINT(ARRINT.FIRST));
    DBMS_OUTPUT.PUT_LINE(ARRINT(ARRINT.last));
END;
/

-- Invoke the stored procedure.
CALL array_proc();
10
1
10
1
10

-- Delete the stored procedure.
DROP PROCEDURE array_proc;
```
In addition to the declaration and use of common arrays and non-null arrays shown above, arrays also support the declaration and use of rowtype arrays.

Example:

```
-- Use a rowtype array in a stored procedure.
CREATE TABLE tbl (a int, b int);
INSERT INTO tbl VALUES(1, 2),(2, 3),(3, 4);
CREATE OR REPLACE PROCEDURE array_proc
AS
    CURSOR all_tbl IS SELECT * FROM tbl ORDER BY a;
    TYPE tbl_array_type IS varray(50) OF tbl%rowtype; -- Define an array of the rowtype type. tbl can be any table.
    tbl_array tbl_array_type;
    tbl_item tbl%rowtype;
    inx1 int;
BEGIN
    tbl_array := tbl_array_type();
    inx1 := 0;
    FOR tbl_item IN all_tbl LOOP
        inx1 := inx1 + 1;
        tbl_array(inx1) := tbl_item;
    END LOOP;
    WHILE inx1 IS NOT NULL LOOP
        DBMS_OUTPUT.PUT_LINE('tbl_array(inx1).a=' || tbl_array(inx1).a || ' tbl_array(inx1).b=' || tbl_array(inx1).b);
        inx1 := tbl_array.PRIOR(inx1);
    END LOOP;
END;
/
```
The execution output is as follows:
```
call array_proc();
tbl_array(inx1).a=3 tbl_array(inx1).b=4
tbl_array(inx1).a=2 tbl_array(inx1).b=3
tbl_array(inx1).a=1 tbl_array(inx1).b=2
```
GaussDB(DWS) supports Oracle-style array functions. You can use the following functions to obtain array attributes or operate on array content.

COUNT returns the number of elements in the current array. Only initialized elements or elements extended by the EXTEND function are counted.

Use:

varray.COUNT or varray.COUNT()

Example:

```
-- Use the COUNT function on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3);
    DBMS_OUTPUT.PUT_LINE('v_varray.count=' || v_varray.count);
    v_varray.extend;
    DBMS_OUTPUT.PUT_LINE('v_varray.count=' || v_varray.count);
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.count=3
v_varray.count=4
```
The FIRST function returns the subscript of the first element, and the LAST function returns the subscript of the last element.

Use:

varray.FIRST or varray.FIRST()

varray.LAST or varray.LAST()

Example:

```
-- Use the FIRST and LAST functions on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3);
    DBMS_OUTPUT.PUT_LINE('v_varray.first=' || v_varray.first);
    DBMS_OUTPUT.PUT_LINE('v_varray.last=' || v_varray.last);
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.first=1
v_varray.last=3
```
The EXTEND function exists for compatibility with two Oracle database operations. In GaussDB(DWS), arrays grow automatically and EXTEND is not necessary; newly written stored procedures do not need it.

The EXTEND function extends an array and can be invoked in either of the following ways:

With one integer input parameter, EXTEND extends the array by the specified length. After EXTEND executes, the values of the COUNT and LAST functions change accordingly.

Use:

varray.EXTEND(size)

By default, varray.EXTEND appends one element, which is equivalent to varray.EXTEND(1).

With two integer input parameters, the first parameter indicates the length to extend by, and the second indicates that each new element takes the value of the element at subscript index.

Use:

varray.EXTEND(size, index)

Example:

```
-- Use the EXTEND function on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3);
    v_varray.extend(3);
    DBMS_OUTPUT.PUT_LINE('v_varray.count=' || v_varray.count);
    v_varray.extend(2,3);
    DBMS_OUTPUT.PUT_LINE('v_varray.count=' || v_varray.count);
    DBMS_OUTPUT.PUT_LINE('v_varray(7)=' || v_varray(7));
    DBMS_OUTPUT.PUT_LINE('v_varray(8)=' || v_varray(8));
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.count=6
v_varray.count=8
v_varray(7)=3
v_varray(8)=3
```
The NEXT and PRIOR functions are used for cyclic array traversal. NEXT returns the subscript of the next element based on the input parameter index, or NULL if the maximum subscript has been reached. PRIOR returns the subscript of the previous element, or NULL if the minimum subscript has been reached.

Use:

varray.NEXT(index)

varray.PRIOR(index)

Example:

```
-- Use the NEXT and PRIOR functions on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
    i int;
BEGIN
    v_varray := varray_type(1, 2, 3);

    i := v_varray.COUNT;
    WHILE i IS NOT NULL LOOP
        DBMS_OUTPUT.PUT_LINE('test prior v_varray('||i||')=' || v_varray(i));
        i := v_varray.PRIOR(i);
    END LOOP;

    i := 1;
    WHILE i IS NOT NULL LOOP
        DBMS_OUTPUT.PUT_LINE('test next v_varray('||i||')=' || v_varray(i));
        i := v_varray.NEXT(i);
    END LOOP;
END;
/
```
The execution output is as follows:
```
call test_varray();
test prior v_varray(3)=3
test prior v_varray(2)=2
test prior v_varray(1)=1
test next v_varray(1)=1
test next v_varray(2)=2
test next v_varray(3)=3
```
The EXISTS function determines whether an array subscript exists.

Use:

varray.EXISTS(index)

Example:

```
-- Use the EXISTS function on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3);
    IF v_varray.EXISTS(1) THEN
        DBMS_OUTPUT.PUT_LINE('v_varray.EXISTS(1)');
    END IF;
    IF NOT v_varray.EXISTS(10) THEN
        DBMS_OUTPUT.PUT_LINE('NOT v_varray.EXISTS(10)');
    END IF;
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.EXISTS(1)
NOT v_varray.EXISTS(10)
```
The TRIM function deletes a specified number of elements from the end of an array.

Use:

varray.TRIM(size)

varray.TRIM is equivalent to varray.TRIM(1), because the default input parameter is 1.

Example:

```
-- Use the TRIM function on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3, 4, 5);
    v_varray.trim(3);
    DBMS_OUTPUT.PUT_LINE('v_varray.count:' || v_varray.count);
    v_varray.trim;
    DBMS_OUTPUT.PUT_LINE('v_varray.count:' || v_varray.count);
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.count:2
v_varray.count:1
```
The DELETE function removes all elements from an array.

Use:

varray.DELETE or varray.DELETE()

Example:

```
-- Use the DELETE function on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3, 4, 5);
    v_varray.delete;
    DBMS_OUTPUT.PUT_LINE('v_varray.count:' || v_varray.count);
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.count:0
```
The LIMIT function returns the maximum allowed length of an array.

Use:

varray.LIMIT or varray.LIMIT()

Example:

```
-- Use the LIMIT function on an array in a stored procedure.
CREATE OR REPLACE PROCEDURE test_varray
AS
    TYPE varray_type IS VARRAY(20) OF INT;
    v_varray varray_type;
BEGIN
    v_varray := varray_type(1, 2, 3, 4, 5);
    DBMS_OUTPUT.PUT_LINE('v_varray.limit:' || v_varray.limit);
END;
/
```
The execution output is as follows:
```
call test_varray();
v_varray.limit:20
```
Perform the following operations to create a record variable:

Define a record type and use this type to declare a variable.

For the syntax of the record type, see Figure 1.

The syntax is described as follows:

In GaussDB(DWS), the table used in the following example is defined as follows:

```
CREATE TABLE emp_rec
(
    empno    numeric(4,0),
    ename    character varying(10),
    job      character varying(9),
    mgr      numeric(4,0),
    hiredate timestamp(0) without time zone,
    sal      numeric(7,2),
    comm     numeric(7,2),
    deptno   numeric(2,0)
)
with (orientation = column, compression=middle)
distribute by hash (sal);

\d emp_rec
              Table "public.emp_rec"
  Column  |              Type              | Modifiers
----------+--------------------------------+-----------
 empno    | numeric(4,0)                   |
 ename    | character varying(10)          |
 job      | character varying(9)           |
 mgr      | numeric(4,0)                   |
 hiredate | timestamp(0) without time zone |
 sal      | numeric(7,2)                   |
 comm     | numeric(7,2)                   |
 deptno   | numeric(2,0)                   |

-- Use record types in a function.
CREATE OR REPLACE FUNCTION regress_record(p_w VARCHAR2)
RETURNS VARCHAR2 AS $$
DECLARE
    -- Declare a record type.
    type rec_type is record (name varchar2(100), epno int);
    employer rec_type;

    -- Use %type to declare the record type.
    type rec_type1 is record (name emp_rec.ename%type, epno int not null := 10);
    employer1 rec_type1;

    -- Declare a record type with default values.
    type rec_type2 is record (
        name varchar2 not null := 'SCOTT',
        epno int not null := 10);
    employer2 rec_type2;
    CURSOR C1 IS select ename, empno from emp_rec order by 1 limit 1;
BEGIN
    -- Assign values to the members of a record variable.
    employer.name := 'WARD';
    employer.epno := 18;
    raise info 'employer name: % , epno:%', employer.name, employer.epno;

    -- Assign the value of a record variable to another variable.
    employer1 := employer;
    raise info 'employer1 name: % , epno: %', employer1.name, employer1.epno;

    -- Assign the NULL value to a record variable.
    employer1 := NULL;
    raise info 'employer1 name: % , epno: %', employer1.name, employer1.epno;

    -- Obtain the default values of a record variable.
    raise info 'employer2 name: % ,epno: %', employer2.name, employer2.epno;

    -- Use a record variable in a FOR loop.
    for employer in select ename, empno from emp_rec order by 1 limit 1
    loop
        raise info 'employer name: % , epno: %', employer.name, employer.epno;
    end loop;

    -- Use a record variable in a SELECT INTO statement.
    select ename, empno into employer2 from emp_rec order by 1 limit 1;
    raise info 'employer name: % , epno: %', employer2.name, employer2.epno;

    -- Use a record variable in a cursor.
    OPEN C1;
    FETCH C1 INTO employer2;
    raise info 'employer name: % , epno: %', employer2.name, employer2.epno;
    CLOSE C1;
    RETURN employer.name;
END;
$$
LANGUAGE plpgsql;

-- Invoke the function.
CALL regress_record('abc');
INFO:  employer name: WARD , epno:18
INFO:  employer1 name: WARD , epno: 18
INFO:  employer1 name: <NULL> , epno: <NULL>
INFO:  employer2 name: SCOTT ,epno: 10

-- Delete the function.
DROP FUNCTION regress_record;
```
A PL/SQL block can contain a sub-block, which can be placed in any section. The following shows the structure of a PL/SQL block:

```
[DECLARE
    -- Declaration section (optional if no variable needs to be declared)
]
BEGIN
    -- Execution section
[EXCEPTION
    -- Exception handling section (optional)
]
END;
/
```

You are not allowed to use consecutive tabs in a PL/SQL block, because they may cause an exception when the block is executed with the -r parameter of the gsql tool.
PL/SQL blocks are classified into the following types:

An anonymous block applies to a script that is executed infrequently or to a one-off activity. An anonymous block is executed in a session and is not stored.

Figure 1 shows the syntax diagrams for an anonymous block.

Details about the syntax diagram are as follows:

The terminator "/" must be written in an independent row.

The following lists basic anonymous block programs:

```
-- Null statement block:
BEGIN
    NULL;
END;
/

-- Print information to the console:
BEGIN
    dbms_output.put_line('hello world!');
END;
/

-- Print variable contents to the console:
DECLARE
    my_var VARCHAR2(30);
BEGIN
    my_var := 'world';
    dbms_output.put_line('hello' || my_var);
END;
/
```

A subprogram stores stored procedures, functions, operators, and advanced packages. A subprogram created in a database can be called by other programs.
This section describes the declaration of variables in PL/SQL and the scope of these variables in code.

For details about the variable declaration syntax, see Figure 1.

The above syntax diagram is explained as follows:

Example:

```
DECLARE
    emp_id INTEGER := 7788; -- Define a variable and assign a value to it.
BEGIN
    emp_id := 5 * 7784;     -- Assign a value to the variable.
END;
/
```
In addition to the declaration of basic variable types, %TYPE and %ROWTYPE can be used to declare variables related to table columns or table structures.

%TYPE declares a variable to be of the same data type as a previously declared variable (for example, a column in a table). For example, if you want to define a my_name variable whose data type is the same as that of the firstname column in the employee table, you can define the variable as follows:

```
my_name employee.firstname%TYPE
```

In this way, you can declare my_name without knowing the data type of firstname in employee, and the data type of my_name is automatically updated when the data type of firstname changes.

%ROWTYPE declares a data type describing a set of data. It stores a row of table data or the results fetched from a cursor. For example, if you want to define a set of data with the same column names and column data types as the employee table, you can define the data as follows:

```
my_employee employee%ROWTYPE
```

If multiple CNs are used, the %ROWTYPE and %TYPE attributes of temporary tables cannot be declared in a stored procedure, because a temporary table is valid only in the current session and is invisible to other CNs in the compilation phase. In this case, a message is displayed indicating that the temporary table does not exist.

The scope of a variable indicates the accessibility and availability of the variable in a code block. In other words, a variable takes effect only within its scope.

Example:

```
DECLARE
    emp_id    INTEGER := 7788; -- Define a variable and assign a value to it.
    outer_var INTEGER := 6688; -- Define a variable and assign a value to it.
BEGIN
    DECLARE
        emp_id    INTEGER := 7799; -- Define a variable and assign a value to it.
        inner_var INTEGER := 6688; -- Define a variable and assign a value to it.
    BEGIN
        dbms_output.put_line('inner emp_id =' || emp_id); -- Display the value as 7799.
        dbms_output.put_line('outer_var =' || outer_var); -- Reference a variable of the outer block.
    END;
    dbms_output.put_line('outer emp_id =' || emp_id);     -- Display the value as 7788.
END;
/
```
Figure 1 shows the syntax diagram for assigning a value to a variable.

The above syntax diagram is explained as follows:

```
DECLARE
    emp_id INTEGER := 7788; -- Assignment
BEGIN
    emp_id := 5;            -- Assignment
    emp_id := 5 * 7784;
END;
/
```
Figure 1 shows the syntax diagram for calling a clause.

The above syntax diagram is explained as follows:

```
-- Create the stored procedure proc_staffs:
CREATE OR REPLACE PROCEDURE proc_staffs
(
    section      NUMBER(6),
    salary_sum   out NUMBER(8,2),
    staffs_count out INTEGER
)
IS
BEGIN
    SELECT sum(salary), count(*) INTO salary_sum, staffs_count FROM staffs where section_id = section;
END;
/

-- Create the stored procedure proc_return:
CREATE OR REPLACE PROCEDURE proc_return
AS
    v_num NUMBER(8,2);
    v_sum INTEGER;
BEGIN
    proc_staffs(30, v_sum, v_num);  -- Invoke statement
    dbms_output.put_line(v_sum || '#' || v_num);
    RETURN;                         -- Return statement
END;
/

-- Invoke the stored procedure proc_return:
CALL proc_return();

-- Delete the stored procedures:
DROP PROCEDURE proc_staffs;
DROP PROCEDURE proc_return;

-- Create the function func_return:
CREATE OR REPLACE FUNCTION func_return returns void
language plpgsql
AS $$
DECLARE
    v_num INTEGER := 1;
BEGIN
    dbms_output.put_line(v_num);
    RETURN;                         -- Return statement
END $$;

-- Invoke the function func_return:
CALL func_return();
1

-- Delete the function:
DROP FUNCTION func_return;
```
You can perform dynamic queries using EXECUTE IMMEDIATE or OPEN FOR in GaussDB(DWS). EXECUTE IMMEDIATE dynamically executes SELECT statements, and OPEN FOR combines dynamic queries with cursors. If you need to store query results in a data set, use OPEN FOR.

Figure 1 shows the syntax diagram.

Figure 2 shows the syntax diagram for using_clause.

The above syntax diagram is explained as follows:

Example:

```
-- Retrieve values from dynamic statements (INTO clause).
DECLARE
    staff_count VARCHAR2(20);
BEGIN
    EXECUTE IMMEDIATE 'select count(*) from staffs'
        INTO staff_count;
    dbms_output.put_line(staff_count);
END;
/

-- Pass and retrieve values (the INTO clause is used before the USING clause).
CREATE OR REPLACE PROCEDURE dynamic_proc
AS
    staff_id   NUMBER(6) := 200;
    first_name VARCHAR2(20);
    salary     NUMBER(8,2);
BEGIN
    EXECUTE IMMEDIATE 'select first_name, salary from staffs where staff_id = :1'
        INTO first_name, salary
        USING IN staff_id;
    dbms_output.put_line(first_name || ' ' || salary);
END;
/

-- Invoke the stored procedure.
CALL dynamic_proc();

-- Delete the stored procedure.
DROP PROCEDURE dynamic_proc;
```
Dynamic query statements can be executed by using OPEN FOR to open a dynamic cursor.

For details about the syntax, see Figure 3.

Parameter description:

For the use of cursors, see Cursors.

Example:

```
DECLARE
    name         VARCHAR2(20);
    phone_number VARCHAR2(20);
    salary       NUMBER(8,2);
    sqlstr       VARCHAR2(1024);

    TYPE app_ref_cur_type IS REF CURSOR;  -- Define the cursor type.
    my_cur app_ref_cur_type;              -- Define the cursor variable.
BEGIN
    sqlstr := 'select first_name,phone_number,salary from staffs
        where section_id = :1';
    OPEN my_cur FOR sqlstr USING '30';    -- Open the cursor. USING is optional.
    FETCH my_cur INTO name, phone_number, salary;  -- Retrieve the data.
    WHILE my_cur%FOUND LOOP
        dbms_output.put_line(name || '#' || phone_number || '#' || salary);
        FETCH my_cur INTO name, phone_number, salary;
    END LOOP;
    CLOSE my_cur;                         -- Close the cursor.
END;
/
```
Figure 1 shows the syntax diagram.

Figure 2 shows the syntax diagram for using_clause.

The above syntax diagram is explained as follows:

USING IN bind_argument specifies the variable that passes values to the dynamic SQL statement. It is used when dynamic_noselect_string contains a placeholder; the placeholder is replaced by the corresponding bind_argument when the dynamic SQL statement is executed. Note that bind_argument can only be a value, variable, or expression; it cannot be a database object such as a table name, column name, or data type. If a stored procedure needs to pass database objects into a dynamic SQL statement (generally a DDL statement), use double vertical bars (||) to concatenate the statement with the database object. In addition, a dynamic PL/SQL block allows duplicate placeholders; identical placeholders all correspond to the same single bind_argument.

```
-- Create a table:
CREATE TABLE sections_t1
(
    section      NUMBER(4),
    section_name VARCHAR2(30),
    manager_id   NUMBER(6),
    place_id     NUMBER(4)
)
DISTRIBUTE BY hash(manager_id);

-- Declare variables:
DECLARE
    section      NUMBER(4)    := 280;
    section_name VARCHAR2(30) := 'Info support';
    manager_id   NUMBER(6)    := 103;
    place_id     NUMBER(4)    := 1400;
    new_colname  VARCHAR2(10) := 'sec_name';
BEGIN
    -- Execute the query:
    EXECUTE IMMEDIATE 'insert into sections_t1 values(:1, :2, :3, :4)'
        USING section, section_name, manager_id, place_id;
    -- Execute the query (duplicate placeholders):
    EXECUTE IMMEDIATE 'insert into sections_t1 values(:1, :2, :3, :1)'
        USING section, section_name, manager_id;
    -- Run the ALTER statement. (You are advised to use double vertical bars (||) to concatenate the dynamic DDL statement with a database object.)
    EXECUTE IMMEDIATE 'alter table sections_t1 rename section_name to ' || new_colname;
END;
/

-- Query data:
SELECT * FROM sections_t1;

-- Delete the table:
DROP TABLE sections_t1;
```
This section describes how to dynamically call stored procedures. You must use anonymous statement blocks to wrap the stored procedure or statement block, and append IN and OUT to the EXECUTE IMMEDIATE ... USING statement to pass input and output parameters.

Figure 1 shows the syntax diagram.

Figure 2 shows the syntax diagram for using_clause.

The above syntax diagram is explained as follows:

```
-- Create the stored procedure proc_add:
CREATE OR REPLACE PROCEDURE proc_add
(
    param1 in  INTEGER,
    param2 out INTEGER,
    param3 in  INTEGER
)
AS
BEGIN
    param2 := param1 + param3;
END;
/

DECLARE
    input1    INTEGER := 1;
    input2    INTEGER := 2;
    statement VARCHAR2(200);
    param2    INTEGER;
BEGIN
    -- Declare the call statement:
    statement := 'call proc_add(:col_1, :col_2, :col_3)';
    -- Execute the statement:
    EXECUTE IMMEDIATE statement
        USING IN input1, OUT param2, IN input2;
    dbms_output.put_line('result is: ' || to_char(param2));
END;
/

-- Delete the stored procedure.
DROP PROCEDURE proc_add;
```
This section describes how to execute anonymous blocks in dynamic statements. Append IN and OUT to the EXECUTE IMMEDIATE ... USING statement to pass input and output parameters.

Figure 1 shows the syntax diagram.

Figure 2 shows the syntax diagram for using_clause.

The above syntax diagram is explained as follows:

```
-- Create the stored procedure dynamic_proc:
CREATE OR REPLACE PROCEDURE dynamic_proc
AS
    staff_id   NUMBER(6) := 200;
    first_name VARCHAR2(20);
    salary     NUMBER(8,2);
BEGIN
    -- Execute the anonymous block:
    EXECUTE IMMEDIATE 'begin select first_name, salary into :first_name, :salary from staffs where staff_id = :dno; end;'
        USING OUT first_name, OUT salary, IN staff_id;
    dbms_output.put_line(first_name || ' ' || salary);
END;
/

-- Invoke the stored procedure.
CALL dynamic_proc();

-- Delete the stored procedure.
DROP PROCEDURE dynamic_proc;
```
In GaussDB(DWS), data can be returned in any of the following ways: RETURN, RETURN NEXT, or RETURN QUERY. RETURN NEXT and RETURN QUERY are used only for functions and cannot be used for stored procedures.

Figure 1 shows the syntax diagram for a return statement.

The syntax details are as follows:

This statement returns control from a stored procedure or function to a caller.

See Examples for call statement examples.

When creating a function, specify SETOF datatype for the return values.

return_next_clause::=

return_query_clause::=

The syntax details are as follows:

If a function needs to return a result set, use RETURN NEXT or RETURN QUERY to add results to the result set, and then continue to execute the next statement of the function. As the RETURN NEXT or RETURN QUERY statement is executed repeatedly, more and more results are added to the result set. After the function is executed, all results are returned.

RETURN NEXT can be used for scalar and compound data types.

RETURN QUERY has a variant, RETURN QUERY EXECUTE. You can add dynamic queries and pass parameters to them by using USING.

```
CREATE TABLE t1(a int);
INSERT INTO t1 VALUES(1),(10);

-- RETURN NEXT
CREATE OR REPLACE FUNCTION fun_for_return_next() RETURNS SETOF t1 AS $$
DECLARE
    r t1%ROWTYPE;
BEGIN
    FOR r IN select * from t1
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE PLPGSQL;

call fun_for_return_next();
 a
---
 1
 10
(2 rows)

-- RETURN QUERY
CREATE OR REPLACE FUNCTION fun_for_return_query() RETURNS SETOF t1 AS $$
BEGIN
    RETURN QUERY select * from t1;
END;
$$
language plpgsql;

call fun_for_return_query();
 a
---
 1
 10
(2 rows)
```
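As a minimal sketch of the RETURN QUERY EXECUTE variant mentioned above (it reuses table t1 from the example; the function name is illustrative):

```
CREATE OR REPLACE FUNCTION fun_for_return_query_execute(min_a int) RETURNS SETOF t1 AS $$
BEGIN
    -- The $1 placeholder in the dynamic query is bound to min_a through USING.
    RETURN QUERY EXECUTE 'select * from t1 where a >= $1' USING min_a;
END;
$$ LANGUAGE plpgsql;

call fun_for_return_query_execute(5);
```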
Conditional statements decide whether given conditions are met. Operations are executed based on the decisions made.

GaussDB(DWS) supports five usages of IF:

IF_THEN is the simplest form of IF. If the condition is true, the statements are executed; if it is false, they are skipped.

Example

```
IF v_user_id <> 0 THEN
    UPDATE users SET email = v_email WHERE user_id = v_user_id;
END IF;
```

IF-THEN-ELSE statements add an ELSE branch, which is executed if the condition is false.

Example

```
IF parentid IS NULL OR parentid = ''
THEN
    RETURN;
ELSE
    hp_true_filename(parentid); -- Call the stored procedure.
END IF;
```

IF statements can be nested in the following way:

```
IF sex = 'm' THEN
    pretty_sex := 'man';
ELSE
    IF sex = 'f' THEN
        pretty_sex := 'woman';
    END IF;
END IF;
```

Actually, this is an IF statement nested in the ELSE part of another IF statement. Therefore, an END IF statement is required for each nested IF statement, plus another END IF to close the parent IF-ELSE. To test multiple options, use the following form:

Example

```
IF number_tmp = 0 THEN
    result := 'zero';
ELSIF number_tmp > 0 THEN
    result := 'positive';
ELSIF number_tmp < 0 THEN
    result := 'negative';
ELSE
    result := 'NULL';
END IF;
```

The following stored procedure combines these branches:

```
CREATE OR REPLACE PROCEDURE proc_control_structure(i in integer)
AS
BEGIN
    IF i > 0 THEN
        raise info 'i:% is greater than 0.', i;
    ELSIF i < 0 THEN
        raise info 'i:% is smaller than 0.', i;
    ELSE
        raise info 'i:% is equal to 0.', i;
    END IF;
    RETURN;
END;
/

CALL proc_control_structure(3);

-- Delete the stored procedure:
DROP PROCEDURE proc_control_structure;
```
The syntax diagram is as follows.

Example:

```
CREATE OR REPLACE PROCEDURE proc_loop(i in integer, count out integer)
AS
BEGIN
    count := 0;
    LOOP
        IF count > i THEN
            raise info 'count is %.', count;
            EXIT;
        ELSE
            count := count + 1;
        END IF;
    END LOOP;
END;
/

CALL proc_loop(10, 5);
```

LOOP must be used together with EXIT; otherwise, an infinite loop occurs.
The syntax diagram is as follows.

If the conditional expression is true, the statements in the WHILE body are executed repeatedly, and the condition is evaluated each time the loop body finishes.

Examples

```
CREATE TABLE integertable(c1 integer) DISTRIBUTE BY hash(c1);
CREATE OR REPLACE PROCEDURE proc_while_loop(maxval in integer)
AS
DECLARE
    i int := 1;
BEGIN
    WHILE i < maxval LOOP
        INSERT INTO integertable VALUES(i);
        i := i + 1;
    END LOOP;
END;
/

-- Invoke the stored procedure:
CALL proc_while_loop(10);

-- Delete the stored procedure and table:
DROP PROCEDURE proc_while_loop;
DROP TABLE integertable;
```
The syntax diagram is as follows.

Example:

```
-- Loop from 0 to 5:
CREATE OR REPLACE PROCEDURE proc_for_loop()
AS
BEGIN
    FOR I IN 0..5 LOOP
        DBMS_OUTPUT.PUT_LINE('It is ' || to_char(I) || ' time;');
    END LOOP;
END;
/

-- Invoke the stored procedure:
CALL proc_for_loop();

-- Delete the stored procedure:
DROP PROCEDURE proc_for_loop;
```
The syntax diagram is as follows.

The variable target is automatically defined, its type is the same as that of the query result, and it is valid only within the loop. The target value is the query result.

Example:

```
-- Display the query result from the loop:
CREATE OR REPLACE PROCEDURE proc_for_loop_query()
AS
    record VARCHAR2(50);
BEGIN
    FOR record IN SELECT spcname FROM pg_tablespace LOOP
        dbms_output.put_line(record);
    END LOOP;
END;
/

-- Invoke the stored procedure.
CALL proc_for_loop_query();

-- Delete the stored procedure.
DROP PROCEDURE proc_for_loop_query;
```
The syntax diagram is as follows.
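In text form, a sketch of the standard FORALL syntax:

```sql
FORALL index IN low_bound..upper_bound
    dml_statement;
```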
The variable index is automatically defined as an integer and exists only in this loop. Its value ranges from low_bound to upper_bound.

Example:

```sql
CREATE TABLE hdfs_t1 (
    title NUMBER(6),
    did VARCHAR2(20),
    data_peroid VARCHAR2(25),
    kind VARCHAR2(25),
    interval VARCHAR2(20),
    time DATE,
    isModified VARCHAR2(10)
)
DISTRIBUTE BY hash(did);

INSERT INTO hdfs_t1 VALUES( 8, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'), 'SH_CLERK' );

CREATE OR REPLACE PROCEDURE proc_forall()
AS
BEGIN
    FORALL i IN 100..120
        insert into hdfs_t1(title) values(i);
END;
/

-- Invoke the function:
CALL proc_forall();

-- Query the invocation result of the stored procedure:
SELECT * FROM hdfs_t1 WHERE title BETWEEN 100 AND 120;

-- Delete the stored procedure and table:
DROP PROCEDURE proc_forall;
DROP TABLE hdfs_t1;
```
Figure 1 shows the syntax diagram.
Figure 2 shows the syntax diagram for when_clause.
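In text form, a sketch of the standard CASE statement syntax:

```sql
CASE case_expression
    WHEN when_expression [, when_expression ...] THEN
        statements
    [ ELSE
        statements ]
END CASE;
```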
Parameter description: case_expression specifies the selector expression, and each when_expression specifies a value compared with the selector. If no WHEN branch matches, the ELSE branch is executed.

Example:

```sql
CREATE OR REPLACE PROCEDURE proc_case_branch(pi_result in integer, pi_return out integer)
AS
BEGIN
    CASE pi_result
        WHEN 1 THEN
            pi_return := 111;
        WHEN 2 THEN
            pi_return := 222;
        WHEN 3 THEN
            pi_return := 333;
        WHEN 6 THEN
            pi_return := 444;
        WHEN 7 THEN
            pi_return := 555;
        WHEN 8 THEN
            pi_return := 666;
        WHEN 9 THEN
            pi_return := 777;
        WHEN 10 THEN
            pi_return := 888;
        ELSE
            pi_return := 999;
    END CASE;
    raise info 'pi_return : %',pi_return ;
END;
/

CALL proc_case_branch(3,0);

-- Delete the stored procedure:
DROP PROCEDURE proc_case_branch;
```
In PL/SQL programs, NULL statements indicate that "nothing should be done"; they act as placeholders. They give meaning to otherwise empty branches and improve program readability.

The following shows example use of NULL statements.

```sql
DECLARE
    ...
BEGIN
    ...
    IF v_num IS NULL THEN
        NULL; -- No data needs to be processed.
    END IF;
END;
/
```
```sql
[<<label>>]
[DECLARE
    declarations]
BEGIN
    statements
EXCEPTION
    WHEN condition [OR condition ...] THEN
        handler_statements
    [WHEN condition [OR condition ...] THEN
        handler_statements
    ...]
END;
```
If no error occurs, this form of block simply executes all the statements, and control then passes to the next statement after END. If an error occurs within the statements, further processing of the statements is abandoned, and control passes to the EXCEPTION list. The list is searched for the first condition matching the error that occurred. If a match is found, the corresponding handler_statements are executed, and control then passes to the next statement after END. If no match is found, the error propagates out as though the EXCEPTION clause were not there at all: it can be caught by an enclosing block with an EXCEPTION clause; if there is none, processing of the function is aborted.

The condition names can be any of those shown in GaussDB(DWS) Error Code Reference. The special condition name OTHERS matches every error type except QUERY_CANCELED.

If a new error occurs within the selected handler_statements, it cannot be caught by this EXCEPTION clause, but is propagated out. A surrounding EXCEPTION clause could catch it.

When an error is caught by an EXCEPTION clause, the local variables of the PL/SQL function remain as they were when the error occurred, but all changes to persistent database state within the block are rolled back.
Example:

```sql
CREATE TABLE mytab(id INT, firstname VARCHAR(20), lastname VARCHAR(20)) DISTRIBUTE BY hash(id);

INSERT INTO mytab(firstname, lastname) VALUES('Tom', 'Jones');

CREATE FUNCTION fun_exp() RETURNS INT
AS $$
DECLARE
    x INT :=0;
    y INT;
BEGIN
    UPDATE mytab SET firstname = 'Joe' WHERE lastname = 'Jones';
    x := x + 1;
    y := x / 0;
EXCEPTION
    WHEN division_by_zero THEN
        RAISE NOTICE 'caught division_by_zero';
        RETURN x;
END;$$
LANGUAGE plpgsql;

call fun_exp();
NOTICE:  caught division_by_zero
 fun_exp
---------
       1
(1 row)

select * from mytab;
 id | firstname | lastname
----+-----------+----------
    | Tom       | Jones
(1 row)

DROP FUNCTION fun_exp();
DROP TABLE mytab;
```

When control reaches the assignment to y, it fails with a division_by_zero error. This is caught by the EXCEPTION clause. The value returned in the RETURN statement is the incremented value of x.

A block containing an EXCEPTION clause is more expensive to enter and exit than a block without one. Therefore, do not use EXCEPTION without need.
In the following scenarios, an exception cannot be caught, and the entire transaction rolls back: the threads of the nodes participating in the stored procedure exit abnormally due to a node failure or network fault, or the source data is inconsistent with the table structure of the target table during a COPY FROM operation.

Example: Exceptions with UPDATE/INSERT

This example uses exception handling to perform either UPDATE or INSERT, as appropriate:

```sql
CREATE TABLE db (a INT, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- Try updating the key:
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert the key. If someone else inserts the same key concurrently, we could get a unique-key failure.
        BEGIN
            INSERT INTO db(a,b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Loop to try the UPDATE again.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

SELECT merge_db(1, 'david');
SELECT merge_db(1, 'dennis');

-- Delete the FUNCTION and TABLE:
DROP FUNCTION merge_db;
DROP TABLE db;
```
The GOTO statement unconditionally transfers control from the current statement to a labeled statement. Because the GOTO statement changes the execution logic, use it only when necessary; alternatively, use the EXCEPTION statement to handle issues in special scenarios. The label referenced by a GOTO statement must be unique.
label declaration ::=

goto statement ::=
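In text form, a sketch of the two syntax elements:

```sql
<<label>>      -- label declaration
GOTO label;    -- goto statement
```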
```sql
CREATE OR REPLACE PROCEDURE GOTO_test()
AS
DECLARE
    v1 int;
BEGIN
    v1 := 0;
    LOOP
        EXIT WHEN v1 > 100;
        v1 := v1 + 2;
        IF v1 > 25 THEN
            GOTO pos1;
        END IF;
    END LOOP;
<<pos1>>
    v1 := v1 + 10;
    raise info 'v1 is %. ', v1;
END;
/

call GOTO_test();
DROP PROCEDURE GOTO_test();
```

The GOTO statement has the following constraints, illustrated by the examples below:
A target label must be unique within its scope. The following is invalid because pos1 is declared twice:

```sql
BEGIN
    GOTO pos1;
    <<pos1>>
    SELECT * FROM ...
    <<pos1>>
    UPDATE t1 SET ...
END;
```
A GOTO statement cannot jump from outside into an IF statement, a LOOP statement, or a sub-block. The following is invalid:

```sql
BEGIN
    GOTO pos1;
    IF valid THEN
        <<pos1>>
        SELECT * FROM ...
    END IF;
END;
```
A GOTO statement cannot jump from one IF branch into another branch. The following is invalid:

```sql
BEGIN
    IF valid THEN
        GOTO pos1;
        SELECT * FROM ...
    ELSE
        <<pos1>>
        UPDATE t1 SET ...
    END IF;
END;
```
A GOTO statement cannot jump from an outer block into a sub-block. The following is invalid:

```sql
BEGIN
    GOTO pos1;
    BEGIN
        <<pos1>>
        UPDATE t1 SET ...
    END;
END;
```
A GOTO statement cannot jump from an exception handler back into the current BEGIN-END block. The following is invalid:

```sql
BEGIN
    <<pos1>>
    UPDATE t1 SET ...
EXCEPTION
    WHEN condition THEN
        GOTO pos1;
END;
```
A label must be followed by an executable statement; add a NULL statement if necessary:

```sql
DECLARE
    done BOOLEAN;
BEGIN
    FOR i IN 1..50 LOOP
        IF done THEN
            GOTO end_loop;
        END IF;
        <<end_loop>>  -- not allowed unless an executable statement follows
        NULL;         -- add a NULL statement to avoid an error
    END LOOP;         -- raises an error without the preceding NULL
END;
/
```
GaussDB(DWS) provides multiple lock modes to control concurrent access to table data. These modes are used when Multi-Version Concurrency Control (MVCC) cannot provide the expected behavior. Likewise, most GaussDB(DWS) commands automatically acquire appropriate locks to ensure that referenced tables are not deleted or modified in an incompatible manner during command execution. For example, ALTER TABLE cannot be executed on a table that is being operated on concurrently.
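As a minimal illustration (not from the original text; the table name is a placeholder), a session can also take a table-level lock explicitly when MVCC alone is not sufficient:

```sql
START TRANSACTION;
-- Block concurrent modifications to the whole table until the transaction ends:
LOCK TABLE t1 IN ACCESS EXCLUSIVE MODE;
-- ... perform operations that must not run concurrently ...
COMMIT;
```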
GaussDB(DWS) provides cursors as a data buffer for storing the execution results of SQL statements. Each cursor region has a name. Using SQL statements, you can fetch records one by one from a cursor and assign them to host variables for further processing by the host language.

Cursor operations include cursor definition, open, fetch, and close operations.

For the complete example of cursor operations, see Explicit Cursor.

To process SQL statements, the stored procedure process assigns a memory segment to store context associations. A cursor is a handle or pointer to a context area. With cursors, stored procedures can control changes in context areas.

If JDBC is used to call a stored procedure whose returned value is a cursor, the returned cursor is not available.

Cursors are classified into explicit cursors and implicit cursors. Table 1 shows the usage conditions of explicit and implicit cursors for different SQL statements.

An explicit cursor is used to process query statements, particularly when the query results contain multiple records.

An explicit cursor performs the following six PL/SQL steps to process query statements: defining a static cursor, defining a dynamic cursor, opening the static cursor, opening the dynamic cursor, fetching cursor data, and closing the cursor.
+Figure 1 shows the syntax diagram for defining a static cursor.
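In text form, a sketch of the static cursor definition (standard form; names are placeholders):

```sql
CURSOR cursor_name [(parameter_name datatype [, ...])] IS select_statement;
```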
Parameter description: a cursor parameter is declared as parameter_name datatype, where parameter_name is the parameter name and datatype is its data type.
The system automatically determines whether the cursor can be used for backward fetches based on the execution plan.
Define a dynamic cursor: define a ref cursor, which can be opened dynamically for a set of SQL statements. Define the type of the ref cursor first and then a cursor variable of that cursor type. The SELECT statement is bound dynamically through OPEN FOR when the cursor is opened.
Figure 2 and Figure 3 show the syntax diagrams for defining a dynamic cursor.

GaussDB(DWS) supports the dynamic cursor type sys_refcursor. A function or stored procedure can use a sys_refcursor parameter to pass a cursor result set in or out, and a function can return sys_refcursor to return a cursor result set.
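In text form, a sketch of the dynamic cursor definitions:

```sql
TYPE cursor_type IS REF CURSOR;   -- define the ref cursor type
cursor_variable cursor_type;      -- declare a cursor variable of that type
cursor_variable2 SYS_REFCURSOR;   -- or use the built-in sys_refcursor type
```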
Figure 4 shows the syntax diagram for opening a static cursor.

Open the dynamic cursor: use the OPEN FOR statement to open the dynamic cursor; the SQL statement is bound dynamically.

Figure 5 shows the syntax diagram for opening a dynamic cursor.

A PL/SQL program cannot use the OPEN statement to repeatedly open a cursor.

Figure 6 shows the syntax diagram for fetching cursor data.

Figure 7 shows the syntax diagram for closing a cursor.
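In text form, a sketch of the cursor operation statements (standard forms; names are placeholders):

```sql
OPEN cursor_name [(parameter_value [, ...])];                          -- open a static cursor
OPEN cursor_variable FOR select_string [USING bind_argument [, ...]];  -- open a dynamic cursor
FETCH cursor_name INTO variable [, ...];                               -- fetch one row of cursor data
CLOSE cursor_name;                                                     -- close the cursor
```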
Cursor attributes are used to control program flow or to learn about program status. When a DML statement is executed, PL/SQL opens a built-in cursor and processes its result. A cursor is a memory segment for maintaining query results; it is opened when a DML statement is executed and closed when the execution finishes. An explicit cursor has the following attributes: %FOUND (TRUE if the most recent fetch returned a row), %NOTFOUND (the opposite of %FOUND), %ISOPEN (TRUE if the cursor is open), and %ROWCOUNT (the number of rows fetched so far).
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 +37 +38 +39 +40 +41 +42 +43 +44 +45 +46 | -- Specify the method for passing cursor parameters: +CREATE OR REPLACE PROCEDURE cursor_proc1() +AS +DECLARE + DEPT_NAME VARCHAR(100); + DEPT_LOC NUMBER(4); + -- Define a cursor: + CURSOR C1 IS + SELECT section_name, place_id FROM sections WHERE section_id <= 50; + CURSOR C2(sect_id INTEGER) IS + SELECT section_name, place_id FROM sections WHERE section_id <= sect_id; + TYPE CURSOR_TYPE IS REF CURSOR; + C3 CURSOR_TYPE; + SQL_STR VARCHAR(100); +BEGIN + OPEN C1;-- Open the cursor: + LOOP + -- Fetch data from the cursor: + FETCH C1 INTO DEPT_NAME, DEPT_LOC; + EXIT WHEN C1%NOTFOUND; + DBMS_OUTPUT.PUT_LINE(DEPT_NAME||'---'||DEPT_LOC); + END LOOP; + CLOSE C1;-- Close the cursor. + + OPEN C2(10); + LOOP + FETCH C2 INTO DEPT_NAME, DEPT_LOC; + EXIT WHEN C2%NOTFOUND; + DBMS_OUTPUT.PUT_LINE(DEPT_NAME||'---'||DEPT_LOC); + END LOOP; + CLOSE C2; + + SQL_STR := 'SELECT section_name, place_id FROM sections WHERE section_id <= :DEPT_NO;'; + OPEN C3 FOR SQL_STR USING 50; + LOOP + FETCH C3 INTO DEPT_NAME, DEPT_LOC; + EXIT WHEN C3%NOTFOUND; + DBMS_OUTPUT.PUT_LINE(DEPT_NAME||'---'||DEPT_LOC); + END LOOP; + CLOSE C3; +END; +/ + +CALL cursor_proc1(); + +DROP PROCEDURE cursor_proc1; + |
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 | -- Increase the salary of employees whose salary is lower than CNY3000 by CNY500: +CREATE TABLE staffs_t1 AS TABLE staffs; + +CREATE OR REPLACE PROCEDURE cursor_proc2() +AS +DECLARE + V_EMPNO NUMBER(6); + V_SAL NUMBER(8,2); + CURSOR C IS SELECT staff_id, salary FROM staffs_t1; +BEGIN + OPEN C; + LOOP + FETCH C INTO V_EMPNO, V_SAL; + EXIT WHEN C%NOTFOUND; + IF V_SAL<=3000 THEN + UPDATE staffs_t1 SET salary =salary + 500 WHERE staff_id = V_EMPNO; + END IF; + END LOOP; + CLOSE C; +END; +/ + +CALL cursor_proc2(); + +-- Drop the stored procedure: +DROP PROCEDURE cursor_proc2; +DROP TABLE staffs_t1; + |
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 | -- Use function parameters of the SYS_REFCURSOR type: +CREATE OR REPLACE PROCEDURE proc_sys_ref(O OUT SYS_REFCURSOR) +IS +C1 SYS_REFCURSOR; +BEGIN +OPEN C1 FOR SELECT section_ID FROM sections ORDER BY section_ID; +O := C1; +END; +/ + +DECLARE +C1 SYS_REFCURSOR; +TEMP NUMBER(4); +BEGIN +proc_sys_ref(C1); +LOOP + FETCH C1 INTO TEMP; + DBMS_OUTPUT.PUT_LINE(C1%ROWCOUNT); + EXIT WHEN C1%NOTFOUND; +END LOOP; +END; +/ + +-- Drop the stored procedure: +DROP PROCEDURE proc_sys_ref; + |
The system automatically creates implicit cursors for non-query statements, such as ALTER and DROP statements, and creates work areas for these statements. An implicit cursor is named SQL, a name defined by the system.

Implicit cursor operations, such as definition, opening, value assignment, and closing, are automatically performed by the system. Users can use only the attributes of implicit cursors. The work area of an implicit cursor stores the result of the latest SQL statement and is not related to user-defined explicit cursors.

Format: SQL%attribute

INSERT, UPDATE, DELETE, and SELECT statements do not require defined cursors.

An implicit cursor has the following attributes: SQL%FOUND, SQL%NOTFOUND, SQL%ROWCOUNT, and SQL%ISOPEN.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 | -- Delete all employees in a department from the EMP table. If the department has no employees, delete the department from the DEPT table. +CREATE TABLE staffs_t1 AS TABLE staffs; +CREATE TABLE sections_t1 AS TABLE sections; + +CREATE OR REPLACE PROCEDURE proc_cursor3() +AS + DECLARE + V_DEPTNO NUMBER(4) := 100; + BEGIN + DELETE FROM staffs WHERE section_ID = V_DEPTNO; + -- Proceed based on cursor status: + IF SQL%NOTFOUND THEN + DELETE FROM sections_t1 WHERE section_ID = V_DEPTNO; + END IF; + END; +/ + +CALL proc_cursor3(); + +-- Drop the stored procedure and the temporary table: +DROP PROCEDURE proc_cursor3; +DROP TABLE staffs_t1; +DROP TABLE sections_t1; + |
The use of cursors in WHILE and LOOP statements is called a cursor loop. Generally, OPEN, FETCH, and CLOSE statements are needed in a cursor loop. The following describes a FOR loop that works with a static cursor without performing the four static cursor steps (define, open, fetch, and close).

```sql
BEGIN
FOR ROW_TRANS IN
    SELECT first_name FROM staffs
    LOOP
        DBMS_OUTPUT.PUT_LINE (ROW_TRANS.first_name);
    END LOOP;
END;
/

-- Create tables:
CREATE TABLE integerTable1( A INTEGER) DISTRIBUTE BY hash(A);
CREATE TABLE integerTable2( B INTEGER) DISTRIBUTE BY hash(B);
INSERT INTO integerTable2 VALUES(2);

-- Multiple cursors share the parameters of cursor attributes:
DECLARE
    CURSOR C1 IS SELECT A FROM integerTable1; -- Declare the cursor.
    CURSOR C2 IS SELECT B FROM integerTable2;
    PI_A INTEGER;
    PI_B INTEGER;
BEGIN
    OPEN C1; -- Open the cursor.
    OPEN C2;
    FETCH C1 INTO PI_A; -- The value of C1%FOUND and C2%FOUND is FALSE.
    FETCH C2 INTO PI_B; -- The value of C1%FOUND and C2%FOUND is TRUE.
    -- Determine the cursor status:
    IF C1%FOUND THEN
        IF C2%FOUND THEN
            DBMS_OUTPUT.PUT_LINE('Dual cursor share parameter.');
        END IF;
    END IF;
    CLOSE C1; -- Close the cursor.
    CLOSE C2;
END;
/

-- Drop the temporary tables:
DROP TABLE integerTable1;
DROP TABLE integerTable2;
```
Table 1 provides all interfaces supported by the DBMS_LOB package.
| API | Description |
|---|---|
| DBMS_LOB.GETLENGTH | Obtains and returns the length of a LOB object. |
| DBMS_LOB.OPEN | Opens a LOB and returns a LOB descriptor. |
| DBMS_LOB.READ | Loads a part of the LOB contents into the buffer according to the specified length and initial position offset. |
| DBMS_LOB.WRITE | Copies contents in the buffer to the LOB according to the specified length and initial position offset. |
| DBMS_LOB.WRITEAPPEND | Copies contents in the buffer to the end of the LOB according to the specified length. |
| DBMS_LOB.COPY | Copies contents in a BLOB to another BLOB according to the specified length and initial position offset. |
| DBMS_LOB.ERASE | Deletes contents in a BLOB according to the specified length and initial position offset. |
| DBMS_LOB.CLOSE | Closes a LOB descriptor. |
| DBMS_LOB.INSTR | Returns the position of the Nth occurrence of a character string in the LOB. |
| DBMS_LOB.COMPARE | Compares two LOBs or a certain part of two LOBs. |
| DBMS_LOB.SUBSTR | Reads a substring of a LOB and returns the number of bytes or characters read. |
| DBMS_LOB.TRIM | Truncates the LOB to a specified length. After the execution is complete, the length of the LOB is set to the length specified by the newlen parameter. |
| DBMS_LOB.CREATETEMPORARY | Creates a temporary BLOB or CLOB. |
| DBMS_LOB.APPEND | Adds the content of a LOB to another LOB. |
The stored procedure GETLENGTH obtains and returns the length of a LOB object.

The function prototype of DBMS_LOB.GETLENGTH is:

```sql
DBMS_LOB.GETLENGTH (
lob_loc    IN BLOB)
RETURN INTEGER;

DBMS_LOB.GETLENGTH (
lob_loc    IN CLOB)
RETURN INTEGER;
```

| Parameter | Description |
|---|---|
| lob_loc | LOB object whose length is to be obtained |
The stored procedure OPEN opens a LOB and returns a LOB descriptor. This procedure is used only for compatibility.

The function prototype of DBMS_LOB.OPEN is:

```sql
DBMS_LOB.OPEN (
lob_loc    INOUT BLOB,
open_mode  IN BINARY_INTEGER);

DBMS_LOB.OPEN (
lob_loc    INOUT CLOB,
open_mode  IN BINARY_INTEGER);
```

| Parameter | Description |
|---|---|
| lob_loc | BLOB or CLOB descriptor that is opened |
| open_mode | Open mode (currently, DBMS_LOB.LOB_READWRITE is supported) |
The stored procedure READ loads a part of the LOB contents into the buffer according to the specified length and initial position offset.

The function prototype of DBMS_LOB.READ is:

```sql
DBMS_LOB.READ (
lob_loc   IN BLOB,
amount    IN INTEGER,
offset    IN INTEGER,
buffer    OUT RAW);

DBMS_LOB.READ (
lob_loc   IN CLOB,
amount    IN OUT INTEGER,
offset    IN INTEGER,
buffer    OUT VARCHAR2);
```

| Parameter | Description |
|---|---|
| lob_loc | LOB object to be loaded |
| amount | Length of data to load. NOTE: If the read length is negative, the error "ERROR: argument 2 is null, invalid, or out of range." is reported. |
| offset | Where to start reading the LOB contents, that is, the offset from the initial position of the LOB contents. |
| buffer | Target buffer to store the loaded LOB contents |
The stored procedure WRITE copies contents in the buffer to the LOB variable according to the specified length and initial position offset.

The function prototype of DBMS_LOB.WRITE is:

```sql
DBMS_LOB.WRITE (
lob_loc   IN OUT BLOB,
amount    IN INTEGER,
offset    IN INTEGER,
buffer    IN RAW);

DBMS_LOB.WRITE (
lob_loc   IN OUT CLOB,
amount    IN INTEGER,
offset    IN INTEGER,
buffer    IN VARCHAR2);
```

| Parameter | Description |
|---|---|
| lob_loc | LOB object to be written |
| amount | Length of data to write. NOTE: If the write length is less than 1 or greater than the length of the content to be written, an error is reported. |
| offset | Where to start writing the LOB contents, that is, the offset from the initial position of the LOB contents. NOTE: If the offset is less than 1 or greater than the maximum length of the LOB contents, an error is reported. |
| buffer | Content to be written |
The stored procedure WRITEAPPEND copies contents in the buffer to the end of the LOB according to the specified length.

The function prototype of DBMS_LOB.WRITEAPPEND is:

```sql
DBMS_LOB.WRITEAPPEND (
lob_loc   IN OUT BLOB,
amount    IN INTEGER,
buffer    IN RAW);

DBMS_LOB.WRITEAPPEND (
lob_loc   IN OUT CLOB,
amount    IN INTEGER,
buffer    IN VARCHAR2);
```

| Parameter | Description |
|---|---|
| lob_loc | LOB object to be written |
| amount | Length of data to write. NOTE: If the write length is less than 1 or greater than the length of the content to be written, an error is reported. |
| buffer | Content to be written |
The stored procedure COPY copies contents in a BLOB to another BLOB according to the specified length and initial position offset.

The function prototype of DBMS_LOB.COPY is:

```sql
DBMS_LOB.COPY (
dest_lob     IN OUT BLOB,
src_lob      IN BLOB,
amount       IN INTEGER,
dest_offset  IN INTEGER DEFAULT 1,
src_offset   IN INTEGER DEFAULT 1);
```

| Parameter | Description |
|---|---|
| dest_lob | BLOB object to be pasted to |
| src_lob | BLOB object to be copied |
| amount | Length of the copied data. NOTE: If the length is less than 1 or greater than the maximum length of BLOB contents, an error is reported. |
| dest_offset | Where to start pasting the BLOB contents, that is, the offset from the initial position of the BLOB contents. NOTE: If the offset is less than 1 or greater than the maximum length of BLOB contents, an error is reported. |
| src_offset | Where to start copying the BLOB contents, that is, the offset from the initial position of the BLOB contents. NOTE: If the offset is less than 1 or greater than the length of the source BLOB, an error is reported. |
The stored procedure ERASE deletes contents in a BLOB according to the specified length and initial position offset.

The function prototype of DBMS_LOB.ERASE is:

```sql
DBMS_LOB.ERASE (
lob_loc  IN OUT BLOB,
amount   IN OUT INTEGER,
offset   IN INTEGER DEFAULT 1);
```

| Parameter | Description |
|---|---|
| lob_loc | BLOB object whose contents are to be deleted |
| amount | Length of contents to be deleted. NOTE: If the length is less than 1 or greater than the maximum length of BLOB contents, an error is reported. |
| offset | Where to start deleting the BLOB contents, that is, the offset from the initial position of the BLOB contents. NOTE: If the offset is less than 1 or greater than the maximum length of BLOB contents, an error is reported. |
The stored procedure CLOSE closes an opened LOB.

The function prototype of DBMS_LOB.CLOSE is:

```sql
DBMS_LOB.CLOSE(
src_lob    IN BLOB);

DBMS_LOB.CLOSE (
src_lob    IN CLOB);
```
This function returns the position of the Nth occurrence of a pattern in the LOB. If invalid values are entered, NULL is returned. Invalid values include offset < 1, offset > LOBMAXSIZE, nth < 1, and nth > LOBMAXSIZE.

The function prototype of DBMS_LOB.INSTR is:

```sql
DBMS_LOB.INSTR (
lob_loc    IN BLOB,
pattern    IN RAW,
offset     IN INTEGER := 1,
nth        IN INTEGER := 1)
RETURN INTEGER;

DBMS_LOB.INSTR (
lob_loc    IN CLOB,
pattern    IN VARCHAR2,
offset     IN INTEGER := 1,
nth        IN INTEGER := 1)
RETURN INTEGER;
```

| Parameter | Description |
|---|---|
| lob_loc | LOB descriptor to be searched |
| pattern | Pattern to match. It is RAW for BLOB and TEXT for CLOB. |
| offset | For BLOB, the absolute offset in bytes; for CLOB, the offset in characters. The matching start position is 1. |
| nth | Number of pattern matching times. The minimum value is 1. |
This function compares two LOBs or a certain part of two LOBs.

The function prototype of DBMS_LOB.COMPARE is:

```sql
DBMS_LOB.COMPARE (
lob_1     IN BLOB,
lob_2     IN BLOB,
amount    IN INTEGER := DBMS_LOB.LOBMAXSIZE,
offset_1  IN INTEGER := 1,
offset_2  IN INTEGER := 1)
RETURN INTEGER;

DBMS_LOB.COMPARE (
lob_1     IN CLOB,
lob_2     IN CLOB,
amount    IN INTEGER := DBMS_LOB.LOBMAXSIZE,
offset_1  IN INTEGER := 1,
offset_2  IN INTEGER := 1)
RETURN INTEGER;
```

| Parameter | Description |
|---|---|
| lob_1 | First LOB descriptor to be compared |
| lob_2 | Second LOB descriptor to be compared |
| amount | Number of characters or bytes to be compared. The maximum value is DBMS_LOB.LOBMAXSIZE. |
| offset_1 | Offset of the first LOB descriptor. The initial position is 1. |
| offset_2 | Offset of the second LOB descriptor. The initial position is 1. |
This function reads a substring of a LOB and returns the number of bytes or characters read. If amount < 1, amount > 32767, offset < 1, or offset > LOBMAXSIZE, NULL is returned.

The function prototype of DBMS_LOB.SUBSTR is:

```sql
DBMS_LOB.SUBSTR (
lob_loc    IN BLOB,
amount     IN INTEGER := 32767,
offset     IN INTEGER := 1)
RETURN RAW;

DBMS_LOB.SUBSTR (
lob_loc    IN CLOB,
amount     IN INTEGER := 32767,
offset     IN INTEGER := 1)
RETURN VARCHAR2;
```

| Parameter | Description |
|---|---|
| lob_loc | LOB descriptor of the substring to be read. For BLOB, the return value is the bytes read; for CLOB, it is the characters read. |
| amount | Number of bytes or characters to be read. |
| offset | Offset in bytes or characters from the start position. |
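A minimal usage sketch (assuming a string literal is accepted as the LOB argument, as in the GETLENGTH example later in this section):

```sql
-- Read 3 characters starting at offset 2; the expected result is 'bcd':
SELECT DBMS_LOB.SUBSTR('abcdef', 3, 2);
```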
This stored procedure truncates the LOB to a specified length. After this stored procedure is executed, the length of the LOB is set to the length specified by the newlen parameter. If an empty LOB is truncated, no execution result is displayed. If the specified length is greater than the length of the LOB, an exception occurs.

The function prototype of DBMS_LOB.TRIM is:

```sql
DBMS_LOB.TRIM (
lob_loc    IN OUT BLOB,
newlen     IN INTEGER);

DBMS_LOB.TRIM (
lob_loc    IN OUT CLOB,
newlen     IN INTEGER);
```

| Parameter | Description |
|---|---|
| lob_loc | BLOB or CLOB object to be truncated |
| newlen | New LOB length after truncation, in bytes for BLOB and in characters for CLOB. |
This stored procedure creates a temporary BLOB or CLOB and is used only for syntax compatibility.

The function prototype of DBMS_LOB.CREATETEMPORARY is:

```sql
DBMS_LOB.CREATETEMPORARY (
lob_loc  IN OUT BLOB,
cache    IN BOOLEAN,
dur      IN INTEGER);

DBMS_LOB.CREATETEMPORARY (
lob_loc  IN OUT CLOB,
cache    IN BOOLEAN,
dur      IN INTEGER);
```

| Parameter | Description |
|---|---|
| lob_loc | LOB descriptor |
| cache | This parameter is used only for syntax compatibility. |
| dur | This parameter is used only for syntax compatibility. |
The stored procedure APPEND adds the content of a source LOB to the end of a destination LOB.

The function prototype of DBMS_LOB.APPEND is:

```sql
DBMS_LOB.APPEND (
dest_lob  IN OUT BLOB,
src_lob   IN BLOB);

DBMS_LOB.APPEND (
dest_lob  IN OUT CLOB,
src_lob   IN CLOB);
```

| Parameter | Description |
|---|---|
| dest_lob | LOB descriptor to be written |
| src_lob | LOB descriptor to be read |
```sql
-- Obtain the length of the character string.
SELECT DBMS_LOB.GETLENGTH('12345678');

DECLARE
myraw  RAW(100);
amount INTEGER :=2;
buffer INTEGER :=1;
begin
DBMS_LOB.READ('123456789012345',amount,buffer,myraw);
dbms_output.put_line(myraw);
end;
/

CREATE TABLE blob_Table (t1 blob) DISTRIBUTE BY REPLICATION;
CREATE TABLE blob_Table_bak (t2 blob) DISTRIBUTE BY REPLICATION;
INSERT INTO blob_Table VALUES('abcdef');
INSERT INTO blob_Table_bak VALUES('22222');

DECLARE
str varchar2(100) := 'abcdef';
source raw(100);
dest blob;
copyto blob;
amount int;
PSV_SQL varchar2(100);
PSV_SQL1 varchar2(100);
a int :=1;
len int;
BEGIN
source := utl_raw.cast_to_raw(str);
amount := utl_raw.length(source);

PSV_SQL :='select * from blob_Table for update';
PSV_SQL1 := 'select * from blob_Table_bak for update';

EXECUTE IMMEDIATE PSV_SQL into dest;
EXECUTE IMMEDIATE PSV_SQL1 into copyto;

DBMS_LOB.WRITE(dest, amount, 1, source);
DBMS_LOB.WRITEAPPEND(dest, amount, source);

DBMS_LOB.ERASE(dest, a, 1);
DBMS_OUTPUT.PUT_LINE(a);
DBMS_LOB.COPY(copyto, dest, amount, 10, 1);
DBMS_LOB.CLOSE(dest);
RETURN;
END;
/

-- Delete the tables.
DROP TABLE blob_Table;
DROP TABLE blob_Table_bak;
```
Table 1 provides all interfaces supported by the DBMS_RANDOM package.
| API | Description |
|---|---|
| DBMS_RANDOM.SEED | Sets a seed for a random number. |
| DBMS_RANDOM.VALUE | Generates a random number between a specified low and a specified high. |
The stored procedure SEED is used to set a seed for a random number. The DBMS_RANDOM.SEED function prototype is:

```sql
DBMS_RANDOM.SEED (seed IN INTEGER);
```

| Parameter | Description |
|---|---|
| seed | Seed used to generate a random number |
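A minimal usage sketch (assuming SELECT invocation, as in the VALUE example below):

```sql
-- Set a fixed seed so that subsequent DBMS_RANDOM.VALUE calls are reproducible:
SELECT DBMS_RANDOM.SEED(12345);
SELECT DBMS_RANDOM.VALUE(0, 1);
```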
The stored procedure VALUE generates a random number between a specified low and a specified high. The DBMS_RANDOM.VALUE function prototype is:

```sql
DBMS_RANDOM.VALUE(
low   IN NUMBER,
high  IN NUMBER)
RETURN NUMBER;
```

| Parameter | Description |
|---|---|
| low | Low bound for the random number. The generated random number is greater than or equal to low. |
| high | High bound for the random number. The generated random number is less than high. |
The low and high bounds can be any values; the only requirement is that both parameters are of the NUMERIC type.

```sql
-- Generate a random number between 0 and 1:
SELECT DBMS_RANDOM.VALUE(0,1);

-- To obtain a random integer in a specified range, pass the bounds as low and high and truncate the result (the high value itself is never returned). For example, for an integer between 0 and 99:
SELECT TRUNC(DBMS_RANDOM.VALUE(0,100));
```
Table 1 provides all interfaces supported by the DBMS_OUTPUT package.
| API | Description |
|---|---|
| DBMS_OUTPUT.PUT_LINE | Outputs the specified text with a line break. The text length cannot exceed 32,767 bytes. |
| DBMS_OUTPUT.PUT | Outputs the specified text without adding a line break. The text length cannot exceed 32,767 bytes. |
| DBMS_OUTPUT.ENABLE | Sets the buffer size. If this interface is not invoked, the maximum buffer size is 20,000 bytes and the minimum buffer size is 2,000 bytes. If the specified buffer size is less than 2,000 bytes, the minimum buffer size is applied. |
The stored procedure PUT_LINE writes a line of text, followed by a line-end symbol, to the buffer. The DBMS_OUTPUT.PUT_LINE function prototype is:

```sql
DBMS_OUTPUT.PUT_LINE (
item    IN VARCHAR2);
```

| Parameter | Description |
|---|---|
| item | Text written to the buffer |
The stored procedure PUT writes the specified text to the buffer without appending a line break. The DBMS_OUTPUT.PUT function prototype is:

```sql
DBMS_OUTPUT.PUT (
item    IN VARCHAR2);
```

| Parameter | Description |
|---|---|
| item | Text written to the buffer |
The stored procedure ENABLE sets the output buffer size. If the size is not specified, the buffer contains a maximum of 20,000 bytes. The DBMS_OUTPUT.ENABLE function prototype is:

```sql
DBMS_OUTPUT.ENABLE (
buf    IN INTEGER);
```

```sql
BEGIN
    DBMS_OUTPUT.ENABLE(50);
    DBMS_OUTPUT.PUT ('hello, ');
    DBMS_OUTPUT.PUT_LINE('database!'); -- Displays "hello, database!"
END;
/
```
Table 1 provides all interfaces supported by the UTL_RAW package.
| API | Description |
|---|---|
| UTL_RAW.CAST_FROM_BINARY_INTEGER | Converts an INTEGER value to a binary representation (RAW type). |
| UTL_RAW.CAST_TO_BINARY_INTEGER | Converts a binary representation (RAW type) to an INTEGER value. |
| UTL_RAW.LENGTH | Obtains the length of a RAW object. |
| UTL_RAW.CAST_TO_RAW | Converts a VARCHAR2 value to a binary representation (RAW type). |
The external representation of RAW data is hexadecimal, and its internal storage form is binary. For example, the representation of the RAW data 11001011 is 'CB'. The input for the actual type conversion is 'CB'.
The stored procedure CAST_FROM_BINARY_INTEGER converts an INTEGER value to a binary representation (RAW type).

The UTL_RAW.CAST_FROM_BINARY_INTEGER function prototype is:

```sql
UTL_RAW.CAST_FROM_BINARY_INTEGER (
n          IN INTEGER,
endianess  IN INTEGER)
RETURN RAW;
```

| Parameter | Description |
|---|---|
| n | INTEGER value to be converted to the RAW type |
| endianess | INTEGER value 1 or 2 specifying the byte order (1 indicates BIG_ENDIAN and 2 indicates LITTLE_ENDIAN) |
The stored procedure CAST_TO_BINARY_INTEGER converts an INTEGER value in binary representation (RAW type) to the INTEGER type.

The UTL_RAW.CAST_TO_BINARY_INTEGER function prototype is:

```sql
UTL_RAW.CAST_TO_BINARY_INTEGER (
r          IN RAW,
endianess  IN INTEGER)
RETURN BINARY_INTEGER;
```

| Parameter | Description |
|---|---|
| r | INTEGER value in binary representation (RAW type) |
| endianess | INTEGER value 1 or 2 specifying the byte order (1 indicates BIG_ENDIAN and 2 indicates LITTLE_ENDIAN) |
The stored procedure LENGTH returns the length of a RAW object.

The UTL_RAW.LENGTH function prototype is:

```sql
UTL_RAW.LENGTH(
r    IN RAW)
RETURN INTEGER;
```

| Parameter | Description |
|---|---|
| r | RAW object whose length is to be obtained |
The stored procedure CAST_TO_RAW converts a VARCHAR2 object to the RAW type.

The UTL_RAW.CAST_TO_RAW function prototype is:

```sql
UTL_RAW.CAST_TO_RAW(
c    IN VARCHAR2)
RETURN RAW;
```

| Parameter | Description |
|---|---|
| c | VARCHAR2 object to be converted |
```sql
-- Perform operations on RAW data in a stored procedure.
CREATE OR REPLACE PROCEDURE proc_raw
AS
str varchar2(100) := 'abcdef';
source raw(100);
amount integer;
BEGIN
source := utl_raw.cast_to_raw(str); -- Convert the type.
amount := utl_raw.length(source);   -- Obtain the length.
dbms_output.put_line(amount);
END;
/

-- Invoke the stored procedure.
CALL proc_raw();

-- Delete the stored procedure.
DROP PROCEDURE proc_raw;
```
Table 1 lists all interfaces supported by the DBMS_JOB package.
| Interface | Description |
|---|---|
| DBMS_JOB.SUBMIT | Submits a job to the job queue. The job number is automatically generated by the system. |
| DBMS_JOB.ISUBMIT | Submits a job to the job queue. The job number is specified by the user. |
| DBMS_JOB.REMOVE | Removes a job from the job queue by job number. |
| DBMS_JOB.BROKEN | Disables or enables job execution. |
| DBMS_JOB.CHANGE | Modifies user-definable attributes of a job, including the job description, next execution time, and execution interval. |
| DBMS_JOB.WHAT | Modifies the job description of a job. |
| DBMS_JOB.NEXT_DATE | Modifies the next execution time of a job. |
| DBMS_JOB.INTERVAL | Modifies the execution interval of a job. |
| DBMS_JOB.CHANGE_OWNER | Modifies the owner of a job. |
The stored procedure SUBMIT submits a job to the system job queue.

A prototype of the DBMS_JOB.SUBMIT function is as follows:

```sql
DBMS_JOB.SUBMIT(
what          IN TEXT,
next_date     IN TIMESTAMP DEFAULT sysdate,
job_interval  IN TEXT DEFAULT 'null',
job           OUT INTEGER);
```

When a job is created (using DBMS_JOB), the system binds the current database and username to the job by default. This function can be invoked using CALL or SELECT. If you invoke it using SELECT, the output parameter can be omitted. To invoke this function within a stored procedure, use PERFORM.
| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| what | text | IN | No | SQL statement to be executed. One or multiple DML statements, anonymous blocks, SQL statements that invoke stored procedures, or all three combined are supported. |
| next_date | timestamp | IN | No | Next time the job will be executed. The default value is the current system time (sysdate). If the specified time is in the past, the job is executed at the time it is submitted. |
| interval | text | IN | Yes | Expression used to calculate the next time to execute the job. It can be an interval expression, or sysdate followed by a numeric value, for example, sysdate+1.0/24. If this parameter is left blank or set to null, the job will be executed only once, and the job status will change to 'd' afterward. |
| job | integer | OUT | No | Job number. The value ranges from 1 to 32767. When dbms.submit is invoked using SELECT, this parameter can be omitted. |
For example:

```sql
select DBMS_JOB.SUBMIT('call pro_xxx();', to_date('20180101','yyyymmdd'),'sysdate+1');

select DBMS_JOB.SUBMIT('call pro_xxx();', to_date('20180101','yyyymmdd'),'sysdate+1.0/24');

CALL DBMS_JOB.SUBMIT('INSERT INTO T_JOB VALUES(1); call pro_1(); call pro_2();', add_months(to_date('201701','yyyymm'),1), 'date_trunc(''day'',SYSDATE) + 1 +(8*60+30.0)/(24*60)' ,:jobid);
```
ISUBMIT has the same syntax as SUBMIT, except that the first parameter of ISUBMIT is an input parameter, that is, a user-specified job number. In contrast, the last parameter of SUBMIT is an output parameter, indicating the job number automatically generated by the system.

For example:

```sql
CALL dbms_job.isubmit(101, 'insert_msg_statistic1;', sysdate, 'sysdate+3.0/24');
```
The stored procedure REMOVE deletes a specified job.

A prototype of the DBMS_JOB.REMOVE function is as follows:

```sql
DBMS_JOB.REMOVE(job IN INTEGER);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |

For example:

```sql
CALL dbms_job.remove(101);
```
The stored procedure BROKEN sets the broken flag of a job.

A prototype of the DBMS_JOB.BROKEN function is as follows:

```sql
DBMS_JOB.BROKEN(
job        IN INTEGER,
broken     IN BOOLEAN,
next_date  IN TIMESTAMP DEFAULT sysdate);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |
| broken | boolean | IN | No | Status flag: true for broken and false for not broken. Setting this parameter to true or false updates the current job. If the parameter is left blank, the job status remains unchanged. |
| next_date | timestamp | IN | Yes | Next execution time. The default is the current system time. If broken is set to true, next_date is updated to '4000-1-1'. If broken is false and next_date is not empty, next_date is updated for the job; if next_date is empty, it is not updated. This parameter can be omitted, in which case its default value is used. |

For example:

```sql
CALL dbms_job.broken(101, true);
CALL dbms_job.broken(101, false, sysdate);
```
The stored procedure CHANGE modifies user-definable attributes of a job, including the job content, next execution time, and execution interval.

A prototype of the DBMS_JOB.CHANGE function is as follows:

```sql
DBMS_JOB.CHANGE(
job        IN INTEGER,
what       IN TEXT,
next_date  IN TIMESTAMP,
interval   IN TEXT);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |
| what | text | IN | Yes | Name of the stored procedure or SQL statement block to execute. If this parameter is left blank, the system does not update the what parameter for the specified job; otherwise, it updates the what parameter. |
| next_date | timestamp | IN | Yes | Next execution time. If this parameter is left blank, the system does not update the next_date parameter for the specified job; otherwise, it updates the next_date parameter. |
| interval | text | IN | Yes | Time expression for calculating the next execution time. If this parameter is left blank, the system does not update the interval parameter for the specified job; otherwise, it updates the interval parameter after a validity check. If this parameter is set to null, the job is executed only once, and the job status changes to 'd' afterward. |

For example:

```sql
CALL dbms_job.change(101, 'call userproc();', sysdate, 'sysdate + 1.0/1440');
CALL dbms_job.change(101, 'insert into tbl_a values(sysdate);', sysdate, 'sysdate + 1.0/1440');
```
The stored procedure WHAT modifies the statements to be executed by a specified job.

A prototype of the DBMS_JOB.WHAT function is as follows:

```sql
DBMS_JOB.WHAT(
job   IN INTEGER,
what  IN TEXT);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |
| what | text | IN | No | Name of the stored procedure or SQL statement block to execute. |

For example:

```sql
CALL dbms_job.what(101, 'call userproc();');
CALL dbms_job.what(101, 'insert into tbl_a values(sysdate);');
```
The stored procedure NEXT_DATE modifies the next execution time of a job.

A prototype of the DBMS_JOB.NEXT_DATE function is as follows:

```sql
DBMS_JOB.NEXT_DATE(
job        IN INTEGER,
next_date  IN TIMESTAMP);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |
| next_date | timestamp | IN | No | Next execution time. |

If the specified next_date value is earlier than the current date, the job is executed once immediately.

For example:

```sql
CALL dbms_job.next_date(101, sysdate);
```
The stored procedure INTERVAL modifies the execution interval of a job.

A prototype of the DBMS_JOB.INTERVAL function is as follows:

```sql
DBMS_JOB.INTERVAL(
job       IN INTEGER,
interval  IN TEXT);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |
| interval | text | IN | Yes | Time expression for calculating the next execution time. If this parameter is left blank or set to null, the job is executed only once, and the job status changes to 'd' afterward. interval must be a valid time or interval type. |

For example:

```sql
CALL dbms_job.interval(101, 'sysdate + 1.0/1440');
```
For a job that is currently running (that is, job_status is 'r'), the remove, change, next_date, what, and interval interfaces cannot be used to delete the job or modify its parameters.
The stored procedure CHANGE_OWNER modifies the owner of a job.

A prototype of the DBMS_JOB.CHANGE_OWNER function is as follows:

```sql
DBMS_JOB.CHANGE_OWNER(
job        IN INTEGER,
new_owner  IN NAME);
```

| Parameter | Type | Input/Output Parameter | Can Be Empty | Description |
|---|---|---|---|---|
| job | integer | IN | No | Job number. |
| new_owner | name | IN | No | New username. |

For example:

```sql
CALL dbms_job.change_owner(101, 'alice');
```
Table 1 lists interfaces supported by the DBMS_SQL package.
| API | Description |
|---|---|
| DBMS_SQL.OPEN_CURSOR | Opens a cursor. |
| DBMS_SQL.CLOSE_CURSOR | Closes an open cursor. |
| DBMS_SQL.PARSE | Transmits a group of SQL statements to a cursor. Currently, only SELECT statements are supported. |
| DBMS_SQL.EXECUTE | Performs a set of dynamically defined operations on the cursor. |
| DBMS_SQL.FETCH_ROWS | Reads a row of cursor data. |
| DBMS_SQL.DEFINE_COLUMN | Dynamically defines a column. |
| DBMS_SQL.DEFINE_COLUMN_CHAR | Dynamically defines a column of the CHAR type. |
| DBMS_SQL.DEFINE_COLUMN_INT | Dynamically defines a column of the INT type. |
| DBMS_SQL.DEFINE_COLUMN_LONG | Dynamically defines a column of the LONG type. |
| DBMS_SQL.DEFINE_COLUMN_RAW | Dynamically defines a column of the RAW type. |
| DBMS_SQL.DEFINE_COLUMN_TEXT | Dynamically defines a column of the TEXT type. |
| DBMS_SQL.DEFINE_COLUMN_UNKNOWN | Dynamically defines a column of an unknown type. |
| DBMS_SQL.COLUMN_VALUE | Reads a dynamically defined column value. |
| DBMS_SQL.COLUMN_VALUE_CHAR | Reads a dynamically defined column value of the CHAR type. |
| DBMS_SQL.COLUMN_VALUE_INT | Reads a dynamically defined column value of the INT type. |
| DBMS_SQL.COLUMN_VALUE_LONG | Reads a dynamically defined column value of the LONG type. |
| DBMS_SQL.COLUMN_VALUE_RAW | Reads a dynamically defined column value of the RAW type. |
| DBMS_SQL.COLUMN_VALUE_TEXT | Reads a dynamically defined column value of the TEXT type. |
| DBMS_SQL.COLUMN_VALUE_UNKNOWN | Reads a dynamically defined column value of an unknown type. |
| DBMS_SQL.IS_OPEN | Checks whether a cursor is opened. |
This function opens a cursor and is the prerequisite for the subsequent DBMS_SQL operations. It takes no parameters; it automatically generates cursor IDs in ascending order and returns the ID as an integer.

The function prototype of DBMS_SQL.OPEN_CURSOR is:

```sql
DBMS_SQL.OPEN_CURSOR (
)
RETURN INTEGER;
```
This function closes a cursor and is the end of each DBMS_SQL operation. If this function is not invoked when the stored procedure ends, the memory occupied by the cursor is not released. Therefore, remember to close a cursor when you no longer need it. If an exception occurs, the stored procedure exits but the cursor is not closed; you are therefore advised to invoke this interface in the exception handling section of the stored procedure.

The function prototype of DBMS_SQL.CLOSE_CURSOR is:

```sql
DBMS_SQL.CLOSE_CURSOR (
cursorid  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be closed |
This function parses the query statement of a given cursor. The input query statement is executed immediately. Currently, only SELECT query statements can be parsed. Statement parameters can be passed only through the TEXT type, and the length cannot exceed 1 GB.

```sql
DBMS_SQL.PARSE (
cursorid       IN INTEGER,
query_string   IN TEXT,
language_flag  IN INTEGER
)
RETURN BOOLEAN;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor whose query statement is to be parsed |
| query_string | Query statement to be parsed |
| language_flag | Version language number. Currently, only 1 is supported. |
This function executes a given cursor. It receives a cursor ID, and the data obtained after execution is used in subsequent operations. Currently, only SELECT query statements can be executed.

```sql
DBMS_SQL.EXECUTE(
cursorid  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
This function returns the number of data rows that meet the query conditions. Each time the interface is executed, the system fetches a set of new rows until all data is read.

```sql
DBMS_SQL.FETCH_ROWS(
cursorid  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
This function defines columns returned from a given cursor and can be used only for cursors defined by SELECT. The defined columns are identified by their relative positions in the query list. The data type of the input variable determines the column type.

```sql
DBMS_SQL.DEFINE_COLUMN(
cursorid     IN INTEGER,
position     IN INTEGER,
column_ref   IN ANYELEMENT,
column_size  IN INTEGER default 1024
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column_ref | Variable of any type. Select the appropriate interface to dynamically define columns based on the variable type. |
| column_size | Length of the defined column |
This function defines columns of the CHAR type returned from a given cursor and can be used only for cursors defined by SELECT. The defined columns are identified by their relative positions in the query list. The data type of the input variable determines the column type.

```sql
DBMS_SQL.DEFINE_COLUMN_CHAR(
cursorid     IN INTEGER,
position     IN INTEGER,
column       IN TEXT,
column_size  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column | Parameter to be defined |
| column_size | Length of the dynamically defined column |
This function defines columns of the INT type returned from a given cursor and can be used only for cursors defined by SELECT. The defined columns are identified by their relative positions in the query list. The data type of the input variable determines the column type.

```sql
DBMS_SQL.DEFINE_COLUMN_INT(
cursorid  IN INTEGER,
position  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
This function defines columns of a long type (not LONG) returned from a given cursor and can be used only for cursors defined by SELECT. The defined columns are identified by their relative positions in the query list. The data type of the input variable determines the column type. The maximum size of a long column is 1 GB.

```sql
DBMS_SQL.DEFINE_COLUMN_LONG(
cursorid  IN INTEGER,
position  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
This function defines columns of the RAW type returned from a given cursor and can be used only for cursors defined by SELECT. The defined columns are identified by their relative positions in the query list. The data type of the input variable determines the column type.

```sql
DBMS_SQL.DEFINE_COLUMN_RAW(
cursorid     IN INTEGER,
position     IN INTEGER,
column       IN BYTEA,
column_size  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column | Parameter of the RAW type |
| column_size | Column length |
This function defines columns of the TEXT type returned from a given cursor and can be used only for cursors defined by SELECT. The defined columns are identified by their relative positions in the query list. The data type of the input variable determines the column type.

```sql
DBMS_SQL.DEFINE_COLUMN_TEXT(
cursorid  IN INTEGER,
position  IN INTEGER,
max_size  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| max_size | Maximum length of the defined TEXT type |
This function processes columns of unknown data types returned from a given cursor. It is used only for the system to report an error and exit when the type cannot be identified.

```sql
DBMS_SQL.DEFINE_COLUMN_UNKNOWN(
cursorid  IN INTEGER,
position  IN INTEGER,
column    IN TEXT
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column | Dynamically defined parameter |
This function returns the cursor element value at a specified position of a cursor and accesses the data obtained by DBMS_SQL.FETCH_ROWS.

```sql
DBMS_SQL.COLUMN_VALUE(
cursorid      IN INTEGER,
position      IN INTEGER,
column_value  INOUT ANYELEMENT
)
RETURN ANYELEMENT;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column_value | Return value of the defined column |
This function returns the value of the CHAR type at a specified position of a cursor and accesses the data obtained by DBMS_SQL.FETCH_ROWS.

```sql
DBMS_SQL.COLUMN_VALUE_CHAR(
cursorid       IN INTEGER,
position       IN INTEGER,
column_value   INOUT CHARACTER,
err_num        INOUT NUMERIC default 0,
actual_length  INOUT INTEGER default 1024
)
RETURN RECORD;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column_value | Return value |
| err_num | Error number. It is an output parameter and the argument must be a variable. Currently, the output value is -1 regardless of the argument. |
| actual_length | Length of the return value |
This function returns the value of the INT type at a specified position of a cursor and accesses the data obtained by DBMS_SQL.FETCH_ROWS.

```sql
DBMS_SQL.COLUMN_VALUE_INT(
cursorid  IN INTEGER,
position  IN INTEGER
)
RETURN INTEGER;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
This function returns the value of a long type (not LONG or BIGINT) at a specified position of a cursor and accesses the data obtained by DBMS_SQL.FETCH_ROWS.

```sql
DBMS_SQL.COLUMN_VALUE_LONG(
cursorid       IN INTEGER,
position       IN INTEGER,
length         IN INTEGER,
off_set        IN INTEGER,
column_value   INOUT TEXT,
actual_length  INOUT INTEGER default 1024
)
RETURN RECORD;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| length | Length of the return value |
| off_set | Start position of the return value |
| column_value | Return value |
| actual_length | Length of the return value |
This function returns the value of the RAW type at a specified position of a cursor and accesses the data obtained by DBMS_SQL.FETCH_ROWS.

```sql
DBMS_SQL.COLUMN_VALUE_RAW(
cursorid       IN INTEGER,
position       IN INTEGER,
column_value   INOUT BYTEA,
err_num        INOUT NUMERIC default 0,
actual_length  INOUT INTEGER default 1024
)
RETURN RECORD;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
| column_value | Returned column value |
| err_num | Error number. It is an output parameter and the argument must be a variable. Currently, the output value is -1 regardless of the argument. |
| actual_length | Length of the return value. A value longer than this length is truncated. |
This function returns the value of the TEXT type at a specified position of a cursor and accesses the data obtained by DBMS_SQL.FETCH_ROWS.

```sql
DBMS_SQL.COLUMN_VALUE_TEXT(
cursorid  IN INTEGER,
position  IN INTEGER
)
RETURN TEXT;
```

| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of the dynamically defined column in the query |
This function returns the value of an unknown type in a specified position of a cursor. This is an error handling interface when the type is not unknown.
+1 +2 +3 +4 +5 +6 | DBMS_SQL.COLUMN_VALUE_UNKNOWN( +cursorid IN INTEGER, +position IN INTEGER, +COLUMN_TYPE IN TEXT +) +RETURN TEXT; + |
| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be executed |
| position | Position of a dynamically defined column in the query |
| column_type | Type of the returned parameter |
This function returns the status of a cursor: TRUE if the cursor is in the open, parse, execute, or define state, and FALSE otherwise. If the status is unknown, an error is reported.
```
DBMS_SQL.IS_OPEN(
    cursorid IN INTEGER
)
RETURN BOOLEAN;
```
| Parameter Name | Description |
|---|---|
| cursorid | ID of the cursor to be queried |
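For example, a stored procedure can verify that a cursor is still open before closing it. The following anonymous block is a minimal sketch using the interfaces described above:

```
DECLARE
    cursorid int;
BEGIN
    cursorid := dbms_sql.open_cursor();
    -- is_open returns TRUE for a cursor in the open, parse, execute, or define state.
    IF dbms_sql.is_open(cursorid) THEN
        dbms_sql.close_cursor(cursorid);
    END IF;
END;
/
```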
```
-- Perform operations on RAW data in a stored procedure.
create or replace procedure pro_dbms_sql_all_02(in_raw raw, v_in int, v_offset int)
as
cursorid int;
v_id int;
v_info bytea := 1;
query varchar(2000);
execute_ret int;
define_column_ret_raw bytea := '1';
define_column_ret int;
begin
drop table if exists pro_dbms_sql_all_tb1_02;
create table pro_dbms_sql_all_tb1_02(a int, b blob);
insert into pro_dbms_sql_all_tb1_02 values(1, HEXTORAW('DEADBEEE'));
insert into pro_dbms_sql_all_tb1_02 values(2, in_raw);
query := 'select * from pro_dbms_sql_all_tb1_02 order by 1';
-- Open a cursor.
cursorid := dbms_sql.open_cursor();
-- Parse the cursor.
dbms_sql.parse(cursorid, query, 1);
-- Define columns.
define_column_ret := dbms_sql.define_column(cursorid, 1, v_id);
define_column_ret_raw := dbms_sql.define_column_raw(cursorid, 2, v_info, 10);
-- Execute the cursor.
execute_ret := dbms_sql.execute(cursorid);
loop
exit when (dbms_sql.fetch_rows(cursorid) <= 0);
-- Obtain values.
dbms_sql.column_value(cursorid, 1, v_id);
dbms_sql.column_value_raw(cursorid, 2, v_info, v_in, v_offset);
-- Output the result.
dbms_output.put_line('id:' || v_id || ' info:' || v_info);
end loop;
-- Close the cursor.
dbms_sql.close_cursor(cursorid);
end;
/
-- Invoke the stored procedure.
call pro_dbms_sql_all_02(HEXTORAW('DEADBEEF'), 0, 1);

-- Delete the stored procedure.
DROP PROCEDURE pro_dbms_sql_all_02;
```
RAISE has the following five syntax formats:
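The five formats are sketched below, following the PL/pgSQL forms on which the GaussDB(DWS) procedural language is based (option is one of MESSAGE, DETAIL, HINT, or ERRCODE):

```
RAISE [ level ] 'format' [, expression [, ... ]] [ USING option = expression [, ... ] ];
RAISE [ level ] condition_name [ USING option = expression [, ... ] ];
RAISE [ level ] SQLSTATE 'sqlstate' [ USING option = expression [, ... ] ];
RAISE [ level ] USING option = expression [, ... ];
RAISE;
```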
Parameter description: in the format string, the percent sign (%) is a placeholder that is replaced by the value of the next argument. For example:

```
-- v_job_id replaces % in the character string.
RAISE NOTICE 'Calling cs_create_job(%)', v_job_id;
```
If neither a condition name nor an SQLSTATE is designated in a RAISE EXCEPTION command, RAISE EXCEPTION (P0001) is used by default. If no message text is designated, the condition name or SQLSTATE is used as the message text by default.

If the SQLSTATE designates an error code, the code is not limited to the defined error codes. It can be any code consisting of five digits or uppercase ASCII letters, other than 00000. Avoid error codes that end with three zeros, because these are category codes and can be captured by trapping the whole category.

The syntax described in Figure 5 takes no parameters. This form is used only in an EXCEPTION clause of a BEGIN block, where it re-throws the error being handled.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 | CREATE OR REPLACE PROCEDURE proc_raise1(user_id in integer) +AS +BEGIN +RAISE EXCEPTION 'Noexistence ID --> %',user_id USING HINT = 'Please check your user ID'; +END; +/ + +call proc_raise1(300011); + +-- Execution result: +ERROR: Noexistence ID --> 300011 +HINT: Please check your user ID + |
```
CREATE OR REPLACE PROCEDURE proc_raise2(user_id in integer)
AS
BEGIN
    RAISE 'Duplicate user ID: %', user_id USING ERRCODE = 'unique_violation';
END;
/

\set VERBOSITY verbose
call proc_raise2(300011);

-- Execution result:
ERROR:  Duplicate user ID: 300011
SQLSTATE: 23505
LOCATION:  exec_stmt_raise, pl_exec.cpp:3482
```
If the main parameter is a condition name or an SQLSTATE, the following applies:

```
RAISE division_by_zero;
RAISE SQLSTATE '22012';
```

For example:
```
CREATE OR REPLACE PROCEDURE division(div in integer, dividend in integer)
AS
DECLARE
res int;
BEGIN
    IF dividend = 0 THEN
        RAISE division_by_zero;
        RETURN;
    ELSE
        res := div/dividend;
        RAISE INFO 'division result: %', res;
        RETURN;
    END IF;
END;
/
call division(3,0);

-- Execution result:
ERROR:  division_by_zero
```
```
RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id;
```
System catalogs are where GaussDB(DWS) stores structural metadata. They are a core component of the GaussDB(DWS) database system and provide control information for it. These system catalogs contain cluster installation information as well as information about queries and processes in GaussDB(DWS). You can collect information about the database by querying them.

System views provide ways to query system catalogs and internal database status. If some columns in one or more tables in a database are frequently searched for, an administrator can define a view for these columns, and users can then access them directly through the view without entering search criteria. Unlike a base table, a view is a virtual rather than a physical object: the database stores only the definition of a view, not its data. The data remains in the original base tables, so if data in the base tables changes, the data seen through the view changes accordingly. In this sense, a view is a window through which users can observe the data they are interested in and its changes. A view is evaluated each time it is referenced.

Under separation of duty, non-administrators have no permission to view system catalogs and views. In other scenarios, system catalogs and views are either visible only to administrators or visible to all users. Catalogs and views that require administrator permissions are marked as such below; they are accessible only to administrators.

Do not add, delete, or modify system catalogs or system views. Manual modification of, or damage to, system catalogs or system views may cause system information inconsistencies, system control exceptions, or even cluster unavailability.
+GS_OBSSCANINFO defines the OBS runtime information scanned in cluster acceleration scenarios. Each record corresponds to a piece of runtime information of a foreign table on OBS in a query.
| Name | Type | Reference | Description |
|---|---|---|---|
| query_id | bigint | - | Query ID |
| user_id | text | - | Database user who performs the query |
| table_name | text | - | Name of the foreign table on OBS |
| file_type | text | - | Format of the files storing the underlying data |
| time_stamp | timestamp | - | Scan start time |
| actual_time | double | - | Scan execution time, in seconds |
| file_scanned | bigint | - | Number of files scanned |
| data_size | double | - | Size of the data scanned, in bytes |
| billing_info | text | - | Reserved field |
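For example, a query like the following (a sketch using the columns listed above) shows the most recent OBS scans and how much data each one touched:

```
SELECT table_name, file_scanned, data_size, actual_time
FROM gs_obsscaninfo
ORDER BY time_stamp DESC
LIMIT 10;
```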
The GS_WLM_INSTANCE_HISTORY system catalog stores information about resource usage related to CN or DN instances. Each record in the system table indicates the resource usage of an instance at a specific time point, including the memory, number of CPU cores, disk I/O, physical I/O of the process, and logical I/O of the process.
| Name | Type | Description |
|---|---|---|
| instancename | text | Instance name |
| timestamp | timestamp with time zone | Timestamp |
| used_cpu | int | CPU usage of the instance |
| free_mem | int | Unused memory of the instance (unit: MB) |
| used_mem | int | Used memory of the instance (unit: MB) |
| io_await | real | io_wait value of the disk used by the instance (average over 10 seconds) |
| io_util | real | io_util value of the disk used by the instance (average over 10 seconds) |
| disk_read | real | Disk read rate of the instance (average over 10 seconds; unit: KB/s) |
| disk_write | real | Disk write rate of the instance (average over 10 seconds; unit: KB/s) |
| process_read | bigint | Read rate of the instance process reading from disk, excluding bytes read from the disk pagecache (unit: KB/s) |
| process_write | bigint | Write rate of the instance process writing to disk within 10 seconds, excluding bytes written to the disk pagecache (unit: KB/s) |
| logical_read | bigint | CN instance: N/A. DN instance: logical read byte rate of the instance in the statistical interval (10 seconds; unit: KB/s) |
| logical_write | bigint | CN instance: N/A. DN instance: logical write byte rate of the instance in the statistical interval (10 seconds; unit: KB/s) |
| read_counts | bigint | CN instance: N/A. DN instance: total number of logical read operations of the instance in the statistical interval (10 seconds) |
| write_counts | bigint | CN instance: N/A. DN instance: total number of logical write operations of the instance in the statistical interval (10 seconds) |
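For example, to review recent resource usage per instance, you can run a query like the following (a sketch; "timestamp" is quoted because it is also a type keyword):

```
SELECT instancename, "timestamp", used_cpu, used_mem, io_util, disk_read, disk_write
FROM gs_wlm_instance_history
ORDER BY "timestamp" DESC
LIMIT 10;
```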
GS_WLM_OPERATOR_INFO records operators of completed jobs. The data is dumped from the kernel to a system catalog.
| Name | Type | Description |
|---|---|---|
| nodename | text | Name of the CN where the statement is executed |
| queryid | bigint | Internal query_id used for statement execution |
| pid | bigint | Thread ID of the backend |
| plan_node_id | integer | plan_node_id of the execution plan of a query |
| plan_node_name | text | Name of the operator corresponding to plan_node_id |
| start_time | timestamp with time zone | Time when the operator starts to process the first data record |
| duration | bigint | Total execution time of the operator (unit: ms) |
| query_dop | integer | Degree of parallelism (DOP) of the current operator |
| estimated_rows | bigint | Number of rows estimated by the optimizer |
| tuple_processed | bigint | Number of elements returned by the current operator |
| min_peak_memory | integer | Minimum peak memory used by the current operator on all DNs (unit: MB) |
| max_peak_memory | integer | Maximum peak memory used by the current operator on all DNs (unit: MB) |
| average_peak_memory | integer | Average peak memory used by the current operator on all DNs (unit: MB) |
| memory_skew_percent | integer | Memory usage skew of the current operator among DNs |
| min_spill_size | integer | Minimum spilled data among all DNs when a spill occurs (unit: MB; default: 0) |
| max_spill_size | integer | Maximum spilled data among all DNs when a spill occurs (unit: MB; default: 0) |
| average_spill_size | integer | Average spilled data among all DNs when a spill occurs (unit: MB; default: 0) |
| spill_skew_percent | integer | DN spill skew when a spill occurs |
| min_cpu_time | bigint | Minimum execution time of the operator on all DNs (unit: ms) |
| max_cpu_time | bigint | Maximum execution time of the operator on all DNs (unit: ms) |
| total_cpu_time | bigint | Total execution time of the operator on all DNs (unit: ms) |
| cpu_skew_percent | integer | Skew of the execution time among DNs |
| warning | text | Warning information |
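For example, the slowest operators of completed jobs can be found with a query like the following (a sketch using the columns above):

```
SELECT plan_node_name, duration, max_peak_memory, memory_skew_percent, cpu_skew_percent
FROM gs_wlm_operator_info
ORDER BY duration DESC
LIMIT 10;
```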
GS_WLM_SESSION_INFO records load management information about completed jobs executed on all CNs. The data is dumped from the kernel to this system catalog.

The GS_WLM_USER_RESOURCE_HISTORY system table stores information about resources used by users and is valid only on CNs. Each record indicates the resource usage of a user at a time point, including the memory, number of CPU cores, storage space, temporary space, operator flushing space, logical I/O traffic, number of logical I/O operations, and logical I/O rate. The memory, CPU, and I/O monitoring items record only the resource usage of complex jobs.

Data in the GS_WLM_USER_RESOURCE_HISTORY system table comes from the PG_TOTAL_USER_RESOURCE_INFO view.
| Name | Type | Description |
|---|---|---|
| username | text | Username |
| timestamp | timestamp with time zone | Timestamp |
| used_memory | int | Used memory (unit: MB) |
| total_memory | int | Available memory (unit: MB). 0 indicates that the available memory is not limited and depends on the maximum memory available in the database. |
| used_cpu | real | Number of CPU cores in use |
| total_cpu | int | Total number of CPU cores of the Cgroup associated with the user on the node |
| used_space | bigint | Used storage space (unit: KB) |
| total_space | bigint | Available storage space (unit: KB). -1 indicates that the storage space is not limited. |
| used_temp_space | bigint | Used temporary storage space (unit: KB) |
| total_temp_space | bigint | Available temporary storage space (unit: KB). -1 indicates that the temporary storage space is not limited. |
| used_spill_space | bigint | Used space of operator flushing (unit: KB) |
| total_spill_space | bigint | Available storage space for operator flushing (unit: KB). -1 indicates that the operator flushing space is not limited. |
| read_kbytes | bigint | Byte traffic of read operations in a monitoring period (unit: KB) |
| write_kbytes | bigint | Byte traffic of write operations in a monitoring period (unit: KB) |
| read_counts | bigint | Number of read operations in a monitoring period |
| write_counts | bigint | Number of write operations in a monitoring period |
| read_speed | real | Byte rate of read operations in a monitoring period (unit: KB/s) |
| write_speed | real | Byte rate of write operations in a monitoring period (unit: KB/s) |
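For example, the resource usage history of a single user can be inspected with a query like the following (a sketch; user1 is an illustrative username):

```
SELECT username, "timestamp", used_memory, used_cpu, used_space, read_speed, write_speed
FROM gs_wlm_user_resource_history
WHERE username = 'user1'
ORDER BY "timestamp" DESC;
```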
PG_AGGREGATE records information about aggregate functions. Each entry in PG_AGGREGATE is an extension of an entry in PG_PROC. The PG_PROC entry carries the aggregate's name, input and output data types, and other information that is similar to that of ordinary functions.
| Name | Type | Reference | Description |
|---|---|---|---|
| aggfnoid | regproc | PG_PROC.oid | PG_PROC OID of the aggregate function |
| aggtransfn | regproc | PG_PROC.oid | Transition function |
| aggcollectfn | regproc | PG_PROC.oid | Collection function |
| aggfinalfn | regproc | PG_PROC.oid | Final function (zero if none) |
| aggsortop | oid | PG_OPERATOR.oid | Associated sort operator (zero if none) |
| aggtranstype | oid | PG_TYPE.oid | Data type of the aggregate function's internal transition (state) data |
| agginitval | text | - | Initial value of the transition state. This is a text column containing the initial value in its external string representation. If this column is null, the transition state value starts out null. |
| agginitcollect | text | - | Initial value of the collection state. This is a text column containing the initial value in its external string representation. If this column is null, the collection state value starts out null. |
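For example, joining PG_AGGREGATE with PG_PROC shows the functions behind an aggregate (a sketch; avg is used here for illustration):

```
SELECT p.proname, a.aggtransfn, a.aggcollectfn, a.aggfinalfn, t.typname AS transtype
FROM pg_aggregate a
JOIN pg_proc p ON p.oid = a.aggfnoid
JOIN pg_type t ON t.oid = a.aggtranstype
WHERE p.proname = 'avg';
```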
PG_AM records information about index access methods. There is one row for each index access method supported by the system.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| amname | name | - | Name of the access method |
| amstrategies | smallint | - | Number of operator strategies for this access method, or zero if the access method does not have a fixed set of operator strategies |
| amsupport | smallint | - | Number of support routines for this access method |
| amcanorder | boolean | - | Whether the access method supports ordered scans sorted by the indexed column's value |
| amcanorderbyop | boolean | - | Whether the access method supports ordered scans sorted by the result of an operator on the indexed column |
| amcanbackward | boolean | - | Whether the access method supports backward scanning |
| amcanunique | boolean | - | Whether the access method supports unique indexes |
| amcanmulticol | boolean | - | Whether the access method supports multi-column indexes |
| amoptionalkey | boolean | - | Whether the access method supports a scan without any constraint for the first index column |
| amsearcharray | boolean | - | Whether the access method supports ScalarArrayOpExpr searches |
| amsearchnulls | boolean | - | Whether the access method supports IS NULL/NOT NULL searches |
| amstorage | boolean | - | Whether the index storage data type can differ from the column data type |
| amclusterable | boolean | - | Whether an index of this type can be clustered on |
| ampredlocks | boolean | - | Whether an index of this type manages fine-grained predicate locks |
| amkeytype | oid | PG_TYPE.oid | Type of data stored in the index, or zero if not a fixed type |
| aminsert | regproc | PG_PROC.oid | "Insert this tuple" function |
| ambeginscan | regproc | PG_PROC.oid | "Prepare for index scan" function |
| amgettuple | regproc | PG_PROC.oid | "Next valid tuple" function, or zero if none |
| amgetbitmap | regproc | PG_PROC.oid | "Fetch all valid tuples" function, or zero if none |
| amrescan | regproc | PG_PROC.oid | "(Re)start index scan" function |
| amendscan | regproc | PG_PROC.oid | "Clean up after index scan" function |
| ammarkpos | regproc | PG_PROC.oid | "Mark current scan position" function |
| amrestrpos | regproc | PG_PROC.oid | "Restore marked scan position" function |
| ammerge | regproc | PG_PROC.oid | "Merge multiple indexes" function |
| ambuild | regproc | PG_PROC.oid | "Build new index" function |
| ambuildempty | regproc | PG_PROC.oid | "Build empty index" function |
| ambulkdelete | regproc | PG_PROC.oid | Bulk-delete function |
| amvacuumcleanup | regproc | PG_PROC.oid | Post-VACUUM cleanup function |
| amcanreturn | regproc | PG_PROC.oid | Function to check whether the index supports index-only scans, or zero if none |
| amcostestimate | regproc | PG_PROC.oid | Function to estimate the cost of an index scan |
| amoptions | regproc | PG_PROC.oid | Function to parse and validate reloptions for an index |
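For example, the capabilities of the available access methods can be compared with a simple query:

```
SELECT amname, amstrategies, amcanorder, amcanunique, amcanmulticol
FROM pg_am
ORDER BY amname;
```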
PG_AMOP records information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A family member can be either a search operator or an ordering operator. An operator can appear in more than one family, but cannot appear in more than one search position nor more than one ordering position within a family.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| amopfamily | oid | PG_OPFAMILY.oid | Operator family this entry is for |
| amoplefttype | oid | PG_TYPE.oid | Left-hand input data type of the operator |
| amoprighttype | oid | PG_TYPE.oid | Right-hand input data type of the operator |
| amopstrategy | smallint | - | Operator strategy number |
| amoppurpose | "char" | - | Operator purpose, either s for search or o for ordering |
| amopopr | oid | PG_OPERATOR.oid | OID of the operator |
| amopmethod | oid | PG_AM.oid | Index access method the operator family is for |
| amopsortfamily | oid | PG_OPFAMILY.oid | The btree operator family this entry sorts according to, if an ordering operator; zero if a search operator |
A "search" operator entry indicates that an index of this operator family can be searched to find all rows satisfying WHERE indexed_column operator constant. Obviously, such an operator must return a Boolean value, and its left-hand input type must match the index's column data type.
+An "ordering" operator entry indicates that an index of this operator family can be scanned to return rows in the order represented by ORDER BY indexed_column operator constant. Such an operator could return any sortable data type, though again its left-hand input type must match the index's column data type. The exact semantics of the ORDER BY are specified by the amopsortfamily column, which must reference a btree operator family for the operator's result type.
+PG_AMPROC records information about the support procedures associated with the access method operator families. There is one row for each support procedure belonging to an operator family.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| amprocfamily | oid | PG_OPFAMILY.oid | Operator family this entry is for |
| amproclefttype | oid | PG_TYPE.oid | Left-hand input data type of the associated operator |
| amprocrighttype | oid | PG_TYPE.oid | Right-hand input data type of the associated operator |
| amprocnum | smallint | - | Support procedure number |
| amproc | regproc | PG_PROC.oid | OID of the procedure |
The usual interpretation of the amproclefttype and amprocrighttype columns is that they identify the left and right input types of the operator(s) that a particular support procedure supports. For some access methods these match the input data type(s) of the support procedure itself, for others not. There is a notion of "default" support procedures for an index, which are those with amproclefttype and amprocrighttype both equal to the index opclass's opcintype.
PG_ATTRDEF stores default values of columns.
| Name | Type | Description |
|---|---|---|
| adrelid | oid | Table to which the column belongs |
| adnum | smallint | Number of the column |
| adbin | pg_node_tree | Internal representation of the default value of the column |
| adsrc | text | Human-readable representation of the default value |
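For example, the default values defined for a table can be listed by joining PG_ATTRDEF with PG_ATTRIBUTE (a sketch; customer_address is an illustrative table name):

```
SELECT a.attname, d.adsrc
FROM pg_attrdef d
JOIN pg_attribute a
  ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE d.adrelid = 'customer_address'::regclass;
```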
PG_ATTRIBUTE records information about table columns.
| Name | Type | Description |
|---|---|---|
| attrelid | oid | Table to which the column belongs |
| attname | name | Column name |
| atttypid | oid | Column data type |
| attstattarget | integer | Controls the level of detail of statistics collected for this column by ANALYZE. For scalar data types, attstattarget is both the target number of "most common values" to collect and the target number of histogram bins to create. |
| attlen | smallint | Copy of pg_type.typlen of the column's type |
| attnum | smallint | Number of the column |
| attndims | integer | Number of dimensions if the column is an array; otherwise, 0 |
| attcacheoff | integer | Always -1 on disk. When loaded into a row descriptor in memory, it may be updated to cache the offset of the column in the row. |
| atttypmod | integer | Type-specific data supplied at table creation time (for example, the maximum length of a varchar column). It is passed as the third argument to type-specific input functions and length coercion functions. The value is generally -1 for types that do not need atttypmod. |
| attbyval | boolean | Copy of pg_type.typbyval of the column's type |
| attstorage | "char" | Copy of pg_type.typstorage of the column's type |
| attalign | "char" | Copy of pg_type.typalign of the column's type |
| attnotnull | boolean | Not-null constraint on the column. The constraint can be enabled or disabled by changing this column. |
| atthasdef | boolean | Indicates that the column has a default value, in which case there is a corresponding entry in pg_attrdef that actually defines the value. |
| attisdropped | boolean | Whether the column has been dropped and is no longer valid. A dropped column is still physically present in the table but is ignored by the analyzer, so it cannot be accessed through SQL. |
| attislocal | boolean | Whether the column is defined locally in the relation. Note that a column can be locally defined and inherited simultaneously. |
| attcmprmode | tinyint | Compression mode of the column |
| attinhcount | integer | Number of direct ancestors the column has. A column with an ancestor cannot be dropped or renamed. |
| attcollation | oid | Defined collation of the column |
| attacl | aclitem[] | Column-level access permissions |
| attoptions | text[] | Attribute-level options |
| attfdwoptions | text[] | Attribute-level foreign data options |
| attinitdefval | bytea | Stores the default value expression. ADD COLUMN on a row-store table must use this column. |
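For example, the user-visible columns of a table can be listed with a query like the following (a sketch; customer_address is an illustrative table name, and dropped columns and system columns are filtered out):

```
SELECT attname, atttypid::regtype AS data_type, attnotnull
FROM pg_attribute
WHERE attrelid = 'customer_address'::regclass
  AND attnum > 0
  AND NOT attisdropped
ORDER BY attnum;
```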
PG_AUTHID records information about database authentication identifiers (roles). The concept of users is contained in that of roles: a user is simply a role whose rolcanlogin flag is set. Any role, whether or not rolcanlogin is set, can have other roles as members.

Because a cluster contains only one pg_authid catalog, it is shared by all databases in the cluster rather than stored per database. It is accessible only to users with system administrator rights.
| Column | Type | Description |
|---|---|---|
| oid | oid | Row identifier (hidden attribute; must be explicitly selected) |
| rolname | name | Role name |
| rolsuper | boolean | Whether the role is the initial system administrator with the highest permission |
| rolinherit | boolean | Whether the role automatically inherits the permissions of roles it is a member of |
| rolcreaterole | boolean | Whether the role can create more roles |
| rolcreatedb | boolean | Whether the role can create databases |
| rolcatupdate | boolean | Whether the role can directly update system catalogs. Only the initial system administrator whose usesysid is 10 has this permission; it is not available to other users. |
| rolcanlogin | boolean | Whether the role can log in, that is, whether it can be given as the initial session authorization identifier |
| rolreplication | boolean | Indicates that the role is a replication role (a compatibility syntax with no actual meaning) |
| rolauditadmin | boolean | Indicates that the role is an audit user |
| rolsystemadmin | boolean | Indicates that the role is an administrator |
| rolconnlimit | integer | For roles that can log in, the maximum number of concurrent connections the role can make. -1 means no limit. |
| rolpassword | text | Password (possibly encrypted); NULL if no password |
| rolvalidbegin | timestamp with time zone | Account validity start time; NULL if no start time |
| rolvaliduntil | timestamp with time zone | Password expiry time; NULL if no expiration |
| rolrespool | name | Resource pool that the user can use |
| roluseft | boolean | Whether the role can perform operations on foreign tables |
| rolparentid | oid | OID of the group user to which the user belongs |
| roltabspace | text | Storage space of the user's permanent tables |
| rolkind | char | Special type of user, including private users, logical cluster administrators, and common users |
| rolnodegroup | oid | OID of the node group associated with the user. The node group must be a logical cluster. |
| roltempspace | text | Storage space of the user's temporary tables |
| rolspillspace | text | Operator disk spill space of the user |
| rolexcpdata | text | Reserved column |
| rolauthinfo | text | Additional information when LDAP authentication is used; NULL for other authentication modes |
| rolpwdexpire | integer | Password expiration time. Users can change their password before it expires; after it expires, only the administrator can change it. -1 indicates that the password never expires. |
| rolpwdtime | timestamp with time zone | Time when the password was created |
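For example, an administrator can list the roles that are allowed to log in (a sketch; as noted above, PG_AUTHID requires system administrator rights):

```
SELECT rolname, rolsuper, rolsystemadmin, rolcanlogin, rolconnlimit
FROM pg_authid
WHERE rolcanlogin;
```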
PG_AUTH_HISTORY records the authentication history of the role. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| roloid | oid | ID of the role |
| passwordtime | timestamp with time zone | Time of password creation and change |
| rolpassword | text | Role password, encrypted using MD5 or SHA256, or unencrypted |
PG_AUTH_MEMBERS records the membership relations between roles.
| Name | Type | Description |
|---|---|---|
| roleid | oid | ID of the role that has a member |
| member | oid | ID of the role that is a member of roleid |
| grantor | oid | ID of the role that granted this membership |
| admin_option | boolean | Whether the member can grant membership in roleid to others |
PG_CAST records conversion relationships between data types.
| Name | Type | Description |
|---|---|---|
| castsource | oid | OID of the source data type |
| casttarget | oid | OID of the target data type |
| castfunc | oid | OID of the conversion function; 0 if no conversion function is required |
| castcontext | "char" | Conversion mode between the source and target data types: e = explicit casts only, a = implicitly in assignments, i = implicitly in expressions |
| castmethod | "char" | Conversion method: f = uses the function identified by castfunc, i = uses input/output functions, b = binary-coercible (no conversion required) |
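For example, all casts defined from a given source type can be listed with a query like the following (a sketch; integer is used for illustration):

```
SELECT castsource::regtype AS source, casttarget::regtype AS target, castcontext, castmethod
FROM pg_cast
WHERE castsource = 'integer'::regtype;
```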
PG_CLASS records database objects, such as tables, indexes, sequences, and views, and their relationships.
| Name | Type | Description |
|---|---|---|
| oid | oid | Row identifier (hidden attribute; must be explicitly selected) |
| relname | name | Name of an object, such as a table, index, or view |
| relnamespace | oid | OID of the namespace that contains the relation |
| reltype | oid | OID of the data type that corresponds to this table's row type (0 for indexes, which have no pg_type record) |
| reloftype | oid | OID of the composite type; 0 for other types |
| relowner | oid | Owner of the relation |
| relam | oid | Access method used (such as B-tree or hash), if this is an index |
| relfilenode | oid | Name of the on-disk file of this relation; 0 if no such file exists |
| reltablespace | oid | Tablespace in which this relation is stored. If 0, the database's default tablespace is used. This column is meaningless if the relation has no on-disk file. |
| relpages | double precision | Size of the on-disk representation of this table in pages (of size BLCKSZ). This is only an estimate used by the optimizer. |
| reltuples | double precision | Number of rows in the table. This is only an estimate used by the optimizer. |
| relallvisible | integer | Number of pages marked as all-visible in the table. Used by the optimizer for optimizing SQL execution; updated by VACUUM, ANALYZE, and a few DDL statements such as CREATE INDEX. |
| reltoastrelid | oid | OID of the TOAST table associated with this table; 0 if none. The TOAST table stores large columns "offline" in a secondary table. |
| reltoastidxid | oid | OID of the index for a TOAST table; 0 for a table other than a TOAST table |
| reldeltarelid | oid | OID of a delta table. Delta tables belong to column-store tables; they store long-tail data generated during data import. |
| reldeltaidx | oid | OID of the index for a delta table |
| relcudescrelid | oid | OID of a CU description table. CU description tables (Desc tables) belong to column-store tables; they control whether storage data in the HDFS table directory is visible. |
| relcudescidx | oid | OID of the index for a CU description table |
| relhasindex | boolean | True if this is a table and it has (or recently had) at least one index. Set by CREATE INDEX but not immediately cleared by DROP INDEX. If the VACUUM process detects that a table has no index, it sets relhasindex to false. |
| relisshared | boolean | True if the table is shared across all databases in the cluster. Only certain system catalogs (such as pg_database) are shared. |
| relpersistence | "char" | p = permanent table, u = unlogged table, t = temporary table |
| relkind | "char" | r = ordinary table, i = index, S = sequence, v = view, c = composite type, t = TOAST table, f = foreign table |
| relnatts | smallint | Number of user columns in the relation (excluding system columns). pg_attribute has the same number of rows for the user columns. |
| relchecks | smallint | Number of constraints on the table. For details, see PG_CONSTRAINT. |
| relhasoids | boolean | True if an OID is generated for each row of the relation |
| relhaspkey | boolean | True if the table has (or once had) a primary key |
| relhasrules | boolean | True if the table has rules; see PG_REWRITE to check whether it has rules |
| relhastriggers | boolean | True if the table has (or once had) triggers; see PG_TRIGGER |
| relhassubclass | boolean | True if the table has (or once had) any inheritance child table |
| relcmprs | tinyint | Whether the compression feature is enabled for the table. Note that only batch insertion triggers compression; ordinary CRUD operations do not. |
| relhasclusterkey | boolean | Whether local cluster storage is used |
| relrowmovement | boolean | Whether row migration is allowed when a partitioned table is updated |
| parttype | "char" | Whether the table or index has the property of a partitioned table |
| relfrozenxid | xid32 | All transaction IDs before this one have been replaced with a permanent ("frozen") transaction ID in this table. Used to track whether the table needs to be vacuumed to prevent transaction ID wraparound (or to allow pg_clog to be shrunk). 0 (InvalidTransactionId) if the relation is not a table. To ensure forward compatibility, this column is reserved; the relfrozenxid64 column now records the information. |
| relacl | aclitem[] | Access permissions. In the query output, xxxx indicates the assigned privileges, and yyyy indicates the roles the privileges are assigned to. For details about the privilege letters, see Table 2. |
| reloptions | text[] | Access-method-specific options, as "keyword=value" strings |
| relfrozenxid64 | xid | All transaction IDs before this one have been replaced with a permanent ("frozen") transaction ID in this table. Used to track whether the table needs to be vacuumed to prevent transaction ID wraparound (or to allow pg_clog to be shrunk). 0 (InvalidTransactionId) if the relation is not a table. |
Table 2 Privilege codes

| Parameter | Description |
|---|---|
| r | SELECT (read) |
| w | UPDATE (write) |
| a | INSERT (insert) |
| d | DELETE |
| D | TRUNCATE |
| x | REFERENCES |
| t | TRIGGER |
| X | EXECUTE |
| U | USAGE |
| C | CREATE |
| c | CONNECT |
| T | TEMPORARY |
| A | ANALYZE\|ANALYSE |
| arwdDxtA | ALL PRIVILEGES (used for tables) |
| * | Authorization options for the preceding permissions |
View the OID and relfilenode of a table.

```
select oid, relname, relfilenode from pg_class where relname = 'table_name';
```

Count row-store tables.

```
select 'row count:'||count(1) as point from pg_class where relkind = 'r' and oid > 16384 and reloptions::text not like '%column%' and reloptions::text not like '%internal_mask%';
```

Count column-store tables.

```
select 'column count:'||count(1) as point from pg_class where relkind = 'r' and oid > 16384 and reloptions::text like '%column%';
```
PG_COLLATION records the available collations, which are essentially mappings from an SQL name to operating system locale categories.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| collname | name | - | Collation name (unique per namespace and encoding) |
| collnamespace | oid | PG_NAMESPACE.oid | OID of the namespace that contains this collation |
| collowner | oid | PG_AUTHID.oid | Owner of the collation |
| collencoding | integer | - | Encoding in which the collation is applicable, or -1 if it works for any encoding |
| collcollate | name | - | LC_COLLATE for this collation object |
| collctype | name | - | LC_CTYPE for this collation object |
PG_CONSTRAINT records check, primary key, unique, and foreign key constraints on the tables.
| Name | Type | Description |
|---|---|---|
| conname | name | Constraint name (not necessarily unique) |
| connamespace | oid | OID of the namespace that contains the constraint |
| contype | "char" | Constraint type: c = check constraint, f = foreign key constraint, p = primary key constraint, u = unique constraint |
| condeferrable | boolean | Whether the constraint can be deferred |
| condeferred | boolean | Whether the constraint is deferred by default |
| convalidated | boolean | Whether the constraint is valid. Currently, only foreign key and check constraints can be set to false. |
| conrelid | oid | Table containing this constraint; 0 if it is not a table constraint |
| contypid | oid | Domain containing this constraint; 0 if it is not a domain constraint |
| conindid | oid | ID of the index associated with the constraint |
| confrelid | oid | Referenced table if this constraint is a foreign key; otherwise 0 |
| confupdtype | "char" | Foreign key update action code: a = no action, r = restrict, c = cascade, n = set null, d = set default |
| confdeltype | "char" | Foreign key deletion action code: a = no action, r = restrict, c = cascade, n = set null, d = set default |
| confmatchtype | "char" | Foreign key match type: f = full, p = partial, u = simple (unspecified) |
| conislocal | boolean | Whether the constraint is defined locally for the relation |
| coninhcount | integer | Number of direct inheritance parent tables this constraint has. When the number is not 0, the constraint cannot be deleted or renamed. |
| connoinherit | boolean | Whether the constraint can be inherited |
| consoft | boolean | Whether the constraint is an informational constraint |
| conopt | boolean | Whether the informational constraint can be used to optimize the execution plan |
| conkey | smallint[] | List of the constrained columns if this is a table constraint |
| confkey | smallint[] | List of the referenced columns if this is a foreign key |
| conpfeqop | oid[] | List of the equality operators for PK = FK comparisons if this is a foreign key |
| conppeqop | oid[] | List of the equality operators for PK = PK comparisons if this is a foreign key |
| conffeqop | oid[] | List of the equality operators for FK = FK comparisons if this is a foreign key |
| conexclop | oid[] | List of the per-column exclusion operators if this is an exclusion constraint |
| conbin | pg_node_tree | Internal representation of the expression if this is a check constraint |
| consrc | text | Human-readable representation of the expression if this is a check constraint |
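For example, the constraints defined on a table can be listed with a query like the following (a sketch; customer_address is an illustrative table name):

```
SELECT conname, contype, condeferrable, convalidated
FROM pg_constraint
WHERE conrelid = 'customer_address'::regclass;
```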
PG_CONVERSION records encoding conversion information.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| conname | name | - | Conversion name (unique in a namespace) |
| connamespace | oid | PG_NAMESPACE.oid | OID of the namespace that contains this conversion |
| conowner | oid | PG_AUTHID.oid | Owner of the conversion |
| conforencoding | integer | - | Source encoding ID |
| contoencoding | integer | - | Destination encoding ID |
| conproc | regproc | PG_PROC.oid | Conversion procedure |
| condefault | boolean | - | True if this is the default conversion |
PG_DATABASE records information about the available databases.
| Name | Type | Description |
|---|---|---|
| datname | name | Database name |
| datdba | oid | Owner of the database, usually the user who created it |
| encoding | integer | Database encoding. You can use pg_encoding_to_char() to convert this number to the encoding name. |
| datcollate | name | Collation order (LC_COLLATE) used by the database |
| datctype | name | Character classification (LC_CTYPE) used by the database |
| datistemplate | boolean | Whether the database can serve as a template database |
| datallowconn | boolean | If false, no one can connect to this database. This column is used to protect the template0 database from being altered. |
| datconnlimit | integer | Maximum number of concurrent connections allowed on this database; -1 indicates no limit |
| datlastsysoid | oid | Last system OID in the database |
| datfrozenxid | xid32 | Tracks whether the database needs to be vacuumed to prevent transaction ID wraparound. To ensure forward compatibility, this column is reserved; the datfrozenxid64 column now records the information. |
| dattablespace | oid | Default tablespace of the database |
| datcompatibility | name | Database compatibility mode |
| datacl | aclitem[] | Access permissions |
| datfrozenxid64 | xid | Tracks whether the database needs to be vacuumed to prevent transaction ID wraparound |
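For example, the available databases, their encodings, and compatibility modes can be listed as follows (using pg_encoding_to_char() as mentioned above):

```
SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcompatibility, datconnlimit
FROM pg_database;
```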
PG_DB_ROLE_SETTING records the default values of configuration items bonded to each role and database when the database is running.
| Name | Type | Description |
|---|---|---|
| setdatabase | oid | Database the configuration items correspond to; 0 if no database is specified |
| setrole | oid | Role the configuration items correspond to; 0 if no role is specified |
| setconfig | text[] | Default values of the configuration items when the database is running |
PG_DEFAULT_ACL records the initial privileges assigned to the newly created objects.
| Name | Type | Description |
|---|---|---|
| defaclrole | oid | ID of the role the permission is associated with |
| defaclnamespace | oid | Namespace the permission is associated with; 0 if none |
| defaclobjtype | "char" | Object type of the permission: r = relation (table or view), S = sequence, f = function, T = type |
| defaclacl | aclitem[] | Access permissions that objects of this type should have on creation |
Run the following command to view the initial permissions of the new user role1:

```
select * from PG_DEFAULT_ACL;
 defaclrole | defaclnamespace | defaclobjtype |    defaclacl
------------+-----------------+---------------+-----------------
      16820 |           16822 | r             | {role1=r/user1}
```

You can also run the following statement to convert the format:

```
SELECT pg_catalog.pg_get_userbyid(d.defaclrole) AS "Granter", n.nspname AS "Schema", CASE d.defaclobjtype WHEN 'r' THEN 'table' WHEN 'S' THEN 'sequence' WHEN 'f' THEN 'function' WHEN 'T' THEN 'type' END AS "Type", pg_catalog.array_to_string(d.defaclacl, E', ') AS "Access privileges" FROM pg_catalog.pg_default_acl d LEFT JOIN pg_catalog.pg_namespace n ON n.oid = d.defaclnamespace ORDER BY 1, 2, 3;
```

If the following information is displayed, user1 grants role1 the read permission on schema user1.

```
 Granter | Schema | Type  | Access privileges
---------+--------+-------+-------------------
 user1   | user1  | table | role1=r/user1
(1 row)
```
PG_DEPEND records the dependency relationships between database objects. This information allows DROP commands to find which other objects must be dropped by DROP CASCADE or prevent dropping in the DROP RESTRICT case.
See also PG_SHDEPEND, which provides a similar function for dependencies involving objects that are shared across a database cluster.
| Name | Type | Reference | Description |
|---|---|---|---|
| classid | oid | PG_CLASS.oid | OID of the system catalog the dependent object is in |
| objid | oid | Any OID column | OID of the specific dependent object |
| objsubid | integer | - | For a table column, the column number (objid and classid refer to the table itself). For all other object types, 0. |
| refclassid | oid | PG_CLASS.oid | OID of the system catalog the referenced object is in |
| refobjid | oid | Any OID column | OID of the specific referenced object |
| refobjsubid | integer | - | For a table column, the column number (refobjid and refclassid refer to the table itself). For all other object types, 0. |
| deptype | "char" | - | A code defining the specific semantics of this dependency relationship |
In all cases, a pg_depend entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors defined by deptype, such as n (normal), a (auto), i (internal), e (extension), and p (pin).

Query the table that depends on the database object sequence serial1.

```
SELECT oid FROM pg_class WHERE relname ='serial1';
  oid
-------
 17815
(1 row)
```

```
SELECT * FROM pg_depend WHERE objid ='17815';
 classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype
---------+-------+----------+------------+----------+-------------+---------
    1259 | 17815 |        0 |       2615 |     2200 |           0 | n
    1259 | 17815 |        0 |       1259 |    17812 |           1 | a
(2 rows)
```

```
SELECT relname FROM pg_class where oid='17812';
     relname
------------------
 customer_address
(1 row)
```
PG_DESCRIPTION records optional descriptions (comments) for each database object. Descriptions of many built-in system objects are provided in the initial contents of PG_DESCRIPTION.
See also PG_SHDESCRIPTION, which performs a similar function for descriptions involving objects that are shared across a database cluster.
| Name | Type | Reference | Description |
|---|---|---|---|
| objoid | oid | Any OID column | OID of the object this description pertains to |
| classoid | oid | PG_CLASS.oid | OID of the system catalog this object appears in |
| objsubid | integer | - | For a comment on a table column, the column number (objoid and classoid refer to the table itself). For all other object types, 0. |
| description | text | - | Arbitrary text that serves as the description of the object |
PG_ENUM records entries showing the values and labels for each enum type. The internal representation of a given enum value is actually the OID of its associated row in pg_enum.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| enumtypid | oid | PG_TYPE.oid | OID of the pg_type entry that contains this enum value |
| enumsortorder | real | - | Sort position of this enum value within its enum type |
| enumlabel | name | - | Textual label for this enum value |
The OIDs for PG_ENUM rows follow a special rule: even-numbered OIDs are guaranteed to be ordered in the same way as the sort ordering of their enum type. That is, if two even OIDs belong to the same enum type, the smaller OID must have the smaller enumsortorder value. Odd-numbered OID values need bear no relationship to the sort order. This rule allows the enum comparison routines to avoid catalog lookups in many common cases. The routines that create and alter enum types attempt to assign even OIDs to enum values whenever possible.
When an enum type is created, its members are assigned sort-order positions from 1 to n. But members added later might be given negative or fractional values of enumsortorder. The only requirement on these values is that they be correctly ordered and unique within each enum type.
+PG_EXTENSION records information about the installed extensions. By default, GaussDB(DWS) has 12 extensions, that is, PLPGSQL, DIST_FDW, FILE_FDW, HDFS_FDW, HSTORE, PLDBGAPI, DIMSEARCH, PACKAGES, GC_FDW, UUID-OSSP, LOG_FDW, and ROACH_API.
| Name | Type | Description |
|---|---|---|
| extname | name | Extension name |
| extowner | oid | Owner of the extension |
| extnamespace | oid | Namespace containing the extension's exported objects |
| extrelocatable | boolean | True if the extension can be relocated to another schema |
| extversion | text | Version number of the extension |
| extconfig | oid[] | Configuration information about the extension |
| extcondition | text[] | Filter conditions for the extension's configuration information |
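For example, the installed extensions and their versions can be listed with a simple query:

```
SELECT extname, extversion, extrelocatable
FROM pg_extension;
```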
PG_EXTENSION_DATA_SOURCE records information about external data sources. An external data source contains information about an external database, such as its password encoding. It is mainly used with Extension Connector.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| srcname | name | - | Name of the external data source |
| srcowner | oid | PG_AUTHID.oid | Owner of the external data source |
| srctype | text | - | Type of the external data source. It is NULL by default. |
| srcversion | text | - | Version of the external data source. It is NULL by default. |
| srcacl | aclitem[] | - | Access permissions |
| srcoptions | text[] | - | Options used for the external data source, as keyword=value strings |
PG_FOREIGN_DATA_WRAPPER records foreign-data wrapper definitions. A foreign-data wrapper is the mechanism by which external data, residing on foreign servers, is accessed.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| fdwname | name | - | Name of the foreign-data wrapper |
| fdwowner | oid | PG_AUTHID.oid | Owner of the foreign-data wrapper |
| fdwhandler | oid | PG_PROC.oid | References a handler function that is responsible for supplying execution routines for the foreign-data wrapper. 0 if no handler is provided. |
| fdwvalidator | oid | PG_PROC.oid | References a validator function that is responsible for checking the validity of the options given to the foreign-data wrapper, as well as options for foreign servers and user mappings using the foreign-data wrapper. 0 if no validator is provided. |
| fdwacl | aclitem[] | - | Access permissions |
| fdwoptions | text[] | - | Options used for the foreign-data wrapper, as keyword=value strings |
PG_FOREIGN_SERVER records the foreign server definitions. A foreign server describes a source of external data, such as a remote server. Foreign servers are accessed via foreign-data wrappers.
| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| srvname | name | - | Name of the foreign server |
| srvowner | oid | PG_AUTHID.oid | Owner of the foreign server |
| srvfdw | oid | PG_FOREIGN_DATA_WRAPPER.oid | OID of the foreign-data wrapper of this foreign server |
| srvtype | text | - | Type of the server (optional) |
| srvversion | text | - | Version of the server (optional) |
| srvacl | aclitem[] | - | Access permissions |
| srvoptions | text[] | - | Options used for the foreign server, as keyword=value strings |
PG_FOREIGN_TABLE records auxiliary information about foreign tables.
| Name | Type | Description |
|---|---|---|
| ftrelid | oid | OID of the foreign table |
| ftserver | oid | OID of the server where the foreign table is located |
| ftwriteonly | boolean | Whether data can be written to the foreign table |
| ftoptions | text[] | Foreign table options |
PG_INDEX records part of the information about indexes. The rest is mostly in PG_CLASS.
| Name | Type | Description |
|---|---|---|
| indexrelid | oid | OID of the pg_class entry for this index |
| indrelid | oid | OID of the pg_class entry for the table this index is for |
| indnatts | smallint | Number of columns in the index |
| indisunique | boolean | True if this is a unique index |
| indisprimary | boolean | True if this index represents the primary key of the table. If true, indisunique is also true. |
| indisexclusion | boolean | True if this index supports exclusion constraints |
| indimmediate | boolean | True if a uniqueness check is performed upon data insertion |
| indisclustered | boolean | True if the table was last clustered on this index |
| indisusable | boolean | True if this index supports insert/select |
| indisvalid | boolean | True if this index is valid for queries. If false, this index is possibly incomplete: it must still be modified by INSERT/UPDATE operations, but it cannot safely be used for queries. If it is a unique index, the uniqueness property does not hold either. |
| indcheckxmin | boolean | If true, queries must not use the index until the xmin of this row in pg_index is below their TransactionXmin event horizon, because the table may contain broken HOT chains with incompatible rows that they can see. |
| indisready | boolean | If true, this index is ready for inserts. If false, this index is ignored when data is inserted or modified. |
| indkey | int2vector | Array of indnatts values indicating which table columns this index indexes. For example, a value of 1 3 means that the first and the third table columns make up the index key. A zero in this array indicates that the corresponding index attribute is an expression over the table columns, rather than a simple column reference. |
| indcollation | oidvector | For each column in the index key, the OID of the collation to use for the index |
| indclass | oidvector | For each column in the index key, the OID of the operator class to use. For details, see PG_OPCLASS. |
| indoption | int2vector | Array of values that store per-column flag bits. The meaning of the bits is defined by the index's access method. |
| indexprs | pg_node_tree | Expression trees (in nodeToString() representation) for index attributes that are not simple column references. This is a list with one element for each zero entry in indkey. NULL if all index attributes are simple references. |
| indpred | pg_node_tree | Expression tree (in nodeToString() representation) for the partial index predicate. NULL if the index is not a partial index. |
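For example, the indexes on a table and their validity can be inspected by joining PG_INDEX with PG_CLASS (a sketch; customer_address is an illustrative table name):

```
SELECT c.relname AS index_name, i.indnatts, i.indisunique, i.indisprimary, i.indisvalid
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE i.indrelid = 'customer_address'::regclass;
```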
PG_INHERITS records information about table inheritance hierarchies. There is one entry for each direct child table in the database. Indirect inheritance can be determined by following chains of entries.

| Name | Type | Reference | Description |
|---|---|---|---|
| inhrelid | oid | PG_CLASS.oid | OID of the child table |
| inhparent | oid | PG_CLASS.oid | OID of the parent table |
| inhseqno | integer | - | If there is more than one direct parent for a child table (multiple inheritance), this number tells the order in which the inherited columns are to be arranged. The count starts at 1. |
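
For example, a minimal query (parent_t is a hypothetical table name) that lists the direct child tables of a parent table:

```
SELECT c.relname AS child_table
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'parent_t'::regclass;
```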
PG_JOBS records detailed information about jobs created by users. Dedicated threads poll the pg_jobs table and trigger jobs based on scheduled job execution time. This table belongs to the Shared Relation category. All job records are visible to all databases.

| Name | Type | Description |
|---|---|---|
| job_id | integer | Job ID, the primary key, unique (with a unique index) |
| what | text | Job content |
| log_user | oid | Username of the job creator |
| priv_user | oid | User ID of the job executor |
| job_db | oid | OID of the database where the job is executed |
| job_nsp | oid | OID of the namespace where the job is running |
| job_node | oid | CN node on which the job will be created and executed |
| is_broken | boolean | Whether the job is invalid. If a job fails 16 consecutive times, is_broken is automatically set to true and the job is no longer executed. |
| start_date | timestamp without time zone | Start time of the first job execution, accurate to milliseconds |
| next_run_date | timestamp without time zone | Scheduled time of the next job execution, accurate to milliseconds |
| failure_count | smallint | Number of consecutive failed executions. If a job fails 16 consecutive times, no more attempts are made on it. |
| interval | text | Job execution interval |
| last_start_date | timestamp without time zone | Start time of the last job execution, accurate to milliseconds |
| last_end_date | timestamp without time zone | End time of the last job execution, accurate to milliseconds |
| last_suc_date | timestamp without time zone | Start time of the last successful job execution, accurate to milliseconds |
| this_run_date | timestamp without time zone | Start time of the ongoing job execution, accurate to milliseconds |
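
For example, a minimal query that shows the schedule and health of all user-created jobs:

```
SELECT job_id, what, next_run_date, failure_count, is_broken
FROM pg_jobs
ORDER BY next_run_date;
```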
PG_LANGUAGE records the programming languages that can be used, together with their call interfaces, to write functions or stored procedures.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| lanname | name | - | Name of the language |
| lanowner | oid | PG_AUTHID.oid | Owner of the language |
| lanispl | boolean | - | The value is false for internal languages (such as SQL) and true for user-defined languages. Currently, gs_dump still uses this to determine which languages need to be dumped, but this might be replaced by a different mechanism in the future. |
| lanpltrusted | boolean | - | The value is true if this is a trusted language, which means that it is believed not to grant access to anything outside the normal SQL execution environment. Only the initial user can create functions in untrusted languages. |
| lanplcallfoid | oid | PG_PROC.oid | For external languages, this references the language handler, which is a special function responsible for executing all functions written in the particular language. |
| laninline | oid | PG_PROC.oid | This references a function that is responsible for executing "inline" anonymous code blocks (DO blocks). The value is 0 if inline blocks are not supported. |
| lanvalidator | oid | PG_PROC.oid | This references a language validator function that is responsible for checking the syntax and validity of new functions when they are created. The value is 0 if no validator is provided. |
| lanacl | aclitem[] | - | Access permissions |
PG_LARGEOBJECT records the data making up large objects. A large object is identified by an OID assigned when it is created. Each large object is broken into segments or "pages" small enough to be conveniently stored as rows in pg_largeobject. The amount of data per page is defined to be LOBLKSIZE (which is currently BLCKSZ/4, or typically 2 kB).
+It is accessible only to users with system administrator rights.

| Name | Type | Reference | Description |
|---|---|---|---|
| loid | oid | - | Identifier of the large object that includes this page |
| pageno | integer | - | Page number of this page within its large object (counting from zero) |
| data | bytea | - | Actual data stored in the large object. This will never be more than LOBLKSIZE bytes and might be less. |
Each row of pg_largeobject holds data for one page of a large object, beginning at byte offset (pageno * LOBLKSIZE) within the object. The implementation allows sparse storage: pages might be missing, and might be shorter than LOBLKSIZE bytes even if they are not the last page of the object. Missing regions within a large object are read as zeroes.
+PG_LARGEOBJECT_METADATA records metadata associated with large objects. The actual large object data is stored in PG_LARGEOBJECT.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| lomowner | oid | PG_AUTHID.oid | Owner of the large object |
| lomacl | aclitem[] | - | Access permissions |
PG_NAMESPACE records the namespaces, that is, schema-related information.

| Name | Type | Description |
|---|---|---|
| nspname | name | Name of the namespace |
| nspowner | oid | Owner of the namespace |
| nsptimeline | bigint | Timeline when the namespace is created on the DN. This column is for internal use and is valid only on the DN. |
| nspacl | aclitem[] | Access permissions. For details, see GRANT and REVOKE. |
| permspace | bigint | Quota of a schema's permanent tablespace |
| usedspace | bigint | Used size of a schema's permanent tablespace |
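
For example, a minimal query that lists all schemas together with their space quotas and usage:

```
SELECT nspname, permspace, usedspace
FROM pg_namespace
ORDER BY nspname;
```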
PG_OBJECT records the creator, creation time, last modification time, and last analysis time of objects of the types listed in object_type.

| Name | Type | Description |
|---|---|---|
| object_oid | oid | Object identifier |
| object_type | "char" | Object type |
| creator | oid | ID of the creator |
| ctime | timestamp with time zone | Object creation time |
| mtime | timestamp with time zone | Time when the object was last modified. By default, the ALTER, COMMENT, GRANT, REVOKE, and TRUNCATE operations are recorded. If light_object_mtime is configured for behavior_compat_options, the GRANT, REVOKE, and TRUNCATE operations are not recorded. |
| last_analyze_time | timestamp with time zone | Time when the object was last analyzed |
PG_OBSSCANINFO defines the OBS runtime information scanned in cluster acceleration scenarios. Each record corresponds to a piece of runtime information of a foreign table on OBS in a query.

| Name | Type | Reference | Description |
|---|---|---|---|
| query_id | bigint | - | Query ID |
| user_id | text | - | Database user who performs queries |
| table_name | text | - | Name of a foreign table on OBS |
| file_type | text | - | Format of the files storing the underlying data |
| time_stamp | timestamp | - | Scanning start time |
| actual_time | double | - | Scanning execution time, in seconds |
| file_scanned | bigint | - | Number of files scanned |
| data_size | double | - | Size of the data scanned, in bytes |
| billing_info | text | - | Reserved column |
PG_OPCLASS defines index access method operator classes.
+Each operator class defines semantics for index columns of a particular data type and a particular index access method. An operator class essentially specifies that a particular operator family is applicable to a particular indexable column data type. The set of operators from the family that are actually usable with the indexed column are whichever ones accept the column's data type as their lefthand input.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| opcmethod | oid | PG_AM.oid | Index access method the operator class is for |
| opcname | name | - | Name of the operator class |
| opcnamespace | oid | PG_NAMESPACE.oid | Namespace to which the operator class belongs |
| opcowner | oid | PG_AUTHID.oid | Owner of the operator class |
| opcfamily | oid | PG_OPFAMILY.oid | Operator family containing the operator class |
| opcintype | oid | PG_TYPE.oid | Data type that the operator class indexes |
| opcdefault | boolean | - | Whether the operator class is the default for opcintype. If it is, the value is true. |
| opckeytype | oid | PG_TYPE.oid | Type of data stored in the index, or zero if same as opcintype |
An operator class's opcmethod must match the opfmethod of its containing operator family. Also, there must be no more than one pg_opclass row having opcdefault true for any given combination of opcmethod and opcintype.
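
For example, a minimal query that finds the default operator class for the integer type under the btree access method:

```
SELECT opc.opcname
FROM pg_opclass opc
JOIN pg_am am ON am.oid = opc.opcmethod
WHERE am.amname = 'btree'
  AND opc.opcintype = 'integer'::regtype
  AND opc.opcdefault;
```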
+PG_OPERATOR records information about operators.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| oprname | name | - | Name of the operator |
| oprnamespace | oid | PG_NAMESPACE.oid | OID of the namespace that contains this operator |
| oprowner | oid | PG_AUTHID.oid | Owner of the operator |
| oprkind | "char" | - | Kind of the operator: b (infix, with both a left and a right operand), l (prefix, no left operand), or r (postfix, no right operand) |
| oprcanmerge | boolean | - | Whether the operator supports merge joins |
| oprcanhash | boolean | - | Whether the operator supports hash joins |
| oprleft | oid | PG_TYPE.oid | Type of the left operand |
| oprright | oid | PG_TYPE.oid | Type of the right operand |
| oprresult | oid | PG_TYPE.oid | Type of the result |
| oprcom | oid | PG_OPERATOR.oid | Commutator of this operator, if any |
| oprnegate | oid | PG_OPERATOR.oid | Negator of this operator, if any |
| oprcode | regproc | PG_PROC.oid | Function that implements this operator |
| oprrest | regproc | PG_PROC.oid | Restriction selectivity estimation function for this operator |
| oprjoin | regproc | PG_PROC.oid | Join selectivity estimation function for this operator |
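
For example, a minimal query that lists the implementations of the + operator and their operand types:

```
SELECT o.oprname,
       o.oprleft::regtype  AS left_type,
       o.oprright::regtype AS right_type,
       o.oprresult::regtype AS result_type
FROM pg_operator o
WHERE o.oprname = '+';
```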
PG_OPFAMILY defines operator families.
+Each operator family is a collection of operators and associated support routines that implement the semantics specified for a particular index access method. Furthermore, the operators in a family are all "compatible", in a way that is specified by the access method. The operator family concept allows cross-data-type operators to be used with indexes and to be reasoned about using knowledge of access method semantics.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| opfmethod | oid | PG_AM.oid | Index access method the operator family is for |
| opfname | name | - | Name of the operator family |
| opfnamespace | oid | PG_NAMESPACE.oid | Namespace of the operator family |
| opfowner | oid | PG_AUTHID.oid | Owner of the operator family |
The majority of the information defining an operator family is not in PG_OPFAMILY, but in the associated PG_AMOP, PG_AMPROC, and PG_OPCLASS.
+PG_PARTITION records all partitioned tables, table partitions, toast tables on table partitions, and index partitions in the database. Partitioned index information is not stored in the PG_PARTITION system catalog.

| Name | Type | Description |
|---|---|---|
| relname | name | Name of the partitioned table, table partition, TOAST table on a table partition, or index partition |
| parttype | "char" | Object type: r indicates a partitioned table, p indicates a table partition, x indicates an index partition, and t indicates a TOAST table. |
| parentid | oid | OID of the partitioned table in PG_CLASS when the object is a partitioned table or table partition; OID of the partitioned index when the object is an index partition |
| rangenum | integer | Reserved column |
| intervalnum | integer | Reserved column |
| partstrategy | "char" | Partition policy of the partitioned table. The following policies are supported: r indicates the range partition; v indicates the value partition. |
| relfilenode | oid | Physical storage location of the table partition, index partition, or TOAST table on a table partition |
| reltablespace | oid | OID of the tablespace containing the table partition, index partition, or TOAST table on a table partition |
| relpages | double precision | Statistics: number of data pages of the table partition or index partition |
| reltuples | double precision | Statistics: number of tuples of the table partition or index partition |
| relallvisible | integer | Statistics: number of visible data pages of the table partition or index partition |
| reltoastrelid | oid | OID of the TOAST table corresponding to the table partition |
| reltoastidxid | oid | OID of the TOAST table index corresponding to the table partition |
| indextblid | oid | OID of the table partition corresponding to the index partition |
| indisusable | boolean | Whether the index partition is available |
| reldeltarelid | oid | OID of a delta table |
| reldeltaidx | oid | OID of the index for a delta table |
| relcudescrelid | oid | OID of a CU description table |
| relcudescidx | oid | OID of the index for a CU description table |
| relfrozenxid | xid32 | Frozen transaction ID. To ensure forward compatibility, this column is reserved; the relfrozenxid64 column now records this information. |
| intspnum | integer | Number of tablespaces that the interval partition belongs to |
| partkey | int2vector | Column numbers of the partition key |
| intervaltablespace | oidvector | Tablespaces that the interval partition belongs to. Interval partitions fall into these tablespaces in a round-robin manner. |
| interval | text[] | Interval value of the interval partition |
| boundaries | text[] | Upper boundary of the range partition or interval partition |
| transit | text[] | Transit of the interval partition |
| reloptions | text[] | Storage property of a partition, used for collecting online scale-out information. Like pg_class.reloptions, it is a keyword=value string. |
| relfrozenxid64 | xid | Frozen transaction ID |
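
For example, a minimal query (sales_part is a hypothetical partitioned table name, and parttype = 'p' selects table partitions as described above) that lists the partitions of a table and their upper boundaries:

```
SELECT p.relname AS partition_name, p.boundaries
FROM pg_partition p
WHERE p.parentid = 'sales_part'::regclass
  AND p.parttype = 'p';
```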
PG_PLTEMPLATE records template information for procedural languages.

| Name | Type | Description |
|---|---|---|
| tmplname | name | Name of the language for which this template is used |
| tmpltrusted | boolean | The value is true if the language is considered trusted. |
| tmpldbacreate | boolean | The value is true if the language can be created by the owner of the database. |
| tmplhandler | text | Name of the call handler function |
| tmplinline | text | Name of the anonymous block handler. If no block handler exists, the value is null. |
| tmplvalidator | text | Name of the validation function. If no validation function is available, the value is null. |
| tmpllibrary | text | Path of the shared library that implements the language |
| tmplacl | aclitem[] | Access permissions for the template (not yet used) |
PG_PROC records information about functions or procedures.

| Name | Type | Description |
|---|---|---|
| proname | name | Name of the function |
| pronamespace | oid | OID of the namespace that contains the function |
| proowner | oid | Owner of the function |
| prolang | oid | Implementation language or call interface of the function |
| procost | real | Estimated execution cost |
| prorows | real | Estimated number of result rows |
| provariadic | oid | Data type of the variadic array parameter's elements, or 0 if the function has no variadic parameter |
| protransform | regproc | Simplified call method for this function |
| proisagg | boolean | Whether this function is an aggregate function |
| proiswindow | boolean | Whether this function is a window function |
| prosecdef | boolean | Whether this function is a security definer (such as a "setuid" function) |
| proleakproof | boolean | Whether this function is leakproof, that is, has no side effects. A function that does not provide leakproof treatment for its parameters may throw errors that reveal parameter values. |
| proisstrict | boolean | The function returns null if any call parameter is null; in that case, the function is not actually called at all. Functions that are not "strict" must be prepared to handle null inputs. |
| proretset | boolean | The function returns a set, that is, multiple values of the specified data type. |
| provolatile | "char" | Whether the function's result depends only on its input parameters or is affected by outside factors: i (immutable, the result is always the same for the same inputs), s (stable, the result does not change for the same inputs within a single scan), or v (volatile, the result may differ at any time) |
| pronargs | smallint | Number of parameters |
| pronargdefaults | smallint | Number of parameters that have default values |
| prorettype | oid | OID of the return type |
| proargtypes | oidvector | Array with the data types of the function parameters. This array includes only input parameters (including INOUT parameters) and thus represents the call signature of the function. |
| proallargtypes | oid[] | Array with the data types of the function parameters. This array includes all parameter types (including OUT and INOUT parameters); however, if all the parameters are IN parameters, this column is null. Note that subscripting of this array is 1-based, whereas for historical reasons proargtypes is subscripted from 0. |
| proargmodes | "char"[] | Array with the modes of the function parameters: i (IN), o (OUT), b (INOUT), v (VARIADIC), or t (TABLE). If all the parameters are IN parameters, this column is null. Note that subscripts of this array correspond to positions of proallargtypes, not proargtypes. |
| proargnames | text[] | Array that stores the names of the function parameters. Parameters without a name are set to empty strings in the array. If none of the parameters have a name, this column is null. Note that subscripts correspond to positions of proallargtypes, not proargtypes. |
| proargdefaults | pg_node_tree | Expression trees of the default values. This is a list with pronargdefaults elements. |
| prosrc | text | Definition of the function or stored procedure. Depending on the language and call convention, this may be the function source code, a link symbol, a file name, or any body content specified when the function or stored procedure is created. |
| probin | text | Additional information about how to call the function. Again, the interpretation is language-specific. |
| proconfig | text[] | Function's local settings for run-time configuration variables |
| proacl | aclitem[] | Access permissions. For details, see GRANT and REVOKE. |
| prodefaultargpos | int2vector | Positions of the parameters that have default values. Default values are not restricted to the last few parameters. |
| fencedmode | boolean | Execution mode of the function, indicating whether it runs in fenced or non-fenced mode. In fenced mode, the function is executed in a separate forked worker process. The default value is fenced. |
| proshippable | boolean | Whether the function can be pushed down to DNs. The default value is false. Functions of the IMMUTABLE type can always be pushed down to DNs; functions of the STABLE or VOLATILE type can be pushed down only if this attribute is true. |
| propackage | boolean | Whether the function supports overloading, mainly used for Oracle-style functions. The default value is false. |
Query the OID of a specified function. For example, obtain the OID 1295 of the justify_days function:

```
SELECT oid FROM pg_proc WHERE proname = 'justify_days';
 oid
------
 1295
(1 row)
```

Query whether a function is an aggregate function. For example, the justify_days function is a non-aggregate function:

```
SELECT proisagg FROM pg_proc WHERE proname = 'justify_days';
 proisagg
----------
 f
(1 row)
```
PG_RANGE records information about range types.
+This is in addition to the types' entries in PG_TYPE.

| Name | Type | Reference | Description |
|---|---|---|---|
| rngtypid | oid | PG_TYPE.oid | OID of the range type |
| rngsubtype | oid | PG_TYPE.oid | OID of the element type (subtype) of this range type |
| rngcollation | oid | PG_COLLATION.oid | OID of the collation used for range comparisons, or 0 if none |
| rngsubopc | oid | PG_OPCLASS.oid | OID of the subtype's operator class used for range comparisons |
| rngcanonical | regproc | PG_PROC.oid | OID of the function to convert a range value into canonical form, or 0 if none |
| rngsubdiff | regproc | PG_PROC.oid | OID of the function to return the difference between two element values as double precision, or 0 if none |
rngsubopc (plus rngcollation, if the element type is collatable) determines the sort ordering used by the range type. rngcanonical is used when the element type is discrete.
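
For example, a minimal query that lists each range type and its element type:

```
SELECT r.rngtypid::regtype  AS range_type,
       r.rngsubtype::regtype AS element_type
FROM pg_range r;
```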
+PG_REDACTION_COLUMN records the information about the redacted columns.

| Name | Type | Description |
|---|---|---|
| object_oid | oid | OID of the object to be redacted |
| column_attrno | smallint | attrno of the redacted column |
| function_type | integer | Redaction type. NOTE: This column is reserved. It is used only for forward compatibility of redacted column information in earlier versions. The value can be 0 (NONE) or 1 (FULL). |
| function_parameters | text | Parameters used when the redaction type is partial (reserved) |
| regexp_pattern | text | Pattern string when the redaction type is regexp (reserved) |
| regexp_replace_string | text | Replacement string when the redaction type is regexp (reserved) |
| regexp_position | integer | Start and end replacement positions when the redaction type is regexp (reserved) |
| regexp_occurrence | integer | Replacement times when the redaction type is regexp (reserved) |
| regexp_match_parameter | text | Regular expression control parameter used when the redaction type is regexp (reserved) |
| column_description | text | Description of the redacted column |
| function_expr | pg_node_tree | Internal representation of the redaction function |
PG_REDACTION_POLICY records information about the object to be redacted.

| Name | Type | Description |
|---|---|---|
| object_oid | oid | OID of the object to be redacted |
| policy_name | name | Name of the redaction policy |
| enable | boolean | Policy status. NOTE: The value can be true (the policy is enabled) or false (the policy is disabled). |
| expression | pg_node_tree | Policy effective expression (for users) |
| policy_description | text | Description of the policy |
PG_RLSPOLICY displays the information about row-level access control policies.

| Name | Type | Description |
|---|---|---|
| polname | name | Name of the row-level access control policy |
| polrelid | oid | OID of the table to which the row-level access control policy applies |
| polcmd | char | SQL operations affected by the row-level access control policy. The options are * (ALL), r (SELECT), w (UPDATE), and d (DELETE). |
| polpermissive | boolean | Type of the row-level access control policy. NOTE: true indicates a permissive policy; false indicates a restrictive policy. |
| polroles | oid[] | OIDs of the database users affected by the row-level access control policy |
| polqual | pg_node_tree | SQL condition expression of the row-level access control policy |
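
For example, a minimal query that shows the row-level access control policies defined on each table:

```
SELECT c.relname AS table_name,
       p.polname,
       p.polcmd
FROM pg_rlspolicy p
JOIN pg_class c ON c.oid = p.polrelid;
```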
PG_RESOURCE_POOL records information about database resource pools.

| Name | Type | Description |
|---|---|---|
| respool_name | name | Name of the resource pool |
| mem_percent | integer | Percentage of the memory configuration |
| cpu_affinity | bigint | Value of the cores bound to the CPU |
| control_group | name | Name of the Cgroup where the resource pool is located |
| active_statements | integer | Maximum number of concurrent statements in the resource pool |
| max_dop | integer | Maximum concurrency. This is a reserved parameter. |
| memory_limit | name | Maximum memory of the resource pool |
| parentid | oid | OID of the parent resource pool |
| io_limits | integer | Upper limit of IOPS. It is counted by ones for column storage and by tens of thousands for row storage. |
| io_priority | text | I/O priority set for jobs that consume many I/O resources. It takes effect when the I/O usage reaches 90%. |
| is_foreign | boolean | Whether the resource pool can be used by users outside the logical cluster. If true, the resource pool controls the resources of common users who do not belong to the current resource pool. |
PG_REWRITE records rewrite rules defined for tables and views.

| Name | Type | Description |
|---|---|---|
| rulename | name | Rule name |
| ev_class | oid | OID of the table that uses the rule |
| ev_attr | smallint | Column this rule is for (always 0 to indicate the entire table) |
| ev_type | "char" | Event type for this rule: 1 (SELECT), 2 (UPDATE), 3 (INSERT), or 4 (DELETE) |
| ev_enabled | "char" | Controls in which mode the rule fires: O (fires in "origin" and "local" modes), D (disabled), R (fires in "replica" mode), or A (always fires) |
| is_instead | boolean | The value is true if the rule is an INSTEAD rule. |
| ev_qual | pg_node_tree | Expression tree (in the form of a nodeToString() representation) for the rule's qualifying condition |
| ev_action | pg_node_tree | Query tree (in the form of a nodeToString() representation) for the rule's action |
PG_SECLABEL records security labels on database objects.
+See also PG_SHSECLABEL, which performs a similar function for security labels of database objects that are shared across a database cluster.

| Name | Type | Reference | Description |
|---|---|---|---|
| objoid | oid | Any OID column | OID of the object this security label pertains to |
| classoid | oid | PG_CLASS.oid | OID of the system catalog that contains the object |
| objsubid | integer | - | For a security label on a table column, this is the column number. |
| provider | text | - | Label provider associated with this label |
| label | text | - | Security label applied to this object |
PG_SHDEPEND records the dependency relationships between database objects and shared objects, such as roles. This information allows GaussDB(DWS) to ensure that those objects are unreferenced before attempting to delete them.
+See also PG_DEPEND, which performs a similar function for dependencies involving objects within a single database.
+Unlike most system catalogs, PG_SHDEPEND is shared across all databases of a cluster: there is only one copy of PG_SHDEPEND per cluster, not one per database.

| Name | Type | Reference | Description |
|---|---|---|---|
| dbid | oid | PG_DATABASE.oid | OID of the database the dependent object is in. The value is 0 for a shared object. |
| classid | oid | PG_CLASS.oid | OID of the system catalog the dependent object is in |
| objid | oid | Any OID column | OID of the specific dependent object |
| objsubid | integer | - | For a table column, this is the column number (the objid and classid refer to the table itself). For all other object types, this column is 0. |
| refclassid | oid | PG_CLASS.oid | OID of the system catalog the referenced object is in (must be a shared catalog) |
| refobjid | oid | Any OID column | OID of the specific referenced object |
| deptype | "char" | - | A code defining the specific semantics of this dependency relationship. See below for details. |
| objfile | text | - | Path of the user-defined C function library file |
In all cases, a pg_shdepend entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors defined by deptype:

- SHARED_DEPENDENCY_OWNER (o): The referenced object (which must be a role) is the owner of the dependent object.
- SHARED_DEPENDENCY_ACL (a): The referenced object (which must be a role) is mentioned in the ACL (access control list, i.e., privileges list) of the dependent object. (A SHARED_DEPENDENCY_ACL entry is not made for the owner of the object, since the owner will have a SHARED_DEPENDENCY_OWNER entry anyway.)
- SHARED_DEPENDENCY_PIN (p): There is no dependent object. This type of entry is a signal that the system itself depends on the referenced object, and so that object must never be deleted. Entries of this type are created only by initdb. The columns for the dependent object contain zeroes.
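
For example, a minimal query (role_name is a hypothetical role) that checks which databases contain objects depending on a role before attempting to drop it:

```
SELECT DISTINCT d.datname
FROM pg_shdepend s
JOIN pg_database d ON d.oid = s.dbid
WHERE s.refobjid = (SELECT oid FROM pg_roles WHERE rolname = 'role_name');
```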
+PG_SHDESCRIPTION records optional comments for shared database objects. Descriptions can be manipulated with the COMMENT command and viewed with psql's \d commands.
+See also PG_DESCRIPTION, which performs a similar function for descriptions involving objects within a single database.
+Unlike most system catalogs, PG_SHDESCRIPTION is shared across all databases of a cluster. There is only one copy of PG_SHDESCRIPTION per cluster, not one per database.

| Name | Type | Reference | Description |
|---|---|---|---|
| objoid | oid | Any OID column | OID of the object this description pertains to |
| classoid | oid | PG_CLASS.oid | OID of the system catalog where the object resides |
| description | text | - | Arbitrary text that serves as the description of this object |
PG_SHSECLABEL records security labels on shared database objects. Security labels can be manipulated with the SECURITY LABEL command.
+For an easier way to view security labels, see PG_SECLABELS.
+See also PG_SECLABEL, which performs a similar function for security labels involving objects within a single database.
+Unlike most system catalogs, PG_SHSECLABEL is shared across all databases of a cluster. There is only one copy of PG_SHSECLABEL per cluster, not one per database.

| Name | Type | Reference | Description |
|---|---|---|---|
| objoid | oid | Any OID column | OID of the object this security label pertains to |
| classoid | oid | PG_CLASS.oid | OID of the system catalog where the object resides |
| provider | text | - | Label provider associated with this label |
| label | text | - | Security label applied to this object |
PG_STATISTIC records statistics about tables and index columns in a database. It is accessible only to users with system administrator rights.

| Name | Type | Description |
|---|---|---|
| starelid | oid | Table or index that the described column belongs to |
| starelkind | "char" | Type of the object |
| staattnum | smallint | Number of the described column in the table, starting from 1 |
| stainherit | boolean | Whether statistics are collected for objects that have an inheritance relationship |
| stanullfrac | real | Percentage of column entries that are null |
| stawidth | integer | Average stored width, in bytes, of non-null entries |
| stadistinct | real | Number of distinct, non-null data values in the column for all DNs. A value greater than 0 is the actual number of distinct values. A value less than 0 is the negative of a fraction of the number of rows (for example, -1 indicates a unique column). The value 0 means the number of distinct values is unknown. |
| stakindN | smallint | Code number indicating the kind of statistics stored in slot N of the pg_statistic row. N ranges from 1 to 5. |
| staopN | oid | Operator used to generate the statistics stored in slot N. For example, a histogram slot would show the < operator that defines the sort order of the data. N ranges from 1 to 5. |
| stanumbersN | real[] | Numerical statistics of the appropriate kind for slot N. The value is null if the slot kind does not involve numerical values. N ranges from 1 to 5. |
| stavaluesN | anyarray | Column data values of the appropriate kind for slot N. The value is null if the slot kind does not store any data values. Each array's element values are actually of the specific column's data type, so there is no way to define these columns' type more specifically than anyarray. N ranges from 1 to 5. |
| stadndistinct | real | Number of distinct, non-null data values in the column on the first DN (dn1). A value greater than 0 is the actual number of distinct values. A value less than 0 is the negative of a fraction of the number of rows. The value 0 means the number of distinct values is unknown. |
| staextinfo | text | Information about extension statistics (reserved) |
PG_STATISTIC_EXT records the extended statistics of tables in a database, such as statistics of multiple columns. Statistics of expressions will be supported later. You can specify the extended statistics to be collected. It is accessible only to users with system administrator rights.

| Parameter | Type | Description |
|---|---|---|
| starelid | oid | Table or index that the described columns belong to |
| starelkind | "char" | Type of the object |
| stainherit | boolean | Whether statistics are collected for objects that have an inheritance relationship |
| stanullfrac | real | Percentage of column entries that are null |
| stawidth | integer | Average stored width, in bytes, of non-null entries |
| stadistinct | real | Number of distinct, non-null data values in the columns for all DNs. A value greater than 0 is the actual number of distinct values. A value less than 0 is the negative of a fraction of the number of rows. The value 0 means the number of distinct values is unknown. |
| stadndistinct | real | Number of distinct, non-null data values in the columns on the first DN (dn1). A value greater than 0 is the actual number of distinct values. A value less than 0 is the negative of a fraction of the number of rows. The value 0 means the number of distinct values is unknown. |
| stakindN | smallint | Code number indicating the kind of statistics stored in slot N of the pg_statistic row. N ranges from 1 to 5. |
| staopN | oid | Operator used to generate the statistics stored in slot N. For example, a histogram slot would show the < operator that defines the sort order of the data. N ranges from 1 to 5. |
| stakey | int2vector | Array of column IDs |
| stanumbersN | real[] | Numerical statistics of the appropriate kind for slot N. The value is null if the slot kind does not involve numerical values. N ranges from 1 to 5. |
| stavaluesN | anyarray | Column data values of the appropriate kind for slot N. The value is null if the slot kind does not store any data values. Each array's element values are actually of the specific column's data type, so there is no way to define these columns' type more specifically than anyarray. N ranges from 1 to 5. |
| staexprs | pg_node_tree | Expression corresponding to the extended statistics |
PG_SYNONYM records the mapping between synonym object names and other database object names.

| Name | Type | Description |
|---|---|---|
| synname | name | Synonym name |
| synnamespace | oid | OID of the namespace where the synonym is located |
| synowner | oid | Owner of the synonym, usually the OID of the user who created it |
| synobjschema | name | Schema name of the associated object |
| synobjname | name | Name of the associated object |
PG_TABLESPACE records tablespace information.

| Name | Type | Description |
|---|---|---|
| spcname | name | Name of the tablespace |
| spcowner | oid | Owner of the tablespace, usually the user who created it |
| spcacl | aclitem[] | Access permissions. For details, see GRANT and REVOKE. |
| spcoptions | text[] | Options of the tablespace |
| spcmaxsize | text | Maximum size of the available disk space, in bytes |
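
For example, a minimal query that lists all tablespaces and their owners:

```
SELECT t.spcname, r.rolname AS owner
FROM pg_tablespace t
JOIN pg_roles r ON r.oid = t.spcowner;
```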
PG_TRIGGER records the trigger information.

| Name | Type | Description |
|---|---|---|
| tgrelid | oid | OID of the table on which the trigger is defined |
| tgname | name | Trigger name |
| tgfoid | oid | OID of the function called by the trigger |
| tgtype | smallint | Trigger type |
| tgenabled | "char" | O: the trigger fires in "origin" or "local" mode. D: the trigger is disabled. R: the trigger fires in "replica" mode. A: the trigger always fires. |
| tgisinternal | boolean | Whether the trigger is an internal trigger. The value true indicates an internal trigger. |
| tgconstrrelid | oid | Table referenced by the integrity constraint |
| tgconstrindid | oid | Index of the integrity constraint |
| tgconstraint | oid | OID of the constraint trigger in pg_constraint |
| tgdeferrable | boolean | Whether the constraint trigger is of the DEFERRABLE type |
| tginitdeferred | boolean | Whether the trigger is of the INITIALLY DEFERRED type |
| tgnargs | smallint | Number of input parameters of the trigger function |
| tgattr | int2vector | Column IDs specified by the trigger. If no column is specified, this is an empty array. |
| tgargs | bytea | Parameters passed to the trigger |
| tgqual | pg_node_tree | WHEN condition of the trigger. The value is null if the trigger has no WHEN condition. |
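
For example, a minimal query (customer is a hypothetical table name) that lists the triggers defined on a table and their firing status:

```
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'customer'::regclass;
```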
PG_TS_CONFIG records entries representing text search configurations. A configuration specifies a particular text search parser and a list of dictionaries to use for each of the parser's output token types.
+The parser is shown in the PG_TS_CONFIG entry, but the token-to-dictionary mapping is defined by subsidiary entries in PG_TS_CONFIG_MAP.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| cfgname | name | - | Text search configuration name |
| cfgnamespace | oid | PG_NAMESPACE.oid | OID of the namespace where the configuration resides |
| cfgowner | oid | PG_AUTHID.oid | Owner of the configuration |
| cfgparser | oid | PG_TS_PARSER.oid | OID of the text search parser for this configuration |
| cfoptions | text[] | - | Configuration options |
PG_TS_CONFIG_MAP records entries showing which text search dictionaries should be consulted, and in what order, for each output token type of each text search configuration's parser.

| Name | Type | Reference | Description |
|---|---|---|---|
| mapcfg | oid | PG_TS_CONFIG.oid | OID of the PG_TS_CONFIG entry owning this map entry |
| maptokentype | integer | - | Token type emitted by the configuration's parser |
| mapseqno | integer | - | Order in which to consult this entry |
| mapdict | oid | PG_TS_DICT.oid | OID of the text search dictionary to consult |
PG_TS_DICT records entries that define text search dictionaries. A dictionary depends on a text search template, which specifies all the implementation functions needed. The dictionary itself provides values for the user-settable parameters supported by the template.
+This division of labor allows dictionaries to be created by unprivileged users. The parameters are specified by a text string dictinitoption, whose format and meaning vary depending on the template.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| dictname | name | - | Text search dictionary name |
| dictnamespace | oid | PG_NAMESPACE.oid | OID of the namespace that contains the dictionary |
| dictowner | oid | PG_AUTHID.oid | Owner of the dictionary |
| dicttemplate | oid | PG_TS_TEMPLATE.oid | OID of the text search template for this dictionary |
| dictinitoption | text | - | Initialization option string for the template |
PG_TS_PARSER records entries defining text search parsers. A parser splits input text into lexemes and assigns a token type to each lexeme. Since a parser must be implemented by C functions, parsers can be created only by database administrators.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| prsname | name | - | Text search parser name |
| prsnamespace | oid | PG_NAMESPACE.oid | OID of the namespace that contains the parser |
| prsstart | regproc | PG_PROC.oid | OID of the parser's startup function |
| prstoken | regproc | PG_PROC.oid | OID of the parser's next-token function |
| prsend | regproc | PG_PROC.oid | OID of the parser's shutdown function |
| prsheadline | regproc | PG_PROC.oid | OID of the parser's headline function |
| prslextype | regproc | PG_PROC.oid | OID of the parser's lextype function |
PG_TS_TEMPLATE records entries defining text search templates. A template provides a framework for text search dictionaries. Since a template must be implemented by C functions, templates can be created only by database administrators.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| tmplname | name | - | Text search template name |
| tmplnamespace | oid | PG_NAMESPACE.oid | OID of the namespace that contains the template |
| tmplinit | regproc | PG_PROC.oid | OID of the template's initialization function |
| tmpllexize | regproc | PG_PROC.oid | OID of the template's lexize function |
PG_TYPE records the information about data types.

| Name | Type | Description |
|---|---|---|
| typname | name | Data type name |
| typnamespace | oid | OID of the namespace that contains this type |
| typowner | oid | Owner of this type |
| typlen | smallint | Number of bytes in the internal representation of the type for a fixed-size type. For a variable-length type, typlen is negative: -1 indicates a "varlena" type (one that has a length word), and -2 indicates a null-terminated C string. |
| typbyval | boolean | Whether values of this type are passed by value or by reference. typbyval must be false if typlen is not 1, 2, 4, or 8, because values of such types are always passed by reference. typbyval can be false even if the length would permit passing by value. |
| typtype | char | Type category: b (base type), c (composite type, such as a table's row type), d (domain), e (enum type), p (pseudo-type), or r (range type). For details, see typrelid and typbasetype. |
| typcategory | char | Arbitrary classification of the data type that is used by the parser to determine which implicit casts should be "preferred" |
| typispreferred | boolean | Whether the type is a preferred cast target within its typcategory. If it is, the value is true. |
| typisdefined | boolean | The value is true if the type is defined, and false if this is a placeholder entry for a not-yet-defined type. When it is false, nothing is reliable except the type name, namespace, and OID. |
| typdelim | char | Character that separates two values of this type when parsing array input. Note that the delimiter is associated with the array element data type, not the array data type. |
| typrelid | oid | If this is a composite type (see typtype), this column points to the pg_class entry that defines the corresponding table. For a free-standing composite type, the pg_class entry does not represent a table, but it is required for the type's pg_attribute entries to link to. The value is 0 for non-composite types. |
| typelem | oid | If typelem is not 0, it identifies another row in pg_type: the current type can then be subscripted like an array yielding values of type typelem. A "true" array type is variable length (typlen = -1), but some fixed-length (typlen > 0) types also have a nonzero typelem, for example name and point. If a fixed-length type has a typelem, its internal representation must be some number of values of the typelem data type with no other data. Variable-length array types have a header defined by the array subroutines. |
| typarray | oid | If the value is not 0, it indicates that a corresponding array type entry is available in pg_type. |
| typinput | regproc | Input conversion function (text format) |
| typoutput | regproc | Output conversion function (text format) |
| typreceive | regproc | Input conversion function (binary format), or 0 if none |
| typsend | regproc | Output conversion function (binary format), or 0 if none |
| typmodin | regproc | Type modifier input function, or 0 if the type does not support modifiers |
| typmodout | regproc | Type modifier output function, or 0 if the type does not support modifiers |
| typanalyze | regproc | Custom ANALYZE function, or 0 if the standard function is used |
| typalign | char | Alignment required when storing a value of this type. It applies to storage on disk as well as most representations of the value inside PostgreSQL. When multiple values are stored consecutively, such as in the representation of a complete row on disk, padding is inserted before a datum of this type so that it begins on the specified boundary. The alignment reference is the beginning of the first datum in the sequence. Possible values are: c (char alignment, that is, no alignment needed), s (short alignment, 2 bytes on most machines), i (int alignment, 4 bytes on most machines), and d (double alignment, 8 bytes on many machines, but by no means all). NOTICE: For types used in system catalogs, the size and alignment defined in pg_type must agree with the way that the compiler lays out the column in a structure representing a table row. |
| typstorage | char | For varlena types (those with typlen = -1), typstorage tells whether the type is prepared for toasting and what the default strategy for attributes of this type should be. Possible values are: p (values must always be stored plain), e (values can be stored in a secondary "TOAST" relation), m (values can be stored compressed inline), and x (values can be stored compressed inline or stored in a secondary relation). NOTICE: m values can also be moved out to secondary storage, but only as a last resort (e and x values are moved first). |
| typnotnull | boolean | Represents a NOT NULL constraint on the type. Currently used for domains only. |
| typbasetype | oid | If this is a domain (see typtype), typbasetype identifies the type that this one is based on. The value is 0 if this type is not a derived type. |
| typtypmod | integer | typmod to be applied to a domain's base type (the value is -1 if the base type does not use typmod). The value is -1 if this type is not a domain. |
| typndims | integer | Number of array dimensions for a domain that is an array (that is, typbasetype is an array type; the domain's typelem matches the base type's typelem). The value is 0 for types other than domains over array types. |
| typcollation | oid | Collation specified for the type. Collation is not supported if the value is 0. |
| typdefaultbin | pg_node_tree | If the value is non-null, it is the nodeToString() representation of a default expression for the type. Currently, this column is used only for domains. |
| typdefault | text | The value is null if the type has no associated default value. If typdefaultbin is not null, typdefault must contain a human-readable version of the default expression represented by typdefaultbin. If typdefaultbin is null and typdefault is not, then typdefault is the external representation of the type's default value, which can be fed to the type's input converter to produce a constant. |
| typacl | aclitem[] | Access permissions |
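
For example, a minimal query that shows the storage characteristics of a given type:

```
SELECT typname, typlen, typbyval, typalign, typstorage
FROM pg_type
WHERE typname = 'numeric';
```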
PG_USER_MAPPING records the mappings from local users to remote users.

It is accessible only to users with system administrator rights. Common users can query this information through the PG_USER_MAPPINGS view.

| Name | Type | Reference | Description |
|---|---|---|---|
| oid | oid | - | Row identifier (hidden attribute; must be explicitly selected) |
| umuser | oid | PG_AUTHID.oid | OID of the local role being mapped, or 0 if the user mapping is public |
| umserver | oid | PG_FOREIGN_SERVER.oid | OID of the foreign server that contains this mapping |
| umoptions | text[] | - | User mapping options. It is a keyword=value string. |
PG_USER_STATUS records the status of users that access the database. It is accessible only to users with system administrator rights.

| Name | Type | Description |
|---|---|---|
| roloid | oid | ID of the role |
| failcount | integer | Number of failed login attempts |
| locktime | timestamp with time zone | Time at which the role was locked |
| rolstatus | smallint | Role status |
| permspace | bigint | Size of the permanent table storage space used by the role in the current instance |
| tempspace | bigint | Size of the temporary table storage space used by the role in the current instance |
PG_WORKLOAD_ACTION records information about query_band.

| Name | Type | Description |
|---|---|---|
| qband | name | query_band key-value pairs |
| class | name | Class of the object associated with query_band |
| object | name | Object associated with query_band |
| action | name | Action of the object associated with query_band |
PGXC_CLASS records the replication or distribution information for each table.

| Name | Type | Description |
|---|---|---|
| pcrelid | oid | Table OID |
| pclocatortype | "char" | Locator type |
| pchashalgorithm | smallint | Hash algorithm used to distribute tuples |
| pchashbuckets | smallint | Number of hash buckets |
| pgroup | name | Name of the node group |
| redistributed | "char" | Whether the table has been redistributed |
| redis_order | integer | Redistribution sequence |
| pcattnum | int2vector | Column numbers used as the distribution key |
| nodeoids | oidvector_extend | List of node OIDs over which the table is distributed |
| options | text | Extension status information. This is a reserved column in the system. |
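
For example, a minimal query (customer is a hypothetical table name) that shows how a table is distributed across the cluster:

```
SELECT pclocatortype, pgroup, pcattnum
FROM pgxc_class
WHERE pcrelid = 'customer'::regclass;
```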
PGXC_GROUP records information about node groups.

| Name | Type | Description |
|---|---|---|
| group_name | name | Name of the node group |
| in_redistribution | "char" | Whether redistribution is required |
| group_members | oidvector_extend | Node OID list of the node group |
| group_buckets | text | Distributed data bucket group |
| is_installation | boolean | Whether the node group is the installation node group |
| group_acl | aclitem[] | Access permissions |
| group_kind | "char" | Node group type |
PGXC_NODE records information about cluster nodes.

| Name | Type | Description |
|---|---|---|
| node_name | name | Node name |
| node_type | "char" | Node type: C (CN) or D (DN) |
| node_port | integer | Port number of the node |
| node_host | name | Host name or IP address of the node. (If a virtual IP address is configured, the value is the virtual IP address.) |
| node_port1 | integer | Port number of the replication node |
| node_host1 | name | Host name or IP address of the replication node. (If a virtual IP address is configured, the value is the virtual IP address.) |
| hostis_primary | boolean | Whether a switchover has occurred between the primary and standby servers on the current node |
| nodeis_primary | boolean | Whether the current node is preferred to execute non-query operations on replication tables |
| nodeis_preferred | boolean | Whether the current node is preferred to execute queries on replication tables |
| node_id | integer | Node identifier |
| sctp_port | integer | Port used by the TCP proxy communication library or SCTP communication library of the primary node to listen on the data channel |
| control_port | integer | Port used by the TCP proxy communication library or SCTP communication library of the primary node to listen on the control channel |
| sctp_port1 | integer | Port used by the TCP proxy communication library or SCTP communication library of the standby node to listen on the data channel |
| control_port1 | integer | Port used by the TCP proxy communication library or SCTP communication library of the standby node to listen on the control channel |
| nodeis_central | boolean | Whether the current node is the central node |
Query the CN and DN information of the cluster:

```
select * from pgxc_node;
  node_name   | node_type | node_port |   node_host    | node_port1 |   node_host1   | hostis_primary | nodeis_primary | nodeis_preferred |   node_id   | sctp_port | control_port | sctp_port1 | control_port1 | nodeis_central
--------------+-----------+-----------+----------------+------------+----------------+----------------+----------------+------------------+-------------+-----------+--------------+------------+---------------+----------------
 dn_6001_6002 | D         |     40000 | 172.**.***.**1 |      45000 | 172.**.**.**2  | t              | f              | f                |  1644780306 |     40002 |        40003 |      45002 |         45003 | f
 dn_6003_6004 | D         |     40000 | 172.**.**.**2  |      45000 | 172.**.**.**3  | t              | f              | f                |  -966646068 |     40002 |        40003 |      45002 |         45003 | f
 dn_6005_6006 | D         |     40000 | 172.**.**.**3  |      45000 | 172.**.***.**1 | t              | f              | f                |   868850011 |     40002 |        40003 |      45002 |         45003 | f
 cn_5001      | C         |      8000 | 172.**.***.**1 |       8000 | 172.**.***.**1 | t              | f              | f                |  1120683504 |      8002 |         8003 |          0 |             0 | f
 cn_5002      | C         |      8000 | 172.**.**.**2  |       8000 | 172.**.**.**2  | t              | f              | f                | -1736975100 |      8002 |         8003 |          0 |             0 | f
 cn_5003      | C         |      8000 | localhost      |       8000 | localhost      | t              | f              | f                |  -125853378 |      8002 |         8003 |          0 |             0 | t
(6 rows)
```
ALL_ALL_TABLES displays the tables or views accessible to the current user.

| Name | Type | Description |
|---|---|---|
| owner | name | Owner of the table or view |
| table_name | name | Name of the table or view |
| tablespace_name | name | Tablespace where the table or view is located |
ALL_CONSTRAINTS displays information about constraints accessible to the current user.

| Name | Type | Description |
|---|---|---|
| constraint_name | character varying(64) | Constraint name |
| constraint_type | text | Constraint type |
| table_name | character varying(64) | Name of the constraint-related table |
| index_owner | character varying(64) | Owner of the constraint-related index (only for unique and primary key constraints) |
| index_name | character varying(64) | Name of the constraint-related index (only for unique and primary key constraints) |
ALL_CONS_COLUMNS displays information about constraint columns accessible to the current user.

| Name | Type | Description |
|---|---|---|
| table_name | character varying(64) | Name of the constraint-related table |
| column_name | character varying(64) | Name of the constraint-related column |
| constraint_name | character varying(64) | Constraint name |
| position | smallint | Position of the column in the table |
ALL_COL_COMMENTS displays the comment information about table columns accessible to the current user.

| Name | Type | Description |
|---|---|---|
| column_name | character varying(64) | Column name |
| table_name | character varying(64) | Table name |
| owner | character varying(64) | Table owner |
| comments | text | Comments |
ALL_DEPENDENCIES displays dependencies between functions and advanced packages accessible to the current user.

Due to information constraints, this view is currently empty in GaussDB(DWS).

| Name | Type | Description |
|---|---|---|
| owner | character varying(30) | Owner of the object |
| name | character varying(30) | Object name |
| type | character varying(17) | Type of the object |
| referenced_owner | character varying(30) | Owner of the referenced object |
| referenced_name | character varying(64) | Name of the referenced object |
| referenced_type | character varying(17) | Type of the referenced object |
| referenced_link_name | character varying(128) | Name of the link to the referenced object |
| schemaid | numeric | ID of the current schema |
| dependency_type | character varying(4) | Dependency type (REF or HARD) |
ALL_IND_COLUMNS displays all index columns accessible to the current user.

| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_name | name | Column name |
| column_position | smallint | Position of the column in the index |
ALL_IND_EXPRESSIONS displays information about the expression indexes accessible to the current user.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_expression | text | Function-based index expression of a specified column |
| column_position | smallint | Position of the column in the index |
ALL_INDEXES displays information about indexes accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_name | character varying(64) | Name of the table corresponding to the index |
| uniqueness | text | Whether the index is a unique index |
| generated | character varying(1) | Whether the index name is generated by the system |
| partitioned | character(3) | Whether the index has the properties of a partitioned table |
ALL_OBJECTS displays all database objects accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | name | Owner of the object |
| object_name | name | Object name |
| object_id | oid | OID of the object |
| object_type | name | Type of the object |
| namespace | oid | ID of the namespace where the object resides |
| created | timestamp with time zone | Object creation time |
| last_ddl_time | timestamp with time zone | Time when the object was last modified |

For details about the value ranges of created and last_ddl_time, see PG_OBJECT.
ALL_PROCEDURES displays information about all stored procedures or functions accessible to the current user.

| Name | Type | Description |
|---|---|---|
| owner | name | Owner of the object |
| object_name | name | Object name |
ALL_SEQUENCES displays all sequences accessible to the current user.
| Name | Type | Description |
|---|---|---|
| sequence_owner | name | Owner of the sequence |
| sequence_name | name | Name of the sequence |
| min_value | bigint | Minimum value of the sequence |
| max_value | bigint | Maximum value of the sequence |
| increment_by | bigint | Value by which the sequence is incremented |
| cycle_flag | character(1) | Whether the sequence is a cycle sequence. The value can be Y or N. |
ALL_SOURCE displays information about stored procedures or functions accessible to the current user, and provides the columns defined by the stored procedures and functions.
| Name | Type | Description |
|---|---|---|
| owner | name | Owner of the object |
| name | name | Object name |
| type | name | Type of the object |
| text | text | Definition of the object |
ALL_SYNONYMS displays all synonyms accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | text | Owner of the synonym |
| schema_name | text | Name of the schema to which the synonym belongs |
| synonym_name | text | Synonym name |
| table_owner | text | Owner of the associated object |
| table_schema_name | text | Schema name of the associated object |
| table_name | text | Name of the associated object |
ALL_TAB_COLUMNS displays description information about columns of the tables accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the table |
| table_name | character varying(64) | Table name |
| column_name | character varying(64) | Column name |
| data_type | character varying(128) | Data type of the column |
| column_id | integer | Column ID generated when the object is created or the column is added |
| data_length | integer | Length of the column, in bytes |
| avg_col_len | numeric | Average length of the column, in bytes |
| nullable | bpchar | Whether the column can be empty. For the primary key constraint and non-null constraint, the value is n. |
| data_precision | integer | Precision of the data type. Valid for the numeric data type; NULL for other types. |
| data_scale | integer | Number of decimal places. Valid for the numeric data type; 0 for other types. |
| char_length | numeric | Column length, in bytes. Valid only for the varchar, nvarchar2, bpchar, and char types. |
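For example, a minimal sketch (with 't1' as a placeholder table name) for inspecting a table's column definitions:

```sql
-- Show type, length, precision, and nullability for each column of one table.
SELECT column_name, data_type, data_length, data_precision, data_scale, nullable
FROM all_tab_columns
WHERE table_name = 't1'            -- placeholder table name
ORDER BY column_id;
```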
ALL_TAB_COMMENTS displays comments about all tables and views accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the table or the view |
| table_name | character varying(64) | Name of the table or the view |
| comments | text | Comments |
ALL_TABLES displays all the tables accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| tablespace_name | character varying(64) | Name of the tablespace that contains the table |
| status | character varying(8) | Whether the current record is valid |
| temporary | character(1) | Whether the table is a temporary table |
| dropped | character varying | Whether the current record is deleted |
| num_rows | numeric | Estimated number of rows in the table |
ALL_USERS displays all users of the database visible to the current user; it does not describe these users.
| Name | Type | Description |
|---|---|---|
| username | name | User name |
| user_id | oid | OID of the user |
ALL_VIEWS displays the description about all views accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | name | Owner of the view |
| view_name | name | Name of the view |
| text_length | integer | Text length of the view |
| text | text | Text in the view |
DBA_DATA_FILES displays the description of database files. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| tablespace_name | name | Name of the tablespace to which the file belongs |
| bytes | double precision | Length of the file, in bytes |
DBA_USERS displays all user names in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| username | character varying(64) | User name |
DBA_COL_COMMENTS displays information about table column comments in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| column_name | character varying(64) | Column name |
| table_name | character varying(64) | Table name |
| owner | character varying(64) | Table owner |
| comments | text | Comments |
DBA_CONSTRAINTS displays information about table constraints in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| constraint_name | character varying(64) | Constraint name |
| constraint_type | text | Constraint type |
| table_name | character varying(64) | Name of the constraint-related table |
| index_owner | character varying(64) | Owner of the constraint-related index (only for the unique constraint and primary key constraint) |
| index_name | character varying(64) | Name of the constraint-related index (only for the unique constraint and primary key constraint) |
DBA_CONS_COLUMNS displays information about constraint columns in database tables. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| table_name | character varying(64) | Name of the constraint-related table |
| column_name | character varying(64) | Name of the constraint-related column |
| constraint_name | character varying(64) | Constraint name |
| position | smallint | Position of the column in the table |
DBA_IND_COLUMNS displays column information about all indexes in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_name | name | Column name |
| column_position | smallint | Position of the column in the index |
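A sketch combining this view with DBA_INDEXES (administrator rights assumed; 't1' is a placeholder table name):

```sql
-- For each index on a table, list its columns in index order.
SELECT i.index_name, i.uniqueness, ic.column_name, ic.column_position
FROM dba_indexes i
JOIN dba_ind_columns ic
  ON ic.index_name = i.index_name
 AND ic.index_owner = i.owner
WHERE i.table_name = 't1'          -- placeholder table name
ORDER BY i.index_name, ic.column_position;
```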
DBA_IND_EXPRESSIONS displays the information about expression indexes in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_expression | text | Function-based index expression of a specified column |
| column_position | smallint | Position of the column in the index |
DBA_IND_PARTITIONS displays information about all index partitions in the database. Each index partition of a partitioned table in the database, if present, has a row of records in DBA_IND_PARTITIONS. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Owner of the partitioned index to which the index partition belongs |
| schema | character varying(64) | Schema of the partitioned index to which the index partition belongs |
| index_name | character varying(64) | Name of the partitioned index to which the index partition belongs |
| partition_name | character varying(64) | Name of the index partition |
| index_partition_usable | boolean | Whether the index partition is available |
| high_value | text | Upper boundary of the partition corresponding to the index partition |
| def_tablespace_name | name | Tablespace name of the index partition |
DBA_INDEXES displays all indexes in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the index |
| index_name | character varying(64) | Index name |
| table_name | character varying(64) | Name of the table corresponding to the index |
| uniqueness | text | Whether the index is a unique index |
| generated | character varying(1) | Whether the index name is generated by the system |
| partitioned | character(3) | Whether the index has the properties of a partitioned table |
DBA_OBJECTS displays all database objects in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | name | Owner of the object |
| object_name | name | Object name |
| object_id | oid | OID of the object |
| object_type | name | Type of the object |
| namespace | oid | Namespace containing the object |
| created | timestamp with time zone | Object creation time |
| last_ddl_time | timestamp with time zone | Time when the object was last modified |

For details about the value ranges of created and last_ddl_time, see PG_OBJECT.
DBA_PART_INDEXES displays information about all partitioned table indexes in the database. It is accessible only to users with system administrator rights.

| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Owner of the partitioned table index |
| schema | character varying(64) | Schema of the partitioned table index |
| index_name | character varying(64) | Name of the partitioned table index |
| table_name | character varying(64) | Name of the partitioned table to which the index belongs |
| partitioning_type | text | Partition policy of the partitioned table. NOTE: Currently, only range partitioning is supported. |
| partition_count | bigint | Number of index partitions of the partitioned table index |
| def_tablespace_name | name | Tablespace name of the partitioned table index |
| partitioning_key_count | integer | Number of partition keys of the partitioned table |
DBA_PART_TABLES displays information about all partitioned tables in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| table_owner | character varying(64) | Owner of the partitioned table |
| schema | character varying(64) | Schema of the partitioned table |
| table_name | character varying(64) | Name of the partitioned table |
| partitioning_type | text | Partition policy of the partitioned table. NOTE: Currently, only range partitioning is supported. |
| partition_count | bigint | Number of partitions of the partitioned table |
| def_tablespace_name | name | Tablespace name of the partitioned table |
| partitioning_key_count | integer | Number of partition keys of the partitioned table |
DBA_PROCEDURES displays information about all stored procedures and functions in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the stored procedure or the function |
| object_name | character varying(64) | Name of the stored procedure or the function |
| argument_number | smallint | Number of input parameters in the stored procedure |
DBA_SEQUENCES displays information about all sequences in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| sequence_owner | character varying(64) | Owner of the sequence |
| sequence_name | character varying(64) | Name of the sequence |
DBA_SOURCE displays all stored procedures or functions in the database, and it provides the columns defined by the stored procedures or functions. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the stored procedure or the function |
| name | character varying(64) | Name of the stored procedure or the function |
| text | text | Definition of the stored procedure or the function |
DBA_SYNONYMS displays all synonyms in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | text | Owner of the synonym |
| schema_name | text | Name of the schema to which the synonym belongs |
| synonym_name | text | Synonym name |
| table_owner | text | Owner of the associated object |
| table_schema_name | text | Schema name of the associated object |
| table_name | text | Name of the associated object |
DBA_TAB_COLUMNS displays the columns of tables. Each column of a table in the database has a row in DBA_TAB_COLUMNS. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_name | character varying(64) | Column name |
| data_type | character varying(128) | Data type of the column |
| column_id | integer | Sequence number of the column when the table is created |
| data_length | integer | Length of the column, in bytes |
| comments | text | Comments |
| avg_col_len | numeric | Average length of the column, in bytes |
| nullable | bpchar | Whether the column can be empty. For the primary key constraint and non-null constraint, the value is n. |
| data_precision | integer | Precision of the data type. Valid for the numeric data type; NULL for other types. |
| data_scale | integer | Number of decimal places. Valid for the numeric data type; 0 for other types. |
| char_length | numeric | Column length, in bytes. Valid only for the varchar, nvarchar2, bpchar, and char types. |
DBA_TAB_COMMENTS displays comments about all tables and views in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the table or the view |
| table_name | character varying(64) | Name of the table or the view |
| comments | text | Comments |
DBA_TAB_PARTITIONS displays information about all partitions in the database.
| Name | Type | Description |
|---|---|---|
| table_owner | character varying(64) | Owner of the table that contains the partition |
| schema | character varying(64) | Schema of the partitioned table |
| table_name | character varying(64) | Table name |
| partition_name | character varying(64) | Name of the partition |
| high_value | text | Upper boundary of the range partition or interval partition |
| tablespace_name | name | Name of the tablespace that contains the partition |
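For example, a minimal sketch (administrator rights assumed; 't1' is a placeholder table name) for viewing the partitions of one table and their upper boundaries:

```sql
-- List each partition of a partitioned table with its boundary and tablespace.
SELECT table_name, partition_name, high_value, tablespace_name
FROM dba_tab_partitions
WHERE table_name = 't1'            -- placeholder table name
ORDER BY partition_name;
```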
DBA_TABLES displays all tables in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| tablespace_name | character varying(64) | Name of the tablespace that contains the table |
| status | character varying(8) | Whether the current record is valid |
| temporary | character(1) | Whether the table is a temporary table |
| dropped | character varying | Whether the current record is deleted |
| num_rows | numeric | Estimated number of rows in the table |
DBA_TABLESPACES displays information about available tablespaces. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| tablespace_name | character varying(64) | Name of the tablespace |
DBA_TRIGGERS displays information about triggers in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| trigger_name | character varying(64) | Trigger name |
| table_name | character varying(64) | Name of the table that defines the trigger |
| table_owner | character varying(64) | Owner of the table that defines the trigger |
DBA_VIEWS displays views in the database. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the view |
| view_name | character varying(64) | View name |
DUAL is automatically created by the database based on the data dictionary. It has a single text column in a single row, used for storing expression calculation results. It is accessible to all users.
| Name | Type | Description |
|---|---|---|
| dummy | text | Expression calculation result |
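Because DUAL always contains exactly one row, it is convenient for evaluating expressions that reference no table, for example:

```sql
-- Evaluate standalone expressions against the one-row DUAL table.
SELECT current_timestamp AS now, 2 + 3 AS total FROM dual;
```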
GLOBAL_REDO_STAT displays the total statistics of XLOG redo operations on all nodes in a cluster. Except for the avgiotim column (the average redo write time across all nodes), the columns in this view have the same names as those in the PV_REDO_STAT view, and each value is the sum of the values of the corresponding column in PV_REDO_STAT on every node. This view is accessible only to users with system administrator rights.

GLOBAL_REL_IOSTAT displays the total disk I/O statistics of all nodes in a cluster. The name of each column in this view is the same as that in the GS_REL_IOSTAT view, but each value is the sum of the values of the corresponding column in GS_REL_IOSTAT on every node. This view is accessible only to users with system administrator rights.

GLOBAL_STAT_DATABASE displays the status and statistics of databases on all nodes in a cluster.
| Name | Type | Description | Sum Range |
|---|---|---|---|
| datid | oid | Database OID | - |
| datname | name | Database name | - |
| numbackends | integer | Number of backends currently connected to this database on the current node. This is the only column in this view that reflects a current-state value; all other columns return values accumulated since the last reset. | CN |
| xact_commit | bigint | Number of transactions in this database that have been committed on the current node | CN |
| xact_rollback | bigint | Number of transactions in this database that have been rolled back on the current node | CN |
| blks_read | bigint | Number of disk blocks read in this database on the current node | DN |
| blks_hit | bigint | Number of disk blocks found in the buffer cache on the current node, that is, the number of cache hits. (This only includes hits in the GaussDB(DWS) buffer cache, not in the file system cache.) | DN |
| tup_returned | bigint | Number of rows returned by queries in this database on the current node | DN |
| tup_fetched | bigint | Number of rows fetched by queries in this database on the current node | DN |
| tup_inserted | bigint | Number of rows inserted in this database on the current node | DN |
| tup_updated | bigint | Number of rows updated in this database on the current node | DN |
| tup_deleted | bigint | Number of rows deleted from this database on the current node | DN |
| conflicts | bigint | Number of queries canceled due to database recovery conflicts on the current node (conflicts occur only on the standby server). For details, see PG_STAT_DATABASE_CONFLICTS. | CN and DN |
| temp_files | bigint | Number of temporary files created by this database on the current node. All temporary files are counted, regardless of why they were created (for example, sorting or hashing) and regardless of the log_temp_files setting. | DN |
| temp_bytes | bigint | Size of temporary files written to this database on the current node. All temporary files are counted, regardless of why they were created and regardless of the log_temp_files setting. | DN |
| deadlocks | bigint | Number of deadlocks in this database on the current node | CN and DN |
| blk_read_time | double precision | Time spent reading data file blocks by backends in this database on the current node, in milliseconds | DN |
| blk_write_time | double precision | Time spent writing data file blocks by backends in this database on the current node, in milliseconds | DN |
| stats_reset | timestamp with time zone | Time when the database statistics were reset on the current node | - |
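For example, the accumulated block counters allow a rough per-database buffer cache hit ratio, as in this sketch:

```sql
-- Estimate the cache hit ratio: hits / (reads + hits); NULLIF avoids
-- division by zero for databases with no block activity yet.
SELECT datname,
       blks_hit::numeric / NULLIF(blks_read + blks_hit, 0) AS cache_hit_ratio
FROM global_stat_database;
```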
GLOBAL_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in all workload Cgroups in a cluster, including the number of SELECT, UPDATE, INSERT, and DELETE statements and the number of DDL, DML, and DCL statements.
| Name | Type | Description |
|---|---|---|
| workload | name | Workload Cgroup name |
| select_count | bigint | Number of SELECT statements |
| update_count | bigint | Number of UPDATE statements |
| insert_count | bigint | Number of INSERT statements |
| delete_count | bigint | Number of DELETE statements |
| ddl_count | bigint | Number of DDL statements |
| dml_count | bigint | Number of DML statements |
| dcl_count | bigint | Number of DCL statements |
GLOBAL_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in all workload Cgroups in a cluster, including the maximum, minimum, average, and total response time of SELECT, UPDATE, INSERT, and DELETE statements. The unit is microsecond.
| Name | Type | Description |
|---|---|---|
| workload | name | Workload Cgroup name |
| total_select_elapse | bigint | Total response time of SELECT statements |
| max_select_elapse | bigint | Maximum response time of SELECT statements |
| min_select_elapse | bigint | Minimum response time of SELECT statements |
| avg_select_elapse | bigint | Average response time of SELECT statements |
| total_update_elapse | bigint | Total response time of UPDATE statements |
| max_update_elapse | bigint | Maximum response time of UPDATE statements |
| min_update_elapse | bigint | Minimum response time of UPDATE statements |
| avg_update_elapse | bigint | Average response time of UPDATE statements |
| total_insert_elapse | bigint | Total response time of INSERT statements |
| max_insert_elapse | bigint | Maximum response time of INSERT statements |
| min_insert_elapse | bigint | Minimum response time of INSERT statements |
| avg_insert_elapse | bigint | Average response time of INSERT statements |
| total_delete_elapse | bigint | Total response time of DELETE statements |
| max_delete_elapse | bigint | Maximum response time of DELETE statements |
| min_delete_elapse | bigint | Minimum response time of DELETE statements |
| avg_delete_elapse | bigint | Average response time of DELETE statements |
GLOBAL_WORKLOAD_TRANSACTION provides the total transaction information about workload Cgroups on all CNs in the cluster. This view is accessible only to users with system administrator rights. It is valid only when the real-time resource monitoring function is enabled, that is, enable_resource_track is on.
| Name | Type | Description |
|---|---|---|
| workload | name | Workload Cgroup name |
| commit_counter | bigint | Total number of commits on each CN |
| rollback_counter | bigint | Total number of rollbacks on each CN |
| resp_min | bigint | Minimum response time of the cluster |
| resp_max | bigint | Maximum response time of the cluster |
| resp_avg | bigint | Average response time on each CN |
| resp_total | bigint | Total response time on each CN |
GS_ALL_CONTROL_GROUP_INFO displays all Cgroup information in a database.
| Name | Type | Description |
|---|---|---|
| name | text | Name of the Cgroup |
| type | text | Type of the Cgroup |
| gid | bigint | Cgroup ID |
| classgid | bigint | ID of the Class Cgroup to which a Workload Cgroup belongs |
| class | text | Class Cgroup |
| workload | text | Workload Cgroup |
| shares | bigint | CPU quota allocated to the Cgroup |
| limits | bigint | Limit of CPUs allocated to the Cgroup |
| wdlevel | bigint | Workload Cgroup level |
| cpucores | text | Usage of CPU cores in the Cgroup |
GS_CLUSTER_RESOURCE_INFO displays a DN resource summary.
| Name | Type | Description |
|---|---|---|
| min_mem_util | integer | Minimum memory usage of a DN |
| max_mem_util | integer | Maximum memory usage of a DN |
| min_cpu_util | integer | Minimum CPU usage of a DN |
| max_cpu_util | integer | Maximum CPU usage of a DN |
| min_io_util | integer | Minimum I/O usage of a DN |
| max_io_util | integer | Maximum I/O usage of a DN |
| used_mem_rate | integer | Maximum physical memory usage |
The database parses each received SQL text string into an internal parse tree. It then traverses the parse tree, ignoring constant values, and computes an integer from the tree using a specific algorithm. This integer serves as the Unique SQL ID, which uniquely identifies this category of SQL; statements with the same Unique SQL ID are grouped as one Unique SQL.
Assume that a user enters the following SQL statements in sequence:

```sql
select * from t1 where id = 1;
select * from t1 where id = 2;
```

The statistics of the two SQL statements are aggregated into the same Unique SQL statement:

```sql
select * from t1 where id = ?;
```
The GS_INSTR_UNIQUE_SQL view displays the execution information about the Unique SQL statements collected on the current node, covering runtime, row activity, cache and I/O, and time distribution statistics. Note that the Unique SQL statistics function is subject to certain restrictions.

When a common user accesses the GS_INSTR_UNIQUE_SQL view, only that user's Unique SQL information is displayed; when an administrator accesses it, all Unique SQL information about the current node is displayed. The view can be queried on both CNs and DNs. A DN displays the Unique SQL statistics of the local node; a CN displays the complete Unique SQL statistics of the local node, that is, it collects the execution information of its own Unique SQL statements from the other CNs and DNs and aggregates it. You can query GS_INSTR_UNIQUE_SQL to locate the top SQL statements that consume different resources, providing a basis for cluster performance optimization and maintenance.
| Name | Type | Description |
|---|---|---|
| node_name | name | Name of the CN that receives SQL statements |
| node_id | integer | Node ID, which is the same as the value of node_id in the pgxc_node table |
| user_name | name | Username |
| user_id | oid | User ID |
| unique_sql_id | bigint | Normalized Unique SQL ID |
| query | text | Normalized SQL text. The maximum length is equal to the value of the GUC parameter track_activity_query_size. |
| n_calls | bigint | Number of successful executions |
| min_elapse_time | bigint | Minimum running time of the SQL statement in the database (unit: μs) |
| max_elapse_time | bigint | Maximum running time of the SQL statement in the database (unit: μs) |
| total_elapse_time | bigint | Total running time of the SQL statement in the database (unit: μs) |
| n_returned_rows | bigint | Row activity: number of rows in the result set returned by SELECT statements |
| n_tuples_fetched | bigint | Row activity: randomly scanned rows (column-store tables and foreign tables are not counted) |
| n_tuples_returned | bigint | Row activity: sequentially scanned rows (column-store tables and foreign tables are not counted) |
| n_tuples_inserted | bigint | Row activity: inserted rows |
| n_tuples_updated | bigint | Row activity: updated rows |
| n_tuples_deleted | bigint | Row activity: deleted rows |
| n_blocks_fetched | bigint | Number of buffer block accesses, that is, physical reads (I/O) |
| n_blocks_hit | bigint | Number of buffer block hits, that is, logical reads (cache) |
| n_soft_parse | bigint | Number of soft parses (cached plan) |
| n_hard_parse | bigint | Number of hard parses (generated plan) |
| db_time | bigint | Valid DB execution time, including the waiting time and network sending time. If multiple threads are involved in query execution, db_time is the sum of the DB time of all threads (unit: μs). |
| cpu_time | bigint | CPU execution time, excluding the sleep time (unit: μs) |
| execution_time | bigint | SQL execution time in the query executor. DDL statements and statements not run by the executor (such as COPY) are not counted (unit: μs). |
| parse_time | bigint | SQL parsing time (unit: μs) |
| plan_time | bigint | SQL plan generation time (unit: μs) |
| rewrite_time | bigint | SQL rewriting time (unit: μs) |
| pl_execution_time | bigint | Execution time of plpgsql procedural language functions (unit: μs) |
| pl_compilation_time | bigint | Compilation time of plpgsql procedural language functions (unit: μs) |
| net_send_time | bigint | Network time, including the time spent by the CN in sending data to the client and the time spent by DNs in sending data to the CN (unit: μs) |
| data_io_time | bigint | File I/O time (unit: μs) |
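For example, a common starting point for tuning is to rank Unique SQL statements by accumulated elapsed time, as in this sketch:

```sql
-- Top 10 Unique SQL statements by total elapsed time (in microseconds).
SELECT unique_sql_id, n_calls, total_elapse_time, query
FROM gs_instr_unique_sql
ORDER BY total_elapse_time DESC
LIMIT 10;
```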
GS_REL_IOSTAT displays disk I/O statistics on the current node. In the current version, only one page is read or written in each read or write operation. Therefore, the number of read/write times is the same as the number of pages.
| Name | Type | Description |
|---|---|---|
| phyrds | bigint | Number of disk reads |
| phywrts | bigint | Number of disk writes |
| phyblkrd | bigint | Number of read pages |
| phyblkwrt | bigint | Number of written pages |
The GS_NODE_STAT_RESET_TIME view provides the time when statistics on the current node were reset, returned as a timestamp with time zone. For details, see the get_node_stat_reset_time() function in "Functions and Operators > System Administration Functions > Other Functions" in SQL Syntax.
+GS_SESSION_CPU_STATISTICS displays load management information about CPU usage of ongoing complex jobs executed by the current user.
| Name | Type | Description |
|---|---|---|
| datid | oid | OID of the database this backend is connected to |
| usename | name | Name of the user logged in to the backend |
| pid | bigint | ID of the backend process |
| start_time | timestamp with time zone | Time when the statement execution started |
| min_cpu_time | bigint | Minimum CPU time of the statement across all DNs, in ms |
| max_cpu_time | bigint | Maximum CPU time of the statement across all DNs, in ms |
| total_cpu_time | bigint | Total CPU time of the statement across all DNs, in ms |
| query | text | Statement that is being executed |
| node_group | text | Logical cluster of the user running the statement |
GS_SESSION_MEMORY_STATISTICS displays load management information about memory usage of ongoing complex jobs executed by the current user.
| Name | Type | Description |
|---|---|---|
| datid | oid | OID of the database this backend is connected to |
| usename | name | Name of the user logged in to the backend |
| pid | bigint | ID of the backend process |
| start_time | timestamp with time zone | Time when the statement execution started |
| min_peak_memory | integer | Minimum memory peak of the statement across all DNs, in MB |
| max_peak_memory | integer | Maximum memory peak of the statement across all DNs, in MB |
| spill_info | text | Information about statement spill to disk on DNs. None: the statement has not spilled to disk on any DN. All: the statement has spilled to disk on every DN. [a:b]: the statement has spilled to disk on a of b DNs. |
| query | text | Statement that is being executed |
| node_group | text | Logical cluster of the user running the statement |
GS_SQL_COUNT displays statistics about the five types of statements (SELECT, INSERT, UPDATE, DELETE, and MERGE INTO) executed on the current node of the database, including the number of execution times, response time (the maximum, minimum, average, and total response time of the other four types of statements except the MERGE INTO statement, in microseconds), and the number of execution times of DDL, DML, and DCL statements.
The classification of DDL, DML, and DCL statements in the GS_SQL_COUNT view differs slightly from their definitions in the SQL syntax; the classification of other statements is similar to the SQL syntax definitions.

When a common user queries the GS_SQL_COUNT view, only that user's statistics on the current node are visible; a user with administrator permissions can view the statistics of all users on the current node. When the cluster or the node restarts, the statistics are cleared and counting restarts. Counting is based on the number of queries received by the node, including queries performed inside the cluster. Statistics for the GS_SQL_COUNT view are collected only on CNs, and SQL statements sent from other CNs are not counted. No result is returned when you query this view on a DN.
| Name | Type | Description |
|---|---|---|
| node_name | name | Node name |
| user_name | name | User name |
| select_count | bigint | Number of SELECT statements |
| update_count | bigint | Number of UPDATE statements |
| insert_count | bigint | Number of INSERT statements |
| delete_count | bigint | Number of DELETE statements |
| mergeinto_count | bigint | Number of MERGE INTO statements |
| ddl_count | bigint | Number of DDL statements |
| dml_count | bigint | Number of DML statements |
| dcl_count | bigint | Number of DCL statements |
| total_select_elapse | bigint | Total response time of SELECT statements |
| avg_select_elapse | bigint | Average response time of SELECT statements |
| max_select_elapse | bigint | Maximum response time of SELECT statements |
| min_select_elapse | bigint | Minimum response time of SELECT statements |
| total_update_elapse | bigint | Total response time of UPDATE statements |
| avg_update_elapse | bigint | Average response time of UPDATE statements |
| max_update_elapse | bigint | Maximum response time of UPDATE statements |
| min_update_elapse | bigint | Minimum response time of UPDATE statements |
| total_delete_elapse | bigint | Total response time of DELETE statements |
| avg_delete_elapse | bigint | Average response time of DELETE statements |
| max_delete_elapse | bigint | Maximum response time of DELETE statements |
| min_delete_elapse | bigint | Minimum response time of DELETE statements |
| total_insert_elapse | bigint | Total response time of INSERT statements |
| avg_insert_elapse | bigint | Average response time of INSERT statements |
| max_insert_elapse | bigint | Maximum response time of INSERT statements |
| min_insert_elapse | bigint | Minimum response time of INSERT statements |
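For example, a minimal sketch for comparing SELECT workload per user on the current CN:

```sql
-- Per-user SELECT counts and average response time (in microseconds).
SELECT node_name, user_name, select_count, avg_select_elapse
FROM gs_sql_count
ORDER BY select_count DESC;
```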
GS_WAIT_EVENTS displays statistics about waiting status and events on the current node.
The values of the statistical columns in this view are accumulated only when the GUC parameter enable_track_wait_event is set to on. If enable_track_wait_event is set to off during the measurement period, statistics stop accumulating, but the existing values are unaffected. When enable_track_wait_event is off, querying this view returns no rows.
| Name | Type | Description |
|---|---|---|
| nodename | name | Node name |
| type | text | Event type, which can be STATUS, LOCK_EVENT, LWLOCK_EVENT, or IO_EVENT |
| event | text | Event name. For details, see PG_THREAD_WAIT_STATUS. |
| wait | bigint | Number of times the event occurred. This column and all the columns below are accumulated while the process runs. |
| failed_wait | bigint | Number of waiting failures. In the current version, this column only counts timeout errors and waiting failures of locks such as LOCK and LWLOCK. |
| total_wait_time | bigint | Total duration of the event |
| avg_wait_time | bigint | Average duration of the event |
| max_wait_time | bigint | Maximum wait time of the event |
| min_wait_time | bigint | Minimum wait time of the event |
In the current version, for events whose type is LOCK_EVENT, LWLOCK_EVENT, or IO_EVENT, the display scope of GS_WAIT_EVENTS is the same as that of the corresponding events in the PG_THREAD_WAIT_STATUS view. For events whose type is STATUS, GS_WAIT_EVENTS displays a subset of the waiting status values; for details, see the PG_THREAD_WAIT_STATUS view.
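For example, a sketch for finding where sessions spend the most time (assuming enable_track_wait_event is on):

```sql
-- Rank wait events by accumulated wait time on the current node.
SELECT nodename, type, event, wait, total_wait_time, avg_wait_time
FROM gs_wait_events
ORDER BY total_wait_time DESC
LIMIT 10;
```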
GS_WLM_OPERATOR_INFO displays the execution information about operators in the query statements that have been executed on the current CN. The information comes from the system catalog dbms_om.gs_wlm_operator_info.
GS_WLM_OPERATOR_HISTORY displays the records of operators in jobs that have been executed by the current user on the current CN. This view is used by Database Manager to query data from the kernel; data in the kernel is cleared every 3 minutes.
+GS_WLM_OPERATOR_STATISTICS displays the operators of the jobs that are being executed by the current user.
| Name | Type | Description |
|---|---|---|
| queryid | bigint | Internal query_id used for statement execution |
| pid | bigint | ID of the backend thread |
| plan_node_id | integer | plan_node_id of the execution plan of the query |
| plan_node_name | text | Name of the operator corresponding to plan_node_id |
| start_time | timestamp with time zone | Time when the operator started to process its first data record |
| duration | bigint | Total execution time of the operator, in ms |
| status | text | Execution status of the current operator: finished or running |
| query_dop | integer | DOP of the current operator |
| estimated_rows | bigint | Number of rows estimated by the optimizer |
| tuple_processed | bigint | Number of elements returned by the current operator |
| min_peak_memory | integer | Minimum peak memory used by the current operator on all DNs, in MB |
| max_peak_memory | integer | Maximum peak memory used by the current operator on all DNs, in MB |
| average_peak_memory | integer | Average peak memory used by the current operator on all DNs, in MB |
| memory_skew_percent | integer | Memory usage skew of the current operator among DNs |
| min_spill_size | integer | Minimum spilled data among all DNs when a spill occurs, in MB (default 0) |
| max_spill_size | integer | Maximum spilled data among all DNs when a spill occurs, in MB (default 0) |
| average_spill_size | integer | Average spilled data among all DNs when a spill occurs, in MB (default 0) |
| spill_skew_percent | integer | DN spill skew when a spill occurs |
| min_cpu_time | bigint | Minimum execution time of the operator on all DNs, in ms |
| max_cpu_time | bigint | Maximum execution time of the operator on all DNs, in ms |
| total_cpu_time | bigint | Total execution time of the operator on all DNs, in ms |
| cpu_skew_percent | integer | Skew of the execution time among DNs |
| warning | text | Warning information |
GS_WLM_SESSION_INFO displays the execution information about the query statements that have been executed on the current CN. The information comes from the system catalog dbms_om.gs_wlm_session_info.
GS_WLM_SESSION_HISTORY displays load management information about completed jobs executed by the current user on the current CN. This view is used by Database Manager to query data from GaussDB(DWS); data in GaussDB(DWS) is cleared every 3 minutes.
| Name | Type | Description |
|---|---|---|
| datid | oid | OID of the database this backend is connected to |
| dbname | text | Name of the database the backend is connected to |
| schemaname | text | Schema name |
| nodename | text | Name of the CN where the statement is run |
| username | text | User name used for connecting to the backend |
| application_name | text | Name of the application connected to the backend |
| client_addr | inet | IP address of the client connected to this backend. Null means the client is connected via a Unix socket on the server machine or this is an internal process such as autovacuum. |
| client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This column is non-null only for IP connections, and only when log_hostname is enabled. |
| client_port | integer | TCP port number the client uses for communication with this backend, or -1 if a Unix socket is used |
| query_band | text | Job type, specified by the query_band parameter. The default value is an empty string. |
| block_time | bigint | Duration the statement was blocked before being executed, including the parsing and optimization duration, in ms |
| start_time | timestamp with time zone | Time when the statement execution started |
| finish_time | timestamp with time zone | Time when the statement execution ended |
| duration | bigint | Execution time of the statement, in ms |
| estimate_total_time | bigint | Estimated execution time of the statement, in ms |
| status | text | Final statement execution status: finished (normal) or aborted (abnormal) |
| abort_info | text | Exception information displayed if the final execution status is aborted |
| resource_pool | text | Resource pool used by the user |
| control_group | text | Cgroup used by the statement |
| min_peak_memory | integer | Minimum memory peak of the statement across all DNs, in MB |
| max_peak_memory | integer | Maximum memory peak of the statement across all DNs, in MB |
| average_peak_memory | integer | Average memory usage during statement execution, in MB |
| memory_skew_percent | integer | Memory usage skew of the statement among DNs |
| spill_info | text | Statement spill information on all DNs. None: the statement has not spilled to disk on any DN. All: the statement has spilled to disk on every DN. [a:b]: the statement has spilled to disk on a of b DNs. |
| min_spill_size | integer | Minimum spilled data among all DNs when a spill occurs, in MB (default 0) |
| max_spill_size | integer | Maximum spilled data among all DNs when a spill occurs, in MB (default 0) |
| average_spill_size | integer | Average spilled data among all DNs when a spill occurs, in MB (default 0) |
| spill_skew_percent | integer | DN spill skew when a spill occurs |
| min_dn_time | bigint | Minimum execution time of the statement across all DNs, in ms |
| max_dn_time | bigint | Maximum execution time of the statement across all DNs, in ms |
| average_dn_time | bigint | Average execution time of the statement across all DNs, in ms |
| dntime_skew_percent | integer | Execution time skew of the statement among DNs |
| min_cpu_time | bigint | Minimum CPU time of the statement across all DNs, in ms |
| max_cpu_time | bigint | Maximum CPU time of the statement across all DNs, in ms |
| total_cpu_time | bigint | Total CPU time of the statement across all DNs, in ms |
| cpu_skew_percent | integer | CPU time skew of the statement among DNs |
| min_peak_iops | integer | Minimum IOPS peak of the statement across all DNs. Counted by ones for column-store tables and by ten thousands for row-store tables. |
| max_peak_iops | integer | Maximum IOPS peak of the statement across all DNs. Counted by ones for column-store tables and by ten thousands for row-store tables. |
| average_peak_iops | integer | Average IOPS peak of the statement across all DNs. Counted by ones for column-store tables and by ten thousands for row-store tables. |
| iops_skew_percent | integer | I/O skew across DNs |
| warning | text | Warning information, including warnings related to SQL self-diagnosis tuning |
| queryid | bigint | Internal query ID used for statement execution |
| query | text | Statement executed |
| query_plan | text | Execution plan of the statement |
| node_group | text | Logical cluster of the user running the statement |
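For example, a sketch for reviewing the current user's completed jobs that spilled to disk:

```sql
-- Completed jobs that spilled to disk on at least one DN, slowest first.
SELECT start_time, duration, spill_info, max_spill_size, query
FROM gs_wlm_session_history
WHERE spill_info <> 'None'
ORDER BY duration DESC;
```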
GS_WLM_SESSION_STATISTICS displays load management information about jobs being executed by the current user on the current CN.
| Name | Type | Description |
|---|---|---|
| datid | oid | OID of the database this backend is connected to |
| dbname | name | Name of the database the backend is connected to |
| schemaname | text | Schema name |
| nodename | text | Name of the CN where the statement is executed |
| username | name | User name used for connecting to the backend |
| application_name | text | Name of the application connected to the backend |
| client_addr | inet | IP address of the client connected to this backend. Null means the client is connected via a Unix socket on the server machine or this is an internal process such as autovacuum. |
| client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This column is non-null only for IP connections, and only when log_hostname is enabled. |
| client_port | integer | TCP port number the client uses for communication with this backend, or -1 if a Unix socket is used |
| query_band | text | Job type, specified by the GUC parameter query_band. The default value is an empty string. |
| pid | bigint | Process ID of the backend |
| block_time | bigint | Block time before the statement is executed, in ms |
| start_time | timestamp with time zone | Time when the statement execution started |
| duration | bigint | Duration for which the statement has been executing, in ms |
| estimate_total_time | bigint | Estimated execution time of the statement, in ms |
| estimate_left_time | bigint | Estimated remaining time of statement execution, in ms |
| enqueue | text | Workload management resource status |
| resource_pool | name | Resource pool used by the user |
| control_group | text | Cgroup used by the statement |
| estimate_memory | integer | Estimated memory used by the statement, in MB |
| min_peak_memory | integer | Minimum memory peak of the statement across all DNs, in MB |
| max_peak_memory | integer | Maximum memory peak of the statement across all DNs, in MB |
| average_peak_memory | integer | Average memory usage during statement execution, in MB |
| memory_skew_percent | integer | Memory usage skew of the statement among DNs |
| spill_info | text | Statement spill information on all DNs. None: the statement has not spilled to disk on any DN. All: the statement has spilled to disk on every DN. [a:b]: the statement has spilled to disk on a of b DNs. |
| min_spill_size | integer | Minimum spilled data among all DNs when a spill occurs, in MB (default 0) |
| max_spill_size | integer | Maximum spilled data among all DNs when a spill occurs, in MB (default 0) |
| average_spill_size | integer | Average spilled data among all DNs when a spill occurs, in MB (default 0) |
| spill_skew_percent | integer | DN spill skew when a spill occurs |
| min_dn_time | bigint | Minimum execution time of the statement across all DNs, in ms |
| max_dn_time | bigint | Maximum execution time of the statement across all DNs, in ms |
| average_dn_time | bigint | Average execution time of the statement across all DNs, in ms |
| dntime_skew_percent | bigint | Execution time skew of the statement among DNs |
| min_cpu_time | bigint | Minimum CPU time of the statement across all DNs, in ms |
| max_cpu_time | bigint | Maximum CPU time of the statement across all DNs, in ms |
| total_cpu_time | bigint | Total CPU time of the statement across all DNs, in ms |
| cpu_skew_percent | integer | CPU time skew of the statement among DNs |
| min_peak_iops | integer | Minimum IOPS peak of the statement across all DNs. Counted by ones for column-store tables and by ten thousands for row-store tables. |
| max_peak_iops | integer | Maximum IOPS peak of the statement across all DNs. Counted by ones for column-store tables and by ten thousands for row-store tables. |
| average_peak_iops | integer | Average IOPS peak of the statement across all DNs. Counted by ones for column-store tables and by ten thousands for row-store tables. |
| iops_skew_percent | integer | I/O skew across DNs |
| warning | text | Warning information, including warnings related to SQL self-diagnosis tuning |
| queryid | bigint | Internal query ID used for statement execution |
| query | text | Statement that is being executed |
| query_plan | text | Execution plan of the statement |
| node_group | text | Logical cluster of the user running the statement |
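For example, a sketch for spotting running jobs with noticeable memory skew across DNs (the 30% threshold is an arbitrary example):

```sql
-- Running jobs of the current user with memory usage skew above 30%.
SELECT pid, start_time, max_peak_memory, memory_skew_percent, query
FROM gs_wlm_session_statistics
WHERE memory_skew_percent > 30
ORDER BY memory_skew_percent DESC;
```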
GS_WLM_SQL_ALLOW displays the configured resource management SQL whitelist, including the default SQL whitelist and the SQL whitelist configured using the GUC parameter wlm_sql_allow_list.
+GS_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in workload Cgroups on the current node, including the number of SELECT, UPDATE, INSERT, and DELETE statements and the number of DDL, DML, and DCL statements.
| Name | Type | Description |
|---|---|---|
| workload | name | Workload Cgroup name |
| select_count | bigint | Number of SELECT statements |
| update_count | bigint | Number of UPDATE statements |
| insert_count | bigint | Number of INSERT statements |
| delete_count | bigint | Number of DELETE statements |
| ddl_count | bigint | Number of DDL statements |
| dml_count | bigint | Number of DML statements |
| dcl_count | bigint | Number of DCL statements |
GS_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in workload Cgroups on the current node, including the maximum, minimum, average, and total response time of SELECT, UPDATE, INSERT, and DELETE statements. The unit is microsecond.
| Name | Type | Description |
|---|---|---|
| workload | name | Workload Cgroup name |
| total_select_elapse | bigint | Total response time of SELECT statements |
| max_select_elapse | bigint | Maximum response time of SELECT statements |
| min_select_elapse | bigint | Minimum response time of SELECT statements |
| avg_select_elapse | bigint | Average response time of SELECT statements |
| total_update_elapse | bigint | Total response time of UPDATE statements |
| max_update_elapse | bigint | Maximum response time of UPDATE statements |
| min_update_elapse | bigint | Minimum response time of UPDATE statements |
| avg_update_elapse | bigint | Average response time of UPDATE statements |
| total_insert_elapse | bigint | Total response time of INSERT statements |
| max_insert_elapse | bigint | Maximum response time of INSERT statements |
| min_insert_elapse | bigint | Minimum response time of INSERT statements |
| avg_insert_elapse | bigint | Average response time of INSERT statements |
| total_delete_elapse | bigint | Total response time of DELETE statements |
| max_delete_elapse | bigint | Maximum response time of DELETE statements |
| min_delete_elapse | bigint | Minimum response time of DELETE statements |
| avg_delete_elapse | bigint | Average response time of DELETE statements |
GS_WORKLOAD_TRANSACTION provides transaction information about workload Cgroups on a single CN. The database records the number of times each workload Cgroup commits and rolls back transactions and the response time of transaction commit and rollback, in microseconds.

| Name | Type | Description |
|---|---|---|
| workload | name | Workload Cgroup name |
| commit_counter | bigint | Number of commits |
| rollback_counter | bigint | Number of rollbacks |
| resp_min | bigint | Minimum response time |
| resp_max | bigint | Maximum response time |
| resp_avg | bigint | Average response time |
| resp_total | bigint | Total response time |
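A minimal sketch of how this view can be used, based only on the columns documented above, is to compute the rollback ratio per workload Cgroup; NULLIF guards against division by zero for Cgroups with no transactions yet:

```sql
-- Sketch: rollback ratio and average response time per workload Cgroup.
SELECT workload,
       commit_counter,
       rollback_counter,
       round(rollback_counter::numeric
             / NULLIF(commit_counter + rollback_counter, 0), 4) AS rollback_ratio,
       resp_avg
FROM gs_workload_transaction
ORDER BY rollback_ratio DESC NULLS LAST;
```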
GS_STAT_DB_CU displays CU hits in a database on each node in a cluster. You can clear the statistics using gs_stat_reset().

| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| db_name | text | Database name |
| mem_hit | integer | Number of memory hits |
| hdd_sync_read | integer | Number of hard disk synchronous reads |
| hdd_asyn_read | integer | Number of hard disk asynchronous reads |
GS_STAT_SESSION_CU displays the CU hit rate of running sessions on each node in a cluster. The data of a session is cleared when the session exits or the cluster restarts.

| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| mem_hit | integer | Number of memory hits |
| hdd_sync_read | integer | Number of hard disk synchronous reads |
| hdd_asyn_read | integer | Number of hard disk asynchronous reads |
GS_TOTAL_NODEGROUP_MEMORY_DETAIL displays statistics about the memory usage of the logical cluster that the current database belongs to, in MB.

| Name | Type | Description |
|---|---|---|
| ngname | text | Name of a logical cluster |
| memorytype | text | Memory type |
| memorymbytes | integer | Size of the memory allocated for this memory type |
GS_USER_TRANSACTION provides transaction information about users on a single CN. The database records the number of times that each user commits and rolls back transactions and the response time of transaction commitment and rollback, in microseconds.
| Name | Type | Description |
|---|---|---|
| usename | name | Username |
| commit_counter | bigint | Number of commits |
| rollback_counter | bigint | Number of rollbacks |
| resp_min | bigint | Minimum response time |
| resp_max | bigint | Maximum response time |
| resp_avg | bigint | Average response time |
| resp_total | bigint | Total response time |
GS_VIEW_DEPENDENCY allows you to query the direct dependencies of all views visible to the current user.
| Column | Type | Description |
|---|---|---|
| objschema | name | View space name |
| objname | name | View name |
| refobjschema | name | Name of the space where the dependent object resides |
| refobjname | name | Name of a dependent object |
| relobjkind | char | Type of the dependent object |
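As an illustrative sketch of using this view, the query below lists every view that directly depends on one table. The schema and table names are placeholders, not objects assumed to exist:

```sql
-- Sketch: views that directly depend on a given base table.
-- 'public' and 'my_base_table' are hypothetical placeholders.
SELECT objschema, objname
FROM gs_view_dependency
WHERE refobjschema = 'public'
  AND refobjname = 'my_base_table';
```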
GS_VIEW_INVALID queries all unavailable views visible to the current user. If the base table, function, or synonym that the view depends on is abnormal, the validtype column of the view is displayed as "invalid".
| Column | Type | Description |
|---|---|---|
| oid | oid | OID of the view |
| schemaname | name | View space name |
| viewname | name | Name of the view |
| viewowner | name | Owner of the view |
| definition | text | Definition of the view |
| validtype | text | View validity flag |
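A minimal sketch based on the columns above: list the views that need to be rebuilt after a base object changed.

```sql
-- Sketch: find views marked invalid after a base table/function/synonym change.
SELECT schemaname, viewname, viewowner
FROM gs_view_invalid
WHERE validtype = 'invalid';
```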
PG_AVAILABLE_EXTENSION_VERSIONS displays the extension versions of certain database features.
| Name | Type | Description |
|---|---|---|
| name | name | Extension name |
| version | text | Version name |
| installed | boolean | The value is true if the version of this extension is currently installed. |
| superuser | boolean | The value is true if only system administrators are allowed to install this extension. |
| relocatable | boolean | The value is true if the extension can be relocated to another schema. |
| schema | name | Name of the schema that the extension must be installed into. The value is null if the extension is partially or fully relocatable. |
| requires | name[] | Names of prerequisite extensions. The value is null if there are no prerequisite extensions. |
| comment | text | Comment string from the extension's control file |
PG_AVAILABLE_EXTENSIONS displays the extended information about certain database features.
| Name | Type | Description |
|---|---|---|
| name | name | Extension name |
| default_version | text | Name of the default version. The value is NULL if none is specified. |
| installed_version | text | Currently installed version of the extension. The value is NULL if no version is installed. |
| comment | text | Comment string from the extension's control file |
On any normal node in a cluster, PG_BULKLOAD_STATISTICS displays the execution status of the import and export services. Each import or export service corresponds to a record. This view is accessible only to users with system administrator rights.

| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| db_name | text | Database name |
| query_id | bigint | Query ID. It is equivalent to debug_query_id. |
| tid | bigint | ID of the current thread |
| lwtid | integer | Lightweight thread ID |
| session_id | bigint | GDS session ID |
| direction | text | Service type. The options are gds to file, gds from file, gds to pipe, gds from pipe, copy from, and copy to. |
| query | text | Query statement |
| address | text | Location of the foreign table used for data import and export |
| query_start | timestamp with time zone | Start time of data import or export |
| total_bytes | bigint | Total size of the data to be processed. This column has a value only when a GDS common file is imported and the record in the row comes from a CN; otherwise, it is empty. |
| phase | text | Execution phase of the current import or export service. The options are INITIALIZING, TRANSFER_DATA, and RELEASE_RESOURCE. |
| done_lines | bigint | Number of lines that have been transferred |
| done_bytes | bigint | Number of bytes that have been transferred |
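A minimal monitoring sketch over the columns above (run as a user with system administrator rights; total_bytes is populated only for GDS common-file imports recorded on a CN):

```sql
-- Sketch: progress of running import/export services.
SELECT node_name, direction, phase, done_lines, done_bytes, total_bytes
FROM pg_bulkload_statistics
ORDER BY query_start;
```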
PG_COMM_CLIENT_INFO stores the client connection information of a single node. (You can query this view on a DN to view the information about the connection between the CN and DN.)
| Name | Type | Description |
|---|---|---|
| node_name | text | Current node name |
| app | text | Client application name |
| tid | bigint | Thread ID of the current thread |
| lwtid | integer | Lightweight thread ID of the current thread |
| query_id | bigint | Query ID. It is equivalent to debug_query_id. |
| socket | integer | Displayed if the connection is a physical connection |
| remote_ip | text | Peer node IP address |
| remote_port | text | Peer node port |
| logic_id | integer | If the connection is a logical connection, sid is displayed. If -1 is displayed, the connection is a physical connection. |
PG_COMM_DELAY displays the communication library delay status for a single DN.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| remote_name | text | Name of the peer node |
| remote_host | text | IP address of the peer node |
| stream_num | integer | Number of logical stream connections used by the current physical connection |
| min_delay | integer | Minimum delay of the current physical connection within 1 minute, in microseconds. NOTE: A negative result is invalid. Wait until the delay status is updated and query again. |
| average | integer | Average delay of the current physical connection within 1 minute, in microseconds |
| max_delay | integer | Maximum delay of the current physical connection within 1 minute, in microseconds |
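For instance, a minimal sketch that surfaces the slowest physical connections over the last minute, filtering out the invalid negative readings noted above:

```sql
-- Sketch: physical connections with the highest peak delay in the last minute.
SELECT node_name, remote_name, remote_host, stream_num, max_delay
FROM pg_comm_delay
WHERE min_delay >= 0   -- negative results are invalid, per the note above
ORDER BY max_delay DESC
LIMIT 10;
```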
PG_COMM_STATUS displays the communication library status for a single DN.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| rxpck/s | integer | Receiving rate of the communication library on a node. The unit is byte/s. |
| txpck/s | integer | Sending rate of the communication library on a node. The unit is byte/s. |
| rxkB/s | bigint | Receiving rate of the communication library on a node. The unit is KB/s. |
| txkB/s | bigint | Sending rate of the communication library on a node. The unit is KB/s. |
| buffer | bigint | Size of the buffer of the Cmailbox |
| memKB(libcomm) | bigint | Communication memory size of the libcomm process, in KB |
| memKB(libpq) | bigint | Communication memory size of the libpq process, in KB |
| %USED(PM) | integer | Real-time usage of the postmaster thread |
| %USED (sflow) | integer | Real-time usage of the gs_sender_flow_controller thread |
| %USED (rflow) | integer | Real-time usage of the gs_receiver_flow_controller thread |
| %USED (rloop) | integer | Highest real-time usage among multiple gs_receivers_loop threads |
| stream | integer | Total number of used logical connections |
PG_COMM_RECV_STREAM displays the receiving stream status of all the communication libraries for a single DN.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| local_tid | bigint | ID of the thread using this stream |
| remote_name | text | Name of the peer node |
| remote_tid | bigint | Peer thread ID |
| idx | integer | Peer DN ID in the local DN |
| sid | integer | Stream ID in the physical connection |
| tcp_sock | integer | TCP socket used in the stream |
| state | text | Current status of the stream |
| query_id | bigint | debug_query_id corresponding to the stream |
| pn_id | integer | plan_node_id of the query executed by the stream |
| send_smp | integer | smpid of the sender of the query executed by the stream |
| recv_smp | integer | smpid of the receiver of the query executed by the stream |
| recv_bytes | bigint | Total data volume received from the stream. The unit is byte. |
| time | bigint | Current life cycle service duration of the stream. The unit is ms. |
| speed | bigint | Average receiving rate of the stream. The unit is byte/s. |
| quota | bigint | Current communication quota value of the stream. The unit is byte. |
| buff_usize | bigint | Current size of the data cache of the stream. The unit is byte. |
PG_COMM_SEND_STREAM displays the sending stream status of all the communication libraries for a single DN.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| local_tid | bigint | ID of the thread using this stream |
| remote_name | text | Name of the peer node |
| remote_tid | bigint | Peer thread ID |
| idx | integer | Peer DN ID in the local DN |
| sid | integer | Stream ID in the physical connection |
| tcp_sock | integer | TCP socket used in the stream |
| state | text | Current status of the stream |
| query_id | bigint | debug_query_id corresponding to the stream |
| pn_id | integer | plan_node_id of the query executed by the stream |
| send_smp | integer | smpid of the sender of the query executed by the stream |
| recv_smp | integer | smpid of the receiver of the query executed by the stream |
| send_bytes | bigint | Total data volume sent by the stream. The unit is byte. |
| time | bigint | Current life cycle service duration of the stream. The unit is ms. |
| speed | bigint | Average sending rate of the stream. The unit is byte/s. |
| quota | bigint | Current communication quota value of the stream. The unit is byte. |
| wait_quota | bigint | Extra time generated when the stream waits for the quota value. The unit is ms. |
PG_CONTROL_GROUP_CONFIG displays the Cgroup configuration information in the system.
| Name | Type | Description |
|---|---|---|
| pg_control_group_config | text | Configuration information of the Cgroup |
PG_CURSORS displays the cursors that are currently available.
| Name | Type | Description |
|---|---|---|
| name | text | Cursor name |
| statement | text | Query statement submitted when the cursor was declared |
| is_holdable | boolean | Whether the cursor is holdable (that is, it can be accessed after the transaction that declared the cursor has committed). If it is, its value is true. |
| is_binary | boolean | Whether the cursor was declared BINARY. If it was, its value is true. |
| is_scrollable | boolean | Whether the cursor is scrollable (that is, it allows rows to be retrieved in a nonsequential manner). If it is, its value is true. |
| creation_time | timestamp with time zone | Timestamp at which the cursor is declared |
PG_EXT_STATS displays the extended statistics stored in the PG_STATISTIC_EXT table. Extended statistics are statistics collected on a combination of multiple columns.

| Name | Type | Reference | Description |
|---|---|---|---|
| schemaname | name | PG_NAMESPACE.nspname | Name of the schema that contains a table |
| tablename | name | PG_CLASS.relname | Name of a table |
| attname | int2vector | PG_STATISTIC_EXT.stakey | Columns to be combined for collecting statistics |
| inherited | boolean | - | Includes inherited sub-columns if the value is true; otherwise, indicates the column combination in the specified table only |
| null_frac | real | - | Percentage of column combinations that are null to all records |
| avg_width | integer | - | Average width of column combinations. The unit is byte. |
| n_distinct | real | - | Number of distinct values in the column combination. The negated form is used when ANALYZE believes that the number of distinct values is likely to increase as the table grows; the positive form is used when the column combination seems to have a fixed number of possible values. For example, -1 indicates that the number of distinct values is the same as the number of rows for the column combination. |
| n_dndistinct | real | - | Number of unique non-null data values of the column combination on dn1 |
| most_common_vals | anyarray | - | List of the most common values in a column combination. If this combination does not have most common values, most_common_vals will be NULL. None of the most common values in most_common_vals is NULL. |
| most_common_freqs | real[] | - | List of the frequencies of the most common values, that is, the number of occurrences of each value divided by the total number of rows. (NULL if most_common_vals is NULL) |
| most_common_vals_null | anyarray | - | List of the most common values in a column combination. If this combination does not have most common values, most_common_vals_null will be NULL. At least one of the common values in most_common_vals_null is NULL. |
| most_common_freqs_null | real[] | - | List of the frequencies of the most common values, that is, the number of occurrences of each value divided by the total number of rows. (NULL if most_common_vals_null is NULL) |
PG_GET_INVALID_BACKENDS displays the information about backend threads on the CN that are connected to the current standby DN.
| Name | Type | Description |
|---|---|---|
| pid | bigint | Thread ID |
| node_name | text | Information about the node connected to the backend thread |
| dbname | name | Name of the connected database |
| backend_start | timestamp with time zone | Backend thread startup time |
| query | text | Query statement performed by the backend thread |
PG_GET_SENDERS_CATCHUP_TIME displays the catchup information of the currently active primary/standby instance sending thread on a single DN.
| Name | Type | Description |
|---|---|---|
| pid | bigint | Current sender thread ID |
| lwpid | integer | Current sender lwpid |
| local_role | text | Local role |
| peer_role | text | Peer role |
| state | text | Current sender's replication status |
| type | text | Current sender type |
| catchup_start | timestamp with time zone | Startup time of a catchup task |
| catchup_end | timestamp with time zone | End time of a catchup task |
| catchup_type | text | Catchup task type, full or incremental |
| catchup_bcm_filename | text | BCM file executed by the current catchup task |
| catchup_bcm_finished | integer | Number of BCM files completed by a catchup task |
| catchup_bcm_total | integer | Total number of BCM files to be operated by a catchup task |
| catchup_percent | text | Completion percentage of a catchup task |
| catchup_remaining_time | text | Estimated remaining time of a catchup task |
PG_GROUP displays the database role authentication and the relationship between roles.
| Name | Type | Description |
|---|---|---|
| groname | name | Group name |
| grosysid | oid | Group ID |
| grolist | oid[] | Array containing all the role IDs in this group |
PG_INDEXES provides access to useful information about each index in the database.

| Name | Type | Reference | Description |
|---|---|---|---|
| schemaname | name | PG_NAMESPACE.nspname | Name of the schema that contains tables and indexes |
| tablename | name | PG_CLASS.relname | Name of the table the index serves |
| indexname | name | PG_CLASS.relname | Index name |
| tablespace | name | PG_TABLESPACE.spcname | Name of the tablespace that contains the index |
| indexdef | text | - | Index definition (a reconstructed CREATE INDEX command) |
The PG_JOB view replaces the PG_JOB system catalog in earlier versions and provides forward compatibility with earlier versions. The original PG_JOB system catalog is changed to the PG_JOBS system catalog. For details about PG_JOBS, see PG_JOBS.
| Name | Type | Description |
|---|---|---|
| job_id | bigint | Job ID |
| current_postgres_pid | bigint | If the current job has been executed, the PostgreSQL thread ID of this job is recorded. The default value is -1, indicating that the job has not yet been executed. |
| log_user | name | User name of the job creator |
| priv_user | name | User name of the job executor |
| dbname | name | Name of the database where the job is executed |
| node_name | name | CN node on which the job will be created and executed |
| job_status | text | Status of the current job. The value can be r, s, f, or d; the default value is s. If a job fails to be executed for 16 consecutive times, job_status is automatically set to d, and no more attempts will be made on this job. |
| start_date | timestamp without time zone | Start time of the first job execution, accurate to millisecond |
| next_run_date | timestamp without time zone | Scheduled time of the next job execution, accurate to millisecond |
| failure_count | smallint | Number of times the job has started and failed. If a job fails to be executed for 16 consecutive times, no more attempts will be made on it. |
| interval | text | Job execution interval |
| last_start_date | timestamp without time zone | Start time of the last job execution, accurate to millisecond |
| last_end_date | timestamp without time zone | End time of the last job execution, accurate to millisecond |
| last_suc_date | timestamp without time zone | Start time of the last successful job execution, accurate to millisecond |
| this_run_date | timestamp without time zone | Start time of the ongoing job execution, accurate to millisecond |
| nspname | name | Name of the namespace where the job is running |
| what | text | Job content |
The PG_JOB_PROC view replaces the PG_JOB_PROC system catalog in earlier versions and provides forward compatibility with earlier versions. The original PG_JOB_PROC and PG_JOB system catalogs are merged into the PG_JOBS system catalog in the current version. For details about the PG_JOBS system catalog, see PG_JOBS.
| Name | Type | Description |
|---|---|---|
| job_id | bigint | Job ID |
| what | text | Job content |
PG_JOB_SINGLE displays job information about the current node.
| Name | Type | Description |
|---|---|---|
| job_id | bigint | Job ID |
| current_postgres_pid | bigint | If the current job has been executed, the PostgreSQL thread ID of this job is recorded. The default value is -1, indicating that the job has not yet been executed. |
| log_user | name | User name of the job creator |
| priv_user | name | User name of the job executor |
| dbname | name | Name of the database where the job is executed |
| node_name | name | CN node on which the job will be created and executed |
| job_status | text | Status of the current job. The value can be r, s, f, or d; the default value is s. If a job fails to be executed for 16 consecutive times, job_status is automatically set to d, and no more attempts will be made on this job. |
| start_date | timestamp without time zone | Start time of the first job execution, accurate to millisecond |
| next_run_date | timestamp without time zone | Scheduled time of the next job execution, accurate to millisecond |
| failure_count | smallint | Number of times the job has started and failed. If a job fails to be executed for 16 consecutive times, no more attempts will be made on it. |
| interval | text | Job execution interval |
| last_start_date | timestamp without time zone | Start time of the last job execution, accurate to millisecond |
| last_end_date | timestamp without time zone | End time of the last job execution, accurate to millisecond |
| last_suc_date | timestamp without time zone | Start time of the last successful job execution, accurate to millisecond |
| this_run_date | timestamp without time zone | Start time of the ongoing job execution, accurate to millisecond |
| nspname | name | Name of the namespace where the job is running |
| what | text | Job content |
PG_LIFECYCLE_DATA_DISTRIBUTE displays the distribution of cold and hot data in a multi-temperature table of OBS.
| Name | Type | Description |
|---|---|---|
| schemaname | name | Schema name |
| tablename | name | Current table name |
| nodename | name | Node name |
| hotpartition | text | Hot partition on the DN |
| coldpartition | text | Cold partition on the DN |
| switchablepartition | text | Switchable partition on the DN |
| hotdatasize | text | Data size of the hot partition on the DN |
| colddatasize | text | Data size of the cold partition on the DN |
| switchabledatasize | text | Data size of the switchable partition on the DN |
PG_LOCKS displays information about the locks held by open transactions.
| Name | Type | Reference | Description |
|---|---|---|---|
| locktype | text | - | Type of the locked object: relation, extend, page, tuple, transactionid, virtualxid, object, userlock, or advisory |
| database | oid | PG_DATABASE.oid | OID of the database in which the locked target exists. The value is NULL if the target is not a database object (for example, a transaction ID). |
| relation | oid | PG_CLASS.oid | OID of the relationship targeted by the lock. The value is NULL if the object is neither a relationship nor part of a relationship. |
| page | integer | - | Page number targeted by the lock within the relationship. If the object is neither a relation page nor a row page, the value is NULL. |
| tuple | smallint | - | Row number targeted by the lock within the page. If the object is not a row, the value is NULL. |
| virtualxid | text | - | Virtual ID of the transaction targeted by the lock. If the object is not a virtual transaction ID, the value is NULL. |
| transactionid | xid | - | ID of the transaction targeted by the lock. If the object is not a transaction ID, the value is NULL. |
| classid | oid | PG_CLASS.oid | OID of the system table that contains the object. If the object is not a general database object, the value is NULL. |
| objid | oid | - | OID of the lock target within its system table. If the target is not a general database object, the value is NULL. |
| objsubid | smallint | - | Column number for a column in the table. The value is 0 if the target is some other object type. If the object is not a general database object, the value is NULL. |
| virtualtransaction | text | - | Virtual ID of the transaction holding or awaiting this lock |
| pid | bigint | - | Logical ID of the server thread holding or awaiting this lock. The value is NULL if the lock is held by a prepared transaction. |
| mode | text | - | Lock mode held or desired by this thread. For more information about lock modes, see "LOCK" in GaussDB(DWS) SQL Syntax Reference. |
| granted | boolean | - | Whether the lock is held (true) or awaited (false) |
| fastpath | boolean | - | Whether the lock is obtained through fast-path (true) or the main lock table (false) |
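A minimal sketch of a common diagnostic over the columns above: find sessions waiting for a lock, together with the relation (if any) they are blocked on. Only PG_LOCKS and the PG_CLASS catalog are assumed:

```sql
-- Sketch: sessions currently waiting for a lock.
SELECT l.pid, l.locktype, l.mode, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;
```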
PG_NODE_ENV displays the environment variable information about the current node.

| Name | Type | Description |
|---|---|---|
| node_name | text | Name of the current node |
| host | text | Host name of the current node |
| process | integer | Process ID of the current node |
| port | integer | Port ID of the current node |
| installpath | text | Installation directory of the current node |
| datapath | text | Data directory of the current node |
| log_directory | text | Log directory of the current node |
PG_OS_THREADS displays the status information about all the threads under the current node.
| Name | Type | Description |
|---|---|---|
| node_name | text | Name of the current node |
| pid | bigint | Thread ID running under the current node process |
| lwpid | integer | Lightweight thread ID corresponding to the PID |
| thread_name | text | Thread name corresponding to the PID |
| creation_time | timestamp with time zone | Thread creation time corresponding to the PID |
PG_POOLER_STATUS displays the connection cache information about the pooler module. This view can be queried only on CNs.

| Name | Type | Description |
|---|---|---|
| database | text | Database name |
| user_name | text | User name |
| tid | bigint | ID of a thread connected to the CN |
| node_oid | bigint | OID of the node connected |
| node_name | name | Name of the node connected |
| in_use | boolean | Whether the connection is in use |
| fdsock | bigint | Peer socket |
| remote_pid | bigint | Peer thread ID |
| session_params | text | GUC session parameters delivered by the connection |
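As an illustrative sketch (run on a CN, per the restriction above), the query below summarizes cached connections per target node and how many are in use; only the documented columns are assumed:

```sql
-- Sketch: pooled connections per node, split by usage.
SELECT node_name,
       count(*) AS pooled_connections,
       sum(CASE WHEN in_use THEN 1 ELSE 0 END) AS in_use_connections
FROM pg_pooler_status
GROUP BY node_name
ORDER BY pooled_connections DESC;
```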
PG_PREPARED_STATEMENTS displays all prepared statements that are available in the current session.
| Name | Type | Description |
|---|---|---|
| name | text | Identifier of the prepared statement |
| statement | text | Query string used to create this prepared statement. For prepared statements created through SQL, this is the PREPARE statement submitted by the client. For prepared statements created through the frontend/backend protocol, this is the text of the prepared statement itself. |
| prepare_time | timestamp with time zone | Timestamp at which the prepared statement is created |
| parameter_types | regtype[] | Expected parameter types for the prepared statement in the form of an array of regtype. The OID corresponding to an element of this array can be obtained by casting the regtype value to oid. |
| from_sql | boolean | How the prepared statement was created: true if it was created through the SQL PREPARE statement, false if it was prepared through the frontend/backend protocol |
PG_PREPARED_XACTS displays information about transactions that are currently prepared for two-phase commit.
| Name | Type | Reference | Description |
|---|---|---|---|
| transaction | xid | - | Numeric transaction identifier of the prepared transaction |
| gid | text | - | Global transaction identifier that was assigned to the transaction |
| prepared | timestamp with time zone | - | Time at which the transaction is prepared for commit |
| owner | name | PG_AUTHID.rolname | Name of the user that executes the transaction |
| database | name | PG_DATABASE.datname | Name of the database in which the transaction is executed |
PG_QUERYBAND_ACTION displays information about the object associated with query_band and the query_band query order.
| Name | Type | Description |
|---|---|---|
| qband | text | query_band key-value pairs |
| respool_id | oid | OID of the resource pool associated with query_band |
| respool | text | Name of the resource pool associated with query_band |
| priority | text | Intra-queue priority associated with query_band |
| qborder | integer | query_band query order |
PG_REPLICATION_SLOTS displays the replication node information.
| Name | Type | Description |
|---|---|---|
| slot_name | text | Name of a replication node |
| plugin | name | Name of the output plug-in of the logical replication slot |
| slot_type | text | Type of a replication node |
| datoid | oid | OID of the database on the replication node |
| database | name | Name of the database on the replication node |
| active | boolean | Whether the replication node is active |
| xmin | xid | Transaction ID of the replication node |
| catalog_xmin | text | ID of the earliest-decoded transaction corresponding to the logical replication slot |
| restart_lsn | text | Xlog file information on the replication node |
| dummy_standby | boolean | Whether the replication node is the dummy standby node |
PG_ROLES displays information about database roles.
| Name | Type | Reference | Description |
|---|---|---|---|
| rolname | name | - | Role name |
| rolsuper | boolean | - | Whether the role is the initial system administrator with the highest permission |
| rolinherit | boolean | - | Whether the role inherits the permissions of the roles it belongs to |
| rolcreaterole | boolean | - | Whether the role can create other roles |
| rolcreatedb | boolean | - | Whether the role can create databases |
| rolcatupdate | boolean | - | Whether the role can update system tables directly. Only the initial system administrator whose usesysid is 10 has this permission; it is not available for other users. |
| rolcanlogin | boolean | - | Whether the role can log in to the database |
| rolreplication | boolean | - | Whether the role can be replicated |
| rolauditadmin | boolean | - | Whether the role is an audit system administrator |
| rolsystemadmin | boolean | - | Whether the role is a system administrator |
| rolconnlimit | integer | - | Maximum number of concurrent connections this role can make if it can log in. -1 indicates no limit. |
| rolpassword | text | - | Not the password (always reads as ********) |
| rolvalidbegin | timestamp with time zone | - | Account validity start time; null if no start time |
| rolvaliduntil | timestamp with time zone | - | Password expiry time; null if no expiration |
| rolrespool | name | - | Resource pool that the user can use |
| rolparentid | oid | PG_AUTHID.rolparentid | OID of the group user to which the user belongs |
| roltabspace | text | - | Storage space of the user's permanent tables |
| roltempspace | text | - | Storage space of the user's temporary tables |
| rolspillspace | text | - | Operator disk spill space of the user |
| rolconfig | text[] | - | Session defaults for runtime configuration variables |
| oid | oid | PG_AUTHID.oid | ID of the role |
| roluseft | boolean | PG_AUTHID.roluseft | Whether the role can perform operations on foreign tables |
| nodegroup | name | - | Name of the logical cluster associated with the role. If no logical cluster is associated, this column is empty. |
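A minimal sketch over the columns above: list the roles that can log in, with their resource pool and any account expiry.

```sql
-- Sketch: login-capable roles, their resource pools, and expiry dates.
SELECT rolname, rolsystemadmin, rolconnlimit, rolrespool, rolvaliduntil
FROM pg_roles
WHERE rolcanlogin
ORDER BY rolname;
```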
PG_RULES displays information about rewrite rules.
| Name | Type | Description |
|---|---|---|
| schemaname | name | Name of the schema that contains the table |
| tablename | name | Name of the table the rule is for |
| rulename | name | Rule name |
| definition | text | Rule definition (a reconstructed creation command) |
PG_RUNNING_XACTS displays the running transaction information on the current node.
| Name | Type | Description |
|---|---|---|
| handle | integer | Handle corresponding to the transaction in GTM |
| gxid | xid | Transaction ID |
| state | tinyint | Transaction status (3: prepared; 0: starting) |
| node | text | Node name |
| xmin | xid | Minimum transaction ID xmin on the node |
| vacuum | boolean | Whether the current transaction is lazy vacuum |
| timeline | bigint | Number of database restarts |
| prepare_xid | xid | Transaction ID in the prepared status. If the status is not prepared, the value is 0. |
| pid | bigint | Thread ID corresponding to the transaction |
| next_xid | xid | Transaction ID sent from a CN to a DN |
PG_SECLABELS displays information about security labels.
| Name | Type | Reference | Description |
|---|---|---|---|
| objoid | oid | Any OID column | OID of the object this security label pertains to |
| classoid | oid | PG_CLASS.oid | OID of the system table that contains the object |
| objsubid | integer | - | For a security label on a table column, the column number (the objoid and classoid refer to the table itself). For all other object types, this column is 0. |
| objtype | text | - | Type of the object to which this label applies |
| objnamespace | oid | PG_NAMESPACE.oid | OID of the namespace for this object, if applicable; otherwise NULL |
| objname | text | - | Name of the object to which the label applies |
| provider | text | PG_SECLABEL.provider | Label provider associated with this label |
| label | text | PG_SECLABEL.label | Security label applied to this object |
PG_SESSION_WLMSTAT displays the corresponding load management information about the task currently executed by the user.
| Column | Type | Description |
|---|---|---|
| datid | oid | OID of the database this backend is connected to |
| datname | name | Name of the database the backend is connected to |
| threadid | bigint | ID of the backend thread |
| processid | integer | Thread PID of the backend |
| usesysid | oid | OID of the user who logged in to the backend |
| appname | text | Name of the application that is connected to the backend |
| usename | name | Name of the user logged in to the backend |
| priority | bigint | Priority of the Cgroup where the statement is located |
| attribute | text | Statement attributes |
| block_time | bigint | Pending duration of the statement by now (unit: s) |
| elapsed_time | bigint | Actual execution duration of the statement by now (unit: s) |
| total_cpu_time | bigint | Total CPU usage duration of the statement on the DNs in the last period (unit: s) |
| cpu_skew_percent | integer | CPU usage skew ratio of the statement on the DNs in the last period |
| statement_mem | integer | Estimated memory required for statement execution. This column is reserved. |
| active_points | integer | Number of concurrently active points occupied by the statement in the resource pool |
| dop_value | integer | DOP value obtained by the statement from the resource pool |
| control_group | text | Cgroup currently used by the statement |
| status | text | Status of the statement |
| enqueue | text | Current queuing status of the statement |
| resource_pool | name | Current resource pool where the statement is located |
| query | text | Text of this backend's most recent query. If state is active, this column shows the executing query. In all other states, it shows the last query that was executed. |
| isplana | bool | In logical cluster mode, indicates whether a statement occupies the resources of other logical clusters. The default value is f (does not occupy). |
| node_group | text | Logical cluster of the user running the statement |
| lane | text | Fast or slow lane for statement queries |
PG_SESSION_IOSTAT displays the I/O load management information about the task currently executed by the user.
IOPS is counted by ones for column storage and by ten thousands for row storage.

| Name | Type | Description |
|---|---|---|
| query_id | bigint | Job ID |
| mincurriops | integer | Minimum I/O of the current job across DNs |
| maxcurriops | integer | Maximum I/O of the current job across DNs |
| minpeakiops | integer | Minimum peak I/O of the current job across DNs |
| maxpeakiops | integer | Maximum peak I/O of the current job across DNs |
| io_limits | integer | io_limits set for the job |
| io_priority | text | io_priority set for the job |
| query | text | Job |
| node_group | text | Logical cluster of the user running the job |
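A minimal sketch over the columns above: surface jobs whose current I/O is approaching their configured limit (only jobs with an explicit io_limits are considered):

```sql
-- Sketch: jobs under I/O control, ordered by current peak usage.
SELECT query_id, maxcurriops, maxpeakiops, io_limits, io_priority
FROM pg_session_iostat
WHERE io_limits > 0
ORDER BY maxcurriops DESC;
```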
PG_SETTINGS displays information about parameters of the running database.
| Name | Type | Description |
|---|---|---|
| name | text | Parameter name |
| setting | text | Current value of the parameter |
| unit | text | Implicit unit of the parameter |
| category | text | Logical group of the parameter |
| short_desc | text | Brief description of the parameter |
| extra_desc | text | Detailed description of the parameter |
| context | text | Context of parameter values, including internal, backend, superuser, and user |
| vartype | text | Parameter type. It can be bool, enum, integer, real, or string. |
| source | text | Method of assigning the parameter value |
| min_val | text | Minimum value of the parameter. If the parameter type is not numeric, the value of this column is null. |
| max_val | text | Maximum value of the parameter. If the parameter type is not numeric, the value of this column is null. |
| enumvals | text[] | Valid values of an enum-typed parameter. If the parameter type is not enum, the value of this column is null. |
| boot_val | text | Default parameter value used upon database startup |
| reset_val | text | Default parameter value used upon database reset |
| sourcefile | text | Configuration file used to set the parameter value. If the parameter value is not configured using the configuration file, the value of this column is null. |
| sourceline | integer | Row number of the configuration file where the parameter value is set. If the parameter value is not configured using the configuration file, the value of this column is null. |
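For example, a minimal sketch that inspects one parameter, including where its current value came from (work_mem is just a commonly available parameter used for illustration):

```sql
-- Sketch: current, boot, and reset values of a single parameter.
SELECT name, setting, unit, boot_val, reset_val, source
FROM pg_settings
WHERE name = 'work_mem';
```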
PG_SHADOW displays properties of all roles that are marked as rolcanlogin in PG_AUTHID.
+The name stems from the fact that this table should not be readable by the public since it contains passwords. PG_USER is a publicly readable view on PG_SHADOW that blanks out the password column.
| Name | Type | Reference | Description |
|---|---|---|---|
| usename | name | PG_AUTHID.rolname | User name |
| usesysid | oid | PG_AUTHID.oid | ID of the user |
| usecreatedb | boolean | - | Indicates that the user can create databases |
| usesuper | boolean | - | Indicates that the user is an administrator |
| usecatupd | boolean | - | Indicates that the user can update system catalogs. Even the system administrator cannot do this unless this column is true. |
| userepl | boolean | - | Indicates that the user can initiate streaming replication and put the system in and out of backup mode |
| passwd | text | - | Password (possibly encrypted); null if none. See PG_AUTHID for details about how encrypted passwords are stored. |
| valbegin | timestamp with time zone | - | Account validity start time; null if no start time |
| valuntil | timestamp with time zone | - | Password expiry time; null if no expiration |
| respool | name | - | Resource pool used by the user |
| parent | oid | - | Parent resource pool |
| spacelimit | text | - | Storage space of the permanent tables |
| tempspacelimit | text | - | Storage space of the temporary tables |
| spillspacelimit | text | - | Operator disk spill space |
| useconfig | text[] | - | Session defaults for runtime configuration variables |
PG_SHARED_MEMORY_DETAIL displays usage information about all the shared memory contexts.
| Name | Type | Description |
|---|---|---|
| contextname | text | Name of the memory context |
| level | smallint | Hierarchy level of the memory context |
| parent | text | Parent memory context |
| totalsize | bigint | Total size of the shared memory, in bytes |
| freesize | bigint | Remaining size of the shared memory, in bytes |
| usedsize | bigint | Used size of the shared memory, in bytes |
PG_STATS displays the single-column statistics stored in the pg_statistic table.
| Name | Type | Reference | Description |
|---|---|---|---|
| schemaname | name | PG_NAMESPACE.nspname | Name of the schema that contains the table |
| tablename | name | PG_CLASS.relname | Name of the table |
| attname | name | PG_ATTRIBUTE.attname | Column name |
| inherited | boolean | - | Includes inherited sub-columns if the value is true; otherwise, indicates the column in the specified table only |
| null_frac | real | - | Percentage of column entries that are null |
| avg_width | integer | - | Average width in bytes of the column's entries |
| n_distinct | real | - | Number of distinct values in the column. The negated form is used when ANALYZE believes that the number of distinct values is likely to increase as the table grows; the positive form is used when the column seems to have a fixed number of possible values. For example, -1 indicates a unique column in which the number of distinct values is the same as the number of rows. |
| n_dndistinct | real | - | Number of unique non-null data values of the column on dn1 |
| most_common_vals | anyarray | - | List of the most common values in the column. If the column does not have most common values, this column will be NULL. |
| most_common_freqs | real[] | - | List of the frequencies of the most common values, that is, the number of occurrences of each value divided by the total number of rows. (NULL if most_common_vals is NULL) |
| histogram_bounds | anyarray | - | List of values that divide the column's values into groups of equal proportion. The values in most_common_vals, if present, are omitted from this histogram calculation. This column is null if the column data type does not have a < operator or if the most_common_vals list accounts for the entire population. |
| correlation | real | - | Statistical correlation between physical row ordering and logical ordering of the column values. It ranges from -1 to +1. When the value is near -1 or +1, an index scan on the column is estimated to be cheaper than when it is near zero, due to reduction of random access to the disk. This column is null if the column data type does not have a < operator. |
| most_common_elems | anyarray | - | List of non-null element values most often appearing |
| most_common_elem_freqs | real[] | - | List of the frequencies of the most common element values |
| elem_count_histogram | real[] | - | Histogram of the counts of distinct non-null element values |
PG_STAT_ACTIVITY displays information about the current user's queries.
| Name | Type | Description |
|---|---|---|
| datid | oid | OID of the database that the user session connects to in the backend |
| datname | name | Name of the database that the user session connects to in the backend |
| pid | bigint | Process ID of the backend |
| usesysid | oid | OID of the user logging in to the backend |
| usename | name | Name of the user logging in to the backend |
| application_name | text | Name of the application connected to the backend |
| client_addr | inet | IP address of the client connected to the backend. If this column is null, the client is connected via a Unix socket on the server machine or this is an internal process such as autovacuum. |
| client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This column will only be non-null for IP connections, and only when log_hostname is enabled. |
| client_port | integer | TCP port number that the client uses for communication with this backend, or -1 if a Unix socket is used |
| backend_start | timestamp with time zone | Startup time of the backend process, that is, the time when the client connects to the server |
| xact_start | timestamp with time zone | Time when the current transaction was started, or NULL if no transaction is active. If the current query is the first of its transaction, this column is equal to the query_start column. |
| query_start | timestamp with time zone | Time when the currently active query was started, or, if state is not active, when the last query was started |
| state_change | timestamp with time zone | Time of the last state change |
| waiting | boolean | Whether the backend is currently waiting on a lock. If it is, its value is true. |
| enqueue | text | Queuing status of a statement |
| state | text | Current overall state of this backend. NOTE: Common users can view only their own session status; the state information of other accounts is empty. See the example following this table. |
| resource_pool | name | Resource pool used by the user |
| query_id | bigint | ID of a query |
| query | text | Text of this backend's most recent query. If state is active, this column shows the executing query. In all other states, it shows the last query that was executed. |
| connection_info | text | A string in JSON format recording the driver type, driver version, driver deployment path, and process owner of the connected database (for details, see connection_info) |

For example, after user judy connects to the database, the state information of user joe and the initial user omm in PG_STAT_ACTIVITY is empty:

```
SELECT datname, usename, usesysid, state, pid FROM pg_stat_activity;
 datname  | usename | usesysid | state  |       pid
----------+---------+----------+--------+-----------------
 gaussdb  | dbadmin |       10 |        | 139968752121616
 gaussdb  | dbadmin |       10 |        | 139968903116560
 db_tpcds | judy    |    16398 | active | 139968391403280
 gaussdb  | dbadmin |       10 |        | 139968643069712
 gaussdb  | dbadmin |       10 |        | 139968680818448
 gaussdb  | joe     |    16390 |        | 139968563377936
(6 rows)
```
PG_STAT_ALL_INDEXES displays access information about all indexes in the database, with one row per index.

Indexes can be used via either simple index scans or "bitmap" index scans. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect pg_stat_all_indexes.idx_tup_fetch.

| Name | Type | Description |
|---|---|---|
| relid | oid | OID of the table for this index |
| indexrelid | oid | OID of this index |
| schemaname | name | Name of the schema this index is in |
| relname | name | Name of the table for this index |
| indexrelname | name | Name of this index |
| idx_scan | bigint | Number of index scans initiated on this index |
| idx_tup_read | bigint | Number of index entries returned by scans on this index |
| idx_tup_fetch | bigint | Number of live table rows fetched by simple index scans using this index |
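An illustrative sketch based on the counters above: flag candidate unused indexes, that is, indexes never scanned since statistics were last reset. Interpret the result with care for recently created indexes:

```sql
-- Sketch: indexes with zero scans since the statistics were reset.
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_all_indexes
WHERE idx_scan = 0
  AND schemaname NOT IN ('pg_catalog', 'information_schema');
```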
PG_STAT_ALL_TABLES displays access information about all rows in all tables (including TOAST tables) in the database.
| Name | Type | Description |
|---|---|---|
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Name of the table |
| seq_scan | bigint | Number of sequential scans started on the table |
| seq_tup_read | bigint | Number of rows that have live data fetched by sequential scans |
| idx_scan | bigint | Number of index scans |
| idx_tup_fetch | bigint | Number of rows that have live data fetched by index scans |
| n_tup_ins | bigint | Number of rows inserted |
| n_tup_upd | bigint | Number of rows updated |
| n_tup_del | bigint | Number of rows deleted |
| n_tup_hot_upd | bigint | Number of rows updated by HOT (no separate index update is required) |
| n_live_tup | bigint | Estimated number of live rows |
| n_dead_tup | bigint | Estimated number of dead rows |
| last_vacuum | timestamp with time zone | Last time at which this table was manually vacuumed (excluding VACUUM FULL) |
| last_autovacuum | timestamp with time zone | Last time at which this table was automatically vacuumed |
| last_analyze | timestamp with time zone | Last time at which this table was analyzed |
| last_autoanalyze | timestamp with time zone | Last time at which this table was automatically analyzed |
| vacuum_count | bigint | Number of vacuum operations (excluding VACUUM FULL) |
| autovacuum_count | bigint | Number of autovacuum operations |
| analyze_count | bigint | Number of analyze operations |
| autoanalyze_count | bigint | Number of autoanalyze operations |
| last_data_changed | timestamp with time zone | Last time at which this table was updated (by INSERT/UPDATE/DELETE or EXCHANGE/TRUNCATE/DROP partition). This column is recorded only on the local CN. |
PG_STAT_BAD_BLOCK displays statistics about page or CU verification failures after a node is started.
| Name | Type | Description |
|---|---|---|
| nodename | text | Node name |
| databaseid | integer | Database OID |
| tablespaceid | integer | Tablespace OID |
| relfilenode | integer | File object ID |
| forknum | integer | File type |
| error_count | integer | Number of verification failures |
| first_time | timestamp with time zone | Time of the first occurrence |
| last_time | timestamp with time zone | Time of the latest occurrence |
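A minimal health-check sketch over the columns above: list all recorded page/CU verification failures since node startup, most recent first.

```sql
-- Sketch: verification failures recorded since the node started.
SELECT nodename, databaseid, relfilenode, error_count, first_time, last_time
FROM pg_stat_bad_block
ORDER BY last_time DESC;
```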
PG_STAT_BGWRITER displays statistics about the background writer process's activity.
| Name | Type | Description |
|---|---|---|
| checkpoints_timed | bigint | Number of scheduled checkpoints that have been performed |
| checkpoints_req | bigint | Number of requested checkpoints that have been performed |
| checkpoint_write_time | double precision | Total amount of time spent in the portion of checkpoint processing where files are written to disk, in milliseconds |
| checkpoint_sync_time | double precision | Total amount of time spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds |
| buffers_checkpoint | bigint | Number of buffers written during checkpoints |
| buffers_clean | bigint | Number of buffers written by the background writer |
| maxwritten_clean | bigint | Number of times the background writer stopped a cleaning scan because it had written too many buffers |
| buffers_backend | bigint | Number of buffers written directly by a backend |
| buffers_backend_fsync | bigint | Number of times that a backend had to execute fsync |
| buffers_alloc | bigint | Number of buffers allocated |
| stats_reset | timestamp with time zone | Time at which these statistics were reset |
PG_STAT_DATABASE displays the status and statistics of each database on the current node.
| Name | Type | Description |
| --- | --- | --- |
| datid | oid | Database OID |
| datname | name | Database name |
| numbackends | integer | Number of backends currently connected to this database on the current node. This is the only column in this view that reflects a current state value; all other columns return the accumulated value since the last reset. |
| xact_commit | bigint | Number of transactions in this database that have been committed on the current node |
| xact_rollback | bigint | Number of transactions in this database that have been rolled back on the current node |
| blks_read | bigint | Number of disk blocks read in this database on the current node |
| blks_hit | bigint | Number of disk blocks found in the buffer cache on the current node, that is, the number of blocks hit in the cache. (This only includes hits in the GaussDB(DWS) buffer cache, not in the file system cache.) |
| tup_returned | bigint | Number of rows returned by queries in this database on the current node |
| tup_fetched | bigint | Number of rows fetched by queries in this database on the current node |
| tup_inserted | bigint | Number of rows inserted in this database on the current node |
| tup_updated | bigint | Number of rows updated in this database on the current node |
| tup_deleted | bigint | Number of rows deleted from this database on the current node |
| conflicts | bigint | Number of queries canceled due to database recovery conflicts on the current node (conflicts occurring only on the standby server). For details, see PG_STAT_DATABASE_CONFLICTS. |
| temp_files | bigint | Number of temporary files created by this database on the current node. All temporary files are counted, regardless of why the temporary file was created (for example, sorting or hashing), and regardless of the log_temp_files setting. |
| temp_bytes | bigint | Size of temporary files written to this database on the current node. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting. |
| deadlocks | bigint | Number of deadlocks in this database on the current node |
| blk_read_time | double precision | Time spent reading data file blocks by backends in this database on the current node, in milliseconds |
| blk_write_time | double precision | Time spent writing into data file blocks by backends in this database on the current node, in milliseconds |
| stats_reset | timestamp with time zone | Time when the database statistics were reset on the current node |

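For example, a sample query (a sketch based on the columns above) that computes the buffer cache hit ratio and transaction outcomes per database on the current node:

```sql
-- Cache hit ratio and transaction outcomes per database
SELECT datname,
       round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct,
       xact_commit,
       xact_rollback,
       deadlocks
FROM pg_stat_database
ORDER BY blks_read DESC;
```
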
PG_STAT_DATABASE_CONFLICTS displays statistics about database conflicts.
| Name | Type | Description |
| --- | --- | --- |
| datid | oid | Database OID |
| datname | name | Database name |
| confl_tablespace | bigint | Number of conflicting tablespaces |
| confl_lock | bigint | Number of conflicting locks |
| confl_snapshot | bigint | Number of conflicting snapshots |
| confl_bufferpin | bigint | Number of conflicting buffers |
| confl_deadlock | bigint | Number of conflicting deadlocks |

PG_STAT_GET_MEM_MBYTES_RESERVED displays the current activity information of a thread stored in memory. You need to specify the thread ID (pid in PG_STAT_ACTIVITY) in the query. If the thread ID is set to 0, the ID of the current thread is used. For example:
```sql
SELECT pg_stat_get_mem_mbytes_reserved(0);
```
| Parameter | Description |
| --- | --- |
| ConnectInfo | Connection information |
| ParctlManager | Concurrency management information |
| GeneralParams | Basic parameter information |
| GeneralParams RPDATA | Basic resource pool information |
| ExceptionManager | Exception management information |
| CollectInfo | Collection information |
| GeneralInfo | Basic information |
| ParctlState | Concurrency status information |
| CPU INFO | CPU information |
| ControlGroup | Cgroup information |
| IOSTATE | I/O status information |

PG_STAT_USER_FUNCTIONS displays statistics about user-defined functions (functions written in non-internal languages) in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| funcid | oid | Function OID |
| schemaname | name | Schema name |
| funcname | name | Function name |
| calls | bigint | Number of times this function has been called |
| total_time | double precision | Total time spent in this function and all other functions called by it |
| self_time | double precision | Total time spent in this function itself, excluding other functions called by it |

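For example, the following sample query (a sketch; it assumes function statistics collection is enabled so that the view is populated) lists the functions that consume the most time in their own bodies:

```sql
-- Top user-defined functions by time spent in the function body itself
SELECT schemaname, funcname, calls, total_time, self_time
FROM pg_stat_user_functions
ORDER BY self_time DESC
LIMIT 10;
```
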
PG_STAT_USER_INDEXES displays information about the index status of user-defined ordinary tables and TOAST tables.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID for the index |
| indexrelid | oid | Index OID |
| schemaname | name | Schema name for the index |
| relname | name | Table name for the index |
| indexrelname | name | Index name |
| idx_scan | bigint | Number of index scans |
| idx_tup_read | bigint | Number of index entries returned by scans on this index |
| idx_tup_fetch | bigint | Number of rows that have live data fetched by index scans |

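For example, a sample query (a sketch based on the columns above) that flags indexes never scanned since the statistics were last reset; such indexes are candidates for review:

```sql
-- User indexes with no recorded scans since the last statistics reset
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
```
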
PG_STAT_USER_TABLES displays status information about user-defined ordinary tables and TOAST tables in all namespaces.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| seq_scan | bigint | Number of sequential scans started on the table. Data in this column is valid only for non-system catalogs on the local CN. |
| seq_tup_read | bigint | Number of rows that have live data fetched by sequential scans |
| idx_scan | bigint | Number of index scans |
| idx_tup_fetch | bigint | Number of rows that have live data fetched by index scans |
| n_tup_ins | bigint | Number of rows inserted |
| n_tup_upd | bigint | Number of rows updated |
| n_tup_del | bigint | Number of rows deleted |
| n_tup_hot_upd | bigint | Number of rows HOT updated (that is, with no separate index update required) |
| n_live_tup | bigint | Estimated number of live rows |
| n_dead_tup | bigint | Estimated number of dead rows |
| last_vacuum | timestamp with time zone | Last time at which this table was manually vacuumed (excluding VACUUM FULL) |
| last_autovacuum | timestamp with time zone | Last time at which this table was automatically vacuumed |
| last_analyze | timestamp with time zone | Last time at which this table was analyzed |
| last_autoanalyze | timestamp with time zone | Last time at which this table was automatically analyzed |
| vacuum_count | bigint | Number of vacuum operations (excluding VACUUM FULL) |
| autovacuum_count | bigint | Number of autovacuum operations |
| analyze_count | bigint | Number of analyze operations |
| autoanalyze_count | bigint | Number of autoanalyze operations |

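For example, the following sample query (a sketch using the columns above) surfaces the tables with the most estimated dead rows, which helps decide where VACUUM is most urgently needed:

```sql
-- Tables with the highest estimated dead-row counts
SELECT schemaname, relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```
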
PG_STAT_REPLICATION displays information about log synchronization status, such as the locations of the sender sending logs and the receiver receiving logs.
| Name | Type | Description |
| --- | --- | --- |
| pid | bigint | PID of the thread |
| usesysid | oid | User system ID |
| usename | name | Username |
| application_name | text | Application name |
| client_addr | inet | Client address |
| client_hostname | text | Client name |
| client_port | integer | Client port number |
| backend_start | timestamp with time zone | Start time of the program |
| state | text | Log replication state (catch-up or consistent streaming) |
| sender_sent_location | text | Location where the sender sends logs |
| receiver_write_location | text | Location where the receiver writes logs |
| receiver_flush_location | text | Location where the receiver flushes logs |
| receiver_replay_location | text | Location where the receiver replays logs |
| sync_priority | integer | Priority of synchronous replication (0 indicates asynchronous replication) |
| sync_state | text | Synchronization state (asynchronous replication, synchronous replication, or potential synchronization) |

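For example, a sample query (a sketch based on the columns above) that checks the state and log locations of each replication connection:

```sql
-- Replication state and log locations per connection
SELECT application_name, client_addr, state, sync_state,
       sender_sent_location, receiver_replay_location
FROM pg_stat_replication;
```
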
PG_STAT_SYS_INDEXES displays the index status information about all the system catalogs in the pg_catalog and information_schema schemas.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID for the index |
| indexrelid | oid | Index OID |
| schemaname | name | Schema name for the index |
| relname | name | Table name for the index |
| indexrelname | name | Index name |
| idx_scan | bigint | Number of index scans |
| idx_tup_read | bigint | Number of index entries returned by scans on this index |
| idx_tup_fetch | bigint | Number of rows that have live data fetched by index scans |

PG_STAT_SYS_TABLES displays statistics about all the system catalogs in the pg_catalog and information_schema schemas.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| seq_scan | bigint | Number of sequential scans started on the table |
| seq_tup_read | bigint | Number of rows that have live data fetched by sequential scans |
| idx_scan | bigint | Number of index scans |
| idx_tup_fetch | bigint | Number of rows that have live data fetched by index scans |
| n_tup_ins | bigint | Number of rows inserted |
| n_tup_upd | bigint | Number of rows updated |
| n_tup_del | bigint | Number of rows deleted |
| n_tup_hot_upd | bigint | Number of rows HOT updated (that is, with no separate index update required) |
| n_live_tup | bigint | Estimated number of live rows |
| n_dead_tup | bigint | Estimated number of dead rows |
| last_vacuum | timestamp with time zone | Last time at which this table was manually vacuumed (excluding VACUUM FULL) |
| last_autovacuum | timestamp with time zone | Last time at which this table was automatically vacuumed |
| last_analyze | timestamp with time zone | Last time at which this table was analyzed |
| last_autoanalyze | timestamp with time zone | Last time at which this table was automatically analyzed |
| vacuum_count | bigint | Number of vacuum operations (excluding VACUUM FULL) |
| autovacuum_count | bigint | Number of autovacuum operations |
| analyze_count | bigint | Number of analyze operations |
| autoanalyze_count | bigint | Number of autoanalyze operations |

PG_STAT_XACT_ALL_TABLES displays the transaction status information about all ordinary tables and TOAST tables in the namespaces.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| seq_scan | bigint | Number of sequential scans started on the table |
| seq_tup_read | bigint | Number of live rows fetched by sequential scans |
| idx_scan | bigint | Number of index scans started on the table |
| idx_tup_fetch | bigint | Number of live rows fetched by index scans |
| n_tup_ins | bigint | Number of rows inserted |
| n_tup_upd | bigint | Number of rows updated |
| n_tup_del | bigint | Number of rows deleted |
| n_tup_hot_upd | bigint | Number of rows HOT updated (that is, with no separate index update required) |

PG_STAT_XACT_SYS_TABLES displays the transaction status information of the system catalog in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| seq_scan | bigint | Number of sequential scans started on the table |
| seq_tup_read | bigint | Number of live rows fetched by sequential scans |
| idx_scan | bigint | Number of index scans started on the table |
| idx_tup_fetch | bigint | Number of live rows fetched by index scans |
| n_tup_ins | bigint | Number of rows inserted |
| n_tup_upd | bigint | Number of rows updated |
| n_tup_del | bigint | Number of rows deleted |
| n_tup_hot_upd | bigint | Number of rows HOT updated (that is, with no separate index update required) |

PG_STAT_XACT_USER_FUNCTIONS displays statistics about function executions, with one row per function.
| Name | Type | Description |
| --- | --- | --- |
| funcid | oid | Function OID |
| schemaname | name | Schema name |
| funcname | name | Function name |
| calls | bigint | Number of times this function has been called |
| total_time | double precision | Total time spent in this function and all other functions called by it |
| self_time | double precision | Total time spent in this function itself, excluding other functions called by it |

PG_STAT_XACT_USER_TABLES displays the transaction status information of the user table in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| seq_scan | bigint | Number of sequential scans started on the table |
| seq_tup_read | bigint | Number of live rows fetched by sequential scans |
| idx_scan | bigint | Number of index scans started on the table |
| idx_tup_fetch | bigint | Number of live rows fetched by index scans |
| n_tup_ins | bigint | Number of rows inserted |
| n_tup_upd | bigint | Number of rows updated |
| n_tup_del | bigint | Number of rows deleted |
| n_tup_hot_upd | bigint | Number of rows HOT updated (that is, with no separate index update required) |

PG_STATIO_ALL_INDEXES contains one row for each index in the current database, showing I/O statistics about accesses to that specific index.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID for the index |
| indexrelid | oid | Index OID |
| schemaname | name | Schema name for the index |
| relname | name | Table name for the index |
| indexrelname | name | Index name |
| idx_blks_read | bigint | Number of disk blocks read from this index |
| idx_blks_hit | bigint | Number of buffer hits in this index |

PG_STATIO_ALL_SEQUENCES contains one row for each sequence in the current database, showing I/O statistics about accesses to that specific sequence.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | OID of this sequence |
| schemaname | name | Name of the schema this sequence is in |
| relname | name | Name of the sequence |
| blks_read | bigint | Number of disk blocks read from this sequence |
| blks_hit | bigint | Number of buffer hits in this sequence |

PG_STATIO_ALL_TABLES contains one row for each table in the current database (including TOAST tables), showing I/O statistics about accesses to that specific table.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| heap_blks_read | bigint | Number of disk blocks read from this table |
| heap_blks_hit | bigint | Number of buffer hits in this table |
| idx_blks_read | bigint | Number of disk blocks read from all indexes on this table |
| idx_blks_hit | bigint | Number of buffer hits in all indexes on this table |
| toast_blks_read | bigint | Number of disk blocks read from the TOAST table (if any) in this table |
| toast_blks_hit | bigint | Number of buffer hits in the TOAST table (if any) in this table |
| tidx_blks_read | bigint | Number of disk blocks read from the TOAST table index (if any) in this table |
| tidx_blks_hit | bigint | Number of buffer hits in the TOAST table index (if any) in this table |

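For example, the following sample query (a sketch using the columns above) computes the heap buffer hit ratio per table, which helps spot tables whose reads mostly miss the cache:

```sql
-- Heap buffer hit ratio per table, heaviest disk readers first
SELECT schemaname, relname,
       heap_blks_read, heap_blks_hit,
       round(100.0 * heap_blks_hit
             / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct
FROM pg_statio_all_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```
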
PG_STATIO_SYS_INDEXES displays the I/O status information about all system catalog indexes in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID for the index |
| indexrelid | oid | Index OID |
| schemaname | name | Schema name for the index |
| relname | name | Table name for the index |
| indexrelname | name | Index name |
| idx_blks_read | bigint | Number of disk blocks read from this index |
| idx_blks_hit | bigint | Number of buffer hits in this index |

PG_STATIO_SYS_SEQUENCES displays the I/O status information about all the system sequences in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | OID of this sequence |
| schemaname | name | Name of the schema this sequence is in |
| relname | name | Name of the sequence |
| blks_read | bigint | Number of disk blocks read from this sequence |
| blks_hit | bigint | Number of buffer hits in this sequence |

PG_STATIO_SYS_TABLES displays the I/O status information about all the system catalogs in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| heap_blks_read | bigint | Number of disk blocks read from this table |
| heap_blks_hit | bigint | Number of buffer hits in this table |
| idx_blks_read | bigint | Number of disk blocks read from all indexes on this table |
| idx_blks_hit | bigint | Number of buffer hits in all indexes on this table |
| toast_blks_read | bigint | Number of disk blocks read from the TOAST table (if any) in this table |
| toast_blks_hit | bigint | Number of buffer hits in the TOAST table (if any) in this table |
| tidx_blks_read | bigint | Number of disk blocks read from the TOAST table index (if any) in this table |
| tidx_blks_hit | bigint | Number of buffer hits in the TOAST table index (if any) in this table |

PG_STATIO_USER_INDEXES displays the I/O status information about all the user relationship table indexes in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID for the index |
| indexrelid | oid | Index OID |
| schemaname | name | Schema name for the index |
| relname | name | Table name for the index |
| indexrelname | name | Index name |
| idx_blks_read | bigint | Number of disk blocks read from this index |
| idx_blks_hit | bigint | Number of buffer hits in this index |

PG_STATIO_USER_SEQUENCES displays the I/O status information about all the user relation table sequences in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | OID of this sequence |
| schemaname | name | Name of the schema this sequence is in |
| relname | name | Name of this sequence |
| blks_read | bigint | Number of disk blocks read from this sequence |
| blks_hit | bigint | Number of cache hits in this sequence |

PG_STATIO_USER_TABLES displays the I/O status information about all the user relation tables in the namespace.
| Name | Type | Description |
| --- | --- | --- |
| relid | oid | Table OID |
| schemaname | name | Schema name of the table |
| relname | name | Table name |
| heap_blks_read | bigint | Number of disk blocks read from this table |
| heap_blks_hit | bigint | Number of buffer hits in this table |
| idx_blks_read | bigint | Number of disk blocks read from all indexes on this table |
| idx_blks_hit | bigint | Number of buffer hits in all indexes on this table |
| toast_blks_read | bigint | Number of disk blocks read from the TOAST table (if any) in this table |
| toast_blks_hit | bigint | Number of buffer hits in the TOAST table (if any) in this table |
| tidx_blks_read | bigint | Number of disk blocks read from the TOAST table index (if any) in this table |
| tidx_blks_hit | bigint | Number of buffer hits in the TOAST table index (if any) in this table |

PG_THREAD_WAIT_STATUS allows you to view the blocking and waiting status of the backend threads and auxiliary threads of the current instance.
| Name | Type | Description |
| --- | --- | --- |
| node_name | text | Current node name |
| db_name | text | Database name |
| thread_name | text | Thread name |
| query_id | bigint | Query ID. It is equivalent to debug_query_id. |
| tid | bigint | Thread ID of the current thread |
| lwtid | integer | Lightweight thread ID of the current thread |
| ptid | integer | Parent thread of the streaming thread |
| tlevel | integer | Level of the streaming thread |
| smpid | integer | Concurrent thread ID |
| wait_status | text | Waiting status of the current thread. For details about the waiting statuses, see the table below. |
| wait_event | text | If wait_status is acquire lock, acquire lwlock, or wait io, this column describes the lock, lightweight lock, and I/O information, respectively. If wait_status is not any of the three values, this column is empty. |

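For example, a sample query (a sketch based on the columns above) that summarizes what the threads on this instance are currently waiting for:

```sql
-- Threads that are currently waiting, grouped by wait status and event
SELECT wait_status, wait_event, count(*) AS threads
FROM pg_thread_wait_status
WHERE wait_status <> 'none'
GROUP BY wait_status, wait_event
ORDER BY threads DESC;
```
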
The waiting statuses in the wait_status column are as follows:
| Value | Description |
| --- | --- |
| none | Waiting for no event |
| acquire lock | Waiting for a lock until the locking succeeds or times out |
| acquire lwlock | Waiting for a lightweight lock |
| wait io | Waiting for I/O completion |
| wait cmd | Waiting for a network communication packet read to complete |
| wait pooler get conn | Waiting for the pooler to obtain a connection |
| wait pooler abort conn | Waiting for the pooler to terminate a connection |
| wait pooler clean conn | Waiting for the pooler to clear connections |
| pooler create conn: [nodename], total N | Waiting for the pooler to set up a connection. The connection is being established with the node specified by nodename, and there are N connections waiting to be set up. |
| get conn | Obtaining the connection to other nodes |
| set cmd: [nodename] | Waiting for the SET, RESET, TRANSACTION BLOCK LEVEL PARA SET, or SESSION LEVEL PARA SET statement to run on the connection. The statement is being executed on the node specified by nodename. |
| cancel query | Canceling the SQL statement that is being executed through the connection |
| stop query | Stopping the query that is being executed through the connection |
| wait node: [nodename](plevel), total N, [phase] | Waiting to receive data from a connected node. The thread is waiting for data from the plevel thread of the node specified by nodename, and the data of N connections is waiting to be returned. If phase is included, it indicates the phase the wait is currently in. |
| wait transaction sync: xid | Waiting for the transaction specified by xid to be synchronized |
| wait wal sync | Waiting for the WAL logs up to the specified LSN to finish synchronizing to the standby instance |
| wait data sync | Waiting for data page synchronization to the standby instance to complete |
| wait data sync queue | Waiting to put the data pages in the row storage or the CUs in the column storage into the synchronization queue |
| flush data: [nodename](plevel), [phase] | Waiting to send data to the plevel thread of the node specified by nodename. If phase is included, the possible phase is wait quota, indicating that the current communication flow is waiting for the quota value. |
| stream get conn: [nodename], total N | Waiting to connect to the consumer object of the node specified by nodename when the stream flow is initialized. There are N consumers waiting to be connected. |
| wait producer ready: [nodename](plevel), total N | Waiting for each producer to be ready when the stream flow is initialized. The thread is waiting for the producer of the plevel thread on the nodename node to be ready. There are N producers waiting to be ready. |
| synchronize quit | Waiting for the threads in the stream thread group to quit when the stream plan ends |
| nodegroup destroy | Waiting for the stream node group to be destroyed when the stream plan ends |
| wait active statement | Waiting for job execution under resource and load control |
| wait global queue | Waiting for job execution. The job is queuing in the global queue. |
| wait respool queue | Waiting for job execution. The job is queuing in the resource pool. |
| wait ccn queue | Waiting for job execution. The job is queuing on the central coordinator node (CCN). |
| gtm connect | Waiting for the connection to the GTM |
| gtm get gxid | Waiting for obtaining XIDs from the GTM |
| gtm get snapshot | Waiting for obtaining transaction snapshots from the GTM |
| gtm begin trans | Waiting for the GTM to start a transaction |
| gtm commit trans | Waiting for the GTM to commit a transaction |
| gtm rollback trans | Waiting for the GTM to roll back a transaction |
| gtm create sequence | Waiting for the GTM to create a sequence |
| gtm alter sequence | Waiting for the GTM to modify a sequence |
| gtm get sequence val | Waiting for obtaining the next value of a sequence from the GTM |
| gtm set sequence val | Waiting for the GTM to set a sequence value |
| gtm drop sequence | Waiting for the GTM to delete a sequence |
| gtm rename sequence | Waiting for the GTM to rename a sequence |
| analyze: [relname], [phase] | The thread is running ANALYZE on the relname table. If phase is included, the possible phase is autovacuum, indicating that the database automatically started the autovacuum thread to run ANALYZE. |
| vacuum: [relname], [phase] | The thread is running VACUUM on the relname table. If phase is included, the possible phase is autovacuum, indicating that the database automatically started the autovacuum thread to run VACUUM. |
| vacuum full: [relname] | The thread is running VACUUM FULL on the relname table. |
| create index | An index is being created. |
| HashJoin - [ build hash \| write file ] | The HashJoin operator is being executed. build hash indicates that the operator is building a hash table; write file indicates that it is writing data to disks. Pay attention to how long this phase takes. |
| HashAgg - [ build hash \| write file ] | The HashAgg operator is being executed. build hash indicates that the operator is building a hash table; write file indicates that it is writing data to disks. Pay attention to how long this phase takes. |
| HashSetop - [ build hash \| write file ] | The HashSetop operator is being executed. build hash indicates that the operator is building a hash table; write file indicates that it is writing data to disks. Pay attention to how long this phase takes. |
| Sort \| Sort - write file | The Sort operator is being executed. write file indicates that the Sort operator is writing data to disks. |
| Material \| Material - write file | The Material operator is being executed. write file indicates that the Material operator is writing data to disks. |
| wait sync consumer next step | The consumer (receiving end) synchronously waits for the next iteration. |
| wait sync producer next step | The producer (transmitting end) synchronously waits for the next iteration. |

If wait_status is acquire lwlock, acquire lock, or wait io, the thread is performing an I/O operation or waiting to obtain the corresponding lightweight lock or transaction lock.
The following table describes the corresponding wait events when wait_status is acquire lwlock. (If wait_event is extension, the lightweight lock is dynamically allocated and is not monitored.)
| wait_event | Description |
| --- | --- |
| ShmemIndexLock | Used to protect the primary index table, a hash table, in shared memory |
| OidGenLock | Used to prevent different threads from generating the same OID |
| XidGenLock | Used to prevent two transactions from obtaining the same XID |
| ProcArrayLock | Used to prevent concurrent access to or concurrent modification on the ProcArray shared array |
| SInvalReadLock | Used to prevent concurrent execution with invalid message deletion |
| SInvalWriteLock | Used to prevent concurrent execution with invalid message write and deletion |
| WALInsertLock | Used to prevent concurrent execution with WAL insertion |
| WALWriteLock | Used to prevent concurrent write from a WAL buffer to a disk |
| ControlFileLock | Used to prevent concurrent read/write or concurrent write/write on the pg_control file |
| CheckpointLock | Used to prevent multi-checkpoint concurrent execution |
| CLogControlLock | Used to prevent concurrent access to or concurrent modification on the Clog control data structure |
| MultiXactGenLock | Used to allocate a unique MultiXact ID in serial mode |
| MultiXactOffsetControlLock | Used to prevent concurrent read/write or concurrent write/write on pg_multixact/offset |
| MultiXactMemberControlLock | Used to prevent concurrent read/write or concurrent write/write on pg_multixact/members |
| RelCacheInitLock | Used to add a lock before any operations are performed on the init file when messages are invalid |
| CheckpointerCommLock | Used to send file flush requests to a checkpointer. The request structure needs to be inserted into a request queue in serial mode. |
| TwoPhaseStateLock | Used to prevent concurrent access to or modification on two-phase information sharing arrays |
| TablespaceCreateLock | Used to check whether a tablespace already exists |
| BtreeVacuumLock | Used to prevent VACUUM from clearing pages that are being used by B-tree indexes |
| AutovacuumLock | Used to access the autovacuum worker array in serial mode |
| AutovacuumScheduleLock | Used to distribute tables requiring VACUUM in serial mode |
| SyncScanLock | Used to determine the start position of a relfilenode during heap scanning |
| NodeTableLock | Used to protect a shared structure that stores CN and DN information |
| PoolerLock | Used to prevent two threads from simultaneously obtaining the same connection from a connection pool |
| RelationMappingLock | Used to wait for the mapping file between system catalogs and storage locations to be updated |
| AsyncCtlLock | Used to prevent concurrent access to or concurrent modification on the sharing notification status |
| AsyncQueueLock | Used to prevent concurrent access to or concurrent modification on the sharing notification queue |
| SerializableXactHashLock | Used to prevent concurrent read/write or concurrent write/write on a sharing structure for serializable transactions |
| SerializableFinishedListLock | Used to prevent concurrent read/write or concurrent write/write on a shared linked list for completed serial transactions |
| SerializablePredicateLockListLock | Used to protect a linked list of serializable transactions that have locks |
| OldSerXidLock | Used to protect a structure that records serializable transactions that have conflicts |
| FileStatLock | Used to protect a data structure that stores statistics file information |
| SyncRepLock | Used to protect Xlog synchronization information during primary-standby replication |
| DataSyncRepLock | Used to protect data page synchronization information during primary-standby replication |
| CStoreColspaceCacheLock | Used to add a lock when CU space is allocated for a column-store table |
| CStoreCUCacheSweepLock | Used to add a lock when CU caches used by a column-store table are cyclically washed out |
| MetaCacheSweepLock | Used to add a lock when metadata is cyclically washed out |
| DfsConnectorCacheLock | Used to protect a global hash table where HDFS connection handles are cached |
| dummyServerInfoCacheLock | Used to protect a global hash table where the information about computing Node Group connections is cached |
| ExtensionConnectorLibLock | Used to add a lock when a specific dynamic library is loaded or unloaded in ODBC connection initialization scenarios |
| SearchServerLibLock | Used to add a lock on the file read operation when a specific dynamic library is initially loaded in GPU-accelerated scenarios |
| DfsUserLoginLock | Used to protect a global linked table where HDFS user information is stored |
| DfsSpaceCacheLock | Used to ensure that the IDs of files to be imported to an HDFS table increase monotonically |
| LsnXlogChkFileLock | Used to serially update the Xlog flush points for primary and standby servers recorded in a specific structure |
| GTMHostInfoLock | Used to prevent concurrent access to or concurrent modification on GTM host information |
| ReplicationSlotAllocationLock | Used to add a lock when a primary server allocates stream replication slots during primary-standby replication |
| ReplicationSlotControlLock | Used to prevent concurrent update of replication slot status during primary-standby replication |
| ResourcePoolHashLock | Used to prevent concurrent access to or concurrent modification on a resource pool table, a hash table |
| WorkloadStatHashLock | Used to prevent concurrent access to or concurrent modification on a hash table that contains SQL requests from the CN side |
| WorkloadIoStatHashLock | Used to prevent concurrent access to or concurrent modification on a hash table that contains the I/O information of the current DN |
| WorkloadCGroupHashLock | Used to prevent concurrent access to or concurrent modification on a hash table that contains Cgroup information |
| OBSGetPathLock | Used to prevent concurrent read/write or concurrent write/write on an OBS path |
| WorkloadUserInfoLock | Used to prevent concurrent access to or concurrent modification on a hash table that contains user information about load management |
| WorkloadRecordLock | Used to prevent concurrent access to or concurrent modification on a hash table that contains requests received by CNs during adaptive memory management |
| WorkloadIOUtilLock | Used to protect a structure that records iostat and CPU load information |
| WorkloadNodeGroupLock | Used to prevent concurrent access to or concurrent modification on a hash table that contains Node Group information in memory |
| JobShmemLock | Used to protect global variables in the shared memory that is periodically read during a scheduled task where MPP is compatible with Oracle |
| OBSRuntimeLock | Used to obtain environment variables, for example, GAUSSHOME |
| LLVMDumpIRLock | Used to export the assembly language for dynamically generating functions |
| LLVMParseIRLock | Used to compile and parse a finished IR function from the IR file at the start position of a query |
| RPNumberLock | Used by a DN on a computing Node Group to count the number of threads for a task where plans are being executed |
| ClusterRPLock | Used to control concurrent access on cluster load data maintained in the CCN of the cluster |
| CriticalCacheBuildLock | Used to load caches from a shared or local cache initialization file |
| WaitCountHashLock | Used to protect a shared structure in user statement counting scenarios |
| BufMappingLock | Used to protect operations on a table mapped to the shared buffer |
| LockMgrLock | Used to protect a common lock structure |
| PredicateLockMgrLock | Used to protect a lock structure that has serializable transactions |
| OperatorRealTLock | Used to prevent concurrent access to or concurrent modification on a global structure that contains real-time data at the operator level |
| OperatorHistLock | Used to prevent concurrent access to or concurrent modification on a global structure that contains historical data at the operator level |
| SessionRealTLock | Used to prevent concurrent access to or concurrent modification on a global structure that contains real-time data at the query level |
| SessionHistLock | Used to prevent concurrent access to or concurrent modification on a global structure that contains historical data at the query level |
| CacheSlotMappingLock | Used to protect global CU cache information |
| BarrierLock | Used to ensure that only one thread is creating a barrier at a time |

The following table describes the corresponding wait events when wait_status is wait io.
| wait_event | Description |
| --- | --- |
| BufFileRead | Reads data from a temporary file to a specified buffer. |
| BufFileWrite | Writes the content of a specified buffer to a temporary file. |
| ControlFileRead | Reads the pg_control file, mainly during database startup, checkpoint execution, and primary/standby verification. |
| ControlFileSync | Flushes the pg_control file to a disk, mainly during database initialization. |
| ControlFileSyncUpdate | Flushes the pg_control file to a disk, mainly during database startup, checkpoint execution, and primary/standby verification. |
| ControlFileWrite | Writes to the pg_control file, mainly during database initialization. |
| ControlFileWriteUpdate | Updates the pg_control file, mainly during database startup, checkpoint execution, and primary/standby verification. |
| CopyFileRead | Reads a file during file copying. |
| CopyFileWrite | Writes a file during file copying. |
| DataFileExtend | Writes a file during file extension. |
| DataFileFlush | Flushes a table data file to a disk. |
| DataFileImmediateSync | Flushes a table data file to a disk immediately. |
| DataFilePrefetch | Reads a table data file asynchronously. |
| DataFileRead | Reads a table data file synchronously. |
| DataFileSync | Flushes table data file modifications to a disk. |
| DataFileTruncate | Truncates a table data file. |
| DataFileWrite | Writes a table data file. |
| LockFileAddToDataDirRead | Reads the postmaster.pid file. |
| LockFileAddToDataDirSync | Flushes the postmaster.pid file to a disk. |
| LockFileAddToDataDirWrite | Writes the PID information into the postmaster.pid file. |
| LockFileCreateRead | Reads the LockFile file %s.lock. |
| LockFileCreateSync | Flushes the LockFile file %s.lock to a disk. |
| LockFileCreateWrite | Writes the PID information into the LockFile file %s.lock. |
| RelationMapRead | Reads the mapping file between system catalogs and storage locations. |
| RelationMapSync | Flushes the mapping file between system catalogs and storage locations to a disk. |
| RelationMapWrite | Writes the mapping file between system catalogs and storage locations. |
| ReplicationSlotRead | Reads a stream replication slot file during a restart. |
| ReplicationSlotRestoreSync | Flushes a stream replication slot file to a disk during a restart. |
| ReplicationSlotSync | Flushes a temporary stream replication slot file to a disk during checkpoint execution. |
| ReplicationSlotWrite | Writes a temporary stream replication slot file during checkpoint execution. |
| SLRUFlushSync | Flushes the pg_clog, pg_subtrans, and pg_multixact files to a disk, mainly during checkpoint execution and database shutdown. |
| SLRURead | Reads the pg_clog, pg_subtrans, and pg_multixact files. |
| SLRUSync | Writes dirty pages into the pg_clog, pg_subtrans, and pg_multixact files, and flushes the files to a disk, mainly during checkpoint execution and database shutdown. |
| SLRUWrite | Writes the pg_clog, pg_subtrans, and pg_multixact files. |
| TimelineHistoryRead | Reads the timeline history file during database startup. |
| TimelineHistorySync | Flushes the timeline history file to a disk during database startup. |
| TimelineHistoryWrite | Writes to the timeline history file during database startup. |
| TwophaseFileRead | Reads the pg_twophase file, mainly during two-phase transaction submission and restoration. |
| TwophaseFileSync | Flushes the pg_twophase file to a disk, mainly during two-phase transaction submission and restoration. |
| TwophaseFileWrite | Writes the pg_twophase file, mainly during two-phase transaction submission and restoration. |
| WALBootstrapSync | Flushes an initialized WAL file to a disk during database initialization. |
| WALBootstrapWrite | Writes an initialized WAL file during database initialization. |
| WALCopyRead | Read operation generated when an existing WAL file is read for replication after archiving and restoration. |
| WALCopySync | Flushes a replicated WAL file to a disk after archiving and restoration. |
| WALCopyWrite | Write operation generated when an existing WAL file is read for replication after archiving and restoration. |
| WALInitSync | Flushes a newly initialized WAL file to a disk during log reclaiming or writing. |
| WALInitWrite | Initializes a newly created WAL file to 0 during log reclaiming or writing. |
| WALRead | Reads data from Xlogs during redo operations on two-phase files. |
| WALSyncMethodAssign | Flushes all open WAL files to a disk. |
| WALWrite | Writes a WAL file. |

The following table describes the corresponding wait events when wait_status is acquire lock.
| wait_event | Description |
| --- | --- |
| relation | Adds a lock to a table. |
| extend | Adds a lock to a table being extended. |
| partition | Adds a lock to a partitioned table. |
| partition_seq | Adds a lock to a partition of a partitioned table. |
| page | Adds a lock to a table page. |
| tuple | Adds a lock to a tuple on a page. |
| transactionid | Adds a lock to a transaction ID. |
| virtualxid | Adds a lock to a virtual transaction ID. |
| object | Adds a lock to an object. |
| cstore_freespace | Adds a lock to idle column-store space. |
| userlock | Adds a lock to a user. |
| advisory | Adds an advisory lock. |

PG_TABLES displays information about each table in the database.
| Name | Type | Reference | Description |
| --- | --- | --- | --- |
| schemaname | name | PG_NAMESPACE.nspname | Name of the schema that contains the table |
| tablename | name | PG_CLASS.relname | Name of the table |
| tableowner | name | pg_get_userbyid(PG_CLASS.relowner) | Owner of the table |
| tablespace | name | PG_TABLESPACE.spcname | Tablespace that contains the table. The default value is null. |
| hasindexes | boolean | PG_CLASS.relhasindex | Whether the table has (or recently had) an index. If it does, its value is true. Otherwise, its value is false. |
| hasrules | boolean | PG_CLASS.relhasrules | Whether the table has rules. If it does, its value is true. Otherwise, its value is false. |
| hastriggers | boolean | PG_CLASS.relhastriggers | Whether the table has triggers. If it does, its value is true. Otherwise, its value is false. |
| tablecreator | name | pg_get_userbyid(PG_OBJECT.creator) | Table creator. If the creator has been deleted, no value is returned. |
| created | timestamp with time zone | PG_OBJECT.ctime | Time when the table was created. |
| last_ddl_time | timestamp with time zone | PG_OBJECT.mtime | Time when the table was last modified (last DDL time). |

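For example, the following sample query (a sketch; the schema name public is only an illustration) lists the tables in one schema together with their owners and creation times:

```sql
-- Tables in the public schema with owner and creation time
SELECT tablename, tableowner, tablespace, created, last_ddl_time
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;
```
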
PG_TDE_INFO displays the encryption information about the current cluster.
| Name | Type | Description |
| --- | --- | --- |
| is_encrypt | text | Whether the cluster is an encryption cluster |
| g_tde_algo | text | Encryption algorithm |
| remain | text | Reserved |

Check whether the current cluster is encrypted, and check the encryption algorithm (if any) used by the current cluster.
```sql
SELECT * FROM PG_TDE_INFO;
 is_encrypt | g_tde_algo  | remain
------------+-------------+--------
 f          | AES-CTR-128 | remain
(1 row)
```
PG_TIMEZONE_ABBREVS displays all time zone abbreviations that can be recognized by the input routines.
| Name | Type | Description |
| --- | --- | --- |
| abbrev | text | Time zone abbreviation |
| utc_offset | interval | Offset from UTC |
| is_dst | boolean | Whether the abbreviation indicates a daylight saving time (DST) zone. If it does, its value is true. Otherwise, its value is false. |

PG_TIMEZONE_NAMES displays all time zone names that can be recognized by SET TIMEZONE, along with their associated abbreviations, UTC offsets, and daylight saving time statuses.
| Name | Type | Description |
| --- | --- | --- |
| name | text | Name of the time zone |
| abbrev | text | Time zone name abbreviation |
| utc_offset | interval | Offset from UTC |
| is_dst | boolean | Whether DST is used. If it is, its value is true. Otherwise, its value is false. |

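For example, a sample query (a sketch; the Asia/% pattern is only an illustration) that looks up recognized time zone names and their UTC offsets:

```sql
-- Recognized time zone names matching a pattern
SELECT name, abbrev, utc_offset, is_dst
FROM pg_timezone_names
WHERE name LIKE 'Asia/%'
ORDER BY name;
```
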
PG_TOTAL_MEMORY_DETAIL displays the memory usage of a certain node in the database.
| Name | Type | Description |
| --- | --- | --- |
| nodename | text | Node name |
| memorytype | text | Memory type being counted. Each row of the view reports the usage of one memory type on the node. |
| memorymbytes | integer | Size of the used memory, in MB |

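For example, the following sample query (a sketch based on the columns above) shows the largest memory consumers on each node:

```sql
-- Memory usage by type, largest first
SELECT nodename, memorytype, memorymbytes
FROM pg_total_memory_detail
ORDER BY memorymbytes DESC;
```
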
PG_TOTAL_SCHEMA_INFO displays the storage usage of all schemas in each database. This view is valid only if use_workload_manager is set to on.
| Column | Type | Description |
| --- | --- | --- |
| schemaid | oid | Schema OID |
| schemaname | text | Schema name |
| databaseid | oid | Database OID |
| databasename | name | Database name |
| usedspace | bigint | Size of the permanent table storage space used by the schema, in bytes |
| permspace | bigint | Upper limit of the permanent table storage space of the schema, in bytes |

PG_TOTAL_USER_RESOURCE_INFO displays the resource usage of all users. Only administrators can query this view. This view is valid only if use_workload_manager is set to on.
| Name | Type | Description |
| --- | --- | --- |
| username | name | Username |
| used_memory | integer | Used memory, in MB |
| total_memory | integer | Available memory, in MB. 0 indicates that the available memory is not limited and depends on the maximum memory available in the database. |
| used_cpu | double precision | Number of CPU cores in use. Only the CPU usage of complex jobs in non-default resource pools is collected, and the value is the CPU usage of the related cgroup. |
| total_cpu | integer | Total number of CPU cores of the cgroup associated with the user on the node |
| used_space | bigint | Used permanent table storage space, in KB |
| total_space | bigint | Available storage space, in KB. -1 indicates that the storage space is not limited. |
| used_temp_space | bigint | Used temporary table storage space, in KB |
| total_temp_space | bigint | Available temporary table storage space, in KB. -1 indicates that the storage space is not limited. |
| used_spill_space | bigint | Used operator spill (disk flushing) space, in KB |
| total_spill_space | bigint | Available operator spill (disk flushing) space, in KB. -1 indicates that the spill space is not limited. |
| read_kbytes | bigint | CN: total volume of data read by the user's complex jobs on all DNs in the last 5 seconds, in KB. DN: total volume of data read by the user's complex jobs from the instance startup time to the current time, in KB. |
| write_kbytes | bigint | CN: total volume of data written by the user's complex jobs on all DNs in the last 5 seconds, in KB. DN: total volume of data written by the user's complex jobs from the instance startup time to the current time, in KB. |
| read_counts | bigint | CN: total number of reads by the user's complex jobs on all DNs in the last 5 seconds. DN: total number of reads by the user's complex jobs from the instance startup time to the current time. |
| write_counts | bigint | CN: total number of writes by the user's complex jobs on all DNs in the last 5 seconds. DN: total number of writes by the user's complex jobs from the instance startup time to the current time. |
| read_speed | double precision | Average read rate of the user's complex jobs on a single DN in the last 5 seconds, in KB/s (reported on both CNs and DNs) |
| write_speed | double precision | Average write rate of the user's complex jobs on a single DN in the last 5 seconds, in KB/s (reported on both CNs and DNs) |

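For example, a sample query (a sketch using the columns above) that ranks users by memory and permanent-space consumption:

```sql
-- Per-user resource usage, heaviest memory consumers first
SELECT username, used_memory, total_memory, used_cpu, used_space, total_space
FROM pg_total_user_resource_info
ORDER BY used_memory DESC;
```
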
PG_USER displays information about users who can access the database.
| Name | Type | Description |
| --- | --- | --- |
| usename | name | User name |
| usesysid | oid | ID of this user |
| usecreatedb | boolean | Whether the user has the permission to create databases |
| usesuper | boolean | Whether the user is the initial system administrator with the highest rights |
| usecatupd | boolean | Whether the user can directly update system tables. Only the initial system administrator whose usesysid is 10 has this permission. It is not available for other users. |
| userepl | boolean | Whether the user has the permission to duplicate data streams |
| passwd | text | Encrypted user password. The value is displayed as ********. |
| valbegin | timestamp with time zone | Account validity start time; null if no start time |
| valuntil | timestamp with time zone | Password expiry time; null if no expiration |
| respool | name | Resource pool that the user is in |
| parentid | oid | Parent user OID |
| spacelimit | text | Storage space limit for permanent tables |
| tempspacelimit | text | Storage space limit for temporary tables |
| spillspacelimit | text | Space limit for operator spilling (disk flushing) |
| useconfig | text[] | Session defaults for run-time configuration variables |
| nodegroup | name | Name of the logical cluster associated with the user. If no logical cluster is associated, this column is left blank. |

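For example, the following sample query (a sketch based on the columns above) lists users together with their key privilege flags, resource pools, and password expiry times:

```sql
-- Database users with key privilege flags and password expiry
SELECT usename, usesuper, usecreatedb, respool, valuntil
FROM pg_user;
```
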
PG_USER_MAPPINGS displays information about user mappings.
This is essentially a publicly readable view of PG_USER_MAPPING that leaves out the options column if the user has no rights to use it.
| Name | Type | Reference | Description |
| --- | --- | --- | --- |
| umid | oid | PG_USER_MAPPING.oid | OID of the user mapping |
| srvid | oid | PG_FOREIGN_SERVER.oid | OID of the foreign server that contains this mapping |
| srvname | name | PG_FOREIGN_SERVER.srvname | Name of the foreign server |
| umuser | oid | PG_AUTHID.oid | OID of the local role being mapped; 0 if the user mapping is public |
| usename | name | - | Name of the local user to be mapped |
| umoptions | text[] | - | User mapping specific options. If the current user is the owner of the foreign server, its value is keyword=value strings. Otherwise, its value is null. |

PG_VIEWS displays basic information about each view in the database.
| Name | Type | Reference | Description |
| --- | --- | --- | --- |
| schemaname | name | PG_NAMESPACE.nspname | Name of the schema that contains the view |
| viewname | name | PG_CLASS.relname | View name |
| viewowner | name | PG_AUTHID.rolname | Owner of the view |
| definition | text | - | Definition of the view |

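For example, a sample query (a sketch; the schema name public is only an illustration) that retrieves view definitions in one schema:

```sql
-- Views in the public schema with their definitions
SELECT viewname, viewowner, definition
FROM pg_views
WHERE schemaname = 'public';
```
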
PG_WLM_STATISTICS displays information about workload management after the task is complete or the exception has been handled.
| Name | Type | Description |
| --- | --- | --- |
| statement | text | Statement executed for exception handling |
| block_time | bigint | Block time before the statement is executed |
| elapsed_time | bigint | Elapsed time when the statement is executed |
| total_cpu_time | bigint | Total CPU time used on the DN when the statement is executed for exception handling |
| qualification_time | bigint | Period during which the statement checks the skew ratio |
| cpu_skew_percent | integer | CPU usage skew on the DN when the statement is executed for exception handling |
| control_group | text | Cgroup used when the statement is executed for exception handling |
| status | text | Statement status after it is executed for exception handling |
| action | text | Action taken when the statement is executed for exception handling |
| queryid | bigint | Internal query ID used for statement execution |
| threadid | bigint | ID of the backend thread |

PGXC_BULKLOAD_PROGRESS displays the progress of bulk load services. Only GDS common file import is supported. This view is accessible only to users with system administrator rights.
| Name | Type | Description |
| --- | --- | --- |
| session_id | bigint | GDS session ID |
| query_id | bigint | Query ID. It is equivalent to debug_query_id. |
| query | text | Query statement |
| progress | text | Progress percentage |

PGXC_BULKLOAD_STATISTICS displays real-time statistics about service execution, such as GDS, COPY, and \COPY, on a CN. This view summarizes the real-time execution status of import and export services that are being executed on each node in the current cluster. In this way, you can monitor the real-time progress of import and export services and locate performance problems.
Columns in PGXC_BULKLOAD_STATISTICS are the same as those in PG_BULKLOAD_STATISTICS. This is because PGXC_BULKLOAD_STATISTICS is essentially the summary result of querying PG_BULKLOAD_STATISTICS on each node in the cluster.
This view is accessible only to users with system administrator rights.
| Name | Type | Description |
| --- | --- | --- |
| node_name | text | Node name |
| db_name | text | Database name |
| query_id | bigint | Query ID. It is equivalent to debug_query_id. |
| tid | bigint | ID of the current thread |
| lwtid | integer | Lightweight thread ID |
| session_id | bigint | GDS session ID |
| direction | text | Service type. The options are gds to file, gds from file, gds to pipe, gds from pipe, copy from, and copy to. |
| query | text | Query statement |
| address | text | Location of the foreign table used for data import and export |
| query_start | timestamp with time zone | Start time of data import or export |
| total_bytes | bigint | Total size of data to be processed. This column is set only when a GDS common file is imported and the record in the row comes from a CN; otherwise, it is left empty. |
| phase | text | Current phase. The options are INITIALIZING, TRANSFER_DATA, and RELEASE_RESOURCE. |
| done_lines | bigint | Number of lines that have been transferred |
| done_bytes | bigint | Number of bytes that have been transferred |

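For example, the following sample query (a sketch based on the columns above) tracks the progress of currently running GDS import and export services across the cluster:

```sql
-- Progress of GDS import/export services on all nodes
SELECT node_name, session_id, direction, phase, done_lines, done_bytes
FROM pgxc_bulkload_statistics
WHERE direction LIKE 'gds%'
ORDER BY node_name;
```
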
PGXC_COMM_CLIENT_INFO stores the client connection information of all nodes. (You can query this view on a DN to view the information about the connection between the CN and DN.)
| Name | Type | Description |
| --- | --- | --- |
| node_name | text | Current node name |
| app | text | Client application name |
| tid | bigint | Thread ID of the current thread |
| lwtid | integer | Lightweight thread ID of the current thread |
| query_id | bigint | Query ID. It is equivalent to debug_query_id. |
| socket | integer | Socket descriptor. It is displayed if the connection is a physical connection. |
| remote_ip | text | Peer node IP address |
| remote_port | text | Peer node port |
| logic_id | integer | If the connection is a logical connection, sid is displayed. If -1 is displayed, the current connection is a physical connection. |

PGXC_COMM_DELAY displays the communication library delay status for all the DNs.
| Name | Type | Description |
| --- | --- | --- |
| node_name | text | Node name |
| remote_name | text | Name of the peer node |
| remote_host | text | IP address of the peer node |
| stream_num | integer | Number of logical stream connections used by the current physical connection |
| min_delay | integer | Minimum delay of the current physical connection within 1 minute, in microseconds. NOTE: A negative result is invalid. Wait until the delay status is updated and query again. |
| average | integer | Average delay of the current physical connection within 1 minute, in microseconds |
| max_delay | integer | Maximum delay of the current physical connection within 1 minute, in microseconds |

PGXC_COMM_RECV_STREAM displays the receiving stream status of the communication libraries for all the DNs.
| Name | Type | Description |
| --- | --- | --- |
| node_name | text | Node name |
| local_tid | bigint | ID of the thread using this stream |
| remote_name | text | Name of the peer node |
| remote_tid | bigint | Peer thread ID |
| idx | integer | Peer DN ID in the local DN |
| sid | integer | Stream ID in the physical connection |
| tcp_sock | integer | TCP socket used in the stream |
| state | text | Current status of the stream |
| query_id | bigint | debug_query_id corresponding to the stream |
| pn_id | integer | plan_node_id of the query executed by the stream |
| send_smp | integer | smpid of the sender of the query executed by the stream |
| recv_smp | integer | smpid of the receiver of the query executed by the stream |
| recv_bytes | bigint | Total data volume received from the stream, in bytes |
| time | bigint | Current life cycle service duration of the stream, in ms |
| speed | bigint | Average receiving rate of the stream, in bytes/s |
| quota | bigint | Current communication quota value of the stream, in bytes |
| buff_usize | bigint | Current size of the data cache of the stream, in bytes |

PGXC_COMM_SEND_STREAM displays the sending stream status of the communication libraries for all the DNs.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| local_tid | bigint | ID of the thread using this stream |
| remote_name | text | Name of the peer node |
| remote_tid | bigint | Peer thread ID |
| idx | integer | Peer DN ID in the local DN |
| sid | integer | Stream ID in the physical connection |
| tcp_sock | integer | TCP socket used in the stream |
| state | text | Current status of the stream |
| query_id | bigint | debug_query_id corresponding to the stream |
| pn_id | integer | plan_node_id of the query executed by the stream |
| send_smp | integer | smpid of the sender of the query executed by the stream |
| recv_smp | integer | smpid of the receiver of the query executed by the stream |
| send_bytes | bigint | Total data volume sent by the stream, in bytes |
| time | bigint | Current life cycle service duration of the stream, in ms |
| speed | bigint | Average sending rate of the stream, in bytes/s |
| quota | bigint | Current communication quota value of the stream, in bytes |
| wait_quota | bigint | Extra time spent while the stream waits for quota, in ms |
PGXC_COMM_STATUS displays the communication library status for all the DNs.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| rxpck/s | integer | Receiving packet rate of the communication library on a node, in packets/s |
| txpck/s | integer | Sending packet rate of the communication library on a node, in packets/s |
| rxkB/s | bigint | Receiving rate of the communication library on a node, in KB/s |
| txkB/s | bigint | Sending rate of the communication library on a node, in KB/s |
| buffer | bigint | Size of the buffer of the Cmailbox |
| memKB(libcomm) | bigint | Communication memory size of the libcomm process, in KB |
| memKB(libpq) | bigint | Communication memory size of the libpq process, in KB |
| %USED(PM) | integer | Real-time usage of the postmaster thread |
| %USED (sflow) | integer | Real-time usage of the gs_sender_flow_controller thread |
| %USED (rflow) | integer | Real-time usage of the gs_receiver_flow_controller thread |
| %USED (rloop) | integer | Highest real-time usage among multiple gs_receivers_loop threads |
| stream | integer | Total number of used logical connections |
PGXC_DEADLOCK displays lock wait information generated due to distributed deadlocks.
+Currently, PGXC_DEADLOCK collects only lock wait information about locks whose locktype is relation, partition, page, tuple, or transactionid.
| Name | Type | Description |
|---|---|---|
| locktype | text | Type of the locked object |
| nodename | name | Name of the node where the locked object resides |
| dbname | name | Name of the database where the locked object resides. The value is NULL if the locked object is a transaction. |
| nspname | name | Name of the namespace of the locked object |
| relname | name | Name of the relation targeted by the lock. The value is NULL if the object is not a relation or part of a relation. |
| partname | name | Name of the partition targeted by the lock. The value is NULL if the locked object is not a partition. |
| page | integer | Number of the page targeted by the lock. The value is NULL if the locked object is neither a page nor a tuple. |
| tuple | smallint | Number of the tuple targeted by the lock. The value is NULL if the locked object is not a tuple. |
| transactionid | xid | ID of the transaction targeted by the lock. The value is NULL if the locked object is not a transaction. |
| waitusername | name | Name of the user who waits for the lock |
| waitgxid | xid | ID of the transaction that waits for the lock |
| waitxactstart | timestamp with time zone | Start time of the transaction that waits for the lock |
| waitqueryid | bigint | Latest query ID of the thread that waits for the lock |
| waitquery | text | Latest query statement of the thread that waits for the lock |
| waitpid | bigint | ID of the thread that waits for the lock |
| waitmode | text | Mode of the waited lock |
| holdusername | name | Name of the user who holds the lock |
| holdgxid | xid | ID of the transaction that holds the lock |
| holdxactstart | timestamp with time zone | Start time of the transaction that holds the lock |
| holdqueryid | bigint | Latest query ID of the thread that holds the lock |
| holdquery | text | Latest query statement of the thread that holds the lock |
| holdpid | bigint | ID of the thread that holds the lock |
| holdmode | text | Mode of the held lock |
PGXC_GET_STAT_ALL_TABLES displays information about insertion, update, and deletion operations on tables and the dirty page rate of tables.
Before running VACUUM FULL on a system catalog with a high dirty page rate, ensure that no user is performing operations on it.
You are advised to run VACUUM FULL on tables (excluding system catalogs) whose dirty page rate exceeds 30%, or to run it as appropriate for your service scenario.
| Name | Type | Description |
|---|---|---|
| relid | oid | Table OID |
| relname | name | Table name |
| schemaname | name | Schema name of the table |
| n_tup_ins | numeric | Number of inserted tuples |
| n_tup_upd | numeric | Number of updated tuples |
| n_tup_del | numeric | Number of deleted tuples |
| n_live_tup | numeric | Number of live tuples |
| n_dead_tup | numeric | Number of dead tuples |
| page_dirty_rate | numeric(5,2) | Dirty page rate (%) of the table |
GaussDB(DWS) also provides the pgxc_get_stat_dirty_tables(int dirty_percent, int n_tuples) and pgxc_get_stat_dirty_tables(int dirty_percent, int n_tuples, text schema) functions to quickly filter out tables whose dirty page rate is greater than dirty_percent, whose number of dead tuples is greater than n_tuples, and whose schema name matches schema. For details, see "Functions and Operators > System Administration Functions > Other Functions" in the SQL Syntax.
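For example, the following sketch lists tables whose dirty page rate exceeds 30% and that hold more than 10,000 dead tuples (the thresholds are illustrative):

SELECT * FROM pgxc_get_stat_dirty_tables(30, 10000);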
+PGXC_GET_STAT_ALL_PARTITIONS displays information about insertion, update, and deletion operations on partitions of partitioned tables and the dirty page rate of tables.
+The statistics of this view depend on the ANALYZE operation. To obtain the most accurate information, perform the ANALYZE operation on the partitioned table first.
| Column | Type | Description |
|---|---|---|
| relid | oid | Table OID |
| partid | oid | Partition OID |
| schename | name | Schema name of the table |
| relname | name | Table name |
| partname | name | Partition name |
| n_tup_ins | numeric | Number of inserted tuples |
| n_tup_upd | numeric | Number of updated tuples |
| n_tup_del | numeric | Number of deleted tuples |
| n_live_tup | numeric | Number of live tuples |
| n_dead_tup | numeric | Number of dead tuples |
| page_dirty_rate | numeric(5,2) | Dirty page rate (%) of the table |
PGXC_GET_TABLE_SKEWNESS displays the data skew on tables in the current database.
| Name | Type | Description |
|---|---|---|
| schemaname | name | Schema name of the table |
| tablename | name | Name of the table |
| totalsize | numeric | Total size of the table, in bytes |
| avgsize | numeric(1000,0) | Average table size (total table size divided by the number of DNs), which is the ideal size of the table on each DN |
| maxratio | numeric(4,3) | Ratio of the maximum table size on a single DN to the total table size |
| minratio | numeric(4,3) | Ratio of the minimum table size on a single DN to the total table size |
| skewsize | bigint | Table skew size (the maximum table size on a single DN minus the minimum table size on a single DN) |
| skewratio | numeric(4,3) | Table skew ratio (skew size divided by total table size) |
| skewstddev | numeric(1000,0) | Standard deviation of table distribution (for two tables of the same size, a larger deviation indicates a more severe skew) |
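For example, to surface the ten most skewed tables first, you can sort by the skew ratio (a minimal sketch using the columns above):

SELECT schemaname, tablename, totalsize, skewratio FROM pgxc_get_table_skewness ORDER BY skewratio DESC LIMIT 10;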
PGXC_GTM_SNAPSHOT_STATUS displays transaction information on the current GTM.
| Name | Type | Description |
|---|---|---|
| xmin | xid | Minimum ID of the running transactions |
| xmax | xid | ID of the transaction next to the executed transaction with the maximum ID |
| csn | integer | Sequence number of the transaction to be committed |
| oldestxmin | xid | Minimum ID of the executed transactions |
| xcnt | integer | Number of running transactions |
| running_xids | text | IDs of the running transactions |
PGXC_INSTANCE_TIME displays the running time of processes on each node in the cluster and the time consumed in each execution phase. Except the node_name column, the other columns are the same as those in the PV_INSTANCE_TIME view. This view is accessible only to users with system administrator rights.
+PGXC_INSTR_UNIQUE_SQL displays the complete Unique SQL statistics of all CN nodes in the cluster.
Only the system administrator can access this view. For details about the fields, see GS_INSTR_UNIQUE_SQL.
+PGXC_LOCK_CONFLICTS displays information about conflicting locks in the cluster.
+When a lock is waiting for another lock or another lock is waiting for this one, a lock conflict occurs.
+Currently, PGXC_LOCK_CONFLICTS collects only information about locks whose locktype is relation, partition, page, tuple, or transactionid.
| Name | Type | Description |
|---|---|---|
| locktype | text | Type of the locked object |
| nodename | name | Name of the node where the locked object resides |
| dbname | name | Name of the database where the locked object resides. The value is NULL if the locked object is a transaction. |
| nspname | name | Name of the namespace of the locked object |
| relname | name | Name of the relation targeted by the lock. The value is NULL if the object is not a relation or part of a relation. |
| partname | name | Name of the partition targeted by the lock. The value is NULL if the locked object is not a partition. |
| page | integer | Number of the page targeted by the lock. The value is NULL if the locked object is neither a page nor a tuple. |
| tuple | smallint | Number of the tuple targeted by the lock. The value is NULL if the locked object is not a tuple. |
| transactionid | xid | ID of the transaction targeted by the lock. The value is NULL if the locked object is not a transaction. |
| username | name | Name of the user who applies for the lock |
| gxid | xid | ID of the transaction that applies for the lock |
| xactstart | timestamp with time zone | Start time of the transaction that applies for the lock |
| queryid | bigint | Latest query ID of the thread that applies for the lock |
| query | text | Latest query statement of the thread that applies for the lock |
| pid | bigint | ID of the thread that applies for the lock |
| mode | text | Lock mode |
| granted | boolean | Whether the lock has been granted (true if the lock is held; false if it is being waited for) |
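For example, to see which statements are waiting for locks and which statements hold them (a minimal sketch using the columns above):

SELECT locktype, nodename, nspname, relname, mode, granted, query FROM pgxc_lock_conflicts;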
PGXC_NODE_ENV displays environment variable information about all nodes in a cluster.
| Name | Type | Description |
|---|---|---|
| node_name | text | Names of all nodes in the cluster |
| host | text | Host names of all nodes in the cluster |
| process | integer | Process IDs of all nodes in the cluster |
| port | integer | Port numbers of all nodes in the cluster |
| installpath | text | Installation directory of all nodes in the cluster |
| datapath | text | Data directory of all nodes in the cluster |
| log_directory | text | Log directory of all nodes in the cluster |
PGXC_NODE_STAT_RESET_TIME displays the time when statistics of each node in the cluster were reset. All columns except node_name are the same as those in the GS_NODE_STAT_RESET_TIME view. This view is accessible only to users with system administrator rights.
PGXC_OS_RUN_INFO displays the OS running status of each node in the cluster. All columns except node_name are the same as those in the PV_OS_RUN_INFO view. This view is accessible only to users with system administrator rights.
PGXC_OS_THREADS displays thread status information on all normal nodes in the current cluster.
| Name | Type | Description |
|---|---|---|
| node_name | text | All normal node names in the cluster |
| pid | bigint | IDs of running threads among all normal node processes in the current cluster |
| lwpid | integer | Lightweight thread ID corresponding to the PID |
| thread_name | text | Thread name corresponding to the PID |
| creation_time | timestamp with time zone | Thread creation time corresponding to the PID |
PGXC_PREPARED_XACTS displays the two-phase transactions in the prepared phase.
| Name | Type | Description |
|---|---|---|
| pgxc_prepared_xact | text | Two-phase transactions in the prepared phase |
PGXC_REDO_STAT displays statistics on redoing Xlogs of each node in the cluster. All columns except node_name are the same as those in the PV_REDO_STAT view. This view is accessible only to users with system administrator rights.
PGXC_REL_IOSTAT displays statistics on disk reads and writes of each node in the cluster. All columns except node_name are the same as those in the GS_REL_IOSTAT view. This view is accessible only to users with system administrator rights.
PGXC_REPLICATION_SLOTS displays the replication information of DNs in the cluster. All columns except node_name are the same as those in the PG_REPLICATION_SLOTS view. This view is accessible only to users with system administrator rights.
+PGXC_RUNNING_XACTS displays information about running transactions on each node in the cluster. The content is the same as that displayed in PG_RUNNING_XACTS.
| Name | Type | Description |
|---|---|---|
| handle | integer | Handle corresponding to the transaction in GTM |
| gxid | xid | Transaction ID |
| state | tinyint | Transaction status (3: prepared; 0: starting) |
| node | text | Node name |
| xmin | xid | Minimum transaction ID (xmin) on the node |
| vacuum | boolean | Whether the current transaction is a lazy vacuum |
| timeline | bigint | Number of database restarts |
| prepare_xid | xid | Transaction ID in the prepared state; 0 if the status is not prepared |
| pid | bigint | Thread ID corresponding to the transaction |
| next_xid | xid | Transaction ID sent from a CN to a DN |
PGXC_SETTINGS displays the database running status of each node in the cluster. All columns except node_name are the same as those in the PG_SETTINGS view. This view is accessible only to users with system administrator rights.
+PGXC_STAT_ACTIVITY displays information about the query performed by the current user on all the CNs in the current cluster.
| Name | Type | Description |
|---|---|---|
| coorname | text | Name of the CN in the current cluster |
| datid | oid | OID of the database that the user session connects to in the backend |
| datname | name | Name of the database that the user session connects to in the backend |
| pid | bigint | ID of the backend thread |
| usesysid | oid | OID of the user logged in to the backend |
| usename | name | Name of the user logged in to the backend |
| application_name | text | Name of the application connected to the backend |
| client_addr | inet | IP address of the client connected to the backend. If this column is null, the client is either connected via a Unix socket on the server machine or is an internal process such as autovacuum. |
| client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This column is non-null only for IP connections, and only when log_hostname is enabled. |
| client_port | integer | TCP port number that the client uses for communication with this backend, or -1 if a Unix socket is used |
| backend_start | timestamp with time zone | Startup time of the backend process, that is, the time when the client connected to the server |
| xact_start | timestamp with time zone | Time when the current transaction was started, or NULL if no transaction is active. If the current query is the first of its transaction, this column equals query_start. |
| query_start | timestamp with time zone | Time when the currently active query was started, or, if state is not active, the time when the last query was started |
| state_change | timestamp with time zone | Time of the last state change |
| waiting | boolean | Whether the backend is currently waiting for a lock (true if it is) |
| enqueue | text | Queuing status of the statement |
| state | text | Overall state of the backend. Its value can be active, idle, idle in transaction, idle in transaction (aborted), fastpath function call, or disabled. NOTE: A common user can view the session state of their own account only; the state information of other accounts is empty (see the example below this table). |
| resource_pool | name | Resource pool used by the user |
| query_id | bigint | ID of the query |
| query | text | Text of this backend's most recent query. If state is active, this column shows the running query; in all other states, it shows the last query that was executed. |
| connection_info | text | A string in JSON format recording the driver type, driver version, driver deployment path, and process owner of the connected database (for details, see connection_info) |

For example, after user judy connects to the database, the state information of user joe and the initial user dbadmin in pgxc_stat_activity is empty:

SELECT datname, usename, usesysid, state,pid FROM pgxc_stat_activity;
 datname  | usename | usesysid | state  |       pid
----------+---------+----------+--------+-----------------
 gaussdb  | dbadmin |       10 |        | 139968752121616
 gaussdb  | dbadmin |       10 |        | 139968903116560
 db_tpcds | judy    |    16398 | active | 139968391403280
 gaussdb  | dbadmin |       10 |        | 139968643069712
 gaussdb  | dbadmin |       10 |        | 139968680818448
 gaussdb  | joe     |    16390 |        | 139968563377936
(6 rows)
Run the following command to view blocked query statements:

SELECT datname,usename,state,query FROM PGXC_STAT_ACTIVITY WHERE waiting = true;

Check the working status of the snapshot thread:

SELECT application_name,backend_start,state_change,state,query FROM PGXC_STAT_ACTIVITY WHERE application_name='WDRSnapshot';

View the running query statements:

SELECT datname,usename,state,pid FROM PGXC_STAT_ACTIVITY;
 datname  | usename | state  |       pid
----------+---------+--------+-----------------
 gaussdb  | Ruby    | active | 140298793514752
 gaussdb  | Ruby    | active | 140298718004992
 gaussdb  | Ruby    | idle   | 140298650908416
 gaussdb  | Ruby    | idle   | 140298625742592
 gaussdb  | dbadmin | active | 140298575406848
(5 rows)

View the number of session connections to the postgres database. A count of 1 indicates one session connection:

SELECT COUNT(*) FROM PGXC_STAT_ACTIVITY WHERE DATNAME='postgres';
 count
-------
     1
(1 row)
PGXC_STAT_BAD_BLOCK displays statistics about page or CU verification failures after all nodes in a cluster are started.
| Name | Type | Description |
|---|---|---|
| nodename | text | Node name |
| databaseid | integer | Database OID |
| tablespaceid | integer | Tablespace OID |
| relfilenode | integer | File object ID |
| forknum | integer | File type |
| error_count | integer | Number of verification failures |
| first_time | timestamp with time zone | Time of the first occurrence |
| last_time | timestamp with time zone | Time of the latest occurrence |
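For example, to list the file objects with the most verification failures first (a minimal sketch using the columns above):

SELECT nodename, databaseid, relfilenode, error_count, last_time FROM pgxc_stat_bad_block ORDER BY error_count DESC;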
PGXC_STAT_BGWRITER displays statistics on the background writer of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_BGWRITER view. This view is accessible only to users with system administrator rights.
PGXC_STAT_DATABASE displays the database status and statistics of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_DATABASE view. This view is accessible only to users with system administrator rights.
PGXC_STAT_REPLICATION displays the log synchronization status of each node in the cluster. All columns except node_name are the same as those in the PG_STAT_REPLICATION view. This view is accessible only to users with system administrator rights.
PGXC_SQL_COUNT displays, in real time, node-level and user-level statistics for the SELECT, INSERT, UPDATE, DELETE, and MERGE INTO statements and the DDL, DML, and DCL statements executed on each CN in a cluster. It identifies query types with heavy load and measures the capability of a cluster or a node to perform a specific type of query. You can calculate QPS from the counts of these statement types at given time points. For example, if USER1's SELECT count is X1 at T1 and X2 at T2, the user's SELECT QPS is (X2 - X1)/(T2 - T1); see the sketch below. In this way, the system can draw cluster-user-level QPS curves and determine cluster throughput, monitoring changes in each user's service load. If there are drastic changes, the system can locate the specific statement type (such as SELECT, INSERT, UPDATE, DELETE, or MERGE INTO). You can also use the QPS curves to determine the time points when problems occurred and then locate the problems using other tools. The curves thus provide a basis for optimizing cluster performance and locating problems.
Columns in the PGXC_SQL_COUNT view are the same as those in the GS_SQL_COUNT view. For details, see Table 1.
If a MERGE INTO statement can be pushed down and a DN receives it, the statement is counted on the DN and the mergeinto_count column increments by 1. If the pushdown is not allowed, the DN receives an UPDATE or INSERT statement instead, and the update_count or insert_count column increments by 1.
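The following sketch illustrates the QPS calculation described above by sampling the view twice; the user_name column name is assumed to match the GS_SQL_COUNT definition:

SELECT node_name, user_name, select_count FROM pgxc_sql_count;  -- at time T1, note the count X1
-- wait for the sampling interval (T2 - T1 seconds), then sample again
SELECT node_name, user_name, select_count FROM pgxc_sql_count;  -- at time T2, note the count X2
-- SELECT QPS for the user = (X2 - X1) / (T2 - T1)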
PGXC_THREAD_WAIT_STATUS displays the call hierarchy between the threads of an SQL statement on all nodes in a cluster, together with the block waiting status of each thread, so that you can easily locate the causes of process hangs and similar problems.
PGXC_THREAD_WAIT_STATUS has the same definition as the PG_THREAD_WAIT_STATUS view, because it is simply the aggregation of the PG_THREAD_WAIT_STATUS query results from each node in the cluster.
| Name | Type | Description |
|---|---|---|
| node_name | text | Current node name |
| db_name | text | Database name |
| thread_name | text | Thread name |
| query_id | bigint | Query ID, equivalent to debug_query_id |
| tid | bigint | Thread ID of the current thread |
| lwtid | integer | Lightweight thread ID of the current thread |
| ptid | integer | Parent thread of the streaming thread |
| tlevel | integer | Level of the streaming thread |
| smpid | integer | Concurrent thread ID |
| wait_status | text | Waiting status of the current thread. For details about the waiting status, see Table 2. |
| wait_event | text | If wait_status is acquire lock, acquire lwlock, or wait io, this column describes the lock, lightweight lock, and I/O information, respectively. If wait_status is not any of the three values, this column is empty. |
Example:

Assume you run a statement on coordinator1 and no response is returned for a long period of time. In this case, establish another connection to coordinator1 to check the thread status on it:

select * from pg_thread_wait_status where query_id > 0;
  node_name   | db_name | thread_name | query_id |       tid       | lwtid | ptid  | tlevel | smpid |     wait_status      | wait_event
--------------+---------+-------------+----------+-----------------+-------+-------+--------+-------+----------------------+------------
 coordinator1 | gaussdb | gsql        | 20971544 | 140274089064208 | 22579 |       |      0 |     0 | wait node: datanode4 |
(1 row)

Furthermore, you can view the statement's working status on each node in the entire cluster. In the following example, no thread on any DN is blocked, and a huge amount of data is being read, which makes the execution slow:

select * from pgxc_thread_wait_status where query_id=20971544;
  node_name   | db_name | thread_name  | query_id |       tid       | lwtid | ptid  | tlevel | smpid |     wait_status      | wait_event
--------------+---------+--------------+----------+-----------------+-------+-------+--------+-------+----------------------+------------
 datanode1    | gaussdb | coordinator1 | 20971544 | 139902867994384 | 22735 |       |      0 |     0 | wait node: datanode3 |
 datanode1    | gaussdb | coordinator1 | 20971544 | 139902838634256 | 22970 | 22735 |      5 |     0 | synchronize quit     |
 datanode1    | gaussdb | coordinator1 | 20971544 | 139902607947536 | 22972 | 22735 |      5 |     1 | synchronize quit     |
 datanode2    | gaussdb | coordinator1 | 20971544 | 140632156796688 | 22736 |       |      0 |     0 | wait node: datanode3 |
 datanode2    | gaussdb | coordinator1 | 20971544 | 140632030967568 | 22974 | 22736 |      5 |     0 | synchronize quit     |
 datanode2    | gaussdb | coordinator1 | 20971544 | 140632081299216 | 22975 | 22736 |      5 |     1 | synchronize quit     |
 datanode3    | gaussdb | coordinator1 | 20971544 | 140323627988752 | 22737 |       |      0 |     0 | wait node: datanode3 |
 datanode3    | gaussdb | coordinator1 | 20971544 | 140323523131152 | 22976 | 22737 |      5 |     0 | net flush data       |
 datanode3    | gaussdb | coordinator1 | 20971544 | 140323548296976 | 22978 | 22737 |      5 |     1 | net flush data       |
 datanode4    | gaussdb | coordinator1 | 20971544 | 140103024375568 | 22738 |       |      0 |     0 | wait node: datanode3 |
 datanode4    | gaussdb | coordinator1 | 20971544 | 140102919517968 | 22979 | 22738 |      5 |     0 | synchronize quit     |
 datanode4    | gaussdb | coordinator1 | 20971544 | 140102969849616 | 22980 | 22738 |      5 |     1 | synchronize quit     |
 coordinator1 | gaussdb | gsql         | 20971544 | 140274089064208 | 22579 |       |      0 |     0 | wait node: datanode4 |
(13 rows)
PGXC_TOTAL_MEMORY_DETAIL displays the memory usage in the cluster.
| Name | Type | Description |
|---|---|---|
| nodename | text | Node name |
| memorytype | text | Memory type name |
| memorymbytes | integer | Size of the used memory, in MB |
PGXC_TOTAL_SCHEMA_INFO displays the schema space information of all instances in the cluster, providing visibility into the schema space usage of each instance. This view can be queried only on CNs.
| Name | Type | Description |
|---|---|---|
| schemaname | text | Schema name |
| schemaid | oid | Schema OID |
| databasename | text | Database name |
| databaseid | oid | Database OID |
| nodename | text | Instance name |
| nodegroup | text | Name of the node group |
| usedspace | bigint | Size of used space |
| permspace | bigint | Upper limit of the space |
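For example, to find the schemas consuming the most space per instance in the current database (a minimal sketch using the columns above):

SELECT schemaname, nodename, usedspace, permspace FROM pgxc_total_schema_info WHERE databasename = current_database() ORDER BY usedspace DESC;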
PGXC_TOTAL_SCHEMA_INFO_ANALYZE displays the overall schema space information of the cluster, including the total cluster space, average space of instances, skew ratio, maximum space of a single instance, minimum space of a single instance, and names of the instances with the maximum space and minimum space. It provides visibility into the schema space usage of the entire cluster. This view can be queried only on CNs.
| Name | Type | Description |
|---|---|---|
| schemaname | text | Schema name |
| databasename | text | Database name |
| nodegroup | text | Name of the node group |
| total_value | bigint | Total cluster space in the current schema |
| avg_value | bigint | Average space of instances in the current schema |
| skew_percent | integer | Skew ratio |
| extend_info | text | Extended information, including the maximum space of a single instance, the minimum space of a single instance, and the names of the instances with the maximum and minimum space |
PGXC_USER_TRANSACTION provides transaction information about users on all CNs. It is accessible only to users with system administrator rights. This view is valid only when the real-time resource monitoring function is enabled, that is, when enable_resource_track is on.
| Name | Type | Description |
|---|---|---|
| node_name | name | Node name |
| usename | name | Username |
| commit_counter | bigint | Number of commits |
| rollback_counter | bigint | Number of rollbacks |
| resp_min | bigint | Minimum response time |
| resp_max | bigint | Maximum response time |
| resp_avg | bigint | Average response time |
| resp_total | bigint | Total response time |
PGXC_VARIABLE_INFO displays information about transaction IDs and OIDs of all nodes in a cluster.
| Name | Type | Description |
|---|---|---|
| node_name | text | Node name |
| nextOid | oid | Next OID to be generated for the node |
| nextXid | xid | Next transaction ID to be generated for the node |
| oldestXid | xid | Oldest transaction ID for the node |
| xidVacLimit | xid | Critical point that triggers forcible autovacuum |
| oldestXidDB | oid | OID of the database that has the minimum datafrozenxid on the node |
| lastExtendCSNLogpage | integer | Number of the last extended csnlog page |
| startExtendCSNLogpage | integer | Number of the page from which csnlog extension starts |
| nextCommitSeqNo | integer | Next CSN to be generated for the node |
| latestCompletedXid | xid | Latest transaction ID on the node after a transaction commit or rollback |
| startupMaxXid | xid | Last transaction ID before the node was powered off |
PGXC_WAIT_EVENTS displays statistics on the waiting status and events of each node in the cluster. The content is the same as that displayed in GS_WAIT_EVENTS. This view is accessible only to users with system administrator rights.
PGXC_WLM_OPERATOR_HISTORY displays the operator information of completed jobs executed on all CNs. This view is used by Database Manager to query data from the database; data in the database is cleared every 3 minutes.
This view is accessible only to users with system administrator rights. For details about columns in the view, see Table 1.
PGXC_WLM_OPERATOR_INFO displays the operator information of completed jobs executed on CNs. The data in this view is obtained from GS_WLM_OPERATOR_INFO.
This view is accessible only to users with system administrator rights. For details about columns in the view, see Table 1.
PGXC_WLM_OPERATOR_STATISTICS displays the operator information of jobs being executed on CNs.
This view is accessible only to users with system administrator rights. For details about columns in the view, see GS_WLM_OPERATOR_STATISTICS columns.
PGXC_WLM_SESSION_INFO displays load management information for completed jobs executed on all CNs. The data in this view is obtained from GS_WLM_SESSION_INFO.
This view is accessible only to users with system administrator rights. For details about columns in the view, see Table 1.
PGXC_WLM_SESSION_HISTORY displays load management information for completed jobs executed on all CNs. This view is used by Data Manager to query data from the database; data in the database is cleared every 3 minutes. For details, see GS_WLM_SESSION_HISTORY.
This view is accessible only to users with system administrator rights. For details about columns in the view, see Table 1.
PGXC_WLM_SESSION_STATISTICS displays load management information about jobs that are being executed on CNs.
This view is accessible only to users with system administrator rights. For details about columns in the view, see Table 1.
PGXC_WLM_WORKLOAD_RECORDS displays the status of jobs executed by the current user on CNs. It is accessible only to users with system administrator rights. This view is available only when enable_dynamic_workload is set to on.
| Name | Type | Description |
|---|---|---|
| node_name | text | Name of the CN where the job is executed |
| thread_id | bigint | ID of the backend thread |
| processid | integer | lwpid of the thread |
| timestamp | bigint | Time when the statement starts to be executed |
| username | name | Name of the user logged in to the backend |
| memory | integer | Memory required by the statement |
| active_points | integer | Number of resources consumed by the statement in the resource pool |
| max_points | integer | Maximum number of resources in the resource pool |
| priority | integer | Priority of the job |
| resource_pool | text | Resource pool to which the job belongs |
| status | text | Job execution status. Its value can be pending, running, finished, aborted, or unknown. |
| control_group | text | Cgroup used by the job |
| enqueue | text | Queue that the job is in. Its value can be GLOBAL (global queue), RESPOOL (resource pool queue), or ACTIVE (not in a queue). |
| query | text | Statement that is being executed |
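For example, to check which jobs are currently queued rather than running (a minimal sketch using the status values listed above):

SELECT node_name, username, status, enqueue, query FROM pgxc_wlm_workload_records WHERE status = 'pending';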
PGXC_WORKLOAD_SQL_COUNT displays statistics on the number of SQL statements executed in workload Cgroups on all CNs in a cluster, including the number of SELECT, UPDATE, INSERT, and DELETE statements and the number of DDL, DML, and DCL statements. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| node_name | name | Node name |
| workload | name | Workload Cgroup name |
| select_count | bigint | Number of SELECT statements |
| update_count | bigint | Number of UPDATE statements |
| insert_count | bigint | Number of INSERT statements |
| delete_count | bigint | Number of DELETE statements |
| ddl_count | bigint | Number of DDL statements |
| dml_count | bigint | Number of DML statements |
| dcl_count | bigint | Number of DCL statements |
PGXC_WORKLOAD_SQL_ELAPSE_TIME displays statistics on the response time of SQL statements in workload Cgroups on all CNs in a cluster, including the maximum, minimum, average, and total response time of SELECT, UPDATE, INSERT, and DELETE statements. The unit is microsecond. It is accessible only to users with system administrator rights.
| Name | Type | Description |
|---|---|---|
| node_name | name | Node name |
| workload | name | Workload Cgroup name |
| total_select_elapse | bigint | Total response time of SELECT statements |
| max_select_elapse | bigint | Maximum response time of SELECT statements |
| min_select_elapse | bigint | Minimum response time of SELECT statements |
| avg_select_elapse | bigint | Average response time of SELECT statements |
| total_update_elapse | bigint | Total response time of UPDATE statements |
| max_update_elapse | bigint | Maximum response time of UPDATE statements |
| min_update_elapse | bigint | Minimum response time of UPDATE statements |
| avg_update_elapse | bigint | Average response time of UPDATE statements |
| total_insert_elapse | bigint | Total response time of INSERT statements |
| max_insert_elapse | bigint | Maximum response time of INSERT statements |
| min_insert_elapse | bigint | Minimum response time of INSERT statements |
| avg_insert_elapse | bigint | Average response time of INSERT statements |
| total_delete_elapse | bigint | Total response time of DELETE statements |
| max_delete_elapse | bigint | Maximum response time of DELETE statements |
| min_delete_elapse | bigint | Minimum response time of DELETE statements |
| avg_delete_elapse | bigint | Average response time of DELETE statements |
PGXC_WORKLOAD_TRANSACTION provides transaction information about workload Cgroups on all CNs. It is accessible only to users with system administrator rights. This view is valid only when the real-time resource monitoring function is enabled, that is, when enable_resource_track is on.
| Name | Type | Description |
|---|---|---|
| node_name | name | Node name |
| workload | name | Workload Cgroup name |
| commit_counter | bigint | Number of commits |
| rollback_counter | bigint | Number of rollbacks |
| resp_min | bigint | Minimum response time (unit: μs) |
| resp_max | bigint | Maximum response time (unit: μs) |
| resp_avg | bigint | Average response time (unit: μs) |
| resp_total | bigint | Total response time (unit: μs) |
PLAN_TABLE displays the plan information collected by EXPLAIN PLAN. Plan information has a session-level life cycle: after the session exits, the data is deleted. Data is isolated between sessions and between users.
| Name | Type | Description |
|---|---|---|
| statement_id | varchar2(30) | Query tag specified by the user |
| plan_id | bigint | ID of the plan to be queried |
| id | int | ID of each operator in the generated plan |
| operation | varchar2(30) | Operation description of an operator in the plan |
| options | varchar2(255) | Operation parameters |
| object_name | name | Name of the operated object, as defined by the user (not the object alias used in the query) |
| object_type | varchar2(30) | Object type |
| object_owner | name | User-defined schema to which the object belongs |
| projection | varchar2(4000) | Returned column information |
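A typical workflow is to collect a plan with EXPLAIN PLAN and then read it back from this view. A minimal sketch, assuming a table named customer exists:

EXPLAIN PLAN SET STATEMENT_ID = 'stmt-1' FOR SELECT * FROM customer;
SELECT statement_id, id, operation, options, object_name FROM plan_table WHERE statement_id = 'stmt-1' ORDER BY plan_id, id;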
PLAN_TABLE_DATA displays the plan information collected by EXPLAIN PLAN. Different from the PLAN_TABLE view, the system catalog PLAN_TABLE_DATA stores the plan information collected by all sessions and users.
| Name | Type | Description |
|---|---|---|
| session_id | text | Session that inserted the data. Its value consists of a service thread start timestamp and a service thread ID. Values are constrained by NOT NULL. |
| user_id | oid | User who inserted the data. Values are constrained by NOT NULL. |
| statement_id | varchar2(30) | Query tag specified by the user |
| plan_id | bigint | ID of the plan to be queried |
| id | int | Node ID in the plan |
| operation | varchar2(30) | Operation description |
| options | varchar2(255) | Operation parameters |
| object_name | name | Name of the operated object, as defined by the user |
| object_type | varchar2(30) | Object type |
| object_owner | name | User-defined schema to which the object belongs |
| projection | varchar2(4000) | Returned column information |
PV_FILE_STAT collects statistics on data file I/O to show I/O performance and help detect performance problems such as abnormal I/O operations.
| Name | Type | Description |
|---|---|---|
| filenum | oid | File ID |
| dbid | oid | Database ID |
| spcid | oid | Tablespace ID |
| phyrds | bigint | Number of physical file reads |
| phywrts | bigint | Number of physical file writes |
| phyblkrd | bigint | Number of physical file block reads |
| phyblkwrt | bigint | Number of physical file block writes |
| readtim | bigint | Total duration of file reads, in microseconds |
| writetim | bigint | Total duration of file writes, in microseconds |
| avgiotim | bigint | Average duration of file reads and writes, in microseconds |
| lstiotim | bigint | Duration of the last file read, in microseconds |
| miniotim | bigint | Minimum duration of file reads and writes, in microseconds |
| maxiowtm | bigint | Maximum duration of file reads and writes, in microseconds |
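For example, to surface the data files with the slowest average I/O (a minimal sketch using the columns above):

SELECT filenum, dbid, phyrds, phywrts, avgiotim, maxiowtm FROM pv_file_stat ORDER BY avgiotim DESC LIMIT 10;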
PV_INSTANCE_TIME collects statistics on the running time of processes and the time consumed in each execution phase, in microseconds.
PV_INSTANCE_TIME records the time consumption information of the current node, classified by type.
| Name | Type | Description |
|---|---|---|
| stat_id | integer | Type ID |
| stat_name | text | Running time type name |
| value | bigint | Running time value |
PV_OS_RUN_INFO displays the running status of the current operating system.
| Name | Type | Description |
|---|---|---|
| id | integer | ID |
| name | text | Name of the OS running status |
| value | numeric | Value of the OS running status |
| comments | text | Remarks on the OS running status |
| cumulative | boolean | Whether the value of the OS running status is cumulative |
PV_SESSION_MEMORY displays statistics about memory usage at the session level in the unit of MB, including all the memory allocated to Postgres and Stream threads on DNs for jobs currently executed by users.
| Name | Type | Description |
|---|---|---|
| sessid | text | Thread start time and ID |
| init_mem | integer | Memory allocated to the currently executed task before it enters the executor, in MB |
| used_mem | integer | Memory allocated to the currently executed task, in MB |
| peak_mem | integer | Peak memory allocated to the currently executed task, in MB |
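For example, to identify the sessions with the highest peak memory (a minimal sketch using the columns above):

SELECT sessid, init_mem, used_mem, peak_mem FROM pv_session_memory ORDER BY peak_mem DESC LIMIT 10;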
PV_SESSION_MEMORY_DETAIL displays statistics about thread memory usage by memory context.
The memory context TempSmallContextGroup aggregates all memory contexts in the current thread whose totalsize is less than 8192 bytes, and records the number of aggregated contexts in the usedsize column. Therefore, for TempSmallContextGroup, the totalsize and freesize columns show the totals across all such memory contexts, and the usedsize column shows their count.
You can run the SELECT * FROM pv_session_memctx_detail(threadid, ''); statement to record information about all memory contexts of a thread in the threadid_timestamp.log file in the /tmp/dumpmem directory. threadid can be obtained from the sessid column in the following table.
| Name | Type | Description |
|---|---|---|
| sessid | text | Thread start time + thread ID (string: timestamp.threadid) |
| sesstype | text | Thread name |
| contextname | text | Name of the memory context |
| level | smallint | Hierarchy level of the memory context |
| parent | text | Name of the parent memory context |
| totalsize | bigint | Total size of the memory context, in bytes |
| freesize | bigint | Total size of released memory in the memory context, in bytes |
| usedsize | bigint | Size of used memory in the memory context, in bytes. For TempSmallContextGroup, this column records the number of aggregated memory contexts. |
Query the usage of all MemoryContexts on the current node.
+Locate the thread in which the MemoryContext is created and used based on sessid. Check whether the memory usage meets the expectation based on totalsize, freesize, and usedsize to see whether memory leakage may occur.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 | SELECT * FROM PV_SESSION_MEMORY_DETAIL order by totalsize desc; + sessid | sesstype | contextname | level | parent | totalsize | freesize | usedsize +----------------------------+-------------------------+---------------------------------------------+-------+------------------------------+-----------+----------+---------- + 0.139975915622720 | postmaster | gs_signal | 1 | TopMemoryContext | 17209904 | 8081136 | 9128768 + 1667462258.139973631031040 | postgres | SRF multi-call context | 5 | FunctionScan_139973631031040 | 1725504 | 3168 | 1722336 + 1667461280.139973666686720 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 284456 | 1188088 + 1667450443.139973877479168 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 356088 | 1116456 + 1667462258.139973631031040 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 128216 | 1344328 + 1667461250.139973915236096 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 226352 | 1246192 + 1667450439.139974010144512 | WLMarbiter | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 386736 | 1085808 + 1667450439.139974151726848 | WDRSnapshot | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 159720 | 1312824 + 1667450439.139974026925824 | WLMmonitor | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 297976 | 1174568 + 1667451036.139973746386688 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 208064 | 1264480 + 1667461250.139973950891776 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 270016 | 1202528 + 1667450439.139974076212992 | WLMCalSpaceInfo | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 393952 | 1078592 + 1667450439.139974092994304 | WLMCollectWorker | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 94848 | 1377696 + 1667461254.139973971343104 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 338544 | 1134000 + 1667461280.139973822945024 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 284456 | 1188088 + 1667450439.139974202070784 | JobScheduler | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 216728 | 1255816 + 1667450454.139973860697856 | postgres | CacheMemoryContext | 1 | TopMemoryContext | 1472544 | 388384 | 1084160 + 0.139975915622720 | postmaster | Postmaster | 1 | TopMemoryContext | 1004288 | 88792 | 915496 + 1667450439.139974218852096 | AutoVacLauncher | CacheMemoryContext | 1 | TopMemoryContext | 948256 | 183488 | 764768 + 1667461250.139973915236096 | postgres | TempSmallContextGroup | 0 | | 584448 | 148032 | 119 + 1667462258.139973631031040 | postgres | TempSmallContextGroup | 0 | | 579712 | 162128 | 123 + |
PV_SESSION_STAT displays session state statistics based on session threads or the AutoVacuum thread.
| Name | Type | Description |
|---|---|---|
| sessid | text | Thread ID and start time |
| statid | integer | Statistics ID |
| statname | text | Name of the statistics item |
| statunit | text | Unit of the statistics item |
| value | bigint | Value of the statistics item |
PV_SESSION_TIME displays statistics about the running time of session threads and time consumed in each execution phase, in microseconds.
| Name | Type | Description |
|---|---|---|
| sessid | text | Thread ID and start time |
| stat_id | integer | Statistics ID |
| stat_name | text | Running time type name |
| value | bigint | Running time value |
PV_TOTAL_MEMORY_DETAIL displays statistics about memory usage of the current database node in the unit of MB.
| Name | Type | Description |
|---|---|---|
| nodename | text | Node name |
| memorytype | text | Memory type name |
| memorymbytes | integer | Size of memory of this type, in MB |
PV_REDO_STAT displays statistics on redoing Xlogs on the current node.
| Name | Type | Description |
|---|---|---|
| phywrts | bigint | Number of physical writes |
| phyblkwrt | bigint | Number of physical blocks written |
| writetim | bigint | Time consumed by physical writes |
| avgiotim | bigint | Average time for each write |
| lstiotim | bigint | Duration of the last write |
| miniotim | bigint | Minimum write duration |
| maxiowtm | bigint | Maximum write duration |
REDACTION_COLUMNS displays information about all redaction columns in the current database.
| Name | Type | Description |
|---|---|---|
| object_owner | name | Owner of the object to be redacted |
| object_name | name | Redacted object name |
| column_name | name | Redacted column name |
| function_type | integer | Redaction type |
| function_parameters | text | Parameter used when the redaction type is partial (reserved) |
| regexp_pattern | text | Pattern string when the redaction type is regexp (reserved) |
| regexp_replace_string | text | Replacement string when the redaction type is regexp (reserved) |
| regexp_position | integer | Start and end replacement positions when the redaction type is regexp (reserved) |
| regexp_occurrence | integer | Replacement count when the redaction type is regexp (reserved) |
| regexp_match_parameter | text | Regular expression control parameter used when the redaction type is regexp (reserved) |
| function_info | text | Redaction function information |
| column_description | text | Description of the redacted column |
REDACTION_POLICIES displays information about all redaction objects in the current database.
| Name | Type | Description |
|---|---|---|
| object_owner | name | Owner of the object to be redacted |
| object_name | name | Redacted object name |
| policy_name | name | Name of the redaction policy |
| expression | text | Policy-effective expression (for users) |
| enable | boolean | Policy status (enabled or disabled) |
| policy_description | text | Description of the policy |
USER_COL_COMMENTS displays the column comments of the tables accessible to the current user.
| Name | Type | Description |
|---|---|---|
| column_name | character varying(64) | Column name |
| table_name | character varying(64) | Table name |
| owner | character varying(64) | Table owner |
| comments | text | Comments |
USER_CONSTRAINTS displays the table constraint information accessible to the current user.
| Name | Type | Description |
|---|---|---|
| constraint_name | character varying(64) | Constraint name |
| constraint_type | text | Constraint type (check, foreign key, primary key, or unique) |
| table_name | character varying(64) | Name of the constraint-related table |
| index_owner | character varying(64) | Owner of the constraint-related index (only for the unique constraint and primary key constraint) |
| index_name | character varying(64) | Name of the constraint-related index (only for the unique constraint and primary key constraint) |
USER_CONS_COLUMNS displays the information about constraint columns of the tables accessible to the current user.
| Name | Type | Description |
|---|---|---|
| table_name | character varying(64) | Name of the constraint-related table |
| column_name | character varying(64) | Name of the constraint-related column |
| constraint_name | character varying(64) | Constraint name |
| position | smallint | Position of the column in the table |
USER_INDEXES displays index information in the current schema.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_name | character varying(64) | Name of the table the index belongs to |
| uniqueness | text | Whether the index is a unique index |
| generated | character varying(1) | Whether the index name was generated by the system |
| partitioned | character(3) | Whether the index has the properties of a partitioned table |
USER_IND_COLUMNS displays column information about all indexes accessible to the current user.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_name | name | Column name |
| column_position | smallint | Position of the column in the index |
USER_IND_EXPRESSIONS displays information about the function-based expression index accessible to the current user.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Index owner |
| index_name | character varying(64) | Index name |
| table_owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_expression | text | Function-based index expression of the specified column |
| column_position | smallint | Position of the column in the index |
USER_IND_PARTITIONS displays information about index partitions accessible to the current user.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Name of the owner of the partitioned table index to which the index partition belongs |
| schema | character varying(64) | Schema of the partitioned table index to which the index partition belongs |
| index_name | character varying(64) | Name of the partitioned table index to which the index partition belongs |
| partition_name | character varying(64) | Name of the index partition |
| index_partition_usable | boolean | Whether the index partition is available |
| high_value | text | Upper limit of the partition corresponding to the index partition |
| def_tablespace_name | name | Name of the tablespace of the index partition |
USER_JOBS displays all jobs owned by the user.
| Name | Type | Description |
|---|---|---|
| job | int4 | Job ID |
| log_user | name not null | User name of the job creator |
| priv_user | name not null | User name of the job executor |
| dbname | name not null | Database in which the job is created |
| start_date | timestamp without time zone | Job start time |
| start_suc | text | Start time of the successful job execution |
| last_date | timestamp without time zone | Start time of the last job execution |
| last_suc | text | Start time of the last successful job execution |
| this_date | timestamp without time zone | Start time of the ongoing job execution |
| this_suc | text | Same as this_date |
| next_date | timestamp without time zone | Scheduled time of the next job execution |
| next_suc | text | Same as next_date |
| broken | text | Task status. Y: the system does not try to execute the task. N: the system attempts to execute the task. |
| status | char | Status of the current job. The value range is 'r', 's', 'f', and 'd'; the default value is 's'. |
| interval | text | Time expression used to calculate the next execution time. If this parameter is set to null, the job is executed only once. |
| failures | smallint | Number of times the job has started and failed. If a job fails 16 consecutive times, no more attempts are made on it. |
| what | text | Body of the PL/SQL block or anonymous block that the job executes |
USER_OBJECTS displays all database objects accessible to the current user.
| Name | Type | Description |
|---|---|---|
| object_name | name | Object name |
| object_id | oid | OID of the object |
| object_type | name | Type of the object (TABLE, INDEX, SEQUENCE, or VIEW) |
| namespace | oid | Namespace to which the object belongs |
| created | timestamp with time zone | Object creation time |
| last_ddl_time | timestamp with time zone | Time when the object was last modified |
For details about the value ranges of created and last_ddl_time, see PG_OBJECT.
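For instance (an illustrative query, not from the original text), you can summarize the objects you can access by type:

```sql
-- Count the current user's accessible objects by type.
SELECT object_type, count(*) AS object_count
FROM user_objects
GROUP BY object_type
ORDER BY object_count DESC;
```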
+USER_PART_INDEXES displays information about partitioned table indexes accessible to the current user.
| Name | Type | Description |
|---|---|---|
| index_owner | character varying(64) | Name of the owner of the partitioned table index |
| schema | character varying(64) | Schema of the partitioned table index |
| index_name | character varying(64) | Name of the partitioned table index |
| table_name | character varying(64) | Name of the partitioned table to which the partitioned table index belongs |
| partitioning_type | text | Partition policy of the partitioned table |
| partition_count | bigint | Number of index partitions of the partitioned table index |
| def_tablespace_name | name | Name of the tablespace of the partitioned table index |
| partitioning_key_count | integer | Number of partition keys of the partitioned table |
USER_PART_TABLES displays information about partitioned tables accessible to the current user.
| Name | Type | Description |
|---|---|---|
| table_owner | character varying(64) | Name of the owner of the partitioned table |
| schema | character varying(64) | Schema of the partitioned table |
| table_name | character varying(64) | Name of the partitioned table |
| partitioning_type | text | Partition policy of the partitioned table |
| partition_count | bigint | Number of partitions of the partitioned table |
| def_tablespace_name | name | Name of the tablespace of the partitioned table |
| partitioning_key_count | integer | Number of partition keys of the partitioned table |
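A quick illustrative query (not from the original text) against this view:

```sql
-- Find partitioned tables with an unusually large number of partitions
-- (the threshold 100 is arbitrary, for illustration only).
SELECT table_name, partitioning_type, partition_count
FROM user_part_tables
WHERE partition_count > 100
ORDER BY partition_count DESC;
```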
USER_PROCEDURES displays information about all stored procedures and functions in the current schema.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the stored procedure or function |
| object_name | character varying(64) | Name of the stored procedure or function |
| argument_number | smallint | Number of input parameters in the stored procedure |
USER_SEQUENCES displays sequence information in the current schema.
| Name | Type | Description |
|---|---|---|
| sequence_owner | character varying(64) | Owner of the sequence |
| sequence_name | character varying(64) | Name of the sequence |
USER_SOURCE displays information about stored procedures and functions in the current schema, including the text of their definitions.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the stored procedure or function |
| name | character varying(64) | Name of the stored procedure or function |
| text | text | Definition of the stored procedure or function |
USER_SYNONYMS displays synonyms accessible to the current user.
| Name | Type | Description |
|---|---|---|
| schema_name | text | Name of the schema to which the synonym belongs |
| synonym_name | text | Synonym name |
| table_owner | text | Owner of the associated object |
| table_schema_name | text | Schema name of the associated object |
| table_name | text | Name of the associated object |
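As an illustrative example (not in the original text), you can map each synonym to the object it references:

```sql
-- Show each synonym and the object it resolves to.
SELECT synonym_name, table_schema_name, table_name
FROM user_synonyms
ORDER BY synonym_name;
```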
USER_TAB_COLUMNS displays information about table columns accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| column_name | character varying(64) | Column name |
| data_type | character varying(128) | Data type of the column |
| column_id | integer | Sequence number of the column when the table is created |
| data_length | integer | Length of the column, in bytes |
| comments | text | Comments |
| avg_col_len | numeric | Average length of a column, in bytes |
| nullable | bpchar | Whether the column can be empty. For the primary key constraint and non-null constraint, the value is n. |
| data_precision | integer | Precision of the data type. This parameter is valid for the numeric data type and NULL for other types. |
| data_scale | integer | Number of decimal places. This parameter is valid for the numeric data type and 0 for other types. |
| char_length | numeric | Column length in bytes, which is valid only for the varchar, nvarchar2, bpchar, and char types. |
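A common use of this view (an illustrative query; the table name customer is a placeholder) is to describe a table's columns:

```sql
-- Describe the columns of a table; replace 'customer' with your table name.
SELECT column_name, data_type, data_length, nullable
FROM user_tab_columns
WHERE table_name = 'customer'
ORDER BY column_id;
```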
USER_TAB_COMMENTS displays comments about all tables and views accessible to the current user.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the table or view |
| table_name | character varying(64) | Name of the table or view |
| comments | text | Comments |
USER_TAB_PARTITIONS displays all table partitions accessible to the current user. Each partition of a partitioned table accessible to the current user has one record in USER_TAB_PARTITIONS.
| Name | Type | Description |
|---|---|---|
| table_owner | character varying(64) | Name of the owner of the partitioned table |
| schema | character varying(64) | Schema of the partitioned table |
| table_name | character varying(64) | Name of the partitioned table |
| partition_name | character varying(64) | Name of the table partition |
| high_value | text | Upper boundary of the table partition |
| tablespace_name | name | Name of the tablespace of the table partition |
USER_TABLES displays table information in the current schema.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Table owner |
| table_name | character varying(64) | Table name |
| tablespace_name | character varying(64) | Name of the tablespace where the table is located |
| status | character varying(8) | Whether the current record is valid |
| temporary | character(1) | Whether the table is a temporary table |
| dropped | character varying | Whether the current record is deleted |
| num_rows | numeric | Estimated number of rows in the table |
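For example (an illustrative query, not from the original text), you can rank tables in the current schema by estimated size:

```sql
-- List the ten largest tables in the current schema by estimated row count.
SELECT table_name, num_rows
FROM user_tables
ORDER BY num_rows DESC NULLS LAST
LIMIT 10;
```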
USER_TRIGGERS displays information about triggers accessible to the current user.
| Name | Type | Description |
|---|---|---|
| trigger_name | character varying(64) | Trigger name |
| table_name | character varying(64) | Name of the relationship table |
| table_owner | character varying(64) | Role name |
USER_VIEWS displays information about all views in the current schema.
| Name | Type | Description |
|---|---|---|
| owner | character varying(64) | Owner of the view |
| view_name | character varying(64) | View name |
V$SESSION displays all session information about the current session.
| Name | Type | Description |
|---|---|---|
| sid | bigint | OID of the background process of the current activity |
| serial# | integer | Sequence number of the active background process, which is 0 in GaussDB(DWS) |
| user# | oid | OID of the user that has logged in to the background process |
| username | name | Name of the user that has logged in to the background process |
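An illustrative query (not from the original text):

```sql
-- Show the session ID and user name for the current session.
SELECT sid, username FROM v$session;
```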
V$SESSION_LONGOPS displays the progress of ongoing operations.
| Name | Type | Description |
|---|---|---|
| sid | bigint | OID of the running background process |
| serial# | integer | Sequence number of the running background process, which is 0 in GaussDB(DWS) |
| sofar | integer | Completed workload, which is empty in GaussDB(DWS) |
| totalwork | integer | Total workload, which is empty in GaussDB(DWS) |
GaussDB(DWS) GUC parameters can control database system behaviors. You can check and adjust the GUC parameters based on your business scenario and data volume.
To view a specific parameter, run the SHOW command:

```sql
SHOW server_version;
```

server_version indicates the database version.

To view all parameters:

```sql
SHOW ALL;
```

Alternatively, query the pg_settings view. To view a specific parameter:

```sql
SELECT * FROM pg_settings WHERE NAME='server_version';
```

To view all parameters:

```sql
SELECT * FROM pg_settings;
```
To ensure the optimal performance of GaussDB(DWS), you can adjust the GUC parameters in the database. GUC parameters can be configured at the database, user, or session level.

To configure a parameter at the database level:

```sql
ALTER DATABASE dbname SET paraname TO value;
```

The setting takes effect in the next session.

To configure a parameter at the user level:

```sql
ALTER USER username SET paraname TO value;
```

The setting takes effect in the next session.

To configure a parameter at the session level:

```sql
SET paraname TO value;
```

The parameter value is changed only in the current session. After you exit the session, the setting becomes invalid.
The following example shows how to set explain_perf_mode. First, check the current value:

```sql
SHOW explain_perf_mode;
 explain_perf_mode
-------------------
 normal
(1 row)
```

Perform one of the following operations:

Set the parameter at the database level:

```sql
ALTER DATABASE gaussdb SET explain_perf_mode TO pretty;
```

If the following information is displayed, the setting has been modified:

```
ALTER DATABASE
```

The setting takes effect in the next session.

Set the parameter at the user level:

```sql
ALTER USER dbadmin SET explain_perf_mode TO pretty;
```

If the following information is displayed, the setting has been modified:

```
ALTER USER
```

The setting takes effect in the next session.

Set the parameter at the session level:

```sql
SET explain_perf_mode TO pretty;
```

If the following information is displayed, the setting has been modified:

```
SET
```

Check the new value:

```sql
SHOW explain_perf_mode;
 explain_perf_mode
-------------------
 pretty
(1 row)
```
The database provides many operation parameters. Configuring these parameters affects the behavior of the database system. Before modifying a parameter, learn its impact on the database; otherwise, unexpected results may occur.
+This section describes parameters related to the connection mode between the client and server.
+Parameter description: Specifies the maximum number of allowed parallel connections to the database. This parameter influences the concurrent processing capability of the cluster.
+Type: POSTMASTER
Value range: an integer. For CNs, the value ranges from 1 to 16384. For DNs, the value ranges from 1 to 262143. Because there are internal connections in the cluster, the maximum value is rarely reached. If "invalid value for parameter max_connections" is displayed in the log, decrease the max_connections value for DNs.
Default value: 800 for CNs and 5000 for DNs. If the default value is greater than the maximum value supported by the kernel (determined when the gs_initdb command is executed), an error message will be displayed.
+Setting suggestions:
+Retain the default value of this parameter on the CN. Set this parameter on the DN to the following calculation result: Number of CNs x Value of this parameter on the CN.
+If the parameter is set to a large value, GaussDB(DWS) requires more SystemV shared memories or semaphores, which may exceed the maximum default configuration of the OS. In this case, modify the value as needed.
+The value of max_connections is related to max_prepared_transactions. Before setting max_connections, ensure that the value of max_prepared_transactions is greater than or equal to that of max_connections. In this way, each session has a prepared transaction in the waiting state.
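As a quick sanity check (an illustrative query, not part of the original text), you can compare the configured limit against current usage:

```sql
-- Compare the configured connection limit with the number of active connections.
SHOW max_connections;
SELECT count(*) AS current_connections FROM pg_stat_activity;
```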
+Parameter description: Specifies the minimum number of connections reserved for administrators.
+Type: POSTMASTER
+Value range: an integer ranging from 0 to 262143
+Default value: 3
+Parameter description: Specifies the name of the client program connecting to the database.
+Type: USERSET
+Value range: a string
+Default value: gsql
+Parameter description: Specifies the database connection information, including the driver type, driver version, driver deployment path, and process owner. (This is an O&M parameter. Do not configure it by yourself.)
+Type: USERSET
+Value range: a string
+Default value: an empty string
+1 | {"driver_name":"ODBC","driver_version": "(GaussDB 8.1.1 build af002019) compiled at 2020-01-10 05:43:20 commit 6995 last mr 11566 debug","driver_path":"/usr/local/lib/psqlodbcw.so","os_user":"dbadmin"} + |
driver_name and driver_version are displayed by default. Whether driver_path and os_user are displayed is determined by users.
+This section describes parameters about how to securely authenticate the client and server.
+Parameter description: Specifies the longest duration to wait before the client authentication times out. If a client is not authenticated by the server within the timeout period, the server automatically breaks the connection from the client so that the faulty client does not occupy connection resources.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 600. The minimum unit is second (s).
+Default value: 1 min
+Parameter description: Specifies the number of interactions during the generation of encryption information for authentication.
+Type: SIGHUP
+Value range: an integer ranging from 2048 to 134217728
+Default value: 50000
+If this parameter is set to a large value, performance deteriorates in operations involving password encryption, such as authentication and user creation. Set this parameter to an appropriate value based on the hardware conditions.
Parameter description: Specifies the longest time a connection to the server can remain idle, with no operations, before the session times out.
+Type: USERSET
+Value range: an integer ranging from 0 to 86400. The minimum unit is second (s). 0 means to disable the timeout.
+Default value: 10 min
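Because this is a USERSET parameter, it can be changed for the current session only (an illustrative statement, not from the original text):

```sql
-- Disable the idle-session timeout for this session.
SET session_timeout TO 0;
```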
+Parameter description: Specifies whether the SSL connection is enabled.
+Type: POSTMASTER
+Value range: Boolean
+GaussDB(DWS) supports the SSL connection when the client connects to CNs. It is recommended that the SSL connection be enabled only on CNs.
+Default value: on
+Parameter description: Specifies the encryption algorithm list supported by the SSL.
+Type: POSTMASTER
+Value range: a string. Separate multiple encryption algorithms with semicolons (;).
+Default value: ALL
+Parameter description: Specifies the traffic volume over the SSL-encrypted channel before the session key is renegotiated. The renegotiation traffic limitation mechanism reduces the probability that attackers use the password analysis method to crack the key based on a huge amount of data but causes big performance losses. The traffic indicates the sum of sent and received traffic.
+Type: USERSET
+You are advised to retain the default value, that is, disable the renegotiation mechanism. You are not advised to use the gs_guc tool or other methods to set the ssl_renegotiation_limit parameter in the postgresql.conf file. The setting does not take effect.
+Value range: an integer ranging from 0 to INT_MAX. The unit is KB. 0 indicates that the renegotiation mechanism is disabled.
+Default value: 0
+Parameter description: Specifies whether to check the password complexity when you run the CREATE ROLE/USER or ALTER ROLE/USER command to create or modify a GaussDB(DWS) account.
+Type: SIGHUP
+For security purposes, do not disable the password complexity policy.
+Value range: an integer, 0 or 1
+Default value: 1
+Parameter description: Specifies whether to check the reuse days of the new password when you run the ALTER USER or ALTER ROLE command to change a user password.
+Type: SIGHUP
+When you change the password, the system checks the values of password_reuse_time and password_reuse_max.
Value range: a floating-point number ranging from 0 to 3650. The unit is day.
+Default value: 60
+Parameter description: Specifies whether to check the reuse times of the new password when you run the ALTER USER or ALTER ROLE command to change a user password.
+Type: SIGHUP
+When you change the password, the system checks the values of password_reuse_time and password_reuse_max.
+Value range: an integer ranging from 0 to 1000
+Default value: 0
+Parameter description: Specifies the duration before an account is automatically unlocked.
+Type: SIGHUP
+The locking and unlocking functions take effect only when the values of password_lock_time and failed_login_attempts are positive numbers.
Value range: a floating-point number ranging from 0 to 365. The unit is day.
+Default value: 1
Parameter description: Specifies the maximum number of incorrect password attempts before an account is locked. The account will be automatically unlocked after the time specified in password_lock_time. Incorrect attempts include failed password attempts during login and password input failures when running the ALTER USER command.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 1000
+Default value: 10
+Parameter description: Specifies the encryption type of user passwords.
+Type: SIGHUP
+Value range: an integer, 0, 1, or 2
+Default value: 2
+Parameter description: Specifies the minimum account password length.
+Type: SIGHUP
+Value range: an integer. A password can contain 6 to 999 characters.
+Default value: 8
+Parameter description: Specifies the maximum account password length.
+Type: SIGHUP
+Value range: an integer. A password can contain 6 to 999 characters.
+Default value: 32
+Parameter description: Specifies the minimum number of uppercase letters that an account password must contain.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 999.
+Default value: 0
+Parameter description: Specifies the minimum number of lowercase letters that an account password must contain.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 999.
+Default value: 0
+Parameter description: Specifies the minimum number of digits that an account password must contain.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 999.
+Default value: 0
+Parameter description: Specifies the minimum number of special characters that an account password must contain.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 999.
+Default value: 0
+Parameter description: Specifies the validity period of an account password.
+Type: SIGHUP
Value range: a floating-point number ranging from 0 to 999. The unit is day.
+Default value: 90
+Parameter description: Specifies how many days in advance users are notified before the account password expires.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 999. The unit is day.
+Default value: 7
+This section describes parameter settings and value ranges for communication libraries.
+Parameter description: Specifies whether the communication library uses the TCP or SCTP protocol to set up a data channel. The modification of this parameter takes effect after the cluster is restarted.
+Type: POSTMASTER
+Value range: Boolean. If this parameter is set to on for CNs, the CNs connect to DNs using TCP. If this parameter is set to on for DNs, the DNs communicate with each other using TCP.
+Default value: on
+Parameter description: Specifies the TCP or SCTP listening port used by the TCP proxy communication library or SCTP communication library, respectively.
+Type: POSTMASTER
+This port number is automatically allocated during cluster deployment. Do not change the parameter setting. If the port number is incorrectly set, the database communication fails.
+Value range: an integer ranging from 0 to 65535
+Default value: port + Number of primary DNs on the local host x 2 + Sequence number of the local DN on the local host
+Parameter description: Specifies the TCP listening port used by the TCP proxy communication library or SCTP communication library, respectively.
+Type: POSTMASTER
+Value range: an integer ranging from 0 to 65535
+Default value: port + Number of primary DNs on the local host x 2 + Sequence number of the local DN on the local host + 1
+This port number is automatically allocated during cluster deployment. Do not change the parameter setting. If the port number is incorrectly set, the database communication fails.
+Parameter description: Specifies the maximum number of DNs supported by the TCP proxy communication library or SCTP communication library.
+Type: USERSET
+Value range: an integer ranging from 1 to 8192
+Default value: actual number of DNs
+Parameter description: Specifies the maximum number of concurrent data streams supported by the TCP proxy communication library or SCTP communication library. The value of this parameter must be greater than: Number of concurrent data streams x Number of operators in each stream x Square of SMP.
+Type: POSTMASTER
+Value range: an integer ranging from 1 to 60000
Default value: calculated by the following formula: min(query_dop_limit x query_dop_limit x 2 x 20, max_process_memory (bytes) x 0.005/(Maximum number of CNs + Number of current DNs)/260). If the value is less than 1024, 1024 is used. query_dop_limit = Number of CPU cores of a single server/Number of DNs of a single server.
+Parameter description: Specifies the maximum number of receiving threads for the TCP proxy communication library or SCTP communication library.
+Type: POSTMASTER
+Value range: an integer ranging from 1 to 50
+Default value: 4
+Parameter description: Specifies the maximum size of packets that can be consecutively sent by the TCP proxy communication library or SCTP communication library. When you use a 1GE NIC, a small value ranging from 20 KB to 40 KB is recommended.
+Type: USERSET
+Value range: an integer ranging from 0 to 102400. The default unit is KB. The value 0 indicates that the quota mechanism is not used.
Default value: 1 MB
+Parameter description: Specifies the maximum memory available for buffering on the TCP proxy communication library or SCTP communication library on a single DN.
+Type: POSTMASTER
Value range: an integer ranging from 102400 to INT_MAX/2. The default unit is KB. During installation, the size cannot be less than 1 GB.
+Default value: max_process_memory/8
+This parameter must be specifically set based on environment memory and the deployment method. If it is too large, there may be out-of-memory (OOM). If it is too small, the performance of the TCP proxy communication library or SCTP communication library may deteriorate.
+Parameter description: Specifies the percentage of the memory pool resources that can be used by the TCP proxy communication library or the SCTP communication library in a DN. This parameter is used to adaptively reserve memory used by the communication libraries.
+Type: POSTMASTER
+Value range: an integer ranging from 0 to 100
+Default value: 0
+If the memory used by the communication library is small, set this parameter to a small value. Otherwise, set it to a large value.
+Parameter description: Specifies whether to bind the client of the communication library to a specified IP address when the client initiates a connection.
+Type: USERSET
+Value range: Boolean
+If multiple IP addresses of a node in a cluster are on the same communication network segment, set this parameter to on. In this case, the client is bound to the IP address specified by listen_addresses. The concurrency performance of a cluster depends on the number of random ports because a port can be used only by one client at a time.
+Default value: off
+Parameter description: Specifies whether to use the NO_DELAY attribute of the communication library connection. Restart the cluster for the setting to take effect.
+Type: USERSET
+Value range: Boolean
+Default value: off
+If packet loss occurs because a large number of packets are received per second, set this parameter to off to reduce the total number of packets.
+Parameter description: Specifies the debug mode of the TCP proxy communication library or SCTP communication library, that is, whether to print logs about the communication layer. The setting is effective at the session layer.
+When the switch is set to on, the number of printed logs is huge, adding extra overhead and reducing database performance. Therefore, set the switch to on only in the debug mode.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the duration after which the communication library server automatically triggers ACK when no data package is received.
+Type: USERSET
+Value range: an integer ranging from 0 to 20000. The unit is millisecond (ms). 0 indicates that automatic ACK triggering is disabled.
+Default value: 2000
+Parameter description: Specifies the timer mode of the TCP proxy communication library or SCTP communication library, that is, whether to print timer logs in each phase of the communication layer. The setting is effective at the session layer.
+When the switch is set to on, the number of printed logs is huge, adding extra overhead and reducing database performance. Therefore, set the switch to on only in the debug mode.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the statistics mode of the TCP proxy communication library or SCTP communication library, that is, whether to print statistics about the communication layer. The setting is effective at the session layer.
+When the switch is set to on, the number of printed logs is huge, adding extra overhead and reducing database performance. Therefore, set the switch to on only in the debug mode.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to enable the pooler reuse mode. The setting takes effect after the cluster is restarted.
+Type: POSTMASTER
+Value range: Boolean
+Set this parameter to the same value for CNs and DNs. If enable_stateless_pooler_reuse is set to off for CNs and set to on for DNs, the cluster communication fails. Restart the cluster to make the setting take effect.
+Default value: off
+Parameter description: Specifies a switch for logical connections between CNs and DNs. The parameter setting takes effect only after the cluster is restarted.
+Type: POSTMASTER
+Value range: Boolean
+If comm_cn_dn_logic_conn is set to off for CNs and set to on for DNs, cluster communication will fail. You are advised to set this parameter to the same value for all CNs and DNs. Restart the cluster to make the setting take effect.
+Default value: off
+This section describes memory parameters.
+Parameters described in this section take effect only after the database service restarts.
+Parameter description: Specifies whether to enable the logical memory management module.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
If the result of max_process_memory - shared_buffers - cstore_buffers is less than 2 GB, GaussDB(DWS) forcibly sets enable_memory_limit to off.
+Parameter description: Specifies the maximum physical memory of a database node.
+Type: POSTMASTER
+Value range: an integer ranging from 2 x 1024 x 1024 to INT_MAX/2. The unit is KB.
+Default value: The value is automatically adapted by non-secondary DNs. The formula is (Physical memory size) x 0.6/(1 + Number of primary DNs). If the result is less than 2 GB, 2 GB is used by default. The default size of the secondary DN is 12 GB.
+Setting suggestions:
+On DNs, the value of this parameter is determined based on the physical system memory and the number of DNs deployed on a single node. Parameter value = (Physical memory – vm.min_free_kbytes) x 0.7/(n + Number of primary DNs). This parameter aims to ensure system reliability, preventing node OOM caused by increasing memory usage. vm.min_free_kbytes indicates OS memory reserved for kernels to receive and send data. Its value is at least 5% of the total memory. That is, max_process_memory = Physical memory x 0.665/(n + Number of primary DNs). If the cluster scale (number of nodes in the cluster) is smaller than 256, n=1; if the cluster scale is larger than 256 and smaller than 512, n=2; if the cluster scale is larger than 512, n=3.
+Set this parameter on CNs to the same value as that on DNs.
+RAM is the maximum memory allocated to the cluster.
+Parameter description: Specifies the size of shared memory used by GaussDB(DWS). If this parameter is set to a large value, GaussDB(DWS) may require more System V shared memory than the default setting.
+Type: POSTMASTER
+Value range: an integer ranging from 128 to INT_MAX. The unit is 8 KB.
Changing the value of BLCKSZ changes the minimum value of shared_buffers.
+Default value: 512 MB for CNs and 1 GB for DNs. If the maximum value allowed by the OS is smaller than 32 MB, this parameter will be automatically changed to the maximum value allowed by the OS during database initialization.
+Setting suggestions:
+Set this parameter for DNs to a value greater than that for CNs, because GaussDB(DWS) pushes most of its queries down to DNs.
+It is recommended that shared_buffers be set to a value less than 40% of the memory. Set it to a large value for row-store tables and a small value for column-store tables. For column-store tables: shared_buffers = (Memory of a single server/Number of DNs on the single server) x 0.4 x 0.25
+If you want to increase the value of shared_buffers, you also need to increase the value of checkpoint_segments, because a longer period of time is required to write a large amount of new or changed data.
+Parameter description: Specifies the size of the ring buffer used for data parallel import.
+Type: USERSET
+Value range: an integer ranging from 16384 to INT_MAX. The unit is KB.
+Default value: 2 GB
+Setting suggestions: Increase the value of this parameter on DNs if a large amount of data is to be imported.
+Parameter description: Specifies the maximum size of local temporary buffers used by each database session.
+Type: USERSET
+Value range: an integer ranging from 800 to INT_MAX/2. The unit is KB.
+Default value: 8 MB
+Parameter description: Specifies the maximum number of transactions that can stay in the prepared state simultaneously. If this parameter is set to a large value, GaussDB(DWS) may require more System V shared memory than the default setting.
+When GaussDB(DWS) is deployed as an HA system, set this parameter on the standby server to the same value or a value greater than that on the primary server. Otherwise, queries will fail on the standby server.
+Type: POSTMASTER
+Value range: an integer ranging from 0 to 536870911. 800 indicates that the prepared transaction feature is disabled.
+Default value: 800
+Set this parameter to a value greater than or equal to that of max_connections to avoid failures in preparation.
+Parameter description: Specifies the memory used for internal sort operations and hash tables before data is written into temporary disk files. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are required for Hash joins as well as Hash-based aggregations and IN subqueries.
+For a complex query, several sort or Hash operations may be running in parallel; each operation will be allowed to use as much memory as this value specifies. If the memory is insufficient, data is written into temporary files. In addition, several running sessions could be performing such operations concurrently. Therefore, the total memory used may be many times the value of work_mem.
+Type: USERSET
+Value range: an integer ranging from 64 to INT_MAX. The unit is KB.
+Default value: 64 MB
+Setting suggestions:
+If the physical memory specified by work_mem is insufficient, additional operator calculation data will be written into temporary tables based on query characteristics and the degree of parallelism. This reduces performance by five to ten times, and prolongs the query response time from seconds to minutes.
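Because work_mem is a USERSET parameter, a session that is about to run a known heavy sort or hash can raise it locally (an illustrative value, not a recommendation from the original text):

```sql
-- Give the current session more sort/hash memory before a heavy query.
SET work_mem TO '256MB';
```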
+Parameter description: Specifies the memory used by query. If the value of query_mem is greater than 0, the optimizer adjusts the estimated query memory to this value when generating an execution plan.
+Type: USERSET
+Value range: 0 or an integer greater than 32. The default unit is KB. If the value is set to a negative value or less than 32 MB, the default value 0 is used. In this case, the optimizer does not adjust the estimated query memory.
+Default value: 0
+Parameter description: Specifies the maximum memory that can be used by query. If the value of query_max_mem is greater than 0, an error is reported when the query memory usage exceeds the value.
+Type: USERSET
+Value range: 0 or an integer greater than 32 MB. The default unit is KB. If the value is set to a negative value or less than 32 MB, the default value 0 is used. In this case, the query memory will not be limited based on the value.
+Default value: 0
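For example (illustrative values, not from the original text), a session can pin both the estimated and the maximum query memory:

```sql
-- Tell the optimizer to plan for 4 GB, and fail queries that exceed 8 GB.
SET query_mem = '4GB';
SET query_max_mem = '8GB';
```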
+Parameter description: Specifies the maximum size of memory to be used for maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. This parameter may affect the execution efficiency of VACUUM, VACUUM FULL, CLUSTER, and CREATE INDEX.
+Type: USERSET
+Value range: an integer ranging from 1024 to INT_MAX. The unit is KB.
+Default value: 128 MB
+Setting suggestions:
+Parameter description: Specifies the memory used for internal sort operations on column-store tables before data is written into temporary disk files. This parameter can be used for inserting tables with a partial cluster key or index, creating a table index, and deleting or updating a table.
+Type: USERSET
+Multiple running sessions may perform partial sorting on a table at the same time. Therefore, the total memory usage may be several times of the psort_work_mem value.
+Value range: an integer ranging from 64 to INT_MAX. The unit is KB.
+Default value: 512 MB
Parameter description: Specifies the number of loaded CuDescs per column when a column-store table is scanned. Increasing the value improves query performance but increases memory usage, particularly when there are many columns in the column-store tables.
+Type: USERSET
+Value range: an integer ranging from 100 to INT_MAX/2
+Default value: 1024
+When the value of max_loaded_cudesc is set to a large value, the memory may be insufficient.
+Parameter description: Specifies the maximum safe depth of GaussDB(DWS) execution stack. The safety margin is required because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines, such as expression evaluation.
+Type: SUSET
+Configuration principles:
+Value range: an integer ranging from 100 to INT_MAX. The unit is KB.
+Default value: 2 MB
+2 MB is a small value and will not incur system breakdown in general, but may lead to execution failures of complex functions.
+Parameter description: Specifies the size of the shared buffer used by ORC, Parquet, or CarbonData data of column-store tables and OBS or HDFS column-store foreign tables.
+Type: POSTMASTER
+Value range: an integer ranging from 16384 to INT_MAX. The unit is KB.
+Default value: 32 MB
+Setting suggestions:
+Column-store tables use the shared buffer specified by cstore_buffers instead of that specified by shared_buffers. When column-store tables are mainly used, reduce the value of shared_buffers and increase that of cstore_buffers.
+Use cstore_buffers to specify the cache of ORC, Parquet, or CarbonData metadata and data for OBS or HDFS foreign tables. The metadata cache size should be 1/4 of cstore_buffers and not exceed 2 GB. The remaining cache is shared by column-store data and foreign table column-store data.
+Parameter description: Specifies whether to reserve 1/4 of cstore_buffers for storing ORC metadata when the cstore buffer is initialized.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the maximum number of files that can be stored in memory when you schedule an HDFS foreign table. If the number is exceeded, all files in the list will be spilled to disk for scheduling.
+Type: USERSET
+Value range: an integer ranging from 1 to INT_MAX
+Default value: 60000
+Parameter description: Specifies the size of the ring buffer used for data parallel export.
+Type: USERSET
+Value range: an integer ranging from 256 to INT_MAX. The unit is KB.
+Default value: 16 MB
+Parameter description: If the amount of data inserted to a CU is greater than the value of this parameter when data is inserted to a column-store table, the system starts row-level size verification to prevent the generation of a CU whose size is greater than 1 GB (non-compressed size).
+Type: USERSET
+Value range: an integer ranging from 0 to 1024. The unit is MB.
+Default value: 1024 MB
+This section describes parameters related to statement disk space control, which are used to limit the disk space usage of statements.
+Parameter description: Specifies the space size for files to be spilled to disks when a single SQL statement is executed on a single DN. The managed space includes the space occupied by ordinary tables, temporary tables, and intermediate result sets to be flushed to disks. System administrators are also restricted by this parameter.
+Type: USERSET
Value range: an integer ranging from -1 to INT_MAX. The unit is KB. -1 indicates no limit.
Default value: -1
+Setting suggestion: You are advised to set sql_use_spacelimit to 10% of the total disk space where DNs reside. If two DNs exist on a single disk, set sql_use_spacelimit to 5% of the total disk space.
+For example, if sql_use_spacelimit is set to 100 in the statement and the amount data spilled to disks on a single DN exceeds 100 KB, DWS stops the query and displays a message of threshold exceeding.
```sql
insert into user1.t1 select * from user2.t1;
ERROR: The space used on DN (104 kB) has exceeded the sql use space limit (100 kB).
```
Handling suggestion:
+Parameter description: Specifies the total space for files spilled to disks in a single thread. For example, temporary files used by sorting and hash tables or cursors are controlled by this parameter.
+This is a session-level setting.
+Type: SUSET
Value range: an integer ranging from -1 to INT_MAX. The unit is KB. -1 indicates no limit.
Default value: -1
+This parameter does not apply to disk space occupied by temporary tablespaces used for executing SQL queries.
+Parameter description: Specifies the percentage of idle space of old pages that can be reused when page replication is used for data synchronization between primary and standby DNs in the scenario where data is inserted into row-store tables in batches.
+Type: USERSET
+Value range: an integer ranging from 0 to 100. The value is a percentage. Value 0 indicates that the old pages are not reused and new pages are requested.
+Default value: 70
+This section describes kernel resource parameters. Whether these parameters take effect depends on OS settings.
+Parameter description: Specifies the maximum number of simultaneously open files allowed by each server process. If the kernel is enforcing a proper limit, setting this parameter is not required.
But on some platforms, especially on most BSD systems, the kernel allows independent processes to open far more files than the system can really support. If the message "Too many open files" is displayed, try to reduce the setting. Generally, the number of file descriptors must be greater than or equal to: Maximum number of concurrent tasks x Number of primary DNs on the current physical machine x max_files_per_process x 3.
+Type: POSTMASTER
+Value range: an integer ranging from 25 to INT_MAX
+Default value: 1000
+This feature allows administrators to reduce the I/O impact of the VACUUM and ANALYZE statements on concurrent database activities. It is often more important to prevent maintenance statements, such as VACUUM and ANALYZE, from affecting other database operations than to run them quickly. Cost-based vacuum delay provides a way for administrators to achieve this purpose.
+Certain operations hold critical locks and should be complete as quickly as possible. In GaussDB(DWS), cost-based vacuum delays do not take effect during such operations. To avoid uselessly long delays in such cases, the actual delay is calculated as follows and is the maximum value of the following calculation results:
+During the execution of the ANALYZE | ANALYSE and VACUUM statements, the system maintains an internal counter that keeps track of the estimated cost of the various I/O operations that are performed. When the accumulated cost reaches a limit (specified by vacuum_cost_limit), the process performing the operation will sleep for a short period of time (specified by vacuum_cost_delay). Then, the counter resets and the operation continues.
+By default, this feature is disabled. To enable this feature, set vacuum_cost_delay to a value other than 0.
+Parameter description: Specifies the length of time that the process will sleep when vacuum_cost_limit has been exceeded.
+Type: USERSET
+Value range: an integer ranging from 0 to 100. The unit is millisecond (ms). A positive number enables cost-based vacuum delay and 0 disables cost-based vacuum delay.
+Default value: 0
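Both vacuum_cost_delay and vacuum_cost_limit are USERSET parameters, so the feature can be enabled per session (illustrative values, not from the original text):

```sql
-- Enable cost-based vacuum delay: sleep 10 ms whenever the accumulated
-- cost reaches 500, then continue.
SET vacuum_cost_delay TO 10;
SET vacuum_cost_limit TO 500;
```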
+Parameter description: Specifies the estimated cost for vacuuming a buffer found in the shared buffer. It represents the cost to lock the buffer pool, look up the shared Hash table, and scan the page.
+Type: USERSET
+Value range: an integer ranging from 0 to 10000. The unit is millisecond (ms).
+Default value: 1
+Parameter description: Specifies the estimated cost for vacuuming a buffer read from the disk. It represents the cost to lock the buffer pool, look up the shared Hash table, read the desired block from the disk, and scan the block.
+Type: USERSET
+Value range: an integer ranging from 0 to 10000. The unit is millisecond (ms).
+Default value: 10
+Parameter description: Specifies the estimated cost charged when vacuum modifies a block that was previously clean. It represents the I/Os required to flush the dirty block out to disk again.
+Type: USERSET
+Value range: an integer ranging from 0 to 10000. The unit is millisecond (ms).
+Default value: 20
+Parameter description: Specifies the cost limit. The cleanup process will sleep if this limit is exceeded.
+Type: USERSET
+Value range: an integer ranging from 1 to 10000. The unit is ms.
+Default value: 200
+Parameter description: Specifies whether O&M personnel are allowed to generate some ADIO logs to locate ADIO issues. This parameter is used only by developers. Common users are advised not to use it.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether the quick allocation switch of the disk space is enabled. This switch can be enabled only in the XFS file system.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the number of row-store prefetches using the ADIO.
+Type: USERSET
+Value range: an integer ranging from 1024 to 1048576. The unit is 8 KB.
+Default value: 32 MB
+Parameter description: Specifies the number of row-store writes using the ADIO.
+Type: USERSET
+Value range: an integer ranging from 1024 to 1048576. The unit is 8 KB.
Default value: 8 MB
+Parameter description: Specifies the number of column-store prefetches using the ADIO.
+Type: USERSET
+Value range: an integer. The value range is from 1024 to 1048576 and the unit is KB.
+Default value: 32 MB
+Parameter description: Specifies the number of column-store writes using the ADIO.
+Type: USERSET
+Value range: an integer. The value range is from 1024 to 1048576 and the unit is KB.
Default value: 8 MB
+Parameter description: Specifies the maximum number of column-store writes buffered in the database using the ADIO.
+Type: USERSET
+Value range: An integer. The value range is from 4096 to INT_MAX/2 and the unit is KB.
+Default value: 2 GB
+Parameter description: Specifies the disk size that the row-store pre-scales using the ADIO.
+Type: SUSET
+Value range: an integer. The value range is from 1024 to 1048576 and the unit is KB.
Default value: 8 MB
+Parameter description: Specifies the number of requests that can be simultaneously processed by the disk subsystem. For the RAID array, the parameter value must be the number of disk drive spindles in the array.
+Type: USERSET
+Value range: an integer ranging from 0 to 1000
+Default value: 1
+GaussDB(DWS) provides a parallel data import function that enables a large amount of data to be imported in a fast and efficient manner. This section describes parameters for importing data in parallel in GaussDB(DWS).
Parameter description: Specifies whether to distinguish between the problem "the number of imported file records is empty" and the problem "the imported file does not exist". If this parameter is set to true and the problem "the imported file does not exist" occurs, GaussDB(DWS) reports the error message "file does not exist".
+Type: SUSET
+Value range: Boolean
+Default value: off
Parameter description: To optimize batch insertion into column-store partitioned tables, data is cached during the insertion process and then written to the disk in batches. You can use partition_mem_batch to specify the number of buffers. If the value is too large, much memory is consumed. If it is too small, the performance of batch insertion into column-store partitioned tables deteriorates.
+Type: USERSET
+Value range: 1 to 65535
+Default value: 256
Parameter description: To optimize batch insertion into column-store partitioned tables, data is cached during the insertion process and then written to the disk in batches. You can use partition_max_cache_size to specify the size of the data buffer. If the value is too large, much memory is consumed. If it is too small, the performance of batch insertion into column-store partitioned tables deteriorates.
+Type: USERSET
+Value range: 4096 to INT_MAX/2. The minimum unit is KB.
+Default value: 2 GB
+Parameter description: Specifies whether to enable the debug function of Gauss Data Service (GDS). This parameter is used to better locate and analyze GDS faults. After the debug function is enabled, types of packets received or sent by GDS, peer end of GDS during command interaction, and other interaction information about GDS are written into the logs of corresponding nodes. In this way, state switching on the GaussDB state machine and the current state are recorded. If this function is enabled, additional log I/O resources will be consumed, affecting log performance and validity. You are advised to enable this function only when locating GDS faults.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: This parameter has been discarded. You can set this parameter to on for forward compatibility, but the setting will not take effect.
+For details about how to enable the delta table function of column-store tables, see the table-level parameter enable_delta in "CREATE TABLE" in the SQL Syntax.
+Type: POSTMASTER
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the level of the information that is written to WALs.
+Type: POSTMASTER
Value range: enumerated values
minimal. Advantages: Certain bulk operations (including creating tables and indexes, executing cluster operations, and copying tables) are safely skipped in logging, which can make those operations much faster. Disadvantages: WALs only contain basic information required for the recovery from a database server crash or an emergency shutdown. Archived WALs cannot be used to restore data.
archive. Adds logging required for WAL archiving, supporting database restoration from archives.
+Default value: hot_standby
+Parameter description: Specifies the synchronization mode of the current transaction.
+Type: USERSET
+Value range: enumerated values
+Default value: on
+Parameter description: Specifies the number of XLOG_BLCKSZs used for storing WAL data. The size of each XLOG_BLCKSZ is 8 KB.
+Type: POSTMASTER
Value range: an integer ranging from -1 to 2^18. The unit is 8 KB.
+Default value: 16 MB
+Setting suggestions: The content of WAL buffers is written to disks at each transaction commit, and setting this parameter to a large value does not significantly improve system performance. Setting this parameter to hundreds of megabytes can improve the disk writing performance on the server, to which a large number of transactions are committed. Based on experiences, the default value meets user requirements in most cases.
Parameter description: Specifies the duration for which committed data is stored in the WAL buffer.
+Type: USERSET
+Value range: an integer, ranging from 0 to 100000 (unit: μs). 0 indicates no delay.
+Default value: 0
+Parameter description: Specifies a limit on the number of ongoing transactions. If the number of ongoing transactions is greater than the limit, a new transaction will wait for the period of time specified by commit_delay before it is submitted. If the number of ongoing transactions is less than the limit, the new transaction is immediately written into a WAL.
+Type: USERSET
+Value range: an integer ranging from 0 to 1000
+Default value: 5
+Parameter description: Specifies whether to enable the group insertion mode for WALs. Only the Kunpeng architecture supports this parameter.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to compress FPI pages.
+Type: USERSET
+Value range: Boolean
+Default value: on
Parameter description: Specifies the compression level of the zlib compression algorithm when the wal_compression parameter is enabled.
+Type: USERSET
+Value range: an integer ranging from 0 to 9.
+Default value: 9
+Parameter description: Specifies the minimum number of WAL segment files in the period specified by checkpoint_timeout. The size of each log file is 16 MB.
+Type: SIGHUP
+Value range: an integer. The minimum value is 1.
+Default value: 64
+Increasing the value of this parameter speeds up the export of big data. Set this parameter based on checkpoint_timeout and shared_buffers. This parameter affects the number of WAL log segment files that can be reused. Generally, the maximum number of reused files in the pg_xlog folder is twice the number of checkpoint segments. The reused files are not deleted and are renamed to the WAL log segment files which will be later used.
+Parameter description: Specifies the maximum time between automatic WAL checkpoints.
+Type: SIGHUP
+Value range: an integer ranging from 30 to 3600 (s)
+Default value: 15min
If the value of checkpoint_segments is increased, you need to increase the value of this parameter. Increasing both further requires an increase in shared_buffers. Consider all these parameters when setting values.
+Parameter description: Specifies the target of checkpoint completion, as a fraction of total time between checkpoints.
+Type: SIGHUP
+Value range: 0.0 to 1.0. The default value 0.5 indicates that each checkpoint must be completed within 50% of the checkpoint interval.
+Default value: 0.5
+Parameter description: Specifies a time in seconds. If the checkpoint interval is close to this time due to filling of checkpoint segment files, a message is sent to the server log to increase the value of checkpoint_segments.
+Type: SIGHUP
+Value range: an integer (unit: s). 0 indicates that warning is disabled.
+Default value: 5min
+Recommended value: 5min
+Parameter description: Specifies the longest time that the checkpoint waits for the checkpointer thread to start.
+Type: SIGHUP
+Value range: an integer ranging from 2 to 3600 (s)
+Default value: 1min
+Parameter description: When archive_mode is enabled, completed WAL segments are sent to archive storage by setting archive_command.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+When wal_level is set to minimal, archive_mode cannot be used.
+Parameter description: Specifies the command used to archive WALs set by the administrator. You are advised to set the archive log path to an absolute path.
+Type: SIGHUP
+Value range: a string
+Default value: (disabled)
Example:

```
archive_command = 'cp --remove-destination %p /mnt/server/archivedir/%f'
archive_command = 'copy %p /mnt/server/archivedir/%f'  # Windows
```
Parameter description: Specifies the size of WAL logs backed up in the pg_xlog/backup directory.
+Type: SIGHUP
+Value range: an integer between 1048576 and 104857600. The unit is KB.
+Default value: 2097152
+Parameter description: Specifies the archiving period.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX. The unit is second. 0 indicates that archiving timeout is disabled.
+Default value: 0
Parameter description: Specifies the minimum number of Xlog file segments (transaction log files) stored in the pg_xlog directory. The standby server obtains log files from the primary server for streaming replication.
+Type: SIGHUP
+Value range: an integer ranging from 2 to INT_MAX
+Default value: 65
+Setting suggestions:
+Parameter description: Specifies the maximum duration that the sending server waits for the WAL reception in the receiver.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX. The unit is millisecond (ms).
+Default value: 15s
+Parameter description: Specifies the number of log replication slots on the primary server.
+Type: POSTMASTER
+Value range: an integer ranging from 0 to 262143
+Default value: 8
+A physical replication slot provides an automatic method to ensure that an Xlog is not removed from a primary DN before all the standby and secondary DNs receive it. Physical replication slots are used to support HA clusters. The number of physical replication slots required by a cluster is as follows: ratio of standby and secondary DNs to the primary DN in a ring of DNs. For example, if an HA cluster has 1 primary DN, 1 standby DN, and 1 secondary DN, the number of required physical replication slots will be 2.
+Parameter description: Specifies the data volume that can be read from the disk per second when the primary server provides a build session to the standby server.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 1048576. The unit is KB. The value 0 indicates that the I/O flow is not restricted when the primary server provides a build session to the standby server.
+Default value: 0
+Setting suggestions: Set this parameter based on the disk bandwidth and job model. If there is no flow restriction or job interference, for disks with good performance such as SSDs, a full build consumes a relatively small proportion of bandwidth and has little impact on service performance. In this case, you do not need to set the threshold. If the service performance of a common 10,000 rpm SAS disk deteriorates significantly during a build, you are advised to set the parameter to 20 MB.
+This setting directly affects the build speed and completion time. Therefore, you are advised to set this parameter to a value larger than 10 MB. During off-peak hours, you are advised to remove the flow restriction to restore to the normal build speed.
+Parameter description: Specifies the number of transactions by which VACUUM will defer the cleanup of invalid row-store table records, so that VACUUM and VACUUM FULL do not clean up deleted tuples immediately.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 1000000. 0 means no delay.
+Default value: 0
+Parameter description: Specifies the size of memory used by queues when the sender sends data pages to the receiver. The value of this parameter affects the buffer size copied for the replication between the primary and standby servers.
+Type: POSTMASTER
+Value range: an integer ranging from 4 to 1023. The unit is MB.
+Default value: 128 MB
+Parameter description: Specifies the data synchronization mode between the primary and standby servers when data is imported to row-store tables in a database.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the data catchup mode between the primary and standby nodes.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the maximum duration for the primary, standby, and secondary clusters to wait for the secondary cluster to start in sequence and the maximum duration for the secondary cluster to send the scanning list when incremental data catchup is enabled.
+Type: SIGHUP
Value range: an integer ranging from 1 to INT_MAX. The unit is second.
+Default value: 300s
+The unit can only be second.
+These configuration parameters provide a crude method of influencing the query plans chosen by the query optimizer. If the default plan chosen by the optimizer for a particular query is not optimal, a temporary solution is to use one of these configuration parameters to force the optimizer to choose a different plan. Better ways include adjusting the optimizer cost constants, manually running ANALYZE, increasing the value of the default_statistics_target configuration parameter, and adding the statistics collected in a specific column using ALTER TABLE SET STATISTICS.
+Parameter description: Controls whether the query optimizer uses the bitmap-scan plan type.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the Hash aggregation plan type.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the Hash-join plan type.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the index-scan plan type.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the index-only-scan plan type.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses materialization. It is impossible to suppress materialization entirely, but setting this parameter to off prevents the optimizer from inserting materialized nodes.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the merge-join plan type.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls whether the query optimizer uses the nested-loop join plan type to fully scan internal tables. It is impossible to suppress nested-loop joins entirely, but setting this parameter to off allows the optimizer to choose other methods if available.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls whether the query optimizer uses the nested-loop join plan type to scan the parameterized indexes of internal tables.
+Type: USERSET
+Value range: Boolean
+Default value: The default value for a newly installed cluster is on. If the cluster is upgraded from R8C10, the forward compatibility is retained. If the version is upgraded from R7C10 or an earlier version, the default value is off.
+Parameter description: Controls whether the query optimizer uses the sequential scan plan type. It is impossible to suppress sequential scans entirely, but setting this variable to off allows the optimizer to preferentially choose other methods if available.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the sort method. It is impossible to suppress explicit sorts entirely, but setting this variable to off allows the optimizer to preferentially choose other methods if available.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the Tuple ID (TID) scan plan type.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: In CASCADE mode, when a user is deleted, all the objects belonging to the user are deleted. This parameter specifies whether the queries of the objects belonging to the user can be unlocked when the user is deleted.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls the rule matching modes of regular expressions.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls the use of stream in concurrent updates. This parameter is restricted by the enable_stream_operator parameter.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses streams.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to push WITH RECURSIVE join queries to DNs for processing.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the maximum number of WITH RECURSIVE iterations.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX
+Default value: 200
+Parameter description: Controls whether the query optimizer uses the vectorized executor.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether the query optimizer uses the broadcast distribution method when it evaluates the cost of stream.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the optimizer excludes internal table running costs when selecting the Hash Join cost path. If it is set to on, tables with few records and high running costs are more likely to be selected.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls whether the query optimizer uses streams when it delivers statements. This parameter is only used for external HDFS tables.
+This parameter is deprecated. For forward compatibility, it can still be set to on, but the setting has no effect.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls which type of hashagg plan the query optimizer generates.
+Type: USERSET
+Value range: an integer ranging from 0 to 3.
+Default value: 0
+Parameter description: When an aggregate operation involves multiple GROUP BY columns and none of them is a distribution column, one GROUP BY column must be selected for redistribution. This parameter controls the policy for selecting the redistribution column.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether the DFS partitioned table is dynamically or statically optimized.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies a computing Node Group or the way to choose such a group. The Node Group mechanism is now for internal use only. You do not need to set it.
+During join or aggregation operations, a Node Group can be selected in four modes. In each mode, the specified candidate computing Node Groups are listed for the optimizer to select an appropriate one for the current operator.
+Type: USERSET
+Value range: a string
+Default value: bind
+Parameter description: Specifies whether the optimizer assigns computing workloads to a specific Node Group when multiple Node Groups exist in an environment. The Node Group mechanism is now for internal use only. You do not need to set it.
+This parameter takes effect only when expected_computing_nodegroup is set to a specific Node Group.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the weight used by the optimizer to calculate the final cost of stream operators.
+The base stream cost is multiplied by this weight to obtain the final cost.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 1
+This parameter is applicable only to Redistribute and Broadcast streams.
+Parameter description: Specifies whether to enable inlist-to-join (inlist2join) query rewriting.
+Type: USERSET
+Value range: a string
+Default value: cost_base
+This section describes the optimizer cost constants. The cost variables described in this section are measured on an arbitrary scale. Only their relative values matter; scaling them all up or down by the same factor results in no change to the optimizer's choices. By default, these cost variables are based on the cost of sequential page fetches; that is, seq_page_cost is conventionally set to 1.0 and the other cost variables are set with reference to it. However, you can use a different scale, such as actual execution time in milliseconds.
+Parameter description: Specifies the optimizer's estimated cost of a disk page fetch that is part of a series of sequential fetches.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 1
+Parameter description: Specifies the optimizer's estimated cost of an out-of-sequence disk page fetch.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 4
+Parameter description: Specifies the optimizer's estimated cost of processing each row during a query.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 0.01
+Parameter description: Specifies the optimizer's estimated cost of processing each index entry during an index scan.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 0.005
+Parameter description: Specifies the optimizer's estimated cost of processing each operator or function during a query.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 0.0025
+Parameter description: Specifies the optimizer's assumption about the effective size of the disk cache that is available to a single query.
+When setting this parameter you should consider both GaussDB(DWS)'s shared buffer and the kernel's disk cache. Also, take into account the expected number of concurrent queries on different tables, since they will have to share the available space.
+This parameter has no effect on the size of shared memory allocated by GaussDB(DWS). It is used only for estimation purposes and does not reserve kernel disk cache. The value is in the unit of disk page. Usually the size of each page is 8192 bytes.
+Type: USERSET
+Value range: an integer ranging from 1 to INT_MAX. The unit is 8 KB.
+A value greater than the default makes index scans more likely, and a smaller value makes sequential scans more likely.
+Default value: 128 MB
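+Assuming this parameter is effective_cache_size (its PostgreSQL counterpart; the name does not appear above, so treat this as an illustrative sketch), it can be adjusted per session:
+
+set effective_cache_size = '4GB';   -- hint to the optimizer that roughly 4 GB of disk cache is available
+show effective_cache_size;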
+Parameter description: Specifies the query optimizer's estimated cost of creating a Hash table for memory space using Hash join. This parameter is used for optimization when the Hash join estimation is inaccurate.
+Type: USERSET
+Value range: a floating point number ranging from 0 to DBL_MAX
+Default value: 0
+This section describes parameters related to the genetic query optimizer. The genetic query optimizer (GEQO) is an algorithm that plans queries by using heuristic searching. This algorithm reduces planning time for complex queries, but the plans it produces are sometimes inferior to those found by the normal exhaustive-search algorithm.
+Parameter description: Controls the use of genetic query optimization.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Generally, do not set this parameter to off. geqo_threshold provides more subtle control of GEQO.
+Parameter description: Specifies the number of FROM items above which genetic query optimization is used to plan a query.
+Type: USERSET
+Value range: an integer ranging from 2 to INT_MAX
+Default value: 12
+Parameter description: Controls the trade-off between planning time and query plan quality in GEQO.
+Type: USERSET
+Value range: an integer ranging from 1 to 10
+Default value: 5
+Parameter description: Specifies the pool size used by GEQO, that is, the number of individuals in the genetic population.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX
+The value of this parameter must be at least 2, and useful values are typically from 100 to 1000. If this parameter is set to 0, GaussDB(DWS) selects a proper value based on geqo_effort and the number of tables.
+Default value: 0
+Parameter description: Specifies the number of iterations of the algorithm used by GEQO.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX
+The value of this parameter must be at least 1, and useful values are typically from 100 to 1000. If it is set to 0, a suitable value is chosen based on geqo_pool_size.
+Default value: 0
+Parameter description: Specifies the selection bias used by GEQO. The selection bias is the selective pressure within the population.
+Type: USERSET
+Value range: a floating point number ranging from 1.5 to 2.0
+Default value: 2
+Parameter description: Specifies the initial value of the random number generator used by GEQO to select random paths through the join order search space.
+Type: USERSET
+Value range: a floating point number ranging from 0.0 to 1.0
+Varying the value changes the set of join paths explored, and may result in a better or worse path being found.
+Default value: 0
+Parameter description: Specifies the default statistics target for table columns without a column-specific target set via ALTER TABLE SET STATISTICS. A positive value indicates the number of samples used for statistics. A negative value indicates that the target is a percentage: the value is converted to its corresponding percentage, for example, -5 means 5%. During sampling, default_statistics_target x 300 is used as the size of the random sample. For example, with the default value 100, 100 x 300 pages are read in a random sample.
+Type: USERSET
+Value range: an integer ranging from -100 to 10000
+Default value: 100
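+For example, a minimal session-level sketch (store_sales is a hypothetical table):
+
+set default_statistics_target = 200;   -- take 200 samples; a negative value such as -5 would mean 5% sampling
+analyze store_sales;                   -- refresh statistics using the new target
+show default_statistics_target;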
+Parameter description: Controls the query optimizer's use of table constraints to optimize queries.
+Type: USERSET
+Value range: enumerated values
+When constraint_exclusion is set to on, the optimizer compares query conditions with the table's CHECK constraints, and omits scanning tables for which the conditions contradict the constraints.
+Default value: partition
+Currently, the default value partition applies constraint exclusion only to partitioned tables. If this parameter is set to on, extra planning is imposed on simple queries, which brings no benefits. If you have no partitioned tables, set it to off.
+Parameter description: Specifies the optimizer's estimated fraction of a cursor's rows that are retrieved.
+Type: USERSET
+Value range: a floating point number ranging from 0.0 to 1.0
+Values smaller than the default bias the optimizer towards fast-start plans for cursors, which retrieve the first few rows quickly while perhaps taking a long time to fetch all rows. Larger values put more emphasis on the total estimated time. At the maximum setting of 1.0, cursors are planned exactly like regular queries, considering only the total estimated time and not how soon the first rows might be delivered.
+Default value: 0.1
+Parameter description: Specifies the maximum number of items in the resulting FROM list for which the optimizer merges sub-queries into upper queries. The optimizer merges sub-queries into upper queries only if the resulting FROM list would have no more than this many items.
+Type: USERSET
+Value range: an integer ranging from 1 to INT_MAX
+Smaller values reduce planning time but may lead to inferior execution plans.
+Default value: 8
+Parameter description: Specifies the maximum number of items in the result list for which the optimizer rewrites explicit JOIN constructs (except FULL JOIN) into lists of FROM items.
+Type: USERSET
+Value range: an integer ranging from 1 to INT_MAX
+Default value: 8
+Parameter description: This is a commissioning parameter. Currently, it supports only OPTIMIZE_PLAN and RANDOM_PLAN. OPTIMIZE_PLAN indicates the optimal plan, the cost of which is estimated using the dynamic planning algorithm, and its value is 0. RANDOM_PLAN indicates the plan that is randomly generated. If plan_mode_seed is set to -1, you do not need to specify the value of the seed identifier. Instead, the optimizer generates a random integer ranging from 1 to 2147483647, and then generates a random execution plan based on this random number. If plan_mode_seed is set to an integer ranging from 1 to 2147483647, you need to specify the value of the seed identifier, and the optimizer generates a random execution plan based on the seed value.
+Type: USERSET
+Value range: an integer ranging from -1 to 2147483647
+Default value: 0
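+As a sketch of using this parameter to test plan robustness (the table queried is hypothetical):
+
+set plan_mode_seed = -1;   -- the optimizer picks a random seed and generates a random plan
+explain select count(*) from store_sales;
+set plan_mode_seed = 0;    -- restore the cost-optimized plan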
+Parameter description: Specifies whether the function of pushing down predicates to the native data layer is enabled.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether random queries to DNs are enabled for replication tables. Each DN stores a complete copy of the table, so queries can be served by a randomly selected DN to relieve the pressure on individual nodes.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the hash table size during the execution of the HASH AGG operation.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX/2
+Default value: 0
+Parameter description: Specifies whether code optimization can be enabled. Currently, the code optimization uses the LLVM optimization.
+Type: USERSET
+Value range: Boolean
+Currently, the LLVM optimization only supports the vectorized executor and SQL on Hadoop features. You are advised to set this parameter to off in other cases.
+Default value: on
+Parameter description: Specifies the codegen optimization strategy that is used when an expression is converted to codegen-based.
+Type: USERSET
+Value range: enumerated values
+In the scenario where query performance reduces after the codegen function is enabled, you can set this parameter to pure. In other scenarios, do not change the default value partial of this parameter.
+Default value: partial
+Parameter description: Specifies whether the LLVM IR function can be printed in logs.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: The LLVM compilation takes some time to generate executable machine code. Therefore, LLVM compilation is beneficial only when the actual execution cost is more than the sum of the code required for generating machine code and the optimized execution cost. This parameter specifies a threshold. If the estimated execution cost exceeds the threshold, LLVM optimization is performed.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX
+Default value: 10000
+Parameter description: Specifies whether the informational constraint optimization execution plan can be used for an HDFS foreign table.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the BloomFilter optimization is used.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the extrapolation logic is used for data of DATE type based on historical statistics. The logic can increase the accuracy of estimation for tables whose statistics are not collected in time, but will possibly provide an overlarge estimation due to incorrect extrapolation. Enable the logic only in scenarios where the data of DATE type is periodically inserted.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to allow automatic statistics collection for tables that have no statistics when generating a plan. Neither foreign tables nor temporary tables with the ON COMMIT [DELETE ROWS|DROP] option can trigger autoanalyze. To collect statistics on them, you need to manually perform the ANALYZE operation. If an exception occurs in the database during the execution of autoanalyze on a table, after the database is recovered, the system may still prompt you to collect the statistics of the table when you run the statement again. In this case, manually perform the ANALYZE operation on the table to synchronize statistics.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the user-defined degree of parallelism.
+Type: USERSET
+Value range: an integer ranging from -64 to 64.
+[1, 64]: Fixed SMP is enabled, and the system will use the specified degree.
+0: SMP adaptation function is enabled. The system dynamically selects the optimal parallelism degree [1,8] (x86 platforms) or [1,64] (Kunpeng platforms) for each query based on the resource usage and query plans.
+[-64, -1]: SMP adaptation is enabled, and the system will dynamically select a degree from the limited range.
+Default value: 1
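+For example, a session-level sketch:
+
+set query_dop = 4;    -- fixed SMP: queries in this session run with 4 parallel threads
+set query_dop = 0;    -- adaptive SMP: the system selects a degree per query
+set query_dop = -8;   -- adaptive SMP restricted to the range [1,8]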
+Parameter description: Specifies the DOP multiple used to adjust the optimal DOP preset in the system when query_dop is set to 0. That is, DOP = Preset DOP x query_dop_ratio (ranging from 1 to 64). If this parameter is set to 1, the DOP cannot be adjusted.
+Type: USERSET
+Value range: a floating point number ranging from 0 to 64
+Default value: 1
+Parameter description: Specifies the unified DOP allocated to the groups that use the Stream operator as the vertex in the generated execution plan when query_dop is set to 0. This parameter is used to manually specify the DOP for specific groups for performance optimization. Its format is G1,D1,G2,D2,..., where G1 and G2 indicate group IDs that can be obtained from logs, and D1 and D2 indicate the specified DOP values, which can be any positive integers.
+Type: USERSET
+Value range: a string
+Default value: empty
+This parameter is used only for internal optimization and cannot be set. You are advised to use the default value.
+Parameter description: Checks whether statistics were collected about tables whose reltuples and relpages are shown as 0 in pg_class during plan generation.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to use the Hash Agg operator for column-oriented hash table design when certain constraints are met.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to use the Hash Join operator for column-oriented hash table design when certain constraints are met.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to optimize the number of Hash Join or Hash Agg files written to disks in the sonic scenario. This parameter takes effect only when enable_sonic_hashjoin or enable_sonic_hashagg is enabled.
+Type: USERSET
+Value range: Boolean
+For the Hash Join or Hash Agg operator that meets the sonic condition, if this parameter is set to off, one file is written to disks for each column. If this parameter is set to on and the data types of different columns are similar, only one file (a maximum of five files) will be written to disks.
+Default value: on
+Parameter description: Specifies the expansion ratio used to resize the hash table during the execution of the Hash Agg and Hash Join operators.
+Type: USERSET
+Value range: a floating point number of 0 or ranging from 0.5 to 10
+Default value: 0
+Parameter description: Specifies the policy for generating an execution plan in the prepare statement.
+Type: USERSET
+Value range: enumerated values
+Default value: auto
+Parameter description: Specifies whether the query needs to be accelerated when short query acceleration is enabled.
+Type: USERSET
+Value range: an integer ranging from –1 to 1
+Default value: –1
+Parameter description: Specifies whether to print the alarm for the statement pushdown failure to the client.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the writing mode of the log files when logging_collector is set to on.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Example:
+Assume that you plan to keep logs for a period of 7 days, with one log file generated per day: log files generated on Monday are named server_log.Mon, those generated on Tuesday are named server_log.Tue (and so on), and log files generated on the same weekday in different weeks overwrite each other. To implement this plan, set log_filename to server_log.%a, log_truncate_on_rotation to on, and log_rotation_age to 1440 (indicating that the valid duration of each log file is 24 hours).
+Parameter description: Specifies the interval for creating a log file when logging_collector is set to on. If the difference between the current time and the time when the previous log file was created is greater than the value of log_rotation_age, a new log file will be generated.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 24 days. The unit is min, h, or d. 0 indicates that the time-based creation of new log files is disabled.
+Default value: 1d
+Parameter description: Specifies the maximum size of a server log file when logging_collector is set to on. If the total size of messages in a server log exceeds this capacity, a new log file will be generated.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX. The unit is KB.
+0 indicates the capacity-based creation of new log files is disabled.
+Default value: 20 MB
+Parameter description: Specifies the identifier of the GaussDB(DWS) error messages in logs when log_destination is set to eventlog.
+Type: POSTMASTER
+Value range: a string
+Default value: PostgreSQL
+Parameter description: Specifies which level of messages are sent to the client. Each level covers all the levels following it. The lower the level is, the fewer messages are sent.
+Type: USERSET
+Although client_min_messages and log_min_messages share some value names, the two parameters interpret the levels differently.
+Value range: enumerated values. Valid values: debug5, debug4, debug3, debug2, debug1, info, log, notice, warning, error. For details, see Table 1.
+Default value: notice
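+For example, to reduce the messages sent to the client in the current session (a minimal sketch):
+
+set client_min_messages = error;   -- only error messages (and above) reach the client
+show client_min_messages;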
+Parameter description: Specifies which level of messages will be written into server logs. Each level covers all the levels following it. The lower the level is, the fewer messages will be written into the log.
+Type: SUSET
+Although client_min_messages and log_min_messages share some value names, the two parameters interpret the levels differently.
+Value range: enumerated values. Valid values: debug5, debug4, debug3, debug2, debug1, info, log, notice, warning, error, fatal, panic. For details, see Table 1.
+Default value: warning
+Parameter description: Controls which SQL statements that cause an error condition are recorded in the server log.
+Type: SUSET
+Value range: enumerated type. Valid values: debug5, debug4, debug3, debug2, debug1, info, log, notice, warning, error, fatal, panic For details about the parameters, see Table 1.
+Default value: error
+Parameter description: Specifies the threshold for logging statement execution durations. Statements that run for longer than the specified value will be logged.
+This parameter helps track query statements that need to be optimized. For clients using the extended query protocol, the durations of the Parse, Bind, and Execute steps are logged independently.
+Type: SUSET
+If this parameter and log_statement are used at the same time, statements recorded based on the value of log_statement will not be logged again after their execution duration exceeds the value of this parameter. If you are not using syslog, it is recommended that you log the process ID (PID) or session ID using log_line_prefix so that you can link the current statement message to the last logged duration.
+Value range: an integer ranging from -1 to INT_MAX. The unit is millisecond.
+Default value: 30min
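+Assuming this parameter is log_min_duration_statement (its name does not appear above, so this is an illustrative sketch):
+
+set log_min_duration_statement = '10s';   -- log every statement that runs for 10 seconds or longer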
+Parameter description: Prints the function's stack information to the server's log file if the level of information generated is greater than or equal to this parameter level.
+Type: SUSET
+This parameter is used for locating on-site problems. Because frequent stack printing increases system overhead and affects stability, set this parameter to a level other than fatal or panic only while you are locating such problems.
+Value range: enumerated values. Valid values: debug5, debug4, debug3, debug2, debug1, info, log, notice, warning, error, fatal, panic. For details, see Table 1.
+Default value: panic
+Table 1 explains the message severity levels used in GaussDB(DWS). If logging output is sent to syslog or eventlog, the severity is translated in GaussDB(DWS) as shown in the table.
+
+| Severity | Description | syslog | eventlog |
+| --- | --- | --- | --- |
+| debug[1-5] | Provides detailed debug information. | DEBUG | INFORMATION |
+| log | Reports information of interest to administrators, for example, checkpoint activity. | INFO | INFORMATION |
+| info | Provides information implicitly requested by the user, for example, output from VACUUM VERBOSE. | INFO | INFORMATION |
+| notice | Provides information that might be helpful to users, for example, notice of truncation of long identifiers and indexes created as part of the primary key. | NOTICE | INFORMATION |
+| warning | Provides warnings of likely problems, for example, COMMIT outside a transaction block. | NOTICE | WARNING |
+| error | Reports an error that causes a command to terminate. | WARNING | ERROR |
+| fatal | Reports an error that causes the current session to terminate. | ERR | ERROR |
+| panic | Reports an error that causes all database sessions to terminate. | CRIT | ERROR |
Parameter description: Specifies the output interval of performance log data.
+Type: SUSET
+This parameter value is in milliseconds. You are advised to set it to a multiple of 1000, that is, to a whole number of seconds. The performance log files controlled by this parameter use the .prf name extension. These log files are stored in the $GAUSSLOG/gs_profile/<node_name> directory, where node_name is the value of pgxc_node_name in the postgres.conf file. You are advised not to use this parameter externally.
+Value range: an integer ranging from 0 to INT_MAX. The unit is millisecond (ms).
+Default value: 3s
+Parameter description: Specifies whether to print the query parse tree.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to print query rewriting results.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to print the query execution plan.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to indent the logs produced by debug_print_parse, debug_print_rewritten, and debug_print_plan. The indented output format is more readable but much longer than the output generated when this parameter is set to off.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the statistics on the checkpoints and restart points are recorded in the server logs. When this parameter is set to on, statistics on checkpoints and restart points are recorded in the log messages, including the number of buffers to be written and the time spent in writing them.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to record connection request information of the client.
+Type: BACKEND
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to record client connection termination information.
+Type: BACKEND
+Value range: Boolean
+Default value: off
+Session connection parameter. Users are not advised to configure this parameter.
+Parameter description: Specifies whether to record the duration of every completed SQL statement. For clients using the extended query protocol, the durations of the Parse, Bind, and Execute steps are logged independently.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the amount of detail written in the server log for each message that is logged.
+Type: SUSET
+Value range: enumerated values
+Default value: default
+Parameter description: By default, connection log messages show only the IP address of the connected host. The host name can also be recorded when this parameter is set to on. Resolving the host name may take some time, so database performance may be affected.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to log a message when a session waits for a lock longer than deadlock_timeout. This is useful in determining whether lock waits are causing poor performance.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to record SQL statements. For clients using extended query protocols, logging occurs when an execute message is received, and values of the Bind parameters are included (with any embedded single quotation marks doubled).
+Type: SUSET
+Statements that contain simple syntax errors are not logged even if log_statement is set to all, because the log message is emitted only after basic parsing has been completed to determine the statement type. If the extended query protocol is used, this setting also does not log statements before the execution phase (during parse analysis or planning). Set log_min_error_statement to ERROR or lower to log such statements.
+Value range: enumerated values
+Default value: none
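+For example (a sketch; ddl is one of the typical enumerated values, as in PostgreSQL):
+
+set log_statement = 'ddl';   -- log CREATE, ALTER, DROP, and other DDL statements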
+Parameter description: Controls the logging of temporary file deletions. Temporary files can be created for sorting, hashing, and temporary query results. A log entry is generated for each temporary file no smaller than the specified size when it is deleted; the value -1 disables such logging.
+Type: SUSET
+Value range: an integer ranging from -1 to INT_MAX. The unit is KB.
+Default value: –1
+Parameter description: Specifies the time zone used for time stamps written in the server log. Different from TimeZone, this parameter takes effect for all sessions in the database.
+Type: SIGHUP
+Value range: a string
+Default value: PRC
+The value can be changed when gs_initdb is used to set system environments.
+Parameter description: Specifies whether module logs can be output on the server. This parameter is a session-level parameter, and you are not advised to use the gs_guc tool to set it.
+Type: USERSET
+Value range: a string
+Default value: off. All the module logs on the server can be viewed by running show logging_module.
+Setting method: First, you can run show logging_module to view which module is controllable. For example, the query output result is as follows:
+show logging_module;
+ logging_module
+----------------------------------------------------------------------
+ ALL,on(),off(DFS,GUC,HDFS,ORC,SLRU,MEM_CTL,AUTOVAC,CACHE,ADIO,SSL,GDS,TBLSPC,WLM,OBS,EXECUTOR,VEC_EXECUTOR,STREAM,LLVM,OPT,OPT_REWRITE,OPT_JOIN,OPT_AGG,OPT_SUBPLAN,OPT_SETOP,OPT_SKEW,UDF,COOP_ANALYZE,WLMCP,ACCELERATE,PLANHINT,PARQUET,CARBONDATA,SNAPSHOT,XACT,HANDLE,CLOG,EC,REMOTE,CN_RETRY,PLSQL,TEXTSEARCH,SEQ,INSTR,COMM_IPC,COMM_PARAM)
+(1 row)
+Controllable modules are identified by uppercase letters, and the special ID ALL is used to set all module logs at once. You can control which module logs are output by setting modules to on or off. For example, to enable log output for the SSL module:
+set logging_module='on(SSL)';
+SET
+show logging_module;
+ logging_module
+----------------------------------------------------------------------
+ ALL,on(SSL),off(DFS,GUC,HDFS,ORC,SLRU,MEM_CTL,AUTOVAC,CACHE,ADIO,GDS,TBLSPC,WLM,OBS,EXECUTOR,VEC_EXECUTOR,STREAM,LLVM,OPT,OPT_REWRITE,OPT_JOIN,OPT_AGG,OPT_SUBPLAN,OPT_SETOP,OPT_CARD,OPT_SKEW,UDF,COOP_ANALYZE,WLMCP,ACCELERATE,PLANHINT,PARQUET,CARBONDATA,SNAPSHOT,XACT,HANDLE,CLOG,TQUAL,EC,REMOTE,CN_RETRY,PLSQL,TEXTSEARCH,SEQ,INSTR,COMM_IPC,COMM_PARAM,CSTORE)
+(1 row)
+
+SSL log output is now enabled.
+The ALL identifier is equivalent to a shortcut operation. That is, logs of all modules can be enabled or disabled.
+set logging_module='off(ALL)';
+SET
+show logging_module;
+ logging_module
+----------------------------------------------------------------------
+ ALL,on(),off(DFS,GUC,HDFS,ORC,SLRU,MEM_CTL,AUTOVAC,CACHE,ADIO,SSL,GDS,TBLSPC,WLM,OBS,EXECUTOR,VEC_EXECUTOR,STREAM,LLVM,OPT,OPT_REWRITE,OPT_JOIN,OPT_AGG,OPT_SUBPLAN,OPT_SETOP,OPT_CARD,OPT_SKEW,UDF,COOP_ANALYZE,WLMCP,ACCELERATE,PLANHINT,PARQUET,CARBONDATA,SNAPSHOT,XACT,HANDLE,CLOG,TQUAL,EC,REMOTE,CN_RETRY,PLSQL,TEXTSEARCH,SEQ,INSTR,COMM_IPC,COMM_PARAM,CSTORE)
+(1 row)
+
+set logging_module='on(ALL)';
+SET
+show logging_module;
+ logging_module
+----------------------------------------------------------------------
+ ALL,on(DFS,GUC,HDFS,ORC,SLRU,MEM_CTL,AUTOVAC,CACHE,ADIO,SSL,GDS,TBLSPC,WLM,OBS,EXECUTOR,VEC_EXECUTOR,STREAM,LLVM,OPT,OPT_REWRITE,OPT_JOIN,OPT_AGG,OPT_SUBPLAN,OPT_SETOP,OPT_CARD,OPT_SKEW,UDF,COOP_ANALYZE,WLMCP,ACCELERATE,PLANHINT,PARQUET,CARBONDATA,SNAPSHOT,XACT,HANDLE,CLOG,TQUAL,EC,REMOTE,CN_RETRY,PLSQL,TEXTSEARCH,SEQ,INSTR,COMM_IPC,COMM_PARAM,CSTORE),off()
+(1 row)
Dependency relationship: The value of this parameter depends on the settings of log_min_messages.
+Parameter description: Specifies whether to log statements that are not pushed down. The logs help locate performance issues that may be caused by statements not pushed down.
+Type: SUSET
+Value range: Boolean
+Default value: on
+During cluster running, error scenarios can be detected in a timely manner to inform users as soon as possible.
+Parameter description: Enables the alarm detection thread to detect the fault scenarios that may occur in the database.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the ratio used to restrict the maximum number of concurrent connections to the database. The maximum number of concurrent connections is max_connections x connection_alarm_rate.
+Type: SIGHUP
+Value range: a floating point number ranging from 0.0 to 1.0
+Default value: 0.9
+Parameter description: Specifies the interval at which an alarm is reported.
+Type: SIGHUP
+Value range: a non-negative integer. The unit is second.
+Default value: 10
+The query and index statistics collector is used to collect statistics during database running. The statistics include the times of inserting and updating a table and an index, the number of disk blocks and tuples, and the time required for the last cleanup and analysis on each table. The statistics can be viewed by querying system view families pg_stats and pg_statistic. The following parameters are used to set the statistics collection feature in the server scope.
+Parameter description: Collects statistics about the commands that are being executed in each session.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Collects statistics about data activities.
+Type: SUSET
+Value range: Boolean
+The autovacuum process requires database statistics to select the database to be cleaned up. Therefore, the default value is on.
+Default value: on
+Parameter description: Collects timing statistics about I/O calls in the database. The I/O timing statistics can be queried in the pg_stat_database view.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Collects statistics about invoking times and duration in a function.
+Type: SUSET
+SQL functions that are simple enough to be inlined into the calling query cannot be tracked, regardless of this setting.
+Value range: enumerated values
+Default value: none
+Parameter description: Specifies the number of bytes reserved to track the command currently being executed by each active session.
+Type: POSTMASTER
+Value range: an integer ranging from 100 to 102400
+Default value: 1024
+Parameter description: Updates the process name each time the server receives a new SQL statement.
+The process name can be viewed by running the ps command or, on Windows, in the task manager.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the interval of collecting the thread status information periodically.
+Type: SUSET
+Value range: an integer ranging from 0 to 1440. The unit is minute (min).
+Default value: 30min
+Parameter description: Specifies whether to record the time when INSERT, UPDATE, DELETE, or EXCHANGE/TRUNCATE/DROP PARTITION is performed on table data.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to collect Unique SQL statements and the maximum number of unique SQL statements that can be collected.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX
+Default value: 0
+If a new value is loaded using reload and the new value is less than the original value, the Unique SQL statistics collected by the corresponding CN will be cleared. Note that the clearing operation is performed by the background thread of the resource management module. If the GUC parameter use_workload_manager is set to off, the clearing operation may fail. In this case, you can use the reset_instr_unique_sql function for clearing.
+Parameter description: Specifies whether to collect statistics on the number of the SELECT, INSERT, UPDATE, DELETE, and MERGE INTO statements that are being executed in each session, the response time of the SELECT, INSERT, UPDATE, and DELETE statements, and the number of DDL, DML, and DCL statements.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to collect statistics on waiting events, including the number of occurrence times, number of failures, duration, maximum waiting time, minimum waiting time, and average waiting time.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to enable the performance view snapshot function. After this function is enabled, GaussDB(DWS) will periodically create snapshots for some system performance views and save them permanently. In addition, it will accept manual snapshot creation requests.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the interval for automatically creating performance view snapshots.
+Type: SIGHUP
+Value range: an integer ranging from 10 to 180, in minutes
+Default value: 60
+Parameter description: Specifies the maximum number of days for storing performance snapshot data.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 15 days
+Default value: 8
+During database running, operations such as lock access, disk I/O, and invalid message processing are involved. All these operations can become database performance bottlenecks. The performance statistics provided by GaussDB(DWS) can facilitate locating performance faults.
+Parameter description: For each query, the following four parameters control the performance statistics of corresponding modules recorded in the server log:
+All these parameters only provide auxiliary analysis for administrators, similar to the getrusage() function of the Linux OS.
+Type: SUSET
+Value range: Boolean
+Default value: off
+If database resource usage is not controlled, concurrent tasks easily preempt resources. As a result, the OS will be overloaded and unable to respond to user tasks, or may even crash and stop providing services. The GaussDB(DWS) workload management function balances the database workload based on available resources to avoid database overloading.
+Parameter description: Specifies whether to enable the resource management function. This parameter must be applied on both CNs and DNs.
+Type: SIGHUP
+Value range: Boolean
+select gs_wlm_readjust_user_space(0);
Default value: on
+Parameter description: Specifies whether to enable the Cgroup management function. This parameter must be applied on both CNs and DNs.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+If a method in Setting GUC Parameters is used to change the parameter value, the new value takes effect only for the threads that are started after the change. In addition, the new value does not take effect for new jobs that are executed by backend threads and reused threads. You can make the new value take effect for these threads by using kill session or restarting the node.
+Parameter description: Specifies whether to bind database permanent threads to the DefaultBackend Cgroup. This parameter must be applied on both CNs and DNs.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to bind the database permanent thread autoVacuumWorker to the Vacuum Cgroup. This parameter must be applied on both CNs and DNs.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to enable the perm space function. This parameter must be applied on both CNs and DNs.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to enable the background calibration function in static adaptive load scenarios. This parameter must be used on CNs.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the maximum global concurrency. This parameter applies to one CN.
+The database administrator should adjust the value of this parameter based on system resources (for example, CPU, I/O, and memory resources) so that the system fully supports concurrent tasks while avoiding the overload that too many concurrent tasks could cause.
+Type: SIGHUP
+Value range: an integer ranging from -1 to INT_MAX. The values -1 and 0 indicate that the number of concurrent requests is not limited.
+Default value: 60
+Parameter description: Specifies the minimum execution cost of a statement under the concurrency control of a resource pool.
+Type: SIGHUP
+Value range: an integer ranging from –1 to INT_MAX
+Default value: 100000
+Parameter description: Specifies the name of the Cgroup in use. It can be used to change the priorities of jobs in the queue of a Cgroup.
+If you set cgroup_name and then session_respool, the Cgroups associated with session_respool take effect. If you reverse the order, Cgroups associated with cgroup_name take effect.
+If the Workload Cgroup level is specified during the cgroup_name change, the database does not check the Cgroup level. The level ranges from 1 to 10.
+Type: USERSET
+You are not advised to set cgroup_name and session_respool at the same time.
+Value range: a string
+Default value: DefaultClass:Medium
+DefaultClass:Medium indicates the Medium Cgroup belonging to the Timeshare Cgroup under the DefaultClass Cgroup.
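+For example, a minimal sketch (assuming a Rush Cgroup exists under DefaultClass):
+
+set cgroup_name = 'DefaultClass:Rush';   -- run subsequent jobs in this session under the Rush Cgroup
+show cgroup_name;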
+Parameter description: Specifies how frequently CPU data is collected during statement execution on DNs.
+The database administrator should adjust the value of this parameter based on system resources (for example, CPU, I/O, and memory resources) so that collection is frequent enough to be useful without adding noticeable load.
+Type: SIGHUP
+Value range: an integer ranging from -1 to INT_MAX. The unit is second.
+Default value: 30
+Parameter description: Specifies whether the database automatically switches to the TopWD group when executing statements by group type.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the memory information recording mode.
+Type: USERSET
+Value range:
+Default value: none
+Parameter description: Specifies the sequence number of the memory allocation information to be tracked in the target thread and the plannodeid of the query to which the current thread belongs.
+Type: USERSET
+Value range: a string
+Default value: empty
+It is recommended that you retain the default value for this parameter.
+Parameter description: Specifies whether the real-time resource monitoring function is enabled. This parameter must be applied on both CNs and DNs.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether resource monitoring records are archived. If this parameter is set to on, records in the history views (GS_WLM_SESSION_HISTORY and GS_WLM_OPERATOR_HISTORY) are archived to the corresponding info views (GS_WLM_SESSION_INFO and GS_WLM_OPERATOR_INFO) at an interval of 3 minutes. After being archived, the records are deleted from the history views. This parameter must be applied on both CNs and DNs.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether the user historical resource monitoring dumping function is enabled. If this function is enabled, data in view PG_TOTAL_USER_RESOURCE_INFO is periodically sampled and saved to system catalog GS_WLM_USER_RESOURCE_HISTORY.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the retention time of the user historical resource monitoring data. This parameter is valid only when enable_user_metric_persistent is set to on.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 3650. The unit is day.
+Default value: 7
+Parameter description: Specifies whether the instance resource monitoring dumping function is enabled. When this function is enabled, the instance monitoring data is saved to the system catalog GS_WLM_INSTANCE_HISTORY.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the retention time of the instance historical resource monitoring data. This parameter is valid only when enable_instance_metric_persistent is set to on.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 3650. The unit is day.
+Default value: 7
+Parameter description: Specifies the resource monitoring level of the current session. This parameter is valid only when enable_resource_track is set to on.
+Type: USERSET
+Value range: enumerated values
+Default value: query
+Parameter description: Specifies the minimum execution cost for resource monitoring on statements in the current session. This parameter is valid only when enable_resource_track is set to on.
+Type: USERSET
+Value range: an integer ranging from -1 to INT_MAX
+Default value: 100000
+Parameter description: Specifies the minimum statement execution time that determines whether information about jobs of a statement recorded in the real-time view (see Table 1) will be dumped to a historical view after the statement is executed. Job information will be dumped from the real-time view (with the suffix statistics) to a historical view (with the suffix history) if the statement execution time is no less than this value.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX. The unit is second (s).
+Default value: 1min
+Parameter description: Specifies the memory quota in adaptive load scenarios, that is, the proportion of maximum available memory to total system memory.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 100
+Default value: 80
+Parameter description: Stops memory protection. To query system views when system memory is insufficient, set this parameter to on to stop memory protection. This parameter is used only to diagnose and debug the system when system memory is insufficient. Set it to off in other scenarios.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the job type of the current session.
+Type: USERSET
+Value range: a string
+Default value: empty
+Parameter description: Specifies whether the black box function is enabled. Core files can be generated even if the core dump mechanism is not configured in the system. This parameter must be applied on both CNs and DNs.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to enable the dynamic workload management function.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the maximum number of core files that are generated by GaussDB(DWS) and can be stored in the path specified by bbox_dump_path. If the number of core files exceeds this value, old core files will be deleted. This parameter is valid only if enable_bbox_dump is set to on.
+Type: USERSET
+Value range: an integer ranging from 1 to 20
+Default value: 8
+When core files are generated during concurrent SQL statement execution, the number of files may be larger than the value of bbox_dump_count.
+Parameter description: Specifies the upper limit of IOPS triggered.
+Type: USERSET
+Value range: an integer ranging from 0 to 1073741823
+Default value: 0
+Parameter description: Specifies the I/O priority for jobs that consume many I/O resources. It takes effect when the I/O usage reaches 90%.
+Type: USERSET
+Value range: enumerated values
+Default value: None
+Parameter description: Specifies the resource pool associated with the current session.
+Type: USERSET
+If you set cgroup_name and then session_respool, the Cgroups associated with session_respool take effect. If you reverse the order, Cgroups associated with cgroup_name take effect.
+If the Workload Cgroup level is specified during the cgroup_name change, the database does not check the Cgroup level. The level ranges from 1 to 10.
+You are not advised to set cgroup_name and session_respool at the same time.
+Value range: a string. This parameter can be set to the resource pool configured through create resource pool.
+Default value: invalid_pool
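+For example, a minimal sketch (demo_pool is a hypothetical resource pool):
+
+create resource pool demo_pool;      -- requires appropriate privileges
+set session_respool = 'demo_pool';   -- associate the current session with the pool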
+Parameter description: Specifies whether to control transaction block statements and stored procedure statements.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the memory size of a real-time query view.
+Type: SIGHUP
+Value range: an integer ranging from 5 MB to 50% of max_process_memory
+Default value: 5 MB
+Parameter description: Specifies the memory size of a historical query view.
+Type: SIGHUP
+Value range: an integer ranging from 10 MB to 50% of max_process_memory
+Default value: 100 MB
+Parameter description: Specifies the retention period of historical TopSQL data in the gs_wlm_session_info and gs_wlm_operator_info tables.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 3650. The unit is day.
+Default value: 0
+Before setting this GUC parameter to enable the data retention function, delete data from the gs_wlm_session_info and gs_wlm_operator_info tables.
+Parameter description: Specifies the maximum queuing time of transaction block statements and stored procedure statements when enable_transaction_parctl is set to on.
+Type: USERSET
+Value range: an integer ranging from –1 to INT_MAX. The unit is second (s).
+Default value: 0
+This parameter is valid only for internal statements of stored procedures and transaction blocks. That is, this parameter takes effect only for the statements whose enqueue value (for details, see PG_SESSION_WLMSTAT) is Transaction or StoredProc.
+Parameter description: Specifies whitelisted SQL statements for resource management. Whitelisted SQL statements are not monitored by resource management.
+Type: SIGHUP
+Value range: a string
+Default value: empty
+The automatic cleanup process (autovacuum) in the system automatically runs the VACUUM and ANALYZE commands to reclaim the space occupied by records marked as deleted and to update statistics in the table.
+Parameter description: Enables the automatic cleanup process (autovacuum) in the database. Ensure that the track_counts parameter is set to on before enabling the automatic cleanup process.
+Type: SIGHUP
+Even if the autovacuum parameter is set to off, the automatic cleanup process will be started automatically by the database when transaction ID wraparound is about to occur. When a create database or drop database operation fails, the transaction may have been committed or rolled back on some nodes while remaining in the prepared state on others. In this case, the system cannot restore these nodes automatically and manual restoration is required. The restoration steps are as follows:
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether the autoanalyze or autovacuum function is enabled. This parameter is valid only when autovacuum is set to on.
+Type: SIGHUP
+Value range: enumerated values
+Default value: mix
+Parameter description: Specifies the timeout period of autoanalyze. If the duration of autoanalyze on a table exceeds the value of autoanalyze_timeout, the autoanalyze is automatically canceled.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 2147483. The unit is second.
+Default value: 5min
+Parameter description: Specifies the upper limit of I/Os triggered by the autovacuum process per second.
+Type: SIGHUP
+Value range: an integer ranging from –1 to 1073741823. –1 indicates that the default Cgroup is used.
+Default value: –1
+Parameter description: Records each step performed by the automatic cleanup process to the server log when the execution time of the automatic cleanup process is greater than or equal to a certain value. This parameter helps track the automatic cleanup behaviors.
+Type: SIGHUP
+For example, if log_autovacuum_min_duration is set to 250 ms, information about all automatic cleanup commands that run for 250 ms or longer is recorded.
+Value range: an integer ranging from –1 to INT_MAX. The unit is ms.
+Default value: –1
+Parameter description: Specifies the maximum number of automatic cleanup threads running at the same time.
+Type: POSTMASTER
+Value range: an integer ranging from 0 to 262143. 0 indicates that autovacuum is disabled.
+Default value: 3
+Parameter description: Specifies the interval between two automatic cleanup operations.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 2147483. The unit is second.
+Default value: 10min
+Parameter description: Specifies the threshold for triggering the VACUUM operation. When the number of deleted or updated records in a table exceeds the specified threshold, the VACUUM operation is executed on this table.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX
+Default value: 50
+Parameter description: Specifies the threshold for triggering the ANALYZE operation. When the number of deleted, inserted, or updated records in a table exceeds the specified threshold, the ANALYZE operation is executed on this table.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX
+Default value: 50
+Parameter description: Specifies the fraction of the table size that is added to the autovacuum_vacuum_threshold parameter when determining whether to trigger a VACUUM operation.
+Type: SIGHUP
+Value range: a floating point number ranging from 0.0 to 100.0
+Default value: 0.2
+Parameter description: Specifies the fraction of the table size that is added to the autovacuum_analyze_threshold parameter when determining whether to trigger an ANALYZE operation.
+Type: SIGHUP
+Value range: a floating point number ranging from 0.0 to 100.0
+Default value: 0.1
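+As a worked example of how these thresholds combine (using the defaults above): for a table with 10,000 rows, VACUUM is triggered once the number of deleted or updated records exceeds 50 + 0.2 x 10,000 = 2,050, and ANALYZE once the number of changed records exceeds 50 + 0.1 x 10,000 = 1,050.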
+Parameter description: Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid column can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table.
+The VACUUM operation also deletes old files from the pg_clog/ subdirectory. Even if the automatic cleanup process is disabled, the system will invoke it when this age is about to be reached, to prevent transaction ID wraparound.
+Type: POSTMASTER
+Value range: an integer ranging from 100000 to 576460752303423487
+Default value: 20000000000
+Parameter description: Specifies the value of the cost delay used in the autovacuum operation.
+Type: SIGHUP
+Value range: an integer ranging from –1 to 100. The unit is ms. -1 indicates that the normal vacuum cost delay is used.
+Default value: 20ms
+Parameter description: Specifies the value of the cost limit used in the autovacuum operation.
+Type: SIGHUP
+Value range: an integer ranging from –1 to 10000. -1 indicates that the normal vacuum cost limit is used.
+Default value: –1
+This section describes related default parameters involved in the execution of SQL statements.
+Parameter description: Specifies the order in which schemas are searched when an object is referenced with no schema specified. The value of this parameter consists of one or more schema names. Different schema names are separated by commas (,).
+Type: USERSET
+Value range: a string
+Default value: "$user",public
+$user indicates the name of the schema with the same name as the current session user. If the schema does not exist, $user will be ignored.
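+For example, a minimal sketch (myschema is a hypothetical schema name):
+SET search_path TO myschema, public; -- resolve unqualified names in myschema first, then public
+SHOW search_path;                    -- verify the current setting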
+Parameter description: Specifies the current schema.
+Type: USERSET
+Value range: a string
+Default value: "$user",public
+$user indicates the name of the schema with the same name as the current session user. If the schema does not exist, $user will be ignored.
+Parameter description: Specifies the default tablespace of the created objects (tables and indexes) when a CREATE command does not explicitly specify a tablespace.
+Type: USERSET
+Value range: a string. An empty string indicates that the default tablespace is used.
+Default value: empty
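+For example, a minimal sketch (tbs1 is a hypothetical tablespace name):
+SET default_tablespace = 'tbs1'; -- subsequent CREATE statements use tbs1 unless a tablespace is specified
+CREATE TABLE t_demo (id int);    -- created in tbs1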
+Parameter description: Specifies the Node Group where a table is created by default. This parameter takes effect only for ordinary tables.
+Type: USERSET
+Value range: a string
+Default value: installation
+Parameter description: Sets the storage format version of the column-store table that is created by default.
+Type: SIGHUP
+Value range: enumerated values
+Default value: 2.0
+Parameter description: Specifies the tablespaces in which temporary objects (temporary tables and their indexes) are created when a CREATE command does not explicitly specify a tablespace. Temporary files for purposes such as sorting large data sets are also created in these tablespaces.
+The value of this parameter is a list of tablespace names. When the list contains more than one name, GaussDB(DWS) chooses a random tablespace from the list each time a temporary object is created; within a transaction, however, successively created temporary objects are placed in successive tablespaces from the list. If the selected element is an empty string, GaussDB(DWS) automatically uses the default tablespace of the current database instead.
+Type: USERSET
+Value range: a string. An empty string indicates that all temporary objects are created only in the default tablespace of the current database. For details, see default_tablespace.
+Default value: empty
+Parameter description: Specifies whether to enable validation of the function body string during the execution of CREATE FUNCTION. Verification is occasionally disabled to avoid problems, such as forward references when you restore function definitions from a dump.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the default isolation level of each transaction.
+Type: USERSET
+Value range: enumerated values
+Default value: READ COMMITTED
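+For example, a minimal sketch, assuming the standard default_transaction_isolation parameter name (the name is not stated above, so treat it as an assumption):
+SET default_transaction_isolation TO 'read committed'; -- new transactions start at READ COMMITTED
+SHOW default_transaction_isolation;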
+Parameter description: Specifies whether each new transaction is in read-only state.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the default delaying state of each new transaction. It currently has no effect on read-only transactions or those running at isolation levels lower than serializable.
+GaussDB(DWS) does not support the serializable isolation level, so this parameter has no practical effect.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the behavior of replication-related triggers and rules for the current session.
+Type: USERSET
+Setting this parameter will discard all the cached execution plans.
+Value range: enumerated values
+Default value: origin
+Parameter description: Specifies the statement timeout. If a statement runs longer than the specified duration (timed from when the server receives the command), an error is reported and the statement is terminated.
+Type: USERSET
+Value range: an integer ranging from 0 to 2147483647. The unit is ms.
+Default value: 0
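+For example, a minimal sketch, assuming the standard statement_timeout parameter name (the name is not stated above, so treat it as an assumption):
+SET statement_timeout = 30000; -- cancel any statement that runs longer than 30 seconds (unit: ms)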
+Parameter description: Specifies the minimum cutoff age (in transactions) based on which VACUUM decides whether to replace transaction IDs with FrozenXID while scanning a table.
+Type: USERSET
+Value range: an integer from 0 to 576460752303423487.
+Although you can set this parameter to a value ranging from 0 to 1000000000 anytime, VACUUM will limit the effective value to half the value of autovacuum_freeze_max_age by default.
+Default value: 5000000000
+Parameter description: Specifies the age (in transactions) at which VACUUM scans the whole table to freeze tuples. VACUUM performs a whole-table scan if the age of the table's pg_class.relfrozenxid column has reached this value.
+Type: USERSET
+Value range: an integer from 0 to 576460752303423487.
+Although users can set this parameter to a value ranging from 0 to 2000000000 anytime, VACUUM will limit the effective value to 95% of autovacuum_freeze_max_age by default. Therefore, a periodic manual VACUUM has a chance to run before an anti-wraparound autovacuum is launched for the table.
+Default value: 15000000000
+Parameter description: Specifies the output format for values of the bytea type.
+Type: USERSET
+Value range: enumerated values
+Default value: hex
+Parameter description: Specifies how binary values are to be encoded in XML.
+Type: USERSET
+Value range: enumerated values
+Default value: base64
+Parameter description: Specifies whether DOCUMENT or CONTENT is implicit when converting between XML and string values.
+Type: USERSET
+Value range: enumerated values
+Default value: content
+Parameter description: Specifies the maximum number of function compilation results stored in the server. Storing excessive functions and compilation results may occupy a large amount of memory. Setting this parameter to a proper value reduces memory usage and improves system performance.
+Type: POSTMASTER
+Value range: an integer ranging from 1 to INT_MAX
+Default value: 1000
+Parameter description: Specifies the maximum size of the GIN pending list which is used when fastupdate is enabled. If the list grows larger than this maximum size, it is cleaned up by moving the entries in it to the main GIN data structure in batches. This setting can be overridden for individual GIN indexes by modifying index storage parameters.
+Type: USERSET
+Value range: an integer ranging from 64 to INT_MAX. The unit is KB.
+Default value: 4 MB
+This section describes parameters related to the time format setting.
+Parameter description: Specifies the display format for date and time values, as well as the rules for interpreting ambiguous date input values.
+This variable contains two independent components: the output format specifications (ISO, Postgres, SQL, or German) and the input/output order of year/month/day (DMY, MDY, or YMD). The two components can be set separately or together. The keywords Euro and European are synonyms for DMY; the keywords US, NonEuro, and NonEuropean are synonyms for MDY.
+Type: USERSET
+Value range: a string
+Default value: ISO, MDY
+Suggestion: The ISO format is recommended. Postgres, SQL, and German use abbreviations for time zones, such as EST, WST, and CST. These abbreviations can be ambiguous. For example, CST can represent Central Standard Time (USA) UT-6:00, Central Standard Time (Australia) UT+9:30, and others. This may lead to incorrect time zone conversion and cause errors.
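+For example, a minimal sketch of setting the two components together:
+SET datestyle = 'ISO, MDY'; -- ISO output format; ambiguous input read as month/day/year
+SELECT date '01/02/2024';   -- interpreted as January 2, 2024 under MDY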
+Parameter description: Specifies the display format for interval values.
+Type: USERSET
+Value range: enumerated values
+The IntervalStyle parameter also affects the interpretation of ambiguous interval input.
+Default value: postgres
+Parameter description: Specifies the time zone for displaying and interpreting time stamps.
+Type: USERSET
+Value range: a string. You can obtain it by querying the pg_timezone_names view.
+Default value: PRC
+gs_initdb will set a time zone value that is consistent with the system environment.
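+For example, a minimal sketch using the view named above:
+SELECT name FROM pg_timezone_names WHERE name LIKE 'Asia/%' LIMIT 5; -- list candidate time zone names
+SET timezone = 'PRC';                                                -- match the default above
+SELECT now();                                                        -- timestamps now display in PRC time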
+Parameter description: Specifies the time zone abbreviations that will be accepted by the server.
+Type: USERSET
+Value range: a string. You can obtain it by querying the pg_timezone_names view.
+Default value: Default
+Default indicates an abbreviation set that works in most of the world. Other sets, such as Australia and India, can also be defined for a particular installation.
+Parameter description: Specifies the number of digits displayed for floating-point values, including float4, float8, and geometric data types. The parameter value is added to the standard number of digits (FLT_DIG or DBL_DIG as appropriate).
+Type: USERSET
+Value range: an integer ranging from –15 to 3
+Default value: 0
+Parameter description: Specifies the client-side encoding type (character set).
+Set this parameter as needed. Keep the client encoding consistent with the server encoding where possible to improve efficiency.
+Type: USERSET
+Value range: encoding compatible with PostgreSQL. UTF8 indicates that the database encoding is used.
+Default value: UTF8
+Recommended value: SQL_ASCII or UTF8
+Parameter description: Specifies the language in which messages are displayed.
+Valid values depend on the current system. On some systems, this locale category does not exist; setting this variable will still work, but have no effect. In addition, translated messages for the desired language may not exist, in which case English messages are still displayed.
+Type: SUSET
+Value range: a string
+Default value: C
+Parameter description: Specifies the display format of monetary values. It affects the output of functions such as to_char. Valid values depend on the current system.
+Type: USERSET
+Value range: a string
+Default value: C
+Parameter description: Specifies the display format of numbers. It affects the output of functions such as to_char. Valid values depend on the current system.
+Type: USERSET
+Value range: a string
+Default value: C
+Parameter description: Specifies the display format of time and zones. It affects the output of functions such as to_char. Valid values depend on the current system.
+Type: USERSET
+Value range: a string
+Default value: C
+Parameter description: Specifies the text search configuration.
+If the specified text search configuration does not exist, an error will be reported. If the specified text search configuration is deleted, set default_text_search_config again. Otherwise, an error will be reported, indicating incorrect configuration.
+Type: USERSET
+Value range: a string
+GaussDB(DWS) supports the following two configurations: pg_catalog.english and pg_catalog.simple.
+Default value: pg_catalog.english
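+For example, a minimal sketch using one of the two supported configurations:
+SET default_text_search_config = 'pg_catalog.english';
+SELECT to_tsvector('The quick brown fox'); -- parsed with the english configuration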
+This section describes the default database loading parameters of the database system.
+Parameter description: Specifies the path for saving the shared database files that are dynamically loaded for data searching. When a dynamically loaded module needs to be opened and the file name specified in the CREATE FUNCTION or LOAD command does not have a directory component, the system will search this path for the required file.
+For example: dynamic_library_path = '/usr/local/lib/postgresql:/opt/testgs/lib:$libdir'
+Type: SUSET
+Value range: a string
+If the value of this parameter is set to an empty character string, the automatic path search is turned off.
+Default value: $libdir
+Parameter description: Specifies the upper limit of the size of the set returned by GIN indexes.
+Type: USERSET
+Value range: an integer ranging from 0 to INT_MAX. The value 0 indicates no limit.
+Default value: 0
+In GaussDB(DWS), a deadlock may occur when concurrently executed transactions compete for resources. This section describes parameters used for managing transaction lock mechanisms.
+Parameter description: Specifies the time, in milliseconds, to wait on a lock before checking whether there is a deadlock condition. When the applied lock exceeds the preset value, the system will check whether a deadlock occurs.
+Type: SUSET
+Value range: an integer ranging from 1 to 2147483647. The unit is millisecond (ms).
+Default value: 1s
+Parameter description: Specifies the longest time to wait before a single lock times out. If the time you wait before acquiring a lock exceeds the specified time, an error is reported.
+Type: SUSET
+Value range: an integer ranging from 0 to INT_MAX. The unit is millisecond (ms).
+Default value: 20 min
+Parameter description: Specifies the maximum duration that a lock waits for concurrent updates on a row to complete when the concurrent update feature is enabled. If the time you wait before acquiring a lock exceeds the specified time, an error is reported.
+Type: SUSET
+Value range: an integer ranging from 0 to INT_MAX. The unit is millisecond (ms).
+Default value: 2min
+Parameter description: Controls the average number of object locks allocated for each transaction.
+Type: POSTMASTER
+Value range: an integer ranging from 10 to INT_MAX
+Default value: 256
+Parameter description: Controls the average number of predicate locks allocated for each transaction.
+Type: POSTMASTER
+Value range: an integer ranging from 10 to INT_MAX
+Default value: 64
+Parameter description: Specifies the time to wait before the attempt of a lock upgrade from ExclusiveLock to AccessExclusiveLock times out on partitions.
+Type: USERSET
+Value range: an integer ranging from -1 to 3000. The unit is second (s).
+Default value: 1800
+Parameter description: Specifies whether to block DDL operations to wait for the release of cluster locks, such as pg_advisory_lock and pgxc_lock_for_backup. This parameter is mainly used in online OM operations and you are not advised to modify the settings.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+This section describes parameters that control the backward compatibility and external compatibility features of GaussDB(DWS). Backward compatibility provides support for applications written for earlier database versions.
+Parameter description: Determines whether the array input parser recognizes unquoted NULL as a null array element.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Determines whether a single quotation mark can be represented by \' in a string text.
+Type: USERSET
+When the string text meets the SQL standards, \ has no other meanings. This parameter only affects the handling of non-standard-conforming string texts, including escape string syntax (E'...').
+Value range: enumerated values
+Default value: safe_encoding
+Parameter description: Determines whether CREATE TABLE and CREATE TABLE AS include an OID field in newly-created tables if neither WITH OIDS nor WITHOUT OIDS is specified. It also determines whether OIDs will be included in tables created by SELECT INTO.
+It is not recommended that OIDs be used in user tables. Therefore, this parameter is set to off by default. When OIDs are required for a particular table, WITH OIDS needs to be specified during the table creation.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to issue a warning when a backslash (\) is used directly as an escape in an ordinary string literal.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Determines whether to enable backward compatibility for the privilege check of large objects.
+Type: SUSET
+Value range: Boolean
+on indicates that the privilege check is disabled when users read or modify large objects. This setting is compatible with versions earlier than PostgreSQL 9.0.
+Default value: off
+Parameter description: When the database generates SQL, this parameter forcibly quotes all identifiers even if they are not keywords. This will affect the output of EXPLAIN as well as the results of functions, such as pg_get_viewdef. For details, see the --quote-all-identifiers parameter of gs_dump.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Determines whether to inherit semantics.
+Type: USERSET
+Value range: Boolean
+off indicates that child tables cannot be accessed by various commands. That is, an ONLY keyword is used by default. This setting is compatible with versions earlier than PostgreSQL 7.1.
+Default value: on
+Parameter description: Determines whether ordinary string literals ('...') treat backslashes as ordinary characters, as specified in the SQL standard.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether sequential scans of a table synchronize with each other, so that concurrent scans read the same data block at about the same time and share the I/O workload.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Controls whether certain limited features, such as GDS table join, are available. These features are not explicitly prohibited in earlier versions, but are not recommended due to their limitations in certain scenarios.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Many platforms work with the database system. External compatibility parameters make it easier for these platforms to interoperate with the database.
+Parameter description: Determines whether expressions of the form expr = NULL (or NULL = expr) are treated as expr IS NULL. They return true if expr evaluates to NULL, and false otherwise.
+Type: USERSET
+Value range: Boolean
+Default value: off
+New users are always confused about the semantics of expressions involving NULL values. Therefore, off is used as the default value.
+Parameter description: Determines whether to enable features compatible with a Teradata database. You can set this parameter to on when connecting to a database compatible with the Teradata database, so that when you perform the INSERT operation, overlong strings are truncated based on the allowed maximum length before being inserted into char- and varchar-type columns in the target table. This ensures all data is inserted into the target table without errors reported.
+Type: USERSET
+Value range: Boolean
+Default value: off
+This section describes parameters used for controlling the methods that the server processes an error occurring in the database system.
+Parameter description: Specifies whether to terminate the current session when an error occurs.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: If this parameter is set to on and the client character set of the database is encoded in UTF-8 format, the occurring character encoding conversion errors will be recorded in logs. Additionally, converted characters that have conversion errors will be ignored and replaced with question marks (?).
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the maximum number of automatic retries when an SQL statement error occurs. Currently, retry is supported for the following errors: Connection reset by peer, Lock wait timeout, and Connection timed out. If this parameter is set to 0, the retry function is disabled.
+Type: USERSET
+Value range: an integer ranging from 0 to 20
+Default value: 6
+Parameter description: Specifies the size of the data buffer used for data transmission on the CN.
+Type: POSTMASTER
+Value range: an integer ranging from 8 to 128. The unit is KB.
+Default value: 8 KB
+Parameter description: Specifies the maximum total size of temporary files that the CN can use during automatic SQL statement retries. The value 0 indicates that no temporary file is used.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 10485760. The unit is KB.
+Default value: 5 GB
+Parameter description: Specifies the list of SQL error types that support automatic retry.
+Type: USERSET
+Value range: a string
+Default value: YY001 YY002 YY003 YY004 YY005 YY006 YY007 YY008 YY009 YY010 YY011 YY012 YY013 YY014 YY015 53200 08006 08000 57P01 XX003 XX009 YY016 CG003 CG004 F0011
+Parameter description: Specifies whether to keep the database running when updated data fails to be written to disk by the fsync function. In some OSs, no error is reported even after fsync has failed multiple times, which can result in data loss.
+Type: POSTMASTER
+Value range: Boolean
+Default value: off
+When a connection pool is used to access the database, database connections are established and then stored in the memory as objects during system running. When you need to access the database, no new connection is established. Instead, an existing idle connection is selected from the connection pool. After you finish accessing the database, the database does not disable the connection but puts it back into the connection pool. The connection can be used for the next access request.
+Parameter description: Specifies the minimum number of connections between a CN's connection pool and another CN/DN.
+Type: POSTMASTER
+Value range: an integer ranging from 1 to 65535
+Default value: 1
+Parameter description: Specifies the maximum number of connections between a CN's connection pool and another CN/DN.
+Type: POSTMASTER
+Value range: an integer ranging from 1 to 65535
+Default value: 800
+Parameter description: Specifies whether to release the connection for the current session.
+Type: USERSET
+Value range: Boolean
+After this function is enabled, a session may hold a connection but does not run a query. As a result, other query requests fail to be connected. To fix this problem, the number of sessions must be less than or equal to max_active_statements.
+Default value: off
+Parameter description: Specifies the maximum number of CNs in a cluster.
+Type: POSTMASTER
+Value range: an integer ranging from 2 to 40
+Default value: 40
+Parameter description: Specifies the maximum number of DNs in a cluster.
+Type: POSTMASTER
+Value range: an integer ranging from 2 to 65535
+Default value: 4096
+Parameter description: Specifies whether to reclaim the connections of a connection pool.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether a session forcibly reuses a new connection.
+Type: BACKEND
+Value range: Boolean
+Default value: off
+Session connection parameter. Users are not advised to configure this parameter.
+Parameter description: Specifies whether a CN's connection pool can be connected in parallel mode.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+This section describes the settings and value ranges of cluster transaction parameters.
+Parameter description: Specifies the isolation level of the current transaction.
+Type: USERSET
+Value range:
+Default value: READ COMMITTED
+Parameter description: Specifies that the current transaction is a read-only transaction.
+Type: USERSET
+Value range: Boolean
+Default value: off for CNs and on for DNs
+Parameter description: Specifies whether the system is in maintenance mode.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Enable the maintenance mode with caution to avoid cluster data inconsistencies.
+Parameter description: Specifies whether to allow concurrent update.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to create a restoration point for the GTM starting point.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the interval at which the CN checks whether the connection between the local thread and the primary GTM is normal.
+Type: SIGHUP
+Value range: an integer ranging from 0 to INT_MAX/1000. The unit is second.
+Default value: 10s
+Parameter description: Specifies whether to delay the execution of a read-only serial transaction without incurring an execution failure. Assume this parameter is set to on. When the server detects that the tuples read by a read-only transaction are being modified by other transactions, it delays the execution of the read-only transaction until the other transactions finish modifying the tuples. Currently, this parameter is not used in GaussDB(DWS). Similar to this parameter, the default_transaction_deferrable parameter is used to specify whether to allow delayed execution of a transaction.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: This parameter is reserved for compatibility with earlier versions. This parameter is invalid in the current version.
+Parameter description: This parameter is available only in a read-only transaction and is used for analysis. When this parameter is set to on/true, all versions of tuples in the table are displayed.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the number of GTM reconnection attempts.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 2147483647.
+Default value: 30
+Parameter description: Specifies whether unmatched nodes are redistributed.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether the GTM-FREE mode is enabled. In large concurrency scenarios, the snapshots delivered by the GTM increase in number and size. The network between the GTM and the CN becomes the performance bottleneck. The GTM-FREE mode is used to eliminate the bottleneck. In this mode, the CN communicates with DNs instead of the GTM. The CN sends queries to each DN, which locally generates snapshots and xids, ensuring external write consistency but not external read consistency.
+You are not advised to set this parameter to on in OLTP or OLAP scenarios where strong read consistency is required. This parameter is invalid for GaussDB(DWS).
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to enable the lightweight column-store update.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to use the distributed framework for a query planner.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the trigger can be pushed to DNs for execution.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether JOIN operation plans can be delivered to DNs for execution.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the execution plans of GROUP BY and AGGREGATE can be delivered to DNs for execution.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the execution plan specified in the LIMIT clause can be pushed down to DNs for execution.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the execution plan of the ORDER BY clause can be delivered to DNs for execution.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether joins on a pseudo constant are allowed. A pseudo constant join is one where the variables on both sides of the join are each equal to the same constant.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the model used for cost estimation in the application scenario. This parameter affects the distinct estimation of the expression, HashJoin cost model, estimation of the number of rows, distribution key selection during redistribution, and estimation of the number of aggregate rows.
+Type: USERSET
+Value range: 0, 1, or 2
+Default value: 1
+Parameter description: Specifies whether to enable various assertion checks. This parameter assists in debugging. If you are experiencing strange problems or crashes, set this parameter to on to identify programming defects. To use this parameter, the macro USE_ASSERT_CHECKING must be defined (through the configure option --enable-cassert) during the GaussDB(DWS) compilation.
+Type: USERSET
+Value range: Boolean
+This parameter is set to on by default if GaussDB(DWS) is compiled with various assertion checks enabled.
+Default value: off
+Parameter description: Specifies whether the embedded test stubs for testing the distribution framework take effect. In most cases, developers embed some test stubs in the code during fault injection tests. Each test stub is identified by a unique name. The value of this parameter is a triplet that includes three values: thread level, test stub name, and error level of the injected fault. The three values are separated by commas (,).
+Type: USERSET
+Value range: a string indicating the name of any embedded test stub.
+Default value: -1, default, default
+Parameter description: Specifies whether to ignore CRC check failures (an alarm is still generated) and continue reading data. This parameter is valid only when enable_crc_check is set to on. Continuing to read data may result in breakdowns, damaged data being transferred or hidden, failure of data recovery from remote nodes, or other serious problems. You are not advised to modify this setting.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to create a table as a column-store table by default when no storage method is specified. The value for each node must be the same. This parameter is used for tests. Users are not allowed to enable it.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to forcibly generate a vectorized execution plan for a vectorized execution operator whose child node is a non-vectorized operator. When this parameter is set to on, the vectorized executor is forcibly used for row-store, column-store, and hybrid row-column store tables alike, provided the plan tree contains no scenario that does not support vectorization.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to deliver filter criteria for a rough check during query.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the name of a CSV file exported when explain_perf_mode is set to run.
+Type: USERSET
+The value of this parameter must be an absolute path plus a file name with the extension .csv.
+Value range: a string
+Default value: NULL
+Parameter description: Specifies the display format of the explain command.
+Type: USERSET
+Value range: normal, pretty, summary, and run
+Default value: pretty
+Parameter description: Controls the default distinct value of the join column or expression in application scenarios.
+Type: USERSET
+Value range: a double-precision floating point number greater than or equal to -100. Decimals may be truncated when displayed on clients.
+Default value: -20
+Parameter description: Controls the default distinct value of the filter column or expression in application scenarios.
+Type: USERSET
+Value range: a double-precision floating point number greater than or equal to -100. Decimals may be truncated when displayed on clients.
+Default value: 200
+Parameter description: Specifies whether to generate a large amount of debugging output for the LISTEN and NOTIFY commands. client_min_messages or log_min_messages must be DEBUG1 or lower so that such output can be recorded in the logs on the client or server separately.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to enable logging of recovery-related debugging output. This parameter allows users to overwrite the normal setting of log_min_messages, but only for specific messages. This is intended for use in debugging the standby server.
+Type: SIGHUP
+Value range: enumerated values. Valid values include debug5, debug4, debug3, debug2, debug1, and log. For details about the parameter values, see log_min_messages.
+Default value: log
+Parameter description: Specifies whether to display information about resource usage during sorting operations in logs. This parameter is available only when the macro TRACE_SORT is defined during the GaussDB(DWS) compilation. However, TRACE_SORT is currently defined by default.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to detect a damaged page header that causes GaussDB(DWS) to report an error, aborting the current transaction.
+Type: SUSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to use the same method to calculate char-type hash values and varchar- or text-type hash values. Based on the setting of this parameter, you can determine whether a redistribution is required when a distribution column is converted from a char-type data distribution into a varchar- or text-type data distribution.
+Type: POSTMASTER
+Value range: Boolean
+Calculation methods differ in the length of input strings used for calculating hash values. (For a char-type hash value, spaces following a string are not counted as the length. For a text- or varchar-type hash value, the spaces are counted.) The hash value affects the calculation result of queries. To avoid query errors, do not modify this parameter during database running once it is set.
+Default value: off
+Parameter description: Specifies whether to enable internal testing on the data replication function.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls the use of different estimation methods in specific customer scenarios, so that estimated values are closer to actual values. This parameter can control multiple methods simultaneously by performing an AND (&) operation on the bit for each method. A method is selected if its bit is not 0.
+If cost_param & 1 is not 0, an improved mechanism is used for calculating the selection rate of non-equi joins, which is more accurate for self-joins (joins between two identical tables). In V300R002C00 and later, cost_param & 1=0 is no longer used; the optimized formula is always selected.
+When cost_param & 2 is set to a value other than 0, the selection rate is estimated based on multiple filter criteria. The lowest selection rate among all filter criteria, but not the product of the selection rates for two tables under a specific filter criterion, is used as the total selection rate. This method is more accurate when a close correlation exists between the columns to be filtered.
+When cost_param & 4 is not 0, a debugging model is selected for estimating the stream node. This model is not recommended.
+When cost_param & 16 is not 0, the model between fully correlated and fully uncorrelated models is used to calculate the comprehensive selection rate of two or more filtering conditions or join conditions. If there are many filtering conditions, the strongly-correlated model is preferred.
+Type: USERSET
+Value range: an integer ranging from 1 to INT_MAX
+Default value: 16
+Parameter description: Specifies the implicit conversion priority, which determines whether to preferentially convert strings into numbers.
+In MySQL-compatible mode, this parameter has no impact.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Modify this parameter only when absolutely necessary because the modification will change the rule for converting internal data types and may cause unexpected results.
+Parameter description: Specifies the default timestamp format.
+Type: USERSET
+Value range: a string
+Default value: DD-Mon-YYYY HH:MI:SS.FF AM
+Parameter description: Specifies whether to select an intelligent algorithm for joining partitioned tables.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether dynamic pruning is enabled during partition table scanning.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the maximum number of exceptions. The default value cannot be changed.
+Type: USERSET
+Value range: an integer
+Default value: 1000
+Parameter description: This parameter no longer takes effect.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to allow output of some VACUUM-related logs for problem locating. This parameter is used only by developers. Common users are advised not to use it.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies the current statistics mode. This parameter is used to compare global statistics generation plans and the statistics generation plans for a single DN. This parameter is used for tests. Users are not advised to modify it.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether to enable optimization for numeric data calculation. Calculation of numeric data is time-consuming. Numeric data is converted into int64- or int128-type data to improve numeric data calculation performance.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the format in which numeric data in a row-store table is spilled to disks.
+Type: USERSET
+Value range: Boolean
+If this parameter is set to on, you are advised to enable enable_force_vector_engine to improve the query performance of large data sets. However, compared with the original format, there is a high probability that the bigint format occupies more disk space. For example, the TPC-H test set occupies about 7% more space (reference value, may vary depending on the environment).
+Default value: off
+Parameter description: Specifies which optional query rewriting rules are enabled. Some query rewriting rules are optional, and enabling them does not always improve query efficiency. In a specific customer scenario, you can set the query rewriting rules through this GUC parameter to achieve optimal query efficiency.
+This parameter can control a combination of query rewriting rules. For example, if there are multiple rewriting rules rule1, rule2, rule3, and rule4, you can set the parameter as follows:
+set rewrite_rule=rule1;        --Enable query rewriting rule rule1.
+set rewrite_rule=rule2,rule3;  --Enable query rewriting rules rule2 and rule3.
+set rewrite_rule=none;         --Disable all optional query rewriting rules.
+Type: USERSET
+Value range: a string
+Default value: magicset
+Parameter description: Specifies whether to compress data when it is written to disk.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies which optional location functions are enabled, including data verification and performance statistics. For details, see the options in the value range.
+Type: USERSET
+Value range: a string
+Default value: off(ALL), which indicates that no location function is enabled.
+Parameter description: Specifies the log level of self-diagnosis. Currently, this parameter takes effect only in multi-column statistics.
+Type: USERSET
+Value range: a string
+Currently, the two parameter values differ only when there is an alarm about multi-column statistics not collected. If the parameter is set to summary, such an alarm will not be displayed. If it is set to detail, such an alarm will be displayed.
+Default value: summary
+Parameter description: Specifies the number of buckets for HLL data. The number of buckets affects the precision of distinct values calculated by HLL: the more buckets there are, the smaller the deviation is. The deviation range is [–1.04/2^(log2m/2), +1.04/2^(log2m/2)].
+Type: USERSET
+Value range: an integer ranging from 10 to 16
+Default value: 11
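+As a worked example of the formula above: with the default log2m = 11, there are 2^11 = 2048 buckets, so the expected deviation is about ±1.04/2^(11/2) ≈ ±2.3%.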
+Parameter description: Specifies the number of bits in each bucket for HLL data. A larger value indicates more memory occupied by HLL. hll_default_regwidth and hll_default_log2m determine the maximum number of distinct values that can be calculated by HLL. For details, see Table 1.
+Type: USERSET
+Value range: an integer ranging from 1 to 5
+Default value: 5
+Table 1 Maximum number of distinct values calculated by HLL for different log2m and regwidth values
+| log2m | regwidth = 1 | regwidth = 2 | regwidth = 3 | regwidth = 4 | regwidth = 5 |
+|---|---|---|---|---|---|
+| 10 | 7.4e+02 | 3.0e+03 | 4.7e+04 | 1.2e+07 | 7.9e+11 |
+| 11 | 1.5e+03 | 5.9e+03 | 9.5e+04 | 2.4e+07 | 1.6e+12 |
+| 12 | 3.0e+03 | 1.2e+04 | 1.9e+05 | 4.8e+07 | 3.2e+12 |
+| 13 | 5.9e+03 | 2.4e+04 | 3.8e+05 | 9.7e+07 | 6.3e+12 |
+| 14 | 1.2e+04 | 4.7e+04 | 7.6e+05 | 1.9e+08 | 1.3e+13 |
+| 15 | 2.4e+04 | 9.5e+04 | 1.5e+06 | 3.9e+08 | 2.5e+13 |
Parameter description: Specifies the default threshold for switching from the explicit mode to the sparse mode.
+Type: USERSET
+Value range: an integer ranging from –1 to 7. –1 indicates the auto mode; 0 indicates that the explicit mode is skipped; a value from 1 to 7 indicates that the mode is switched when the number of distinct values reaches 2^hll_default_expthresh.
+Default value: –1
+Parameter description: Specifies whether to enable the sparse mode by default.
+Type: USERSET
+Value range: 0 or 1. 0 indicates that the sparse mode is disabled by default; 1 indicates that the sparse mode is enabled by default.
+Default value: 1
+Parameter description: Specifies the size of max_sparse.
+Type: USERSET
+Value range: an integer ranging from –1 to INT_MAX
+Default value: –1
+Parameter description: Specifies whether to enable memory optimization for HLL.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Controls the maximum physical memory that can be used when each CN or DN executes UDFs.
+Type: POSTMASTER
+Value range: an integer. The value range is from 200 x 1024 to the value of max_process_memory and the unit is KB.
+Default value: 200 MB
+Parameter description: Controls the virtual memory used by each fenced udf worker process.
+Type: USERSET
+Suggestion: You are not advised to set this parameter. You can set udf_memory_limit instead.
+Value range: an integer. The unit can be KB, MB, or GB. 0 indicates that the memory is not limited.
+Default value: 0
+Parameter description: Specifies the maximum value of fencedUDFMemoryLimit.
+Type: POSTMASTER
+Suggestion: You are not advised to set this parameter. You can set udf_memory_limit instead.
+Value range: an integer. The unit can be KB, MB, or GB.
+Default value: 1 GB
+Parameter description: Specifies the startup parameters for JVMs used by the PL/Java function.
+Type: SUSET
+Value range: a string, supporting:
+If pljava_vmoptions is set to a value beyond the value range, an error will be reported when PL/Java functions are used.
+Default value: empty
+Parameter description: Specifies the granularity of Java UDF actions.
+Type: SIGHUP
+Value range: a string
+Default value: extdir,hadoop,reflection,loadlibrary,net,socket,security,classloader,access_declared_members
+Parameter description: Specifies whether the optimizer optimizes the query plan for statements executed in Parse Bind Execute (PBE) mode.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the optimizer optimizes the execution of simple queries on CNs.
+Type: SUSET
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the number of consecutive disk pages that the checkpointer writer thread writes before asynchronous flush. In GaussDB(DWS), the size of a disk page is 8 KB.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 256. 0 indicates that the asynchronous flush function is disabled. For example, if the value is 32, the checkpointer thread continuously writes 32 disk pages (that is, 32 x 8 = 256 KB) before asynchronous flush.
+Default value: 32
+Parameter description: Controls whether multiple CNs can concurrently perform DDL operations on the same database object.
+Type: USERSET
+Value range: Boolean
+Default value: on
+Parameter description: When the GaussDB(DWS) cluster is accelerated (acceleration_with_compute_pool is set to on), specifies whether the EXPLAIN statement displays the evaluation information about execution plan pushdown to computing Node Groups. The evaluation information is generally used by O&M personnel during maintenance, and it may affect the output display of the EXPLAIN statement. Therefore, this parameter is disabled by default. The evaluation information is displayed only if the verbose option of the EXPLAIN statement is enabled.
+Type: USERSET
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to batch bind and execute PBE statements through interfaces such as JDBC, ODBC, and Libpq.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies whether the execution of the current statement or session can be immediately interrupted in the signal processing function.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Exercise caution when setting this parameter to on. If the execution of the current statement or session can be immediately interrupted in the signal processing function, the execution of some key processes may be interrupted, causing the failure to release the global lock in the system. It is recommended that this parameter be set to on only during system debugging or fault prevention.
+Parameter description: Specifies whether to enable or disable the audit process. After the audit process is enabled, the auditing information written by the background process can be read from the pipe and written into audit files.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the format of the audit log files. Currently, only the binary format is supported.
+Type: POSTMASTER
+Value range: a string
+Default value: binary
+Parameter description: Specifies the interval of creating an audit log file. If the difference between the current time and the time when the previous audit log file is created is greater than the value of audit_rotation_interval, a new audit log file will be generated.
+Type: SIGHUP
+Value range: an integer ranging from 1 to INT_MAX/60. The unit is min.
+Default value: 1d
+Adjust this parameter only when required. Otherwise, audit_resource_policy may fail to take effect. To control the storage space and time of audit logs, set the audit_resource_policy, audit_space_limit, and audit_file_remain_time parameters.
+Parameter description: Specifies the maximum capacity of an audit log file. If the total size of messages in an audit log file exceeds the value of audit_rotation_size, the server generates a new audit log file.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 1024. The unit is MB.
+Default value: 10 MB
+Adjust this parameter only when required. Otherwise, audit_resource_policy may fail to take effect. To control the storage space and time of audit logs, set the audit_resource_policy, audit_space_limit, and audit_file_remain_time parameters.
+Parameter description: Specifies the policy for determining whether audit logs are preferentially stored by space or by time. When set to off, the retention time configured by audit_file_remain_time applies instead.
+Type: SIGHUP
+Value range: Boolean
+Default value: on
+Parameter description: Specifies the minimum duration required for recording audit logs. This parameter is valid only when audit_resource_policy is set to off.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 730. The unit is day. 0 indicates that the storage duration is not limited.
+Default value: 90
+Parameter description: Specifies the total disk space occupied by audit files.
+Type: SIGHUP
+Value range: an integer ranging from 1024 KB to 1024 GB. The unit is KB.
+Default value: 1GB
+Parameter description: Specifies the maximum number of audit files in the audit directory.
+Type: SIGHUP
+Value range: an integer ranging from 1 to 1048576
+Default value: 1048576
+Ensure that the value of this parameter is 1048576. If the value is changed, the audit_resource_policy parameter may not take effect. To control the storage space and time of audit logs, use the audit_resource_policy, audit_space_limit, and audit_file_remain_time parameters.
+Parameter description: Specifies whether to audit successful operations in GaussDB(DWS). Set this parameter as required.
+Type: SIGHUP
+Value range: a string
+Default value: login, logout, database_process, user_lock, grant_revoke, set, transaction, and cursor
+Parameter description: Specifies whether to audit failed operations in GaussDB(DWS). Set this parameter as required.
+Type: SIGHUP
+Value range: a string
+Default value: login
+Parameter description: Specifies whether to audit the operations of the internal maintenance tool in GaussDB(DWS).
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to audit the CREATE, DROP, and ALTER operations on the GaussDB(DWS) database object. The GaussDB(DWS) database objects include databases, users, schemas, and tables. The operations on the database object can be audited by changing the value of this parameter.
+Type: SIGHUP
+Value range: an integer ranging from 0 to 4194303
+Value description:
+The value of this parameter is interpreted as 22 binary bits, each representing one type of GaussDB(DWS) database object. If a bit is set to 0, the CREATE, DROP, and ALTER operations on the corresponding database objects are not audited; if it is set to 1, they are audited. For details about the audit content represented by these 22 binary bits, see Table 1.
+Default value: 12303
+Table 1 Audit content represented by each binary bit
+| Binary Bit | Meaning |
+|---|---|
+| Bit 0 | Whether to audit the CREATE, DROP, and ALTER operations on databases. |
+| Bit 1 | Whether to audit the CREATE, DROP, and ALTER operations on schemas. |
+| Bit 2 | Whether to audit the CREATE, DROP, and ALTER operations on users. |
+| Bit 3 | Whether to audit the CREATE, DROP, ALTER, and TRUNCATE operations on tables. |
+| Bit 4 | Whether to audit the CREATE, DROP, and ALTER operations on indexes. |
+| Bit 5 | Whether to audit the CREATE, DROP, and ALTER operations on views. |
+| Bit 6 | Whether to audit the CREATE, DROP, and ALTER operations on triggers. |
+| Bit 7 | Whether to audit the CREATE, DROP, and ALTER operations on procedures/functions. |
+| Bit 8 | Whether to audit the CREATE, DROP, and ALTER operations on tablespaces. |
+| Bit 9 | Whether to audit the CREATE, DROP, and ALTER operations on resource pools. |
+| Bit 10 | Whether to audit the CREATE, DROP, and ALTER operations on workloads. |
+| Bit 11 | Whether to audit the CREATE, DROP, and ALTER operations on SERVER FOR HADOOP objects. |
+| Bit 12 | Whether to audit the CREATE, DROP, and ALTER operations on data sources. |
+| Bit 13 | Whether to audit the CREATE, DROP, and ALTER operations on Node Groups. |
+| Bit 14 | Whether to audit the CREATE, DROP, and ALTER operations on ROW LEVEL SECURITY objects. |
+| Bit 15 | Whether to audit the CREATE, DROP, and ALTER operations on types. |
+| Bit 16 | Whether to audit the CREATE, DROP, and ALTER operations on text search objects (configurations and dictionaries). |
+| Bit 17 | Whether to audit the CREATE, DROP, and ALTER operations on directories. |
+| Bit 18 | Whether to audit the CREATE, DROP, and ALTER operations on workloads. |
+| Bit 19 | Whether to audit the CREATE, DROP, and ALTER operations on redaction policies. |
+| Bit 20 | Whether to audit the CREATE, DROP, and ALTER operations on sequences. |
+| Bit 21 | Whether to audit the CREATE, DROP, and ALTER operations on nodes. |
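+As a worked decomposition of the default value against the table above: 12303 = 8192 (bit 13) + 4096 (bit 12) + 8 (bit 3) + 4 (bit 2) + 2 (bit 1) + 1 (bit 0), so by default the CREATE, DROP, and ALTER operations on Node Groups, data sources, tables, users, schemas, and databases are audited. To additionally audit views (bit 5), add 2^5 = 32, giving 12303 + 32 = 12335.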
Parameter description: Specifies whether the separation of permissions is enabled.
+Type: POSTMASTER
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether the with grant option function can be used in security mode.
+Type: SIGHUP
+Value range: Boolean
+Default value: off
+Parameter description: Specifies whether to enable the permission to copy server files.
+Type: POSTMASTER
+Value range: Boolean
+Default value: on
+COPY FROM/TO file requires system administrator permissions. However, if the separation of permissions is enabled, system administrator permissions are different from initial user permissions. In this case, you can use enable_copy_server_file to control the COPY permission of system administrators to prevent escalation of their permissions.
+By setting a transaction timeout warning, you can monitor automatically rolled-back transactions and locate their statement problems. Statements with long execution times can also be monitored.
+Parameter description: For data consistency, when the local transaction's status differs from that in the snapshot of the GTM, other transactions will be blocked. You need to wait for a few minutes until the transaction status of the local host is consistent with that of the GTM. The gs_clean tool is automatically triggered for cleansing when the waiting period on the CN exceeds that of transaction_sync_naptime. The tool will shorten the blocking time after it completes the cleansing.
+Type: USERSET
+Value range: an integer. The minimum value is 0. The unit is second.
+Default value: 5s
+If this parameter is set to 0, gs_clean will not be automatically invoked for cleansing before the blocking time reaches this duration. Instead, the gs_clean tool is invoked at the interval specified by gs_clean_timeout, which is 5 minutes by default.
+Parameter description: For data consistency, when the local transaction's status differs from that in the snapshot of the GTM, other transactions will be blocked. You need to wait a few minutes until the transaction status of the local host is consistent with that of the GTM. An exception is reported when the waiting duration on the CN exceeds the value of transaction_sync_timeout, and the transaction is rolled back to avoid system blocking caused by long process response failures (for example, a sync lock).
+Type: USERSET
+Value range: an integer. The minimum value is 0. The unit is second.
+Default value: 10min
Parameter description: If an SQL statement involves tables belonging to different node groups, you can enable this parameter to have the statement's execution plan pushed down, improving performance.
Type: SUSET
Value range: Boolean
Default value: off
This parameter is used for internal O&M. Do not set it to on unless absolutely necessary.

Parameter description: Specifies the storage location of data to be imported into an HDFS table. This parameter is needed for operations that import data, such as INSERT, UPDATE, COPY, and VACUUM FULL.
Type: USERSET
Value range: enumerated values
Default value: auto
You can set another value as the default in the configuration file.
Parameter description: When enable_crc_check is set to on and the data read by the primary DN fails the verification, remote_read_mode specifies whether to enable remote read and whether to use secure authentication for the connection. The setting takes effect only after the cluster is restarted.
Type: POSTMASTER
Value range: off, non_authentication, authentication
Default value: non_authentication

Parameter description: If this parameter is set to on, the delta merge operation internally raises the lock level, so that errors are avoided when UPDATE and DELETE operations are performed at the same time.
Type: USERSET
Value range: Boolean
Default value: off
Parameter description: Specifies the number of jobs that can be executed concurrently. This is a POSTMASTER parameter; you can set it using gs_guc, and the database must be restarted for the setting to take effect.
Type: POSTMASTER
Value range: 0 to 1000
Functions:
After the scheduled task function is enabled, the job_scheduler thread polls the pg_jobs system catalog at a scheduled interval. By default, the scheduled task check is performed every second.
Too many concurrent tasks consume a large amount of system resources, so the number of concurrently processed tasks needs to be limited. If the current number of concurrent tasks has reached job_queue_processes and some of them expire, those tasks are postponed to the next polling period. You are therefore advised to set the polling interval (the interval parameter of the submit interface) based on the execution duration of each task, so that tasks in the next polling period are not delayed by overlong executions.
Note: If the number of parallel jobs is large and the value is too small, jobs will wait in queues; however, a large value consumes more resources. You are advised to set this parameter to 100 and adjust it based on available system resources.
Default value: 10
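The pg_jobs catalog named above can be inspected directly; a minimal check might look like this:

-- List the scheduled jobs known to the current CN; job_queue_processes
-- caps how many of them may run at the same time.
SELECT * FROM pg_jobs;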
Parameter description: Specifies the segment length of the ngram parser.
Type: USERSET
Value range: an integer ranging from 1 to 4
Default value: 2

Parameter description: Specifies whether the ngram parser ignores graphical characters.
Type: USERSET
Value range: Boolean
Default value: off

Parameter description: Specifies whether the ngram parser ignores punctuation marks.
Type: USERSET
Value range: Boolean
Default value: on
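All three settings are USERSET and can be combined per session. A minimal sketch, assuming the GaussDB(DWS) parameter names ngram_gram_size, ngram_grapsymbol_ignore, and ngram_punctuation_ignore (the names are not stated in the text above and are an assumption):

-- Session-level tuning of the ngram parser (parameter names assumed).
SET ngram_gram_size = 2;            -- split text into 2-grams
SET ngram_grapsymbol_ignore = off;  -- keep graphical characters
SET ngram_punctuation_ignore = on;  -- drop punctuation marks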
Parameter description: Specifies whether Zhparser loads its dictionary into memory.
Type: POSTMASTER
Value range: Boolean
Default value: on

Parameter description: Specifies whether Zhparser aggregates segments in long words with duality.
Type: USERSET
Value range: Boolean
Default value: off

Parameter description: Specifies whether Zhparser performs compound segmentation on long words.
Type: USERSET
Value range: Boolean
Default value: on

Parameter description: Specifies whether Zhparser displays all single words individually.
Type: USERSET
Value range: Boolean
Default value: off

Parameter description: Specifies whether Zhparser displays important single words separately.
Type: USERSET
Value range: Boolean
Default value: off

Parameter description: Specifies whether the Zhparser segmentation result ignores special characters, including punctuation marks (\r and \n are not ignored).
Type: USERSET
Value range: Boolean
Default value: on

Parameter description: Specifies whether Zhparser aggregates segments in long words with duality.
Type: USERSET
Value range: Boolean
Default value: off

Parameter description: Specifies whether to use the computing resource pool for acceleration when OBS is queried.
Type: USERSET
Value range: Boolean
Default value: off
Parameter description: Specifies database compatibility behavior. Multiple items are separated by commas (,).
Type: USERSET
Value range: a string
Default value: In upgrade scenarios, the default value of this parameter is the same as that in the cluster before the upgrade. When a new cluster is installed, the default value is check_function_conflicts, which prevents serious problems caused by incorrectly defined user function attributes.
Each configuration item is listed below with its behavior and the compatibility modes it applies to (ORA, TD, MySQL).

display_leading_zero (ORA, TD)
Specifies how floating point numbers are displayed.

end_month_calculate (ORA, TD)
Specifies the calculation logic of the add_months function. Assume that the two parameters of the add_months function are param1 and param2, and that the sum of param1 and param2 is result.

compat_analyze_sample (ORA, TD, MySQL)
Specifies the sampling behavior of the ANALYZE operation. If this item is specified, the sample collected by the ANALYZE operation is limited to around 30,000 records, controlling CN memory consumption and maintaining the stability of ANALYZE.

bind_schema_tablespace (ORA, TD, MySQL)
Binds a schema with the tablespace of the same name. If a tablespace name is the same as sche_name, default_tablespace will also be set to sche_name if search_path is set to sche_name.

bind_procedure_searchpath (ORA, TD, MySQL)
Specifies the search path of a database object for which no schema name is specified. If no schema name is specified for a stored procedure, the search is performed in the schema to which the stored procedure belongs. If the stored procedure is not found there, further search operations are performed.

correct_to_number (ORA)
Controls the compatibility of the to_number() result. If this item is specified, the result of the to_number() function is the same as that of PG11. Otherwise, the result is the same as that of Oracle.

unbind_divide_bound (ORA, TD)
Controls the range check on the result of integer division. Without this item, the result is range-checked:
SELECT (-2147483648)::int / (-1)::int;
ERROR: integer out of range
With this item specified, the bound is lifted:
SELECT (-2147483648)::int / (-1)::int;
  ?column?
------------
 2147483648
(1 row)

merge_update_multi (ORA, TD)
Performs an update if multiple rows are matched for MERGE INTO. If this item is specified, no error is reported when multiple rows are matched. Otherwise, an error is reported (same as Oracle).

return_null_string (ORA)
Specifies how to display the empty result (empty string '') of the lpad(), rpad(), repeat(), regexp_split_to_table(), and split_part() functions.

compat_concat_variadic (ORA, TD)
Specifies the compatibility of variadic results of the concat() and concat_ws() functions. If this item is specified and a concat function has a parameter of the variadic type, the different result formats of Oracle and Teradata are retained. If this item is not specified and a concat function has a parameter of the variadic type, the Oracle result format is used for both Oracle and Teradata.

convert_string_digit_to_numeric (ORA, TD, MySQL)
Specifies the type casting priority for binary BOOL operations on the CHAR type and INT type.
CAUTION: This configuration item is valid only for binary BOOL operations, for example, INT2>TEXT and INT4=BPCHAR. Non-BOOL operations are not affected. This configuration item does not support conversion of UNKNOWN operations such as INT>'1.1'. After this configuration item is enabled, all BOOL operations of the CHAR and INT types are preferentially converted to the NUMERIC type for computation, which affects the computation performance of the database. When the JOIN column is a combination of affected types, the execution plan is affected.

check_function_conflicts (ORA, TD, MySQL)
Controls the check of custom plpgsql/SQL function attributes. For example, when this item is specified, an error is reported in the following scenario:
CREATE OR replace FUNCTION sql_immutable (INTEGER)
RETURNS INTEGER AS 'SELECT a+$1 from shipping_schema.t4 where a=1;'
LANGUAGE SQL IMMUTABLE
RETURNS NULL
ON NULL INPUT;
select sql_immutable(1);
ERROR: IMMUTABLE function cannot contain SQL statements with relation or Non-IMMUTABLE function.
CONTEXT: SQL function "sql_immutable" during startup
referenced column: sql_immutable

varray_verification (ORA, TD)
Indicates whether to verify the array length and array type length, for compatibility with GaussDB(DWS) versions earlier than 8.1.0. If this item is specified, the array length and array type length are not verified.
Scenario 1:
CREATE OR REPLACE PROCEDURE varray_verification
AS
    TYPE org_varray_type IS varray(5) OF VARCHAR2(2);
    v_org_varray org_varray_type;
BEGIN
    v_org_varray(1) := '111'; -- Exceeds the limit of VARCHAR2(2). With this item configured, the behavior matches historical versions and no verification is performed.
END;
/
Scenario 2:
CREATE OR REPLACE PROCEDURE varray_verification_i3_1
AS
    TYPE org_varray_type IS varray(2) OF NUMBER(2);
    v_org_varray org_varray_type;
BEGIN
    v_org_varray(3) := 1; -- Exceeds the array length limit of varray(2). With this item configured, the behavior matches historical versions and no verification is performed.
END;
/

strict_concat_functions (ORA, TD)
Indicates whether the textanycat() and anytextcat() functions are compatible with null parameters in their return values. This item and strict_text_concat_td are mutually exclusive. In MySQL-compatible mode, this item has no impact.
If this item is not specified, the return values of the textanycat() and anytextcat() functions are the same as those in the Oracle database:
SELECT textanycat('gauss', cast(NULL as BOOLEAN));
 textanycat
------------
 gauss
(1 row)

SELECT 'gauss' || cast(NULL as BOOLEAN); -- The || operator is converted to the textanycat function.
 ?column?
----------
 gauss
(1 row)
When this item is specified, results that differ from those in Oracle and Teradata are retained:
SELECT textanycat('gauss', cast(NULL as BOOLEAN));
 textanycat
------------

(1 row)

SELECT 'gauss' || cast(NULL as BOOLEAN); -- The || operator is converted to the textanycat function.
 ?column?
----------

(1 row)

strict_text_concat_td (TD)
In Teradata-compatible mode, specifies whether the textcat(), textanycat(), and anytextcat() functions are compatible with null parameters in their return values. This item and strict_concat_functions are mutually exclusive.
If this item is not specified, the return values of the textcat(), textanycat(), and anytextcat() functions are the same as those in GaussDB(DWS):
td_data_compatible_db=# SELECT textcat('abc', NULL);
 textcat
---------
 abc
(1 row)

td_data_compatible_db=# SELECT 'abc' || NULL; -- The || operator is converted to the textcat() function.
 ?column?
----------
 abc
(1 row)
When this item is specified, NULL is returned if any parameter of the textcat(), textanycat(), or anytextcat() functions is null:
td_data_compatible_db=# SELECT textcat('abc', NULL);
 textcat
---------

(1 row)

td_data_compatible_db=# SELECT 'abc' || NULL;
 ?column?
----------

(1 row)

compat_display_ref_table (ORA, TD)
Sets the column display format in views:
SET behavior_compat_options='compat_display_ref_table';
CREATE OR REPLACE VIEW viewtest2 AS SELECT a.c1, c2, a.c3, 0 AS c4 FROM viewtest_tbl a;
SELECT pg_get_viewdef('viewtest2');
                   pg_get_viewdef
-----------------------------------------------------
 SELECT a.c1, c2, a.c3, 0 AS c4 FROM viewtest_tbl a;
(1 row)

para_support_set_func (ORA, TD)
Specifies whether the input parameters of the COALESCE(), NVL(), GREATEST(), and LEAST() functions in a column-store table support multiple result set expressions.

disable_select_truncate_parallel (ORA, TD, MySQL)
Controls the DDL lock level, such as TRUNCATE, in a partitioned table.

bpchar_text_without_rtrim (TD)
In Teradata-compatible mode, controls whether trailing spaces are retained during the character conversion from bpchar to text. If the actual length is less than the length specified by bpchar, spaces are appended to the value to be compatible with the Teradata style of bpchar character strings. Ignoring trailing spaces during comparison is currently not supported; if a concatenated string contains trailing spaces, the comparison is space-sensitive. The following is an example:
td_compatibility_basic_db=# select length('a'::char(10)::text);
 length
--------
 10
(1 row)

td_compatibility_basic_db=# select length('a'||'a'::char(10));
 length
--------
 11
(1 row)

convert_empty_str_to_null_td (TD)
In Teradata-compatible mode, makes the to_date, to_timestamp, and to_number type conversion functions return NULL when they encounter empty strings, and controls the format of the return value when the to_char function receives an input parameter of the date type. Example:
If this item is not specified:
td_compatibility_db=# select to_number('');
 to_number
-----------
         0
(1 row)

td_compatibility_db=# select to_date('');
ERROR: the format is not correct
DETAIL: invalid date length "0", must between 8 and 10.
CONTEXT: referenced column: to_date

td_compatibility_db=# select to_timestamp('');
      to_timestamp
------------------------
 0001-01-01 00:00:00 BC
(1 row)

td_compatibility_db=# select to_char(date '2020-11-16');
        to_char
------------------------
 2020-11-16 00:00:00+08
(1 row)
If this item is specified, and parameters of the to_number, to_date, and to_timestamp functions contain empty strings:
td_compatibility_db=# select to_number('');
 to_number
-----------

(1 row)

td_compatibility_db=# select to_date('');
 to_date
---------

(1 row)

td_compatibility_db=# select to_timestamp('');
 to_timestamp
--------------

(1 row)

td_compatibility_db=# select to_char(date '2020-11-16');
  to_char
------------
 2020/11/16
(1 row)

disable_case_specific (TD)
Determines whether to ignore case sensitivity during character type matching. This item is valid only in Teradata-compatible mode. Once specified, it affects five character types (CHAR, TEXT, BPCHAR, VARCHAR, and NVARCHAR), the operators (<, >, =, >=, <=, !=, <>, like, not like, in, and not in), and the case when and decode expressions.
CAUTION: After this item is enabled, the UPPER function is added before the character type, which affects the estimation logic, so an enhanced estimation model is required. (Suggested settings: cost_param=16, cost_model_version=1, join_num_distinct=-20, and qual_num_distinct=200.)

enable_interval_to_text (ORA, TD, MySQL)
Controls the implicit conversion from the interval type to the text type.

light_object_mtime (ORA, TD, MySQL)
Specifies whether the mtime column in the pg_object system catalog records object operations.
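As the compat_display_ref_table example above shows, items are enabled as a comma-separated list at the session level; for instance:

-- Enable two compatibility items for the current session.
SET behavior_compat_options = 'display_leading_zero,end_month_calculate';
SHOW behavior_compat_options;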
Parameter description: Specifies the threshold for triggering a table skew alarm.
Type: SUSET
Value range: a floating point number ranging from 0 to 1
Default value: 1

Parameter description: Specifies the minimum number of rows for triggering a table skew alarm.
Type: SUSET
Value range: an integer ranging from 0 to INT_MAX
Default value: 100000

Parameter description: Specifies the number of memory-saving partitions in column-store mode during redistribution after scale-out. If the number of partitions exceeds the upper limit, the earliest cached partition is written directly to the column-store file.
Type: SIGHUP
Value range: an integer ranging from 0 to 32767
Default value: 0
This parameter is used for redistribution during scale-out. A proper value can reduce memory consumption when a partitioned column-store table is redistributed. However, tables whose data is unevenly distributed across partitions may generate a large number of small CUs after redistribution. If that happens, run VACUUM FULL to merge the small CUs.
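For example, compacting a table that accumulated small CUs after redistribution (the table name is hypothetical):

-- Merge small CUs produced during redistribution; table name is hypothetical.
VACUUM FULL sales_part_col;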
Parameter description: Specifies whether to prevent the startup of the scheduled-job thread. This is an internal parameter; you are not advised to change its value.
Type: SIGHUP
Value range: Boolean
Default value: off
Set this parameter only on CNs.

Parameter description: Specifies whether to enable the residual file recording function.
Type: SIGHUP
Value range: Boolean
Default value: off

Parameter description: Specifies whether to enable the view update function.
Type: POSTMASTER
Value range: Boolean
Default value: off

Parameter description: Specifies whether to decouple views from tables, functions, and synonyms. After a base table is restored, automatic re-association and re-creation are supported.
Type: SIGHUP
Value range: Boolean
Default value: off

Parameter description: Specifies the threshold for reporting import and export statistics.
Type: SIGHUP
Value range: an integer ranging from 0 to INT_MAX
Default value: 50

Parameter description: Determines the transaction to be aborted based on the XID specified in a query.
Type: USERSET
Value range: a character string with the specified XID
This parameter is used only for quick restoration when a user deletes data by mistake (a DELETE operation). Do not use it in other scenarios; otherwise, visible transaction errors may occur.
A – E

ACID: Atomicity, Consistency, Isolation, and Durability (ACID). These are a set of properties of database transactions in a DBMS.

cluster ring: A cluster ring consists of several physical servers. The primary-standby-secondary relationships among its DNs do not involve external DNs. That is, none of the primary, standby, or secondary counterparts of DNs belonging to the ring are deployed in other rings. A ring is the smallest unit used for scaling.

Bgwriter: A background write thread created when the database starts. The thread pushes dirty pages in the database to a permanent device (such as a disk).

bit: The smallest unit of information handled by a computer. One bit is expressed as a 1 or a 0 in a binary numeral, or as a true or a false logical condition. A bit is physically represented by an element such as high or low voltage at one point in a circuit, or a small spot on a disk that is magnetized one way or the other. A single bit conveys little information a human would consider meaningful. A group of eight bits, however, makes up a byte, which can be used to represent many types of information, such as a letter of the alphabet, a decimal digit, or another character.

Bloom filter: A space-efficient binary vectorized data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not; in other words, a query returns either "possibly in set" or "definitely not in set". In this way, the Bloom filter trades accuracy for time and space.

CCN: The Central Coordinator (CCN) is a node responsible for determining, queuing, and scheduling complex operations on each CN to enable the dynamic load management of GaussDB(DWS).

CIDR: Classless Inter-Domain Routing (CIDR). CIDR abandons the traditional class-based (class A: 8; class B: 16; class C: 24) address allocation mode and allows the use of address prefixes of any length, effectively improving the utilization of the address space. A CIDR address is in the format of IP address/number of bits in the network ID. For example, in 192.168.23.35/21, 21 indicates that the first 21 bits are the network prefix and the others are the host ID.

Cgroups: A control group (Cgroup), also called a priority group (PG) in GaussDB(DWS). The Cgroup is a kernel feature of SUSE Linux and Red Hat that can limit, account for, and isolate the resource usage of a collection of processes.

CLI: Command-line interface (CLI). Users use the CLI to interact with applications. Its input and output are based on text. Commands are entered through keyboards or similar devices and are compiled and executed by applications. The results are displayed in text or graphic form on the terminal interface.

CM: Cluster Manager (CM) manages and monitors the running status of functional units and physical resources in the distributed system, ensuring stable running of the entire system.

CMS: The Cluster Management Service (CMS) component manages the cluster status.

CN: The Coordinator (CN) stores database metadata, splits query tasks and supports their execution, and aggregates the query results returned from DNs.

CU: The Compression Unit (CU) is the smallest storage unit in a column-store table.

core file: A file that is created when memory overwriting, an assertion failure, or access to invalid memory occurs in a process, causing it to fail. The file is then used for further analysis. A core file contains a memory dump, in an all-binary, port-specific format. The name of a core file consists of the word "core" and the OS process ID. The core file is available regardless of the type of platform.

core dump: When a program stops abnormally, the core dump, memory dump, or system dump records the state of the program's working memory at that point in time. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and OS flags and information. A core dump is often used to assist diagnosis and debugging of computer programs.

DBA: A database administrator (DBA) instructs or executes database maintenance operations.

DBLINK: An object defining the path from one database to another. A remote database object can be queried with DBLINK.

DBMS: A Database Management System (DBMS) is system management software that allows users to access information in a database. It is a collection of programs that allows you to access, manage, and query data in a database. A DBMS can be classified as a memory DBMS or disk DBMS based on the location of the data.

DCL: Data control language (DCL)

DDL: Data definition language (DDL)

DML: Data manipulation language (DML)

DN: A Datanode (DN) performs table data storage and query operations.

ETCD: The Editable Text Configuration Daemon (ETCD) is a distributed key-value storage system used for configuration sharing and service discovery (registration and search).

ETL: Extract-Transform-Load (ETL) refers to the process of data transmission from the source to the target database.

Extension Connector: Extension Connector is provided by GaussDB(DWS) to process data across clusters. It can send SQL statements to Spark and return execution results to your database.

backup: A backup, or the process of backing up, refers to the copying and archiving of computer data in case of data loss.

backup and restoration: A collection of concepts, procedures, and strategies used to protect against data loss caused by invalid media or misoperations.

standby server: A node in the GaussDB(DWS) HA solution. It functions as a backup of the primary server. If the primary server behaves abnormally, the standby server is promoted to primary, ensuring data service continuity.

crash: A crash (or system crash) is an event in which a computer or a program (such as a software application or an OS) ceases to function properly. Often the program will exit after encountering this type of error. Sometimes the offending program may appear to freeze or hang until a crash reporting service documents details of the crash. If the program is a critical part of the OS kernel, the entire computer may crash (possibly resulting in a fatal system error).

encoding: Encoding is representing data and information using code so that it can be processed and analyzed by a computer. Characters, digits, and other objects can be converted into digital code, or information and data can be converted into the required electrical pulse signals based on predefined rules.

encoding technology: A technology that presents data using a specific set of characters, which can be identified by computer hardware and software.

table: A set of columns and rows. Each column is referred to as a field. The value in each field represents a data type. For example, if a table contains people's names, cities, and states, it has three columns: Name, City, and State. In every row in the table, the Name column contains a name, the City column contains a city, and the State column contains a state.

tablespace: A logical storage structure that contains tables, indexes, large objects, and long data. A tablespace provides an abstract layer between physical data and logical data, and provides storage space for all database objects. When you create a table, you can specify which tablespace it belongs to.

concurrency control: A DBMS service that ensures data integrity when multiple transactions are executed concurrently in a multi-user environment. In a multi-threaded environment, GaussDB(DWS) concurrency control ensures that database operations are safe and all database transactions remain consistent at any given time.

query: A request sent to the database, such as updating, modifying, querying, or deleting information.

query operator: An iterator or a query tree node, which is a basic unit for the execution of a query. The execution of a query can be split into one or more query operators. Common query operators include scan, join, and aggregation.

query fragment: Each query task can be split into one or more query fragments. Each query fragment consists of one or more query operators and can run independently on a node. Query fragments exchange data through data flow operators.

durability: One of the ACID features of database transactions. Durability indicates that committed transactions permanently survive and are not rolled back.

stored procedure: A group of SQL statements compiled into a single execution plan and stored in a large database system. Users can specify a name and parameters (if any) for a stored procedure to execute it.

OS: An operating system (OS) is loaded by a bootstrap program to a computer to manage all other programs (applications) on the computer or a similar device.

secondary server: To ensure high cluster availability, the primary server synchronizes logs to the secondary server if data synchronization between the primary and standby servers fails. If the primary server suddenly breaks down, the standby server is promoted to primary and synchronizes logs from the secondary server for the duration of the breakdown.

BLOB: A binary large object (BLOB) is a collection of binary data stored in a database, such as videos, audio, and images.

dynamic load balancing: In GaussDB(DWS), dynamic load balancing automatically adjusts the number of concurrent jobs based on the usage of CPU, I/O, and memory, to avoid service errors and prevent the system from becoming unresponsive due to overload.

segment: A segment in the database is a part containing one or more regions. A region is the smallest range of a database and consists of data blocks. One or more segments comprise a tablespace.
F – J

failover: Automatic switchover from a faulty node to its standby node. Conversely, automatic switchback from the standby node to the primary node is called failback.

FDW: A foreign data wrapper (FDW) is an SQL interface provided by PostgreSQL. It is used to access big data objects in remote data sources so that DBAs can integrate data from unrelated data sources into a common schema in the database.

freeze: An operation automatically performed by the AutoVacuum Worker process when transaction IDs are exhausted. GaussDB(DWS) records transaction IDs in row headers. When a transaction reads a row, the transaction ID in the row header is compared with the actual transaction ID to determine whether the row is visible. Transaction IDs are unsigned integers; if they are exhausted, transaction IDs wrap around outside the integer range, causing visible rows to become invisible. To prevent this, the freeze operation marks a transaction ID as a special ID, and rows marked with these special transaction IDs are visible to all transactions.

GDB: As a GNU debugger, GDB allows you to see what is going on "inside" another program while it executes, or what another program was doing at the moment it crashed. GDB can perform four main kinds of things to help you catch bugs in the act.

GDS: General Data Service (GDS). To import data to GaussDB(DWS), you need to deploy this tool on the server where the source data is stored so that DNs can use it to obtain data.

GIN index: A generalized inverted index (GIN) is used for handling cases where the items to be indexed are composite values, and the queries to be handled by the index need to search for element values that appear within the composite items.

GNU: The GNU Project was publicly announced on September 27, 1983 by Richard Stallman, aiming at building an OS composed wholly of free software. GNU is a recursive acronym for "GNU's Not Unix!". Stallman announced that GNU should be pronounced Guh-NOO. Technically, GNU is similar in design to Unix, a widely used commercial OS; however, GNU is free software and contains no Unix code.

gsql: The GaussDB(DWS) interactive terminal. It enables you to interactively type in queries, issue them to GaussDB(DWS), and view the query results. Queries can also be entered from files. gsql supports many meta-commands and shell-like commands, allowing you to conveniently compile scripts and automate tasks.

GTM: The Global Transaction Manager (GTM) manages the status of transactions.

GUC: Grand unified configuration (GUC) includes parameters for running databases, the values of which determine database system behavior.

HA: High availability (HA) is a solution in which two modules operate in primary/standby mode. This solution helps minimize the duration of service interruptions caused by routine maintenance (planned) or sudden system breakdowns (unplanned), improving the usability of the system and applications.

HBA: Host-based authentication (HBA) allows hosts to authenticate on behalf of all or some of the system users. It can apply to all users on a system or a subset using the Match directive. This type of authentication can be useful for managing computing clusters and other fairly homogeneous pools of machines. In all, three files on the server and one on the client must be modified to prepare for host-based authentication.

HDFS: Hadoop Distributed File System (HDFS) is a subproject of Apache Hadoop. HDFS is highly fault-tolerant and is designed to run on low-end hardware. It provides high-throughput access to large data sets and is ideal for applications with large data sets.

server: A combination of hardware and software designed to provide clients with services. The word alone refers to the computer running the server OS, or the software or dedicated hardware providing services.

advanced package: Logical and functional stored procedures and functions provided by GaussDB(DWS).

isolation: One of the ACID features of database transactions. Isolation means that the operations inside a transaction and the data it uses are isolated from other concurrent transactions; concurrent transactions do not affect each other.

relational database: A database created using a relational model. It processes data using methods of set algebra.

archive thread: A thread started when the archive function is enabled on a database. The thread archives database logs to a specified path.

failover: The automatic substitution of a functionally equivalent system component for a failed one. The system component can be a processor, server, network, or database.

environment variable: An environment variable defines a part of the environment in which a process runs. For example, it can define the main directory, the command search path, the terminal in use, or the current time zone.

checkpoint: A mechanism that flushes data in the database memory to disks at a certain time. GaussDB(DWS) periodically stores the data of committed and uncommitted transactions to disks. The data and redo logs can be used for database restoration if the database restarts or breaks down.

encryption: A function hiding information content during data transmission to prevent the unauthorized use of the information.

node: Cluster nodes (or nodes) are the physical and virtual servers that make up the GaussDB(DWS) cluster environment.

error correction: A technique that automatically detects and corrects errors in software and data streams to improve system stability and reliability.

process: An instance of a computer program that is being executed. A process may be made up of multiple threads of execution. Other processes cannot use a thread occupied by the process.

PITR: Point-In-Time Recovery (PITR) is a backup and restoration feature of GaussDB(DWS). Data can be restored to a specified point in time if the backup data and WAL logs are intact.

record: In a relational database, a record corresponds to the data in each row of a table.

cluster: An independent system consisting of servers and other resources, ensuring high availability. In certain conditions, clusters can implement load balancing and concurrent processing of transactions.
K – O

LLVM: The Low Level Virtual Machine (LLVM) is a compiler framework written in C++, designed to optimize the compile time, link time, run time, and idle time of programs written in arbitrary programming languages. It is open to developers and compatible with existing scripts. GaussDB(DWS) LLVM dynamic compilation can be used to generate customized machine code for each query to replace the original general-purpose functions. Query performance is improved by reducing redundant condition checks and virtual function invocations, and by making local data more accurate during actual queries.

LVS: Linux Virtual Server (LVS), a virtual server cluster system, is used for balancing the load of a cluster.

MPP: Massively Parallel Processing (MPP) refers to a cluster architecture that consists of multiple machines. The architecture is also called a cluster system.

MVCC: Multi-Version Concurrency Control (MVCC) is a protocol that allows a tuple to have multiple versions, on which different query operations can be performed. A basic advantage is that read and write operations do not conflict.

NameNode: The NameNode is the centerpiece of a Hadoop file system, managing the namespace of the file system and client access to files.

OLAP: Online analytical processing (OLAP) is the most important application in a data warehouse system. It is dedicated to complex analytical operations, helps decision makers and executives make decisions, and rapidly and flexibly processes complex queries involving large amounts of data based on analysts' requirements. It presents query results in a way that is easy to understand, allowing decision makers to learn the operating status of the enterprise and produce informed, accurate solutions.

OM: Operations Management (OM) provides management interfaces and tools for routine maintenance and configuration management of the cluster.

ORC: Optimized Row Columnar (ORC) is a widely used file format for structured data in a Hadoop system. It was introduced from the Hadoop Hive project.

client: A computer or program that accesses or requests services from another computer or program.

free space management: A mechanism for managing free space in a table. This mechanism enables the database system to record the free space in each table and organize it in an easy-to-search data structure, accelerating operations (such as INSERT) performed on the free space.

cross-cluster: In GaussDB(DWS), users can access data in other DBMSs through foreign tables or an Extension Connector. Such access is cross-cluster.

junk tuple: A tuple that is deleted using the DELETE or UPDATE statement. When deleting a tuple, GaussDB(DWS) only marks the tuples that are to be cleared. The Vacuum thread periodically clears these junk tuples.

column: An equivalent concept to "field". A database table consists of one or more columns; together they describe all attributes of a record in the table.

logical node: Multiple logical nodes can be installed on the same node. A logical node is a database instance.

schema: A collection of database objects, including logical structures, such as tables, views, sequences, stored procedures, synonyms, indexes, clusters, and database links.

schema file: An SQL file that determines the database structure.
P – T

page: The minimum memory unit for row storage in the GaussDB(DWS) relational object structure. The default size of a page is 8 KB.

PostgreSQL: An open-source DBMS developed by volunteers all over the world. PostgreSQL is not controlled by any company or individual, and its source code can be used for free.

Postgres-XC: An open-source PostgreSQL cluster project that provides a write-scalable, synchronous, multi-master PostgreSQL cluster solution.

Postmaster: A thread started when the database service is started. It listens to connection requests from other nodes in the cluster or from clients. After receiving and accepting a connection request from the standby server, the primary server creates a WAL Sender thread to interact with the standby server.

RHEL: Red Hat Enterprise Linux (RHEL)

redo log: A log that contains the information required for performing an operation again in a database. If a database is faulty, redo logs can be used to restore the database to its original state.

SCTP: The Stream Control Transmission Protocol (SCTP) is a transport-layer protocol defined by the Internet Engineering Task Force (IETF) in 2000. The protocol ensures the reliability of datagram transport on top of unreliable transport services by transferring SCN narrowband signaling over IP networks.

savepoint: A savepoint marks the end of a sub-transaction (also known as a nested transaction) in a relational DBMS. The process of a long transaction can be divided into several parts; after a part is successfully executed, a savepoint is created. If a later execution fails, the transaction is rolled back to the savepoint instead of being rolled back entirely. This helps database applications recover from complicated errors: if an error occurs in a multi-statement transaction, the application can recover by rolling back to the savepoint without terminating the entire transaction.

session: A task created by a database for a connection when an application attempts to connect to the database. Sessions are managed by the session manager. They execute initial tasks to perform all user operations.

shared-nothing architecture: A distributed computing architecture in which none of the nodes share CPUs or storage resources. This architecture has good scalability.

SLES: SUSE Linux Enterprise Server (SLES) is an enterprise Linux OS provided by SUSE.

SMP: Symmetric multiprocessing (SMP) lets multiple CPUs run on a computer and share the same memory and bus. To achieve high performance on an SMP system, an OS must support multi-tasking and multi-threading. In databases, SMP means executing queries concurrently using multi-threading, efficiently using all CPU resources and improving query performance.

SQL: Structured Query Language (SQL) is a standard database query language. It consists of DDL, DML, and DCL.

SSL: Secure Socket Layer (SSL) is a network security protocol introduced by Netscape. SSL is a security protocol based on the TCP/IP communications protocols and uses public key technology. SSL supports a wide range of networks and provides three basic security services, all of which use public key technology. SSL ensures the security of service communication over the network by establishing a secure connection between the client and server and then sending data through this connection.

convergence ratio: The downlink-to-uplink bandwidth ratio of a switch. A high convergence ratio indicates a highly converged traffic environment and severe packet loss.

TCP: The Transmission Control Protocol (TCP) sends and receives data through the IP protocol. It splits data into packets for sending, and checks and reassembles received packets to obtain the original information. TCP is a connection-oriented, reliable protocol that ensures the correctness of information in transmission.

trace: A way of logging that records information about the way a program is executed. This information is typically used by programmers for debugging purposes. System administrators and technical support can diagnose common problems by using software monitoring tools together with this information.

full backup: A backup of the entire database cluster.

full synchronization: A data synchronization mechanism specified in the GaussDB(DWS) HA solution, used to synchronize all data from the primary server to a standby server.

log file: A file to which a computer system writes a record of its activities.

transaction: A logical unit of work performed within a DBMS against a database. A transaction consists of a limited database operation sequence and must have the ACID features.

data: A representation of facts or directives for manual or automatic communication, explanation, or processing. Data includes constants, variables, arrays, and strings.

data redistribution: A process whereby a data table is redistributed among nodes after users change the data distribution mode.

data distribution: A mode in which table data is split and stored on each database instance in a distributed system. Table data can be distributed in hash, replication, or random mode. In hash mode, a hash value is calculated based on the value of a specified column in a tuple, and the target storage location of the tuple is determined based on the mapping between nodes and hash values. In replication mode, tuples are replicated to all nodes. In random mode, data is randomly distributed to the nodes.

data partitioning: A division of a logical database or its constituent elements into multiple parts (partitions) whose data does not overlap, based on specified ranges. Data is mapped to storage locations based on the value ranges of specific columns in a tuple.

database: A collection of data that is stored together and can be accessed, managed, and updated. Data in a database can be classified, by view, into the following types: numerals, full text, digits, and images.

DB instance: A database instance consists of a process in GaussDB(DWS) and the files controlled by the process. GaussDB(DWS) installs multiple database instances on one physical node. The GTM, CM, CN, and DN installed on cluster nodes are all database instances. A database instance is also called a logical node.

database HA: GaussDB(DWS) provides a highly reliable HA solution. Every logical node in GaussDB(DWS) is identified as a primary or standby node; only one GaussDB(DWS) node is identified as primary at a time. When the HA system is deployed for the first time, the primary server synchronizes all data to each standby server (full synchronization). The HA system then synchronizes only data that is new or has been modified (incremental synchronization). While the HA system is running, the primary server can receive data read and write requests and the standby servers only synchronize logs.

database file: A binary file that stores user data and the data inside the database system.

data flow operator: An operator that exchanges data among query fragments. By their input/output relationships, data flows can be categorized into Gather, Broadcast, and Redistribution flows. Gather combines the data of multiple query fragments into one. Broadcast forwards the data of one query fragment to multiple query fragments. Redistribution reorganizes the data of multiple query fragments and redistributes the reorganized data to multiple query fragments.

data dictionary: A reserved table within a database that stores information about the database itself. The information includes database design information, stored procedure information, user rights, user statistics, database process information, database growth statistics, and database performance statistics.

deadlock: Unresolved contention for the use of resources.

index: An ordered data structure in the database management system. An index accelerates the querying and updating of data in database tables.

statistics: Information that is automatically collected by databases, including table-level information (number of tuples and number of pages) and column-level information (histogram of the column value distribution). Statistics are used to estimate the cost of execution plans and find the plan with the lowest cost.

stop word: In computing, stop words are words that are filtered out before or after processing of natural language data (text), saving storage space and improving search efficiency.
U – Z

vacuum: A thread that is periodically started by a database to clear junk tuples. Multiple Vacuum threads can be started concurrently by setting a parameter.

verbose: The VERBOSE option specifies the information to be displayed.

WAL: Write-ahead logging (WAL) is a standard method for logging a transaction. The corresponding log must be written to a permanent device before a data file (the carrier for a table or index) is modified.

WAL Receiver: A thread created by the standby server during database replication. The thread is used to receive data and commands from the primary server and to tell the primary server that the data and commands have been acknowledged. Only one WAL Receiver thread can run on a standby server.

WAL Sender: A thread created on the primary server when it receives a connection request from a standby server during database replication. This thread is used to send data and commands to standby servers and to receive responses from them. Multiple WAL Sender threads may run on one primary server; each corresponds to a connection request initiated by a standby server.

WAL Writer: A thread for writing redo logs, created when a database is started. This thread writes logs in memory to a permanent device, such as a disk.

WLM: The WorkLoad Manager (WLM) is a module for controlling and allocating system resources in GaussDB(DWS).

Xlog: A transaction log. A logical node can have only one Xlog file.

xDR: X detailed record. It refers to detailed records on the user and signaling planes and can be categorized into charging data records (CDRs), user flow data records (UFDRs), transaction detail records (TDRs), and signaling data records (SDRs).

network backup: Network backup provides a comprehensive and flexible data protection solution for Microsoft Windows, UNIX, and Linux platforms. Network backup can back up, archive, and restore files, folders, directories, volumes, and partitions on a computer.

physical node: A physical machine or device.

system catalog: A table storing meta information about the database. The meta information includes user tables, indexes, columns, functions, and the data types in a database.

pushdown: GaussDB(DWS) is a distributed database in which a CN can send a query plan to multiple DNs for parallel execution. This CN behavior is called pushdown. It achieves better query performance than extracting data to the CN for the query.

compression: Data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost. Lossy compression reduces bits by identifying and removing unnecessary or unimportant information. The process of reducing the size of a data file is commonly referred to as data compression, although its formal name is source coding (coding done at the source of the data, before it is stored or transmitted).

consistency: One of the ACID features of database transactions. Consistency is a database status in which data in the database must comply with integrity constraints.

metadata: Data that provides information about other data. Metadata describes the source, size, format, or other characteristics of data. In database columns, metadata explains the content of a data warehouse.

atomicity: One of the ACID features of database transactions. Atomicity means that a transaction is composed of an indivisible unit of work; all operations performed in a transaction are either committed or not committed at all. If an error occurs during transaction execution, the transaction is rolled back to its uncommitted state.

online scale-out: Online scale-out means that, during redistribution in GaussDB(DWS), data can still be saved to the database and query services are not interrupted.

dirty page: A page that has been modified and not yet written to a permanent device.

incremental backup: An incremental backup stores all files changed since the last valid backup.

incremental synchronization: A data synchronization mechanism in the GaussDB(DWS) HA solution. Only data modified since the last synchronization is synchronized to the standby server.

host: A node that receives data read and write operations in the GaussDB(DWS) HA system and works with all standby servers. At any time, only one node in the HA system is identified as the primary server.

thesaurus: Standardized words or phrases that express document themes and are used for indexing and retrieval.

dump file: A specific type of trace file. A dump is typically a one-time output of diagnostic data in response to an event, whereas a trace tends to be continuous output of diagnostic data.

resource pool: A mechanism used for allocating resources in GaussDB(DWS). By binding a user to a resource pool, you can limit the priority of the jobs executed by the user and the resources available to those jobs.

tenant: A database service user who runs services using allocated computing (CPU, memory, and I/O) and storage resources. Service level agreements (SLAs) are met through resource management and isolation.

minimum restoration point: A method used by GaussDB(DWS) to ensure data consistency. During startup, GaussDB(DWS) checks consistency between the latest WAL logs and the minimum restoration point. If the record location of the minimum restoration point is greater than that of the latest WAL logs, the database fails to start.
GS_VIEW_DEPENDENCY_PATH allows you to query the direct dependencies of all views visible to the current user. If the base table on which a view depends exists and the dependencies between views at different levels are normal, you can use this view to query the dependencies between views at different levels, starting from the base table.

Column | Type | Description
---|---|---
objschema | name | View space name
objname | name | View name
refobjschema | name | Name of the space where the dependent object resides
refobjname | name | Name of the dependent object
path | text | Dependency path
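A typical lookup traces how a single view depends on lower-level objects (the view name here is hypothetical):

-- Query the dependency chain of one view; the view name is hypothetical.
SELECT objschema, objname, refobjschema, refobjname, path
FROM GS_VIEW_DEPENDENCY_PATH
WHERE objname = 'sales_summary_view';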
You can create foreign tables to perform associated queries and import data between clusters.

Create a server object that describes how to connect to the remote cluster:
CREATE SERVER server_remote FOREIGN DATA WRAPPER GC_FDW OPTIONS
(
    address '10.180.157.231:8000,10.180.157.130:8000',
    dbname 'gaussdb',
    username 'xyz',
    password 'xxxxxx'
);

Create a foreign table that maps a table in the remote cluster:
CREATE FOREIGN TABLE region
(
    R_REGIONKEY INT4,
    R_NAME TEXT,
    R_COMMENT TEXT
)
SERVER server_remote
OPTIONS
(
    schema_name 'test',
    table_name 'region',
    encoding 'gbk'
);

View the created foreign table:
\d+ region
                               Foreign table "public.region"
   Column    |  Type   | Modifiers | FDW Options | Storage  | Stats target | Description
-------------+---------+-----------+-------------+----------+--------------+-------------
 r_regionkey | integer |           |             | plain    |              |
 r_name      | text    |           |             | extended |              |
 r_comment   | text    |           |             | extended |              |
Server: server_remote
FDW Options: (schema_name 'test', table_name 'region', encoding 'gbk')
FDW permition: read only
Has OIDs: no
Distribute By: ROUND ROBIN
Location Nodes: ALL DATANODES

View the created server object:
\des+ server_remote
                                List of foreign servers
     Name      |  Owner  | Foreign-data wrapper | Access privileges | Type | Version | FDW Options | Description
---------------+---------+----------------------+-------------------+------+---------+-------------+-------------
 server_remote | dbadmin | gc_fdw               |                   |      |         | (address '10.180.157.231:8000,10.180.157.130:8000', dbname 'gaussdb', username 'xyz', password 'xxxxxx') |
(1 row)

Import data from the foreign table into a local table:
CREATE TABLE local_region
(
    R_REGIONKEY INT4,
    R_NAME TEXT,
    R_COMMENT TEXT
);
INSERT INTO local_region SELECT * FROM region;

Run an associated query across the local and foreign tables:
SELECT * FROM region, local_region WHERE local_region.R_NAME = region.R_NAME;

Drop the foreign table when it is no longer needed:
DROP FOREIGN TABLE region;
To improve the cluster performance, you can use multiple methods to optimize the database, including hardware configuration, software driver upgrade, and internal parameter adjustment of the database. This section describes some common parameters and recommended configurations.
The SMP architecture trades resources for time: after a plan is parallelized, it consumes more CPU, memory, I/O, and network bandwidth, and the consumption grows with the degree of parallelism (DOP).

The SMP DOP can be configured at the session level. You are advised to enable SMP before running a query that benefits from it and to disable SMP after the query completes; otherwise, SMP may affect services in peak hours.

The default value of query_dop is 1. For example, you can set query_dop to 10 to enable SMP in a session, as shown in the sketch below.
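A minimal sketch of session-level SMP control; the query in the middle is a hypothetical placeholder:

```sql
SET query_dop = 10;          -- enable SMP for this session
SELECT COUNT(*) FROM sales;  -- hypothetical query that benefits from parallelism
SET query_dop = 1;           -- restore the default once the query completes
```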
Dynamic load management refers to the automatic queuing of complex queries based on user loads in the database. This fine-tunes system parameters without manual adjustment.

This parameter is enabled by default. Notes:

When this parameter is enabled, you can set query_dop to 0 (adaptive). The system then dynamically selects the optimal DOP between 1 and 8 for each query based on resource usage and plan characteristics. The enable_dynamic_workload parameter also supports dynamic memory allocation.
Specifies the maximum number of concurrent jobs. This parameter applies to all jobs on one CN.

Set this parameter based on system resources (CPU, I/O, and memory) so that the resources are fully utilized without the system being overloaded by excessive concurrent jobs.
By default, if a client stays idle after connecting to a database, it is automatically disconnected once the duration specified by this parameter elapses.

You are advised to set this parameter to 0, which disables the timeout and prevents disconnections caused by it, as shown in the sketch below.
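A minimal sketch, assuming the idle-disconnection parameter described above is session_timeout (the parameter is not named in the surrounding text):

```sql
SET session_timeout = 0;  -- assumed parameter name; 0 disables the idle timeout
```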
max_process_memory is a logical memory management parameter. It controls the maximum memory available on a single CN or DN.

Formula: max_process_memory = Physical memory x 0.665 / (1 + Number of primary DNs). For example, on a 256 GB server with four primary DNs, max_process_memory = 256 GB x 0.665 / 5 ≈ 34 GB.
Specifies the size of the shared memory used by GaussDB(DWS). Increasing this value makes GaussDB(DWS) require more System V shared memory than the default system setting allows.

shared_buffers is used for scanning row-store tables. You are advised to set it to less than 40% of the memory. Formula: shared_buffers = (Memory of a single server / Number of DNs on a single server) x 0.4 x 0.25.
Specifies the size of the shared buffer used by column-store tables and by the column-store formats (ORC, Parquet, and CarbonData) of OBS and HDFS foreign tables.

The calculation formula is the same as that of shared_buffers.
Specifies the amount of memory used by internal sort operations and hash tables before data is written into temporary disk files.

Sort operations are required by ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.

In a complex query, several sort or hash operations may run in parallel, and each is allowed to use as much memory as this parameter specifies before spilling to temporary files. In addition, several running sessions may perform such operations concurrently. Therefore, the total memory used may be many times the value of work_mem.

The formulas are as follows:

For non-concurrent complex serial queries, each query typically involves five to ten associated operations. Configure work_mem using: work_mem = 50% of the memory / 10.

For non-concurrent simple serial queries, each query typically involves two to five associated operations. Configure work_mem using: work_mem = 50% of the memory / 5.

For concurrent queries, configure work_mem using: work_mem = work_mem for serial queries / Number of concurrent SQL statements. A session-level override is sketched below.
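A minimal sketch of a session-level work_mem override for a single memory-intensive query; the value is an illustrative example, not a recommendation:

```sql
SET work_mem = '1GB';   -- example value; applies only to the current session
-- run the sort- or hash-heavy query here
RESET work_mem;         -- return to the server default
```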
maintenance_work_mem specifies the maximum amount of memory used by maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

Setting suggestions:

Setting this parameter to a value greater than work_mem allows database dump files to be cleaned up and restored more efficiently. In a database session, only one maintenance operation runs at a time, and maintenance is usually performed when there are not many sessions.

When the automatic cleanup process is running, up to autovacuum_max_workers times this memory may be allocated. In this case, set maintenance_work_mem to a value greater than or equal to work_mem.
Specifies the size of the ring buffer used for parallel data import.

This parameter affects database import performance. You are advised to increase its value on DNs when a large amount of data is to be imported.
Specifies the maximum number of concurrent connections to the database (max_connections). This parameter affects the concurrent processing capability of the cluster.

Setting suggestions:

Retain the default value of this parameter on CNs. On DNs, set it to a value calculated using this formula: Number of CNs x value of this parameter on a CN.

Increasing this value may make GaussDB(DWS) require more System V shared memory or semaphores than the default maximum of the OS allows. In that case, modify the OS limits as needed.
Specifies the maximum number of transactions that can stay in the prepared state simultaneously (max_prepared_transactions). Increasing this value makes GaussDB(DWS) require more System V shared memory than the default system setting allows.

max_connections is related to max_prepared_transactions: before configuring max_connections, ensure that max_prepared_transactions is greater than or equal to max_connections, so that every session can have a prepared transaction in the waiting state.
Specifies the target for checkpoint completion.

Each checkpoint must be completed within the specified fraction of the checkpoint interval.

The default value is 0.5, that is, within 50% of the interval. To improve performance, you can change the value to 0.9.
Specifies the memory used by queues when the sender sends data pages to the receiver. The value of this parameter affects the buffer size used for replication from the primary server to the standby server.

The default value is 128 MB. If the server memory is 256 GB, you can increase the value to 512 MB.
Specifies the size of the memory buffer in which the standby and secondary servers store received XLOG files.

The default value is 64 MB. If the server memory is 256 GB, you can increase the value to 128 MB.
| GaussDB(DWS) | Java |
|---|---|
| BOOLEAN | boolean |
| "char" | byte |
| bytea | byte[] |
| SMALLINT | short |
| INTEGER | int |
| BIGINT | long |
| FLOAT4 | float |
| FLOAT8 | double |
| CHAR | java.lang.String |
| VARCHAR | java.lang.String |
| TEXT | java.lang.String |
| name | java.lang.String |
| DATE | java.sql.Timestamp |
| TIME | java.sql.Time (stored value treated as local time) |
| TIMETZ | java.sql.Time |
| TIMESTAMP | java.sql.Timestamp |
| TIMESTAMPTZ | java.sql.Timestamp |
Any error that occurs in a PL/pgSQL function aborts the execution of the function and the related transaction. You can use a BEGIN block with an EXCEPTION clause to catch and handle errors, as in the sketch below.
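A minimal sketch of trapping an error inside a function; safe_divide is a hypothetical name:

```sql
CREATE OR REPLACE FUNCTION safe_divide(a INT, b INT) RETURNS INT AS $$
BEGIN
    RETURN a / b;
EXCEPTION
    WHEN division_by_zero THEN
        -- the error is caught here instead of aborting the function and its transaction
        RETURN NULL;
END;
$$ LANGUAGE plpgsql;

SELECT safe_divide(10, 0);  -- returns NULL instead of raising an error
```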
Each line shall contain only one statement. When assigning initial values, write them on the same line.

In statements used for creating a stored procedure, the keywords CREATE, AS/IS, BEGIN, and END at the same level shall have the same indentation.
GaussDB(DWS) supports encryption and decryption of strings using the following functions:

gs_encrypt(encryptstr, keystr, cryptotype, cryptomode, hashmethod)

Description: Encrypts an encryptstr string using the keystr key, the encryption algorithm specified by cryptotype and cryptomode, and the HMAC algorithm specified by hashmethod, and returns the encrypted string. cryptotype can be aes128, aes192, aes256, or sm4. cryptomode is cbc. hashmethod can be sha256, sha384, sha512, or sm3. The following types of data can be encrypted: numeric types supported in the database; character types; RAW among the binary types; and DATE, TIMESTAMP, and SMALLDATETIME among the date/time types. The keystr length depends on the encryption algorithm and contains 1 to KeyLen bytes: KeyLen is 16 if cryptotype is aes128 or sm4, 24 if aes192, and 32 if aes256.

Return type: text

Length of the return value: at least 4 x [(maclen + 56)/3] bytes and no more than 4 x [(Len + maclen + 56)/3] bytes, where Len is the string length (in bytes) before encryption and maclen is the length of the HMAC value: 32 if hashmethod is sha256 or sm3, 48 if sha384, and 64 if sha512. That is, the returned string contains 120 to 4 x [(Len + 88)/3] bytes for sha256 or sm3, 140 to 4 x [(Len + 104)/3] bytes for sha384, and 160 to 4 x [(Len + 120)/3] bytes for sha512.

Example:

```sql
SELECT gs_encrypt('GaussDB(DWS)', '1234', 'aes128', 'cbc', 'sha256');
                                                         gs_encrypt
--------------------------------------------------------------------------------------------------------------------------
 AAAAAAAAAACcFjDcCSbop7D87sOa2nxTFrkE9RJQGK34ypgrOPsFJIqggI8tl+eMDcQYT3po98wPCC7VBfhv7mdBy7IVnzdrp0rdMrD6/zTl8w0v9/s2OA==
(1 row)
```
gs_decrypt(decryptstr, keystr, cryptotype, cryptomode, hashmethod)

Description: Decrypts a decryptstr string using the keystr key, the encryption algorithm specified by cryptotype and cryptomode, and the HMAC algorithm specified by hashmethod, and returns the decrypted string. The keystr used for decryption must be the same as that used for encryption. keystr cannot be empty.

Return type: text

Example:

```sql
SELECT gs_decrypt('AAAAAAAAAACcFjDcCSbop7D87sOa2nxTFrkE9RJQGK34ypgrOPsFJIqggI8tl+eMDcQYT3po98wPCC7VBfhv7mdBy7IVnzdrp0rdMrD6/zTl8w0v9/s2OA==', '1234', 'aes128', 'cbc', 'sha256');
  gs_decrypt
--------------
 GaussDB(DWS)
(1 row)
```
gs_encrypt_aes128(encryptstr, keystr)

Description: Encrypts an encryptstr string using keystr as the key and returns the encrypted string. The length of keystr ranges from 1 to 16 bytes. The following types of data can be encrypted: numeric types supported in the database; character types; RAW among the binary types; and DATE, TIMESTAMP, and SMALLDATETIME among the date/time types.

Return type: text

Length of the return value: at least 92 bytes and no more than 4 x [Len/3] + 68 bytes, where Len is the length of the data before encryption (in bytes).

Example:

```sql
SELECT gs_encrypt_aes128('MPPDB', '1234');
                                  gs_encrypt_aes128
----------------------------------------------------------------------------------------------
 gwditQLQG8NhFw4OuoKhhQJoXojhFlYkjeG0aYdSCtLCnIUgkNwvYI04KbuhmcGZp8jWizBdR1vU9CspjuzI0lbz12A=
(1 row)
```
gs_decrypt_aes128(decryptstr, keystr)

Description: Decrypts a decryptstr string using the keystr key and returns the decrypted string. The keystr used for decryption must be the same as that used for encryption. keystr cannot be empty.

Return type: text

Example:

```sql
SELECT gs_decrypt_aes128('gwditQLQG8NhFw4OuoKhhQJoXojhFlYkjeG0aYdSCtLCnIUgkNwvYI04KbuhmcGZp8jWizBdR1vU9CspjuzI0lbz12A=', '1234');
 gs_decrypt_aes128
-------------------
 MPPDB
(1 row)
```
gs_hash(hashstr, hashmethod)

Description: Obtains the digest string of a hashstr string based on the algorithm specified by hashmethod. hashmethod can be sha256, sha384, sha512, or sm3. This function is supported by clusters of version 8.1.1 or later.

Return type: text

Length of the return value: 64 bytes if hashmethod is sha256 or sm3; 96 bytes if sha384; 128 bytes if sha512.

Example:

```sql
SELECT gs_hash('GaussDB(DWS)', 'sha256');
                                              gs_hash
--------------------------------------------------------------------------------------------------
 e59069daa6541ae20af7c747662702c731b26b8abd7a788f4d15611aa0db608efdbb5587ba90789a983f85dd51766609
(1 row)
```
MPP_TABLES displays information about the tables in PGXC_CLASS.

| Name | Type | Description |
|---|---|---|
| schemaname | name | Name of the schema that contains the table |
| tablename | name | Name of the table |
| tableowner | name | Owner of the table |
| tablespace | name | Tablespace where the table is located |
| pgroup | name | Name of a node cluster |
| nodeoids | oidvector_extend | List of distributed table node OIDs |
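A minimal query sketch against this view; the schema filter 'public' is an example value:

```sql
-- Show how tables in the public schema are distributed across nodes
SELECT tablename, pgroup, nodeoids
FROM mpp_tables
WHERE schemaname = 'public';
```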
| Released On | Description |
|---|---|
| 2022-11-17 | This is the first official release, which adapts to DWS 8.1.1.202. Feature changes: GUC parameters: added enable_light_colupdate, bi_page_reuse_factor, expand_hashtable_ratio, query_dop_ratio, enable_row_fast_numeric, enable_view_update, and enable_grant_option. Functions and operators: added pgxc_wlm_get_schema_space(cstring), pgxc_wlm_analyze_schema_space(cstring), median(expression), gs_password_expiration, pgxc_get_lock_conflicts(), percentile_disc(const) within group(order by expression), and percentile_cont(const) within group(order by expression). System views: added PGXC_TOTAL_SCHEMA_INFO, PGXC_TOTAL_SCHEMA_INFO_ANALYZE, GS_WLM_SQL_ALLOW, PGXC_BULKLOAD_PROGRESS, PGXC_BULKLOAD_STATISTICS, and PG_BULKLOAD_STATISTICS. Keywords: added EXPIRATION, IFNULL, and TIMESTAMPDIFF. CREATE REDACTION POLICY: added custom data redaction. Syntax compatibility differences among Oracle, Teradata, and MySQL: added MySQL syntax compatibility differences. |
SQL is a standard computer language used to control access to databases and manage data in databases.

SQL provides different types of statements for querying data, manipulating data, defining database objects, and controlling access.

SQL consists of commands and functions that are used to manage databases and database objects. SQL can also enforce rules for data types, expressions, and text. Therefore, the "SQL Reference" section describes data types, expressions, functions, and operators in addition to the SQL syntax.

SQL standards have been released in multiple versions over the years. GaussDB(DWS) is compatible with Postgres-XC features and supports the major features of SQL2, SQL3, and SQL4 by default.
GaussDB(DWS) gsql differs from PostgreSQL psql in that gsql makes a number of changes to enhance security.

gsql also provides additional functions based on psql.

During the development of certain GaussDB(DWS) components, such as the gsql client connection tool, PostgreSQL libpq is greatly modified. However, the libpq interfaces have not been verified for application development, and using them may carry underlying risks. You are advised not to use this set of APIs for application development; use the ODBC or JDBC APIs instead.
For details about the data types supported by GaussDB(DWS), see Data Types. Some PostgreSQL data types are not supported.

For details about the functions supported by GaussDB(DWS), see Functions and Operators. Some PostgreSQL functions are not supported.
SQL has reserved and non-reserved keywords. The standards require that reserved keywords not be used as other identifiers. Non-reserved keywords have special meanings only in specific contexts and can be used as identifiers in other contexts.
| Keyword | GaussDB(DWS) | SQL:1999 | SQL-92 |
|---|---|---|---|
| ABORT | Non-reserved | - | - |
| ABS | - | Non-reserved | - |
| ABSOLUTE | Non-reserved | Reserved | Reserved |
| ACCESS | Non-reserved | - | - |
| ACCOUNT | Non-reserved | - | - |
| ACTION | Non-reserved | Reserved | Reserved |
| ADA | - | Non-reserved | Non-reserved |
| ADD | Non-reserved | Reserved | Reserved |
| ADMIN | Non-reserved | Reserved | - |
| AFTER | Non-reserved | Reserved | - |
| AGGREGATE | Non-reserved | Reserved | - |
| ALIAS | - | Reserved | - |
| ALL | Reserved | Reserved | Reserved |
| ALLOCATE | - | Reserved | Reserved |
| ALSO | Non-reserved | - | - |
| ALTER | Non-reserved | Reserved | Reserved |
| ALWAYS | Non-reserved | - | - |
| ANALYSE | Reserved | - | - |
| ANALYZE | Reserved | - | - |
| AND | Reserved | Reserved | Reserved |
| ANY | Reserved | Reserved | Reserved |
| APP | Non-reserved | - | - |
| ARE | - | Reserved | Reserved |
| ARRAY | Reserved | Reserved | - |
| AS | Reserved | Reserved | Reserved |
| ASC | Reserved | Reserved | Reserved |
| ASENSITIVE | - | Non-reserved | - |
| ASSERTION | Non-reserved | Reserved | Reserved |
| ASSIGNMENT | Non-reserved | Non-reserved | - |
| ASYMMETRIC | Reserved | Non-reserved | - |
| AT | Non-reserved | Reserved | Reserved |
| ATOMIC | - | Non-reserved | - |
| ATTRIBUTE | Non-reserved | - | - |
| AUTHID | Reserved | - | - |
| AUTHINFO | Non-reserved | - | - |
| AUTHORIZATION | Reserved (functions and types allowed) | Reserved | Reserved |
| AUTOEXTEND | Non-reserved | - | - |
| AUTOMAPPED | Non-reserved | - | - |
| AVG | - | Non-reserved | Reserved |
| BACKWARD | Non-reserved | - | - |
| BARRIER | Non-reserved | - | - |
| BEFORE | Non-reserved | Reserved | - |
| BEGIN | Non-reserved | Reserved | Reserved |
| BETWEEN | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| BIGINT | Non-reserved (excluding functions and types) | - | - |
| BINARY | Reserved (functions and types allowed) | Reserved | - |
| BINARY_DOUBLE | Non-reserved (excluding functions and types) | - | - |
| BINARY_INTEGER | Non-reserved (excluding functions and types) | - | - |
| BIT | Non-reserved (excluding functions and types) | Reserved | Reserved |
| BITVAR | - | Non-reserved | - |
| BIT_LENGTH | - | Non-reserved | Reserved |
| BLOB | Non-reserved | Reserved | - |
| BOOLEAN | Non-reserved (excluding functions and types) | Reserved | - |
| BOTH | Reserved | Reserved | Reserved |
| BUCKETS | Reserved | - | - |
| BREADTH | - | Reserved | - |
| BY | Non-reserved | Reserved | Reserved |
| C | - | Non-reserved | Non-reserved |
| CACHE | Non-reserved | - | - |
| CALL | Non-reserved | Reserved | - |
| CALLED | Non-reserved | Non-reserved | - |
| CARDINALITY | - | Non-reserved | - |
| CASCADE | Non-reserved | Reserved | Reserved |
| CASCADED | Non-reserved | Reserved | Reserved |
| CASE | Reserved | Reserved | Reserved |
| CAST | Reserved | Reserved | Reserved |
| CATALOG | Non-reserved | Reserved | Reserved |
| CATALOG_NAME | - | Non-reserved | Non-reserved |
| CHAIN | Non-reserved | Non-reserved | - |
| CHAR | Non-reserved (excluding functions and types) | Reserved | Reserved |
| CHARACTER | Non-reserved (excluding functions and types) | Reserved | Reserved |
| CHARACTERISTICS | Non-reserved | - | - |
| CHARACTER_LENGTH | - | Non-reserved | Reserved |
| CHARACTER_SET_CATALOG | - | Non-reserved | Non-reserved |
| CHARACTER_SET_NAME | - | Non-reserved | Non-reserved |
| CHARACTER_SET_SCHEMA | - | Non-reserved | Non-reserved |
| CHAR_LENGTH | - | Non-reserved | Reserved |
| CHECK | Reserved | Reserved | Reserved |
| CHECKED | - | Non-reserved | - |
| CHECKPOINT | Non-reserved | - | - |
| CLASS | Non-reserved | Reserved | - |
| CLEAN | Non-reserved | - | - |
| CLASS_ORIGIN | - | Non-reserved | Non-reserved |
| CLOB | Non-reserved | Reserved | - |
| CLOSE | Non-reserved | Reserved | Reserved |
| CLUSTER | Non-reserved | - | - |
| COALESCE | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| COBOL | - | Non-reserved | Non-reserved |
| COLLATE | Reserved | Reserved | Reserved |
| COLLATION | Reserved (functions and types allowed) | Reserved | Reserved |
| COLLATION_CATALOG | - | Non-reserved | Non-reserved |
| COLLATION_NAME | - | Non-reserved | Non-reserved |
| COLLATION_SCHEMA | - | Non-reserved | Non-reserved |
| COLUMN | Reserved | Reserved | Reserved |
| COLUMNS | Non-reserved | - | - |
| COLUMN_NAME | - | Non-reserved | Non-reserved |
| COMMAND_FUNCTION | - | Non-reserved | Non-reserved |
| COMMAND_FUNCTION_CODE | - | Non-reserved | - |
| COMMENT | Non-reserved | - | - |
| COMMENTS | Non-reserved | - | - |
| COMMIT | Non-reserved | Reserved | Reserved |
| COMMITTED | Non-reserved | Non-reserved | Non-reserved |
| COMPATIBLE_ILLEGAL_CHARS | Non-reserved | - | - |
| COMPLETE | Non-reserved | - | - |
| COMPRESS | Non-reserved | - | - |
| COMPLETION | - | Reserved | - |
| CONCURRENTLY | Reserved (functions and types allowed) | - | - |
| CONDITION | - | - | - |
| CONDITION_NUMBER | - | Non-reserved | Non-reserved |
| CONFIGURATION | Non-reserved | - | - |
| CONNECT | - | Reserved | Reserved |
| CONNECTION | Non-reserved | Reserved | Reserved |
| CONNECTION_NAME | - | Non-reserved | Non-reserved |
| CONSTRAINT | Reserved | Reserved | Reserved |
| CONSTRAINTS | Non-reserved | Reserved | Reserved |
| CONSTRAINT_CATALOG | - | Non-reserved | Non-reserved |
| CONSTRAINT_NAME | - | Non-reserved | Non-reserved |
| CONSTRAINT_SCHEMA | - | Non-reserved | Non-reserved |
| CONSTRUCTOR | - | Reserved | - |
| CONTAINS | - | Non-reserved | - |
| CONTENT | Non-reserved | - | - |
| CONTINUE | Non-reserved | Reserved | Reserved |
| CONVERSION | Non-reserved | - | - |
| CONVERT | - | Non-reserved | Reserved |
| COORDINATOR | Non-reserved | - | - |
| COPY | Non-reserved | - | - |
| CORRESPONDING | - | Reserved | Reserved |
| COST | Non-reserved | - | - |
| COUNT | - | Non-reserved | Reserved |
| CREATE | Reserved | Reserved | Reserved |
| CROSS | Reserved (functions and types allowed) | Reserved | Reserved |
| CSV | Non-reserved | - | - |
| CUBE | - | Reserved | - |
| CURRENT | Non-reserved | Reserved | Reserved |
| CURRENT_CATALOG | Reserved | - | - |
| CURRENT_DATE | Reserved | Reserved | Reserved |
| CURRENT_PATH | - | Reserved | - |
| CURRENT_ROLE | Reserved | Reserved | - |
| CURRENT_SCHEMA | Reserved (functions and types allowed) | - | - |
| CURRENT_TIME | Reserved | Reserved | Reserved |
| CURRENT_TIMESTAMP | Reserved | Reserved | Reserved |
| CURRENT_USER | Reserved | Reserved | Reserved |
| CURSOR | Non-reserved | Reserved | Reserved |
| CURSOR_NAME | - | Non-reserved | Non-reserved |
| CYCLE | Non-reserved | Reserved | - |
| DATA | Non-reserved | Reserved | Non-reserved |
| DATE_FORMAT | Non-reserved | - | - |
| DATABASE | Non-reserved | - | - |
| DATAFILE | Non-reserved | - | - |
| DATE | Non-reserved (excluding functions and types) | Reserved | Reserved |
| DATETIME_INTERVAL_CODE | - | Non-reserved | Non-reserved |
| DATETIME_INTERVAL_PRECISION | - | Non-reserved | Non-reserved |
| DAY | Non-reserved | Reserved | Reserved |
| DBCOMPATIBILITY | Non-reserved | - | - |
| DEALLOCATE | Non-reserved | Reserved | Reserved |
| DEC | Non-reserved (excluding functions and types) | Reserved | Reserved |
| DECIMAL | Non-reserved (excluding functions and types) | Reserved | Reserved |
| DECLARE | Non-reserved | Reserved | Reserved |
| DECODE | Non-reserved (excluding functions and types) | - | - |
| DEFAULT | Reserved | Reserved | Reserved |
| DEFAULTS | Non-reserved | - | - |
| DEFERRABLE | Reserved | Reserved | Reserved |
| DEFERRED | Non-reserved | Reserved | Reserved |
| DEFINED | - | Non-reserved | - |
| DEFINER | Non-reserved | Non-reserved | - |
| DELETE | Non-reserved | Reserved | Reserved |
| DELIMITER | Non-reserved | - | - |
| DELIMITERS | Non-reserved | - | - |
| DELTA | Non-reserved | - | - |
| DEPTH | - | Reserved | - |
| DEREF | - | Reserved | - |
| DESC | Reserved | Reserved | Reserved |
| DESCRIBE | - | Reserved | Reserved |
| DESCRIPTOR | - | Reserved | Reserved |
| DESTROY | - | Reserved | - |
| DESTRUCTOR | - | Reserved | - |
| DETERMINISTIC | Non-reserved | Reserved | - |
| DIAGNOSTICS | - | Reserved | Reserved |
| DICTIONARY | Non-reserved | Reserved | - |
| DIRECT | Non-reserved | - | - |
| DIRECTORY | Non-reserved | - | - |
| DISABLE | Non-reserved | - | - |
| DISCARD | Non-reserved | - | - |
| DISCONNECT | - | Reserved | Reserved |
| DISPATCH | - | Non-reserved | - |
| DISTINCT | Reserved | Reserved | Reserved |
| DISTRIBUTE | Non-reserved | - | - |
| DISTRIBUTION | Non-reserved | - | - |
| DO | Reserved | - | - |
| DOCUMENT | Non-reserved | - | - |
| DOMAIN | Non-reserved | Reserved | Reserved |
| DOUBLE | Non-reserved | Reserved | Reserved |
| DROP | Non-reserved | Reserved | Reserved |
| DYNAMIC | - | Reserved | - |
| DYNAMIC_FUNCTION | - | Non-reserved | Non-reserved |
| DYNAMIC_FUNCTION_CODE | - | Non-reserved | - |
| EACH | Non-reserved | Reserved | - |
| ELASTIC | Non-reserved | - | - |
| ELSE | Reserved | Reserved | Reserved |
| ENABLE | Non-reserved | - | - |
| ENCODING | Non-reserved | - | - |
| ENCRYPTED | Non-reserved | - | - |
| END | Reserved | Reserved | Reserved |
| END-EXEC | - | Reserved | Reserved |
| ENFORCED | Non-reserved | - | - |
| ENUM | Non-reserved | - | - |
| EOL | Non-reserved | - | - |
| EQUALS | - | Reserved | - |
| ERRORS | Non-reserved | - | - |
| ESCAPE | Non-reserved | Reserved | Reserved |
| ESCAPING | Non-reserved | - | - |
| EVERY | Non-reserved | Reserved | - |
| EXCEPT | Reserved | Reserved | Reserved |
| EXCEPTION | - | Reserved | Reserved |
| EXCHANGE | Non-reserved | - | - |
| EXCLUDE | Non-reserved | - | - |
| EXCLUDING | Non-reserved | - | - |
| EXCLUSIVE | Non-reserved | - | - |
| EXEC | - | Reserved | Reserved |
| EXECUTE | Non-reserved | Reserved | Reserved |
| EXISTING | - | Non-reserved | - |
| EXISTS | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| EXPIRATION | Non-reserved | - | - |
| EXPLAIN | Non-reserved | - | - |
| EXTENSION | Non-reserved | - | - |
| EXTERNAL | Non-reserved | Reserved | Reserved |
| EXTRACT | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| FALSE | Reserved | Reserved | Reserved |
| FAMILY | Non-reserved | - | - |
| FAST | Non-reserved | - | - |
| FENCED | Non-reserved | - | - |
| FETCH | Reserved | Reserved | Reserved |
| FILEHEADER | Non-reserved | - | - |
| FILL_MISSING_FIELDS | Non-reserved | - | - |
| FINAL | - | Non-reserved | - |
| FIRST | Non-reserved | Reserved | Reserved |
| FIXED | Non-reserved | Reserved | Reserved |
| FLOAT | Non-reserved (excluding functions and types) | Reserved | Reserved |
| FOLLOWING | Non-reserved | - | - |
| FOR | Reserved | Reserved | Reserved |
| FORCE | Non-reserved | - | - |
| FOREIGN | Reserved | Reserved | Reserved |
| FORMATTER | Non-reserved | - | - |
| FORTRAN | - | Non-reserved | Non-reserved |
| FORWARD | Non-reserved | - | - |
| FOUND | - | Reserved | Reserved |
| FREE | - | Reserved | - |
| FREEZE | Reserved (functions and types allowed) | - | - |
| FROM | Reserved | Reserved | Reserved |
| FULL | Reserved (functions and types allowed) | Reserved | Reserved |
| FUNCTION | Non-reserved | Reserved | - |
| FUNCTIONS | Non-reserved | - | - |
| G | - | Non-reserved | - |
| GENERAL | - | Reserved | - |
| GENERATED | - | Non-reserved | - |
| GET | - | Reserved | Reserved |
| GLOBAL | Non-reserved | Reserved | Reserved |
| GO | - | Reserved | Reserved |
| GOTO | - | Reserved | Reserved |
| GRANT | Reserved | Reserved | Reserved |
| GRANTED | Non-reserved | Non-reserved | - |
| GREATEST | Non-reserved (excluding functions and types) | - | - |
| GROUP | Reserved | Reserved | Reserved |
| GROUPING | - | Reserved | - |
| HANDLER | Non-reserved | - | - |
| HAVING | Reserved | Reserved | Reserved |
| HEADER | Non-reserved | - | - |
| HIERARCHY | - | Non-reserved | - |
| HOLD | Non-reserved | Non-reserved | - |
| HOST | - | Reserved | - |
| HOUR | Non-reserved | Reserved | Reserved |
| IDENTIFIED | Non-reserved | - | - |
| IDENTITY | Non-reserved | Reserved | Reserved |
| IF | Non-reserved (excluding functions and types) | - | - |
| IFNULL | Non-reserved (excluding functions and types) | - | - |
| IGNORE | - | Reserved | - |
| IGNORE_EXTRA_DATA | Non-reserved | - | - |
| ILIKE | Reserved (functions and types allowed) | - | - |
| IMMEDIATE | Non-reserved | Reserved | Reserved |
| IMMUTABLE | Non-reserved | - | - |
| IMPLEMENTATION | - | Non-reserved | - |
| IMPLICIT | Non-reserved | - | - |
| IN | Reserved | Reserved | Reserved |
| INCLUDING | Non-reserved | - | - |
| INCREMENT | Non-reserved | - | - |
| INDEX | Non-reserved | - | - |
| INDEXES | Non-reserved | - | - |
| INDICATOR | - | Reserved | Reserved |
| INFIX | - | Non-reserved | - |
| INHERIT | Non-reserved | - | - |
| INHERITS | Non-reserved | - | - |
| INITIAL | Non-reserved | - | - |
| INITIALIZE | - | Reserved | - |
| INITIALLY | Reserved | Reserved | Reserved |
| INITRANS | Non-reserved | - | - |
| INLINE | Non-reserved | - | - |
| INNER | Reserved (functions and types allowed) | Reserved | Reserved |
| INOUT | Non-reserved (excluding functions and types) | Reserved | - |
| INPUT | Non-reserved | Reserved | Reserved |
| INSENSITIVE | Non-reserved | Non-reserved | Reserved |
| INSERT | Non-reserved | Reserved | Reserved |
| INSTANCE | - | Non-reserved | - |
| INSTANTIABLE | - | Non-reserved | - |
| INSTEAD | Non-reserved | - | - |
| INT | Non-reserved (excluding functions and types) | Reserved | Reserved |
| INTEGER | Non-reserved (excluding functions and types) | Reserved | Reserved |
| INTERNAL | Reserved | - | - |
| INTERSECT | Reserved | Reserved | Reserved |
| INTERVAL | Non-reserved (excluding functions and types) | Reserved | Reserved |
| INTO | Reserved | Reserved | Reserved |
| INVOKER | Non-reserved | Non-reserved | - |
| IS | Reserved | Reserved | Reserved |
| ISNULL | Non-reserved (excluding functions and types) | - | - |
| ISOLATION | Non-reserved | Reserved | Reserved |
| ITERATE | - | Reserved | - |
| JOIN | Reserved (functions and types allowed) | Reserved | Reserved |
| K | - | Non-reserved | - |
| KEY | Non-reserved | Reserved | Reserved |
| KEY_MEMBER | - | Non-reserved | - |
| KEY_TYPE | - | Non-reserved | - |
| LABEL | Non-reserved | - | - |
| LANGUAGE | Non-reserved | Reserved | Reserved |
| LARGE | Non-reserved | Reserved | - |
| LAST | Non-reserved | Reserved | Reserved |
| LATERAL | - | Reserved | - |
| LC_COLLATE | Non-reserved | - | - |
| LC_CTYPE | Non-reserved | - | - |
| LEADING | Reserved | Reserved | Reserved |
| LEAKPROOF | Non-reserved | - | - |
| LEAST | Non-reserved (excluding functions and types) | - | - |
| LEFT | Reserved (functions and types allowed) | Reserved | Reserved |
| LENGTH | - | Non-reserved | Non-reserved |
| LESS | Reserved | Reserved | - |
| LEVEL | Non-reserved | Reserved | Reserved |
| LIKE | Reserved (functions and types allowed) | Reserved | Reserved |
| LIMIT | Reserved | Reserved | - |
| LISTEN | Non-reserved | - | - |
| LOAD | Non-reserved | - | - |
| LOCAL | Non-reserved | Reserved | Reserved |
| LOCALTIME | Reserved | Reserved | - |
| LOCALTIMESTAMP | Reserved | Reserved | - |
| LOCATION | Non-reserved | - | - |
| LOCATOR | - | Reserved | - |
| LOCK | Non-reserved | - | - |
| LOG | Non-reserved | - | - |
| LOGGING | Non-reserved | - | - |
| LOGIN | Non-reserved | - | - |
| LOOP | Non-reserved | - | - |
| LOWER | - | Non-reserved | Reserved |
| M | - | Non-reserved | - |
| MAP | - | Reserved | - |
| MAPPING | Non-reserved | - | - |
| MATCH | Non-reserved | Reserved | Reserved |
| MATCHED | Non-reserved | - | - |
| MAX | - | Non-reserved | Reserved |
| MAXEXTENTS | Non-reserved | - | - |
| MAXSIZE | Non-reserved | - | - |
| MAXTRANS | Non-reserved | - | - |
| MAXVALUE | Reserved | - | - |
| MERGE | Non-reserved | - | - |
| MESSAGE_LENGTH | - | Non-reserved | Non-reserved |
| MESSAGE_OCTET_LENGTH | - | Non-reserved | Non-reserved |
| MESSAGE_TEXT | - | Non-reserved | Non-reserved |
| METHOD | - | Non-reserved | - |
| MIN | - | Non-reserved | Reserved |
| MINEXTENTS | Non-reserved | - | - |
| MINUS | Reserved | - | - |
| MINUTE | Non-reserved | Reserved | Reserved |
| MINVALUE | Non-reserved | - | - |
| MOD | - | Non-reserved | - |
| MODE | Non-reserved | - | - |
| MODIFIES | - | Reserved | - |
| MODIFY | Reserved | Reserved | - |
| MODULE | - | Reserved | Reserved |
| MONTH | Non-reserved | Reserved | Reserved |
| MORE | - | Non-reserved | Non-reserved |
| MOVE | Non-reserved | - | - |
| MOVEMENT | Non-reserved | - | - |
| MUMPS | - | Non-reserved | Non-reserved |
| NAME | Non-reserved | Non-reserved | Non-reserved |
| NAMES | Non-reserved | Reserved | Reserved |
| NATIONAL | Non-reserved (excluding functions and types) | Reserved | Reserved |
| NATURAL | Reserved (functions and types allowed) | Reserved | Reserved |
| NCHAR | Non-reserved (excluding functions and types) | Reserved | Reserved |
| NCLOB | - | Reserved | - |
| NEW | - | Reserved | - |
| NEXT | Non-reserved | Reserved | Reserved |
| NLSSORT | Reserved | - | - |
| NO | Non-reserved | Reserved | Reserved |
| NOCOMPRESS | Non-reserved | - | - |
| NOCYCLE | Non-reserved | - | - |
| NODE | Non-reserved | - | - |
| NOLOGGING | Non-reserved | - | - |
| NOLOGIN | Non-reserved | - | - |
| NOMAXVALUE | Non-reserved | - | - |
| NOMINVALUE | Non-reserved | - | - |
| NONE | Non-reserved (excluding functions and types) | Reserved | - |
| NOT | Reserved | Reserved | Reserved |
| NOTHING | Non-reserved | - | - |
| NOTIFY | Non-reserved | - | - |
| NOTNULL | Reserved (functions and types allowed) | - | - |
| NOWAIT | Non-reserved | - | - |
| NULL | Reserved | Reserved | Reserved |
| NULLABLE | - | Non-reserved | Non-reserved |
| NULLIF | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| NULLS | Non-reserved | - | - |
| NUMBER | Non-reserved (excluding functions and types) | Non-reserved | Non-reserved |
| NUMERIC | Non-reserved (excluding functions and types) | Reserved | Reserved |
| NUMSTR | Non-reserved | - | - |
| NVARCHAR2 | Non-reserved (excluding functions and types) | - | - |
| NVL | Non-reserved (excluding functions and types) | - | - |
| OBJECT | Non-reserved | Reserved | - |
| OCTET_LENGTH | - | Non-reserved | Reserved |
| OF | Non-reserved | Reserved | Reserved |
| OFF | Non-reserved | Reserved | - |
| OFFSET | Reserved | - | - |
| OIDS | Non-reserved | - | - |
| OLD | - | Reserved | - |
| ON | Reserved | Reserved | Reserved |
| ONLY | Reserved | Reserved | Reserved |
| OPEN | - | Reserved | Reserved |
| OPERATION | - | Reserved | - |
| OPERATOR | Non-reserved | - | - |
| OPTIMIZATION | Non-reserved | - | - |
| OPTION | Non-reserved | Reserved | Reserved |
| OPTIONS | Non-reserved | Non-reserved | - |
| OR | Reserved | Reserved | Reserved |
| ORDER | Reserved | Reserved | Reserved |
| ORDINALITY | - | Reserved | - |
| OUT | Non-reserved (excluding functions and types) | Reserved | - |
| OUTER | Reserved (functions and types allowed) | Reserved | Reserved |
| OUTPUT | - | Reserved | Reserved |
| OVER | Non-reserved | - | - |
| OVERLAPS | Reserved (functions and types allowed) | Non-reserved | Reserved |
| OVERLAY | Non-reserved (excluding functions and types) | Non-reserved | - |
| OVERRIDING | - | Non-reserved | - |
| OWNED | Non-reserved | - | - |
| OWNER | Non-reserved | - | - |
| PACKAGE | Non-reserved | - | - |
| PAD | - | Reserved | Reserved |
| PARAMETER | - | Reserved | - |
| PARAMETERS | - | Reserved | - |
| PARAMETER_MODE | - | Non-reserved | - |
| PARAMETER_NAME | - | Non-reserved | - |
| PARAMETER_ORDINAL_POSITION | - | Non-reserved | - |
| PARAMETER_SPECIFIC_CATALOG | - | Non-reserved | - |
| PARAMETER_SPECIFIC_NAME | - | Non-reserved | - |
| PARAMETER_SPECIFIC_SCHEMA | - | Non-reserved | - |
| PARSER | Non-reserved | - | - |
| PARTIAL | Non-reserved | Reserved | Reserved |
| PARTITION | Non-reserved | - | - |
| PARTITIONS | Non-reserved | - | - |
| PASCAL | - | Non-reserved | Non-reserved |
| PASSING | Non-reserved | - | - |
| PASSWORD | Non-reserved | - | - |
| PATH | - | Reserved | - |
| PCTFREE | Non-reserved | - | - |
| PER | Non-reserved | - | - |
| PERM | Non-reserved | - | - |
| PERCENT | Non-reserved | - | - |
| PERFORMANCE | Reserved | - | - |
| PLACING | Reserved | - | - |
| PLAN | Reserved | - | - |
| PLANS | Non-reserved | - | - |
| PLI | - | Non-reserved | Non-reserved |
| POLICY | Non-reserved | - | - |
| POOL | Non-reserved | - | - |
| POSITION | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| POSTFIX | - | Reserved | - |
| PRECEDING | Non-reserved | - | - |
| PRECISION | Non-reserved (excluding functions and types) | Reserved | Reserved |
| PREFERRED | Non-reserved | - | - |
| PREFIX | Non-reserved | Reserved | - |
| PREORDER | - | Reserved | - |
| PREPARE | Non-reserved | Reserved | Reserved |
| PREPARED | Non-reserved | - | - |
| PRESERVE | Non-reserved | Reserved | Reserved |
| PRIMARY | Reserved | Reserved | Reserved |
| PRIOR | Non-reserved | Reserved | Reserved |
| PRIVATE | Non-reserved | - | - |
| PRIVILEGE | Non-reserved | - | - |
| PRIVILEGES | Non-reserved | Reserved | Reserved |
| PROCEDURAL | Non-reserved | - | - |
| PROCEDURE | Reserved | Reserved | Reserved |
| PROFILE | Non-reserved | - | - |
| PUBLIC | - | Reserved | Reserved |
| QUERY | Non-reserved | - | - |
| QUOTE | Non-reserved | - | - |
| RANGE | Non-reserved | - | - |
| RAW | Non-reserved | - | - |
| READ | Non-reserved | Reserved | Reserved |
| READS | - | Reserved | - |
| REAL | Non-reserved (excluding functions and types) | Reserved | Reserved |
| REASSIGN | Non-reserved | - | - |
| REBUILD | Non-reserved | - | - |
| RECHECK | Non-reserved | - | - |
| RECURSIVE | Non-reserved | Reserved | - |
| REF | Non-reserved | Reserved | - |
| REFRESH | Non-reserved | - | - |
| REFERENCES | Reserved | Reserved | Reserved |
| REFERENCING | - | Reserved | - |
| REINDEX | Non-reserved | - | - |
| REJECT | Reserved | - | - |
| RELATIVE | Non-reserved | Reserved | Reserved |
| RELEASE | Non-reserved | - | - |
| RELOPTIONS | Non-reserved | - | - |
| REMOTE | Non-reserved | - | - |
| RENAME | Non-reserved | - | - |
| REPEATABLE | Non-reserved | Non-reserved | Non-reserved |
| REPLACE | Non-reserved | - | - |
| REPLICA | Non-reserved | - | - |
| RESET | Non-reserved | - | - |
| RESIZE | Non-reserved | - | - |
| RESOURCE | Non-reserved | - | - |
| RESTART | Non-reserved | - | - |
| RESTRICT | Non-reserved | Reserved | Reserved |
| RESULT | - | Reserved | - |
| RETURN | Non-reserved | Reserved | - |
| RETURNED_LENGTH | - | Non-reserved | Non-reserved |
| RETURNED_OCTET_LENGTH | - | Non-reserved | Non-reserved |
| RETURNED_SQLSTATE | - | Non-reserved | Non-reserved |
| RETURNING | Reserved | - | - |
| RETURNS | Non-reserved | Reserved | - |
| REUSE | Non-reserved | - | - |
| REVOKE | Non-reserved | Reserved | Reserved |
| RIGHT | Reserved (functions and types allowed) | Reserved | Reserved |
| ROLE | Non-reserved | Reserved | - |
| ROLLBACK | Non-reserved | Reserved | Reserved |
| ROLLUP | - | Reserved | - |
| ROUTINE | - | Reserved | - |
| ROUTINE_CATALOG | - | Non-reserved | - |
| ROUTINE_NAME | - | Non-reserved | - |
| ROUTINE_SCHEMA | - | Non-reserved | - |
| ROW | Non-reserved (excluding functions and types) | Reserved | - |
| ROWS | Non-reserved | Reserved | Reserved |
| ROW_COUNT | - | Non-reserved | Non-reserved |
| RULE | Non-reserved | - | - |
| SAVEPOINT | Non-reserved | Reserved | - |
| SCALE | - | Non-reserved | Non-reserved |
| SCHEMA | Non-reserved | Reserved | Reserved |
| SCHEMA_NAME | - | Non-reserved | Non-reserved |
| SCOPE | - | Reserved | - |
| SCROLL | Non-reserved | Reserved | Reserved |
| SEARCH | Non-reserved | Reserved | - |
| SECOND | Non-reserved | Reserved | Reserved |
| SECTION | - | Reserved | Reserved |
| SECURITY | Non-reserved | Non-reserved | - |
| SELECT | Reserved | Reserved | Reserved |
| SELF | - | Non-reserved | - |
| SENSITIVE | - | Non-reserved | - |
| SEQUENCE | Non-reserved | Reserved | - |
| SEQUENCES | Non-reserved | - | - |
| SERIALIZABLE | Non-reserved | Non-reserved | Non-reserved |
| SERVER | Non-reserved | - | - |
| SERVER_NAME | - | Non-reserved | Non-reserved |
| SESSION | Non-reserved | Reserved | Reserved |
| SESSION_USER | Reserved | Reserved | Reserved |
| SET | Non-reserved | Reserved | Reserved |
| SETOF | Non-reserved (excluding functions and types) | - | - |
| SETS | - | Reserved | - |
| SHARE | Non-reserved | - | - |
| SHIPPABLE | Non-reserved | - | - |
| SHOW | Non-reserved | - | - |
| SIMILAR | Reserved (functions and types allowed) | Non-reserved | - |
| SIMPLE | Non-reserved | Non-reserved | - |
| SIZE | Non-reserved | Reserved | Reserved |
| SMALLDATETIME | Non-reserved (excluding functions and types) | - | - |
| SMALLDATETIME_FORMAT | Non-reserved | - | - |
| SMALLINT | Non-reserved (excluding functions and types) | Reserved | Reserved |
| SNAPSHOT | Non-reserved | - | - |
| SOME | Reserved | Reserved | Reserved |
| SOURCE | Non-reserved | Non-reserved | - |
| SPACE | - | Reserved | Reserved |
| SPECIFIC | - | Reserved | - |
| SPECIFICTYPE | - | Reserved | - |
| SPECIFIC_NAME | - | Non-reserved | - |
| SPILL | Non-reserved | - | - |
| SPLIT | Non-reserved | - | - |
| SQL | - | Reserved | Reserved |
| SQLCODE | - | - | Reserved |
| SQLERROR | - | - | Reserved |
| SQLEXCEPTION | - | Reserved | - |
| SQLSTATE | - | Reserved | Reserved |
| SQLWARNING | - | Reserved | - |
| STABLE | Non-reserved | - | - |
| STANDALONE | Non-reserved | - | - |
| START | Non-reserved | Reserved | - |
| STATE | - | Reserved | - |
| STATEMENT | Non-reserved | Reserved | - |
| STATEMENT_ID | Non-reserved | - | - |
| STATIC | - | Reserved | - |
| STATISTICS | Non-reserved | - | - |
| STDIN | Non-reserved | - | - |
| STDOUT | Non-reserved | - | - |
| STORAGE | Non-reserved | - | - |
| STORE | Non-reserved | - | - |
| STRICT | Non-reserved | - | - |
| STRIP | Non-reserved | - | - |
| STRUCTURE | - | Reserved | - |
| STYLE | - | Non-reserved | - |
| SUBCLASS_ORIGIN | - | Non-reserved | Non-reserved |
| SUBLIST | - | Non-reserved | - |
| SUBSTRING | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| SUM | - | Non-reserved | Reserved |
| SUPERUSER | Non-reserved | - | - |
| SYMMETRIC | Reserved | Non-reserved | - |
| SYNONYM | Non-reserved | - | - |
| SYS_REFCURSOR | Non-reserved | - | - |
| SYSDATE | Reserved | - | - |
| SYSID | Non-reserved | - | - |
| SYSTEM | Non-reserved | Non-reserved | - |
| SYSTEM_USER | - | Reserved | Reserved |
| TABLE | Reserved | Reserved | Reserved |
| TABLES | Non-reserved | - | - |
| TABLE_NAME | - | Non-reserved | Non-reserved |
| TEMP | Non-reserved | - | - |
| TEMPLATE | Non-reserved | - | - |
| TEMPORARY | Non-reserved | Reserved | Reserved |
| TERMINATE | - | Reserved | - |
| TEXT | Non-reserved | - | - |
| THAN | Non-reserved | Reserved | - |
| THEN | Reserved | Reserved | Reserved |
| TIME | Non-reserved (excluding functions and types) | Reserved | Reserved |
| TIME_FORMAT | Non-reserved | - | - |
| TIMESTAMP | Non-reserved (excluding functions and types) | Reserved | Reserved |
| TIMESTAMPDIFF | Non-reserved (excluding functions and types) | - | - |
| TIMESTAMP_FORMAT | Non-reserved | - | - |
| TIMEZONE_HOUR | - | Reserved | Reserved |
| TIMEZONE_MINUTE | - | Reserved | Reserved |
| TINYINT | Non-reserved (excluding functions and types) | - | - |
| TO | Reserved | Reserved | Reserved |
| TRAILING | Reserved | Reserved | Reserved |
| TRANSACTION | Non-reserved | Reserved | Reserved |
| TRANSACTIONS_COMMITTED | - | Non-reserved | - |
| TRANSACTIONS_ROLLED_BACK | - | Non-reserved | - |
| TRANSACTION_ACTIVE | - | Non-reserved | - |
| TRANSFORM | - | Non-reserved | - |
| TRANSFORMS | - | Non-reserved | - |
| TRANSLATE | - | Non-reserved | Reserved |
| TRANSLATION | - | Reserved | Reserved |
| TREAT | Non-reserved (excluding functions and types) | Reserved | - |
| TRIGGER | Non-reserved | Reserved | - |
| TRIGGER_CATALOG | - | Non-reserved | - |
| TRIGGER_NAME | - | Non-reserved | - |
| TRIGGER_SCHEMA | - | Non-reserved | - |
| TRIM | Non-reserved (excluding functions and types) | Non-reserved | Reserved |
| TRUE | Reserved | Reserved | Reserved |
| TRUNCATE | Non-reserved | - | - |
| TRUSTED | Non-reserved | - | - |
| TYPE | Non-reserved | Non-reserved | Non-reserved |
| TYPES | Non-reserved | - | - |
| UESCAPE | - | - | - |
| UNBOUNDED | Non-reserved | - | - |
| UNCOMMITTED | Non-reserved | Non-reserved | Non-reserved |
| UNDER | - | Reserved | - |
| UNENCRYPTED | Non-reserved | - | - |
| UNION | Reserved | Reserved | Reserved |
| UNIQUE | Reserved | Reserved | Reserved |
| UNKNOWN | Non-reserved | Reserved | Reserved |
| UNLIMITED | Non-reserved | - | - |
| UNLISTEN | Non-reserved | - | - |
| UNLOCK | Non-reserved | - | - |
| UNLOGGED | Non-reserved | - | - |
| UNNAMED | - | Non-reserved | Non-reserved |
| UNNEST | - | Reserved | - |
| UNTIL | Non-reserved | - | - |
| UNUSABLE | Non-reserved | - | - |
| UPDATE | Non-reserved | Reserved | Reserved |
| UPPER | - | Non-reserved | Reserved |
| USAGE | - | Reserved | Reserved |
| USER | Reserved | Reserved | Reserved |
| USER_DEFINED_TYPE_CATALOG | - | Non-reserved | - |
| USER_DEFINED_TYPE_NAME | - | Non-reserved | - |
| USER_DEFINED_TYPE_SCHEMA | - | Non-reserved | - |
| USING | Reserved | Reserved | Reserved |
| VACUUM | Non-reserved | - | - |
| VALID | Non-reserved | - | - |
| VALIDATE | Non-reserved | - | - |
| VALIDATION | Non-reserved | - | - |
| VALIDATOR | Non-reserved | - | - |
| VALUE | Non-reserved | Reserved | Reserved |
| VALUES | Non-reserved (excluding functions and types) | Reserved | Reserved |
| VARCHAR | Non-reserved (excluding functions and types) | Reserved | Reserved |
| VARCHAR2 | Non-reserved (excluding functions and types) | - | - |
| VARIABLE | - | Reserved | - |
| VARIADIC | Reserved | - | - |
| VARYING | Non-reserved | Reserved | Reserved |
| VCGROUP | Non-reserved | - | - |
| VERBOSE | Reserved (functions and types allowed) | - | - |
| VERIFY | Non-reserved | - | - |
| VERSION | Non-reserved | - | - |
| VIEW | Non-reserved | Reserved | Reserved |
| VOLATILE | Non-reserved | - | - |
| WHEN | Reserved | Reserved | Reserved |
| WHENEVER | - | Reserved | Reserved |
| WHERE | Reserved | Reserved | Reserved |
| WHITESPACE | Non-reserved | - | - |
| WINDOW | Reserved | - | - |
| WITH | Reserved | Reserved | Reserved |
| WITHIN | Non-reserved | - | - |
| WITHOUT | Non-reserved | Reserved | - |
| WORK | Non-reserved | Reserved | Reserved |
| WORKLOAD | Non-reserved | - | - |
| WRAPPER | Non-reserved | - | - |
| WRITE | Non-reserved | Reserved | Reserved |
| XML | Non-reserved | - | - |
| XMLATTRIBUTES | Non-reserved (excluding functions and types) | - | - |
| XMLCONCAT | Non-reserved (excluding functions and types) | - | - |
| XMLELEMENT | Non-reserved (excluding functions and types) | - | - |
| XMLEXISTS | Non-reserved (excluding functions and types) | - | - |
| XMLFOREST | Non-reserved (excluding functions and types) | - | - |
| XMLNAMESPACES | Non-reserved (excluding functions and types) | - | - |
| XMLPARSE | Non-reserved (excluding functions and types) | - | - |
| XMLPI | Non-reserved (excluding functions and types) | - | - |
| XMLROOT | Non-reserved (excluding functions and types) | - | - |
| XMLSERIALIZE | Non-reserved (excluding functions and types) | - | - |
| XMLTABLE | Non-reserved (excluding functions and types) | - | - |
| YEAR | Non-reserved | Reserved | Reserved |
| YES | Non-reserved | - | - |
| ZONE | Non-reserved | Reserved | Reserved |
Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and selectable-precision decimals.

For details about numeric operators and functions, see Mathematical Functions and Operators.

GaussDB(DWS) supports integers, arbitrary-precision numbers, floating-point types, and serial integers.

The types TINYINT, SMALLINT, INTEGER, BINARY_INTEGER, and BIGINT store whole numbers, that is, numbers without fractional components, of various ranges. Attempting to store a value outside the allowed range results in an error.
| Column | Description | Storage Space | Range |
|---|---|---|---|
| TINYINT | Tiny integer, also called INT1 | 1 byte | 0 to 255 |
| SMALLINT | Small integer, also called INT2 | 2 bytes | -32,768 to +32,767 |
| INTEGER | Typical choice for integers, also called INT4 | 4 bytes | -2,147,483,648 to +2,147,483,647 |
| BINARY_INTEGER | Alias for INTEGER, compatible with Oracle | 4 bytes | -2,147,483,648 to +2,147,483,647 |
| BIGINT | Big integer, also called INT8 | 8 bytes | -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 |
Examples:

```sql
CREATE TABLE int_type_t1
(
    a TINYINT,
    b TINYINT,
    c INTEGER,
    d BIGINT
);
```

Insert data.

```sql
INSERT INTO int_type_t1 VALUES(100, 10, 1000, 10000);
```

View data.

```sql
SELECT * FROM int_type_t1;
  a  | b  |  c   |   d
-----+----+------+-------
 100 | 10 | 1000 | 10000
(1 row)
```
The NUMBER type can store numbers with a very large number of digits. It is especially recommended for storing monetary amounts and other quantities where exactness is required. Arbitrary-precision numbers require larger storage space and have lower storage and computing efficiency and a poorer compression ratio than integer types.

The scale of a NUMBER value is the count of decimal digits in the fractional part, to the right of the decimal point. The precision of a NUMBER value is the total count of significant digits in the whole number, that is, the number of digits on both sides of the decimal point. For example, the number 23.5141 has a precision of 6 and a scale of 4. Integers can be considered to have a scale of zero.

To configure a NUMERIC or DECIMAL column, you are advised to specify both the maximum precision (p) and the maximum scale (s) of the column.

If the scale of a value to be stored is greater than the declared scale of the column, the system rounds the value to the declared number of fractional digits. If, after rounding, the number of digits to the left of the decimal point exceeds the declared precision minus the declared scale, an error is reported.
+ +Column + |
+Description + |
+Storage Space + |
+Range + |
+
---|---|---|---|
NUMERIC[(p[,s])], +DECIMAL[(p[,s])] + |
+The value range of p (precision) is [1,1000], and the value range of s (scale) is [0,p]. + |
+The precision is specified by users. Every four decimal digits occupy two bytes, and an extra eight-byte overhead is added to the entire data. + |
+Up to 131,072 digits before the decimal point; and up to 16,383 digits after the decimal point when no precision is specified + |
+
NUMBER[(p[,s])] + |
+Alias for type NUMERIC, compatible with Oracle + |
+The precision is specified by users. Every four decimal digits occupy two bytes, and an extra eight-byte overhead is added to the entire data. + |
+Up to 131,072 digits before the decimal point; and up to 16,383 digits after the decimal point when no precision is specified + |
+
Examples:
+Create a table with DECIMAL values.
+CREATE TABLE decimal_type_t1 (DT_COL1 DECIMAL(10,4));
Insert data.
+INSERT INTO decimal_type_t1 VALUES(123456.122331);
View data.
+SELECT * FROM decimal_type_t1;
+   dt_col1
+-------------
+ 123456.1223
+(1 row)
The floating-point type is an inexact, variable-precision numeric type. This type is an implementation of IEEE Standard 754 for Binary Floating-Point Arithmetic (single and double precision, respectively), to the extent that the underlying processor, OS, and compiler support it.
+ +Column + |
+Description + |
+Storage Space + |
+Range + |
+
---|---|---|---|
REAL, +FLOAT4 + |
+Single precision floating points, inexact + |
+4 bytes + |
+6 decimal digits of precision + |
+
DOUBLE PRECISION, +FLOAT8 + |
+Double precision floating points, inexact + |
+8 bytes + |
+1E-307 ~ 1E+308, +15 decimal digits of precision + |
+
FLOAT[(p)] + |
+Floating points, inexact. The value range of precision (p) is [1,53]. + NOTE:
+p is the precision, indicating the total decimal digits. + |
+4 or 8 bytes + |
+REAL or DOUBLE PRECISION is selected as the internal representation based on the specified precision p. If no precision is specified, DOUBLE PRECISION is used. + |
+
BINARY_DOUBLE + |
+DOUBLE PRECISION alias, compatible with Oracle + |
+8 bytes + |
+1E-307 ~ 1E+308, +15 decimal digits of precision + |
+
DEC[(p[,s])] + |
+The value range of p (precision) is [1,1000], and the value range of s (scale) is [0,p]. + NOTE:
+p indicates the total number of digits, and s indicates the number of decimal digits. + |
+The precision is specified by users. Every four decimal digits occupy two bytes, and an extra eight-byte overhead is added to the entire data. + |
+Up to 131,072 digits before the decimal point; and up to 16,383 digits after the decimal point when no precision is specified + |
+
INTEGER[(p[,s])] + |
+The value range of p (precision) is [1,1000], and the value range of s (scale) is [0,p]. + |
+The precision is specified by users. Every four decimal digits occupy two bytes, and an extra eight-byte overhead is added to the entire data. + |
+Up to 131,072 digits before the decimal point; and up to 16,383 digits after the decimal point when no precision is specified + |
+
Examples:
+Create a table with floating-point values.
+CREATE TABLE float_type_t2
+(
+    FT_COL1 INTEGER,
+    FT_COL2 FLOAT4,
+    FT_COL3 FLOAT8,
+    FT_COL4 FLOAT(3),
+    FT_COL5 BINARY_DOUBLE,
+    FT_COL6 DECIMAL(10,4),
+    FT_COL7 INTEGER(6,3)
+) DISTRIBUTE BY HASH ( ft_col1);
Insert data.
+INSERT INTO float_type_t2 VALUES(10,10.365456,123456.1234,10.3214, 321.321, 123.123654, 123.123654);
View data.
+SELECT * FROM float_type_t2;
+ ft_col1 | ft_col2 |   ft_col3   | ft_col4 | ft_col5 | ft_col6  | ft_col7
+---------+---------+-------------+---------+---------+----------+---------
+      10 | 10.3655 | 123456.1234 | 10.3214 | 321.321 | 123.1237 | 123.124
+(1 row)
SMALLSERIAL, SERIAL, and BIGSERIAL are not true types, but merely a notational convenience for creating unique identifier columns. An integer column is created, and its default values are assigned from a sequence generator. A NOT NULL constraint is applied to ensure that NULL cannot be inserted. In most cases you would also want to attach a UNIQUE or PRIMARY KEY constraint to prevent duplicate values from being inserted unexpectedly. Lastly, the sequence is marked as "owned by" the column, so that it will be dropped if the column or table is dropped. Currently, a SERIAL column can be specified only when you create a table; you cannot add a SERIAL column to an existing table. In addition, SERIAL columns cannot be created in temporary tables. Because SERIAL is not a data type, columns cannot be converted to it.
+ +Column + |
+Description + |
+Storage Space + |
+Range + |
+
---|---|---|---|
SMALLSERIAL + |
+Two-byte auto-incrementing integer + |
+2 bytes + |
+1 ~ 32,767 + |
+
SERIAL + |
+Four-byte auto-incrementing integer + |
+4 bytes + |
+1 ~ 2,147,483,647 + |
+
BIGSERIAL + |
+Eight-byte auto-incrementing integer + |
+8 bytes + |
+1 ~ 9,223,372,036,854,775,807 + |
+
Examples:
+Create a table with serial values.
+CREATE TABLE smallserial_type_tab(a SMALLSERIAL);
Insert data.
+INSERT INTO smallserial_type_tab VALUES(default);
Insert data again.
+INSERT INTO smallserial_type_tab VALUES(default);
View data.
+SELECT * FROM smallserial_type_tab;
+ a
+---
+ 1
+ 2
+(2 rows)
The money type stores a currency amount with fixed fractional precision. The range shown in Table 1 assumes there are two fractional digits. Input is accepted in a variety of formats, including integer and floating-point literals, as well as typical currency formatting, such as $1,000.00. Output is generally in the latter form but depends on the locale.
+ +Name + |
+Storage Size + |
+Description + |
+Range + |
+
---|---|---|---|
money + |
+8 bytes + |
+Currency amount + |
+-92233720368547758.08 to +92233720368547758.07 + |
+
Values of the numeric, int, and bigint data types can be cast to money. Conversion from the real and double precision data types can be done by casting to numeric first, for example:
+SELECT '12.34'::float8::numeric::money;
However, this is not recommended. Floating point numbers should not be used to handle money due to the potential for rounding errors.
+A money value can be cast to numeric without loss of precision. Conversion to other types could potentially lose precision, and must also be done in two stages:
+SELECT '52093.89'::money::numeric::float8;
When a money value is divided by another money value, the result is double precision (that is, a pure number, not money); the currency units cancel each other out in the division.
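+For example, a minimal sketch of money division (the currency units cancel, so the result below is a plain double precision number):
+SELECT '100.00'::money / '20.00'::money AS ratio;
+ ratio
+-------
+     5
+(1 row)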
+Name + |
+Description + |
+Storage Space + |
+Value + |
+
---|---|---|---|
BOOLEAN + |
+Boolean type + |
+1 byte + |
+true, false, or null (unknown) + |
+
Valid literal values for the "true" state are:
+TRUE, 't', 'true', 'y', 'yes', '1'
+Valid literal values for the "false" state include:
+FALSE, 'f', 'false', 'n', 'no', '0'
+TRUE and FALSE are the standard, SQL-compatible expressions.
+Data type boolean is displayed with letters t and f.
+-- Create a table:
+CREATE TABLE bool_type_t1
+(
+    BT_COL1 BOOLEAN,
+    BT_COL2 TEXT
+) DISTRIBUTE BY HASH(BT_COL2);
+
+-- Insert data:
+INSERT INTO bool_type_t1 VALUES (TRUE, 'sic est');
+
+INSERT INTO bool_type_t1 VALUES (FALSE, 'non est');
+
+-- View data:
+SELECT * FROM bool_type_t1;
+ bt_col1 | bt_col2
+---------+---------
+ t       | sic est
+ f       | non est
+(2 rows)
+
+SELECT * FROM bool_type_t1 WHERE bt_col1 = 't';
+ bt_col1 | bt_col2
+---------+---------
+ t       | sic est
+(1 row)
+
+-- Delete the tables:
+DROP TABLE bool_type_t1;
Table 1 lists the character types that can be used in GaussDB(DWS). For string operators and related built-in functions, see Character Processing Functions and Operators.
+ +Name + |
+Description + |
+Length + |
+Storage Space + |
+
---|---|---|---|
CHAR(n) +CHARACTER(n) +NCHAR(n) + |
+Fixed-length character string, blank-padded when the stored string is shorter than the declared length. + |
+n indicates the string length. If it is not specified, the default precision 1 is used. The value of n is less than 10485761. + |
+The maximum size is 10 MB. + |
+
VARCHAR(n) +CHARACTER VARYING(n) + |
+Variable-length string. + |
+n indicates the byte length. The value of n is less than 10485761. + |
+The maximum size is 10 MB. + |
+
VARCHAR2(n) + |
+Variable-length string. It is an alias for VARCHAR(n) type, compatible with Oracle. + |
+n indicates the byte length. The value of n is less than 10485761. + |
+The maximum size is 10 MB. + |
+
NVARCHAR2(n) + |
+Variable-length string. + |
+n indicates the string length. The value of n is less than 10485761. + |
+The maximum size is 10 MB. + |
+
CLOB + |
+Variable-length string. A big text object. It is an alias for TEXT type, compatible with Oracle. + |
+- + |
+The maximum size is 1,073,733,621 bytes (1 GB - 8203 bytes). + |
+
TEXT + |
+Variable-length string. + |
+- + |
+The maximum size is 1,073,733,621 bytes (1 GB - 8203 bytes). + |
+
GaussDB(DWS) has two other fixed-length character types, as listed in Table 2.
+The name type is used only in the internal system catalogs as a storage identifier. Its length is 64 bytes (63 characters plus the terminator). This data type is not recommended for common users. When the name type is unified with other data types (for example, when one branch of a CASE WHEN expression returns the name type and other branches return the text type), the result may be silently truncated to 63 characters. If you do not want characters truncated at this limit, explicitly convert the name value to the text type (see the example following Table 2).
+The type "char" only uses one byte of storage. It is internally used in the system catalogs as a simplistic enumeration type.
+ +Name + |
+Description + |
+Storage Space + |
+
---|---|---|
name + |
+Internal type for object names + |
+64 bytes + |
+
"char" + |
+Single-byte internal type + |
+1 byte + |
+
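+For example, a minimal sketch of the 63-character limit described above (the cast to name silently truncates, while text preserves the full string):
+SELECT length(repeat('a', 100)::name) AS name_len,
+       length(repeat('a', 100)::text) AS text_len;
+ name_len | text_len
+----------+----------
+       63 |      100
+(1 row)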
If a field is defined as char(n) or varchar(n), n indicates the maximum length. Regardless of the type, the length cannot exceed 10485760 bytes (10 MB).
+When the data length exceeds the specified length n, the error "value too long" is reported. Alternatively, you can explicitly cast the value to the target type, in which case over-length data is automatically truncated, as shown in the second example below.
+Example:
+CREATE TABLE t1 (a char(5),b varchar(5));
+INSERT INTO t1 VALUES('bookstore','123');
+ERROR: value too long for type character(5)
+CONTEXT: referenced column: a
+INSERT INTO t1 VALUES('bookstore'::char(5),'12345678'::varchar(5));
+INSERT 0 1
+
+SELECT a,b FROM t1;
+   a   |   b
+-------+-------
+ books | 12345
+(1 row)
All character types can be classified into fixed-length strings and variable-length strings.
+Example:
+CREATE TABLE t2 (a char(5),b varchar(5));
+INSERT INTO t2 VALUES('abc','abc');
+INSERT 0 1
+
+SELECT a,lengthb(a),b FROM t2;
+   a   | lengthb |  b
+-------+---------+-----
+ abc   |       5 | abc
+(1 row)
+SELECT a = b from t2;
+ ?column?
+----------
+ t
+(1 row)
+
+SELECT cast(a as text) as val,lengthb(val) FROM t2;
+ val | lengthb
+-----+---------
+ abc |       3
+(1 row)
The meaning of n differs between VARCHAR2(n) and NVARCHAR2(n).
+Take an UTF8-encoded database as an example. A letter occupies one byte, and a Chinese character occupies three bytes. VARCHAR2(6) allows for six letters or two Chinese characters, and NVARCHAR2(6) allows for six letters or six Chinese characters.
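+This difference can be checked with a minimal sketch like the following (assuming a UTF8-encoded database; the table name is illustrative):
+CREATE TABLE nchar_test (a VARCHAR2(6), b NVARCHAR2(6));
+-- Six letters (6 bytes) fit in both columns.
+INSERT INTO nchar_test VALUES ('abcdef', 'abcdef');
+-- Three Chinese characters (9 bytes) exceed VARCHAR2(6) but fit in NVARCHAR2(6), which counts characters.
+INSERT INTO nchar_test(b) VALUES ('你好吗');
+DROP TABLE nchar_test;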
+In Oracle compatibility mode, empty strings and NULL are not distinguished. When a statement is executed to query or import data, empty strings are processed as NULL.
+Therefore, = '' cannot be used as a query condition, nor can IS ''; such conditions return no result set. The correct usage is IS NULL or IS NOT NULL.
+Example:
+CREATE TABLE t4 (a text);
+INSERT INTO t4 VALUES('abc'),(''),(null);
+INSERT 0 3
+SELECT a,a isnull FROM t4;
+  a  | ?column?
+-----+----------
+     | t
+     | t
+ abc | f
+(3 rows)
+
+SELECT a,a isnull FROM t4 WHERE a is null;
+ a | ?column?
+---+----------
+   | t
+   | t
+(2 rows)
Table 1 lists the binary data types that can be used in GaussDB(DWS).
+ +Name + |
+Description + |
+Storage Space + |
+
---|---|---|
BLOB + |
+Binary large object. +Currently, BLOB only supports the following external access interfaces: +
For details about the interfaces, see DBMS_LOB. + NOTE:
+Column storage cannot be used for the BLOB type. + |
+The maximum size is 1,073,733,621 bytes (1 GB - 8203 bytes). + |
+
RAW + |
+Variable-length hexadecimal string + NOTE:
+Column storage cannot be used for the raw type. + |
+4 bytes plus the actual hexadecimal string. The maximum size is 1,073,733,621 bytes (1 GB - 8203 bytes). + |
+
BYTEA + |
+Variable-length binary string + |
+4 bytes plus the actual binary string. The maximum size is 1,073,733,621 bytes (1 GB - 8203 bytes). + |
+
+In addition to the size limitation on each column, the total size of each tuple cannot exceed 1 GB minus 8203 bytes (that is, 1,073,733,621 bytes).
+Examples
+-- Create a table:
+CREATE TABLE blob_type_t1
+(
+    BT_COL1 INTEGER,
+    BT_COL2 BLOB,
+    BT_COL3 RAW,
+    BT_COL4 BYTEA
+) DISTRIBUTE BY REPLICATION;
+
+-- Insert data:
+INSERT INTO blob_type_t1 VALUES(10,empty_blob(),
+HEXTORAW('DEADBEEF'),E'\\xDEADBEEF');
+
+-- Query data in the table:
+SELECT * FROM blob_type_t1;
+ bt_col1 | bt_col2 | bt_col3  |  bt_col4
+---------+---------+----------+------------
+      10 |         | DEADBEEF | \xdeadbeef
+(1 row)
+
+-- Delete the tables:
+DROP TABLE blob_type_t1;
Table 1 lists date and time types supported by GaussDB(DWS). For the operators and built-in functions of the types, see Date and Time Processing Functions and Operators.
+If the time format of another database is different from that of GaussDB(DWS), modify the value of the DateStyle parameter to keep them consistent.
+Name + |
+Description + |
+Storage Space + |
+
---|---|---|
DATE + |
+In Oracle compatibility mode, it is equivalent to timestamp(0) and records the date and time. +In other modes, it records the date. + |
+In Oracle compatibility mode, it occupies 8 bytes. +In other modes, it occupies 4 bytes. + |
+
TIME [(p)] [WITHOUT TIME ZONE] + |
+Specifies the time of day (no date). +p indicates the precision after the decimal point. The value ranges from 0 to 6. + |
+8 bytes + |
+
TIME [(p)] [WITH TIME ZONE] + |
+Specifies time within one day (with time zone). +p indicates the precision after the decimal point. The value ranges from 0 to 6. + |
+12 bytes + |
+
TIMESTAMP[(p)] [WITHOUT TIME ZONE] + |
+Specifies the date and time. +p indicates the precision after the decimal point. The value ranges from 0 to 6. + |
+8 bytes + |
+
TIMESTAMP[(p)][WITH TIME ZONE] + |
+Specifies the date and time (with time zone). TIMESTAMP is also called TIMESTAMPTZ. +p indicates the precision after the decimal point. The value ranges from 0 to 6. + |
+8 bytes + |
+
SMALLDATETIME + |
+Specifies the date and time (without time zone). +Precision is to the minute; 31 to 59 seconds are rounded up to one minute. + |
+8 bytes + |
+
INTERVAL DAY (l) TO SECOND (p) + |
+Specifies the time interval (X days X hours X minutes X seconds). +
|
+16 bytes + |
+
INTERVAL [FIELDS] [ (p) ] + |
+Specifies the time interval. +
|
+12 bytes + |
+
reltime + |
+Relative time interval. The format is: +X years X months X days XX:XX:XX +
|
+4 bytes + |
+
For example:
+-- Create a table:
+CREATE TABLE date_type_tab(coll date);
+
+-- Insert data:
+INSERT INTO date_type_tab VALUES (date '12-10-2010');
+
+-- View data:
+SELECT * FROM date_type_tab;
+        coll
+---------------------
+ 2010-12-10 00:00:00
+(1 row)
+
+-- Delete the tables:
+DROP TABLE date_type_tab;
+
+-- Create a table:
+CREATE TABLE time_type_tab (da time without time zone ,dai time with time zone,dfgh timestamp without time zone,dfga timestamp with time zone, vbg smalldatetime);
+
+-- Insert data:
+INSERT INTO time_type_tab VALUES ('21:21:21','21:21:21 pst','2010-12-12','2013-12-11 pst','2003-04-12 04:05:06');
+
+-- View data:
+SELECT * FROM time_type_tab;
+    da    |     dai     |        dfgh         |          dfga          |         vbg
+----------+-------------+---------------------+------------------------+---------------------
+ 21:21:21 | 21:21:21-08 | 2010-12-12 00:00:00 | 2013-12-11 16:00:00+08 | 2003-04-12 04:05:00
+(1 row)
+
+-- Delete the tables:
+DROP TABLE time_type_tab;
+
+-- Create a table:
+CREATE TABLE day_type_tab (a int,b INTERVAL DAY(3) TO SECOND (4));
+
+-- Insert data:
+INSERT INTO day_type_tab VALUES (1, INTERVAL '3' DAY);
+
+-- View data:
+SELECT * FROM day_type_tab;
+ a |   b
+---+--------
+ 1 | 3 days
+(1 row)
+
+-- Delete the tables:
+DROP TABLE day_type_tab;
+
+-- Create a table:
+CREATE TABLE year_type_tab(a int, b interval year (6));
+
+-- Insert data:
+INSERT INTO year_type_tab VALUES(1,interval '2' year);
+
+-- View data:
+SELECT * FROM year_type_tab;
+ a |    b
+---+---------
+ 1 | 2 years
+(1 row)
+
+-- Delete the tables:
+DROP TABLE year_type_tab;
Date and time input is accepted in almost any reasonable formats, including ISO 8601, SQL-compatible, and traditional POSTGRES. The system allows you to customize the sequence of day, month, and year in the date input. Set the DateStyle parameter to MDY to select month-day-year interpretation, DMY to select day-month-year interpretation, or YMD to select year-month-day interpretation.
+Remember that any date or time literal input needs to be enclosed with single quotes, and the syntax is as follows:
+type [ ( p ) ] 'value'
+The p that can be selected in the precision statement is an integer, indicating the number of fractional digits in the seconds column. Table 2 shows some possible inputs for the date type.
+ +Example + |
+Description + |
+
---|---|
1999-01-08 + |
+ISO 8601 (recommended format). January 8, 1999 in any mode + |
+
January 8, 1999 + |
+Unambiguous in any date input mode + |
+
1/8/1999 + |
+January 8 in MDY mode. August 1 in DMY mode + |
+
1/18/1999 + |
+January 18 in MDY mode, rejected in other modes + |
+
01/02/03 + |
+January 2, 2003 in MDY mode; February 1, 2003 in DMY mode; February 3, 2001 in YMD mode + |
+
1999-Jan-08 + |
+January 8 in any mode + |
+
Jan-08-1999 + |
+January 8 in any mode + |
+
08-Jan-1999 + |
+January 8 in any mode + |
+
99-Jan-08 + |
+January 8 in YMD mode, else error + |
+
08-Jan-99 + |
+January 8, except error in YMD mode + |
+
Jan-08-99 + |
+January 8, except error in YMD mode + |
+
19990108 + |
+ISO 8601. January 8, 1999 in any mode + |
+
990108 + |
+ISO 8601. January 8, 1999 in any mode + |
+
1999.008 + |
+Year and day of year + |
+
J2451187 + |
+Julian date + |
+
January 8, 99 BC + |
+Year 99 BC + |
+
For example:
+-- Create a table:
+CREATE TABLE date_type_tab(coll date);
+
+-- Insert data:
+INSERT INTO date_type_tab VALUES (date '12-10-2010');
+
+-- View data:
+SELECT * FROM date_type_tab;
+        coll
+---------------------
+ 2010-12-10 00:00:00
+(1 row)
+
+-- View the date format:
+SHOW datestyle;
+ DateStyle
+-----------
+ ISO, MDY
+(1 row)
+
+-- Configure the date format:
+SET datestyle='YMD';
+SET
+
+-- Insert data:
+INSERT INTO date_type_tab VALUES(date '2010-12-11');
+
+-- View data:
+SELECT * FROM date_type_tab;
+        coll
+---------------------
+ 2010-12-10 00:00:00
+ 2010-12-11 00:00:00
+(2 rows)
+
+-- Delete the tables:
+DROP TABLE date_type_tab;
The time-of-day types are TIME [(p)] [WITHOUT TIME ZONE] and TIME [(p)] [WITH TIME ZONE]. TIME alone is equivalent to TIME WITHOUT TIME ZONE.
+If a time zone is specified in the input for TIME WITHOUT TIME ZONE, it is silently ignored.
+For details about the time input types, see Table 3. For details about time zone input types, see Table 4.
+ +Example + |
+Description + |
+
---|---|
05:06.8 + |
+ISO 8601 + |
+
4:05:06 + |
+ISO 8601 + |
+
4:05 + |
+ISO 8601 + |
+
40506 + |
+ISO 8601 + |
+
4:05 AM + |
+Same as 04:05. AM does not affect value + |
+
4:05 PM + |
+Same as 16:05. Input hour must be <= 12 + |
+
04:05:06.789-8 + |
+ISO 8601 + |
+
04:05:06-08:00 + |
+ISO 8601 + |
+
04:05-08:00 + |
+ISO 8601 + |
+
040506-08 + |
+ISO 8601 + |
+
04:05:06 PST + |
+Time zone specified by abbreviation + |
+
2003-04-12 04:05:06 America/New_York + |
+Time zone specified by full name + |
+
Example + |
+Description + |
+
---|---|
PST + |
+Abbreviation (for Pacific Standard Time) + |
+
America/New_York + |
+Full time zone name + |
+
-8:00 + |
+ISO-8601 offset for PST + |
+
-800 + |
+ISO-8601 offset for PST + |
+
-8 + |
+ISO-8601 offset for PST + |
+
For example:
+SELECT time '04:05:06';
+   time
+----------
+ 04:05:06
+(1 row)
+
+SELECT time '04:05:06 PST';
+   time
+----------
+ 04:05:06
+(1 row)
+
+SELECT time with time zone '04:05:06 PST';
+   timetz
+-------------
+ 04:05:06-08
+(1 row)
The special values supported by GaussDB(DWS) are converted to common date/time values when being read. For details, see Table 5.
+ +Input String + |
+Applicable Type + |
+Description + |
+
---|---|---|
epoch + |
+date, timestamp + |
+1970-01-01 00:00:00+00 (Unix system time zero) + |
+
infinity + |
+timestamp + |
+Later than any other timestamps + |
+
-infinity + |
+timestamp + |
+Earlier than any other timestamps + |
+
now + |
+date, time, timestamp + |
+Start time of the current transaction + |
+
today + |
+date, timestamp + |
+Today midnight + |
+
tomorrow + |
+date, timestamp + |
+Tomorrow midnight + |
+
yesterday + |
+date, timestamp + |
+Yesterday midnight + |
+
allballs + |
+time + |
+00:00:00.00 UTC + |
+
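+For example, a minimal sketch ('today' and 'tomorrow' resolve against the current date, so only the fixed 'epoch' result is shown):
+SELECT timestamp 'epoch';
+      timestamp
+---------------------
+ 1970-01-01 00:00:00
+(1 row)
+
+SELECT date 'today', date 'tomorrow';  -- returns the current and the next calendar date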
The input of reltime can be any valid interval in TEXT format. It can be a number (negative numbers and decimals are also allowed) or a specific time, which must be in SQL standard format, ISO-8601 format, or POSTGRES format. In addition, the text input needs to be enclosed with single quotation marks ('').
+For details, see Table 6.
+ +Input + |
+Output + |
+Description + |
+
---|---|---|
60 + |
+2 mons + |
+Numbers are used to indicate intervals. The default unit is day. Decimals and negative numbers are also allowed; a negative interval means that amount of time in the past. + |
+
31.25 + |
+1 mons 1 days 06:00:00 + |
+|
-365 + |
+-12 mons -5 days + |
+|
1 years 1 mons 8 days 12:00:00 + |
+1 years 1 mons 8 days 12:00:00 + |
+Intervals are in POSTGRES format. They can contain both positive and negative numbers and are case-insensitive. Output is a simplified POSTGRES interval converted from the input. + |
+
-13 months -10 hours + |
+-1 years -25 days -04:00:00 + |
+|
-2 YEARS +5 MONTHS 10 DAYS + |
+-1 years -6 mons -25 days -06:00:00 + |
+|
P-1.1Y10M + |
+-3 mons -5 days -06:00:00 + |
+Intervals are in ISO-8601 format. They can contain both positive and negative numbers and are case-insensitive. Output is a simplified POSTGRES interval converted from the input. + + |
+
-12H + |
+-12:00:00 + |
+
For example:
+-- Create a table.
+CREATE TABLE reltime_type_tab(col1 character(30), col2 reltime);
+
+-- Insert data.
+INSERT INTO reltime_type_tab VALUES ('90', '90');
+INSERT INTO reltime_type_tab VALUES ('-366', '-366');
+INSERT INTO reltime_type_tab VALUES ('1975.25', '1975.25');
+INSERT INTO reltime_type_tab VALUES ('-2 YEARS +5 MONTHS 10 DAYS', '-2 YEARS +5 MONTHS 10 DAYS');
+INSERT INTO reltime_type_tab VALUES ('30 DAYS 12:00:00', '30 DAYS 12:00:00');
+INSERT INTO reltime_type_tab VALUES ('P-1.1Y10M', 'P-1.1Y10M');
+
+-- View data.
+SELECT * FROM reltime_type_tab;
+              col1              |                col2
+--------------------------------+-------------------------------------
+ 1975.25                        | 5 years 4 mons 29 days
+ -2 YEARS +5 MONTHS 10 DAYS     | -1 years -6 mons -25 days -06:00:00
+ P-1.1Y10M                      | -3 mons -5 days -06:00:00
+ -366                           | -1 years -18:00:00
+ 90                             | 3 mons
+ 30 DAYS 12:00:00               | 1 mon 12:00:00
+(6 rows)
+
+-- Delete tables.
+DROP TABLE reltime_type_tab;
Table 1 lists the geometric types that can be used in GaussDB(DWS). The most fundamental type, the point, forms the basis for all of the other types.
+ +Name + |
+Storage Space + |
+Description + |
+Representation + |
+
---|---|---|---|
point + |
+16 bytes + |
+Point on a plane + |
+(x,y) + |
+
lseg + |
+32 bytes + |
+Finite line segment + |
+((x1,y1),(x2,y2)) + |
+
box + |
+32 bytes + |
+Rectangular Box + |
+((x1,y1),(x2,y2)) + |
+
path + |
+16+16n bytes + |
+Closed path (similar to polygon) + |
+((x1,y1),...) + |
+
path + |
+16+16n bytes + |
+Open path + |
+[(x1,y1),...] + |
+
polygon + |
+40+16n bytes + |
+Polygon (similar to closed path) + |
+((x1,y1),...) + |
+
circle + |
+24 bytes + |
+Circle + |
+<(x,y),r> (center point and radius) + |
+
A rich set of functions and operators is available in GaussDB(DWS) to perform various geometric operations, such as scaling, translation, rotation, and determining intersections. For details, see Geometric Functions and Operators.
+Points are the fundamental two-dimensional building block for geometric types. Values of the point type are specified using either of the following syntaxes:
+( x , y ) +x , y+
where x and y are the respective coordinates, as floating-point numbers.
+Points are output using the first syntax.
+Line segments (lseg) are represented by pairs of points. Values of the lseg type are specified using any of the following syntaxes:
+[ ( x1 , y1 ) , ( x2 , y2 ) ] +( ( x1 , y1 ) , ( x2 , y2 ) ) +( x1 , y1 ) , ( x2 , y2 ) +x1 , y1 , x2 , y2+
where (x1,y1) and (x2,y2) are the end points of the line segment.
+Line segments are output using the first syntax.
+Boxes are represented by pairs of points that are opposite corners of the box. Values of the box type are specified using any of the following syntaxes:
+( ( x1 , y1 ) , ( x2 , y2 ) ) +( x1 , y1 ) , ( x2 , y2 ) +x1 , y1 , x2 , y2+
where (x1,y1) and (x2,y2) are any two opposite corners of the box.
+Boxes are output using the second syntax.
+Any two opposite corners can be supplied on input; the values will be reordered as needed to store the upper right and lower left corners.
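+For example, a minimal sketch in which the corners are supplied as upper left and lower right, and stored as upper right and lower left:
+SELECT box '((0,2),(2,0))';
+     box
+-------------
+ (2,2),(0,0)
+(1 row)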
+Paths are represented by lists of connected points. Paths can be open, where the first and last points in the list are considered not connected, or closed, where the first and last points are considered connected.
+Values of the path type are specified using any of the following syntaxes:
+[ ( x1 , y1 ) , ... , ( xn , yn ) ] +( ( x1 , y1 ) , ... , ( xn , yn ) ) +( x1 , y1 ) , ... , ( xn , yn ) +( x1 , y1 , ... , xn , yn ) +x1 , y1 , ... , xn , yn+
where the points are the end points of the line segments comprising the path. Square brackets ([]) indicate an open path, while parentheses (()) indicate a closed path. When the outermost parentheses are omitted, as in the third through fifth syntaxes, a closed path is assumed.
+Paths are output using the first or second syntax.
+Polygons are represented by lists of points (the vertexes of the polygon). Polygons are very similar to closed paths, but are stored differently and have their own set of support functions.
+Values of the polygon type are specified using any of the following syntaxes:
+( ( x1 , y1 ) , ... , ( xn , yn ) ) +( x1 , y1 ) , ... , ( xn , yn ) +( x1 , y1 , ... , xn , yn ) +x1 , y1 , ... , xn , yn+
where the points are the end points of the line segments comprising the boundary of the polygon.
+Polygons are output using the first syntax.
+Circles are represented by a center point and radius. Values of the circle type are specified using any of the following syntaxes:
+< ( x , y ) , r > +( ( x , y ) , r ) +( x , y ) , r +x , y , r+
where (x,y) is the center point and r is the radius of the circle.
+Circles are output using the first syntax.
+GaussDB(DWS) offers data types to store IPv4, IPv6, and MAC addresses.
+It is better to use network address types instead of plaintext types to store IPv4, IPv6, and MAC addresses, because these types offer input error checking and specialized operators and functions. For details, see Network Address Functions and Operators.
+ +Name + |
+Storage Space + |
+Description + |
+
---|---|---|
cidr + |
+7 or 19 bytes + |
+IPv4 or IPv6 networks + |
+
inet + |
+7 or 19 bytes + |
+IPv4 or IPv6 hosts and networks + |
+
macaddr + |
+6 bytes + |
+MAC addresses + |
+
When sorting inet or cidr data types, IPv4 addresses will always sort before IPv6 addresses, including IPv4 addresses encapsulated or mapped to IPv6 addresses, such as ::10.2.3.4 or ::ffff:10.4.3.2.
+The cidr type (Classless Inter-Domain Routing) holds an IPv4 or IPv6 network specification. The format for specifying networks is address/y where address is the network represented as an IPv4 or IPv6 address, and y is the number of bits in the netmask. If y is omitted, it is calculated using assumptions from the older classful network numbering system, except it will be at least large enough to include all of the octets written in the input.
+For example, 10.0.0.0/8 is converted into a 32-bit binary address 00001010.00000000.00000000.00000000. /8 indicates an 8-bit network ID. The first eight bits of the 32-bit binary address are fixed. The corresponding network segment is 00001010.00000000.00000000.00000000-00001010.11111111.11111111.11111111. 10.0.0.0/8 indicates that the subnet mask is 255.0.0.0 and the corresponding network segment is 10.0.0.0-10.255.255.255.
+For the IP address segment 192.168.0.0–192.168.31.255, the last two segments can be converted into a binary address 00000000.00000000-00011111.11111111. The first 19 bits (8 x 2 + 3) are fixed. Therefore, the binary address is 192.168.0.0/19.
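+This netmask arithmetic can be checked with the built-in network functions, for example:
+SELECT netmask(inet '10.0.0.0/8') AS netmask, broadcast(inet '10.0.0.0/8') AS broadcast;
+  netmask  |    broadcast
+-----------+------------------
+ 255.0.0.0 | 10.255.255.255/8
+(1 row)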
+cidr Input + |
+cidr Output + |
+abbrev (cidr) + |
+
---|---|---|
192.168.100.128/25 + |
+192.168.100.128/25 + |
+192.168.100.128/25 + |
+
192.168/24 + |
+192.168.0.0/24 + |
+192.168.0/24 + |
+
192.168/25 + |
+192.168.0.0/25 + |
+192.168.0.0/25 + |
+
192.168.1 + |
+192.168.1.0/24 + |
+192.168.1/24 + |
+
192.168 + |
+192.168.0.0/24 + |
+192.168.0/24 + |
+
10.1.2 + |
+10.1.2.0/24 + |
+10.1.2/24 + |
+
10.1 + |
+10.1.0.0/16 + |
+10.1/16 + |
+
10 + |
+10.0.0.0/8 + |
+10/8 + |
+
10.1.2.3/32 + |
+10.1.2.3/32 + |
+10.1.2.3/32 + |
+
2001:4f8:3:ba::/64 + |
+2001:4f8:3:ba::/64 + |
+2001:4f8:3:ba::/64 + |
+
2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 + |
+2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128 + |
+2001:4f8:3:ba:2e0:81ff:fe22:d1f1 + |
+
::ffff:1.2.3.0/120 + |
+::ffff:1.2.3.0/120 + |
+::ffff:1.2.3/120 + |
+
::ffff:1.2.3.0/128 + |
+::ffff:1.2.3.0/128 + |
+::ffff:1.2.3.0/128 + |
+
The inet type holds an IPv4 or IPv6 host address, and optionally its subnet, all in one field. The subnet is represented by the number of network address bits present in the host address (the "netmask"). If the netmask is 32 and the address is IPv4, then the value does not indicate a subnet, only a single host. In IPv6, the address length is 128 bits, so 128 bits specify a unique host address.
+The input format for this type is address/y where address is an IPv4 or IPv6 address and y is the number of bits in the netmask. If the /y portion is missing, the netmask is 32 for IPv4 and 128 for IPv6, so the value represents just a single host. On display, the /y portion is suppressed if the netmask specifies a single host.
+The essential difference between the inet and cidr data types is that inet accepts values with nonzero bits to the right of the netmask, whereas cidr does not.
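+For example, a minimal sketch of this difference (the exact error text may vary by version):
+SELECT inet '192.168.1.5/24';   -- accepted: host bits may be set to the right of the netmask
+SELECT cidr '192.168.1.5/24';   -- rejected
+ERROR: invalid cidr value: "192.168.1.5/24"
+DETAIL: Value has bits set to right of mask.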
+The macaddr type stores MAC addresses, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). Input is accepted in the following formats:
+'08:00:2b:01:02:03' +'08-00-2b-01-02-03' +'08002b:010203' +'08002b-010203' +'0800.2b01.0203' +'08002b010203'+
These examples would all specify the same address. Upper and lower cases are accepted for the digits a through f. Output is always in the first of the forms shown.
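+For example, a minimal sketch showing two of the accepted input forms normalizing to the first output form:
+SELECT macaddr '08002b:010203' AS m1, macaddr '0800.2b01.0203' AS m2;
+        m1         |        m2
+-------------------+-------------------
+ 08:00:2b:01:02:03 | 08:00:2b:01:02:03
+(1 row)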
+Bit strings are strings of 1's and 0's. They can be used to store bit masks.
+GaussDB(DWS) supports two SQL bit types: bit(n) and bit varying(n), where n is a positive integer.
+The bit type data must match the length n exactly. It is an error to attempt to store shorter or longer bit strings. The bit varying data is of variable length up to the maximum length n; longer strings will be rejected. Writing bit without a length is equivalent to bit(1), while bit varying without a length specification means unlimited length.
+If one explicitly casts a bit-string value to bit(n), it will be truncated or zero-padded on the right to be exactly n bits, without raising an error.
+Similarly, if one explicitly casts a bit-string value to bit varying(n), it will be truncated on the right if it is more than n bits.
+-- Create a table:
+CREATE TABLE bit_type_t1
+(
+    BT_COL1 INTEGER,
+    BT_COL2 BIT(3),
+    BT_COL3 BIT VARYING(5)
+) DISTRIBUTE BY REPLICATION;
+
+-- Insert data:
+INSERT INTO bit_type_t1 VALUES(1, B'101', B'00');
+
+-- Specify the type length. An error is reported if an inserted string exceeds this length.
+INSERT INTO bit_type_t1 VALUES(2, B'10', B'101');
+ERROR: bit string length 2 does not match type bit(3)
+CONTEXT: referenced column: bt_col2
+
+-- Specify the type length. Data is converted if it exceeds this length.
+INSERT INTO bit_type_t1 VALUES(2, B'10'::bit(3), B'101');
+
+-- View data:
+SELECT * FROM bit_type_t1;
+ bt_col1 | bt_col2 | bt_col3
+---------+---------+---------
+       1 | 101     | 00
+       2 | 100     | 101
+(2 rows)
+
+-- Delete the tables:
+DROP TABLE bit_type_t1;
GaussDB(DWS) offers two data types that are designed to support full text search. The tsvector type represents a document in a form optimized for text search. The tsquery type similarly represents a text query.
+The tsvector type represents a retrieval unit, usually a textual column within a row of a database table, or a combination of such columns. A tsvector value is a sorted list of distinct lexemes, which are words that have been normalized to merge different variants of the same word. Sorting and deduplication are done automatically during input. The to_tsvector function is used to parse and normalize a document string.
+For example:
+SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector;
+                      tsvector
+----------------------------------------------------
+ 'a' 'and' 'ate' 'cat' 'fat' 'mat' 'on' 'rat' 'sat'
+(1 row)
It can be seen from the preceding example that tsvector segments a string by spaces, and segmented lexemes are sorted based on their length and alphabetical order. To represent lexemes containing whitespace or punctuation, surround them with quotes:
+SELECT $$the lexeme ' ' contains spaces$$::tsvector;
+                tsvector
+-------------------------------------------
+ ' ' 'contains' 'lexeme' 'spaces' 'the'
+(1 row)
Use double dollar signs ($$) to mark entries containing single quotation marks (').
+SELECT $$the lexeme 'Joe''s' contains a quote$$::tsvector;
+                    tsvector
+------------------------------------------------
+ 'Joe''s' 'a' 'contains' 'lexeme' 'quote' 'the'
+(1 row)
Optionally, integer positions can be attached to lexemes:
+SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::tsvector;
+                                    tsvector
+-------------------------------------------------------------------------------
+ 'a':1,6,10 'and':8 'ate':9 'cat':3 'fat':2,11 'mat':7 'on':5 'rat':12 'sat':4
+(1 row)
+A position normally indicates the source word's location in the document. Positional information can be used for proximity ranking. Position values range from 1 to 16383; larger values are capped at 16383. Duplicate positions for the same lexeme are discarded.
+Lexemes that have positions can further be labeled with a weight, which can be A, B, C, or D. D is the default and hence is not shown on output:
+SELECT 'a:1A fat:2B,4C cat:5D'::tsvector;
+          tsvector
+----------------------------
+ 'a':1A 'cat':5 'fat':2B,4C
+(1 row)
Weights are typically used to reflect document structure, for example, by marking title words differently from body words. Text search ranking functions can assign different priorities to the different weight markers.
+The following example is the standard usage of the tsvector type. For example:
+SELECT 'The Fat Rats'::tsvector;
+      tsvector
+--------------------
+ 'Fat' 'Rats' 'The'
+(1 row)
+For most English-text-searching applications the above words would be considered non-normalized; for searching, they should usually be passed through to_tsvector to normalize them appropriately:
+SELECT to_tsvector('english', 'The Fat Rats');
+   to_tsvector
+-----------------
+ 'fat':2 'rat':3
+(1 row)
The tsquery type represents a retrieval condition. A tsquery value stores lexemes that are to be searched for, and combines them honoring the Boolean operators & (AND), | (OR), and ! (NOT). Parentheses can be used to enforce grouping of the operators. The to_tsquery and plainto_tsquery functions will normalize lexemes before the lexemes are converted to the tsquery type.
+SELECT 'fat & rat'::tsquery;
+    tsquery
+---------------
+ 'fat' & 'rat'
+(1 row)
+
+SELECT 'fat & (rat | cat)'::tsquery;
+          tsquery
+---------------------------
+ 'fat' & ( 'rat' | 'cat' )
+(1 row)
+
+SELECT 'fat & rat & ! cat'::tsquery;
+        tsquery
+------------------------
+ 'fat' & 'rat' & !'cat'
+(1 row)
In the absence of parentheses, ! (NOT) binds most tightly, and & (AND) binds more tightly than | (OR).
+Lexemes in a tsquery can be labeled with one or more weight letters, which restrict them to match only tsvector lexemes with matching weights:
+SELECT 'fat:ab & cat'::tsquery;
+     tsquery
+------------------
+ 'fat':AB & 'cat'
+(1 row)
Also, lexemes in a tsquery can be labeled with * to specify prefix matching:
+SELECT 'super:*'::tsquery;
+  tsquery
+-----------
+ 'super':*
+(1 row)
This query will match any word in a tsvector that begins with "super".
+Note that prefixes are first processed by text search configurations, which means the following example returns true:
+SELECT to_tsvector( 'postgraduate' ) @@ to_tsquery( 'postgres:*' ) AS RESULT;
+ result
+--------
+ t
+(1 row)
because postgres gets stemmed to postgr:
+SELECT to_tsquery('postgres:*');
+ to_tsquery
+------------
+ 'postgr':*
+(1 row)
which then matches postgraduate.
+'Fat:ab & Cats' is normalized to the tsquery type as follows:
+SELECT to_tsquery('Fat:ab & Cats');
+    to_tsquery
+------------------
+ 'fat':AB & 'cat'
+(1 row)
+The data type UUID stores Universally Unique Identifiers (UUID) as defined by RFC 4122, ISO/IEC 9834-8:2005, and related standards. This identifier is a 128-bit quantity generated by an algorithm chosen to make it very unlikely that the same identifier will be generated by anyone else in the known universe using the same algorithm.
+Therefore, for distributed systems, these identifiers provide a better uniqueness guarantee than sequence generators, which are only unique within a single database.
+A UUID is written as a sequence of lower-case hexadecimal digits, in several groups separated by hyphens, specifically a group of 8 digits followed by three groups of 4 digits followed by a group of 12 digits, for a total of 32 digits representing the 128 bits. An example of a UUID in this standard form is:
+a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11+
+GaussDB(DWS) also accepts the following alternative forms for input: use of upper-case letters and digits, the standard format surrounded by braces, omitting some or all hyphens, and adding a hyphen after any group of four digits. For example:
+A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11 +{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11} +a0eebc999c0b4ef8bb6d6bb9bd380a11 +a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11+
Output is always in the standard form.
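+For example, a minimal sketch in which an upper-case, brace-wrapped input form is normalized on output:
+SELECT '{A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11}'::uuid;
+                 uuid
+--------------------------------------
+ a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11
+(1 row)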
+JSON data types are for storing JavaScript Object Notation (JSON) data. Such data can also be stored as TEXT, but the JSON data type has the advantage of checking that each stored value is a valid JSON value.
+For functions that support the JSON data type, see JSON Functions.
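+For example, a minimal sketch of the validity check that a plain TEXT column would not perform (the exact error detail may vary by version):
+SELECT '{"name": "dws", "nodes": 3}'::json;   -- accepted
+SELECT '{"name": "dws", nodes: 3}'::json;     -- rejected: keys must be quoted
+ERROR: invalid input syntax for type json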
+HyperLogLog (HLL) is an approximation algorithm for efficiently counting the number of distinct values in a data set. It features faster computing and lower space usage: only the HLL data structure needs to be stored, not the data set itself. When new data is added to the data set, hash the data and insert the result into the HLL; the final result can then be obtained from the HLL.
+Table 1 compares HLL with other algorithms.
+ +Item + |
+Sorting Algorithm + |
+Hash Algorithm + |
+HLL + |
+
---|---|---|---|
Time complexity + |
+O(nlogn) + |
+O(n) + |
+O(n) + |
+
Space complexity + |
+O(n) + |
+O(n) + |
+1280 bytes + |
+
Error rate + |
+0 + |
+0 + |
+≈2% + |
+
Storage space requirement + |
+Size of raw data + |
+Size of raw data + |
+1280 bytes + |
+
HLL has advantages over others in the computing speed and storage space requirement. In terms of time complexity, the sorting algorithm needs O(nlogn) time for sorting, and the hash algorithm and HLL need O(n) time for full table scanning. In terms of storage space requirements, the sorting algorithm and hash algorithm need to store raw data before collecting statistics, whereas the HLL algorithm needs to store only the HLL data structures rather than the raw data, and thereby occupying a fixed space of only 1280 bytes.
+Table 2 describes main HLL data structures.
+ +Data Type + |
+Description + |
+
---|---|
hll + |
+Its size is always 1280 bytes, which can be directly used to calculate the number of distinct values. + |
+
The following describes HLL application scenarios.
+The following example shows how to use the HLL data type:
+-- Create a table with the HLL data type:
+create table helloworld (id integer, set hll);
+
+-- Insert an empty HLL to the table:
+insert into helloworld(id, set) values (1, hll_empty());
+
+-- Add a hashed integer to the HLL:
+update helloworld set set = hll_add(set, hll_hash_integer(12345)) where id = 1;
+
+-- Add a hashed string to the HLL:
+update helloworld set set = hll_add(set, hll_hash_text('hello world')) where id = 1;
+
+-- Obtain the number of distinct values of the HLL:
+select hll_cardinality(set) from helloworld where id = 1;
+ hll_cardinality
+-----------------
+               2
+(1 row)
The following example shows how an HLL collects statistics on the number of users visiting a website within a period of time:
+-- Create a raw data table to show that a user has visited the website at a certain time:
+create table facts (
+    date     date,
+    user_id  integer
+);
+
+-- Construct data to show the users who have visited the website in a day:
+insert into facts values ('2019-02-20', generate_series(1,100));
+insert into facts values ('2019-02-21', generate_series(1,200));
+insert into facts values ('2019-02-22', generate_series(1,300));
+insert into facts values ('2019-02-23', generate_series(1,400));
+insert into facts values ('2019-02-24', generate_series(1,500));
+insert into facts values ('2019-02-25', generate_series(1,600));
+insert into facts values ('2019-02-26', generate_series(1,700));
+insert into facts values ('2019-02-27', generate_series(1,800));
+
+-- Create another table and specify an HLL column:
+create table daily_uniques (
+    date  date UNIQUE,
+    users hll
+);
+
+-- Group data by date and insert the data into the HLL:
+insert into daily_uniques(date, users)
+    select date, hll_add_agg(hll_hash_integer(user_id))
+    from facts
+    group by 1;
+
+-- Calculate the numbers of users visiting the website every day:
+select date, hll_cardinality(users) from daily_uniques order by date;
+        date         | hll_cardinality
+---------------------+------------------
+ 2019-02-20 00:00:00 |              100
+ 2019-02-21 00:00:00 | 203.813355588808
+ 2019-02-22 00:00:00 | 308.048239950384
+ 2019-02-23 00:00:00 | 410.529188080374
+ 2019-02-24 00:00:00 | 513.263875705319
+ 2019-02-25 00:00:00 | 609.271181107416
+ 2019-02-26 00:00:00 | 702.941844662509
+ 2019-02-27 00:00:00 | 792.249946595237
+(8 rows)
+
+-- Calculate the number of users who had visited the website in the week from February 20, 2019 to February 26, 2019:
+select hll_cardinality(hll_union_agg(users)) from daily_uniques where date >= '2019-02-20'::date and date <= '2019-02-26'::date;
+ hll_cardinality
+------------------
+ 702.941844662509
+(1 row)
+
+-- Calculate the number of users who had visited the website yesterday but have not visited the website today:
+SELECT date, (#hll_union_agg(users) OVER two_days) - #users AS lost_uniques FROM daily_uniques WINDOW two_days AS (ORDER BY date ASC ROWS 1 PRECEDING);
+        date         | lost_uniques
+---------------------+--------------
+ 2019-02-20 00:00:00 |            0
+ 2019-02-21 00:00:00 |            0
+ 2019-02-22 00:00:00 |            0
+ 2019-02-23 00:00:00 |            0
+ 2019-02-24 00:00:00 |            0
+ 2019-02-25 00:00:00 |            0
+ 2019-02-26 00:00:00 |            0
+ 2019-02-27 00:00:00 |            0
+(8 rows)
When inserting data into a column of the HLL type, ensure that the data meets the requirements of the HLL data structure. If the data does not meet the requirements after being parsed, an error will be reported. In the following example, E\\1234 to be inserted does not meet the requirements of the HLL data structure after being parsed. As a result, an error is reported.
+create table test(id integer, set hll);
+insert into test values(1, 'E\\1234');
+ERROR: unknown schema version 4
Object identifiers (OIDs) are used internally by GaussDB(DWS) as primary keys for various system catalogs. OIDs are not added to user-created tables by the system. The OID type represents an object identifier.
+The OID type is currently implemented as an unsigned four-byte integer. It is therefore not large enough to guarantee uniqueness across a large database, so using a user-created table's OID column as a primary key is discouraged.
+ +Name + |
+Reference + |
+Description + |
+Examples + |
+
---|---|---|---|
OID + |
+- + |
+Numeric object identifier + |
+564182 + |
+
CID + |
+- + |
+A command identifier. This is the data type of the system columns cmin and cmax. Command identifiers are 32-bit quantities. + |
+- + |
+
XID + |
+- + |
+A transaction identifier. This is the data type of the system columns xmin and xmax. Transaction identifiers are also 32-bit quantities. + |
+- + |
+
TID + |
+- + |
+A row identifier. This is the data type of the system column ctid. A row ID is a pair (block number, tuple index within block) that identifies the physical location of the row within its table. + |
+- + |
+
REGCONFIG + |
+pg_ts_config + |
+Text search configuration + |
+english + |
+
REGDICTIONARY + |
+pg_ts_dict + |
+Text search dictionary + |
+simple + |
+
REGOPER + |
+pg_operator + |
+Operator name + |
++ + |
+
REGOPERATOR + |
+pg_operator + |
+Operator with argument types + |
+*(integer,integer) or -(NONE,integer) + |
+
REGPROC + |
+pg_proc + |
+Function name + |
+sum + |
+
REGPROCEDURE + |
+pg_proc + |
+Function with argument types + |
+sum(int4) + |
+
REGCLASS + |
+pg_class + |
+Relation name + |
+pg_type + |
+
REGTYPE + |
+pg_type + |
+Data type name + |
+integer + |
+
The OID type is used for a column in the database system catalog.
+For example:
+SELECT oid FROM pg_class WHERE relname = 'pg_type';
+ oid
+------
+ 1247
+(1 row)
+REGCLASS is an alias type for OID that simplifies looking up OID values.
+For example:
+SELECT attrelid,attname,atttypid,attstattarget FROM pg_attribute WHERE attrelid = 'pg_type'::REGCLASS;
+ attrelid |    attname     | atttypid | attstattarget
+----------+----------------+----------+---------------
+     1247 | xc_node_id     |       23 |             0
+     1247 | tableoid       |       26 |             0
+     1247 | cmax           |       29 |             0
+     1247 | xmax           |       28 |             0
+     1247 | cmin           |       29 |             0
+     1247 | xmin           |       28 |             0
+     1247 | oid            |       26 |             0
+     1247 | ctid           |       27 |             0
+     1247 | typname        |       19 |            -1
+     1247 | typnamespace   |       26 |            -1
+     1247 | typowner       |       26 |            -1
+     1247 | typlen         |       21 |            -1
+     1247 | typbyval       |       16 |            -1
+     1247 | typtype        |       18 |            -1
+     1247 | typcategory    |       18 |            -1
+     1247 | typispreferred |       16 |            -1
+     1247 | typisdefined   |       16 |            -1
+     1247 | typdelim       |       18 |            -1
+     1247 | typrelid       |       26 |            -1
+     1247 | typelem        |       26 |            -1
+     1247 | typarray       |       26 |            -1
+     1247 | typinput       |       24 |            -1
+     1247 | typoutput      |       24 |            -1
+     1247 | typreceive     |       24 |            -1
+     1247 | typsend        |       24 |            -1
+     1247 | typmodin       |       24 |            -1
+     1247 | typmodout      |       24 |            -1
+     1247 | typanalyze     |       24 |            -1
+     1247 | typalign       |       18 |            -1
+     1247 | typstorage     |       18 |            -1
+     1247 | typnotnull     |       16 |            -1
+     1247 | typbasetype    |       26 |            -1
+     1247 | typtypmod      |       23 |            -1
+     1247 | typndims       |       23 |            -1
+     1247 | typcollation   |       26 |            -1
+     1247 | typdefaultbin  |      194 |            -1
+     1247 | typdefault     |       25 |            -1
+     1247 | typacl         |     1034 |            -1
+(38 rows)
GaussDB(DWS) has a number of special-purpose entries that are collectively called pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to declare a function's argument or result type.
+Each of the available pseudo-types is useful in situations where a function's behavior does not correspond to simply taking or returning a value of a specific SQL data type. Table 1 lists all pseudo-types.
+ +Name + |
+Description + |
+
---|---|
any + |
+Indicates that a function accepts any input data type. + |
+
anyelement + |
+Indicates that a function accepts any data type. + |
+
anyarray + |
+Indicates that a function accepts any array data type. + |
+
anynonarray + |
+Indicates that a function accepts any non-array data type. + |
+
anyenum + |
+Indicates that a function accepts any enum data type. + |
+
anyrange + |
+Indicates that a function accepts any range data type. + |
+
cstring + |
+Indicates that a function accepts or returns a null-terminated C string. + |
+
internal + |
+Indicates that a function accepts or returns a server-internal data type. + |
+
language_handler + |
+Indicates that a procedural language call handler is declared to return language_handler. + |
+
fdw_handler + |
+Indicates that a foreign-data wrapper handler is declared to return fdw_handler. + |
+
record + |
+Identifies a function returning an unspecified row type. + |
+
trigger + |
+Indicates that a trigger function is declared to return trigger. + |
+
void + |
+Indicates that a function returns no value. + |
+
opaque + |
+Indicates an obsolete type name that formerly served all the above purposes. + |
+
Functions coded in C (whether built in or dynamically loaded) can be declared to accept or return any of these pseudo data types. It is up to the function author to ensure that the function will behave safely when a pseudo-type is used as an argument type.
+Functions coded in procedural languages can use pseudo-types only as allowed by their implementation languages. At present the procedural languages all forbid use of a pseudo-type as argument type, and allow only void and record as a result type. Some also support polymorphic functions using the anyelement, anyarray, anynonarray, anyenum, and anyrange types.
+The internal pseudo-type is used to declare functions that are meant only to be called internally by the database system, and not by direct call in an SQL query. If a function has at least one internal-type argument, it cannot be called from SQL. You are not advised to create any function that is declared to return internal unless the function has at least one internal argument.
+For example:
+-- Create or replace the showall() function:
+CREATE OR REPLACE FUNCTION showall() RETURNS SETOF record
+AS $$ SELECT count(*) from tpcds.store_sales where ss_customer_sk = 9692; $$
+LANGUAGE SQL;
+
+-- Invoke the showall() function:
+SELECT showall();
+ showall
+---------
+ (35)
+(1 row)
+
+-- Delete the function:
+DROP FUNCTION showall();
Table 1 lists the data types supported by column-store tables.
+ +Category + |
+Data Type + |
+Length + |
+Supported + |
+
---|---|---|---|
Numeric types + |
+smallint + |
+2 + |
+Yes + |
+
integer + |
+4 + |
+Yes + |
+|
bigint + |
+8 + |
+Yes + |
+|
decimal + |
+Variable length + |
+Yes + |
+|
numeric + |
+Variable length + |
+Yes + |
+|
real + |
+4 + |
+Yes + |
+|
double precision + |
+8 + |
+Yes + |
+|
smallserial + |
+2 + |
+Yes + |
+|
serial + |
+4 + |
+Yes + |
+|
bigserial + |
+8 + |
+Yes + |
+|
Monetary types + |
+money + |
+8 + |
+Yes + |
+
Character types + |
+character varying(n), varchar(n) + |
+Variable length + |
+Yes + |
+
character(n), char(n) + |
+n + |
+Yes + |
+|
character, char + |
+1 + |
+Yes + |
+|
text + |
+Variable length + |
+Yes + |
+|
nvarchar2 + |
+Variable length + |
+Yes + |
+|
name + |
+64 + |
+No + |
+|
Date/time types + |
+timestamp with time zone + |
+8 + |
+Yes + |
+
timestamp without time zone + |
+8 + |
+Yes + |
+|
date + |
+4 + |
+Yes + |
+|
time without time zone + |
+8 + |
+Yes + |
+|
time with time zone + |
+12 + |
+Yes + |
+|
interval + |
+16 + |
+Yes + |
+|
Large objects + |
+clob + |
+Variable length + |
+Yes + |
+
blob + |
+Variable length + |
+No + |
+|
Others + |
+... + |
+... + |
+No + |
+
XML data type stores Extensible Markup Language (XML) formatted data. Such data can also be stored as text, but the advantage of the XML data type is that it checks whether each stored value is a well-formed XML value. XML can store well-formed documents and content fragments defined by XML standards. A content fragment can have multiple top-level elements or character nodes.
+For functions that support the XML data type, see XML Functions.
+The syntax is as follows:
+SET XML OPTION { DOCUMENT | CONTENT };
+SET xmloption TO { DOCUMENT | CONTENT };
+When a string value is converted to the XML type without using the XMLPARSE or XMLSERIALIZE function, the XML OPTION session parameter determines whether the value is treated as DOCUMENT or CONTENT.
+The default value is CONTENT, indicating that all types of XML data are allowed.
+Example:
+SET XML OPTION DOCUMENT;
+SET
+SET xmloption TO DOCUMENT;
+SET
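+With DOCUMENT set, a content fragment that has multiple top-level elements is rejected, while the CONTENT setting accepts it. A minimal sketch:
+SELECT xml '<a/><b/>';
+ERROR: invalid XML document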
Syntax:
+SET xmlbinary TO { base64 | hex};
Example:
+SET xmlbinary TO base64;
+SET
+
+SELECT xmlelement(name foo, bytea 'bar');
+   xmlelement
+-----------------
+ <foo>YmFy</foo>
+(1 row)
+
+SET xmlbinary TO hex;
+SET
+
+SELECT xmlelement(name foo, bytea 'bar');
+    xmlelement
+-------------------
+ <foo>626172</foo>
+(1 row)
+The XML data type is special in that it does not provide any comparison operators, because there is no general comparison algorithm for XML data. Consequently, you cannot retrieve rows by comparing an XML value with a search value. XML data should therefore be stored with a separate key column that is used for retrieval. Alternatively, XML values can be converted into character strings and compared, but this is not applicable to common XML comparison scenarios.
+Table 1 lists the constants and macros that can be used in GaussDB(DWS).
+ +Parameter + |
+Description + |
+Examples + |
+||
---|---|---|---|---|
CURRENT_CATALOG + |
+Specifies the current database. + |
+
|
+||
CURRENT_ROLE + |
+Current role + |
+
|
+||
CURRENT_SCHEMA + |
+Current schema + |
+
|
+||
CURRENT_USER + |
+Current user + |
+
|
+||
LOCALTIMESTAMP + |
+Current session time (without time zone) + |
+
|
+||
NULL + |
+Denotes an unknown or missing value. + |
+- + |
+||
+ SESSION_USER + |
+Current system user + |
+
|
+||
SYSDATE + |
+Current system date + |
+
|
+||
USER + |
+Current user, also called CURRENT_USER + |
+
|
+
The usual logical operators include AND, OR, and NOT. SQL uses a three-valued logical system with true, false, and null, which represents "unknown". Their precedence is NOT > AND > OR.
Table 1 lists the operation rules, where a and b represent logical expressions.
| a | b | a AND b | a OR b | NOT a |
| --- | --- | --- | --- | --- |
| TRUE | TRUE | TRUE | TRUE | FALSE |
| TRUE | FALSE | FALSE | TRUE | FALSE |
| TRUE | NULL | NULL | TRUE | FALSE |
| FALSE | FALSE | FALSE | FALSE | TRUE |
| FALSE | NULL | FALSE | NULL | TRUE |
| NULL | NULL | NULL | NULL | NULL |
The operators AND and OR are commutative, that is, you can switch the left and right operands without affecting the result.
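A minimal check of the NULL rules and of commutativity from the table above:
SELECT (NULL AND FALSE) AS a, (FALSE AND NULL) AS b, (NULL OR TRUE) AS c;
 a | b | c
---+---+---
 f | f | t
(1 row)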
Comparison operators are available for all data types and return Boolean values.
All comparison operators are binary operators. Only data types that are the same or can be implicitly converted can be compared using comparison operators.
Table 1 describes the comparison operators provided by GaussDB(DWS).
| Operator | Description |
| --- | --- |
| < | Less than |
| > | Greater than |
| <= | Less than or equal to |
| >= | Greater than or equal to |
| = | Equal to |
| <> or != | Not equal to |
Comparison operators are available for all relevant data types. All comparison operators are binary operators that return values of Boolean type. Expressions like 1 < 2 < 3 are invalid, because there is no comparison operator to compare a Boolean value with 3.
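To test whether a value lies in a range, chain two comparisons with AND instead:
SELECT 1 < 2 AND 2 < 3 AS result;
 result
--------
 t
(1 row)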
GaussDB(DWS) provides string functions and operators for concatenating strings with each other, concatenating strings with non-string data, and matching string patterns.
Description: Specifies the number of bits in a string.
Return type: int
For example:
SELECT bit_length('world');
 bit_length
------------
         40
(1 row)
Description: Removes the longest string consisting only of characters in characters (a space by default) from the start and end of string.
Return type: text
For example:
SELECT btrim('sring' , 'ing');
 btrim
-------
 sr
(1 row)
Description: Number of characters in a string.
Return type: int
For example:
SELECT char_length('hello');
 char_length
-------------
           5
(1 row)
Description: Returns the position, in string, of the occurrence-th match of substring, with the search starting at the given position.
Return type: int
For example:
SELECT instr( 'abcdabcdabcd', 'bcd', 2, 2 );
 instr
-------
     6
(1 row)
Description: Obtains the number of bytes of a specified string.
Return type: int
For example:
SELECT lengthb('hello');
 lengthb
---------
       5
(1 row)
Description: Returns the first n characters in the string.
Return type: text
For example:
SELECT left('abcde', 2);
 left
------
 ab
(1 row)
Description: Number of characters in string in the given encoding. The string must be valid in this encoding.
Return type: int
For example:
SELECT length('jose', 'UTF8');
 length
--------
      4
(1 row)
Description: Fills up the string to the specified length by prepending the characters fill (a space by default). If the string is already longer than length, it is truncated (on the right).
Return type: text
For example:
SELECT lpad('hi', 5, 'xyza');
 lpad
-------
 xyzhi
(1 row)
Description: Number of bytes in a string.
Return type: int
For example:
SELECT octet_length('jose');
 octet_length
--------------
            4
(1 row)
Description: Replaces a substring. FROM int indicates the start position of the replacement in the first string. for int indicates the number of characters replaced in the first string.
Return type: text
For example:
SELECT overlay('hello' placing 'world' from 2 for 3 );
 overlay
---------
 hworldo
(1 row)
Description: Location of the specified substring.
Return type: int
For example:
SELECT position('ing' in 'string');
 position
----------
        4
(1 row)
Description: Current client encoding name.
Return type: name
For example:
SELECT pg_client_encoding();
 pg_client_encoding
--------------------
 UTF8
(1 row)
Description: Returns the given string suitably quoted to be used as an identifier in an SQL statement string (quotation marks are added as required). Quotes are added only if necessary (that is, if the string contains non-identifier characters or would be case-folded). Embedded quotes are properly doubled.
Return type: text
For example:
SELECT quote_ident('hello world');
  quote_ident
---------------
 "hello world"
(1 row)
Description: Returns the given string suitably quoted to be used as a string literal in an SQL statement string (quotation marks are added as required).
Return type: text
For example:
SELECT quote_literal('hello');
 quote_literal
---------------
 'hello'
(1 row)
In a command like the following, the text is escaped:
SELECT quote_literal(E'O\'hello');
 quote_literal
---------------
 'O''hello'
(1 row)
In a command like the following, the backslash is properly doubled:
SELECT quote_literal('O\hello');
 quote_literal
---------------
 E'O\\hello'
(1 row)
If the parameter is null, NULL is returned. If the parameter may be null, you are advised to use quote_nullable.
SELECT quote_literal(NULL);
 quote_literal
---------------

(1 row)
Description: Coerces the given value to text and then quotes it as a literal.
Return type: text
For example:
SELECT quote_literal(42.5);
 quote_literal
---------------
 '42.5'
(1 row)
In a command like the following, the given value is escaped:
SELECT quote_literal(E'O\'42.5');
 quote_literal
---------------
 'O''42.5'
(1 row)
In a command like the following, the backslash is properly doubled:
SELECT quote_literal('O\42.5');
 quote_literal
---------------
 E'O\\42.5'
(1 row)
Description: Returns the given string suitably quoted to be used as a string literal in an SQL statement string (quotation marks are added as required).
Return type: text
For example:
SELECT quote_nullable('hello');
 quote_nullable
----------------
 'hello'
(1 row)
In a command like the following, the text is escaped:
SELECT quote_nullable(E'O\'hello');
 quote_nullable
----------------
 'O''hello'
(1 row)
In a command like the following, the backslash is properly doubled:
SELECT quote_nullable('O\hello');
 quote_nullable
----------------
 E'O\\hello'
(1 row)
If the parameter is null, NULL is returned.
SELECT quote_nullable(NULL);
 quote_nullable
----------------
 NULL
(1 row)
Description: Converts the given value to text and then quotes it as a literal.
Return type: text
For example:
SELECT quote_nullable(42.5);
 quote_nullable
----------------
 '42.5'
(1 row)
In a command like the following, the given value is escaped:
SELECT quote_nullable(E'O\'42.5');
 quote_nullable
----------------
 'O''42.5'
(1 row)
In a command like the following, the backslash is properly doubled:
SELECT quote_nullable('O\42.5');
 quote_nullable
----------------
 E'O\\42.5'
(1 row)
If the parameter is null, NULL is returned.
SELECT quote_nullable(NULL);
 quote_nullable
----------------
 NULL
(1 row)
Description: Extracts a substring. from int indicates the start position of the extraction. for int indicates the number of characters extracted.
Return type: text
For example:
SELECT substring('Thomas' from 2 for 3);
 substring
-----------
 hom
(1 row)
Description: Extracts a substring matching a POSIX regular expression. It returns the text that matches the pattern. If no match is found, a null value is returned.
Return type: text
For example:
SELECT substring('Thomas' from '...$');
 substring
-----------
 mas
(1 row)
SELECT substring('foobar' from 'o(.)b');
 result
--------
 o
(1 row)
SELECT substring('foobar' from '(o(.)b)');
 result
--------
 oob
(1 row)
If the POSIX pattern contains any parentheses, the portion of the text that matched the first parenthesized sub-expression (the one whose left parenthesis comes first) is returned. You can put parentheses around the whole expression if you want to use parentheses within it without triggering this exception.
Description: Extracts a substring matching an SQL regular expression. The specified pattern must match the entire data string; otherwise the function fails and returns null. To indicate the part of the pattern that should be returned on success, the pattern must contain two occurrences of the escape character followed by a double quote ("). The text matching the portion of the pattern between these markers is returned.
Return type: text
For example:
SELECT substring('Thomas' from '%#"o_a#"_' for '#');
 substring
-----------
 oma
(1 row)
Description: String concatenation function.
Return type: raw
For example:
SELECT rawcat('ab','cd');
 rawcat
--------
 ABCD
(1 row)
Description: Performs regular-expression pattern matching.
Return type: bool
For example:
SELECT regexp_like('str','[ac]');
 regexp_like
-------------
 f
(1 row)
Description: Extracts substrings matching a regular expression. Its function is similar to substr. When a regular expression contains multiple parallel parentheses, it also needs to be processed.
Return type: text
For example:
SELECT regexp_substr('str','[ac]');
 regexp_substr
---------------

(1 row)
Description: Returns all captured substrings resulting from matching a POSIX regular expression against the string. If the pattern does not match, the function returns no rows. If the pattern contains no parenthesized sub-expressions, then each row returned is a single-element text array containing the substring matching the whole pattern. If the pattern contains parenthesized sub-expressions, the function returns a text array whose nth element is the substring matching the nth parenthesized sub-expression of the pattern.
The optional flags argument contains zero or more single-letter flags that change the function's behavior. i indicates case-insensitive matching. g indicates that every matching substring is returned, instead of only the first one.
If the last parameter is provided but the parameter value is an empty string ('') and the SQL compatibility mode of the database is set to ORA, the returned result is an empty set. This is because the ORA compatibility mode treats the empty string ('') as NULL. To avoid this problem, do not pass an empty flags string; omit the argument instead.
Return type: setof text[]
For example:
SELECT regexp_matches('foobarbequebaz', '(bar)(beque)');
 regexp_matches
----------------
 {bar,beque}
(1 row)
SELECT regexp_matches('foobarbequebaz', 'barbeque');
 regexp_matches
----------------
 {barbeque}
(1 row)
SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g');
    result
--------------
 {bar,beque}
 {bazil,barf}
(2 rows)
Description: Splits string using a POSIX regular expression as the delimiter. The regexp_split_to_array function behaves the same as regexp_split_to_table, except that regexp_split_to_array returns its result as an array of text.
Return type: text[]
For example:
SELECT regexp_split_to_array('hello world', E'\\s+');
 regexp_split_to_array
-----------------------
 {hello,world}
(1 row)
Description: Splits string using a POSIX regular expression as the delimiter. If there is no match to the pattern, the function returns the string. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string.
The flags parameter is a text string containing zero or more single-letter flags that change the function's behavior. i indicates case-insensitive matching. g indicates that every match is used as a delimiter, instead of only the first one.
Return type: setof text
For example:
SELECT regexp_split_to_table('hello world', E'\\s+');
 regexp_split_to_table
-----------------------
 hello
 world
(2 rows)
Description: Repeats string the specified number of times.
Return type: text
For example:
SELECT repeat('Pg', 4);
  repeat
----------
 PgPgPgPg
(1 row)
Description: Replaces all occurrences in string of substring from with substring to.
Return type: text
For example:
SELECT replace('abcdefabcdef', 'cd', 'XXX');
    replace
----------------
 abXXXefabXXXef
(1 row)
Description: Returns the reversed string.
Return type: text
For example:
SELECT reverse('abcde');
 reverse
---------
 edcba
(1 row)
Description: Returns the last n characters in the string.
Return type: text
For example:
SELECT right('abcde', 2);
 right
-------
 de
(1 row)

SELECT right('abcde', -2);
 right
-------
 cde
(1 row)
Description: Fills up the string to length by appending the characters fill (a space by default). If the string is already longer than length, it is truncated.
Return type: text
For example:
SELECT rpad('hi', 5, 'xy');
 rpad
-------
 hixyx
(1 row)
Description: Removes the longest string containing only characters from characters (a space by default) from the end of string.
Return type: text
For example:
SELECT rtrim('trimxxxx', 'x');
 rtrim
-------
 trim
(1 row)
Description: Obtains and returns the parameter values of a specified namespace.
Return type: text
For example:
SELECT SYS_CONTEXT ( 'postgres' , 'archive_mode');
 sys_context
-------------

(1 row)
Description: Extracts a substring, in bytes. The first int indicates the start position of the extraction. The second int indicates the number of bytes extracted.
Return type: text
For example:
SELECT substrb('string',2,3);
 substrb
---------
 tri
(1 row)
Description: Extracts a substring, in bytes. int indicates the start position of the extraction.
Return type: text
For example:
SELECT substrb('string',2);
 substrb
---------
 tring
(1 row)
Description: Concatenates strings.
Return type: text
For example:
SELECT 'MPP'||'DB' AS RESULT;
 result
--------
 MPPDB
(1 row)
Description: Concatenates strings and non-strings.
Return type: text
For example:
SELECT 'Value: '||42 AS RESULT;
  result
-----------
 Value: 42
(1 row)
Description: Splits string on delimiter and returns the fieldth column (counting from the first occurrence of the delimiter).
Return type: text
For example:
SELECT split_part('abc~@~def~@~ghi', '~@~', 2);
 split_part
------------
 def
(1 row)
Description: Specifies the position of a substring. It is the same as position(substring in string), but the parameter order is reversed.
Return type: int
For example:
SELECT strpos('source', 'rc');
 strpos
--------
      4
(1 row)
Description: Converts a number to its hexadecimal representation.
Return type: text
For example:
SELECT to_hex(2147483647);
  to_hex
----------
 7fffffff
(1 row)
Description: Any character in string that matches a character in the from set is replaced by the corresponding character in the to set. If from is longer than to, extra characters in from are removed.
Return type: text
For example:
SELECT translate('12345', '143', 'ax');
 translate
-----------
 a2x5
(1 row)
Description: Obtains the number of characters in a string.
Return type: integer
For example:
SELECT length('abcd');
 length
--------
      4
(1 row)
Description: Obtains the number of bytes in a string. The value depends on the character set (GBK or UTF8).
Return type: integer
For example:
SELECT lengthb('hello');
 lengthb
---------
       5
(1 row)
Extracts substrings from a string.
from indicates the start position of the extraction.
Return type: varchar
For example:
If the value of from is positive:
SELECT substr('ABCDEF',2);
 substr
--------
 BCDEF
(1 row)
If the value of from is negative:
SELECT substr('ABCDEF',-2);
 substr
--------
 EF
(1 row)
Extracts substrings from a string.
from indicates the start position of the extraction.
count indicates the length of the extracted substring.
Return type: varchar
For example:
If the value of from is positive:
SELECT substr('ABCDEF',2,2);
 substr
--------
 BC
(1 row)
If the value of from is negative:
SELECT substr('ABCDEF',-3,2);
 substr
--------
 DE
(1 row)
Description: The functionality of this function is the same as that of SUBSTR(string,from), except that the calculation unit is byte.
Return type: bytea
For example:
SELECT substrb('ABCDEF',-2);
 substrb
---------
 EF
(1 row)
Description: The functionality of this function is the same as that of SUBSTR(string,from,count), except that the calculation unit is byte.
Return type: bytea
For example:
SELECT substrb('ABCDEF',2,2);
 substrb
---------
 BC
(1 row)
Description: Removes the longest string containing only the characters (a space by default) from the start, end, or both ends of the string.
Return type: varchar
For example:
SELECT trim(BOTH 'x' FROM 'xTomxx');
 btrim
-------
 Tom
(1 row)
SELECT trim(LEADING 'x' FROM 'xTomxx');
 ltrim
-------
 Tomxx
(1 row)
SELECT trim(TRAILING 'x' FROM 'xTomxx');
 rtrim
-------
 xTom
(1 row)
Description: Removes the longest string containing only characters from characters (a space by default) from the end of string.
Return type: varchar
For example:
SELECT rtrim('TRIMxxxx','x');
 rtrim
-------
 TRIM
(1 row)
Description: Removes the longest string containing only characters from characters (a space by default) from the start of string.
Return type: varchar
For example:
SELECT ltrim('xxxxTRIM','x');
 ltrim
-------
 TRIM
(1 row)
Description: Converts the string to uppercase.
Return type: varchar
For example:
SELECT upper('tom');
 upper
-------
 TOM
(1 row)
Description: Converts the string to lowercase.
Return type: varchar
For example:
SELECT lower('TOM');
 lower
-------
 tom
(1 row)
Description: Fills up the string to length by appending the characters fill (a space by default). If the string is already longer than length, it is truncated.
length in GaussDB(DWS) indicates the character length. One Chinese character is counted as one character.
Return type: varchar
For example:
SELECT rpad('hi',5,'xyza');
 rpad
-------
 hixyz
(1 row)
SELECT rpad('hi',5,'abcdefg');
 rpad
-------
 hiabc
(1 row)
Description: Returns the position of the substring that occurs the occurrence-th time (first by default), searching from position (1 by default) in the string.
In this function, the calculation unit is character. One Chinese character is one character.
Return type: integer
For example:
SELECT instr('corporate floor','or', 3);
 instr
-------
     5
(1 row)
SELECT instr('corporate floor','or',-3,2);
 instr
-------
     2
(1 row)
Description: Converts the first letter of each word in the string to uppercase and the remaining letters to lowercase.
Return type: text
For example:
SELECT initcap('hi THOMAS');
  initcap
-----------
 Hi Thomas
(1 row)
Description: Returns the ASCII code of the first character in the string.
Return type: integer
For example:
SELECT ascii('xyz');
 ascii
-------
   120
(1 row)
Description: Replaces all occurrences of search_string in the string with replacement_string.
Return type: varchar
For example:
SELECT replace('jack and jue','j','bl');
    replace
----------------
 black and blue
(1 row)
Description: Prepends a series of repeat_string (a space by default) on the left of the string to generate a new string with the total length of n.
If the length of the string is longer than the specified length, the function truncates the string and returns the substring with the specified length.
Return type: varchar
For example:
SELECT lpad('PAGE 1',15,'*.');
      lpad
-----------------
 *.*.*.*.*PAGE 1
(1 row)
SELECT lpad('hello world',5,'abcd');
 lpad
-------
 hello
(1 row)
Description: Concatenates str1 and str2 and returns the resulting string.
Return type: varchar
For example:
SELECT concat('Hello', ' World!');
    concat
--------------
 Hello World!
(1 row)
Description: Returns the character with the given ASCII code.
Return type: varchar
For example:
SELECT chr(65);
 chr
-----
 A
(1 row)
Description: Extracts substrings from a regular expression.
Return type: varchar
For example:
SELECT regexp_substr('500 Hello World, Redwood Shores, CA', ',[^,]+,') "REGEXPR_SUBSTR";
  REGEXPR_SUBSTR
-------------------
 , Redwood Shores,
(1 row)
Description: Replaces substrings matching a POSIX regular expression. The source string is returned unchanged if there is no match to the pattern. If there is a match, the source string is returned with the replacement string substituted for the matching substring.
The replacement string can contain \n, where n is 1 through 9, to indicate that the source substring matching the nth parenthesized sub-expression of the pattern should be inserted, and it can contain \& to indicate that the substring matching the entire pattern should be inserted.
The optional flags argument contains zero or more single-letter flags that change the function's behavior. The following table lists the options of the flags argument.
| Option | Description |
| --- | --- |
| g | Replaces all the matched substrings. (By default, only the first matched substring is replaced.) |
| B | Preferentially uses the boost regex regular expression library and its regular expression syntax. By default, Henry Spencer's regular expression library and its syntax are used. In some cases, Henry Spencer's library and syntax are still used even if this option is specified. |
| b | Uses POSIX Basic Regular Expressions (BREs) for matching. |
| c | Case-sensitive matching. |
| e | Uses POSIX Extended Regular Expressions (EREs) for matching. If neither b nor e is specified and Henry Spencer's regular expression library is used, Advanced Regular Expressions (AREs), similar to Perl Compatible Regular Expressions (PCREs), are used for matching; if neither b nor e is specified and the boost regex library is used, PCREs are used for matching. |
| i | Case-insensitive matching. |
| m | Line-feed-sensitive matching, which has the same meaning as option n. |
| n | Line-feed-sensitive matching. When this option takes effect, the line separator affects the matching of the metacharacters ., ^, $, and [^. |
| p | Partial line-feed-sensitive matching. When this option takes effect, the line separator affects the matching of the metacharacters . and [^. |
| q | Treats the regular expression as a text string enclosed in double quotation marks ("") and consisting only of ordinary characters. |
| s | Non-line-feed-sensitive matching. |
| t | Compact syntax (default). When this option takes effect, all characters matter. |
| w | Reverse partial line-feed-sensitive matching. When this option takes effect, the line separator affects the matching of the metacharacters ^ and $. |
| x | Extended syntax. In contrast to the compact syntax, whitespace characters in regular expressions are ignored: spaces, horizontal tabs, newlines, and any other characters in the space character class. |
Return type: varchar
For example:
SELECT regexp_replace('Thomas', '.[mN]a.', 'M');
 regexp_replace
----------------
 ThM
(1 row)
SELECT regexp_replace('foobarbaz','b(..)', E'X\\1Y', 'g') AS RESULT;
   result
-------------
 fooXarYXazY
(1 row)
Description: Concatenates all following arguments, using the first parameter as the separator.
Return type: text
For example:
SELECT concat_ws(',', 'ABCDE', 2, NULL, 22);
 concat_ws
------------
 ABCDE,2,22
(1 row)
Description: Converts the bytea string to dest_encoding. src_encoding specifies the source encoding. The string must be valid in this encoding.
Return type: bytea
For example:
SELECT convert('text_in_utf8', 'UTF8', 'GBK');
          convert
----------------------------
 \x746578745f696e5f75746638
(1 row)
If no rule exists for converting between the source and target encodings (for example, GBK and LATIN1), the string is returned without conversion. See the pg_conversion system catalog for details.
For example:
show server_encoding;
 server_encoding
-----------------
 LATIN1
(1 row)

SELECT convert_from('some text', 'GBK');
 convert_from
--------------
 some text
(1 row)

SELECT convert_to('some text', 'GBK');
      convert_to
----------------------
 \x736f6d652074657874
(1 row)

SELECT convert('some text', 'GBK', 'LATIN1');
       convert
----------------------
 \x736f6d652074657874
(1 row)
Description: Converts the bytea string to the database encoding.
src_encoding specifies the source encoding. The string must be valid in this encoding.
Return type: text
For example:
SELECT convert_from('text_in_utf8', 'UTF8');
 convert_from
--------------
 text_in_utf8
(1 row)
SELECT convert_from('\x6461746162617365','gbk');
 convert_from
--------------
 database
(1 row)
Description: Converts string to dest_encoding.
Return type: bytea
For example:
SELECT convert_to('some text', 'UTF8');
      convert_to
----------------------
 \x736f6d652074657874
(1 row)
SELECT convert_to('database', 'gbk');
     convert_to
--------------------
 \x6461746162617365
(1 row)
Description: Pattern matching function.
If the pattern does not contain a percent sign (%) or an underscore (_), the pattern represents only itself; in this case LIKE behaves like the equals operator. An underscore (_) in the pattern matches any single character, and a percent sign (%) matches zero or more characters.
To match a literal underscore (_) or percent sign (%), the corresponding character in pattern must be preceded by the escape character. The default escape character is the backslash (\) and can be changed using the ESCAPE clause. To match the escape character itself, write it twice.
Return type: boolean
For example:
SELECT 'AA_BBCC' LIKE '%A@_B%' ESCAPE '@' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'AA_BBCC' LIKE '%A@_B%' AS RESULT;
 result
--------
 f
(1 row)
SELECT 'AA@_BBCC' LIKE '%A@_B%' AS RESULT;
 result
--------
 t
(1 row)
Description: Performs regular-expression pattern matching.
source_string indicates the source string and pattern indicates the matching pattern of the regular expression. match_parameter indicates the matching options.
If match_parameter is omitted, matching is case-sensitive by default, "." does not match newline characters, and source_string is treated as a single line.
Return type: boolean
For example:
SELECT regexp_like('ABC', '[A-Z]');
 regexp_like
-------------
 t
(1 row)
SELECT regexp_like('ABC', '[D-Z]');
 regexp_like
-------------
 f
(1 row)
SELECT regexp_like('ABC', '[A-Z]','i');
 regexp_like
-------------
 t
(1 row)
Description: Formats a string.
Return type: text
For example:
SELECT format('Hello %s, %1$s', 'World');
       format
--------------------
 Hello World, World
(1 row)
Description: Hashes a string with the MD5 algorithm and returns the value in hexadecimal form.
MD5 is insecure and is not recommended.
Return type: text
For example:
SELECT md5('ABC');
               md5
----------------------------------
 902fbdd2b1df0c4f70b4a5d23525e932
(1 row)
Description: Decodes binary data from its textual representation.
Return type: bytea
For example:
SELECT decode('ZGF0YWJhc2U=', 'base64');
       decode
--------------------
 \x6461746162617365
(1 row)

SELECT convert_from('\x6461746162617365','utf-8');
 convert_from
--------------
 database
(1 row)
Description: Encodes binary data into a textual representation.
Return type: text
For example:
SELECT encode('database', 'base64');
    encode
--------------
 ZGF0YWJhc2U=
(1 row)
SQL defines some string functions that use keywords, rather than commas, to separate arguments.
Description: Number of bytes in a binary string.
Return type: int
For example:
SELECT octet_length(E'jo\\000se'::bytea) AS RESULT;
 result
--------
      5
(1 row)
Description: Replaces a substring.
Return type: bytea
For example:
SELECT overlay(E'Th\\000omas'::bytea placing E'\\002\\003'::bytea from 2 for 3) AS RESULT;
     result
----------------
 \x5402036d6173
(1 row)
Description: Location of the specified substring.
Return type: int
For example:
SELECT position(E'\\000om'::bytea in E'Th\\000omas'::bytea) AS RESULT;
 result
--------
      3
(1 row)
Description: Extracts a substring.
Return type: bytea
For example:
SELECT substring(E'Th\\000omas'::bytea from 2 for 3) AS RESULT;
  result
----------
 \x68006f
(1 row)
For example, truncate a time string to obtain its hour portion:
select substring('2022-07-18 24:38:15',12,2) AS RESULT;
 result
--------
 24
(1 row)
Description: Removes the longest string containing only bytes from bytes from the start and end of string.
Return type: bytea
For example:
SELECT trim(E'\\000'::bytea from E'\\000Tom\\000'::bytea) AS RESULT;
  result
----------
 \x546f6d
(1 row)
GaussDB(DWS) also provides the regular function-call syntax for invoking these operations.
Description: Removes the longest string containing only bytes from bytes from the start and end of string.
Return type: bytea
For example:
SELECT btrim(E'\\000trim\\000'::bytea, E'\\000'::bytea) AS RESULT;
   result
------------
 \x7472696d
(1 row)
Description: Extracts a bit from a string.
Return type: int
For example:
SELECT get_bit(E'Th\\000omas'::bytea, 45) AS RESULT;
 result
--------
      1
(1 row)
Description: Extracts a byte from a string.
Return type: int
For example:
SELECT get_byte(E'Th\\000omas'::bytea, 4) AS RESULT;
 result
--------
    109
(1 row)
Description: Sets a bit in a string.
Return type: bytea
For example:
SELECT set_bit(E'Th\\000omas'::bytea, 45, 0) AS RESULT;
      result
------------------
 \x5468006f6d4173
(1 row)
Description: Sets a byte in a string.
Return type: bytea
For example:
SELECT set_byte(E'Th\\000omas'::bytea, 4, 64) AS RESULT;
      result
------------------
 \x5468006f406173
(1 row)
Aside from the usual comparison operators, the following operators can be used. Bit-string operands of &, |, and # must be of equal length. When bit shifting, the original length of the string is preserved by zero padding (if necessary).
Description: Concatenates bit strings.
For example:
SELECT B'10001' || B'011' AS RESULT;
  result
----------
 10001011
(1 row)
Description: AND operation between bit strings.
For example:
SELECT B'10001' & B'01101' AS RESULT;
 result
--------
 00001
(1 row)
Description: OR operation between bit strings.
For example:
SELECT B'10001' | B'01101' AS RESULT;
 result
--------
 11101
(1 row)
Description: XOR operation between bit strings. If the bits at the same position in the two strings are both 1 or both 0, that position returns 0; otherwise it returns 1.
For example:
SELECT B'10001' # B'01101' AS RESULT;
 result
--------
 11100
(1 row)
Description: NOT operation on a bit string.
For example:
SELECT ~B'10001' AS RESULT;
 result
----------
 01110
(1 row)
Description: Binary left shift.
For example:
SELECT B'10001' << 3 AS RESULT;
 result
----------
 01000
(1 row)
Description: Binary right shift.
For example:
SELECT B'10001' >> 2 AS RESULT;
 result
----------
 00100
(1 row)
The following SQL-standard functions work on bit strings as well as character strings: length, bit_length, octet_length, position, substring, and overlay.
The following functions work on bit strings as well as binary strings: get_bit and set_bit. When working with a bit string, these functions number the first (leftmost) bit of the string as bit 0.
SELECT 44::bit(10) AS RESULT;
   result
------------
 0000101100
(1 row)

SELECT 44::bit(3) AS RESULT;
 result
--------
 100
(1 row)

SELECT cast(-44 as bit(12)) AS RESULT;
    result
--------------
 111111010100
(1 row)

SELECT '1110'::bit(4)::integer AS RESULT;
 result
--------
 14
(1 row)
Casting to just "bit" means casting to bit(1), and so will deliver only the least significant bit of the integer.
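A minimal sketch of this behavior:
SELECT 44::bit AS result;  -- 44 is 101100 in binary; only the least significant bit (0) is kept
 result
--------
 0
(1 row)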
There are three separate approaches to pattern matching provided by the database: the traditional SQL LIKE operator, the more recent SIMILAR TO operator, and POSIX-style regular expressions. Besides these basic operators, functions can be used to extract or replace matching substrings and to split a string at matching locations.
Description: Checks whether the string matches the pattern string following LIKE. The LIKE expression returns true if the string matches the supplied pattern. (As expected, the NOT LIKE expression returns false if LIKE returns true, and vice versa.)
When standard_conforming_strings is set to off, any backslashes you write in literal string constants need to be doubled. Therefore, a pattern matching a single backslash is actually written with four backslashes in the statement. You can avoid this by selecting a different escape character using ESCAPE, so that the backslash is no longer a special character to LIKE (but it is still special to the string-literal parser, so you still need two backslashes). You can also select no escape character by writing ESCAPE ''. This effectively disables the escape mechanism, which makes it impossible to turn off the special meaning of underscores and percent signs in the pattern.
For example:
SELECT 'abc' LIKE 'abc' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' LIKE 'a%' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' LIKE '_b_' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' LIKE 'c' AS RESULT;
 result
--------
 f
(1 row)
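A sketch of escaping in practice, assuming standard_conforming_strings is on so that a backslash in the literal reaches LIKE unchanged:
SELECT 'a_c' LIKE 'a\_c' AS RESULT;  -- the escaped _ matches only a literal underscore
 result
--------
 t
(1 row)
SELECT 'abc' LIKE 'a\_c' AS RESULT;
 result
--------
 f
(1 row)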
Description: The SIMILAR TO operator returns true or false depending on whether the pattern matches the given string. It is similar to LIKE, except that it interprets the pattern using the SQL standard's definition of a regular expression.

| Metacharacter | Description |
| --- | --- |
| \| | Denotes alternation (either of two alternatives). |
| * | Repetition of the previous item zero or more times. |
| + | Repetition of the previous item one or more times. |
| ? | Repetition of the previous item zero or one time. |
| {m} | Repetition of the previous item exactly m times. |
| {m,} | Repetition of the previous item m or more times. |
| {m,n} | Repetition of the previous item at least m and at most n times. |
| () | Groups items into a single logical item. |
| [...] | Specifies a character class, just as in POSIX regular expressions. |
Regular expressions:
The substring function with three parameters, substring(string from pattern for escape), provides extraction of a substring that matches an SQL regular expression pattern.
Example:
SELECT 'abc' SIMILAR TO 'abc' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' SIMILAR TO 'a' AS RESULT;
 result
--------
 f
(1 row)
SELECT 'abc' SIMILAR TO '%(b|d)%' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' SIMILAR TO '(b|c)%' AS RESULT;
 result
--------
 f
(1 row)
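A minimal sketch of the three-parameter substring form described above, using # as the escape character and double quotes as the return markers:
SELECT substring('foobar' from '%#"o_b#"%' for '#') AS result;
 result
--------
 oob
(1 row)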
Description: A regular expression is a character sequence that is an abbreviated definition of a set of strings (a regular set). If a string is a member of the regular set described by a regular expression, the string matches the regular expression. POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and SIMILAR TO operators. Table 1 lists all available operators for pattern matching using POSIX regular expressions.

| Operator | Description | Example |
| --- | --- | --- |
| ~ | Matches the regular expression, case-sensitive. | 'thomas' ~ '.*thomas.*' |
| ~* | Matches the regular expression, case-insensitive. | 'thomas' ~* '.*Thomas.*' |
| !~ | Does not match the regular expression, case-sensitive. | 'thomas' !~ '.*Thomas.*' |
| !~* | Does not match the regular expression, case-insensitive. | 'thomas' !~* '.*vadim.*' |

| Metacharacter | Description |
| --- | --- |
| ^ | Matches at the start of the string. |
| $ | Matches at the end of the string. |
| . | Matches any single character. |
Regular expressions:
The regular-expression split functions ignore zero-length matches, which can occur at the beginning or end of a string or after a previous match. This is contrary to the strict definition of regular expression matching, which is implemented by regexp_matches, but ignoring them is usually the most convenient behavior in practice.
For example:
SELECT 'abc' ~ 'Abc' AS RESULT;
 result
--------
 f
(1 row)
SELECT 'abc' ~* 'Abc' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' !~ 'Abc' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' !~* 'Abc' AS RESULT;
 result
--------
 f
(1 row)
SELECT 'abc' ~ '^a' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' ~ '(b|d)' AS RESULT;
 result
--------
 t
(1 row)
SELECT 'abc' ~ '^(b|c)' AS RESULT;
 result
--------
 f
(1 row)
Although most regular expression searches execute quickly, regular expressions can still take arbitrary amounts of time and memory to process. Accepting regular expression patterns from non-secure sources is not recommended. If you must do so, it is advisable to impose a statement timeout. Searches using SIMILAR TO have the same security risks, since SIMILAR TO provides many of the same capabilities as POSIX-style regular expressions. LIKE searches are much simpler than the other two options, so accepting patterns from non-secure sources is safer with LIKE.
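As a hedged sketch, a statement timeout can bound the evaluation of an untrusted pattern; the 2-second limit here is an arbitrary example value:
SET statement_timeout = '2s';  -- abort any statement that runs longer than 2 seconds
SELECT 'some input text' ~ '.*untrusted pattern.*' AS result;
RESET statement_timeout;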
Description: Addition.
For example:
SELECT 2+3 AS RESULT;
 result
--------
      5
(1 row)
Description: Subtraction.
For example:
SELECT 2-3 AS RESULT;
 result
--------
     -1
(1 row)
Description: Multiplication.
For example:
SELECT 2*3 AS RESULT;
 result
--------
      6
(1 row)
Description: Division (the result is not rounded).
For example:
SELECT 4/2 AS RESULT;
 result
--------
      2
(1 row)
SELECT 4/3 AS RESULT;
      result
------------------
 1.33333333333333
(1 row)
Description: Positive/negative.
For example:
SELECT -2 AS RESULT;
 result
--------
     -2
(1 row)
Description: Modulo (obtains the remainder).
For example:
SELECT 5%4 AS RESULT;
 result
--------
      1
(1 row)
Description: Absolute value.
For example:
SELECT @ -5.0 AS RESULT;
 result
--------
    5.0
(1 row)
Description: Power (exponentiation).
In MySQL-compatible mode, this operator means exclusive or. For details, see operator # in Bit String Functions and Operators.
For example:
SELECT 2.0^3.0 AS RESULT;
       result
--------------------
 8.0000000000000000
(1 row)
Description: Square root.
For example:
SELECT |/ 25.0 AS RESULT;
 result
--------
      5
(1 row)
Description: Cube root.
For example:
SELECT ||/ 27.0 AS RESULT;
 result
--------
      3
(1 row)
Description: Factorial (postfix operator).
For example:
SELECT 5! AS RESULT;
 result
--------
    120
(1 row)
Description: Factorial (prefix operator).
For example:
SELECT !!5 AS RESULT;
 result
--------
    120
(1 row)
Description: Binary AND.
For example:
SELECT 91&15 AS RESULT;
 result
--------
     11
(1 row)
Description: Binary OR.
For example:
SELECT 32|3 AS RESULT;
 result
--------
     35
(1 row)
Description: Binary XOR.
For example:
SELECT 17#5 AS RESULT;
 result
--------
     20
(1 row)
Description: Binary NOT.
For example:
SELECT ~1 AS RESULT;
 result
--------
     -2
(1 row)
Description: Binary shift left.
For example:
SELECT 1<<4 AS RESULT;
 result
--------
     16
(1 row)
Description: Binary shift right.
For example:
SELECT 8>>2 AS RESULT;
 result
--------
      2
(1 row)
Description: Absolute value.
Return type: same as the input
For example:
SELECT abs(-17.4);
 abs
------
 17.4
(1 row)
Description: Arc cosine.
Return type: double precision
For example:
SELECT acos(-1);
       acos
------------------
 3.14159265358979
(1 row)
Description: Arc sine.
Return type: double precision
For example:
SELECT asin(0.5);
       asin
------------------
 .523598775598299
(1 row)
Description: Arc tangent.
Return type: double precision
For example:
SELECT atan(1);
       atan
------------------
 .785398163397448
(1 row)
Description: Arc tangent of y/x.
Return type: double precision
For example:
SELECT atan2(2, 1);
      atan2
------------------
 1.10714871779409
(1 row)
Description: Performs an AND (&) operation on two integers.
Return type: bigint
For example:
SELECT bitand(127, 63);
 bitand
--------
     63
(1 row)
Description: Cube root.
Return type: double precision
For example:
SELECT cbrt(27.0);
 cbrt
------
    3
(1 row)
Description: Smallest integer greater than or equal to the argument.
Return type: integer
For example:
SELECT ceil(-42.8);
 ceil
------
  -42
(1 row)
Description: Smallest integer (alias of ceil) greater than or equal to the argument.
Return type: same as the input
For example:
SELECT ceiling(-95.3);
 ceiling
---------
     -95
(1 row)
Description: Cosine.
Return type: double precision
For example:
SELECT cos(-3.1415927);
        cos
-------------------
 -.999999999999999
(1 row)
Description: Cotangent.
Return type: double precision
For example:
SELECT cot(1);
       cot
------------------
 .642092615934331
(1 row)
Description: Converts radians to degrees.
Return type: double precision
For example:
SELECT degrees(0.5);
     degrees
------------------
 28.6478897565412
(1 row)
Description: Integer part of y/x.
Return type: numeric
For example:
SELECT div(9,4);
 div
-----
   2
(1 row)
Description: Exponential (e raised to the given power).
Return type: same as the input
For example:
SELECT exp(1.0);
        exp
--------------------
 2.7182818284590452
(1 row)
Description: Largest integer not greater than the argument.
Return type: same as the input
For example:
SELECT floor(-42.8);
 floor
-------
   -43
(1 row)
Description: Converts degrees to radians.
Return type: double precision
For example:
SELECT radians(45.0);
     radians
------------------
 .785398163397448
(1 row)
Description: Random number between 0.0 and 1.0.
Return type: double precision
For example:
SELECT random();
      random
------------------
 .824823560658842
(1 row)
Description: Natural logarithm.
Return type: same as the input
For example:
SELECT ln(2.0);
        ln
-------------------
 .6931471805599453
(1 row)
Description: Logarithm with 10 as the base (in MySQL-compatible mode, the natural logarithm, as the example shows).
Return type: same as the input
For example:
-- ORA-compatible mode
SELECT log(100.0);
        log
--------------------
 2.0000000000000000
(1 row)
-- TD-compatible mode
SELECT log(100.0);
        log
--------------------
 2.0000000000000000
(1 row)
-- MySQL-compatible mode
SELECT log(100.0);
        log
--------------------
 4.6051701859880914
(1 row)
Description: Logarithm with b as the base.
Return type: numeric
For example:
SELECT log(2.0, 64.0);
        log
--------------------
 6.0000000000000000
(1 row)
Description: Remainder of x/y (modulo).
If the divisor y is 0, x is returned.
Return type: same as the parameter type
For example:
SELECT mod(9,4);
 mod
-----
   1
(1 row)
SELECT mod(9,0);
 mod
-----
   9
(1 row)
Description: π constant.
Return type: double precision
For example:
SELECT pi();
        pi
------------------
 3.14159265358979
(1 row)
Description: x raised to the power of y.
Return type: double precision
For example:
SELECT power(9.0, 3.0);
        power
----------------------
 729.0000000000000000
(1 row)
Description: Integer closest to the input parameter.
Return type: same as the input
For example:
SELECT round(42.4);
 round
-------
    42
(1 row)

SELECT round(42.6);
 round
-------
    43
(1 row)
When rounding a tie (.5), values of numeric type are rounded away from zero, while on most computers real and double precision values are rounded to the nearest even number.
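A minimal sketch of the difference between the two rounding behaviors:
SELECT round(2.5::numeric) AS n, round(2.5::double precision) AS d;
 n | d
---+---
 3 | 2
(1 row)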
Description: Keeps s digits after the decimal point.
Return type: numeric
For example:
SELECT round(42.4382, 2);
 round
-------
 42.44
(1 row)
Description: Sets the seed for subsequent random() calls (value between -1.0 and 1.0, inclusive).
Return type: void
For example:
SELECT setseed(0.54823);
 setseed
---------

(1 row)
Description: Returns the sign of the argument.
Return value: -1 indicates a negative number, 0 indicates zero, and 1 indicates a positive number.
For example:
SELECT sign(-8.4);
 sign
------
   -1
(1 row)
Description: Sine.
Return type: double precision
For example:
SELECT sin(1.57079);
       sin
------------------
 .999999999979986
(1 row)
Description: Square root.
Return type: same as the input
For example:
SELECT sqrt(2.0);
       sqrt
-------------------
 1.414213562373095
(1 row)
Description: Tangent.
Return type: double precision
For example:
SELECT tan(20);
       tan
------------------
 2.23716094422474
(1 row)
Description: Truncates toward zero (keeps the integer part).
Return type: same as the input
For example:
SELECT trunc(42.8);
 trunc
-------
    42
(1 row)
Description: Truncates a number to s digits after the decimal point.
Return type: numeric
For example:
SELECT trunc(42.4382, 2);
 trunc
-------
 42.43
(1 row)
Description: Returns the bucket to which the operand would be assigned in an equidepth histogram with count buckets, ranging from b1 to b2.
Return type: int
For example:
SELECT width_bucket(5.35, 0.024, 10.06, 5);
 width_bucket
--------------
            3
(1 row)
When using date/time operators, add explicit type prefixes to the operands to ensure that the database parses them as you expect and no unexpected results occur.
For example, the following statement is ambiguous without an explicit data type:
SELECT date '2001-10-01' - '7' AS RESULT;
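With explicit type prefixes, the intent is unambiguous. A minimal sketch (the timestamp result for date minus integer follows the table below):
SELECT date '2001-10-01' - integer '7' AS result;
       result
---------------------
 2001-09-24 00:00:00
(1 row)
SELECT date '2001-10-01' - interval '7 days' AS result;
       result
---------------------
 2001-09-24 00:00:00
(1 row)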
| Operator | Description |
| --- | --- |
| + | Add a date and an integer to obtain the date 7 days later. |
| + | Add a date and an interval to obtain the time 1 hour later. |
| + | Add a date and a time to obtain a specific time. |
| + | Add a date and an interval to obtain the time one month later. If the result falls beyond the date range of the month, it is rounded to the last day of the month. |
| + | Add two intervals to obtain their sum. |
| + | Add a timestamp and an interval to obtain the time 23 hours later. |
| + | Add a time and an interval to obtain the time three hours later. |
| - | Subtract one date from another to obtain the difference. |
| - | Subtract an integer from a date; the return is of timestamp type. |
| - | Subtract an interval from a date to obtain the time difference. |
| - | Subtract one time from another to obtain the time difference. |
| - | Subtract an interval from a time to obtain the time difference. |
| - | Subtract an interval from a timestamp to obtain the date difference. |
| - | Subtract one interval from another to obtain the time difference. |
| - | Subtract one timestamp from another to obtain the time difference. |
| - | Obtain the time at the previous day. |
| * | Multiply an interval by a quantity. |
| / | Divide an interval by a quantity to obtain a time segment. |
Description: Subtracts the arguments, producing a symbolic result in years, months, and days. If the result is negative, the returned result is also negative.
Return type: interval
For example:
SELECT age(timestamp '2001-04-10', timestamp '1957-06-13');
           age
-------------------------
 43 years 9 mons 27 days
(1 row)
Description: Subtracts the argument from current_date.
Return type: interval
For example:
SELECT age(timestamp '1957-06-13');
           age
-------------------------
 60 years 2 mons 18 days
(1 row)
Description: Subtracts timestamp1 from timestamp2 and returns the difference in the unit of field. If the difference is negative, this function returns it normally. The field can be year, month, quarter, week, day, hour, minute, second, or microsecond.
Return type: bigint
For example:
SELECT timestampdiff(day, timestamp '2001-02-01', timestamp '2003-05-01 12:05:55');
 timestampdiff
---------------
           819
(1 row)
Description: Specifies the current timestamp of the real-time clock.
Return type: timestamp with time zone
For example:
SELECT clock_timestamp();
        clock_timestamp
-------------------------------
 2017-09-01 16:57:36.636205+08
(1 row)
Description: Specifies the current date.
Return type: date
For example:
SELECT current_date;
    date
------------
 2017-09-01
(1 row)
Description: Specifies the current time.
Return type: time with time zone
For example:
SELECT current_time;
       timetz
--------------------
 16:58:07.086215+08
(1 row)
Description: Specifies the current date and time.
Return type: timestamp with time zone
For example:
SELECT current_timestamp;
       pg_systimestamp
------------------------------
 2017-09-01 16:58:19.22173+08
(1 row)
Description: Obtains the hour.
Equivalent to extract(field from timestamp).
Return type: double precision
For example:
SELECT date_part('hour', timestamp '2001-02-16 20:38:40');
 date_part
-----------
        20
(1 row)
Description: Obtains the month. If the value is greater than 12, obtains the remainder after it is divided by 12.
Equivalent to extract(field from timestamp).
Return type: double precision
For example:
SELECT date_part('month', interval '2 years 3 months');
 date_part
-----------
         3
(1 row)
Description: Truncates to the precision specified by text.
Return type: timestamp
For example:
SELECT date_trunc('hour', timestamp '2001-02-16 20:38:40');
     date_trunc
---------------------
 2001-02-16 20:00:00
(1 row)
Description: Truncates a timestamp to day precision by default.
For example:
SELECT trunc(timestamp '2001-02-16 20:38:40');
        trunc
---------------------
 2001-02-16 00:00:00
(1 row)
Description: Obtains the hour.
Return type: double precision
For example:
SELECT extract(hour from timestamp '2001-02-16 20:38:40');
 date_part
-----------
        20
(1 row)
Description: Obtains the month. If the value is greater than 12, obtains the remainder after it is divided by 12.
Return type: double precision
For example:
SELECT extract(month from interval '2 years 3 months');
 date_part
-----------
         3
(1 row)
Description: Tests for a valid date.
Return type: boolean
For example:
SELECT isfinite(date '2001-02-16');
 isfinite
----------
 t
(1 row)
Description: Tests for a valid timestamp.
Return type: boolean
For example:
SELECT isfinite(timestamp '2001-02-16 21:28:30');
 isfinite
----------
 t
(1 row)
Description: Tests for a valid interval.
Return type: boolean
For example:
SELECT isfinite(interval '4 hours');
 isfinite
----------
 t
(1 row)
Description: Adjusts an interval so that 30-day periods are represented as months.
Return type: interval
For example:
SELECT justify_days(interval '35 days');
 justify_days
--------------
 1 mon 5 days
(1 row)
Description: Adjusts an interval so that 24-hour periods are represented as days.
Return type: interval
For example:
SELECT JUSTIFY_HOURS(INTERVAL '27 HOURS');
 justify_hours
----------------
 1 day 03:00:00
(1 row)
Description: Adjusts an interval using justify_days and justify_hours.
Return type: interval
For example:
SELECT JUSTIFY_INTERVAL(INTERVAL '1 MON -1 HOUR');
 justify_interval
------------------
 29 days 23:00:00
(1 row)
Return type: time
+For example:
+1 +2 +3 +4 +5 | SELECT localtime AS RESULT; + result +---------------- + 16:05:55.664681 +(1 row) + |
Description: Specifies the current date and time.
+Return type: timestamp
+For example:
+1 +2 +3 +4 +5 | SELECT localtimestamp; + timestamp +---------------------------- + 2017-09-01 17:03:30.781902 +(1 row) + |
Description: Timestamp indicating the start of the current transaction.
+Return type: timestamp with time zone
+For example:
+1 +2 +3 +4 +5 | SELECT now(); + now +------------------------------- + 2017-09-01 17:03:42.549426+08 +(1 row) + |
Description: Converts a number to the interval type. num is a numeric-typed number. interval_unit is a string in the following format: 'DAY' | 'HOUR' | 'MINUTE' | 'SECOND'
+You can set the IntervalStyle parameter to oracle to be compatible with the interval output format of the function in the Oracle database.
+For example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 | SELECT numtodsinterval(100, 'HOUR'); + numtodsinterval +----------------- + 100:00:00 +(1 row) + +SET intervalstyle = oracle; +SET +SELECT numtodsinterval(100, 'HOUR'); + numtodsinterval +------------------------------- + +000000004 04:00:00.000000000 +(1 row) + |
Description: Delays the server thread for the specified number of seconds.
Return type: void
For example:
SELECT pg_sleep(10);
 pg_sleep
----------

(1 row)
Description: Specifies the current date and time.
Return type: timestamp with time zone
For example:
SELECT statement_timestamp();
      statement_timestamp
-------------------------------
 2017-09-01 17:04:39.119267+08
(1 row)
Description: Specifies the current date and time.
Return type: timestamp
For example:
SELECT sysdate;
       sysdate
---------------------
 2017-09-01 17:04:49
(1 row)
Description: Specifies the current date and time (like clock_timestamp, but returned as a text string).
Return type: text
For example:
SELECT timeofday();
              timeofday
-------------------------------------
 Fri Sep 01 17:05:01.167506 2017 CST
(1 row)
Description: Specifies the current date and time (equivalent to current_timestamp).
Return type: timestamp with time zone
For example:
SELECT transaction_timestamp();
     transaction_timestamp
-------------------------------
 2017-09-01 17:05:13.534454+08
(1 row)
Description: Returns the date and time n months after the specified date.
Return type: timestamp
For example:
SELECT add_months(to_date('2017-5-29', 'yyyy-mm-dd'), 11) FROM dual;
     add_months
---------------------
 2018-04-29 00:00:00
(1 row)
Description: Returns the time of the last day in the month that contains the specified date.
For example:
select last_day(to_date('2017-01-01', 'YYYY-MM-DD')) AS cal_result;
     cal_result
---------------------
 2017-01-31 00:00:00
(1 row)
Description: Returns the date of the first weekday y that is later than date x.
For example:
select next_day(timestamp '2017-05-25 00:00:00','Sunday') AS cal_result;
     cal_result
---------------------
 2017-05-28 00:00:00
(1 row)
Description: Returns the number of days from the first day of year 0 to the specified date.
Return type: int
For example:
SELECT to_days(timestamp '2008-10-07');
 to_days
---------
 733687
(1 row)
EXTRACT(field FROM source)
The extract function retrieves subfields such as the year or hour from date/time values. source must be a value expression of type timestamp, time, or interval. (Expressions of type date are cast to timestamp and can therefore be used as well.) field is an identifier or string that selects the field to extract from the source value. The extract function returns values of type double precision. The following are valid field names:
century
The first century starts at 0001-01-01 00:00:00 AD. This definition applies to all Gregorian calendar countries. There is no century number 0; the count goes from -1 century directly to 1 century.
For example:
SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13');
 date_part
-----------
 20
(1 row)
day
For example:
SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 16
(1 row)
SELECT EXTRACT(DAY FROM INTERVAL '40 days 1 minute');
 date_part
-----------
 40
(1 row)
decade
The year field divided by 10.
For example:
SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 200
(1 row)
dow
Day of the week, Sunday (0) to Saturday (6).
For example:
SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 5
(1 row)
doy
Day of the year (1-365/366).
For example:
SELECT EXTRACT(DOY FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 47
(1 row)
epoch
For date and timestamp values, the number of seconds since 1970-01-01 00:00:00 local time; for interval values, the total number of seconds in the interval.
For example:
SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40.12-08');
  date_part
--------------
 982384720.12
(1 row)
SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours');
 date_part
-----------
 442800
(1 row)
An epoch value can be converted back to a timestamp as follows:
SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 982384720.12 * INTERVAL '1 second' AS RESULT;
          result
---------------------------
 2001-02-17 12:38:40.12+08
(1 row)
hour
For example:
SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 20
(1 row)
isodow
Day of the week, Monday (1) to Sunday (7).
This is identical to dow except for Sunday.
For example:
SELECT EXTRACT(ISODOW FROM TIMESTAMP '2001-02-18 20:38:40');
 date_part
-----------
 7
(1 row)
isoyear
The ISO 8601 year that the date falls in (not applicable to intervals).
Each ISO year begins with the Monday of the week containing January 4, so in early January or late December the ISO year may differ from the Gregorian year. See the week field for more information.
For example:
SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-01');
 date_part
-----------
 2005
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02');
 date_part
-----------
 2006
(1 row)
microseconds
The seconds field, including fractional parts, multiplied by 1,000,000.
For example:
SELECT EXTRACT(MICROSECONDS FROM TIME '17:12:28.5');
 date_part
-----------
 28500000
(1 row)
millennium
Years in the 1900s are in the second millennium. The third millennium started on January 1, 2001.
For example:
SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 3
(1 row)
milliseconds
The seconds field, including fractional parts, multiplied by 1,000. Note that this includes full seconds.
For example:
SELECT EXTRACT(MILLISECONDS FROM TIME '17:12:28.5');
 date_part
-----------
 28500
(1 row)
minute
For example:
SELECT EXTRACT(MINUTE FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 38
(1 row)
month
For timestamp values, the number of the month within the year (1-12).
For example:
SELECT EXTRACT(MONTH FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 2
(1 row)
For interval values, the number of months, modulo 12 (0-11).
For example:
SELECT EXTRACT(MONTH FROM INTERVAL '2 years 13 months');
 date_part
-----------
 1
(1 row)
quarter
Quarter of the year (1-4) that the date is in.
For example:
SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 1
(1 row)
second
Seconds field, including fractional parts (0-59).
For example:
SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');
 date_part
-----------
 28.5
(1 row)
timezone
The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC.
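For instance, a minimal sketch (output assumed for a session whose time zone is UTC+08, where the offset is 8*3600 = 28800 seconds; the result reflects the session time zone, not the literal's zone):
SELECT EXTRACT(TIMEZONE FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05');
 date_part
-----------
 28800
(1 row)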
week
The number of the week of the year that the day is in. By definition (ISO 8601), the first week of a year contains January 4 of that year. (The ISO 8601 week starts on Monday.) In other words, the first Thursday of a year is in week 1 of that year.
Because of this, it is possible for early January dates to be part of the 52nd or 53rd week of the previous year, and for late December dates to be part of the first week of the next year. For example, 2005-01-01 is part of the 53rd week of year 2004, 2006-01-01 is part of the 52nd week of year 2005, and 2012-12-31 is part of the first week of year 2013. You are advised to use the isoyear and week fields together to ensure consistency.
For example:
SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 7
(1 row)
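A sketch of the cross-year case described above (per the ISO 8601 rule stated there, 2005-01-01 belongs to week 53 of ISO year 2004):
SELECT EXTRACT(WEEK FROM DATE '2005-01-01') AS week,
       EXTRACT(ISOYEAR FROM DATE '2005-01-01') AS isoyear;
 week | isoyear
------+---------
   53 |    2004
(1 row)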
year
For example:
SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 2001
(1 row)
The date_part function is modeled on the traditional Ingres equivalent of the SQL-standard function extract:
date_part('field', source)
Note that the field must be a string rather than a name. The valid field names are the same as those for extract. For details, see EXTRACT.
For example:
SELECT date_part('day', TIMESTAMP '2001-02-16 20:38:40');
 date_part
-----------
 16
(1 row)
SELECT date_part('hour', INTERVAL '4 hours 3 minutes');
 date_part
-----------
 4
(1 row)
date_format(timestamp, fmt)
Converts a date into a string in the format specified by fmt.
For example:
SELECT date_format('2009-10-04 22:23:00', '%M %D %W');
    date_format
--------------------
 October 4th Sunday
(1 row)
SELECT date_format('2021-02-20 08:30:45', '%Y-%m-%d %H:%i:%S');
     date_format
---------------------
 2021-02-20 08:30:45
(1 row)
SELECT date_format('2021-02-20 18:10:15', '%r-%T');
      date_format
----------------------
 06:10:15 PM-18:10:15
(1 row)
The following table describes the patterns of date parameter values. They can be used for the date_format, time_format, str_to_date, str_to_time, and from_unixtime functions.
Format | Description                                                                    | Value
-------|--------------------------------------------------------------------------------|-------------------------
%a     | Abbreviated weekday name                                                       | Sun...Sat
%b     | Abbreviated month name                                                         | Jan...Dec
%c     | Month                                                                          | 0...12
%D     | Day of the month with a suffix                                                 | 0th, 1st, 2nd, 3rd, ...
%d     | Day of the month (two digits)                                                  | 00...31
%e     | Day of the month                                                               | 0...31
%f     | Microseconds                                                                   | 000000...999999
%H     | Hour, in 24-hour format                                                        | 00...23
%h     | Hour, in 12-hour format                                                        | 01...12
%I     | Hour, in 12-hour format, same as %h                                            | 01...12
%i     | Minutes                                                                        | 00...59
%j     | Day of the year                                                                | 001...366
%k     | Hour, in 24-hour format, same as %H                                            | 0...23
%l     | Hour, in 12-hour format, same as %h                                            | 1...12
%M     | Month name                                                                     | January...December
%m     | Month (two digits)                                                             | 00...12
%p     | Morning or afternoon                                                           | AM, PM
%r     | Time, in 12-hour format                                                        | hh:mm:ss AM/PM
%S     | Seconds                                                                        | 00...59
%s     | Seconds, same as %S                                                            | 00...59
%T     | Time, in 24-hour format                                                        | hh:mm:ss
%U     | Week (Sunday is the first day of the week)                                     | 00...53
%u     | Week (Monday is the first day of the week)                                     | 00...53
%V     | Week (Sunday is the first day of the week); used together with %X              | 01...53
%v     | Week (Monday is the first day of the week); used together with %x              | 01...53
%W     | Weekday name                                                                   | Sunday...Saturday
%w     | Day of the week; 0 for Sunday                                                  | 0...6
%X     | Year (four digits), used together with %V; Sunday is the first day of the week | -
%x     | Year (four digits), used together with %v; Monday is the first day of the week | -
%Y     | Year (four digits)                                                             | -
%y     | Year (two digits)                                                              | -
%%     | The character '%'                                                              | The character '%'
%x     | 'x': any character other than the preceding ones                               | The character 'x'
In the preceding table, %U, %u, %V, %v, %X, and %x are not supported currently.
Description: Converts x into the type specified by y.
For example:
SELECT cast('22-oct-1997' as timestamp);
      timestamp
---------------------
 1997-10-22 00:00:00
(1 row)
Description: Converts a string in hexadecimal format into binary format.
Return type: raw
For example:
SELECT hextoraw('7D');
 hextoraw
----------
 7D
(1 row)
Description: Converts a number into an interval expressed in days.
Return type: interval
For example:
SELECT numtoday(2);
 numtoday
----------
 2 days
(1 row)
Description: Obtains the system timestamp.
Return type: timestamp with time zone
For example:
SELECT pg_systimestamp();
        pg_systimestamp
-------------------------------
 2015-10-14 11:21:28.317367+08
(1 row)
Description: Converts a string in binary format into hexadecimal format.
The result is the ASCII code of the input characters in hexadecimal format.
Return type: varchar
For example:
SELECT rawtohex('1234567');
    rawtohex
----------------
 31323334353637
(1 row)
Description: Converts a DATETIME or INTERVAL value of the DATE/TIMESTAMP/TIMESTAMP WITH TIME ZONE/TIMESTAMP WITH LOCAL TIME ZONE type into a VARCHAR value in the format specified by fmt.
Return type: varchar
For example:
SELECT to_char(current_timestamp,'HH12:MI:SS');
 to_char
----------
 10:19:26
(1 row)
SELECT to_char(current_timestamp,'FMHH12:FMMI:FMSS');
 to_char
----------
 10:19:46
(1 row)
Description: Converts a double-precision value into a string in the specified format.
Return type: text
For example:
SELECT to_char(125.8::real, '999D99');
 to_char
---------
 125.80
(1 row)
Description: Converts an integer or a floating-point value into a string in the specified format.
Return type: varchar
For example:
SELECT to_char(1485,'9,999');
 to_char
---------
 1,485
(1 row)
SELECT to_char( 1148.5,'9,999.999');
  to_char
------------
 1,148.500
(1 row)
SELECT to_char(148.5,'990999.909');
   to_char
-------------
 0148.500
(1 row)
SELECT to_char(123,'XXX');
 to_char
---------
  7B
(1 row)
Description: Converts a time-interval value into a string in the specified format.
Return type: text
For example:
SELECT to_char(interval '15h 2m 12s', 'HH24:MI:SS');
 to_char
----------
 15:02:12
(1 row)
Description: Converts an integer value into a string in the specified format.
Return type: text
For example:
SELECT to_char(125, '999');
 to_char
---------
 125
(1 row)
Description: Converts a numeric value into a string in the specified format.
Return type: text
For example:
SELECT to_char(-125.8, '999D99S');
 to_char
---------
 125.80-
(1 row)
Description: Converts the CHAR/VARCHAR/VARCHAR2/CLOB type into the VARCHAR type.
If this function is used to convert data of the CLOB type and the value to be converted exceeds the value range of the target type, an error is returned.
Return type: varchar
For example:
SELECT to_char('01110');
 to_char
---------
 01110
(1 row)
Description: Converts a timestamp value into a string in the specified format.
Return type: text
For example:
SELECT to_char(current_timestamp, 'HH12:MI:SS');
 to_char
----------
 10:55:59
(1 row)
Description: Converts the RAW type or a text character set type (CHAR/NCHAR/VARCHAR/VARCHAR2/NVARCHAR2/TEXT) into the CLOB type.
Return type: clob
For example:
SELECT to_clob('ABCDEF'::RAW(10));
 to_clob
---------
 ABCDEF
(1 row)
SELECT to_clob('hello111'::CHAR(15));
 to_clob
----------
 hello111
(1 row)
SELECT to_clob('gauss123'::NCHAR(10));
 to_clob
----------
 gauss123
(1 row)
SELECT to_clob('gauss234'::VARCHAR(10));
 to_clob
----------
 gauss234
(1 row)
SELECT to_clob('gauss345'::VARCHAR2(10));
 to_clob
----------
 gauss345
(1 row)
SELECT to_clob('gauss456'::NVARCHAR2(10));
 to_clob
----------
 gauss456
(1 row)
SELECT to_clob('World222!'::TEXT);
  to_clob
-----------
 World222!
(1 row)
Description: Converts a text value into a timestamp in the specified format.
Return type: timestamp
For example:
SELECT to_date('2015-08-14');
       to_date
---------------------
 2015-08-14 00:00:00
(1 row)
Description: Converts a string value into a date in the specified format.
Return type: timestamp
For example:
SELECT to_date('05 Dec 2000', 'DD Mon YYYY');
       to_date
---------------------
 2000-12-05 00:00:00
(1 row)
Description: Converts a string into a value of the DATE type according to the format specified by fmt. For details about the fmt format, see Table 2.
This function does not support the CLOB type directly. However, a parameter of the CLOB type can be converted using implicit conversion.
Return type: date
For example:
SELECT TO_DATE('05 Dec 2010','DD Mon YYYY');
       to_date
---------------------
 2010-12-05 00:00:00
(1 row)
Description: Converts expr into a value of the NUMBER type according to the specified format.
For details about the type conversion formats, see Table 1.
If a hexadecimal string is converted into a decimal number, the string can contain a maximum of 16 bytes when it is converted into an unsigned number.
During the conversion from a hexadecimal string to a decimal number, the format string cannot contain any character other than x or X. Otherwise, an error is reported.
Return type: number
For example:
SELECT to_number('12,454.8-', '99G999D9S');
 to_number
-----------
 -12454.8
(1 row)
Description: Converts a string value into a number in the specified format.
Return type: numeric
For example:
SELECT to_number('12,454.8-', '99G999D9S');
 to_number
-----------
 -12454.8
(1 row)
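A sketch of the hexadecimal rule described above, using a format string made up only of X characters (output assumed, mirroring the documented to_char example in which 123 prints as 7B):
SELECT to_number('7B', 'XX');
 to_number
-----------
 123
(1 row)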
Description: Converts a UNIX epoch value (the number of seconds since 1970-01-01 00:00:00 UTC) into a timestamp.
Return type: timestamp with time zone
For example:
SELECT to_timestamp(1284352323);
      to_timestamp
------------------------
 2010-09-13 12:32:03+08
(1 row)
Description: Converts a string into a value of the timestamp type according to the format specified by fmt. If fmt is not specified, the conversion follows the format specified by nls_timestamp_format. For details about the fmt format, see Table 2.
In to_timestamp of GaussDB(DWS), the characters in fmt must match the date and time formatting patterns. Otherwise, an error is reported.
Return type: timestamp without time zone
For example:
SHOW nls_timestamp_format;
    nls_timestamp_format
----------------------------
 DD-Mon-YYYY HH:MI:SS.FF AM
(1 row)

SELECT to_timestamp('12-sep-2014');
    to_timestamp
---------------------
 2014-09-12 00:00:00
(1 row)
SELECT to_timestamp('12-Sep-10 14:10:10.123000','DD-Mon-YY HH24:MI:SS.FF');
      to_timestamp
-------------------------
 2010-09-12 14:10:10.123
(1 row)
SELECT to_timestamp('-1','SYYYY');
      to_timestamp
------------------------
 0001-01-01 00:00:00 BC
(1 row)
SELECT to_timestamp('98','RR');
    to_timestamp
---------------------
 1998-01-01 00:00:00
(1 row)
SELECT to_timestamp('01','RR');
    to_timestamp
---------------------
 2001-01-01 00:00:00
(1 row)
Description: Converts a string value into a timestamp in the specified format.
Return type: timestamp
For example:
SELECT to_timestamp('05 Dec 2000', 'DD Mon YYYY');
    to_timestamp
---------------------
 2000-12-05 00:00:00
(1 row)
The following table describes the value formats of the to_number function.
Schema     | Description
-----------|-----------------------------------------------------------------------
9          | Value with the specified number of digits
0          | Value with leading zeros
Period (.) | Decimal point
Comma (,)  | Group (thousand) separator
PR         | Negative value in angle brackets
S          | Sign anchored to the number (uses locale)
L          | Currency symbol (uses locale)
D          | Decimal point (uses locale)
G          | Group separator (uses locale)
MI         | Minus sign in the specified position (if the number is less than 0)
PL         | Plus sign in the specified position (if the number is greater than 0)
SG         | Plus or minus sign in the specified position
RN         | Roman numeral (input values range from 1 to 3999)
TH or th   | Ordinal number suffix
V          | Shifts the specified number of digits (decimal)
The following table describes the patterns of date and time values. They can be used for the to_date, to_timestamp, and to_char functions, and the nls_timestamp_format parameter.
Type                  | Schema                 | Description
----------------------|------------------------|----------------------------------------------------------------
Hour                  | HH                     | Hour of the day (01-12)
                      | HH12                   | Hour of the day (01-12)
                      | HH24                   | Hour of the day (00-23)
Minute                | MI                     | Minute (00-59)
Second                | SS                     | Second (00-59)
                      | FF                     | Microsecond (000000-999999)
                      | SSSSS                  | Seconds past midnight (0-86399)
Morning and afternoon | AM or A.M.             | Morning identifier
                      | PM or P.M.             | Afternoon identifier
Year                  | Y,YYY                  | Year with a comma (four digits or more)
                      | SYYYY                  | Year with four digits BC
                      | YYYY                   | Year (four digits or more)
                      | YYY                    | Last three digits of a year
                      | YY                     | Last two digits of a year
                      | Y                      | Last digit of a year
                      | IYYY                   | ISO year (four digits or more)
                      | IYY                    | Last three digits of an ISO year
                      | IY                     | Last two digits of an ISO year
                      | I                      | Last digit of an ISO year
                      | RR                     | Last two digits of a year (a year of the 20th century can be stored in the 21st century)
                      | RRRR                   | Accepts a year with four or two digits. With two digits, the value is the same as that returned by RR; with four digits, it is the same as YYYY.
                      | BC or B.C., AD or A.D. | Era indicator: Before Christ (BC) and After Christ (AD)
Month                 | MONTH                  | Full spelling of a month in uppercase (padded to 9 characters if shorter)
                      | MON                    | Month in abbreviated format in uppercase (three characters)
                      | MM                     | Month (01-12)
                      | RM                     | Month in Roman numerals (I-XII; I=JAN), uppercase
Day                   | DAY                    | Full spelling of a day in uppercase (padded to 9 characters if shorter)
                      | DY                     | Day in abbreviated format in uppercase (three characters)
                      | DDD                    | Day of the year (001-366)
                      | DD                     | Day of the month (01-31)
                      | D                      | Day of the week (1-7)
Week                  | W                      | Week of the month (1-5) (the first week starts on the first day of the month)
                      | WW                     | Week of the year (1-53) (the first week starts on the first day of the year)
                      | IW                     | Week of the ISO year (the first Thursday is in the first week)
Century               | CC                     | Century (two digits) (the 21st century starts on 2001-01-01)
Julian date           | J                      | Julian date (counting from January 1, 4712 BC)
Quarter               | Q                      | Quarter
Description: Translation
For example:
SELECT box '((0,0),(1,1))' + point '(2.0,0)' AS RESULT;
   result
-------------
 (3,1),(2,0)
(1 row)
Description: Translation
For example:
SELECT box '((0,0),(1,1))' - point '(2.0,0)' AS RESULT;
    result
---------------
 (-1,1),(-2,0)
(1 row)
Description: Scaling out/rotation
For example:
SELECT box '((0,0),(1,1))' * point '(2.0,0)' AS RESULT;
   result
-------------
 (2,2),(0,0)
(1 row)
Description: Scaling in/rotation
For example:
SELECT box '((0,0),(2,2))' / point '(2.0,0)' AS RESULT;
   result
-------------
 (1,1),(0,0)
(1 row)
Description: Point or box of intersection
For example:
SELECT box'((1,-1),(-1,1))' # box'((1,1),(-1,-1))' AS RESULT;
    result
---------------
 (1,1),(-1,-1)
(1 row)
Description: Number of path or polygon vertices
For example:
SELECT # path'((1,0),(0,1),(-1,0))' AS RESULT;
 result
--------
 3
(1 row)
Description: Length or circumference
For example:
SELECT @-@ path '((0,0),(1,0))' AS RESULT;
 result
--------
 2
(1 row)
Description: Center
For example:
SELECT @@ circle '((0,0),10)' AS RESULT;
 result
--------
 (0,0)
(1 row)
Description: Closest point to the first figure on the second figure
For example:
SELECT point '(0,0)' ## box '((2,0),(0,2))' AS RESULT;
 result
--------
 (0,0)
(1 row)
Description: Distance between the two figures
For example:
SELECT circle '((0,0),1)' <-> circle '((5,0),1)' AS RESULT;
 result
--------
 3
(1 row)
Description: Overlaps? (One point in common makes this true.)
For example:
SELECT box '((0,0),(1,1))' && box '((0,0),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Is strictly left of (no common horizontal coordinate)?
For example:
SELECT circle '((0,0),1)' << circle '((5,0),1)' AS RESULT;
 result
--------
 t
(1 row)
Description: Is strictly right of (no common horizontal coordinate)?
For example:
SELECT circle '((5,0),1)' >> circle '((0,0),1)' AS RESULT;
 result
--------
 t
(1 row)
Description: Does not extend to the right of?
For example:
SELECT box '((0,0),(1,1))' &< box '((0,0),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Does not extend to the left of?
For example:
SELECT box '((0,0),(3,3))' &> box '((0,0),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Is strictly below (no common vertical coordinate)?
For example:
SELECT box '((0,0),(3,3))' <<| box '((3,4),(5,5))' AS RESULT;
 result
--------
 t
(1 row)
Description: Is strictly above (no common vertical coordinate)?
For example:
SELECT box '((3,4),(5,5))' |>> box '((0,0),(3,3))' AS RESULT;
 result
--------
 t
(1 row)
Description: Does not extend above?
For example:
SELECT box '((0,0),(1,1))' &<| box '((0,0),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Does not extend below?
For example:
SELECT box '((0,0),(3,3))' |&> box '((0,0),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Is below (allows touching)?
For example:
SELECT box '((0,0),(-3,-3))' <^ box '((0,0),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Is above (allows touching)?
For example:
SELECT box '((0,0),(2,2))' >^ box '((0,0),(-3,-3))' AS RESULT;
 result
--------
 t
(1 row)
Description: Intersects?
For example:
SELECT lseg '((-1,0),(1,0))' ?# box '((-2,-2),(2,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Is horizontal?
For example:
SELECT ?- lseg '((-1,0),(1,0))' AS RESULT;
 result
--------
 t
(1 row)
Description: Are horizontally aligned?
For example:
SELECT point '(1,0)' ?- point '(0,0)' AS RESULT;
 result
--------
 t
(1 row)
Description: Is vertical?
For example:
SELECT ?| lseg '((-1,0),(1,0))' AS RESULT;
 result
--------
 f
(1 row)
Description: Are vertically aligned?
For example:
SELECT point '(0,1)' ?| point '(0,0)' AS RESULT;
 result
--------
 t
(1 row)
Description: Are perpendicular?
For example:
SELECT lseg '((0,0),(0,1))' ?-| lseg '((0,0),(1,0))' AS RESULT;
 result
--------
 t
(1 row)
Description: Are parallel?
For example:
SELECT lseg '((-1,0),(1,0))' ?|| lseg '((-1,2),(1,2))' AS RESULT;
 result
--------
 t
(1 row)
Description: Contains?
For example:
SELECT circle '((0,0),2)' @> point '(1,1)' AS RESULT;
 result
--------
 t
(1 row)
Description: Contained in or on?
For example:
SELECT point '(1,1)' <@ circle '((0,0),2)' AS RESULT;
 result
--------
 t
(1 row)
Description: Same as?
For example:
SELECT polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))' AS RESULT;
 result
--------
 t
(1 row)
Description: Area calculation
Return type: double precision
For example:
SELECT area(box '((0,0),(1,1))') AS RESULT;
 result
--------
 1
(1 row)
Description: Figure center calculation
Return type: point
For example:
SELECT center(box '((0,0),(1,2))') AS RESULT;
 result
---------
 (0.5,1)
(1 row)
Description: Circle diameter calculation
Return type: double precision
For example:
SELECT diameter(circle '((0,0),2.0)') AS RESULT;
 result
--------
 4
(1 row)
Description: Vertical size of box
Return type: double precision
For example:
SELECT height(box '((0,0),(1,1))') AS RESULT;
 result
--------
 1
(1 row)
Description: Is the path closed?
Return type: boolean
For example:
SELECT isclosed(path '((0,0),(1,1),(2,0))') AS RESULT;
 result
--------
 t
(1 row)
Description: Is the path open?
Return type: boolean
For example:
SELECT isopen(path '[(0,0),(1,1),(2,0)]') AS RESULT;
 result
--------
 t
(1 row)
Description: Length calculation
Return type: double precision
For example:
SELECT length(path '((-1,0),(1,0))') AS RESULT;
 result
--------
 4
(1 row)
Description: Number of points in path
Return type: int
For example:
SELECT npoints(path '[(0,0),(1,1),(2,0)]') AS RESULT;
 result
--------
 3
(1 row)
Description: Number of points in polygon
Return type: int
For example:
SELECT npoints(polygon '((1,1),(0,0))') AS RESULT;
 result
--------
 2
(1 row)
Description: Converts path to closed.
Return type: path
For example:
SELECT pclose(path '[(0,0),(1,1),(2,0)]') AS RESULT;
       result
---------------------
 ((0,0),(1,1),(2,0))
(1 row)
Description: Converts path to open.
Return type: path
For example:
SELECT popen(path '((0,0),(1,1),(2,0))') AS RESULT;
       result
---------------------
 [(0,0),(1,1),(2,0)]
(1 row)
Description: Circle radius calculation
Return type: double precision
For example:
SELECT radius(circle '((0,0),2.0)') AS RESULT;
 result
--------
 2
(1 row)
Description: Horizontal size of box
Return type: double precision
For example:
SELECT width(box '((0,0),(1,1))') AS RESULT;
 result
--------
 1
(1 row)
Description: Circle to box
Return type: box
For example:
SELECT box(circle '((0,0),2.0)') AS RESULT;
                                   result
---------------------------------------------------------------------------
 (1.41421356237309,1.41421356237309),(-1.41421356237309,-1.41421356237309)
(1 row)
Description: Points to box
Return type: box
For example:
SELECT box(point '(0,0)', point '(1,1)') AS RESULT;
   result
-------------
 (1,1),(0,0)
(1 row)
Description: Polygon to box
Return type: box
For example:
SELECT box(polygon '((0,0),(1,1),(2,0))') AS RESULT;
   result
-------------
 (2,1),(0,0)
(1 row)
Description: Box to circle
Return type: circle
For example:
SELECT circle(box '((0,0),(1,1))') AS RESULT;
            result
-------------------------------
 <(0.5,0.5),0.707106781186548>
(1 row)
Description: Center and radius to circle
Return type: circle
For example:
SELECT circle(point '(0,0)', 2.0) AS RESULT;
  result
-----------
 <(0,0),2>
(1 row)
Description: Polygon to circle
Return type: circle
For example:
SELECT circle(polygon '((0,0),(1,1),(2,0))') AS RESULT;
                   result
--------------------------------------------
 <(1,0.333333333333333),0.924950591148529>
(1 row)
Description: Box diagonal to line segment
Return type: lseg
For example:
SELECT lseg(box '((-1,0),(1,0))') AS RESULT;
     result
----------------
 [(1,0),(-1,0)]
(1 row)
Description: Points to line segment
Return type: lseg
For example:
SELECT lseg(point '(-1,0)', point '(1,0)') AS RESULT;
     result
----------------
 [(-1,0),(1,0)]
(1 row)
Description: Polygon to path
Return type: path
For example:
SELECT path(polygon '((0,0),(1,1),(2,0))') AS RESULT;
       result
---------------------
 ((0,0),(1,1),(2,0))
(1 row)
Description: Constructs a point
Return type: point
For example:
SELECT point(23.4, -44.5) AS RESULT;
    result
--------------
 (23.4,-44.5)
(1 row)
Description: Center of box
Return type: point
For example:
SELECT point(box '((-1,0),(1,0))') AS RESULT;
 result
--------
 (0,0)
(1 row)
Description: Center of circle
Return type: point
For example:
SELECT point(circle '((0,0),2.0)') AS RESULT;
 result
--------
 (0,0)
(1 row)
Description: Center of line segment
Return type: point
For example:
SELECT point(lseg '((-1,0),(1,0))') AS RESULT;
 result
--------
 (0,0)
(1 row)
Description: Center of polygon
Return type: point
For example:
SELECT point(polygon '((0,0),(1,1),(2,0))') AS RESULT;
        result
-----------------------
 (1,0.333333333333333)
(1 row)
Description: Box to 4-point polygon
Return type: polygon
For example:
SELECT polygon(box '((0,0),(1,1))') AS RESULT;
          result
---------------------------
 ((0,0),(0,1),(1,1),(1,0))
(1 row)
Description: Circle to 12-point polygon
Return type: polygon
For example:
SELECT polygon(circle '((0,0),2.0)') AS RESULT;
 result
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 ((-2,0),(-1.73205080756888,1),(-1,1.73205080756888),(-1.22464679914735e-16,2),(1,1.73205080756888),(1.73205080756888,1),(2,2.44929359829471e-16),(1.73205080756888,-0.999999999999999),(1,-1.73205080756888),(3.67394039744206e-16,-2),(-0.999999999999999,-1.73205080756888),(-1.73205080756888,-1))
(1 row)
Description: Circle to npts-point polygon
Return type: polygon
For example:
SELECT polygon(12, circle '((0,0),2.0)') AS RESULT;
 result
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 ((-2,0),(-1.73205080756888,1),(-1,1.73205080756888),(-1.22464679914735e-16,2),(1,1.73205080756888),(1.73205080756888,1),(2,2.44929359829471e-16),(1.73205080756888,-0.999999999999999),(1,-1.73205080756888),(3.67394039744206e-16,-2),(-0.999999999999999,-1.73205080756888),(-1.73205080756888,-1))
(1 row)
Description: Path to polygon
Return type: polygon
For example:
SELECT polygon(path '((0,0),(1,1),(2,0))') AS RESULT;
       result
---------------------
 ((0,0),(1,1),(2,0))
(1 row)
The operators <<, <<=, >>, and >>= test for subnet inclusion. They consider only the network parts of the two addresses (ignoring any host part) and determine whether one network is identical to or a subnet of the other.
Description: Is less than
For example:
SELECT inet '192.168.1.5' < inet '192.168.1.6' AS RESULT;
 result
--------
 t
(1 row)
Description: Is less than or equals
For example:
SELECT inet '192.168.1.5' <= inet '192.168.1.5' AS RESULT;
 result
--------
 t
(1 row)
Description: Equals
For example:
SELECT inet '192.168.1.5' = inet '192.168.1.5' AS RESULT;
 result
--------
 t
(1 row)
Description: Is greater than or equals
For example:
SELECT inet '192.168.1.5' >= inet '192.168.1.5' AS RESULT;
 result
--------
 t
(1 row)
Description: Is greater than
For example:
SELECT inet '192.168.1.5' > inet '192.168.1.4' AS RESULT;
 result
--------
 t
(1 row)
Description: Does not equal
For example:
SELECT inet '192.168.1.5' <> inet '192.168.1.4' AS RESULT;
 result
--------
 t
(1 row)
Description: Is contained in
For example:
SELECT inet '192.168.1.5' << inet '192.168.1/24' AS RESULT;
 result
--------
 t
(1 row)
Description: Is contained in or equals
For example:
SELECT inet '192.168.1/24' <<= inet '192.168.1/24' AS RESULT;
 result
--------
 t
(1 row)
Description: Contains
For example:
SELECT inet '192.168.1/24' >> inet '192.168.1.5' AS RESULT;
 result
--------
 t
(1 row)
Description: Contains or equals
For example:
SELECT inet '192.168.1/24' >>= inet '192.168.1/24' AS RESULT;
 result
--------
 t
(1 row)
Description: The NOT operation is performed on each bit of the network address.
For example:
SELECT ~ inet '192.168.1.6' AS RESULT;
    result
---------------
 63.87.254.249
(1 row)
Description: The AND operation is performed on each bit of the two network addresses.
For example:
SELECT inet '192.168.1.6' & inet '10.0.0.0' AS RESULT;
 result
---------
 0.0.0.0
(1 row)
Description: The OR operation is performed on each bit of the two network addresses.
For example:
SELECT inet '192.168.1.6' | inet '10.0.0.0' AS RESULT;
   result
-------------
 202.168.1.6
(1 row)
Description: Addition
For example:
SELECT inet '192.168.1.6' + 25 AS RESULT;
    result
--------------
 192.168.1.31
(1 row)
Description: Subtraction
For example:
SELECT inet '192.168.1.43' - 36 AS RESULT;
   result
-------------
 192.168.1.7
(1 row)
Description: Subtraction (difference between two addresses)
For example:
SELECT inet '192.168.1.43' - inet '192.168.1.19' AS RESULT;
 result
--------
 24
(1 row)
The abbrev, host, and text functions are primarily intended to offer alternative display formats.
Description: Abbreviated display format as text
Return type: text
For example:
SELECT abbrev(inet '10.1.0.0/16') AS RESULT;
   result
-------------
 10.1.0.0/16
(1 row)
Description: Abbreviated display format as text
Return type: text
For example:
SELECT abbrev(cidr '10.1.0.0/16') AS RESULT;
 result
---------
 10.1/16
(1 row)
Description: Broadcast address for network
Return type: inet
For example:
SELECT broadcast('192.168.1.5/24') AS RESULT;
      result
------------------
 192.168.1.255/24
(1 row)
Description: Extracts the family of an address; 4 for IPv4, 6 for IPv6
Return type: int
For example:
SELECT family('::1') AS RESULT;
 result
--------
 6
(1 row)
Description: Extracts the IP address as text.
Return type: text
For example:
SELECT host('192.168.1.5/24') AS RESULT;
   result
-------------
 192.168.1.5
(1 row)
Description: Constructs the host mask for a network.
Return type: inet
For example:
SELECT hostmask('192.168.23.20/30') AS RESULT;
 result
---------
 0.0.0.3
(1 row)
Description: Extracts the subnet mask length.
Return type: int
For example:
SELECT masklen('192.168.1.5/24') AS RESULT;
 result
--------
 24
(1 row)
Description: Constructs the subnet mask for a network.
Return type: inet
For example:
SELECT netmask('192.168.1.5/24') AS RESULT;
    result
---------------
 255.255.255.0
(1 row)
Description: Extracts the network part of an address.
Return type: cidr
For example:
SELECT network('192.168.1.5/24') AS RESULT;
     result
----------------
 192.168.1.0/24
(1 row)
Description: Sets the subnet mask length for an inet value.
Return type: inet
For example:
SELECT set_masklen('192.168.1.5/24', 16) AS RESULT;
     result
----------------
 192.168.1.5/16
(1 row)
Description: Sets the subnet mask length for a cidr value.
Return type: cidr
For example:
SELECT set_masklen('192.168.1.0/24'::cidr, 16) AS RESULT;
     result
----------------
 192.168.0.0/16
(1 row)
Description: Extracts the IP address and subnet mask length as text.
Return type: text
For example:
SELECT text(inet '192.168.1.5') AS RESULT;
     result
----------------
 192.168.1.5/32
(1 row)
Any cidr value can be cast to inet implicitly or explicitly; therefore, the functions shown above as operating on inet also work on cidr values. An inet value can be cast to cidr. After the conversion, any bits to the right of the subnet mask are silently zeroed to create a valid cidr value. In addition, you can cast a text string to inet or cidr using normal casting syntax, for example, inet(expression) or colname::cidr. (A sketch of these casts follows the trunc example below.)
The function trunc(macaddr) returns a MAC address with the last 3 bytes set to zero.
trunc(macaddr)
Description: Sets the last 3 bytes to zero.
Return type: macaddr
For example:
SELECT trunc(macaddr '12:34:56:78:90:ab') AS RESULT;
       result
-------------------
 12:34:56:00:00:00
(1 row)
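As referenced above, a minimal sketch of the casting rules (outputs assumed; note how the inet-to-cidr cast zeroes the host bits):
SELECT '192.168.1.5'::inet AS text_to_inet, (inet '192.168.1.5/24')::cidr AS inet_to_cidr;
 text_to_inet |  inet_to_cidr
--------------+----------------
 192.168.1.5  | 192.168.1.0/24
(1 row)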
The macaddr type also supports the standard relational operators (such as > and <=) for lexicographical ordering, and the bitwise arithmetic operators (~, &, and |) for NOT, AND, and OR.
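A quick sketch of these macaddr operators (outputs assumed; the AND against ff:ff:ff:00:00:00 keeps only the vendor prefix):
SELECT macaddr '12:34:56:78:90:ab' < macaddr '12:34:56:78:90:ac' AS lt,
       macaddr '12:34:56:78:90:ab' & macaddr 'ff:ff:ff:00:00:00' AS vendor;
 lt |      vendor
----+-------------------
 t  | 12:34:56:00:00:00
(1 row)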
Description: Specifies whether the tsvector-typed words match the tsquery-typed words.
For example:
SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') AS RESULT;
 result
--------
 t
(1 row)
Description: Synonym for @@.
For example:
SELECT to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat') AS RESULT;
 result
--------
 t
(1 row)
Description: Connects two tsvector-typed words.
For example:
SELECT 'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector AS RESULT;
          result
---------------------------
 'a':1 'b':2,5 'c':3 'd':4
(1 row)
Description: Performs the AND operation on two tsquery-typed words.
For example:
SELECT 'fat | rat'::tsquery && 'cat'::tsquery AS RESULT;
          result
---------------------------
 ( 'fat' | 'rat' ) & 'cat'
(1 row)
Description: Performs the OR operation on two tsquery-typed words.
For example:
SELECT 'fat | rat'::tsquery || 'cat'::tsquery AS RESULT;
          result
---------------------------
 ( 'fat' | 'rat' ) | 'cat'
(1 row)
Description: Negates a tsquery.
For example:
SELECT !! 'cat'::tsquery AS RESULT;
 result
--------
 !'cat'
(1 row)
Description: Specifies whether a tsquery-typed word contains another tsquery-typed word.
For example:
SELECT 'cat'::tsquery @> 'cat & rat'::tsquery AS RESULT;
 result
--------
 f
(1 row)
Description: Specifies whether a tsquery-typed word is contained in another tsquery-typed word.
For example:
SELECT 'cat'::tsquery <@ 'cat & rat'::tsquery AS RESULT;
 result
--------
 t
(1 row)
In addition to the preceding operators, the ordinary B-tree comparison operators (including = and <) are defined for types tsvector and tsquery (see the sketch after the following function).
Description: Gets the default text search configuration.
+Description: Gets default text search configuration.
+Return type: regconfig
+For example:
+1 +2 +3 +4 +5 | SELECT get_current_ts_config(); + get_current_ts_config +----------------------- + english +(1 row) + |
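As referenced above, a minimal sketch of a B-tree comparison on tsvector values (output assumed; both inputs normalize to 'a' 'b'):
SELECT 'a b'::tsvector = 'b a'::tsvector AS RESULT;
 result
--------
 t
(1 row)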
Description: Number of lexemes in a tsvector-typed word.
Return type: integer
For example:
SELECT length('fat:2,4 cat:3 rat:5A'::tsvector);
 length
--------
 3
(1 row)
Description: Number of lexemes plus tsquery operators
Return type: integer
For example:
SELECT numnode('(fat & rat) | cat'::tsquery);
 numnode
---------
 5
(1 row)
Description: Generates tsquery lexemes without punctuation.
Return type: tsquery
For example:
SELECT plainto_tsquery('english', 'The Fat Rats');
 plainto_tsquery
-----------------
 'fat' & 'rat'
(1 row)
Description: Gets the indexable part of a tsquery.
Return type: text
For example:
SELECT querytree('foo & ! bar'::tsquery);
 querytree
-----------
 'foo'
(1 row)
Description: Assigns a weight to each element of a tsvector.
Return type: tsvector
For example:
SELECT setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A');
           setweight
-------------------------------
 'cat':3A 'fat':2A,4A 'rat':5A
(1 row)
Description: Removes positions and weights from a tsvector.
Return type: tsvector
For example:
SELECT strip('fat:2,4 cat:3 rat:5A'::tsvector);
       strip
-------------------
 'cat' 'fat' 'rat'
(1 row)
Description: Normalizes words and converts them to tsquery.
Return type: tsquery
For example:
SELECT to_tsquery('english', 'The & Fat & Rats');
  to_tsquery
---------------
 'fat' & 'rat'
(1 row)
Description: Reduces document text to tsvector.
Return type: tsvector
For example:
SELECT to_tsvector('english', 'The Fat Rats');
   to_tsvector
-----------------
 'fat':2 'rat':3
(1 row)
Description: Highlights a query match.
Return type: text
For example:
SELECT ts_headline('x y z', 'z'::tsquery);
 ts_headline
--------------
 x y <b>z</b>
(1 row)
Description: Ranks a document for a query.
Return type: float4
For example:
SELECT ts_rank('hello world'::tsvector, 'world'::tsquery);
 ts_rank
----------
 .0607927
(1 row)
Description: Ranks a document for a query using cover density.
Return type: float4
For example:
SELECT ts_rank_cd('hello world'::tsvector, 'world'::tsquery);
 ts_rank_cd
------------
 0
(1 row)
Description: Replaces a tsquery-typed word.
Return type: tsquery
For example:
SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery);
        ts_rewrite
-------------------------
 'b' & ( 'foo' | 'bar' )
(1 row)
Description: Replaces tsquery data in the target with the result of a SELECT command.
Return type: tsquery
For example:
SELECT ts_rewrite('world'::tsquery, 'select ''world''::tsquery, ''hello''::tsquery');
 ts_rewrite
------------
 'hello'
(1 row)
Description: Tests a configuration.
Return type: setof record
For example:
SELECT ts_debug('english', 'The Brightest supernovaes');
                                      ts_debug
-----------------------------------------------------------------------------------
 (asciiword,"Word, all ASCII",The,{english_stem},english_stem,{})
 (blank,"Space symbols"," ",{},,)
 (asciiword,"Word, all ASCII",Brightest,{english_stem},english_stem,{brightest})
 (blank,"Space symbols"," ",{},,)
 (asciiword,"Word, all ASCII",supernovaes,{english_stem},english_stem,{supernova})
(5 rows)
Description: Tests a data dictionary.
Return type: text[]
For example:
SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}
(1 row)
Description: Tests a parser (specified by name).
Return type: setof record
For example:
SELECT ts_parse('default', 'foo - bar');
 ts_parse
-----------
 (1,foo)
 (12," ")
 (12,"- ")
 (1,bar)
(4 rows)
Description: Tests a parser (specified by OID).
Return type: setof record
For example:
SELECT ts_parse(3722, 'foo - bar');
 ts_parse
-----------
 (1,foo)
 (12," ")
 (12,"- ")
 (1,bar)
(4 rows)
Description: Gets the token types defined by a parser (specified by name).
Return type: setof record
For example:
SELECT ts_token_type('default');
                        ts_token_type
--------------------------------------------------------------
 (1,asciiword,"Word, all ASCII")
 (2,word,"Word, all letters")
 (3,numword,"Word, letters and digits")
 (4,email,"Email address")
 (5,url,URL)
 (6,host,Host)
 (7,sfloat,"Scientific notation")
 (8,version,"Version number")
 (9,hword_numpart,"Hyphenated word part, letters and digits")
 (10,hword_part,"Hyphenated word part, all letters")
 (11,hword_asciipart,"Hyphenated word part, all ASCII")
 (12,blank,"Space symbols")
 (13,tag,"XML tag")
 (14,protocol,"Protocol head")
 (15,numhword,"Hyphenated word, letters and digits")
 (16,asciihword,"Hyphenated word, all ASCII")
 (17,hword,"Hyphenated word, all letters")
 (18,url_path,"URL path")
 (19,file,"File or path name")
 (20,float,"Decimal notation")
 (21,int,"Signed integer")
 (22,uint,"Unsigned integer")
 (23,entity,"XML entity")
(23 rows)
Description: Gets the token types defined by a parser (specified by OID).
Return type: setof record
For example:
SELECT ts_token_type(3722);
                        ts_token_type
--------------------------------------------------------------
 (1,asciiword,"Word, all ASCII")
 (2,word,"Word, all letters")
 (3,numword,"Word, letters and digits")
 (4,email,"Email address")
 (5,url,URL)
 (6,host,Host)
 (7,sfloat,"Scientific notation")
 (8,version,"Version number")
 (9,hword_numpart,"Hyphenated word part, letters and digits")
 (10,hword_part,"Hyphenated word part, all letters")
 (11,hword_asciipart,"Hyphenated word part, all ASCII")
 (12,blank,"Space symbols")
 (13,tag,"XML tag")
 (14,protocol,"Protocol head")
 (15,numhword,"Hyphenated word, letters and digits")
 (16,asciihword,"Hyphenated word, all ASCII")
 (17,hword,"Hyphenated word, all letters")
 (18,url_path,"URL path")
 (19,file,"File or path name")
 (20,float,"Decimal notation")
 (21,int,"Signed integer")
 (22,uint,"Unsigned integer")
 (23,entity,"XML entity")
(23 rows)
Description: Gets statistics of a tsvector column.
Return type: setof record
For example:
SELECT ts_stat('select ''hello world''::tsvector');
   ts_stat
-------------
 (world,1,1)
 (hello,1,1)
(2 rows)
UUID functions are used to generate UUID data (see UUID Type).
Description: Generates a UUID sequence number.
Return type: UUID
Example:
SELECT uuid_generate_v1();
           uuid_generate_v1
--------------------------------------
 c71ceaca-a175-11e9-a920-797ff7000001
(1 row)
The uuid_generate_v1 function generates UUIDs based on the time information, the cluster node ID, and the ID of the thread that generates the sequence. Each UUID is globally unique within a cluster, but there is a low probability that a UUID is duplicated among multiple clusters.
Description: Generates a sequence number in the same format as that generated by the Oracle sys_guid method.
Return type: text
Example:
SELECT sys_guid();
             sys_guid
----------------------------------
 4EBD3C74A17A11E9A1BF797FF7000001
(1 row)
The data generation principle of the sys_guid function is the same as that of the uuid_generate_v1 function.
JSON functions are used to generate JSON data (see JSON Types).
Description: Returns the array as JSON. A multi-dimensional array becomes a JSON array of arrays. Line feeds are added between dimension-1 elements if pretty_bool is true.
Return type: json
For example:
SELECT array_to_json('{{1,5},{99,100}}'::int[]);
  array_to_json
------------------
 [[1,5],[99,100]]
(1 row)
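A sketch of the pretty_bool behavior mentioned above (output assumed; the line feed between dimension-1 elements is shown by the client as a wrapped value):
SELECT array_to_json('{{1,5},{99,100}}'::int[], true);
 array_to_json
---------------
 [[1,5],      +
  [99,100]]
(1 row)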
Description: Returns the row as JSON. Line feeds are added between level-1 elements if pretty_bool is true.
Return type: json
For example:
SELECT row_to_json(row(1,'foo'));
     row_to_json
---------------------
 {"f1":1,"f2":"foo"}
(1 row)
Description: Configures a hash seed (that is, changes the hash policy) and hashes data of the bool type.
Return type: hll_hashval
For example:
SELECT hll_hash_boolean(FALSE, 10);
  hll_hash_boolean
--------------------
 391264977436098630
(1 row)
Description: Hashes data of the smallint type.
Return type: hll_hashval
For example:
SELECT hll_hash_smallint(100::smallint);
  hll_hash_smallint
---------------------
 4631120266694327276
(1 row)
If parameters with the same numeric value are hashed using different data types, the resulting hash values will differ, because the hash functions select different calculation policies for each type.
Description: Configures a hash seed (that is, changes the hash policy) and hashes data of the smallint type.
Return type: hll_hashval
For example:
SELECT hll_hash_smallint(100::smallint, 10);
  hll_hash_smallint
---------------------
 8349353095166695771
(1 row)
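A sketch of the type-dependence note above, reusing the outputs documented for the smallint and bigint hash functions (the same value 100 hashes differently):
SELECT hll_hash_smallint(100::smallint) AS h_small,
       hll_hash_bigint(100::bigint) AS h_big;
       h_small       |        h_big
---------------------+---------------------
 4631120266694327276 | 8349353095166695771
(1 row)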
Description: Hashes data of the integer type.
Return type: hll_hashval
For example:
SELECT hll_hash_integer(0);
   hll_hash_integer
----------------------
 -3485513579396041028
(1 row)
Description: Hashes data of the integer type and configures a hash seed (that is, changes the hash policy).
Return type: hll_hashval
For example:
SELECT hll_hash_integer(0, 10);
  hll_hash_integer
--------------------
 183371090322255134
(1 row)
Description: Hashes data of the bigint type.
Return type: hll_hashval
For example:
SELECT hll_hash_bigint(100::bigint);
   hll_hash_bigint
---------------------
 8349353095166695771
(1 row)
Description: Hashes data of the bigint type and configures a hash seed (that is, changes the hash policy).
Return type: hll_hashval
For example:
SELECT hll_hash_bigint(100::bigint, 10);
   hll_hash_bigint
---------------------
 4631120266694327276
(1 row)
Description: Hashes data of the bytea type.
Return type: hll_hashval
For example:
SELECT hll_hash_bytea(E'\\x');
 hll_hash_bytea
----------------
 0
(1 row)
Description: Hashes data of the bytea type and configures a hash seed (that is, changes the hash policy).
Return type: hll_hashval
For example:
SELECT hll_hash_bytea(E'\\x', 10);
   hll_hash_bytea
---------------------
 6574525721897061910
(1 row)
Description: Hashes data of the text type.
Return type: hll_hashval
For example:
SELECT hll_hash_text('AB');
    hll_hash_text
---------------------
 5365230931951287672
(1 row)
Description: Hashes data of the text type and configures a hash seed (that is, changes the hash policy).
Return type: hll_hashval
For example:
SELECT hll_hash_text('AB', 10);
    hll_hash_text
---------------------
 7680762839921155903
(1 row)
Description: Hashes data of any type.
Return type: hll_hashval
For example:
select hll_hash_any(1);
     hll_hash_any
----------------------
 -8604791237420463362
(1 row)

select hll_hash_any('08:00:2b:01:02:03'::macaddr);
     hll_hash_any
----------------------
 -4883882473551067169
(1 row)
Description: Hashes data of any type and configures a hash seed (that is, changes the hash policy).
Return type: hll_hashval
For example:
select hll_hash_any(1, 10);
     hll_hash_any
----------------------
 -1478847531811254870
(1 row)
Description: Compares two values of the hll_hashval type to check whether they are the same.
Return type: bool
For example:
select hll_hashval_eq(hll_hash_integer(1), hll_hash_integer(1));
 hll_hashval_eq
----------------
 t
(1 row)
Description: Compares two values of the hll_hashval type to check whether they are different.
Return type: bool
For example:
select hll_hashval_ne(hll_hash_integer(1), hll_hash_integer(1));
 hll_hashval_ne
----------------
 f
(1 row)
HLL supports the explicit, sparse, and full modes. The explicit and sparse modes excel when the data scale is small and produce almost no error in calculation results. As the number of distinct values increases, the full mode becomes more suitable, but it produces some errors. The following functions are used to view the precision parameters in HLLs.
Description: Checks the schema version in the current HLL.
For example:
select hll_schema_version(hll_empty());
 hll_schema_version
--------------------
 1
(1 row)
Description: Checks the type of the current HLL.
For example:
select hll_type(hll_empty());
 hll_type
----------
 1
(1 row)
Description: Checks the value of log2m of the current HLL. This value affects the error rate in calculating the number of distinct values by the HLL. The error rate is calculated approximately as ±1.04/√(2^log2m).
For example:
select hll_log2m(hll_empty());
 hll_log2m
-----------
 11
(1 row)
Description: Checks the number of bits of buckets in an HLL data structure.
For example:
select hll_regwidth(hll_empty());
 hll_regwidth
--------------
 5
(1 row)
Description: Obtains the size of expthresh in the current HLL. An HLL usually switches from the explicit mode to the sparse mode and then to the full mode. This process is called the promotion hierarchy policy. You can change the value of expthresh to change the policy. For example, if expthresh is 0, an HLL will skip the explicit mode and directly enter the sparse mode. If the value of expthresh is explicitly set to a value ranging from 1 to 7, this function returns 2^expthresh.
For example:
select hll_expthresh(hll_empty());
 hll_expthresh
---------------
 (-1,160)
(1 row)

select hll_expthresh(hll_empty(11,5,3));
 hll_expthresh
---------------
 (8,8)
(1 row)
Description: Specifies whether the sparse mode is enabled. 0 indicates off and 1 indicates on.
For example:
select hll_sparseon(hll_empty());
 hll_sparseon
--------------
 1
(1 row)
Description: Groups hashed data into an HLL.
Return type: hll
For example:
-- Prepare data:
create table t_id(id int);
insert into t_id values(generate_series(1,500));
create table t_data(a int, c text);
insert into t_data select mod(id,2), id from t_id;

-- Create another table and specify an HLL column:
create table t_a_c_hll(a int, c hll);

-- Use GROUP BY on column a to group data, and insert the data into the HLL:
insert into t_a_c_hll select a, hll_add_agg(hll_hash_text(c)) from t_data group by a;

-- Calculate the number of distinct values for each group in the HLL:
select a, #c as cardinality from t_a_c_hll order by a;
 a |   cardinality
---+------------------
 0 | 250.741759091658
 1 | 250.741759091658
(2 rows)
Description: Groups hashed data into an HLL and sets the log2m parameter. The parameter value ranges from 10 to 16.
Return type: hll
For example:

select hll_cardinality(hll_add_agg(hll_hash_text(c), 10)) from t_data;
 hll_cardinality
------------------
 503.932348927339
(1 row)
Description: Groups hashed data into an HLL and sets the log2m and regwidth parameters in sequence. The value of regwidth ranges from 1 to 5.
Return type: hll
For example:

select hll_cardinality(hll_add_agg(hll_hash_text(c), NULL, 1)) from t_data;
 hll_cardinality
------------------
 496.628982624022
(1 row)
Description: Groups hashed data into an HLL and sets the log2m, regwidth, and expthresh parameters in sequence. The value of expthresh is an integer ranging from -1 to 7. expthresh specifies the threshold for switching from the explicit mode to the sparse mode: -1 indicates the auto mode; 0 indicates that the explicit mode is skipped; a value from 1 to 7 indicates that the mode is switched when the number of distinct values reaches 2^expthresh.
Return type: hll
For example:

select hll_cardinality(hll_add_agg(hll_hash_text(c), NULL, 1, 4)) from t_data;
 hll_cardinality
------------------
 496.628982624022
(1 row)
Description: Groups hashed data into an HLL and sets the log2m, regwidth, expthresh, and sparseon parameters in sequence. The value of sparseon is 0 or 1.
Return type: hll
For example:

select hll_cardinality(hll_add_agg(hll_hash_text(c), NULL, 1, 4, 0)) from t_data;
 hll_cardinality
------------------
 496.628982624022
(1 row)
Description: Performs the UNION operation on multiple pieces of data of the hll type to obtain one HLL.
Return type: hll
For example:

-- Perform the UNION operation on data of the hll type in each group to obtain one HLL, and calculate the number of distinct values:
select #hll_union_agg(c) as cardinality from t_a_c_hll;
   cardinality
------------------
 496.628982624022
(1 row)
To perform UNION on data in multiple HLLs, ensure that the HLLs have the same precision. Otherwise, UNION cannot be performed. This restriction also applies to the hll_union(hll, hll) function.
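For instance, the following sketch unions two single-value HLLs built with the same log2m (11 here), so the precisions match and the operation succeeds; mixing, say, hll_empty(10) with hll_empty(12) would be rejected:

-- Both HLLs use log2m = 11, so their precisions match and the union succeeds.
select hll_cardinality(hll_union(hll_add(hll_empty(11), hll_hash_integer(1)),
                                 hll_add(hll_empty(11), hll_hash_integer(2))));
-- Expected result: 2. Replacing one hll_empty(11) with hll_empty(12) would raise an error.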
Description: Creates an empty HLL.
Return type: hll
For example:

select hll_empty();
 hll_empty
-----------
 \x118b7f
(1 row)
Description: Creates an empty HLL and sets the log2m parameter. The parameter value ranges from 10 to 16.
Return type: hll
For example:

select hll_empty(10);
 hll_empty
-----------
 \x118a7f
(1 row)
Description: Creates an empty HLL and sets the log2m and regwidth parameters in sequence. The value of regwidth ranges from 1 to 5.
Return type: hll
For example:

select hll_empty(10, 4);
 hll_empty
-----------
 \x116a7f
(1 row)
Description: Creates an empty HLL and sets the log2m, regwidth, and expthresh parameters. The value of expthresh is an integer ranging from -1 to 7. This parameter specifies the threshold for switching from the explicit mode to the sparse mode: -1 indicates the auto mode; 0 indicates that the explicit mode is skipped; a value from 1 to 7 indicates that the mode is switched when the number of distinct values reaches 2^expthresh.
Return type: hll
For example:

select hll_empty(10, 4, 7);
 hll_empty
-----------
 \x116a48
(1 row)
Description: Creates an empty HLL and sets the log2m, regwidth, expthresh, and sparseon parameters. The value of sparseon is 0 or 1.
Return type: hll
For example:

select hll_empty(10,4,7,0);
 hll_empty
-----------
 \x116a08
(1 row)
Description: Adds hll_hashval to an HLL.
Return type: hll
For example:

select hll_add(hll_empty(), hll_hash_integer(1));
         hll_add
--------------------------
 \x128b7f8895a3f5af28cafe
(1 row)
Description: Adds hll_hashval to an HLL. This function works the same as hll_add, except that the positions of the parameters are switched.
Return type: hll
For example:

select hll_add_rev(hll_hash_integer(1), hll_empty());
       hll_add_rev
--------------------------
 \x128b7f8895a3f5af28cafe
(1 row)
Description: Compares two HLLs to check whether they are the same.
Return type: bool
For example:

select hll_eq(hll_add(hll_empty(), hll_hash_integer(1)), hll_add(hll_empty(), hll_hash_integer(2)));
 hll_eq
--------
 f
(1 row)
Description: Compares two HLLs to check whether they are different.
Return type: bool
For example:

select hll_ne(hll_add(hll_empty(), hll_hash_integer(1)), hll_add(hll_empty(), hll_hash_integer(2)));
 hll_ne
--------
 t
(1 row)
Description: Calculates the number of distinct values of an HLL.
Return type: int
For example:

select hll_cardinality(hll_empty() || hll_hash_integer(1));
 hll_cardinality
-----------------
 1
(1 row)
Description: Performs the UNION operation on two HLL data structures to obtain one HLL.
Return type: hll
For example:

select hll_union(hll_add(hll_empty(), hll_hash_integer(1)), hll_add(hll_empty(), hll_hash_integer(2)));
                hll_union
------------------------------------------
 \x128b7f8895a3f5af28cafeda0ce907e4355b60
(1 row)
HLL has a series of built-in functions for internal data processing. Generally, users do not need to know how to use these functions. For details, see Table 1.
Function | Description
---|---
hll_in | Receives hll data in string format.
hll_out | Sends hll data in string format.
hll_recv | Receives hll data in bytea format.
hll_send | Sends hll data in bytea format.
hll_trans_in | Receives hll_trans_type data in string format.
hll_trans_out | Sends hll_trans_type data in string format.
hll_trans_recv | Receives hll_trans_type data in bytea format.
hll_trans_send | Sends hll_trans_type data in bytea format.
hll_typmod_in | Receives typmod data.
hll_typmod_out | Sends typmod data.
hll_hashval_in | Receives hll_hashval data.
hll_hashval_out | Sends hll_hashval data.
hll_add_trans0 | Works similarly to hll_add; used in the first phase (on DNs) of distributed aggregation operations.
hll_union_trans | Works similarly to hll_union; used in the first phase (on DNs) of distributed aggregation operations.
hll_union_collect | Works similarly to hll_union; used in the second phase (on CNs) of distributed aggregation operations to summarize the results of each DN.
hll_pack | Used in the third phase (on CNs) of distributed aggregation operations to convert the user-defined type hll_trans_type to the hll type.
hll | Converts an hll type to another hll type. Input parameters can be specified.
hll_hashval | Converts the bigint type to the hll_hashval type.
hll_hashval_int4 | Converts the int4 type to the hll_hashval type.
Description: Compares the values of hll and hll_hashval types to check whether they are the same.
Return type: bool
For example:

--hll
select (hll_empty() || hll_hash_integer(1)) = (hll_empty() || hll_hash_integer(1));
 ?column?
----------
 t
(1 row)

--hll_hashval
select hll_hash_integer(1) = hll_hash_integer(1);
 ?column?
----------
 t
(1 row)
Description: Compares the values of hll and hll_hashval types to check whether they are different.
Return type: bool
For example:

--hll
select (hll_empty() || hll_hash_integer(1)) <> (hll_empty() || hll_hash_integer(2));
 ?column?
----------
 t
(1 row)

--hll_hashval
select hll_hash_integer(1) <> hll_hash_integer(2);
 ?column?
----------
 t
(1 row)
Description: Serves as the operator form of the hll_add, hll_add_rev, and hll_union functions.
Return type: hll
For example:

--hll_add
select hll_empty() || hll_hash_integer(1);
         ?column?
--------------------------
 \x128b7f8895a3f5af28cafe
(1 row)

--hll_add_rev
select hll_hash_integer(1) || hll_empty();
         ?column?
--------------------------
 \x128b7f8895a3f5af28cafe
(1 row)

--hll_union
select (hll_empty() || hll_hash_integer(1)) || (hll_empty() || hll_hash_integer(2));
                 ?column?
------------------------------------------
 \x128b7f8895a3f5af28cafeda0ce907e4355b60
(1 row)
Description: Calculates the number of distinct values of an HLL. It works the same as the hll_cardinality function.
Return type: int
For example:

select #(hll_empty() || hll_hash_integer(1));
 ?column?
----------
 1
(1 row)
Sequence functions provide a simple, multiuser-safe method for obtaining sequence values from sequence objects.
Advances the sequence object to its next value and returns that value.
Return type: bigint
The nextval function can be invoked in either of the following ways (example 2 uses the Oracle syntax; currently, the sequence name cannot contain a dot):
Example 1:

select nextval('seqDemo');
 nextval
---------
 2
(1 row)

Example 2:

select seqDemo.nextval;
 nextval
---------
 2
(1 row)
Returns the most recent value obtained by nextval for the specified sequence in the current session. If nextval has not been invoked for the specified sequence in the current session, an error is reported when currval is invoked. By default, currval is disabled. To enable it, set enable_beta_features to true. Note that after currval is enabled, nextval will not be pushed down.
Return type: bigint
The currval function can be invoked in either of the following ways (example 2 uses the Oracle syntax; currently, the sequence name cannot contain a dot):
Example 1:

select currval('seq1');
 currval
---------
 2
(1 row)

Example 2:

select seq1.currval;
 currval
---------
 2
(1 row)
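Because currval is disabled by default, a session typically enables the beta feature first. A minimal sketch, assuming the seq1 sequence above already exists:

SET enable_beta_features = true;
select nextval('seq1');  -- advances the sequence
select currval('seq1');  -- returns the value just produced by nextval in this session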
Returns the last value of nextval in the current session. This function is equivalent to currval, but lastval does not take a parameter. If nextval has not been invoked in the current session, an error is reported when lastval is invoked.
By default, lastval is disabled. To enable it, set enable_beta_features or lastval_supported to true. After lastval is enabled, nextval will not be pushed down.
Return type: bigint
For example:

select lastval();
 lastval
---------
 2
(1 row)
Sets the current value of a sequence.
Return type: bigint
For example:

select setval('seqDemo',1);
 setval
--------
 1
(1 row)
Sets the current value of a sequence and the is_called flag.
Return type: bigint
For example:

select setval('seqDemo',1,true);
 setval
--------
 1
(1 row)

setval takes effect immediately in the current session and on the GTM. If other sessions have buffered sequence values, setval takes effect for them only after those values are used up. Therefore, to prevent sequence value conflicts, use setval with caution.
Because sequences are non-transactional, changes made by setval are not undone when a transaction is rolled back.
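As a sketch of the interaction between setval and nextval (assuming the seqDemo sequence above and no values cached in other sessions):

select setval('seqDemo', 10);        -- is_called defaults to true: the next nextval returns 11
select nextval('seqDemo');           -- 11
select setval('seqDemo', 20, false); -- is_called = false: the next nextval returns 20 itself
select nextval('seqDemo');           -- 20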
Description: Specifies whether two arrays are equal.
For example:

SELECT ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether two arrays are not equal.
For example:

SELECT ARRAY[1,2,3] <> ARRAY[1,2,4] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array is less than another.
For example:

SELECT ARRAY[1,2,3] < ARRAY[1,2,4] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array is greater than another.
For example:

SELECT ARRAY[1,4,3] > ARRAY[1,2,4] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array is less than or equal to another.
For example:

SELECT ARRAY[1,2,3] <= ARRAY[1,2,3] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array is greater than or equal to another.
For example:

SELECT ARRAY[1,4,3] >= ARRAY[1,4,3] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array contains another.
For example:

SELECT ARRAY[1,4,3] @> ARRAY[3,1] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array is contained in another.
For example:

SELECT ARRAY[2,7] <@ ARRAY[1,7,4,2,6] AS RESULT;
 result
--------
 t
(1 row)
Description: Specifies whether an array overlaps another (has common elements).
For example:

SELECT ARRAY[1,4,3] && ARRAY[2,1] AS RESULT;
 result
--------
 t
(1 row)
Description: Array-to-array concatenation
For example:

SELECT ARRAY[1,2,3] || ARRAY[4,5,6] AS RESULT;
    result
---------------
 {1,2,3,4,5,6}
(1 row)

SELECT ARRAY[1,2,3] || ARRAY[[4,5,6],[7,8,9]] AS RESULT;
           result
---------------------------
 {{1,2,3},{4,5,6},{7,8,9}}
(1 row)
Description: Element-to-array concatenation
For example:

SELECT 3 || ARRAY[4,5,6] AS RESULT;
  result
-----------
 {3,4,5,6}
(1 row)
Description: Array-to-element concatenation
For example:

SELECT ARRAY[4,5,6] || 7 AS RESULT;
  result
-----------
 {4,5,6,7}
(1 row)
Array comparisons compare the array contents element-by-element, using the default B-tree comparison function for the element data type. In multidimensional arrays, the elements are accessed in row-major order. If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order.
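For example, because the comparison proceeds element by element, the first unequal pair of elements decides the result regardless of later elements:

SELECT ARRAY[1,2,3] < ARRAY[1,3,0] AS RESULT;
 result
--------
 t
(1 row)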
Description: Appends an element to the end of an array. Only one-dimensional arrays are supported.
Return type: anyarray
For example:

SELECT array_append(ARRAY[1,2], 3) AS RESULT;
 result
---------
 {1,2,3}
(1 row)
Description: Appends an element to the beginning of an array. Only one-dimensional arrays are supported.
Return type: anyarray
For example:

SELECT array_prepend(1, ARRAY[2,3]) AS RESULT;
 result
---------
 {1,2,3}
(1 row)
Description: Concatenates two arrays. Multi-dimensional arrays are supported.
Return type: anyarray
For example:

SELECT array_cat(ARRAY[1,2,3], ARRAY[4,5]) AS RESULT;
   result
-------------
 {1,2,3,4,5}
(1 row)

SELECT array_cat(ARRAY[[1,2],[4,5]], ARRAY[6,7]) AS RESULT;
       result
---------------------
 {{1,2},{4,5},{6,7}}
(1 row)
Description: Returns the number of dimensions of the array.
Return type: int
For example:

SELECT array_ndims(ARRAY[[1,2,3], [4,5,6]]) AS RESULT;
 result
--------
 2
(1 row)
Description: Returns a text representation of the array's dimensions.
Return type: text
For example:

SELECT array_dims(ARRAY[[1,2,3], [4,5,6]]) AS RESULT;
   result
------------
 [1:2][1:3]
(1 row)
Description: Returns the length of the requested array dimension.
Return type: int
For example:

SELECT array_length(array[1,2,3], 1) AS RESULT;
 result
--------
 3
(1 row)
Description: Returns the lower bound of the requested array dimension.
Return type: int
For example:

SELECT array_lower('[0:2]={1,2,3}'::int[], 1) AS RESULT;
 result
--------
 0
(1 row)
Description: Returns the upper bound of the requested array dimension.
Return type: int
For example:

SELECT array_upper(ARRAY[1,8,3,7], 1) AS RESULT;
 result
--------
 4
(1 row)
Description: Concatenates array elements into text, using the first text argument as the delimiter and the second text argument to represent NULL values.
Return type: text
For example:

SELECT array_to_string(ARRAY[1, 2, 3, NULL, 5], ',', '*') AS RESULT;
  result
-----------
 1,2,3,*,5
(1 row)
Description: Splits a string into a text array, using the second text argument as the delimiter and replacing any substring that exactly matches the third text argument with NULL.
Return type: text[]
For example:

SELECT string_to_array('xx~^~yy~^~zz', '~^~', 'yy') AS RESULT;
    result
--------------
 {xx,NULL,zz}
(1 row)

SELECT string_to_array('xx~^~yy~^~zz', '~^~', 'y') AS RESULT;
   result
------------
 {xx,yy,zz}
(1 row)
Description: Expands an array to a set of rows.
Return type: setof anyelement
For example:

SELECT unnest(ARRAY[1,2]) AS RESULT;
 result
--------
 1
 2
(2 rows)
In string_to_array, if the delimiter parameter is NULL, each character in the input string becomes a separate element in the resulting array. If the delimiter is an empty string, the entire input string is returned as a one-element array. Otherwise, the input string is split at each occurrence of the delimiter string.
In string_to_array, if the null-string parameter is omitted or NULL, none of the substrings of the input are replaced by NULL.
In array_to_string, if the null-string parameter is omitted or NULL, any null elements in the array are simply skipped and not represented in the output string.
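A few sketches illustrating these rules:

SELECT string_to_array('xyz', NULL) AS RESULT;            -- {x,y,z}: each character becomes an element
SELECT string_to_array('xyz', '') AS RESULT;              -- {xyz}: the whole string as one element
SELECT array_to_string(ARRAY[1, NULL, 3], ',') AS RESULT; -- 1,3: null elements are skipped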
Description: Equals
For example:

SELECT int4range(1,5) = '[1,4]'::int4range AS RESULT;
 result
--------
 t
(1 row)
Description: Does not equal
For example:

SELECT numrange(1.1,2.2) <> numrange(1.1,2.3) AS RESULT;
 result
--------
 t
(1 row)
Description: Is less than
For example:

SELECT int4range(1,10) < int4range(2,3) AS RESULT;
 result
--------
 t
(1 row)
Description: Is greater than
For example:

SELECT int4range(1,10) > int4range(1,5) AS RESULT;
 result
--------
 t
(1 row)
Description: Is less than or equal to
For example:

SELECT numrange(1.1,2.2) <= numrange(1.1,2.2) AS RESULT;
 result
--------
 t
(1 row)
Description: Is greater than or equal to
For example:

SELECT numrange(1.1,2.2) >= numrange(1.1,2.0) AS RESULT;
 result
--------
 t
(1 row)
Description: Contains range
For example:

SELECT int4range(2,4) @> int4range(2,3) AS RESULT;
 result
--------
 t
(1 row)
Description: Contains element
For example:

SELECT '[2011-01-01,2011-03-01)'::tsrange @> '2011-01-10'::timestamp AS RESULT;
 result
--------
 t
(1 row)
Description: Range is contained by
For example:

SELECT int4range(2,4) <@ int4range(1,7) AS RESULT;
 result
--------
 t
(1 row)
Description: Element is contained by
For example:

SELECT 42 <@ int4range(1,7) AS RESULT;
 result
--------
 f
(1 row)
Description: Overlaps (has points in common)
For example:

SELECT int8range(3,7) && int8range(4,12) AS RESULT;
 result
--------
 t
(1 row)
Description: Strictly left of
For example:

SELECT int8range(1,10) << int8range(100,110) AS RESULT;
 result
--------
 t
(1 row)
Description: Strictly right of
For example:

SELECT int8range(50,60) >> int8range(20,30) AS RESULT;
 result
--------
 t
(1 row)
Description: Does not extend to the right of
For example:

SELECT int8range(1,20) &< int8range(18,20) AS RESULT;
 result
--------
 t
(1 row)
Description: Does not extend to the left of
For example:

SELECT int8range(7,20) &> int8range(5,10) AS RESULT;
 result
--------
 t
(1 row)
Description: Is adjacent to
For example:

SELECT numrange(1.1,2.2) -|- numrange(2.2,3.3) AS RESULT;
 result
--------
 t
(1 row)
Description: Union
For example:

SELECT numrange(5,15) + numrange(10,20) AS RESULT;
 result
--------
 [5,20)
(1 row)
Description: Intersection
For example:

SELECT int8range(5,15) * int8range(10,20) AS RESULT;
 result
---------
 [10,15)
(1 row)
Description: Difference
For example:

SELECT int8range(5,15) - int8range(10,20) AS RESULT;
 result
--------
 [5,10)
(1 row)
The simple comparison operators <, >, <=, and >= compare the lower bounds first, and only if those are equal, compare the upper bounds.
The <<, >>, and -|- operators always return false when an empty range is involved; that is, an empty range is not considered to be either before or after any other range.
The union and difference operators will fail if the resulting range would need to contain two disjoint sub-ranges.
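For example, a union of adjacent ranges succeeds, while a union that would leave a gap raises an error:

SELECT numrange(1,2) + numrange(2,3) AS RESULT;  -- [1,3): adjacent ranges merge cleanly
-- SELECT numrange(1,2) + numrange(5,6);         -- fails: the result would be two disjoint sub-ranges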
Description: Lower bound of range
Return type: Range's element type
For example:

SELECT lower(numrange(1.1,2.2)) AS RESULT;
 result
--------
 1.1
(1 row)
Description: Upper bound of range
Return type: Range's element type
For example:

SELECT upper(numrange(1.1,2.2)) AS RESULT;
 result
--------
 2.2
(1 row)
Description: Is the range empty?
Return type: boolean
For example:

SELECT isempty(numrange(1.1,2.2)) AS RESULT;
 result
--------
 f
(1 row)
Description: Is the lower bound inclusive?
Return type: boolean
For example:

SELECT lower_inc(numrange(1.1,2.2)) AS RESULT;
 result
--------
 t
(1 row)
Description: Is the upper bound inclusive?
Return type: boolean
For example:

SELECT upper_inc(numrange(1.1,2.2)) AS RESULT;
 result
--------
 f
(1 row)
Description: Is the lower bound infinite?
Return type: boolean
For example:

SELECT lower_inf('(,)'::daterange) AS RESULT;
 result
--------
 t
(1 row)
Description: Is the upper bound infinite?
Return type: boolean
For example:

SELECT upper_inf('(,)'::daterange) AS RESULT;
 result
--------
 t
(1 row)
The lower and upper functions return null if the range is empty or the requested bound is infinite. The lower_inc, upper_inc, lower_inf, and upper_inf functions all return false for an empty range.
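A quick sketch of these boundary cases:

SELECT lower('empty'::int4range) IS NULL AS RESULT;  -- t: an empty range has no lower bound
SELECT upper('(,)'::daterange) IS NULL AS RESULT;    -- t: an infinite bound returns null
SELECT lower_inc('empty'::numrange) AS RESULT;       -- f: always false for an empty range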
Description: Sum of expression across all input values
Return type:
Generally, same as the argument data type. In the following cases, type conversion occurs:
- BIGINT for SMALLINT or INT arguments
- NUMERIC for BIGINT arguments
- DOUBLE PRECISION for floating-point arguments
For example:

SELECT SUM(ss_ext_tax) FROM tpcds.STORE_SALES;
     sum
--------------
 213267594.69
(1 row)
Description: Specifies the maximum value of expression across all input values.
Argument types: any array, numeric, string, or date/time type
Return type: same as the argument type
For example:

SELECT MAX(inv_quantity_on_hand) FROM tpcds.inventory;
   max
---------
 1000000
(1 row)
Description: Specifies the minimum value of expression across all input values.
Argument types: any array, numeric, string, or date/time type
Return type: same as the argument type
For example:

SELECT MIN(inv_quantity_on_hand) FROM tpcds.inventory;
 min
-----
 0
(1 row)
Description: Average (arithmetic mean) of all input values
Return type:
- NUMERIC for any integer-type argument
- DOUBLE PRECISION for floating-point arguments
- otherwise, the same as the argument data type
For example:

SELECT AVG(inv_quantity_on_hand) FROM tpcds.inventory;
         avg
----------------------
 500.0387129084044604
(1 row)
Description: Median of all input values. Currently, only the numeric and interval types are supported. Null values are not used in the calculation.
Return type: If all input values are integers, a median of the NUMERIC type is returned; otherwise, a median of the same type as the input values is returned.
In the Teradata-compatible mode, if the input values are integers, the returned median is rounded to the nearest integer.
For example:

SELECT MEDIAN(inv_quantity_on_hand) FROM tpcds.inventory;
 median
--------
 500
(1 row)
Description: Returns a value corresponding to the specified percentile in the ordering, interpolating between adjacent input items if needed. Null values are not used in the calculation.
Input: const is a number ranging from 0 to 1. Currently, only numeric and interval expressions are supported.
Return type: If all input values are integers, a result of the NUMERIC type is returned; otherwise, a result of the same type as the input values is returned.
In the Teradata-compatible mode, if the input values are integers, the returned result is rounded to the nearest integer.
For example:

select percentile_cont(0.3) within group(order by x) from (select generate_series(1,5) as x) as t;
 percentile_cont
-----------------
 2.2
(1 row)

select percentile_cont(0.3) within group(order by x desc) from (select generate_series(1,5) as x) as t;
 percentile_cont
-----------------
 3.8
(1 row)
Description: Returns the first input value whose position in the ordering equals or exceeds the specified percentile.
Input: const is a number ranging from 0 to 1. Currently, only numeric and interval expressions are supported. Null values are not used in the calculation.
Return type: If all input values are integers, a result of the NUMERIC type is returned; otherwise, a result of the same type as the input values is returned.
For example:

select percentile_disc(0.3) within group(order by x) from (select generate_series(1,5) as x) as t;
 percentile_disc
-----------------
 2
(1 row)

select percentile_disc(0.3) within group(order by x desc) from (select generate_series(1,5) as x) as t;
 percentile_disc
-----------------
 4
(1 row)
Description: Number of input rows for which the value of expression is not null
Return type: bigint
For example:

SELECT COUNT(inv_quantity_on_hand) FROM tpcds.inventory;
  count
----------
 11158087
(1 row)
Description: Number of input rows
Return type: bigint
For example:

SELECT COUNT(*) FROM tpcds.inventory;
  count
----------
 11745000
(1 row)
Description: Input values, including nulls, concatenated into an array
Return type: array of the argument type
For example:
Create the employeeinfo table and insert data into it:

CREATE TABLE employeeinfo (empno smallint, ename varchar(20), job varchar(20), hiredate date, deptno smallint);
INSERT INTO employeeinfo VALUES (7155, 'JACK', 'SALESMAN', '2018-12-01', 30);
INSERT INTO employeeinfo VALUES (7003, 'TOM', 'FINANCE', '2016-06-15', 20);
INSERT INTO employeeinfo VALUES (7357, 'MAX', 'SALESMAN', '2020-10-01', 30);

SELECT * FROM employeeinfo;
 empno | ename |   job    |      hiredate       | deptno
-------+-------+----------+---------------------+--------
  7155 | JACK  | SALESMAN | 2018-12-01 00:00:00 |     30
  7357 | MAX   | SALESMAN | 2020-10-01 00:00:00 |     30
  7003 | TOM   | FINANCE  | 2016-06-15 00:00:00 |     20
(3 rows)

Query the names of all employees in the department whose ID is 30:

SELECT array_agg(ename) FROM employeeinfo where deptno = 30;
 array_agg
------------
 {JACK,MAX}
(1 row)

Query all employees in the same department:

SELECT deptno, array_agg(ename) FROM employeeinfo group by deptno;
 deptno | array_agg
--------+------------
     30 | {JACK,MAX}
     20 | {TOM}
(2 rows)

Query all department IDs and deduplicate them:

SELECT array_agg(distinct deptno) FROM employeeinfo group by deptno;
 array_agg
-----------
 {20}
 {30}
(2 rows)

Sort the deduplicated department IDs in descending order:

SELECT array_agg(distinct deptno order by deptno desc) FROM employeeinfo;
 array_agg
-----------
 {30,20}
(1 row)
Description: Input values concatenated into a string, separated by delimiter
Return type: same as the argument type
For example:
Query all employees in the same department:

SELECT deptno, string_agg(ename,',') from employeeinfo group by deptno;
 deptno | string_agg
--------+------------
     30 | JACK,MAX
     20 | TOM
(2 rows)

Query employees whose work IDs are smaller than 7156:

SELECT string_agg(ename,',') FROM employeeinfo where empno < 7156;
 string_agg
------------
 TOM,JACK
(1 row)
Description: Sorts the aggregation column data according to the mode specified by WITHIN GROUP and concatenates it into a string using the specified delimiter
Return type: text
listagg is a column-to-row aggregation function, compatible with Oracle Database 11g Release 2. You can specify the OVER clause to use listagg as a window function. When listagg is used as a window function, the OVER clause does not support ORDER BY window sorting or framing; this avoids ambiguity between the ORDER BY of the OVER clause and that of the WITHIN GROUP clause.
For example:
The aggregation column is of the text character set type:

SELECT deptno, listagg(ename, ',') WITHIN GROUP(ORDER BY ename) AS employees FROM emp GROUP BY deptno;
 deptno |              employees
--------+--------------------------------------
     10 | CLARK,KING,MILLER
     20 | ADAMS,FORD,JONES,SCOTT,SMITH
     30 | ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD
(3 rows)
The aggregation column is of the integer type:

SELECT deptno, listagg(mgrno, ',') WITHIN GROUP(ORDER BY mgrno NULLS FIRST) AS mgrnos FROM emp GROUP BY deptno;
 deptno |            mgrnos
--------+-------------------------------
     10 | 7782,7839
     20 | 7566,7566,7788,7839,7902
     30 | 7698,7698,7698,7698,7698,7839
(3 rows)
The aggregation column is of the floating-point type:

SELECT job, listagg(bonus, '($); ') WITHIN GROUP(ORDER BY bonus DESC) || '($)' AS bonus FROM emp GROUP BY job;
    job     |                      bonus
------------+-------------------------------------------------
 CLERK      | 10234.21($); 2000.80($); 1100.00($); 1000.22($)
 PRESIDENT  | 23011.88($)
 ANALYST    | 2002.12($); 1001.01($)
 MANAGER    | 10000.01($); 2399.50($); 999.10($)
 SALESMAN   | 1000.01($); 899.00($); 99.99($); 9.00($)
(5 rows)
The aggregation column is of the time type:

SELECT deptno, listagg(hiredate, ', ') WITHIN GROUP(ORDER BY hiredate DESC) AS hiredates FROM emp GROUP BY deptno;
 deptno |                                                          hiredates
--------+------------------------------------------------------------------------------------------------------------------------------
     10 | 1982-01-23 00:00:00, 1981-11-17 00:00:00, 1981-06-09 00:00:00
     20 | 2001-04-02 00:00:00, 1999-12-17 00:00:00, 1987-05-23 00:00:00, 1987-04-19 00:00:00, 1981-12-03 00:00:00
     30 | 2015-02-20 00:00:00, 2010-02-22 00:00:00, 1997-09-28 00:00:00, 1981-12-03 00:00:00, 1981-09-08 00:00:00, 1981-05-01 00:00:00
(3 rows)
The aggregation column is of the time interval type:

SELECT deptno, listagg(vacationTime, '; ') WITHIN GROUP(ORDER BY vacationTime DESC) AS vacationTime FROM emp GROUP BY deptno;
 deptno |                                     vacationtime
--------+------------------------------------------------------------------------------------
     10 | 1 year 30 days; 40 days; 10 days
     20 | 70 days; 36 days; 9 days; 5 days
     30 | 1 year 1 mon; 2 mons 10 days; 30 days; 12 days 12:00:00; 4 days 06:00:00; 24:00:00
(3 rows)
By default, the delimiter is empty:

SELECT deptno, listagg(job) WITHIN GROUP(ORDER BY job) AS jobs FROM emp GROUP BY deptno;
 deptno |                     jobs
--------+----------------------------------------------
     10 | CLERKMANAGERPRESIDENT
     20 | ANALYSTANALYSTCLERKCLERKMANAGER
     30 | CLERKMANAGERSALESMANSALESMANSALESMANSALESMAN
(3 rows)
When listagg is used as a window function, the OVER clause does not support ORDER BY window sorting, and the listagg column is an ordered aggregation of the corresponding group:

SELECT deptno, mgrno, bonus, listagg(ename,'; ') WITHIN GROUP(ORDER BY hiredate) OVER(PARTITION BY deptno) AS employees FROM emp;
 deptno | mgrno |  bonus   |                 employees
--------+-------+----------+-------------------------------------------
     10 |  7839 | 10000.01 | CLARK; KING; MILLER
     10 |       | 23011.88 | CLARK; KING; MILLER
     10 |  7782 | 10234.21 | CLARK; KING; MILLER
     20 |  7566 |  2002.12 | FORD; SCOTT; ADAMS; SMITH; JONES
     20 |  7566 |  1001.01 | FORD; SCOTT; ADAMS; SMITH; JONES
     20 |  7788 |  1100.00 | FORD; SCOTT; ADAMS; SMITH; JONES
     20 |  7902 |  2000.80 | FORD; SCOTT; ADAMS; SMITH; JONES
     20 |  7839 |   999.10 | FORD; SCOTT; ADAMS; SMITH; JONES
     30 |  7839 |  2399.50 | BLAKE; TURNER; JAMES; MARTIN; WARD; ALLEN
     30 |  7698 |     9.00 | BLAKE; TURNER; JAMES; MARTIN; WARD; ALLEN
     30 |  7698 |  1000.22 | BLAKE; TURNER; JAMES; MARTIN; WARD; ALLEN
     30 |  7698 |    99.99 | BLAKE; TURNER; JAMES; MARTIN; WARD; ALLEN
     30 |  7698 |  1000.01 | BLAKE; TURNER; JAMES; MARTIN; WARD; ALLEN
     30 |  7698 |   899.00 | BLAKE; TURNER; JAMES; MARTIN; WARD; ALLEN
(14 rows)
Description: Population covariance
Return type: double precision
For example:

SELECT COVAR_POP(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
    covar_pop
------------------
 829.749627587403
(1 row)
Description: Sample covariance
Return type: double precision
For example:

SELECT COVAR_SAMP(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
    covar_samp
------------------
 830.052235037289
(1 row)
Description: Population standard deviation of the input values
Return type: double precision for floating-point arguments, otherwise numeric
For example:

SELECT STDDEV_POP(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
    stddev_pop
------------------
 289.224294957556
(1 row)
Description: Sample standard deviation of the input values
Return type: double precision for floating-point arguments, otherwise numeric
For example:

SELECT STDDEV_SAMP(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
   stddev_samp
------------------
 289.224359757315
(1 row)
Description: Population variance of the input values (square of the population standard deviation)
Return type: double precision for floating-point arguments, otherwise numeric
For example:

SELECT VAR_POP(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
      var_pop
--------------------
 83650.692793695475
(1 row)
Description: Sample variance of the input values (square of the sample standard deviation)
Return type: double precision for floating-point arguments, otherwise numeric
For example:

SELECT VAR_SAMP(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
      var_samp
--------------------
 83650.730277028768
(1 row)
Description: The bitwise AND of all non-null input values, or null if none
Return type: same as the argument type
For example:

SELECT BIT_AND(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
 bit_and
---------
 0
(1 row)
Description: The bitwise OR of all non-null input values, or null if none
Return type: same as the argument type
For example:

SELECT BIT_OR(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
 bit_or
--------
 1023
(1 row)
Description: Its value is true if all input values are true, otherwise false.
Return type: bool
For example:

SELECT bool_and(100 < 2500);
 bool_and
----------
 t
(1 row)
Description: Its value is true if at least one input value is true, otherwise false.
Return type: bool
For example:

SELECT bool_or(100 < 2500);
 bool_or
----------
 t
(1 row)
Description: Correlation coefficient
Return type: double precision
For example:

SELECT CORR(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
       corr
-------------------
 .0381383624904186
(1 row)
Description: Equivalent to bool_and
Return type: bool
For example:

SELECT every(100 < 2500);
 every
-------
 t
(1 row)
Description: Generates non-consecutive sequence numbers for the tuples in each group, sorted by expression. Identical values receive the same sequence number.
Return type: bigint
For example:

SELECT d_moy, d_fy_week_seq, rank() OVER(PARTITION BY d_moy ORDER BY d_fy_week_seq) FROM tpcds.date_dim WHERE d_moy < 4 AND d_fy_week_seq < 7 ORDER BY 1,2;
 d_moy | d_fy_week_seq | rank
-------+---------------+------
     1 |             1 |    1
     1 |             1 |    1
     1 |             1 |    1
     1 |             1 |    1
     1 |             1 |    1
     1 |             1 |    1
     1 |             1 |    1
     1 |             2 |    8
     1 |             2 |    8
     1 |             2 |    8
     1 |             2 |    8
     1 |             2 |    8
     1 |             2 |    8
     1 |             2 |    8
     1 |             3 |   15
     1 |             3 |   15
     1 |             3 |   15
     1 |             3 |   15
     1 |             3 |   15
     1 |             3 |   15
     1 |             3 |   15
     1 |             4 |   22
     1 |             4 |   22
     1 |             4 |   22
     1 |             4 |   22
     1 |             4 |   22
     1 |             4 |   22
     1 |             4 |   22
     1 |             5 |   29
     1 |             5 |   29
     2 |             5 |    1
     2 |             5 |    1
     2 |             5 |    1
     2 |             5 |    1
     2 |             5 |    1
     2 |             6 |    6
     2 |             6 |    6
     2 |             6 |    6
     2 |             6 |    6
     2 |             6 |    6
     2 |             6 |    6
     2 |             6 |    6
(42 rows)
Description: Average of the independent variable (sum(X)/N)
Return type: double precision
For example:

SELECT REGR_AVGX(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
    regr_avgx
------------------
 578.606576740795
(1 row)
Description: Average of the dependent variable (sum(Y)/N)
Return type: double precision
For example:

SELECT REGR_AVGY(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
    regr_avgy
------------------
 50.0136711629602
(1 row)
Description: Number of input rows in which both expressions are non-null
Return type: bigint
For example:

SELECT REGR_COUNT(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
 regr_count
------------
 2743
(1 row)
Description: y-intercept of the least-squares-fit linear equation determined by the (X, Y) pairs
Return type: double precision
For example:

SELECT REGR_INTERCEPT(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
  regr_intercept
------------------
 49.2040847848607
(1 row)
Description: Square of the correlation coefficient
Return type: double precision
For example:

SELECT REGR_R2(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
      regr_r2
--------------------
 .00145453469345058
(1 row)
Description: Slope of the least-squares-fit linear equation determined by the (X, Y) pairs
Return type: double precision
For example:

SELECT REGR_SLOPE(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
     regr_slope
--------------------
 .00139920009665259
(1 row)
Description: sum(X^2) - sum(X)^2/N ("sum of squares" of the independent variable)
Return type: double precision
For example:

SELECT REGR_SXX(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
     regr_sxx
------------------
 1626645991.46135
(1 row)
Description: sum(X*Y) - sum(X) * sum(Y)/N ("sum of products" of the independent variable times the dependent variable)
Return type: double precision
For example:

SELECT REGR_SXY(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
     regr_sxy
------------------
 2276003.22847225
(1 row)
Description: sum(Y^2) - sum(Y)^2/N ("sum of squares" of the dependent variable)
Return type: double precision
For example:

SELECT REGR_SYY(sr_fee, sr_net_loss) FROM tpcds.store_returns WHERE sr_customer_sk < 1000;
    regr_syy
-----------------
 2189417.6547314
(1 row)
Description: Alias of stddev_samp
Return type: double precision for floating-point arguments, otherwise numeric
For example:

SELECT STDDEV(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
      stddev
------------------
 289.224359757315
(1 row)
Description: Alias of var_samp
Return type: double precision for floating-point arguments, otherwise numeric
For example:

SELECT VARIANCE(inv_quantity_on_hand) FROM tpcds.inventory WHERE inv_warehouse_sk = 1;
      variance
--------------------
 83650.730277028768
(1 row)
Description: Returns the CHECKSUM value of all input values. This function can be used to check whether the data in a table is the same before and after GaussDB(DWS) data restoration or migration; it cannot be used to compare data with other databases. Before and after a backup, restoration, or migration, run the SQL statement manually on each side and compare the results to determine whether the table data is identical.
The following types can be converted into TEXT by default: char, name, int8, int2, int1, int4, raw, pg_node_tree, float4, float8, bpchar, varchar, nvarchar2, date, timestamp, timestamptz, numeric, and smalldatetime. Other types must be explicitly cast to TEXT.
Return type: numeric
For example:
The following shows the CHECKSUM value of a column that can be converted to the TEXT type by default:

SELECT CHECKSUM(inv_quantity_on_hand) FROM tpcds.inventory;
     checksum
-------------------
 24417258945265247
(1 row)

The following shows the CHECKSUM value of a column that cannot be converted to the TEXT type by default; the CHECKSUM argument is written as column_name::TEXT:

SELECT CHECKSUM(inv_quantity_on_hand::TEXT) FROM tpcds.inventory;
     checksum
-------------------
 24417258945265247
(1 row)

The following shows the CHECKSUM value of all columns in a table. Note that the CHECKSUM argument is written as table_name::TEXT, and the table name is not qualified by its schema:

SELECT CHECKSUM(inventory::TEXT) FROM tpcds.inventory;
     checksum
-------------------
 25223696246875800
(1 row)
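As a usage sketch (the table name is illustrative), the same statement can be run on the source side before migration and on the target side afterwards, and the two values compared:

-- On the source cluster, before migration:
SELECT CHECKSUM(inventory::TEXT) FROM tpcds.inventory;
-- On the target cluster, after migration; equal checksums indicate identical table data:
SELECT CHECKSUM(inventory::TEXT) FROM tpcds.inventory;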
Regular aggregate functions return a single value calculated from values in a row, or group all rows into a single output row. Window functions perform a calculation across a set of rows and return a value for each row.
The syntax of a window function call is one of the following:

function_name ([expression [, expression ... ]]) OVER ( window_definition )
function_name ([expression [, expression ... ]]) OVER window_name
function_name ( * ) OVER ( window_definition )
function_name ( * ) OVER window_name

window_definition is defined as follows:

[ existing_window_name ]
[ PARTITION BY expression [, ...] ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
[ frame_clause ]

frame_clause is defined as follows:

[ RANGE | ROWS ] frame_start
[ RANGE | ROWS ] BETWEEN frame_start AND frame_end

You can use RANGE or ROWS to specify the window frame. ROWS specifies the window in physical units (rows). RANGE specifies the window as a logical offset.
In RANGE and ROWS, you can use BETWEEN frame_start AND frame_end to specify the window's first and last rows. If frame_end is omitted, it defaults to CURRENT ROW.
The value options of frame_start and frame_end are as follows:
- UNBOUNDED PRECEDING
- value PRECEDING
- CURRENT ROW
- value FOLLOWING
- UNBOUNDED FOLLOWING
frame_start cannot be UNBOUNDED FOLLOWING, frame_end cannot be UNBOUNDED PRECEDING, and frame_end cannot be earlier than frame_start. For example, RANGE BETWEEN CURRENT ROW AND value PRECEDING is not allowed.
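A minimal sketch of a frame clause in use, assuming a hypothetical table t(v int): a moving sum over the current row and the two rows preceding it.

SELECT v,
       sum(v) OVER (ORDER BY v ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_sum
FROM t;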
Description: The RANK function generates non-consecutive sequence numbers for the values in each group. The same values have the same sequence number.
Return type: bigint
For example:

SELECT d_mon, d_week_seq, rank() OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | rank
-------+------------+------
     1 |          1 |    1
     1 |          1 |    1
     1 |          2 |    3
     1 |          2 |    3
     2 |          3 |    1
     2 |          3 |    1
(6 rows)
Description: The ROW_NUMBER function generates consecutive sequence numbers for the values in each group. The same values have different sequence numbers.
Return type: bigint
For example:

SELECT d_mon, d_week_seq, Row_number() OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | row_number
-------+------------+------------
     1 |          1 |          1
     1 |          1 |          2
     1 |          2 |          3
     1 |          2 |          4
     2 |          3 |          1
     2 |          3 |          2
(6 rows)
Description: The DENSE_RANK function generates consecutive sequence numbers for the values in each group. The same values have the same sequence number.
Return type: bigint
For example:

SELECT d_mon, d_week_seq, dense_rank() OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | dense_rank
-------+------------+------------
     1 |          1 |          1
     1 |          1 |          1
     1 |          2 |          2
     1 |          2 |          2
     2 |          3 |          1
     2 |          3 |          1
(6 rows)
Description: The PERCENT_RANK function generates a relative rank for each value in its group according to the formula: Sequence number = (Rank - 1)/(Total rows - 1), where Rank is the sequence number generated by the RANK function for the value and Total rows is the total number of elements in the group.
Return type: double precision
For example:

SELECT d_mon, d_week_seq, percent_rank() OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq |   percent_rank
-------+------------+------------------
     1 |          1 |                0
     1 |          1 |                0
     1 |          2 | .666666666666667
     1 |          2 | .666666666666667
     2 |          3 |                0
     2 |          3 |                0
(6 rows)
Description: The CUME_DIST function generates accumulative distribution sequence numbers for the values in each group according to the formula: Sequence number = Number of rows preceding or peer with the current row/Total rows.
Return type: double precision
For example:

SELECT d_mon, d_week_seq, cume_dist() OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date e_dim WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | cume_dist
-------+------------+-----------
     1 |          1 |        .5
     1 |          1 |        .5
     1 |          2 |         1
     1 |          2 |         1
     2 |          3 |         1
     2 |          3 |         1
(6 rows)
Description: The NTILE function distributes the ordered rows of each group into num_buckets buckets, dividing the partition as equally as possible, and returns the bucket number assigned to each row.
Return type: integer
For example:

SELECT d_mon, d_week_seq, ntile(3) OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | ntile
-------+------------+-------
     1 |          1 |     1
     1 |          1 |     1
     1 |          2 |     2
     1 |          2 |     3
     2 |          3 |     1
     2 |          3 |     2
(6 rows)
Description: The LAG function returns, for each row in a group, the value of the row that is offset rows before it within the group. If no such row exists, the default value is returned. If omitted, offset defaults to 1 and default defaults to NULL.
Return type: same as the parameter type
For example:

SELECT d_mon, d_week_seq, lag(d_mon,3,null) OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | lag
-------+------------+-----
     1 |          1 |
     1 |          1 |
     1 |          2 |
     1 |          2 |   1
     2 |          3 |
     2 |          3 |
(6 rows)
Description: The LEAD function returns, for each row in a group, the value of the row that is offset rows after it within the group. If the offset position is beyond the end of the group, the default value is returned. If omitted, offset defaults to 1 and default defaults to NULL.
Return type: same as the parameter type
For example:

SELECT d_mon, d_week_seq, lead(d_week_seq,2) OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | lead
-------+------------+------
     1 |          1 |    2
     1 |          1 |    2
     1 |          2 |
     1 |          2 |
     2 |          3 |
     2 |          3 |
(6 rows)
Description: The FIRST_VALUE function returns the first value of each group.
Return type: same as the parameter type
For example:

SELECT d_mon, d_week_seq, first_value(d_week_seq) OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 7 ORDER BY 1,2;
 d_mon | d_week_seq | first_value
-------+------------+-------------
     1 |          1 |           1
     1 |          1 |           1
     1 |          2 |           1
     1 |          2 |           1
     2 |          3 |           3
     2 |          3 |           3
(6 rows)
Description: Returns the last value of each group.
Return type: same as the parameter type
For example:

SELECT d_mon, d_week_seq, last_value(d_mon) OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 6 ORDER BY 1,2;
 d_mon | d_week_seq | last_value
-------+------------+------------
     1 |          1 |          1
     1 |          1 |          1
     1 |          2 |          1
     1 |          2 |          1
     2 |          3 |          2
     2 |          3 |          2
(6 rows)
Description: Returns the value of the nth row of each group. If the row does not exist, NULL is returned by default.
Return type: same as the parameter type
For example:

SELECT d_mon, d_week_seq, nth_value(d_week_seq,2) OVER(PARTITION BY d_mon ORDER BY d_week_seq) FROM reason_date WHERE d_mon < 4 AND d_week_seq < 6 ORDER BY 1,2;
 d_mon | d_week_seq | nth_value
-------+------------+-----------
     1 |          1 |         1
     1 |          1 |         1
     1 |          2 |         1
     1 |          2 |         1
     2 |          3 |         3
     2 |          3 |         3
(6 rows)
Description: Indicates the number of remaining days before the password of the current user expires. After the password expires, the system prompts the user to change it. This function is related to the GUC parameter password_effect_time.
Return type: interval
Examples:

SELECT gs_password_deadline();
  gs_password_deadline
-------------------------
 83 days 17:44:32.196094
(1 row)
Description: Indicates the number of remaining days before the password of the current user expires. After the password expires, the user cannot log in to the database. This function is related to the PASSWORD EXPIRATION period specified in the DDL statement used for creating the user.
Return type: interval
Examples:

SELECT gs_password_expiration();
 gs_password_expiration
-------------------------
 29 days 23:59:49.731482
(1 row)
Description: Queries login information about a login user.
Return type: tuple
Examples:

SELECT * FROM login_audit_messages(true);
 username | database |       logintime        |     type      | result | client_conninfo
----------+----------+------------------------+---------------+--------+-----------------
 dbadmin  | gaussdb  | 2017-06-02 15:28:34+08 | login_success | ok     | gsql@[local]
(1 row)

SELECT * FROM login_audit_messages(false) ORDER BY logintime desc limit 1;
 username | database | logintime | type | result | client_conninfo
----------+----------+-----------+------+--------+-----------------
(0 rows)

SELECT * FROM login_audit_messages(false);
 username | database | logintime | type | result | client_conninfo
----------+----------+-----------+------+--------+-----------------
(0 rows)
Description: Queries login information about a login user. Unlike login_audit_messages, this function queries login information based on backendid, so information about subsequent logins of the same user does not alter the query result for earlier logins, and those earlier records cannot be found using this function.
Return type: tuple
Examples:

SELECT * FROM login_audit_messages_pid(true);
 username | database |       logintime        |     type      | result | client_conninfo |    backendid
----------+----------+------------------------+---------------+--------+-----------------+-----------------
 dbadmin  | gaussdb  | 2017-06-02 15:28:34+08 | login_success | ok     | gsql@[local]    | 140311900702464
(1 row)

SELECT * FROM login_audit_messages_pid(false) ORDER BY logintime desc limit 1;
 username | database | logintime | type | result | client_conninfo | backendid
----------+----------+-----------+------+--------+-----------------+-----------
(0 rows)

SELECT * FROM login_audit_messages_pid(false);
 username | database | logintime | type | result | client_conninfo | backendid
----------+----------+-----------+------+--------+-----------------+-----------
(0 rows)
Description: Displays audit logs of the CN.
Return type: SETOF record
The following table describes the return columns.
Column | Type | Description
---|---|---
begintime | timestamp with time zone | Operation start time
endtime | timestamp with time zone | Operation end time
operation_type | text | Operation type. For details, see Table 1.
audit_type | text | Audit type. For details, see Table 2.
result | text | Operation result
username | text | Name of the user who performs the operation
database | text | Database name
client_conninfo | text | Client connection information, that is, gsql, JDBC, or ODBC
object_name | text | Object name
command_text | text | Command used to perform the operation. In versions earlier than 8.1.1, the audit content of this column is contained in detail_info.
detail_info | text | Operation details
transaction_xid | text | Transaction ID
query_id | text | Query ID
node_name | text | Node name
thread_id | text | Thread ID
local_port | text | Local port
remote_port | text | Remote port
Operation Type | Description
---|---
none | Indicates that no audit item is configured. If any audit item is configured, none becomes invalid.
all | Indicates that all operations are audited. This value overrides the configuration of all other audit items. Note that even if this parameter is set to all, not all DDL operations are audited. You need to control the object level of DDL operations by referring to audit_system_object.
login | Indicates that user login operations are audited.
logout | Indicates that user logout operations are audited.
database_process | Indicates that database startup, stop, switchover, and recovery operations are audited.
user_lock | Indicates that user locking and unlocking operations are audited.
grant_revoke | Indicates that user permission granting and revoking operations are audited.
ddl | Indicates that DDL operations are audited. DDL operations are controlled at a fine granularity based on operation objects, so audit_system_object is used to control the objects whose DDL operations are to be audited. (The audit function takes effect as long as audit_system_object is configured, regardless of whether ddl is set.)
select | Indicates that SELECT operations are audited.
copy | Indicates that COPY operations are audited.
user function | Indicates that operations related to user-defined functions, stored procedures, and anonymous blocks are audited.
set | Indicates that SET operations are audited.
transaction | Indicates that transaction operations are audited.
vacuum | Indicates that VACUUM operations are audited.
analyze | Indicates that ANALYZE operations are audited.
explain | Indicates that EXPLAIN operations are audited.
specialfunc | Indicates that special function invoking operations are audited. Special functions include pg_terminate_backend and pg_cancel_backend.
insert | Indicates that INSERT operations are audited.
update | Indicates that UPDATE operations are audited.
delete | Indicates that DELETE operations are audited.
merge | Indicates that MERGE operations are audited.
show | Indicates that SHOW operations are audited.
checkpoint | Indicates that CHECKPOINT operations are audited.
barrier | Indicates that BARRIER operations are audited.
cluster | Indicates that CLUSTER operations are audited.
comment | Indicates that COMMENT operations are audited.
clean connection | Indicates that CLEAN CONNECTION operations are audited.
prepare statement | Indicates that PREPARE, EXECUTE, and DEALLOCATE operations are audited.
set constraints | Indicates that SET CONSTRAINTS operations are audited.
cursor | Indicates that cursor operations are audited.
| Audit Type | Description |
| --- | --- |
| audit_switch | Indicates that enabling and disabling of audit logging are audited. |
| login_logout | Indicates that successful user logins and logouts are audited. |
| system | Indicates that system start and stop operations and instance switchover operations are audited. |
| sql_parse | Indicates that SQL statement parsing is audited. |
| user_lock | Indicates that successful locking and unlocking operations are audited. |
| grant_revoke | Indicates that failed granting and revoking of user permissions are audited. |
| violation | Indicates that users' access violation operations are audited. |
| ddl | Indicates that successful DDL operations are audited. DDL operations are controlled at a fine granularity based on operation objects, so audit_system_object is used to control the objects whose DDL operations are audited. (Auditing takes effect as long as audit_system_object is configured, regardless of whether ddl is set.) |
| dml | Indicates that INSERT, UPDATE, DELETE, and MERGE operations on a specific table are audited. |
| internal_event | Indicates that internal events are audited. |
| user_func | Indicates that operations related to user-defined functions, stored procedures, and anonymous blocks are audited. |
| special_func | Indicates that successful calls to special functions are audited. Special functions include pg_terminate_backend and pg_cancel_backend. |
| copy | Indicates that COPY operations are audited. |
| set | Indicates that SET operations are audited. |
| transaction | Indicates that transaction operations are audited. |
| vacuum | Indicates that VACUUM operations are audited. |
| analyze | Indicates that ANALYZE operations are audited. |
| cursor | Indicates that cursor operations are audited. |
| anonymous_block | Indicates that anonymous block operations are audited. If "anonymous block completed" is displayed, the SQL statement was executed successfully. |
| explain | Indicates that EXPLAIN operations are audited. |
| show | Indicates that SHOW operations are audited. |
| lock_table | Indicates that table lock operations are audited. |
| comment | Indicates that COMMENT operations are audited. |
| prepare | Indicates that PREPARE, EXECUTE, and DEALLOCATE operations are audited. |
| cluster | Indicates that CLUSTER operations are audited. |
| constraints | Indicates that SET CONSTRAINTS operations are audited. |
| checkpoint | Indicates that CHECKPOINT operations are audited. |
| barrier | Indicates that BARRIER operations are audited. |
| cleanconn | Indicates that CLEAN CONNECTION operations are audited. |
| seclabel | Indicates that security label operations are audited. |
| notify | Indicates that notification operations are audited. |
| load | Indicates that data loading operations are audited. |
Description: Displays audit logs of all CNs.
Return type: record
The return fields of this function are the same as those of the pg_query_audit function.
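As a usage sketch (the time window below is illustrative; pg_query_audit takes a start time and an end time and returns one row per audit record):

```
-- Query audit records in a given time window; adjust the window to a
-- range actually covered by your audit logs.
SELECT begintime, operation_type, username, database, result
FROM pg_query_audit('2021-03-01 08:00:00', '2021-03-01 17:00:00');
```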
Description: Deletes audit logs in a specified period.
Return type: void
For database security purposes, this function is unavailable. If you call it, the following message is displayed: "ERROR: For security purposes, it is not allowed to manually delete audit logs."
Description: Generates a series of values, from start to stop, with a step size of one.
Parameter type: int, bigint, or numeric
Return type: setof int, setof bigint, or setof numeric (same as the argument type)
Description: Generates a series of values, from start to stop, with a step size of step.
Parameter type: int, bigint, or numeric
Return type: setof int, setof bigint, or setof numeric (same as the argument type)
Description: Generates a series of values, from start to stop, with a step size of step.
Parameter type: timestamp or timestamp with time zone
Return type: setof timestamp or setof timestamp with time zone (same as the argument type)
When step is positive, zero rows are returned if start is greater than stop. Conversely, when step is negative, zero rows are returned if start is less than stop. Zero rows are also returned for NULL inputs. It is an error for step to be zero.
For example:
```
SELECT * FROM generate_series(2,4);
 generate_series
-----------------
               2
               3
               4
(3 rows)

SELECT * FROM generate_series(5,1,-2);
 generate_series
-----------------
               5
               3
               1
(3 rows)

SELECT * FROM generate_series(4,3);
 generate_series
-----------------
(0 rows)

-- this example relies on the date-plus-integer operator
SELECT current_date + s.a AS dates FROM generate_series(0,14,7) AS s(a);
   dates
------------
 2017-06-02
 2017-06-09
 2017-06-16
(3 rows)

SELECT * FROM generate_series('2008-03-01 00:00'::timestamp, '2008-03-04 12:00', '10 hours');
   generate_series
---------------------
 2008-03-01 00:00:00
 2008-03-01 10:00:00
 2008-03-01 20:00:00
 2008-03-02 06:00:00
 2008-03-02 16:00:00
 2008-03-03 02:00:00
 2008-03-03 12:00:00
 2008-03-03 22:00:00
 2008-03-04 08:00:00
(9 rows)
```
Description: Generates a series comprising the given array's subscripts.
Return type: setof int
Description: Generates a series comprising the given array's subscripts. When reverse is true, the series is returned in reverse order.
Return type: setof int
generate_subscripts is a function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, or for NULL arrays (but valid subscripts are returned for NULL array elements). For example:
```
-- basic usage
SELECT generate_subscripts('{NULL,1,NULL,2}'::int[], 1) AS s;
 s
---
 1
 2
 3
 4
(4 rows)
```
```
-- unnest a 2D array
CREATE OR REPLACE FUNCTION unnest2(anyarray)
RETURNS SETOF anyelement AS $$
SELECT $1[i][j]
  FROM generate_subscripts($1,1) g1(i),
       generate_subscripts($1,2) g2(j);
$$ LANGUAGE sql IMMUTABLE;

SELECT * FROM unnest2(ARRAY[[1,2],[3,4]]);
 unnest2
---------
       1
       2
       3
       4
(4 rows)

-- Delete the function:
DROP FUNCTION unnest2;
```
Description: Returns the first argument that is not NULL in the argument list.
COALESCE(expr1, expr2) is equivalent to CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE expr2 END.
For example:
```
SELECT coalesce(NULL,'hello');
 coalesce
----------
 hello
(1 row)
```
Note: Like a CASE expression, COALESCE evaluates only the arguments that are needed to determine the result; arguments to the right of the first non-null argument are not evaluated.
Description: Compares base_expr with each compare(n) and returns value(n) when they match. If base_expr matches none of the compare(n) values, the default value is returned.
For example:
```
SELECT decode('A','A',1,'B',2,0);
 case
------
    1
(1 row)
```
Description: Returns expr1 or expr2. If the value of bool_expr is true, expr1 is returned. Otherwise, expr2 is returned.
This function is equivalent to CASE WHEN bool_expr = true THEN expr1 ELSE expr2 END.
Example:
```
SELECT if(1 < 2, 'yes', 'no');
 if
-----
 yes
(1 row)
```
Note: expr1 and expr2 can be of any type. For details about the available types, see UNION, CASE, and Related Constructs.
Description: Returns expr1 or expr2. If expr1 is not NULL, expr1 is returned. Otherwise, expr2 is returned.
This function is logically equivalent to CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE expr2 END.
Example:
```
SELECT ifnull(NULL,'hello');
 ifnull
--------
 hello
(1 row)
```
Note: expr1 and expr2 can be of any type. For details about the available types, see UNION, CASE, and Related Constructs.
Description: Checks whether expr is NULL. If it is NULL, true is returned. Otherwise, false is returned.
This function is logically equivalent to expr IS NULL.
Example:
```
SELECT isnull(NULL), isnull('abc');
 isnull | isnull
--------+--------
 t      | f
(1 row)
```
Description: Returns NULL or expr1. If expr1 is equal to expr2, NULL is returned. Otherwise, expr1 is returned.
nullif(expr1, expr2) is equivalent to CASE WHEN expr1 = expr2 THEN NULL ELSE expr1 END.
For example:
```
SELECT nullif('hello','world');
 nullif
--------
 hello
(1 row)
```
Note:
If the two parameters have different data types, the result depends on whether an implicit type conversion exists between them:
```
SELECT nullif('1234'::VARCHAR,123::INT4);
 nullif
--------
 1234
(1 row)
```
```
SELECT nullif('1234'::VARCHAR,'2012-12-24'::DATE);
ERROR: invalid input syntax for type timestamp: "1234"
```
```
SELECT nullif(TRUE::BOOLEAN,'2012-12-24'::DATE);
ERROR: operator does not exist: boolean = timestamp without time zone
LINE 1: SELECT nullif(TRUE::BOOLEAN,'2012-12-24'::DATE) FROM DUAL;
               ^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
```
Description: Returns expr1 or expr2. If expr1 is NULL, expr2 is returned. Otherwise, expr1 is returned.
For example:
```
SELECT nvl('hello','world');
  nvl
-------
 hello
(1 row)
```
Parameters expr1 and expr2 can be of any data type. If expr1 and expr2 are of different data types, NVL checks whether expr2 can be implicitly converted to the expr1 type. If it can, the expr1 data type is returned. If expr2 cannot be implicitly converted to the expr1 type but expr1 can be implicitly converted to the expr2 type, the expr2 data type is returned. If no implicit type conversion exists between the two data types, an error is reported.
Description: Obtains and returns the parameter values of a specified namespace.
Return type: VARCHAR
For example:
```
SELECT sys_context('USERENV', 'CURRENT_SCHEMA');
 sys_context
-------------
 public
(1 row)
```
The result varies depending on the current schema.
Note: Currently, only the following formats are supported: SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') and SYS_CONTEXT('USERENV', 'CURRENT_USER').
Description: Selects the largest value from a list of any number of expressions.
Return type: the common data type of the argument expressions
For example:
```
SELECT greatest(1*2,2-3,4-1);
 greatest
----------
        3
(1 row)
```
```
SELECT greatest('ABC', 'BCD', 'CDE');
 greatest
----------
 CDE
(1 row)
```
Description: Selects the smallest value from a list of any number of expressions.
For example:
```
SELECT least(1*2,2-3,4-1);
 least
-------
    -1
(1 row)
```
```
SELECT least('ABC','BCD','CDE');
 least
--------
 ABC
(1 row)
```
Description: Initializes a BLOB variable in an INSERT or UPDATE statement to a NULL value.
Return type: BLOB
For example:
```
-- Create a table:
CREATE TABLE blob_tb(b blob, id int) DISTRIBUTE BY REPLICATION;
-- Insert data:
INSERT INTO blob_tb VALUES (empty_blob(), 1);
-- Delete the table:
DROP TABLE blob_tb;
```
Note: The length obtained using DBMS.GETLENGTH in parallel mode is 0.
Description: Name of the current database (called "catalog" in the SQL standard)
Return type: name
For example:
```
SELECT current_catalog;
 current_database
------------------
 gaussdb
(1 row)
```
Description: Name of the current database
Return type: name
For example:
```
SELECT current_database();
 current_database
------------------
 gaussdb
(1 row)
```
Description: Text of the currently executing query, as submitted by the client (might contain more than one statement)
Return type: text
For example:
```
SELECT current_query();
      current_query
-------------------------
 SELECT current_query();
(1 row)
```
Description: Name of the current schema
Return type: name
For example:
```
SELECT current_schema();
 current_schema
----------------
 public
(1 row)
```
Remarks: current_schema returns the first valid schema name in the search path. (If the search path is empty or contains no valid schema name, NULL is returned.) This is the schema that will be used for any tables or other named objects that are created without specifying a target schema.
Description: Names of the schemas in the search path
Return type: name[]
For example:
```
SELECT current_schemas(true);
   current_schemas
---------------------
 {pg_catalog,public}
(1 row)
```
Note:
current_schemas(boolean) returns an array of the names of all schemas currently in the search path. The Boolean option determines whether implicitly included system schemas such as pg_catalog are included in the returned search path.
The search path can be altered at run time with:
```
SET search_path TO schema [, schema, ...]
```
Description: User name of the current execution context
Return type: name
For example:
```
SELECT current_user;
 current_user
--------------
 dbadmin
(1 row)
```
Note: current_user is the user identifier that is applicable for permission checking. Normally it is equal to the session user, but it can be changed with SET ROLE. It also changes during the execution of functions with the attribute SECURITY DEFINER.
Description: Displays the IP address of the currently connected client.
Return type: inet
For example:
```
SELECT inet_client_addr();
 inet_client_addr
------------------
 10.10.0.50
(1 row)
```
Description: Displays the port number of the currently connected client.
It is available only in remote connection mode.
Return type: int
For example:
```
SELECT inet_client_port();
 inet_client_port
------------------
            33143
(1 row)
```
Description: Displays the IP address of the current server.
Return type: inet
For example:
```
SELECT inet_server_addr();
 inet_server_addr
------------------
 10.10.0.13
(1 row)
```
Description: Displays the port of the current server. All of these functions return NULL if the current connection is via a Unix-domain socket.
It is available only in remote connection mode.
Return type: int
For example:
```
SELECT inet_server_port();
 inet_server_port
------------------
             8000
(1 row)
```
Description: Process ID of the server process attached to the current session
Return type: int
For example:
```
SELECT pg_backend_pid();
 pg_backend_pid
-----------------
 140229352617744
(1 row)
```
Description: Configuration load time. pg_conf_load_time returns the timestamp with time zone at which the server configuration files were last loaded.
Return type: timestamp with time zone
For example:
```
SELECT pg_conf_load_time();
      pg_conf_load_time
------------------------------
 2017-09-01 16:05:23.89868+08
(1 row)
```
Description: OID of the temporary schema of a session. The value is 0 if the OID does not exist.
Return type: OID
For example:
```
SELECT pg_my_temp_schema();
 pg_my_temp_schema
-------------------
                 0
(1 row)
```
Note: pg_my_temp_schema returns the OID of the current session's temporary schema, or zero if it has none (because it has not created any temporary tables). pg_is_other_temp_schema returns true if the given OID is the OID of another session's temporary schema.
Description: Whether the schema is the temporary schema of another session.
Return type: boolean
For example:
```
SELECT pg_is_other_temp_schema(25356);
 pg_is_other_temp_schema
-------------------------
 f
(1 row)
```
Description: Channel names that the session is currently listening on
Return type: setof text
For example:
```
SELECT pg_listening_channels();
 pg_listening_channels
-----------------------
(0 rows)
```
Note: pg_listening_channels returns a set of names of channels that the current session is listening to.
Description: Server start time. pg_postmaster_start_time returns the timestamp with time zone at which the server started.
Return type: timestamp with time zone
For example:
```
SELECT pg_postmaster_start_time();
   pg_postmaster_start_time
------------------------------
 2017-08-30 16:02:54.99854+08
(1 row)
```
Description: Current nesting level of triggers
Return type: int
For example:
```
SELECT pg_trigger_depth();
 pg_trigger_depth
------------------
                0
(1 row)
```
Description: Postgres-XC version information
Return type: text
For example:
```
SELECT pgxc_version();
                                                 pgxc_version
-------------------------------------------------------------------------------------------------------------
 Postgres-XC 1.1 on x86_64-unknown-linux-gnu, based on PostgreSQL 9.2.4, compiled by g++ (GCC) 5.4.0, 64-bit
(1 row)
```
Description: Session user name
Return type: name
For example:
```
SELECT session_user;
 session_user
--------------
 dbadmin
(1 row)
```
Note: session_user is usually the user who initiated the current database connection, but administrators can change this setting with SET SESSION AUTHORIZATION.
Description: Equivalent to current_user.
Return type: name
For example:
```
SELECT user;
 current_user
--------------
 dbadmin
(1 row)
```
Description: Version information. version returns a string describing the server's version.
Return type: text
For example:
```
SELECT version();
                                                                 version
---------------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.2.4 gsql ((GaussDB 8.1.1 build af002019) compiled at 2020-01-10 05:43:20 commit 6995 last mr 11566 ) on x86_64-unknown-linux-gnu, compiled by g++ (GCC) 5.4.0, 64-bit
(1 row)
```
Description: Queries whether a specified user has permission for any column of table.
Return type: boolean
Description: Queries whether the current user has permission for any column of table.
Return type: boolean
has_any_column_privilege checks whether a user can access any column of a table in a particular way. Its parameter possibilities are analogous to has_table_privilege, except that the desired access permission type must be some combination of SELECT, INSERT, UPDATE, or REFERENCES.
Note that having any of these permissions at the table level implicitly grants it for each column of the table, so has_any_column_privilege will always return true if has_table_privilege does for the same parameters. But has_any_column_privilege also succeeds if there is a column-level grant of the permission for at least one column.
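A minimal sketch (the user and table names are illustrative; tpcds.web_site is the sample table used elsewhere in this guide):

```
-- Does user dbadmin hold SELECT on at least one column of tpcds.web_site?
SELECT has_any_column_privilege('dbadmin', 'tpcds.web_site', 'SELECT');
```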
Description: Queries whether a specified user has permission for column.
Return type: boolean
Description: Queries whether the current user has permission for column.
Return type: boolean
has_column_privilege checks whether a user can access a column in a particular way. Its argument possibilities are analogous to has_table_privilege, with the addition that the column can be specified either by name or by attribute number. The desired access permission type must evaluate to some combination of SELECT, INSERT, UPDATE, or REFERENCES.
Note that having any of these permissions at the table level implicitly grants it for each column of the table.
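A minimal sketch (the table and column names are illustrative):

```
-- Check SELECT permission on a single column, specified by name.
SELECT has_column_privilege('tpcds.web_site', 'web_site_sk', 'SELECT');
```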
Description: Queries whether a specified user has permission for database.
Return type: boolean
Description: Queries whether the current user has permission for database.
Return type: boolean
Note: has_database_privilege checks whether a user can access a database in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access permission type must evaluate to some combination of CREATE, CONNECT, TEMPORARY, or TEMP (which is equivalent to TEMPORARY).
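A minimal sketch (the database name is illustrative):

```
-- Can the current user connect to database gaussdb?
SELECT has_database_privilege('gaussdb', 'CONNECT');
```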
Description: Queries whether a specified user has permission for foreign-data wrapper.
The fdw parameter indicates the name or ID of the foreign-data wrapper.
Return type: boolean
Description: Queries whether the current user has permission for foreign-data wrapper.
Return type: boolean
Note: has_foreign_data_wrapper_privilege checks whether a user can access a foreign-data wrapper in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access permission type must evaluate to USAGE.
Description: Queries whether a specified user has permission for function.
Return type: boolean
Description: Queries whether the current user has permission for function.
Return type: boolean
Note: has_function_privilege checks whether a user can access a function in a particular way. Its argument possibilities are analogous to has_table_privilege. When a function is specified by a text string rather than by OID, the allowed input is the same as that for the regprocedure data type (see Object Identifier Types). The desired access permission type must evaluate to EXECUTE.
Description: Queries whether a specified user has permission for language.
Return type: boolean
Description: Queries whether the current user has permission for language.
Return type: boolean
Note: has_language_privilege checks whether a user can access a procedural language in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access permission type must evaluate to USAGE.
Description: Queries whether a specified user has permission for schema.
Return type: boolean
Description: Queries whether the current user has permission for schema.
Return type: boolean
Note: has_schema_privilege checks whether a user can access a schema in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access permission type must evaluate to some combination of CREATE or USAGE.
Description: Queries whether a specified user has permission for foreign server.
Return type: boolean
Description: Queries whether the current user has permission for foreign server.
Return type: boolean
Note: has_server_privilege checks whether a user can access a foreign server in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access permission type must evaluate to USAGE.
Description: Queries whether a specified user has permission for table.
Return type: boolean
Description: Queries whether the current user has permission for table.
Return type: boolean
has_table_privilege checks whether a user can access a table in a particular way. The user can be specified by name, by OID (pg_authid.oid), or as public to indicate the PUBLIC pseudo-role; if the argument is omitted, current_user is assumed. The table can be specified by name or by OID. When specifying by name, the name can be schema-qualified if necessary. The desired access permission type is specified by a text string, which must be one of the values SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, or TRIGGER. Optionally, WITH GRANT OPTION can be added to a permission type to test whether the permission is held with grant option. Also, multiple permission types can be listed separated by commas, in which case the result will be true if any of the listed permissions is held.
For example:
```
SELECT has_table_privilege('tpcds.web_site', 'select');
 has_table_privilege
---------------------
 t
(1 row)

SELECT has_table_privilege('dbadmin', 'tpcds.web_site', 'select,INSERT WITH GRANT OPTION');
 has_table_privilege
---------------------
 t
(1 row)
```
Description: Queries whether a specified user has permission for role.
Return type: boolean
Description: Queries whether the current user has permission for role.
Return type: boolean
Note: pg_has_role checks whether a user can access a role in a particular way. Its argument possibilities are analogous to has_table_privilege, except that public is not allowed as a user name. The desired access permission type must evaluate to some combination of MEMBER or USAGE. MEMBER denotes direct or indirect membership in the role (that is, the right to do SET ROLE), while USAGE denotes whether the permissions of the role are available without doing SET ROLE.
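A minimal sketch (the role name is illustrative):

```
-- Is the current user a direct or indirect member of role dbadmin?
SELECT pg_has_role('dbadmin', 'MEMBER');
```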
Each function performs the visibility check for one type of database object. For functions and operators, an object in the search path is visible if there is no object of the same name and argument data type(s) earlier in the path. For operator classes, both the name and the associated index access method are considered.
All these functions require OIDs to identify the objects to be checked. To test an object by name, it is convenient to use the OID alias types (regclass, regtype, regprocedure, regoperator, regconfig, or regdictionary).
For example, a table is said to be visible if its containing schema is in the search path and no table of the same name appears earlier in the search path. This is equivalent to the statement that the table can be referenced by name without explicit schema qualification. To list the names of all visible tables:
```
SELECT relname FROM pg_class WHERE pg_table_is_visible(oid);
```
Description: Queries whether the collation is visible in the search path.
Return type: boolean
Description: Queries whether the conversion is visible in the search path.
Return type: boolean
Description: Queries whether the function is visible in the search path.
Return type: boolean
Description: Queries whether the operator class is visible in the search path.
Return type: boolean
Description: Queries whether the operator is visible in the search path.
Return type: boolean
Description: Queries whether the operator family is visible in the search path.
Return type: boolean
Description: Queries whether the table is visible in the search path.
Return type: boolean
Description: Queries whether the text search configuration is visible in the search path.
Return type: boolean
Description: Queries whether the text search dictionary is visible in the search path.
Return type: boolean
Description: Queries whether the text search parser is visible in the search path.
Return type: boolean
Description: Queries whether the text search template is visible in the search path.
Return type: boolean
Description: Queries whether the type (or domain) is visible in the search path.
Return type: boolean
Description: Gets the SQL name of a data type.
Return type: text
Note:
format_type returns the SQL name of a data type that is identified by its type OID and possibly a type modifier. Pass NULL for the type modifier if no specific modifier is known. Certain type modifiers are passed for data types with length limitations. The SQL name returned by format_type contains the user-declared length of the data type, which equals the actual storage length minus sizeof(int32), in bytes: 4 bytes (32 bits) are needed to store the length declared by the user, so the actual storage length is 4 bytes larger than the declared length. In the following example, the SQL name returned by format_type is character varying(6), indicating that the declared length of the varchar type is 6 bytes, so its actual storage length is 10 bytes.
```
SELECT format_type((SELECT oid FROM pg_type WHERE typname='varchar'), 10);
     format_type
----------------------
 character varying(6)
(1 row)
```
Description: Checks whether a role name with the given OID exists.
Return type: bool
Description: Gets the description of a database object.
Return type: text
Note: pg_describe_object returns a description of a database object specified by catalog OID, object OID, and a (possibly zero) sub-object ID. This is useful to determine the identity of an object as stored in the pg_depend catalog.
Description: Gets the definition of a constraint.
Return type: text
Description: Gets the definition of a constraint.
Return type: text
Note: pg_get_constraintdef and pg_get_indexdef respectively reconstruct the creating command for a constraint and an index.
Description: Decompiles the internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter.
Return type: text
Description: Decompiles the internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter.
Return type: text
Note: pg_get_expr decompiles the internal form of an individual expression, such as the default value of a column. It can be useful when examining the contents of system catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient.
Description: Gets the definition of a function.
Return type: text
func_oid is the OID of the function, which can be queried in the PG_PROC system catalog.
Example: Query the OID and definition of the justify_days function.
```
SELECT oid FROM pg_proc WHERE proname ='justify_days';
 oid
------
 1295
(1 row)

SELECT * FROM pg_get_functiondef(1295);
 headerlines |                          definition
-------------+--------------------------------------------------------------
           4 | CREATE OR REPLACE FUNCTION pg_catalog.justify_days(interval)+
             |  RETURNS interval                                           +
             |  LANGUAGE internal                                          +
             |  IMMUTABLE STRICT NOT FENCED NOT SHIPPABLE                  +
             | AS $function$interval_justify_days$function$                +
             |
(1 row)
```
Description: Gets the argument list of a function's definition (with default values).
Return type: text
Note: pg_get_function_arguments returns the argument list of a function, in the form it would need to appear in within CREATE FUNCTION.
Description: Gets the argument list needed to identify a function (without default values).
Return type: text
Note: pg_get_function_identity_arguments returns the argument list necessary to identify a function, in the form it would need to appear in within ALTER FUNCTION. This form omits default values.
Description: Gets the RETURNS clause for a function.
Return type: text
Note: pg_get_function_result returns the appropriate RETURNS clause for the function.
Description: Gets the CREATE INDEX command for an index.
Return type: text
index_oid indicates the index OID, which can be queried in the PG_STATIO_ALL_INDEXES system view.
Example: Query the OID and CREATE INDEX command of the index ds_ship_mode_t1_index1.
```
SELECT indexrelid FROM PG_STATIO_ALL_INDEXES WHERE indexrelname = 'ds_ship_mode_t1_index1';
 indexrelid
------------
     136035
(1 row)

SELECT * FROM pg_get_indexdef(136035);
                                                pg_get_indexdef
---------------------------------------------------------------------------------------------------------------
 CREATE INDEX ds_ship_mode_t1_index1 ON tpcds.ship_mode_t1 USING psort (sm_ship_mode_sk) TABLESPACE pg_default
(1 row)
```
Description: Gets the CREATE INDEX command for an index, or the definition of just one index column when column_no is not zero.
Return type: text
```
SELECT * FROM pg_get_indexdef(136035,0,false);
                                                pg_get_indexdef
---------------------------------------------------------------------------------------------------------------
 CREATE INDEX ds_ship_mode_t1_index1 ON tpcds.ship_mode_t1 USING psort (sm_ship_mode_sk) TABLESPACE pg_default
(1 row)

SELECT * FROM pg_get_indexdef(136035,1,false);
 pg_get_indexdef
-----------------
 sm_ship_mode_sk
(1 row)
```
Description: Gets the list of SQL keywords and their categories.
Return type: setof record
Note: pg_get_keywords returns a set of records describing the SQL keywords recognized by the server. The word column contains the keyword. The catcode column contains a category code: U for unreserved, C for column name, T for type or function name, or R for reserved. The catdesc column contains a possibly-localized string describing the category.
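A minimal sketch (output omitted here, since the keyword list depends on the server version):

```
-- Show a few reserved keywords; catcode 'R' marks reserved words.
SELECT word, catcode, catdesc FROM pg_get_keywords() WHERE catcode = 'R' LIMIT 5;
```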
Description: Gets the CREATE RULE command for a rule.
Return type: text
Description: Gets the CREATE RULE command for a rule.
Return type: text
Description: Gets the role name with the given OID.
Return type: name
Note: pg_get_userbyid extracts a role's name given its OID.
Description: Gets the underlying SELECT command for a view.
Return type: text
Description: Gets the underlying SELECT command for a view.
Return type: text
Description: Gets the underlying SELECT command for a view, wrapping lines with columns as specified; pretty-printing is implied.
Return type: text
Description: Obtains a table definition based on table_oid.
Return type: text
Example: Obtain the OID of the table customer_t2 from the system catalog pg_class, and then use this function to query the definition of customer_t2 to obtain the table columns, storage mode (row-store or column-store), and table distribution mode configured for customer_t2 when it was created.
```
select oid from pg_class where relname ='customer_t2';
  oid
-------
 17353
(1 row)

select * from pg_get_tabledef(17353);
              pg_get_tabledef
--------------------------------------------
 SET search_path = dbadmin;                +
                                           +
 CREATE TABLE customer_t2 (                +
     state_id character(2),                +
     state_name character varying(40),     +
     area_id numeric                       +
 )                                         +
 WITH (orientation=column, compression=low)+
 DISTRIBUTE BY HASH(state_id)              +
 TO GROUP group_version1;
(1 row)
```
Description: Obtains a table definition based on table_name.
Return type: text
Remarks: pg_get_tabledef reconstructs the CREATE statement of the table definition, including the table definition itself, index information, and comments. Users need to create the dependent objects of the table, such as groups, schemas, tablespaces, and servers, separately; the table definition does not include the statements for creating these dependent objects.
Description: Gets the set of storage option name/value pairs.
Return type: setof record
Note: pg_options_to_table returns the set of storage option name/value pairs (option_name/option_value) when passed pg_class.reloptions or pg_attribute.attoptions.
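A minimal sketch (customer_t2 is the illustrative table created earlier; the subquery fetches its reloptions array):

```
-- Expand a table's storage options into (option_name, option_value) rows.
SELECT option_name, option_value
FROM pg_options_to_table((SELECT reloptions FROM pg_class WHERE relname = 'customer_t2'));
```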
Description: Gets the data type of any value.
Return type: regtype
Note:
pg_typeof returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as returning regtype, which is an OID alias type (see Object Identifier Types). This means that it is the same as an OID for comparison purposes but displays as a type name.
For example:
```
SELECT pg_typeof(33);
 pg_typeof
-----------
 integer
(1 row)

SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
 typlen
--------
      4
(1 row)
```
Description: Gets the collation of the parameter.
Return type: text
Note:
The expression collation for returns the collation of the value that is passed to it. For example:
```
SELECT collation for (description) FROM pg_description LIMIT 1;
 pg_collation_for
------------------
 "default"
(1 row)
```
The value might be quoted and schema-qualified. If no collation is derived for the argument expression, a null value is returned. If the parameter is not of a collatable data type, an error is thrown.
Description: Gets the distribution column for a hash table.
Return type: text
For example:
```
SELECT getdistributekey('item');
 getdistributekey
------------------
 i_item_sk
(1 row)
```
Description: Gets the comment for a table column.
Return type: text
Note: col_description returns the comment for a table column, which is specified by the OID of its table and its column number.
Description: Gets the comment for a database object.
Return type: text
Note: The two-parameter form of obj_description returns the comment for a database object specified by its OID and the name of the containing system catalog. For example, obj_description(123456,'pg_class') would retrieve the comment for the table with OID 123456. The one-parameter form of obj_description requires only the object OID.
obj_description cannot be used for table columns, since columns do not have OIDs of their own.
Description: Gets the comment for a database object.
Return type: text
Description: Gets the comment for a shared database object.
Return type: text
Note: shobj_description is used just like obj_description, except that the former is used for retrieving comments on shared objects. Some system catalogs are global to all databases within each cluster, and the comments for objects in them are stored globally as well.
The following functions provide server transaction information in an exportable form. The main use of these functions is to determine which transactions were committed between two snapshots.
Description: Determines whether the given XID is committed or ignored. NULL indicates the unknown status (such as running, preparing, or freezing).
Return type: bool
Description: Gets the current transaction ID.
Return type: bigint
Description: Gets the current snapshot.
Return type: txid_snapshot
Description: Gets the in-progress transaction IDs in a snapshot.
Return type: setof bigint
Description: Gets the xmax of a snapshot.
Return type: bigint
Description: Gets the xmin of a snapshot.
Return type: bigint
Description: Queries whether the transaction ID is visible in the snapshot. (Do not use with subtransaction IDs.)
Return type: boolean
The internal transaction ID type (xid) is 32 bits wide and wraps around every 4 billion transactions. txid_snapshot, the data type used by these functions, stores information about transaction ID visibility at a particular moment in time. Table 1 describes its components.

| Name | Description |
| --- | --- |
| xmin | Earliest transaction ID (txid) that is still active. All earlier transactions will either be committed and visible, or rolled back. |
| xmax | First as-yet-unassigned txid. All txids greater than or equal to this are not yet started as of the time of the snapshot, so they are invisible. |
| xip_list | Active txids at the time of the snapshot. The list includes only those active txids between xmin and xmax; there might be active txids higher than xmax. A txid that satisfies xmin <= txid < xmax and is not in this list was already completed at the time of the snapshot, and is either visible or dead according to its commit status. The list does not include the txids of subtransactions. |

txid_snapshot's textual representation is xmin:xmax:xip_list.
For example, 10:20:10,14,15 means xmin=10, xmax=20, xip_list=10, 14, 15.
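A minimal sketch of these functions (the returned values depend on cluster activity):

```
-- 64-bit transaction ID of the current transaction.
SELECT txid_current();

-- Current snapshot in xmin:xmax:xip_list form, e.g. 10:20:10,14,15.
SELECT txid_current_snapshot();
```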
pv_compute_pool_workload()
Description: Load status of a computing Node Group.
Return type: void
For example:
```
SELECT * from pv_compute_pool_workload();
 nodename  | rpinuse | maxrp | nodestate
-----------+---------+-------+-----------
 datanode1 |       0 |  1000 | normal
 datanode2 |       0 |  1000 | normal
(2 rows)
```
pgxc_get_lock_conflicts()
Description: Obtains information about conflicting locks in the cluster. When a lock is waiting for another lock or another lock is waiting for it, a lock conflict occurs.
Return type: setof record
Configuration setting functions are used for querying and modifying configuration parameters during running.
Description: Specifies the current setting.
Return type: text
Note: current_setting obtains the current setting of setting_name by query. It is equivalent to the SHOW statement. For example:
```
SELECT current_setting('datestyle');
 current_setting
-----------------
 ISO, MDY
(1 row)
```
Description: Sets the parameter and returns a new value.
Return type: text
Note: set_config sets the parameter setting_name to new_value. If is_local is true, the new value applies only to the current transaction. If you want the new value to apply for the current session, use false instead. This function corresponds to the SET statement. For example:
```
SELECT set_config('log_statement_stats', 'off', false);
 set_config
------------
 off
(1 row)
```
Universal file access functions provide local access interfaces for files on a database server. Only files in the database cluster directory and the log_directory directory can be accessed. Use a relative path for files in the cluster directory, and a path matching the log_directory configuration setting for log files. Only database system administrators can use these functions.
Description: Lists files in a directory.
Return type: setof text
Note: pg_ls_dir returns all the names in the specified directory, except the special entries "." and "..".
For example:
```
SELECT pg_ls_dir('./');
      pg_ls_dir
----------------------
 .postgresql.conf.swp
 postgresql.conf
 pg_tblspc
 PG_VERSION
 pg_ident.conf
 core
 server.crt
 pg_serial
 pg_twophase
 postgresql.conf.lock
 pg_stat_tmp
 pg_notify
 pg_subtrans
 pg_ctl.lock
 pg_xlog
 pg_clog
 base
 pg_snapshots
 postmaster.opts
 postmaster.pid
 server.key.rand
 server.key.cipher
 pg_multixact
 pg_errorinfo
 server.key
 pg_hba.conf
 pg_replslot
 .pg_hba.conf.swp
 cacert.pem
 pg_hba.conf.lock
 global
 gaussdb.state
(32 rows)
```
Description: Returns the content of a text file.
Return type: text
Note: pg_read_file returns part of a text file. It can return a maximum of length bytes starting at offset. The actual size of the fetched data is less than length if the end of the file is reached first. If offset is negative, it is the length counted back from the end of the file. If offset and length are omitted, the entire file is returned.
For example:
```
SELECT pg_read_file('postmaster.pid',0,100);
             pg_read_file
---------------------------------------
 53078                                +
 /srv/BigData/hadoop/data1/coordinator+
 1500022474                           +
 253088000                            +
 /var/run/FusionInsight               +
 localhost                            +
 2
(1 row)
```
Description: Returns the content of a binary file.
Return type: bytea
Note: pg_read_binary_file is similar to pg_read_file, except that the result is a bytea value; accordingly, no encoding checks are performed. In combination with the convert_from function, this function can be used to read a file in a specified encoding:
```
SELECT convert_from(pg_read_binary_file('filename'), 'UTF8');
```
Description: Returns status information about a file.
Return type: record
Note: pg_stat_file returns a record containing the file size, last access timestamp, last modification timestamp, last file status change timestamp, and a boolean value indicating whether it is a directory. Typical usages are as follows:
```
SELECT * FROM pg_stat_file('filename');
SELECT (pg_stat_file('filename')).modification;
```
Examples:
```
SELECT * FROM pg_stat_file('postmaster.pid');
 size |         access         |      modification      |         change         | creation | isdir
------+------------------------+------------------------+------------------------+----------+-------
  117 | 2017-06-05 11:06:34+08 | 2017-06-01 17:18:08+08 | 2017-06-01 17:18:08+08 |          | f
(1 row)

SELECT (pg_stat_file('postmaster.pid')).modification;
      modification
------------------------
 2017-06-01 17:18:08+08
(1 row)
```
Server signaling functions send control signals to other server processes. Only system administrators can use these functions.
Description: Cancels the current query of a backend.
Return type: boolean
Note: pg_cancel_backend sends a query cancellation (SIGINT) signal to the backend process identified by pid. The PID of an active backend process can be found in the pid column of the pg_stat_activity view, or by listing the database processes with ps on the server.
Description: Causes all server processes to reload their configuration files.
Return type: boolean
Note: pg_reload_conf sends a SIGHUP signal to the server. As a result, all server processes reload their configuration files.
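A minimal sketch:

```
-- Ask all server processes to re-read their configuration files;
-- returns true if the signal was sent successfully.
SELECT pg_reload_conf();
```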
Description: Rotates the log files of the server.
Return type: boolean
Note: pg_rotate_logfile instructs the log file manager to immediately switch to a new output file. This function is valid only if the built-in log collector is running.
Description: Terminates a backend thread.
Return type: boolean
Note: Each of these functions returns true if it succeeds and false otherwise.
For example:
```
SELECT pid from pg_stat_activity;
       pid
-----------------
 140657876268816
 140433774061312
 140433587902208
 140433656592128
 140433723717376
 140433637189376
 140433552770816
 140433481983744
 140433349310208
(9 rows)

SELECT pg_terminate_backend(140657876268816);
 pg_terminate_backend
----------------------
 t
(1 row)
```
Backup control functions assist with online backup.
Description: Creates a named point for performing the restore operation (restricted to system administrators).
Return type: text
Note: pg_create_restore_point creates a named transaction log record that can be used as a restoration target, and returns the corresponding transaction log location. The given name can then be used with recovery_target_name to specify the point up to which restoration will proceed. Avoid creating multiple restoration points with the same name, since restoration will stop at the first one whose name matches the restoration target.
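A minimal sketch (the label is illustrative):

```
-- Create a named restore point and return its transaction log location.
SELECT pg_create_restore_point('before_batch_load');
```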
Description: Obtains the write position of the current transaction log.
Return type: text
Note: pg_current_xlog_location displays the write position of the current transaction log in the same format as those of the previous functions. Read-only operations do not require system administrator rights.
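A minimal sketch (the returned location string is cluster-specific, for example 0/3000020):

```
SELECT pg_current_xlog_location();
```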
Description: Obtains the insert position of the current transaction log.
Return type: text
Note: pg_current_xlog_insert_location displays the insert position of the current transaction log. The insert position is the logical end of the transaction log at any instant, while the write position is the end of what has been written out from the server's internal buffers. The write position is the end that can be detected externally from the server; it is usually what is needed when archiving partially complete transaction log files. The insert position is mainly used for commissioning the server. Read-only operations do not require system administrator rights.
Description: Starts executing online backup (restricted to system administrators or replication roles).
Return type: text
Note: pg_start_backup receives a user-defined backup label (usually the name of the position where the backup dump file is stored). This function writes a backup label file to the data directory of the database cluster and then returns the starting position of the backed-up transaction logs in text mode.
```
SELECT pg_start_backup('label_goes_here');
 pg_start_backup
-----------------
 0/3000020
(1 row)
```
Description: Completes online backup (restricted to system administrators or replication roles).
Return type: text
Note: pg_stop_backup deletes the label file created by pg_start_backup and creates a backup history file in the transaction log archive area. The history file includes the label given to pg_start_backup, the starting and ending transaction log locations for the backup, and the starting and ending times of the backup. The return value is the backup's ending transaction log location. After the ending position is calculated, the insert position of the current transaction log automatically advances to the next transaction log file, so that the just-ended transaction log file can be archived immediately to complete the backup.
Description: Switches to a new transaction log file (restricted to system administrators).
Return type: text
Note: pg_switch_xlog moves to the next transaction log file so that the current log file can be archived (if continuous archiving is used). The return value is the ending transaction log location + 1 within the just-completed transaction log file. If there has been no transaction log activity since the last switchover, pg_switch_xlog does nothing and returns the start location of the transaction log file currently in use.
Description: Converts the position string in a transaction log to a file name.
Return type: text
Note: pg_xlogfile_name extracts only the transaction log file name. If the given transaction log position is exactly at a transaction log file boundary, both of these functions return the name of the preceding transaction log file. This is usually the desired behavior for managing transaction log archiving, since the preceding file is the last one that currently needs to be archived.
Description: Converts the position string in a transaction log to a file name and returns the byte offset in the file.
Return type: text, integer
Note: pg_xlogfile_name_offset can extract the transaction log file name and byte offset from the returned results of the preceding functions. For example:
```
SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
NOTICE: pg_stop_backup complete, all required WAL segments have been archived
        file_name         | file_offset
--------------------------+-------------
 000000010000000000000003 |         272
(1 row)
```
Description: pg_xlog_location_diff calculates the difference in bytes between two transaction log locations.
Return type: numeric
Description: Queries the LSN location parsed by CBM.
Return type: text
Description: Combines CBM files within the specified LSN range into one file and returns the name of the combined file.
Return type: text
Description: Combines CBM files within the specified LSN range into a table and returns the records of this table.
Return type: record
Note: The table columns include the start LSN, end LSN, tablespace OID, database OID, table relfilenode, table fork number, whether the table is deleted, whether the table is created, whether the table is truncated, the number of pages in the truncated table, the number of modified pages, and the list of numbers of the modified pages.
Description: Deletes the CBM files that are no longer used and returns the first LSN after the deletion. If slotName is empty, targetLSNArg is used as the recycling point. During backup and DR, a slot name must be specified because tasks run in parallel: record each task's targetLSNArg value in its slot, then traverse all backup slots and use the smallest LSN as the recycling point.
Return type: text
Description: Forcibly executes the CBM trace to the specified Xlog position and returns the Xlog position of the actual trace end point.
Return type: text
Description: Enables DDL delay and returns the Xlog position of the enabling point.
Return type: text
Description: Disables DDL delay and returns the Xlog range within which DDL delay takes effect.
Return type: record
Description: Enables Xlog recycle delay.
Return type: void
Description: Disables Xlog recycle delay.
Return type: void
Description: Displays the catchup information of the currently active primary/standby instance sending thread on all DNs.
Return type: record
The following information is returned:

| Name | Type | Description |
| --- | --- | --- |
| node_name | text | Node name |
| lwpid | integer | Current sender lwpid |
| local_role | text | Local role |
| peer_role | text | Peer role |
| state | text | Current sender's replication status |
| sender | text | Current sender type |
| catchup_start | timestamp with time zone | Startup time of a catchup task |
| catchup_end | timestamp with time zone | End time of a catchup task |
| catchup_type | text | Catchup task type, full or incremental |
| catchup_bcm_filename | text | BCM file executed by the current catchup task |
| catchup_bcm_finished | integer | Number of BCM files completed by a catchup task |
| catchup_bcm_total | integer | Total number of BCM files to be operated on by a catchup task |
| catchup_percent | text | Completion percentage of a catchup task |
| catchup_remaining_time | text | Estimated remaining time of a catchup task |
Restoration control functions provide information about the status of standby nodes. These functions may be executed both during restoration and in normal running.
Description: Returns true if restoration is still in progress.
Return type: bool
Description: Gets the last transaction log location received and synchronized to disk by streaming replication. While streaming replication is in progress, this value increases monotonically. If restoration has completed, the value remains static at the last WAL record received and synchronized to disk during restoration. If streaming replication is disabled or has not yet started, the function returns NULL.
Return type: text
Description: Gets the last transaction log location replayed during restoration. If restoration is still in progress, this value increases monotonically. If restoration has completed, the value remains static at the last WAL record received during that restoration. When the server has been started normally without restoration, the function returns NULL.
Return type: text
Description: Gets the timestamp of the last transaction replayed during restoration, that is, the time at which the commit or abort WAL record for that transaction was generated on the primary node. If no transactions have been replayed during restoration, this function returns NULL. Otherwise, if restoration is still in progress, the value increases monotonically. If restoration has completed, the value remains static at the last WAL record received during that restoration. If the server starts normally without restoration, the function returns NULL.
Return type: timestamp with time zone
Restoration control functions control restoration processes. These functions may be executed only during restoration.
Description: Returns true if restoration is paused.
Return type: bool
Description: Pauses restoration immediately.
Return type: void
Description: Restarts restoration if it was paused.
Return type: void
While restoration is paused, no further database changes are applied. In hot standby mode, all new queries will see the same consistent snapshot of the database, and no further query conflicts will be generated until restoration is resumed.
If streaming replication is disabled, the paused state may continue indefinitely without problem. While streaming replication is in progress, WAL records will continue to be received, which will eventually fill the available disk space; how quickly this happens depends on the duration of the pause, the rate of WAL generation, and the available disk space.
Description: Displays the progress of xlog redo on the current DN.
Return type: record
The following information is returned:

| Column | Type | Description |
| --- | --- | --- |
| replay_start | integer | Start LSN of xlog redo |
| replay_current | integer | LSN of the current replay of xlog redo |
| replay_end | integer | Maximum LSN that requires xlog redo |
| replay_percent | integer | Completion percentage of xlog redo |
Description: Displays the progress of data page file synchronization during failover on the current DN.
Return type: record
The following information is returned:

| Column | Type | Description |
| --- | --- | --- |
| start_index | integer | Start LSN of data page file synchronization |
| current_index | integer | Current LSN of data page file synchronization |
| total_index | integer | Maximum LSN of data page file synchronization |
| sync_percent | integer | Completion percentage of data page files |
+
Description: Stops a backup started by the internal backup tool GaussRoach and returns the position where the current log is inserted. This function is similar to pg_stop_backup, but is more lightweight.
+Return type: text
+Description: Enables DDL delay and returns the log position of the enabling point. This function is similar to pg_enable_delay_ddl_recycle, but is more lightweight. In addition, this function allows you to enable DDL delay for multiple backups.
+Return type: text
+Description: Disables DDL delay, returns the logs for which DDL delay takes effect, and deletes the physical files of the column-store tables that have been deleted by the user. This function is similar to pg_enable_delay_ddl_recycle, but is more lightweight. In addition, this function allows you to disable DDL delay for multiple backups.
+Return type: record
+Description: Switches the currently used log segment file and returns the position of the segment log. If request_ckpt is true, a full checkpoint is triggered.
+Return type: text
+Description: Resumes the delay xlog flag from a specified backup and returns start_backup_flag boolean, to_delay boolean, ddl_delay_recycle_ptr text, and rewind_time text.
+Return type: record
+Snapshot synchronization functions save the current snapshot and return its identifier.
+pg_export_snapshot()
+Description: Saves the current snapshot and returns its identifier.
+Return type: text
+Note: pg_export_snapshot saves the current snapshot and returns a text string identifying the snapshot. This string must be passed to clients that want to import the snapshot. A snapshot can be imported by running SET TRANSACTION SNAPSHOT snapshot_id, which is possible only when the transaction is at the REPEATABLE READ isolation level. The output of the function cannot be used directly as the input of SET TRANSACTION SNAPSHOT.
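+A minimal sketch of the export/import flow (the snapshot identifier shown is illustrative only):
+-- Session 1: export a snapshot inside a transaction.
+begin;
+select pg_export_snapshot();
+ pg_export_snapshot
+---------------------
+ 00000003-0000001B-1
+(1 row)
+-- Session 2: import it; the importing transaction must be at the REPEATABLE READ isolation level.
+begin isolation level repeatable read;
+set transaction snapshot '00000003-0000001B-1';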
+Database object size functions calculate the actual disk space used by database objects.
+Description: Specifies the number of bytes used to store a particular value (possibly compressed).
+Return type: int
+Note: pg_column_size displays the space for storing an independent data value.
+SELECT pg_column_size(1);
+ pg_column_size
+----------------
+              4
+(1 row)
+Description: Specifies the disk space used by the database with the specified OID.
+Return type: bigint
+Description: Specifies the disk space used by the database with the specified name.
+Return type: bigint
+Note: pg_database_size receives the OID or name of a database and returns the disk space used by the corresponding object.
+For example:
+SELECT pg_database_size('gaussdb');
+ pg_database_size
+------------------
+         51590112
+(1 row)
+Description: Specifies the disk space used by the table or index with a specified OID.
+Return type: bigint
+Description: Estimates the total size of non-compressed data in the current database.
+Return type: bigint
+Note: (1) ANALYZE must be performed before this function is called. (2) The total size of non-compressed data is calculated by estimating the compression ratio of column-store tables.
+For example:
+analyze;
+ANALYZE
+select get_db_source_datasize();
+ get_db_source_datasize
+------------------------
+            35384925667
+(1 row)
+Description: Specifies the disk space used by the table or index with a specified name. The table name can be schema-qualified.
+Return type: bigint
+Description: Specifies the disk space used by the specified fork ('main', 'fsm', or 'vm') of a certain table or index.
+Return type: bigint
+Description: Is an abbreviation of pg_relation_size(..., 'main').
+Return type: bigint
+Note: pg_relation_size receives the OID or name of a table, index, or compressed table, and returns the size.
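+For example (the table name customer_t1 and the output value are illustrative only):
+select pg_relation_size('customer_t1');
+ pg_relation_size
+------------------
+            90112
+(1 row)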
+Description: Specifies the disk space used by the partition with a specified OID. The first oid is the OID of the table and the second oid is the OID of the partition.
+Return type: bigint
+Description: Specifies the disk space used by the partition with a specified name. The first text is the table name and the second text is the partition name.
+Return type: bigint
+Description: Specifies the disk space used by the index of the partition with a specified OID. The first oid is the OID of the table and the second oid is the OID of the partition.
+Return type: bigint
+Description: Specifies the disk space used by the index of the partition with a specified name. The first text is the table name and the second text is the partition name.
+Return type: bigint
+Description: Specifies the total disk space used by the index appended to the specified table.
+Return type: bigint
+Description: Converts a size in bytes into a human-readable format.
+Return type: text
+Description: Converts a size in bytes, expressed as a numeric value, into a human-readable format.
+Return type: text
+Note: pg_size_pretty formats the results of other functions into a human-readable format, using KB, MB, GB, or TB as appropriate.
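+For example, combining it with pg_database_size (the output value is illustrative only):
+select pg_size_pretty(pg_database_size('gaussdb'));
+ pg_size_pretty
+----------------
+ 49 MB
+(1 row)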
+Description: Specifies the disk space used by the specified table, excluding indexes (but including TOAST, free space mapping, and visibility mapping).
+Return type: bigint
+Description: Specifies the disk space used by the table with a specified OID, including the index and the compressed data.
+Return type: bigint
+Description: Specifies the total disk space used by the specified table, including all indexes and TOAST data.
+Return type: bigint
+Description: Specifies the disk space used by the table with a specified name, including the index and the compressed data. The table name can be schema-qualified.
+Return type: bigint
+Note: pg_total_relation_size receives the OID or name of a table or a compressed table, and returns the sizes of the data, related indexes, and the compressed table in bytes.
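+For example (the table name customer_t1 and the output value are illustrative only):
+select pg_total_relation_size('customer_t1');
+ pg_total_relation_size
+------------------------
+                 163840
+(1 row)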
+Description: Specifies the filenode number of the specified relation.
+Return type: oid
+Description: pg_relation_filenode receives the OID or name of a table, index, sequence, or compressed table, and returns the filenode number allocated to it. The filenode is the basic component of the file names used by the relation. For most tables, the result is the same as pg_class.relfilenode, but for certain system catalogs relfilenode is 0 and this function must be used to obtain the correct value. If a relation without storage, such as a view, is passed in, this function returns NULL.
+Description: Specifies the file path of the specified relation.
+Return type: text
+Description: pg_relation_filepath is similar to pg_relation_filenode, except that pg_relation_filepath returns the whole file path name for the relationship (relative to the data directory PGDATA of the database cluster).
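+For example (the table name customer_t1 and the output values are illustrative only):
+select pg_relation_filenode('customer_t1'), pg_relation_filepath('customer_t1');
+ pg_relation_filenode | pg_relation_filepath
+----------------------+----------------------
+                24581 | base/15253/24581
+(1 row)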
+Advisory lock functions manage advisory locks. These functions are only for internal use currently.
+Description: Obtains an exclusive session-level advisory lock.
+Return type: void
+Note: pg_advisory_lock locks resources defined by an application. The resources can be identified using a single 64-bit key value or two non-overlapping 32-bit key values. If another session already holds a lock on the resources, the function blocks until the resources become available. The lock is exclusive. Multiple lock requests are stacked, so if the same resource is locked three times, it must be unlocked three times before it is released to another session.
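+A minimal sketch of the lock/unlock pairing (the key value 10000 is arbitrary):
+select pg_advisory_lock(10000);   -- blocks until the lock is available
+-- ... perform the application-defined critical operation ...
+select pg_advisory_unlock(10000); -- returns t on success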
+Description: Obtains an exclusive session-level advisory lock.
+Return type: void
+Description: Obtains a shared session-level advisory lock.
+Return type: void
+Description: Obtains a shared session-level advisory lock.
+Return type: void
+Note: pg_advisory_lock_shared works in the same way as pg_advisory_lock, except the lock can be shared with other sessions requesting shared locks. Only would-be exclusive lockers are locked out.
+Description: Releases an exclusive session-level advisory lock.
+Return type: boolean
+Description: Releases an exclusive session-level advisory lock.
+Return type: boolean
+Note: pg_advisory_unlock releases the obtained exclusive session-level advisory lock. If the release is successful, the function returns true. If the lock was not held, it returns false, and the server also reports an SQL warning.
+Description: Releases a shared session-level advisory lock.
+Return type: boolean
+Description: Releases a shared session-level advisory lock.
+Return type: boolean
+Note: pg_advisory_unlock_shared works in the same way as pg_advisory_unlock, except it releases a shared session-level advisory lock.
+Description: Releases all advisory locks owned by the current session.
+Return type: void
+Note: pg_advisory_unlock_all releases all advisory locks owned by the current session. The function is implicitly invoked when the session ends even if the client is abnormally disconnected.
+Description: Obtains an exclusive transaction-level advisory lock.
+Return type: void
+Description: Obtains an exclusive transaction-level advisory lock.
+Return type: void
+Note: pg_advisory_xact_lock works in the same way as pg_advisory_lock, except the lock is automatically released at the end of the current transaction and cannot be released explicitly.
+Description: Obtains a shared transaction-level advisory lock.
+Return type: void
+Description: Obtains a shared transaction-level advisory lock.
+Return type: void
+Note: pg_advisory_xact_lock_shared works in the same way as pg_advisory_lock_shared, except the lock is automatically released at the end of the current transaction and cannot be released explicitly.
+Description: Obtains an exclusive session-level advisory lock if available.
+Return type: boolean
+Note: pg_try_advisory_lock is similar to pg_advisory_lock, except that it does not wait for the resource to become available: it either obtains the lock immediately and returns true, or returns false to indicate that the lock cannot be obtained currently.
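+For example (the key value is arbitrary; the result depends on whether another session holds the lock):
+select pg_try_advisory_lock(10000);
+ pg_try_advisory_lock
+----------------------
+ t
+(1 row)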
+Description: Obtains an exclusive session-level advisory lock if available.
+Return type: boolean
+Description: Obtains a shared session-level advisory lock if available.
+Return type: boolean
+Description: Obtains a shared session-level advisory lock if available.
+Return type: boolean
+Note: pg_try_advisory_lock_shared is similar to pg_try_advisory_lock, except pg_try_advisory_lock_shared attempts to obtain a shared lock instead of an exclusive lock.
+Description: Obtains an exclusive transaction-level advisory lock if available.
+Return type: boolean
+Description: Obtains an exclusive transaction-level advisory lock if available.
+Return type: boolean
+Note: pg_try_advisory_xact_lock works in the same way as pg_try_advisory_lock, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly.
+Description: Obtains a shared transaction-level advisory lock if available.
+Return type: boolean
+Description: Obtains a shared transaction-level advisory lock if available.
+Return type: boolean
+Note: pg_try_advisory_xact_lock_shared works in the same way as pg_try_advisory_lock_shared, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly.
+Description: Obtains all residual file records of the current node. This function is an instance-level function and is irrelevant to the current database. It can run on any instance.
+Parameter type: none
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+isverified | bool | Verified or not
+isdeleted | bool | Deleted or not
+dbname | text | Database name
+residualfile | text | Data file path
+filepath | text | Residual file path
+notes | text | Notes
+Example:
+select * from pg_get_residualfiles();
+ isverified | isdeleted | dbname |   residualfile    |         filepath          | notes
+------------+-----------+--------+-------------------+---------------------------+-------
+ f          | f         | db2    | base/49155/114691 | pgrf_20200908160211441546 |
+ f          | f         | db2    | base/49155/114694 | pgrf_20200908160211441546 |
+ f          | f         | db2    | base/49155/114696 | pgrf_20200908160211441546 |
+(3 rows)
+Description: Unified CN query function of pg_get_residualfiles(). This function is a cluster-level function and is irrelevant to the current database. It runs on CNs.
+Parameter type: none
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+nodename | text | Node name
+isverified | bool | Verified or not
+isdeleted | bool | Deleted or not
+dbname | text | Database name
+residualfile | text | Data file path
+filepath | text | Residual file path
+notes | text | Notes
+Example:
+select * from pgxc_get_residualfiles();
+   nodename   | isverified | isdeleted | dbname  |   residualfile    |         filepath          | notes
+--------------+------------+-----------+---------+-------------------+---------------------------+-------
+ cn_5001      | f          | f         | gaussdb | base/15092/32803  | pgrf_20200910170129360401 |
+ dn_6001_6002 | f          | f         | db2     | base/49155/114691 | pgrf_20200908160211441546 |
+ dn_6001_6002 | f          | f         | db2     | base/49155/114694 | pgrf_20200908160211441546 |
+ dn_6001_6002 | f          | f         | db2     | base/49155/114696 | pgrf_20200908160211441546 |
+(4 rows)
+Description: Verifies whether the files recorded in the specified residual-file list are residual files. This function is an instance-level function and is related to the current database. It can run on any instance.
+Parameter type: text
+Return type: bool
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+isverified | bool | Verification completed or not
+Example:
+select * from pg_verify_residualfiles('pgrf_20200908160211441546');
+ isverified
+------------
+ t
+(1 row)
+This function only verifies whether the recorded file is a residual file in the current database. If the recorded file is not in the current database, the verification is not applicable.
+Description: Verifies whether recorded files on all residual file lists of the current instance are residual files. This function is an instance-level function and is related to the current database. It can run on any instance.
+Parameter type: none
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+result | bool | Verification completed or not
+filepath | text | Residual file path
+notes | text | Notes
+Example:
+select * from pg_verify_residualfiles();
+ result |         filepath          | notes
+--------+---------------------------+-------
+ t      | pgrf_20200908160211441546 |
+(1 row)
+This function only verifies whether the recorded file is a residual file in the current database. If the recorded file is not in the current database, the verification is not applicable.
+Description: Unified CN query function of pg_verify_residualfiles(). This function is a cluster-level function and is related to the current database. It runs on CNs.
+Parameter type: none
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+nodename | text | Node name
+result | bool | Verification completed or not
+filepath | text | Residual file path
+notes | text | Notes
+Example:
+select * from pgxc_verify_residualfiles();
+   nodename   | result |         filepath          | notes
+--------------+--------+---------------------------+-------
+ cn_5001      | t      | pgrf_20200910170129360401 |
+ dn_6001_6002 | t      | pgrf_20200908160211441546 |
+(2 rows)
+This function only verifies whether the recorded file is a residual file in the current database. If the recorded file is not in the current database, the verification is not applicable.
+Description: Queries whether a specified relfilenode is a residual file in the current database. This function is an instance-level function and is related to the current database. It can run on any instance.
+Parameter type: text
+Return type: bool
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+result | bool | Residual file or not
+Example:
+select * from pg_is_residualfiles('base/49155/114691');
+ result
+--------
+ t
+(1 row)
+This function only verifies whether the recorded file is a residual file in the current database. If the recorded file is not in the current database, it is verified as a residual file.
+For example, the file base/15092/14790 is not regarded as a residual file in a gaussdb database, but it is regarded as a residual file in other databases.
+select * from pg_is_residualfiles('base/15092/14790');
+ result
+--------
+ f
+(1 row)
+
+\c db2
+db2=# select * from pg_is_residualfiles('base/15092/14790');
+ result
+--------
+ t
+(1 row)
+Description: Deletes files from a specified residual file list on the current instance. This function is an instance-level function and is irrelevant to the current database. It can run on any instance.
+Parameter type: text
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+result | bool | Deletion completed or not
+Example:
+select * from pg_rm_residualfiles('pgrf_20200908160211441599');
+ result
+--------
+ t
+(1 row)
+1. Residual files can be deleted only after verification using the pg_verify_residualfiles() function.
+2. All verified files, regardless of which database they are in, will be deleted.
+3. If all files recorded in the specified file have been deleted, the specified file will be removed and backed up in the $PGDATA/pg_residualfile/backup directory.
+Description: Deletes all files recorded on all residual file lists on the current instance. This function is an instance-level function and is irrelevant to the current database. It can run on any instance.
+Parameter type: none
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+result | bool | Deleted or not
+filepath | text | Residual file path
+notes | text | Notes
+Example:
+select * from pg_rm_residualfiles();
+ result |         filepath          | notes
+--------+---------------------------+-------
+ t      | pgrf_20200908160211441546 |
+(1 row)
+Description: Unified CN query function of pg_rm_residualfiles(). This function is a cluster-level function and is irrelevant to the current database. It runs on CNs.
+Parameter type: none
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+nodename | text | Node name
+result | bool | Deletion completed or not
+filepath | text | Residual file path
+notes | text | Notes
+Example:
+select * from pgxc_rm_residualfiles();
+   nodename   | result |         filepath          | notes
+--------------+--------+---------------------------+-------
+ cn_5001      | t      | pgrf_20200910170129360401 |
+ dn_6001_6002 | t      | pgrf_20200908160211441546 |
+(2 rows)
+Procedure:
+The pgxc residual file management function only operates on the CN and the current primary DN, and does not verify or clear residual files on the standby DN. Therefore, after the primary DN is cleared, you need to clear residual files on the standby DN or build the standby DN in a timely manner. This prevents residual files on the standby DN from being copied back to the primary DN due to incremental build after a primary/standby switchover.
+Example:
+The following example uses two user-created databases, db1 and db2.
+db1=# select * from pgxc_get_residualfiles() order by 4, 6;  -- order by is optional
+In the current cluster:
+db1=# select * from pgxc_verify_residualfiles();
Verification functions are at the database level. Therefore, when a verification function is called in the db1 database, it only verifies residual files in db1.
+You can call the get function again to check whether the verification is complete.
+db1=# select * from pgxc_get_residualfiles() order by 4, 6;
+As the query output shows, the residual files in the db1 database have been verified, and the residual files in the db2 database are not verified.
+db1=# select * from pgxc_rm_residualfiles();
The result shows that the residual files in the db1 database are deleted (isdeleted is marked as t) and the residual files in the db2 database are not deleted.
+In addition, nine query results are displayed. Compared with the previous query results, a record for the residual file ending with 9438 is missing. This is because the record file that records the residual file ending with 9438 contains only one record, which is deleted in step 3. If all residual files in a record file are deleted, the record file is also deleted. Deleted files are backed up in the pg_residualfiles/backup directory.
+Query the verification result:
+All residual files recorded in the record file whose name ends with 8342 have been deleted, so the record file is deleted and backed up in the backup directory. As a result, no records are found.
+Replication functions synchronize logs and data between instances. They are statistics or operation methods provided by the system to implement HA.
+Replication functions except statistics queries are internal functions. You are not advised to use them directly.
+Description: Creates a logical replication slot.
+Parameter:
+slot_name: Indicates the name of the streaming replication slot.
+Value range: a string, supporting only letters, digits, and the following special characters: _?-.
+plugin: Indicates the name of the plugin.
+Value range: a string, supporting only mppdb_decoding
+Return type: name, text
+Note: The first return value is the slot name, and the second is the start LSN position for decoding in the logical replication slot.
+Description: Creates a physical replication slot.
+Parameter:
+slot_name: Indicates the name of the streaming replication slot.
+Value range: a string, supporting only letters, digits, and the following special characters: _?.-
+isDummyStandby: Indicates whether the replication slot is the secondary one. (The parameter name isDummyStandby is assumed here.)
+Value range: a boolean value, true or false
+Return type: name, text
+Note: The first return value is the slot name, and the second is the start LSN position for decoding in the physical replication slot.
+Description: Displays information about all replication slots on the current DN.
+Return type: record
+The following information is returned:
+Field | Type | Description
+---|---|---
+slot_name | text | Replication slot name
+plugin | name | Name of the output plug-in of the logical replication slot
+slot_type | text | Replication slot type
+datoid | oid | Replication slot's database OID
+active | boolean | Whether the replication slot is active
+xmin | xid | Transaction ID of the replication slot
+catalog_xmin | text | ID of the earliest-decoded transaction corresponding to the logical replication slot
+restart_lsn | text | Xlog file information on the replication slot
+dummy_standby | boolean | Whether the replication slot is the secondary one
+Description: Deletes a streaming replication slot.
+Parameter:
+slot_name: Indicates the name of the streaming replication slot.
+Value range: a string, supporting only letters, digits, and the following special characters: _?-.
+Return type: void
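+A minimal lifecycle sketch, assuming the standard names pg_create_logical_replication_slot, pg_get_replication_slots, and pg_drop_replication_slot for the creation, query, and deletion functions described in this section (output is illustrative only):
+select * from pg_create_logical_replication_slot('slot1', 'mppdb_decoding');
+ slotname | xlog_position
+----------+---------------
+ slot1    | 0/31000140
+(1 row)
+select slot_name, plugin, active from pg_get_replication_slots();
+select pg_drop_replication_slot('slot1');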
+Description: Performs decoding without advancing the streaming replication slot position. (The decoding result will be returned again on future calls.)
+Parameter:
+slot_name: Indicates the name of the streaming replication slot.
+Value range: a string, supporting only letters, digits, and the following special characters: _?-.
+LSN: Indicates a target LSN. Decoding is performed only on records whose LSN is less than or equal to this value.
+Value range: a string, in the format of xlogid/xrecoff, for example, '1/2AAFC60' (If this parameter is set to NULL, the target LSN indicating the end position of decoding is not specified.)
+upto_nchanges: Indicates the number of decoded records (including the begin and commit timestamps). Assume that there are three transactions, which involve 3, 5, and 7 records, respectively. If upto_nchanges is 4, all 8 records of the first two transactions will be decoded: decoding stops at the transaction boundary where the number of decoded records first exceeds 4.
+Value range: a non-negative integer
+Decoding ends when either the target LSN or the upto_nchanges value is reached.
+include-xids: Indicates whether the decoded data column contains XID information.
+Valid values: 0 and 1. The default value is 1.
+skip-empty-xacts: Indicates whether to ignore empty transaction information during decoding.
+Valid values: 0 and 1. The default value is 0.
+include-timestamp: Indicates whether decoding information contains the commit timestamp.
+Valid values: 0 and 1. The default value is 0.
+Return type: text, uint, text
+Note: The function returns the decoding result. Each decoding result contains three columns, corresponding to the above return types and indicating the LSN position, XID, and decoded content, respectively.
+Description: Performs decoding and goes to the next streaming replication slot.
+Parameter: This function has the same parameters as pg_logical_slot_peek_changes. For details, see pg_logical_slot_peek_changes.
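+A usage sketch of the peek/get pair (the slot name is illustrative; pg_logical_slot_get_changes is the assumed name of the advancing variant described above):
+-- Peek: decode up to 10 records without advancing the slot.
+select * from pg_logical_slot_peek_changes('slot1', NULL, 10);
+-- Get: decode the same records and advance the slot past them.
+select * from pg_logical_slot_get_changes('slot1', NULL, 10);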
+Description: Directly goes to the streaming replication slot for a specified LSN, without outputting any decoding result.
+Parameter:
+slot_name: Indicates the name of the streaming replication slot.
+Value range: a string, supporting only letters, digits, and the following special characters: _?-.
+LSN: Indicates a target LSN. Next decoding will be performed only in transactions whose commit position is greater than this value. If an input LSN is smaller than the position recorded in the current streaming replication slot, the function returns directly. If the input LSN is greater than the LSN of the current physical log, the latter LSN will be used for decoding.
+Value range: a string, in the format of xlogid/xrecoff
+Return type: name, text
+Note: A return result contains the slot name and LSN that is actually used for decoding.
+Description: Displays statistics about data page replication sender threads on the current DN.
+Return type: record
+The following information is returned:
+Field | Type | Description
+---|---|---
+pid | bigint | Thread PID
+sender_pid | integer | Current sender PID
+local_role | text | Local role
+peer_role | text | Peer role
+state | text | Current sender's replication status
+catchup_start | timestamp with time zone | Startup time of a catchup task
+catchup_end | timestamp with time zone | End time of a catchup task
+queue_size | text | Data queue size
+queue_lower_tail | text | Position of data queue tail 1
+queue_header | text | Position of data queue header
+queue_upper_tail | text | Position of data queue tail 2
+send_position | text | Sending position of the sender
+receive_position | text | Receiving position of the receiver
+catchup_type | text | Catchup task type, full or incremental
+catchup_bcm_filename | text | BCM file executed by the current catchup task
+catchup_bcm_finished | integer | Number of BCM files completed by a catchup task
+catchup_bcm_total | integer | Total number of BCM files to be operated by a catchup task
+catchup_percent | text | Completion percentage of a catchup task
+catchup_remaining_time | text | Estimated remaining time of a catchup task
+Description: Displays statistics about WAL replication sender threads on the current DN.
+Return type: record
+The following information is returned:
+Field | Type | Description
+---|---|---
+pid | bigint | Thread PID
+sender_pid | integer | Current sender PID
+local_role | text | Local role
+peer_role | text | Peer role
+peer_state | text | Peer status
+state | text | Current sender's replication status
+catchup_start | timestamp with time zone | Startup time of a catchup task
+catchup_end | timestamp with time zone | End time of a catchup task
+sender_sent_location | text | Location where the sender sends LSNs
+sender_write_location | text | Location where the sender writes LSNs
+sender_flush_location | text | Location where the sender flushes LSNs
+sender_replay_location | text | Location where the sender replays LSNs
+receiver_received_location | text | Location where the receiver receives LSNs
+receiver_write_location | text | Location where the receiver writes LSNs
+receiver_flush_location | text | Location where the receiver flushes LSNs
+receiver_replay_location | text | Location where the receiver replays LSNs
+sync_percent | text | Synchronization percentage
+sync_state | text | Synchronization state (asynchronous duplication, synchronous duplication, or potential synchronization)
+sync_priority | integer | Priority of synchronous duplication (0 indicates asynchronization)
+sync_most_available | text | Whether to block the active node when synchronization on the standby node fails
+channel | text | WALSender channel information
+Description: Displays statistics about WAL replication receiver threads on the current DN.
+Return type: record
+The following information is returned:
+Field | Type | Description
+---|---|---
+receiver_pid | integer | Current receiver PID
+local_role | text | Local role
+peer_role | text | Peer role
+peer_state | text | Peer status
+state | text | Current receiver's replication status
+sender_sent_location | text | Location where the sender sends LSNs
+sender_write_location | text | Location where the sender writes LSNs
+sender_flush_location | text | Location where the sender flushes LSNs
+sender_replay_location | text | Location where the sender replays LSNs
+receiver_received_location | text | Location where the receiver receives LSNs
+receiver_write_location | text | Location where the receiver writes LSNs
+receiver_flush_location | text | Location where the receiver flushes LSNs
+receiver_replay_location | text | Location where the receiver replays LSNs
+sync_percent | text | Synchronization percentage
+channel | text | WALReceiver channel information
+Description: Displays information about all replication statistics on the current DN.
+Return type: record
+The following information is returned:
+Field | Type | Description
+---|---|---
+local_role | text | Local role
+static_connections | integer | Connection statistics
+db_state | text | Database status
+detail_information | text | Detail information
+Description: Displays the Xlog space usage on the current DN.
+Return type: record
+The following information is returned:
+Column | Type | Description
+---|---|---
+xlog_files | bigint | Number of all identified xlog files in the pg_xlog directory, excluding the backup and archive_status subdirectories
+xlog_size | bigint | Total size (MB) of all identified xlog files in the pg_xlog directory, excluding the backup and archive_status subdirectories
+other_size | bigint | Total size (MB) of files in the backup and archive_status subdirectories of the pg_xlog directory
+Description: Displays the Xlog space usage on all active DNs.
+Return type: record
+The following information is returned:
+Column | Type | Description
+---|---|---
+node_name | name | Node name
+xlog_files | bigint | Number of all identified xlog files in the pg_xlog directory, excluding the backup and archive_status subdirectories
+xlog_size | bigint | Total size (MB) of all identified xlog files in the pg_xlog directory, excluding the backup and archive_status subdirectories
+other_size | bigint | Total size (MB) of files in the backup and archive_status subdirectories of the pg_xlog directory
+Description: Checks whether the connection data buffered in the pool is consistent with pgxc_node.
+Return type: boolean
+Description: Updates the connection information buffered in the pool.
+Return type: boolean
+Description: Locks the cluster before backup. Backup is performed to restore data on new nodes.
+Return type: boolean
+pgxc_lock_for_backup locks a cluster before gs_dump or gs_dumpall is used to back up the cluster. After a cluster is locked, operations changing the system structure are not allowed. This function does not affect DML statements.
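+For example, before backing up the cluster with gs_dump or gs_dumpall:
+select pgxc_lock_for_backup();
+ pgxc_lock_for_backup
+----------------------
+ t
+(1 row)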
+Description: Clears invalid backend threads on a CN. (These backend threads hold invalid pooler connections to standby DNs.)
+Return type: record
+Description: Queries the memory usage of all nodes.
+Return type: record
+Description: Queries the percentage of table data among all nodes.
+Parameter: the name of the table to be queried, of type text.
+Return type: record
+Description: Queries the proportion of column data distributed on each node based on the hash distribution rule. The results are sorted based on the data volumes of the nodes.
+Parameters: table_name indicates the table name, column_name indicates the column name, and row_num indicates the number of randomly sampled records used for the statistics. The default value 0 indicates that all data in the column is used.
+Return type: record
+Example:
+Data is distributed by hash based on column a in the tx table: seven records are on DN 1, two records on DN 2, and one record on DN 0.
+1 +2 +3 +4 +5 +6 +7 | select table_skewness('tx','a'); + table_skewness +---------------- + (1,7,70.000%) + (2,2,20.000%) + (0,1,10.000%) +(3 rows) + |
Description: Calculates the bucket distribution index for the records concatenated using the columns in a specified table.
+Parameters: data_row indicates the record concatenated using columns in the specified table. locatorType indicates the distribution rule. You are advised to set locatorType to H, indicating hash distribution.
+Return type: smallint
+Example:
+Calculates the bucket distribution index based on the hash distribution rule for the records concatenated using the columns in the tx table.
+select a, table_data_skewness(row(a), 'H') from tx;
+ a | table_data_skewness
+---+---------------------
+ 3 | 0
+ 6 | 2
+ 7 | 2
+ 4 | 1
+ 5 | 1
+(5 rows)
+Description: Queries the storage space occupied by a specified table on each node.
+Parameter: the schema name and table name of the table to be queried, both of type text.
+Return type: record
+Description: Queries the storage space occupied by a specified table on each node.
+Parameter: the name or OID of the table to be queried, of type regclass. The table name can be schema-qualified.
+Return type: record
+Description: Queries the storage distribution of all tables in the current database.
+Return type: record
+Description: Obtains information about insertion, update, and deletion operations on tables and the dirty page rate of tables. This function optimizes the performance of the PGXC_GET_STAT_ALL_TABLES view. It can quickly filter out tables whose dirty page rate is greater than dirty_percent and number of dead tuples is greater than n_tuples.
+Return type: SETOF record
+The following table describes return columns.
+Name | Type | Description
+---|---|---
+relid | oid | Table OID
+relname | name | Table name
+schemaname | name | Schema name of the table
+n_tup_ins | bigint | Number of inserted tuples
+n_tup_upd | bigint | Number of updated tuples
+n_tup_del | bigint | Number of deleted tuples
+n_live_tup | bigint | Number of live tuples
+n_dead_tup | bigint | Number of dead tuples
+dirty_page_rate | numeric(5,2) | Dirty page rate (%) of a table
+Description: Obtains information about insertion, update, and deletion operations on tables and the dirty page rate of tables. This function can quickly filter out tables whose dirty page rate is greater than page_dirty_rate, number of dead tuples is greater than n_tuples, and schema name is schema.
+Return type: SETOF record
+The return columns of the function are the same as those of the pgxc_get_stat_dirty_tables(int dirty_percent, int n_tuples) function.
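+For example, a sketch of filtering tables with a dirty page rate above 30% and more than 10000 dead tuples (the threshold values are illustrative only):
+select relname, schemaname, n_dead_tup, dirty_page_rate
+from pgxc_get_stat_dirty_tables(30, 10000)
+order by dirty_page_rate desc;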
+Description: Obtains the seed value of the previous query statement (internal use).
+Return type: int
+Description: Obtains the environment variable information about the current node.
+Return type: record
+Description: Provides information about the status of all threads under the current node.
+Return type: record
+Description: Provides information about the status of threads under all normal nodes in a cluster.
+Return type: record
+Description: Provides statistics on the number of SELECT/UPDATE/INSERT/DELETE/MERGE INTO statements executed by all users on the current node, response time, and the number of DDL, DML, and DCL statements.
+Return type: record
+Description: Provides statistics on the number of SELECT/UPDATE/INSERT/DELETE/MERGE INTO statements executed by all users on all nodes of the current cluster, response time, and the number of DDL, DML, and DCL statements.
+Return type: record
+Description: Provides statistics on the number of SELECT/UPDATE/INSERT/DELETE statements executed in all workload Cgroup on all CNs of the current cluster and the number of DDL, DML, and DCL statements.
+Return type: record
+Description: Provides statistics on response time of SELECT/UPDATE/INSERT/DELETE statements executed in all workload Cgroup on all CNs of the current cluster.
+Return type: record
+Description: Provides information about Unique SQL statistics collected on the current node. If the node is a CN, the system returns the complete information about the Unique SQL statistics collected on the CN. That is, the system collects and summarizes the information about the Unique SQL statistics on other CNs and DNs. If the node is a DN, the Unique SQL statistics on the DN is returned. For details, see GS_INSTR_UNIQUE_SQL.
+Return type: record
+Description: Clears collected Unique SQL statistics. The input parameters are described as follows:
+Return type: bool
+Description: Provides complete information about Unique SQL statistics collected on all CNs in a cluster. This function can be executed only on CNs.
+Return type: record
+Description: Provides complete information about Unique SQL statements collected on all CNs in the cluster, except the CN on which the function is being executed. This function can be executed only on CNs.
+Return type: record
+Description: Provides the environment variable information about all nodes in a cluster.
+Return type: record
+Description: Exchanges meta information of two tables or partitions. (This is only used for the redistribution tool. An error message is displayed when the function is directly used by users).
+Return type: int
+Description: Creates the error table (public.pgxc_copy_error_log) required for creating the COPY FROM error tolerance mechanism.
+Return type: boolean
+The following table describes the columns of the error table public.pgxc_copy_error_log.
+Column | Type | Description
+---|---|---
+relname | varchar | Table name in the form of "schema name.table name"
+begintime | timestamp with time zone | Time when a data format error was reported
+filename | character varying | Name of the source data file where a data format error occurs
+rownum | bigint | Number of the row where a data format error occurs in a source data file
+rawrecord | text | Raw record of a data format error in the source data file. To prevent a field from being too long, the length of the field cannot exceed 1024 bytes.
+detail | text | Error details
+Description: Provides the current load information about computing Node Groups on the cloud.
+Return type: record
+Description: Queries the blocking and waiting status of the backend threads and auxiliary threads in the current instance. For details about the returned results, see the PG_THREAD_WAIT_STATUS view. The input parameters are described as follows:
+Return type: record
+Description: Queries the call hierarchy between threads generated by all SQL statements on each node in a cluster, as well as the block waiting status of each thread. For details about the returned results, see the PGXC_THREAD_WAIT_STATUS view. The type and meaning of the input parameter num_node_display are the same as those of the pg_stat_get_status function.
+Return type: record
+Description: Obtains the running status of the operating system on each node in a cluster. For details about the returned results, see "System Catalogs > System Views > PV_OS_RUN_INFO" in the Developer Guide.
+Return type: record
+Description: Obtains the waiting status and events of the current instance. For details about the returned results, see "System Catalogs > System Views > GS_WAIT_EVENTS" in the Developer Guide. If the GUC parameter enable_track_wait_event is off, this function returns 0.
+Return type: record
+Description: Queries statistics about waiting status and events on each node in a cluster. For details about the returned results, see "System Catalogs > System Views > PGXC_WAIT_EVENTS" in the Developer Guide. If the GUC parameter enable_track_wait_event is off, this function returns 0.
+Return type: record
+Description: Queries statistics about backend write processes on each node in a cluster. For details about the returned results, see "System Catalogs > System Views > PG_STAT_BGWRITER" in the Developer Guide.
+Return type: record
+Description: Queries information about the log synchronization status on each node in a cluster, such as the location where the logs are sent and received. For details about the returned results, see "System Catalogs > System Views > PG_STAT_REPLICATION" in the Developer Guide.
+Return type: record
+Description: Queries the replication status on each DN in a cluster. For details about the returned results, see "System Catalogs > System Views > PG_REPLICATION_SLOTS" in the Developer Guide.
+Return type: record
+Description: Queries information about runtime parameters on each node in a cluster. For details about the returned results, see "System Catalogs > System Views > PG_SETTINGS" in the Developer Guide.
+Return type: record
+Description: Queries the running time statistics of each node in a cluster and the time consumed in each execution phase. For details about the returned results, see "System Catalogs > System Views > PV_INSTANCE_TIME" in the Developer Guide.
+Return type: record
+Description: Queries Xlog redo statistics on the current node. For details about the returned results, see "System Catalogs > System Views > PV_REDO_STAT" in the Developer Guide.
+Return type: record
+Description: Queries the Xlog redo statistics of each node in a cluster. For details about the returned results, see "System Catalogs > System Views > PV_REDO_STAT" in the Developer Guide.
+Return type: record
+Description: Obtains the disk I/O statistics of the current instance. For details about the returned results, see "System Catalogs > System Views > GS_REL_IOSTAT" in the Developer Guide.
+Return type: record
+Description: Queries the disk I/O statistics on each node in a cluster. For details about the returned result, see "System Catalogs > System Views > GS_REL_IOSTAT" in the Developer Guide.
+Return type: record
+Description: Obtains the time when statistics of the current instance were reset.
+Return type: timestamptz
+Description: Queries the time when the statistics of each node in a cluster are reset. For details about the returned result, see "System Catalogs > System Views > GS_NODE_STAT_RESET_TIME" in the Developer Guide.
+Return type: record
+If any of the preceding events occurs, GaussDB(DWS) will record the time when the statistics are reset. You can query the time using the get_node_stat_reset_time function.
+This section describes the functions of the resource management module.
+Description: This function calibrates the permanent storage space of a user. The input parameter is the user OID. If the input parameter is set to 0, the permanent storage space of all users is calibrated.
+Return type: text
+Example:
+select gs_wlm_readjust_user_space(0);
+ gs_wlm_readjust_user_space
+----------------------------
+ Exec Success
+(1 row)
Description: This function calibrates the permanent storage space of a schema.
+Return type: text
+Example:
+select pgxc_wlm_readjust_schema_space();
+ pgxc_wlm_readjust_schema_space
+--------------------------------
+ Exec Success
+(1 row)
Description: Obtains the schema space of each instance in a specified logical cluster on the CN.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+schemaname | text | Schema name
+schemaid | oid | Schema OID
+databasename | text | Database name
+databaseid | oid | Database OID
+nodename | text | Instance name
+nodegroup | text | Name of the node group
+usedspace | bigint | Size of the used space
+permspace | bigint | Upper limit of the space
Example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 +37 +38 +39 +40 +41 +42 +43 +44 +45 | select * from pgxc_wlm_get_schema_space('group1'); + schemaname | schemaid | databasename | databaseid | nodename | nodegroup | usedspace | permspace +--------------------+----------+--------------+------------+--------------+--------------+-----------+----------- + pg_catalog | 11 | test1 | 16384 | datanode1 | installation | 9469952 | -1 + public | 2200 | gaussdb | 15253 | datanode1 | installation | 25280512 | -1 + pg_toast | 99 | test1 | 16384 | datanode1 | installation | 1859584 | -1 + cstore | 100 | test1 | 16384 | datanode1 | installation | 0 | -1 + data_redis | 18106 | gaussdb | 15253 | datanode1 | installation | 655360 | -1 + data_redis | 18116 | test1 | 16384 | datanode1 | installation | 0 | -1 + public | 2200 | test1 | 16384 | datanode1 | installation | 16384 | -1 + dbms_om | 3987 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + dbms_job | 3988 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + dbms_om | 3987 | test1 | 16384 | datanode1 | installation | 0 | -1 + dbms_job | 3988 | test1 | 16384 | datanode1 | installation | 0 | -1 + sys | 11693 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + sys | 11693 | test1 | 16384 | datanode1 | installation | 0 | -1 + utl_file | 14644 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + utl_raw | 14669 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + dbms_sql | 14674 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + dbms_output | 14662 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + dbms_random | 14666 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + dbms_lob | 14701 | gaussdb | 15253 | datanode1 | installation | 0 | -1 + information_schema | 14300 | gaussdb | 15253 | datanode1 | installation | 294912 | -1 + information_schema | 14300 | test1 | 16384 | datanode1 | installation | 294912 | -1 + utl_file | 14644 | test1 | 16384 | datanode1 | installation | 0 | -1 + dbms_output | 14662 | test1 | 16384 | datanode1 | installation | 0 | -1 + dbms_random | 14666 | test1 | 16384 | datanode1 | installation | 0 | -1 + utl_raw | 14669 | test1 | 16384 | datanode1 | installation | 0 | -1 + dbms_sql | 14674 | test1 | 16384 | datanode1 | installation | 0 | -1 + dbms_lob | 14701 | test1 | 16384 | datanode1 | installation | 0 | -1 + pg_catalog | 11 | gaussdb | 15253 | datanode1 | installation | 13049856 | -1 + redisuser | 16387 | gaussdb | 15253 | datanode1 | installation | 630784 | -1 + pg_toast | 99 | gaussdb | 15253 | datanode1 | installation | 3080192 | -1 + cstore | 100 | gaussdb | 15253 | datanode1 | installation | 2408448 | -1 + pg_catalog | 11 | test1 | 16384 | datanode2 | installation | 9469952 | -1 + public | 2200 | gaussdb | 15253 | datanode2 | installation | 25214976 | -1 + pg_toast | 99 | test1 | 16384 | datanode2 | installation | 1859584 | -1 + cstore | 100 | test1 | 16384 | datanode2 | installation | 0 | -1 + data_redis | 18106 | gaussdb | 15253 | datanode2 | installation | 655360 | -1 + data_redis | 18116 | test1 | 16384 | datanode2 | installation | 0 | -1 + public | 2200 | test1 | 16384 | datanode2 | installation | 16384 | -1 + dbms_om | 3987 | gaussdb | 15253 | datanode2 | installation | 0 | -1 + dbms_job | 3988 | gaussdb | 15253 | datanode2 | installation | 0 | -1 + dbms_om | 3987 | test1 | 16384 | datanode2 | installation | 0 | -1 + dbms_job | 3988 | test1 | 16384 | datanode2 | installation | 0 | -1 + |
Description: Obtains the schema space of a specified logical cluster on the CN.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+schemaname | text | Schema name
+databasename | text | Database name
+nodegroup | text | Name of the node group
+total_value | bigint | Total cluster space in the current schema
+avg_value | bigint | Average space of instances in the current schema
+skew_percent | integer | Skew ratio
+extend_info | text | Extended information, including the maximum space of a single instance, minimum space of a single instance, and name of the instance with the maximum or minimum space
Example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 | select * from pgxc_wlm_analyze_schema_space('group1'); + schemaname | databasename | nodegroup | total_value | avg_value | skew_percent | extend_info +--------------------+--------------+--------------+-------------+-----------+--------------+----------------------------------------------- + pg_catalog | test1 | installation | 56819712 | 9469952 | 0 | min:9469952 datanode1,max:9469952 datanode1 + public | gaussdb | installation | 150495232 | 25082538 | 0 | min:24903680 datanode6,max:25280512 datanode1 + pg_toast | test1 | installation | 11157504 | 1859584 | 0 | min:1859584 datanode1,max:1859584 datanode1 + cstore | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + data_redis | gaussdb | installation | 1966080 | 327680 | 50 | min:0 datanode4,max:655360 datanode1 + data_redis | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + public | test1 | installation | 98304 | 16384 | 0 | min:16384 datanode1,max:16384 datanode1 + dbms_om | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_job | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_om | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_job | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + sys | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + sys | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + utl_file | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + utl_raw | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_sql | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_output | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_random | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_lob | gaussdb | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + information_schema | gaussdb | installation | 1769472 | 294912 | 0 | min:294912 datanode1,max:294912 datanode1 + information_schema | test1 | installation | 1769472 | 294912 | 0 | min:294912 datanode1,max:294912 datanode1 + utl_file | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_output | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_random | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + utl_raw | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_sql | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + dbms_lob | test1 | installation | 0 | 0 | 0 | min:0 datanode1,max:0 datanode1 + pg_catalog | gaussdb | installation | 75431936 | 12571989 | 3 | min:12124160 datanode4,max:13049856 datanode1 + redisuser | gaussdb | installation | 1884160 | 314026 | 50 | min:16384 datanode4,max:630784 datanode1 + pg_toast | gaussdb | installation | 17154048 | 2859008 | 7 | min:2637824 datanode4,max:3080192 datanode1 + cstore | gaussdb | installation | 15294464 | 2549077 | 5 | min:2408448 datanode1,max:2703360 datanode6 +(31 rows) + |
Description: Sets the action and query order of query_band.
+Return type: boolean
+The following table describes the input parameters.
+Name | Type | Description
+---|---|---
+qband | cstring | Query band key-value pair. The maximum length is 63 characters.
+action | cstring | Action associated with the query band
+order | int4 | Query band query order. The default value is -1.
Example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 | select * from gs_wlm_set_queryband_action('a=1','respool=p1'); + gs_wlm_set_queryband_action +----------------------------- + t +(1 row) +select * from gs_wlm_set_queryband_action('a=3','respool=p1;priority=rush',1); + gs_wlm_set_queryband_action +----------------------------- + t +(1 row) + |
Description: Sets the query_band query order.
+Return type: boolean
+The following table describes the input parameters.
+Name | Type | Description
+---|---|---
+qband | cstring | query_band key-value pairs
+order | int4 | query_band query order. The default value is -1.
Example:
+1 +2 +3 +4 +5 | select * from gs_wlm_set_queryband_order('a=1',2); + gs_wlm_set_queryband_action +----------------------------- + t +(1 row) + |
Description: Obtains the action and query order of query_band.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+qband | cstring | query_band key-value pairs
+respool_id | Oid | OID of the resource pool associated with query_band
+respool | text | Name of the resource pool associated with query_band
+priority | text | Intra-queue priority associated with query_band
+qborder | int4 | query_band query order
Example:
+1 +2 +3 +4 +5 | select * from gs_wlm_get_queryband_action('a=1'); +qband | respool_id | respool | priority | qborder +-------+------------+---------+----------+--------- + a=1 | 16388 | p1 | Medium | -1 +(1 row) + |
Description: This function loads the Cgroup configuration file online on the current instance.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+node_name | text | Instance name
+node_host | text | IP address of the node where the instance is located
+result | text | Whether Cgroup online loading is successful
Example:
+1 +2 +3 +4 | select * from gs_cgroup_reload_conf(); + node_name | node_host | result +-----------+----------------+--------- + cn_5001 | 192.168.178.35 | success + |
Description: This function loads the Cgroup configuration file online on all instances of the system.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+node_name | text | Instance name
+node_host | text | IP address of the node where the instance is located
+result | text | Whether Cgroup online loading is successful
Example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 +37 +38 +39 +40 +41 +42 +43 +44 +45 +46 | select * from pgxc_cgroup_reload_conf(); + node_name | node_host | result +--------------+-----------------+--------- + dn_6025_6026 | 192.168.178.177 | success + dn_6049_6050 | 192.168.179.79 | success + dn_6051_6052 | 192.168.179.79 | success + dn_6055_6056 | 192.168.179.79 | success + dn_6067_6068 | 192.168.181.57 | success + dn_6023_6024 | 192.168.178.39 | success + dn_6009_6010 | 192.168.181.21 | success + dn_6011_6012 | 192.168.181.21 | success + dn_6015_6016 | 192.168.181.21 | success + dn_6029_6030 | 192.168.178.177 | success + dn_6031_6032 | 192.168.178.177 | success + dn_6045_6046 | 192.168.179.45 | success + cn_5001 | 192.168.178.35 | success + cn_5003 | 192.168.178.39 | success + dn_6061_6062 | 192.168.181.179 | success + cn_5006 | 192.168.179.45 | success + cn_5004 | 192.168.178.177 | success + cn_5002 | 192.168.181.21 | success + cn_5005 | 192.168.178.187 | success + dn_6019_6020 | 192.168.178.39 | success + dn_6007_6008 | 192.168.178.35 | success + dn_6071_6072 | 192.168.181.57 | success + dn_6003_6004 | 192.168.178.35 | success + dn_6013_6014 | 192.168.181.21 | success + dn_6035_6036 | 192.168.178.187 | success + dn_6037_6038 | 192.168.178.187 | success + dn_6001_6002 | 192.168.178.35 | success + dn_6063_6064 | 192.168.181.179 | success + dn_6005_6006 | 192.168.178.35 | success + dn_6057_6058 | 192.168.181.179 | success + dn_6069_6070 | 192.168.181.57 | success + dn_6027_6028 | 192.168.178.177 | success + dn_6059_6060 | 192.168.181.179 | success + dn_6041_6042 | 192.168.179.45 | success + dn_6043_6044 | 192.168.179.45 | success + dn_6047_6048 | 192.168.179.45 | success + dn_6033_6034 | 192.168.178.187 | success + dn_6065_6066 | 192.168.181.57 | success + dn_6021_6022 | 192.168.178.39 | success + dn_6017_6018 | 192.168.178.39 | success + dn_6039_6040 | 192.168.178.187 | success + dn_6053_6054 | 192.168.179.79 | success +(42 rows) + |
Description: This function loads the Cgroup configuration file online on a node. The input parameter is the IP address of the node.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+node_name | text | Instance name
+node_host | text | IP address of the node where the instance is located
+result | text | Whether Cgroup online loading is successful
+Example:
+select * from pgxc_cgroup_reload_conf('192.168.178.35');
+  node_name   |   node_host    | result
+--------------+----------------+---------
+ cn_5001      | 192.168.178.35 | success
+ dn_6007_6008 | 192.168.178.35 | success
+ dn_6003_6004 | 192.168.178.35 | success
+ dn_6001_6002 | 192.168.178.35 | success
+ dn_6005_6006 | 192.168.178.35 | success
+(5 rows)
Description: Moves a task to the top of the CN queue.
+Return type: Boolean
+Note: Each of these functions returns true if it succeeds and false otherwise.
+Description: Moves a job to other Cgroup to improve the job priority.
+Return type: Boolean
+Note: Each of these functions returns true if it succeeds and false otherwise.
+Description: Updates and restores job information and counts on the CCN in dynamic resource management mode. This function can be executed only by administrators and is usually used to restore a faulty CN after it is restarted. It is called by the Cluster Manager (CM).
+Return type: bool
+Description: On the CCN in dynamic resource management mode, clears the job information and counts of a specified CN. This function can be executed only by administrators, and is usually used to restore a faulty CN after it was restarted. This function is called by the Cluster Manager (CM). Generally, users are not advised to call it.
+Return type: bool
+Description: Displays the summary of all DN resources.
+Return type: record
+The following table describes return columns.
+Column | Type | Description
+---|---|---
+min_mem_util | integer | Minimum memory usage of a DN
+max_mem_util | integer | Maximum memory usage of a DN
+min_cpu_util | integer | Minimum CPU usage of a DN
+max_cpu_util | integer | Maximum CPU usage of a DN
+min_io_util | integer | Minimum I/O usage of a DN
+max_io_util | integer | Maximum I/O usage of a DN
+phy_usemem_rate | integer | Maximum physical memory usage
Data redaction functions are used to mask and protect sensitive data. Generally, you are advised to bind these functions to the columns to be redacted based on the data redaction syntax, rather than use them directly on query statements.
+Description: Masks no data (for internal tests only).
+Return type: same as column_name
+Description: Replaces all data with a fixed value. The fixed value varies depending on the data type of the redacted column.
+Return type: same as column_name
+Description: Replaces the digits from the mask_from to mask_to position in a number with the digit specified by mask_digital. The default value of mask_to can be used, which indicates that the digits from the mask_from position to the end of the number are replaced. mask_digital can only be a digit from 0 to 9.
+Return type: same as column_name
+Description: Replaces the digits from the mask_from to mask_to position in a string with the character specified by mask_char based on the given input and output formats.
+Parameter description:
+input_format: a character string consisting of V and F whose length is the same as that of the data in the redacted column. Characters in positions marked V may be masked, and characters in positions marked F are skipped; the V characters therefore specify which characters can be masked. The input and output formats apply to data with a fixed length, such as bank card numbers, ID card numbers, and phone numbers.
+output_format: a character string consisting of V and any other characters whose length is the same as that of the data in the redacted column. The V characters correspond to the V characters in input_format, and the other characters correspond to the F characters in input_format.
+For input_format and output_format, you can use their default values or set them to "". In this case, there is no requirement on the input or output format, and the whole string is masked.
+mask_char: masking character, which can be any single character, for example, an asterisk (*) or a number sign (#).
+mask_from: position of the first character in the string to be masked. The value must be greater than 0.
+mask_to: position of the last character in the string to be masked. The default value can be used, which indicates that characters from the mask_from position to the end of the string are masked.
+Return type: same as column_name
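+For instance, the following sketch masks the middle eight digits of an illustrative card number while keeping its separators, assuming the signature mask_partial(column, input_format, output_format, mask_char, mask_from, mask_to):
+-- Expected result under these assumptions: 4880-####-####-2525
+SELECT mask_partial('4880-9898-4545-2525',
+                    'VVVVFVVVVFVVVVFVVVV',
+                    'VVVV-VVVV-VVVV-VVVV',
+                    '#', 5, 12);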
+Description: Masks a date or time based on three specified fields. If mask_value is -1, the corresponding mask_field is not masked. mask_field can be month, day, year, hour, minute, or second. The value range of each field must be within that of the actual time unit.
+Return type: same as column_name
+Redaction functions are recommended if you want to create redaction policies.
+For details about how to use data redaction functions, see the examples in "Database Security Management > Managing Users and Their Permissions > Data Redaction" in the Developer Guide.
+You can use the PL/pgSQL language to customize redaction functions.
+If either of the first two requirements is not met, an error will be reported when you create a redaction policy. If either of the last two requirements is not met, unexpected problems may occur in query execution results.
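+As a minimal sketch of binding a redaction function to a column (the table, policy, and user names here are illustrative; the DDL follows the data redaction syntax referenced above):
+CREATE TABLE emp_info (id int, card_no varchar(19));
+-- Mask card_no completely for everyone except the administrator.
+CREATE REDACTION POLICY mask_card ON emp_info
+    WHEN (current_user <> 'dbadmin')
+    ADD COLUMN card_no WITH mask_full(card_no);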
+Statistics information functions fall into two categories: functions that access a database, which use the OID of each table or index in the database to identify the objects for which statistics are reported; and functions that access a server, which are identified by the server process number, whose value ranges from 1 to the number of currently active servers.
+Description: Obtains the number of active server threads of a specified database on the current instance.
+Return type: integer
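+A minimal sketch, assuming the PostgreSQL-inherited function name pg_stat_get_db_numbackends(oid):
+SELECT pg_stat_get_db_numbackends(oid) AS backends
+FROM pg_database WHERE datname = current_database();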
+Description: Obtains the total number of active server threads of a specified database on all CNs in a cluster (if this function is executed on a CN), or obtains the number of active server threads of a specified database on the current instance (if this function is executed on a DN).
+Return type: integer
+Description: Obtains the number of committed transactions in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of committed transactions in a specified database on all CNs in a cluster (if this function is executed on a CN), or obtains the number of committed transactions in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of rollback transactions in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of rollback transactions in a specified database on all CNs in a cluster (if this function is executed on a CN), or obtains the number of rollback transactions in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
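+A sketch that reads both transaction counters for the current database, assuming the PostgreSQL-inherited names pg_stat_get_db_xact_commit(oid) and pg_stat_get_db_xact_rollback(oid):
+SELECT pg_stat_get_db_xact_commit(oid)   AS commits,
+       pg_stat_get_db_xact_rollback(oid) AS rollbacks
+FROM pg_database WHERE datname = current_database();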
+Description: Obtains the number of disk block fetch requests in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of disk block fetch requests in a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of disk block fetch requests in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of requested disk blocks found in the cache in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of requested disk blocks found in the cache in a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of requested disk blocks found in the cache in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
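+Together, the fetch and hit counters give a database's buffer hit ratio. A sketch, assuming the PostgreSQL-inherited names pg_stat_get_db_blocks_hit(oid) and pg_stat_get_db_blocks_fetched(oid):
+SELECT pg_stat_get_db_blocks_hit(oid)::numeric /
+       NULLIF(pg_stat_get_db_blocks_fetched(oid), 0) AS buffer_hit_ratio
+FROM pg_database WHERE datname = current_database();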
+Description: Obtains the number of tuples returned for a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of tuples returned for a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of tuples returned for a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of tuples read from a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of tuples read from a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of tuples read from a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of tuples inserted into a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of tuples inserted into a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of tuples inserted into a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of updated tuples in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of updated tuples in a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of updated tuples in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of tuples deleted from a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of tuples deleted from a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of tuples deleted from a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the total number of conflicting locks in a specified database on all CNs and DNs in a cluster (if this function is executed on a CN), or obtains the number of conflicting locks in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of deadlocks in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of deadlocks in a specified database on all CNs and DNs in a cluster (if this function is executed on a CN), or obtains the number of deadlocks in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of conflict recoveries in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of conflict recoveries in a specified database on all CNs and DNs in a cluster (if this function is executed on a CN), or obtains the number of conflict recoveries in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of temporary files created in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of temporary files created in a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of temporary files created in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the number of bytes of the temporary files created in a specified database on the current instance.
+Return type: bigint
+Description: Obtains the total number of bytes of the temporary files created in a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the number of bytes of the temporary files created in a specified database on the current instance (if this function is executed on a DN).
+Return type: bigint
+Description: Obtains the time required for reading data blocks from a specified database on the current instance.
+Return type: double
+Description: Obtains the total time required for reading data blocks from a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the time required for reading data blocks from a specified database on the current instance (if this function is executed on a DN).
+Return type: double
+Description: Obtains the time required for writing data blocks to a specified database on the current instance.
+Return type: double
+Description: Obtains the total time required for writing data blocks to a specified database on all DNs in a cluster (if this function is executed on a CN), or obtains the time required for writing data blocks to a specified database on the current instance (if this function is executed on a DN).
+Return type: double
+Description: Number of sequential row scans done if the parameter is a table, or number of index scans done if the parameter is an index
+Return type: bigint
+Description: Number of rows read by sequential scans if the parameter is a table, or number of index entries returned if the parameter is an index
+Return type: bigint
+Description: Number of table rows fetched by bitmap scans if the parameter is a table, or number of table rows fetched by simple index scans using the index if the parameter is an index
+Return type: bigint
+Description: Number of rows inserted into table
+Return type: bigint
+Description: Number of rows updated in table
+Return type: bigint
+Description: Number of rows deleted from table
+Return type: bigint
+Description: Total number of inserted, updated, and deleted rows after the table was last analyzed or autoanalyzed
+Return type: bigint
+Description: Number of rows HOT-updated in table
+Return type: bigint
+Description: Number of live rows in table
+Return type: bigint
+Description: Number of dead rows in table
+Return type: bigint
+Description: Number of disk block fetch requests for table or index
+Return type: bigint
+Description: Number of disk block requests found in cache for table or index
+Return type: bigint
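+The table- and index-level functions above take the OID of the object. A sketch, assuming the PostgreSQL-inherited names and a TPC-DS table:
+SELECT pg_stat_get_numscans('tpcds.store_returns'::regclass)        AS seq_scans,
+       pg_stat_get_tuples_inserted('tpcds.store_returns'::regclass) AS inserted,
+       pg_stat_get_live_tuples('tpcds.store_returns'::regclass)     AS live_rows;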
+Description: Number of rows in the corresponding table partition
+Return type: bigint
+Description: Number of rows that have been updated in the corresponding table partition
+Return type: bigint
+Description: Number of rows deleted from the corresponding table partition
+Return type: bigint
+Description: Total number of inserted, updated, and deleted rows after the table partition was last analyzed or autoanalyzed
+Return type: bigint
+Description: Number of live rows in a table partition
+Return type: bigint
+Description: Number of dead rows in a table partition
+Return type: bigint
+Description: Number of inserted tuples in the active subtransactions related to a table
+Return type: bigint
+Description: Number of deleted tuples in the active subtransactions related to a table
+Return type: bigint
+Description: Number of hot updated tuples in the active subtransactions related to a table
+Return type: bigint
+Description: Number of updated tuples in the active subtransactions related to a table
+Return type: bigint
+Description: Number of inserted tuples in the active subtransactions related to a table partition
+Return type: bigint
+Description: Number of deleted tuples in the active subtransactions related to a table partition
+Return type: bigint
+Description: Number of hot updated tuples in the active subtransactions related to a table partition
+Return type: bigint
+Description: Number of updated tuples in the active subtransactions related to a table partition
+Return type: bigint
+Description: Last time when a table was manually cleared
+Return type: timestamptz
+Description: Time of the last vacuum initiated by the autovacuum daemon on this table
+Return type: timestamptz
+Description: Number of times a table is manually cleared
+Return type: bigint
+Description: Number of times the autovacuum daemon is started to clear a table
+Return type: bigint
+Description: Last time when a table was manually analyzed
+Return type: timestamptz
+Description: Time of the last analysis initiated by the autovacuum daemon on this table
+Return type: timestamptz
+Description: Number of times a table is manually analyzed
+Return type: bigint
+Description: Number of times the autovacuum daemon analyzes a table
+Return type: bigint
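+A sketch that reads the vacuum and analyze history of one table, assuming the PostgreSQL-inherited names:
+SELECT pg_stat_get_last_vacuum_time('tpcds.store_returns'::regclass)  AS last_vacuum,
+       pg_stat_get_vacuum_count('tpcds.store_returns'::regclass)      AS vacuum_count,
+       pg_stat_get_last_analyze_time('tpcds.store_returns'::regclass) AS last_analyze,
+       pg_stat_get_analyze_count('tpcds.store_returns'::regclass)     AS analyze_count;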
+Description: Gets the tuple records related to total autovac, such as nodename, nspname, relname, and the IUD information of tuples.
+Return type: SETOF record
+Description: Returns autovac information, such as nodename, nspname, relname, analyze, vacuum, thresholds of analyze and vacuum, and the number of analyzed or vacuumed tuples.
+Return type: SETOF record
+Description: Returns the number of consecutive timeouts during the autovac operation on a table. If the table information is invalid or the node information is abnormal, NULL will be returned.
+Return type: bigint
+Description: Returns the name of the CN performing the autovac operation on a table. If the table information is invalid or the node information is abnormal, NULL will be returned.
+Return type: text
+Description: The query performance of the PGXC_WLM_SESSION_INFO view is poor if the view contains a large number of records. In this case, you are advised to use this function to filter the query. The input parameters are time column (start_time or finish_time), start time, end time, and maximum number of records returned for each CN. The return result is a subset of records in the GS_WLM_SESSION_HISTORY view.
+Return type: SETOF record
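+A sketch of such a filtered query, assuming the function name pgxc_get_wlm_session_info_bytime; the time range and record limit are illustrative:
+SELECT * FROM pgxc_get_wlm_session_info_bytime('start_time',
+       '2022-06-01 00:00:00', '2022-06-02 00:00:00', 10);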
+Description: Queries the current resource usage of each node in the cluster on the CN and reads the data that is not stored in the GS_WLM_INSTANCE_HISTORY system catalog in the memory. The input parameters are the node name (ALL, C, D, or instance name) and the maximum number of records returned by each node. The returned value is GS_WLM_INSTANCE_HISTORY.
+Return type: SETOF record
+Description: Queries the historical resource usage of each cluster node on the CN node and reads data from the GS_WLM_INSTANCE_HISTORY system catalog. The input parameters are as follows: node name (ALL, C, D, or instance name), start time, end time, and maximum number of records returned for each instance. The returned value is GS_WLM_INSTANCE_HISTORY.
+Return type: SETOF record
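+A sketch of both queries, assuming the function names pgxc_get_wlm_current_instance_info and pgxc_get_wlm_history_instance_info; the arguments are illustrative:
+SELECT * FROM pgxc_get_wlm_current_instance_info('ALL', 10);
+SELECT * FROM pgxc_get_wlm_history_instance_info('ALL',
+       '2022-06-01 00:00:00', '2022-06-02 00:00:00', 10);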
+Description: Returns the time when INSERT, UPDATE, DELETE, or EXCHANGE/TRUNCATE/DROP PARTITION was performed last time on a table. The data in the last_data_changed column of the PG_STAT_ALL_TABLES view is calculated by using this function. The performance of obtaining the last modification time by using the view is poor when the table has a large amount of data. In this case, you are advised to use the function.
+Return type: timestamptz
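+A sketch, assuming the function name pg_stat_get_last_data_changed_time(oid):
+SELECT pg_stat_get_last_data_changed_time('tpcds.store_returns'::regclass);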
+Description: Manually changes the time when INSERT, UPDATE, DELETE, or EXCHANGE/TRUNCATE/DROP PARTITION was performed last time.
+Return type: void
+Description: Collects statistics on the running time of each session thread on the current node and the time consumed in each execution phase.
+Return type: record
+Description: Collects statistics on the running time of the current node and the time consumed in each execution phase.
+Return type: record
+Description: Returns a record about the backend with the specified PID. A record for each active backend in the system is returned if NULL is specified. The return result is a subset of records (excluding the connection_info column) in the PG_STAT_ACTIVITY view.
+Return type: SETOF record
+Description: Returns a record about the backend with the specified PID. A record for each active backend in the system is returned if NULL is specified. The return result is a subset of records in the PG_STAT_ACTIVITY view.
+Return type: SETOF record
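+A sketch that lists every active backend, assuming the function name pg_stat_get_activity and that passing NULL selects all backends, as described above:
+SELECT * FROM pg_stat_get_activity(NULL);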
+Description: Displays the I/O load management information about the job currently executed by the user.
+Return type: record
+The following table describes return fields.
+Name | Type | Description
+---|---|---
+userid | oid | User ID
+min_curr_iops | int4 | Minimum I/O of the current user across DNs. The IOPS is counted by ones for column storage and by thousands for row storage.
+max_curr_iops | int4 | Maximum I/O of the current user across DNs. The IOPS is counted by ones for column storage and by thousands for row storage.
+min_peak_iops | int4 | Minimum peak I/O of the current user across DNs. The IOPS is counted by ones for column storage and by thousands for row storage.
+max_peak_iops | int4 | Maximum peak I/O of the current user across DNs. The IOPS is counted by ones for column storage and by thousands for row storage.
+io_limits | int4 | io_limits set for the resource pool specified by the user. The IOPS is counted by ones for column storage and by thousands for row storage.
+io_priority | text | io_priority set for the user. The IOPS is counted by ones for column storage and by thousands for row storage.
Description: Number of times the function has been called
+Return type: bigint
+Description: Gets the total wall-clock time spent on a function, in microseconds. The time spent on functions called by this function is included.
+Return type: double precision
+Description: Gets the time spent only on this function itself in the current transaction. The time spent on functions called by it is excluded.
+Return type: double precision
+Description: Set of currently active server process numbers (from 1 to the number of active server processes)
+Return type: SETOF integer
+Description: Thread ID of the given server thread
+Return type: bigint
+SELECT pg_stat_get_backend_pid(1);
+ pg_stat_get_backend_pid
+-------------------------
+         139706243217168
+(1 row)
Description: ID of the database connected to the given server process
+Return type: OID
+Description: User ID of the given server process
+Return type: OID
+Description: Active command of the given server process, but only if the current user is a system administrator or the same user as that of the session being queried and track_activities is on
+Return type: text
+Description: True if the given server process is waiting for a lock, but only if the current user is a system administrator or the same user as that of the session being queried and track_activities is on
+Return type: boolean
+Description: The time at which the given server process's currently executing query was started, but only if the current user is a system administrator or the same user as that of the session being queried and track_activities is on
+Return type: timestamp with time zone
+Description: The time at which the given server process's currently executing transaction was started, but only if the current user is a system administrator or the same user as that of the session being queried and track_activities is on
+Return type: timestamp with time zone
+Description: The time at which the given server process was started, or NULL if the current user is neither a system administrator nor the same user as that of the session being queried
+Return type: timestamp with time zone
+Description: IP address of the client connected to the given server process.
+If the connection is over a Unix domain socket, or if the current user is neither a system administrator nor the same user as that of the session being queried, NULL will be returned.
+Return type: inet
+Note: An IP address used as an input parameter of this function cannot contain periods (.). For example, 192.168.100.128 should be written as 192168100128.
+Description: TCP port number of the client connected to the given server process
+If the connection is over a Unix domain socket, -1 will be returned. If the current user is neither a system administrator nor the same user as that of the session being queried, NULL will be returned.
+Return type: integer
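+A sketch that enumerates the active backends and joins the per-backend accessors, assuming the PostgreSQL-inherited names shown:
+SELECT b.id,
+       pg_stat_get_backend_pid(b.id)      AS thread_id,
+       pg_stat_get_backend_activity(b.id) AS active_command
+FROM pg_stat_get_backend_idset() AS b(id);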
+Description: The number of times the background writer has started timed checkpoints (because the checkpoint_timeout time has expired)
+Return type: bigint
+Description: The number of times the background writer has started checkpoints based on requests from the backend because checkpoint_segments has been exceeded or the CHECKPOINT command has been executed
+Return type: bigint
+Description: The number of buffers written by the background writer during checkpoints
+Return type: bigint
+Description: The number of buffers written by the background writer for routine cleaning of dirty pages
+Return type: bigint
+Description: The number of times the background writer has stopped its cleaning scan because it has written more buffers than specified in the bgwriter_lru_maxpages parameter
+Return type: bigint
+Description: The number of buffers written by the backend because they needed to allocate a new buffer
+Return type: bigint
+Description: The total number of buffer allocations
+Return type: bigint
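+A sketch that reads the background writer counters, assuming the PostgreSQL-inherited names:
+SELECT pg_stat_get_bgwriter_timed_checkpoints()     AS timed_checkpoints,
+       pg_stat_get_bgwriter_requested_checkpoints() AS requested_checkpoints,
+       pg_stat_get_buf_alloc()                      AS buffers_allocated;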
+Description: Discards the current statistics snapshot.
+Return type: void
+Description: Resets all statistics counters for the current database to zero (requires system administrator permissions).
+Return type: void
+Description: Resets all statistics counters for the current database in each node in a shared cluster to zero (requires system administrator permissions).
+Return type: void
+Description: Resets statistics for a single table or index in the current database to zero (requires system administrator permissions).
+Return type: void
+Description: Resets statistics for a single function in the current database to zero (requires system administrator permissions).
+Return type: void
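+A sketch of the snapshot and reset calls, assuming the PostgreSQL-inherited names pg_stat_clear_snapshot() and pg_stat_reset():
+SELECT pg_stat_clear_snapshot();  -- discard the current statistics snapshot
+SELECT pg_stat_reset();           -- zero the counters for the current database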
+Description: Obtains the compression unit (CU) hit statistics of sessions running on the current node.
+Return type: record
+Description: Obtains the CU hit statistics of all sessions running in a cluster.
+Return type: record
+Description: Obtains the CU hit statistics of a database in a cluster.
+Return type: record
+Description: Obtains the number of CU memory hits of a column storage table in the current database of the current node.
+Return type: bigint
+Description: Obtains the times CU is synchronously read from a disk by a column storage table in the current database of the current node.
+Return type: bigint
+Description: Obtains the times CU is asynchronously read from a disk by a column storage table in the current database of the current node.
+Return type: bigint
+Description: Obtains the CU memory hit in a database of the current node.
+Return type: bigint
+Description: Obtains the times CU is synchronously read from a disk by a database of the current node.
+Return type: bigint
+Description: Obtains the times CU is asynchronously read from a disk by a database of the current node.
+Return type: bigint
+Description: Shows the number of UDF Master and Work processes.
+Return type: record
+Description: Kills all UDF Work processes.
+Return type: bool
+SELECT * FROM GS_ALL_NODEGROUP_CONTROL_GROUP_INFO('installation');
Return type: record
+The following table describes return fields.
+Name | Type | Description
+---|---|---
+name | text | Name of a Cgroup
+type | text | Type of the Cgroup
+gid | bigint | Cgroup ID
+classgid | bigint | ID of the Class Cgroup to which a Workload Cgroup belongs
+class | text | Class Cgroup
+workload | text | Workload Cgroup
+shares | bigint | CPU quota allocated to a Cgroup
+limits | bigint | Limit of CPUs allocated to a Cgroup
+wdlevel | bigint | Workload Cgroup level
+cpucores | text | Usage of CPU cores in a Cgroup
Description: Total number of user tables in all the databases in a logical cluster
+Return type: integer
+Description: Maximum disk space occupied by database files in all the DNs of a logical cluster. The unit is byte.
+Return type: bigint
+Description: Checks whether the system information of all logical clusters in the system is consistent. If no record is returned, the information is consistent. Otherwise, the Node Group information on CNs and DNs in the logical cluster is inconsistent. This function cannot be invoked during redistribution in a scale-in or scale-out.
+Return type: record
+Description: Checks whether the user table distribution in the system is consistent. If no record is returned, table distribution is consistent. This function cannot be invoked during redistribution in a scale-in or scale-out.
+Return type: record
+Description: Obtains damage information about pages or CUs after the current node is started.
+Return type: record
+Description: Obtains damage information about pages or CUs after all the nodes in the cluster are started.
+Return type: record
+Description: Deletes the page and CU damage information that is read and recorded on the node. (System administrator rights are required.)
+Return type: void
+Description: Deletes the page and CU damage information that is read and recorded on all the nodes in the cluster. (System administrator rights are required.)
+Return type: void
+Description: Queries for the query rule of a specified resource pool.
+Return type: record
+Description: Queries for information about Cgroups associated with a resource pool.
+Return type: record
+The following information is displayed:
+Attribute | Value | Description
+---|---|---
+name | class_a:workload_a1 | Class name and workload name
+class | class_a | Class Cgroup name
+workload | workload_a1 | Workload Cgroup name
+type | DEFWD | Cgroup type (Top, CLASS, BAKWD, DEFWD, and TSWD)
+gid | 87 | Cgroup ID
+shares | 30 | Percentage of CPU resources to those on the parent node
+limits | 0 | Percentage of CPU cores to those on the parent node
+rate | 0 | Allocation ratio in Timeshare
+cpucores | 0-3 | Number of CPU cores
Description: Queries for a user's resource quota and resource usage.
+Return type: record
+Description: Obtains the definition information of a trigger.
+Parameter: OID of the trigger to be queried
+Return type: text
+Example:
+select pg_get_triggerdef(oid) from pg_trigger;
+                                                   pg_get_triggerdef
+----------------------------------------------------------------------------------------------------------------------
+ CREATE TRIGGER insert_trigger BEFORE INSERT ON test_trigger_src_tbl FOR EACH ROW EXECUTE PROCEDURE tri_insert_func()
+(1 row)
Description: Obtains the definition information of a trigger.
+Parameter: OID of the trigger to be queried and whether it is displayed in pretty mode
+Return type: text
+The Boolean parameter takes effect only when the WHEN condition is specified during trigger creation.
+Example:
+select pg_get_triggerdef(oid, true) from pg_trigger;
+                                                   pg_get_triggerdef
+----------------------------------------------------------------------------------------------------------------------
+ CREATE TRIGGER insert_trigger BEFORE INSERT ON test_trigger_src_tbl FOR EACH ROW EXECUTE PROCEDURE tri_insert_func()
+(1 row)
+
+select pg_get_triggerdef(oid, false) from pg_trigger;
+                                                   pg_get_triggerdef
+----------------------------------------------------------------------------------------------------------------------
+ CREATE TRIGGER insert_trigger BEFORE INSERT ON test_trigger_src_tbl FOR EACH ROW EXECUTE PROCEDURE tri_insert_func()
+(1 row)
Description: Generates an XML value from character data.
+Return type: XML
+Example:
+SELECT xmlparse(document '<foo>bar</foo>');
+    xmlparse
+----------------
+ <foo>bar</foo>
+(1 row)
Description: Generates a string from XML values.
+Return type: type, which can be character, character varying, or text (or its alias)
+Example:
+SELECT xmlserialize(content 'good' AS CHAR(10));
+ xmlserialize
+--------------
+ good
+(1 row)
+Description: Creates an XML comment that uses the specified text as the content. The text cannot contain two consecutive hyphens (--) or end with a hyphen (-). If the parameter is null, the result is also null.
+Return type: XML
+Example:
+SELECT xmlcomment('hello');
+  xmlcomment
+--------------
+ <!--hello-->
+(1 row)
Description: Concatenates a list of XML values into a single value. Null values are ignored. If all parameters are null, the result is also null.
+Return type: XML
+Example:
+SELECT xmlconcat('<abc/>', '<bar>foo</bar>');
+      xmlconcat
+----------------------
+ <abc/><bar>foo</bar>
+(1 row)
+Note: If XML declarations exist and they specify the same XML version, the result uses that version; otherwise, the result uses no version. If all the XML values have the standalone attribute set to yes, the standalone attribute of the result is yes; if at least one XML value's standalone attribute is no, the standalone attribute of the result is no; otherwise, the result has no standalone attribute.
+Example:
+SELECT xmlconcat('<?xml version="1.1"?><foo/>', '<?xml version="1.1" standalone="no"?><bar/>');
+             xmlconcat
+-----------------------------------
+ <?xml version="1.1"?><foo/><bar/>
+(1 row)
Description: Generates an XML element with the given name, attribute, and content.
+Return type: XML
+Example:
+SELECT xmlelement(name foo, xmlattributes(current_date as bar), 'cont', 'ent');
+             xmlelement
+-------------------------------------
+ <foo bar="2020-08-15">content</foo>
+(1 row)
Description: Generates an XML forest (sequence) of an element with a given name and content.
+Return type: XML
+Example:
+SELECT xmlforest('abc' AS foo, 123 AS bar);
+          xmlforest
+------------------------------
+ <foo>abc</foo><bar>123</bar>
+(1 row)
Description: Creates an XML processing instruction. The content cannot contain the character sequence of ?>.
+Return type: XML
+Example:
+SELECT xmlpi(name php, 'echo "hello world";');
+            xmlpi
+-----------------------------
+ <?php echo "hello world";?>
+(1 row)
Description: Modifies the attributes of the root node of an XML value. If a version is specified, it replaces the value in the version declaration of the root node. If a standalone value is specified, it replaces the standalone value in the root node.
+Return type: XML
+Example:
+SELECT xmlroot(xmlparse(document '<?xml version="1.0" standalone="no"?><content>abc</content>'), version '1.1', standalone yes);
+                           xmlroot
+--------------------------------------------------------------
+ <?xml version="1.1" standalone="yes"?><content>abc</content>
+(1 row)
Description: The xmlagg function is an aggregate function that concatenates input values.
+Return type: XML
+Example:
+CREATE TABLE test (y int, x xml);
+INSERT INTO test VALUES (1, '<foo>abc</foo>');
+INSERT INTO test VALUES (2, '<bar/>');
+SELECT xmlagg(x) FROM test;
+        xmlagg
+----------------------
+ <foo>abc</foo><bar/>
+(1 row)
To determine the concatenation sequence, you can add an ORDER BY clause for an aggregate call, for example:
+SELECT xmlagg(x ORDER BY y DESC) FROM test;
+        xmlagg
+----------------------
+ <bar/><foo>abc</foo>
+(1 row)
Description: IS DOCUMENT returns true if the XML value of the parameter is a correct XML document; if the XML document is incorrect, false is returned. If the parameter is null, a null value is returned.
+Return type: bool
+Description: Returns true if the XML value of the parameter is not a correct XML document. If the XML document is correct, false is returned. If the parameter is null, a null value is returned.
+Return type: bool
+Description: If the xpath expression in the first parameter returns any node, the XMLEXISTS function returns true. Otherwise, the function returns false. (If any parameter is null, the result is null.) The BY REF clause is invalid and is used to maintain SQL compatibility.
+Return type: bool
+Example:
+SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF '<towns><town>Toronto</town><town>Ottawa</town></towns>');
+ xmlexists
+-----------
+ t
+(1 row)
+Description: Checks whether a text string is a well-formed XML value and returns a Boolean result. If the xmloption parameter is set to DOCUMENT, the string is checked as a document; if it is set to CONTENT, the string is checked as content.
+Return type: bool
+Example:
+SELECT xml_is_well_formed('<abc/>');
+ xml_is_well_formed
+--------------------
+ t
+(1 row)
+Description: Checks whether a text string is a well-formed XML document and returns a Boolean result.
+Return type: bool
+Example:
+SELECT xml_is_well_formed_document('<test:foo xmlns:test="http://test.com/test">bar</test:foo>');
+ xml_is_well_formed_document
+-----------------------------
+ t
+(1 row)
+Description: Checks whether a text string is well-formed XML content and returns a Boolean result.
+Return type: bool
+Example:
+SELECT xml_is_well_formed_content('content');
+ xml_is_well_formed_content
+----------------------------
+ t
+(1 row)
+Description: Returns an array of XML values corresponding to the set of nodes produced by the xpath expression. If the xpath expression returns a scalar value instead of a set of nodes, a single-element array is returned. The second parameter must be a complete XML document with a root node element. The third parameter is an array of namespace mappings: a two-dimensional text array whose second dimension has length 2 (that is, an array of arrays, each containing exactly two elements). The first element of each inner array is the namespace alias, and the second is the namespace URI. The aliases provided in this array do not have to be the same as those used in the XML document itself; in other words, in the context of both the XML document and the xpath function, aliases are local.
+Return type: XML value array
+Example:
+SELECT xpath('/my:a/text()', '<my:a xmlns:my="http://example.com">test</my:a>', ARRAY[ARRAY['my', 'http://example.com']]);
+ xpath
+--------
+ {test}
+(1 row)
Description: The xpath_exists function is a special form of the xpath function. This function does not return an XML value that satisfies the xpath function; it returns a Boolean value indicating whether the query is satisfied. This function is equivalent to the standard XMLEXISTS predicate, but it also provides support for a namespace mapping parameter.
+Return type: bool
+Example:
+SELECT xpath_exists('/my:a/text()', '<my:a xmlns:my="http://example.com">test</my:a>', ARRAY[ARRAY['my', 'http://example.com']]);
+ xpath_exists
+--------------
+ t
+(1 row)
Description: Generates a table based on the input XML data, XPath expression, and column definition. An xmltable is similar to a function in syntax, but it can appear only as a table in the FROM clause of a query.
+Return value: setof record
+Syntax:
+XMLTABLE ( [ XMLNAMESPACES ( namespace_uri AS namespace_name [, ...] ), ]
+           row_expression PASSING [ BY { REF | VALUE } ]
+           document_expression [ BY { REF | VALUE } ]
+           COLUMNS name { type [ PATH column_expression ] [ DEFAULT default_expression ] [ NOT NULL | NULL ] | FOR ORDINALITY }
+           [, ...]
+)
Parameter:
+XPath 1.0 does not specify the order for nodes, so the order in which results are returned depends on the order in which data is obtained.
+Example:
+SELECT * FROM XMLTABLE('/ROWS/ROW'
+PASSING '<ROWS><ROW id="1"><COUNTRY_ID>AU</COUNTRY_ID><COUNTRY_NAME>Australia</COUNTRY_NAME></ROW><ROW id="2"><COUNTRY_ID>FR</COUNTRY_ID><COUNTRY_NAME>France</COUNTRY_NAME></ROW><ROW id="3"><COUNTRY_ID>SG</COUNTRY_ID><COUNTRY_NAME>Singapore</COUNTRY_NAME></ROW></ROWS>'
+COLUMNS id INT PATH '@id',
+_id FOR ORDINALITY,
+country_id TEXT PATH 'COUNTRY_ID',
+country_name TEXT PATH 'COUNTRY_NAME' NOT NULL);
+ id | _id | country_id | country_name
+----+-----+------------+--------------
+  1 |   1 | AU         | Australia
+  2 |   2 | FR         | France
+  3 |   3 | SG         | Singapore
+(3 rows)
Description: Maps the contents of a table to XML values.
+Return type: XML
+Description: Maps a relational table schema to an XML schema document.
+Return type: XML
+Description: Maps a relational table to XML values and schema documents.
+Return type: XML
+Description: Maps the contents of an SQL query to XML values.
+Return type: XML
+Description: Maps an SQL query into an XML schema document.
+Return type: XML
+Description: Maps SQL queries to XML values and schema documents.
+Return type: XML
+Description: Maps a cursor query to an XML value.
+Return type: XML
+Description: Maps a cursor query to an XML schema document.
+Return type: XML
+Description: Maps a table in a schema to an XML value.
+Return type: XML
+Description: Maps a table in a schema to an XML schema document.
+Return type: XML
+Description: Maps a table in a schema to an XML value and a schema document.
+Return type: XML
+Description: Maps a database table to an XML value.
+Return type: XML
+Description: Maps a database table to an XML schema document.
+Return type: XML
+Description: Maps database tables to XML values and schema documents.
+Return type: XML
+The parameters for mapping a table to an XML value are the table to be mapped (tbl), whether null values are included in the output (nulls), whether an XML forest rather than a single document is produced (tableforest), and the target XML namespace (targetns).
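+A sketch, assuming the PostgreSQL-style signature table_to_xml(tbl regclass, nulls boolean, tableforest boolean, targetns text); the table name is illustrative:
+SELECT table_to_xml('tpcds.c_tabl'::regclass, true, false, '');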
+The pv_memory_profiling(type int) and environment variable MALLOC_CONF are used by GaussDB(DWS) to control the enabling and disabling of the memory allocation call stack recording module and the output of the memory call stack. The following figure illustrates the process.
+The environment variable MALLOC_CONF is used to enable the monitoring module. It is in the ${BIGDATA_HOME}/mppdb/.mppdbgs_profile file and is enabled by default. Note the following points:
+Commands for enabling and disabling MALLOC_CONF:
+export MALLOC_CONF=prof:true
+export MALLOC_CONF=prof:false
Parameter description: Controls the backtrace recording and output of memory allocation functions such as malloc in the kernel.
+Value range: an integer ranging from 0 to 3.
+pv_memory_profiling Value | Description
+---|---
+0 | Disables the memory trace function and does not record information of call stacks such as malloc.
+1 | Enables the memory trace function to record information of call stacks such as malloc.
+2 | Outputs trace logs of call stacks such as malloc.
+3 | Outputs memory statistics.
Return type: Boolean
+Note:
+Procedure:
+select * from pv_memory_profiling(2);
+jeprof --text --show_bytes $GAUSSHOME/bin/gaussdb trace file 1 > prof.txt
Method 2: Export the report in PDF format.
+jeprof --pdf --show_bytes $GAUSSHOME/bin/gaussdb trace file 1 > prof.pdf
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 | -- Log in as the system administrator, set environment variables, and start the database. +export MALLOC_CONF=prof:true + +-- Disable the memory trace recording function when the database is running. +select pv_memory_profiling(0); +pv_memory_profiling +---------------------------- +t +(1 row) + +-- Enable the memory trace recording function when the database is running. +select pv_memory_profiling(1); +pv_memory_profiling +---------------------------- +t +(1 row) + +-- Output memory trace records. +select pv_memory_profiling(2); +pv_memory_profiling +---------------------------- +t +(1 row) + +-- Generate the trace file in text or PDF format in the directory where the GaussDB process is located. +jeprof --text --show_bytes $GAUSSHOME/bin/gaussdb trace file 1 >prof.txt +jeprof --pdf --show_bytes $GAUSSHOME/bin/gaussdb trace file 1 > prof.pdf + +-- Output memory statistics. +Execute the following statement to generate the memory statistics file in the directory where the GaussDB process is located. The file can be directly read. +select pv_memory_profiling(3); +pv_memory_profiling +---------------------------- +t +(1 row) + |
Logical Operators lists the operators and calculation rules of logical expressions.
+Comparison Operators lists the common comparative operators.
+In addition to comparison operators, you can also use the following constructs:
+a BETWEEN x AND y is equivalent to a >= x AND a <= y.
+a NOT BETWEEN x AND y is equivalent to a < x OR a > y.
+expression IS NULL
+expression IS NOT NULL
+or an equivalent (non-standard) syntax:
+expression ISNULL
+expression NOTNULL
+Do not write expression = NULL or expression <> NULL (or expression != NULL), because NULL represents an unknown value and these comparisons cannot determine whether two unknown values are equal.
+SELECT 2 BETWEEN 1 AND 3 AS RESULT;
+ result
+--------
+ t
+(1 row)
+
+SELECT 2 >= 1 AND 2 <= 3 AS RESULT;
+ result
+--------
+ t
+(1 row)
+
+SELECT 2 NOT BETWEEN 1 AND 3 AS RESULT;
+ result
+--------
+ f
+(1 row)
+
+SELECT 2 < 1 OR 2 > 3 AS RESULT;
+ result
+--------
+ f
+(1 row)
+
+SELECT 2+2 IS NULL AS RESULT;
+ result
+--------
+ f
+(1 row)
+
+SELECT 2+2 IS NOT NULL AS RESULT;
+ result
+--------
+ t
+(1 row)
+
+SELECT 2+2 ISNULL AS RESULT;
+ result
+--------
+ f
+(1 row)
+
+SELECT 2+2 NOTNULL AS RESULT;
+ result
+--------
+ t
+(1 row)
+
+SELECT 2+2 IS DISTINCT FROM NULL AS RESULT;
+ result
+--------
+ t
+(1 row)
+
+SELECT 2+2 IS NOT DISTINCT FROM NULL AS RESULT;
+ result
+--------
+ f
+(1 row)
+Data that meets the requirements specified by conditional expressions is filtered during SQL statement execution.
+Conditional expressions include the following types:
+CASE expressions are similar to the CASE statements in other coding languages.
+Figure 1 shows the syntax of a CASE expression.
+A CASE clause can be used in a valid expression. condition is an expression that returns a value of Boolean type.
+Examples:
+CREATE TABLE tpcds.case_when_t1(CW_COL1 INT) DISTRIBUTE BY HASH (CW_COL1);
+
+INSERT INTO tpcds.case_when_t1 VALUES (1), (2), (3);
+
+SELECT * FROM tpcds.case_when_t1;
+ cw_col1
+---------
+       1
+       2
+       3
+(3 rows)
+
+SELECT CW_COL1, CASE WHEN CW_COL1=1 THEN 'one' WHEN CW_COL1=2 THEN 'two' ELSE 'other' END FROM tpcds.case_when_t1;
+ cw_col1 | case
+---------+-------
+       3 | other
+       1 | one
+       2 | two
+(3 rows)
+
+DROP TABLE tpcds.case_when_t1;
Figure 2 shows the syntax of a DECODE expression.
+Each compare(n) is compared with base_expr in turn. If a compare(n) matches base_expr, the corresponding value(n) is returned. If no compare(n) matches base_expr, the default value is returned.
+Conditional Expression Functions describes the examples.
+SELECT DECODE('A','A',1,'B',2,0);
+ case
+------
+    1
+(1 row)
Figure 3 shows the syntax of a COALESCE expression.
+COALESCE returns the first of its arguments that is not NULL; if all arguments are NULL, it returns NULL. This is often used to substitute a default value for NULL values when data is displayed. Like a CASE expression, COALESCE evaluates only the arguments that are needed to determine the result; that is, arguments to the right of the first non-NULL argument are not evaluated.
+Example:
+CREATE TABLE tpcds.c_tabl(description varchar(10), short_description varchar(10), last_value varchar(10))
+DISTRIBUTE BY HASH (last_value);
+
+INSERT INTO tpcds.c_tabl VALUES('abc', 'efg', '123');
+INSERT INTO tpcds.c_tabl VALUES(NULL, 'efg', '123');
+INSERT INTO tpcds.c_tabl VALUES(NULL, NULL, '123');
+
+SELECT description, short_description, last_value, COALESCE(description, short_description, last_value) FROM tpcds.c_tabl ORDER BY 1, 2, 3, 4;
+ description | short_description | last_value | coalesce
+-------------+-------------------+------------+----------
+ abc         | efg               | 123        | abc
+             | efg               | 123        | efg
+             |                   | 123        | 123
+(3 rows)
+
+DROP TABLE tpcds.c_tabl;
+If description is not NULL, the value of description is returned. Otherwise, parameter short_description is calculated. If short_description is not NULL, the value of short_description is returned. Otherwise, parameter last_value is calculated. If last_value is not NULL, the value of last_value is returned. Otherwise, NULL is returned.
+SELECT COALESCE(NULL,'Hello World');
+   coalesce
+-------------
+ Hello World
+(1 row)
Figure 4 shows the syntax of a NULLIF expression.
+NULLIF returns NULL only if value1 is equal to value2; otherwise, it returns value1.
+Examples
+CREATE TABLE tpcds.null_if_t1 (
+    NI_VALUE1 VARCHAR(10),
+    NI_VALUE2 VARCHAR(10)
+) DISTRIBUTE BY HASH (NI_VALUE1);
+
+INSERT INTO tpcds.null_if_t1 VALUES('abc', 'abc');
+INSERT INTO tpcds.null_if_t1 VALUES('abc', 'efg');
+
+SELECT NI_VALUE1, NI_VALUE2, NULLIF(NI_VALUE1, NI_VALUE2) FROM tpcds.null_if_t1 ORDER BY 1, 2, 3;
+ ni_value1 | ni_value2 | nullif
+-----------+-----------+--------
+ abc       | abc       |
+ abc       | efg       | abc
+(2 rows)
+
+DROP TABLE tpcds.null_if_t1;
If value1 is equal to value2, NULL is returned. Otherwise, value1 is returned.
+SELECT NULLIF('Hello','Hello World');
+ nullif
+--------
+ Hello
+(1 row)
Figure 5 shows the syntax of a GREATEST expression.
+Selects the largest value from a list of expressions.
+SELECT greatest(9000,155555,2.01);
+ greatest
+----------
+   155555
+(1 row)
Figure 6 shows the syntax of a LEAST expression.
+Selects the smallest value from a list of expressions.
+The expressions must all be convertible to a common data type, which will be the data type of the result.
+The NULL values in the list will be ignored. The result is NULL only if the results of all expressions are NULL.
+SELECT least(9000,2);
+ least
+-------
+     2
+(1 row)
Conditional Expression Functions describes the examples.
+Figure 7 shows the syntax of an NVL expression.
+If value1 is NULL, value2 is returned. Otherwise, value1 is returned.
+For example:
+SELECT nvl(null,1);
+ nvl
+-----
+   1
+(1 row)
+SELECT nvl('Hello World', 1);
+     nvl
+-------------
+ Hello World
+(1 row)
Figure 8 shows the syntax of an IF expression.
+If the value of bool_expr is true, expr1 is returned. Otherwise, expr2 is returned.
+Conditional Expression Functions describes the examples.
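+A minimal sketch, assuming the IF syntax shown in Figure 8:
+SELECT IF(2 > 1, 'true branch', 'false branch');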
+Figure 9 shows the syntax of a NULLIF expression.
+NULLIF returns NULL only if value1 is equal to value2; otherwise, it returns value1.
+Conditional Expression Functions describes the examples.
+Subquery expressions include the following types:
+Figure 1 shows the syntax of an EXISTS/NOT EXISTS expression.
+The parameter of an EXISTS expression is an arbitrary SELECT statement, or subquery. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of EXISTS is "true". If the subquery returns no rows, the result of EXISTS is "false".
+The subquery will generally only be executed long enough to determine whether at least one row is returned, not all the way to completion.
+For example:
+SELECT sr_reason_sk,sr_customer_sk FROM tpcds.store_returns WHERE EXISTS (SELECT d_dom FROM tpcds.date_dim WHERE d_dom = store_returns.sr_reason_sk and sr_customer_sk <10);
+ sr_reason_sk | sr_customer_sk
+--------------+----------------
+           13 |              2
+           22 |              5
+           17 |              7
+           25 |              7
+            3 |              7
+           31 |              5
+            7 |              7
+           14 |              6
+           20 |              4
+            5 |              6
+           10 |              3
+            1 |              5
+           15 |              2
+            4 |              1
+           26 |              3
+(15 rows)
Figure 2 shows the syntax of an IN/NOT IN expression.
+The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of IN is "true" if any equal subquery row is found. The result is "false" if no equal row is found (including the case where the subquery returns no rows).
+As usual under SQL's rules for Boolean combinations of null values: two rows are equal if all their corresponding columns are non-null and equal; the rows are unequal if any corresponding columns are non-null and unequal; otherwise, the result of the row comparison is NULL. If no equal right-hand row is found and at least one row comparison yields NULL, the result of IN is NULL, not false.
+For example:
+SELECT sr_reason_sk,sr_customer_sk FROM tpcds.store_returns WHERE sr_customer_sk IN (SELECT d_dom FROM tpcds.date_dim WHERE d_dom < 10);
+ sr_reason_sk | sr_customer_sk
+--------------+----------------
+           10 |              3
+           26 |              3
+           22 |              5
+           31 |              5
+            1 |              5
+           32 |              5
+           32 |              5
+            4 |              1
+           15 |              2
+           13 |              2
+           33 |              4
+           20 |              4
+           33 |              8
+            5 |              6
+           14 |              6
+           17 |              7
+            3 |              7
+           25 |              7
+            7 |              7
+(19 rows)
Figure 3 shows the syntax of an ANY/SOME expression.
+The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. The result of ANY is "true" if any true result is obtained. The result is "false" if no true result is found (including the case where the subquery returns no rows). SOME is a synonym of ANY. IN can be equivalently replaced with = ANY.
+For example:
+SELECT sr_reason_sk,sr_customer_sk FROM tpcds.store_returns WHERE sr_customer_sk < ANY (SELECT d_dom FROM tpcds.date_dim WHERE d_dom < 10);
+ sr_reason_sk | sr_customer_sk
+--------------+----------------
+           26 |              3
+           17 |              7
+           32 |              5
+           32 |              5
+           13 |              2
+           31 |              5
+           25 |              7
+            5 |              6
+            7 |              7
+           10 |              3
+            1 |              5
+           14 |              6
+            4 |              1
+            3 |              7
+           22 |              5
+           33 |              4
+           20 |              4
+           33 |              8
+           15 |              2
+(19 rows)
Figure 4 shows the syntax of an ALL expression.
+The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. The result of ALL is "true" if all rows yield true (including the case where the subquery returns no rows). The result is "false" if any false result is found.
+Example:
+SELECT sr_reason_sk,sr_customer_sk FROM tpcds.store_returns WHERE sr_customer_sk < all(SELECT d_dom FROM tpcds.date_dim WHERE d_dom < 10);
+ sr_reason_sk | sr_customer_sk
+--------------+----------------
+(0 rows)
expression IN (value [, ...])
+The parentheses on the right contain an expression list. The result of the expression on the left is compared with each expression in the list. The result of IN is true if the left-hand result equals any of the listed values, and false if it equals none of them.
+Example:
+SELECT 8000+500 IN (10000, 9000) AS RESULT;
+ result
+--------
+ f
+(1 row)
+If the left-hand expression yields null, or if there is no equal right-hand value and at least one right-hand expression yields null, the result of IN is null rather than false. This is consistent with SQL's normal rules for Boolean combinations of null values.
+expression NOT IN (value [, ...])
+The parentheses on the right contain an expression list. The result of the expression on the left is compared with each expression in the list. The result of NOT IN is true if the left-hand result equals none of the listed values, and false if it equals any of them.
+Example:
+SELECT 8000+500 NOT IN (10000, 9000) AS RESULT;
+ result
+--------
+ t
+(1 row)
+If the left-hand expression yields null, or if there is no equal right-hand value and at least one right-hand expression yields null, the result of NOT IN is null rather than true. This is consistent with SQL's normal rules for Boolean combinations of null values.
+In all situations, X NOT IN Y equals to NOT(X IN Y).
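A quick check of this equivalence:

SELECT 2 NOT IN (1, 3) AS a, NOT (2 IN (1, 3)) AS b;
 a | b
---+---
 t | t
(1 row)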
expression operator ANY (array expression)
expression operator SOME (array expression)

SELECT 8000+500 < SOME (array[10000,9000]) AS RESULT;
 result
--------
 t
(1 row)

SELECT 8000+500 < ANY (array[10000,9000]) AS RESULT;
 result
--------
 t
(1 row)
The right-hand parentheses contain an array expression, which must yield an array value. The left-hand expression is evaluated and compared with each element of the array using the given operator, which must yield a Boolean result. The result of ANY is true if any true result is obtained, and false if no true result is found.
If no comparison result is true and the array contains at least one null element, the result of ANY is null rather than false. This behavior follows SQL's standard rules for Boolean combinations of null values.
SOME is a synonym of ANY.
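For example, when no element compares true and the array contains a null, ANY yields null rather than false:

SELECT 8000+500 = ANY (array[10000, NULL]) AS RESULT;
 result
--------
 
(1 row)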
expression operator ALL (array expression)
The right-hand parentheses contain an array expression, which must yield an array value. The left-hand expression is evaluated and compared with each element of the array using the given operator, which must yield a Boolean result. The result of ALL is true if all comparisons yield true (including the case where the array has zero elements), and false if any false result is found.
If the array expression yields a null array, the result of ALL will be null. If the left-hand expression yields null, the result of ALL is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no false comparison result is obtained, the result of ALL will be null, not true (again, assuming a strict comparison operator). This behavior follows SQL's standard rules for Boolean combinations of null values.
SELECT 8000+500 < ALL (array[10000,9000]) AS RESULT;
 result
--------
 t
(1 row)
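By contrast, if the array contains a null element and no comparison yields false, ALL returns null rather than true:

SELECT 8000+500 < ALL (array[10000, NULL]) AS RESULT;
 result
--------
 
(1 row)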
Syntax:
row_constructor operator row_constructor
Both sides of the row expression are row constructors. The two row values must have the same number of fields, and the fields are compared pairwise. Row comparison supports the operators =, <>, <, <=, >, and >=, or a similar operator.
The operators = and <> work slightly differently from the others. Two rows are equal if all their corresponding fields are non-null and equal; the rows are unequal if any corresponding fields are non-null and unequal; otherwise, the result of the comparison is null.
For the operators <, <=, >, and >=, the row fields are compared from left to right, stopping at the first pair that is unequal or contains a null. If that pair contains at least one null, the result is null; otherwise, the comparison of that pair of fields determines the result.
For example:

SELECT ROW(1,2,NULL) < ROW(1,3,0) AS RESULT;
 result
--------
 t
(1 row)
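Similarly, a null field makes a row equality comparison yield null rather than false:

SELECT ROW(1,2,NULL) = ROW(1,2,0) AS RESULT;
 result
--------
 
(1 row)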
SQL is a typed language. That is, every data item has an associated data type which determines its behavior and allowed usage. GaussDB(DWS) has an extensible type system that is more general and flexible than other SQL implementations. Hence, most type conversion behavior in GaussDB(DWS) is governed by general rules. This allows the use of mixed-type expressions.
The GaussDB(DWS) scanner/parser divides lexical elements into five fundamental categories: integers, floating-point numbers, strings, identifiers, and keywords. Constants of most non-numeric types are first classified as strings. The SQL language definition allows specifying type names with constant strings. For example, the query:

SELECT text 'Origin' AS "label", point '(0,0)' AS "value";
 label  | value
--------+-------
 Origin | (0,0)
(1 row)
has two literal constants, of type text and point. If a type is not specified for a string literal, then the placeholder type unknown is assigned initially.
There are four fundamental SQL constructs requiring distinct type conversion rules in the GaussDB(DWS) parser:
- Function calls: Much of the SQL type system is built around a rich set of functions. Functions can have one or more arguments. Since SQL permits function overloading, the function name alone does not uniquely identify the function to be called. The parser must select the right function based on the data types of the supplied arguments.
- Operators: SQL allows expressions with prefix and postfix unary (one-argument) operators, as well as binary (two-argument) operators. Like functions, operators can be overloaded, so the same problem of selecting the right operator exists.
- Value storage: SQL INSERT and UPDATE statements place the results of expressions into a table. The expressions in the statement must be matched up with, and perhaps converted to, the types of the target columns.
- UNION, CASE, and related constructs: Since all query results from a unionized SELECT statement must appear in a single set of columns, the types of the results of each SELECT clause must be matched up and converted to a uniform set. Similarly, the result expressions of a CASE construct must be converted to a common type so that the CASE expression as a whole has a known output type. The same holds for ARRAY constructs, and for the GREATEST and LEAST functions.
The system catalog pg_cast stores information about which conversions, or casts, exist between which data types, and how to perform those conversions. For details, see PG_CAST.
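For instance, you can query pg_cast directly to see which casts are defined for a source type; castcontext shows whether a cast is applied implicitly (i), in assignment (a), or only explicitly (e). A minimal sketch; the exact rows returned depend on your installation:

SELECT castsource::regtype, casttarget::regtype, castcontext
FROM pg_cast
WHERE castsource = 'integer'::regtype;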
The return type and conversion behavior of an expression are determined during semantic analysis. Data types are divided into several basic type categories, including boolean, numeric, string, bitstring, datetime, timespan, geometric, and network. Within each category there can be one or more preferred types, which are preferred when there is a choice of possible types. With careful selection of preferred types and available implicit casts, it is possible to ensure that ambiguous expressions (those with multiple candidate parsing solutions) can be resolved in a useful way.
All type conversion rules are designed on the principle that implicit conversions should never have surprising or unpredictable outcomes. For example, an empty string remains distinct from a null value:

create table t1(no int,col varchar);
insert into t1 values(1,'');
insert into t1 values(2,null);
select * from t1 where col is null;
 no | col
----+-----
  2 |
(1 row)
select * from t1 where col='';
 no | col
----+-----
  1 |
(1 row)
Candidate operators (or functions) are winnowed through the following checks:
a. Discard candidate operators for which the input types do not match and cannot be converted (using an implicit conversion) to match. unknown literals are assumed to be convertible to anything for this purpose. If only one candidate remains, use it; else continue to the next step.
b. Run through all candidates and keep those with the most exact matches on input types. Domains are considered the same as their base type for this purpose. Keep all candidates if there are no exact matches. If only one candidate remains, use it; else continue to the next step.
c. Run through all candidates and keep those that accept preferred types (of the input data type's type category) at the most positions where type conversion will be required. Keep all candidates if none accepts preferred types. If only one candidate remains, use it; else continue to the next step.
d. If any input arguments are of unknown types, check the type categories accepted at those argument positions by the remaining candidates. At each position, select the string category if any candidate accepts that category. (This bias towards string is appropriate since an unknown-type literal looks like a string.) Otherwise, if all the remaining candidates accept the same type category, select that category; otherwise fail because the correct choice cannot be deduced without more clues. Now discard candidates that do not accept the selected type category. Furthermore, if any candidate accepts a preferred type in that category, discard candidates that accept non-preferred types for that argument. Keep all candidates if none survives these tests. If only one candidate remains, use it; else continue to the next step.
e. If there are both unknown and known-type arguments, and all the known-type arguments have the same type, assume that the unknown arguments are also of that type, and check which candidates can accept that type at the unknown-argument positions. If exactly one candidate passes this test, use it. Otherwise, an error is reported.
Example 1: factorial operator type resolution. There is only one factorial operator (postfix !) defined in the system catalog, and it takes an argument of type bigint. The scanner assigns an initial type of integer to the argument in this query expression:

SELECT 40 ! AS "40 factorial";
                   40 factorial
--------------------------------------------------
 815915283247897734345611269596115894272000000000
(1 row)

So the parser does a type conversion on the operand, and the query is equivalent to:

SELECT CAST(40 AS bigint) ! AS "40 factorial";
Example 2: string concatenation operator type resolution. A string-like syntax is used for working with string types and for working with complex extension types. Strings with unspecified type are matched with likely operator candidates. An example with one unspecified argument:

SELECT text 'abc' || 'def' AS "text and unknown";
 text and unknown
------------------
 abcdef
(1 row)

In this example, the parser looks for an operator whose parameters are of the text type. Such an operator is found.
Here is a concatenation of two values of unspecified types:

SELECT 'abc' || 'def' AS "unspecified";
 unspecified
-------------
 abcdef
(1 row)

In this case there is no initial hint for which type to use, since no types are specified in the query. So, the parser looks for all candidate operators and finds that there are candidates accepting both string-category and bit-string-category inputs. Since string category is preferred when available, that category is selected, and then the preferred type for strings, text, is used as the specific type to resolve the unknown-type literals.
Example 3: absolute-value and negation operator type resolution. The GaussDB(DWS) operator catalog has several entries for the prefix operator @. All the entries implement absolute-value operations for various numeric data types. One of these entries is for type float8, which is the preferred type in the numeric category. Therefore, GaussDB(DWS) will use that entry when faced with an unknown input:

SELECT @ '-4.5' AS "abs";
 abs
-----
 4.5
(1 row)

Here the system has implicitly resolved the unknown-type literal as type float8 before applying the chosen operator.
Example 4: array inclusion operator type resolution. The following is an example of resolving an operator with one known and one unknown input:

SELECT array[1,2] <@ '{1,2,3}' AS "is subset";
 is subset
-----------
 t
(1 row)

In the pg_operator table of GaussDB(DWS), several entries correspond to the infix operator <@, but the only two that could accept an integer array on the left-hand side are array inclusion (anyarray <@ anyarray) and range inclusion (anyelement <@ anyrange). Because none of these polymorphic pseudo-types (see Pseudo-Types) is considered preferred, the parser cannot resolve the ambiguity on that basis. However, check e above tells it to assume that the unknown-type literal is of the same type as the other input, that is, integer array. Now only one of the two operators can match, so array inclusion is selected. (Had range inclusion been selected, an error would have been reported, because the string does not have the right format to be a range literal.)
If the search path contains multiple functions with identical argument types, only the one appearing earliest in the path is considered.
Example 1: rounding function argument type resolution. There is only one round function that takes two arguments; it takes a first argument of type numeric and a second argument of type integer. So the following query automatically converts the first argument of type integer to numeric:

SELECT round(4, 4);
 round
--------
 4.0000
(1 row)

That query is converted by the parser to:

SELECT round(CAST (4 AS numeric), 4);

Since numeric constants with decimal points are initially assigned the type numeric, the following query requires no type conversion and might therefore be slightly more efficient:

SELECT round(4.0, 4);
Example 2: substring function type resolution. There are several substr functions, one of which takes types text and integer. If called with a string constant of unspecified type, the system chooses the candidate function that accepts an argument of the preferred category string (namely of type text).

SELECT substr('1234', 3);
 substr
--------
     34
(1 row)

If the string is declared to be of type varchar, as might be the case if it comes from a table, then the parser will try to convert it to text:

SELECT substr(varchar '1234', 3);
 substr
--------
     34
(1 row)

This is transformed by the parser to effectively become:

SELECT substr(CAST (varchar '1234' AS text), 3);

The parser learns from the pg_cast catalog that text and varchar are binary-compatible, meaning that one can be passed to a function that accepts the other without doing any physical conversion. Therefore, no type conversion is inserted in this case.
And, if the function is called with an argument of type integer, the parser will try to convert that to text:

SELECT substr(1234, 3);
 substr
--------
     34
(1 row)

This is transformed by the parser to effectively become:

SELECT substr(CAST (1234 AS text), 3);
 substr
--------
     34
(1 row)
The following value-storage example inserts a text literal into a char(20) column:

CREATE TABLE x1
(
  customer_sk  integer,
  customer_id  char(20),
  first_name   char(6),
  last_name    char(8)
)
with (orientation = column, compression=middle)
distribute by hash (last_name);

INSERT INTO x1(customer_sk, customer_id, first_name) VALUES (3769, 'abcdef', 'Grace');

SELECT customer_id, octet_length(customer_id) FROM x1;
     customer_id      | octet_length
----------------------+--------------
 abcdef               |           20
(1 row)
DROP TABLE x1;

What has really happened here is that the unknown literal 'abcdef' is resolved to type text by default. The text value is then converted to bpchar ("blank-padded char", the internal name of the character data type) to match the target column type. Since the conversion from text to bpchar is binary-coercible, this conversion does not insert any real function call. Finally, the sizing function bpchar(bpchar, integer, boolean) is found in the system catalog and applied to the value and the declared column length. This type-specific function performs the required length check and addition of padding spaces.
SQL UNION constructs must match up possibly dissimilar types to become a single result set. Since all query results from a SELECT UNION statement must appear in a single set of columns, the types of the results of each SELECT clause must be matched up and converted to a uniform set. Similarly, the result expressions of a CASE construct must be converted to a common type so that the CASE expression as a whole has a known output type. The same holds for ARRAY constructs, and for the GREATEST and LEAST functions.
The typcategory column in the pg_type system catalog indicates the data type category, and typispreferred indicates whether a type is preferred within its typcategory.
Example 1: type resolution with unknown types in a union. Here, the unknown-type literal 'b' will be resolved to type text.

SELECT text 'a' AS "text" UNION SELECT 'b';
 text
------
 a
 b
(2 rows)

Example 2: type resolution in a simple union. The literal 1.2 is of type numeric, and the integer value 1 can be cast implicitly to numeric, so that type is used.

SELECT 1.2 AS "numeric" UNION SELECT 1;
 numeric
---------
       1
     1.2
(2 rows)

Example 3: type resolution in a transposed union. Here, since type real cannot be implicitly cast to integer, but integer can be implicitly cast to real, the union result type is resolved as real.

SELECT 1 AS "real" UNION SELECT CAST('2.2' AS REAL);
 real
------
    1
  2.2
(2 rows)

Example 4: type resolution in the COALESCE function with input values of types int and varchar. Type resolution fails in ORA-compatible mode; the types are resolved as varchar in TD-compatible mode, and as text in MySQL-compatible mode.
Create the ora_db, td_db, and mysql_db databases by setting dbcompatibility to ORA, TD, and MySQL, respectively:

CREATE DATABASE ora_db dbcompatibility = 'ORA';
CREATE DATABASE td_db dbcompatibility = 'TD';
CREATE DATABASE mysql_db dbcompatibility = 'MySQL';

Switch to ora_db, create table t1, and show the execution plan of a query that passes arguments of types int and varchar to COALESCE:

gaussdb=# \c ora_db
ora_db=# CREATE TABLE t1(a int, b varchar(10));
ora_db=# EXPLAIN SELECT coalesce(a, b) FROM t1;
ERROR:  COALESCE types integer and character varying cannot be matched
CONTEXT:  referenced column: coalesce

Switch to td_db, create table t2, and run the same query:

ora_db=# \c td_db
td_db=# CREATE TABLE t2(a int, b varchar(10));
td_db=# EXPLAIN VERBOSE select coalesce(a, b) from t2;
                                           QUERY PLAN
-----------------------------------------------------------------------------------------------
  id |                  operation                   | E-rows | E-distinct | E-width | E-costs
 ----+----------------------------------------------+--------+------------+---------+---------
   1 | ->  Data Node Scan on "__REMOTE_FQS_QUERY__" |      0 |            |       0 | 0.00

 Targetlist Information (identified by plan id)
 -------------------------------------------------------------------------------------------
   1 --Data Node Scan on "__REMOTE_FQS_QUERY__"
         Output: (COALESCE((t2.a)::character varying, t2.b))
         Node/s: All datanodes
         Remote query: SELECT COALESCE(a::character varying, b) AS "coalesce" FROM public.t2
(10 rows)

Switch to mysql_db, create table t3, and run the same query:

td_db=# \c mysql_db
mysql_db=# CREATE TABLE t3(a int, b varchar(10));
mysql_db=# EXPLAIN VERBOSE select coalesce(a, b) from t3;
                                           QUERY PLAN
-----------------------------------------------------------------------------------------------
  id |                  operation                   | E-rows | E-distinct | E-width | E-costs
 ----+----------------------------------------------+--------+------------+---------+---------
   1 | ->  Data Node Scan on "__REMOTE_FQS_QUERY__" |      0 |            |       0 | 0.00

 Targetlist Information (identified by plan id)
 ------------------------------------------------------------------------------------
   1 --Data Node Scan on "__REMOTE_FQS_QUERY__"
         Output: (COALESCE((t3.a)::text, (t3.b)::text))
         Node/s: All datanodes
         Remote query: SELECT COALESCE(a::text, b::text) AS "coalesce" FROM public.t3
(10 rows)

Finally, switch back to the gaussdb database:

mysql_db=# \c gaussdb
Textual search operators have existed in databases for years. GaussDB(DWS) provides the ~, ~*, LIKE, and ILIKE operators for textual data types, but they lack many essential properties required by modern information systems. They can be supplemented by indexes and dictionaries.
The hybrid data warehouse (standalone deployment) does not support full-text search.
Regular expressions are not sufficient because they cannot easily handle derived words. For example, you might miss documents that contain "satisfies" when searching for "satisfy", although you probably would like to find them. It is possible to use OR to search for multiple derived forms, but this is tedious and error-prone, because some words can have several thousand derivatives.
It is useful to identify various classes of tokens, for example, numbers, words, complex words, and email addresses, so that they can be processed differently. In principle, token classes depend on the specific application, but for most purposes it is adequate to use a predefined set of classes.
A lexeme is a string, just like a token, but it has been normalized so that different forms of the same word are made alike. For example, normalization almost always includes folding upper-case letters to lower-case, and often involves removal of suffixes (such as s or es in English). This allows searches to find variant forms of the same word, without tediously entering all the possible variants. Also, this step typically eliminates stop words, which are words so common that they are useless for searching. (In short, tokens are raw fragments of the document text, while lexemes are words that are believed useful for indexing and searching.) GaussDB(DWS) uses dictionaries to perform this step and provides various standard dictionaries.
Documents can be stored in a preprocessed form; for example, each document can be represented as a sorted array of normalized lexemes. Along with the lexemes, it is often desirable to store positional information for proximity ranking, so that a document containing a "denser" region of query words is assigned a higher rank than one with scattered query words.
Dictionaries allow fine-grained control over how tokens are normalized. With appropriate dictionaries, you can define stop words that should not be indexed.
A data type tsvector is provided for storing preprocessed documents, along with a type tsquery for storing query conditions. For details, see Text Search Types. For details about the functions and operators available for these data types, see Text Search Functions and Operators. The most important of these is the match operator @@, introduced in Basic Text Matching.
A document is the unit of searching in a full text search system; for example, a magazine article or email message. The text search engine must be able to parse documents and store associations of lexemes (keywords) with their parent document. Later, these associations are used to search for documents that contain query words.
For searches within GaussDB(DWS), a document is normally a textual column within a row of a database table, or possibly a combination (concatenation) of such columns, perhaps stored in several tables or obtained dynamically. In other words, a document can be constructed from different parts for indexing, and it might not be stored anywhere as a whole. For example:

SELECT d_dow || '-' || d_dom || '-' || d_fy_week_seq AS identify_serials FROM tpcds.date_dim WHERE d_fy_week_seq = 1;
 identify_serials
------------------
 5-6-1
 0-8-1
 2-3-1
 3-4-1
 4-5-1
 1-2-1
 6-7-1
(7 rows)

Actually, in such example queries, coalesce should be used to prevent a single NULL attribute from causing a NULL result for the whole document.
Another possibility is to store the documents as simple text files in the file system. In this case, the database can be used to store the full text index and to execute searches, and some unique identifier can be used to retrieve the document from the file system. However, retrieving files from outside the database requires system administrator permissions or special function support, so this is less convenient than keeping all the data inside the database. Also, keeping everything inside the database allows easy access to document metadata to assist in indexing and display.
For text search purposes, each document must be reduced to the preprocessed tsvector format. Searching and relevance-based ranking are performed entirely on the tsvector representation of a document; the original text is retrieved only when the document has been selected for display to a user. We therefore often speak of the tsvector as being the document, but it is only a compact representation of the full document.
Full text search in GaussDB(DWS) is based on the match operator @@, which returns true if a tsvector (document) matches a tsquery (query). It does not matter which data type is written first:

SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector @@ 'cat & rat'::tsquery AS RESULT;
 result
--------
 t
(1 row)

SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::tsvector AS RESULT;
 result
--------
 f
(1 row)

As the above example suggests, a tsquery is not raw text, any more than a tsvector is. A tsquery contains search terms, which must be already-normalized lexemes, and may combine multiple terms using AND, OR, and NOT operators. For details, see Text Search Types. The functions to_tsquery and plainto_tsquery are helpful in converting user-written text into a proper tsquery, for example by normalizing words appearing in the text. Similarly, to_tsvector is used to parse and normalize a document string. So in practice a text search match would look more like this:

SELECT to_tsvector('fat cats ate fat rats') @@ to_tsquery('fat & rat') AS RESULT;
 result
--------
 t
(1 row)

Observe that this match would not succeed if written as follows:

SELECT 'fat cats ate fat rats'::tsvector @@ to_tsquery('fat & rat') AS RESULT;
 result
--------
 f
(1 row)

In this form, no normalization of the word "rats" occurs, so "rats" does not match "rat".
The @@ operator also supports text input, allowing explicit conversion of a text string to tsvector or tsquery to be skipped in simple cases. The variants available are:

tsvector @@ tsquery
tsquery @@ tsvector
text @@ tsquery
text @@ text

We have already seen the first two of these. The form text @@ tsquery is equivalent to to_tsvector(text) @@ tsquery. The form text @@ text is equivalent to to_tsvector(text) @@ plainto_tsquery(text).
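For example, the text @@ text form below is implicitly to_tsvector('fat cats ate fat rats') @@ plainto_tsquery('fat rat'), assuming the default configuration is english:

SELECT 'fat cats ate fat rats'::text @@ 'fat rat'::text AS RESULT;
 result
--------
 t
(1 row)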
Full text search functionality includes the ability to do many more things: skip indexing certain words (stop words), process synonyms, and use sophisticated parsing, for example, parsing based on more than just white space. This functionality is controlled by text search configurations. GaussDB(DWS) comes with predefined configurations for many languages, and you can easily create your own configurations. (The \dF command of gsql shows all available configurations.)
During installation an appropriate configuration is selected and default_text_search_config is set accordingly in postgresql.conf. If you are using the same text search configuration for the entire cluster, you can use the value in postgresql.conf. To use different configurations throughout the cluster but the same configuration within any one database, use ALTER DATABASE ... SET. Otherwise, you can set default_text_search_config in each session.
Each text search function that depends on a configuration has an optional argument, so that the configuration to use can be specified explicitly. default_text_search_config is used only when this argument is omitted.
To make it easier to build custom text search configurations, a configuration is built up from simpler database objects. GaussDB(DWS)'s text search facility provides several types of configuration-related database objects: text search parsers, text search dictionaries, text search templates, and text search configurations.
It is possible to do a full text search without an index.
DROP SCHEMA IF EXISTS tsearch CASCADE;

CREATE SCHEMA tsearch;

CREATE TABLE tsearch.pgweb(id int, body text, title text, last_mod_date date);

INSERT INTO tsearch.pgweb VALUES(1, 'Philology is the study of words, especially the history and development of the words in a particular language or group of languages.', 'Philology', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(2, 'Mathematics is the science that deals with the logic of shape, quantity and arrangement.', 'Mathematics', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(3, 'Computer science is the study of processes that interact with data and that can be represented as data in the form of programs.', 'Computer science', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(4, 'Chemistry is the scientific discipline involved with elements and compounds composed of atoms, molecules and ions.', 'Chemistry', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(5, 'Geography is a field of science devoted to the study of the lands, features, inhabitants, and phenomena of the Earth and planets.', 'Geography', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(6, 'History is a subject studied in schools, colleges, and universities that deals with events that have happened in the past.', 'History', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(7, 'Medical science is the science of dealing with the maintenance of health and the prevention and treatment of disease.', 'Medical science', '2010-1-1');

INSERT INTO tsearch.pgweb VALUES(8, 'Physics is one of the most fundamental scientific disciplines, and its main goal is to understand how the universe behaves.', 'Physics', '2010-1-1');

SELECT id, body, title FROM tsearch.pgweb WHERE to_tsvector('english', body) @@ to_tsquery('english', 'science');
 id | body                                                                                                                               | title
----+------------------------------------------------------------------------------------------------------------------------------------+------------------
  2 | Mathematics is the science that deals with the logic of shape, quantity and arrangement.                                          | Mathematics
  3 | Computer science is the study of processes that interact with data and that can be represented as data in the form of programs.   | Computer science
  5 | Geography is a field of science devoted to the study of the lands, features, inhabitants, and phenomena of the Earth and planets. | Geography
  7 | Medical science is the science of dealing with the maintenance of health and the prevention and treatment of disease.             | Medical science
(4 rows)

This query will also find related words, such as sciences, since all of these are reduced to the same normalized lexeme.
The query above specifies that the english configuration is to be used to parse and normalize the strings. Alternatively, we could omit the configuration parameter and use the configuration set by default_text_search_config:
SHOW default_text_search_config;
 default_text_search_config
----------------------------
 pg_catalog.english
(1 row)

SELECT id, body, title FROM tsearch.pgweb WHERE to_tsvector(body) @@ to_tsquery('science');
 id | body                                                                                                                               | title
----+------------------------------------------------------------------------------------------------------------------------------------+------------------
  2 | Mathematics is the science that deals with the logic of shape, quantity and arrangement.                                          | Mathematics
  3 | Computer science is the study of processes that interact with data and that can be represented as data in the form of programs.   | Computer science
  5 | Geography is a field of science devoted to the study of the lands, features, inhabitants, and phenomena of the Earth and planets. | Geography
  7 | Medical science is the science of dealing with the maintenance of health and the prevention and treatment of disease.             | Medical science
(4 rows)

SELECT title FROM tsearch.pgweb WHERE to_tsvector(title || ' ' || body) @@ to_tsquery('treatment & science') ORDER BY last_mod_date DESC LIMIT 10;
      title
-----------------
 Medical science
(1 row)

For clarity, we omitted the coalesce function calls that would be needed to find rows containing NULL in one of the two columns.
The preceding examples show queries without indexes. Most applications will find this approach too slow, except perhaps for occasional ad-hoc searches; practical use of text searching usually requires creating an index.
You can create a GIN index to speed up text searches:

CREATE INDEX pgweb_idx_1 ON tsearch.pgweb USING gin(to_tsvector('english', body));

The to_tsvector() function accepts one or two arguments.
If the one-argument form is used, the system uses the configuration specified by default_text_search_config.
When creating an index, the two-argument form must be used, or the index content may become inconsistent. Only text search functions that specify a configuration name can be used in expression indexes. Index contents must not be affected by default_text_search_config; otherwise, different entries could contain tsvectors created with different text search configurations, and there would be no way to guess which was which. It would be impossible to dump and restore such an index correctly.
Because the two-argument version of to_tsvector was used in the index above, only a query that uses the two-argument version of to_tsvector with the same configuration name will use that index. That is, WHERE to_tsvector('english', body) @@ 'a & b' can use the index, but WHERE to_tsvector(body) @@ 'a & b' cannot. This ensures that an index is used only with the same configuration that was used to create its entries.
More complex expression indexes can be set up when the configuration used for each entry is determined separately. For example:

CREATE INDEX pgweb_idx_2 ON tsearch.pgweb USING gin(to_tsvector('zhparser', body));

In this example, zhparser supports only the UTF-8 or GBK database encoding. If the SQL_ASCII encoding is used, an error is reported.
Here, body is a column in the pgweb table. Using a column to supply the configuration name allows mixed configurations in the same index while recording which configuration was used for each index entry. This would be useful, for example, if the document collection contained documents in different languages. Again, queries that are meant to use the index must be phrased to match; for example, WHERE to_tsvector(config_name, body) @@ 'a & b' must match the to_tsvector expression used in the index.
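A minimal sketch of such a column-driven index, assuming a hypothetical config_name column of type regconfig is added to the table:

ALTER TABLE tsearch.pgweb ADD COLUMN config_name regconfig;
CREATE INDEX pgweb_idx_cfg ON tsearch.pgweb USING gin(to_tsvector(config_name, body));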
Indexes can even concatenate columns:

CREATE INDEX pgweb_idx_3 ON tsearch.pgweb USING gin(to_tsvector('english', title || ' ' || body));

Another approach is to create a separate tsvector column to hold the output of to_tsvector. This example is a concatenation of title and body, using coalesce to ensure that one column will still be indexed when the other is NULL:

ALTER TABLE tsearch.pgweb ADD COLUMN textsearchable_index_col tsvector;
UPDATE tsearch.pgweb SET textsearchable_index_col = to_tsvector('english', coalesce(title,'') || ' ' || coalesce(body,''));

Then, create a GIN index to speed up the search:

CREATE INDEX textsearch_idx_4 ON tsearch.pgweb USING gin(textsearchable_index_col);

Now you are ready to perform a fast full text search:

SELECT title
FROM tsearch.pgweb
WHERE textsearchable_index_col @@ to_tsquery('science & Computer')
ORDER BY last_mod_date DESC
LIMIT 10;

      title
------------------
 Computer science
(1 row)

One advantage of the separate-column approach over an expression index is that queries need not specify the text search configuration explicitly in order to use the index; as shown in the preceding example, the query can depend on default_text_search_config. Another advantage is that searches are faster, since to_tsvector does not have to be re-run to verify index matches. The expression-index approach is simpler to set up, however, and it requires less disk space, since the tsvector representation is not stored explicitly.
The following is an example of using an index. Run the following statements in a database that uses the UTF-8 or GBK encoding:

create table table1 (c_int int,c_bigint bigint,c_varchar varchar,c_text text) with(orientation=row);

create text search configuration ts_conf_1(parser=POUND);
create text search configuration ts_conf_2(parser=POUND) with(split_flag='%');

set default_text_search_config='ts_conf_1';
create index idx1 on table1 using gin(to_tsvector(c_text));

set default_text_search_config='ts_conf_2';
create index idx2 on table1 using gin(to_tsvector(c_text));

select c_varchar,to_tsvector(c_varchar) from table1 where to_tsvector(c_text) @@ plainto_tsquery('¥#@……&**') and to_tsvector(c_text) @@ plainto_tsquery('Company') and c_varchar is not null order by 1 desc limit 3;

In this example, table1 has two GIN indexes, idx1 and idx2, created on the same column c_text but under different settings of default_text_search_config. Because the one-argument form of to_tsvector was used, the text search configuration applied to each index entry depends on the setting that was in effect when the index was built, so the two indexes do not behave like ordinary duplicate indexes on the same column. As a result, using idx1 and idx2 for the same query can return different results.
To avoid different query results caused by different GIN indexes, ensure that only one GIN index is created on a given column of a physical table.
GaussDB(DWS) provides the function to_tsvector for converting a document to the tsvector data type.

to_tsvector([ config regconfig, ] document text) returns tsvector

to_tsvector parses a textual document into tokens, reduces the tokens to lexemes, and returns a tsvector, which lists the lexemes together with their positions in the document. The document is processed according to the specified or default text search configuration. Here is a simple example:

SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats');
                     to_tsvector
-----------------------------------------------------
 'ate':9 'cat':3 'fat':2,11 'mat':7 'rat':12 'sat':4

In the preceding example, the resulting tsvector does not contain the words a, on, or it; the word rats became rat; and the punctuation sign - was ignored.
The to_tsvector function internally calls a parser, which breaks the document text into tokens and assigns a type to each token. For each token, a list of dictionaries is consulted, where the list can vary depending on the token type. The first dictionary that recognizes the token emits one or more normalized lexemes to represent the token. For example, a stemmer dictionary might reduce rats to rat, while a stop-word dictionary recognizes common words such as a and discards them.
The choices of parser, dictionaries, and which types of tokens to index are determined by the selected text search configuration. It is possible to have many different configurations in the same database, and predefined configurations are available for various languages. In our example we used the default configuration english for the English language.
The function setweight can be used to label the entries of a tsvector with a given weight, where a weight is one of the letters A, B, C, or D. This is typically used to mark entries coming from different parts of a document, such as title versus body. Later, this information can be used for ranking search results.
Because to_tsvector(NULL) returns NULL, you are advised to use coalesce whenever a column might be NULL. Here is the recommended method for creating a tsvector from a structured document:

CREATE TABLE tsearch.tt (id int, title text, keyword text, abstract text, body text, ti tsvector);

INSERT INTO tsearch.tt(id, title, keyword, abstract, body) VALUES (1, 'book', 'literature', 'Ancient poetry','Tang poem Song iambic verse');

UPDATE tsearch.tt SET ti =
    setweight(to_tsvector(coalesce(title,'')), 'A') ||
    setweight(to_tsvector(coalesce(keyword,'')), 'B') ||
    setweight(to_tsvector(coalesce(abstract,'')), 'C') ||
    setweight(to_tsvector(coalesce(body,'')), 'D');
DROP TABLE tsearch.tt;

Here we have used setweight to label the source of each lexeme in the finished tsvector, and then merged the labeled tsvector values using the tsvector concatenation operator ||. For details about these operations, see Manipulating tsvector.
GaussDB(DWS) provides the functions to_tsquery and plainto_tsquery for converting a query to the tsquery data type. to_tsquery offers access to more features than plainto_tsquery, but is less forgiving about its input.

to_tsquery([ config regconfig, ] querytext text) returns tsquery

to_tsquery creates a tsquery value from querytext, which must consist of single tokens separated by the Boolean operators & (AND), | (OR), and ! (NOT). These operators can be grouped using parentheses. In other words, the input to to_tsquery must already follow the general rules for tsquery input, as described in Text Search Types. The difference is that while basic tsquery input takes the tokens at face value, to_tsquery normalizes each token to a lexeme using the specified or default configuration, and discards any tokens that are stop words according to the configuration. For example:

SELECT to_tsquery('english', 'The & Fat & Rats');
  to_tsquery
---------------
 'fat' & 'rat'
(1 row)

As in basic tsquery input, one or more weights can be attached to each lexeme to restrict it to matching only tsvector lexemes of those weights. For example:

SELECT to_tsquery('english', 'Fat | Rats:AB');
    to_tsquery
------------------
 'fat' | 'rat':AB
(1 row)

Also, an asterisk (*) can be attached to a lexeme to specify prefix matching:

SELECT to_tsquery('supern:*A & star:A*B');
        to_tsquery
--------------------------
 'supern':*A & 'star':*AB
(1 row)

Such a lexeme will match any word in a tsvector that begins with the given string and has the specified weight(s).
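For example, a prefix query matches a document word that begins with the given string:

SELECT to_tsvector('english', 'supernova stars') @@ to_tsquery('english', 'supern:*') AS RESULT;
 result
--------
 t
(1 row)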
plainto_tsquery([ config regconfig, ] querytext text) returns tsquery

plainto_tsquery transforms unformatted text querytext to tsquery. The text is parsed and normalized much as for to_tsvector, then the & (AND) Boolean operator is inserted between surviving words.
For example:

SELECT plainto_tsquery('english', 'The Fat Rats');
 plainto_tsquery
-----------------
 'fat' & 'rat'
(1 row)

Note that plainto_tsquery cannot recognize Boolean operators, weight labels, or prefix-match labels in its input:

SELECT plainto_tsquery('english', 'The Fat & Rats:C');
   plainto_tsquery
---------------------
 'fat' & 'rat' & 'c'
(1 row)

Here, all the input punctuation was discarded as being space symbols.
Ranking attempts to measure how relevant documents are to a particular query, so that when there are many matches the most relevant ones can be shown first. GaussDB(DWS) provides two predefined ranking functions, which take into account lexical, proximity, and structural information; that is, they consider how often the query terms appear in the document, how close together the terms are in the document, and how important the part of the document is where they occur. However, the concept of relevancy is vague and application-specific. Different applications might require additional information for ranking, for example, document modification time. The built-in ranking functions are only examples. You can write your own ranking functions and/or combine their results with additional factors to fit your specific needs.
The two ranking functions currently available are:

ts_rank([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ]) returns float4

Ranks vectors based on the frequency of their matching lexemes.

ts_rank_cd([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ]) returns float4

Ranks vectors using cover density. This function requires positional information in its input, so it will not work on "stripped" tsvector values; for such input it always returns zero.
For both of these functions, the optional weights argument offers the ability to weigh word instances more or less heavily depending on how they are labeled. The weight array specifies how heavily to weigh each category of word, in the order:

{D-weight, C-weight, B-weight, A-weight}

If no weights are provided, these defaults are used: {0.1, 0.2, 0.4, 1.0}.
Typically, weights are used to mark words from special areas of the document, like the title or an initial abstract, so they can be treated as more or less important than words in the document body.
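As a sketch, passing an explicit weight array boosts the rank contribution of A-labeled lexemes; the exact value returned depends on the ranking implementation, so no output is shown here:

SELECT ts_rank('{0.1, 0.2, 0.4, 1.0}',
               setweight(to_tsvector('english', 'The quick brown fox'), 'A'),
               to_tsquery('english', 'fox'));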
Since a longer document has a greater chance of containing a query term, it is reasonable to take document size into account; for example, a hundred-word document with five instances of a search word is probably more relevant than a thousand-word document with five instances. Both ranking functions take an integer normalization option that specifies whether and how a document's length should impact its rank. The integer option controls several behaviors, so it is a bit mask: you can combine one or more behaviors using a vertical bar (|) (for example, 2|4). The defined flag values are: 0 (ignore document length, the default), 1 (divide the rank by 1 + the logarithm of the document length), 2 (divide by the document length), 4 (divide by the mean harmonic distance between extents, ts_rank_cd only), 8 (divide by the number of unique words in the document), 16 (divide by 1 + the logarithm of the number of unique words), and 32 (divide by rank + 1).
If more than one flag bit is specified, the transformations are applied in the order listed above.
It is important to note that the ranking functions do not use any global information, so it is impossible to produce a fair normalization to 1% or 100% as sometimes desired. Normalization option 32 (rank/(rank+1)) can be applied to scale all ranks into the range zero to one, but of course this is just a cosmetic change; it will not affect the ordering of the search results.
The following example selects the top 10 matches. Run the following statements in a database that uses the UTF-8 or GBK encoding:

SELECT id, title, ts_rank_cd(to_tsvector(body), query) AS rank
FROM tsearch.pgweb, to_tsquery('science') query
WHERE query @@ to_tsvector(body)
ORDER BY rank DESC
LIMIT 10;
 id |      title       | rank
----+------------------+------
 11 | Philology        |   .2
  2 | Mathematics      |   .1
 12 | Geography        |   .1
 13 | Computer science |   .1
(4 rows)

This is the same example using normalized ranking:

SELECT id, title, ts_rank_cd(to_tsvector(body), query, 32 /* rank/(rank+1) */ ) AS rank
FROM tsearch.pgweb, to_tsquery('science') query
WHERE query @@ to_tsvector(body)
ORDER BY rank DESC
LIMIT 10;
 id |      title       |   rank
----+------------------+----------
 11 | Philology        |  .166667
  2 | Mathematics      | .0909091
 12 | Geography        | .0909091
 13 | Computer science | .0909091
(4 rows)

The following example ranks a query that uses Chinese word segmentation:

CREATE TABLE tsearch.ts_zhparser(id int, body text);
INSERT INTO tsearch.ts_zhparser VALUES (1, 'sort');
INSERT INTO tsearch.ts_zhparser VALUES (2, 'sort query');
INSERT INTO tsearch.ts_zhparser VALUES (3, 'query sort');
-- Accurate match
SELECT id, body, ts_rank_cd(to_tsvector('zhparser', body), query) AS rank FROM tsearch.ts_zhparser, to_tsquery('sort') query WHERE query @@ to_tsvector(body);
 id | body | rank
----+------+------
  1 | sort |   .1
(1 row)

-- Fuzzy match
SELECT id, body, ts_rank_cd(to_tsvector('zhparser', body), query) AS rank FROM tsearch.ts_zhparser, to_tsquery('sort') query WHERE query @@ to_tsvector('zhparser', body);
 id |    body    | rank
----+------------+------
  3 | query sort |   .1
  1 | sort       |   .1
  2 | sort query |   .1
(3 rows)

Ranking can be expensive, since it requires consulting the tsvector of each matching document, which can be I/O bound and therefore slow. Unfortunately, this is almost impossible to avoid, because practical queries often produce large numbers of matches.
To present search results, it is ideal to show a part of each document and how it is related to the query. Usually, search engines show fragments of the document with marked search terms. GaussDB(DWS) provides the function ts_headline, which implements this functionality.

ts_headline([ config regconfig, ] document text, query tsquery [, options text ]) returns text

ts_headline accepts a document along with a query, and returns an excerpt from the document in which terms from the query are highlighted. The configuration used to parse the document can be specified by config; if config is omitted, the default_text_search_config configuration is used.
If an options string is specified, it must consist of a comma-separated list of one or more option=value pairs. The available options are StartSel, StopSel, MaxWords, MinWords, ShortWord, HighlightAll, MaxFragments, and FragmentDelimiter.
Any unspecified options receive these defaults:

StartSel=<b>, StopSel=</b>,
MaxWords=35, MinWords=15, ShortWord=3, HighlightAll=FALSE,
MaxFragments=0, FragmentDelimiter=" ... "

For example:

SELECT ts_headline('english',
'The most common type of search
is to find all documents containing given query terms
and return them in order of their similarity to the
query.',
to_tsquery('english', 'query & similarity'));
                         ts_headline
------------------------------------------------------------
 containing given <b>query</b> terms
 and return them in order of their <b>similarity</b> to the
 <b>query</b>.
(1 row)

SELECT ts_headline('english',
'The most common type of search
is to find all documents containing given query terms
and return them in order of their similarity to the
query.',
to_tsquery('english', 'query & similarity'),
'StartSel = <, StopSel = >');
                      ts_headline
-------------------------------------------------------
 containing given <query> terms
 and return them in order of their <similarity> to the
 <query>.
(1 row)

ts_headline uses the original document, not a tsvector summary, so it can be slow and should be used with care.
GaussDB(DWS) provides functions and operators that can be used to manipulate documents that are already in tsvector form.

tsvector || tsvector

The tsvector concatenation operator returns a new tsvector that combines the lexemes and positional information of its two arguments. Positions and weight labels are retained during the concatenation. Positions appearing in the right-hand tsvector are offset by the largest position mentioned in the left-hand tsvector, so that the result is nearly equivalent to the result of performing to_tsvector on the concatenation of the two original document strings. (The equivalence is not exact, because any stop words removed from the end of the left-hand argument do not affect the result, whereas they would have affected the positions of the lexemes in the right-hand argument if textual concatenation were used.)
One advantage of using concatenation in the tsvector form, rather than concatenating text before applying to_tsvector, is that you can use different configurations to parse different sections of the document. Also, because the setweight function marks all lexemes of the given tsvector the same way, it is necessary to parse the text and apply setweight before concatenating if you want to label different parts of the document with different weights.

setweight(vector tsvector, weight "char") returns tsvector

setweight returns a copy of the input tsvector in which every position has been labeled with the given weight, either A, B, C, or D. (D is the default for new tsvectors and as such is not displayed on output.) These labels are retained when tsvectors are concatenated, allowing words from different parts of a document to be weighted differently by ranking functions.
Note that weight labels apply to positions, not lexemes. If the input tsvector has been stripped of positions, setweight does nothing.

strip(vector tsvector) returns tsvector

strip returns a tsvector that lists the same lexemes as the given tsvector, but lacks any position or weight information. While the returned tsvector is much less useful than an unstripped tsvector for relevance ranking, it is usually much smaller.
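For example:

SELECT strip('fat:2,4 cat:3,7 rat:5A'::tsvector);
       strip
-------------------
 'cat' 'fat' 'rat'
(1 row)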
+GaussDB(DWS) provides functions and operators that can be used to manipulate queries that are already in tsquery type.
+ + + +Returns the number of nodes (lexemes plus operators) in a tsquery. This function is useful to determine if the query is meaningful (returns > 0), or contains only stop words (returns 0). For example:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 | SELECT numnode(plainto_tsquery('the any')); +NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored +CONTEXT: referenced column: numnode + numnode +--------- + 0 + +SELECT numnode('foo & bar'::tsquery); + numnode +--------- + 3 + |
Returns the portion of a tsquery that can be used for searching an index. This function is useful for detecting unindexable queries, for example those containing only stop words or only negated terms. For example:
+1 +2 +3 +4 +5 | SELECT querytree(to_tsquery('!defined')); + querytree +----------- + T +(1 row) + |
The ts_rewrite family of functions searches a given tsquery for occurrences of a target subquery, and replace each occurrence with a substitute subquery. In essence this operation is a tsquery specific version of substring replacement. A target and substitute combination can be thought of as a query rewrite rule. A collection of such rewrite rules can be a powerful search aid. For example, you can expand the search using synonyms (that is, new york, big apple, nyc, gotham) or narrow the search to direct the user to some hot topic.
+This form of ts_rewrite simply applies a single rewrite rule: target is replaced by substitute wherever it appears in query. For example:
+1 +2 +3 +4 | SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery); + ts_rewrite +------------ + 'b' & 'c' + |
This form of ts_rewrite accepts a starting query and a SQL select command, which is given as a text string. The select must yield two columns of tsquery type. For each row of the select result, occurrences of the first column value (the target) are replaced by the second column value (the substitute) within the current query value.
+Note that when multiple rewrite rules are applied in this way, the order of application can be important; so in practice you will want the source query to ORDER BY some ordering key.
+Consider a real-life astronomical example. We will expand query supernovae using table-driven rewriting rules:
+1 +2 +3 +4 +5 +6 +7 +8 +9 | CREATE TABLE tsearch.aliases (id int, t tsquery, s tsquery); + +INSERT INTO tsearch.aliases VALUES(1, to_tsquery('supernovae'), to_tsquery('supernovae|sn')); + +SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT t, s FROM tsearch.aliases'); + + ts_rewrite +--------------------------------- + 'crab' & ( 'supernova' | 'sn' ) + |
We can change the rewriting rules just by updating the table:
+1 +2 +3 +4 +5 +6 +7 +8 +9 | UPDATE tsearch.aliases +SET s = to_tsquery('supernovae|sn & !nebulae') +WHERE t = to_tsquery('supernovae'); + +SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT t, s FROM tsearch.aliases'); + + ts_rewrite +--------------------------------------------- + 'crab' & ( 'supernova' | 'sn' & !'nebula' ) + |
Rewriting can be slow when there are many rewriting rules, since it checks every rule for a possible match. To filter out obvious non-candidate rules we can use the containment operators for the tsquery type. In the example below, we select only those rules which might match the original query:
+1 +2 +3 +4 +5 +6 +7 | SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM tsearch.aliases WHERE ''a & b''::tsquery @> t'); + + ts_rewrite +------------ + 'b' & 'a' +(1 row) +DROP TABLE ts_rewrite; + |
The function ts_stat is useful for checking your configuration and for finding stop-word candidates.
+1 +2 +3 | ts_stat(sqlquery text, [ weights text, ] + OUT word text, OUT ndoc integer, + OUT nentry integer) returns setof record + |
sqlquery is a text value containing an SQL query which must return a single tsvector column. ts_stat executes the query and returns statistics about each distinct lexeme (word) contained in the tsvector data. The columns returned are
+If weights are supplied, only occurrences having one of those weights are counted. For example, to find the ten most frequent words in a document collection:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 | SELECT * FROM ts_stat('SELECT to_tsvector(''english'', sr_reason_sk) FROM tpcds.store_returns WHERE sr_customer_sk < 10') ORDER BY nentry DESC, ndoc DESC, word LIMIT 10;; + word | ndoc | nentry +------+------+-------- + 32 | 2 | 2 + 33 | 2 | 2 + 1 | 1 | 1 + 10 | 1 | 1 + 13 | 1 | 1 + 14 | 1 | 1 + 15 | 1 | 1 + 17 | 1 | 1 + 20 | 1 | 1 + 22 | 1 | 1 +(10 rows) + |
The same, but counting only word occurrences with weight A or B:
+1 +2 +3 +4 | SELECT * FROM ts_stat('SELECT to_tsvector(''english'', sr_reason_sk) FROM tpcds.store_returns WHERE sr_customer_sk < 10', 'a') ORDER BY nentry DESC, ndoc DESC, word LIMIT 10; + word | ndoc | nentry +------+------+-------- +(0 rows) + |
Text search parsers are responsible for splitting raw document text into tokens and identifying each token's type, where the set of types is defined by the parser itself. Note that a parser does not modify the text at all — it simply identifies plausible word boundaries. Because of this limited scope, there is less need for application-specific custom parsers than there is for custom dictionaries.
Currently, GaussDB(DWS) provides the following built-in parsers: pg_catalog.default for English text, and pg_catalog.ngram, pg_catalog.zhparser, and pg_catalog.pound for full text search in text containing Chinese, or both Chinese and English.
The default parser is named pg_catalog.default. It recognizes 23 token types, shown in Table 1.
Table 1 Token types of the default parser

| Alias | Description | Examples |
|---|---|---|
| asciiword | Word, all ASCII letters | elephant |
| word | Word, all letters | mañana |
| numword | Word, letters and digits | beta1 |
| asciihword | Hyphenated word, all ASCII | up-to-date |
| hword | Hyphenated word, all letters | lógico-matemática |
| numhword | Hyphenated word, letters and digits | postgresql-beta1 |
| hword_asciipart | Hyphenated word part, all ASCII | postgresql in the context postgresql-beta1 |
| hword_part | Hyphenated word part, all letters | lógico or matemática in the context lógico-matemática |
| hword_numpart | Hyphenated word part, letters and digits | beta1 in the context postgresql-beta1 |
| email | Email address | foo@example.com |
| protocol | Protocol head | http:// |
| url | URL | example.com/stuff/index.html |
| host | Host | example.com |
| url_path | URL path | /stuff/index.html, in the context of a URL |
| file | File or path name | /usr/local/foo.txt, if not within a URL |
| sfloat | Scientific notation | -1.23E+56 |
| float | Decimal notation | -1.234 |
| int | Signed integer | -1234 |
| uint | Unsigned integer | 1234 |
| version | Version number | 8.3.0 |
| tag | XML tag | <a href="dictionaries.html"> |
| entity | XML entity | &amp; |
| blank | Space symbols | (any whitespace or punctuation not otherwise recognized) |
Note: The parser's notion of a "letter" is determined by the database's locale setting, specifically lc_ctype. Words containing only the basic ASCII letters are reported as a separate token type, since it is sometimes useful to distinguish them. In most European languages, token types word and asciiword should be treated alike.
The email token type does not support all valid email characters as defined by RFC 5322. Specifically, the only non-alphanumeric characters supported in email user names are period, dash, and underscore.
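To see the distinction between the word and asciiword token types in practice, you can run ts_debug on a string containing both; this is a minimal check using the mañana example from Table 1:

-- 'mañana' contains a non-ASCII letter, so it is reported as token type word,
-- while 'manana' consists only of basic ASCII letters and is reported as asciiword.
SELECT alias, token FROM ts_debug('english', 'mañana manana');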
It is possible for the parser to identify overlapping tokens in the same piece of text. As an example, a hyphenated word will be reported both as the entire word and as each component:

SELECT alias, description, token FROM ts_debug('english','foo-bar-beta1');
      alias      |               description                |     token
-----------------+------------------------------------------+---------------
 numhword        | Hyphenated word, letters and digits      | foo-bar-beta1
 hword_asciipart | Hyphenated word part, all ASCII          | foo
 blank           | Space symbols                            | -
 hword_asciipart | Hyphenated word part, all ASCII          | bar
 blank           | Space symbols                            | -
 hword_numpart   | Hyphenated word part, letters and digits | beta1
This behavior is desirable since it allows searches to work for both the whole compound word and for components. Here is another instructive example:
SELECT alias, description, token FROM ts_debug('english','http://example.com/stuff/index.html');
  alias   |  description  |            token
----------+---------------+------------------------------
 protocol | Protocol head | http://
 url      | URL           | example.com/stuff/index.html
 host     | Host          | example.com
 url_path | URL path      | /stuff/index.html
N-gram is a mechanical word segmentation method, and applies to Chinese segmentation scenarios where no semantic analysis is required. The N-gram segmentation method ensures the completeness of the segmentation. However, to cover all possibilities, it adds unnecessary words to the index, resulting in a large number of index entries. N-gram supports the GBK and UTF-8 Chinese encodings. Its six built-in token types are shown in Table 2.
Table 2 Token types of the ngram parser

| Alias | Description |
|---|---|
| zh_words | Chinese words |
| en_word | English word |
| numeric | Numeric data |
| alnum | Alphanumeric string |
| grapsymbol | Graphic symbol |
| multisymbol | Multiple symbols |
Zhparser is a dictionary-based semantic word segmentation method. Its underlying layer invokes the Simple Chinese Word Segmentation (SCWS) algorithm (https://github.com/hightman/scws), which applies to Chinese segmentation scenarios. SCWS is a mechanical Chinese word segmentation engine based on term frequency and dictionaries; it can split a complete paragraph of Chinese text into words. Both the GBK and UTF-8 encodings are supported. The 26 built-in token types are shown in Table 3.
Table 3 Token types of the zhparser parser

| Alias | Description |
|---|---|
| A | Adjective |
| B | Differentiation |
| C | Conjunction |
| D | Adverb |
| E | Exclamation |
| F | Position |
| G | Lexeme |
| H | Preceding element |
| I | Idiom |
| J | Acronyms and abbreviations |
| K | Subsequent element |
| L | Common words |
| M | Numeral |
| N | Noun |
| O | Onomatopoeia |
| P | Preposition |
| Q | Quantifiers |
| R | Pronoun |
| S | Space |
| T | Time |
| U | Auxiliary word |
| V | Verb |
| W | Punctuation |
| X | Unknown |
| Y | Interjection |
| Z | Status words |
Pound segments words in a fixed format. It is used to parse Chinese and English words that carry no semantic meaning and are separated by fixed separators. It supports Chinese encodings (GBK and UTF-8) and English encodings (ASCII). Pound has six pre-configured token types (as listed in Table 4) and supports five separators (as listed in Table 5). By default, the separator is #. The maximum length of a token in Pound is 256 characters.
A dictionary is used to define stop words, that is, words to be ignored in full-text retrieval.
A dictionary can also be used to normalize words so that different derived forms of the same word will match. A normalized word is called a lexeme.
In addition to improving retrieval quality, normalization and removal of stop words can reduce the size of the tsvector representation of a document, thereby improving performance. Normalization and removal of stop words do not always have linguistic meaning. Users can define normalization and removal rules in dictionary definition files based on application environments.
A dictionary is a program that receives a token as input and returns:
- An array of lexemes if the input token is known to the dictionary (note that one token can produce more than one lexeme).
- An empty array if the input token is known to the dictionary but is a stop word.
- NULL if the dictionary does not recognize the input token.
GaussDB(DWS) provides predefined dictionaries for many languages and also provides five predefined dictionary templates: Simple, Synonym, Thesaurus, Ispell, and Snowball. These templates can be used to create new dictionaries with custom parameters.
When using full-text retrieval, you are advised to bind a list of dictionaries to each token type, ordered from the most specific dictionary to the most general one. For example:
ALTER TEXT SEARCH CONFIGURATION astro_en
    ADD MAPPING FOR asciiword WITH astro_syn, english_ispell, english_stem;
A filtering dictionary can be placed anywhere in the list, except at the end where it would be useless. Filtering dictionaries are useful to partially normalize words to simplify the task of later dictionaries.
Stop words are words that are very common, appear in almost every document, and have no discrimination value. Therefore, they can be ignored in the context of full text searching. Each type of dictionary treats stop words in its own way. For example, Ispell dictionaries first normalize words and then check the list of stop words, while Snowball dictionaries check the list of stop words first.
For example, every English text contains words like a and the, so it is useless to store them in an index. However, stop words affect the positions in tsvector, which in turn affect ranking.

SELECT to_tsvector('english','in the list of stop words');
        to_tsvector
----------------------------
 'list':3 'stop':5 'word':6

Positions 1, 2, and 4 are missing because of stop words. Ranks calculated for documents with and without stop words are quite different:

SELECT ts_rank_cd (to_tsvector('english','in the list of stop words'), to_tsquery('list & stop'));
 ts_rank_cd
------------
        .05

SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list & stop'));
 ts_rank_cd
------------
         .1
A Simple dictionary operates by converting the input token to lower case and checking it against a list of stop words. If the token is found in the list, an empty array will be returned, causing the token to be discarded. If it is not found, the lower-cased form of the word is returned as the normalized lexeme. In addition, you can set Accept to false for Simple dictionaries (default: true) to report non-stop-words as unrecognized, allowing them to be passed on to the next dictionary in the list.

CREATE TEXT SEARCH DICTIONARY public.simple_dict (
    TEMPLATE = pg_catalog.simple,
    STOPWORDS = english
);

english.stop is the full name of a file of stop words. For details about the syntax and parameters for creating a Simple dictionary, see CREATE TEXT SEARCH DICTIONARY.

SELECT ts_lexize('public.simple_dict','YeS');
 ts_lexize
-----------
 {yes}
(1 row)

SELECT ts_lexize('public.simple_dict','The');
 ts_lexize
-----------
 {}
(1 row)

ALTER TEXT SEARCH DICTIONARY public.simple_dict ( Accept = false );
SELECT ts_lexize('public.simple_dict','YeS');
 ts_lexize
-----------

(1 row)

SELECT ts_lexize('public.simple_dict','The');
 ts_lexize
-----------
 {}
(1 row)
A Synonym dictionary is used to define, identify, and convert synonyms of tokens. Phrases are not supported (use the thesaurus dictionary in Thesaurus Dictionary).

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes
-----------+-----------------+-------+----------------+--------------+---------
 asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
(1 row)

CREATE TEXT SEARCH DICTIONARY my_synonym (
    TEMPLATE = synonym,
    SYNONYMS = my_synonyms,
    FILEPATH = 'obs://bucket01/obs.xxx.xxx.com accesskey=xxxxx secretkey=xxxxx region=xx-xx-xx'
);

ALTER TEXT SEARCH CONFIGURATION english
    ALTER MAPPING FOR asciiword
    WITH my_synonym, english_stem;

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |       dictionaries        | dictionary | lexemes
-----------+-----------------+-------+---------------------------+------------+---------
 asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
(1 row)

SELECT * FROM ts_debug('english', 'paris');
   alias   |   description   | token |       dictionaries        | dictionary | lexemes
-----------+-----------------+-------+---------------------------+------------+---------
 asciiword | Word, all ASCII | paris | {my_synonym,english_stem} | my_synonym | {paris}
(1 row)

ALTER TEXT SEARCH DICTIONARY my_synonym ( CASESENSITIVE=true );

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |       dictionaries        | dictionary | lexemes
-----------+-----------------+-------+---------------------------+------------+---------
 asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
(1 row)

SELECT * FROM ts_debug('english', 'paris');
   alias   |   description   | token |       dictionaries        |  dictionary  | lexemes
-----------+-----------------+-------+---------------------------+--------------+---------
 asciiword | Word, all ASCII | paris | {my_synonym,english_stem} | english_stem | {pari}
(1 row)

The full name of the synonym dictionary file is my_synonyms.syn, and the dictionary is stored in 'obs://bucket01/obs.xxx.xxx.com accesskey=xxxxx secretkey=xxxxx region=xx-xx-xx'. For details about the syntax and parameters for creating a Synonym dictionary, see CREATE TEXT SEARCH DICTIONARY.
Assume that the content in the dictionary file synonym_sample.syn is as follows:

postgres pgsql
postgresql pgsql
postgre pgsql
gogle googl
indices index*
Create and use a dictionary.
CREATE TEXT SEARCH DICTIONARY syn (
    TEMPLATE = synonym,
    SYNONYMS = synonym_sample
);

SELECT ts_lexize('syn','indices');
 ts_lexize
-----------
 {index}
(1 row)

CREATE TEXT SEARCH CONFIGURATION tst (copy=simple);

ALTER TEXT SEARCH CONFIGURATION tst ALTER MAPPING FOR asciiword WITH syn;

SELECT to_tsvector('tst','indices');
 to_tsvector
-------------
 'index':1
(1 row)

SELECT to_tsquery('tst','indices');
 to_tsquery
------------
 'index':*
(1 row)

SELECT 'indexes are very useful'::tsvector;
            tsvector
---------------------------------
 'are' 'indexes' 'useful' 'very'
(1 row)

SELECT 'indexes are very useful'::tsvector @@ to_tsquery('tst','indices');
 ?column?
----------
 t
(1 row)
A thesaurus dictionary (sometimes abbreviated as TZ) is a collection of words that includes relationships between words and phrases, such as broader terms (BT), narrower terms (NT), preferred terms, non-preferred terms, and related terms. A thesaurus dictionary replaces all non-preferred terms by one preferred term and, optionally, preserves the original terms for indexing as well. A thesaurus dictionary is an extension of the Synonym dictionary with added phrase support. Assume that the thesaurus dictionary file thesaurus_astro.ths has the following contents:

supernovae stars : sn
crab nebulae : crab
Run the following statement to create the TZ:

CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
    TEMPLATE = thesaurus,
    DictFile = thesaurus_astro,
    Dictionary = pg_catalog.english_stem,
    FILEPATH = 'obs://bucket01/obs.xxx.xxx.com accesskey=xxxxx secretkey=xxxxx region=xx-xx-xx'
);

The full name of the dictionary file is thesaurus_astro.ths, and the dictionary is stored in 'obs://bucket01/obs.xxx.xxx.com accesskey=xxxxx secretkey=xxxxx region=xx-xx-xx'. pg_catalog.english_stem is the subdictionary (a Snowball English stemmer) used for input normalization. The subdictionary has its own configuration (for example, stop words), which is not shown here. For details about the syntax and parameters for creating a TZ, see CREATE TEXT SEARCH DICTIONARY.
Bind the TZ to the desired token types in a text search configuration:

ALTER TEXT SEARCH CONFIGURATION english
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
    WITH thesaurus_astro, english_stem;

Now it is possible to test the TZ:

SELECT plainto_tsquery('english','supernova star');
 plainto_tsquery
-----------------
 'sn'
(1 row)

SELECT to_tsvector('english','supernova star');
 to_tsvector
-------------
 'sn':1
(1 row)

SELECT to_tsquery('english','''supernova star''');
 to_tsquery
------------
 'sn'
(1 row)
supernova star matches supernovae stars in thesaurus_astro because the english_stem stemmer is specified in the thesaurus_astro definition. The stemmer removed the e and s endings.
To also index the original phrase, include it in the right-hand part of the definition and reload the dictionary file:

supernovae stars : sn supernovae stars

ALTER TEXT SEARCH DICTIONARY thesaurus_astro (
    DictFile = thesaurus_astro,
    FILEPATH = 'file:///home/dicts/');

SELECT plainto_tsquery('english','supernova star');
      plainto_tsquery
-----------------------------
 'sn' & 'supernova' & 'star'
(1 row)
The Ispell dictionary template supports morphological dictionaries, which can normalize many different linguistic forms of a word into the same lexeme. For example, an English Ispell dictionary can match all declensions and conjugations of the search term bank, such as banking, banked, banks, banks', and bank's.
GaussDB(DWS) does not provide any predefined Ispell dictionaries or dictionary files. The .dict and .affix files support multiple open-source dictionary formats, including Ispell, MySpell, and Hunspell.
You can use an open-source dictionary. The file name extensions of an open-source dictionary may be .aff and .dic; in this case, rename them to .affix and .dict. In addition, for some dictionary files (for example, Norwegian dictionary files), you need to run the following commands to convert the character encoding to UTF-8:

iconv -f ISO_8859-1 -t UTF-8 -o nn_no.affix nn_NO.aff
iconv -f ISO_8859-1 -t UTF-8 -o nn_no.dict nn_NO.dic

CREATE TEXT SEARCH DICTIONARY norwegian_ispell (
    TEMPLATE = ispell,
    DictFile = nn_no,
    AffFile = nn_no,
    FilePath = 'obs://bucket_name/path accesskey=ak secretkey=sk region=rg'
);

The full names of the Ispell dictionary files are nn_no.dict and nn_no.affix, and the dictionary is stored in the OBS path specified by FilePath. For details about the syntax and parameters for creating an Ispell dictionary, see CREATE TEXT SEARCH DICTIONARY.

SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk');
      ts_lexize
---------------------
 {sjokolade,fabrikk}
(1 row)
MySpell does not support compound words; Hunspell does. GaussDB(DWS) supports only the basic compound word operations of Hunspell. Generally, an Ispell dictionary recognizes a limited set of words, so it should be followed by another, broader dictionary, for example, a Snowball dictionary, which recognizes everything.
A Snowball dictionary is based on a project by Martin Porter and provides stemming algorithms for many languages. GaussDB(DWS) provides predefined Snowball dictionaries for many languages. You can query the PG_TS_DICT system catalog to view the predefined Snowball dictionaries and the supported stemming algorithms.
A Snowball dictionary recognizes everything, no matter whether it is able to simplify the word. Therefore, it should be placed at the end of the dictionary list. It is useless to place it before any other dictionary because a token will never pass through it to the next dictionary.
For details about the syntax of Snowball dictionaries, see CREATE TEXT SEARCH DICTIONARY.
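The two placement rules above can be combined when binding dictionaries to a token type. The sketch below is illustrative only: it reuses the norwegian_ispell dictionary created earlier, and it assumes a hypothetical text search configuration norwegian_conf and a predefined norwegian_stem Snowball dictionary.

-- The narrow Ispell dictionary is consulted first; tokens it does not
-- recognize fall through to the catch-all Snowball stemmer, which must be last.
ALTER TEXT SEARCH CONFIGURATION norwegian_conf
    ALTER MAPPING FOR asciiword, word
    WITH norwegian_ispell, norwegian_stem;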
A text search configuration specifies the following components required for converting a document into a tsvector: a parser, which breaks the text into tokens, and the dictionaries used to convert each token into a lexeme.
Each time the to_tsvector or to_tsquery function is invoked, a text search configuration is required to specify the processing procedure. The GUC parameter default_text_search_config specifies the default text search configuration, which is used if the text search function does not explicitly specify one.
GaussDB(DWS) provides some predefined text search configurations. You can also create user-defined text search configurations. In addition, to facilitate the management of text search objects, multiple gsql meta-commands are provided to display related information. For details, see "Meta-Command Reference" in the Tool Guide.
Create a text search configuration ts_conf by copying the predefined english configuration:
CREATE TEXT SEARCH CONFIGURATION ts_conf ( COPY = pg_catalog.english );
CREATE TEXT SEARCH CONFIGURATION

Assume that the content of the synonym dictionary file pg_dict.syn is as follows:

postgres pg
pgsql pg
postgresql pg

Run the following statement to create the Synonym dictionary:

CREATE TEXT SEARCH DICTIONARY pg_dict (
    TEMPLATE = synonym,
    SYNONYMS = pg_dict,
    FILEPATH = 'obs://bucket01/obs.xxx.xxx.com accesskey=xxxxx secretkey=xxxxx region=xx-xx-xx'
);

Create an Ispell dictionary english_ispell:

CREATE TEXT SEARCH DICTIONARY english_ispell (
    TEMPLATE = ispell,
    DictFile = english,
    AffFile = english,
    StopWords = english,
    FILEPATH = 'obs://bucket01/obs.xxx.xxx.com accesskey=xxxxx secretkey=xxxxx region=xx-xx-xx'
);

Bind the dictionaries to token types in ts_conf:

ALTER TEXT SEARCH CONFIGURATION ts_conf
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                      word, hword, hword_part
    WITH pg_dict, english_ispell, english_stem;

Choose not to index or search some token types:

ALTER TEXT SEARCH CONFIGURATION ts_conf
    DROP MAPPING FOR email, url, url_path, sfloat, float;

Test the configuration:

SELECT * FROM ts_debug('ts_conf', '
PostgreSQL, the highly scalable, SQL compliant, open source object-relational
database management system, is now undergoing beta testing of the next
version of our software.
');

View the configuration and set it as the default:

\dF+ ts_conf
   Text search configuration "public.ts_conf"
Parser: "pg_catalog.default"
      Token      |            Dictionaries
-----------------+-------------------------------------
 asciihword      | pg_dict,english_ispell,english_stem
 asciiword       | pg_dict,english_ispell,english_stem
 file            | simple
 host            | simple
 hword           | pg_dict,english_ispell,english_stem
 hword_asciipart | pg_dict,english_ispell,english_stem
 hword_numpart   | simple
 hword_part      | pg_dict,english_ispell,english_stem
 int             | simple
 numhword        | simple
 numword         | simple
 uint            | simple
 version         | simple
 word            | pg_dict,english_ispell,english_stem

SET default_text_search_config = 'public.ts_conf';
SET
SHOW default_text_search_config;
 default_text_search_config
----------------------------
 public.ts_conf
(1 row)
The function ts_debug allows easy testing of a text search configuration.
ts_debug([ config regconfig, ] document text,
         OUT alias text,
         OUT description text,
         OUT token text,
         OUT dictionaries regdictionary[],
         OUT dictionary regdictionary,
         OUT lexemes text[])
returns setof record
ts_debug displays information about every token of document as produced by the parser and processed by the configured dictionaries. It uses the configuration specified by config, or default_text_search_config if that argument is omitted.
ts_debug returns one row for each token identified in the text by the parser. The columns returned are alias (short name of the token's type), description (description of the token's type), token (text of the token), dictionaries (the dictionaries selected by the configuration for this token type), dictionary (the dictionary that recognized the token, or NULL if none did), and lexemes (the lexemes produced by the dictionary that recognized the token, or NULL if none did; an empty array means the token was recognized as a stop word).
Here is a simple example:
SELECT * FROM ts_debug('english','a fat cat sat on a mat - it ate a fat rats');
   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes
-----------+-----------------+-------+----------------+--------------+---------
 asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
 blank     | Space symbols   |       | {}             |              |
 blank     | Space symbols   | -     | {}             |              |
 asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
(24 rows)
The ts_parse function allows direct testing of a text search parser.

ts_parse(parser_name text, document text,
         OUT tokid integer, OUT token text) returns setof record

ts_parse parses the given document and returns a series of records, one for each token produced by parsing. Each record includes a tokid showing the assigned token type and a token which is the text of the token. For example:

SELECT * FROM ts_parse('default', '123 - a number');
 tokid | token
-------+--------
    22 | 123
    12 |
    12 | -
     1 | a
    12 |
     1 | number
(6 rows)
ts_token_type(parser_name text, OUT tokid integer,
              OUT alias text, OUT description text) returns setof record

ts_token_type returns a table which describes each type of token the specified parser can recognize. For each token type, the table gives the integer tokid that the parser uses to label a token of that type, the alias that names the token type in configuration commands, and a short description. For example:

SELECT * FROM ts_token_type('default');
 tokid |      alias      |               description
-------+-----------------+------------------------------------------
     1 | asciiword       | Word, all ASCII
     2 | word            | Word, all letters
     3 | numword         | Word, letters and digits
     4 | email           | Email address
     5 | url             | URL
     6 | host            | Host
     7 | sfloat          | Scientific notation
     8 | version         | Version number
     9 | hword_numpart   | Hyphenated word part, letters and digits
    10 | hword_part      | Hyphenated word part, all letters
    11 | hword_asciipart | Hyphenated word part, all ASCII
    12 | blank           | Space symbols
    13 | tag             | XML tag
    14 | protocol        | Protocol head
    15 | numhword        | Hyphenated word, letters and digits
    16 | asciihword      | Hyphenated word, all ASCII
    17 | hword           | Hyphenated word, all letters
    18 | url_path        | URL path
    19 | file            | File or path name
    20 | float           | Decimal notation
    21 | int             | Signed integer
    22 | uint            | Unsigned integer
    23 | entity          | XML entity
(23 rows)
The ts_lexize function facilitates dictionary testing.

ts_lexize(dict regdictionary, token text) returns text[]

ts_lexize returns an array of lexemes if the input token is known to the dictionary, an empty array if the token is known to the dictionary but is a stop word, or NULL if it is an unknown word.
For example:

SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}

SELECT ts_lexize('english_stem', 'a');
 ts_lexize
-----------
 {}

The ts_lexize function expects a single token, not text.
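For instance, passing a phrase rather than a single token yields NULL even when the phrase appears in the dictionary file, because the dictionary never sees the phrase as one token. A quick illustration using the thesaurus_astro dictionary defined earlier:

-- 'supernovae stars' is a phrase, not a single token, so the
-- dictionary cannot match it and ts_lexize returns NULL.
SELECT ts_lexize('thesaurus_astro', 'supernovae stars');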
The current limitations of GaussDB(DWS)'s full text search are:
GaussDB(DWS) runs SQL statements to perform different system operations, such as setting variables, displaying the execution plan, and collecting garbage data.
For details about how to set various parameters for a session or transaction, see SET.
For details about how to display the execution plan that GaussDB(DWS) makes for SQL statements, see EXPLAIN.
By default, checkpoints are periodically scheduled in the transaction log (WAL). CHECKPOINT forces an immediate checkpoint when the command is issued, without waiting for a regular checkpoint scheduled by the system. For details, see CHECKPOINT.
For details about how to garbage-collect and, as required, analyze a database, see VACUUM.
For details about how to collect statistics on tables in databases, see ANALYZE | ANALYSE.
For details about how to set the constraint check mode for the current transaction, see SET CONSTRAINTS.
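A minimal sketch tying these system operation statements together (the schema myschema and table t1 are hypothetical):

SET search_path TO myschema;        -- set a session parameter
EXPLAIN SELECT * FROM t1;           -- display the execution plan
CHECKPOINT;                         -- force an immediate checkpoint
VACUUM t1;                          -- reclaim storage occupied by dead tuples
ANALYZE t1;                         -- collect statistics on the table
SET CONSTRAINTS ALL IMMEDIATE;      -- set the constraint check mode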
A transaction is a user-defined sequence of database operations, which form an integral unit of work.
GaussDB(DWS) starts a transaction using START TRANSACTION or BEGIN. For details, see START TRANSACTION and BEGIN.
GaussDB(DWS) sets a transaction using SET TRANSACTION or SET LOCAL TRANSACTION. For details, see SET TRANSACTION.
GaussDB(DWS) commits all operations of a transaction using COMMIT or END. For details, see COMMIT | END.
If a fault occurs during a transaction and the transaction cannot proceed, the system performs a rollback to cancel all the completed database operations related to the transaction. For details, see ROLLBACK.
If an execution request (not in a transaction block) received by the database contains multiple statements, the statements are packed into a transaction. If one of the statements fails, the entire request is rolled back.
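A minimal transaction sketch (the table accounts and its columns are hypothetical):

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;    -- or ROLLBACK; to cancel both updates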
Data definition language (DDL) is used to define or modify an object in a database, such as a table, index, or view.
GaussDB(DWS) does not support DDL if its CN is unavailable. For example, if a CN in the cluster is faulty, creating a database or table will fail.
A database is the warehouse for organizing, storing, and managing data. Defining a database includes creating a database, altering its attributes, and dropping it. The following table lists the related SQL statements.
| Function | SQL Statement |
|---|---|
| Create a database | CREATE DATABASE |
| Alter database attributes | ALTER DATABASE |
| Delete a database | DROP DATABASE |
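For example, an end-to-end sketch of these statements (the database name music is illustrative):

CREATE DATABASE music;
ALTER DATABASE music CONNECTION LIMIT 10;
DROP DATABASE music;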
A schema is a set of database objects and is used to control access to these objects. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a schema | CREATE SCHEMA |
| Alter schema attributes | ALTER SCHEMA |
| Delete a schema | DROP SCHEMA |
A table is a special data structure in a database and is used to store data objects and the relationships between data objects. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a table | CREATE TABLE |
| Alter table attributes | ALTER TABLE |
| Delete a table | DROP TABLE |
| Delete all the data from a table | TRUNCATE |
A partitioned table is a special data structure in a database and is used to store data objects and the relationships between data objects. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a partitioned table | CREATE TABLE PARTITION |
| Create a partition | ALTER TABLE PARTITION |
| Alter partitioned table attributes | ALTER TABLE PARTITION |
| Delete a partition | ALTER TABLE PARTITION |
| Delete a partitioned table | DROP TABLE |
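A hedged sketch of creating and modifying a range-partitioned table (the table name, columns, and partition bounds are illustrative):

CREATE TABLE sales
(
    id        int,
    sale_date date
)
DISTRIBUTE BY HASH (id)
PARTITION BY RANGE (sale_date)
(
    PARTITION p2020 VALUES LESS THAN ('2021-01-01'),
    PARTITION p2021 VALUES LESS THAN ('2022-01-01')
);

ALTER TABLE sales ADD PARTITION p2022 VALUES LESS THAN ('2023-01-01');
DROP TABLE sales;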
An index indicates the sequence of values in one or more columns of a table. A database index is a data structure that improves the speed of access to specific information in a table. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create an index | CREATE INDEX |
| Alter index attributes | ALTER INDEX |
| Delete an index | DROP INDEX |
| Rebuild an index | REINDEX |
A role is used to manage permissions. For database security, all management and operation permissions can be assigned to different roles. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a role | CREATE ROLE |
| Alter role attributes | ALTER ROLE |
| Delete a role | DROP ROLE |
A user is used to log in to a database. Different permissions can be assigned to users to manage their data access and operations. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a user | CREATE USER |
| Alter user attributes | ALTER USER |
| Delete a user | DROP USER |
Data redaction protects sensitive data by masking or changing it. You can create a data redaction policy for a specific table object and specify the effective scope of the policy. You can also add, modify, and delete redaction columns. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a data redaction policy | CREATE REDACTION POLICY |
| Modify a data redaction policy applied to a specified table | ALTER REDACTION POLICY |
| Delete a data redaction policy applied to a specified table | DROP REDACTION POLICY |
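A hedged sketch of defining a redaction policy (the table emp, column salary, and user test_user are illustrative; mask_full is one of the redaction functions used in the ALTER REDACTION POLICY examples later in this document):

-- Fully redact the salary column when the table is queried by test_user.
CREATE REDACTION POLICY mask_emp ON emp
    WHEN (current_user = 'test_user')
    ADD COLUMN salary WITH mask_full(salary);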
Row-level access control policies control the visibility of rows in database tables. In this way, the same SQL query may return different results for different users. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a row-level access control policy | CREATE ROW LEVEL SECURITY POLICY |
| Modify an existing row-level access control policy | ALTER ROW LEVEL SECURITY POLICY |
| Delete a row-level access control policy from a table | DROP ROW LEVEL SECURITY POLICY |
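A hedged sketch (the table all_data and its role column are illustrative; the same policy name reappears in the ALTER ROW LEVEL SECURITY POLICY examples later in this document):

-- Only rows whose role column matches the current user are visible.
CREATE ROW LEVEL SECURITY POLICY all_data_rls ON all_data
    USING (role = CURRENT_USER);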
A stored procedure is a set of SQL statements for achieving specific functions and is stored in the database after compilation. Users can specify a name and provide parameters (if necessary) to execute the stored procedure. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a stored procedure | CREATE PROCEDURE |
| Delete a stored procedure | DROP PROCEDURE |
In GaussDB(DWS), a function is similar to a stored procedure: both are sets of SQL statements and are used in the same way. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a function | CREATE FUNCTION |
| Alter function attributes | ALTER FUNCTION |
| Delete a function | DROP FUNCTION |
A view is a virtual table exported from one or several base tables. A view is used to control data access for users. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a view | CREATE VIEW |
| Delete a view | DROP VIEW |
To process SQL statements, a stored procedure allocates a memory segment to store context information. A cursor is a handle or pointer to this context region. With a cursor, the stored procedure can control the changes made in the context region.
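A minimal cursor sketch (the table t1 is hypothetical; cursors must be used inside a transaction block):

START TRANSACTION;
DECLARE my_cursor CURSOR FOR SELECT * FROM t1;
FETCH 5 FROM my_cursor;    -- retrieve the next five rows
MOVE 2 FROM my_cursor;     -- skip two rows without returning them
CLOSE my_cursor;
COMMIT;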
A session is a connection established between the user and the database. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Alter a session | ALTER SESSION |
| End a session | ALTER SYSTEM KILL SESSION |
A resource pool is a system catalog used by the resource load management module to specify attributes related to resource management, such as Cgroups. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a resource pool | CREATE RESOURCE POOL |
| Change resource pool attributes | ALTER RESOURCE POOL |
| Delete a resource pool | DROP RESOURCE POOL |
A synonym is a special database object compatible with Oracle. It stores the mapping between one database object and another. Currently, synonyms can be created only for the following database objects: tables, views, functions, and stored procedures. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a synonym | CREATE SYNONYM |
| Modify a synonym | ALTER SYNONYM |
| Delete a synonym | DROP SYNONYM |
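A hedged sketch (the schema and table names are illustrative):

CREATE SYNONYM t1_syn FOR myschema.t1;   -- reference myschema.t1 through the synonym
SELECT * FROM t1_syn;
DROP SYNONYM t1_syn;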
A text search configuration specifies a text search parser that can divide a string into tokens, plus the dictionaries used to determine which tokens are of interest for searching. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a text search configuration | CREATE TEXT SEARCH CONFIGURATION |
| Modify a text search configuration | ALTER TEXT SEARCH CONFIGURATION |
| Delete a text search configuration | DROP TEXT SEARCH CONFIGURATION |
A dictionary is used to identify and process specific words during full-text retrieval. Dictionaries are created by using predefined templates (defined in the PG_TS_TEMPLATE system catalog). Dictionaries of the Simple, Ispell, Synonym, Thesaurus, and Snowball types can be created. The following table lists the related SQL statements.

| Function | SQL Statement |
|---|---|
| Create a full-text retrieval dictionary | CREATE TEXT SEARCH DICTIONARY |
| Modify a full-text retrieval dictionary | ALTER TEXT SEARCH DICTIONARY |
| Delete a full-text retrieval dictionary | DROP TEXT SEARCH DICTIONARY |
ALTER DATABASE modifies the attributes of a database, including the database name, owner, maximum number of connections, and object isolation attribute.
Modify the maximum number of connections of a database:

ALTER DATABASE database_name
    [ [ WITH ] CONNECTION LIMIT connlimit ];

Rename a database:

ALTER DATABASE database_name
    RENAME TO new_name;

If the database contains OBS hot or cold tables, the database name cannot be changed.
Change the owner of a database:

ALTER DATABASE database_name
    OWNER TO new_owner;

Change the default tablespace of a database:

ALTER DATABASE database_name
    SET TABLESPACE new_tablespace;

The current tablespace cannot be changed to an OBS tablespace.
Set a session parameter for a database:

ALTER DATABASE database_name
    SET configuration_parameter { { TO | = } { value | DEFAULT } | FROM CURRENT };

Reset a database session parameter:

ALTER DATABASE database_name RESET
    { configuration_parameter | ALL };

Enable or disable object isolation for a database:

ALTER DATABASE database_name [ WITH ] { ENABLE | DISABLE } PRIVATE OBJECT;
database_name
Specifies the name of the database whose attributes are to be modified.
Value range: a string. It must comply with the naming convention.

connlimit
Specifies the maximum number of concurrent connections that can be made to this database (excluding administrators' connections).
Value range: an integer, preferably between 1 and 50. The default value -1 indicates no restrictions.

new_name
Specifies the new name of the database.
Value range: a string. It must comply with the naming convention.

new_owner
Specifies the new owner of the database.
Value range: a string indicating a valid user name.

configuration_parameter value
Sets a specified database session parameter. If the value is DEFAULT or RESET, the default setting is used in the new session. OFF closes the setting.
Value range: a string. It can be set to:
- DEFAULT: specifies the default value.
- FROM CURRENT: sets the value based on the database connected to the current session.

RESET configuration_parameter
Resets the specified database session parameter.

RESET ALL
Resets all database session parameters.
Modify the maximum number of connections of the music database.

ALTER DATABASE music CONNECTION LIMIT 10;

Change the name of the music database to music1.

ALTER DATABASE music RENAME TO music1;

Change the owner of the music1 database.

ALTER DATABASE music1 OWNER TO tom;

Modify the tablespace of the music1 database.

ALTER DATABASE music1 SET TABLESPACE PG_DEFAULT;

Disable the default index scan on the music1 database.

ALTER DATABASE music1 SET enable_indexscan TO off;

Reset the enable_indexscan parameter of the music1 database.

ALTER DATABASE music1 RESET enable_indexscan;
ALTER FOREIGN TABLE modifies a foreign table.
Precautions: None
Modify the options of a foreign table:

ALTER FOREIGN TABLE [ IF EXISTS ] table_name
    OPTIONS ( {[ ADD | SET | DROP ] option ['value']}[, ... ]);

Change the owner of a foreign table:

ALTER FOREIGN TABLE [ IF EXISTS ] tablename
    OWNER TO new_owner;

table_name
Specifies the name of an existing foreign table to be modified.
Value range: an existing foreign table name.

option
Specifies the name of the option to be modified.
Value range: see Parameter Description in CREATE FOREIGN TABLE.

value
Specifies the new value of option.

Modify the attributes of the customer_ft foreign table by deleting the mode option.

ALTER FOREIGN TABLE customer_ft OPTIONS (DROP mode);

Helpful links: CREATE FOREIGN TABLE (for GDS Import and Export), DROP FOREIGN TABLE
ALTER FOREIGN TABLE modifies an HDFS or OBS foreign table.
Precautions: None
Modify the options of a foreign table:

ALTER FOREIGN TABLE [ IF EXISTS ] table_name
    OPTIONS ( {[ ADD | SET | DROP ] option ['value']} [, ... ]);

Change the owner of a foreign table:

ALTER FOREIGN TABLE [ IF EXISTS ] tablename
    OWNER TO new_owner;

Modify columns:

ALTER FOREIGN TABLE [ IF EXISTS ] table_name
    MODIFY ( { column_name data_type | column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ] | column_name [ CONSTRAINT constraint_name ] NULL } [, ...] );

Perform one or more column actions:

ALTER FOREIGN TABLE [ IF EXISTS ] tablename
    action [, ... ];

where action can be one of the following clauses:

ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type
    | ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
    | ALTER [ COLUMN ] column_name SET STATISTICS [PERCENT] integer
    | ALTER [ COLUMN ] column_name OPTIONS ( {[ ADD | SET | DROP ] option ['value'] } [, ... ])
    | MODIFY column_name data_type
    | MODIFY column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ]
    | MODIFY column_name [ CONSTRAINT constraint_name ] NULL

For details, see ALTER TABLE.
Add an informational constraint:

ALTER FOREIGN TABLE [ IF EXISTS ] tablename
    ADD [ CONSTRAINT constraint_name ]
    { PRIMARY KEY | UNIQUE } ( column_name )
    [ NOT ENFORCED [ ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION ] | ENFORCED ];

For parameters about adding an informational constraint to a foreign table, see Parameter Description in CREATE FOREIGN TABLE (For HDFS).
Delete an informational constraint:

ALTER FOREIGN TABLE [ IF EXISTS ] tablename
    DROP CONSTRAINT constraint_name ;

IF EXISTS
Sends a notice instead of an error if the specified table does not exist. The notice prompts that the table you are querying does not exist.

table_name
Specifies the name of an existing foreign table to be modified.
Value range: an existing foreign table name.

new_owner
Specifies the new owner of the foreign table.
Value range: a string indicating a valid user name.

data_type
Specifies the new type for an existing column.
Value range: a string. It must comply with the naming convention.

constraint_name
Specifies the name of a constraint to add or delete.

column_name
Specifies the name of an existing column.
Value range: a string. It must comply with the naming convention.

For details on how to modify other parameters in the foreign table, see Parameter Description in ALTER TABLE.
Change the type of the r_name column to text in the ft_region foreign table.

ALTER FOREIGN TABLE ft_region ALTER r_name TYPE TEXT;

Set the r_name column of the ft_region foreign table to NOT NULL.

ALTER FOREIGN TABLE ft_region ALTER r_name SET NOT NULL;
ALTER FUNCTION modifies the attributes of a user-defined function.
Only the owner of a function or a system administrator can run this statement. ALTER FUNCTION cannot be used for functions that involve operations on temporary tables.
Modify the additional attributes of a function:

ALTER FUNCTION function_name ( [ { [ argmode ] [ argname ] argtype} [, ...] ] )
    action [ ... ] [ RESTRICT ];

The syntax of the action clause is as follows:

{CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT}
    | {IMMUTABLE | STABLE | VOLATILE}
    | {SHIPPABLE | NOT SHIPPABLE}
    | {NOT FENCED | FENCED}
    | [ NOT ] LEAKPROOF
    | { [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER }
    | AUTHID { DEFINER | CURRENT_USER }
    | COST execution_cost
    | ROWS result_rows
    | SET configuration_parameter { { TO | = } { value | DEFAULT } | FROM CURRENT }
    | RESET {configuration_parameter | ALL}

Rename a function:

ALTER FUNCTION funname ( [ { [ argmode ] [ argname ] argtype} [, ...] ] )
    RENAME TO new_name;

Change the owner of a function:

ALTER FUNCTION funname ( [ { [ argmode ] [ argname ] argtype} [, ...] ] )
    OWNER TO new_owner;

Change the schema of a function:

ALTER FUNCTION funname ( [ { [ argmode ] [ argname ] argtype} [, ...] ] )
    SET SCHEMA new_schema;
function_name
Specifies the name of the function to be modified.
Value range: an existing function name.

argmode
Specifies whether a parameter is an input or output parameter.
Value range: IN, OUT, IN OUT

argname
Specifies the parameter name.
Value range: a string. It must comply with the naming convention.

argtype
Specifies the parameter type.
Value range: a valid type. For details, see Data Types.

CALLED ON NULL INPUT
Declares that the function can be invoked in normal mode even if some parameter values are NULL. This is the default behavior.

STRICT
Indicates that the function always returns NULL whenever any of its arguments are NULL. If this parameter is specified, the function is not executed when there are NULL arguments; instead, a NULL result is assumed automatically.
The usage of RETURNS NULL ON NULL INPUT is the same as that of STRICT.

IMMUTABLE
Indicates that the function always returns the same result if the parameter values are the same.

STABLE
Indicates that the function cannot modify the database, and that within a single table scan it consistently returns the same result for the same parameter values, but its result may vary across SQL statements.

VOLATILE
Indicates that the function value can change even within a single table scan, so no optimization is performed.

SHIPPABLE | NOT SHIPPABLE
Indicates whether the function can be pushed down to DNs for execution.
- Functions of the IMMUTABLE type can always be pushed down to the DNs.
- Functions of the STABLE or VOLATILE type can be pushed down to DNs only if their attribute is SHIPPABLE.

LEAKPROOF
Indicates that the function has no side effects and reveals nothing about its arguments other than the returned value. LEAKPROOF can be set only by the system administrator.

EXTERNAL
(Optional) The objective is to be compatible with SQL. This feature applies to all functions, including external functions.

SECURITY INVOKER | AUTHID CURRENT_USER
Declares that the function will be executed with the permissions of the user that invokes it. This is the default behavior.
SECURITY INVOKER and AUTHID CURRENT_USER have the same effect.

SECURITY DEFINER | AUTHID DEFINER
Specifies that the function is to be executed with the permissions of the user that created it.
The usage of AUTHID DEFINER is the same as that of SECURITY DEFINER.

COST execution_cost
A positive number giving the estimated execution cost for the function.
The unit of execution_cost is cpu_operator_cost.
Value range: a positive number.

ROWS result_rows
Estimates the number of rows returned by the function. This is only allowed when the function is declared to return a set.
Value range: a positive number. The default is 1000 rows.

configuration_parameter value
Sets a specified database session parameter to a specified value. If the value is DEFAULT or RESET, the default setting is used in the new session. OFF closes the setting.
Value range: a string. It can be set to:
- DEFAULT: specifies the default value.
- FROM CURRENT: uses the value of configuration_parameter of the current session.

new_name
Specifies the new name of the function. To change a function's schema, you must also have the CREATE permission on the new schema.
Value range: a string. It must comply with the naming convention.

new_owner
Specifies the new owner of the function. To alter the owner, the new owner must also be a direct or indirect member of the new owning role, and that role must have the CREATE permission on the function's schema.
Value range: existing user roles.

new_schema
Specifies the new schema of the function.
Value range: existing schemas.
Alter the volatility of the func_add_sql function to IMMUTABLE (that is, the same result is returned if the parameters remain unchanged):

ALTER FUNCTION func_add_sql(INTEGER, INTEGER) IMMUTABLE;

Change the name of the func_add_sql function to add_two_number.

ALTER FUNCTION func_add_sql(INTEGER, INTEGER) RENAME TO add_two_number;

Change the owner of the add_two_number function to dbadmin.

ALTER FUNCTION add_two_number(INTEGER, INTEGER) OWNER TO dbadmin;
ALTER GROUP modifies the attributes of a user group.
ALTER GROUP is an alias for ALTER ROLE. It is not a standard SQL statement and is not recommended; use ALTER ROLE directly.
Add users to a group:

ALTER GROUP group_name
    ADD USER user_name [, ... ];

Remove users from a group:

ALTER GROUP group_name
    DROP USER user_name [, ... ];

Rename a group:

ALTER GROUP group_name
    RENAME TO new_name;
See the Example in ALTER ROLE.
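For illustration, a minimal sketch (the group staff and the users alice and bob are hypothetical):

ALTER GROUP staff ADD USER alice, bob;
ALTER GROUP staff DROP USER bob;
ALTER GROUP staff RENAME TO personnel;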
ALTER INDEX modifies the definition of an existing index.
There are several sub-forms:
- IF EXISTS: If the specified index does not exist, a notice instead of an error is sent.
- RENAME TO: Changes only the name of the index. There is no effect on the stored data.
- SET ( {storage_parameter = value} [, ...] ): Changes one or more index-method-specific storage parameters. Note that the index contents are not modified immediately by this command; depending on the parameter, you might need to rebuild the index with REINDEX to get the desired effects.
- RESET ( storage_parameter [, ...] ): Resets one or more index-method-specific storage parameters to their default values. As with SET, REINDEX may be needed to completely update the index.
- UNUSABLE: Sets the index on a table or index partition to be unavailable.
Rename a table index:

ALTER INDEX [ IF EXISTS ] index_name
    RENAME TO new_name;

Modify the storage parameters of an index:

ALTER INDEX [ IF EXISTS ] index_name
    SET ( {storage_parameter = value} [, ... ] );

Reset the storage parameters of an index:

ALTER INDEX [ IF EXISTS ] index_name
    RESET ( storage_parameter [, ... ] ) ;

Set an index or index partition to be unusable:

ALTER INDEX [ IF EXISTS ] index_name
    [ MODIFY PARTITION index_partition_name ] UNUSABLE;

This syntax cannot be used for column-store tables.
Rebuild an index or index partition:

ALTER INDEX index_name
    REBUILD [ PARTITION index_partition_name ];

Rename an index partition:

ALTER INDEX [ IF EXISTS ] index_name
    RENAME PARTITION index_partition_name TO new_index_partition_name;

Note: PG_OBJECT does not record the last modification time of an index for this syntax.
index_name
Specifies the name of the index to be modified.

new_name
Specifies the new name of the index.
Value range: a string that must comply with the identifier naming rules.

storage_parameter
Specifies the name of an index-method-specific parameter.

value
Specifies the new value for an index-method-specific storage parameter. This might be a number or a word depending on the parameter.

new_index_partition_name
Specifies the new name of the index partition.

index_partition_name
Specifies the name of the index partition.
Rename the ds_ship_mode_t1_index1 index to ds_ship_mode_t1_index5.

ALTER INDEX tpcds.ds_ship_mode_t1_index1 RENAME TO ds_ship_mode_t1_index5;

Set the ds_ship_mode_t1_index2 index as unusable.

ALTER INDEX tpcds.ds_ship_mode_t1_index2 UNUSABLE;

Rebuild the ds_ship_mode_t1_index2 index.

ALTER INDEX tpcds.ds_ship_mode_t1_index2 REBUILD;

Rename a partitioned table index.

ALTER INDEX tpcds.ds_customer_address_p1_index2 RENAME PARTITION CA_ADDRESS_SK_index1 TO CA_ADDRESS_SK_index4;
ALTER LARGE OBJECT modifies the definition of a large object. It can only assign a new owner to a large object.
Only the administrator or the owner of the large object can run ALTER LARGE OBJECT.

ALTER LARGE OBJECT large_object_oid
    OWNER TO new_owner;

large_object_oid
Specifies the OID of the large object.
Value range: an existing large object OID.

new_owner
Specifies the new owner of the large object.
Value range: an existing user name or role.

Examples: None.
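For illustration only, an invocation might look like the following (the OID 16435 and the user joe are hypothetical):

ALTER LARGE OBJECT 16435 OWNER TO joe;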
ALTER REDACTION POLICY modifies a data redaction policy applied to a specified table.
Only the owner of the table to which the redaction policy is applied has the permission to modify the redaction policy.
Modify the expression for a redaction policy to take effect:

ALTER REDACTION POLICY policy_name ON table_name WHEN (new_when_expression);

Enable or disable a redaction policy:

ALTER REDACTION POLICY policy_name ON table_name ENABLE | DISABLE;

Rename a redaction policy:

ALTER REDACTION POLICY policy_name ON table_name RENAME TO new_policy_name;

Modify the redaction columns of a policy:

ALTER REDACTION POLICY policy_name ON table_name
    action;

There are several clauses of action:

ADD COLUMN column_name WITH function_name ( arguments )
    | MODIFY COLUMN column_name WITH function_name ( arguments )
    | DROP COLUMN column_name
policy_name
Specifies the name of the redaction policy to be modified.

table_name
Specifies the name of the table to which the redaction policy is applied.

new_when_expression
Specifies the new expression used for the redaction policy to take effect.

ENABLE | DISABLE
Specifies whether to enable or disable the current redaction policy.

new_policy_name
Specifies the new name of the redaction policy.

column_name
Specifies the name of the table column to which the redaction policy is applied.
- To add a column, use a column name that has not been bound to any redaction function.
- To modify a column, use the name of an existing redaction column.
- To delete a column, use the name of an existing redaction column.

function_name
Specifies the name of a redaction function.

arguments
Specifies the list of arguments of the redaction function.
Modify the expression for the data redaction policy to take effect for all users.

ALTER REDACTION POLICY mask_emp ON emp WHEN (1=1);

Disable the redaction policy.

ALTER REDACTION POLICY mask_emp ON emp DISABLE;

Enable the redaction policy again.

ALTER REDACTION POLICY mask_emp ON emp ENABLE;

Change the redaction policy name to mask_emp_new.

ALTER REDACTION POLICY mask_emp ON emp RENAME TO mask_emp_new;

Add a redaction column to the policy.

ALTER REDACTION POLICY mask_emp_new ON emp ADD COLUMN name WITH mask_partial(name, '*', 1, length(name));

Modify the redaction policy for the name column. Use the MASK_FULL function to redact all data in the name column.

ALTER REDACTION POLICY mask_emp_new ON emp MODIFY COLUMN name WITH mask_full(name);

Delete an existing redaction column from the policy.

ALTER REDACTION POLICY mask_emp_new ON emp DROP COLUMN name;
ALTER RESOURCE POOL changes the attributes of a resource pool, such as its Cgroup.
Only users with the ALTER permission on a resource pool can modify it.

ALTER RESOURCE POOL pool_name
    WITH ({MEM_PERCENT=pct | CONTROL_GROUP="group_name" | ACTIVE_STATEMENTS=stmt | MAX_DOP=dop | MEMORY_LIMIT='memory_size' | IO_LIMITS=io_limits | IO_PRIORITY='io_priority'}[, ... ]);
pool_name
Specifies the name of an existing resource pool.
Value range: a string. It must comply with the naming convention.

group_name
Specifies the name of a Cgroup, for example, a Timeshare Cgroup under the DefaultClass Cgroup.
Value range: an existing Cgroup.

ACTIVE_STATEMENTS
Specifies the maximum number of statements that can be concurrently executed in a resource pool.
Value range: an integer ranging from -1 to INT_MAX.

MAX_DOP
This is a reserved parameter.
Value range: an integer ranging from -1 to INT_MAX.

MEMORY_LIMIT
Specifies the maximum memory of a resource pool.
Value range: a string, from 1KB to 2047GB.

MEM_PERCENT
Specifies the proportion of available resource pool memory to the total memory or group user memory.
The value of mem_percent for a common user is an integer ranging from 0 to 100. The default value is 0.

io_limits
Specifies the upper limit of IOPS in a resource pool.
The IOPS is counted by ones for column storage and by ten thousands for row storage.

io_priority
Specifies the I/O priority for jobs that consume many I/O resources. It takes effect when the I/O usage reaches 90%.
There are three priorities: Low, Medium, and High. If you do not want to control I/O resources, set this parameter to None, which is the default value.
The settings of io_limits and io_priority are valid only for complex jobs, such as batch import (using INSERT INTO SELECT, COPY FROM, or CREATE TABLE AS), complex queries involving over 500 MB of data on each DN, and VACUUM FULL.
Specify the "High" Timeshare Workload under "DefaultClass" as the Cgroup for a resource pool.

ALTER RESOURCE POOL pool1 WITH (CONTROL_GROUP="High");
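A hedged sketch combining several attributes in one statement (the values are illustrative):

ALTER RESOURCE POOL pool1
    WITH (ACTIVE_STATEMENTS=10, MEMORY_LIMIT='2GB', IO_PRIORITY='High');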
ALTER ROLE changes the attributes of a role.
Precautions: None
Modify the rights of a role:

ALTER ROLE role_name [ [ WITH ] option [ ... ] ];

The option clause for granting rights is as follows:

{CREATEDB | NOCREATEDB}
    | {CREATEROLE | NOCREATEROLE}
    | {INHERIT | NOINHERIT}
    | {AUDITADMIN | NOAUDITADMIN}
    | {SYSADMIN | NOSYSADMIN}
    | {USEFT | NOUSEFT}
    | {LOGIN | NOLOGIN}
    | {REPLICATION | NOREPLICATION}
    | {INDEPENDENT | NOINDEPENDENT}
    | {VCADMIN | NOVCADMIN}
    | CONNECTION LIMIT connlimit
    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD 'password'
    | [ ENCRYPTED | UNENCRYPTED ] IDENTIFIED BY 'password' [ REPLACE 'old_password' ]
    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD { 'password' | DISABLE }
    | [ ENCRYPTED | UNENCRYPTED ] IDENTIFIED BY { 'password' [ REPLACE 'old_password' ] | DISABLE }
    | VALID BEGIN 'timestamp'
    | VALID UNTIL 'timestamp'
    | RESOURCE POOL 'respool'
    | USER GROUP 'groupuser'
    | PERM SPACE 'spacelimit'
    | NODE GROUP logic_cluster_name
    | ACCOUNT { LOCK | UNLOCK }
    | PGUSER
    | AUTHINFO 'authinfo'
    | PASSWORD EXPIRATION period

Rename a role:

ALTER ROLE role_name
    RENAME TO new_name;

Set parameters for a role:

ALTER ROLE role_name [ IN DATABASE database_name ]
    SET configuration_parameter {{ TO | = } { value | DEFAULT } | FROM CURRENT};

Reset parameters for a role:

ALTER ROLE role_name
    [ IN DATABASE database_name ] RESET {configuration_parameter|ALL};
role_name
Specifies the role name.
Value range: an existing user name.

IN DATABASE database_name
Modifies the parameters of the role on the specified database only.

SET configuration_parameter
Sets parameters for the role. Session parameters modified using ALTER ROLE apply only to the specified role and take effect in the next session triggered by that role.
Valid values: the values of configuration_parameter and value are listed in SET.
- DEFAULT clears the value of configuration_parameter; the parameter will inherit the default value in new sessions generated for the role. The effect of clearing the configuration_parameter value is the same as setting it to DEFAULT.
- FROM CURRENT uses the value of configuration_parameter of the current session.

RESET configuration_parameter | ALL
Resets the specified parameter. ALL indicates that all parameter values are cleared.

PGUSER of a role cannot be modified in the current version.
For details about other parameters, see Parameter Description in CREATE ROLE.
Change the password of the role manager.

ALTER ROLE manager IDENTIFIED BY '{password}' REPLACE '{old_password}';

Alter the role manager to a system administrator.

ALTER ROLE manager SYSADMIN;

Modify the fulluser information of the LDAP authentication role.

ALTER ROLE role2 WITH LOGIN AUTHINFO 'ldapcn=role2,cn=user2,dc=func,dc=com' PASSWORD DISABLE;

Change the validity period of the login password of the role to 90 days.

ALTER ROLE role3 PASSWORD EXPIRATION 90;
ALTER ROW LEVEL SECURITY POLICY modifies an existing row-level access control policy, including the policy name and the users and expressions affected by the policy.
+Only the table owner or administrators can perform this operation.
+1 +2 +3 +4 +5 | ALTER [ ROW LEVEL SECURITY ] POLICY [ IF EXISTS ] policy_name ON table_name RENAME TO new_policy_name + +ALTER [ ROW LEVEL SECURITY ] POLICY policy_name ON table_name + [ TO { role_name | PUBLIC } [, ...] ] + [ USING ( using_expression ) ] + |
Specifies the name of a row-level access control policy to be modified.
+Specifies the name of a table to which a row-level access control policy is applied.
+Specifies the new name of a row-level access control policy.
+Specifies the users to whom the row-level access control policy applies. PUBLIC indicates that the policy affects all users.
+Specifies an expression defined for a row-level access control policy. The return value is of the boolean type.
+Change the name of the all_data_rls policy.
+ALTER ROW LEVEL SECURITY POLICY all_data_rls ON all_data RENAME TO all_data_new_rls;
+Change the users affected by the row-level access control policy.
+ALTER ROW LEVEL SECURITY POLICY all_data_new_rls ON all_data TO alice, bob;
+Modify the expression defined for the access control policy.
+ALTER ROW LEVEL SECURITY POLICY all_data_new_rls ON all_data USING (id > 100 AND role = current_user);
CREATE ROW LEVEL SECURITY POLICY, DROP ROW LEVEL SECURITY POLICY
+ALTER SCHEMA changes the attributes of a schema.
+Only the owner of the schema or a system administrator can run this statement.
+ALTER SCHEMA schema_name
+    RENAME TO new_name;
+ALTER SCHEMA schema_name
+    OWNER TO new_owner;
+ALTER SCHEMA schema_name
+    WITH PERM SPACE 'space_limit';
Indicates the name of the current schema.
+Value range: An existing schema name.
+Renames a schema.
+new_name: new name of the schema
+Value range: A string. It must comply with the naming convention.
+Changes the owner of a schema. To do this as a non-administrator, you must be a direct or indirect member of the new owning role, and that role must have CREATE permission in the database.
+new_owner: new owner of a schema
+Value range: An existing user name/role.
+Changes the storage upper limit for permanent tables in the schema. A non-administrator user who wants to change the limit must be a direct or indirect member of the owning role and must have the CREATE permission on the database.
+space_limit: new storage upper limit of the schema
+Value range: a string consisting of an integer and a unit. The unit can currently be K, M, G, T, or P. The value is parsed in KB and cannot exceed the range that can be expressed in 64 bits, that is, 1 KB to 9007199254740991 KB.
+Rename the ds schema to ds_new.
+ALTER SCHEMA ds RENAME TO ds_new;
+Change the owner of ds_new to jack.
+ALTER SCHEMA ds_new OWNER TO jack;
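+A minimal sketch of the space-limit form (the 100 GB cap is illustrative):
+-- Cap permanent-table storage in the ds_new schema at 100 GB.
+ALTER SCHEMA ds_new WITH PERM SPACE '100G';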
ALTER SEQUENCE modifies the parameters of an existing sequence.
+Change the maximum value or owning column of the sequence.
+ALTER SEQUENCE [ IF EXISTS ] name
+    [ MAXVALUE maxvalue | NO MAXVALUE | NOMAXVALUE ]
+    [ OWNED BY { table_name.column_name | NONE } ];
Change the owner of a sequence.
+ALTER SEQUENCE [ IF EXISTS ] name OWNER TO new_owner;
Sends a notification instead of an error when you are modifying a non-existing sequence.
+Maximum value of a sequence. If NO MAXVALUE is declared, the default maximum of an ascending sequence is 2^63 - 1, and that of a descending sequence is -1. NOMAXVALUE is equivalent to NO MAXVALUE.
+Associates a sequence with a specified column included in a table. In this way, the sequence will be deleted when you delete its associated field or the table where the field belongs.
+If the sequence has been associated with another table before you use this parameter, the new association will overwrite the old one.
+The associated table and sequence must be owned by the same user and in the same schema.
+If OWNED BY NONE is used, existing associations will be deleted.
+Specifies the user name of the new owner. To change the owner, you must also be a direct or indirect member of the new role, and this role must have CREATE permission on the sequence's schema.
+Modify the maximum value of serial to 200.
+ALTER SEQUENCE serial MAXVALUE 200;
+Create a table, and specify default values for the sequence.
+CREATE TABLE T1(C1 bigint default nextval('serial'));
+Change the owning column of the serial sequence to T1.C1.
+ALTER SEQUENCE serial OWNED BY T1.C1;
ALTER SERVER adds, modifies, or deletes the parameters of an existing server. You can query existing servers from the pg_foreign_server system catalog.
+Only the owner of a server or a system administrator can run this statement.
+ALTER SERVER server_name [ VERSION 'new_version' ]
+    [ OPTIONS ( { [ ADD | SET | DROP ] option ['value'] } [, ... ] ) ];
In OPTIONS, ADD, SET, and DROP are operations to be executed. If these operations are not specified, the ADD operation will be performed by default. option and value are corresponding operation parameters.
+Currently, only SET is supported on an HDFS server. ADD and DROP are not supported. The syntax for SET and DROP operations is retained for later use.
+ALTER SERVER server_name
+    OWNER TO new_owner;
+ALTER SERVER server_name
+    RENAME TO new_name;
+ALTER SERVER server_name REFRESH OPTIONS;
The server parameters to be modified are as follows:
+Specifies the name of the server to be modified.
+Specifies the new version of the server.
+Specifies the endpoint of the OBS service.
+Specifies the IP address and port number of the primary and standby nodes of the HDFS cluster.
+Specifies the HDFS cluster configuration file.
+Specifies whether data is encrypted. This parameter is available only when type is OBS. The default value is off.
+Value range:
+Indicates the access key (AK) (obtained by users from the OBS page) used for the OBS access protocol. When you create a foreign table, its AK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is OBS.
+Indicates the secret access key (SK) (obtained by users from the OBS page) used for the OBS access protocol. When you create a foreign table, its SK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is OBS.
+Specifies the endpoint of the DLI service. This parameter is available only when type is DLI.
+Specifies the AK (obtained by users from the DLI page) used for the DLI access protocol. When you create a foreign table, its AK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is DLI.
+Specifies the SK (obtained by users from the DLI page) used for the DLI access protocol. When you create a foreign table, its SK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is DLI.
+Indicates the IP address or domain name of the OBS server. This parameter is available only when type is OBS.
+Specifies the database name of a remote cluster to be connected. This parameter is used for collaborative analysis.
+Specifies the username of a remote cluster to be connected. This parameter is used for collaborative analysis.
+Specifies the user password of a remote cluster to be connected. This parameter is used for collaborative analysis.
+Specifies the new owner of the server. To change the owner, you must be the owner of the foreign server and a direct or indirect member of the new owner role, and must have the USAGE permission on the encapsulator of the external server.
+Specifies the new name of the server.
+Refreshes the HDFS configuration file. This command is executed when the configuration file is modified. If this command is not executed, an access error may be reported.
+Change the address of the hdfs_server server.
+ALTER SERVER hdfs_server OPTIONS ( SET address '10.10.0.110:25000,10.10.0.120:25000');
+Change the hdfscfgpath of the hdfs_server server.
+ALTER SERVER hdfs_server OPTIONS ( SET hdfscfgpath '/opt/bigdata/hadoop');
ALTER SESSION defines or modifies the conditions or parameters that affect the current session. Modified session parameters are kept until the current session is disconnected.
+ALTER SESSION SET [ SESSION CHARACTERISTICS AS ] TRANSACTION
+    { ISOLATION LEVEL { READ COMMITTED | READ UNCOMMITTED } | { READ ONLY | READ WRITE } } [, ...];
+ALTER SESSION SET
+    { {config_parameter { { TO | = } { value | DEFAULT } | FROM CURRENT }}
+    | CURRENT_SCHEMA [ TO | = ] { schema | DEFAULT }
+    | TIME ZONE time_zone
+    | SCHEMA schema
+    | NAMES encoding_name
+    | ROLE role_name PASSWORD 'password'
+    | SESSION AUTHORIZATION { role_name PASSWORD 'password' | DEFAULT }
+    | XML OPTION { DOCUMENT | CONTENT }
+    };
+For descriptions of the session-related parameters, see "Parameter Description" in the SET syntax.
+Create the ds schema.
+CREATE SCHEMA ds;
+Set the search path of the schema.
+SET SEARCH_PATH TO ds, public;
+Set the date/time display style to the traditional postgres format (day before month).
+SET DATESTYLE TO postgres, dmy;
+Set the character encoding of the current session to UTF8.
+ALTER SESSION SET NAMES 'UTF8';
+Set the time zone to that of Berkeley, California.
+SET TIME ZONE 'PST8PDT';
+Set the time zone to Italy.
+SET TIME ZONE 'Europe/Rome';
+Set the current schema.
+ALTER SESSION SET CURRENT_SCHEMA TO tpcds;
+Set XML OPTION to DOCUMENT.
+ALTER SESSION SET XML OPTION DOCUMENT;
+Create the role joe, and set the session role to joe.
+CREATE ROLE joe WITH PASSWORD '{password}';
+ALTER SESSION SET SESSION AUTHORIZATION joe PASSWORD '{password}';
+Switch to the default user.
+ALTER SESSION SET SESSION AUTHORIZATION default;
ALTER SYNONYM is used to modify the attribute of a synonym.
+ALTER SYNONYM synonym_name
+    OWNER TO new_owner;
Name of a synonym to be modified (optionally with schema names)
+Value range: A string compliant with the identifier naming rules
+New owner of a synonym object
+Value range: A string. It must be a valid username.
+Create synonym t1.
+CREATE OR REPLACE SYNONYM t1 FOR ot.t1;
+Create user u1.
+CREATE USER u1 PASSWORD '{password}';
+Change the owner of the synonym t1 to u1.
+ALTER SYNONYM t1 OWNER TO u1;
ALTER SYSTEM KILL SESSION ends a session.
+None
+ALTER SYSTEM KILL SESSION 'session_sid, serial' [ IMMEDIATE ];
Specifies SID and SERIAL of a session (see examples for format).
+Value range: The SIDs and SERIALs of all sessions that can be queried from the system catalog V$SESSION.
+Indicates that a session will be ended instantly after the command is executed.
+Query session information.
+SELECT sid,serial#,username FROM V$SESSION;
+
+       sid       | serial# | username
+-----------------+---------+----------
+ 140131075880720 |       0 |
+ 140131025549072 |       0 |
+ 140131073779472 |       0 |
+ 140131071678224 |       0 |
+ 140131125774096 |       0 |
+ 140131127875344 |       0 |
+ 140131113629456 |       0 |
+ 140131094742800 |       0 |
+(8 rows)
+End the session whose SID is 140131075880720.
+ALTER SYSTEM KILL SESSION '140131075880720,0' IMMEDIATE;
ALTER TABLE is used to modify tables, including modifying table definitions, renaming tables, renaming specified columns in tables, renaming table constraints, setting table schemas, enabling or disabling row-level access control, and adding or updating multiple columns.
+ALTER TABLE [ IF EXISTS ] { table_name [*] | ONLY table_name | ONLY ( table_name ) }
+    action [, ... ];
+column_clause
+    | ADD table_constraint [ NOT VALID ]
+    | ADD table_constraint_using_index
+    | VALIDATE CONSTRAINT constraint_name
+    | DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ]
+    | CLUSTER ON index_name
+    | SET WITHOUT CLUSTER
+    | SET ( { storage_parameter = value } [, ... ] )
+    | RESET ( storage_parameter [, ... ] )
+    | OWNER TO new_owner
+    | SET TABLESPACE new_tablespace
+    | SET { COMPRESS | NOCOMPRESS }
+    | DISTRIBUTE BY { REPLICATION | { HASH ( column_name [,...] ) } }
+    | TO { GROUP groupname | NODE ( nodename [, ... ] ) }
+    | ADD NODE ( nodename [, ... ] )
+    | DELETE NODE ( nodename [, ... ] )
+    | DISABLE TRIGGER [ trigger_name | ALL | USER ]
+    | ENABLE TRIGGER [ trigger_name | ALL | USER ]
+    | ENABLE REPLICA TRIGGER trigger_name
+    | ENABLE ALWAYS TRIGGER trigger_name
+    | DISABLE ROW LEVEL SECURITY
+    | ENABLE ROW LEVEL SECURITY
+    | FORCE ROW LEVEL SECURITY
+    | NO FORCE ROW LEVEL SECURITY
+    | REFRESH STORAGE
Adds a new table constraint.
+Adds a primary key or unique constraint based on an existing unique index.
+Validates a foreign key or check constraint that was previously created as NOT VALID, by scanning the table to ensure there are no rows for which the constraint is not satisfied. Nothing happens if the constraint is already marked valid.
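+A minimal sketch of this two-step workflow (the constraint name w_state_chk is illustrative):
+-- Add the check without scanning existing rows, then validate it in a separate step.
+ALTER TABLE tpcds.warehouse_t19 ADD CONSTRAINT w_state_chk CHECK (W_STATE <> '') NOT VALID;
+ALTER TABLE tpcds.warehouse_t19 VALIDATE CONSTRAINT w_state_chk;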
+Drops a table constraint.
+Selects the default index for future CLUSTER operations. It does not actually re-cluster the table.
+Removes the most recently used CLUSTER index specification from the table. This operation affects future cluster operations that do not specify an index.
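+A minimal sketch of both forms (assuming the index ds_warehouse_t1_index1 from the examples below exists):
+-- Record the index for future CLUSTER runs, then clear the setting again.
+ALTER TABLE tpcds.warehouse_t1 CLUSTER ON ds_warehouse_t1_index1;
+ALTER TABLE tpcds.warehouse_t1 SET WITHOUT CLUSTER;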
+Changes one or more storage parameters for the table.
+Resets one or more storage parameters to their defaults. As with SET, a table rewrite might be needed to update the table entirely.
+Changes the owner of the table, sequence, or view to the specified user.
+Sets the compression feature of a table. The table compression feature affects only the storage of data that is subsequently inserted in batches; existing data is not affected. As a result, setting the compression feature can leave the table with both compressed and uncompressed data.
+Changing a table's distribution mode will physically redistribute the table data based on the new distribution mode. After the distribution mode is changed, you are advised to manually run the ANALYZE statement to collect new statistics about the table.
+This syntax is only available in extended mode (when the GUC parameter support_extended_features is on). Exercise caution when enabling this mode. It is intended for tools such as internal scale-out tools; common users should not use it.
+It is only available for tools such as internal scale-out tools. Common users should not use this syntax.
+It is only available for internal scale-in tools. Common users should not use this syntax.
+Disables a single trigger specified by trigger_name, disables all triggers, or disables only user triggers (excluding internally generated constraint triggers, for example, deferrable unique constraint triggers and exclusion constraints triggers).
+Exercise caution when using this function because data integrity cannot be ensured as expected if the triggers are not executed.
+Enables a single trigger specified by trigger_name, enables all triggers, or enables only user triggers.
+Determines that the trigger firing mechanism is affected by the configuration variable session_replication_role. When the replication role is origin (default value) or local, a simple trigger is fired.
+When ENABLE REPLICA is configured for a trigger, it is fired only when the session is in replica mode.
+Determines that all triggers are fired regardless of the current replication mode.
+Enables or disables row-level access control for a table.
+If row-level access control is enabled for a data table but no row-level access control policy is defined, the row-level access to the data table is not affected. If row-level access control for a table is disabled, the row-level access to the table is not affected even if a row-level access control policy has been defined. For details, see CREATE ROW LEVEL SECURITY POLICY.
+Forcibly enables or disables row-level access control for a table.
+By default, the table owner is not affected by the row-level access control feature. However, if row-level access control is forcibly enabled, the table owner (excluding system administrators) will be affected. System administrators are not affected by any row-level access control policies.
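+A minimal sketch (assuming the all_data table from the row-level access control examples above):
+-- Enable row-level access control, and additionally apply policies to the table owner.
+ALTER TABLE all_data ENABLE ROW LEVEL SECURITY;
+ALTER TABLE all_data FORCE ROW LEVEL SECURITY;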
+Changes the local hot partitions that meet the criteria defined by the rules specified in the storage_policy parameter of an OBS hot or cold table to the cold partitions stored in the OBS.
+For example, if storage_policy is set to 'LMT:10' for an OBS hot or cold table when it is created, the partitions that are not updated within the last 10 days are switched to cold partitions in the OBS.
+ADD [ COLUMN ] column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ]
+    | MODIFY column_name data_type
+    | MODIFY column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ]
+    | MODIFY column_name [ CONSTRAINT constraint_name ] NULL
+    | DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
+    | ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]
+    | ALTER [ COLUMN ] column_name { SET DEFAULT expression | DROP DEFAULT }
+    | ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
+    | ALTER [ COLUMN ] column_name SET STATISTICS [PERCENT] integer
+    | ADD STATISTICS (( column_1_name, column_2_name [, ...] ))
+    | DELETE STATISTICS (( column_1_name, column_2_name [, ...] ))
+    | ALTER [ COLUMN ] column_name SET ( { attribute_option = value } [, ... ] )
+    | ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )
+    | ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
Adds a column to a table. If a column is added with ADD COLUMN, all existing rows in the table are initialized with the column's default value (NULL if no DEFAULT clause is specified).
+Adds one or more columns to the table.
+Changes the data type of an existing column in the table. Only type conversions within the same category (numeric, character string, or time) are allowed.
+Adds a NOT NULL constraint to a column of a table. Currently, this clause is unavailable to column-store tables.
+Deletes the NOT NULL constraint on a column in the table.
+Drops a column from a table. Index and constraint related to the column are automatically dropped. If an object not belonging to the table depends on the column, CASCADE must be specified, such as foreign key reference and view.
+The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a NULL value for the column. Therefore, column deletion takes a short period of time but does not immediately release the table space on the disks, because the space occupied by the deleted column is not reclaimed. The space will be reclaimed when VACUUM is executed.
+To change the data type of a table column (data in the distribution column is not allowed to change types), only the type conversion of the same category (between values, strings, and time) is allowed. Indexes and simple table constraints on the column will automatically use the new data type by reparsing the originally supplied expression.
+ALTER ... TYPE requires the entire table to be rewritten. This is sometimes an advantage, because the rewrite frees unused space in the table. For example, to reclaim the space occupied by a deleted column, the fastest method is to run the following command.
+ALTER TABLE table ALTER COLUMN anycol TYPE anytype;
+In this command, anycol is any remaining column in the table and anytype is that column's existing type. The command does not change the visible content of the table, but it forces a rewrite, which discards the data that is no longer used.
+Sets or removes the default value for a column. The default values only apply to subsequent INSERT commands; they do not cause rows already in the table to change. Defaults can also be created for views, in which case they are inserted into INSERT statements on the view before the view's ON INSERT rule is applied.
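+A minimal sketch of both forms (the column and default value are illustrative):
+-- Future inserts that omit W_STATE receive 'CA'; dropping the default reverts to NULL.
+ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_STATE SET DEFAULT 'CA';
+ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_STATE DROP DEFAULT;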
+Changes whether a column is marked to allow NULL values or to reject NULL values. You can only use SET NOT NULL when the column contains no NULL values.
+Specifies the per-column statistics-gathering target for subsequent ANALYZE operations. The value ranges from 0 to 10000. Set it to -1 to revert to using the default system statistics target.
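+For instance (the target value 1000 is illustrative):
+-- Gather a larger statistics sample for W_WAREHOUSE_SK at the next ANALYZE.
+ALTER TABLE tpcds.warehouse_t1 ALTER COLUMN W_WAREHOUSE_SK SET STATISTICS 1000;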
+Adds or deletes the declaration for collecting multi-column statistics, so that multi-column statistics are gathered as needed when ANALYZE is performed on the table or the entire database. Statistics can be collected for a maximum of 32 columns at a time. The declaration cannot be added or deleted for system catalogs or foreign tables.
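+A minimal sketch (the column pair is illustrative):
+-- Declare a two-column statistics target, then collect it with ANALYZE.
+ALTER TABLE tpcds.warehouse_t19 ADD STATISTICS ((W_STATE, W_GOODS_CATEGORY));
+ANALYZE tpcds.warehouse_t19;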
+ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )
+Sets or resets per-attribute options.
+Currently, the only defined per-attribute options are n_distinct and n_distinct_inherited. n_distinct affects the statistics of the table itself, while n_distinct_inherited affects the statistics of the table and its subtables. Currently, only SET/RESET n_distinct is supported; SET/RESET n_distinct_inherited is forbidden.
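+For instance (the value 1000 is illustrative):
+-- Tell the optimizer W_WAREHOUSE_SK has roughly 1000 distinct values; RESET removes the override.
+ALTER TABLE tpcds.warehouse_t1 ALTER COLUMN W_WAREHOUSE_SK SET (n_distinct = 1000);
+ALTER TABLE tpcds.warehouse_t1 ALTER COLUMN W_WAREHOUSE_SK RESET (n_distinct);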
+Sets the storage mode for a column. This clause specifies whether this column is held inline or in a secondary TOAST table, and whether the data should be compressed. This statement can only be used for row-based tables. SET STORAGE only sets the strategy to be used for future table operations.
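+A minimal sketch (assuming tpcds.warehouse_t19 is a row-store table; the column choice is illustrative):
+-- Store long W_GOODS_CATEGORY values out of line, uncompressed, in future operations.
+ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY SET STORAGE EXTERNAL;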
+[ CONSTRAINT constraint_name ]
+    { NOT NULL |
+      NULL |
+      CHECK ( expression ) |
+      DEFAULT default_expr |
+      UNIQUE index_parameters |
+      PRIMARY KEY index_parameters }
+    [ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+[ DELTA | PREFIX | DICTIONARY | NUMSTR | NOCOMPRESS ]
+[ CONSTRAINT constraint_name ]
+    { UNIQUE | PRIMARY KEY } USING INDEX index_name
+    [ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+[ CONSTRAINT constraint_name ]
+    { CHECK ( expression ) |
+      UNIQUE ( column_name [, ... ] ) index_parameters |
+      PRIMARY KEY ( column_name [, ... ] ) index_parameters }
+    [ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+[ WITH ( { storage_parameter = value } [, ... ] ) ]
+    [ USING INDEX TABLESPACE tablespace_name ]
+ALTER TABLE [ IF EXISTS ] table_name
+    RENAME TO new_table_name;
+ALTER TABLE [ IF EXISTS ] { table_name [*] | ONLY table_name | ONLY ( table_name ) }
+    RENAME [ COLUMN ] column_name TO new_column_name;
+ALTER TABLE { table_name [*] | ONLY table_name | ONLY ( table_name ) }
+    RENAME CONSTRAINT constraint_name TO new_constraint_name;
+ALTER TABLE [ IF EXISTS ] table_name
+    SET SCHEMA new_schema;
+ALTER TABLE [ IF EXISTS ] table_name
+    ADD ( { column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ] } [, ...] );
+ALTER TABLE [ IF EXISTS ] table_name
+    MODIFY ( { column_name data_type | column_name [ CONSTRAINT constraint_name ] NOT NULL [ ENABLE ] | column_name [ CONSTRAINT constraint_name ] NULL } [, ...] );
+Sends a notification instead of an error if the specified table does not exist. The notification states that the table you are querying does not exist.
+table_name is the name of table that you need to modify.
+If ONLY is specified, only the table is modified. If ONLY is not specified, the table and all subtables will be modified. You can add the asterisk (*) option following the table name to specify that all subtables are scanned, which is the default operation.
+Specifies the name of an existing constraint to drop.
+Specifies the name of this index.
+Specifies the name of a storage parameter.
+Specifies the name of the new table owner.
+Specifies the new name of the tablespace to which the table belongs.
+Specifies the name of a new or an existing column.
+Specifies the type of a new column or a new type of an existing column.
+Specifies the compress options of the table, only available for row-based tables. The clause specifies the algorithm preferentially used by the column.
+Specifies the collation rule name of a column. The optional COLLATE clause specifies a collation for the new column; if omitted, the collation is the default for the new column.
+A USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from the old to new type.
+USING in ALTER TYPE can specify any expression involving the old values of the row; that is, it can refer to any columns other than the one being converted. This allows very general conversions to be done with the ALTER TYPE syntax. Because of this flexibility, the USING expression is not applied to the column's default value (if any); the result might not be a constant expression as required for a default. This means that when there is no implicit or assignment cast from old to new type, ALTER TYPE might fail to convert the default even though a USING clause is supplied. In such cases, drop the default with DROP DEFAULT, perform the ALTER TYPE, and then use SET DEFAULT to add a suitable new default. Similar considerations apply to indexes and constraints involving the column.
+Sets whether the column allows null values.
+Specifies the constant value of an integer with a sign. If PERCENT is used, the range of integer is from 0 to 100.
+Specifies an attribute option.
+Specifies a column storage mode.
+Specifies an expression that new or updated rows must satisfy for an insert or update operation to succeed. Rows for which the expression evaluates to TRUE succeed. If any row of an insert or update operation produces a FALSE result, an error is raised and the insert or update does not alter the database.
+A check constraint specified as a column constraint should reference only the column's values, while an expression appearing in a table constraint can reference multiple columns.
+Currently, CHECK expression does not include subqueries and cannot use variables apart from the current column.
+Assigns a default data value for a column.
+The data type of the default expression must match the data type of the column.
+The default expression will be used in any insert operation that does not specify a value for the column. If there is no default value for a column, then the default value is NULL.
+UNIQUE ( column_name [, ... ] ) index_parameters
+The UNIQUE constraint specifies that a group of one or more columns of a table can contain only unique values.
+PRIMARY KEY ( column_name [, ... ] ) index_parameters
+The primary key constraint specifies that one or more columns of a table must contain unique (non-duplicate) and non-null values. This parameter is valid only for columns with the NOT NULL constraint.
+Sets whether the constraint is deferrable. This option is unavailable to column-store tables.
+Specifies an optional storage parameter for a table or an index.
+Specifies the new table name.
+Specifies the new name of a specific column in a table.
+Specifies the new name of a table constraint.
+Specifies the new schema name.
+Automatically drops objects that depend on the dropped column or constraint (for example, views referencing the column).
+Refuses to drop the column or constraint if there are any dependent objects. This is the default behavior.
+Specifies the schema name of a table.
+Move a table to another schema.
+ALTER TABLE tpcds.warehouse_t19 SET SCHEMA joe;
+When renaming an existing table, the new table name cannot be prefixed with the schema name of the original table.
+ALTER TABLE joe.warehouse_t19 RENAME TO warehouse_t23;
+Change the distribution mode of the tpcds.warehouse_t22 table to REPLICATION.
+ALTER TABLE tpcds.warehouse_t22 DISTRIBUTE BY REPLICATION;
+Change the distribution column of the tpcds.warehouse_t22 table to W_WAREHOUSE_SK.
+ALTER TABLE tpcds.warehouse_t22 DISTRIBUTE BY HASH(W_WAREHOUSE_SK);
+Switch the storage format of a column-store table.
+ALTER TABLE tpcds.warehouse_t18 SET (COLVERSION = 1.0);
+Disable the delta table function of the column-store table.
+ALTER TABLE tpcds.warehouse_t21 SET (ENABLE_DELTA = OFF);
+Disable the SKIP_FPI_HINT function of the table.
+ALTER TABLE tpcds.warehouse_t22 SET (SKIP_FPI_HINT = FALSE);
+Change the data temperature for a single table.
+ALTER TABLE tpcds.warehouse_t23 REFRESH STORAGE;
+Change the data temperature for multiple tables in batches.
+SELECT pg_refresh_storage();
+Create an index ds_warehouse_t1_index1 for the table tpcds.warehouse_t1. Then add a primary key constraint, renaming the created index.
+CREATE UNIQUE INDEX ds_warehouse_t1_index1 ON tpcds.warehouse_t1(W_WAREHOUSE_SK);
+ALTER TABLE tpcds.warehouse_t1 ADD CONSTRAINT ds_warehouse_t1_index2 PRIMARY KEY USING INDEX ds_warehouse_t1_index1;
+Delete the primary key ds_warehouse_t1_index2 from the table tpcds.warehouse_t1.
+ALTER TABLE tpcds.warehouse_t1 DROP CONSTRAINT ds_warehouse_t1_index2;
+If no partial clusters have been specified in a column-store table, add a partial cluster to the table.
+ALTER TABLE tpcds.warehouse_t17 ADD PARTIAL CLUSTER KEY(W_WAREHOUSE_SK);
+Delete a partial cluster column from the column-store table.
+ALTER TABLE tpcds.warehouse_t17 DROP CONSTRAINT warehouse_t17_cluster;
+Add a Not-Null constraint to an existing column.
+ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY SET NOT NULL;
+Remove the Not-Null constraint from an existing column.
+ALTER TABLE tpcds.warehouse_t19 ALTER COLUMN W_GOODS_CATEGORY DROP NOT NULL;
+Add a check constraint to the tpcds.warehouse_t19 table.
+ALTER TABLE tpcds.warehouse_t19 ADD CONSTRAINT W_CONSTR_KEY4 CHECK (W_STATE <> '');
+Add a primary key to the tpcds.warehouse_t1 table.
+ALTER TABLE tpcds.warehouse_t1 ADD PRIMARY KEY(W_WAREHOUSE_SK);
+Add a varchar column to the tpcds.warehouse_t19 table.
+ALTER TABLE tpcds.warehouse_t19 ADD W_GOODS_CATEGORY varchar(30);
+Use one statement to alter the types of two existing columns.
+ALTER TABLE tpcds.warehouse_t19
+ALTER COLUMN W_GOODS_CATEGORY TYPE varchar(80),
+ALTER COLUMN W_STREET_NAME TYPE varchar(100);
+This statement is equivalent to the preceding statement.
+ALTER TABLE tpcds.warehouse_t19 MODIFY (W_GOODS_CATEGORY varchar(80), W_STREET_NAME varchar(100));
+Delete a column from the tpcds.warehouse_t23 table.
+ALTER TABLE tpcds.warehouse_t23 DROP COLUMN W_STREET_NAME;
ALTER TABLE PARTITION modifies table partitioning, including adding, deleting, splitting, merging partitions, and modifying partition attributes.
+ALTER TABLE [ IF EXISTS ] { table_name [*] | ONLY table_name | ONLY ( table_name ) }
+    action [, ... ];
+move_clause |
+    exchange_clause |
+    row_clause |
+    merge_clause |
+    modify_clause |
+    split_clause |
+    add_clause |
+    drop_clause
+MOVE PARTITION { partition_name | FOR ( partition_value [, ...] ) } TABLESPACE tablespacename
+EXCHANGE PARTITION { ( partition_name ) | FOR ( partition_value [, ...] ) }
+    WITH TABLE { [ ONLY ] ordinary_table_name | ordinary_table_name * | ONLY ( ordinary_table_name ) }
+    [ { WITH | WITHOUT } VALIDATION ] [ VERBOSE ]
The ordinary table and the partitioned table whose data is to be exchanged must meet the following requirements:
+When the execution is complete, the data and tablespace of the ordinary table and the partitioned table are exchanged. In this case, statistics about the ordinary table and the partitioned table become unreliable. Both tables should be analyzed again.
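+A minimal sketch of the exchange (assuming an ordinary table web_returns_swap with exactly the same definition as the partitioned table; the names are illustrative):
+-- Swap partition P6 with the ordinary table, skipping the range check, then refresh statistics on both.
+ALTER TABLE tpcds.web_returns_p1 EXCHANGE PARTITION (P6) WITH TABLE web_returns_swap WITHOUT VALIDATION;
+ANALYZE tpcds.web_returns_p1;
+ANALYZE web_returns_swap;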
+{ ENABLE | DISABLE } ROW MOVEMENT
+MERGE PARTITIONS { partition_name } [, ...] INTO PARTITION partition_name
+MODIFY PARTITION partition_name { UNUSABLE LOCAL INDEXES | REBUILD UNUSABLE LOCAL INDEXES }
+SPLIT PARTITION { partition_name | FOR ( partition_value [, ...] ) } { split_point_clause | no_split_point_clause }
+AT ( partition_value ) INTO ( PARTITION partition_name , PARTITION partition_name )
+The split point must fall within the key range of the partition being split. A split point splits one partition into exactly two.
+INTO { ( partition_less_than_item [, ...] ) | ( partition_start_end_item [, ...] ) }
+PARTITION partition_name VALUES LESS THAN ( { partition_value | MAXVALUE } [, ...] )
+    [ TABLESPACE tablespacename ]
+PARTITION partition_name {
+    {START(partition_value) END (partition_value) EVERY (interval_value)} |
+    {START(partition_value) END ({partition_value | MAXVALUE})} |
+    {START(partition_value)} |
+    {END({partition_value | MAXVALUE})}
+} [TABLESPACE tablespace_name]
+ADD { partition_less_than_item | partition_start_end_item }
+DROP PARTITION { partition_name | FOR ( partition_value [, ...] ) }
+ALTER TABLE [ IF EXISTS ] { table_name [*] | ONLY table_name | ONLY ( table_name ) }
+    RENAME PARTITION { partition_name | FOR ( partition_value [, ...] ) } TO partition_new_name;
Specifies the name of a partitioned table.
+Value range: an existing partitioned table name
+Specifies the name of a partition.
+Value range: an existing partition name
+Specifies the key value of a partition.
+The value specified by PARTITION FOR ( partition_value [, ...] ) can uniquely identify a partition.
+Value range: value range of the partition key for the partition to be renamed
+Sets all the indexes unusable in the partition.
+Rebuilds all the indexes in the partition.
+Specifies the row movement switch.
+If an UPDATE changes a tuple's partition key value, the tuple may belong to a different partition. This switch determines whether such an UPDATE reports an error or moves the tuple to its new partition.
+Valid value:
+The switch is disabled by default.
+Specifies the name of the ordinary table whose data is to be migrated.
+Value range: an existing ordinary table name
+Checks whether the ordinary table data meets the specified partition key range of the partition to be migrated.
+Valid value:
+The default value is WITH.
+The check is time consuming, especially when the data volume is large. Therefore, use WITHOUT only when you are sure that the data in the ordinary table falls within the partition key range of the partition to be exchanged.
+When VALIDATION is WITH, if the ordinary table contains data that is out of the partition key range, the data is inserted into the correct partition. If there is no partition the data can be routed to, an error is reported.
+VERBOSE can be specified only when VALIDATION is WITH.
+Specifies the new name of a partition.
+Value range: a string. It must comply with the naming convention.
+Delete partition P8.
+ALTER TABLE tpcds.web_returns_p1 DROP PARTITION P8;
+Add partition P8, with the partition key WR_RETURNED_DATE_SK ranging from 2453005 to 2453105.
+ALTER TABLE tpcds.web_returns_p1 ADD PARTITION P8 VALUES LESS THAN (2453105);
+Add partition P9, with the partition key WR_RETURNED_DATE_SK ranging from 2453105 to MAXVALUE.
+ALTER TABLE tpcds.web_returns_p1 ADD PARTITION P9 VALUES LESS THAN (MAXVALUE);
+Rename the P7 partition as P10.
+ALTER TABLE tpcds.web_returns_p1 RENAME PARTITION P7 TO P10;
+Rename the P6 partition as P11.
+ALTER TABLE tpcds.web_returns_p1 RENAME PARTITION FOR (2452639) TO P11;
+Query rows in the P10 partition.
+SELECT count(*) FROM tpcds.web_returns_p1 PARTITION (P10);
+ count
+--------
+  9362
+(1 row)
+Split the P8 partition at 2453010.
+ALTER TABLE tpcds.web_returns_p2 SPLIT PARTITION P8 AT (2453010) INTO
+(
+    PARTITION P9,
+    PARTITION P10
+);
+Merge the P6 and P7 partitions into one.
+ALTER TABLE tpcds.web_returns_p2 MERGE PARTITIONS P6, P7 INTO PARTITION P8;
+Modify the row movement attribute of a partitioned table.
+ALTER TABLE tpcds.web_returns_p2 DISABLE ROW MOVEMENT;
+Add partitions [5000, 5300), [5300, 5600), [5600, 5900), and [5900, 6000).
+ALTER TABLE tpcds.startend_pt ADD PARTITION p6 START(5000) END(6000) EVERY(300);
+Add the partition p7, specified by MAXVALUE.
+ALTER TABLE tpcds.startend_pt ADD PARTITION p7 END(MAXVALUE);
+Rename the partition where 5950 is located to p71.
+ALTER TABLE tpcds.startend_pt RENAME PARTITION FOR(5950) TO p71;
+Split the partition [4000, 5000) where 4500 is located.
+ALTER TABLE tpcds.startend_pt SPLIT PARTITION FOR(4500) INTO(PARTITION q1 START(4000) END(5000) EVERY(250));
ALTER TEXT SEARCH CONFIGURATION modifies the definition of a text search configuration. You can modify its mappings from token types to dictionaries, change the configuration's name or owner, or modify the parameters.
+The ADD MAPPING FOR form installs a list of dictionaries to be consulted for the specified token types; an error will be generated if there is already a mapping for any of the token types.
+The ALTER MAPPING FOR form removes existing mapping for those token types and then adds specified mappings.
+The ALTER MAPPING REPLACE ... WITH ... and ALTER MAPPING FOR ... REPLACE ... WITH ... options replace old_dictionary with new_dictionary. Note that the update succeeds only when pg_ts_config_map has tuples corresponding to maptokentype and old_dictionary; if the update fails, no message is returned.
+The DROP MAPPING FOR form deletes all dictionaries for the specified token types from the text search configuration. If IF EXISTS is not specified and a token type mapping specified by DROP MAPPING FOR does not exist in the text search configuration, an error is reported.
+ALTER TEXT SEARCH CONFIGURATION name
+    ADD MAPPING FOR token_type [, ... ] WITH dictionary_name [, ... ];
+ALTER TEXT SEARCH CONFIGURATION name
+    ALTER MAPPING FOR token_type [, ... ] REPLACE old_dictionary WITH new_dictionary;
+ALTER TEXT SEARCH CONFIGURATION name
+    ALTER MAPPING FOR token_type [, ... ] WITH dictionary_name [, ... ];
+ALTER TEXT SEARCH CONFIGURATION name
+    ALTER MAPPING REPLACE old_dictionary WITH new_dictionary;
+ALTER TEXT SEARCH CONFIGURATION name
+    DROP MAPPING [ IF EXISTS ] FOR token_type [, ... ];
+ALTER TEXT SEARCH CONFIGURATION name OWNER TO new_owner;
+ALTER TEXT SEARCH CONFIGURATION name RENAME TO new_name;
+ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA new_schema;
+ALTER TEXT SEARCH CONFIGURATION name SET ( { configuration_option = value } [, ...] );
+ALTER TEXT SEARCH CONFIGURATION name RESET ( { configuration_option } [, ...] );
Specifies the name (optionally schema-qualified) of an existing text search configuration.
+Specifies the name of a token type that is emitted by the configuration's parser. For details, see Parsers.
+Specifies the name of a text search dictionary to be consulted for the specified token types. If multiple dictionaries are listed, they are consulted in the specified order.
+Specifies the name of a text search dictionary to be replaced in the mapping.
+Specifies the name of a text search dictionary to be substituted for old_dictionary.
+Specifies the new owner of the text search configuration.
+Specifies the new name of the text search configuration.
+Specifies the new schema for the text search configuration.
+Text search configuration option. For details, see CREATE TEXT SEARCH CONFIGURATION.
+Specifies the value of text search configuration option.
+Add a type mapping for the text search configuration ngram1.
+ALTER TEXT SEARCH CONFIGURATION ngram1 ADD MAPPING FOR multisymbol WITH simple;
+Change the owner of the text search configuration.
+ALTER TEXT SEARCH CONFIGURATION ngram1 OWNER TO joe;
+Modify the schema of the text search configuration.
+ALTER TEXT SEARCH CONFIGURATION ngram1 SET SCHEMA joe;
+Rename a text search configuration.
+ALTER TEXT SEARCH CONFIGURATION joe.ngram1 RENAME TO ngram_1;
+Delete a type mapping.
+ALTER TEXT SEARCH CONFIGURATION joe.ngram_1 DROP MAPPING IF EXISTS FOR multisymbol;
+Add a text search configuration string mapping.
+ALTER TEXT SEARCH CONFIGURATION english_1 ADD MAPPING FOR word WITH simple,english_stem;
+Add another text search configuration string mapping.
+ALTER TEXT SEARCH CONFIGURATION english_1 ADD MAPPING FOR email WITH english_stem, french_stem;
+Modify a text search configuration string mapping.
+ALTER TEXT SEARCH CONFIGURATION english_1 ALTER MAPPING REPLACE french_stem with german_stem;
+Query information about the text search configuration.
+SELECT b.cfgname,a.maptokentype,a.mapseqno,a.mapdict,c.dictname FROM pg_ts_config_map a,pg_ts_config b, pg_ts_dict c WHERE a.mapcfg=b.oid AND a.mapdict=c.oid AND b.cfgname='english_1' ORDER BY 1,2,3,4,5;
+  cfgname  | maptokentype | mapseqno | mapdict |   dictname
+-----------+--------------+----------+---------+--------------
+ english_1 |            2 |        1 |    3765 | simple
+ english_1 |            2 |        2 |   12960 | english_stem
+ english_1 |            4 |        1 |   12960 | english_stem
+ english_1 |            4 |        2 |   12966 | german_stem
+(4 rows)
ALTER TEXT SEARCH DICTIONARY modifies the definition of a full-text retrieval dictionary, including its parameters, name, owner, and schema.
+ALTER TEXT SEARCH DICTIONARY name (
+    option [ = value ] [, ... ]
+);
+ALTER TEXT SEARCH DICTIONARY name RENAME TO new_name;
+ALTER TEXT SEARCH DICTIONARY name SET SCHEMA new_schema;
+ALTER TEXT SEARCH DICTIONARY name OWNER TO new_owner;
Specifies the name of an existing dictionary. (If you do not specify a schema name, the dictionary in the current schema will be used.)
+Value range: name of an existing dictionary
+Specifies the name of a parameter to be modified. Each dictionary type has a template that defines its custom parameters. Parameters take effect regardless of the order in which they are set. For details about the parameters, see option.
+Specifies the new value of a parameter. If = and value are omitted, the previous settings of the parameter will be deleted and the default value will be used.
+Value range: valid values defined for the parameter
+Specifies the new name of a dictionary.
+Value range: a string, which complies with the identifier naming convention. A value can contain a maximum of 63 characters.
+Specifies the new owner of a dictionary.
+Value range: an existing user name
+Specifies the new schema of a dictionary.
+Value range: an existing schema name
+Modify the definition of stop words in a Snowball dictionary. Retain the values of the other parameters.
+ALTER TEXT SEARCH DICTIONARY my_dict ( StopWords = newrussian, FilePath = 'obs://bucket_name/path accesskey=ak secretkey=sk region=rg' );
+Modify the Language parameter in a Snowball dictionary and delete the definition of stop words.
+ALTER TEXT SEARCH DICTIONARY my_dict ( Language = dutch, StopWords );
+Update the dictionary definition without changing any other content.
+ALTER TEXT SEARCH DICTIONARY my_dict ( dummy );
ALTER TRIGGER modifies the definition of a trigger.
+Only the owner of a table where a trigger is created and system administrators can run the ALTER TRIGGER statement.
+ALTER TRIGGER trigger_name ON table_name RENAME TO new_name;
Specifies the name of the trigger to be modified.
+Value range: an existing trigger
+Specifies the name of the table where the trigger to be modified is located.
+Value range: an existing table having a trigger
+Specifies the new name after modification.
+Value range: a string that complies with the identifier naming convention. A value contains a maximum of 63 characters and cannot be the same as other triggers on the same table.
+Rename the trigger delete_trigger.
+ALTER TRIGGER delete_trigger ON test_trigger_src_tbl RENAME TO delete_trigger_renamed;
+Disable the trigger insert_trigger.
+ALTER TABLE test_trigger_src_tbl DISABLE TRIGGER insert_trigger;
+Disable all triggers on the test_trigger_src_tbl table.
+ALTER TABLE test_trigger_src_tbl DISABLE TRIGGER ALL;
ALTER TYPE modifies the definition of a type.
+ALTER TYPE name action [, ... ]
+ALTER TYPE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
+ALTER TYPE name RENAME ATTRIBUTE attribute_name TO new_attribute_name [ CASCADE | RESTRICT ]
+ALTER TYPE name RENAME TO new_name
+ALTER TYPE name SET SCHEMA new_schema
+ALTER TYPE name ADD VALUE [ IF NOT EXISTS ] new_enum_value [ { BEFORE | AFTER } neighbor_enum_value ]
+ALTER TYPE name RENAME VALUE existing_enum_value TO new_enum_value
+
+where action is one of:
+    ADD ATTRIBUTE attribute_name data_type [ COLLATE collation ] [ CASCADE | RESTRICT ]
+    DROP ATTRIBUTE [ IF EXISTS ] attribute_name [ CASCADE | RESTRICT ]
+    ALTER ATTRIBUTE attribute_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ CASCADE | RESTRICT ]
+ALTER TYPE name ADD ATTRIBUTE attribute_name data_type [ COLLATE collation ] [ CASCADE | RESTRICT ]
+ALTER TYPE name DROP ATTRIBUTE [ IF EXISTS ] attribute_name [ CASCADE | RESTRICT ]
+ALTER TYPE name ALTER ATTRIBUTE attribute_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ CASCADE | RESTRICT ]
+ALTER TYPE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
+ALTER TYPE name RENAME TO new_name
+ALTER TYPE name RENAME ATTRIBUTE attribute_name TO new_attribute_name [ CASCADE | RESTRICT ]
+ALTER TYPE name SET SCHEMA new_schema
+ALTER TYPE name ADD VALUE [ IF NOT EXISTS ] new_enum_value [ { BEFORE | AFTER } neighbor_enum_value ]
+ALTER TYPE name RENAME VALUE existing_enum_value TO new_enum_value
Specifies the name of an existing type that needs to be modified (schema-qualified).
+Specifies the new name of the type.
+Specifies the new owner of the type.
+Specifies the new schema of the type.
+Specifies the name of the attribute to be added, modified, or deleted.
+Specifies the new name of the attribute to be renamed.
+Specifies the data type of the attribute to be added, or the new type of the attribute to be modified.
+Specifies a new enumerated value. It is a non-empty string with a maximum length of 64 bytes.
+Specifies an existing enumerated value before or after which a new enumerated value will be added.
+Specifies an enumerated value to be changed. It is a non-empty string with a maximum length of 64 bytes.
+Determines that the type to be modified, its associated records, and subtables that inherit the type will all be updated.
+Refuses to update the association record of the modified type. This is the default.
+Rename the data type.
+ALTER TYPE compfoo RENAME TO compfoo1;
+Change the owner of the user-defined type compfoo1 to usr1.
+ALTER TYPE compfoo1 OWNER TO usr1;
+Change the schema of the user-defined type compfoo1 to usr1.
+ALTER TYPE compfoo1 SET SCHEMA usr1;
+Add the f3 attribute to the compfoo1 data type.
+ALTER TYPE compfoo1 ADD ATTRIBUTE f3 int;
+Add a tag value to the enumeration type bugstatus.
+ALTER TYPE bugstatus ADD VALUE IF NOT EXISTS 'regress' BEFORE 'closed';
+Rename a tag value of the enumeration type bugstatus.
+ALTER TYPE bugstatus RENAME VALUE 'create' TO 'new';
ALTER USER modifies the attributes of a database user.
+Session parameters modified by ALTER USER apply to a specified user and take effect in the next session.
+ALTER USER user_name [ [ WITH ] option [ ... ] ];
The option clause is as follows:
+{ CREATEDB | NOCREATEDB }
+    | { CREATEROLE | NOCREATEROLE }
+    | { INHERIT | NOINHERIT }
+    | { AUDITADMIN | NOAUDITADMIN }
+    | { SYSADMIN | NOSYSADMIN }
+    | { USEFT | NOUSEFT }
+    | { LOGIN | NOLOGIN }
+    | { REPLICATION | NOREPLICATION }
+    | { INDEPENDENT | NOINDEPENDENT }
+    | { VCADMIN | NOVCADMIN }
+    | CONNECTION LIMIT connlimit
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD { 'password' | DISABLE }
+    | [ ENCRYPTED | UNENCRYPTED ] IDENTIFIED BY { 'password' [ REPLACE 'old_password' ] | DISABLE }
+    | VALID BEGIN 'timestamp'
+    | VALID UNTIL 'timestamp'
+    | RESOURCE POOL 'respool'
+    | USER GROUP 'groupuser'
+    | PERM SPACE 'spacelimit'
+    | TEMP SPACE 'tmpspacelimit'
+    | SPILL SPACE 'spillspacelimit'
+    | NODE GROUP logic_cluster_name
+    | ACCOUNT { LOCK | UNLOCK }
+    | PGUSER
+    | AUTHINFO 'authinfo'
+    | PASSWORD EXPIRATION period
+ALTER USER user_name
+    RENAME TO new_name;
+ALTER USER user_name
+    SET configuration_parameter { { TO | = } { value | DEFAULT } | FROM CURRENT };
+ALTER USER user_name
+    RESET { configuration_parameter | ALL };
Specifies the current user name.
+Value range: an existing user name
+Indicates a new password.
+A password must:
+Value range: a string
+Indicates the old password.
+PGUSER of a user cannot be modified in the current version.
+For details about other parameters, see "Parameter Description" in CREATE ROLE and ALTER ROLE.
+Change the login password of user jim.
+ALTER USER jim IDENTIFIED BY '{password}' REPLACE '{old_password}';
+Add the CREATEROLE permission to user jim.
+ALTER USER jim CREATEROLE;
+Set enable_seqscan to on (the setting will take effect in the next session).
+ALTER USER jim SET enable_seqscan TO on;
+Reset the enable_seqscan parameter for user jim.
+ALTER USER jim RESET enable_seqscan;
+Lock the jim account.
+ALTER USER jim ACCOUNT LOCK;
ALTER VIEW modifies all auxiliary attributes of a view. (To modify the query definition of a view, use CREATE OR REPLACE VIEW.)
+ALTER VIEW [ IF EXISTS ] view_name
+    ALTER [ COLUMN ] column_name SET DEFAULT expression;
+ALTER VIEW [ IF EXISTS ] view_name
+    ALTER [ COLUMN ] column_name DROP DEFAULT;
+ALTER VIEW [ IF EXISTS ] view_name
+    OWNER TO new_owner;
+ALTER VIEW [ IF EXISTS ] view_name
+    RENAME TO new_name;
+ALTER VIEW [ IF EXISTS ] view_name
+    SET SCHEMA new_schema;
+ALTER VIEW [ IF EXISTS ] view_name
+    SET ( { view_option_name [ = view_option_value ] } [, ... ] );
+ALTER VIEW [ IF EXISTS ] view_name
+    RESET ( view_option_name [, ... ] );
+ALTER VIEW [ IF EXISTS ] view_name
+    REBUILD;
+ALTER VIEW ONLY [ IF EXISTS ] view_name
+    REBUILD;
If this option is specified, no error is reported if the view does not exist. Only a message is displayed.
+Specifies the view name, which can be schema-qualified.
+Value range: a string. It must comply with the naming convention.
+Indicates an optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.
+Value range: a string. It must comply with the naming convention.
+Sets or deletes the default value of a column. Currently, this parameter does not take effect.
+Specifies the new owner of a view.
+Specifies the new view name.
+Specifies the new schema of the view.
+This clause specifies optional parameters for a view.
+Currently, the only parameter supported by view_option_name is security_barrier, which should be enabled when a view is intended to provide row-level security.
+Value range: boolean type. It can be TRUE or FALSE.
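+A minimal sketch of setting and clearing the option (assuming the view from the examples below):
+-- Enable the security barrier for row-level security, then clear the option again.
+ALTER VIEW tpcds.customer_details_view_v1 SET (security_barrier = TRUE);
+ALTER VIEW tpcds.customer_details_view_v1 RESET (security_barrier);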
+Only views and their dependent views are rebuilt. This function is available only if view_independent is set to on.
+Rename a view.
+ALTER VIEW tpcds.customer_details_view_v1 RENAME TO customer_details_view_v2;
+Change the schema of a view.
+ALTER VIEW tpcds.customer_details_view_v2 SET SCHEMA public;
+Rebuild a view.
+ALTER VIEW public.customer_details_view_v2 REBUILD;
+Rebuild a dependent view.
+ALTER VIEW ONLY public.customer_details_view_v2 REBUILD;
CLEAN CONNECTION clears database connections when a database is abnormal. You may use this statement to delete a specific user's connections to a specified database.
+None
+CLEAN CONNECTION
+    TO { COORDINATOR ( nodename [, ... ] ) | NODE ( nodename [, ... ] ) | ALL [ CHECK ] [ FORCE ] }
+    [ FOR DATABASE dbname ]
+    [ TO USER username ];
This parameter can be specified only when the node list is specified as TO ALL. Setting this parameter will check whether a database is accessed by other sessions before its connections are cleared. If any sessions are detected before DROP DATABASE is executed, an error will be reported and the database will not be deleted.
+This parameter can be specified only when the node list is specified as TO ALL. Setting this parameter will send SIGTERM signals to all the threads related to the specified dbname and username and forcibly shut them down.
+Deletes connections on a specified node. There are three scenarios:
+Value range: nodename is an existing node name.
+Deletes connections to a specific database. If this parameter is not specified, connections to all databases will be deleted.
+Value range: an existing database name
+Deletes connections of a specific user. If this parameter is not specified, connections of all users will be deleted.
+Value range: an existing user name
+Either dbname or username must be specified.
+Clean connections to nodes dn1 and dn2 for the template1 database.
+CLEAN CONNECTION TO NODE (dn1,dn2) FOR DATABASE template1;
+Clean user jack's connections to dn1.
+CLEAN CONNECTION TO NODE (dn1) TO USER jack;
+Delete all connections to the gaussdb database.
+CLEAN CONNECTION TO ALL FORCE FOR DATABASE gaussdb;
CLOSE frees the resources associated with an open cursor.
+CLOSE { cursor_name | ALL };
Specifies the name of a cursor to be closed.
+Closes all open cursors.
+Close a cursor.
+CLOSE cursor1;
Cluster a table according to an index.
+CLUSTER instructs GaussDB(DWS) to cluster the table specified by table_name based on the index specified by index_name. The index must have been defined on table_name.
+When a table is clustered, it is physically reordered based on the index information. Clustering is a one-time operation: when the table is subsequently updated, the changes are not clustered. That is, no attempt is made to store new or updated rows according to their index order.
+When a table is clustered, GaussDB(DWS) records which index the table was clustered by. The form CLUSTER table_name reclusters the table using the same index as before. You can also use the CLUSTER or SET WITHOUT CLUSTER forms of ALTER TABLE to set the index to be used for future cluster operations, or to clear any previous setting.
+CLUSTER without any parameter reclusters all the previously-clustered tables in the current database that the calling user owns, or all such tables if called by an administrator.
+When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other database operations (both reads and writes) from operating on the table until the CLUSTER is finished.
+Only row-store B-tree indexes support CLUSTER.
+In cases where you are accessing single rows randomly within a table, the actual order of the data in the table is unimportant. However, if you tend to access some data more than others, and there is an index that groups them together, you will benefit from using CLUSTER. If you are requesting a range of indexed values from a table, or a single indexed value that has multiple rows that match, CLUSTER will help because once the index identifies the table page for the first row that matches, all other rows that match are probably already on the same table page, and so you save disk accesses and speed up the query.
+When an index scan is used, a temporary copy of the table is created that contains the table data in the index order. Temporary copies of each index on the table are created as well. Therefore, you need free space on disk at least equal to the sum of the table size and the index sizes.
+Because CLUSTER remembers which indexes are clustered, you can cluster the desired tables manually the first time, then set up a periodic maintenance task similar to VACUUM, running CLUSTER without any parameters, so that those tables are periodically reclustered.
+Because the optimizer records statistics about the ordering of tables, it is advisable to run ANALYZE on the newly clustered table. Otherwise, the optimizer might make poor choices of query plans.
+CLUSTER cannot be executed in transactions.
+CLUSTER [ VERBOSE ] table_name [ USING index_name ];
+CLUSTER [ VERBOSE ] table_name PARTITION ( partition_name ) [ USING index_name ];
+CLUSTER [ VERBOSE ];
Enables the display of progress messages.
+Specifies the name of the table.
+Value range: an existing table name
+Specifies the name of the index.
+Value range: An existing index name.
+Specifies the partition name.
+Value range: An existing partition name.
+Create a partitioned table.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 | CREATE TABLE tpcds.inventory_p1 +( + INV_DATE_SK INTEGER NOT NULL, + INV_ITEM_SK INTEGER NOT NULL, + INV_WAREHOUSE_SK INTEGER NOT NULL, + INV_QUANTITY_ON_HAND INTEGER +) +DISTRIBUTE BY HASH(INV_ITEM_SK) +PARTITION BY RANGE(INV_DATE_SK) +( + PARTITION P1 VALUES LESS THAN(2451179), + PARTITION P2 VALUES LESS THAN(2451544), + PARTITION P3 VALUES LESS THAN(2451910), + PARTITION P4 VALUES LESS THAN(2452275), + PARTITION P5 VALUES LESS THAN(2452640), + PARTITION P6 VALUES LESS THAN(2453005), + PARTITION P7 VALUES LESS THAN(MAXVALUE) +); + |
Create an index named ds_inventory_p1_index1.
+CREATE INDEX ds_inventory_p1_index1 ON tpcds.inventory_p1 (INV_ITEM_SK) LOCAL;
Cluster the tpcds.inventory_p1 table.
+CLUSTER tpcds.inventory_p1 USING ds_inventory_p1_index1;
Cluster the p3 partition.
+CLUSTER tpcds.inventory_p1 PARTITION (p3) USING ds_inventory_p1_index1;
Cluster the tables that can be clustered in the database.
+CLUSTER;
COMMENT defines or changes the comment of an object.
+COMMENT ON
+{
+  AGGREGATE agg_name (agg_type [, ...] ) |
+  CAST (source_type AS target_type) |
+  COLLATION object_name |
+  COLUMN { table_name.column_name | view_name.column_name } |
+  CONSTRAINT constraint_name ON table_name |
+  CONVERSION object_name |
+  DATABASE object_name |
+  DOMAIN object_name |
+  EXTENSION object_name |
+  FOREIGN DATA WRAPPER object_name |
+  FOREIGN TABLE object_name |
+  FUNCTION function_name ( [ {[ argmode ] [ argname ] argtype} [, ...] ] ) |
+  INDEX object_name |
+  LARGE OBJECT large_object_oid |
+  OPERATOR operator_name (left_type, right_type) |
+  OPERATOR CLASS object_name USING index_method |
+  OPERATOR FAMILY object_name USING index_method |
+  [ PROCEDURAL ] LANGUAGE object_name |
+  ROLE object_name |
+  RULE rule_name ON table_name |
+  SCHEMA object_name |
+  SERVER object_name |
+  TABLE object_name |
+  TABLESPACE object_name |
+  TEXT SEARCH CONFIGURATION object_name |
+  TEXT SEARCH DICTIONARY object_name |
+  TEXT SEARCH PARSER object_name |
+  TEXT SEARCH TEMPLATE object_name |
+  TYPE object_name |
+  VIEW object_name
+}
+  IS 'text';
+agg_name
+Specifies the name of an aggregate function.
+agg_type
+Specifies the data types of the aggregate function's arguments.
+source_type
+Specifies the name of the source data type of the cast.
+target_type
+Specifies the name of the target data type of the cast.
+object_name
+Specifies the name of the object to be commented on.
+table_name.column_name, view_name.column_name
+Specifies the column whose comment is defined or modified. You can prefix the column name with the table name or view name.
+constraint_name
+Specifies the table constraint whose comment is defined or modified.
+table_name
+Specifies the table name.
+function_name
+Specifies the function whose comment is defined or modified.
+argmode, argname, argtype
+Specify the mode, name, and data type of the function's arguments.
+large_object_oid
+Specifies the OID of the large object whose comment is defined or modified.
+operator_name
+Specifies the name of the operator.
+left_type, right_type
+Specify the data type(s) of the operator's arguments (optionally schema-qualified). Write NONE for the missing argument of a prefix or postfix operator.
+text
+Specifies the new comment, written as a string literal, or NULL to drop the comment.
+Add a comment to the customer.c_customer_sk column.
+COMMENT ON COLUMN customer.c_customer_sk IS 'Primary key of customer demographics table.';
Add a comment to the tpcds.customer_details_view_v2 view.
+COMMENT ON VIEW tpcds.customer_details_view_v2 IS 'View of customer detail';
+Add a comment to the customer table.
+COMMENT ON TABLE customer IS 'This is my table';
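+To remove a comment, write NULL in place of the text string, as described above:
+COMMENT ON TABLE customer IS NULL;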
Creates a barrier for cluster nodes. The barrier can be used for data restoration.
+Before creating a barrier, ensure that gtm_backup_barrier and enable_cbm_tracking are set to on for CNs and DNs in the cluster.
+CREATE BARRIER [ barrier_name ];
barrier_name
+(Optional) Indicates the name of a barrier.
+Value range: a string. It must comply with the naming convention.
+Create a barrier without specifying its name.
+CREATE BARRIER;
Create a barrier named barrier1.
+CREATE BARRIER 'barrier1';
CREATE DATABASE creates a database. By default, the new database is created by cloning the standard system database template1. A different template can be specified using TEMPLATE template_name.
+CREATE DATABASE database_name
+    [ [ WITH ] { [ OWNER [=] user_name ] |
+                 [ TEMPLATE [=] template ] |
+                 [ ENCODING [=] encoding ] |
+                 [ LC_COLLATE [=] lc_collate ] |
+                 [ LC_CTYPE [=] lc_ctype ] |
+                 [ DBCOMPATIBILITY [=] compatibility_type ] |
+                 [ CONNECTION LIMIT [=] connlimit ]}[...] ];
+database_name
+Indicates the database name.
+Value range: a string. It must comply with the naming convention.
+user_name
+Indicates the owner of the new database. By default, the owner of the database is the current user.
+Value range: an existing user name
+template
+Indicates the name of the template to be used to create the database. GaussDB(DWS) creates a database by copying a database template. GaussDB(DWS) has two default template databases, template0 and template1, and a default user database, gaussdb.
+Value range: an existing database name. If it is not specified, the system copies template1 by default. The value cannot be gaussdb.
+Currently, database templates cannot contain sequences. If a database template contains sequences, database creation using this template will fail.
+encoding
+Specifies the encoding format used by the new database. The value can be a string (for example, SQL_ASCII) or an integer.
+By default, the encoding format of the template database is used. The encoding formats of the template databases template0 and template1 vary based on OS environments by default. The template1 database does not allow encoding customization. To specify an encoding for a database when creating it, set template to template0.
+Common values: GBK, UTF8, and Latin1
+lc_collate
+Specifies the collation order to use in the new database, for example, lc_collate = 'zh_CN.gbk'.
+This parameter affects the sort order applied to strings, for example, in queries with ORDER BY, as well as the order used in indexes on text columns. The default is to use the collation order of the template database.
+Value range: a valid collation order type.
+lc_ctype
+Specifies the character classification to use in the new database, for example, lc_ctype = 'zh_CN.gbk'. This parameter affects the categorization of characters, for example, lowercase letters, uppercase letters, and digits. The default is to use the character classification of the template database.
+Value range: a valid character classification type.
+compatibility_type
+Specifies the compatible database type.
+Value range: ORA, TD, and MySQL, representing the Oracle-, Teradata-, and MySQL-compatible modes, respectively. If this parameter is not specified, the default value ORA is used.
+tablespace_name
+Specifies the name of the tablespace that will be associated with the new database.
+Value range: an existing tablespace name.
+The specified tablespace cannot be an OBS tablespace.
+connlimit
+Indicates the maximum number of concurrent connections that can be made to the new database.
+Value range: an integer greater than or equal to -1. The default value -1 means no limit.
+The following is a limitation on character encoding: to specify an encoding different from that of the template database, template0 must be used as the template, and the encoding must be compatible with the specified locale settings.
+Create database music using GBK (the local encoding type is also GBK).
+CREATE DATABASE music ENCODING 'GBK' template = template0;
Create database music2 and specify jim as its owner.
+CREATE DATABASE music2 OWNER jim;
Create database music3 using template template0 and specify jim as its owner.
+CREATE DATABASE music3 OWNER jim TEMPLATE template0;
+Create an Oracle-compatible database ora_compatible_db.
+CREATE DATABASE ora_compatible_db DBCOMPATIBILITY 'ORA';
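+Two further sketches based on the options described above (the database names music4 and music5 are illustrative): limit the number of concurrent connections, and set the locale options together with a matching encoding (template0 is required, as noted above).
+CREATE DATABASE music4 CONNECTION LIMIT 10;
+CREATE DATABASE music5 ENCODING 'GBK' LC_COLLATE 'zh_CN.gbk' LC_CTYPE 'zh_CN.gbk' TEMPLATE template0;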
CREATE FOREIGN TABLE creates a GDS foreign table.
+CREATE FOREIGN TABLE creates a GDS foreign table in the current database for concurrent data import and export. The GDS foreign table can be read-only or write-only, used for concurrent data import and export, respectively. The OBS foreign table is read-only by default.
+CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
+    ( [ { column_name type_name POSITION(offset,length) | LIKE source_table } [, ...] ] )
+    SERVER gsmpp_server
+    OPTIONS ( { option_name ' value ' } [, ...] )
+    [ { WRITE ONLY | READ ONLY }]
+    [ WITH error_table_name | LOG INTO error_table_name]
+    [REMOTE LOG 'name']
+    [PER NODE REJECT LIMIT 'value']
+    [ TO { GROUP groupname | NODE ( nodename [, ... ] ) } ];
CREATE FOREIGN TABLE provides multiple parameters, which are classified as follows:
+IF NOT EXISTS
+Does not throw an error if a table with the same name already exists. A notice is issued in this case.
+table_name
+Specifies the name of the foreign table to be created.
+Value range: a string. It must comply with the naming convention.
+column_name
+Specifies the name of a column in the foreign table.
+Value range: a string. It must comply with the naming convention.
+type_name
+Specifies the data type of the column.
+POSITION(offset,length)
+Defines the location of each column in the data file in fixed-length mode.
+offset is the start of the column in the source file, and length is the length of the column.
+Value range: offset must be greater than 0, and its unit is byte.
+The length of each record must be less than 1 GB. By default, columns not in the file are replaced with null.
+SERVER gsmpp_server
+Specifies the server name of the foreign table. For a GDS foreign table, the server is gsmpp_server, which is created by the initial database.
+OPTIONS
+Specifies all types of parameters of foreign table data.
+location
+Specifies the data source location of the foreign table, expressed as URLs. Separate multiple URLs with vertical bars (|).
+Currently, GDS can automatically create a directory defined by a foreign table during data export. For example, if the location of a foreign table is gsfs://192.168.0.91:5000/2019/09 and the 2019/09 subdirectory in the GDS data directory does not exist when an export task is executed, the subdirectory is automatically created. You do not need to manually create the directory specified in the foreign table.
+For example: gsfs://192.168.0.90:5000/*, file:///data/data.txt, or gsfs://192.168.0.90:5000/* | gsfs://192.168.0.91:5000/*.
+format
+Specifies the format of the data source file in a foreign table.
+Value range: CSV, TEXT, and FIXED. The default value is TEXT.
+header
+Specifies whether a data file contains a table header. header is available only for CSV and FIXED files.
+When data is imported, if header is on, the first row of the data file is identified as the header and ignored. If header is off, the first row is identified as a data row.
+When data is exported, if header is on, fileheader must be specified; fileheader specifies the format of the exported header. If header is off, the exported file does not include a header row.
+Value range: true, on, false, and off. The default value is false or off.
+fileheader
+Specifies a file that defines the content of the header for exported data. The file contains one row describing each column.
+For example, to add a header to a file containing product information, define the file as follows:
+The information of products.\n
+out_filename_prefix
+Specifies the name prefix of the data file exported through GDS from a write-only foreign table.
+If file_type is set to pipe, the pipe file dbName_schemaName_foreignTableName.pipe is generated.
+If both out_filename_prefix and location specify a pipe name, the pipe name specified in location is used.
+The prefix cannot be any of the following reserved names: "con", "aux", "nul", "prn", "com0", "com1", "com2", "com3", "com4", "com5", "com6", "com7", "com8", "com9", "lpt0", "lpt1", "lpt2", "lpt3", "lpt4", "lpt5", "lpt6", "lpt7", "lpt8", "lpt9".
+delimiter
+Specifies the column delimiter of data; the default delimiter is used if it is not set. The default delimiter of TEXT is a tab and that of CSV is a comma (,). No delimiter is used in FIXED format.
+Valid value: a multi-character delimiter whose length is less than or equal to 10 bytes.
+quote
+Specifies which characters in a CSV source data file are identified as quotation marks. The default value is a double quotation mark (").
+escape
+Specifies which characters in a CSV source data file are escape characters. Escape characters can only be single-byte characters.
+Default value: the same as the value of quote
+Valid value: a single-byte character.
+noescaping
+Specifies whether to escape the backslash (\) and its following characters in the TEXT format.
+noescaping is available only for the TEXT format.
+Value range: true, on, false, and off. The default value is false or off.
+encoding
+Specifies the encoding of a data file, that is, the encoding used to parse, check, and generate the data file. Its default value is the default client_encoding value of the current database.
+Before you import foreign tables, it is recommended that you set client_encoding to the file encoding format, or a format matching the character set of the file. Otherwise, unnecessary parsing and check errors may occur, leading to import errors, rollback, or even invalid data import. Before you export foreign tables, you are also advised to specify this parameter, because the export result using the default character set may not be what you expect.
+If this parameter is not specified when you create a foreign table, a warning message will be displayed on the client.
+Currently, GDS cannot parse or write a file using multiple encoding formats during foreign table import or export.
+fill_missing_fields
+Specifies whether to generate an error message when the last column in a row of the source file is lost during data import.
+Value range: true, on, false, and off. The default value is false or off.
+If the parameter is false or off and the last column is missing, an error similar to the following is reported:
+missing data for column "tt"
+ignore_extra_data
+Specifies whether to ignore extra columns when a row in the source data file contains more columns than the foreign table defines. This parameter is available only during data import.
+Value range: true, on, false, and off. The default value is false or off.
+If the parameter is false or off and extra columns are present, an error similar to the following is reported:
+extra data after last expected column
+If the newline character at the end of a row is lost, setting the parameter to true will cause the data in the next row to be ignored.
+reject_limit
+Specifies the maximum number of data format errors allowed during a data import task. If the number of errors does not exceed the maximum, the data import task can still be executed.
+You are advised to replace this syntax with PER NODE REJECT LIMIT 'value'.
+Examples of data format errors include the following: a column is lost, an extra column exists, a data type is incorrect, and encoding is incorrect. Once a non-data-format error occurs, the whole data import process is stopped.
+Value range: a positive integer or unlimited
+The default value is 0, indicating that error information is returned immediately.
+Enclose positive integer values in single quotation marks ('').
+mode
+Specifies the data import policy during a specific data import process. GaussDB(DWS) supports only the Normal mode.
+Valid value: Normal
+eol
+Specifies the newline character style of the imported or exported data file.
+Value range: multi-character newline characters within 10 bytes. Common newline characters include \r (0x0D), \n (0x0A), and \r\n (0x0D0A). Special newline characters include $ and #.
+This parameter is generally used with the compatible_illegal_chars parameter. If a data file contains a truncated Chinese character, the truncated character and a delimiter may be encoded into another Chinese character due to inconsistent encoding between the foreign table and the database. As a result, the delimiter is masked and an error is reported, indicating that fields are missing.
+This parameter is used to avoid encoding a truncated character and a delimiter into another character.
+Value range: true, on, false, and off. The default value is false or off.
+This parameter is disabled by default. It is recommended that you keep it disabled, because the scenario in which a truncated character and a delimiter are encoded into another character rarely occurs. If the parameter is enabled, the scenario may be incorrectly identified, causing incorrect data to be imported to the table.
+file_type
+Specifies the type of the file to be imported or exported.
+Value options: normal and pipe. The default value is normal.
+auto_create_pipe
+Specifies whether the GDS process automatically creates a named pipe.
+Value options: true, on, false, and off. The default value is true or on.
+fix
+Specifies the length of fixed-format data, in bytes. This syntax is available only for READ ONLY foreign tables.
+Value range: less than 1 GB, and greater than or equal to the total length specified by POSITION (the sum of offset and length in the last column of the table definition).
+out_fix_alignment
+Specifies how columns of the types BYTEAOID, CHAROID, NAMEOID, TEXTOID, BPCHAROID, VARCHAROID, NVARCHAR2OID, and CSTRINGOID are aligned during fixed-length export.
+Value range: align_left, align_right
+Default value: align_right
+The bytea data type must be in hexadecimal format (for example, \XXXX) or octal format (for example, \XXX\XXX\XXX). The data to be imported must be left-aligned (that is, the column data starts with either of the two formats instead of spaces). Therefore, if the exported file needs to be imported using a GDS foreign table and the file data length is less than that specified by the foreign table formatter, the exported file must be left-aligned. Otherwise, an error is reported during the import.
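+A minimal sketch tying POSITION, format 'fixed', and fix together (the server address, file name, and column layout are illustrative assumptions):
+CREATE FOREIGN TABLE fixed_ft
+(
+    c1 char(4) POSITION(1,4),   -- assumed layout: first 4 bytes of each record
+    c2 char(8) POSITION(5,8)    -- assumed layout: the following 8 bytes
+)
+SERVER gsmpp_server
+OPTIONS (location 'gsfs://192.168.0.90:5000/fixed.dat', format 'fixed', fix '13')
+READ ONLY;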
+date_format
+Imports data of the DATE type. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid DATE value. For details, see Date and Time Processing Functions and Operators.
+If ORACLE is specified as the compatible database, the DATE format is TIMESTAMP. For details, see timestamp_format below.
+time_format
+Imports data of the TIME type. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid TIME value. Time zones cannot be used. For details, see Date and Time Processing Functions and Operators.
+timestamp_format
+Imports data of the TIMESTAMP type. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid TIMESTAMP value. Time zones are not supported. For details, see Date and Time Processing Functions and Operators.
+smalldatetime_format
+Imports data of the SMALLDATETIME type. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid SMALLDATETIME value. For details, see Date and Time Processing Functions and Operators.
+compatible_illegal_chars
+Enables or disables fault tolerance for invalid characters during data import. This syntax is available only for READ ONLY foreign tables.
+Value range: true, on, false, and off. The default value is false or off.
+The error-tolerance rules for imported invalid characters are as follows:
+(1) \0 is converted to a space.
+(2) Other invalid characters are converted to question marks.
+(3) If compatible_illegal_chars is set to true or on, invalid characters are tolerated. If NULL, DELIMITER, QUOTE, or ESCAPE is set to a space or a question mark, an error like "illegal chars conversion may confuse COPY escape 0x20" is displayed, prompting you to modify the parameter values that cause the confusion and preventing import errors.
+READ ONLY
+Specifies that the foreign table is read-only. This parameter is available only for data import.
+WRITE ONLY
+Specifies that the foreign table is write-only. This parameter is available only for data export.
+WITH error_table_name
+Specifies the table in which data format errors generated during parallel data import are recorded. You can query this error table after the import to obtain error details. This parameter is available only after reject_limit is set.
+To be compatible with PostgreSQL open-source interfaces, you are advised to replace this syntax with LOG INTO.
+Value range: a string. It must comply with the naming convention.
+LOG INTO error_table_name
+Specifies the table in which data format errors generated during parallel data import are recorded. You can query this error table after the import to obtain error details.
+This parameter is available only after PER NODE REJECT LIMIT is set.
+Value range: a string. It must comply with the naming convention.
+file_sequence
+Imports a single file concurrently through multiple GDS foreign tables, to improve single-file import performance. This parameter is only used for data import.
+The parameter format is file_sequence 'total number of shards-current shard'. Examples (see also the sketch after the constraints below):
+file_sequence '3-1' indicates that the imported file is logically split into three shards and the data currently imported by the foreign table is the data on the first shard.
+file_sequence '3-2' indicates that the imported file is logically split into three shards and the data currently imported by the foreign table is the data on the second shard.
+file_sequence '3-3' indicates that the imported file is logically split into three shards and the data currently imported by the foreign table is the data on the third shard.
+This parameter has the following constraints:
+When data is imported in parallel in CSV format, some shards fail to be imported in the following scenario because the CSV rules conflict with the GDS splitting logic:
+Scenario: A CSV file contains a newline character that is not escaped, the newline character is contained in the character specified by quote, and the data of this line is in the first row of the logical shard.
+For example, if you import the big.csv file in parallel, the following information is displayed:
+--id, username, address
+10001,"customer1 name","Rose District"
+10002,"customer2 name","
+23 Road Rose
+District NewCity"
+10003,"customer3 name","NewCity"
After the file is split into two shards, the content of the first shard is as follows:
+10001,"customer1 name","Rose District" +10002,"customer2 name"," +23+
The content of the second shard is as follows:
+Road Rose
+District NewCity"
+10003,"customer3 name","NewCity"
The newline character after 23 Road Rose in the first line of the second shard is contained between double quotation marks. As a result, GDS cannot determine whether the newline character is a newline character in the field or a separator in the line. Therefore, two data records on the first shard are successfully imported, but the second shard fails to be imported.
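+A minimal sketch of the sharded import described above, assuming a local table customer already exists and a GDS server at gsfs://192.168.0.90:5000 serves big.csv (all names are illustrative): two foreign tables cover shards 2-1 and 2-2 of the same file and can be loaded in parallel from separate sessions.
+CREATE FOREIGN TABLE customer_shard1 ( LIKE customer )
+SERVER gsmpp_server
+OPTIONS (location 'gsfs://192.168.0.90:5000/big.csv', format 'csv', file_sequence '2-1')
+READ ONLY;
+
+CREATE FOREIGN TABLE customer_shard2 ( LIKE customer )
+SERVER gsmpp_server
+OPTIONS (location 'gsfs://192.168.0.90:5000/big.csv', format 'csv', file_sequence '2-2')
+READ ONLY;
+
+-- Run each INSERT in its own session to import the two shards concurrently.
+INSERT INTO customer SELECT * FROM customer_shard1;
+INSERT INTO customer SELECT * FROM customer_shard2;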
+REMOTE LOG 'name'
+Saves data format error information as files in GDS. name is the prefix of the error data file.
+PER NODE REJECT LIMIT 'value'
+Specifies the number of data format errors allowed on each DN during data import. If the number of errors exceeds the specified value on any DN, data import fails, an error is reported, and the system exits data import.
+This syntax specifies the error tolerance of a single node.
+Examples of data format errors include the following: a column is lost, an extra column exists, a data type is incorrect, and encoding is incorrect. When a non-data-format error occurs, the whole data import process stops.
+Value range: a positive integer or unlimited. The default value is 0, indicating that error information is returned immediately.
+TO { GROUP groupname | NODE ( nodename [, ...] ) }
+Currently, TO GROUP cannot be used. TO NODE is used by internal scale-out tools.
+Create a foreign table customer_ft to import data from GDS server 10.10.123.234 in TEXT format.
+CREATE FOREIGN TABLE customer_ft
+(
+    c_customer_sk             integer,
+    c_customer_id             char(16),
+    c_current_cdemo_sk        integer,
+    c_current_hdemo_sk        integer,
+    c_current_addr_sk         integer,
+    c_first_shipto_date_sk    integer,
+    c_first_sales_date_sk     integer,
+    c_salutation              char(10),
+    c_first_name              char(20),
+    c_last_name               char(30),
+    c_preferred_cust_flag     char(1),
+    c_birth_day               integer,
+    c_birth_month             integer,
+    c_birth_year              integer,
+    c_birth_country           varchar(20),
+    c_login                   char(13),
+    c_email_address           char(50),
+    c_last_review_date        char(10)
+)
+SERVER gsmpp_server
+OPTIONS
+(
+    location 'gsfs://10.10.123.234:5000/customer1*.dat',
+    FORMAT 'TEXT',
+    DELIMITER '|',
+    encoding 'utf8',
+    mode 'Normal'
+)
+READ ONLY;
+Create a foreign table to import data from GDS servers 192.168.0.90 and 192.168.0.91 in TEXT format. Record errors that occur during data import in err_HR_staffS_ft. A maximum of two data format errors are allowed during the data import.
+CREATE FOREIGN TABLE foreign_HR_staffS_ft
+(
+    staff_ID        NUMBER(6),
+    FIRST_NAME      VARCHAR2(20),
+    LAST_NAME       VARCHAR2(25),
+    EMAIL           VARCHAR2(25),
+    PHONE_NUMBER    VARCHAR2(20),
+    HIRE_DATE       DATE,
+    employment_ID   VARCHAR2(10),
+    SALARY          NUMBER(8,2),
+    COMMISSION_PCT  NUMBER(2,2),
+    MANAGER_ID      NUMBER(6),
+    section_ID      NUMBER(4)
+) SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/* | gsfs://192.168.0.91:5000/*', format 'TEXT', delimiter E'\x08', null '', reject_limit '2') WITH err_HR_staffS_ft;
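+After such a fault-tolerant import, the error table named in the WITH clause (or in LOG INTO, the recommended PostgreSQL-compatible form) can be queried directly for the rejected rows; a minimal usage sketch based on the example above:
+SELECT * FROM err_HR_staffS_ft;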
CREATE FOREIGN TABLE creates a foreign table in the current database for parallel data import and export of OBS data. The server used is gsmpp_server, which is created by the database by default.
+The hybrid data warehouse (standalone) does not support OBS foreign table import and export.
+| Data Type | User-built Server (READ ONLY) | User-built Server (WRITE ONLY) | gsmpp_server (READ ONLY) | gsmpp_server (WRITE ONLY) |
+|---|---|---|---|---|
+| ORC | √ | √ | × | × |
+| CARBONDATA | √ | × | × | × |
+| TEXT | √ | × | √ | √ |
+| CSV | √ | × | √ | √ |
+| JSON | √ | × | × | × |
+CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
+( { column_name type_name [column_constraint ]
+    | LIKE source_table | table_constraint [, ...]} [, ...] )
+SERVER gsmpp_server
+OPTIONS ( { option_name ' value ' } [, ...] )
+[ { WRITE ONLY | READ ONLY }]
+[ WITH error_table_name | LOG INTO error_table_name]
+[PER NODE REJECT LIMIT 'value'] ;
+where column_constraint is:
+[CONSTRAINT constraint_name]
+{PRIMARY KEY | UNIQUE}
+[NOT ENFORCED [ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION] | ENFORCED]
+and table_constraint is:
+[CONSTRAINT constraint_name]
+{PRIMARY KEY | UNIQUE} (column_name)
+[NOT ENFORCED [ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION] | ENFORCED]
+IF NOT EXISTS
+Does not throw an error if a table with the same name exists. A notice is issued in this case.
+table_name
+Specifies the name of the foreign table to be created.
+Value range: a string compliant with the naming convention.
+column_name
+Specifies the name of a column in the foreign table.
+Value range: a string compliant with the naming convention.
+type_name
+Specifies the data type of the column.
+SERVER gsmpp_server
+Specifies the server name of the foreign table. For an OBS foreign table, the server gsmpp_server is created by the initial database.
+OPTIONS
+Specifies parameters of foreign table data.
+encrypt
+Specifies whether HTTPS is enabled for data transfer. on enables HTTPS and off disables it (in this case, HTTP is used). The default value is off.
+access_key
+Indicates the access key (AK, obtained from the user information on the console) used for the OBS access protocol. When you create a foreign table, its AK value is encrypted and saved to the metadata table of the database.
+secret_access_key
+Indicates the secret access key (SK, obtained from the user information on the console) used for the OBS access protocol. When you create a foreign table, its SK value is encrypted and saved to the metadata table of the database.
+chunksize
+Specifies the size of the cache read by each OBS thread on a DN, in MB. The value range is 8 to 512, and the default value is 64.
+location
+Specifies the data source location of a foreign table. Currently, only URLs are allowed. Separate multiple URLs with vertical bars (|).
+When importing and exporting data, you are advised to use the location parameter as follows:
+region
+(Optional) Specifies the value of regionCode, which indicates the region information on the cloud.
+If the region parameter is explicitly specified, the value of region is read. If it is not specified, the value of defaultRegion is read.
+Note the following when setting parameters for importing or exporting OBS foreign tables in TEXT or CSV format:
+format
+Specifies the format of the source data file in a foreign table.
+Valid value: CSV and TEXT. The default value is TEXT. GaussDB(DWS) supports only the CSV and TEXT formats.
+header
+Specifies whether a file contains a header with the names of each column in the file.
+When OBS exports data, this parameter cannot be set to true. Use the default value false, indicating that the first row of the exported data file is not a header.
+When data is imported, if header is on, the first row of the data file is identified as the header and ignored. If header is off, the first row is identified as a data row.
+Valid value: true, on, false, and off. The default value is false or off.
+delimiter
+Specifies the column delimiter of data. The default delimiter is used if it is not set: a tab for TEXT and a comma (,) for CSV.
+Value range: a multi-character delimiter whose length is less than or equal to 10 bytes.
+quote
+Specifies the quotation mark for the CSV format. The default value is a double quotation mark (").
+escape
+Specifies an escape character for a CSV file. The value must be a single-byte character.
+The default value is a double quotation mark ("). If the value is the same as the quote value, it is replaced with \0.
+noescaping
+Specifies whether to escape the backslash (\) and its following characters in the TEXT format.
+noescaping is available only for the TEXT format.
+Valid value: true, on, false, and off. The default value is false or off.
+encoding
+Specifies the encoding of a data file, that is, the encoding used to parse, check, and generate the data file. Its default value is the default client_encoding value of the current database.
+Before you import foreign tables, it is recommended that you set client_encoding to the file encoding format, or a format matching the character set of the file. Otherwise, unnecessary parsing and check errors may occur, leading to import errors, rollback, or even invalid data import. Before exporting foreign tables, you are also advised to specify this parameter, because the export result using the default character set may not be what you expect.
+If this parameter is not specified when you create a foreign table, a warning message will be displayed on the client.
+Currently, OBS cannot parse a file using multiple character sets during foreign table import.
+Currently, OBS cannot write a file using multiple character sets during foreign table export.
+fill_missing_fields
+Specifies how to handle the case where the last column of a row in the source file is lost during data import.
+Valid value: true, on, false, and off. The default value is false or off.
+If the parameter is false or off and the last column is missing, an error similar to the following is reported:
+missing data for column "tt"
+ignore_extra_data
+Specifies whether to ignore extra columns when a row in the source data file contains more columns than the foreign table defines. This parameter is available only for data import.
+Valid value: true, on, false, and off. The default value is false or off.
+If the parameter is false or off and extra columns are present, an error similar to the following is reported:
+extra data after last expected column
+If the linefeed at the end of a row is lost and this parameter is set to true, data in the next row will be ignored.
+reject_limit
+Specifies the maximum number of data format errors allowed during a data import task. If the number of errors does not exceed the maximum, the data import task can still be executed.
+You are advised to replace this syntax with PER NODE REJECT LIMIT 'value'.
+Examples of data format errors include the following: a column is lost, an extra column exists, a data type is incorrect, and encoding is incorrect. When a non-data-format error occurs, the whole data import process is stopped.
+Value range: an integer or unlimited.
+The default value is 0, indicating that error information is returned immediately.
+eol
+Specifies the newline character style of the imported or exported data file.
+Value range: multi-character newline characters within 10 bytes. Common newline characters include \r (0x0D), \n (0x0A), and \r\n (0x0D0A). Special newline characters include $ and #.
+date_format
+Specifies the DATE format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: a valid DATE value. For details, see Date and Time Processing Functions and Operators.
+If Oracle is specified as the compatible database, the DATE format is TIMESTAMP. For details, see timestamp_format below.
+time_format
+Specifies the TIME format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid TIME value. Time zones cannot be used.
+timestamp_format
+Specifies the TIMESTAMP format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid TIMESTAMP value. Time zones cannot be used.
+smalldatetime_format
+Specifies the SMALLDATETIME format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: a valid SMALLDATETIME value.
+compatible_illegal_chars
+Specifies whether to enable fault tolerance for invalid characters during data import. This syntax is available only for READ ONLY foreign tables.
+Valid value: true, on, false, and off. The default value is false or off.
+On a Windows platform, if OBS reads data files in TEXT format, 0x1A is treated as an EOF symbol and a parsing error occurs. This is an implementation constraint of the Windows platform. Since OBS on Windows does not support BINARY reads, such data can be read by OBS on a Linux platform.
+The error-tolerance rules for imported invalid characters are as follows:
+(1) \0 is converted to a space.
+(2) Other invalid characters are converted to question marks.
+(3) If compatible_illegal_chars is set to true or on, invalid characters are tolerated. If null, delimiter, quote, or escape is set to a space or a question mark, an error like "illegal chars conversion may confuse COPY escape 0x20" is displayed, prompting you to change the parameter values that cause the confusion and preventing import errors.
+READ ONLY
+Specifies that the foreign table is read-only. This parameter is available only for data import.
+WRITE ONLY
+Specifies that the foreign table is write-only. This parameter is available only for data export.
+WITH error_table_name
+Specifies the table in which data format errors generated during parallel data import are recorded. You can query this error table after the import to obtain error details. This parameter is available only after reject_limit is set.
+To be compatible with PostgreSQL open-source interfaces, you are advised to replace this syntax with LOG INTO. When this parameter is specified, an error table is automatically created.
+Value range: a string compliant with the naming convention.
+LOG INTO error_table_name
+Specifies the table in which data format errors generated during parallel data import are recorded. You can query this error table after the import to obtain error details.
+Value range: a string compliant with the naming convention.
+PER NODE REJECT LIMIT 'value'
+Specifies the maximum number of data format errors allowed on each DN during data import. If the number of errors exceeds the specified value on any DN, data import fails, an error is reported, and the system exits data import.
+This syntax specifies the error tolerance of a single node.
+Examples of data format errors include the following: a column is lost, an extra column exists, a data type is incorrect, and encoding is incorrect. When a non-data-format error occurs, the whole data scanning process is stopped.
+Valid value: an integer or unlimited. The default value is 0, indicating that error information is returned immediately.
+NOT ENFORCED
+Specifies the constraint to be an informational constraint. This constraint is guaranteed by the user instead of the database.
+The default value is ENFORCED. ENFORCED is a reserved parameter and is currently not supported.
+(column_name)
+Specifies the informational constraint on column_name.
+Value range: a string. It must comply with the naming convention, and the value of column_name must exist.
+ENABLE QUERY OPTIMIZATION
+Optimizes the query plan using an informational constraint.
+DISABLE QUERY OPTIMIZATION
+Disables optimization of the query plan using an informational constraint.
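+A minimal sketch of an informational constraint on an OBS foreign table (the bucket path and column layout are illustrative assumptions); the NOT ENFORCED primary key is a promise made by the user about the data, not something the database verifies:
+CREATE FOREIGN TABLE OBS_ft_pk
+( a int, b int, PRIMARY KEY (a) NOT ENFORCED ENABLE QUERY OPTIMIZATION )
+SERVER gsmpp_server
+OPTIONS (location 'obs://bucket_name/path', format 'text', encoding 'utf8')
+READ ONLY;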
+DROP FOREIGN TABLE IF EXISTS OBS_ft;
+NOTICE:  foreign table "obs_ft" does not exist, skipping
+DROP FOREIGN TABLE
+
+CREATE FOREIGN TABLE OBS_ft( a int, b int) SERVER gsmpp_server OPTIONS (location 'obs://gaussdbcheck/obs_ddl/test_case_data/txt_obs_informatonal_test001', format 'text', encoding 'utf8', chunksize '32', encrypt 'on', ACCESS_KEY 'access_key_value_to_be_replaced', SECRET_ACCESS_KEY 'secret_access_key_value_to_be_replaced', delimiter E'\x08') read only;
+CREATE FOREIGN TABLE
+
+DROP TABLE row_tbl;
+DROP TABLE
+
+CREATE TABLE row_tbl( a int, b int);
+NOTICE:  The 'DISTRIBUTE BY' clause is not specified. Using 'a' as the distribution column by default.
+HINT:  Please use 'DISTRIBUTE BY' clause to specify suitable data distribution column.
+CREATE TABLE
+
+INSERT INTO row_tbl SELECT * FROM OBS_ft;
+INSERT 0 3
CREATE FOREIGN TABLE creates an HDFS or OBS foreign table in the current database to access or export structured data stored on HDFS or OBS. You can also export data in ORC format to HDFS or OBS.
+The hybrid data warehouse (standalone) does not support OBS and HDFS foreign table import and export.
+Create an HDFS foreign table.
+CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
+( [ { column_name type_name
+      [ { [CONSTRAINT constraint_name] NULL |
+          [CONSTRAINT constraint_name] NOT NULL |
+          column_constraint [...]} ] |
+      table_constraint [, ...]} [, ...] ] )
+    SERVER server_name
+    OPTIONS ( { option_name ' value ' } [, ...] )
+    [ {WRITE ONLY | READ ONLY}]
+    DISTRIBUTE BY {ROUNDROBIN | REPLICATION}
+    [ PARTITION BY ( column_name ) [ AUTOMAPPED ] ] ;
+where column_constraint is:
+[CONSTRAINT constraint_name]
+{PRIMARY KEY | UNIQUE}
+[NOT ENFORCED [ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION] | ENFORCED]
+and table_constraint is:
+[CONSTRAINT constraint_name]
+{PRIMARY KEY | UNIQUE} (column_name)
+[NOT ENFORCED [ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION] | ENFORCED]
+IF NOT EXISTS
+Does not throw an error if a table with the same name exists. A notice is issued in this case.
+table_name
+Specifies the name of the foreign table to be created.
+Value range: a string. It must comply with the naming convention.
+column_name
+Specifies the name of a column in the foreign table. Columns are separated by commas (,).
+Value range: a string. It must comply with the naming convention.
+type_name
+Specifies the data type of the column.
+ORC tables support the data types listed for the ORC format.
+The data types supported by TXT tables are the same as those of row-store tables.
+constraint_name
+Specifies the name of a constraint for the foreign table.
+{ NULL | NOT NULL }
+Specifies whether the column allows NULL values.
+When you create a table, whether the data in HDFS is NULL or NOT NULL cannot be guaranteed. The consistency of data is guaranteed by the user, who must decide whether the column is NULL or NOT NULL. (The optimizer uses the NULL/NOT NULL information to generate a better plan.)
+SERVER server_name
+Specifies the server name of the foreign table. You can customize its name.
+Value range: a string indicating an existing server. It must comply with the naming convention.
+header
+Specifies whether a data file contains a table header. header is available only for CSV files.
+If header is on, the first row of the data file is identified as the header and ignored during export. If header is off, the first row is identified as a data row.
+Value range: true, on, false, and off. The default value is false or off.
+quote
+Specifies the quotation mark for the CSV format. The default value is a double quotation mark (").
+The quote value cannot be the same as the delimiter or null value.
+The quote value must be a single-byte character.
+Invisible characters are recommended as quote values, such as 0x07, 0x08, and 0x1b.
+escape
+Specifies an escape character for a CSV file. The value must be a single-byte character.
+The default value is a double quotation mark ("). If the value is the same as the quote value, it is replaced with \0.
+location
+Specifies the file path on OBS. This is an OBS foreign table parameter. The data sources of multiple buckets are separated by vertical bars (|), for example, LOCATION 'obs://bucket1/folder/ | obs://bucket2/'. The database scans all objects in the specified folders.
+When accessing a DLI multi-version table, you do not need to specify the location parameter.
+foldername
+When accessing a DLI multi-version table, you do not need to specify the foldername parameter.
+delimiter
+Specifies the column delimiter of data; the default delimiter is used if it is not set. The default delimiter of TEXT is a tab.
+Value range: a multi-character delimiter whose length is less than or equal to 10 bytes.
+eol
+Specifies the newline character style of the imported data file.
+Value range: multi-character newline characters within 10 bytes. Common newline characters include \r (0x0D), \n (0x0A), and \r\n (0x0D0A). Special newline characters include $ and #.
+null
+Specifies the string that represents a null value.
+Value range: the default value is \N for the TEXT format.
+noescaping
+Specifies whether to escape the backslash (\) and its following characters in the TEXT format.
+noescaping is available only for TEXT source data files.
+Value range: true, on, false, and off. The default value is false or off.
+fill_missing_fields
+Specifies whether to generate an error message when the last column in a row of the source file is lost during data loading.
+Value range: true, on, false, and off. The default value is false or off.
+If the parameter is false or off and the last column is missing, an error similar to the following is reported:
+missing data for column "tt"
+ignore_extra_data
+Specifies whether to ignore extra columns when a row in the source data file contains more columns than the foreign table defines. This parameter is available only during data import.
+Value range: true, on, false, and off. The default value is false or off.
+If the parameter is false or off and extra columns are present, an error similar to the following is reported:
+extra data after last expected column
+date_format
+Specifies the DATE format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid DATE value. For details, see Date and Time Processing Functions and Operators.
+time_format
+Specifies the TIME format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: a valid TIME value. Time zones cannot be used. For details, see Date and Time Processing Functions and Operators.
+time_format is available only for TEXT and CSV source data files.
+timestamp_format
+Specifies the TIMESTAMP format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: any valid TIMESTAMP value. Time zones are not supported. For details, see Date and Time Processing Functions and Operators.
+timestamp_format is available only for TEXT and CSV source data files.
+smalldatetime_format
+Specifies the SMALLDATETIME format for data import. This syntax is available only for READ ONLY foreign tables.
+Value range: a valid SMALLDATETIME value. For details, see Date and Time Processing Functions and Operators.
+smalldatetime_format is available only for TEXT and CSV source data files.
+dataencoding
+Specifies the data encoding of the table to be exported when the database encoding is different from the encoding of the table data. For example, the database encoding is Latin-1, but the data in the exported table is in UTF-8 format. This parameter is optional. If it is not specified, the database encoding is used by default. This syntax is valid only for the write-only HDFS foreign table.
+Value range: data encoding types supported by the database encoding
+The dataencoding parameter is valid only for the ORC-formatted write-only HDFS foreign table.
+filesize
+Specifies the file size of a write-only foreign table. This parameter is optional. If it is not specified, the file size in the distributed file system configuration is used by default. This syntax is available only for the write-only foreign table.
+Value range: an integer ranging from 1 to 1024
+The filesize parameter is valid only for the ORC-formatted write-only HDFS foreign table.
+compression
+Specifies the compression mode of ORC files. This parameter is optional. This syntax is available only for the write-only foreign table.
+Value range: zlib, snappy, and lz4. The default value is snappy.
+version
+Specifies the ORC version number. This parameter is optional. This syntax is available only for the write-only foreign table.
+Value range: only 0.12 is supported. The default value is 0.12.
+dli_project_id
+Specifies the project ID corresponding to DLI, which can be obtained from the management console. This parameter is available only when the server type is DLI. This feature is supported only in 8.1.1 or later.
+dli_database_name
+Specifies the name of the database where the DLI multi-version table to be accessed is located. This parameter is available only when the server type is DLI. This feature is supported only in 8.1.1 or later.
+dli_table_name
+Specifies the name of the DLI multi-version table to be accessed. This parameter is available only when the server type is DLI. This feature is supported only in 8.1.1 or later.
+checkencoding
+Specifies whether to check the character encoding.
+In TEXT format, the error-tolerance rules for imported invalid characters are as follows:
+In ORC format, the error-tolerance rules for imported invalid characters are as follows:
+| Parameter | OBS TEXT (READ ONLY) | OBS CSV (READ ONLY) | OBS ORC (READ ONLY) | OBS ORC (WRITE ONLY) | OBS CARBONDATA (READ ONLY) | HDFS TEXT (READ ONLY) | HDFS CSV (READ ONLY) | HDFS ORC (READ ONLY) | HDFS ORC (WRITE ONLY) | HDFS PARQUET (READ ONLY) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| location | √ | √ | √ | × | × | × | × | × | × | × |
+| format | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
+| header | × | √ | × | × | × | × | √ | × | × | × |
+| delimiter | √ | √ | × | × | × | √ | √ | × | × | × |
+| quote | × | √ | × | × | × | × | √ | × | × | × |
+| escape | × | √ | × | × | × | × | √ | × | × | × |
+| null | √ | √ | × | × | × | √ | √ | × | × | × |
+| noescaping | √ | × | × | × | × | √ | × | × | × | × |
+| encoding | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
+| fill_missing_fields | √ | √ | × | × | × | √ | √ | × | × | × |
+| ignore_extra_data | √ | √ | × | × | × | √ | √ | × | × | × |
+| date_format | √ | √ | × | × | × | √ | √ | × | × | × |
+| time_format | √ | √ | × | × | × | √ | √ | × | × | × |
+| timestamp_format | √ | √ | × | × | × | √ | √ | × | × | × |
+| smalldatetime_format | √ | √ | × | × | × | √ | √ | × | × | × |
+| chunksize | √ | √ | × | × | × | √ | √ | × | × | × |
+| filenames | × | × | × | × | √ | √ | √ | √ | × | √ |
+| foldername | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ |
+| dataencoding | × | × | × | × | × | × | × | × | √ | × |
+| filesize | × | × | × | × | × | × | × | × | √ | × |
+| compression | × | × | × | √ | × | × | × | × | √ | × |
+| version | × | × | × | √ | × | × | × | × | √ | × |
+| checkencoding | √ | √ | √ | × | √ | √ | √ | √ | √ | √ |
+| totalrows | √ | √ | √ | × | × | × | × | × | × | × |
WRITE ONLY creates a write-only HDFS/OBS foreign table.
+READ ONLY creates a read-only HDFS/OBS foreign table.
+If the foreign table type is not specified, a read-only foreign table is created by default.
+DISTRIBUTE BY ROUNDROBIN
+Specifies ROUNDROBIN as the distribution mode for the HDFS/OBS foreign table.
+DISTRIBUTE BY REPLICATION
+Specifies REPLICATION as the distribution mode for the HDFS/OBS foreign table.
+PARTITION BY ( column_name ) [ AUTOMAPPED ]
+column_name specifies the partition column. AUTOMAPPED means the partition column specified by the HDFS partitioned foreign table is automatically mapped to the partition directory information in HDFS. The prerequisite is that the partition columns specified in the HDFS foreign table and in the directory are in the same sequence. This function is applicable only to read-only foreign tables.
+constraint_name
+Specifies the name of an informational constraint of the foreign table.
+Value range: a string. It must comply with the naming convention.
+PRIMARY KEY
+Specifies that one or more columns of the table must contain unique (non-duplicate) and non-null values. Only one primary key can be specified for a table.
+UNIQUE
+Specifies that a group of one or more columns of the table must contain unique values. For the purpose of a unique constraint, NULL is not considered equal.
+NOT ENFORCED
+Specifies the constraint to be an informational constraint. This constraint is guaranteed by the user instead of the database.
+The default value is ENFORCED. ENFORCED is a reserved parameter and is currently not supported.
+(column_name)
+Specifies the informational constraint on column_name.
+Value range: a string. It must comply with the naming convention, and the value of column_name must exist.
+ENABLE QUERY OPTIMIZATION
+Optimizes an execution plan using an informational constraint.
+DISABLE QUERY OPTIMIZATION
+Disables optimization of an execution plan using an informational constraint.
+In GaussDB(DWS), the use of data constraints depends on users. If data sources can be made to strictly comply with certain constraints, queries on data with such constraints can be accelerated. Foreign tables do not support indexes, so informational constraints are used for optimizing query plans.
+The constraints of creating informational constraints for a foreign table are as follows:
+Example 1: In HDFS, import the TPC-H benchmark test tables part and region using Hive. The path of the part table is /user/hive/warehouse/partition.db/part_4, and that of the region table is /user/hive/warehouse/mppdb.db/region_orc11_64stripe/.
+CREATE SERVER hdfs_server FOREIGN DATA WRAPPER HDFS_FDW OPTIONS (address '10.10.0.100:25000,10.10.0.101:25000', hdfscfgpath '/opt/hadoop_client/HDFS/hadoop/etc/hadoop', type 'HDFS');
The IP addresses and port numbers of HDFS NameNodes are specified in OPTIONS. 10.10.0.100:25000,10.10.0.101:25000 indicates the IP addresses and port numbers of the primary and standby HDFS NameNodes. It is the recommended format. Two groups of parameter values are separated by commas (,). Take '10.10.0.100:25000' as an example. In this example, the IP address is 10.10.0.100, and the port number is 25000.
+CREATE FOREIGN TABLE ft_region
+(
+    R_REGIONKEY INT4,
+    R_NAME TEXT,
+    R_COMMENT TEXT
+)
+SERVER
+    hdfs_server
+OPTIONS
+(
+    FORMAT 'orc',
+    encoding 'utf8',
+    FOLDERNAME '/user/hive/warehouse/mppdb.db/region_orc11_64stripe/'
+)
+DISTRIBUTE BY
+    roundrobin;
+CREATE FOREIGN TABLE ft_part
+(
+    p_partkey int,
+    p_name text,
+    p_mfgr text,
+    p_brand text,
+    p_type text,
+    p_size int,
+    p_container text,
+    p_retailprice float8,
+    p_comment text
+)
+SERVER
+    hdfs_server
+OPTIONS
+(
+    FORMAT 'orc',
+    encoding 'utf8',
+    FOLDERNAME '/user/hive/warehouse/partition.db/part_4'
+)
+DISTRIBUTE BY
+    roundrobin
+PARTITION BY
+    (p_mfgr) AUTOMAPPED;
GaussDB(DWS) allows you to specify files using the keyword filenames or foldername. The latter is recommended. The keyword DISTRIBUTE BY specifies the storage distribution mode of the region table.
+SELECT * FROM pg_foreign_table WHERE ftrelid='ft_region'::regclass;
+ ftrelid | ftserver | ftwriteonly | ftoptions
+---------+----------+-------------+-------------------------------------------------------------------------------
+   16510 |    16509 | f           | {format=orc,foldername=/user/hive/warehouse/mppdb.db/region_orc11_64stripe/}
+(1 row)
+
+select * from pg_foreign_table where ftrelid='ft_part'::regclass;
+ ftrelid | ftserver | ftwriteonly | ftoptions
+---------+----------+-------------+------------------------------------------------------------------
+   16513 |    16509 | f           | {format=orc,foldername=/user/hive/warehouse/partition.db/part_4}
+(1 row)
Export data from the TPC-H benchmark test table region to the /user/hive/warehouse/mppdb.db/regin_orc/ directory of the HDFS file system through an HDFS write-only foreign table.
+CREATE FOREIGN TABLE ft_wo_region
+(
+    R_REGIONKEY INT4,
+    R_NAME TEXT,
+    R_COMMENT TEXT
+)
+SERVER
+    hdfs_server
+OPTIONS
+(
+    FORMAT 'orc',
+    encoding 'utf8',
+    FOLDERNAME '/user/hive/warehouse/mppdb.db/regin_orc/'
+)
+WRITE ONLY;
+INSERT INTO ft_wo_region SELECT * FROM region;
Perform operations on an HDFS foreign table that includes informational constraints.
+CREATE FOREIGN TABLE ft_region (
+    R_REGIONKEY int,
+    R_NAME TEXT,
+    R_COMMENT TEXT,
+    primary key (R_REGIONKEY) not enforced)
+SERVER hdfs_server
+OPTIONS(format 'orc',
+    encoding 'utf8',
+    foldername '/user/hive/warehouse/mppdb.db/region_orc11_64stripe')
+DISTRIBUTE BY roundrobin;
+SELECT relname,relhasindex FROM pg_class WHERE oid='ft_region'::regclass;
+        relname         | relhasindex
+------------------------+-------------
+ ft_region              | f
+(1 row)
+
+SELECT conname, contype, consoft, conopt, conindid, conkey FROM pg_constraint WHERE conname ='region_pkey';
+   conname   | contype | consoft | conopt | conindid | conkey
+-------------+---------+---------+--------+----------+--------
+ region_pkey | p       | t       | t      |        0 | {1}
+(1 row)
+ALTER FOREIGN TABLE ft_region DROP CONSTRAINT region_pkey RESTRICT;
+
+SELECT conname, contype, consoft, conindid, conkey FROM pg_constraint WHERE conname ='region_pkey';
+ conname | contype | consoft | conindid | conkey
+---------+---------+---------+----------+--------
+(0 rows)
+ALTER FOREIGN TABLE ft_region ADD CONSTRAINT constr_unique UNIQUE(R_REGIONKEY) NOT ENFORCED;
+ALTER FOREIGN TABLE ft_region DROP CONSTRAINT constr_unique RESTRICT;
+
+SELECT conname, contype, consoft, conindid, conkey FROM pg_constraint WHERE conname ='constr_unique';
+ conname | contype | consoft | conindid | conkey
+---------+---------+---------+----------+--------
+(0 rows)
+ALTER FOREIGN TABLE ft_region ADD CONSTRAINT constr_unique UNIQUE(R_REGIONKEY) NOT ENFORCED disable query optimization;
+
+SELECT relname,relhasindex FROM pg_class WHERE oid='ft_region'::regclass;
+        relname         | relhasindex
+------------------------+-------------
+ ft_region              | f
+(1 row)
+ALTER FOREIGN TABLE ft_region DROP CONSTRAINT constr_unique CASCADE;
Read data stored in OBS using a foreign table.
+1 +2 +3 +4 +5 +6 | CREATE SERVER obs_server FOREIGN DATA WRAPPER DFS_FDW OPTIONS ( + ADDRESS 'obs.xxx.xxx.com', + ACCESS_KEY 'xxxxxxxxx', + SECRET_ACCESS_KEY 'yyyyyyyyyyyyy', + TYPE 'OBS' +); + |
+CREATE FOREIGN TABLE customer_address
+(
+    ca_address_sk       integer     not null,
+    ca_address_id       char(16)    not null,
+    ca_street_number    char(10),
+    ca_street_name      varchar(60),
+    ca_street_type      char(15),
+    ca_suite_number     char(10),
+    ca_city             varchar(60),
+    ca_county           varchar(30),
+    ca_state            char(2),
+    ca_zip              char(10),
+    ca_country          varchar(20),
+    ca_gmt_offset       decimal(36,33),
+    ca_location_type    char(20)
+)
+SERVER obs_server OPTIONS (
+    FOLDERNAME '/user/hive/warehouse/mppdb.db/region_orc11_64stripe1/',
+    FORMAT 'ORC',
+    ENCODING 'utf8',
+    TOTALROWS '20'
+)
+DISTRIBUTE BY roundrobin;
+SELECT COUNT(*) FROM customer_address;
+ count
+-------
+    20
+(1 row)
Read data from a DLI multi-version table using a foreign table. The DLI multi-version foreign table is supported only in 8.1.1 or later.
+1 +2 +3 +4 +5 +6 +7 +8 +9 | CREATE SERVER dli_server FOREIGN DATA WRAPPER DFS_FDW OPTIONS ( + ADDRESS 'obs.xxx.xxx.com', + ACCESS_KEY 'xxxxxxxxx', + SECRET_ACCESS_KEY 'yyyyyyyyyyyyy', + TYPE 'DLI', + DLI_ADDRESS 'dli.xxx.xxx.com', + DLI_ACCESS_KEY 'xxxxxxxxx', + DLI_SECRET_ACCESS_KEY 'yyyyyyyyyyyyy' +); + |
+CREATE FOREIGN TABLE customer_address
+(
+    ca_address_sk       integer     not null,
+    ca_address_id       char(16)    not null,
+    ca_street_number    char(10),
+    ca_street_name      varchar(60),
+    ca_street_type      char(15),
+    ca_suite_number     char(10),
+    ca_city             varchar(60),
+    ca_county           varchar(30),
+    ca_state            char(2),
+    ca_zip              char(10),
+    ca_country          varchar(20),
+    ca_gmt_offset       decimal(36,33),
+    ca_location_type    char(20)
+)
+SERVER dli_server OPTIONS (
+    FORMAT 'ORC',
+    ENCODING 'utf8',
+    DLI_PROJECT_ID 'xxxxxxxxxxxxxxx',
+    DLI_DATABASE_NAME 'database123',
+    DLI_TABLE_NAME 'table456'
+)
+DISTRIBUTE BY roundrobin;
+SELECT COUNT(*) FROM customer_address;
+ count
+-------
+    20
+(1 row)
CREATE FUNCTION creates a function.
+CREATE [ OR REPLACE ] FUNCTION function_name
+    ( [ { argname [ argmode ] argtype [ { DEFAULT | := | = } expression ]} [, ...] ] )
+    [ RETURNS rettype [ DETERMINISTIC ] | RETURNS TABLE ( { column_name column_type } [, ...] )]
+    LANGUAGE lang_name
+    [
+        {IMMUTABLE | STABLE | VOLATILE }
+        | {SHIPPABLE | NOT SHIPPABLE}
+        | WINDOW
+        | [ NOT ] LEAKPROOF
+        | {CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT }
+        | {[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER | AUTHID DEFINER | AUTHID CURRENT_USER}
+        | {FENCED | NOT FENCED}
+        | {PACKAGE}
+        | COST execution_cost
+        | ROWS result_rows
+        | SET configuration_parameter { {TO | =} value | FROM CURRENT }
+    ][...]
+    {
+        AS 'definition'
+        | AS 'obj_file', 'link_symbol'
+    }
The Oracle-compatible syntax, which uses RETURN and a PL/SQL body, is as follows:

CREATE [ OR REPLACE ] FUNCTION function_name
    ( [ { argname [ argmode ] argtype [ { DEFAULT | := | = } expression ] } [, ...] ] )
    RETURN rettype [ DETERMINISTIC ]
    [
      {IMMUTABLE | STABLE | VOLATILE }
      | {SHIPPABLE | NOT SHIPPABLE}
      | {PACKAGE}
      | {FENCED | NOT FENCED}
      | [ NOT ] LEAKPROOF
      | {CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT }
      | {[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER | AUTHID DEFINER | AUTHID CURRENT_USER}
      | COST execution_cost
      | ROWS result_rows
      | SET configuration_parameter { {TO | =} value | FROM CURRENT }
    ][...]
    { IS | AS } plsql_body
/
Indicates the name of the function to create (optionally schema-qualified).
+Value range: a string. It must comply with the naming convention.
+Indicates the name of a function parameter.
+Value range: a string. It must comply with the naming convention.
+Indicates the mode of a parameter.
+Value range: IN, OUT, IN OUT, INOUT, and VARIADIC. The default value is IN. Only the parameter of OUT mode can be followed by VARIADIC. The parameters of OUT and INOUT cannot be used in function definition of RETURNS TABLE.
+VARIADIC specifies parameters of array types.
+Indicates the data types of the function's parameters.
+Indicates the default expression of a parameter.
+Indicates the return data type.
When there is an OUT or IN OUT parameter, the RETURNS clause can be omitted. If the clause is present, it must match the result type indicated by the output parameters: RECORD if there are multiple output parameters, or the type of the single output parameter otherwise.
+The SETOF modifier indicates that the function will return a set of items, rather than a single item.
This syntax is an adaptation of Oracle SQL syntax and is not recommended.
+Specifies the column name.
+Specifies the column type.
+Specifies a string constant defining the function; the meaning depends on the language. It can be an internal function name, a path pointing to a target file, a SQL query, or text in a procedural language.
Indicates the name of the language that is used to implement the function. It can be SQL, internal, or the name of a user-defined procedural language. For backward compatibility, the name can be enclosed in single quotation marks. Contents in single quotation marks must be capitalized.
+Indicates that the function is a window function. The WINDOW attribute cannot be changed when the function definition is replaced.
+For a user-defined window function, the value of LANGUAGE can only be internal, and the referenced internal function must be a window function.
IMMUTABLE

Indicates that the function always returns the same result if the parameter values are the same.
+If the input argument of the function is a constant, the function value is calculated at the optimizer stage. The advantage is that the expression value can be obtained as early as possible, so the cost estimation is more accurate and the execution plan generated is better.
+A user-defined IMMUTABLE function is automatically pushed down to DNs for execution, which may cause potential risks. If a function is defined as IMMUTABLE but the function execution process is in fact not IMMUTABLE, serious problems such as result errors may occur. Therefore, exercise caution when defining the IMMUTABLE attribute for a function.
+Examples:
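One illustrative sketch (the table and function names are hypothetical): a function that reads from a regular table is not truly IMMUTABLE, because its result depends on the table's data rather than on its arguments alone:

-- Hypothetical objects for illustration only.
CREATE TABLE t_conf(k int, v int);
INSERT INTO t_conf VALUES (1, 100);

-- Incorrectly declared IMMUTABLE: the result depends on t_conf's contents.
CREATE OR REPLACE FUNCTION get_conf(key int) RETURNS int AS
    'SELECT v FROM t_conf WHERE k = $1;'
LANGUAGE SQL IMMUTABLE;

-- The optimizer may fold get_conf(1) to the constant 100 at plan time.
-- After UPDATE t_conf SET v = 200 WHERE k = 1; a cached or pushed-down
-- plan can still return 100, producing wrong results.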
To prevent possible problems, you can set behavior_compat_options to check_function_conflicts in the database to check for definition conflicts. This method can identify scenarios 1 and 2 described above.
STABLE

Indicates that the function cannot modify the database and that, within a single table scan, it consistently returns the same result for the same parameter values. Its result, however, may change across SQL statements.

VOLATILE

Indicates that the function value can change even within a single table scan, so no optimizations can be made.
+NOT SHIPPABLE
+Indicates whether the function can be pushed down to DNs for execution.
+Exercise caution when defining the SHIPPABLE attribute for a function. SHIPPABLE means that the entire function will be pushed down to DNs for execution. If the attribute is incorrectly set, serious problems such as result errors may occur.
+Similar to the IMMUTABLE attribute, the SHIPPABLE attribute has use restrictions. The function cannot contain factors that do not allow the function to be pushed down for execution. If a function is pushed down to a single DN for execution, the function's calculation logic will depend only on the data set of the DN.
+Examples:
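As an illustrative sketch (hypothetical names): a function whose result depends on data from all DNs must not be marked SHIPPABLE, because pushing it down to a single DN would restrict it to that DN's data set:

-- Hypothetical objects for illustration only.
CREATE TABLE t_sales(id int, amount numeric) DISTRIBUTE BY HASH(id);

-- The count depends on rows from every DN, so the function must not be
-- pushed down to a single DN.
CREATE OR REPLACE FUNCTION total_sales() RETURNS bigint AS
    'SELECT count(*) FROM t_sales;'
LANGUAGE SQL NOT SHIPPABLE;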
+Indicates that the function has no side effects. LEAKPROOF can be set only by the system administrator.
CALLED ON NULL INPUT

Indicates that the function can be called normally when some of its parameter values are NULL. This parameter can be omitted.
+STRICT
+Indicates that the function always returns NULL whenever any of its parameters are NULL. If this parameter is specified, the function is not executed when there are NULL parameters; instead a NULL result is returned automatically.
+The usage of RETURNS NULL ON NULL INPUT is the same as that of STRICT.
+The keyword EXTERNAL is allowed for SQL conformance, but it is optional since, unlike in SQL, this feature applies to all functions not only external ones.
+AUTHID CURRENT_USER
+Indicates that the function is to be executed with the permissions of the user that calls it. This parameter can be omitted.
+SECURITY INVOKER and AUTHID CURRENT_USER have the same functions.
+AUTHID DEFINER
+Specifies that the function is to be executed with the permissions of the user that created it.
+The usage of AUTHID DEFINER is the same as that of SECURITY DEFINER.
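A minimal sketch of the distinction, using hypothetical objects: a SECURITY DEFINER (or AUTHID DEFINER) function runs with its owner's permissions, so callers can be given a narrow capability without direct access to the underlying table:

-- Hypothetical objects for illustration only.
CREATE TABLE audit_log(who text, at timestamp);

CREATE OR REPLACE FUNCTION write_audit() RETURNS void AS
    'INSERT INTO audit_log VALUES (current_user, now());'
LANGUAGE SQL SECURITY DEFINER;

-- A user granted only EXECUTE on write_audit() can add audit records
-- without holding INSERT permission on audit_log itself.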
+NOT FENCED
+(Effective only for C functions) Specifies whether functions are executed in fenced mode. In NOT FENCED mode, a function is executed in a CN or DN process. In FENCED mode, a function is executed in a new fork process, which does not affect CN or DN processes.
+Application scenarios:
COST execution_cost

A positive number giving the estimated execution cost for the function. The unit of execution_cost is cpu_operator_cost.

Value range: a positive number.

ROWS result_rows

Estimates the number of rows returned by the function. This is only allowed when the function is declared to return a set.

Value range: a positive number. The default is 1000 rows.

SET configuration_parameter

Sets a specified database session parameter to a specified value. If the value is DEFAULT or RESET, the default setting is used in the new session. OFF disables the setting.
+Value range: a string
+Specifies the default value.
+Uses the value of configuration_parameter of the current session.
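For example (a sketch reusing the tpcds tables that appear elsewhere in this document), SET ... FROM CURRENT freezes the creating session's parameter value into the function, so name resolution inside the function does not change when the session later changes search_path:

SET search_path = tpcds, public;

-- The function keeps search_path = tpcds, public regardless of what the
-- calling session sets later.
CREATE OR REPLACE FUNCTION count_ship_modes() RETURNS bigint AS
    'SELECT count(*) FROM ship_mode_t1;'
LANGUAGE SQL
SET search_path FROM CURRENT;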
+(Used for C functions) Specifies the absolute path of the dynamic library using obj_file and the link symbol (function name in C programming language) of the function using link_symbol.
+Indicates the PL/SQL stored procedure body.
If a function creates users, the log will record the unencrypted password. You are advised not to create users in a function.
Define a function as an SQL query.
CREATE FUNCTION func_add_sql(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;
Increment an integer by 1 using PL/pgSQL, referencing the parameter by name.
CREATE OR REPLACE FUNCTION func_increment_plsql(i integer) RETURNS integer AS $$
BEGIN
    RETURN i + 1;
END;
$$ LANGUAGE plpgsql;
Return the RECORD type.
CREATE OR REPLACE FUNCTION compute(i int, out result_1 bigint, out result_2 bigint)
RETURNS SETOF RECORD
AS $$
BEGIN
    result_1 = i + 1;
    result_2 = i * 10;
    RETURN NEXT;
END;
$$ LANGUAGE plpgsql;
Get a record containing multiple output parameters.
CREATE FUNCTION func_dup_sql(in int, out f1 int, out f2 text)
    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
    LANGUAGE SQL;

SELECT * FROM func_dup_sql(42);
Calculate the sum of two integers and get the result. If the input is null, null will be returned.
CREATE FUNCTION func_add_sql2(num1 integer, num2 integer) RETURN integer
AS
BEGIN
    RETURN num1 + num2;
END;
/
Create an overloaded function with the PACKAGE attribute.
CREATE OR REPLACE FUNCTION package_func_overload(col int, col2 int)
RETURN integer PACKAGE
AS
DECLARE
    col_type text;
BEGIN
    col := 122;
    dbms_output.put_line('two int parameters ' || col2);
    RETURN 0;
END;
/

CREATE OR REPLACE FUNCTION package_func_overload(col int, col2 smallint)
RETURN integer PACKAGE
AS
DECLARE
    col_type text;
BEGIN
    col := 122;
    dbms_output.put_line('two smallint parameters ' || col2);
    RETURN 0;
END;
/
CREATE GROUP creates a user group.
CREATE GROUP is an alias for CREATE ROLE. It is not a standard SQL statement and is not recommended. Use CREATE ROLE instead.
CREATE GROUP group_name [ [ WITH ] option [ ... ] ]
    [ ENCRYPTED | UNENCRYPTED ] { PASSWORD | IDENTIFIED BY } { 'password' | DISABLE };
The syntax of the optional option clause is as follows:
where option can be:
{SYSADMIN | NOSYSADMIN}
    | {AUDITADMIN | NOAUDITADMIN}
    | {CREATEDB | NOCREATEDB}
    | {USEFT | NOUSEFT}
    | {CREATEROLE | NOCREATEROLE}
    | {INHERIT | NOINHERIT}
    | {LOGIN | NOLOGIN}
    | {REPLICATION | NOREPLICATION}
    | {INDEPENDENT | NOINDEPENDENT}
    | {VCADMIN | NOVCADMIN}
    | CONNECTION LIMIT connlimit
    | VALID BEGIN 'timestamp'
    | VALID UNTIL 'timestamp'
    | RESOURCE POOL 'respool'
    | USER GROUP 'groupuser'
    | PERM SPACE 'spacelimit'
    | NODE GROUP logic_group_name
    | IN ROLE role_name [, ...]
    | IN GROUP role_name [, ...]
    | ROLE role_name [, ...]
    | ADMIN role_name [, ...]
    | USER role_name [, ...]
    | SYSID uid
    | DEFAULT TABLESPACE tablespace_name
    | PROFILE DEFAULT
    | PROFILE profile_name
    | PGUSER
See Parameter Description in CREATE ROLE.
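A minimal usage sketch (hypothetical group name; '{password}' is a placeholder, following the convention used in the role examples later in this document):

CREATE GROUP test_group WITH PASSWORD '{password}';
-- Equivalent to: CREATE ROLE test_group WITH PASSWORD '{password}';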
CREATE INDEX defines a new index.
+Indexes are primarily used to enhance database performance (though inappropriate use can result in slower database performance). You are advised to create indexes on:
Partitioned tables do not support concurrent index creation, partial indexes, or NULLS FIRST.
CREATE [ UNIQUE ] INDEX [ [ schema_name. ] index_name ] ON table_name [ USING method ]
    ({ { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] }[, ...] )
    [ WITH ( {storage_parameter = value} [, ... ] ) ]
    [ TABLESPACE tablespace_name ]
    [ WHERE predicate ];

The syntax for creating an index on a partitioned table is as follows:

CREATE [ UNIQUE ] INDEX [ [ schema_name. ] index_name ] ON table_name [ USING method ]
    ( {{ column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [ ASC | DESC ] [ NULLS LAST ] }[, ...] )
    LOCAL [ ( { PARTITION index_partition_name [ TABLESPACE index_partition_tablespace ] } [, ...] ) ]
    [ WITH ( { storage_parameter = value } [, ...] ) ]
    [ TABLESPACE tablespace_name ];
UNIQUE

Causes the system to check for duplicate values in the table when the index is created (if data exists) and each time data is added. Attempts to insert or update data which would result in duplicate entries will generate an error.
+Currently, only B-tree indexes of row-store tables and column-store tables support unique indexes.
+Name of the schema where the index to be created is located. The specified schema name must be the same as the schema of the table.
+Specifies the name of the index to be created. The schema of the index is the same as that of the table.
+Value range: a string. It must comply with the naming convention.
+Specifies the name of the table to be indexed (optionally schema-qualified).
+Value range: an existing table name
+Specifies the name of the index method to be used.
+Valid value:
+Row-based tables support the following index types: btree (default), gin, and gist. Column-based tables support the following index types: Psort (default), btree, and gin.
+Specifies the name of a column of the table.
+Multiple columns can be specified if the index method supports multi-column indexes. A maximum of 32 columns can be specified.
+Specifies an expression based on one or more columns of the table. The expression usually must be written with surrounding parentheses, as shown in the syntax. However, the parentheses can be omitted if the expression has the form of a function call.
+Expression can be used to obtain fast access to data based on some transformation of the basic data. For example, an index computed on upper(col) would allow the clause WHERE upper(col) = 'JIM' to use an index.
+If an expression contains IS NULL, the index for this expression is invalid. In this case, you are advised to create a partial index.
+Assigns a collation to the column (which must be of a collatable data type). If no collation is specified, the default collation is used.
+Specifies the name of an operator class. Specifies an operator class for each column of an index. The operator class identifies the operators to be used by the index for that column. For example, a B-tree index on the type int4 would use the int4_ops class; this operator class includes comparison functions for values of type int4. In practice, the default operator class for the column's data type is sufficient. The operator class applies to data with multiple sorts. For example, we might want to sort a complex-number data type either by absolute value or by real part. We could do this by defining two operator classes for the data type and then selecting the proper class when making an index.
ASC

Indicates ascending sort order (default). This option is supported only by row storage.

DESC

Indicates descending sort order. This option is supported only by row storage.

NULLS FIRST

Specifies that nulls sort before not-null values. This is the default when DESC is specified.

NULLS LAST

Specifies that nulls sort after not-null values. This is the default when DESC is not specified.
+Specifies the name of an index-method-specific storage parameter.
+Valid value:
+The fillfactor for an index is a percentage between 10 and 100.
+Value range: 10–100
+Specifies whether fast update is enabled for the GIN index.
+Valid value: ON and OFF
+Default: ON
+Specifies the maximum capacity of the pending list of the GIN index when fast update is enabled for the GIN index.
+Value range: 64–INT_MAX. The unit is KB.
+Default value: The default value of gin_pending_list_limit depends on gin_pending_list_limit specified in GUC parameters. By default, the value is 4 MB.
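As an illustration (hypothetical index names; the tpcds.ship_mode_t1 table is defined in the examples later in this section), storage parameters are supplied through the WITH clause:

-- B-tree index with a custom fillfactor.
CREATE INDEX ds_ship_mode_t1_ff ON tpcds.ship_mode_t1(SM_CODE) WITH (fillfactor = 75);

-- GIN expression index with fast update enabled and a 2 MB pending list (in KB).
CREATE INDEX ds_ship_mode_t1_gin ON tpcds.ship_mode_t1
    USING gin(to_tsvector('english', SM_TYPE))
    WITH (fastupdate = on, gin_pending_list_limit = 2048);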
+Creates a partial index. A partial index is an index that contains entries for only a portion of a table, usually a portion that is more useful for indexing than the rest of the table. For example, if you have a table that contains both billed and unbilled orders where the unbilled orders take up a small fraction of the total table and yet that is an often used section, you can improve performance by creating an index on just that portion. Another possible application is to use WHERE with UNIQUE to enforce uniqueness over a subset of a table.
+Value range: predicate expression can refer only to columns of the underlying table, but it can use all columns, not just the ones being indexed. Presently, subquery and aggregate expressions are also forbidden in WHERE.
+Specifies the name of the index partition.
+Value range: a string. It must comply with the naming convention.
CREATE TABLE tpcds.ship_mode_t1
(
    SM_SHIP_MODE_SK INTEGER  NOT NULL,
    SM_SHIP_MODE_ID CHAR(16) NOT NULL,
    SM_TYPE         CHAR(30),
    SM_CODE         CHAR(10),
    SM_CARRIER      CHAR(20),
    SM_CONTRACT     CHAR(20)
)
DISTRIBUTE BY HASH(SM_SHIP_MODE_SK);
Create a unique index on the SM_SHIP_MODE_SK column in the tpcds.ship_mode_t1 table.
CREATE UNIQUE INDEX ds_ship_mode_t1_index1 ON tpcds.ship_mode_t1(SM_SHIP_MODE_SK);
Create a B-tree index on the SM_SHIP_MODE_SK column in the tpcds.ship_mode_t1 table.
CREATE INDEX ds_ship_mode_t1_index4 ON tpcds.ship_mode_t1 USING btree(SM_SHIP_MODE_SK);
Create an expression index on the SM_CODE column in the tpcds.ship_mode_t1 table.
CREATE INDEX ds_ship_mode_t1_index2 ON tpcds.ship_mode_t1(SUBSTR(SM_CODE, 1, 4));
Create a partial index on the SM_SHIP_MODE_SK column where SM_SHIP_MODE_SK is greater than 10 in the tpcds.ship_mode_t1 table.
CREATE UNIQUE INDEX ds_ship_mode_t1_index3 ON tpcds.ship_mode_t1(SM_SHIP_MODE_SK) WHERE SM_SHIP_MODE_SK > 10;
CREATE TABLE tpcds.customer_address_p1
(
    CA_ADDRESS_SK    INTEGER  NOT NULL,
    CA_ADDRESS_ID    CHAR(16) NOT NULL,
    CA_STREET_NUMBER CHAR(10),
    CA_STREET_NAME   VARCHAR(60),
    CA_STREET_TYPE   CHAR(15),
    CA_SUITE_NUMBER  CHAR(10),
    CA_CITY          VARCHAR(60),
    CA_COUNTY        VARCHAR(30),
    CA_STATE         CHAR(2),
    CA_ZIP           CHAR(10),
    CA_COUNTRY       VARCHAR(20),
    CA_GMT_OFFSET    DECIMAL(5,2),
    CA_LOCATION_TYPE CHAR(20)
)
DISTRIBUTE BY HASH(CA_ADDRESS_SK)
PARTITION BY RANGE(CA_ADDRESS_SK)
(
    PARTITION p1 VALUES LESS THAN (3000),
    PARTITION p2 VALUES LESS THAN (5000),
    PARTITION p3 VALUES LESS THAN (MAXVALUE)
)
ENABLE ROW MOVEMENT;
Create the partitioned table index ds_customer_address_p1_index1 with the name of the index partition not specified.
CREATE INDEX ds_customer_address_p1_index1 ON tpcds.customer_address_p1(CA_ADDRESS_SK) LOCAL;
Create the partitioned table index ds_customer_address_p1_index2 with the name of the index partition specified.
CREATE INDEX ds_customer_address_p1_index2 ON tpcds.customer_address_p1(CA_ADDRESS_SK) LOCAL
(
    PARTITION CA_ADDRESS_SK_index1,
    PARTITION CA_ADDRESS_SK_index2,
    PARTITION CA_ADDRESS_SK_index3
);
CREATE REDACTION POLICY creates a data redaction policy for a table.
CREATE REDACTION POLICY policy_name ON table_name
    [ WHEN (when_expression) ]
    [ ADD COLUMN column_name WITH redaction_function_name ( [ argument [, ...] ] ) ] [, ...];
Specifies the name of a redaction policy.
+Specifies the name of the table to which the redaction policy is applied.
+Specifies the expression used for the redaction policy to take effect. The redaction policy takes effect only when this expression is true.
+When a query statement is querying a table where a redaction policy is enabled, the redacted data is invisible in the query only if the WHEN expression for the redaction policy is true. Generally, the WHEN clause is used to specify the users for which the redaction policy takes effect.
+The WHEN clause must comply with the following rules:
+Specifies the name of the table column to which the redaction policy is applied.
+Specifies the redaction function applied to the specified table column.
+Specifies the list of arguments of the redaction function.
+The system provides three built-in redaction functions: MASK_NONE, MASK_FULL, and MASK_PARTIAL. For details about the function specifications, see Data Redaction Functions. You can also define your own redaction functions, which must comply with the following rules:
+Built-in redaction functions can cover common redaction scenarios of sensitive information. Therefore, you are advised to use built-in redaction functions to create redaction policies.
+Create a table object emp as user alice, and insert data into the table.
CREATE TABLE emp(id int, name varchar(20), salary NUMERIC(10,2));
INSERT INTO emp VALUES(1, 'July', 1230.10), (2, 'David', 999.99);
Create a redaction policy mask_emp for the emp table as user alice to make the salary column invisible to user matu.
CREATE REDACTION POLICY mask_emp ON emp WHEN(current_user = 'matu') ADD COLUMN salary WITH mask_full(salary);
Grant the SELECT permission on the emp table to user matu as user alice.
GRANT SELECT ON emp TO matu;
Switch to user matu.
SET ROLE matu PASSWORD '{password}';
Query the emp table. Data in the salary column has been redacted.
SELECT * FROM emp;
CREATE ROW LEVEL SECURITY POLICY creates a row-level access control policy for a table.
+The policy takes effect only after row-level access control is enabled (by running ALTER TABLE... ENABLE ROW LEVEL SECURITY).
+Currently, row-level access control affects the read (SELECT, UPDATE, DELETE) of data tables and does not affect the write (INSERT and MERGE INTO) of data tables. The table owner or system administrators can create an expression in the USING clause. When the client reads the data table, the database server combines the expressions that meet the condition and applies it to the execution plan in the statement rewriting phase of a query. For each tuple in a data table, if the expression returns TRUE, the tuple is visible to the current user; if the expression returns FALSE or NULL, the tuple is invisible to the current user.
+A row-level access control policy name is specific to a table. A data table cannot have row-level access control policies with the same name. Different data tables can have the same row-level access control policy.
+Row-level access control policies can be applied to specified operations (SELECT, UPDATE, DELETE, and ALL). ALL indicates that SELECT, UPDATE, and DELETE will be affected. For a new row-level access control policy, the default value ALL will be used if you do not specify the operations that will be affected.
+Row-level access control policies can be applied to a specified user (role) or to all users (PUBLIC). For a new row-level access control policy, the default value PUBLIC will be used if you do not specify the user that will be affected.
ALTER TABLE public.all_data ALTER COLUMN role TYPE text;
CREATE [ ROW LEVEL SECURITY ] POLICY policy_name ON table_name
    [ AS { PERMISSIVE | RESTRICTIVE } ]
    [ FOR { ALL | SELECT | UPDATE | DELETE } ]
    [ TO { role_name | PUBLIC } [, ...] ]
    USING ( using_expression )
Specifies the name of a row-level access control policy to be created. The names of row-level access control policies for a table must be unique.
+Specifies the name of a table to which a row-level access control policy is applied.
+Specifies that the row-level access control policy is to be created as a permissive policy. For a given query, all applicable permissive policies are combined using the OR operator. Row-level access control policies are permissive by default.
+Specifies that the row-level access control policy is to be created as a restrictive policy. For a given query, all applicable restrictive policies are combined using the AND operator.
+At least one permissive policy is required to grant access to data records. If only restrictive policies are used, no records will be accessible. When both permissive and restrictive policies are used, a record is accessible only when it passes at least one permissive policy and all restrictive policies.
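A short sketch combining both kinds (hypothetical policy names, reusing the all_data table from the example later in this section): a row is visible only if it passes the permissive policy and the restrictive policy:

-- Permissive policy: each user sees only their own rows.
CREATE ROW LEVEL SECURITY POLICY rls_own ON all_data
    USING(role = CURRENT_USER);

-- Restrictive policy: additionally hide rows whose data column is NULL.
CREATE ROW LEVEL SECURITY POLICY rls_nonempty ON all_data
    AS RESTRICTIVE USING(data IS NOT NULL);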
+Specifies the SQL operations affected by a row-level access control policy, including ALL, SELECT, UPDATE, and DELETE. If this parameter is not specified, the default value ALL will be used, covering SELECT, UPDATE, and DELETE.
If command is set to SELECT, only tuple data that meets the condition (the return value of using_expression is TRUE) can be queried. The operations that are affected include SELECT, UPDATE ... RETURNING, and DELETE ... RETURNING.
+If command is set to UPDATE, only tuple data that meets the condition (the return value of using_expression is TRUE) can be updated. The operations that are affected include UPDATE, UPDATE ... RETURNING, and SELECT ... FOR UPDATE/SHARE.
+If command is set to DELETE, only tuple data that meets the condition (the return value of using_expression is TRUE) can be deleted. The operations that are affected include DELETE and DELETE ... RETURNING.
+The following table describes the relationship between row-level access control policies and SQL statements.
Command | SELECT/ALL Policy | UPDATE/ALL Policy | DELETE/ALL Policy
---|---|---|---
SELECT | Existing row | No | No
SELECT FOR UPDATE/SHARE | Existing row | Existing row | No
UPDATE | No | Existing row | No
UPDATE RETURNING | Existing row | Existing row | No
DELETE | No | No | Existing row
DELETE RETURNING | Existing row | No | Existing row
Specifies database users affected by a row-level access control policy.
+If this parameter is not specified, the default value PUBLIC will be used, indicating that all database users will be affected. You can specify multiple affected database users.
+System administrators are not affected by row access control.
+Specifies an expression defined for a row-level access control policy (return type: boolean).
+The expression cannot contain aggregate functions and window functions. In the statement rewriting phase of a query, if row-level access control for a data table is enabled, the expressions that meet the specified conditions will be added to the plan tree. The expression is calculated for each tuple in the data table. For SELECT, UPDATE, and DELETE, row data is visible to the current user only when the return value of the expression is TRUE. If the expression returns FALSE, the tuple is invisible to the current user. In this case, the user cannot view the tuple through the SELECT statement, update the tuple through the UPDATE statement, or delete the tuple through the DELETE statement.
CREATE ROLE alice PASSWORD '{password1}';
CREATE ROLE bob PASSWORD '{password2}';

CREATE TABLE public.all_data(id int, role varchar(100), data varchar(100));

INSERT INTO all_data VALUES(1, 'alice', 'alice data');
INSERT INTO all_data VALUES(2, 'bob', 'bob data');
INSERT INTO all_data VALUES(3, 'peter', 'peter data');

GRANT SELECT ON all_data TO alice, bob;
ALTER TABLE all_data ENABLE ROW LEVEL SECURITY;

CREATE ROW LEVEL SECURITY POLICY all_data_rls ON all_data USING(role = CURRENT_USER);
\d+ all_data
                               Table "public.all_data"
 Column |          Type          | Modifiers | Storage  | Stats target | Description
--------+------------------------+-----------+----------+--------------+-------------
 id     | integer                |           | plain    |              |
 role   | character varying(100) |           | extended |              |
 data   | character varying(100) |           | extended |              |
Row Level Security Policies:
    POLICY "all_data_rls"
      USING (((role)::name = "current_user"()))
Has OIDs: no
Distribute By: HASH(id)
Location Nodes: ALL DATANODES
Options: orientation=row, compression=no, enable_rowsecurity=true

SELECT * FROM all_data;
 id | role  |    data
----+-------+------------
  1 | alice | alice data
  2 | bob   | bob data
  3 | peter | peter data
(3 rows)

EXPLAIN(COSTS OFF) SELECT * FROM all_data;
         QUERY PLAN
----------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Seq Scan on all_data
(3 rows)

SET ROLE alice PASSWORD '{password1}';

SELECT * FROM all_data;
 id | role  |    data
----+-------+------------
  1 | alice | alice data
(1 row)

EXPLAIN(COSTS OFF) SELECT * FROM all_data;
                           QUERY PLAN
----------------------------------------------------------------
 Streaming (type: GATHER)
   Node/s: All datanodes
   ->  Seq Scan on all_data
         Filter: ((role)::name = 'alice'::name)
 Notice: This query is influenced by row level security feature
(5 rows)
CREATE PROCEDURE creates a stored procedure.
CREATE [ OR REPLACE ] PROCEDURE procedure_name
    [ ( {[ argmode ] [ argname ] argtype [ { DEFAULT | := | = } expression ]}[,...]) ]
    [
      { IMMUTABLE | STABLE | VOLATILE }
      | { SHIPPABLE | NOT SHIPPABLE }
      | {PACKAGE}
      | [ NOT ] LEAKPROOF
      | { CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT }
      | {[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER | AUTHID DEFINER | AUTHID CURRENT_USER}
      | COST execution_cost
      | ROWS result_rows
      | SET configuration_parameter { [ TO | = ] value | FROM CURRENT }
    ][ ... ]
    { IS | AS }
plsql_body
/
Replaces the original definition if a stored procedure with the same name already exists.
+Specifies the name of the stored procedure that is created (optionally with schema names).
+Value range: a string. It must comply with the naming convention.
+Specifies the mode of an argument.
+VARIADIC specifies arguments of array types.
+Value range: IN, OUT, IN OUT, INOUT, and VARIADIC. The default value is IN. Only the argument of OUT mode can be followed by VARIADIC. The parameters of OUT and INOUT cannot be used in procedure definition of RETURNS TABLE.
+Specifies the name of an argument.
+Value range: a string. It must comply with the naming convention.
+Specifies the type of a parameter.
+Value range: A valid data type.
Specifies a constraint. Parameters here are similar to those of CREATE FUNCTION. For details, see CREATE FUNCTION.
+Indicates the PL/SQL stored procedure body.
When you create a user or perform other operations requiring password input in a stored procedure, the system catalogs and CSV logs record the unencrypted password. Therefore, you are advised not to perform such operations in stored procedures.
No specific order is required for argmode and argument_name. The following order is advised: argmode, argument_name, and argument_type.
+Create a stored procedure.
CREATE OR REPLACE PROCEDURE prc_add
(
    param1 IN INTEGER,
    param2 IN OUT INTEGER
)
AS
BEGIN
    param2 := param1 + param2;
    dbms_output.put_line('result is: ' || to_char(param2));
END;
/
Call the stored procedure.
SELECT prc_add(2,3);
Create a stored procedure whose parameter type is VARIADIC.
CREATE OR REPLACE PROCEDURE pro_variadic (var1 VARCHAR2(10) DEFAULT 'hello!', var4 VARIADIC int4[])
AS
BEGIN
    dbms_output.put_line(var1);
END;
/
Execute the stored procedure.
SELECT pro_variadic(var1 => 'hello', VARIADIC var4 => array[1,2,3,4]);
Create a stored procedure with the PACKAGE attribute.
CREATE OR REPLACE PROCEDURE package_func_overload(col int, col2 out varchar)
PACKAGE
AS
DECLARE
    col_type text;
BEGIN
    col2 := '122';
    dbms_output.put_line('two varchar parameters ' || col2);
END;
/
CREATE RESOURCE POOL creates a resource pool and specifies the Cgroup for the resource pool.
Any user with the CREATE permission can create a resource pool.
CREATE RESOURCE POOL pool_name
    [WITH ({MEM_PERCENT=pct | CONTROL_GROUP="group_name" | ACTIVE_STATEMENTS=stmt | MAX_DOP=dop | MEMORY_LIMIT='memory_size' | io_limits=io_limits | io_priority='io_priority' | nodegroup="nodegroupname" | is_foreign=boolean }[, ...])];
Specifies the name of a resource pool.
+The name of a resource pool cannot be same as that of an existing resource pool.
+Value range: a string. It must comply with the naming convention.
+Specifies the name of a Cgroup.
+Value range: a string. It must comply with the rule in the description, specifying an existing Cgroup.
+Specifies the maximum number of statements that can be concurrently executed in a resource pool.
+Value range: Numeric data ranging from -1 to INT_MAX.
+This is a reserved parameter.
+Value range: Numeric data ranging from 1 to INT_MAX.
+Specifies the maximum storage for a resource pool.
+Value range: a string, from 1KB to 2047GB.
+Specifies the proportion of available resource pool memory to the total memory or group user memory.
+In multi-tenant scenarios, mem_percent of group users or service users ranges from 1 to 100. The default value is 20.
+In common scenarios, mem_percent of common users ranges from 0 to 100. The default value is 0.
+When both of mem_percent and memory_limit are specified, only mem_percent takes effect.
+Specifies the upper limit of IOPS in a resource pool.
IOPS is counted in ones for column storage and in tens of thousands for row storage.
+Specifies the I/O priority for jobs that consume many I/O resources. It takes effect when the I/O usage reaches 90%.
+There are three priorities: Low, Medium, and High. If you do not want to control I/O resources, use the default value None.
+The settings of io_limits and io_priority are valid only for complex jobs, such as batch import (using INSERT INTO SELECT, COPY FROM, or CREATE TABLE AS), complex queries involving over 500 MB data on each DN, and VACUUM FULL.
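For example (hypothetical pool name), several of these options can be combined in a single statement; a sketch that limits concurrency, memory share, and I/O for complex jobs:

CREATE RESOURCE POOL pool_etl WITH (
    ACTIVE_STATEMENTS = 10,
    MEM_PERCENT = 20,
    IO_LIMITS = 1000,
    IO_PRIORITY = 'Medium'
);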
+Specifies the name of a logical cluster where the resource pool is. The logical cluster must already exist.
+If the logical cluster name contains uppercase letters or special characters or begins with a digit, enclose the name with double quotation marks in SQL statements.
In logical cluster mode, allows the current resource pool to control the resources of common users that are not associated with the logical cluster specified by nodegroup.
+This example assumes that Cgroups have been created by users in advance.
+Create a default resource pool, and associate it with the Medium Timeshare Cgroup under Workload under DefaultClass.
CREATE RESOURCE POOL pool1;
Create a resource pool, and associate it with the High Timeshare Cgroup under Workload under DefaultClass.
CREATE RESOURCE POOL pool2 WITH (CONTROL_GROUP="High");
Create a resource pool, and associate it with the Low Timeshare Cgroup under Workload under class1.
CREATE RESOURCE POOL pool3 WITH (CONTROL_GROUP="class1:Low");
Create a resource pool, and associate it with the wg1 Workload Cgroup under class1.
CREATE RESOURCE POOL pool4 WITH (CONTROL_GROUP="class1:wg1");
Create a resource pool, and associate it with the wg2 Workload Cgroup under class1.
CREATE RESOURCE POOL pool5 WITH (CONTROL_GROUP="class1:wg2:3");
Create a role.
+A role is an entity that has own database objects and permissions. In different environments, a role can be considered a user, a group, or both.
CREATE ROLE role_name [ [ WITH ] option [ ... ] ] [ ENCRYPTED | UNENCRYPTED ] { PASSWORD | IDENTIFIED BY } { 'password' | DISABLE };
where option can be:
{SYSADMIN | NOSYSADMIN}
    | {AUDITADMIN | NOAUDITADMIN}
    | {CREATEDB | NOCREATEDB}
    | {USEFT | NOUSEFT}
    | {CREATEROLE | NOCREATEROLE}
    | {INHERIT | NOINHERIT}
    | {LOGIN | NOLOGIN}
    | {REPLICATION | NOREPLICATION}
    | {INDEPENDENT | NOINDEPENDENT}
    | {VCADMIN | NOVCADMIN}
    | CONNECTION LIMIT connlimit
    | VALID BEGIN 'timestamp'
    | VALID UNTIL 'timestamp'
    | RESOURCE POOL 'respool'
    | USER GROUP 'groupuser'
    | PERM SPACE 'spacelimit'
    | TEMP SPACE 'tmpspacelimit'
    | SPILL SPACE 'spillspacelimit'
    | NODE GROUP logic_cluster_name
    | IN ROLE role_name [, ...]
    | IN GROUP role_name [, ...]
    | ROLE role_name [, ...]
    | ADMIN role_name [, ...]
    | USER role_name [, ...]
    | SYSID uid
    | DEFAULT TABLESPACE tablespace_name
    | PROFILE DEFAULT
    | PROFILE profile_name
    | PGUSER
    | AUTHINFO 'authinfo'
    | PASSWORD EXPIRATION period
Role name
Value range: a string. It must comply with the naming convention and can contain a maximum of 63 characters.
+Specifies the login password.
+A password must:
+Value range: a string
+By default, you can change your password unless it is disabled. Use this parameter to disable the password of a user. After the password of a user is disabled, the password will be deleted from the system. The user can connect to the database only through external authentication, for example, IAM authentication, Kerberos authentication, or LDAP authentication. Only administrators can enable or disable a password. Common users cannot disable the password of an initial user. To enable a password, run ALTER USER and specify the password.
Determines whether the password stored in the system is encrypted or unencrypted. (If neither is specified, the password status is determined by password_encryption_type.) According to product security requirements, the password must be stored encrypted. Therefore, UNENCRYPTED is forbidden in GaussDB(DWS). If the presented password string is already in SHA256-encrypted format, it is stored encrypted as-is, regardless of whether ENCRYPTED or UNENCRYPTED is specified (since the system cannot decrypt the specified encrypted password string). This allows reloading of encrypted passwords during dump/restore.
+Determines whether a new role is a system administrator. Roles having the SYSADMIN attribute have the highest permission.
+Value range: If not specified, NOSYSADMIN is the default.
+Determines whether a role has the audit and management attributes.
+If not specified, NOAUDITADMIN is the default.
+Defines a role's ability to create databases.
+A new role does not have the permission to create databases.
+Value range: If not specified, NOCREATEDB is the default.
Determines whether a new role can perform operations on foreign tables, such as creating, deleting, modifying, and reading/writing foreign tables.
+A new role does not have permissions for these operations.
+The default value is NOUSEFT.
+Determines whether a role will be permitted to create new roles (that is, execute CREATE ROLE and CREATE USER). A role with the CREATEROLE permission can also modify and delete other roles.
+Value range: If not specified, NOCREATEROLE is the default.
Determines whether a role "inherits" the permissions of roles it is a member of. Using these parameters is not recommended.
+Determines whether a role is allowed to log in to a database. A role having the LOGIN attribute can be thought of as a user.
+Value range: If not specified, NOLOGIN is the default.
+Determines whether a role is allowed to initiate streaming replication or put the system in and out of backup mode. A role having the REPLICATION attribute is a highly privileged role, and should only be used on roles used for replication.
+If not specified, NOREPLICATION is the default.
+Defines private, independent roles. For a role with the INDEPENDENT attribute, administrators' rights to control and access this role are separated. Specific rules are as follows:
Defines the role of a logical cluster administrator. A logical cluster administrator has the following permissions in addition to those of common users:
+Indicates how many concurrent connections the role can make.
+Value range: Integer, >=-1. The default value is -1, which means unlimited.
+To ensure the proper running of a cluster, the minimum value of CONNECTION LIMIT is the number of CNs in the cluster, because when a cluster runs ANALYZE on a CN, other CNs will connect to the running CN for metadata synchronization. For example, if there are three CNs in the cluster, set CONNECTION LIMIT to 3 or a greater value.
+Sets a date and time when the role's password becomes valid. If this clause is omitted, the password will be valid for all time.
+Sets a date and time after which the role's password is no longer valid. If this clause is omitted, the password will be valid for all time.
Sets the name of the resource pool used by the role. The name must be an existing resource pool in the system catalog pg_resource_pool.
+Creates a sub-user. For details, see "Resource Load Management > Tenant Management > User Level Management" in the Developer Guide.
+Sets the storage space of the user permanent table.
+Sets the storage space of the user temporary table.
Sets the disk space available for the user's operators to spill to.
+Specifies the name of the logical cluster associated with a user. If the name contains uppercase characters or special characters, enclose the name with double quotation marks.
Lists one or more existing roles whose permissions will be inherited by the new role. Using this clause is not recommended.

Indicates an obsolete spelling of IN ROLE. Using this clause is not recommended.
+Lists one or more existing roles which are automatically added as members of the new role.
Similar to ROLE, except that the roles listed after ADMIN can grant membership in the new role to other roles.
+Indicates an obsolete spelling of the ROLE clause.
+The SYSID clause is ignored.
+The DEFAULT TABLESPACE clause is ignored.
+The PROFILE clause is ignored.
+This attribute is used to be compatible with open-source Postgres communication. An open-source Postgres client interface (Postgres 9.2.19 is recommended) can use a database user having this attribute to connect to the database.
+This attribute only ensures compatibility with the connection process. Incompatibility caused by kernel differences between this product and Postgres cannot be solved using this attribute.
+Users having the PGUSER attribute are authenticated in a way different from other users. Error information reported by the open-source client may cause the attribute to be enumerated. Therefore, you are advised to use a client of this product. Example:
# normaluser is a user that does not have the PGUSER attribute. psql is the Postgres client tool.
pg@MPPDB04:~> psql -d postgres -p 8000 -h 10.11.12.13 -U normaluser
psql: authentication method 10 not supported

# pguser is a user having the PGUSER attribute.
pg@MPPDB04:~> psql -d postgres -p 8000 -h 10.11.12.13 -U pguser
Password for user pguser:
This attribute is used to specify the role authentication type. authinfo is the description character string, which is case sensitive. Only the LDAP type is supported. Its description character string is ldap. LDAP authentication is an external authentication mode. Therefore, PASSWORD DISABLE must be specified.
+Number of days before the login password of the role expires. The user needs to change the password in time before the login password expires. If the login password expires, the user cannot log in to the system. In this case, the user needs to ask the administrator to set a new login password.
+Value range: an integer greater than or equal to -1. The default value is -1, indicating that the password does not expire. The value 0 indicates that the password expires immediately.
Create a role named manager.
CREATE ROLE manager IDENTIFIED BY '{password}';
Create a role with a validity from January 1, 2015 to January 1, 2026.
CREATE ROLE miriam WITH LOGIN PASSWORD '{password}' VALID BEGIN '2015-01-01' VALID UNTIL '2026-01-01';
Create a role. The authentication type is LDAP. Other LDAP authentication information is provided by pg_hba.conf.
CREATE ROLE role1 WITH LOGIN AUTHINFO 'ldap' PASSWORD DISABLE;
Create a role. The authentication type is LDAP. The fulluser information for LDAP authentication is specified during role creation. The LDAP description string is case sensitive and must be enclosed in single quotation marks.
CREATE ROLE role2 WITH LOGIN AUTHINFO 'ldapcn=role2,cn=user,dc=lework,dc=com' PASSWORD DISABLE;
Create a role and set the validity period of the login password to 30 days.
CREATE ROLE role3 WITH LOGIN PASSWORD '{password}' PASSWORD EXPIRATION 30;
CREATE SCHEMA creates a schema.
+Named objects are accessed either by "qualifying" their names with the schema name as a prefix, or by setting a search path that includes the desired schema(s). When creating named objects, you can also use the schema name as a prefix.
Optionally, CREATE SCHEMA can include sub-commands to create objects within the new schema. The sub-commands are treated essentially the same as separate commands issued after creating the schema. If the AUTHORIZATION clause is used, all the created objects are owned by this user.
CREATE SCHEMA schema_name
    [ AUTHORIZATION user_name ] [ WITH PERM SPACE 'space_limit' ] [ schema_element [ ... ] ];

CREATE SCHEMA AUTHORIZATION user_name [ WITH PERM SPACE 'space_limit' ] [ schema_element [ ... ] ];
Indicates the name of the schema to be created.
The name must be unique and cannot start with pg_.
+Value range: a string. It must comply with the naming convention rule.
+Indicates the name of the user who will own this schema. If schema_name is not specified, user_name will be used as the schema name. In this case, user_name can only be a role name.
+Value range: An existing user name/role.
+Indicates the storage upper limit of the permanent table in the specified schema. If space_limit is not specified, the space is not limited.
Value range: a string consisting of an integer and a unit. The unit can currently be K, M, G, T, or P. The parsed value is in KB and cannot exceed the range that can be expressed in 64 bits, that is, 1 KB to 9007199254740991 KB.
+Indicates an SQL statement defining an object to be created within the schema. Currently, only CREATE TABLE, CREATE VIEW, CREATE INDEX, CREATE PARTITION, and GRANT are accepted as clauses within CREATE SCHEMA.
+Objects created by sub-commands are owned by the user specified by AUTHORIZATION.
If objects with the same name exist in schemas on the current search path, specify the schema that each object belongs to. You can run the SHOW SEARCH_PATH command to check the schemas on the current search path.
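For example, using the role1 schema and films table from the example below (the output shown is typical, not verbatim):

SHOW SEARCH_PATH;
--  search_path
-- ----------------
--  "$user",public

-- Qualify the object explicitly, or put the desired schema first:
SELECT * FROM role1.films;
SET search_path TO role1, public;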
+Create a schema named role1 for the role1 role. The owner of the films and winners tables created by the clause is role1.
CREATE SCHEMA AUTHORIZATION role1
    CREATE TABLE films (title text, release date, awards text[])
    CREATE VIEW winners AS
        SELECT title, release FROM films WHERE awards IS NOT NULL;
CREATE SEQUENCE adds a sequence to the current database. The owner of a sequence is the user who creates the sequence.
CREATE SEQUENCE name [ INCREMENT [ BY ] increment ]
    [ MINVALUE minvalue | NO MINVALUE | NOMINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE | NOMAXVALUE ]
    [ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE | NOCYCLE ]
    [ OWNED BY { table_name.column_name | NONE } ];
Specifies the name of a sequence to be created.
+Value range: The value can contain only lowercase letters, uppercase letters, special characters #_$, and digits.
Specifies the step for a sequence. A positive value generates an ascending sequence; a negative value generates a descending sequence.
+The default value is 1.
Specifies the minimum value of a sequence. If MINVALUE is not declared, or NO MINVALUE is declared, the default value of the ascending sequence is 1, and that of the descending sequence is -2^63-1. NOMINVALUE is equivalent to NO MINVALUE.

Specifies the maximum value of a sequence. If MAXVALUE is not declared or NO MAXVALUE is declared, the default value of the ascending sequence is 2^63-1, and that of the descending sequence is -1. NOMAXVALUE is equivalent to NO MAXVALUE.
+Specifies the start value of the sequence. The default value for ascending sequences is minvalue and for descending sequences maxvalue.
+Specifies the number of sequence numbers stored in the memory for quick access. Within a cache period, the CN does not request a sequence number from the GTM. Instead, the CN uses the sequence number that is locally applied for in advance.
+Default value 1 indicates that one value can be generated each time.
Used to ensure that the sequence wraps around after reaching maxvalue or minvalue.
+If you declare NO CYCLE, any invocation of nextval would return an error after the sequence reaches its maximum value.
+NOCYCLE is equivalent to NO CYCLE.
+The default value is NO CYCLE.
+If the sequence is defined as CYCLE, the sequence uniqueness cannot be ensured.
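A minimal sketch of the wrap-around (hypothetical sequence name): with CYCLE, nextval restarts at the minimum value after reaching maxvalue, so generated values are no longer unique:

CREATE SEQUENCE seq_cycle MAXVALUE 3 CYCLE;
SELECT nextval('seq_cycle');  -- 1
SELECT nextval('seq_cycle');  -- 2
SELECT nextval('seq_cycle');  -- 3
SELECT nextval('seq_cycle');  -- 1 again: values repeat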
+Associates a sequence with a specified column included in a table. In this way, the sequence will be deleted when you delete its associated field or the table where the field belongs. The associated table and sequence must be owned by the same user and in the same schema. OWNED BY only establishes the association between a table column and the sequence. The sequence is not created for this column.
The default value is OWNED BY NONE, indicating that no such association exists.
+You are not advised to use the sequence created using OWNED BY in other tables. If multiple tables need to share a sequence, the sequence must not belong to a specific table.
+Create an ascending sequence named serial, which starts from 101:
CREATE SEQUENCE serial
    START 101
    CACHE 20;
Select the next number from the sequence:
SELECT nextval('serial');
 nextval
---------
     101
Select the next number from the sequence:
SELECT nextval('serial');
 nextval
---------
     102
Create a sequence associated with the table:
CREATE TABLE customer_address
(
    ca_address_sk    integer not null,
    ca_address_id    char(16) not null,
    ca_street_number char(10),
    ca_street_name   varchar(60),
    ca_street_type   char(15),
    ca_suite_number  char(10),
    ca_city          varchar(60),
    ca_county        varchar(30),
    ca_state         char(2),
    ca_zip           char(10),
    ca_country       varchar(20),
    ca_gmt_offset    decimal(5,2),
    ca_location_type char(20)
);

CREATE SEQUENCE serial1
    START 101
    CACHE 20
OWNED BY customer_address.ca_address_sk;
Use SERIAL to create a serial table serial_table for primary key auto-increment.
CREATE TABLE serial_table(a int, b serial);
INSERT INTO serial_table (a) VALUES (1),(2),(3);
SELECT * FROM serial_table ORDER BY b;
 a | b
---+---
 1 | 1
 2 | 2
 3 | 3
(3 rows)
CREATE SERVER creates an external server.
+An external server stores information of HDFS clusters, OBS servers, DLI connections, or other homogeneous clusters.
+By default, only the system administrator can create a foreign server. Otherwise, creating a server requires permissions on the foreign data wrapper being used. Use the following syntax to grant permissions:
GRANT USAGE ON FOREIGN DATA WRAPPER fdw_name TO username;
fdw_name is the name of the foreign data wrapper, and username is the name of the user creating the SERVER.
CREATE SERVER server_name
    FOREIGN DATA WRAPPER fdw_name
    OPTIONS ( { option_name 'value' } [, ...] );
Name of the foreign server to be created. The server name must be unique in a database.
+Value range: The length must be less than or equal to 63.
+Specifies the name of the foreign data wrapper.
+Value range: fdw_name indicates the data wrapper created by the system in the initial phase of the database. Currently, fdw_name can be hdfs_fdw or dfs_fdw for the HDFS cluster, and can be gc_fdw for other homogeneous clusters.
+Specifies the parameters for the server. The detailed parameter description is as follows:
+Specifies the IP address of the OBS service endpoint or HDFS cluster.
+OBS: address is the endpoint of OBS.
+HDFS: Specifies the IP address and port number of a NameNode (metadata node) in the HDFS cluster, or the IP address and port number of a CN in other homogeneous clusters.
+HDFS NameNodes are deployed in primary/secondary mode for HA. Add the addresses of the primary and secondary NameNodes to address. When accessing HDFS, GaussDB(DWS) dynamically searches for the active NameNode.
The address option must be specified.
+If the server type is DLI, the address is the OBS address stored on DLI.
+This parameter is available only when type is HDFS.
+You can set the hdfscfgpath parameter to specify the HDFS configuration file path. GaussDB(DWS) accesses the HDFS cluster based on the connection configuration mode and security mode specified in the HDFS configuration file stored in that path. If the HDFS cluster is connected in non-secure mode, data transmission encryption is not supported.
+If the address option is not specified, the address specified by hdfscfgpath in the configuration file is used by default.
+Specifies whether data is encrypted. This parameter is available only when type is OBS. The default value is off.
+Value range:
+Specifies the access key (AK) (obtained by users from the OBS console) used for the OBS access protocol. When you create a foreign table, its AK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is OBS.
+Specifies the secret access key (SK) (obtained by users from the OBS console) used for the OBS access protocol. When you create a foreign table, its SK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is OBS.
+Specifies the dfs_fdw connection type.
+Value range:
+Specifies the endpoint of the DLI service. This parameter is available only when type is DLI.
+Specifies the access key (AK) (obtained by users from the DLI console) used for the DLI access protocol. When you create a foreign table, its AK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is DLI.
+Specifies the secret access key (SK) (obtained by users from the DLI console) used for the DLI access protocol. When you create a foreign table, its SK value is encrypted and saved to the metadata table of the database. This parameter is available only when type is DLI.
+Specifies the database name of a remote cluster to be connected. This parameter is used for collaborative analysis.
+Specifies the username of a remote cluster to be connected. This parameter is used for collaborative analysis.
+Specifies the user password of a remote cluster to be connected. This parameter is used for collaborative analysis.
+When an on-premises cluster is migrated to the cloud, the password in the server configuration exported from the on-premises cluster is in ciphertext. The encryption and decryption keys of the on-premises cluster are different from those of the cloud cluster. Therefore, if CREATE SERVER is executed on the cloud cluster, the execution fails and a decryption failure error is reported. In this case, you need to manually change the password in CREATE SERVER to a plaintext password.
+Create the hdfs_server server, in which hdfs_fdw is the built-in foreign data wrapper.
CREATE SERVER hdfs_server FOREIGN DATA WRAPPER HDFS_FDW OPTIONS (
    address '10.10.0.100:25000,10.10.0.101:25000',
    hdfscfgpath '/opt/hadoop_client/HDFS/hadoop/etc/hadoop',
    type 'HDFS'
);
Create the obs_server server, in which dfs_fdw is the built-in foreign data wrapper.
CREATE SERVER obs_server FOREIGN DATA WRAPPER DFS_FDW OPTIONS (
    address 'obs.xxx.xxx.com',
    access_key 'xxxxxxxxx',
    secret_access_key 'yyyyyyyyyyyyy',
    type 'obs'
);
Create the dli_server server, in which dfs_fdw is the built-in foreign data wrapper.
CREATE SERVER dli_server FOREIGN DATA WRAPPER DFS_FDW OPTIONS (
    address 'obs.xxx.xxx.com',
    access_key 'xxxxxxxxx',
    secret_access_key 'yyyyyyyyyyyyy',
    type 'dli',
    dli_address 'dli.xxx.xxx.com',
    dli_access_key 'xxxxxxxxx',
    dli_secret_access_key 'yyyyyyyyyyyyy'
);
Create another server in the homogeneous cluster, where gc_fdw is the foreign data wrapper in the database.
CREATE SERVER server_remote FOREIGN DATA WRAPPER GC_FDW OPTIONS (
    address '10.10.0.100:25000,10.10.0.101:25000',
    dbname 'test',
    username 'test',
    password 'xxxxxxxx'
);
CREATE SYNONYM is used to create a synonym object. A synonym is an alias of a database object and is used to record the mapping between database object names. You can use synonyms to access associated database objects.
CREATE [ OR REPLACE ] SYNONYM synonym_name
    FOR object_name;
Name of a synonym which is created (optionally with schema names)
+Value range: a string. It must comply with the identifier naming rules.
+Name of an object that is associated (optionally with schema names)
+Value range: a string. It must comply with the identifier naming rules.
+object_name can be the name of an object that does not exist.
+Create schema ot.
CREATE SCHEMA ot;
Create table ot.t1 and its synonym t1.
CREATE TABLE ot.t1(id int, name varchar2(10)) DISTRIBUTE BY hash(id);
CREATE OR REPLACE SYNONYM t1 FOR ot.t1;
Use synonym t1.
SELECT * FROM t1;
INSERT INTO t1 VALUES (1, 'ada'), (2, 'bob');
UPDATE t1 SET t1.name = 'cici' WHERE t1.id = 2;
Create synonym v1 and its associated view ot.v_t1.
CREATE SYNONYM v1 FOR ot.v_t1;
CREATE VIEW ot.v_t1 AS SELECT * FROM ot.t1;
Use synonym v1.
SELECT * FROM v1;
Create overloaded function ot.add and its synonym add.
CREATE OR REPLACE FUNCTION ot.add(a integer, b integer) RETURNS integer AS
$$
SELECT $1 + $2
$$
LANGUAGE sql;

CREATE OR REPLACE FUNCTION ot.add(a decimal(5,2), b decimal(5,2)) RETURNS decimal(5,2) AS
$$
SELECT $1 + $2
$$
LANGUAGE sql;

CREATE OR REPLACE SYNONYM add FOR ot.add;
Use synonym add.
SELECT add(1,2);
SELECT add(1.2,2.3);
Create stored procedure ot.register and its synonym register.
CREATE PROCEDURE ot.register(n_id integer, n_name varchar2(10))
SECURITY INVOKER
AS
BEGIN
    INSERT INTO ot.t1 VALUES(n_id, n_name);
END;
/

CREATE OR REPLACE SYNONYM register FOR ot.register;
Use synonym register to invoke the stored procedure.
+CALL register(3,'mia');
CREATE TABLE creates a table in the current database. The table will be owned by the user who created it.
+CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name
+    ({ column_name data_type [ compress_mode ] [ COLLATE collation ] [ column_constraint [ ... ] ]
+    | table_constraint
+    | LIKE source_table [ like_option [...] ] }
+    [, ... ])
+    [ WITH ( {storage_parameter = value} [, ... ] ) ]
+    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
+    [ COMPRESS | NOCOMPRESS ]
+    [ DISTRIBUTE BY { REPLICATION | { HASH ( column_name [,...] ) } } ]
+    [ TO { GROUP groupname | NODE ( nodename [, ... ] ) } ];
+where column_constraint can be:
+[ CONSTRAINT constraint_name ]
+{ NOT NULL |
+  NULL |
+  CHECK ( expression ) |
+  DEFAULT default_expr |
+  UNIQUE index_parameters |
+  PRIMARY KEY index_parameters }
+[ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+where compress_mode can be:
+{ DELTA | PREFIX | DICTIONARY | NUMSTR | NOCOMPRESS }
+where table_constraint can be:
+[ CONSTRAINT constraint_name ]
+{ CHECK ( expression ) |
+  UNIQUE ( column_name [, ... ] ) index_parameters |
+  PRIMARY KEY ( column_name [, ... ] ) index_parameters |
+  PARTIAL CLUSTER KEY ( column_name [, ... ] ) }
+[ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+where like_option can be:
+{ INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE | COMMENTS | PARTITION | RELOPTIONS | DISTRIBUTION | DROPCOLUMNS | ALL }
+where index_parameters can be:
+[ WITH ( {storage_parameter = value} [, ... ] ) ]
+If this keyword is specified, the created table is an unlogged table. Data written to unlogged tables is not written to the write-ahead log, which makes them considerably faster than ordinary tables. However, an unlogged table is automatically truncated after a crash or unclean shutdown, incurring data loss risks. The contents of an unlogged table are also not replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well.
+Usage scenario: Unlogged tables cannot guarantee data safety. Back up the data before using unlogged tables; for example, back up the data before a system upgrade.
+Troubleshooting: If data is missing from the indexes of an unlogged table due to an unexpected operation such as an unclean shutdown, re-create the affected indexes.
+When creating a temporary table, you can specify the GLOBAL or LOCAL keyword before TEMP or TEMPORARY. Currently, the two keywords are used to be compatible with the SQL standard. GaussDB(DWS) will create a local temporary table regardless of whether GLOBAL or LOCAL is specified.
+If TEMP or TEMPORARY is specified, the created table is a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction. A temporary table is visible only to the session that created it, so you can still create and use temporary tables in the current session even if other CNs connected to the cluster report errors. Because temporary tables exist only in the current session, DDL statements that involve temporary tables may generate DDL errors; you are therefore advised not to operate on temporary tables in DDL statements. TEMP is equivalent to TEMPORARY.
+If IF NOT EXISTS is specified, a table will be created if there is no table using the specified name. If there is already a table using the specified name, no error will be reported. A message will be displayed indicating that the table already exists, and the database will skip table creation.
+Specifies the name of the table to be created.
+The table name can contain a maximum of 63 characters, including letters, digits, underscores (_), dollar signs ($), and number signs (#). It must start with a letter or underscore (_).
+Specifies the name of a column to be created in the new table.
+The column name can contain a maximum of 63 characters, including letters, digits, underscores (_), dollar signs ($), and number signs (#). It must start with a letter or underscore (_).
+Specifies the data type of the column.
+Specifies the compression option of the table; it is available only for row-store tables. The option specifies the algorithm preferentially used by table columns.
+Value range: DELTA, PREFIX, DICTIONARY, NUMSTR, NOCOMPRESS
+Assigns a collation to the column (which must be of a collatable data type). If no collation is specified, the default collation is used.
+Specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.
+The new table and the source table are decoupled after creation is complete. Changes to the source table will not be applied to the new table, and it is not possible to include data of the new table in scans of the source table.
+Columns and constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is reported.
+Specifies an optional storage parameter for a table or an index.
+When defining a column as NUMERIC with arbitrary precision, specify the precision p and scale s. If precision and scale are not specified, the input value is stored and displayed as is.
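+For example, a minimal sketch (table and column names are illustrative) showing the difference:
+CREATE TABLE t_num (a NUMERIC(5,2), b NUMERIC);
+INSERT INTO t_num VALUES (123.456, 123.456);
+-- a is rounded to the declared scale (123.46); b stores the input as entered (123.456).
+SELECT * FROM t_num;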
+The description of parameters is as follows:
+The fillfactor of a table is a percentage between 10 and 100. 100 (complete packing) is the default value. When a smaller fillfactor is specified, INSERT operations pack table pages only to the indicated percentage. The remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page, which is more efficient than placing it on a different page. For a table whose records are never updated, setting the fillfactor to 100 (complete packing) is the appropriate choice, but in heavily updated tables smaller fillfactors are appropriate. The parameter has no meaning for column-based tables.
+Value range: 10–100
+Specifies the storage mode (row-store, column-store) for table data. This parameter cannot be modified once it is set.
+Valid value:
+ROW applies to OLTP service, which has many interactive transactions. An interaction involves many columns in the table. Using ROW can improve the efficiency.
+COLUMN applies to the data warehouse service, which has a large amount of aggregation computing, and involves a few column operations.
+Default value:
+If an ordinary tablespace is specified, the default is ROW.
+Specifies the compression level of the table data, which determines the compression ratio and compression time. Generally, a higher compression level produces a higher compression ratio but takes longer, and a lower level produces a lower ratio in less time. The actual compression ratio depends on the distribution characteristics of the data loaded into the table.
+Valid value: YES/NO and LOW/MIDDLE/HIGH for column-store tables, with the default LOW; YES and NO for row-store tables, with the default NO.
+GaussDB(DWS) provides the following compression algorithms:
+| COMPRESSION | NUMERIC | STRING | INT |
+|---|---|---|---|
+| LOW | Delta compression + RLE compression | LZ4 compression | Delta compression (RLE is optional.) |
+| MIDDLE | Delta compression + RLE compression + LZ4 compression | dict compression or LZ4 compression | Delta compression or LZ4 compression (RLE is optional.) |
+| HIGH | Delta compression + RLE compression + zlib compression | dict compression or zlib compression | Delta compression or zlib compression (RLE is optional.) |
+Specifies sublevels of table data compression within a compression level, giving you more choices for the trade-off between compression ratio and duration. Within the same compression level, a greater value means a higher compression ratio and a longer compression time. This parameter is valid only for column-store tables.
+Value range: 0 to 3. The default value is 0.
+Specifies the maximum number of rows in a storage unit during the data loading process (MAX_BATCHROW). This parameter is valid only for column-store tables.
+Value range: 10000 to 60000
+Default value: 60000
+When data is imported to a column-store table, the following error may be reported: cu.cpp: 249: The parameter destMax is equal to zero or larger than the macro: SECUREC_STRING_MAX_LEN.
+If the error persists after the statement or sorting is adjusted, change the maximum number of records in a storage unit from 60,000 to 30,000 by setting MAX_BATCHROW.
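+For example, a minimal sketch (table name illustrative) that creates the column-store table with a smaller storage unit:
+CREATE TABLE tpcds.warehouse_cu (W_WAREHOUSE_SK INTEGER, W_WAREHOUSE_ID CHAR(16))
+WITH (ORIENTATION = COLUMN, MAX_BATCHROW = 30000);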
+Specifies the number of records stored in each partial cluster during the data loading process. This parameter is valid only for column-store tables.
+Value range: 600000 to 2147483647
+Specifies whether to enable delta tables in column-store tables. The parameter is only valid for column-store tables.
+Default value: off
+Specifies the row-count threshold below which data imported to a column-store table is routed to its delta table. This parameter takes effect only if enable_delta is set to on, and it is valid only for column-store tables.
+The value ranges from 0 to 60000. The default value is 6000.
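+A minimal sketch (table name illustrative; the storage parameter name DELTAROW_THRESHOLD is assumed to be the parameter described above) that enables the delta table and routes imports of fewer than 20000 rows to it:
+CREATE TABLE tpcds.warehouse_dt (W_WAREHOUSE_SK INTEGER, W_WAREHOUSE_ID CHAR(16))
+WITH (ORIENTATION = COLUMN, ENABLE_DELTA = ON, DELTAROW_THRESHOLD = 20000);  -- parameter name assumed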
+Specifies the version of the column-store format. You can switch between different storage formats.
+Valid value:
+1.0: Each column in a column-store table is stored in a separate file. The file name is relfilenode.C1.0, relfilenode.C2.0, relfilenode.C3.0, or similar.
+2.0: All columns of a column-store table are combined and stored in a file. The file is named relfilenode.C1.0.
+Default value: 2.0
+The value of COLVERSION can only be set to 2.0 for OBS hot and cold tables.
+Specifies the OBS tablespace for the cold partitions in a hot or cold table. This parameter is available only to partitioned column-store tables and cannot be modified. It must be used together with storage_policy.
+Valid value: a valid OBS tablespace name
+Specifies the hot and cold partition switching policy. This parameter is supported only by hot and cold tables. This parameter must be used together with cold_tablespace.
+Value range: a string in the format hot/cold switchover policy name:switchover threshold. Currently, only the LMT and HPN policies are supported. LMT indicates that the switchover is performed based on the last update time of partitions; HPN indicates that it is performed based on a fixed number of reserved hot partitions.
+The hybrid data warehouse (standalone) does not support cold and hot partition switchover.
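+For example, a sketch of an HPN policy that keeps the three most recent partitions hot, assuming an OBS tablespace named obs_location already exists (its creation is shown in the hot and cold table example later in this section):
+CREATE TABLE t_hpn (id INT, sales_date DATE)
+WITH (ORIENTATION = COLUMN, cold_tablespace = "obs_location", storage_policy = 'HPN:3')
+DISTRIBUTE BY HASH (id)
+PARTITION BY RANGE (sales_date)
+(
+    PARTITION p1 VALUES LESS THAN('2022-01-01'),
+    PARTITION p2 VALUES LESS THAN(MAXVALUE)
+) ENABLE ROW MOVEMENT;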
+Indicates whether to skip the hint bits operation when the full-page writes (FPW) log needs to be written during sequential scanning.
+If SKIP_FPI_HINT is set to true and the checkpoint operation is performed on a table, no Xlog will be generated when the table is sequentially scanned. This applies to intermediate tables that are queried less frequently, reducing the size of Xlogs and improving query performance.
+ON COMMIT determines the action taken on a temporary table at transaction commit. The three options are PRESERVE ROWS, DELETE ROWS, and DROP; currently, only PRESERVE ROWS and DELETE ROWS can be used.
+If you specify COMPRESS in the CREATE TABLE statement, the compression feature is triggered in the case of a bulk INSERT operation. If this feature is enabled, a scan is performed for all tuple data within the page to generate a dictionary and then the tuple data is compressed and stored. If NOCOMPRESS is specified, the table is not compressed.
+Default value: NOCOMPRESS, tuple data is not compressed before storage.
+Specifies how the table is distributed or replicated between DNs.
+Valid value:
+REPLICATION: each row of the table is stored on every DN, so every DN holds a complete copy of the table.
+HASH(column_name): each row is distributed to a DN based on the hash value of the specified column.
+Default value: HASH(column_name), where column_name is the primary key column of the table (if any) or the first column whose data type supports distribution.
+column_name supports the following data types:
+When you create a table, the choice of distribution key and partition key has a major impact on SQL query performance. Therefore, choose a proper distribution column and partition key using the following strategies.
+Connect to the database and run the following statements to check the number of tuples on each DN: Replace tablename with the actual name of the table to be analyzed.
+SELECT a.count,b.node_name FROM (SELECT count(*) AS count,xc_node_id FROM tablename GROUP BY xc_node_id) a, pgxc_node b WHERE a.xc_node_id=b.node_id ORDER BY a.count DESC;
If tuple numbers vary greatly (several times or tenfold) in each DN, a data skew occurs. Change the data distribution key based on the following principles:
+The values of the distribution column should be discrete so that data can be evenly distributed across the DNs. For example, select the primary key of a table as the distribution column; in a personnel information table, select the ID card number column.
+With the above principles met, you can select join conditions as distribution keys so that join tasks can be pushed down to DNs, reducing the amount of data transferred between the DNs.
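+For example, a sketch (table names illustrative): if two tables are frequently joined on customer_id, distributing both tables by that column lets each DN join its local data without shuffling rows between DNs.
+CREATE TABLE customers (customer_id INT, name VARCHAR(60)) DISTRIBUTE BY HASH(customer_id);
+CREATE TABLE orders (order_id INT, customer_id INT) DISTRIBUTE BY HASH(customer_id);
+-- The join condition matches the distribution key of both tables, so the join can be pushed down to the DNs.
+SELECT o.order_id, c.name FROM orders o JOIN customers c ON o.customer_id = c.customer_id;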
+In range partitioning, the table is partitioned into ranges defined by a key column or set of columns, with no overlap between the ranges of values assigned to different partitions. Each range has a dedicated partition for data storage.
+Choose a partition key that stores query results in the same partition or in as few partitions as possible (partition pruning), so that consecutive I/O improves query performance.
+In real-world services, time is often used to filter query objects. Therefore, you can use the time column as the partition key, choosing its ranges based on the total data volume and the volume of a single query.
+TO GROUP specifies the Node Group in which the table is created. Currently, it cannot be used for HDFS tables. TO NODE is used for internal scale-out tools.
+Specifies a name for a column or table constraint. The optional constraint clauses specify constraints that new or updated rows must satisfy for an insert or update operation to succeed.
+There are two ways to define constraints:
+Indicates that the column is not allowed to contain NULL values.
+The column is allowed to contain NULL values. This is the default setting.
+This clause is only provided for compatibility with non-standard SQL databases. You are advised not to use this clause.
+Specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. If any row of an insert or update operation produces a FALSE result, an error exception is raised and the insert or update does not alter the database.
+A check constraint specified as a column constraint should reference only the column's values, while an expression appearing in a table constraint can reference multiple columns.
+<>NULL and !=NULL are invalid in an expression. Change them to IS NOT NULL.
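+For example (illustrative table name):
+-- CHECK (name <> NULL) always evaluates to UNKNOWN, so it never rejects any row; use IS NOT NULL instead.
+CREATE TABLE t_check (id INT, name VARCHAR(10) CHECK (name IS NOT NULL));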
+Assigns a default data value for a column. The value can be any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The data type of the default expression must match the data type of the column.
+The default expression will be used in any insert operation that does not specify a value for the column. If there is no default value for a column, then the default value is NULL.
+UNIQUE ( column_name [, ... ] ) index_parameters
+Specifies that a group of one or more columns of a table can contain only unique values.
+For the purpose of a unique constraint, null values are not considered equal.
+If DISTRIBUTE BY REPLICATION is not specified, the column set with a unique constraint must contain the distribution columns.
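+For example, a sketch (table name illustrative) showing that two NULL values do not violate a unique constraint:
+CREATE TABLE t_uni (id INT, code CHAR(4) UNIQUE) DISTRIBUTE BY REPLICATION;
+INSERT INTO t_uni VALUES (1, NULL);
+INSERT INTO t_uni VALUES (2, NULL);  -- Succeeds: NULL is not considered equal to NULL.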
+PRIMARY KEY ( column_name [, ... ] ) index_parameters
+Specifies that a column or combination of columns of a table can contain only unique (non-duplicate) and non-null values.
+Only one primary key can be specified for a table.
+If DISTRIBUTE BY REPLICATION is not specified, the column set with a primary key constraint must contain the distribution columns.
+Controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction using the SET CONSTRAINTS command. NOT DEFERRABLE is the default value. Currently, only UNIQUE and PRIMARY KEY constraints of row-store tables accept this clause. All the other constraints are not deferrable.
+Specifies a partial cluster key for storage. When importing data to a column-store table, you can perform local data sorting by specified columns (single or multiple).
+If a constraint is deferrable, this clause specifies the default time to check the constraint.
+The constraint check time can be altered using the SET CONSTRAINTS command.
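+A minimal sketch (table name illustrative) of deferring a uniqueness check to commit time with SET CONSTRAINTS:
+CREATE TABLE t_defer (id INT UNIQUE DEFERRABLE);
+INSERT INTO t_defer VALUES (1), (2);
+BEGIN;
+SET CONSTRAINTS ALL DEFERRED;
+UPDATE t_defer SET id = id + 1;  -- Transient duplicates are tolerated during the statement.
+COMMIT;                          -- The unique constraint is checked here.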
+The new table films_bk automatically inherits all column names, data types, and non-null constraints from the source table films.
+CREATE TABLE films (
+    code char(5) PRIMARY KEY,
+    title varchar(40) NOT NULL,
+    did integer NOT NULL,
+    date_prod date,
+    kind varchar(10),
+    len interval hour to minute
+);
+CREATE TABLE films_bk (LIKE films);
+Set the default value of the W_STATE column to 'GA', and check for duplicate values in the W_WAREHOUSE_NAME column at the end of the transaction.
+CREATE TABLE tpcds.warehouse_t2
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20) UNIQUE DEFERRABLE,
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2) DEFAULT 'GA',
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+);
Set the fill factor to 70%.
+CREATE TABLE tpcds.warehouse_t3
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    UNIQUE(W_WAREHOUSE_NAME) WITH(fillfactor=70)
+);
Alternatively, use the following syntax to create a table with its fillfactor set to 70%:
+CREATE TABLE tpcds.warehouse_t4
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20) UNIQUE,
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH(fillfactor=70);
Use UNLOGGED to specify that table data is not written to write-ahead logs (WALs).
+CREATE UNLOGGED TABLE tpcds.warehouse_t5
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+);
If IF NOT EXISTS is specified, a table will be created if there is no table using the specified name. If there is already a table using the specified name, no error will be reported. A message will be displayed indicating that the table already exists, and the database will skip table creation.
+CREATE TABLE IF NOT EXISTS tpcds.warehouse_t6
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+);
Use PRIMARY KEY to declare the primary key.
+CREATE TABLE tpcds.warehouse_t7
+(
+    W_WAREHOUSE_SK INTEGER PRIMARY KEY,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+);
Alternatively, use the following syntax to create a table with a primary key constraint:
+CREATE TABLE tpcds.warehouse_t8
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    PRIMARY KEY(W_WAREHOUSE_SK)
+);
Or use the following statement to specify the name of the constraint:
+CREATE TABLE tpcds.warehouse_t9
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    CONSTRAINT W_CSTR_KEY1 PRIMARY KEY(W_WAREHOUSE_SK)
+);
+Use PRIMARY KEY to declare a composite primary key across two columns.
+CREATE TABLE tpcds.warehouse_t10
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    CONSTRAINT W_CSTR_KEY2 PRIMARY KEY(W_WAREHOUSE_SK, W_WAREHOUSE_ID)
+);
Use ORIENTATION to specify the storage mode of table data.
+CREATE TABLE tpcds.warehouse_t11
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (ORIENTATION = COLUMN);
When data is imported to a column-store table, perform partial sorting based on the one or more columns specified by PARTIAL CLUSTER KEY.
+CREATE TABLE tpcds.warehouse_t12
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    PARTIAL CLUSTER KEY(W_WAREHOUSE_SK, W_WAREHOUSE_ID)
+) WITH (ORIENTATION = COLUMN);
+Use the WITH clause to declare the compression level.
+CREATE TABLE tpcds.warehouse_t17
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (ORIENTATION = COLUMN, COMPRESSION=HIGH);
When creating a table, specify the keyword COMPRESS.
+CREATE TABLE tpcds.warehouse_t13
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) COMPRESS;
Use CONSTRAINT to declare a constraint.
+CREATE TABLE tpcds.warehouse_t19
+(
+    W_WAREHOUSE_SK INTEGER PRIMARY KEY CHECK (W_WAREHOUSE_SK > 0),
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20) CHECK (W_WAREHOUSE_NAME IS NOT NULL),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+);
+CREATE TABLE tpcds.warehouse_t20
+(
+    W_WAREHOUSE_SK INTEGER PRIMARY KEY,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20) CHECK (W_WAREHOUSE_NAME IS NOT NULL),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    CONSTRAINT W_CONSTR_KEY2 CHECK(W_WAREHOUSE_SK > 0 AND W_WAREHOUSE_NAME IS NOT NULL)
+);
Specify the TEMP or TEMPORARY keyword to create a temporary table.
+CREATE TEMPORARY TABLE warehouse_t14
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+);
Create a temporary table in a transaction and specify that data of this table is deleted when the transaction is committed.
+CREATE TEMPORARY TABLE warehouse_t15
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) ON COMMIT DELETE ROWS;
Set ORIENTATION to ROW.
+CREATE TABLE tpcds.warehouse_t16
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (ORIENTATION = ROW);
Set COLVERSION to specify the version of the column storage format.
+CREATE TABLE tpcds.warehouse_t18
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (ORIENTATION = COLUMN, COLVERSION=2.0);
Set enable_delta=on to enable the delta table in column-store tables.
+CREATE TABLE tpcds.warehouse_t21
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (ORIENTATION = COLUMN, ENABLE_DELTA = ON);
+Use the WITH clause to set SKIP_FPI_HINT.
+CREATE TABLE tpcds.warehouse_t22
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (SKIP_FPI_HINT = TRUE);
Create an OBS tablespace that hot and cold tables depend on.
+CREATE TABLESPACE obs_location WITH(
+    filesystem = obs,
+    address = 'obs URL',
+    access_key = 'xxxxxxxx',
+    secret_access_key = 'xxxxxxxx',
+    encrypt = 'on',
+    storepath = '/obs_bucket/obs_tablespace'
+);
Create a hot or cold table. Only column-store partitioned tables are supported.
+CREATE TABLE tpcds.warehouse_t23
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+)
+WITH (ORIENTATION = COLUMN, cold_tablespace = "obs_location", storage_policy = 'LMT:30')
+DISTRIBUTE BY HASH (W_WAREHOUSE_SK)
+PARTITION BY RANGE(W_WAREHOUSE_SQ_FT)
+(
+    PARTITION P1 VALUES LESS THAN(100000),
+    PARTITION P2 VALUES LESS THAN(200000),
+    PARTITION P3 VALUES LESS THAN(300000),
+    PARTITION P4 VALUES LESS THAN(400000),
+    PARTITION P5 VALUES LESS THAN(500000),
+    PARTITION P6 VALUES LESS THAN(600000),
+    PARTITION P7 VALUES LESS THAN(700000),
+    PARTITION P8 VALUES LESS THAN(MAXVALUE)
+) ENABLE ROW MOVEMENT;
+Create a row-store table containing an auto-increment SMALLSERIAL column.
+CREATE TABLE tpcds.warehouse_t24
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_UUID SMALLSERIAL,
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) WITH (ORIENTATION = ROW);
Use DISTRIBUTE BY to specify table distribution across nodes.
+CREATE TABLE tpcds.warehouse_t25
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2),
+    CONSTRAINT W_CONSTR_KEY3 UNIQUE(W_WAREHOUSE_SK)
+) DISTRIBUTE BY HASH(W_WAREHOUSE_SK);
+CREATE TABLE tpcds.warehouse_t26
+(
+    W_WAREHOUSE_SK INTEGER NOT NULL,
+    W_WAREHOUSE_ID CHAR(16) NOT NULL,
+    W_WAREHOUSE_NAME VARCHAR(20),
+    W_WAREHOUSE_SQ_FT INTEGER,
+    W_STREET_NUMBER CHAR(10),
+    W_STREET_NAME VARCHAR(60),
+    W_STREET_TYPE CHAR(15),
+    W_SUITE_NUMBER CHAR(10),
+    W_CITY VARCHAR(60),
+    W_COUNTY VARCHAR(30),
+    W_STATE CHAR(2),
+    W_ZIP CHAR(10),
+    W_COUNTRY VARCHAR(20),
+    W_GMT_OFFSET DECIMAL(5,2)
+) DISTRIBUTE BY REPLICATION;
CREATE TABLE AS creates a table based on the results of a query.
+It creates a table and fills it with data obtained by a SELECT. The table columns have the names and data types associated with the output columns of the SELECT, unless you override the column names by giving an explicit list of new column names.
+CREATE TABLE AS queries the source table once and writes the result into the new table, so the new table does not reflect later changes to the source table. In contrast, a view re-evaluates its defining SELECT statement whenever it is queried.
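+A short sketch (names illustrative) of the difference:
+CREATE TABLE returns_snapshot AS SELECT * FROM store_returns;
+CREATE VIEW returns_view AS SELECT * FROM store_returns;
+-- Rows inserted into store_returns afterwards appear in returns_view but not in returns_snapshot.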
+CREATE [ UNLOGGED ] TABLE table_name
+    [ (column_name [, ...] ) ]
+    [ WITH ( {storage_parameter = value} [, ... ] ) ]
+    [ COMPRESS | NOCOMPRESS ]
+    [ DISTRIBUTE BY { REPLICATION | { [HASH ] ( column_name ) } } ]
+    AS query
+    [ WITH [ NO ] DATA ];
Specifies that the table is created as an unlogged table. Data written to unlogged tables is not written to the write-ahead log, which makes them considerably faster than ordinary tables. However, they are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown. The contents of an unlogged table are also not replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well.
+Specifies the name of the table to be created.
+Value range: a string. It must comply with the naming convention.
+Specifies the name of a column to be created in the new table.
+Value range: a string. It must comply with the naming convention.
+Specifies an optional storage parameter for a table or an index. See details of parameters below.
+The fillfactor of a table is a percentage between 10 and 100. 100 (complete packing) is the default value. When a smaller fillfactor is specified, INSERT operations pack table pages only to the indicated percentage. The remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page, which is more efficient than placing it on a different page. For a table whose records are never updated, setting the fillfactor to 100 (complete packing) is the appropriate choice, but in heavily updated tables smaller fillfactors are appropriate. This parameter is valid only for row-store tables.
+Value range: 10–100
+COLUMN: The data will be stored in columns.
+ROW (default value): The data will be stored in rows.
+Specifies the compression level of the table data, which determines the compression ratio and compression time. Generally, a higher compression level produces a higher compression ratio but takes longer, and a lower level produces a lower ratio in less time. The actual compression ratio depends on the distribution characteristics of the data loaded into the table.
+Valid value:
+The valid values for column-store tables are YES/NO and LOW/MIDDLE/HIGH, and the default is LOW.
+The valid values for row-store tables are YES and NO, and the default is NO.
+The row-store table compression function has not been put into commercial use. To use this function, contact technical support engineers.
+Specifies the maximum number of rows in a storage unit during the data loading process. This parameter is valid only for column-store tables.
+Value range: 10000 to 60000
+Default value: 60000
+Specifies the number of records stored in each partial cluster during the data loading process. This parameter is valid only for column-store tables.
+Value range: 600000 to 2147483647
+Specifies whether to enable delta tables in column-store tables. The parameter is only valid for column-store tables.
+Default value: off
+Specifies the version of the column-store format. You can switch between different storage formats.
+Valid value:
+1.0: Each column in a column-store table is stored in a separate file. The file name is relfilenode.C1.0, relfilenode.C2.0, relfilenode.C3.0, or similar.
+2.0: All columns of a column-store table are combined and stored in a file. The file is named relfilenode.C1.0.
+Default value: 2.0
+You are advised to set COLVERSION to 2.0 when creating a column-store table, because the 2.0 storage format significantly improves performance over 1.0.
+Indicates whether to skip the hint bits operation when the full-page writes (FPW) log needs to be written during sequential scanning.
+If SKIP_FPI_HINT is set to true and the checkpoint operation is performed on a table, no Xlog will be generated when the table is sequentially scanned. This applies to intermediate tables that are queried less frequently, reducing the size of Xlogs and improving query performance.
+Specifies the keyword COMPRESS during the creation of a table, so that the compression feature is triggered in the case of a bulk INSERT operation. If this feature is enabled, a scan is performed for all tuple data within the page to generate a dictionary and then the tuple data is compressed and stored. If NOCOMPRESS is specified, the table is not compressed.
+Default value: NOCOMPRESS, tuple data is not compressed before storage.
+Specifies how the table is distributed or replicated between DNs.
+Default value: HASH(column_name), the key column of column_name (if any) or the column of distribution column supported by first data type.
+column_name supports the following data types:
+A SELECT or VALUES command, or an EXECUTE command that runs a prepared SELECT or VALUES query.
+Specifies whether the data produced by the query should be copied into the new table. By default, the data is copied. If the NO parameter is used, the data is not copied.
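+For example, a sketch that copies only the column definitions of store_returns, without copying any rows:
+CREATE TABLE store_returns_empty AS SELECT * FROM store_returns WITH NO DATA;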
+Create the store_returns_t1 table and insert numbers that are greater than 4795 in the sr_item_sk column of the store_returns table.
+CREATE TABLE store_returns_t1 AS SELECT * FROM store_returns WHERE sr_item_sk > '4795';
+Copy store_returns to create the store_returns_t2 table.
+CREATE TABLE store_returns_t2 AS TABLE store_returns;
CREATE TABLE PARTITION creates a partitioned table. Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The logical table is called a partitioned table, and each physical piece is called a partition. Data is stored in these smaller partitions rather than in the larger logical partitioned table.
+The common forms of partitioning include range partitioning, hash partitioning, list partitioning, and value partitioning. Currently, row-store and column-store tables support only range partitioning.
+In range partitioning, the table is partitioned into ranges defined by a key column or set of columns, with no overlap between the ranges of values assigned to different partitions. Each range has a dedicated partition for data storage.
+The partitioning policy of range partitioning determines how data is mapped to partitions when it is inserted. Currently, range partitioning supports only the range partitioning policy.
+Range partitioning policy: data is mapped to an existing partition based on the partition key value. If the value maps to a partition, the row is inserted into that partition; otherwise, an error is returned. This is the most commonly used partitioning policy.
+Partitioning can provide several benefits, such as better query performance through partition pruning and easier maintenance of large data sets (for example, dropping or truncating an entire partition at a time).
+A partitioned table supports unique and primary key constraints. The constraint keys of these constraints contain all partition keys.
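+For example, a sketch (table name illustrative): the primary key must include the partition key d (and, because the table is hash-distributed, the distribution column id):
+CREATE TABLE t_pk_part (id INT, d DATE, PRIMARY KEY (id, d))
+DISTRIBUTE BY HASH (id)
+PARTITION BY RANGE (d)
+(
+    PARTITION p1 VALUES LESS THAN('2023-01-01'),
+    PARTITION p2 VALUES LESS THAN(MAXVALUE)
+);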
+CREATE TABLE [ IF NOT EXISTS ] partition_table_name
+( [
+    { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
+    | table_constraint
+    | LIKE source_table [ like_option [...] ] }[, ... ]
+] )
+    [ WITH ( {storage_parameter = value} [, ... ] ) ]
+    [ COMPRESS | NOCOMPRESS ]
+    [ TABLESPACE tablespace_name ]
+    [ DISTRIBUTE BY { REPLICATION | { [ HASH ] ( column_name ) } } ]
+    [ TO { GROUP groupname | NODE ( nodename [, ... ] ) } ]
+    PARTITION BY {
+        {VALUES (partition_key)} |
+        {RANGE (partition_key) ( partition_less_than_item [, ... ] )} |
+        {RANGE (partition_key) ( partition_start_end_item [, ... ] )}
+    } [ { ENABLE | DISABLE } ROW MOVEMENT ];
+where column_constraint can be:
+[ CONSTRAINT constraint_name ]
+{ NOT NULL |
+  NULL |
+  CHECK ( expression ) |
+  DEFAULT default_expr |
+  UNIQUE index_parameters |
+  PRIMARY KEY index_parameters }
+[ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+where table_constraint can be:
+[ CONSTRAINT constraint_name ]
+{ CHECK ( expression ) |
+  UNIQUE ( column_name [, ... ] ) index_parameters |
+  PRIMARY KEY ( column_name [, ... ] ) index_parameters }
+[ DEFERRABLE | NOT DEFERRABLE | INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+where like_option can be:
+{ INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE | COMMENTS | RELOPTIONS | DISTRIBUTION | ALL }
+where index_parameters can be:
+[ WITH ( {storage_parameter = value} [, ... ] ) ]
+[ USING INDEX TABLESPACE tablespace_name ]
+where partition_less_than_item can be:
+PARTITION partition_name VALUES LESS THAN ( { partition_value | MAXVALUE } ) [TABLESPACE tablespace_name]
+where partition_start_end_item can be:
+PARTITION partition_name {
+    {START(partition_value) END (partition_value) EVERY (interval_value)} |
+    {START(partition_value) END ({partition_value | MAXVALUE})} |
+    {START(partition_value)} |
+    {END({partition_value | MAXVALUE})}
+} [TABLESPACE tablespace_name]
Does not throw an error if a table with the same name exists. A notice is issued in this case.
+Name of the partitioned table
+Value range: a string. It must comply with the naming convention.
+Specifies the name of a column to be created in the new table.
+Value range: a string. It must comply with the naming convention.
+Specifies the data type of the column.
+Assigns a collation to the column (which must be of a collatable data type). If no collation is specified, the default collation is used.
+The collatable types are char, varchar, text, nchar, and nvarchar.
+Specifies a name for a column or table constraint. The optional constraint clauses specify constraints that new or updated rows must satisfy for an insert or update operation to succeed.
+There are two ways to define constraints:
+Specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.
+Unlike INHERITS, the new table and original table are decoupled after creation is complete. Changes to the original table will not be applied to the new table, and it is not possible to include data of the new table in scans of the original table.
+Default expressions for the copied column definitions will only be copied if INCLUDING DEFAULTS is specified. The default behavior is to exclude default expressions, resulting in the copied columns in the new table having default values NULL.
+NOT NULL constraints are always copied to the new table. CHECK constraints will only be copied if INCLUDING CONSTRAINTS is specified; other types of constraints will never be copied. These rules also apply to column constraints and table constraints.
+Unlike INHERITS, columns and constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is reported.
+Specifies an optional storage parameter for a table or an index. Optional parameters are as follows:
+The fillfactor of a table is a percentage between 10 and 100. 100 (complete packing) is the default value. When a smaller fillfactor is specified, INSERT operations pack table pages only to the indicated percentage. The remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page, which is more efficient than placing it on a different page. For a table whose records are never updated, setting the fillfactor to 100 (complete packing) is the appropriate choice, but in heavily updated tables smaller fillfactors are appropriate. The parameter has no meaning for column-store tables.
+Value range: 10–100
+Determines the storage mode of the data in the table.
+Valid value: ROW (default), which stores data in rows, and COLUMN, which stores data in columns.
+orientation cannot be modified.
+The row-store table compression function has not been put into commercial use. To use this function, contact technical support engineers.
+Specifies the maximum number of rows in a storage unit during the data loading process. This parameter is valid only for column-store tables.
+Value range: 10000 to 60000
+Default value: 60000
+Specifies the number of records stored in each partial cluster during the data loading process. This parameter is valid only for column-store tables.
+Value range: no less than 100000, and a multiple of MAX_BATCHROW.
+Specifies whether to enable delta tables in column-store tables. The parameter is only valid for column-store tables.
+Default value: off
+A reserved parameter. The parameter is only valid for column-store table.
+The value ranges from 0 to 60000. The default value is 6000.
+Specifies the OBS tablespace for the cold partitions in a hot or cold table. This parameter is available only to partitioned column-store tables and cannot be modified. It must be used together with storage_policy.
+Valid value: a valid OBS tablespace name
+Specifies the rule for switching between hot and cold partitions. This parameter is used only for multi-temperature tables. It must be used together with cold_tablespace.
+Value range: a string in the format hot/cold switchover policy name:switchover threshold. Currently, only the LMT and HPN policies are supported. LMT indicates that the switchover is performed based on the last update time of partitions; HPN indicates that it is performed based on a fixed number of reserved hot partitions.
+Specifies the version of the column-store format. Switching between different storage formats is supported. However, the storage format of a partitioned table cannot be switched.
+Valid value:
+1.0: Each column in a column-store table is stored in a separate file. The file name is relfilenode.C1.0, relfilenode.C2.0, relfilenode.C3.0, or similar.
+2.0: All columns of a column-store table are combined and stored in a file. The file is named relfilenode.C1.0.
+Default value: 2.0
+The value of COLVERSION can only be set to 2.0 for OBS hot and cold tables.
+You are advised to set COLVERSION to 2.0 when creating a column-store table, because the 2.0 storage format significantly improves performance over 1.0.
+Indicates whether to skip the hint bits operation when the full-page writes (FPW) log needs to be written during sequential scanning.
+If SKIP_FPI_HINT is set to true and the checkpoint operation is performed on a table, no Xlog will be generated when the table is sequentially scanned. This applies to intermediate tables that are queried less frequently, reducing the size of Xlogs and improving query performance.
+Specifies the keyword COMPRESS during the creation of a table, so that the compression feature is triggered in the case of a bulk INSERT operation. If this feature is enabled, a scan is performed for all tuple data within the page to generate a dictionary and then the tuple data is compressed and stored. If NOCOMPRESS is specified, the table is not compressed.
+Default value: NOCOMPRESS, tuple data is not compressed before storage.
+Specifies that the new table will be created in the tablespace_name tablespace. If not specified, the default tablespace is used. OBS tablespaces are not supported.
+Specifies how the table is distributed or replicated between DNs.
+Valid value:
+REPLICATION: each row of the table is stored on every DN, so every DN holds a complete copy of the table.
+HASH(column_name): each row is distributed to a DN based on the hash value of the specified column.
+Default value: HASH(column_name), where column_name is the primary key column of the table (if any) or the first column whose data type supports distribution.
+column_name supports the following data types:
+TO GROUP specifies the Node Group in which the table is created. Currently, it cannot be used for HDFS tables. TO NODE is used for internal scale-out tools.
+Creates a range partition. partition_key is the name of the partition key.
+(1) Assume that the VALUES LESS THAN syntax is used.
+In this case, a maximum of four partition keys are supported.
+Data types supported by the partition keys are as follows: SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, REAL, DOUBLE PRECISION, CHARACTER VARYING(n), VARCHAR(n), CHARACTER(n), CHAR(n), CHARACTER, CHAR, TEXT, NVARCHAR2, NAME, TIMESTAMP[(p)] [WITHOUT TIME ZONE], TIMESTAMP[(p)] [WITH TIME ZONE], and DATE.
+(2) Assume that the START END syntax is used.
+In this case, only one partition key is supported.
+Data types supported by the partition key are as follows: SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, REAL, DOUBLE PRECISION, TIMESTAMP[(p)] [WITHOUT TIME ZONE], TIMESTAMP[(p)] [WITH TIME ZONE], and DATE.
+Specifies the information of partitions. partition_name is the name of a range partition. partition_value is the upper limit of range partition, and the value depends on the type of partition_key. MAXVALUE can specify the upper boundary of a range partition, and it is commonly used to specify the upper boundary of the last range partition.
+Specifies partition definitions.
+Specifies the row movement switch.
+If an UPDATE changes the value of the partition key of a tuple, the partition that the tuple belongs to changes. This switch determines whether such an update reports an error or moves the tuple to its new partition.
+Valid value:
+ENABLE: row movement is enabled; the tuple is moved to the partition matching its new partition key value.
+DISABLE (default): row movement is disabled; an update that changes the partition of a tuple reports an error.
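+A minimal sketch (table name illustrative): with ENABLE ROW MOVEMENT, an update that changes the partition key relocates the row; with DISABLE ROW MOVEMENT, the same update reports an error.
+CREATE TABLE t_move (id INT, c2 INT)
+DISTRIBUTE BY HASH (id)
+PARTITION BY RANGE (c2)
+(
+    PARTITION p1 VALUES LESS THAN(1000),
+    PARTITION p2 VALUES LESS THAN(MAXVALUE)
+) ENABLE ROW MOVEMENT;
+INSERT INTO t_move VALUES (1, 500);        -- Stored in p1.
+UPDATE t_move SET c2 = 1500 WHERE id = 1;  -- The row moves to p2.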
+Indicates that the column is not allowed to contain NULL values. ENABLE can be omitted.
+Indicates that the column is allowed to contain NULL values. This is the default setting.
+This clause is only provided for compatibility with non-standard SQL databases. You are advised not to use this clause.
+Specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. If any row of an insert or update operation produces a FALSE result, an error exception is raised and the insert or update does not alter the database.
+A check constraint specified as a column constraint should reference only the column's values, while an expression appearing in a table constraint can reference multiple columns.
+A constraint marked with NO INHERIT will not propagate to child tables.
+ENABLE can be omitted.
+Assigns a default data value for a column. The value can be any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The data type of the default expression must match the data type of the column.
+The default expression will be used in any insert operation that does not specify a value for the column. If there is no default value for a column, then the default value is NULL.
+UNIQUE ( column_name [, ... ] ) index_parameters
+Specifies that a group of one or more columns of a table can contain only unique values.
+For the purpose of a unique constraint, null values are not considered equal.
+If DISTRIBUTE BY REPLICATION is not specified, the column set with a unique constraint must contain the distribution columns.
+PRIMARY KEY ( column_name [, ... ] ) index_parameters
+Specifies that a column or combination of columns of a table can contain only unique (non-duplicate) and non-null values.
+Only one primary key can be specified for a table.
+If DISTRIBUTE BY REPLICATION is not specified, the column set with a primary key constraint must contain the distribution columns.
+Controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction using the SET CONSTRAINTS command. NOT DEFERRABLE is the default value. Currently, only UNIQUE and PRIMARY KEY constraints of row-store tables accept this clause. All the other constraints are not deferrable.
+If a constraint is deferrable, this clause specifies the default time to check the constraint.
+The constraint check time can be altered using the SET CONSTRAINTS command.
+Allows selection of the tablespace in which the index associated with a UNIQUE or PRIMARY KEY constraint will be created. If not specified, default_tablespace is consulted, or the default tablespace in the database if default_tablespace is empty. The OBS tablespace is not supported.
+CREATE TABLE tpcds.web_returns_p1
+(
+    WR_RETURNED_DATE_SK INTEGER,
+    WR_RETURNED_TIME_SK INTEGER,
+    WR_ITEM_SK INTEGER NOT NULL,
+    WR_REFUNDED_CUSTOMER_SK INTEGER,
+    WR_REFUNDED_CDEMO_SK INTEGER,
+    WR_REFUNDED_HDEMO_SK INTEGER,
+    WR_REFUNDED_ADDR_SK INTEGER,
+    WR_RETURNING_CUSTOMER_SK INTEGER,
+    WR_RETURNING_CDEMO_SK INTEGER,
+    WR_RETURNING_HDEMO_SK INTEGER,
+    WR_RETURNING_ADDR_SK INTEGER,
+    WR_WEB_PAGE_SK INTEGER,
+    WR_REASON_SK INTEGER,
+    WR_ORDER_NUMBER BIGINT NOT NULL,
+    WR_RETURN_QUANTITY INTEGER,
+    WR_RETURN_AMT DECIMAL(7,2),
+    WR_RETURN_TAX DECIMAL(7,2),
+    WR_RETURN_AMT_INC_TAX DECIMAL(7,2),
+    WR_FEE DECIMAL(7,2),
+    WR_RETURN_SHIP_COST DECIMAL(7,2),
+    WR_REFUNDED_CASH DECIMAL(7,2),
+    WR_REVERSED_CHARGE DECIMAL(7,2),
+    WR_ACCOUNT_CREDIT DECIMAL(7,2),
+    WR_NET_LOSS DECIMAL(7,2)
+)
+WITH (ORIENTATION = COLUMN,COMPRESSION=MIDDLE)
+DISTRIBUTE BY HASH (WR_ITEM_SK)
+PARTITION BY RANGE(WR_RETURNED_DATE_SK)
+(
+    PARTITION P1 VALUES LESS THAN(2450815),
+    PARTITION P2 VALUES LESS THAN(2451179),
+    PARTITION P3 VALUES LESS THAN(2451544),
+    PARTITION P4 VALUES LESS THAN(2451910),
+    PARTITION P5 VALUES LESS THAN(2452275),
+    PARTITION P6 VALUES LESS THAN(2452640),
+    PARTITION P7 VALUES LESS THAN(2453005),
+    PARTITION P8 VALUES LESS THAN(MAXVALUE)
+);
The ranges of the partitions are: wr_returned_date_sk < 2450815, 2450815 ≤ wr_returned_date_sk < 2451179, 2451179 ≤ wr_returned_date_sk < 2451544, 2451544 ≤ wr_returned_date_sk < 2451910, 2451910 ≤ wr_returned_date_sk < 2452275, 2452275 ≤ wr_returned_date_sk < 2452640, 2452640 ≤ wr_returned_date_sk < 2453005, and wr_returned_date_sk ≥ 2453005.
+Assume that CN and DN data directory/pg_location/mount1/path1, CN and DN data directory/pg_location/mount2/path2, CN and DN data directory/pg_location/mount3/path3, and CN and DN data directory/pg_location/mount4/path4 are empty directories for which user dwsadmin has read and write permissions.
+CREATE TABLE tpcds.web_returns_p2
+(
+    WR_RETURNED_DATE_SK INTEGER,
+    WR_RETURNED_TIME_SK INTEGER,
+    WR_ITEM_SK INTEGER NOT NULL,
+    WR_REFUNDED_CUSTOMER_SK INTEGER,
+    WR_REFUNDED_CDEMO_SK INTEGER,
+    WR_REFUNDED_HDEMO_SK INTEGER,
+    WR_REFUNDED_ADDR_SK INTEGER,
+    WR_RETURNING_CUSTOMER_SK INTEGER,
+    WR_RETURNING_CDEMO_SK INTEGER,
+    WR_RETURNING_HDEMO_SK INTEGER,
+    WR_RETURNING_ADDR_SK INTEGER,
+    WR_WEB_PAGE_SK INTEGER,
+    WR_REASON_SK INTEGER,
+    WR_ORDER_NUMBER BIGINT NOT NULL,
+    WR_RETURN_QUANTITY INTEGER,
+    WR_RETURN_AMT DECIMAL(7,2),
+    WR_RETURN_TAX DECIMAL(7,2),
+    WR_RETURN_AMT_INC_TAX DECIMAL(7,2),
+    WR_FEE DECIMAL(7,2),
+    WR_RETURN_SHIP_COST DECIMAL(7,2),
+    WR_REFUNDED_CASH DECIMAL(7,2),
+    WR_REVERSED_CHARGE DECIMAL(7,2),
+    WR_ACCOUNT_CREDIT DECIMAL(7,2),
+    WR_NET_LOSS DECIMAL(7,2)
+)
+DISTRIBUTE BY HASH (WR_ITEM_SK)
+PARTITION BY RANGE(WR_RETURNED_DATE_SK)
+(
+    PARTITION P1 VALUES LESS THAN(2450815),
+    PARTITION P2 VALUES LESS THAN(2451179),
+    PARTITION P3 VALUES LESS THAN(2451544),
+    PARTITION P4 VALUES LESS THAN(2451910),
+    PARTITION P5 VALUES LESS THAN(2452275),
+    PARTITION P6 VALUES LESS THAN(2452640),
+    PARTITION P7 VALUES LESS THAN(2453005),
+    PARTITION P8 VALUES LESS THAN(MAXVALUE)
+)
+ENABLE ROW MOVEMENT;
Assume that /home/dbadmin/startend_tbs1, /home/dbadmin/startend_tbs2, /home/dbadmin/startend_tbs3, and /home/dbadmin/startend_tbs4 are empty directories that user dbadmin has read and write permissions for.
Create a partitioned table with the partition key of type integer.
CREATE TABLE tpcds.startend_pt (c1 INT, c2 INT)
DISTRIBUTE BY HASH (c1)
PARTITION BY RANGE (c2)
(
    PARTITION p1 START(1) END(1000) EVERY(200),
    PARTITION p2 END(2000),
    PARTITION p3 START(2000) END(2500),
    PARTITION p4 START(2500),
    PARTITION p5 START(3000) END(5000) EVERY(1000)
)
ENABLE ROW MOVEMENT;
View the information of the partitioned table.
SELECT relname, boundaries FROM pg_partition p WHERE p.parentid = 'tpcds.startend_pt'::regclass ORDER BY 1;
      relname      | boundaries
-------------------+------------
 p1_0              | {1}
 p1_1              | {201}
 p1_2              | {401}
 p1_3              | {601}
 p1_4              | {801}
 p1_5              | {1000}
 p2                | {2000}
 p3                | {2500}
 p4                | {3000}
 p5_1              | {4000}
 p5_2              | {5000}
 tpcds.startend_pt |
(12 rows)
Import data and check the data volume in the partition.
INSERT INTO tpcds.startend_pt VALUES (GENERATE_SERIES(0, 4999), GENERATE_SERIES(0, 4999));

SELECT COUNT(*) FROM tpcds.startend_pt PARTITION FOR (0);
 count
-------
     1
(1 row)

SELECT COUNT(*) FROM tpcds.startend_pt PARTITION (p3);
 count
-------
   500
(1 row)
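The next listing includes partitions p6_1 through p6_3, p71, and q1_1 through q1_4, which are produced by partition-maintenance statements omitted from this excerpt. A plausible reconstruction, inferred from the resulting boundaries alone and therefore only a sketch, is:

-- Inferred: split partition p5_2 ([4000, 5000)) into four 250-wide partitions q1_1..q1_4.
ALTER TABLE tpcds.startend_pt SPLIT PARTITION p5_2 INTO (PARTITION q1 START(4000) END(5000) EVERY(250));
-- Inferred: add partitions p6_1..p6_3 covering [5000, 5900) in 300-wide steps, then p71 for [5900, 6000).
ALTER TABLE tpcds.startend_pt ADD PARTITION p6 START(5000) END(5900) EVERY(300);
ALTER TABLE tpcds.startend_pt ADD PARTITION p71 END(6000);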
View the information of the partitioned table.
SELECT relname, boundaries FROM pg_partition p WHERE p.parentid = 'tpcds.startend_pt'::regclass ORDER BY 1;
      relname      | boundaries
-------------------+------------
 p1_0              | {1}
 p1_1              | {201}
 p1_2              | {401}
 p1_3              | {601}
 p1_4              | {801}
 p1_5              | {1000}
 p2                | {2000}
 p3                | {2500}
 p4                | {3000}
 p5_1              | {4000}
 p6_1              | {5300}
 p6_2              | {5600}
 p6_3              | {5900}
 p71               | {6000}
 q1_1              | {4250}
 q1_2              | {4500}
 q1_3              | {4750}
 q1_4              | {5000}
 tpcds.startend_pt |
(19 rows)
CREATE TABLE customer_address
(
    ca_address_sk    integer NOT NULL,
    ca_address_date  date NOT NULL
)
DISTRIBUTE BY HASH (ca_address_sk)
PARTITION BY RANGE (ca_address_date)
(
    PARTITION p202001 VALUES LESS THAN('20200101'),
    PARTITION p202002 VALUES LESS THAN('20200201'),
    PARTITION p202003 VALUES LESS THAN('20200301'),
    PARTITION p202004 VALUES LESS THAN('20200401'),
    PARTITION p202005 VALUES LESS THAN('20200501'),
    PARTITION p202006 VALUES LESS THAN('20200601'),
    PARTITION p202007 VALUES LESS THAN('20200701'),
    PARTITION p202008 VALUES LESS THAN('20200801'),
    PARTITION p202009 VALUES LESS THAN('20200901'),
    PARTITION p202010 VALUES LESS THAN('20201001'),
    PARTITION p202011 VALUES LESS THAN('20201101'),
    PARTITION p202012 VALUES LESS THAN('20201201'),
    PARTITION p202013 VALUES LESS THAN(MAXVALUE)
);
Insert data:
INSERT INTO customer_address VALUES ('1','20200215');
INSERT INTO customer_address VALUES ('7','20200805');
INSERT INTO customer_address VALUES ('9','20201111');
INSERT INTO customer_address VALUES ('4','20201231');
Query a partition:
SELECT * FROM customer_address PARTITION (p202009);
 ca_address_sk |   ca_address_date
---------------+---------------------
             7 | 2020-08-05 00:00:00
(1 row)
Create a table partitioned by day and add a MAXVALUE boundary partition:

CREATE TABLE day_part (id int, d_time date)
DISTRIBUTE BY HASH (id)
PARTITION BY RANGE (d_time)
(PARTITION p1 START('2022-01-01') END('2022-01-31') EVERY(interval '1 day'));
ALTER TABLE day_part ADD PARTITION pmax VALUES LESS THAN (maxvalue);
Create a table partitioned by week:

CREATE TABLE week_part (id int, w_time date)
DISTRIBUTE BY HASH (id)
PARTITION BY RANGE (w_time)
(PARTITION p1 START('2021-01-01') END('2022-01-01') EVERY(interval '7 day'));
ALTER TABLE week_part ADD PARTITION pmax VALUES LESS THAN (maxvalue);
Create a table partitioned by month:

CREATE TABLE month_part (id int, m_time date)
DISTRIBUTE BY HASH (id)
PARTITION BY RANGE (m_time)
(PARTITION p1 START('2021-01-01') END('2022-01-01') EVERY(interval '1 month'));
ALTER TABLE month_part ADD PARTITION pmax VALUES LESS THAN (maxvalue);
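To verify how the EVERY clause expanded these definitions into individual partitions, the pg_partition query shown earlier can be reused; a sketch for month_part (output depends on the cluster):

SELECT relname, boundaries FROM pg_partition WHERE parentid = 'month_part'::regclass ORDER BY 1;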
CREATE TEXT SEARCH CONFIGURATION creates a text search configuration. A text search configuration specifies a text search parser that can divide a string into tokens, plus dictionaries that can be used to determine which tokens are of interest for searching.
CREATE TEXT SEARCH CONFIGURATION name
    ( PARSER = parser_name | COPY = source_config )
    [ WITH ( {configuration_option = value} [, ...] )];
Specifies the name of the text search configuration to be created. The name can be schema-qualified.
+Specifies the name of the text search parser to use for this configuration.
+Specifies the name of an existing text search configuration to copy.
Specifies a configuration parameter for the text search configuration. The available parameters depend on the parser specified by parser_name or inherited from source_config.
| Parser | Parameter | Description | Value Range |
|---|---|---|---|
| ngram | gram_size | Length of word segmentation | Integer, 1 to 4. Default value: 2 |
| ngram | punctuation_ignore | Whether to ignore punctuation | Boolean |
| ngram | grapsymbol_ignore | Whether to ignore graphical characters | Boolean |
| zhparser | punctuation_ignore | Whether to ignore special characters, including punctuation (\r and \n are not ignored), in the word segmentation result | Boolean |
| zhparser | seg_with_duality | Whether to aggregate segments with duality | Boolean |
| zhparser | multi_short | Whether to compound-segment long words | Boolean |
| zhparser | multi_duality | Whether to aggregate segments in long words with duality | Boolean |
| zhparser | multi_zmain | Whether to display key single words individually | Boolean |
| zhparser | multi_zall | Whether to display all single words individually | Boolean |
Create a text search configuration that uses the ngram parser.
CREATE TEXT SEARCH CONFIGURATION ngram1 (parser=ngram) WITH (gram_size = 2, grapsymbol_ignore = false);
Create a text search configuration by copying the ngram1 configuration.
CREATE TEXT SEARCH CONFIGURATION ngram2 (copy=ngram1) WITH (gram_size = 2, grapsymbol_ignore = false);
Create a text search configuration that uses the default parser.
CREATE TEXT SEARCH CONFIGURATION english_1 (parser=default);
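To try a configuration out, the standard to_tsvector function can be pointed at it. Note this is only a sketch: a configuration created this way has no dictionary mappings yet, so tokens are dropped until mappings are added with ALTER TEXT SEARCH CONFIGURATION ... ADD MAPPING.

SELECT to_tsvector('english_1', 'The quick brown fox');
-- Returns an empty tsvector until token-type mappings are added to english_1.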
ALTER TEXT SEARCH CONFIGURATION, DROP TEXT SEARCH CONFIGURATION
+CREATE TEXT SEARCH DICTIONARY creates a full-text search dictionary. A dictionary is used to identify and process specified words during full-text search.
Dictionaries are created by using predefined templates (defined in the PG_TS_TEMPLATE system catalog). Five types of dictionaries can be created: Simple, Ispell, Synonym, Thesaurus, and Snowball. Each type handles different tasks.
CREATE TEXT SEARCH DICTIONARY name (
    TEMPLATE = template
    [, option = value [, ... ]]
);
Specifies the name of a dictionary to be created. (If you do not specify a schema name, the dictionary will be created in the current schema.)
+Value range: a string, which complies with the identifier naming convention. A value can contain a maximum of 63 characters.
+Specifies a template name.
+Value range: templates (Simple, Synonym, Thesaurus, Ispell, and Snowball) defined in the PG_TS_TEMPLATE system catalog
Specifies a parameter name. Each type of dictionary has a template defining its own custom parameters. The order in which parameters are set does not matter.
+Specifies the name of a file listing stop words. The default file name extension is .stop. For example, if the value of STOPWORDS is french, the actual file name is french.stop. In the file, each line defines a stop word. Dictionaries will ignore blank lines and spaces in the file and convert stop-word phrases into lowercase.
+Specifies whether to accept a non-stop word as recognized. The default value is true.
+If ACCEPT=true is set for a Simple dictionary, no token will be passed to subsequent dictionaries. In this case, you are advised to place the Simple dictionary at the end of the dictionary list. If ACCEPT=false is set, you are advised to place the Simple dictionary before at least one dictionary in the list.
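A minimal sketch of the ACCEPT behavior, assuming the built-in english stop-word file and the standard ts_lexize function (depending on the deployment, FILEPATH may also need to be set, as described below):

CREATE TEXT SEARCH DICTIONARY public.simple_dict (
    TEMPLATE = pg_catalog.simple,
    STOPWORDS = english,
    ACCEPT = false
);
SELECT ts_lexize('public.simple_dict', 'the');   -- stop word: returns an empty array {}
SELECT ts_lexize('public.simple_dict', 'data');  -- with ACCEPT = false: returns NULL, so the token passes to the next dictionary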
+Specifies the directory for storing the stop word file. The stop word file can be stored locally or on the OBS server. If the file is stored locally, the directory format is 'file://absolute_path'. If the file is stored on the OBS server, the directory format is 'obs://bucket/path accesskey=ak secretkey=sk region=region_name'. The directory must be enclosed in single quotation marks ('). The default value is the directory where predefined dictionary files are located. Both the FILEPATH and STOPWORDS parameters need to be specified.
+To create a dictionary using the stop word file on the OBS server, perform the following steps:
+For example, if region_name is set to rg, region_map is as follows: "rg": "obsv3.sa-fb-1.externaldemo.com".
The region_name and the OBS domain name are enclosed in double quotation marks. There is no space before the colon and one space after it.
CREATE TEXT SEARCH DICTIONARY french_dict ( TEMPLATE = pg_catalog.simple, STOPWORDS = french, FILEPATH = 'obs://gaussdb accesskey=xxx secretkey=yyy region=rg' );
The french.stop file is stored in the root directory of the gaussdb bucket. Therefore, the path is empty.
+Specifies the name of the definition file for a Synonym dictionary. The default file name extension is .syn.
+The file is a list of synonyms. Each line is in the format of token synonym, that is, token and its synonym separated by a space.
+Specifies whether tokens and their synonyms are case sensitive. The default value is false, indicating that tokens and synonyms in dictionary files will be converted into lowercase. If this parameter is set to true, they will not be converted into lowercase.
+Specifies the directory for storing Synonym dictionary files. The directory can be a local directory or an OBS directory. The default value is the directory where predefined dictionary files are located. The directory format and the process of creating a Synonym dictionary using a file on the OBS server are the same as those of the FILEPATH of the Simple dictionary.
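For example, assuming a hypothetical my_synonyms.syn file whose only line is postgres pgsql, a Synonym dictionary could be created and tested like this (a sketch):

CREATE TEXT SEARCH DICTIONARY my_synonym (
    TEMPLATE = synonym,
    SYNONYMS = my_synonyms  -- hypothetical file name; the .syn extension is implied
);
SELECT ts_lexize('my_synonym', 'postgres');  -- returns {pgsql}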
+Specifies the name of a dictionary definition file. The default file name extension is .ths.
+The file is a list of synonyms. Each line is in the format of sample words : indexed words. The colon (:) is used as a separator between a phrase and its substitute word. If multiple sample words are matched, the TZ selects the longest one.
+Specifies the name of a subdictionary used for word normalization. This parameter is mandatory and only one subdictionary name can be specified. The specified subdictionary must exist. It is used to identify and normalize input text before phrase matching.
+If an input word cannot be recognized by the subdictionary, an error will be reported. In this case, remove the word or update the subdictionary to make the word recognizable. In addition, an asterisk (*) can be placed at the beginning of an indexed word to skip the application of a subdictionary on it, but all sample words must be recognizable by the subdictionary.
? one ? two : swsw
Phrases such as a one the two and the one a two will match this rule and be output as swsw.
+Specifies the directory for storing dictionary definition files. The directory can be a local directory or an OBS directory. The default value is the directory where predefined dictionary files are located. The directory format and the process of creating a Synonym dictionary using a file on the OBS server are the same as those of the FILEPATH of the Simple dictionary.
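A Thesaurus dictionary definition might therefore look like the following sketch, where thesaurus_astro names a hypothetical .ths file and the built-in english_stem dictionary serves as the mandatory subdictionary:

CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
    TEMPLATE = thesaurus,
    DictFile = thesaurus_astro,            -- hypothetical definition file
    Dictionary = pg_catalog.english_stem   -- subdictionary used for normalization
);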
+Specifies the name of a dictionary definition file. The default file name extension is .dict.
+Specifies the name of an affix file. The default file name extension is .affix.
+Specifies the name of a file listing stop words. The default file name extension is .stop. The file content format is the same as that of the file for a Simple dictionary.
+Specifies the directory for storing dictionary files. The directory can be a local directory or an OBS directory. The default value is the directory where predefined dictionary files are located. The directory format and the process of creating a Synonym dictionary using a file on the OBS server are the same as those of the FILEPATH of the Simple dictionary.
+Specifies the name of a language whose stemming algorithm will be used. According to spelling rules in the language, the algorithm normalizes the variants of an input word into a basic word or a stem.
+Specifies the name of a file listing stop words. The default file name extension is .stop. The file content format is the same as that of the file for a Simple dictionary.
+Specifies the directory for storing dictionary definition files. The directory can be a local directory or an OBS directory. The default value is the directory where predefined dictionary files are located. Both the FILEPATH and STOPWORDS parameters need to be specified. The directory format and the process of creating a Snowball dictionary using a file on the OBS server are the same as those of the Simple dictionary.
+Specifies a parameter value. If the value is not an identifier or a number, enclose it with single quotation marks (''). You can also enclose identifiers and numbers with single quotation marks.
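Putting the Snowball parameters together, a minimal sketch assuming the built-in english stemming and stop-word files:

CREATE TEXT SEARCH DICTIONARY english_snowball (
    TEMPLATE = snowball,
    Language = english,
    StopWords = english
);

The next example creates an Ispell dictionary whose definition files are stored on OBS.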
CREATE TEXT SEARCH DICTIONARY english_ispell (
    TEMPLATE = ispell,
    DictFile = english,
    AffFile = english,
    StopWords = english,
    FilePath = 'obs://bucket_name/path accesskey=ak secretkey=sk region=rg'
);
See examples in Configuration Examples.
+CREATE TRIGGER creates a trigger. The trigger will be associated with a specified table or view, and will execute a specified function when certain events occur.
CREATE [ CONSTRAINT ] TRIGGER trigger_name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ] }
    ON table_name
    [ FROM referenced_table_name ]
    { NOT DEFERRABLE | [ DEFERRABLE ] { INITIALLY IMMEDIATE | INITIALLY DEFERRED } }
    [ FOR [ EACH ] { ROW | STATEMENT } ]
    [ WHEN ( condition ) ]
    EXECUTE PROCEDURE function_name ( arguments );
Events include:
INSERT
UPDATE [ OF column_name [, ... ] ]
DELETE
TRUNCATE
(Optional) Creates a constraint trigger, that is, a trigger is used as a constraint. Such a trigger is similar to a regular trigger except that the timing of the trigger firing can be adjusted using SET CONSTRAINTS. Constraint triggers must be AFTER ROW triggers.
+Specifies the name of a new trigger. The name cannot be schema-qualified because the trigger inherits the schema of its table. In addition, triggers on the same table cannot be named the same. For a constraint trigger, this is also the name to use when you modify the trigger's behavior using SET CONSTRAINTS.
+Value range: a string that complies with the identifier naming convention. A value can contain a maximum of 63 characters.
+Specifies that a trigger function is called before the trigger event.
+Specifies that a trigger function is called after the trigger event. A constraint trigger can only be specified as AFTER.
+Specifies that a trigger function directly replaces the trigger event.
+Specifies the event that will fire a trigger. Values are INSERT, UPDATE, DELETE, and TRUNCATE. You can also specify multiple trigger events through OR.
+For UPDATE events, use the following syntax to specify a list of columns:
UPDATE OF column_name1 [, column_name2 ... ]
The trigger will only fire if at least one of the listed columns is mentioned as a target of the UPDATE statement. INSTEAD OF UPDATE events do not support lists of columns.
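For instance, the following sketch (with hypothetical names orders and log_status_change(), which are not defined in this document) fires only when the status column is a target of the UPDATE statement:

CREATE TRIGGER status_update_trigger
    AFTER UPDATE OF status ON orders        -- fires only when column status is targeted
    FOR EACH ROW
    EXECUTE PROCEDURE log_status_change();  -- hypothetical trigger function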
+Specifies the name of the table where a trigger needs to be created.
+Value range: name of an existing table in the database
+Specifies the name of another table referenced by a constraint. This parameter can be specified only for constraint triggers. It does not support foreign key constraints and is not recommended for general use.
+Value range: name of an existing table in the database
+Controls whether a constraint can be deferred. The two parameters determine the timing for firing a constraint trigger, and can be specified only for constraint triggers.
+For details, see CREATE TABLE.
+If a constraint is deferrable, the two clauses specify the default time to check the constraint, and can be specified only for constraint triggers.
+For details, see CREATE TABLE.
+Specifies the frequency of firing a trigger.
+If this parameter is not specified, the default value FOR EACH STATEMENT will be used. Constraint triggers can only be specified as FOR EACH ROW.
+Specifies a Boolean expression that determines whether a trigger function will actually be executed. If WHEN is specified, the function will be called only when condition returns true.
+In FOR EACH ROW triggers, the WHEN condition can reference the columns of old or new row values by writing OLD.column_name or NEW.column_name, respectively. In addition, INSERT triggers cannot reference OLD and DELETE triggers cannot reference NEW.
+INSTEAD OF triggers do not support WHEN conditions.
+WHEN expressions cannot contain subqueries.
+For constraint triggers, evaluation of the WHEN condition is not deferred, but occurs immediately after the update operation is performed. If the condition does not return true, the trigger will not be queued for deferred execution.
+Specifies a user-defined function, which must be declared as taking no parameters and returning data of the trigger type. This function is executed when a trigger fires.
+Specifies an optional, comma-separated list of parameters to be provided to a function when a trigger is executed. Parameters are literal string constants. Simple names and numeric constants can also be included, but they will all be converted to strings. Check descriptions of the implementation language of a trigger function to find out how these parameters are accessed within the function.
+The following details trigger types:
| Trigger Timing | Trigger Event | Row-level | Statement-level |
|---|---|---|---|
| BEFORE | INSERT/UPDATE/DELETE | Tables | Tables and views |
| BEFORE | TRUNCATE | Not supported | Tables |
| AFTER | INSERT/UPDATE/DELETE | Tables | Tables and views |
| AFTER | TRUNCATE | Not supported | Tables |
| INSTEAD OF | INSERT/UPDATE/DELETE | Views | Not supported |
| INSTEAD OF | TRUNCATE | Not supported | Not supported |
The following table describes the special variables available in trigger functions:

| Variable | Description |
|---|---|
| NEW | New tuple for INSERT/UPDATE operations. NULL for DELETE operations. |
| OLD | Old tuple for UPDATE/DELETE operations. NULL for INSERT operations. |
| TG_NAME | Trigger name |
| TG_WHEN | Trigger timing (BEFORE/AFTER/INSTEAD OF) |
| TG_LEVEL | Trigger frequency (ROW/STATEMENT) |
| TG_OP | Trigger event (INSERT/UPDATE/DELETE/TRUNCATE) |
| TG_RELID | OID of the table where the trigger is located |
| TG_RELNAME | Name of the table where the trigger is located (deprecated; use TG_TABLE_NAME instead) |
| TG_TABLE_NAME | Name of the table where the trigger is located |
| TG_TABLE_SCHEMA | Schema of the table where the trigger is located |
| TG_NARGS | Number of parameters of the trigger function |
| TG_ARGV[] | List of parameters of the trigger function |
Create a source table and a target table.
CREATE TABLE test_trigger_src_tbl(id1 INT, id2 INT, id3 INT);
CREATE TABLE test_trigger_des_tbl(id1 INT, id2 INT, id3 INT);
Create the trigger function tri_insert_func().
CREATE OR REPLACE FUNCTION tri_insert_func() RETURNS TRIGGER AS
$$
DECLARE
BEGIN
    INSERT INTO test_trigger_des_tbl VALUES(NEW.id1, NEW.id2, NEW.id3);
    RETURN NEW;
END
$$ LANGUAGE PLPGSQL;
Create the trigger function tri_update_func().
CREATE OR REPLACE FUNCTION tri_update_func() RETURNS TRIGGER AS
$$
DECLARE
BEGIN
    UPDATE test_trigger_des_tbl SET id3 = NEW.id3 WHERE id1 = OLD.id1;
    RETURN OLD;
END
$$ LANGUAGE PLPGSQL;
Create the trigger function tri_delete_func().
CREATE OR REPLACE FUNCTION tri_delete_func() RETURNS TRIGGER AS
$$
DECLARE
BEGIN
    DELETE FROM test_trigger_des_tbl WHERE id1 = OLD.id1;
    RETURN OLD;
END
$$ LANGUAGE PLPGSQL;
Create an INSERT trigger.
CREATE TRIGGER insert_trigger
    BEFORE INSERT ON test_trigger_src_tbl
    FOR EACH ROW
    EXECUTE PROCEDURE tri_insert_func();
Create an UPDATE trigger.
CREATE TRIGGER update_trigger
    AFTER UPDATE ON test_trigger_src_tbl
    FOR EACH ROW
    EXECUTE PROCEDURE tri_update_func();
Create a DELETE trigger.
CREATE TRIGGER delete_trigger
    BEFORE DELETE ON test_trigger_src_tbl
    FOR EACH ROW
    EXECUTE PROCEDURE tri_delete_func();
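To confirm the triggers fire, insert into the source table and query the destination table; a sketch using the objects defined above (the expected result is illustrative):

INSERT INTO test_trigger_src_tbl VALUES (100, 200, 300);
SELECT * FROM test_trigger_des_tbl;  -- expected: one row (100, 200, 300) copied by tri_insert_func()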
CREATE TYPE defines a new data type in the current database. The user who defines a new data type becomes its owner. Types are designed only for row-store tables.
Four kinds of types can be created by using CREATE TYPE: composite types, base types, shell types, and enumerated types.
+A composite type is specified by a list of attribute names and data types. If the data type of an attribute is collatable, the attribute's collation rule can also be specified. A composite type is essentially the same as the row type of a table. However, using CREATE TYPE avoids the need to create an actual table when only a type needs to be defined. In addition, a standalone composite type is useful, for example, as the parameter or return type of a function.
+To create a composite type, you must have the USAGE permission for all its attribute types.
+You can customize a new base type (scalar type). Generally, functions required for base types must be coded in C or another low-level language.
+A shell type is simply a placeholder for a type to be defined later. It can be created by delivering CREATE TYPE with no parameters except for a type name. Shell types are needed as forward references when base types are created.
+An enumerated type is a list of enumerated values. Each value is a non-empty string with the maximum length of 64 bytes.
+If a schema name is given, the type will be created in the specified schema. Otherwise, it will be created in the current schema. A type name must be different from the name of any existing type or domain in the same schema. (Since tables have associated data types, a type name must also be different from the name of any existing table in the same schema.)
CREATE TYPE name AS
    ( [ attribute_name data_type [ COLLATE collation ] [, ... ] ] )

CREATE TYPE name (
    INPUT = input_function,
    OUTPUT = output_function
    [ , RECEIVE = receive_function ]
    [ , SEND = send_function ]
    [ , TYPMOD_IN = type_modifier_input_function ]
    [ , TYPMOD_OUT = type_modifier_output_function ]
    [ , ANALYZE = analyze_function ]
    [ , INTERNALLENGTH = { internallength | VARIABLE } ]
    [ , PASSEDBYVALUE ]
    [ , ALIGNMENT = alignment ]
    [ , STORAGE = storage ]
    [ , LIKE = like_type ]
    [ , CATEGORY = category ]
    [ , PREFERRED = preferred ]
    [ , DEFAULT = default ]
    [ , ELEMENT = element ]
    [ , DELIMITER = delimiter ]
    [ , COLLATABLE = collatable ]
)

CREATE TYPE name

CREATE TYPE name AS ENUM
    ( [ 'label' [, ... ] ] )
Composite types
+Specifies the name of the type to be created. It can be schema-qualified.
+Specifies the name of an attribute (column) for the composite type.
+Specifies the name of an existing data type to become a column of the composite type.
+Specifies the name of an existing collation rule to be associated with a column of the composite type.
+Base types
+When creating a base type, you can place parameters in any order. The input_function and output_function parameters are mandatory, and other parameters are optional.
+Specifies the name of a function that converts data from the external text format of a type to its internal format.
+An input function can be declared as taking one parameter of the cstring type or taking three parameters of the cstring, oid, and integer types.
An input function must return a value of the data type itself. Generally, an input function must be declared as STRICT. If it is not, it will be called with a NULL first parameter when the system reads a NULL input value. In this case, the function must still return NULL unless it raises an error. (This mechanism is designed for supporting domain input functions, which may need to reject NULL input values.)
+Input and output functions can be declared to have the results or parameters of a new type because they have to be created before the new type is created. The new type should first be defined as a shell type, which is a placeholder type that has no attributes except a name and an owner. This can be done by delivering the CREATE TYPE name statement, with no additional parameters. Then, the C I/O functions can be defined as referencing the shell type. Finally, CREATE TYPE with a full definition replaces the shell type with a complete, valid type definition. After that, the new type can be used normally.
+Specifies the name of a function that converts data from the internal format of a type to its external text format.
+An output function must be declared as taking one parameter of a new data type. It must return data of the cstring type. Output functions are not invoked for NULL values.
+(Optional) Specifies the name of a function that converts data from the external binary format of a type to its internal format.
If this function is not used, the type cannot participate in binary input. The binary representation should be chosen so that it is cheap to convert to the internal form while remaining reasonably portable. (For example, the standard integer data types use the network byte order as the external binary representation, whereas the internal representation is in the machine's native byte order.) This function should perform adequate checks to ensure a valid value.
+Also, this function can be declared as taking one parameter of the internal type or taking three parameters of the internal, oid, and integer types.
A receive function must return a value of the data type itself. Generally, a receive function must be declared as STRICT. If it is not, it will be called with a NULL first parameter when the system reads a NULL input value. In this case, the function must still return NULL unless it raises an error. (This mechanism is designed for supporting domain receive functions, which may need to reject NULL input values.)
+(Optional) Specifies the name of a function that converts data from the internal format of a type to its external binary format.
+If this function is not used, the type cannot participate in binary output. A send function must be declared as taking one parameter of a new data type. It must return data of the bytea type. Send functions are not invoked for NULL values.
+(Optional) Specifies the name of a function that converts an array of modifiers for a type to its internal format.
+(Optional) Specifies the name of a function that converts the internal format of modifiers for a type to its external text format.
type_modifier_input_function and type_modifier_output_function are needed if a type supports modifiers, that is, optional constraints attached to a type declaration, such as char(5) or numeric(30,2). GaussDB(DWS) allows user-defined types to take one or more simple constants or identifiers as modifiers. However, this information must be capable of being packed into a single non-negative integer value for storage in system catalogs. Declared modifiers are passed to type_modifier_input_function in the cstring array format. The function must check the values for validity, throwing an error if they are wrong. If they are correct, the function returns a single non-negative integer value, which will be stored as typmod in a column. If the type does not have type_modifier_input_function, type modifiers will be rejected. type_modifier_output_function converts the internal integer typmod value back to a correct format for user display. It must return a cstring value, which is the exact string to append to the type name. For example, a numeric function may return (30,2). If the default display format (the stored typmod integer enclosed in parentheses) is sufficient, you can omit type_modifier_output_function.
+(Optional) Specifies the name of a function that performs statistical analysis for a data type.
+By default, if there is a default B-tree operator class for a type, ANALYZE will attempt to gather statistics by using the "equals" and "less-than" operators of the type. This behavior is inappropriate for non-scalar types, and can be overridden by specifying a custom analysis function. The analysis function must be declared to take one parameter of the internal type and return a boolean result.
+(Optional) Specifies a numeric constant for specifying the length in bytes of the internal representation of a new type. By default, it is variable-length.
+Although the details of the new type's internal representation are only known to I/O functions and other functions that you create to work with the type, there are still some attributes of the internal representation that must be declared to GaussDB(DWS). The most important one is internallength. Base data types can be fixed-length (when internallength is a positive integer) or variable-length (when internallength is set to VARIABLE; internally, this is represented by setting typlen to -1). The internal representation of all variable-length types must start with a 4-byte integer. internallength defines the total length.
+(Optional) Specifies that values of a data type are passed by value, rather than by reference. Types passed by value must be fixed-length, and their internal representation cannot be larger than the size of the Datum type (4 bytes on some machines, and 8 bytes on others).
+(Optional) Specifies the storage alignment required for a data type. It supports values char, int2, int4, and double. The default value is int4.
+The allowed values equate to alignment on 1-, 2-, 4-, or 8-byte boundaries. Note that variable-length types must have an alignment of at least 4 since they must contain an int4 value as their first component.
+(Optional) Specifies the storage strategy for a data type.
+It supports values plain, external, extended, and main. The default value is plain.
+All storage values except plain imply that the functions of the data type can handle values that have been toasted. A given value merely determines the default TOAST storage strategy for columns of a toastable data type. Users can choose other strategies for individual columns by using ALTER TABLE SET STORAGE.
+(Optional) Specifies the name of an existing data type that has the same representation as a new type. The values of internallength, passedbyvalue, alignment, and storage are copied from this type, unless they are overridden by explicit specifications elsewhere in the CREATE TYPE command.
+Specifying representation in this way is especially useful when the low-level implementation of a new type references an existing type.
+(Optional) Specifies the category code (a single ASCII character) for a type. The default value is U for a user-defined type. You can also choose other ASCII characters to create custom categories.
+(Optional) Specifies whether a type is preferred within its type category. If it is, the value will be TRUE, else FALSE. The default value is FALSE. Be cautious when creating a new preferred type within an existing type category because this could cause great changes in behavior.
+The category and preferred parameters can be used to help determine which implicit cast excels in ambiguous situations. Each data type belongs to a category named by a single ASCII character, and each type is either preferred or not within its category. If this rule is helpful in resolving overloaded functions or operators, the parser will prefer casting to preferred types (but only from other types within the same category). For types that have no implicit casts to or from any other types, it is sufficient to leave these parameters at their default values. However, for a group of types that have implicit casts, mark them all as belonging to a category and select one or two of the most general types as being preferred within the category. The category parameter is helpful in adding a user-defined type to an existing built-in category, such as the numeric or string type. However, you can also create new entirely-user-defined type categories. Select any ASCII character other than an uppercase letter to name such a category.
+(Optional) Specifies the default value for a data type. If this parameter is omitted, the default value will be NULL.
+A default value can be specified if you expect the columns of a data type to default to something other than the NULL value. You can also specify a default value using the DEFAULT keyword. (Such a default value can be overridden by an explicit DEFAULT clause attached to a particular column.)
+(Optional) Specifies the type of an array element when an array type is created. For example, to define an array of 4-byte integers (int4), set ELEMENT to int4.
+(Optional) Specifies the delimiter character to be used between values in arrays made of a type.
+delimiter can be set to a specific character. The default delimiter is a comma (,). Note that a delimiter is associated with the array element type, instead of the array type itself.
+(Optional) Specifies whether a type's operations can use collation information. If they can, the value will be TRUE, else FALSE (default).
+If collatable is TRUE, column definitions and expressions of a type may carry collation information by using the COLLATE clause. It is the implementations of functions operating on the type that actually use the collation information. This use cannot be achieved merely by marking the type collatable.
+(Optional) Specifies a text label associated with an enumerated value. It is a non-empty string of up to 64 characters.
+Whenever a user-defined type is created, GaussDB(DWS) automatically creates an associated array type whose name consists of the element type name prepended with an underscore (_).
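For instance (a sketch; pair is a hypothetical type used only to show the implicit array type):

CREATE TYPE pair AS (x int, y int);
SELECT ARRAY[(1,2)::pair, (3,4)::pair];  -- a value of the automatically created array type, internally named _pair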
+Example 1: Create a composite type, create a table, insert data, and make a query.
CREATE TYPE compfoo AS (f1 int, f2 text);
CREATE TABLE t1_compfoo(a int, b compfoo);
CREATE TABLE t2_compfoo(a int, b compfoo);
INSERT INTO t1_compfoo VALUES(1, (1,'demo'));
INSERT INTO t2_compfoo SELECT * FROM t1_compfoo;
SELECT (b).f1 FROM t1_compfoo;
SELECT * FROM t1_compfoo t1 JOIN t2_compfoo t2 ON (t1.b).f1 = (t2.b).f1;
Example 2: Create an enumeration type and use it in the table definition.
CREATE TYPE bugstatus AS ENUM ('create', 'modify', 'closed');
CREATE TABLE customer (name text, current_bugstatus bugstatus);
INSERT INTO customer VALUES ('type', 'create');
SELECT * FROM customer WHERE current_bugstatus = 'create';
Example 3: Compile a .so file and create the shell type.
CREATE TYPE complex;
This statement creates a placeholder for the type to be created, which can then be referenced when defining its I/O function. Now you can define an I/O function. Note that the function must be declared in NOT FENCED mode when it is created.
CREATE FUNCTION complex_in(cstring)
    RETURNS complex
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT NOT FENCED;

CREATE FUNCTION complex_out(complex)
    RETURNS cstring
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT NOT FENCED;

CREATE FUNCTION complex_recv(internal)
    RETURNS complex
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT NOT FENCED;

CREATE FUNCTION complex_send(complex)
    RETURNS bytea
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT NOT FENCED;
Finally, provide a complete definition of the data type.
CREATE TYPE complex (
    internallength = 16,
    input = complex_in,
    output = complex_out,
    receive = complex_recv,
    send = complex_send,
    alignment = double
);
The C functions corresponding to the input, output, receive, and send functions are defined as follows:
/* Define the Complex structure: */
typedef struct Complex {
    double x;
    double y;
} Complex;

/* Define the input function: */
PG_FUNCTION_INFO_V1(complex_in);

Datum
complex_in(PG_FUNCTION_ARGS)
{
    char    *str = PG_GETARG_CSTRING(0);
    double   x,
             y;
    Complex *result;

    if (sscanf(str, " ( %lf , %lf )", &x, &y) != 2)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                 errmsg("invalid input syntax for complex: \"%s\"",
                        str)));

    result = (Complex *) palloc(sizeof(Complex));
    result->x = x;
    result->y = y;
    PG_RETURN_POINTER(result);
}

/* Define the output function: */
PG_FUNCTION_INFO_V1(complex_out);

Datum
complex_out(PG_FUNCTION_ARGS)
{
    Complex *complex = (Complex *) PG_GETARG_POINTER(0);
    char    *result;

    result = (char *) palloc(100);
    snprintf(result, 100, "(%g,%g)", complex->x, complex->y);
    PG_RETURN_CSTRING(result);
}

/* Define the receive function: */
PG_FUNCTION_INFO_V1(complex_recv);

Datum
complex_recv(PG_FUNCTION_ARGS)
{
    StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
    Complex   *result;

    result = (Complex *) palloc(sizeof(Complex));
    result->x = pq_getmsgfloat8(buf);
    result->y = pq_getmsgfloat8(buf);
    PG_RETURN_POINTER(result);
}

/* Define the send function: */
PG_FUNCTION_INFO_V1(complex_send);

Datum
complex_send(PG_FUNCTION_ARGS)
{
    Complex *complex = (Complex *) PG_GETARG_POINTER(0);
    StringInfoData buf;

    pq_begintypsend(&buf);
    pq_sendfloat8(&buf, complex->x);
    pq_sendfloat8(&buf, complex->y);
    PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
CREATE USER creates a user.
CREATE USER user_name [ [ WITH ] option [ ... ] ] [ ENCRYPTED | UNENCRYPTED ] { PASSWORD | IDENTIFIED BY } { 'password' | DISABLE };
The option clause is used for setting information including permissions and attributes.
{SYSADMIN | NOSYSADMIN}
    | {AUDITADMIN | NOAUDITADMIN}
    | {CREATEDB | NOCREATEDB}
    | {USEFT | NOUSEFT}
    | {CREATEROLE | NOCREATEROLE}
    | {INHERIT | NOINHERIT}
    | {LOGIN | NOLOGIN}
    | {REPLICATION | NOREPLICATION}
    | {INDEPENDENT | NOINDEPENDENT}
    | {VCADMIN | NOVCADMIN}
    | CONNECTION LIMIT connlimit
    | VALID BEGIN 'timestamp'
    | VALID UNTIL 'timestamp'
    | RESOURCE POOL 'respool'
    | USER GROUP 'groupuser'
    | PERM SPACE 'spacelimit'
    | TEMP SPACE 'tmpspacelimit'
    | SPILL SPACE 'spillspacelimit'
    | NODE GROUP logic_cluster_name
    | IN ROLE role_name [, ...]
    | IN GROUP role_name [, ...]
    | ROLE role_name [, ...]
    | ADMIN role_name [, ...]
    | USER role_name [, ...]
    | SYSID uid
    | DEFAULT TABLESPACE tablespace_name
    | PROFILE DEFAULT
    | PROFILE profile_name
    | PGUSER
    | AUTHINFO 'authinfo'
    | PASSWORD EXPIRATION period
Specifies the user name.
+Value range: a string. It must comply with the naming convention. A value can contain a maximum of 63 characters.
+Specifies the login password.
+A password must:
+Value range: a string
+For details on other parameters, see CREATE ROLE Parameter Description.
+Create user jim.
CREATE USER jim PASSWORD '{password}';
The following statement is equivalent, using the IDENTIFIED BY syntax.
CREATE USER kim IDENTIFIED BY '{password}';
To create a user with the permission to create databases, add the CREATEDB keyword.
CREATE USER dim CREATEDB PASSWORD '{password}';
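Several options from the syntax above can be combined in one statement; a sketch (the user name is hypothetical, and '{password}' stands for a real password):

CREATE USER lily WITH LOGIN CONNECTION LIMIT 10 VALID UNTIL '2026-01-01' PASSWORD '{password}';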
CREATE VIEW creates a view. A view is a virtual table, not a base table. A database only stores the definition of a view and does not store its data. The data is still stored in the original base table. If data in the base table changes, the data in the view changes accordingly. In this sense, a view is like a window through which users can know their interested data and data changes in the database.
+None
CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] VIEW view_name [ ( column_name [, ...] ) ]
    [ WITH ( {view_option_name [= view_option_value]} [, ... ] ) ]
    AS query;
Redefines the view if a view with the same name already exists.
+Creates a temporary view.
+Specifies the name of a view to be created. It is optionally schema-qualified.
+Value range: A string. It must comply with the naming convention.
+Specifies an optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.
+Value range: A string. It must comply with the naming convention.
+This clause specifies optional parameters for a view.
+Currently, the only parameter supported by view_option_name is security_barrier, which should be enabled when a view is intended to provide row-level security.
+Value range: boolean type. It can be TRUE or FALSE.
+A SELECT or VALUES statement which will provide the columns and rows of the view.
CTE names cannot be duplicated when the view decoupling function is enabled. The following shows an example.
CREATE TABLE t1(a1 INT, b1 INT);
CREATE TABLE t2(a2 INT, b2 INT, c2 INT);
CREATE OR REPLACE VIEW v1 AS WITH tmp AS (SELECT * FROM t2), tmp1 AS (SELECT b2, c2 FROM tmp WHERE b2 = (WITH RECURSIVE tmp(aa, bb) AS (SELECT a1, b1 FROM t1) SELECT bb FROM tmp WHERE aa = c2)) SELECT c2 FROM tmp1;
Create a view consisting of columns whose spcname is pg_default.
CREATE VIEW myView AS
    SELECT * FROM pg_tablespace WHERE spcname = 'pg_default';
Run the following command to redefine the existing view myView and create a view consisting of columns whose spcname is pg_global:
CREATE OR REPLACE VIEW myView AS
    SELECT * FROM pg_tablespace WHERE spcname = 'pg_global';
Create a view consisting of rows with c_customer_sk smaller than 150.
CREATE VIEW tpcds.customer_details_view_v1 AS
    SELECT * FROM tpcds.customer
    WHERE c_customer_sk < 150;
After the enable_view_update parameter is enabled, simple views that meet all the following conditions can be updated using the INSERT, UPDATE, and DELETE statements:
If the definition of the updatable view contains a WHERE condition, the condition restricts the UPDATE and DELETE statements from modifying rows on the base table. If the WHERE condition is not met after the UPDATE statement is executed, the updated rows cannot be queried in the view. Similarly, if the WHERE condition is not met after the INSERT statement is executed, the inserted data cannot be queried in the view. To insert, update, or delete data in a view, you must have the corresponding permission on the view and tables.
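A minimal sketch of that WHERE-condition behavior, assuming enable_view_update is on (all names here are hypothetical):

CREATE TABLE t_orders(id int, status text);
CREATE VIEW v_open_orders AS SELECT id, status FROM t_orders WHERE status = 'open';
INSERT INTO v_open_orders VALUES (1, 'open');
UPDATE v_open_orders SET status = 'closed' WHERE id = 1;
SELECT * FROM v_open_orders;  -- the updated row no longer satisfies the WHERE condition, so it is not returned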
+ALTER VIEW and DROP VIEW
CURSOR defines a cursor, which retrieves a few rows at a time from a query.
To process an SQL statement, a stored procedure allocates a memory segment as its context area. A cursor is a handle or pointer to that context area. With cursors, stored procedures can control how the context area changes.
CURSOR cursor_name
    [ BINARY ] [ NO SCROLL ] [ { WITH | WITHOUT } HOLD ]
    FOR query;
Specifies the name of a cursor to be created.
+Value range: Its value must comply with the database naming convention.
+Specifies that data retrieved by the cursor will be returned in binary format, not in text format.
+Specifies the mode of data retrieval by the cursor.
+Specifies whether the cursor can still be used after the cursor creation event.
A SELECT or VALUES clause that specifies the rows to be returned by the cursor.
+Value range: SELECT or VALUES clause
+Set up the cursor1 cursor.
CURSOR cursor1 FOR SELECT * FROM tpcds.customer_address ORDER BY 1;
Set up the cursor cursor2.
CURSOR cursor2 FOR VALUES(1,2),(0,3) ORDER BY 1;
An example of using the WITH HOLD cursor is as follows:
+Start a transaction.
START TRANSACTION;
Set up a WITH HOLD cursor.
DECLARE cursor3 CURSOR WITH HOLD FOR SELECT * FROM tpcds.customer_address ORDER BY 1;
Fetch the first two rows from cursor3.
FETCH FORWARD 2 FROM cursor3;
 ca_address_sk |  ca_address_id   | ca_street_number | ca_street_name | ca_street_type | ca_suite_number |  ca_city  |    ca_county    | ca_state | ca_zip | ca_country    | ca_gmt_offset | ca_location_type
---------------+------------------+------------------+----------------+----------------+-----------------+-----------+-----------------+----------+--------+---------------+---------------+------------------
             1 | AAAAAAAABAAAAAAA | 18               | Jackson        | Parkway        | Suite 280       | Fairfield | Maricopa County | AZ       | 86192  | United States |         -7.00 | condo
             2 | AAAAAAAACAAAAAAA | 362              | Washington 6th | RD             | Suite 80        | Fairview  | Taos County     | NM       | 85709  | United States |         -7.00 | condo
(2 rows)
End the transaction.
END;
Fetch the next row from cursor3.
FETCH FORWARD 1 FROM cursor3;
 ca_address_sk |  ca_address_id   | ca_street_number |   ca_street_name   | ca_street_type | ca_suite_number |     ca_city     |  ca_county  | ca_state | ca_zip | ca_country    | ca_gmt_offset | ca_location_type
---------------+------------------+------------------+--------------------+----------------+-----------------+-----------------+-------------+----------+--------+---------------+---------------+------------------
             3 | AAAAAAAADAAAAAAA | 585              | Dogwood Washington | Circle         | Suite Q         | Pleasant Valley | York County | PA       | 12477  | United States |         -5.00 | single family
(1 row)
Close a cursor.
CLOSE cursor3;
DROP DATABASE deletes a database.
DROP DATABASE [ IF EXISTS ] database_name;
Sends a notice instead of an error if the specified database does not exist.
+Specifies the name of the database to be deleted.
+Value range: A string indicating an existing database name.
+Delete the database named music.
DROP DATABASE music;
DROP FOREIGN TABLE deletes a specified foreign table.
DROP FOREIGN TABLE forcibly deletes a specified foreign table. After the table is deleted, any indexes that exist for it are also deleted, and functions and stored procedures that depend on the table can no longer run.
DROP FOREIGN TABLE [ IF EXISTS ]
    table_name [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified table does not exist.
+Specifies the name of the table.
+Value range: An existing table name.
+Delete the foreign table named customer_ft.
DROP FOREIGN TABLE customer_ft;
DROP FUNCTION deletes an existing function.
+If a function involves operations on temporary tables, the function cannot be deleted by running DROP FUNCTION.
DROP FUNCTION [ IF EXISTS ] function_name
    [ ( [ {[ argmode ] [ argname ] argtype} [, ...] ] ) [ CASCADE | RESTRICT ] ];
Sends a notice instead of an error if the function does not exist.
+Specifies the name of the function to be deleted.
+Value range: An existing function name.
+Specifies the mode of a function parameter.
+Specifies the name of a function parameter.
+Specifies the data types of a function parameter.
+Delete a function named add_two_number.
DROP FUNCTION add_two_number;
DROP GROUP deletes a user group.
+DROP GROUP is the alias for DROP ROLE.
+DROP GROUP is the internal interface encapsulated in the gs_om tool. You are not advised to use this interface, because doing so affects the cluster.
DROP GROUP [ IF EXISTS ] group_name [, ...];
See Examples in DROP ROLE.
+DROP INDEX deletes an index.
Only the owner of an index or a system administrator can run the DROP INDEX command.
DROP INDEX [ IF EXISTS ]
    index_name [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified index does not exist.
+Specifies the name of the index to be deleted.
+Value range: An existing index.
+Delete the ds_ship_mode_t1_index2 index.
DROP INDEX tpcds.ds_ship_mode_t1_index2;
DROP OWNED deletes the database objects of a database role.
+The role's permissions on all the database objects in the current database and shared objects (databases and tablespaces) are revoked.
DROP OWNED BY name [, ...] [ CASCADE | RESTRICT ];
Name of the role whose objects are to be deleted and whose permissions are to be revoked.
+DROP REDACTION POLICY deletes a data redaction policy applied to a specified table.
+Only the table owner has the permission to delete a data redaction policy.
DROP REDACTION POLICY [ IF EXISTS ] policy_name ON table_name;
Sends a notice instead of throwing an error if the redaction policy to be deleted does not exist.
+Specifies the name of a redaction policy.
+Specifies the name of the table to which the redaction policy is applied.
+Delete a data masking policy.
DROP REDACTION POLICY mask_emp ON emp;
DROP ROW LEVEL SECURITY POLICY deletes a row-level access control policy from a table.
+Only the table owner or administrators can delete a row-level access control policy from the table.
DROP [ ROW LEVEL SECURITY ] POLICY [ IF EXISTS ] policy_name ON table_name [ CASCADE | RESTRICT ];
Reports a notice instead of an error if the specified row-level access control policy does not exist.
+Delete the row-level access control policy.
DROP ROW LEVEL SECURITY POLICY all_data_rls ON all_data;
ALTER ROW LEVEL SECURITY POLICY, CREATE ROW LEVEL SECURITY POLICY
+DROP PROCEDURE deletes an existing stored procedure.
+None.
DROP PROCEDURE [ IF EXISTS ] procedure_name;
Sends a notice instead of an error if the stored procedure does not exist.
+Specifies the name of the stored procedure to be deleted.
+Value range: An existing stored procedure name.
+Delete a stored procedure.
DROP PROCEDURE prc_add;
DROP RESOURCE POOL deletes a resource pool.
+The resource pool cannot be deleted if it is associated with a role.
+The user must have the DROP permission in order to delete a resource pool.
DROP RESOURCE POOL [ IF EXISTS ] pool_name;
Sends a notice instead of an error if the specified resource pool does not exist.
+Specifies the name of a created resource pool.
+Value range: a string. It must comply with the naming convention.
+A resource pool can be independently deleted only when it is not associated with any users.
+Delete a resource pool.
DROP RESOURCE POOL pool1;
DROP ROLE deletes a specified role.
+If a "role is being used by other users" error is displayed when you run DROP ROLE, it might be that threads cannot respond to signals in a timely manner during the CLEAN CONNECTION process. As a result, connections are not completely cleared. In this case, you need to run CLEAN CONNECTION again.
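In that situation the cleanup might look like the following sketch; the exact CLEAN CONNECTION syntax is an assumption here, so check the statement reference for your version:

CLEAN CONNECTION TO ALL FORCE FOR DATABASE gaussdb TO USER jim;  -- clear remaining connections (syntax assumed)
DROP ROLE jim;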
DROP ROLE [ IF EXISTS ] role_name [, ...];
Sends a notice instead of an error if the specified role does not exist.
+Specifies the name of the role to be deleted.
+Value range: An existing role.
+DROP SCHEMA deletes a schema in a database.
+Only a schema owner or a system administrator can run the DROP SCHEMA command.
DROP SCHEMA [ IF EXISTS ] schema_name [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified schema does not exist.
+Specifies the name of a schema.
+Value range: An existing schema name.
Do not delete schemas whose names begin with pg_temp or pg_toast_temp. They are internal system schemas, and deleting them may cause unexpected errors.
+A user cannot delete the schema in use. To delete the schema in use, switch to another schema.
+Delete the ds_new schema.
DROP SCHEMA ds_new;
DROP SEQUENCE deletes a sequence from the current database.
+Only a sequence owner or a system administrator can delete a sequence.
DROP SEQUENCE [ IF EXISTS ] {[schema.]sequence_name} [ , ... ] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified sequence does not exist.
+Specifies the name of the sequence.
+Automatically deletes objects that depend on the sequence to be deleted.
+Refuses to delete the sequence if any objects depend on it. This is the default.
+Delete the sequence.
DROP SEQUENCE serial;
DROP SERVER deletes an existing data server.
+Only the server owner can delete a server.
DROP SERVER [ IF EXISTS ] server_name [ {CASCADE | RESTRICT} ];
Sends a notice instead of an error if the specified server does not exist.
+Specifies the name of a server.
+Delete the hdfs_server server.
DROP SERVER hdfs_server;
DROP SYNONYM is used to delete a synonym object.
+Only a synonym owner or a system administrator can run the DROP SYNONYM command.
DROP SYNONYM [ IF EXISTS ] synonym_name [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified synonym does not exist.
+Name of a synonym (optionally with schema names)
+Delete a synonym.
DROP SYNONYM t1;
DROP SCHEMA ot CASCADE;
DROP TABLE deletes a specified table.
DROP TABLE [ IF EXISTS ]
    { [schema.]table_name } [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified table does not exist.
+Specifies the schema name.
+Specifies the name of the table.
+Delete the warehouse_t1 table.
DROP TABLE tpcds.warehouse_t1;
DROP TEXT SEARCH CONFIGURATION deletes an existing text search configuration.
+To run the DROP TEXT SEARCH CONFIGURATION command, you must be the owner of the text search configuration.
DROP TEXT SEARCH CONFIGURATION [ IF EXISTS ] name [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified text search configuration does not exist.
+Specifies the name (optionally schema-qualified) of a text search configuration to be deleted.
+Automatically deletes objects that depend on the text search configuration to be deleted.
+Refuses to delete the text search configuration if any objects depend on it. This is the default.
+Delete the text search configuration ngram1.
DROP TEXT SEARCH CONFIGURATION ngram1;
ALTER TEXT SEARCH CONFIGURATION, CREATE TEXT SEARCH CONFIGURATION
+DROP TEXT SEARCH DICTIONARY deletes a full-text retrieval dictionary.
DROP TEXT SEARCH DICTIONARY [ IF EXISTS ] name [ CASCADE | RESTRICT ];
Reports a notice instead of throwing an error if the specified full-text retrieval dictionary does not exist.
+Specifies the name of a dictionary to be deleted. (If you do not specify a schema name, the dictionary in the current schema will be deleted by default.)
+Value range: name of an existing dictionary
+Automatically deletes dependent objects of a dictionary and then deletes all dependent objects of these objects in sequence.
+If any text search configuration that uses the dictionary exists, DROP execution will fail. You can add CASCADE to delete all text search configurations and dictionaries that use the dictionary.
+Rejects the deletion of a dictionary if any object depends on the dictionary. This is the default.
+Delete the english dictionary.
DROP TEXT SEARCH DICTIONARY english;
DROP TRIGGER deletes a trigger.
+Only the owner of a trigger and system administrators can run the DROP TRIGGER statement.
DROP TRIGGER [ IF EXISTS ] trigger_name ON table_name [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified trigger does not exist.
+Specifies the name of the trigger to be deleted.
+Value range: an existing trigger
+Specifies the name of the table where the trigger to be deleted is located.
+Value range: an existing table having a trigger
+Delete the trigger insert_trigger.
DROP TRIGGER insert_trigger ON test_trigger_src_tbl;
DROP TYPE deletes a user-defined data type. Only the type owner has permission to run this statement.
DROP TYPE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified type does not exist.
Specifies the name of the type to be deleted. It can be schema-qualified.
+Deletes objects (such as columns, functions, and operators) that depend on the type.
+RESTRICT
+Refuses to delete the type if any objects depend on it. This is the default.
+Delete the compfoo type.
DROP TYPE compfoo CASCADE;
DROP USER deletes a user. Deleting a user will also delete the schema having the same name as the user.
DROP USER [ IF EXISTS ] user_name [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified user does not exist.
+Specifies the name of a user to be deleted.
+Value range: An existing user name.
+Delete user jim.
DROP USER jim CASCADE;
DROP VIEW forcibly deletes an existing view in a database.
Only a view owner or a system administrator can run the DROP VIEW command.
DROP VIEW [ IF EXISTS ] view_name [, ...] [ CASCADE | RESTRICT ];
Sends a notice instead of an error if the specified view does not exist.
+Specifies the name of the view to be deleted.
+Value range: An existing view.
+Delete the myView view.
DROP VIEW myView;
Delete the customer_details_view_v2 view.
DROP VIEW public.customer_details_view_v2;
FETCH retrieves data using a previously-created cursor.
+A cursor has an associated position, which is used by FETCH. The cursor position can be before the first row of the query result, on any particular row of the result, or after the last row of the result.
FETCH [ direction { FROM | IN } ] cursor_name;
The optional direction clause takes one of the following values.
NEXT
| PRIOR
| FIRST
| LAST
| ABSOLUTE count
| RELATIVE count
| count
| ALL
| FORWARD
| FORWARD count
| FORWARD ALL
| BACKWARD
| BACKWARD count
| BACKWARD ALL
Defines the fetch direction.
+Valid value:
NEXT
Fetches the next row.
ABSOLUTE count
Fetches the (count)'th row of the query.
ABSOLUTE fetches are not any faster than navigating to the desired row with a relative move: the underlying implementation must traverse all the intermediate rows anyway.
count is a possibly-signed integer constant.
RELATIVE count
Fetches the (count)'th succeeding row, or the abs(count)'th prior row if count is negative.
count is a possibly-signed integer constant.
FORWARD count
Fetches the next count rows (same as RELATIVE count). FORWARD 0 re-fetches the current row.
BACKWARD count
Fetches the prior count rows (scanning backwards).
count is a possibly-signed integer constant.
+Specifies the cursor name using the keyword FROM or IN.
+Value range: an existing cursor name.
+Example 1: Run the SELECT statement to read a table using a cursor.
+Set up the cursor1 cursor.
CURSOR cursor1 FOR SELECT * FROM tpcds.customer_address ORDER BY 1;
Fetch the first three rows from cursor1.
FETCH FORWARD 3 FROM cursor1;
 ca_address_sk |  ca_address_id   | ca_street_number |   ca_street_name   | ca_street_type | ca_suite_number |     ca_city     |    ca_county    | ca_state | ca_zip |  ca_country   | ca_gmt_offset | ca_location_type
---------------+------------------+------------------+--------------------+----------------+-----------------+-----------------+-----------------+----------+--------+---------------+---------------+------------------
             1 | AAAAAAAABAAAAAAA | 18               | Jackson            | Parkway        | Suite 280       | Fairfield       | Maricopa County | AZ       | 86192  | United States |         -7.00 | condo
             2 | AAAAAAAACAAAAAAA | 362              | Washington 6th     | RD             | Suite 80        | Fairview        | Taos County     | NM       | 85709  | United States |         -7.00 | condo
             3 | AAAAAAAADAAAAAAA | 585              | Dogwood Washington | Circle         | Suite Q         | Pleasant Valley | York County     | PA       | 12477  | United States |         -5.00 | single family
(3 rows)
Example 2: Use a cursor to read the content in the VALUES clause.
+Set up the cursor cursor2.
CURSOR cursor2 FOR VALUES(1,2),(0,3) ORDER BY 1;
Fetch the first two rows from cursor2.
FETCH FORWARD 2 FROM cursor2;
 column1 | column2
---------+---------
       0 |       3
       1 |       2
(2 rows)
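The remaining direction keywords behave analogously. As a minimal sketch (assuming cursor1 from Example 1 is still open), the following repositions the cursor and re-reads rows:

FETCH ABSOLUTE 2 FROM cursor1;  -- returns the second row of the result set
FETCH PRIOR FROM cursor1;       -- moves back one row and returns the first row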
MOVE repositions a cursor without retrieving any data. MOVE works exactly like the FETCH command, except it only repositions the cursor and does not return rows.
+None
MOVE [ direction [ FROM | IN ] ] cursor_name;
The optional direction clause takes one of the following values.
NEXT
| PRIOR
| FIRST
| LAST
| ABSOLUTE count
| RELATIVE count
| count
| ALL
| FORWARD
| FORWARD count
| FORWARD ALL
| BACKWARD
| BACKWARD count
| BACKWARD ALL
MOVE command parameters are the same as FETCH command parameters. For details, see Parameter Description in FETCH.
+On successful completion, a MOVE command returns a command tag of the form MOVE count. The count is the number of rows that a FETCH command with the same parameters would have returned (possibly zero).
+Skip the first three rows of cursor1.
MOVE FORWARD 3 FROM cursor1;
REINDEX rebuilds an index using the data stored in the index's table, replacing the old copy of the index.
+There are several scenarios in which REINDEX can be used:
+An index build with the CONCURRENTLY option failed, leaving an "invalid" index.
+Index reconstruction of the REINDEX DATABASE or SYSTEM type cannot be performed in transaction blocks.
REINDEX { INDEX | TABLE | DATABASE | SYSTEM } name [ FORCE ];

REINDEX { TABLE } name
    PARTITION partition_name [ FORCE ];
Recreates the specified index.
+Recreates all indexes of the specified table. If the table has a secondary TOAST table, that is reindexed as well.
+Recreates all indexes within the current database. Indexes on the shared system directory will also be processed. This form of REINDEX cannot be executed within a transaction block.
+Recreates all indexes on system catalogs within the current database. Indexes on user tables are not processed.
+Name of the specific index, table, or database to be reindexed. Index and table names can be schema-qualified.
+REINDEX DATABASE and SYSTEM can create indexes for only the current database. Therefore, name must be the same as the current database name.
+This is an obsolete option. It is ignored if specified.
+Specifies the name of the partition or index partition to be reindexed.
+Value range:
+Index reconstruction of the REINDEX DATABASE or SYSTEM type cannot be performed in transaction blocks.
+Rebuild a single index.
REINDEX INDEX tpcds.tpcds_customer_index1;
Rebuild all indexes on the tpcds.customer_t1 table.
REINDEX TABLE tpcds.customer_t1;
RESET restores run-time parameters to their default values. The default values are the parameter defaults specified in the postgresql.conf configuration file.
+RESET is an alternative spelling for:
+SET configuration_parameter TO DEFAULT
+RESET and SET have the same transaction behavior. Their impact will be rolled back.
RESET {configuration_parameter | CURRENT_SCHEMA | TIME ZONE | TRANSACTION ISOLATION LEVEL | SESSION AUTHORIZATION | ALL };
Specifies the name of a settable run-time parameter.
+Value range: Run-time parameters. You can view them by running the SHOW ALL command.
Some parameters displayed by SHOW ALL cannot be set with SET, for example, max_datanodes.
+Specifies the current schema.
+Specifies the time zone.
+Specifies the transaction isolation level.
+Specifies the session authorization.
+Resets all settable run-time parameters to default values.
+Reset timezone to the default value.
RESET timezone;
Set all parameters to their default values.
RESET ALL;
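As noted above, the effect of RESET, like that of SET, is rolled back together with its transaction. A minimal sketch of this behavior:

START TRANSACTION;
RESET timezone;
ROLLBACK;
SHOW timezone;  -- still shows the value the session had before the transaction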
SET modifies a run-time parameter.
+Most run-time parameters can be modified by executing SET. Some parameters cannot be modified after a server or session starts.
SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT };

SET [ SESSION | LOCAL ]
    {CURRENT_SCHEMA { TO | = } { schema | DEFAULT }
    | SCHEMA 'schema'};

SET [ SESSION | LOCAL ] NAMES encoding_name;

SET [ SESSION | LOCAL ] XML OPTION { DOCUMENT | CONTENT };

SET [ LOCAL | SESSION ]
    { {config_parameter { { TO | = } { value | DEFAULT }
    | FROM CURRENT }}};
Indicates that the specified parameters take effect for the current session. This is the default value if neither SESSION nor LOCAL appears.
+If SET or SET SESSION is executed within a transaction that is later aborted, the effects of the SET command disappear when the transaction is rolled back. Once the surrounding transaction is committed, the effects will persist until the end of the session, unless overridden by another SET.
+Indicates that the specified parameters take effect for the current transaction. After COMMIT or ROLLBACK, the session-level setting takes effect again.
+The effects of SET LOCAL last only till the end of the current transaction, whether committed or not. A special case is SET followed by SET LOCAL within a single transaction: the SET LOCAL value will be seen until the end of the transaction, but afterwards (if the transaction is committed) the SET value will take effect.
+Indicates the local time zone for the current session.
+Value range: A valid local time zone. The corresponding run-time parameter is TimeZone. The default value is PRC.
+schema
+Indicates the current schema.
+Value range: An existing schema name.
+Indicates the current schema. Here the schema is a string.
+Example: set schema 'public';
+Indicates the client character encoding name. This command is equivalent to set client_encoding to encoding_name.
+Value range: A valid character encoding name. The run-time parameter corresponding to this option is client_encoding. The default encoding is UTF8.
+Indicates the XML resolution mode.
+Value range: CONTENT (default), DOCUMENT
+Indicates the configurable run-time parameters. You can use SHOW ALL to view available run-time parameters.
Some parameters displayed by SHOW ALL cannot be set with SET, for example, max_datanodes.
+Indicates the new value of the config_parameter parameter. This parameter can be specified as string constants, identifiers, numbers, or comma-separated lists of these. DEFAULT can be written to indicate resetting the parameter to its default value.
+Configure the search path of the tpcds schema.
SET search_path TO tpcds, public;
Set the date style to the traditional POSTGRES style (date placed before month).
SET datestyle TO postgres;
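The interaction of SESSION and LOCAL settings described above can be sketched as follows (a minimal example; the schema names are placeholders):

START TRANSACTION;
SET search_path TO tpcds, public;   -- session-level setting
SET LOCAL search_path TO public;    -- transaction-level setting
SHOW search_path;                   -- public, until the transaction ends
COMMIT;
SHOW search_path;                   -- tpcds, public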
SET CONSTRAINTS sets the behavior of constraint checking within the current transaction.
+IMMEDIATE constraints are checked at the end of each statement. DEFERRED constraints are not checked until transaction commit. Each constraint has its own IMMEDIATE or DEFERRED mode.
Upon creation, a constraint is given one of three characteristics: DEFERRABLE INITIALLY DEFERRED, DEFERRABLE INITIALLY IMMEDIATE, or NOT DEFERRABLE. The third class is always IMMEDIATE and is not affected by the SET CONSTRAINTS command. The first two classes start every transaction in the specified mode, but their behavior can be changed within a transaction by SET CONSTRAINTS.
SET CONSTRAINTS with a list of constraint names changes the mode of just those constraints (which must all be deferrable). If multiple constraints match a name, all of them are affected. SET CONSTRAINTS ALL changes the modes of all deferrable constraints.
+When SET CONSTRAINTS changes the mode of a constraint from DEFERRED to IMMEDIATE, the new mode takes effect retroactively: any outstanding data modifications that would have been checked at the end of the transaction are instead checked during the execution of the SET CONSTRAINTS command. If any such constraint is violated, the SET CONSTRAINTS fails (and does not change the constraint mode). Therefore, SET CONSTRAINTS can be used to force checking of constraints to occur at a specific point in a transaction.
+Only foreign key constraints are affected by this setting. Check and unique constraints are always checked immediately when a row is inserted or modified.
+SET CONSTRAINTS sets the behavior of constraint checking only within the current transaction. Therefore, if you execute this command outside of a transaction block (START TRANSACTION/COMMIT pair), it will not appear to have any effect.
SET CONSTRAINTS { ALL | { name } [, ...] } { DEFERRED | IMMEDIATE } ;
Specifies the constraint name.
+Value range: an existing constraint name, which can be found in the system catalog pg_constraint.
+Indicates all constraints.
+Indicates that constraints are not checked until transaction commit.
+Indicates that constraints are checked at the end of each statement.
Set constraints to be checked when a transaction is committed.
SET CONSTRAINTS ALL DEFERRED;
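A fuller sketch of deferred checking, using a hypothetical foreign key constraint order_cust_fk on a hypothetical table tpcds.order_t that was declared DEFERRABLE:

START TRANSACTION;
SET CONSTRAINTS order_cust_fk DEFERRED;
-- The next insert references a parent row that does not exist yet;
-- with the constraint deferred, it is not checked until commit.
INSERT INTO tpcds.order_t VALUES (1, 42);
INSERT INTO tpcds.customer_t VALUES (42);
COMMIT;  -- the foreign key is checked here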
SET ROLE sets the current user identifier of the current session.
SET [ SESSION | LOCAL ] ROLE role_name PASSWORD 'password';

RESET ROLE;
Specifies that the command takes effect only for the current session. This parameter is used by default.
+Indicates that the specified command takes effect only for the current transaction.
+Specifies the role name.
+Value range: A string. It must comply with the naming convention rule.
+Specifies the password of a role. It must comply with the password convention.
+Resets the current user identifier.
+Set the current user to paul.
SET ROLE paul PASSWORD '{password}';
View the current session user and the current user.
SELECT SESSION_USER, CURRENT_USER;
Reset the current user.
RESET role;
SET SESSION AUTHORIZATION sets the session user identifier and the current user identifier of the current SQL session to a specified user.
The session user identifier can be changed only if the initial session user has system administrator rights. Otherwise, the command is accepted only if it specifies the authenticated user name.
SET [ SESSION | LOCAL ] SESSION AUTHORIZATION role_name PASSWORD 'password';

{SET [ SESSION | LOCAL ] SESSION AUTHORIZATION DEFAULT
| RESET SESSION AUTHORIZATION};
Indicates that the specified parameters take effect for the current session.
+Indicates that the specified command takes effect only for the current transaction.
+User name.
+Value range: A string. It must comply with the naming convention.
+Specifies the password of a role. It must comply with the password convention.
+Reset the identifiers of the session and current users to the initially authenticated user names.
+Set the current user to paul.
SET SESSION AUTHORIZATION paul password '{password}';
View the current session user and the current user.
SELECT SESSION_USER, CURRENT_USER;
Reset the current user.
RESET SESSION AUTHORIZATION;
SHOW shows the current value of a run-time parameter. You can use the SET statement to set these parameters.
+Some parameters that can be viewed by SHOW are read-only. You can view but cannot modify their values.
SHOW
  {
    configuration_parameter |
    CURRENT_SCHEMA |
    TIME ZONE |
    TRANSACTION ISOLATION LEVEL |
    SESSION AUTHORIZATION |
    ALL
  };
See Parameter Description in RESET.
+Show the value of timezone.
SHOW timezone;
Show the current setting of the DateStyle parameter.
SHOW DateStyle;
Show the current setting of all parameters.
SHOW ALL;
TRUNCATE quickly removes all rows from a database table.
+It has the same effect as an unqualified DELETE on each table, but it is faster since it does not actually scan the tables. This is most useful on large tables.
+TRUNCATE obtains an ACCESS EXCLUSIVE lock on each table it operates on, which blocks all other concurrent operations on that table. If concurrent access to the table is required, use the DELETE command instead.
TRUNCATE [ TABLE ] [ ONLY ] {[[database_name.]schema_name.]table_name [ * ]} [, ... ]
    [ CONTINUE IDENTITY ] [ CASCADE | RESTRICT ];

ALTER TABLE [ IF EXISTS ] { [ ONLY ] [[database_name.]schema_name.]table_name
    | table_name *
    | ONLY ( table_name ) }
    TRUNCATE PARTITION { partition_name
    | FOR ( partition_value [, ...] ) } ;
If ONLY is specified, only the specified table is cleared. Otherwise, the table and all its subtables (if any) are cleared.
+Database name of the target table
+Schema name of the target table
+Specifies the name (optionally schema-qualified) of a target table.
+Value range: an existing table name
+Does not change the values of sequences. This is the default.
+Indicates the partition in the target partition table.
+Value range: An existing partition name.
+Specifies the value of the specified partition key.
+The value specified by PARTITION FOR can uniquely identify a partition.
+Value range: The partition key of the partition to be deleted.
+When the PARTITION FOR clause is used, the entire partition where partition_value is located is cleared.
+Clear the p1 partition of the customer_address table.
ALTER TABLE tpcds.customer_address TRUNCATE PARTITION p1;
Clear a partitioned table.
TRUNCATE TABLE tpcds.customer_address;
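The PARTITION FOR form clears the partition that contains the specified partition key value. A sketch, assuming the hypothetical value 500 falls within the key range of one of the table's partitions:

ALTER TABLE tpcds.customer_address TRUNCATE PARTITION FOR (500);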
VACUUM reclaims storage space occupied by tables or B-tree indexes. In normal database operation, rows that have been deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore, it is necessary to execute VACUUM periodically, especially on frequently-updated tables.
VACUUM [ ( { FULL | FREEZE | VERBOSE | {ANALYZE | ANALYSE }} [,...] ) ]
    [ table_name [ (column_name [, ...] ) ] ] [ PARTITION ( partition_name ) ];

VACUUM [ FULL [COMPACT] ] [ FREEZE ] [ VERBOSE ] [ table_name ] [ PARTITION ( partition_name ) ];

VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] { ANALYZE | ANALYSE } [ VERBOSE ]
    [ table_name [ (column_name [, ...] ) ] ] [ PARTITION ( partition_name ) ];

VACUUM DELTAMERGE [ table_name ];

VACUUM HDFSDIRECTORY [ table_name ];
Selects "FULL" vacuum, which can reclaim more space, but takes much longer and exclusively locks the table. This method also requires additional disk space, because it writes a new copy of the table and does not free the old copy until the operation is complete. Generally, this option is used only when a large amount of space needs to be reclaimed from a table.
+FULL options can also contain the COMPACT parameter, which is only used for the HDFS table. Specifying the COMPACT parameter improves VACUUM FULL operation performance.
+COMPACT and PARTITION cannot be used at the same time.
Using FULL will cause statistics to be lost. To collect statistics, add the ANALYZE keyword to VACUUM FULL.
+Is equivalent to executing VACUUM with the vacuum_freeze_min_age parameter set to zero.
+Prints a detailed vacuum activity report for each table.
+Updates statistics used by the planner to determine the most efficient way to execute a query.
Indicates the name (optionally schema-qualified) of a specific table to vacuum.
Value range: the name of an existing table. If omitted, all tables in the current database are vacuumed.
Indicates the name of a specific column to analyze.
Value range: the name of an existing column. If omitted, all columns are analyzed.
+HDFS table does not support PARTITION. COMPACT and PARTITION cannot be used at the same time.
Indicates the partition name of a specific table to vacuum. If omitted, all partitions are vacuumed.
+(For HDFS and column-store tables) Migrates data from the delta table to primary tables. If the data volume of the delta table is less than 60,000 rows, the data will not be migrated. Otherwise, the data will be migrated to HDFS, and the delta table will be cleared by TRUNCATE. For a column-store table, this operation always transfers all data in the delta table to the CU.
+The following DFX functions are provided to return the data storage in the delta table of a column-store table (for an HDFS table, it can be returned by EXPLAIN ANALYZE):
Deletes empty value-partition directories of an HDFS table from HDFS storage.
Vacuum all tables in the current database.
VACUUM;
Reclaim the space of partition P2 of the tpcds.web_returns_p1 table without updating statistics.
VACUUM FULL tpcds.web_returns_p1 PARTITION(P2);
Reclaim the tpcds.web_returns_p1 table and update statistics.
VACUUM FULL ANALYZE tpcds.web_returns_p1;
Vacuum all tables in the current database and collect statistics for the query optimizer.
VACUUM ANALYZE;
Vacuum only the reason table.
VACUUM (VERBOSE, ANALYZE) reason;
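For the DELTAMERGE form described above, a minimal sketch (tpcds.customer_t2 is a hypothetical column-store or HDFS table whose delta table holds unmerged rows):

VACUUM DELTAMERGE tpcds.customer_t2;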
Data Manipulation Language (DML) is used to perform operations on data in database tables, such as inserting, updating, querying, or deleting data.
+Inserting data refers to adding one or multiple records to a database table. For details, see INSERT.
+Modifying data refers to modifying one or multiple records in a database table. For details, see UPDATE.
+The database query statement SELECT is used to search required information in a database. For details, see SELECT.
+For details about how to delete data that meets specified conditions from a table, see DELETE.
+GaussDB(DWS) provides a statement for copying data between tables and files. For details, see COPY.
+GaussDB(DWS) provides multiple lock modes to control concurrent accesses to table data. For details, see LOCK.
+GaussDB(DWS) provides three statements for invoking functions. These statements are the same in the syntax structure. For details, see CALL.
+CALL calls defined functions or stored procedures.
+None
CALL [schema.] {func_name| procedure_name} ( param_expr );
Specifies the name of the schema where a function or stored procedure is located.
+Specifies the name of the function or stored procedure to be called.
+Value range: an existing function name
+Specifies a list of parameters in the function. Use := or => to separate a parameter name and its value. This method allows parameters to be placed in any order. If only parameter values are in the list, the value order must be the same as that defined in the function or stored procedure.
+Value range: names of existing function or stored procedure parameters
+The parameters include input parameters (whose name and type are separated by IN) and output parameters (whose name and type are separated by OUT). When you run the CALL statement to call a function or stored procedure, the parameter list must contain an output parameter for non-overloaded functions. You can set the output parameter to a variable or any constant. For details, see Examples. For an overloaded package function, the parameter list can have no output parameter, but the function may not be found. If an output parameter is contained, it must be a constant.
-- Create the function func_add_sql to compute the sum of two integers.
CREATE FUNCTION func_add_sql(num1 integer, num2 integer) RETURN integer
AS
BEGIN
RETURN num1 + num2;
END;
/

-- Call the function by parameter position.
CALL func_add_sql(1, 3);

-- Call the function with named parameters, using => or :=.
CALL func_add_sql(num1 => 1,num2 => 3);
CALL func_add_sql(num2 := 2, num1 := 3);

-- Delete the function.
DROP FUNCTION func_add_sql;

-- Create a function whose parameter list contains an output parameter.
CREATE FUNCTION func_increment_sql(num1 IN integer, num2 IN integer, res OUT integer)
RETURN integer
AS
BEGIN
res := num1 + num2;
END;
/

-- When calling with CALL, the output parameter must be supplied (here as a constant).
CALL func_increment_sql(1,2,1);

-- In an anonymous block, the output parameter can be a variable.
DECLARE
res int;
BEGIN
func_increment_sql(1, 2, res);
dbms_output.put_line(res);
END;
/

-- Create two overloaded package procedures.
create or replace procedure package_func_overload(col int, col2 out int) package
as
declare
    col_type text;
begin
    col := 122;
    dbms_output.put_line('two out parameters ' || col2);
end;
/

create or replace procedure package_func_overload(col int, col2 out varchar) package
as
declare
    col_type text;
begin
    col2 := '122';
    dbms_output.put_line('two varchar parameters ' || col2);
end;
/

-- For the overloaded procedures, the output parameter must be a constant.
call package_func_overload(1, 'test');
call package_func_overload(1, 1);

-- Delete the function.
DROP FUNCTION func_increment_sql;
COPY copies data between tables and files.
+COPY FROM copies data from a file to a table. COPY TO copies data from a table to a file.
COPY table_name [ ( column_name [, ...] ) ]
    FROM { 'filename' | STDIN }
    [ [ USING ] DELIMITERS 'delimiters' ]
    [ WITHOUT ESCAPING ]
    [ LOG ERRORS ]
    [ LOG ERRORS data ]
    [ REJECT LIMIT 'limit' ]
    [ [ WITH ] ( option [, ...] ) ]
    | copy_option
    | FIXED FORMATTER ( { column_name( offset, length ) } [, ...] ) [ ( option [, ...] ) | copy_option [ ...] ] ];
In the SQL syntax, FIXED, FORMATTER ( { column_name( offset, length ) } [, ...] ), and [ ( option [, ...] ) | copy_option [ ...] ] can be in any sequence.
COPY table_name [ ( column_name [, ...] ) ]
    TO { 'filename' | STDOUT }
    [ [ USING ] DELIMITERS 'delimiters' ]
    [ WITHOUT ESCAPING ]
    [ [ WITH ] ( option [, ...] ) ]
    | copy_option
    | FIXED FORMATTER ( { column_name( offset, length ) } [, ...] ) [ ( option [, ...] ) | copy_option [ ...] ] ];

COPY query
    TO { 'filename' | STDOUT }
    [ WITHOUT ESCAPING ]
    [ [ WITH ] ( option [, ...] ) ]
    | copy_option
    | FIXED FORMATTER ( { column_name( offset, length ) } [, ...] ) [ ( option [, ...] ) | copy_option [ ...] ] ];
The (query) form is incompatible with [USING] DELIMITERS: if the data of COPY TO comes from a query result, COPY TO cannot specify [USING] DELIMITERS.
FORMAT 'format_name'
| OIDS [ boolean ]
| DELIMITER 'delimiter_character'
| NULL 'null_string'
| HEADER [ boolean ]
| FILEHEADER 'header_file_string'
| FREEZE [ boolean ]
| QUOTE 'quote_character'
| ESCAPE 'escape_character'
| EOL 'newline_character'
| NOESCAPING [ boolean ]
| FORCE_QUOTE { ( column_name [, ...] ) | * }
| FORCE_NOT_NULL ( column_name [, ...] )
| ENCODING 'encoding_name'
| IGNORE_EXTRA_DATA [ boolean ]
| FILL_MISSING_FIELDS [ boolean ]
| COMPATIBLE_ILLEGAL_CHARS [ boolean ]
| DATE_FORMAT 'date_format_string'
| TIME_FORMAT 'time_format_string'
| TIMESTAMP_FORMAT 'timestamp_format_string'
| SMALLDATETIME_FORMAT 'smalldatetime_format_string'
OIDS
| NULL 'null_string'
| HEADER
| FILEHEADER 'header_file_string'
| FREEZE
| FORCE_NOT_NULL column_name [, ...]
| FORCE_QUOTE { column_name [, ...] | * }
| BINARY
| CSV
| QUOTE [ AS ] 'quote_character'
| ESCAPE [ AS ] 'escape_character'
| EOL 'newline_character'
| ENCODING 'encoding_name'
| IGNORE_EXTRA_DATA
| FILL_MISSING_FIELDS
| COMPATIBLE_ILLEGAL_CHARS
| DATE_FORMAT 'date_format_string'
| TIME_FORMAT 'time_format_string'
| TIMESTAMP_FORMAT 'timestamp_format_string'
| SMALLDATETIME_FORMAT 'smalldatetime_format_string'
Specifies a query whose results are to be copied.
+Value range: a SELECT or VALUES command in parentheses
+Specifies the name (optionally schema-qualified) of an existing table.
+Value range: an existing table name
+Indicates an optional list of columns to be copied.
+Value range: If no column list is specified, all columns of the table will be copied.
+Indicates that the input comes from the client application.
+Indicates that output goes to the client application.
+Fixes column length. When the column length is fixed, DELIMITER, NULL, and CSV cannot be specified. When FIXED is specified, BINARY, CSV, and TEXT cannot be specified by option or copy_option.
+The definition of fixed length:
+The string that separates columns within each row (line) of the file, and it cannot be larger than 10 bytes.
+Value range: The delimiter cannot include any of the following characters: \.abcdefghijklmnopqrstuvwxyz0123456789
+Value range: The default value is a tab character in text format and a comma in CSV format.
+In TEXT, do not escape a backslash (\) and the characters that follow it.
Value range: TEXT format only.
+If this parameter is specified, the error tolerance mechanism for data type errors in the COPY FROM statement is enabled. Row errors are recorded in the public.pgxc_copy_error_log table in the database for future reference.
+Value range: A value set while data is imported using COPY FROM.
+The restrictions of this error tolerance parameter are as follows:
+The differences between LOG ERRORS DATA and LOG ERRORS are as follows:
+If error content is too complex, it may fail to be written to the error tolerance table by using LOG ERRORS DATA, causing the task failure.
Used with the LOG ERRORS parameter to set the upper limit of the tolerated errors in the COPY FROM statement. If the number of errors exceeds the limit, later errors will be reported based on the original mechanism.
+Value range: a positive integer (1 to INTMAX) or unlimited
+Default value: If LOG ERRORS is not specified, an error will be reported. If LOG ERRORS is specified, the default value is 0.
+Different from the GDS error tolerance mechanism, in the error tolerance mechanism described in the description of LOG ERRORS, the count of REJECT LIMIT is calculated based on the number of data parsing errors on the CN where the COPY FROM statement is run, not based on the number of errors on each DN.
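A minimal sketch of the error-tolerance options described above, reusing the tpcds.ship_mode_t1 table created in the examples below; up to 10 parsing errors are recorded in public.pgxc_copy_error_log instead of aborting the load:

COPY tpcds.ship_mode_t1 FROM '/home/omm/ds_ship_mode.dat' LOG ERRORS REJECT LIMIT '10';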
Defines the location of each column in the data file in fixed-length mode, in the column(offset, length) format.
+Value range:
+The total length of all columns must be less than 1 GB.
+Replace columns that are not in the file with NULL.
+Specifies all types of parameters of a compatible foreign table.
+Specifies the format of the source data file in the foreign table.
+Value range: CSV, TEXT, FIXED, and BINARY.
+Default value: TEXT
+An error is raised if OIDs are specified for a table that does not have OIDs, or in the case of copying a query.
+Value range: true, on, false, and off
+Default value: false
+Value range: multi-character delimiter within 10 bytes.
+Default value:
+Specifies the string that represents a null value.
+Value range:
+Default value:
+Specifies whether a file contains a header with the names of each column in the file. header is available only for CSV and FIXED files.
When data is imported, if header is on, the first row of the data file will be identified as the header row and ignored. If header is off, the first row is identified as data.
When data is exported, if header is on, fileheader must be specified. If header is off, the exported file does not include a header row.
+Value range: true, on, false, and off
+Default value: false
+Specifies the quote character for a CSV file.
+Default value: double quotation mark ("")
+This option is allowed only when using CSV format. This must be a single one-byte character.
+Default value: the same as the value of QUOTE
+Specifies the newline character style of the imported or exported data file.
+Value range: multi-character newline characters within 10 bytes. Common newline characters include \r (0x0D), \n (0x0A), and \r\n (0x0D0A). Special newline characters include $ and #.
+Forces quoting to be used for all non-null values in each specified column. This option is allowed only in COPY TO, and only when using the CSV format. NULL values are not quoted.
+Value range: an existing column
+Does not match the specified columns' values against the null string. This option is allowed only in COPY FROM, and only when using the CSV format.
+Value range: an existing column
+Specifies that the file is encoded in the encoding_name. If this option is omitted, the current encoding format is used by default.
Specifies whether to ignore extra columns at the end of a row when a row in the data source file contains more columns than the target table. This parameter is available only during data import.
Value range: true/on, false/off.
Default value: false.
If this parameter is off and a row contains more columns than the target table, the following error is reported:
extra data after last expected column
+If the newline character at the end of the row is lost, setting the parameter to true will ignore data in the next row.
+Enables or disables fault tolerance on invalid characters during importing. This parameter is available only for COPY FROM.
+Value range: true, on, false, and off
+Default value: false or off
+The rule of error tolerance when you import invalid characters is as follows:
+(1) \0 is converted to a space.
+(2) Other invalid characters are converted to question marks.
(3) If compatible_illegal_chars is set to true or on, invalid characters are tolerated. If NULL, DELIMITER, QUOTE, or ESCAPE is set to a space or a question mark, an error such as "illegal chars conversion may confuse COPY escape 0x20" will be displayed, prompting you to modify the parameter values that cause the confusion and preventing import errors.
+Specifies whether to generate an error message when the last column in a row in the source file is lost during data loading.
+Value range: true, on, false, and off
+Default value: false or off
+Imports data of the DATE type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: any valid DATE value. For details, see Date and Time Processing Functions and Operators.
+If ORACLE is specified as the compatible database, the DATE format is TIMESTAMP. For details, see timestamp_format below.
+Imports data of the TIME type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: Valid TIME. Time zones cannot be used. For details, see Date and Time Processing Functions and Operators.
+Imports data of the TIMESTAMP type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: any valid TIMESTAMP value. Time zones are not supported. For details, see Date and Time Processing Functions and Operators.
+Imports data of the SMALLDATETIME type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: any valid SMALLDATETIME value. For details, see Date and Time Processing Functions and Operators.
+Specifies all types of native parameters of COPY.
+When using COPY FROM, any data item that matches this string will be stored as a NULL value, so you should make sure that you use the same string as you used with COPY TO.
+Value range:
+Default value:
+Specifies whether a file contains a header with the names of each column in the file. header is available only for CSV and FIXED files.
When data is imported, if header is on, the first row of the data file will be identified as the header row and ignored. If header is off, the first row is identified as data.
When data is exported, if header is on, fileheader must be specified. If header is off, the exported file does not include a header row.
+Specifies a file that defines the content in the header for exported data. The file contains data description of each column.
Sets the rows loaded by COPY as frozen, as if the data had been processed by VACUUM FREEZE.
+This is a performance option of initial data loading. The data will be frozen only when the following three requirements are met:
+When COPY is completed, all the other sessions will see the data immediately. This violates the normal rules of MVCC visibility and users should be aware of the potential problems this might cause.
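A minimal sketch of the FREEZE option, assuming the target table is truncated in the same transaction so that the loaded rows can actually be frozen:

START TRANSACTION;
TRUNCATE tpcds.ship_mode_t1;
COPY tpcds.ship_mode_t1 FROM '/home/omm/ds_ship_mode.dat' FREEZE;
COMMIT;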
+Does not match the specified columns' values against the null string. This option is allowed only in COPY FROM, and only when using the CSV format.
+Value range: an existing column
+Forces quoting to be used for all non-NULL values in each specified column. This option is allowed only in COPY TO, and only when using the CSV format. NULL values are not quoted.
+Value range: an existing column
+The binary format option causes all data to be stored/read as binary format rather than as text. In binary mode, you cannot declare DELIMITER, NULL, or CSV. After specifying BINARY, CSV, FIXED and TEXT cannot be specified through option or copy_option.
+Enables the CSV mode. After CSV is specified, BINARY, FIXED and TEXT cannot be specified through option or copy_option.
+Specifies the quote character for a CSV file.
+Default value: double quotation mark ("")
+This option is allowed only when using CSV format. This must be a single one-byte character.
+The default value is a double quotation mark ("). If it is the same as the value of quote, it will be replaced with \0.
+Specifies the newline character style of the imported or exported data file.
+Value range: multi-character newline characters within 10 bytes. Common newline characters include \r (0x0D), \n (0x0A), and \r\n (0x0D0A). Special newline characters include $ and #.
+Specifies that the file is encoded in the encoding_name.
+Value range: a valid encoding format
+Default value: current encoding format of the database
If a row in the data source file contains more columns than the target table, the extra columns at the end of the row are ignored. This parameter is available only during data import. If this parameter is not specified, the following error is reported when a row contains extra columns:
extra data after last expected column
Specifies error tolerance for invalid characters during importing. Invalid characters are converted before importing. No error message is displayed. The import is not interrupted. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+If you do not use this parameter, an error occurs when there is an invalid character, and the import stops.
+The rule of error tolerance when you import invalid characters is as follows:
+(1) \0 is converted to a space.
+(2) Other invalid characters are converted to question marks.
+(3) Setting compatible_illegal_chars to true/on enables toleration of invalid characters. If NULL, DELIMITER, QUOTE, and ESCAPE are set to spaces or question marks, errors like "illegal chars conversion may confuse COPY escape 0x20" will be displayed to prompt the user to modify parameters that may cause confusion, preventing importing errors.
+Specifies whether to generate an error message when the last column in a row in the source file is lost during data loading.
+Value range: true, on, false, and off
+Default value: false or off
Do not specify this option. Currently, it does not enable error tolerance, but will make the parser ignore such errors during data parsing on the CN. Such errors will not be recorded in the COPY error table (enabled using LOG ERRORS REJECT LIMIT) but will be reported later by DNs.
+Imports data of the DATE type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: any valid DATE value. For details, see Date and Time Processing Functions and Operators.
+If ORACLE is specified as the compatible database, the DATE format is TIMESTAMP. For details, see timestamp_format below.
+Imports data of the TIME type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: Valid TIME. Time zones cannot be used. For details, see Date and Time Processing Functions and Operators.
+Specifies the TIMESTAMP format for data import. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: any valid TIMESTAMP value. Time zones are not supported. For details, see Date and Time Processing Functions and Operators.
+Imports data of the SMALLDATETIME type. The BINARY format is not supported. When data of such format is imported, error "cannot specify bulkload compatibility options in BINARY mode" will occur. The parameter is valid only for data importing using the COPY FROM option.
+Value range: any valid SMALLDATETIME value. For details, see Date and Time Processing Functions and Operators.
Copy data from the tpcds.ship_mode table to the /home/omm/ds_ship_mode.dat file.
COPY tpcds.ship_mode TO '/home/omm/ds_ship_mode.dat';

Write the tpcds.ship_mode table to stdout.
COPY tpcds.ship_mode TO stdout;

Create the tpcds.ship_mode_t1 table.
CREATE TABLE tpcds.ship_mode_t1
(
    SM_SHIP_MODE_SK   INTEGER NOT NULL,
    SM_SHIP_MODE_ID   CHAR(16) NOT NULL,
    SM_TYPE           CHAR(30),
    SM_CODE           CHAR(10),
    SM_CARRIER        CHAR(20),
    SM_CONTRACT       CHAR(20)
)
WITH (ORIENTATION = COLUMN,COMPRESSION=MIDDLE)
DISTRIBUTE BY HASH(SM_SHIP_MODE_SK );

Copy data from stdin to the tpcds.ship_mode_t1 table.
COPY tpcds.ship_mode_t1 FROM stdin;
Copy data from the /home/omm/ds_ship_mode.dat file to the tpcds.ship_mode_t1 table.
COPY tpcds.ship_mode_t1 FROM '/home/omm/ds_ship_mode.dat';
Copy data from the /home/omm/ds_ship_mode.dat file to the tpcds.ship_mode_t1 table, with the import format set to TEXT (format 'text'), the delimiter set to '\t' (delimiter E'\t'), excessive columns ignored (ignore_extra_data 'true'), and characters not escaped (noescaping 'true').
COPY tpcds.ship_mode_t1 FROM '/home/omm/ds_ship_mode.dat' WITH(format 'text', delimiter E'\t', ignore_extra_data 'true', noescaping 'true');
Copy data from the /home/omm/ds_ship_mode.dat file to the tpcds.ship_mode_t1 table, with the import format set to FIXED, fixed-length format specified (FORMATTER(SM_SHIP_MODE_SK(0, 2), SM_SHIP_MODE_ID(2,16), SM_TYPE(18,30), SM_CODE(50,10), SM_CARRIER(61,20), SM_CONTRACT(82,20))), excessive columns ignored (ignore_extra_data), and headers included (header).
COPY tpcds.ship_mode_t1 FROM '/home/omm/ds_ship_mode.dat' FIXED FORMATTER(SM_SHIP_MODE_SK(0, 2), SM_SHIP_MODE_ID(2,16), SM_TYPE(18,30), SM_CODE(50,10), SM_CARRIER(61,20), SM_CONTRACT(82,20)) header ignore_extra_data;
Delete the tpcds.ship_mode_t1 table.
DROP TABLE tpcds.ship_mode_t1;
DELETE deletes rows that satisfy the WHERE clause from the specified table. If the WHERE clause does not exist, all rows in the table will be deleted. The result is a valid but empty table.
[ WITH [ RECURSIVE ] with_query [, ...] ]
DELETE FROM [ ONLY ] table_name [ * ] [ [ AS ] alias ]
    [ USING using_list ]
    [ WHERE condition | WHERE CURRENT OF cursor_name ]
    [ RETURNING { * | { output_expr [ [ AS ] output_name ] } [, ...] } ];
The WITH clause allows you to specify one or more subqueries that can be referenced by name in the primary query, equivalent to a temporary table.
+If RECURSIVE is specified, it allows a SELECT subquery to reference itself by name.
The detailed format of with_query is as follows:
with_query_name [ ( column_name [, ...] ) ] AS ( {select | values | insert | update | delete} )

with_query_name specifies the name of the result set generated by a subquery. Such names can be used to access the result sets of subqueries in the query.
column_name specifies the column name displayed in the subquery result set.
Each subquery can be a SELECT, VALUES, INSERT, UPDATE, or DELETE statement.
If ONLY is specified, rows are deleted only from the named table. If ONLY is not specified, rows are deleted from the table and all its sub-tables.
+Specifies the name (optionally schema-qualified) of a target table.
+Value range: an existing table name
+Specifies the alias for the target table.
+Value range: a string. It must comply with the naming convention.
+Specifies the USING clause.
+Specifies an expression that returns a value of type boolean. Only rows for which this expression returns true will be deleted.
+Not supported currently. Only syntax interface is provided.
+Specifies an expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any column names of the table. Write * to return all columns.
+Specifies a name to use for a returned column.
+Value range: a string. It must comply with the naming convention.
+Create the tpcds.customer_address_bak table.
CREATE TABLE tpcds.customer_address_bak AS TABLE tpcds.customer_address;
Delete the rows whose ca_address_sk is less than 14888 from the tpcds.customer_address_bak table.
DELETE FROM tpcds.customer_address_bak WHERE ca_address_sk < 14888;
Delete the rows whose ca_address_sk is 14891, 14893, or 14895 from tpcds.customer_address_bak.
DELETE FROM tpcds.customer_address_bak WHERE ca_address_sk in (14891,14893,14895);
Delete all data in the tpcds.customer_address_bak table.
DELETE FROM tpcds.customer_address_bak;
Use a subquery that deletes all rows from the row-store table tpcds.warehouse_t30 and returns them as the temporary table temp_t, and then query all data in temp_t.
WITH temp_t AS (DELETE FROM tpcds.warehouse_t30 RETURNING *) SELECT * FROM temp_t ORDER BY 1;
EXPLAIN shows the execution plan of an SQL statement.
+The execution plan shows how the tables referenced by the SQL statement will be scanned, for example, by plain sequential scan or index scan. If multiple tables are referenced, the execution plan also shows what join algorithms will be used to bring together the required rows from each input table.
+The most critical part of the display is the estimated statement execution cost, which is the planner's guess at how long it will take to run the statement.
+The ANALYZE option causes the statement to be executed, not only planned. Then actual runtime statistics are added to the display, including the total elapsed time expended within each plan node (in milliseconds) and the total number of rows it actually returned. This is useful to check whether the planner's estimates are close to reality.
+The statement is executed when the ANALYZE option is used. To use EXPLAIN ANALYZE on an INSERT, UPDATE, DELETE, CREATE TABLE AS, or EXECUTE statement without letting the command affect your data, use this approach:
START TRANSACTION;
EXPLAIN ANALYZE ...;
ROLLBACK;
EXPLAIN [ ( option [, ...] ) ] statement;
The syntax of the option clause is as follows:
ANALYZE [ boolean ] |
    ANALYSE [ boolean ] |
    VERBOSE [ boolean ] |
    COSTS [ boolean ] |
    CPU [ boolean ] |
    DETAIL [ boolean ] |
    NODES [ boolean ] |
    NUM_NODES [ boolean ] |
    BUFFERS [ boolean ] |
    TIMING [ boolean ] |
    PLAN [ boolean ] |
    FORMAT { TEXT | XML | JSON | YAML }
EXPLAIN { [ { ANALYZE | ANALYSE } ] [ VERBOSE ] | PERFORMANCE } statement;
EXPLAIN ( STATS [ boolean ] ) statement;
Specifies the SQL statement to explain.
+Displays the actual run times and other statistics.
+Valid value:
+Displays additional information regarding the plan.
+Valid value:
+Includes information on the estimated total cost of each plan node, as well as the estimated number of rows and the estimated width of each row.
+Valid value:
+Prints information on CPU usage.
+Valid value:
+Prints DN information.
+Valid value:
Prints information about the nodes on which the query is executed.
+Valid value:
Prints the number of executing nodes.
+Valid value:
+Includes information on buffer usage.
+Valid value:
Includes the actual startup time and the time spent in output nodes.
+Valid value:
+Specifies whether to store the execution plan in PLAN_TABLE. If this parameter is set to on, the execution plan is stored in PLAN_TABLE and is not displayed on the screen. Therefore, this parameter cannot be used together with other parameters when it is set to on.
+Valid value:
+Specifies the output format.
+Value range: TEXT, XML, JSON, and YAML.
+Default value: TEXT
This option prints all relevant information during execution.
+Specifies whether to display information required for reproducing the execution plan of an SQL statement, including the object definition, statistics, and configuration parameters. The information is usually used for fault locating.
+Valid value:
+Create the tpcds.customer_address_p1 table.
CREATE TABLE tpcds.customer_address_p1 AS TABLE tpcds.customer_address;
Change the value of explain_perf_mode to normal.
SET explain_perf_mode=normal;
Display the execution plan of a simple query on the table.
EXPLAIN SELECT * FROM tpcds.customer_address_p1;
                                  QUERY PLAN
----------------------------------------------------------------------------
 Data Node Scan on "__REMOTE_FQS_QUERY__"  (cost=0.00..0.00 rows=0 width=0)
   Node/s: All datanodes
(2 rows)
Generate an execution plan in JSON format (assume explain_perf_mode is set to normal).
EXPLAIN(FORMAT JSON) SELECT * FROM tpcds.customer_address_p1;
                     QUERY PLAN
---------------------------------------------------
 [
   {
     "Plan": {
       "Node Type": "Data Node Scan",
       "RemoteQuery name": "__REMOTE_FQS_QUERY__",
       "Alias": "__REMOTE_FQS_QUERY__",
       "Startup Cost": 0.00,
       "Total Cost": 0.00,
       "Plan Rows": 0,
       "Plan Width": 0,
       "Nodes": "All datanodes"
     }
   }
 ]
(1 row)
If there is an index and we use a query with an indexable WHERE condition, EXPLAIN might show a different plan.
EXPLAIN SELECT * FROM tpcds.customer_address_p1 WHERE ca_address_sk=10000;
                                   QUERY PLAN
------------------------------------------------------------------------------
 Data Node Scan on "__REMOTE_LIGHT_QUERY__"  (cost=0.00..0.00 rows=0 width=0)
   Node/s: datanode2
(2 rows)
Generate an execution plan in YAML format (assume explain_perf_mode is set to normal).
EXPLAIN(FORMAT YAML) SELECT * FROM tpcds.customer_address_p1 WHERE ca_address_sk=10000;
           QUERY PLAN
------------------------------------------------
 - Plan:
     Node Type: "Data Node Scan"
     RemoteQuery name: "__REMOTE_LIGHT_QUERY__"
     Alias: "__REMOTE_LIGHT_QUERY__"
     Startup Cost: 0.00
     Total Cost: 0.00
     Plan Rows: 0
     Plan Width: 0
     Nodes: "datanode2"
(1 row)
Here is an example of an execution plan with cost estimates suppressed.
EXPLAIN(COSTS FALSE)SELECT * FROM tpcds.customer_address_p1 WHERE ca_address_sk=10000;
                 QUERY PLAN
--------------------------------------------
 Data Node Scan on "__REMOTE_LIGHT_QUERY__"
   Node/s: datanode2
(2 rows)
Here is an example of an execution plan for a query that uses an aggregate function.
EXPLAIN SELECT SUM(ca_address_sk) FROM tpcds.customer_address_p1 WHERE ca_address_sk<10000;
                                       QUERY PLAN
---------------------------------------------------------------------------------------
 Aggregate  (cost=18.19..14.32 rows=1 width=4)
   ->  Streaming (type: GATHER)  (cost=18.19..14.32 rows=3 width=4)
         Node/s: All datanodes
         ->  Aggregate  (cost=14.19..14.20 rows=3 width=4)
               ->  Seq Scan on customer_address_p1  (cost=0.00..14.18 rows=10 width=4)
                     Filter: (ca_address_sk < 10000)
(6 rows)
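The STATS option can be exercised in the same way; a minimal sketch (the output, which includes the object definitions, statistics, and configuration parameters needed to reproduce the plan, is omitted here):

EXPLAIN (STATS on) SELECT * FROM tpcds.customer_address_p1;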
Delete the tpcds.customer_address_p1 table.
DROP TABLE tpcds.customer_address_p1;
You can run the EXPLAIN PLAN statement to save the information about an execution plan to the PLAN_TABLE table. Different from the EXPLAIN statement, EXPLAIN PLAN only stores plan information and does not print it on the screen.
EXPLAIN PLAN
[ SET STATEMENT_ID = string ]
FOR statement;
Stores plan information in PLAN_TABLE. If the storing is successful, EXPLAIN SUCCESS is returned.
+If the EXPLAIN PLAN statement does not contain SET STATEMENT_ID, the value of STATEMENT_ID is empty by default. In addition, the value of STATEMENT_ID cannot exceed 30 bytes. Otherwise, an error will be reported.
+You can perform the following steps to collect execution plans of SQL statements by running EXPLAIN PLAN:
+After the EXPLAIN PLAN statement is executed, plan information is automatically stored in PLAN_TABLE. INSERT, UPDATE, and ANALYZE cannot be performed on PLAN_TABLE.
+For details about PLAN_TABLE, see the PLAN_TABLE system view.
Run EXPLAIN PLAN to collect the plan:

explain plan set statement_id='TPCH-Q4' for
select
o_orderpriority,
count(*) as order_count
from
orders
where
o_orderdate >= '1993-07-01'::date
and o_orderdate < '1993-07-01'::date + interval '3 month'
and exists (
select
*
from
lineitem
where
l_orderkey = o_orderkey
and l_commitdate < l_receiptdate
)
group by
o_orderpriority
order by
o_orderpriority;

Query PLAN_TABLE to view the collected plan:

SELECT * FROM PLAN_TABLE;

Delete the data from PLAN_TABLE when it is no longer needed:

DELETE FROM PLAN_TABLE WHERE xxx;
For a query that cannot be pushed down, only such information as REMOTE_QUERY and CTE can be collected from PLAN_TABLE after EXPLAIN PLAN is executed.
explain plan set statement_id = 'test remote query' for
select
current_user
from
customer;

SELECT * FROM PLAN_TABLE;
LOCK TABLE obtains a table-level lock.
GaussDB(DWS) always tries to select the lock mode with minimum constraints when automatically requesting a lock for a command that references a table. Use LOCK if you need a stricter lock mode. For example, suppose an application runs a transaction at the Read Committed isolation level and needs to ensure that data in a table remains stable for the duration of the transaction. To achieve this, you could obtain the SHARE lock mode over the table before the query. This will prevent concurrent data changes and ensure subsequent reads of the table see a stable view of committed data, because the SHARE lock mode conflicts with the ROW EXCLUSIVE lock acquired by writers, and your LOCK TABLE name IN SHARE MODE statement will wait until any concurrent holders of ROW EXCLUSIVE mode locks commit or roll back. Therefore, once you obtain the lock, there are no uncommitted writes outstanding; furthermore, none can begin until you release the lock.
LOCK [ TABLE ] {[ ONLY ] name [, ...]| {name [ * ]} [, ...]}
    [ IN {ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE} MODE ]
    [ NOWAIT ];
Requested Lock Mode/Current Lock Mode | ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE
---|---|---|---|---|---|---|---|---
ACCESS SHARE | - | - | - | - | - | - | - | X
ROW SHARE | - | - | - | - | - | - | X | X
ROW EXCLUSIVE | - | - | - | - | X | X | X | X
SHARE UPDATE EXCLUSIVE | - | - | - | X | X | X | X | X
SHARE | - | - | X | X | - | X | X | X
SHARE ROW EXCLUSIVE | - | - | X | X | X | X | X | X
EXCLUSIVE | - | X | X | X | X | X | X | X
ACCESS EXCLUSIVE | X | X | X | X | X | X | X | X
LOCK parameters are as follows:
+The name (optionally schema-qualified) of an existing table to lock.
+The tables are locked one-by-one in the order specified in the LOCK TABLE command.
+Value range: an existing table name
Locks only this table. If ONLY is not specified, this table and all its sub-tables are locked.
+ACCESS SHARE allows only read operations on a table. In general, any SQL statements that only read a table and do not modify it will acquire this lock mode. The SELECT command acquires a lock of this mode on referenced tables.
+ROW SHARE allows concurrent read of a table but does not allow any other operations on the table.
SELECT FOR UPDATE and SELECT FOR SHARE automatically acquire the ROW SHARE lock on the target table and add the ACCESS SHARE lock to other referenced tables that are not selected FOR SHARE or FOR UPDATE.
Like ROW SHARE, ROW EXCLUSIVE allows concurrent read of a table, and it also allows modification of data in the table. UPDATE, DELETE, and INSERT automatically acquire the ROW EXCLUSIVE lock on the target table and add the ACCESS SHARE lock to other referenced tables. Generally, all commands that modify table data acquire the ROW EXCLUSIVE lock for tables.
+This mode protects a table against concurrent schema changes and VACUUM runs.
+Acquired by VACUUM (without FULL), ANALYZE, CREATE INDEX CONCURRENTLY, and some forms of ALTER TABLE.
+SHARE allows concurrent queries of a table but does not allow modification of the table.
+Acquired by CREATE INDEX (without CONCURRENTLY).
+SHARE ROW EXCLUSIVE protects a table against concurrent data changes, and is self-exclusive so that only one session can hold it at a time.
+No SQL statements automatically acquire this lock mode.
+EXCLUSIVE allows concurrent queries of the target table but does not allow any other operations.
+This mode allows only concurrent ACCESS SHARE locks; that is, only reads from the table can proceed in parallel with a transaction holding this lock mode.
+No SQL statements automatically acquire this lock mode on user tables. However, it will be acquired on some system tables in case of some operations.
+This mode guarantees that the holder is the only transaction accessing the table in any way.
+Acquired by the ALTER TABLE, DROP TABLE, TRUNCATE, REINDEX, CLUSTER, and VACUUM FULL commands.
+This is also the default lock mode for LOCK TABLE statements that do not specify a mode explicitly.
+Specifies that LOCK TABLE should not wait for any conflicting locks to be released: if the specified lock(s) cannot be acquired immediately without waiting, the transaction is aborted.
+If NOWAIT is not specified, LOCK TABLE obtains a table-level lock, waiting if necessary for any conflicting locks to be released.
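For instance, a minimal NOWAIT sketch (using the tpcds.reason table from the examples below): if another session already holds a conflicting lock, the statement fails immediately instead of waiting.

START TRANSACTION;
-- Reports an error instead of blocking if a conflicting lock is held by another session.
LOCK TABLE tpcds.reason IN ACCESS EXCLUSIVE MODE NOWAIT;
COMMIT;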
Obtain a SHARE lock on a primary key table before performing inserts into a foreign key table.

START TRANSACTION;

LOCK TABLE tpcds.reason IN SHARE MODE;

SELECT r_reason_desc FROM tpcds.reason WHERE r_reason_sk=5;
r_reason_desc
-----------
 Parts missing
(1 row)

COMMIT;

Obtain a SHARE ROW EXCLUSIVE lock on a primary key table before performing a delete operation.

CREATE TABLE tpcds.reason_t1 AS TABLE tpcds.reason;

START TRANSACTION;

LOCK TABLE tpcds.reason_t1 IN SHARE ROW EXCLUSIVE MODE;

DELETE FROM tpcds.reason_t1 WHERE r_reason_desc IN (SELECT r_reason_desc FROM tpcds.reason_t1 WHERE r_reason_sk < 6);

DELETE FROM tpcds.reason_t1 WHERE r_reason_sk = 7;

COMMIT;

Delete the tpcds.reason_t1 table.

DROP TABLE tpcds.reason_t1;
The MERGE INTO statement conditionally matches data in a target table with data in a source table. If a row matches, UPDATE is executed on the target table; if it does not match, INSERT is executed. This syntax lets you run UPDATE and INSERT in a single statement for convenience.
MERGE INTO table_name [ [ AS ] alias ]
USING { { table_name | view_name } | subquery } [ [ AS ] alias ]
ON ( condition )
[
  WHEN MATCHED THEN
  UPDATE SET { column_name = { expression | DEFAULT } |
          ( column_name [, ...] ) = ( { expression | DEFAULT } [, ...] ) } [, ...]
  [ WHERE condition ]
]
[
  WHEN NOT MATCHED THEN
  INSERT { DEFAULT VALUES |
  [ ( column_name [, ...] ) ] VALUES ( { expression | DEFAULT } [, ...] ) [, ...] [ WHERE condition ] }
];
Specifies the target table that is being updated or has data being inserted. It cannot be a replication table.
+Specifies the name of the target table.
+Specifies the alias of the target table.
+Value range: a string. It must comply with the naming convention.
+Specifies the source table, which can be a table, view, or subquery.
+Specifies the condition used to match data between the source and target tables. Columns in the condition cannot be updated.
+Performs the UPDATE operation if data in the source table matches that in the target table based on the condition.
+Distribution keys cannot be updated. System catalogs and system columns cannot be updated.
+Specifies that the INSERT operation is performed if data in the source table does not match that in the target table based on the condition.
+The INSERT clause is not allowed to contain multiple VALUES.
The order of the WHEN MATCHED and WHEN NOT MATCHED clauses can be reversed, and either clause can be omitted; however, two WHEN MATCHED clauses or two WHEN NOT MATCHED clauses cannot be specified at the same time.
+Specifies the default value of a column.
+It will be NULL if no specific default value has been assigned to it.
+Specifies the conditions for the UPDATE and INSERT clauses. The two clauses will be executed only when the conditions are met. The default value can be used. System columns cannot be referenced in WHERE condition.
+Create the target table products and source table newproducts, and insert data to them.
CREATE TABLE products
(
product_id INTEGER,
product_name VARCHAR2(60),
category VARCHAR2(60)
);

INSERT INTO products VALUES (1501, 'vivitar 35mm', 'electrncs');
INSERT INTO products VALUES (1502, 'olympus is50', 'electrncs');
INSERT INTO products VALUES (1600, 'play gym', 'toys');
INSERT INTO products VALUES (1601, 'lamaze', 'toys');
INSERT INTO products VALUES (1666, 'harry potter', 'dvd');

CREATE TABLE newproducts
(
product_id INTEGER,
product_name VARCHAR2(60),
category VARCHAR2(60)
);

INSERT INTO newproducts VALUES (1502, 'olympus camera', 'electrncs');
INSERT INTO newproducts VALUES (1601, 'lamaze', 'toys');
INSERT INTO newproducts VALUES (1666, 'harry potter', 'toys');
INSERT INTO newproducts VALUES (1700, 'wait interface', 'books');

Run MERGE INTO.

MERGE INTO products p
USING newproducts np
ON (p.product_id = np.product_id)
WHEN MATCHED THEN
  UPDATE SET p.product_name = np.product_name, p.category = np.category WHERE p.product_name != 'play gym'
WHEN NOT MATCHED THEN
  INSERT VALUES (np.product_id, np.product_name, np.category) WHERE np.category = 'books';
MERGE 4

Query updates.

SELECT * FROM products ORDER BY product_id;
 product_id | product_name   | category
------------+----------------+-----------
       1501 | vivitar 35mm   | electrncs
       1502 | olympus camera | electrncs
       1600 | play gym       | toys
       1601 | lamaze         | toys
       1666 | harry potter   | toys
       1700 | wait interface | books
(6 rows)

Delete the tables.

DROP TABLE products;
DROP TABLE newproducts;
INSERT inserts new rows into a table.
If you insert multi-byte character data (such as Chinese characters) into a database whose character set uses single-byte encoding (SQL_ASCII, LATIN1) and the data crosses the truncation position, the string is truncated based on bytes rather than characters, which can leave an unexpected partial character at the end. To obtain correct truncation results, adopt an encoding such as UTF8, in which no character's bytes straddle the truncation position.
[ WITH [ RECURSIVE ] with_query [, ...] ]
INSERT [ IGNORE | OVERWRITE ] INTO table_name [ AS alias ] [ ( column_name [, ...] ) ]
    { DEFAULT VALUES
    | VALUES {( { expression | DEFAULT } [, ...] ) }[, ...]
    | query }
    [ ON DUPLICATE KEY duplicate_action | ON CONFLICT [ conflict_target ] conflict_action ]
    [ RETURNING {* | {output_expression [ [ AS ] output_name ] }[, ...]} ];

where duplicate_action can be:

    UPDATE { column_name = { expression | DEFAULT } |
             ( column_name [, ...] ) = ( { expression | DEFAULT } [, ...] )
           } [, ...]

and conflict_target can be one of:

    ( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ]
    ON CONSTRAINT constraint_name

and conflict_action is one of:

    DO NOTHING
    DO UPDATE SET { column_name = { expression | DEFAULT } |
                    ( column_name [, ...] ) = ( { expression | DEFAULT } [, ...] )
                  } [, ...]
           [ WHERE condition ]
The WITH clause allows you to specify one or more subqueries that can be referenced by name in the primary query, similar to a temporary table.
If RECURSIVE is specified, it allows a SELECT subquery to reference itself by name.
The detailed format of with_query is as follows:
with_query_name [ ( column_name [, ...] ) ] AS ( {select | values | insert | update | delete} )
with_query_name specifies the name of the result set generated by a subquery. Such names can be used to access the result sets of subqueries in the query.
column_name specifies a column name displayed in the subquery result set.
Each subquery can be a SELECT, VALUES, INSERT, UPDATE, or DELETE statement.
+Specifies that the data that duplicates an existing primary key or unique key value will be ignored.
+For details, see UPSERT.
+Specifies the overwrite mode. After this mode is used, the original data is cleared and only the newly inserted data exists.
+You can specify the columns on which OVERWRITE takes effect, and the other columns will keep their original data. If a column has no original data, its value is NULL.
+Specifies the name of the target table.
+Value range: an existing table name
+Specifies an alias for the target table table_name. alias indicates the alias name.
+Specifies the name of a column in a table.
+Value range: an existing column name
+Specifies an expression or a value to assign to the corresponding column.
+Example:
create table tt01 (id int,content varchar(50));
NOTICE: The 'DISTRIBUTE BY' clause is not specified. Using 'id' as the distribution column by default.
HINT: Please use 'DISTRIBUTE BY' clause to specify suitable data distribution column.
CREATE TABLE
insert into tt01 values (1,'Jack say ''hello''');
INSERT 0 1
insert into tt01 values (2,'Rose do 50%');
INSERT 0 1
insert into tt01 values (3,'Lilei say ''world''');
INSERT 0 1
insert into tt01 values (4,'Hanmei do 100%');
INSERT 0 1
select * from tt01;
 id |      content
----+-------------------
  3 | Lilei say 'world'
  4 | Hanmei do 100%
  1 | Jack say 'hello'
  2 | Rose do 50%
(4 rows)
drop table tt01;
DROP TABLE
All columns will be filled with their default values. The value is NULL if no specified default value has been assigned to it.
+Specifies a query statement (SELECT statement) that uses the query result as the inserted data.
+Specifies that the data that duplicates an existing primary key or unique key value will be updated.
+duplicate_action specifies the columns and data to be updated.
+For details, see UPSERT.
+Specifies that the data that duplicates an existing primary key or unique key value will be ignored or updated.
+conflict_target specifies the column name index_column_name, expression index_expression that contains multiple column names, or constraint name constraint_name. It is used to infer whether there is a unique index from the column name, the expression that contains multiple column names, or the constraint name. index_column_name and index_expression must comply with the index column format of CREATE INDEX.
+conflict_action specifies the policy to be executed upon a primary key or unique constraint conflict. There are two available actions:
+For details, see UPSERT.
+Returns the inserted rows. The syntax of the RETURNING list is identical to that of the output list of SELECT.
+An expression used to calculate the output of the INSERT command after each row is inserted.
+Value range: The expression can use any field in the table. Write * to return all columns of the inserted row(s).
+A name to use for a returned column.
+Value range: a string. It must comply with the naming convention.
+Create the reason_t1 table.
CREATE TABLE reason_t1
(
    TABLE_SK INTEGER ,
    TABLE_ID VARCHAR(20) ,
    TABLE_NA VARCHAR(20)
);

Insert a record into a table.

INSERT INTO reason_t1(TABLE_SK, TABLE_ID, TABLE_NA) VALUES (1, 'S01', 'StudentA');

Insert a record into a table. This command is equivalent to the previous one.

INSERT INTO reason_t1 VALUES (1, 'S01', 'StudentA');

Insert records whose TABLE_SK is less than 1 into the table.

INSERT INTO reason_t1 SELECT * FROM reason_t1 WHERE TABLE_SK < 1;

Insert records into the table.

INSERT INTO reason_t1 VALUES (1, 'S01', 'StudentA'),(2, 'T01', 'TeacherA'),(3, 'T02', 'TeacherB');
SELECT * FROM reason_t1 ORDER BY 1;
 TABLE_SK | TABLE_ID | TABLE_NA
----------+----------+----------
        1 | S01      | StudentA
        2 | T01      | TeacherA
        3 | T02      | TeacherB
(3 rows)

Clear existing data in the table and insert data into the table.

INSERT OVERWRITE INTO reason_t1 VALUES (4, 'S02', 'StudentB');
SELECT * FROM reason_t1 ORDER BY 1;
 TABLE_SK | TABLE_ID | TABLE_NA
----------+----------+----------
        4 | S02      | StudentB
(1 row)

Insert data back into the reason_t1 table.

INSERT INTO reason_t1 SELECT * FROM reason_t1;

Specify a default value for an individual column.

INSERT INTO reason_t1 VALUES (5, 'S03', DEFAULT);

Insert some data from a table into another table: use the WITH subquery to obtain a temporary table temp_t, and then insert all data in temp_t into the table reason_t1.

WITH temp_t AS (SELECT * FROM reason_t1) INSERT INTO reason_t1 SELECT * FROM temp_t ORDER BY 1;
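The RETURNING clause described above can echo inserted rows back to the client. A minimal sketch against reason_t1 (the inserted values are illustrative):

-- Insert one row and return its TABLE_SK and TABLE_NA columns.
INSERT INTO reason_t1 VALUES (6, 'S04', 'StudentC') RETURNING TABLE_SK, TABLE_NA;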
UPSERT inserts rows into a table. When a row duplicates an existing primary key or unique key value, the row will be ignored or updated.
+The UPSERT syntax is supported only in 8.1.1 and later.
+For details, see Syntax of INSERT. The following table describes the syntax of UPSERT.
| Syntax | Update Data Upon Conflict | Ignore Data Upon Conflict |
|---|---|---|
| Syntax 1: No index is specified. | INSERT INTO ON DUPLICATE KEY UPDATE | INSERT IGNORE; INSERT INTO ON CONFLICT DO NOTHING |
| Syntax 2: The unique key constraint can be inferred from the specified column name or constraint name. | INSERT INTO ON CONFLICT(...) DO UPDATE SET; INSERT INTO ON CONFLICT ON CONSTRAINT con_name DO UPDATE SET | INSERT INTO ON CONFLICT(...) DO NOTHING; INSERT INTO ON CONFLICT ON CONSTRAINT con_name DO NOTHING |
In syntax 1, no index is specified. The system checks for conflicts on all primary keys or unique indexes. If a conflict exists, the system ignores or updates the corresponding data.
+In syntax 2, a specified index is used for conflict check. The primary key or unique index is inferred from the column name, the expression that contains column names, or the constraint name specified in the ON CONFLICT clause.
+Syntax 2 infers the primary key or unique index by specifying the column name or constraint name. You can specify a single column name or multiple column names by using an expression, for example, (column1, column2, column3).
+collation and opclass can be specified when you create an index. Therefore, you can also specify them after the column name for index inference.
+COLLATE collation specifies the collation of a column, and opclass specifies the name of the operator class. For details, see CREATE INDEX.
+The UPDATE clause can use VALUES(colname) or EXCLUDED.colname to reference inserted data. EXCLUDED indicates the rows that should be excluded due to conflicts. An example is as follows:
CREATE TABLE t1(id int PRIMARY KEY, a int, b int);
INSERT INTO t1 VALUES(1,1,1);
-- Upon a conflicting row, change the value in column a to the value in column a of the target table plus 1. In this example, the row becomes (1,2,1).
INSERT INTO t1 VALUES(1,10,20) ON CONFLICT(id) DO UPDATE SET a = a + 1;
-- EXCLUDED.a references the value of column a originally proposed for insertion. In this example, the value is 10.
-- Upon a conflicting row, change the value of column a to the referenced value plus 1. In this example, the row is updated to (1,11,1).
INSERT INTO t1 VALUES(1,10,20) ON CONFLICT(id) DO UPDATE SET a = EXCLUDED.a + 1;
For example, multiple UPSERT statements are executed in batches in a transaction, or through JDBC (setAutoCommit(false)), while multiple similar tasks run at the same time.
Possible result: the update order of different threads may vary across nodes. As a result, a deadlock may occur when the same row is updated concurrently.
Solution:
Of the preceding solutions, method 1 can only reduce the waiting time; it cannot eliminate the deadlock. If your workload contains UPSERT statements, you are advised to decrease the value of this parameter. Methods 2, 3, and 4 can eliminate the deadlock, but method 2 is recommended because it performs better than the other two.
CREATE TABLE t1(dist_key int PRIMARY KEY, a int, b int);
INSERT INTO t1 VALUES(1,2,3) ON CONFLICT(dist_key) DO UPDATE SET dist_key = EXCLUDED.dist_key, a = EXCLUDED.a + 1;
INSERT INTO t1 VALUES(1,2,3) ON CONFLICT(dist_key) DO UPDATE SET dist_key = dist_key, a = EXCLUDED.a + 1;
CREATE TABLE t1(id int PRIMARY KEY, a int, b int);
-- Use the stream query plan:
EXPLAIN (COSTS OFF) INSERT INTO t1 VALUES(1,2,3),(1,5,6) ON CONFLICT(id) DO UPDATE SET a = EXCLUDED.a + 1;
 QUERY PLAN
------------------------------------------------
 id |               operation
----+-----------------------------------------
  1 | -> Streaming (type: GATHER)
  2 |    -> Insert on t1
  3 |       -> Streaming(type: REDISTRIBUTE)
  4 |          -> Values Scan on "*VALUES*"

 Predicate Information (identified by plan id)
---------------------------------------------
  2 --Insert on t1
      Conflict Resolution: UPDATE
      Conflict Arbiter Indexes: t1_pkey

 ====== Query Summary =====
------------------------------
 System available mem: 819200KB
 Query Max mem: 819200KB
 Query estimated mem: 3104KB
(18 rows)

INSERT INTO t1 VALUES(1,2,3),(1,5,6) ON CONFLICT(id) DO UPDATE SET a = EXCLUDED.a + 1;
ERROR: INSERT ON CONFLICT DO UPDATE command cannot affect row a second time
HINT: Ensure that no rows proposed for insertion within the same command have duplicate constrained values.

-- Disable the stream plan to generate a PGXC plan:
set enable_stream_operator = off;
EXPLAIN (COSTS OFF) INSERT INTO t1 VALUES(1,2,3),(1,5,6) ON CONFLICT(id) DO UPDATE SET a = EXCLUDED.a + 1;
 QUERY PLAN
-----------------------------------------------
 id |            operation
----+----------------------------------
  1 | -> Insert on t1
  2 |    -> Values Scan on "*VALUES*"

 Predicate Information (identified by plan id)
---------------------------------------------
  1 --Insert on t1
      Conflict Resolution: UPDATE
      Conflict Arbiter Indexes: t1_pkey
      Node expr: id
(11 rows)

INSERT INTO t1 VALUES(1,2,3),(1,5,6) ON CONFLICT(id) DO UPDATE SET a = EXCLUDED.a + 1;
INSERT 0 2
Create table reason_t2 and insert data into it.

CREATE TABLE reason_t2
(
    a int primary key,
    b int,
    c int
);

INSERT INTO reason_t2 VALUES (1, 2, 3);
SELECT * FROM reason_t2 ORDER BY 1;
 a | b | c
---+---+---
 1 | 2 | 3
(1 row)

Insert two rows into the table reason_t2. One row conflicts and the other does not. The conflicting row is ignored, and the non-conflicting row is inserted.

INSERT INTO reason_t2 VALUES (1, 4, 5),(2, 6, 7) ON CONFLICT(a) DO NOTHING;
SELECT * FROM reason_t2 ORDER BY 1;
 a | b | c
---+---+---
 1 | 2 | 3
 2 | 6 | 7
(2 rows)

Insert two rows into the table reason_t2. One row conflicts and the other does not. The conflicting row is updated, and the non-conflicting row is inserted.

INSERT INTO reason_t2 VALUES (1, 4, 5),(3, 8, 9) ON CONFLICT(a) DO UPDATE SET b = EXCLUDED.b, c = EXCLUDED.c;
SELECT * FROM reason_t2 ORDER BY 1;
 a | b | c
---+---+---
 1 | 4 | 5
 2 | 6 | 7
 3 | 8 | 9
(3 rows)

Filter the updated rows.

INSERT INTO reason_t2 VALUES (2, 7, 8) ON CONFLICT (a) DO UPDATE SET b = excluded.b, c = excluded.c WHERE reason_t2.c = 7;
SELECT * FROM reason_t2 ORDER BY 1;
 a | b | c
---+---+---
 1 | 4 | 5
 2 | 7 | 8
 3 | 8 | 9
(3 rows)

Insert data into the table reason_t2, updating the conflicting row with a swapped column mapping: column c receives the proposed value of column b, and column b receives the proposed value of column c.

INSERT INTO reason_t2 VALUES (1, 2, 3) ON CONFLICT (a) DO UPDATE SET b = excluded.c, c = excluded.b;
SELECT * FROM reason_t2 ORDER BY 1;
 a | b | c
---+---+---
 1 | 3 | 2
 2 | 7 | 8
 3 | 8 | 9
(3 rows)
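Syntax 2 can also infer the unique index from a constraint name, as listed in the table above. A sketch against reason_t2, assuming its primary key constraint carries the default name reason_t2_pkey (verify the actual name in your environment):

-- Ignore the row if it conflicts with the named primary key constraint.
INSERT INTO reason_t2 VALUES (3, 0, 0) ON CONFLICT ON CONSTRAINT reason_t2_pkey DO NOTHING;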
SELECT retrieves data from a table or view.
Acting as a filter over database tables, SELECT uses SQL keywords to retrieve the required data from data tables.
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [/*+ plan_hint */] [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
{ * | {expression [ [ AS ] output_name ]} [, ...] }
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY grouping_element [, ...] ]
[ HAVING condition [, ...] ]
[ WINDOW {window_name AS ( window_definition )} [, ...] ]
[ { UNION | INTERSECT | EXCEPT | MINUS } [ ALL | DISTINCT ] select ]
[ ORDER BY {expression [ [ ASC | DESC | USING operator ] | nlssort_expression_clause ] [ NULLS { FIRST | LAST } ]} [, ...] ]
[ { [ LIMIT { count | ALL } ] [ OFFSET start [ ROW | ROWS ] ] } | { LIMIT start, { count | ALL } } ]
[ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
[ {FOR { UPDATE | SHARE } [ OF table_name [, ...] ] [ NOWAIT ]} [...] ];
In condition and expression, you can use the aliases of expressions in targetlist in compliance with the following rules:
with_query is:

with_query_name [ ( column_name [, ...] ) ]
    AS ( {select | values | insert | update | delete} )

from_item can be one of:

{[ ONLY ] table_name [ * ] [ partition_clause ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
|( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
|with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
|function_name ( [ argument [, ...] ] ) [ AS ] alias [ ( column_alias [, ...] | column_definition [, ...] ) ]
|function_name ( [ argument [, ...] ] ) AS ( column_definition [, ...] )
|from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( join_column [, ...] ) ]}

grouping_element can be one of:

( )
| expression
| ( expression [, ...] )
| ROLLUP ( { expression | ( expression [, ...] ) } [, ...] )
| CUBE ( { expression | ( expression [, ...] ) } [, ...] )
| GROUPING SETS ( grouping_element [, ...] )

partition_clause is:

PARTITION { ( partition_name ) |
            FOR ( partition_value [, ...] ) }

Partitions can be specified only for ordinary tables.

nlssort_expression_clause is:

NLSSORT ( column_name, ' NLS_SORT = { SCHINESE_PINYIN_M | generic_m_ci } ' )

The TABLE form is:

TABLE { ONLY {(table_name)| table_name} | table_name [ * ]};
The WITH clause allows you to specify one or more subqueries that can be referenced by name in the primary query, equal to temporary table.
+If RECURSIVE is specified, it allows a SELECT subquery to reference itself by name.
+The detailed format of with_query is as follows: with_query_name [ ( column_name [, ...] ) ] AS ( {select | values | insert | update | delete} )
+Follows the SELECT keyword in the /*+<Plan hint> */ format. It is used to optimize the plan of a SELECT statement block. For details, see section "Hint-based Tuning."
+Specifies that all rows meeting the requirements are returned. This is the default behavior, so you can omit this keyword.
+Removes all duplicate rows from the SELECT result set.
With ON ( expression [, ...] ), only the first row is kept among all rows sharing the same result for the given expressions.
DISTINCT ON expressions are interpreted with the same rules as ORDER BY. Unless you use ORDER BY to guarantee that the required row appears first, you cannot know which row will be the first.
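For example, a sketch that keeps one row per r_reason_sk value, using ORDER BY to make the kept row deterministic (tpcds.reason as in the other examples):

SELECT DISTINCT ON (r_reason_sk) r_reason_sk, r_reason_desc
FROM tpcds.reason
ORDER BY r_reason_sk, r_reason_desc;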
+Indicates columns to be queried. Some or all columns (using wildcard character *) can be queried.
+You may use the AS output_name clause to give an alias for an output column. The alias is used for the displaying of the output column.
+Column names may be either of:
+Indicates one or more source tables for SELECT.
+The FROM clause can contain the following elements:
+Indicates the name (optionally schema-qualified) of an existing table or view, for example, schema_name.table_name.
+Gives a temporary alias to a table to facilitate the quotation by other queries.
+An alias is used for brevity or to eliminate ambiguity for self-joins. When an alias is provided, it completely hides the actual name of the table or function.
+Queries data in the specified partition of a partitioned table.
+Specifies the value of the specified partition key. If there are many partition keys, use the PARTITION FOR clause to specify the value of the only partition key you want to use.
+Performs a subquery in the FROM clause. A temporary table is created to save subquery results.
+WITH clause can also be the source of FROM clause and can be referenced with the name queried by executing WITH.
+Function name. Function calls can appear in the FROM clause.
+A JOIN clause combines two FROM items. Use parentheses if necessary to determine the order of nesting. In the absence of parentheses, JOIN nests left-to-right.
+In any case, JOIN binds more tightly than the commas separating FROM items.
Returns all rows of the qualified Cartesian product (that is, all combined rows that pass the join condition), plus one copy of each row in the left-hand table for which there was no right-hand row that passed the join condition. This left-hand row is extended to the full width of the joined table by inserting NULL values for the right-hand columns. Note that only the JOIN clause's own condition is considered while deciding which rows have matches. Outer conditions are applied afterwards.
+Returns all the joined rows, plus one row for each unmatched right-hand row (extended with NULL on the left).
+This is just a notational convenience, since you could convert it to a LEFT OUTER JOIN by switching the left and right inputs.
Returns all the joined rows, plus one row for each unmatched left-hand row (extended with NULL on the right), plus one row for each unmatched right-hand row (extended with NULL on the left).
+CROSS JOIN is equivalent to INNER JOIN ON (TRUE), which means no rows are removed by qualification. These join types are just a notational convenience, since they do nothing you could not do with plain FROM and WHERE.
For the INNER and OUTER join types, a join condition must be specified, namely exactly one of NATURAL, ON join_condition, or USING (join_column [, ...]). For CROSS JOIN, none of these clauses can appear.
+CROSS JOIN and INNER JOIN produce a simple Cartesian product, the same result as you get from listing the two items at the top level of FROM.
+A join condition to define which rows have matches in joins. Example: ON left_table.a = right_table.a
+Abbreviation of ON left_table.a = right_table.a AND left_table.b = right_table.b .... Corresponding columns must have the same name.
+NATURAL is a shorthand for a USING list that mentions all columns in the two tables that have the same names.
+The WHERE clause forms an expression for row selection to narrow down the query range of SELECT. The condition is any expression that evaluates to a result of Boolean type. Rows that do not satisfy this condition will be eliminated from the output.
+In the WHERE clause, you can use the operator (+) to convert a table join to an outer join. However, this method is not recommended because it is not the standard SQL syntax and may raise syntax compatibility issues during platform migration. There are many restrictions on using the operator (+):
For the WHERE clause, if a special character (%, _, or \) is queried in a LIKE pattern, add a backslash (\) before each such character.
+Example:
create table tt01 (id int,content varchar(50));
NOTICE: The 'DISTRIBUTE BY' clause is not specified. Using 'id' as the distribution column by default.
HINT: Please use 'DISTRIBUTE BY' clause to specify suitable data distribution column.
CREATE TABLE
insert into tt01 values (1,'Jack say ''hello''');
INSERT 0 1
insert into tt01 values (2,'Rose do 50%');
INSERT 0 1
insert into tt01 values (3,'Lilei say ''world''');
INSERT 0 1
insert into tt01 values (4,'Hanmei do 100%');
INSERT 0 1
select * from tt01;
 id |      content
----+-------------------
  3 | Lilei say 'world'
  4 | Hanmei do 100%
  1 | Jack say 'hello'
  2 | Rose do 50%
(4 rows)

select * from tt01 where content like '%''he%';
 id |     content
----+------------------
  1 | Jack say 'hello'
(1 row)

select * from tt01 where content like '%50\%%';
 id |   content
----+-------------
  2 | Rose do 50%
(1 row)

drop table tt01;
DROP TABLE
Condenses query results into a single row or selected rows that share the same values for the grouped expressions.
A CUBE grouping is an extension to the GROUP BY clause that creates subtotals for all possible combinations of the given list of grouping columns (or expressions). In terms of multidimensional analysis, CUBE generates all the subtotals that could be calculated for a data cube with the specified dimensions. For example, given three expressions (n = 3) in the CUBE clause, the operation results in 2^n = 2^3 = 8 groupings. Rows grouped on the values of n expressions are called regular rows, and the rest are called superaggregate rows.
+GROUPING SETS is another extension to the GROUP BY clause. It allows users to specify multiple GROUP BY clauses. This improves efficiency by trimming away unnecessary data. After you specify the set of groups that you want to create using a GROUPING SETS expression within a GROUP BY clause, the database does not need to compute a whole ROLLUP or CUBE.
+If the SELECT list expression quotes some ungrouped fields and no aggregate function is used, an error is displayed. This is because multiple values may be returned for ungrouped fields.
Selects particular groups by working with the GROUP BY clause. The HAVING clause compares attributes of groups with a constant. Only groups matching the logical expression in the HAVING clause are extracted.
The general format is WINDOW window_name AS ( window_definition ) [, ...]. window_name is a name that can be referenced by window_definition. window_definition can be expressed in the following forms:
+[ existing_window_name ]
+[ PARTITION BY expression [, ...] ]
+[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
+[ frame_clause ]
frame_clause defines a window frame for the window function. Some (not all) window functions depend on the window frame, which is the set of rows related to the current query row. frame_clause can be expressed in the following forms:
+[ RANGE | ROWS ] frame_start
+[ RANGE | ROWS ] BETWEEN frame_start AND frame_end
+frame_start and frame_end can be expressed in the following forms:
+UNBOUNDED PRECEDING
+value PRECEDING (not supported for RANGE)
+CURRENT ROW
+value FOLLOWING (not supported for RANGE)
+UNBOUNDED FOLLOWING
For queries on column-store tables, only the row_number window function is supported, and frame_clause is not supported.
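As a minimal illustration of the WINDOW clause, using row_number (which, per the note above, is also the only window function supported on column-store tables):

SELECT r_reason_sk, r_reason_desc,
       row_number() OVER w AS rn
FROM tpcds.reason
WINDOW w AS (ORDER BY r_reason_sk);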
+Computes the set union of the rows returned by the involved SELECT statements.
+The UNION clause has the following constraints:
+General expression:
+select_statement UNION [ALL] select_statement
+Computes the set intersection of rows returned by the involved SELECT statements. The result of INTERSECT does not contain any duplicate rows.
+The INTERSECT clause has the following constraints:
+General format:
+select_statement INTERSECT select_statement
+select_statement can be any SELECT statement without a FOR UPDATE clause.
+EXCEPT clause has the following common form:
+select_statement EXCEPT [ ALL ] select_statement
+select_statement can be any SELECT statement without a FOR UPDATE clause.
+The EXCEPT operator computes the set of rows that are in the result of the left SELECT statement but not in the result of the right one.
The result of EXCEPT does not contain any duplicate rows unless the ALL option is specified. With ALL, a row that has m duplicates in the left table and n duplicates in the right table will appear max(m - n, 0) times in the result set.
+Multiple EXCEPT operators in the same SELECT statement are evaluated left to right, unless parentheses dictate otherwise. EXCEPT binds at the same level as UNION.
+Currently, FOR UPDATE and FOR SHARE cannot be specified either for an EXCEPT result or for any input of an EXCEPT.
+Has the same function and syntax as EXCEPT clause.
+Sorts data retrieved by SELECT in descending or ascending order. If the ORDER BY expression contains multiple columns:
For example, sorting Chinese characters by Pinyin requires a database initialized with a suitable encoding and locale: initdb -E UTF8 -D ../data --locale=zh_CN.UTF-8 or initdb -E GBK -D ../data --locale=zh_CN.GBK
The LIMIT clause comes in three forms: an independent LIMIT clause, an independent OFFSET clause, and a LIMIT clause with multiple parameters:
LIMIT { count | ALL }
OFFSET start [ ROW | ROWS ]
LIMIT start, { count | ALL }
+count in the clauses specifies the maximum number of rows to return, while start specifies the number of rows to skip before starting to return rows. When both are specified, start rows are skipped before starting to count the count rows to be returned. A multi-parameter LIMIT clause cannot be used together with a single-parameter LIMIT or OFFSET clause.
+If count is omitted in a FETCH clause, it defaults to 1.
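A sketch of the multi-parameter LIMIT form and the FETCH default, using tpcds.reason:

-- Skip 2 rows, then return at most 3 rows (equivalent to LIMIT 3 OFFSET 2).
SELECT * FROM tpcds.reason LIMIT 2, 3;
-- With count omitted, FETCH returns a single row.
SELECT * FROM tpcds.reason FETCH FIRST ROW ONLY;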
+Locks rows retrieved by SELECT. This ensures that the rows cannot be modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, or SELECT FOR UPDATE of these rows will be blocked until the current transaction ends.
To avoid waiting for other transactions to commit, apply NOWAIT: if a selected row cannot be locked immediately, SELECT FOR UPDATE NOWAIT reports an error instead of waiting.
FOR SHARE behaves similarly, except that it acquires a shared rather than exclusive lock on each retrieved row. A share lock blocks other transactions from performing UPDATE, DELETE, or SELECT FOR UPDATE on these rows, but it does not prevent them from performing SELECT FOR SHARE.
If specific tables are named in FOR UPDATE or FOR SHARE, then only rows coming from those tables are locked; any other tables used in the SELECT are simply read as usual. Otherwise, all tables used in the command are locked.
+If FOR UPDATE or FOR SHARE is applied to a view or sub-query, it affects all tables used in the view or sub-query.
+Multiple FOR UPDATE and FOR SHARE clauses can be written if it is necessary to specify different locking behaviors for different tables.
+If the same table is mentioned (or implicitly affected) by both FOR UPDATE and FOR SHARE clauses, it is processed as FOR UPDATE. Similarly, a table is processed as NOWAIT if that is specified in any of the clauses affecting it.
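A minimal sketch of row locking with FOR UPDATE, assuming tpcds.reason supports row-level locking:

START TRANSACTION;
-- Lock the matching row until COMMIT; with NOWAIT, report an error if it is already locked.
SELECT * FROM tpcds.reason WHERE r_reason_sk = 1 FOR UPDATE NOWAIT;
COMMIT;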
Indicates a field to be sorted in a special mode. Currently, only the Chinese Pinyin order and case-insensitive order are supported.
Valid values: 'NLS_SORT = SCHINESE_PINYIN_M' (sort by Chinese Pinyin) and 'NLS_SORT = generic_m_ci' (case-insensitive sort).
+Queries data in the specified partition of a partitioned table.
+Obtain the temp_t temporary table by a subquery and query all records in this table.
WITH temp_t(name,isdba) AS (SELECT usename,usesuper FROM pg_user) SELECT * FROM temp_t;

Query all the r_reason_sk records in the tpcds.reason table and deduplicate them.

SELECT DISTINCT(r_reason_sk) FROM tpcds.reason;

Example of a LIMIT clause: Obtain one record from the table.

SELECT * FROM tpcds.reason LIMIT 1;

Example of a LIMIT clause: Obtain the third record from the table.

SELECT * FROM tpcds.reason LIMIT 1 OFFSET 2;

Example of a LIMIT clause: Obtain the first two records from a table.

SELECT * FROM tpcds.reason LIMIT 2;

Query all records and sort them in alphabetical order.

SELECT r_reason_desc FROM tpcds.reason ORDER BY r_reason_desc;

Use table aliases to obtain data from the pg_user and pg_user_status tables.

SELECT a.usename,b.locktime FROM pg_user a,pg_user_status b WHERE a.usesysid=b.roloid;

Example of the FULL JOIN clause: Join data in the pg_user and pg_user_status tables.

SELECT a.usename,b.locktime,a.usesuper FROM pg_user a FULL JOIN pg_user_status b ON a.usesysid=b.roloid;

Example of the GROUP BY clause: Filter data based on query conditions, and group the results.

SELECT r_reason_id, AVG(r_reason_sk) FROM tpcds.reason GROUP BY r_reason_id HAVING AVG(r_reason_sk) > 25;

Example of the GROUP BY clause: Group the results by alias.

SELECT r_reason_id AS id FROM tpcds.reason GROUP BY id;

Example of the GROUP BY CUBE clause: Filter data based on query conditions, and group the results.

SELECT r_reason_id,AVG(r_reason_sk) FROM tpcds.reason GROUP BY CUBE(r_reason_id,r_reason_sk);

Example of the GROUP BY GROUPING SETS clause: Filter data based on query conditions, and group the results.

SELECT r_reason_id,AVG(r_reason_sk) FROM tpcds.reason GROUP BY GROUPING SETS((r_reason_id,r_reason_sk),r_reason_sk);

Example of the UNION clause: Merge the names starting with W and N in the r_reason_desc column of the tpcds.reason table.

SELECT r_reason_sk, tpcds.reason.r_reason_desc
    FROM tpcds.reason
    WHERE tpcds.reason.r_reason_desc LIKE 'W%'
UNION
SELECT r_reason_sk, tpcds.reason.r_reason_desc
    FROM tpcds.reason
    WHERE tpcds.reason.r_reason_desc LIKE 'N%';
Chinese Pinyin order:

SELECT * FROM stu_pinyin_info ORDER BY NLSSORT (name, 'NLS_SORT = SCHINESE_PINYIN_M');

Case-insensitive order:

CREATE TABLE stu_icase_info (id bigint, name text) DISTRIBUTE BY REPLICATION;
INSERT INTO stu_icase_info VALUES (1, 'aaaa'),(2, 'AAAA');
SELECT * FROM stu_icase_info ORDER BY NLSSORT (name, 'NLS_SORT = generic_m_ci');
 id | name
----+------
  1 | aaaa
  2 | AAAA
(2 rows)
Create the table tpcds.reason_p.

CREATE TABLE tpcds.reason_p
(
    r_reason_sk integer,
    r_reason_id character(16),
    r_reason_desc character(100)
)
PARTITION BY RANGE (r_reason_sk)
(
    partition P_05_BEFORE values less than (05),
    partition P_15 values less than (15),
    partition P_25 values less than (25),
    partition P_35 values less than (35),
    partition P_45_AFTER values less than (MAXVALUE)
);

Insert data.

INSERT INTO tpcds.reason_p values(3,'AAAAAAAABAAAAAAA','reason 1'),(10,'AAAAAAAABAAAAAAA','reason 2'),(4,'AAAAAAAABAAAAAAA','reason 3'),(10,'AAAAAAAABAAAAAAA','reason 4'),(10,'AAAAAAAABAAAAAAA','reason 5'),(20,'AAAAAAAACAAAAAAA','reason 6'),(30,'AAAAAAAACAAAAAAA','reason 7');

Example of the PARTITION clause: Obtain data from the P_05_BEFORE partition in the tpcds.reason_p table.

SELECT * FROM tpcds.reason_p PARTITION (P_05_BEFORE);
 r_reason_sk |   r_reason_id    |   r_reason_desc
-------------+------------------+------------------
           4 | AAAAAAAABAAAAAAA | reason 3
           3 | AAAAAAAABAAAAAAA | reason 1
(2 rows)

Example of the GROUP BY clause: Group records in the tpcds.reason_p table by r_reason_id, and count the number of records in each group.

SELECT COUNT(*),r_reason_id FROM tpcds.reason_p GROUP BY r_reason_id;
 count |   r_reason_id
-------+------------------
     2 | AAAAAAAACAAAAAAA
     5 | AAAAAAAABAAAAAAA
(2 rows)

Example of the GROUP BY CUBE clause: Filter data based on query conditions, and group the results.

SELECT * FROM tpcds.reason GROUP BY CUBE (r_reason_id,r_reason_sk,r_reason_desc);

Example of the GROUP BY GROUPING SETS clause: Filter data based on query conditions, and group the results.

SELECT * FROM tpcds.reason GROUP BY GROUPING SETS ((r_reason_id,r_reason_sk),r_reason_desc);

Example of the HAVING clause: Group records in the tpcds.reason_p table by r_reason_id, count the number of records in each group, and display only groups with more than two records.

SELECT COUNT(*) c,r_reason_id FROM tpcds.reason_p GROUP BY r_reason_id HAVING c>2;
 c |   r_reason_id
---+------------------
 5 | AAAAAAAABAAAAAAA
(1 row)

Example of the IN clause: Group records in the tpcds.reason_p table by r_reason_id, count the number of records in each group, and display only the counts for groups whose r_reason_id is AAAAAAAABAAAAAAA or AAAAAAAADAAAAAAA.

SELECT COUNT(*),r_reason_id FROM tpcds.reason_p GROUP BY r_reason_id HAVING r_reason_id IN('AAAAAAAABAAAAAAA','AAAAAAAADAAAAAAA');
 count |   r_reason_id
-------+------------------
     5 | AAAAAAAABAAAAAAA
(1 row)

Example of the INTERSECT clause: Query records whose r_reason_id is AAAAAAAABAAAAAAA and whose r_reason_sk is smaller than 5.

SELECT * FROM tpcds.reason_p WHERE r_reason_id='AAAAAAAABAAAAAAA' INTERSECT SELECT * FROM tpcds.reason_p WHERE r_reason_sk<5;
 r_reason_sk |   r_reason_id    |   r_reason_desc
-------------+------------------+------------------
           4 | AAAAAAAABAAAAAAA | reason 3
           3 | AAAAAAAABAAAAAAA | reason 1
(2 rows)

Example of the EXCEPT clause: Query records whose r_reason_id is AAAAAAAABAAAAAAA and whose r_reason_sk is greater than or equal to 4.

SELECT * FROM tpcds.reason_p WHERE r_reason_id='AAAAAAAABAAAAAAA' EXCEPT SELECT * FROM tpcds.reason_p WHERE r_reason_sk<4;
 r_reason_sk |   r_reason_id    |   r_reason_desc
-------------+------------------+------------------
          10 | AAAAAAAABAAAAAAA | reason 2
          10 | AAAAAAAABAAAAAAA | reason 5
          10 | AAAAAAAABAAAAAAA | reason 4
           4 | AAAAAAAABAAAAAAA | reason 3
(4 rows)
Specify the operator (+) in the WHERE clause to indicate a left join.

select t1.sr_item_sk ,t2.c_customer_id from store_returns t1, customer t2 where t1.sr_customer_sk = t2.c_customer_sk(+)
order by 1 desc limit 1;
 sr_item_sk | c_customer_id
------------+---------------
      18000 |
(1 row)

Specify the operator (+) in the WHERE clause to indicate a right join.

select t1.sr_item_sk ,t2.c_customer_id from store_returns t1, customer t2 where t1.sr_customer_sk(+) = t2.c_customer_sk
order by 1 desc limit 1;
 sr_item_sk | c_customer_id
------------+------------------
            | AAAAAAAAJNGEBAAA
(1 row)

Specify the operator (+) in the WHERE clause to indicate a left join and add a join condition.

select t1.sr_item_sk ,t2.c_customer_id from store_returns t1, customer t2 where t1.sr_customer_sk = t2.c_customer_sk(+) and t2.c_customer_sk(+) < 1 order by 1 limit 1;
 sr_item_sk | c_customer_id
------------+---------------
          1 |
(1 row)

Do not use the operator (+) inside nested expressions connected through NOT/AND/OR; otherwise, an error is reported.

select t1.sr_item_sk ,t2.c_customer_id from store_returns t1, customer t2 where not(t1.sr_customer_sk = t2.c_customer_sk(+) and t2.c_customer_sk(+) < 1);
ERROR: Operator "(+)" can not be used in nesting expression.
LINE 1: ...tomer_id from store_returns t1, customer t2 where not(t1.sr_...

If the operator (+) is specified in a WHERE-clause expression where it is not supported, an error is reported.

select t1.sr_item_sk ,t2.c_customer_id from store_returns t1, customer t2 where (t1.sr_customer_sk = t2.c_customer_sk(+))::bool;
ERROR: Operator "(+)" can only be used in common expression.

If the operator (+) is specified on both sides of an expression in the WHERE clause, an error is reported.

select t1.sr_item_sk ,t2.c_customer_id from store_returns t1, customer t2 where t1.sr_customer_sk(+) = t2.c_customer_sk(+);
ERROR: Operator "(+)" can't be specified on more than one relation in one join condition
HINT: "t1", "t2"...are specified Operator "(+)" in one condition.
SELECT INTO defines a new table based on a query result and insert data obtained by query to the new table.
+Different from SELECT, data found by SELECT INTO is not returned to the client. The table columns have the same names and data types as the output columns of the SELECT.
CREATE TABLE AS provides a superset of the functionality provided by SELECT INTO. You are advised to use CREATE TABLE AS, because SELECT INTO cannot be used in a stored procedure.
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
    { * | {expression [ [ AS ] output_name ]} [, ...] }
    INTO [ UNLOGGED ] [ TABLE ] new_table
    [ FROM from_item [, ...] ]
    [ WHERE condition ]
    [ GROUP BY expression [, ...] ]
    [ HAVING condition [, ...] ]
    [ WINDOW {window_name AS ( window_definition )} [, ...] ]
    [ { UNION | INTERSECT | EXCEPT | MINUS } [ ALL | DISTINCT ] select ]
    [ ORDER BY {expression [ [ ASC | DESC | USING operator ] | nlssort_expression_clause ] [ NULLS { FIRST | LAST } ]} [, ...] ]
    [ { [ LIMIT { count | ALL } ] [ OFFSET start [ ROW | ROWS ] ] } | { LIMIT start, { count | ALL } } ]
    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
    [ {FOR { UPDATE | SHARE } [ OF table_name [, ...] ] [ NOWAIT ]} [...] ];
INTO [ UNLOGGED ] [ TABLE ] new_table
+UNLOGGED indicates that the table is created as an unlogged table. Data written to unlogged tables is not written to the write-ahead log, which makes them considerably faster than ordinary tables. However, they are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown. The contents of an unlogged table are also not replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well.
+new_table specifies the name of a new table, which can be schema-qualified.
Add values that are less than 5 in the r_reason_sk column of the tpcds.reason table to the new table.

SELECT * INTO tpcds.reason_t1 FROM tpcds.reason WHERE r_reason_sk < 5;
INSERT 0 6

Delete the tpcds.reason_t1 table.

DROP TABLE tpcds.reason_t1;
UPDATE updates data in a table. UPDATE changes the values of the specified columns in all rows that satisfy the condition. The WHERE clause specifies the conditions. The columns to be modified must be mentioned in the SET clause; columns not explicitly modified retain their previous values.
UPDATE [ ONLY ] table_name [ * ] [ [ AS ] alias ]
SET {column_name = { expression | DEFAULT }
    |( column_name [, ...] ) = {( { expression | DEFAULT } [, ...] ) |sub_query }}[, ...]
    [ FROM from_list] [ WHERE condition ]
    [ RETURNING {*
                | {output_expression [ [ AS ] output_name ]} [, ...] }];

where sub_query can be:
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
{ * | {expression [ [ AS ] output_name ]} [, ...] }
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY grouping_element [, ...] ]
[ HAVING condition [, ...] ]
Name (optionally schema-qualified) of the table to be updated.
+Value range: an existing table name
+Specifies the alias for the target table.
+Value range: a string. It must comply with the naming convention.
Specifies the name of a column to update.
You can refer to the column by specifying the target table name and the column name. Example:

UPDATE foo SET foo.col_name = 'GaussDB';

You can also refer to the column by specifying the target table alias and the column name. Example:

UPDATE foo AS f SET f.col_name = 'GaussDB';
Value range: an existing column name
+An expression or value to assign to the column.
+Sets the column to its default value.
+The value is NULL if no specified default value has been assigned to it.
+Specifies a subquery.
This command can be executed to update a table with information from other tables in the same database. For details about clauses in the SELECT statement, see SELECT.
+A list of table expressions, allowing columns from other tables to appear in the WHERE condition and the update expressions. This is similar to the list of tables that can be specified in the FROM clause of a SELECT statement.
+Note that the target table must not appear in the from_list, unless you intend a self-join (in which case it must appear with an alias in the from_list).
+An expression that returns a value of type boolean. Only rows for which this expression returns true are updated.
+An expression to be computed and returned by the UPDATE command after each row is updated.
+Value range: The expression can use any column names of the table named by table_name or table(s) listed in FROM. Write * to return all columns.
+A name to use for a returned column.
Update the values of all records.

UPDATE reason SET r_reason_sk = r_reason_sk * 2;

If the WHERE clause is not included, all r_reason_sk values are updated.

UPDATE reason SET r_reason_sk = r_reason_sk + 100;

Redefine r_reason_sk for rows whose r_reason_desc is reason2 in the reason table.

UPDATE reason SET r_reason_sk = 5 WHERE r_reason_desc = 'reason2';

Redefine r_reason_sk whose value is 2 in the reason table.

UPDATE reason SET r_reason_sk = r_reason_sk + 100 WHERE r_reason_sk = 2;

Redefine r_reason_sk for rows whose r_reason_sk is greater than 2 in the reason table.

UPDATE reason SET r_reason_sk = 201 WHERE r_reason_sk > 2;

You can run an UPDATE statement to update multiple columns by specifying multiple values in the SET clause. For example:

UPDATE reason SET r_reason_sk = 5, r_reason_desc = 'reason5' WHERE r_reason_id = 'fourth';
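The FROM clause described above lets values from another table drive the update. A sketch assuming a hypothetical staging table reason_new with the same columns as reason:

-- reason_new is hypothetical; copy descriptions into reason by matching on r_reason_sk.
UPDATE reason r
SET r_reason_desc = n.r_reason_desc
FROM reason_new n
WHERE r.r_reason_sk = n.r_reason_sk;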
VALUES computes a row or a set of rows based on given values. It is most commonly used to generate a constant table within a large command.
VALUES {( expression [, ...] )} [, ...]
    [ ORDER BY { sort_expression [ ASC | DESC | USING operator ] } [, ...] ]
    [ { [ LIMIT { count | ALL } ] [ OFFSET start [ ROW | ROWS ] ] } | { LIMIT start, { count | ALL } } ]
    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ];
Specifies a constant or expression to compute and insert at the indicated place in the resulting table or set of rows.
+In a VALUES list appearing at the top level of an INSERT, an expression can be replaced by DEFAULT to indicate that the destination column's default value should be inserted. DEFAULT cannot be used when VALUES appears in other contexts.
+Specifies an expression or integer constant indicating how to sort the result rows.
+Indicates ascending sort order.
+Indicates descending sort order.
+Specifies a sorting operator.
+Specifies the maximum number of rows to return.
+Specifies the number of rows to skip before starting to return rows.
The FETCH clause restricts the total number of rows returned, counting from the first row of the query result; if count is omitted, it defaults to 1.
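A minimal sketch of VALUES as a standalone constant table, combined with ORDER BY and LIMIT:

-- Returns the two rows with the largest first column: (3,'three') and (2,'two').
VALUES (1, 'one'), (2, 'two'), (3, 'three') ORDER BY 1 DESC LIMIT 2;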
For examples of VALUES lists used within INSERT statements, see the INSERT examples above.
Data control language (DCL) is used to set or modify the rights of database users or roles.
+GaussDB(DWS) provides a statement for granting rights to data objects and roles. For details, see GRANT.
+GaussDB(DWS) provides a statement for revoking rights. For details, see REVOKE.
+GaussDB(DWS) allows users to set rights for objects that will be created. For details, see ALTER DEFAULT PRIVILEGES.
+ALTER DEFAULT PRIVILEGES allows you to set the permissions that will be used for objects to be created. It does not affect permissions assigned to existing objects.
+To isolate permissions, the WITH GRANT OPTION syntax is disabled in the current GaussDB(DWS) version.
+A user can modify only the default permissions of the objects created by the user or the role to which the user belongs. These permissions can be set globally (that is, all objects created in the database) or for objects in a specified schema.
To view information about the default permissions of database users, query the PG_DEFAULT_ACL system catalog.
+Only the permissions for tables (including views), sequences, functions, and types (including domains) can be altered.
ALTER DEFAULT PRIVILEGES
    [ FOR { ROLE | USER } target_role [, ...] ]
    [ IN SCHEMA schema_name [, ...] ]
    abbreviated_grant_or_revoke;

abbreviated_grant_or_revoke can be one of:

grant_on_tables_clause
    | grant_on_functions_clause
    | grant_on_types_clause
    | grant_on_sequences_clause
    | revoke_on_tables_clause
    | revoke_on_functions_clause
    | revoke_on_types_clause
    | revoke_on_sequences_clause

grant_on_tables_clause:

GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER | ANALYZE | ANALYSE }
    [, ...] | ALL [ PRIVILEGES ] }
    ON TABLES
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ]

grant_on_functions_clause:

GRANT { EXECUTE | ALL [ PRIVILEGES ] }
    ON FUNCTIONS
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ]

grant_on_types_clause:

GRANT { USAGE | ALL [ PRIVILEGES ] }
    ON TYPES
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ]

grant_on_sequences_clause:

GRANT { { USAGE | SELECT | UPDATE }
    [, ...] | ALL [ PRIVILEGES ] }
    ON SEQUENCES
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ]

revoke_on_tables_clause:

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER | ANALYZE | ANALYSE }
    [, ...] | ALL [ PRIVILEGES ] }
    ON TABLES
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT | CASCADE CONSTRAINTS ]

revoke_on_functions_clause:

REVOKE [ GRANT OPTION FOR ]
    { EXECUTE | ALL [ PRIVILEGES ] }
    ON FUNCTIONS
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT | CASCADE CONSTRAINTS ]

revoke_on_types_clause:

REVOKE [ GRANT OPTION FOR ]
    { USAGE | ALL [ PRIVILEGES ] }
    ON TYPES
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT | CASCADE CONSTRAINTS ]

revoke_on_sequences_clause:

REVOKE [ GRANT OPTION FOR ]
    { { USAGE | SELECT | UPDATE }
    [, ...] | ALL [ PRIVILEGES ] }
    ON SEQUENCES
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT | CASCADE CONSTRAINTS ]
Specifies the name of an existing role. If FOR ROLE/USER is omitted, the current role or user is assumed.
+target_role must have the CREATE permissions for schema_name. You can use the has_schema_privilege function to check whether a role or user has the CREATE permission on a schema.
select a.rolname, n.nspname from pg_authid as a, pg_namespace as n where has_schema_privilege(a.oid, n.oid, 'CREATE');
Value range: An existing role name.
+Specifies the name of an existing schema.
+If a schema name is specified, the default permissions of all objects created in the schema will be modified. If IN SCHEMA is omitted, global permissions will be modified.
+Value range: An existing schema name.
+Specifies the name of an existing role whose permissions are to be granted or revoked.
+Value range: An existing role name.
+If you want to delete a role that has been assigned default permissions, you must revoke the changes to the default permissions or use DROP OWNED BY to get rid of the default permission entry for the role.
Grant the SELECT permission on all tables subsequently created in the tpcds schema to every user:

ALTER DEFAULT PRIVILEGES IN SCHEMA tpcds GRANT SELECT ON TABLES TO PUBLIC;

Grant the INSERT permission on all tables subsequently created in the tpcds schema to user jack:

ALTER DEFAULT PRIVILEGES IN SCHEMA tpcds GRANT INSERT ON TABLES TO jack;

Revoke the preceding permissions:

ALTER DEFAULT PRIVILEGES IN SCHEMA tpcds REVOKE SELECT ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES IN SCHEMA tpcds REVOKE INSERT ON TABLES FROM jack;

Grant user test2 the USAGE and CREATE permissions on schema test1:

grant usage, create on schema test1 to test2;

Set the default permissions so that tables later created by user test1 in schema test1 can be queried by user test2:

ALTER DEFAULT PRIVILEGES FOR USER test1 IN SCHEMA test1 GRANT SELECT ON tables TO test2;

Switch to user test1 and create a table:

set role test1 password '{password1}';
create table test3( a int, b int);

Switch to user test2 and query the table:

set role test2 password '{password2}';
select * from test1.test3;
 a | b
---+---
(0 rows)
ANALYZE collects statistics about ordinary tables in a database and stores the results in the PG_STATISTIC system catalog. The execution plan generator uses these statistics to determine the most efficient execution plan.
If no parameters are specified, ANALYZE analyzes each table and partitioned table in the current database. You can also specify table_name, column, and partition_name to limit the analysis to a specified table, column, or partitioned table.
Users who can execute ANALYZE on a specific table include the owner of the table, the owner of the database where the table resides, users who are granted the ANALYZE permission on the table through GRANT, and users who have the SYSADMIN attribute.
To collect statistics using percentage sampling, you must have both the ANALYZE and SELECT permissions.
ANALYZE VERIFY (or ANALYSE VERIFY) checks whether the data files of common tables (row-store and column-store tables) in a database are damaged. Currently, this function does not support HDFS tables.
{ ANALYZE | ANALYSE } [ VERBOSE ]
    [ table_name [ ( column_name [, ...] ) ] ];

{ ANALYZE | ANALYSE } [ VERBOSE ]
    [ table_name [ ( column_name [, ...] ) ] ]
    PARTITION ( partition_name ) ;
An ordinary partitioned table accepts this syntax but does not implement per-partition statistics collection: if you run ANALYZE on a specified partition, a warning message is displayed.
{ ANALYZE | ANALYSE } [ VERBOSE ]
    { foreign_table_name | FOREIGN TABLES };

{ ANALYZE | ANALYSE } [ VERBOSE ]
    table_name (( column_1_name, column_2_name [, ...] ));

{ ANALYZE | ANALYSE } VERIFY { FAST | COMPLETE };

{ ANALYZE | ANALYSE } VERIFY { FAST | COMPLETE } { table_name | index_name } [ CASCADE ];

{ ANALYZE | ANALYSE } VERIFY { FAST | COMPLETE } table_name PARTITION ( partition_name ) [ CASCADE ];
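A minimal sketch of the VERIFY forms (customer_info is the table used in the examples below; the exact output depends on whether any damage is found):

ANALYZE VERIFY FAST customer_info CASCADE;  -- verify CRC and page headers of the table and all its indexes
ANALYZE VERIFY COMPLETE customer_info;      -- additionally parse and verify tuples (row store) or CUs (column store)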
Enables the display of progress messages.
If this parameter is specified, ANALYZE displays progress information indicating the table that is being processed, and prints statistics about the table.

Specifies the name (possibly schema-qualified) of a specific table to analyze. If omitted, all regular tables (but not foreign tables) in the current database are analyzed.

Currently, you can use ANALYZE to collect statistics about row-store tables, column-store tables, HDFS tables, ORC- or CARBONDATA-formatted OBS foreign tables, and foreign tables for collaborative analysis.

Value range: an existing table name

Specifies the name of a specific column to analyze. All columns are analyzed by default.

Value range: an existing column name

Applies to partitioned tables. You can specify partition_name after the keyword PARTITION to collect statistics for that partition. Currently, partitioned tables accept this syntax but do not execute it.

Value range: a partition name in a table

Specifies the name (possibly schema-qualified) of a specific table to analyze. The data of the table is stored in HDFS.

Value range: an existing table name

Analyzes HDFS foreign tables stored in HDFS and accessible to the current user.

Specifies the name (possibly schema-qualified) of the index to be analyzed.

Value range: an existing index name

In FAST mode, the CRC and page headers of a row-store table (or the CRC and magic number of a column-store table) are verified; if the verification fails, an alarm is reported. In COMPLETE mode, the pointers and tuples of a row-store table, or the CUs of a column-store table, are parsed and verified.

In CASCADE mode, all indexes of the current table are checked.
ANALYZE customer_info;

ANALYZE VERBOSE customer_info;
INFO:  analyzing "cstore.pg_delta_3394584009"(cn_5002 pid=53078)
INFO:  analyzing "public.customer_info"(cn_5002 pid=53078)
INFO:  analyzing "public.customer_info" inheritance tree(cn_5002 pid=53078)
ANALYZE
DEALLOCATE deallocates a previously prepared statement. If you do not explicitly deallocate a prepared statement, it is deallocated when the session ends.
The PREPARE keyword is always ignored.

None

DEALLOCATE [ PREPARE ] { name | ALL };
Specifies the name of the prepared statement to deallocate.
Deallocates all prepared statements.

None
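A minimal usage sketch (the statement name q1 is hypothetical):

PREPARE q1 AS SELECT 1;
EXECUTE q1;
DEALLOCATE q1;
-- Or release every prepared statement in the current session at once:
DEALLOCATE ALL;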
DO executes an anonymous code block.

A code block is treated as though it were the body of a function with no parameters that returns void. It is parsed and executed a single time.

DO [ LANGUAGE lang_name ] code;

Specifies the procedural language the code is written in. If not specified, the default value plpgsql is used.

Specifies the code to be executed, provided as a string literal.

DO $$DECLARE r record;
BEGIN
    FOR r IN SELECT c.relname,n.nspname FROM pg_class c,pg_namespace n
        WHERE c.relnamespace = n.oid AND n.nspname = 'tpcds' AND relkind IN ('r','v')
    LOOP
        EXECUTE 'GRANT ALL ON ' || quote_ident(r.nspname) || '.' || quote_ident(r.relname) || ' TO webuser';
    END LOOP;
END$$;
EXECUTE executes a prepared statement. A prepared statement only exists in the lifecycle of a session. Therefore, only prepared statements created using PREPARE earlier in the session can be executed.
If the PREPARE statement that created the prepared statement declared parameters, a compatible set of parameters must be passed to the EXECUTE statement; otherwise, an error occurs.

EXECUTE name [ ( parameter [, ...] ) ];
Specifies the name of the statement to be executed.
Specifies a parameter of the prepared statement. It must be an expression that generates a value compatible with the data type specified when the prepared statement was created.

PREPARE insert_reason(integer,character(16),character(100)) AS INSERT INTO tpcds.reason_t1 VALUES($1,$2,$3);
EXECUTE insert_reason(52, 'AAAAAAAADDAAAAAA', 'reason 52');
EXECUTE DIRECT executes an SQL statement on a specified node. Generally, the cluster automatically allocates an SQL statement to proper nodes. EXECUTE DIRECT is mainly used for database maintenance and testing.
EXECUTE DIRECT ON ( nodename [, ... ] ) query ;

Specifies the node name.

Value range: an existing node name.

Specifies the SQL statement that you want to execute.

Query records in table tpcds.customer_address on the dn_6001_6002 node.

EXECUTE DIRECT ON(dn_6001_6002) 'select count(*) from tpcds.customer_address';
 count
-------
 16922
(1 row)
GRANT grants permissions to roles and users.
GRANT is used in the following scenarios:

System permissions are also called user attributes, including SYSADMIN, CREATEDB, CREATEROLE, AUDITADMIN, and LOGIN.

They can be specified only by the CREATE ROLE or ALTER ROLE syntax. The SYSADMIN permission can be granted and revoked using GRANT ALL PRIVILEGE and REVOKE ALL PRIVILEGE, respectively. System permissions cannot be inherited by a user from a role, and cannot be granted to PUBLIC.

Grant permissions related to database objects (tables, views, specified columns, databases, functions, and schemas) to specified roles or users.

GRANT grants specified database object permissions to one or more roles. These permissions are appended to those already granted, if any.

GaussDB(DWS) grants the permissions for objects of certain types to PUBLIC. By default, permissions for tables, table columns, sequences, external data sources, external servers, schemas, and tablespaces are not granted to PUBLIC. However, permissions for the following objects are granted to PUBLIC: CONNECT and CREATE TEMP TABLE permissions for databases, the EXECUTE permission for functions, and the USAGE permission for languages and data types (including domains). An object owner can revoke the default permissions granted to PUBLIC and grant permissions to other users as needed. For security purposes, you are advised to create an object and set its permissions in the same transaction, so that there is no window during which other users can access the object. In addition, you can run the ALTER DEFAULT PRIVILEGES statement to modify the initial default permissions.
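A hedged sketch of this same-transaction pattern (the table name tpcds.new_sales is hypothetical; jack is a user from the earlier examples):

BEGIN;
CREATE TABLE tpcds.new_sales (id int, amount numeric);
-- Set permissions before the transaction commits, so the table never exists without them:
GRANT SELECT ON tpcds.new_sales TO jack;
COMMIT;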
Grant a role's or user's permissions to one or more roles or users. In this case, every role or user can be regarded as a set of one or more database permissions.

If WITH ADMIN OPTION is specified, the member can in turn grant permissions in the role to others, and revoke permissions in the role as well. If a role or user granted with certain permissions is changed or revoked, the permissions inherited from the role or user also change.

A database administrator can grant permissions to and revoke them from any role or user. Roles having the CREATEROLE permission can grant or revoke membership in any role that is not an administrator.

To isolate permissions, GaussDB(DWS) disables WITH GRANT OPTION and TO PUBLIC.
GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER | ANALYZE | ANALYSE } [, ...]
      | ALL [ PRIVILEGES ] }
    ON { [ TABLE ] table_name [, ...]
       | ALL TABLES IN SCHEMA schema_name [, ...] }
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { {{ SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] )} [, ...]
      | ALL [ PRIVILEGES ] ( column_name [, ...] ) }
    ON [ TABLE ] table_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { { CREATE | CONNECT | TEMPORARY | TEMP } [, ...]
      | ALL [ PRIVILEGES ] }
    ON DATABASE database_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { USAGE | ALL [ PRIVILEGES ] }
    ON DOMAIN domain_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];
The current version does not support granting the domain access permission.
GRANT { USAGE | ALL [ PRIVILEGES ] }
    ON FOREIGN DATA WRAPPER fdw_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { USAGE | ALL [ PRIVILEGES ] }
    ON FOREIGN SERVER server_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { EXECUTE | ALL [ PRIVILEGES ] }
    ON { FUNCTION {function_name ( [ {[ argmode ] [ arg_name ] arg_type} [, ...] ] )} [, ...]
       | ALL FUNCTIONS IN SCHEMA schema_name [, ...] }
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { USAGE | ALL [ PRIVILEGES ] }
    ON LANGUAGE lang_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

The current version does not support granting the procedural language access permission.

GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
    ON LARGE OBJECT loid [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];
The current version does not support granting the large object access permission.
GRANT { { SELECT | UPDATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON { SEQUENCE sequence_name [, ...]
       | ALL SEQUENCES IN SCHEMA schema_name [, ...] }
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { CREATE | USAGE | COMPUTE | ALL [ PRIVILEGES ] }
    ON NODE GROUP group_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];

GRANT { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];
When you grant table or view rights to other users, you also need to grant the USAGE permission for the schema that the tables and views belong to. Without this permission, the users granted with the table or view rights can only see the object names, but cannot access them.
GRANT { USAGE | ALL [ PRIVILEGES ] }
    ON TYPE type_name [, ...]
    TO { [ GROUP ] role_name | PUBLIC } [, ...]
    [ WITH GRANT OPTION ];
The current version does not support granting the type access permission.
GRANT role_name [, ...]
    TO role_name [, ...]
    [ WITH ADMIN OPTION ];

GRANT ALL { PRIVILEGES | PRIVILEGE }
    TO role_name;
GRANT grants the following permissions:
Allows SELECT from any column, or the specific columns listed, of the specified table, view, or sequence.

Allows INSERT of a new row into the specified table.

Allows UPDATE of any column, or the specific columns listed, of the specified table. SELECT ... FOR UPDATE and SELECT ... FOR SHARE also require this permission on at least one column, in addition to the SELECT permission.

Allows DELETE of a row from the specified table.

Allows TRUNCATE on the specified table.

To create a foreign key constraint, it is necessary to have this permission on both the referencing and referenced columns.

To create a trigger, you must have the TRIGGER permission on the table or view.

To perform the ANALYZE | ANALYSE operation on a table to collect statistics, you must have the ANALYZE | ANALYSE permission on the table.

Allows the user to connect to the specified database.

Allows the use of the specified function and the use of any operators that are implemented on top of the function.

Allows users to perform elastic computing in a computing sub-cluster on which they have the COMPUTE permission.

Grants all of the available permissions at once. Only system administrators have permission to run GRANT ALL PRIVILEGES.
GRANT parameters are as follows:

Specifies an existing user name.

Specifies an existing table name.

Specifies an existing column name.

Specifies an existing schema name.

Specifies an existing database name.

Specifies an existing function name.

Specifies an existing sequence name.

Specifies an existing domain type.

Specifies an existing foreign data wrapper name.

Specifies an existing language name.

Specifies an existing type name.

Specifies an existing sub-cluster name.

Specifies the parameter mode.

Value range: a string. It must comply with the naming convention.

Specifies the parameter name.

Value range: a string. It must comply with the naming convention.

Specifies the parameter type.

Value range: a string. It must comply with the naming convention.

Specifies the identifier of the large object.

Value range: a string. It must comply with the naming convention.

Specifies a directory name.

Value range: a string. It must comply with the naming convention.
GRANT ALL PRIVILEGES TO joe;

Afterward, user joe has the SYSADMIN permission.

GRANT SELECT ON TABLE tpcds.reason TO joe;

GRANT ALL PRIVILEGES ON tpcds.reason TO kim;

GRANT USAGE ON SCHEMA tpcds TO joe;
After the granting succeeds, user kim has all the permissions of the tpcds.reason table, including the add, delete, modify, and query permissions.
GRANT select (r_reason_sk,r_reason_id,r_reason_desc),update (r_reason_desc) ON tpcds.reason TO joe;

After the granting succeeds, user joe immediately has the query permission on the r_reason_sk and r_reason_id columns in the tpcds.reason table.

GRANT select (r_reason_sk, r_reason_id) ON tpcds.reason TO joe;

GRANT EXECUTE ON FUNCTION func_add_sql TO joe;

GRANT UPDATE ON SEQUENCE serial TO joe;

GRANT create,connect on database gaussdb TO joe;

GRANT USAGE,CREATE ON SCHEMA tpcds TO tpcds_manager;

GRANT joe TO manager WITH ADMIN OPTION;

GRANT manager TO senior_manager;
PREPARE creates a prepared statement.
A prepared statement is a performance-optimizing object on the server. When the PREPARE statement is executed, the specified query is parsed, analyzed, and rewritten. When EXECUTE is executed, the prepared statement is planned and executed. This avoids repetitive parsing and analysis. After the PREPARE statement is created, it exists throughout the database session. Once it is created (even if in a transaction block), it will not be deleted when a transaction is rolled back. It can only be deleted by explicitly invoking DEALLOCATE, or automatically when the session ends.

None

PREPARE name [ ( data_type [, ...] ) ] AS statement;
Specifies the name of a prepared statement. It must be unique in the current session.
Specifies the type of a parameter.

Specifies a SELECT, INSERT, UPDATE, DELETE, or VALUES statement.

PREPARE insert_reason(integer,character(16),character(100)) AS INSERT INTO tpcds.reason_t1 VALUES($1,$2,$3);
EXECUTE insert_reason(52, 'AAAAAAAADDAAAAAA', 'reason 52');
REASSIGN OWNED changes the owners of database objects.

REASSIGN OWNED instructs the system to change the owner of all database objects owned by old_role to new_role.

REASSIGN OWNED BY old_role [, ...] TO new_role;
Specifies the role name of the old owner.
Specifies the role name of the new owner.

Reassign all database objects owned by the joe and jack roles to admin.

REASSIGN OWNED BY joe, jack TO admin;
REVOKE revokes rights from one or more roles.
If a non-owner user of an object attempts to REVOKE rights on the object, the command is executed based on the following rules:

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER | ANALYZE | ANALYSE } [, ...]
      | ALL [ PRIVILEGES ] }
    ON { [ TABLE ] table_name [, ...]
       | ALL TABLES IN SCHEMA schema_name [, ...] }
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { {{ SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] )} [, ...]
      | ALL [ PRIVILEGES ] ( column_name [, ...] ) }
    ON [ TABLE ] table_name [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { { CREATE | CONNECT | TEMPORARY | TEMP } [, ...]
      | ALL [ PRIVILEGES ] }
    ON DATABASE database_name [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { EXECUTE | ALL [ PRIVILEGES ] }
    ON { FUNCTION {function_name ( [ {[ argmode ] [ arg_name ] arg_type} [, ...] ] )} [, ...]
       | ALL FUNCTIONS IN SCHEMA schema_name [, ...] }
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
    ON LARGE OBJECT loid [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | UPDATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON SEQUENCE sequence_name [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema_name [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ GRANT OPTION FOR ]
    { CREATE | USAGE | COMPUTE | ALL [ PRIVILEGES ] }
    ON NODE GROUP group_name [, ...]
    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
    [ CASCADE | RESTRICT ];

REVOKE [ ADMIN OPTION FOR ]
    role_name [, ...] FROM role_name [, ...]
    [ CASCADE | RESTRICT ];

REVOKE ALL { PRIVILEGES | PRIVILEGE } FROM role_name;
The keyword PUBLIC indicates an implicitly defined group that contains all roles.
See the parameter description of the GRANT command for the meaning of the privileges and related parameters.

Permissions of a role include the permissions directly granted to the role, permissions inherited from the parent role, and permissions granted to PUBLIC. Therefore, revoking the SELECT permission on an object from PUBLIC does not necessarily mean that the permission has been revoked from all roles, because the SELECT permission directly granted to roles or inherited from parent roles remains. Similarly, if the SELECT permission is revoked from a user but not from PUBLIC, the user can still run the SELECT statement.

If GRANT OPTION FOR is specified, only the grant option for the right is revoked, not the right itself.

If user A holds the UPDATE right on a table WITH GRANT OPTION and has granted it to user B, the right that user B holds is called a dependent right. If the right or the grant option held by user A is revoked, the dependent right still exists; the dependent right is also revoked if CASCADE is specified.

A user can only revoke rights that were granted directly by that user. If, for example, user A has granted a right with the grant option to user B, and user B has in turn granted it to user C, then user A cannot revoke the right directly from C. However, user A can revoke the grant option held by user B and use CASCADE; in this manner, the right held by user C is automatically revoked. For another example, if both user A and user B have granted the same right to C, A can revoke his own grant but not B's grant, so C still effectively holds the right.

If the role executing REVOKE holds rights indirectly via more than one role membership path, it is unspecified which containing role will be used to execute the command. In such cases, it is best practice to use SET ROLE to become the specific role you want to perform the REVOKE as, and then execute REVOKE. Failing to do so may revoke rights you did not intend to revoke, or revoke nothing at all.
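A hedged sketch of this practice (the role names and password are placeholders; RESET ROLE is assumed to restore the original role):

SET ROLE granting_role PASSWORD '{password}';
REVOKE SELECT ON tpcds.reason FROM grantee_role;
RESET ROLE;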
REVOKE ALL PRIVILEGES FROM joe;

REVOKE USAGE,CREATE ON SCHEMA tpcds FROM tpcds_manager;

Revoke the CONNECT privilege on database gaussdb from user joe.

REVOKE CONNECT ON DATABASE gaussdb FROM joe;

Revoke the membership of role admins from user joe.

REVOKE admins FROM joe;

Revoke all the privileges of user joe on the myView view.

REVOKE ALL PRIVILEGES ON myView FROM joe;

Revoke the public INSERT permission on the customer_t1 table.

REVOKE INSERT ON customer_t1 FROM PUBLIC;

Revoke user joe's USAGE permission on the tpcds schema.

REVOKE USAGE ON SCHEMA tpcds FROM joe;

Revoke the query permissions for r_reason_sk and r_reason_id in the tpcds.reason table from user joe.

REVOKE select (r_reason_sk, r_reason_id) ON tpcds.reason FROM joe;
Transaction Control Language (TCL) controls the time points and effects of database transactions and monitors the database.

GaussDB(DWS) uses the COMMIT or END statement to commit transactions. For details, see COMMIT | END.

GaussDB(DWS) creates a new savepoint in the current transaction. For details, see SAVEPOINT.

GaussDB(DWS) rolls back the current transaction to the last committed state. For details, see ROLLBACK.
+ABORT rolls back the current transaction and cancels the changes in the transaction.
This command is equivalent to ROLLBACK and is present only for historical reasons; ROLLBACK is recommended.

ABORT has no effect outside a transaction, but it will provoke a warning.

ABORT [ WORK | TRANSACTION ];
WORK | TRANSACTION
Optional keywords that have no effect except to improve readability.

Abort a transaction; the update operations it performed are undone.

ABORT;
BEGIN may be used to initiate an anonymous block or a single transaction. This section describes the syntax of BEGIN used to initiate an anonymous block. For details about the BEGIN syntax that initiates transactions, see START TRANSACTION.
An anonymous block is a structure that can dynamically create and execute stored procedure code, instead of permanently storing code as a database object in the database.

None

[DECLARE [declare_statements]]
BEGIN
execution_statements
END;
/

BEGIN [ WORK | TRANSACTION ]
  [
    {
      ISOLATION LEVEL { READ COMMITTED | READ UNCOMMITTED | SERIALIZABLE | REPEATABLE READ }
      | { READ WRITE | READ ONLY }
    } [, ...]
  ];
Declares a variable, including its name and type, for example, sales_cnt int.
Specifies the statements to be executed in an anonymous block.

Value range: an existing function name

BEGIN;

BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN
dbms_output.put_line('Hello');
END;
A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect the information in the log. All data files will be flushed to a disk.
CHECKPOINT forces a transaction log checkpoint. By default, WALs periodically specify checkpoints in a transaction log. You can use gs_guc to set the run-time parameters checkpoint_segments and checkpoint_timeout to adjust the automatic checkpoint interval.

CHECKPOINT;

None

Set a checkpoint.

CHECKPOINT;
COMMIT or END commits all operations of a transaction.
Only the transaction creator or a system administrator can run the COMMIT command. The creation and commit operations must be in different sessions.

{ COMMIT | END } [ WORK | TRANSACTION ];
Commits the current transaction and makes all changes made by the transaction become visible to others.
Optional keywords that have no effect except to improve readability.

Commit the transaction to make all changes permanent.

COMMIT;
COMMIT PREPARED commits a prepared two-phase transaction.
COMMIT PREPARED transaction_id ;
COMMIT PREPARED transaction_id WITH CSN;

Specifies the identifier of the prepared transaction to be committed.

Specifies the commit sequence number of the transaction to be committed. It is a 64-bit, incrementing, unsigned number.
+PREPARE TRANSACTION prepares the current transaction for two-phase commit.
After this command, the transaction is no longer associated with the current session; instead, its state is fully stored on disk, and there is a high probability that it can be committed successfully, even if a database crash occurs before the commit is requested.

Once prepared, a transaction can later be committed or rolled back with COMMIT PREPARED or ROLLBACK PREPARED, respectively. Those commands can be issued from any session, not only the one that executed the original transaction.

From the point of view of the issuing session, PREPARE TRANSACTION is not unlike a ROLLBACK command: after executing it, there is no active current transaction, and the effects of the prepared transaction are no longer visible. (The effects will become visible again if the transaction is committed.)

If the PREPARE TRANSACTION command fails for any reason, it becomes a ROLLBACK and the current transaction is canceled.

PREPARE TRANSACTION transaction_id;
transaction_id
An arbitrary identifier that later identifies this transaction for COMMIT PREPARED or ROLLBACK PREPARED. The identifier must be different from those of current prepared transactions.

Value range: The identifier must be written as a string literal and must be less than 200 bytes long.
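A minimal sketch of the full two-phase cycle (the identifier 'trx_1' and the inserted row are hypothetical):

BEGIN;
INSERT INTO tpcds.reason_t1 VALUES (53, 'AAAAAAAADDAAAAAB', 'reason 53');
PREPARE TRANSACTION 'trx_1';
-- The transaction is now detached from this session; from any session:
COMMIT PREPARED 'trx_1';
-- or cancel it instead:
-- ROLLBACK PREPARED 'trx_1';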
+SAVEPOINT establishes a new savepoint within the current transaction.
A savepoint is a special mark inside a transaction. All commands executed after it was established can be rolled back, restoring the transaction to its state at the time of the savepoint.

SAVEPOINT savepoint_name;
savepoint_name
Specifies the name of a new savepoint.

START TRANSACTION;
INSERT INTO table1 VALUES (1);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (2);
ROLLBACK TO SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (3);
COMMIT;
Query the table content, which should contain 1 and 3 but not 2, because 2 has been rolled back.
START TRANSACTION;
INSERT INTO table1 VALUES (3);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (4);
RELEASE SAVEPOINT my_savepoint;
COMMIT;

Query the table content, which should contain both 3 and 4.
SET TRANSACTION sets the characteristics of the current transaction. It has no effect on any subsequent transactions. Available transaction characteristics include the transaction isolation level and the transaction access mode (read/write or read only).

None

{ SET [ LOCAL ] TRANSACTION | SET SESSION CHARACTERISTICS AS TRANSACTION }
    { ISOLATION LEVEL { READ COMMITTED | READ UNCOMMITTED | SERIALIZABLE | REPEATABLE READ }
    | { READ WRITE | READ ONLY } } [, ...]
Indicates that the specified command takes effect only for the current transaction.
Indicates that the specified parameters take effect for the current session.

Value range: a string. It must comply with the naming convention.

Valid value:

Specifies the transaction access mode (read/write or read only).

Set the isolation level of the current transaction to READ COMMITTED and the access mode to READ ONLY.

START TRANSACTION;
SET LOCAL TRANSACTION ISOLATION LEVEL READ COMMITTED READ ONLY;
COMMIT;
START TRANSACTION starts a transaction. If the isolation level, read/write mode, or deferrable mode is specified, a new transaction will have those characteristics. You can also specify them using SET TRANSACTION.
None

Format 1: START TRANSACTION

START TRANSACTION
  [
    {
      ISOLATION LEVEL { READ COMMITTED | READ UNCOMMITTED | SERIALIZABLE | REPEATABLE READ }
      | { READ WRITE | READ ONLY }
    } [, ...]
  ];

Format 2: BEGIN

BEGIN [ WORK | TRANSACTION ]
  [
    {
      ISOLATION LEVEL { READ COMMITTED | READ UNCOMMITTED | SERIALIZABLE | REPEATABLE READ }
      | { READ WRITE | READ ONLY }
    } [, ...]
  ];
Optional keywords in the BEGIN format; they have no effect.

Specifies the transaction isolation level, which determines the data that a transaction can view when other concurrent transactions exist.

The isolation level of a transaction cannot be reset after the first data-modifying statement (INSERT, DELETE, UPDATE, FETCH, or COPY) has been executed in the transaction.

Valid value:

Specifies the transaction access mode (read/write or read only).

START TRANSACTION;
SELECT * FROM tpcds.reason;
END;

START TRANSACTION ISOLATION LEVEL READ COMMITTED READ WRITE;
SELECT * FROM tpcds.reason;
COMMIT;
Rolls back the current transaction and backs out all updates in the transaction.
ROLLBACK backs out all changes that a transaction made to the database if the transaction fails to be executed due to a fault.

If a ROLLBACK statement is executed outside a transaction, no error occurs, but a warning is displayed.

ROLLBACK [ WORK | TRANSACTION ];
WORK | TRANSACTION
Optional keywords that have no effect except to make the syntax clearer.

Undo all changes in the current transaction.

ROLLBACK;
RELEASE SAVEPOINT destroys a savepoint previously defined in the current transaction.
Destroying a savepoint makes it unavailable as a rollback point, but it has no other user-visible behavior. It does not undo the effects of commands executed after the savepoint was established. To do that, use ROLLBACK TO SAVEPOINT. Destroying a savepoint when it is no longer needed allows the system to reclaim some resources before the transaction ends.

RELEASE SAVEPOINT also destroys all savepoints that were established after the named savepoint was established.

RELEASE [ SAVEPOINT ] savepoint_name;
savepoint_name
Specifies the name of the savepoint you want to destroy.

Create and then destroy a savepoint.

BEGIN;
    INSERT INTO tpcds.table1 VALUES (3);
    SAVEPOINT my_savepoint;
    INSERT INTO tpcds.table1 VALUES (4);
    RELEASE SAVEPOINT my_savepoint;
COMMIT;
ROLLBACK PREPARED cancels a transaction prepared for two-phase commit.

ROLLBACK PREPARED transaction_id ;
transaction_id
Specifies the identifier of the prepared transaction to be rolled back.
+ROLLBACK TO SAVEPOINT rolls back to a savepoint. It implicitly destroys all savepoints that were established after the named savepoint.
Rolls back all commands that were executed after the savepoint was established. The savepoint remains valid and can be rolled back to again later, if needed.

ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name;
savepoint_name
Specifies the savepoint to roll back to.

Undo the effects of the commands executed after my_savepoint was established.

ROLLBACK TO SAVEPOINT my_savepoint;

Cursor positions are not affected by savepoint rollback.

BEGIN;
DECLARE foo CURSOR FOR SELECT 1 UNION SELECT 2;
SAVEPOINT foo;
FETCH 1 FROM foo;
 ?column?
----------
        1
ROLLBACK TO SAVEPOINT foo;
FETCH 1 FROM foo;
 ?column?
----------
        2
COMMIT;
Generalized Inverted Index (GIN) is designed for handling cases where the items to be indexed are composite values, and the queries to be handled by the index need to search for element values in the composite items. For example, the items could be documents, and the queries could be searches for documents containing specific words.
We use the word "item" to refer to a composite value that is to be indexed, and the word "key" to refer to an element value. GIN stores and searches for keys, not item values.

A GIN index stores a set of (key, posting list) pairs, where a posting list is a set of row IDs in which the key occurs. The same row ID can appear in multiple posting lists, since an item can contain more than one key. Each key value is stored only once, so a GIN index is very compact for cases where the same key appears many times.

GIN is generalized in the sense that the GIN access method code does not need to know the specific operations that it accelerates. Instead, it uses custom strategies defined for particular data types. The strategy defines how keys are extracted from indexed items and query conditions, and how to determine whether a row that contains some of the key values in a query actually satisfies the query.

The GIN interface has a high level of abstraction, requiring the access method implementer only to implement the semantics of the data type being accessed. The GIN layer itself takes care of concurrency, logging, and searching the tree structure.

All it takes to get a GIN access method working is to implement multiple user-defined methods, which define the behavior of keys in the tree and the relationships between keys, indexed items, and indexable queries. In short, GIN combines extensibility with generality, code reuse, and a clean interface.

There are four methods that an operator class for GIN must provide:
compare

Compares two keys (not indexed items) and returns an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, equal to, or greater than the second. Null keys are never passed to this function.

extractValue

Returns a palloc'd array of keys given an item to be indexed. The number of returned keys must be stored into *nkeys. If any of the keys can be null, also palloc an array of *nkeys bool fields, store its address at *nullFlags, and set these null flags as needed. *nullFlags can be left NULL (its initial value) if all keys are non-null. The returned value can be NULL if the item contains no keys.

extractQuery

Returns a palloc'd array of keys given a value to be queried; that is, query is the value on the right-hand side of an indexable operator whose left-hand side is the indexed column. n is the strategy number of the operator within the operator class. Often, extractQuery will need to consult n to determine the data type of query and the method it should use to extract key values. The number of returned keys must be stored into *nkeys. If any of the keys can be null, also palloc an array of *nkeys bool fields, store its address at *nullFlags, and set these null flags as needed. *nullFlags can be left NULL (its initial value) if all keys are non-null. The returned value can be NULL if the query contains no keys.

searchMode is an output argument that allows extractQuery to specify details about how the search will be done. If *searchMode is set to GIN_SEARCH_MODE_DEFAULT (which is the value it is initialized to before call), only items that match at least one of the returned keys are considered candidate matches. If *searchMode is set to GIN_SEARCH_MODE_INCLUDE_EMPTY, then in addition to items containing at least one matching key, items that contain no keys at all are considered candidate matches. (This mode is useful for implementing is-subset-of operators, for example.) If *searchMode is set to GIN_SEARCH_MODE_ALL, then all non-null items in the index are considered candidate matches, whether they match any of the returned keys or not.

pmatch is an output argument for use when partial match is supported. To use it, extractQuery must allocate an array of *nkeys Booleans and store its address at *pmatch. Each element of the array should be set to TRUE if the corresponding key requires partial match, FALSE if not. If *pmatch is set to NULL then GIN assumes partial match is not required. The variable is initialized to NULL before call, so this argument can simply be ignored by operator classes that do not support partial match.

extra_data is an output argument that allows extractQuery to pass additional data to the consistent and comparePartial methods. To use it, extractQuery must allocate an array of *nkeys pointers and store its address at *extra_data, then store whatever it wants to into the individual pointers. The variable is initialized to NULL before call, so this argument can simply be ignored by operator classes that do not require extra data. If *extra_data is set, the whole array is passed to the consistent method, and the appropriate element to the comparePartial method.

consistent

Returns TRUE if an indexed item satisfies the query operator with StrategyNumber n (or might satisfy it, if the recheck indication is returned). This function does not have direct access to the indexed item's value, since GIN does not store items explicitly. Rather, what is available is knowledge about which key values extracted from the query appear in a given indexed item. The check array has length nkeys, which is the same as the number of keys previously returned by extractQuery for this query datum. Each element of the check array is TRUE if the indexed item contains the corresponding query key, for example, if (check[i] == TRUE), the i-th key of the extractQuery result array is present in the indexed item. The original query datum is passed in case the consistent method needs to consult it, and so are the queryKeys[] and nullFlags[] arrays previously returned by extractQuery. extra_data is the extra-data array returned by extractQuery, or NULL if none.

When extractQuery returns a null key in queryKeys[], the corresponding check[] element is TRUE if the indexed item contains a null key; that is, the semantics of check[] are like IS NOT DISTINCT FROM. The consistent function can examine the corresponding nullFlags[] element if it needs to tell the difference between a regular value match and a null match.

On success, *recheck should be set to TRUE if the heap tuple needs to be rechecked against the query operator, or FALSE if the index test is exact. That is, a FALSE return value guarantees that the heap tuple does not match the query; a TRUE return value with *recheck set to FALSE guarantees that the heap tuple does match the query; and a TRUE return value with *recheck set to TRUE means that the heap tuple might match the query, so it needs to be fetched and rechecked by evaluating the query operator directly against the originally indexed item.
Optionally, an operator class for GIN can supply the following method:

comparePartial

Compares a partial-match query key to an index key. Returns an integer whose sign indicates the result: less than zero means the index key does not match the query, but the index scan should continue; zero means that the index key matches the query; greater than zero indicates that the index scan should stop because no more matches are possible. The strategy number n of the operator that generated the partial match query is provided, in case its semantics are needed to determine when to end the scan. Also, extra_data is the corresponding element of the extra-data array made by extractQuery, or NULL if none. Null keys are never passed to this function.

To support "partial match" queries, an operator class must provide the comparePartial method, and its extractQuery method must set the pmatch parameter when a partial-match query is encountered. For details, see Partial Match Algorithm.

The actual data types of the various Datum values mentioned in this section vary depending on the operator class. The item values passed to extractValue are always of the operator class's input type, and all key values must be of the class's STORAGE type. The type of the query argument passed to extractQuery, consistent, and triConsistent is whatever is specified as the right-hand input type of the class member operator identified by the strategy number. This need not be the same as the item type, so long as key values of the correct type can be extracted from it.
Internally, a GIN index contains a B-tree index constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and where each tuple in a leaf page contains either a pointer to a B-tree of heap pointers (a "posting tree"), or a simple list of heap pointers (a "posting list") when the list is small enough to fit into a single index tuple along with the key value.

Multi-column GIN indexes are implemented by building a single B-tree over composite values (column number, key value). The key values for different columns can be of different types.

Updating a GIN index tends to be slow because of the intrinsic nature of inverted indexes: inserting or updating one heap row can cause many inserts into the index. GIN can postpone much of this work by inserting new tuples into a temporary, unsorted list of pending entries. After the table is vacuumed, or if the pending list becomes larger than work_mem, the entries are moved to the main GIN data structure using the same bulk insert techniques used during initial index creation. This greatly improves GIN index update speed, even counting the additional vacuum overhead. Moreover, the overhead work can be done by a background process instead of in foreground query processing.

The main disadvantage of this approach is that searches must scan the list of pending entries in addition to searching the regular index, so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update that causes the pending list to become "too large" will incur an immediate cleanup cycle and be much slower than other updates. Proper use of autovacuum can minimize both of these problems.

If consistent response time (of pending-list cleanup and of updates) is more important than update speed, use of pending entries can be disabled by turning off the fastupdate storage parameter for a GIN index. For details, see CREATE INDEX.
GIN can support "partial match" queries, in which the query does not determine an exact match for one or more keys, but the possible matches fall within a narrow range of key values (within the key sorting order determined by the compare support method). The extractQuery method, instead of returning a key value to be matched exactly, returns a key value that is the lower bound of the range to be searched, and sets the pmatch flag true. The key range is then scanned using the comparePartial method. comparePartial must return zero for a matching index key, less than zero for a non-match that is still within the range to be searched, or greater than zero if the index key is past the range that could match.

Create vs. Insert

Insertion into a GIN index can be slow due to the likelihood of many keys being inserted for each item. So, for bulk insertions into a table, it is advisable to drop the GIN index and recreate it after finishing the bulk insertions. The GUC parameters related to GIN index creation and query performance are as follows:

Build time for a GIN index is very sensitive to the maintenance_work_mem setting.

During a series of insertions into an existing GIN index that has fastupdate enabled, the system will clean up the pending-entry list whenever the list grows larger than work_mem. To avoid fluctuations in observed response time, it is desirable to have pending-list cleanup occur in the background (that is, via autovacuum). Foreground cleanup operations can be avoided by increasing work_mem or making autovacuum more aggressive. However, if work_mem is increased, a foreground cleanup (if any) will take longer.
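A hedged sketch of the drop-and-recreate pattern (the table documents, its tsvector column doc_vec, and the index name are hypothetical):

DROP INDEX IF EXISTS doc_vec_gin;
-- ... perform the bulk insertions into documents here ...
SET maintenance_work_mem = '1GB';  -- a larger value speeds up the index build
CREATE INDEX doc_vec_gin ON documents USING GIN (doc_vec);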
The primary goal of developing GIN indexes is to support highly scalable full-text search in GaussDB(DWS). However, a full-text query for frequently occurring words may return a very large result set. In addition, reading many tuples from the disk and sorting them consumes large amounts of resources, which is unacceptable in production.

To facilitate controlled execution of such queries, GIN has a configurable soft upper limit on the number of rows returned: the gin_fuzzy_search_limit configuration parameter. It is set to 0 (meaning no limit) by default. If a non-zero limit is set, the returned set is a subset of the whole result set, chosen at random.
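For instance, a session-level sketch:

SET gin_fuzzy_search_limit = 10000;  -- return at most about 10000 matching rows, chosen at random
-- run the full-text query here, then restore the default:
SET gin_fuzzy_search_limit = 0;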
+Data Query Language (DQL) can obtain data from tables or views.
GaussDB(DWS) provides statements for obtaining data from tables or views. For details, see SELECT.

GaussDB(DWS) provides a statement for creating a table based on query results and inserting the queried data into the table. For details, see SELECT INTO.
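A minimal sketch of the two statement families (the target table reason_backup is hypothetical):

SELECT * FROM tpcds.reason;                           -- read data from a table
SELECT * INTO tpcds.reason_backup FROM tpcds.reason;  -- create a table from the query result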