SELECT + table_alias.<auto-suggest> +FROM test.t1 AS table_alias + WHERE + table_alias.<auto-suggest> = 5 +GROUP BY table_alias.<auto-suggest> +HAVING table_alias.<auto-suggest> = 5 +ORDER BY table_alias.<auto-suggest>+
diff --git a/docs/dws/tool/ALL_META.TXT.json b/docs/dws/tool/ALL_META.TXT.json new file mode 100644 index 00000000..3134cb5a --- /dev/null +++ b/docs/dws/tool/ALL_META.TXT.json @@ -0,0 +1,1642 @@ +[ + { + "uri":"dws_07_0001.html", + "product_code":"dws", + "code":"1", + "des":"This document describes how to use GaussDB(DWS) tools, including client tools, as shown in Table 1, and server tools, as shown in Table 2.The client tools can be obtained", + "doc_type":"tg", + "kw":"Overview,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_07_0002.html", + "product_code":"dws", + "code":"2", + "des":"Log in to the GaussDB(DWS) management console at: https://console.otc.t-systems.com/dws/You can download the following tools:gsql CLI client: The gsql tool package contai", + "doc_type":"tg", + "kw":"Downloading Client Tools,Tool Guide", + "title":"Downloading Client Tools", + "githuburl":"" + }, + { + "uri":"dws_gsql_index.html", + "product_code":"dws", + "code":"3", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"gsql - CLI Client", + "title":"gsql - CLI Client", + "githuburl":"" + }, + { + "uri":"dws_gsql_002.html", + "product_code":"dws", + "code":"4", + "des":"Connect to the database: Use the gsql client to remotely connect to the GaussDB(DWS) database. 
If the gsql client is used to connect to a database, the connection timeout", + "doc_type":"tg", + "kw":"Overview,gsql - CLI Client,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"dws_gsql_003.html", + "product_code":"dws", + "code":"5", + "des":"For details about how to download and install gsql and connect it to the cluster database, see \"Using the gsql CLI Client to Connect to a Cluster\" in the Data Warehouse S", + "doc_type":"tg", + "kw":"Instruction,gsql - CLI Client,Tool Guide", + "title":"Instruction", + "githuburl":"" + }, + { + "uri":"dws_gsql_005.html", + "product_code":"dws", + "code":"6", + "des":"When a database is being connected, run the following commands to obtain the help information:gsql --helpThe following information is displayed:......\nUsage:\n gsql [OPTI", + "doc_type":"tg", + "kw":"Online Help,gsql - CLI Client,Tool Guide", + "title":"Online Help", + "githuburl":"" + }, + { + "uri":"dws_gsql_006.html", + "product_code":"dws", + "code":"7", + "des":"For details about gsql parameters, see Table 1, Table 2, Table 3, and Table 4.", + "doc_type":"tg", + "kw":"Command Reference,gsql - CLI Client,Tool Guide", + "title":"Command Reference", + "githuburl":"" + }, + { + "uri":"dws_gsql_007.html", + "product_code":"dws", + "code":"8", + "des":"This section describes meta-commands provided by gsql after the GaussDB(DWS) database CLI tool is used to connect to a database. A gsql meta-command can be anything that ", + "doc_type":"tg", + "kw":"Meta-Command Reference,gsql - CLI Client,Tool Guide", + "title":"Meta-Command Reference", + "githuburl":"" + }, + { + "uri":"dws_gsql_008.html", + "product_code":"dws", + "code":"9", + "des":"The database kernel slowly runs the initialization statement.Problems are difficult to locate in this scenario. 
Try using the strace Linux trace command.strace gsql -U My", + "doc_type":"tg", + "kw":"Troubleshooting,gsql - CLI Client,Tool Guide", + "title":"Troubleshooting", + "githuburl":"" + }, + { + "uri":"dws_ds_index.html", + "product_code":"dws", + "code":"10", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Data Studio - Integrated Database Development Tool", + "title":"Data Studio - Integrated Database Development Tool", + "githuburl":"" + }, + { + "uri":"DWS_DS_09.html", + "product_code":"dws", + "code":"11", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"About Data Studio", + "title":"About Data Studio", + "githuburl":"" + }, + { + "uri":"dws_07_0012.html", + "product_code":"dws", + "code":"12", + "des":"Data Studio shows major database features using a GUI to simplify database development and application building.Data Studio allows database developers to create and manag", + "doc_type":"tg", + "kw":"Overview,About Data Studio,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_12.html", + "product_code":"dws", + "code":"13", + "des":"This section describes the constraints and limitations for using Data Studio.The filter count and filter status are not displayed in the filter tree.If the SQL statement,", + "doc_type":"tg", + "kw":"Constraints and Limitations,About Data Studio,Tool Guide", + "title":"Constraints and Limitations", + "githuburl":"" + }, + { + "uri":"DWS_DS_13.html", + "product_code":"dws", + "code":"14", + "des":"The following figure shows the 
structure of the Data Studio release package.", + "doc_type":"tg", + "kw":"Structure of the Release Package,About Data Studio,Tool Guide", + "title":"Structure of the Release Package", + "githuburl":"" + }, + { + "uri":"DWS_DS_14.html", + "product_code":"dws", + "code":"15", + "des":"This section describes the minimum system requirements for using Data Studio.OSThe following table lists the OS requirements of Data Studio.BrowserThe following table lis", + "doc_type":"tg", + "kw":"System Requirements,About Data Studio,Tool Guide", + "title":"System Requirements", + "githuburl":"" + }, + { + "uri":"DWS_DS_16.html", + "product_code":"dws", + "code":"16", + "des":"This section describes how to install and configure Data Studio, and how to configure servers for debugging PL/SQL Functions.Topics in this section include:Installing Dat", + "doc_type":"tg", + "kw":"Installing and Configuring Data Studio,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Installing and Configuring Data Studio", + "githuburl":"" + }, + { + "uri":"DWS_DS_19.html", + "product_code":"dws", + "code":"17", + "des":"This section describes the steps to be followed to start Data Studio.The StartDataStudio.bat batch file checks the version of Operating System (OS), Java and Data Studio ", + "doc_type":"tg", + "kw":"Getting Started,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Getting Started", + "githuburl":"" + }, + { + "uri":"DWS_DS_20.html", + "product_code":"dws", + "code":"18", + "des":"This section describes the Data Studio GUI.The Data Studio GUI contains the following:Main Menu provides basic operations of Data Studio.Toolbar contains the access to fr", + "doc_type":"tg", + "kw":"Data Studio GUI,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Data Studio GUI", + "githuburl":"" + }, + { + "uri":"DWS_DS_21.html", + "product_code":"dws", + "code":"19", + "des":"HUAWEI CLOUD Help Center presents technical documents to 
help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Data Studio Menus", + "title":"Data Studio Menus", + "githuburl":"" + }, + { + "uri":"DWS_DS_22.html", + "product_code":"dws", + "code":"20", + "des":"The File menu contains database connection options. Click File in the main menu or press Alt+F to open the File menu.Perform the following steps to stop Data Studio:Alter", + "doc_type":"tg", + "kw":"File,Data Studio Menus,Tool Guide", + "title":"File", + "githuburl":"" + }, + { + "uri":"DWS_DS_23.html", + "product_code":"dws", + "code":"21", + "des":"The Edit menu contains clipboard, Format, Find and Replace, and Search Objects operations to use in the PL/SQL Viewer and SQL Terminal tab. Press Alt+E to open the Edit menu", + "doc_type":"tg", + "kw":"Edit,Data Studio Menus,Tool Guide", + "title":"Edit", + "githuburl":"" + }, + { + "uri":"DWS_DS_24.html", + "product_code":"dws", + "code":"22", + "des":"The Run menu contains options for performing a database operation in the PL/SQL Viewer tab and executing SQL statements in the SQL Terminal tab. Press Alt+R to open the Ru", + "doc_type":"tg", + "kw":"Run,Data Studio Menus,Tool Guide", + "title":"Run", + "githuburl":"" + }, + { + "uri":"DWS_DS_25.html", + "product_code":"dws", + "code":"23", + "des":"The Debug menu contains debugging operations in the PL/SQL Viewer and SQL Terminal tabs. Press Alt+D to open the Debug menu.", + "doc_type":"tg", + "kw":"Debug,Data Studio Menus,Tool Guide", + "title":"Debug", + "githuburl":"" + }, + { + "uri":"DWS_DS_26.html", + "product_code":"dws", + "code":"24", + "des":"The Settings menu contains the option of changing the language. 
Press Alt+G to open the Settings menu.", + "doc_type":"tg", + "kw":"Settings,Data Studio Menus,Tool Guide", + "title":"Settings", + "githuburl":"" + }, + { + "uri":"DWS_DS_27.html", + "product_code":"dws", + "code":"25", + "des":"The Help menu contains the user manual and version information of Data Studio. Press Alt+H to open the Help menu.Visit https://java.com/en/download/help/path.xml to set t", + "doc_type":"tg", + "kw":"Help,Data Studio Menus,Tool Guide", + "title":"Help", + "githuburl":"" + }, + { + "uri":"DWS_DS_28.html", + "product_code":"dws", + "code":"26", + "des":"The following figure shows the Data Studio Toolbar.The toolbar contains the following operations:Adding a ConnectionRemoving a ConnectionConnecting to a DatabaseDisconnec", + "doc_type":"tg", + "kw":"Data Studio Toolbar,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Data Studio Toolbar", + "githuburl":"" + }, + { + "uri":"DWS_DS_29.html", + "product_code":"dws", + "code":"27", + "des":"This section describes the right-click menus of Data Studio.The following figure shows the Object Browser pane.Right-clicking a connection name allows you to select Renam", + "doc_type":"tg", + "kw":"Data Studio Right-Click Menus,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Data Studio Right-Click Menus", + "githuburl":"" + }, + { + "uri":"DWS_DS_32.html", + "product_code":"dws", + "code":"28", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Connection Profiles", + "title":"Connection Profiles", + "githuburl":"" + }, + { + "uri":"DWS_DS_33.html", + "product_code":"dws", + "code":"29", + "des":"When Data Studio is started, the New Database Connection dialog box is displayed by default. 
To perform database operations, Data Studio must be connected to at least one", + "doc_type":"tg", + "kw":"Overview,Connection Profiles,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_34.html", + "product_code":"dws", + "code":"30", + "des":"Perform the following steps to create a database connection.Alternatively, click on the toolbar, or press Ctrl+N to connect to the database. The New Database Connection ", + "doc_type":"tg", + "kw":"Adding a Connection,Connection Profiles,Tool Guide", + "title":"Adding a Connection", + "githuburl":"" + }, + { + "uri":"DWS_DS_35.html", + "product_code":"dws", + "code":"31", + "des":"Perform the following steps to rename a database connection.A Rename Connection dialog box is displayed prompting you to enter the new connection name.The status of the c", + "doc_type":"tg", + "kw":"Renaming a Connection,Connection Profiles,Tool Guide", + "title":"Renaming a Connection", + "githuburl":"" + }, + { + "uri":"DWS_DS_36.html", + "product_code":"dws", + "code":"32", + "des":"Perform the following steps to edit the properties of a database connection.To edit an active connection, you need to disable the connection and then open the connection ", + "doc_type":"tg", + "kw":"Editing a Connection,Connection Profiles,Tool Guide", + "title":"Editing a Connection", + "githuburl":"" + }, + { + "uri":"DWS_DS_37.html", + "product_code":"dws", + "code":"33", + "des":"Follow the steps below to remove an existing database connection:A confirmation dialog box is displayed to remove the connection.The status bar displays the status of the", + "doc_type":"tg", + "kw":"Removing a Connection,Connection Profiles,Tool Guide", + "title":"Removing a Connection", + "githuburl":"" + }, + { + "uri":"DWS_DS_38.html", + "product_code":"dws", + "code":"34", + "des":"Follow the steps below to view the properties of a connection:The status bar displays the status of the completed operation.Properties of the selected connection is displ", + 
"doc_type":"tg", + "kw":"Viewing Connection Properties,Connection Profiles,Tool Guide", + "title":"Viewing Connection Properties", + "githuburl":"" + }, + { + "uri":"DWS_DS_39.html", + "product_code":"dws", + "code":"35", + "des":"Perform the following steps to refresh a database connection.The status of the completed operation is displayed in the status bar.The time taken to refresh a database dep", + "doc_type":"tg", + "kw":"Refreshing a Database Connection,Connection Profiles,Tool Guide", + "title":"Refreshing a Database Connection", + "githuburl":"" + }, + { + "uri":"DWS_DS_40.html", + "product_code":"dws", + "code":"36", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Databases", + "title":"Databases", + "githuburl":"" + }, + { + "uri":"DWS_DS_41.html", + "product_code":"dws", + "code":"37", + "des":"A relational database is a database that has a set of tables which is manipulated in accordance with the relational model of data. 
It contains a set of data objects used ", + "doc_type":"tg", + "kw":"Creating a Database,Databases,Tool Guide", + "title":"Creating a Database", + "githuburl":"" + }, + { + "uri":"DWS_DS_42.html", + "product_code":"dws", + "code":"38", + "des":"You can disconnect all the databases from a connection.Follow the steps below to disconnect a connection from the database:This operation can be performed only when there", + "doc_type":"tg", + "kw":"Disconnecting All Databases,Databases,Tool Guide", + "title":"Disconnecting All Databases", + "githuburl":"" + }, + { + "uri":"DWS_DS_43.html", + "product_code":"dws", + "code":"39", + "des":"You can connect to the database.Follow the steps below to connect a database:This operation can be performed only on an inactive database.The database is connected.The st", + "doc_type":"tg", + "kw":"Connecting to a Database,Databases,Tool Guide", + "title":"Connecting to a Database", + "githuburl":"" + }, + { + "uri":"DWS_DS_44.html", + "product_code":"dws", + "code":"40", + "des":"You can disconnect the database.Follow the steps below to disconnect a database:This operation can be performed only on an active database.A confirmation dialog box is di", + "doc_type":"tg", + "kw":"Disconnecting a Database,Databases,Tool Guide", + "title":"Disconnecting a Database", + "githuburl":"" + }, + { + "uri":"DWS_DS_45.html", + "product_code":"dws", + "code":"41", + "des":"Follow the steps below to rename a database:This operation can be performed only on an inactive database.A Rename Database dialog box is displayed prompting you to provid", + "doc_type":"tg", + "kw":"Renaming a Database,Databases,Tool Guide", + "title":"Renaming a Database", + "githuburl":"" + }, + { + "uri":"DWS_DS_46.html", + "product_code":"dws", + "code":"42", + "des":"Individual or batch drop can be performed on databases. 
Refer to Batch Dropping Objects section for batch drop.Follow the steps below to drop a database:This operation ca", + "doc_type":"tg", + "kw":"Dropping a Database,Databases,Tool Guide", + "title":"Dropping a Database", + "githuburl":"" + }, + { + "uri":"DWS_DS_47.html", + "product_code":"dws", + "code":"43", + "des":"Follow the steps below to view the properties of a database:This operation can be performed only on an active database.The status bar displays the status of the completed", + "doc_type":"tg", + "kw":"Viewing Properties of a Database,Databases,Tool Guide", + "title":"Viewing Properties of a Database", + "githuburl":"" + }, + { + "uri":"DWS_DS_48.html", + "product_code":"dws", + "code":"44", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Schemas", + "title":"Schemas", + "githuburl":"" + }, + { + "uri":"DWS_DS_49.html", + "product_code":"dws", + "code":"45", + "des":"This section describes working with database schemas. All system schemas are grouped under Catalogs and user schemas under Schemas.", + "doc_type":"tg", + "kw":"Overview,Schemas,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_50.html", + "product_code":"dws", + "code":"46", + "des":"In relational database technology, schemas provide a logical classification of objects in the database. 
Some of the objects that a schema may contain include functions/pr", + "doc_type":"tg", + "kw":"Creating a Schema,Schemas,Tool Guide", + "title":"Creating a Schema", + "githuburl":"" + }, + { + "uri":"DWS_DS_51.html", + "product_code":"dws", + "code":"47", + "des":"You can export the schema DDL to export the DDL of functions/procedures, tables, sequences, and views of the schema.Perform the following steps to export the schema DDL:T", + "doc_type":"tg", + "kw":"Exporting Schema DDL,Schemas,Tool Guide", + "title":"Exporting Schema DDL", + "githuburl":"" + }, + { + "uri":"DWS_DS_52.html", + "product_code":"dws", + "code":"48", + "des":"The exported schema DDL and data include the following:DDL of functions/proceduresDDL and data of tablesDDL of viewsDDL of sequencesPerform the following steps to export ", + "doc_type":"tg", + "kw":"Exporting Schema DDL and Data,Schemas,Tool Guide", + "title":"Exporting Schema DDL and Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_53.html", + "product_code":"dws", + "code":"49", + "des":"Follow the steps to rename a schema:You can view the renamed schema in the Object Browser.The status bar displays the status of the completed operation.", + "doc_type":"tg", + "kw":"Renaming a Schema,Schemas,Tool Guide", + "title":"Renaming a Schema", + "githuburl":"" + }, + { + "uri":"DWS_DS_201.html", + "product_code":"dws", + "code":"50", + "des":"Data Studio provides the option to show sequence DDL or allow users to export sequence DDL. 
It provides \"Show DDL\", \"Export DDL\", \"Export DDL and Data\"Follow the steps to", + "doc_type":"tg", + "kw":"Supporting Sequence DDL,Schemas,Tool Guide", + "title":"Supporting Sequence DDL", + "githuburl":"" + }, + { + "uri":"DWS_DS_54.html", + "product_code":"dws", + "code":"51", + "des":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog is displayed.In SQL Preview tab, you can view the SQL query automatically generated for the inp", + "doc_type":"tg", + "kw":"Granting/Revoking a Privilege,Schemas,Tool Guide", + "title":"Granting/Revoking a Privilege", + "githuburl":"" + }, + { + "uri":"DWS_DS_55.html", + "product_code":"dws", + "code":"52", + "des":"Individual or batch dropping can be performed on schemas. Refer to Batch Dropping Objects section for batch dropping.Follow the steps below to drop a schema:A confirmatio", + "doc_type":"tg", + "kw":"Dropping a Schema,Schemas,Tool Guide", + "title":"Dropping a Schema", + "githuburl":"" + }, + { + "uri":"DWS_DS_57.html", + "product_code":"dws", + "code":"53", + "des":"Perform the following steps to create a function/procedure and SQL function:The selected template is displayed in the new tab of Data Studio.The Created function/procedur", + "doc_type":"tg", + "kw":"Creating a Function/Procedure,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Creating a Function/Procedure", + "githuburl":"" + }, + { + "uri":"DWS_DS_58.html", + "product_code":"dws", + "code":"54", + "des":"Perform the following steps to edit a function/procedure or SQL function:The selected function/procedure or SQL function is displayed in the PL/SQL Viewer tab page.If mul", + "doc_type":"tg", + "kw":"Editing a Function/Procedure,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Editing a Function/Procedure", + "githuburl":"" + }, + { + "uri":"DWS_DS_59.html", + "product_code":"dws", + "code":"55", + "des":"Perform the following steps to grant or revoke a permission:The 
Grant/Revoke dialog box is displayed.The Privilege Selection tab is displayed.The SQL Preview tab displays", + "doc_type":"tg", + "kw":"Granting/Revoking a Permission (Function/Procedure),Data Studio - Integrated Database Development To", + "title":"Granting/Revoking a Permission (Function/Procedure)", + "githuburl":"" + }, + { + "uri":"DWS_DS_62.html", + "product_code":"dws", + "code":"56", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Debugging a PL/SQL Function", + "title":"Debugging a PL/SQL Function", + "githuburl":"" + }, + { + "uri":"DWS_DS_621.html", + "product_code":"dws", + "code":"57", + "des":"During debugging, if the connection is lost but the database remains connected to Object Browser, the Connection Error dialog box is displayed with the following options:", + "doc_type":"tg", + "kw":"Overview,Debugging a PL/SQL Function,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_622.html", + "product_code":"dws", + "code":"58", + "des":"Topics in this section include:Using the Breakpoints PaneSetting or Adding a Breakpoint on a LineEnabling or Disabling a Breakpoint on a LineRemoving a Breakpoint from a ", + "doc_type":"tg", + "kw":"Using Breakpoints,Debugging a PL/SQL Function,Tool Guide", + "title":"Using Breakpoints", + "githuburl":"" + }, + { + "uri":"DWS_DS_623.html", + "product_code":"dws", + "code":"59", + "des":"Topics in this section include:Starting DebuggingSingle Stepping a PL/SQL FunctionContinuing the DebuggingViewing CallstackSelect the function that you want to debug in t", + "doc_type":"tg", + "kw":"Controlling Execution,Debugging a PL/SQL Function,Tool Guide", + "title":"Controlling Execution", + "githuburl":"" + }, + { + "uri":"DWS_DS_624.html", + "product_code":"dws", 
+ "code":"60", + "des":"When you use Data Studio, you can examine debugging information through several debugging panes. This section describes how to check the debugging information:Operating o", + "doc_type":"tg", + "kw":"Checking Debugging Information,Debugging a PL/SQL Function,Tool Guide", + "title":"Checking Debugging Information", + "githuburl":"" + }, + { + "uri":"DWS_DS_60.html", + "product_code":"dws", + "code":"61", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Working with Functions/Procedures", + "title":"Working with Functions/Procedures", + "githuburl":"" + }, + { + "uri":"DWS_DS_61.html", + "product_code":"dws", + "code":"62", + "des":"This section provides you with details on working with functions/procedures and SQL functions in Data Studio.Data Studio supports PL/pgSQL and SQL languages for the opera", + "doc_type":"tg", + "kw":"Overview,Working with Functions/Procedures,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_63.html", + "product_code":"dws", + "code":"63", + "des":"Data Studio suggests a list of possible schema names, table names, column names, views, sequences, and functions in the PL/SQL Viewer.Follow the steps below to select a DB", + "doc_type":"tg", + "kw":"Selecting a DB Object in the PL/SQL Viewer,Working with Functions/Procedures,Tool Guide", + "title":"Selecting a DB Object in the PL/SQL Viewer", + "githuburl":"" + }, + { + "uri":"DWS_DS_64.html", + "product_code":"dws", + "code":"64", + "des":"Perform the following steps to export the DDL of a function or procedure:The Data Studio Security Disclaimer dialog box is displayed.The Save As dialog box is displayed.T", + "doc_type":"tg", + "kw":"Exporting the DDL of a Function or Procedure,Working with 
Functions/Procedures,Tool Guide", + "title":"Exporting the DDL of a Function or Procedure", + "githuburl":"" + }, + { + "uri":"DWS_DS_65.html", + "product_code":"dws", + "code":"65", + "des":"Data Studio allows you to view table properties, procedures/functions and SQL functions.Follow the steps below to view table properties:The properties of the selected tab", + "doc_type":"tg", + "kw":"Viewing Object Properties in the PL/SQL Viewer,Working with Functions/Procedures,Tool Guide", + "title":"Viewing Object Properties in the PL/SQL Viewer", + "githuburl":"" + }, + { + "uri":"DWS_DS_66.html", + "product_code":"dws", + "code":"66", + "des":"Individual or batch drop can be performed on functions/procedures. Refer to Batch Dropping Objects section for batch drop.Follow the steps below to drop a function/proced", + "doc_type":"tg", + "kw":"Dropping a Function/Procedure,Working with Functions/Procedures,Tool Guide", + "title":"Dropping a Function/Procedure", + "githuburl":"" + }, + { + "uri":"DWS_DS_67.html", + "product_code":"dws", + "code":"67", + "des":"After you connect to the database, all the stored functions/procedures and tables will be automatically populated in the Object Browser pane. You can use Data Studio to e", + "doc_type":"tg", + "kw":"Executing a Function/Procedure,Working with Functions/Procedures,Tool Guide", + "title":"Executing a Function/Procedure", + "githuburl":"" + }, + { + "uri":"DWS_DS_68.html", + "product_code":"dws", + "code":"68", + "des":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog box is displayed.", + "doc_type":"tg", + "kw":"Granting/Revoking a Privilege,Working with Functions/Procedures,Tool Guide", + "title":"Granting/Revoking a Privilege", + "githuburl":"" + }, + { + "uri":"DWS_DS_69.html", + "product_code":"dws", + "code":"69", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"GaussDB(DWS) Tables", + "title":"GaussDB(DWS) Tables", + "githuburl":"" + }, + { + "uri":"DWS_DS_70.html", + "product_code":"dws", + "code":"70", + "des":"This section describes how to manage tables efficiently.You need to configure all mandatory parameters to complete the operation. Mandatory parameters are marked with an ", + "doc_type":"tg", + "kw":"Table Management Overview,GaussDB(DWS) Tables,Tool Guide", + "title":"Table Management Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_71.html", + "product_code":"dws", + "code":"71", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Creating Regular Table", + "title":"Creating Regular Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_72.html", + "product_code":"dws", + "code":"72", + "des":"This section describes how to create a common table.A table is a logical structure maintained by a database administrator and consists of rows and columns. You can define", + "doc_type":"tg", + "kw":"Overview,Creating Regular Table,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_73.html", + "product_code":"dws", + "code":"73", + "des":"After creating a table, you can add new columns in that table. 
You can also perform the following operations on the existing column only for a Regular table:Creating a Ne", + "doc_type":"tg", + "kw":"Working with Columns,Creating Regular Table,Tool Guide", + "title":"Working with Columns", + "githuburl":"" + }, + { + "uri":"DWS_DS_74.html", + "product_code":"dws", + "code":"74", + "des":"You can perform the following operations after a table is created only for a Regular table:Creating a ConstraintRenaming a ConstraintDropping a ConstraintFollow the steps", + "doc_type":"tg", + "kw":"Working with Constraints,Creating Regular Table,Tool Guide", + "title":"Working with Constraints", + "githuburl":"" + }, + { + "uri":"DWS_DS_75.html", + "product_code":"dws", + "code":"75", + "des":"You can create indexes in a table to search for data efficiently.After a table is created, you can add indexes to it. You can perform the following operations only in a c", + "doc_type":"tg", + "kw":"Managing Indexes,Creating Regular Table,Tool Guide", + "title":"Managing Indexes", + "githuburl":"" + }, + { + "uri":"DWS_DS_76.html", + "product_code":"dws", + "code":"76", + "des":"Foreign tables created using query execution in SQL Terminal or any other tool can be viewed in the Object browser after refresh.GDS Foreign table is denoted with icon b", + "doc_type":"tg", + "kw":"Creating Foreign Table,GaussDB(DWS) Tables,Tool Guide", + "title":"Creating Foreign Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_77.html", + "product_code":"dws", + "code":"77", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Creating Partition Table", + "title":"Creating Partition Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_78.html", + "product_code":"dws", + "code":"78", + "des":"Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The table based on the logic is called a partit", + "doc_type":"tg", + "kw":"Overview,Creating Partition Table,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_79.html", + "product_code":"dws", + "code":"79", + "des":"After creating a table, you can add/modify partitions. You can also perform the following operations on an existing partition:Renaming a PartitionDropping a PartitionFoll", + "doc_type":"tg", + "kw":"Working with Partitions,Creating Partition Table,Tool Guide", + "title":"Working with Partitions", + "githuburl":"" + }, + { + "uri":"DWS_DS_80.html", + "product_code":"dws", + "code":"80", + "des":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog box is displayed.In the SQL Preview tab, you can view the SQL query automatically generated for", + "doc_type":"tg", + "kw":"Grant/Revoke Privilege - Regular/Partition Table,GaussDB(DWS) Tables,Tool Guide", + "title":"Grant/Revoke Privilege - Regular/Partition Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_81.html", + "product_code":"dws", + "code":"81", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Managing Table", + "title":"Managing Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_82.html", + "product_code":"dws", + "code":"82", + "des":"This section describes how to manage tables efficiently.You need to configure all mandatory parameters to complete the operation. Mandatory parameters are marked with ast", + "doc_type":"tg", + "kw":"Overview,Managing Table,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_83.html", + "product_code":"dws", + "code":"83", + "des":"Follow the steps below to rename a table:The Rename Table dialog box is displayed prompting you to provide the new name.Data Studio displays the status of the operation i", + "doc_type":"tg", + "kw":"Renaming a Table,Managing Table,Tool Guide", + "title":"Renaming a Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_84.html", + "product_code":"dws", + "code":"84", + "des":"Follow the steps below to truncate a table:Data Studio prompts you to confirm this operation.A popup message and status bar display the status of the completed operation.", + "doc_type":"tg", + "kw":"Truncating a Table,Managing Table,Tool Guide", + "title":"Truncating a Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_85.html", + "product_code":"dws", + "code":"85", + "des":"Indexes facilitate lookup of records. You need to reindex tables in the following scenarios:The index is corrupted and no longer contains valid data. 
Although in theory thi", + "doc_type":"tg", + "kw":"Reindexing a Table,Managing Table,Tool Guide", + "title":"Reindexing a Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_86.html", + "product_code":"dws", + "code":"86", + "des":"The analyzing table operation collects statistics about tables and table indices and stores the collected information in internal tables of the database where the query ", + "doc_type":"tg", + "kw":"Analyzing a Table,Managing Table,Tool Guide", + "title":"Analyzing a Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_87.html", + "product_code":"dws", + "code":"87", + "des":"Vacuuming table operation reclaims space and makes it available for re-use.Follow the steps below to vacuum the table:The Vacuum Table message and status bar display the ", + "doc_type":"tg", + "kw":"Vacuuming a Table,Managing Table,Tool Guide", + "title":"Vacuuming a Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_88.html", + "product_code":"dws", + "code":"88", + "des":"Follow the steps below to set the description of a table:The Update Table Description dialog box is displayed. It prompts you to set the table description.The status bar ", + "doc_type":"tg", + "kw":"Setting the Table Description,Managing Table,Tool Guide", + "title":"Setting the Table Description", + "githuburl":"" + }, + { + "uri":"DWS_DS_90.html", + "product_code":"dws", + "code":"89", + "des":"Follow the steps below to set a schema:The Set Schema dialog box is displayed that prompts you to select the new schema for the selected table.The status bar displays the ", + "doc_type":"tg", + "kw":"Setting the Schema,Managing Table,Tool Guide", + "title":"Setting the Schema", + "githuburl":"" + }, + { + "uri":"DWS_DS_91.html", + "product_code":"dws", + "code":"90", + "des":"Individual or batch dropping can be performed on tables. 
Refer to Batch Dropping Objects section for batch dropping.This operation removes the complete table structure (i", + "doc_type":"tg", + "kw":"Dropping a Table,Managing Table,Tool Guide", + "title":"Dropping a Table", + "githuburl":"" + }, + { + "uri":"DWS_DS_92.html", + "product_code":"dws", + "code":"91", + "des":"Follow the steps below to view the properties of a table:Data Studio displays the properties (General, Columns, Constraints, and Index) of the selected table in different", + "doc_type":"tg", + "kw":"Viewing Table Properties,Managing Table,Tool Guide", + "title":"Viewing Table Properties", + "githuburl":"" + }, + { + "uri":"DWS_DS_93.html", + "product_code":"dws", + "code":"92", + "des":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog is displayed.", + "doc_type":"tg", + "kw":"Grant/Revoke Privilege,Managing Table,Tool Guide", + "title":"Grant/Revoke Privilege", + "githuburl":"" + }, + { + "uri":"DWS_DS_94.html", + "product_code":"dws", + "code":"93", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Managing Table Data", + "title":"Managing Table Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_96.html", + "product_code":"dws", + "code":"94", + "des":"Perform the following steps to export the table DDL:The Data Studio Security Disclaimer dialog box is displayed.The Save As dialog box is displayed.To cancel the export o", + "doc_type":"tg", + "kw":"Exporting Table DDL,Managing Table Data,Tool Guide", + "title":"Exporting Table DDL", + "githuburl":"" + }, + { + "uri":"DWS_DS_97.html", + "product_code":"dws", + "code":"95", + "des":"The exported table DDL and data include the following:DDL of the tableColumns and rows of the tablePerform the following steps to export the table DDL and data:The Data S", + "doc_type":"tg", + "kw":"Exporting Table DDL and Data,Managing Table Data,Tool Guide", + "title":"Exporting Table DDL and Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_98.html", + "product_code":"dws", + "code":"96", + "des":"Perform the following steps to export table data:The Export Table Data dialog box is displayed with the following options:Format: Table data can be exported in Excel (xls", + "doc_type":"tg", + "kw":"Exporting Table Data,Managing Table Data,Tool Guide", + "title":"Exporting Table Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_99.html", + "product_code":"dws", + "code":"97", + "des":"Follow the steps below to show the DDL query of a table:The DDL of the selected table is displayed.A new terminal is opened each time the Show DDL operation is executed.M", + "doc_type":"tg", + "kw":"Showing DDL,Managing Table Data,Tool Guide", + "title":"Showing DDL", + "githuburl":"" + }, + { + "uri":"DWS_DS_100.html", + "product_code":"dws", + "code":"98", + "des":"Prerequisites:If the definition of the source file does not match that of the target table, modify the properties of the 
target table in the Import Table Data dialog box.", + "doc_type":"tg", + "kw":"Importing Table Data,Managing Table Data,Tool Guide", + "title":"Importing Table Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_101.html", + "product_code":"dws", + "code":"99", + "des":"Follow the steps to view table data:The View Table Data tab is displayed where you can view the table data information.Toolbar menu in the View Table Data window:Icons in", + "doc_type":"tg", + "kw":"Viewing Table Data,Managing Table Data,Tool Guide", + "title":"Viewing Table Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_102.html", + "product_code":"dws", + "code":"100", + "des":"Follow the steps below to edit table data:The Edit Table data tab is displayed.Refer to Viewing Table Data for description on copy and search toolbar options.Data Studio val", + "doc_type":"tg", + "kw":"Editing Table Data,Managing Table Data,Tool Guide", + "title":"Editing Table Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_103.html", + "product_code":"dws", + "code":"101", + "des":"Data Studio allows you to edit temporary tables. Temporary tables are deleted automatically when you close the connection that was used to create the table.Ensure that co", + "doc_type":"tg", + "kw":"Editing Temporary Tables,GaussDB(DWS) Tables,Tool Guide", + "title":"Editing Temporary Tables", + "githuburl":"" + }, + { + "uri":"DWS_DS_104.html", + "product_code":"dws", + "code":"102", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Sequences", + "title":"Sequences", + "githuburl":"" + }, + { + "uri":"DWS_DS_105.html", + "product_code":"dws", + "code":"103", + "des":"Follow the steps below to create a sequence:The Create New Sequence dialog box is displayed.Enter a name in the Sequence Name field.Select theCase check box to retain the", + "doc_type":"tg", + "kw":"Creating Sequence,Sequences,Tool Guide", + "title":"Creating Sequence", + "githuburl":"" + }, + { + "uri":"DWS_DS_106.html", + "product_code":"dws", + "code":"104", + "des":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog is displayed.In the SQL Preview tab, you can view the SQL query automatically generated for the", + "doc_type":"tg", + "kw":"Grant/Revoke Privilege,Sequences,Tool Guide", + "title":"Grant/Revoke Privilege", + "githuburl":"" + }, + { + "uri":"DWS_DS_107.html", + "product_code":"dws", + "code":"105", + "des":"You can perform the following operations on an existing sequence:Granting/Revoking a PrivilegeDropping a SequenceDropping a Sequence CascadeIndividual or batch dropping c", + "doc_type":"tg", + "kw":"Working with Sequences,Sequences,Tool Guide", + "title":"Working with Sequences", + "githuburl":"" + }, + { + "uri":"DWS_DS_108.html", + "product_code":"dws", + "code":"106", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Views", + "title":"Views", + "githuburl":"" + }, + { + "uri":"DWS_DS_109.html", + "product_code":"dws", + "code":"107", + "des":"Follow the steps below to create a new view:The DDL template for the view is displayed in the SQL Terminal tab.You can view the new view in the Object Browser.The status ", + "doc_type":"tg", + "kw":"Creating a View,Views,Tool Guide", + "title":"Creating a View", + "githuburl":"" + }, + { + "uri":"DWS_DS_110.html", + "product_code":"dws", + "code":"108", + "des":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog box is displayed.In the SQL Preview tab, you can view the SQL query automatically generated for", + "doc_type":"tg", + "kw":"Granting/Revoking a Privilege,Views,Tool Guide", + "title":"Granting/Revoking a Privilege", + "githuburl":"" + }, + { + "uri":"DWS_DS_111.html", + "product_code":"dws", + "code":"109", + "des":"Views can be created to restrict access to specific rows or columns of a table. A view can be created from one or more tables and is determined by the query used to creat", + "doc_type":"tg", + "kw":"Working with Views,Views,Tool Guide", + "title":"Working with Views", + "githuburl":"" + }, + { + "uri":"DWS_DS_115.html", + "product_code":"dws", + "code":"110", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Users/Roles", + "title":"Users/Roles", + "githuburl":"" + }, + { + "uri":"DWS_DS_116.html", + "product_code":"dws", + "code":"111", + "des":"A database is used by many users, and the users are grouped for management convenience. 
A database role can be one or a group of database users.Users and roles have simil", + "doc_type":"tg", + "kw":"Creating a User/Role,Users/Roles,Tool Guide", + "title":"Creating a User/Role", + "githuburl":"" + }, + { + "uri":"DWS_DS_117.html", + "product_code":"dws", + "code":"112", + "des":"You can perform the following operations on an existing user/role:Dropping a User/RoleViewing/Editing User/Role PropertiesViewing the User/Role DDLFollow the steps below ", + "doc_type":"tg", + "kw":"Working with Users/Roles,Users/Roles,Tool Guide", + "title":"Working with Users/Roles", + "githuburl":"" + }, + { + "uri":"DWS_DS_118.html", + "product_code":"dws", + "code":"113", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"SQL Terminal", + "title":"SQL Terminal", + "githuburl":"" + }, + { + "uri":"DWS_DS_119.html", + "product_code":"dws", + "code":"114", + "des":"You can open multiple SQL Terminal tabs in Data Studio to execute multiple SQL statements for query in the current SQL Terminal tab. Perform the following steps to open a", + "doc_type":"tg", + "kw":"Opening Multiple SQL Terminal Tabs,SQL Terminal,Tool Guide", + "title":"Opening Multiple SQL Terminal Tabs", + "githuburl":"" + }, + { + "uri":"DWS_DS_120.html", + "product_code":"dws", + "code":"115", + "des":"Data Studio allows viewing and managing frequently executed SQL queries. 
The history of executed SQL queries is saved only in SQL Terminal.Perform the following steps to ", + "doc_type":"tg", + "kw":"Managing the History of Executed SQL Queries,SQL Terminal,Tool Guide", + "title":"Managing the History of Executed SQL Queries", + "githuburl":"" + }, + { + "uri":"DWS_DS_121.html", + "product_code":"dws", + "code":"116", + "des":"Follow the steps to open an SQL script:If the SQL Terminal has existing content, then there will be an option to overwrite the existing content or append content to it.Th", + "doc_type":"tg", + "kw":"Opening and Saving SQL Scripts,SQL Terminal,Tool Guide", + "title":"Opening and Saving SQL Scripts", + "githuburl":"" + }, + { + "uri":"DWS_DS_122.html", + "product_code":"dws", + "code":"117", + "des":"Data Studio allows you to view table properties and functions/procedures.Follow the steps to view table properties:The table properties are read-only.Follow the steps to ", + "doc_type":"tg", + "kw":"Viewing Object Properties in the SQL Terminal,SQL Terminal,Tool Guide", + "title":"Viewing Object Properties in the SQL Terminal", + "githuburl":"" + }, + { + "uri":"DWS_DS_123.html", + "product_code":"dws", + "code":"118", + "des":"Data Studio allows you to cancel the execution of an SQL query being executed in the SQL Terminal.Follow the steps to cancel execution of an SQL query:Alternatively, you", + "doc_type":"tg", + "kw":"Canceling the Execution of SQL Queries,SQL Terminal,Tool Guide", + "title":"Canceling the Execution of SQL Queries", + "githuburl":"" + }, + { + "uri":"DWS_DS_124.html", + "product_code":"dws", + "code":"119", + "des":"Data Studio supports formatting and highlighting of SQL queries and PL/SQL statements.Follow the steps to format PL/SQL statements:Alternatively, use the key combination ", + "doc_type":"tg", + "kw":"Formatting of SQL Queries,SQL Terminal,Tool Guide", + "title":"Formatting of SQL Queries", + "githuburl":"" + }, + { + "uri":"DWS_DS_125.html", + "product_code":"dws", + "code":"120", + 
"des":"Data Studio suggests a list of possible schema names, table names and column names, and views in the SQL Terminal.Follow the steps below to select a DB object:On selection", + "doc_type":"tg", + "kw":"Selecting a DB Object in the SQL Terminal,SQL Terminal,Tool Guide", + "title":"Selecting a DB Object in the SQL Terminal", + "githuburl":"" + }, + { + "uri":"DWS_DS_126.html", + "product_code":"dws", + "code":"121", + "des":"The execution plan shows how the table(s) referenced by the SQL statement will be scanned (plain sequential scan and index scan).The SQL statement execution cost is the e", + "doc_type":"tg", + "kw":"Viewing the Query Execution Plan and Cost,SQL Terminal,Tool Guide", + "title":"Viewing the Query Execution Plan and Cost", + "githuburl":"" + }, + { + "uri":"DWS_DS_127.html", + "product_code":"dws", + "code":"122", + "des":"Visual Explain plan displays a graphical representation of the SQL query using information from the extended JSON format. This helps to refine query to enhance query and ", + "doc_type":"tg", + "kw":"Viewing the Query Execution Plan and Cost Graphically,SQL Terminal,Tool Guide", + "title":"Viewing the Query Execution Plan and Cost Graphically", + "githuburl":"" + }, + { + "uri":"DWS_DS_128.html", + "product_code":"dws", + "code":"123", + "des":"The Auto Commit option is available in the Preferences pane. For details, see Transaction.If Auto Commit is enabled, the Commit and Rollback functions are disabled. 
Trans", + "doc_type":"tg", + "kw":"Using SQL Terminals,SQL Terminal,Tool Guide", + "title":"Using SQL Terminals", + "githuburl":"" + }, + { + "uri":"DWS_DS_129.html", + "product_code":"dws", + "code":"124", + "des":"You can export the results of an SQL query into a CSV, Text or Binary file.This section contains the following topics:Exporting all dataExporting current page dataThe fol", + "doc_type":"tg", + "kw":"Exporting Query Results,SQL Terminal,Tool Guide", + "title":"Exporting Query Results", + "githuburl":"" + }, + { + "uri":"DWS_DS_130.html", + "product_code":"dws", + "code":"125", + "des":"Data Studio allows you to reuse an existing SQL Terminal connection or create a new SQL Terminal connection for execution plan and cost, visual explain plan, and operatio", + "doc_type":"tg", + "kw":"Managing SQL Terminal Connections,SQL Terminal,Tool Guide", + "title":"Managing SQL Terminal Connections", + "githuburl":"" + }, + { + "uri":"DWS_DS_131.html", + "product_code":"dws", + "code":"126", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Batch Operation", + "title":"Batch Operation", + "githuburl":"" + }, + { + "uri":"DWS_DS_132.html", + "product_code":"dws", + "code":"127", + "des":"You can view accessible database objects in the navigation tree in Object Browser. Schema are displayed under databases, and tables are displayed under schemas.Object Bro", + "doc_type":"tg", + "kw":"Overview,Batch Operation,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_133.html", + "product_code":"dws", + "code":"128", + "des":"The batch drop operation allows you to drop multiple objects. 
This operation also applies to searched objects.Batch drop is allowed only within a database.An error is rep", + "doc_type":"tg", + "kw":"Batch Dropping Objects,Batch Operation,Tool Guide", + "title":"Batch Dropping Objects", + "githuburl":"" + }, + { + "uri":"DWS_DS_134.html", + "product_code":"dws", + "code":"129", + "des":"The batch grant/revoke operation allows you select multiple objects to grant/revoke privileges. You can also perform batch grant/revoke operation on searched objects.This", + "doc_type":"tg", + "kw":"Granting/Revoking Privileges,Batch Operation,Tool Guide", + "title":"Granting/Revoking Privileges", + "githuburl":"" + }, + { + "uri":"DWS_DS_135.html", + "product_code":"dws", + "code":"130", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Personalizing Data Studio", + "title":"Personalizing Data Studio", + "githuburl":"" + }, + { + "uri":"DWS_DS_136.html", + "product_code":"dws", + "code":"131", + "des":"This section provides details on how to personalize Data Studio using preferences settings.", + "doc_type":"tg", + "kw":"Overview,Personalizing Data Studio,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_137.html", + "product_code":"dws", + "code":"132", + "des":"This section describes how to customize shortcut keys.You can customize Data Studio shortcut keys.Perform the following steps to set or modify shortcut keys:The Preferenc", + "doc_type":"tg", + "kw":"General,Personalizing Data Studio,Tool Guide", + "title":"General", + "githuburl":"" + }, + { + "uri":"DWS_DS_138.html", + "product_code":"dws", + "code":"133", + "des":"This section describes how to customize syntax highlighting, SQL history information, templates, and formatters.Perform the following steps to customize 
SQL highlighting:", + "doc_type":"tg", + "kw":"Editor,Personalizing Data Studio,Tool Guide", + "title":"Editor", + "githuburl":"" + }, + { + "uri":"DWS_DS_139.html", + "product_code":"dws", + "code":"134", + "des":"Perform the following steps to configure Data Studio encoding and file encoding:The Preferences dialog box is displayed.The Session Setting pane is displayed.Data Studio ", + "doc_type":"tg", + "kw":"Environment,Personalizing Data Studio,Tool Guide", + "title":"Environment", + "githuburl":"" + }, + { + "uri":"DWS_DS_141.html", + "product_code":"dws", + "code":"135", + "des":"This section describes how to customize the settings in the Query Results pane, including the column width, number of records to be obtained, and copy of column headers o", + "doc_type":"tg", + "kw":"Result Management,Personalizing Data Studio,Tool Guide", + "title":"Result Management", + "githuburl":"" + }, + { + "uri":"DWS_DS_142.html", + "product_code":"dws", + "code":"136", + "des":"This section describes how to customize the display of passwords and security disclaimers.You can configure whether to display the option of saving password permanently i", + "doc_type":"tg", + "kw":"Security,Personalizing Data Studio,Tool Guide", + "title":"Security", + "githuburl":"" + }, + { + "uri":"DWS_DS_144.html", + "product_code":"dws", + "code":"137", + "des":"The loading and operation performance of Data Studio depends on the number of objects to be loaded in Object Browser, including tables, views, and columns.Memory consumpt", + "doc_type":"tg", + "kw":"Performance Specifications,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Performance Specifications", + "githuburl":"" + }, + { + "uri":"DWS_DS_146.html", + "product_code":"dws", + "code":"138", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Security Management", + "title":"Security Management", + "githuburl":"" + }, + { + "uri":"DWS_DS_147.html", + "product_code":"dws", + "code":"139", + "des":"Ensure that the operating system and the required software (refer to System Requirements for more details) are updated with the latest patches to prevent vulnerabilitie", + "doc_type":"tg", + "kw":"Overview,Security Management,Tool Guide", + "title":"Overview", + "githuburl":"" + }, + { + "uri":"DWS_DS_148.html", + "product_code":"dws", + "code":"140", + "des":"The following information is critical to the security management for Data Studio:When you log into a database, Data Studio displays a dialog box that describes the last s", + "doc_type":"tg", + "kw":"Login History,Security Management,Tool Guide", + "title":"Login History", + "githuburl":"" + }, + { + "uri":"DWS_DS_149.html", + "product_code":"dws", + "code":"141", + "des":"The following information is critical to manage security for Data Studio:Your password will expire within 7 days from the date of notification. 
If the password expires, c", + "doc_type":"tg", + "kw":"Password Expiry Notification,Security Management,Tool Guide", + "title":"Password Expiry Notification", + "githuburl":"" + }, + { + "uri":"DWS_DS_151.html", + "product_code":"dws", + "code":"142", + "des":"The following information is critical to manage security for Data Studio:While running Data Studio in a trusted environment, user must ensure to prevent malicious softwar", + "doc_type":"tg", + "kw":"Securing the Application In-Memory Data,Security Management,Tool Guide", + "title":"Securing the Application In-Memory Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_152.html", + "product_code":"dws", + "code":"143", + "des":"The following information is critical to manage security for Data Studio:You can ensure encryption of auto saved data by enabling encryption option from Preferences page.", + "doc_type":"tg", + "kw":"Data Encryption for Saved Data,Security Management,Tool Guide", + "title":"Data Encryption for Saved Data", + "githuburl":"" + }, + { + "uri":"DWS_DS_153.html", + "product_code":"dws", + "code":"144", + "des":"The following information is critical to manage security for Data Studio:SQL History scripts are not encrypted.The SQL History list does not display sensitive queries tha", + "doc_type":"tg", + "kw":"SQL History,Security Management,Tool Guide", + "title":"SQL History", + "githuburl":"" + }, + { + "uri":"DWS_DS_154.html", + "product_code":"dws", + "code":"145", + "des":"The information about using SSL certificates is for reference only. For details about the certificates and the security guidelines for managing the certificates and relat", + "doc_type":"tg", + "kw":"SSL Certificates,Security Management,Tool Guide", + "title":"SSL Certificates", + "githuburl":"" + }, + { + "uri":"DWS_DS_145.html", + "product_code":"dws", + "code":"146", + "des":"The Data Studio cannot be opened for a long time.Solution: Check whether JRE is found. Verify the Java path configured in the environment. 
For details about the supported", + "doc_type":"tg", + "kw":"Troubleshooting,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"Troubleshooting", + "githuburl":"" + }, + { + "uri":"DWS_DS_155.html", + "product_code":"dws", + "code":"147", + "des":"What do I need to check if my connection fails?Answer: Check the following items:Check whether Connection Properties are properly configured.Check whether the server vers", + "doc_type":"tg", + "kw":"FAQs,Data Studio - Integrated Database Development Tool,Tool Guide", + "title":"FAQs", + "githuburl":"" + }, + { + "uri":"dws_gds_index.html", + "product_code":"dws", + "code":"148", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"GDS: Parallel Data Loader", + "title":"GDS: Parallel Data Loader", + "githuburl":"" + }, + { + "uri":"dws_07_0759.html", + "product_code":"dws", + "code":"149", + "des":"GaussDB(DWS) uses GDS to allocate the source data for parallel data import. Deploy GDS on the data server.If a large volume of data is stored on multiple data servers, in", + "doc_type":"tg", + "kw":"Installing, Configuring, and Starting GDS,GDS: Parallel Data Loader,Tool Guide", + "title":"Installing, Configuring, and Starting GDS", + "githuburl":"" + }, + { + "uri":"dws_07_0128.html", + "product_code":"dws", + "code":"150", + "des":"Stop GDS after data is imported successfully.If GDS is started using the gds command, perform the following operations to stop GDS:Query the GDS process ID:ps -ef|grep gd", + "doc_type":"tg", + "kw":"Stopping GDS,GDS: Parallel Data Loader,Tool Guide", + "title":"Stopping GDS", + "githuburl":"" + }, + { + "uri":"dws_07_0692.html", + "product_code":"dws", + "code":"151", + "des":"The data servers reside on the same intranet as the cluster. 
Their IP addresses are 192.168.0.90 and 192.168.0.91. Source data files are in CSV format.Create the target t", + "doc_type":"tg", + "kw":"Example of Importing Data Using GDS,GDS: Parallel Data Loader,Tool Guide", + "title":"Example of Importing Data Using GDS", + "githuburl":"" + }, + { + "uri":"gds_cmd_reference.html", + "product_code":"dws", + "code":"152", + "des":"gds is used to import and export data of GaussDB(DWS).The -d and -H parameters are mandatory and option is optional. gds provides the file data from DIRECTORY for GaussDB", + "doc_type":"tg", + "kw":"gds,GDS: Parallel Data Loader,Tool Guide", + "title":"gds", + "githuburl":"" + }, + { + "uri":"dws_07_0129.html", + "product_code":"dws", + "code":"153", + "des":"gds_ctl.py can be used to start and stop gds if gds.conf has been configured.Run the following commands on Linux OS: You need to ensure that the directory structure is as", + "doc_type":"tg", + "kw":"gds_ctl.py,GDS: Parallel Data Loader,Tool Guide", + "title":"gds_ctl.py", + "githuburl":"" + }, + { + "uri":"dws_07_0056.html", + "product_code":"dws", + "code":"154", + "des":"Handle errors that occurred during data import.Errors that occur when data is imported are divided into data format errors and non-data format errors.Data format errorWhe", + "doc_type":"tg", + "kw":"Handling Import Errors,GDS: Parallel Data Loader,Tool Guide", + "title":"Handling Import Errors", + "githuburl":"" + }, + { + "uri":"dws_07_0100.html", + "product_code":"dws", + "code":"155", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Server Tool", + "title":"Server Tool", + "githuburl":"" + }, + { + "uri":"dws_07_0101.html", + "product_code":"dws", + "code":"156", + "des":"gs_dump is a tool provided by GaussDB(DWS) to export database information. You can export a database or its objects, such as schemas, tables, and views. The database can be", + "doc_type":"tg", + "kw":"gs_dump,Server Tool,Tool Guide", + "title":"gs_dump", + "githuburl":"" + }, + { + "uri":"dws_07_0102.html", + "product_code":"dws", + "code":"157", + "des":"gs_dumpall is a tool provided by GaussDB(DWS) to export all database information, including the data of the default postgres database, data of user-specified databases, a", + "doc_type":"tg", + "kw":"gs_dumpall,Server Tool,Tool Guide", + "title":"gs_dumpall", + "githuburl":"" + }, + { + "uri":"dws_07_0103.html", + "product_code":"dws", + "code":"158", + "des":"gs_restore is a tool provided by GaussDB(DWS) to import data that was exported using gs_dump. It can also be used to import files that were exported using gs_dump.It has ", + "doc_type":"tg", + "kw":"gs_restore,Server Tool,Tool Guide", + "title":"gs_restore", + "githuburl":"" + }, + { + "uri":"dws_07_0104.html", + "product_code":"dws", + "code":"159", + "des":"gds_check is used to check the GDS deployment environment, including the OS parameters, network environment, and disk usage. It also supports the recovery of system param", + "doc_type":"tg", + "kw":"gds_check,Server Tool,Tool Guide", + "title":"gds_check", + "githuburl":"" + }, + { + "uri":"dws_07_0106.html", + "product_code":"dws", + "code":"160", + "des":"gds_install is a script tool used to install GDS in batches, improving GDS deployment efficiency.Set environment variables before executing the script. 
For details, see \"", + "doc_type":"tg", + "kw":"gds_install,Server Tool,Tool Guide", + "title":"gds_install", + "githuburl":"" + }, + { + "uri":"dws_07_0107.html", + "product_code":"dws", + "code":"161", + "des":"gds_uninstall is a script tool used to uninstall GDS in batches.Set environment variables before executing the script. For details, see \"Importing Data > Using a Foreign ", + "doc_type":"tg", + "kw":"gds_uninstall,Server Tool,Tool Guide", + "title":"gds_uninstall", + "githuburl":"" + }, + { + "uri":"dws_07_0105.html", + "product_code":"dws", + "code":"162", + "des":"gds_ctl is a script tool used for starting or stopping GDS service processes in batches. You can start or stop GDS service processes, which use the same port, on multiple", + "doc_type":"tg", + "kw":"gds_ctl,Server Tool,Tool Guide", + "title":"gds_ctl", + "githuburl":"" + }, + { + "uri":"dws_07_0108.html", + "product_code":"dws", + "code":"163", + "des":"During cluster installation, you need to execute commands and transfer files among hosts in the cluster. Therefore, mutual trust relationships must be established among t", + "doc_type":"tg", + "kw":"gs_sshexkey,Server Tool,Tool Guide", + "title":"gs_sshexkey", + "githuburl":"" + }, + { + "uri":"dws_07_0200.html", + "product_code":"dws", + "code":"164", + "des":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "doc_type":"tg", + "kw":"Change History,Tool Guide", + "title":"Change History", + "githuburl":"" + } +] \ No newline at end of file diff --git a/docs/dws/tool/CLASS.TXT.json b/docs/dws/tool/CLASS.TXT.json new file mode 100644 index 00000000..fcfb17d2 --- /dev/null +++ b/docs/dws/tool/CLASS.TXT.json @@ -0,0 +1,1478 @@ +[ + { + "desc":"This document describes how to use GaussDB(DWS) tools, including client tools, as shown in Table 1, and server tools, as shown in Table 2.The client tools can be obtained", + "product_code":"dws", + "title":"Overview", + "uri":"dws_07_0001.html", + "doc_type":"tg", + "p_code":"", + "code":"1" + }, + { + "desc":"Log in to the GaussDB(DWS) management console at: https://console.otc.t-systems.com/dws/You can download the following tools:gsql CLI client: The gsql tool package contai", + "product_code":"dws", + "title":"Downloading Client Tools", + "uri":"dws_07_0002.html", + "doc_type":"tg", + "p_code":"", + "code":"2" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"gsql - CLI Client", + "uri":"dws_gsql_index.html", + "doc_type":"tg", + "p_code":"", + "code":"3" + }, + { + "desc":"Connect to the database: Use the gsql client to remotely connect to the GaussDB(DWS) database. 
If the gsql client is used to connect to a database, the connection timeout", + "product_code":"dws", + "title":"Overview", + "uri":"dws_gsql_002.html", + "doc_type":"tg", + "p_code":"3", + "code":"4" + }, + { + "desc":"For details about how to download and install gsql and connect it to the cluster database, see \"Using the gsql CLI Client to Connect to a Cluster\" in the Data Warehouse S", + "product_code":"dws", + "title":"Instruction", + "uri":"dws_gsql_003.html", + "doc_type":"tg", + "p_code":"3", + "code":"5" + }, + { + "desc":"When a database is being connected, run the following commands to obtain the help information:gsql --helpThe following information is displayed:......\nUsage:\n gsql [OPTI", + "product_code":"dws", + "title":"Online Help", + "uri":"dws_gsql_005.html", + "doc_type":"tg", + "p_code":"3", + "code":"6" + }, + { + "desc":"For details about gsql parameters, see Table 1, Table 2, Table 3, and Table 4.", + "product_code":"dws", + "title":"Command Reference", + "uri":"dws_gsql_006.html", + "doc_type":"tg", + "p_code":"3", + "code":"7" + }, + { + "desc":"This section describes meta-commands provided by gsql after the GaussDB(DWS) database CLI tool is used to connect to a database. A gsql meta-command can be anything that ", + "product_code":"dws", + "title":"Meta-Command Reference", + "uri":"dws_gsql_007.html", + "doc_type":"tg", + "p_code":"3", + "code":"8" + }, + { + "desc":"The database kernel slowly runs the initialization statement.Problems are difficult to locate in this scenario. Try using the strace Linux trace command.strace gsql -U My", + "product_code":"dws", + "title":"Troubleshooting", + "uri":"dws_gsql_008.html", + "doc_type":"tg", + "p_code":"3", + "code":"9" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Data Studio - Integrated Database Development Tool", + "uri":"dws_ds_index.html", + "doc_type":"tg", + "p_code":"", + "code":"10" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"About Data Studio", + "uri":"DWS_DS_09.html", + "doc_type":"tg", + "p_code":"10", + "code":"11" + }, + { + "desc":"Data Studio shows major database features using a GUI to simplify database development and application building.Data Studio allows database developers to create and manag", + "product_code":"dws", + "title":"Overview", + "uri":"dws_07_0012.html", + "doc_type":"tg", + "p_code":"11", + "code":"12" + }, + { + "desc":"This section describes the constraints and limitations for using Data Studio.The filter count and filter status are not displayed in the filter tree.If the SQL statement,", + "product_code":"dws", + "title":"Constraints and Limitations", + "uri":"DWS_DS_12.html", + "doc_type":"tg", + "p_code":"11", + "code":"13" + }, + { + "desc":"The following figure shows the structure of the Data Studio release package.", + "product_code":"dws", + "title":"Structure of the Release Package", + "uri":"DWS_DS_13.html", + "doc_type":"tg", + "p_code":"11", + "code":"14" + }, + { + "desc":"This section describes the minimum system requirements for using Data Studio.OSThe following table lists the OS requirements of Data Studio.BrowserThe following table lis", + "product_code":"dws", + "title":"System Requirements", + "uri":"DWS_DS_14.html", + "doc_type":"tg", + "p_code":"11", + "code":"15" + }, + { + "desc":"This section describes how to install and 
configure Data Studio, and how to configure servers for debugging PL/SQL Functions.Topics in this section include:Installing Dat", + "product_code":"dws", + "title":"Installing and Configuring Data Studio", + "uri":"DWS_DS_16.html", + "doc_type":"tg", + "p_code":"10", + "code":"16" + }, + { + "desc":"This section describes the steps to be followed to start Data Studio.The StartDataStudio.bat batch file checks the version of Operating System (OS), Java and Data Studio ", + "product_code":"dws", + "title":"Getting Started", + "uri":"DWS_DS_19.html", + "doc_type":"tg", + "p_code":"10", + "code":"17" + }, + { + "desc":"This section describes the Data Studio GUI.The Data Studio GUI contains the following:Main Menu provides basic operations of Data Studio.Toolbar contains the access to fr", + "product_code":"dws", + "title":"Data Studio GUI", + "uri":"DWS_DS_20.html", + "doc_type":"tg", + "p_code":"10", + "code":"18" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Data Studio Menus", + "uri":"DWS_DS_21.html", + "doc_type":"tg", + "p_code":"10", + "code":"19" + }, + { + "desc":"The File menu contains database connection options. Click File in the main menu or press Alt+F to open the File menu.Perform the following steps to stop Data Studio:Alter", + "product_code":"dws", + "title":"File", + "uri":"DWS_DS_22.html", + "doc_type":"tg", + "p_code":"19", + "code":"20" + }, + { + "desc":"The Edit menu contains clipboard, Format, Find and Replace, and Search Objects operations to use in the PL/SQL Viewer and SQL Terminal tab. 
Press Alt+E to open the Edit menu", + "product_code":"dws", + "title":"Edit", + "uri":"DWS_DS_23.html", + "doc_type":"tg", + "p_code":"19", + "code":"21" + }, + { + "desc":"The Run menu contains options of performing a database operation in the PL/SQL Viewer tab and executing SQL statements in the SQL Terminal tab. Press Alt+R to open the Ru", + "product_code":"dws", + "title":"Run", + "uri":"DWS_DS_24.html", + "doc_type":"tg", + "p_code":"19", + "code":"22" + }, + { + "desc":"The Debug menu contains debugging operations in the PL/SQL Viewer and SQL Terminal tabs. Press Alt+D to open the Debug menu.", + "product_code":"dws", + "title":"Debug", + "uri":"DWS_DS_25.html", + "doc_type":"tg", + "p_code":"19", + "code":"23" + }, + { + "desc":"The Settings menu contains the option of changing the language. Press Alt+G to open the Settings menu.", + "product_code":"dws", + "title":"Settings", + "uri":"DWS_DS_26.html", + "doc_type":"tg", + "p_code":"19", + "code":"24" + }, + { + "desc":"The Help menu contains the user manual and version information of Data Studio. 
Press Alt+H to open the Help menu.Visit https://java.com/en/download/help/path.xml to set t", + "product_code":"dws", + "title":"Help", + "uri":"DWS_DS_27.html", + "doc_type":"tg", + "p_code":"19", + "code":"25" + }, + { + "desc":"The following figure shows the Data Studio Toolbar.The toolbar contains the following operations:Adding a ConnectionRemoving a ConnectionConnecting to a DatabaseDisconnec", + "product_code":"dws", + "title":"Data Studio Toolbar", + "uri":"DWS_DS_28.html", + "doc_type":"tg", + "p_code":"10", + "code":"26" + }, + { + "desc":"This section describes the right-click menus of Data Studio.The following figure shows the Object Browser pane.Right-clicking a connection name allows you to select Renam", + "product_code":"dws", + "title":"Data Studio Right-Click Menus", + "uri":"DWS_DS_29.html", + "doc_type":"tg", + "p_code":"10", + "code":"27" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Connection Profiles", + "uri":"DWS_DS_32.html", + "doc_type":"tg", + "p_code":"10", + "code":"28" + }, + { + "desc":"When Data Studio is started, the New Database Connection dialog box is displayed by default. To perform database operations, Data Studio must be connected to at least one", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_33.html", + "doc_type":"tg", + "p_code":"28", + "code":"29" + }, + { + "desc":"Perform the following steps to create a database connection.Alternatively, click on the toolbar, or press Ctrl+N to connect to the database. 
The New Database Connection ", + "product_code":"dws", + "title":"Adding a Connection", + "uri":"DWS_DS_34.html", + "doc_type":"tg", + "p_code":"28", + "code":"30" + }, + { + "desc":"Perform the following steps to rename a database connection.A Rename Connection dialog box is displayed prompting you to enter the new connection name.The status of the c", + "product_code":"dws", + "title":"Renaming a Connection", + "uri":"DWS_DS_35.html", + "doc_type":"tg", + "p_code":"28", + "code":"31" + }, + { + "desc":"Perform the following steps to edit the properties of a database connection.To edit an active connection, you need to disable the connection and then open the connection ", + "product_code":"dws", + "title":"Editing a Connection", + "uri":"DWS_DS_36.html", + "doc_type":"tg", + "p_code":"28", + "code":"32" + }, + { + "desc":"Follow the steps below to remove an existing database connection:A confirmation dialog box is displayed to remove the connection.The status bar displays the status of the", + "product_code":"dws", + "title":"Removing a Connection", + "uri":"DWS_DS_37.html", + "doc_type":"tg", + "p_code":"28", + "code":"33" + }, + { + "desc":"Follow the steps below to view the properties of a connection:The status bar displays the status of the completed operation.Properties of the selected connection is displ", + "product_code":"dws", + "title":"Viewing Connection Properties", + "uri":"DWS_DS_38.html", + "doc_type":"tg", + "p_code":"28", + "code":"34" + }, + { + "desc":"Perform the following steps to refresh a database connection.The status of the completed operation is displayed in the status bar.The time taken to refresh a database dep", + "product_code":"dws", + "title":"Refreshing a Database Connection", + "uri":"DWS_DS_39.html", + "doc_type":"tg", + "p_code":"28", + "code":"35" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Databases", + "uri":"DWS_DS_40.html", + "doc_type":"tg", + "p_code":"10", + "code":"36" + }, + { + "desc":"A relational database is a database that has a set of tables which is manipulated in accordance with the relational model of data. It contains a set of data objects used ", + "product_code":"dws", + "title":"Creating a Database", + "uri":"DWS_DS_41.html", + "doc_type":"tg", + "p_code":"36", + "code":"37" + }, + { + "desc":"You can disconnect all the databases from a connection.Follow the steps below to disconnect a connection from the database:This operation can be performed only when there", + "product_code":"dws", + "title":"Disconnecting All Databases", + "uri":"DWS_DS_42.html", + "doc_type":"tg", + "p_code":"36", + "code":"38" + }, + { + "desc":"You can connect to the database.Follow the steps below to connect a database:This operation can be performed only on an inactive database.The database is connected.The st", + "product_code":"dws", + "title":"Connecting to a Database", + "uri":"DWS_DS_43.html", + "doc_type":"tg", + "p_code":"36", + "code":"39" + }, + { + "desc":"You can disconnect the database.Follow the steps below to disconnect a database:This operation can be performed only on an active database.A confirmation dialog box is di", + "product_code":"dws", + "title":"Disconnecting a Database", + "uri":"DWS_DS_44.html", + "doc_type":"tg", + "p_code":"36", + "code":"40" + }, + { + "desc":"Follow the steps below to rename a database:This operation can be performed only on an inactive database.A Rename Database dialog box is displayed prompting you to provid", + "product_code":"dws", + "title":"Renaming a Database", + "uri":"DWS_DS_45.html", + "doc_type":"tg", + "p_code":"36", + "code":"41" + }, + { + "desc":"Individual or batch drop can be performed on databases. 
Refer to Batch Dropping Objects section for batch drop.Follow the steps below to drop a database:This operation ca", + "product_code":"dws", + "title":"Dropping a Database", + "uri":"DWS_DS_46.html", + "doc_type":"tg", + "p_code":"36", + "code":"42" + }, + { + "desc":"Follow the steps below to view the properties of a database:This operation can be performed only on an active database.The status bar displays the status of the completed", + "product_code":"dws", + "title":"Viewing Properties of a Database", + "uri":"DWS_DS_47.html", + "doc_type":"tg", + "p_code":"36", + "code":"43" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Schemas", + "uri":"DWS_DS_48.html", + "doc_type":"tg", + "p_code":"10", + "code":"44" + }, + { + "desc":"This section describes working with database schemas. All system schemas are grouped under Catalogs and user schemas under Schemas.", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_49.html", + "doc_type":"tg", + "p_code":"44", + "code":"45" + }, + { + "desc":"In relational database technology, schemas provide a logical classification of objects in the database. 
Some of the objects that a schema may contain include functions/pr", + "product_code":"dws", + "title":"Creating a Schema", + "uri":"DWS_DS_50.html", + "doc_type":"tg", + "p_code":"44", + "code":"46" + }, + { + "desc":"You can export the schema DDL to export the DDL of functions/procedures, tables, sequences, and views of the schema.Perform the following steps to export the schema DDL:T", + "product_code":"dws", + "title":"Exporting Schema DDL", + "uri":"DWS_DS_51.html", + "doc_type":"tg", + "p_code":"44", + "code":"47" + }, + { + "desc":"The exported schema DDL and data include the following:DDL of functions/proceduresDDL and data of tablesDDL of viewsDDL of sequencesPerform the following steps to export ", + "product_code":"dws", + "title":"Exporting Schema DDL and Data", + "uri":"DWS_DS_52.html", + "doc_type":"tg", + "p_code":"44", + "code":"48" + }, + { + "desc":"Follow the steps to rename a schema:You can view the renamed schema in the Object Browser.The status bar displays the status of the completed operation.", + "product_code":"dws", + "title":"Renaming a Schema", + "uri":"DWS_DS_53.html", + "doc_type":"tg", + "p_code":"44", + "code":"49" + }, + { + "desc":"Data Studio provides the option to show sequence DDL or allow users to export sequence DDL. It provides \"Show DDL\", \"Export DDL\", \"Export DDL and Data\"Follow the steps to", + "product_code":"dws", + "title":"Supporting Sequence DDL", + "uri":"DWS_DS_201.html", + "doc_type":"tg", + "p_code":"44", + "code":"50" + }, + { + "desc":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog is displayed.In SQL Preview tab, you can view the SQL query automatically generated for the inp", + "product_code":"dws", + "title":"Granting/Revoking a Privilege", + "uri":"DWS_DS_54.html", + "doc_type":"tg", + "p_code":"44", + "code":"51" + }, + { + "desc":"Individual or batch dropping can be performed on schemas. 
Refer to Batch Dropping Objects section for batch dropping.Follow the steps below to drop a schema:A confirmatio", + "product_code":"dws", + "title":"Dropping a Schema", + "uri":"DWS_DS_55.html", + "doc_type":"tg", + "p_code":"44", + "code":"52" + }, + { + "desc":"Perform the following steps to create a function/procedure and SQL function:The selected template is displayed in the new tab of Data Studio.The Created function/procedur", + "product_code":"dws", + "title":"Creating a Function/Procedure", + "uri":"DWS_DS_57.html", + "doc_type":"tg", + "p_code":"10", + "code":"53" + }, + { + "desc":"Perform the following steps to edit a function/procedure or SQL function:The selected function/procedure or SQL function is displayed in the PL/SQL Viewer tab page.If mul", + "product_code":"dws", + "title":"Editing a Function/Procedure", + "uri":"DWS_DS_58.html", + "doc_type":"tg", + "p_code":"10", + "code":"54" + }, + { + "desc":"Perform the following steps to grant or revoke a permission:The Grant/Revoke dialog box is displayed.The Privilege Selection tab is displayed.The SQL Preview tab displays", + "product_code":"dws", + "title":"Granting/Revoking a Permission (Function/Procedure)", + "uri":"DWS_DS_59.html", + "doc_type":"tg", + "p_code":"10", + "code":"55" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Debugging a PL/SQL Function", + "uri":"DWS_DS_62.html", + "doc_type":"tg", + "p_code":"10", + "code":"56" + }, + { + "desc":"During debugging, if the connection is lost but the database remains connected to Object Browser, the Connection Error dialog box is displayed with the following options:", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_621.html", + "doc_type":"tg", + "p_code":"56", + "code":"57" + }, + { + "desc":"Topics in this section include:Using the Breakpoints PaneSetting or Adding a Breakpoint on a LineEnabling or Disabling a Breakpoint on a LineRemoving a Breakpoint from a ", + "product_code":"dws", + "title":"Using Breakpoints", + "uri":"DWS_DS_622.html", + "doc_type":"tg", + "p_code":"56", + "code":"58" + }, + { + "desc":"Topics in this section include:Starting DebuggingSingle Stepping a PL/SQL FunctionContinuing the DebuggingViewing CallstackSelect the function that you want to debug in t", + "product_code":"dws", + "title":"Controlling Execution", + "uri":"DWS_DS_623.html", + "doc_type":"tg", + "p_code":"56", + "code":"59" + }, + { + "desc":"When you use Data Studio, you can examine debugging information through several debugging panes. This section describes how to check the debugging information:Operating o", + "product_code":"dws", + "title":"Checking Debugging Information", + "uri":"DWS_DS_624.html", + "doc_type":"tg", + "p_code":"56", + "code":"60" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Working with Functions/Procedures", + "uri":"DWS_DS_60.html", + "doc_type":"tg", + "p_code":"10", + "code":"61" + }, + { + "desc":"This section provides you with details on working with functions/procedures and SQL functions in Data Studio.Data Studio supports PL/pgSQL and SQL languages for the opera", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_61.html", + "doc_type":"tg", + "p_code":"61", + "code":"62" + }, + { + "desc":"Data Studio suggests a list of possible schema names, table names, column names, views, sequences, and functions in the PL/SQL Viewer.Follow the steps below to select a DB", + "product_code":"dws", + "title":"Selecting a DB Object in the PL/SQL Viewer", + "uri":"DWS_DS_63.html", + "doc_type":"tg", + "p_code":"61", + "code":"63" + }, + { + "desc":"Perform the following steps to export the DDL of a function or procedure:The Data Studio Security Disclaimer dialog box is displayed.The Save As dialog box is displayed.T", + "product_code":"dws", + "title":"Exporting the DDL of a Function or Procedure", + "uri":"DWS_DS_64.html", + "doc_type":"tg", + "p_code":"61", + "code":"64" + }, + { + "desc":"Data Studio allows you to view table properties, procedures/functions and SQL functions.Follow the steps below to view table properties:The properties of the selected tab", + "product_code":"dws", + "title":"Viewing Object Properties in the PL/SQL Viewer", + "uri":"DWS_DS_65.html", + "doc_type":"tg", + "p_code":"61", + "code":"65" + }, + { + "desc":"Individual or batch drop can be performed on functions/procedures. 
Refer to Batch Dropping Objects section for batch drop.Follow the steps below to drop a function/proced", + "product_code":"dws", + "title":"Dropping a Function/Procedure", + "uri":"DWS_DS_66.html", + "doc_type":"tg", + "p_code":"61", + "code":"66" + }, + { + "desc":"After you connect to the database, all the stored functions/procedures and tables will be automatically populated in the Object Browser pane. You can use Data Studio to e", + "product_code":"dws", + "title":"Executing a Function/Procedure", + "uri":"DWS_DS_67.html", + "doc_type":"tg", + "p_code":"61", + "code":"67" + }, + { + "desc":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog box is displayed.", + "product_code":"dws", + "title":"Granting/Revoking a Privilege", + "uri":"DWS_DS_68.html", + "doc_type":"tg", + "p_code":"61", + "code":"68" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"GaussDB(DWS) Tables", + "uri":"DWS_DS_69.html", + "doc_type":"tg", + "p_code":"10", + "code":"69" + }, + { + "desc":"This section describes how to manage tables efficiently.You need to configure all mandatory parameters to complete the operation. Mandatory parameters are marked with an ", + "product_code":"dws", + "title":"Table Management Overview", + "uri":"DWS_DS_70.html", + "doc_type":"tg", + "p_code":"69", + "code":"70" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Creating Regular Table", + "uri":"DWS_DS_71.html", + "doc_type":"tg", + "p_code":"69", + "code":"71" + }, + { + "desc":"This section describes how to create a common table.A table is a logical structure maintained by a database administrator and consists of rows and columns. You can define", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_72.html", + "doc_type":"tg", + "p_code":"71", + "code":"72" + }, + { + "desc":"After creating a table, you can add new columns in that table. You can also perform the following operations on the existing column only for a Regular table:Creating a Ne", + "product_code":"dws", + "title":"Working with Columns", + "uri":"DWS_DS_73.html", + "doc_type":"tg", + "p_code":"71", + "code":"73" + }, + { + "desc":"You can perform the following operations after a table is created only for a Regular table:Creating a ConstraintRenaming a ConstraintDropping a ConstraintFollow the steps", + "product_code":"dws", + "title":"Working with Constraints", + "uri":"DWS_DS_74.html", + "doc_type":"tg", + "p_code":"71", + "code":"74" + }, + { + "desc":"You can create indexes in a table to search for data efficiently.After a table is created, you can add indexes to it. 
You can perform the following operations only in a c", + "product_code":"dws", + "title":"Managing Indexes", + "uri":"DWS_DS_75.html", + "doc_type":"tg", + "p_code":"71", + "code":"75" + }, + { + "desc":"Foreign tables created using query execution in SQL Terminal or any other tool can be viewed in the Object browser after refresh.GDS Foreign table is denoted with icon b", + "product_code":"dws", + "title":"Creating Foreign Table", + "uri":"DWS_DS_76.html", + "doc_type":"tg", + "p_code":"69", + "code":"76" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Creating Partition Table", + "uri":"DWS_DS_77.html", + "doc_type":"tg", + "p_code":"69", + "code":"77" + }, + { + "desc":"Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The table based on the logic is called a partit", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_78.html", + "doc_type":"tg", + "p_code":"77", + "code":"78" + }, + { + "desc":"After creating a table, you can add/modify partitions. 
You can also perform the following operations on an existing partition:Renaming a PartitionDropping a PartitionFoll", + "product_code":"dws", + "title":"Working with Partitions", + "uri":"DWS_DS_79.html", + "doc_type":"tg", + "p_code":"77", + "code":"79" + }, + { + "desc":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog box is displayed.In the SQL Preview tab, you can view the SQL query automatically generated for", + "product_code":"dws", + "title":"Grant/Revoke Privilege - Regular/Partition Table", + "uri":"DWS_DS_80.html", + "doc_type":"tg", + "p_code":"69", + "code":"80" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Managing Table", + "uri":"DWS_DS_81.html", + "doc_type":"tg", + "p_code":"69", + "code":"81" + }, + { + "desc":"This section describes how to manage tables efficiently.You need to configure all mandatory parameters to complete the operation. 
Mandatory parameters are marked with ast", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_82.html", + "doc_type":"tg", + "p_code":"81", + "code":"82" + }, + { + "desc":"Follow the steps below to rename a table:The Rename Table dialog box is displayed prompting you to provide the new name.Data Studio displays the status of the operation i", + "product_code":"dws", + "title":"Renaming a Table", + "uri":"DWS_DS_83.html", + "doc_type":"tg", + "p_code":"81", + "code":"83" + }, + { + "desc":"Follow the steps below to truncate a table:Data Studio prompts you to confirm this operation.A popup message and status bar display the status of the completed operation.", + "product_code":"dws", + "title":"Truncating a Table", + "uri":"DWS_DS_84.html", + "doc_type":"tg", + "p_code":"81", + "code":"84" + }, + { + "desc":"Indexes facilitate lookup of records. You need to reindex tables in the following scenarios:The index is corrupted and no longer contains valid data. Although in theory thi", + "product_code":"dws", + "title":"Reindexing a Table", + "uri":"DWS_DS_85.html", + "doc_type":"tg", + "p_code":"81", + "code":"85" + }, + { + "desc":"The analyzing table operation collects statistics about tables and table indices and stores the collected information in internal tables of the database where the query ", + "product_code":"dws", + "title":"Analyzing a Table", + "uri":"DWS_DS_86.html", + "doc_type":"tg", + "p_code":"81", + "code":"86" + }, + { + "desc":"The vacuuming table operation reclaims space and makes it available for re-use.Follow the steps below to vacuum the table:The Vacuum Table message and status bar display the ", + "product_code":"dws", + "title":"Vacuuming a Table", + "uri":"DWS_DS_87.html", + "doc_type":"tg", + "p_code":"81", + "code":"87" + }, + { + "desc":"Follow the steps below to set the description of a table:The Update Table Description dialog box is displayed. 
It prompts you to set the table description.The status bar ", + "product_code":"dws", + "title":"Setting the Table Description", + "uri":"DWS_DS_88.html", + "doc_type":"tg", + "p_code":"81", + "code":"88" + }, + { + "desc":"Follow the steps below to set a schema:The Set Schema dialog box is displayed that prompts you to select the new schema for the selected table.The status bar displays the ", + "product_code":"dws", + "title":"Setting the Schema", + "uri":"DWS_DS_90.html", + "doc_type":"tg", + "p_code":"81", + "code":"89" + }, + { + "desc":"Individual or batch dropping can be performed on tables. Refer to Batch Dropping Objects section for batch dropping.This operation removes the complete table structure (i", + "product_code":"dws", + "title":"Dropping a Table", + "uri":"DWS_DS_91.html", + "doc_type":"tg", + "p_code":"81", + "code":"90" + }, + { + "desc":"Follow the steps below to view the properties of a table:Data Studio displays the properties (General, Columns, Constraints, and Index) of the selected table in different", + "product_code":"dws", + "title":"Viewing Table Properties", + "uri":"DWS_DS_92.html", + "doc_type":"tg", + "p_code":"81", + "code":"91" + }, + { + "desc":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog is displayed.", + "product_code":"dws", + "title":"Grant/Revoke Privilege", + "uri":"DWS_DS_93.html", + "doc_type":"tg", + "p_code":"81", + "code":"92" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Managing Table Data", + "uri":"DWS_DS_94.html", + "doc_type":"tg", + "p_code":"69", + "code":"93" + }, + { + "desc":"Perform the following steps to export the table DDL:The Data Studio Security Disclaimer dialog box is displayed.The Save As dialog box is displayed.To cancel the export o", + "product_code":"dws", + "title":"Exporting Table DDL", + "uri":"DWS_DS_96.html", + "doc_type":"tg", + "p_code":"93", + "code":"94" + }, + { + "desc":"The exported table DDL and data include the following:DDL of the tableColumns and rows of the tablePerform the following steps to export the table DDL and data:The Data S", + "product_code":"dws", + "title":"Exporting Table DDL and Data", + "uri":"DWS_DS_97.html", + "doc_type":"tg", + "p_code":"93", + "code":"95" + }, + { + "desc":"Perform the following steps to export table data:The Export Table Data dialog box is displayed with the following options:Format: Table data can be exported in Excel (xls", + "product_code":"dws", + "title":"Exporting Table Data", + "uri":"DWS_DS_98.html", + "doc_type":"tg", + "p_code":"93", + "code":"96" + }, + { + "desc":"Follow the steps below to show the DDL query of a table:The DDL of the selected table is displayed.A new terminal is opened each time the Show DDL operation is executed.M", + "product_code":"dws", + "title":"Showing DDL", + "uri":"DWS_DS_99.html", + "doc_type":"tg", + "p_code":"93", + "code":"97" + }, + { + "desc":"Prerequisites:If the definition of the source file does not match that of the target table, modify the properties of the target table in the Import Table Data dialog box.", + "product_code":"dws", + "title":"Importing Table Data", + "uri":"DWS_DS_100.html", + "doc_type":"tg", + "p_code":"93", + "code":"98" + }, + { + "desc":"Follow the steps to view table data:The View Table Data tab is displayed 
where you can view the table data information.Toolbar menu in the View Table Data window:Icons in", + "product_code":"dws", + "title":"Viewing Table Data", + "uri":"DWS_DS_101.html", + "doc_type":"tg", + "p_code":"93", + "code":"99" + }, + { + "desc":"Follow the steps below to edit table data:The Edit Table datatabisdisplayed.Refer to Viewing Table Data for description on copy and search toolbar options.Data Studio val", + "product_code":"dws", + "title":"Editing Table Data", + "uri":"DWS_DS_102.html", + "doc_type":"tg", + "p_code":"93", + "code":"100" + }, + { + "desc":"Data Studio allows you to edit temporary tables. Temporary tables are deleted automatically when you close the connection that was used to create the table.Ensure that co", + "product_code":"dws", + "title":"Editing Temporary Tables", + "uri":"DWS_DS_103.html", + "doc_type":"tg", + "p_code":"69", + "code":"101" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Sequences", + "uri":"DWS_DS_104.html", + "doc_type":"tg", + "p_code":"10", + "code":"102" + }, + { + "desc":"Follow the steps below to create a sequence:The Create New Sequence dialog box is displayed.Enter a name in the Sequence Name field.Select theCase check box to retain the", + "product_code":"dws", + "title":"Creating Sequence", + "uri":"DWS_DS_105.html", + "doc_type":"tg", + "p_code":"102", + "code":"103" + }, + { + "desc":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog is displayed.In the SQL Preview tab, you can view the SQL query automatically generated for the", + "product_code":"dws", + "title":"Grant/Revoke Privilege", + "uri":"DWS_DS_106.html", + "doc_type":"tg", + "p_code":"102", + "code":"104" + }, + { + "desc":"You can perform the following operations on an existing sequence:Granting/Revoking a PrivilegeDropping a SequenceDropping a Sequence CascadeIndividual or batch dropping c", + "product_code":"dws", + "title":"Working with Sequences", + "uri":"DWS_DS_107.html", + "doc_type":"tg", + "p_code":"102", + "code":"105" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Views", + "uri":"DWS_DS_108.html", + "doc_type":"tg", + "p_code":"10", + "code":"106" + }, + { + "desc":"Follow the steps below to create a new view:The DDL template for the view is displayed in the SQL Terminal tab.You can view the new view in the Object Browser.The status ", + "product_code":"dws", + "title":"Creating a View", + "uri":"DWS_DS_109.html", + "doc_type":"tg", + "p_code":"106", + "code":"107" + }, + { + "desc":"Follow the steps below to grant/revoke a privilege:The Grant/Revoke dialog box is displayed.In the SQL Preview tab, you can view the SQL query automatically generated for", + "product_code":"dws", + "title":"Granting/Revoking a Privilege", + "uri":"DWS_DS_110.html", + "doc_type":"tg", + "p_code":"106", + "code":"108" + }, + { + "desc":"Views can be created to restrict access to specific rows or columns of a table. A view can be created from one or more tables and is determined by the query used to creat", + "product_code":"dws", + "title":"Working with Views", + "uri":"DWS_DS_111.html", + "doc_type":"tg", + "p_code":"106", + "code":"109" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Users/Roles", + "uri":"DWS_DS_115.html", + "doc_type":"tg", + "p_code":"10", + "code":"110" + }, + { + "desc":"A database is used by many users, and the users are grouped for management convenience. 
A database role can be one or a group of database users.Users and roles have simil", + "product_code":"dws", + "title":"Creating a User/Role", + "uri":"DWS_DS_116.html", + "doc_type":"tg", + "p_code":"110", + "code":"111" + }, + { + "desc":"You can perform the following operations on an existing user/role:Dropping a User/RoleViewing/Editing User/Role PropertiesViewing the User/Role DDLFollow the steps below ", + "product_code":"dws", + "title":"Working with Users/Roles", + "uri":"DWS_DS_117.html", + "doc_type":"tg", + "p_code":"110", + "code":"112" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"SQL Terminal", + "uri":"DWS_DS_118.html", + "doc_type":"tg", + "p_code":"10", + "code":"113" + }, + { + "desc":"You can open multiple SQL Terminal tabs in Data Studio to execute multiple SQL statements for query in the current SQL Terminal tab. Perform the following steps to open a", + "product_code":"dws", + "title":"Opening Multiple SQL Terminal Tabs", + "uri":"DWS_DS_119.html", + "doc_type":"tg", + "p_code":"113", + "code":"114" + }, + { + "desc":"Data Studio allows viewing and managing frequently executed SQL queries. 
The history of executed SQL queries is saved only in SQL Terminal.Perform the following steps to ", + "product_code":"dws", + "title":"Managing the History of Executed SQL Queries", + "uri":"DWS_DS_120.html", + "doc_type":"tg", + "p_code":"113", + "code":"115" + }, + { + "desc":"Follow the steps to open an SQL script:If the SQL Terminal has existing content, then there will be an option to overwrite the existing content or append content to it.Th", + "product_code":"dws", + "title":"Opening and Saving SQL Scripts", + "uri":"DWS_DS_121.html", + "doc_type":"tg", + "p_code":"113", + "code":"116" + }, + { + "desc":"Data Studio allows you to view table properties and functions/procedures.Follow the steps to view table properties:The table properties are read-only.Follow the steps to ", + "product_code":"dws", + "title":"Viewing Object Properties in the SQL Terminal", + "uri":"DWS_DS_122.html", + "doc_type":"tg", + "p_code":"113", + "code":"117" + }, + { + "desc":"Data Studio allows you to cancel the execution of an SQL query being executed in the SQL Terminal.Follow the steps to cancel execution of an SQL query:Alternatively, you", + "product_code":"dws", + "title":"Canceling the Execution of SQL Queries", + "uri":"DWS_DS_123.html", + "doc_type":"tg", + "p_code":"113", + "code":"118" + }, + { + "desc":"Data Studio supports formatting and highlighting of SQL queries and PL/SQL statements.Follow the steps to format PL/SQL statements:Alternatively, use the key combination ", + "product_code":"dws", + "title":"Formatting of SQL Queries", + "uri":"DWS_DS_124.html", + "doc_type":"tg", + "p_code":"113", + "code":"119" + }, + { + "desc":"Data Studio suggests a list of possible schema names, table names and column names, and views in theSQL Terminal.Follow the steps below to select a DB object:On selection", + "product_code":"dws", + "title":"Selecting a DB Object in the SQL Terminal", + "uri":"DWS_DS_125.html", + "doc_type":"tg", + "p_code":"113", + "code":"120" + }, + { + 
"desc":"The execution plan shows how the table(s) referenced by the SQL statement will be scanned (plain sequential scan and index scan).The SQL statement execution cost is the e", + "product_code":"dws", + "title":"Viewing the Query Execution Plan and Cost", + "uri":"DWS_DS_126.html", + "doc_type":"tg", + "p_code":"113", + "code":"121" + }, + { + "desc":"Visual Explain plan displays a graphical representation of the SQL query using information from the extended JSON format. This helps to refine query to enhance query and ", + "product_code":"dws", + "title":"Viewing the Query Execution Plan and Cost Graphically", + "uri":"DWS_DS_127.html", + "doc_type":"tg", + "p_code":"113", + "code":"122" + }, + { + "desc":"The Auto Commit option is available in the Preferences pane. For details, see Transaction.If Auto Commit is enabled, the Commit and Rollback functions are disabled. Trans", + "product_code":"dws", + "title":"Using SQL Terminals", + "uri":"DWS_DS_128.html", + "doc_type":"tg", + "p_code":"113", + "code":"123" + }, + { + "desc":"You can export the results of an SQL query into a CSV, Text or Binary file.This section contains the following topics:Exporting all dataExporting current page dataThe fol", + "product_code":"dws", + "title":"Exporting Query Results", + "uri":"DWS_DS_129.html", + "doc_type":"tg", + "p_code":"113", + "code":"124" + }, + { + "desc":"Data Studio allows you to reuse an existing SQL Terminal connection or create a new SQL Terminal connection for execution plan and cost, visual explain plan, and operatio", + "product_code":"dws", + "title":"Managing SQL Terminal Connections", + "uri":"DWS_DS_130.html", + "doc_type":"tg", + "p_code":"113", + "code":"125" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Batch Operation", + "uri":"DWS_DS_131.html", + "doc_type":"tg", + "p_code":"10", + "code":"126" + }, + { + "desc":"You can view accessible database objects in the navigation tree in Object Browser. Schema are displayed under databases, and tables are displayed under schemas.Object Bro", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_132.html", + "doc_type":"tg", + "p_code":"126", + "code":"127" + }, + { + "desc":"The batch drop operation allows you to drop multiple objects. This operation also applies to searched objects.Batch drop is allowed only within a database.An error is rep", + "product_code":"dws", + "title":"Batch Dropping Objects", + "uri":"DWS_DS_133.html", + "doc_type":"tg", + "p_code":"126", + "code":"128" + }, + { + "desc":"The batch grant/revoke operation allows you select multiple objects to grant/revoke privileges. You can also perform batch grant/revoke operation on searched objects.This", + "product_code":"dws", + "title":"Granting/Revoking Privileges", + "uri":"DWS_DS_134.html", + "doc_type":"tg", + "p_code":"126", + "code":"129" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. 
The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Personalizing Data Studio", + "uri":"DWS_DS_135.html", + "doc_type":"tg", + "p_code":"10", + "code":"130" + }, + { + "desc":"This section provides details on how to personalize Data Studio using preferences settings.", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_136.html", + "doc_type":"tg", + "p_code":"130", + "code":"131" + }, + { + "desc":"This section describes how to customize shortcut keys.You can customize Data Studio shortcut keys.Perform the following steps to set or modify shortcut keys:The Preferenc", + "product_code":"dws", + "title":"General", + "uri":"DWS_DS_137.html", + "doc_type":"tg", + "p_code":"130", + "code":"132" + }, + { + "desc":"This section describes how to customize syntax highlighting, SQL history information, templates, and formatters.Perform the following steps to customize SQL highlighting:", + "product_code":"dws", + "title":"Editor", + "uri":"DWS_DS_138.html", + "doc_type":"tg", + "p_code":"130", + "code":"133" + }, + { + "desc":"Perform the following steps to configure Data Studio encoding and file encoding:The Preferences dialog box is displayed.The Session Setting pane is displayed.Data Studio ", + "product_code":"dws", + "title":"Environment", + "uri":"DWS_DS_139.html", + "doc_type":"tg", + "p_code":"130", + "code":"134" + }, + { + "desc":"This section describes how to customize the settings in the Query Results pane, including the column width, number of records to be obtained, and copy of column headers o", + "product_code":"dws", + "title":"Result Management", + "uri":"DWS_DS_141.html", + "doc_type":"tg", + "p_code":"130", + "code":"135" + }, + { + "desc":"This section describes how to customize the display of passwords and security disclaimers.You can configure whether to display the option of saving password permanently i", + 
"product_code":"dws", + "title":"Security", + "uri":"DWS_DS_142.html", + "doc_type":"tg", + "p_code":"130", + "code":"136" + }, + { + "desc":"The loading and operation performance of Data Studio depends on the number of objects to be loaded in Object Browser, including tables, views, and columns.Memory consumpt", + "product_code":"dws", + "title":"Performance Specifications", + "uri":"DWS_DS_144.html", + "doc_type":"tg", + "p_code":"10", + "code":"137" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Security Management", + "uri":"DWS_DS_146.html", + "doc_type":"tg", + "p_code":"10", + "code":"138" + }, + { + "desc":"Ensure that the operating system and the required software's (refer to System Requirements for more details) are updated with the latest patches to prevent vulnerabilitie", + "product_code":"dws", + "title":"Overview", + "uri":"DWS_DS_147.html", + "doc_type":"tg", + "p_code":"138", + "code":"139" + }, + { + "desc":"The following information is critical to the security management for Data Studio:When you log into a database, Data Studio displays a dialog box that describes the last s", + "product_code":"dws", + "title":"Login History", + "uri":"DWS_DS_148.html", + "doc_type":"tg", + "p_code":"138", + "code":"140" + }, + { + "desc":"The following information is critical to manage security for Data Studio:Your password will expire within 7 days from the date of notification. 
If the password expires, c", + "product_code":"dws", + "title":"Password Expiry Notification", + "uri":"DWS_DS_149.html", + "doc_type":"tg", + "p_code":"138", + "code":"141" + }, + { + "desc":"The following information is critical to manage security for Data Studio:While running Data Studio in a trusted environment, user must ensure to prevent malicious softwar", + "product_code":"dws", + "title":"Securing the Application In-Memory Data", + "uri":"DWS_DS_151.html", + "doc_type":"tg", + "p_code":"138", + "code":"142" + }, + { + "desc":"The following information is critical to manage security for Data Studio:You can ensure encryption of auto saved data by enabling encryption option from Preferences page.", + "product_code":"dws", + "title":"Data Encryption for Saved Data", + "uri":"DWS_DS_152.html", + "doc_type":"tg", + "p_code":"138", + "code":"143" + }, + { + "desc":"The following information is critical to manage security for Data Studio:SQL History scripts are not encrypted.The SQL History list does not display sensitive queries tha", + "product_code":"dws", + "title":"SQL History", + "uri":"DWS_DS_153.html", + "doc_type":"tg", + "p_code":"138", + "code":"144" + }, + { + "desc":"The information about using SSL certificates is for reference only. For details about the certificates and the security guidelines for managing the certificates and relat", + "product_code":"dws", + "title":"SSL Certificates", + "uri":"DWS_DS_154.html", + "doc_type":"tg", + "p_code":"138", + "code":"145" + }, + { + "desc":"The Data Studio cannot be opened for a long time.Solution: Check whether JRE is found. Verify the Java path configured in the environment. 
For details about the supported", + "product_code":"dws", + "title":"Troubleshooting", + "uri":"DWS_DS_145.html", + "doc_type":"tg", + "p_code":"10", + "code":"146" + }, + { + "desc":"What do I need to check if my connection fails?Answer: Check the following items:Check whether Connection Properties are properly configured.Check whether the server vers", + "product_code":"dws", + "title":"FAQs", + "uri":"DWS_DS_155.html", + "doc_type":"tg", + "p_code":"10", + "code":"147" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"GDS: Parallel Data Loader", + "uri":"dws_gds_index.html", + "doc_type":"tg", + "p_code":"", + "code":"148" + }, + { + "desc":"GaussDB(DWS) uses GDS to allocate the source data for parallel data import. Deploy GDS on the data server.If a large volume of data is stored on multiple data servers, in", + "product_code":"dws", + "title":"Installing, Configuring, and Starting GDS", + "uri":"dws_07_0759.html", + "doc_type":"tg", + "p_code":"148", + "code":"149" + }, + { + "desc":"Stop GDS after data is imported successfully.If GDS is started using the gds command, perform the following operations to stop GDS:Query the GDS process ID:ps -ef|grep gd", + "product_code":"dws", + "title":"Stopping GDS", + "uri":"dws_07_0128.html", + "doc_type":"tg", + "p_code":"148", + "code":"150" + }, + { + "desc":"The data servers reside on the same intranet as the cluster. Their IP addresses are 192.168.0.90 and 192.168.0.91. 
Source data files are in CSV format.Create the target t", + "product_code":"dws", + "title":"Example of Importing Data Using GDS", + "uri":"dws_07_0692.html", + "doc_type":"tg", + "p_code":"148", + "code":"151" + }, + { + "desc":"gds is used to import and export data of GaussDB(DWS).The -d and -H parameters are mandatory and option is optional. gds provides the file data from DIRECTORY for GaussDB", + "product_code":"dws", + "title":"gds", + "uri":"gds_cmd_reference.html", + "doc_type":"tg", + "p_code":"148", + "code":"152" + }, + { + "desc":"gds_ctl.py can be used to start and stop gds if gds.conf has been configured.Run the following commands on Linux OS: You need to ensure that the directory structure is as", + "product_code":"dws", + "title":"gds_ctl.py", + "uri":"dws_07_0129.html", + "doc_type":"tg", + "p_code":"148", + "code":"153" + }, + { + "desc":"Handle errors that occurred during data import.Errors that occur when data is imported are divided into data format errors and non-data format errors.Data format errorWhe", + "product_code":"dws", + "title":"Handling Import Errors", + "uri":"dws_07_0056.html", + "doc_type":"tg", + "p_code":"148", + "code":"154" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Server Tool", + "uri":"dws_07_0100.html", + "doc_type":"tg", + "p_code":"", + "code":"155" + }, + { + "desc":"gs_dump is tool provided by GaussDB(DWS) to export database information. You can export a database or its objects, such as schemas, tables, and views. 
The database can be", + "product_code":"dws", + "title":"gs_dump", + "uri":"dws_07_0101.html", + "doc_type":"tg", + "p_code":"155", + "code":"156" + }, + { + "desc":"gs_dumpall is a tool provided by GaussDB(DWS) to export all database information, including the data of the default postgres database, data of user-specified databases, a", + "product_code":"dws", + "title":"gs_dumpall", + "uri":"dws_07_0102.html", + "doc_type":"tg", + "p_code":"155", + "code":"157" + }, + { + "desc":"gs_restore is a tool provided by GaussDB(DWS) to import data that was exported using gs_dump. It can also be used to import files that were exported using gs_dump.It has ", + "product_code":"dws", + "title":"gs_restore", + "uri":"dws_07_0103.html", + "doc_type":"tg", + "p_code":"155", + "code":"158" + }, + { + "desc":"gds_check is used to check the GDS deployment environment, including the OS parameters, network environment, and disk usage. It also supports the recovery of system param", + "product_code":"dws", + "title":"gds_check", + "uri":"dws_07_0104.html", + "doc_type":"tg", + "p_code":"155", + "code":"159" + }, + { + "desc":"gds_install is a script tool used to install GDS in batches, improving GDS deployment efficiency.Set environment variables before executing the script. For details, see \"", + "product_code":"dws", + "title":"gds_install", + "uri":"dws_07_0106.html", + "doc_type":"tg", + "p_code":"155", + "code":"160" + }, + { + "desc":"gds_uninstall is a script tool used to uninstall GDS in batches.Set environment variables before executing the script. For details, see \"Importing Data > Using a Foreign ", + "product_code":"dws", + "title":"gds_uninstall", + "uri":"dws_07_0107.html", + "doc_type":"tg", + "p_code":"155", + "code":"161" + }, + { + "desc":"gds_ctl is a script tool used for starting or stopping GDS service processes in batches. 
You can start or stop GDS service processes, which use the same port, on multiple", + "product_code":"dws", + "title":"gds_ctl", + "uri":"dws_07_0105.html", + "doc_type":"tg", + "p_code":"155", + "code":"162" + }, + { + "desc":"During cluster installation, you need to execute commands and transfer files among hosts in the cluster. Therefore, mutual trust relationships must be established among t", + "product_code":"dws", + "title":"gs_sshexkey", + "uri":"dws_07_0108.html", + "doc_type":"tg", + "p_code":"155", + "code":"163" + }, + { + "desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.", + "product_code":"dws", + "title":"Change History", + "uri":"dws_07_0200.html", + "doc_type":"tg", + "p_code":"", + "code":"164" + } +] \ No newline at end of file diff --git a/docs/dws/tool/DWS_DS_09.html b/docs/dws/tool/DWS_DS_09.html new file mode 100644 index 00000000..9353009a --- /dev/null +++ b/docs/dws/tool/DWS_DS_09.html @@ -0,0 +1,21 @@ + + +
Prerequisites:
+Perform the following steps to import table data:
+The Open dialog box is displayed.
+The status bar displays the operation progress. The imported data will be added to the existing table data.
+The Data Imported Successfully dialog box and status bar display the status of the completed operation.
+Perform the following steps to cancel table data import:
+The Messages tab and status bar display the status of the canceled operation.
+Follow the steps to view table data:
+The View Table Data tab is displayed where you can view the table data information.
+Toolbar menu in the View Table Data window:
Toolbar Name | Toolbar Icon | Description
---|---|---
Copy | (icon) | Click the icon to copy selected content from View Table Data window to clipboard. Shortcut key - Ctrl+C.
Advanced Copy | (icon) | Click the icon to copy content from result window to the clipboard. Results can be copied to include the row number and/or column header. Refer to Query Results to set this preference. Shortcut key - Ctrl+Shift+C.
Show/Hide Search bar | (icon) | Click the icon to display/hide the search text field. This is a toggle button.
Encoding | - | Refer to Executing SQL Queries for information on encoding selection.
Icons in Search field:
Icon Name | Icon | Description
---|---|---
Search | (icon) | Click the icon to search the table data displayed based on the criteria defined. Search text is case-insensitive.
Clear Search Text | (icon) | Click the icon to clear the search text entered in the search field.
Refer to Executing SQL Queries for column reordering and sort option.
+Refer to Query Results for more information.
+Follow the steps below to edit table data:
+The Edit Table Data tab is displayed.
+Refer to Viewing Table Data for description on copy and search toolbar options.
+Data Studio validates only the following data types entered into cells:
+Bigint, bit, boolean, char, date, decimal, double, float, integer, numeric, real, smallint, time, time with time zone, timestamp, timestamp with time zone, tinyint, and varchar.
+Editing of array data types is not supported.
+Any related errors reported by the database during this operation will be displayed in Data Studio. Time with time zone and timestamp with time zone columns are non-editable columns.
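As a sketch, the validation scope above can be illustrated with a hypothetical table (schema, table, and column names are invented for illustration):

```sql
-- Hypothetical table: each scalar column uses a type that Data Studio
-- validates when edited in a cell of the Edit Table Data tab.
CREATE TABLE demo_schema.t_profile
(
    c_id      bigint,
    c_active  boolean,
    c_born    date,
    c_score   numeric(10,2),
    c_seen    timestamp,                 -- editable
    c_seen_tz timestamp with time zone,  -- displayed but non-editable
    c_tags    text[]                     -- array type: editing not supported
);
```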
+You can perform the following operations in the Edit Table Data tab:
+ + +The Edit Table Data tab status bar shows the Query Submit Time, Number of rows fetched, Execution time and Status of the operation.
+Data Studio updates rows identified by the unique key. If a unique key is not identified for a table and there are identical rows, then an update operation made on one of the rows will affect all identical rows. Refresh the Edit Table Data tab to view the updated rows.
+Refer to Query Results for more information.
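The behavior described above for tables without a unique key can be sketched in SQL (all names are hypothetical): when no key distinguishes identical rows, an update that matches on every column affects all of them.

```sql
-- Hypothetical table with no unique key and two identical rows.
CREATE TABLE demo_schema.t_orders (order_no int, amount numeric);
INSERT INTO demo_schema.t_orders VALUES (1, 10.0), (1, 10.0);

-- An edit can only be expressed as a WHERE clause over the existing
-- column values, so BOTH identical rows are updated:
UPDATE demo_schema.t_orders
   SET amount = 12.5
 WHERE order_no = 1 AND amount = 10.0;   -- affects both rows
```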
+Data Studio allows you to edit the distribution key column only for a new row.
+ +Define unique key dialog box is displayed.
+Click Use All Columns to define all columns as unique key.
+Click Cancel to modify the information in Edit Table Data tab.
+The Edit Table Data tab status bar shows the Query Submit Time, Number of rows fetched, Execution time and Status of the operation.
+Select the Remember the selection for this window option to hide the unique key definition window while continuing with the edit table data operation. Click the corresponding icon on the Edit Table Data toolbar to clear the previously selected unique key definition and display the unique key definition window again.
Define unique key dialog box is displayed.
+Click Use All Columns to define all columns as unique key.
+Click Cancel to modify the information in Edit Table Data tab.
+The status bar shows the Execution Time and Status of the operation.
+Select the Remember the selection for this window option to hide the unique key definition window while continuing with the edit table data operation. Click the corresponding icon on the Edit Table Data toolbar to clear the previously selected unique key definition and display the unique key definition window again.
During an edit operation, Data Studio does not allow you to edit the distribution key column, as the database uses it to locate data in the database cluster.
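The role of the distribution key can be sketched with GaussDB(DWS)-style DDL (all names are illustrative):

```sql
-- The hash of customer_id decides which data node stores each row.
-- Changing customer_id in place would leave the row on the wrong
-- node, which is why the column is read-only during edits.
CREATE TABLE demo_schema.t_sales
(
    customer_id int,
    sale_date   date,
    amount      numeric(12,2)
)
DISTRIBUTE BY HASH (customer_id);
```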
+You can copy data from the Edit Table Data tab.
+Follow the steps to copy data:
+Refer to Executing SQL Queries to understand the difference between copy and advanced copy.
+You can copy data from a CSV file and paste it into cells in the Edit Table Data tab to insert and update records. If you paste onto existing cell data, the data is overwritten with the new data from the CSV file.
+Follow the steps to paste data into a cell:
+The Define Unique Key dialog box is displayed.
+Click Use All Columns to define all columns as the unique key.
+Click Cancel to modify the information in the Edit Table Data tab.
+The status bar shows the Execution Time and Status of the operation.
+Select Remember the selection for this window to hide the unique key definition window while continuing with the edit table data operation. Click the corresponding icon on the Edit Table Data toolbar to clear the previously selected unique key definition and display the unique key definition window again.
During the pasting operation, Data Studio does not allow you to edit the distribution key column as it is used by the DB to locate data in the database cluster.
+Empty cells are shown as [NULL]. Empty cells in the Edit Table Data tab can be searched using the Null Values search drop-down.
+Refer to Executing SQL Queries for information on show/hide search bar, sort, column reorder, and encoding options.
+Data Studio allows you to edit temporary tables. Temporary tables are deleted automatically when you close the connection that was used to create the table.
+Ensure that connection reuse is enabled when you use the SQL Terminal to edit temporary tables. Refer to Managing SQL Terminal Connections for information about enabling SQL Terminal Connection reuse.
+Follow the steps to edit a temporary table:
+The Result tab displays the results of the SQL query along with the query statement executed.
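The connection dependency described above can be sketched as follows (the table name is illustrative):

```sql
-- A temporary table is visible only to the session that created it,
-- so all statements must run on the same (reused) connection:
CREATE TEMPORARY TABLE tmp_staging (id int, val varchar(32));
INSERT INTO tmp_staging VALUES (1, 'a');
SELECT * FROM tmp_staging;   -- succeeds on the same connection
-- Closing the connection drops tmp_staging automatically.
```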
+Follow the steps below to create a sequence:
+The Create New Sequence dialog box is displayed.
+Select the Case check box to retain the capitalization of the text entered in Sequence Name field. For example, if the sequence name entered is "Employee", then the sequence name is created as "Employee".
+The minimum and maximum value should be between -9223372036854775808 and 9223372036854775807.
+The schema name auto-populates in the Schema field.
+In the SQL Preview tab, you can view the SQL query automatically generated for the inputs provided.
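For illustration only, the SQL generated for such a dialog might resemble the following sketch (the schema and sequence names are hypothetical, and the exact clauses depend on the inputs provided):

```sql
-- Quoting the name retains its case, matching the Case check box
-- behavior described above.
CREATE SEQUENCE demo_schema."Employee"
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 9223372036854775807   -- within the allowed range
    CACHE 1;
```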
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog is displayed.
+In the SQL Preview tab, you can view the SQL query automatically generated for the inputs provided.
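Assuming PostgreSQL-compatible privilege syntax, the preview for a sequence grant/revoke might look like this sketch (all names are hypothetical):

```sql
-- Grant read access to the sequence, then withdraw write access:
GRANT USAGE, SELECT ON SEQUENCE demo_schema."Employee" TO report_user;
REVOKE UPDATE ON SEQUENCE demo_schema."Employee" FROM report_user;
```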
+You can perform the following operations on an existing sequence:
+ +Individual or batch dropping can be performed on sequences. Refer to Batch Dropping Objects section for batch drop.
+Follow the steps below to drop a sequence:
+The Drop Sequence dialog box is displayed.
+The status bar displays the status of the completed operation.
+Follow the steps to drop a sequence cascade:
+The Drop Sequence Cascade dialog box is displayed.
+The status bar displays the status of the completed operation.
+This is only available for OLAP, not for OLTP.
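The two drop variants above map to statements of this form (the sequence name is illustrative):

```sql
DROP SEQUENCE demo_schema."Employee";          -- fails if dependent objects exist
DROP SEQUENCE demo_schema."Employee" CASCADE;  -- also drops dependent objects
```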
+Follow the steps to grant/revoke a privilege:
+The Grant/Revoke dialog is displayed.
+Follow the steps below to create a new view:
+The DDL template for the view is displayed in the SQL Terminal tab.
+You can view the new view in the Object Browser.
+The status bar will not display message on completion of this operation.
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog box is displayed.
+In the SQL Preview tab, you can view the SQL query automatically generated for the inputs provided.
+Views can be created to restrict access to specific rows or columns of a table. A view can be created from one or more tables and is determined by the query used to create the view.
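A minimal sketch of such a restricted view (all names are hypothetical):

```sql
-- Exposes only two columns and only active rows of the base table.
CREATE VIEW demo_schema.v_active_emps AS
SELECT emp_id, emp_name              -- column restriction
  FROM demo_schema.t_employees
 WHERE status = 'ACTIVE';            -- row restriction
```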
+You can perform the following operations on an existing view:
+Follow the steps below to export the view DDL:
+The Data Studio Security Disclaimer dialog box is displayed.
+The Save As dialog box is displayed.
+The Export message and status bar display the status of the completed operation.
Database Encoding | File Encoding | Supports Exporting DDL
---|---|---
UTF-8 | UTF-8 | Yes
UTF-8 | GBK | Yes
UTF-8 | LATIN1 | Yes
GBK | GBK | Yes
GBK | UTF-8 | Yes
GBK | LATIN1 | No
LATIN1 | LATIN1 | Yes
LATIN1 | GBK | No
LATIN1 | UTF-8 | Yes
Individual or batch dropping can be performed on views. Refer to Batch Dropping Objects for batch dropping.
+Follow the steps below to drop the view:
+The Drop View dialog box is displayed.
+The status bar displays the status of the completed operation.
+Follow the steps below to drop a view and its dependent database objects:
+The Drop View dialog box is displayed.
+The status bar displays the status of the completed operation.
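The plain and cascade drops correspond to statements of this form (the view name is illustrative):

```sql
DROP VIEW demo_schema.v_active_emps;           -- dependent objects block the drop
DROP VIEW demo_schema.v_active_emps CASCADE;   -- dependent objects are dropped too
```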
+Follow the steps below to rename a view:
+The Rename View dialog box is displayed.
+The status bar displays the status of the completed operation.
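Renaming a view corresponds to DDL of this form (names are illustrative):

```sql
ALTER VIEW demo_schema.v_active_emps RENAME TO v_current_emps;
```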
+Follow the steps below to set the schema for a view:
+The Set Schema dialog box is displayed.
+The status bar displays the status of the completed operation.
+If the required schema contains a view with the same name as the current view, then Data Studio does not allow setting the schema for the view.
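Setting the schema corresponds to a statement of this form (schema and view names illustrative):

```sql
ALTER VIEW public.v_employee_contacts SET SCHEMA hr;
```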
+Follow the steps below to view the DDL of the view:
+The DDL is displayed in a new SQL Terminal tab. You must refresh the Object Browser to view the latest DDL.
+Follow the steps below to set the default value for a column in the view:
+A dialog box with the current default value (if it is set) is displayed which prompts you to provide the default value.
+Data Studio displays the status of the operation in the status bar.
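In PostgreSQL-compatible databases such as GaussDB(DWS), setting a column default on a view corresponds to a statement of this form (names and default value are illustrative):

```sql
ALTER VIEW hr.v_employee_contacts
    ALTER COLUMN email SET DEFAULT 'unknown@example.com';
```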
+Follow the steps below to view the properties of the View:
+The properties (General and Columns) of the selected View is displayed in different tabs.
+If the property of a View is modified that is already opened, then refresh and open the properties of the View again to view the updated information on the same opened window.
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog box is displayed.
+A database is used by many users, and the users are grouped for management convenience. A database role can be one or a group of database users.
+Users and roles are similar concepts in databases. In practice, you are advised to use roles to manage permissions rather than granting them to individual users directly.
+Users: A set of database users. These users are different from operating system users. Users can grant other users privileges to access database objects.
+Role: A role can be considered a user or a group of users, depending on how it is used. Roles are defined at the cluster level and therefore apply to all databases in the cluster.
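As a sketch, a role-based setup might look like the following. The syntax follows the PostgreSQL family; your cluster may require additional options (such as a password for roles), and all names here are illustrative:

```sql
-- Create a role that cannot log in; it only carries privileges.
CREATE ROLE analyst_role NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO analyst_role;

-- Create a user and grant the role to it; the user gains the role's privileges.
CREATE USER alice PASSWORD '********';
GRANT analyst_role TO alice;
```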
+You can perform the following operations on an existing user/role:
+ +Follow the steps below to drop a user/role:
+The Drop User/Role dialog box is displayed.
+The status bar displays the status of the completed operation.
+Follow the steps below to view the properties of a user/role:
+Data Studio displays the properties (General, Privilege, and Membership) of the selected user/role in different tabs.
+The properties can be edited; OID is a read-only field.
+Refer to Editing Table Data for information on edit, save, cancel, copy, and refresh operations.
+You can open multiple SQL Terminal tabs in Data Studio to execute multiple SQL statements for query in the current SQL Terminal tab. Perform the following steps to open a new SQL Terminal tab:
+You can also open multiple SQL Terminal tabs on different connection templates.
+The SQL Terminal tab is displayed.
+If no result is found, Data Studio displays a message in the status bar. The Result tab displays the results of successful execution.
+Perform the following steps to open a new SQL Terminal tab in another connection:
+The name format of the new SQL Terminal tab is as follows:
+Database name@Connection information(Tab number), for example, postgres@IDG_1(2). The number of each SQL Terminal tab in the same connection information is unique.
+You can copy or export cell data to an Excel file and generate a SQL query file.
+After the SQL query result is displayed in the Result tab, right-click the result. The following menu is displayed:
+Perform the following steps to add a row number and column header to the result set:
+The following table describes the right-click options.
+| Option | Sub-Item | Description |
+|---|---|---|
+| Copy Data | Copy | Copies data in the selected cell. |
+| Copy Data | Advanced Copy | Copies data in the selected cell, the row number, and the column header based on the preference settings. |
+| Copy to Excel | Copy as xls | Exports data of the selected cells to an xls file, which supports a maximum of 64,000 rows and 256 columns. |
+| Copy to Excel | Copy as xlsx | Exports data of the selected cells to an xlsx file, which supports a maximum of 1 million rows. |
+| Export | Current Page | Exports the table data on the current page. |
+| Export | All Pages | Exports the table data of all pages. |
+| Generate SQL | Selected Line | Generates a SQL file with INSERT statements for the selected rows of the target table. |
+| Generate SQL | Current Page | Generates a SQL file with INSERT statements for the data on the current page. |
+| Generate SQL | All Pages | Generates a SQL file with INSERT statements for all table data. |
+| Set Null | - | Sets the cell data to null. |
+| Search | - | Searches for data in the selected cells and displays all data that meets the search criteria. |
+Generating SQL files is not supported for result sets produced by queries that use JOIN, expressions, views, SET operators, aggregate functions, GROUP BY clauses, or column aliases.
+When a query is executed in the SQL Terminal pane, a progress bar is displayed to dynamically display the execution duration. After the query is complete, the time bar disappears. The total execution duration is displayed next to the time bar.
+If you want to cancel the query, click Cancel next to the time bar.
+The procedure is shown in the following figure.
+This section describes the constraints and limitations for using Data Studio.
+The filter count and filter status are not displayed in the filter tree.
+If the SQL statement, DDL, object name, or data to be viewed contains Chinese characters, and the OS supports GBK, set the encoding mode to GBK. For details, see Session Setting.
+On the Advanced tab of the New Connection and Edit Connection pages, commas (,) are used as separators in the include and exclude columns. Therefore, a schema name that contains a comma (,) is not supported.
+A function or procedure created in SQL Terminal or Create Function/Procedure wizard must end with a slash (/). Otherwise, the statement is considered as single query and an error may be reported during execution.
+Data Studio validates SSL connection parameters only for the first time of connection. If Enable SSL is selected, the same SSL connection parameters are used when a new connection is opened.
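As noted above, a function or procedure created in SQL Terminal must end with a slash (/). A minimal sketch (the function body is illustrative):

```sql
CREATE OR REPLACE FUNCTION add_one(i integer)
RETURNS integer AS
$$
BEGIN
    RETURN i + 1;
END;
$$ LANGUAGE plpgsql;
/
```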
+Data Studio allows viewing and managing frequently executed SQL queries. The history of executed SQL queries is saved only in SQL Terminal.
+Perform the following steps to view the history of executed SQL queries:
+The SQL History dialog box is displayed.
+The scripts of historical SQL queries are not encrypted.
+The number of queries displayed in the SQL History dialog box depends on the value set in Preferences > Editor > SQL History. For details about setting the value, see SQL History. Data Studio overwrites the older queries into the SQL history after the list is full. The executed queries are automatically stored in the list.
+The SQL History dialog box contains the following columns:
+The connection information is deleted together with the query history. If the SQL History dialog box is closed, the query is not removed from the list.
+You can perform the following operations in the SQL History dialog box:
+Perform the following steps to load a SQL query into the SQL Terminal pane:
+The query is added to the cursor position in SQL Terminal.
+You can click the Load in SQL Terminal and close History button to load selected queries into SQL Terminal and close the SQL History dialog box.
+Perform the following steps to load multiple selected SQL queries into the SQL Terminal pane:
+The queries are added to the cursor position in SQL Terminal.
+If you choose to continue execution upon an error, each statement in SQL Terminal is executed in sequence as a scheduled job, the execution status is updated in the console, and each job is listed in the progress bar. When the time difference between job execution, progress bar update, and console update is small, you cannot stop the execution from the progress bar. In this case, close SQL Terminal to stop the execution.
+To load more data in the Result tab, you need to scroll down to bottom, which is inconvenient in some scenarios. Data Studio provides a button that simplifies the loading operation.
+Perform the following steps to load more records:
+All the required records are listed.
+Perform the following steps to delete a SQL query from the SQL History list:
+A confirmation dialog box is displayed.
+Perform the following steps to delete all SQL queries from the SQL History list:
+A confirmation dialog box is displayed.
+You can pin queries that you do not want Data Studio to delete automatically from SQL History. You can pin a maximum of 50 queries. Pinned queries are displayed at the top of the list. The value set in SQL History does not affect the pinned queries. For details, see SQL History.
+The pinned queries are displayed at the top of the list once the SQL History pane is closed and opened again.
+Perform the following steps to pin a SQL query:
+The Pin Status column displays the pinned status of the query.
+Follow the steps to open an SQL script:
+If the SQL Terminal has existing content, then there will be an option to overwrite the existing content or append content to it.
+The selected SQL script is opened as a File Terminal.
+The icons on the File Terminal tab are different from those in SQL Terminal. When you move the mouse cursor over the source file, the corresponding database connection is displayed on the File Terminal.
+Data Studio allows you to save and open SQL scripts in the SQL Terminal. After saving the changes, SQL Terminal will be changed to a File Terminal.
+The Save option saves the File Terminal content to the associated file.
+Follow the steps to save an SQL script:
+The Data Studio Security Disclaimer dialog box is displayed.
+The Save As option saves the terminal content to a new file.
+Follow the steps to save an SQL script:
+The Data Studio Security Disclaimer dialog box is displayed.
+The Save As dialog box is displayed.
+If there are unsaved changes in File Terminals, you will be given the option to save or cancel on graceful exit of Data Studio.
+Data Studio allows you to view table properties and functions/procedures.
+Follow the steps to view table properties:
+The table properties are read-only.
+Follow the steps to view functions/procedures:
+Follow the steps to view the properties of a view:
+Data Studio allows you to save the unsaved content of the terminal before exiting the application.
+Follow the steps to save the content of the terminal:
+The Saving File Terminal dialog box will not appear in case of force exit.
+Data Studio allows you to cancel the execution of an SQL query being executed in the SQL Terminal.
+Follow the steps to cancel execution of an SQL query:
+Alternatively, you can choose Run > Cancel from the main menu or right-click SQL Terminal and select Cancel, or select Cancel from Progress View tab.
+When you cancel the query, the execution stops at the currently executing SQL statement.
+Database changes made by the canceled query are rolled back and the queries following the canceled query are not executed.
+A query cannot be canceled and the Result tab shows the result when:
+A query cannot be canceled while viewing the query Execution Plan. For more details, refer to Viewing the Query Execution Plan and Cost.
+The Messages tab shows the query cancelation message.
+The Cancel button is enabled only during query execution.
+Data Studio supports formatting and highlighting of SQL queries and PL/SQL statements.
+Follow the steps to format PL/SQL statements:
+Alternatively, use the key combination Ctrl+Shift+F or choose Edit > Format from the main menu.
+The PL/SQL statements are formatted.
+Data Studio supports formatting of simple SQL SELECT, INSERT, UPDATE, DELETE statements which are syntactically correct. The following are some of the statements for which formatting is supported:
+SELECT statement without SET operations like UNION, UNION ALL, MINUS, INTERSECT and so on.
+SELECT statements without sub-queries.
+Follow the steps below to format SQL queries:
+Alternatively, use the key combination Ctrl+Shift+F or choose Edit > Format from the main menu.
+The query is formatted.
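As an illustration of the effect (the exact line breaks depend on the rules and preferences in use), formatting might transform a query as follows:

```sql
-- Before formatting:
select id,name from public.employees where active=true order by name;

-- After formatting:
SELECT
    id,
    name
FROM
    public.employees
WHERE
    active = true
ORDER BY
    name;
```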
+Refer following table for query formatting rules.
+| Statement | Clauses | Formatting Rules |
+|---|---|---|
+| SELECT | SELECT list | Line break before first column; indent column list |
+| SELECT | FROM | Line break before FROM; line break after FROM; indent FROM list; stack FROM list |
+| SELECT | JOIN (FROM clause) | Line break before JOIN; line break after JOIN; line break before ON; line break after ON; indent table after JOIN; indent ON condition |
+| SELECT | WHERE | Line break before WHERE; line break after WHERE; indent WHERE condition; place WHERE condition on single line |
+| SELECT | GROUP BY | Line break before GROUP; line break before GROUP BY expression; indent column list; stack column list |
+| SELECT | HAVING | Line break before HAVING; line break after HAVING; indent HAVING condition |
+| SELECT | ORDER BY | Line break before ORDER; line break after BY; indent column list; stack column list |
+| SELECT | CTE | Indent subquery braces; each CTE in a new line |
+| INSERT | INSERT INTO | Line break before opening brace; line break after opening brace; line break before closing brace; indent column list braces; indent column list; stack column list |
+| INSERT | VALUES | Line break before VALUES; line break before opening brace; line break after opening brace; line break before closing brace; indent VALUES expressions list braces; indent VALUES expressions list; stack VALUES expressions list |
+| INSERT | DEFAULT | Line break before DEFAULT; indent DEFAULT keyword |
+| INSERT | CTE | Each CTE in a new line |
+| INSERT | RETURNING | Line break before RETURNING; line break after RETURNING; indent RETURNING column list; place RETURNING column list on single line |
+| UPDATE | UPDATE table | Line break before table; indent table |
+| UPDATE | SET clause | Line break before SET; indent column assignments list |
+| UPDATE | FROM clause | Line break before FROM; line break after FROM; indent FROM list; stack FROM list |
+| UPDATE | JOIN clause (FROM clause) | Line break before JOIN; line break after JOIN; line break before ON; line break after ON; indent table after JOIN; indent ON condition |
+| UPDATE | WHERE clause | Line break before WHERE; line break after WHERE; indent WHERE condition |
+| UPDATE | CTE | Each CTE in a new line |
+| UPDATE | RETURNING | Line break before RETURNING; line break after RETURNING |
+| DELETE | USING clause | Line break before FROM; line break after FROM; indent USING list; stack FROM list; indent RETURNING column list |
+| DELETE | JOIN clause | Line break before JOIN; line break after JOIN; line break before ON; line break after ON; indent table after JOIN; indent ON condition list |
+| DELETE | WHERE clause | Line break before WHERE; line break after WHERE; indent WHERE condition; stack WHERE condition list |
+| DELETE | CTE | Each CTE in a new line |
+| DELETE | RETURNING | Line break before RETURNING; line break after RETURNING; indent RETURNING column list |
+Data Studio automatically highlights the matching pair of the following punctuation marks when the cursor is placed before or after one of them, or when one is selected.
+Follow the steps below to change case for SQL queries and PL/SQL statements:
+Text case can be changed in the SQL Terminal using one of the following methods:
+Method 1:
+The text changes to the case selected.
+Method 2:
+The text changes to the case selected.
+Method 3:
+The text changes to the case selected.
+Keywords are highlighted automatically when you enter them (according to the default color scheme) as shown below:
+The following figure shows the default color scheme for the specified type of syntax:
+Refer to Syntax Highlighting to customize the SQL highlighting color scheme for the specific type of syntax.
+Data Studio suggests a list of possible schema names, table names and column names, and views in the SQL Terminal.
+Follow the steps below to select a DB object:
+On selection, the child DB object will be appended to the parent DB object (with a period '.').
+If there are two schemas with the name public and PUBLIC, then all child objects for both these schemas will be displayed.
+The execution plan shows how the table(s) referenced by the SQL statement will be scanned (plain sequential scan and index scan).
+The SQL statement execution cost is an estimate of how long it will take to run the statement (measured in cost units that are arbitrary, but conventionally mean disk page fetches).
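Conceptually, the plan is obtained with an EXPLAIN-style statement, and you can produce similar output manually (the table name is illustrative; the exact options Data Studio passes may differ):

```sql
-- Without analyze: estimated costs only; the statement is not executed.
EXPLAIN VERBOSE
SELECT * FROM public.employees WHERE id = 5;

-- With analyze: the statement is executed and actual times are reported.
EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)
SELECT * FROM public.employees WHERE id = 5;
```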
+Follow the steps below to view the plan and cost for a required SQL query:
+To view the explain plan with analyze, click the drop-down next to the execution plan button on the toolbar, select Include Analyze, and click the button.
+The Execution Plan opens by default as a new tab at the bottom, in tree view format. Both tree and text display modes are available.
+The data shown in tree explain plan and visual explain may vary, since the execution parameters considered by both are not the same.
+Following are the parameters selected for explain plan with/without analyze and the columns displayed:
+| Explain Plan Type | Parameters Selected | Columns |
+|---|---|---|
+| Include Analyze unchecked (default setting) | Verbose, Costs | Node type, startup cost, total cost, rows, width, and additional info |
+| Include Analyze checked | Analyze, Verbose, Costs, Buffers, Timing | Node type, startup cost, total cost, rows, width, actual startup time, actual total time, actual rows, actual loops, and additional info |
+The Additional Info column includes predicate information (filter predicate, hash condition), the distribution key, and output information, along with the node type information.
+The tree view categorizes plan nodes into 16 types, and each node is preceded by the icon for its category: Aggregate, Group Aggregate, Function, Hash, Hash Join, Nested Loop, Nested Loop Join, Modify Table, Partition Iterator, Row Adapter, Seq Scan, Set Operator, Sort, Stream, Union, and Unknown.
+Hover over the highlighted cells to identify the heaviest, costliest, and slowest nodes. Cells are highlighted only in the tree view.
+If multiple queries are selected, the explain plan with/without analyze is displayed only for the last selected query.
+Each time execution plan is executed, the plan opens in a new tab.
+If the connection is lost and the database is still connected in Object Browser, then Connection Error dialog box is displayed:
+The toolbar menu in the Execution Plan window provides the following options:
+
+| Toolbar Name | Description |
+|---|---|
+| Tree Format | Displays the explain plan in tree format. |
+| Text Format | Displays the explain plan in text format. |
+| Copy | Copies the selected content from the result window to the clipboard. Shortcut key: Ctrl+C. |
+| Save | Saves the explain plan in text format. |
Refer to Executing SQL Queries for information refresh, SQL preview, and search bar.
+Refresh operation re-executes the explain/analyze query and refreshes the plan in the existing tab.
+The result is displayed in the Messages tab.
+ +Visual Explain plan displays a graphical representation of the SQL query using information from the extended JSON format. This helps to refine query to enhance query and server performance. It helps to analyze the query path taken by the database and identifies heaviest, costliest and slowest node.
+The graphical execution plan shows how the table(s) referenced by the SQL statement will be scanned (plain sequential scan and index scan).
+The SQL statement execution cost is an estimate of how long it will take to run the statement (measured in cost units that are arbitrary, but conventionally mean disk page fetches).
+Costliest: the plan node with the highest self cost.
+Heaviest: the plan node that outputs the maximum number of rows.
+Slowest: the plan node with the highest execution time.
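The JSON-format plan that visual explain consumes can also be inspected directly with a statement roughly like the following (table and column names are illustrative):

```sql
EXPLAIN (ANALYZE, FORMAT JSON)
SELECT d.name, count(*)
FROM public.employees e
JOIN public.departments d ON e.dept_id = d.id
GROUP BY d.name;
```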
+Follow the steps to view the graphical representation of plan and cost for a required SQL query:
+Visual Plan Analysis window is displayed.
+Refer to Viewing the Query Execution Plan and Cost for information on reconnect option in case connection is lost while retrieving the execution plan and cost.
+| Column Name | Description |
+|---|---|
+| Node Name | Name of the node |
+| Analysis | Node analysis information |
+| RowsOutput | Number of rows output by the plan node |
+| RowsOutput Deviation (%) | Deviation percentage between the estimated and actual rows output by the plan node |
+| Execution Time (ms) | Execution time taken by the plan node |
+| Contribution (%) | Percentage of the execution time taken by the plan node against the overall query execution time |
+| Self Cost | Total cost of the plan node minus the total cost of all its child nodes |
+| Total Cost | Total cost of the plan node |
+| Column Name | Description |
+|---|---|
+| Node Name | Name of the node |
+| Entity Name | Name of the object |
+| Cost | Execution time taken by the plan node |
+| Rows | Number of rows output by the plan node |
+| Loops | Number of loops of execution performed by each node |
+| Width | Estimated average width of rows output by the plan node, in bytes |
+| Actual Rows | Actual number of rows output by the plan node |
+| Actual Time | Actual execution time taken by the plan node |
+| Row Name | Description |
+|---|---|
+| Output | Column information returned by the plan node |
+| Analysis | Analysis of the plan node, such as costliest, slowest, and heaviest |
+| RowsOutput Deviation (%) | Deviation percentage between the estimated and actual rows output by the plan node |
+| Row Width (bytes) | Estimated average width of rows output by the plan node, in bytes |
+| Plan Output Rows | Number of rows output by the plan node |
+| Actual Output Rows | Actual number of rows output by the plan node |
+| Actual Startup Time | Actual time taken by the plan node to output the first record |
+| Actual Total Time | Actual execution time taken by the plan node |
+| Actual Loops | Number of iterations performed for the node |
+| Startup Cost | Time taken by the plan node to output the first record |
+| Total Cost | Execution time taken by the plan node |
+| Is Column Store | Orientation of the table (column or row store) |
+| Shared Hit Blocks | Number of shared blocks hit in the buffer |
+| Shared Read Blocks | Number of shared blocks read from the buffer |
+| Shared Dirtied Blocks | Number of shared blocks dirtied in the buffer |
+| Shared Written Blocks | Number of shared blocks written in the buffer |
+| Local Hit Blocks | Number of local blocks hit in the buffer |
+| Local Read Blocks | Number of local blocks read from the buffer |
+| Local Dirtied Blocks | Number of local blocks dirtied in the buffer |
+| Local Written Blocks | Number of local blocks written in the buffer |
+| Temp Read Blocks | Number of temporary blocks read in the buffer |
+| Temp Written Blocks | Number of temporary blocks written in the buffer |
+| I/O Read Time (ms) | Time taken for I/O read operations for the node |
+| I/O Write Time (ms) | Time taken for I/O write operations for the node |
+| Node Type | Type of the node |
+| Parent Relationship | Relationship with the parent node |
+| Inner Node Name | Name of the child node |
+| Node/s | No description needed for this field; it will be removed from the properties |
+| Plan Node | Additional Information |
+|---|---|
+| Partitioned CStore Scan | Table Name, Table Alias, Schema Name |
+| Vector Sort | Sort keys |
+| Vector Hash Aggregate | Group By Key |
+| Vector Hash Join | Join Type, Hash Condition |
+| Vector Streaming | Distribution key, Spawn On |
Refer to Viewing Table Data section for description on copy and search toolbar options.
+The Auto Commit option is available in the Preferences pane. For details, see Transaction.
+Reuse Connection
+The Reuse Connection option allows you to select the same SQL terminal connection or new connection for the result set. The selection affects the record visibility due to the isolation levels defined in the database server.
+For some databases, the temporary tables created or used by the terminal can be edited in the Result tab.
+An icon next to the toggle indicates the state: one icon is displayed when Reuse Connection is set to ON, another when it is set to OFF, and a third when Reuse Connection is disabled.
+Perform the following steps to set Reuse Connection to OFF:
+Reuse Connection is set to OFF for the terminal, and the corresponding icon is displayed.
For details about Auto Commit and Reuse Connection, see Table 1.
+ +Enter a function/procedure or SQL statement in the SQL Terminal tab and click , or press Ctrl+Enter, or choose Run > Compile/Execute Statement in the main menu.
Alternatively, you can right-click in the SQL Terminal tab and select Execute Statement.
+You can check the status bar to view the status of a query being executed.
+After the function/procedure or SQL query is executed, the result is generated and displayed in the Result tab.
+If the connection is lost during execution but the database remains connected in Object Browser, the Connection Error dialog box is displayed with the following options:
+If the reconnection fails after three attempts, the database will be disconnected in Object Browser. Connect to the database in Object Browser and try the execution again.
+You can choose Settings > Preferences to set the column width. For details, see Query Results.
+Column Reorder
+You can click a column header and drag the column to the desired position.
+This feature allows you to sort table data of some pages by multiple columns, as well as to set the priority of columns to be sorted.
+This feature is available for the following tabs:
+Perform the following steps to enable Multi-Column Sort:
+The Multi-Column Sort dialog box is displayed.
+The Multi-Column Sort dialog box contains the following elements.
+| Name | UI Element Type | Description/Operation |
+|---|---|---|
+| Priority | Read-only text field | Shows the column priority in Multi-Column Sort |
+| Column Name | Drop-down field, which can be any column name of the table | Shows the name of the column added for sorting |
+| Data Type | Read-only text field | Shows the data type of the selected column |
+| Sort Order | Drop-down field, which can be either ascending or descending order | Shows the sort order of the selected column |
+| Add Column | Button | Adds new columns to a table for multi-column sort |
+| Delete Column | Button | Deletes selected columns from a table for multi-column sort |
+| Up | Button | Moves the selected column up by one step to change the sort priority |
+| Down | Button | Moves the selected column down by one step to change the sort priority |
+| Apply | Button | Applies the sort priority |
Data types will be sorted in an alphabetical order, except the following ones:
+TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, REAL, DOUBLE, NUMERIC, BIT, BOOLEAN, DATE, TIME, TIME_WITH_TIMEZONE, TIMESTAMP, and TIMESTAMP_WITH_TIMEZONE
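Multi-column sorting of the displayed data is conceptually the same as an ORDER BY with several keys applied in priority order (table and column names are illustrative):

```sql
SELECT *
FROM public.employees
ORDER BY dept_id   ASC,  -- priority 1
         hire_date DESC, -- priority 2
         name      ASC;  -- priority 3
```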
+The Multi-Column Sort dialog box contains the following icons.
+| Icon | Description | Operation |
+|---|---|---|
+| Not Sorted | Displayed in a column header when the column is not sorted. | Click this icon to sort the column in ascending order. Alternatively, use Alt+Click on the column header. |
+| Ascending Sort | Displayed in a column header when the column is sorted in ascending order. | Click this icon to sort the column in descending order. Alternatively, use Alt+Click on the column header. |
+| Descending Sort | Displayed in a column header when the column is sorted in descending order. | Click this icon to remove sorting from the column. Alternatively, use Alt+Click on the column header. |
+Icons for the sort priority are as follows: an icon with three dots indicates the highest priority, an icon with two dots indicates the second highest priority, and an icon with one dot indicates the lowest priority.
+| Toolbar Name | Description |
+|---|---|
+| Copy | Copies selected data from the Result pane to the clipboard. Shortcut key: Ctrl+C. |
+| Advanced Copy | Copies selected data from the Result pane to the clipboard, including column headers. See Query Results to set this preference. Shortcut key: Ctrl+Shift+C. |
+| Export all data | Exports all data to files in Excel (xlsx/xls), CSV, text, or binary format. For details, see Exporting Table Data. |
+| Export current page data | Exports current page data to files in Excel (xlsx/xls) or CSV format. |
+| Paste | Pastes copied information. For details, see Paste. |
+| Add | Adds a row to the result set. For details, see Insert. |
+| Delete | Deletes a row from the result set. For details, see Delete. |
+| Save | Saves the changes made in the result set. For details, see Editing Table Data. |
+| Rollback | Rolls back the changes made in the result set. For details, see Editing Table Data. |
+| Refresh | Refreshes information in the result set. If multiple result sets are open for the same table, changes made in one result set take effect in the others after refresh. If the table is edited, the result sets are updated after refresh. |
+| Clear Unique Key selection | Clears the previously selected unique key. For details, see Editing Table Data. |
+| Show/Hide Query bar | Displays or hides the query executed for a specified result set. This is a toggle button. |
+| Show/Hide Search bar | Displays or hides the Search field. This is a toggle button. |
+| Encoding | Select the appropriate encoding in this drop-down list to view the data accurately. The value defaults to UTF-8. Whether you can configure this field depends on the settings in Result Data Encoding. NOTE: Data editing operations, except data insertion, are restricted after the default encoding is modified. |
+| Multi Sort | Displays the Multi Sort dialog box. |
+| Clear Sort | Resets all sorted columns. |
Icons in the Search field are as follows:
+| Icon Name | Description |
+|---|---|
+| Search | Searches result sets according to the defined criteria. The text is case-insensitive. |
+| Clear Search Text | Clears the text entered in the Search field. |
Right-click options in the Result pane are as follows:
+| Option | Description |
+|---|---|
+| Close | Closes only the active Result pane |
+| Close Others | Closes all other Result panes except the active one |
+| Close Tabs to the Right | Closes all Result panes to the right of the active one |
+| Close All | Closes all Result panes, including the active one |
+| Detach | Opens only the active Result pane |
Status information displayed in the Result pane is as follows:
+When you are viewing table data, Data Studio automatically adjusts the column width for better display. You can adjust the column width as required. If the text length exceeds the column width and you adjust the column width, Data Studio may fail to respond.
+For details, see Query Results.
+Data Studio backs up unsaved data in SQL Terminal and PL/SQL Viewer periodically based on the time interval defined in the Preferences pane. Data is encrypted and saved based on the Preference settings. See Query/Function/Procedure Backup to enable or disable the backup function, set time interval of data saving, and encrypt the saved data.
+Unsaved changes in SQL Terminal and PL/SQL Viewer are backed up and saved in the DataStudio\UserData\Username\Autosave folder. If these backup files have been saved before Data Studio is shut down unexpectedly, these files will be available upon the next login.
+If unsaved data exists in SQL Terminal and PL/SQL Viewer during graceful exit, Data Studio will not be closed until the backup is complete.
When an error occurs during the execution of queries/functions/procedures, an error locating message is displayed with the following options:
Yes: Click Yes to proceed with the execution.
No: Click No to stop the execution.
You can select Do not display other errors that occur during the execution to hide further error messages and proceed with the current SQL query.
The line number and position of an error are displayed in the Messages pane. In SQL Terminal or PL/SQL Viewer, the corresponding line is marked, and a red underline appears at the position of the error. You can hover over the marker to display the error message. For details about why the line number may not match the error details, see FAQs.
If a query/function/procedure is modified during execution, the error locator may not display the correct line number and error position.
Perform the following steps to search in the PL/SQL Viewer or SQL Terminal pane:
Press Ctrl+F to search for text and keywords. Then press F3 to find the next occurrence or Shift+F3 to find the previous one. Ctrl+F, F3, and Shift+F3 are available only when you search for keywords in the current instance.
+The Find and Replace dialog box is displayed.
+The desired text is highlighted.
+You can press F3 for forward search or Shift+F3 for backward search.
+When reaching the last line in a SQL query or PL/SQL statement, select Wrap around to proceed with the search.
Perform the following steps to go to a specific line in the PL/SQL Viewer or SQL Terminal pane:
+The Go To Line dialog box is displayed, allowing you to skip to a specific line in SQL Terminal.
+You cannot enter the following characters in this field:
+Data Studio allows you to comment or uncomment lines or blocks.
+Perform the following steps to comment or uncomment lines in PL/SQL Viewer or SQL Terminal:
+Alternatively, press Ctrl+/ or right-click a line and select Comment/Uncomment Lines.
+Perform the following steps to comment or uncomment blocks in PL/SQL Viewer or SQL Terminal:
+Alternatively, press Ctrl+Shift+/ or right-click a line or the entire block and select Comment/Uncomment Block.
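These shortcuts insert standard SQL comment markers around the selection. A minimal illustration (the table and column names are hypothetical):

```sql
-- Ctrl+/ toggles a line comment on each selected line:
-- SELECT id, name FROM public.t1;

/* Ctrl+Shift+/ toggles a block comment
   around the selected statement:
SELECT id, name
FROM public.t1
WHERE id > 10;
*/
```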
+You can indent or un-indent lines according to the indent size defined in Preferences.
+Perform the following steps to indent lines in PL/SQL Viewer or SQL Terminal:
+Move the selected lines according to the indent size defined in Preferences. For details about modifying the indent size, see Formatter.
+Perform the following steps to un-indent lines in PL/SQL Viewer or SQL Terminal:
+Move the selected lines according to the indent size defined in Preferences. For details about modifying the indent size, see Formatter.
Only selected lines that have available tab space are un-indented. For example, if multiple lines are selected and one of the selected lines starts at position 1, pressing Shift+Tab un-indents all lines except the one starting at position 1.
+The Insert Space option is used to replace a tab with spaces according to the indent size defined in Preferences.
+Perform the following steps to replace a tab with spaces in PL/SQL Viewer or SQL Terminal:
+A tab is replaced with spaces according to the indent size defined in Preferences. For details about modifying the indent size, see Formatter.
+Perform the following steps to execute multiple functions/procedures:
+Insert a forward slash (/) in a new line under the function/procedure in SQL Terminal.
+Add the new function/procedure in the next line.
+Perform the following steps to execute multiple SQL queries:
+Perform the following steps to execute a SQL query after executing a function/procedure:
+Insert a forward slash (/) in a new line under the function/procedure in SQL Terminal. Then add new query or function/procedure statements.
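For example, two procedures followed by a query, separated by slashes, might look as follows in SQL Terminal (the procedure and table names are hypothetical, and the exact PL/SQL syntax depends on your GaussDB(DWS) version):

```sql
CREATE OR REPLACE PROCEDURE proc_a()
AS
BEGIN
    INSERT INTO public.t1 VALUES (1);
END;
/

CREATE OR REPLACE PROCEDURE proc_b()
AS
BEGIN
    INSERT INTO public.t1 VALUES (2);
END;
/

SELECT * FROM public.t1;
```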
+Perform the following steps to execute PL/SQL statements and SQL queries on different connections:
Select the required connection from the Connection drop-down list in SQL Terminal and execute the statements.
Perform the following steps to rename a SQL Terminal:
+The Rename Terminal dialog box is displayed prompting you to enter the new terminal name.
+The SQL Assistant tool provides suggestion or reference for the information entered in SQL Terminal and PL/SQL Viewer. Perform the following steps to open SQL Assistant:
+When Data Studio is started, related syntax is displayed in the SQL Assistant panel. After you enter a query in SQL Terminal, related syntax details are displayed, including precautions, examples, and description of syntax, functions, and parameters. Select the text and right-click to copy the selected text or copy and paste it to SQL Terminal.
+The Templates option of Data Studio allows you to insert frequently used SQL statements in SQL Terminal or PL/SQL Viewer. Some frequently used SQL statements have been saved in Data Studio. You can create, edit, or remove a template. For details, see Adding/Editing/Removing a Template.
+The following table lists the default templates.
| Name | Description |
|---|---|
| df | delete from |
| is | insert into |
| o | order by |
| s* | select from |
| sc | select row count |
| sf | select from |
| sl | select |
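For example, applying the sf template inserts the associated select-from skeleton, which you then complete by hand (the table and column names below are hypothetical):

```sql
-- typing "sf" and applying the template inserts:
select  from
-- which you complete manually, for example:
select id, name from public.t1;
```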
Perform the following steps to use the Templates option:
+A list of existing template information is displayed. For details, see the following tables.
| Exact Match | Display |
|---|---|
| On | Displays all entries that start with the input text (case-sensitive). For example, if SF is entered in SQL Terminal or PL/SQL Viewer, all entries that start with SF are displayed. |
| Off | Displays all entries that start with the input text (case-insensitive). For example, if SF is entered in SQL Terminal or PL/SQL Viewer, all entries that start with SF, Sf, sF, or sf are displayed. |

| Text Selection/Cursor Location | Display |
|---|---|
| Text is selected and the shortcut key is used. | Displays entries that match the text between the leftmost character of the selected text and the nearest space or newline character. |
| No text is selected and the shortcut key is used. | Displays entries that match the text between the cursor position and the nearest space or newline character. |
You can export the results of an SQL query into a CSV, Text or Binary file.
+This section contains the following topics:
+ +The following functions are disabled while the export operation is in progress:
+Follow the steps below to export all results:
+Export ResultSet Data window is displayed.
+You can check the status bar to view the status of the result being exported.
+The Data Exported Successfully dialog box is displayed.
+If the disk is full while exporting the results, then Data Studio displays an error in the Messages tab. In this case, clear the disk, re-establish the connection and export the result data.
+The Messages tab shows the Execution Time, Total result records fetched, and the path where the file is saved.
+It is recommended to export all results instead of exporting the current page.
+Follow the steps below to export the current page:
+The Data Studio Security Disclaimer dialog box is displayed.
+You can check the status bar to view the status of the page being exported.
+The Data Exported Successfully dialog box is displayed.
+If the disk is full while exporting the results, then Data Studio displays an error in the Messages tab. In this case, clear the disk, re-establish the connection and export the result data.
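For very large result sets, a CLI export can be faster than exporting from the GUI. Assuming your gsql client supports a psql-style \copy meta-command (check your gsql version; the query and file path below are hypothetical), a sketch:

```sql
\copy (SELECT * FROM public.t1 WHERE col1 = 5) TO 'result.csv' WITH (FORMAT CSV, HEADER);
```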
+The following figure shows the structure of the Data Studio release package.
+Data Studio allows you to reuse an existing SQL Terminal connection or create a new SQL Terminal connection for execution plan and cost, visual explain plan, and operations in the resultset. By default, the SQL Terminal reuses the existing connection to perform these operations.
Use a new connection when multiple queries are queued for execution in the existing connection, because queries are executed sequentially and there may be a delay. Always reuse the existing connection when working with temporary tables. Refer to the Editing Temporary Tables section to edit temporary tables.
+Complete the steps to enable or disable SQL Terminal connection reuse:
+Use the existing SQL Terminal connection to edit temporary tables.
You can view accessible database objects in the navigation tree in Object Browser. Schemas are displayed under databases, and tables are displayed under schemas.
+Object Browser displays only the objects that meet the following minimum permission requirements of the current user.
| Object Type | Permission Displayed in Object Browser |
|---|---|
| Database | Connect |
| Schema | Use |
| Table | Select |
| Column | Select |
| Sequence | Use |
| Function/Procedure | Execute |
Only the child objects you can access are displayed in Object Browser. For example, if you have the permission to access a table but not the permission to access some of its columns, Object Browser displays only the columns you can access. If access to an object is revoked during an operation on the object, an error message is displayed, indicating that you do not have permission to perform the operation. After you refresh Object Browser, the object is no longer displayed.
+The following objects can be displayed in the navigation tree:
+All default created schemas, except for the public schema, are grouped under Catalogs. User schemas are displayed under their databases in Schemas.
+The filter option in Object Browser opens a new tab, where you can specify the search scope. Press Enter to start the search. Object Browser also provides a search bar. You can search for an object by name. In an expanded navigation tree, only the objects that match the filter criteria are displayed.
+In a collapsed navigation tree, the filtering rule takes effect when a node is expanded.
+The batch drop operation allows you to drop multiple objects. This operation also applies to searched objects.
+Perform the following steps to batch drop objects:
+The Drop Objects tab displays the list of objects to be dropped.
| Column Name | Description | Example |
|---|---|---|
| Type | Displays the object type. | Table, view |
| Name | Displays the object name. | public.bs_operation_201804 |
| Query | Displays the query that will be executed to drop the object. | DROP TABLE IF EXISTS public.a123 |
| Status | Displays the status of the drop operation. | |
| Error Message | Displays the failure cause of a drop operation. | The table abc does not exist. Skip it. |
| Option | Description |
|---|---|
| Cascade | Drops dependent objects and attributes. The dropped dependent objects are removed from Object Browser only after a refresh. |
| Atomic | Drops all objects as one operation. If the operation fails, no objects are dropped. |
| No selection | If neither Cascade nor Atomic is selected, dependent objects are not dropped. |
Runs: displays the number of objects that are dropped from the object list
+Errors: displays the number of objects that are not dropped due to errors
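The Cascade option corresponds to appending the CASCADE keyword to the generated DROP statements. Using the table name from the example above:

```sql
-- without Cascade: fails if dependent objects (e.g. views) exist
DROP TABLE IF EXISTS public.a123;

-- with Cascade: dependent objects are dropped as well
DROP TABLE IF EXISTS public.a123 CASCADE;
```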
The batch grant/revoke operation allows you to select multiple objects and grant or revoke privileges on them. You can also perform batch grant/revoke operations on searched objects.
This feature is only available for OLAP, not for OLTP.
Batch grant/revoke is allowed only for objects of the same type within a schema.
Perform the following steps to grant or revoke privileges in a batch:
+Grant/Revoke dialog box is displayed.
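The dialog box generates standard GRANT/REVOKE statements behind the scenes. A sketch of the equivalent SQL for several objects of the same type (the table names and user are hypothetical):

```sql
GRANT SELECT ON TABLE public.t1, public.t2 TO report_user;
REVOKE INSERT ON TABLE public.t1, public.t2 FROM report_user;
```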
+This section provides details on how to personalize Data Studio using preferences settings.
+This section describes how to customize shortcut keys.
+ +Perform the following steps to set or modify shortcut keys:
+The Preferences dialog box is displayed.
+The Shortcut Mapper pane is displayed.
+For example, to change the shortcut key for Step Into from F7 to F6, move the cursor to the Binding text box and enter F6.
+You can modify multiple shortcut keys before restarting Data Studio.
+Perform the following steps to remove shortcut keys:
+The Preferences dialog box is displayed.
+The Shortcut Mapper pane is displayed.
+You can remove multiple shortcut keys before restarting Data Studio.
+Perform the following steps to restore the default shortcut keys:
+The Preferences dialog box is displayed.
+The Shortcut Mapper pane is displayed.
+The Restart Data Studio dialog box is displayed.
| Function | Shortcut Key |
|---|---|
| Sorts the result sets of views and tables, edited tables, and queries in ascending or descending order, or in the order of results received by the server | Alt+Click |
| Opens the Help menu | Alt+H |
| Saves the SQL script | Ctrl+S |
| Opens the Edit menu | Alt+E |
| Compiles or executes statements in SQL Terminal | Ctrl+Enter |
| Searches and replaces | Ctrl+F |
| Searches for the previous occurrence | Shift+F3 |
| Searches for the next occurrence | F3 |
| Redoes an operation | Ctrl+Y |
| Copies the Execution Time and Status information in the Edit Table Data tab | Ctrl+Shift+K |
| Copies database objects from the automatic recommendation list | Alt+U |
| Opens the Callstack, Breakpoints, or Variables pane | Alt+V |
| Opens a SQL script | Ctrl+O |
| Steps over | F8 |
| Steps into | F7 |
| Steps out | Shift+F7 |
| Comments out or uncomments a row | Ctrl+/ |
| Locates the first element in Object Browser | Alt+Page Up or Alt+Home |
| Locates the last element in Object Browser | Alt+Page Down or Alt+End |
| Locates a specific row | Ctrl+G |
| Disconnects from the database | Ctrl+Shift+D |
| Formats SQL or PL/SQL | Ctrl+Shift+F |
| Changes to uppercase | Ctrl+Shift+U |
| Changes to lowercase | Ctrl+Shift+L |
| Updates the cells or columns in the Edit Table Data, Properties, or Results pane (click the cell or column header to enable this option) | F2 |
| Closes the PL/SQL Viewer, View Table Data, Execute Query, or Properties tab | Shift+F4 |
| Continues the PL/SQL debugging | F9 |
| Cuts content | Ctrl+X |
| Copies the name of the object modified in Object Browser or in the terminal; you can copy the selected data from the Terminal, Result, View Table Data, or Edit Table Data tab | Ctrl+C |
| Copies the data in the Result, View Table Data, or Edit Table Data tab, with or without the column titles and row numbers | Ctrl+Shift+C |
| Copies queries in the Edit Table Data tab | Ctrl+Alt+C |
| Copies content of the Variables tab | Alt+K |
| Copies content of the Callstack tab | Alt+J |
| Copies content of the Breakpoints tab | Alt+Y |
| Visualizes the explain plan | Alt+Ctrl+X |
| Displays online help (user manual) | F1 |
| Template | Alt+Ctrl+Space |
| Switches to the first SQL Terminal tab | Alt+S |
| Selects all | Ctrl+A |
| Opens the Setting menu | Alt+G |
| Refreshes the Object Browser pane | F5 |
| Searches for an object | Ctrl+Shift+S |
| Opens the Debugging menu | Alt+D |
| Debugs a template | F10 |
| Debugs a database object | Ctrl+D |
| Highlights Object Browser | Alt+X |
| Opens the File menu | Alt+F |
| Creates a connection | Ctrl+N |
| Opens the Running menu | Alt+R |
| Switches between the SQL Terminal tabs | Ctrl+Page Up or Ctrl+Page Down |
| Expands or collapses all objects | Ctrl+M |
| Pastes content | Ctrl+V |
| Collapses objects to browse the navigation tree | Alt+Q |
| Performs execution | Ctrl+E |
| Displays the execution plan and cost | Ctrl+Shift+X |
| Stops a running query | Shift+Esc |
| Comments out or uncomments a row or the entire block | Ctrl+Shift+/ |
| Enables Auto Suggest of the database object list | Ctrl+Space |
This section describes how to customize syntax highlighting, SQL history information, templates, and formatters.
+ +The Preferences dialog box is displayed.
+The Syntax Coloring pane is displayed.
For example, click the color box next to Strings to customize its color. A dialog box is displayed prompting you to select a color.
Select a color for the specific syntax type. You can select one of the basic colors or customize a color.
+Click Restore Defaults in the Syntax Coloring pane to restore the default color.
+The Preferences.prefs file contains the custom color settings. If the file is damaged, Data Studio will display the default settings.
+The customized color will be used after you restart Data Studio.
+You can set the value of SQL History Count and also the number of characters saved for each query in SQL History.
+Perform the following steps to set the value of SQL History Count and also the number of characters saved for each query in SQL History:
+The Preferences dialog box is displayed.
+The SQL History pane is displayed.
+The value ranges from 1 to 1000. The current value of this field will be displayed.
+The value ranges from 1 to 1000. You can enter 0 to remove the character limit. The current value of this field will be displayed.
+Data Studio allows you to create, edit, and remove a template. For details about templates, see Using Templates.
+If the default settings are restored, all user-defined templates will be removed from the list.
+Perform the following steps to create a template:
+The Preferences dialog box is displayed.
+The Templates pane is displayed.
+The syntax of the text entered in Pattern will be highlighted.
+Perform the following steps to edit a template:
+The Preferences dialog box is displayed.
+The Templates pane is displayed.
+The syntax of the text entered in Pattern will be highlighted.
+Perform the following steps to remove a template:
+The Preferences dialog box is displayed.
+The Templates pane is displayed.
+The template is removed from the Templates pane.
+Default templates that are removed can be added back using the Restore Removed option. It will restore the template to the last updated version. However, the Restore Removed option is not applicable to user-defined templates.
+Perform the following steps to restore the default template settings:
+The Preferences dialog box is displayed.
+The Templates pane is displayed.
+Data Studio allows you to set the tab width and convert tabs to spaces during indent and unindent operations. For details, see Indenting or Un-indenting Lines.
+Perform the following steps to customize the indent size and convert tabs to spaces:
+The Preferences dialog box is displayed.
+The Formatter pane is displayed.
+Perform the following steps to edit settings in Transaction:
+The Preferences dialog box is displayed.
+The Transaction pane is displayed.
+Auto Commit defaults to Enable.
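When Auto Commit is disabled, changes stay in an open transaction until you commit or roll back explicitly. A minimal sketch (the table and column names are hypothetical):

```sql
BEGIN;
UPDATE public.t1 SET col1 = 5 WHERE id = 1;
-- verify the change, then either:
COMMIT;         -- make the change permanent
-- or ROLLBACK; -- discard the change
```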
+Perform the following steps to fold a SQL statement:
+The Preferences dialog box is displayed.
+The Folding pane is displayed.
+Any change in the Folding parameter takes effect only in new editors, and will not take effect in opened editors until they are restarted.
+Perform the following steps to configure Font:
+The Preferences dialog box is displayed.
+The Font pane is displayed.
+Perform the following steps to configure Auto Suggest:
+The Preferences dialog box is displayed.
+The Auto Suggest pane is displayed.
+To enable the Auto Suggest feature, sort the following groups:
+The Preferences dialog box is displayed.
+The Session Setting pane is displayed.
+Data Studio supports only UTF-8 and GBK file encoding types.
+Click Restore Defaults in Session Setting to restore the default value. The default value for Data Studio Encoding and File Encoding is UTF-8.
+Perform the following steps to enable or disable SQL Assistant:
+The Preferences dialog box is displayed.
+The Session Setting pane is displayed.
+Click Restore Defaults in Session Setting to restore the default value. The default value for SQL Assistant is Enable.
+For details about the backup features of Data Studio, see Backing up Unsaved Queries/Functions/Procedures.
+Perform the following steps to enable or disable the backup of unsaved data in SQL Terminal and PL/SQL Viewer:
+The Preferences dialog box is displayed.
+The Session Setting pane is displayed.
+Click Restore Defaults in Session Setting to restore the default value. By default, data backup is enabled and Interval defaults to 5 minutes.
+Perform the following steps to enable or disable the encryption of saved data:
+The Preferences dialog box is displayed.
+The Session Setting pane is displayed.
+Click Restore Defaults in Session Setting to restore the default value. Encryption is enabled by default.
+Perform the following steps to configure the Import Table Data Limit and Import File Data Limit parameters:
+The Preferences dialog box is displayed.
+The Session Setting pane is displayed.
+In the File Limit area, configure the Import Table Data Limit and Import File Data Limit parameters.
+Import Table Data Limit: specifies the maximum size of the table data to import
+Import File Data Limit: specifies the maximum size of the file to import
+Values in the preceding figure are default values.
+Perform the following steps for rendering:
+The Preferences dialog box is displayed.
+The Session Setting pane is displayed.
+In the Lazy Rendering area, the Number of objects in a batch parameter is displayed.
If the input value is less than 100 or greater than 1000, the error message "Invalid Range (100-1000)" is displayed.
+This section describes the minimum system requirements for using Data Studio.
+OS
+The following table lists the OS requirements of Data Studio.
| Server | OS | Supported Version |
|---|---|---|
| General-purpose x86 servers | Microsoft Windows | Windows 7 (64-bit), Windows 10 (64-bit), Windows 2012 (64-bit), Windows 2016 (64-bit) |
| General-purpose x86 servers | SUSE Linux Enterprise Server 12 | SP0 (SUSE 12.0), SP1 (SUSE 12.1), SP2 (SUSE 12.2), SP3 (SUSE 12.3), SP4 (SUSE 12.4) |
| General-purpose x86 servers | CentOS | 7.4 (CentOS 7.4), 7.5 (CentOS 7.5), 7.6 (CentOS 7.6) |
| TaiShan ARM server | NeoKylin | 7.0 |
Browser
+The following table lists the browser requirement of Data Studio.
| OS | Version |
|---|---|
| Microsoft Windows | Internet Explorer 11 or later |
Other software requirements
+The following table lists the software requirement of Data Studio.
| Software | Specifications |
|---|---|
| Java | OpenJDK 1.8 or later matching the OS bit version is recommended. |
| GTK | For Linux OSs, GTK 2.24 or later is required. |
| GNU libc | DDL display, import, and export, as well as data operations, are supported only with GNU libc 2.17 or later. |
| Database | Version |
|---|---|
| GaussDB(DWS) | 1.2.x, 1.5.x, 8.0.x, 8.1.x |
The recommended minimum screen resolution is 1080 x 768. If the resolution is lower than this value, the page display will be abnormal.
+This section describes how to customize the settings in the Query Results pane, including the column width, number of records to be obtained, and copy of column headers or row numbers.
+ +The Preferences dialog box is displayed.
+The Query Results pane is displayed.
+The options of configuring the column width are as follows.
| Option | Outcome |
|---|---|
| Content Length | Sets the column width based on the content length of the query result. |
| Custom Length | Sets a custom column width. NOTE: The value ranges from 100 to 500. |
Click Restore Defaults in Query Results to restore the default value. The default value is Content Length.
+Set the number of records to be obtained in the query results:
+The Preferences dialog box is displayed.
+The Query Results pane is displayed.
| Option | Outcome |
|---|---|
| Fetch All records | Obtains all records in the query results. |
| Fetch custom number of records | Sets the number of records to obtain in the query results. NOTE: The value ranges from 100 to 5000. |
Click Restore Defaults in Query Results to restore the default value. The default value is Fetch custom number of records (1000).
+Copy column headers or row numbers from query results:
+The Preferences dialog box is displayed.
+The Query Results pane is displayed.
| Option | Outcome |
|---|---|
| Include column header | Copies column headers from the query results. |
| Include row number | Copies the selected content along with the row numbers from the query results. |
Click Restore Defaults in Query Results to restore the default value. The default value is Include column header.
+Determine how the result set window is opened:
+The Preferences dialog box is displayed.
| Option | Outcome |
|---|---|
| Overwrite Resultset | After an opened result set window is closed, a new result set window is opened. |
| Retain Current | When a new result set window is opened, the previously opened result set windows are not closed. |
| Database Type | Auto Commit | Reuse Connection | Table Data Save Option | Behavior |
|---|---|---|---|---|
| GaussDB(DWS) | ON | ON | Save Valid Data | Only the valid data is saved and committed. |
| GaussDB(DWS) | ON | ON | Do Not Save | No data is saved when an error occurs. |
| GaussDB(DWS) | ON | OFF | Save Valid Data | Only the valid data is saved and committed. |
| GaussDB(DWS) | ON | OFF | Do Not Save | No data is saved when an error occurs. |
| GaussDB(DWS) | OFF | ON | Save Valid Data | No data is saved when an error occurs. Execute the COMMIT or ROLLBACK statement to save or discard the data. |
| GaussDB(DWS) | OFF | ON | Do Not Save | No data is saved when an error occurs. Execute the COMMIT or ROLLBACK statement to save or discard the data. |
Click Restore Defaults in Edit Table Data to restore the default value. The default value is Save Valid Data.
Perform the following steps to configure whether the data encoding type is displayed in the Query Results, View Table Data, and Edit Table Data panes:
+The Preferences dialog box is displayed.
+The Query Results pane is displayed.
+This section describes how to customize the display of passwords and security disclaimers.
+You can configure whether to display the option of saving password permanently in the Connection pane.
+Perform the following steps to modify the display of the option of saving password permanently:
+The Preferences dialog box is displayed.
+The Password pane is displayed.
| Option | Description |
|---|---|
| Yes | The option of saving the password permanently is displayed in the Save Password drop-down list in the Connection pane. |
| No | The option of saving the password permanently is not displayed in the Save Password drop-down list in the Connection pane, and previously saved passwords are deleted. |
Click Force Restart to cancel the operations and restart Data Studio.
+Click Restore Defaults in Password to restore the default value. The default value is No.
This topic describes how to use the Password setting to continue or stop using Data Studio after the password expires.
+Perform the following steps to modify the behavior of Data Studio upon password expiry:
+The Preferences dialog box is displayed.
+The Password pane is displayed.
| Option | Description |
|---|---|
| Yes | You can log in to Data Studio after the password has expired. NOTE: A message is displayed notifying you that the password has expired and that some operations may not be performed properly. |
| No | You cannot log in to Data Studio after the password has expired. A message is displayed notifying you that the password has expired. |
The default value is Yes.
+You can configure whether to display the security disclaimer for any insecure connection or file operation.
+Perform the following steps to modify the display of the security disclaimer:
+The Preferences dialog box is displayed.
+The Security Disclaimer pane is displayed.
| Option | Description |
|---|---|
| Enable | The security disclaimer is displayed each time you try to establish an insecure connection or perform a file operation. |
| Disable | The security disclaimer is not displayed when you try to establish an insecure connection or perform a file operation. You need to agree to the security implications that may arise from the insecure connection. |
Click Restore Defaults in Security Disclaimer to restore the default value. The default value is Enable.
+The loading and operation performance of Data Studio depends on the number of objects to be loaded in Object Browser, including tables, views, and columns.
+Memory consumption also depends on the number of loaded objects.
+To improve object loading performance and better utilize memory, you are advised to divide an object into multiple namespaces, and to avoid using namespaces that contain a large number of objects and cause data skew. By default, Data Studio loads the namespaces in the search_path set for the user logged in. Other namespaces and objects are loaded only when needed.
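You can inspect or adjust which namespaces are loaded by default with standard search_path commands (the schema name below is hypothetical):

```sql
SHOW search_path;                     -- shows the namespaces loaded by default
SET search_path TO myschema, public;  -- prioritize myschema for the session
```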
To improve performance, you are advised to load all objects rather than loading objects based on user permissions. Table 1 describes the minimum access permissions required to list objects in Object Browser.
| Object Type | Type | Object Browser - Minimum Permission |
|---|---|---|
| Database | Create, Connect, Temporary/Temp, All | Connect |
| Schemas | Create, Usage, All | Usage |
| Tables | Select, Insert, Update, Delete, Truncate, References, All | Select |
| Columns | Select, Insert, Update, References, All | Select |
| Views | Select, Insert, Update, Delete, Truncate, References, All | Select |
| Sequences | Usage, Select, Update, All | Usage |
| Functions | Execute, All | Execute |
To improve the performance of find and replace operations, you are advised to break a line that contains more than 10,000 characters into multiple short lines.
+The following test items and results can help you learn the performance of Data Studio.
| Test Item | Result |
|---|---|
| Recommended maximum memory (current version) | 1.4 GB |

Performance (the database contains a 150 KB table and a 150 KB view, each containing three columns; the maximum memory configuration is used):

| Test Item | Result |
|---|---|
| Time taken to refresh namespaces in Object Browser | 15s |
| Time taken for initial loading and expanding of all tables/views in Object Browser | 90s-120s |
| Time taken for subsequent loading and expanding of all tables/views in Object Browser | <10s |
| Total used memory | 700 MB |
The performance data is for reference only. The actual performance may vary according to the application scenario.
+Solution: Check whether JRE is found. Verify the Java path configured in the environment. For details about the supported Java JDK versions, see System Requirements.
+Check whether the Java Runtime Environment (JRE) or Java Development Kit (JDK) version 1.8 that matches the bit version of the operating system has been installed in the system, and set the Java Home path. If multiple Java versions are installed, set the -vm parameter in the configuration file by referring to Installing and Configuring Data Studio. This is the prerequisite for running Data Studio.
+Check the version of the JRE or JDK installed in the system. If an earlier or incompatible Java version is installed, this error is reported. Upgrade to JRE 1.8 that matches the bit version of the operating system.
+You are advised to run the BAT file to check the Java version compatibility, and then open Data Studio. For details, see Getting Started.
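The compatibility check can be illustrated with a small shell sketch. The function below is hypothetical, not the actual BAT script; it only mirrors the documented rule that Java 1.8 is required:

```shell
# Hypothetical sketch of a Java-version compatibility check.
# The argument is a version string as printed by `java -version`, e.g. "1.8.0_281".
check_java_version() {
  case "$1" in
    1.8.*) echo "compatible" ;;            # Java 1.8 meets the documented requirement
    *)     echo "upgrade to Java 1.8" ;;   # anything else is rejected
  esac
}

check_java_version "1.8.0_281"
check_java_version "1.7.0_80"
```

Note that the bit version (32-bit/64-bit) of the installed Java must also match the Data Studio package, which a version-string check alone cannot detect.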
| Error Information | Solution |
|---|---|
| You are trying to run 32-bit Data Studio in the following environment: | Install 32-bit Java 1.8. |
| The Java version supported by Data Studio must be 1.8 or later. Before using Data Studio, you need to install Java 1.8. | Install Java 1.8 that matches the number of bits of the operating system. |
| You are trying to run 64-bit Data Studio in the following environment: | Install 64-bit Java 1.8. |
| You are trying to run 64-bit Data Studio in the following environment: | Install the 32-bit Data Studio. |
+Solution: Check whether the server is running on the specified IP address and port. Use gsql to connect as the specified user and check the user's availability.
+Solution: If a connection problem occurs during the use of Data Studio, see the following example.
+Create a database connection.
+Used for executing queries.
+When a connection exception occurs in any database (PostgreSQL), the connection is closed. When the database connection is closed, all open procedure and function windows are closed.
+The system displays an error message. The Object Browser navigation tree displays the database status.
+Only the current database is interrupted. Other databases remain connected or are reconnected.
+Reconnect to the database and continue the query.
+Solution: Choose Preferences > Session Settings > Data Studio Encoding and set the encoding format to GBK so that Chinese characters can be displayed properly.
+Solution: When Data Studio has used up the maximum Java memory allocated to it, the message "Out of Memory" or "Java Heap Error" is displayed. By default, the Data Studio.ini configuration file (in the Data Studio installation path) contains the entry -Xmx1200m, where 1200m indicates 1200 MB, the maximum Java memory that Data Studio can use. Data Studio's memory usage depends on the size of the data fetched while using it.
+To solve this problem, increase the maximum Java memory to a suitable value. For example, change -Xmx1200m to -Xmx2000m and restart Data Studio. If the new maximum is used up, the same problem may recur.
+-Xms1024m
+-Xmx1800m
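For reference, heap options such as the values above belong in the -vmargs section, which must sit at the end of Data Studio.ini. A minimal, illustrative fragment:

```ini
-vmargs
-Xms1024m
-Xmx1800m
```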
+Solution: Data Studio disconnects from the database specified in the file. Re-establish the connection and continue the operation.
+Solution: The possible causes are as follows:
+Solution: The possible causes are as follows:
+"Can't start this program because MSVCRT100.dll is missing on your computer. Try reinstalling the program to resolve the problem."
+Solution: gs_dump.exe needs to be executed to display or export DDL, which requires the Microsoft VC Runtime Library file msvcrt100.dll.
+To resolve this issue, copy the msvcrt100.dll file from the \Windows\System32 folder to the \Windows\SysWOW64 folder.
+Solution: If the Profile folder in the User Data folder is unavailable or has been manually modified, this problem may occur. Ensure that the Profile folder exists and its name meets the requirements.
+Solution: If the Profile folder in the User Data folder is lost or manually modified, this problem may occur. Ensure that the Profile folder exists and its name meets the requirements.
+Solution: This problem may occur if the Preferences file does not exist or its name has been changed. Restart Data Studio.
+Solution: All edited data will be lost. Close the Edit Data dialog box and modify the data again.
+Solution: This problem occurs if you choose Preferences > Query Results and set the column headers to be included. The selected cell also contains the column header cell. Modify the settings to disable the Include column headers option and try again.
+Answer: After the Reuse Connection option is disabled, the tool creates a new session, but temporary tables can be edited only in the existing connection. To edit temporary tables, enable the Reuse Connection option. For details, see Managing SQL Terminal Connections.
+Answer: If you add the same column multiple times in the multi-column sorting dialog box and click Apply, the following message is displayed. You need to click OK and select non-duplicate columns for sorting.
+Answer: The following message is displayed. You need to set a valid column name and click Apply again. Then, the message is not displayed.
+Answer: Canceling a table query that is being executed may cause the console to display the names of tables that are not created. In this case, you are advised to delete the table so that you can perform operations on tables with the same name.
+Solution: Perform the following steps to generate a new security key:
+Ensure that the operating system and the required software (see System Requirements for details) are updated with the latest patches to prevent vulnerabilities and other security issues.
+This section provides the security management information for Data Studio.
+The following information is critical to the security management for Data Studio:
+If the message Last login details not available is displayed, the connected database cannot display information about the last login.
+The following information is critical to manage security for Data Studio:
+The following information is critical to manage security for Data Studio:
+When running Data Studio, even in a trusted environment, users must prevent malicious software from scanning or accessing the memory used to store application data, including sensitive information.
+Alternatively, you can choose Do Not Save while connecting to the database, so that the password is not saved in memory.
+The following information is critical to manage security for Data Studio:
+You can encrypt auto-saved data by enabling the encryption option on the Preferences page. See the Query/Function/Procedure Backup section for steps to encrypt the saved data.
+The following information is critical to manage security for Data Studio:
+The information about using SSL certificates is for reference only. For details about the certificates and the security guidelines for managing the certificates and related files, see the database server documentation.
+Data Studio can connect to the database using the Secure Sockets Layer (SSL) option. Adding a Connection lists the files required.
+ +# + |
+Certificate/Key + |
+Description + |
+
---|---|---|
1 + |
+Client SSL Certificate + |
+Provided by the system/database administrator + |
+
2 + |
+Client SSL Key + |
+Provided by the system/database administrator + |
+
3 + |
+Root Certificate + |
+Provided by the system/database administrator + |
+
Perform the following steps to generate a certificate:
+Log in to SUSE Linux as user root and switch to user omm.
+Run the following commands:
+mkdir test +cd /etc/ssl+
Copy the configuration file openssl.cnf to the test directory.
+Run the following commands:
+cp openssl.cnf ~/test +cd ~/test+
Establish the CA environment under the test folder.
+Create the ./demoCA, ./demoCA/newcerts, and ./demoCA/private directories.
+Run the following commands:
+mkdir ./demoCA ./demoCA/newcerts ./demoCA/private +chmod 777 ./demoCA/private+
+Create the serial file and write 01 to it.
+Run the following command:
+echo '01'>./demoCA/serial+
Create the index.txt file.
+Run the following command:
+touch /home/omm/test/demoCA/index.txt+
Modify parameters in the openssl.cnf configuration file.
+Run the following commands:
+dir = /home/omm/test/demoCA +default_md = sha256+
The CA environment has been established.
+Run the following command:
+openssl genrsa -aes256 -out demoCA/private/cakey.pem 2048+
A 2048-bit RSA private key is generated.
+Run the following commands:
+openssl req -config openssl.cnf -new -key demoCA/private/cakey.pem -out demoCA/careq.pem+
Enter the password of demoCA/private/cakey.pem.
+Enter the private key password of user root.
+You need to enter information that will be included in your certificate request.
+The information you need to enter is a Distinguished Name (DN).
+You can leave some fields blank.
+For a field that contains a default value, enter a period (.) to leave the field blank. Enter the following information in the generated server certificate and client certificate.
+Country Name (2 letter code) [AU]:CN +State or Province Name (full name) [Some-State]:shanxi +Locality Name (eg, city) []:xian +Organization Name (eg, company) [Internet Widgits Pty Ltd]:Abc +Organizational Unit Name (eg, section) []:hello +-Common name can be any name +Common Name (eg, YOUR name) []:world +-Email is optional. +Email Address []: +A challenge password []: +An optional company name []:+
Run the following command:
+openssl ca -config openssl.cnf -out demoCA/cacert.pem -keyfile demoCA/private/cakey.pem -selfsign -infiles demoCA/careq.pem+
Use the configurations of openssl.cnf.
+Enter the password of demoCA/private/cakey.pem.
+Enter the private key password of user root.
+Check whether the request matches the signature.
+Signature ok +Certificate Details: +Serial Number: 1 (0x1) +Validity +Not Before: Feb 28 02:17:11 2017 GMT +Not After : Feb 28 02:17:11 2018 GMT +Subject: +countryName = CN +stateOrProvinceName = shanxi +organizationName = Abc +organizationalUnitName = hello +commonName = world +X509v3 extensions: +X509v3 Basic Constraints: +CA:FALSE +Netscape Comment: +OpenSSL Generated Certificate +X509v3 Subject Key Identifier: +F9:91:50:B2:42:8C:A8:D3:41:B0:E4:42:CB:C2:BE:8D:B7:8C:17:1F +X509v3 Authority Key Identifier: +keyid:F9:91:50:B2:42:8C:A8:D3:41:B0:E4:42:CB:C2:BE:8D:B7:8C:17:1F +Certificate is to be certified until Feb 28 02:17:11 2018 GMT (365 days) +Sign the certificate? [y/n]:y +1 out of 1 certificate requests certified, commit? [y/n]y +Write out database with 1 new entries +Data Base Updated+
A CA root certificate named demoCA/cacert.pem has been issued.
+Run the following command:
+openssl genrsa -aes256 -out server.key 2048+
Run the following command:
+openssl req -config openssl.cnf -new -key server.key -out server.req+
Enter the password of server.key.
+You need to enter information that will be included in your certificate request.
+The information you need to enter is a Distinguished Name (DN).
+You can leave some fields blank.
+For a field that contains a default value, enter a period (.) to leave the field blank.
+Country Name (2 letter code) [AU]:CN +State or Province Name (full name) [Some-State]:shanxi +Locality Name (eg, city) []:xian +Organization Name (eg, company) [Internet Widgits Pty Ltd]:Abc +Organizational Unit Name (eg, section) []:hello +-Common name can be any name +Common Name (eg, YOUR name) []:world +Email Address []: +-- The following information is optional. +A challenge password []: +An optional company name []:+
vi demoCA/index.txt.attr+
Issue the generated server certificate request file. After it is issued, the official server certificate server.crt is generated.
openssl ca -config openssl.cnf -in server.req -out server.crt -days 3650 -md sha256+
Use the configurations of /etc/ssl/openssl.cnf.
+Enter the password of /demoCA/private/cakey.pem.
+Check whether the request matches the signature.
Signature ok +Certificate Details: +Serial Number: 2 (0x2) +Validity +Not Before: Feb 27 10:11:12 2017 GMT +Not After : Feb 25 10:11:12 2027 GMT +Subject: +countryName = CN +stateOrProvinceName = shanxi +organizationName = Abc +organizationalUnitName = hello +commonName = world +X509v3 extensions: +X509v3 Basic Constraints: +CA:FALSE +Netscape Comment: +OpenSSL Generated Certificate +X509v3 Subject Key Identifier: +EB:D9:EE:C0:D2:14:48:AD:EB:BB:AD:B6:29:2C:6C:72:96:5C:38:35 +X509v3 Authority Key Identifier: +keyid:84:F6:A1:65:16:1F:28:8A:B7:0D:CB:7E:19:76:2A:8B:F5:2B:5C:6A +Certificate is to be certified until Feb 25 10:11:12 2027 GMT (3650 days) +-- Choose y to sign and issue the certificate. +Sign the certificate? [y/n]:y +-- Select y; the certificate signing and issuing is complete. +1 out of 1 certificate requests certified, commit? [y/n]y +Write out database with 1 new entries +Data Base Updated+
+Enable password protection for the private key: If the password protection for the server private key is not removed, you need to use gs_guc to encrypt the password.
+gs_guc encrypt -M server -K root private key password -D ./+
After the password is encrypted using gs_guc, two private key password protection files server.key.cipher and server.key.rand are generated.
+openssl genrsa -aes256 -out client.key 2048+
Generate a client certificate request file.
+openssl req -config openssl.cnf -new -key client.key -out client.req+
After the generated client certificate request file is signed and issued, the official client certificate client.crt will be generated.
openssl ca -config openssl.cnf -in client.req -out client.crt -days 3650 -md sha256+
+If METHOD is set to cert in the pg_hba.conf file of the server, the client must connect using the username (the certificate's common name) configured in the client.crt certificate file. If METHOD is set to md5 or sha256, the client does not have this restriction.
+If the password protection for the client private key is not removed, you need to use gs_guc to encrypt the password.
+gs_guc encrypt -M client -K root private key password -D ./+
After the password is encrypted using gs_guc, two private key password protection files client.key.cipher and client.key.rand are generated.
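As a sanity check before distributing the files, you can confirm that a generated certificate chains back to the CA root using standard openssl commands. The sketch below creates a throwaway CA and client certificate non-interactively so it can run standalone (subject values are illustrative); the final command is the same verification step you would apply to the server.crt and client.crt generated above:

```shell
# Create a throwaway CA root (stand-in for demoCA/cacert.pem).
openssl req -x509 -newkey rsa:2048 -nodes -keyout cakey.pem -out cacert.pem \
  -subj "/C=CN/ST=shanxi/O=Abc/OU=hello/CN=world" -days 365

# Create a client key and certificate request, then sign it with the CA.
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.req \
  -subj "/CN=client"
openssl x509 -req -in client.req -CA cacert.pem -CAkey cakey.pem \
  -CAcreateserial -out client.crt -days 365

# The verification step: "client.crt: OK" means the certificate chains to the CA.
openssl verify -CAfile cacert.pem client.crt
```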
+Default security certificates and keys required for SSL connection are configured in LibrA. Before the operation, obtain official certificates and keys for the server and client from the CA.
Conventions for configuration file names on the server:
- Certificate name: server.crt
- Key name: server.key
- Key password and encrypted files: server.key.cipher and server.key.rand

Conventions for configuration file names on the client:
- Certificate name: client.crt
- Key name: client.key
- Key password and encrypted files: client.key.cipher and client.key.rand

CA certificate:
- Certificate name: cacert.pem
- Name of the revoked certificate list file: sslcrl-file.crl
Package name: db-cert-replacement.zip
+Package format: ZIP
+Package file list: server.crt, server.key, server.key.cipher, server.key.rand, client.crt, client.key, client.key.cipher, client.key.rand, and cacert.pem
+If you need to configure the certificate revocation list (CRL), the package file list must contain sslcrl-file.crl.
+zip db-cert-replacement.zip client.crt client.key client.key.cipher client.key.rand server.crt server.key server.key.cipher server.key.rand +zip -u ../db-cert-replacement.zip cacert.pem+
+Run the following command to replace the certificate on the Coordinator (CN):
+gs_om -t cert --cert-file=/home/gaussdba/test/db-cert-replacement.zip+
Starting SSL cert files replace.
+Backing up old SSL cert files.
+Backup SSL cert files on BLR1000029898 successfully.
+Backup SSL cert files on BLR1000029896 successfully.
+Backup SSL cert files on BLR1000029897 successfully.
+Backup gds SSL cert files on successfully.
+BLR1000029898 replace SSL cert files successfully.
+BLR1000029896 replace SSL cert files successfully.
+BLR1000029897 replace SSL cert files successfully.
+Replace SSL cert files successfully.
+Distribute cert files on all coordinators successfully.
+You can run the gs_om -t cert --rollback command to roll back the certificate replacement remotely, or the gs_om -t cert --rollback -L command to roll it back locally.
+openssl pkcs8 -topk8 -inform PEM -outform DER -in Client.key -out client.pk8+
+When you select Client SSL Key in Data Studio, only a *.pk8 file can be selected; the original key file cannot. This file is not included in the downloaded certificate package, so convert the client key to PKCS#8 format using the command above.
+hostssl all all 10.18.158.95/32 cert+
+Configure one-way SSL authentication for the client on the server.
+hostssl all all 10.18.158.95/32 sha256+
You need to enter the SSL password.
+Answer: Check the following items:
+Answer: If the same SSL certificates are used by different servers, then the second connection will succeed because the certificates are cached.
+When you establish a connection with a different server using different SSL certificates, the connection will fail due to certificate mismatch.
+Answer: This problem may occur if you drop a function/procedure and recreate it. In this case, refresh the parent folder to view the function/procedure in Object Browser.
+Answer: A critical error may occur in some of the following cases. Check whether:
+Answer: Constraints are used to deny the insertion of unwanted data into columns. You can create constraints on one or more columns of any table. Constraints maintain the data integrity of the table.
+The following constraints are supported:
+Answer: An index is a copy of selected columns of a table that can be searched very efficiently. It also includes a low-level disk block address or a direct link to the complete row of data it was copied from.
+Answer: Exported, imported, and system files are encoded with the system's default encoding as configured in Settings > Preferences. The default encoding is UTF-8.
+Answer: A user cannot open multiple instances in Data Studio.
+Answer: This problem may occur if other DML/DDL operations are being performed on the same object. In this case, stop all the DML/DDL operations on the object and try again. If the problem persists, there may be another user performing DML/DDL operations on the object. Try again later. You can customize table data and check the operations in a transaction by following the instructions provided in Data Studio GUI.
+Answer: When a result set data is exported, a new connection is used to execute the query again. The exported results may be different from the data on the Result tab.
+Answer: This message is displayed when you connect to the database server of an earlier version or log in to the database for the first time after it is created.
+Answer: This problem occurs when the server returns an incorrect line number. You can view the error message on the Message tab and locate the correct row to rectify the fault.
+Answer: Yes.
+Answer: The value of -Xmx may be invalid. For details, see Installing and Configuring Data Studio.
+Answer: If the number of opened tabs reaches a certain limit (depending on your screen resolution), the icon is displayed at the end of the tab list. Click this icon and select the required tab from the drop-down list. If this icon is not available, use the tooltip to identify the tabs. You can also search for a SQL Terminal tab by its name. For example:
Answer: Sometimes the language may not reflect the selected change after restart. Manually restart Data Studio to open the tool in the selected language.
+Answer: At times the server returns an error while fetching the last login details. In such scenarios, the last login pop-up message is not displayed.
+Answer: This happens if the SQL, DDL, object names, or data contain Chinese text and the Data Studio file encoding is not set to GBK. To solve this, go to Settings > Preferences > Environment > File Encoding and set the encoding to GBK. The supported combinations of database and Data Studio encoding for export operations are shown in Table 1 Supported combinations of file encoding.
+To open/view the exported files in Windows Explorer: Files exported with UTF-8 encoding can be opened/viewed by double-clicking it or by right-clicking on the file and selecting Open. Files exported with GBK encoding must be opened in Microsoft Excel using the import external data feature (Data > Get External Data > From Text).
| Database Encoding | Data Studio File Encoding | Support for Chinese Text in Table Names | Support for English Text in Table Names |
|---|---|---|---|
| GBK | GBK | Yes | Yes |
| GBK | UTF-8 | No - Incorrect details | No - Incorrect details |
| UTF-8 | GBK | No - Export Fails | No - Incorrect details |
| UTF-8 | UTF-8 | Yes | Yes |
| UTF-8 | LATIN1 | No - Export Fails | Yes |
| SQL_ASCII | GBK | Yes | Yes |
| SQL_ASCII | UTF-8 | No - Incorrect details | No - Incorrect details |
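If you need to inspect a GBK-encoded export outside Excel, a generic iconv conversion (a standard OS tool, not a Data Studio feature; the file names below are hypothetical) can re-encode it to UTF-8:

```shell
# Stand-in for a real GBK export (ASCII bytes are valid GBK).
printf 'id,name\n1,example\n' > export_gbk.csv

# Re-encode the GBK export to UTF-8 so ordinary editors can display it.
iconv -f GBK -t UTF-8 export_gbk.csv > export_utf8.csv
cat export_utf8.csv
```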
Answer: This message occurs if the Data Studio and Database encoding selected are incompatible. To solve this, select the compatible encoding. Compatible encoding is shown in Table 2.
| Data Studio File Encoding | Database Encoding | Compatible or Not |
|---|---|---|
| UTF-8 | GBK | Yes |
| UTF-8 | LATIN1 | Yes |
| UTF-8 | SQL_ASCII | Yes |
| GBK | UTF-8 | Yes |
| GBK | LATIN1 | No |
| GBK | SQL_ASCII | Yes |
| SQL_ASCII | UTF-8 | Yes |
| SQL_ASCII | LATIN1 | Yes |
| SQL_ASCII | GBK | Yes |
+Answer: The database does not differentiate between PL/SQL functions and procedures; all procedures in the database are functions. Hence, a PL/SQL procedure is saved as a PL/SQL function.
+Answer: The database allows you to edit the distribution key only for the first insert operation.
+Answer: Yes, the database server will add the value but the value will not be visible after save in the Edit Table Data tab. Use the refresh option from the Edit Table Data tab or re-open the table again to view the added default value(s).
+Answer: This happens because there are additional rows detected for modification/deletion based on Custom Unique Key or All Columns selection. If Custom Unique Key is selected, then it will delete/modify the rows that have exact match of the data in the column selected for deletion/modification. If All Columns is selected, then it will delete/modify the rows that match data in all columns. Hence the duplicate records matching the Custom Unique Key or All Columns will be deleted/modified if Yes is selected. If No is selected, the row that is not saved will be marked for correction.
+Answer: The additional context menu options like Right to left Reading order, Show Unicode control characters and so on are provided by Windows 7 in case the keyboard you are using supports right to left and left to right input.
+Answer: The following objects are not supported for the Export DDL and Export DDL and Data operations.
+Export DDL:
+Connection, database, foreign table, sequence, column, index, constraint, partition, function/procedure group, regular tables group, views group, schemas group, and system catalog group.
+Export DDL and Data
+Connection, database, namespace, foreign table, sequence, column, index, constraint, partition, function/procedure, view, regular tables group, schemas group, and system catalog group.
+Answer: No. Queries are committed only when the COMMIT command is executed in the Terminal.
| Auto Commit | Reuse Connection | Resultset Save |
|---|---|---|
| On | On | Commit |
| On | Off | Commit |
| Off | On | Does not commit |
| Off | Off | Not supported |
Answer: When you query a temp table from a new SQL Terminal or with the Reuse Connection off, the resultset displays information of a regular/partition/foreign table, if a table with the same name as the temp table exists.
+If the Reuse Connection is On, the resultset displays information of the temp table even if another table with the same name exists.
+Answer: The following operations do not run in the background while the object is locked by another operation:
- Renaming a table
- Creating a constraint
- Setting schema on table
- Creating an index
- Setting description in table
- Adding a column
- Renaming a partition
Answer: Yes. The .xlsx format supports a maximum of 1 million rows and 16,384 columns. The .xls format supports a maximum of 64,000 rows and 256 columns.
+This section describes how to install and configure Data Studio, and how to configure servers for debugging PL/SQL Functions.
+Topics in this section include:
Setting the Location of the Created Log File
+ + + +You can run Data Studio after decompressing the installation package.
+Perform the following steps to install Data Studio:
+The following files and folders are obtained after decompression:
+The UserData folder is created when a user opens a Data Studio instance for the first time. See Getting Started to rectify any error that occurs when Data Studio is started.
+See Adding a Connection to create a database connection.
+Restart Data Studio for parameter changes to take effect. Invalid parameters added to the configuration file are ignored by Data Studio. All the following parameters are optional.
+Table 1 Configuration parameters lists the configuration parameters of Data Studio.
| Parameter | Description | Value Range | Default Value |
|---|---|---|---|
| -startup | Defines the JAR file required to load Data Studio. This information varies based on the version used. | N/A | plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar |
| --launcher.library | Specifies the library required for loading Data Studio. The library varies depending on the Data Studio version. | N/A | plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.300.v20150602-1417 or plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.300.v20150602-1417, depending on the installation package used |
| -clearPersistedState | Removes any cached content on the GUI and reloads Data Studio. Adding this parameter is recommended. | N/A | N/A |
| -consoleLineCount | Specifies the maximum number of rows to be displayed in the Messages window. | 1-5000 | 1000 |
| -logfolder | Creates a log folder; you can specify the path for saving logs. If the default value (.) is used, the folder is created in the Data Studio\UserData\Username\logs directory. For details, see Setting the Location of the Created Log File. | N/A | - |
| -loginTimeout | Specifies the waiting time for creating a connection, in seconds. Within this period, Data Studio continuously attempts to connect to the database; if the connection times out, a message indicates that the connection timed out or failed. | N/A | 180 |
| -data | Specifies the instance data location of a session. | N/A | @none |
| @user.home/MyAppWorkspace | Specifies the location where the Eclipse workspace is created when Data Studio starts. @user.home refers to C:/Users/Username. Eclipse log files are stored in @user.home/MyAppWorkspace/.metadata. | N/A | N/A |
| -detailLogging | Defines the criteria for logging error messages. True: all error messages are logged. False: only error messages explicitly specified by Data Studio are logged. For details, see Fault Logging. Not added by default; configure manually if logging is required. | True/False | False |
| -logginglevel | Creates a log file based on the specified value. If the value is out of range or empty, the default value WARN is used. For details, see Different Types of Log Levels. Not added by default; configure manually if logging is required. | FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL, OFF | WARN |
| -focusOnFirstResult | Controls auto-positioning of the Result tab. False: auto-positioning to the last opened Result tab is enabled. True: auto-positioning is disabled. | True/False | False |
| -vmargs | Specifies the starting location of VM parameters. Note: -vmargs must be located at the end of the configuration file. | N/A | N/A |
| -vm | Specifies the file name (for example, javaw.exe) and the relative path to Java. | N/A | N/A |
| -Dosgi.requiredJavaVersion | Specifies the earliest Java version required for running Data Studio. Do not change the value of this parameter. | N/A | 1.5 (Java 1.8 is recommended) |
| -Xms | Specifies the initial heap size occupied by Data Studio. The value must be a multiple of 1024, greater than 40 MB, and no larger than the value of -Xmx. Append k/K (kilobytes), m/M (megabytes), or g/G (gigabytes), for example -Xms40m or -Xms120m. For details, see the Java documentation. | N/A | -Xms40m |
| -Xmx | Specifies the maximum heap size occupied by Data Studio; adjust it based on the available RAM. Append k/K (kilobytes), m/M (megabytes), or g/G (gigabytes), for example -Xmx1200m or -Xmx1000m. For details, see the Java documentation. | N/A | -Xmx1200m |
| -OLTPVersionOldST | Configures earlier OLTP versions. Log in to gsql, run SELECT VERSION(), and use the obtained version number to update this parameter in the .ini file. | - | - |
| -OLTPVersionNewST | Configures the latest OLTP version. Log in to gsql, run SELECT VERSION(), and use the obtained version number to update this parameter in the .ini file. | - | - |
| -testability | Used to enable the testability feature in the current version. This parameter is not available by default and needs to be added manually. | True/False | False |
| -Duser.language | Specifies the language settings of Data Studio. Add this parameter after the language settings are changed. | zh/en | N/A |
| -Duser.country | Specifies the country/region settings of Data Studio. Add this parameter after the language settings are changed. | CN/IN | N/A |
| -Dorg.osgi.framework.bundle.parent=ext | Specifies the class loader used for boot delegation. | boot/app/ext | boot |
| -Dosgi.framework.extensions=org.eclipse.fx.osgi | Specifies a list of framework extension names. The framework extension is a fragment of the system bundle (org.eclipse.osgi); you can use other classes provided by this framework. | N/A | N/A |
-Dorg.osgi.framework.bundle.parent=ext
-Dosgi.framework.extensions=org.eclipse.fx.osgi
+Check whether the client is connected to the server using the IPv6 or IPv4 protocol. You can also establish the connection by configuring the following parameters in the .ini file:
+-Djava.net.preferIPv4Stack=true
+-Djava.net.preferIPv6Stack=false
+Table 2 lists the supported communication scenarios.
+The first row and first column indicate the types of nodes that attempt to communicate with each other. x indicates that the nodes can communicate with each other.
+Node + |
+V4 Only + |
+V4/V6 + |
+V6 Only + |
+
---|---|---|---|
V4 only + |
+x + |
+x + |
+No communication possible + |
+
V4/V6 + |
+x + |
+x + |
+x + |
+
V6 only + |
+No communication possible + |
+x + |
+x + |
+
For example:
+-logfolder=c:\test1
+In this example, the Data Studio.log file is created in the c:\test1\Username\logs path.
+If you do not have the permission for accessing the path specified in the Data Studio.ini file, Data Studio is closed and the following dialog box is displayed.
+The Data Studio.log file will be created in the Data Studio\UserData\Username\logs path if:
+For example, the value of -logfolder= is empty.
+For details about server logs, see the server manual.
+You can use any text editor to open and view the Data Studio.log file.
+Configure the -detailLogging parameter to determine whether errors, exceptions, and throwable stack traces are logged.
+For example, set -detailLogging to False.
+If -detailLogging is set to True, errors, exceptions, and throwable stack traces are logged.
+If -detailLogging is set to False, they are not logged.
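+Putting the two settings together, a minimal logging configuration in the Data Studio.ini file might look like the following sketch (the folder path is the illustrative one used above):
+
+```ini
+-logfolder=c:\test1
+-detailLogging=True
+```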
+The log message is described as follows:
+When the size of the Data Studio.log file reaches 10,000 KB (the maximum value), the system renames it Data Studio.log.1 and creates a new Data Studio.log file, to which the latest logs are continuously written. When Data Studio.log reaches the maximum size again, it is saved as Data Studio.log.2, and so on. This process continues until Data Studio.log.5 is created. After that, Data Studio deletes the earliest log file, Data Studio.log.1, and renames the remaining files: Data Studio.log.2 becomes Data Studio.log.1, Data Studio.log.3 becomes Data Studio.log.2, and so on, and the cycle restarts.
+To enable performance logging in server logs, enable the parameter log_min_messages and set the parameter to debug1 in the configuration file data/postgresql.conf, that is, log_min_messages = debug1.
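+The server-side setting described above corresponds to the following data/postgresql.conf fragment:
+
+```ini
+# data/postgresql.conf: enable performance logging in server logs
+log_min_messages = debug1
+```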
+Different types of log levels that are displayed in Data Studio.log are as follows:
+The logger outputs all messages equal to or greater than its log level.
+The log levels of the Log4j framework are as follows.
+ +- + |
+FATAL + |
+ERROR + |
+WARN + |
+INFO + |
+DEBUG + |
+TRACE + |
+
---|---|---|---|---|---|---|
OFF + |
+x + |
+x + |
+x + |
+x + |
+x + |
+x + |
+
FATAL + |
+√ + |
+x + |
+x + |
+x + |
+x + |
+x + |
+
ERROR + |
+√ + |
+√ + |
+x + |
+x + |
+x + |
+x + |
+
WARN + |
+√ + |
+√ + |
+√ + |
+x + |
+x + |
+x + |
+
INFO + |
+√ + |
+√ + |
+√ + |
+√ + |
+x + |
+x + |
+
DEBUG + |
+√ + |
+√ + |
+√ + |
+√ + |
+√ + |
+x + |
+
TRACE + |
+√ + |
+√ + |
+√ + |
+√ + |
+√ + |
+√ + |
+
ALL + |
+√ + |
+√ + |
+√ + |
+√ + |
+√ + |
+√ + |
+
√ - The message is logged. x - The message is not logged. + |
+
This section describes how to start Data Studio.
+The StartDataStudio.bat batch file checks the versions of the operating system (OS), Java, and Data Studio before starting Data Studio.
+Based on the installed OS, Java, and Data Studio versions, the batch file either launches Data Studio or displays an appropriate message.
+If the installed Java version is earlier than 1.8, an error message is displayed.
+The following table lists the scenarios checked by the batch file to confirm the required OS and Java versions for Data Studio.
+ +DS Installation (32/64bit) + |
+OS (bit) + |
+Java (bit) + |
+Outcome + |
+
---|---|---|---|
32 + |
+32 + |
+32 + |
+Launches Data Studio + |
+
32 + |
+64 + |
+32 + |
+Launches Data Studio + |
+
32 + |
+64 + |
+64 + |
+Error message is displayed + |
+
64 + |
+32 + |
+32 + |
+Error message is displayed + |
+
64 + |
+64 + |
+32 + |
+Error message is displayed + |
+
64 + |
+64 + |
+64 + |
+Launches Data Studio + |
+
This section describes the Data Studio GUI.
+The Data Studio GUI contains the following:
+Data Studio provides options to show or export sequence DDL: Show DDL, Export DDL, and Export DDL and Data.
+Follow these steps to access the feature:
+Alternatively, select the Export DDL menu option to export DDL statements.
+Alternatively, select the Export DDL and Data menu option to export DDL statements and the SELECT statement.
+Refer to the following image:
+This operation can be performed only by the sequence owner, a system administrator, or a user with the SELECT privilege on the sequence.
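+As an illustration, the DDL shown or exported for a sequence is a CREATE SEQUENCE statement along the following lines (the sequence name and values are hypothetical):
+
+```sql
+CREATE SEQUENCE public.order_id_seq
+    START WITH 1
+    INCREMENT BY 1
+    MINVALUE 1
+    MAXVALUE 9223372036854775807
+    CACHE 1;
+```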
+The File menu contains database connection options. Click File in the main menu or press Alt+F to open the File menu.
+ +Function + |
+Button + |
+Shortcut Key + |
+Description + |
+
---|---|---|---|
Creating a connection + |
+Ctrl+N + |
+Creates a database connection in the Object Browser and SQL Terminal tabs. + |
+|
Deleting a connection + |
+- + |
+Deletes the selected database connection from Object Browser. + |
+|
Opening a connection + |
+- + |
+Connects to the database. + |
+|
Disconnecting from the database + |
+Ctrl+Shift+D + |
+Disconnects from the specified database. + |
+|
Disconnecting all connections + |
+- + |
+Disconnects all the databases of a specified connection. + |
+|
Opening + |
+Ctrl+O + |
+Loads SQL queries in SQL Terminal. + |
+|
Saving SQL scripts + |
+Ctrl+S + |
+Saves the SQL scripts of the SQL Terminal to a SQL file. + |
+|
Saving SQL scripts to a new file + |
+CTRL+ALT+S + |
+Saves the SQL scripts in SQL Terminal to a new SQL file. + |
+|
Exiting + |
+- + |
+Alt+F4 + |
+Exits from Data Studio and disconnects from the database. + NOTE:
+Any unsaved changes will be lost. + |
+
Perform the following steps to stop Data Studio:
+Alternatively, choose File > Exit.
+The Exit Application dialog box is displayed prompting you to take the required actions.
+If you click Force Exit, the SQL execution history that is not saved may be lost.
+The Edit menu contains clipboard, Format, Find and Replace, and Search Objects operations to use in the PL/SQL Viewer and SQL Terminal tab. Press Alt+E to open the Edit menu.
+ +Function + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Cut + |
+Ctrl+X + |
+Cuts the selected text. + |
+
Copy + |
+Ctrl+C + |
+Copies the selected text or object name. + |
+
Paste + |
+Ctrl+V + |
+Pastes the selected text or object name. + |
+
Format + |
+Ctrl+Shift+F + |
+Formats all SQL statements and functions/procedures. + |
+
Select all + |
+Ctrl+A + |
+Selects all the text in SQL Terminal. + |
+
Find and replace + |
+Ctrl+F + |
+Finds and replaces text in SQL Terminal. + |
+
Search for objects + |
+Ctrl+Shift+S + |
+Searches for objects within a connected database. + |
+
Undo + |
+Ctrl+Z + |
+Undoes the previous operation. + |
+
Redo + |
+Ctrl+Y + |
+Redoes the previous operation. + |
+
Uppercase + |
+Ctrl+Shift+U + |
+Changes the selected text to uppercase. + |
+
Lowercase + |
+Ctrl+Shift+L + |
+Changes the selected text to lowercase. + |
+
Go to row + |
+Ctrl+G + |
+Redirects to a specific row in SQL Terminal or PL/SQL Viewer. + |
+
Comment/Uncomment lines + |
+Ctrl+/ + |
+Comments or uncomments all selected rows. + |
+
Comment/Uncomment blocks + |
+Ctrl+Shift+/ + |
+Comments or uncomments all selected rows or blocks. + |
+
You can choose Search Objects to search for objects from Object Browser based on the search criteria. Specifically, you can choose Edit > Search Objects or click in the Object Browser toolbar. The search result is displayed in a tree structure, similar to that in Object Browser. Operations in the right-click menu, except for Refresh, can be performed on objects in the search result. After the page is refreshed, objects that have been deleted or renamed or whose schemas have been set can be viewed only from the primary object browser. Right-click options on group names, such as tables, schemas, and views, cannot be performed on objects in the search result. Only objects that you have the permission to access will be displayed in Search Scope.
You can view newly added objects in the Search window by clicking Refresh at the end of the object type.
+Supported search options
+ +Search Option + |
+Search Behavior + |
+
---|---|
Contains + |
+Text that contains the searched content will be displayed. + |
+
Starts With + |
+Text that starts with the searched content will be displayed. + |
+
Exact Word + |
+Text that matches exactly with the searched content will be displayed. + |
+
Regular Expression + |
+When regular expression text is used, text that meets the search criteria is searched for in Object Browser. Select Regular Expression from the Search Criteria drop-down list. For more information, see POSIX Regular Expressions rules. +Examples are as follows: +
|
+
Search with underscores (_) or percentage (%)
+ +Search Value + |
+Search Behavior + |
+
---|---|
_ + |
+The underscore (_) in text is considered the wildcard of a single character. Search criteria such as Regular Expression, Starts With, and Exact Word are not applicable to text that contains underscores (_). +Examples are as follows: +
|
+
% + |
+The percentage (%) in text is considered the wildcard of multiple characters. Search criteria such as Regular Expression, Starts With, and Exact Word are not applicable to text that contains percentage (%). +Examples are as follows: +
|
+
If you select Match Case and perform the search, the system searches for the content that matches the case of the search text.
+The Run menu contains options of performing a database operation in the PL/SQL Viewer tab and executing SQL statements in the SQL Terminal tab. Press Alt+R to open the Run menu.
+ +Function + |
+Button + |
+Shortcut Key + |
+Description + |
+
---|---|---|---|
Executing the specified function/procedure + |
+Ctrl+E + |
+Starts to execute the specified function/procedure in normal mode. +Displays the result in the Result tab. +Displays information about the actions performed in the Messages tab. + |
+|
Compiling/Executing a statement + |
+Ctrl+Enter + |
+Compiles a function/procedure. +Executes SQL statements in SQL Terminal. + |
+|
Compiling/Executing statements in a new tab + |
+Ctrl+Alt+Enter + |
+Retains the current tab and executes statements in a new tab. +This function is disabled if Retain Current is selected. + |
+|
Canceling the query + + |
+Shift+Esc + |
+Cancels the query that is being executed. +Displays the result in the Result tab. +Displays information about the actions performed in the Messages tab. + |
+
The Debug menu contains debugging operations in the PL/SQL Viewer and SQL Terminal tabs. Press Alt+D to open the Debug menu.
+ +Function + |
+Button + |
+Shortcut Key + |
+Description + |
+
---|---|---|---|
Debugging + |
+Ctrl+D + |
+Starts the debugging process. + |
+|
Proceeding + |
+F9 + |
+Continues the debugging. + |
+|
Terminating + |
+F10 + |
+Terminates the debugging. + |
+|
Step Into + |
+F7 + |
+Steps into the debugging process. + |
+|
Step Over + |
+F8 + |
+Steps over the debugging process. + |
+|
Step Out + |
+Shift+F7 + |
+Steps out of the debugging process. + |
+
The Settings menu contains the option of changing the language. Press Alt+G to open the Settings menu.
+ +Function + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Language + |
+- + |
+Sets the language for the Data Studio GUI. + |
+
Preferences + |
+- + |
+Sets the user preferences in Data Studio. + |
+
The Help menu contains the user manual and version information of Data Studio. Press Alt+H to open the Help menu.
+ +Function + |
+Shortcut Key + |
+Description + |
+
---|---|---|
User manual + |
+F1 + |
+Opens the user manual of Data Studio. + |
+
About Data Studio + |
+- + |
+Displays the current version and copyright information of Data Studio. + |
+
The following figure shows the Data Studio Toolbar.
+The toolbar contains the following operations:
+This section describes the right-click menus of Data Studio.
+The following figure shows the Object Browser pane.
+Right-clicking a connection name allows you to select Rename Connection, Edit Connection, Remove Connection, Properties, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Rename Connection + |
+- + |
+Renames a connection. + |
+
Edit Connection + |
+- + |
+Modifies connection details. + |
+
Remove Connection + |
+- + |
+Removes the existing database connection. + |
+
Properties + |
+- + |
+Shows the details of a connection. + |
+
Refresh + |
+F5 + |
+Refreshes a connection. + |
+
Right-clicking the Databases tab allows you to select Create Database, Disconnect All, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Create Database + |
+- + |
+Creates a database of this connection. + |
+
Disconnect All + |
+- + |
+Disconnects all the databases of this connection. + |
+
Refresh + |
+F5 + |
+Refreshes a database group. + |
+
Right-clicking an active database allows you to select Disconnect from DB, Open Terminal, Properties, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Disconnect from DB + |
+Ctrl+Shift+D + |
+Disconnects from a database. + |
+
Open Terminal + |
+Ctrl+T + |
+Opens a terminal of this connection. + |
+
Properties + |
+- + |
+Displays the properties of a database. + |
+
Refresh + |
+F5 + |
+Refreshes a database. + |
+
Right-clicking an inactive database allows you to select Connect to DB, Rename Database, and Drop Database options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Connect to DB + |
+- + |
+Connects to a database. + |
+
Rename Database + |
+- + |
+Renames a database. + |
+
Drop Database + |
+- + |
+Drops a database. + |
+
Right-clicking the Catalogs tab allows you to select the Refresh option.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Refresh + |
+F5 + |
+Refreshes a function/procedure. + |
+
Right-clicking the Schemas tab allows you to select Create Schema, Grant/Revoke, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Create Schema + |
+- + |
+Creates a schema. + |
+
Grant/Revoke + |
+- + |
+Grants or revokes permissions on a schema group. + |
+
Refresh + |
+F5 + |
+Refreshes a schema. + |
+
Right-clicking a schema allows you to select Export DDL, Export DDL and Data, Rename Schema, Drop Schema, Grant/Revoke, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Export DDL + |
+- + |
+Exports DDL of a schema. + |
+
Export DDL and Data + |
+- + |
+Exports DDL and data of a schema. + |
+
Rename Schema + |
+- + |
+Renames a schema. + |
+
Drop Schema + |
+- + |
+Drops a schema. + |
+
Grant/Revoke + |
+- + |
+Grants or revokes permissions on a schema. + |
+
Refresh + |
+F5 + |
+Refreshes a schema. + |
+
Right-clicking Functions/Procedures allows you to select Create PL/SQL Function, Create PL/SQL Procedure, Create SQL Function, Create C Function, Grant/Revoke, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Create PL/SQL Function + |
+- + |
+Creates a PL/SQL function. + |
+
Create PL/SQL Procedure + |
+- + |
+Creates a PL/SQL procedure. + |
+
Create SQL Function + |
+- + |
+Creates a SQL function. + |
+
Create C Function + |
+- + |
+Creates a C function. + |
+
Grant/Revoke + |
+- + |
+Grants or revokes permissions on a function/procedure. + |
+
Refresh + |
+F5 + |
+Refreshes a function/procedure. + |
+
Right-clicking Tables allows you to select Create table, Create partitioned table, Grant/Revoke, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Create table + |
+- + |
+Creates an ordinary table. + |
+
Create partitioned table + |
+- + |
+Creates a partitioned table. + |
+
Grant/Revoke + |
+- + |
+Grants or revokes permissions on a table. + |
+
Refresh + |
+F5 + |
+Refreshes a table. + |
+
Right-clicking Views allows you to select Create View, Grant/Revoke, and Refresh options.
+ +Menu Item + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Create View + |
+- + |
+Creates a view. + |
+
Grant/Revoke + |
+- + |
+Grants or revokes permissions on a view. + |
+
Refresh + |
+F5 + |
+Refreshes a view. + |
+
Right-clicking the PL/SQL Viewer tab allows you to select Cut, Copy, Paste, Select All, Comment/Uncomment Lines, Comment/Uncomment Block, Compile, Execute, Add Variable To Monitor, Debug with Rollback, and Debug options.
+ +Right-Click Option + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Cut, Copy, Paste + |
+Ctrl+X, Ctrl+C, Ctrl+V + |
+Specifies clipboard operations. + |
+
Select All + |
+Ctrl+A + |
+Selects options in the PL/SQL Viewer tab. + |
+
Comment/Uncomment Lines + |
+- + |
+Comments or uncomments all selected rows. + |
+
Comment/Uncomment Block + |
+- + |
+Comments or uncomments all selected rows or blocks. + |
+
Compile + |
+- + |
+Compiles a function/procedure. + |
+
Execute + |
+- + |
+Executes a function/procedure. + |
+
Add Variable To Monitor + |
+- + |
+Adds variables to the monitor window. + |
+
Debug with Rollback + |
+- + |
+Debugs a function/procedure and rolls back the changes after the debugging is complete. + |
+
Debug + |
+- + |
+Debugs a function/procedure. + |
+
Right-clicking the SQL Terminal tab allows you to select Cut, Copy, Paste, Select All, Execute Statement, Open, Save, Find and Replace, Execution Plan, Comment/Uncomment Lines, Save As, Format , and Cancel options.
+ +Right-Click Option + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Cut, Copy, Paste + |
+Ctrl+X, Ctrl+C, Ctrl+V + |
+Specifies clipboard operations. + |
+
Select All + |
+- + |
+Selects all text. + |
+
Execute Statement + |
+- + |
+Executes a query. + |
+
Open + |
+- + |
+Opens a file. + |
+
Save + |
+- + |
+Saves a query. + |
+
Find and Replace + |
+- + |
+Finds and replaces text in the SQL Terminal tab. + |
+
Execution Plan + |
+- + |
+Displays the execution plan of a query. + |
+
Comment/Uncomment Lines + |
+Ctrl+/ + |
+Comments or uncomments all selected rows. + |
+
Comment/Uncomment Block + |
+Ctrl+Shift+/ + |
+Comments or uncomments all selected rows or blocks. + |
+
Cancel + |
+- + |
+Cancels the execution. + |
+
Save As + |
+CTRL+ALT+S + |
+Saves the query to a new file. + |
+
Format + |
+CTRL+SHIFT+F + |
+Formats the selected SQL statements using the rules configured in the query. + |
+
Right-clicking the Messages tab allows you to select Copy, Select All, and Clear options.
+ +Right-Click Option + |
+Shortcut Key + |
+Description + |
+
---|---|---|
Copy + |
+Ctrl+C + |
+Copies the text. + |
+
Select All + |
+Ctrl+A + |
+Selects all text. + |
+
Clear + |
+- + |
+Clears the text. + |
+
When Data Studio is started, the New Database Connection dialog box is displayed by default. To perform database operations, Data Studio must be connected to at least one database.
+Enter the connection parameters to create a connection between Data Studio and a database server. Hover the mouse cursor over the connection name to view the database information.
+You need to fill in all mandatory parameters that are marked with asterisks (*).
+Perform the following steps to create a database connection.
+Alternatively, click on the toolbar, or press Ctrl+N to connect to the database. The New Database Connection dialog box is displayed.
If the preference file is damaged or the preference settings are invalid during connection creation, an error message will be displayed indicating that the preferred value is invalid and prompting you to restore the default preference settings. Click OK to complete the operation of creating a database connection.
+The server information will be displayed only after the connection succeeds.
+If the password or key for any of the existing connections is damaged, you need to enter the password for whichever connection you use.
+SSL Mode + |
+Description + |
+
---|---|
require + |
+The certificate will not be verified as the used SSL factory does not need to be verified. + |
+
verify-ca + |
+The certificate authority (CA) will be verified using the corresponding SSL factory. + |
+
verify-full + |
+The CA and database will be verified using the corresponding SSL factory. + |
+
The default value is Objects allowed as per user privilege.
+The status of the completed operation is displayed in the status bar.
+When Data Studio is connecting to the database, the connection status is displayed as follows:
+Once the connection is created, all schemas will be displayed in the Object Browser pane.
+Perform the following steps to cancel the connection:
+The Cancel Connection dialog box is displayed.
+A confirmation dialog box is displayed.
+The lazy loading feature allows objects to be loaded only when you need them.
+When you connect to a database, only the child objects of the schemas in search_path are loaded, as shown in the following figure.
+Unloaded schemas are displayed as Schema name (...).
+To load child objects, expand the schema. You will see that the objects under the schema are loading.
+If you try to load an unloaded object while another object is being loaded, a pop-up message is displayed indicating that another object is being loaded. The loading indicator next to the unloaded object will disappear, and will be displayed again when you refresh the object or database level to load the object.
Expand a schema to load and view the child objects. You can load child objects of only one schema at a time in Object Browser.
+If you modify search_path after creating a connection, the modification takes effect only after the database is reconnected. The Auto Suggest feature applies to keywords, data types, schema names, table names, views, and table aliases of all schema objects that you have permission to access.
+A maximum of 50,000 objects will be loaded in the Object Browser pane within one minute.
+The database connection timeout interval defaults to 3 minutes (180 seconds). If the connection is not established within this interval, a timeout error is displayed.
+You can set the loginTimeout value in the Data Studio.ini file located in the Data Studio\ directory.
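+For example, to raise the timeout from the 180-second default to 5 minutes, the loginTimeout value in the Data Studio.ini file could be set as in the following sketch (the exact entry format should be confirmed against your installation):
+
+```ini
+-loginTimeout=300
+```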
+When you log in to Data Studio, pg_catalog is loaded automatically.
+Perform the following steps to rename a database connection.
+A Rename Connection dialog box is displayed prompting you to enter the new connection name.
+The new connection name must be unique. Otherwise, the rename operation will fail.
+Perform the following steps to edit the properties of a database connection.
+To edit an active connection, you must close the connection and then reopen it with the new properties. A warning message about connection resetting is displayed.
+The Edit Connection dialog box is displayed.
+The Connection Name cannot be modified.
+If SSL is not enabled, a Connection Security Alert dialog box is displayed.
+If you select Do not show again, the Connection Security Alert dialog box is not displayed for subsequent connections in the current Data Studio session.
+A dialog box is displayed asking you to confirm whether to delete the database whose connection has been edited.
+The status of the completed operation is displayed in the status bar.
+Follow the steps below to remove an existing database connection:
+A confirmation dialog box is displayed to remove the connection.
+The status bar displays the status of the completed operation.
+This action will remove the connection from the Object Browser. Any unsaved data will be lost.
+Follow the steps below to view the properties of a connection:
+The status bar displays the status of the completed operation.
+The properties of the selected connection are displayed.
+If the properties of an already open connection are modified, reopen the Properties window to view the updated information.
+Perform the following steps to refresh a database connection.
+The status of the completed operation is displayed in the status bar.
+The time taken to refresh a database depends on the number of objects in the database. Therefore, perform this operation as required on large databases.
+If a stored procedure has been deleted from the database before the refresh operation, this stored procedure will be deleted from Object Browser only when the refresh operation is performed.
+Data Studio allows you to export or import connection details from the connection dialog for future reference.
+The following parameters can be exported:
+Perform the following steps to import or export a connection configuration file:
+The following window is displayed:
+The Export Connection Profiles dialog box is displayed. You can select the connections to be exported in this dialog box.
+Select the connections you want to export and enter the name of the file where the exported connections will be saved. Click OK.
+Select the location where you want to save the file and click OK.
+The following dialog box is displayed after the connections are exported.
+If the connections to be imported match the existing ones, a dialog box is displayed as follows.
+Click any of the preceding options as required and click OK.
+Password and SSL password parameters will not be exported.
+A relational database is a database with a set of tables that are manipulated in accordance with the relational model of data. It contains a set of data objects used to store, manage, and access data, such as tables, views, indexes, and functions.
+Follow the steps below to create a database:
+This operation can be performed only when there is at least one active database.
+A Create Database dialog box is displayed prompting you to provide the necessary information to create the database.
+The database supports UTF-8, GBK, SQL_ASCII, and LATIN1 types of encoding character sets. Creating the database with other encoding character sets may result in erroneous operations.
+The status bar displays the status of the completed operation.
+You can view the created database in the Object Browser. The system related schema present in the server is automatically added to the new database.
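+The dialog-driven creation corresponds to a SQL statement like the following sketch (the database name is hypothetical; as noted above, only the UTF-8, GBK, SQL_ASCII, and LATIN1 encodings are supported):
+
+```sql
+CREATE DATABASE sales_db
+    ENCODING 'UTF8';
+```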
+Data Studio allows you to log in even if the password has expired. A message informs you that some operations may not work as expected when no other database is connected in that connection profile. Refer to Password Expiry for information about changing this behavior.
+You can disconnect all the databases from a connection.
+Follow the steps below to disconnect a connection from the database:
+This operation can be performed only when there is at least one active database.
+A confirmation dialog box is displayed to disconnect all databases for the connection.
+The status bar displays the status of the completed operation.
+Data Studio populates all the connection parameters (except password) that were provided during the last successful connection with the database. To reconnect, you need to enter only the password in the connection wizard.
+You can connect to the database.
+Follow the steps below to connect a database:
+This operation can be performed only on an inactive database.
+The database is connected.
+The status bar displays the status of the completed operation.
+You can disconnect the database.
+Follow the steps below to disconnect a database:
+This operation can be performed only on an active database.
+A confirmation dialog box is displayed to disconnect database.
+The database is disconnected.
+The status bar displays the status of the completed operation.
+Follow the steps below to rename a database:
+This operation can be performed only on an inactive database.
+A Rename Database dialog box is displayed prompting you to provide the necessary information to rename the database.
+A confirmation dialog box is displayed to rename the database.
+The status bar displays the status of the completed operation.
+You can view the renamed database in the Object Browser.
+ +You can drop databases individually or in batches. Refer to the Batch Dropping Objects section for batch dropping.
+Follow the steps below to drop a database:
+This operation can be performed only on an inactive database.
+A confirmation dialog box is displayed to drop the database.
+A popup message and the status bar display the status of the completed operation.
+Follow the steps below to view the properties of a database:
+This operation can be performed only on an active database.
+The status bar displays the status of the completed operation.
+The properties of the selected database are displayed.
+If the properties of an already open database are modified, refresh and reopen the Properties window to view the updated information.
+This section describes working with database schemas. All system schemas are grouped under Catalogs and user schemas under Schemas.
+In relational database technology, schemas provide a logical classification of objects in the database. Some of the objects that a schema may contain include functions/procedures, tables, sequences, views, and indexes.
+Follow the steps below to define a schema:
+Only refresh can be performed on Catalogs group.
+You can view the new schema in the Object Browser pane.
+The status bar displays the status of the completed operation.
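+Creating a schema through the dialog is equivalent to a statement such as the following (the schema name is hypothetical):
+
+```sql
+CREATE SCHEMA my_schema;
+```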
+You can perform the following actions on a schema:
+Data Studio displays the default schema of the user in the toolbar.
+When a CREATE query that does not specify a schema name is executed from SQL Terminal, the corresponding objects are created under the user's default schema.
+When a SELECT query is executed in SQL Terminal without specifying a schema name, the default schemas are searched for these objects.
+When Data Studio starts, the default schemas are set to <username> and public with the same priority.
+If another schema is selected in the drop-down list, it becomes the default schema, overriding the previous setting.
+The selected schema is set as the default schema for all active connections of the database (selected in the database list drop-down).
+This feature is not available for OLTP databases.
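+The default-schema behavior described above mirrors the database search_path, which can be inspected and changed directly in SQL Terminal (the schema name is hypothetical):
+
+```sql
+SHOW search_path;
+SET search_path TO my_schema, public;
+```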
+You can export the schema DDL to export the DDL of functions/procedures, tables, sequences, and views of the schema.
+Perform the following steps to export the schema DDL:
+The Data Studio Security Disclaimer dialog box is displayed. You can close the dialog box. For details, see Security Disclaimer.
+The Save As dialog box is displayed.
+The Data Exported Successfully dialog box and status bar display the status of the completed operation.
+ +Database Encoding + |
+File Encoding + |
+Support for Exporting DDL + |
+
---|---|---|
UTF-8 + |
+UTF-8 + |
+Yes + |
+
GBK + |
+Yes + |
+|
LATIN1 + |
+Yes + |
+|
GBK + |
+GBK + |
+Yes + |
+
UTF-8 + |
+Yes + |
+|
LATIN1 + |
+No + |
+|
LATIN1 + |
+LATIN1 + |
+Yes + |
+
GBK + |
+No + |
+|
UTF-8 + |
+Yes + |
+
You can select multiple objects and export their DDL. Batch Export lists the objects whose DDL cannot be exported.
+The exported schema DDL and data include the following:
+Perform the following steps to export the schema DDL and data:
+The Data Studio Security Disclaimer dialog box is displayed.
+You can close the dialog box. For details, see Security Disclaimer.
+The Save As dialog box is displayed.
+The Data Exported Successfully dialog box and status bar display the status of the completed operation.
+ +Database Encoding + |
+File Encoding + |
+Support for Exporting DDL + |
+
---|---|---|
UTF-8 + |
+UTF-8 + |
+Yes + |
+
GBK + |
+Yes + |
+|
LATIN1 + |
+Yes + |
+|
GBK + |
+GBK + |
+Yes + |
+
UTF-8 + |
+Yes + |
+|
LATIN1 + |
+No + |
+|
LATIN1 + |
+LATIN1 + |
+Yes + |
+
GBK + |
+No + |
+|
UTF-8 + |
+Yes + |
+
You can select multiple objects and export their DDL and data. Batch Export lists the objects whose DDL and data cannot be exported.
+Follow the steps to rename a schema:
+You can view the renamed schema in the Object Browser.
+The status bar displays the status of the completed operation.
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog is displayed.
+In SQL Preview tab, you can view the SQL query automatically generated for the inputs provided.
+Individual or batch dropping can be performed on schemas. Refer to Batch Dropping Objects section for batch dropping.
+Follow the steps below to drop a schema:
+A confirmation dialog to drop the schema is displayed.
+A popup message and the status bar display the status of the completed operation.
+Perform the following steps to create a function/procedure and SQL function:
+The selected template is displayed in the new tab of Data Studio.
+The Created function/procedure Successfully dialog box is displayed, and the new function/procedure is displayed under the Object Browser pane. Click OK to close the NewObject() tab and add the debugging object to Object Browser.
+Refer to Executing SQL Queries for information on reconnection options if connection is lost during execution.
+Refresh Object Browser by pressing F5 to view the newly added debugging object.
+When a user creates a PL/SQL object from the template or by editing an existing PL/SQL object, the created PL/SQL object will be displayed in a new tab page.
+Perform the following steps to compile a created function:
+The function is displayed in a new tab page.
+Perform the following steps to edit a function/procedure or SQL function:
+The selected function/procedure or SQL function is displayed in the PL/SQL Viewer tab page.
+If multiple functions/procedures or SQL functions have the same schema, name, and input parameters, only one of them can be opened at a time.
+If you execute the function/procedure or SQL function before compilation, the Source Code Change dialog box is displayed.
+The status of the completed operation is displayed in the Message tab page.
+Refer to Executing SQL Queries for information on reconnect option in case connection is lost during execution.
+Perform the following steps to grant or revoke a permission:
+The Grant/Revoke dialog box is displayed.
+The Privilege Selection tab is displayed.
+The SQL Preview tab displays the SQL query automatically generated after the preceding operations.
+This feature is only supported in online analytical processing (OLAP), not in online transaction processing (OLTP).
+This section provides you with details on working with functions/procedures and SQL functions in Data Studio.
Data Studio supports the PL/pgSQL and SQL languages for the operations listed as follows:
+ +During debugging, if the connection is lost but the database remains connected to Object Browser, the Connection Error dialog box is displayed with the following options:
SQL functions cannot be debugged.
+Topics in this section include:
+A breakpoint is used to stop a PL/SQL program on the line where the breakpoint is set. You can use breakpoints to control the execution and debug the procedure.
+When you run a PL/SQL program, the execution stops on each line with a breakpoint set. In this case, Data Studio retrieves information about the current program state, such as the values of the program variables.
+Perform the following steps to debug a PL/SQL program:
+When a line with a breakpoint set is reached, monitor the program state in the Debugging pane, and continue to execute the program.
+Data Studio provides debugging options in the toolbar that help you step through the debugging objects.
+You can view and manage all breakpoints in the Breakpoints pane. Click the breakpoint option at the minimized pane to open the Breakpoints pane.
+The Breakpoints pane lists all lines with a breakpoint set and the debugging object names.
You can enable or disable all the breakpoints by clicking the corresponding toggle button in the Breakpoints pane. You can also select the check box of a specific breakpoint in the Breakpoints pane and click the enable, disable, or remove button to enable, disable, or remove that breakpoint.
In the PL/SQL Viewer pane, double-click the required breakpoint in the Breakpoint Info column to locate the breakpoint.
+Perform the following steps to set or add a breakpoint on a line:
+If the function is not interrupted or stopped during debugging, the breakpoint set for the function will not be validated.
Once a breakpoint is set, you can temporarily disable it by selecting the corresponding check box in the Breakpoints pane and clicking the disable button at the top of the Breakpoints pane. A disabled breakpoint is grayed out in the PL/SQL Viewer and Breakpoints panes. To enable a disabled breakpoint, select the corresponding check box and click the enable button.
You can remove an unused breakpoint using the same method as that for creating a breakpoint.
+In the PL/SQL Viewer pane, open the function in which you want to remove the breakpoint. Double-click in PL/SQL Viewer to remove the breakpoint.
You can also enable or disable breakpoints in PL/SQL Viewer using the preceding method.
+If you debug an object after changing the source code obtained from the server, Data Studio displays an error.
+You are advised to refresh the object and debug it again.
+If you change the source code obtained from the server and execute or debug the source code without setting a breakpoint, the result of the source code obtained from the server will be displayed on Data Studio. You are advised to refresh the source code before executing or debugging it.
+Perform the following steps to debug a PL/SQL program using a breakpoint:
+An example is as follows:
+Lines 11, 12, 13
+If no parameter is entered, the Debug Function/Procedure dialog box will not be displayed.
+To set the parameter to NULL, enter NULL or null.
After clicking Debug, an arrow points to the line where the breakpoint is set. This line is the first line where the execution resumes.
You can terminate debugging by clicking the terminate button in the toolbar, pressing F10, or selecting Terminate Debugging in the Debug menu. After debugging is terminated, the function execution proceeds and will not be stopped at any breakpoint.
Relevant information will be displayed in Callstack and Variables panes.
+The Variables pane shows the current values of variables. If you hover over the variable of a function/procedure, the current value is also displayed.
+You can step through the code using Step Into, Step Out or Step Over. For details, see Controlling Execution.
+Perform the following operations to remove a breakpoint:
+You can arrange the Variables pane and its columns to the following positions:
+When debugging is complete, the Variables pane will be minimized regardless of its position. If the Variables pane is moved next to the SQL Terminal or Result tab, you need to minimize the pane after debugging is complete. The position of the Variables pane remains unchanged after it is rearranged.
+System variables are displayed by default in the Variables pane. You can disable the display of system variables if necessary.
+The button is toggled on by default.
+When a PL/SQL function or procedure is debugged or executed, the same parameter values are used for the next debugging or execution.
+When a PL/SQL object is executed, the following window is displayed.
+The Value column is empty upon the first execution. Enter the values as required.
+Click OK. The parameter values will be cached. The cached parameter values will be displayed in the next execution or debugging.
+Once a specific connection is removed, all the cached parameter values are cleared.
+Data Studio displays the variables which are being monitored in the Monitor pane during debugging.
+In the Monitor pane, add a variable in the following ways:
+If the variable is monitored, its value in the Monitor pane will always be the same as that in the Variables pane.
+The Monitor pane can be dragged to anywhere in the Data Studio window.
+When debugging a PL/SQL function in Data Studio, you can hover over a variable to view its information.
+Data Studio allows committing or rolling back the PL/SQL query result after debugging is complete.
+Perform the following steps to enable the rollback function:
+Or
+Right-click the SQL Terminal pane where the PL/SQL function is executed.
+Select Debug With Rollback to enable the rollback function after the debugging is complete.
+Or
+Right-click any PL/SQL function under Functions/Procedures in Object Browser.
+Topics in this section include:
Select the function that you want to debug in the Object Browser pane. Start debugging by clicking the debug button on the toolbar or using any other method mentioned in previous sections. If no breakpoint is set or the breakpoints set are invalid, the debugging operation is performed without stopping at any statement, and Data Studio will simply execute the object and display the results (if any).
You can run the command for single stepping in the toolbar to debug a function. This allows you to debug the program line by line. When a breakpoint occurs during the operation of single stepping, the operation will be suspended and the program will be stopped.
+Single stepping means executing one statement at a time. Once a statement is executed, you can see the execution result in other debugging tabs.
+A maximum of 100 PL/SQL Viewer tabs can be displayed at a time. If more than 100 tabs are opened, the tabs of function calls will be closed. For example, if 100 tabs are opened and a new debugging object is called, Data Studio will close the tabs of function calls and open the tab of the new debugging object.
To execute the code statement by statement, select Step Into from the Debug menu, click the Step Into button, or press F7.
When stepping into a function, Data Studio executes the current statement and then enters the debugging mode. The debugged line will be indicated by an arrow on the left. If the executed statement calls another function, Data Studio will step into that function. Once you have stepped through all the statements in that function, Data Studio jumps to the next statement of the function it was called from.
Press F7 to go to the next statement. If you click Continue, PL/SQL code execution will continue.
+An example is as follows:
+When entering line 8, enter m := F3_TEST();. That is, go to line 9 in f3_test(). You can step through all the statements in f3_test() by pressing F7 repeatedly to step into each line. Once you have stepped through all the statements in that function, Data Studio jumps to line 10 in f2_test().
The currently debugged object is marked with a symbol in its tab title, which indicates the function name.
Stepping over is the same as Stepping into, except that when it reaches a call for another function, it will not step into the function. The function will run, and you will be brought to the next statement in the current function. F8 is the shortcut key for Step Over. However, if there is a breakpoint set inside the called function, Step Over will enter the function, and hit the set breakpoint.
+In the following example, when you click Step Over in line 10, Data Studio executes the f3_test() function.
+The cursor will be moved to the next statement in f2_test(), that is, line 11 in f2_test().
+You can step over a function when you are familiar with the way the function works and ensure that its execution will not affect the debugging.
+Stepping over a line of code that does not contain a function call executes the line just like stepping into the line.
+Stepping out of a sub-program continues execution of the function and then suspends execution after the function returns to its calling function. You can step out of a long function when you have determined that the rest of the function is not significant to debug. However, if a breakpoint is set in the remaining part of the function, then that breakpoint will be hit before returning to the calling function.
+A function will be executed when you step over or step out of it. Shift+F7 is the shortcut key for Step Out.
+In the preceding example:
When the debugged process stops at a specific location, you can select Continue (F9) from the Debug menu, or click the continue button in the toolbar to continue the PL/SQL function execution.
The Callstack pane displays the chain of functions as they are called. The Callstack pane can be opened from the minimized window. The most recent functions are listed at the top, and the least recent at the bottom. At the end of each function name is the current line number in that function.
+You can double-click the function names in the Callstack pane to open panes of different functions. For example, when f2_test() calls line 10 of f3_test(), the debugging pointer will point to the first executable line (that is, line 9 in the preceding example) in the function call.
+In this case, the Callstack pane will be displayed as follows.
+Press Alt+J to copy the content in the Callstack pane.
+When you use Data Studio, you can examine debugging information through several debugging panes. This section describes how to check the debugging information:
+ +The Variables pane is used to monitor information or evaluate values. The Variables pane can be opened from the minimized window to evaluate or modify variables or parameters in a PL/SQL procedure. As you step through the code, the values of some local variables may be changed.
+Press Alt+K to copy the content of the Variables pane.
+You can double-click the corresponding line of the variable and manually change its value during runtime.
+Click the Variable, Datatype, or Value column in the Variables pane to sort the values. For example, to change the value of the percentage variable from 5 to 15, double-click the corresponding line in the Variable pane. The Set Variable Value dialog box will be displayed, which prompts you to enter the variable value. Input the variable value and click OK.
+To set NULL as a variable value, enter NULL or null in the Value column.
If a variable is read-only, a read-only indicator will be displayed next to it.
Read-only variables cannot be updated. A variable declared as a constant will not be shown as read-only in the Variables pane. However, an error will be reported when this variable is updated.
| Setting/Displaying Variables | Description |
|---|---|
| Setting a variable to NULL | |
| Setting a string value | Some examples are as follows: |
| Setting a BOOLEAN value | Enclose the BOOLEAN value t or f within single quotation marks ('). For example, to set t to a BOOLEAN variable, enter 't' in the Value column. |
| Displaying a variable value | If the variable value is NULL, it will be displayed as NULL. If the variable value is an empty string, it will be displayed as empty. If the variable value is a string, for example, abc, it will be displayed as abc. |
The Result tab displays the result of the PL/SQL debugging session, with the corresponding procedure name at the top of the tab. The Result tab is automatically displayed only when the result of executing a PL/SQL program exists.
You can click the copy button in the Result tab to copy the content of the tab. For details, see Using SQL Terminals.
Data Studio suggests a list of possible schema names, table names, column names, views, sequences, and functions in the PL/SQL Viewer.
+Follow the steps below to select a DB object:
+On selection, the child DB object will be appended to the parent DB object (with a period(.)).
+If there are two schemas that are named public and PUBLIC, then all child objects for both these schemas will be displayed.
+Perform the following steps to export the DDL of a function or procedure:
+The Data Studio Security Disclaimer dialog box is displayed.
+The Save As dialog box is displayed.
+The Data Exported Successfully dialog box and status bar display the status of the completed operation.
| Database Encoding | File Encoding | Support for Exporting DDL |
|---|---|---|
| UTF-8 | UTF-8 | Yes |
| UTF-8 | GBK | Yes |
| UTF-8 | LATIN1 | Yes |
| GBK | GBK | Yes |
| GBK | UTF-8 | Yes |
| GBK | LATIN1 | No |
| LATIN1 | LATIN1 | Yes |
| LATIN1 | GBK | No |
| LATIN1 | UTF-8 | Yes |
Data Studio allows you to view table properties, procedures/functions and SQL functions.
+Follow the steps below to view table properties:
+The table properties are read-only.
+Follow the steps below to view functions/procedures or SQL functions:
+Follow the steps below to view object DDL:
+Individual or batch drop can be performed on functions/procedures. Refer to Batch Dropping Objects section for batch drop.
+Follow the steps below to drop a function/procedure or SQL function object:
+The status bar displays the status of the completed operation.
+After you connect to the database, all the stored functions/procedures and tables will be automatically populated in the Object Browser pane. You can use Data Studio to execute PL/SQL programs or SQL functions.
+For example:
+- To execute the function/procedure with string, enter the value as data.
+- To execute the function/procedure with date, enter the value as to_date('2012-10-10', 'YYYY-MM-DD').
+You can right-click the function/procedure in the Object Browser to perform the following operations:
+Follow the steps below to execute a PL/SQL program or SQL function:
+Alternatively, you can right-click in the PL/SQL Viewer tab and select Execute.
+If there is no input parameter, then the Execute Function/Procedure dialog box will not appear. Instead, the PL/SQL program will execute and the result (if any) will be displayed in the Result tab.
+To set NULL as the parameter value, enter NULL or null.
+For supported data types, the execution queries are as follows:
select func('1'::INTEGER);
select func('1'::FLOAT);
select func('xyz'::VARCHAR);
If the input value is ab'c, then you need to enter ab''c.
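As in standard SQL, a single quotation mark inside a string parameter is escaped by doubling it. A minimal sketch, where func is a placeholder for your own function name:

```sql
-- Pass the string ab'c to a function: the embedded quote is doubled.
select func('ab''c'::VARCHAR);
```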
The PL/SQL program is executed in the SQL Terminal tab and the result is displayed in the Result tab. You can copy the contents of the Result tab by clicking the copy button. Refer to Using SQL Terminals for more information on toolbar options.
Refer to Executing SQL Queries section for information on reconnect option in case connection is lost during execution.
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog box is displayed.
+This section describes how to manage tables efficiently.
+This section describes how to create a common table.
+A table is a logical structure maintained by a database administrator and consists of rows and columns. You can define a table as a part of your data definitions from the data perspective. Before defining a table, you need to define a database and a schema. This section describes how to use Data Studio to create a table. To define a table in the database, perform the following steps:
+On the SQL Preview tab, you can check the automatically generated SQL query. For details, see SQL Preview.
+If you create a table in a schema, the current schema will be used as the schema of the table. Perform the following steps to create a common table:
+Select the Case check box to retain the capitalization of the value of the Table Name parameter. For example, if you enter the table name Employee, the table name will be created as Employee.
+The name of the table schema is displayed in Schema.
If Fill Factor is set to a smaller value, the INSERT operation fills only the specified percentage of a table page. The free space of the page will be used to update rows on the page. In this way, the UPDATE operation can place the updated row content on the original page, which is more efficient than placing the update on a different page. Set it to 100 for a table that is never updated. Set it to a smaller value for heavily updated tables. TOAST tables do not support this parameter.
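In SQL, this setting corresponds to the FILLFACTOR storage parameter of CREATE TABLE. A minimal sketch with illustrative table and column names:

```sql
-- Leave 30% of each page free so UPDATEs can keep rows on the same page.
CREATE TABLE hot_updates (
    id   integer,
    note varchar(100)
) WITH (FILLFACTOR = 70);
```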
+You can configure the following parameters of a common table:
| Parameter | Row-store Table | Column-store Table | ORC Table |
|---|---|---|---|
| Table Type | | | |
| If Not Exists | | | |
| With OIDS | | | |
| Fill Factor | | | |
A column defines a unit of information within a table's row. Each row is an entry in the table. Each column is a category of information that applies to all rows. When you add a table to a database, you can define the columns that compose it. Columns determine the type of data that the table can hold. After providing the general information about the table, click the Columns tab to define the list of table columns. Each column contains name, data type, and other optional properties.
+You can perform the following operations only in a common table:
+ +To define a column, perform the following steps:
+Select the Case check box to retain the capitalization of the value of the Column Name parameter. For example, if the column name entered is "Name", then the column name is created as "Name".
Example: If the array dimension for a column is defined as integer[], the column data will be added as a single-dimension array.
The marks column in the above table was created as a single-dimension array, and the subject column as a two-dimension array.
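The example above can be sketched in SQL as follows. The table name and the element type of the subject column are assumptions for illustration only:

```sql
CREATE TABLE student_scores (
    marks   integer[],       -- single-dimension array
    subject varchar(20)[][]  -- two-dimension array
);
```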
+For complex data types,
+User-defined data types are not available for selection.
+You can configure the following parameters of a column in a common table:
| Parameter | Row-store Table | Column-store Table | ORC Table |
|---|---|---|---|
| Array Dimensions | √ | x | x |
| Data Type Schema | √ | x | x |
| NOT NULL | √ | √ | √ |
| Default | √ | √ | √ |
| UNIQUE | √ | x | x |
| CHECK | √ | x | x |
Follow the steps to edit a column:
+You must complete the edit operation and save the changes to continue with other operations.
+You can move a column in a table. To move a column, select the column and click Up or Down.
+Data distribution specifies how the table is distributed or replicated among data nodes.
+Select one of the following options for the distribution type:
| Distribution Type | Description |
|---|---|
| DEFAULT DISTRIBUTION | The default distribution type will be assigned for this table. |
| REPLICATION | Each row of the table will be replicated in all the data nodes of the database cluster. |
| HASH | Each row of the table will be placed based on the hash value of the specified column. |
| RANGE | Each row of the table will be placed based on the range value. |
| LIST | Each row of the table will be placed based on the list value. |
After selecting data distribution, click Next.
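In the generated DDL, the selected distribution type maps to a DISTRIBUTE BY clause. A sketch of the HASH and REPLICATION cases, with illustrative table and column names:

```sql
-- Rows are placed on data nodes by the hash value of id.
CREATE TABLE orders (id integer, amount numeric) DISTRIBUTE BY HASH (id);

-- A full copy of the table is kept on every data node.
CREATE TABLE dim_region (code integer, name varchar(30)) DISTRIBUTE BY REPLICATION;
```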
+The following table lists the data distribution parameters that can be configured for common tables.
| Distribution Type | Row-store Table | Column-store Table | ORC Table |
|---|---|---|---|
| DEFAULT DISTRIBUTION | √ | √ | x |
| HASH | √ | √ | √ |
| REPLICATION | √ | √ | x |
Creating constraints is optional. A table can have one (and only one) primary key. Creating the primary key is a good practice.
+You can select the following types of constraints from the Constraint Type drop-down list:
+ +The primary key is the unique identity of a row and consists of one or more columns.
+Only one primary key can be specified for a table, either as a column constraint or as a table constraint. The primary key constraint must name a set of columns that is different from other sets of columns named by any unique constraint defined for the same table.
+Set the constraint type to PRIMARY KEY and enter the constraint name. Select a column from the Available Columns list and click Add. If you need a multi-column primary key, repeat this step for another column.
Fill Factor for a table ranges from 10 to 100 (unit: %). The default value is 100 (filled to capacity). If Fill Factor is set to a smaller value, the INSERT operation fills only the specified percentage of a table page. The free space of the page will be used to update rows on the page. In this way, the UPDATE operation can place the updated row content on the original page, which is more efficient than placing the update on a different page. Set it to 100 for a table that is never updated. Set it to a smaller value for heavily updated tables. TOAST tables do not support this parameter.
DEFERRABLE: Allows the constraint check to be deferred to the end of the transaction.
INITIALLY DEFERRED: Checks the constraint at the end of the transaction.
+In the Constraints area, click Add.
+You can click Delete to delete a primary key from the list.
+Mandatory parameters are marked with asterisks (*).
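The steps above produce a named table-level constraint in the generated DDL. A sketch with illustrative names, showing a multi-column primary key:

```sql
CREATE TABLE employee (
    emp_id  integer,
    dept_id integer,
    name    varchar(50),
    CONSTRAINT employee_pk PRIMARY KEY (emp_id, dept_id)  -- multi-column primary key
);
```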
+Set the constraint type to UNIQUE and enter the constraint name.
+Select a column from the Available Columns list and click Add. To configure unique for multiple columns, repeat this step for another column. After adding the first column, the UNIQUE constraint name will be automatically filled. The name can be modified.
+ +Fill Factor: For details, see Primary Key.
+DEFERRABLE: For details, see Primary Key.
+INITIALLY DEFERRED: For details, see Primary Key.
+You can click Delete to delete UNIQUE from the list.
+Mandatory parameters are marked with asterisks (*).
+Set the constraint type to CHECK and enter the constraint name.
+When the INSERT or UPDATE operation is performed, and if the check expression fails, then table data is not altered.
If you double-click a column in the Available Columns list, it is inserted into the Check Expression edit line at the current cursor position.
+In the Constraints area, click Add. You can click Delete to delete CHECK from the list. Mandatory parameters are marked with asterisks (*). After defining all constraints, click Next.
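A CHECK constraint appears in the generated DDL as a boolean expression that every inserted or updated row must satisfy. A sketch with illustrative names:

```sql
CREATE TABLE salary_record (
    emp_id integer,
    salary numeric,
    CONSTRAINT salary_positive CHECK (salary > 0)  -- INSERT/UPDATE fails when the expression is false
);
```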
+The following table lists the table constraint parameters that can be configured for common tables.
| Constraint Type | Row-store Table | Column-store Table | ORC Table |
|---|---|---|---|
| CHECK | √ | x | x |
| UNIQUE | √ | x | x |
| PRIMARY KEY | √ | x | x |
Indexes are optional. They are used to enhance database performance. This operation constructs an index on the specified column(s) of the specified table. Select the Unique Index check box to enable this option.
+Choose the name of the index method from the Access Method list. The default method is B-tree.
The fill factor for an index is a percentage that determines how full the index method will try to pack index pages. For B-trees, leaf pages are filled to this percentage during initial index build, and also when extending the index at the right (adding new largest key values). If pages subsequently become completely full, they will be split, leading to gradual degradation in the index's efficiency. B-trees use a default fill factor of 90, but any integer value from 10 to 100 can be selected. If the table is static, then a fill factor of 100 can minimize the index's physical size. For heavily updated tables, a smaller fill factor is better to minimize the need for page splits. Other indexing methods use different fill factors but work in similar ways. The default fill factor varies between methods.
+You can either enter a user-defined expression for the index or you can create the index using the Available Columns list. Select the column in the Available Columns list and click Add. If you need a multi-column index, repeat this step for other columns.
+After entering the required information for the new index, click Add.
+You can also delete an index from the list using the Delete button. After defining all indexes, click Next.
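The wizard inputs above map onto CREATE INDEX statements. A sketch with illustrative names, showing a unique B-tree index with a fill factor and a partial index:

```sql
-- Unique B-tree index; leaf pages are initially filled to 90%.
CREATE UNIQUE INDEX employee_id_idx ON employee USING btree (emp_id) WITH (FILLFACTOR = 90);

-- Partial index: only rows matching the WHERE clause are indexed.
CREATE INDEX active_emp_idx ON employee (dept_id) WHERE active = true;
```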
+You can configure the following parameters of an index in a common table.
| Parameter | Row-store Table | Column-store Table | ORC Table |
|---|---|---|---|
| Unique Indexes | √ | x | x |
| btree | √ | √ | x |
| gin | √ | √ | x |
| gist | √ | √ | x |
| hash | √ | √ | x |
| psort | √ | √ | x |
| spgist | √ | √ | x |
| Fill Factor | √ | x | x |
| User Defined Expression | √ | x | x |
| Partial Index | √ | x | x |
Data Studio generates a DDL statement based on the inputs provided in Create New table wizard.
+You can only view, select, and copy the query. You cannot edit the query.
+Click Finish to create the table. On clicking the Finish button, the generated query will be sent to the server. Any errors are displayed in the dialog box and status bar.
+After creating a table, you can add new columns in that table. You can also perform the following operations on the existing column only for a Regular table:
+Follow the steps below to add a new column to the existing table:
+The Add New Column dialog box is displayed prompting you to add information about the new column.
+Data Studio displays the status of the operation in the status bar.
+Follow the steps below to rename a column:
+A Rename Column dialog box is displayed prompting you to provide the new name.
+Follow the steps below to set or reset the Not Null option:
+A Toggle Not Null Property dialog box is displayed prompting you to set or reset the Not Null option.
+Follow the steps below to drop a column:
+A Drop Column dialog box is displayed.
+Follow the steps below to set the default value for a column:
+A dialog box with the current default value (if it is set) is displayed, prompting you to provide the default value.
+Follow the steps below to change the data type of a column:
+Change Data Type dialog box is displayed.
+The existing data type will show as Unknown while modifying complex data types.
+You can perform the following operations after a table is created only for a Regular table:
+ +Follow the steps below to add a new constraint to the existing table:
+The Add New Constraint dialog box is displayed prompting you to add information about the new constraint.
+Data Studio displays the status of the operation in the status bar.
+The status bar will show the name of the constraint if it has been provided in the Constraint Name field, or else the constraint name will not be displayed as it is created by database server.
+Follow the steps below to rename a constraint:
+The Rename Constraint dialog box is displayed prompting you to provide the new name.
+You can create indexes in a table to search for data efficiently.
+After a table is created, you can add indexes to it. You can perform the following operations only in a common table:
+ +Perform the following steps to add an index to a table:
+The Create Index dialog box is displayed.
+Follow the steps below to rename an index:
+The Rename Index dialog box is displayed.
+To change a fill factor, perform the following steps:
+The Change Fill Factor dialog box is displayed.
+Perform the following steps to delete an index:
+The Drop Index dialog box is displayed.
+When the last index of a table is deleted, the value of the Has Index parameter may still be TRUE. After a vacuum operation is performed on the table, this parameter will change to FALSE.
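This behavior can be observed through the relhasindex flag in the pg_class catalog, which is cleared lazily by vacuum. A sketch with an illustrative table and index name:

```sql
DROP INDEX employee_id_idx;  -- last index on the table

-- The flag may still read true right after the drop.
SELECT relhasindex FROM pg_class WHERE relname = 'employee';

VACUUM employee;  -- vacuum clears the stale flag

-- Now reads false.
SELECT relhasindex FROM pg_class WHERE relname = 'employee';
```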
+Foreign tables created using query execution in SQL Terminal or any other tool can be viewed in the Object browser after refresh.
+Partitioning refers to splitting what is logically one large table into smaller physical pieces based on specific schemes. The table based on the logic is called a partitioned table, and a physical piece is called a partition. Data is stored on these smaller physical pieces, namely, partitions, instead of the larger logical partitioned table.
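In SQL, a range-partitioned table of this kind is declared with a PARTITION BY RANGE clause that names the partitions and their upper bounds. A sketch with illustrative names:

```sql
CREATE TABLE sales (
    sale_id   integer,
    sale_date date
)
PARTITION BY RANGE (sale_date)
(
    PARTITION p2023 VALUES LESS THAN ('2024-01-01'),
    PARTITION p2024 VALUES LESS THAN ('2025-01-01')
);
```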
+Follow the steps below to define a table in your database:
+On the SQL Preview tab, you can check the automatically generated SQL query. For details, see Checking the SQL Preview.
+For details, see Providing Basic Information.
+Perform the following steps to configure other parameters:
+If table orientation is selected as ORC, then an HDFS Partitioned table is created.
+The following table describes the parameters of partitioned tables.
| Parameter | Row Partition | Column Partition | ORC Partition |
|---|---|---|---|
| Table Type | x | x | x |
| If Not Exists | √ | √ | √ |
| With OIDS | x | x | x |
| Fill Factor | √ | x | x |
The following table describes the parameters of partitioned tables.
| Parameter | Row Partition | Column Partition | ORC Partition |
|---|---|---|---|
| Array Dimensions | √ | x | x |
| Data Type | √ | x | x |
| NOT NULL | √ | √ | √ |
| Default | √ | √ | √ |
| UNIQUE | √ | x | x |
| CHECK | √ | x | x |
+You can change the order of partitions in the table as required. To change the order, select the required partition and click Up or Down.
+Perform the following steps to edit a partition:
+You must complete the edit operation and save the changes to continue with other operations.
+Perform the following steps to delete a partition:
+The following table describes the partition parameters of partitioned tables.
+| Parameter | Row Partition | Column Partition | ORC Partition |
+|---|---|---|---|
+| Partition Type | By Range | By Range | By Value |
+| Partition Name | √ | √ | x |
+| Partition Value | √ | √ | x |
Perform the following steps to define a table partition:
+The column will be moved to the Partition Column area.
+You can perform the following operations on the partitions of a row- or column-partitioned table, but not on ORC partitioned tables:
+For details about index definitions, see Defining an Index.
+| Parameter | Row Partition | Column Partition | ORC Partition |
+|---|---|---|---|
+| Unique Indexes | √ | x | x |
+| btree | √ | √ | x |
+| gin | √ | √ | x |
+| gist | √ | √ | x |
+| hash | √ | √ | x |
+| psort | √ | √ | x |
+| spgist | √ | √ | x |
+| Fill Factor | √ | x | x |
+| User Defined Expression | √ | x | x |
+| Partial Index | √ | x | x |
For details about how to define table constraints, see Defining Table Constraints.
+| Parameter | Row Partition | Column Partition | ORC Partition |
+|---|---|---|---|
+| Check | √ | x | x |
+| Unique | √ | x | x |
+| Primary Key | √ | x | x |
For details about how to select a distribution type, see Selecting Data Distribution.
+| Parameter | Row Partition | Column Partition | ORC Partition |
+|---|---|---|---|
+| DEFAULT DISTRIBUTION | √ | √ | x |
+| Hash | √ | √ | √ |
+| Replication | √ | √ | x |
After creating a table, you can add/modify partitions. You can also perform the following operations on an existing partition:
+Follow the steps below to rename a partition:
+The Rename Partition Table dialog box is displayed, prompting you to provide a new name for the partition.
+Data Studio displays the status of the operation in the status bar.
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog box is displayed.
+In the SQL Preview tab, you can view the SQL query automatically generated for the inputs provided.
+This section describes how to manage tables efficiently.
+After creating the table, you can perform operations on the existing table. Right-click the selected table and select the required operation.
+Additional options for table operations are available in the table context menu. The context menu options available for table operations are:
+| Menu Item | Description |
+|---|---|
+| View Table Data | Opens the table data information. For details, see Viewing Table Data. |
+| Edit Table Data | Opens the window for editing table data. For details, see Editing Table Data. |
+| Reindex Table | Re-creates the table index. For details, see Reindexing a Table. |
+| Analyze Table | Analyzes a table. For details, see Analyzing a Table. |
+| Truncate Table | Truncates table data. For details, see Truncating a Table. |
+| Vacuum Table | Vacuums table data. For details, see Vacuuming a Table. |
+| Set Table Description | Sets the table description. For details, see Setting the Table Description. |
+| Set Schema | Sets the schema of a table. For details, see Setting the Schema. |
+| Export Table Data | Exports table data. For details, see Exporting Table Data. |
+| Import Table Data | Imports table data. For details, see Importing Table Data. |
+| Show DDL | Shows the DDL of a table. For details, see Showing DDL. |
+| Export DDL | Exports the table DDL. For details, see Exporting Table DDL. |
+| Export DDL and Data | Exports DDL and table data. For details, see Exporting Table DDL and Data. |
+| Rename Table | Renames a table. For details, see Renaming a Table. |
+| Drop Table | Drops (deletes) a table. For details, see Dropping a Table. |
+| Properties | Shows table properties. For details, see Viewing Table Properties. |
+| Grant/Revoke | Grants or revokes permissions. For details, see Grant/Revoke Privilege. |
+| Refresh | Refreshes a table. |
Follow the steps below to rename a table:
+The Rename Table dialog box is displayed prompting you to provide the new name.
+This operation is not supported for ORC partitioned tables.
+Follow the steps below to truncate a table:
+Data Studio prompts you to confirm this operation.
+A popup message and status bar display the status of the completed operation.
+Indexes facilitate the lookup of records. You need to reindex tables in the following scenarios:
+Follow the steps below to reindex a table:
+A pop-up message and status bar display the status of the completed operation.
+This operation is not supported for ORC partitioned tables.
+The analyze table operation collects statistics about tables and table indexes and stores the collected information in internal tables of the database, where the query optimizer can access it and use it to make better query planning choices.
+Follow the steps below to analyze a table:
+The Analyze Table message and status bar displays the status of the completed operation.
+The vacuum table operation reclaims space and makes it available for reuse.
+Follow the steps below to vacuum the table:
+The Vacuum Table message and status bar display the status of the completed operation.
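The reindex, analyze, and vacuum operations above can also be issued as SQL from the SQL Terminal; a minimal sketch, assuming a hypothetical table public.t1:

```sql
REINDEX TABLE public.t1;  -- rebuild the table's indexes
ANALYZE public.t1;        -- collect statistics for the query optimizer
VACUUM public.t1;         -- reclaim space and make it available for reuse
```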
+Follow the steps below to set the description of a table:
+The Update Table Description dialog box is displayed. It prompts you to set the table description.
+The status bar displays the status of the completed operation.
+Follow the steps below to set a schema:
+The Set Schema dialog box is displayed, prompting you to select the new schema for the selected table.
+The status bar displays the status of the completed operation.
+Tables can be dropped individually or in batches. For batch dropping, refer to the Batch Dropping Objects section.
+This operation removes the complete table structure (including the table definition and index information) from the database, and you must re-create the table to store data again.
+Follow the steps below to drop the table:
+Data Studio prompts you to confirm this operation.
+The status bar displays the status of the completed operation.
+Follow the steps below to view the properties of a table:
+Data Studio displays the properties (General, Columns, Constraints, and Index) of the selected table in different tabs.
+The following table lists the operations that can be performed on each tab along with data editing and refreshing operation. Edit operation is performed by double-clicking the cell.
+| Tab Name | Operations Allowed |
+|---|---|
+| General | Save, Cancel, and Copy. Note: Only the Table Description field can be modified. |
+| Columns | Add, Delete, Save, Cancel, and Copy |
+| Constraints | Add, Delete, Save, Cancel, and Copy |
+| Index | Add, Delete, Save, Cancel, and Copy |
+Refer to the Editing Table Data section for more information on the edit, save, cancel, copy, paste, and refresh operations.
+When viewing table data, Data Studio automatically adjusts the column widths for the table view. Users can resize the columns as needed. If the text content of a cell exceeds the total available display area, resizing the cell column may cause Data Studio to become unresponsive.
+Follow the steps below to grant/revoke a privilege:
+The Grant/Revoke dialog is displayed.
+Perform the following steps to export the table DDL:
+The Data Studio Security Disclaimer dialog box is displayed.
+The Save As dialog box is displayed.
+The Data Exported Successfully dialog box and status bar display the status of the completed operation.
+| Database Encoding | File Encoding | Support for Exporting DDL |
+|---|---|---|
+| UTF-8 | UTF-8 | Yes |
+| UTF-8 | GBK | Yes |
+| UTF-8 | LATIN1 | Yes |
+| GBK | GBK | Yes |
+| GBK | UTF-8 | Yes |
+| GBK | LATIN1 | No |
+| LATIN1 | LATIN1 | Yes |
+| LATIN1 | GBK | No |
+| LATIN1 | UTF-8 | Yes |
You can select multiple objects and export their DDL. Batch Export lists the objects whose DDL cannot be exported.
+The exported table DDL and data include the following:
+Perform the following steps to export the table DDL and data:
+The Data Studio Security Disclaimer dialog box is displayed.
+The Save As dialog box is displayed.
+The Data Exported Successfully dialog box and status bar display the status of the completed operation.
+| Database Encoding | File Encoding | Support for Exporting DDL |
+|---|---|---|
+| UTF-8 | UTF-8 | Yes |
+| UTF-8 | GBK | Yes |
+| UTF-8 | LATIN1 | Yes |
+| GBK | GBK | Yes |
+| GBK | UTF-8 | Yes |
+| GBK | LATIN1 | No |
+| LATIN1 | LATIN1 | Yes |
+| LATIN1 | GBK | No |
+| LATIN1 | UTF-8 | Yes |
You can select multiple objects from ordinary and partitioned tables to export DDL and data, including columns, rows, indexes, constraints, and partitions. Batch Export lists the objects whose DDL and data cannot be exported.
+Perform the following steps to export table data:
+The Export Table Data dialog box is displayed with the following options:
+The file name follows the Windows file naming convention.
+The Save As dialog box is displayed.
+Perform the following steps to cancel table data export:
+The Messages tab and status bar display the status of the canceled operation.
+Follow the steps below to show the DDL query of a table:
+The DDL of the selected table is displayed.
+| Database Encoding | File Encoding | Supports Show DDL |
+|---|---|---|
+| UTF-8 | UTF-8 | Yes |
+| UTF-8 | GBK | Yes |
+| UTF-8 | LATIN1 | Yes |
+| GBK | GBK | Yes |
+| GBK | UTF-8 | Yes |
+| GBK | LATIN1 | No |
+| LATIN1 | LATIN1 | Yes |
+| LATIN1 | GBK | No |
+| LATIN1 | UTF-8 | Yes |
This document describes how to use GaussDB(DWS) tools, including client tools, as shown in Table 1, and server tools, as shown in Table 2.
+The client tools can be obtained by referring to Downloading Client Tools.
+The server tools are stored in the $GPHOME/script and $GAUSSHOME/bin paths on the database server.
+| Tool | Description |
+|---|---|
+| gsql | A command-line interface (CLI) SQL client tool running on the Linux OS. It is used to connect to the database in a GaussDB(DWS) cluster and perform operation and maintenance on the database. |
+| Data Studio | A client tool used to connect to a database. It provides a GUI for managing databases and objects, editing, executing, and debugging SQL scripts, and viewing execution plans. Data Studio can run on a 32-bit or 64-bit Windows OS. You can use it after decompression without installation. |
+| GDS | A CLI tool running on the Linux OS. It works with foreign tables to quickly import and export data. The GDS tool package needs to be installed on the server where the data source file is located. This server is called the data server or the GDS server. |
+| DSC | A CLI tool used for migrating SQL scripts from Teradata or Oracle to GaussDB(DWS) to rebuild a database on GaussDB(DWS). DSC runs on the Linux OS. You can use it after decompression without installation. |
+| Tool | Description |
+|---|---|
+| gs_dump | gs_dump exports database information, such as the complete and consistent data of database objects (including databases, schemas, tables, and views), without affecting the normal access of users to the database. |
+| gs_dumpall | gs_dumpall exports database information, such as the complete and consistent data of database objects, without affecting the normal access of users to the database. |
+| gs_restore | gs_restore is a tool provided by GaussDB(DWS) to import data that was exported using gs_dump. |
+| gds_check | gds_check is used to check the GDS deployment environment, including the OS parameters, network environment, and disk usage. It also supports the correction of system parameters. This helps detect potential problems during GDS deployment and running, improving the execution success rate. |
+| gds_install | gds_install is a script tool used to install GDS in batches, improving GDS deployment efficiency. |
+| gds_uninstall | gds_uninstall is a script tool used to uninstall GDS in batches. |
+| gds_ctl | gds_ctl is a script tool used for starting or stopping GDS service processes in batches. You can start or stop GDS service processes that use the same port on multiple nodes at a time, and set a daemon for each GDS process during startup. |
+| gs_sshexkey | During cluster installation, you need to execute commands and transfer files among hosts in the cluster. gs_sshexkey is used to help users establish mutual trust. |
Log in to the GaussDB(DWS) management console at: https://console.otc.t-systems.com/dws/
+You can download the following tools:
+The gsql and Data Studio client tools have multiple historical versions. You can click Historical Version to download the tools based on the cluster version. GaussDB(DWS) clusters are compatible with earlier versions of gsql and Data Studio tools. You are advised to download the matching tool version based on the cluster version.
+Data Studio shows major database features using a GUI to simplify database development and application building.
+Data Studio allows database developers to create and manage database objects, such as databases, schemas, functions, stored procedures, tables, sequences, columns, indexes, constraints, and views, execute SQL statements or SQL scripts, edit and execute PL/SQL statements, as well as import and export table data.
+Data Studio also allows database developers to debug and fix defects in the PL/SQL code using debugging operations such as Step Into, Step Out, Step Over, Continue, and Terminate.
+The following figure shows the operating environment of the database and Data Studio.
+Handle errors that occurred during data import.
+Errors that occur when data is imported are divided into data format errors and non-data format errors.
+When creating a foreign table, specify LOG INTO error_table_name. Data format errors occurring during the data import will be written into the specified table. You can run the following SQL statement to query error details:
+1 | SELECT * FROM error_table_name; + |
+| Column | Type | Description |
+|---|---|---|
+| nodeid | integer | ID of the node where an error is reported |
+| begintime | timestamp with time zone | Time when a data format error is reported |
+| filename | character varying | Name of the source data file where a data format error occurs. If you use GDS for importing data, the error information includes the IP address and port number of the GDS server. |
+| rownum | bigint | Number of the row where an error occurs in a source data file |
+| rawrecord | text | Raw record of the data format error in the source data file |
+| detail | text | Error details |
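As a hedged sketch of the LOG INTO mechanism described above (the table names and GDS location URL are hypothetical examples, not values from this guide), a GDS foreign table that logs data format errors might look like this:

```sql
-- Hypothetical foreign table; format errors during import are written
-- to the error table named in LOG INTO instead of aborting the load.
CREATE FOREIGN TABLE ext_orders
(
    order_id integer,
    amount   numeric(10,2)
)
SERVER gsmpp_server
OPTIONS (location 'gsfs://192.168.0.90:5000/orders*', format 'csv')
LOG INTO orders_err
PER NODE REJECT LIMIT 'unlimited';

-- After the import, inspect the recorded format errors:
SELECT begintime, filename, rownum, detail FROM orders_err;
```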
A non-data format error leads to the failure of an entire data import task. You can locate and troubleshoot a non-data format error based on the error message displayed during data import.
+Troubleshoot data import errors based on obtained error information and the description in the following table.
+gs_dump is a tool provided by GaussDB(DWS) to export database information. You can export a database or its objects, such as schemas, tables, and views. The database can be the default postgres database or a user-specified database.
+When gs_dump is used to export data, other users still can access the database (readable or writable).
+gs_dump can export complete, consistent data. For example, if gs_dump is started to export database A at T1, data of the database at that time point will be exported, and modifications on the database after that time point will not be exported.
+gs_dump can export database information to a plain-text SQL script file or archive file.
+gs_dump can create export files in four formats, which are specified by -F or --format=, as listed in Table 1.
+| Format | Value of -F | Description | Suggestion | Corresponding Import Tool |
+|---|---|---|---|---|
+| Plain-text | p | A plain-text script file containing SQL statements and commands. The commands can be executed on gsql, a command line terminal, to recreate database objects and load table data. | You are advised to use plain-text export files for small databases. | Before using gsql to restore database objects, you can use a text editor to edit the exported plain-text file as required. |
+| Custom | c | A binary file that allows the restoration of all or selected database objects from an exported file. | You are advised to use custom-format archive files for medium or large databases. | You can use gs_restore to import database objects from a custom-format archive. |
+| Directory | d | A directory containing directory files and the data files of tables and BLOB objects. | - | gs_restore |
+| .tar | t | A tar-format archive that allows the restoration of all or selected database objects from an exported file. It cannot be further compressed and has an 8-GB limitation on the size of a single table. | - | gs_restore |
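For illustration, the four formats might be produced as follows (the port number and file paths are hypothetical):

```shell
# Export the postgres database in each of the four formats.
gs_dump -p 25308 postgres -F p -f backup/db_plain.sql   # plain-text SQL script
gs_dump -p 25308 postgres -F c -f backup/db_custom.dmp  # custom-format archive
gs_dump -p 25308 postgres -F d -f backup/db_dir         # directory format
gs_dump -p 25308 postgres -F t -f backup/db_tar.tar     # tar archive
```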
+To reduce the size of an exported file, you can use gs_dump to compress it to a plain-text file or custom-format file. By default, a plain-text file is not compressed when generated. When a custom-format archive is generated, a medium level of compression is applied by default. Exported .tar archive files cannot be compressed using gs_dump.
+Do not modify an exported file or its content. Otherwise, restoration may fail.
+To ensure the data consistency and integrity, gs_dump acquires a share lock on a table to be dumped. If another transaction has acquired a share lock on the table, gs_dump waits until this lock is released and then locks the table for dumping. If the table cannot be locked within the specified time, the dump fails. You can customize the timeout duration to wait for lock release by specifying the --lock-wait-timeout parameter.
+gs_dump [OPTION]... [DBNAME]+
DBNAME does not follow a short or long option. It specifies the database to connect to.
+For example:
+Specify DBNAME without a -d option preceding it.
+gs_dump -p port_number postgres -f dump1.sql+
or
+export PGDATABASE=postgres+
gs_dump -p port_number -f dump1.sql+
Environment variable: PGDATABASE
+Common parameters:
+Sends the output to the specified file or directory. If this parameter is omitted, the standard output is used. If the output format is -F c, -F d, or -F t, the -f parameter must be specified. If the value of -f contains a directory path, the current user must have read and write permissions on the directory.
+Selects the exported file format. Its format can be:
+A .tar archive can be used as input of gsql.
+Specifies the verbose mode. If it is specified, gs_dump writes detailed object comments and the number of startups/stops to the dump file, and progress messages to standard error.
+Specifies the used compression level.
+Value range: 0 to 9
+For the custom-format archive, this option specifies the compression level of individual table data segments. By default, data is compressed at a medium level. Setting a non-zero compression level causes the entire plain-text output file to be compressed, as if it had been processed by gzip; the default for plain-text output is no compression. The .tar archive format does not currently support compression.
+Does not wait indefinitely to acquire shared table locks at the beginning of the dump. The dump fails if a table cannot be locked within the specified time. The timeout duration can be specified in any of the formats accepted by SET statement_timeout.
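For example (port, path, and timeout value are hypothetical), failing fast instead of waiting indefinitely for a table lock:

```shell
# Abort the dump if a shared table lock cannot be acquired within 3 minutes.
gs_dump -p 25308 postgres --lock-wait-timeout=180000 -f backup/db.sql
```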
+Dump parameters:
+Generates only the data, not the schema (data definition). Dumps the table data, big objects, and sequence values.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Before writing the command of creating database objects into the backup file, write the command of clearing (deleting) database objects to the backup files. (If no objects exist in the target database, gs_restore probably displays some error information.)
+This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
+The backup file content starts with the commands of creating the database and connecting to the created database. (If the script is in this format, any database to be connected is allowed before running the script.)
+This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
+Creates a dump file in the specified character set encoding. By default, the dump file is created in the database encoding. (Alternatively, you can set the environment variable PGCLIENTENCODING to the required dump encoding.)
+Dumps only schemas matching the schema names. This option contains the schema and all its contained objects. If this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by specifying multiple -n options. The schema parameter is interpreted as a pattern according to the same rules used by the \d command of gsql. Therefore, multiple schemas can also be selected by writing wildcard characters in the pattern. When you use wildcards, quote the pattern to prevent the shell from expanding the wildcards.
+Multiple schemas can be dumped. Entering -n schemaname multiple times dumps multiple schemas.
+For example:
+gs_dump -h host_name -p port_number postgres -f backup/bkp_shl2.sql -n sch1 -n sch2+
In the preceding example, sch1 and sch2 are dumped.
+Does not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for -n. -N can be specified multiple times to exclude schemas matching any of the specified patterns.
+When both -n and -N are specified, the schemas that match at least one -n option but no -N is dumped. If -N is specified and -n is not, the schemas matching -N are excluded from what is normally dumped.
+Dump allows you to exclude multiple schemas during dumping.
+Specifies -N exclude schema name to exclude multiple schemas while dumping.
+For example:
+gs_dump -h host_name -p port_number postgres -f backup/bkp_shl2.sql -N sch1 -N sch2+
In the preceding example, sch1 and sch2 will be excluded during the dumping.
+Dumps object identifiers (OIDs) as parts of the data in each table. Use this parameter if your application references the OID columns in some way (for example, in a foreign key constraint). If the preceding situation does not occur, do not use this parameter.
+Does not output commands to set ownership of objects to match the original database. By default, gs_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a system administrator (or the same user that owns all of the objects in the script). To make a script that can be restored by any user and that gives that user ownership of all objects, specify -O.
+This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Specifies a list of tables, views, sequences, or foreign tables to be dumped. You can use multiple -t parameters or wildcard characters to specify tables.
+When using wildcards to specify dump tables, quote the pattern to prevent the shell from expanding the wildcards.
+The -n and -N options have no effect when -t is used, because tables selected by using -t will be dumped regardless of those options, and non-table objects will not be dumped.
+The number of -t parameters must be less than or equal to 100.
+If the number of -t parameters is greater than 100, you are advised to use the --include-table-file parameter to replace some -t parameters.
+If -t is specified, gs_dump does not dump any other database objects that the selected tables might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be automatically restored to an empty database.
+-t tablename only dumps visible tables in the default search path. -t '*.tablename' dumps tablename tables in all the schemas of the dumped database. -t schema.table dumps tables in a specific schema.
+-t tablename does not export the trigger information from a table.
+For example:
+gs_dump -h host_name -p port_number postgres -f backup/bkp_shl2.sql -t schema1.table1 -t schema2.table2+
In the preceding example, schema1.table1 and schema2.table2 are dumped.
+Specifies a list of tables, views, sequences, or foreign tables not to be dumped. You can use multiple -T parameters or wildcard characters to specify tables.
+When both -t and -T are specified, an object matching both lists is dumped; the -t list takes precedence over the -T list.
+For example:
+gs_dump -h host_name -p port_number postgres -f backup/bkp_shl2.sql -T table1 -T table2+
In the preceding example, table1 and table2 are excluded from the dumping.
+Specifies the table file to be dumped.
+The content format of this parameter is the same as that of --include-table-file:
+schema1.table1
+schema2.table2
+...
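A hedged sketch of using such a list file (the file name, port, and schema-qualified table names are hypothetical):

```shell
# Write the schema.table entries, one per line, then pass the file to gs_dump.
cat > tables.list <<'EOF'
schema1.table1
schema2.table2
EOF
gs_dump -p 25308 postgres --include-table-file=tables.list -f backup/subset.sql
```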
+Prevents the dumping of access permissions (grant/revoke commands).
+Exports data by running the INSERT command with explicit column names {INSERT INTO table (column, ...) VALUES ...}. This will cause a slow restoration. However, since this option generates an independent command for each row, an error in reloading a row causes only the loss of the row rather than the entire table content.
+Disables the use of dollar sign ($) for function bodies, and forces them to be quoted using the SQL standard string syntax.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Does not dump data that matches any of table patterns. The pattern is interpreted according to the same rules as for -t.
+--exclude-table-data can be entered more than once to exclude tables matching any of several patterns. This option is helpful when you need the definition of a table but not its data.
+To exclude data of all tables in the database, see --schema-only.
+Dumps data using INSERT statements (rather than COPY). This will cause a slow restoration.
+However, since this option generates an independent command for each row, an error in reloading a row causes only the loss of that row rather than the entire table content. The restoration may fail if you rearrange the column order. The --column-inserts option is safe against column order changes, though even slower.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Does not issue commands to select tablespaces. All the objects will be created during the restoration process, no matter which tablespace is selected when using this option.
+This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Forcibly quotes all identifiers. This parameter is useful when you dump a database for migration to a later version, in which additional keywords may be introduced.
+Specifies dumped name sections (pre-data, data, or post-data).
+Uses a serializable transaction for the dump to ensure that the snapshot used is consistent with later database states. The dump should be taken at a point in the transaction stream at which nothing is anomalous, so that it neither fails nor causes other transactions to roll back with serialization failures that require them to be retried.
+This option is not beneficial for a dump intended only for disaster recovery. It is useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it, the dump may reflect a state that is not consistent with any serial execution of the transactions eventually committed.
+This option makes no difference if there are no active read-write transactions when gs_dump is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate period.
+Specifies that the standard SQL SET SESSION AUTHORIZATION command rather than ALTER OWNER is returned to ensure the object ownership. This makes dumping more standard. However, if a dump file contains objects that have historical problems, restoration may fail. A dump using SET SESSION AUTHORIZATION requires the system administrator rights, whereas ALTER OWNER requires lower permissions.
+Specifies that dumping data needs to be encrypted using AES128.
+Includes the TO NODE or TO GROUP statement in the dumped CREATE TABLE or CREATE FOREIGN TABLE statement. This parameter is valid only for HDFS and foreign tables.
+Includes information about the objects that depend on the specified object in the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
+Excludes information about the specified object from the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
+The existing files in plain-text, .tar, and custom formats will be overwritten. This parameter is not used for the directory format.
+For example:
+Assume that the backup.sql file exists in the current directory. If you specify -f backup.sql in the input command, and the backup.sql file is generated in the current directory, the original file will be overwritten.
+If the backup file already exists and --dont-overwrite-file is specified, an error will be reported with the message that the dump file exists.
+gs_dump -p port_number postgres -f backup.sql -F plain --dont-overwrite-file+
Connection parameters:
+Specifies the host name. If the value begins with a slash (/), it is used as the directory for the UNIX domain socket. The default is taken from the PGHOST environment variable (if available). Otherwise, a Unix domain socket connection is attempted.
+This parameter is used only for defining names of the hosts outside a cluster. The names of the hosts inside the cluster must be 127.0.0.1.
+Example: the host name
+Environment Variable: PGHOST
+Environment variable: PGPORT
+Specifies the user name of the host to connect to.
+Environment variable: PGUSER
+Never issues a password prompt. The connection attempt fails if the host requires password verification and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user password is required.
+Specifies the password used to connect to the host. If the host uses the trust authentication policy, the administrator does not need to enter the -W option. If the -W option is not provided and you are not a system administrator, the Dump Restore tool will prompt you to enter a password.
+Specifies a role name to be used for creating the dump. If this option is selected, the SET ROLE command will be issued after the database is connected to gs_dump. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_dump. It allows the user to switch to a role with the required permissions. Some installations have a policy against logging in directly as a system administrator. This option allows dumping data without violating the policy.
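For example (the user, role, and port are hypothetical):

```shell
# Connect as a low-privilege user, then SET ROLE to one with dump permissions.
gs_dump -U joe -W {password} -p 25308 postgres --role privileged_role -f backup/db.sql
```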
+Scenario 1
+If your database cluster has any local additions to the template1 database, restore the output of gs_dump into an empty database with caution. Otherwise, you are likely to obtain errors due to duplicate definitions of the added objects. To create an empty database without any local additions, copy data from template0 rather than template1. Example:
+CREATE DATABASE foo WITH TEMPLATE template0;+
The .tar format file size must be smaller than 8 GB (a limitation of the tar file format). The total size of a .tar archive or of any other output format is not otherwise limited, except possibly by the OS.
+The dump file generated by gs_dump does not contain the statistics used by the optimizer to make execution plans. Therefore, you are advised to run ANALYZE after restoring from a dump file to ensure optimal performance. The dump file does not contain any ALTER DATABASE ... SET commands; these settings are dumped by gs_dumpall, along with database users and other installation settings.
+Scenario 2
+When the value of SEQUENCE reaches the maximum or minimum value, backing up the value of SEQUENCE using gs_dump will exit due to an execution error. Handle the problem by referring to the following example:
+Error message example:
The sequence is defined as follows:
+CREATE SEQUENCE seq INCREMENT 1 MINVALUE 1 MAXVALUE 3 START WITH 1;+
Perform the gs_dump backup.
+gs_dump -U dbadmin -W {password} -p 37300 postgres -t PUBLIC.seq -f backup/MPPDB_backup.sql
+gs_dump[port='37300'][postgres][2019-12-27 15:09:49]: The total objects number is 337.
+gs_dump[port='37300'][postgres][2019-12-27 15:09:49]: WARNING: get invalid xid from GTM because connection is not established
+gs_dump[port='37300'][postgres][2019-12-27 15:09:49]: WARNING: Failed to receive GTM rollback transaction response for aborting prepared (null).
+gs_dump: [port='37300'] [postgres] [archiver (db)] [2019-12-27 15:09:49] query failed: ERROR: Can not connect to gtm when getting gxid, there is a connection error.
+gs_dump: [port='37300'] [postgres] [archiver (db)] [2019-12-27 15:09:49] query was: RELEASE bfnextval
+Handling procedure:
+gsql -p 37300 postgres -r -c "ALTER SEQUENCE PUBLIC.seq MAXVALUE 10;"+
gs_dump -U dbadmin -W {password} -p 37300 postgres -t PUBLIC.seq -f backup/MPPDB_backup.sql
+gs_dump[port='37300'][postgres][2019-12-27 15:10:53]: The total objects number is 337.
+gs_dump[port='37300'][postgres][2019-12-27 15:10:53]: [100.00%] 337 objects have been dumped.
+gs_dump[port='37300'][postgres][2019-12-27 15:10:53]: dump database postgres successfully
+gs_dump[port='37300'][postgres][2019-12-27 15:10:53]: total time: 230 ms
+The gs_dump command does not support backup of the SEQUENCE value in this scenario.
SQL does not support modifying MAXVALUE when SEQUENCE has reached the maximum value of 2^63-2, or modifying MINVALUE when it has reached the minimum value.
+Scenario 3
+gs_dump is mainly used to export metadata of the entire database. The performance of exporting a single table is optimized, but the performance of exporting multiple tables is poor. If multiple tables need to be exported, you are advised to export them one by one. Example:
gs_dump -U dbadmin -W {password} -p 37300 postgres -t public.table01 -s -f backup/table01.sql
gs_dump -U dbadmin -W {password} -p 37300 postgres -t public.table02 -s -f backup/table02.sql
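The one-table-at-a-time pattern recommended above can be scripted. The sketch below is illustrative only: the helper name, table list, and output paths are hypothetical, the -W password option is omitted, and DRY_RUN=1 prints the commands instead of executing them so the pattern can be inspected without gs_dump installed.

```shell
# Sketch: export each listed table's definition in its own gs_dump run.
# With DRY_RUN=1 (the default here) commands are printed, not executed.
dump_tables() {
  for tbl in "$@"; do
    out="backup/${tbl#public.}.sql"   # e.g. public.table01 -> backup/table01.sql
    cmd="gs_dump -U dbadmin -p 37300 postgres -t $tbl -s -f $out"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "$cmd"
    else
      $cmd
    fi
  done
}

dump_tables public.table01 public.table02
```

Each invocation dumps exactly one table, which matches the per-table export performance note above.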
When services are stopped or during off-peak hours, you can specify the --non-lock-table option to improve gs_dump performance. Example:
+gs_dump -U dbadmin -W {password} -p 37300 postgres -t public.table03 -s --non-lock-table -f backup/table03.sql
+Use gs_dump to dump a database as a SQL text file or a file in other formats.
In the following examples, {password} indicates the password configured by the database user. backup/MPPDB_backup.sql indicates an exported file, where backup is a path relative to the current directory. 37300 indicates the port number of the database server. postgres indicates the name of the database to be accessed.
+Before exporting files, ensure that the directory exists and you have the read and write permissions on the directory.
+Example 1: Use gs_dump to export the full information of the postgres database. The exported MPPDB_backup.sql file is in plain-text format.
+gs_dump -U dbadmin -W {password} -f backup/MPPDB_backup.sql -p 37300 postgres -F p
+gs_dump[port='37300'][postgres][2018-06-27 09:49:17]: The total objects number is 356.
+gs_dump[port='37300'][postgres][2018-06-27 09:49:17]: [100.00%] 356 objects have been dumped.
+gs_dump[port='37300'][postgres][2018-06-27 09:49:17]: dump database postgres successfully
+gs_dump[port='37300'][postgres][2018-06-27 09:49:17]: total time: 1274 ms
Use gsql to import data from the exported plain-text file.
+Example 2: Use gs_dump to export the full information of the postgres database. The exported MPPDB_backup.tar file is in .tar format.
+gs_dump -U dbadmin -W {password} -f backup/MPPDB_backup.tar -p 37300 postgres -F t
+gs_dump[port='37300'][postgres][2018-06-27 10:02:24]: The total objects number is 1369.
+gs_dump[port='37300'][postgres][2018-06-27 10:02:53]: [100.00%] 1369 objects have been dumped.
+gs_dump[port='37300'][postgres][2018-06-27 10:02:53]: dump database postgres successfully
+gs_dump[port='37300'][postgres][2018-06-27 10:02:53]: total time: 50086 ms
+Example 3: Use gs_dump to export the full information of the postgres database. The exported MPPDB_backup.dmp file is in custom format.
+gs_dump -U dbadmin -W {password} -f backup/MPPDB_backup.dmp -p 37300 postgres -F c
+gs_dump[port='37300'][postgres][2018-06-27 10:05:40]: The total objects number is 1369.
+gs_dump[port='37300'][postgres][2018-06-27 10:06:03]: [100.00%] 1369 objects have been dumped.
+gs_dump[port='37300'][postgres][2018-06-27 10:06:03]: dump database postgres successfully
+gs_dump[port='37300'][postgres][2018-06-27 10:06:03]: total time: 36620 ms
+Example 4: Use gs_dump to export the full information of the postgres database. The exported MPPDB_backup file is in directory format.
+gs_dump -U dbadmin -W {password} -f backup/MPPDB_backup -p 37300 postgres -F d
+gs_dump[port='37300'][postgres][2018-06-27 10:16:04]: The total objects number is 1369.
+gs_dump[port='37300'][postgres][2018-06-27 10:16:23]: [100.00%] 1369 objects have been dumped.
+gs_dump[port='37300'][postgres][2018-06-27 10:16:23]: dump database postgres successfully
+gs_dump[port='37300'][postgres][2018-06-27 10:16:23]: total time: 33977 ms
+Example 5: Use gs_dump to export the information of the postgres database, excluding the information of the table specified in the /home/MPPDB_temp.sql file. The exported MPPDB_backup.sql file is in plain-text format.
+gs_dump -U dbadmin -W {password} -p 37300 postgres --exclude-table-file=/home/MPPDB_temp.sql -f backup/MPPDB_backup.sql
+gs_dump[port='37300'][postgres][2018-06-27 10:37:01]: The total objects number is 1367.
+gs_dump[port='37300'][postgres][2018-06-27 10:37:22]: [100.00%] 1367 objects have been dumped.
+gs_dump[port='37300'][postgres][2018-06-27 10:37:22]: dump database postgres successfully
+gs_dump[port='37300'][postgres][2018-06-27 10:37:22]: total time: 37017 ms
+Example 6: Use gs_dump to export only the information about the views that depend on the testtable table. Create another testtable table, and then restore the views that depend on it.
+Back up only the views that depend on the testtable table.
gs_dump -s -p 37300 postgres -t PUBLIC.testtable --include-depend-objs --exclude-self -f backup/MPPDB_backup.sql -F p
gs_dump[port='37300'][postgres][2018-06-15 14:12:54]: The total objects number is 331.
gs_dump[port='37300'][postgres][2018-06-15 14:12:54]: [100.00%] 331 objects have been dumped.
gs_dump[port='37300'][postgres][2018-06-15 14:12:54]: dump database postgres successfully
gs_dump[port='37300'][postgres][2018-06-15 14:12:54]: total time: 327 ms
Change the name of the testtable table.
+gsql -p 37300 postgres -r -c "ALTER TABLE PUBLIC.testtable RENAME TO testtable_bak;"+
Create a testtable table.
+CREATE TABLE PUBLIC.testtable(a int, b int, c int);+
Restore the views for the new testtable table.
+gsql -p 37300 postgres -r -f backup/MPPDB_backup.sql+
gs_dumpall and gs_restore
+gs_dumpall is a tool provided by GaussDB(DWS) to export all database information, including the data of the default postgres database, data of user-specified databases, and global objects of all databases in a cluster.
+When gs_dumpall is used to export data, other users still can access the databases (readable or writable) in a cluster.
+gs_dumpall can export complete, consistent data. For example, if gs_dumpall is started to export all databases from a cluster at T1, data of the databases at that time point will be exported, and modifications on the databases after that time point will not be exported.
+To export all databases in a cluster:
+Both of the preceding exported files are plain-text SQL scripts. Use gsql to execute them to restore databases.
+gs_dumpall [OPTION]...+
Common parameters:
+Sends the output to the specified file. If this parameter is omitted, the standard output is used.
+Specifies the verbose mode. This will cause gs_dumpall to output detailed object comments and start/stop times to the dump file, and progress messages to standard error.
+Do not keep waiting to obtain shared table locks at the beginning of the dump. Consider it as failed if you are unable to lock a table within the specified time. The timeout duration can be specified in any of the formats accepted by SET statement_timeout.
+Shows help about the command line parameters for gs_dumpall and exits.
+Dump parameters:
+Runs SQL statements to delete databases before rebuilding them. Statements for dumping roles and tablespaces are added.
+Dumps only global objects (roles and tablespaces) but no databases.
+Dumps object identifiers (OIDs) as parts of the data in each table. Use this parameter if your application references the OID columns in some way (for example, in a foreign key constraint). If the preceding situation does not occur, do not use this parameter.
Do not output commands to set ownership of objects to match the original database. By default, gs_dumpall issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements. These statements will fail when the script is run unless it is started by a system administrator (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, with that user owning all the objects, specify -O.
+Prevents the dumping of access permissions (grant/revoke commands).
+Exports data by running the INSERT command with explicit column names {INSERT INTO table (column, ...) VALUES ...}. This will cause a slow restoration. However, since this option generates an independent command for each row, an error in reloading a row causes only the loss of the row rather than the entire table content.
+Disables the use of dollar sign ($) for function bodies, and forces them to be quoted using the SQL standard string syntax.
+Specifies a reserved port for function expansion. This parameter is not recommended.
Dumps data as INSERT statements (rather than COPY). This will cause a slow restoration, and the restoration may fail if you rearrange the column order. The --column-inserts parameter is safer against column order changes, though even slower.
+Specifies a reserved port for function expansion. This parameter is not recommended.
Do not output statements to create tablespaces or select tablespaces for objects. With this option, all objects are created in the default tablespace during restoration, regardless of the tablespace originally selected.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Forcibly quotes all identifiers. This parameter is useful when you dump a database for migration to a later version, in which additional keywords may be introduced.
+Specifies that the standard SQL SET SESSION AUTHORIZATION command rather than ALTER OWNER is returned to ensure the object ownership. This makes dumping more standard. However, if a dump file contains objects that have historical problems, restoration may fail. A dump using SET SESSION AUTHORIZATION requires the system administrator rights, whereas ALTER OWNER requires lower permissions.
+Specifies that dumping data needs to be encrypted using AES128.
+Backs up all CREATE EXTENSION statements if the include-extensions parameter is set.
+Includes the TO NODE statement in the dumped CREATE TABLE statement.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Includes workload resource manager (resource pool, load group, and load group mapping) during the dump.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Specifies the number of concurrent backup processes. The value range is 1-1000.
+Connection parameters:
+Specifies the host name. If the value begins with a slash (/), it is used as the directory for the UNIX domain socket. The default value is taken from the PGHOST environment variable. If it is not set, a UNIX domain socket connection is attempted.
+This parameter is used only for defining names of the hosts outside a cluster. The names of the hosts inside the cluster must be 127.0.0.1.
+Environment Variable: PGHOST
+Specifies the name of the database connected to dump all objects and discover other databases to be dumped. If this parameter is not specified, the postgres database will be used. If the postgres database does not exist, template1 will be used.
+Specifies the TCP port listened to by the server or the local UNIX domain socket file name extension to ensure a correct connection. The default value is the PGPORT environment variable.
+Environment variable: PGPORT
Specifies the user name used for the connection.
+Environment variable: PGUSER
+Never issue a password prompt. The connection attempt fails if the host requires password verification and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user password is required.
Specifies the user password used for the connection. If the host uses the trust authentication policy, the administrator does not need to enter the -W option. If -W is not provided and you are not a system administrator, the Dump Restore tool will prompt you for a password.
+Specifies a role name to be used for creating the dump. This option causes gs_dumpall to issue the SET ROLE statement after connecting to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_dumpall. It allows the user to switch to a role with the required permissions. Some installations have a policy against logging in directly as a system administrator. This option allows dumping data without violating the policy.
gs_dumpall internally invokes gs_dump. For details about the diagnosis information, see gs_dump.
After a gs_dumpall dump is restored, run ANALYZE on each database so that the optimizer has useful statistics.
gs_dumpall requires all needed tablespace directories to exist before the restoration. Otherwise, creation of databases in non-default locations will fail.
+Run gs_dumpall to export all databases from a cluster at a time.
+gs_dumpall supports only plain-text format export. Therefore, only gsql can be used to restore a file exported using gs_dumpall.
gs_dumpall -f backup/bkp2.sql -p 37300
gs_dump[port='37300'][dbname='postgres'][2018-06-27 09:55:09]: The total objects number is 2371.
gs_dump[port='37300'][dbname='postgres'][2018-06-27 09:55:35]: [100.00%] 2371 objects have been dumped.
gs_dump[port='37300'][dbname='postgres'][2018-06-27 09:55:46]: dump database dbname='postgres' successfully
gs_dump[port='37300'][dbname='postgres'][2018-06-27 09:55:46]: total time: 55567 ms
gs_dumpall[port='37300'][2018-06-27 09:55:46]: dumpall operation successful
gs_dumpall[port='37300'][2018-06-27 09:55:46]: total time: 56088 ms
gs_dump and gs_restore
gs_restore is a tool provided by GaussDB(DWS) to import files exported by gs_dump into a database.
+It has the following functions:
+If a database is specified, data is imported in the database. For parallel import, the password for connecting to the database is required.
+If the database storing imported data is not specified, a script containing the SQL statement to recreate the database is created and written to a file or standard output. This script output is equivalent to the plain text output format of gs_dump.
+gs_restore [OPTION]... FILE+
Common parameters:
+Connects to the dbname database and imports data to the database.
+Specifies the output file for the generated script, or uses the output file in the list specified using -l.
+The default is the standard output.
+-f cannot be used in conjunction with -d.
Specifies the format of the archive. The format does not need to be specified, because gs_restore determines it automatically.
+Value range:
Lists the contents of the archive. The output can be used as input to the -L parameter. If filtering parameters, such as -n or -t, are used together with -l, they will restrict the listed items.
+Shows help information about the parameters of gs_restore and exits.
+Import parameters
+Imports only the data, not the schema (data definition). gs_restore incrementally imports data.
+Cleans (deletes) existing database objects in the database to be restored before recreating them.
+Creates the database before importing data to it. (When this parameter is used, the database named with -d is used to issue the initial CREATE DATABASE command. All data is imported to the database that appears in the archive files.)
Exits if an error occurs while sending SQL statements to the database. Otherwise, commands continue to be sent and error information is displayed when the import ends.
+Imports only the definition of the specified index. Multiple indexes can be imported. Enter -I index multiple times to import multiple indexes.
+For example:
+gs_restore -h host_name -p port_number -d gaussdb -I Index1 -I Index2 backup/MPPDB_backup.tar
+In this example, Index1 and Index2 will be imported.
Specifies the number of concurrent jobs for the most time-consuming operations of gs_restore (such as loading data, creating indexes, or creating constraints). This parameter can greatly reduce the time needed to import a large database to a server running on a multiprocessor machine.
+Each job is one process or one thread, depending on the OS; and uses a separate connection to the server.
+The optimal value of this option depends on the hardware settings of the server, the client, the network, the number of CPU cores, and hard disk settings. It is recommended that the parameter be set to the number of CPU cores on the server. In addition, a larger value can also lead to faster import in many cases. However, an overly large value will lead to decreased performance because of thrashing.
This parameter supports custom-format archives only. The input file must be a regular file (not a pipe). This parameter can be ignored when you select the script method rather than connecting to a database server. In addition, multiple jobs cannot be used together with the --single-transaction parameter.
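Since the guidance above suggests starting from the server's CPU core count, a starting value for -j can be derived mechanically. This is only an illustrative sketch: the cap of 16 is an arbitrary safeguard, not from the tool, and the printed command is a template, not a real invocation.

```shell
# Sketch: pick a starting -j value from the CPU core count (per the advice
# above); the cap of 16 is arbitrary and only guards against thrashing.
jobs=$(nproc)
if [ "$jobs" -gt 16 ]; then
  jobs=16
fi
echo "try: gs_restore -j $jobs -d gaussdb backup/MPPDB_backup.dmp"
```

The best value still depends on the server, client, network, and disk settings, so treat this only as a first guess to benchmark against.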
+Imports only archive elements that are listed in list-file and imports them in the order that they appear in the file. If filtering parameters, such as -n or -t, are used in conjunction with -L, they will further limit the items to be imported.
+list-file is normally created by editing the output of a previous -l parameter. File lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the beginning of the row. An example is provided in this document.
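The list-file editing described above can be sketched as follows. The table-of-contents entries and file names here are hypothetical (a real list comes from a prior `-l` run), and the final gs_restore command is shown only as a comment.

```shell
# Sketch: trim a table-of-contents file before importing with -L.
# The two entries below are illustrative, not real gs_dump output.
printf '%s\n' \
  '123; 1259 16384 TABLE public table1 dbadmin' \
  '124; 1259 16385 TABLE public table2 dbadmin' > toc.list

# Comment out table2 with a leading ';' so it is skipped on import.
sed 's/^124;/;124;/' toc.list > toc.edited
cat toc.edited
# A later import would then use: gs_restore -L toc.edited -d gaussdb backup.dmp
```

Lines can also simply be deleted or reordered; gs_restore imports the remaining entries in the order they appear in the file.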
Restores only objects in the listed schemas.
+This parameter can be used in conjunction with the -t parameter to import a specific table.
+Entering -n schemaname multiple times can import multiple schemas.
+For example:
+gs_restore -h host_name -p port_number -d gaussdb -n sch1 -n sch2 backup/MPPDB_backup.tar
+In this example, sch1 and sch2 will be imported.
Do not output commands to set ownership of objects to match the original database. By default, gs_restore issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements; these statements will fail unless the initial connection to the database is made by a system administrator (or the same user that owns all of the objects in the script). With -O, any user name can be used for the initial connection, and this user will own all the created objects.
+Imports only listed functions. You need to correctly spell the function name and the parameter based on the contents of the dump file in which the function exists.
+Entering -P alone means importing all function-name(args) functions in a file. Entering -P with -n means importing the function-name(args) functions in a specified schema. Entering -P multiple times and using -n once means that all imported functions are in the -n schema by default.
+You can enter -n schema-name -P 'function-name(args)' multiple times to import functions in specified schemas.
+For example:
+./gs_restore -h host_name -p port_number -d gaussdb -n test1 -P 'Func1(integer)' -n test2 -P 'Func2(integer)' backup/MPPDB_backup.tar
+In this example, both Func1 (i integer) in the test1 schema and Func2 (j integer) in the test2 schema will be imported.
+Imports only schemas (data definitions), instead of data (table content). The current sequence value will not be imported.
+Specifies a reserved port for function expansion. This parameter is not recommended.
+Imports only listed table definitions or data, or both. This parameter can be used in conjunction with the -n parameter to specify a table object in a schema. When -n is not entered, the default schema is PUBLIC. Entering -n schemaname -t tablename multiple times can import multiple tables in a specified schema.
+For example:
+Import table1 in the PUBLIC schema.
+gs_restore -h host_name -p port_number -d gaussdb -t table1 backup/MPPDB_backup.tar
+Import test1 in the test1 schema and test2 in the test2 schema.
+gs_restore -h host_name -p port_number -d gaussdb -n test1 -t test1 -n test2 -t test2 backup/MPPDB_backup.tar
Import table1 in the PUBLIC schema and table1 in the test1 schema.
+gs_restore -h host_name -p port_number -d gaussdb -n PUBLIC -t table1 -n test1 -t table1 backup/MPPDB_backup.tar
+-t does not support the schema_name.table_name input format.
+Prevents the import of access permissions (GRANT/REVOKE commands).
Executes the import as a single transaction (that is, commands are wrapped in BEGIN/COMMIT).
This ensures that either all the commands complete successfully or no changes are applied. This parameter implies --exit-on-error.
+Specifies a reserved port for function expansion. This parameter is not recommended.
By default, table data is imported even if the statement to create the table fails (for example, because the table already exists). With this parameter, data for such tables is skipped. This is useful if the target database already contains the desired table contents.
+This parameter takes effect only when you import data directly into a database, not when you output SQL scripts.
+Specifies a reserved port for function expansion. This parameter is not recommended.
Does not issue commands to select tablespaces. If this parameter is used, all objects are created in the default tablespace during the import, regardless of the tablespace originally selected.
+Imports the listed sections (such as pre-data, data, or post-data).
+Is used for plain-text backup.
Outputs the SET SESSION AUTHORIZATION statement instead of the ALTER OWNER statement to determine object ownership. This makes the dump more standards-compatible, but if objects referenced in the exported files have issues, the import may fail. Only administrators can use SET SESSION AUTHORIZATION to restore data, and they must manually change and verify the passwords in the exported files before the import. ALTER OWNER requires lower permissions.
+Specifies that the key length of AES128 must be 16 bytes.
If the dump is encrypted, enter the --with-key <keyname> parameter in the gs_restore command; otherwise, an error message is displayed.
The key must be the same as the one used when creating the dump.
+CREATE DATABASE foo WITH TEMPLATE template0;+
1. The -d/--dbname and -f/--file parameters cannot be used together.
2. The -s/--schema-only and -a/--data-only parameters cannot be used together.
3. The -c/--clean and -a/--data-only parameters cannot be used together.
4. When --single-transaction is used, -j/--jobs must be a single job.
5. --role must be used together with --rolepassword.
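The mutually exclusive pairs above can be caught before invoking the tool. The wrapper below is purely hypothetical (gs_restore performs its own validation); it only demonstrates the constraint logic for the short option forms.

```shell
# Hypothetical pre-flight check: reject the mutually exclusive gs_restore
# option pairs listed above (-d/-f, -s/-a, -c/-a). Short options only.
check_restore_opts() {
  opts=" $* "
  case "$opts" in *" -d "*)
    case "$opts" in *" -f "*) echo "error: -d and -f cannot be used together"; return 1 ;; esac ;;
  esac
  case "$opts" in *" -s "*)
    case "$opts" in *" -a "*) echo "error: -s and -a cannot be used together"; return 1 ;; esac ;;
  esac
  case "$opts" in *" -c "*)
    case "$opts" in *" -a "*) echo "error: -c and -a cannot be used together"; return 1 ;; esac ;;
  esac
  echo ok
}
```

For example, `check_restore_opts -d gaussdb -f out.sql` reports an error, while `check_restore_opts -d gaussdb -c` passes.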
+Connection parameters:
+Specifies the host name. If the value begins with a slash (/), it is used as the directory for the UNIX domain socket. The default value is taken from the PGHOST environment variable. If it is not set, a UNIX domain socket connection is attempted.
+This parameter is used only for defining names of the hosts outside a cluster. The names of the hosts inside the cluster must be 127.0.0.1.
+Specifies the TCP port listened to by the server or the local UNIX domain socket file name extension to ensure a correct connection. The default value is the PGPORT environment variable.
+Never issue a password prompt. The connection attempt fails if the host requires password verification and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user password is required.
Specifies the user password used for the connection. If the host uses the trust authentication policy, the administrator does not need to enter the -W parameter. If -W is not provided and you are not a system administrator, gs_restore will prompt you for a password.
+Specifies a role name for the import operation. If this parameter is selected, the SET ROLE statement will be issued after gs_restore connects to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_restore. This parameter allows the user to switch to a role with the required permissions. Some installations have a policy against logging in directly as the initial user. This parameter allows data to be imported without violating the policy.
+Special case: Execute the gsql tool. Run the following commands to import the MPPDB_backup.sql file in the exported folder (in plain-text format) generated by gs_dump/gs_dumpall to the gaussdb database:
gsql -d gaussdb -p 8000 -W {password} -f /home/omm/test/MPPDB_backup.sql
SET
SET
SET
SET
SET
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
SET
CREATE INDEX
REVOKE
REVOKE
GRANT
GRANT
total time: 30476 ms
gs_restore is used to import the files exported by gs_dump.
+Example 1: Execute the gs_restore tool to import the exported MPPDB_backup.dmp file (in custom format) to the gaussdb database.
gs_restore -W {password} backup/MPPDB_backup.dmp -p 8000 -d gaussdb
gs_restore: restore operation successful
gs_restore: total time: 13053 ms
Example 2: Execute the gs_restore tool to import the exported MPPDB_backup.tar file (in tar format) to the gaussdb database.
gs_restore backup/MPPDB_backup.tar -p 8000 -d gaussdb
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 21203 ms
Example 3: Execute the gs_restore tool to import the exported MPPDB_backup file (in directory format) to the gaussdb database.
gs_restore backup/MPPDB_backup -p 8000 -d gaussdb
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 21003 ms
Example 4: Execute the gs_restore tool and run the following commands to import the MPPDB_backup.dmp file (in custom format). Specifically, import all the object definitions and data in the PUBLIC schema. Existing objects are deleted from the target database before the import. If an existing object references an object in another schema, you need to manually delete the referenced object first.
gs_restore backup/MPPDB_backup.dmp -p 8000 -d gaussdb -e -c -n PUBLIC
gs_restore: [archiver (db)] Error while PROCESSING TOC:
gs_restore: [archiver (db)] Error from TOC entry 313; 1259 337399 TABLE table1 gaussdba
gs_restore: [archiver (db)] could not execute query: ERROR: cannot drop table table1 because other objects depend on it
DETAIL: view t1.v1 depends on table table1
HINT: Use DROP ... CASCADE to drop the dependent objects too.
    Command was: DROP TABLE public.table1;
Manually delete the referenced object and create it again after the import is complete.
gs_restore backup/MPPDB_backup.dmp -p 8000 -d gaussdb -e -c -n PUBLIC
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 2203 ms
Example 5: Execute the gs_restore tool and run the following commands to import the MPPDB_backup.dmp file (in custom format). Specifically, import only the definition of table1 in the PUBLIC schema.
gs_restore backup/MPPDB_backup.dmp -p 8000 -d gaussdb -e -c -s -n PUBLIC -t table1
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 21000 ms
Example 6: Execute the gs_restore tool and run the following commands to import the MPPDB_backup.dmp file (in custom format). Specifically, import only the data of table1 in the PUBLIC schema.
gs_restore backup/MPPDB_backup.dmp -p 8000 -d gaussdb -e -a -n PUBLIC -t table1
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 20203 ms
gs_dump and gs_dumpall
+gds_check is used to check the GDS deployment environment, including the OS parameters, network environment, and disk usage. It also supports the recovery of system parameters. This helps detect potential problems during GDS deployment and running, improving the execution success rate.
OS parameters checked, with their recommended values:

| Parameter | Recommended Value |
|---|---|
| net.core.somaxconn | 65535 |
| net.ipv4.tcp_max_syn_backlog | 65535 |
| net.core.netdev_max_backlog | 65535 |
| net.ipv4.tcp_retries1 | 5 |
| net.ipv4.tcp_retries2 | 12 |
| net.ipv4.ip_local_port_range | 26000 to 65535 |
| MTU | 1500 |
| net.core.wmem_max | 21299200 |
| net.core.rmem_max | 21299200 |
| net.core.wmem_default | 21299200 |
| net.core.rmem_default | 21299200 |
| max handler | 1000000 |
| vm.swappiness | 10 |
Check items that produce warnings:

| Check Item | Warning |
|---|---|
| Disk space usage | Greater than or equal to 70% and less than 90% |
| Inode usage | Greater than or equal to 70% and less than 90% |
Check items that produce errors:

| Check Item | Error |
|---|---|
| Network connectivity | 100% packet loss |
| NIC multi-queue | NIC multi-queue is not enabled or queues are not bound to different CPUs (can be repaired by the fix operation) |
gds_check -t check --host [/path/to/hostfile | ipaddr1,ipaddr2...] --ping-host [/path/to/pinghostfile | ipaddr1,ipaddr2...] [--detail]+
gds_check -t fix --host [/path/to/hostfile | ipaddr1,ipaddr2...] [--detail]+
Operation type, indicating check or recovery.
+The value can be check or fix.
+IP addresses of the nodes to be checked or recovered.
Value: IP address list, in file or character-string format
Destination IP address for the network ping check on each node to be checked.
Value: IP address list, in file or character-string format. Generally, the value is the IP address of a DN, CN, or gateway.
Displays detailed information about the check and repair items and saves the information to logs.
Perform a check. Both --host and --ping-host are in character string format.

gds_check -t check --host 192.168.1.100,192.168.1.101 --ping-host 192.168.2.100

Perform a check. --host is in character string format and --ping-host is in file format.

gds_check -t check --host 192.168.1.100,192.168.1.101 --ping-host /home/gds/iplist

cat /home/gds/iplist
192.168.2.100
192.168.2.101

Perform a check. --host is in file format and --ping-host is in character string format.

gds_check -t check --host /home/gds/iplist --ping-host 192.168.1.100,192.168.1.101

Perform a recovery. --host is in character string format.

gds_check -t fix --host 192.168.1.100,192.168.1.101

Run the following command to perform the check, print the detailed information, and save it to logs:

gds_check -t check --host 192.168.1.100 --detail

Run the following command to perform the repair, print the detailed information, and save it to logs:

gds_check -t fix --host 192.168.1.100 --detail
gds_ctl is a script tool used for starting or stopping GDS service processes in batches. You can start or stop GDS service processes, which use the same port, on multiple nodes at a time, and set a daemon for each GDS process during the startup.
gds_ctl start --host [/path/to/hostfile | ipaddr1,ipaddr2...] -p PORT -d DATADIR -H ALLOW_IPs [gds other original options]
gds_ctl stop --host [/path/to/hostfile | ipaddr1,ipaddr2...] -p PORT
gds_ctl restart --host [/path/to/hostfile | ipaddr1,ipaddr2...] -p PORT
Sets the directory of the data file to be imported. If the GDS process has the permission, the directory specified by -d will be automatically created.
+This parameter is used together with the -R parameter to support automatic log splitting. After the -R parameter is set, GDS generates a new file based on the set value to prevent a single log file from being too large.
+Generation rule: By default, GDS identifies only files with the .log extension name and generates new log files.
+For example, if -l is set to gds.log and -R is set to 20 MB, a gds-2020-01-17_115425.log file will be created when the size of gds.log reaches 20 MB.
+If the log file name specified by -l does not end with .log, for example, gds.log.txt, the name of the new log file is gds.log-2020-01-19_122739.txt.
+When GDS is started, it checks whether the log file specified by -l exists. If the log file exists, a new log file is generated based on the current date and time, and the original log file is not overwritten.
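The log-splitting naming rule above can be sketched as a small helper; rotated_name is an illustrative function, not a GDS API. The timestamp is inserted before the final extension of the file name passed to -l.

```python
import os

def rotated_name(log_file, ts):
    """Derive the new log file name GDS generates when log_file reaches
    the -R size limit: the timestamp goes before the final extension."""
    base, ext = os.path.splitext(log_file)
    return f"{base}-{ts}{ext}"

print(rotated_name("gds.log", "2020-01-17_115425"))      # gds-2020-01-17_115425.log
print(rotated_name("gds.log.txt", "2020-01-19_122739"))  # gds.log-2020-01-19_122739.txt
```

Both outputs match the examples in the rule above: only the last extension is preserved, and everything before it becomes the base name.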
Sets the hosts that are allowed to connect to GDS. The value must be in CIDR format, and this parameter is supported on Linux only. To configure multiple network segments, separate them with commas, for example, -H 10.10.0.0/24,10.10.5.0/24.
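A quick way to sanity-check an -H value before starting GDS is to parse it with Python's standard ipaddress module. parse_allow_list below is an illustrative helper, not part of GDS.

```python
import ipaddress

def parse_allow_list(value):
    """Validate a comma-separated CIDR list such as an -H argument.
    Raises ValueError on a malformed segment."""
    return [ipaddress.ip_network(seg.strip()) for seg in value.split(",")]

nets = parse_allow_list("10.10.0.0/24,10.10.5.0/24")
# Check whether a client address falls inside any allowed segment:
print(any(ipaddress.ip_address("10.10.5.7") in n for n in nets))  # True
```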
+Sets the saving path of error logs generated when data is imported.
+Default value: data file directory
+Sets the upper threshold of error logs generated when data is imported.
+Value range: 0 < size < 1 TB. The value must be a positive integer plus the unit. The unit can be KB, MB, or GB.
+Sets the upper limit of the exported file size.
+Value range: 1 MB < size < 100 TB. The value must be a positive integer plus the unit. The unit can be KB, MB, or GB. If KB is used, the value must be greater than 1024 KB.
+Sets the maximum size of a single GDS log file specified by -l.
+Value range: 1 MB < size < 1 TB. The value must be a positive integer plus the unit. The unit can be KB, MB, or GB. If KB is used, the value must be greater than 1024 KB.
+Default value: 16 MB
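The size-with-unit format shared by -E, -S, and -R can be validated with a small parser. parse_size and its range arguments below are illustrative, not part of the gds binary.

```python
import re

SIZE_UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_size(value, lower=1024**2, upper=1024**4):
    """Parse a size such as '16MB' or '20480KB' into bytes and check it
    against an exclusive (lower, upper) range, mirroring rules like
    '1 MB < size < 1 TB'."""
    m = re.fullmatch(r"(\d+)\s*(KB|MB|GB)", value)
    if not m:
        raise ValueError(f"invalid size: {value!r}")
    size = int(m.group(1)) * SIZE_UNITS[m.group(2)]
    if not (lower < size < upper):
        raise ValueError(f"size out of range: {value}")
    return size

print(parse_size("16MB"))     # 16777216
print(parse_size("20480KB"))  # 20971520
```

Note how the KB case naturally enforces "greater than 1024 KB": any smaller KB value falls below the 1 MB lower bound and is rejected.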
Sets the number of concurrent import and export worker threads.

Value range: an integer greater than 0 and no greater than 200.

Default value: 8

Recommended value: 2 x the number of CPU cores for common file import and export; 64 for pipe file import and export.

If a large number of pipe files are imported or exported concurrently, the value must be greater than or equal to the number of concurrent services.
Sets the status file. This parameter is supported on Linux only.

Recursively traverses files in the data directory. This parameter is supported on Linux only.
+Uses the SSL authentication mode to communicate with clusters.
+Sets the path for storing the authentication certificates when the SSL authentication mode is used.
+Sets the debug log level of the GDS to control the output of GDS debug logs.
+Value range: 0, 1, and 2
Sets the timeout period for GDS pipe operations.

Value range: greater than 1s. Specify a positive integer with a time unit: seconds (s), minutes (m), or hours (h), for example, 3600s, 60m, or 1h (all one hour).

Default value: 1h (that is, 60m or 3600s)
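The timeout format can be sketched as follows; parse_timeout is an illustrative helper, not part of gds.

```python
import re

UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_timeout(value):
    """Parse a GDS-style timeout such as '3600s', '60m', or '1h'
    into seconds."""
    m = re.fullmatch(r"(\d+)([smh])", value)
    if not m:
        raise ValueError(f"invalid timeout: {value!r}")
    return int(m.group(1)) * UNITS[m.group(2)]

# 3600s, 60m, and 1h are all the same duration, as the text states:
print(parse_timeout("3600s") == parse_timeout("60m") == parse_timeout("1h"))  # True
```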
Start a GDS process. Its data files are stored in the /data directory, the IP address is 192.168.0.90, and the listening port number is 5000.

gds_ctl start --host 192.168.0.90 -d /data/ -p 5000 -H 10.10.0.1/24 -D

Start GDS processes in batches. The data files are stored in the /data directory, the IP addresses are 192.168.0.90, 192.168.0.91, and 192.168.0.92, and the listening port number is 5000.

gds_ctl start --host 192.168.0.90,192.168.0.91,192.168.0.92 -d /data/ -p 5000 -H 0/0 -D

Stop GDS processes on nodes 192.168.0.90, 192.168.0.91, and 192.168.0.92 whose port number is 5000 in batches.

gds_ctl stop --host 192.168.0.90,192.168.0.91,192.168.0.92 -p 5000

Restart GDS processes on nodes 192.168.0.90, 192.168.0.91, and 192.168.0.92 whose port number is 5000 in batches.

gds_ctl restart --host 192.168.0.90,192.168.0.91,192.168.0.92 -p 5000
gds_install is a script tool used to install GDS in batches, improving GDS deployment efficiency.
gds_install -I /path/to/install_dir -U user -G user_group --pkg /path/to/pkg.tar.gz --host [/path/to/hostfile | ipaddr1,ipaddr2...] [--ping-host [/path/to/hostfile | ipaddr1,ipaddr2...]]
Default value: /opt/${gds_user}/packages/, in which ${gds_user} indicates the operating system user of the GDS service.
+Path of the GDS installation package, for example, /path/to/GaussDB-8.1.1-REDHAT-x86_64bit-Gds.tar.gz.
+IP addresses of the nodes to be installed. The value can be a file name or a string.
+192.168.2.201
+The node where the command is executed must be one of the nodes to be deployed. The IP address of the node must be in the list.
+Destination IP address for the network ping check on each target node when gds_check is called.
+Value: IP address list in the file or string format. Generally, the value is the IP address of a DN, CN, or gateway.
+ +Install GDS on nodes 192.168.1.100 and 192.168.1.101, and specify the installation directory as /opt/gdspackages/install_dir. The GDS user is gds_test:wheel.
gds_install -I /opt/gdspackages/install_dir --host 192.168.1.100,192.168.1.101 -U gds_test -G wheel --pkg /home/gds_test/GaussDB-8.1.1-REDHAT-x86_64bit-Gds.tar.gz
gds_uninstall is a script tool used to uninstall GDS in batches.
gds_uninstall --host [/path/to/hostfile | ipaddr1,ipaddr2...] -U gds_user [--delete-user | --delete-user-and-group]
IP addresses of the nodes to be uninstalled. The value can be a file name or a string:
+ +The user is deleted when GDS is uninstalled. The user to be deleted cannot be the root user.
+When GDS is uninstalled, the user and the user group to which the user belongs are deleted. You can delete a user group only when the user to be deleted is the only user of the user group. The user group cannot be the root user group.
Uninstall the GDS folders and environment variables installed and deployed by the gds_test user on nodes 192.168.1.100 and 192.168.1.101.

gds_uninstall -U gds_test --host 192.168.1.100,192.168.1.101

Also delete the user during uninstallation.

gds_uninstall -U gds_test --host 192.168.1.100,192.168.1.101 --delete-user

Also delete the user and its user group during uninstallation.

gds_uninstall -U gds_test --host 192.168.1.100,192.168.1.101 --delete-user-and-group
During cluster installation, you need to execute commands and transfer files among hosts in the cluster. Therefore, mutual trust relationships must be established among the hosts before the installation. gs_sshexkey, provided by GaussDB(DWS), helps you establish such relationships.
Mutual trust relationships among root users pose security risks. You are advised to delete them once the operations are complete.

To check whether SELinux is installed and enabled, run the getenforce command. If the output is Enforcing, SELinux is installed and enforcing.
To check the security contexts of the directories, run the following commands:

ls -ldZ /root | awk '{print $4}'
ls -ldZ /home | awk '{print $4}'

To restore the security contexts of the directories, run the following commands:

restorecon -r -vv /home/
restorecon -r -vv /root/

gs_sshexkey -f HOSTFILE [-W PASSWORD] [...] [--skip-hostname-set] [-l LOGFILE]
gs_sshexkey -? | --help
gs_sshexkey -V | --version
Lists the IP addresses of all the hosts among which mutual trust relationships need to be established.
+Ensure that hostfile contains only correct IP addresses and no other information.
+Specifies the password of the user who will establish mutual trust relationships. If this parameter is not specified, you will be prompted to enter the password when the mutual trust relationship is established. If the password of each host is different, you need to specify multiple -W parameters. The password sequence must correspond to the IP address sequence. In interactive mode, you need to enter the password of the host in sequence.
+The password cannot contain the following characters: ;'$
+Specifies the path of saving log files.
+Value range: any existing, accessible absolute path
+Specifies whether to write the mapping relationship between the host name and IP address of the -f parameter file to the /etc/hosts file. If this parameter is specified, the relationship is not written to the file.
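A pre-check for the -W password constraint above can be sketched as follows; check_password is an illustrative helper, not part of gs_sshexkey.

```python
# gs_sshexkey forbids these characters in -W passwords (see above).
FORBIDDEN = set(";'$")

def check_password(pw):
    """Raise ValueError if pw contains a character forbidden by -W."""
    bad = FORBIDDEN & set(pw)
    if bad:
        raise ValueError(f"password contains forbidden characters: {sorted(bad)}")
    return True

print(check_password("Gauss@123"))  # True
```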
+The following examples describe how to establish mutual trust relationships for user root:
+Gauss@123 indicates the password of user root.
./gs_sshexkey -f /opt/software/hostfile -W Gauss@123
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
Gauss@234 indicates the root password of the first host in the host list, and Gauss@345 indicates the root password of the second host in the host list.
./gs_sshexkey -f /opt/software/hostfile -W Gauss@123 -W Gauss@234 -W Gauss@345
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
gs_sshexkey -f /opt/software/hostfile
Please enter password for current user[root].
Password:
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
gs_sshexkey -f /opt/software/hostfile
Please enter password for current user[root].
Password:
Notice :The password of some nodes is incorrect.
Please enter password for current user[root] on the node[10.180.10.112].
Password:
Please enter password for current user[root] on the node[10.180.10.113].
Password:
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
Stop GDS after data is imported successfully.

ps -ef|grep gds

For example, the GDS process ID is 128954:

ps -ef|grep gds
gds_user 128954 1 0 15:03 ? 00:00:00 gds -d /input_data/ -p 192.168.0.90:5000 -l /log/gds_log.txt -D
gds_user 129003 118723 0 15:04 pts/0 00:00:00 grep gds

kill -9 128954
cd /opt/bin/dws/gds
+python3 gds_ctl.py stop
+gds_ctl.py can be used to start and stop gds if gds.conf has been configured.
Run the following commands on Linux. Ensure that the directory structure is as follows before execution:

|----gds
    |----gds_ctl.py
    |----config
        |----gds.conf
        |----gds.conf.sample

or

|----gds
    |----gds_ctl.py
    |----gds.conf
    |----gds.conf.sample

Content of gds.conf:
<?xml version="1.0"?>
<config>
<gds name="gds1" ip="127.0.0.1" port="8098" data_dir="/data" err_dir="/err" data_seg="100MB" err_seg="1000MB" log_file="./gds.log" host="10.10.0.1/24" daemon='true' recursive="true" parallel="32"></gds>
</config>
Configuration description of gds.conf:

port: an integer ranging from 1024 to 65535; default value: 8098.

The default number of concurrent threads is 8 and the maximum number is 200.
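Because gds.conf is plain XML, its attributes can be read with Python's standard xml.etree module. load_gds_conf below is an illustrative sketch, not the parsing that gds_ctl.py actually performs.

```python
import xml.etree.ElementTree as ET

# Sample gds.conf content, matching the example above.
SAMPLE = """<?xml version="1.0"?>
<config>
<gds name="gds1" ip="127.0.0.1" port="8098" data_dir="/data" err_dir="/err"
     data_seg="100MB" err_seg="1000MB" log_file="./gds.log"
     host="10.10.0.1/24" daemon='true' recursive="true" parallel="32"></gds>
</config>"""

def load_gds_conf(text):
    """Return one dict of attributes per <gds> element in the config."""
    return [dict(gds.attrib) for gds in ET.fromstring(text).iter("gds")]

conf = load_gds_conf(SAMPLE)[0]
print(conf["port"], conf["parallel"])  # 8098 32
```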
gds_ctl.py [ start | stop all | stop [ip:]port | stop | status ]

gds_ctl.py can be used to start or stop GDS if gds.conf is configured.

stop: stops the running instance that was started from the configuration file and that the current user is allowed to stop.

stop all: stops all running GDS instances that the current user is allowed to stop.

stop [ip:]port: stops the specified running GDS instance that the current user is allowed to stop. If ip:port was specified when GDS was started, the same ip:port must be specified to stop it; if only a port was specified at startup, specify only the port. Stopping fails if the information specified at stop time differs from that specified at startup.

status: queries the running status of the GDS instances started from gds.conf.
Start the GDS.

python3 gds_ctl.py start

Stop the GDS started by the configuration file.

python3 gds_ctl.py stop

Stop all the GDS instances that can be stopped by the current user.

python3 gds_ctl.py stop all

Stop the GDS instance specified by [ip:]port that can be stopped by the current user.

python3 gds_ctl.py stop 127.0.0.1:8098

Query the GDS status.

python3 gds_ctl.py status
| Released On | Description |
|---|---|
| 2022-11-17 | This issue is the second official release, applicable to DWS 8.1.1.202. |
| 2022-08-11 | This issue is the first official release. |
The data servers reside on the same intranet as the cluster. Their IP addresses are 192.168.0.90 and 192.168.0.91. Source data files are in CSV format.
CREATE TABLE tpcds.reasons
(
  r_reason_sk integer not null,
  r_reason_id char(16) not null,
  r_reason_desc char(100)
);
mkdir -p /input_data

groupadd gdsgrp
useradd -g gdsgrp gds_user

chown -R gds_user:gdsgrp /input_data
The GDS installation path is /opt/bin/dws/gds. Source data files are stored in /input_data/. The IP addresses of the data servers are 192.168.0.90 and 192.168.0.91. The GDS listening port is 5000. GDS runs in daemon mode.
+/opt/bin/dws/gds/gds -d /input_data -p 192.168.0.90:5000 -H 10.10.0.1/24 -D
+Start GDS on the data server whose IP address is 192.168.0.91.
+/opt/bin/dws/gds/gds -d /input_data -p 192.168.0.91:5000 -H 10.10.0.1/24 -D
+Set import mode parameters as follows:
+Information about the data format is set based on data format parameters specified during data export. The parameter settings are as follows:
+Set import error tolerance parameters as follows:
+Based on the above settings, the foreign table is created using the following statement:
CREATE FOREIGN TABLE tpcds.foreign_tpcds_reasons
(
  r_reason_sk integer not null,
  r_reason_id char(16) not null,
  r_reason_desc char(100)
)
SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/* | gsfs://192.168.0.91:5000/*', format 'CSV', mode 'Normal', encoding 'utf8', delimiter E'\x08', quote E'\x1b', null '', fill_missing_fields 'false') LOG INTO err_tpcds_reasons PER NODE REJECT LIMIT 'unlimited';
INSERT INTO tpcds.reasons SELECT * FROM tpcds.foreign_tpcds_reasons;

SELECT * FROM err_tpcds_reasons;
ps -ef|grep gds
gds_user 128954 1 0 15:03 ? 00:00:00 gds -d /input_data -p 192.168.0.90:5000 -D
gds_user 129003 118723 0 15:04 pts/0 00:00:00 grep gds
kill -9 128954
The data server resides on the same intranet as the cluster. The server IP address is 192.168.0.90. Source data files are in CSV format. Data will be imported to two tables using multiple threads in Normal mode.
CREATE TABLE tpcds.reasons1
(
  r_reason_sk integer not null,
  r_reason_id char(16) not null,
  r_reason_desc char(100)
);

CREATE TABLE tpcds.reasons2
(
  r_reason_sk integer not null,
  r_reason_id char(16) not null,
  r_reason_desc char(100)
);
mkdir -p /input_data

groupadd gdsgrp
useradd -g gdsgrp gds_user

chown -R gds_user:gdsgrp /input_data

/gds/gds -d /input_data -p 192.168.0.90:5000 -H 10.10.0.1/24 -D -t 2 -r
The foreign table tpcds.foreign_tpcds_reasons1 is used as an example to describe how to configure parameters in a foreign table.
+Set import mode parameters as follows:
+Information about the data format is set based on data format parameters specified during data export. The parameter settings are as follows:
+Set import error tolerance parameters as follows:
+Based on the preceding settings, the foreign table tpcds.foreign_tpcds_reasons1 is created as follows:
CREATE FOREIGN TABLE tpcds.foreign_tpcds_reasons1
(
  r_reason_sk integer not null,
  r_reason_id char(16) not null,
  r_reason_desc char(100)
) SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/import1/*', format 'CSV', mode 'Normal', encoding 'utf8', delimiter E'\x08', quote E'\x1b', null '', fill_missing_fields 'on') LOG INTO err_tpcds_reasons1 PER NODE REJECT LIMIT 'unlimited';
Based on the preceding settings, the foreign table tpcds.foreign_tpcds_reasons2 is created as follows:
CREATE FOREIGN TABLE tpcds.foreign_tpcds_reasons2
(
  r_reason_sk integer not null,
  r_reason_id char(16) not null,
  r_reason_desc char(100)
) SERVER gsmpp_server OPTIONS (location 'gsfs://192.168.0.90:5000/import2/*', format 'CSV', mode 'Normal', encoding 'utf8', delimiter E'\x08', quote E'\x1b', null '', fill_missing_fields 'on') LOG INTO err_tpcds_reasons2 PER NODE REJECT LIMIT 'unlimited';
INSERT INTO tpcds.reasons1 SELECT * FROM tpcds.foreign_tpcds_reasons1;

INSERT INTO tpcds.reasons2 SELECT * FROM tpcds.foreign_tpcds_reasons2;

SELECT * FROM err_tpcds_reasons1;
SELECT * FROM err_tpcds_reasons2;
ps -ef|grep gds
gds_user 128954 1 0 15:03 ? 00:00:00 gds -d /input_data -p 192.168.0.90:5000 -D -t 2 -r
gds_user 129003 118723 0 15:04 pts/0 00:00:00 grep gds
kill -9 128954
GaussDB(DWS) uses GDS to allocate the source data for parallel data import. Deploy GDS on the data server.
+If a large volume of data is stored on multiple data servers, install, configure, and start GDS on each server. Then, data on all the servers can be imported in parallel. The procedure for installing, configuring, and starting GDS is the same on each data server. This section describes how to perform this procedure on one data server.
Use the latest version of GDS. After the database is upgraded, download the latest GaussDB(DWS) GDS version as instructed in Procedure. When an import or export starts, GaussDB(DWS) checks the GDS version. If the versions do not match, an error message is displayed and the import or export is terminated.
+To obtain the version number of GDS, run the following command in the GDS decompression directory:
gds -V
To view the database version, run the following SQL statement after connecting to the database:
SELECT version();
mkdir -p /opt/bin/dws

Use the SUSE Linux package as an example. Upload the GDS package dws_client_8.1.x_suse_x64.zip to the directory created in the previous step.

cd /opt/bin/dws
unzip dws_client_8.1.x_suse_x64.zip

groupadd gdsgrp
useradd -g gdsgrp gds_user

chown -R gds_user:gdsgrp /opt/bin/dws/gds
chown -R gds_user:gdsgrp /input_data

su - gds_user
If the current cluster version is 8.0.x or earlier, skip 9 and go to 10.
+If the current cluster version is 8.1.x, go to the next step.
cd /opt/bin/dws/gds/bin
source gds_env
GDS is green software and can be started after being decompressed. There are two ways to start GDS. One is to run the gds command to configure startup parameters. The other is to write the startup parameters into the gds.conf configuration file and run the gds_ctl.py command to start GDS.
gds -d dir -p ip:port -H address_string -l log_file -D -t worker_num

Example:

/opt/bin/dws/gds/bin/gds -d /input_data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 -l /opt/bin/dws/gds/gds_log.txt -D -t 2

gds -d dir -p ip:port -H address_string -l log_file -D -t worker_num --enable-ssl --ssl-dir Cert_file

Example:

/opt/bin/dws/gds/bin/gds -d /input_data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 -l /opt/bin/dws/gds/gds_log.txt -D --enable-ssl --ssl-dir /opt/bin/
Replace the information in italic as required.
+GDS determines the number of threads based on the number of concurrent import transactions. Even if multi-thread import is configured before GDS startup, the import of a single transaction will not be accelerated. By default, an INSERT statement is an import transaction.
vim /opt/bin/dws/gds/config/gds.conf

Example: the gds.conf configuration file contains the following information:

<?xml version="1.0"?>
<config>
<gds name="gds1" ip="192.168.0.90" port="5000" data_dir="/input_data/" err_dir="/err" data_seg="100MB" err_seg="100MB" log_file="/log/gds_log.txt" host="10.10.0.1/24" daemon='true' recursive="true" parallel="32"></gds>
</config>

Information in the configuration file is described as follows:

python3 gds_ctl.py start

Example:

cd /opt/bin/dws/gds/bin
python3 gds_ctl.py start
Start GDS gds1 [OK]
gds [options]:
 -d dir            Set data directory.
 -p port           Set GDS listening port.
    ip:port        Set GDS listening ip address and port.
 -l log_file       Set log file.
 -H secure_ip_range
                   Set secure IP checklist in CIDR notation. Required for GDS to start.
 -e dir            Set error log directory.
 -E size           Set size of per error log segment.(0 < size < 1TB)
 -S size           Set size of data segment.(1MB < size < 100TB)
 -t worker_num     Set number of worker thread in multi-thread mode, the upper limit is 32. If without setting, the default value is 1.
 -s status_file    Enable GDS status report.
 -D                Run the GDS as a daemon process.
 -r                Read the working directory recursively.
 -h                Display usage.
| Attribute | Description | Value Range |
|---|---|---|
| name | Identifier | - |
| ip | Listening IP address | The IP address must be valid. Default value: 127.0.0.1 |
| port | Listening port | 1024 to 65535 (integer). Default value: 8098 |
| data_dir | Data file directory | - |
| err_dir | Error log file directory | Default value: data file directory |
| log_file | Log file path | - |
| host | Host IP addresses allowed to connect to GDS (the value must be in CIDR format; Linux only) | - |
| recursive | Whether the data file directories are read recursively | true or false. Default value: false |
| daemon | Whether the process runs in daemon mode | true or false. Default value: false |
| parallel | Number of concurrent data import threads | 0 to 32 (integer). Default value: 1 |
If the gsql client is used to connect to a database, the connection timeout period will be 5 minutes. If the database has not correctly set up a connection and authenticated the identity of the client within this period, gsql will time out and exit.
+To resolve this problem, see Troubleshooting.
+Table 1 lists the advanced features of gsql.
| Feature | Description |
|---|---|
| Variable | gsql provides a variable feature similar to shell variables in Linux. A variable can be set with the \set meta-command: \set varname value. To delete a variable, run \unset varname. For details about variable examples and descriptions, see Variable. |
| SQL substitution | Common SQL statements can be assigned to variables using the variable feature of gsql to simplify operations. For details about SQL substitution examples and descriptions, see Variable. |
| Customized prompt | Prompts of gsql can be customized by modifying the reserved variables PROMPT1, PROMPT2, and PROMPT3. These variables can be set to customized values or values predefined by gsql. For details, see Variable. |
| Client operation history record | gsql records client operation history. This function is enabled by specifying the -r parameter when a client is connected. The number of historical records can be set using the \set command. For example, \set HISTSIZE 50 sets the number of historical records to 50, and \set HISTSIZE 0 disables history recording. |
\set foo bar

\echo :foo
bar
This variable quotation method is suitable for regular SQL statements and meta-commands.
+When the CLI parameter --dynamic-param (for details, see Table 1) is used or the special variable DYNAMIC_PARAM_ENABLE (for details, see Table 2) is set to true, you can execute the SQL statement to set the variable. The variable name is the column name in the SQL execution result and can be referenced using ${}. Example:
+1 +2 +3 +4 +5 +6 +7 +8 +9 | \set DYNAMIC_PARAM_ENABLE true +SELECT 'Jack' AS "Name"; + Name +------ + Jack +(1 row) + +\echo ${Name} +Jack + |
In the preceding example, the SELECT statement is used to set the Name variable, and the ${} referencing method is used to obtain the value of the Name variable. In this example, the special variable DYNAMIC_PARAM_ENABLE controls this function. You can also use the CLI parameter --dynamic-param to control this function, for example, gsql -d postgres -p 25308 --dynamic-param -r.
+Examples of setting variables by executing SQL statements:
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 +18 +19 +20 +21 +22 +23 +24 +25 +26 +27 +28 +29 +30 +31 +32 +33 +34 +35 +36 +37 +38 +39 +40 +41 +42 +43 +44 | \set DYNAMIC_PARAM_ENABLE true +CREATE TABLE student (id INT, name VARCHAR(32)) DISTRIBUTE BY HASH(id); +CREATE TABLE +INSERT INTO student VALUES (1, 'Jack'), (2, 'Tom'), (3, 'Jerry'); +INSERT 0 3 +-- Do not set variables when the SQL statement execution fails. +SELECT id, name FROM student ORDER BY idi; +ERROR: column "idi" does not exist +LINE 1: SELECT id, name FROM student ORDER BY idi; + ^ +\echo ${id} ${name} +${id} ${name} + +-- If the execution result contains multiple records, use specific characters to concatenate the values. +SELECT id, name FROM student ORDER BY id; + id | name +----+------- + 1 | Jack + 2 | Tom + 3 | Jerry +(3 rows) + +\echo ${id} ${name} +1,2,3 Jack,Tom,Jerry + +-- If the execution result contains only one record, execute the following statement to set the variable: +SELECT id, name FROM student where id = 1; + id | name +----+------ + 1 | Jack +(1 row) + +\echo ${id} ${name} +1 Jack + +-- If the execution result is empty, assign the variable with an empty string as follows: +SELECT id, name FROM student where id = 4; + id | name +----+------ +(0 rows) + +\echo ${id} ${name} + + + |
gsql pre-defines some special variables and plans the values of these variables. To ensure compatibility with later versions, do not use these variables for other purposes. For details about all special variables, see Table 2.
+Variable + |
+Setting Method + |
+Description + |
+
---|---|---|
DBNAME + |
+\set DBNAME dbname+ |
+Specifies the name of a connected database. This variable is set again when a database is connected. + |
+
ECHO + |
+\set ECHO all | queries+ |
+
|
+
ECHO_HIDDEN + |
+\set ECHO_HIDDEN on | off | noexec+ |
+When a meta-command (such as \dg) is used to query database information, the value of this variable determines the query behavior. +
|
+
ENCODING + |
+\set ENCODING encoding+ |
+Specifies the character set encoding of the current client. + |
+
FETCH_COUNT + |
+\set FETCH_COUNT variable+ |
+
NOTE:
+Setting this variable to a proper value reduces memory usage. Generally, values from 100 to 1000 are proper. + |
+
HISTCONTROL + |
+\set HISTCONTROL ignorespace | ignoredups | ignoreboth | none+ |
+
|
+
HISTFILE + |
+\set HISTFILE filename+ |
+Specifies the file for storing historical records. The default value is ~/.bash_history. + |
+
HISTSIZE + |
+\set HISTSIZE size+ |
+Specifies the number of commands in the history command. The default value is 500. + |
+
HOST + |
+\set HOST hostname+ |
+Specifies the name of a connected host. + |
+
IGNOREEOF + |
+\set IGNOREEOF variable+ |
+
|
+
LASTOID + |
+\set LASTOID oid+ |
+Specifies the last OID, which is the value returned by an INSERT or lo_import command. This variable is valid only before the output of the next SQL statement is displayed. + |
+
ON_ERROR_ROLLBACK + |
+\set ON_ERROR_ROLLBACK on | interactive | off+ |
+
|
+
ON_ERROR_STOP + |
+\set ON_ERROR_STOP on | off+ |
+
|
+
PORT + |
+\set PORT port+ |
+Specifies the port number of a connected database. + |
+
USER + |
+\set USER username+ |
+Specifies the connected database user. + |
+
VERBOSITY + |
+\set VERBOSITY terse | default | verbose+ |
+This variable can be set to terse, default, or verbose to control redundant lines of error reports. +
|
+
VAR_NOT_FOUND + |
+\set VAR_NOT_FOUND default | null | error+ |
+You can set this parameter to default, null, or error to control the processing mode when the referenced variable does not exist. +
|
+
VAR_MAX_LENGTH + |
+\set VAR_MAX_LENGTH variable+ |
+Specifies the variable value length. The default value is 4096. If the length of a variable value exceeds the specified parameter value, the variable value is truncated and an alarm is generated. + |
+
ERROR_LEVEL + |
+\set ERROR_LEVEL transaction | statement+ |
+Indicates whether a transaction or statement is successful or not. Value options: transaction or statement. Default value: transaction +
|
+
ERROR + |
+\set ERROR true | false+ |
+Indicates whether the previous SQL statement failed or an error occurred during the previous transaction. false: succeeded; true: failed. The default value is false. The setting is updated by executing SQL statements. You are not advised to manually set this parameter. + |
+
LAST_ERROR_SQLSTATE + |
+\set LAST_ERROR_SQLSTATE state+ |
+Error code of the previously failed SQL statement execution. The default value is 00000. The setting can be updated by executing SQL statements. You are not advised to manually set this parameter. + |
+
LAST_ERROR_MESSAGE + |
+\set LAST_ERROR_MESSAGE message+ |
+Error message of the previously failed SQL statement execution. The default value is an empty string. The setting can be updated by executing SQL statements. You are not advised to manually set this parameter. + |
+
ROW_COUNT + |
+\set ROW_COUNT count+ |
+
Number of rows returned or affected by the previous SQL statement. If the statement fails, the value is 0. The default value is 0. The setting is updated by executing SQL statements. You are not advised to manually set this parameter. + |
+
SQLSTATE + |
+\set SQLSTATE state+ |
+
Status code (SQLSTATE) of the previous SQL statement. The default value is 00000. The setting is updated by executing SQL statements. You are not advised to manually set this parameter. + |
+
LAST_SYS_CODE + |
+\set LAST_SYS_CODE code+ |
+Returned value of the previous system command execution. The default value is 0. The setting can be updated by using the meta-command \! to run the system command. You are not advised to manually set this parameter. + |
+
DYNAMIC_PARAM_ENABLE + |
+\set DYNAMIC_PARAM_ENABLE true | false+ |
+Controls the generation of variables and the variable referencing method ${} during SQL statement execution. The default value is false. +
|
+
RESULT_DELIMITER + |
+\set RESULT_DELIMITER delimiter+ |
+Controls the delimiter used for concatenating multiple records when variables are generated during SQL statement execution. The default delimiter is comma (,). + |
+
COMPARE_STRATEGY + |
+\set COMPARE_STRATEGY default | natural | equal+ |
+Used to control the value comparison policy of the \if expression. The default value is default. +
For details, see \if conditional block comparison rules and examples. + |
+
COMMAND_ERROR_STOP + |
+\set COMMAND_ERROR_STOP on | off+ |
+Determines whether to report the error and stop executing the meta-command when an error occurs during meta-command execution. By default, the meta-command execution is not stopped. +For details, see the COMMAND_ERROR_STOP example. + |
+
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 | \set ERROR_LEVEL statement +begin; +BEGIN +select 1 as ; +ERROR: syntax error at or near ";" +LINE 1: select 1 as ; + ^ +end; +ROLLBACK +\echo :ERROR +false + |
When ERROR_LEVEL is set to transaction, ERROR can be used to capture SQL execution errors in a transaction. In the following example, when a SQL execution error occurs in a transaction and the transaction ends, the value of ERROR is true.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 | \set ERROR_LEVEL transaction +begin; +BEGIN +select 1 as ; +ERROR: syntax error at or near ";" +LINE 1: select 1 as ; + ^ +end; +ROLLBACK +\echo :ERROR +true + |
When COMMAND_ERROR_STOP is set to on and an error occurs during the meta-command execution, the error is reported and the meta-command execution is stopped.
+When COMMAND_ERROR_STOP is set to off and an error occurs during the meta-command execution, related information is printed and the script continues to be executed.
+1 +2 +3 +4 | \set COMMAND_ERROR_STOP on +\i /home/omm/copy_data.sql + +select id, name from student; + |
When COMMAND_ERROR_STOP in the preceding script is set to on, an error message is displayed after the meta-command reports an error, and the script execution is stopped.
+1 | gsql:test.sql:2: /home/omm/copy_data.sql: Not a directory + |
When COMMAND_ERROR_STOP is set to off, an error message is displayed after the meta-command reports an error, and the SELECT statement continues to be executed.
+1 +2 +3 +4 +5 | gsql:test.sql:2: /home/omm/copy_data.sql: Not a directory + id | name +----+------ + 1 | Jack +(1 row) + |
1 +2 +3 +4 +5 +6 +7 +8 +9 | \set foo 'HR.areaS' +select * from :foo; + area_id | area_name +---------+------------------------ + 4 | Iron + 3 | Desert + 1 | Wood + 2 | Lake +(4 rows) + |
The above command queries the HR.areaS table.
+The value of a variable is copied character by character; even an unmatched quote mark or backslash (\) is copied as-is. Therefore, ensure that the input content is meaningful. +
+The gsql prompt can be set using the three variables in Table 3. These variables consist of characters and special escape characters.
+ +Variable + |
+Description + |
+Example + |
+||||||
---|---|---|---|---|---|---|---|---|
PROMPT1 + |
+Specifies the normal prompt used when gsql requests a new command. +The default value of PROMPT1 is: +%/%R%#+ |
+PROMPT1 can be used to change the prompt. +
|
+||||||
PROMPT2 + |
+Specifies the prompt displayed when more command input is expected. For example, it is expected if a command is not terminated with a semicolon (;) or a quote (") is not closed. + |
+PROMPT2 can be used to display the prompt: +
|
+||||||
PROMPT3 + |
+Specifies the prompt displayed when the COPY statement (such as COPY FROM STDIN) is run and data input is expected. + |
+PROMPT3 can be used to display the COPY prompt. +
|
+
The value of the selected prompt variable is printed literally. However, a value containing a percent sign (%) is replaced by the predefined contents depending on the character following the percent sign (%). For details about the defined substitutions, see Table 4.
+ +Symbol + |
+Description + |
+
---|---|
%M + |
+Specifies the full host name (with domain name). The full name is [local] if the connection is over a Unix domain socket, or [local:/dir/name] if the Unix domain socket is not at the compiled default location. + |
+
%m + |
+Specifies the host name truncated at the first dot. It is [local] if the connection is over a Unix domain socket. + |
+
%> + |
+Specifies the number of the port that the host is listening on. + |
+
%n + |
+Specifies the database session user name. + |
+
%/ + |
+Specifies the name of the current database. + |
+
%~ + |
+Is similar to %/. However, the output is tilde (~) if the database is your default database. + |
+
%# + |
+Uses # if the session user is the database administrator. Otherwise, uses >. + |
+
%R + |
+
+In prompt 1, this is normally =, but ^ in single-line mode and ! if the session is disconnected from the database. In prompt 2, it is replaced by a character indicating why gsql expects more input: - if the statement is not yet terminated, * if inside a /* ... */ comment, or the quote character if inside a quoted string. + |
+
%x + |
+Specifies the transaction status. +
|
+
%digits + |
+Is replaced with the character whose octal code is specified by the digits. + |
+
%:name + |
+Specifies the value of the name variable of gsql. + |
+
%command + |
+Specifies command output, similar to ordinary back-tick (`) substitution. + |
+
%[ . . . %] + |
+Prompts can contain terminal control characters which, for example, change the color, background, or style of the prompt text, or change the title of the terminal window. For example: +postgres=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%#' +The output is a boldfaced (1;) yellow-on-black (33;40) prompt on VT100-compatible, color-capable terminals. + |
+
Name + |
+Description + |
+
---|---|
COLUMNS + |
+If \pset columns is set to 0, this environment variable controls the width of the wrapped output format and the width used to decide whether wide output should switch to the vertical format in expanded auto mode. + |
+
PAGER + |
+If the query result cannot be displayed within one page, it is redirected to this command. You can use the \pset command to disable the pager. Typically, the more or less command is used to view the query result page by page. The default value is platform-dependent. + NOTE:
+Display of the less command is affected by the LC_CTYPE environmental variable. + |
+
PSQL_EDITOR + |
+The \e and \ef commands use the editor specified by the environment variables. Variables are checked according to the list sequence. The default editor on Unix is vi. + |
+
EDITOR + |
+|
VISUAL + |
+|
PSQL_EDITOR_LINENUMBER_ARG + |
+When the \e or \ef command is used with a line number parameter, this variable specifies the command-line parameter used to pass the starting line number to the editor. For editors, such as Emacs or vi, this is a plus sign. A space is added behind the value of the variable if whitespace is required between the option name and the line number. For example:
+PSQL_EDITOR_LINENUMBER_ARG = '+' +PSQL_EDITOR_LINENUMBER_ARG='--line '+ A plus sign (+) is used by default on Unix. + |
+
PSQLRC + |
+Specifies the location of the user's .gsqlrc file. + |
+
SHELL + |
+Has the same effect as the \! command. + |
+
TMPDIR + |
+Specifies the directory for storing temporary files. The default value is /tmp. + |
+
For details about how to download and install gsql and connect it to the cluster database, see "Using the gsql CLI Client to Connect to a Cluster" in the Data Warehouse Service (DWS) Management Guide.
+The example shows how to spread a command over several lines of input. Pay attention to prompt changes:
+1 +2 +3 +4 +5 | postgres=# CREATE TABLE HR.areaS( +postgres(# area_ID NUMBER, +postgres(# area_NAME VARCHAR2(25) +postgres-# )tablespace EXAMPLE; +CREATE TABLE + |
View the table definition.
+1 +2 +3 +4 +5 +6 | \d HR.areaS + Table "hr.areas" + Column | Type | Modifiers +-----------+-----------------------+----------- + area_id | numeric | not null + area_name | character varying(25) | + |
Insert four lines of data into HR.areaS.
+1 +2 +3 +4 +5 +6 +7 +8 | INSERT INTO HR.areaS (area_ID, area_NAME) VALUES (1, 'Wood'); +INSERT 0 1 +INSERT INTO HR.areaS (area_ID, area_NAME) VALUES (2, 'Lake'); +INSERT 0 1 +INSERT INTO HR.areaS (area_ID, area_NAME) VALUES (3, 'Desert'); +INSERT 0 1 +INSERT INTO HR.areaS (area_ID, area_NAME) VALUES (4, 'Iron'); +INSERT 0 1 + |
Change the prompt.
+1 +2 | \set PROMPT1 '%n@%m %~%R%#' +dbadmin@[local] postgres=# + |
View the table.
+1 +2 +3 +4 +5 +6 +7 +8 | dbadmin@[local] postgres=#SELECT * FROM HR.areaS; + area_id | area_name +---------+------------------------ + 1 | Wood + 4 | Iron + 2 | Lake + 3 | Desert +(4 rows) + |
Run the \pset command to display the table in different ways.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 | dbadmin@[local] postgres=#\pset border 2 +Border style is 2. +dbadmin@[local] postgres=#SELECT * FROM HR.areaS; ++---------+------------------------+ +| area_id | area_name | ++---------+------------------------+ +| 1 | Wood | +| 2 | Lake | +| 3 | Desert | +| 4 | Iron | ++---------+------------------------+ +(4 rows) + |
1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 | dbadmin@[local] postgres=#\pset border 0 +Border style is 0. +dbadmin@[local] postgres=#SELECT * FROM HR.areaS; +area_id area_name +------- ---------------------- + 1 Wood + 2 Lake + 3 Desert + 4 Iron +(4 rows) + |
Use the meta-command.
+1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 +10 +11 +12 +13 +14 +15 +16 +17 | dbadmin@[local] postgres=#\a \t \x +Output format is unaligned. +Showing only tuples. +Expanded display is on. +dbadmin@[local] postgres=#SELECT * FROM HR.areaS; +area_id|2 +area_name|Lake + +area_id|1 +area_name|Wood + +area_id|4 +area_name|Iron + +area_id|3 +area_name|Desert +dbadmin@[local] postgres=# + |
gsql --help+
The following information is displayed:
+...... +Usage: + gsql [OPTION]... [DBNAME [USERNAME]] + +General options: + -c, --command=COMMAND run only single command (SQL or internal) and exit + -d, --dbname=DBNAME database name to connect to (default: "postgres") + -f, --file=FILENAME execute commands from file, then exit +......+
help+
The following information is displayed:
+You are using gsql, the command-line interface to gaussdb. +Type: \copyright for distribution terms + \h for help with SQL commands + \? for help with gsql commands + \g or terminate with semicolon to execute query + \q to quit+
+
Description + |
+Example + |
+||||
---|---|---|---|---|---|
View copyright information. + |
+\copyright + |
+||||
View the help information about SQL statements supported by GaussDB(DWS). + |
+View the help information about SQL statements supported by GaussDB(DWS). +For example, view all SQL statements supported by GaussDB(DWS). +
For example, view parameters of the CREATE DATABASE command: +
|
+||||
View help information about gsql commands. + |
+For example, view commands supported by gsql. +
|
+
For details about gsql parameters, see Table 1, Table 2, Table 3, and Table 4.
+ +Parameter + |
+Description + |
+Value Range + |
+
---|---|---|
-c, --command=COMMAND + |
+Specifies that gsql runs a string command and then exits. + |
+- + |
+
-C, --set-file=FILENAME + |
+Uses the file as the command source instead of interactive input. After processing the file, gsql does not exit and continues to process other contents. + |
+An absolute path or relative path that meets the OS path naming convention + |
+
-d, --dbname=DBNAME + |
+Specifies the name of the database to be connected. + |
+A character string. + |
+
-D, --dynamic-param + |
+Controls the generation of variables and the ${} variable referencing method during SQL statement execution. For details, see Variable. + |
+- + |
+
-f, --file=FILENAME + |
+Specifies that files are used as the command source instead of interactively-entered commands. After the files are processed, exit from gsql. If FILENAME is - (hyphen), then standard input is read. + |
+An absolute path or relative path that meets the OS path naming convention + |
+
-l, --list + |
+Lists all available databases and then exits. + |
+- + |
+
-v, --set, --variable=NAME=VALUE + |
+Sets the gsql variable NAME to VALUE. +For details about variable examples and descriptions, see Variable. + |
+- + |
+
-X, --no-gsqlrc + |
+Does not read the startup file (neither the system-wide gsqlrc file nor the user's ~/.gsqlrc file). + NOTE:
+The startup file is ~/.gsqlrc by default or it can be specified by the environment variable PSQLRC. + |
+- + |
+
-1 ("one"), --single-transaction + |
+When gsql uses the -f parameter to execute a script, START TRANSACTION and COMMIT are added to the start and end of the script, respectively, so that the script is executed as one transaction. This ensures that the script either takes effect entirely or not at all. + NOTE:
+If the script has used START TRANSACTION, COMMIT, and ROLLBACK, this parameter is invalid. + |
+- + |
+
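Combining these options, a script could be run as a single transaction roughly as follows (the database name, port, and script file are illustrative):

```shell
# migrate.sql runs inside one implicit START TRANSACTION ... COMMIT
gsql -d postgres -p 8000 -f migrate.sql -1
```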
-?, --help + |
+Displays help information about gsql CLI parameters, and exits. + |
+- + |
+
-V, --version + |
+Prints the gsql version and exits. + |
+- + |
+
Parameter + |
+Description + |
+Value Range + |
+
---|---|---|
-a, --echo-all + |
+Prints all input lines to standard output as they are read. + CAUTION:
+When this parameter is used in some SQL statements, sensitive information, such as user passwords, may be disclosed. Use this parameter with caution. + |
+- + |
+
-e, --echo-queries + |
+Copies all SQL statements sent to the server to standard output as well. + CAUTION:
+When this parameter is used in some SQL statements, sensitive information, such as user passwords, may be disclosed. Use this parameter with caution. + |
+- + |
+
-E, --echo-hidden + |
+Echoes the actual queries generated by \d and other backslash commands. + |
+- + |
+
-k, --with-key=KEY + |
+Uses gsql to decrypt imported encrypted files. + NOTICE:
+For key characters, such as the single quotation mark (') or double quotation mark (") in shell commands, Linux shell checks whether the input single quotation mark (') or double quotation mark (") matches. If it does not match, Linux shell regards that the user input is unfinished and waits for more input instead of entering the gsql program. + |
+- + |
+
-L, --log-file=FILENAME + |
+Writes normal output destination and all query output into the FILENAME file. + CAUTION:
+
|
+An absolute path or relative path that meets the OS path naming convention + |
+
-m, --maintenance + |
+Allows a cluster to be connected when a two-phase transaction is being restored. + NOTE:
+The parameter is for engineers only. When this parameter is used, gsql can be connected to the standby server to check data consistency between the primary server and standby server. + |
+- + |
+
-n, --no-libedit + |
+Closes the command line editing. + |
+- + |
+
-o, --output=FILENAME + |
+Puts all query output into the FILENAME file. + |
+An absolute path or relative path that meets the OS path naming convention + |
+
-q, --quiet + |
+Indicates the quiet mode and no additional information will be printed. + |
+By default, gsql displays various information. + |
+
-s, --single-step + |
+Runs in single-step mode. This indicates that the user is prompted before each command is sent to the server. This parameter can also be used for canceling execution. This parameter can be used to debug scripts. + CAUTION:
+When this parameter is used in some SQL statements, sensitive information, such as user passwords, may be disclosed. Use this parameter with caution. + |
+- + |
+
-S, --single-line + |
+Runs in single-row mode where a new line terminates a SQL statement in the same manner as a semicolon does. + |
+- + |
+
Parameter + |
+Description + |
+Value Range + |
+
---|---|---|
-A, --no-align + |
+Switches to unaligned output mode. + |
+The default output mode is aligned. + |
+
-F, --field-separator=STRING + |
+Specifies the field separator. The default is the vertical bar (|). + |
+- + |
+
-H, --html + |
+Turns on the HTML tabular output. + |
+- + |
+
-P, --pset=VAR[=ARG] + |
+Specifies the print option in the \pset format in the command line. + NOTE:
+The equal sign (=), instead of the space, is used here to separate the name and value. For example, enter -P format=latex to set the output format to LaTeX. + |
+- + |
+
-R, --record-separator=STRING + |
+Specifies the record separators. + |
+- + |
+
-r + |
+Enables the function of recording historical operations on the client. + |
+This function is disabled by default. + |
+
-t, --tuples-only + |
+Prints only tuples. + |
+- + |
+
-T, --table-attr=TEXT + |
+Specifies options to be placed within the HTML table tag. +Use this parameter with the -H,--html parameter to specify the output to the HTML format. + |
+- + |
+
-x, --expanded + |
+Turns on the expanded table formatting mode. + |
+- + |
+
-z, --field-separator-zero + |
+Sets the field separator in the unaligned output mode to be blank. +Use this parameter with the -A, --no-align parameter to switch to unaligned output mode. + |
+- + |
+
-0, --record-separator-zero + |
+Sets the record separator in the unaligned output mode to be blank. +Use this parameter with the -A, --no-align parameter to switch to unaligned output mode. + |
+- + |
+
-g + |
+Displays separators for all SQL statements and specified files. + NOTE:
+The -g parameter must be configured with the -f parameter. + |
+- + |
+
Parameter + |
+Description + |
+Value Range + |
+
---|---|---|
-h, --host=HOSTNAME + |
+Specifies the host name of the machine on which the server is running or the directory for the Unix-domain socket. + |
+If the host name is omitted, gsql connects to the server on the local host over a Unix domain socket, or over TCP/IP if no Unix domain socket is available. + |
+
-p, --port=PORT + |
+Specifies the port number of the database server. +You can modify the default port number using the -p, --port=PORT parameter. + |
+The default value is 8000. + |
+
-U, --username=USERNAME + |
+Specifies the user that accesses a database. + NOTE:
+
|
+A string. The default user is the current user that operates the system. + |
+
-W, --password=PASSWORD + |
+Specifies a password when the -U parameter is used to connect to a remote database. + NOTE:
+To connect to a database, add an escape character before any backslash (\) or back quote (`) in the password. +If this parameter is not specified but database connection requires your password, you will be prompted to enter your password in interactive mode. The maximum length of the password is 999 bytes, which is restricted by the maximum value of the GUC parameter password max length. + |
+This parameter must meet the password complexity requirement. + |
+
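Putting the connection parameters together, a typical remote connection might be sketched as follows (the host, port, database, and user are illustrative; omitting -W makes gsql prompt for the password interactively):

```shell
gsql -h 192.168.0.10 -p 8000 -d gaussdb -U dbadmin
```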
This section describes meta-commands provided by gsql after the GaussDB(DWS) database CLI tool is used to connect to a database. A gsql meta-command can be anything that you enter in gsql and begins with an unquoted backslash.
+For details about meta-commands, see Table 1, Table 2, Table 3, Table 4, Table 6, Table 8, Table 9, Table 10, and Table 12.
+FILE mentioned in the following commands indicates a file path. This path can be an absolute path such as /home/gauss/file.txt or a relative path, such as file.txt. By default, a file.txt is created in the path where the user runs gsql commands.
+Parameter + |
+Description + |
+Value Range + |
+
---|---|---|
\copyright + |
+Displays GaussDB(DWS) version and copyright information. + |
+- + |
+
\g [FILE] or ; + |
+Performs a query operation and sends the result to a file or pipe. + |
+- + |
+
\h(\help) [NAME] + |
+Provides syntax help on the specified SQL statement. + |
+If the name is not specified, then gsql will list all the commands for which syntax help is available. If the name is an asterisk (*), the syntax help on all SQL statements is displayed. + |
+
\parallel [on [num]|off] + |
+Controls the parallel execution function. +
NOTE:
+
|
+The default value of num is 1024. + NOTICE:
+
|
+
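A minimal sketch of parallel execution (the statements and table are illustrative):

```sql
\parallel on 3
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (2);
INSERT INTO t1 VALUES (3);
\parallel off   -- the three INSERT statements above run concurrently
```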
\q [value] + |
+Exits the gsql program. In a script file, this command is run only when a script terminates. The exit code is determined by the value. + |
+- + |
+
Parameter + |
+Description + |
+
---|---|
\e [FILE] [LINE] + |
+Use an external editor to edit the query buffer or file. + |
+
\ef [FUNCNAME [LINE]] + |
+Use an external editor to edit the function definition. If LINE is specified, the cursor will point to the specified line of the function body. + |
+
\p + |
+Prints the current query buffer to the standard output. + |
+
\r + |
+Resets (clears) the query buffer. + |
+
\w FILE + |
+Outputs the current query buffer to a file. + |
+
Parameter + |
+Description + |
+
---|---|
\copy { table [ ( column_list ) ] | ( query ) } { from | to } { filename | stdin | stdout | pstdin | pstdout } [ with ] [ binary ] [ oids ] [ delimiter [ as ] 'character' ] [ null [ as ] 'string' ] [ csv [ header ] [ quote [ as ] 'character' ] [ escape [ as ] 'character' ] [ force quote column_list | * ] [ force not null column_list ] ] + |
+After logging in to the database on any gsql client, you can import and export data. This runs the SQL COPY command, but instead of the server reading or writing the specified file, data is transferred between the server and the local file system. This means that file accessibility and permissions are those of the local user rather than the server, and the initial database user permission is not required. + NOTE:
+\copy only applies to small-batch data import with uniform formats but poor error tolerance capability. GDS or COPY is preferred for data import. + |
+
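Building on the HR.areaS table used elsewhere in this guide, a round trip through \copy might look like this (the file path is illustrative):

```sql
\copy hr.areas TO '/tmp/areas.csv' csv header     -- export to the local file system
\copy hr.areas FROM '/tmp/areas.csv' csv header   -- import from the local file system
```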
\echo [STRING] + |
+Writes a character string to the standard output. + |
+
\i FILE + |
+Reads content from FILE and uses them as the input for a query. + |
+
\i+ FILE KEY + |
+Runs commands in an encrypted file. + |
+
\ir FILE + |
+Is similar to \i, but resolves relative path names differently. + |
+
\ir+ FILE KEY + |
+Is similar to \i, but resolves relative path names differently. + |
+
\o [FILE] + |
+Saves all query results to a file. + |
+
\qecho [STRING] + |
+Prints a character string to the query result output. + |
+
Parameter + |
+Description + |
+Value Range + |
+Example + |
+||
---|---|---|---|---|---|
\d[S+] + |
+Lists all tables, views, and sequences of all schemas in the search_path. When objects with the same name exist in different schemas in the search_path, only the object in the schema that ranks first in the search_path is displayed. + |
+- + |
+Lists all tables, views, and sequences of all schemas in the search_path. +
|
+||
\d[S+] NAME + |
+Lists the structure of specified tables, views, and indexes. + |
+- + |
+Lists the structure of table a. +
|
+||
\d+ [PATTERN] + |
+Lists all tables, views, and indexes. + |
+If PATTERN is specified, only tables, views, and indexes whose names match PATTERN are displayed. + |
+Lists all tables, views, and indexes whose names start with f. +
|
+||
\da[S] [PATTERN] + |
+Lists all available aggregate functions, together with their return value types and the data types. + |
+If PATTERN is specified, only aggregate functions whose names match PATTERN are displayed. + |
+Lists all available aggregate functions whose names start with f, together with their return value types and the data types. +
|
+||
\db[+] [PATTERN] + |
+Lists all available tablespaces. + |
+If PATTERN is specified, only tablespaces whose names match PATTERN are displayed. + |
+Lists all available tablespaces whose names start with p. +
|
+||
\dc[S+] [PATTERN] + |
+Lists all available conversions between character sets. + |
+If PATTERN is specified, only conversions whose names match PATTERN are displayed. + |
+Lists all available conversions between character sets. +
|
+||
\dC[+] [PATTERN] + |
+Lists all type conversions. + |
+If PATTERN is specified, only conversions whose names match PATTERN are displayed. + |
+Lists all type conversions whose names start with c. +
|
+||
\dd[S] [PATTERN] + |
+Lists descriptions about objects matching PATTERN. + |
+If PATTERN is not specified, all visible objects are displayed. The objects include aggregations, functions, operators, types, relations (table, view, index, sequence, and large object), and rules. + |
+Lists all visible objects. +
|
+||
\ddp [PATTERN] + |
+Lists all default permissions. + |
+If PATTERN is specified, only permissions whose names match PATTERN are displayed. + |
+Lists all default permissions. +
|
+||
\dD[S+] [PATTERN] + |
+Lists all available domains. + |
+If PATTERN is specified, only domains whose names match PATTERN are displayed. + |
+Lists all available domains. +
|
+||
\ded[+] [PATTERN] + |
+Lists all Data Source objects. + |
+If PATTERN is specified, only objects whose names match PATTERN are displayed. + |
+Lists all Data Source objects. +
|
+||
\det[+] [PATTERN] + |
+Lists all external tables. + |
+If PATTERN is specified, only tables whose names match PATTERN are displayed. + |
+Lists all external tables. +
|
+||
\des[+] [PATTERN] + |
+Lists all external servers. + |
+If PATTERN is specified, only servers whose names match PATTERN are displayed. + |
+Lists all external servers. +
|
+||
\deu[+] [PATTERN] + |
+Lists user mappings. + |
+If PATTERN is specified, only information whose name matches PATTERN is displayed. + |
+Lists user mappings. +
|
+||
\dew[+] [PATTERN] + |
+Lists foreign-data wrappers. + |
+If PATTERN is specified, only data whose name matches PATTERN is displayed. + |
+Lists foreign-data wrappers. +
|
+||
\df[antw][S+] [PATTERN] + |
+Lists all available functions, together with their parameters and return types. a indicates an aggregate function, n indicates a common function, t indicates a trigger, and w indicates a window function. + |
+If PATTERN is specified, only functions whose names match PATTERN are displayed. + |
+Lists all available functions, together with their parameters and return types. +
|
+||
\dF[+] [PATTERN] + |
+Lists all text search configurations. + |
+If PATTERN is specified, only configurations whose names match PATTERN are displayed. + |
+Lists all text search configurations. +
|
+||
\dFd[+] [PATTERN] + |
+Lists all text search dictionaries. + |
+If PATTERN is specified, only dictionaries whose names match PATTERN are displayed. + |
+Lists all text search dictionaries. +
|
+||
\dFp[+] [PATTERN] + |
+Lists all text search parsers. + |
+If PATTERN is specified, only analyzers whose names match PATTERN are displayed. + |
+Lists all text search parsers. +
|
+||
\dFt[+] [PATTERN] + |
+Lists all text search templates. + |
+If PATTERN is specified, only templates whose names match PATTERN are displayed. + |
+Lists all text search templates. +
|
+||
\dg[+] [PATTERN] + |
+Lists all database roles. + NOTE:
+Since the concepts of "users" and "groups" have been unified into "roles", this command is now equivalent to \du. Both commands are retained for backward compatibility. + |
+If PATTERN is specified, only roles whose names match PATTERN are displayed. + |
+List all database roles whose names start with j and end with e. +
|
+||
\dl + |
+This is an alias for \lo_list, which shows a list of large objects. + |
+- + |
+Lists all large objects. +
|
+||
\dL[S+] [PATTERN] + |
+Lists available procedural languages. + |
+If PATTERN is specified, only languages whose names match PATTERN are displayed. + |
+Lists available procedural languages. +
|
+||
\dn[S+] [PATTERN] + |
+Lists all schemas (namespace). + |
+If PATTERN is specified, only schemas whose names match PATTERN are displayed. By default, only schemas you created are displayed. + |
+Lists information about all schemas whose names start with d. +
|
+||
\do[S] [PATTERN] + |
+Lists available operators with their operand and return types. + |
+If PATTERN is specified, only operators whose names match PATTERN are displayed. By default, only operators you created are displayed. + |
+Lists available operators with their operand and return types. +
|
+||
\dO[S+] [PATTERN] + |
+Lists collations. + |
+If PATTERN is specified, only collations whose names match PATTERN are displayed. By default, only collations you created are displayed. + |
+Lists collations. +
|
+||
\dp [PATTERN] + |
+Lists tables, views, and related permissions. +The following result about \dp is displayed: +rolename=xxxx/yyyy --Assigning permissions to a role+ =xxxx/yyyy --Assigning permissions to public+ xxxx indicates the assigned permissions, and yyyy indicates the roles that are assigned to the permissions. For details about permission descriptions, see Table 5. + |
+If PATTERN is specified, only tables and views whose names match PATTERN are displayed. + |
+Lists tables, views, and related permissions. +
|
+||
\drds [PATTERN1 [PATTERN2]] + |
+Lists all modified configuration parameters. These settings can be for roles, for databases, or for both. PATTERN1 and PATTERN2 indicate a role pattern and a database pattern, respectively. + |
+If PATTERN is specified, only collations whose names match PATTERN are displayed. If the default value is used or * is specified, all settings are listed. + |
+Lists all modified configuration parameters of the database. +
|
+||
\dT[S+] [PATTERN] + |
+Lists all data types. + |
+If PATTERN is specified, only types whose names match PATTERN are displayed. + |
+Lists all data types. +
|
+||
\du[+] [PATTERN] + |
+Lists all database roles. + NOTE:
+Since the concepts of "users" and "groups" have been unified into "roles", this command is now equivalent to \dg. Both commands are retained for backward compatibility. + |
+If PATTERN is specified, only roles whose names match PATTERN are displayed. + |
+Lists all database roles. +
|
+||
\dE[S+] [PATTERN] +\di[S+] [PATTERN] +\ds[S+] [PATTERN] +\dt[S+] [PATTERN] +\dv[S+] [PATTERN] + |
+In this group of commands, the letters E, i, s, t, and v stand for a foreign table, index, sequence, table, or view, respectively. You can specify any or a combination of these letters sequenced in any order to obtain an object list. For example, \dit lists all indexes and tables. If a command is suffixed with a plus sign (+), physical dimensions and related descriptions of each object will be displayed. + NOTE:
+This version does not support sequences. + |
+If PATTERN is specified, only objects whose names match PATTERN are displayed. By default, only objects you created are displayed. You can specify PATTERN or S to view other system objects. + |
+Lists all indexes and views. +
|
+||
\dx[+] [PATTERN] + |
+Lists installed extensions. + |
+If PATTERN is specified, only extensions whose names match PATTERN are displayed. + |
+Lists installed extensions. +
|
+||
\l[+] + |
+Lists the names, owners, character set encoding, and permissions of all databases on the server. + |
+- + |
+Lists the names, owners, character set encoding, and permissions of all databases on the server. +
|
+||
\sf[+] FUNCNAME + |
+Shows function definitions. + NOTE:
+If the function name contains parentheses, enclose the function name with quotation marks and add the parameter type list following the double quotation marks. Also enclose the list with parentheses. + |
+- + |
+Assume a function function_a and a function func()name. This parameter will be as follows: +
|
+||
\z [PATTERN] + |
+Lists all tables, views, and sequences in the database and their access permissions. + |
+If a pattern is given, it is a regular expression, and only matched tables, views, and sequences are displayed. + |
+Lists all tables, views, and sequences in the database and their access permissions. +
|
The letters in the access-permission output have the following meanings:

| Parameter | Description |
|---|---|
| r | SELECT: allows users to read data from specified tables and views. |
| w | UPDATE: allows users to update columns of specified tables. |
| a | INSERT: allows users to insert data into specified tables. |
| d | DELETE: allows users to delete data from specified tables. |
| D | TRUNCATE: allows users to delete all data from specified tables. |
| x | REFERENCES: allows users to create foreign key constraints. |
| t | TRIGGER: allows users to create triggers on specified tables. |
| X | EXECUTE: allows users to use specified functions and the operators implemented by those functions. |
| U | USAGE: allows users to use specified objects, such as schemas and sequences. |
| C | CREATE: allows users to create objects in a specified database, schema, or tablespace. |
| c | CONNECT: allows users to access specified databases. |
| T | TEMPORARY: allows users to create temporary tables. |
| A | ANALYZE\|ANALYSE: allows users to analyze tables. |
| arwdDxtA | ALL PRIVILEGES: grants all available permissions to specified users or roles at a time. |
| * | Grant option for the preceding permission. |
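As an illustration of how these letters combine, the following is a minimal sketch in plain Python (not part of gsql) that decodes a privilege-letter string of the kind shown in \z output; the letter-to-privilege mapping is taken directly from the table above, and the treatment of a trailing `*` as the grant option follows the last row.

```python
# Decode a GaussDB(DWS) privilege-letter string (as shown by \z) into
# privilege names. The mapping comes from the table above.
PRIVILEGE_LETTERS = {
    "r": "SELECT", "w": "UPDATE", "a": "INSERT", "d": "DELETE",
    "D": "TRUNCATE", "x": "REFERENCES", "t": "TRIGGER", "X": "EXECUTE",
    "U": "USAGE", "C": "CREATE", "c": "CONNECT", "T": "TEMPORARY",
    "A": "ANALYZE",
}

def decode_privileges(letters: str) -> list:
    """Expand e.g. 'arwdDxtA' into privilege names; a '*' after a letter
    marks the grant option for that privilege."""
    result = []
    for i, ch in enumerate(letters):
        if ch == "*":
            continue  # already handled together with the preceding letter
        name = PRIVILEGE_LETTERS[ch]
        if i + 1 < len(letters) and letters[i + 1] == "*":
            name += " (with grant option)"
        result.append(name)
    return result

print(decode_privileges("arwdDxtA"))  # the ALL PRIVILEGES combination
print(decode_privileges("r*w"))      # SELECT grantable, plus UPDATE
```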
| Parameter | Description |
|---|---|
| \a | Toggles between unaligned and aligned output mode. |
| \C [STRING] | Sets the title printed above query result tables, or cancels the title if no argument is given. |
| \f [STRING] | Sets the field separator for unaligned query output. |
| \H | Toggles HTML query output format. |
| \pset NAME [VALUE] | Sets options affecting the output of query result tables. For the values of NAME, see Table 7. |
| \t [on\|off] | Toggles the display of output column names and the row count footer. |
| \T [STRING] | Specifies attributes to be placed within the table tag in HTML output format. If no argument is given, the attributes are unset. |
| \x [on\|off\|auto] | Switches expanded table formatting mode. |
| Option | Description | Value Range |
|---|---|---|
| border | The value must be a number. In general, a larger number produces wider borders and more lines around the table. | - |
| expanded (or x) | Switches between regular and expanded format. | - |
| fieldsep | Specifies the field separator to use in unaligned output format, so you can create tab- or comma-separated output for other programs. To set a tab as the field separator, type \pset fieldsep '\t'. The default field separator is a vertical bar (\|). | - |
| fieldsep_zero | Sets the field separator used in unaligned output format to a zero byte. | - |
| footer | Enables or disables the display of table footers. | - |
| format | Selects the output format. Unique abbreviations are allowed; a single letter is sufficient. | - |
| null | Sets a character string to print in place of a null value. | By default, nothing is printed, which can easily be mistaken for an empty string. |
| numericlocale | Enables or disables the display of a locale-specific character to separate groups of digits to the left of the decimal marker. If this option is omitted, the default separator is displayed. | - |
| pager | Controls the use of a pager for query and gsql help output. If the PAGER environment variable is set, the output is piped to the specified program; otherwise a platform-dependent default is used. | - |
| recordsep | Specifies the record separator to use in unaligned output format. | - |
| recordsep_zero | Sets the record separator used in unaligned output format to a zero byte. | - |
| tableattr (or T) | Specifies attributes to be placed inside the HTML table tag in HTML output format (such as cellpadding or bgcolor). Note that border does not belong here, because it is already handled by \pset border. If no value is given, the table attributes are unset. | - |
| title | Specifies the table title for any subsequently printed tables; useful for giving your output descriptive tags. If no value is given, the title is unset. | - |
| tuples_only (or t) | Enables or disables tuples-only mode. Full display shows extra information such as column headers, titles, and footers; in tuples-only mode, only the table data is displayed. | - |
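To make fieldsep and recordsep concrete, here is a small sketch in plain Python (an illustration, not gsql code) of how rows are rendered in unaligned format: the fields of each row are joined by fieldsep and rows are joined by recordsep, which is why \pset fieldsep ',' yields comma-separated output.

```python
# Render query rows the way gsql's unaligned format does: fields joined by
# fieldsep (default '|'), rows joined by recordsep (default newline).
def render_unaligned(rows, fieldsep="|", recordsep="\n"):
    return recordsep.join(
        fieldsep.join(str(field) for field in row) for row in rows
    )

rows = [(1, "Jack"), (2, "Tom")]
print(render_unaligned(rows))                 # default: 1|Jack / 2|Tom
print(render_unaligned(rows, fieldsep=","))   # comma-separated output
```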
| Parameter | Description | Value Range |
|---|---|---|
| \c[onnect] [DBNAME\|- USER\|- HOST\|- PORT\|-] | Connects to a new database (the current database is gaussdb). If a database name contains more than 63 bytes, only the first 63 bytes are valid and used for the connection; however, the name displayed in the gsql CLI is still the untruncated name. NOTE: If the database login user is changed during reconnection, you need to enter the new user's password. The maximum password length is 999 bytes, restricted by the maximum value of the GUC parameter password_max_length. | - |
| \encoding [ENCODING] | Sets the client character set encoding. | Without an argument, this command shows the current encoding. |
| \conninfo | Outputs information about the current database connection. | - |
| Parameter | Description | Value Range |
|---|---|---|
| \cd [DIR] | Changes the current working directory. | An absolute or relative path that complies with the OS path naming convention |
| \setenv NAME [VALUE] | Sets the environment variable NAME to VALUE. If VALUE is not provided, the environment variable is unset. | - |
| \timing [on\|off] | Toggles the display of how long each SQL statement takes, in milliseconds. | - |
| \! [COMMAND] | Escapes to a separate Unix shell or runs a Unix command. | - |
| Parameter | Description |
|---|---|
| \prompt [TEXT] NAME | Prompts the user to supply text, which is assigned to the variable NAME. |
| \set [NAME [VALUE]] | Sets the internal variable NAME to VALUE. If more than one value is provided, NAME is set to the concatenation of all of them. If only the name is provided, the variable is set to an empty value. Some common variables are treated specially by gsql; they consist of uppercase letters, digits, and underscores. Table 11 lists the variables that are treated differently from ordinary variables. |
| \set-multi NAME [VALUE] \end-multi | Sets the internal variable NAME to a VALUE that may consist of multiple lines of text. When \set-multi is used, the second parameter must be provided. For details, see the \set-multi example below. NOTE: Meta-commands between \set-multi and \end-multi are ignored. |
| \unset NAME | Deletes the gsql variable NAME. |
\set-multi meta-command example

The file test.sql is used as an example:

```
\set-multi multi_line_var
select
    id,name
from
    student;
\end-multi
\echo multi_line_var is "${multi_line_var}"
\echo -------------------------
\echo result is
${multi_line_var}
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
multi_line_var is "select
    id,name
from
    student; "
-------------------------
result is
 id | name
----+-------
  1 | Jack
  2 | Tom
  3 | Jerry
  4 | Danny
(4 rows)
```

The \set-multi ... \end-multi command sets the variable multi_line_var to an SQL statement, which is then obtained through dynamic variable parsing.

The file test.sql is used as an example:

```
\set-multi multi_line_var
select 1 as id;
select 2 as id;
\end-multi
\echo multi_line_var is "${multi_line_var}"
\echo -------------------------
\echo result is
${multi_line_var}
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
multi_line_var is "select 1 as id;
select 2 as id;"
-------------------------
result is
 id
----
  1
(1 row)

 id
----
  2
(1 row)
```

The \set-multi ... \end-multi command sets the variable multi_line_var to two SQL statements, which are then obtained through dynamic variable parsing. Because the content of the variable ends with a semicolon (;), gsql sends the SQL statements and prints their execution results.
| Command | Description | Value Range |
|---|---|---|
| \set VERBOSITY value | Controls the verbosity of error reports; can be set to default, verbose, or terse. | default, verbose, terse |
| \set ON_ERROR_STOP value | If this variable is set, script execution stops immediately on error. If the script was invoked from another script, that script stops as well. If the outermost script was invoked with the -f option rather than from an interactive gsql session, gsql returns error code 3 to distinguish this case from fatal errors, which are reported with error code 1. | on/off, true/false, yes/no, 1/0 |
| \set RETRY [retry_times] | Determines whether to retry statements whose execution fails. The parameter retry_times specifies the maximum number of retries; the default is 5 and the valid range is 5 to 10. If the retry function is already enabled, running \set RETRY again disables it. The configuration file retry_errcodes.conf lists the errors that trigger a retry; it resides in the same directory as the gsql executable, is maintained by the system, and cannot be modified by users. The retry function covers 13 error scenarios. If an error occurs, gsql queries the connection status of all CNs and DNs; if a connection is abnormal, gsql sleeps for one minute and retries, which covers most primary/standby switchover scenarios. | retry_times: 5 to 10 |
| Parameter | Description |
|---|---|
| \lo_list | Shows a list of all GaussDB(DWS) large objects stored in the database, along with their comments. |
| Parameter | Description | Value Range |
|---|---|---|
| \if EXPR \elif EXPR \else \endif | This set of meta-commands implements (possibly nested) conditional blocks: \if and \elif evaluate an expression and execute the following commands if it is true, \else provides a fallback branch, and \endif closes the block. | - |
| \goto LABEL \label LABEL | This set of meta-commands implements unconditional jumps: \goto LABEL transfers execution to the position marked by \label LABEL. NOTE: Statements skipped over by \goto are not executed. | - |
| \for \loop \exit-for \end-for | This set of meta-commands implements loops: the SQL statement between \for and \loop is executed, and the commands between \loop and \end-for run once for each row of the result set, with column values bound to variables of the same names. \exit-for exits the loop early. | - |
An example of using flow control meta-commands is as follows.

The file test.sql is used as an example:

```
SELECT 'Jack' AS "Name";

\if ${ERROR}
    \echo 'An error occurred in the SQL statement'
    \echo ${LAST_ERROR_MESSAGE}
\elif '${Name}' == 'Jack'
    \echo 'I am Jack'
\else
    \echo 'I am not Jack'
\endif
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
Name
------
 Jack
(1 row)

I am Jack
```

The result shows that the first SQL statement succeeded and the Name variable was set, so the \elif branch was executed and the output is I am Jack. For the usage of the special variables ERROR and LAST_ERROR_MESSAGE, see Table 2.

The file test.sql is used as an example:

```
\set Name 'Jack'
\set ID 1002

-- Parameters inside single quotation marks (') are treated as strings for comparison.
\if '${Name}' != 'Jack'
    \echo 'I am not Jack'
-- Without single quotation marks ('), parameters are treated as numbers for comparison.
\elif ${ID} > 1000
    \echo 'Jack\'id is bigger than 1000'
\else
    \echo 'error'
\endif
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
Jack'id is bigger than 1000
```

If single quotation marks (') are used on one side of the operator but not the other, a string is compared with a number. Such a comparison is not supported and an error is reported:

```
postgres=> \set Name 'Jack'
postgres=> \if ${Name} == 'Jack'
ERROR: left[Jack] is a string without quote or number, and right['Jack'] is a string with quote, \if or \elif does not support this expression.
WARNING: The input with quote are treated as a string, and the input without quote are treated as a number.
postgres@> \endif
```
The test.sql file is an example of comparing strings:

```
\set COMPARE_STRATEGY natural
SELECT 'Jack' AS "Name";

-- The comparison result is equivalent to that of '${Name}' == 'Jack'.
\if ${Name} == 'Jack'
    \echo 'I am Jack'
\else
    \echo 'I am not Jack'
\endif
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
Name
------
 Jack
(1 row)

I am Jack
```

The test.sql file is an example of comparing numbers:

```
\set COMPARE_STRATEGY natural
SELECT 1022 AS id;

-- If ${id} == '01022' is used, the result is not equal, because two strings are compared.
\if ${id} == 01022
    \echo 'id is 1022'
\else
    \echo 'id is not 1022'
\endif
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
id
------
 1022
(1 row)

id is 1022
```

Examples of comparison errors are shown below:

```
-- One side of the operator cannot be identified as a string or number.
postgres=> \set COMPARE_STRATEGY natural
postgres=> \if ${Id} > 123sd
ERROR: The right[123sd] can not be treated as a string or a number. A numeric string should contain only digits and one decimal point, and a string should be enclosed in quote or contain dynamic variables, please check it.
-- Numbers on one side of the operator cannot be correctly converted.
postgres=> \set COMPARE_STRATEGY natural
postgres=> \if ${Id} <> 11101.1.1
ERROR: The right[11101.1.1] can not be treated as a string or a number. A numeric string should contain only digits and one decimal point, and a string should be enclosed in quote or contain dynamic variables, please check it.
```
The file test.sql is used as an example:

```
\set COMPARE_STRATEGY equal
SELECT 'Jack' AS "Name";

\if ${ERROR}
    \echo 'An error occurred in the SQL statement'
-- If the value is set to equal, only equality comparison is supported and quotes are not treated as delimiters. The following comparison is equivalent to ${Name} == Jack.
\elif '${Name}' == 'Jack'
    \echo 'I am Jack'
\else
    \echo 'I am not Jack'
\endif
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
Name
------
 Jack
(1 row)

I am Jack
```
The file test.sql is used as an example:

```
\set Name Tom

\goto TEST_LABEL
SELECT 'Jack' AS "Name";

\label TEST_LABEL
\echo ${Name}
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
Tom
```

The result shows that the \goto meta-command jumps directly to the \echo command without re-assigning a value to the variable Name.

The file test.sql is used as an example:

```
\set Count 1

\label LOOP
\if ${Count} != 3
    SELECT ${Count} + 1 AS "Count";
    \goto LOOP
\endif

\echo Count = ${Count}
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
 Count
-------
     2
(1 row)

 Count
-------
     3
(1 row)

Count = 3
```

The result shows that a simple loop can be implemented by combining an \if conditional block with \goto and \label.
To demonstrate this function, the following example data is used:

```
create table student (id int, name varchar(32));
insert into student values (1, 'Jack');
insert into student values (2, 'Tom');
insert into student values (3, 'Jerry');
insert into student values (4, 'Danny');

create table course (class_id int, class_day varchar(5), student_id int);
insert into course values (1004, 'Fri', 2);
insert into course values (1003, 'Tue', 1);
insert into course values (1003, 'Tue', 4);
insert into course values (1002, 'Wed', 3);
insert into course values (1001, 'Mon', 2);
```

\for loop sample file test.sql:

```
\for
select id, name from student order by id limit 3 offset 0
\loop
    \echo -[ RECORD ]+-----
    \echo id '\t'| ${id}
    \echo name '\t'| ${name}
\end-for
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
-[ RECORD ]+-----
id | 1
name | Jack
-[ RECORD ]+-----
id | 2
name | Tom
-[ RECORD ]+-----
id | 3
name | Jerry
```

The result shows that the loop block traverses the result set of the SQL statement. More statements can appear between \loop and \end-for to implement complex logic.

If the SQL statement used as the loop condition fails or its result set is empty, the statements between \loop and \end-for are not executed.

The file test.sql is used as an example:

```
\for
select id, name from student_error order by id limit 3 offset 0
\loop
    \echo -[ RECORD ]+-----
    \echo id '\t'| ${id}
    \echo name '\t'| ${name}
\end-for
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
gsql:test.sql:3: ERROR: relation "student_error" does not exist
LINE 1: select id, name from student_error order by id limit 3 offse...
                             ^
```

The output shows that the student_error table does not exist, so the SQL statement fails and the statements between \loop and \end-for are not executed.
The file test.sql is used as an example:

```
\for
select id, name from student order by id
\loop
    \echo ${id} ${name}
    \if ${id} == 2
        \echo find id(2), name is ${name}
        \exit-for
    \endif
\end-for
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
1 Jack
2 Tom
find id(2), name is Tom
```

Although the student table contains more than two rows, when id reaches 2 the \exit-for command exits the loop; this is typically used together with an \if conditional block.

The file test.sql is used as an example:

```
\for
select id, name from student order by id limit 2 offset 0
\loop
    \echo ${id} ${name}
    \for
        select
            class_id, class_day
        from course
        where student_id = ${id}
        order by class_id
    \loop
        \echo ' '${class_id}, ${class_day}
    \end-for
\end-for
```

Execution result of gsql -d gaussdb -p 25308 --dynamic-param -f test.sql:

```
1 Jack
 1003, Tue
2 Tom
 1001, Mon
 1004, Fri
```

The two-level loop retrieves the course information of Jack and Tom from the course table.
The various \d commands accept a PATTERN parameter to specify the object name to be displayed. In the simplest case, a pattern is just the exact name of the object. The characters within a pattern are normally folded to lower case, as in SQL names. For example, \dt FOO displays the table named foo. As in SQL names, placing double quotation marks (") around a pattern prevents it from being folded to lower case. If you need to include a double quotation mark (") in a pattern, write it as a pair of double quotation marks ("") within a double-quote sequence, in accordance with the rules for SQL quoted identifiers. For example, \dt "FOO""BAR" displays the table named FOO"BAR, not foo"bar. Unlike the normal rules for SQL names, you can put double quotation marks around just part of a pattern. For example, \dt FOO"FOO"BAR displays the table named fooFOObar.

Whenever the PATTERN parameter is omitted completely, the \d commands display all objects that are visible in the current schema search path, which is equivalent to using an asterisk (*) as the pattern. An object is regarded as visible if it can be referenced by name without explicit schema qualification. To see all objects in the database regardless of visibility, use a dot within double quotation marks (*.*) as the pattern.

Within a pattern, the asterisk (*) matches any sequence of characters (including none) and a question mark (?) matches any single character. This notation is comparable to Unix shell file name patterns. For example, \dt int* displays tables whose names begin with int. Within double quotation marks, however, the asterisk (*) and the question mark (?) lose these special meanings and are matched literally.

A pattern that contains a dot (.) is interpreted as a schema name pattern followed by an object name pattern. For example, \dt foo*.*bar* displays all tables whose names include bar in schemas whose names start with foo. If no dot appears, the pattern matches only visible objects in the current schema search path. Again, a dot within double quotation marks loses its special meaning and is matched literally.

Advanced users can use regular-expression notations, such as character classes. For example, [0-9] can be used to match any digit. All regular-expression special characters work as specified in "POSIX regular expressions" in the Developer Guide, except that a dot (.) is taken as a separator (see above), an asterisk (*) is translated to the regular expression .*, and a question mark (?) is translated to a single dot. You can emulate these pattern characters where needed by writing ? for ., (R+|) for R*, or (R|) for R?. The dollar sign ($) does not need to work as a regular-expression character, because the pattern must match the whole name, unlike the usual interpretation of regular expressions. In other words, a dollar sign ($) is automatically appended to the pattern. If you do not want a pattern to be anchored, write an asterisk (*) at its beginning or end. All regular-expression special characters within double quotation marks lose their special meanings and are matched literally. Regular-expression special characters in operator name patterns (such as the argument of \do) are also matched literally.
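The translation described above can be sketched in a few lines of ordinary Python (an illustration only, not gsql code). The sketch handles only the * and ? translation and the automatic anchoring; character classes and other advanced regular-expression notations are not covered.

```python
import re

# Translate a gsql \d PATTERN into the anchored regular expression it is
# matched against: '*' becomes '.*', '?' becomes '.', and the pattern is
# anchored at both ends so it must match the whole object name.
def pattern_to_regex(pattern: str) -> str:
    out = []
    for ch in pattern:
        if ch == "*":
            out.append(".*")
        elif ch == "?":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return "^" + "".join(out) + "$"

print(pattern_to_regex("int*"))                            # ^int.*$
print(bool(re.match(pattern_to_regex("int*"), "integer")))  # name begins with "int"
print(bool(re.match(pattern_to_regex("int*"), "point")))    # no match: not anchored mid-name
```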
Problems in this scenario are difficult to locate. Try the Linux strace command:

```
strace gsql -U MyUserName -W {password} -d postgres -h 127.0.0.1 -p 23508 -r -c '\q'
```

The database connection process is then printed on the screen. If the following statement takes a long time to run:

```
sendto(3, "Q\0\0\0\25SELECT VERSION()\0", 22, MSG_NOSIGNAL, NULL, 0) = 22
poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}])
```

it indicates that the SELECT VERSION() statement ran slowly.

After connecting to the database, you can run explain performance select version() to find out why the initialization statement ran slowly. For details, see "Introduction to the SQL Execution Plan" in the Developer Guide.
In an uncommon scenario, the disk of the machine hosting the CN is full or faulty, which affects queries and causes user authentication to fail, so the connection process hangs. To solve this problem, clear the CN's data disk space.

Follow the same steps as for troubleshooting slow initialization statements, using strace. If the following call runs slowly:

```
connect(3, {sa_family=AF_FILE, path="/home/test/tmp/gaussdb_llt1/.s.PGSQL.61052"}, 110) = 0
```

or

```
connect(3, {sa_family=AF_INET, sin_port=htons(61052), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
```

it indicates that the physical connection between the client and the database is being established slowly. In this case, check whether the network is unstable or under high load.
This problem generally occurs because an unreachable IP address or port number was specified. Check whether the values of the -h and -p parameters are correct.

This problem generally occurs because an incorrect user name or password was entered. Contact the database administrator to check whether the user name and password are correct.

This problem occurs because the version of libpq.so used in the environment does not match that of gsql. Run ldd gsql to check the version of the loaded libpq.so, and then load the correct libpq.so by modifying the LD_LIBRARY_PATH environment variable.

This problem occurs because the version of libpq.so used in the environment does not match that of gsql (or a PostgreSQL libpq.so exists in the environment). Run ldd gsql to check the version of the loaded libpq.so, and then load the correct libpq.so by modifying the LD_LIBRARY_PATH environment variable.
Is the server running on host "xx.xxx.xxx.xxx" and accepting TCP/IP connections on port xxxx?

This problem is caused by network connection faults. Check the network connection between the client and the database server. If you cannot ping the database server from the client, the network connection is abnormal; contact the network administrators for troubleshooting:

```
ping -c 4 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
From 10.10.10.1: icmp_seq=2 Destination Host Unreachable
From 10.10.10.1 icmp_seq=2 Destination Host Unreachable
From 10.10.10.1 icmp_seq=3 Destination Host Unreachable
From 10.10.10.1 icmp_seq=4 Destination Host Unreachable
--- 10.10.10.1 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms
```
DETAIL: User does not have CONNECT privilege.

This problem occurs because the user lacks permission to access the database. To solve this problem, perform the following steps:

```
gsql -d postgres -U dbadmin -p 8000
```

Common misoperations can also cause a database connection failure, such as entering an incorrect database name, user name, or password. In these cases, the client tool displays the corresponding error message:

```
gsql -d postgres -p 8000
gsql: FATAL: database "postgres" does not exist

gsql -d postgres -U user1 -W gauss@789 -p 8000
gsql: FATAL: Invalid username/password,login denied.
```
This problem occurs because the number of system connections exceeds the allowed maximum. Contact the database administrator to release unnecessary sessions.

You can check the number of connections as described in Table 1.

You can view session status in the PG_STAT_ACTIVITY view and release unnecessary sessions with the pg_terminate_backend function:

```
select datid,pid,state from pg_stat_activity;
 datid |       pid       | state
-------+-----------------+--------
 13205 | 139834762094352 | active
 13205 | 139834759993104 | idle
(2 rows)
```

The value of pid is the thread ID of the session. Terminate the session using its thread ID:

```
SELECT PG_TERMINATE_BACKEND(139834759993104);
```

If information similar to the following is displayed, the session was terminated successfully:

```
PG_TERMINATE_BACKEND
----------------------
 t
(1 row)
```
- View the upper limit of a user's connections. For user user1, -1 indicates that no connection limit is set:

  ```
  SELECT ROLNAME,ROLCONNLIMIT FROM PG_ROLES WHERE ROLNAME='user1';
   rolname | rolconnlimit
  ---------+--------------
   user1   |           -1
  (1 row)
  ```

- View the number of connections already used by a user. Here, user1 has used 1 connection:

  ```
  SELECT COUNT(*) FROM V$SESSION WHERE USERNAME='user1';
   count
  -------
       1
  (1 row)
  ```

- View the upper limit of connections to a database. For postgres, -1 indicates that no connection limit is set:

  ```
  SELECT DATNAME,DATCONNLIMIT FROM PG_DATABASE WHERE DATNAME='postgres';
   datname  | datconnlimit
  ----------+--------------
   postgres |           -1
  (1 row)
  ```

- View the number of connections already used by a database. Here, postgres has used 1 connection:

  ```
  SELECT COUNT(*) FROM PG_STAT_ACTIVITY WHERE DATNAME='postgres';
   count
  -------
       1
  (1 row)
  ```

- View the total number of connections used by all users:

  ```
  SELECT COUNT(*) FROM V$SESSION;
   count
  -------
      10
  (1 row)
  ```
When gsql initiates a connection request to the database, a 5-minute timeout is applied. If the database cannot authenticate the client request and client identity within this period, gsql exits the connection process for the current session and reports the above error.

Generally, this problem is caused by an incorrect host or port (the xxx part of the error message) specified by the -h and -p parameters, so communication fails. Occasionally it is caused by network faults. To resolve it, check whether the host name and port number of the database are correct.

Check whether the CN logs contain information similar to "FATAL: cipher file "/data/coordinator/server.key.cipher" has group or world access". This error is usually caused by tampering with the permissions of data directories or key files. Correct the permissions by referring to the corresponding files on other normal instances.

In pg_hba.conf of the target CN, the authentication mode for the current client's IP address is set to gss, but this authentication algorithm cannot authenticate clients. Change the authentication algorithm to sha256 and try again. For details, see "Configuration File Reference" in the Developer Guide.

Generally, this problem is caused by changes to the shared dynamic libraries (.so files on Linux) that a process loads while it is running. If the process binary file changes, the execution code the OS loads, or the entry for loading a dependent library, changes accordingly; the OS then kills the process for protection, generating a core dump file.

To resolve this problem, try again. In addition, do not run service programs in a cluster during O&M operations such as upgrades, to prevent this problem from being caused by file replacement during the upgrade.

A possible stack in the core dump file contains dl_main and its callees, which the OS uses to initialize a process and load shared dynamic libraries. If the process has been initialized but the shared dynamic libraries have not yet been loaded, the process cannot be considered fully started.
gds imports and exports data for GaussDB(DWS).

```
gds [ OPTION ] -d DIRECTORY
```

The -d and -H parameters are mandatory; OPTION is optional. gds serves the file data in DIRECTORY for GaussDB(DWS) to access.

Before starting GDS, ensure that the GDS version is consistent with the database version; otherwise the database reports an error and terminates the import or export. You can check the version with the -V parameter.

Sets the directory of the data files to be imported. If the GDS process has sufficient permission, the directory specified by -d is created automatically.
Sets the IP address and port that GDS listens on.

Value range of the IP address: a valid IP address. Default value: 127.0.0.1

Value range of the port: a positive integer from 1024 to 65535. Default value: 8098
+Set the log file. Automatic log splitting is supported: after the -R parameter is set, GDS creates a new log file whenever the current one reaches the specified size, preventing a single log file from growing too large.
+Generation rule: By default, GDS identifies only files with the .log extension and generates new log files accordingly.
+For example, if -l is set to gds.log and -R is set to 20 MB, a gds-2020-01-17_115425.log file will be created when the size of gds.log reaches 20 MB.
+If the log file name specified by -l does not end with .log, for example, gds.log.txt, the name of the new log file is gds.log-2020-01-19_122739.txt.
+When GDS is started, it checks whether the log file specified by -l exists. If the log file exists, a new log file is generated based on the current date and time, and the original log file is not overwritten.
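The naming rule shown above can be sketched as a small shell helper (hypothetical, not part of GDS itself): strip the last extension, insert the timestamp, then re-append the extension.

```shell
#!/bin/sh
# Hypothetical helper illustrating the rotated-log naming rule above;
# GDS itself derives the timestamp from the current date and time.
rotated_name() {
  file="$1"; ts="$2"
  stem="${file%.*}"      # name without the last extension
  ext="${file##*.}"      # last extension only
  printf '%s-%s.%s\n' "$stem" "$ts" "$ext"
}

rotated_name gds.log 2020-01-17_115425       # gds-2020-01-17_115425.log
rotated_name gds.log.txt 2020-01-19_122739   # gds.log-2020-01-19_122739.txt
```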
+Set the hosts that are allowed to connect to GDS. This parameter must be in CIDR format and is supported on Linux only. To configure multiple network segments, separate them with commas (,), for example, -H 10.10.0.0/24,10.10.5.0/24.
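For the common /24 segments shown in the examples, the allow-list check that -H implies can be sketched as follows (a hypothetical illustration only; GDS performs full CIDR matching internally).

```shell
#!/bin/sh
# Hypothetical sketch of a /24 allow-list check like the one -H configures.
# Only the /24 case is handled here, for clarity.
in_slash24() {
  client="$1"; segment="$2"        # e.g. 10.10.0.5 and 10.10.0.0/24
  net="${segment%/24}"             # drop the prefix length
  [ "${client%.*}" = "${net%.*}" ] # compare the first three octets
}

in_slash24 10.10.0.5 10.10.0.0/24 && echo allowed || echo denied   # allowed
in_slash24 10.10.9.5 10.10.0.0/24 && echo allowed || echo denied   # denied
```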
+Set the path for saving error logs generated during data import.
+Default value: data file directory
+Set the upper size limit of error logs generated during data import.
+Value range: 0 < size < 1 TB. The value must be a positive integer plus the unit. The unit can be KB, MB, or GB.
+Set the upper limit of the exported file size.
+Value range: 1 MB < size < 100 TB. The value must be a positive integer plus the unit. The unit can be KB, MB, or GB. If KB is used, the value must be greater than 1024 KB.
+Set the maximum size of a single GDS log file specified by -l.
+Value range: 1 MB < size < 100 TB. The value must be a positive integer plus the unit. The unit can be KB, MB, or GB. If KB is used, the value must be greater than 1024 KB.
+Default value: 16 MB
+Set the number of concurrent worker threads for import and export.
+Value range: a positive integer between 0 and 200 (inclusive).
+Default value: 8
+Recommended value: 2 x the number of CPU cores in the common file import and export scenario; in the pipe file import and export scenario, set the value to 64.
+If a large number of pipe files are imported and exported concurrently, the value of this parameter must be greater than or equal to the number of concurrent services.
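The sizing guidance above can be sketched in shell (assumes Linux, where getconf reports the online core count; the 2 x cores rule applies to the common file scenario).

```shell
#!/bin/sh
# Sketch: derive a -t value per the guidance above, capped at the 200 maximum
# that -t accepts.
cores=$(getconf _NPROCESSORS_ONLN)
threads=$((cores * 2))
[ "$threads" -gt 200 ] && threads=200
echo "Suggested -t for file import/export: $threads"
```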
+Set the status file. This parameter is supported on Linux only.
+Run GDS in the background. This parameter is supported on Linux only.
+Recursively traverse files in subdirectories of the specified directory. This parameter is supported on Linux only.
+Use the SSL authentication mode to communicate with clusters.
+Before using the SSL authentication mode, specify the path for storing the authentication certificates.
+Set the debug log level of GDS to control the output of GDS debug logs.
+Value range: 0, 1, and 2
+Default value: 0
+Specify the timeout period for GDS to wait when operating on a pipe.
+Value range: greater than 1s. Use a positive integer followed by a time unit: seconds (s), minutes (m), or hours (h). For example, 3600s, 60m, and 1h all indicate one hour.
+Default value: 1h/60m/3600s
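The unit handling above can be sketched as a small shell helper (hypothetical, for illustration only) that normalizes a GDS-style timeout value to seconds, confirming that 3600s, 60m, and 1h denote the same duration.

```shell
#!/bin/sh
# Hypothetical sketch: normalize a timeout value of the form Ns, Nm, or Nh
# (as accepted by the pipe-timeout parameter above) to seconds.
to_seconds() {
  n="${1%[smh]}"                  # numeric part
  case "$1" in
    *s) echo "$n" ;;
    *m) echo $((n * 60)) ;;
    *h) echo $((n * 3600)) ;;
  esac
}

to_seconds 3600s   # 3600
to_seconds 60m     # 3600
to_seconds 1h      # 3600
```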
+Data files are stored in the /data directory, the listening IP address is 192.168.0.90, and the listening port is 5000.
+gds -d /data/ -p 192.168.0.90:5000 -H 10.10.0.1/24+
Data files are stored in subdirectories of the /data directory, the listening IP address is 192.168.0.90, and the listening port is 5000.
+gds -d /data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 -r+
Data files are stored in the /data directory, the listening IP address is 192.168.0.90, and the listening port is 5000. GDS runs in the background, the log file is saved as /log/gds_log.txt, and the number of concurrent import worker threads is 32.
+gds -d /data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 -l /log/gds_log.txt -D -t 32+
Data files are stored in the /data directory, the listening IP address is 192.168.0.90, and the listening port is 5000. Only nodes with IP addresses in the 10.10.0.* segment can connect.
+gds -d /data/ -p 192.168.0.90:5000 -H 10.10.0.1/24+
Data files are stored in the /data/ directory, the listening IP address is 192.168.0.90, and the listening port is 5000. Only nodes with IP addresses in the 10.10.0.* segment can connect. GDS communicates with the cluster using SSL authentication, and the certificate files are stored in the /certfiles/ directory.
+gds -d /data/ -p 192.168.0.90:5000 -H 10.10.0.1/24 --enable-ssl --ssl-dir /certfiles/+