Before performing the following operations, ensure that you have configured a storage-compute decoupled cluster by referring to Configuring a Storage-Compute Decoupled Cluster (Agency) or Configuring a Storage-Compute Decoupled Cluster (AK/SK).
source ${client_home}/bigdata_env
kinit User performing Hive operations
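For example, assuming the client is installed in `/opt/client` and the operating user is `hiveuser` (both names are placeholders for your environment), the two commands look like this:

```shell
# Load the client environment variables
# (installation path is an assumption; use your actual client directory).
source /opt/client/bigdata_env
# Authenticate as the user who performs the Hive operations
# ("hiveuser" is a placeholder for your actual username).
kinit hiveuser
```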
In the left navigation tree, choose Hive > Customization. In the customized configuration items, add dfs.namenode.acls.enabled to the hdfs.site.customized.configs parameter and set its value to false.
beeline
For example, run the following command to create the table test in obs://OBS parallel file system name/user/hive/warehouse/Database name/Table name:
create table test(name string) location "obs://OBS parallel file system name/user/hive/warehouse/Database name/Table name";
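To confirm that the table actually reads and writes through OBS, you can insert a row and query it back in beeline (the table name follows the example above):

```sql
insert into test values('obs_check');
select * from test;
-- The Location field in the output should show the obs:// path
desc formatted test;
```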
In Ranger, add the user who performs the operation to a URL policy. Set the URL to the complete path of the object on OBS, and select the Read and Write permissions.
vim /opt/Bigdata/client/Hive/config/hivemetastore-site.xml
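In hivemetastore-site.xml, the default warehouse directory is controlled by the `hive.metastore.warehouse.dir` property. A typical entry looks like the following (the value shown is illustrative, not your cluster's actual path):

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- Illustrative value; your cluster's actual warehouse path may differ -->
  <value>hdfs://hacluster/user/hive/warehouse</value>
</property>
```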
beeline
create table test(name string);
desc formatted test;
If the database location points to HDFS, tables created in the database without a specified location also point to HDFS. To change this default table creation behavior, change the location of the database to OBS by performing the following operations:
show create database obs_test;
alter database obs_test set location 'obs://OBS parallel file system name/user/hive/warehouse/Database name';
Run the show create database obs_test command to check whether the database location points to OBS.
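After the database location is changed, new tables created without an explicit location inherit the OBS path. A quick check (the table name `obs_check_tbl` is hypothetical):

```sql
use obs_test;
create table obs_check_tbl(id int);
-- The Location field should now show an obs:// path
desc formatted obs_check_tbl;
```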
alter table user_info set location 'obs://OBS parallel file system name/user/hive/warehouse/Database name/Table name';
If the table contains data, migrate the original data file to the new location.
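One way to migrate the existing data files is Hadoop DistCp. The paths below are placeholders matching the examples in this section; adjust them to your environment:

```shell
# Copy the table's data files from the old HDFS location
# to the new OBS location (both paths are placeholders).
hadoop distcp hdfs://hacluster/user/hive/warehouse/Database_name/user_info \
  obs://OBS_parallel_file_system_name/user/hive/warehouse/Database_name/Table_name
```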