This section describes how to use the HDFS client in O&M or service scenarios.
The client installation directory /opt/hadoopclient used in the following operations is only an example. Replace it with the actual installation directory.
cd /opt/hadoopclient          # go to the client installation directory
source bigdata_env            # load the client environment variables
kinit <Component service user>  # authenticate (required only if Kerberos authentication is enabled)
hdfs dfs -ls /                # verify the client by listing the HDFS root directory
The following table lists common HDFS client commands.
For more commands, see https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/CommandsManual.html#User_Commands.
Command | Description | Example
---|---|---
hdfs dfs -mkdir <folder> | Creates a folder. | hdfs dfs -mkdir /tmp/mydir
hdfs dfs -ls <folder> | Lists the contents of a folder. | hdfs dfs -ls /tmp
hdfs dfs -put <local file> <HDFS path> | Uploads a local file on the client node to the specified HDFS path. | hdfs dfs -put /opt/test.txt /tmp uploads the /opt/test.txt file on the client node to the /tmp directory of HDFS.
hdfs dfs -get <HDFS file> <local path> | Downloads an HDFS file to the specified local path on the client node. | hdfs dfs -get /tmp/test.txt /opt/ downloads the /tmp/test.txt file on HDFS to the /opt directory on the client node.
hdfs dfs -rm -r -f <HDFS folder> | Deletes a folder recursively. | hdfs dfs -rm -r -f /tmp/mydir
If the HDFS client fails to run because the memory it requires exceeds the preset upper limit (128 MB by default), you can raise the client memory limit by modifying CLIENT_GC_OPTS in <Client installation path>/HDFS/component_env. For example, to set the upper limit to 1 GB, set the following value in that file:
CLIENT_GC_OPTS="-Xmx1G"
After the modification, run the following command to make it take effect:
source <Client installation path>/bigdata_env
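The change above can be scripted. The sketch below edits the CLIENT_GC_OPTS line with sed and reloads the file; to keep it self-contained it operates on a local mock of component_env (on a real client, the file lives under the client installation path, and you would reload with source <Client installation path>/bigdata_env):

```shell
# Sketch: raise the HDFS client heap limit. Assumption: component_env
# contains a CLIENT_GC_OPTS="-Xmx..." line, as described in this guide.
# For illustration we edit a local mock instead of the real file.
ENV_FILE=$(mktemp)
echo 'CLIENT_GC_OPTS="-Xmx128m"' > "$ENV_FILE"

# Replace the existing -Xmx value with a 1 GB upper limit.
sed -i 's/CLIENT_GC_OPTS="-Xmx[^"]*"/CLIENT_GC_OPTS="-Xmx1G"/' "$ENV_FILE"

# Reload so the new limit takes effect in the current shell.
. "$ENV_FILE"
echo "$CLIENT_GC_OPTS"

rm -f "$ENV_FILE"
```

The same sed pattern works on the real component_env because it matches whatever -Xmx value is currently set, not a hard-coded 128m.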
By default, logs generated while the HDFS client runs are printed to the console at the INFO level. To enable the DEBUG log level for fault locating, run the following command to export an environment variable:
export HADOOP_ROOT_LOGGER=DEBUG,console
Then run the HDFS Shell command to generate the DEBUG logs.
If you want to print INFO logs again, run the following command:
export HADOOP_ROOT_LOGGER=INFO,console
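The two steps above can be sketched as one shell sequence. HADOOP_ROOT_LOGGER is read each time a hadoop/hdfs command starts, so no restart is needed and the change only affects the current shell session:

```shell
# Enable DEBUG logging for the HDFS commands that follow.
export HADOOP_ROOT_LOGGER=DEBUG,console
# ... run the HDFS shell command to diagnose here, e.g.: hdfs dfs -ls /tmp
echo "$HADOOP_ROOT_LOGGER"

# Switch back to the default INFO level once fault locating is done.
export HADOOP_ROOT_LOGGER=INFO,console
echo "$HADOOP_ROOT_LOGGER"
```

Because the variable is exported per shell, opening a new terminal (or sourcing bigdata_env again) also restores the default behavior.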