proposalbot 3ccfbdc0f0 Changes to mrs_operation-guide from docs/doc-exports#475 (MRS component operatio
Reviewed-by: Kacur, Michal <michal.kacur@t-systems.com>
Co-authored-by: proposalbot <proposalbot@otc-service.com>
Co-committed-by: proposalbot <proposalbot@otc-service.com>
2022-12-09 14:50:38 +00:00


original_name: mrs_01_0828.html

Introduction to HDFS Logs

Log Description

Log path: The default path of HDFS logs is /var/log/Bigdata/hdfs/<Role name>.

  • NameNode: /var/log/Bigdata/hdfs/nn (run logs) and /var/log/Bigdata/audit/hdfs/nn (audit logs)
  • DataNode: /var/log/Bigdata/hdfs/dn (run logs) and /var/log/Bigdata/audit/hdfs/dn (audit logs)
  • ZKFC: /var/log/Bigdata/hdfs/zkfc (run logs) and /var/log/Bigdata/audit/hdfs/zkfc (audit logs)
  • JournalNode: /var/log/Bigdata/hdfs/jn (run logs) and /var/log/Bigdata/audit/hdfs/jn (audit logs)
  • Router: /var/log/Bigdata/hdfs/router (run logs) and /var/log/Bigdata/audit/hdfs/router (audit logs)
  • HttpFS: /var/log/Bigdata/hdfs/httpfs (run logs) and /var/log/Bigdata/audit/hdfs/httpfs (audit logs)
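The default paths above follow a fixed pattern, so they can be combined into a small helper for locating a role's logs on a cluster node. The following is a minimal sketch assuming the default paths listed above; the function name and argument convention are illustrative, not part of the product:

```shell
# Build the default HDFS log directory for a role (nn, dn, zkfc, jn, router, httpfs).
# Assumes the default paths listed above; adjust if your cluster overrides them.
hdfs_log_dir() {
  role="$1"
  kind="${2:-run}"   # kind: "run" or "audit"
  if [ "$kind" = "audit" ]; then
    echo "/var/log/Bigdata/audit/hdfs/${role}"
  else
    echo "/var/log/Bigdata/hdfs/${role}"
  fi
}

# Example (requires access to the node):
#   tail -n 100 "$(hdfs_log_dir nn)"/hadoop-*-namenode-*.log
```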

Log archive rule: The automatic HDFS log compression function is enabled. By default, when the size of a log file exceeds 100 MB, it is automatically compressed into a log file named in the following format: <Original log file name>-<yyyy-mm-dd_hh-mm-ss>.[ID].log.zip. A maximum of 100 latest compressed files are retained. The number of compressed files can be configured on Manager.
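When scripting log cleanup or collection, compressed archives can be told apart from live logs by that naming rule. The following is a hedged sketch: it assumes the ID is a plain number and that the separators match the pattern above, which may vary slightly by version:

```shell
# Hypothetical check: does a file name match the compressed-log naming rule
# <Original log file name>-<yyyy-mm-dd_hh-mm-ss>.[ID].log.zip?
# Assumes a numeric ID; the exact delimiters may differ between versions.
is_archived_log() {
  echo "$1" | grep -Eq -- '-[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}-[0-9]{2}-[0-9]{2}\.[0-9]+\.log\.zip$'
}
```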

Table 1 HDFS log list

Run logs:
  • hadoop-<SSH_USER>-<process_name>-<hostname>.log: HDFS system log, which records most of the logs generated while the HDFS system is running.
  • hadoop-<SSH_USER>-<process_name>-<hostname>.out: Log that records the HDFS running environment information.
  • hadoop.log: Log that records the operations of the Hadoop client.
  • hdfs-period-check.log: Log that records scripts executed periodically, including automatic balancing, data migration, and JournalNode data synchronization detection.
  • <process_name>-<SSH_USER>-<DATE>-<PID>-gc.log: Garbage collection (GC) log file.
  • postinstallDetail.log: Work log generated after installation and before the HDFS service startup.
  • hdfs-service-check.log: Log that records whether the HDFS service starts successfully.
  • hdfs-set-storage-policy.log: Log that records the HDFS data storage policies.
  • cleanupDetail.log: Cleanup log generated when the HDFS service is uninstalled.
  • prestartDetail.log: Log that records cluster operations before the HDFS service startup.
  • hdfs-recover-fsimage.log: Recovery log of the NameNode metadata.
  • datanode-disk-check.log: Log that records the disk status checks during cluster installation and use.
  • hdfs-availability-check.log: Log that records whether the HDFS service is available.
  • hdfs-backup-fsimage.log: Backup log of the NameNode metadata.
  • startDetail.log: Detailed log of the HDFS service startup.
  • hdfs-blockplacement.log: Log that records the placement policy of HDFS blocks.
  • upgradeDetail.log: Upgrade log.
  • hdfs-clean-acls-java.log: Log that records the clearing of deleted roles' ACL information by HDFS.
  • hdfs-haCheck.log: Run log of the script that checks the active/standby state of the NameNode.
  • <process_name>-jvmpause.log: Log that records JVM pauses during process running.
  • hadoop-<SSH_USER>-balancer-<hostname>.log: Run log of HDFS automatic balancing.
  • hadoop-<SSH_USER>-balancer-<hostname>.out: Log that records information about the environment where HDFS automatic balancing is executed.
  • hdfs-switch-namenode.log: Run log of the HDFS active/standby switchover.
  • hdfs-router-admin.log: Run log of mount table management operations.

Tomcat logs:
  • hadoop-omm-host1.out, httpfs-catalina.<DATE>.log, httpfs-host-manager.<DATE>.log, httpfs-localhost.<DATE>.log, httpfs-manager.<DATE>.log, localhost_access_web_log.log: Tomcat run logs.

Audit logs:
  • hdfs-audit-<process_name>.log, ranger-plugin-audit.log: Audit logs that record HDFS operations (such as creating, deleting, modifying, and querying files).
  • SecurityAuth.audit: HDFS security audit log.
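Each audit log entry records the operation in a cmd=... field (see Log Formats). A quick way to summarize activity in an audit file is to count those fields; the following helper is a hypothetical sketch, not a shipped tool, and the log path in the comment is only an example built from the defaults above:

```shell
# Hypothetical helper: count the operations (cmd=...) recorded in an HDFS audit
# log such as hdfs-audit-<process_name>.log, most frequent first.
# Example: audit_top_cmds /var/log/Bigdata/audit/hdfs/nn/<audit log file>
audit_top_cmds() {
  grep -o 'cmd=[^ ]*' "$1" | sort | uniq -c | sort -rn
}
```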

Log Level

Table 2 lists the log levels supported by HDFS. The levels, in descending order of severity, are FATAL, ERROR, WARN, INFO, and DEBUG. Logs whose levels are higher than or equal to the configured level are printed. The higher the configured log level, the fewer logs are recorded.

Table 2 Log levels
Level Description
FATAL Indicates critical error information about system running.
ERROR Indicates error information about system running.
WARN Indicates that an exception occurred during current event processing.
INFO Indicates that the system and events are running properly.
DEBUG Indicates system running and debugging information.
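The filtering rule ("logs at or above the configured level are printed") can be sketched as follows; the function names and numeric ranks are illustrative, assuming the standard severity ordering above:

```shell
# Rank log levels so that FATAL is highest and DEBUG is lowest.
level_rank() {
  case "$1" in
    FATAL) echo 5 ;;
    ERROR) echo 4 ;;
    WARN)  echo 3 ;;
    INFO)  echo 2 ;;
    DEBUG) echo 1 ;;
    *)     echo 0 ;;
  esac
}

# A message is printed only if its level is at or above the configured level.
# Usage: should_print <message_level> <configured_level>
should_print() {
  [ "$(level_rank "$1")" -ge "$(level_rank "$2")" ]
}
```

For example, with the level set to WARN, ERROR messages are printed but INFO messages are not.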

To modify log levels, perform the following operations:

  1. Go to the All Configurations page of HDFS by referring to Modifying Cluster Service Configuration Parameters.

  2. On the left menu bar, select the log menu of the target role.

  3. Select a desired log level.

  4. Save the configuration. In the displayed dialog box, click OK to make the configurations take effect.

    Note

    The configurations take effect immediately without restarting the service.

Log Formats

The following table lists the HDFS log formats.

Table 3 Log formats
Type Format Example
Run log <yyyy-MM-dd HH:mm:ss,SSS> | <Log level> | <Name of the thread that generates the log> | <Message in the log> | <Location where the log event occurs> 2015-01-26 18:43:42,840 | INFO | IPC Server handler 40 on 8020 | Rolling edit logs | org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1096)
Audit log <yyyy-MM-dd HH:mm:ss,SSS> | <Log level> | <Name of the thread that generates the log> | <Message in the log> | <Location where the log event occurs> 2015-01-26 18:44:42,607 | INFO | IPC Server handler 32 on 8020 | allowed=true ugi=hbase (auth:SIMPLE) ip=/10.177.112.145 cmd=getfileinfo src=/hbase/WALs/hghoulaslx410,16020,1421743096083/hghoulaslx410%2C16020%2C1421743096083.1422268722795 dst=null perm=null | org.apache.hadoop.hdfs.server.namenode.FSNamesystem$DefaultAuditLogger.logAuditMessage(FSNamesystem.java:7950)
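Because the fields are separated by " | ", a log line can be split mechanically. The following is a minimal sketch assuming the pipe-delimited format shown in Table 3; the function name is illustrative:

```shell
# Hypothetical parser for the pipe-delimited log format shown above:
# <timestamp> | <level> | <thread> | <message> | <location>
# Prints the first three fields in a labeled form.
parse_run_log_line() {
  echo "$1" | awk -F ' \\| ' '{ printf "time=%s level=%s thread=%s\n", $1, $2, $3 }'
}
```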