<a name="mrs_01_1693"></a><a name="mrs_01_1693"></a>
<h1 class="topictitle1">DataNode Is Normal but Cannot Report Data Blocks</h1>
<div id="body1597735018677"><div class="section" id="mrs_01_1693__s8052e279fdad4c47aeb8257957dec0bb"><h4 class="sectiontitle">Question</h4><p id="mrs_01_1693__a8d7a3604fc594ef09128fa8a1b4b4d08">The DataNode is normal, but cannot report data blocks. As a result, the existing data blocks cannot be used.</p>
</div>
<div class="section" id="mrs_01_1693__sd45d6517a74d408890579902c23338bb"><h4 class="sectiontitle">Answer</h4><p id="mrs_01_1693__a4db44eab305b47268b87ccf7f924198d">This error may occur when the number of data blocks in a data directory exceeds four times the upper limit (4 x 1 MB). And the DataNode generates the following error logs:</p>
<pre class="screen" id="mrs_01_1693__s522ab129667c4824a657f42965b1eb76">2015-11-05 10:26:32,936 | ERROR | DataNode:[[[DISK]file:/srv/BigData/hadoop/data1/dn/]] heartbeating to
vm-210/10.91.8.210:8020 | Exception in BPOfferService for Block pool BP-805114975-10.91.8.210-1446519981645
(Datanode Uuid bcada350-0231-413b-bac0-8c65e906c1bb) service to vm-210/10.91.8.210:8020 | BPServiceActor.java:824
java.lang.IllegalStateException:com.google.protobuf.InvalidProtocolBufferException:Protocol message was too large.May
be malicious.Use CodedInputStream.setSizeLimit() to increase the size limit. at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:369)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:347) at org.apache.hadoop.hdfs.
protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:325) at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.
blockReport(DatanodeProtocolClientSideTranslatorPB.java:190) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:473)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:685) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:745) Caused by:com.google.protobuf.InvalidProtocolBufferException:Protocol message was too large.May be malicious.Use CodedInputStream.setSizeLimit()
to increase the size limit. at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110) at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769) at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462) at com.google.protobuf.
CodedInputStream.readSInt64(CodedInputStream.java:363) at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:363)</pre>
<p id="mrs_01_1693__aa6afed36df34467791109e906cd6da74">The number of data blocks in the data directory is displayed as <strong id="mrs_01_1693__b97755866522149">Metric</strong>. You can monitor its value through <strong id="mrs_01_1693__b58796653022149">http://&lt;datanode-ip&gt;:&lt;http-port&gt;/jmx</strong>. If the value is greater than four times the upper limit (4 x 1 MB), you are advised to configure multiple drives and restart HDFS.</p>
<p id="mrs_01_1693__a7579ccdca8ad48449836cc803c1d0c9a"><strong id="mrs_01_1693__a0a337e1a81d546ceb32c02052ed6cde1">Recovery procedure:</strong></p>
<ol id="mrs_01_1693__oe5858521d97646b29b2e5671e2f9b15d"><li id="mrs_01_1693__l89860edb309a44169dd6bc457f24e3b0">Configure multiple data directories on the DataNode.<p id="mrs_01_1693__a4e0cf2d491344f488d4b2969d14f9078"><a name="mrs_01_1693__l89860edb309a44169dd6bc457f24e3b0"></a><a name="l89860edb309a44169dd6bc457f24e3b0"></a>For example, configure multiple directories on the DataNode where only the <strong id="mrs_01_1693__a7dcf9d2bc46141f8a6db939d1705eb5b">/data1/datadir</strong> directory is configured:</p>
<pre class="screen" id="mrs_01_1693__s27c5c7e7885b478b9c7488ec55741ce3">&lt;property&gt; &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt; &lt;value&gt;/data1/datadir&lt;/value&gt; &lt;/property&gt;</pre>
<p id="mrs_01_1693__ad5d47357b2674e6bb8be901dc287fe90">Configure as follows:</p>
<pre class="screen" id="mrs_01_1693__sac61e83f3a244cf9a21a8a2dcbec7228">&lt;property&gt; &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt; &lt;value&gt;/data1/datadir/,/data2/datadir,/data3/datadir&lt;/value&gt; &lt;/property&gt;</pre>
<div class="note" id="mrs_01_1693__n15a6492c27fd42839263d2d20e07fbaf"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="mrs_01_1693__aa007a430b358436fb580c113bf82f82b">You are advised to configure multiple data directories on multiple disks. Otherwise, performance may be affected.</p>
</div></div>
</li><li id="mrs_01_1693__l6f67245eab4f48eda9d2f17af31e1eaa">Restart the HDFS.</li><li id="mrs_01_1693__lb268ebbc729644bb81725782699572ce">Perform the following operation to move the data to the new data directory:<p id="mrs_01_1693__ac4ca17ffaa674b128b66616147ea70f5"><a name="mrs_01_1693__lb268ebbc729644bb81725782699572ce"></a><a name="lb268ebbc729644bb81725782699572ce"></a><strong id="mrs_01_1693__b2579101651213">mv</strong> <i><span class="varname" id="mrs_01_1693__v0fc8b8ef793640f7ae1d04894ed756d1">/data1/datadir/current/finalized/subdir1 /data2/datadir/current/finalized/subdir1</span></i></p>
</li><li id="mrs_01_1693__la9d2a432ebfa4366a4947e177ec16a16">Restart the HDFS.</li></ol>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="mrs_01_1690.html">FAQ</a></div>
</div>
</div>