<a name="mrs_01_1473"></a>
<h1 class="topictitle1">Why Does an Exception Occur in CarbonData When a Disk Space Quota Is Set for the Storage Directory in HDFS?</h1>
<div id="body1595920217111"><div class="section" id="mrs_01_1473__sd8d6e6804f834be7ad5a256d68fec95e"><h4 class="sectiontitle">Question</h4><p id="mrs_01_1473__a36e5a7f0910a4e86a4e544f616038381">Why does an exception occur in CarbonData when a disk space quota is set for the storage directory in HDFS?</p>
</div>
<div class="section" id="mrs_01_1473__s46da8583cb93405898dba5f9fef5c4bd"><h4 class="sectiontitle">Answer</h4><p id="mrs_01_1473__aaab04e5182974469aac9d3d6c66e1b61">Data is written to HDFS during operations such as creating a table, loading data into a table, and updating a table. If the configured HDFS directory does not have a sufficient disk space quota, the operation fails and throws the following exception:</p>
</div>
<pre class="screen" id="mrs_01_1473__s435700b5c5f24bfc9f4204f7141e8d3f">org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/tenant is exceeded: quota = 314572800 B = 300 MB but diskspace consumed = 402653184 B = 384 MB
    at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
    at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:941)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:745)</pre>
<p id="mrs_01_1473__a0c67483a916b47b2ab939a98f24ea97f">If such exception occurs, configure a sufficient disk space quota for the tenant.</p>
<p id="mrs_01_1473__aa37be78269284e64823504fc45205c75">For example, the minimum quota for a table schema file can be estimated as follows:</p>
<p id="mrs_01_1473__aad4a9dab7ff34d81ac057a3f8609732b">If the HDFS replication factor is 3 and the HDFS default block size is 128 MB, at least 384 MB of disk space quota (number of blocks x block size x replication factor = 1 x 128 MB x 3 = 384 MB) is required to write a table schema file to HDFS.</p>
<div class="note" id="mrs_01_1473__n423b0977cf624128b855d71d1a212d5f"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="mrs_01_1473__a6099ccc414c24b2cac932908ae73611c">In case of fact files, as the default block size is 1024 MB, the minimum space required is 3072 MB per fact file for data load.</p>
|
|
</div></div>
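<p>To confirm that the configured quota covers the sizes estimated above, the quota and current consumption of the directory can be listed with the HDFS shell (a sketch, again using the /user/tenant directory from the exception above):</p>
<pre class="screen"># Output includes the space quota and the remaining space quota
# alongside the directory's current content size.
hdfs dfs -count -q -h /user/tenant</pre>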
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="mrs_01_1457.html">CarbonData FAQ</a></div>
</div>
</div>