<a name="mrs_01_1664"></a><a name="mrs_01_1664"></a>
|
|
|
|
<h1 class="topictitle1">Changing the DataNode Storage Directory</h1>
<div id="body1595904092485"><div class="section" id="mrs_01_1664__s86e6d1aae058425fac2ceb887313780b"><h4 class="sectiontitle">Scenario</h4><div class="note" id="mrs_01_1664__note17408121619136"><img src="public_sys-resources/note_3.0-en-us.png"><span class="notetitle"> </span><div class="notebody"><p id="mrs_01_1664__p788202018132">This section applies to MRS 3.<em id="mrs_01_1664__i1384718494575">x</em> or later clusters.</p>
</div></div>
<p id="mrs_01_1664__a075dde430f6d4d62992ee88dabb3cac4">If the storage directory defined by the HDFS DataNode is incorrect or the HDFS storage plan changes, the system administrator needs to modify the DataNode storage directory on FusionInsight Manager to ensure that the HDFS works properly. Changing the ZooKeeper storage directory includes the following scenarios:</p>
</div>
<ul id="mrs_01_1664__uf8cc1c4da7314862b85fbb12706c0b8e"><li id="mrs_01_1664__la15a239f68da430a9a8501e442a7463a">Change the storage directory of the DataNode role. In this way, the storage directories of all DataNode instances are changed.</li><li id="mrs_01_1664__l74dd6ade92754c54ab5c3e8c72beb4e8">Change the storage directory of a single DataNode instance. In this way, only the storage directory of this instance is changed, and the storage directories of other instances remain the same.</li></ul>
<div class="section" id="mrs_01_1664__s96b8a9dc800a44b7b35e3a2d8fafa0e1"><h4 class="sectiontitle">Impact on the System</h4><ul id="mrs_01_1664__ue0e0f55de41d436dbf9aee369490337e"><li id="mrs_01_1664__l306c6408a5844217bef0890172891d6b">The HDFS service needs to be stopped and restarted during the process of changing the storage directory of the DataNode role, and the cluster cannot provide services before it is completely started.</li></ul>
</div>
<ul id="mrs_01_1664__ubd5c1981cdf44085963062e54188f95d"><li id="mrs_01_1664__la4d2efbbd749452bb963bf22ef62c844">The DataNode instance needs to stopped and restarted during the process of changing the storage directory of the instance, and the instance at this node cannot provide services before it is started.</li><li id="mrs_01_1664__l1822349c8e3c4fbb9f4c3af82b0b5715">The directory for storing service parameter configurations must also be updated.</li></ul>
<div class="section" id="mrs_01_1664__s9e4b75fd0f044bc48aec7c870e0f70ad"><h4 class="sectiontitle">Prerequisites</h4><ul id="mrs_01_1664__ud0c72f5f4d29432da1f1571c527aa07d"><li id="mrs_01_1664__l31665d2223274d8da4c0a6a6410a8790">New disks have been prepared and installed on each data node, and the disks are formatted.</li></ul>
</div>
<ul id="mrs_01_1664__u2c69e229971c44169416f3fb28eda770"><li id="mrs_01_1664__l4507236e1d3940bc9f64331d7133013b">New directories have been planned for storing data in the original directories.</li><li id="mrs_01_1664__l81cf6ff3cfcf475f90fdc97ba9ddcb63">The HDFS client has been installed.</li><li id="mrs_01_1664__l279b33e2816f4a1099563fc802215440">The system administrator user <strong id="mrs_01_1664__b163959184315">hdfs</strong> is available.</li><li id="mrs_01_1664__l2d9114b6fbe2417c96ec1b1f35cc636a">When changing the storage directory of a single DataNode instance, ensure that the number of active DataNode instances is greater than the value of <strong id="mrs_01_1664__b1061124924219">dfs.replication</strong>.</li></ul>
<div class="section" id="mrs_01_1664__s0a76ba05c1cb4e6b81628d993064ff81"><h4 class="sectiontitle">Procedure</h4><p id="mrs_01_1664__a24dff20d0626451f85d5d03ff4b051a7"><strong id="mrs_01_1664__b456912718438">Check the environment.</strong></p>
</div>
<ol id="mrs_01_1664__o628084b784e647b8a0364438f8de425d"><li id="mrs_01_1664__l26714a238972422d9d6039177cb92414"><span>Log in to the server where the HDFS client is installed as user <strong id="mrs_01_1664__b318791034311">root</strong>, and run the following command to configure environment variables:</span><p><p id="mrs_01_1664__a83815530b02e458d9a319a28a468e997"><strong id="mrs_01_1664__b254013214312">source </strong><em id="mrs_01_1664__i1054543213438">Installation directory of the HDFS client</em><strong id="mrs_01_1664__b17545133219433">/bigdata_env</strong></p>
</p></li><li id="mrs_01_1664__l147b6c33cfe0462799ee6b80df46c062"><span>If the cluster is in security mode, run the following command to authenticate the user:</span><p><p id="mrs_01_1664__af8230d8c4be840269d92240b90f08554"><strong id="mrs_01_1664__a6a4225516af8484b895c75a42f239a3c">kinit hdfs</strong></p>
</p></li><li id="mrs_01_1664__l12a1f7e93322481da717ea03c5919047"><span>Run the following command on the HDFS client to check whether all directories and files in the HDFS root directory are normal:</span><p><p id="mrs_01_1664__a67abbfed1e09439797766c31a39280c9"><strong id="mrs_01_1664__acf21ef6ee0a34cd4afbc40ad2a01d5c8">hdfs fsck /</strong></p>
<p id="mrs_01_1664__a7ae6d56446e54d2d8241d9dd4e23de85">Check the fsck command output.</p>
<ul id="mrs_01_1664__u003bc33c4a5f462b821b4218402175c6"><li id="mrs_01_1664__l5a1f07e48dc84354868ccc452954991b">If the following information is displayed, no file is lost or damaged. Go to <a href="#mrs_01_1664__le587d508c49b4837bcabd9bd9cf98bc4">4</a>.<pre class="screen" id="mrs_01_1664__s5a0f5048292c471da6fc435fca8c9e0b">The filesystem under path '/' is HEALTHY</pre>
</li><li id="mrs_01_1664__lad2a9fd5d4c947149b2e9741c1a1d1a0">If other information is displayed, some files are lost or damaged. Go to <a href="#mrs_01_1664__l1ce08f0a7d2349b487dd6f19c38c7273">5</a>.</li></ul>
</p></li><li id="mrs_01_1664__le587d508c49b4837bcabd9bd9cf98bc4"><a name="mrs_01_1664__le587d508c49b4837bcabd9bd9cf98bc4"></a><a name="le587d508c49b4837bcabd9bd9cf98bc4"></a><span>Log in to FusionInsight Manager, choose <strong id="mrs_01_1664__b124211467436">Cluster</strong> > <em id="mrs_01_1664__i202431463437">Name of the desired cluster</em> > <strong id="mrs_01_1664__b824324654311">Services</strong>, and check whether <strong id="mrs_01_1664__b13244114624310">Running Status</strong> of HDFS is <strong id="mrs_01_1664__b18244154624313">Normal</strong>.</span><p><ul id="mrs_01_1664__u452e1c7aeacc4fb7b6d256ece21cd3c1"><li id="mrs_01_1664__l614bea7fb40143eca2d2ecf3d9433285">If yes, go to <a href="#mrs_01_1664__lff55f0ef8699449ab4cfc4eddeed1711">6</a>.</li><li id="mrs_01_1664__l1a1d22edec6242958310876df7a8623b">If no, the HDFS status is unhealthy. Go to <a href="#mrs_01_1664__l1ce08f0a7d2349b487dd6f19c38c7273">5</a>.</li></ul>
</p></li><li id="mrs_01_1664__l1ce08f0a7d2349b487dd6f19c38c7273"><a name="mrs_01_1664__l1ce08f0a7d2349b487dd6f19c38c7273"></a><a name="l1ce08f0a7d2349b487dd6f19c38c7273"></a><span>Rectify the HDFS fault.. The task is complete.</span></li><li id="mrs_01_1664__lff55f0ef8699449ab4cfc4eddeed1711"><a name="mrs_01_1664__lff55f0ef8699449ab4cfc4eddeed1711"></a><a name="lff55f0ef8699449ab4cfc4eddeed1711"></a><span>Determine whether to change the storage directory of the DataNode role or that of a single DataNode instance:</span><p><ul id="mrs_01_1664__u681c64621a284f7f9351f01fa3b819ff"><li id="mrs_01_1664__lfe4837e1ac694247bde20ed15ed078f9">To change the storage directory of the DataNode role, go to <a href="#mrs_01_1664__l4bc534684e1d4d3cb656e4ed55bb75af">7</a>.</li><li id="mrs_01_1664__l8fcedec6f03d407c949acb70897ec4ec">To change the storage directory of a single DataNode instance, go to <a href="#mrs_01_1664__lab34cabb4d324166acebeb18e1098884">12</a>.</li></ul>
</p></li></ol>
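<p>For reference, steps 1 to 3 can be run as the following hedged sketch. The client installation directory <strong>/opt/client</strong> is an assumed example path; replace it with the actual one.</p>
<pre class="screen"># Assumption: the HDFS client is installed in /opt/client.
source /opt/client/bigdata_env
# Required only if the cluster is in security mode.
kinit hdfs
# Check whether all directories and files in the HDFS root directory are normal.
hdfs fsck /</pre>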
<p id="mrs_01_1664__a8fef16ba5d0b4c41bc64e1ee685d8ac9"><strong id="mrs_01_1664__b1777041618454">Changing the storage directory of the DataNode role</strong></p>
<ol start="7" id="mrs_01_1664__o914dafb130934574bf6980745fd5f155"><li id="mrs_01_1664__l4bc534684e1d4d3cb656e4ed55bb75af"><a name="mrs_01_1664__l4bc534684e1d4d3cb656e4ed55bb75af"></a><a name="l4bc534684e1d4d3cb656e4ed55bb75af"></a><span>Choose <strong id="mrs_01_1664__b15239181815459">Cluster</strong> > <em id="mrs_01_1664__i024081815459">Name of the desired cluster</em> > <strong id="mrs_01_1664__b7240151820457">Services</strong> > <strong id="mrs_01_1664__b2241818184512">HDFS</strong> > <strong id="mrs_01_1664__b7242111814452">Stop Instance</strong> to stop the HDFS service.</span></li><li id="mrs_01_1664__l952ee9d1cf8f4e0899e8434f8d4980ad"><span>Log in to each data node where the HDFS service is installed as user <strong id="mrs_01_1664__b1349617336455">root</strong> and perform the following operations:</span><p><ol type="a" id="mrs_01_1664__o1ca98f7511a34f94a26ac153696f2139"><li id="mrs_01_1664__l896ca76a4bcc42cb8ab77bf56dd424a9">Create a target directory (<strong id="mrs_01_1664__b91111204611">data1</strong> and <strong id="mrs_01_1664__b91711214463">data2</strong> are original directories in the cluster).<p id="mrs_01_1664__afc57f490891c422eb1a493b0a6efda2f">For example, to create a target directory <strong id="mrs_01_1664__b973313394615"><span id="mrs_01_1664__t5992f599462346fcad9986fdb8c15df6">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>, run the following command:</p>
<p id="mrs_01_1664__ac3f3323b35da4ee28ec2b305f21bbbad"><strong id="mrs_01_1664__b1557553114512">mkdir </strong><strong id="mrs_01_1664__a24873b7703b34fd3b3853b2f52ff1b4c">-p <span id="mrs_01_1664__td6e51d6ba5144eb5a5b7d8c4cb98212b">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__aef4023a9acd645c78602879741687f6c">/hadoop/data3/dn</strong></p>
</li><li id="mrs_01_1664__l71a593fe57ee44e4a82b6b1431bdd912">Mount the target directory to the new disk. For example, mount <strong id="mrs_01_1664__b164204044617"><span id="mrs_01_1664__tc6b8b15c9a1348599e422000afb971ff">${BIGDATA_DATA_HOME}</span>/hadoop/data3</strong> to the new disk.</li><li id="mrs_01_1664__l1b6b803928e64d058260d4bfe8c6db2e">Modify permissions on the new directory.<p id="mrs_01_1664__aa575395f09494b04add2addd0e5da3c8"><a name="mrs_01_1664__l1b6b803928e64d058260d4bfe8c6db2e"></a><a name="l1b6b803928e64d058260d4bfe8c6db2e"></a>For example, to create a target directory <strong id="mrs_01_1664__b1092214813465"><span id="mrs_01_1664__t5b784aa1ddba48eaa5013526d9b0170b">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>, run the following commands:</p>
<p id="mrs_01_1664__a6e9aa47657cb4b669d5255228f981636"><strong id="mrs_01_1664__b1660220566461">chmod 700 </strong><strong id="mrs_01_1664__b76037564468"><span id="mrs_01_1664__t3da602c45126478288be8b367a1e8903">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b15604656194611">/hadoop/data3/dn -R</strong> and <strong id="mrs_01_1664__b7604165614615">chown omm:wheel </strong><strong id="mrs_01_1664__b18605856114614"><span id="mrs_01_1664__tb5cf37879bac42ea9c3526132f5b8a9a">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b4605195613464">/hadoop/data3/dn -R</strong></p>
</li><li id="mrs_01_1664__l63f4856203e9425f9a23113c3d13f665"><a name="mrs_01_1664__l63f4856203e9425f9a23113c3d13f665"></a><a name="l63f4856203e9425f9a23113c3d13f665"></a>Copy the data to the target directory.<p id="mrs_01_1664__a24dfb2ed0e7b4f4586f4353231616b05"><a name="mrs_01_1664__l63f4856203e9425f9a23113c3d13f665"></a><a name="l63f4856203e9425f9a23113c3d13f665"></a>For example, if the old directory is <strong id="mrs_01_1664__b1675221212479"><span id="mrs_01_1664__tfaf5b9dea6304511ad7c6ab30b4962b2">${BIGDATA_DATA_HOME}</span>/hadoop/data1/dn</strong> and the target directory is <strong id="mrs_01_1664__b47532124475"><span id="mrs_01_1664__tacbe972a24864e229743c74bfd43fb2f">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>, run the following command:</p>
<p id="mrs_01_1664__abfb62c8592724a068bc0c645e2c3c895"><strong id="mrs_01_1664__b171992594711">cp -af </strong><strong id="mrs_01_1664__b1520202516476"><span id="mrs_01_1664__tc42eb70ac99b442782e18df48c604472">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b1620182574715">/hadoop/data1/dn/* </strong><strong id="mrs_01_1664__b162112254478"><span id="mrs_01_1664__tcf54f073f3dd4597abe05b4ab6c690e0">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b02282584717">/hadoop/data3/dn</strong></p>
</li></ol>
</p></li><li id="mrs_01_1664__lc073c2204ea5414cac84a364eb4e8a0a"><span>On FusionInsight Manager, choose <strong id="mrs_01_1664__b49531737164710">Cluster</strong> > <em id="mrs_01_1664__i1995343711473">Name of the desired cluster</em> > <strong id="mrs_01_1664__b8953133713475">Services</strong> > <strong id="mrs_01_1664__b99541137114715">HDFS</strong> > <strong id="mrs_01_1664__b14954113720471">Configurations</strong> > <strong id="mrs_01_1664__b10618335165015">All Configurations</strong> to go to the HDFS service configuration page.</span><p><p id="mrs_01_1664__ad5805c53366c43afb7c008568eccf0ab">Change the value of <strong id="mrs_01_1664__b1838324244714">dfs.datanode.data.dir</strong> from the default value <strong id="mrs_01_1664__b1638884274711">%{@auto.detect.datapart.dn}</strong> to the new target directory, for example, <strong id="mrs_01_1664__b5388154264714"><span id="mrs_01_1664__tbbadf70ff42948cdbc4397322aef226e">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>.</p>
<p id="mrs_01_1664__ace7970517e814895b8b91f7417f30d8e">For example, the original data storage directories are <span class="filepath" id="mrs_01_1664__filepath48431753204715"><b>/srv/BigData/hadoop/data1</b></span>, <span class="filepath" id="mrs_01_1664__filepath584335312475"><b>/srv/BigData/hadoop/data2</b></span>. To migrate data from the <span class="filepath" id="mrs_01_1664__filepath1784475318477"><b>/srv/BigData/hadoop/data1</b></span> directory to the newly created <strong id="mrs_01_1664__b10844125310478">/srv/BigData/hadoop/data3</strong> directory, replace the whole parameter with <strong id="mrs_01_1664__b1584515316477">/srv/BigData/hadoop/data2, /srv/BigData/hadoop/data3</strong>. Separate multiple storage directories with commas (,). In this example, changed directories are <strong id="mrs_01_1664__b123961124483">/srv/BigData/hadoop/data2</strong>, <strong id="mrs_01_1664__b4397102164810">/srv/BigData/hadoop/data3</strong>.</p>
</p></li><li id="mrs_01_1664__l196acfaf23374048a66c7623d7abefe6"><span>Click <strong id="mrs_01_1664__b2433184194818">Save</strong>. Choose <strong id="mrs_01_1664__b482319514815">Cluster</strong> > <em id="mrs_01_1664__i58231154486">Name of the desired cluster</em> > <strong id="mrs_01_1664__b08238513482">Services</strong>. On the page that is displayed, start the services that have been stopped.</span></li><li id="mrs_01_1664__leebc9117f1484ef1bd1d97c206c15382"><span>After the HDFS is started, run the following command on the HDFS client to check whether all directories and files in the HDFS root directory are correctly copied:</span><p><p id="mrs_01_1664__a09ae8d3b510e41d2885ce4ece0821703"><strong id="mrs_01_1664__aa3587ad5eb694148a6fb8b388f4d7389">hdfs fsck /</strong></p>
<p id="mrs_01_1664__ab0a486e5119e4631837bcf4222490b77">Check the fsck command output.</p>
<ul id="mrs_01_1664__u5f8f4d481d574f7ab1c6862ae923f6b0"><li id="mrs_01_1664__lcad57bb696d2481494c3230828eb1be7">If the following information is displayed, no file is lost or damaged, and data replication is successful. No further action is required.<pre class="screen" id="mrs_01_1664__s06b9808e05fd411fa1e7f31421460ac4">The filesystem under path '/' is HEALTHY</pre>
</li><li id="mrs_01_1664__lf7b5947819fc49439811c9b87f68adf9">If other information is displayed, some files are lost or damaged. In this case, check whether <a href="#mrs_01_1664__l63f4856203e9425f9a23113c3d13f665">8.d</a> is correct and run the <strong id="mrs_01_1664__b9435419204811">hdfs fsck</strong> <em id="mrs_01_1664__i7435181915484">Name of the damaged file</em><strong id="mrs_01_1664__b11436819114817"> -delete</strong> command.</li></ul>
</p></li></ol>
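<p>On each data node, steps 8.a to 8.d amount to the following hedged sketch. The device name <strong>/dev/sdb1</strong> is a hypothetical placeholder for the new disk; substitute the device and directories planned for your cluster. The sketch mounts the disk before creating the <strong>dn</strong> subdirectory so that the subdirectory lands on the new disk instead of being hidden by the mount.</p>
<pre class="screen"># /dev/sdb1 is a hypothetical device name for the new disk.
mkdir -p ${BIGDATA_DATA_HOME}/hadoop/data3
mount /dev/sdb1 ${BIGDATA_DATA_HOME}/hadoop/data3
mkdir -p ${BIGDATA_DATA_HOME}/hadoop/data3/dn
# Restrict permissions and assign ownership to user omm and group wheel.
chmod -R 700 ${BIGDATA_DATA_HOME}/hadoop/data3/dn
chown -R omm:wheel ${BIGDATA_DATA_HOME}/hadoop/data3/dn
# Copy the data from the old directory, preserving attributes.
cp -af ${BIGDATA_DATA_HOME}/hadoop/data1/dn/* ${BIGDATA_DATA_HOME}/hadoop/data3/dn</pre>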
<p id="mrs_01_1664__afba1bfc3a5f544ba84ab44ce604aba1f"><strong id="mrs_01_1664__b15132202494813">Changing the storage directory of a single DataNode instance</strong></p>
<ol start="12" id="mrs_01_1664__o5b6f95be6f544f2e84ffeca333e93028"><li id="mrs_01_1664__lab34cabb4d324166acebeb18e1098884"><a name="mrs_01_1664__lab34cabb4d324166acebeb18e1098884"></a><a name="lab34cabb4d324166acebeb18e1098884"></a><span>Choose <strong id="mrs_01_1664__b19509192784818">Cluster</strong> > <em id="mrs_01_1664__i16514827164810">Name of the desired cluster</em> > <strong id="mrs_01_1664__b195144278484">Services</strong> > <strong id="mrs_01_1664__b4515427124816">HDFS</strong> > <strong id="mrs_01_1664__b851552713489">Instance</strong>. Select the HDFS instance whose storage directory needs to be modified, and choose <strong id="mrs_01_1664__b1351510271481">More</strong> > <strong id="mrs_01_1664__b18516132754813">Stop Instance</strong>.</span></li><li id="mrs_01_1664__le21559235c204991bcf5ce25fa466168"><span>Log in to the DataNode node as user <strong id="mrs_01_1664__b170134454814">root</strong>, and perform the following operations:</span><p><ol type="a" id="mrs_01_1664__oaf5cf613ad5e4f7a860f419abc359c1f"><li id="mrs_01_1664__l867efca11cdd4baf9bb029159855da1b">Create a target directory.<p id="mrs_01_1664__a0c1260d9b66340f19ded8e11116a3689"><a name="mrs_01_1664__l867efca11cdd4baf9bb029159855da1b"></a><a name="l867efca11cdd4baf9bb029159855da1b"></a>For example, to create a target directory <strong id="mrs_01_1664__b203744612493"><span id="mrs_01_1664__t358a229f1eb443aebfc3e0668b406b7b">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>, run the following command:</p>
<p id="mrs_01_1664__ab1c7a3497e944a8a811e7cae53c7b8cb"><strong id="mrs_01_1664__ac068ecc0ac034a76a19235861e929399">mkdir -p </strong><strong id="mrs_01_1664__a48458afc36bf413ba5274384c546dff5"><span id="mrs_01_1664__tabe5add53f2849beb7a5e513bce6ed16">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__afcb51019790b468ab12b790e25f88852">/hadoop/data3/dn</strong></p>
</li><li id="mrs_01_1664__ld4bbd7f07e794263a5beb45307810aaf">Mount the target directory to the new disk.<p id="mrs_01_1664__aa442d3c808f34a4db48c6b5782caed48"><a name="mrs_01_1664__ld4bbd7f07e794263a5beb45307810aaf"></a><a name="ld4bbd7f07e794263a5beb45307810aaf"></a>For example, mount <strong id="mrs_01_1664__b167132844910"><span id="mrs_01_1664__ta1dd1c694e84455baa953266c81eb389">${BIGDATA_DATA_HOME}</span>/hadoop/data3</strong> to the new disk.</p>
</li><li id="mrs_01_1664__l320fb0ec1cea4749bb99a51a589f975a">Modify permissions on the new directory.<p id="mrs_01_1664__ab6495dcb341d47dcb06064e4471a808c"><a name="mrs_01_1664__l320fb0ec1cea4749bb99a51a589f975a"></a><a name="l320fb0ec1cea4749bb99a51a589f975a"></a>For example, to create a target directory <strong id="mrs_01_1664__b3847935194916"><span id="mrs_01_1664__t0670871b27ae45e8bd6099536bb93292">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>, run the following commands:</p>
<p id="mrs_01_1664__a398e1fb5ef674001b8b7f02c8fa82e1b"><strong id="mrs_01_1664__b34642144910">chmod 700 </strong><strong id="mrs_01_1664__b4413428494"><span id="mrs_01_1664__ta8e24111e2304db5a39e732fdee0525d">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b19574218492">/hadoop/data3/dn -R</strong> and <strong id="mrs_01_1664__b05342174911">chown omm:wheel </strong><strong id="mrs_01_1664__b161442194913"><span id="mrs_01_1664__td405b94e3c3947aebf26032c20b10295">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b136114210499">/hadoop/data3/dn -R</strong></p>
</li><li id="mrs_01_1664__l346b03ead67546da88ac44ecf9d456e8">Copy the data to the target directory.<p id="mrs_01_1664__a1dc4c01b9d5c484f905d6abf0808e57c"><a name="mrs_01_1664__l346b03ead67546da88ac44ecf9d456e8"></a><a name="l346b03ead67546da88ac44ecf9d456e8"></a>For example, if the old directory is <strong id="mrs_01_1664__b430145624917"><span id="mrs_01_1664__t0daeb707126d4c6e9e48252767017e99">${BIGDATA_DATA_HOME}</span>/hadoop/data1/dn</strong> and the target directory is <strong id="mrs_01_1664__b14302115674910"><span id="mrs_01_1664__t832d21664d1f4a4b90d40fd9ab8b5b81">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>, run the following command:</p>
<p id="mrs_01_1664__aad74511683d2403d896ee961b5018cc1"><strong id="mrs_01_1664__b137845711501">cp -af </strong><strong id="mrs_01_1664__b1778457145012"><span id="mrs_01_1664__t15ec638de67d40df8cd88f57fddcdd96">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b7785117125017">/hadoop/data1/dn/* </strong><strong id="mrs_01_1664__b107865725016"><span id="mrs_01_1664__t4e3f7c8291b04040a5ddeb3c94ee8649">${BIGDATA_DATA_HOME}</span></strong><strong id="mrs_01_1664__b7786117175012">/hadoop/data3/dn</strong></p>
</li></ol>
</p></li><li id="mrs_01_1664__l0f49d954cd0445b2ad5db87d37a6993a"><span>On FusionInsight Manager, choose <strong id="mrs_01_1664__b1598202705215">Cluster</strong> > <em id="mrs_01_1664__i7797113914525">Name of the desired cluster</em> > <strong id="mrs_01_1664__b429119438524">Service</strong> > <strong id="mrs_01_1664__b9300348105219">HDFS</strong> > <strong id="mrs_01_1664__b526905705218">Instance</strong>. Click the specified DataNode instance and go to the <strong id="mrs_01_1664__b1070216155410">Configurations</strong> page.</span><p><p id="mrs_01_1664__ac355f1de0ce44371a126422af34c9864">Change the value of <strong id="mrs_01_1664__b252462375017">dfs.datanode.data.dir</strong> from the default value <strong id="mrs_01_1664__b1052916234500">%{@auto.detect.datapart.dn}</strong> to the new target directory, for example, <strong id="mrs_01_1664__b175291323205016"><span id="mrs_01_1664__t8b730564952545978169d7462505ade8">${BIGDATA_DATA_HOME}</span>/hadoop/data3/dn</strong>.</p>
<p id="mrs_01_1664__a5477797872224b90b0a593e9f6239705">For example, the original data storage directories are <strong id="mrs_01_1664__b1524471201315">/srv/BigData/hadoop/data1,/srv/BigData/hadoop/data2</strong>. To migrate data from the <strong id="mrs_01_1664__b1632173310503">/srv/BigData/hadoop/data1</strong> directory to the newly created <strong id="mrs_01_1664__b1663343310504">/srv/BigData/hadoop/data3</strong> directory, replace the whole parameter with <strong id="mrs_01_1664__b156330330506">/srv/BigData/hadoop/data2,/srv/BigData/hadoop/data3</strong>.</p>
</p></li><li id="mrs_01_1664__l3a4706fbf6fc49d6952e95db072df069"><span>Click <strong id="mrs_01_1664__b18581183517504">Save</strong>, and then click <strong id="mrs_01_1664__b10582203525010">OK</strong>.</span><p><p id="mrs_01_1664__a5b31e0d798df4577b0eda749bcb1c6b6"><strong id="mrs_01_1664__b3732153613502">Operation succeeded</strong> is displayed. click <strong id="mrs_01_1664__b96753813504">Finish</strong>.</p>
</p></li><li id="mrs_01_1664__li9931205544818"><span>Choose <strong id="mrs_01_1664__b18346039165014">More</strong> > <strong id="mrs_01_1664__b83461139105012">Restart Instance</strong> to restart the DataNode instance.</span></li></ol>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="mrs_01_0790.html">Using HDFS</a></div>
</div>
</div>