<a name="mrs_01_2017"></a>
<h1 class="topictitle1">Why Does a Stage Retry Due to an Executor Crash?</h1>
<div id="body1595920219931"><div class="section" id="mrs_01_2017__s9fda75a28b86421ebbcb7ba6de4396be"><h4 class="sectiontitle">Question</h4><p id="mrs_01_2017__a66c46a95aa5348118b784a34f610630d">When I run Spark tasks with a large data volume, for example, the 100 TB TPCDS test suite, why does a Stage sometimes retry because an Executor is lost? The message "Executor 532 is lost rpc with driver, but is still alive, going to kill it" is displayed, indicating that the Executor was lost because of a JVM crash.</p>
<p id="mrs_01_2017__a3f5a346bdd1a405ba71f8a8caaacf5d9">The key JVM crash log is as follows:</p>
<pre class="screen" id="mrs_01_2017__s492d2580584c42a0852d0a1dda29676e">#
# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (sharedRuntime.cpp:834), pid=241075, tid=140476258551552
# fatal error: exception happened outside interpreter, nmethods and vtable stubs at pc 0x00007fcda9eb8eb1</pre>
</div>
<div class="section" id="mrs_01_2017__sc9416d59bc2148b7a80f9f6d8f216c51"><h4 class="sectiontitle">Answer</h4><p id="mrs_01_2017__aae47043a9ad1437388872b22b4aa7e02">This error does not affect services. It is caused by a defect in the Oracle JVM, not by the platform code. Spark provides a fault tolerance mechanism for Executors: if an Executor crashes, the Stage retries so that the tasks are still executed successfully.</p>
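<p>For reference, the retry behavior is governed by standard open-source Spark configuration options. The following sketch shows the Spark default values passed through <strong>spark-submit</strong>; it is only an illustration of where the retry limits come from, not a setting recommended by this guide:</p>
<pre class="screen"># Maximum number of failures of a single task before the job is aborted (Spark default: 4)
--conf spark.task.maxFailures=4
# Maximum number of consecutive attempts of a stage before it is aborted (Spark default: 4)
--conf spark.stage.maxConsecutiveAttempts=4</pre>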
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="mrs_01_2003.html">Spark Core</a></div>
</div>
</div>