Reviewed-by: Hasko, Vladimir <vladimir.hasko@t-systems.com> Co-authored-by: Yang, Tong <yangtong2@huawei.com> Co-committed-by: Yang, Tong <yangtong2@huawei.com>
<a name="mrs_01_2047"></a>
<h1 class="topictitle1">Why Does Spark-beeline Fail to Run and Error Message "Failed to create ThriftService instance" Is Displayed?</h1>
<div id="body1595920223604"><div class="section" id="mrs_01_2047__s3ccd8da585424402ac5dc449b1e92b27"><h4 class="sectiontitle">Question</h4><p id="mrs_01_2047__ae04359aa3f6c4c2f96a75393da3b60bf">Why is the error "Failed to create ThriftService instance" reported when spark-beeline fails to run?</p>
<p id="mrs_01_2047__a2454195228dd444eb1707f0d02ed26de">Beeline logs are as follows:</p>
<pre class="screen" id="mrs_01_2047__sd4eff3b656344b42878c2e5c6840e8d3"><strong id="mrs_01_2047__af65dfae4c5bb441f89e09117b7ec20e3">Error: Failed to create ThriftService instance</strong> (state=,code=0)
Beeline version 1.2.1.spark by Apache Hive
[INFO] Unable to bind key for unsupported operation: backward-delete-word
[INFO] Unable to bind key for unsupported operation: backward-delete-word
[INFO] Unable to bind key for unsupported operation: down-history
[INFO] Unable to bind key for unsupported operation: up-history
[INFO] Unable to bind key for unsupported operation: up-history
[INFO] Unable to bind key for unsupported operation: down-history
[INFO] Unable to bind key for unsupported operation: up-history
[INFO] Unable to bind key for unsupported operation: down-history
[INFO] Unable to bind key for unsupported operation: up-history
[INFO] Unable to bind key for unsupported operation: down-history
[INFO] Unable to bind key for unsupported operation: up-history
[INFO] Unable to bind key for unsupported operation: down-history
beeline> </pre>
<p id="mrs_01_2047__a1eac513635e04230a485495397ed16cd">In addition, the "Timed out waiting for client to connect" error log is generated on the JDBCServer. The details are as follows:</p>
<pre class="screen" id="mrs_01_2047__s3a8bd204376d42109af9cfe479cd44f2">2017-07-12 17:35:11,284 | INFO | [main] | Will try to open client transport with JDBC Uri: jdbc:hive2://192.168.101.97:23040/default;principal=spark/hadoop.<em id="mrs_01_2047__i1954248201617"><System domain name></em>@<em id="mrs_01_2047__i7651418182510"><System domain name></em>;healthcheck=true;saslQop=auth-conf;auth=KERBEROS;user.principal=spark/hadoop.<em id="mrs_01_2047__i1733592912515"><System domain name></em>@<em id="mrs_01_2047__i717833318252"><System domain name></em>;user.keytab=${BIGDATA_HOME}/FusionInsight_HD_<span id="mrs_01_2047__text83952038113714">8.1.0.1</span>/install/FusionInsight-Spark-<span id="mrs_01_2047__text5563355171417">3.1.1</span>/keytab/spark/JDBCServer/spark.keytab | org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:317)
2017-07-12 17:35:11,326 | INFO | [HiveServer2-Handler-Pool: Thread-92] | Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8 | org.apache.proxy.service.ThriftCLIProxyService.OpenSession(ThriftCLIProxyService.java:554)
2017-07-12 17:35:49,790 | ERROR | [HiveServer2-Handler-Pool: Thread-113] | <strong id="mrs_01_2047__a5cf24cff0d5f4f259538aa65ee23ecb0">Timed out waiting for client to connect</strong>.
Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.
Please check YARN or Spark driver's logs for further information. | org.apache.proxy.service.client.SparkClientImpl.<init>(SparkClientImpl.java:90)
java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: <strong id="mrs_01_2047__aae07eb931e154982897138e5ea19f7a7">Timed out waiting for client connection</strong>.
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
at org.apache.proxy.service.client.SparkClientImpl.<init>(SparkClientImpl.java:87)
at org.apache.proxy.service.client.SparkClientFactory.createClient(SparkClientFactory.java:79)
at org.apache.proxy.service.SparkClientManager.createSparkClient(SparkClientManager.java:145)
at org.apache.proxy.service.SparkClientManager.createThriftServerInstance(SparkClientManager.java:160)
at org.apache.proxy.service.ThriftServiceManager.getOrCreateThriftServer(ThriftServiceManager.java:182)
at org.apache.proxy.service.ThriftCLIProxyService.OpenSession(ThriftCLIProxyService.java:596)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1257)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1242)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:696)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Timed out waiting for client connection.</pre>
</div>
<div class="section" id="mrs_01_2047__s39fba2a02b4642809c6361d65d243eac"><h4 class="sectiontitle">Answer</h4><p id="mrs_01_2047__a4392eee52c9b4b029520726667112ee6">This problem occurs when the network is unstable. When a Beeline connection times out, Spark does not attempt to reconnect to Beeline. Therefore, you need to restart spark-beeline to re-establish the connection.</p>
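<p>A minimal restart sequence might look as follows. The paths and the user name are illustrative assumptions for a typical Spark client installation, not values taken from this document; substitute your actual client directory and principal.</p>

```shell
# Exit the hung Beeline session first (type !quit, or press Ctrl+C if it does not respond).

# Load the client environment variables (path is an assumed example).
source /opt/client/bigdata_env

# In a security (Kerberos) cluster, re-authenticate before reconnecting
# ("sparkuser" is a placeholder principal).
kinit sparkuser

# Relaunch spark-beeline; a new connection to the JDBCServer is established on startup.
spark-beeline
```

<p>If the "Timed out waiting for client to connect" error persists after restarting, check network connectivity to the JDBCServer and confirm that the YARN queue has available resources, as suggested by the server-side log.</p>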
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="mrs_01_2022.html">Spark SQL and DataFrame</a></div>
</div>
</div>