I'm fairly new to this and don't have much experience, so I'd appreciate your help. I'm trying to install Hive on top of an existing Spark installation.
I basically followed the instructions on this page without any problems:
https://github.com/dryshliak/hadoop/wiki/Installing-Hive-on-existing-Hadoop-cluster
I also created a database called warehouse
and added a table called test_table to it.
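For context, the database and table were created with HiveQL roughly like the following (a sketch reconstructed from the `desc test_table` output below, not the exact statements I ran):

```sql
-- Hypothetical reconstruction of the setup DDL;
-- column names, types, and comments match the `desc test_table` output.
CREATE DATABASE IF NOT EXISTS warehouse;
USE warehouse;
CREATE TABLE test_table (
  col1 INT    COMMENT 'Integer Column',
  col2 STRING COMMENT 'String Column'
);
```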
hive> show tables;
OK
employee
test_table
Time taken: 0.084 seconds, Fetched: 2 row(s)
hive> desc test_table;
OK
col1 int Integer Column
col2 string String Column
Time taken: 0.052 seconds, Fetched: 2 row(s)
hive>
The problem I'm facing is that when I try to insert data into test_table using the command
hive> insert into test_table values(1,'aaa');
I get the following error:
Query ID = hadoop_20190703135836_4b17eeac-249d-4e54-bd98-1212f3cb5b5d
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 821e05e7-74a8-4656-b4ed-3a622c9cadcc)'
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 821e05e7-74a8-4656-b4ed-3a622c9cadcc
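Digging into the hive.log excerpt below, the root cause appears to be java.lang.NoClassDefFoundError: org/apache/spark/SparkConf, which suggests Hive cannot see the Spark jars on its classpath. For reference, my Hive-on-Spark wiring in hive-site.xml looks roughly like this (a sketch only; the paths and master value are placeholders for my install, not necessarily correct):

```xml
<!-- Sketch of the relevant hive-site.xml properties; values are placeholders. -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn</value>
</property>
<property>
  <!-- points Hive at the Spark 2.4.0 install; /opt/spark is a placeholder path -->
  <name>spark.home</name>
  <value>/opt/spark</value>
</property>
```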
Here are the versions I have:
RHEL Server release 7.5
Hadoop 3.1.1
Spark 2.4.0
Hive 3.1.1
Below is an excerpt from hive.log
where the error occurs.
2019-07-03T12:56:00,269 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Executing command(queryId=hadoop_20190703125557_f48a3966-691d-4c42-aee0-93f81fabef66): insert into test_table values(1,'aaa')
2019-07-03T12:56:00,270 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Query ID = hadoop_20190703125557_f48a3966-691d-4c42-aee0-93f81fabef66
2019-07-03T12:56:00,270 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Total jobs = 1
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Launching Job 1 out of 1
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: In order to change the average load for a reducer (in bytes):
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask:   set hive.exec.reducers.bytes.per.reducer=<number>
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: In order to limit the maximum number of reducers:
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask:   set hive.exec.reducers.max=<number>
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: In order to set a constant number of reducers:
2019-07-03T12:56:00,282 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask:   set mapreduce.job.reduces=<number>
2019-07-03T12:56:00,284 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] session.SparkSessionManagerImpl: Setting up the session manager.
2019-07-03T12:56:00,642 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] session.SparkSession: Trying to open Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f
2019-07-03T12:56:00,700 ERROR [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263)
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98)
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87)
... 24 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 28 more
2019-07-03T12:56:00,700 ERROR [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) ~[hive-cli-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) ~[hive-cli-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) ~[hive-cli-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) ~[hive-cli-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) ~[hive-cli-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) ~[hive-cli-3.1.1.jar:3.1.1]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) ~[hadoop-common-3.1.1.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) ~[hadoop-common-3.1.1.jar:?]
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.1.jar:3.1.1]
... 24 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_191]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_191]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_191]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_191]
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.1.jar:3.1.1]
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.1.jar:3.1.1]
... 24 more
2019-07-03T12:56:00,701 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] reexec.ReOptimizePlugin: ReOptimization: retryPossible: false
2019-07-03T12:56:00,701 ERROR [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f
2019-07-03T12:56:00,701 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Completed executing command(queryId=hadoop_20190703125557_f48a3966-691d-4c42-aee0-93f81fabef66); Time taken: 0.432 seconds
2019-07-03T12:56:00,701 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
2019-07-03T12:56:00,721 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] conf.HiveConf: Using the default value passed in for log id: 6beaec32-ecac-4dc1-b118-f2c86c385005
2019-07-03T12:56:00,721 INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] session.SessionState: Resetting thread name to main