Submitting a PySpark application to Spark on YARN in cluster mode

I am trying to test a big data platform that was built for the team I am on. It runs Spark on YARN.

Is it possible to create PySpark applications and submit them to a YARN cluster?

I was able to successfully submit the SparkPi example jar, and it returned its output in the YARN stdout logs.

This is the PySpark code I want to test:

from pyspark import SparkConf
from pyspark import SparkContext

HDFS_MASTER = 'hadoop-master'

conf = SparkConf()
conf.setMaster('yarn')
conf.setAppName('spark-test')
sc = SparkContext(conf=conf)

distFile = sc.textFile('hdfs://{0}:9000/tmp/test/test.csv'.format(HDFS_MASTER))

nonempty_lines = distFile.filter(lambda x: len(x) > 0)
print ('Nonempty lines', nonempty_lines.count())

The command I run from my Spark directory:

bin\spark-submit --master yarn --deploy-mode cluster --driver-memory 4g
--executor-memory 2g --executor-cores 1 examples\sparktest2.py 10

My script, sparktest2.py, is in the examples directory inside my Spark directory.

Logs (stderr):

 application from cluster with 3 NodeManagers
 17/03/22 15:18:39 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
 17/03/22 15:18:39 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
 17/03/22 15:18:39 INFO Client: Setting up container launch context for our AM
 17/03/22 15:18:39 ERROR SparkContext: Error initializing SparkContext.
 java.util.NoSuchElementException: key not found: SPARK_HOME
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at org.apache.spark.deploy.yarn.Client$$anonfun$findPySparkArchives$2.apply(Client.scala:1148)
at org.apache.spark.deploy.yarn.Client$$anonfun$findPySparkArchives$2.apply(Client.scala:1147)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.yarn.Client.findPySparkArchives(Client.scala:1147)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:829)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:240)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:236)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
17/03/22 15:18:39 INFO SparkUI: Stopped Spark web UI at http://10.0.9.24:42155
17/03/22 15:18:39 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
17/03/22 15:18:39 INFO YarnClientSchedulerBackend: Stopped
17/03/22 15:18:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/22 15:18:39 INFO MemoryStore: MemoryStore cleared
17/03/22 15:18:39 INFO BlockManager: BlockManager stopped
17/03/22 15:18:39 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/22 15:18:39 WARN MetricsSystem: Stopping a MetricsSystem that is not running
17/03/22 15:18:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/22 15:18:39 INFO SparkContext: Successfully stopped SparkContext
17/03/22 15:18:39 ERROR ApplicationMaster: User application exited with status 1
17/03/22 15:18:39 INFO ApplicationMaster: Final app status: FAILED, exitCode: 1, (reason: User application exited with status 1)
17/03/22 15:18:47 ERROR ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
17/03/22 15:18:47 INFO ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User application exited with status 1)
17/03/22 15:18:47 INFO ApplicationMaster: Deleting staging directory hdfs://hadoop-master.overlaynet:9000/user/ahmeds/.sparkStaging/application_1489001113497_0038
17/03/22 15:18:47 INFO ShutdownHookManager: Shutdown hook called
17/03/22 15:18:47 INFO ShutdownHookManager: Deleting directory /tmp/hadoop-root/nm-local-dir/usercache/ahmeds/appcache/application_1489001113497_0038/spark-1b4d971c-4448-4a5f-b917-3b6e2d31bb95

Error from stdout:

Traceback (most recent call last):
File "sparktest2.py", line 16, in <module>
sc = SparkContext(conf=conf)
File "/tmp/hadoop-root/nm-local dir/usercache/ahmeds/appcache/application_1489001113497_0038/container_1489001113497_0038_02_000001/pyspark.zip/pyspark/context.py", line 115, in __init__
File "/tmp/hadoop-root/nm-local-dir/usercache/ahmeds/appcache/application_1489001113497_0038/container_1489001113497_0038_02_000001/pyspark.zip/pyspark/context.py", line 168, in _do_init
File "/tmp/hadoop-root/nm-local-dir/usercache/ahmeds/appcache/application_1489001113497_0038/container_1489001113497_0038_02_000001/pyspark.zip/pyspark/context.py", line 233, in _initialize_context
File "/tmp/hadoop-root/nm-local-dir/usercache/ahmeds/appcache/application_1489001113497_0038/container_1489001113497_0038_02_000001/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1401, in __call__
File "/tmp/hadoop-root/nm-local-dir/usercache/ahmeds/appcache/application_1489001113497_0038/container_1489001113497_0038_02_000001/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.util.NoSuchElementException: key not found: SPARK_HOME
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at org.apache.spark.deploy.yarn.Client$$anonfun$findPySparkArchives$2.apply(Client.scala:1148)
at org.apache.spark.deploy.yarn.Client$$anonfun$findPySparkArchives$2.apply(Client.scala:1147)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.yarn.Client.findPySparkArchives(Client.scala:1147)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:829)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:240)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:236)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)

It seems to be complaining about SPARK_HOME, which I do have set in my environment variables.

Any help would be greatly appreciated.

Python version: 3.5
Spark version: 2.0.1
OS: Windows 7


3 Answers

I had a similar issue. Set SPARK_HOME in hadoop-env.sh, then restart the ResourceManager, NameNode, and DataNode. That should fix it.
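
A minimal sketch of that suggestion, assuming Spark is installed at /usr/local/spark on every node and that HADOOP_HOME points at the Hadoop installation (both paths are placeholders; use your own layout):

# Add to $HADOOP_HOME/etc/hadoop/hadoop-env.sh on each node
export SPARK_HOME=/usr/local/spark

# Restart the HDFS and YARN daemons so the new variable is picked up
$HADOOP_HOME/sbin/stop-dfs.sh && $HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/start-yarn.sh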

What made it work for me was adding the following to my command:

--conf spark.yarn.appMasterEnv.SPARK_HOME=/dev/null
--conf spark.executorEnv.SPARK_HOME=/dev/null
--files pythonscript.py

./spark-submit --master yarn-cluster --queue default \
--num-executors 20 --executor-memory 1G --executor-cores 3 \
--driver-memory 1G \
--conf spark.yarn.appMasterEnv.SPARK_HOME=/dev/null \
--conf spark.executorEnv.SPARK_HOME=/dev/null \
--files  /home/user/script.py
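
For reference, a sketch of what the question's original Windows command might look like with this workaround applied; the SPARK_HOME values are just placeholder entries so the lookup no longer fails, and the ^ line continuations are my formatting for cmd.exe, not part of the original command:

rem Original spark-submit invocation plus the two SPARK_HOME workaround settings
bin\spark-submit --master yarn --deploy-mode cluster ^
  --driver-memory 4g --executor-memory 2g --executor-cores 1 ^
  --conf spark.yarn.appMasterEnv.SPARK_HOME=/dev/null ^
  --conf spark.executorEnv.SPARK_HOME=/dev/null ^
  examples\sparktest2.py 10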
