Kubernetes spark-submit with ADLS Gen2 fails: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found

Posted on 2024-09-19 23:36:05


I am trying to submit a PySpark job that reads from ADLS Gen2 to Azure Kubernetes Service (AKS), and I get the following exception:

Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2595)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3269)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.spark.deploy.DependencyUtils$.resolveGlobPath(DependencyUtils.scala:191)
    at org.apache.spark.deploy.DependencyUtils$.$anonfun$resolveGlobPaths$2(DependencyUtils.scala:147)
    at org.apache.spark.deploy.DependencyUtils$.$anonfun$resolveGlobPaths$2$adapted(DependencyUtils.scala:145)
    at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
    at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
    at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
    at org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:145)
    at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$6(SparkSubmit.scala:365)
    at scala.Option.map(Option.scala:230)
    at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:365)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2499)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2593)
    ... 27 more

My spark-submit looks like this:

$SPARK_HOME/bin/spark-submit \
--master k8s://https://XXX \
--deploy-mode cluster \
--name spark-pi \
--conf spark.kubernetes.file.upload.path=file:///tmp \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.container.image=XXX \
--conf spark.hadoop.fs.azure.account.auth.type.XXX.dfs.core.windows.net=SharedKey \
--conf spark.hadoop.fs.azure.account.key.XXX.dfs.core.windows.net=XXX \
--py-files abfss://data@XXX.dfs.core.windows.net/py-files/ml_pipeline-0.0.1-py3.8.egg \
abfss://data@XXX.dfs.core.windows.net/py-files/main_kubernetes.py

The job runs fine on my VM and loads data from ADLS Gen2 without any problems. In the post java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found it is suggested to download the package and add it to the spark/jars folder. But I don't know where to download it from, and if the job runs fine locally, why does it have to be included in the first place?

Edit: I managed to get the jars into the Docker container. If I ssh into the container and run the job there, it works and loads the files from ADLS. But if I submit the job to Kubernetes, it throws the same exception as before. Can someone please help?

Spark 3.1.1, Python 3.8.5, Ubuntu 18.04


1 Answer

Posted on 2024-09-19 23:36:05

So I managed to solve my problem. It is definitely a workaround, but it works.

I modified the PySpark Docker image so that its entrypoint is:

ENTRYPOINT [ "/opt/entrypoint.sh" ]

Now I can run the container without it exiting immediately:

docker run -td <docker_image_id>

and open a shell inside it:

docker exec -it <docker_container_id> /bin/bash

At this point I can submit the Spark job inside the container with the --packages flag:

$SPARK_HOME/bin/spark-submit \
   --master local[*] \
   --deploy-mode client \
   --name spark-python \
   --packages org.apache.hadoop:hadoop-azure:3.2.0 \
   --conf spark.hadoop.fs.azure.account.auth.type.user.dfs.core.windows.net=SharedKey \
   --conf spark.hadoop.fs.azure.account.key.user.dfs.core.windows.net=xxx \
   --files "abfss://data@user.dfs.core.windows.net/config.yml" \
   --py-files "abfss://data@user.dfs.core.windows.net/jobs.zip" \
  "abfss://data@user.dfs.core.windows.net/main.py"

Spark then downloaded the required dependencies, saved them under /root/.ivy2 inside the container, and executed the job successfully.
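If you want to check what was actually resolved before copying anything out, the jars end up in a flat directory under the Ivy home; a quick look from the host (the container id is a placeholder):

docker exec -it <docker_container_id> ls /root/.ivy2/jars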

I then copied the whole folder from the container to the host:

sudo docker cp <docker_container_id>:/root/.ivy2/ /opt/spark/.ivy2/

and modified the Dockerfile again to copy the folder into the image:

COPY .ivy2 /root/.ivy2
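Rebuilding and publishing the image is then the usual Docker workflow. A sketch with placeholder names, assuming the copied .ivy2 folder sits inside the build context next to the modified Dockerfile and that the registry is reachable from the AKS cluster:

docker build -t <registry>/<repository>:<tag> -f <path_to_modified_Dockerfile> /opt/spark
docker push <registry>/<repository>:<tag>

The pushed tag is then what spark.kubernetes.container.image points at in the spark-submit.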

Finally, I could submit the job to Kubernetes with the newly built image, and everything ran as expected.
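As an aside, instead of shipping the whole Ivy cache, it should also be possible to bake the missing artifact straight into Spark's jar directory when building the image. A hypothetical sketch, assuming the standard image layout with jars under /opt/spark/jars and hadoop-azure 3.2.0 to match the Hadoop version bundled with Spark 3.1.1 (transitive dependencies may still need the same treatment):

# Hypothetical Dockerfile addition: pull hadoop-azure from Maven Central into Spark's jar directory
ADD https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure/3.2.0/hadoop-azure-3.2.0.jar /opt/spark/jars/hadoop-azure-3.2.0.jar
# Files ADDed from a URL are not world-readable by default, so make the jar readable for the Spark user
RUN chmod 644 /opt/spark/jars/hadoop-azure-3.2.0.jar

With the jar already on the image's classpath, the --packages download step should not be needed inside the pods.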
