How to pass parameters to a Spark job in a workflow template


I have a problem with my Spark Dataproc workflow.

This works when I submit the job directly:

gcloud dataproc jobs submit spark \
--project myproject \
--cluster=mycluster \
--region=europe-west3 \
--jars=gs://path/file.jar,gs://path/depende.jar \
--class=it.flow \
--properties spark.num.executors=2,spark.executor.cores=3,spark.executor.memory=5g,spark.driver.cores=2,spark.driver.memory=10g,spark.dynamicAllocation.enabled=false,spark.executor.userClassPathFirst=true,spark.driver.userClassPathFirst=true,spark.jars.packages=com.google.cloud:google-cloud-logging:2.2.0 \
--  20210820 010000 000 0 000 TRY

I created a Dataproc workflow plus Python code to launch it through Composer, and that works.

Now I need to make the final arguments dynamic (-- 20210820 010000 000 0 000 TRY).

However, I cannot get the parameters into the workflow template:

gcloud dataproc workflow-templates create try1 --region=europe-west3
 
gcloud dataproc workflow-templates add-job spark \
--workflow-template=try1 \
--step-id=create_try1 \
--class=it.flow \
--region=europe-west3 \
--jars=gs://path/file.jar,gs://path/depende.jar \
--properties spark.num.executors=2,spark.executor.cores=3,spark.executor.memory=5g,spark.driver.cores=2,spark.driver.memory=10g,spark.dynamicAllocation.enabled=false,spark.executor.userClassPathFirst=true,spark.driver.userClassPathFirst=true,spark.jars.packages=com.google.cloud:google-cloud-logging:2.2.0 \
-- $arg1 $arg2
 
gcloud dataproc workflow-templates set-cluster-selector TRY1  --region=europe-west3 --cluster-labels=goog-dataproc-cluster-name=cluster

This call:

gcloud dataproc workflow-templates instantiate TRY1  --region=europe-west3 --parameters="arg1=20210820"

results in the following error:

ERROR: (gcloud.dataproc.workflow-templates.instantiate) INVALID_ARGUMENT: Template does not contain a parameter with name arg1.

How can I fix this?

The YAML file:

id: create_file
jobs:
- sparkJob:
    args:
    - ARG1
    - ARG2
    jarFileUris:
    - gs://mybucket/try_file.jar
    - gs://mybucket/try_dependencies_2.jar
    mainClass: org.apache.hadoop.examples.tryFile
    properties:
      spark.driver.cores: '2'
      spark.driver.memory: 10g
      spark.driver.userClassPathFirst: 'true'
      spark.dynamicAllocation.enabled: 'false'
      spark.executor.cores: '3'
      spark.executor.memory: 5g
      spark.executor.userClassPathFirst: 'true'
      spark.jars.packages: com.google.cloud:google-cloud-logging:2.2.0
      spark.num.executors: '2'
  stepId: create_file_try
parameters:
- name: ARG1
  fields:
  - jobs['create_file_try'].sparkJob.args[0]
- name: ARG2
  fields:
  - jobs['create_file_try'].sparkJob.args[1]
name: projects/My-project-id/regions/europe-west3/workflowTemplates/create_file
updateTime: '2021-08-25T07:49:59.251096Z'

1 Answer

For the workflow template to accept parameters, it is best to work with the YAML file. When you run the full gcloud dataproc workflow-templates add-job spark command, you can get the YAML file: the YAML configuration is printed to the CLI.
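Alternatively, if the template already exists, its current YAML can be exported straight to a local file instead of being copied from the CLI output. A minimal sketch, assuming the template is named try1 and config.yaml is the destination file:

gcloud dataproc workflow-templates export try1 \
  --region=europe-west3 \
  --destination=config.yaml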

In this example, for testing, I simply used the sample code from the Dataproc documentation together with the values from your properties.

Note: in this example I used a dummy project-id in the YAML file. Make sure you use your actual project-id so you do not run into any problems.

Sample command:

gcloud dataproc workflow-templates add-job spark \
--workflow-template=try1 \
--step-id=create_try1 \
--class=org.apache.hadoop.examples.WordCount \
--region=europe-west3 \
--jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
--properties spark.num.executors=2,spark.executor.cores=3,spark.executor.memory=5g,spark.driver.cores=2,spark.driver.memory=10g,spark.dynamicAllocation.enabled=false,spark.executor.userClassPathFirst=true,spark.driver.userClassPathFirst=true,spark.jars.packages=com.google.cloud:google-cloud-logging:2.2.0 \
-- ARG1 ARG2

CLI output (YAML config):

id: try1
jobs:
- sparkJob:
    args:
    - ARG1
    - ARG2
    jarFileUris:
    - file:///usr/lib/spark/examples/jars/spark-examples.jar
    mainClass: org.apache.hadoop.examples.WordCount
    properties:
      spark.driver.cores: '2'
      spark.driver.memory: 10g
      spark.driver.userClassPathFirst: 'true'
      spark.dynamicAllocation.enabled: 'false'
      spark.executor.cores: '3'
      spark.executor.memory: 5g
      spark.executor.userClassPathFirst: 'true'
      spark.jars.packages: com.google.cloud:google-cloud-logging:2.2.0
      spark.num.executors: '2'
  stepId: create_try1
name: projects/your-project-id/regions/europe-west3/workflowTemplates/try1
placement:
  managedCluster:
    clusterName: mycluster
updateTime: '2021-08-25T03:30:47.365244Z'
version: 3

Copy the generated YAML configuration, open it in a text editor, and add the parameters: field. It will contain the parameters you want the template to accept.

parameters:
- name: ARG1
  fields:
  - jobs['create_try1'].sparkJob.args[0] # use the stepId in jobs[], in this example it is 'create_try1'
- name: ARG2
  fields:
  - jobs['create_try1'].sparkJob.args[1]

In this example, I placed it right after the stepId: line.

Edited YAML config:

id: try1
jobs:
- sparkJob:
    args:
    - ARG1
    - ARG2
    jarFileUris:
    - file:///usr/lib/spark/examples/jars/spark-examples.jar
    mainClass: org.apache.hadoop.examples.WordCount
    properties:
      spark.driver.cores: '2'
      spark.driver.memory: 10g
      spark.driver.userClassPathFirst: 'true'
      spark.dynamicAllocation.enabled: 'false'
      spark.executor.cores: '3'
      spark.executor.memory: 5g
      spark.executor.userClassPathFirst: 'true'
      spark.jars.packages: com.google.cloud:google-cloud-logging:2.2.0
      spark.num.executors: '2'
  stepId: create_try1
parameters:
- name: ARG1
  fields:
  - jobs['create_try1'].sparkJob.args[0]
- name: ARG2
  fields:
  - jobs['create_try1'].sparkJob.args[1]
name: projects/your-project-id/regions/europe-west3/workflowTemplates/try1
placement:
  managedCluster:
    clusterName: mycluster
updateTime: '2021-08-25T03:13:25.014685Z'
version: 3

Overwrite your workflow template with the edited YAML file:

gcloud dataproc workflow-templates import try1 \
  --region=europe-west3 \
  --source=config.yaml
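As an optional sanity check, the imported template can be described to confirm that the parameters: block is now part of it:

gcloud dataproc workflow-templates describe try1 \
  --region=europe-west3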

Run the template with gcloud dataproc workflow-templates instantiate:

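A sketch of that call, with placeholder values taken from the question. Note that the names passed to --parameters have to match the parameter names declared in the template (ARG1/ARG2 here), which is why the lowercase arg1 in the original command was rejected:

gcloud dataproc workflow-templates instantiate try1 \
  --region=europe-west3 \
  --parameters="ARG1=20210820,ARG2=010000"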

For more details, refer to Parameterization of Workflow Templates.
