Spark uses only one core for parallelized tasks



So I have this code:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAll((
    ("spark.python.profile", "true" if args.profile else "false"),
    ("spark.task.maxFailures", "20"),
    ("spark.driver.cores", "4"),
    ("spark.executor.cores", "4"),
    ("spark.shuffle.service.enabled", "true"),
    ("spark.dynamicAllocation.enabled", "true"),
))

# TODO could this be set somewhere in cosr-ops instead?
executor_environment = {}
if config["ENV"] == "prod":
    executor_environment = {
        "PYTHONPATH": "/cosr/back",
        "PYSPARK_PYTHON": "/cosr/back/venv/bin/python",
        "LD_LIBRARY_PATH": "/usr/local/lib"
    }

sc = SparkContext(appName="Common Search Index", conf=conf, environment=executor_environment)

# First, generate a list of all WARC files
warc_filenames = list_warc_filenames()

# Then split their indexing in Spark workers
warc_records = sc.parallelize(warc_filenames, 4).flatMap(iter_records)
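For reference, once the context is up, the effective parallelism can be inspected roughly like this (a minimal sketch using only the objects defined above; the comments state what I would expect to see, not measured output):

print(sc.defaultParallelism)                     # how many cores Spark plans to schedule across
print(warc_records.getNumPartitions())           # tasks this RDD produces, here the 4 from parallelize()
print(sc.getConf().get("spark.executor.cores"))  # "4", from the conf above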

While it spins up all the Spark machinery, it uses all the cores.

But when it starts running the actual tasks (the indexing), it uses only one core at 100%.

How can I make a Spark task use all the cores?
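One thing I am considering, sketched under the assumption that each Python task runs on a single core, so parallelism only comes from running several tasks at once (num_slices is my own name, not from the code above):

# Hypothetical tweak: create at least as many partitions as available cores,
# so several indexing tasks can run in parallel instead of a single one.
num_slices = max(len(warc_filenames), sc.defaultParallelism)
warc_records = sc.parallelize(warc_filenames, num_slices).flatMap(iter_records)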

