PySpark: grouping elements of a list

Posted 2024-09-30 04:39:23


Expected output: [((1, 2), (3, 4), 5)]

rdd = sc.parallelize([1,2,3,4,5])
rdd.map(lambda x: ((x[0],x[1]),(x[2],x[3]),x[4])).collect()

However, I get an error:

TypeError: 'int' object is not subscriptable

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:456)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:592)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:575)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)

Please correct the code. I am using Python and Spark.
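A likely cause (sketch of an answer, assuming standard PySpark semantics): `map` calls the lambda once per RDD element, so here `x` is each individual `int` (1, 2, ...), and `x[0]` raises `TypeError: 'int' object is not subscriptable`. One way to get the expected output is to make the whole list a single RDD record, then apply a grouping function to it. The helper name `group_elems` below is illustrative, and the function assumes the input list has exactly five elements, as in the question:

```python
def group_elems(xs):
    # Pair up the first four elements, leave the fifth as-is:
    # [1, 2, 3, 4, 5] -> ((1, 2), (3, 4), 5)
    return ((xs[0], xs[1]), (xs[2], xs[3]), xs[4])

# Plain-Python check of the grouping logic:
print(group_elems([1, 2, 3, 4, 5]))  # ((1, 2), (3, 4), 5)

# With Spark, parallelize the list as ONE record (note the extra
# brackets) so the lambda receives the whole list, not each int:
#
#   rdd = sc.parallelize([[1, 2, 3, 4, 5]])
#   rdd.map(group_elems).collect()   # [((1, 2), (3, 4), 5)]
```

If the data must stay as a five-element RDD, `rdd.collect()` followed by `group_elems` on the driver is another option, since the grouping spans elements that `map` would otherwise see one at a time.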


