Reading and accessing nested fields in JSON files with Spark

Published 2024-10-01 15:37:08


I have multiple JSON files that I want to use to create a Spark DataFrame. When testing with a subset, loading the files gives me rows of the raw JSON structure itself rather than the parsed fields I expect. I'm doing the following:

    df = spark.read.json('gutenberg/test')
    df.show()
    +--------------------+--------------------+--------------------+
    |                   1|                  10|                   5|
    +--------------------+--------------------+--------------------+
    |                null|[WrappedArray(),W...|                null|
    |                null|                null|[WrappedArray(Uni...|
    |[WrappedArray(Jef...|                null|                null|
    +--------------------+--------------------+--------------------+

When I check the DataFrame's schema, the data appears to be there, but I'm having trouble accessing it:

    df.printSchema()
    root
     |-- 1: struct (nullable = true)
     |    |-- author: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- formaturi: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- language: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- rights: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- subject: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- title: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- txt: string (nullable = true)
     |-- 10: struct (nullable = true)
     |    |-- author: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- formaturi: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- language: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- rights: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- subject: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- title: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- txt: string (nullable = true)
     |-- 5: struct (nullable = true)
     |    |-- author: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- formaturi: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- language: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- rights: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- subject: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- title: array (nullable = true)
     |    |    |-- element: string (containsNull = true)
     |    |-- txt: string (nullable = true)

I keep getting errors when trying to access the information, so any help would be great.

Specifically, I want to create a new DataFrame whose columns are ('author', 'formaturi', 'language', 'rights', 'subject', 'title', 'txt').

I'm using PySpark 2.2.


1 Answer

Since I don't know exactly what the JSON files look like, assuming they are newline-delimited JSON, this should work.
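For context, the schema in the question suggests each record is a JSON object keyed by a Gutenberg book id, which is why the columns come out as `1`, `10`, and `5` instead of `author`, `title`, and so on. A minimal sketch of that shape (the record contents here are invented for illustration):

```python
import json

# A hypothetical record shaped like the question's schema: the top-level
# key is the book id, so Spark turns "1" into a column name.
raw = '{"1": {"author": ["Jefferson, Thomas"], "title": ["The Declaration of Independence"], "txt": "..."}}'

record = json.loads(raw)
print(list(record.keys()))       # ['1'] -- the book id becomes the column
print(list(record["1"].keys()))  # ['author', 'title', 'txt'] -- the fields wanted as columns
```

Flattening each record, as below, is one way to pull those inner fields up to the top level.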

    def _construct_key(previous_key, separator, new_key):
        # Join parent and child keys with the separator; the top level has no parent.
        if previous_key:
            return "{}{}{}".format(previous_key, separator, new_key)
        else:
            return new_key

    def flatten(nested_dict, separator="_", root_keys_to_ignore=frozenset()):
        """Flatten a nested dict into a single level, joining keys with separator."""
        assert isinstance(nested_dict, dict)
        assert isinstance(separator, str)
        flattened_dict = dict()

        def _flatten(object_, key):
            if isinstance(object_, dict):
                for object_key in object_:
                    if not (not key and object_key in root_keys_to_ignore):
                        _flatten(object_[object_key],
                                 _construct_key(key, separator, object_key))
            elif isinstance(object_, (list, set)):
                for index, item in enumerate(object_):
                    _flatten(item, _construct_key(key, separator, index))
            else:
                flattened_dict[key] = object_

        _flatten(nested_dict, None)
        return flattened_dict

    def flatten_row(row):
        # Convert the Row to a plain dict (recursively) before flattening.
        return flatten(row.asDict(True))

    df = spark.read.json('gutenberg/test',
                         primitivesAsString=True,
                         allowComments=True,
                         allowUnquotedFieldNames=True,
                         allowNumericLeadingZero=True,
                         allowBackslashEscapingAnyCharacter=True,
                         mode='DROPMALFORMED') \
             .rdd.map(flatten_row).toDF()
    df.show()
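To see what the mapper produces per row, here is a self-contained run of the same flattening logic on an invented record shaped like one row of the schema above (no Spark needed; the helpers are repeated so this sketch runs on its own):

```python
# Same helpers as in the answer, repeated so this example is self-contained.
def _construct_key(previous_key, separator, new_key):
    return "{}{}{}".format(previous_key, separator, new_key) if previous_key else new_key

def flatten(nested_dict, separator="_", root_keys_to_ignore=frozenset()):
    flattened_dict = {}

    def _flatten(object_, key):
        if isinstance(object_, dict):
            for object_key in object_:
                if not (not key and object_key in root_keys_to_ignore):
                    _flatten(object_[object_key], _construct_key(key, separator, object_key))
        elif isinstance(object_, (list, set)):
            for index, item in enumerate(object_):
                _flatten(item, _construct_key(key, separator, index))
        else:
            flattened_dict[key] = object_

    _flatten(nested_dict, None)
    return flattened_dict

# An invented record shaped like one row of the question's schema.
row = {"1": {"author": ["Jefferson, Thomas"], "language": ["en"], "txt": "When in the course..."}}
print(flatten(row))
# {'1_author_0': 'Jefferson, Thomas', '1_language_0': 'en', '1_txt': 'When in the course...'}
```

Note that the book id is still baked into each key (`1_author_0` rather than `author`), so getting the exact columns the question asks for would take one more renaming step after the flatten.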
