How to merge multiple JSON data rows based on one field in PySpark using a given reduce function

Published 2024-06-25 05:25:55


How can the merge function below be used with PySpark to combine JSON data rows like those shown?

Note: this is just a trimmed-down example; I have 1000 rows of data to merge. What is the most efficient solution? For better or worse, I have to use PySpark.

Input:

data = [
    {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
    {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
    {'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
    ...  More ...
]

Desired output:

[
    {'name': 'Joe Schmoe', 'addresses': [('20080411204445', '100 Sunder Ct'), ('20040218165319', '100 Lee Ave')]},
    {'name': 'John Doe', 'addresses': [('20120309173318', '1818 Westminster')]},
    ...  More ...
]

Merge function:

def reduce_on_name(a, b):
    '''Combines two JSON data rows based on name'''
    merged = {}
    if a['name'] == b['name']:
        addresses = (a['timestamp'], a['address']), (b['timestamp'], b['address'])
        merged['name'] = a['name']
        merged['addresses'] = addresses
    return merged
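One caveat worth noting: as written, `reduce_on_name` is not associative, so it cannot be folded over more than two rows — after the first merge the result has an `addresses` key but no `timestamp` or `address`, so merging in a third row raises a `KeyError`. A quick pure-Python check illustrates this (the function is repeated here so the snippet runs standalone, and `row3` is a made-up extra row, not from the question):

```python
def reduce_on_name(a, b):
    '''Combines two JSON data rows based on name'''
    merged = {}
    if a['name'] == b['name']:
        addresses = (a['timestamp'], a['address']), (b['timestamp'], b['address'])
        merged['name'] = a['name']
        merged['addresses'] = addresses
    return merged

row1 = {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'}
row2 = {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'}
row3 = {'timestamp': '19990101000000', 'address': '1 Elm St', 'name': 'Joe Schmoe'}

merged = reduce_on_name(row1, row2)  # works for the first pair
try:
    reduce_on_name(merged, row3)     # merged has no 'timestamp' key
except KeyError as e:
    print('KeyError:', e)            # prints: KeyError: 'timestamp'
```

This is why the answers below group all rows for a name first and merge the whole group in one pass, instead of reducing pairwise.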

2 Answers

OK, building on maxymo's example, I put together my own reusable code. It isn't exactly what I was after, but it gets me closer to how I want to solve this particular problem: no lambdas, and reusable code.

#!/usr/bin/env pyspark
# -*- coding: utf-8 -*-
data = [
    {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
    {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
    {'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
]


def combine(field):
    '''Returns a function which reduces on a specific field

    Args:
        field(str): data field to use for merging

    Returns:
        func: returns a function which supplies the data for the field
    '''

    def _reduce_this(data):
        '''Returns the field value using data'''
        return data[field]

    return _reduce_this


def aggregate(*fields):
    '''Merges data based on a list of fields

    Args:
        fields(list): a list of fields that should be used as a composite key

    Returns:
       func: a function which does the aggregation
    '''

    def _merge_this(iterable):
        name, iterable = iterable
        new_map = dict(name=name)  # dropped the unused 'window' placeholder
        for data in iterable:
            for field, value in data.items():  # .iteritems() is Python 2 only
                if field in fields:
                    new_map[field] = value
                else:
                    new_map.setdefault(field, set()).add(value)
        return new_map

    return _merge_this

# sc provided by pyspark context
combined = sc.parallelize(data).groupBy(combine('name'))
reduced = combined.map(aggregate('name'))
output = reduced.collect()
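Since `groupBy` hands the map step `(key, iterable)` pairs, the `aggregate` helper can be sanity-checked without a Spark context at all. A minimal standalone sketch (the helper is repeated here, with Python 3's `.items()`, so the snippet runs on its own):

```python
def aggregate(*fields):
    '''Same aggregate helper as above, repeated so this snippet runs alone'''
    def _merge_this(iterable):
        name, rows = iterable
        new_map = dict(name=name)
        for data in rows:
            for field, value in data.items():
                if field in fields:
                    new_map[field] = value           # key fields kept as-is
                else:
                    new_map.setdefault(field, set()).add(value)  # others collected
        return new_map
    return _merge_this

merge = aggregate('name')
result = merge(('Joe Schmoe', [
    {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
    {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
]))
print(result['name'])     # Joe Schmoe
print(result['address'])  # set of both addresses
```

Note the non-key fields come back as unordered sets, so timestamps and addresses lose their pairing — which is why this "isn't exactly what I was after" compared to the desired `(timestamp, address)` tuples.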

I think it should be something like this:

sc.parallelize(data) \
  .groupBy(lambda x: x['name']) \
  .map(lambda t: {'name': t[0],
                  'addresses': [(x['timestamp'], x['address']) for x in t[1]]}) \
  .collect()
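For anyone wanting to verify the shape of the result without a cluster, the same groupBy/map logic can be mimicked in plain Python (this is only an illustration of the pipeline, not a substitute for the RDD version):

```python
from collections import defaultdict

data = [
    {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
    {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
    {'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
]

# Emulate groupBy(lambda x: x['name'])
groups = defaultdict(list)
for row in data:
    groups[row['name']].append(row)

# Emulate the map step: one dict per name, pairing timestamps with addresses
output = [
    {'name': name, 'addresses': [(x['timestamp'], x['address']) for x in rows]}
    for name, rows in groups.items()
]
print(output)
```

This produces one record per name with `(timestamp, address)` tuples, matching the desired output.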
