How to get the equivalent of the Postgres window function `nth_value` over a partition in PySpark Hive SQL?

Posted 2024-10-01 13:34:57


I am working through this exercise: https://www.windowfunctions.com/questions/grouping/6
There, the answer uses the Oracle/Postgres window function `nth_value`, which is not implemented in the Hive SQL dialect that PySpark uses. How can I get the same result in PySpark?

Postgres SQL code


select distinct(breed),
       nth_value(weight, 2) over ( partition by breed order by weight
                                   RANGE BETWEEN UNBOUNDED PRECEDING
                                   AND UNBOUNDED FOLLOWING
                                   ) as imagined_weight
from cats 
order by breed
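To make the intent of `nth_value(weight, 2)` with an unbounded frame concrete: every row in a breed's partition sees the second value in that partition's weight ordering, i.e. the second-lowest weight per breed. A pure-Python sketch of that semantics (the helper name `nth_value_per_group` is illustrative, not a library function):

```python
# Sketch of nth_value(weight, 2) over (partition by breed order by weight):
# group weights by breed, sort ascending, take the 2nd value per group.
from collections import defaultdict

def nth_value_per_group(rows, n=2):
    """rows: iterable of (breed, weight); returns {breed: n-th lowest weight}."""
    groups = defaultdict(list)
    for breed, weight in rows:
        groups[breed].append(weight)
    return {b: sorted(ws)[n - 1] for b, ws in groups.items() if len(ws) >= n}

cats = [
    ("Persian", 4.2), ("Persian", 4.5), ("Persian", 5.0),
    ("British Shorthair", 4.9), ("British Shorthair", 3.8),
    ("British Shorthair", 4.8), ("Siamese", 5.5), ("Siamese", 6.1),
    ("Maine Coon", 5.4), ("Maine Coon", 5.7),
    ("Maine Coon", 5.1), ("Maine Coon", 6.1),
]
print(nth_value_per_group(cats))
# {'Persian': 4.5, 'British Shorthair': 4.8, 'Siamese': 6.1, 'Maine Coon': 5.4}
```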

Question: how can I get the following result using PySpark?

breed   imagined_weight
British Shorthair   4.8
Maine Coon  5.4
Persian 4.5
Siamese 6.1

Data

import pandas as pd

import pyspark
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, IntegerType)
from pyspark.sql import functions as F
from pyspark.sql.window import Window
from pyspark import SQLContext

spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate()
sc = spark.sparkContext
sqlContext = SQLContext(sc)  # legacy API; spark.createDataFrame also works

df = pd.DataFrame({
    'name': [
        'Molly', 'Ashes', 'Felix', 'Smudge', 'Tigger', 'Alfie', 'Oscar',
        'Millie', 'Misty', 'Puss', 'Smokey', 'Charlie'
    ],
    'breed': [
        'Persian', 'Persian', 'Persian', 'British Shorthair',
        'British Shorthair', 'Siamese', 'Siamese', 'Maine Coon', 'Maine Coon',
        'Maine Coon', 'Maine Coon', 'British Shorthair'
    ],
    'weight': [4.2, 4.5, 5.0, 4.9, 3.8, 5.5, 6.1, 5.4, 5.7, 5.1, 6.1, 4.8],
    'color': [
        'Black', 'Black', 'Tortoiseshell', 'Black', 'Tortoiseshell', 'Brown',
        'Black', 'Tortoiseshell', 'Brown', 'Tortoiseshell', 'Brown', 'Black'
    ],
    'age': [1, 5, 2, 4, 2, 5, 1, 5, 2, 2, 4, 4]
})

schema = StructType([
    StructField('name', StringType(), True),
    StructField('breed', StringType(), True),
    StructField('weight', DoubleType(), True),
    StructField('color', StringType(), True),
    StructField('age', IntegerType(), True),
])

sdf = sqlContext.createDataFrame(df, schema)
sdf.createOrReplaceTempView("cats")

spark.sql('select * from cats limit 2').show()

My attempt so far

# My attempt
q = """
select
    distinct(breed),
    ( max(case when rn = 2 then weight end)
      over(partition by breed order by weight
                                   RANGE BETWEEN UNBOUNDED PRECEDING
                                   AND UNBOUNDED FOLLOWING)
    ) imagined_weight
from (
  select 
    c.*,
    row_number() over(order by weight) rn
    from cats c
    ) c
"""
spark.sql(q).show()

1 Answer

If you want the second-lowest weight per breed (your attempt is close, but its `row_number()` has no `partition by breed`, so it ranks weights over the whole table rather than within each breed):

select breed,
       max(case when seqnum = 2 then weight end) as imagined_weight
from (select  c.*, row_number() over (partition by breed order by weight) as seqnum
      from cats c
     ) c
group by breed;
