BigQuery: get speed from time/location data (rows above/below the current row)

Posted 2024-09-27 01:19:07


I have a table in BigQuery containing tracking data for NASCAR drivers (dummy data for a project I'm working on). The x and y coordinates are sampled 10 times per second. capture_frame identifies the current frame, and each consecutive capture_frame should be 100 ms apart, since the data is collected every 100 ms.

I want to calculate each driver's speed for each lap. I know how to do this in pandas, but I think it should be possible in BigQuery. To compute the speed, I look at the rows 2 before and 2 after the current capture_frame and divide the distance between them by the difference in epoch time, which should be 400 ms.

Here is an example of some capture frames for one driver on the first lap of a race. There are a few hundred capture frames per lap, and 20 drivers mixed in, but it's easier to follow if we look at just one driver/race/lap.

+------+---------------+-----+--------+---+------+-----+------------+-------------+-------------+
| Race | Capture_frame | Lap | Driver | … | X    | Y   | Epoch_time | Delta_dist  | Curr_speed  |
+------+---------------+-----+--------+---+------+-----+------------+-------------+-------------+
| I500 | 1             | 1   | Logano | … | 2.1  | 1   | 1552089720 | NULL        | NULL        |
| I500 | 2             | 1   | Logano | … | 2.2  | 1.1 | 1552089820 | NULL        | NULL        |
| I500 | 3             | 1   | Logano | … | 2.22 | 1.2 | 1552089920 | 2.265921446 | 0.005664804 |
| I500 | 4             | 1   | Logano | … | 3.22 | 1.5 | 1552090020 | 3.124163888 | 0.00781041  |
| I500 | 5             | 1   | Logano | … | 4.22 | 1.8 | 1552090120 | NULL        | NULL        |
| I500 | 6             | 1   | Logano | … | 5.22 | 1.9 | 1552090220 | NULL        | NULL        |
+------+---------------+-----+--------+---+------+-----+------------+-------------+-------------+

Frame 3's delta_dist is calculated as sqrt((4.22-2.1)^2 + (1.8-1)^2), and curr_speed is that number divided by 400. The first/last two distances and speeds of each lap will be NULL because there are no prior (or following) x or y coordinates, which is fine, since there isn't really a meaningful speed within 0.1 s of the start or finish.

In pandas I would do it like this (it's not great code, since I just process each driver and race on its own):

# laps_per_race: dictionary with the number of laps per race
import numpy as np

for driver in driver_list:
    for race in race_list:
        driver_race_query = "SELECT * FROM nascar_xyz WHERE Driver='{driver}' AND Race='{race}'".format(driver=driver, race=race)
        df_entire_race = client.query(driver_race_query).to_dataframe()
        num_laps = laps_per_race[race]
        for lap in range(1, num_laps + 1):
            # get the subset of the dataframe just for this lap
            df = df_entire_race.loc[df_entire_race['Lap'] == lap].copy()
            df.sort_values('Epoch_time', inplace=True)
            df['prev_x'] = df['X'].shift(2)
            df['next_x'] = df['X'].shift(-2)
            df['prev_y'] = df['Y'].shift(2)
            df['next_y'] = df['Y'].shift(-2)
            # plain distance formula: sqrt((x2-x1)^2 + (y2-y1)^2)
            df['delta_dist'] = np.sqrt((df['next_x'] - df['prev_x'])**2 + (df['next_y'] - df['prev_y'])**2)

            # 400.0 ms is the actual time difference between those two rows
            df['Curr_speed'] = df['delta_dist'] / 400.0
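For reference, the nested loops above can be collapsed into one vectorized pass with groupby, which is the pandas analogue of SQL's PARTITION BY. This is a sketch assuming the column names from the example table:

```python
import numpy as np
import pandas as pd

def add_speed(df):
    """Compute delta_dist and Curr_speed per (Race, Driver, Lap) group,
    looking 2 frames back and 2 frames ahead (400 ms apart)."""
    df = df.sort_values(['Race', 'Driver', 'Lap', 'Epoch_time']).copy()
    g = df.groupby(['Race', 'Driver', 'Lap'])
    # shifts stay within each group, so laps never bleed into each other
    dx = g['X'].shift(-2) - g['X'].shift(2)
    dy = g['Y'].shift(-2) - g['Y'].shift(2)
    df['delta_dist'] = np.sqrt(dx**2 + dy**2)
    df['Curr_speed'] = df['delta_dist'] / 400.0
    return df
```

Because GroupBy.shift returns NaN at the edges of each group, the first and last two frames of every lap come out NULL, exactly as in the example table.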

I think in my SQL query I would have to do either a GROUP BY or a PARTITION BY, since I want to look at each race by driver and then by lap (if that level of abstraction makes sense). Maybe for the look-back/look-ahead I can use window functions (https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts) with something called LAG, which is similar to .shift() in pandas.


1 Answer

You are on the right track. I'll take a public dataset that has buses driving around Staten Island, and use geographical distance computed from their latitude and longitude:

WITH data AS (
  SELECT bus, ST_GeogPoint(longitude, latitude) point
    , PARSE_TIMESTAMP('%Y%m%d %H%M%S',FORMAT('%i %06d', day, time)) ts
  FROM `fh-bigquery.mta_nyc_si.201410_bustime`
  WHERE day=20141014
  AND bus IN (7043, 7086, 7076, 2421, 7052, 7071)
)


SELECT * 
FROM (
  SELECT bus, ts, distance/time speed
  FROM (
    SELECT bus, ts
      , ST_DISTANCE(point, LAG(point, 3) OVER(PARTITION BY bus ORDER BY ts)) distance
      , TIMESTAMP_DIFF(ts, LAG(ts, 3) OVER(PARTITION BY bus ORDER BY ts), SECOND) time
    FROM data
  )
  WHERE time IS NOT null 
)
WHERE speed < 500
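The same look-back pattern from that query (distance to the row 3 samples earlier, divided by the elapsed seconds, per partition) maps onto pandas like this. This is a sketch on planar x/y data rather than geography; the column names `bus`, `ts`, `x`, `y` are placeholders:

```python
import numpy as np
import pandas as pd

def lag_speed(df, lag=3):
    """Per-bus speed: distance from the position `lag` rows back divided by
    the elapsed seconds, like LAG(..., 3) OVER (PARTITION BY bus ORDER BY ts)."""
    df = df.sort_values(['bus', 'ts']).copy()
    g = df.groupby('bus')
    dist = np.hypot(df['x'] - g['x'].shift(lag), df['y'] - g['y'].shift(lag))
    secs = (df['ts'] - g['ts'].shift(lag)).dt.total_seconds()
    df['speed'] = dist / secs
    return df
```

The first `lag` rows of each bus come out NaN, which plays the role of the `WHERE time IS NOT null` filter in the SQL version; an outlier cutoff like `speed < 500` can then be applied as a boolean mask.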

