I'm trying to use pd.Grouper, as per the answer here, to iterate over several separate dfs.
This works on 7 of my 8 dfs and only takes a few seconds. However, one of them (and it isn't even the largest) gets stuck, hangs, and eventually dies with a memory error, and I can't see why, since the dfs are nearly identical.
The failing code block looks like this:
g = df.groupby(pd.Grouper(freq="5s"))
df2 = pd.DataFrame(
    dict(
        open=g["price"].first(),
        close=g["price"].last(),
        high=g["price"].max(),
        low=g["price"].min(),
        volume=g["volume"].sum(),
        buy_volume=g["buy_volume"].sum(),
        sell_volume=-g["sell_volume"].sum(),
        num_trades=g["size"].count(),
        num_buy_trades=g["buy_trade"].sum(),
        num_sell_trades=g["sell_trade"].sum(),
        pct_buy_trades=g["buy_trade"].mean() * 100,
        pct_sell_trades=g["sell_trade"].mean() * 100,
    )
)
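As an aside, pandas has a built-in ohlc() aggregation that produces the open/close/high/low columns in one call. A minimal sketch on synthetic data (column names follow the question; the data itself is made up):

```python
import numpy as np
import pandas as pd

# Synthetic tick data with the question's column names.
rng = np.random.default_rng(0)
n = 100
idx = pd.date_range("2018-05-18 12:05:11", periods=n, freq="250ms")
df = pd.DataFrame(
    {
        "price": 8097 + rng.standard_normal(n).cumsum(),
        "size": rng.random(n),
        "volume": rng.random(n),
        "buy_trade": rng.random(n) > 0.5,
    },
    index=idx,
)

g = df.groupby(pd.Grouper(freq="5s"))
# ohlc() builds open/high/low/close in a single call instead of four
df2 = g["price"].ohlc()
df2["volume"] = g["volume"].sum()
df2["num_trades"] = g["size"].count()
print(df2.head())
```

The remaining columns (buy/sell volumes and trade counts) can be assigned onto df2 the same way, since every aggregation shares the same 5s bin index.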
The df in question has the following format:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 3589964 entries, 1970-01-01 00:00:01.528000 to 2018-06-03 05:54:02.690000
Data columns (total 8 columns):
price float64
size float64
buy_sell bool
volume float64
buy_volume float64
sell_volume float64
buy_trade bool
sell_trade bool
dtypes: bool(3), float64(5)
memory usage: 254.6 MB
It has 3.5 million entries and looks like this:
price size buy_sell volume buy_volume sell_volume buy_trade sell_trade
T
2018-05-18 12:05:11.407 8097.02 0.007823 False 0.007823 0.007823 0.000000 True False
2018-05-18 12:05:11.720 8097.02 0.129632 False 0.129632 0.129632 0.000000 True False
2018-05-18 12:05:12.402 8097.02 0.037028 False 0.037028 0.037028 0.000000 True False
2018-05-18 12:05:12.786 8097.03 0.307939 False 0.307939 0.307939 0.000000 True False
2018-05-18 12:05:12.786 8097.02 0.025517 False 0.025517 0.025517 0.000000 True False
2018-05-18 12:05:12.788 8097.03 0.014835 False 0.014835 0.014835 0.000000 True False
2018-05-18 12:05:14.226 8097.03 0.006198 False 0.006198 0.006198 0.000000 True False
2018-05-18 12:05:14.341 8092.00 -0.010989 True 0.010989 0.000000 -0.010989 False True
2018-05-18 12:05:15.307 8092.00 -0.000011 True 0.000011 0.000000 -0.000011 False True
2018-05-18 12:05:15.307 8091.99 -0.019989 True 0.019989 0.000000 -0.019989 False True
2018-05-18 12:05:15.387 8091.99 -0.007340 True 0.007340 0.000000 -0.007340 False True
2018-05-18 12:05:15.603 8091.99 -0.002440 True 0.002440 0.000000 -0.002440 False True
2018-05-18 12:05:15.679 8090.01 -0.098909 True 0.098909 0.000000 -0.098909 False True
Here is another df that works perfectly fine and completes in a few seconds:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1952985 entries, 2018-05-18 12:05:11.791000 to 2018-06-03 05:53:57
Data columns (total 8 columns):
price float64
side object
size int64
volume int64
buy_volume float64
sell_volume float64
buy_trade bool
sell_trade bool
dtypes: bool(2), float64(3), int64(2), object(1)
memory usage: 188.0+ MB
price side size volume buy_volume sell_volume buy_trade sell_trade
timestamp
2018-05-18 12:05:11.791 8112.0 Sell -4085 4085 0.0 -4085.0 False True
2018-05-18 12:05:11.811 8111.5 Sell -598 598 0.0 -598.0 False True
2018-05-18 12:05:11.849 8111.5 Sell -3000 3000 0.0 -3000.0 False True
2018-05-18 12:05:11.876 8111.5 Sell -1300 1300 0.0 -1300.0 False True
2018-05-18 12:05:11.949 8111.5 Sell -3408 3408 0.0 -3408.0 False True
2018-05-18 12:05:12.476 8111.5 Sell -50000 50000 0.0 -50000.0 False True
2018-05-18 12:05:12.523 8111.5 Sell -2500 2500 0.0 -2500.0 False True
2018-05-18 12:05:12.698 8111.5 Sell -8000 8000 0.0 -8000.0 False True
2018-05-18 12:05:12.722 8111.5 Sell -8000 8000 0.0 -8000.0 False True
2018-05-18 12:05:12.809 8111.5 Sell -815 815 0.0 -815.0 False True
I've been waiting to copy the error message, but it has now been stuck for 50 minutes.
Thanks for any help; this has been giving me a headache.
My first idea would be to sort the index with DataFrame.sort_index. If the performance problem persists, there should be a problem with the data itself: check the DatetimeIndex, because the groupby creates many small 5s groups.

EDIT: After double-checking the DatetimeIndex, it turns out it starts at 1970-01-01 00:00:01.528000, so there is a huge number of groups here, and that is the reason for the poor performance.
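The scale of the problem can be checked directly from the index span: with a stray epoch-era row, a 5s Grouper has to materialize hundreds of millions of bins. A sketch using the endpoints shown in the question's index summary:

```python
import pandas as pd

# The failing df's index summary shows it starts near the Unix epoch.
idx = pd.DatetimeIndex([
    "1970-01-01 00:00:01.528",   # stray row near the Unix epoch
    "2018-05-18 12:05:11.407",
    "2018-06-03 05:54:02.690",
])

# Number of 5-second bins the Grouper must span across the full index.
span = idx.max() - idx.min()
n_bins = int(span / pd.Timedelta("5s")) + 1
print(n_bins)  # hundreds of millions of bins across the ~48-year span

# Dropping the bogus epoch-era rows restores a normal-sized grouping.
good = idx[idx >= "2018-01-01"]
n_good = int((good.max() - good.min()) / pd.Timedelta("5s")) + 1
print(n_good)  # only a few hundred thousand bins
```

So filtering out (or correcting) the rows with epoch-era timestamps before grouping should bring this df back in line with the other seven.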