In the code below, the variable `data` holds several hundred execution results from database queries. Each result is one day of data, roughly 7,000 rows (the columns are timestamp and value). I append the days to one another, ending up with a few million rows, and this appending takes a very long time. Once I have built the complete dataset for one sensor, I store it as a column in the `unitdf` DataFrame, then repeat the process for every sensor and merge them all into `unitdf`.
I believe both the append and the merge are costly operations. The only possible solution I have found is to split each column out into a list and, once all the data has been added to the lists, combine all the columns into a single DataFrame. Any suggestions for speeding this up?
import pandas as pd
from cassandra.concurrent import execute_concurrent

i = 0
for sensor_id in sensors:  # loop through each of the 20 sensors
    # Prepared statement to query Cassandra
    session_data = session.prepare(
        "select timestamp, value from measurements_by_sensor "
        "where unit_id = ? and sensor_id = ? and date = ? ORDER BY timestamp ASC"
    )
    # Execute the prepared statement over a range of dates
    data = execute_concurrent(
        session,
        ((session_data, (unit_id, sensor_id, date)) for date in dates),
        concurrency=150,
        raise_on_first_error=False,
    )
    sensordf = pd.DataFrame()
    # Loop through the execution results and append every
    # successful execution that contains data
    for (success, result) in data:
        if success:
            sensordf = sensordf.append(pd.DataFrame(result.current_rows))
    sensordf.rename(columns={'value': sensor_id}, inplace=True)
    sensordf['timestamp'] = pd.to_datetime(sensordf['timestamp'],
                                           format="%Y-%m-%d %H:%M:%S",
                                           errors='coerce')
    if i == 0:
        i += 1
        unitdf = sensordf
    else:
        unitdf = unitdf.merge(sensordf, how='outer')
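The list-then-combine idea described above can be sketched as follows. This is a minimal illustration, not the original code: it uses a synthetic `make_day_frame` helper in place of the real Cassandra results, collects each day's frame in a Python list, calls `pd.concat` once per sensor instead of repeatedly appending (each `append` copies every row accumulated so far), and replaces the chain of outer merges with a single column-wise `pd.concat` on a shared timestamp index.

```python
import pandas as pd
import numpy as np

def make_day_frame(day, sensor_id, rows=5):
    # Stand-in for one day's query result: a timestamp column and a
    # value column, as returned by the Cassandra query in the question.
    ts = pd.date_range(day, periods=rows, freq="s")
    return pd.DataFrame({"timestamp": ts,
                         "value": np.arange(rows, dtype=float)})

dates = ["2020-01-01", "2020-01-02"]
sensors = ["s1", "s2"]

columns = {}
for sensor_id in sensors:
    # Accumulate each day's frame in a list, then concatenate once.
    parts = [make_day_frame(day, sensor_id) for day in dates]
    sensordf = pd.concat(parts, ignore_index=True)
    # Index by timestamp so the sensors can be aligned column-wise.
    columns[sensor_id] = (sensordf.set_index("timestamp")["value"]
                                  .rename(sensor_id))

# One column-wise concat replaces the repeated outer merges;
# rows are aligned on the timestamp index automatically.
unitdf = pd.concat(columns.values(), axis=1).reset_index()
```

Because `pd.concat` allocates the result once, this scales roughly linearly with the total number of rows, whereas append-in-a-loop is quadratic.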