pd.iterrows() consumes all the memory and gives an error (Process finished with exit code 137 (interrupted by signal 9: SIGKILL))

Posted 2024-09-28 21:55:53

I have a CSV file with more than 750,000 rows and two columns (SN, state).
SN is a serial number running from 0 to 750,000, and state is either 0 or 1.
I read the CSV with pandas, then for each row I load the .npy file named after its SN and append it to one of two lists, x_train and x_val.
x_val should hold 2,000 elements, 700 of them with state=1 and the rest with state=0; everything else goes into x_train.
The problem is that after reading about 190,000 rows the process stops and the RAM is exhausted (the PC has 32 GB of RAM):

x_train len=  195260
Size of list1: 1671784bytes
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)

My code is:

import os
import gc
import sys
import pandas
import numpy as np

nodules_path = "~/cropped_nodules/"
nodules_csv = pandas.read_csv("~/cropped_nodules_2.csv")

positive = 0
negative = 0
x_val = []
x_train = []
y_train = []
y_val = []

# iterrows() yields (index, row) pairs, so the row has to be unpacked
for index, nodule in nodules_csv.iterrows():

    if nodule.state == 1 and positive <= 700 and len(x_val) <= 2000 :
        positive += 1
        x_val_img = str(nodule.SN) + ".npy"
        x_val.append(np.load(os.path.join(nodules_path,x_val_img)))
        y_val.append(nodule.state)

    elif nodule.state == 0 and negative <= 1300 and len(x_val) <= 2000:
        x_val_img = str(nodule.SN) + ".npy"
        negative += 1
        x_val.append(np.load(os.path.join(nodules_path,x_val_img)))
        y_val.append(nodule.state)

    else:

        if len(x_train) % 10000 == 0:
            gc.collect()
            print("gc done")
        x_train_img = str(nodule.SN) + ".npy"
        x_train.append(np.load(os.path.join(nodules_path,x_train_img)))
        y_train.append(nodule.state)
        print("x_train len= ", len(x_train))
        print("Size of list1: " + str(sys.getsizeof(x_train)) + "bytes")

I have tried the following:

  1. Calling gc.collect() manually
  2. Using df.itertuples()
  3. Using DataFrame.apply()

but the same problem occurs after roughly 100,000 rows.
I also tried to do the selection with pandas vectorization, but I couldn't work out how; I don't think these conditions can be expressed that way (my best attempt is sketched below).
Is there a better way to do this?
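(This is roughly what I had in mind for the vectorized split, assuming the SN and state columns above; I'm not sure it is equivalent to my loop, and loading every .npy into a list would presumably still need the same amount of memory:)

import os
import numpy as np
import pandas as pd

nodules_path = os.path.expanduser("~/cropped_nodules/")
nodules = pd.read_csv("~/cropped_nodules_2.csv")

# first 700 positive and first 1300 negative rows go to validation,
# the rest to training -- no Python-level row loop needed for the split
val_df = pd.concat([nodules[nodules.state == 1].head(700),
                    nodules[nodules.state == 0].head(1300)])
train_df = nodules.drop(val_df.index)

def load_arrays(df):
    # load the .npy file named after each SN value
    return [np.load(os.path.join(nodules_path, str(sn) + ".npy")) for sn in df.SN]

x_val, y_val = load_arrays(val_df), val_df.state.tolist()
# x_train, y_train = load_arrays(train_df), train_df.state.tolist()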

I tried implementing chunking as suggested by @XtianP:

with pandas.read_csv("~/cropped_nodules_2.csv", chunksize=chunksize) as reader:
    for chunk in reader:
        for index, nodule in chunk.iterrows():
            if nodule.state == 1 and positive <= 700 and len(x_val) <= 2000 :
........

But the same problem happened! (Maybe my implementation is not correct.)
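(Spelled out, the chunked variant amounts to something like this, with the same counters and .npy loading as in the original code; since every loaded array still ends up in the same in-memory lists, chunking the CSV alone presumably doesn't reduce the total memory used:)

import os
import numpy as np
import pandas

nodules_path = os.path.expanduser("~/cropped_nodules/")
chunksize = 10000
positive = negative = 0
x_val, y_val, x_train, y_train = [], [], [], []

with pandas.read_csv("~/cropped_nodules_2.csv", chunksize=chunksize) as reader:
    for chunk in reader:
        for index, nodule in chunk.iterrows():
            img = str(nodule.SN) + ".npy"
            if nodule.state == 1 and positive <= 700 and len(x_val) <= 2000:
                positive += 1
                x_val.append(np.load(os.path.join(nodules_path, img)))
                y_val.append(nodule.state)
            elif nodule.state == 0 and negative <= 1300 and len(x_val) <= 2000:
                negative += 1
                x_val.append(np.load(os.path.join(nodules_path, img)))
                y_val.append(nodule.state)
            else:
                x_train.append(np.load(os.path.join(nodules_path, img)))
                y_train.append(nodule.state)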

Maybe the problem is not with pandas at all, but with the lists getting bigger and bigger.
Yet the sys.getsizeof(x_train) output is only Size of list1: 1671784bytes.
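(As far as I know, sys.getsizeof on a list only measures the list object itself, i.e. its internal array of pointers, not the numpy arrays it references; summing the arrays' own sizes should give the real figure, e.g.:)

# getsizeof(x_train) counts only the list's pointer storage;
# the arrays it references are measured by their .nbytes
total = sum(arr.nbytes for arr in x_train)
print("x_train holds", len(x_train), "arrays,", total / 1024**3, "GiB of payload")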

I tracked the memory allocations with tracemalloc like this:

import tracemalloc

tracemalloc.start()

# ... run your application ...

snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')

print("[ Top 10 ]")
for stat in top_stats[:10]:
    print(stat)

The result is:

[ Top 10 ]
/home/mustafa/.local/lib/python3.8/site-packages/numpy/lib/format.py:741: size=11.3 GiB, count=204005, average=58.2 KiB
/home/mustafa/.local/lib/python3.8/site-packages/numpy/lib/format.py:771: size=4781 KiB, count=102002, average=48 B
/usr/local/lib/python3.8/dist-packages/pandas/core/indexes/base.py:4855: size=2391 KiB, count=102000, average=24 B
/home/mustafa/home/mustafa/project/LUNAMASK/nodule_3D_CNN.py:84: size=806 KiB, count=2, average=403 KiB
/home/mustafa/home/mustafa/project/LUNAMASK/nodule_3D_CNN.py:85: size=805 KiB, count=1, average=805 KiB
/usr/local/lib/python3.8/dist-packages/pandas/io/parsers.py:2056: size=78.0 KiB, count=2305, average=35 B
/usr/lib/python3.8/abc.py:102: size=42.5 KiB, count=498, average=87 B
/home/mustafa/.local/lib/python3.8/site-packages/numpy/core/_asarray.py:83: size=41.6 KiB, count=757, average=56 B
/usr/local/lib/python3.8/dist-packages/pandas/core/series.py:512: size=37.5 KiB, count=597, average=64 B
/usr/local/lib/python3.8/dist-packages/pandas/core/internals/managers.py:1880: size=16.5 KiB, count=5, average=3373 B
