How to parse a text file with regex

Published 2024-09-29 01:34:00


I'm trying to parse some log files to pull out a few numbers and convert them into a CSV file. The log files contain many other messages, but below is an excerpt of the lines that need parsing.

I want to get the loss and accuracy numbers from the text below into a CSV. Any suggestions for a bash or Python approach?

1500/1500 [==============================] - 1802s 1s/step - loss: 0.3430 - accuracy: 0.8753 - val_loss: 0.1110 - val_accuracy: 0.9670
Epoch 00002: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_02_0.069291627_0.98.h5
1500/1500 [==============================] - 1679s 1s/step - loss: 0.0849 - accuracy: 0.9739 - val_loss: 0.0693 - val_accuracy: 0.9807
Epoch 00003: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_03_0.055876694_0.98.h5
1500/1500 [==============================] - 1674s 1s/step - loss: 0.0742 - accuracy: 0.9791 - val_loss: 0.0559 - val_accuracy: 0.9845
Epoch 00004: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_04_0.053867317_0.99.h5
1500/1500 [==============================] - 1671s 1s/step - loss: 0.0565 - accuracy: 0.9841 - val_loss: 0.0539 - val_accuracy: 0.9850
Epoch 00005: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_05_0.053266536_0.99.h5
1500/1500 [==============================] - 1675s 1s/step - loss: 0.0409 - accuracy: 0.9881 - val_loss: 0.0533 - val_accuracy: 0.9855

Here is what I tried in Python:

import re
text = r"""00 [==============================] - 1802s 1s/step - loss: 0.3430 - accuracy: 0.8753 - val_loss: 0.1110 - val_accuracy: 0.9670
Epoch 00002: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_02_0.069291627_0.98.h5
1500/1500 [==============================] - 1679s 1s/step - loss: 0.0849 - accuracy: 0.9739 - val_loss: 0.0693 - val_accuracy: 0.9807
Epoch 00003: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_03_0.055876694_0.98.h5
1500/1500 [==============================] - 1674s 1s/step - loss: 0.0742 - accuracy: 0.9791 - val_loss: 0.0559 - val_accuracy: 0.9845
Epoch 00004: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_04_0.053867317_0.99.h5
1500/1500 [==============================] - 1671s 1s/step - loss: 0.0565 - accuracy: 0.9841 - val_loss: 0.0539 - val_accuracy: 0.9850
Epoch 00005: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_05_0.053266536_0.99.h5"""
regular_exp = re.compile(r'^.*val_accuracy.*$', re.M)
for match in regular_exp.finditer(text):
    print(match)

3 answers

In Python, use named capture groups:

(?m)^(?P<iteration>\d+(?:/\d+)?)\s+\[=+]\s+-\s+(?P<seconds>\d+)s\s+1s/step\s+-\s+loss:\s*(?P<loss>\d+\.\d+)\s+-\s+accuracy:\s*(?P<accuracy>\d+\.\d+)\s+-\s+val_loss:\s*(?P<val_loss>\d+\.\d+)\s*-\s*val_accuracy:\s*(?P<val_accuracy>\d+\.\d+)\r?\nEpoch\s+(?P<epoch_num>\d+):\s*saving model to\s*(?P<epoch_file>.*)

proof

Python code:

regular_exp = re.compile(r'^(?P<iteration>\d+(?:/\d+)?)\s+\[=+]\s+-\s+(?P<seconds>\d+)s\s+1s/step\s+-\s+loss:\s*(?P<loss>\d+\.\d+)\s+-\s+accuracy:\s*(?P<accuracy>\d+\.\d+)\s+-\s+val_loss:\s*(?P<val_loss>\d+\.\d+)\s*-\s*val_accuracy:\s*(?P<val_accuracy>\d+\.\d+)\r?\nEpoch\s+(?P<epoch_num>\d+):\s*saving model to\s*(?P<epoch_file>.*)', re.M)
with open(filepath, 'r') as file:
    results = [ match.groupdict() for match in regular_exp.finditer(file.read()) ]

Python proof online, output:

[
    {'iteration': '00', 'seconds': '1802', 'loss': '0.3430', 'accuracy': '0.8753', 'val_loss': '0.1110', 'val_accuracy': '0.9670', 'epoch_num': '00002', 'epoch_file': '/root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_02_0.069291627_0.98.h5'}, 
    {'iteration': '1500/1500', 'seconds': '1679', 'loss': '0.0849', 'accuracy': '0.9739', 'val_loss': '0.0693', 'val_accuracy': '0.9807', 'epoch_num': '00003', 'epoch_file': '/root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_03_0.055876694_0.98.h5'}, 
    {'iteration': '1500/1500', 'seconds': '1674', 'loss': '0.0742', 'accuracy': '0.9791', 'val_loss': '0.0559', 'val_accuracy': '0.9845', 'epoch_num': '00004', 'epoch_file': '/root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_04_0.053867317_0.99.h5'},
    {'iteration': '1500/1500', 'seconds': '1671', 'loss': '0.0565', 'accuracy': '0.9841', 'val_loss': '0.0539', 'val_accuracy': '0.9850', 'epoch_num': '00005', 'epoch_file': '/root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_05_0.053266536_0.99.h5'}
]
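To get from those dicts to the CSV the question asks for, here is a minimal sketch using `csv.DictWriter` (the file name `metrics.csv` and the inline two-epoch sample are stand-ins; in practice you would read the real log file as shown above):

```python
import csv
import re

# Two epochs of the sample log from the question, inlined for illustration
text = """1500/1500 [==============================] - 1679s 1s/step - loss: 0.0849 - accuracy: 0.9739 - val_loss: 0.0693 - val_accuracy: 0.9807
Epoch 00003: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_03_0.055876694_0.98.h5
1500/1500 [==============================] - 1674s 1s/step - loss: 0.0742 - accuracy: 0.9791 - val_loss: 0.0559 - val_accuracy: 0.9845
Epoch 00004: saving model to /root/data-cache/data/tmp/models/ota-cfo-10k_20200527-001913_04_0.053867317_0.99.h5"""

# Same named-group pattern as above, split across lines for readability
regular_exp = re.compile(
    r'^(?P<iteration>\d+(?:/\d+)?)\s+\[=+]\s+-\s+(?P<seconds>\d+)s\s+1s/step\s+-\s+'
    r'loss:\s*(?P<loss>\d+\.\d+)\s+-\s+accuracy:\s*(?P<accuracy>\d+\.\d+)\s+-\s+'
    r'val_loss:\s*(?P<val_loss>\d+\.\d+)\s*-\s*val_accuracy:\s*(?P<val_accuracy>\d+\.\d+)\r?\n'
    r'Epoch\s+(?P<epoch_num>\d+):\s*saving model to\s*(?P<epoch_file>.*)',
    re.M)

results = [m.groupdict() for m in regular_exp.finditer(text)]

# groupindex lists the named groups in declaration order, so the CSV
# header matches the pattern without retyping the column names
with open('metrics.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=list(regular_exp.groupindex))
    writer.writeheader()
    writer.writerows(results)
```

Since every group captures a string, the CSV values come out as text; convert with `float()`/`int()` if you need numbers downstream.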

Put those lines into test.log:

egrep -o  " loss: [0-9\.]* | accuracy: [0-9\.]* " test.log
 loss: 0.3430 
 accuracy: 0.8753 
 loss: 0.0849 
 accuracy: 0.9739 
 loss: 0.0742 
 accuracy: 0.9791 
 loss: 0.0565 
 accuracy: 0.9841 
 loss: 0.0409 
 accuracy: 0.9881 

To distill the output down to the CSV fields you asked for:

$: sed -En '/ loss: [.0-9]+ - accuracy: [.0-9]+ /{ s/^.* loss: ([.0-9]+) - accuracy: ([.0-9]+) .*$/\1,\2/; p; }' test.log
0.3430,0.8753
0.0849,0.9739
0.0742,0.9791
0.0565,0.9841
0.0409,0.9881
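If you also want the val_* columns and a header row, the same sed idea extends to four capture groups. A sketch (the heredoc stands in for the real test.log, and `metrics.csv` is a placeholder output name):

```shell
# Sample lines from the question, written to test.log for illustration
cat > test.log <<'EOF'
1500/1500 [==============================] - 1679s 1s/step - loss: 0.0849 - accuracy: 0.9739 - val_loss: 0.0693 - val_accuracy: 0.9807
1500/1500 [==============================] - 1674s 1s/step - loss: 0.0742 - accuracy: 0.9791 - val_loss: 0.0559 - val_accuracy: 0.9845
EOF

# Header row, then one CSV line per training line; -n + /p prints
# only lines where the substitution matched
echo "loss,accuracy,val_loss,val_accuracy" > metrics.csv
sed -En 's/^.* loss: ([.0-9]+) - accuracy: ([.0-9]+) - val_loss: ([.0-9]+) - val_accuracy: ([.0-9]+).*$/\1,\2,\3,\4/p' test.log >> metrics.csv
```

Note that " loss: " (with a leading space) does not match inside "val_loss:", which is why the greedy `.*` still anchors on the right field.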

There are plenty of ways to make this more robust and flexible, but it works on the sample provided.
