How to fill missing values with NA when reading a text file into a Python list

Posted on 2024-06-26 13:38:58


I have a dataset in a raw text file (it is a log file). I am reading this text file line by line to build a Python list, and from that list I will create a dataframe with PySpark. As you can see in the dataset, some values are missing in various columns, and I want to fill them with "NA". Here is a sample of the dataset; a missing value can occur in any column, and the columns are separated by spaces.

==============================================
empcode   Emnname   Date       DESC
12d      sf        2018-02-06      dghsjf  
asf2     asdfw2    2018-02-16      fsfsfg  
dsf21    sdf2      2016-02-06      sdgfsgf
sdgg     dsds      dkfd-sffddfdf   aaaa
dfd      gfg       dfsdffd         aaaa
df                 dfdf            efef
4fr                                freff         
----------------------------------------------

Code:

path="something/demo.txt"
EndStr="----------------------------------------------"
FilterStr="=============================================="
findStr="empcode   Emnname"

def PrepareList(findStr):
  with open(path) as f:
    out=[]
    for line in f:
        if line.rstrip()==Findstr:
            #print(line)
            tmp=[]
            tmp.append(re.sub("\s+",",",line.strip()))
            #print(tmp)
            for line in f:
                if line.rstrip()==EndStr:
                    out.append(tmp)
                    break

                tmp.append(re.sub("\s+",",",line.strip()))
            return (tmp)
  f.close()  
LstEmp=[]
LstEmp=prepareDataset("empcode   Emnname   Dept   DESC")
print(LstEmp)

My output is:

['empcode,Emnname,Date,DESC', 
 '12d,sf,2018-02-06,dghsjf',
 'asf2,asdfw2,2018-02-16,fsfsfg',
 'dsf21,sdf2,2016-02-06,sdgfsgf',
 'sdgg,dsds,dkfd-sffddfdf,aaaa',
 'dfd,gfg,dfsdffd,aaaa',
 'df,dfdf,efef',
 '4fr,freff']  

Expected output:

['empcode,Emnname,Date,DESC', 
 '12d,sf,2018-02-06,dghsjf',
 'asf2,asdfw2,2018-02-16,fsfsfg',
 'dsf21,sdf2,2016-02-06,sdgfsgf',
 'sdgg,dsds,dkfd-sffddfdf,aaaa',
 'dfd,gfg,dfsdffd,aaaa',
 'df,NA,dfdf,efef',
 '4fr,NA,NA,freff']

2 Answers

As you can see from the dataset, the text in each field has a variable length, while the fields themselves start and end at fixed positions. This typically happens when the fields are tab separated.

==============================================
empcode   Emnname   Date       DESC
12d      sf        2018-02-06      dghsjf  
asf2     asdfw2    2018-02-16      fsfsfg  
dsf21    sdf2      2016-02-06      sdgfsgf

If that is the case, the following approach should work:

for line in f:
    if line.rstrip() == findStr:
        tmp = []
        # replace single tabs with commas, so that empty fields are preserved
        tmp.append(re.sub("\t", ",", line.strip()))
        for line in f:
            if line.rstrip() == EndStr:
                out.append(tmp)
                break
            tmp.append(re.sub("\t", ",", line.strip()))
        return tmp

I have replaced the \s in your code with \t and removed the +. In a Python regular expression, the + quantifier matches one or more occurrences of the pattern before it, so in this case \s+ stretches from the end of one field all the way to the start of the next, and an empty field in between is lost.
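A small sketch of the difference, using a made-up tab-separated row whose second field is empty (the sample line is only for illustration):

import re

line = "df\t\tdfdf\tefef"           # hypothetical row: the second field is empty

print(re.sub(r"\s+", ",", line))    # 'df,dfdf,efef'  -> the empty field is swallowed
print(re.sub(r"\t", ",", line))     # 'df,,dfdf,efef' -> the empty field is preserved

The empty slot between the two commas can then be turned into NA, for example with ','.join(v or 'NA' for v in row.split(',')).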

Alternatively, if the input file is not tab separated, you can extract the field values assuming fixed-length fields and then apply strip():

fields = [ (0,10),
           (10, 20),
           (20,36),
           (36,100) # Assuming last field will not cross this length
         ]
field_values = [ line[ x[0]:x[1] ].strip() for x in fields ]
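Building on that snippet, here is a minimal sketch of how each data line could then be turned into a comma-separated row with "NA" in the empty columns; the boundaries in fields are taken from the snippet above and have to be adjusted to the real file layout:

fields = [(0, 10), (10, 20), (20, 36), (36, 100)]   # assumed boundaries, adjust to the file

def to_csv_row(line):
    # slice the fixed-width columns, strip the padding, and use "NA" for empty slices
    values = [line[start:end].strip() for start, end in fields]
    return ",".join(value if value else "NA" for value in values)

# a row whose second column is blank then comes out as something like 'df,NA,dfdf,efef'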

Here I have tried to follow a generic approach in which you do not have to hard-code the column spans. To return a dataframe you can use pd.read_csv together with StringIO. Please adjust the path to your file location. This code is extended from your own code to make it easier to follow; there are more efficient ways to write the same logic.

import re
import io
import pandas as pd

path = "/home/clik/clik/demo.txt"
EndStr = "               "
FilterStr = "=================="
FindStr = "empcode   Emnname"


def match(sp1, sp2):
    # score how well a token span lines up with a header-column span; higher is better
    disjunct = max(sp1[0] - sp2[1], sp2[0] - sp1[1])
    if disjunct >= 0:
        # spans do not overlap: score by the (negative) distance between their centres
        return -abs((sp1[0] + sp1[1]) / 2.0 - (sp2[0] + sp2[1]) / 2.0)
    # spans overlap: a ratio between 0 and 1, larger means better alignment
    return float(disjunct) / min(sp1[0] - sp2[1], sp2[0] - sp1[1])


def PrepareList():
    with open(path) as f:
        out = []
        for line in f:
            if line.rstrip().startswith(FindStr):
                tmp = []
                # spans of the header column names (runs of two or more non-space characters)
                col_spans = [m.span() for m in re.finditer(r"[^\s][^\s]+", line)]
                tmp.append(re.sub(r"\s+", ",", line.strip()))
                for line in f:
                    if line.rstrip().startswith(EndStr):
                        out.append(tmp)
                        break
                    row = [None] * len(col_spans)
                    for m in re.finditer(r"[^\s][^\s]+", line):
                        # assign each token to the header column whose span matches best
                        colmatches = [match(m.span(), cspan) for cspan in col_spans]
                        max_index = max(enumerate(colmatches), key=lambda e: e[1])[0]
                        row[max_index] = m.group() if row[max_index] is None else (row[max_index] + ' ' + m.group())
                    tmp.append(','.join('NA' if e is None else e for e in row))
                # for a pandas dataframe:
                # return pd.read_csv(io.StringIO('\n'.join(tmp)))

                # for a list of tuples:
                return [tuple(r.split(',')) for r in tmp]
                # for a list of comma-separated strings, return tmp instead


LstEmp = PrepareList()

To convert the list of tuples into a PySpark dataframe, here is a tutorial: http://bigdataplaybook.blogspot.in/2017/01/create-dataframe-from-list-of-tuples.html
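For completeness, a minimal sketch of that last step, assuming a SparkSession is available and that the first tuple returned by PrepareList() is the header row:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

rows = PrepareList()                 # list of tuples; the first one holds the column names
header, data = rows[0], rows[1:]

df = spark.createDataFrame(data, schema=list(header))
df.show()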
