Matching CSV values held in a variable against a CSV file in Python

Posted 2024-10-02 00:29:53


Querying an API, I receive a huge JSON response containing many more attributes.

I am trying to parse only certain fields from the response into comma-separated CSV format.

    >>> import json
    >>> resp = { "status":"success", "msg":"", "data":[ { "website":"https://www.blahblah.com", "severity":"low", "location":"unknown", "asn_number":"AS4134 Chinanet", "longitude":121.3997000000, "epoch_timestamp":1530868957, "id":"c1e15eccdd1f31395506fb85" }, { "website":"https://www.jhonedoe.co.uk/sample.pdf", "severity":"low", "location":"unknown", "asn_number":"AS4134 Chinanet", "longitude":120.1613998413, "epoch_timestamp":1530868957, "id":"933bf229e3e95a78d38223b2" } ] }
    >>> response = json.loads(json.dumps(resp))
    >>> KEYS = 'website', 'asn_number', 'severity'
    >>> for attribute in response['data']:
    ...     csv_response = ','.join(attribute[key] for key in KEYS)
    ...     print(csv_response)

Printing csv_response gives the values of the queried keys:

https://www.blahblah.com,AS4134 Chinanet,low
https://www.jhonedoe.co.uk/sample.pdf,AS4134 Chinanet,low

Now, I have a CSV file in the /tmp/ directory:

/tmp$ cat 08_july_2018.csv
http://download2.freefiles-10.de,AS24940 Hetzner Online GmbH,high
https://www.jhonedoe.co.uk/sample.pdf,AS4134 Chinanet,low
http://download2.freefiles-11.de,AS24940 Hetzner Online GmbH,high
www.solener.com,AS20718 ARSYS INTERNET S.L.,low
https://www.blahblah.com,AS4134 Chinanet,low
www.telewizjairadio.pl,AS29522 Krakowskie e-Centrum Informatyczne JUMP Dziedzic,high 

I am trying to check whether the values obtained from the JSON response (csv_response) are present in the /tmp/08_july_2018.csv file.

If any of the csv_response values matches a line in 08_july_2018.csv, I want to mark the condition as "passed".

Any suggestions on how to match the CSV values held in the variable against the file in the /tmp/ directory and mark the condition as passed?
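
(For reference, the kind of membership check being asked about could be sketched with just the standard library. This is only an illustrative sketch, not the accepted approach below; it reuses the response dict and KEYS from the snippet above and the /tmp/08_july_2018.csv file shown above.)

    # Load the file rows once; each line is already "website,asn,severity".
    # strip() removes trailing whitespace/newlines so comparisons are exact.
    with open('/tmp/08_july_2018.csv') as fh:
        known_rows = {line.strip() for line in fh if line.strip()}

    # Rebuild the comma-separated rows from the JSON response and test membership.
    for attribute in response['data']:
        csv_response = ','.join(attribute[key] for key in KEYS)
        if csv_response in known_rows:
            print('passed:', csv_response)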


Tags: csv, https, com, json, response, www, website, tmp
1 Answer

#1 · Posted 2024-10-02 00:29:53

You can use pandas (the code below comes from a Jupyter notebook). Pandas gives you a lot of flexibility for matching columns in a CSV.

You need to add a header row to the CSV file you are going to read, so add:

website,asn,severity

to the 08_july_2018.csv file.
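
If editing the file by hand is not convenient, pandas can also be given the column names at read time. A small sketch of that alternative (not part of the original answer; it uses the same column names as the header above):

import pandas as pd

# header=None tells read_csv the file has no header row, and names=
# supplies the column names instead of editing the file on disk.
t2 = pd.read_csv('/tmp/08_july_2018.csv', header=None,
                 names=['website', 'asn', 'severity'])
t2.set_index('website', inplace=True)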

import pandas as pd
import json

resp = { "status":"success", "msg":"",
         "data":[ { "website":"https://www.blahblah.com", 
                    "severity":"low",
                    "location":"unknown",
                    "asn_number":"AS4134 Chinanet", 
                    "longitude":121.3997000000, 
                    "epoch_timestamp":1530868957, 
                    "id":"c1e15eccdd1f31395506fb85" },
                  { "website":"https://www.jhonedoe.co.uk/sample.pdf", 
                    "severity":"low",
                    "location":"unknown",
                    "asn_number":"AS4134 Chinanet", 
                    "longitude":120.1613998413, 
                    "epoch_timestamp":1530868957, 
                    "id":"933bf229e3e95a78d38223b2" } ] }


t1 = pd.DataFrame(resp['data'])
t1.set_index('website', inplace=True)
print(t1) 

t2 = pd.read_csv('/tmp/08_july_2018.csv')
t2.set_index('website', inplace=True)
print(t2) 

# You want to check whether records from one source are present in the
# other. You can do that by querying the combined dataframe (t3), which
# aligns both tables on the shared key (the 'website' index). Selecting
# all rows with equal severity gives you the matching records; extend or
# modify this query for the fields you want to match on. The columns of
# the first dataframe get the suffix _1, those of the second _2, so
# columns that share a name in the original data can be told apart.
# If you want all rows with equal severity, the query is:
#   (t3['severity_1'] == t3['severity_2'])
# If you only want rows with 'low' severity:
#   (t3['severity_1'] == t3['severity_2']) & (t3['severity_1'] == 'low')
t3 = pd.concat([t1.add_suffix('_1'), t2.add_suffix('_2')], axis=1)
# Keep the asn_number of rows whose severity matches in both sources;
# rows without a match get NaN and are removed by dropna below.
t3['MTCH'] = t3[(t3['severity_1'] == t3['severity_2'])]['asn_number_1']
t3.dropna(inplace=True)
print(t3['MTCH'].values)

This gives:

...
['AS4134 Chinanet' 'AS4134 Chinanet']

Or iterate over all matching records and pick the fields you need from each row:

for i, row in t3[(t3['severity_1'] == t3['severity_2'])].iterrows():
    print(i, row['severity_2'])  # add other fields from t3
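
To turn the match into the pass/fail condition the question asks for, one possible follow-on (a sketch building on the t3 dataframe above; the "passed"/"failed" wording is the question's, not part of the original answer):

# A row only survives the severity comparison when the same website appears
# in both the JSON response and the CSV file, so a non-empty selection means
# at least one record matched and the condition can be marked as passed.
matched = t3[(t3['severity_1'] == t3['severity_2'])]
print('passed' if not matched.empty else 'failed')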
