I have a JSON file containing data nested up to 10 levels deep. What is the best way to extract data from it?
[{'next': '?page=2',
'previous': None,
'results': [{'Accounts': [{'id': 17934,
'logo': '//112233.contify.com/client_data/custom_tag/logo/xsl0bzxk_400x400-17934.jpg',
'name': 'Tata Sons Ltd'}],
'Channels': [{'id': 17,
'logo': '//112233.contify.com/images/tags-ico.png',
'name': 'News and Other Websites'}],
'Content Types': [{'id': 3,
'logo': '//112233.contify.com/images/tags-ico.png',
'name': 'News Articles'}],
'Duns Number': [{'id': 18847,
'logo': '//112233.contify.com/images/tags-ico.png',
'name': '650048044'}],
'Sources': [{'id': 68636,
'logo': '//112233.contify.com/images/tags-ico.png',
'name': 'Factiva'}],
'Triggers': [{'id': 15174,
'logo': '//112233.contify.com/images/tags-ico.png',
'name': 'Cost Reduction'}],
'attachments': [],
'duplicate_count': 1,
'duplicates': [{'id': 20032653838695,
'source_name': 'Factiva',
'source_url': 'https://global.factiva.com/en/du/article.asp?accountid=9ERN000600&accessionno=TOIKOC0020200326eg3q00024',
'summary': 'As a result of the current air '
'travel shutdown, Tata Sons may be '
'strategically compelled to operate '
'just one rather than two airlines - '
'AirAsia India and Vistara, and Air '
"India's privatisation is unlikely "
'to proceed in FY21.',
'title': "'IndiGo, SpiceJet loss may hit "
"$1.5bn'"}],
'id': 20032653838666,
'image_url': '',
'previews': [],
'pub_date': '2020-03-26T00:00:00Z',
'source_name': 'Factiva',
'source_url': 'https://global.factiva.com/en/du/article.asp?accountid=9ERN000600&accessionno=TOICHE0020200326eg3q0002c',
'summary': 'AIR TRAVEL SHUTDOWNAs a result of the current air '
'travel shutdown, Tata Sons may be strategically '
'compelled to operate just one rather than two '
'airlines - AirAsia India and Vistara, and Air '
"India's privatisation is unlikely to proceed in "
'FY21.',
'title': 'IndiGo, SpiceJet combined loss could hit $1.5bn: '
'Report'},
{'Accounts': [{'id': 15159,
'logo': '//112233.contify.com/client_data/custom_tag/logo/thyssenkrupp-ag-15159.png',...........
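Because the payload nests lists of dicts many levels deep, one generic approach is a recursive search that yields every value stored under a given key, however deep it sits. This is a sketch, not specific to the exact schema above; `find_key` is a hypothetical helper name:

```python
def find_key(obj, key):
    """Recursively yield every value stored under `key`, at any depth."""
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                yield v
            yield from find_key(v, key)  # keep descending into the value
    elif isinstance(obj, list):
        for item in obj:
            yield from find_key(item, key)

# Tiny inline sample mirroring the shape of the response above
sample = {"results": [{"title": "A", "duplicates": [{"title": "B"}]}]}
print(list(find_key(sample, "title")))  # finds both the article title and the nested duplicate's title
```

This trades precision for convenience: it returns every `title` anywhere in the tree, including ones inside `duplicates`, so filter afterwards if only top-level articles are wanted.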
I am able to fetch the data one field at a time with a static lookup like this:
import json

with open("response.json", "r") as file:
    info = json.load(file)  # info is a list of page dicts

for i in info:
    try:
        print(i["results"][0]["title"])
    except (KeyError, IndexError):  # a page may have an empty results list
        print("")
Output:
IndiGo, SpiceJet combined loss could hit $1.5bn: Report
German ThyssenKrupp/ Cuts 3 thou jobs at steel business
Sameer Aggarwal Walmart India CEO
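To walk every article instead of only `results[0]`, the lookup can be extended with one loop per nested list. A minimal sketch, using a small inline sample shaped like the response above (in practice `pages` would be the object returned by `json.load`):

```python
# Inline sample mirroring the structure in the question;
# in practice this comes from json.load(open("response.json"))
pages = [
    {
        "next": "?page=2",
        "previous": None,
        "results": [
            {
                "id": 20032653838666,
                "title": "IndiGo, SpiceJet combined loss could hit $1.5bn: Report",
                "Accounts": [{"id": 17934, "name": "Tata Sons Ltd"}],
            },
            {"id": 123, "title": "Second article", "Accounts": []},
        ],
    },
]

titles = []
for page in pages:                           # outer list: one dict per page
    for article in page.get("results", []):  # each page holds a list of articles
        titles.append(article["title"])
        for account in article.get("Accounts", []):  # nested tag lists
            print(article["id"], "->", account["name"])

print(titles)
```

Using `.get("results", [])` rather than `["results"]` makes the empty-page case (like row 4 in the normalized output) a no-op instead of an exception.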
Edit 1: I have normalized the JSON using the script below:
import pandas as pd

df = pd.json_normalize(data)  # data: the list loaded from response.json
print(df['results'])
Output:
0 [{'source_name': 'Factiva', 'attachments': [],...
1 [{'source_name': 'Factiva', 'attachments': [],...
2 [{'source_name': 'Factiva', 'attachments': [],...
3 [{'source_name': 'Devdiscourse', 'attachments'...
4 []
5 [{'source_name': 'Factiva', 'attachments': [],...
6 [{'source_name': 'Factiva', 'attachments': [],...
7 [{'source_name': 'The Globe and Mail', 'attach...
Name: results, dtype: object
How can I loop over this and get all of the data? Please help me understand Python.
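One way to get one row per article, rather than a column of unexpanded lists, is `pd.json_normalize` with `record_path`: it explodes each `results` list into rows, while `meta` carries the page-level fields along. A sketch with a small inline sample shaped like the data above:

```python
import pandas as pd

# Inline sample shaped like the response: a list of pages,
# each holding a "results" list of article dicts
data = [
    {"next": "?page=2", "previous": None,
     "results": [{"id": 1, "title": "A", "source_name": "Factiva"},
                 {"id": 2, "title": "B", "source_name": "Devdiscourse"}]},
    {"next": None, "previous": "?page=1",
     "results": [{"id": 3, "title": "C", "source_name": "Factiva"}]},
]

# record_path="results": one row per element of every results list
# meta=["next", "previous"]: repeat the page-level fields on each row
df = pd.json_normalize(data, record_path="results", meta=["next", "previous"])
print(df[["id", "title", "source_name", "next"]])
```

Deeper levels such as `Accounts` stay as list-valued cells with this call; they can be flattened further with another `json_normalize` pass, or with `df.explode("Accounts")`.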