Creating a nested dictionary in Python from multiple data sources

Posted 2024-10-04 11:27:48


Complete rewrite, since the original post wasn't clear. What I'm trying to do is parse some data line by line and build a dictionary. I suspect there is a better way to organize this data. My first attempt didn't account for a number of things, so I came up with the approach below: I walk through the service-policy output line by line, group the data by interface and policy name, and pull out the queue depth, total drops, and no-buffer drops. The problem is that it doesn't account for additional policies, so the data from the first policy gets overwritten.

The service-policy output:

GigabitEthernet11/1

Service-policy output: Gi11_1

Counters last updated 7191104 seconds ago

Class-map: class-default (match-any)
  0 packets, 0 bytes
  30 second offered rate 0000 bps, drop rate 0000 bps
  Match: any 
  Queueing
  queue limit 33025 packets
  (queue depth/total drops/no-buffer drops) 0/0/0
  (pkts output/bytes output) 0/0
  shape (average) cir 500000000, bc 2000000, be 2000000
  target shape rate 500000000

  Service-policy : child

  Counters last updated 7191104 seconds ago

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any 
      Queueing
      queue limit 33025 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      bandwidth remaining ratio 100 

# (inside a function; `re` is imported and `counts = {}` is defined above)
for ints, int_strings in zip(int_names, int_output):
    counts.setdefault(ints, {})

    for line in int_strings.splitlines():
        matchpolicy = re.search(r'(Service-policy.*)', line)
        matchdrops = re.findall(r'total drops.*', line)
        if matchpolicy:
            spolicies = matchpolicy.group(0)
            counts[ints]['Policy'] = spolicies
        if matchdrops:
            regx = re.search(r'\s(\d+)\/(\d+)\/(\d+)', line)
            counts[ints]['queue'] = int(regx.group(1))
            counts[ints]['drops'] = int(regx.group(2))
            counts[ints]['no-buffer'] = int(regx.group(3))

return counts
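The overwriting described above can be reproduced in a few lines (with made-up interface and policy names standing in for the parsed output): each new `Service-policy` match reassigns the same `'Policy'` key, so only the last policy seen on an interface survives.

```python
counts = {}
counts.setdefault('Gi11/1', {})

# First policy seen on the interface...
counts['Gi11/1']['Policy'] = 'Service-policy output: Gi11_1'
# ...is clobbered when the child policy is matched on a later line.
counts['Gi11/1']['Policy'] = 'Service-policy : child'

assert counts['Gi11/1'] == {'Policy': 'Service-policy : child'}
```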

I tried creating a dictionary with an additional level of depth, but I get a KeyError on the counts[ints][spolicies] line. From what I've read, I thought this was how nested dictionaries work, but I guess I misunderstood.

# (inside a function; `re` is imported and `counts = {}` is defined above)
for ints, int_strings in zip(int_names, int_output):
    counts.setdefault(ints, {})

    for line in int_strings.splitlines():
        matchpolicy = re.search(r'(Service-policy.*)', line)
        matchdrops = re.findall(r'total drops.*', line)
        if matchpolicy:
            spolicies = matchpolicy.group(0)
            counts[ints][spolicies] 
        if matchdrops:
            regx = re.search(r'\s(\d+)\/(\d+)\/(\d+)', line)
            counts[ints][spolicies]['queue'] = int(regx.group(1))
            counts[ints][spolicies]['drops'] = int(regx.group(2))
            counts[ints][spolicies]['no-buffer'] = int(regx.group(3))

return counts
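For reference, the KeyError happens because `counts[ints][spolicies]` on its own is a lookup, not an assignment, so the inner dict for `spolicies` is never created. Writing `counts[ints].setdefault(spolicies, {})` instead, or using `collections.defaultdict`, creates the missing level on first access. A minimal sketch with made-up interface and policy names:

```python
from collections import defaultdict

# counts[interface][policy] -> stats dict; inner levels are
# created automatically the first time they are accessed.
counts = defaultdict(lambda: defaultdict(dict))

counts['Gi11/1']['Service-policy output: Gi11_1']['drops'] = 0
counts['Gi11/1']['Service-policy : child']['drops'] = 0

# Both the parent and the child policy survive under the same
# interface key instead of the second overwriting the first.
assert list(counts['Gi11/1']) == ['Service-policy output: Gi11_1',
                                  'Service-policy : child']
```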

Either way, I assume there is a better way to organize this data so that I can navigate it more easily later. Any ideas?


Tags: data, re, output, rate, queue, line, group, policy
1 Answer

#1 · Posted 2024-10-04 11:27:48
labels = ["depth", "drops", "buffer_drops"]
values = ['0', '14996', '0', '0', '2100', '0']
keys = ['Gi1', 'Gi2']

values_grouped_by_3 = list(zip(*[iter(values)] * 3))
data = dict(zip(keys, [dict(zip(labels, vals)) for vals in values_grouped_by_3]))
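Expanded, the grouping trick works like this: `zip(*[iter(values)]*3)` passes the same iterator to `zip` three times, so each step consumes three consecutive items, chunking the flat list into triples. With the sample data above:

```python
labels = ["depth", "drops", "buffer_drops"]
values = ['0', '14996', '0', '0', '2100', '0']
keys = ['Gi1', 'Gi2']

# The same iterator object is drained three entries at a time.
triples = list(zip(*[iter(values)] * 3))
assert triples == [('0', '14996', '0'), ('0', '2100', '0')]

# One inner dict per interface key.
data = dict(zip(keys, [dict(zip(labels, t)) for t in triples]))
assert data == {
    'Gi1': {'depth': '0', 'drops': '14996', 'buffer_drops': '0'},
    'Gi2': {'depth': '0', 'drops': '2100', 'buffer_drops': '0'},
}
```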

If you want more of a tutorial and real help, then put in some effort first: post what you have tried, what you expect, and what your actual output is.
