I'm having a problem with some code, and I believe it's due to QuerySet overhead. I'm looking for a cheaper (time-wise) way to accomplish this.
log.info("Getting Users")
employees = Employee.objects.filter(is_active=True)
log.info("Have Users")
if opt.supervisor:
    if opt.hierarchical:
        people = getSubs(employees, " ".join(args))
    else:
        people = employees.filter(supervisor__name__icontains=" ".join(args))
else:
    log.info("Filtering Users")
    people = employees.filter(name__icontains=" ".join(args)) | \
             employees.filter(unix_accounts__username__icontains=" ".join(args))
    log.info("Filtered Users")
log.info("Processing data")
np = []
for person in people:
    unix, p4, bugz = "No", "No", "No"
    if len(person.unix_accounts.all()): unix = "Yes"
    if len(person.perforce_accounts.all()): p4 = "Yes"
    if len(person.bugzilla_accounts.all()): bugz = "Yes"
    if person.cell_phone != "": exphone = fixphone(person.cell_phone)
    elif person.other_phone != "": exphone = fixphone(person.other_phone)
    else: exphone = ""
    np.append({'name': person.name,
               'office_phone': fixphone(person.office_phone),
               'position': person.position,
               'location': person.location.description,
               'email': person.email,
               'functional_area': person.functional_area.name,
               'department': person.department.name,
               'supervisor': person.supervisor.name,
               'unix': unix, 'perforce': p4, 'bugzilla': bugz,
               'cell_phone': exphone,  # already passed through fixphone above
               'fax': fixphone(person.fax),
               'last_update': person.last_update.ctime()})
log.info("Have data")
Now, this produces a log like the following:
[log output lost in transcription] As you can see, simply iterating over the data takes more than 30 seconds. That's far too expensive. Can anyone show me a more efficient way to do this? I thought that doing the first filter would make things easier, but it seems to have no effect. I'm at a loss here.
Thanks.
To be clear, this is only about 1,500 employees, so not that many!
The accepted answer's gist: test for the accounts with an `IS NULL` condition on the related field in the QuerySet (i.e. in SQL), rather than with the three `len()` calls in the loop.