Tasks freeze on return

Posted 2024-10-17 06:18:14


I have a partially working Python Celery application that can run jobs on remote workers, but synchronization does not always work correctly.

After spending some time tracking this down, I realized it comes down to the workers not returning. They finish their tasks, but sometimes they simply freeze on return. Normally I can see a Task ... succeeded in $time message; in these cases the message never appears.

If I try to send a SIGQUIT (just using Ctrl+C), the worker stays frozen; I have to kill it with SIGTERM.

Any hints?

Additional notes

No task calls another task; every task is invoked by the main process.

The workflow is as follows:

async = (bar.s(...) | baz.s(...)).apply_async()   # dispatch the chain, get baz's AsyncResult
# later on...
async.parent.get()   # wait for bar's result
# later on...
async.get()          # wait for baz's result

...but the same effect persists even when bar and baz are executed without the chain. Results are not set to be ignored, and ignore_result=True is not specified anywhere. In fact, the system initially works fine and then simply gets stuck.
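For reference, here is a minimal, self-contained sketch of the kind of workflow described above. The task bodies, broker URL, and argument values are illustrative placeholders, not taken from the original project:

from celery import Celery, chain

# Hypothetical app and broker settings, standing in for the poster's real config.
app = Celery('simulation', broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def bar(x):
    # placeholder body; the real task does the actual simulation work
    return x * 2

@app.task
def baz(y):
    return y + 1

# Dispatch the chain: bar's return value is passed on to baz.
result = chain(bar.s(21), baz.s()).apply_async()

print(result.parent.get(timeout=30))  # blocks until bar's result reaches the backend
print(result.get(timeout=30))         # blocks until baz's result reaches the backend

In a working setup both get() calls return as soon as the worker logs Task ... succeeded; in the failure described here the task finishes on the worker, but the result never reaches the backend, so get() blocks indefinitely.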

The task, invoked on a job, returns the job itself.

Current configuration


The same thing happens with Redis instead of RabbitMQ.
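The exact settings are not reproduced above, so the following is only an illustrative sketch of the kind of Celery 3.1-era configuration being described (a broker plus a result backend, with results not ignored); none of the values come from the original project:

# celeryconfig.py -- illustrative placeholder values only
BROKER_URL = 'redis://localhost:6379/0'          # an 'amqp://...' RabbitMQ URL shows the same behaviour
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_IGNORE_RESULT = False                     # results are needed; ignore_result is not set anywhere
CELERY_TASK_RESULT_EXPIRES = 3600
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']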

Worker log:

Sometimes there happens to be a notification from billiard, but it does not seem related, since it only shows up occasionally.

[2014-11-25 10:47:20,995: DEBUG/MainProcess] | Worker: Starting Hub
[2014-11-25 10:47:20,996: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:20,996: DEBUG/MainProcess] | Worker: Starting Pool
[2014-11-25 10:47:21,002: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:21,004: DEBUG/MainProcess] | Worker: Starting Consumer
[2014-11-25 10:47:21,004: DEBUG/MainProcess] | Consumer: Starting Connection
[2014-11-25 10:47:22,053: INFO/MainProcess] Connected to redis://:**@...
[2014-11-25 10:47:22,053: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:22,053: DEBUG/MainProcess] | Consumer: Starting Events
[2014-11-25 10:47:22,094: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:22,094: DEBUG/MainProcess] | Consumer: Starting Mingle
[2014-11-25 10:47:22,095: INFO/MainProcess] mingle: searching for neighbors
[2014-11-25 10:47:23,152: INFO/MainProcess] mingle: all alone
[2014-11-25 10:47:23,152: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:23,152: DEBUG/MainProcess] | Consumer: Starting Gossip
[2014-11-25 10:47:23,183: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:23,184: DEBUG/MainProcess] | Consumer: Starting Tasks
[2014-11-25 10:47:23,193: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:23,194: DEBUG/MainProcess] | Consumer: Starting Control
[2014-11-25 10:47:23,223: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:23,223: DEBUG/MainProcess] | Consumer: Starting Heart
[2014-11-25 10:47:23,225: DEBUG/MainProcess] ^-- substep ok
[2014-11-25 10:47:23,225: DEBUG/MainProcess] | Consumer: Starting event loop
[2014-11-25 10:47:23,226: INFO/MainProcess] celery@... ready.
[2014-11-25 10:47:23,226: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2014-11-25 10:47:23,227: DEBUG/MainProcess] basic.qos: prefetch_count->4
[2014-11-25 10:47:24,095: INFO/MainProcess] Received task: simulation.launch[76b59c23-9035-49b9-9997-fa9d6c347a76]
[2014-11-25 10:47:24,096: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x26e6230> (args:(<all my args...
[2014-11-25 10:47:24,099: INFO/MainProcess] Received task: simulation.launch[13844d7b-90ad-427a-b7c0-4e5f4fd3aef0]
[2014-11-25 10:47:24,100: DEBUG/MainProcess] Task accepted: simulation.launch[76b59c23-9035-49b9-9997-fa9d6c347a76] pid:13997
[2014-11-25 10:47:24,102: INFO/MainProcess] Received task: simulation.launch[0e3a13ac-845e-47ab-b969-88931a7a4397]

<follows output of the task being executed. Then the task returns>

<silence for a long time>

[2014-11-25 10:54:32,244: ERROR/Worker-1] Pool process <Worker(Worker-1, started daemon)> error: OSError(32, 'Broken pipe')
Traceback (most recent call last):
  File "/home/testing/.local/lib/python2.7/site-packages/billiard/pool.py", line 289, in run
    sys.exit(self.workloop(pid=pid))
  File "/home/testing/.local/lib/python2.7/site-packages/billiard/pool.py", line 373, in workloop
    put((READY, (job, i, (False, einfo), inqW_fd)))
OSError: [Errno 32] Broken pipe

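One way to observe the hang from the caller side, instead of blocking forever in get(), is to pass a timeout. This is only a diagnostic sketch using the standard AsyncResult API; the 30-second timeout is arbitrary, and result refers to the chain handle from the sketch above:

from celery.exceptions import TimeoutError

try:
    value = result.get(timeout=30)   # raises TimeoutError instead of hanging forever
except TimeoutError:
    # The worker log says the task finished, yet the result never arrived at the backend.
    print('still waiting after 30s, state=%s' % result.state)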