Celery, Django and Scrapy: import error when importing from a Django app


I am using Celery (and django-celery) to allow a user to schedule periodic scrapes through the Django admin. This is part of a larger project, but I have boiled the issue down to a minimal example.

First, celery/celerybeat run fine when not daemonized: if I run them with celery -A evofrontend worker -B -l info from the Django project dir, I see no issues.

However, when I run celery/celerybeat as daemons, I get a strange import error:

[2016-01-06 03:05:12,292: ERROR/MainProcess] Task evosched.tasks.scrapingTask[e18450ad-4dc3-47a0-b03d-4381a0e65c31] raised unexpected: ImportError('No module named myutils',)
Traceback (most recent call last):
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "evosched/tasks.py", line 35, in scrapingTask
    cs = CrawlerScript('TestSpider', scrapy_settings)
  File "evosched/tasks.py", line 13, in __init__
    self.crawler = CrawlerProcess(scrapy_settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/crawler.py", line 209, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/crawler.py", line 115, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/crawler.py", line 296, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 30, in from_settings
    return cls(settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 21, in __init__
    for module in walk_modules(name):
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "retail/spiders/Retail_spider.py", line 16, in <module>
ImportError: No module named myutils

even though the spider adds the relevant Django project dir to sys.path before importing from the app.

My gut feeling is that this may be caused by a "circular import" during initialization, but I am not sure (see here for notes on the same error).

Celery daemon config

For completeness, the celeryd and celerybeat config scripts are:

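(The /etc/default/celeryd block did not survive in this copy of the post; the sketch below mirrors the generic example and the celerybeat config that follows, so the exact values are assumptions apart from the ones the answer refers to, such as the chdir and the virtualenv celery bin.)

# /etc/default/celeryd  (plausible reconstruction, values assumed)
CELERYD_NODES="worker1"
CELERY_BIN="/home/lee/Desktop/pyco/evo-scraping-min/venv/bin/celery"
CELERY_APP="evofrontend"
CELERYD_CHDIR="/home/lee/Desktop/pyco/evo-scraping-min/evofrontend/"
CELERYD_USER="lee"
CELERYD_GROUP="lee"

# Django settings module
export DJANGO_SETTINGS_MODULE="evofrontend.settings"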

and

# /etc/default/celerybeat 
CELERY_BIN="/home/lee/Desktop/pyco/evo-scraping-min/venv/bin/celery"

CELERY_APP="evofrontend"
CELERYBEAT_CHDIR="/home/lee/Desktop/pyco/evo-scraping-min/evofrontend/"

# Django settings module
export DJANGO_SETTINGS_MODULE="evofrontend.settings"

They are largely based on the generic ones, with the Django settings module added, and they use the celery bin from my virtualenv rather than the system one.

I am also using the init.d scripts, which are the generic ones.

Project structure

The project lives under /home/lee/Desktop/pyco/evo-scraping-min. All files under it are owned by lee:lee. The dir contains both a Scrapy project (evo-retail) and a Django project (evofrontend), and the full tree structure looks like

├── evofrontend
│   ├── db.sqlite3
│   ├── evofrontend
│   │   ├── celery.py
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── evosched
│   │   ├── __init__.py
│   │   ├── myutils.py
│   │   └── tasks.py
│   └── manage.py
└── evo-retail
    └── retail
        ├── logs
        ├── retail
        │   ├── __init__.py
        │   ├── settings.py
        │   └── spiders
        │       ├── __init__.py
        │       └── Retail_spider.py
        └── scrapy.cfg

Django project relevant files

Now the relevant files. evofrontend/evofrontend/celery.py looks like

# evofrontend/evofrontend/celery.py
from __future__ import absolute_import
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'evofrontend.settings')

from django.conf import settings

app = Celery('evofrontend')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

The potentially relevant settings in the Django settings file, evofrontend/evofrontend/settings.py, are

import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
PROJECT_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))

INSTALLED_APPS = (
    ...
    'djcelery',
    'evosched',
)

# Celery settings
BROKER_URL = 'amqp://guest:guest@localhost//'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/London'
CELERYD_MAX_TASKS_PER_CHILD = 1  # Each worker is killed after one task, this prevents issues with reactor not being restartable
# Use django-celery backend database
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
# Set periodic task
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
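
For reference, here is a sketch of how a periodic scrape can be registered with django-celery's database scheduler programmatically (the question does this through the Django admin instead; the task label and interval below are illustrative assumptions):

# Sketch: register a periodic task with djcelery's DatabaseScheduler
from djcelery.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(every=1, period='hours')
PeriodicTask.objects.get_or_create(
    name='Periodic retail scrape',          # hypothetical label
    task='evosched.tasks.scrapingTask',     # dotted path to the shared_task
    interval=schedule,
)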

The tasks.py in the scheduling app looks like (it just launches the Scrapy spider with the relevant settings after changing dir):

# evofrontend/evosched/tasks.py
from __future__ import absolute_import
from celery import shared_task
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
import os
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from django.conf import settings as django_settings


class CrawlerScript(object):
    def __init__(self, spider, scrapy_settings):
        self.crawler = CrawlerProcess(scrapy_settings)
        self.spider = spider  # just a string

    def run(self, **kwargs):
        # Pass the kwargs (usually command line args) to the crawler
        self.crawler.crawl(self.spider, **kwargs)
        self.crawler.start()


@shared_task
def scrapingTask(**kwargs):

    logger.info("Start scrape...")

    # scrapy.cfg file here pointing to settings...
    base_dir = django_settings.BASE_DIR
    os.chdir(os.path.join(base_dir, '..', 'evo-retail/retail'))
    scrapy_settings = get_project_settings()

    # Run crawler
    cs = CrawlerScript('TestSpider', scrapy_settings)
    cs.run(**kwargs)
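
Celerybeat kicks this task off via the database scheduler, but it can also be queued by hand with the standard Celery API (an illustrative call, useful for testing the worker without waiting for the schedule):

from evosched.tasks import scrapingTask
scrapingTask.delay()  # queue one scrape immediately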

evofrontend/evosched/myutils.py contains nothing but (in this minimal example):

# evofrontend/evosched/myutils.py
SCRAPY_XHR_HEADERS = 'SOMETHING'

Scrapy project relevant files

The Scrapy project settings file, in full, looks like

# evo-retail/retail/retail/settings.py
BOT_NAME = 'retail'

import os
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

SPIDER_MODULES = ['retail.spiders']
NEWSPIDER_MODULE = 'retail.spiders'

and (in this minimal example) the spider is simply

# evo-retail/retail/retail/spiders/Retail_spider.py
from scrapy.conf import settings as scrapy_settings
from scrapy.spiders import Spider
from scrapy.http import Request
import sys
import django
import os
import posixpath

# Bootstrap Django from inside the Scrapy project: put the Django project
# dir on sys.path, point at its settings module and initialize the apps.
SCRAPY_BASE_DIR = scrapy_settings['PROJECT_ROOT']
DJANGO_DIR = posixpath.normpath(os.path.join(SCRAPY_BASE_DIR, '../../../', 'evofrontend'))
sys.path.insert(0, DJANGO_DIR)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", 'evofrontend.settings')
django.setup()

# This is the import that fails under the daemonized worker
from evosched.myutils import SCRAPY_XHR_HEADERS

class RetailSpider(Spider):

    name = "TestSpider"

    def start_requests(self):
        print SCRAPY_XHR_HEADERS
        yield Request(url='http://www.google.com', callback=self.parse)

    def parse(self, response):
        print response.url
        return []
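
As a sanity check, the spider can also be run outside Celery entirely with the standard Scrapy CLI, which exercises the same Django bootstrapping without the daemon environment:

cd evo-retail/retail
scrapy crawl TestSpider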

EDIT:

I found through much trial and error that if the app I am trying to import from is in my Django INSTALLED_APPS setting, then the import fails, but if I remove the app from there the import error no longer occurs (e.g. removing evosched from INSTALLED_APPS lets the import in the spider go through fine). Obviously not a solution, but it may be a clue.

EDIT 2

I put a print of sys.path in the spider just before the failing import, and the result is

/home/lee/Desktop/pyco/evo-scraping-min/evofrontend/../evo-retail/retail 
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/plat-x86_64-linux-gnu
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/lib-tk
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/lib-old  
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/lib-dynload
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages
/home/lee/Desktop/pyco/evo-scraping-min/evofrontend 
/home/lee/Desktop/pyco/evo-scraping-min/evo-retail/retail

EDIT 3

If I import evosched and then print dir(evosched), I see "tasks", and if I choose to include such a file I can also see "models", so importing from the app's models is actually possible. But I do not see "myutils". Even from evosched import myutils fails, and it also fails if the statement is put inside a function rather than at module level (which I thought might get around a circular import issue...). A plain import evosched works... and it may be that import evosched.myutils would work too. Haven't tried it yet...
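
One way to pin down which evosched package is actually being picked up (a diagnostic sketch, not from the original post) is to print its origin just before the failing import:

import evosched
print evosched.__file__  # which evosched/__init__.py is on the path?
print dir(evosched)      # 'myutils' missing here is the symptom above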


1 Answer

It looks like the celery daemon is being run using the system Python rather than the Python binary in your virtualenv. You need to use

# Python interpreter from environment. 
ENV_PYTHON="$CELERYD_CHDIR/env/bin/python"

as described here, to tell celeryd to run using the Python in the virtualenv.
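
One generic way to confirm which interpreter the daemonized workers actually picked up (not part of the original answer) is to inspect the running processes:

ps aux | grep celery
# the command column should show the virtualenv's python,
# not /usr/bin/python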
