Excluding unwanted URLs (such as comment pages)


I'm using Scrapy to crawl all the pages of a site, but my current rules still let unwanted URLs through: besides the main post URLs, I also pick up comment links such as "http://www.example.com/some-article/comment-page-1". What can I add to my rules to exclude these unwanted items? Here is my current code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something with the matched page
        pass

1 Answer

SgmlLinkExtractor has an optional argument named deny; a link matches the rule only when the allow regex matches it and the deny regex does not.

Example from the docs:

rules = (
    # Extract links matching 'category.php' (but not matching 'subsection.php')
    # and follow links from them (since no callback means follow=True by default).
    Rule(SgmlLinkExtractor(allow=(r'category\.php', ), deny=(r'subsection\.php', ))),

    # Extract links matching 'item.php' and parse them with the spider's method parse_item
    Rule(SgmlLinkExtractor(allow=(r'item\.php', )), callback='parse_item'),
)

Alternatively, you could simply check that the URL does not contain the word comment.
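
Building on that, here is a minimal sketch of the spider from the question with a deny pattern added. It assumes the comment pages all follow the /comment-page-N scheme shown in the question; a broader pattern such as r'comment' would implement the "does not contain the word comment" check instead.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        # Follow numeric links, but drop any URL containing a
        # comment-page segment (e.g. /some-article/comment-page-1).
        # The deny regex is an assumption based on the example URL in
        # the question; widen it to r'comment' to exclude any URL
        # containing that word.
        Rule(SgmlLinkExtractor(allow=[r'/\d+'], deny=[r'comment-page-\d+']),
             follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+'], deny=[r'comment-page-\d+']),
             callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something with the matched page
        pass

Note that in current Scrapy releases the scrapy.contrib modules have been removed; the equivalent import is from scrapy.linkextractors import LinkExtractor, which accepts the same allow and deny arguments.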
