Old text is being extracted from the web page

Posted 2024-09-30 18:17:12

I have created a spider that checks whether a particular movie-booking website has opened bookings for a film. It checks every 10 seconds. The problem I am facing is that even after booking opens on the site, my code never sees the updated page; it keeps working with the old scraped data.

For example:

I scrape the site at 8 AM and booking for movie "A" is not yet open. Booking for movie "A" opens at 12 PM, but the spider still reports that it is not open. Note that I am using an infinite while loop, so the program has been running since 8 AM and has never been stopped.

Code:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
import threading
import time
import datetime
import winsound

class NewFilmSpiderSpider(scrapy.Spider):
    name = 'new_film_spider'
    allowed_domains = ['www.spicinemas.in']
    start_urls = ['https://www.spicinemas.in/coimbatore/now-showing']

    def parse(self, response):
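        # note: passing the result of self.getDetails(response) means getDetails()
        # runs immediately in this thread; because of its while True loop it never
        # returns, so t.start() below is never reached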
        t = threading.Thread(self.getDetails(response))
        t.start()

    def getDetails(self, response):
        while True:
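            # the same cached response object is parsed on every iteration,
            # so these records never reflect changes on the live site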
            records = response.xpath('//section[@class="main-section"]/section[2]/section[@class="movie__listing now-showing"]/ul/li/div/dl/dt/a/text()').extract()
            if 'NGK' in str(records):
                try:
                    print("Booking Opened",datetime.datetime.now())
                    winsound.PlaySound('alert.wav', winsound.SND_FILENAME)
                except Exception:
                    print ("Error: unable to play sound")
            else:
                print("Booking Not Opened",datetime.datetime.now())
            time.sleep(10)

If you run the code now, it reports that booking is open. But I need the page to be scraped again on every iteration of the loop. How do I do that?

Update #1:

When I run the solution given below, I get this traceback:

File "C:\Users\ranji\Documents\Spiders\SpiCinemasSpider\spicinemas_spider\spiders\new_film_spider.py", line 34, in <module>
    main()
  File "C:\Users\ranji\Documents\Spiders\SpiCinemasSpider\spicinemas_spider\spiders\new_film_spider.py", line 30, in main
    process.start()
  File "C:\Users\ranji\AppData\Local\Programs\Python\Python37-32\lib\site-packages\scrapy\crawler.py", line 293, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "C:\Users\ranji\AppData\Local\Programs\Python\Python37-32\lib\site-packages\twisted\internet\base.py", line 1271, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "C:\Users\ranji\AppData\Local\Programs\Python\Python37-32\lib\site-packages\twisted\internet\base.py", line 1251, in startRunning
    ReactorBase.startRunning(self)
  File "C:\Users\ranji\AppData\Local\Programs\Python\Python37-32\lib\site-packages\twisted\internet\base.py", line 754, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable

1 Answer

#1 · Posted 2024-09-30 18:17:12

The problem is that the thread keeps processing the same set of "response" data each time while expecting it to change. Below is a modified version of the code showing how to crawl the page every 10 seconds and check the XPath value.

# -*- coding: utf-8 -*-
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.http import Request
import time
import datetime
import winsound

class NewFilmSpiderSpider(scrapy.Spider):
    name = 'new_film_spider'
    allowed_domains = ['www.spicinemas.in']
    start_urls = ['https://www.spicinemas.in/coimbatore/now-showing']

    def parse(self, response):
        records = response.xpath('//section[@class="main-section"]/section[2]/section[@class="movie__listing now-showing"]/ul/li/div/dl/dt/a/text()').extract()
        if 'NGK' in str(records):
            try:
                print("Booking Opened",datetime.datetime.now())
                winsound.PlaySound('alert.wav', winsound.SND_FILENAME)
            except Exception:
                print ("Error: unable to play sound")
        else:
            print("Booking Not Opened",datetime.datetime.now())


def main():
    try:
        process = CrawlerProcess()
        process.crawl(NewFilmSpiderSpider)
        process.start()

        while True:
            process.crawl(NewFilmSpiderSpider)
            time.sleep(10)
    except KeyboardInterrupt:
        process.join()


if __name__ == "__main__":
    main()

References:
https://doc.scrapy.org/en/latest/topics/practices.html
https://stackoverflow.com/a/43480164/1509809
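Regarding the ReactorNotRestartable traceback in Update #1: Twisted's reactor can only be started once per process, so after process.start() returns (or when the script is launched from inside an already-running Scrapy process), it cannot be started again. One common workaround is to launch each crawl in a fresh child process. Below is a minimal sketch of that idea, not the answerer's code; the module path new_film_spider and the import of NewFilmSpiderSpider are assumptions about your project layout.

# -*- coding: utf-8 -*-
import time
from multiprocessing import Process

from scrapy.crawler import CrawlerProcess

# assumption: the spider class lives in new_film_spider.py next to this script
from new_film_spider import NewFilmSpiderSpider


def run_spider():
    # each child process gets its own fresh Twisted reactor,
    # so CrawlerProcess.start() is safe to call every time
    process = CrawlerProcess()
    process.crawl(NewFilmSpiderSpider)
    process.start()  # blocks until this single crawl finishes


def main():
    try:
        while True:
            p = Process(target=run_spider)
            p.start()
            p.join()        # wait for this crawl to finish
            time.sleep(10)  # pause 10 seconds before the next check
    except KeyboardInterrupt:
        pass


if __name__ == "__main__":
    main()

Because every check runs in a fresh process, the page is fetched anew on each iteration, which also addresses the original stale-response problem.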
