<p>Per your question, the solution is as follows:</p>
<p>My code:</p>
<pre><code>import scrapy

class ProductsSpider(scrapy.Spider):
    name = "games"
    # input()'s argument is only the PROMPT shown to the user;
    # the value assigned is whatever the user types at that prompt.
    product = input("laptop")
    product2 = input("desktop")
    product3 = input("cameras")

    def start_requests(self):
        urls = [
            f'https://www.czone.com.pk/search.aspx?kw={self.product}',
            f'https://www.czone.com.pk/search.aspx?kw={self.product2}',
            f'https://www.czone.com.pk/search.aspx?kw={self.product3}',
        ]
        for url in urls:
            yield scrapy.Request(
                url=url,
                callback=self.parse,
            )

    def parse(self, response):
        pass
</code></pre>
<p>The same happens with the alternative:</p>
<p>Code:</p>
<pre><code>import scrapy

class ProductsSpider(scrapy.Spider):
    name = "games2"
    # Passing a list to input() just prints the list's repr as the
    # prompt and still reads a single line typed by the user.
    product = input(["laptop", "desktop", "cameras"])

    def start_requests(self):
        yield scrapy.Request(
            url=f'https://www.czone.com.pk/search.aspx?kw={self.product}',
            callback=self.parse,
        )

    def parse(self, response):
        pass
</code></pre>
<p>Output:</p>
<pre><code>laptop
desktop
cameras
['laptop', 'desktop', 'cameras']
2021-08-12 16:53:39 [scrapy.core.engine] DEBUG: Crawled (200) &lt;GET https://www.czone.com.pk/search.aspx?kw=&gt; (referer: None)
2021-08-12 16:53:39 [scrapy.core.engine] INFO: Closing spider (finished)
2021-08-12 16:53:39 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 312,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 19982,
'downloader/response_count': 1,
'downloader/response_status_count/200
</code></pre>
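<p>Note that <code>input()</code>'s argument is only the prompt string; the value stored is whatever the user types, so pressing Enter leaves the keyword empty, which is why the log above shows a request for <code>kw=</code> with nothing after it. A minimal sketch of building the search URLs from a plain keyword list, without any interactive prompts (the helper name <code>build_search_urls</code> is my own; the czone.com.pk endpoint is taken from the code above):</p>
<pre><code>from urllib.parse import quote

def build_search_urls(products):
    # URL-encode each keyword and append it to the site's search endpoint.
    base = "https://www.czone.com.pk/search.aspx?kw="
    return [base + quote(p) for p in products]

urls = build_search_urls(["laptop", "desktop", "cameras"])
print(urls[0])  # https://www.czone.com.pk/search.aspx?kw=laptop
</code></pre>
<p>A list like this can then be iterated in <code>start_requests()</code>. Also be aware that a class-level <code>input()</code> call runs at import time, when Scrapy loads the spider module; the usual way to parameterize a spider is via spider arguments (<code>scrapy crawl games -a ...</code>) rather than prompting.</p>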