<p>There is a good example of this in <strong><em>Automate the Boring Stuff with Python</em></strong>.</p>
<p><a href="https://automatetheboringstuff.com/chapter15/" rel="nofollow noreferrer">https://automatetheboringstuff.com/chapter15/</a></p>
<p>Basically, you need to use the <code>threading</code> module to create a separate thread for each URL, then wait for them all to finish.</p>
<pre><code>import threading
def scrape_for_info(url):
scrape = CP_GetOdds(url)
for x in range(scrape.GameRange()):
sql_str = "INSERT INTO Scraped_Odds ('"
sql_str += str(scrape.Time()) + "', '"
sql_str += str(scrape.Text(x)) + "', '"
sql_str += str(scrape.HomeTeam()) + "', '"
sql_str += str(scrape.Odds1(x)) + "', '"
sql_str += str(scrape.Odds2(x)) + "', '"
sql_str += str(scrape.AwayTeam()) + "')"
cursor.execute(sql_str)
conn.commit()
# Create and start the Thread objects.
threads = []
for link in links:
thread = threading.Thread(target=scrape_for_info, args=(link))
threads.append(thread)
thread.start()
# Wait for all threads to end.
for thread in threads:
thread.join()
print('Done.')
</code></pre>
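<p>On Python 3 you can also let <code>concurrent.futures.ThreadPoolExecutor</code> manage the threads and the joining for you. This is a minimal sketch: <code>fetch</code> here is a hypothetical stand-in for the real scraping work, and the example URLs are made up for illustration.</p>

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for the real per-URL scraping; here it just
    # returns the length of the URL string.
    return len(url)

links = ["https://example.com/a", "https://example.com/bb"]

# The executor starts the worker threads, runs fetch() on each
# link, and joins all threads when the with-block exits.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch, links))

print(results)  # one result per link, in input order
```

<p><code>pool.map</code> keeps the results in the same order as the input links, which is handy when you need to match each result back to its URL.</p>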