<p>Hopefully this isn't a duplicate question; I've already searched this forum for anyone who has posted something similar to what follows...</p>
<p>Basically, I've written a Python script that scrapes each ship's call sign from the URL shown below and appends them to a list. In short, it works, but whenever I iterate over the list and display each element, there appears to be a "[" and "]" around each call sign. My script's output is shown below:</p>
<p><strong>Output</strong></p>
<pre><code>*********************** Contents of 'listOfCallSigns' List ***********************
0 ['311062900']
1 ['235056239']
2 ['305500000']
3 ['311063300']
4 ['236111791']
5 ['245639000']
6 ['235077805']
7 ['235011590']
</code></pre>
<p>As you can see, it displays square brackets around each call sign. I have a feeling this may come down to an encoding issue in the BeautifulSoup library.</p>
<p>Ideally, I'd like the output without any square brackets, <strong>with just the call sign as a string.</strong></p>
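<p>For reference, a minimal sketch of how <code>re.findall</code> behaves (using a made-up stand-in string for one table cell), which may be the source of the brackets rather than an encoding issue:</p>

```python
import re

# re.findall always returns a *list* of matching strings, even when
# there is exactly one match -- printing that list shows the brackets.
cell_html = "<td>311062900</td>"  # hypothetical stand-in for one table cell

matches = re.findall(r"\d{9}", cell_html)
print(matches)     # a one-element list: ['311062900']
print(matches[0])  # indexing the list gives the plain string: 311062900
```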
<pre><code>*********************** Contents of 'listOfCallSigns' List ***********************
0 311062900
1 235056239
2 305500000
3 311063300
4 236111791
5 245639000
6 235077805
7 235011590
</code></pre>
<p>The script I'm currently using is shown below:</p>
<p><strong>My script</strong></p>
<pre><code># Importing the modules needed to run the script
from bs4 import BeautifulSoup
import urllib2
import re
import requests
import pprint
# Declaring the url for the port of hull
url = "http://www.fleetmon.com/en/ports/Port_of_Hull_5898"
# Opening and reading the contents of the URL using the module 'urlib2'
# Scanning the entire webpage, finding a <table> tag with the id 'vessels_in_port_table' and finding all <tr> tags
portOfHull = urllib2.urlopen(url).read()
soup = BeautifulSoup(portOfHull)
table = soup.find("table", {'id': 'vessels_in_port_table'}).find_all("tr")
# Declaring a list to hold the call signs of each ship in the table
listOfCallSigns = []
# For each row in the table, using a regular expression to extract the first 9 numbers from each ship call-sign
# Adding each extracted call-sign to the 'listOfCallSigns' list
for i, row in enumerate(table):
    if i:
        listOfCallSigns.append(re.findall(r"\d{9}", str(row.find_all('td')[4])))

print "\n\n*********************** Contents of 'listOfCallSigns' List ***********************\n"

# Printing each element of the 'listOfCallSigns' list
for i, row in enumerate(listOfCallSigns):
    print i, row
</code></pre>
<p><strong>Does anyone know how to get rid of the square brackets around each call sign and display just the string?</strong></p>
<p>Thanks in advance! :)</p>