Can't get a specific portion of text out of some HTML elements

Posted 2024-09-27 18:07:48


I've written a Python script to parse an address out of some HTML elements. When I run it, I get the title, address, and phone number from the element, but my goal is to get only the address. With next_sibling I only get the first part of the address, which is why I abandoned that approach.

How can I get only the address, and nothing else, from the snippet below?

from bs4 import BeautifulSoup

htmldoc = """
<div class="search-article-title-description">
    <div class="search-article-title">
      <a href="https://www.pga.com/pgapro/info/999918438?atrack=pgapro%3Anone&amp;seapos=result%3A1%3AJeff%20S%20Swangim%2C%20PGA&amp;page=1">Jeff S Swangim, PGA</a>
      <div class="search-article-protitle">
        Assistant Professional
      </div>
    </div>
    <div class="search-article-address">
      <div class="search-instructor-course">
        Lake Toxaway Country Club
      </div>
      4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br> 
      <div class="spotlightphone_num">
        (828) 966-4661
      </div>
    </div>
</div>
"""
soup = BeautifulSoup(htmldoc, "lxml")
address = soup.select_one(".search-article-address").get_text(strip=True)
print(address)

What I currently get:

Lake Toxaway Country Club4366 W Club BlvdLake Toxaway, NC  28747-8538(828) 966-4661

My expected output:

4366 W Club BlvdLake Toxaway, NC  28747-8538

Tags: div, script, element, search, title, address, article
3 Answers

The simplest approach I can think of is to use the .extract() method to remove the parts you are not interested in. If we drop the contents of the search-instructor-course and spotlightphone_num classes, what remains is exactly the part you want.

The script below should give us the address:

from bs4 import BeautifulSoup

htmldoc = """
<div class="search-article-title-description">
    <div class="search-article-title">
      <a href="https://www.pga.com/pgapro/info/999918438?atrack=pgapro%3Anone&amp;seapos=result%3A1%3AJeff%20S%20Swangim%2C%20PGA&amp;page=1">Jeff S Swangim, PGA</a>
      <div class="search-article-protitle">
        Assistant Professional
      </div>
    </div>
    <div class="search-article-address">
      <div class="search-instructor-course">
        Lake Toxaway Country Club
      </div>
      4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br> 
      <div class="spotlightphone_num">
        (828) 966-4661
      </div>
    </div>
</div>
"""
soup = BeautifulSoup(htmldoc, "lxml")
# Remove the course name and phone number before extracting text
for item in soup.find_all(class_=["search-instructor-course", "spotlightphone_num"]):
    item.extract()
address = soup.select_one(".search-article-address").get_text(strip=True)
print(address)
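For what it's worth, BeautifulSoup's decompose() does the same job as extract() here, except that it destroys the tag in place rather than returning it. A minimal sketch of that variant, with the HTML trimmed to just the address block for brevity:

```python
from bs4 import BeautifulSoup

# Trimmed to the address block only; the full document works the same way.
htmldoc = """
<div class="search-article-address">
  <div class="search-instructor-course">Lake Toxaway Country Club</div>
  4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br>
  <div class="spotlightphone_num">(828) 966-4661</div>
</div>
"""

soup = BeautifulSoup(htmldoc, "html.parser")

# decompose() removes and destroys each unwanted tag in place
for item in soup.find_all(class_=["search-instructor-course", "spotlightphone_num"]):
    item.decompose()

# Joining with " " keeps the street and city/state parts separated
address = soup.select_one(".search-article-address").get_text(" ", strip=True)
print(address)  # → 4366 W Club Blvd Lake Toxaway, NC  28747-8538
```

Passing a separator to get_text() also fixes the run-together "BlvdLake" you would otherwise get.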

There may be a more elegant way, but your instinct to use .next_sibling was right:

from bs4 import BeautifulSoup

htmldoc = """
<div class="search-article-title-description">
    <div class="search-article-title">
      <a href="https://www.pga.com/pgapro/info/999918438?atrack=pgapro%3Anone&amp;seapos=result%3A1%3AJeff%20S%20Swangim%2C%20PGA&amp;page=1">Jeff S Swangim, PGA</a>
      <div class="search-article-protitle">
        Assistant Professional
      </div>
    </div>
    <div class="search-article-address">
      <div class="search-instructor-course">
        Lake Toxaway Country Club
      </div>
      4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br> 
      <div class="spotlightphone_num">
        (828) 966-4661
      </div>
    </div>
</div>
"""

soup = BeautifulSoup(htmldoc, "html.parser")

# The street is the text node right after the course div; the city/state/zip
# is the text node after the first <br> that follows it.
course = soup.find('div', {'class': 'search-instructor-course'})
addr = course.next_sibling.strip()
state_zip = course.next_sibling.next_sibling.next_sibling.strip()


print (' '.join([addr, state_zip]))

Output:

4366 W Club Blvd Lake Toxaway, NC  28747-8538
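A more direct way to express the same idea, avoiding the chained next_sibling calls, is to keep only the text nodes that are direct children of the address div. A minimal sketch (HTML trimmed to the address block; class names match the original markup):

```python
from bs4 import BeautifulSoup, NavigableString

# Trimmed to the address block only
htmldoc = """
<div class="search-article-address">
  <div class="search-instructor-course">Lake Toxaway Country Club</div>
  4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br>
  <div class="spotlightphone_num">(828) 966-4661</div>
</div>
"""

soup = BeautifulSoup(htmldoc, "html.parser")
address_div = soup.select_one(".search-article-address")

# Keep only direct-child text nodes, skipping child tags (course name,
# phone number) and whitespace-only strings
parts = [
    child.strip()
    for child in address_div.children
    if isinstance(child, NavigableString) and child.strip()
]
print(" ".join(parts))  # → 4366 W Club Blvd Lake Toxaway, NC  28747-8538
```

This survives markup changes such as an extra <br> because it never counts siblings by hand.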

Here is an approach using an XPath expression with lxml. You can still feed your HTML content into this:

from lxml import html

h = '''
<div class="search-article-title-description">
    <div class="search-article-title">
      <a href="https://www.pga.com/pgapro/info/999918438?atrack=pgapro%3Anone&amp;seapos=result%3A1%3AJeff%20S%20Swangim%2C%20PGA&amp;page=1">Jeff S Swangim, PGA</a>
      <div class="search-article-protitle">
        Assistant Professional
      </div>
    </div>
    <div class="search-article-address">
      <div class="search-instructor-course">
        Lake Toxaway Country Club
      </div>
      4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br> 
      <div class="spotlightphone_num">
        (828) 966-4661
      </div>
    </div>
</div>

'''

tree = html.fromstring(h)
links = [link.strip() for link in tree.xpath("//div[@class='search-article-address']/br/preceding-sibling::text()[1]")]
print(' '.join(links))

Output:

4366 W Club Blvd Lake Toxaway, NC  28747-8538

Or, more simply, thanks to @SIM, just:

print(' '.join(tree.xpath("//div[@class='search-article-address']/text()")))
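One caveat with the one-liner: /text() also matches the indentation-only text nodes inside the div, so the joined result carries stray newlines and spaces. A minimal sketch that strips them first (HTML again trimmed to the address block):

```python
from lxml import html

# Trimmed to the address block only; class names match the original markup
h = """
<div class="search-article-address">
  <div class="search-instructor-course">Lake Toxaway Country Club</div>
  4366 W Club Blvd<br>Lake Toxaway, NC  28747-8538<br>
  <div class="spotlightphone_num">(828) 966-4661</div>
</div>
"""

tree = html.fromstring(h)
# Direct text children of the address div, dropping whitespace-only nodes
parts = [
    t.strip()
    for t in tree.xpath("//div[@class='search-article-address']/text()")
    if t.strip()
]
print(" ".join(parts))  # → 4366 W Club Blvd Lake Toxaway, NC  28747-8538
```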
