
Recap from last time:
- python -m pip install SomePackage : install the latest version of a package
- python -m pip install SomePackage==2.6.0 : install a specific version of a package
- python -m pip install --upgrade SomePackage : upgrade a package to the latest version
- python -m pip uninstall SomePackage : uninstall an installed package
- python -m pip show SomePackage : show information about a specific package
- python -m pip list : list the installed packages
The web has everything, but most of what it serves up is noise.
In the earlier episode on network requests, we saw that urllib can fetch resources from the web. Most of that raw data is useless to us, though, so how do we pick out the parts we actually need?
The data that comes back from a request is usually HTML or XML, and the Beautiful Soup library makes it quick to extract data from both.
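To make that idea concrete before we dive in, here is a minimal sketch of the two working together; the URL https://example.com and the title lookup are just placeholders for illustration, and BeautifulSoup itself is introduced properly below.
from urllib.request import urlopen
from bs4 import BeautifulSoup

# Fetch the raw HTML with urllib, as covered in the network-requests episode
with urlopen("https://example.com") as response:
    html = response.read()

# Hand the markup to Beautiful Soup and pull out one useful piece of information
soup = BeautifulSoup(html, 'html.parser')
print(soup.title.string)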
1. A First Look at BeautifulSoup
Let's take a quick tour of what BeautifulSoup can do. The piece of HTML below will be used as the example many times in what follows; it is a passage from Alice's Adventures in Wonderland (referred to below simply as the "Alice" document):
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
Parsing this snippet with BeautifulSoup produces a BeautifulSoup object, which can print the document back out as a neatly indented structure:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.prettify())
# <html>
#  <head>
#   <title>
#    The Dormouse's story
#   </title>
#  </head>
#  <body>
#   <p class="title">
#    <b>
#     The Dormouse's story
#    </b>
#   </p>
#   <p class="story">
#    Once upon a time there were three little sisters; and their names were
#    <a class="sister" href="http://example.com/elsie" id="link1">
#     Elsie
#    </a>
#    ,
#    <a class="sister" href="http://example.com/lacie" id="link2">
#     Lacie
#    </a>
#    and
#    <a class="sister" href="http://example.com/tillie" id="link3">
#     Tillie
#    </a>
#    ; and they lived at the bottom of a well.
#   </p>
#   <p class="story">
#    ...
#   </p>
#  </body>
# </html>
A few simple ways to navigate the structured data:
soup.title
# <title>The Dormouse's story</title>
soup.title.name
# 'title'
soup.title.string
# "The Dormouse's story"
soup.title.parent.name
# 'head'
soup.p
# <p class="title"><b>The Dormouse's story</b></p>
soup.p['class']
# ['title']
soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
Find the links of all the <a> tags in the document:
for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
Get all of the text content from the document:
print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
#
# ...
2. Installation and Usage
The official documentation describes four ways to install it; on Windows the recommended one is pip, which we learned last time: pip install beautifulsoup4.
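If you want to double-check that the install worked, you can use the pip show command from the recap above, or simply print the version from Python; the bs4 package exposes it as __version__:
import bs4

# Print the installed Beautful Soup version to confirm the install succeeded
print(bs4.__version__)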
Once it is installed, you can do what we did above: pass a piece of markup to the BeautifulSoup constructor and get back a document object.
from bs4 import BeautifulSoup
soup = BeautifulSoup("<html>data</html>")
You can also pass an open file handle, in which case the entire file is parsed:
from bs4 import BeautifulSoup
soup = BeautifulSoup(open("index.html"), 'html.parser')
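One small refinement, just a sketch assuming there is an index.html sitting next to the script: open the file with a with block so it is closed automatically once parsing is done.
from bs4 import BeautifulSoup

# Use a context manager so the file is closed automatically after parsing
with open("index.html", encoding="utf-8") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

print(soup.title)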
With the document object in hand, you can reach the information you want through the object's attributes and methods. Which attributes and methods does BeautifulSoup offer? That's for the next episode!
Summary of this episode:
- A first look at BeautifulSoup
- Installation and usage
See you next time!