Goal: crawl the first 50 pages of any tieba (Baidu Tieba forum) and save them locally.
Examining the URL pattern of tieba pages
[Screenshots: the LOL tieba home page, the 李毅吧 home page, and pages 2 and 3 of the LOL tieba]
1. The first two screenshots show the home pages of the LOL tieba and the 李毅吧. Comparing them:
When we search for different tiebas, only the value after the kw parameter in the URL changes; this parameter carries the tieba name.
2. The last two screenshots show pages 2 and 3 of the LOL tieba. Comparing them:
The page is controlled by the pn parameter, which increases in multiples of 50.
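The two observations above can be checked with a few lines of Python before writing the full spider (the variable names here are illustrative):

```python
# The kw parameter carries the tieba name; pn advances by 50 per page.
url_temp = "https://tieba.baidu.com/f?kw={}&ie=utf-8&pn={}"

# URLs for the first three pages of the "lol" tieba:
urls = [url_temp.format("lol", page * 50) for page in range(3)]
print(urls[1])  # → https://tieba.baidu.com/f?kw=lol&ie=utf-8&pn=50
```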
Writing the code

import requests


class TiebaSpider:
    def __init__(self, tieba_name):
        self.tieba_name = tieba_name
        self.url_temp = "https://tieba.baidu.com/f?kw=" + tieba_name + "&ie=utf-8&pn={}"
        self.headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0"}

    def get_url_list(self):  # build the list of page URLs
        url_list = []
        for i in range(50):  # first 50 pages; pn advances by 50 per page
            url_list.append(self.url_temp.format(i * 50))
        return url_list

    def parse_url(self, url):  # send the request, get the response
        response = requests.get(url, headers=self.headers)
        return response.content.decode()

    def save_html(self, html_str, page_num):  # save the HTML string
        file_path = "{}-page{}.html".format(self.tieba_name, page_num)
        with open(file_path, "w", encoding="utf-8") as f:  # e.g. "lol-page1.html"
            f.write(html_str)

    def run(self):  # main logic
        # 1. build the URL list
        url_list = self.get_url_list()
        # 2. iterate, send requests, get responses
        for url in url_list:
            html_str = self.parse_url(url)
            # 3. save
            page_num = url_list.index(url) + 1  # page number (list index is 0-based)
            self.save_html(html_str, page_num)


if __name__ == '__main__':
    tieba_spider = TiebaSpider("lol")
    tieba_spider.run()
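One small refinement worth noting: run() looks up each URL's position with url_list.index(url), an O(n) scan on every iteration. Python's enumerate yields the page number directly. A sketch of that loop, with stand-in functions in place of the real network and file I/O (crawl and the lambdas below are illustrative names, not part of the class above):

```python
def crawl(url_list, parse_url, save):
    # Pair each URL with its 1-based page number via enumerate,
    # avoiding the repeated url_list.index() lookup.
    for page_num, url in enumerate(url_list, start=1):
        html_str = parse_url(url)
        save(html_str, page_num)

# Usage with stand-in functions instead of requests.get and open():
saved = []
crawl(["u0", "u1"], lambda u: "<html>" + u, lambda h, n: saved.append((n, h)))
print(saved)  # → [(1, '<html>u0'), (2, '<html>u1')]
```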