Approach

1. The base URL https://movie.douban.com/explore#!type=movie is fixed.
2. The tag parameter varies; it corresponds to Douban's categories such as 热门 (popular) and 最新 (latest).
3. The sort parameter controls ordering: ['recommend', 'time', 'rank'], corresponding to popularity / time / rating.
4. page_limit is the number of items per page, and page_start is the offset of the first item.
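As a standalone sketch, the four parameters above can be assembled into a full request URL. This mirrors the string concatenation the post uses later, but with urllib.parse.urlencode handling the percent-encoding (the concrete tag value 热门 is just an example):

```python
from urllib.parse import urlencode

# Fixed base URL from step 1
base_url = "https://movie.douban.com/explore#!type=movie"

params = {
    "tag": "热门",        # category tag (step 2)
    "sort": "recommend",  # one of recommend / time / rank (step 3)
    "page_limit": 20,     # items per page (step 4)
    "page_start": 0,      # offset of the first item (step 4)
}

# Append the query parameters the same way the post's get_url does
url = base_url + "&" + urlencode(params)
print(url)
```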
Crawling implementation
- Build the URL to start crawling from:

```python
def get_url(self, tag, sort, page_limit, page_start):
    """Build the base URL to start crawling from"""
    return self.baseUrl + "&tag=%s&sort=%s&page_limit=%s&page_start=%s" % (
        tag, sort, page_limit, page_start)
```
- Loop over the tags and start crawling. Here I only sort by popularity and fetch the first page of 20 items:

```python
def on_start(self):
    for tg in self.tags:
        url = self.get_url(tg, self.sorts[0], self.page_limit, self.page_start)
        self.crawl(url, callback=self.index_page, headers=self.heard,
                   fetch_type='js', validate_cert=False)
```
- Save the cover image and synopsis.

Use urllib to decode the percent-encoded URL:

```python
url = urllib.request.unquote(response.url)
```
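A quick standalone illustration of what unquote does to a percent-encoded Douban URL (the sample URL is the one shown later in this post; urllib.parse.unquote is the same function that urllib.request re-exports):

```python
from urllib.parse import unquote

encoded = "https://movie.douban.com/subject/1292052/?tag=%E7%BB%8F%E5%85%B8&from=gaia_video"
decoded = unquote(encoded)
print(decoded)  # the tag becomes readable Chinese text: tag=经典
```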
Create a folder per tag and save the 20 items under it:

```python
info = response.doc('#info').text()
path = str(url).split('?tag=')[1].split('&from=')[0]
self.util.save_file(path + "/" + title, '简介.txt', info)
self.util.save_img(path + "/" + title, '封面.png', img)
```
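The post never shows the self.util helper it calls. A minimal sketch of what save_file and save_img might look like (the Util class, the DIR output root, and the use of the requests library are all assumptions here, not the author's actual code):

```python
import os

DIR = "douban_out"  # assumed output root directory; not shown in the post


class Util:
    """Hypothetical helper matching the self.util.save_file / save_img calls."""

    def save_file(self, folder, filename, text):
        # Create the per-tag/per-title folder, then write the synopsis text
        os.makedirs(os.path.join(DIR, folder), exist_ok=True)
        with open(os.path.join(DIR, folder, filename), "w", encoding="utf-8") as f:
            f.write(text)

    def save_img(self, folder, filename, img_url):
        # Download the cover image; requests is assumed to be installed
        import requests
        os.makedirs(os.path.join(DIR, folder), exist_ok=True)
        resp = requests.get(img_url, timeout=10)
        with open(os.path.join(DIR, folder, filename), "wb") as f:
            f.write(resp.content)
```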
- Crawl the first 20 reviews of each movie and save them locally.

The review list lives at https://movie.douban.com/subject/26617393/reviews?start=0. The prefix https://movie.douban.com/subject/26617393 can be obtained by trimming the crawled URL (e.g. 'https://movie.douban.com/subject/1292052/?tag=%E7%BB%8F%E5%85%B8&from=gaia_video'), then appending the reviews suffix:

```python
final_url = url.split('?tag=')[0] + "reviews" + "?start=%s" % self.review
```
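As a standalone check, applying that split-and-append to the sample URL confirms the shape of the result (the starting value of self.review is assumed to be 0):

```python
# Sample crawled URL from the post
url = "https://movie.douban.com/subject/1292052/?tag=%E7%BB%8F%E5%85%B8&from=gaia_video"
review = 0  # assumed initial value of self.review

# Trailing slash survives the split, so "reviews" concatenates cleanly
final_url = url.split('?tag=')[0] + "reviews" + "?start=%s" % review
print(final_url)  # https://movie.douban.com/subject/1292052/reviews?start=0
```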
- Save the reviews locally:

```python
def detail_page(self, response):
    time.sleep(0.5)
    self.review += 20
    path = response.save['path']
    title = response.doc('h1').text()
    authors = [x.text() for x in response.doc('.author').items()]
    contents = [x.text() for x in response.doc('.short-content').items()]
    i = 0
    with open(DIR + "/" + path + "/" + '评论.txt', 'ab') as f:
        f.write(str("\t\t\t\t\t\t\t" + title + "\n").encode('utf-8'))
        for content in contents:
            author = authors[i]
            s = author + ":\t" + content + "\n"
            f.write(s.encode('utf-8'))
            i += 1
    return {
        'title': title,
        'author': authors,
        'content': contents,
    }
```
Notes

Crawling too fast will get you flagged as a bot by Douban, which then demands a CAPTCHA. It is advisable to sleep briefly after each item, e.g. time.sleep(0.5).
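One possible variation on the fixed delay: adding random jitter makes the request pattern look less mechanical. This polite_sleep helper is a suggestion of mine, not part of the original code:

```python
import random
import time


def polite_sleep(base=0.5, jitter=0.5):
    """Sleep `base` seconds plus up to `jitter` extra seconds, chosen at random."""
    time.sleep(base + random.random() * jitter)
```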
Preview

Download link: pyspider_doubanMovie