Goal: starting from the 大话西游 forum entry page http://dhxy.netease.com/forum-39-1.html, crawl a user-specified number of list pages and extract the text content of threads carrying the 版主回复 (moderator replied) badge.
Tools: the Scrapy framework
Approach: fetch the start_urls page, extract the thread links and store them in a list, then hand each link to parse_content to parse the post text; parse_content builds the item dict and also constructs the next list-page URLs with parse as the callback, so the crawl proceeds page by page.
Code implementation:
Create the project: scrapy startproject dhxy_luntan
Create the spider as prompted; a likely invocation is sketched below.
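The class name DhxyLuntanSpiderSpider in the code below is exactly what scrapy genspider generates for the name dhxy_luntan_spider, so the step was presumably something like this (the domain argument is an assumption):

    scrapy genspider dhxy_luntan_spider dhxy.netease.com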
spider.py
# -*- coding: utf-8 -*-
import scrapy
import re
import sys
from scrapy.http import Request
from dhxy_luntan.items import DhxyLuntanItem

reload(sys)
sys.setdefaultencoding('utf-8')


class DhxyLuntanSpiderSpider(scrapy.Spider):
    name = "dhxy_luntan_spider"
    # allowed_domains = ["http://dhxy.netease.com/forum-39-1.html"]
    start_urls = (
        'http://dhxy.netease.com/forum-39-1.html',
    )

    def parse(self, response):
        urls = []
        sel = scrapy.Selector(response)
        # One <th> per normal (non-sticky) thread row on the list page
        content = sel.xpath('//*[@id="threadlist"]/div[2]/form/table/tbody[starts-with(@id,"normalthread_")]/tr/th').extract()
        print len(content)
        str_ = u'版主回复'  # the "moderator replied" badge
        qiandao_ = u'签到'  # "check-in" threads (not filtered out here)
        for i in content:
            if str_ in i:
                # Pull the thread's relative link out of the row HTML
                hrefs = re.findall('</em> <a href="(.*?)" .*?onclick.*?</a>', i, re.S)
                if hrefs:
                    href = hrefs[0]
                    full_href = 'http://dhxy.netease.com/' + href
                    urls.append(full_href)
        for i in urls:
            yield Request(i, callback=self.parse_content)

    def parse_content(self, response):
        item = DhxyLuntanItem()
        sel = scrapy.Selector(response)
        title = sel.xpath('//*[@id="thread_subject"]/text()').extract()[0]
        item['url'] = u'\n[' + title + u']' + response.url
        data = sel.xpath('//tr/td[starts-with(@id, "postmessage")]')
        content = data.xpath('string(.)').extract()  # string(.) serializes all text under the current node into one string
        if content:
            item['content'] = content
            yield item
        # Build the next list-page URLs (pages 2-4) and hand them back to parse
        for i in range(2, 5):
            next_page = 'http://dhxy.netease.com/forum-39-' + str(i) + '.html'
            yield scrapy.http.Request(next_page, callback=self.parse)
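Note that parse_content re-yields the page 2-4 requests for every thread it parses; Scrapy's default duplicate-request filter discards the repeats, so each list page is still downloaded only once.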
items.py
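The original item definition was a screenshot; a minimal reconstruction from the two fields the spider actually assigns (url and content):

import scrapy

class DhxyLuntanItem(scrapy.Item):
    url = scrapy.Field()      # thread title plus thread URL
    content = scrapy.Field()  # post text extracted via string(.)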
settings.py
Since CSV output is needed, configure it in settings.py.
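The settings screenshot is likewise lost; a minimal sketch assuming the classic feed-export settings of that Scrapy era (the filename and the optional encoding line are assumptions):

FEED_URI = 'dhxy_luntan.csv'          # hypothetical output filename
FEED_FORMAT = 'csv'
# FEED_EXPORT_ENCODING = 'gb18030'    # optional, assumption: an Excel-friendly Chinese encoding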
pipelines.py
Since nothing is saved to a database, no pipeline configuration is needed.
Run the crawler: scrapy crawl dhxy_luntan_spider
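Alternatively, the feed settings can be left out and the output file passed on the command line (filename hypothetical):

    scrapy crawl dhxy_luntan_spider -o dhxy_luntan.csv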
Generating the CSV file:
Use Excel's export feature to change the file type:
Special characters can be replaced in an editor. The stray ? characters turn out to be &nbsp; (in HTML this is a space placeholder: consecutive ordinary spaces may be collapsed into one, whereas each &nbsp; you write occupies exactly one space position).
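&nbsp; reaches the extracted text as the Unicode character u'\xa0', which renders as ? under encodings that cannot represent it. If you prefer to clean it during the crawl rather than in an editor, a small sketch (not part of the original code) for parse_content:

    # Assumption: strip non-breaking spaces (u'\xa0', i.e. &nbsp;) before storing
    content = [c.replace(u'\xa0', u' ') for c in content]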