Full disclosure first: when it comes to Python and web scraping I'm strictly an amateur who just tinkers for fun, so please don't take any of this too seriously, and go easy on me. Crawlers aren't really "big data" technology in the traditional sense, but the big data category is probably the only place this post fits.
Today I got a requirement from @小阿妩 (she's a product manager, so "requirement" really is the right word). The gist: she was worried that QQ group space might become unstable or get shut down, so she wanted a backup of every post in a particular group's space. With several thousand posts, doing it by hand would be painful, which is why a crawler came to mind.
No time to waste, so right after work I threw something together with Scrapy. My daily-post deadline is coming up, so what follows is a bit terse; I'll flesh it out later.
Installing Scrapy
I already had it installed. Just follow the official guide at http://doc.scrapy.org/en/latest/intro/install.html and you should be fine. Along the way you may need to sort out dependency version conflicts for a few packages such as six, Twisted, and pyOpenSSL. For the record, I'm on Python 2.7.10.
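A quick way to confirm everything imports cleanly and to see which versions ended up installed (nothing below is a required version, just whatever pip happened to resolve for me):
import OpenSSL
import scrapy
import six
import twisted

# Print the versions of the core dependencies that tend to cause conflicts
print 'Scrapy %s, Twisted %s, pyOpenSSL %s, six %s' % (
    scrapy.__version__, twisted.__version__, OpenSSL.__version__, six.__version__)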
Here's a diagram borrowed from the official architecture document, http://doc.scrapy.org/en/latest/topics/architecture.html, showing how Scrapy works; I won't belabor it.
Creating the Scrapy project
Run scrapy startproject qq_qgc_spider in a terminal, then open the resulting project in PyCharm.
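For reference, the generated skeleton looks roughly like this (details vary a little between Scrapy versions):
qq_qgc_spider/
    scrapy.cfg
    qq_qgc_spider/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
The files we'll touch are items.py, middlewares.py, pipelines.py, settings.py, and a new spider under spiders/.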
Analyzing the page structure
Chrome's "Inspect Element" comes in handy here; it can even copy an element's XPath with one click. Here's a screenshot to give you a feel for it.
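The XPath Chrome copies is usually an absolute path that's too brittle to use directly, so I loosen it by hand and sanity-check it offline before wiring it into the spider. A minimal sketch, assuming you've saved a topic list page as list.html in UTF-8 (file name and encoding are just my assumptions; adjust the decode if the saved page is GBK):
from scrapy.selector import Selector

# Load the saved page and try the hand-tuned XPath against it
with open('list.html') as f:
    sel = Selector(text=f.read().decode('utf-8'))
print sel.xpath('//div[@id="threadlist"]/div[@class="feed clearfix"]/dl/dt/a/@title').extract()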
Staying logged in
My original plan was a simulated login, but the web login window for QQ group space doesn't render completely in Chrome (there's no username/password option), and there's no separate mobile page either, so I fell back to extracting cookies from an existing session. Again, see the screenshot for details.
In settings.py, add default request headers and set COOKIES_ENABLED to False; otherwise the cookie defined in the headers won't be used.
# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'en,zh-CN;q=0.9,zh;q=0.8,ja;q=0.7,es;q=0.6',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': 'pgv_pvi=2645434368; RK=XGpQSgnlP6; ptcz=db5efe1457bcd4488f4edc672e564bbc4343ba3b8330faed74e74bbc3c1545a1; pgv_pvid=484757304; o_cookie=305546990; pac_uid=1_305546990; luin=o0305546990;******************************************* ; uniqueuid=f79135c957a48801cf1a97a7667dc22f',
    'Host': 'qgc.qq.com',
    'Upgrade-Insecure-Requests': '1'
}
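For the record, an alternative I didn't end up using: leave COOKIES_ENABLED alone and hand a parsed cookie dict to each Request via its cookies= argument. A minimal sketch (raw_cookie is just a fake placeholder for whatever you copy out of Chrome's request headers):
def cookie_header_to_dict(raw_cookie):
    # Split a raw 'Cookie:' header string like 'k1=v1; k2=v2' into a dict
    pairs = (kv.strip().split('=', 1) for kv in raw_cookie.split(';') if '=' in kv)
    return dict(pairs)

# Inside the spider it would be used roughly like:
#   yield Request(url, self.parse_topic_list, cookies=cookie_header_to_dict(raw_cookie))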
Defining the scraped data structure
I only need three fields per post: topic ID, title, and content, so items.py looks like this.
from scrapy import Field, Item


class QgcTopicItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    topic_id = Field()
    title = Field()
    content = Field()
Writing the spider
I didn't use BeautifulSoup, Selenium, or the other libraries commonly seen in scraping projects. For one thing, they're not particularly fast; for another, the data volume here is small, so parsing with Scrapy's own Selector plus XPath is enough.
The logic is straightforward: a topic list page plus a topic detail page, the classic two-level crawl. The only things worth calling out are the recursive handling of the "next page" link, and using meta on the Request to pass values between callbacks.
#!/usr/bin/python
# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.http import Request
from scrapy.selector import Selector

from qq_qgc_spider.items import *

QGC_ADDRESS = 'http://qgc.qq.com'
QQ_GROUP_ID = '89753069'


class QgcSpider(Spider):
    name = 'QgcSpider'
    allowed_domains = ['qq.com']
    start_urls = ['%s/%s?page=1' % (QGC_ADDRESS, QQ_GROUP_ID)]

    def parse(self, response):
        for url in self.start_urls:
            yield Request(url, self.parse_topic_list)

    def parse_topic_list(self, response):
        selector = Selector(response)
        # Each topic appears as an <a> carrying both the link and the title
        a_links_titles = selector.xpath('//div[@id="threadlist"]/div[@class="feed clearfix"]/dl/dt/a')
        for a_link_title in a_links_titles:
            link = a_link_title.xpath('./@href').extract_first()
            title = a_link_title.xpath('./@title').extract_first()
            detail_request = Request(QGC_ADDRESS + link + '?hostOnly=1', self.parse_topic_detail)
            # Pass the topic ID and title along to the detail callback via meta
            detail_request.meta['topic_id'] = link.split('/')[3]
            detail_request.meta['title'] = title
            yield detail_request
        # Pagination: follow the "next page" link and recurse into this callback
        a_page_numbers = selector.xpath('//div[@id="threadlist"]/div[@class="page"]/p/a')
        for a_page_no in a_page_numbers:
            span_no = a_page_no.xpath('./span/text()').extract_first()
            if span_no == u'下一页':
                link = a_page_no.xpath('./@href').extract_first()
                yield Request(QGC_ADDRESS + link, self.parse_topic_list)

    def parse_topic_detail(self, response):
        selector = Selector(response)
        content = ''
        div_contents = selector.xpath('//td[@id="plc_0"]/div[@class="pct xs2"]/div[@class="pctmessage mbm"]/div')
        for div_content in div_contents:
            # Collect every text node of each paragraph div and join with line breaks
            div_paragraphs = div_content.xpath('./div//text()')
            for para in div_paragraphs.extract():
                content += (para + '\r\n')
        item = QgcTopicItem()
        item['topic_id'] = response.meta['topic_id']
        item['title'] = response.meta['title']
        item['content'] = content
        yield item
UA spoofing and AutoThrottle
Both of these exist for one reason: not getting banned. UA spoofing is done with the fake_useragent library, wired in as a downloader middleware in middlewares.py.
from fake_useragent import UserAgent


class FakeUAMiddleware(object):
    def __init__(self, crawler):
        super(FakeUAMiddleware, self).__init__()
        self.ua = UserAgent()
        self.ua_type = crawler.settings.get('UA_TYPE', 'random')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        # Pick a fake UA of the configured type ('random' unless UA_TYPE says otherwise)
        fake_ua = getattr(self.ua, self.ua_type)
        request.headers.setdefault('User-Agent', fake_ua)
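If you're curious what fake_useragent actually hands back, a quick interactive check looks like this (the UA_TYPE setting read by the middleware above decides which attribute gets used; I just leave it at 'random'):
from fake_useragent import UserAgent

ua = UserAgent()
print ua.random   # a different browser UA string on each access
print ua.chrome   # or pin a browser family and set UA_TYPE = 'chrome' in settings.py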
Then enable it in settings.py. While we're in there, turn on AutoThrottle as well so the crawl doesn't run too fast.
DOWNLOADER_MIDDLEWARES = {
    'qq_qgc_spider.middlewares.FakeUAMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 10
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
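If you want to watch what AutoThrottle is actually doing, Scrapy also has a debug switch that logs the throttling stats for every response (optional, I only flip it on when tuning the delays):
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = True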
Saving the results to txt files
This is done with a Scrapy item pipeline: each post gets its own txt file, saved as GB18030, which is a bit friendlier to open locally. Add this to pipelines.py:
import codecs
import sys

reload(sys)
sys.setdefaultencoding('utf-8')


class TextFilePipeline(object):
    def __init__(self):
        self.path = '/Users/lmagic/Documents/bsm_plays_backup/'

    def process_item(self, item, spider):
        file_name = '%s - %s.txt' % (item['topic_id'], item['title'])
        content = u'%s\r\n\r\n%s' % (item['title'], item['content'])
        # codecs.open takes care of encoding the unicode content as GB18030
        fd = codecs.open(self.path + file_name, 'w+', encoding='gb18030')
        fd.write(content)
        fd.close()
        return item
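One thing I haven't handled: if a post title contains characters that are illegal in file names (a '/', say), the open call will fail. A crude sanitizer I'd bolt on if that ever bites; the character set below is just my guess at "good enough":
import re

def safe_file_name(name):
    # Replace characters that commonly break file paths with underscores
    return re.sub(ur'[\\/:*?"<>|]', u'_', name)

# Then in process_item:
#   file_name = '%s - %s.txt' % (item['topic_id'], safe_file_name(item['title']))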
And in settings.py:
ITEM_PIPELINES = {
    'qq_qgc_spider.pipelines.TextFilePipeline': 300
    # 'qq_qgc_spider.pipelines.QgcSpiderPipeline': 300,
}
Fire it up
from scrapy import cmdline
cmdline.execute("scrapy crawl QgcSpider".split())
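I drop those two lines into a little run.py next to scrapy.cfg (the file name is just my habit) so the whole thing can be launched straight from PyCharm.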
Then just head to the output directory and collect the results~