The Scrapy crawling framework: an application framework written for crawling websites and extracting structured data. It integrates field definition, network requests and parsing, and data extraction and processing into one package, which greatly simplifies the process of writing a crawler.
1. Creating a Scrapy crawler project, using listing information from the Xiaozhu short-term rental site as the example.
① Enter the following commands in a CMD window:
F:
cd F:\01_Python\03_Scrapy      # switch to the working directory
scrapy startproject xiaozhu    # create a crawler project named xiaozhu
② View all the files under the xiaozhu project in PyCharm.
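The files created by scrapy startproject follow the standard Scrapy project layout, roughly:

xiaozhu/
    scrapy.cfg
    xiaozhu/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py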
③ Create a new file named xiaozhuspider.py under the spiders folder; the spider code will be written there.
2. The files in a Scrapy project
- As shown in the layout above, the top-level xiaozhu folder is the project directory. The second level consists of a folder also named xiaozhu plus scrapy.cfg. The inner xiaozhu folder is a Python module (usually called a package), and all the crawler code is added inside it; scrapy.cfg is the project's configuration file, with the following contents:
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html
[settings]
default = xiaozhu.settings
[deploy]
#url = http://localhost:6800/
project = xiaozhu
- Apart from the comments, this configuration file declares two things:
①[settings] sets the location of the settings file: settings.py under the xiaozhu module.
②[deploy] sets the project name: xiaozhu.
- The third level consists of five Python files and the spiders folder. The spiders folder is actually a module as well. Of the five Python files, __init__.py is an empty file that mainly exists so Python can import the package, and middlewares.py holds the middleware classes, which process Request and Response objects; it is optional.
- The remaining three Python files, items.py, pipelines.py and settings.py, are the ones used most often and are introduced in detail below:
①items.py: defines the fields to be scraped. The auto-generated code is as follows:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class XiaozhuItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass
②pipelines.py: cleans the scraped data and writes it to storage. The auto-generated code is shown next, followed by a small illustrative cleaning sketch.
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class XiaozhuPipeline(object):
    def process_item(self, item, spider):
        return item
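To make the "cleaning" part concrete, here is a minimal sketch of a pipeline that strips whitespace and discards items without a price. The class name CleaningPipeline is made up for illustration, the field names assume the items.py defined later in this post, and the pipeline would still need to be registered in ITEM_PIPELINES; DropItem itself comes from scrapy.exceptions.

from scrapy.exceptions import DropItem


class CleaningPipeline(object):
    def process_item(self, item, spider):
        # trim stray whitespace around the title
        item['title'] = item['title'].strip()
        if not item.get('price'):
            # raising DropItem tells Scrapy to throw this item away
            raise DropItem('missing price in item')
        return item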
③settings.py: project-wide settings for the crawler, for example filling in request headers or telling Scrapy to process the scraped data with pipelines.py. The generated file is reproduced below, and two commonly edited settings are sketched right after it.
# -*- coding: utf-8 -*-
# Scrapy settings for xiaozhu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'xiaozhu'
SPIDER_MODULES = ['xiaozhu.spiders']
NEWSPIDER_MODULE = 'xiaozhu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'xiaozhu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'xiaozhu.middlewares.XiaozhuSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'xiaozhu.middlewares.XiaozhuDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'xiaozhu.pipelines.XiaozhuPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
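Two settings that are often changed in practice are the User-Agent and the download delay; both are standard Scrapy settings, but the values below are only illustrative and are not part of the generated file.

# identify the crawler with a browser-style User-Agent (illustrative value)
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
# wait 1 second between requests to the same website, to crawl politely
DOWNLOAD_DELAY = 1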
The fourth level holds the two Python files under the spiders module: __init__.py serves the same purpose as before, and xiaozhuspider.py is the file created earlier, where the spider code is written.
In short, using the Scrapy framework is like filling in blanks: completing the corresponding files of the Scrapy project is enough to build a crawler. Usually, setting up the following four files completes a crawling task.
- items.py: defines the fields.
- xiaozhuspider.py: fetches and extracts the data.
- pipelines.py: processes the scraped data.
- settings.py: configures the crawler.
3. Next, one page of listing information is scraped from Xiaozhu.
The fields to scrape are: title, address, price, lease type, suggested number of guests, and number of beds.
①items.py: the comment block at the top of the file does not need to be changed; replacing everything else completes the field definitions. A short note on how Item objects behave follows the code.
import scrapy


class XiaozhuItem(scrapy.Item):
    # define the fields for your item here:
    title = scrapy.Field()
    address = scrapy.Field()
    price = scrapy.Field()
    lease_type = scrapy.Field()
    suggestion = scrapy.Field()
    bed = scrapy.Field()
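As an aside, a scrapy.Item behaves much like a dictionary, except that only declared fields may be assigned. A tiny sketch, assuming the XiaozhuItem class above (the values are made up):

item = XiaozhuItem()
item['title'] = 'Cozy room'    # fine: title is a declared field
# item['city'] = 'Guangzhou'   # would raise KeyError, because city is not declared
print(dict(item))              # {'title': 'Cozy room'}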
②xiaozhuspider.py: the spider code itself. A small standalone Selector example follows the code.
from scrapy.spiders import CrawlSpider   # CrawlSpider is the parent class of the xiaozhu spider
from scrapy.selector import Selector     # Selector parses the data returned for a request; it is used much like the lxml library
from xiaozhu.items import XiaozhuItem    # XiaozhuItem is the class in items.py that defines the scraped fields


class xiaozhu(CrawlSpider):
    name = "xiaozhu"    # the spider's name, used when running the crawler: scrapy crawl xiaozhu
    start_urls = ['http://gz.xiaozhu.com/fangzi/26121722503.html']    # by default, the spider starts crawling from the links in start_urls

    def parse(self, response):     # response is the data returned for the requested page
        item = XiaozhuItem()       # initialize the item
        html = Selector(response)  # build a Selector to parse the response
        # the only difference from lxml: .extract() is needed to pull out the text
        title = html.xpath('//div[@class="pho_info"]/h4/em/text()').extract()[0]
        address = html.xpath('//div[@class="pho_info"]/p/span/text()').extract()[0].strip()
        price = html.xpath('//div[@class="day_l"]/span/text()').extract()[0]
        lease_type = html.xpath('//li[@class="border_none"]/h6/text()').extract()[0]
        suggestion = html.xpath('//h6[@class="h_ico2"]/text()').extract()[0]
        bed = html.xpath('//h6[@class="h_ico3"]/text()').extract()[0]
        item['title'] = title
        item['address'] = address
        item['price'] = price
        item['lease_type'] = lease_type
        item['suggestion'] = suggestion
        item['bed'] = bed
        yield item
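To see concretely why .extract() is needed, here is a small standalone example; the HTML snippet is made up for illustration:

from scrapy.selector import Selector

html_text = '<div class="pho_info"><h4><em>Cozy room near the metro</em></h4></div>'
sel = Selector(text=html_text)
# xpath() returns a SelectorList; extract() converts it to a list of strings
title = sel.xpath('//div[@class="pho_info"]/h4/em/text()').extract()[0]
print(title)   # Cozy room near the metro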
③pipelines.py: stores the scraped item in a TXT file. An alternative sketch that opens the file only once follows the code.
class XiaozhuPipeline(object):
    def process_item(self, item, spider):
        # append the scraped fields to a local text file, one field per line
        with open('F:/xiaozhu.txt', 'a+', encoding='utf-8') as f:
            f.write(item['title'] + '\n')
            f.write(item['address'] + '\n')
            f.write(item['price'] + '\n')
            f.write(item['lease_type'] + '\n')
            f.write(item['suggestion'] + '\n')
            f.write(item['bed'] + '\n')
        return item
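If reopening the file for every item feels wasteful, Scrapy pipelines also provide open_spider/close_spider hooks that run once at the start and end of the crawl; a sketch under that assumption, keeping the same output path:

class XiaozhuPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts: open the output file a single time
        self.file = open('F:/xiaozhu.txt', 'a+', encoding='utf-8')

    def close_spider(self, spider):
        # called once when the spider finishes: release the file handle
        self.file.close()

    def process_item(self, item, spider):
        for key in ('title', 'address', 'price', 'lease_type', 'suggestion', 'bed'):
            self.file.write(item[key] + '\n')
        return item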
④settings.py: uncomment the already-generated lines below so that the scraped items are handed to pipelines.py for processing (the number 300 is the pipeline's priority; values range from 0 to 1000 and lower numbers run first):
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'xiaozhu.pipelines.XiaozhuPipeline': 300,
}
4. Running the Scrapy crawler
Inside the xiaozhu folder, enter the following command:
scrapy crawl xiaozhu
After the run finishes, the scraped fields have been appended to the TXT file.
Alternatively, a main.py file can be created in the xiaozhu folder (the one that contains scrapy.cfg), so that the crawler can be started by simply running main.py. The code is as follows:
from scrapy import cmdline
cmdline.execute("scrapy crawl xiaozhu".split())
If "C:\Python36\python.exe" has been set as the default program for opening .py files, double-clicking main.py will run the crawler and produce the results.