1. Creating a scrapy project
scrapy startproject first                  # create the project
cd first
scrapy genspider chouti dig.chouti.com     # generate a spider file
scrapy crawl chouti --nolog                # run the spider, suppressing log output
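For reference, these commands generate roughly the standard scrapy layout:

first/
    scrapy.cfg
    first/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            chouti.py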
Fix for garbled Chinese output on Windows (add at the top of the script):
import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')
Scrapy's built-in selectors replace bs4's parsers:
- response.xpath
- from scrapy.selector import HtmlXPathSelector
  xph = HtmlXPathSelector(response)   # legacy API; newer scrapy uses scrapy.selector.Selector
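A minimal parse sketch using response.xpath inside the generated spider; the XPath expressions and field names are illustrative assumptions, not dig.chouti.com's actual markup:

import scrapy

class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['dig.chouti.com']
    start_urls = ['https://dig.chouti.com/']

    def parse(self, response):
        # each news entry; the class name is a hypothetical placeholder
        for row in response.xpath('//div[@class="item"]'):
            title = row.xpath('.//a/text()').extract_first()
            href = row.xpath('.//a/@href').extract_first()
            yield {'title': title, 'href': href}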
The spider scrapes the data; parse hands results back with yield items.
Items work like models: they define the fields being scraped.
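A minimal items.py sketch along those lines; the field names are assumptions:

import scrapy

class FirstItem(scrapy.Item):
    # declare the fields the spider fills in, like a model definition
    title = scrapy.Field()
    href = scrapy.Field()

In parse, yield FirstItem(title=..., href=...) instead of a plain dict.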
Pipelines do the persistence and must be registered in the settings file.
Pipeline hooks:
from_crawler(cls, crawler)   runs first; use it to read settings
    path = crawler.settings.get("PATH")   # setting names must be uppercase
    return cls(path)
__init__(self, path)   initialization
    self.path = path
open_spider(self, spider)   runs when the spider opens, before any items arrive
close_spider(self, spider)   runs when the spider closes
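Putting the hooks together, a sketch of a file-writing pipeline; PATH is an assumed custom setting and the output format is illustrative:

class FirstPipeline(object):
    def __init__(self, path):
        self.path = path
        self.f = None

    @classmethod
    def from_crawler(cls, crawler):
        # runs before __init__; pull the target path out of settings
        path = crawler.settings.get("PATH")
        return cls(path)

    def open_spider(self, spider):
        # called once when the spider opens
        self.f = open(self.path, 'a', encoding='utf-8')

    def process_item(self, item, spider):
        self.f.write(str(dict(item)) + '\n')
        return item   # hand the item to the next pipeline

    def close_spider(self, spider):
        # called once when the spider closes
        self.f.close()

Register it in settings.py under ITEM_PIPELINES (see the settings sketch at the end of these notes).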
Second-level downloads (scheduling follow-up requests):
from scrapy.http import Request
yield Request(url=page_url, callback=self.parse, meta={'cookiejar': True})
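A sketch of following pagination links back into parse; the XPath for the page links is an illustrative assumption:

from scrapy.http import Request

def parse(self, response):
    for href in response.xpath('//div[@class="pagination"]//a/@href').extract():
        page_url = response.urljoin(href)
        # already-seen URLs are skipped by scrapy's dupefilter
        yield Request(url=page_url, callback=self.parse, meta={'cookiejar': True})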
Dropping an item so it is not passed to the next pipeline's process_item:
from scrapy.exceptions import DropItem
raise DropItem()
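For example, a dedup pipeline sketch that drops items it has already seen; the 'href' field is an assumption carried over from the sketches above:

from scrapy.exceptions import DropItem

class DedupPipeline(object):
    def __init__(self):
        self.seen = set()

    def process_item(self, item, spider):
        if item['href'] in self.seen:
            # stop this item here; later pipelines never see it
            raise DropItem("duplicate item: %s" % item['href'])
        self.seen.add(item['href'])
        return item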
Ways to get cookies:
- From the response headers:
  response.headers.getlist("Set-Cookie")
- With CookieJar:
  from scrapy.http.cookies import CookieJar
  cookie_jar = CookieJar()
  cookie_jar.extract_cookies(response, response.request)
  cookie_jar._cookies.items()
- Let scrapy handle them automatically:
  meta={'cookiejar': True}
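A sketch of harvesting cookies with CookieJar and reusing them on a later request; the login endpoint is an assumed placeholder (and _cookies is a private attribute, as in the notes above):

import scrapy
from scrapy.http import Request
from scrapy.http.cookies import CookieJar

class LoginSpider(scrapy.Spider):
    name = 'login'
    start_urls = ['https://dig.chouti.com/']
    cookie_dict = {}

    def parse(self, response):
        # collect the cookies set on the first response
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for domain, paths in cookie_jar._cookies.items():
            for path, names in paths.items():
                for name, cookie in names.items():
                    self.cookie_dict[name] = cookie.value
        # send them back on the next request
        yield Request(url='https://dig.chouti.com/login',   # assumed endpoint
                      cookies=self.cookie_dict,
                      callback=self.after_login)

    def after_login(self, response):
        pass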
Dedup:
In the settings file, point DUPEFILTER_CLASS at your own class, e.g.
DUPEFILTER_CLASS = 'first.dupefilters.MyDupeFilter'
(the built-in default is 'scrapy.dupefilters.RFPDupeFilter')
Getting a unique fingerprint for a URL:
from scrapy.utils.request import request_fingerprint
unique = request_fingerprint(request)   # takes a Request object, not a bare URL string
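A minimal custom dupefilter sketch with the interface scrapy's scheduler expects; the module path must match the DUPEFILTER_CLASS setting above:

from scrapy.utils.request import request_fingerprint

class MyDupeFilter(object):
    def __init__(self):
        self.visited = set()

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        # return True to make the scheduler skip this request
        fp = request_fingerprint(request)
        if fp in self.visited:
            return True
        self.visited.add(fp)
        return False

    def open(self):            # called when the spider opens
        pass

    def close(self, reason):   # called when the spider closes
        pass

    def log(self, request, spider):
        pass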
USER_AGENT can be set in the settings file.
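For reference, the settings touched in these notes might look like this in settings.py; PATH and the pipeline/dupefilter paths are the assumed names used in the sketches above:

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
PATH = 'items.txt'                                        # custom setting read in from_crawler
ITEM_PIPELINES = {'first.pipelines.FirstPipeline': 300}
DUPEFILTER_CLASS = 'first.dupefilters.MyDupeFilter'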