Scrapy - Getting Started

Author: EasyNetCN | Published 2020-02-15 10:47

Scrapy is an open-source web crawling framework written in Python.

Official website: https://scrapy.org/

Project repository: https://github.com/scrapy/scrapy

Official documentation: https://scrapy.org/doc/

Install Scrapy

pip install scrapy

Create a crawler project

scrapy startproject {project_name}
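
Assuming the project name is myproject, the generated layout looks roughly like this:

myproject/
    scrapy.cfg            # deploy configuration file
    myproject/            # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider/downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings (edited below)
        spiders/          # spiders go here
            __init__.py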

Change into the newly created project directory and generate a spider

scrapy genspider {spider_name} {domain}
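
The generated spider is a minimal skeleton. Assuming a spider named example and the domain example.com, it looks roughly like this; the parse() method is where your extraction logic goes:

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    def parse(self, response):
        # called with the downloaded response for each start URL
        pass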

Run the spider

scrapy crawl {spider_name}

Run the spider and export the results to a file (CSV, XML, or JSON); the output format is inferred from the file extension

scrapy crawl {spider_name} -o {filename.csv}
scrapy crawl {spider_name} -o {filename.xml}
scrapy crawl {spider_name} -o {filename.json}
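
The feed export writes out whatever items the spider yields from parse(). A minimal sketch, with hypothetical CSS selectors and field names:

def parse(self, response):
    # yield one dict per matched element; dicts are serialized directly by the exporter
    for article in response.css('div.article'):
        yield {
            'title': article.css('h2::text').get(),
            'url': article.css('a::attr(href)').get(),
        }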

Edit settings.py to disable robots.txt compliance and to fix garbled (ASCII-escaped) characters in the JSON output

# -*- coding: utf-8 -*-

# Scrapy settings for ydyun360_crawler project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'ydyun360_crawler'

SPIDER_MODULES = ['ydyun360_crawler.spiders']
NEWSPIDER_MODULE = 'ydyun360_crawler.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'ydyun360_crawler (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'ydyun360_crawler.middlewares.Ydyun360CrawlerSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'ydyun360_crawler.middlewares.Ydyun360CrawlerDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'ydyun360_crawler.pipelines.Ydyun360CrawlerPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

FEED_EXPORT_ENCODING = 'utf-8'
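
Compared with the generated defaults, two changes matter here: ROBOTSTXT_OBEY = False stops Scrapy from skipping URLs disallowed by the target site's robots.txt, and FEED_EXPORT_ENCODING = 'utf-8' (added at the end) makes JSON exports contain readable UTF-8 text instead of ASCII-escaped \uXXXX sequences; CSV and XML exports already use UTF-8 by default.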
