
Scrapy: Getting Started

Author: EasyNetCN | Published 2020-02-15 10:47

    Scrapy is an open-source crawler framework written in Python.

    Official website: https://scrapy.org/

    Project repository: https://github.com/scrapy/scrapy

    Official documentation: https://scrapy.org/doc/

    Install Scrapy

    pip install scrapy
    

    Create a crawler project

    scrapy startproject {project_name}
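
    For reference, running the command with the hypothetical project name ydyun360_crawler (the same name that appears in the settings file below) generates roughly this layout:

    ydyun360_crawler/
        scrapy.cfg                # deploy/configuration file
        ydyun360_crawler/         # the project's Python module
            __init__.py
            items.py              # item definitions
            middlewares.py        # spider and downloader middlewares
            pipelines.py          # item pipelines
            settings.py           # project settings (edited below)
            spiders/              # spiders go here
                __init__.py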
    

    Change into the newly created project directory and generate a spider

    scrapy genspider {spider_name} {domain}
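
    As a sketch, assuming a spider named example for the domain example.com, genspider creates a file like this under the spiders/ directory:

    import scrapy


    class ExampleSpider(scrapy.Spider):
        name = 'example'
        allowed_domains = ['example.com']
        start_urls = ['http://example.com/']

        def parse(self, response):
            # parse() is called with the downloaded response for each start URL
            pass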
    

    Run the spider

    scrapy crawl {spider_name}
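
    The crawl command has to be run from inside the project directory. As an alternative, a spider can also be launched from a Python script; a minimal sketch, using a placeholder spider name:

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    # Load the project's settings.py and run the named spider
    process = CrawlerProcess(get_project_settings())
    process.crawl('example')  # placeholder spider name
    process.start()           # blocks until the crawl finishes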
    

    Run the spider and export the scraped items to a file (csv, xml, json)

    scrapy crawl {spider_name} -o {filename.csv}
    scrapy crawl {spider_name} -o {filename.xml}
    scrapy crawl {spider_name} -o {filename.json}
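
    The exported file contains whatever the spider yields from parse(). A minimal sketch, where the URL, selectors, and field names are purely illustrative:

    import scrapy


    class ExampleSpider(scrapy.Spider):
        name = 'example'
        start_urls = ['http://example.com/']  # placeholder URL

        def parse(self, response):
            # Each yielded dict becomes one record (CSV row / XML item / JSON object)
            for link in response.css('a'):
                yield {
                    'text': link.css('::text').get(),
                    'url': link.attrib.get('href'),
                }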
    

    Edit settings.py to disable robots.txt compliance (ROBOTSTXT_OBEY = False) and to fix garbled non-ASCII characters in the JSON output (FEED_EXPORT_ENCODING = 'utf-8')

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for ydyun360_crawler project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'ydyun360_crawler'
    
    SPIDER_MODULES = ['ydyun360_crawler.spiders']
    NEWSPIDER_MODULE = 'ydyun360_crawler.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'ydyun360_crawler (+http://www.yourdomain.com)'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'ydyun360_crawler.middlewares.Ydyun360CrawlerSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'ydyun360_crawler.middlewares.Ydyun360CrawlerDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    #ITEM_PIPELINES = {
    #    'ydyun360_crawler.pipelines.Ydyun360CrawlerPipeline': 300,
    #}
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    FEED_EXPORT_ENCODING = 'utf-8'
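
    Instead of editing the project-wide settings.py, the same two settings can also be applied to a single spider through its custom_settings attribute; a minimal sketch:

    import scrapy


    class ExampleSpider(scrapy.Spider):
        name = 'example'
        # Per-spider overrides; these take precedence over the project settings.py
        custom_settings = {
            'ROBOTSTXT_OBEY': False,
            'FEED_EXPORT_ENCODING': 'utf-8',
        }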
    
