
Basic Usage of Scrapy

Author: _Clown_ | Published 2019-01-06 20:15

    First, run the following command to create a Scrapy project. Note that the code later in this post imports from the ShuangSeqiu package, so the project must be created under that name (kaijiang is the spider, not the project):

    scrapy startproject ShuangSeqiu

    Then generate the spider inside the project:

    scrapy genspider kaijiang kaijiang.500.com

    The core files are:

    items.py: in the ShuangSeqiu/ package directory

    middlewares.py: in the ShuangSeqiu/ package directory

    pipelines.py: in the ShuangSeqiu/ package directory

    kaijiang.py: in the spiders directory

    settings.py: in the ShuangSeqiu/ package directory
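
    For orientation, the generated layout looks roughly like this (a sketch; minor details vary with the Scrapy version):

    ShuangSeqiu/
        scrapy.cfg
        ShuangSeqiu/
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                kaijiang.py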

    My example scrapes Shuangseqiu (双色球) lottery draw results; the entry point is http://kaijiang.500.com

    The goal of the project: starting from an initial URL, fetch the HTML source, extract further draw URLs of the same kind from it, and crawl those iteratively.

    We also need to create a database, because the extracted content will be stored in MySQL.

    Create the database:
    create database shuangseqiu charset=utf8;
    Select it:
    use shuangseqiu;
    Create the table:
    create table kaijiang(times varchar(100),hong varchar(100),lan varchar(100));

    Then just follow the steps below.

    First, the contents of items.py:

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://doc.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class ShuangseqiuItem(scrapy.Item):
        # define the fields for your item here like:
        url = scrapy.Field()
        times = scrapy.Field()
        hong = scrapy.Field()
        lan = scrapy.Field()
    
    

    This file declares the fields to be extracted from the HTML; only fields declared here can be set on an item.
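
    A quick sketch of how such an item behaves (the values are illustrative):

    item = ShuangseqiuItem()
    item['times'] = '19002'  # fine: 'times' is a declared field
    item['foo'] = 1          # raises KeyError: 'foo' was never declared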

    Second, kaijiang.py under the spiders directory (mine happens to be named kaijiang.py):

    # -*- coding: utf-8 -*-
    import scrapy
    from ShuangSeqiu.items import ShuangseqiuItem
    
    class KaijiangSpider(scrapy.Spider):
        name = 'kaijiang'
        allowed_domains = ['kaijiang.500.com']
        start_urls = ['http://kaijiang.500.com/shtml/ssq/19002.shtml']
    
        def parse(self, response):
            # Collect the link to every past draw from the issue drop-down list
            draw_urls = response.xpath('//div[@class="iSelectList"]//a/@href').extract()
            for url in draw_urls:
                yield scrapy.Request(url=url, callback=self.zhongjiang)

        def zhongjiang(self, response):
            item = ShuangseqiuItem()
            # Draw information shown in the page header
            times = response.xpath('//span[@class="span_right"]/text()').extract_first('')
            # The ball list holds 7 two-digit numbers: 6 red balls, then 1 blue
            balls = ','.join(response.xpath('//div[@class="ball_box01"]/ul//li/text()').extract())
            hong = balls[:-3]  # drop the trailing ',NN' to keep the 6 red balls
            lan = balls[-2:]   # the last two characters are the blue ball

            item['times'] = times
            item['hong'] = hong
            item['lan'] = lan
            yield item
    
    

    This file defines the spider: the start URL, how the links to the individual draw pages are followed, and how each page is parsed into an item.
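
    The string slicing in zhongjiang deserves a closer look; a minimal sketch with illustrative numbers (the real values come from the page):

    balls = ['01', '05', '12', '18', '22', '30', '09']  # 6 reds, then the blue
    joined = ','.join(balls)  # '01,05,12,18,22,30,09'
    hong = joined[:-3]        # '01,05,12,18,22,30' -- strips the trailing ',09'
    lan = joined[-2:]         # '09'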

    Third, pipelines.py:

    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    
    
    import pymysql
    
    
    class ShuangseqiuPipeline(object):
    
        def __init__(self, host, port, user, pwd, db):
            # Keyword arguments: pymysql's positional parameter order has
            # changed across versions, so spelling them out is safer.
            self.client = pymysql.connect(host=host, port=port, user=user,
                                          password=pwd, database=db, charset='utf8')
            self.cursor = self.client.cursor()
    
        @classmethod
        def from_crawler(cls, crawler):
            host = crawler.settings['MYSQL_HOST']
            port = crawler.settings['MYSQL_PORT']
            user = crawler.settings['MYSQL_USER']
            pwd = crawler.settings['MYSQL_PWD']
            db = crawler.settings['MYSQL_DB']
    
            return cls(host, port, user, pwd, db)
    
        def process_item(self, item, spider):
            insert_sql = """
                   insert into kaijiang(times, hong, lan)
                   values (%s, %s, %s)
            """
            try:
                self.cursor.execute(insert_sql, (item['times'], item['hong'], item['lan']))
                self.client.commit()
            except Exception as err:
                print(err)
                self.client.rollback()
            return item

        def close_spider(self, spider):
            # Release the connection cleanly when the crawl ends
            self.cursor.close()
            self.client.close()
    
    

    The items yielded by parse and zhongjiang in step two arrive here; this is where the scraped data gets persisted, whether to MySQL as above or to a file.
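
    If you would rather write to a file than to a database, a minimal sketch of such a pipeline could look like this (JsonWriterPipeline is a hypothetical name, and it would also need to be registered in ITEM_PIPELINES):

    import json

    class JsonWriterPipeline(object):  # hypothetical alternative to the MySQL pipeline

        def open_spider(self, spider):
            self.file = open('items.jl', 'w', encoding='utf8')

        def close_spider(self, spider):
            self.file.close()

        def process_item(self, item, spider):
            # one JSON object per line (JSON Lines format)
            self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
            return item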

    Fourth, the contents of middlewares.py (these are the defaults Scrapy generates, left unchanged here):

    # -*- coding: utf-8 -*-
    
    # Define here the models for your spider middleware
    #
    # See documentation in:
    # https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    
    from scrapy import signals
    
    
    class ShuangseqiuSpiderMiddleware(object):
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the spider middleware does not modify the
        # passed objects.
    
        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s
    
        def process_spider_input(self, response, spider):
            # Called for each response that goes through the spider
            # middleware and into the spider.
    
            # Should return None or raise an exception.
            return None
    
        def process_spider_output(self, response, result, spider):
            # Called with the results returned from the Spider, after
            # it has processed the response.
    
            # Must return an iterable of Request, dict or Item objects.
            for i in result:
                yield i
    
        def process_spider_exception(self, response, exception, spider):
            # Called when a spider or process_spider_input() method
            # (from other spider middleware) raises an exception.
    
            # Should return either None or an iterable of Response, dict
            # or Item objects.
            pass
    
        def process_start_requests(self, start_requests, spider):
            # Called with the start requests of the spider, and works
            # similarly to the process_spider_output() method, except
            # that it doesn’t have a response associated.
    
            # Must return only requests (not items).
            for r in start_requests:
                yield r
    
        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)
    
    
    class ShuangseqiuDownloaderMiddleware(object):
        # Not all methods need to be defined. If a method is not defined,
        # scrapy acts as if the downloader middleware does not modify the
        # passed objects.
    
        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            s = cls()
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s
    
        def process_request(self, request, spider):
            # Called for each request that goes through the downloader
            # middleware.
    
            # Must either:
            # - return None: continue processing this request
            # - or return a Response object
            # - or return a Request object
            # - or raise IgnoreRequest: process_exception() methods of
            #   installed downloader middleware will be called
            return None
    
        def process_response(self, request, response, spider):
            # Called with the response returned from the downloader.
    
        # Must either:
            # - return a Response object
            # - return a Request object
            # - or raise IgnoreRequest
            return response
    
        def process_exception(self, request, exception, spider):
            # Called when a download handler or a process_request()
            # (from other downloader middleware) raises an exception.
    
            # Must either:
            # - return None: continue processing this exception
            # - return a Response object: stops process_exception() chain
            # - return a Request object: stops process_exception() chain
            pass
    
        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)
    
    

    Fifth, settings.py:

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for ShuangSeqiu project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://doc.scrapy.org/en/latest/topics/settings.html
    #     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'ShuangSeqiu'
    
    SPIDER_MODULES = ['ShuangSeqiu.spiders']
    NEWSPIDER_MODULE = 'ShuangSeqiu.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'ShuangSeqiu.middlewares.ShuangseqiuSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'ShuangSeqiu.middlewares.ShuangseqiuDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://doc.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'ShuangSeqiu.pipelines.ShuangseqiuPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    
    # MySQL connection settings read by ShuangseqiuPipeline.from_crawler
    MYSQL_HOST = '127.0.0.1'
    MYSQL_PORT = 3306
    MYSQL_USER = 'root'
    MYSQL_PWD = '1'
    MYSQL_DB = 'shuangseqiu'
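
    Before launching the crawl, it is worth checking that these credentials actually work; a minimal smoke test (assuming the values above):

    import pymysql

    client = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                             password='1', database='shuangseqiu', charset='utf8')
    with client.cursor() as cursor:
        cursor.execute('select version()')
        print(cursor.fetchone())  # the server version tuple, if the connection works
    client.close()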
    
    

    All done! Now launch the spider:

    scrapy crawl kaijiang
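
    If you just want to inspect the items without the MySQL pipeline, Scrapy's built-in feed export can write them to a file instead:

    scrapy crawl kaijiang -o kaijiang.json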

    Summary of the steps:

    Basic Scrapy usage:

    1. Create the project from the command line:
       scrapy startproject <project-name>
    2. Open the project in PyCharm.
    3. Create the spider from the command line:
       scrapy genspider <spider-name> <domain>
    4. Configure settings.py (the exact setting names matter):
       ROBOTSTXT_OBEY = False
       DOWNLOAD_DELAY = 0.5
       COOKIES_ENABLED = False
    5. Optionally add a custom user-agent middleware: paste in a ready-made one, or write your own after studying the source (see the sketch after this list).
    6. Parse the data:
       1. Roughly plan how many callback functions you need.
       2. Jump from one callback to the next with yield scrapy.Request(url, callback, meta, dont_filter).
       3. Pack the data into items, and remember to yield item.
       4. Write a custom pipeline to store the data in a database or file.
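
    For step 5, a minimal sketch of a random user-agent downloader middleware (the class name and list contents are illustrative, and it would need to be enabled via DOWNLOADER_MIDDLEWARES in settings.py):

    import random

    class RandomUserAgentMiddleware(object):  # hypothetical name

        USER_AGENTS = [
            # fill in as many real user-agent strings as you like
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
        ]

        def process_request(self, request, spider):
            # overwrite the User-Agent header on every outgoing request;
            # returning None lets the request continue through the chain
            request.headers['User-Agent'] = random.choice(self.USER_AGENTS)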
