Scraping a Douban Book List with Scrapy


Author: HouserLin | Published 2016-06-11 15:50

    Based on: http://www.cnblogs.com/voidsky/p/5490798.html

    Create the Scrapy project: scrapy startproject doubanshu
    cd into the directory the prompt points to, then run scrapy genspider doubanshu_spider root_url to generate a spider that inherits from the basic Spider class; the skeleton it produces is shown below.
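
    For reference, genspider fills in a template roughly like this (approximate; the exact skeleton varies by Scrapy version, and root_url is just the placeholder passed on the command line):

    # -*- coding: utf-8 -*-
    import scrapy


    class DoubanshuSpiderSpider(scrapy.Spider):
        name = "doubanshu_spider"
        allowed_domains = ["root_url"]
        start_urls = ['http://root_url/']

        def parse(self, response):
            pass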

    Spider code:

    # -*- coding: utf-8 -*-
    import re

    import scrapy

    from doubanshu.items import DoubanshuItem


    class DoubanshuSpiderSpider(scrapy.Spider):
        name = "doubanshu_spider"
        # allowed_domains takes bare domains, not full URLs:
        # allowed_domains = ["douban.com"]
        start_urls = (
            'https://www.douban.com/doulist/1264675/',
        )

        def parse(self, response):
            # Each book on the list page sits in a div whose id starts with "item".
            books = response.xpath(
                '//*[@id="content"]/div/div[1]/div[starts-with(@id,"item")]')
            self.logger.info('%d books found on %s', len(books), response.url)

            for each in books:
                # Build a fresh item per book; reusing one instance across
                # yields would have every yield point at the same object.
                item = DoubanshuItem()
                item['title'] = each.xpath(
                    'div/div[2]/div[@class="title"]/a/text()'
                ).extract_first('').replace('・', '').replace(' ', '').replace('\n', '')
                item['rate'] = each.xpath(
                    'div/div[2]/div[@class="rating"]/span[3]/text()'
                ).extract_first('').replace('（', '').replace('）', '').replace('\n', '')
                # The author line has no class of its own, so fall back to a
                # regex over the entry's raw HTML.
                author = re.search('<div class="abstract">(.*?)<br>',
                                   each.extract(), re.S)
                item['author'] = (author.group(1).replace('•', '').replace(' ', '')
                                  .replace('・', '').replace('\n', '')) if author else ''
                yield item

            # Follow the "next page" link, if any, back into this same callback.
            next_page = response.xpath(
                '//*[@id="content"]/div/div[1]/div[31]/span[@class="next"]/a/@href'
            ).extract_first()
            if next_page:
                yield scrapy.Request(next_page, callback=self.parse)
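
    Run it from the project root with scrapy crawl doubanshu_spider; every yielded item is handed to the feed exporter configured in settings.py.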
                
            
    
    

    Items code:

    # -*- coding: utf-8 -*-
    
    # Define here the models for your scraped items
    #
    # See documentation in:
    # http://doc.scrapy.org/en/latest/topics/items.html
    
    import scrapy
    
    
    class DoubanshuItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        title = scrapy.Field()
        rate = scrapy.Field()
        author = scrapy.Field()
        
    
    

    Settings code:

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for doubanshu project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'doubanshu'
    
    SPIDER_MODULES = ['doubanshu.spiders']
    NEWSPIDER_MODULE = 'doubanshu.spiders'
    
    
    FEED_URI = 'file:///E:/pythonExercises/20160611/doubanshudouban.csv'
    FEED_FORMAT = 'csv'
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'doubanshu (+http://www.yourdomain.com)'
    
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = True
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    # Configure a delay for requests for the same website (default: 0)
    # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    # Enable or disable spider middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'doubanshu.middlewares.MyCustomSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'doubanshu.middlewares.MyCustomDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    #ITEM_PIPELINES = {
    #    'doubanshu.pipelines.SomePipeline': 300,
    #}
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    
    

    Pipelines
    Since nothing is written to a database, no pipeline processing is done here; a minimal sketch of what one would look like follows.
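
    Hypothetical sketch only (this class is not part of the project, and the DropItem rule is just an illustration):

    # pipelines.py -- hypothetical, not enabled in this project
    from scrapy.exceptions import DropItem


    class DoubanshuPipeline(object):
        def process_item(self, item, spider):
            # Example rule: discard entries whose author could not be parsed.
            if not item.get('author'):
                raise DropItem('missing author: %s' % item.get('title'))
            return item

    Enabling it would mean uncommenting the ITEM_PIPELINES block in settings.py and pointing it at this class.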

    Summary:


    1. Pagination logic: wrap the next page's URL in a Request object; when its response comes back, Scrapy calls the parse callback on it again. The relevant lines from the spider above, in isolation:
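
    # Yielding a Request whose callback is parse itself makes each
    # "next" page re-enter the same method.
    if next_page:
        yield scrapy.Request(next_page, callback=self.parse)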

    2. A 403 error means the request is forbidden: the server understood it but refuses to fulfil it, typically a web-access error caused by file or directory permission settings on the server. The fix here is to send a request header that impersonates a browser.
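
    The USER_AGENT line in settings.py above does this globally; as a sketch, the same header could instead be set for just this spider via Scrapy's standard custom_settings attribute:

    class DoubanshuSpiderSpider(scrapy.Spider):
        name = "doubanshu_spider"
        # Per-spider override, equivalent to USER_AGENT in settings.py.
        custom_settings = {
            'USER_AGENT': ('Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 '
                           '(KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'),
        }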


    3. How to export items to CSV:

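    The two settings that drive the export, repeated from the settings file above:

    FEED_URI = 'file:///E:/pythonExercises/20160611/doubanshudouban.csv'
    FEED_FORMAT = 'csv'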

    The exported CSV comes out garbled in Excel. Open it in Sublime Text, re-save it with the encoding set to UTF-8 with BOM, and then open it in Excel again. The same fix can be scripted, as sketched below.
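
    A sketch of the same re-encoding in Python (it assumes the filename matches the FEED_URI above):

    # Rewrite the file with a UTF-8 BOM so Excel auto-detects the encoding.
    with open('doubanshudouban.csv', encoding='utf-8') as src:
        data = src.read()
    with open('doubanshudouban.csv', 'w', encoding='utf-8-sig') as dst:
        dst.write(data)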

    4. Handling books whose entries have been deleted (sketched below):

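    A sketch, assuming a deleted book leaves its list div in place but without the usual title node: check the extraction result before indexing into it (extract_first with a default, as used in the spider above, achieves the same thing).

    # Inside the for-loop of parse():
    title_nodes = each.xpath('div/div[2]/div[@class="title"]/a/text()').extract()
    if not title_nodes:
        continue  # the book's page has been removed; skip this entry
    title = title_nodes[0]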

    Shortcoming: parsing the book authors still hits encoding problems; here they are only worked around with simple character replacements.
