Liu Shuo's Scrapy Notes (5: On Item Encapsulation)

Author: 费云帆 | Published 2018-11-28 09:17

"Metadata" here means information an item carries about itself and passes along to other components.
The Item class Scrapy provides behaves like a Python dict, but is more convenient for encapsulating scraped data. Usage:

import scrapy

class BookItem(scrapy.Item):
    # Fields play the role of dictionary keys
    name = scrapy.Field()
    price = scrapy.Field()

# Values can be assigned at instantiation time
book1 = BookItem(name='The story', price='22.06元')
print(book1)
>>>{'name': 'The story', 'price': '22.06元'}

# Or instantiate first and assign later (the usual pattern inside Scrapy)
book2 = BookItem()
print(book2)
>>>{}
book2['name'] = 'How to learn Python'
book2['price'] = '66.60元'
print(book2)
>>>{'name': 'How to learn Python', 'price': '66.60元'}

# The familiar dict methods work too:
print(book1.get('price', '65.00元'))
>>>22.06元

# A quick refresher on the dict interface:
print(book1)
>>>{'name': 'The story', 'price': '22.06元'}
print(list(book1))
>>>['price', 'name']
print(list(book1.keys()))
>>>['price', 'name']
print(list(book1.values()))
>>>['22.06元', 'The story']
print(list(book1.items()))
>>>[('price', '22.06元'), ('name', 'The story')]

Now let's modify the earlier example that crawls books.toscrape.com:

  • items.py:
import scrapy

class BookItem(scrapy.Item):
    name=scrapy.Field()
    price=scrapy.Field()
  • The spider:
# -*- coding: utf-8 -*-
import scrapy
from Book.items import BookItem

class BooksSpider(scrapy.Spider):
    name = 'books'
    allowed_domains = ['books.toscrape.com']
    #start_urls = ['http://books.toscrape.com/']
    def start_requests(self):
        yield scrapy.Request(
            'http://books.toscrape.com',
            callback=self.parse_book,
            headers={'User-Agent':'Mozilla/5.0'},
            dont_filter=True
        )

    def parse_book(self, response):
        books = response.xpath('//li[@class="col-xs-6 col-sm-4 col-md-3 col-lg-3"]/article')
        for book in books:
            book_info = BookItem()
            # extract_first() yields a single string rather than a one-element list
            book_info['name'] = book.xpath('./h3/a/text()').extract_first()
            book_info['price'] = book.xpath('./div[2]/p[1]/text()').extract_first()
            yield book_info
        next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse_book)

# Export to CSV with: scrapy crawl books -o first_scrapy.csv
# Skip the CSV header row and print numbered lines on screen:
#   sed -n '2,$p' first_scrapy.csv | cat -n
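The `response.urljoin(next_page)` step resolves the relative `href` of the "next" link against the current page's URL; it follows the same resolution rules as the standard library's `urljoin`, which we can demonstrate without running Scrapy:

```python
from urllib.parse import urljoin

# The "next" link on books.toscrape.com is relative, e.g. 'page-2.html';
# it must be resolved against the URL of the page it appeared on.
base = 'http://books.toscrape.com/catalogue/page-1.html'
print(urljoin(base, 'page-2.html'))
# http://books.toscrape.com/catalogue/page-2.html

print(urljoin('http://books.toscrape.com/', 'catalogue/page-2.html'))
# http://books.toscrape.com/catalogue/page-2.html
```

This is why the spider can pass `next_page` straight to `scrapy.Request` after the `urljoin` call.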

On the metadata of scrapy.Field(): my understanding is that Field is ordinarily used just to declare keys, but metadata lets you attach extra information to a field directly:

import scrapy

class ExampleItem(scrapy.Item):
    x = scrapy.Field(a='hello', b=[1, 2, 3])
    y = scrapy.Field(a=lambda x: x**2)

e = ExampleItem(x=100, y=200)
print(e.fields)
>>>{'y': {'a': <function ExampleItem.<lambda> at 0x0000000000DAB840>}, 'x': {'b': [1, 2, 3], 'a': 'hello'}}
print(e.get('x'))
>>>100
print(issubclass(scrapy.Field,dict))
>>>True
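That last `issubclass` check is the key: scrapy.Field subclasses dict, so field metadata is nothing more than dict entries keyed by the keyword arguments you pass in. A dependency-free sketch of the same idea:

```python
# Mirror of scrapy.Field's idea: a dict subclass whose keyword
# arguments become the field's metadata entries.
class Field(dict):
    pass

f = Field(a='hello', b=[1, 2, 3])
print(issubclass(Field, dict))  # True
print(f['a'], f['b'])           # hello [1, 2, 3]
```

This is why `e.fields` above prints as an ordinary nested dict: each Field instance simply stores its metadata as key/value pairs.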

An example from the book:
book['authors'] = ['李雷', '韩梅梅', '吉姆']
Suppose we want one of these serialized forms on export:
1. '李雷|韩梅梅|吉姆'
2. '李雷,韩梅梅,吉姆'
3. "['李雷','韩梅梅','吉姆']"
For form 1, we only need to declare the authors field with a serializer:
authors = scrapy.Field(serializer=lambda x: '|'.join(x))
(For form 2, use ','.join; for form 3, use str.) See the book for how the underlying source implements this.
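A minimal sketch, without Scrapy, of how an exporter can look up and apply serializer metadata at export time; `serialize_field` here is a hypothetical stand-in for the per-field lookup Scrapy's exporters perform, and `fields` imitates the metadata dict shown earlier:

```python
# Field metadata as an exporter would see it: the 'serializer' entry
# is looked up per field and applied to the value before writing.
fields = {'authors': {'serializer': lambda x: '|'.join(x)}}
item = {'authors': ['李雷', '韩梅梅', '吉姆']}

def serialize_field(name, value):
    # Fall back to the identity function when no serializer is declared
    serializer = fields.get(name, {}).get('serializer', lambda v: v)
    return serializer(value)

print(serialize_field('authors', item['authors']))  # 李雷|韩梅梅|吉姆
```

Fields without a serializer pass through unchanged, which matches the behavior described above: the metadata only takes effect for fields that declare it.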


Source: https://www.haomeiwen.com/subject/cgwoqqtx.html