Crawling Suning.com Books Site-Wide with Scrapy

Author: 楚糖的糖 | Published 2018-11-08 20:14

    Analysis:
    The URL stored in item["h_3"] was worked out by capturing and inspecting the JSON/AJAX requests made by the list page.
    Printing item["h_3"] shows the first-page link of the product list for each book category.
    The page source is matched against param.pageNumbers=(.*?); and param.currentPage =(.*?); to decide whether a next page exists; if it does, the spider requests item["h_3"] with the page number increased by 1 (a small sketch of both tricks follows).
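
    A minimal, self-contained sketch of those two tricks (the category href and the inline-script fragment below are made-up examples for illustration; the real values come from the crawled pages):

    import re

    # made-up sub-category href, assumed to look like the links on https://book.suning.com/
    s_href = "https://list.suning.com/1-502325-0.html"
    ci = s_href[26:32]  # the slice [26:32] picks out the 6-digit category id, here "502325"
    h_3 = "http://list.suning.com/emall/showProductList.do?ci=" + ci + "&pg=03&cp=2"
    print(h_3)

    # made-up fragment of the inline script embedded in a list page
    script = "param.pageNumbers=87; param.currentPage =2;"
    page_count = int(re.findall("param.pageNumbers=(.*?);", script)[0])
    current_page = int(re.findall("param.currentPage =(.*?);", script)[0])
    if current_page < page_count:
        print("next page:", current_page + 1)  # there is another page to request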

    # -*- coding: utf-8 -*-
    import re
    from copy import deepcopy
    import scrapy
    
    
    class SnSpider(scrapy.Spider):
        name = 'sn2'
        allowed_domains = ['suning.com']
        start_urls = ['https://book.suning.com/']
    
        def parse(self, response):
            li_list = response.xpath("//div[@class='menu-list']//dl")
            for li in li_list:
                item = {}
                item["b_cate"] = li.xpath(".//dt/h3/a/text()").extract_first()  # 大分类
                a_list = li.xpath("./dd/a")  # 小分类
                for a in a_list:
                    item["s_href"] = a.xpath("./@href").extract_first()  # 小分类的链接
                    item["s_cate"] = a.xpath("./text()").extract_first()  # 小分类名字
                    item["h_2"] = item["s_href"][26:32]
                    item["h_3"] = "http://list.suning.com/emall/showProductList.do?ci=" + item["h_2"] + '&pg=03&cp=2'
                    print(item["h_3"])
    
                    if item["h_3"] is not None:
                        print(item["h_3"])
                        yield scrapy.Request(item["h_3"], callback=self.parse_book_list, meta={"item": item})
    
    
        def parse_book_list(self, response):
            item = response.meta["item"]
            # print(response.body.decode())
            li_list = response.xpath("//ul[@class='clearfix']/li")
            for li in li_list:
                item["book_commit"] = li.xpath(".//div[@class='res-info']/p[3]/a/text()[1]").extract_first()  # favorites count
                item["book_desc"] = li.xpath(".//div[@class='res-info']/p[2]/a/em/text()").extract_first()  # book description
                # item["book_price"] = li.xpath(".//div[@class='res-info']/p/em[1]").extract_first()  # list price
                item["book_name"] = li.xpath(".//div[@class='res-img']//a[@target='_blank']/img/@alt").extract_first()  # book title
                item["book_image"] = li.xpath(".//div[@class='res-img']//a[@target='_blank']/@href").extract_first()  # detail-page link (the a/@href, not the cover image)
                if item["book_image"] is not None:
                    item["book_image"] = "https:" + item["book_image"]
                    yield scrapy.Request(item["book_image"], callback=self.parse_book_detail, meta={"item": deepcopy(item)})
    
            # the list page embeds its paging state in an inline script
            page_count = int(re.findall("param.pageNumbers=(.*?);", response.body.decode())[0])
            current_page = int(re.findall("param.currentPage =(.*?);", response.body.decode())[0])
            if current_page < page_count:
                # h_3 always ends with "cp=2" here, so dropping that last character
                # and appending the next page number builds the next-page URL
                next_url = item["h_3"][:-1] + '{}'.format(current_page + 1)
                yield scrapy.Request(
                    next_url,
                    callback=self.parse_book_list,
                    meta={"item": response.meta["item"]}
                )
    
        def parse_book_detail(self, response):
            item = response.meta["item"]
            book_detail = response.xpath("//ul[@class='bk-publish clearfix']/li").extract()
            for book1 in book_detail:
                # strip whitespace and the <span> wrappers so the regexes below can match
                book2 = book1.replace("\n", "").replace("\r", "").replace("\t", "").replace("</span>", "").replace("<span>", "")
                item["book_author"] = re.findall("作者:(.*?)</li>", book2)
                item["book_author"] = item["book_author"][0] if len(item["book_author"]) > 0 else None
                item["book_press"] = re.findall("出版社:(.*?)</li>", book2)
                item["book_press"] = item["book_press"][0] if len(item["book_press"]) > 0 else None
                item["publish_time"] = re.findall("出版时间:(.*?)</li>", book2)
                item["publish_time"] = item["publish_time"][0] if len(item["publish_time"]) > 0 else None
                print(item)  # printed only; yield item here if pipelines / feed exports should receive it
    
    
    

    To run the crawl, enter:

    scrapy crawl sn2
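
    As written, parse_book_detail only prints each item; if you change that print(item) to yield item, Scrapy's built-in feed export can save the results directly (the output filename here is just an example):

    scrapy crawl sn2 -o books.jl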
