Scraping the WeChat Mini Program Community with Scrapy (CrawlSpider)

Author: 小董不太懂 | Published 2019-07-25 17:08

What makes CrawlSpider more powerful than a plain scrapy Spider is pagination: with an ordinary Spider, once a page has been scraped you have to yield the request for the next page yourself, whereas with CrawlSpider you only declare a set of rules, and every URL that matches a rule is downloaded automatically while non-matching URLs are skipped.
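For contrast, here is a minimal sketch of how a plain Spider would have to follow the pagination by hand (the next-page selector is hypothetical and only illustrates the manual yield):

import scrapy

class PlainListSpider(scrapy.Spider):
    # hypothetical spider, shown only to contrast with the CrawlSpider below
    name = 'plain_list'
    start_urls = ['http://www.wxapp-union.com/portal.php?mod=list&catid=2']

    def parse(self, response):
        # ... parse the articles on the current list page here ...
        # hypothetical selector for the "next page" link
        next_page = response.xpath('//a[@class="nxt"]/@href').get()
        if next_page:
            # with a plain Spider you have to yield the follow-up request yourself
            yield response.follow(next_page, callback=self.parse)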

Target site: http://www.wxapp-union.com/. We will scrape the title, author, publication date, and content of every article in the tutorial section.

Steps to create a CrawlSpider:

  • scrapy startproject <project name>
  • cd <project name>
  • scrapy genspider -t crawl <spider name> <domain>
    Once the project is created, add a run_start file in the project root so the spider can be launched and debugged quickly (the concrete commands for this project are given below).
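For this project those commands work out to scrapy startproject wxapp, then cd wxapp, and finally scrapy genspider -t crawl wxapp_spider wxapp-union.com, which generates the wxapp_spider.py template filled in below; the run_start.py script itself is listed at the end of the post.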


As for analysing the target pages, I won't go through it step by step here; I'll only point out the important parts.


Rule and LinkExtractor determine exactly where the crawler goes:

  • How to choose the allow pattern:
    the regular expression only needs to confine the crawler to the URLs we want; it must not also match any other URLs.
  • When to use follow:
    if the URLs matched by the current rule should themselves be crawled for further links, set follow=True; otherwise set it to False.
  • When to use callback:
    if a matched page is only used to discover more URLs and its own data is not needed, no callback is required; if you want to extract the data on the page behind the URL, you must specify a callback. (A quick way to test an allow pattern is sketched below.)
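Before wiring a pattern into a Rule, it can be checked interactively. A minimal sketch using scrapy shell with the article pattern from the spider below:

# started with: scrapy shell "http://www.wxapp-union.com/portal.php?mod=list&catid=2"
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor(allow=r'.*union.com/article-.*-1\.html')
links = le.extract_links(response)   # Link objects for every matching <a> on the page
for link in links[:5]:
    print(link.url)
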
Below is the complete code:
  • wxapp_spider.py:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from wxapp.items import WxappItem


class WxappSpiderSpider(CrawlSpider):
    name = 'wxapp_spider'
    allowed_domains = ['wxapp-union.com']
    start_urls = ['http://www.wxapp-union.com/portal.php?mod=list&catid=2']

    rules = (
        Rule(LinkExtractor(allow=r'.*mod=list&catid=\d'), follow=True),
        Rule(LinkExtractor(allow=r'.*union.com/article-.*-1\.html'), callback='parse_detail', follow=False)
    )

    def parse_detail(self, response):
        # called by the second Rule for every article page that matches
        title = response.xpath('//h1[@class="ph"]/text()').get()
        author_p = response.xpath('//p[@class="authors"]')
        author = author_p.xpath('./a/text()').get()
        pub_time = author_p.xpath('./span/text()').get()
        # some articles wrap the body in an extra <div>, others do not
        article_content = response.xpath('//td[@id="article_content"]/div//text()').extract()
        if not article_content:
            article_content = response.xpath('//td[@id="article_content"]//text()').extract()
        content = ''.join(article_content).strip()
        item = WxappItem(title=title, author=author, pub_time=pub_time, content=content)
        return item

  • settings.py:
# -*- coding: utf-8 -*-

# Scrapy settings for wxapp project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'wxapp'

SPIDER_MODULES = ['wxapp.spiders']
NEWSPIDER_MODULE = 'wxapp.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'wxapp (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'wxapp.middlewares.WxappSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'wxapp.middlewares.WxappDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'wxapp.pipelines.WxappPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
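Compared with the generated template, only a handful of settings are changed here: ROBOTSTXT_OBEY is switched off, DOWNLOAD_DELAY is set to 1 second so the site is crawled politely, a browser User-Agent is added to DEFAULT_REQUEST_HEADERS, and WxappPipeline is enabled in ITEM_PIPELINES.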

  • items.py:
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class WxappItem(scrapy.Item):
    title = scrapy.Field()
    author = scrapy.Field()
    pub_time = scrapy.Field()
    content = scrapy.Field()

  • pipelines.py:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.exporters import JsonLinesItemExporter

class WxappPipeline(object):

    def __init__(self):
        # the exporter writes each item as one JSON object per line
        self.fp = open('wxapp.json', 'wb')
        self.exporter = JsonLinesItemExporter(self.fp, ensure_ascii=False, encoding='utf-8')

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        # Scrapy passes the spider instance when the spider finishes
        self.fp.close()
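
Because JsonLinesItemExporter writes one JSON object per line, the exported file can be read back with a few lines of plain Python (a minimal sketch; the fields are the ones defined in WxappItem):

import json

with open('wxapp.json', encoding='utf-8') as f:
    for line in f:
        article = json.loads(line)          # one exported item per line
        print(article['title'], article['pub_time'])
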
  • run_start.py:
from scrapy import cmdline

cmdline.execute('scrapy crawl wxapp_spider'.split())
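
Running run_start.py is equivalent to typing scrapy crawl wxapp_spider on the command line; keeping it in the project root makes it easy to launch the spider from an IDE and attach a debugger, as mentioned above.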
The run produced the expected results.
