Running multiple spiders in a Scrapy project

Author: 中v中 | Published 2021-09-14 20:25

After creating a Scrapy project, you may end up writing several spiders. If you want them to run simultaneously rather than one after another, how do you do that?



a. Create a commands directory at the same level as the spiders directory, and inside it create a crawlall.py. Copy in the source of crawl.py from the commands folder of the Scrapy source code; only the run() method needs to be changed.

import os
from scrapy.commands import ScrapyCommand
from scrapy.utils.conf import arglist_to_dict
from scrapy.utils.python import without_none_values
from scrapy.exceptions import UsageError

class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return "[options] <spider>"

    def short_desc(self):
        return "Run all spiders"

    def add_options(self, parser):
        ScrapyCommand.add_options(self, parser)
        parser.add_option("-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE",
                          help="set spider argument (may be repeated)")
        parser.add_option("-o", "--output", metavar="FILE",
                          help="dump scraped items into FILE (use - for stdout)")
        parser.add_option("-t", "--output-format", metavar="FORMAT",
                          help="format to use for dumping items with -o")

    def process_options(self, args, opts):
        ScrapyCommand.process_options(self, args, opts)
        try:
            opts.spargs = arglist_to_dict(opts.spargs)
        except ValueError:
            raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False)
        if opts.output:
            if opts.output == '-':
                self.settings.set('FEED_URI', 'stdout:', priority='cmdline')
            else:
                self.settings.set('FEED_URI', opts.output, priority='cmdline')
            feed_exporters = without_none_values(
                self.settings.getwithbase('FEED_EXPORTERS'))
            valid_output_formats = feed_exporters.keys()
            if not opts.output_format:
                opts.output_format = os.path.splitext(opts.output)[1].replace(".", "")
            if opts.output_format not in valid_output_formats:
                raise UsageError("Unrecognized output format '%s', set one"
                                 " using the '-t' switch or as a file extension"
                                 " from the supported list %s" % (opts.output_format,
                                                                  tuple(valid_output_formats)))
            self.settings.set('FEED_FORMAT', opts.output_format, priority='cmdline')

    def run(self, args, opts):
        # Get the names of all spiders registered in the project.
        spd_loader_list = self.crawler_process.spider_loader.list()
        print(spd_loader_list)
        # Schedule every spider (fall back to the names passed as
        # arguments if the loader returned an empty list).
        for spname in spd_loader_list or args:
            self.crawler_process.crawl(spname, **opts.spargs)
            print('Starting spider: ' + spname)
        # Start the Twisted reactor; all scheduled spiders run concurrently.
        self.crawler_process.start()
---------------------
Author: 行者刘6
Source: CSDN
Original: https://blog.csdn.net/qq_38282706/article/details/80977576
Copyright notice: this is the blogger's original article; please include a link to it when reposting.
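Note that the code above targets an older Scrapy release whose commands still use optparse-style parser.add_option; as far as I know, Scrapy 2.6+ passes an argparse parser instead (and FEED_URI/FEED_FORMAT were later superseded by the FEEDS setting, so process_options would need reworking too). On a recent version, the option definitions would need roughly this shape, a sketch I have not tested against every release:

    # Sketch for newer Scrapy (2.6+), where add_options receives an
    # argparse.ArgumentParser rather than an optparse parser.
    def add_options(self, parser):
        ScrapyCommand.add_options(self, parser)
        parser.add_argument("-a", dest="spargs", action="append", default=[],
                            metavar="NAME=VALUE",
                            help="set spider argument (may be repeated)")
        parser.add_argument("-o", "--output", metavar="FILE",
                            help="dump scraped items into FILE (use - for stdout)")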

b. You also need to add an __init__.py file inside the commands directory, so Python treats it as a package.


c. That's still not everything: one more line must be added to the settings.py configuration file.

COMMANDS_MODULE = '<project name>.<directory name>'

For example:

COMMANDS_MODULE = 'ds1.commands'
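For reference, with the project named ds1 as in the example, the resulting layout looks roughly like this (everything apart from the commands directory is the standard Scrapy scaffold):

ds1/
├── scrapy.cfg
└── ds1/
    ├── __init__.py
    ├── settings.py
    ├── commands/
    │   ├── __init__.py
    │   └── crawlall.py
    └── spiders/
        ├── __init__.py
        └── ...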

d. Finally, run crawlall!

To be safe, first cd into the project directory in a terminal and run scrapy -h to check whether a crawlall command is listed. If it is, the setup worked and you can launch it.

I wrote a small launcher script and put it at the top level of the project, along the lines of the sketch below.

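A minimal launcher might look like this (the file name run.py and its location next to scrapy.cfg are my own choices for illustration):

# run.py -- hypothetical launcher placed next to scrapy.cfg
from scrapy.cmdline import execute

# Equivalent to typing "scrapy crawlall" in the project directory.
execute(['scrapy', 'crawlall'])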

Or just type scrapy crawlall directly at the command prompt.

Note that the spiders all run concurrently, within a single process.

In practice about 50 spiders are supported; from the 51st onward, errors like the following are thrown (see the fix below):

# 2021-09-14 18:30:59 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method TelnetConsole.start_listening of <scrapy.extensions.telnet.TelnetConsole object at 0x110264ca0>>
# Traceback (most recent call last):
#   File "/usr/local/lib/python3.9/site-packages/twisted/internet/tcp.py", line 1333, in startListening
#     skt.bind(addr)
# OSError: [Errno 48] Address already in use

# During handling of the above exception, another exception occurred:

# Traceback (most recent call last):
#   File "/usr/local/lib/python3.9/site-packages/scrapy/utils/defer.py", line 157, in maybeDeferred_coro
#     result = f(*args, **kw)
#   File "/usr/local/lib/python3.9/site-packages/pydispatch/robustapply.py", line 55, in robustApply
#     return receiver(*arguments, **named)
#   File "/usr/local/lib/python3.9/site-packages/scrapy/extensions/telnet.py", line 65, in start_listening
#     self.port = listen_tcp(self.portrange, self.host, self)
#   File "/usr/local/lib/python3.9/site-packages/scrapy/utils/reactor.py", line 22, in listen_tcp
#     return reactor.listenTCP(x, factory, interface=host)
#   File "/usr/local/lib/python3.9/site-packages/twisted/internet/posixbase.py", line 568, in listenTCP
#     p.startListening()
#   File "/usr/local/lib/python3.9/site-packages/twisted/internet/tcp.py", line 1335, in startListening
#     raise CannotListenError(self.interface, self.port, le)
# twisted.internet.error.CannotListenError: Couldn't listen on 127.0.0.1:6073: [Errno 48] Address already in use.
# spider 52|爱陪护湛江养老中心 开始 11111
# 2021-09-14 18:30:59 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method TelnetConsole.start_listening of <scrapy.extensions.telnet.TelnetConsole object at 0x1102c1a90>>
# Traceback (most recent call last):
#   File "/usr/local/lib/python3.9/site-packages/twisted/internet/tcp.py", line 1333, in startListening
#     skt.bind(addr)
# OSError: [Errno 48] Address already in use


Solution

The cause is that the telnet console (the one scrapy shell connects to) cannot be bound by this many Scrapy crawlers at once. Disable the extension in settings.py by uncommenting the EXTENSIONS block:

EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}
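Alternatively, if you want to keep the telnet console, widening its port range should also avoid the collision: each running crawler binds one port from the range, and the default range is exhausted after about 50 spiders, which matches the ceiling observed above. A sketch using the TELNETCONSOLE_PORT setting:

# settings.py -- keep the telnet console but give it more ports.
# The default range [6023, 6073] allows roughly 50 concurrent crawlers.
TELNETCONSOLE_PORT = [6023, 6323]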

