CrawlSpider error: AttributeError: 'str' object has no attribute 'iter'

Author: SuperDi | Published 2018-07-06 17:05

I set out to learn CrawlSpider by rewriting an existing Spider as one. Unexpectedly, I hit a pit right at the link-matching rules, and not one I could climb out of right away. Straight to the code:

# imports needed to run this spider
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SbcxCrawlSpider(CrawlSpider):
    name = 'sbcx_crawl'
    allowed_domains = ['sbcx.com']
    start_urls = ['http://sbcx.com/sbcx/apple']

    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        # just print each extracted URL to verify the rule matches
        print(response.url)

My first plan was to filter with a regex, so to start I used the extractor with no matching rules at all:
Rule(LinkExtractor(), callback='parse_item', follow=False)
Printing inside parse_item showed that Scrapy's LinkExtractor did not extract the URLs I needed! The page is static; for example, this a tag:
<a target="_blank" href="/trademark-detail/16010/APPLE" login="1" >G80</a> was simply ignored by LinkExtractor. I still do not know the cause or a fix for that...
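
If you run into something similar, a quick way to see exactly which links an extractor picks up is to test it interactively. A minimal sketch, assuming you are inside scrapy shell http://sbcx.com/sbcx/apple, where response is already defined:

# inspect what the default LinkExtractor actually finds on the page
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor()  # same default extractor as in the Rule above
for link in le.extract_links(response):
    print(link.url, link.text)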

Switching to XPath matching

I modified the rules as follows:

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//table[@class="jsjieguo"]/tr/td[5]/a/@href'), callback='parse_item', follow=False),
        # Rule(LinkExtractor(), callback='parse_item', follow=False),
    )

Running it immediately raised an error. The traceback:

Traceback (most recent call last):
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
    for x in result:
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spiders\crawl.py", line 82, in _parse_response
    for request_or_item in self._requests_to_follow(response):
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\spiders\crawl.py", line 61, in _requests_to_follow
    links = [lnk for lnk in rule.link_extractor.extract_links(response)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\lxmlhtml.py", line 128, in extract_links
    links = self._extract_links(doc, response.url, response.encoding, base_url)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\__init__.py", line 109, in _extract_links
    return self.link_extractor._extract_links(*args, **kwargs)
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\lxmlhtml.py", line 58, in _extract_links
    for el, attr, attr_val in self._iter_links(selector.root):
  File "E:\Develop\Virtualenv\py2_7\lib\site-packages\scrapy\linkextractors\lxmlhtml.py", line 46, in _iter_links
    for el in document.iter(etree.Element):
AttributeError: 'str' object has no attribute 'iter'
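
The last two frames already hint at the cause: _iter_links() calls document.iter(...), and document is whatever the restrict_xpaths expression selected. An XPath ending in @href selects attribute values, which are plain strings, not elements. A minimal standalone sketch with parsel (the selector library Scrapy builds on) makes the difference visible:

# demonstrating why a/@href breaks the extractor
from parsel import Selector

sel = Selector(text='<a href="/trademark-detail/16010/APPLE">G80</a>')

# selecting the attribute yields a plain string; strings have no .iter()
print(type(sel.xpath('//a/@href')[0].root))  # str

# selecting the element yields an lxml element, which supports .iter()
print(type(sel.xpath('//a')[0].root))        # lxml element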

The answer eventually turned up on Stack Overflow:

The problem is that restrict_xpaths should point to elements - either the links directly or containers containing links, not attributes:
In short: restrict_xpaths must select elements, either the link elements themselves or containers that hold them, never attributes. Our expression ended in a/@href, so it has to select the a elements instead:

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//table[@class="jsjieguo"]/tr/td[5]/a'), callback='parse_item', follow=False),
        # Rule(LinkExtractor(), callback='parse_item', follow=False),
    )
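
With restrict_xpaths pointing at the a elements, the extractor gets real elements to iterate and the AttributeError disappears. For reference, the same restriction can be written in other ways that also hand elements, not strings, to the extractor. A sketch; the CSS selector and the URL pattern below are my assumptions based on the table markup and the sample href above:

# alternative rules that also select elements rather than attributes
rules = (
    # restrict_css, like restrict_xpaths, must select the <a> elements
    Rule(LinkExtractor(restrict_css='table.jsjieguo tr td:nth-child(5) a'),
         callback='parse_item', follow=False),
    # or match by URL pattern instead of page position:
    # Rule(LinkExtractor(allow=r'/trademark-detail/\d+/'),
    #      callback='parse_item', follow=False),
)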
