
Python Web Scraping 01: Installing Scrapy and a Small Example

Author: ChZ_CC | Published 2017-02-03 17:55
    • Editor: vscode
    • Environment: Mac OS X
    • Python version: 3.5

    I've also tried this on Windows with no problems.


    Installation (in a virtual environment)

    • On Linux and macOS, it's recommended to create a virtual environment with virtualenvwrapper. Don't use the system Python. (See the sketch after this list.)
    • Install from the terminal: pip install scrapy
    • If you already have Anaconda installed, just install it through conda: conda install -c conda-forge scrapy
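
    A minimal sketch of that setup on Linux/macOS, assuming virtualenvwrapper is already installed (the environment name scrapy_env is arbitrary):

    mkvirtualenv scrapy_env   # create and activate a fresh virtual environment
    pip install scrapy        # install Scrapy into it
    scrapy version            # verify the installation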

    Command-line usage: scrapy shell

    scrapy shell 'http://quotes.toscrape.com/page/1/'

    scrapy shell "http://quotes.toscrape.com/page/1/"
    (On Windows you must use double quotes. Yes, it's that finicky.)

    You can use it to test your extraction logic and get feedback in real time. After running it, it looks like this:

    I have IPython installed, so it drops straight into an IPython-style shell. Without it, you get the plain Python prompt (the one with three greater-than signs, >>>), like this:

    [ ... Scrapy log here ... ]
    2016-09-19 12:09:27 [scrapy] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
    [s] Available Scrapy objects:
    [s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
    [s]   crawler    <scrapy.crawler.Crawler object at 0x7fa91d888c90>
    [s]   item       {}
    [s]   request    <GET http://quotes.toscrape.com/page/1/>
    [s]   response   <200 http://quotes.toscrape.com/page/1/>
    [s]   settings   <scrapy.settings.Settings object at 0x7fa91d888c10>
    [s]   spider     <DefaultSpider 'default' at 0x7fa91c8af990>
    [s] Useful shortcuts:
    [s]   shelp()           Shell help (print this help)
    [s]   fetch(req_or_url) Fetch request (or URL) and update local objects
    [s]   view(response)    View response in a browser
    >>>
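
    Inside the shell you can try selectors right away. For example, on the quotes page above, something like this (output may differ slightly):

    >>> response.xpath('//title/text()').extract_first()
    'Quotes to Scrape'
    >>> response.css('small.author::text').extract_first()
    'Albert Einstein'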
    

    An Example

    Scrape the titles, authors, and reply counts of Baidu Tieba threads.

    Create a Scrapy project:

    • scrapy startproject tutorial — the project name can be anything you like. Run it as a terminal command.

      The directory structure looks like this:

      tutorial/
          scrapy.cfg            # deploy configuration file
      
          tutorial/             # project's Python module, you'll import your code from here
              __init__.py
      
              items.py          # project items definition file
      
              pipelines.py      # project pipelines file
      
              settings.py       # project settings file
      
              spiders/          # a directory where you'll later put your spiders
                  __init__.py
      

    Edit items.py

    import scrapy
    
    
    class TiebaItem(scrapy.Item):
        # name = scrapy.Field()
        title = scrapy.Field()
        author = scrapy.Field()
        comment_num = scrapy.Field()
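
    A scrapy.Item behaves much like a dict. As a quick illustration (the values here are hypothetical):

    from tutorial.items import TiebaItem

    item = TiebaItem(title='some title')   # fields can be set at construction...
    item['author'] = 'someone'             # ...or by dict-style assignment
    print(dict(item))                      # and converted to a plain dict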
    

    Write the spider: tieba.py

    Put it in the spiders folder.

    from scrapy.spiders import Spider
    
    from tutorial.items import TiebaItem
    
    
    class TiebaSpider(Spider):
        name = "tieba"
        allowed_domains = ['tieba.baidu.com']   # domains the spider is allowed to crawl
        start_urls = [
                'http://tieba.baidu.com/f?kw=%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB&ie=utf-8',
            ]
    
        def parse(self, response):
            # (Debugging leftover: dump the raw page to a local HTML file.)
            #page = response.url.split("/")[-1]
            #filename = 'tieba.baidu-%s.html' % page
            #with open(filename, 'wb') as f:
            #    f.write(response.body)
            #self.log('Saved file %s' % filename)
            items = []
            # each <li class=" j_thread_list clearfix"> is one thread in the list
            for sel in response.xpath('//li[@class=" j_thread_list clearfix"]'):
                item = TiebaItem()
                item['title'] = sel.xpath('./div/div[2]/div[1]/div[1]/a/text()').extract()
                item['author'] = sel.xpath('.//span[@class="frs-author-name-wrap"]/a/text()').extract()
                item['comment_num'] = sel.xpath('.//span[@class="threadlist_rep_num center_text"]/text()').extract()
                items.append(item)
            return items   # could also yield each item as it is built
    

    XPath is used here. In Chrome: right-click → Inspect, then right-click the HTML node → Copy → Copy XPath. (Copy selector gives you a CSS selector instead.)
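
    Before running the full spider, you can test the XPath expressions interactively in scrapy shell. A sketch (the Tieba markup may have changed since this was written, so treat the expressions as illustrative):

    scrapy shell 'http://tieba.baidu.com/f?kw=%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB&ie=utf-8'
    >>> threads = response.xpath('//li[@class=" j_thread_list clearfix"]')
    >>> threads[0].xpath('.//span[@class="frs-author-name-wrap"]/a/text()').extract_first()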

    Run in the terminal:

    scrapy crawl tieba -o items.json runs the spider and writes the output as JSON. Other feed formats such as CSV, XML, or JSON Lines work too.

    Results

    The titles, authors, and reply counts were extracted successfully. In the JSON file, the Chinese text appears as Unicode escape sequences.

    Saving in CSV format shows the Chinese characters directly.
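
    To make the JSON output human-readable as well, Scrapy (since version 1.2) supports a feed export encoding setting; adding one line to settings.py should do it:

    # tutorial/settings.py
    FEED_EXPORT_ENCODING = 'utf-8'   # write feeds as UTF-8 instead of \uXXXX escapes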

    This is just the most basic possible Scrapy project; still learning. :)
