python: My First Crawler (Baidu Images)


Author: 蝼蚁撼树 | Published 2019-08-23 20:44

    Development environment

    IDE: PyCharm

    Python version: 3.7

    Basic steps for crawling Baidu Images

    1. Fetch the Baidu page content
    def read_html(urlStr):
        result = request.urlopen(urlStr)
        webResult = result.read()
        # save the raw page for later inspection
        with codecs.open('/Users/liliqiang/Desktop/imageFile/webResult.txt', 'w', 'utf-8') as file:
            file.write(webResult.decode('utf-8'))
        return webResult
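The fetch-read-decode flow above can be exercised offline with a `data:` URL, which urllib's built-in `DataHandler` serves without touching the network; it stands in for the real Baidu endpoint in this sketch:

```python
from urllib import request

# a data: URL stands in for the real Baidu endpoint (offline demo)
result = request.urlopen('data:text/plain;charset=utf-8,hello')
webResult = result.read()          # raw bytes, as in read_html
text = webResult.decode('utf-8')   # decode before writing to disk
print(text)
```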
    
    2. Extract the image URLs
    def read_image(webResult):
        # thumbURL entries in Baidu's response hold the thumbnail links
        re_img = re.compile(b'"thumbURL":"(.*?\.jpg)",')
        # re_img = re.compile(b'src="(.*?\.jpg)" ')
        imgs = re_img.findall(webResult)
        imgsData = []
        for img in imgs:
            imgsData.append(img.decode('utf-8'))
        # save the URL list as JSON for later reuse
        with open('/Users/liliqiang/Desktop/imageFile/images.txt', 'w') as file:
            json.dump(imgsData, file)
        return imgsData
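The thumbURL pattern can be checked against a small hand-made fragment of the kind of JSON Baidu embeds in the results page (the URLs below are made up for the demo):

```python
import re

# made-up fragment mimicking the JSON embedded in the results page
sample = b'{"thumbURL":"https://img.example.com/a.jpg","w":1},{"thumbURL":"https://img.example.com/b.jpg","w":2}'

re_img = re.compile(rb'"thumbURL":"(.*?\.jpg)",')
urls = [m.decode('utf-8') for m in re_img.findall(sample)]
print(urls)  # → ['https://img.example.com/a.jpg', 'https://img.example.com/b.jpg']
```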
    
    3. Download the images
    def save_image(imgs):
        count = 0
        for img in imgs:
            tail = img[-3:]
            filename = '/Users/liliqiang/Desktop/imageFile/%s.%s' % (count, tail)
            # impersonate a browser so the server does not reject the request
            opener = urllib.request.build_opener()
            opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Mobile Safari/537.36')]
            urllib.request.install_opener(opener)
            urllib.request.urlretrieve(img, filename)
            count += 1
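Slicing the last three characters (`img[-3:]`) breaks for `.jpeg` extensions or URLs carrying a query string; a slightly more robust sketch (the `file_ext` helper name is ours, not part of the original script) uses `urlparse` plus `splitext`:

```python
import os
from urllib.parse import urlparse

def file_ext(url, default='.jpg'):
    """Extension of the URL's path component, ignoring any query string.
    (Hypothetical helper, not part of the original script.)"""
    ext = os.path.splitext(urlparse(url).path)[1]
    return ext if ext else default

print(file_ext('https://img.example.com/pic.jpeg'))     # → .jpeg
print(file_ext('https://img.example.com/p/1.jpg?v=2'))  # → .jpg
```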
    

    Postscript: problems encountered

    I. [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed

    Solution: disable SSL certificate verification

    ssl._create_default_https_context = ssl._create_unverified_context
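Overwriting `_create_default_https_context` turns verification off globally; the same context can also be created explicitly and passed to individual calls via `urlopen(..., context=ctx)`. Either way, the resulting context has hostname checking and certificate verification disabled:

```python
import ssl

# an explicitly unverified context; same effect as the global override,
# but scoped to the calls that receive it via urlopen(..., context=ctx)
ctx = ssl._create_unverified_context()
print(ctx.check_hostname)                # → False
print(ctx.verify_mode == ssl.CERT_NONE)  # → True
```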
    
    II. urlretrieve raises HTTP Error 403: Forbidden. Many sites block obvious crawlers, so we impersonate a browser by setting the User-Agent request header. If the User-Agent in the sample code does not work, copy the one from your own browser's request headers and substitute it.

    opener = urllib.request.build_opener()
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0')]
    urllib.request.install_opener(opener)
    urllib.request.urlretrieve(img, filename)
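As an alternative to installing a global opener, the header can be attached per request with a `Request` object (the short User-Agent string and the URL here are placeholders for the demo; nothing is actually fetched):

```python
import urllib.request

req = urllib.request.Request(
    'https://img.example.com/a.jpg',  # placeholder URL, not fetched here
    headers={'User-Agent': 'Mozilla/5.0'},
)
# urllib stores header names in capitalized form
print(req.get_header('User-agent'))  # → Mozilla/5.0
```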
    
    III. UnicodeEncodeError: 'ascii' codec can't encode characters in position 65-69: ordinal not in range(128). The request URL contains the Chinese characters "图片", which urlopen cannot send as-is: the URL must be ASCII-only. urllib.parse.quote percent-encodes the UTF-8 bytes of the text into an ASCII-safe form.

    Solution:

    urlStr = 'http://image.baidu.com/search/index?tn=baiduimage&ct=201326592&lm=-1&cl=2&ie=gb18030&fr=ala&ala=1&alatpl=others&pos=0&&word='
    urlStr = urlStr + urllib.parse.quote('图片')
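`quote` turns each UTF-8 byte of the non-ASCII text into a `%XX` escape, so the final URL is pure ASCII:

```python
import urllib.parse

# 图片 encodes to the UTF-8 bytes E5 9B BE E7 89 87, percent-encoded below
word = urllib.parse.quote('图片')
print(word)  # → %E5%9B%BE%E7%89%87
```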
    

    Finally, the complete file:

    #!/usr/bin/python
    # -*- coding: UTF-8 -*-
    
    from urllib import request
    import urllib
    import re
    import ssl
    import json
    
    
    # disable SSL certificate verification
    ssl._create_default_https_context = ssl._create_unverified_context
    
    def read_html(urlStr):
        result = request.urlopen(urlStr)
        webResult = result.read()
        # save the raw page for later inspection
        with open('/Users/<your-username>/Desktop/imageFile/webResult.txt', 'w', encoding='utf-8') as file:
            file.write(webResult.decode('utf-8'))
        return webResult
    
    def read_image(webResult):
        # thumbURL entries in Baidu's response hold the thumbnail links
        re_img = re.compile(b'"thumbURL":"(.*?\.jpg)",')
        # re_img = re.compile(b'src="(.*?\.jpg)" ')
        imgs = re_img.findall(webResult)
        imgsData = []
        for img in imgs:
            imgsData.append(img.decode('utf-8'))
        # save the URL list as JSON for later reuse
        with open('/Users/<your-username>/Desktop/imageFile/images.txt', 'w') as file:
            json.dump(imgsData, file)
        return imgsData
    
    def save_image(imgs):
        count = 0
        for img in imgs:
            tail = img[-3:]
            filename = '/Users/<your-username>/Desktop/imageFile/%s.%s' % (count, tail)
            # impersonate a browser so the server does not reject the request
            opener = urllib.request.build_opener()
            opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0')]
            urllib.request.install_opener(opener)
            urllib.request.urlretrieve(img, filename)
            count += 1
    
    
    urlStr = 'http://image.baidu.com/search/index?tn=baiduimage&ct=201326592&lm=-1&cl=2&ie=gb18030&fr=ala&ala=1&alatpl=others&pos=0&&word='
    urlStr = urlStr + urllib.parse.quote('图片')
    print('Fetching page...')
    webRes = read_html(urlStr)
    print('Extracting image URLs...')
    imgs = read_image(webRes)
    print('Downloading images...')
    save_image(imgs)
    print('Download complete!')
    


Original link: https://www.haomeiwen.com/subject/bphxectx.html