[Python] A crawler to batch-download Bilibili Coser albums


By DYBOY | Published 2018-12-20 20:21

    Lately I've been grinding through the hardware design course lab, implementing a CPU, and feeling worn out. So I took a break on Bilibili and found something interesting...

    At the top of the Bilibili site there is a section called "画友" (the picture area), which turns out to be mostly Coser photo albums. Since browsing them one post at a time is tedious, why not write a crawler to save them all to disk...

    0x02 What the crawler looks like in action:

    (Screenshot: execution in progress)

    (Screenshot: downloaded result)

    0x03 The full source code:

    # -*- coding:utf-8 -*-
    # @author: DYBOY
    # @link: https://blog.dyboy.cn
    
    import requests
    import json
    import os
    import time
    
    
    # Paginated "hot" feed of the cos category in Bilibili's 画友 (link_draw) API
    main_url = 'https://api.vc.bilibili.com/link_draw/v2/Photo/list?category=cos&type=hot&page_num='
    
    req = requests.Session()
    
    def get_html(page_num):
        header = {
            'Accept': 'application/json, text/plain, */*',
            'Accept-Encoding': 'gzip, deflate, br',
            'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
            'Connection': 'keep-alive',
            'Cookie': '_dfcaptcha=2a3b6a18dc2f49833a6214509e784a6f; UM_distinctid=167c996fe832b-00aee39f185ce-3c604504-1fa400-167c996fe84325; CURRENT_QUALITY=16; fts=1545275440; bsource=seo_baidu',
            'Host': 'api.vc.bilibili.com',
            'Origin': 'https://h.bilibili.com',
            'Referer': 'https://h.bilibili.com/eden/picture_area',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
        }
    
        # Pass the headers along, or the API may reject the request
        html = req.get(main_url + str(page_num), headers=header)
        html.encoding = 'utf-8'  # note: `.encoding`, not `.encode`
        return html.text
    
    def create_dic(dic_name):
        # Helper: create a subdirectory of the working directory
        # (not used by the main loop below, which calls os.mkdir directly)
        temPath = os.path.join(os.getcwd(), dic_name)
        if not os.path.exists(temPath):
            os.makedirs(temPath)
        else:
            print('Directory already exists')
        return temPath
    
    
    def save_pic(dic, imgsrc, name):
        # Download one image and write it to dic/name
        img = req.get(imgsrc)
        with open(os.path.join(dic, name), "wb") as f:
            f.write(img.content)
    
    if __name__ == '__main__':
        print("Start: " + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
        for i in range(0, 25):
            album_list = json.loads(get_html(i))
            print("Page " + str(i))
            for album in album_list['data']['items']:
                username = album['user']['name']
                userid = album['user']['uid']
                userinfo = "User: " + username + "  uid: " + str(userid)
                doc_id = str(album['item']['doc_id'])
                print("Downloading album " + doc_id)
                try:
                    os.mkdir(doc_id)
                except FileExistsError:
                    # Album already fetched in an earlier run; skip it
                    continue
                with open(doc_id + "/info.txt", "w", encoding='utf-8') as info:
                    info.write(userinfo)
                pnum = 0
                for photo in album['item']['pictures']:
                    pnum += 1
                    save_pic(doc_id, photo['img_src'], str(pnum) + ".jpg")
                print(userinfo + "  >>>  downloaded " + str(pnum) + " photos!\n")
        print("End: " + time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
    
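The main loop only works if the API response has the nested shape the code indexes into. As a minimal sketch, the payload presumably looks roughly like the hypothetical sample below (real responses carry many more keys); this shows how the crawler's fields are pulled out without hitting the network:

```python
import json

# Hypothetical sample mimicking the shape the crawler reads;
# the values here are made up for illustration.
sample = json.loads('''
{
  "data": {
    "items": [
      {
        "user": {"name": "someuser", "uid": 12345},
        "item": {
          "doc_id": 678,
          "pictures": [
            {"img_src": "https://i0.hdslb.com/example/1.jpg"},
            {"img_src": "https://i0.hdslb.com/example/2.jpg"}
          ]
        }
      }
    ]
  }
}
''')

for album in sample['data']['items']:
    username = album['user']['name']       # uploader's display name
    doc_id = str(album['item']['doc_id'])  # album id, used as the folder name
    srcs = [p['img_src'] for p in album['item']['pictures']]
    print(username, doc_id, len(srcs))     # → someuser 678 2
```

If Bilibili changes the response layout, this is the first place the crawler will break, so a `KeyError` here is the usual symptom.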


Article link: https://www.haomeiwen.com/subject/azpukqtx.html