JD.com Bra Crawler and Data Analysis

Author: 罗罗攀 | Published 2017-07-14 15:48

    It's been a while since my last post; the summer camp really has left me no time. This piece writes up what I covered in the last livestream and takes everyone for a little "ride".

    Crawler code

    import requests
    from lxml import etree
    import time
    import json
    import re
    import csv
    
    headers = {
        'Cookie':'ipLoc-djd=1-72-2799-0; unpl=V2_ZzNtbRZXF0dwChEEfxtbV2IKFQ4RUBcSdg1PVSgZCVAyCkBVclRCFXMUR1NnGFkUZgoZXkpcQxNFCHZXchBYAWcCGllyBBNNIEwHDCRSBUE3XHxcFVUWF3RaTwEoSVoAYwtBDkZUFBYhW0IAKElVVTUFR21yVEMldQl2VH4RWAVmBxVeS19AEHUJR1x6GFsBYQEibUVncyVyDkBQehFsBFcCIh8WC0QcdQ1GUTYZWQ1jAxNZRVRKHXYNRlV6EV0EYAcUX3JWcxY%3d; __jdv=122270672|baidu-pinzhuan|t_288551095_baidupinzhuan|cpc|0f3d30c8dba7459bb52f2eb5eba8ac7d_0_e1ec43fa536c486bb6e62480b1ddd8c9|1496536177759; mt_xid=V2_52007VwMXWllYU14YShBUBmIDE1NVWVNdG08bbFZiURQBWgxaRkhKEQgZYgNFV0FRVFtIVUlbV2FTRgJcWVNcSHkaXQVhHxNVQVlXSx5BEl0DbAMaYl9oUmofSB9eB2YGElBtWFdcGA%3D%3D; __jda=122270672.14951056289241009006573.1495105629.1496491774.1496535400.5; __jdb=122270672.26.14951056289241009006573|5.1496535400; __jdc=122270672; 3AB9D23F7A4B3C9B=EJMY3ATK7HCS7VQQNJETFIMV7BZ5NCCCCSWL3UZVSJBDWJP3REWXTFXZ7O2CDKMGP6JJK7E5G4XXBH7UA32GN7EVRY; __jdu=14951056289241009006573',
        'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'
    }
    
    # Output CSV for the scraped reviews (written as UTF-8)
    fp = open('C:/Users/luopan/Desktop/wenxiong1.csv','wt',newline='',encoding='utf-8')
    writer = csv.writer(fp)
    writer.writerow(('content','creationTime','productColor','productSize','userClientShow','userLevelName'))
    
    # Collect product IDs from the search-result page, then crawl each product's comments
    def get_id(url):
        html = requests.get(url, headers=headers)
        selector = etree.HTML(html.text)
        infos = selector.xpath('//ul[@class="gl-warp clearfix"]/li')
        for info in infos:
            try:
                id = info.xpath('@data-sku')[0]
                comment_url = 'https://sclub.jd.com/comment/productPageComments.action?callback=fetchJSON_comment98vv6&productId={}&score=0&sortType=5&page=0&pageSize=10&isShadowSku=0&fold=1'.format(id)
                get_comment_info(comment_url,id)
            except IndexError:
                pass
    
    # Page through a product's comment API and write each review row to the CSV
    def get_comment_info(url,id):
        html = requests.get(url,headers=headers)
        t = re.findall(r'fetchJSON_comment98vv6\((.*)\);', html.text)
        json_data = json.loads(t[0])
        page = json_data['maxPage']
        # One URL per comment page: '{}' is filled with the page number here, while
        # '%s' is left as a placeholder for the product ID, filled in the loop below
        urls = ['https://sclub.jd.com/comment/productPageComments.action?callback=fetchJSON_comment98vv6&productId=%s&score=0&sortType=5&page={}&pageSize=10&isShadowSku=0&fold=1'.format(str(i)) for i in range(0,int(page))]
        for path in urls:
            html1 = requests.get(path%id, headers=headers)
            t1 = re.findall(r'fetchJSON_comment98vv6\((.*)\);', html1.text)
            json_data = json.loads(t1[0])
            for comment in json_data['comments']:
                content = comment['content']
                creationTime = comment['creationTime']
                productColor = comment['productColor']
                productSize = comment['productSize']
                userClientShow = comment['userClientShow']
                userLevelName = comment['userLevelName']
                # print(content,creationTime,productColor,productSize,userClientShow,userLevelName)
                writer.writerow((content,creationTime,productColor,productSize,userClientShow,userLevelName))
            time.sleep(2)
    
    if __name__ == '__main__':
        url = 'https://search.jd.com/Search?keyword=%E6%96%87%E8%83%B8&enc=utf-8&qrst=1&rt=1&stop=1&vt=2&suggest=1.his.0.0&page=1&s=1&click=0'
        get_id(url)
        fp.close()    # flush and close the CSV once crawling finishes
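
    As an aside, the comment endpoint returns JSONP: the JSON body is wrapped in a fetchJSON_comment98vv6(...) callback, which is why the crawler strips the wrapper with a regex before calling json.loads. Here is a minimal, self-contained illustration of that step; the sample payload is made up and only mimics the fields the crawler reads:

    import json
    import re

    # A made-up JSONP string with the same callback wrapper the crawler receives
    raw = 'fetchJSON_comment98vv6({"maxPage": 2, "comments": [{"content": "fits well", "creationTime": "2017-06-01 12:34:56"}]});'
    body = re.findall(r'fetchJSON_comment98vv6\((.*)\);', raw)[0]  # strip the wrapper
    data = json.loads(body)
    print(data['maxPage'])                 # -> 2
    print(data['comments'][0]['content'])  # -> fits well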
    

    Data analysis

    First, import the required libraries and read in the data.

    import pandas as pd
    import matplotlib as mpl
    import matplotlib.pyplot as plt
    %matplotlib inline
    mpl.rcParams['font.sans-serif'] = ['SimHei']   # display Chinese labels in plots
    mpl.rcParams['axes.unicode_minus'] = False     # keep minus signs readable with SimHei
    bra = pd.read_csv(r'C:\Users\luopan\Desktop\wenxiong1.csv', encoding='utf-8')
    bra
    

    What seasoned readers probably care about are the bra size, the color, and when it was bought, so we do some simple cleaning on these columns to prepare for the visualizations that follow.

    • Purchase time
      describe shows that the purchase time is stored as strings, so we need to convert the column to a datetime type. The code is as follows.
    bra['creationTime'] = pd.to_datetime(bra['creationTime'])   # string -> datetime64
    bra['hour'] = [i.hour for i in bra['creationTime']]         # hour of day (0-23)
    bra
    

    We extract the purchase hour and present it visually.

    hour = bra.groupby('hour').size()    # number of reviews per hour of day
    plt.xlim(0,25)
    plt.plot(hour,linestyle='solid',color='royalblue',marker='8')
    

    The chart shows that purchases pick up after 10 a.m.: barely into the workday, and the "slacking off" has already begun.

    • Cup sizes
      First, use the unique method to see what sizes are in there.
    bra.productSize.unique()
    

    For most guys these raw labels are dizzying, so we clean them with Python and boil everything down to A/B/C/D/E.

    # Keep the first letter group of each size (e.g. '75B' -> 'B'), then map
    # letter sizes onto cups: M -> B, L -> C, XL -> D
    cup = bra.productSize.str.findall('[a-zA-Z]+').str[0]
    cup2 = cup.str.replace('M','B')
    cup3 = cup2.str.replace('L','C')    # this also turns 'XL' into 'XC'...
    cup4 = cup3.str.replace('XC','D')   # ...which is mapped to 'D' here
    bra['cup'] = cup4
    bra
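
    The original doesn't include the plotting code for this step; a minimal sketch that reproduces a cup-size bar chart from the cleaned cup column might look like this (the styling choices are my own):

    cup_counts = bra.groupby('cup').size()          # reviews per cup size
    cup_counts.plot(kind='bar', color='royalblue')
    plt.xlabel('cup')
    plt.ylabel('count')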
    

    The visualization shows that B cups are the most common. Something felt off to me, though: checking some listings on JD afterwards, I found size A was often sold out, or not offered at all for some products, which probably makes A look rarer than it is. Heartbreaking, buddy.

    • Color

    The same cleaning-and-visualization routine applies; see the sketch below, then straight to the chart.
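
    The exact code for this step isn't shown in the original; a minimal sketch along the same lines, assuming the productColor column can be grouped directly (labels such as '肤色', i.e. nude), might be:

    color_counts = bra.groupby('productColor').size()     # reviews per color
    color_counts.sort_values(ascending=False).plot(kind='bar', color='royalblue')
    plt.xlabel('color')
    plt.ylabel('count')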



    Nude is the most popular color. Can you guess why? Heh.

    The summer camp officially wraps up tomorrow, and I have plenty of feelings about it! 罗罗攀 is back on 简书 once again; cue the applause.
