
Scraping Zhaopin in 2019

Author: hcc_9bf4 | Published 2019-05-30 19:28
1. Analyzing the web page

https://sou.zhaopin.com/?jl=736&kw=Python&kt=3

Right-click the page and choose 'View page source': the elements we are after (job title, monthly salary, etc.) are nowhere to be found in the HTML.
Right-click and choose 'Inspect element' instead: the data turns out to be delivered as JSON.
The corresponding request URL:
PS: you can paste the URL below into a browser to view the JSON, or drop the response into json.cn to explore it; the code later in this post parses it programmatically.

https://fe-api.zhaopin.com/c/i/sou?pageSize=90&cityId=736&workExperience=-1&education=-1&companyType=-1&employmentType=-1&jobWelfareTag=-1&kw=Python&kt=3&_v=0.73311243&x-zp-page-request-id=fb26d8cd8e974327bf070c1ee4083d0d-1559210939119-110823&x-zp-client-id=8f9c43e9-2e0b-49b0-a13f-3d1dda875b37
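To get a quick feel for the response before writing the full scraper, a minimal sketch like the one below (using urllib; the _v and x-zp-* tokens in the captured URL are session-specific, so they are dropped here and may need to be re-captured from your own browser if the server rejects the request) fetches the API and prints the top-level structure:

import json
import urllib.request

# Captured API URL, shortened to the stable query parameters
api_url = ('https://fe-api.zhaopin.com/c/i/sou?pageSize=90&cityId=736'
           '&workExperience=-1&education=-1&companyType=-1&employmentType=-1'
           '&jobWelfareTag=-1&kw=Python&kt=3')

request = urllib.request.Request(api_url, headers={'User-Agent': 'Mozilla/5.0'})
with urllib.request.urlopen(request) as response:
    payload = json.loads(response.read().decode('utf-8'))

print(payload.keys())                    # top-level keys, e.g. 'code', 'data'
print(len(payload['data']['results']))   # number of postings in this page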

Analyzing the request URL:
Request URL for the second page:

https://fe-api.zhaopin.com/c/i/sou?start=90&pageSize=90&cityId=736&salary=0,0&workExperience=-1&education=-1&companyType=-1&employmentType=-1&jobWelfareTag=-1&kw=Python&kt=3&=0&_v=0.96394927&x-zp-page-request-id=ba7510ba53fd45ac91c7ade78fd99ffd-1559211870381-439910&x-zp-client-id=8f9c43e9-2e0b-49b0-a13f-3d1dda875b37

Note the extra 'start=90' parameter; on page 3 it becomes 'start=180', so page N uses start=(N-1)*90.
'kw' is the user-supplied keyword and 'cityId' is the city to work in; these parameters vary, while the rest of the query string can be kept as a fixed suffix:

url = 'https://fe-api.zhaopin.com/c/i/sou?'
pinjie = '&kt=3&_v=0.72657627&x-zp-page-request-id=f0a9d5bde1884908ad700fa02ef6e9dd-1559124327853-823668&x-zp-client-id=8f9c43e9-2e0b-49b0-a13f-3d1dda875b37'
start_page = int(input('Enter the start page: '))
end_page = int(input('Enter the end page: '))
city = input('Enter the work location (cityId): ')
kw = input('Enter the keyword: ')
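Putting the pagination rule and the fixed suffix together, a small helper along these lines builds the request URL for any page (just a sketch; build_page_url is not part of the original script, and the fixed suffix is whatever was captured from the browser, which can expire):

import urllib.parse

def build_page_url(page, city_id, keyword):
    # Parameters that change between requests; page N starts at (N-1)*90
    params = {
        'start': (page - 1) * 90,
        'pageSize': 90,
        'cityId': city_id,
        'kw': keyword,
    }
    base = 'https://fe-api.zhaopin.com/c/i/sou?'
    suffix = '&kt=3'  # plus the remaining fixed parameters captured from the browser
    return base + urllib.parse.urlencode(params) + suffix

# Example: page 2 of Python jobs in city 736
print(build_page_url(2, 736, 'Python'))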

Full code:

import urllib.request
import urllib.parse
import json
import time

url='https://fe-api.zhaopin.com/c/i/sou?'

pinjie='&kt=3&_v=0.72657627&x-zp-page-request-id=f0a9d5bde1884908ad700fa02ef6e9dd-1559124327853-823668&x-zp-client-id=8f9c43e9-2e0b-49b0-a13f-3d1dda875b37'
start_page = int(input('Enter the start page: '))
end_page = int(input('Enter the end page: '))
city = input('Enter the work location (cityId): ')
kw = input('Enter the keyword: ')
items = []
for page in range(start_page,end_page+1):

    data={
        'start': (page-1) * 90,
        'pageSize': '90',
        'cityId': city,
        'workExperience':'-1',
        'education':'-1',
        'companyType':'-1',
        'employmentType':'-1',
        'jobWelfareTag':'-1',
        'kw':kw,
        }
    headers={

        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',

    }
    data=urllib.parse.urlencode(data)

    url_now=url+data+pinjie

    print('Crawling page %s ...' % page)
    request=urllib.request.Request(url=url_now,headers=headers)


    content=urllib.request.urlopen(request)
    print('Finished page %s' % page)
    time.sleep(2)
    json_text=content.read().decode()

    json_dict=json.loads(json_text)
    job_list=json_dict['data']['results']

    # for jobL in job_list:
    #   print(jobL)
    # print(len(job_list))
    # print(job_list[0]['jobName'])
    for index, job_L in enumerate(job_list):
        jobName = job_L['jobName']
        salary = job_L['salary']
        eduLevel = job_L['eduLevel']['name']
        job_city = job_L['city']['display']   # new name, so the outer `city` (cityId) is not overwritten
        company = job_L['company']['name']
        timeState = job_L['timeState']
        endDate = job_L['endDate']
        rate = job_L['rate']
        # Not every posting carries these two fields, so default them first
        workingExp = ''
        businessArea = ''
        if 'workingExp' in job_L:
            workingExp = job_L['workingExp']['name']
        if 'businessArea' in job_L:
            businessArea = job_L['businessArea']
        # print(index, '*' * 50)
        item = {
            'Job title': jobName,
            'Company': company,
            'Monthly salary': salary,
            'Education': eduLevel,
            'Work experience': workingExp,
            'Business area': businessArea,
            'City': job_city,
            'Posting status': timeState,
            'Posting end date': endDate,
            'Reply rate': rate,
        }
        items.append(item)

# json.dumps encodes the Python list of dicts as a JSON string;
# ensure_ascii=False keeps non-ASCII text readable instead of \uXXXX escapes
string = json.dumps(items, ensure_ascii=False)
# Save the result to a local text file (written once, after all pages are crawled)
with open('zhilian.txt', 'w', encoding='utf8') as fp:
    fp.write(string)
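Because the whole list is written out as a single JSON document, it can be loaded straight back for checking or further processing; a quick sanity check (assuming zhilian.txt was produced by the script above):

import json

with open('zhilian.txt', encoding='utf8') as fp:
    jobs = json.load(fp)

print(len(jobs))   # total number of postings saved
print(jobs[0])     # first record as a dict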
