
【Python Crawler】- Term 4 After-Class Exercise 13

Author: 困困harper | Published 2017-08-28 20:59
    # import the requests library
    import requests

    # target URL
    url = 'http://www.ygdy8.com/'

    # build the headers dict (copied from the browser's request)
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Cookie': 'UM_distinctid=15c5ec4f20e377-0798b30518d6b4-5393662-c0000-15c5ec4f20f28b; CNZZDATA5783118=cnzz_eid%3D1150691004-1496237600-%26ntime%3D1496237600; 37cs_user=37cs10138604998; cscpvrich4016_fidx=1; 37cs_show=69',
        'Host': 'www.ygdy8.com',
        # note: the two conditional headers below may make the server answer
        # 304 Not Modified with an empty body; drop them if the saved file is empty
        'If-Modified-Since': 'Sun, 27 Aug 2017 15:18:27 GMT',
        'If-None-Match': '802356bb471fd31:530',
        'Referer': 'https://www.baidu.com/link?url=cnL9usny1BIZEe-NZUkUbeUE4m9CM23KIysNUsVvzlK&wd=&eqid=c50f090f0001d9880000000259a2e4b0',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36'
    }

    # req is the Response object returned by requests.get
    req = requests.get(url, headers=headers)

    # the status_code attribute holds the HTTP status of the response
    status_code = req.status_code
    print(status_code)

    # tell requests how to decode the page (the site is served as gb2312)
    req.encoding = 'gb2312'

    # get the page source into html; .text gives the decoded str, .content the raw bytes
    html = req.text

    # save the page to disk, re-encoding the str back to gb2312 bytes
    with open(r"D:\work\python\temp\moive.html", 'ab+') as fp:
        fp.write(html.encode('gb2312'))
    # print(html)
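
A lighter variant (a sketch, not from the original post): since requests already exposes the raw bytes via .content, the page can be saved without the decode/re-encode round trip. The reduced header set below is an assumption (only the User-Agent is kept), and the output path simply reuses the one above.

    # minimal sketch: save the raw response bytes directly
    # (assumes the User-Agent alone is enough for this site; adjust headers if not)
    import requests

    url = 'http://www.ygdy8.com/'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36'
    }

    resp = requests.get(url, headers=headers)
    if resp.status_code == 200:
        # resp.content is the undecoded byte stream, already in the site's
        # gb2312 encoding, so it can be written to disk as-is
        with open(r"D:\work\python\temp\moive.html", 'wb') as fp:
            fp.write(resp.content)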
