Scraping WeChat Articles and Saving Them Locally


Author: tonyemail_st | Published 2017-10-24 10:39, read 9 times
import urllib.request
import random
import re

uapools=[
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36 QIHU 360SE",
    "Opera/12.02 (Android 4.1; Linux; Opera Mobi/ADR-1111101157; U; en-US) Presto/2.9.201 Version/12.02",
    ]

def ua(uapools):
    # Pick a random User-Agent and install it as the header of the global opener
    thisua = random.choice(uapools)
    print(thisua)
    headers = ("User-Agent", thisua)
    opener = urllib.request.build_opener()
    opener.addheaders = [headers]
    urllib.request.install_opener(opener)

key = urllib.request.quote("Python")  # URL-encode once, outside the loop, to avoid double-encoding
for i in range(0, 10):
    ua(uapools)  # rotate the User-Agent before fetching the results page
    thispageurl = "http://weixin.sogou.com/weixin?query=" + key + "&type=2&page=" + str(i+1) + "&ie=utf8"
    thispagedata = urllib.request.urlopen(thispageurl).read().decode("utf-8", "ignore")
    # fh=open("D:/tmp/page.html", "w", encoding="utf-8")
    # fh.write(thispagedata)
    # fh.close()
    # break
    # Each result sits in a <div class="txt-box">; capture the article href
    pat1 = '<div class="txt-box">.*?href="(.*?)"'
    rst1 = re.compile(pat1, re.S).findall(thispagedata)
    print(rst1)
    if len(rst1) == 0:
        print("scrape failed!!!")
        continue
    for j in range(0, len(rst1)):
        thisurl = rst1[j]
        # Sogou escapes "&" as "&amp;" in the href; collapse the entity back to "&"
        thisurl = thisurl.replace('amp;', "")
        print(thisurl)
        ua(uapools)
        thisdata = urllib.request.urlopen(thisurl).read().decode("utf-8", "ignore")
        print("scrape success!!! " + str(len(thisdata)))
        # The directory D:/tmp/wx/ must already exist
        with open("D:/tmp/wx/" + str(i) + str(j) + ".html", "w", encoding="utf-8") as fh:
            fh.write(thisdata)
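The href extraction and the `amp;` cleanup can be checked in isolation. A minimal sketch, assuming a hypothetical HTML fragment shaped like a Sogou result entry (the snippet and its URL are invented for illustration):

```python
import re

# Invented fragment mimicking one Sogou search-result entry
sample = ('<div class="txt-box"><h3><a target="_blank" '
          'href="http://mp.weixin.qq.com/s?src=3&amp;timestamp=1">title</a></h3></div>')

pat1 = '<div class="txt-box">.*?href="(.*?)"'
links = re.compile(pat1, re.S).findall(sample)

# Collapse the "&amp;" entities back to "&", as the script does per URL
links = [u.replace('amp;', '') for u in links]
print(links)  # ['http://mp.weixin.qq.com/s?src=3&timestamp=1']
```

The non-greedy `.*?` together with `re.S` keeps each match inside a single entry even when the div spans multiple lines.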



Permalink: https://www.haomeiwen.com/subject/gufauxtx.html