Scraping the Keke English site with a Python crawler: CET-4 translation exercises

    Author: panxd | Published 2017-10-14 15:02
    (Screenshot: the Keke English CET-4 prep page)

    Crawler basics:

    • 1. url: the address of the page you want to fetch
    • 2. For pages with anti-scraping checks, add a header:
    header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'}
    
    • 3. Build a request that poses as a browser visit:
    request = urllib2.Request(url, headers=header)
    
    • 4. Open the URL:
    response = urllib2.urlopen(request)
    
    • 5. Read back the page source:
    html = response.read()
    
    • 6. Compile a regular expression:
    pattern = re.compile(r"----your regex----")
    
    • 7. Run the match (the sketch right after this list ties all seven steps together):
    items = re.findall(pattern, html)  # note: items is a list holding everything that matched
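
    Putting the seven steps together, here is a minimal sketch; the target URL and the pattern are placeholders for illustration, not taken from the script below:

    #coding=utf-8
    import re
    import urllib2

    url = 'http://www.kekenet.com/menu/'  # placeholder page
    header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'}
    request = urllib2.Request(url, headers=header)       # step 3: pose as a browser
    response = urllib2.urlopen(request)                  # step 4: open the URL
    html = response.read()                               # step 5: read the source
    pattern = re.compile(r'<title>(.*?)</title>', re.S)  # step 6: placeholder pattern
    items = re.findall(pattern, html)                    # step 7: a list of matches
    print items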
    
    • Full code
    #coding=utf-8
    import urllib2
    from constants import url  # a local constants.py supplies `url`, the listing page to crawl
    import re
    import sys
    import os
    
    reload(sys)
    sys.setdefaultencoding('utf-8')  # avoid mojibake when str and unicode get mixed
    
    def get_title(url):
        req = urllib2.Request(url)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36')  # add the header info
        res = urllib2.urlopen(req)
        html = res.read()
        pattern = re.compile(r'id="nrtitle">(.*?)</h1>', re.S)
        title = re.findall(pattern, html)[0]  # match the title with the re module
        title = title.replace('\xe2\x80\x8b', '')  # strip zero-width spaces (&#8203;)
        title = title.replace('&#34;', '')  # strip the double-quote entity as one unit, not piece by piece
        return title
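    # Note (a sketch, not in the original): rather than stripping entity
    # fragments one by one, Python 2's standard library can decode every
    # HTML entity in a single call:
    #     from HTMLParser import HTMLParser
    #     title = HTMLParser().unescape(title)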
    
    
    
    def get_first_page(url):
        req = urllib2.Request(url)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36')
        res = urllib2.urlopen(req)
        html = res.read()
        pattern = re.compile(r'<span id="article_eng">(.*?)<script>', re.S)
        contents = re.findall(pattern, html)[0]
        contents = contents.replace('</p>', '')
        contents = contents.replace('<p>', '')
        contents = contents.replace('<strong>', '')
        contents = contents.replace('</strong>', '\n')  # section headings end the line here
        contents = contents.replace('<br />', '')
        return contents
    
    def get_second_page(url):
        # print 'Reference translation:'
        req = urllib2.Request(url)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36')
        res = urllib2.urlopen(req)
        html = res.read()
        pattern = re.compile(r'<span id="article_eng">(.*?)<script>', re.S)
        contents = re.findall(pattern, html)[0]
        contents = contents.replace('</p>', '')
        contents = contents.replace('<p>', '')
        contents = contents.replace('<strong>', '')
        contents = contents.replace('</strong>', '')
        contents = contents.replace('&#39;', '')  # drop the HTML entity for apostrophes
        contents = contents.replace('<br />', '\n')
        return contents
    
    def get_third_page(url):
        req = urllib2.Request(url)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36')
        res = urllib2.urlopen(req)
        html = res.read()
        pattern = re.compile(r'<span id="article_eng">(.*?)<script>', re.S)
        contents = re.findall(pattern, html)[0]
        contents = contents.replace('</p>', '')
        contents = contents.replace('<p>', '')
        contents = contents.replace('<strong>', '')
        contents = contents.replace('<br />', '')
        contents = contents.replace('</strong>', '\n')
        return contents
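    # The three get_*_page functions above differ only in the replacement text
    # for </strong> and <br /> (plus one stray entity). A consolidated helper,
    # offered as a sketch rather than part of the original script:
    def get_page(url, strong_repl='', br_repl=''):
        req = urllib2.Request(url)
        req.add_header('User-Agent',
                       'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36')
        html = urllib2.urlopen(req).read()
        contents = re.findall(r'<span id="article_eng">(.*?)<script>', html, re.S)[0]
        for tag in ('</p>', '<p>', '<strong>', '&#39;'):
            contents = contents.replace(tag, '')
        contents = contents.replace('</strong>', strong_repl)
        contents = contents.replace('<br />', br_repl)
        return contents
    # e.g. get_first_page(u)  behaves like get_page(u, strong_repl='\n')
    #      get_second_page(u) behaves like get_page(u, br_repl='\n')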
    
    
    req = urllib2.Request(url)
    res = urllib2.urlopen(req)
    html = res.read()
    reg = re.compile(r'http://www\.kekenet\.com/menu/201\d+/[0-9]{6}', re.S)  # article links on the listing page
    url_items = re.findall(reg, html)
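    # Sketch (not in the original): findall keeps duplicates, and listing pages
    # often repeat each article link; to visit every article once, dedupe while
    # preserving order:
    #     seen = set()
    #     url_items = [u for u in url_items if u not in seen and not seen.add(u)]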
    fin_url = []
    for i in url_items:
        # each exercise spans three pages: X.shtml, X_2.shtml, X_3.shtml
        urls = []
        urls.append(i + '.shtml')    # page 1: the source text
        urls.append(i + '_2.shtml')  # page 2: the reference translation
        urls.append(i + '_3.shtml')  # page 3: the detailed notes
        fin_url.append(urls)
    
    print 'Starting...'
    if not os.path.exists('traslate12'):
        os.makedirs('traslate12')  # the output directory must exist before open() can create files in it
    for x in fin_url:
        title = get_title(x[0])  # call the helpers defined above
        # print title
        first = get_first_page(x[0])
        second = get_second_page(x[1])
        third = get_third_page(x[2])
        filename = u'%s.docx' % title.split(':')[1]  # slice: keep the part after the full-width colon
        path = os.path.join('traslate12', filename)  # join the directory and file name
        f = open(path, 'a')  # open the output file
        f.write('\t\t' + title + '\n' + 'Source text:\n' + first + '\n' +
                'Translation:\n' + second + '\n' + 'Detailed notes\n' + third)  # write the cleaned data
        f.close()
    
    print 'Crawl finished...'
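    One robustness note: urllib2.urlopen raises HTTPError or URLError when a page is missing or the network fails, and re.findall(...)[0] raises IndexError when nothing matches, so a single bad article aborts the whole run. A guarded loop body, as a sketch (not in the original):

    for x in fin_url:
        try:
            title = get_title(x[0])
            first = get_first_page(x[0])
            second = get_second_page(x[1])
            third = get_third_page(x[2])
        except (urllib2.HTTPError, urllib2.URLError, IndexError) as e:
            print 'skipping %s: %s' % (x[0], e)
            continue
        # ... write the file exactly as above ...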
    
    (Screenshot: the Word documents generated automatically from the scraped data)
