Getting Started with Python Web Scraping

Author: onlyHalfSoul | Published 2018-05-25 15:04

    1. Preliminaries and Environment Setup

    1.1 Preliminaries

    Basic Python syntax, plus HTML and CSS fundamentals.

    1.2 Environment Setup

    Download Python 3.5 or later from the official site and run the installer with its defaults. To check that the environment variables are configured, type python at a cmd prompt: it prints the Python version and enters the interactive interpreter, where Python statements can be executed one at a time; press Ctrl+Z (then Enter) to exit.

    Install PyCharm; the required libraries can be installed directly from its settings by searching for them under the project interpreter. Web scraping with Python needs three libraries: lxml, BeautifulSoup4, and Requests.
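
    After installing them, a quick way to confirm the three libraries are available is to import them in the interpreter; a minimal check (printing the version attributes is just one way to see that the imports work):

    # A quick check that the three scraping libraries import cleanly after installation.
    from lxml import etree
    import bs4
    import requests

    print('lxml     :', etree.LXML_VERSION)
    print('bs4      :', bs4.__version__)
    print('requests :', requests.__version__)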

    2. Getting Started with Web Scraping

    2.1 Scraping Information from a Local Web Page

    1) Parse the page with BeautifulSoup

        Soup = BeautifulSoup(html,'lxml')
    
    
    2) Describe where the target information is located

      XXX = Soup.select('???')

    There are two ways to describe an element's position in a page:

    CSS Selector:

    body > div.main-content > ul > li:nth-child(1) > img
    

    XPath:

    /html/body/div[2]/ul/li[1]/img
    

    BeautifulSoup's select() can only locate elements with CSS selectors; it does not accept XPath.
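
    To make the difference concrete, here is a small sketch that locates the same <img> both ways: with BeautifulSoup's CSS selectors and, for comparison, with lxml's XPath support (the HTML snippet is an illustration, not the page used in the example below):

    from bs4 import BeautifulSoup
    from lxml import etree

    html = '''
    <body>
      <div class="main-content">
        <ul>
          <li><img src="a.jpg"></li>
        </ul>
      </div>
    </body>
    '''

    # CSS selector via BeautifulSoup
    soup = BeautifulSoup(html, 'lxml')
    print(soup.select('body > div.main-content > ul > li:nth-of-type(1) > img'))

    # XPath via lxml (BeautifulSoup itself does not understand XPath)
    tree = etree.HTML(html)
    print(tree.xpath('/html/body/div/ul/li[1]/img'))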

    3) Pick the needed information out of the matched tags

    <p>  Something  </p>
    

    Then put the useful values into a suitable data container (here, a list of dictionaries) so they are easy to query.

    Example code:

    # coding=utf-8
    from bs4 import BeautifulSoup
    
    info = []
    path = './web/new_index.html'  # path to the local HTML file
    
    with open(path, 'r') as web_data:  # open the local HTML file
        Soup = BeautifulSoup(web_data,'lxml')  # build a BeautifulSoup object from it
        # locate the target elements; BeautifulSoup can only use CSS selectors
        images = Soup.select('body > div.main-content > ul > li > img')
        titles = Soup.select("body > div.main-content > ul > li > div.article-info > h3 > a")
        descs = Soup.select('body > div.main-content > ul > li > div.article-info > p.description')
        rates = Soup.select('body > div.main-content > ul > li > div.rate > span')
        cates = Soup.select('body > div.main-content > ul > li > div.article-info > p.meta-info')
        # cates is one-to-many (several category tags per article), so select the parent tag and unpack it below
    
    # loop over the tags, pull the text of each one into a dict; for images, take the src attribute
    for title,image,desc,rate,cate in zip(titles,images,descs,rates,cates):
        data = {
            "title":title.get_text(),
            "image":image.get('src'),
            "desc":desc.get_text(),
            "rate":rate.get_text(),
            "cate":list(cate.stripped_strings) #取出标签下子标签文本,并用存入list
        }
        info.append(data)
    
    for i in info:
        if float(i['rate'])>3:
            print(i['title'],':',i['cate'],i["rate"])
    

    2.2 Scraping a Real Web Page

    The HTTP GET request method:

    Request format:

    GET /page_one.html HTTP/1.1
    HOST:www.sample.com
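
    Requests builds and sends this kind of GET request for you. Real sites often also check request headers, so it is common to send a browser-like User-Agent; a minimal sketch (the URL and header value are only illustrations):

    import requests

    headers = {'User-Agent': 'Mozilla/5.0'}  # illustrative value; any real browser UA string works
    resp = requests.get('http://www.sample.com/page_one.html', headers=headers)

    print(resp.status_code)   # 200 means the request succeeded
    print(resp.text[:200])    # first 200 characters of the returned HTML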
    

    1) Request the page URL and parse the response with BeautifulSoup

    from bs4 import BeautifulSoup
    import requests

    info = []
    url = 'https://www.liaoxuefeng.com/category/0013738748415562fee26e070fa4664ad926c8e30146c67000'
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text,'lxml')
    

    2) Locate the target elements. Note that a CSS selector copied from the browser has to be converted into the form BeautifulSoup accepts (nth-child becomes nth-of-type); a small conversion helper is sketched after the selectors below.

    titles = soup.select('body > div > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > div > h3 > a')
    
    authors = soup.select('#main > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > div > p:nth-of-type(1) > a:nth-of-type(1)')
    texts = soup.select('#main > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > div > p:nth-of-type(2)')
    images = soup.select('#main > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > a > img')
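
    If you copy selectors with the browser's "Copy selector" feature often, a tiny helper can do the substitution; a minimal sketch (the function name is just an illustration, and note that nth-child and nth-of-type are not strictly equivalent in general):

    def to_bs4_selector(browser_selector):
        """Rewrite nth-child(...) as nth-of-type(...) so BeautifulSoup's select() accepts it."""
        return browser_selector.replace(':nth-child(', ':nth-of-type(')

    copied = '#main > div > p:nth-child(1) > a:nth-child(1)'   # as copied from developer tools
    print(to_bs4_selector(copied))
    # '#main > div > p:nth-of-type(1) > a:nth-of-type(1)'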
    

    3) Store the scraped data in a suitable local structure so it is easy to query:

    for title,author,text,image in zip(titles,authors,texts,images):
        data={
            'title':title.get_text(),
            'author':author.get_text(),
            'text':text.get_text(),
            'image':image.get('src')
    
        }
        info.append(data)
    

    Complete example: scraping the tutorial listing from the home page of Liao Xuefeng's personal site:

    from bs4 import BeautifulSoup
    import requests
    
    info = []
    url = 'https://www.liaoxuefeng.com/category/0013738748415562fee26e070fa4664ad926c8e30146c67000'
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text,'lxml')
    
    titles = soup.select('body > div > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > div > h3 > a')
    authors = soup.select('#main > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > div > p:nth-of-type(1) > a:nth-of-type(1)')
    texts = soup.select('#main > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > div > p:nth-of-type(2)')
    images = soup.select('#main > div.uk-container.x-container > div > div > div.x-center > div.x-content > div > a > img')
    #print(images)
    
    for title,author,text,image in zip(titles,authors,texts,images):
        data={
            'title':title.get_text(),
            'author':author.get_text(),
            'text':text.get_text(),
            'image':image.get('src')
        }
        info.append(data)
    print(info)
    

    2.3 Scraping Dynamically Loaded Data

    1) Asynchronous loading

    A page displays part of its content by loading it dynamically, without navigating to a new page; this is usually implemented with Ajax.

    2) Scraping asynchronously loaded content

    A classic case is asynchronous pagination:

    Function that scrapes one page of data:

    def get_page(url,data=None):
    
        wb_data = requests.get(url)
        soup = BeautifulSoup(wb_data.text,'lxml')
        imgs = soup.select('a.cover-inner > img')
        titles = soup.select('section.content > h4 > a')
        links = soup.select('section.content > h4 > a')
    
        if data==None:
            for img,title,link in zip(imgs,titles,links):
                data = {
                    'img':img.get('src'),
                    'title':title.get('title'),
                    'link':link.get('href')
                }
                print(data)
    

    Function that walks through additional pages:

    def get_more_pages(start,end):
        for one in range(start,end):
            get_page(url+str(one))
            time.sleep(2)
    

    Complete example for scraping a dynamically paginated (asynchronous) site:

    from bs4 import BeautifulSoup
    import requests
    import time
    
    url = 'https://knewone.com/discover?page='
    
    def get_page(url,data=None):
    
        wb_data = requests.get(url)
        soup = BeautifulSoup(wb_data.text,'lxml')
        imgs = soup.select('a.cover-inner > img')
        titles = soup.select('section.content > h4 > a')
        links = soup.select('section.content > h4 > a')
    
        if data==None:
            for img,title,link in zip(imgs,titles,links):
                data = {
                    'img':img.get('src'),
                    'title':title.get('title'),
                    'link':link.get('href')
                }
                print(data)
    
    def get_more_pages(start,end):
        for one in range(start,end):
            get_page(url+str(one))
            time.sleep(2)
            
    get_more_pages(1,10)
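
    With a fixed range(start, end) you need to know the page count in advance. A common refinement, sketched here as an assumption rather than part of the original example, is to keep requesting pages until one comes back empty:

    # Sketch: request successive pages until one returns no entries.
    # Reuses the url variable and the imports from the example above.
    def get_all_pages(max_pages=50):
        for page in range(1, max_pages + 1):
            wb_data = requests.get(url + str(page))
            soup = BeautifulSoup(wb_data.text, 'lxml')
            titles = soup.select('section.content > h4 > a')
            if not titles:          # an empty page means we are past the last one
                break
            for title in titles:
                print(title.get('title'))
            time.sleep(2)           # be polite to the server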
    
    3. Scraping Data at Scale and Storing It in MongoDB

    3.1 Environment Setup

    1) Install and start the MongoDB server

    2) Install the third-party library pymongo

    3) Install a MongoDB plugin

    4) Connect PyCharm to the local MongoDB instance (a quick connectivity check is sketched after this list)
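
    A quick way to confirm that both the server and pymongo work is to open a connection and issue a ping; a minimal sketch assuming MongoDB is running locally on its default port 27017:

    import pymongo

    # Connect to the local MongoDB server on the default port.
    client = pymongo.MongoClient('localhost', 27017)

    # The 'ping' admin command raises an exception if the server is unreachable.
    client.admin.command('ping')
    print(client.list_database_names())   # databases visible to this connection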
    

    3.2 Operating on MongoDB from Python

    Step 1: import the pymongo library

    import pymongo
    

    Step 2: connect to MongoDB and create a database and a collection

    client = pymongo.MongoClient('localhost',27017)
    walden = client['walden']
    sheet_lines = walden['sheet_lines']
    

    Step 3: load the lines of a text file into MongoDB

    path = 'G:/tzsfile/walden.txt'  # local text file; each line becomes one document
    with open(path,'r') as f:
        lines = f.readlines()
        for index, line in enumerate(lines):
            data = {
                'index':index,
                'line':line,
                'words':len(line.split())  # number of words in the line
            }
            sheet_lines.insert_one(data)
    

    Step 4: filter out the documents you need with query conditions

    for item in sheet_lines.find():
        print(item)
    
    for item in sheet_lines.find({'words':0}):
        print(item)
    
    # $lt/$lte/$gt/$gte/$ne correspond to <, <=, >, >=, != (l = less, g = greater, e = equal, n = not)
    for item in sheet_lines.find({'words':{'$lt':5}}):
        print(item)
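
    Beyond find(), pymongo can also count and sort; a small sketch building on the sheet_lines collection above:

    # How many lines have fewer than 5 words?
    print(sheet_lines.count_documents({'words': {'$lt': 5}}))

    # The 10 longest lines, sorted by word count in descending order.
    for item in sheet_lines.find().sort('words', pymongo.DESCENDING).limit(10):
        print(item['words'], item['line'].strip())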
    

    3.3 Workflow for Large-Scale Scraping

    Workflow analysis:

    The workflow has two main stages. The first stage collects the item URLs from the listing pages and stores them in the database:

    def get_links_from(channel, pages, who_sells=0):
        # stop when the page has no td.t element
        list_view = '{}{}/pn{}/'.format(channel, str(who_sells), str(pages))
        wb_data = requests.get(list_view)
        time.sleep(1)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        if soup.find('td', 't'):
            for link in soup.select('td.t a.t'):
                item_link = link.get('href').split('?')[0]
                url_list.insert_one({'url': item_link})
                print(item_link)
                # return urls
        else:
            # It's the last page !
            pass
    

    The second stage reads those URLs back from the database and parses the fields of interest from each page:

    def get_item_info(url):
        wb_data = requests.get(url)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        # a removed listing is detected by a script whose src path contains '404'
        no_longer_exist = '404' in soup.find('script', type="text/javascript").get('src').split('/')
        if no_longer_exist:
            pass
        else:
            title = soup.title.text
            price = soup.select('span.price.c_f50')[0].text
            date = soup.select('.time')[0].text
            area = list(soup.select('.c_25d a')[0].stripped_strings) if soup.find_all('span', 'c_25d') else None
            item_info.insert_one({'title': title, 'price': price, 'date': date, 'area': area, 'url': url})
            print({'title': title, 'price': price, 'date': date, 'area': area, 'url': url})
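
    Neither function above shows the surrounding setup (imports, the url_list and item_info collections, or the loop that drives them), so here is a minimal sketch of how the two stages might be wired together; the database name, collection names, and channel URLs are assumptions for illustration only:

    from bs4 import BeautifulSoup
    import requests
    import time
    import pymongo

    client = pymongo.MongoClient('localhost', 27017)
    db = client['scraping']              # assumed database name
    url_list = db['url_list']            # stage 1 writes item URLs here
    item_info = db['item_info']          # stage 2 writes parsed details here

    channel_list = [                     # assumed listing channels to crawl
        'http://www.sample.com/shouji/',
        'http://www.sample.com/diannao/',
    ]

    # Stage 1: walk every channel over a fixed range of listing pages.
    for channel in channel_list:
        for page in range(1, 101):
            get_links_from(channel, page)

    # Stage 2: visit every stored URL and parse out the item details.
    for record in url_list.find():
        get_item_info(record['url'])
        time.sleep(1)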
    
