Python实战计划 - Lesson 3: How to Scrape Web Pages

Author: 唐宗宋祖 | Published 2016-05-20 17:35


1. Video Highlights###

  1. Inspect the requests: right-click → Inspect → Network tab → refresh the page.
  2. Build the request with the requests library: requests.get(url) returns a Response, which is stored as wb_data.
  3. Parse with BeautifulSoup; the .text attribute gives the HTML of wb_data as a readable string: BeautifulSoup(wb_data.text, 'lxml').
  4. Pick out the information you need. Rely on the uniqueness of a style: search the page source for the style to confirm it really is unique. The select() method takes a tag plus an attribute in square brackets, e.g. soup.select('img[width="160"]') (see the sketch after this list).
  5. The page source is loaded after the request, and the information in it has been processed by JavaScript.
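
The flow above, put together as a minimal sketch (the search-page URL is taken from the practice code below; the `img[width="160"]` selector is only an illustration, not a selector used on that page):

    from bs4 import BeautifulSoup
    import requests

    url = 'http://bj.xiaozhu.com/search-duanzufang-p1-0/'  # example page from the practice code below
    wb_data = requests.get(url)                   # send the request; the Response is stored as wb_data
    soup = BeautifulSoup(wb_data.text, 'lxml')    # wb_data.text is the HTML string; parse it with lxml
    for img in soup.select('img[width="160"]'):   # tag + [attribute] selector, as in point 4
        print(img.get('src'))                     # read an attribute off each matched tag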

2. Practice Code###

    from bs4 import BeautifulSoup
    import requests
    import time

    # url = 'http://bj.xiaozhu.com/search-duanzufang-p1-0/'
    # urla = 'http://bj.xiaozhu.com/fangzi/1580034935.html'

    def each_href(url):  # collect the link to every listing on one search page
        wb_data = requests.get(url)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        xiangqings = soup.find_all('a', target="_blank")
        urls = []  # list that holds the links
        for xiangqing in xiangqings:
            urls.append(xiangqing.get('href'))
        return urls

    def each_info(urla):  # scrape the details of one host's listing
        wb_data = requests.get(urla)
        time.sleep(2)
        soup = BeautifulSoup(wb_data.text, 'lxml')
        titles = soup.select('div.pho_info > h4 > em')
        prices = soup.select('#pricePart > div.day_l > span')
        adds = soup.select('span[class="pr5"]')
        img_rooms = soup.select('#detailImageBox > div.pho_show_r > div > ul > li > img')
        img_hosts = soup.select('#floatRightBox > div.js_box.clearfix > div.member_pic > a > img')
        ffs = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > span')  # the host's gender
        names = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > a')

        for title, price, add, img_room, img_host, ff, name in zip(titles, prices, adds, img_rooms, img_hosts, ffs, names):
            # compare the stringified tag; otherwise ff stays a Tag, e.g. <span class="member_girl_ico"></span>
            if str(ff) == '<span class="member_girl_ico"></span>':
                ff = "female"
            else:
                ff = "male"
            data = {
                'title': title.get_text(),
                'price': price.get_text(),
                'add': add.get_text().split("\n")[0],
                'img_room': img_room.get('data-bigimg'),
                'img_host': img_host.get('src'),
                'ff': ff,
                'name': name.get_text()
            }
            print(data)

    i = input("How many pages do you want to scrape? ")
    # Build the URL of every search page, pass each page to each_href() to get the
    # listing links, then pass each link to each_info() to scrape the details.
    for url in ['http://bj.xiaozhu.com/search-duanzufang-p{}-0/'.format(k) for k in range(1, int(i) + 1)]:
        for urla in each_href(url):
            each_info(urla)

Scraping result:

Sample run output (image: 运行结果.jpg)

3. Summary###

Many-to-one relationships, structuring the data (image: 多对一关系,数据结构化.png)

Confirming an element's uniqueness (image: 确定元素唯一性.png)
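
A small sketch of the uniqueness check summarized above, assuming `soup` is a detail page already parsed as in each_info: if a CSS selector matches exactly one element, it can safely be used to extract that field.

    selector = 'div.pho_info > h4 > em'          # the title selector used in the practice code
    matches = soup.select(selector)              # soup is the parsed detail page (see each_info above)
    if len(matches) == 1:                        # exactly one hit: the selector pins the element down
        print('unique:', matches[0].get_text())
    else:
        print('not unique, matched', len(matches), 'elements')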
