
1.3 Parsing Real Web Pages

Author: doubleyou1001 | Published 2016-06-02 10:25
  • HTTP/1.1 defines 8 request methods:
    GET, POST, HEAD, PUT, OPTIONS, CONNECT, TRACE, DELETE

  • Parsing a real page
    Monitor the page: the Network panel in the browser's dev tools
    Refresh the page: the first entry is the page document itself, and all of its request and response information is shown inside it (see the sketch after this list)
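
Both bullets can be exercised straight from code. A minimal sketch, assuming httpbin.org as a stand-in echo server (it is not part of the original notes): requests exposes one helper per HTTP/1.1 method, and the response object carries the same request and response details the first Network entry shows.

import requests

# One helper per HTTP/1.1 method; httpbin.org is an assumed echo server.
r = requests.get('http://httpbin.org/get')
print(r.status_code)                     # response status, e.g. 200
print(r.request.method)                  # the method that was sent: GET
print(r.request.headers['User-Agent'])   # request headers, as shown in Network
print(r.headers['Content-Type'])         # response headers

requests.head('http://httpbin.org/get')                    # headers only, no body
r = requests.options('http://httpbin.org/get')             # ask what is allowed
print(r.headers.get('Allow'))
requests.post('http://httpbin.org/post', data={'k': 'v'})  # submit form data
requests.put('http://httpbin.org/put', data={'k': 'v'})
requests.delete('http://httpbin.org/delete')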

import requests
from bs4 import BeautifulSoup
import time  # used to pause between requests

url = 'http://www.tripadvisor.cn/Attractions-g60763-Activities-New_York_City_New_York.html'
urls = ['http://www.tripadvisor.cn/Attractions-g60763-Activities-oa{}-New_York_City_New_York.html#ATTRACTION_LIST'.format(str(i)) for i in range(30,1030,30)]
user_saves = 'http://www.tripadvisor.cn/Saves#1'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36',
    'Cookie': 'TAUnique=%1%enc%3AGt%2BTZhWhYRLlya%2Bb84AAmksGWRwjidrr8w%2F6Ze%2BL2cUnuvWISCXjiA%3D%3D; __gads=ID=14ede999d17f3c90:T=1461160891:S=ALNI_MZNR9_0t0Q1iGGOaY9f7Nxo_uwI4Q; bdshare_firstime=1461163798929; TAAuth2=%1%3%3A0196062bdc62174625411e900aaf8dc0%3AAAbn4kxcinEu%2FY1ZBVHGXA1vuNmknYlm2BX6q79fzLVxpkyNxzjcz03cx%2BjTj%2BnIDud%2FtrnQW1Kj08wg%2BXccFPaCh9673sKMNdESJOiei28DW8p%2F3GkBIRN8MDPdq486%2F3DicH7JxYeiHlJp03fLgXgKM6X%2FMereL6%2F7%2B%2BtKwRdsPT%2F31vFSIDei%2B%2FSSkT60CJ%2FwlSMY3sigkA%2BMWAsoex8%3D; _jzqy=1.1461160723.1461204501.2.jzqsr=baidu|jzqct=tripadvisor.jzqsr=baidu|jzqct=%E7%8C%AB%E9%80%94%E9%B9%B0%E7%BD%91; taMobileRV=%1%%7B%2210021%22%3A%5B1951181%5D%2C%2210028%22%3A%5B60763%5D%7D; ServerPool=A; TASSK=enc%3Ahwdy10o2uWvTDzq0MQZXeA5tD6r7MOpWpPLWsEVezsyeBefYE30WLhybhKPN4yl9; TAPD=tripadvisor.cn; _smt_uid=57178b12.4d58ed6c; _jzqckmp=1; TATravelInfo=V2*A.2*MG.-1*HP.2*FL.3*RVL.60763_153l1687489_153*RS.1; CM=%1%HanaPersist%2C%2C-1%7Ct4b-pc%2C%2C-1%7CHanaSession%2C%2C-1%7CFtrSess%2C%2C-1%7CRCPers%2C%2C-1%7CHomeAPers%2C%2C-1%7CWShadeSeen%2C%2C-1%7CRCSess%2C%2C-1%7CFtrPers%2C%2C-1%7CHomeASess%2C4%2C-1%7Csh%2C%2C-1%7CLastPopunderId%2C137-1859-null%2C-1%7Cpssamex%2C%2C-1%7C2016sticksess%2C%2C-1%7CCCPers%2C%2C-1%7CCpmPopunder_1%2C1%2C1464913708%7CCCSess%2C%2C-1%7CCpmPopunder_2%2C5%2C-1%7CWAR_RESTAURANT_FOOTER_SESSION%2C%2C-1%7Cb2bmcsess%2C%2C-1%7Csesssticker%2C%2C-1%7C%24%2C%2C-1%7C2016stickpers%2C%2C-1%7Ct4b-sc%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS2%2C%2C-1%7Cb2bmcpers%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS%2C%2C-1%7Csess_rev%2C11%2C-1%7Csessamex%2C%2C-1%7CSaveFtrPers%2C%2C-1%7CSaveFtrSess%2C%2C-1%7Cpers_rev%2C%2C-1%7CRBASess%2C%2C-1%7Cperssticker%2C%2C-1%7CMetaFtrSess%2C%2C-1%7Cmds%2C%2C-1%7CRBAPers%2C%2C-1%7CWAR_RESTAURANT_FOOTER_PERSISTANT%2C%2C-1%7CMetaFtrPers%2C%2C-1%7C; TAReturnTo=%1%%2FAttraction_Review-g60763-d1687489-Reviews-The_National_9_11_Memorial_Museum-New_York_City_New_York.html; _jzqx=1.1461163798.1464829056.3.jzqsr=tripadvisor%2Ecn|jzqct=/attractions-g60763-activities-new_york_city_new_york%2Ehtml.jzqsr=tripadvisor%2Ecn|jzqct=/attractions-g60763-activities-new_york_city_new_york%2Ehtml; roybatty=AMO%2BuRqD4X6mrI%2FdkihO6SQRm8U1MzgRaLqYtAv1%2BnH%2BbBqTWloasiGsBbHvzicfw5Hz1hzJidthRhOOdKhEyEmdAnN7dLInMp06y2BBQ23lWR4m%2FyebLmBmvWLYuIiDeaGI5CbGAr%2BA%2F3TYUxxLA947TrYhXrXWzQ0uG8paNGZd%2C1; TASession=%1%V2ID.BD0BBE2EED6EB075774995BCEB9C8B43*SQ.20*LS.SavesAjax*GR.56*TCPAR.67*TBR.92*EXEX.53*ABTR.32*PPRP.76*PHTB.6*FS.28*CPU.56*HS.popularity*ES.popularity*AS.popularity*DS.5*SAS.popularity*FPS.oldFirst*TS.5D6F093B439A5AD40CB39E156980DB8B*LF.zhCN*FA.1*DF.0*LP.%2FLangRedirect%3Fauto%3D3%26origin%3Den_US%26pool%3DA%26returnTo%3D%252F*IR.3*OD.en_US*MS.-1*RMS.-1*FLO.60763*TRA.true*LD.1687489; TAUD=LA-1464827274965-1*LG-1988476-2.1.F*LD-1988478-.....; Hm_lvt_2947ca2c006be346c7a024ce1ad9c24a=1464827094; Hm_lpvt_2947ca2c006be346c7a024ce1ad9c24a=1464829074; ki_t=1461160724394%3B1464827095962%3B1464829073873%3B3%3B24; ki_r=; _qzja=1.398601154.1461160723640.1464827095558.1464829055540.1464829055540.1464829073970..0.0.24.7; _qzjb=1.1464829055539.2.0.0.0; _qzjc=1; _qzjto=7.2.0; _jzqa=1.1187422896885783000.1461160723.1464827094.1464829056.7; _jzqc=1; _jzqb=1.2.10.1464829056.1; NPID=',
}

def get_attractions(url, data=None):
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')  # .text gives the response body as parseable text
    time.sleep(2)  # visit at most once every 2 seconds
    # print(soup)
    # Drop the aggregated entries: by inspection, only the non-aggregated links carry target="_blank"
    titles = soup.select('div.property_title > a[target="_blank"]')
    # tag[attribute="value"] is how to pin down one particular kind of element
    images = soup.select('img[width="160"]')
    # One entry maps to several category tags, so stop the query one level up
    cates = soup.select('div.p13n_reasoning_v2')
    # Pack the pieces into a dict so each attraction's fields stay together
    for title, img, cate in zip(titles, images, cates):
        data = {
            'title': title.get_text(),
            'img': img.get('src'),
            # stripped_strings yields the text of every tag below a parent tag;
            # the categories come in groups, so turn the generator into a list
            'cate': list(cate.stripped_strings),
        }
        print(data)

# The printed results show that every image address is identical: the site
# lazy-loads images as an anti-scraping measure. Copy one image link from the
# inspector, view the page source, Ctrl+F for that link, and it turns up inside
# the lazy-load payload. A regex can dig it out, but that is no long-term
# solution; a simpler way to scrape the images comes later.

# Scraping the saved-items list requires being logged in, so we have to tell
# the server who we are. The Cookie under Request Headers in the Network panel
# identifies our session; submitting it via the headers parameter tells the
# server our logged-in state.
def get_fav(url, data=None):
    wb_data = requests.get(url, headers=headers)  # pass the identifying headers along
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('a.location-name')
    imgs = soup.select('img.photo_image')
    addresses = soup.select('span.format_address')
    for title, img, address in zip(titles, imgs, addresses):
        data = {
            'title': title.get_text(),
            'img': img.get('src'),
            'address': list(address.stripped_strings),
        }
        print(data)

# print(urls)
for single_url in urls:
    get_attractions(single_url)
# get_fav(user_saves)  # for the logged-in saves page
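
The comment above mentions digging the real image addresses out of the lazy-load payload with a regex. A minimal sketch of that stopgap, assuming the real URLs appear as plain .jpg links somewhere in the raw page source (the exact payload format is not confirmed by these notes):

import re
import requests

# Assumption: the lazy-load payload embeds the real image addresses as
# ordinary .jpg URLs inside the raw HTML/JS of the page.
wb_data = requests.get('http://www.tripadvisor.cn/Attractions-g60763-Activities-New_York_City_New_York.html')
img_urls = re.findall(r'https?://[^"\'\s]+\.jpg', wb_data.text)
print(img_urls[:5])  # spot-check the first few extracted addresses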
