My First Crawler Script (Douyu, Unfinished)

By richard520 | Published 2016-12-20 11:41
    # -*- coding: utf-8 -*-
    # __author__ = "richard"
    # My first script for crawling Douyu, but it is not finished. The problem I
    # ran into: after getting the streamer list, I could not scrape the data I
    # wanted from a room page. After checking the relevant material, it turns
    # out Douyu delivers that data through Flash, so it is not present in the
    # static HTML and cannot be fetched this way. Worth investigating further.
    # Shortcomings in my own code: no overall grasp of direction, poor
    # modularization, and weak logging and exception handling.
    import time
    import requests
    import urllib2
    from bs4 import BeautifulSoup  # the 'lxml' parser string below also requires the lxml package
    
    domain = "www.douyu.com"
    
    def getNowTime():
        return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
    
    def open_url(url):
        heads = {
            "Accept": "text/html,application/xhtml+xml,application/xml;",
            "Accept-Encoding": "gzip",
            "Accept-Language": "zh-CN,zh;q=0.8",
            "Referer": "http://www.douyu.com/",
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
        }
        # heads must be passed as the headers keyword argument; passed
        # positionally, requests treats it as URL query parameters
        r = requests.get(url, headers=heads)
        return r
    def get_fenlei():
        # collect every category link and its data-tid from the directory page
        response = urllib2.urlopen("https://www.douyu.com/directory")
        html = response.read()
        soup = BeautifulSoup(html, 'lxml')
        fl_url_list = []
        fl_id = []
        for link in soup.find_all('a', attrs={"class": "thumb"}):
            fl_link = link['href']
            id = link['data-tid']
            fl_id.append(id)
            fl_url_list.append("http://www.douyu.com" + fl_link)
        return fl_url_list, fl_id
    
    def get_fl_room(url, data_tid):
        response_fl = urllib2.urlopen(url)
        html_fl = response_fl.read()
        soup_fl = BeautifulSoup(html_fl, 'lxml')
        # attribute values are strings, so match "0" rather than the int 0
        for room_id in soup_fl.find_all('a', attrs={"data-tid": data_tid, "data-rpos": "0"}):
            room_url = "https://%s/%s" % (domain, room_id["data-rid"])
            print room_url
            room = get_room(room_url)
            print room
            # stop after the first room for now -- the room page does not
            # contain the live data anyway (it is loaded through Flash)
            exit()
            # other fields available on the listing page:
            # print room_id["title"]
            # name = room_id.div.p.find_all('span', attrs={"class": "dy-name ellipsis fl"})
            # print name[0].string
            # fs = room_id.div.p.find_all('span', attrs={"class": "dy-num fr"})
            # print fs[0].string
    
    def get_room(room_url):
        # return the raw page body; parsing it further is pointless because
        # the live data is not in the static HTML
        room = open_url(room_url).content
        return room
    
    def start():
        fl = get_fenlei()
        # range(0, len(fl[0]) - 1) would silently skip the last category
        for i in range(len(fl[0])):
            get_fl_room(fl[0][i], fl[1][i])

    if __name__ == "__main__":
        start()
    

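The attribute-extraction step that `get_fenlei()` and `get_fl_room()` rely on can be checked offline. The sketch below runs BeautifulSoup against a hand-written HTML snippet that mimics my assumption of the directory/category page structure (the class names and `data-*` attributes are taken from the script above, not verified against the live site):

```python
from bs4 import BeautifulSoup

# minimal stand-in for the markup the script targets
html = """
<ul>
  <a class="thumb" href="/directory/game/LOL" data-tid="1"></a>
  <a class="thumb" href="/directory/game/DOTA2" data-tid="2"></a>
  <a data-tid="1" data-rpos="0" data-rid="90001" title="some room"></a>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")

# category links and ids, as in get_fenlei()
categories = [(a["href"], a["data-tid"])
              for a in soup.find_all("a", attrs={"class": "thumb"})]
print(categories)

# room ids for one category, as in get_fl_room(); note the attribute
# values must be matched as strings ("0"), not ints (0)
rooms = [a["data-rid"]
         for a in soup.find_all("a", attrs={"data-tid": "1", "data-rpos": "0"})]
print(rooms)
```

This is exactly why the scraper stalls at the room page: the listing markup carries `data-rid` and friends statically, but the room page's live data is injected by Flash at runtime, so the same static-parse approach finds nothing there.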
Permalink: https://www.haomeiwen.com/subject/szbxvttx.html