
Python 实战计划 Study Notes: Auto-Configuring Proxies to Scrape 58.com Listings

Author: 一大桃 | Published 2016-07-09 15:00

    Goal: scrape pet-dog listings from 58.com.
    Difficulty - automatic proxying: before scraping 58.com, automatically pick one of the free proxies published at http://www.xicidaili.com/nn . If the proxy stops working, switch to another one automatically.
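
    The core idea is a simple retry loop, sketched minimally below (the proxy addresses are placeholders for illustration; the real list is scraped from xicidaili in the full script):

    import requests

    # Placeholder proxies, for illustration only.
    candidate_proxies = ["1.2.3.4:8080", "5.6.7.8:3128"]

    def fetch_with_fallback(url):
        for address in candidate_proxies:
            try:
                # A proxy entry must include the port, e.g. "1.2.3.4:8080".
                return requests.get(url, proxies={"http": address}, timeout=8)
            except (requests.exceptions.ProxyError,
                    requests.exceptions.ConnectionError,
                    requests.exceptions.ReadTimeout):
                print("Proxy %s failed, trying the next one..." % address)
        raise RuntimeError("All candidate proxies failed")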

    Result: (screenshot of the scraped output omitted)

    Source code:

    import bs4
    import requests
    import requests.exceptions
    import time
    import re
    
    heads = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
    }
    
    def get_detail(url):
        # Fetch one listing's detail page and print its key fields.
        soup = get_soup_from_url(url, True)
    
        title = soup.find("div", class_="mainTitle_content").h1.get_text()
        price = soup.find("span", class_="price c_f50").get_text()
        age = soup.find("span", class_="nianling").get_text().strip()
        breed = soup.find("div", class_="su_tit", text=re.compile(r"品\s*种")).next_sibling.next_sibling.span.get_text().strip()
        contactor = soup.find("span", class_="lianxiren").a.get_text()
    
        print(price, age, breed, contactor, title)
        time.sleep(1)
    
    def get_list(url):
        # Collect detail-page links from one listing page and scrape each of them.
    
        def atag_exclude_adds(tag):
            # Keep <a class="t"> tags, skipping those inside "xct" (promoted/ad) blocks.
            return tag.name == "a" and tag.has_attr("class") and "t" in tag["class"] and "xct" not in tag.parent.get("class", [])
    
        soup = get_soup_from_url(url, True)
        detail_urls = [i.get("href") for i in soup.find_all(atag_exclude_adds)]
    
        # print("detail_urls ready")
        for i in detail_urls:
            get_detail(i)
            # print(i)
    
    def get_soup_from_url(url, auto_proxy=False):
        # Fetch a page (optionally through the current proxy) and return a BeautifulSoup object.
        if auto_proxy:
            proxy = my_proxy.get_current_proxy()
        else:
            proxy = {}
    
        while True:
            # if proxy: print("Trying Proxy:%s" % proxy["http"])
            try:
                respond = requests.get(url, headers=heads, proxies=proxy, timeout=8)
            except (requests.exceptions.ProxyError, requests.exceptions.ConnectionError, requests.exceptions.ReadTimeout):
                if not auto_proxy:
                    raise  # no proxy pool to fall back on; let the caller handle it
                print("Proxy %s isn't working well..." % proxy["http"])
                proxy = my_proxy.get_next_proxy()
            else:
                if respond.status_code != 200:
                    print("respond status code problem: %s" % respond.status_code)
                    proxy = my_proxy.get_next_proxy()
                    continue
                break
    
        soup = bs4.BeautifulSoup(respond.text,'lxml')
        return soup
    
    class MyProxy(object):
        # An endless iterator over fast free proxies scraped from xicidaili.
        def __init__(self):
            self.proxy_list = []
            self.refresh_ip_list()
            self.__next__()
    
        def refresh_ip_list(self):
            print("Refreshing MyProxy IP List...")
            soup = get_soup_from_url("http://www.xicidaili.com/nn/")
    
            # Regex that matches an IPv4 address (borrowed from the web).
            re_ip = re.compile(
                r"(?:(?:25[0-5]|2[0-4]\d|((1\d{2})|([1-9]?\d)))\.){3}(?:25[0-5]|2[0-4]\d|((1\d{2})|([1-9]?\d)))")
            ip_tag = soup.find_all("td", text=re_ip)
    
            # The third column of each row holds the port.
            ports = soup.select("tr > td:nth-of-type(3)")
    
            # Proxy speed rating (encoded in the second class of the bar <div>, e.g. "fast").
            speed_work = [i.get("class")[1] for i in soup.select("tr > td:nth-of-type(7) > div > div")]
    
            # Proxy connection-speed rating, same encoding as above.
            speed_connect = [i.get("class")[1] for i in soup.select("tr > td:nth-of-type(8) > div > div")]
    
            # Merge everything into ip_data: {"ip": "XX.XX.XX.XX:port", "speed_work": "fast", "speed_connect": "medium"}
            ip_data = []
            for ip, workspeed, connectspeed, port in zip(ip_tag, speed_work, speed_connect, ports):
                data = {
                    "ip": "{ip}:{port}".format(ip=ip.get_text(), port=port.get_text()),
                    "speed_work": workspeed,
                    "speed_connect": connectspeed
                }
                ip_data.append(data)
    
            # Keep only proxies rated "fast" for both speed and connection speed.
            ip_data_filtered = list(filter(lambda i: i["speed_work"] == "fast" and i["speed_connect"] == "fast", ip_data))
            self.proxy_list = [i["ip"] for i in ip_data_filtered]
            self.proxy_list = iter(self.proxy_list)
    
        def __iter__(self):
            return self
    
        def __next__(self):
            print("Switching IP Address...")
            try:
                self.currentIP = next(self.proxy_list)
                print("New Proxy: %s" % (self.currentIP))
                return self.currentIP
            except StopIteration as e:
                self.refresh_ip_list()
                return self.__next__()
    
        def get_current_proxy(self):
            return {"http": self.currentIP}
    
        def get_next_proxy(self):
            return {"http": self.__next__()}
    
    # Build the proxy pool, then crawl the first few pages of Shanghai's pet-dog listings.
    my_proxy = MyProxy()
    
    dog_lists = ["http://sh.58.com/dog/pn{}".format(i) for i in range(5)]
    for list_url in dog_lists:
        print("Crawling %s" % (list_url))
        get_list(list_url)
    
    

    Summary:

    • soup.find_all() accepts a function as its argument, allowing fully custom filtering (see the sketch after this list).
    • MyProxy is an iterator that never runs out: when the list of proxy servers is exhausted, it re-fetches the page to build a fresh list.
    • In BeautifulSoup, a tag's next_sibling is usually a blank text node, so tags that look adjacent have to be reached via tag.next_sibling.next_sibling (also shown in the sketch below).
    • requests.get() accepts a timeout parameter given in seconds; requests.exceptions.ReadTimeout then needs to be caught as well.
    • The regular expression for an IP address is quite involved, so it was taken straight from the web: r"(?:(?:25[0-5]|2[0-4]\d|((1\d{2})|([1-9]?\d)))\.){3}(?:25[0-5]|2[0-4]\d|((1\d{2})|([1-9]?\d)))"
    • The proxy server address must include the port, otherwise the proxy does not work.
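
    As a small, self-contained sketch of the first and third points (the HTML snippet below is made up for illustration):

    import bs4

    html = """
    <div class="su_tit">品种</div>
    <div><span>柯基</span></div>
    <p class="xct"><a class="t" href="/ad">promoted</a></p>
    <p><a class="t" href="/item/1">listing</a></p>
    """
    soup = bs4.BeautifulSoup(html, "lxml")

    # find_all() accepts a function: keep <a class="t"> links that are not inside an "xct" block.
    def listing_links(tag):
        return (tag.name == "a" and tag.has_attr("class")
                and "t" in tag["class"]
                and "xct" not in tag.parent.get("class", []))

    print([a["href"] for a in soup.find_all(listing_links)])  # ['/item/1']

    # The two <div> tags look adjacent, but next_sibling is the newline text node
    # between them, so the neighbouring tag is next_sibling.next_sibling.
    label = soup.find("div", class_="su_tit")
    print(label.next_sibling.next_sibling.span.get_text())  # 柯基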
