
Python Practice Plan: Scraping Ganji.com


Date: 2016-10-15
By: Black Crow

Preface:

This is the major assignment for week two.
The approach splits into three parts: Part 1 scrapes the category channels, Part 2 scrapes the product links inside each channel, and Part 3 follows each product link to scrape the product details.

Result:

[Figure: screenshot of the scraped product details (产品详情.png)]

My code:

##### 20161015 Code PART 1: Scraping the channel list

```python
from bs4 import BeautifulSoup
import requests
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
ganji = client['ganji']
page_urls = ganji['page_urls']

def get_page_list():
    page = 'http://bj.ganji.com/wu/'
    path = 'http://bj.ganji.com/'
    wb_data = requests.get(page)
    wb_data.encoding = 'utf-8'
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # The first <a> under each <dd> is the channel link
    page_lists = soup.select('dd > a:nth-of-type(1)')
    for page_list in page_lists:
        page_name = page_list.get('href').split('/')[1]
        page_url = path + page_name
        if page_name == 'zibubaojian':
            pass  # skip this channel
        else:
            page_urls.insert_one({'url': page_url})

get_page_list()
```
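To spot-check what PART 1 wrote, here is a quick read-back from the `page_urls` collection (a minimal sketch, assuming the same local MongoDB instance; `count_documents` needs pymongo 3.7+):

```python
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
page_urls = client['ganji']['page_urls']

print(page_urls.count_documents({}))   # how many channel URLs were stored
for doc in page_urls.find().limit(5):  # peek at the first few entries
    print(doc['url'])
```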

    
##### 20161015 Code PART 2: Scraping the product links

```python
from bs4 import BeautifulSoup
from multiprocessing import Pool
import requests, time
# from content import list  # previously imported from another file (must sit in
# the same folder); later changed to read the channels from the database instead
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
ganji = client['ganji']
page_urls = ganji['page_urls']
product_list = ganji['product_list1']

def counter(i=[0]):
    # The mutable default argument persists between calls and stores the count
    next = i[-1] + 1
    i.append(next)
    return i[-1]

def get_product_urls(channel, page):
    page_url = '{}/o{}/'.format(channel, str(page))  # remember to cast page to str
    page_data = requests.get(page_url)
    page_soup = BeautifulSoup(page_data.text, 'lxml')
    product_urls = page_soup.select('td.t > a')
    for product_url in product_urls:
        href = product_url.get('href')
        if href is None:  # no link on this anchor, skip it
            continue
        url = href.split('?')[0]  # drop the long query string after .shtml
        # Insert as a well-formed dict, otherwise MongoDB raises an error
        product_list.insert_one({'url': url})
        print(counter())
        time.sleep(1)

# Single-process version:
# def get_list():
#     for item in list:
#         for i in range(1, 100):
#             get_product_urls(item, i)
#             time.sleep(1)

def get_list(channel):
    for i in range(1, 100):
        get_product_urls(channel, i)

if __name__ == '__main__':
    list = []
    for item in page_urls.find():
        list.append(item['url'])
    pool = Pool()
    # pool = Pool(processes=6)
    pool.map(get_list, list)  # the second argument must be a list
```
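Worth noting: `counter()` keeps its count in a mutable default argument, so under `Pool` each worker process gets its own copy and the printed numbers are per-process rather than global. If a single shared count is wanted, `multiprocessing.Value` is one option; a minimal sketch (my assumption, not the author's code, and it relies on a fork-based platform such as Linux where the global is inherited by the workers):

```python
from multiprocessing import Pool, Value

total = Value('i', 0)  # process-safe integer living in shared memory

def count_one(item):
    with total.get_lock():  # lock so increments do not race
        total.value += 1
        return total.value

if __name__ == '__main__':
    with Pool(4) as pool:
        # Ten distinct values 1..10 overall, though not necessarily in order
        print(pool.map(count_one, range(10)))
```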
##### 20161015 Code PART 3: Scraping the product details

```python
from bs4 import BeautifulSoup
from multiprocessing import Pool
import requests, time
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
ganji = client['ganji']
page_urls = ganji['page_urls']
product_list = ganji['product_list']
product_info = ganji['product_info']

url = 'http://zhuanzhuan.ganji.com/detail/783666465658650628z.shtml'

def counter(i=[0]):
    next = i[-1] + 1
    i.append(next)
    return i[-1]

def get_product_info(url):
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('h1')
    now_prices = soup.select('div.price_li > span > i')
    original_prices = soup.select('b.price_ori')
    areas = soup.select('div.palce_li > span > i')
    seller_names = soup.select('p.personal_name')
    for title, now_price, original_price, area, seller_name in zip(
            titles, now_prices, original_prices, areas, seller_names):
        data = {
            'title': title.get_text(),
            'now_price': now_price.get_text(),
            'original_price': original_price.get_text(),
            'area': area.get_text(),
            'seller_name': seller_name.get_text(),
            'url': url
        }
        product_info.insert_one(data)
        print(counter())
        time.sleep(1)

if __name__ == '__main__':
    list = []
    for item in product_list.find():
        list.append(item['url'])
    pool = Pool()
    # pool = Pool(processes=6)
    pool.map(get_product_info, list)  # the second argument must be a list
```
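One subtlety in PART 3: `zip()` stops at the shortest selector result, so whenever a page is missing one field (say, no `b.price_ori` on an item without an original price), records are silently dropped or misaligned. A more forgiving variant extracts each field independently; this is a sketch of my own (the selectors are the author's; the `first_text` helper and the fallback handling are assumptions, reusing the imports and collections from PART 3):

```python
def first_text(soup, selector, default=None):
    # Text of the first match, or a default when the selector finds nothing
    node = soup.select_one(selector)
    return node.get_text(strip=True) if node else default

def get_product_info_safe(url):
    soup = BeautifulSoup(requests.get(url).text, 'lxml')
    data = {
        'title': first_text(soup, 'h1'),
        'now_price': first_text(soup, 'div.price_li > span > i'),
        'original_price': first_text(soup, 'b.price_ori'),  # may legitimately be absent
        'area': first_text(soup, 'div.palce_li > span > i'),
        'seller_name': first_text(soup, 'p.personal_name'),
        'url': url,
    }
    if data['title']:  # only store pages that actually parsed
        product_info.insert_one(data)
```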

    
#### Summary:
> 1. Adding Pool() made the crawl much faster than a single process, but for fear of being banned I still kept the sleep intervals;
> 2. The second argument of pool.map() must be a list. I did not know yet how to pass the database query result directly, so I appended the URLs to a list first; there may be a simpler way (see the sketch after this list);
> 3. When calling pool.map(), make sure the arguments are passed through correctly; the database-insert part of this assignment tripped me up for a long time before I found the problem.
> 4. I scraped about 60,000 records and the data contains duplicates; I still need to look into how to deduplicate in MongoDB (also touched on in the sketch below).
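For points 2 and 4, pymongo itself covers both cases: `distinct()` returns a plain Python list ready for `pool.map`, and a unique index makes MongoDB reject duplicate URLs at insert time. A minimal sketch (assuming the same collections as above; note the unique index can only be created once existing duplicates have been removed):

```python
from pymongo import MongoClient, errors

client = MongoClient('localhost', 27017)
product_list = client['ganji']['product_list']

# Point 2: distinct() yields a deduplicated list of URLs in one call
urls = product_list.distinct('url')

# Point 4: a unique index rejects duplicates at insert time
product_list.create_index('url', unique=True)
try:
    # hypothetical URL, for illustration only
    product_list.insert_one({'url': 'http://zhuanzhuan.ganji.com/detail/xxx.shtml'})
except errors.DuplicateKeyError:
    pass  # already stored, safe to skip
```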
