Scraping Job Listings from Zhaopin (智联招聘) and 51job (前程无忧) with Python and Scrapy

Author: 赵镇 | Published 2017-12-08 08:16

    Why I Wrote This Crawler

    Over the past two weekends I spent some time getting familiar with Python web scraping, and then started building a few small crawlers with the Scrapy framework. Having felt somewhat adrift lately, and wanting to plan my career and get a clearer picture of what the market actually demands, I did some research online and ended up scraping data from two of the larger job boards, 51job (前程无忧) and Zhaopin (智联招聘).
    Here I will use Zhaopin as the example for the walkthrough.

    Preparation

    First, before writing any crawler code, I decided what data I wanted to collect, created the corresponding database table, and got a rough feel for how the target pages are laid out. Second, with data analysis in mind, I made sure the fields I care about most, such as experience, education and salary, could actually be extracted. Finally, I read up on Scrapy, regular expressions, XPath and BeautifulSoup through books and online resources, which laid the groundwork for the crawler itself.
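
    For reference, the MySQL table used later in the pipeline (named shenzhen) can be created up front. The sketch below is an assumption reconstructed from the item fields and the INSERT statement further down; the column types are guesses and should be adjusted to your own needs:

    import pymysql

    # One-off script: create the target table. The schema is inferred from the
    # item fields and the INSERT used in the pipeline; column types are assumptions.
    conn = pymysql.connect(host='127.0.0.1', user='root', passwd='123456',
                           db='zhaopin', charset='utf8')
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS shenzhen (
            id INT AUTO_INCREMENT PRIMARY KEY,
            jobname VARCHAR(255),
            salary VARCHAR(64),
            company_name VARCHAR(255),
            job_require TEXT,
            address VARCHAR(255),
            experience VARCHAR(64),
            company_size VARCHAR(64),
            head_count VARCHAR(64),
            education_require VARCHAR(64),
            release_date VARCHAR(64)
        ) DEFAULT CHARSET=utf8;
    """)
    conn.commit()
    conn.close()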

    Hands-On

    In this small exercise the main pieces are pipelines.py, items.py and the Spider class we write ourselves.
    The item describes the entity being scraped and should ideally map one-to-one onto the columns of the database table. The code is as follows:

    import scrapy
    
    
    class ZhaopinItem(scrapy.Item):
        jobname = scrapy.Field()
        salary = scrapy.Field()
        experience = scrapy.Field()
        address = scrapy.Field()
        comany_name = scrapy.Field()
        head_count = scrapy.Field()
        education_require = scrapy.Field()
        comany_size = scrapy.Field()
        job_require =scrapy.Field()
        release_date = scrapy.Field()
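
    A Scrapy Item behaves like a dictionary whose keys are restricted to the declared fields, which is why the field names above should line up with the database columns. For example:

    item = ZhaopinItem()
    item['jobname'] = 'java工程师'   # fine: declared field
    item['foo'] = 'bar'              # raises KeyError: 'foo' is not a declared field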
    

    The pipeline in this example takes each item and writes it into MySQL. The code is as follows:

    import pymysql
    from zhaopin import settings
    
    class ZhaopinPipeline(object):
        def __init__(self):
            # connection parameters come from settings.py (shown at the end of this post)
            self.conn = pymysql.connect(
                host=settings.MYSQL_HOST,
                db=settings.MYSQL_DBNAME,
                user=settings.MYSQL_USER,
                passwd=settings.MYSQL_PASSWORD,
                charset='utf8',  # set the charset explicitly, otherwise Chinese text may come back garbled
                use_unicode=False)
            self.cursor = self.conn.cursor()
    
        def process_item(self, item, spider):
            # called once for every item the spider yields
            self.insertData(item)
            return item
    
        def insertData(self, item):
            sql = "insert into shenzhen(jobname,salary,company_name,job_require,address,experience,company_size,head_count,education_require,release_date) VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s);"
            params = (item['jobname'],item['salary'],item['comany_name'],item['job_require'],item['address'],item['experience'],item['comany_size'],item['head_count'],item['education_require'],item['release_date'])
            self.cursor.execute(sql, params)
            self.conn.commit()
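
    Two small refinements are worth mentioning here. Scrapy can inject the settings through the from_crawler hook instead of importing settings.py directly, and the database connection should be closed when the crawl ends. A minimal sketch of that variant (process_item and insertData stay exactly as above):

    import pymysql

    class ZhaopinPipeline(object):
        def __init__(self, host, db, user, passwd):
            self.conn = pymysql.connect(host=host, db=db, user=user,
                                        passwd=passwd, charset='utf8')
            self.cursor = self.conn.cursor()

        @classmethod
        def from_crawler(cls, crawler):
            # Scrapy calls this hook with the running crawler, so the MySQL
            # settings can be read without importing the settings module.
            s = crawler.settings
            return cls(s.get('MYSQL_HOST'), s.get('MYSQL_DBNAME'),
                       s.get('MYSQL_USER'), s.get('MYSQL_PASSWORD'))

        def close_spider(self, spider):
            # release the database connection when the crawl finishes
            self.cursor.close()
            self.conn.close()

        # process_item() and insertData() are unchanged from the version above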
    

    Finally, the most important part: fetching and parsing the data. The spider code is as follows.

    from zhaopin.items import ZhaopinItem
    from scrapy import Spider,Request
    from bs4 import BeautifulSoup
    import re


    class ZhaopinSpider(Spider):
        name = 'zhaopin'

        allowed_domains = ['www.zhaopin.com']
        start_urls = ['http://www.zhaopin.com/']

        def start_requests(self):
            # Search results for java工程师 in Shenzhen; walk through result pages
            # 1-60 (adjust to however many pages the search actually returns).
            url_tpl = 'http://sou.zhaopin.com/jobs/searchresult.ashx?jl=%E6%B7%B1%E5%9C%B3&kw=java%E5%B7%A5%E7%A8%8B%E5%B8%88&sm=0&isadv=0&sg=cc9fe709f8cc4139afe2ad0808eb7983&p={}'
            for num in range(1, 61):
                yield Request(url_tpl.format(num), callback=self.parse)
    
        def parse(self, response):
            # The search result page is one big table; pair up the job title,
            # salary and publish date cells row by row.
            wbdata = response.text
            soup = BeautifulSoup(wbdata, 'lxml')
            job_names = soup.select("table.newlist > tr > td.zwmc > div > a:nth-of-type(1)")
            salaries = soup.select("table.newlist > tr > td.zwyx")
            times = soup.select("table.newlist > tr > td.gxsj > span")
            for name, salary, pub_time in zip(job_names, salaries, times):
                item = ZhaopinItem()
                item["jobname"] = name.get_text()
                url = name.get('href')
                item["salary"] = salary.get_text()
                item["release_date"] = pub_time.get_text()
                # The remaining fields live on the job detail page, so pass the
                # partially filled item along in the request meta.
                yield Request(url=url, meta={"item": item}, callback=self.parse_detail, dont_filter=True)
    
    
    
        def parse_detail(self, response):
            jobdata = response.body
            # Job description paragraphs: extract them, then strip the remaining
            # HTML tags and whitespace.
            require_data = response.xpath(
                '//body/div[@class="terminalpage clearfix"]/div[@class="terminalpage-left"]/div[@class="terminalpage-main clearfix"]/div[@class="tab-cont-box"]/div[1]/p').extract()
            require_data_middle = ''
            for i in require_data:
                i_middle = re.sub(r'<.*?>', '', i, flags=re.S)
                require_data_middle += re.sub(r'\s*', '', i_middle, flags=re.S)
            jobsoup = BeautifulSoup(jobdata, 'lxml')
            item = response.meta['item']
            item['job_require'] = require_data_middle
            # The remaining fields are picked out of the detail page by position;
            # the indexes below match the page layout at the time of writing.
            item['experience'] = jobsoup.select('div.terminalpage-left strong')[4].text.strip()
            item['comany_name'] = jobsoup.select('div.fixed-inner-box h2')[0].text
            item['comany_size'] = jobsoup.select('ul.terminal-ul.clearfix li strong')[8].text.strip()
            item['head_count'] = jobsoup.select('div.terminalpage-left strong')[6].text.strip()
            item['address'] = jobsoup.select('ul.terminal-ul.clearfix li strong')[11].text.strip()
            item['education_require'] = jobsoup.select('div.terminalpage-left strong')[5].text.strip()
            yield item
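
    As a side note, the spider mixes BeautifulSoup with response.xpath. Scrapy's built-in selectors can handle the list page on their own; the sketch below is a rough equivalent of the parse method above (it would replace parse inside ZhaopinSpider, and the CSS paths are the same ones used with BeautifulSoup, so they depend on the page markup at the time of writing):

        def parse(self, response):
            # Same pairing logic as above, but using Scrapy's CSS selectors.
            for row in response.css('table.newlist > tr'):
                link = row.css('td.zwmc div a:nth-of-type(1)')
                if not link:
                    continue  # skip rows that carry no job link (headers, ads)
                item = ZhaopinItem()
                item['jobname'] = link.css('::text').extract_first()
                item['salary'] = row.css('td.zwyx::text').extract_first()
                item['release_date'] = row.css('td.gxsj span::text').extract_first()
                url = link.css('::attr(href)').extract_first()
                yield Request(url=url, meta={'item': item},
                              callback=self.parse_detail, dont_filter=True)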
    

    Finally, a few basic options need to be set in the settings.py file, as shown below:

    ROBOTSTXT_OBEY = False       # do not check robots.txt before crawling

    ITEM_PIPELINES = {
        'zhaopin.pipelines.ZhaopinPipeline': 300
    }

    MYSQL_HOST = '127.0.0.1'
    MYSQL_DBNAME = 'zhaopin'     # database name
    MYSQL_USER = 'root'          # database user
    MYSQL_PASSWORD = '123456'    # database password
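
    With the items, pipeline, spider and settings in place, the crawl is started from the project root with scrapy crawl zhaopin (the name defined on the spider). It can also be launched from a small Python script, which is convenient when running from an IDE:

    # run from the Scrapy project root; equivalent to `scrapy crawl zhaopin`
    from scrapy.cmdline import execute

    execute(['scrapy', 'crawl', 'zhaopin'])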
    

    After a successful run, the crawl produces results like the following:


    [screenshot of the scraped results]

    Afterword

    If I pick up some data-analysis skills later on, I may come back to this data, analyse it, and write up whatever interesting findings come out of it.

    The code can be found here.
