Getting Started with Python Crawlers: Scraping Lagou with Scrapy

Author: 小小佐 | Published 2017-09-21 10:44

    I'd scraped Lagou before, but I kept hitting errors I couldn't resolve and gave up. Today I sat down to try again from scratch. For a rookie, there really are pitfalls everywhere, so I'm writing this post to record some of them as a reference for further study.

    First, open Lagou, type Python into the search box, press F12 to open the developer tools, and refresh the page:

    The response to this initial request doesn't contain the data we want. When that happens, I switch to the XHR tab and look there:

    The URL https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false&isSchoolJob=0 returns the JSON data we're after. So we can mimic the browser's request to this URL and then parse the returned JSON to get the results we want.
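    Before writing the spider, it's worth sanity-checking the endpoint outside Scrapy. Here is a minimal sketch using the requests library; the headers below are illustrative assumptions, and whether the call succeeds depends on the same anti-crawler header checks discussed later:

    import requests

    url = ('https://www.lagou.com/jobs/positionAjax.json'
           '?needAddtionalResult=false&isSchoolJob=0')
    # Hypothetical minimal headers; Lagou may still reject the request
    # unless the full browser headers shown further down are supplied.
    headers = {
        'Referer': 'https://www.lagou.com/jobs/list_python',
        'User-Agent': 'Mozilla/5.0',
    }
    resp = requests.post(url, data={'first': 'true', 'pn': '1', 'kd': 'python'},
                         headers=headers)
    print(resp.json().get('content') or resp.text[:200])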

    So let's start writing code in the Scrapy project's spider.py:

    import json

    import scrapy


    class LagouSpider(scrapy.Spider):
        name = 'lagou'

        def start_requests(self):
            url = ('https://www.lagou.com/jobs/positionAjax.json'
                   '?needAddtionalResult=false&isSchoolJob=0')
            # POST the search form for page 1 of the "python" keyword.
            yield scrapy.FormRequest(url,
                                     formdata={'first': 'true', 'pn': '1', 'kd': 'python'},
                                     method='POST', meta={'pn': 1}, callback=self.parse)

        def parse(self, response):
            html = response.text
            data = json.loads(html)
            if data:
                content = data.get('content')
                positionResult = content.get('positionResult')
                results = positionResult.get('result')
                for result in results:
                    companyFullName = result.get('companyFullName')
                    print(companyFullName)
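    With that in place, the spider can be run from the project root:

    scrapy crawl lagou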

    In settings.py I was using the default DEFAULT_REQUEST_HEADERS, into which I had added a random User-Agent. Then I ran the code, and it crashed with:

    File "E:\Python\pycharm\lagouposition\lagouposition\spiders\lagou.py", line 60, in parse

    content=data['content']

    KeyError: 'content'
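    A good habit when a crawler dies like this is to log the raw response body before indexing into it. A minimal defensive sketch:

    def parse(self, response):
        data = json.loads(response.text)
        if 'content' not in data:
            # When Lagou's anti-crawler check trips, the JSON it returns
            # typically has no 'content' key, just a refusal message.
            self.logger.warning('Unexpected response body: %s', response.text[:200])
            return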

    The code looked fine, so why did this error keep coming up? It genuinely drove me crazy. Later I saw an answer on Zhihu saying to add all of the browser's request headers (the person answering admitted they didn't know exactly why it works either). So I set up settings.py as follows:

     DEFAULT_REQUEST_HEADERS = {
         'Accept': 'application/json, text/javascript, */*; q=0.01',
         'Accept-Encoding': 'gzip, deflate, br',
         'Accept-Language': 'zh-CN,zh;q=0.8',
         'Connection': 'keep-alive',
         'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
         'Cookie': 'LGUID=20170624104910-b3421612-5887-11e7-805a-525400f775ce; user_trace_token=20170624104912-161b9c7475a6448381c393fd68935f6b; index_location_city=%E5%85%A8%E5%9B%BD; JSESSIONID=ABAAABAAAFCAAEGF2DB2AA232B68C2B16743FE83939C1E9; _gat=1; PRE_UTM=; PRE_HOST=; PRE_SITE=; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2F; TG-TRACK-CODE=index_search; _gid=GA1.2.705404459.1505118253; _ga=GA1.2.1378071003.1498273550; LGSID=20170911225046-98307e76-9700-11e7-8f76-525400f775ce; LGRID=20170911225056-9dbaf56b-9700-11e7-9168-5254005c3644; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1504697344,1504751304,1504860546,1505142452; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1505142462; SEARCH_ID=1875185cf5904051845b74a20b82bebd',
         'Host': 'www.lagou.com',
         'Origin': 'https://www.lagou.com',
         'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
         # 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
         'X-Anit-Forge-Code': '0',
         'X-Anit-Forge-Token': 'None',
         'X-Requested-With': 'XMLHttpRequest',
     }
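    As for the random User-Agent mentioned earlier, one common pattern (a sketch of the general idea, not necessarily the exact middleware I used) is a small downloader middleware, enabled through DOWNLOADER_MIDDLEWARES in settings.py:

    import random

    # Hypothetical pool; extend it with as many real browser strings as you like.
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    ]

    class RandomUserAgentMiddleware(object):
        def process_request(self, request, spider):
            # Give every outgoing request a freshly picked browser identity.
            request.headers['User-Agent'] = random.choice(USER_AGENTS)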

    Then I ran it again. The error above was gone, but an encoding error appeared instead (I'm on Windows 7):

    I searched around and tried a number of suggestions, but it kept raising the same error. Eventually I found a fix: add the following to spider.py:

    import sys, io

    # Rewrap stdout so print() encodes with the Windows console's GBK codec.
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gbk')

    That took care of the encoding problem.
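    A couple of alternatives avoid rewrapping stdout by hand (sketches; the first assumes Python 3.7+):

    import sys

    # Python 3.7+ lets you change the encoding of the existing stream in place.
    sys.stdout.reconfigure(encoding='utf-8')

    # Or set the environment variable before launching the crawler (Windows cmd):
    #   set PYTHONIOENCODING=utf-8
    #   scrapy crawl lagou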

    Then, continuing on, in items.py:

    from scrapy import Item, Field


    class LagoupositionItem(Item):
        companyFullName = Field()
        companyId = Field()
        companyLabelList = Field()
        companyLogo = Field()
        companyShortName = Field()
        companySize = Field()
        createTime = Field()
        deliver = Field()
        district = Field()
        education = Field()
        explain = Field()
        financeStage = Field()
        firstType = Field()
        formatCreateTime = Field()
        gradeDescription = Field()
        industryField = Field()
        industryLables = Field()
        isSchoolJob = Field()
        jobNature = Field()
        positionAdvantage = Field()
        positionId = Field()
        positionLables = Field()
        positionName = Field()
        salary = Field()
        secondType = Field()
        workYear = Field()

    And in spider.py (with LagoupositionItem imported from items.py), parse grows into:

    def parse(self, response):
        html = response.text
        data = json.loads(html)
        if data:
            content = data.get('content')
            positionResult = content.get('positionResult')
            totalCount = positionResult.get('totalCount')
            # 15 positions per page; Lagou only exposes the first 30 pages.
            pages = int(totalCount / 15)
            if pages >= 30:
                pages = 30
            results = positionResult.get('result')
            for result in results:
                item = LagoupositionItem()
                item['companyFullName'] = result.get('companyFullName')
                item['companyId'] = result.get('companyId')
                item['companyLabelList'] = result.get('companyLabelList')
                item['companyLogo'] = result.get('companyLogo')
                item['companyShortName'] = result.get('companyShortName')
                item['companySize'] = result.get('companySize')
                item['createTime'] = result.get('createTime')
                item['deliver'] = result.get('deliver')
                item['district'] = result.get('district')
                item['education'] = result.get('education')
                item['explain'] = result.get('explain')
                item['financeStage'] = result.get('financeStage')
                item['firstType'] = result.get('firstType')
                item['formatCreateTime'] = result.get('formatCreateTime')
                item['gradeDescription'] = result.get('gradeDescription')
                item['industryField'] = result.get('industryField')
                item['industryLables'] = result.get('industryLables')
                item['isSchoolJob'] = result.get('isSchoolJob')
                item['jobNature'] = result.get('jobNature')
                item['positionAdvantage'] = result.get('positionAdvantage')
                item['positionId'] = result.get('positionId')
                item['positionLables'] = result.get('positionLables')
                item['positionName'] = result.get('positionName')
                item['salary'] = result.get('salary')
                item['secondType'] = result.get('secondType')
                item['workYear'] = result.get('workYear')
                yield item
            # Queue the next page outside the for loop, so it fires once per page.
            pn = int(response.meta.get('pn')) + 1
            if pn <= pages:
                yield scrapy.FormRequest(response.url,
                                         formdata={'first': 'false', 'pn': str(pn), 'kd': 'python'},
                                         method='POST', meta={'pn': pn},
                                         callback=self.parse)
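    As an aside, since every field in LagoupositionItem is named after the matching key in the JSON, those twenty-odd assignments could be collapsed into a loop (an equivalent sketch):

    item = LagoupositionItem()
    for field in item.fields:
        # Each declared Field shares its name with a key in the result dict.
        item[field] = result.get(field)
    yield item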

    I thought this would grab all 30 pages, but after scraping just one page it threw the earlier error again:

    File "E:\Python\pycharm\lagouposition\lagouposition\spiders\lagou.py", line 60, in parse

    content=data['content']

    KeyError: 'content'

    Since the same error had shown up at the very beginning, I figured the follow-up request:

    yield scrapy.FormRequest(response.url, formdata={'first': 'false', 'pn': str(pn), 'kd': 'python'},
                             method='POST', meta={'pn': pn}, callback=self.parse)

    was going out without the headers. So I made the following adjustment: I commented out DEFAULT_REQUEST_HEADERS in settings.py and added the headers as a class attribute in spider.py:

    headers = {
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Connection': 'keep-alive',
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        'Cookie': 'LGUID=20170624104910-b3421612-5887-11e7-805a-525400f775ce; user_trace_token=20170624104912-161b9c7475a6448381c393fd68935f6b; index_location_city=%E5%85%A8%E5%9B%BD; JSESSIONID=ABAAABAAAFCAAEGF2DB2AA232B68C2B16743FE83939C1E9; _gat=1; PRE_UTM=; PRE_HOST=; PRE_SITE=; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2F; TG-TRACK-CODE=index_search; _gid=GA1.2.705404459.1505118253; _ga=GA1.2.1378071003.1498273550; LGSID=20170911225046-98307e76-9700-11e7-8f76-525400f775ce; LGRID=20170911225056-9dbaf56b-9700-11e7-9168-5254005c3644; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1504697344,1504751304,1504860546,1505142452; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1505142462; SEARCH_ID=1875185cf5904051845b74a20b82bebd',
        'Host': 'www.lagou.com',
        'Origin': 'https://www.lagou.com',
        'Referer': 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=',
        # 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
        'X-Anit-Forge-Code': '0',
        'X-Anit-Forge-Token': 'None',
        'X-Requested-With': 'XMLHttpRequest',
    }

    and changed the first request to:

    yield scrapy.FormRequest(url, formdata={'first': 'true', 'pn': '1', 'kd': 'python'},
                             method='POST', meta={'pn': 1}, headers=self.headers,
                             callback=self.parse)

    along with the follow-up request:

    yield scrapy.FormRequest(response.url, formdata={'first': 'false', 'pn': str(pn), 'kd': 'python'},
                             method='POST', meta={'pn': pn}, headers=self.headers,
                             callback=self.parse)
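    With the headers attached to both requests, the scraped items can also be dumped straight to a file using Scrapy's built-in feed export:

    scrapy crawl lagou -o positions.json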

    Then I ran it, and it finally worked, crawling all 30 pages of results. There were quite a few pits to step into along the way.
