The requirements are the same as last time, except that the job listing info and the detail-page content are saved to separate files, and the way next-page and detail-page links are obtained has changed.
This time we use CrawlSpider.
class scrapy.spiders.CrawlSpider
It is a subclass of Spider. The Spider class is designed to crawl only the pages listed in start_urls, whereas CrawlSpider defines rules (Rule) that provide a convenient mechanism for following links: it extracts links from crawled pages and keeps crawling them, which suits this kind of job much better.
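To see the link-extraction mechanism in isolation, a LinkExtractor can be tried out by itself in the scrapy shell. A minimal sketch (the regex is the one this project uses later):

    # In a scrapy shell session, e.g. scrapy shell "https://hr.tencent.com/position.php"
    from scrapy.linkextractors import LinkExtractor

    le = LinkExtractor(allow=r'start=\d+')   # keep only URLs matching this regex
    links = le.extract_links(response)       # Link objects found in the response
    for link in links:
        print(link.url)

A Rule simply bundles such a LinkExtractor with a callback name and a follow flag, and CrawlSpider applies every rule to every response it downloads.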
items.py
import scrapy

# Item for one job listing row
class TencentItem(scrapy.Item):
    position_name = scrapy.Field()
    position_type = scrapy.Field()
    people_number = scrapy.Field()
    work_location = scrapy.Field()
    publish_times = scrapy.Field()
    position_link = scrapy.Field()

# Item for the detail-page content
class DetailItem(scrapy.Item):
    detailContent = scrapy.Field()
As you can see, items.py defines two item classes, one for each kind of data: TencentItem for the job listings and DetailItem for the detail-page content.
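scrapy.Item subclasses behave like dicts whose keys are restricted to the declared Fields, which is what lets the pipelines below call dict(item). A quick sketch (the value is made up):

    from tencent2.items import TencentItem

    item = TencentItem()
    item['position_name'] = 'Python Engineer'   # hypothetical value
    print(dict(item))                           # {'position_name': 'Python Engineer'}
    # item['salary'] = 1                        # would raise KeyError: field not declared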
tencent_crawl.py
This spider is not generated with scrapy genspider XXX "xxx.com",
but with scrapy genspider -t crawl tencent_crawl "tencent.com" (the -t crawl option selects the CrawlSpider template).
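If you are unsure which templates exist, scrapy genspider -l lists them; on a stock Scrapy install the output looks roughly like:

    $ scrapy genspider -l
    Available templates:
      basic
      crawl
      csvfeed
      xmlfeed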
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from tencent2.items import TencentItem, DetailItem

class TencentCrawlSpider(CrawlSpider):
    name = 'tencent_crawl'
    allowed_domains = ['tencent.com']
    start_urls = ['https://hr.tencent.com/position.php']
    base_url = "https://hr.tencent.com/"

    rules = (
        # Requests matching this rule are handled by parse_item; follow=True
        # means links keep being extracted from those responses as well
        Rule(LinkExtractor(allow=r'start=\d+'), callback='parse_item', follow=True),
        # Requests matching this rule are handled by detail; follow=False
        # means no further links are extracted from detail pages
        Rule(LinkExtractor(allow=r'position_detail\.php\?id=\d+'), callback='detail', follow=False)
    )

    # The callback must never be named parse: CrawlSpider implements parse
    # internally, and overriding it makes the spider fail at runtime
    def parse_item(self, response):
        node_list = response.xpath('//tr[@class="even"] | //tr[@class="odd"]')
        for node in node_list:
            item = TencentItem()
            item['position_name'] = node.xpath('./td/a/text()').extract_first()
            item['position_link'] = node.xpath('./td/a/@href').extract_first()
            item['position_type'] = node.xpath('./td[2]/text()').extract_first()
            item['people_number'] = node.xpath('./td[3]/text()').extract_first()
            item['work_location'] = node.xpath('./td[4]/text()').extract_first()
            item['publish_times'] = node.xpath('./td[5]/text()').extract_first()
            yield item

    def detail(self, response):
        item = DetailItem()
        item['detailContent'] = "".join(response.xpath('//ul[@class="squareli"]/li/text()').extract())
        yield item
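Before wiring XPaths into a spider like this, it helps to test them interactively. A sketch using the scrapy shell against the listing page (assuming the page markup described above):

    $ scrapy shell "https://hr.tencent.com/position.php"
    >>> node_list = response.xpath('//tr[@class="even"] | //tr[@class="odd"]')
    >>> node_list[0].xpath('./td/a/text()').extract_first()   # first job title, if the markup matches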
pipelines.py
from tencent2.items import TencentItem
import json
import time

class TencentPipeline(object):
    def open_spider(self, spider):
        self.file = open("tencent.json", "w", encoding="utf-8")
        self.position_num = 0
        self.start_time = time.time()

    def process_item(self, item, spider):
        # Only handle job-listing items; everything else passes straight through
        if isinstance(item, TencentItem):
            self.position_num += 1
            content = json.dumps(dict(item), ensure_ascii=False) + "\n"
            self.file.write(content)
        return item

    def close_spider(self, spider):
        self.end_time = time.time()
        print("---------- Saved " + str(self.position_num) + " job listing records ----------")
        print("Elapsed time: " + str(self.end_time - self.start_time) + " seconds")
        self.file.close()

class DetailPipeline(object):
    def open_spider(self, spider):
        self.file = open("detail.json", "w", encoding="utf-8")
        self.detail_num = 0
        self.start_time = time.time()

    def process_item(self, item, spider):
        # Anything that is not a TencentItem is treated as a detail item here
        if not isinstance(item, TencentItem):
            self.detail_num += 1
            content = json.dumps(dict(item), ensure_ascii=False) + "\n"
            self.file.write(content)
        return item

    def close_spider(self, spider):
        self.end_time = time.time()
        print("---------- Saved " + str(self.detail_num) + " job detail records ----------")
        print("Elapsed time: " + str(self.end_time - self.start_time) + " seconds")
        self.file.close()
pipelines.py likewise contains two classes: one handles the job listing items, the other the detail content. isinstance(item, TencentItem) is what tells the two item types apart: the first argument is an instance, the second a class, and it returns True when the instance belongs to that class (or a subclass of it). Note that process_item must return the item in either branch, so that every item also reaches the next pipeline.
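A quick sketch of how isinstance separates the two item types:

    from tencent2.items import TencentItem, DetailItem

    job = TencentItem()
    detail = DetailItem()
    print(isinstance(job, TencentItem))     # True  -> written by TencentPipeline
    print(isinstance(detail, TencentItem))  # False -> written by DetailPipeline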
Configure settings.py by registering both pipelines:
ITEM_PIPELINES = {
    'tencent2.pipelines.TencentPipeline': 300,
    'tencent2.pipelines.DetailPipeline': 400,
}
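The numbers (anywhere in 0-1000) set the order in which pipelines run: lower values run first. Every item yielded by the spider therefore passes through TencentPipeline and then DetailPipeline, which is why each pipeline checks the item type and returns the item unchanged when it is not responsible for it.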
Run the project: scrapy crawl tencent_crawl
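Each output file ends up in JSON Lines format (one JSON object per line). A minimal sketch for loading the results back after the spider has finished writing tencent.json:

    import json

    with open("tencent.json", encoding="utf-8") as f:
        positions = [json.loads(line) for line in f]

    print(len(positions), "positions loaded")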
Project source code: https://gitee.com/stefanpy/Scrapy_projects/tree/dev/tencent2