Scraping Nationwide Cinema Hall Seat Counts (with Scrapy)

Author: 我叫GTD | Published 2018-02-09 19:52

The target is a certain "Yan" site (Maoyan). At first, at a classmate's request, I tried fetching seat data by region, but found this required loading at least three pages per cinema, and the second page alone needed several fetches before all halls showed up, as below:


(Screenshot: only halls 2, 4, and 5 appear on this page)

Reaching the pages with the remaining halls would mean simulating clicks, so entering every cinema would lean on Selenium+PhantomJS again and again (and I barely knew Selenium at the time). So, in proper novice brute-force fashion, I sat down and carefully studied the URL the site uses when you pick seats.
And there it was:

#The URLs below need careful inspection and correction before they will open
http://mouyan.com/xseats/201802090120619?movieId=1208122&cinemaId=7349
http://mouyan.com/xseats/201802090120619

20180209 (the date) + 0120619 (a number that looks random; after a few tries I guessed it's a nationwide counter ordered by time). So couldn't the URLs simply be generated like this:

    name = 'seats'
    #If the crawl is expected to take two days to finish, set the date to the day after tomorrow, i.e. 201802XX
    base_url = 'http://mouyan.com/xseats/20180209'
    start_urls = []
    for i in range(1, 1000000):
        s = '%07d' % i  # zero-pad the sequence number to 7 digits
        url = base_url + s
        start_urls.append(url)
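
A side note, not in the original: building a million-entry start_urls list up front holds every URL in memory. A minimal alternative sketch using Scrapy's standard start_requests() hook (same date assumption as above) generates them lazily:

    def start_requests(self):
        # yield requests one at a time instead of materializing a huge list
        for i in range(1, 1000000):
            yield scrapy.Request(self.base_url + '%07d' % i, callback=self.parse)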

Now to analyze the page:


(Screenshot: page structure of the seat map)

div.row>span comes with three different class combinations: seat selectable, seat empty, and seat sold. The ones to count are seat selectable and seat sold. Hence:

cinemas = CinemaSeatsItem()
row = response.css('div.seats-wrapper>div.row')
all_seats = []
for seats in row:
    # count selectable and sold seats per row; 'seat empty' spans are skipped
    seat1 = seats.css('span.seat.selectable::attr(data-column-id)').extract()
    seat2 = seats.css('span.seat.sold::attr(data-column-id)').extract()
    all_seats.append(len(seat1 + seat2))
cinemas['all_num'] = sum(all_seats)
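
For what it's worth, if only the total matters, the per-row loop can be collapsed: parsel's CSS selectors accept comma-separated groups, so the same sum could plausibly be computed in one line (at the cost of losing the per-row counts):

all_num = len(response.css('div.seats-wrapper span.seat.selectable, div.seats-wrapper span.seat.sold'))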

We also need to detect and handle the following case:


(Screenshot: an error page reading 场次信息不存在, i.e. "no such showtime")

Which gives:

if response.css('div.modal p.tip::text').extract_first() == '场次信息不存在':
    yield None  # no such showtime: nothing to collect for this URL; Scrapy ignores None

And Maoyan is no pushover; did you think it would let us crawl in peace? Of course not. After N requests you get to witness a miracle:


(Screenshot: a captcha page prompting 输入验证码)

Typing it in works, but manually refreshing the page each time to type it is a bit low, even though that's exactly what my 1.0 version did. Since my classmate needed this for serious use, it had to be done properly, so I finally took a real look at Selenium+PhantomJS. That gave me this:

    def parse(self, response):
        #check whether access has been blocked
        if response.css('head>title::text').extract_first() == '猫眼访问控制':

            driver = webdriver.PhantomJS()
            driver.get('http://maoyan.com/')

            captcha_url = driver.find_element_by_css_selector('div.row>img').get_attribute('src')
            resp = requests.get(captcha_url)
            img = Image.open(BytesIO(resp.content))
            #Some captchas are complex; a few I could barely read by eye, so I skipped Tesseract. Also, I'm not that good yet.
            img.show()
            captcha = input('请输入验证码:')

            element = driver.find_element_by_name('captcha_code')
            element.send_keys(captcha, Keys.ENTER)
            #driver.find_element_by_css_selector('button.row').click()
            driver.close()

        else: #count the hall's seats

The full spider:

import scrapy
from ..items import CinemaSeatsItem
import requests
from PIL import Image
from io import BytesIO
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class SeatsSpider(scrapy.Spider):
    name = 'seats'
    base_url = 'http://maoyan.com/xseats/20180211'
    start_urls = []
    for i in range(1,1000000):
        s = '%07d' % i
        url = base_url + s
        start_urls.append(url)

    def parse(self, response):
        #check whether access has been blocked
        if response.css('head>title::text').extract_first() == '猫眼访问控制':

            driver = webdriver.PhantomJS()
            driver.get('http://maoyan.com/')

            captcha_url = driver.find_element_by_css_selector('div.row>img').get_attribute('src')
            resp = requests.get(captcha_url)
            img = Image.open(BytesIO(resp.content))
            img.show()
            captcha = input('请输入验证码:')

            element = driver.find_element_by_name('captcha_code')
            element.send_keys(captcha, Keys.ENTER)
            #driver.find_element_by_css_selector('button.row').click()
            driver.close()

        else: #count the hall's seats
            if response.css('div.modal p.tip::text').extract_first() == '场次信息不存在':
                yield None  # no such showtime: nothing to collect; Scrapy ignores None
            else:
                cinemas = CinemaSeatsItem()
                row = response.css('div.seats-wrapper>div.row')
                all_seats = []
                for seats in row:
                    # count selectable and sold seats; 'seat empty' spans are skipped
                    seat1 = seats.css('span.seat.selectable::attr(data-column-id)').extract()
                    seat2 = seats.css('span.seat.sold::attr(data-column-id)').extract()
                    all_seats.append(len(seat1 + seat2))
                cinemas['all_num'] = sum(all_seats)
                cinema = response.css('div.show-info>div.info-item:nth-child(1)>span.value.text-ellipsis::text').extract_first()
                room = response.css('div.show-info>div.info-item:nth-child(2)>span.value.text-ellipsis::text').extract_first()
                cinemas['cinema_name'] = cinema
                cinemas['room_num'] = room
                yield cinemas
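
One gap worth flagging in the spider above: the captcha is solved inside a separate PhantomJS session, whose cookies Scrapy never sees, and the blocked URL is never retried. A hedged sketch of how both could be patched, using Selenium's get_cookies() and Scrapy's Request arguments (whether Maoyan's unblock is actually cookie-based is my assumption):

            # after element.send_keys(captcha, Keys.ENTER), before closing the driver:
            cookies = [{'name': c['name'], 'value': c['value']}
                       for c in driver.get_cookies()]
            driver.close()
            # retry the blocked URL with the (hopefully) cleared session;
            # dont_filter=True bypasses Scrapy's duplicate-request filter
            yield scrapy.Request(response.url, cookies=cookies,
                                 callback=self.parse, dont_filter=True)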

Since a hall never screens only one film per day, the same hall will be scraped multiple times, so deduplication goes into a pipeline:

from scrapy.exceptions import DropItem

class CinemaSeatsPipeline(object):
    def process_item(self, item, spider):
        return item

class DuplicatesPipeline(object):

    def __init__(self):
        self.cinema_set = set()

    def process_item(self, item, spider):
        cinema_info = item['cinema_name'] + item['room_num']
        if cinema_info in self.cinema_set:
            raise DropItem('Duplicate hall found: %s' % item)
        self.cinema_set.add(cinema_info)
        return item
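
A small caveat on the dedup key (my note, not the original's): plain string concatenation can collide, e.g. cinema 'AB' + hall '1' equals cinema 'A' + hall 'B1'. A tuple key sidesteps this:

    def process_item(self, item, spider):
        cinema_info = (item['cinema_name'], item['room_num'])  # tuple instead of concatenation
        if cinema_info in self.cinema_set:
            raise DropItem('Duplicate hall found: %s' % item)
        self.cinema_set.add(cinema_info)
        return item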

items.py:

import scrapy

class CinemaSeatsItem(scrapy.Item):
    cinema_name = scrapy.Field()
    room_num = scrapy.Field()
    all_num = scrapy.Field()

settings.py:

import random
BOT_NAME = 'cinema_seats'
SPIDER_MODULES = ['cinema_seats.spiders']
NEWSPIDER_MODULE = 'cinema_seats.spiders'
FEED_EXPORT_FIELDS = ['cinema_name', 'room_num', 'all_num']
ROBOTSTXT_OBEY = False
DEFAULT_REQUEST_HEADERS = {
    'DNT': 1,
    'Accept-Encoding': "gzip, deflate",
    'Accept-Language': "zh-CN,zh;q=0.9",
    'Upgrade-Insecure-Requests': "1",
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36",
    'Accept': "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    'Cache-Control': "no-cache",
    'Cookie': "_lxsdk=161658cf0cdc8-0de22982ba06fc-3c604504-e1000-161658cf0cd80; uuid=1A6E888B4A4B29B16FBA1299108DBE9CEA6C7AFFDEFBF75A8235B02C850AA34F; _lx_utm=utm_source%3Dbaidu%26utm_medium%3Dorganic; _csrf=f25a28e378b1ddc3698f259656964e65c83aba80df56e2c6e2950a5f94895d8b; __mta=214153768.1517827207525.1517881319737.1517881413851.47; _lxsdk_s=cdbd58d2a6156068c4179155a0f1%7C%7C24",
    'Connection': "keep-alive",
}
DOWNLOAD_DELAY = random.random()
ITEM_PIPELINES = {
    'cinema_seats.pipelines.CinemaSeatsPipeline': 300,
    'cinema_seats.pipelines.DuplicatesPipeline': 350,
}
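
One note on the delay line: DOWNLOAD_DELAY = random.random() draws a single value between 0 and 1 once at startup and uses it for the whole run. Scrapy already multiplies DOWNLOAD_DELAY by a random factor between 0.5 and 1.5 on every request while RANDOMIZE_DOWNLOAD_DELAY is on (the default), so a plain constant gives the per-request jitter this line seems to be after:

DOWNLOAD_DELAY = 0.5             # base delay in seconds
RANDOMIZE_DOWNLOAD_DELAY = True  # default; actual delay varies in [0.25, 0.75] s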

That's it. Now for the shortcomings:
I didn't run the full crawl myself; my classmate's hit rate was roughly 1 in 7, and with dedup included closer to one valid record per 15 requests. That wastes time and machine resources on our end, and Maoyan's resources too.
It also can't group cinemas by region, which was the original request.
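
Since the sequence numbers appear to be time-ordered (per the earlier guess), one hedged mitigation for the wasted requests, not something this spider does, would be to close the crawl once the counter runs past the last real showtime, e.g. after a long streak of "no such showtime" pages:

from scrapy.exceptions import CloseSpider

# inside parse(): stop after, say, 5000 consecutive misses
# (the threshold is a guess; misses also occur mid-range)
if response.css('div.modal p.tip::text').extract_first() == '场次信息不存在':
    self.miss_streak = getattr(self, 'miss_streak', 0) + 1
    if self.miss_streak > 5000:
        raise CloseSpider('sequence numbers exhausted')
else:
    self.miss_streak = 0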
