I've always downloaded movies from 80s, and it occurred to me: could I scrape the download links of well-rated movies? What follows is my first attempt.
URL: https://www.80s.tw/movie/list/----g
From this list page we can scrape each movie's detail URL, its name ('name'), and its Douban rating ('rate').
First, define the Item:
```python
import scrapy


class Test80SItem(scrapy.Item):
    name = scrapy.Field()
    rate = scrapy.Field()
    download_url = scrapy.Field()
```
Analyzing the page:
[Screenshot: page analysis]
The extraction code is as follows:
```python
def parse(self, response):
    movies = response.css('ul.me1.clearfix li')
    for mov in movies:
        mov_url = mov.css('a::attr(href)').extract_first()
        mov_url = response.urljoin(mov_url)
        mov_url = str(mov_url) + '/bd-1'  # note this line; the reason is explained below
        name = mov.css('h3.h3>a::text').extract_first().strip('\n').strip()
        rate = mov.css('a::attr(title)').extract_first().split('豆瓣')[1]  # we only need the score
```
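The split('豆瓣') trick above can be sanity-checked on its own. This is a minimal sketch assuming the link's title attribute ends with '豆瓣' followed by the score; the exact title format on 80s.tw may differ:

```python
# Hypothetical title attribute; the real one on 80s.tw may be formatted differently.
title = '肖申克的救赎豆瓣9.6'

# Split on the literal marker '豆瓣' and keep everything after it.
rate = title.split('豆瓣')[1]
print(rate)  # → 9.6
```

Note that split(...)[1] raises IndexError when the marker is missing, so an unusually formatted title is one place the spider could break.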
Next, grab the link to the next page:
[Screenshot: next page]
It's easy to extract, but a check is needed: when there is no '下一页' (next page) link, every page has been crawled.
```python
next_url = response.css('div.pager>a:nth-last-child(2)::attr(href)').extract_first()
if response.css('div.pager>a:nth-last-child(2)::text').extract_first() == '下一页':
    next_url = response.urljoin(next_url)
    yield Request(next_url, callback=self.parse)
```
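The a:nth-last-child(2) selector picks the second-to-last link in the pager, which is '下一页' on every page except the last. A pure-Python sketch of that termination check (the link texts below are assumptions for illustration, not copied from the site):

```python
# Second-to-last pager link on a middle page vs. a last page (assumed link texts).
middle_page = ['首页', '上一页', '2', '3', '下一页', '尾页']
last_page = ['首页', '上一页', '8', '9', '10']

# links[-2] emulates a:nth-last-child(2)
print(middle_page[-2] == '下一页')  # → True: keep following next_url
print(last_page[-2] == '下一页')    # → False: stop paginating
```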
Now let's enter a specific movie page, opening the famous The Shawshank Redemption.
[Screenshot: the movie page as first loaded]
The TV-format files are too large; the 1024 version is enough. But that content isn't on the page as first loaded: it appears dynamically after a click, so let's click and take a look.
Sure enough, the relevant content showed up.
A crawler, however, can't click through every page by hand. Open the 'Network' tab and inspect the requests that fire: an extra 'bd-1' request appears, as shown:
[Screenshot: the bd-1 request]
The download address we want is right there in its Response.
Checking its Headers,
it turns out the request URL is simply the movie URL with '/bd-1' appended:
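Putting the pieces together: response.urljoin resolves the relative href against the current page's URL, and '/bd-1' is appended to reach the dynamically loaded block. A sketch using urllib (the movie id 1173 is made up for illustration):

```python
from urllib.parse import urljoin

# response.urljoin(href) behaves like urljoin with the current page URL as base.
base = 'https://www.80s.tw/movie/list/----g'
detail = urljoin(base, '/movie/1173')  # hypothetical relative href from the list page
print(detail + '/bd-1')  # → https://www.80s.tw/movie/1173/bd-1
```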
```python
mov_url = str(mov_url) + '/bd-1'  # so this is why
```
Finally, note that 'name', 'rate', and 'download_url' are not all produced in the same function. This is where Request's meta attribute comes in: its job is to pass data along to the next callback.
```python
movie = Test80SItem()
movie['name'] = name
movie['rate'] = rate
yield Request(mov_url, callback=self.parse_movie, meta={'_movie': movie})
```
The next callback picks it up via response.meta[]:
```python
def parse_movie(self, response):
    movie = response.meta['_movie']
    download_url = response.css('input#downid_0::attr(value)').extract_first()
    movie['download_url'] = download_url
    yield movie
```
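The handoff between the two callbacks can be sketched without Scrapy at all: meta is just a dict that travels with the request, so the same item object partially filled in parse() resurfaces in parse_movie(). Everything below (the function shapes, the placeholder URL) is illustrative, not the spider's real API:

```python
# Stand-in for yield Request(..., meta={'_movie': movie}) in parse():
def parse(url, name, rate):
    movie = {'name': name, 'rate': rate}            # partially filled item
    return {'url': url, 'meta': {'_movie': movie}}  # "request" carrying meta

# Stand-in for parse_movie(self, response):
def parse_movie(response):
    movie = response['meta']['_movie']  # the very same dict created in parse()
    movie['download_url'] = 'placeholder-download-url'
    return movie

req = parse('https://www.80s.tw/movie/1173/bd-1', 'The Shawshank Redemption', '9.6')
item = parse_movie(req)
print(item['name'], item['rate'])  # → The Shawshank Redemption 9.6
```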
Now for the final step, settings.py:
```python
import random

# Pretend to be a browser
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
# Ignore robots.txt
ROBOTSTXT_OBEY = False
# Random download delay (in seconds)
DOWNLOAD_DELAY = random.random() + random.random()
# Column order in the exported feed
FEED_EXPORT_FIELDS = ['name', 'rate', 'download_url']
```
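As a quick check on the delay expression above: the sum of two uniform [0, 1) draws always lands in [0, 2), so the spider waits at most about two seconds between requests. (Note the value is computed once at startup; separately, Scrapy's RANDOMIZE_DOWNLOAD_DELAY setting, True by default, already multiplies the delay by a random factor per request.)

```python
import random

# DOWNLOAD_DELAY = random.random() + random.random(): always in [0, 2)
samples = [random.random() + random.random() for _ in range(10_000)]
print(0 <= min(samples) and max(samples) < 2)  # → True
```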
The complete spider:
```python
import scrapy
from scrapy import Request
from ..items import Test80SItem


class A80sSpider(scrapy.Spider):
    name = '80s'
    allowed_domains = ['www.80s.tw']
    start_urls = ['https://www.80s.tw/movie/list/----g']

    def parse(self, response):
        movies = response.css('ul.me1.clearfix li')
        for mov in movies:
            mov_url = mov.css('a::attr(href)').extract_first()
            mov_url = response.urljoin(mov_url)
            mov_url = str(mov_url) + '/bd-1'
            name = mov.css('h3.h3>a::text').extract_first().strip('\n').strip()
            rate = mov.css('a::attr(title)').extract_first().split('豆瓣')[1]
            movie = Test80SItem()
            movie['name'] = name
            movie['rate'] = rate
            yield Request(mov_url, callback=self.parse_movie, meta={'_movie': movie})
        next_url = response.css('div.pager>a:nth-last-child(2)::attr(href)').extract_first()
        if response.css('div.pager>a:nth-last-child(2)::text').extract_first() == '下一页':
            next_url = response.urljoin(next_url)
            yield Request(next_url, callback=self.parse)

    def parse_movie(self, response):
        movie = response.meta['_movie']
        download_url = response.css('input#downid_0::attr(value)').extract_first()
        movie['download_url'] = download_url
        yield movie
```
Run it with: scrapy crawl 80s -o 80s.csv
After writing this I noticed that some movies have no 1024 version, which triggers errors: this code can only scrape movies that do have one. I'll improve and debug it later.
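If the errors do come from pages that lack the 1024 input element, extract_first() simply returns None there, so the fix could start by handling that case explicitly instead of exporting None. A minimal sketch (the 'N/A' fallback is my choice, not part of the original code):

```python
# extract_first() yields a URL string when the 1024 version exists, else None.
def safe_download_url(extracted):
    return extracted if extracted is not None else 'N/A'

print(safe_download_url('https://example.com/movie-1024.mkv'))  # unchanged
print(safe_download_url(None))                                  # → N/A
```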