I've only just picked up this framework (honestly I'm not good with it yet), so this is a small crawler written purely as practice for beginners.
Chapter sorting isn't implemented yet, but I do have an idea for it (once sorting is done, crawling could be faster, presumably because the download delay below could then be dropped):
1/ Detect whether a chapter title is numbered with Chinese numerals
2/ If it is, convert the Chinese numerals to Arabic digits
3/ Sort the chapters by those Arabic digits (see the sketch after this list)
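A minimal sketch of that conversion, assuming chapter titles look like 第十二章; cn_to_int and chapter_key are hypothetical helpers, not part of the original script:

import re

CN_DIGITS = {"零": 0, "一": 1, "二": 2, "三": 3, "四": 4,
             "五": 5, "六": 6, "七": 7, "八": 8, "九": 9}
CN_UNITS = {"十": 10, "百": 100, "千": 1000}

def cn_to_int(s):
    # Convert a Chinese numeral such as "一百二十三" to 123
    total, num = 0, 0
    for ch in s:
        if ch in CN_DIGITS:
            num = CN_DIGITS[ch]
        elif ch in CN_UNITS:
            total += (num or 1) * CN_UNITS[ch]  # a bare "十" means 10
            num = 0
    return total + num

def chapter_key(title):
    # Pull the numeral out of a title like "第十二章 ..." for use as a sort key
    m = re.search(r"第([零一二三四五六七八九十百千]+)章", title)
    return cn_to_int(m.group(1)) if m else 0

# chapter_titles.sort(key=chapter_key)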
Note:
1/ A download delay needs to be configured in settings (a sketch follows this list). You can leave it out, but the framework downloads asynchronously on multiple threads, so without a delay the chapters may end up out of order. Set the delay to 0.2s; 0.1s still produces ordering errors.
2/ A few packages need to be installed with pip: scrapy, requests, and lxml.
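If you generated a full Scrapy project, the delay belongs in settings.py; a minimal sketch, showing only the relevant line:

# settings.py
DOWNLOAD_DELAY = 0.2  # 0.2s between requests; 0.1s was still too fast

Since the script below runs standalone rather than inside a project, the same setting can instead be passed to CrawlerProcess, as shown at the end.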
# Imports
import os
import re
import requests
import scrapy
from lxml import etree
# Biquge novel downloader
# Fetch a page and parse its HTML into an lxml tree
def Get_Context(_PATH, head):
    r = requests.get(_PATH, headers=head)  # headers must be passed by keyword; positionally they would be sent as query params
    r.encoding = 'gb2312'  # the site serves GB2312-encoded pages
    se = etree.HTML(r.text)
    return se
# Search the site for the novel and return its index-page URL
def Re_Book_Path():
    book_name = input("Enter the title of the novel you want to read: ")
    header_dict = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.10 Safari/537.36"}
    # The search form expects the keyword percent-encoded as GB2312 bytes
    bookname = str(book_name).encode("gb2312").hex()
    names = re.sub(r"(?<=\w)(?=(?:\w\w)+$)", " ", bookname.upper())  # split the hex string into two-character byte pairs
    name = " " + names
    urls = name.replace(" ", "%")  # prefix every byte pair with '%'
    se = Get_Context("http://www.biquge.com.tw/modules/article/soshu.php?searchkey=+" + urls, header_dict)
    # Pick the search-result row whose link text matches the title exactly
    book_path = se.xpath("//*[@id='nr']/td[1]/a[text()='%s']/@href" % book_name)[0]
    return str(book_path)
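# Side note: the hex-and-replace dance above is hand-rolled GB2312
# percent-encoding; the standard library can do the same in one call
# (a sketch, not part of the original script):
#
#     from urllib.parse import quote
#     urls = quote(book_name, encoding="gb2312")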
# Collect every chapter URL from the novel's index page
def Text_Url():
    se = Get_Context(Re_Book_Path(), {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.10 Safari/537.36"})
    hrefs = se.xpath("//*[@id='list']/dl/dd/a/@href")  # relative chapter links
    Text_Urls = []
    for URLS in hrefs:
        Text_Urls.append("http://www.biquge.com.tw" + URLS)  # make each link absolute
    return Text_Urls
class Books(scrapy.Spider):
    name = "Books"  # spider name
    start_urls = Text_Url()  # the chapter URL list is built once, at class-definition time

    def parse(self, response):
        Book_Chapter = response.xpath("//div[@class='bookname']/h1/text()").extract_first()  # chapter title
        Book_Contents = response.xpath("//div[@id='content']/text()").extract()  # raw chapter text
        Book_Name = response.xpath("//div[@class='bookname']//a[3]/text()").extract_first()  # novel title
        Save_Path = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + os.path.sep + ".")  # save next to this script
        texts = "".join(Book_Contents).replace("\xa0", "").replace(" ", "").strip()  # drop the &nbsp; indents and stray spaces
        # Append the chapter to a single .txt file named after the novel
        with open(Save_Path + "/" + Book_Name + ".txt", "a+", encoding="utf-8") as f:
            f.write(Book_Chapter.strip() + "\n" + texts + "\n")
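To run the spider, a minimal sketch (assuming everything above lives in a single file) is to drive it with Scrapy's CrawlerProcess, which also takes the download delay from the note above:

from scrapy.crawler import CrawlerProcess

if __name__ == "__main__":
    process = CrawlerProcess(settings={"DOWNLOAD_DELAY": 0.2})
    process.crawl(Books)
    process.start()  # blocks until every chapter has been fetched and saved

Alternatively, scrapy runspider works if the delay is configured in project settings.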