Scraping Romance of the Three Kingdoms with Python

Author: bluescorpio | Published 2016-08-06 16:40

The four steps of any crawler:
1. Where to crawl from (where)
2. What to crawl (what)
3. How to crawl it (how)
4. How to save the scraped data (save)

This post is a practice program: scraping Romance of the Three Kingdoms.

Where to crawl from

Romance of the Three Kingdoms on shicimingju.com: http://www.shicimingju.com/book/sanguoyanyi.html

What to crawl

The full text of Romance of the Three Kingdoms

How to crawl it

Open Chrome's developer tools with F12 and you can see that the chapter text lives in this node:

<div id="con" class="bookyuanjiao">

We just need to find this node and then write its contents out to an HTML file:

content = soup.find("div", {"class": "bookyuanjiao", "id": "con"})

How to save the results

Once we have the content, we splice it into an HTML file and save it to disk.
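For a single chapter, the whole fetch-extract-save flow fits in a few lines. This is a minimal sketch, assuming Python 2; the head template is inlined here instead of being read from the 0.html file used by the full script, and the chapter URL follows the pattern the script prints.

# -*- coding: utf-8 -*-
# Minimal sketch: fetch one chapter, extract the content div, save it.
import urllib2
from bs4 import BeautifulSoup as BS

url = 'http://www.shicimingju.com/book/sanguoyanyi/1.html'  # chapter 1
html = urllib2.urlopen(url).read()
soup = BS(html, 'lxml')
content = soup.find("div", {"class": "bookyuanjiao", "id": "con"})

head = '<html><head><meta charset="utf-8"></head><body>'  # simplified 0.html
with open('chapter1.html', 'w') as f:
    f.write(head + str(content) + '</body></html>')

The full script below applies the same steps to every chapter listed in the table of contents.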

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import urllib2
from urlparse import urljoin
from bs4 import BeautifulSoup as BS
from lxml import etree

# Python 2 workaround so unicode chapter titles can be used in file names.
reload(sys)
sys.setdefaultencoding('gbk')

# Save all chapters into a sanguoyanyi/ sub-folder.
sub_folder = os.path.join(os.getcwd(), "sanguoyanyi")
if not os.path.exists(sub_folder):
    os.mkdir(sub_folder)

path = sub_folder

# Customized HTML used as the head of every saved chapter (0.html is shown below).
with open(r'0.html', 'r') as head_file:
    head = head_file.read()

domain = 'http://www.shicimingju.com/book/sanguoyanyi.html'
t = domain.find(r'.html')
first_chapter_url = domain[:t] + "/" + str(1) + '.html'
print first_chapter_url  # sanity check: the URL of chapter 1

# Fetch the table of contents and collect the chapter links.
req = urllib2.Request(url=domain)
resp = urllib2.urlopen(req)
html = resp.read()
soup = BS(html, 'lxml')
chapter_list = soup.find("div", {"class": "bookyuanjiao", "id": "mulu"})
sel = etree.HTML(str(chapter_list))
result = sel.xpath('//li/a/@href')  # relative links to every chapter

for each_link in result:
    # urljoin resolves each relative href against the book's URL.
    each_chapter_link = urljoin(domain, each_link)
    print each_chapter_link
    req = urllib2.Request(url=each_chapter_link)
    resp = urllib2.urlopen(req)
    html = resp.read()

    soup = BS(html, 'lxml')
    content = soup.find("div", {"class": "bookyuanjiao", "id": "con"})
    # The page title looks like "第一回..._《三国演义》_诗词名句网"; keep the chapter name only.
    title = soup.title.text
    title = title.split(u'_《三国演义》_诗词名句网')[0]

    # Splice the chapter body between the head template and the closing tags.
    html = head + str(content) + "</body></html>"

    filename = os.path.join(path, title + ".html")
    print filename
    with open(filename, 'w') as output:
        output.write(html)

The contents of 0.html are as follows:

<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body>


Reader comments

  • bluescorpio: When it hits an error it bails out of the loop. You could print the URL of that page and then crawl it separately.
    掂吾掂: @bluescorpio Problem solved. When the program really can't keep running after an error, what I do now is connect to MySQL from Python and record every article's status there; if a run errors out, the next run continues crawling from where it stopped... Many thanks, this post was a real inspiration...
  • 掂吾掂: One more question: if one chapter of Romance of the Three Kingdoms fails mid-crawl, the whole program stops. I tried a try statement, but it seems even with try the program won't continue after the error.. (see the sketch after these comments)
  • 掂吾掂: Thanks a lot for sharing... After working through this tutorial I crawled every book under http://www.shicimingju.com/book/ myself...
  • 咸鱼爱学习: A beginner quietly watching.
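On the try question above: wrapping the body of the loop in try/except does keep the crawl going, as long as the except clause only records the failure instead of re-raising. A minimal sketch against the script above (result, domain, and the parsing code are unchanged; the timeout value is an arbitrary choice):

# Skip chapters that fail instead of aborting the whole crawl.
failed = []
for each_link in result:
    each_chapter_link = urljoin(domain, each_link)
    try:
        html = urllib2.urlopen(each_chapter_link, timeout=10).read()
        # ... parse and save exactly as in the main script ...
    except Exception as e:
        # Record the bad URL and move on; retry these in a second pass.
        print "failed: %s (%s)" % (each_chapter_link, e)
        failed.append(each_chapter_link)

A lighter-weight alternative to the MySQL bookkeeping mentioned in the comments is to name each output file after its chapter number (taken from the URL) and skip any chapter whose file already exists, so a re-run resumes where the last one stopped.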
