Scraping Text from Any Chinese Web Page

Author: Dotartisan | Published 2018-10-30 18:28

When doing web page classification or NLP research, you often have to build your own text dataset to train a model. Stallions is deeply optimized for scraping Chinese web pages and smooths over the usual snags of collecting that data, which makes it a handy tool.
Installation
pip install stallions

Note: Python 3 only.

Usage

from stallions import extract

url = "https://www.163.com/"
article = extract(url=url)
# extract the title
print("title", article.title)
# extract the h1
print("h1", article.h1)
# extract the meta keywords
print("meta_keywords", article.meta_keywords)
# extract the meta description
print("meta_description", article.meta_description)
# extract the full text content of the page
print(article.content)

The title, h1, meta keywords, and meta description tags carry the content that matters most for classifying a page.
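
For example, here is a minimal sketch of how those fields could be folded into one labeled training record (the to_sample helper and the label value are my own illustration, not part of stallions):

from stallions import extract

def to_sample(url, label):
    # hypothetical helper: turn one page into a labeled training record
    article = extract(url=url)
    text = " ".join(filter(None, [article.title, article.h1,
                                  article.meta_keywords,
                                  article.meta_description]))
    return {"text": text, "label": label}

sample = to_sample("https://www.163.com/", label="portal")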

In practice, scraping Chinese pages usually runs into two headaches:
1. Chinese sites do not use a uniform encoding, so pages frequently come back garbled (mojibake).
2. Text inside <script> tags and HTML comments is hard to weed out during extraction.

import re

def clean_content(content):
    """Collapse \r, \n, and whitespace runs."""
    # keep only Chinese characters; replace everything else with a space
    return re.sub(r'[^\u4e00-\u9fa5]+', ' ', content)

At first, I used the regex above to brute-force the Chinese text out of a page, but this throws away any digits and English embedded in the content.
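
A slightly wider character class keeps that information; a minimal variant (my own suggestion, not part of stallions):

import re

def clean_content_mixed(content):
    # keep Chinese characters, Latin letters, and digits;
    # collapse every other run of characters to a single space
    return re.sub(r'[^\u4e00-\u9fa5A-Za-z0-9]+', ' ', content)

print(clean_content_mixed("2018年10月, NLP研究!"))  # -> '2018年10月 NLP研究 '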

Stallions solves both problems well.
Adapting to pages in different encodings:

def get_html(self, url):
    # fetch the page, then normalize the encoding before decoding
    try:
        req = requests.get(url, headers=self.headers, timeout=self.http_timeout)
        content_encodings = requests.utils.get_encodings_from_content(req.text)
        if req.encoding == 'ISO-8859-1':
            # requests falls back to ISO-8859-1 when the Content-Type header
            # declares no charset; prefer the charset declared in the page's
            # own <meta> tags, then chardet's guess (apparent_encoding)
            if len(content_encodings) > 0:
                req.encoding = content_encodings[0]
            else:
                req.encoding = req.apparent_encoding
        elif len(content_encodings) > 0 and content_encodings[0] == "GBK":
            # trust an explicit GBK declaration over the header
            req.encoding = content_encodings[0]
        req.keep_alive = False
        return req.text
    except Exception as e:
        print(e)
        return None
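
The ISO-8859-1 branch exists because of a requests quirk: when a response's Content-Type header declares no charset, requests falls back to ISO-8859-1, and Chinese bytes decode as mojibake. A quick way to see the pieces involved (output depends on the site's headers at the time you run it):

import requests

resp = requests.get("https://www.163.com/")
print(resp.encoding)             # from the Content-Type header, else ISO-8859-1
print(resp.apparent_encoding)    # guessed from the raw bytes via chardet
# charsets declared in the page's own <meta> tags
print(requests.utils.get_encodings_from_content(resp.text))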

By default it strips the text contained in <script> tags and HTML comments.

class EliminateScript:
    @staticmethod
    def delete_all_tag(html_raw):
        # strip HTML comments (<!-- ... -->), <style> blocks, and <script> blocks
        html_raw = EliminateScript.delete_notes(html_raw)
        html_raw = EliminateScript.delete_tags(html_raw, "style")
        return EliminateScript.delete_tags(html_raw, "script")

    @staticmethod
    def delete_tags(html_raw, tags):
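        # remove every <tags ...>...</tags> block: split on the opening tag
        # and keep only the text after each matching closing tag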
        html_list = html_raw.split("<{0}".format(tags))
        fresh_html = ""
        if html_raw.startswith("<{0}".format(tags)):
            for i, htm in enumerate(html_list):
                if "</{0}>".format(tags) not in htm:
                    continue
                fresh_html += htm.split("</{0}>".format(tags))[-1]
        else:
            for i, htm in enumerate(html_list):
                if i == 0:
                    fresh_html += htm
                    continue
                if "</{0}>".format(tags) not in htm:
                    continue
                fresh_html += htm.split("</{0}>".format(tags))[-1]
        return fresh_html

    @staticmethod
    def delete_notes(html_raw):
        # remove HTML comment blocks (<!-- ... -->)
        html_list = html_raw.split("<!--")
        fresh_html = ""
        if html_raw.startswith("<!--"):
            for i, htm in enumerate(html_list):
                if "-->" not in htm:
                    continue
                fresh_html += htm.split("-->")[-1]
        else:
            for i, htm in enumerate(html_list):
                if i == 0:
                    fresh_html += htm
                    continue
                if "-->" not in htm:
                    continue
                fresh_html += htm.split("-->")[-1]
        return fresh_html
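
A quick check of the stripping behavior (my own usage sketch, not from the stallions docs):

html = '<div>正文<script>var x = 1;</script><!-- ad --><style>p{color:red}</style></div>'
print(EliminateScript.delete_all_tag(html))
# -> '<div>正文</div>'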
