Scraping novel hrefs with XPath


Author: Rain师兄 | Published 2020-10-24 17:24

    import requests
    from bs4 import BeautifulSoup as bf
    from lxml import etree

    url = 'https://www.soxscc.com/MangHuangJi/'
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}

    resp = requests.get(url, headers=headers)
    resp_xpath = etree.HTML(resp.text)  # parse the page into an XPath-capable tree
    result = resp_xpath.xpath('//*[@id="novel4451"]/dl/dd/a/@href')  # all chapter hrefs

    for i in range(0, 4):
        url1 = 'https://www.soxscc.com/' + result[i]
        print(url1)

    The output: because the loop runs only four times, it prints the URLs of chapters one through four.

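One caveat with the string concatenation above: if the hrefs copied from the page start with a leading "/", gluing them onto 'https://www.soxscc.com/' produces a double slash. `urllib.parse.urljoin` handles both forms; a small sketch (the chapter paths here are made up):

```python
from urllib.parse import urljoin

base = 'https://www.soxscc.com/'

# Hypothetical hrefs as they might appear in the page source
for href in ['/MangHuangJi/4451.html', 'MangHuangJi/4452.html']:
    # urljoin resolves the path against the base, with or without a leading slash
    print(urljoin(base, href))
```

Both iterations print a clean single-slash URL such as https://www.soxscc.com/MangHuangJi/4451.html.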

    XPath makes this much easier. This time I only had to open the browser's inspector, right-click the element, and copy its XPath; then

    result = resp_xpath.xpath('//*[@id="novel4451"]/dl/dd/a/@href')

    gives the list of hrefs, result. A couple of extra lines print it out:

    for i in result:
        print(i)

    Much more convenient.
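The same XPath pattern can be tried offline on a tiny HTML snippet; the id and chapter names below are invented to mirror the structure of the catalogue page:

```python
from lxml import etree

html = '''
<div id="novel4451">
  <dl>
    <dd><a href="/MangHuangJi/1.html">Chapter 1</a></dd>
    <dd><a href="/MangHuangJi/2.html">Chapter 2</a></dd>
  </dl>
</div>
'''

# etree.HTML builds the tree; the same expression as above pulls the hrefs
tree = etree.HTML(html)
print(tree.xpath('//*[@id="novel4451"]/dl/dd/a/@href'))
# → ['/MangHuangJi/1.html', '/MangHuangJi/2.html']
```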

    After that it gets even simpler: just three for loops, and nothing complicated.

    import requests
    from bs4 import BeautifulSoup as bf
    from lxml import etree

    url = 'https://www.soxscc.com/MangHuangJi/'
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}

    resp = requests.get(url, headers=headers)
    resp_xpath = etree.HTML(resp.text)
    result = resp_xpath.xpath('//*[@id="novel4451"]/dl/dd/a/@href')

    for i in range(0, 10):
        output = "\n\n{}\n\n{}\n\n\n\n\n\n"
        url1 = 'https://www.soxscc.com/' + result[i]
        resp1 = requests.get(url1, headers=headers)
        soup = bf(resp1.text, 'lxml')
        title = soup.find('h1').string                      # chapter title
        contents = soup.find_all('div', class_='content')   # chapter body
        for div in contents:                                # don't reuse the outer loop variable i
            content = div.get_text()
            output1 = output.format(title, content)
            with open('souxiaoshuo.txt', 'a', encoding='utf-8') as f:
                f.write(output1)                            # write the whole chapter at once

    This time you can choose how many chapters to download: ten loop iterations download ten chapters.
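The per-chapter extraction step can also be checked without touching the network; the markup below is invented to match what find and find_all expect:

```python
from bs4 import BeautifulSoup as bf

# A made-up chapter page with the same h1 / div.content layout
page = '<h1>Chapter 1</h1><div class="content">Line one. Line two.</div>'
soup = bf(page, 'html.parser')  # html.parser: no lxml needed for this sketch

title = soup.find('h1').string
for div in soup.find_all('div', class_='content'):
    print(title, '->', div.get_text())
# → Chapter 1 -> Line one. Line two.
```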

    Next time: scraping NetEase Cloud Music comments...

    If you have the page as a string (the resp.text above), etree.HTML turns it into a tree that supports the .xpath() method; inside the parentheses goes the XPath of whatever you want to scrape, which you can copy straight from the element in the browser's inspector. (Occasionally a copied XPath still fails to match anything.)

    Ending the expression with /@href extracts the address stored in the a tag's href attribute.

    Ending it with /text() returns the tag's text content instead.
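The two endings can be compared side by side on a single made-up anchor:

```python
from lxml import etree

tree = etree.HTML('<dd><a href="/MangHuangJi/1.html">Chapter 1</a></dd>')
print(tree.xpath('//a/@href'))   # → ['/MangHuangJi/1.html']  (the address)
print(tree.xpath('//a/text()'))  # → ['Chapter 1']  (the text)
```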

    .format() fills its arguments into the {} placeholders of the template string; for example, '{}: {}'.format('title', 'content') gives 'title: content'.
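With the template used in the scraper above, the title and chapter text land in the two slots:

```python
output = "\n\n{}\n\n{}\n\n\n\n\n\n"
# The two {} slots are filled in order: title first, content second
print(output.format('Chapter 1', 'Some chapter text...'))
```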


Title: Scraping novel hrefs with XPath
Link: https://www.haomeiwen.com/subject/epfwmktx.html