A little bonus: scraping Tencent News with a Python crawler
After experimenting with several modules (bs4, lxml, re, json) and a number of approaches, I finally found one that works. Hard work pays off.
import requests
import pandas as pd

# Request headers: the Referer and User-Agent make the request look like an
# ordinary browser visit to news.qq.com.
headers = {
    'Referer': 'https://news.qq.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; rv:46.0) Gecko/20100101 Firefox/46.0',
}

# JSON API behind the "24 hours" news list on the Tencent News homepage.
url = 'https://i.news.qq.com/trpc.qqnews_web.kv_srv.kv_srv_http_proxy/list?sub_srv_id=24hours&srv_id=pc&offset=0&limit=20&strategy=1&ext={"pool":["top"],"is_filter":7,"check_type":true}'

res = requests.get(url, headers=headers)
datas = res.json()['data']['list']  # the articles sit under data -> list

# Keep only the fields we care about: title, article URL, thumbnail URL.
datas_list = []
for i in datas:
    datas_dict = {}
    datas_dict['标题'] = i['title']           # title
    datas_dict['网址'] = i['url']             # article URL
    datas_dict['图片网址'] = i['thumb_nail']   # thumbnail URL
    datas_list.append(datas_dict)

df = pd.DataFrame(datas_list)
print(df)
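
The query string carries offset and limit parameters, which suggests the same endpoint can be paged. Here is a minimal sketch, assuming the API simply returns the next batch as offset grows (that behaviour is an assumption, not something verified in this post):

# Sketch: page through the list by stepping offset (assumes the endpoint
# honours offset/limit the way the parameter names suggest).
all_rows = []
for offset in range(0, 100, 20):
    page_url = ('https://i.news.qq.com/trpc.qqnews_web.kv_srv.kv_srv_http_proxy/list'
                f'?sub_srv_id=24hours&srv_id=pc&offset={offset}&limit=20&strategy=1'
                '&ext={"pool":["top"],"is_filter":7,"check_type":true}')
    page = requests.get(page_url, headers=headers).json()
    for item in page['data']['list']:
        all_rows.append({'标题': item['title'],
                         '网址': item['url'],
                         '图片网址': item['thumb_nail']})

df_all = pd.DataFrame(all_rows)
print(df_all)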
Either way, you end up with the data in a DataFrame; it can also be written to an Excel file, as sketched below.
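Writing the DataFrame to Excel takes a single extra line. A minimal sketch (the filename is just an example, and pandas needs the openpyxl package installed to write .xlsx files):

# Save the scraped table to an Excel workbook (example filename).
df.to_excel('tencent_news.xlsx', index=False)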