Python is a powerful language, and web crawling is one of its strong suits. For work I needed to scrape some data. Plain HTTP GET/POST requests and HTML parsing are no problem, but some operations require the user to be authenticated first, for example by logging in. Logging in is usually nothing more than sending a POST request and then carrying the cookies on later requests. Some login flows, however, cannot be reproduced with plain code requests. What then? That is when you bring out the powerful browser-automation driver.
Below is a quick walkthrough of simulating browser actions in Python:
1. Pick the right browser driver (check the release notes to find a compatible version)
http://chromedriver.storage.googleapis.com/index.html?path=2.37/
https://www.cnblogs.com/JHblogs/p/7699951.html
2. Match the driver to your installed Chrome version
chrome://version/
https://jingyan.baidu.com/article/19192ad8373460e53f570752.html
For example, with Google Chrome 65.0.3325.181 (stable channel, 32-bit) installed,
you can download ChromeDriver v2.36 (2018-03-02) or newer, whose notes say:
Supports Chrome v63-65
3. Unzip the driver to a known path and either add it to the PATH environment variable or point to it from code (or drop it into the lib directory of your Python installation), as in the sketch below
(With a standalone Python install you can locate the install directory yourself; I installed via Anaconda3, and placing the driver in the Scripts folder of the Anaconda3 install directory works.)
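If you would rather not touch PATH, a minimal sketch of pointing Selenium at the driver from code, assuming Selenium 3 (as used when this was written); the path is only an example, adjust it to wherever you unzipped chromedriver:
from selenium import webdriver
# Example path only; replace with your actual chromedriver location.
driver_path = r"C:\tools\chromedriver.exe"
browser = webdriver.Chrome(executable_path=driver_path)
browser.get("https://www.itjuzi.com/")
browser.quit()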
4. Install selenium
pip install selenium
If that fails, see https://www.cnblogs.com/harvey888/p/5467276.html
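A quick way to confirm the install worked (the version printed will depend on your environment):
import selenium
# Prints the installed selenium package version.
print(selenium.__version__)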
5. Launch the browser
from selenium import webdriver
browser = webdriver.Chrome()
6. Interact with the page
browser.execute_script("console.log('finished')")
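execute_script is only one of the available interactions; a few others used by the full script below, sketched here with a placeholder element id:
# Locate an element by id (placeholder id for illustration) and click it.
button = browser.find_element_by_id("login_btn")
button.click()
# Read the cookies of the current browser session.
print(browser.get_cookies())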
7. Full code
import requests
from bs4 import BeautifulSoup
import time
import json
from selenium import webdriver
# Global parameters: request headers for the list API and the company detail pages
listHeader = {
    "Referer": "http://radar.itjuzi.com/",
    "Host": "radar.itjuzi.com",
    "X-Requested-With": "XMLHttpRequest",
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1"
}
detailHeader = {
    "Cache-Control": "no-cache",
    "Host": "www.itjuzi.com",
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1"
}
session_requests = requests.session()
browser = None      # Selenium webdriver instance, created later
loadingTotal = 0    # how many times we have waited for the login page to load
repeatTotal = 0     # duplicate counter used to decide whether to stop paging
# Print a section title
def PrintTitle(title):
    print("\n==========", title, "==========")

# Print the opening banner
PrintTitle("IT桔子 investment and financing events")
# Log in with a plain HTTP POST (kept for reference; the browser login below is what is actually used)
def HttpLogin(account, password):
    data = {"identity": account, "password": password}
    req = session_requests.post("https://www.itjuzi.com/user/login?redirect=&flag=&radar_coupon=", data=data, headers=listHeader)
    req.encoding = "UTF-8"
    print(req.cookies)
# Log in by simulating browser actions
def BrowserLogin(account, password):
    global loadingTotal
    try:
        time.sleep(1)
        loadingTotal = loadingTotal + 1
        print("Waiting for the page to finish loading (" + str(loadingTotal) + ")")
        login_form = browser.find_element_by_id("login_form")  # raises until the form exists
        browser.execute_script("document.getElementById('create_account_email').value ='" + account + "'")
        browser.execute_script("document.getElementById('create_account_password').value ='" + password + "'")
        browser.find_element_by_id("login_btn").click()
        time.sleep(2.2)
        loadingTotal = 0
        # Collect the cookies of the logged-in session and attach them to both header sets
        login_cookies = []
        for item in browser.get_cookies():
            login_cookies.append(item["name"] + "=" + item["value"])
        listHeader["Cookie"] = "; ".join(login_cookies)
        detailHeader["Cookie"] = "; ".join(login_cookies)
        browser.close()
    except:
        if loadingTotal >= 120:
            print("Request timed out")
            return False
        else:
            # The page is not ready yet; retry
            BrowserLogin(account, password)
# Recursively fetch the list of financing events, page by page
def GetList(pageno):
    PrintTitle("Page " + str(pageno))
    req = requests.get("http://radar.itjuzi.com/investevent/info?location=in&orderby=def&page=" + str(pageno), headers=listHeader)
    req.encoding = "UTF-8"
    res = json.loads(req.text)
    if res["status"] != 1:
        print(res["msg"])
        return False
    if len(res["data"]["rows"]) > 0:
        for row in res["data"]["rows"]:
            investor = []
            if type(row["invsest_with"]) is list:
                for invest in row["invsest_with"]:
                    investor.append(invest["invst_name"])
            elif type(row["invsest_with"]) is dict:
                for key in row["invsest_with"]:
                    investor.append(row["invsest_with"][key]["invst_name"])
            record = [
                row["com_id"],       # company primary key (the event primary key is invse_id)
                row["com_name"],     # title
                row["com_logo"],     # thumbnail
                "-",                 # tags (industry), filled in by GetDetail
                "-",                 # city
                row["round"],        # financing round
                row["money"],        # amount raised
                ",".join(investor),  # investors
                row["date"],         # financing date
            ]
            print("\n|---- Crawling: " + record[1])
            print("Raised " + record[6])
            GetDetail(record)
            time.sleep(0.5)
        time.sleep(1.2)
        if repeatTotal > 30:
            print("Too many duplicates, not fetching the next page")
        else:
            GetList(pageno + 1)
    else:
        PrintTitle("All pages fetched")
# Fetch the project description and financing history for one record
def GetDetail(record):
    url = "https://www.itjuzi.com/company/" + str(record[0])
    print(url)
    req = requests.get(url, headers=detailHeader)
    req.encoding = "UTF-8"
    if req.status_code != 200:
        print("The project is offline or does not exist, skipping")
        return False
    soup = BeautifulSoup(req.text, "html.parser")
    description = "-"
    if soup.find("div", class_="introduction") != None:
        description = soup.find("div", class_="introduction").text  # description
    elif soup.find("div", class_="des") != None:
        description = soup.find("div", class_="des").text  # description
    tabs_a = soup.find("div", class_="tagset dbi c-gray-aset tag-list").find_all("a")
    tabs = []
    for t in tabs_a:
        tabs.append(t.text)
    record.append("-")  # industry
    record.append(description)  # description
    record.append(soup.find("ul", class_="contact-list").find_all("li")[-1].find("span").text)  # address
    record.append(soup.find("h1", class_="seo-important-title")["data-fullname"])  # company name
    record.append(soup.find("div", class_="des-more").find("h3", class_="seo-secand-tilte").find("span").text)  # founding date
    record.append(url)  # link
    record.append("IT桔子")  # source
    record[3] = ",".join(tabs)  # tags
    print(record)
    PrintTitle("Fetching the full details requires a paid membership; this account lacks the privilege")
# Browser login (the version actually used)
PrintTitle("About to open the browser and log in automatically")
browser = webdriver.Chrome()
browser.get("https://www.itjuzi.com/user/login")
BrowserLogin("your_account", "your_password")

# Start fetching the financing events
GetList(1)
8. What it looks like when run
When the script runs normally, it opens Chrome automatically, fills the account and password into the login form, and simulates the user's click to submit. After a successful submission it grabs the cookies and caches them, which preserves the logged-in state; later requests to other endpoints only need to carry those cookies.
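As a minimal sketch of that last step, the cookies collected by Selenium can also be loaded into a requests session instead of being joined into a Cookie header; browser here is assumed to be a webdriver instance that has already logged in, and the URL is only illustrative:
import requests
session = requests.session()
# Copy each Selenium cookie into the requests session.
for item in browser.get_cookies():
    session.cookies.set(item["name"], item["value"])
# Subsequent requests carry the logged-in cookies automatically.
resp = session.get("https://www.itjuzi.com/")
print(resp.status_code)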
9. Notes
This material is for learning purposes only; please do not use it for anything illegal.