Recently I needed a proxy to get online, but the free IPs floating around the web are barely usable: the odds of randomly hitting a working one are tiny, so the only option is to download them all and test each one. I scraped Xici Proxy's high-anonymity list (西刺代理 - 高匿). After flipping through a couple of pages to see how the data is served, it turned out to be my least favorite case: the values sit right in the page inside bare td/tr tags, with no class or id hooks for selectors. Regex seemed like the only way in, and other people's write-ups confirmed it.
For example, a Jianshu post that also scrapes Xici this way.
Then I remembered that pandas' read_html can read tables straight out of a web page — practically made for this situation. The official documentation is here.
But pandas is not a dedicated HTTP library: you can't pass request parameters or add headers to pose as a browser. For a page that answers a bare, parameterless GET, pandas.read_html(url)[0] gives you the result directly; read_html returns a list of every table on the page, and the [0] selects which table you want. Xici, however, answers that kind of request with a 503. So the trick is to fetch the body with requests first and then let pandas parse the table out of it.
The basic skeleton boils down to these three lines:
import requests
import pandas as pd

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
resp = requests.get('http://www.xicidaili.com/nn/', headers=headers)  # browser-like headers to dodge the 503
df = pd.read_html(resp.content, header=0)[0]  # read_html returns a list of DataFrames; [0] is the first table
print(df)
The output is the whole proxy table as a DataFrame (screenshot omitted).
Then it's just a matter of scraping a few more pages and merging them; I merged the first 6 pages of IPs here. You could do this with scrapy too, but given how short-lived these IPs are, the first few pages felt like plenty, and scrapy is overkill for it.
The scraped pages are merged into one pandas DataFrame, and then a proxy is built in whatever form each check needs. I used two validation methods. The first is telnetlib.Telnet, which needs no proxy address at all: just call telnetlib.Telnet(x, port=str(y), timeout=0.5), where x is the IP and y is the port.
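Wrapped into a helper, that first check might look like this (a minimal sketch; telnet_ok is just an illustrative name, and the full script below does the same thing in yanz()):

import telnetlib

def telnet_ok(ip, port):
    # True if ip:port accepts a TCP connection within 0.5 seconds.
    try:
        telnetlib.Telnet(ip, port=str(port), timeout=0.5)
        return True
    except Exception:
        return False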
The second is requests with the proxy attached, checking whether a target site actually loads through it. The basic shape is proxy = {'http': 'http://162.105.30.101:8080'} (note the lowercase 'http' key — requests matches proxy keys against the URL scheme case-sensitively), so the IP and port have to be joined into a single string.
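Spelled out, that check is just this (a sketch, reusing the sample ip:port from the text):

import requests

proxies = {'http': 'http://162.105.30.101:8080'}  # sample ip:port from the text
resp = requests.get('http://www.baidu.com/', proxies=proxies, timeout=3)
print(resp.status_code)  # 200 means the proxy carried the request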
(Related: using a single IP vs. a pool of IPs.)
What strings the IP and port together is df1['add'] = list(map(lambda x, y: 'http://' + str(x) + ':' + str(y), df1['IP地址'], df1['端口'])).
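The same column can also be built with pandas' vectorized string concatenation, which avoids the map/lambda dance (same df1 columns assumed):

df1['add'] = 'http://' + df1['IP地址'].astype(str) + ':' + df1['端口'].astype(str)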
The check URL is Baidu's homepage. I added this second method because some IPs that pass the telnet check still don't work in practice; if a proxy manages to bring back the title 「百度一下,你就知道」, it is almost certainly genuinely usable.
So the full pipeline is: fetch the HTML body with requests, pull the whole table out with pandas.read_html, then validate.
import random
from pyquery import PyQuery as pq
import telnetlib
import requests
import pandas as pd

def pa():
    # Scrape pages 1-6 of the high-anonymity list into one DataFrame.
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
    con = []
    for i in range(1, 7):  # /nn/ and /nn/1 are the same page, so number the pages explicitly
        resp = requests.get('http://www.xicidaili.com/nn/{}'.format(i), headers=headers)
        con.append(pd.read_html(resp.content, header=0)[0])
    df1 = pd.concat(con, ignore_index=True)
    # First pass: keep only rows whose port answers a TCP connection.
    df1['status'] = list(map(lambda x, y: yanz(x, y), df1['IP地址'], df1['端口']))
    df1.to_csv('vpn4.csv')

def yanz(x, y):
    # Telnet check: 'yes' if x:y accepts a connection within 0.5 s, else 'no'.
    try:
        telnetlib.Telnet(x, port=str(y), timeout=0.5)
    except Exception:
        return 'no'
    else:
        return 'yes'

def shuai():
    # Second pass: take the telnet survivors and fetch Baidu through each one.
    df = pd.read_csv('vpn4.csv')
    df1 = df[df['status'] == 'yes'].copy()  # .copy() avoids SettingWithCopyWarning
    # Join IP and port into 'http://ip:port' -- no literal quote marks around it,
    # or requests would see a malformed proxy URL.
    df1['add'] = list(map(lambda x, y: 'http://' + str(x) + ':' + str(y), df1['IP地址'], df1['端口']))
    df1['st'] = df1['add'].map(lambda x: yanzheng(x))
    df1.to_csv('vpn.csv')

def yanzheng(pro):
    url = 'https://www.baidu.com/'
    user_agent = [
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
        "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
        "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
        "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
        "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
        "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
        "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    ]
    try:
        # The url is https, so the dict needs an 'https' entry as well;
        # with an 'http'-only dict requests would bypass the proxy entirely.
        resp = requests.get(url, proxies={'http': pro, 'https': pro}, timeout=0.5,
                            headers={'User-Agent': random.choice(user_agent)})
        resp.encoding = 'utf-8'  # the original 'jbk' was a typo; Baidu serves utf-8
        res = pq(resp.text)
    except requests.RequestException as e:
        return str(e)
    else:
        return res('title').text()

if __name__ == '__main__':
    pa()      # scrape and telnet-filter -> vpn4.csv
    shuai()   # proxy-check the survivors -> vpn.csv
The result (screenshot omitted): vpn.csv lists each surviving proxy's address together with the page title it returned.
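To actually put the list to work, something like this picks a working proxy back out of vpn.csv (a minimal sketch; it assumes the 'st' column holds the Baidu title exactly as returned above, and uses httpbin.org/ip only as a convenient way to see which IP the target sees):

import random
import requests
import pandas as pd

df = pd.read_csv('vpn.csv')
# Keep only proxies that brought Baidu's title back through the tunnel.
good = df[df['st'] == '百度一下,你就知道']['add'].tolist()
proxy = random.choice(good)  # assumes at least one proxy survived
resp = requests.get('http://httpbin.org/ip', proxies={'http': proxy}, timeout=5)
print(resp.text)  # should show the proxy's IP, not yours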