I wrote a crawler that scrapes a free-proxy website. It was put together quickly on the "a utility just has to work" principle, so many details are rough.
Since many free proxies are dead, I added an availability check: only working IPs are saved, and they are stored as dicts so they can be passed straight to the proxies argument in requests.
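For reference, the dict shape that the requests proxies argument expects is {scheme: 'scheme://host:port'}. A minimal sketch of building one from an IP, port, and protocol type (the address is made up, and make_proxy is just an illustrative helper, not part of the crawler):

```python
def make_proxy(ip, port, typ):
    """Build a proxies dict for requests, e.g. {'http': 'http://1.2.3.4:8080'}."""
    scheme = typ.lower()  # requests expects lowercase 'http' / 'https' keys
    return {scheme: f'{scheme}://{ip}:{port}'}

proxy = make_proxy('1.2.3.4', '8080', 'HTTP')
print(proxy)  # {'http': 'http://1.2.3.4:8080'}
# requests.get(url, proxies=proxy) would then route the request through this proxy
```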
It scrapes 10 pages in one run, which is a bit slow without multithreading.
The crawler code first:
import random

import requests
from pyquery import PyQuery as pq

user_agents = [
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
]
headers = {'User-Agent': random.choice(user_agents)}

http_url = 'https://www.xicidaili.com/nt/'  # scrape target: the free-proxy site
test_url = 'http://icanhazip.com'           # site used to verify each proxy works

ips, ports, types = [], [], []

# Page list: each page carries 50 proxies (the '.odd' table rows)
page_urls = [http_url + str(i) for i in range(1, 11)]

for page in page_urls:
    r = requests.get(page, headers=headers, timeout=30)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    doc = pq(r.text)
    for row in doc('.odd').items():
        ips.append(row('td:nth-child(2)').text())
        ports.append(row('td:nth-child(3)').text())
        types.append(row('td:nth-child(6)').text())

def get_ip(i, path):
    # Build a proxies dict in the form requests expects,
    # e.g. {'http': 'http://1.2.3.4:8080'}
    scheme = types[i].lower()
    proxy = {scheme: scheme + '://' + ips[i] + ':' + ports[i]}
    try:
        # Test the proxy; save it to the txt file only if it responds
        response = requests.get(test_url, headers=headers, proxies=proxy, timeout=30)
        print(response.status_code)
        if response.status_code == 200:
            with open(path, 'a+', encoding='utf-8') as f:
                f.write(str(proxy) + '\n')
    except requests.RequestException as e:
        print(e.args)

def ip_check():
    path = 'C:\\Users\\hj506\\Desktop\\ip.txt'  # where working proxies are saved
    for i in range(len(ips)):
        get_ip(i, path)

ip_check()
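Since each saved line is just str() of a dict, the file can be parsed back with ast.literal_eval from the standard library and the entries passed straight to requests. A minimal sketch, assuming the save format produced by get_ip above (load_proxies is an illustrative helper):

```python
import ast

def load_proxies(path):
    # Each non-empty line looks like {'http': 'http://1.2.3.4:8080'}
    with open(path, encoding='utf-8') as f:
        return [ast.literal_eval(line) for line in f if line.strip()]

# Usage (network call left commented out):
# import random, requests
# proxies = load_proxies('C:\\Users\\hj506\\Desktop\\ip.txt')
# r = requests.get('http://icanhazip.com', proxies=random.choice(proxies), timeout=10)
```

ast.literal_eval only evaluates Python literals, so it is safe for this file format, unlike eval().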
To go multithreaded, import the threading library and change only the ip_check function; nothing else needs to change:
import threading

def ip_check():
    path = 'C:\\Users\\hj506\\Desktop\\ip.txt'  # where working proxies are saved
    threads = []
    for i in range(len(ips)):
        t = threading.Thread(target=get_ip, args=(i, path))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()  # wait for all checks to finish
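Spawning one thread per proxy means hundreds of threads at once, which can strain the machine. The standard library's concurrent.futures.ThreadPoolExecutor caps the worker count instead. A sketch of the same fan-out pattern with a bounded pool, using a stand-in check function so it runs without the crawler state:

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def check(i):
    # stand-in for get_ip(i, path); just records which index was checked
    results.append(i)

def run_checks(n, max_workers=20):
    # run n checks with at most max_workers concurrent threads
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for i in range(n):
            pool.submit(check, i)
    # exiting the with-block waits for all submitted tasks to finish

run_checks(10)
print(sorted(results))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

In the real script you would submit get_ip with its index and path instead of the stand-in.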