Scrapy : 1.4.0
Python : 3.6.2
MySQL : 5.6
Platform : Windows-7-6.1.7601-SP1
1. Target Site Analysis
This article crawls the IP information in the 国内高匿代理 (domestic high-anonymity proxy) section of the site. Inspecting the page with Firefox's developer tools shows that all the useful data sits in the table with id ip_list, so XPath plus a regular expression is enough to extract the fields we want. Comparing the URLs of different pages shows that, from the second page onward, the number at the end of the URL is the page index, so every page can be enumerated (the example code in this article crawls the first three pages).
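Before writing any project code, the XPath can be sanity-checked interactively in scrapy shell. A quick sketch (the exact rows returned depend on the live page):
scrapy shell "http://www.xicidaili.com/nn/1"
# the second tr is the first data row; td[2] holds the IP address
>>> response.xpath('//table[@id="ip_list"]/tr')[1].xpath('td[2]/text()').extract()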
2. Database Setup
Python needs the MySQLdb module (on Python 3 it is provided by the mysqlclient package, installable with pip install mysqlclient). Create a table matching the fields we want to collect; in this article the table lives in the default mysql database, matching the db parameter used in settings.py later:
CREATE TABLE proxy(
IP char(20),
PORT char(20),
TYPE char(20),
POSITION char(20),
SPEED char(20),
LAST_CHECK_TIME char(20)
);
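Before wiring up the pipeline, you can confirm the table is reachable from Python. A minimal check, assuming the same connection parameters that settings.py uses below:
import MySQLdb

# same parameters as DBKWARGS in settings.py
con = MySQLdb.connect(db='mysql', user='root', passwd='root',
                      host='localhost', use_unicode=True, charset='utf8')
cur = con.cursor()
cur.execute("DESCRIBE proxy")   # lists the columns created above
for column in cur.fetchall():
    print(column)
cur.close()
con.close()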
3. Project Development
3.1 Run the following command in cmd to create the standard Scrapy project structure:
scrapy startproject collectips
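The command generates roughly the following layout (middlewares.py is created by the template but not used in this article):
collectips/
    scrapy.cfg            # deploy configuration
    collectips/
        __init__.py
        items.py          # item definitions, edited below
        middlewares.py
        pipelines.py      # item pipelines, edited below
        settings.py       # project settings, edited below
        spiders/
            __init__.py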
3.2 Define the Item
The Item fields mirror both the target site and the database table; we need six of them: IP address, port, server location, type, speed, and last-check time.
items.py
import scrapy


class CollectipsItem(scrapy.Item):
    # one field per column of the proxy table
    IP = scrapy.Field()
    PORT = scrapy.Field()
    POSITION = scrapy.Field()
    TYPE = scrapy.Field()
    SPEED = scrapy.Field()
    LAST_CHECK_TIME = scrapy.Field()
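An Item behaves like a dictionary, which is how the spider below fills it in, but it rejects fields that were not declared. A tiny illustration with made-up values:
item = CollectipsItem()
item['IP'] = '1.2.3.4'        # hypothetical value, dict-style assignment
print(item['IP'])
item['FOO'] = 'x'             # raises KeyError: FOO is not a declared field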
3.3 Create the spider
Generate a spider template named xici with the following command:
scrapy genspider xici xicidaili.com
Then fill in the actual crawling and parsing logic:
xici.py:
# -*- coding: utf-8 -*-
import scrapy

from collectips.items import CollectipsItem


class XiciSpider(scrapy.Spider):
    name = 'xici'
    allowed_domains = ['xicidaili.com']
    start_urls = ['http://www.xicidaili.com']  # superseded by start_requests below

    def start_requests(self):
        # enumerate pages 1-3 of the high-anonymity list
        reqs = []
        for i in range(1, 4):
            req = scrapy.Request("http://www.xicidaili.com/nn/%s" % i)
            reqs.append(req)
        return reqs

    def parse(self, response):
        ip_list = response.xpath('//table[@id="ip_list"]')
        trs = ip_list[0].xpath('tr')
        items = []
        for ip in trs[1:]:  # skip the header row
            pre_item = CollectipsItem()
            pre_item['IP'] = ip.xpath('td[2]/text()')[0].extract()
            pre_item['PORT'] = ip.xpath('td[3]/text()')[0].extract()
            pre_item['POSITION'] = ip.xpath('string(td[4])')[0].extract().strip()
            pre_item['TYPE'] = ip.xpath('td[6]/text()')[0].extract()
            # the speed value lives in the title attribute of the bar div;
            # the regex keeps only the number
            pre_item['SPEED'] = ip.xpath(
                'td[8]/div[@class="bar"]/@title').re(r'\d{0,2}\.\d{0,}')[0]
            pre_item['LAST_CHECK_TIME'] = ip.xpath('td[10]/text()')[0].extract()
            items.append(pre_item)
        return items
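The only non-obvious field is SPEED: the value is embedded in the title attribute of the bar div, and .re() runs a regular expression over the selected text and returns the matches. A standalone illustration of the same pattern (the title value shown is a made-up example of the format):
import re

title = "0.123秒"                                       # hypothetical title attribute value
print(re.search(r'\d{0,2}\.\d{0,}', title).group())     # -> 0.123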
3.4 Settings
Configure the database connection parameters, the log file, and the browser User-Agent.
settings.py
# -*- coding: utf-8 -*-

BOT_NAME = 'collectips'

SPIDER_MODULES = ['collectips.spiders']
NEWSPIDER_MODULE = 'collectips.spiders'

# Database connection parameters, read by the pipeline via spider.settings.get('DBKWARGS')
DBKWARGS = {'db': 'mysql', 'user': 'root', 'passwd': 'root',
            'host': 'localhost', 'use_unicode': True, 'charset': 'utf8'}

ITEM_PIPELINES = {
    'collectips.pipelines.CollectipsPipeline': 300,
}

LOG_FILE = "scrapy.log"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Present a regular browser User-Agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0'
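DBKWARGS is not a built-in Scrapy setting; it is a custom key that the pipeline below reads with spider.settings.get('DBKWARGS'). To confirm it is picked up, a quick check run from the project directory (a sketch using Scrapy's get_project_settings helper):
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
print(settings.get('BOT_NAME'))    # -> collectips
print(settings.get('DBKWARGS'))    # -> the dict defined above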
3.5 Data storage
Write the extracted data into MySQL. Take care to close the cursor and connection, and to catch insert exceptions.
pipelines.py:
# -*- coding: utf-8 -*-
import MySQLdb


class CollectipsPipeline(object):

    def process_item(self, item, spider):
        # connection parameters come from DBKWARGS in settings.py;
        # a new connection is opened for every item, which keeps the code simple
        DBKWARGS = spider.settings.get('DBKWARGS')
        con = MySQLdb.connect(**DBKWARGS)
        cur = con.cursor()
        sql = ("insert into proxy(IP,PORT,TYPE,POSITION,SPEED,LAST_CHECK_TIME) "
               "values(%s,%s,%s,%s,%s,%s)")
        lis = (item['IP'], item['PORT'], item['TYPE'], item['POSITION'], item['SPEED'],
               item['LAST_CHECK_TIME'])
        try:
            cur.execute(sql, lis)
        except Exception as e:
            print("Insert Error:", e)
            con.rollback()
        else:
            con.commit()
        cur.close()
        con.close()
        return item
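With the item, spider, settings, and pipeline in place, start the crawl from the project directory:
scrapy crawl xici
Progress and any insert errors go to scrapy.log, as configured in settings.py.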
When the crawl finishes, the proxy table holds the scraped IP records.
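To spot-check the results in MySQL (LIMIT just keeps the output short):
SELECT IP, PORT, TYPE, POSITION, SPEED, LAST_CHECK_TIME FROM proxy LIMIT 10;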