I recently read an article about thread safety of the Request object in Spring MVC. It mentioned that for every request, the server takes a thread from a thread pool to handle it, and that the Request is bound to that thread. That got me thinking: where does the Request actually come from, and what turns the raw request data into a Request object? With those questions in mind, I started digging.
HTTP Request Processing Flow
Fundamentally, an HTTP request is the client and the server establishing a socket and exchanging data over it.
Why do I say that? I hope it will be self-evident by the end of this article.
From a high level, Tomcat receives an HTTP request through the following pipeline:
HTTP request -> Connector -> Protocol -> Endpoint
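For reference, this pipeline is exactly what a Connector declaration in server.xml wires up. A minimal example selecting the NIO protocol explicitly (the port and tuning values here are illustrative, not taken from the original article):

<!-- server.xml: pick the NIO connector explicitly; numbers are illustrative -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           maxThreads="200"
           maxConnections="10000" />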
NioEndpoint implements non-blocking I/O, so request handling is split across three roles: the Acceptor, the Poller (an inner class of NioEndpoint), and the Worker. The Acceptor is responsible only for limiting the connection count and accepting connections; once it has accepted one, it passes it to a Poller through a queue of PollerEvent objects, a textbook producer-consumer pattern. Each Poller maintains a Selector, with which it watches its registered channels and dispatches ready ones to Workers. In short: the Acceptor accepts, the Poller selects, and the Worker processes.
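To make the handoff concrete, here is a minimal producer-consumer sketch of the Acceptor-to-Poller handoff. This is illustrative code only: Tomcat uses its own SynchronizedQueue of PollerEvent objects, for which a standard BlockingQueue stands in here:

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the Acceptor/Poller handoff (illustrative, not Tomcat code).
class HandoffSketch {
    private final BlockingQueue<SocketChannel> events = new LinkedBlockingQueue<>();

    // Producer: the "Acceptor" thread only accepts connections and enqueues them.
    void acceptorLoop() throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        while (true) {
            SocketChannel ch = server.accept(); // blocking accept
            ch.configureBlocking(false);
            events.offer(ch);                   // hand off to the "Poller"
        }
    }

    // Consumer: the "Poller" thread drains the queue; in Tomcat it would then
    // register each channel with its Selector for OP_READ.
    void pollerLoop() throws Exception {
        while (true) {
            SocketChannel ch = events.take();
            // selector registration would happen here
        }
    }
}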
Let's now analyze the source code, using Tomcat 9.0's NIO implementation as the example:
Connector lifecycle: constructor -> initInternal() -> startInternal() -> stopInternal()
To stay focused, we pick up right after startInternal() has completed: at this point the Connector, Protocol, and Endpoint instances have been initialized, and the Acceptor and Poller have started listening for requests.
Acceptor
① As long as the endpoint is in the running state, the Acceptor thread keeps accepting HTTP connections.
② If the endpoint's current connection count has reached the maximum (maxConnections), the Acceptor blocks until a slot frees up, then resumes its loop (a sketch of this limiting follows the code below).
③ The Acceptor calls endpoint.serverSocketAccept() to accept a connection and obtain its SocketChannel; under the hood this is just ServerSocketChannel.accept().
④ The resulting channel is then wrapped in a NioChannel, bound to a PollerEvent, and added to the Poller's PollerEvent queue (see NioEndpoint.java).
Acceptor.java (the numbered comments in the code match the steps above):
public void run() {
    int errorDelay = 0;
    // Loop until we receive a shutdown command
    while (endpoint.isRunning()) { // ①
        ...
        try {
            // if we have reached max connections, wait
            endpoint.countUpOrAwaitConnection(); // ②
            // Endpoint might have been paused while waiting for latch
            // If that is the case, don't accept new connections
            if (endpoint.isPaused()) {
                continue;
            }
            U socket = null; // in Http11NioProtocol, U is SocketChannel
            try {
                // Accept the next incoming connection from the server socket
                socket = endpoint.serverSocketAccept(); // ③
            } catch (Exception ioe) {
                ...
            }
            // Successful accept, reset the error delay
            errorDelay = 0;
            // Configure the socket
            if (endpoint.isRunning() && !endpoint.isPaused()) {
                // setSocketOptions() will hand the socket off to
                // an appropriate processor if successful
                if (!endpoint.setSocketOptions(socket)) {
                    endpoint.closeSocket(socket); // ④
                }
            }
        } catch (Throwable t) {
            ...
        }
    }
    state = AcceptorState.ENDED;
}
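The countUpOrAwaitConnection() call at ② is backed by Tomcat's own AQS-based LimitLatch. As a rough behavioral sketch (illustrative only, not Tomcat's implementation), a plain Semaphore reproduces the idea: the Acceptor blocks once maxConnections permits are taken and resumes when a connection is closed:

import java.util.concurrent.Semaphore;

// Sketch only: Tomcat uses LimitLatch rather than a Semaphore, but the
// blocking behavior the Acceptor relies on is the same.
class ConnectionLimiterSketch {
    private final Semaphore permits;

    ConnectionLimiterSketch(int maxConnections) {
        permits = new Semaphore(maxConnections);
    }

    // Called by the Acceptor before accepting: blocks at the limit (②).
    void countUpOrAwaitConnection() throws InterruptedException {
        permits.acquire();
    }

    // Called when a connection closes: frees a slot and unblocks the Acceptor.
    void countDownConnection() {
        permits.release();
    }
}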
Poller
Poller is an inner class of NioEndpoint and the piece that sets the NIO protocol apart from the others; it is the key class here. It handles sockets in an event-driven way and hands ready ones off, without blocking, to the Worker thread pool for execution. This is the main difference between the NIO and BIO models, and under high concurrency it significantly improves Tomcat's throughput. Continuing the analysis from the code above:
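The gain over BIO can be boiled down to a few lines. In this sketch (illustrative names, not Tomcat's classes), one selector thread watches many connections and borrows a pooled worker only when bytes are actually ready; under BIO, every connection would pin a thread even while idle:

import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of event-driven dispatch (not Tomcat code).
class DispatchSketch {
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    void pollLoop(Selector selector) throws Exception {
        while (true) {
            selector.select(); // one thread waits on all registered sockets
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                // Real pollers (Tomcat included) first clear the key's interest
                // ops so it is not selected again while a worker owns it.
                workers.execute(() -> handle(key));
            }
        }
    }

    private void handle(SelectionKey key) {
        // read and process the request on the worker thread
    }
}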
1. Binding the PollerEvent
① The Acceptor calls NioEndpoint.setSocketOptions(), which first switches the SocketChannel to non-blocking mode, then wraps the socket in a NioChannel and registers it with one of the NioEndpoint's Pollers.
NioEndpoint.java:
@Override
protected boolean setSocketOptions(SocketChannel socket) {
    // Process the connection
    try {
        // disable blocking, APR style, we are gonna be polling it
        socket.configureBlocking(false); // switch to non-blocking mode
        Socket sock = socket.socket();
        socketProperties.setProperties(sock);
        ...
        // reuse a NioChannel from the NioChannel pool, or build a new one around the socket
        ...
        getPoller0().register(channel); // register the NioChannel with the next Poller (at most two exist in practice)
    } catch (Throwable t) {
        ...
        // Tell to close the socket
        return false;
    }
    return true;
}
② Poller.register() binds the NioChannel to the current Poller and creates a NioSocketWrapper that is attached to the NioChannel. The NioSocketWrapper holds the important per-connection state, such as the read and write timeouts. The Poller then wraps the NioChannel in a PollerEvent, reusing one from eventCache via reset() if available and allocating a new one otherwise. Note the wakeup logic in addEvent(): if the Poller thread is blocked in select() when an event is queued, selector.wakeup() forces the select to return so the new event is handled promptly.
NioEndpoint.Poller.java:
public class Poller implements Runnable {

    private Selector selector;
    private final SynchronizedQueue<PollerEvent> events =
            new SynchronizedQueue<>();
    ...
    public void register(final NioChannel socket) {
        socket.setPoller(this);
        NioSocketWrapper ka = new NioSocketWrapper(socket, NioEndpoint.this);
        socket.setSocketWrapper(ka);
        ka.setPoller(this);
        ka.setReadTimeout(getConnectionTimeout());
        ka.setWriteTimeout(getConnectionTimeout());
        ka.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
        ka.setSecure(isSSLEnabled());
        PollerEvent r = eventCache.pop(); // reuse a pooled PollerEvent if one is available
        ka.interestOps(SelectionKey.OP_READ); // this is what OP_REGISTER turns into.
        if (r == null) {
            r = new PollerEvent(socket, ka, OP_REGISTER);
        } else {
            r.reset(socket, ka, OP_REGISTER);
        }
        addEvent(r);
    }

    private void addEvent(PollerEvent event) {
        events.offer(event); // queue the PollerEvent for the Poller loop to consume
        if (wakeupCounter.incrementAndGet() == 0) {
            selector.wakeup(); // the Poller is blocked in select(); wake it up
        }
    }
}
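The eventCache.pop() / reset() / new dance above is a deliberate object-reuse pattern: under a high connection rate, allocating a fresh PollerEvent per request would generate significant garbage. Here is a generic sketch of the pattern (Tomcat uses its own SynchronizedStack; the ArrayDeque below is a stand-in):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Generic pop-or-new object pool sketch (illustrative, not Tomcat's SynchronizedStack).
class PoolSketch<T> {
    private final Deque<T> cache = new ArrayDeque<>();
    private final Supplier<T> factory;

    PoolSketch(Supplier<T> factory) {
        this.factory = factory;
    }

    synchronized T popOrNew() {
        T item = cache.poll();                      // reuse if available
        return item != null ? item : factory.get(); // otherwise allocate
    }

    synchronized void recycle(T item) {
        cache.push(item); // return the object for later reuse
    }
}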
2. Processing the PollerEvent and the Socket
① The Poller loop drains its queue via events(); each queued PollerEvent is consumed by calling PollerEvent.run(), which registers the NioChannel with the Poller's Selector for reads, i.e. the classic NIO call channel.register(selector, SelectionKey.OP_READ) (a condensed view of PollerEvent.run() follows the code below).
② With the SocketChannel registered, the blocking selector.select(selectorTimeout) call returns as soon as the socket has activity.
③ From here on it is standard NIO: iterate over selectedKeys and handle each SelectionKey in turn; the Poller delegates this to processKey().
④ processKey() checks whether the SelectionKey is readable or writable and calls processSocket() accordingly.
⑤ processSocket() reuses or creates a SocketProcessor (in effect, the Worker) and runs it on the thread pool.
NioEndpoint.Poller.java:
@Override
public void run() {
    // Loop until destroy() is called
    while (true) {
        boolean hasEvents = false;
        try {
            if (!close) {
                hasEvents = events(); // ①
                if (wakeupCounter.getAndSet(-1) > 0) {
                    // if we are here, means we have other stuff to do
                    // do a non blocking select
                    keyCount = selector.selectNow();
                } else {
                    keyCount = selector.select(selectorTimeout); // ②
                }
                wakeupCounter.set(0);
            }
            ...
        } catch (Throwable x) {
            ...
        }
        // either we timed out or we woke up, process events first
        if (keyCount == 0) {
            hasEvents = (hasEvents | events());
        }
        Iterator<SelectionKey> iterator =
                keyCount > 0 ? selector.selectedKeys().iterator() : null;
        // Walk through the collection of ready keys and dispatch
        // any active event.
        while (iterator != null && iterator.hasNext()) {
            SelectionKey sk = iterator.next();
            NioSocketWrapper attachment = (NioSocketWrapper) sk.attachment();
            // Attachment may be null if another thread has called
            // cancelledKey()
            if (attachment == null) {
                iterator.remove();
            } else {
                iterator.remove();
                processKey(sk, attachment); // ③
            }
        }
        // process timeouts
        timeout(keyCount, hasEvents);
    }
    getStopLatch().countDown();
}

protected void processKey(SelectionKey sk, NioSocketWrapper attachment) {
    ...
    if (sk.isValid() && attachment != null) {
        ...
        if (sk.isReadable()) {
            if (!processSocket(attachment, SocketEvent.OPEN_READ, true)) { // ④
                closeSocket = true;
            }
        }
        if (!closeSocket && sk.isWritable()) {
            if (!processSocket(attachment, SocketEvent.OPEN_WRITE, true)) { // ④
                closeSocket = true;
            }
        }
        ...
    }
}

// AbstractEndpoint.java
public boolean processSocket(SocketWrapperBase<S> socketWrapper,
        SocketEvent event, boolean dispatch) {
    try {
        ...
        SocketProcessorBase<S> sc = processorCache.pop();
        if (sc == null) {
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            sc.reset(socketWrapper, event);
        }
        Executor executor = getExecutor(); // ⑤
        if (dispatch && executor != null) {
            executor.execute(sc);
        } else {
            sc.run();
        }
        ...
    } catch (Throwable t) {
        ...
        return false;
    }
    return true;
}
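For completeness, here is roughly what consuming a PollerEvent at ① looks like. This is a condensed reading of Tomcat 9's NioEndpoint.PollerEvent.run(); the error handling and the re-registration branches for other interestOps are omitted:

// A condensed view of NioEndpoint.PollerEvent.run(), OP_REGISTER case only.
public void run() {
    if (interestOps == OP_REGISTER) {
        try {
            // The classic NIO registration. The NioSocketWrapper is attached
            // here, which is why Poller.run() can later recover it via
            // sk.attachment().
            socket.getIOChannel().register(
                    socket.getPoller().getSelector(), SelectionKey.OP_READ, socketWrapper);
        } catch (ClosedChannelException x) {
            // the channel was closed before registration completed
        }
    }
    ... // OP_READ / OP_WRITE re-registration cases omitted
}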
Comparison with the Classic NIO Demo
Most of us met a classic demo when first learning NIO. Let's use it to draw an analogy with Tomcat's request-receiving flow:
① corresponds to NioEndpoint.bind() -> initServerSocket(), which runs when the NioEndpoint is initialized.
② corresponds to Poller.run() polling the selector: once a SelectionKey is obtained, the matching action is taken, i.e. Poller.processKey().
③ is where Tomcat differs most from the demo: it pulls accept() out into a dedicated thread, the Acceptor. The Acceptor wraps each accepted connection in a PollerEvent and hands it to the Poller.
/** ① Begin: NioEndpoint.bind() -> initServerSocket() **/
Selector selector = Selector.open();
ServerSocketChannel serverChannel = ServerSocketChannel.open();
serverChannel.bind(new InetSocketAddress(8080));
serverChannel.configureBlocking(false);
// The server channel is registered for ACCEPT (not READ): it only produces new connections
serverChannel.register(selector, SelectionKey.OP_ACCEPT);
/** ① End **/

/** ② Begin: Poller.run() polling the selector **/
while (true) {
    int readyChannels = selector.select();
    if (readyChannels == 0) continue;
    Set<SelectionKey> selectedKeys = selector.selectedKeys();
    Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
    while (keyIterator.hasNext()) {
        SelectionKey key = keyIterator.next();
        if (key.isAcceptable()) {
            accept(key); // endpoint.serverSocketAccept() + endpoint.setSocketOptions(socket)
        } else if (key.isConnectable()) {
            // a connection was established with a remote server
        } else if (key.isReadable()) {
            // a channel is ready for reading
        } else if (key.isWritable()) {
            // a channel is ready for writing
        }
        keyIterator.remove();
    }
}
/** ② End **/

/** ③ Begin: what Tomcat extracts into the Acceptor **/
private void accept(SelectionKey selectionKey) throws IOException {
    ServerSocketChannel ssc = (ServerSocketChannel) selectionKey.channel();
    SocketChannel channel = ssc.accept();             // endpoint.serverSocketAccept()
    channel.configureBlocking(false);
    channel.register(selector, SelectionKey.OP_READ); // endpoint.setSocketOptions(socket)
}
/** ③ End **/
Comparing the two, you can see that however Tomcat dresses up its NIO socket handling, it all traces back to this demo; Tomcat merely splits the steps across Acceptor, Poller, and Worker and has them cooperate.
Summary
This article covered how Tomcat receives HTTP requests over the NIO protocol. Through the source we saw how the Acceptor accepts connections and notifies the Poller via the producer-consumer pattern, and how the NIO socket-handling model fits in; finally, we compared Tomcat against the classic NIO demo to distill and deepen the underlying principles.
At this point we know how Tomcat receives an HTTP request, from accepting the socket to dispatching it for processing, but we still have not seen a Request; our original goal remains open.
To find out how the Worker turns the socket, step by step, into the servlet Request, see the follow-up article: How an HTTP Request Becomes a Request (Part 2).