Netty is a high-performance synchronous/asynchronous communication framework for Java, based on the SEDA model. Since I have recently started working with Java projects, I took a look at how it is implemented, partly as practice.
The first layer of Netty's conceptual model is the EventLoop, so that is where we start. An EventLoop is essentially an event queue built by Netty: network events (message sends/receives) as well as Netty's own internal tasks are all put onto the EventLoop, where they wait to be triggered and executed. You could say the EventLoop is Netty's execution engine.
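Before diving into the internals, it helps to see the user-facing side: tasks are submitted to an EventLoop just as they would be to any executor. Below is a minimal sketch; the class names are real Netty API, but the demo itself is mine, and I use NioEventLoopGroup so it runs on any platform:

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class EventLoopDemo {
    public static void main(String[] args) {
        // One loop is enough to show the queueing behaviour.
        EventLoopGroup group = new NioEventLoopGroup(1);
        // User tasks land in the same queue that IO events are dispatched from.
        group.next().execute(() ->
                System.out.println("task ran on " + Thread.currentThread().getName()));
        group.shutdownGracefully();
    }
}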
In this article we look at the implementation of EpollEventLoop as a way to understand how the Netty framework is put together.
EpollEventLoop
epoll is an efficient I/O event notification interface provided by Linux; see its man pages for details. Here we focus on how Netty wraps the epoll interface and uses it to implement its event queue.
Note that the EventLoop does not only handle socket events; it can also run user-defined tasks. It uses epoll as the underlying trigger mechanism, as described below.
epollFd and eventFd
In EpollEventLoop's initialization, two fds are created:
this.epollFd = epollFd = Native.newEpollCreate();   // the epoll instance itself
this.eventFd = eventFd = Native.newEventFd();       // an eventfd used only for wake-ups
try {
    // Register the eventFd with epoll, so that a write to it wakes up epoll_wait.
    Native.epollCtlAdd(epollFd.intValue(), eventFd.intValue(), Native.EPOLLIN);
} catch (IOException e) {
    throw new IllegalStateException("Unable to add eventFd filedescriptor to epoll", e);
}
success = true;
epollFd is the handle to the epoll API and is used to poll the sockets registered later. The very first fd registered with it is eventFd, which thereby becomes the internal handle for waking up the event queue. How it is used becomes clear below.
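If the eventFd trick feels opaque, plain Java NIO has the exact same pattern, with Selector.wakeup() playing the role of the eventFd write. A minimal runnable sketch of the idea, using only the JDK:

import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Thread waker = new Thread(() -> {
            try {
                Thread.sleep(500);
            } catch (InterruptedException ignored) {
            }
            selector.wakeup(); // plays the role of Native.eventFdWrite(eventFd, 1L)
        });
        waker.start();
        long start = System.nanoTime();
        selector.select(); // blocks, like epoll_wait on epollFd with eventFd registered
        System.out.printf("woken after ~%d ms%n", (System.nanoTime() - start) / 1_000_000);
        waker.join();
        selector.close();
    }
}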
epollWait
private int epollWait(boolean oldWakenUp) throws IOException {
    int selectCnt = 0;
    long currentTimeNanos = System.nanoTime();
    long selectDeadLineNanos = currentTimeNanos + delayNanos(currentTimeNanos);
    for (;;) {
        long timeoutMillis = (selectDeadLineNanos - currentTimeNanos + 500000L) / 1000000L;
        if (timeoutMillis <= 0) {
            if (selectCnt == 0) {
                int ready = Native.epollWait(epollFd.intValue(), events, 0);
                if (ready > 0) {
                    return ready;
                }
            }
            break;
        }

        // If a task was submitted when wakenUp value was 1, the task didn't get a chance to produce wakeup event.
        // So we need to check task queue again before calling epoll_wait. If we don't, the task might be pended
        // until epoll_wait was timed out. It might be pended until idle timeout if IdleStateHandler existed
        // in pipeline.
        if (hasTasks() && WAKEN_UP_UPDATER.compareAndSet(this, 0, 1)) {
            return Native.epollWait(epollFd.intValue(), events, 0);
        }

        int selectedKeys = Native.epollWait(epollFd.intValue(), events, (int) timeoutMillis);
        selectCnt ++;

        if (selectedKeys != 0 || oldWakenUp || wakenUp == 1 || hasTasks() || hasScheduledTasks()) {
            // - Selected something,
            // - waken up by user, or
            // - the task queue has a pending task.
            // - a scheduled task is ready for processing
            return selectedKeys;
        }
        currentTimeNanos = System.nanoTime();
    }
    return 0;
}
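One detail worth pausing on: epoll_wait takes a timeout in milliseconds, so the line computing timeoutMillis rounds the remaining nanoseconds to the nearest millisecond by adding half a millisecond before dividing. A quick check of the arithmetic:

long remainingNanos = 1_700_000L;  // 1.7 ms left until the select deadline
long timeoutMillis = (remainingNanos + 500_000L) / 1_000_000L;
System.out.println(timeoutMillis); // prints 2 -- rounded to the nearest ms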
epollWait keeps calling the operating system's epoll_wait API in a loop, with a timeout set for each call. epoll_wait therefore returns whenever:
- an event occurs on one of the monitored fds (a socket, or the eventFd), or
- the timeout expires.
This continuous epoll_wait loop is the engine underneath the EventLoop, driving the whole of Netty to keep executing new work. See the run function:
run
protected void run() {
    for (;;) {
        try {
            int strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
            switch (strategy) {
                case SelectStrategy.CONTINUE:
                    continue;
                case SelectStrategy.SELECT:
                    strategy = epollWait(WAKEN_UP_UPDATER.getAndSet(this, 0) == 1);

                    // 'wakenUp.compareAndSet(false, true)' is always evaluated
                    // before calling 'selector.wakeup()' to reduce the wake-up
                    // overhead. (Selector.wakeup() is an expensive operation.)
                    //
                    // However, there is a race condition in this approach.
                    // The race condition is triggered when 'wakenUp' is set to
                    // true too early.
                    //
                    // 'wakenUp' is set to true too early if:
                    // 1) Selector is waken up between 'wakenUp.set(false)' and
                    //    'selector.select(...)'. (BAD)
                    // 2) Selector is waken up between 'selector.select(...)' and
                    //    'if (wakenUp.get()) { ... }'. (OK)
                    //
                    // In the first case, 'wakenUp' is set to true and the
                    // following 'selector.select(...)' will wake up immediately.
                    // Until 'wakenUp' is set to false again in the next round,
                    // 'wakenUp.compareAndSet(false, true)' will fail, and therefore
                    // any attempt to wake up the Selector will fail, too, causing
                    // the following 'selector.select(...)' call to block
                    // unnecessarily.
                    //
                    // To fix this problem, we wake up the selector again if wakenUp
                    // is true immediately after selector.select(...).
                    // It is inefficient in that it wakes up the selector for both
                    // the first case (BAD - wake-up required) and the second case
                    // (OK - no wake-up required).
                    if (wakenUp == 1) {
                        Native.eventFdWrite(eventFd.intValue(), 1L);
                    }
                    // fallthrough
                default:
            }

            final int ioRatio = this.ioRatio;
            if (ioRatio == 100) {
                if (strategy > 0) {
                    processReady(events, strategy);
                }
                runAllTasks();
            } else {
                final long ioStartTime = System.nanoTime();
                if (strategy > 0) {
                    processReady(events, strategy);
                }
                final long ioTime = System.nanoTime() - ioStartTime;
                runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
            }
            if (allowGrowing && strategy == events.length()) {
                // increase the size of the array as we needed the whole space for the events
                events.increase();
            }
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    break;
                }
            }
        } catch (Throwable t) {
            logger.warn("Unexpected exception in the selector loop.", t);

            // Prevent possible consecutive immediate failures that lead to
            // excessive CPU consumption.
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Ignore.
            }
        }
    }
}
The run function keeps calling epollWait. Each time epollWait returns, it first processes the events pending on the monitored fds, and then runs all the queued tasks, i.e. the work users submitted directly via schedule and similar methods.
Note that ioRatio controls what fraction of each iteration's time is given to task execution relative to IO processing.
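To make the ioRatio arithmetic concrete, here is the budget computation from the else-branch above, worked through for one iteration (the numbers are made up; 50 is the default ioRatio in the versions I looked at):

long ioTimeNanos = 4_000_000L; // suppose processing the ready events took 4 ms
int ioRatio = 50;
long taskBudgetNanos = ioTimeNanos * (100 - ioRatio) / ioRatio;
System.out.println(taskBudgetNanos); // 4000000 -- tasks also get ~4 ms this round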
wakeup
protected void wakeup(boolean inEventLoop) {
    if (!inEventLoop && WAKEN_UP_UPDATER.compareAndSet(this, 0, 1)) {
        // write to the evfd which will then wake-up epoll_wait(...)
        Native.eventFdWrite(eventFd.intValue(), 1L);
    }
}
This is the wakeup function. Its implementation simply writes a 1 into eventFd; because eventFd is registered with the epoll instance that epollWait blocks on, wakeup immediately rouses the EventLoop to execute its tasks. Without it, in the absence of IO events the EventLoop would degrade into a timer-driven task queue, and that kind of latency is unacceptable for highly latency-sensitive tasks.
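The effect is easy to observe: even while the loop thread is blocked in its select/epoll_wait, a task submitted from another thread runs almost immediately. A sketch of mine (NioEventLoopGroup again so it runs anywhere; the sleep and timing are purely illustrative):

import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;

public class CrossThreadWakeupDemo {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        EventLoop loop = group.next();
        Thread.sleep(100); // give the loop time to block in its select call
        long start = System.nanoTime();
        // submit() enqueues the task and triggers wakeup(); sync() waits for completion.
        loop.submit(() -> System.out.printf("ran after ~%.2f ms%n",
                (System.nanoTime() - start) / 1e6)).sync();
        group.shutdownGracefully();
    }
}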
EventLoopGroup
An EventLoopGroup is a container of EventLoops. Let's look at the commonly used register function.
register
register is one of the most frequently used functions in Netty: it registers a Channel (which represents a socket underneath) with the EventLoopGroup.
@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}
The next function always picks one EventLoop, so the EventLoopGroup selects one of its EventLoops for the channel to register with.
Registering a channel really just hands its underlying socket to that EventLoop to monitor; when an event occurs on the socket, the corresponding handler on the channel is invoked.
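The chooser behind next() is, at its core, just a round-robin counter over the group's loops. A minimal sketch of the idea (Netty's actual DefaultEventExecutorChooserFactory additionally special-cases power-of-two sizes):

import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin chooser: each call hands out the next item in turn.
final class RoundRobinChooser<T> {
    private final AtomicInteger idx = new AtomicInteger();
    private final T[] items;

    RoundRobinChooser(T[] items) {
        this.items = items;
    }

    T next() {
        // Mask off the sign bit so counter overflow stays non-negative.
        return items[(idx.getAndIncrement() & Integer.MAX_VALUE) % items.length];
    }
}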
Having studied EpollEventLoop, we can take a look at some of the other EventLoop implementations.
DefaultEventLoop
DefaultEventLoop simply maintains a task queue and keeps taking tasks from it and running them.
protected void run() {
    for (;;) {
        // takeTask() blocks until a task is available or a scheduled task is due.
        Runnable task = takeTask();
        if (task != null) {
            task.run();
            updateLastExecutionTime();
        }
        if (confirmShutdown()) {
            break;
        }
    }
}
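DefaultEventLoop is handy when you want Netty's task queue and scheduling machinery without any IO. A minimal usage sketch:

import io.netty.channel.DefaultEventLoop;

public class DefaultEventLoopDemo {
    public static void main(String[] args) {
        DefaultEventLoop loop = new DefaultEventLoop();
        loop.execute(() ->
                System.out.println("ran on " + Thread.currentThread().getName()));
        loop.shutdownGracefully();
    }
}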
Summary
The EventLoop is Netty's execution engine; like a heartbeat, it provides the impetus that keeps a Netty program running. At its core is a continuous polling loop. In EpollEventLoop, part of each iteration's CPU time is yielded to other threads while blocked in the epoll system call, so the epoll_wait timeout has a significant impact on how Netty performs: raising it lowers Netty's CPU usage but hurts responsiveness, while lowering it improves response time at the cost of more CPU.