2021-05-05
© Original author: 中间件兴趣圈. Original article: https://blog.csdn.net/prestigeding/article/details/53977445

1、NioSocketChannel Class Diagram

[Figure: NioSocketChannel class diagram]

2、The NioSocketChannel read Method in Detail

As noted in the previous article on the NioEventLoop event model, when the I/O thread processes Selector events it triggers the read-related operations. The relevant code:

[Figure: code excerpt from NioEventLoop that dispatches ready selection keys to unsafe.read()]

The unsafe.read() call enters the read() method implemented in AbstractNioByteChannel.NioByteUnsafe (a subclass of AbstractNioChannel.AbstractNioUnsafe). This article studies the read path starting from that method:

    @Override
    public final void read() {
        final ChannelConfig config = config();
        if (!config.isAutoRead() && !isReadPending()) {                    //@1
            // ChannelConfig.setAutoRead(false) was called in the meantime
            removeReadOp();                                                //@2
            return;
        }

        final ChannelPipeline pipeline = pipeline();
        final ByteBufAllocator allocator = config.getAllocator();
        final int maxMessagesPerRead = config.getMaxMessagesPerRead();     //@3
        RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();    //@4

        ByteBuf byteBuf = null;
        int messages = 0;
        boolean close = false;
        try {
            int totalReadAmount = 0;
            boolean readPendingReset = false;
            do {
                byteBuf = allocHandle.allocate(allocator);                 //@5
                int writable = byteBuf.writableBytes();
                int localReadAmount = doReadBytes(byteBuf);                //@6
                if (localReadAmount <= 0) {                                //@7
                    // nothing was read, release the buffer
                    byteBuf.release();
                    byteBuf = null;
                    close = localReadAmount < 0;
                    break;
                }
                if (!readPendingReset) {
                    readPendingReset = true;
                    setReadPending(false);
                }
                pipeline.fireChannelRead(byteBuf);                         //@8
                byteBuf = null;

                if (totalReadAmount >= Integer.MAX_VALUE - localReadAmount) {
                    // Avoid overflow.
                    totalReadAmount = Integer.MAX_VALUE;
                    break;
                }

                totalReadAmount += localReadAmount;

                // stop reading
                if (!config.isAutoRead()) {
                    break;
                }

                if (localReadAmount < writable) {                          //@9
                    // Read less than what the buffer can hold,
                    // which might mean we drained the recv buffer completely.
                    break;
                }
            } while (++ messages < maxMessagesPerRead);                    //@10

            pipeline.fireChannelReadComplete();                            //@11
            allocHandle.record(totalReadAmount);                           //@12

            if (close) {
                closeOnRead(pipeline);
                close = false;
            }
        } catch (Throwable t) {
            handleReadException(pipeline, byteBuf, t, close);              //@13
        } finally {
            // Check if there is a readPending which was not processed yet.
            // This could be for two reasons:
            // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
            // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
            //
            // See https://github.com/netty/netty/issues/2254
            if (!config.isAutoRead() && !isReadPending()) {
                removeReadOp();
            }
        }
    }

Code @1: if auto-read is disabled and no read is pending, remove interest in the read event and return.

Code @2: removeReadOp clears the read interest op. It is implemented in concrete subclasses; the implementation in AbstractNioUnsafe:

    protected final void removeReadOp() {
        SelectionKey key = selectionKey();
        // Check first if the key is still valid as it may be canceled as part of the deregistration
        // from the EventLoop
        // See https://github.com/netty/netty/issues/2104
        if (!key.isValid()) {
            return;
        }
        int interestOps = key.interestOps();
        if ((interestOps & readInterestOp) != 0) {
            // only remove readInterestOp if needed
            key.interestOps(interestOps & ~readInterestOp);
        }
    }

Code @3: maxMessagesPerRead is the maximum number of read iterations per event-loop pass. It exists mainly to keep one channel with a flood of data from monopolizing the I/O thread and slowing down the other channels.

Code @4: the receive-buffer allocation strategy. Its job is to allocate enough bytes to hold the data readable from the channel. Covered in depth below.

Code @5: use the allocator obtained at @4 to allocate a ByteBuf that will receive the channel's data.

Code @6: read data from the channel. Implemented in subclasses; in NioSocketChannel:

    @Override
    protected int doReadBytes(ByteBuf byteBuf) throws Exception {
        // javaChannel() returns the underlying java.nio.SocketChannel
        return byteBuf.writeBytes(javaChannel(), byteBuf.writableBytes());
    }

Here a ByteBuf interacts with the channel, which ultimately comes down to java.nio.channels.SocketChannel and java.nio.ByteBuffer, so it is worth walking through that underlying API to strengthen our grasp of raw java.nio.
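To make those semantics concrete, here is a minimal plain-JDK sketch (no Netty; class name is mine) of what doReadBytes ultimately performs: a channel.read(ByteBuffer) whose return value is positive for bytes read, and -1 at end-of-stream, which maps to close = true in the read() method above. A java.nio.channels.Pipe stands in for the socket so the example is self-contained.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class NioReadDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.US_ASCII)));

        ByteBuffer buf = ByteBuffer.allocate(16);      // the "receive buffer"
        int n = pipe.source().read(buf);               // > 0: bytes were read
        System.out.println("read " + n + " bytes");    // read 5 bytes

        pipe.sink().close();                           // peer "closed the connection"
        buf.clear();
        int eof = pipe.source().read(buf);             // -1 signals end-of-stream
        System.out.println("after close: " + eof);     // after close: -1
    }
}
```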

Code @7: if no readable data was obtained, release the just-allocated ByteBuf; if the number of bytes read is negative, the channel needs to be closed, so set close = true and break out of the loop.

Code @8: propagate the ByteBuf that was read through the pipeline to each handler. By default these handlers run on the I/O thread.

Code @9: if this read returned fewer bytes than the ByteBuf's writable capacity, the channel probably has no more data to read, so end the loop.

Code @10: otherwise read from the channel again, until the iteration count reaches the configured maxMessagesPerRead. For a ServerChannel or an AbstractNioByteChannel the default is 16; for everything else it is 1.

Code @11: fire the channelReadComplete event.

Code @12: this call matters a great deal. The I/O thread reports the number of bytes read this pass back to the receive-ByteBuf allocator, so the next allocation is sized sensibly: large enough, but not wastefully so.

Code @13: if an exception occurs, fire the exception event.

That is the main flow for handling a channel read event. The rest of this article focuses on two pieces: the RecvByteBufAllocator at code @4, and the pipeline propagation at code @8.

2.1 The RecvByteBufAllocator Class Family

    /**
     * Allocates a new receive buffer whose capacity is probably large enough to read all inbound data and small enough
     * not to waste its space.
     */
    public interface RecvByteBufAllocator {

        /**
         * Creates a new handle.  The handle provides the actual operations and keeps the internal information which is
         * required for predicting an optimal buffer capacity.
         */
        Handle newHandle();

        interface Handle {
            /**
             * Creates a new receive buffer whose capacity is probably large enough to read all inbound data and small
             * enough not to waste its space.
             */
            ByteBuf allocate(ByteBufAllocator alloc);

            /**
             * Similar to {@link #allocate(ByteBufAllocator)} except that it does not allocate anything but just tells the
             * capacity.
             */
            int guess();

            /**
             * Records the actual number of read bytes in the previous read operation so that the allocator allocates
             * the buffer with potentially more correct capacity.
             *
             * @param actualReadBytes the actual number of read bytes in the previous read operation
             */
            void record(int actualReadBytes);
        }
    }

The question this interface answers: when handling a channel read event, how large should the allocated ByteBuf be?

A plain-English reading of the javadoc:

1) allocate: allocates a receive buffer whose capacity is hopefully large enough to hold all the readable data in the channel, yet no larger than necessary, so no space is wasted.

2) guess: only returns the predicted (suggested) capacity; it performs no actual allocation.

3) record: records the actual number of bytes read by the previous read operation, so the allocator can size the next buffer more accurately.

2.1.1 FixedRecvByteBufAllocator: fixed-size allocation. Simple, but it ignores feedback from the I/O thread.

    /**
     * The {@link RecvByteBufAllocator} that always yields the same buffer
     * size prediction. This predictor ignores the feed back from the I/O thread.
     */
    public class FixedRecvByteBufAllocator implements RecvByteBufAllocator

2.1.2 AdaptiveRecvByteBufAllocator: adaptive allocation

1) Overview

    /**
     * The {@link RecvByteBufAllocator} that automatically increases and
     * decreases the predicted buffer size on feed back.
     * <p>
     * It gradually increases the expected number of readable bytes if the previous
     * read fully filled the allocated buffer. It gradually decreases the expected
     * number of readable bytes if the read operation was not able to fill a certain
     * amount of the allocated buffer two times consecutively. Otherwise, it keeps
     * returning the same prediction.
     */

AdaptiveRecvByteBufAllocator automatically grows or shrinks the predicted buffer size based on feedback from the I/O thread: if the previous read completely filled the allocated buffer, the prediction is gradually increased; if two consecutive reads fail to fill a certain portion of the buffer, it is gradually decreased; otherwise the prediction stays the same.

2) AdaptiveRecvByteBufAllocator source analysis

1、Core fields

    static final int DEFAULT_MINIMUM = 64;          // minimum capacity to allocate
    static final int DEFAULT_INITIAL = 1024;        // initial capacity
    static final int DEFAULT_MAXIMUM = 65536;       // maximum capacity, 64K

    private static final int INDEX_INCREMENT = 4;   // steps to move forward in SIZE_TABLE when growing
    private static final int INDEX_DECREMENT = 1;   // steps to move back in SIZE_TABLE when shrinking

    private static final int[] SIZE_TABLE;          // below 512 bytes: 16-byte steps; from 512 on: doubling

    // SIZE_TABLE initialization. In practice the usable maximum is capped at 65536 (64K).
    static {
        List<Integer> sizeTable = new ArrayList<Integer>();
        for (int i = 16; i < 512; i += 16) {        // below 512: start at 16, step by 16
            sizeTable.add(i);
        }

        for (int i = 512; i > 0; i <<= 1) {         // from 512: double each time; the loop exits once i overflows int
            sizeTable.add(i);
        }

        SIZE_TABLE = new int[sizeTable.size()];
        for (int i = 0; i < SIZE_TABLE.length; i ++) {
            SIZE_TABLE[i] = sizeTable.get(i);
        }
    }

    private final int minIndex;    // index of DEFAULT_MINIMUM within SIZE_TABLE
    private final int maxIndex;    // index of DEFAULT_MAXIMUM within SIZE_TABLE
    private final int initial;     // initial buffer size, 1K by default
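The comments above can be checked with a self-contained reproduction of the SIZE_TABLE build (class and variable names are mine, not Netty's): 31 entries from 16 to 496, then doubling from 512 up to 2^30, after which the left shift overflows to a negative value and the loop stops.

```java
import java.util.ArrayList;
import java.util.List;

public class SizeTableDemo {
    public static void main(String[] args) {
        List<Integer> sizeTable = new ArrayList<>();
        for (int i = 16; i < 512; i += 16) {
            sizeTable.add(i);                 // 31 entries: 16, 32, ..., 496
        }
        for (int i = 512; i > 0; i <<= 1) {
            sizeTable.add(i);                 // 22 entries: 512, 1024, ..., 2^30
        }
        System.out.println("entries=" + sizeTable.size());                  // entries=53
        System.out.println("first=" + sizeTable.get(0));                    // first=16
        System.out.println("last=" + sizeTable.get(sizeTable.size() - 1));  // last=1073741824
    }
}
```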

2、Constructors

    /**
     * Creates a new predictor with the default parameters.  With the default
     * parameters, the expected buffer size starts from {@code 1024}, does not
     * go down below {@code 64}, and does not go up above {@code 65536}.
     */
    private AdaptiveRecvByteBufAllocator() {
        this(DEFAULT_MINIMUM, DEFAULT_INITIAL, DEFAULT_MAXIMUM);
    }

    /**
     * Creates a new predictor with the specified parameters.
     *
     * @param minimum  the inclusive lower bound of the expected buffer size
     * @param initial  the initial buffer size when no feed back was received
     * @param maximum  the inclusive upper bound of the expected buffer size
     */
    public AdaptiveRecvByteBufAllocator(int minimum, int initial, int maximum) {
        if (minimum <= 0) {
            throw new IllegalArgumentException("minimum: " + minimum);
        }
        if (initial < minimum) {
            throw new IllegalArgumentException("initial: " + initial);
        }
        if (maximum < initial) {
            throw new IllegalArgumentException("maximum: " + maximum);
        }

        int minIndex = getSizeTableIndex(minimum);     //@1
        if (SIZE_TABLE[minIndex] < minimum) {
            this.minIndex = minIndex + 1;
        } else {
            this.minIndex = minIndex;
        }

        int maxIndex = getSizeTableIndex(maximum);
        if (SIZE_TABLE[maxIndex] > maximum) {
            this.maxIndex = maxIndex - 1;
        } else {
            this.maxIndex = maxIndex;
        }

        this.initial = initial;
    }

The constructor mainly initializes minIndex, maxIndex, and initial. The interesting part is getSizeTableIndex:

    private static int getSizeTableIndex(final int size) {
        for (int low = 0, high = SIZE_TABLE.length - 1;;) { // classic binary search
            if (high < low) {  //@1
                return low;
            }
            if (high == low) { //@2
                return high;
            }

            int mid = low + high >>> 1; // + binds tighter than >>>, so this is (low + high) >>> 1
            int a = SIZE_TABLE[mid];
            int b = SIZE_TABLE[mid + 1];
            if (size > b) {
                low = mid + 1;
            } else if (size < a) {
                high = mid - 1;
            } else if (size == a) {
                return mid;
            } else {
                return mid + 1;
            }
        }
    }

This is standard binary search over a sorted array: compare the middle element with the target, then continue in whichever half can still contain it, halving the range on each pass.

One detail worth noting is that high starts at SIZE_TABLE.length - 1, which guarantees that the access to SIZE_TABLE[mid + 1] never goes out of bounds: since mid = (low + high) >>> 1 and that line is only reached when low < high, mid is always strictly less than high. Also, the returned index is only approximate for sizes that are not exact table entries (it may land on the entry just below or just above size), which is exactly why the constructor checks SIZE_TABLE[minIndex] and SIZE_TABLE[maxIndex] afterwards and adjusts by one.
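That boundary behavior can be verified with a standalone copy of getSizeTableIndex (the table is rebuilt locally so the sketch runs on its own; the class name is mine, the search logic mirrors the excerpt above):

```java
import java.util.ArrayList;
import java.util.List;

public class SizeTableIndexDemo {
    static final int[] SIZE_TABLE;
    static {
        List<Integer> t = new ArrayList<>();
        for (int i = 16; i < 512; i += 16) t.add(i);
        for (int i = 512; i > 0; i <<= 1) t.add(i);
        SIZE_TABLE = new int[t.size()];
        for (int i = 0; i < SIZE_TABLE.length; i++) SIZE_TABLE[i] = t.get(i);
    }

    static int getSizeTableIndex(final int size) {
        for (int low = 0, high = SIZE_TABLE.length - 1;;) {
            if (high < low) return low;
            if (high == low) return high;
            int mid = low + high >>> 1;          // (low + high) >>> 1
            int a = SIZE_TABLE[mid];
            int b = SIZE_TABLE[mid + 1];
            if (size > b) low = mid + 1;
            else if (size < a) high = mid - 1;
            else if (size == a) return mid;
            else return mid + 1;                 // a < size < b: round up
        }
    }

    public static void main(String[] args) {
        System.out.println(SIZE_TABLE[getSizeTableIndex(16)]);   // 16: exact hit
        System.out.println(SIZE_TABLE[getSizeTableIndex(1024)]); // 1024: exact hit
        System.out.println(SIZE_TABLE[getSizeTableIndex(440)]);  // 448: bracketed by 432/448, rounds up
        System.out.println(SIZE_TABLE[getSizeTableIndex(600)]);  // 512: converges onto the entry below
    }
}
```

The last call illustrates the approximation discussed above: 600 maps to 512, the entry below it, which the constructor then corrects with its +1/-1 adjustment.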

3、The inner Handle implementation

    private static final class HandleImpl implements Handle {
        private final int minIndex;
        private final int maxIndex;
        private int index;
        private int nextReceiveBufferSize;
        private boolean decreaseNow;

        HandleImpl(int minIndex, int maxIndex, int initial) {
            this.minIndex = minIndex;
            this.maxIndex = maxIndex;

            index = getSizeTableIndex(initial);
            nextReceiveBufferSize = SIZE_TABLE[index];
        }

        @Override
        public ByteBuf allocate(ByteBufAllocator alloc) {
            return alloc.ioBuffer(nextReceiveBufferSize);
        }

        @Override
        public int guess() {
            return nextReceiveBufferSize;
        }

        @Override
        public void record(int actualReadBytes) {
            if (actualReadBytes <= SIZE_TABLE[Math.max(0, index - INDEX_DECREMENT - 1)]) {    // @1
                if (decreaseNow) {
                    index = Math.max(index - INDEX_DECREMENT, minIndex);
                    nextReceiveBufferSize = SIZE_TABLE[index];
                    decreaseNow = false;
                } else {
                    decreaseNow = true;
                }
            } else if (actualReadBytes >= nextReceiveBufferSize) {     //@2
                index = Math.min(index + INDEX_INCREMENT, maxIndex);
                nextReceiveBufferSize = SIZE_TABLE[index];
                decreaseNow = false;
            }
        }
    }

Field notes: decreaseNow flags whether the allocation size should be reduced on the next small read; nextReceiveBufferSize is the size of the next ByteBuf to allocate.

The key method is record. The I/O thread calls it with the number of bytes actually read, giving the allocator the feedback it needs to size future buffers sensibly.

Code @1: the threshold is the SIZE_TABLE entry at index - INDEX_DECREMENT - 1, i.e. one step beyond a single decrement (INDEX_DECREMENT defaults to 1). If actualReadBytes is at or below that threshold, a shrink is warranted, but only after two consecutive such small reads: the first merely sets decreaseNow, and the second actually steps the index down. This damping keeps the prediction from oscillating.

Code @2: if the actual number of bytes read filled the buffer (actualReadBytes >= nextReceiveBufferSize), the next allocation size is increased immediately, stepping the index forward by INDEX_INCREMENT (default 4).
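The shrink-needs-two-reads / grow-immediately policy can be observed in a standalone re-implementation of record (the table is rebuilt for self-containment; the class name and the min/max bounds are illustrative, not Netty's defaults):

```java
import java.util.ArrayList;
import java.util.List;

public class AdaptiveRecordDemo {
    static final int[] SIZE_TABLE;
    static {
        List<Integer> t = new ArrayList<>();
        for (int i = 16; i < 512; i += 16) t.add(i);
        for (int i = 512; i > 0; i <<= 1) t.add(i);
        SIZE_TABLE = new int[t.size()];
        for (int i = 0; i < SIZE_TABLE.length; i++) SIZE_TABLE[i] = t.get(i);
    }
    static final int INDEX_INCREMENT = 4;
    static final int INDEX_DECREMENT = 1;

    final int minIndex = 3;   // illustrative bounds
    final int maxIndex = 38;
    int index;
    int nextReceiveBufferSize;
    boolean decreaseNow;

    AdaptiveRecordDemo(int initialIndex) {
        index = initialIndex;
        nextReceiveBufferSize = SIZE_TABLE[index];
    }

    void record(int actualReadBytes) {
        if (actualReadBytes <= SIZE_TABLE[Math.max(0, index - INDEX_DECREMENT - 1)]) {
            if (decreaseNow) {                   // second consecutive small read: shrink one step
                index = Math.max(index - INDEX_DECREMENT, minIndex);
                nextReceiveBufferSize = SIZE_TABLE[index];
                decreaseNow = false;
            } else {                             // first small read: only arm the flag
                decreaseNow = true;
            }
        } else if (actualReadBytes >= nextReceiveBufferSize) {
            index = Math.min(index + INDEX_INCREMENT, maxIndex);  // buffer filled: grow four steps
            nextReceiveBufferSize = SIZE_TABLE[index];
            decreaseNow = false;
        }
    }

    public static void main(String[] args) {
        AdaptiveRecordDemo h = new AdaptiveRecordDemo(32);  // start at SIZE_TABLE[32] = 1024
        h.record(32);
        System.out.println(h.nextReceiveBufferSize);        // 1024: flag armed only
        h.record(32);
        System.out.println(h.nextReceiveBufferSize);        // 512: two small reads in a row
        h.record(512);
        System.out.println(h.nextReceiveBufferSize);        // 8192: filled buffer grows by 4 steps
    }
}
```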

2.2 Code @8 of the read flow: the pipeline.fireChannelRead event

The execution model of ChannelPipeline was covered in detail at http://blog.csdn.net/prestigeding/article/details/58648843: an event propagates down the chain handler by handler. A core step in this chain-of-responsibility style is that each handler must explicitly invoke the next one (via the context object), just like the well-known doFilter call in the Servlet filter chain. The same applies here: an upstream handler decides whether the downstream handlers run at all. A common example in Netty is a decoder handler: if it has not yet decoded a complete message, there is no need to trigger the downstream business handlers. See the code:

[Figure: code excerpt from a decoder handler that only fires channelRead downstream once a complete message has been decoded]

Summary of how Netty handles read events:

1、Network reads run on the I/O thread, i.e. the thread that owns the Selector. The select call, the reads from the channel, and the handler executions are all serialized on this one thread: first select for ready keys, then process them one by one.

2、Because of point 1, Netty optimizes the case where a channel has a large amount of readable data. It calls socketChannel.read once, propagates the bytes read through the pipeline via fireChannelRead, then reads the remaining bytes from the channel in a loop. The loop is bounded by a configurable maximum iteration count (16 by default for NioSocketChannel); anything still unread after that is handled after the next select. This gives every channel a reasonably fair share of processing time.

3、To decide how large a ByteBuf to pre-allocate for a channel read, Netty implements adaptive allocation: the number of bytes the I/O thread read last time dynamically adjusts the size allocated next time.

