In the earlier article "spring-mvc demo程序运行到DispatcherServlet的mvc处理" we traced a request typed into the browser all the way to Spring MVC's DispatcherServlet.
By design, everything before that point is handled by Tomcat's servlet machinery.
So how exactly does the request reach the DispatcherServlet? This article answers that question.
Contents
- Tomcat architecture
- Connector
- Http11NioProtocol -> NioEndpoint
- Acceptor: the thread that accepts TCP/IP connections
- Poller: the thread that handles data on accepted socket connections
- Processor: the actual socket read/write handling (via the Executor thread pool)
- org.apache.tomcat.util.net.AbstractEndpoint#processSocket (in the abstract class AbstractEndpoint)
- org.apache.coyote.http11.Http11Processor#service
Tomcat architecture
Obviously we need some understanding of Tomcat first; the Tomcat source can be downloaded for study. The processing flow is roughly as follows:
Tomcat is organized as a hierarchy of containers: Engine, Host, Context and Wrapper. A Wrapper wraps a single Servlet, and in the end it is the Servlet instance's service method that handles the request.
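For orientation, the container hierarchy is visible in Tomcat's conf/server.xml (the values below are the shipped defaults; Context elements are normally created per deployed webapp, and a Wrapper is created per servlet from each webapp's own configuration, so neither appears here):

<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- A Connector accepts requests on a TCP port ... -->
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"/>
    <!-- ... and forwards them to the Engine / Host (and per-webapp Context / Wrapper) containers. -->
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"/>
    </Engine>
  </Service>
</Server>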
The Servlet life cycle can be defined as the whole process from creation to destruction. A Servlet goes through the following steps:
- The Servlet is initialized by a call to its init() method.
- The Servlet handles client requests in its service() method.
- The Servlet is terminated by a call to its destroy() method.
- Finally, the Servlet instance is garbage-collected by the JVM.

service() is the method that performs the actual work. The Servlet container (i.e. the web server) calls service() to handle requests coming from clients (browsers) and to write the formatted response back to the client. Each time the server receives a request for a Servlet, it dispatches the request on a worker thread and invokes the service. The service() method checks the HTTP request type (GET, POST, PUT, DELETE, etc.) and calls doGet, doPost, doPut, doDelete, etc. as appropriate.
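As a concrete illustration, here is a minimal servlet sketch that plugs into this lifecycle (the class name and URL mapping are hypothetical, not part of the demo project):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet; Tomcat drives the init()/service()/destroy() calls for us.
@WebServlet("/hello") // illustrative mapping
public class HelloServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Called once, after the container instantiates the servlet.
    }

    // HttpServlet#service dispatches by HTTP method, so only doGet is overridden here.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain;charset=UTF-8");
        resp.getWriter().write("hello from the servlet container");
    }

    @Override
    public void destroy() {
        // Called once, before the instance is discarded (and later garbage-collected).
    }
}

DispatcherServlet is exactly such an HttpServlet: Spring's FrameworkServlet overrides service()/doGet()/doPost() and funnels every request into DispatcherServlet's doDispatch().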
Connector
What is a Connector?
The HTTP Connector element represents a Connector component that supports the HTTP/1.1 protocol. It enables Catalina to function as a stand-alone web server, in addition to its ability to execute servlets and JSP pages. A particular instance of this component listens for connections on a specific TCP port number on the server. One or more such Connectors can be configured as part of a single Service, each forwarding to the associated Engine to perform request processing and create the response.
(https://tomcat.apache.org/tomcat-8.0-doc/config/http.html)
In short: a Connector listens for connections on a specific TCP port, lets Catalina act as a stand-alone web server in addition to executing servlets and JSP pages, and forwards each request to its associated Engine, which processes it and produces the response.
public Connector(String protocol) {
    setProtocol(protocol);
    // Instantiate protocol handler
    ProtocolHandler p = null;
    try {
        Class<?> clazz = Class.forName(protocolHandlerClassName);
        p = (ProtocolHandler) clazz.getConstructor().newInstance();
    } catch (Exception e) {
        log.error(sm.getString("coyoteConnector.protocolHandlerInstantiationFailed"), e);
    } finally {
        this.protocolHandler = p;
    }

    if (Globals.STRICT_SERVLET_COMPLIANCE) {
        uriCharset = StandardCharsets.ISO_8859_1;
    } else {
        uriCharset = StandardCharsets.UTF_8;
    }

    // Default for Connector depends on this (deprecated) system property
    if (Boolean.parseBoolean(System.getProperty("org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH", "false"))) {
        encodedSolidusHandling = EncodedSolidusHandling.DECODE;
    }
}
The very first statement, setProtocol, chooses the socket protocol handler class according to the configured protocol:
/**
 * Set the Coyote protocol which will be used by the connector.
 *
 * @param protocol The Coyote protocol name
 *
 * @deprecated Will be removed in Tomcat 9. Protocol must be configured via
 *             the constructor
 */
@Deprecated
public void setProtocol(String protocol) {

    boolean aprConnector = AprLifecycleListener.isAprAvailable() &&
            AprLifecycleListener.getUseAprConnector();

    if ("HTTP/1.1".equals(protocol) || protocol == null) {
        if (aprConnector) {
            setProtocolHandlerClassName("org.apache.coyote.http11.Http11AprProtocol");
        } else {
            setProtocolHandlerClassName("org.apache.coyote.http11.Http11NioProtocol");
        }
    } else if ("AJP/1.3".equals(protocol)) {
        if (aprConnector) {
            setProtocolHandlerClassName("org.apache.coyote.ajp.AjpAprProtocol");
        } else {
            setProtocolHandlerClassName("org.apache.coyote.ajp.AjpNioProtocol");
        }
    } else {
        setProtocolHandlerClassName(protocol);
    }
}
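For context, the protocol string above comes from the Connector element in server.xml; as the code shows, both the shorthand and a fully qualified handler class name work (illustrative snippets, not taken from the demo's configuration):

<!-- Shorthand: resolved by setProtocol to Http11NioProtocol (or Http11AprProtocol when APR is in use) -->
<Connector port="8080" protocol="HTTP/1.1"/>

<!-- Equivalent explicit form: the class name is used directly as the ProtocolHandler -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"/>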
Http11NioProtocol -> NioEndpoint
Http11NioProtocol is constructed around a NioEndpoint (its constructor passes a new NioEndpoint to the superclass), and it is this endpoint that owns the Acceptor, Poller and SocketProcessor components described next.
Acceptor: the thread that accepts TCP/IP connections
protected final void startAcceptorThreads() {
    int count = getAcceptorThreadCount();
    acceptors = new Acceptor[count];

    for (int i = 0; i < count; i++) {
        acceptors[i] = createAcceptor();
        String threadName = getName() + "-Acceptor-" + i;
        acceptors[i].setThreadName(threadName);
        Thread t = new Thread(acceptors[i], threadName);
        t.setPriority(getAcceptorThreadPriority());
        t.setDaemon(getDaemon());
        t.start();
    }
}
The accept itself is simply java.nio.channels.ServerSocketChannel#accept.
(So socket programming and the TCP protocol are always in play; for an IO refresher, see: https://doctording.blog.csdn.net/article/details/145839941)
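For reference, here is a minimal standalone sketch (not Tomcat code; the port is arbitrary) of the same accept pattern using java.nio:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Bare-bones accept loop, analogous to what the Acceptor thread does around serverSock.accept().
public class AcceptSketch {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel serverSock = ServerSocketChannel.open();
        serverSock.bind(new InetSocketAddress(8080)); // port is illustrative
        while (true) {
            // Blocks until a TCP connection has completed the three-way handshake
            // and can be taken from the accept queue.
            SocketChannel socket = serverSock.accept();
            System.out.println("accepted " + socket.getRemoteAddress());
            socket.close(); // a real server hands the socket off for processing instead
        }
    }
}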
/**
* The background thread that listens for incoming TCP/IP connections and
* hands them off to an appropriate processor.
*/
protected class Acceptor extends AbstractEndpoint.Acceptor {

    @Override
    public void run() {

        int errorDelay = 0;

        // Loop until we receive a shutdown command
        while (running) {

            // Loop if endpoint is paused
            while (paused && running) {
                state = AcceptorState.PAUSED;
                try {
                    Thread.sleep(50);
                } catch (InterruptedException e) {
                    // Ignore
                }
            }

            if (!running) {
                break;
            }
            state = AcceptorState.RUNNING;

            try {
                //if we have reached max connections, wait
                countUpOrAwaitConnection();

                SocketChannel socket = null;
                try {
                    // Accept the next incoming connection from the server
                    // socket
                    socket = serverSock.accept();
                } catch (IOException ioe) {
                    // We didn't get a socket
                    countDownConnection();
                    if (running) {
                        // Introduce delay if necessary
                        errorDelay = handleExceptionWithDelay(errorDelay);
                        // re-throw
                        throw ioe;
                    } else {
                        break;
                    }
                }
                // Successful accept, reset the error delay
                errorDelay = 0;

                // Configure the socket
                if (running && !paused) {
                    // setSocketOptions() will hand the socket off to
                    // an appropriate processor if successful
                    if (!setSocketOptions(socket)) {
                        closeSocket(socket);
                    }
                } else {
                    closeSocket(socket);
                }
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                log.error(sm.getString("endpoint.accept.fail"), t);
            }
        }
        state = AcceptorState.ENDED;
    }
}
Poller: the thread that handles data on accepted socket connections
The socket connection obtained by the Acceptor is then processed as follows:
/**
* Process the specified connection.
* @param socket The socket channel
* @return <code>true</code> if the socket was correctly configured
* and processing may continue, <code>false</code> if the socket needs to be
* closed immediately
*/
protected boolean setSocketOptions(SocketChannel socket) {
    // Process the connection
    try {
        //disable blocking, APR style, we are gonna be polling it
        socket.configureBlocking(false);
        Socket sock = socket.socket();
        socketProperties.setProperties(sock);

        NioChannel channel = nioChannels.pop();
        if (channel == null) {
            SocketBufferHandler bufhandler = new SocketBufferHandler(
                    socketProperties.getAppReadBufSize(),
                    socketProperties.getAppWriteBufSize(),
                    socketProperties.getDirectBuffer());
            if (isSSLEnabled()) {
                channel = new SecureNioChannel(socket, bufhandler, selectorPool, this);
            } else {
                channel = new NioChannel(socket, bufhandler);
            }
        } else {
            channel.setIOChannel(socket);
            channel.reset();
        }
        getPoller0().register(channel);
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        try {
            log.error("", t);
        } catch (Throwable tt) {
            ExceptionUtils.handleThrowable(tt);
        }
        // Tell to close the socket
        return false;
    }
    return true;
}
The Poller is responsible for polling the network connections for data. In the NIO / NIO2 model, the Poller thread checks whether the Channels registered with it (e.g. SocketChannels) have data to read or are ready to write.
It uses Java NIO's Selector, which allows a single thread to operate on multiple Channels. The Poller class is defined as follows:
/**
 * Poller class.
 */
public class Poller implements Runnable {

    private Selector selector;
    private final SynchronizedQueue<PollerEvent> events =
            new SynchronizedQueue<>();

    private volatile boolean close = false;
    private long nextExpiration = 0;//optimize expiration handling

    private AtomicLong wakeupCounter = new AtomicLong(0);

    private volatile int keyCount = 0;

    public Poller() throws IOException {
        this.selector = Selector.open();
    }
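For background, here is a minimal standalone sketch (not Tomcat code; port and buffer size are arbitrary) of the Selector pattern the Poller's run loop is built on, i.e. one thread multiplexing many channels:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// One thread, one Selector, many channels: the idea behind the Poller.
public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080)); // port is illustrative
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(8192);
        while (true) {
            selector.select(); // blocks until at least one registered channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ); // like Poller#register
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) < 0) { // peer closed the connection
                        key.cancel();
                        client.close();
                    }
                    // a real server would parse/process the bytes here
                    // (in Tomcat: processKey -> processSocket, shown below)
                }
            }
            selector.selectedKeys().clear();
        }
    }
}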
Processor: the actual socket read/write handling (via the Executor thread pool)
The Poller determines whether a connection is ready for read or write, and hands it to a Processor for the actual handling:
protected void processKey(SelectionKey sk, NioSocketWrapper attachment) {
    try {
        if ( close ) {
            cancelledKey(sk);
        } else if ( sk.isValid() && attachment != null ) {
            if (sk.isReadable() || sk.isWritable() ) {
                if ( attachment.getSendfileData() != null ) {
                    processSendfile(sk,attachment, false);
                } else {
                    unreg(sk, attachment, sk.readyOps());
                    boolean closeSocket = false;
                    // Read goes before write
                    if (sk.isReadable()) {
                        if (!processSocket(attachment, SocketEvent.OPEN_READ, true)) {
                            closeSocket = true;
                        }
                    }
                    if (!closeSocket && sk.isWritable()) {
                        if (!processSocket(attachment, SocketEvent.OPEN_WRITE, true)) {
                            closeSocket = true;
                        }
                    }
                    if (closeSocket) {
                        cancelledKey(sk);
                    }
                }
            }
        } else {
            //invalid key
            cancelledKey(sk);
        }
    } catch ( CancelledKeyException ckx ) {
        cancelledKey(sk);
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        log.error("",t);
    }
}
Behind this, the work is handed to Tomcat's (extended) Executor thread pool via a SocketProcessor:
/**
 * This class is the equivalent of the Worker, but will simply use in an
 * external Executor thread pool.
 */
protected class SocketProcessor extends SocketProcessorBase<NioChannel> {

    public SocketProcessor(SocketWrapperBase<NioChannel> socketWrapper, SocketEvent event) {
        super(socketWrapper, event);
    }

    @Override
    protected void doRun() {
        NioChannel socket = socketWrapper.getSocket();
        SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());

        try {
            int handshake = -1;

            try {
                if (key != null) {
                    if (socket.isHandshakeComplete()) {
                        // No TLS handshaking required. Let the handler
                        // process this socket / event combination.
                        handshake = 0;
                    } else if (event == SocketEvent.STOP || event == SocketEvent.DISCONNECT ||
                            event == SocketEvent.ERROR) {
                        // Unable to complete the TLS handshake. Treat it as
                        // if the handshake failed.
                        handshake = -1;
                    } else {
                        handshake = socket.handshake(key.isReadable(), key.isWritable());
                        // The handshake process reads/writes from/to the
                        // socket. status may therefore be OPEN_WRITE once
                        // the handshake completes. However, the handshake
                        // happens when the socket is opened so the status
                        // must always be OPEN_READ after it completes. It
                        // is OK to always set this as it is only used if
                        // the handshake completes.
                        event = SocketEvent.OPEN_READ;
                    }
                }
            } catch (IOException x) {
                handshake = -1;
                if (log.isDebugEnabled()) log.debug("Error during SSL handshake",x);
            } catch (CancelledKeyException ckx) {
                handshake = -1;
            }
            if (handshake == 0) {
                SocketState state = SocketState.OPEN;
                // Process the request from this socket
                if (event == null) {
                    state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
                } else {
                    state = getHandler().process(socketWrapper, event);
                }
                if (state == SocketState.CLOSED) {
                    close(socket, key);
                }
            } else if (handshake == -1 ) {
                getHandler().process(socketWrapper, SocketEvent.CONNECT_FAIL);
                close(socket, key);
            } else if (handshake == SelectionKey.OP_READ){
                socketWrapper.registerReadInterest();
            } else if (handshake == SelectionKey.OP_WRITE){
                socketWrapper.registerWriteInterest();
            }
        } catch (CancelledKeyException cx) {
            socket.getPoller().cancelledKey(key);
        } catch (VirtualMachineError vme) {
            ExceptionUtils.handleThrowable(vme);
        } catch (Throwable t) {
            log.error("", t);
            socket.getPoller().cancelledKey(key);
        } finally {
            socketWrapper = null;
            event = null;
            //return to cache
            if (running && !paused) {
                processorCache.push(this);
            }
        }
    }
}
org.apache.tomcat.util.net.AbstractEndpoint#processSocket (in the abstract class AbstractEndpoint)
/**
 * Process the given SocketWrapper with the given status. Used to trigger
 * processing as if the Poller (for those endpoints that have one)
 * selected the socket.
 *
 * @param socketWrapper The socket wrapper to process
 * @param event         The socket event to be processed
 * @param dispatch      Should the processing be performed on a new
 *                      container thread
 *
 * @return if processing was triggered successfully
 */
public boolean processSocket(SocketWrapperBase<S> socketWrapper,
        SocketEvent event, boolean dispatch) {
    try {
        if (socketWrapper == null) {
            return false;
        }
        SocketProcessorBase<S> sc = processorCache.pop();
        if (sc == null) {
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            sc.reset(socketWrapper, event);
        }
        Executor executor = getExecutor();
        if (dispatch && executor != null) {
            executor.execute(sc);
        } else {
            sc.run();
        }
    } catch (RejectedExecutionException ree) {
        getLog().warn(sm.getString("endpoint.executor.fail", socketWrapper) , ree);
        return false;
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        // This means we got an OOM or similar creating a thread, or that
        // the pool and its queue are full
        getLog().error(sm.getString("endpoint.process.fail"), t);
        return false;
    }
    return true;
}
(This may differ between Tomcat versions; the code above is from the tomcat-8.5.57 source.)
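As an aside, the Executor used by executor.execute(sc) is the Connector's worker pool. By default each Connector creates its own internal pool (sized by its maxThreads attribute), but a shared pool can also be declared in server.xml and referenced from the Connector (illustrative values):

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/>

<Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"/>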
org.apache.coyote.http11.Http11Processor#service
Which concrete Processor does the work depends on the configured protocol and the socket connection; Http11Processor, for example, handles it as follows:
@Override
public SocketState service(SocketWrapperBase<?> socketWrapper)
        throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);

    // Setting up the I/O
    setSocketWrapper(socketWrapper);

    // Flags
    keepAlive = true;
    openSocket = false;
    readComplete = true;
    boolean keptAlive = false;
    SendfileState sendfileState = SendfileState.DONE;

    while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
            sendfileState == SendfileState.DONE && !endpoint.isPaused()) {

        // Parsing the request header
        try {
            if (!inputBuffer.parseRequestLine(keptAlive)) {
                if (inputBuffer.getParsingRequestLinePhase() == -1) {
                    return SocketState.UPGRADING;
                } else if (handleIncompleteRequestLineRead()) {
                    break;
                }
            }

            // Process the Protocol component of the request line
            // Need to know if this is an HTTP 0.9 request before trying to
            // parse headers.
            prepareRequestProtocol();

            if (endpoint.isPaused()) {
                // 503 - Service unavailable
                response.setStatus(503);
                setErrorState(ErrorState.CLOSE_CLEAN, null);
            } else {
                keptAlive = true;
                // Set this every time in case limit has been changed via JMX
                request.getMimeHeaders().setLimit(endpoint.getMaxHeaderCount());
                // Don't parse headers for HTTP/0.9
                if (!http09 && !inputBuffer.parseHeaders()) {
                    // We've read part of the request, don't recycle it
                    // instead associate it with the socket
                    openSocket = true;
                    readComplete = false;
                    break;
                }
                if (!disableUploadTimeout) {
                    socketWrapper.setReadTimeout(connectionUploadTimeout);
                }
            }
The key step is the call to getAdapter().service(request, response), which hands the parsed Coyote request over to the CoyoteAdapter and, from there, into the servlet container.
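To connect this back to the DispatcherServlet, the remaining hand-off can be summarized as the following condensed call chain (the class names are the real Tomcat/Spring classes, but this is a sketch, not literal source):

getAdapter().service(request, response)                         // org.apache.catalina.connector.CoyoteAdapter
  -> connector.getService().getContainer().getPipeline()
         .getFirst().invoke(request, response)                  // Engine -> Host -> Context -> Wrapper valves
    -> StandardWrapperValve: allocate the servlet instance, build the filter chain
      -> ApplicationFilterChain.doFilter(request, response)
        -> servlet.service(request, response)                   // the Servlet lifecycle method described above
          -> FrameworkServlet.service -> processRequest -> doService
            -> DispatcherServlet.doDispatch(request, response)

From doDispatch onward we are back in the Spring MVC handling covered in the earlier article.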