In this article, I will explain the difference between async-IO and async request processing for HTTP requests in the Java world.
In the pre-Java 1.4 world, Java provided an API to send/receive data over a network socket. The original authors of the JVM mapped this API onto the OS socket API almost one to one.
So, what is the OS socket behaviour? The OS provides a socket programming API with blocking send/recv calls. Since a Java program is just a process running on top of the OS (say, Linux), it has to go through this blocking API.
The world was happy, and Java developers started using the API to send and receive data. But they had to dedicate one Java thread to every socket (client).
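That model can be sketched with plain java.net sockets. This is a toy illustration, not code from any real server, and the class and method names are mine: one thread blocks in accept(), and every accepted client gets its own dedicated thread that sits in a blocking read.

```java
import java.io.*;
import java.net.*;

// A minimal sketch of the pre-NIO model: every accepted socket gets its
// own thread, because the OS read/recv call blocks until data arrives.
class ThreadPerClientServer {

    // Starts the accept loop on a daemon thread; returns the bound port.
    static int start() throws IOException {
        ServerSocket server = new ServerSocket(0);   // 0 = pick any free port
        Thread acceptor = new Thread(() -> {
            while (true) {
                try {
                    Socket client = server.accept(); // blocks until a client connects
                    new Thread(() -> handle(client)).start(); // one thread per client
                } catch (IOException e) {
                    return;
                }
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server.getLocalPort();
    }

    // Echoes lines back; readLine() sits in the blocking recv the whole time.
    static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);
            }
        } catch (IOException ignored) {
        }
    }
}
```

Note how the server cannot reuse the handler thread for anyone else while readLine() is blocked: that thread belongs to that one client until the connection closes.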
Everybody was writing their own flavor of HTTP server. Code sharing was becoming hard, and the Java world demanded standardization.
Enter the Java Servlet spec.
Before moving on, let's define a few terms:
Java server developer: someone who uses the Java socket API to implement the HTTP protocol, e.g. the people building Tomcat.
Java application developer: someone who builds business applications on top of Tomcat.
GETTING BACK NOW
Once the Java servlet spec entered the world, it said:
Dear Java server developers, please provide a method like this:
protected void doGet(HttpServletRequest req, HttpServletResponse res)
so that Java application developers can override doGet and write their business logic inside it. Once the application developer wants to send the response, they call write() on the response's output stream.
A thing to note: since the socket API is blocking, the response write() is also blocking. An additional limitation was that the response object is committed on doGet method exit.
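To make that contract concrete, here is a toy sketch. Request, Response, and MiniServlet are hypothetical stand-ins I made up for this article, not the real javax.servlet types: the container calls service(), the application overrides doGet, and the response is committed the moment doGet returns.

```java
import java.io.StringWriter;

// Hypothetical stand-in for the request the container hands to the app.
class Request {
    final String path;
    Request(String path) { this.path = path; }
}

// Hypothetical stand-in for the response object.
class Response {
    final StringWriter buffer = new StringWriter();
    boolean committed = false;
    // In a real container this ultimately hits the blocking socket send.
    void write(String s) { buffer.write(s); }
    void commit() { committed = true; }    // flushed to the socket
}

// The container-side hook: application developers subclass and override doGet.
abstract class MiniServlet {
    // Called by the server for each request.
    final void service(Request req, Response res) {
        doGet(req, res);
        res.commit(); // the limitation: committed as soon as doGet returns
    }
    protected abstract void doGet(Request req, Response res);
}
```

The key point of the sketch is the last line of service(): once doGet exits, the response is sealed, so all work has to finish on the thread that is running doGet.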
Due to these limitations, one thread had to be dedicated to processing one request.
Time passed and the internet took over the world. One-thread-per-request started to show its limitations.
The thread-per-request model breaks down when there are long pauses during the processing of each request.
For example: fetching data from a sub-service takes a long time.
In such a situation the thread mostly sits idle, yet it stays occupied, and the JVM can easily run out of threads.
Things got even worse with HTTP/1.1 persistent connections: the underlying TCP connection is kept alive, so the server has to block one thread per connection.
But why does the server have to block one thread per connection?
Because the OS provides a blocking socket recv API, the JVM has to sit in that blocking recv call to listen for further requests on the same TCP connection from the client.
The world demanded a solution!
The first solution came from the creators of the JVM: in Java 1.4 they introduced NIO (async-IO). NIO is a non-blocking API for sending/receiving data over sockets.
Some background: alongside the blocking socket API, the OS also provides a non-blocking version of the socket API, together with readiness-notification calls such as select/epoll.
But how does the OS provide that? Does it fork a thread internally that gets blocked?
The answer is no: the OS instructs the hardware to raise an interrupt when there is data to read or room to write.
NIO allowed the Java server developer to tackle problem 2: blocking one thread per TCP connection. With NIO, a thread no longer has to block in recv on each HTTP persistent connection; it touches a connection only when there is data to be processed. This allowed one thread to monitor/handle a large number of persistent connections.
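A minimal sketch of that idea using the java.nio Selector API (the echo behaviour and class name are illustrative, not from any real server): one thread registers every connection with a Selector and only reads from a channel when the OS reports it ready, instead of blocking in recv per connection.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One thread, many connections: the Selector tells us which channels
// are actually ready, so no thread ever blocks in recv on a single socket.
class SelectorEchoServer {

    // Starts the event loop on a daemon thread; returns the bound port.
    static int start() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);                 // non-blocking accept
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        Thread loop = new Thread(() -> run(selector));
        loop.setDaemon(true);
        loop.start();
        return port;
    }

    static void run(Selector selector) {
        ByteBuffer buf = ByteBuffer.allocate(1024);
        try {
            while (true) {
                selector.select();                       // blocks until ANY channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client =
                                ((ServerSocketChannel) key.channel()).accept();
                        client.configureBlocking(false); // reads won't block either
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buf.clear();
                        int n = client.read(buf);        // returns immediately
                        if (n == -1) { client.close(); continue; }
                        buf.flip();
                        client.write(buf);               // echo the bytes back
                    }
                }
            }
        } catch (IOException ignored) {
        }
    }
}
```

The single select() call is where the thread waits, and it waits on all registered connections at once; that is the whole trick behind handling thousands of idle persistent connections with a handful of threads.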
The second solution came from the servlet spec. The spec got an upgrade (Servlet 3.0) that introduced async support (async request processing):
AsyncContext acontext = req.startAsync();
IMPORTANT: this upgrade removed the limitation that the response object is committed on doGet method completion.
This allowed the Java application developer to tackle problem 1 by offloading work to background threads. Now, instead of keeping the request thread waiting during the long pause, that thread can be used to handle other requests, and the response is completed later from the background.
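The offloading pattern can be sketched without a servlet container. Here a CompletableFuture on a worker pool stands in for the AsyncContext: the request thread hands the slow call to the pool and returns immediately, rather than sleeping through it. All class and method names below are illustrative.

```java
import java.util.concurrent.*;

// Sketch of async request processing: the doGet-like method returns at
// once, and the slow sub-service call finishes later on a worker thread
// (in a real servlet, you would then call acontext.complete()).
class AsyncOffload {
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Stand-in for doGet: kick off the slow work, free the calling thread.
    static CompletableFuture<String> handleRequest(String user) {
        return CompletableFuture.supplyAsync(() -> slowSubService(user), workers);
    }

    // Stand-in for a slow downstream call (e.g. another HTTP service).
    static String slowSubService(String user) {
        try {
            Thread.sleep(100);                 // simulate the long pause
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "hello, " + user;
    }
}
```

The caller of handleRequest() gets control back in microseconds; only the small worker pool pays for the 100 ms pause, so the request-handling threads stay free to serve other clients.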
Async-IO in Java is basically using the non-blocking version of the OS socket API.
Async request processing is basically the servlet spec's standardization of how one thread can process many requests.
Motivation for this article: team learning/knowledge sharing.