How to reject request from client using Jetty config in kairosdb


I am using the latest version of KairosDB. I tried enabling the Jetty thread pool. My expectation was that if the queue fills up with requests, all subsequent requests are rejected immediately. But the request is served after some time, even though I see

 java.util.concurrent.RejectedExecutionException

Client requests should be rejected when the queue is full. How do I achieve that?

For testing, I set these parameters:

kairosdb.jetty.threads.queue_size=2 #queue
kairosdb.jetty.threads.min=2 # minThread
kairosdb.jetty.threads.max=4 #maxThread
kairosdb.jetty.threads.keep_alive_ms=1000

The corresponding Jetty thread pool code:

new ExecutorThreadPool(minThreads, maxThreads, keepAliveMs, TimeUnit.MILLISECONDS, queue);
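The executor-level rejection can be reproduced with a standalone JDK sketch (this is not KairosDB code; it just uses the same sizing, since `ExecutorThreadPool` wraps a `java.util.concurrent` executor in a similar way). With 4 max threads and a bounded queue of 2, the 7th long-running submission is rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueRejectionDemo {
    // Submits 10 long-running tasks; returns {accepted, rejected}.
    static int[] submitAll() {
        // Same sizing as the config above: 2 core threads, max 4,
        // 1000 ms keep-alive, bounded queue of 2.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 1000, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(2));
        Runnable slow = () -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        };
        int accepted = 0, rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                pool.execute(slow);
                accepted++;
            } catch (RejectedExecutionException e) {
                // Thrown once all 4 threads are busy AND the queue holds 2 tasks.
                rejected++;
            }
        }
        pool.shutdownNow();
        return new int[] { accepted, rejected };
    }

    public static void main(String[] args) {
        int[] r = submitAll();
        System.out.println("accepted=" + r[0] + " rejected=" + r[1]);
    }
}
```

Running this prints `accepted=6 rejected=4` (2 core threads + 2 queued + 2 extra threads up to the max of 4, then rejection), which matches the exception seen in the KairosDB logs, even though Jetty itself does not turn that exception into a client-visible rejection.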

The Jetty version used in KairosDB is 8.1.16.

1 Answer

Joakim Erdfelt (Best Answer)

Jetty 8.1.16 was released in September 2014 and is now EOL (End of Life). Consider using a version of Jetty that is up to date, stable, and supported (such as Jetty 9.4.12.20180830).

The fact that you get a java.util.concurrent.RejectedExecutionException screams that you have an insufficient thread pool configuration.

The threadpool configuration you have is EXTREMELY small.

That would only be suitable for a single-core, single-CPU, single-thread hardware configuration. Why? Because your CPU/core/thread hardware configuration determines your NIO behavior and dictates the minimum demands on your thread pool.

On a MacOS laptop from 2009 (nearly 10 years ago!) you would need a minimum of 9 threads just to support a single connection making a single blocking REST request on that hardware.

On a modern Ryzen Threadripper system you would often need a minimum thread count of 69 threads just to support a single connection making a single blocking REST request on this hardware.

On the other hand, your configuration is quite suitable on a Raspberry Pi Zero, and could support about 3 connections, with 1 request active per connection.

With that configuration you would only be able to handle simple requests serially, and only if your application uses no async processing or async I/O. Why? Because even a typical modern web page requires a minimum thread count of around 40, due to how browsers utilize your server.

The ExecutorThreadPool is also a terrible choice for your situation (that's only suitable for highly concurrent environments, think 24+ cpu/cores, and with minimum thread configurations above 500, often in the thousands).

You would be better off using the standard QueuedThreadPool; it is much more performant at the low end and is capable of growing to handle demand (and scaling back over time to lower resource utilization as demand subsides).

The QueuedThreadPool (in Jetty 9.4.x) also has protections against bad configurations and will warn you if the configuration is insufficient for either your hardware configuration, your chosen set of features in Jetty, or your specific configuration within Jetty.
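Swapping in a QueuedThreadPool looks roughly like this. This is a configuration sketch only: it assumes Jetty 9.4.x on the classpath, and the sizes shown are illustrative, not a recommendation for your hardware.

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ServerWithQueuedPool {
    public static void main(String[] args) throws Exception {
        // maxThreads=200, minThreads=8, idleTimeout=60s: the pool grows
        // under load and shrinks back as demand subsides.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8, 60_000);
        Server server = new Server(threadPool);
        server.start();
        server.join();
    }
}
```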

If you want to reject connections when resources are low, then consider using the DoSFilter (or, if you want to be more gentle, the QoSFilter).
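Registering the QoSFilter might look like the sketch below. It assumes jetty-servlets is on the classpath and that the server uses a ServletContextHandler; the `maxRequests` and `waitMs` values are illustrative.

```java
import java.util.EnumSet;
import javax.servlet.DispatcherType;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.QoSFilter;

public class QoSSetup {
    // Configuration sketch: caps concurrent request processing via QoSFilter.
    static void addQoS(ServletContextHandler context) {
        FilterHolder qos = new FilterHolder(QoSFilter.class);
        // At most 10 requests processed concurrently; excess requests wait
        // up to 50 ms before being suspended.
        qos.setInitParameter("maxRequests", "10");
        qos.setInitParameter("waitMs", "50");
        context.addFilter(qos, "/*", EnumSet.of(DispatcherType.REQUEST));
    }
}
```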

Attempting to limit usage via the ThreadPool will never work: in order to reject a connection, a thread is needed to accept it (the acceptor thread, one per server connector), another to process the NIO events (the selector thread, a shared resource handling multiple connections), and another to handle the request (to return the HTTP status code 503).

If you want an implementation in your own code (not Jetty), you could probably just write a Filter that counts active exchanges (request and response) and forces a 503 response status if the count is above some configurable number.

But if you do that, you should probably force all responses to close, i.e. send a Connection: close response header and not allow persistent connections.
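The counting logic such a Filter needs can be isolated in a small helper. RequestGate below is hypothetical (not part of Jetty or KairosDB): the Filter would call tryEnter() before chain.doFilter(...), send a 503 plus a Connection: close header when it returns false, and call exit() in a finally block.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper: caps the number of in-flight exchanges.
public class RequestGate {
    private final int maxActive;
    private final AtomicInteger active = new AtomicInteger();

    public RequestGate(int maxActive) {
        this.maxActive = maxActive;
    }

    /** Returns true if the caller may proceed; false means reply with 503. */
    public boolean tryEnter() {
        if (active.incrementAndGet() > maxActive) {
            active.decrementAndGet(); // roll back: over the limit
            return false;
        }
        return true;
    }

    /** Call from a finally block once the exchange completes. */
    public void exit() {
        active.decrementAndGet();
    }
}
```

Increment-then-roll-back (rather than a check-then-increment) keeps the gate race-free without locks, so it is safe to call from many request threads at once.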