We have a Jersey client with a basic config:

import java.net.URI;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.UriBuilder;

import org.glassfish.jersey.client.ClientProperties;

public class HttpClient {

    private transient final WebTarget target;

    public HttpClient(final String host, final int port, final String path, final int requestTimeout) {
        final URI uri = UriBuilder.fromUri("http://" + host).port(port).build();
        final Client client = ClientBuilder.newClient();

        client.property(ClientProperties.READ_TIMEOUT, requestTimeout);
        target = client.target(uri).path(path);
    }

    public byte[] makeRequest(final byte[] request) throws HsmException {
        try {
            return target.request()
                    .accept(MediaType.APPLICATION_OCTET_STREAM)
                    .post(Entity.entity(request, MediaType.APPLICATION_OCTET_STREAM), byte[].class);
        }
        catch (Exception e) {
            // Box JAX-RS exceptions, as they get handled oddly by the outer Jersey layer.
            throw new HsmException("Could not make request: " + e.getMessage(), e);
        }
    }

}

Now, with that client we manage to make around 900 requests per second. In an attempt to get better results, I thought about adding connection pooling, using the Apache HttpClient via the Jersey connector, like this:

import java.net.URI;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.UriBuilder;

import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.glassfish.jersey.apache.connector.ApacheClientProperties;
import org.glassfish.jersey.apache.connector.ApacheConnectorProvider;
import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;

public class HttpClient {
    private transient final WebTarget target;

    public HttpClient(final String host, final int port, final String path, final int requestTimeout) {
        final ClientConfig clientConfig = new ClientConfig();
        clientConfig.property(ClientProperties.READ_TIMEOUT, requestTimeout);
        clientConfig.property(ClientProperties.CONNECT_TIMEOUT, 500);

        final PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(150);
        connectionManager.setDefaultMaxPerRoute(40);
        connectionManager.setMaxPerRoute(new HttpRoute(new HttpHost(host)), 80);

        clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, connectionManager);

        final ApacheConnectorProvider connector = new ApacheConnectorProvider();
        clientConfig.connectorProvider(connector);

        final URI uri = UriBuilder.fromUri("http://" + host).port(port).build();
        final Client client = ClientBuilder.newClient(clientConfig);
        target = client.target(uri).path(path);
    }

    public byte[] makeRequest(final byte[] request) throws HsmException {
        try {
            return target.request()
                    .accept(MediaType.APPLICATION_OCTET_STREAM)
                    .post(Entity.entity(request, MediaType.APPLICATION_OCTET_STREAM), byte[].class);
        }
        catch (Exception e) {
            // Box JAX-RS exceptions, as they get handled oddly by the outer Jersey layer.
            throw new HsmException("Could not make request: " + e.getMessage(), e);
        }
    }
}

And the results are exactly the same: about 900 requests per second.
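
For context, the numbers come from calling makeRequest concurrently from a fixed pool of worker threads. Below is only a minimal sketch of that kind of load loop; the thread count, payload size, host, port, path and timeout are placeholders for illustration, not the exact harness:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadTest {

    public static void main(final String[] args) throws Exception {
        // Placeholder values; the real host, port, path and timeout differ.
        final HttpClient client = new HttpClient("localhost", 8080, "hsm", 2000);
        final byte[] payload = new byte[64];

        final int threads = 40;
        final ExecutorService pool = Executors.newFixedThreadPool(threads);
        final AtomicLong completed = new AtomicLong();

        final long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                // Each worker fires requests back-to-back until interrupted.
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        client.makeRequest(payload);
                        completed.incrementAndGet();
                    } catch (final Exception e) {
                        // Failures are ignored in this sketch; the real harness logs them.
                    }
                }
            });
        }

        TimeUnit.SECONDS.sleep(30);   // measurement window
        pool.shutdownNow();

        final double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.0f requests/second%n", completed.get() / seconds);
    }
}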

I am not being limited by CPU, RAM, disk, etc., and I can't find the bottleneck. I tested multiple values when configuring the connection manager, but got exactly the same results.
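
To check whether the pool itself is the limit, its usage can be inspected while the test runs. A minimal sketch, assuming the connection manager is kept accessible (here passed in as a parameter) instead of being a local variable in the constructor; PoolStats and getTotalStats() are part of Apache HttpClient 4.x:

import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.pool.PoolStats;

// Hypothetical helper: log this periodically during the load test.
// If "leased" stays well below the configured maximum while throughput
// stays flat, the connection pool is probably not the limiting factor.
public static String poolUsage(final PoolingHttpClientConnectionManager connectionManager) {
    final PoolStats stats = connectionManager.getTotalStats();
    return "leased=" + stats.getLeased()
            + " pending=" + stats.getPending()
            + " available=" + stats.getAvailable()
            + " max=" + stats.getMax();
}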

Am I missing something? Are there other parameters I should be setting? Am I using this the wrong way?
