HikariCP connection pool Metric information - slick

Hello,
Is there any way that I can get the HikariCP connection pool metrics, such as total connections, idle connections, and so on?
I know HikariPool logs such information, for example:
Before cleanup pool stats db (total=20, inUse=0, avail=20, waiting=0)
But it is logged too frequently and my code cannot control it. I would like to log this information at a configurable interval, such as every 1 minute. BTW, I use Scala Slick 3.0.

HikariCP supports Dropwizard Metrics. Check out this link:
https://github.com/brettwooldridge/HikariCP/wiki/Dropwizard-Metrics

Dropwizard Metrics (from https://stackoverflow.com/a/42301023):

private MetricRegistry metricRegistry;
...
if (dataSource instanceof HikariDataSource) {
    ((HikariDataSource) dataSource).setMetricRegistry(metricRegistry);
}

Prometheus Metrics:

private DataSource dataSource;
...
if (dataSource instanceof HikariDataSource) {
    ((HikariDataSource) dataSource).setMetricsTrackerFactory(new PrometheusMetricsTrackerFactory());
}

Related

Efficient OkHttp configuration for Multithreaded environment

What is the best configuration I can use to set up the OkHttp3 client correctly in a multi-threaded environment? I have 2 main questions:
Connection pool - How do we define the number of available connections in the pool? Can it be scaled at runtime? The number of concurrent users will be very high, and I need to make sure users aren't waiting a long time for a connection to become available from the pool.
I read that OkHttp might end up doing multiple retries in case of failures or timeouts. Is it possible to enable this only for GETs and not POSTs while using just 1 OkHttp client?
Also, is there anything else I should be considering?
Here is my starting code for the client.
private static final int timeout = 15000;

private static final OkHttpClient okClient = new OkHttpClient()
        .newBuilder()
        .connectTimeout(timeout, TimeUnit.MILLISECONDS)
        .readTimeout(timeout, TimeUnit.MILLISECONDS)
        .writeTimeout(timeout, TimeUnit.MILLISECONDS)
        .retryOnConnectionFailure(false)
        .addInterceptor(new HttpLoggingInterceptor().setLevel(HttpLoggingInterceptor.Level.BASIC))
        .build();
You can configure the connection pool and then pass it into the client builder.
https://square.github.io/okhttp/3.x/okhttp/okhttp3/ConnectionPool.html
See Connection Pool - OkHttp for an example.
For the second question, you can disable automatic retries and do this in your application code instead. Use retryOnConnectionFailure(false) as you show above.
To apply this differently for GETs and POSTs, you can customise one client derived from a shared base, like the following:
val postClient = client.newBuilder().retryOnConnectionFailure(false).build()
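Putting both suggestions together, here is a rough sketch (in Java, matching the question's snippet; the pool size, keep-alive, and timeout values are illustrative assumptions, not recommendations):

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class OkHttpClients {
    // Keep up to 20 idle connections alive for 5 minutes (illustrative values).
    private static final ConnectionPool POOL = new ConnectionPool(20, 5, TimeUnit.MINUTES);

    private static final OkHttpClient BASE = new OkHttpClient.Builder()
            .connectionPool(POOL)
            .connectTimeout(15, TimeUnit.SECONDS)
            .build();

    // GETs are idempotent, so let OkHttp retry them on connection failure.
    public static final OkHttpClient GET_CLIENT =
            BASE.newBuilder().retryOnConnectionFailure(true).build();

    // POSTs may not be idempotent, so disable automatic retries for them.
    public static final OkHttpClient POST_CLIENT =
            BASE.newBuilder().retryOnConnectionFailure(false).build();
}

Because newBuilder() shares the base client's connection pool and dispatcher, this keeps a single set of connections while letting GETs retry and POSTs fail fast.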

Troubleshooting the websocket limit in Azure, active connections

I'm in the process of troubleshooting an App Service that is using websockets.
It's running on service plan Basic which allows for 350 websockets.
This is the only app on that plan that uses websockets.
The problem is that after about 20 hours I get 503 responses saying I have reached my websocket limit.
The setup right now has 3 clients connecting to the service.
In the process of investigating websocket leakage in my app I would like to track the number of websockets in use.
Is there anywhere, from my app or in Azure portal, where I can see the number of active websocket connections?
Follow up:
I've logged the websocket connections as Amor suggested.
The HTTP part of my app is still working; I can get dynamic results from the app, which now reports which websocket connections are active and how many have been created since start.
After restarting the app service, I configured one client to reconnect indefinitely.
It worked fine until the "total websocket connections" reached 350. At this time I shut down the client.
The limit should be 350 concurrent connections but it looks like it is 350 in total since start.
Most (at least 340) of these connections were initiated by a single client that disposed each connection before starting a new one; it was shut down once the limit was reached.
It has been suggested that I upgrade from Basic to Standard, since Standard doesn't have this artificial limitation. The only way I can see this working is if there is a bug in the websocket limit for the Basic plan.
Update 2
In parallel I've been in contact with Microsoft Developer Support, and they noticed that the sockets appear to be stuck in IIS but not in Kestrel. The cause of this is still being investigated.
Support could show me graphs of the connection usage over time which clearly showed how the limit was reached.
I'll keep this question updated in case there was some error in my code.
I suggest you define a variable to count the connections. If a web socket connection is opened, just increase the number of connections. If a web socket connection is closed, decrease the number of connections. Code below is for your reference.
Count the connections for ASP.NET SignalR.
public class MyHub : Hub
{
    // Hub instances are transient (one per invocation), so the counter
    // must be static; Interlocked keeps the updates thread-safe.
    private static int _connectionCount = 0;

    public override Task OnConnected()
    {
        Interlocked.Increment(ref _connectionCount);
        return base.OnConnected();
    }

    public override Task OnReconnected()
    {
        // OnDisconnected is not raised for a transient drop that
        // reconnects in time, so do not increment again here.
        return base.OnReconnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        Interlocked.Decrement(ref _connectionCount);
        return base.OnDisconnected(stopCalled);
    }
}
Count the connections in traditional ASP.NET.
public class WSChatController : ApiController
{
    // Controllers are created per request, so the counter must be static.
    private static int _connectionCount = 0;

    public HttpResponseMessage Get()
    {
        if (HttpContext.Current.IsWebSocketRequest)
        {
            HttpContext.Current.AcceptWebSocketRequest(ProcessWSChat);
        }
        return new HttpResponseMessage(HttpStatusCode.SwitchingProtocols);
    }

    private async Task ProcessWSChat(AspNetWebSocketContext context)
    {
        WebSocket socket = context.WebSocket;
        // Count the connection once when it is accepted, not once per message.
        Interlocked.Increment(ref _connectionCount);
        try
        {
            while (true)
            {
                ArraySegment<byte> buffer = new ArraySegment<byte>(new byte[1024]);
                WebSocketReceiveResult result = await socket.ReceiveAsync(
                    buffer, CancellationToken.None);
                if (socket.State != WebSocketState.Open)
                {
                    break;
                }
                // Process the request
            }
        }
        finally
        {
            Interlocked.Decrement(ref _connectionCount);
        }
    }
}

scala: apache httpclient in multi-threaded environment

I am writing a singleton class (an object in Scala) which uses Apache HttpClient (4.5.2) to post some file content and return the status to the caller.
object HttpUtils {
  protected val retryHandler = new HttpRequestRetryHandler() {
    def retryRequest(exception: IOException, executionCount: Int, context: HttpContext): Boolean = {
      // retry logic
      true
    }
  }

  private val connectionManager = new PoolingHttpClientConnectionManager()

  // Reusing the same client for each request that might be coming from different threads.
  // Is it correct ????
  val httpClient = HttpClients.custom()
    .setConnectionManager(connectionManager)
    .setRetryHandler(retryHandler)
    .build()

  def restApiCall(url: String, rDD: RDD[SomeMessage]): Boolean = {
    // Creating a new context for each request
    val httpContext: HttpClientContext = HttpClientContext.create
    val post = new HttpPost(url)
    // convert RDD to text file using rDD.collect
    // add this file as MultipartEntity to post
    var response = None: Option[CloseableHttpResponse] // Is it the correct way of using it?
    try {
      response = Some(httpClient.execute(post, httpContext))
      val responseCode = response.get.getStatusLine.getStatusCode
      EntityUtils.consume(response.get.getEntity) // Is it required ???
      responseCode == 200
    }
    finally {
      if (response.isDefined) response.get.close()
      post.releaseConnection() // Is it required ???
    }
  }

  def onShutDown(): Unit = {
    connectionManager.close()
    httpClient.close()
  }
}
Multiple threads (more specifically, from a Spark streaming context) are calling the restApiCall method. I am relatively new to Scala and Apache HttpClient. I have to make frequent connections to only a few fixed servers (i.e., 5-6 fixed URLs with different request parameters).
I went through multiple online resources but am still not confident about it.
Is this the best way to use an HTTP client in a multi-threaded environment?
Is it possible to keep connections alive and reuse them for various requests? Would that be beneficial in this case?
Am I using/releasing all resources efficiently? If not, please suggest improvements.
Is it good to use this from Scala, or is there a better library?
Thanks in advance.
It seems the official docs have answers to all your questions:
2.3.3. Pooling connection manager

PoolingHttpClientConnectionManager is a more complex implementation that manages a pool of client connections and is able to service connection requests from multiple execution threads. Connections are pooled on a per route basis. A request for a route for which the manager already has a persistent connection available in the pool will be serviced by leasing a connection from the pool rather than creating a brand new connection.

PoolingHttpClientConnectionManager maintains a maximum limit of connections on a per route basis and in total. Per default this implementation will create no more than 2 concurrent connections per given route and no more than 20 connections in total. For many real-world applications these limits may prove too constraining, especially if they use HTTP as a transport protocol for their services.

2.4. Multithreaded request execution

When equipped with a pooling connection manager such as PoolingClientConnectionManager, HttpClient can be used to execute multiple requests simultaneously using multiple threads of execution. The PoolingClientConnectionManager will allocate connections based on its configuration. If all connections for a given route have already been leased, a request for a connection will block until a connection is released back to the pool. One can ensure the connection manager does not block indefinitely in the connection request operation by setting 'http.conn-manager.timeout' to a positive value. If the connection request cannot be serviced within the given time period a ConnectionPoolTimeoutException will be thrown.
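Translating the quoted defaults into code, a hedged sketch (in Java; the limits and timeout values are illustrative) of raising the per-route/total limits and bounding the connection-request wait mentioned above:

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PooledClientFactory {
    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(50);           // default is 20 in total
        cm.setDefaultMaxPerRoute(10); // default is 2 per route

        // Equivalent of 'http.conn-manager.timeout': fail with a
        // ConnectionPoolTimeoutException instead of blocking forever.
        RequestConfig config = RequestConfig.custom()
                .setConnectionRequestTimeout(5_000)
                .build();

        return HttpClients.custom()
                .setConnectionManager(cm)
                .setDefaultRequestConfig(config)
                .build();
    }
}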

Cassandra Cluster Recovery

I have a Spring Boot application that uses Spring Data for Cassandra. One of the requirements is that the application will start even if the Cassandra cluster is unavailable. The application logs the situation, and all its endpoints will not work properly, but the application does not shut down. It should retry connecting to the cluster during this time. When the cluster becomes available, the application should start to operate normally.
If I am able to connect during application start and the cluster becomes unavailable after that, the Cassandra Java driver is capable of managing the retries.
How can I manage the retries during application start and still use Cassandra repositories from Spring Data?
Thanks
It is possible to start a Spring Boot application when Apache Cassandra is not available, but you need to define the Session and CassandraTemplate beans on your own with @Lazy. The beans are provided out of the box by CassandraAutoConfiguration, but they are initialized eagerly (the default behavior), which creates a Session. The Session requires a connection to Cassandra, which will prevent startup if it is not initialized lazily.
The following code will initialize the resources lazily:
@Configuration
public class MyCassandraConfiguration {

    @Bean
    @Lazy
    public CassandraTemplate cassandraTemplate(@Lazy Session session, CassandraConverter converter) throws Exception {
        return new CassandraTemplate(session, converter);
    }

    @Bean
    @Lazy
    public Session session(CassandraConverter converter, Cluster cluster,
            CassandraProperties cassandraProperties) throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(cassandraProperties.getKeyspaceName());
        session.setSchemaAction(SchemaAction.NONE);
        return session.getObject();
    }
}
One of the requirements is that the application will start even if the Cassandra Cluster is unavailable
I think you should read this section of the Java driver docs: http://datastax.github.io/java-driver/manual/#cluster-initialization
The Cluster object does not connect automatically unless some calls are executed.
Since you're using Spring Data Cassandra (which I do not recommend, since it has fewer features than the plain mapper module of the Java driver...), I don't know whether the Cluster or Session objects are exposed directly to users.
For the retry, you can put the cluster.init() call in a try/catch block; if the cluster is still unavailable, you'll catch a NoHostAvailableException according to the docs. Upon the exception, you can schedule a retry of cluster.init() for later.
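For illustration, a minimal retry sketch along those lines, assuming the DataStax Java driver 3.x; the contact point and the 10-second delay are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class CassandraStartup {
    public static Cluster connectWithRetry() throws InterruptedException {
        while (true) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1") // placeholder contact point
                    .build();
            try {
                cluster.init(); // connects; throws if no host is reachable
                return cluster;
            } catch (NoHostAvailableException e) {
                cluster.close(); // discard the failed instance and retry fresh
                Thread.sleep(10_000);
            }
        }
    }
}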

Reconnect to DB within log4j

If I have a JDBCAppender configured to send log messages to MySQL, and I restart the database while my system is up, does it reconnect to the DB?
I have had this use case occur over this past weekend. My database is hosted by Amazon AWS. It rolled over my log database and all of the instances logging to that database via the Log4j JDBC Appender stopped logging. I bounced one of the apps and it resumed logging.
So the answer to this question, through experience, appears to be No.
If the database goes down and comes back online, the JDBC appender does not reconnect automatically.
Edit
JDBCAppender's getConnection might be overridden to fix this. JDBCAppender in log4j 1.2.15 has the following code:
protected Connection getConnection() throws SQLException {
    if (!DriverManager.getDrivers().hasMoreElements())
        setDriver("sun.jdbc.odbc.JdbcOdbcDriver");

    if (connection == null) {
        connection = DriverManager.getConnection(databaseURL, databaseUser,
                databasePassword);
    }
    return connection;
}
So if the connection is not null but is broken (i.e., needs a reconnect), log4j will return the broken connection, and executing the statement which does the logging to the DB will fail.
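For illustration, a hedged sketch of such an override; the subclass name is hypothetical, and isValid requires a JDBC 4 driver:

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.log4j.jdbc.JDBCAppender;

// Hypothetical subclass: discard a stale connection so the superclass
// logic opens a fresh one after the database has been restarted.
public class ReconnectingJDBCAppender extends JDBCAppender {
    @Override
    protected Connection getConnection() throws SQLException {
        if (connection != null && !connection.isValid(1)) {
            try {
                connection.close();
            } catch (SQLException ignored) {
                // the connection is already broken; nothing more to do
            }
            connection = null; // forces super.getConnection() to reconnect
        }
        return super.getConnection();
    }
}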
Not a workaround, but a proper solution is to replace log4j with logback: see related answer: Log to a database using log4j
