Considering the heavy authentication load I am getting, using multiple threads together with a connection pool seems like the best way to handle it. However, I have two design options in mind:
Pass the ConnectionPool to each thread as an argument, and let the thread obtain a connection from the pool to perform the bind request
Pass a connection to each thread, rather than the ConnectionPool, and perform the bind request on it
Which design would you prefer, and why?
I found an answer to this question after a discussion on the UnboundID SDK forum. I will add the final finding here for others. According to that discussion:
It is recommended to make the pool available to the threads by passing the pool to each thread as an argument.
public BindResult doBind(LDAPConnectionPool pool, BindRequest bindRequest)
    throws LDAPException
{
    return pool.bind(bindRequest);
}
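As an illustration of option 1, here is a rough sketch (not from the forum discussion) of several worker threads sharing one LDAPConnectionPool and calling the doBind method above; the host, port, bind DNs, password and pool size are made-up placeholder values:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.unboundid.ldap.sdk.BindRequest;
import com.unboundid.ldap.sdk.BindResult;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SimpleBindRequest;

public class PooledBindExample
{
    public static void main(String[] args) throws Exception
    {
        // One pool for the whole application; the pool itself is thread safe.
        LDAPConnection seed = new LDAPConnection("ldap.example.com", 389);
        LDAPConnectionPool pool = new LDAPConnectionPool(seed, 10);

        ExecutorService workers = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++)
        {
            final String dn = "uid=user" + i + ",ou=people,dc=example,dc=com";
            workers.submit(() -> {
                try
                {
                    // The pool is the shared argument; it hands out a connection per bind.
                    BindResult result = doBind(pool, new SimpleBindRequest(dn, "secret"));
                    System.out.println(dn + " -> " + result.getResultCode());
                }
                catch (LDAPException e)
                {
                    System.err.println(dn + " failed: " + e.getResultCode());
                }
            });
        }
        workers.shutdown();
    }

    static BindResult doBind(LDAPConnectionPool pool, BindRequest bindRequest)
        throws LDAPException
    {
        return pool.bind(bindRequest);
    }
}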
I know there is an analogous question, but it is about ASP.NET, not ASP.NET Core. Its answers are 7-9 years old, and mixing discussion of ASP.NET and ASP.NET Core there may not be a good idea.
What I mean by thread safe in this case:
Is it safe to use the read/write methods (like Set(...)) of the Session (accessed via HttpContext, which is accessed via an injected IHttpContextAccessor) in multiple requests belonging to the same session?
The obvious answer would be yes, because if it were not safe, then every developer would have to make their session-accessing code thread safe...
I took a look at the DistributedSession source code, which seems to be the default (in the debugger, my session, accessed as described above, is an instance of DistributedSession), and found no traces of any synchronization or other techniques, like locks... even the private _store member is a plain Dictionary...
How can this be thread safe under concurrent modification? What am I missing?
DistributedSession is created by DistributedSessionStore which is registered as a transient dependency. That means that the DistributedSessionStore itself is implicitly safe because it isn’t actually shared between requests.
The session uses a dictionary as its underlying data store, which is also local to the DistributedSession object. The _store dictionary is initialized lazily, the first time the session is accessed, by deserializing the data stored in the cache. That looks like this:
var data = _cache.Get(_sessionKey);
if (data != null)
{
    Deserialize(new MemoryStream(data));
}
So the access to _cache here is a single operation. The same applies when writing to the cache.
As for IDistributedCache implementations, you can usually expect them to be thread-safe to allow parallel access. The MemoryCache for example uses a concurrent collection as the backing store.
What all this means for concurrent requests is basically that you should not expect one request to directly impact the session of another request. The session is usually deserialized only once per request, so updates made by other requests in the meantime will not show up.
I use GlassFish 4 as my Java EE server.
This is how I understand servlet processing: there is a pool of threads, and when a request comes in, a thread is taken from the pool to process it. Afterwards the thread is returned to the pool.
Based on the information above, I suppose (I am not sure) that WebSockets (server endpoints) are processed the same way: there is a pool of threads, and when
a client creates a new WebSocket, a thread is taken from the pool to create a new instance of the ServerEndpoint and to execute the @OnOpen method. Afterwards the thread is returned to the pool.
the client sends a message over the WebSocket to the server, a thread is taken from the pool to execute the @OnMessage method. Afterwards the thread is returned to the pool.
the client closes the WebSocket, a thread is taken from the pool to execute the @OnClose method. Afterwards the thread is returned to the pool.
All this means that every method of the ServerEndpoint can be executed by a different thread. Is my understanding correct?
Yes.
The ServerEndpoint instance lives as long as the associated WebSocket session, which is made available as the Session argument during @OnOpen. During that WebSocket session, many HTTP and WebSocket requests may be fired, and each such request counts as an individual thread.
In other words, if your ServerEndpoint class needs to deal with instance variables in multiple methods for some reason, it must be implemented in a thread-safe manner. Depending on the concrete functional requirement, you'd probably be better off using Session#getUserProperties() instead to carry around state associated with the WS session (think of it as session attributes).
Note that all of this holds regardless of the container and WebSocket implementation used.
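For instance, a minimal sketch of an endpoint that keeps its state in the user properties map rather than in instance fields could look like this; the endpoint path and the "greeting" key are made up for illustration:

import java.io.IOException;

import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class EchoEndpoint
{
    @OnOpen
    public void onOpen(Session session)
    {
        // State tied to this WebSocket session, not to the endpoint instance.
        session.getUserProperties().put("greeting", "Hello from session " + session.getId());
    }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException
    {
        // Whatever thread handles this message, the state comes from the session.
        String greeting = (String) session.getUserProperties().get("greeting");
        session.getBasicRemote().sendText(greeting + ", you said: " + message);
    }
}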
My MessageListener implementation is not thread safe.
This causes issues when I try to wire it into a DefaultMessageListenerContainer with multiple consumers, since all the consumers share the same MessageListener object.
Is there a way to overcome this by making the DefaultMessageListenerContainer create multiple MessageListener instances, so that the MessageListener is not shared among consumer threads?
That way each consumer thread would have its own MessageListener instance.
Please advise.
There's nothing built in to support this. It is generally considered best practice to make services stateless (and thus thread-safe).
If that's not possible, you would need to create a wrapper listener; two simple approaches would be to store instances of your listener in a ThreadLocal or maintain a pool of objects and retrieve/return instances from/to the pool on each message.
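A minimal sketch of the ThreadLocal variant, assuming your non-thread-safe listener is a class called MyStatefulListener that implements MessageListener (the name is a placeholder): the wrapper below is the single listener you wire into the DefaultMessageListenerContainer, and each consumer thread lazily gets its own delegate.

import javax.jms.Message;
import javax.jms.MessageListener;

public class ThreadLocalDelegatingListener implements MessageListener
{
    // One delegate per consumer thread (ThreadLocal.withInitial requires Java 8).
    private final ThreadLocal<MessageListener> delegate =
            ThreadLocal.withInitial(MyStatefulListener::new);

    @Override
    public void onMessage(Message message)
    {
        // Each container consumer thread works with its own listener instance.
        delegate.get().onMessage(message);
    }
}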
I'm using multithreaded WCF with maxConcurrentCalls = 10. By logging calls to my service I see that 10 different threads execute in my service class and that they are reused for subsequent calls.
Can I tell WCF to destroy/delete a thread so it will create a new one on the next call?
This is because I have thread-static state that I sometimes want cleared (on unexpected exceptions). I am using thread-static scope to gain performance.
WCF doesn't create new threads; it uses threads from a thread pool to service requests. When a request begins, WCF draws a thread from this pool to execute the request, and after it finishes it returns the thread to the pool. The way WCF uses threads underneath is an implementation detail that you should not rely on, so you should never use thread-static storage in ASP.NET/WCF to hold per-request state.
In ASP.NET you should use HttpContext.Items, and in WCF the OperationContext, to store state that needs to be available throughout the request.
Here's a good blog post you may take a look at which illustrates a nice way to abstract this.
I have a severe problem with my database connection in my web application. Since I use a single database connection for the whole application, obtained from a singleton Database class, concurrent DB operations (two users) cause the database to roll back the transactions.
This is the static method I use:
All threads/servlets call the static Database.doSomething(...) methods, which in turn call the method below.
private static /* synchronized */ Connection getConnection(final boolean autoCommit) throws SQLException {
    if (con == null) {
        con = new MyRegistrationBean().getConnection();
    }
    con.setAutoCommit(true); // TODO
    return con;
}
What's the recommended way to manage this DB connection (or connections) so that I don't run into the same problem?
Keeping a Connection open forever is a very bad idea. It doesn't have an endless lifetime; your application may break whenever the DB times out the connection and closes it. Best practice is to acquire and close the Connection, Statement and ResultSet in the shortest possible scope, to avoid resource leaks and the application crashes caused by leaks and timeouts.
Since connecting to the DB is an expensive task, you should consider using a connection pool to improve connection performance. A decent application server / servlet container usually already provides a connection pool facility in the form of a JNDI DataSource. Consult its documentation for details on how to create one; in the case of Tomcat, for example, you can find it here.
Even when using a connection pool, you still have to write proper JDBC code: acquire and close all the resources in the shortest possible scope. The connection pool will in turn worry about actually closing the connection or just releasing it back to the pool for reuse.
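As a rough sketch of what that looks like in code (the JNDI name jdbc/MyDS, the table and the query are placeholders), each call acquires a connection from the pooled DataSource and closes everything via try-with-resources:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class UserDao
{
    private final DataSource dataSource;

    public UserDao() throws NamingException
    {
        // Look up the container-managed connection pool once.
        dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDS");
    }

    public boolean exists(String email) throws SQLException
    {
        String sql = "SELECT 1 FROM users WHERE email = ?";

        // try-with-resources returns the connection to the pool and closes the
        // statement and result set, even when an exception is thrown.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql))
        {
            statement.setString(1, email);
            try (ResultSet resultSet = statement.executeQuery())
            {
                return resultSet.next();
            }
        }
    }
}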
You may get some more insights from this article on how to do the JDBC basics the proper way. As a completely different alternative, learn EJB and JPA; they abstract away all the JDBC boilerplate into one-liners.
Hope this helps.
See also:
Is it safe to use a static java.sql.Connection instance in a multithreaded system?
Am I Using JDBC Connection Pooling?
How should I connect to JDBC database / datasource in a servlet based application?
When is it necessary or convenient to use Spring or EJB3 or all of them together?
I don't have much experience with PostgreSQL, but all the web applications I've worked on have used a single connection per set of actions on a page, closing and disposing it when finished.
This allows the server to pool connections and stops problems such as the one that you are experiencing.
The singleton should be the JNDI connection pool itself; the Database class with getConnection(), query methods et al. should NOT be a singleton, but can be static if you prefer.
In this way the pool exists indefinitely and is available to all users, while query blocks use dataSource.getConnection() to draw a connection from the pool, execute the query, and then close the statement, result set, and connection (which returns it to the pool).
Also, a JNDI lookup is quite expensive, so it makes sense to perform it once and keep the result in a singleton.
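A minimal sketch of that arrangement, assuming a DataSource bound under the placeholder JNDI name jdbc/MyDS: the lookup is done once in a static initializer, and every query block just asks the cached pool for a connection.

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public final class Database
{
    private static final DataSource DATA_SOURCE;

    static
    {
        try
        {
            // Expensive JNDI lookup performed exactly once.
            DATA_SOURCE = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDS");
        }
        catch (NamingException e)
        {
            throw new ExceptionInInitializerError(e);
        }
    }

    private Database()
    {
        // Static utility; the only long-lived shared object is the pool behind DATA_SOURCE.
    }

    public static DataSource getDataSource()
    {
        return DATA_SOURCE;
    }
}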