Connection object with multiple threads using Dependency injection - multithreading

I have been trying to add a Parallel.ForEach to an existing application and ran into an odd issue.
Application Architecture
The Controller resolves the BO layer
The BO layer resolves the Service layer
The Service layer resolves the UOW and Repository layer
The UOW layer resolves the DB connection
BO Layer
private IUserService _userService;
public BOUser(IUserService userService) => _userService = userService;
public void AddUser(User user) => _userService.AddUser(user);
Service Layer
private IUnitOfWork _uow;
private IUserRepository _userRepo;

public UserService(IUnitOfWork uow, IUserRepository userRepo)
{
    _uow = uow;
    _userRepo = userRepo;
    _userRepo.UnitOfWork = uow;
}

public void AddUser(User user)
{
    _uow.BeginTransaction();
    _userRepo.Add(user);
    _uow.CommitTransaction();
}
Repo Layer
public IUnitOfWork UnitOfWork { get; set; }
public void Add(User user)
{
    UnitOfWork.Connection.Insert<User>(user, UnitOfWork.Transaction);
}
Unit of Work
public UnitOfWork(IDbConnection connection)
{
    Connection = connection; // responsible for creating new connection
}
This works fine as of today, but when I tried to add multiple users using Parallel.ForEach in the BO layer, one call failed and one succeeded.
The reason is that the service layer is created when the BO layer is created, so UOW.Connection remains a single instance for the whole process. When I used multiple threads, that one Connection object was shared between them, and the failure happened because one thread completed its work and closed the shared Connection object.
The solution I have in mind is to remove the UOW from the constructor and use the service locator pattern instead, i.e.:
IUserService userService = new UserService();
That would create a separate connection object per thread, but it's not the right way of doing it. Any expert opinion will help.

A Connection is not a thread-safe object, because the idea is to create a pool of connection objects and have each thread borrow a connection from the pool, do its work, and return it to the pool once done.
Thus no more than one thread can access a connection object at a time.
You should inject the connection pool, not the individual connection objects.
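One way to apply that advice in the layering above is to inject a factory that draws connections from the pool; here is a minimal sketch (the IDbConnectionFactory abstraction and its registration are assumptions, not part of the original code). The unit of work asks the factory for a connection when a transaction begins, so every thread that owns its own unit of work gets its own connection, and the ADO.NET provider's built-in pooling handles reuse of the physical connections.

using System;
using System.Data;

// Hypothetical abstraction: register this as a singleton and resolve the
// unit of work per operation/scope, rather than sharing one connection.
public interface IDbConnectionFactory
{
    IDbConnection OpenConnection();
}

// Sketch of the UnitOfWork reworked to own one connection per transaction.
public class UnitOfWork : IDisposable
{
    private readonly IDbConnectionFactory _connectionFactory;

    public UnitOfWork(IDbConnectionFactory connectionFactory)
        => _connectionFactory = connectionFactory;

    public IDbConnection Connection { get; private set; }
    public IDbTransaction Transaction { get; private set; }

    public void BeginTransaction()
    {
        // Each unit of work borrows its own connection instead of sharing a
        // single instance across threads; the provider pools the physical ones.
        Connection = _connectionFactory.OpenConnection();
        Transaction = Connection.BeginTransaction();
    }

    public void CommitTransaction()
    {
        Transaction.Commit();
        Connection.Close(); // returns the connection to the provider's pool
    }

    public void Dispose()
    {
        Transaction?.Dispose();
        Connection?.Dispose();
    }
}

With this shape, each Parallel.ForEach iteration should resolve (or create) its own unit of work and service scope instead of reusing the single instance built when the BO layer was constructed.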

Related

IMediatr with Autofac in Domain Objects DDD

I have set my Domain Model objects to be independent of any service and infrastructure logic.
I am also using Domain Events to react to some changes in Domain Models.
Now my problem is how to raise those events from the Domain Model objects themselves.
Currently I am using Udi Dahan's DomainEvents static class for this (I need events to be handled exactly when they happen and not at a later time).
The events are used for many things, like logging, updating the data in related services, other Domain Model objects and the DB, publishing messages to the MassTransit bus, etc.
The DomainEvents static class uses an Autofac scope that I inject into it at some point, to find the IMediatr instance and publish the events, like this:
public static class DomainEvents
{
    private static ILifetimeScope Scope;

    public static async Task RaiseAsync<TDomainEvent>(TDomainEvent @event) where TDomainEvent : IDomainEvent
    {
        var mediator = Scope?.Resolve<IMediatorBus>();
        if (mediator != null)
        {
            await mediator.Publish(@event).ConfigureAwait(false);
        }
        else
        {
            Debug.WriteLine("Mediator not set for DomainEvents!");
        }
    }

    public static void SetScope(ILifetimeScope scope)
    {
        Scope = scope;
    }
}
This all works OK in a single-threaded environment, but the method DomainEvents.SetScope() is a possible race condition in a multi-threaded environment.
I.e. when I introduce MassTransit and create message consumers, each message consumer will set the current LifetimeScope on DomainEvents via that method, and here is the problem: each consumer will overwrite the lifetime scope with a new one.
Why do I use a DomainEvents static class? Because I don't want to pollute my Domain Model Objects with infrastructure stuff.
I thought about making DomainEvents non-static (defining an interface), but then I would need it injected into every Domain Model Object; I'm still thinking about this, but maybe there is a better way.
I want to know if there is a better way to handle this.
Maybe some change in the DomainEvents class? Or maybe remove the DomainEvents static class and use an interface or a DomainService to do this.
The problem is I don't like static classes, but I also don't like pushing non-domain-specific dependencies into Domain Model Objects.
Please help.
UPDATE
To better clarify the process and what I use DomainEvents for...
I have a long-running process that can take from a few minutes to a few hours/days to complete.
So the process goes like this:
I receive a message from MassTransit, i.e. ProcessStartMessage(processId).
Get the ProcessData for (processId) from the DB.
Construct an in-memory Domain Model ProcessTracker (a singleton) and put all the data loaded from the DB into it (an in-memory cache).
I receive another message from MassTransit, i.e. ProcessStatusChanged(processId, data).
Forward this message data to the in-memory singleton ProcessTracker for processing.
The ProcessTracker processes the data.
For the ProcessTracker to be able to process this data, it instantiates many Domain Model Objects, each responsible for processing some part of the data. (Note there are NO more DB calls or entity hydration from the DB; it all happens in memory. Also, the Domain Model is not mapped to any entity and is not connected to any DB object.)
At some point I need to log what a Domain Model Object in the chain has done, whether its work has finished or started, whether it has reached some milestone, etc. This is done by raising DomainEvents. I also need to notify the GUI of those events, so they are used to send MassTransit messages too.
I.e. (pseudocode):
public class ProcessTracker
{
    private Step _currentStep;

    public void ProcessData(data)
    {
        _currentStep.ProcessData(data);
        DomainEvents.Raise(new ProcessTrackerDataProcessed());
        ...
    }
}

public class Step
{
    public Phase _currentPhase;

    public void ProcessData(data)
    {
        if (data.IsManual && _someOtherCondition())
        {
            DomainEvents.Raise(new StepDataEvent1());
            ...
        }
        if (data.CanTransition)
        {
            DomainEvents.Raise(new TransitionToNewPhase(this, data));
        }
        _currentPhase.DoSomeWork(data);
        DomainEvents.Raise(new StepDataProcessed(this, data));
        ...
    }
}
About DB updates: those are not transactional and not important to the process, and the Domain Model Object state is kept only in memory; if the process crashes, it MUST begin from the start (there is NO recovery).
To end the process:
I receive a ProcessEnd message from MassTransit.
The message data is forwarded to the ProcessTracker.
The ProcessTracker handles the data and computes the result of the process.
The result of the process is saved to the DB.
A message is sent to the other parties in the process, notifying them of the process completion.
Ask yourself first: what are you going to do when you raise an event from your domain model?
Normally it works like this:
Get a command
Load a domain object from a repository
Execute behaviour
(here probably) Raise an event
Persist the new domain object state
So, where would your extra domain event handlers fit? Are you going to execute some other database calls, send an email? Remember that it all happens now, before you have even persisted the changed state of your domain object. What if your persistence fails? That failure would happen after you have already executed all the domain handlers.
You should not execute more than one transaction when you handle a single command. The Aggregate pattern clearly tells you that the aggregate is the transaction boundary. You should raise domain events after you complete the transaction, or within the same technical transaction, but that transaction should only persist the aggregate state and the event. Domain event reactions potentially trigger transactions for other domain objects, and those should be done outside the scope of handling the current command.
The issue is not at all technical, it is a design problem.
If you use MassTransit, you can only make it (relatively) reliable if you handle the command in a message consumer. Then, you can use in-memory outbox, which will not send an event unless the consumer succeeded. It is still not guaranteed that the event will be published in case of the broker failure.
Unless you go to Event Sourcing, you have two 100% reliable options:
Use a transactional outbox pattern (NServiceBus has one and it's quite complex). It has limitations on what type of database you use.
Store the event to the same database as the domain object, in a different table, within the same transaction. Poll the table with DELETE INTO and emit events to the broker from there.
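As a rough sketch of the second option (the Outbox table, OutboxMessage type and plain ADO.NET usage here are illustrative assumptions, not part of the original answer): the serialized event is written in the same transaction that persists the aggregate, and a separate dispatcher later reads the rows, publishes them to the broker, and deletes them.

using System;
using System.Data;
using System.Text.Json;

// Hypothetical outbox row, stored in the same database as the aggregate.
public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Type { get; set; }
    public string Payload { get; set; }
    public DateTime OccurredOnUtc { get; set; } = DateTime.UtcNow;
}

public static class Outbox
{
    // Called inside the same IDbTransaction that persists the aggregate state,
    // so the state change and the event are committed (or rolled back) together.
    public static void Enqueue(IDbConnection connection, IDbTransaction transaction, object domainEvent)
    {
        var message = new OutboxMessage
        {
            Type = domainEvent.GetType().FullName,
            Payload = JsonSerializer.Serialize(domainEvent, domainEvent.GetType())
        };

        using var command = connection.CreateCommand();
        command.Transaction = transaction;
        command.CommandText =
            "INSERT INTO Outbox (Id, Type, Payload, OccurredOnUtc) " +
            "VALUES (@Id, @Type, @Payload, @OccurredOnUtc)";
        AddParameter(command, "@Id", message.Id);
        AddParameter(command, "@Type", message.Type);
        AddParameter(command, "@Payload", message.Payload);
        AddParameter(command, "@OccurredOnUtc", message.OccurredOnUtc);
        command.ExecuteNonQuery();
    }

    private static void AddParameter(IDbCommand command, string name, object value)
    {
        var parameter = command.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value;
        command.Parameters.Add(parameter);
    }
}

A background poller would then select a batch from the Outbox table, publish each message via MassTransit (or MediatR for in-process handlers), and delete the rows it has successfully emitted.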

How to reach the security context in a thread created with Async

I use @Async with a method that calls a remote service with Feign, and I need to append an OAuth2 token to the request; for that I use a RequestInterceptor.
@Bean
public RequestInterceptor requestTokenBearerInterceptor() {
    return requestTemplate -> {
        Object principal = SecurityContextHolder
                .getContext()
                .getAuthentication()
                .getPrincipal();
        if (!principal.equals("anonymousUser")) {
            OAuth2AuthenticationDetails details = (OAuth2AuthenticationDetails)
                    SecurityContextHolder.getContext().getAuthentication().getDetails();
            requestTemplate.header("Authorization", "bearer " + details.getTokenValue());
        }
    };
}
But when the RequestInterceptor is used in another thread, I don't have access to the same security context, so getAuthentication() returns null.
I tried to fix it in the executor configuration: I created a DelegatingSecurityContextExecutor wrapping the executor and the security context. But it seems the bean is created in the 'main' thread, and that security context is not the one used later when a RestController method is executed, so getAuthentication() still returns null.
@Bean(name = "asyncExecutor")
public Executor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(3);
    executor.setMaxPoolSize(3);
    executor.setQueueCapacity(100);
    executor.setThreadNamePrefix("AsynchThread-");
    executor.initialize();
    Executor wrappedExecutor = new DelegatingSecurityContextExecutor(executor, SecurityContextHolder.getContext());
    return wrappedExecutor;
}
How can I configure the executor the right way?
I finally found the solution: it is possible to propagate the security context automatically to the other threads.
Just add this line of code in the static main method of your Spring Boot application:
SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
The solution is well explained here: https://www.baeldung.com/spring-security-async-principal-propagation
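For context, that call goes at the very start of a standard Spring Boot entry point, before the context is started (a minimal sketch; the Application class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.security.core.context.SecurityContextHolder;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        // Child threads inherit the parent's SecurityContext from now on.
        SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
        SpringApplication.run(Application.class, args);
    }
}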
!! WARNING !!: I noticed an unexpected behaviour with that solution, at least on my local dev environment. I'm connected to my app with two different accounts using the Sessionbox tool in Chrome (same result with different browsers), and it seems that when I'm connected as user A, SecurityContextHolder.getContext().getAuthentication().getPrincipal() returns the security context of user B... so this is a huge security problem! I'm looking for a solution at the moment.
Reading this post: How to set up Spring Security SecurityContextHolder strategy?, the solution seems to be here: Spring Security and @Async (Authenticated Users mixed up).
It seems to me you cannot use a RequestInterceptor here. As far as I know, when you use @Async you lose the request context in the method you want to execute asynchronously. Instead you have to explicitly pass the access token to the asynchronous method and provide it as a request header:
@FeignClient(name = "userClient", url = "${userService.hostname}")
public interface MyFeignClient {
    String AUTH_TOKEN = "Authorization";

    @GetMapping("/users")
    List<User> findUsers(@RequestHeader(AUTH_TOKEN) String bearerToken);
}
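For completeness, here is a rough sketch of the calling side under that approach (the AsyncUserLookup and UserController names are illustrative, not from the original post, and it assumes @EnableAsync is configured with the asyncExecutor above): the token is read from the SecurityContext on the request thread, before crossing the @Async boundary, and handed to the Feign client as a plain argument.

import java.util.List;
import java.util.concurrent.CompletableFuture;

import org.springframework.scheduling.annotation.Async;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.oauth2.provider.authentication.OAuth2AuthenticationDetails;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative async wrapper around the Feign client; nothing here touches
// the SecurityContext, so it is safe on the executor's threads.
@Service
public class AsyncUserLookup {

    private final MyFeignClient myFeignClient;

    public AsyncUserLookup(MyFeignClient myFeignClient) {
        this.myFeignClient = myFeignClient;
    }

    @Async("asyncExecutor")
    public CompletableFuture<List<User>> findUsers(String bearerToken) {
        return CompletableFuture.completedFuture(myFeignClient.findUsers(bearerToken));
    }
}

// The token is resolved on the request thread, where the SecurityContext exists,
// and handed to the async method as a plain argument.
@RestController
public class UserController {

    private final AsyncUserLookup asyncUserLookup;

    public UserController(AsyncUserLookup asyncUserLookup) {
        this.asyncUserLookup = asyncUserLookup;
    }

    @GetMapping("/users")
    public CompletableFuture<List<User>> users() {
        OAuth2AuthenticationDetails details = (OAuth2AuthenticationDetails)
                SecurityContextHolder.getContext().getAuthentication().getDetails();
        return asyncUserLookup.findUsers("bearer " + details.getTokenValue());
    }
}

With this variant the executor from the question can stay a plain ThreadPoolTaskExecutor, since nothing running on its threads reads the SecurityContext.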

Understanding asynchronous web processing

I've just finished reading up on asynchronous WebServlet processing. This article is a good read.
However, fundamentally I'm confused about why this method is the "next generation of web processing" and why it is used at all. It seems we are avoiding properly configuring our Web Application Servers (WAS) - nginx, Apache, Tomcat, IIS - and instead putting the problem onto the web developer.
Before I dive into my reasoning, I want to briefly explain how Requests are accepted and then handled by a WAS.
NETWORK <-> OS -> QUEUE <- WEB APPLICATION SERVER (WAS) <-> WEB APPLICATION (APP)
A Web Application Server (WAS) tells the Operating System (OS) that it wants to receive Requests on a specific Port, e.g. Port 80 for HTTP.
The OS opens a Listener on the Port (if it's free) and waits for Clients to connect.
When the OS receives a Connection, it adds it to a Queue assigned to the WAS (if there is space, otherwise the Client's Connection is rejected); the size of the Queue is defined by the WAS when it requests the Port.
The WAS monitors the Queue for Connections and when a Connection is available, accepts the Connection for processing - removing it from the Queue.
The WAS passes the Connection on to the Web Application for processing - it could also handle the process itself if programmed to.
The WAS can handle multiple Connections at the same time by using multiple Processors (normally one per CPU core), each with multiple Threads.
So this now brings me to my query. If the amount of Requests the WAS can handle depends on the speed at which it can process the Queue, which is down to the number of Processors/Threads assigned to the WAS, why do we create an async method inside our APP to offload the Request from the WAS to another Thread not belonging to the WAS instead of just increasing the number of Threads available to the WAS?
If you consider the (not so) new Web Sockets that are popping up, when a Web Socket makes a connection to a WAS, a Thread is assigned to that Connection which is held open so Client and WAS can have continual communication. This Thread is ultimately a Thread on the WAS - meaning it is taking up Server resources - whether belonging to the WAS or independent of it (depending on APP design).
However, instead of creating an independent Thread not belonging to the WAS, why not just increase the number of Threads available to the WAS? Ultimately, the number of Threads you can have is down to the resources - MEMORY, CPU - available on the Server. Or is it a case that by offloading the Connection to a new Thread, you simply don't need to think about how many Threads to assign to the WAS (which seems dangerous because now you can use up Server resources without proper monitoring). It just seems as if a problem is being passed down to the APP - and thus the Developer - instead of being managed at the WAS.
Or am I simply misunderstanding how a Web Application Server works?
Putting it into a simple Web Application Server example. The following offloads the incoming Connection straight to a Thread. I am not limiting the number of Threads that can be created, however I am limited to the number of Open Connections allowed on my macbook. I have also noticed that if the backlog (the second number in the ServerSocket, currently 50) is set too small, I start receiving Broken Pipes and Connection Resets on the Client side.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 50)) {
            while (true) {
                new Run(listener.accept()).start();
            }
        }
    }

    static class Run extends Thread {
        private Socket socket;

        Run(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
And now using the asynchronous version, you are just passing the work from one Thread on to another Thread. You are still limited by system resources - allowed number of open files, connections, memory, CPU, etc.
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Server {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9090, 100)) {
            while (true) {
                new Synchronous(listener.accept()).start();
            }
        }
    }

    // assumed Synchronous but really it's a Thread from the WAS,
    // so it is already asynchronous when it enters this class
    static class Synchronous extends Thread {
        private Socket socket;

        Synchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            System.out.println("Passing Socket to Asynchronous " + getName());
            new Asynchronous(this.socket).start();
        }
    }

    static class Asynchronous extends Thread {
        private Socket socket;

        Asynchronous(Socket socket) {
            this.socket = socket;
        }

        @Override
        public void run() {
            try {
                System.out.println("Processing Thread " + getName());
                PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
                out.println(new Date().toString());
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    this.socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Looking at this blog about Netflix, 'tuning-tomcat-for-a-high-throughput', it looks like Tomcat does the same as my first code example, so asynchronous processing in the application shouldn't be necessary.
Tomcat by default has two properties that affect load: acceptCount, which defines the maximum Queue size (default: 100), and maxThreads, which defines the maximum number of simultaneous request processing threads (default: 200). There is also maxConnections, but I'm not sure of its purpose when maxThreads is defined. You can read about them at Tomcat Config.
Late, but maybe better than never. :)
I don't have a great answer to "why async servlets?" but I think there is another bit of information which would be helpful to you.
What you are describing for the WAS is what Tomcat used to do in its BIO connector. It was basically a thread-per-connection model. This limits the number of requests you can serve not just because of the maxThreads setting, but also because the worker thread would potentially continue to be tied up waiting for additional requests on the connection if a Connection:Close wasn't sent. (See https://www.javaworld.com/article/2077995/java-concurrency/java-concurrency-asynchronous-processing-support-in-servlet-3-0.html and What is the difference between Tomcat's BIO Connector and NIO Connector?)
Switching to the NIO connector allows Tomcat to maintain thousands of connections while using only a small pool of worker threads.
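To make the contrast concrete, here is a minimal selector-based sketch in the same spirit as the ServerSocket examples above (it is not how Tomcat's NIO connector is implemented, just an illustration of the non-blocking model): one thread multiplexes many connections instead of dedicating a thread to each.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.Iterator;

public class NioServer {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090), 100);
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A single thread services every connection that is ready to progress.
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_WRITE);
                    }
                } else if (key.isWritable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    client.write(ByteBuffer.wrap(
                            (new Date().toString() + "\n").getBytes(StandardCharsets.UTF_8)));
                    client.close(); // one-shot response, mirroring the examples above
                }
            }
        }
    }
}

A real connector also handles reads, partial writes and hands the servlet work to a small worker pool; the point is only that idle or waiting connections no longer pin a thread each.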

ServiceStack ORMLite Bug

Is there anywhere to report bugs/request features in ServiceStack?
While using ServiceStack, my ServiceStack.ServiceInterface.Service object was throwing this error:
ExecuteReader requires an open and available Connection. The connection's current state is closed.
The Service class includes a Db property (used in examples), which is an IDbConnection; DB connections are not thread-safe.
I'm interested to know why this non-thread-safe way of accessing a database is included in the Service class. It's no good for servicing multiple web service requests.
Service.cs will try to resolve an IDbConnectionFactory that will create a new IDbConnection for you, so there isn't a thread safety issue here.
If you'd like to handle it differently, you can override it.
private IDbConnection db;

public virtual IDbConnection Db
{
    get { return db ?? (db = TryResolve<IDbConnectionFactory>().OpenDbConnection()); }
}
Source:
https://github.com/ServiceStack/ServiceStack/blob/ada0f43012610dc9ee9ae863e77dfa36b7abea28/src/ServiceStack/Service.cs#L68
Edit:
Maybe it's not clear that OrmLiteConnectionFactories automatically create a new connection in conjunction with an OpenDbConnection call, but they do:
Source:
https://github.com/ServiceStack/ServiceStack.OrmLite/blob/db40347532a14441eba32e575bcf07f3b2f45cef/src/ServiceStack.OrmLite/OrmLiteConnectionFactory.cs#L72

Hibernate Session Threading

I have a problem regarding Hibernate and lazy loading.
Background:
I have a Spring MVC web app, I use Hibernate for my persistence layer. I'm using OpenSessionInViewFilter to enable me to lazy load entities in my view layer. And I'm extending the HibernateDaoSupport classes and using HibernateTemplate to save/load objects. Everything has been working quite well. Up until now.
The Problem:
I have a task which can be started via a web request. When the request is routed to a controller, the controller will create a new Runnable for this task and start the thread to run the task. So the original thread will return and the Hibernate session which was put in ThreadLocal (by OpenSessionInViewFilter) is not available to the new thread for the Task. So when the task does some database stuff I get the infamous LazyInitializationException.
Can any one suggest the best way I can make a Hibernate session available to the Task?
Thanks for reading.
Make your Runnable a Spring bean and add the @Transactional annotation over run(). Be warned, though, that this asynchronous task won't run in the same transaction as your web request.
And please don't start a new thread; use a pool/executor.
Here is a working example on how to use the Hibernate session inside a Runnable:
@Service
@Transactional
public class ScheduleService {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private ThreadPoolTaskScheduler scheduler;

    public void doSomething() {
        ScheduledFuture sf = scheduler.schedule(new Runnable() {
            @Override
            public void run() {
                SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(scheduler);
                final Session session = sessionFactory.openSession();
                // Now you can use the session
            }
        }, new CronTrigger("25 8 * * * *"));
    }
}
SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext() takes a reference to any Spring managed bean, so the scheduler itself is fine. Any other Spring managed bean would work as well.
Do I understand correctly that you want to perform some action in a completely dedicated background thread? In that case, I recommend not relying on Hibernate's OpenSessionInViewFilter and its session logic for that thread at all, because, as you correctly noted, it runs in a decoupled thread and therefore cannot use the session and data loaded in the original thread (i.e. the one that dealt with the initial HttpRequest). I think it would be wise to open and close the session yourself within that thread.
Otherwise, you might question why you are running that operation in a separate thread at all. Maybe it is sufficient to run the operation normally and present the user with some 'loading' screen in the meantime?
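A rough sketch of the "open and close the session yourself" suggestion above (the task name is illustrative, and it assumes the SessionFactory is handed to the task when it is created):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Illustrative background task that manages its own session and transaction
// instead of relying on the filter's thread-bound session.
public class BackgroundTask implements Runnable {

    private final SessionFactory sessionFactory;

    public BackgroundTask(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public void run() {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // Do the database work with this session; lazy loading works here
            // because the session stays open for the lifetime of the task.
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}

The session is opened and closed entirely inside the task, so it never depends on the request thread's ThreadLocal session.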
