Service Fabric runtime is not reclaiming unused memory from Actor Service instance

In our app, each request creates one actor in a partition. Processing a single request increases memory by 200-250 MB during the actor's execution, and after execution finishes we delete the actor:
Task.Run(() =>
{
    actorObject.ExecuteGrainAsync(requestId, jsonModel).ContinueWith(async (t) =>
    {
        await GrainFactory.DeleteActor(actorObject.GetActorId(), "Workflow", CancellationToken.None);
    });
});
IActorService myActorServiceProxy = ActorServiceProxy.Create(
    new Uri($"fabric:/APPSeConnect.WebAgent/{actorName}"), actorId);
await myActorServiceProxy.DeleteActorAsync(actorId, cancellationToken);
Querying the fabric confirms there are no active actors left in the partition, yet the memory stays claimed for many minutes (anywhere from 5 to more than 50). Afterwards the process's memory footprint decreases by only a few MB.
According to this document, memory should be reclaimed by the runtime after the idle timeout. My setting is
new ActorGarbageCollectionSettings(10, 2)
I am also using this attribute:
[StatePersistence(StatePersistence.None)]
During parallel processing of actors its impact is huge.
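For reference, these settings only take effect if they are passed to the actor service at registration time. A minimal sketch, assuming a Reliable Actors project where WorkflowActor is a placeholder for your actor type:

ActorRuntime.RegisterActorAsync<WorkflowActor>(
    (context, actorType) => new ActorService(
        context,
        actorType,
        settings: new ActorServiceSettings
        {
            // idle timeout 10 s, scan interval 2 s - matches the settings above
            ActorGarbageCollectionSettings = new ActorGarbageCollectionSettings(10, 2)
        })).GetAwaiter().GetResult();

Note that actor garbage collection only deactivates the actor object; it does not force the .NET GC to return memory to the OS.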

We have been facing a similar issue, where the Actor Service does not release its claimed memory within 15 minutes.
The test case looks like this:
We request an actor and wait for it to complete its execution.
The claimed memory is in the hundreds of MBs.
After execution completes, the actor is programmatically deleted using the DeleteActorAsync API, as you described.
The garbage collection settings are likewise set: ActorGarbageCollectionSettings = new ActorGarbageCollectionSettings(10, 5).
But in our case the memory is only fully released after at least 30 minutes, which makes us think it is probably due to the generational behavior of the GC.
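One way to test the GC-generation hypothesis (an assumption, not a confirmed cause) is to force a full, compacting collection after the actor is deleted and watch the process working set; this is diagnostic only, not something to ship:

using System;
using System.Runtime;

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(2, GCCollectionMode.Forced, blocking: true, compacting: true);
GC.WaitForPendingFinalizers();
Console.WriteLine($"Working set after full GC: {Environment.WorkingSet / (1024 * 1024)} MB");

If the working set drops immediately after this, the memory was simply waiting for a gen-2 collection; if not, something is still holding references to it.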

Related

Appropriate solution for long running computations in Azure App Service and .NET Core 3.1?

What is an appropriate solution for long-running computations in Azure App Service and .NET Core 3.1, in an application that has no need for a database and no I/O to anything outside the application? It is a computation task.
Specifically, the following is unreliable and needs a solution:
[Route("service")]
[HttpPost]
public Outbound Post(Inbound inbound)
{
Debug.Assert(inbound.Message.Equals("Hello server."));
Outbound outbound = new Outbound();
long Billion = 1000000000;
for (long i = 0; i < 33 * Billion; i++) // 230 seconds
;
outbound.Message = String.Format("The server processed inbound object.");
return outbound;
}
This sometimes returns a null object to the HttpClient (not shown). A smaller workload always succeeds; for example, 3 billion iterations always succeeds. A bigger number would be nice; specifically, 240 billion is a requirement.
I think in the year 2020 a reasonable goal in Azure App Service with .NET Core might be to have a parent thread count to 240 billion with the help of 8 child threads, so each child counts to 30 billion, with the parent dividing an 8 MB inbound object into smaller objects inbound to each child. Each child receives a 1 MB inbound and returns a 1 MB outbound to the parent. The parent reassembles the result into an 8 MB outbound.
Obviously the elapsed time would be one-eighth (12.5%) of what a single-threaded implementation needs. The time to cut up and reassemble objects is small compared to the computation time, and I am assuming the time to transmit the objects is also very small, so the 12.5% expectation should be roughly accurate.
If I can get 4 or 8 cores, that would be good. If I can get threads that give me, say, 50% of the cycles of a core, then I would need maybe 8 or 16 threads. If each thread gives me 33% of the cycles of a core, then I would need 12 or 24 threads.
I am considering the BackgroundService class, but I am looking for confirmation that this is the correct approach. Microsoft says:
BackgroundService is a base class for implementing a long running IHostedService.
Obviously, if something is long-running it would be better to make it finish sooner by using multiple cores via System.Threading, but this documentation mentions System.Threading only in the context of starting tasks via System.Threading.Timer. My example code shows there is no timer needed in my application; an HTTP POST serves as the occasion to do work. Typically I would use System.Threading.Thread to instantiate multiple objects and use multiple cores. I find the absence of any mention of multiple cores a glaring omission in the context of a solution for work that takes a long time, but maybe there is some reason Azure App Service doesn't deal with this matter. Perhaps I am just not able to find it in the tutorials and documentation.
The initiation of the task is the illustrated HTTP POST controller. Suppose the longest job takes 10 minutes. The HTTP client (not shown) sets the timeout limit to 1000 seconds, which is much more than 10 minutes (600 seconds), to leave a margin of safety; HttpClient.Timeout is the relevant property. For the moment I am presuming the HTTP timeout is a real limit, rather than some non-binding (fake) limit where some other constraint causes the user to wait 9 minutes and then receive an error message. A real, binding limit is one for which I can say "but for this timeout it would have succeeded". If the HTTP timeout is not the real binding limit and something else constrains the system, I can adjust my HTTP controller to instead have three (3) POST methods: POST1 would mean start a task with the inbound object, POST2 would mean tell me if it is finished, and POST3 would mean give me the outbound object.
Prologue
A few years ago I ran into a pretty similar problem. We needed a service that could process large amounts of data. Sometimes the processing would take 10 seconds; other times it could take an hour.
At first we did it the way your question illustrates: send a request to the service, the service processes the data from the request, and it returns the response when finished.
Issues At Hand
This was fine when the job took around a minute or less, but anything above that and the server would shut down the session and the caller would report an error.
Servers have a default of around 2 minutes to produce a response before giving up on the request. The server doesn't quit processing the request, but it does quit the HTTP session. It doesn't matter what parameters you set on your HttpClient; the server is the one that dictates how long is too long.
Reasons For Issues
All this is for good reason. Server sockets are extremely expensive, and you have a finite amount to go around. The server is trying to protect your service by severing requests that take longer than a specified time, in order to avoid socket starvation.
Typically you want your HTTP requests to take only a few milliseconds. If they take longer than that, you will eventually run into socket issues if your service has to fulfil other requests at a high rate.
Solution
We decided to go the route of IHostedService, specifically the BackgroundService. We use this service in conjunction with a queue. This way you can set up a queue of jobs, and the BackgroundService will process them one at a time (in some instances we have the service processing multiple queue items at once; in others we scaled horizontally, producing two or more queues).
Why an ASP.NET Core service running a BackgroundService? I wanted to handle this without tight coupling to any Azure-specific constructs, in case we needed to move out of Azure to some other cloud service (back in the day we were contemplating this for other reasons we had at the time).
This has worked out quite well for us and we haven't seen any issues since.
The workflow goes like this:
1. The caller sends a request to the service with some parameters.
2. The service generates a "job" object and returns its ID immediately via a 202 (Accepted) response.
3. The service places the job into a queue that is maintained by a BackgroundService.
4. The caller can query the job status and get information about how much has been done and how much is left using this job ID.
5. The service finishes the job, puts it into a "completed" state, and goes back to waiting on the queue for more jobs.
Keep in mind that your service can scale horizontally, with more than one instance running. In this case I am using Redis Cache to store the state of the jobs, so that all instances share the same state.
I also added a "Memory Cache" option for testing things locally if you don't have a Redis Cache available. You could run the "Memory Cache" service on a server; just know that if it scales out, your data will be inconsistent.
Example
Since I'm married with kids, I really don't do much on Friday nights after everyone goes to bed, so I spent some time putting together an example. The full solution is also available for you to try out.
QueuedBackgroundService.cs
This class implementation serves two specific purposes: One is to read from the queue (the BackgroundService implementation), the other is to write to the queue (the IQueuedBackgroundService implementation).
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public interface IQueuedBackgroundService
{
    Task<JobCreatedModel> PostWorkItemAsync(JobParametersModel jobParameters);
}

public sealed class QueuedBackgroundService : BackgroundService, IQueuedBackgroundService
{
    private sealed class JobQueueItem
    {
        public string JobId { get; set; }
        public JobParametersModel JobParameters { get; set; }
    }

    private readonly IComputationWorkService _workService;
    private readonly IComputationJobStatusService _jobStatusService;

    // Shared between BackgroundService and IQueuedBackgroundService.
    // The queueing mechanism could be moved out to a singleton service. I am doing
    // it this way for simplicity's sake.
    private static readonly ConcurrentQueue<JobQueueItem> _queue =
        new ConcurrentQueue<JobQueueItem>();
    private static readonly SemaphoreSlim _signal = new SemaphoreSlim(0);

    public QueuedBackgroundService(IComputationWorkService workService,
        IComputationJobStatusService jobStatusService)
    {
        _workService = workService;
        _jobStatusService = jobStatusService;
    }

    /// <summary>
    /// Transient method via IQueuedBackgroundService
    /// </summary>
    public async Task<JobCreatedModel> PostWorkItemAsync(JobParametersModel jobParameters)
    {
        var jobId = await _jobStatusService.CreateJobAsync(jobParameters).ConfigureAwait(false);
        _queue.Enqueue(new JobQueueItem { JobId = jobId, JobParameters = jobParameters });
        _signal.Release(); // signal the background service to start working on the job
        return new JobCreatedModel { JobId = jobId, QueuePosition = _queue.Count };
    }

    /// <summary>
    /// Long-running task via BackgroundService
    /// </summary>
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            JobQueueItem jobQueueItem = null;
            try
            {
                // wait for the queue to signal there is something that needs to be done
                await _signal.WaitAsync(stoppingToken).ConfigureAwait(false);

                // dequeue the item
                jobQueueItem = _queue.TryDequeue(out var workItem) ? workItem : null;
                if (jobQueueItem != null)
                {
                    // put the job into a "processing" state
                    await _jobStatusService.UpdateJobStatusAsync(
                        jobQueueItem.JobId, JobStatus.Processing).ConfigureAwait(false);

                    // the heavy lifting is done here...
                    var result = await _workService.DoWorkAsync(
                        jobQueueItem.JobId, jobQueueItem.JobParameters,
                        stoppingToken).ConfigureAwait(false);

                    // store the result of the work and set the status to "finished"
                    await _jobStatusService.StoreJobResultAsync(
                        jobQueueItem.JobId, result, JobStatus.Success).ConfigureAwait(false);
                }
            }
            catch (OperationCanceledException)
            {
                // host is shutting down; WaitAsync throws OperationCanceledException,
                // the base class of TaskCanceledException
                break;
            }
            catch (Exception ex)
            {
                try
                {
                    if (jobQueueItem != null)
                    {
                        // something went wrong. Put the job into an errored state and continue on
                        await _jobStatusService.StoreJobResultAsync(jobQueueItem.JobId, new JobResultModel
                        {
                            Exception = new JobExceptionModel(ex)
                        }, JobStatus.Errored).ConfigureAwait(false);
                    }
                }
                catch (Exception)
                {
                    // TODO: log this
                }
            }
        }
    }
}
It is registered like so:
services.AddHostedService<QueuedBackgroundService>();
services.AddTransient<IQueuedBackgroundService, QueuedBackgroundService>();
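The example leans on two services that are not shown; here is a hedged sketch of their surface, with signatures inferred from how they are called in QueuedBackgroundService and the controller below:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Inferred from usage in this answer - a sketch, not the author's exact contract.
public interface IComputationJobStatusService
{
    Task<string> CreateJobAsync(JobParametersModel jobParameters);
    Task UpdateJobStatusAsync(string jobId, JobStatus status);
    Task StoreJobResultAsync(string jobId, JobResultModel result, JobStatus status);
    Task<JobModel> GetJobAsync(string jobId);
    Task<IReadOnlyDictionary<string, JobModel>> GetAllJobsAsync();
    Task ClearAllJobsAsync();
}

public interface IComputationWorkService
{
    Task<JobResultModel> DoWorkAsync(string jobId, JobParametersModel jobParameters,
        CancellationToken cancellationToken);
}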
ComputationController.cs
The controller used to read/write jobs looks like this:
[ApiController, Route("api/[controller]")]
public class ComputationController : ControllerBase
{
private readonly IQueuedBackgroundService _queuedBackgroundService;
private readonly IComputationJobStatusService _computationJobStatusService;
public ComputationController(
IQueuedBackgroundService queuedBackgroundService,
IComputationJobStatusService computationJobStatusService)
{
_queuedBackgroundService = queuedBackgroundService;
_computationJobStatusService = computationJobStatusService;
}
[HttpPost, Route("beginComputation")]
[ProducesResponseType(StatusCodes.Status202Accepted, Type = typeof(JobCreatedModel))]
public async Task<IActionResult> BeginComputation([FromBody] JobParametersModel obj)
{
return Accepted(
await _queuedBackgroundService.PostWorkItemAsync(obj).ConfigureAwait(false));
}
[HttpGet, Route("computationStatus/{jobId}")]
[ProducesResponseType(StatusCodes.Status200OK, Type = typeof(JobModel))]
[ProducesResponseType(StatusCodes.Status404NotFound, Type = typeof(string))]
public async Task<IActionResult> GetComputationResultAsync(string jobId)
{
var job = await _computationJobStatusService.GetJobAsync(jobId).ConfigureAwait(false);
if(job != null)
{
return Ok(job);
}
return NotFound($"Job with ID `{jobId}` not found");
}
[HttpGet, Route("getAllJobs")]
[ProducesResponseType(StatusCodes.Status200OK,
Type = typeof(IReadOnlyDictionary<string, JobModel>))]
public async Task<IActionResult> GetAllJobsAsync()
{
return Ok(await _computationJobStatusService.GetAllJobsAsync().ConfigureAwait(false));
}
[HttpDelete, Route("clearAllJobs")]
[ProducesResponseType(StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status401Unauthorized)]
public async Task<IActionResult> ClearAllJobsAsync([FromQuery] string permission)
{
if(permission == "this is flakey security so this can be run as a public demo")
{
await _computationJobStatusService.ClearAllJobsAsync().ConfigureAwait(false);
return Ok();
}
return Unauthorized();
}
}
Working Example
For as long as this question is active, I will maintain a working example you can try out. For this specific example, you can specify how many iterations you would like to run. To simulate long-running work, each iteration is 1 second. So, if you set the iteration value to 60, it will run that job for 60 seconds.
While it's running, hit the computationStatus/{jobId} or getAllJobs endpoint; you can watch all the jobs update in real time.
This example is far from a fully-functioning-covering-all-edge-cases-full-blown-ready-for-production example, but it's a good start.
Conclusion
After a few years of working in the back-end, I have seen a lot of issues arise from not knowing all the "rules" of the back-end. Hopefully this answer sheds some light on issues I had in the past, and hopefully it saves you from having to deal with them.
One option could be to try Azure Durable Functions, which are oriented toward long-running jobs that warrant checkpoints and state, as opposed to attempting to finish within the context of the triggering request. They also have the concept of fan-out/fan-in, in case what you're describing can be divided into smaller jobs with an aggregated result; a sketch follows.
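For illustration, a minimal fan-out/fan-in orchestrator sketch, assuming Durable Functions 2.x; SplitWork, ProcessChunk, and WorkChunk are hypothetical names standing in for your own split and compute steps:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ComputeOrchestrator
{
    [FunctionName("ComputeOrchestrator")]
    public static async Task<long> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Fan out: one activity invocation per chunk of the inbound object.
        var chunks = await context.CallActivityAsync<List<WorkChunk>>("SplitWork", null);
        var tasks = chunks.Select(chunk => context.CallActivityAsync<long>("ProcessChunk", chunk));

        // Fan in: wait for all partial results and aggregate them.
        long[] partials = await Task.WhenAll(tasks);
        return partials.Sum();
    }
}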
If just raw compute is the goal, Azure Batch might be a better option since it facilitates that scaling.
I assume the actual work to be done is something other than iterating over a loop doing nothing, so in terms of possible parallelization I can't offer much help right now. Is the work CPU-intensive or I/O-related?
When it comes to long-running work in an Azure App Service, one option is to use a WebJob. A possible solution would be to post the request for computation to a queue (a Storage Queue or an Azure Service Bus queue). The WebJob then processes those messages and possibly puts a new message on another queue that the requester can use to pick up the results.
If the time needed for processing is guaranteed to be less than 10 minutes, you could replace the WebJob with a queue-triggered Azure Function, a serverless offering on Azure with great scaling possibilities; see the sketch below.
Another option is indeed using a Worker Service or an IHostedService implementation and doing the queue processing there.
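A sketch of such a queue-triggered function; the queue names and the ComputationRequest/ComputationResult types are assumptions, and the bindings come from the WebJobs SDK:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ProcessComputation
{
    [FunctionName("ProcessComputation")]
    public static async Task Run(
        [QueueTrigger("computation-requests")] ComputationRequest request,
        [Queue("computation-results")] IAsyncCollector<ComputationResult> results)
    {
        // The heavy computation runs here, outside any HTTP request context.
        var result = await ComputeAsync(request); // hypothetical worker method
        await results.AddAsync(result);           // the requester consumes this queue
    }
}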
Since you're saying that your computation succeeds at fewer iterations, a simple solution is to save your results periodically and resume the computation.
For example, say you need to perform 240 billion iterations and you know the highest number of iterations you can perform reliably is 3 billion. I would set up the following:
A slave, which actually performs the task (the 240 billion iterations).
A master, which periodically receives progress reports from the slave.
The slave can periodically send a message to the master (say, once every 2 billion iterations?). This message would contain whatever is needed to resume the computation should it be interrupted.
The master should keep track of the slave. If the master determines that the slave has died/crashed, it should simply create a new slave, which should resume computation from the last reported position.
How exactly you implement the master and the slave is a matter of personal preference; a minimal sketch of the checkpointing idea follows.
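A minimal sketch of the checkpointing idea, assuming a file-based checkpoint and a single worker; DoUnitOfWork and checkpointPath are placeholders:

using System.IO;

// Resume from the last checkpoint if one exists, otherwise start at zero.
long start = File.Exists(checkpointPath)
    ? long.Parse(File.ReadAllText(checkpointPath))
    : 0;

for (long i = start; i < 240_000_000_000; i++)
{
    DoUnitOfWork(i); // hypothetical unit of computation

    // Persist progress periodically so a crashed worker can be restarted
    // from the last reported position instead of from zero.
    if (i % 2_000_000_000 == 0)
        File.WriteAllText(checkpointPath, i.ToString());
}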
Rather than have a single loop perform 240 billion iterations, if you can split your computation across nodes, I would try to simultaneously compute the solution in parallel across as many nodes as possible.
I personally use Node.js for multicore projects. Although you are using ASP.NET, I include these Node.js references to illustrate an architecture that has worked for me:
Node.js on multi-core machines
https://dzone.com/articles/multicore-programming-in-nodejs
As Noah Stahl has mentioned in his answer, Azure Durable Functions and Azure Batch seem like options to help you achieve your goal on your platform. Please see his answer for more details.
The standard answer is to use asynchronous messaging. I have a blog series on the topic. This is particularly the case since you're already in Azure.
You already have an Azure App Service web app, but now you want to run code outside of a request: "request-extrinsic code". The proper way to run that code is in a separate process; Azure Functions or Azure WebJobs are a good match for Azure web apps.
First, you want a durable queue. Azure Storage Queues are a good fit since you're in Azure anyway. Then your web API can just write a message into the queue and return. The important part is that this is a durable queue, not an in-memory queue.
Meanwhile, the Azure Function / WebJob processes that queue: it picks up the work from the queue and executes it. A sketch of the enqueuing side follows.
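A sketch of the web API side, assuming the Azure.Storage.Queues package and a hypothetical computation-requests queue:

using System;
using System.Text.Json;
using Azure.Storage.Queues;

// In the controller: enqueue the work and return immediately.
var jobId = Guid.NewGuid().ToString(); // correlation ID the caller will poll with
var queue = new QueueClient(connectionString, "computation-requests");
await queue.CreateIfNotExistsAsync();
await queue.SendMessageAsync(JsonSerializer.Serialize(new { jobId, inbound }));
return Accepted(new { jobId });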
The final piece of the puzzle is the completion notification. This is a pretty common approach:
I can adjust my HTTP controller to instead have three (3) POST methods. Thus POST1 would mean start a task with the inbound object. POST2 means tell me if it is finished. POST3 means give me the outbound object.
To do this, your background processor should save the "in-progress" / "complete/result" state somewhere the web API process can access it. If you already have a shared database (and it makes sense to keep the results), that may be the easiest choice. I would also consider Azure Cosmos DB, which has a nice time-to-live setting, so the background service can insert results that are "good for 24 hours" or whatever, after which they're automatically cleaned up.

Maximum number of tasks supported in AUTOSAR

What is the maximum number of tasks supported in AUTOSAR compliant systems?
In Linux, I can check the maximum number of process IDs supported to get the maximum number of tasks.
However, I couldn't find any source that states the maximum number of tasks supported by AUTOSAR.
Thank you very much for your help!
Well, we are still in an embedded automotive world and not on a PC.
There is usually a tradeoff between the number of tasks you have, what it takes to schedule them, and what RAM/ROM and runtime resources your configuration uses.
As already said, if you just need a simple timed loop with some interrupts in between, one task may be enough.
It might also be enough to have, e.g., 3 tasks running at 5 ms, 10 ms, and 20 ms cycles. But in simple cases like this you could also schedule everything with a single 5 ms task:
TASK(TASK_5ms)
{
    static uint8 cnt = 0;
    cnt++;
    // XXX and YYY MainFunctions shall only be called every 10ms,
    // but load-balance them: instead of running 3 functions every 10ms
    // and 1 every 5ms, run only two every 5ms
    if (cnt & 1)
    {
        XXX_Mainfunction_10ms();
    }
    else
    {
        YYY_Mainfunction_10ms();
    }
    ZZZ_Mainfunction_5ms();
    TerminateTask(); // a basic OSEK task must terminate itself
}
So, if you need something to be run every 5, 10 or 20ms, you put these runnables into the corresponding tasks.
The old OSEK also had a notion of BASIC vs. EXTENDED tasks, where only extended tasks were able to react to OsEvents. These tasks might not run cyclically, but only on configured OsEvents: the task has an OS wait point, where it is more or less stopped and only woken up by the OS on the arrival of an event. There are also OsAlarms, which can either directly trigger the activation of an OsTask, or indirectly set an OsEvent, so you could, for example, wait at the same wait point on both a cyclic event from an OsAlarm and an OsEvent set by something else, such as another task or an ISR.
TASK(TASK_EXT)
{
    EventMaskType evt;
    for (;;)
    {
        WaitEvent(EVT_XXX_START | EVT_YYY_START | EVT_YYY_FINISHED);
        GetEvent(TASK_EXT, &evt);
        // Start XXX if triggered, but only once YYY has reported to be finished
        if ((evt & (EVT_XXX_START | EVT_YYY_FINISHED)) == (EVT_XXX_START | EVT_YYY_FINISHED))
        {
            ClearEvent(EVT_XXX_START);
            XXX_Start();
        }
        // Start YYY if triggered; it will report later to start XXX
        if (evt & EVT_YYY_START)
        {
            ClearEvent(EVT_YYY_START);
            YYY_Start();
        }
    }
}
This direct handling of scheduling is now mostly done/generated within the RTE, based on the events you have configured for your SWCs and the event-to-task mapping, etc.
Tasks are scheduled mainly by their priority, which is why they can be interrupted at any time by a higher-priority task. The exception is if you configure your OS and tasks to be cooperative rather than preemptive; then it might be necessary to add Schedule() points in your code to give up the CPU.
On bigger systems, and on multi-core systems with a multi-core OS, there will be higher numbers of tasks, because tasks are bound to a core. The tasks on different cores run independently, except perhaps for inter-core synchronization, which can also have a negative performance impact (spinlocks can stall the whole system).
For example, there could be several cyclic tasks for normal BSW components and one specifically for communication components (the CAN stack and communication services).
We usually separate the communication part, since it needs a certain cycle time like 5..10 ms; the communication stack uses this cycle for message transmission scheduling and for reception timeout monitoring.
Then there might be a task to handle the memory stack (Ea/Fls, Eep/Fee, NvM).
There might also be some event-based tasks to trigger certain HW-control and processing chains of measured data, since these might be placed on different cores and can be scheduled by each other's start or finished events.
On the other side, for all your cyclic tasks you should also make sure that the functions run within each task do not take longer than the task's cycle time; otherwise you get an OS shutdown due to multiple activation of the same task, because the task is activated again before it has actually finished. And you might have constraints that require some tasks to finish within your application's expected measurement cycle.
In safety-relevant systems (ASIL-A .. ASIL-D) you'll also have at least one task per safety level to get freedom from interference. In AUTOSAR, you specify that on the OsApplication to which the tasks are assigned, which also lets you configure memory protection (e.g., write access to memory partitions by QM, ASIL-A, and ASIL-B applications and tasks). That is another thing the OS has to do at runtime: reconfigure the MPU according to the OsApplication's memory access settings.
But again, the more tasks you create, the higher the usage of RAM, ROM and runtime.
RAM - runtime scheduling structures and different task stacks
ROM - the actual task and event configurations
Runtime - the context switches of the tasks and also the scheduling itself
It seems to vary. I found that ETAS RTA offers up to 1024 tasks*, whereas Vector's MICROSAR OS supports up to 65535.
For task handling, OSEK/ASR provides the following functions:
StatusType ActivateTask (TaskType TaskID)
StatusType TerminateTask (void)
StatusType Schedule (void)
StatusType GetTaskID (TaskRefType TaskID)
StatusType GetTaskState (TaskType TaskID, TaskStateRefType State)
*Link might change in future, but it is easy to search ETAS page directly for manuals etc.: https://www.etas.com/en/products/download_center.php
Formally, you can have an unbounded number of OsTasks: according to the spec, the OS configuration can contain 0..* OsTask objects.
Apart from that, the OS software uses the data type TaskType for task-index variables. Therefore, if TaskType is uint16, you cannot have more than 65535 tasks.
Besides that, if you have a lot of tasks, you might want to rethink your design.

<Spring Batch> Why does making ItemReader thread-safe lead to losing restartability?

I have a multi-threaded batch job reading from a DB, and I am concerned about different threads re-reading records, since ItemReader is not thread-safe in Spring Batch. I went through the Spring Batch FAQ section, which states:
You can synchronize the read() method (e.g. by wrapping it in a delegator that does the synchronization). Remember that you will lose restartability, so best practice is to mark the step as not restartable and to be safe (and efficient) you can also set saveState=false on the reader.
I want to know why I would lose restartability in this case. What has restartability got to do with synchronizing my read operations? It can always try again, right?
Also, will this piece of code be enough to synchronize the reader?
public class SynchronizedItemReader<T> implements ItemReader<T> {
    private final ItemReader<T> delegate;

    public SynchronizedItemReader(ItemReader<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T read() throws Exception {
        return delegate.read();
    }
}
When using an ItemReader with multiple threads, the lack of restartability is not about the read itself; it's about saving the state of the reader, which occurs in the update method. The issue is that there needs to be coordination between the calls to read(), the method providing the data, and update(), the method persisting the state. When you use multiple threads, the internal state of the reader (and therefore the update() call) may or may not reflect the work that has actually been done. Take for example the FlatFileItemReader using a chunk size of 5 and running on multiple threads. Thread 1 could have read 5 items (time to update), yet thread 2 could have read an additional 3. This means the call to update would record that 8 items have been read. If the chunk on thread 2 fails, the saved state would be incorrect and the restart would miss the three items that were already read.
This is not to say that it is impossible to write a thread-safe ItemReader. However, as your example above illustrates, if the delegate is a stateful ItemReader (one that also implements ItemStream), the state will not be persisted correctly with calls to update (in fact, your example doesn't even take the ItemStream aspect of stateful readers into account).
If you want to make your job restartable with parallel processing of items, you can save each item the reader has read, plus that item's processing state, yourself.

Best NHibernate multithreading pattern?

As we know, NHibernate sessions are not thread-safe. But we have a code path split across several long-running threads, all using objects loaded in the initial thread.
using (var session = factory.OpenSession())
{
    var parent = session.Get<T>(parentId);
    DoSthWithParent(session, parent);

    foreach (var child in parent.children)
    {
        parallelThreadMethodLongRunning.BeginInvoke(session, child);
        //[Thread #1] DoSthWithChild(child #1) -> SaveOrUpdate(child #1) + Flush()
        //[Thread #2] DoSthWithChild(child #2) -> SaveOrUpdate(child #2) + Flush()
        //[Thread #3] DoSthWithChild(child #3) -> SaveOrUpdate(child #3) + Flush()
        // -> etc... changes to be persisted immediately, not all at the end.
        EndInvoke();
    }

    DoFinalChangesOnParentAndChildren(parent);
    session.Flush();
}
One way would be a session per thread, but that would require the parent object to be reloaded in each. Plus, the final method also makes changes to the children, so it would run into a StaleObjectStateException if another session changed a child in the meantime, or the children would have to be evicted/reloaded.
So all threads have to use the same session. What is the best way to do this?
Use a save queue in the initial thread (a thread-safe implementation) that is polled in a loop (instead of EndInvoke()) from the main thread. Child threads can insert NHibernate objects to be saved by the main thread.
Use some callback mechanism to save/flush objects in the main thread. Is there something similar to the UI thread callback in WPF, Control.Invoke(), or BackgroundWorker?
Put Save/Flush accesses into lock(session) blocks? Maybe dangerous, because merely reading the NHibernate objects might change the session, even without a Save()/Flush().
Or should I live with the database overhead of loading the same objects in separate sessions in each thread, evicting and reloading them in the main thread, and then making changes again? [edit: a bad "solution" due to object concurrency/risk of stale objects]
Consider also that the application has a business logic layer above NHibernate with similar objects, which sends its property values to the NHibernate objects on its own Save() command, only then modifying them and doing the NHibernate Save()/Flush() immediately.
Edit:
It's important to note that any read operation on NHibernate objects may change the session: lazy loading, and children collections changing under certain conditions. So it is really better to have a business object layer on top which synchronizes all access to the NHibernate objects. Considering that database operations take only a small fraction of the threads' time (mainly occasional status updates), and most of it goes to calculations, watching, web service access, and the like, the performance loss from synchronizing the data layer is negligible.
Firstly, if I understand correctly, different threads may be updating the same objects. In that case, nHibernate or not, you're performing several concurrent updates on the same objects, which may lead to unexpected results.
You may want to tweak your design a bit to ensure that an object can be updated by at most a single thread.
Now, assuming your flow may include having the same threads reading the same data (but writing different data), I'd suggest using different sessions, one per thread, and utilizing the 2nd-level cache.
The 2nd-level cache is kept at the SessionFactory level (rather than the Session level), and is therefore shared by all session instances.
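A sketch of that session-per-thread approach; DoSthWithChild comes from the question, while Child and childIds are placeholders:

using System.Threading.Tasks;

// Each worker thread gets its own ISession from the shared, thread-safe ISessionFactory.
Parallel.ForEach(childIds, childId =>
{
    using (var session = factory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        // With the 2nd-level cache enabled, this Get can avoid a database round trip.
        var child = session.Get<Child>(childId);
        DoSthWithChild(child);
        session.SaveOrUpdate(child);
        tx.Commit(); // flushes and persists this thread's changes immediately
    }
});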
The session object is not thread-safe; you can't use it across different threads. The SaveOrUpdate calls in your separate threads will most likely crash your program or corrupt your database. However, what about building the set of changes you want to persist in the worker threads and doing the SaveOrUpdate actions in your main thread (where your session was created)?
You should observe the following practices when creating NHibernate sessions:
• Never create more than one concurrent ISession or ITransaction instance per database connection.
• Be extremely careful when creating more than one ISession per database per transaction. The ISession itself keeps track of updates made to loaded objects, so a different ISession might see stale data.
• The ISession is not threadsafe! Never access the same ISession in two concurrent threads. An ISession is usually only a single unit-of-work!

Concurrent Handle calls in NServiceBus

I would like my IHandleMessages<X>.Handle(X x) methods to be called concurrently by NSB. Even when configuring the default host AsA_Client (which turns off transactions) and providing two or more threads (NumberOfWorkerThreads="3" in App.config), the following handler is called twice sequentially when there are two messages on the queue:
public void Handle(EventMessage message)
{
    Logger.Info(string.Format("Subscriber 1 received EventMessage with Id {0}.", message.EventId));
    Logger.Info(string.Format("Message time: {0}.", message.Time));
    Logger.Info(string.Format("Message duration: {0}.", message.Duration));
    Thread.Sleep(10000);
}
This is merely a modified version of the PubSub sample that ships with NSB. No matter what settings I provide (I've also tried tweaking the IsolationLevel, to no avail), this handler blocks concurrent calls.
In practice, this is not desirable for one specific set of handlers we are writing. The desired behavior would be, at minimum, to let concurrent threads into the Handle method; we would manually mediate access to shared state with software locks.
Is this not possible, or am I missing a trick?
The most likely cause is that you're using the free Express Edition of NServiceBus, which is limited to a single thread. The commercially available Standard Edition allows you to run multiple threads.
NOTE: NServiceBus now performs at full speed in the free trial - no more performance throttling.
