.NET 4.0 Tasks: Synchronize on one or more objects - multithreading

I have read a lot about the new Task functionality in .NET 4.0, but I haven't found a solution to the following problem:
I am writing a server application that processes requests from many users, and I want to use Tasks to distribute these requests across multiple cores. However, these Tasks should be synchronized on objects (to begin with, users) so that only one task is processed per object at a time. This would be simple to achieve with Task.ContinueWith(), but it should also be possible to synchronize a task on multiple objects (e.g. when a user transfers money to another user, a variable should be decremented at user A and incremented at user B without other tasks interfering).
So, my first attempt is a class that receives delegates, creates tasks, and stores them in a dictionary with the objects to sync on as keys. If a new task is scheduled, it is appended to the last task of the given object with Task.ContinueWith(). If it should be synchronized on multiple objects, the new task is created using TaskFactory.ContinueWhenAll(). The created task is stored in the dictionary for every object it is synchronized on.
Here is my first draft:
public class ActionScheduler : IActionScheduler
{
    private readonly IDictionary<object, Task> mSchedulingDictionary = new Dictionary<object, Task>();
    private readonly TaskFactory mTaskFactory = new TaskFactory();

    /// <summary>
    /// Schedules actions synchronized on one or more objects. Only one action will be processed for each object at any time.
    /// </summary>
    /// <param name="synchronisationObjects">Array of objects the current action is synchronized on</param>
    /// <param name="action">The action that will be scheduled and processed</param>
    public void ScheduleTask(object[] synchronisationObjects, Action action)
    {
        // lock the dictionary in case two actions are scheduled on the same object at the same time;
        // this is necessary since reading and writing the dictionary cannot be done atomically
        lock (mSchedulingDictionary)
        {
            // get all current tasks for the given synchronisation objects
            var oldTaskList = new List<Task>();
            foreach (var syncObject in synchronisationObjects)
            {
                Task task;
                mSchedulingDictionary.TryGetValue(syncObject, out task);
                if (task != null)
                    oldTaskList.Add(task);
            }

            // create a new task for the given action
            Task newTask;
            if (oldTaskList.Count > 1)
            {
                // task depends on multiple previous tasks
                newTask = mTaskFactory.ContinueWhenAll(oldTaskList.ToArray(), t => action());
            }
            else if (oldTaskList.Count == 1)
            {
                // task depends on exactly one previous task
                newTask = oldTaskList[0].ContinueWith(t => action());
            }
            else
            {
                // task does not depend on any previous task and can be started immediately
                newTask = new Task(action);
                newTask.Start();
            }

            // store the task in the dictionary
            foreach (var syncObject in synchronisationObjects)
            {
                mSchedulingDictionary[syncObject] = newTask;
            }
        }
    }
}
This even works if a task "multiSyncTask" was created for multiple objects and afterwards tasks for each of those objects are scheduled: since they are all created with multiSyncTask.ContinueWith(), they all start simultaneously once it completes:
static void Main()
{
    IActionScheduler actionScheduler = new ActionScheduler();
    var syncObj1 = new object();
    var syncObj2 = new object();

    // these two start and complete simultaneously:
    actionScheduler.ScheduleTask(new[] { syncObj1 }, () => PrintTextAfterWait("1"));
    actionScheduler.ScheduleTask(new[] { syncObj2 }, () => PrintTextAfterWait("2"));

    // this task starts after the first two and "locks" both objects:
    actionScheduler.ScheduleTask(new[] { syncObj1, syncObj2 }, () => PrintTextAfterWait("1 and 2"));

    // these two - again - start and complete simultaneously after the task above:
    actionScheduler.ScheduleTask(new[] { syncObj1 }, () => PrintTextAfterWait("1"));
    actionScheduler.ScheduleTask(new[] { syncObj2 }, () => PrintTextAfterWait("2"));
}

static void PrintTextAfterWait(string text)
{
    Thread.Sleep(3000);
    Console.WriteLine(text);
}
What do you think - is this a good solution to my problem? I am a bit skeptical about the big lock on the dictionary, but it is necessary to prevent race conditions when two tasks are scheduled on the same object at once. Of course, the dictionary is only locked for the time it takes to create a task, not while the task is processed.
Also, I would love to know if there are any existing solutions or coding paradigms out there that solve my problem better using .NET 4.0 Tasks and that I have failed to track down.
Thank you and best regards,
Johannes

If I understood you correctly, you would like to have something like Task.ContinueWith(task1, task2, lambda)?
Something like the Join arbiter in CCR?
http://msdn.microsoft.com/en-us/library/bb648749.aspx
If so, probably the most elegant option is to use the JoinBlock in TPL dataflow (http://www.microsoft.com/download/en/confirmation.aspx?id=14782).
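A minimal sketch of that JoinBlock idea (illustrative only; it assumes the System.Threading.Tasks.Dataflow library from the download above, and the types and values are placeholders):
// JoinBlock pairs one message from each target, so the action below
// only runs once both inputs have arrived.
var join = new JoinBlock<int, string>();
var process = new ActionBlock<Tuple<int, string>>(pair =>
    Console.WriteLine("Processing {0} / {1}", pair.Item1, pair.Item2));
join.LinkTo(process);
join.Target1.Post(42);        // e.g. an amount
join.Target2.Post("user A");  // e.g. an account id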
Or, maybe, have you tried using Task.WaitAll() as the first instruction of your dependent task?
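For that last option, a rough sketch (DebitUserA and CreditUserB are hypothetical placeholders for your per-user work):
// blocking on the prerequisites as the first instruction of the dependent task;
// note that Task.WaitAll blocks a thread-pool thread until both predecessors finish
Task taskA = Task.Factory.StartNew(() => DebitUserA());   // hypothetical helper
Task taskB = Task.Factory.StartNew(() => CreditUserB());  // hypothetical helper
Task transfer = Task.Factory.StartNew(() =>
{
    Task.WaitAll(taskA, taskB); // wait for both "locks" before touching either user
    // ... perform the combined update on both users here ...
});
This trades the explicit dependency graph for blocked threads, so it may not scale as well as the continuation-based scheduler above.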

Related

On Hyperledger Fabric Context

I would appreciate some help with this small issue.
According to the commercial paper smart contract present in Fabric-samples, one can define a custom Context, which allows handling logic across different transactions.
Can we define a variable in a Context, initialized to 0, and increment it at each transaction? I do not seem able to increment it, i.e., the counter always resets at each transaction:
class CustomContext extends Context {
    constructor() {
        super();
        this.comercialPaperList = new PaperList(this);
        this.numberLogs = 0; // the counter, initialized to 0
    }

    generateLogId() {
        return ++this.numberLogs;
    }

    getLatestLogId() {
        return this.numberLogs;
    }
}
class MyContract extends Contract {
    constructor() {
        // Unique namespace when multiple contracts per chaincode file
        super('...');
    }

    /**
     * Define a custom context for a citius log
     */
    createContext() {
        return new CustomContext();
    }
}
I have a test transaction which increments the counter:
async incrementC(ctx) {
    let before = await ctx.getLatestLogId();
    console.log(before);
    let next = await ctx.generateLogId();
    console.log(next);
    console.log("============== after inc");
    let after = await ctx.getLatestLogId();
    console.log(after);
}
The first time I execute the incrementC transaction, I obtain 0, 1, 1, as expected.
On the following invocations, I obtain exactly the same output, as if the context did not store the updates.
Any insights?
A custom context has the same scope as a non-custom context: it is the context for the currently executing transaction only, and it cannot span multiple transaction requests. If the documentation implies that it can, then I would suggest there is something wrong with the documentation, and an issue should be raised at https://jira.hyperledger.org.
So unfortunately what you are trying to do will not work. Any value that must survive across transactions has to be written to and read back from the world state, rather than kept on the context object.

Calling a C++ function from QML asynchronously or in a new thread

I have completed my entire project, an application that fetches SQL query results based on some combination of user input in the GUI. I believe there is a performance issue when calling a Q_INVOKABLE C++ function that does some computation to build the query, runs it, and returns the result, so I wanted to know if there is a way to run that function in a separate thread from QML.
The skeleton is somewhat like this:
Result.h
class Result : public QObject {
    Q_OBJECT
public:
    Q_INVOKABLE void on_submit_button_clicked();
};
Result.cpp
void Result::on_submit_button_clicked()
{
    /* Code that prepares the query */
    /* Code that runs the query using a QThread */
}
main.qml
ApplicationWindow {
    id: appwindow

    /* Other parts of the GUI */

    Button {
        id: submitButton
        onClicked: {
            result_object.on_submit_button_clicked() // how to run this function in a new thread?
            /* code to load my table view columns */
        }
    }
}

Strange OutOfMemoryException in C# with Lists, strings, or ADO.NET?

I have a problem: an OutOfMemoryException is thrown in my class, and something strange seems to be going on. I have a class in a DLL:
public class MyClass : IDisposable
{
    List<ClassA> a_classLists = new List<ClassA>(); // new instance
    List<ClassB> b_classLists = new List<ClassB>(); // new instance

    public string Method1(int IDValue)
    {
        // call a web service and get some XML data from it
        // parse the XML
        // iterate through a for loop and add each node value to a_classLists
        // usually contains 10 or 15 items
        Method2();        // from here, call another method
        FinalSaveToDB();  // finally, save the data to the DB
        return "";
    }

    private void Method2()
    {
        // call a web service and get some XML data from it
        // iterate through a for loop
        // parse the XML [large XML data, i.e. an image in binary format]
        // in each loop, add the binary image data and other XML to b_classLists
        // usually it contains 50 or 60 such large items
    }

    private void FinalSaveToDB()
    {
        // using SqlBulkCopy, save the data in the 2 lists to 2 different tables in the DB
        // TableLock is specified on the SqlBulkCopy
        // actually 2 SqlBulkCopy instances, one per list
        // only 1 SqlConnection opens, then the bulk copies run [there is no DataSet or DataReader]
        // the SqlConnection closes; I am using "using" clauses for the connection, bulk copy etc.
        // all of this works fine
    }

    public void Dispose()
    {
        // here, null everything out:
        // the proxy, all the lists, ...
    }
}
This is the class I am instantiating 1000 times using the Reactive Framework's Observable.Start method, as shown below:
private IObservable<string> SendEmpDetails(Employee emp)
{
    using (MyClass p = new MyClass())
    {
        return Observable.Start(() => p.Method1(emp.ID), Scheduler.ThreadPool);
    }
    // here I hope it will call Dispose and release all objects in the class
}
// this EmployeeLists contains 1000 Employee objects
EmployeeLists.ToObservable()
    .Select(x => SendEmpDetails(x).Select(y => new { emp = x, retval = y }))
    .Merge(10)
    .ObserveOn(Scheduler.CurrentThread)
    .Subscribe(x =>
    {
        SendStatus(x.retval.Item1, x.retval);
    });
So why am I getting an OutOfMemoryException? After starting the app, it throws the error when it processes around the 200th MyClass object.
I forgot to mention one more thing: I am using VS 2010 and C# 4.0 (Windows 7, 64-bit OS).
I need to log each activity [i.e., I need to record each and every step the app goes through], so I declared a class-level (MyClass) private string variable and appended each processing detail to it, like "called this method", "got 5 records from this web service", etc.:
logdata += Environment.NewLine + "This method has completed";
The error is thrown here, saying out of memory, with some "evaluation failed" message. So I turned off the string evaluation checkbox in the VS Options. That did not help either. So I changed the string to a StringBuilder and tried to append the activity string each time. Still no use. I don't understand what the problem is.
Is this because all the threads are working in parallel - do they share the MyClass resources? Why are the objects not released?
Please help me with this.
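One remark on the SendEmpDetails snippet above, as a hedged sketch rather than a definitive fix: the using block disposes p as soon as the method returns, which is typically before (or while) Method1 runs on the thread pool. Rx's Observable.Using ties the lifetime of the resource to the observable instead, so disposal happens only when the work completes:
// sketch: the resource is created per subscription and disposed when the
// sequence terminates, so MyClass lives exactly as long as the work it performs
private IObservable<string> SendEmpDetails(Employee emp)
{
    return Observable.Using(
        () => new MyClass(),                           // resource factory
        p => Observable.Start(() => p.Method1(emp.ID), // work that uses it
                              Scheduler.ThreadPool));
}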

Java ME Runnable object takes up memory although no instance has been made yet

I am facing a strange problem with memory in Java ME. Here is part of my code:
int variable = 1;
while (true) {
    if (variable == 2) {
        display = Display.getDisplay(this);
        MyCanvas mc = new MyCanvas(this); // MyCanvas is a Runnable object
        mcT = new Thread(mc);             // new thread for MyCanvas
        mc.repaint();
        display.setCurrent(mc);
        mcT.start();                      // run the thread
    }
    if (variable == 1) {
        // do some other stuff
    }
}
The problem is that although variable is still set to 1, so execution never enters the if (variable == 2) branch, the program consumes 300 kB more memory than when I delete the code inside that branch.
As far as I know, the code should be executed and the objects created only when I set variable to 2. Yet the memory is consumed even when the code behind if (variable == 2) is never executed.
Why does this happen?

How to use IObservable/IObserver with ConcurrentQueue or ConcurrentStack

I realized that when I am trying to process items in a concurrent queue using multiple threads, while multiple threads may be putting items into it, the ideal solution would be to use the Reactive Extensions together with the concurrent data structures.
My original question is at:
While using ConcurrentQueue, trying to dequeue while looping through in parallel
So I am curious whether there is any way to have a LINQ (or PLINQ) query that will continuously dequeue as items are put into the queue.
I am trying to get this to work in a way where I can have n producers pushing into the queue and a limited number of threads to process it, so I don't overload the database.
If I could use the Rx framework, then I expect that I could just start it, and if 100 items are placed in within 100 ms, the 20 threads that are part of the PLINQ query would just process through the queue.
There are three technologies I am trying to make work together:
Rx Framework (Reactive LINQ)
PLINQ
System.Collections.Concurrent structures
Drew is right - I think the ConcurrentQueue, even though it sounds perfect for the job, is actually the underlying data structure that the BlockingCollection uses. That seemed very back-to-front to me too.
Check out chapter 7 of this book*
http://www.amazon.co.uk/Parallel-Programming-Microsoft-NET-Decomposition/dp/0735651590/ref=sr_1_1?ie=UTF8&qid=1294319704&sr=8-1
and it will explain how to use the BlockingCollection and have multiple producers and multiple consumers each taking off the "queue". You will want to look at the "GetConsumingEnumerable()" method and possibly just call .ToObservable() on that.
*the rest of the book is pretty average.
edit:
Here is a sample program that I think does what you want:
class Program
{
    private static ManualResetEvent _mre = new ManualResetEvent(false);

    static void Main(string[] args)
    {
        var theQueue = new BlockingCollection<string>();

        theQueue.GetConsumingEnumerable()
                .ToObservable(Scheduler.TaskPool)
                .Subscribe(x => ProcessNewValue(x, "Consumer 1", 10000000));

        theQueue.GetConsumingEnumerable()
                .ToObservable(Scheduler.TaskPool)
                .Subscribe(x => ProcessNewValue(x, "Consumer 2", 50000000));

        theQueue.GetConsumingEnumerable()
                .ToObservable(Scheduler.TaskPool)
                .Subscribe(x => ProcessNewValue(x, "Consumer 3", 30000000));

        LoadQueue(theQueue, "Producer A");
        LoadQueue(theQueue, "Producer B");
        LoadQueue(theQueue, "Producer C");

        _mre.Set();
        Console.WriteLine("Processing now....");
        Console.ReadLine();
    }

    private static void ProcessNewValue(string value, string consumerName, int delay)
    {
        Thread.SpinWait(delay);
        Console.WriteLine("{1} consuming {0}", value, consumerName);
    }

    private static void LoadQueue(BlockingCollection<string> target, string prefix)
    {
        var thread = new Thread(() =>
        {
            _mre.WaitOne(); // block until the main thread signals "go"
            for (int i = 0; i < 100; i++)
            {
                target.Add(string.Format("{0} {1}", prefix, i));
            }
        });
        thread.Start();
    }
}
I don't know how best to accomplish this with Rx, but I would recommend just using BlockingCollection<T> and the producer-consumer pattern. Your main thread adds items into the collection, which uses ConcurrentQueue<T> underneath by default. Then you have a separate Task that you spin up ahead of that which uses Parallel::ForEach over the BlockingCollection<T> to process as many items from the collection as makes sense for the system concurrently. Now, you will probably also want to look into using the GetConsumingPartitioner method of the ParallelExtensions library in order to be most efficient since the default partitioner will create more overhead than you want in this case. You can read more about this from this blog post.
When the main thread is finished you call CompleteAdding on the BlockingCollection<T> and Task::Wait on the Task you spun up to wait for all the consumers to finish processing all the items in the collection.
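A minimal sketch of that pattern, under stated assumptions: a single producer, a fixed degree of parallelism, and the plain GetConsumingEnumerable rather than the GetConsumingPartitioner extension (which lives in the ParallelExtensionsExtras library, not the BCL):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerSketch
{
    static void Main()
    {
        var queue = new BlockingCollection<string>();

        // consumer: GetConsumingEnumerable blocks until items arrive and
        // completes once CompleteAdding has been called and the queue drains;
        // note the default chunking partitioner buffers items, which is why
        // Drew recommends GetConsumingPartitioner for efficiency
        var consumer = Task.Factory.StartNew(() =>
            Parallel.ForEach(
                queue.GetConsumingEnumerable(),
                new ParallelOptions { MaxDegreeOfParallelism = 4 },
                item => Console.WriteLine("Processed {0}", item)));

        // the main thread produces
        for (int i = 0; i < 100; i++)
            queue.Add("item " + i);

        queue.CompleteAdding(); // signal that no more items will be added
        consumer.Wait();        // wait for the consumers to finish
    }
}
The MaxDegreeOfParallelism cap is what keeps a limited number of threads hitting the database, as the original question asked.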
