I need to execute the "completed" line after all the tasks have completed. I thought Task.WaitAll(tasks) would take care of that, but the "completed" line gets executed before my callback methods run. Is there a way to block the main thread until the Task array completes?
TaskProcessor.BatchStart(definition)
public void BatchStart(List<TaskDefinition> definition)
{
    int i = 0;
    tasks = new Task[definition.Count];
    definition.ForEach((a) =>
    {
        tasks[i] = Task<TaskResult>.Factory.StartNew(() => (TaskResult)a.MethodTocall.DynamicInvoke(a.ARguments));
        tasks[i].ContinueWith(task => RunTaskRetObjResultIns((Task<TaskResult>)task, a.CompleteMethod));
        i++;
    });
    Task.WaitAll(tasks);
    Console.WriteLine("completed");
}
I would try this:
public void BatchStart(List<TaskDefinition> definition)
{
    Task.WaitAll(
        definition.Select(a =>
            Task<TaskResult>.Factory
                .StartNew(() => (TaskResult)a.MethodTocall.DynamicInvoke(a.ARguments))
                .ContinueWith(task => RunTaskRetObjResultIns((Task<TaskResult>)task, a.CompleteMethod))
        ).ToArray()
    );
    Console.WriteLine("completed");
}
I think the problem is that ContinueWith returns a new Task, and that's the one you want to Wait for. You're waiting for the original tasks but not the continuation.
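If you would rather keep the loop from the question than switch to the LINQ version, the same fix applies: store the tasks returned by ContinueWith and wait on those. A minimal sketch, reusing the question's TaskDefinition members:
public void BatchStart(List<TaskDefinition> definition)
{
    var continuations = new Task[definition.Count];
    int i = 0;
    definition.ForEach(a =>
    {
        var work = Task<TaskResult>.Factory.StartNew(
            () => (TaskResult)a.MethodTocall.DynamicInvoke(a.ARguments));
        // ContinueWith returns a new Task; keep that one so WaitAll
        // also waits for the completion callback to run.
        continuations[i] = work.ContinueWith(
            task => RunTaskRetObjResultIns(task, a.CompleteMethod));
        i++;
    });
    Task.WaitAll(continuations);
    Console.WriteLine("completed");
}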
You could just use PLINQ, as in:
List<Object> items = new List<Object>();
items.AsParallel().ForAll(obj => {
    // Write whatever the object is to the console, but do it in parallel
    Console.WriteLine(obj.ToString());
});
Console.WriteLine("Done");
That will execute all of your work in parallel and only return when everything has completed.
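Applied to the question's scenario, the same idea would look roughly like this. This is only a sketch: CompleteMethod's exact signature isn't shown in the question, so it assumes CompleteMethod can be invoked directly with the TaskResult produced by MethodTocall.
public void BatchStart(List<TaskDefinition> definition)
{
    definition.AsParallel().ForAll(a =>
    {
        // Run the work item and its completion callback on a worker thread.
        // Assumption: CompleteMethod accepts the TaskResult as its argument.
        var result = (TaskResult)a.MethodTocall.DynamicInvoke(a.ARguments);
        a.CompleteMethod.DynamicInvoke(result);
    });
    // ForAll blocks until every item has been processed.
    Console.WriteLine("completed");
}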
I'm looking at coroutines for the first time, and I want to process a load of data in parallel and wait for it to finish. I've been looking around and seen runBlocking and await etc., but I'm not sure how to use them.
So far I have
val jobs = mutableListOf<Job>()
jobs += GlobalScope.launch { processPages(urls, collection) }
jobs += GlobalScope.launch { processPages(urls, collection2) }
jobs += GlobalScope.launch { processPages(urls, collection3) }
I then want to know/wait for these to finish
You don't need to manually keep track of your concurrent jobs if you use the concept of structured concurrency. Assuming that your processPages function performs some kind of blocking IO, you can encapsulate your code into the following suspending function, which executes your code in an IO dispatcher designed for this kind of work:
suspend fun processAllPages() = withContext(Dispatchers.IO) {
    // withContext waits for all children coroutines
    launch { processPages(urls, collection) }
    launch { processPages(urls, collection2) }
    launch { processPages(urls, collection3) }
}
Now, if the topmost function of your application is not already a suspending function, you can use runBlocking to call processAllPages:
runBlocking {
    processAllPages()
}
You can use the async builder function to process a load of data in parallel:
class Presenter {
    private var job: Job = Job()
    // The scope the coroutines run in: Dispatchers.Main (run in the Main context)
    // plus a Job to handle cancellation of the coroutines.
    private var scope = CoroutineScope(Dispatchers.Main + job)

    fun runInParallel() {
        scope.launch { // launch a coroutine
            // runs in parallel
            val deferredList = listOf(
                scope.asyncIO { processPages(urls, collection) },
                scope.asyncIO { processPages(urls, collection2) },
                scope.asyncIO { processPages(urls, collection3) }
            )
            deferredList.awaitAll() // wait for all data to be processed without blocking the UI thread
            // do some stuff after the data has been processed, for example update the UI
        }
    }

    private fun processPages(...) {...}

    fun cancel() {
        // Invoke this to cancel the job when you no longer need it to execute,
        // for example when the UI changed and the data is no longer needed.
        job.cancel()
    }
}
Extension function asyncIO:
fun <T> CoroutineScope.asyncIO(ioFun: () -> T) = async(Dispatchers.IO) { ioFun() } // CoroutineDispatcher - runs and schedules coroutines
Using GlobalScope.launch is not recommended unless you want the coroutine to operate for the whole application lifetime and not be cancelled prematurely.
Edit: as mentioned by Roman Elizarov, you can omit the awaitAll() call unless you want to update the UI or do something else right away after all the data has been processed.
The following approach can be used.
fun myTask() {
    GlobalScope.launch {
        val task = listOf(
            async {
            },
            async {
            }
        )
        task.awaitAll()
    }
}
I am trying to implement a thread-safe way to keep a count of successful tasks that have been completed, which will ultimately get bound to a label displayed on the UI. However, when I use the AtomicInteger below it locks up my UI when the tasks start running, whereas if I remove all AtomicInteger refs everything works fine. Is there a non-blocking, thread-safe way this can be accomplished?
public void handleSomeButtonClick(){
    if(!dataModel.getSomeList().isEmpty()) {
        boolean unlimited = false;
        int count = 0;
        AtomicInteger successCount = new AtomicInteger(0);
        if(countSelector.getValue().equalsIgnoreCase("Unlimited"))
            unlimited = true;
        else
            count = Integer.parseInt(countSelector.getValue());
        while(unlimited || successCount.get() < count) {
            Task task = getSomeTask();
            taskExecutor.submit(task);
            task.setOnSucceeded(event -> {
                if (task.getValue())
                    log.info("Successfully Completed Task | Total Count: " + successCount.incrementAndGet());
                else
                    log.error("Failed task");
            });
        }
    }
}
Your loop waits for a certain number of tasks to be completed. It may even be an infinite loop.
This is not a good idea:
You block the calling thread which seems to be the JavaFX application thread.
You don't have any control over how many tasks are submitted. count could be 3, but since you only schedule the tasks in the loop, 1000 or more tasks could be created and scheduled before the first one completes.
Furthermore, if you use onSucceeded/onFailed, you don't need AtomicInteger or any similar kind of synchronisation, since those handlers all run on the JavaFX application thread.
Your code could be rewritten like this:
private int successCount;

private void scheduleTask(final boolean unlimited) {
    Task task = getSomeTask();
    task.setOnSucceeded(event -> {
        // cannot get a Boolean from a raw task, so I assume the task is successful iff no exception happens
        successCount++;
        log.info("Successfully Completed Task | Total Count: " + successCount);
        if (unlimited) {
            // submit a new task, if the number of tasks is unlimited
            scheduleTask(true);
        }
    });
    // submit a new task on failure
    task.setOnFailed(evt -> scheduleTask(unlimited));
    taskExecutor.submit(task);
}

public void handleSomeButtonClick() {
    if(!dataModel.getSomeList().isEmpty()) {
        successCount = 0;
        final boolean unlimited;
        final int count;
        if(countSelector.getValue().equalsIgnoreCase("Unlimited")) {
            unlimited = true;
            count = 4; // limit of the number of tasks submitted to the executor at the same time
        } else {
            count = Integer.parseInt(countSelector.getValue());
            unlimited = false;
        }
        for (int i = 0; i < count; i++) {
            scheduleTask(unlimited);
        }
    }
}
Note: This code runs the risk of handleSomeButtonClick being triggered multiple times before the previous tasks have completed. You should either prevent scheduling new tasks before the old ones are done, or use some reference type containing an int for the count instead, create that object in handleSomeButtonClick, and pass it to scheduleTask.
Your UI locks up because you do the counting (successCount.get() < count) on the FX application thread. I also cannot tell why you keep submitting tasks in the while loop; which of these do you want to do?
(1) Start X (e.g. 10) tasks and count how many of them succeed, or
(2) just keep starting new tasks and watch the count go up.
If (2), run the whole while loop in a background thread and update the UI in Platform.runLater().
If (1), use Future / CompletableFuture, or a more powerful Future from a third-party package like Vavr.
Your problem is that future.get() blocks and waits for the result. This becomes simple if you use the Vavr library, because you can attach code to its Future that runs automatically when it succeeds or fails, so you don't have to wait. Here is an example using Vavr's Future.
CheckedFunction0<String> thisIsATask = () -> {
    if ( /* do something */ ) {
        throw new Exception("Hey");
    }
    return "ABC";
};

List<Future<String>> futureList = new ArrayList<>();
for (int x = 0; x < 10; x++) {
    futureList.add(Future.of(getExecutorService(), thisIsATask));
}

futureList.forEach((task) -> {
    // This will run if the task succeeds
    task.onSuccess(s -> {
        if (s.equals("ABC"))
            Platform.runLater(() -> UpdateCounter());
    });
    // You get the exception if the task fails
    task.onFailure(e -> e.printStackTrace());
    // task.onComplete() will run in either case when the task completes
});
This is not blocking: the code in onSuccess, onFailure, or onComplete runs when the task finishes or an exception is caught.
Note: Future.of uses the ExecutorService you pass in to run each task on a new thread, and the code you provide in onSuccess continues to run on that thread once the task is done, so if you are calling into JavaFX remember to use Platform.runLater().
Also, if you want to run something when all of the tasks are finished:
// the code in onComplete will run when all tasks are done
Future<Seq<String>> all = Future.sequence(futureList);
all.onComplete((i) -> this.btnXYZ.setDisable(false));
I have a situation where I have to use Task.Run in my ForEach loop.
Requirement:
I'm going to be forced to manually kill the thread. I have a button where I can start and stop this thread (or the Task.Run in the loop).
Problem
My problem is that when I start the Task.Run it runs, but when I try to stop it with CancellationTokenSource or runningTaskThread.Abort() it is not killed. It only stops when I start a new Task.Run, and then the old thread keeps running alongside the new one, so every start creates yet another thread.
Code:
Below is my code for starting the thread:
var messages = rootObject.MultiQData.Messages
    .Where(m => m.TimeStamp > DateTime.Now)
    .OrderBy(x => x.TimeStamp)
    .ToList();

// Simulate MultiQ file in the background
if (messages.Count > 0)
{
    cancellationTokenSource = new CancellationTokenSource();
    cancellationToken = cancellationTokenSource.Token;
    Task.Factory.StartNew(
        () =>
        {
            runningTaskThread = Thread.CurrentThread;
            messages.ForEach(
                m => SetUpTimer(m, rootObject.MultiQData.Connection.FleetNo));
        }, cancellationToken);
}
To stop the Task.Run:
if (cancellationTokenSource != null)
{
    if (cancellationToken.IsCancellationRequested)
        return;
    else
        cancellationTokenSource.Cancel();
}
I have also tried using a Thread with Thread.Abort, but that does not work either. Please help me solve this issue.
I got a solution using timer.Stop() and timer.Dispose(). On creation of the thread I call SetUpTimer, and SetUpTimer creates multiple timers. So when stopping the thread I dispose of the timers, and that works for me.
For reference, see the code below:
private void SetUpTimer(Message message, string fleetNo)
{
    var ts = new MessageTimer();
    var interval = (message.TimeStamp - DateTime.Now).TotalMilliseconds;
    interval = interval <= 0 ? 100 : interval;

    ts.MessageWrapper = new MessageWrapper(message, fleetNo);
    ts.Interval = interval;
    ts.Elapsed += ts_Elapsed;
    ts.Start();

    // Add the timer to the list so it can be disposed when the simulation is stopped
    lsTimers.Add(ts);
}
private void StopTask()
{
    try
    {
        // Attempt to cancel the task politely
        if (cancellationTokenSource != null)
        {
            if (cancellationToken.IsCancellationRequested)
                return;
            else
                cancellationTokenSource.Cancel();
        }

        // Stop all timers
        foreach (var timer in lsTimers)
        {
            timer.Stop();
            timer.Dispose();
        }
    }
    catch (Exception ex)
    {
        errorLogger.Error("Error while Stop simulation :", ex);
    }
}
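For completeness, the reason Cancel() alone did not stop anything is that the token is only passed to StartNew; the task body never observes it. A minimal sketch of cooperative cancellation, assuming the same messages list and SetUpTimer helper as above:
cancellationTokenSource = new CancellationTokenSource();
var token = cancellationTokenSource.Token;

Task.Factory.StartNew(() =>
{
    foreach (var m in messages)
    {
        // The loop has to check the token itself; Cancel() only signals it.
        token.ThrowIfCancellationRequested();
        SetUpTimer(m, rootObject.MultiQData.Connection.FleetNo);
    }
}, token);

// Later, from the stop button handler:
// cancellationTokenSource.Cancel();   // the loop stops at the next iteration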
My code is supposed to simultaneously start sorting 3 different lists using different methods and return the first one to finish. However, it always performs only the first task in the list instead. How can I fix that?
Below is the part of my code which seemed relevant to show.
static List<Task<List<int>>> listoftasks = new List<Task<List<int>>>()
    { QuickSortAsync(list1), BubbleSortAsync(list2), SelectionSortAsync(list3) };

public async static void caller()
{
    List<int> result = await Task.WhenAny(listoftasks).Result;
    foreach (var item in result)
        Console.Write(item + ", ");
}

static Task<List<int>> QuickSortAsync(List<int> l)
{
    return Task.Run<List<int>>(() =>
    {
        l.Sort();
        return l;
    });
}
Since your list of tasks is static, you're starting all three tasks very early. Then, when you call WhenAny, it's likely that they've already all completed.
I suggest you start the tasks when you call WhenAny:
public static async Task CallerAsync()
{
    List<int> result = await await Task.WhenAny(QuickSortAsync(list1),
        BubbleSortAsync(list2), SelectionSortAsync(list3));
    foreach (var item in result)
        Console.Write(item + ", ");
}
Suppose I have a BlockingCollection OutputQueue, which has many items. Currently my code is:
public void Consumer()
{
    foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    {
        PlayMessage(workItem);
        Console.WriteLine("Works on {0}", workItem.TaskID);
        OutLog.Write("Works on {0}", workItem.TaskID);
        Thread.Sleep(500);
    }
}
Now I want PlayMessage(workItem) to run across multiple tasks, because some work items need much more time than others; the differences are huge.
As for the method PlayMessage(workItem), it makes a few service calls, plays text to speech, and does some logging.
bool successRouting = serviceCollection.SvcCall_GetRoutingData(string[] params, out ex);
bool successDialingService = serviceCollection.SvcCall_GetDialingServiceData(string[] params, out excep);
PlayTTS(workItem.TaskType); // playing text to speech
So how to change my code?
What I thought was:
public async Task Consumer()
{
    foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    {
        await PlayMessage(workItem);
        Console.WriteLine("Works on {0}", workItem.TaskID);
        OutLog.Write("Works on {0}", workItem.TaskID);
        Thread.Sleep(500);
    }
}
Since you want parallelism with your PlayMessage, I would suggest looking into TPL Dataflow, as it combines parallel work with async, so you can await your work properly.
TPL Dataflow is constructed of Blocks, and each block has its own characteristics.
Some popular ones are:
ActionBlock<TInput>
TransformBlock<T, TResult>
I would construct something like the following:
var workItemBlock = new ActionBlock<WorkItem>(
    workItem =>
    {
        PlayMessage(workItem);
        Console.WriteLine("Works on {0}", workItem.TaskID);
        OutLog.Write("Works on {0}", workItem.TaskID);
    },
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4 // Set max parallelism as you wish..
    });

foreach (var workItem in OutputQueue.GetConsumingEnumerable())
{
    workItemBlock.Post(workItem);
}

workItemBlock.Complete();
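The TransformBlock mentioned above is worth a quick sketch as well. This is not the answer's code, just a rough illustration of linking a TransformBlock to an ActionBlock; it assumes PlayMessage returns an awaitable Task and uses an arbitrary degree of parallelism:
// Sketch only: produce a log line per work item, then hand it to a logging block.
var playBlock = new TransformBlock<WorkItem, string>(
    async workItem =>
    {
        await PlayMessage(workItem); // assumption: PlayMessage returns a Task
        return string.Format("Works on {0}", workItem.TaskID);
    },
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

var logBlock = new ActionBlock<string>(line => Console.WriteLine(line));

// Link the blocks and propagate completion downstream.
playBlock.LinkTo(logBlock, new DataflowLinkOptions { PropagateCompletion = true });

foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    playBlock.Post(workItem);

playBlock.Complete();
await logBlock.Completion; // inside an async method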
Here's another solution, not based on TPL Dataflow. It uses SemaphoreSlim to throttle the number of parallel playbacks (warning, untested):
public async Task Consumer()
{
    var semaphore = new SemaphoreSlim(NUMBER_OF_PORTS);
    var pendingTasks = new HashSet<Task>();
    var syncLock = new Object();

    Action<Task> queueTaskAsync = async (task) =>
    {
        // be careful with exceptions inside "async void" methods;
        // keep failed/cancelled tasks in the list,
        // they will be observed outside
        lock (syncLock)
            pendingTasks.Add(task);

        await semaphore.WaitAsync().ConfigureAwait(false);
        try
        {
            await task;
        }
        catch
        {
            if (!task.IsCanceled && !task.IsFaulted)
                throw;
            // the error will be observed later,
            // keep the task in the list
            return;
        }
        finally
        {
            semaphore.Release();
        }

        // remove the successfully completed task from the list
        lock (syncLock)
            pendingTasks.Remove(task);
    };

    foreach (var workItem in OutputQueue.GetConsumingEnumerable())
    {
        var item = workItem;
        Func<Task> workAsync = async () =>
        {
            await PlayMessage(item);
            Console.WriteLine("Works on {0}", item.TaskID);
            OutLog.Write("Works on {0}", item.TaskID);
            Thread.Sleep(500);
        };

        var task = workAsync();
        queueTaskAsync(task);
    }

    await Task.WhenAll(pendingTasks.ToArray());
}