I am using Qt 5.11.2 on both Linux and Windows. I am trying to optimize my UI to keep it from freezing during long-running functions. I started using QThread and was able to do exactly what I want on Windows. However, when I tested the same functions on Linux (RHEL 7), the threads never seemed to finish.
Here is what I tried:
void MainWidget::Configure_BERT_DSO(bool isOptimization, double lineRate, int Scaling)
{
    QThread *bertThread = QThread::create([this, lineRate, Scaling]{ ConfigureBert(lineRate, Scaling); });
    QThread *dsoThread = QThread::create([this, lineRate]{ ConfigureDSO(lineRate); });
    bertThread->setObjectName("My Bert Thread");
    dsoThread->setObjectName("My DSO Thread");
    bertThread->start();
    dsoThread->start();
    // Busy-wait until both threads report they are done, pumping events to keep the UI responsive
    while (bertThread->isRunning() || dsoThread->isRunning()) // even tried isFinished()
    {
        qApp->processEvents();
    }
    bertThread->exit();
    dsoThread->exit();
    delete bertThread;
    delete dsoThread;
}
On Windows the while loop eventually exits and both threads execute correctly with no problem.
On Linux, to make sure that both functions execute correctly, I added qDebug() calls at the start and end of each function, and they are all reached at the expected time. But the problem is that isRunning() never returns false (and likewise isFinished() never returns true), so my loop gets stuck.
Thread: QThread(0x1d368b0, name = "My Bert Thread") started.
Thread: QThread(0x1d368e0, name = "My DSO Thread") started.
Thread: QThread(0x1d368b0, name = "My Bert Thread") finished.
Thread: QThread(0x1d368e0, name = "My DSO Thread") finished.
Is this platform dependent, or is there something that I might be missing?
EDIT
I also tried calling bertThread->wait() and dsoThread->wait() to check whether they return after my functions finish, but the first one I call never returns, even though both functions reach their end successfully.
Your help is much appreciated
As far as I know, programs using Await/Async use only one thread. That means we do not have to worry about any conflicting threads. When the compiler sees "Await", it simply does other things on that same thread until whatever is being awaited is done.
I mean, the thing we are awaiting may run in another thread. However, the program doesn't create another thread itself; it simply does something else on that same thread.
Hence, we shouldn't have to worry about conflicts.
Yet today I discovered that something is running on at least two different threads.
Public Sub LogEvents(ByVal whatToLog As String, Optional ByVal canRandom As Boolean = True)
    Static logNumber As Integer
    Dim timeStamp As String
    timeStamp = CStr(Microsoft.VisualBasic.Now)
    whatToLog = timeStamp & " " & " " & whatToLog & Microsoft.VisualBasic.vbNewLine
    Try
        Debug.Print(whatToLog)
        System.IO.File.AppendAllText("log.txt", whatToLog, defaultEncoding)
        ...
Looking at the threads in the debugger: one is a worker thread and the other is the main thread, and both are stuck at the same place.
What confuses me is that I thought everything should have been running on the main thread; that's just how Await/Async works. How can anything run on a worker thread?
The task is created like this
For Each account In uniqueAccounts().Values
    Dim newtask = account.getMarketInfoAsync().ContinueWith(Sub() account.LogFinishTask("Geting Market Info", starttime))
    LogEvents("marketinfo account " + account.Exchange + " is being done by task " + newtask.Id.ToString + " " + newtask.ToString)
    tasklist.Add(newtask)
    'newtask.ContinueWith(Sub() LogEvents(account.ToString))
Next
That is followed by
LogEvents("Really Start Getting Market Detail of All")
Try
Await jsonHelper.whenAllWithTimeout(tasklist.ToArray, 500000)
Catch ex As Exception
Dim b = 1
End Try
That calls
Public Shared Async Function whenAllWithTimeout(taskar As Task(), timeout As Integer) As Task
    Dim timeoutTask = Task.Delay(timeout)
    Dim maintask = Task.WhenAll(taskar)
    Await Task.WhenAny({timeoutTask, maintask})
    If maintask.IsCompleted Then
        Dim b = 1
        For Each tsk In taskar
            LogEvents("Not Time Out. Status of task " + tsk.Id.ToString + " is " + tsk.IsCompleted.ToString)
        Next
    End If
    If timeoutTask.IsCompleted Then
        Dim b = 1
        For Each tsk In taskar
            LogEvents("status of task " + tsk.Id.ToString + " is " + tsk.IsCompleted.ToString)
        Next
    End If
End Function
So I created a bunch of tasks and I used Task.WhenAll and Task.WhenAny.
Is that why they run on a different thread than the main thread?
How do I make it run on the main thread only?
As far as I know, programs using Await/Async use only one thread.
This is incorrect.
When the compiler sees "Await", it simply does other things on that same thread until whatever is being awaited is done.
Also incorrect.
I recommend reading my async intro.
await actually causes a return from the method. The thread may or may not be returned to the runtime.
How can anything run on a worker thread?
When async methods resume executing after an await, by default they will resume executing on a context captured by that await. If there was no context (common in console applications), then they resume on a thread pool thread.
How do I make it run on the main thread only?
Give them a single-threaded context. GUI main threads use a single-threaded context, so you could run this on a GUI main thread. Or if you are writing a console application, you can use AsyncContext from my AsyncEx library.
I'm working with an API that can only access its objects on the main thread, so I need to create a new thread to be used for my GUI and then swap back to the original thread for any lengthy calculations involving the API.
So far I have the following code:
[<EntryPoint; STAThread>]
let main _ =
    Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Inital thread")
    let initCtx = SynchronizationContext.Current
    let uiThread = new Thread(fun () ->
        let guiCtx = SynchronizationContext.Current
        Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - New UI thread")
        async {
            do! Async.SwitchToContext initCtx
            Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Back to initial thread")
            // Lengthy API calculation here
            do! Async.SwitchToContext guiCtx
            Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Back to UI thread")
        } |> Async.RunSynchronously
    )
    uiThread.SetApartmentState(ApartmentState.STA)
    uiThread.Start()
    1
However, when I run this I get the following output:
[1] - Inital thread
[4] - New UI thread
[5] - Back to initial thread
[5] - Back to UI thread
So it doesn't seem to be switching contexts the way I would expect. How can I switch back to the original thread after creating a new thread this way?
I have tried calling SynchronizationContext.SetSynchronizationContext(new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher)) first to ensure that the original thread has a valid SynchronizationContext, but that causes the program to exit at the Async.SwitchToContext lines without throwing any exception.
I have also tried using Async.StartImmediate instead of RunSynchronously with the same result.
If I try both of these at the same time then the program just freezes up at the Async.SwitchToContext lines instead of exiting out.
I am trying to understand how cats-effect's cancelable IO works. I have the following minimal app, based on the documentation:
import java.util.concurrent.{Executors, ScheduledExecutorService}
import cats.effect._
import cats.implicits._
import scala.concurrent.duration._

object Main extends IOApp {

  def delayedTick(d: FiniteDuration)
                 (implicit sc: ScheduledExecutorService): IO[Unit] = {
    IO.cancelable { cb =>
      val r = new Runnable {
        def run() =
          cb(Right(()))
      }
      val f = sc.schedule(r, d.length, d.unit)
      // Returning the cancellation token needed to cancel
      // the scheduling and release resources early
      val mayInterruptIfRunning = false
      IO(f.cancel(mayInterruptIfRunning)).void
    }
  }

  override def run(args: List[String]): IO[ExitCode] = {
    val scheduledExecutorService =
      Executors.newSingleThreadScheduledExecutor()
    for {
      x <- delayedTick(1.second)(scheduledExecutorService)
      _ <- IO(println(s"$x"))
    } yield ExitCode.Success
  }
}
When I run this:
❯ sbt run
[info] Loading global plugins from /Users/ethan/.sbt/1.0/plugins
[info] Loading settings for project stackoverflow-build from plugins.sbt ...
[info] Loading project definition from /Users/ethan/IdeaProjects/stackoverflow/project
[info] Loading settings for project stackoverflow from build.sbt ...
[info] Set current project to cats-effect-tutorial (in build file:/Users/ethan/IdeaProjects/stackoverflow/)
[info] Compiling 1 Scala source to /Users/ethan/IdeaProjects/stackoverflow/target/scala-2.12/classes ...
[info] running (fork) Main
[info] ()
The program just hangs at this point. I have many questions:
Why does the program hang instead of terminating after 1 second?
Why do we set mayInterruptIfRunning = false? Isn't the whole point of cancellation to interrupt a running task?
Is this the recommended way to define the ScheduledExecutorService? I did not see examples in the docs.
This program waits 1 second, and then returns () (then unexpectedly hangs). What if I wanted to return something else? For example, let's say I wanted to return a string, the result of some long-running computation. How would I extract that value from IO.cancelable? The difficulty, it seems, is that IO.cancelable returns the cancelation operation, not the return value of the process to be cancelled.
Pardon the long post but this is my build.sbt:
name := "cats-effect-tutorial"
version := "1.0"
fork := true
scalaVersion := "2.12.8"
libraryDependencies += "org.typelevel" %% "cats-effect" % "1.3.0" withSources() withJavadoc()
scalacOptions ++= Seq(
  "-feature",
  "-deprecation",
  "-unchecked",
  "-language:postfixOps",
  "-language:higherKinds",
  "-Ypartial-unification")
You need to shut down the ScheduledExecutorService. Try this:
Resource.make(IO(Executors.newSingleThreadScheduledExecutor))(se => IO(se.shutdown())).use { se =>
  for {
    x <- delayedTick(5.second)(se)
    _ <- IO(println(s"$x"))
  } yield ExitCode.Success
}
I was able to find an answer to these questions although there are still some things that I don't understand.
Why does the program hang instead of terminating after 1 second?
For some reason, Executors.newSingleThreadScheduledExecutor() causes things to hang. To fix the problem, I had to use Executors.newSingleThreadScheduledExecutor(new Thread(_)). It appears that the only difference is that the first version is equivalent to Executors.newSingleThreadScheduledExecutor(Executors.defaultThreadFactory()), although nothing in the docs makes it clear why this is the case.
Why do we set mayInterruptIfRunning = false? Isn't the whole point of cancellation to interrupt a running task?
I have to admit that I do not understand this entirely. Again, the docs were not especially clarifying on this point. Switching the flag to true does not seem to change the behavior at all, at least in the case of Ctrl-c interrupts.
Is this the recommended way to define the ScheduledExecutorService? I did not see examples in the docs.
Clearly not. The way that I came up with was loosely inspired by this snippet from the cats effect source code.
This program waits 1 second, and then returns () (then unexpectedly hangs). What if I wanted to return something else? For example, let's say I wanted to return a string, the result of some long-running computation. How would I extract that value from IO.cancelable? The difficulty, it seems, is that IO.cancelable returns the cancelation operation, not the return value of the process to be cancelled.
The IO.cancelable { ... } block returns IO[A], and the callback function cb has type Either[Throwable, A] => Unit. Logically this suggests that whatever is fed into the cb function is what the IO.cancelable expression will return (wrapped in IO). So to return the string "hello" instead of (), we rewrite delayedTick:
def delayedTick(d: FiniteDuration)
               (implicit sc: ScheduledExecutorService): IO[String] = { // Note IO[String] instead of IO[Unit]
  IO.cancelable[String] { cb => // Note IO.cancelable[String] instead of IO[Unit]
    val r = new Runnable {
      def run() =
        cb(Right("hello")) // Note "hello" instead of ()
    }
    val f: ScheduledFuture[_] = sc.schedule(r, d.length, d.unit)
    IO(f.cancel(true))
  }
}
You need to explicitly terminate the executor at the end. It is not managed by the Scala or Cats runtime, so it won't exit by itself; that's why your app hangs instead of exiting immediately.
mayInterruptIfRunning = false terminates a running task gracefully. You can set it to true to kill it forcibly, but that is not recommended.
There are many ways to create a ScheduledExecutorService; it depends on your needs. For this case it doesn't matter, apart from the issue raised in question 1.
You can return anything from the cancelable IO by calling cb(Right("put your stuff here")); the only thing that prevents you from retrieving the returned A is cancellation kicking in first. You won't get anything if you cancel it before it reaches that point. Try returning IO(f.cancel(mayInterruptIfRunning)).delayBy(FiniteDuration(2, TimeUnit.SECONDS)).void and you will get what you expect, because 2 seconds > 1 second, so your code gets enough time to run before it is cancelled.
I would like to use the threads library (or perhaps parallel) for loading/preprocessing data into a queue, but I am not entirely sure how it works. In summary:
Load data (tensors), pre-process the tensors (this takes time, hence why I am here) and put them in a queue. I would like to have as many threads as possible doing this so that the model is not waiting, or at least not waiting for long.
Take the tensor at the top of the queue, forward it through the model, and remove it from the queue.
I don't really understand the example at https://github.com/torch/threads well enough. A hint or example of where I would load data into the queue and train would be great.
EDIT 14/03/2016
In this example (https://github.com/torch/threads/blob/master/test/test-low-level.lua), which uses low-level threads, does anyone know how I can extract data from these threads into the main thread?
Look at this multi-threaded data provider:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua
It runs this file in the thread:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L18
by calling it here:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L30-L43
And afterwards, if you want to queue a job into the thread, you provide two functions:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L84
The first one runs inside the thread, and the second one runs in the main thread after the first one completes.
Hopefully that makes it a bit more clear.
If Soumith's examples in the previous answer are not very easy to use, I suggest you build your own pipeline from scratch. I provide here an example of two synchronized threads: one for writing data and one for reading data:
local t = require 'threads'
t.Threads.serialization('threads.sharedserialize')
local tds = require 'tds'
local dict = tds.Hash() -- only local variables work here, and only tables or tds.Hash()
dict[1] = torch.zeros(4)

local m1 = t.Mutex()
local m2 = t.Mutex()
local m1id = m1:id()
local m2id = m2:id()
m1:lock()

local pool = t.Threads(
    1,
    function(threadIdx)
    end
)

pool:addjob(
    function()
        local t = require 'threads'
        local m1 = t.Mutex(m1id)
        local m2 = t.Mutex(m2id)
        while true do
            m2:lock()
            dict[1] = torch.randn(4)
            m1:unlock()
            print('W ===> ')
            print(dict[1])
            collectgarbage()
            collectgarbage()
        end
        return __threadid
    end,
    function(id)
    end
)

-- Code executing on master:
local a = 1
while true do
    m1:lock()
    a = dict[1]
    m2:unlock()
    print('R --> ')
    print(a)
end
The following code, run in a Windows Store app, blocks the UI for 30 seconds despite the fact that the loop should run in a separate thread:
int seconds(30);

// Create a thread pool
ComPtr<IThreadPoolStatics> threadPool;
HRESULT hr = GetActivationFactory(HStringReference(RuntimeClass_Windows_System_Threading_ThreadPool).Get(), &threadPool);

// Create an asynchronous task and start it
ComPtr<ABI::Windows::Foundation::IAsyncAction> asyncActionPtr;
hr = threadPool->RunAsync(Callback<IWorkItemHandler>( // Line 1
    // Lambda for task. Loops doing nothing until 30 seconds have passed
    [seconds](ABI::Windows::Foundation::IAsyncAction* asyncAction) -> HRESULT {
        std::chrono::system_clock::time_point end(std::chrono::system_clock::now() + std::chrono::seconds(seconds)); // Line 3
        while (std::chrono::system_clock::now() < end);
        return S_OK; // Line 4
    }).Get(), &asyncActionPtr);
if (FAILED(hr)) throw std::exception("Cannot start thread"); // Line 2
When I set breakpoints on the marked lines, I can see that Line 1 is hit before Line 2, then Line 3, and after 30 seconds Line 4. During these 30 seconds the UI is blocked, and the Threads window in Visual Studio shows the same thread (SHcore.dll) for all of the breakpoints.
I am using Windows 8 and Visual Studio 2012.
Can somebody explain?
Bill Messmer has given the perfect answer on MSDN. In short:
The delegate object created by Callback is not agile, which means that it cannot be passed directly to the thread pool thread. Instead that thread receives a proxy, and calls on it are marshaled back to the delegate object in the UI thread. Bill has also given an easy fix for the problem.
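For reference, here is a minimal sketch of that kind of fix (a paraphrase, not necessarily Bill's exact code): mixing WRL's FtmBase into the callback makes the delegate agile, so the thread pool thread receives the delegate itself rather than a proxy and the busy loop no longer runs on the UI thread. It assumes the same threadPool, asyncActionPtr and seconds variables as in the question.

// Sketch: an agile work-item callback (FtmBase mix-in), so no proxy is created
auto workItem = Callback<Implements<RuntimeClassFlags<ClassicCom>, IWorkItemHandler, FtmBase>>(
    [seconds](ABI::Windows::Foundation::IAsyncAction*) -> HRESULT {
        std::chrono::system_clock::time_point end(std::chrono::system_clock::now() + std::chrono::seconds(seconds));
        while (std::chrono::system_clock::now() < end); // now spins on the thread pool thread
        return S_OK;
    });
hr = threadPool->RunAsync(workItem.Get(), &asyncActionPtr); // UI stays responsive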