I am learning F# and am very interested in the language.
I am trying to create async expressions that run asynchronously.
For example:
let prop1 = async {
    for i = 0 to 1000000 do ()
    MessageBox.Show("Done") |> ignore
}

let prop2 = async {
    for i = 0 to 1000000 do ()
    MessageBox.Show("Done2") |> ignore
}
Async.Start(prop1)
Async.Start(prop2)
When I run the program, the thread count of the process increases from 6 to 8. When I close the two message boxes, the process does not seem to destroy the created threads; the count is still 8. What happened, or have I misunderstood F# asynchronous workflows?
Thanks for your help.
The threads are taken from a thread pool (which is why there are more threads than actions, incidentally).
The pool exists until the application terminates.
Nothing to worry about.
Edit: For a nice in-depth article on F#, async and the ThreadPool: http://www.voyce.com/index.php/2011/05/27/fsharp-async-plays-well-with-others/
The runtime might use a thread pool; that is, threads are not destroyed but wait for further asynchronous tasks. This technique helps the runtime reduce the time needed to start a new async operation, because creating a new thread consumes time and resources.
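For comparison, the same pool behavior is easy to observe from Python (a minimal sketch using concurrent.futures as a stand-in for the .NET thread pool, not F# itself):

import threading
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)  # stand-in for the runtime's pool
pool.submit(time.sleep, 0.1)
pool.submit(time.sleep, 0.1)

time.sleep(0.5)                  # both tasks finished long ago
print(threading.active_count())  # 3: main thread plus two idle pool workers

The worker threads outlive the tasks they ran, just like the two extra threads observed above; they go away only when the pool shuts down or the process exits.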
When using the multi-threaded approach to solve IO-bound problems in Python, this works by releasing the GIL. Suppose we have Thread1, which takes 10 seconds to read a file; during these 10 seconds it does not require the GIL and can leave Thread2 to execute code. Thread1 and Thread2 are effectively running in parallel, because Thread1 is doing system-call operations and can execute independently of Thread2; however, Thread1 is still executing code.
Now, suppose we have a setup using asyncio or any asynchronous programming code. When we do something such as,
file_content = await ten_second_long_file_read()
While await is in progress, system calls are made to read the contents of the files; when they are done, an event is sent back and code execution can continue later. While we are awaiting, other code can run.
My confusion comes from the fact that asynchronous programming is primarily single-threaded. With the multi-threaded approach, when T1 is reading from a file, it is still executing code; it simply released the GIL to perform work in parallel with another thread. However, with asynchronous programming, how is it performing other tasks while we are waiting, as well as reading data, in a single thread? I understand the multi-threaded idea, but not the asynchronous one, because it still performs the system calls in a single thread. With asynchronous programming there is nowhere to release the GIL to, considering there is only one thread. Is asyncio secretly using threads?
The number of file handles is independent of the GIL and of threads. The POSIX select documentation gives a bit of an idea of the distinct mechanism around file handles.
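To sketch that mechanism (a minimal Python example with the standard selectors module; sockets are used because plain disk files always report ready on most platforms):

import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()             # two connected endpoints
sel.register(b, selectors.EVENT_READ)  # watch a handle, no thread per handle

a.send(b'ping')                        # make b readable
for key, _events in sel.select(timeout=1):
    print('readable:', key.fileobj.recv(4))

A single thread asks the OS which handles are ready and only then reads, so no extra threads are needed to wait on many handles.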
To illustrate I created three files, 1.txt etc. These are just:
1
one
Obviously they are open for reading, not writing. To make a ten-second read, I just held the file handle open for ten seconds: reading the first line, waiting 10 seconds, then reading the second line.
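For reference, a small setup snippet (my reconstruction of those test files):

for name, word in [('1.txt', 'one'), ('2.txt', 'two'), ('3.txt', 'three')]:
    with open(name, 'w') as f:
        f.write(name[0] + '\n' + word + '\n')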
asyncio version
import asyncio
from threading import active_count

do = ['1.txt', '2.txt', '3.txt']

async def ten_second_long_file_read():
    while do:
        doing = do.pop()
        with open(doing, 'r') as f:
            print(f.readline().strip())
            await asyncio.sleep(10)
            print(f"threads {active_count()}")
            print(f.readline().strip())

async def main():
    await asyncio.gather(asyncio.create_task(ten_second_long_file_read()),
                         asyncio.create_task(ten_second_long_file_read()))

asyncio.run(main())
This produces a very predictable output and, as expected, one thread only.
3
2
threads 1
three
1
threads 1
two
threads 1
one
threading - changes
Remove async of course. Swap asyncio.sleep(10) for time.sleep(10). The main change is the calling function.
import time
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as e:
    e.submit(ten_second_long_file_read)
    e.submit(ten_second_long_file_read)
Also a fairly predictable output; however, you cannot rely on this.
3
2
threads 3
three
threads 3
two
1
threads 2
one
Running the same threaded version in debug the output is a bit random, on one run on my computer this was:
23
threads 3threads 3
twothree
1
threads 2
one
This highlights a difference with threads: the running thread is pre-emptively switched, creating a whole bundle of complexity under the heading of thread safety. This issue does not exist in asyncio, as there is a single thread.
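To make the hazard concrete, here is a deliberately widened version of the classic lost-update race (the time.sleep(0) simply invites a pre-emptive switch between the read and the write):

import threading
import time

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        v = counter      # read
        time.sleep(0)    # invite a thread switch mid-update
        counter = v + 1  # write back: the other thread's increments are lost

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # typically far less than 20000

The asyncio equivalent could only lose updates if it awaited between the read and the write, and that point is visible in the source.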
multi-processing
Similar to the threaded code; however, the __name__ == '__main__' guard is required, and the process pool executor provides a snapshot of the context to each process.
def main():
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as e:
        e.submit(ten_second_long_file_read)
        e.submit(ten_second_long_file_read)

if __name__ == '__main__':  # required for executor
    main()
Two big differences: there is no shared understanding of the do list, so everything is done twice, and the processes don't know what the other process has done. More CPU power is available, but more work is required to manage the load.
Three processes are required for this, so the overhead is large; however, each process has only one thread.
3
3
threads 1
threads 1
three
three
2
2
threads 1
threads 1
two
two
1
1
threads 1
threads 1
one
one
Let's consider this simple code with coroutines:
import kotlinx.coroutines.*
import java.util.concurrent.Executors
fun main() {
    runBlocking {
        launch(Executors.newFixedThreadPool(10).asCoroutineDispatcher()) {
            var x = 0
            val threads = mutableSetOf<Thread>()
            for (i in 0 until 100000) {
                x++
                threads.add(Thread.currentThread())
                yield()
            }
            println("Result: $x")
            println("Threads: $threads")
        }
    }
}
As far as I understand, this is quite legitimate coroutine code, and it actually produces the expected results:
Result: 100000
Threads: [Thread[pool-1-thread-1,5,main], Thread[pool-1-thread-2,5,main], Thread[pool-1-thread-3,5,main], Thread[pool-1-thread-4,5,main], Thread[pool-1-thread-5,5,main], Thread[pool-1-thread-6,5,main], Thread[pool-1-thread-7,5,main], Thread[pool-1-thread-8,5,main], Thread[pool-1-thread-9,5,main], Thread[pool-1-thread-10,5,main]]
The question is: what makes these modifications of local variables thread-safe (or is it thread-safe at all)? I understand that this loop is actually executed sequentially, but it can change the running thread on every iteration. The changes made by the thread in the first iteration should still be visible to the thread that picks up the loop on the second iteration. What code guarantees this visibility? I tried decompiling this code to Java and digging around the coroutines implementation with a debugger, but I did not find a clue.
Your question is completely analogous to the realization that the OS can suspend a thread at any point in its execution and reschedule it to another CPU core. That works not because the code in question is "multicore-safe", but because it is a guarantee of the environment that a single thread behaves according to its program-order semantics.
Kotlin's coroutine execution environment likewise guarantees the safety of your sequential code. You are supposed to program to this guarantee without any worry about how it is maintained.
If you want to descend into the details of "how" out of curiosity, the answer becomes "it depends". Every coroutine dispatcher can choose its own mechanism to achieve it.
As an instructive example, we can focus on the specific dispatcher you use in your posted code: a fixed thread pool executor from the JDK (Executors.newFixedThreadPool). You can submit arbitrary tasks to this executor, and it will execute each one of them on a single (arbitrary) thread, but many tasks submitted together will execute in parallel on different threads.
Furthermore, the executor service provides the guarantee that the code leading up to executor.execute(task) happens-before the code within the task, and the code within the task happens-before another thread's observing its completion (future.get(), future.isCompleted(), getting an event from the associated CompletionService).
Kotlin's coroutine dispatcher drives the coroutine through its lifecycle of suspension and resumption by relying on these primitives from the executor service, and thus you get the "sequential execution" guarantee for the entire coroutine. A single task submitted to the executor ends whenever the coroutine suspends, and the dispatcher submits a new task when the coroutine is ready to resume (when the user code calls continuation.resume(result)).
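To make that concrete, here is a rough Python model of the mechanism (my sketch, not Kotlin's implementation): each resumption of the coroutine is one task submitted to a pool, and the executor's lock-protected queue supplies the happens-before edge between consecutive steps.

import threading
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=10)
state = {"x": 0, "threads": set()}  # the "coroutine locals"
done = threading.Event()
N = 100_000

def step(i):
    # One resumption: its own task, on whichever thread the pool picks.
    state["x"] += 1
    state["threads"].add(threading.current_thread().name)
    if i + 1 < N:
        pool.submit(step, i + 1)  # "yield()": end task, submit continuation
    else:
        done.set()

pool.submit(step, 0)
done.wait()
print("Result:", state["x"])         # always N: steps never overlap
print("Threads:", state["threads"])  # one or more pool threads
pool.shutdown()

Only one step task exists at any moment, and submitting it through the executor publishes the previous step's writes to whichever thread runs the next one; that is the "sequential execution" guarantee the dispatcher builds on.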
From: http://blog.nindalf.com/how-goroutines-work/
As the goroutines are scheduled cooperatively, a goroutine that loops continuously can starve other goroutines on the same thread.
Goroutines are cheap and do not cause the thread on which they are multiplexed to block if they are blocked on
network input
sleeping
channel operations or
blocking on primitives in the sync package.
So given the above, say that you have some code like this that does nothing but loop a random number of times and print the sum:
func sum(x int) {
    sum := 0
    for i := 0; i < x; i++ {
        sum += i
    }
    fmt.Println(sum)
}
If you use goroutines like:
go sum(100)
go sum(200)
go sum(300)
go sum(400)
will the goroutines run one by one if you only have one thread?
A compilation and tidying of all of creker's comments.
Preemptive means that the kernel (or runtime) allows threads to run for a specific amount of time and then yields execution to other threads, without those threads doing or knowing anything. In OS kernels that's usually implemented using hardware interrupts, so a process can't block the entire OS. In cooperative multitasking, a thread has to explicitly yield execution to others; if it doesn't, it could block the whole process or even the whole machine. That's how Go does it: it has some very specific points where a goroutine can yield execution, but if a goroutine just executes for {} then it will lock the entire process.
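As a toy model of that difference (a Python sketch in which generators stand in for goroutines and next() is the scheduler handing out a turn):

def task(name, n):
    total = 0
    for i in range(n):
        total += i
        yield  # cooperative yield point: hand control back
    print(name, "done:", total)

def greedy():
    while True:  # never yields, so the scheduler never runs again
        pass
    yield        # unreachable; only makes this a generator

# Round-robin scheduler: run each task until its next yield.
ready = [task("A", 3), task("B", 3)]
while ready:
    t = ready.pop(0)
    try:
        next(t)
        ready.append(t)
    except StopIteration:  # task finished
        pass

Appending greedy() to the ready list would hang the loop on greedy's first turn, just as for {} locks up a cooperative scheduler.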
However, the quote doesn't mention recent changes in the runtime: fmt.Println(sum) could cause other goroutines to be scheduled, as newer runtimes call into the scheduler on function calls.
If you don't have any function calls, just some math, then yes, the goroutine will lock the thread until it exits or hits something that could yield execution to others. That's why for {} doesn't work in Go. Even worse, it will still lead to the process hanging even if GOMAXPROCS > 1, because of how the GC works; but in any case you shouldn't depend on that. It's good to understand this stuff, but don't count on it. There is even a proposal to insert scheduler calls in loops like yours.
The main thing that Go's runtime does is give its best effort to let everyone execute and starve no one. How it does that is not specified in the language specification and might change in the future. If the proposal about loops is implemented, then switching could occur even without function calls. At the moment, the only thing you should remember is that in some circumstances function calls can cause a goroutine to yield execution.
To explain the switching in Akavall's answer: when fmt.Printf is called, the first thing it does is check whether it needs to grow the stack, and that calls into the scheduler. It MIGHT switch to another goroutine; whether it will depends on the state of the other goroutines and the exact implementation of the scheduler. Like any scheduler, it probably checks whether there are starving goroutines that should be executed instead. With many iterations, a function call has a greater chance of making a switch, because the others have been starving longer; with few iterations, the goroutine finishes before starvation happens.
For what it's worth, I can produce a simple example where it is clear that the goroutines are not run one by one:
package main

import (
    "fmt"
    "runtime"
)

func sum_up(name string, count_to int, print_every int, done chan bool) {
    my_sum := 0
    for i := 0; i < count_to; i++ {
        if i % print_every == 0 {
            fmt.Printf("%s working on: %d\n", name, i)
        }
        my_sum += 1
    }
    fmt.Printf("%s: %d\n", name, my_sum)
    done <- true
}

func main() {
    runtime.GOMAXPROCS(1)
    done := make(chan bool)
    const COUNT_TO = 10000000
    const PRINT_EVERY = 1000000
    go sum_up("Amy", COUNT_TO, PRINT_EVERY, done)
    go sum_up("Brian", COUNT_TO, PRINT_EVERY, done)
    <-done
    <-done
}
Result:
....
Amy working on: 7000000
Brian working on: 8000000
Amy working on: 8000000
Amy working on: 9000000
Brian working on: 9000000
Brian: 10000000
Amy: 10000000
Also if I add a function that just does a forever loop, that will block the entire process.
func dumb() {
    for {
    }
}
This blocks at some random point:
go dumb()
go sum_up("Amy", COUNT_TO, PRINT_EVERY, done)
go sum_up("Brian", COUNT_TO, PRINT_EVERY, done)
Well, let's say runtime.GOMAXPROCS is 1. The goroutines then run concurrently, one at a time: Go's scheduler gives the upper hand to one of the spawned goroutines for a certain time, then to another, and so on until all are finished.
So you never know which goroutine is running at a given time; that's why you need to synchronize your variables. From your example, it's unlikely that sum(100) will run fully, then sum(200) will run fully, and so on.
Most probably, one goroutine will do some iterations, then another will do some, then another again, and so on.
So, overall, they are not sequential, even though there is only one goroutine active at a time (GOMAXPROCS=1).
So, what's the advantage of using goroutines? Plenty. It means that you can just run an operation in a goroutine, because it is not crucial, and continue the main program. Imagine an HTTP web server: handling each request in a goroutine is convenient because you do not have to care about queueing requests and running them sequentially; you let Go's scheduler do the job.
Plus, sometimes goroutines are inactive, because you called time.Sleep or because they are waiting for an event, like receiving something from a channel. Go can see this and just executes other goroutines while some are in those idle states.
I know there are a handful of advantages I didn't present, but I don't know concurrency well enough to tell you about them.
EDIT:
Related to your example code: if you push each iteration onto a channel, run that on one processor, and print the contents of the channel, you'll see that there is no context switching between goroutines; each one runs sequentially after another one is done.
However, it is not a general rule and is not specified in the language. So, you should not rely on these results for drawing general conclusions.
@Akavall Try adding a sleep after creating the dumb goroutine; the Go runtime never executes the sum_up goroutines.
From that it looks like the Go runtime spawns the next goroutines immediately; it might execute the sum_up goroutines until the runtime schedules the dumb() goroutine to run. Once dumb() is scheduled to run, the runtime won't schedule the sum_up goroutines again, as dumb runs for {} forever.
I am implementing a REST service for financial calculations, so each request is expected to be a CPU-intensive task, and I think that the best place to create threads is in the following function:
exports.execute = function(data, params, f, callback) {
    var queriesList = [];
    var resultList = [];

    for (var i = 0; i < data.lista.length; i++) {
        var query = (function(cod) {
            return function(callbackFlow) {
                params.paramcodneg = cod;

                doCdaQuery(params, function(err, result) {
                    if (err) {
                        return callback({ERROR: err}, null);
                    }

                    f(data, result, function(ret) {
                        resultList.push(ret);
                        callbackFlow();
                    });
                });
            };
        })(data.lista[i]);

        queriesList.push(query);
    }

    flow.parallel(queriesList, function() {
        callback(null, resultList);
    });
};
I don't know which is best: running flow.parallel in a separate thread, or running each function of the queriesList in its own thread. What is best? And how do I use the threads-a-gogo module for that?
I've tried but couldn't write the right code for that.
Thanks in advance.
Kleyson Rios.
I'll admit that I'm relatively new to node.js and I haven't yet used threads a gogo, but I have had some experience with multi-threaded programming, so I'll take a crack at answering this question.
Creating a thread for every single query (I'm assuming these queries are CPU-bound calculations rather than IO-bound calls to a database) is not a good idea. Creating and destroying threads is an expensive operation, so creating and destroying a group of threads for every request that requires calculation is going to be a huge drag on performance. Too many threads will also cause more overhead as the processor switches between them. There isn't any advantage to having more worker threads than processor cores.
Also, if each query doesn't take much processing time, more time will be spent creating and destroying the thread than running the query; most of the time would go to threading overhead. In that case you would be much better off with a single-threaded solution using flow or async, which distributes the processing over multiple ticks to allow the node.js event loop to run.
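The "spread the work over ticks" idea looks like this in Python terms (a sketch with asyncio standing in for the node.js event loop):

import asyncio

async def compute_in_chunks(n, chunk=10_000):
    total = 0
    for start in range(0, n, chunk):
        total += sum(range(start, min(start + chunk, n)))
        await asyncio.sleep(0)  # yield this tick back to the event loop
    return total

print(asyncio.run(compute_in_chunks(1_000_000)))

The event loop stays responsive between chunks, but the total CPU time is unchanged; only a real thread or process buys parallelism.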
Single-threaded solutions are the easiest to understand and debug, but if the queries are preventing the main thread from getting other stuff done, then a multi-threaded solution is necessary.
The multi-threaded solution you propose is pretty good. Running all the queries in a separate thread prevents the main thread from bogging down. However, there isn't any point in using flow or async in this case. These modules simulate multi-threading by distributing the processing over multiple node.js ticks, and tasks run in parallel don't execute in any particular order, but those tasks are still running in a single thread. Since you're processing the queries in their own thread, and they're no longer interfering with the node.js event loop, just run them one after another in a loop. Since all the action is happening in a thread without a node.js event loop, using flow or async there just introduces more overhead for no additional benefit.
A more efficient solution is to have a thread pool hanging out in the background and throw tasks at it. The thread pool would ideally have the same number of threads as processor cores, and would be created when the application starts up and destroyed when the application shuts down, so the expensive creation and destruction of threads happens only once. I see that Threads a Gogo has a thread pool that you can use, although I'm afraid I'm not yet familiar enough with it to give you all the details of using it.
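Since I can't give a verified threads-a-gogo example, here is the shape of the idea in Python terms (run_query is a hypothetical stand-in for one CPU-bound calculation):

import os
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=os.cpu_count())  # created once at startup

def run_query(q):
    # Hypothetical CPU-bound financial calculation.
    return sum(i * i for i in range(q))

def handle_request(queries):
    # Each request throws its queries at the shared pool and collects results.
    futures = [pool.submit(run_query, q) for q in queries]
    return [f.result() for f in futures]

print(handle_request([10, 100, 1000]))

The pool lives for the lifetime of the process, so thread creation costs are paid once, and the core count caps how many queries run at once.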
I'm drifting into territory I'm not familiar with here, but I believe you could do it by pushing each query individually onto the global thread pool; when all the callbacks have completed, you'll be done.
The Node.flow module would be handy here, not because it would make processing any faster, but because it would help you manage all the query tasks and their callbacks. You would use a loop to push a bunch of parallel tasks on the flow stack using flow.parallel(...), where each task would send a query to the global threadpool using threadpool.any.eval(), and then call ready() in the threadpool callback to tell flow that the task is complete. After the parallel tasks have been queued up, use flow.join() to run all the tasks. That should run the queries on the thread pool, with the thread pool running as many tasks as it can at once, using all the cores and avoiding creating or destroying threads, and all the queries will have been processed.
Other requests would also be tossing their tasks onto the thread pool as well, but you wouldn't notice that because the request being processed would only get callbacks for the tasks that the request gave to the thread pool. Note that this would all be done on the main thread. The thread pool would do all the non-main-thread processing.
You'll need to do some threads-a-gogo and node.flow documentation reading and figure out some of the details, but that should give you a head start. Using a separate thread is more complex than using the main thread, and making use of a thread pool is even more complex, so you'll have to choose which one is best for you. The extra complexity might or might not be worth it.
Trying to understand the difference between the TPL and async/await when it comes to thread creation.
I believe the TPL (TaskFactory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread in the thread pool. That's of course unless you use TaskCreationOptions.LongRunning, which creates a new thread.
I thought async/await would work similarly, so essentially:
TPL:
Task.Factory.StartNew(() => DoSomeAsyncWork())
    .ContinueWith(
        (antecedent) => {
            DoSomeWorkAfter();
        }, TaskScheduler.FromCurrentSynchronizationContext());
Async/Await:
await DoSomeAsyncWork();
DoSomeWorkAfter();
would be identical. From what I've been reading, it seems like async/await only "sometimes" creates a new thread. So when does it create a new thread, and when doesn't it? If you were dealing with IO completion ports, I can see it not having to create a new thread, but otherwise I would think it would have to. I guess my understanding of FromCurrentSynchronizationContext was always a bit fuzzy also; I always thought it was, in essence, the UI thread.
I believe the TPL (TaskFactory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread in the thread pool.
Pretty much.
From what I've been reading, it seems like async/await only "sometimes" creates a new thread.
Actually, it never does. If you want multithreading, you have to implement it yourself. There's a new Task.Run method that is just shorthand for Task.Factory.StartNew, and it's probably the most common way of starting a task on the thread pool.
If you were dealing with IO completion ports, I can see it not having to create a new thread, but otherwise I would think it would have to.
Bingo. So methods like Stream.ReadAsync will actually create a Task wrapper around an IOCP (if the Stream has an IOCP).
You can also create some non-I/O, non-CPU "tasks". A simple example is Task.Delay, which returns a task that completes after some time period.
The cool thing about async/await is that you can queue some work to the thread pool (e.g., Task.Run), do some I/O-bound operation (e.g., Stream.ReadAsync), and do some other operation (e.g., Task.Delay)... and they're all tasks! They can be awaited or used in combinations like Task.WhenAll.
Any method that returns Task can be awaited - it doesn't have to be an async method. So Task.Delay and I/O-bound operations just use TaskCompletionSource to create and complete a task - the only thing being done on the thread pool is the actual task completion when the event occurs (timeout, I/O completion, etc).
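The same trick is easy to see in Python terms (my analogy, not the .NET implementation): a future completed by a timer callback, with no thread parked for the duration.

import asyncio

def delay(seconds):
    # Analogous to building a Task on top of TaskCompletionSource.
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    loop.call_later(seconds, fut.set_result, None)  # the event completes it
    return fut                                      # awaitable right away

async def main():
    await delay(1.0)
    print("done; no worker thread was blocked for the wait")

asyncio.run(main())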
I guess my understanding of FromCurrentSynchronizationContext was always a bit fuzzy also; I always thought it was, in essence, the UI thread.
I wrote an article on SynchronizationContext. Most of the time, SynchronizationContext.Current:
is a UI context if the current thread is a UI thread.
is an ASP.NET request context if the current thread is servicing an ASP.NET request.
is a thread pool context otherwise.
Any thread can set its own SynchronizationContext, so there are exceptions to the rules above.
Note that the default Task awaiter will schedule the remainder of the async method on the current SynchronizationContext if it is not null; otherwise it goes on the current TaskScheduler. This isn't so important today, but in the near future it will be an important distinction.
I wrote my own async/await intro on my blog, and Stephen Toub recently posted an excellent async/await FAQ.
Regarding "concurrency" vs "multithreading", see this related SO question. I would say async enables concurrency, which may or may not be multithreaded. It's easy to use await Task.WhenAll or await Task.WhenAny to do concurrent processing, and unless you explicitly use the thread pool (e.g., Task.Run or ConfigureAwait(false)), then you can have multiple concurrent operations in progress at the same time (e.g., multiple I/O or other types like Delay) - and there is no thread needed for them. I use the term "single-threaded concurrency" for this kind of scenario, though in an ASP.NET host, you can actually end up with "zero-threaded concurrency". Which is pretty sweet.
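For a feel of that single-threaded concurrency, a Python analogue (asyncio.gather standing in for Task.WhenAll):

import asyncio
import threading
import time

async def main():
    start = time.monotonic()
    # Three concurrent one-second delays, no thread pool involved:
    await asyncio.gather(asyncio.sleep(1), asyncio.sleep(1), asyncio.sleep(1))
    print(f"elapsed ~{time.monotonic() - start:.1f}s, "
          f"threads: {threading.active_count()}")  # ~1.0s, typically 1 thread

asyncio.run(main())

The three waits overlap and finish together in about one second, all on a single thread.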
async/await basically simplifies the ContinueWith methods (continuations in continuation-passing style).
It does not introduce concurrency - you still have to do that yourself (or use the async version of a framework method).
So, the C# 5 version would be:
await Task.Run(() => DoSomeAsyncWork());
DoSomeWorkAfter();