I retrieve data from the Bloomberg API and am quite surprised by how slow it is.
My computation is I/O-bound by this.
Therefore I decided to use an async monad builder to speed it up.
Upon running it, the results are not much better, which in hindsight is obvious, as I call a function, NextEvent, which blocks the thread.
let outerloop args dic =
    ...
    let rec innerloop continuetoloop =
        let eventObj = session.NextEvent() // this blocks
        ...
let seqtable = reader.ReadFile(@"C:\homeware\sector.csv", ";".[0], true)
let dic = ConcurrentDictionary<_,_> ()
let wf = seqtable |> Seq.mapi (fun i item -> async { outerloop item dic } )
wf |> Async.Parallel
|> Async.RunSynchronously
|> ignore
printfn "%A" ret
Is there a good way to wrap that blocking call into a non-blocking call?
Also, why is the async framework not creating as many threads as I have requests (like 200)? When I inspect the threads from which I receive values, I see only 4-5 being used.
UPDATE
I found a compelling reason why it will never be possible.
An async operation takes whatever comes after the async instruction and schedules it somewhere on the thread pool.
For all practical purposes, as long as async functions are used correctly, that is, always returning to the thread pool they originated from, we can consider that we are executing on a single thread.
Being on a single thread means all that scheduled work will be executed somewhere later, and a blocking instruction has no way to avoid the fact that, once it eventually runs, it will block the workflow at some point.
Is there a good way to wrap that blocking call into a non-blocking call?
No. You can never wrap blocking calls to make them non-blocking. If you could, async would just be a design pattern rather than a fundamental paradigm shift.
Also, why is the async framework not creating as many threads as I have requests (like 200)?
Async is built upon the thread pool which is designed not to create threads aggressively because they are so expensive. The whole point of (real) async is that it does not require as many threads as there are connections. You can handle 10,000 simultaneous connections with ~30 threads.
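As a rough illustration of that point, here is a toy, self-contained F# sketch (made-up numbers; Async.Sleep stands in for real non-blocking I/O) that starts 200 workflows in parallel and counts how many distinct thread-pool threads ever ran them:
open System.Threading
open System.Collections.Concurrent

let threadsSeen = ConcurrentDictionary<int, unit>()

let fakeRequest i = async {
    do! Async.Sleep 1000   // stands in for non-blocking I/O; no thread is held while "waiting"
    threadsSeen.TryAdd(Thread.CurrentThread.ManagedThreadId, ()) |> ignore
    return i
}

[ 1 .. 200 ]
|> List.map fakeRequest
|> Async.Parallel
|> Async.RunSynchronously
|> ignore

// typically prints a small number of thread ids, nowhere near 200
printfn "distinct thread-pool threads used: %d" threadsSeen.Count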
You seem to have a complete misunderstanding of what async is and what it is for. I'd suggest buying any of the F# books that cover this topic and reading up on it. In particular, your solution is not asynchronous because you just call your blocking StartGetFieldsValue member from inside your async workflow, defeating the purpose of async. You might as well just do Array.Parallel.map getFieldsValue instead.
Also, you want to use a purely functional API when doing things in parallel rather than mutating a ConcurrentDictionary in-place. So replace req.StartGetFieldsValue ret with
let! result = req.StartGetFieldsValue()
...
return result
and replace ignore with dict.
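For illustration, here is a minimal self-contained sketch of that shape; fakeFieldsValue is only a stand-in for the real Bloomberg request (the actual StartGetFieldsValue signature is an assumption), and the tickers are made up:
// Stand-in for the real Bloomberg call; Async.Sleep fakes the non-blocking wait.
let fakeFieldsValue (ticker: string) = async {
    do! Async.Sleep 100
    return sprintf "value-for-%s" ticker
}

let getFieldsValue ticker = async {
    let! result = fakeFieldsValue ticker
    return ticker, result   // return a key/value pair instead of mutating a shared dictionary
}

let results =
    [ "IBM US Equity"; "MSFT US Equity"; "GOOG US Equity" ]   // made-up tickers
    |> List.map getFieldsValue
    |> Async.Parallel
    |> Async.RunSynchronously
    |> dict                  // build the lookup table from the results at the end
The workflows only return values, and the dictionary is built once at the end, so nothing is mutated concurrently.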
Here is a solution I made that seems to be working.
Of course it does not use only async (with a lowercase a), but Async as well.
I define a simple type that has one event, finished, and one method, asyncstart, which takes the method to run as an argument. It launches the method, then fires the finished event at the appropriate place (in my case I had to capture the synchronization context, etc.).
Then on the consumer side, I simply use
let! completion = Async.AwaitEvent wrapper.finished |> Async.StartChild
let! completed = completion
While running this code, on the consumer side I use only async calls, making my code non-blocking. Of course, there has to be some thread somewhere that is blocked, but this happens outside of my main serving loop, which remains reactive and responsive.
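For reference, here is a minimal sketch of what such a wrapper type could look like; the member names follow the description above, but everything else (the use of ThreadPool.QueueUserWorkItem, the missing synchronization-context capture) is a guess rather than the actual implementation:
open System.Threading

type Wrapper<'T>() =
    let finishedEvent = new Event<'T>()

    // the event the consumer awaits with Async.AwaitEvent wrapper.finished
    member this.finished = finishedEvent.Publish

    // runs the blocking function on a pool thread and fires finished with its result
    member this.asyncstart (blockingCall: unit -> 'T) =
        ThreadPool.QueueUserWorkItem(fun _ ->
            let result = blockingCall ()   // this pool thread blocks...
            finishedEvent.Trigger result)  // ...but the consumer's workflow does not
        |> ignore
The Async.AwaitEvent / Async.StartChild lines above then consume wrapper.finished without ever blocking the serving loop.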
Related
https://www.aeracode.org/2018/02/19/python-async-simplified/
It's not going to ruin your day if you call a non-blocking synchronous
function, like this:
def get_chat_id(name):
    return "chat-%s" % name

async def main():
    result = get_chat_id("django")
However, if you call a blocking function, like the Django ORM, the
code inside the async function will look identical, but now it's
dangerous code that might block the entire event loop as it's not
awaiting:
def get_chat_id(name):
    return Chat.objects.get(name=name).id

async def main():
    result = get_chat_id("django")
You can see how it's easy to have a non-blocking function that
"accidentally" becomes blocking if a programmer is not super-aware of
everything that calls it. This is why I recommend you never call
anything synchronous from an async function without doing it safely,
or without knowing beforehand it's a non-blocking standard library
function, like os.path.join.
So I am looking for a way to automatically catch instances of this mistake. Are there any linters for Python which will report sync function calls from within an async function as a violation?
Can I configure Pylint or Flake8 to do this?
I don't necessarily mind if it catches the first case above too (which is harmless).
Update:
On one level I realise this is a stupid question, as pointed out in Mikhail's answer. What we need is a definition of a "dangerous synchronous function" that the linter should detect.
So for the purposes of this question, I give the following definition:
A "dangerous synchronous function" is one that performs I/O operations. These are the same operations which have to be monkey-patched by gevent, for example, or which have to be wrapped in async functions so that the event loop can context-switch.
(I would welcome any refinement of this definition)
So I am looking for a way to automatically catch instances of this
mistake.
Let's make a few things clear: the mistake discussed in the article is calling any long-running sync function inside an asyncio coroutine (it can be an I/O-blocking call or just a pure-CPU function with a lot of calculations). It's a mistake because it will block the whole event loop, which will lead to a significant performance downgrade (more about it here, including the comments below the answer).
Is there any way to catch this situation automatically? Before run time, no: no one except you can predict whether a particular function will take 10 seconds or 0.01 seconds to execute. At run time it's already built into asyncio; all you have to do is enable debug mode.
If you're afraid some sync function can vary between being long-running (detectable at run time in debug mode) and short-running (not detectable), just execute the function in a background thread using run_in_executor; that guarantees the event loop will not be blocked.
Let's say I have a function A that uses a function B, which uses C, and so on:
A -> B -> C -> D and E
Now assume that function D has to use async/await. This means I have to apply async/await to the call of function C, and then to the call of function B, and so on. I understand that this is because they depend on each other, and if one of them is waiting for a function to resolve then, transitively, they all have to. What alternatives do I have to make this cleaner?
There is a way to do this, but you'll lose the benefits of async-await.
One of the reasons for async-await is that if your thread has to wait for another process to complete, like a read or write to the hard disk, a database query, or fetching something from the internet, your thread can do other useful work instead of just waiting idly for that other process to complete.
This is done with the keyword await. When your thread sees the await, it doesn't really wait idly. Instead, it remembers the context (think of variable values, parts of the call stack, etc.) and goes up the call stack to see whether the caller is awaiting. If it isn't, the thread starts executing the caller's statements until it sees an await, then goes up the call stack again, and so on.
Once the other process is completed, the thread (or maybe another thread from the thread pool that acts as if it were the original thread) continues with the statements after the await until the procedure is finished, or until it sees another await.
To be able to do this, your procedure must know that when it sees an await, the context needs to be saved and the thread must act as described above. That is why you declare the method async.
That is why typical async functions are functions that ask other processes to do something lengthy: disk access, database access, internet communication. Those are the functions where you'll find a ReadAsync / WriteAsync next to the standard Read / Write functions. You'll also find them in classes designed to call those processes, like StreamReader, TextWriter, etc.
If you have a non-async method that calls an async function and waits until the async function completes before returning, the go-up-the-call-stack-to-see-whether-the-caller-is-awaiting chain stops there: your program acts as if it were not using async-await at all.
Almost!
If you start an awaitable task without awaiting it, and do something else before you wait for the result, then that something else is executed instead of the idle wait the thread would otherwise have done had you used the non-async version.
How to call an async function from a non-async function:
ICollection<string> ReadData(...)
{
    // Call the async function; don't await yet, you have more interesting things to do first.
    var taskReadLines = myReader.ReadLinesAsync(...);

    DoSomethingInteresting();

    // Now you need the data from the read task.
    // However, because this method is not async, you can't await.
    // This Wait really is an idle wait.
    taskReadLines.Wait();
    ICollection<string> readLines = taskReadLines.Result;
    return readLines;
}
Your callers won't benefit from async-await; however, your thread is able to do something interesting while the lines have not yet been read.
I know that writing async functions is recommended in Node.js. However, I feel it's not really necessary to make non-I/O operations asynchronous, and my code just gets less convenient. For example:
// sync
function now() {
    return new Date().getTime();
}
console.log(now());

// async
function now(callback) {
    callback(new Date().getTime());
}
now(function (time) {
    console.log(time);
});
Does the sync method block the CPU in this case? Is the difference significant enough that I should use async instead?
Async style is necessary if the method being called can block for a long time waiting for I/O. Since the Node.js event loop is single-threaded, you want to yield to the event loop during I/O. If you didn't, only one I/O operation could be outstanding at any point in time, which would make the application completely non-scalable.
Using callbacks for CPU work accomplishes nothing. It does not unblock the event loop. In fact, for CPU work it is not possible to unblock the event loop. The CPU must be occupied for a certain amount of time and that is unavoidable. (Disregarding things like web workers here).
Callbacks are not a good thing in themselves; you use them when you have to. They are a necessary consequence of the Node.js event-loop I/O model.
That said, if you later plan on introducing I/O into now, you might adopt a callback style early even if it is not strictly necessary. Changing from synchronous calls to callback-based calls later can be time-consuming, because the callback style is viral.
By adding a callback to a function's signature, the code communicates that something asynchronous might happen in this function and the function will call the callback with an error and/or result object.
If a function does nothing asynchronous and does not involve conditions where a (non-programming) error may occur, don't use a callback signature; simply return the computation result.
Functions with callbacks are not very convenient for the caller to handle, so avoid callbacks until you really need them.
I'm trying to understand the difference between the TPL and async/await when it comes to thread creation.
I believe the TPL (TaskFactory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread in the thread pool. That is, of course, unless you use TaskCreationOptions.LongRunning, which creates a new thread.
I thought async/await would work similarly, so essentially:
TPL:
Task.Factory.StartNew(() => DoSomeAsyncWork())
    .ContinueWith(
        (antecedent) => {
            DoSomeWorkAfter();
        }, TaskScheduler.FromCurrentSynchronizationContext());
Async/Await:
await DoSomeAsyncWork();
DoSomeWorkAfter();
would be identical. From what I've been reading, it seems like async/await only "sometimes" creates a new thread. So when does it create a new thread, and when doesn't it? If you were dealing with I/O completion ports, I can see it not having to create a new thread, but otherwise I would think it would have to. I guess my understanding of FromCurrentSynchronizationContext has always been a bit fuzzy too; I always thought it was, in essence, the UI thread.
I believe the TPL (TaskFactory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread in the thread pool.
Pretty much.
From what I've been reading, it seems like async/await only "sometimes" creates a new thread.
Actually, it never does. If you want multithreading, you have to implement it yourself. There's a new Task.Run method that is just shorthand for Task.Factory.StartNew, and it's probably the most common way of starting a task on the thread pool.
If you were dealing with I/O completion ports, I can see it not having to create a new thread, but otherwise I would think it would have to.
Bingo. So methods like Stream.ReadAsync will actually create a Task wrapper around an IOCP (if the Stream has an IOCP).
You can also create some non-I/O, non-CPU "tasks". A simple example is Task.Delay, which returns a task that completes after some time period.
The cool thing about async/await is that you can queue some work to the thread pool (e.g., Task.Run), do some I/O-bound operation (e.g., Stream.ReadAsync), and do some other operation (e.g., Task.Delay)... and they're all tasks! They can be awaited or used in combinations like Task.WhenAll.
Any method that returns Task can be awaited - it doesn't have to be an async method. So Task.Delay and I/O-bound operations just use TaskCompletionSource to create and complete a task - the only thing being done on the thread pool is the actual task completion when the event occurs (timeout, I/O completion, etc).
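To make that last point concrete, here is a small F# sketch of the TaskCompletionSource idea: a hypothetical Task.Delay-like helper (the real Task.Delay is more elaborate) where no thread is blocked while the returned task is pending; a timer callback completes it when the time is up.
open System
open System.Threading
open System.Threading.Tasks

// Hypothetical Task.Delay-like helper built on TaskCompletionSource.
let delayTask (milliseconds: int) : Task =
    let tcs = TaskCompletionSource<unit>()
    let timer =
        new Timer((fun _ -> tcs.TrySetResult(()) |> ignore),
                  null, milliseconds, Timeout.Infinite)
    // keep the timer alive until the task completes, then dispose it
    tcs.Task.ContinueWith(Action<Task<unit>>(fun _ -> timer.Dispose())) |> ignore
    tcs.Task :> Task
The returned task can be awaited like any other, for example with do! delayTask 500 |> Async.AwaitTask inside an F# workflow, or with await in C#.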
I guess my understanding of FromCurrentSynchronizationContext has always been a bit fuzzy too. I always thought it was, in essence, the UI thread.
I wrote an article on SynchronizationContext. Most of the time, SynchronizationContext.Current:
is a UI context if the current thread is a UI thread.
is an ASP.NET request context if the current thread is servicing an ASP.NET request.
is a thread pool context otherwise.
Any thread can set its own SynchronizationContext, so there are exceptions to the rules above.
Note that the default Task awaiter will schedule the remainder of the async method on the current SynchronizationContext if it is not null; otherwise it goes on the current TaskScheduler. This isn't so important today, but in the near future it will be an important distinction.
I wrote my own async/await intro on my blog, and Stephen Toub recently posted an excellent async/await FAQ.
Regarding "concurrency" vs "multithreading", see this related SO question. I would say async enables concurrency, which may or may not be multithreaded. It's easy to use await Task.WhenAll or await Task.WhenAny to do concurrent processing, and unless you explicitly use the thread pool (e.g., Task.Run or ConfigureAwait(false)), then you can have multiple concurrent operations in progress at the same time (e.g., multiple I/O or other types like Delay) - and there is no thread needed for them. I use the term "single-threaded concurrency" for this kind of scenario, though in an ASP.NET host, you can actually end up with "zero-threaded concurrency". Which is pretty sweet.
async/await basically simplifies the ContinueWith methods (continuations in continuation-passing style).
It does not introduce concurrency; you still have to do that yourself (or use the Async version of a framework method).
So, the C# 5 version would be:
await Task.Run( () => DoSomeAsyncWork() );
DoSomeWorkAfter();
This is probably one of the most elementary things in F#, but I've just realized I've got no idea what's going on behind the scenes.
let testMe() =
    async {
        printfn "before!"
        do! myAsyncFunction() // waits without blocking a thread
        printfn "after!"
    }

testMe() |> Async.RunSynchronously
What's happening at do! myAsyncFunction()? I'm aware that it waits for myAsyncFunction to finish before moving on, but how can it do that without blocking the thread?
My best guess is that everything after do! myAsyncFunction() is passed along as a continuation, which gets executed on the same thread myAsyncFunction() was scheduled on once myAsyncFunction() has finished executing... but then again, that's just a guess.
As you correctly pointed out, myAsyncFunction is passed a continuation, and it calls it to resume the rest of the asynchronous workflow when it completes.
You can understand it better by looking at the desugared version of the code:
let testMe() =
    async.Delay(fun () ->
        printfn "before!"
        async.Bind(myAsyncFunction(), fun () ->
            printfn "after!"
            async.Zero()))
The key thing is that the asynchronous workflow created by myAsyncFunction is given to the Bind operation, which starts it and gives it the second argument (a continuation) as a function to call when the workflow completes. If you simplify a lot, an asynchronous workflow could be defined like this:
type MyAsync<'T> = (('T -> unit) * (exn -> unit)) -> unit
So, an asynchronous workflow is just a function that takes some continuations as arguments. When it gets the continuations, it does something (e.g. creates a timer or starts I/O) and then eventually calls them. The question "On which thread are the continuations called?" is an interesting one. In a simple model, it depends on the MyAsync that you're starting: it may decide to run them anywhere it wants (e.g. Async.SwitchToNewThread runs them on a new thread). The F# library includes some additional handling that makes GUI programming using workflows easier.
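To make that concrete, here is a rough sketch of how a return operation and Bind could be written against this simplified type (ignoring exception handling, cancellation and thread hopping, which the real Async implementation takes care of):
// repeating the simplified definition from above
type MyAsync<'T> = (('T -> unit) * (exn -> unit)) -> unit

// wrapping a plain value: call the success continuation right away
let returnValue (v: 'T) : MyAsync<'T> =
    fun (onSuccess, _onError) -> onSuccess v

// Bind: run the first workflow and, when it calls its success continuation,
// build the rest of the workflow from the result and run it with the same continuations
let bind (work: MyAsync<'T>) (rest: 'T -> MyAsync<'R>) : MyAsync<'R> =
    fun (onSuccess, onError) ->
        work ((fun result -> rest result (onSuccess, onError)), onError)
Bind never blocks: it only tells the first workflow what to do once that workflow eventually calls its continuation, which is exactly what happens at the do! in the example above.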
Your example uses Async.RunSynchronously, which blocks the current thread, but you could also use Async.Start, which just starts the workflow and ignores the result when it is produced. The implementation of Async.Start could look like this:
let Start (async:MyAsync<unit>) = async (ignore, ignore)