Does an asynchronous call always create/call a new thread? - multithreading

Does an asynchronous call always create a new thread?
Example:
If JavaScript is single-threaded, then how can it do an async postback? Is it actually blocking until it gets a callback? If so, is this really an async call?

This is an interesting question.
Asynchronous programming is a paradigm of programming that is principally single-threaded, i.e. "following one thread of continuous execution".
You refer to JavaScript, so let's discuss that language in the environment of a web browser. A web browser runs a single thread of JavaScript execution in each window; it handles events (such as onclick="someFunction()") and network connections (such as XMLHttpRequest calls).
<script>
  function performRequest() {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.open("GET", "someurl", true);
    xmlhttp.onreadystatechange = function() {
      if (xmlhttp.readyState == 4) {
        alert(xmlhttp.responseText);
      }
    };
    xmlhttp.send();
  }
</script>
<span onclick="performRequest()">perform request</span>
(A simplified example, for demonstration of the concepts only.)
In order to do everything in an asynchronous manner, the controlling thread has what is known as a 'main loop'. A main loop looks kind of like this:
while (true) {
    event = nextEvent(all_event_sources);
    handler = findEventHandler(event);
    handler(event);
}
It is important to note that this is not a 'busy loop'. This is kind of like a sleeping thread, waiting for activity to occur. Activity could be input from the user (Mouse Movement, a Button Click, Typing), or it could be network activity (The response from the server).
So in the example above,
When the user clicks on the span, a ButtonClicked event would be generated, findEventHandler() would find the onclick event on the span tag, and then that handler would be called with the event.
When the xmlhttp request is created, it is added to the all_event_sources list of event sources.
After the performRequest() function returns, the main loop is back at the nextEvent() step, waiting for the next event. At this point there is nothing 'blocking' further events from being handled.
The data comes back from the remote server, nextEvent() returns the network event, the event handler is found to be the onreadystatechange() method, that method is called, and an alert() dialog fires up.
It is worth noting that alert() is a blocking dialog. While that dialog is up, no further events can be processed. It's an eccentricity of the javascript model of web pages that we have a readily available method that will block further execution within the context of that page.

The JavaScript model is single-threaded. An asynchronous call does not create a new thread; instead, its callback is queued and later run on the same thread, loosely analogous to how a kernel defers work to an interrupt handler.
Yes, it makes sense to have asynchronous calls with a single thread. Here's how to think about it: when you call a function within a single thread, the state of the current method (i.e. its local variables) is pushed onto a stack. The subroutine is invoked and eventually returns, at which point the original state is popped off the stack.
With an asynchronous callback, the same thing happens! The difference is that the subroutine is invoked by the system, not by the currently running code.
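A rough sketch of that difference, assuming a couple of hypothetical helpers (computeNow and requestData are placeholders, not real APIs):
// Hypothetical helpers, for illustration only.
function computeNow() { return 42; }                  // ordinary synchronous work
function requestData(callback) {                      // simulates an asynchronous source
    setTimeout(function() { callback(42); }, 100);    // the system invokes the callback later
}

function syncExample() {
    var result = computeNow();    // pushed onto the stack, runs to completion, returns
    console.log("sync result: " + result);
}

function asyncExample() {
    requestData(function(result) {                    // here we only register a callback...
        console.log("async result: " + result);       // ...the system calls it later, on the
    });                                               // same thread, via the event loop
    console.log("asyncExample returned; the callback has not run yet");
}

syncExample();
asyncExample();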

A couple notes about JavaScript in particular:
XMLHttpRequests are non-blocking by default. The send() method returns immediately after the request has been relayed to the underlying network stack. A response from the server will schedule an invocation of your callback on the event loop as discussed by the other excellent answers.
This does not require a new thread. The underlying socket API is selectable, similar to java.nio.channels in Java.
It's possible to construct synchronous XMLHttpRequest objects by passing false as the third parameter to open(). This will cause the send() method to block until a response has been received from the server, thus placing the event loop at the mercy of network latency and potentially hanging the browser until network timeout. This is a Bad Thing™.
Firefox 3.5 will introduce honest-to-god multithreaded JavaScript with the Worker class. The background code runs in a completely separate environment and communicates with the browser window by scheduling callbacks on the event loop.
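A minimal sketch of that Worker messaging model ("worker.js" is a hypothetical file name here); the worker runs in its own environment and talks to the page only through messages dispatched on the event loop:
// In the page: spawn a worker and exchange messages with it.
var worker = new Worker("worker.js");
worker.onmessage = function(e) {
    // This callback runs on the page's own single thread, via the event loop.
    console.log("result from worker: " + e.data);
};
worker.postMessage(21);

// In worker.js: a completely separate environment, with no DOM access.
// onmessage = function(e) {
//     postMessage(e.data * 2);   // schedules a callback back on the page's event loop
// };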

In many GUI applications, an async call (like Java's invokeLater) merely adds the Runnable object to the GUI thread's queue. The GUI thread is already created, and it doesn't create a new thread. But threads aren't even strictly required for an asynchronous system. Take, for example, libevent, which uses select/poll/kqueue, etc. to make non-blocking calls to sockets and then fires callbacks to your code, completely without threads.
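The same "add work to the already-running loop" idea can be sketched in JavaScript with setTimeout: no new thread is created, the callback simply runs on a later turn of the existing event loop.
console.log("before");

// Queue a task for the existing event loop, much as invokeLater adds a
// Runnable to the already-running GUI thread's queue.
setTimeout(function() {
    console.log("runs later, on the same single thread");
}, 0);

console.log("after");   // printed before the queued callback runs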

No, but more than one thread will be involved.
An asynchronous call might launch another thread to do the work, or it might post a message into a queue on another, already running thread. The caller continues and the callee calls back once it processes the message.
If you wanted to do a synchronous call in this context, you'd need to post a message and actively wait for the callback to happen.
So in summary: More than one thread will be involved, but it doesn't necessarily create a new thread.

I don't know about javascript, but for instance in the Windows Forms world, asynchronous invocations can be made without multiple threads. This has to do with the way the Windows Message Pump operates. Basically a Windows Forms application sets up a message queue through which Windows places messages notifying it about events. For instance, if you move the mouse, messages will be placed on that queue. The Windows Forms application will be in an endless loop consuming all the messages that are thrown at it. According to what each message contains it will move windows around, repaint them or even invoke user-defined methods, amongst other things. Calls to methods are identified by delegates. When the application finds a delegate instance in the queue, it happily invokes the method referred by the delegate.
So, if you are in a method doing something and want to spawn some asynchronous work without creating a new thread, all you have to do is place a delegate instance into the queue, using the Control.BeginInvoke method. Now, this isn't actually multithreaded, but if you throw very small pieces of work at the queue, it will look multithreaded. If, on the other hand, you give it a time-consuming method to execute, the application will freeze until the method is done, which will look like a jammed application, even though it is doing something.
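The same trade-off can be sketched in JavaScript terms as a rough analogy only (this is not Windows Forms code): small units of work posted back to the already-running loop keep the UI responsive, while one long-running call freezes it.
// Responsive: do one small piece of work, then post the rest back onto the queue.
function processInChunks(items, handleItem) {
    if (items.length === 0) return;
    handleItem(items[0]);                             // a small unit of work
    setTimeout(function() {                           // yield to the loop, then continue
        processInChunks(items.slice(1), handleItem);
    }, 0);
}

// "Jammed": one long-running call keeps the loop busy until it finishes.
function processAllAtOnce(items, handleItem) {
    items.forEach(handleItem);                        // nothing else runs until this returns
}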

Related

Why using sync functions in nodejs is known as "blocking the event loop"?

If I understand correctly, the event loop is the mechanism that Node uses to resolve asynchronous operations and then pass their callbacks to the call stack, right? My question is: when I use a synchronous method (for example pbkdf2Sync), it sits in the call stack until it is finished, but it won't be moved to the EventLoop because it is not an async operation. So why is this known as "blocking the event loop" if in reality it is blocking everything, not just the event loop (which, as far as I understand, can continue working and will pass the callbacks to the call stack when they're finished)?
Is my understanding of the Node.js inner workings completely wrong? This topic specifically is hard to understand because every resource I read differs in some way or another, so even if I think I get the bigger picture, these are the "details" that confuse me.
What is blocking
Why using sync functions in nodejs is known as "blocking the event loop"?
In a nutshell, it's because the event loop can only process the next event when you return from whatever your current Javascript is doing and allow the event loop to look for the next event. A sync function blocks the interpreter until it finishes. So, the entire time a sync function is working and you're waiting for it to return, the interpreter is blocked and control is not returned back to the event loop. This blocks the event loop and also blocks your Javascript from running.
Single Thread
Nodejs runs your Javascript all with a single thread. Other threads are used internally, but your Javascript itself runs only in a single thread (we're assuming there is no use of WorkerThreads in your code). So, when you make a synchronous function call, that single thread that runs your Javascript is busy and blocked until the synchronous function call returns and can then continue executing more of your Javascript.
This blocks everything. It blocks running more of your Javascript after the synchronous function call and it blocks getting back to the event loop to run any other event handlers that are pending such as incoming network events, timers, completion events from other things such as disk I/O, etc... So, while this is blocking the event loop, it's also blocking running more of your own code after the function call.
Non-Blocking, Asynchronous Operations
On the other hand, asynchronous functions such as fs.readFile() don't block. They initiate their operation and return immediately. This allows the interpreter to continue running any more of your own Javascript after the call to fs.readFile(), and it also allows you to return from whatever event triggered your work in the first place, which returns control back to the event loop so it can service other waiting events or other events that will trigger in the future. fs.readFile() then does most of its work in native code (behind the scenes), outside of the main thread that runs your Javascript. So, these types of asynchronous functions don't block the event loop - instead they cooperate with the event loop so that other things can run while waiting for the completion of the asynchronous operation that was previously initiated. When they complete, they insert an event into the event loop that causes the event loop to call the completion callback at its earliest convenience (when it's not blocked).
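For example, a minimal Node.js sketch (the file path is just an illustrative placeholder):
const fs = require("fs");

fs.readFile("/etc/hosts", "utf8", (err, data) => {
    // Called later, from the event loop, once the read has completed.
    if (err) return console.error(err);
    console.log("file length: " + data.length);
});

console.log("readFile initiated; this line runs before the file has been read");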
Differences in Blocking
It's also worth noting that both synchronous and asynchronous operations block the execution of your Javascript and block the event loop until their function call returns. The difference is that an asynchronous operation returns from the function nearly immediately, long before the asynchronous operation itself is complete, and communicates its completion and/or eventual result back via a promise, callback or event (which are all callbacks at the lowest level of the event loop). The synchronous operation does not return until the operation itself is complete. So, the asynchronous operation only blocks for a very short duration while the operation is being initiated, whereas the synchronous operation blocks for the entire duration of the operation (until it completes).
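Since the question mentions pbkdf2Sync, here is a sketch of that difference using Node's crypto module (the password, salt and parameters are arbitrary illustrative values):
const crypto = require("crypto");

// Synchronous: blocks the single Javascript thread (and therefore the event
// loop) for the entire duration of the key derivation.
const keySync = crypto.pbkdf2Sync("password", "salt", 100000, 64, "sha512");
console.log("sync done, key length: " + keySync.length);

// Asynchronous: returns almost immediately; the hashing happens outside the
// main thread and the callback is queued on the event loop when it completes.
crypto.pbkdf2("password", "salt", 100000, 64, "sha512", (err, keyAsync) => {
    if (err) throw err;
    console.log("async done, key length: " + keyAsync.length);
});
console.log("pbkdf2 initiated; the event loop is free while it runs");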
More About the Event Loop
So, at what moment during the event loop is my Javascript code executed?
When control returns back to the event loop, it goes through several different phases looking for things to do. When it finds something to do, that "something" results in calling a Javascript callback that starts running some of your Javascript. For example if the "something to do" is a setTimeout() timer that is ready to fire, then it will call the Javascript callback that was passed to setTimeout(). That callback runs to its completion and only when your Javascript returns from that callback does the event loop regain control and get to look for the next event to run and call its callback.
it won't be moved to the EventLoop because it is not an async operation
This is not really the correct way to think about things. Things are not really "moved to the event loop".
A synchronous operation is just a blocking function call that returns when it returns and execution of any other Javascript is blocked until that blocking function call returns. Things are blocked because the single threaded interpreter running your Javascript is stuck waiting for this function to finish. It can't do anything else and the event loop is also blocked because it can't do anything until the interpreter returns control back to the event loop.
An asynchronous operation, on the other hand, initiates some operation (let's say it issues an http request to some other host) and then immediately returns, long before it has the result of that http request. Since this asynchronous operation returns before it has its result, it is considered non-blocking, and because it returns quickly, you can then return from whatever event caused your code to run and that will then return control back to the event loop. That allows the event loop to look for other events to handle and run their corresponding callbacks. Meanwhile, the asynchronous operation that was previously started has some native code associated with it (which may or may not be running in a native-code OS thread, depending upon what type of operation it is). But regardless, that native code is configured such that when the asynchronous operation completes, it will insert an event into the appropriate event queue. So, at some future point when nodejs has control back in the event loop, it will find that event and run the Javascript callback associated with that event, thus notifying the original Javascript code that the asynchronous operation is now complete and providing some sort of result or error code.
Example
As a simple example, let's say you run this code:
// timer that wants to fire in 1 second
setTimeout(function() {
    console.log("timer fired");
}, 1000);

// loop that blocks for 5 seconds
const start = Date.now();
while (Date.now() - start < 5000) { }

console.log("blocking loop finished");
This will output:
blocking loop finished
timer fired
Even though the timer was set to run 1 second from now, the while() loop blocked everything for 5 seconds so it wasn't until after the while() loop finished and returned back to the system that the event loop could look at what it had to do next and call the callback associated with the timer.
This while loop is similar to a blocking function such as pbkdf2Sync(). Both block the interpreter and don't return until they are finished and therefore nothing in the event loop gets a chance to run until they are finished.
A Simple Analogy
Here's a simple analogy. Imagine you need to contact your cable company to troubleshoot a problem. There are two ways for you to get ahold of them. The first way is that you call them and sit on the phone on hold for an hour waiting for a customer service representative to pick up your call. You can't really do much else while you're sitting on hold because you have to be right there waiting and ready to respond when someone finally comes on the line to help you. Thus, you are "blocked" from doing a lot of other things. This is a "blocking, synchronous operation". You can't really do much else while you're waiting on hold.
The second way you can call the cable company is to request a callback sometime in the future and when a representative is available, they will call you back. As soon as you're done requesting the callback, you can go about your business doing other things. You have to be able to answer an incoming call when it arrives, but other than that you're not blocked from doing many other things. This is a "non-blocking, asynchronous operation".
But, if while you're supposed to be able to answer your telephone callback from the second scenario above, you make another call and are on the phone for 30 minutes, the cable company is blocked from reaching you on the phone for the duration of your other call. In our analogy, you are "blocking the event loop" during your other call as the incoming event from the cable company can't be processed while you're on a call.

Can two callbacks execute code at the same time (in parallel) or not?

I am doing an I/O wait operation inside a for loop, and when all of the operations terminate I want to send the response to the server. Now I was just wondering: suppose two I/O operations terminate at exactly the same time, can their callbacks execute code at the same time (in parallel) or will they execute serially?
As far as I know, Node is concurrent but not parallel, so I don't think they will execute at the same time.
node.js runs Javascript with a single thread. That means that two pieces of Javascript can never be running at the exact same moment.
node.js processes I/O completion using an event queue. That means when an I/O operation completes, it places an event in the event queue and when that event gets to the front of the event queue and the JS interpreter has finished whatever else it was doing, then it will pull that event from the event queue and call the callback associated with it.
Because of this process, even if two I/O operations finish at basically the same moment, one of them will put its completion event into the event queue before the other (access to the event queue internally is likely controlled by a mutex so one will get the mutex before the other) and that one's completion callback will get into the event queue first and then called before the other. The two completion callbacks will not run at the exact same time.
Keep in mind that more than one piece of Javascript can be "in flight" or "in process" at the same time if it contains non-blocking I/O operations or other asynchronous operations. This is because when you "wait" for an asynchronous operation to complete in Javascript, you return control back to the system and you then resume processing only when your completion callback is called. While the JS interpreter is waiting for an asynchronous I/O operation to complete and the associated callback to be called, other Javascript can run. But, there's still only one piece of Javascript actually ever running at a time.
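A small Node.js sketch of that behaviour (the file names are illustrative placeholders): even if the two reads finish at essentially the same moment, their completion callbacks are pulled off the event queue and run one after the other, never at the same time.
const fs = require("fs");

let done = 0;
["a.txt", "b.txt"].forEach((name) => {
    fs.readFile(name, "utf8", (err, data) => {
        // Each completion callback runs on the single Javascript thread,
        // strictly one after the other.
        if (err) return console.error(err);
        done++;
        if (done === 2) console.log("both reads handled, serially");
    });
});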
As far as I know, Node is concurrent but not parallel, so I don't think they will execute at the same time.
Yes, that's correct. That's not exactly how I'd describe it since "concurrent" and "parallel" don't have strict technical definitions, but based on what I think you mean by them, that is correct.
You can use Promise.all (the names below just stand in for your own loop and I/O operation):
let promises = [];
for (const item of items) {               // "items" is whatever your for loop iterates over
    promises.push(doIoOperation(item));   // each call represents your IO operation and returns a promise
}
Promise.all(promises).then((results) => {
    // here you send the response
});
You don't have to worry about the execution order.
Node.js is designed to be single-threaded, so basically there is no way that two IO operations could terminate at exactly the same time. They will just finish one by one.

NSURLSession dataTaskWithURL

I am using NSURLSession dataTaskWithURL:completionHandler. It looks like the completionHandler is executed on a thread which is different from the thread (in my case, the main thread) that calls dataTaskWithURL. So my question is: since it is asynchronous, is it possible that the main thread exits but the completionHandler thread is still running because the response has not come back? That is the case I am trying to avoid. If this could happen, how should I solve the problem? BTW, I am building this as a framework, not an application. Thanks.
In the first part of your question you seem unsure that the completion handler is running on a different thread. To confirm this, let's look at the NSURLSession Class Reference. If we look at the "Creating a Session" section, the description of the following method gives the answer.
+ sessionWithConfiguration:delegate:delegateQueue:
Swift
init(configuration configuration: NSURLSessionConfiguration,
     delegate delegate: NSURLSessionDelegate?,
     delegateQueue queue: NSOperationQueue?)
Objective-C
+ (NSURLSession *)sessionWithConfiguration:(NSURLSessionConfiguration *)configuration
                                  delegate:(id<NSURLSessionDelegate>)delegate
                             delegateQueue:(NSOperationQueue *)queue
The parameters table describes the queue parameter (an NSOperationQueue) with the following quote.
An operation queue for scheduling the delegate calls and completion handlers. The queue need not be a serial queue. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
So we can see that the default behavior is to provide a queue, whether supplied by the developer or created by the class itself. We can see this again in the discussion for the method + sessionWithConfiguration:
Discussion
Calling this method is equivalent to calling sessionWithConfiguration:delegate:delegateQueue: with a nil delegate and queue.
If you would like more information, you should read Apple's Concurrency Programming Guide. This is also useful in understanding Apple's approach to threading in general.
So the completion handler from - dataTaskWithURL:completionHandler: is running on a different queue, with queues normally providing their own thread(s). This leads to the main component of your question: can the main thread exit while the completion handler is still running?
The concise answer is no, but why?
To answer this we again turn to Apple's documentation, to a document that everyone should read early in their app developer career!
The App Programming Guide
The Main Run Loop
An app’s main run loop processes all user-related events. The
UIApplication object sets up the main run loop at launch time and uses
it to process events and handle updates to view-based interfaces. As
the name suggests, the main run loop executes on the app’s main
thread. This behavior ensures that user-related events are processed
serially in the order in which they were received.
All of the user interaction happens on the main thread: no main thread, no main run loop, no app! So the condition your question mentions should never exist.
Apple seems more concerned with you doing long-running work on the main thread. Check out the section "Move Work off the Main Thread"...
Be sure to limit the type of work you do on the main thread of your
app. The main thread is where your app handles touch events and other
user input. To ensure that your app is always responsive to the user,
you should never use the main thread to perform long-running or
potentially unbounded tasks, such as tasks that access the network.
Instead, you should always move those tasks onto background threads.
The preferred way to do so is to use Grand Central Dispatch (GCD) or
NSOperation objects to perform tasks asynchronously.
I know this answer is long-winded, but I felt the need to offer insight and detail in answering your question - "the why" is just as important, and it was good review :)
NSURLSessionTasks always run in the background by default; that's why we have a completion handler, which is called when we get a response from the web service.
If you don't get any response, check your request URL and whether the HTTPHeaderFields are set properly.
Post your code so that we can help further.
I just asked the same question, then figured out the answer. The thread of the completion handler is set up in the init of the NSURLSession.
From the documentation:
init(configuration configuration: NSURLSessionConfiguration,
     delegate delegate: NSURLSessionDelegate?,
     delegateQueue queue: NSOperationQueue?)
queue - A queue for scheduling the delegate calls and completion handlers. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
My code that sets up the completion handler on the main thread:
var session = NSURLSession(configuration: configuration, delegate:nil, delegateQueue:NSOperationQueue.mainQueue())
(Shown in Swift; Objective-C is the same.) Post more code if this does not solve it.

Node.js nested callbacks

I have a very basic question about node.js programming.
I am finding it interesting as I start to understand it more deeply.
I came across the following code:
abc.do1('operation', 2, function (err, res1) {
    if (err) {
        return callback(err);
    }
    def.do2('operation2', function (res2) {
        // ...
    });
});
My question is: since def.do2 is written in the callback of abc.do1, is it true that def.do2 will be executed only after the 'operation' of abc.do1 is completed and its callback function gets called? If yes, is this good programming practice, given that we talk only about asynchronous and non-blocking code in node.js?
Yes, you're correct: def.do2() is executed after abc.do1() is done. However, this is done on purpose, to make sure that do1() is finished before do2() can start. If they were meant to run in parallel, do2() would have been placed outside of do1()'s callback. This code is not exactly blocking either: after do1() is started, execution continues with everything below the do1() call (outside of do1's callback); only do2() waits until do1() is done, which is the intent.
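A sketch of both arrangements, using the question's hypothetical abc.do1/def.do2 calls:
// Sequential: do2 starts only after do1 has finished and called back.
abc.do1('operation', 2, function (err, res1) {
    if (err) return callback(err);
    def.do2('operation2', function (res2) {
        // both results are available here
    });
});

// Independent: both operations are started right away and their callbacks
// fire in whatever order the operations happen to complete.
abc.do1('operation', 2, function (err, res1) { /* ... */ });
def.do2('operation2', function (res2) { /* ... */ });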
Yes, you are absolutely correct, and you have given a correct example of a callback function.
The main browser process is a single-threaded event loop. If you execute a long-running operation within a single-threaded event loop, the process "blocks". This is bad because the process stops processing other events while waiting for your operation to complete. alert() is one of the few blocking browser methods: if you call alert('test'), you can no longer click links, perform ajax queries, or interact with the browser UI.
In order to prevent blocking on long-running operations, the XMLHttpRequest provides an asynchronous interface. You pass it a callback to run after the operation is complete, and while it is processing it cedes control back to the main event loop instead of blocking.
There's no reason to use a callback unless you want to bind something to an event handler, or your operation is potentially blocking and therefore requires an asynchronous programming interface.
See this video
http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/

How to do asynchronous programming in Delphi?

I have an application, where most of the actions take some time and I want to keep the GUI responsive at all times. The basic pattern of any action triggered by the user is as follows:
prepare the action (in the main thread)
execute the action (in a background thread while keeping the gui responsive)
display the results (in the main thread)
I tried several things to accomplish this but all of them are causing problems in the long run (seemingly random access violations in certain situations).
1) Prepare the action, then invoke a background thread and, at the end of the background thread, use Synchronize to call an OnFinish event in the main thread.
2) Prepare the action, then invoke a background thread and, at the end of the background thread, use PostMessage to inform the GUI thread that the results are ready.
3) Prepare the action, then invoke a background thread, then busy-wait (while calling Application.ProcessMessages) until the background thread is finished, then proceed with displaying the results.
I cannot come up with another alternative, and none of these worked perfectly for me. What is the preferred way to do this?
1) Is the 'original Delphi' way. It forces the background thread to wait until the synchronized method has been executed and exposes the system to more deadlock potential than I am happy with. TThread.Synchronize has been rewritten at least twice. I used it once, on D3, and had problems. I looked at how it worked. I never used it again.
2) Is the design I use most often. I use app-lifetime threads (or thread pools), create inter-thread comms objects and queue them to background threads using a producer-consumer queue based on a TObjectQueue descendant. The background thread/s operate on the data/methods of the object, store results in the object and, when complete, PostMessage() the object (cast to lParam) back to the main thread for GUI display of results in a message-handler (cast the lParam back again). The background threads and the main GUI thread then never have to operate on the same object at the same time and never have to directly access any fields of each other.
I use a hidden window of the GUI thread, (created with RegisterWindowClass and CreateWindow), for the background threads to PostMessage to, comms object in LParam and 'target' TWinControl, (usually a TForm class), as WParam. The trivial wndproc for the hidden window just uses TWinControl.Perform() to pass on the LParam to a message-handler of the form. This is safer than PostMessaging the object directly to a TForm.handle - the handle can, unfortunately, change if the window is recreated. The hidden window never calls RecreateWindow() and so its handle never changes.
Producer-consumer queues 'out from GUI', inter-thread comms classes/objects and PostMessage() 'in to GUI' WILL work well - I've been doing it for decades.
Re-using the comms objects is fairly easy too - just create a load in a loop at startup, (preferably in an initialization section so that the comms objects outlive all forms), and push them onto a P-C queue - that's your pool. It's easier if the comms class has a private field for the pool instance - the 'releaseBackToPool' method then needs no parameters and, if there is more than one pool, ensures that the objects are always released back to their own pool.
3) Can't really improve on David Heffernan's comment. Just don't do it.
You can implement the pattern in question by using OTL, as demonstrated by the OTL author here.
You could communicate data between threads as messages.
Thread1:
allocate memory for a data structure
fill it in
send a message to Thread2 with the pointer to this structure (you could either use Windows messages or implement a queue, ensuring its enqueue and dequeue methods don't have race conditions)
possibly receive a response message from Thread2...
Thread2:
receive the message with the pointer to the data structure from Thread1
consume the data
deallocate the data structure's memory
possibly send a message back to Thread1 in a similar fashion (perhaps reusing the data structure, but then you don't deallocate it)
You may end up with more than one non-GUI thread if you want your GUI not only to stay live but also to respond to new input while earlier input that takes a long time to process is still being handled.
