node-amqp, multiple bind callbacks - node.js

I would like some pointers on best practice. I need to wait for a binding to complete before polling for data through it, and much of this happens in parallel all the way back to the client. But if there is already a binding operation in progress, issuing another one will replace the callback.
If I would like to patch node-amqp to support a separate callback for each binding operation, how would I proceed?

The short answer is that this is not possible in node-amqp. If you call bind on a queue that already has a binding operation in progress, the callback will be replaced. So only one callback will be called for concurrent bindings on the same queue, namely the last one passed.
For this reason, and others, I have switched to using amqp-coffee, which handles the binding callback differently.
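As a rough sketch of a workaround inside node-amqp itself, you could serialize bind operations per queue so that only one bind, with its own callback, is ever in flight at a time. This assumes your node-amqp version's queue.bind accepts a completion callback (older versions only emit the 'queueBindOk' event), and the helper name is made up for the example:

function makeSerializedBinder(queue) {
  // Serialize bind operations so node-amqp never has two concurrent binds
  // (and therefore never replaces a pending callback) on the same queue.
  var pending = Promise.resolve();

  return function bind(exchange, routingKey) {
    pending = pending.then(function () {
      return new Promise(function (resolve) {
        queue.bind(exchange, routingKey, function () {
          resolve(); // this callback belongs to exactly one bind operation
        });
      });
    });
    return pending; // resolves once this particular binding is complete
  };
}

// Usage: each caller waits for its own binding before polling for data.
// var bind = makeSerializedBinder(queue);
// bind('my-exchange', 'route.a').then(function () { /* start polling */ });
// bind('my-exchange', 'route.b').then(function () { /* start polling */ });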

Related

Cancel scheduled operation for azure function durable entities

Is it possible to cancel a scheduled operation in an Azure Functions durable entity? Below is some example code. I want to cancel the call to the operation "DeviceTimeout" after it is scheduled.
Entity.Current.SignalEntity(Entity.Current.EntityId, DateTime.UtcNow.Add(TimeSpan.FromSeconds(30)), nameof(DeviceTimeout));
Unfortunately not. There is an open issue requesting this ability, but as far as I know it has not been implemented yet:
I can't think of a very straightforward API to provide this. One approach would be to include some sort of unique identifier for the signal so that a cancellation request can be precisely matched to the signal being cancelled. But there are some open questions about what exactly the implementation would have to guarantee, and whether that could lead to new issues. For example, would we need to store a cancellation request that we can't match to a signal until such a signal arrives? What if it never arrives?
I suppose the next best thing is to just exit immediately once the operation is executed.
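If it helps, here is the same idea sketched in the Node.js durable-functions flavour (the class-based C# entity from the question would carry an equivalent boolean property); the cancelTimeout operation name is made up for this example:

const df = require('durable-functions');

// Entity sketch: "cancelling" just records a flag; when the scheduled
// DeviceTimeout signal eventually arrives, the operation exits immediately.
module.exports = df.entity(function (context) {
  const state = context.df.getState(function () {
    return { cancelled: false };
  });

  switch (context.df.operationName) {
    case 'cancelTimeout':
      state.cancelled = true;
      break;
    case 'DeviceTimeout':
      if (state.cancelled) break; // the "exit immediately" workaround
      // ...handle the real timeout here...
      break;
  }

  context.df.setState(state);
});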

Can the same line of Node.js code run at the same time?

I'm pretty new to node and am trying to set up an Express server. The server will handle various requests and, if any of them fail, call a common failure function that will send an e-mail. If I were doing this in something like Java, I'd likely use something like a synchronized block and a boolean to allow the first entrance into the code to send the mail.
Is there anything like a synchronized block in Node? I believe node is single threaded and has a few helper threads to handle asynchronous/callback code. Is it at all possible that the same line of code could run at exactly the same time in Node?
Thanks!
Can the same line of Node.js code run at the same time? Is it at all possible that the same line of code could run at exactly the same time in Node?
No, it is not. Your Javascript in node.js is entirely single threaded. An event is pulled from the event queue. That calls a callback associated with that event. That callback runs until it returns. No other events can be processed until that first one returns. When it returns, the interpreter pulls the next event from the event queue and then calls the callback associated with it.
This does not mean that there are no concurrency issues in node.js. There can be, but they are caused not by code running at the same physical time and creating conflicting access to shared variables (as can happen in threaded languages like Java). Concurrency issues in node.js can instead be caused by the asynchronous nature of I/O. In the asynchronous case, you call an asynchronous function and pass it a callback (or expect a promise in return). Your code then continues on and returns to the interpreter. Some time later, an event occurs inside node.js's native code that adds something to the event queue. When the interpreter is free from running other Javascript, it processes that event and calls your callback, which causes more of your code to run.
While all this is "in process", other events are free to run and other parts of your Javascript can run. So the exposure to concurrency issues comes not from simultaneous execution of two pieces of code, but from one piece of your code running while another piece of your code is waiting for a callback to occur. Both pieces of code are "in process". They are not "running" at the same time, but both operations are waiting for something else to happen in order to complete. If these two operations access variables in ways that conflict with each other, then you can have a concurrency issue in Javascript. Because none of this is pre-emptive (like threads in Java), it's all very predictable and much easier to design for.
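To make that interleaving concrete, here is a small hypothetical sketch (using modern async/await and the fs promises API for brevity) in which two "in process" operations lose an update even though no two lines ever run at the same time:

const fs = require('fs/promises');

let hits = 0;

async function recordHit() {
  const before = hits;                        // read shared state
  await fs.appendFile('hits.log', 'hit\n');   // yield to the event loop here
  hits = before + 1;                          // write based on a possibly stale read
}

// Both calls start before either finishes, so hits ends up 1, not 2.
Promise.all([recordHit(), recordHit()]).then(function () {
  console.log(hits);
});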
Is there anything like a synchronized block in Node?
No, there is not. It is simply not needed in your Javascript code. If you want to protect something from being modified by some other asynchronous operation while your own asynchronous operation is waiting to complete, you can use simple flags or variables in your code. Because there's no preemption, a simple flag will work just fine.
I believe node is single threaded and has a few helper threads to handle asynchronous/callback code.
Node.js runs your Javascript as single threaded. Internally, in its own native code, it does use threads to do its work; for example, the asynchronous file system access code internal to node.js uses threads for disk I/O. But these threads are only internal, and their result is not to call your Javascript directly but to insert events into the event queue, so all your Javascript is serialized through that queue: pull an event from the queue, run the callback associated with it, wait for that callback to return, pull the next event, repeat...
The server will handle various requests and, if any of them fail, call a common failure function that will send an e-mail. If I were doing this in something like Java, I'd likely use something like a synchronized block and a boolean to allow the first entrance into the code to send the mail.
We'd really have to see what your code looks like to understand what exact problem you're trying to solve. I'd guess that you can just use a simple boolean (regular variable) in node.js, but we'd have to see your code to really understand what you're doing.
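As a rough sketch of that simple-boolean approach (sendFailureEmail is a hypothetical stand-in for whatever mailer you use):

// Because nothing preempts this function between the check and the assignment,
// only the first failing request ever triggers the e-mail.
// sendFailureEmail(err, cb) is assumed to be defined elsewhere.
let failureEmailSent = false;

function reportFailure(err) {
  if (failureEmailSent) return;   // later failures skip the e-mail
  failureEmailSent = true;        // set before any async work, so no race
  sendFailureEmail(err, function (mailErr) {
    if (mailErr) console.error('could not send failure e-mail', mailErr);
  });
}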

Purpose behind context.succeed() and callback()

After looking at the AWS Lambda documentation and some Stack Overflow questions (especially this one: Context vs Callback in AWS Lambda), I'm still a little confused about what the purpose of callback or context.succeed() is. Also, what was the original reason for having context.succeed() back when callback couldn't be used?
I'm asking because I was given a Lambda function that uses both calls, and I don't know why one was chosen over the other at a given point.
Thanks!
From this article:
Context.succeed [is] more than just bookkeeping – [it] cause[s] the request to return after the current task completes and freeze[s] the process immediately, even if other tasks remain in the Node.js event loop... [On the other hand,] the callback waits for all the tasks in the Node.js event loop to complete, just as it would if you ran the function locally. If you chose to not use the callback parameter in your code, then AWS Lambda implicitly calls it with a return value of null. You can still use the Context methods to terminate the function, but the callback approach of waiting for all tasks to complete is more idiomatic to how Node.js behaves in general.
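To see the difference in practice, here is a rough sketch of a classic Node.js handler exercising both styles (the freezeNow flag on the event is invented for this example; a real function would normally pick one style or the other):

// A setInterval timer stands in for "other tasks still in the Node.js event loop".
exports.handler = function (event, context, callback) {
  const timer = setInterval(function () {
    console.log('background work');
  }, 1000);

  if (event.freezeNow) {
    // Returns the result and freezes the process right away,
    // even though the interval is still scheduled.
    context.succeed({ done: true });
  } else {
    // Records the result, but by default Lambda waits for the event loop to
    // drain, so the interval must be cleared (or callbackWaitsForEmptyEventLoop
    // set to false) or the function will keep running until it times out.
    clearInterval(timer);
    callback(null, { done: true });
  }
};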

Node.js: Accessing the correct/current request object from any function during the request lifetime

I've gathered that, due to the event queue design of node.js and the way that JavaScript references work, there's not an effective way of figuring out which active request object (if any) spawned the currently-executing function.
If I am in fact mistaken, how do I maintain some sort of statically-accessible context object that is available for the life of a request, short of manually passing it to every function in the event/callback chain?
If there's no way to do that, is there a way to somehow wrap every event call so that when it is spawned, it copies the context data from the event that spawned it?
What I'm actually trying to achieve is having every log message also contain the username of the person to whom the log message is related. Because the callback event chain can become quite complex, trying to pass the request object to every function in the application is a bit too messy for my taste, so hopefully there's another way.
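For reference, current Node.js versions (13.10 and later) ship AsyncLocalStorage, which does roughly what the last paragraph asks for: context set when a request starts is visible to every function in that request's async chain without being passed around explicitly. A minimal sketch, where the x-username header is just a stand-in for however you actually identify the user:

const http = require('http');
const { AsyncLocalStorage } = require('async_hooks');

const requestContext = new AsyncLocalStorage();

// Any function, however deep in the callback chain, can read the current store.
function log(message) {
  const ctx = requestContext.getStore() || {};
  console.log('[' + (ctx.username || 'anonymous') + '] ' + message);
}

http.createServer(function (req, res) {
  const username = req.headers['x-username'] || 'anonymous';
  // Everything triggered inside run() sees this store, even across timers and I/O.
  requestContext.run({ username: username }, function () {
    setTimeout(function () {
      log('finished handling ' + req.url);
      res.end('ok');
    }, 10);
  });
}).listen(3000);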

ASIO strand::wrap: does it not have to serialize in order?

I'm lost on the distinction between posting using strand::wrap and strand::post. Both seem to guarantee serialization, yet how can you serialize with wrap and not get a consistent order? It seems like they both would have to do the same thing. When would I use one over the other?
Here is a little more detailed pseudocode:
mystrand(ioservice);
mystrand.post(myhandler1);
mystrand.post(myhandler2);
This guarantees my two handlers are serialized and executed in order, even in a thread pool.
Now, how is that different from below?
ioservice->post(mystrand.wrap(myhandler1));
ioservice->post(mystrand.wrap(myhandler2));
Seems like they do the same thing?
Why use one over the other? I see both used and am trying to figure out when one makes more sense than the other.
wrap creates a callable object which, when called, will call dispatch on a strand. If you don't call the object returned by wrap, nothing much will happen at all. So, calling the result of wrap is like calling dispatch. Now how does that compare to post? According to the documentation, post differs from dispatch in that it does not allow the passed function to be invoked right away, within the same context (stack frame) where post is called.
So wrap and post differ in two ways: the immediacy of their action, and their ability to use the caller's own context to execute the given function.
I got all this by reading the documentation.
This way
mystrand(ioservice);
mystrand.post(myhandler1);
mystrand.post(myhandler2);
myhandler1 is guaranteed by mystrand to be executed before myhandler2
but
ioservice->post(mystrand.wrap(myhandler1));
ioservice->post(mystrand.wrap(myhandler2));
the execution order is the order in which the wrapped handlers are invoked by the io_service, and io_service::post does not guarantee that order.
