So I've recently hopped on the async/await train (well, I'm attempting to; I'm still grasping some of the concepts).
I've started by switching as much as I can to async/await, and for the packages that don't offer it yet I've found promise-based versions.
Anyways, I stumbled upon the request-promise-native module, which is just like request but uses promises, as I'm sure you can see.
I've been experimenting with using async/await with it, and it works, but I'm not sure I'm using it right. In fact, I'm not even sure it has advantages over using the promise directly, but this particular function I'm converting has a lot of callbacks, so I'm trying to keep the nesting to a minimum.
testFunction: async (param) => {
    let results;
    try {
        results = await request(requestOptions);
    } catch (e) {
        return (e);
    }
    // Do stuff with results
}
Now this works and I get the desired result, but my questions are: 1. Is this even the right way to use async/await? 2. Is there any benefit to this over the standard promise offered by the library?
You are indeed using async/await correctly. The function definition must be preceded by async exactly like you've done, and the await operator should precede the code that returns the Promise, exactly like you've done. It is also correct to wrap await in a try/catch because if the Promise is rejected, the await expression will throw the rejected value (see MDN).
The benefit is code that appears synchronous, which makes it easier to follow, understand, and reason about.
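To make the contrast concrete, here's a rough sketch (not from the question; doStuffWith is a hypothetical helper and requestOptions is assumed to be built elsewhere) of the same flow written both ways:

const request = require("request-promise-native");

// Promise style: the flow is expressed as a chain of callbacks.
const fetchWithThen = (requestOptions) =>
    request(requestOptions)
        .then((results) => doStuffWith(results)) // hypothetical helper
        .catch((e) => e);

// async/await style: the same flow reads top to bottom, like synchronous code.
const fetchWithAwait = async (requestOptions) => {
    try {
        const results = await request(requestOptions);
        return doStuffWith(results); // hypothetical helper
    } catch (e) {
        return e;
    }
};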
Imagine you are creating a framework for nodejs.
Are there any good practices for handling sync/async code?
I'm talking about the sync-to-async problem as follows:
All over your libraries you have code that is sync from the beginning, for example:
function validate(email) {
    return email.match(/regex-blah/);
}
When you need some async IO operation inside such functions, they have to become async (with the async keyword or with promises; it doesn't matter which).
async function validate(email) {
    ...
}
What matters is that all callers (call them "x") of "validate" must also become async, and then all callers of "x", and so on.
This becomes very hard to support and maintain.
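For example, a hypothetical call chain (blacklistLookup, registerUser and handleRequest are made up here, just to illustrate the ripple):

async function validate(email) {            // became async because it now does IO
    return await blacklistLookup(email);    // hypothetical async lookup
}

async function registerUser(form) {         // caller of validate: forced to become async
    if (!await validate(form.email)) throw new Error("invalid email");
    // ...
}

async function handleRequest(req, res) {    // caller of registerUser: forced to become async too
    await registerUser(req.body);
    res.end("ok");
}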
This could be a problem in a client-side framework too, I think.
Are there any good rules to avoid that problem?
Thanks
Let foo be a sub or method. I have programmed a blocking and an async variant, so looking at it from the outside, the essential difference is in the return value. I first thought of specifying it in the signature, but the dispatcher unfortunately only looks at the incoming end instead of both:
> multi sub foo (--> Promise) {}; multi sub foo (--> Cool) {};
> my Promise $p = foo
Ambiguous call to 'foo(...)'; these signatures all match:
:( --> Promise)
:( --> Cool)
in block <unit> at <unknown file> line 1
Should I add a Bool :$async to the signature? Should I add a name suffix (i.e. foo and foo-async) like in JS? Neither feels very perlish to me. What are the solutions currently in use for this problem?
Multiple dispatch on return type cannot work, since the return value itself could be used as the argument to a multiple dispatch call (and since nearly every operator in Perl 6 is a multiple dispatch call, this would be a very common occurrence).
As to the question at hand: considering code in core, modules, and a bunch of my own code, it seems that a given class or module will typically offer a synchronous interface or an asynchronous interface, whichever feels most natural for the problem at hand. In cases where both make sense, they are often differentiated at type or module level. For example:
Core: there are both Proc and Proc::Async, and IO::Socket::INET and IO::Socket::Async. While it's sometimes the case that a reasonable asynchronous API can be obtained by providing Promise-returning alternatives for each synchronous routine, in other cases the overall workflow will be a bit different. For example, for a synchronous socket API it's quite reasonable to sit in a loop asking for data, whereas the asynchronous API is much more naturally expressed in Perl 6 by providing a Supply of the packets arriving over the network.
Libraries: Cro::HTTP::Client offers a consistently asynchronous interface to doing HTTP requests. There is no synchronous API.
Applications: considering a lot of my application code, things seem to be either consistently synchronous or consistently asynchronous in terms of their API. The only exceptions I'm finding are classes that are almost entirely synchronous, except they have a few Supply-returning methods in order to provide notifications of events. This isn't really the case being asked about, however, since notifications are naturally asynchronous.
It's interesting that we've ended up here, in contrast to various other languages where providing async variants through a naming convention is common. I think much of the reason is that one can use await anywhere in Perl 6. This is not the case in languages that have an async/await pair, where in order to use await one must first refactor the calling routine to be async, and then refactor its callers to be async, and so on.
So if we are writing a completely synchronous bit of code and want to use something from a module that returns a Promise, our entire cost is "just write await". That's it. Writing in a call to await is the same length as a -sync or -async suffix, or a :sync or :async named argument.
On the other hand, one might choose to provide a synchronous API to something, even if on the inside it is doing an await, because the feeling is most consumers will just want to use it synchronously. Should someone wish to call it asynchronously, there's another 5-letter word, start, that will trigger it on the threadpool, and any await that is performed inside of the code will not (assuming Perl 6.d) block up a real thread, but instead just schedule it to continue when the awaited work is done. That's, again, the same length - or shorter - than writing an async suffix, named argument, etc.
Which means the pattern we seem to be ending up with (given the usual caveats about young languages, and conventions taking time to evolve) is:
For the simple case: pick the most common use case and provide that, letting the caller adapt it with start (sync -> async) or await/react (async -> sync) if they want the other thing
For more complex cases where the sync and async workflows for using the functionality might look quite different, and are both valuable: provide them separately. Granted, one may be a facade of the other (for example, Proc in core is actually just a synchronous adaptation layer over Proc::Async).
A final observation I'd make is that individual consumers of a module will almost certainly be using it synchronously or asynchronously, not a mixture of the two. If wishing to provide both, then I'd probably instead look to using export tags, so I can do:
use Some::Thing :async;
say await something();
Or:
use Some::Thing :sync;
say something();
And not have to declare which I want upon each call.
In the document here: https://docs.python.org/3/library/asyncio-task.html, I found that many uses of yield from can be replaced by await.
I was wondering whether they are equivalent all the time in Python 3.5. Does anyone have ideas about this?
No, they are not equivalent. await in an async function and yield from in a generator are very similar and share most of their implementation, but depending on your Python version, trying to use yield or yield from inside an async function will either cause an outright SyntaxError or make your function an asynchronous generator function.
When the asyncio docs say "await or yield from", they mean that async functions should use await and generator-based coroutines should use yield from.
I see the following approach often when working on certain projects that use Node.js and Bluebird.js:
function someAsyncOp(arg) {
    return somethingAsync(arg).then(function (results) {
        return somethingElseAsync(results);
    });
}
That is, creating a wrapper function/closure around another function that accepts the exact same arguments. It seems this could be written more cleanly as:
function someAsyncOp(arg) {
    return somethingAsync(arg).then(somethingElseAsync);
}
When I propose it to others, they usually like it and switch to it.
There is, however, an important caveat: if you're calling something like object.function, and the function relies on this (like console.log does), then this will lose its binding. You have to do object.function.bind(object):
return somethingAsync(arg).then(somethingElseAsync).catch(console.log.bind(console));
This does seem potentially undesirable, and the .bind call feels a little awkward. You can't go wrong with the let's-always-do-the-closure approach.
I can't seem to find any discussion on this on Google, and there doesn't seem to be anything in ESLint about unnecessary wrapper functions. I'm trying to find out more about it, so here I am. I guess it's a case of not knowing what I don't know. Is there a name for this? (Useless use of closures?) Any other thoughts or wisdoms? Thank you.
Edit: someone's going to comment that someAsyncOp is also redundant. Yes, it is; let's pretend it does something useful.
The discussion here is pretty straightforward. If your function is OK being called directly by the promise system, with the exact arguments and this value that will be in place when it's called that way, and its return value is exactly what you want in the promise chain, then by all means just specify the function reference directly as the .then() handler:
somethingAsync(arg).then(somethingElseAsync)
But, if your function isn't set up to be called directly that way, then you need a wrapper function or something like .bind() to fix the mismatch and call your function exactly as you want or set up the proper return value.
There's really nothing more to it than that. It's no different than specifying any callback anywhere in Javascript. If you have a function that already meets the specs of the callback exactly, then you can specify that function name as a direct reference with no wrapper. But, if the function you have doesn't quite work the way the callback is designed to work, then you use a wrapper function to smooth over the mismatch.
All callback functions have the same issue with passing obj.method as the callback. If your .method expects the this value to be obj, then you will probably have to do something to make sure that the this value is set accordingly before your function executes. The callbacks in .then() handlers are no different than callbacks for any other Javascript/node.js function such as setTimeout() or fs.readFile() or any other function that takes a callback as an argument. So, neither of the issues you mention is unique to promises at all. It just so happens that promises live by callbacks, so if you're trying to make method calls via a callback, you will run into the issue with the object value getting passed appropriately to the method.
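For example, here's a minimal sketch (not from the question or answer) of the same this problem with a plain setTimeout() callback:

const counter = {
    n: 0,
    increment() { this.n += 1; }
};

setTimeout(counter.increment, 0);               // `this` is not `counter` when it runs
setTimeout(counter.increment.bind(counter), 0); // bound: increments counter.n as intended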
FYI, it is possible to code methods so that they are permanently bound to their own object and can be passed as obj.method, but that can only be used in your method implementation and has some other tradeoffs. In general, experienced Javascript developers are perfectly fine using obj.method.bind(obj) as the reference to pass. Seeing the .bind() in the code also indicates that you're aware that you need the proper obj value inside the method and that you have made a provision for that.
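A minimal sketch of one such approach (binding once in the constructor; the Logger class here is hypothetical):

class Logger {
    constructor(prefix) {
        this.prefix = prefix;
        // Re-bind once so `this.log` can be passed around bare, e.g. as a .then() handler.
        this.log = this.log.bind(this);
    }
    log(msg) {
        console.log(this.prefix, msg);
    }
}

const logger = new Logger("[app]");
Promise.resolve("started").then(logger.log); // safe: `this` is already locked to `logger`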
As for some of your bolded questions or comments:
Is there a name for this?
Not that I'm aware of. Technically it's "passing a named reference to a previously defined function as a callback", but I doubt that's something you can search for and find useful discussion of.
Any other thoughts or wisdoms?
For reasons I'm not entirely sure of (though it has been a topic of discussion elsewhere), Javascript programming style conventions seem to encourage the use of anonymous inline callbacks rather than defining a method or function elsewhere and then passing that named reference (like you would be more likely to do in many other languages). Obviously, if you put the actual code to process the callback in an inline anonymous function, then neither of the issues you mention comes up. Using arrow functions in ES6 now even allows you to preserve the current value of this in the inline callback. I'm not saying that this is an answer to your question, just an observation about common Javascript coding conventions.
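As a small hypothetical sketch of that last point, an inline arrow callback keeps the surrounding this, so no .bind() is needed:

class Reporter {
    constructor() {
        this.count = 0;
    }
    track(promise) {
        // The arrow function keeps `this` bound to the Reporter instance.
        return promise.then((result) => {
            this.count += 1;
            return result;
        });
    }
}

const reporter = new Reporter();
reporter.track(Promise.resolve(42)); // reporter.count becomes 1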
You can't go wrong with the let's-always-do-the-closure approach.
As you seem to already know, it's a waste to wrap something if it doesn't need wrapping. I would vote for wrapping only when there's a mismatch between the specification for the callback and the already existing named function and there's a reason not to just fix the named function to match the specification of the callback.
Promise.coroutine supports Promise as the yieldable value type. Via addYieldHandler(function handler), Promise.coroutine can also support other types that return a result only once. But how could I write a yield handler that can handle a generator type, like co does?
First of all, Bluebird coroutines return promises that are of course yieldable on their own:
var foo = Promise.coroutine(function*(){
    yield Promise.delay(3000);
});
var bar = Promise.coroutine(function*(){
    yield foo(); // you can do this.
});
Generally, a generator is syntactic sugar for a function that returns an iterable sequence. There is nothing asynchronous about it in nature. You can write code that returns iterables in ES5 and even ES3 and consume it with yield in a generator.
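For example, a hand-rolled ES5-style iterator (no generator syntax involved; countTo and consume are made-up names) that a generator can still consume:

// A plain function returning an iterator-like object: { next: ... }
function countTo(n) {
    var i = 0;
    return {
        next: function () {
            i += 1;
            return i <= n ? { value: i, done: false }
                          : { value: undefined, done: true };
        }
    };
}

// A generator consuming that sequence with plain yield.
function* consume() {
    var it = countTo(3);
    for (var step = it.next(); !step.done; step = it.next()) {
        yield step.value;
    }
}

console.log(Array.from(consume())); // [1, 2, 3]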
Now - as for your question:
I would not recommend using Promise.coroutine like that.
Yielding arbitrary iterables from the specific iterable you're using is error prone, whereas wrapping it in Promise.coroutine makes it an explicit coroutine which you can easily yield. This is explicit, clear, and preserves all sorts of useful properties.
Yielding promises is not coincidental, and there is a good reason that async functions in ES7 await promises. Promises represent a temporal value, and they're really the only thing it makes sense to wait for - they represent exactly that - something you wait for. For that reason, explicitly waiting for promises is beneficial and makes for a solid abstraction.
If you do want to yield from an arbitrary generator - you don't need to add a yield handler via Bluebird.
Generators in JavaScript already have that ability built in using special notation, there is no need to add a specific exception to a specific promise library in order to yield from other iterables.
function* genThreeToFour() {
    yield 3;
    yield 4;
}

function* genOneToFive() {
    yield 1;
    yield 2;
    yield* genThreeToFour(); // note the *, delegate to another sequence
    yield 5;
}
Yielding from another sequence is already built into generators: if you want to yield from another generator, you can simply yield* its invocation. That way it's explicit that you're delegating. It's only one more character of code, but it's a lot more explicit, and it's also easier to step through with debuggers since the engine is aware of the delegation. I still prefer yielding only promises, but if you feel strongly about this, delegating to other generators sounds a lot better than yielding generators explicitly.
Note that this is also much faster than yielding generators explicitly. The conversion process of Promise.coroutine isn't cheap, since it's meant to be executed once and then reused: the conversion itself isn't fast, but it produces a very fast function (faster than async in almost all use cases, and faster than how most people write callbacks in most use cases). If you delegate instead of creating a coroutine every time and running it, you'll have better performance.
If you insist, you can addYieldHandler
It's quite possible to do this. If you're OK with the speed penalty and the (in my opinion, worse) abstraction, you can use addYieldHandler to make it possible to yield generators.
First, the "good" way:
Promise.coroutine.addYieldHandler(function(gen) {
    if (gen && gen.next && gen.next.call) { // has a next which is a function
        return Promise.try(function () {
            return (function cont(n) {
                if (n.done) return n.value;                    // `return` in generator
                if (!n.value || !n.value.then) {               // yielded a plain value
                    return cont(gen.next(n.value));            // send it straight back in
                }
                // promise case: propagate errors into the generator, and continue
                return n.value.then(
                    function (v) { return cont(gen.next(v)); },
                    function (e) { return cont(gen.throw(e)); }
                );
            })(gen.next());
        });
    }
});
Here we added the ability to yield iterators (anything with a callable next), not just generators; the advantage is that functions that are not generators (for example, from third-party libraries) but still produce iterators can still be yielded. The usage of the above is something along the lines of yield generatorFn(), where you invoke the generator. Note that our code here replicates what Promise.coroutine actually does, and we almost ended up with a "native" implementation of it.
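Hypothetical usage of the handler above (inner and outer are made-up names; this assumes the handler has been registered), yielding the invoked generator:

function* inner() {
    var a = yield Promise.resolve(1); // a promise, stepped through by the handler
    return a + 1;
}

var outer = Promise.coroutine(function* () {
    var result = yield inner();       // an iterator: picked up by the custom yield handler
    console.log(result);              // 2
});

outer();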
Now, if you want to yield generator functions themselves (without invoking them), you can still do that:
var Gen = (function*(){}).constructor; // the GeneratorFunction constructor
Promise.coroutine.addYieldHandler(function(gen) {
    if (gen && (gen instanceof Gen)) { // detect a generator function
        return Promise.coroutine(gen)();
    }
});
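And a hypothetical use of this second handler (main is a made-up name), yielding the generator function itself rather than its invocation:

var main = Promise.coroutine(function* () {
    var v = yield function* () {          // a generator function: wrapped by Promise.coroutine above
        return yield Promise.resolve(42);
    };
    console.log(v); // 42
});

main();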
That's for the sake of completeness though :)