Why and when should we prefer readFileSync/writeFileSync to the asynchronous versions in Node.js, particularly for server applications?
Since they are blocking functions, you should almost always prefer the async versions in production environments.
That said, it can make sense to use them during the bootstrap of your application, for example if you load configuration from files and you don't want fully initialized components interacting with partially initialized ones, or to have to design every component to cope with that situation.
I cannot see any other meaningful use for them.
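As a sketch of that bootstrap case (the file name and port field here are hypothetical), loading configuration synchronously before the server starts is harmless because nothing else can usefully run yet:
var fs = require('fs');

// Blocks, but only during startup, before any request can arrive.
var config = JSON.parse(fs.readFileSync('./config.json', 'utf8'));

require('http').createServer(function (req, res) {
  res.end('configured port: ' + config.port);
}).listen(config.port);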
I prefer to use writeFileSync or readFileSync if it does not make any sense for the program to continue in the event of a failure and there is no other asynchronous work to do. Typically this is at the beginning/end of a program, but I have also used them while "checkpointing" a long running process to make it clear that nothing is changing while the checkpoint is being written to disk.
Of course you can implement this with the asynchronous versions too; it is just more verbose and prone to mistakes.
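For example, a minimal sketch of the checkpoint case (the names are hypothetical): writeFileSync guarantees that no other JavaScript runs, and therefore no state changes, while the snapshot is on its way to disk:
var fs = require('fs');

function checkpoint(state) {
  // If this throws, continuing makes no sense, so let the process crash.
  fs.writeFileSync('./checkpoint.json', JSON.stringify(state));
}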
When extending V8, how involved do I/we have to be in making sure microtasks are correctly managed? V8, in general, has almost no documentation outside of the code itself, but I'm finding absolutely nothing on microtasks. Specifically, I'd like to learn about MicrotasksScope and how I need to implement it.
You generally shouldn't need to use MicrotasksScope.
Usually you will either be using MicrotasksPolicy::kExplicit or MicrotasksPolicy::kAuto.
With a kAuto policy, any time the script evaluation stack is emptied, microtasks will be run. With kExplicit, you have to do it yourself, using Isolate::RunMicrotasks.
In most situations, the default (kAuto) will work. If you are Chromium or Node, using kExplicit makes more sense, since you need to coordinate your microtask queue with all the other platform work like timers and networking.
As for MicrotasksScope, I personally am not aware of any project that uses it, but it behaves the same as kAuto, except that microtasks run when the stack of MicrotasksScope objects empties, instead of when the script stack does.
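Not embedder code, but a plain JavaScript illustration of what that timing means in practice under an auto-style policy: queued microtasks run as soon as the current script stack empties, before macrotasks such as timers:
setTimeout(function () { console.log('timer (macrotask)'); }, 0);
Promise.resolve().then(function () { console.log('promise (microtask)'); });
console.log('synchronous');
// Prints: synchronous, promise (microtask), timer (macrotask)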
I'm trying to use a 3rd-party library that has a function that can block. I would like to use it, but without blocking. Is it possible to wrap a blocking call that I don't control, to make it async?
// calling this function will block the nodejs thread
blockingCall();
What I would like is something like this:
// wrapper for the blocking call
var wrapper = wrapBlockingCall(blockingCall);
wrapper.on('complete', function() {});
Is this possible? Does this make sense?
There is no way to make blocking JavaScript code non-blocking in Node.js - the mechanism Node uses for its non-blocking behaviour is implemented in the C/C++ layer, and it is only used when doing I/O operations (reading from disk, networking, etc.).
In reality, every line of JavaScript in your program will be executed one by one, because it always runs on the same thread, no matter what you do.
The only option I see is to execute the offending code in a separate Node process using the built-in Child Process module. However, this will have a significant performance impact, and an even bigger one if the code needs to be executed frequently.
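A rough sketch of that approach, assuming the blocking function can be moved into a hypothetical worker.js: the child process blocks, but the parent's event loop stays free.
// main.js
var fork = require('child_process').fork;

var child = fork('./worker.js');
child.on('message', function (result) {
  console.log('complete', result); // fires once the blocking work is done
});
child.send('start');

// worker.js (separate file)
// process.on('message', function () {
//   var result = blockingCall(); // blocks only this child process
//   process.send(result);
// });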
Note:
After reading the comments under your question, it seems that you are actually the author of the blocking function, which in turn calls a C API that performs blocking I/O. There are ways of calling C functions that would normally block in a manner that does not block the upper JavaScript layer.
While I am not a C expert, I think this is accomplished using the libuv library included in Node - have a look at the addons documentation for more info.
What is the best way to use Node.js's require function? By this, I'm referring to the placement of the require statement. Is it better to load all dependencies at the beginning of the script, to load them as you need them, or does it not make a notable difference whatsoever?
This article has a lot of useful information regarding how require works, though I still can't come to a definitive conclusion as to which method would be most efficient.
Assuming you're using node.js for some sort of server environment, several things are generally true about that server environment:
You want fast response time to any given request.
The code that runs for processing requests should not use synchronous I/O operations because that seriously lessens the scalability of the server.
Server startup time is generally not something you need to optimize for (within reason) so if you're going to pay an initialization cost somewhere, it is usually better paid once at server startup time.
So, given that require() uses synchronous I/O when the module has not yet been cached, you generally don't want to do require() operations inside a request handler. And since you want fast response times from your request handlers, you don't want that extra work inside them anyway.
All of this leads to a general rule of thumb: load the modules you need at startup into module-level variables that you reuse from one request to the next, and don't load modules inside your request handlers.
In addition to all of this, if you put all your require() statements in a block near the top of your module, it makes your module a lot more self-documenting about what other modules it depends on and how it initializes those modules. If require() statements are sprinkled all over the code, then it makes it a lot harder for a developer to see what this module is using without a lot more study of the code.
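A minimal sketch of that rule of thumb (the modules chosen here are just examples): pay the require() cost once at startup, then reuse the references on every request.
var http = require('http');
var crypto = require('crypto'); // loaded once, at server startup

http.createServer(function (req, res) {
  // No require() in the handler; it only uses module-level references.
  res.end(crypto.randomBytes(8).toString('hex'));
}).listen(3000);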
It depends what performance characteristics you're looking for.
require() is not cheap; it has to read the JS file from disk, parse it, and execute any top-level code (and do all of that recursively for all files require()d by that file).
If you put all of your require()s on top, your code may take more time to start, but it won't suddenly slow down later. (note that moving the require() further down in the synchronous top-level code will make no difference beyond order of execution).
If you only require() other modules when they are first used (e.g. inside an asynchronous request handler), your first request may be noticeably slower, as Node loads and parses all of your dependencies on demand. This also means that any errors from dependencies won't surface until later. (note that all require() calls are cached, so the second request won't do any more work)
The other disadvantage to scattering require() calls throughout your code is that it makes it less readable; it's very nice to easily see exactly what each file depends on up top.
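For contrast, a sketch of the discouraged pattern: the require() below is cached after the first call, but the first request still pays the synchronous load cost, and the dependency is harder to spot when reading the file.
var http = require('http');

http.createServer(function (req, res) {
  var crypto = require('crypto'); // first request pays the sync load; later ones hit the cache
  res.end(crypto.randomBytes(8).toString('hex'));
}).listen(3000);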
This is a general question about testing, but I will frame it in the context of Node.js. I'm not as concerned with a particular technology, but it may matter.
In my application, I have several modules that are called upon to do work when my web server receives a request. In the case of some of these modules, I close the request before I call upon them.
What is a good way to test that these modules are doing what they are supposed to do?
The advice here for RSpec is to mock out the work these modules are doing and just ensure that the appropriate methods are being called. This makes sense to me, but in Node.js, since my modules are not global, I don't think I can mock out functions without changing my program architecture so that every instance receives instances of the objects it needs [1].
[1] This is a well known programming paradigm, but I cannot remember its name right now.
The other option I see is to use setTimeout and take my best guess at when these modules are done with their work.
Neither of these seems ideal.
Am I missing something? Are background processes not tested?
Since you are speaking of integration tests of these background components, a few strategies come to mind.
Take all the asynchronicity out of their operation for test mode. I'm imagining you have some sort of queueing process (that could be a faulty assumption), you toss work into the queue, and then your modules pick up that work and do their task. You could rework your test harness such that the test harness stands in as the queuing mechanism and you effectively get direct control over when the modules execute.
Refactor your modules to take some sort of next callback function. They would end up functioning a bit like Express's middleware layer or how async's each function works: you'd pass each module a callback that it calls when its task is complete (see the sketch after this list). Once all of the modules have reported in, you can check the state of the program.
Exactly what you already suggested: wait some amount of time, and if the work still isn't done, consider that a failure. Mocha sort of does this, in that a test that exceeds a definable timeout fails. I don't like this approach, though, because as you add more tests they all have to wait that same amount of time.
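A sketch of the second strategy (the module, job, and test names are all hypothetical): once the module takes a completion callback, a Mocha-style test can assert after the work finishes instead of guessing with setTimeout.
function processJob(job, done) {
  // setImmediate stands in for the module's real async work (disk, queue, ...).
  setImmediate(function () {
    job.processed = true;
    done(null, job);
  });
}

it('processes a job', function (done) {
  processJob({ id: 1 }, function (err, job) {
    if (err) return done(err);
    if (!job.processed) return done(new Error('job was not processed'));
    done();
  });
});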
Redis is very fast. For the most part, on my machine it is as fast as, say, native JavaScript statements or function calls in node.js. It is easy/painless to write regular JavaScript code in node.js because no callbacks are needed. I don't see why it should not be that easy to get/set key/value data in Redis using node.js.
Assuming node.js and Redis are on the same machine, are there any npm libraries out there that allow interacting with Redis on node.js using blocking calls? I know this has to be a C/C++ library interfacing with V8.
I suppose you want to ensure all your Redis insert operations have been performed. To achieve that, you can use the MULTI commands to insert keys or perform other operations.
The https://github.com/mranney/node_redis module queues up the commands pushed onto the multi object and executes them together.
That way you only need one callback: the one passed to the exec call.
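A minimal sketch with node_redis (the keys and values are hypothetical):
var redis = require('redis');
var client = redis.createClient();

client.multi()
  .set('key1', 'value1')
  .set('key2', 'value2')
  .exec(function (err, replies) {
    // One callback once every queued command has run, e.g. [ 'OK', 'OK' ]
    console.log(err, replies);
  });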
This seems like a common bear-trap for developers who are trying to get used to Node's evented programming model.
What happens is this: you run into a situation where the async/callback pattern isn't a good fit, you figure what you need is some way of doing blocking code, you ask Google/StackExchange about blocking in Node, and all you get is admonishment on how bad blocking is.
They're right - blocking ("wait for the result of this before doing anything else") isn't something you should try to do in Node. But what I think is more helpful is to realize that 99.9% of the time, you're not really looking for a way to do blocking; you're just looking for a way to make your app "wait for the result of this before going on to do that," which is not exactly the same thing.
Try looking into the idea of "flow control" in Node rather than "blocking" for some design patterns that could be a clearer fit for what you're trying to do. Here's a list of libraries to check out:
https://github.com/joyent/node/wiki/modules#wiki-async-flow
I'm new to Node too, but I'm really digging Async: https://github.com/caolan/async
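For example, a small sketch with async.series from that library: each step runs after the previous one finishes, and a single final callback gets all the results. That is the "wait for this before that" behaviour, without blocking the event loop.
var async = require('async');

async.series([
  function (next) { setTimeout(function () { next(null, 'first'); }, 100); },
  function (next) { setTimeout(function () { next(null, 'second'); }, 100); }
], function (err, results) {
  console.log(results); // [ 'first', 'second' ]
});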
Blocking code creates a MASSIVE bottleneck.
If you use blocking code your server will become INCREDIBLY slow.
Remember, node is single threaded. So any blocking code will block node for every connected client.
Your own benchmarking shows it's fast enough for one client. Have you benchmarked it with 1,000 clients? If you try this, you will see why blocking code is bad.
Whilst Redis is quick, it is not instantaneous... this is why you must use a callback if you want to continue execution while ensuring your values are there.
The only way I think you could achieve this (and I am not suggesting you do) is to use a callback that sets a variable, with the variable acting as the predicate for leaving a timer loop.
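To be clear, the straightforward callback approach looks like this (a sketch with node_redis; the key and value are hypothetical):
var redis = require('redis');
var client = redis.createClient();

client.set('greeting', 'hello', function (err) {
  if (err) throw err;
  client.get('greeting', function (err, value) {
    // Only inside this callback is the value guaranteed to be there.
    console.log(value); // 'hello'
  });
});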