implement mutex in node.js

I would like to implement a mutex inside my node.js application; here is the Wikipedia article on mutexes: http://en.wikipedia.org/wiki/Mutual_exclusion.
Is there a ready-made module for this? If not, any ideas that could help me implement one?

There are many ways to accomplish this. Two easy ways are via Redis or Zookeeper servers. Node.js has very good modules for both of them.
In Redis you can use the WATCH + MULTI commands to implement locking; in Zookeeper you can create ephemeral nodes. In both cases, no two processes will execute the critical operation at the same time.
I recently implemented the Redis approach in the node-ratelimiter module, which is a critical part of our production applications where we need to guarantee that no two processes increment the same value in Redis. Refer to WATCH and MULTI for details; the code is in fact very easy to understand and read.
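To make the Redis approach concrete, here is a minimal sketch of optimistic locking with WATCH + MULTI using the classic callback-based node_redis client; the key name and retry-by-recursion are illustrative choices, not code from node-ratelimiter:

```js
const redis = require('redis');
const client = redis.createClient();

// Atomically increment `key`, retrying if another client touches it first.
function incrementWithLock(key, done) {
  client.watch(key, (watchErr) => {
    if (watchErr) return done(watchErr);
    client.get(key, (getErr, value) => {
      if (getErr) return done(getErr);
      const next = (parseInt(value, 10) || 0) + 1;
      client.multi()
        .set(key, next)
        .exec((execErr, replies) => {
          if (execErr) return done(execErr);
          // replies === null means the watched key changed and the
          // transaction was aborted, so retry the whole read/modify/write.
          if (replies === null) return incrementWithLock(key, done);
          done(null, next);
        });
    });
  });
}
```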
For a Zookeeper example, refer to the Locks recipe. It is possible to implement much more complex logic for distributed locks with Zookeeper ephemeral nodes; the Redis solution is a special case and works very well if you don't need more than that.
With either of these approaches you can implement mutexes for any app and any logic.

Related

Do microservices with node.js prevent the main thread from blocking?

I have just started to understand how the node.js event loop and microservices work, and I was wondering whether microservices can prevent the main thread of a node.js application from blocking. What I mean is that we can run synchronous code on a different microservice, which sends the response back when done, and we can scale only that instance of the microservice.
Is my understanding correct, or please let me know if I got something wrong?
I think you're mixing up two concepts.
Microservices are relatively small, loosely-coupled services written in whatever language. For example, if I work at BigEcommerceCompany I might have a variety of microservices to manage, written in a variety of technologies, such as an auth service, cart service, payments service, reviews service, etc., and they might all be in the same language or all be in different languages.
Node's event loop is single-threaded, but Node also has a worker pool that can be used for work without blocking the event loop, and Node can be clustered with the built-in cluster module (or various wrappers) across the available CPUs. An example of a blocking function call in Node would be child_process.spawnSync; an example of a non-blocking call would be child_process.spawn. It's common when writing Node code to use a lot of Promises or callbacks to avoid blocking the event loop as much as possible.
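A quick sketch of that contrast, using ls as a placeholder command:

```js
const { spawn, spawnSync } = require('child_process');

// Blocking: the event loop is stalled until the command finishes.
const syncResult = spawnSync('ls', ['-la']);
console.log('sync output:', syncResult.stdout.toString());

// Non-blocking: events fire later while the event loop keeps serving other work.
const child = spawn('ls', ['-la']);
child.stdout.on('data', (chunk) => console.log('async output:', chunk.toString()));
child.on('close', (code) => console.log('child exited with code', code));
```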
The two concepts aren't really related except in that by writing small microservices it may be easier to find, isolate, and fix problems with Node performance.

Can the same Redis instance be used manually alongside kue.js?

I am using kue.js, which is a redis-backed priority queue for node, for pretty straightforward job-queue stuff (sending mails, tasks for database workers).
As part of the same application (albeit in a different service), I now want to use redis to manually store some mappings for a url-shortener. Does concurrent manual use of the same redis instance and database as kue.js interfere with kue, i.e., does kue require exclusive access to its redis instance?
Or can I use the same redis instance manually as long as I, e.g., avoid certain key prefixes?
I do understand that I could use multiple databases on the same instance, but I found a lot of chatter from various sources that discourages use of the database feature, as well as talk of it being deprecated in the future, which is why I would like to use the same database for now if that is safely possible.
Any insight on this as well as considerations or advice why this might or might not be a bad idea are very welcome, thanks in advance!
I hope I am not too late with this answer, I just came across this post ...
It should be perfectly safe. See the README, especially the section on redis connections.
You will notice that each queue can have its own prefix (the default is q), so as long as you are aware of how prefixes are used in your system, you should be fine. I am not sure why it would be a bad idea as long as you know about the prefixes and the load the various apps put on the redis server. Can you reference a post/page where this was described as a bad idea?
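As a rough sketch of the setup in question (the 'short:' prefix and example keys are made up for illustration), kue and the url-shortener can share one instance as long as their key spaces don't collide:

```js
const kue = require('kue');
const redis = require('redis');

// kue keeps all of its keys under its prefix ('q' by default, configurable here).
const queue = kue.createQueue({ prefix: 'q' });

// A plain client for the url-shortener, using its own key prefix.
const client = redis.createClient();

// Queue work as usual...
queue.create('email', { to: 'user@example.com' }).save();

// ...and store shortener mappings under a different prefix on the same instance.
client.set('short:abc123', 'https://example.com/some/long/url', (err) => {
  if (err) throw err;
});
```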

Distributing scheduled tasks across multi-datacenter environment in Node.js with Cassandra

We are attempting to build a system that gets a list of tasks to execute from a Cassandra database and then, through some kind of group consensus, creates an execution plan (preferably on one node) which is then agreed on and executed by the entire cluster of servers. We really do not want to add any additional pieces of software such as Redis or an AMQP system; we would rather have the consensus built directly into all of the servers running the jobs. So far we have found Skiff, an implementation of the Raft algorithm, which looks like it could accomplish the task, but I was wondering if anyone has found an elegant solution to this problem in a pure Node.js way, not involving external messaging systems.
Cassandra supports lightweight transactions, which are basically a Paxos implementation offering linearizable consistency and a CAS (compare-and-set) operation, i.e. consensus. So you can use Cassandra itself to serialize the execution plan.
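For illustration, a minimal sketch of claiming an execution plan with a lightweight transaction from the DataStax Node.js driver; the keyspace, table, and column names are hypothetical:

```js
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'dc1',   // adjust to your topology
  keyspace: 'jobs',         // hypothetical keyspace
});

// INSERT ... IF NOT EXISTS runs a Paxos round: only one node's insert is applied,
// so whichever server "wins" becomes the owner of the plan.
async function claimPlan(planId, ownerId) {
  const query = 'INSERT INTO execution_plans (plan_id, owner) VALUES (?, ?) IF NOT EXISTS';
  const result = await client.execute(query, [planId, ownerId], { prepare: true });
  return result.rows[0]['[applied]'];  // true only for the winning server
}
```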

NodeJS Clustering Issues

I am looking for a way to share the same data structure (which contains functions, so JSON is not an option) across all cluster instances within NodeJS. I have a data structure called 'Users' that tracks user sessions and contains functions that they have access to. I need to be able to share this data structure across all node processes, or I need an alternative design pattern. Does anyone know of any solutions to this issue? Thanks
I realize this is old and answered, but it may be beneficial to others to note an alternative. The recommended way to handle a situation like this is to place the data structure with its functions in a separate file and require it where needed. This pulls the "code/functions" into every process, while you store (serialize/deserialize) the data itself in any data store.
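A small sketch of that split, with hypothetical file, key, and role names; the functions ship with the code, while the data lives in a store such as Redis:

```js
// users.js - shared code, required independently by every process.
function canAccess(user, feature) {
  return user.roles.includes(feature);
}
module.exports = { canAccess };
```

```js
// worker.js - any process: pull the serialized data, then apply the shared functions.
const redis = require('redis');
const { canAccess } = require('./users');

const client = redis.createClient();
client.get('user:42', (err, raw) => {
  if (err) throw err;
  const user = JSON.parse(raw);           // plain data only, no functions serialized
  console.log(canAccess(user, 'admin'));  // behaviour comes from the required module
});
```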
There are multiple options for setting up proper IPC (inter-process communication) on nodejs:
using a document/key-value storage solution like Redis (key-value) or MongoDB (NoSQL Document-Storage)
using the integrated IPC functionality of the cluster module (see send method)
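For the second option, a minimal sketch of the cluster module's built-in messaging (send plus the message event); the message shapes are made up for illustration:

```js
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // The master forks one worker per CPU and dispatches tasks over IPC.
  os.cpus().forEach((_, i) => {
    const worker = cluster.fork();
    worker.send({ cmd: 'job', payload: { id: i } });
    worker.on('message', (msg) => console.log(`worker ${worker.id} finished:`, msg));
  });
} else {
  // Each worker receives tasks and reports results back to the master.
  process.on('message', (msg) => {
    if (msg.cmd === 'job') {
      process.send({ id: msg.payload.id, done: true });
    }
  });
}
```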
Deciding which of those solutions fits best depends on your requirements and your project setup. For our last project, I decided to use both methods:
IPC for triggering jobs and dispatching partial tasks to different nodejs instances
Redis for centralized session- and api-token management
If you are using Express, I highly recommend the Redis session middleware connect-redis. This middleware automatically handles centralized session management for Express-based applications (which also means you can store complex JS objects and have access to them from all your instances).
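A hedged sketch of that wiring with express-session and the classic connect-redis constructor (newer connect-redis versions expose a slightly different API; the secret is a placeholder):

```js
const express = require('express');
const session = require('express-session');
const redis = require('redis');
const RedisStore = require('connect-redis')(session);

const app = express();
const client = redis.createClient();

app.use(session({
  store: new RedisStore({ client }),  // sessions live in Redis, visible to every instance
  secret: 'keyboard cat',             // placeholder secret
  resave: false,
  saveUninitialized: false,
}));

app.get('/', (req, res) => {
  req.session.views = (req.session.views || 0) + 1;  // shared across all cluster workers
  res.send(`views: ${req.session.views}`);
});

app.listen(3000);
```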

communication between two processes running node.js

I'm writing a server, and decided to split up the work between different processes running node.js, because I heard node.js was single threaded and figured this would parallelize better. The application is going to be a game. I have one process serving html pages, and then other processes dealing with the communication between clients playing the game. The clients will be placed into "rooms" and then use sockets to talk to each other, relayed through the server.

The problem I have is that the html server needs to be aware of how full the different rooms are in order to place people correctly. The socket servers need to update this information so that an accurate representation of the various rooms is maintained. So, as far as I see it, the html server and the room servers need to share some objects in memory. I am planning to run it on one (multicore) machine.

Does anyone know of an easy way to do this? Any help would be greatly appreciated.
Node currently doesn't support shared memory directly, and that's a reflection of JavaScript's complete lack of semantics or support for threading/shared memory handling.
With node 0.7, only recently usable even experimentally, the ability to run multiple event loops and JS contexts in a single process has become a reality (utilizing V8's concept of isolates and large changes to libuv to allow multiple event loops per process). In this case it's possible, but still not directly supported or easy, to have some kind of shared memory. In order to do that you'd need to use a Buffer or ArrayBuffer (both of which represent a chunk of memory outside of JavaScript's heap but accessible from it in a limited manner) and then some way to share a pointer to the underlying V8 representation of the foreign object. I know it can be done from a minimal native node module, but I'm not sure if it's possible from JS alone yet.
Regardless, the scenario you described is best fulfilled by simply using child_process.fork and sending the (seemingly minimal) amount of data through the communication channel provided (uses serialization).
http://nodejs.org/docs/latest/api/child_processes.html
Edit: it'd be possible from JS alone assuming you used node-ffi to bridge the gap.
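A minimal sketch of the child_process.fork approach suggested above, with made-up file names and message shapes for the room counts:

```js
// html-server.js: fork the room server and keep a local copy of room occupancy.
const { fork } = require('child_process');

const roomServer = fork('./room-server.js');
const roomCounts = {};

roomServer.on('message', (msg) => {
  if (msg.type === 'roomUpdate') {
    roomCounts[msg.room] = msg.count;  // updated whenever the room server reports
  }
});

// room-server.js: report occupancy changes back over the IPC channel, e.g.
// process.send({ type: 'roomUpdate', room: 'lobby', count: 7 });
```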
You may want to try using a database like Redis for this. You can have a process subscribed to a channel listening for new connections, and publish from the web server every time you need to.
You can also have multiple processes waiting for users: use a list and BRPOP to block until players arrive.
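A rough sketch of both patterns with the callback-based node_redis client; the channel and list names are invented for the example:

```js
const redis = require('redis');

// Web server side: announce connections and enqueue waiting players.
const pub = redis.createClient();
pub.publish('connections', JSON.stringify({ user: 'alice' }));
pub.rpush('waiting-players', 'alice');

// Room server side: a dedicated subscriber connection for announcements...
const sub = redis.createClient();
sub.subscribe('connections');
sub.on('message', (channel, message) => console.log('new connection:', message));

// ...and another connection that blocks on the list of waiting players.
const worker = redis.createClient();
(function waitForPlayer() {
  worker.brpop('waiting-players', 0, (err, reply) => {
    if (err) throw err;
    const [, player] = reply;  // reply is [listName, value]
    console.log('matched player:', player);
    waitForPlayer();           // keep waiting for the next one
  });
})();
```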
Sounds like you want to not do that.
Serving and message-passing are both IO-bound, which Node is very good at doing with a single thread. If you need long-running calculations about those messages, those might be good for doing separately, but even so, you might be surprised at how well you do with a single thread.
If not, look into Workers.
zeromq is also becoming quite popular as an inter-process communication method. It might be worth a look: http://www.zeromq.org/ and https://github.com/JustinTulloss/zeromq.node
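For instance, a small sketch with the classic zeromq.node push/pull sockets linked above (the newer zeromq v6 package exposes a different, promise-based API; addresses and payloads are illustrative):

```js
const zmq = require('zeromq');

// Producer process: push room updates to whoever is pulling.
const push = zmq.socket('push');
push.bindSync('tcp://127.0.0.1:3500');
setInterval(() => push.send(JSON.stringify({ room: 'lobby', count: 7 })), 1000);

// Consumer process: receive updates as they arrive.
const pull = zmq.socket('pull');
pull.connect('tcp://127.0.0.1:3500');
pull.on('message', (msg) => console.log('update:', msg.toString()));
```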
