NodeJS Clustering Issues

I am looking for a way to share the same data structure (which contains functions, so JSON is not an option) across all cluster instances within NodeJS. I have a data structure called 'Users' that tracks user sessions and contains the functions they have access to. I need to be able to share this data structure across all Node processes, or I need an alternative design pattern. Does anyone know of any solutions to this issue? Thanks

I realize this is old and answered, but it may be beneficial to others to note an alternative. The recommended way to handle a situation like this is to place the data structure and its functions in a separate file and require it when needed. This essentially pulls in the "code/functions", while the data itself is stored (serialized/deserialized) in any data store, as in the sketch below.
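A minimal sketch of that split, assuming Redis as the store and the classic callback-style node_redis client (the module name, key scheme, and functions are made up for illustration):

```js
// users.js -- the "code" half: every worker gets this via require('./users')
const redis = require('redis');
const client = redis.createClient();

// Functions live in the module, not in the stored data.
exports.canAccess = (user, feature) => user.permissions.includes(feature);

// The data half is plain serializable state in a shared store.
exports.saveUser = (user, cb) =>
  client.set(`user:${user.id}`, JSON.stringify(user), cb);

exports.loadUser = (id, cb) =>
  client.get(`user:${id}`, (err, raw) => cb(err, raw && JSON.parse(raw)));
```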

There are multiple options for setting up proper IPC (inter-process communication) on Node.js:
using a document/key-value storage solution like Redis (key-value) or MongoDB (NoSQL document storage)
using the integrated IPC functionality of the cluster module (see the send method)
Deciding which of those solutions fits best depends on your requirements and your project setup. For our last project, I decided to use both methods:
IPC for triggering jobs and dispatching partial tasks to different Node.js instances (see the sketch below)
Redis for centralized session- and API-token management
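For the IPC part, here is a minimal sketch of the cluster module's built-in send/message channel (the job payload and file name are made up for illustration):

```js
// cluster-ipc.js -- minimal sketch of cluster's built-in IPC
const cluster = require('cluster');

if (cluster.isMaster) {
  const worker = cluster.fork();
  // The master dispatches a partial task to the worker...
  worker.send({ job: 'resize-image', id: 42 });
  worker.on('message', (msg) => console.log('worker finished job', msg.id));
} else {
  process.on('message', (msg) => {
    // ...and the worker does the work and reports back.
    process.send({ id: msg.id, status: 'done' });
  });
}
```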
If you are using Express, I highly recommend the Redis session middleware connect-redis. This middleware automatically handles centralized session management for Express-based applications (which also means you can store complex JS objects and have access to them from all your instances).
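A minimal wiring sketch (shown with the classic connect-redis v3-style API; newer versions of the package construct the store differently, and the session secret is a placeholder):

```js
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session); // v3-style API
const redis = require('redis');

const app = express();

app.use(session({
  store: new RedisStore({ client: redis.createClient() }),
  secret: 'replace-with-a-real-secret',
  resave: false,
  saveUninitialized: false,
}));

// req.session is now read from and written to Redis,
// so every clustered instance sees the same sessions.
```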

Related

How to share DB schema when there are two backend servers?

I am developing a server setup that consists of one front-end and two back-ends. So far, I have completed the development of one back-end, and I want to develop the other one. Both are Express servers, and the DB is MongoDB. I am developing using the mongoose module, and I want to share a collection (i.e. a schema) between them. But I have already created a model file on one server, so I am wondering whether I need to duplicate that model file on the server I am developing now. Because if I modify the model file later, I would have to modify both.
If there is a good way, please let me know with an example.
Thank you.
I have two answers for you: one is direct, and the other introduces the concept of microservices.
Answer 1 - Shared module (NPM or GIT)
You can create an additional project that will be an NPM lib (it can be installed via NPM or as a git submodule).
This lib will expose a factory method that accepts the mongoose options and returns the mongoose connection.
Using a single shared module makes it easier to update each backend after updating the DB (though it is a bit cumbersome if you have many backends).
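A minimal sketch of such a factory (the package layout, schema fields, and model name are made up):

```js
// shared-models/index.js -- published as an NPM package or git submodule
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  name: String,
  email: String,
});

// Factory: accept connection options, return a connection
// with every shared model registered on it.
module.exports = function createConnection(uri, options) {
  const conn = mongoose.createConnection(uri, options);
  conn.model('User', userSchema);
  return conn;
};
```

Each backend then calls require('shared-models')('mongodb://...') and gets identical models, so a schema change happens in exactly one place.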
Answer 2 - The microservice approach
In the microservice approach, each service (backend) manages its own DB and only it. This means that each service needs to expose an internal API for other services to use.
So instead of sharing lib, each service has a well-defined internal API that other services can use.
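To make that concrete, here is a minimal sketch of a service that owns its DB and exposes an internal API (plain Express rather than a microservice framework; the route, port, and model are made up):

```js
// user-service/server.js -- only this service talks to the users DB
const express = require('express');
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/user-service');
const User = mongoose.model('User', new mongoose.Schema({ name: String }));

const app = express();

// Internal API: other services call this instead of touching the DB.
app.get('/internal/users/:id', async (req, res) => {
  const user = await User.findById(req.params.id);
  if (!user) return res.status(404).end();
  res.json({ id: user.id, name: user.name });
});

app.listen(4001);
```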
I would recommend looking into NestJS (a Node.js microservice framework) to get a better feel for how to approach microservices.
It goes without saying that I prefer Answer 2, but it's more complex and you may need to learn more before giving it a go. I highly recommend it, though, because microservices (if implemented right) will make your code more future-proof.

Can the same Redis instance be used manually alongside kue.js?

I am using kue.js, which is a redis-backed priority queue for node, for pretty straightforward job-queue stuff (sending mails, tasks for database workers).
As part of the same application (albeit in a different service), I now want to use Redis to manually store some mappings for a URL shortener. Does concurrent manual use of the same Redis instance and database as kue.js interfere with kue, i.e., does kue require exclusive access to its Redis instance?
Or can I use the same Redis instance manually as long as I, e.g., avoid certain key prefixes?
I do understand that I could use multiple databases on the same instance, but I have found a lot of chatter from various sources that discourages the use of the database feature, as well as talk of it being deprecated in the future, which is why I would like to use the same database for now if safely possible.
Any insight on this as well as considerations or advice why this might or might not be a bad idea are very welcome, thanks in advance!
I hope I am not too late with this answer, I just came across this post ...
It should be perfectly safe. See the README, especially the section on redis connections.
You will notice that each queue can have its own prefix (the default is q), so as long as you are aware of how prefixes are used in your system, you should be fine. I am not sure why it would be a bad idea as long as you know about the prefixes and the load put on the Redis server by the various apps hitting it. Can you reference a post/page where this was described as a bad idea?
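A minimal sketch of keeping the two uses apart on the same instance and database (the short-URL key scheme is made up; the Redis client is the classic callback-style node_redis):

```js
const kue = require('kue');
const redis = require('redis');

// kue namespaces all of its keys under its prefix ('q' by default).
const queue = kue.createQueue({
  prefix: 'q',
  redis: { port: 6379, host: '127.0.0.1' },
});

queue.create('email', { to: 'user@example.com' }).save();

// Manual use of the same instance/database: just stay out of the q:* namespace.
const client = redis.createClient();
client.set('shorturl:abc123', 'https://example.com/some/long/path');
```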

What does building an application in Arango Foxx offer beyond a regular node application

I'm learning more about ArangoDB and its Foxx framework. But it's not clear to me what I gain by using that framework over building my own standalone Node.js app for API/access control, logic, etc.
What does Foxx offer that a regular nodejs app wouldn't?
Full disclosure: I'm an ArangoDB core maintainer and part of the Foxx team.
I would recommend taking a look at the webinar I gave last year for a detailed overview of the differences between Foxx and Node and the advantages of using Foxx when you are using ArangoDB. I'll try to give a quick summary here.
If you apply ideas like the Single Responsibility Principle to your architecture, your server-side code has two responsibilities:
Backend: persist and query data using the backend data storage (i.e. ArangoDB or other databases).
Frontend: transform the query results into a format acceptable for the client (e.g. HTML, JSON, XML, CSV, etc).
In most conventional applications, these two responsibilities are fulfilled by the same monolithic application code base running in the same process.
However, the task of interacting with the data storage usually requires writing a lot of code that is specific to the database technology. You need to write queries (e.g. using SQL, AQL, ReQL or any other technology-specific language) or use database-specific drivers.
Additionally, in many non-trivial applications you need to interact with things like stored procedures, which are also part of the "backend code" but live in the database. So in addition to having the application server do two different tasks (storage and rendering), half the code for one of the tasks ends up living somewhere else, often using an entirely different language.
Foxx lets you solve this problem by allowing you to move the logic we identified as the "backend" of your server-side code into ArangoDB. Not only can you hide all the nitty gritty of query languages, edges and collections behind a more application-specific API, you also eliminate the network overhead often necessary to handle requests that would cause more than a single roundtrip to the database.
For trivial applications this may mean that you can eliminate the Node server completely and access your Foxx API directly from the client. For more complicated scenarios you may want to use Node to build external micro services your Foxx service can tap into (e.g. to interface with external non-HTTP APIs). Or you just put your conventional Node app in front of ArangoDB and use Foxx to create an HTTP API that better represents your application's problem domain than the database's raw HTTP API.
It's also worth keeping in mind that structurally Foxx services aren't entirely dissimilar from Node applications. You can use NPM dependencies and split your code up into modules and it can all live in version control and be deployed from zip bundles. If you're not convinced I'd suggest giving it a try by implementing a few of your most frequent queries as Foxx endpoints and then deciding whether you want to move more of your logic over or not.
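As a starting point, a minimal sketch of such an endpoint (the users collection and response shape are made up):

```js
'use strict';
// Runs inside ArangoDB as a Foxx service, not in a separate Node process.
const createRouter = require('@arangodb/foxx/router');
const { db } = require('@arangodb');

const router = createRouter();
module.context.use(router);

// Hide the collection and query details behind an application-specific route.
router.get('/users/:id', (req, res) => {
  const user = db._collection('users').document(req.pathParams.id);
  res.send({ name: user.name, email: user.email });
});
```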

Two nodejs applications, one mongodb database

Good day! I have created an application using Node.js + mongoose and now I want to make something like a superuser application. I need my admin panel application to connect to the same database. So I have a question.
Should I store the same schema file in both applications to have the ability to use my schema methods? In other words, what is the best way to create one more API using the same DB?
Thank you!
If I'm not mistaken, why not create another service that only interacts with the database? That way, the systems will refer to the same schema/DB regardless of which application connects to it. So the superuser application and the normal application will just query the DB microservice that interacts with the database.
Pro: a single source of truth for the schema across all applications, and the DB queries become plain API calls
Con: additional overhead in creating your ecosystem
If you are using the same DB from two different applications, you will want to make sure those schemas are the same between the two. If one changes its inputs, the other might need to change its display (or risk not expecting all that information). Keep all this in mind during your release process.
I would suggest making the schemas an external library for both (as sketched below), or having the admin panel require the current app. You'll avoid getting two sets out of sync and will know to look in one place for the schema definitions.
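A minimal sketch of the shared-library option (the package name, fields, and instance method are made up):

```js
// @myapp/schemas/index.js -- one package required by both the app and the admin panel
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({ email: String, role: String });

// Schema methods travel with the package, so both apps get them.
userSchema.methods.isAdmin = function () {
  return this.role === 'admin';
};

module.exports = { User: mongoose.model('User', userSchema) };
```

Each app calls mongoose.connect(...) itself before requiring the package; note this assumes both apps resolve to the same mongoose module instance (e.g. via a peer dependency), otherwise the model gets registered on the wrong connection.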

Multi node.js instances - communication over HTTP

I'm currently writing an application which has a sockets engine running on one Node.js instance and a RESTful API for various resources on another. I'm wondering what would be the best way (performance-wise) to communicate between these two instances if they were located on different servers. I don't want to duplicate data access logic on different instances; I want to keep everything related to data access in one place.
I thought about using a simple HTTP API where the sockets engine would make a request to the resources instance and get what it asked for, but I'm not sure whether I would hit any performance bumps. In fact, I could connect straight to MongoDB and get the data I want from there without making a request to the other service, but I feel it would be nice to isolate the data so that it could only be used by the resources instance, which would then be the only application that uses the MongoDB instance.
Are there any better ways to achieve something similar?
