Building a Node application that has somewhat coupled services - node.js

I am building a Node application that has a standard front-end API for a SPA and a back-end service that is constantly polling another API. I am going through the Node documentation on clustering, forking, and workers; I am a bit confused and have been unable to come across anything that adequately addresses the question I have.
I would like to have the two different APIs on two different server cores but use the same codebase. I have experimented with two Node entry files, app.js and app1.js; both work just fine on the same server, but this was a very simple implementation.
Is this wise to do? As stated above, I will need the same codebase available to both services so they can perform the business logic.
I am having a hard time finding examples or envisioning what this would look like. Would I just use the two entry files, app.js and app1.js? What about getting these services onto different processor cores? Is this possible? I am assuming I could communicate between them using IPC, but do I use clustering? Forking?
A quick and simple example would be very useful, as I am still pretty new and this is very advanced for me at this stage, but I just need a small boost in how to get this rolling. Thanks for any help.

Related

How to implement Bull-Board and error-handling in a non web server NodeJS application?

To preface, I'd like to apologize if this question is too general or has been asked before. I'm struggling with some concepts here, truly don't know what I don't know, and have no idea where to look. For context, I'm a full-stack dev whose experience has mainly been writing web servers using Node.js. In such applications, implementing BullMQ, bull-board, and error handling is fairly straightforward, since you can use middleware and bull-board ships with server-framework-specific adapters.
Currently I am working on a project written in Node.js and TypeScript. It listens to events emitted by the new Opensea Stream package (a socket-based application) and adds jobs to a BullMQ queue accordingly. So on startup it connects to the database, connects to the Opensea socket, and then acts on emitted events. In my mind I would classify this as a job/worker/process sort of application. I've had a couple of issues building out this project and understanding how it's made and its place in the web application architecture.
Consequently I have come up with a couple of questions:
What are these sorts of applications called? I've had a hard time looking up answers to my questions since I don't know how to categorize this application. When I try to Google 'Node.js Job Application', as expected, it gives me links to job postings.
Am I using the wrong tools for the job? Is Node.js only supposed to be used for writing web servers, and not for such job-based applications?
How does error handling work here? With web servers, if an error shows up in the server, the middleware catches the error and sends a response to the client. In such job applications, what happens if an error is thrown? Is anything done with the error, or is it just logged? Is the job rerun, cancelled, etc.?
How do I implement bull-board here to graphically observe my queue? Since I'm not using a web framework in this application, how is the integration done? Is it even possible?
How different are socket-based architectures from REST API servers? Are they a completely different use case/style of implementation, or can they have the same server architecture and just be different ways of communicating data between processes? Are sockets more for microservices?
Are there any resources or books that detail how these applications are supposed to be built? Any guidance on best practices or architecture would be greatly appreciated.
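On the error-handling question: queue libraries generally catch whatever the job handler throws, mark the job as failed, and optionally retry it. In BullMQ this is driven by the `attempts` and `backoff` job options and surfaced through the worker's `failed` event. A dependency-free sketch of those semantics (the names here are invented for illustration, not BullMQ's API):

```javascript
// Invented sketch of queue-style error handling: the handler's exception is
// caught, the job is retried up to `attempts` times, and the final failure is
// recorded rather than crashing the process.
async function runWithRetries(job, handler, { attempts = 3 } = {}) {
  let lastErr;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const result = await handler(job);
      return { status: 'completed', result, attempt };
    } catch (err) {
      lastErr = err;
      // A real queue would also apply a backoff delay between attempts here.
      console.error(`job ${job.id} attempt ${attempt} failed: ${err.message}`);
    }
  }
  return { status: 'failed', error: lastErr.message, attempt: attempts };
}
```

As for bull-board: it needs an HTTP server to mount its UI on, but it does not need your app to be one. A common pattern is to start a tiny Express (or similar) server alongside the socket listener purely to host the dashboard, using bull-board's Express adapter.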

How do I make NodeJS applications scalable

I am designing a chat application in Node.js using Express, MongoDB, and Socket.IO. What points should I keep in focus while designing the architecture for this application? The target audience for this app is going to be more than 50K users using it concurrently.
I have previously in my career designed apps that were used by 2K end users at most, but this is something new for me. I did a lot of research on it and came up with the following points.
1- Start using queuing services like RabbitMQ
2- Increase your server space/RAM as the usage increases.
Can someone please point me in the right direction to a book on Node.js architecture patterns and scalability? A guide, a walkthrough, any sort of help is highly appreciated.
Here are some tips:
You should take a look at the Cluster module; you can also use wrk for HTTP benchmarking.
Make sure you use caching.
If you are using Docker, you should use swarm mode.
Use Amazon EC2: https://aws.amazon.com/ec2/
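On the caching tip: before reaching for Redis, it helps to see the mechanism in a tiny in-memory sketch. A TTL map like this (invented for illustration, not a library API) can already absorb a lot of repeated reads; in a clustered or multi-server deployment you would move the same idea into Redis or Memcached so that all workers share one cache.

```javascript
// Minimal in-memory cache with a per-entry time-to-live (TTL).
class TTLCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  set(key, value) {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}

// Typical use: cache an expensive lookup (DB query, API call) for a few seconds.
const cache = new TTLCache(5000);
cache.set('user:42', { name: 'Ada' });
```

For a chat app at 50K concurrent users, the cache is only one piece; the Cluster module (or multiple machines behind a load balancer) plus a shared store is what actually lets you add capacity.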

Node.js send event to another running application

I'm looking for a way to make my two running apps communicate with each other.
I have two apps, main.js and events.js, and I want to send events from events.js to main.js. I don't even know what to Google for this, because everything I can find seems a little bit outdated, and I'm looking for something other than Redis. I found out I can use uuid to communicate between different Node.js processes, but I don't know how. Any information would be great!
Your best bet is to use a message queue system similar to Kue.
This way you will be able to make your apps communicate with each other even if they are on different servers.
If you want to work without the Redis backend, you can skip the filesystem entirely and move to socket/TCP communication, which is one way of getting around semaphores. This is a common method of low-latency communication used by applications on the same machine or across the network. I have seen and used this method in games as well as desktop/server applications.
Depending on how far down the rabbit hole you want to go, there are a number of useful guides on this. The Node.js TCP API (the net module) is actually pretty great.
For example:
https://www.hacksparrow.com/tcp-socket-programming-in-node-js.html

Do I need 2 separate applications in Node.js, one for visitors and one for CRUD admins?

Intro
I'm trying out Node.js right now (coming from a PHP background).
I'm already catching the vibe of the workflow with it (events, promises, inheritance... haven't figured out streams yet).
I've chosen a graphic portfolio web app as my first Node.js project. I know Node.js might not fit this use case best, but it's a good playground and I need to do this anyway.
The concept:
The visitors will only browse through pretty pictures in albums; no logging in or subscriptions, nothing.
The administrators will add, modify, reorder... CRUD the photo albums. So there I need auth, ACL, validation, ImageMagick... a lot more than just on the front end.
Currently I'm running one instance of Node.js, so both the admin and visitor code are in one shared codebase and one shared Node runtime, which looks unnecessary to me performance-wise.
Question
For both performance and usability:
Should I continue running one instance of Node for both the admin and visitor areas of the web app, or should I run them as two separate instances? (Or subtasks? Honestly, I haven't worked with subtasks/child processes.)
Ideas floating around
Use nginx as a proxy if splitting into two applications.
Look into https://stackoverflow.com/a/5683938/339872. There are some interesting mentions of tools to help manage processes.
I would set up admin.mysite.com and have that hosted on another server... or use nginx to proxy requests from that domain to your admin.js Node app.
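The nginx-as-proxy idea above can be sketched like this. The ports, the admin.mysite.com subdomain, and the two-process layout are assumptions for illustration, not anything fixed by the question:

```nginx
# Route the admin subdomain to a separate Node process; visitors hit the main app.
server {
    listen 80;
    server_name admin.mysite.com;
    location / {
        proxy_pass http://127.0.0.1:3001;   # admin.js instance (assumed port)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name mysite.com;
    location / {
        proxy_pass http://127.0.0.1:3000;   # visitor app instance (assumed port)
        proxy_set_header Host $host;
    }
}
```

This keeps one codebase possible (both processes can require the same modules) while letting the heavier admin stack stay out of the visitor app's memory and event loop.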

Is there a compelling reason to use an AMQP based server over something like beanstalkd or redis?

I'm writing a piece of a project that's responsible for processing tasks outside of the main application-facing data server, which is written in JavaScript using Node.js. It needs to handle tasks scheduled for the future and potentially tasks needed "right now". "Right now" just means that the next time a worker becomes available it will operate on that task, so that bit might not matter. The workers are all going to talk to external resources; an example job would be sending an email. We are a small shop and we don't have a ton of resources, so one thing I don't want to do is start mixing languages at this point in the process, and I can already see that Node can do this for us pretty easily, so that's what we're going to go with unless I see a compelling reason not to before I start coding, which is soon.
All that said, I can't tell whether there is a compelling reason to use an AMQP-based server, like OpenAMQ or RabbitMQ, over something like Kue or beanstalkd with a Node client. So, here we go:
Is there a compelling reason to use an AMQP-based server over something like beanstalkd or Redis with Kue? If yes, which AMQP-based server would fit best with the architecture I laid out? If no, which NoSQL solution (beanstalkd, Redis/Kue) would be easiest to set up and fastest to deploy?
FWIW, I'm not accepting my answer yet; I'm going to explain what I've decided and why. If I don't get any answers that appear to be better than what I've decided, I'll accept my own later.
I decided on Kue. It supports multiple workers running asynchronously, and with cluster it can take advantage of multicore systems. It is easily extended to provide security. It's backed by Redis, which is used all over for exactly this sort of thing, so I know I'm not backing my job-process server with unproven software (that's not to say that any of the others are unproven).
The most compelling reason I picked Kue is that it provides a JSON API, so that client applications (the first client is going to be a web-based application, but we're planning on making smartphone apps as well) can add jobs easily without going through the main application-facing Node instance. That means I can stay totally out of the way of the rest of my team while I write this: I don't need a route, I don't need anything, it's all provided for me. It has another advantage: with an extension providing login/password security, only authorized clients can add jobs, so I don't have to expose my Redis server to client applications directly. It also has a built-in web console, and the API allows the client to pull back lists of jobs associated with a given user very easily, so we can show each user all of their scheduled tasks in a nifty calendar view with zero effort on my part.
The other compelling reason is the lack of a steep learning curve in getting Redis and Kue going. I've set up Redis before, and Kue is simple and effective.
Yes, I'm a lazy developer, but I'm the good kind of lazy developer.
UPDATE:
I have it working and processing jobs, and the throughput is amazing. I split the task-marshaling logic out into its own Node instance; basically, all I have to do to scale out my workers is deploy my repo to a new machine and run node task-server.js. I may need to add some more job-searching calls to Kue because of how I implemented a few things, but that will be easy.
