Monitoring React/NodeJS/MongoDB application in AWS - node.js

I have React/NodeJS/MongoDB application running in AWS and just wondering what is the best way to monitor the application's health and performance. Is it using specialist tools like Dynatrace or using AWS services like CloudWatch Alarms and Application Insights?

Monitoring is a bit of a complex topic, so let's split it into separate sub-topics.
React. Since it runs on the front end, there is little you can monitor server-side; the main thing you can do is integrate something like Sentry into your application and report the errors users hit in the browser. (You can integrate the same thing into the back end as well; it definitely won't hurt.)
Node.js. It depends on how you are running your application. For example, if you are running it on EC2, you can use CloudWatch plus some custom metrics to monitor instance health. If you are using Kubernetes, there are health checks Kubernetes can perform, plus monitoring of the instances your cluster runs on. If we are speaking about AWS Lambda... and so on :)
MongoDB. Again, it depends on how you are running it: on EC2, as DocumentDB, or maybe you are using MongoDB Atlas...
Overall, you can use CloudWatch as the native AWS solution; alternatively, you could integrate something like New Relic for metrics and Sumo Logic for logs.

Related

Can I use serverless framework without cloud providers?

In my company we don't use AWS or Azure or any other cloud provider yet, so we deploy our Node.js applications (based on Express.js) on physical or virtual machines (Linux).
However, is it possible to use the Serverless Framework just like an Express.js server?
I saw the serverless-offline plugin, and it launches a server (endpoint, function) which I can access.
But can I use serverless this way in production?
No, the Serverless Framework is really only useful if you're deploying to a cloud (and mostly that means AWS). serverless-offline will run a small Node.js server, but it's emulating AWS Lambda. So if you'll never use Lambda, there's no real point in emulating it.
In your case, just run a regular nodejs server.
The Serverless Framework is not designed for such a use case; it is a tool for deployments to cloud environments. Offline mode is just a simulation meant to be run in a local environment, not in production.
Another option would be to use a process manager like PM2. To deploy to the virtual machine you can use tools like Ansible, and then PM2 will take care of the runtime.
It also has some neat features: for example, if your server crashes, it can automatically revive it, and cluster mode can run multiple Node instances on the virtual machine to utilize all the cores of the CPU, which can boost performance if you run a stateless cluster. It covers more than serverless-offline is designed for, and if you run without containers this would be my next best bet.

What is the production deployment / runtime architecture of ResolveJS backend systems?

Does the reSolveJS generally run as a single NodeJS application on the server for production deployment?
Of course, event store and read models may be separate applications (e.g. databases) but are the CQRS read-side and write-side handled in the same NodeJS application?
If so, can / could these be split to enable them to scale separately, given the premise of CQRS is that the read-side is usually much more active than the write-side?
The reSolve Cloud Platform may alleviate these concerns, given the use of Lambdas that can scale naturally. Perhaps this is the recommended production deployment option?
That is, develop and test as monolith (single NodeJS application) and deploy in production to reSolve Cloud Platform to allow scaling?
Thanks again for developing and sharing an innovative platform.
Cheers,
Ashley.
A reSolve app can be scaled like any other Node.js app, using containers or any other scaling mechanism.
So several instances can work with the same event store, and it is possible to configure several instances to work with the same read database, or for every instance to have its own read database.
reSolve config logic is specified in the app's run.js code, so you can extend it to have different configurations for different instance types.
Or you can have the same code in all instances and just route commands and queries to different instance pools.
Of course, reSolve Cloud frees you from these worries; in that case you use local reSolve as the dev and test environment and deploy there.
Please note that reSolve Cloud has not been publicly released yet. Also, local reSolve does not have all the required database adapters at the moment, so some are yet to be written.

Profile node.js application running in aws fargate

I have a Node.js application that runs inside Docker on AWS ECS Fargate.
It started to consume high CPU, and I wonder if I can profile it.
I couldn't find a way to connect via SSH, and I am not sure whether running it with the --prof flag would help.
I am a newbie in AWS myself, so please double-check everything I say. Fargate provisions the underlying instances for you, and you are not allowed to interact with them directly (no SSH), but I think you can use CloudWatch Logs, which records every console.log of your app in the specified log group. There must be some configuration for this when you create your task definition or container definition (at least in CloudFormation, which I highly recommend using). You can console.log the number of users or function calls and use this info to debug what is happening.
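Building on the console.log suggestion above, one cheap way to watch where CPU time goes is to periodically log process metrics, which land in CloudWatch Logs through the container's log driver (the interval and field names below are illustrative):

```javascript
// Sample the CPU time (reported in microseconds) and memory used by this
// process since the previous sample; each JSON line ends up in CloudWatch
// Logs when the container's stdout is shipped there.
function sampleUsage(prevCpu) {
  const cpu = process.cpuUsage(prevCpu);
  const mem = process.memoryUsage();
  return {
    cpuUserMs: cpu.user / 1000,
    cpuSystemMs: cpu.system / 1000,
    rssMb: Math.round(mem.rss / 1024 / 1024),
  };
}

let prev = process.cpuUsage();
const timer = setInterval(() => {
  console.log(JSON.stringify(sampleUsage(prev)));
  prev = process.cpuUsage();
}, 10000);
timer.unref(); // don't keep the process alive just for the sampler
```

As for --prof: it does work inside the container, but it writes an isolate-*.log file to the container filesystem that you would still have to copy out somehow, which is awkward without shell access, so log-based sampling is often the easier first step on Fargate.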

Deploy node.js in production

What are the best practices for deploying a nodejs application in production?
I would like to know how Node.js APIs are deployed to production today; right now my application is in Docker and running locally.
I wonder if I should run Nginx inside the container and serve my app behind it, or just push the Node image that is already running today.
*I need load balancing
There are a few main types of deployment that are popular today.
Using platform as a service like Heroku
Using a VPS like AWS, Digital Ocean etc.
Using a dedicated server
This list is in order of growing difficulty and control. So it's easiest with a PaaS, but you get more control with a dedicated server, though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS using a NAT gateway, with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which NPM libraries you will need for production, how you handle environment variables and clusters for cores.
I would suggest, very strongly, using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs (and workers as well, if you need them and write code for them).
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a Gateway correctly on AWS is also not an easy process. Be prepared for some gotcha's along the way.

How can I deploy a web process and a worker process with Elastic Beanstalk (node.js)?

My heroku Procfile looks like:
web: coffee server.coffee
scorebot: npm run score
So within the same codebase, I have 2 different types of process that will run. Is there an equivalent to doing this with Elastic Beanstalk?
Generally speaking, Amazon gives you much more control than Heroku. With great power comes great responsibility: the increased power brings increased configuration steps. Amazon performs optimizations (both technical and billing) based on what tasks you're performing. You configure web and worker environments separately and deploy to them separately. Heroku does this for you, but in some cases you may not want to deploy both at once. Amazon leaves that configuration up to you.
Now, don't get me wrong, you might see this as a feature of Heroku, but in advanced configurations you might have entire teams working on and redeploying workers independently of your web tier. This means that the default on Amazon is basically that you set up two completely separate apps that might happen to share source code (but don't have to).
Basically the answer to your question is no, there is nothing that will let you do what you're asking in as simple a manner as with Heroku. That doesn't mean it is impossible; it just means you need to set up your environments yourself instead of Heroku doing it for you.
For more info see:
Worker "dyno" in AWS Elastic Beanstalk
http://colintoh.com/blog/configure-worker-for-aws-elastic-beanstalk
