Profile a Node.js application running in AWS Fargate - node.js

I have a Node.js application that runs inside Docker on AWS Fargate.
It has started to consume high CPU, and I wonder if I can profile it.
I couldn't find a way to connect via SSH, and I'm not sure whether it would help to run it with the --prof flag.

I am a newbie in AWS myself, so please double-check everything I say. Fargate provisions EC2 instances for you, and you are not allowed to interact with them directly (no SSH), but I think you can use CloudWatch Logs, which captures every console.log of your app in the specified log groups. There is some configuration for this when you create your task definition or container definition (at least in CloudFormation, which I highly recommend using). You can console.log the number of users or function calls and use this info to figure out what is happening.
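Since CloudWatch Logs is the only feedback channel here, one way to act on this advice is to wrap the functions you suspect and log how long they take. A minimal sketch of that idea (handleRequest is a hypothetical stand-in for your own code):

    // Wrap a suspect function so every call logs its duration; in Fargate
    // the console output ends up in the task's CloudWatch Logs log group.
    function timed(name, fn) {
      return async (...args) => {
        const start = process.hrtime.bigint();
        try {
          return await fn(...args);
        } finally {
          const ms = Number(process.hrtime.bigint() - start) / 1e6;
          console.log(`${name} took ${ms.toFixed(1)} ms`);
        }
      };
    }

    // Hypothetical usage: replace a hot-path function with its timed wrapper.
    const handleRequest = timed('handleRequest', async (req) => {
      // ... your existing logic ...
    });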

Related

Monitoring React/NodeJS/MongoDB application in AWS

I have a React/NodeJS/MongoDB application running in AWS, and I'm wondering what the best way is to monitor the application's health and performance. Is it using specialist tools like Dynatrace, or AWS services like CloudWatch Alarms and Application Insights?
Monitoring is a bit of a complex topic, so let's split it into separate sub-topics.
React. There is little you can monitor directly, since it runs on the FE; about the only thing you can do is integrate something like Sentry into your application and report the errors that occur on the FE. (You can integrate the same thing into the BE as well; it definitely won't hurt.)
Node.js. It depends on how you are running your application. E.g. if you are running it on EC2, you can use CloudWatch plus some custom metrics to monitor instance health. If you are using Kubernetes, there are health checks Kubernetes can perform, plus monitoring of the instances your cluster runs on. If we are talking about AWS Lambda... and so on :)
MongoDB. Again, it depends on how you are running it: on EC2, on DocumentDB, or maybe on MongoDB Atlas...
Overall, you can use CloudWatch as the native AWS solution; alternatively, you could integrate something like New Relic for metrics and Sumo Logic for logs.
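To make the "CloudWatch + some custom metrics" suggestion concrete, here is a rough sketch using the AWS SDK for JavaScript v3; the namespace, metric name, and region are made up for illustration:

    // Publish a custom metric to CloudWatch; a CloudWatch Alarm can then watch it.
    const { CloudWatchClient, PutMetricDataCommand } = require('@aws-sdk/client-cloudwatch');

    const cloudwatch = new CloudWatchClient({ region: 'us-east-1' }); // assumed region

    async function reportActiveUsers(count) {
      await cloudwatch.send(new PutMetricDataCommand({
        Namespace: 'MyApp',            // hypothetical namespace
        MetricData: [{
          MetricName: 'ActiveUsers',   // hypothetical metric
          Value: count,
          Unit: 'Count',
        }],
      }));
    }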

Deploy NodeJS application on EC2 via launch template

I built a simple NodeJS application and ran it on an EC2 instance.
Everything works fine. I decided to create an AMI (Amazon Linux based) and a launch template to be used by an ASG.
The problem is, I cannot get the application to start automatically.
I tried adding the following command through the user_data field, but it doesn't work:
node main.js
Any ideas on how to automatically start this application once it is launched by the ASG?
Typically you would add this to the startup script of the AMI, so once the instance has started it will run the script on boot.
You may want to look at PM2 as well; it's a great tool for things like this and also makes it easy to set up each Node instance in cluster mode (assuming you have an EC2 instance with more than one core).
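For example, a minimal PM2 ecosystem file (the app name is hypothetical; the entry point is assumed to be the main.js from the question) would look something like this; after starting it, pm2 startup plus pm2 save make the app come back on boot:

    // ecosystem.config.js - start with: pm2 start ecosystem.config.js
    module.exports = {
      apps: [{
        name: 'my-app',        // hypothetical name
        script: 'main.js',     // entry point from the question
        instances: 'max',      // cluster mode: one worker per CPU core
        exec_mode: 'cluster',
      }],
    };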
Another way of doing this, albeit without auto scaling: DigitalOcean offers a PaaS product called 'Apps' that basically pushes your app into a container from a git repo and deploys it; you can then just spin up new instances as needed. The downside is that the bandwidth allowance is a bit small, but a CDN etc. can help with that.

How is it possible to debug an AWS Lambda function remotely?

We are taking over a whole application from another company. They built the whole deployment pipeline, but we still don't have access to it. What we know is that there is a Lambda function, written in Node.js, that is triggered by certain SNS messages, and that development is done in VS Code. We also have issues debugging it locally, but the bigger problem is that we need to debug it remotely.
Since I am new to AWS services, I'd really appreciate it if somebody could help me with this.
Is it necessary to open a port? How is it possible to connect to a Lambda? Do we need to set up Serverless? Many unresolved questions.
I don't think there is a way to debug a Lambda function remotely. Your best bet is to download the code to your local machine, set up the same environment variables you have configured on the Lambda function, and take it from there.
Remember, at the end of the day a Lambda is just a container running your code, and AWS doesn't allow SSH or any direct connection to those containers. In your case you should be able to debug it locally as long as you have the same environment variables. There are other Lambda-specific concerns as well, but since you have the running code, you should be able to find the issue.
Hope it makes sense.
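In practice that can be as simple as a small local harness that feeds the handler a hand-built SNS event; this sketch assumes the handler is exported as an async handler from index.js (adjust the names and env variables to your function):

    // local-invoke.js - run with: node local-invoke.js
    process.env.MY_TABLE = 'dev-table'; // hypothetical env var copied from the Lambda config

    const { handler } = require('./index');

    // A minimal SNS event, trimmed to the fields most handlers actually read.
    const event = {
      Records: [{
        Sns: {
          Message: JSON.stringify({ hello: 'world' }),
          TopicArn: 'arn:aws:sns:us-east-1:123456789012:my-topic',
        },
      }],
    };

    handler(event, {})
      .then((result) => console.log('result:', result))
      .catch((err) => console.error('error:', err));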
Thundra (https://www.thundra.io/aws-lambda-debugger) has live/remote debugging support for AWS Lambda through its native IDE plugins (VSCode and IntelliJ IDEA).
The way AWS has you 'remote' debug is to execute the Lambda locally through Docker while it proxies requests to the cloud for you, using the AWS Toolkit. You have a Lambda running on your local computer via Docker that can access resources in the cloud, such as databases, APIs, etc. You can step through it in a debugger using editors like VS Code.
I use SAM with a template.yaml. This way, I can pass event data to the handler, reference dependency layers (shared code libraries), and have a deployment manifest for creating a CloudFormation stack (a released instance with history and resource management).
Debugging can be a bit slow as it compiles, deploys to Docker, and invokes, but it allows step-through debugging and variable inspection.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
While far from ideal, any console-printing statements will likely get logged to CloudWatch, which you can then access to go through the printed data.
For local debugging, there are many GitHub projects with Dockerfiles from which you can build a Docker container locally, just like AWS does when your Lambda is invoked.

Shutdown system or stop AWS EC2 instance from NodeJS

I have AWS EC2 instances running Debian with systemd running Node as a service. (Hereinafter these instances are called the "Node servers".)
The Node servers are started by another instance (hereinafter called "the manager instance") that is permanently on.
When a Node server experiences some predefined period of inactivity, I want it to shut down automatically.
I am considering the following options:
(After sensing a period of inactivity in Node) execute a child_process in Node that runs the shutdown now command (see the sketch after this question).
(After sensing a period of inactivity in Node) call AWS SDK's stopInstances with the instance's own resource ID.
Expose an HTTP GET endpoint called last-request-time on each Node server, which is periodically polled by a "manager instance", which then decides whether/when to call AWS SDK's stopInstances.
I am unsure which of these approaches to take and would appreciate any advice. Explicitly shutting down a machine from Node running on that same machine feels somehow inappropriate. But option 3 requires periodic HTTP polling, not to mention that it feels riskier to rely on another instance for auto-shutdown. (If the manager is down, all the instances keep running.)
Or perhaps it is possible to get systemd to shut down the machine when a particular service exits with a particular code? This, if possible, would feel like the best solution, as the Node process would only need to exit with a particular code after the period of inactivity.
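For reference, option 1 amounts to very little code; this sketch assumes the Node process is allowed to run shutdown (e.g. it runs as root or has a sudoers entry for the command):

    const { execFile } = require('child_process');

    // Called once the inactivity timer fires: shut the machine down.
    function shutdownMachine() {
      execFile('sudo', ['shutdown', 'now'], (err) => {
        if (err) console.error('shutdown failed:', err);
      });
    }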
You could create a Lambda function that acts as an API and uses the SDK's stopInstances functionality.
That would also allow you to give it the full functionality of a "manager instance" and save even more on instances, since it will only run when needed.
Or you could cut out the middleman and migrate the "Node servers" to Lambda.
(Lambda documentation)
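If you go the stopInstances route, the call itself is small whether it lives in that Lambda or in option 2's self-stopping variant; a rough sketch with the AWS SDK v3 (the region and instance ID are placeholders):

    const { EC2Client, StopInstancesCommand } = require('@aws-sdk/client-ec2');

    const ec2 = new EC2Client({ region: 'us-east-1' }); // assumed region

    // Stop one of the "Node servers" by its instance ID.
    async function stopNodeServer(instanceId) {
      await ec2.send(new StopInstancesCommand({ InstanceIds: [instanceId] }));
    }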

nodejs, docker, nginx and amazon aws deployment

There have been many questions regarding Docker, Node and Amazon AWS, and I have read most of them, but I haven't found my answer.
I have been working on a production Node.js API project for the last few weeks, and now that the APIs are complete I have to deploy them.
There are a total of 2 microservices (this may increase later) and some worker processes. The different components of the system will communicate with each other using SQS and SNS. Each microservice uses MongoDB as the NoSQL storage and Mongoose as the ODM. I chose mongolab as the MongoDB hosting provider. Currently I can connect to the mongolab DB using the MONGOLAB_URI environment variable (obviously this will not be enough in production; any suggestion on this one is welcome).
I am going ahead with the Amazon AWS platform.
My thought process is:
I will dockerize each of the components. For the worker processes this is straightforward.
For the microservices I will have 2 Docker images, which I will deploy using the Amazon EC2 Container Service. I will have a third nginx Docker image, which I will put in front of the Node applications.
I am planning to create a cluster of 2 machines (c2 large) initially and host these 3 dockerized microservice and nginx images on those machines.
Obviously each Node process will run on some port; let's assume it is 3100.
So far everything is perfectly clear; the problem comes when I want to expose these APIs to the outside world.
The microservices expose endpoints like:
service 1: /users, /login, /me etc.
service 2: /offers, /gifts etc.
My question is:
I want to resolve
mydomain.com/api/v1/users to service1:3100/users
and similarly for the other APIs.
I assume this can be done with nginx, but I am not very familiar with it (see the sketch below).
The constraints are:
I don't want to host each microservice on a separate machine (budget constraint).
I don't know which service will run on which machine (I assume this since I read that the EC2 Container Service automatically starts Docker processes on arbitrary machines and distributes the load).
How can I do this?
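For the nginx part, path-based routing is a few location blocks with proxy_pass; a minimal sketch, assuming the services are reachable from the nginx container as service1 and service2 (e.g. via ECS service discovery or container links) and listen on port 3100:

    # nginx.conf fragment: route /api/v1/<path> to the matching service.
    server {
        listen 80;
        server_name mydomain.com;

        # /api/v1/users, /api/v1/users/123, ... -> service1:3100/users, /users/123, ...
        location /api/v1/users {
            proxy_pass http://service1:3100/users;
        }

        location /api/v1/offers {
            proxy_pass http://service2:3100/offers;
        }
    }

With proxy_pass carrying a URI part, nginx replaces the matched location prefix with that URI, which is exactly the mydomain.com/api/v1/users -> service1:3100/users rewrite asked for; the same pattern extends to /login, /me, /gifts, etc.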
