Deploying a Dockerized Node.js Express API to AWS

I've been using the Firebase suite for all of my development experience so far (I'm a student), using Firebase Functions to deploy my Express apps as callable endpoints.
Recently, I've wanted to explore AWS more and also learn about Docker containers, as well as SQL databases as opposed to Firebase's NoSQL solution. I've created a Dockerized Node.js Express API with some endpoints (attached below), but I have absolutely no idea how to deploy it to AWS because I'm overwhelmed by the number of services, and I'd like to stay within the free tier for the project I'm building. What's the solution here? AWS Lambda? API Gateway? EC2? What's the equivalent of Firebase Functions in AWS that would work with Docker? Very lost in the weeds.
I've successfully set up my PostgreSQL database with AWS RDS, so I've managed that. My issue now is actually deploying the Docker container somewhere and having endpoints that I can hit.
I followed this specific guide, Deploying Docker Containers on ECS, and managed to get endpoints that I could hit and that worked, but it has been expensive. That method uses a service called AWS Fargate, which doesn't even seem to be in AWS's free tier, and based on some experimentation it was costing me around $0.01 per API call. Obviously not attractive, since Firebase Functions gave me up to 1M calls/month for free and was much cheaper after that.
Mind you, I didn't know what a Docker container really was until a week ago, nor am I at all familiar with all of these different AWS services. I would love to be pointed in the right direction. The AWS free tier lists services that say "1M calls/mo", such as AWS API Gateway, but I can't figure out how to get any of these to work with a Docker image or how to connect them. I've read just about every article out there about "Deploy Node.js to AWS", so please don't just direct me to one of those search results; I'd love an explanation of how this all works. Here are examples of some of my files.
Dockerfile
# Dockerfile
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5001
CMD [ "npm", "run", "docker:start" ]
Docker Compose (I have three files: one each for local, staging, and prod)
# docker-compose.yml
version: "3.7"
services:
  cwarehouse-prod:
    image: zyade7/cwarehouse-prod:latest
    container_name: cwarehouse-prod
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./src/config/.prod.env
    ports:
      - '5001:5001'
Sample endpoint (I have a few following this same format)
import express, { Request, Response } from "express";
import { createTerm, getTerm } from "./utils";
import { createGradeDistribution } from "../GradeDistribution/utils";
import { GradeDistributionObject } from "../GradeDistribution/types";
import { TermRelations } from "./Term.entity";
import { TermErrorCodes } from "./types";
import { db } from "../../db";

const termApi = express();
termApi.use(express.json());

// Creates a term in the db
termApi.post("/create", async (req: Request, res: Response) => {
  try {
    const {
      gradeDistribution,
      schoolId,
      averageGPA,
      totalSections,
      totalStudents,
      title,
    }: {
      gradeDistribution: GradeDistributionObject;
      schoolId: string;
      averageGPA: number;
      totalSections: number;
      totalStudents: number;
      title: string;
    } = req.body;
    const gradeDistributionEntity = await createGradeDistribution(
      db,
      gradeDistribution,
      averageGPA,
      totalStudents
    );
    const term = await createTerm(
      db,
      gradeDistributionEntity,
      schoolId,
      averageGPA,
      totalSections,
      totalStudents,
      title
    );
    res.status(200).send(term);
  } catch (error) {
    res.status(500).send(TermErrorCodes.TERM_CREATION_ERROR);
  }
});

This is not so much a question as a request for a guide on how to use AWS. You should start by reading the documentation, or try to understand the overall scope of a backend solution.
In this specific case you are asking how to set up a serverless application. There are a number of guides online that you can follow, but I would rather advise you to look at the Serverless Framework. The framework works very well with many of the popular cloud providers (e.g. Google, Azure, AWS, etc.), streamlining and automating the deployment process. You just need to think about which service you want to use and how to set it up, then write the instructions that the framework needs to follow when deploying your application.
In your specific case I think you are looking for AWS Lambda. Lambda is a serverless function service that executes and scales on demand. This service is the equivalent of Firebase Functions, and the usual way to use it is to define the events that will trigger the function itself (be it HTTP requests through AWS API Gateway, time-based executions through AWS EventBridge, or AWS S3 uploads).
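For a sense of what that looks like in code, here is a minimal sketch of a Node.js Lambda handler wired to an API Gateway proxy event, which is roughly the shape a Firebase HTTPS function takes on AWS (the file name is a placeholder):

// handler.js - a minimal Lambda handler for an API Gateway (proxy) event
exports.handler = async (event) => {
  // event carries the HTTP method, path, headers and raw body from API Gateway
  const body = event.body ? JSON.parse(event.body) : {};
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ received: body }),
  };
};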
In the case of Docker on AWS, container images are usually pushed to AWS ECR (Elastic Container Registry), and from there AWS Lambda functions can run them, depending on the set-up. The idea here is that a Docker image can run on Lambda and perform whatever tasks you need with whatever packages/libraries you want it to include. Keep in mind that Lambda functions have execution time and package size limits: the timeout is configurable up to 15 minutes (and if the function is called synchronously through API Gateway, the response has to come back within roughly 30 seconds), while zip deployment packages are capped at 250 MB unpacked and container images can be up to 10 GB, which is why the Docker route is preferable for bypassing the size limit.
Here is a good guide I followed earlier on Deploying AWS Lambda with Docker Containers.
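If you want to keep your existing Express app rather than rewriting each route as its own function, one common approach is to wrap the whole app in a single handler with the serverless-http package. A sketch, assuming your Express app is exported from a module (the file names here are hypothetical):

// lambda.js - wrapping the existing Express app for Lambda with serverless-http
const serverless = require("serverless-http");
const app = require("./app"); // your existing Express app (e.g. termApi)

// API Gateway invokes this handler; serverless-http translates the Lambda
// event into a normal HTTP request for Express, and the response back again.
module.exports.handler = serverless(app);

The same handler works whether you package the code as a zip or bake it into a Lambda container image pushed to ECR (using one of the AWS-provided Node.js base images).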
In broader terms, setting up a traditional server is done with AWS EC2, which gives you a compute instance that runs all the time (and is billed for as long as it runs).
In the end, you can set up and control all of these services with the Serverless Framework, which glues them together with the custom functionality you have to develop.
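As a rough illustration, a Serverless Framework service definition for the single wrapped function above could look like the sketch below. The framework normally reads serverless.yml, but it also accepts a serverless.js exporting the same structure; all names here are placeholders.

// serverless.js - sketch of a service definition routing all HTTP traffic to one function
module.exports = {
  service: "cwarehouse-api",          // placeholder service name
  frameworkVersion: "3",
  provider: {
    name: "aws",
    runtime: "nodejs16.x",
    region: "us-east-1",              // pick your region
  },
  functions: {
    api: {
      handler: "lambda.handler",      // the wrapped Express app from the sketch above
      events: [
        // API Gateway routes: send the root path and everything under it to the function
        { http: { path: "/", method: "any" } },
        { http: { path: "/{proxy+}", method: "any" } },
      ],
    },
  },
};

Running "serverless deploy" with a definition like this provisions both the Lambda function and the API Gateway endpoints, which is the combination covered by the "1M calls/mo" free-tier allowances you mentioned.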
I hope this gives you an idea on where to find your information.

Related

serverless node js api multiple microservices with shared code

I am trying to deploy multiple serverless stacks on AWS, with my code folder structure as below.
The file s1/handler.js imports controllers like this
const demoController = require('../../src/Controllers/DemoController');
This works fine when I run it locally using "sls offline start" from within the folder "s1".
However, after deploying to AWS, this controller import fails and gives a runtime import-module error when the Lambda is called.
Is this code structure correct? How do I fix it so that the same structure works for "offline" as well as the "deployed" multiple services?

GCloud Vision API Permission Denied on Second Request

I've gone through all the setup steps to make calls to the Google Vision API from a Node.js App. Link to the guide: https://cloud.google.com/vision/docs/libraries#setting_up_authentication
I'm using the ImageAnnotatorClient from the @google-cloud/vision package to make some text detections.
At first, it looked like everything was set up correctly but I don't know why it only allows me to do one request.
Further requests will give me the following error:
Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the vision.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/
If I restart the Node app it again allows me to do one request to the Vision API but then the subsequent requests keep failing.
Here's my code which is almost the same as in the examples:
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

const detectText = async (imgPath) => {
  // console.log(imgPath);
  const [result] = await client.textDetection(imgPath);
  const detections = result.textAnnotations;
  return detections;
}
It is worth mentioning that this works every time when I run the Node app on my local machine. The problem is happening on my Ubuntu Droplet from DigitalOcean.
Again, I set everything up as in the guides: created a Service Account, downloaded the Service Account key JSON file, and set the environment variable like this:
export GOOGLE_APPLICATION_CREDENTIALS="PATH-TO-JSON-FILE"
I'm also setting the environment variable in the .bashrc file.
What could I be missing? Before setting everything up from scratch and going through the whole process again, I thought it would be good to ask for some help.
So I found the problem. In my case, it was a problem with PM2 not passing the system env variables to the Node app.
So I had everything set up correctly auth-wise but the Node app wasn't seeing the GOOGLE_APPLICATION_CREDENTIALS env var.
I deleted the PM2 process, created a new one and now it works.
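If you want to keep PM2 managing the app, one way to make sure the variable actually reaches the Node process is to declare it in a PM2 ecosystem file instead of relying on the shell environment. A minimal sketch (the app name and paths are placeholders):

// ecosystem.config.js - sketch: passing the credentials path to the app via PM2
module.exports = {
  apps: [
    {
      name: "vision-app",        // placeholder app name
      script: "./index.js",      // placeholder entry point
      env: {
        GOOGLE_APPLICATION_CREDENTIALS: "/home/user/keys/service-account.json", // placeholder path
      },
    },
  ],
};

Then start it with "pm2 start ecosystem.config.js" (or "pm2 restart vision-app --update-env" after changing it) so PM2 injects the variable itself.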

Is it possible to run a Change Feed Processor host as an Azure Web Job?

I'm looking to use the Change Feed Processor SDK to monitor for changes to an Azure Cosmos DB collection; however, I have not seen clear documentation about whether the host can be run as an Azure Web Job. Can it? And if yes, are there any known issues or limitations versus running it as a Console App?
There are a good number of blog posts about using the CFP SDK; however, most of them vaguely mention running the host on a VM, and none of them give any examples of running the host as an Azure Web Job.
Even if it's possible, a side question is: if such a host is deployed as a continuous Web Job and I set the Web Job's "Scale" setting to Multi Instance, what are the approaches or recommendations for making the extra instances run with different instance names, which the CFP SDK requires?
According to my research, the Cosmos DB trigger can be implemented with the WebJobs SDK.
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        b.AddCosmosDB(a =>
        {
            a.ConnectionMode = ConnectionMode.Gateway;
            a.Protocol = Protocol.Https;
            a.LeaseOptions.LeasePrefix = "prefix1";
        });
    });
    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
But it seems that only the NuGet package for the C# SDK can be used; there are no clues for other languages. So you could refer to Compare Functions and WebJobs to balance your needs and cost.
The Cosmos DB Trigger for Azure Functions is actually a WebJobs extension: https://github.com/Azure/azure-webjobs-sdk-extensions/tree/dev/src/WebJobs.Extensions.CosmosDB
And it uses the Change Feed Processor.
Functions run over WebJob technology. So to answer the question, yes, you can run Change Feed Processor on WebJobs, just make sure that:
Your App Service is set to Always On
If you plan to use multiple instances, make sure to set the InstanceName accordingly and not a static/fixed value. Probably something that identifies the WebJob instance.

How do I start a Google Cloud instance with a container image from a Node.JS client?

I want to start a vm instance with a container image from within a Google Cloud Function in Node.JS.
I can't figure out how to call the createVM function with a container image specified.
const [vm, operation] = await zone.createVM(vmName, {os: 'ubuntu'});
I don't see it anywhere in the documentation
https://googleapis.dev/nodejs/compute/latest/index.html
When creating the instance in the Google Cloud console, I was able to copy the equivalent REST command, take the JSON and paste it into the Google Cloud Compute Node.js SDK config.
const Compute = require('@google-cloud/compute');

// Creates a client
const compute = new Compute();

// Create a new VM using the latest OS image of your choice.
const zone = compute.zone('us-east1-d');

// The above object will auto-expand behind the scenes to something like the
// following. The Debian version may be different when you run the command.
//-
const config = {
  "kind": "compute#instance",
  "name": "server",
  "zone": "projects/projectName/zones/us-east1-d",
  "machineType": "projects/projectName/zones/us-east1-d/machineTypes/f1-micro",
  "displayDevice": {
    "enableDisplay": false
  },
  "metadata": {
    "kind": "compute#metadata",
    "items": [
      {
        "key": "gce-container-declaration",
        "value": "spec:\n  containers:\n    - name: game-server\n      image: gcr.io/projectName/imageName\n      stdin: false\n      tty: false\n      restartPolicy: Never\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
      },
      {
        "key": "google-logging-enabled",
        "value": "true"
      }
    ]
  },
  "tags": {
    "items": [
      "https-server"
    ]
  },
  "disks": [
    {
      ... // Copied from Google Cloud console -> Compute Engine -> Create VM Instance -> copy equivalent REST command (at the bottom of the page)
    }
  ]
};
//-
// If the callback is omitted, we'll return a Promise.
//-
zone.createVM('new-vm-name', config).then(function(data) {
  const vm = data[0];
  const operation = data[1];
  const apiResponse = data[2];
  res.status(200).send(apiResponse);
});
What I understand you want to end up with is a new GCP Compute Engine instance running the Container-Optimized OS (COS), which runs Docker and creates a container from a repository-hosted container image. To achieve this programmatically, you also want to use the Node.js API.
The trick (for me) is to create an instance of the Compute Engine manually through the GCP Cloud Console. Once done, we can then login to the instance and retrieve the raw metadata by running:
wget --output-document=- --header="Metadata-Flavor: Google" --quiet http://metadata.google.internal/computeMetadata/v1/?recursive=true
What we get back is a JSON representation of that metadata. From here, we find that creating our desired Compute Engine instance through the API amounts to creating it with the standard API and then also defining the metadata it needs. It appears that the Container-Optimized OS simply has a script/program which reads the metadata and uses it to run Docker.
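The same metadata can also be read from Node.js running on the instance, which is handy for inspecting what the Cloud Console generated. A quick sketch using only the built-in http module:

// Sketch: reading the instance metadata from Node.js (run on the VM itself)
const http = require("http");

http.get(
  "http://metadata.google.internal/computeMetadata/v1/?recursive=true",
  { headers: { "Metadata-Flavor": "Google" } },
  (res) => {
    let data = "";
    res.on("data", (chunk) => (data += chunk));
    res.on("end", () => console.log(JSON.parse(data)));
  }
);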
When I examined the metadata for a container running on a Compute Engine instance, I found an attribute called:
attributes.gce-container-declaration
That contained:
"spec:\n containers:\n - name: instance-1\n image: nodered/node-red\n stdin: false\n tty: false\n restartPolicy: Always\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
which is YAML and if we format it prettily we find:
spec:
  containers:
    - name: instance-1
      image: nodered/node-red
      stdin: false
      tty: false
      restartPolicy: Always

# This container declaration format is not public API and may change without notice. Please
# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine.
And there we have it. To create a GCP Compute Engine instance hosting a container image, we create the instance with a Container-Optimized OS boot image (e.g. "image": "projects/cos-cloud/global/images/cos-stable-77-12371-114-0") and set the metadata to define the container to run.
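Putting that together in Node.js, here is a rough sketch of the relevant parts, using the same @google-cloud/compute client and raw REST-style config as above; the project, zone, machine type, and image names are placeholders, and fields such as the disk definition are trimmed to the essentials:

// Sketch: creating a Container-Optimized OS VM that runs a container image
const Compute = require("@google-cloud/compute");

const compute = new Compute();
const zone = compute.zone("us-east1-d");

// Build the gce-container-declaration YAML instead of hard-coding an escaped string
const containerDeclaration = [
  "spec:",
  "  containers:",
  "    - name: my-container",                  // placeholder container name
  "      image: gcr.io/projectName/imageName", // placeholder image path
  "      stdin: false",
  "      tty: false",
  "      restartPolicy: Always",
].join("\n");

const config = {
  machineType: "projects/projectName/zones/us-east1-d/machineTypes/f1-micro",
  disks: [
    {
      boot: true,
      autoDelete: true,
      initializeParams: {
        // Container-Optimized OS boot image
        sourceImage: "projects/cos-cloud/global/images/family/cos-stable",
      },
    },
  ],
  networkInterfaces: [{ network: "global/networks/default" }],
  metadata: {
    items: [
      { key: "gce-container-declaration", value: containerDeclaration },
      { key: "google-logging-enabled", value: "true" },
    ],
  },
};

zone.createVM("container-vm", config).then(([vm, operation]) => {
  console.log("VM create requested:", vm.name);
});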

AWS Lambda function to connect to a Postgresql database

Does anyone know how I can connect to a PostgreSQL database through an AWS Lambda function? I searched online but I couldn't find anything about it. If you could tell me how to go about it, that would be great.
If you can find something wrong with my code (Node.js), that would be great; otherwise, can you tell me how to go about it?
exports.handler = (event, context, callback) => {
  "use strict"
  const pg = require('pg');
  const connectionStr =
    "postgres://username:password@host:port/db_name";
  var client = new pg.Client(connectionStr);
  client.connect(function(err) {
    if (err) {
      callback(err);
    }
    callback(null, 'Connection established');
  });
  context.callbackWaitsForEmptyEventLoop = false;
};
The code throws an error:
cannot find module 'pg'
I wrote it directly on AWS Lambda and didn't upload anything if that makes a difference.
I wrote it directly on AWS Lambda and didn't upload anything if that makes a difference.
Yes, this makes the difference! Lambda doesn't provide 3rd-party libraries out of the box. As soon as you have a dependency on a 3rd-party library, you need to zip and upload your Lambda code manually or with the use of the API.
For more information: Lambda Execution Environment and Available Libraries
You need to refer to Creating a Deployment Package (Node.js)
Simple scenario – If your custom code requires only the AWS SDK library, then you can use the inline editor in the AWS Lambda console. Using the console, you can edit and upload your code to AWS Lambda. The console will zip up your code with the relevant configuration information into a deployment package that the Lambda service can run.
and
Advanced scenario – If you are writing code that uses other resources, such as a graphics library for image processing, or you want to use the AWS CLI instead of the console, you need to first create the Lambda function deployment package, and then use the console or the CLI to upload the package.
Your case, like mine, falls under the advanced scenario. So we need to create a deployment package and then upload it. Here is what I did:
mkdir deployment
cd deployment
vi index.js
Write your Lambda code in this file. Make sure your handler name is index.handler when you create the function.
npm install pg
You should see a node_modules directory created in the deployment directory, with multiple modules in it.
Package the contents of the deployment directory into a zip file and upload it to Lambda.
You should be good then.
NOTE: npm install will install modules under a node_modules directory in the same directory, unless it finds a node_modules directory in a parent directory. To be safe, first run npm init and then npm install to ensure the modules are installed in the deployment directory itself.
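Once the package (your code plus node_modules) is uploaded, a handler along these lines should be able to reach the database. This is just a sketch in the newer async style, and the environment variable names are placeholders you would configure on the function rather than hard-coding credentials:

// index.js - sketch: connecting to PostgreSQL from Lambda using the bundled pg module
const { Client } = require("pg");

exports.handler = async () => {
  const client = new Client({
    host: process.env.PGHOST,         // database host name
    port: process.env.PGPORT,
    user: process.env.PGUSER,
    password: process.env.PGPASSWORD,
    database: process.env.PGDATABASE,
  });
  await client.connect();
  const { rows } = await client.query("SELECT NOW()");
  await client.end();
  return rows[0];
};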
