How do I start a Google Cloud instance with a container image from a Node.JS client?

I want to start a VM instance with a container image from within a Google Cloud Function in Node.js.
I can't figure out how to call the createVM function with a container image specified.
const [vm, operation] = await zone.createVM(vmName, {os: 'ubuntu'});
I don't see it anywhere in the documentation:
https://googleapis.dev/nodejs/compute/latest/index.html

When creating the instance in the Google Cloud console, I was able to copy the equivalent REST command, take the JSON and paste it into the Google Cloud Compute Node.js SDK config.
const Compute = require('@google-cloud/compute');
// Creates a client
const compute = new Compute();
const zone = compute.zone('us-east1-d');
// Config copied from the console's equivalent REST command.
const config = {
"kind": "compute#instance",
"name": "server",
"zone": "projects/projectName/zones/us-east1-d",
"machineType": "projects/projectName/zones/us-east1-d/machineTypes/f1-micro",
"displayDevice": {
"enableDisplay": false
},
"metadata": {
"kind": "compute#metadata",
"items": [
{
"key": "gce-container-declaration",
"value": "spec:\n containers:\n - name: game-server\n image: gcr.io/projectName/imageName\n stdin: false\n tty: false\n restartPolicy: Never\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
},
{
"key": "google-logging-enabled",
"value": "true"
}
]
},
"tags": {
"items": [
"https-server"
]
},
"disks": [
{
... //Copied from Google Cloud console -> Compute Engine -> Create VM Instance -> copy equivalent REST command (at the bottom of the page)
]
};
// If the callback is omitted, createVM returns a Promise.
zone.createVM('new-vm-name', config).then(function(data) {
  const vm = data[0];
  const operation = data[1];
  const apiResponse = data[2];
  res.status(200).send(apiResponse);
});

What I understand you want to end up with is a new GCP Compute Engine instance running the Container Optimized OS (COS), which runs Docker and creates a container from an image hosted in a repository. You also want to do this programmatically, using the Node.js API.
The trick (for me) is to create a Compute Engine instance manually through the GCP Cloud Console. Once that's done, we can log in to the instance and retrieve the raw metadata by running:
wget --output-document=- --header="Metadata-Flavor: Google" --quiet http://metadata.google.internal/computeMetadata/v1/?recursive=true
What we get back is a JSON representation of that metadata. From here we find that our actual goal in creating the desired Compute Engine through the API is to create the instance with the standard API and also define the needed metadata. It appears that the Container Optimized OS simply has a script/program which reads the metadata and uses it to run Docker.
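The same metadata can also be read from Node.js inside the instance; here is a minimal sketch using only the standard library, equivalent to the wget call above:
const http = require('http');

// Query the GCE metadata server from inside the instance
// (same endpoint and header as the wget call above).
const options = {
  host: 'metadata.google.internal',
  path: '/computeMetadata/v1/?recursive=true',
  headers: { 'Metadata-Flavor': 'Google' },
};

http.get(options, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(JSON.parse(body)));
});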
When I examined the metadata of a Compute Engine instance running a container, I found an attribute called:
attributes.gce-container-declaration
That contained:
"spec:\n containers:\n - name: instance-1\n image: nodered/node-red\n stdin: false\n tty: false\n restartPolicy: Always\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
which is YAML; formatted readably, it is:
spec:
  containers:
    - name: instance-1
      image: nodered/node-red
      stdin: false
      tty: false
  restartPolicy: Always

# This container declaration format is not public API and may change without notice. Please
# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine.
And there we have it. To create a GCP Compute Engine instance hosting a container image, we create the instance with a boot disk running the Container Optimized OS (e.g. "image": "projects/cos-cloud/global/images/cos-stable-77-12371-114-0") and set the metadata to define the container to run.
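Putting the pieces together, here is a minimal sketch of that createVM call. The project, image names and COS image family are placeholders, and (as the comment in the declaration warns) the gce-container-declaration format is not a public API, so treat this as illustrative rather than definitive:
const Compute = require('@google-cloud/compute');
const compute = new Compute();
const zone = compute.zone('us-east1-d');

// Container declaration as captured from a manually created instance (not a public API).
const containerSpec = `spec:
  containers:
    - name: my-container
      image: gcr.io/my-project/my-image
      stdin: false
      tty: false
  restartPolicy: Always
`;

const config = {
  machineType: 'f1-micro',
  disks: [{
    boot: true,
    autoDelete: true,
    initializeParams: {
      // Boot the Container Optimized OS so its agent reads the declaration and runs Docker.
      sourceImage: 'projects/cos-cloud/global/images/family/cos-stable',
    },
  }],
  networkInterfaces: [{ network: 'global/networks/default' }],
  metadata: {
    items: [
      { key: 'gce-container-declaration', value: containerSpec },
      { key: 'google-logging-enabled', value: 'true' },
    ],
  },
};

zone.createVM('cos-container-vm', config)
  .then(([vm, operation]) => operation.promise())
  .then(() => console.log('VM created'));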

Related

Deploying Dockerized Node.JS Express API to AWS

I've been using the Firebase suite for all of my development experience (I'm a student), using Firebase Functions to deploy my Express apps as callable endpoints.
Recently, I've wanted to explore AWS more and also learn about Docker containers, as well as SQL databases as opposed to Firebase's NoSQL solution. I've created a Dockerized Node.js Express API with some endpoints (attached below), but I have absolutely no idea how to deploy it to AWS because I'm overwhelmed by the number of services, and I'd like to stay within the free tier for the project I'm building. What's the solution here? AWS Lambda? API Gateway? EC2? What's the equivalent of Firebase Functions on AWS that would work with Docker? I'm very lost in the weeds.
I've successfully set up my PostgreSQL db with AWS RDS, so I've managed that. My issue now is actually deploying the Docker container somewhere and having endpoints that I can hit.
I have followed this specific guide: Deploying Docker Containers on ECS, and actually managed to get working endpoints, but it has been expensive. That method uses a service called AWS Fargate, which doesn't seem to be in AWS's free tier at all; based on some experimentation it was costing me around $0.01/API call. Obviously not attractive, since Firebase Functions gave me up to 1M calls/mo for free and was much cheaper after that.
Mind you, I didn't know what a Docker container really was until a week ago, nor am I at all familiar with all of these different AWS services. I would love to be pointed in the right direction. The AWS Free Tier lists services offering "1M calls/mo", such as AWS API Gateway, but I can't figure out how to get any of these to work with a Docker image or how to connect them. I've read just about every article out there about "Deploy Node.JS to AWS", so please don't just direct me to those search results; I'd love an explanation of how this all works. Here are examples of some of my files.
Dockerfile
# Dockerfile
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5001
CMD [ "npm", "run", "docker:start" ]
Docker-Compose (I have three files. One for local, staging, prod)
# docker-compose.yml
version: "3.7"
services:
  cwarehouse-prod:
    image: zyade7/cwarehouse-prod:latest
    container_name: cwarehouse-prod
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./src/config/.prod.env
    ports:
      - '5001:5001'
Sample endpoint (I have a few following this same format)
import express, { Request, Response } from "express";
import { createTerm, getTerm } from "./utils";
import { createGradeDistribution } from "../GradeDistribution/utils";
import { GradeDistributionObject } from "../GradeDistribution/types";
import { TermRelations } from "./Term.entity";
import { TermErrorCodes } from "./types";
import { db } from "../../db";
const termApi = express();
termApi.use(express.json());
// Creates a term in the db
termApi.post("/create", async (req: Request, res: Response) => {
  try {
    const {
      gradeDistribution,
      schoolId,
      averageGPA,
      totalSections,
      totalStudents,
      title,
    }: {
      gradeDistribution: GradeDistributionObject;
      schoolId: string;
      averageGPA: number;
      totalSections: number;
      totalStudents: number;
      title: string;
    } = req.body;
    const gradeDistributionEntity = await createGradeDistribution(
      db,
      gradeDistribution,
      averageGPA,
      totalStudents
    );
    const term = await createTerm(
      db,
      gradeDistributionEntity,
      schoolId,
      averageGPA,
      totalSections,
      totalStudents,
      title
    );
    res.status(200).send(term);
  } catch (error) {
    res.status(500).send(TermErrorCodes.TERM_CREATION_ERROR);
  }
});
This is not really a question so much as a request for a guide on how to use AWS. You should start by reading the documentation and trying to understand the overall shape of a backend solution.
In this specific case you are asking how to set up a serverless application. There are a number of guides online that you can follow, but I would rather advise you to look at the Serverless Framework. The framework works very well with the popular cloud providers (e.g. Google, Azure, AWS), streamlining and automating the deployment process. You just need to decide which service you want to use and how to set it up, then write the instructions the framework needs to follow when deploying your application.
In your specific case I think you are looking for AWS Lambda. Lambda is a serverless function service that executes and scales on demand. It is the equivalent of Firebase Functions; the usual way to use it is to define the events that trigger the function (API endpoints via AWS API Gateway requests, time-based executions via AWS EventBridge, or AWS S3 uploads).
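For orientation, a Lambda function is just an exported handler that receives the triggering event. A minimal Node.js sketch, assuming an API Gateway HTTP trigger:
// handler.js - a minimal Lambda handler for an API Gateway event.
exports.handler = async (event) => {
  const name = event.queryStringParameters?.name ?? 'world';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};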
In the case of Docker on AWS, container images are usually pushed to AWS ECR, and AWS Lambda functions can then run those images, depending on the set-up. The idea is that a Docker container can run on Lambda and perform whatever tasks you need with whatever packages/libraries you want the image to include. Keep in mind that Lambda functions have execution time and package size limits (and an API Gateway-triggered function has roughly 30 seconds before the request times out), but the Docker packaging is the preferred way to get around the usual deployment size limit.
Here is a good guide I followed earlier on Deploying AWS Lambda with Docker Containers.
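If you want to keep your Express app unchanged, one common approach (my suggestion, not from the guide above) is the serverless-http package, which wraps an Express app as a Lambda handler:
// lambda.js - wraps the existing Express app as a Lambda handler.
const serverless = require('serverless-http');
const app = require('./app'); // hypothetical module exporting your Express app

module.exports.handler = serverless(app);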
In broader terms, setting up a traditional server is done with AWS EC2, which gives you a computing resource that runs all the time.
Finally, you can set up and control all of these services with the Serverless Framework, which is meant to glue them together with the custom functionality you have to develop.
I hope this gives you an idea of where to find your information.

How to create a Kubernetes deployment using the Node.js SDK

I am working on a project using Node.js which requires an application that can deploy to Kubernetes. The service I am working on will take some Kubernetes manifests, add some ENV variables into them, and then deploy those resources.
I have some code that can create and destroy a namespace for me using the SDK's createNamespace and deleteNamespace. This part works how I want it to, i.e. without needing a Kubernetes YAML file. I would like to use the SDK for creating a deployment as well, but I can't seem to get it to work. I found a code example of createNamespacedDeployment, but using version 0.13.2 of the SDK I am unable to get it working. I get this error message when I run the example code I found:
k8sApi.createNamespacedDeployment is not a function
I have tried to check over the git repo for the SDK, though it is massive and I've yet to find anything in it that would let me define a deployment in my Node.js code; the closest I have found is a pod deployment, but that won't work for me, I need a deployment.
How can I create a deployment via Node.js and have it apply to my Kubernetes cluster?
Management of deployments is handled by the AppsV1Api class:
const k8s = require('@kubernetes/client-node');
const fs = require('fs');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const k8sApi = kc.makeApiClient(k8s.CoreV1Api);   // core objects: namespaces, pods, services, ...
const appsApi = kc.makeApiClient(k8s.AppsV1Api);  // apps objects: deployments, statefulsets, ...

const deploymentYamlString = fs.readFileSync('./deployment.yaml', { encoding: 'utf8' });
const deployment = k8s.loadYaml(deploymentYamlString);
const res = await appsApi.createNamespacedDeployment('default', deployment);
Generally, you can find the relevant API class for managing a Kubernetes object by its apiVersion, e.g. Deployment -> apiVersion: apps/v1 -> AppsV1Api; CronJob -> apiVersion: batch/v1 -> BatchV1Api.
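If you would rather build the manifest in code (for instance, to inject your ENV variables programmatically) than read it from YAML, createNamespacedDeployment also accepts a plain object. A minimal sketch with placeholder names, reusing the appsApi client from the snippet above:
const deployment = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'my-app' },
  spec: {
    replicas: 2,
    selector: { matchLabels: { app: 'my-app' } },
    template: {
      metadata: { labels: { app: 'my-app' } },
      spec: {
        containers: [
          {
            name: 'my-app',
            image: 'nginx:1.21',
            // ENV variables can be injected here programmatically.
            env: [{ name: 'MY_VAR', value: 'my-value' }],
          },
        ],
      },
    },
  },
};

await appsApi.createNamespacedDeployment('default', deployment);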
You can use the @c6o/kubeclient Kubernetes client. It's a little simpler:
import { Cluster } from '@c6o/kubeclient'

const cluster = new Cluster({}) // Assumes process.env.KUBECONFIG is set
const result = await cluster.upsert({ kind: 'Deployment', apiVersion: ... })
if (result.error) ...
You can also use the fluent API if you have multiple steps:
await cluster
  .begin(`Provision Apps`)
  .upsertFile('../../k8s/marina.yaml', options)
  .upsertFile('../../k8s/store.yaml', options)
  .upsertFile('../../k8s/harbourmaster.yaml', options)
  .upsertFile('../../k8s/lifeboat.yaml', options)
  .upsertFile('../../k8s/navstation.yaml', options)
  .upsertFile('../../k8s/apps.yaml', options)
  .upsertFile('../../k8s/istio.yaml', options)
  .end()
We're working on the documentation but have lots of provisioners using this client here: https://github.com/c6o/provisioners

Google Cloud Compute Engine API: createVM directly with setMetadata

I use @google-cloud/compute to create VM instances automatically.
Also I use startup scripts in those instances.
So, first I call Zone.createVM and then VM.setMetadata.
But in some regions the startup script does not run. It does run after a VM reset, so it looks like my VM.setMetadata call simply arrives too late.
In the web interface we can create a VM directly with metadata, but I do not see this ability in the API.
Can it be done with the API?
To set up a startup script during instance deployment you can provide it as part of the metadata property in the API call:
POST https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/instances
{
  ...
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\n\n# Installs apache and a custom homepage\napt-get update\napt-get install -y apache2\ncat <<EOF > /var/www/html/index.html\n<html><body><h1>Hello World</h1>\n<p>This page was created from a simple start up script!</p>\n</body></html>"
      }
    ]
  }
  ...
}
See the full reference for the resource "compute.instances" of the Compute Engine API here.
Basically, if you are using a Node.js library to create the instance, you are already calling this API, so you only need to add the metadata keys as documented.
Also, if you are doing this frequently, it would be more practical to store the script in a GCS bucket and simply add its URI to the metadata like this:
"metadata": {
"items": [
{
"key": "startup-script-url",
"value": "gs://bucket/myfile"
}
]
},
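In Node.js terms, that means passing the metadata as part of the config given to Zone.createVM, so the script is already present at first boot and no separate setMetadata call is needed. A minimal sketch (the bucket URI is a placeholder):
const Compute = require('@google-cloud/compute');
const compute = new Compute();
const zone = compute.zone('us-central1-a');

const config = {
  os: 'debian',
  metadata: {
    items: [
      // Present at first boot, so no separate setMetadata call is needed.
      { key: 'startup-script-url', value: 'gs://bucket/myfile' },
    ],
  },
};

zone.createVM('vm-with-startup-script', config)
  .then(([vm, operation]) => operation.promise());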

Automatic way to pick up the hostname inside docker container

We're running a Node.js application inside a Docker container hosted on an Amazon EC2 instance. To enable monitoring for the Node.js app with Datadog, we are using the datadog-metrics library and integrating it with our application. We basically save the JavaScript code below into a file called example_app.js:
var metrics = require('datadog-metrics');
metrics.init({ host: 'myhost', prefix: 'myapp.' });
function collectMemoryStats() {
  var memUsage = process.memoryUsage();
  metrics.gauge('memory.rss', memUsage.rss);
  metrics.gauge('memory.heapTotal', memUsage.heapTotal);
  metrics.gauge('memory.heapUsed', memUsage.heapUsed);
  metrics.increment('memory.statsReported');
}

setInterval(collectMemoryStats, 5000);
We are able to publish metrics to Datadog successfully, but we're wondering if this can be automated. We want to build this into our Docker image, so we need an automatic way to pick up the hostname, at the very least the Docker host's name if possible. Until now we've been specifying the "myhost" and "myapp" values manually. Is there a better way to fetch the AWS instance hostname value into %myhost?
Why not try:
var os = require("os");
var hostname = os.hostname();
It will return the Docker container's hostname. If you haven't set a hostname explicitly (using something like docker run -h myhostname image), the container's hostname defaults to the container ID; when running with --net=host it matches the Docker host's hostname.
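If what you want is the EC2 instance's hostname rather than the container's, you can also query the EC2 instance metadata service from inside the container. A minimal sketch, assuming IMDSv1 is reachable:
var http = require('http');
var metrics = require('datadog-metrics');

// EC2 instance metadata service (IMDSv1).
http.get('http://169.254.169.254/latest/meta-data/hostname', function (res) {
  var hostname = '';
  res.on('data', function (chunk) { hostname += chunk; });
  res.on('end', function () {
    metrics.init({ host: hostname, prefix: 'myapp.' });
  });
});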
Alternatively, you could do this with a deployment tool like Puppet, Ansible, etc., and template the file when you deploy the container.

Docker Remote API & Binds

I'm trying to use Docker's Remote API via the Node.js docker.io library, but I just can't find the right syntax to bind directories.
I'm currently using this code:
docker.containers.start(cId, { Binds: ['/tmp:/tmp'] }, function(err, container)...
It starts the container, but when I inspect it, it doesn't show anything in Volumes.
Docker's Remote API documentation is lacking when it comes to syntax, so I'm hoping somebody here knows the correct one.
I finally got it working. It seems you also need to declare the volumes when you create the container. Here's the proper syntax:
The first API call, to /containers/create, should include:
{
  "Volumes": { "/container/path": {} }
}
Then when starting the container (POST /containers/<id>/start), use the "Binds" option:
{
  "Binds": [ "/host/path:/container/path:rw" ]
}
source: https://groups.google.com/d/msg/docker-club/GrFQ3F1rqU4/3ZC5QoNkSAAJ
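For reference, both calls can be made with no client library at all by talking to Docker's Remote API over the Unix socket with Node's standard http module. A minimal sketch matching the old API behaviour described above (newer API versions expect Binds under HostConfig at create time); the container name and paths are placeholders:
const http = require('http');

// Helper: POST a JSON body to the Docker daemon's Unix socket.
function dockerPost(path, body, callback) {
  const data = JSON.stringify(body);
  const req = http.request(
    {
      socketPath: '/var/run/docker.sock',
      path: path,
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(data),
      },
    },
    (res) => {
      let out = '';
      res.on('data', (c) => (out += c));
      res.on('end', () => callback(null, res.statusCode, out));
    }
  );
  req.on('error', callback);
  req.end(data);
}

// 1. Create the container, declaring the volume.
dockerPost('/containers/create?name=myapp', {
  Image: 'ubuntu',
  Volumes: { '/container/path': {} },
}, (err, status, body) => {
  if (err) throw err;
  const id = JSON.parse(body).Id;
  // 2. Start it, binding the host directory.
  dockerPost('/containers/' + id + '/start', {
    Binds: ['/host/path:/container/path:rw'],
  }, (err2, status2) => {
    if (err2) throw err2;
    console.log('started', id, status2);
  });
});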
