How to create a Kubernetes deployment using the Node.js SDK

I am building a Node.js project that needs to deploy applications to Kubernetes. The service I am working on will take some Kubernetes manifests, add some environment variables to them, and then deploy those resources.
I have some code that can create and destroy a namespace using the SDK's createNamespace and deleteNamespace. This part works how I want it to, i.e. without needing a Kubernetes YAML file. I would like to use the SDK for creating a deployment as well, but I can't get it to work. I found a code example using createNamespacedDeployment, but with version 0.13.2 of the SDK I am unable to get it working. I get this error message when I run the example code:
k8sApi.createNamespacedDeployment is not a function
I have tried checking over the SDK's git repo, but it is massive and I have yet to find anything in it that would let me define a deployment in my Node.js code. The closest I have found is creating a pod, but that won't work for me; I need a Deployment.
How can I create a deployment via Node.js and have it apply to my Kubernetes cluster?

Management of deployments is handled by the AppsV1Api class:
const k8s = require('@kubernetes/client-node');
const fs = require('fs');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// CoreV1Api covers namespaces, pods, services, etc.; Deployments live in AppsV1Api.
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
const appsApi = kc.makeApiClient(k8s.AppsV1Api);

// Parse an existing manifest into a plain object and create it in the cluster.
const deploymentYamlString = fs.readFileSync('./deployment.yaml', { encoding: 'utf8' });
const deployment = k8s.loadYaml(deploymentYamlString);
const res = await appsApi.createNamespacedDeployment('default', deployment);
Generally, you can find the relevant API class for managing a Kubernetes object from its apiVersion, e.g. Deployment -> apiVersion: apps/v1 -> AppsV1Api; CronJob -> apiVersion: batch/v1 -> BatchV1Api.
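Since you want to avoid YAML files entirely, you can also build the deployment object directly in JavaScript and pass it to the same call. A minimal sketch, assuming placeholder names, labels, and image:
const deploymentSpec = {
    metadata: {
        name: 'my-deployment', // placeholder name
    },
    spec: {
        replicas: 1,
        selector: {
            matchLabels: { app: 'my-app' }, // must match the pod template labels below
        },
        template: {
            metadata: {
                labels: { app: 'my-app' },
            },
            spec: {
                containers: [
                    {
                        name: 'my-app',
                        image: 'nginx:latest', // placeholder image
                    },
                ],
            },
        },
    },
};
const res = await appsApi.createNamespacedDeployment('default', deploymentSpec);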

You can use the @c6o/kubeclient Kubernetes client. It's a little simpler:
import { Cluster } from '@c6o/kubeclient'

const cluster = new Cluster({}) // Assumes process.env.KUBECONFIG is set
const result = await cluster.upsert({ kind: 'Deployment', apiVersion: ... }) // rest of the manifest elided
if (result.error) ...
You can also use the fluent API if you have multiple steps:
await cluster
    .begin(`Provision Apps`)
    .upsertFile('../../k8s/marina.yaml', options)
    .upsertFile('../../k8s/store.yaml', options)
    .upsertFile('../../k8s/harbourmaster.yaml', options)
    .upsertFile('../../k8s/lifeboat.yaml', options)
    .upsertFile('../../k8s/navstation.yaml', options)
    .upsertFile('../../k8s/apps.yaml', options)
    .upsertFile('../../k8s/istio.yaml', options)
    .end()
We're working on the documentation but have lots of provisioners using this client here: https://github.com/c6o/provisioners

Related

GCloud Vision API Permission Denied on Second Request

I've gone through all the setup steps to make calls to the Google Vision API from a Node.js App. Link to the guide: https://cloud.google.com/vision/docs/libraries#setting_up_authentication
I'm using the ImageAnnotatorClient from the @google-cloud/vision package to make some text detections.
At first it looked like everything was set up correctly, but for some reason it only allows me to make one request.
Further requests will give me the following error:
Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the vision.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/
If I restart the Node app, it again allows one request to the Vision API, but subsequent requests keep failing.
Here's my code which is almost the same as in the examples:
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

const detectText = async (imgPath) => {
    const [result] = await client.textDetection(imgPath);
    const detections = result.textAnnotations;
    return detections;
}
It is worth mentioning that this works every time when I run the Node app on my local machine. The problem is happening on my Ubuntu Droplet from Digital Ocean.
Again, I set everything up as it is in the guides. Created a Service Account, downloaded the Service Account Key JSON file, set up the environment variable like this:
export GOOGLE_APPLICATION_CREDENTIALS="PATH-TO-JSON-FILE"
I'm also setting the environment variable in the .bashrc file.
What could I be missing? Before setting everything up from scratch and going through the whole process again, I thought it would be good to ask for some help.
So I found the problem. In my case, it was a problem with PM2 not passing the system env variables to the Node app.
So I had everything set up correctly auth-wise but the Node app wasn't seeing the GOOGLE_APPLICATION_CREDENTIALS env var.
I deleted the PM2 process, created a new one and now it works.
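If you hit the same problem, one way to make the variable visible to the Node app regardless of the shell environment is to declare it in a PM2 ecosystem file. A minimal sketch, assuming placeholder app name, entry point, and key path:
// ecosystem.config.js
module.exports = {
    apps: [
        {
            name: 'vision-app',   // placeholder app name
            script: './index.js', // placeholder entry point
            env: {
                // placeholder path to the Service Account Key JSON file
                GOOGLE_APPLICATION_CREDENTIALS: '/home/user/service-account-key.json',
            },
        },
    ],
};
Start it with pm2 start ecosystem.config.js; alternatively, pm2 restart <app-name> --update-env makes PM2 pick up variables from the current shell.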

Add Custom Endpoint For Service ( Feathersjs )

I am new to the NodeJS world.
I found FeathersJS is an awesome framework for building an API service with very little coding.
I need to add a custom service endpoint (like localhost/servicename/custom-end-point). I also need to grab data from the user in those endpoints (it could be a GET or a POST request).
I have already gone through the following links, but nothing is clearly mentioned there:
https://docs.feathersjs.com/guides/basics/services.html
https://docs.feathersjs.com/api/services.html
Install feathers-cli using the following command: npm install -g @feathersjs/cli.
To create a service, navigate to your project directory and run feathers generate service. It will ask some questions, like the service name.
If you don't already have an app, run feathers generate app to create one first.
That's it!
Update:
Let's assume you have a service named organizations and you want to create a custom endpoint like custom-organizations. Now, create a file inside services > organizations named custom-organizations.class.js. Add the following lines to your organizations.service.js file.
// Import custom class
const { CustomOrganizations } = require('./custom-organizations.class');
// Initialize custom endpoint
app.use('/custom-organizations', new CustomOrganizations(options, app));
Add the following code in your custom-organizations.class.js file.
const { Service } = require('feathers-mongoose');

exports.CustomOrganizations = class CustomOrganizations extends Service {
    constructor(options, app) {
        super(options);
    }

    async find() {
        return 'Test data';
    }
};
Now, if you send a GET request to the /custom-organizations endpoint, you should get Test data.
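For example, assuming the default Feathers port of 3030:
curl http://localhost:3030/custom-organizations
This should respond with "Test data".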
Hope it helps.
Wrote an article about it here.

Not able to connect/call services of other nodes Moleculer NodeJs

I have created 2 nodes for moleculer using
npm init project project_name
I have added a users.list action in project one which gives a list of all users; this is working fine, and I also exposed its API.
But the issue is, when I run the other node, project2, and call users.list from a service action, it shows SERVICE_NOT_FOUND. It can call its own actions, but not the actions of other nodes.
I want to connect different nodes so that I can call the services of one node from another. I don't know what I am missing or doing wrong; I followed the Moleculer documentation, which says it should work like that, but it's not working.
I am using Redis as the transporter.
Here is the code for the action:
welcome: {
    params: {
        name: "string"
    },
    async handler(ctx) {
        const tmp = await ctx.call("users.list", {});
        return `Welcome, ${tmp}`;
    }
}
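For reference, both projects need to point at the same transporter (and the same namespace, if one is set) for the nodes to discover each other. A minimal sketch of the moleculer.config.js I would expect in both projects (the Redis URL is a placeholder):
// moleculer.config.js (must be equivalent in both projects)
module.exports = {
    namespace: "",                         // nodes only discover peers within the same namespace
    transporter: "redis://localhost:6379", // placeholder URL; both nodes must use the same Redis instance
};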

How to get past errors using putParameter with aws-sdk for nodejs in Lambda?

I'm trying to set a parameter using putParameter in the AWS SDK for JavaScript in Node.js. In particular, I'd like to take advantage of the "Advanced" Tier, with an Expiration policy and Tags if possible. When I execute my code, I keep getting errors like:
There were 2 validation errors:
* UnexpectedParameter: Unexpected key 'Policies' found in params
* UnexpectedParameter: Unexpected key 'Tier' found in params
I suspected the issue was around the aws-sdk version I was using, so I've tried running the code locally using SAM local, and from Lambda functions using the nodejs8.10 and nodejs10.x environments. The errors do not go away.
const AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
const ssm = new AWS.SSM({ apiVersion: '2014-11-06' });

exports.lambdaHandler = async () => {
    const tokenExpiration = new Date();
    tokenExpiration.setSeconds(tokenExpiration.getSeconds() + 60);
    await ssm.putParameter({
        Name: 'SECRET_TOKEN',
        Type: 'SecureString',
        Value: '12345',
        Policies: JSON.stringify([
            {
                "Type": "Expiration",
                "Version": "1.0",
                "Attributes": {
                    "Timestamp": tokenExpiration.toISOString()
                }
            }
        ]),
        Overwrite: true,
        Tier: 'Advanced'
    }).promise();
};
I would expect this code to work and set a parameter with the expiration. However, it appears that the SDK doesn't recognize the "Policies" and "Tier" parameters, which are available according to the documentation. I don't know if it's a matter of waiting for the newest AWS SDK for JavaScript, but the runtimes page suggests that nodejs10.x is running AWS SDK for JavaScript 2.437.0.
It might be helpful to know that the code runs correctly without the parameters in question (i.e., with just the "Name", "Type", and "Value" parameters).
Unfortunately, both Tier and Policies weren't added to the aws-sdk until v2.442.0 (see diff).
This means that to use these features you'll have to deploy your function with the version of the aws-sdk you're developing against, rather than relying on the one built into the Lambda runtime.
It should be noted that either developing/testing against the built-in version, or deploying with the aws-sdk version you actually use, is often cited as good practice. If you're deploying your own version, you can use explicit client imports (e.g. const SSM = require('aws-sdk/clients/ssm')) to keep the deployment size down. This is even more effective if you develop against the preview AWS SDK for JavaScript v3.
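A minimal sketch of a handler bundled with its own newer aws-sdk and an explicit client import (region and parameter values are placeholders):
// Pull in only the SSM client so the deployment package stays small.
const SSM = require('aws-sdk/clients/ssm');
const ssm = new SSM({ region: 'us-east-1' });

exports.lambdaHandler = async () => {
    // Works once the bundled aws-sdk is >= v2.442.0.
    await ssm.putParameter({
        Name: 'SECRET_TOKEN',
        Type: 'SecureString',
        Value: '12345',
        Overwrite: true,
        Tier: 'Advanced'
    }).promise();
};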

Trying to insert data into BigQuery fails from container engine pod

I have a simple node.js application that tries to insert some data into BigQuery. It uses the provided gcloud node.js library.
The BigQuery client is created like this, according to the documentation:
google.auth.getApplicationDefault(function(err, authClient) {
    if (err) {
        return cb(err);
    }
    let bq = BigQuery({
        auth: authClient,
        projectId: "my-project"
    });
    let dataset = bq.dataset("my-dataset");
    let table = dataset.table("my-table");
});
With that I try to insert data into BigQuery.
table.insert(someRows).then(...)
This fails, because the BigQuery client returns a 403 telling me that the authentication is missing the required scopes. The documentation tells me to use the following snippet:
if (authClient.createScopedRequired &&
        authClient.createScopedRequired()) {
    authClient = authClient.createScoped([
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/bigquery.insertdata",
        "https://www.googleapis.com/auth/cloud-platform"
    ]);
}
This didn't work either, because the if statement never executes. I skipped the if and set the scopes every time, but the error remains.
What am I missing here? Why are the scopes always wrong regardless of the authClient configuration? Has anybody found a way to get this or a similar gcloud client library (like Datastore) working with the described authentication scheme on a Container Engine pod?
The only working solution I found so far is to create a JSON key file and provide that to the BigQuery client, but I'd rather create the credentials on the fly than have them sitting next to the code.
Side note: the Node service works flawlessly without providing the auth option to BigQuery when running on a Compute Engine VM, because there the authentication is negotiated automatically by Google.
Baking JSON key files into the images (containers) is a bad idea, security-wise, as you said.
You should be able to add these kinds of scopes to the Kubernetes cluster during its creation (they cannot be adjusted afterwards).
Take a look at this doc for "--scopes".
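For example, a minimal sketch of creating a cluster whose node VMs carry the BigQuery scope (the cluster name and zone are placeholders):
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --scopes=bigquery,cloud-platform
With the scope on the node VMs, the application-default credentials inside the pods can use BigQuery without a key file.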
