I'm looking to create a small web application that lists some data about the ingresses in my cluster. The application will be hosted in the cluster itself, so I assume I'm going to need a service account attached to a backend application that calls the Kubernetes API to get the data, then serves that up to the front end through a GET via axios etc. Am I along the right lines here?
You can use the JavaScript Kubernetes client package for Node directly in your Node application to access the Kubernetes API server over its REST APIs:
npm install @kubernetes/client-node
There are a few ways to provide authentication information to your Kubernetes client. This is code which worked for me:
const k8s = require('@kubernetes/client-node');
const cluster = {
name: '<cluster-name>',
server: '<server-address>',
caData: '<certificate-data>'
};
const user = {
name: '<cluster-user-name>',
certData: '<certificate-data>',
keyData: '<certificate-key>'
};
const context = {
name: '<context-name>',
user: user.name,
cluster: cluster.name,
};
const kc = new k8s.KubeConfig();
kc.loadFromOptions({
clusters: [cluster],
users: [user],
contexts: [context],
currentContext: context.name,
});
const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);
k8sApi.listNamespacedIngress('<namespace>').then((res) => {
console.log(res.body);
});
You need to pick the API client according to your cluster's ingress API version; in my case I was using NetworkingV1Api. A quick way to choose is shown below.
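Check which ingress API versions your cluster actually serves (kubectl api-versions | grep networking) and pick the matching client. A minimal sketch of the two common options:
// networking.k8s.io/v1 is served on Kubernetes 1.19+
const ingressApi = kc.makeApiClient(k8s.NetworkingV1Api);
// on older clusters (1.14 to 1.21) use the beta client instead:
// const ingressApi = kc.makeApiClient(k8s.NetworkingV1beta1Api);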
You can get further options from the JS client repository: https://github.com/kubernetes-client/javascript
As for the different ways to authenticate: the service account you mentioned is indeed one of them. Yes, you will require one (note it will also need RBAC permissions to list ingresses), but if you are planning to run your script on the cluster itself there is no need to build the kubeconfig manually; you can directly use this method to authenticate:
const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig()
kc.loadFromDefault() // falls back to the in-cluster service account config when no kubeconfig is found
const k8sApi = kc.makeApiClient(k8s.NetworkingV1beta1Api) // before 1.14 use extensions/v1beta1 (ExtensionsV1beta1Api)
k8sApi.listNamespacedIngress('<Namespace name>').then((res) => {
console.log(res.body);
});
You can check out these examples: https://github.com/kubernetes-client/javascript/tree/master/examples. You can also use TypeScript.
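To tie this back to the original question: yes, you're along the right lines. Here is a minimal sketch of the backend piece, assuming Express; the route path, port, and the 'default' namespace are arbitrary choices for illustration, and note that newer versions of the client return the list directly rather than under .body:
const express = require('express');
const k8s = require('@kubernetes/client-node');

const app = express();

const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // picks up the pod's service account when running in-cluster

const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);

// GET endpoint the frontend can call with axios
app.get('/api/ingresses', async (req, res) => {
  try {
    const result = await k8sApi.listNamespacedIngress('default');
    // trim the response down to the fields the frontend needs
    const ingresses = result.body.items.map((ing) => ({
      name: ing.metadata.name,
      namespace: ing.metadata.namespace,
      hosts: (ing.spec.rules || []).map((rule) => rule.host),
    }));
    res.json(ingresses);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);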
I have a NodeJS application that uses Node-Config (https://www.npmjs.com/package/config) to load application configurations. What I'm trying to do is load secrets from Azure Keyvault into the config during startup, and ensure these are available before they are required (e.g. for connecting to databases etc.).
I have no problem connecting to and retrieving values from the Keyvault, but I am struggling with the non-blocking nature of JS. The application startup process continues before the config values have finished loading (asynchronously) into the config.
One strategy could be to delay application launch to await the Keyvault secrets loading (see How to await in the main during start up in node?).
Another would be to not load them via Config, but instead to modify the code wherever secrets are used so they are loaded asynchronously via promises.
It seems like this will be a common problem, so I am hoping someone here can provide examples or a design pattern of the best way of ensuring remote keyvault secrets are loaded during startup.
Thanks in advance for suggestions.
Rod
I have now successfully resolved this question.
A key point to note is setting process.env['ALLOW_CONFIG_MUTATIONS']=true;
Configs are immutable by default (they can't be changed after being set initially). Since async resolution is going to fill these values in later, it's critical that you adjust this setting. Otherwise you will see the asynchronous configs obtaining correct values from the keystore, but when you check with config.get they will not have been set. This really should be added to the documentation at https://github.com/node-config/node-config/wiki/Asynchronous-Configurations
My solution: first, let's create a module for the Azure keystore client - azure-keyvault.mjs :
import { DefaultAzureCredential } from '@azure/identity';
import { SecretClient } from '@azure/keyvault-secrets';
// https://learn.microsoft.com/en-us/azure/developer/javascript/how-to/with-web-app/use-secret-environment-variables
if (
!process.env.AZURE_TENANT_ID ||
!process.env.AZURE_CLIENT_ID ||
!process.env.AZURE_CLIENT_SECRET ||
!process.env.KEY_VAULT_NAME
) {
throw Error('azure-keyvault - required environment vars not configured');
}
const credential = new DefaultAzureCredential();
// Build the URL to reach your key vault
const url = `https://${process.env.KEY_VAULT_NAME}.vault.azure.net`;
// Create client to connect to service
const client = new SecretClient(url, credential);
export default client;
In the config files (using node-config):
process.env['ALLOW_CONFIG_MUTATIONS']=true;
const asyncConfig = require('config/async').asyncConfig;
const defer = require('config/defer').deferConfig;
const debug = require('debug')('app:config:default');
// example usage debug(`\`CASSANDRA_HOSTS\` environment variable is ${databaseHosts}`);
async function getSecret(secretName) {
const client = (await import('../azure/azure-keyvault.mjs')).default;
const secret = await client.getSecret(secretName);
// dev: debug(`Get Async config: ${secretName} : ${secret.value}`);
return secret.value
}
module.exports = {
//note: defer just calculates this config at the end of config generation
isProduction: defer(cfg => cfg.env === 'production'),
database: {
// use asyncConfig to obtain promise for secret
username: asyncConfig(getSecret('DATABASE-USERNAME')),
password: asyncConfig(getSecret('DATABASE-PASSWORD'))
},
...
}
Finally, modify the application startup to resolve the async configs BEFORE config.get is called.
server.js
const { resolveAsyncConfigs } = require('config/async');
const config = require('config');
const P = require('bluebird');
...
function initServer() {
return resolveAsyncConfigs(config).then(() => {
// if you want to confirm the async configs have loaded
// try outputting one of them to the console at this point
console.log('db username: ' + config.get("database.username"));
// now proceed with any operations that will require configs
const client = require('./init/database.js');
// continue with bootstrapping (whatever your code is)
// in our case let's proceed once the db is ready
return client.promiseToBeReady().then(function () {
return new P.Promise(_pBootstrap);
});
});
}
I hope this helps others wishing to use config/async with remote keystores such as Azure. Comments or improvements on the above are welcome.
~ Rod
How can I get the current size of a GKE node pool using the REST (or Node) API?
I'm managing my own worker pool using my Express app running on my cluster, and can set the size of the pool and track the success of the setSize operation, but I see no API for getting the current node count. The NodePool resource only includes the original node count, not the current count. I don't want to use gcloud or kubectl on one of my production VMs.
I could go around GKE and try to infer the size using the Compute Engine (GCE) API, but I haven't looked into that approach yet. Note that it seems difficult to get the node count even from Stackdriver. Has anyone found any workarounds to get the current node count?
The worker pool size can be retrieved from the Compute Engine API by getting the instance group associated with the node pool.
const { google } = require('googleapis')
const Compute = require('@google-cloud/compute')
const container = google.container('v1')
const compute = new Compute()
const projectId = 'project-12345'
const zone = 'us-central1-a'
const nodePoolId = 'worker-pool'
const clusterId = 'cluster-name'
async function authorize() {
const auth = new google.auth.GoogleAuth({
scopes: [ 'https://www.googleapis.com/auth/cloud-platform' ],
})
return auth.getClient()
}
const getNodePoolSize = async () => {
const auth = await authorize()
const clusterName = `projects/${projectId}/zones/${zone}/clusters/${clusterId}`
const request = { name: clusterName, auth }
const response = await container.projects.locations.clusters.get(request)
const nodePool = response.data.nodePools.find(({ name }) => name === nodePoolId)
const igName = nodePool.instanceGroupUrls[0].match(/.*\/instanceGroupManagers\/([a-z0-9-]*)$/)[1]
const instanceGroup = await compute.zone(zone).instanceGroup(igName).get()
return instanceGroup[1].size // get() resolves to [instanceGroup, apiResponse]; size is reported on the API response
}
Note that this is using two different Node API mechanisms; we could use google.compute instead of @google-cloud/compute. Also, the two APIs are authenticated differently: the former uses the authorize() method to get a client, while the latter is authenticated via the default account set in environment variables.
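For completeness, here is a rough sketch of what the google.compute alternative could look like, assuming the same authorize() helper and variables as above. Note that instanceGroupManagers.get reports targetSize, which may briefly differ from the actual instance count while a resize is in progress:
// Alternative: read the managed instance group size via googleapis' compute API
const computeApi = google.compute('v1')

const getManagedGroupSize = async (igName) => {
  const auth = await authorize()
  const response = await computeApi.instanceGroupManagers.get({
    project: projectId,
    zone,
    instanceGroupManager: igName,
    auth,
  })
  return response.data.targetSize
}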
I am trying to create a new table in BigQuery. I have followed these instructions https://codelabs.developers.google.com/codelabs/cloud-bigquery-nodejs/index.html?index=..%2F..index#9 and have my user and roles defined properly.
I created a node project, installed the google dependencies and have the following code:
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery({
projectId: 'myproject-develop-3fcb6',
private_key_id: "11111111111",
client_email: "myuser-bigquery-sa@myproject-develop-3fcb6.iam.gserviceaccount.com",
client_id: "212111112",
});
This is how I'm creating my dataset and table:
module.exports = {
createTable: ({ datasetId, tableId, schema, partitionBy}) => {
const options = { schema };
if (partitionBy) {
options.timePartitioning = {
field: partitionBy
};
}
return new Promise((resolve, reject) => {
bigquery
.dataset(datasetId)
.createTable(tableId, options)
.then(results => resolve(results[0]))
.catch(err => {
handleError(err);
reject(err);
});
});
},
};
When I run my createTable function and pass in the dataset name, table name, and schema, I get the following error immediately:
ERROR: Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
How do I pass my default credentials to BigQuery so I can perform CRUD operations in node.js? Thanks
In the tutorial that you mentioned, this gcloud command creates a key.json:
gcloud iam service-accounts keys create ~/key.json --iam-account my-bigquery-sa@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com
Then you can use the following code:
// Create a BigQuery client explicitly using service account credentials.
// by specifying the private key file.
const {BigQuery} = require('@google-cloud/bigquery');
const options = {
keyFilename: 'path/to/key.json',
projectId: 'my_project',
};
const bigquery = new BigQuery(options);
Authenticating With a Service Account Key File
I do not know where you are running your code, but the tutorial has a line where you set the environment variable, so you do not need to authenticate using the key.json file in your code:
export GOOGLE_APPLICATION_CREDENTIALS="/home/${USER}/key.json"
GCP client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to. The next section describes how to set the environment variable.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Kubernetes Engine, Cloud Run, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
You can also pass credentials directly as parameters.
const {BigQuery} = require('@google-cloud/bigquery');
const bigQuery = new BigQuery({
projectId: "your-project-id",
credentials: {...}, // content of json file
});
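If you go this route, one way to fill in the credentials object is to load it from the key file downloaded earlier. A minimal sketch, assuming a local path to the key.json created with gcloud above (the auth library only needs client_email and private_key):
// './key.json' is a hypothetical path to the downloaded service account key
const key = require('./key.json');
const bigqueryClient = new BigQuery({
  projectId: key.project_id,
  credentials: {
    client_email: key.client_email,
    private_key: key.private_key,
  },
});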
Thanks to @MahendraPatel's comment.
I'm trying to use kubernetes-client and it works fine when getting a list of pods. But how do I get a list of services, i.e.:
kubectl get services
I could not find any appropriate method in kubernetes-client:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;
const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });
const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
console.log('Pods: ', JSON.stringify(pods));
From the godaddy/kubernetes-client library documentation, there seems to be:
api.v1.namespaces(namespace).services.get
list or watch objects of kind Service
Which looks the same as:
api.v1.namespaces(namespace).pods.get
list or watch objects of kind Pod
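So listing services should look just like your pods example. A minimal sketch based on your own setup, with the namespace and label selector copied from the question:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;
const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });

async function listServices() {
  // same call shape as pods.get, just on the services resource
  const services = await client.api.v1.namespaces('xxxxx').services.get({ qs: { labelSelector: 'application=test' } });
  console.log('Services: ', JSON.stringify(services.body.items.map((svc) => svc.metadata.name)));
}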
I am trying to create an ECS instance using the Node SDK, but failing. I am not sure if I have done the right thing.
I am able to create the ECS with same configuration using Portal as well as ROS.
//using the SDK for ECS
var ECSClient = require('@alicloud/ecs-2014-05-26');
// creating the client
var client = new ECSClient({
accessKeyId: 'myaccesskeyid',
accessKeySecret: 'myaccesskeysecret',
endpoint: 'https://ecs.aliyuncs.com'
});
// image id and instance type procured using the OpenApi explorer
var params = {
ImageId: 'winsvr_64_dtcC_1809_en-us_40G_alibase_20190528.vhd',
InstanceType: 'ecs.t1.xsmall',
RegionId: 'ap-south-1'
}
// options
var opts = {
'x-acs-region-id': "ap-south-1"
}
// calling the sdk method to create ecs instance
client.createInstance(params, opts).then((res) => {
console.log(res)
}, (err) => {
console.log('ERROR!!')
console.log(JSON.stringify(err))
});
Not a solution but a suggestion: create a support ticket in the Portal. They replied pretty fast and solved my ECS problems.