Using gcloud commands in nodejs application - node.js

Some gcloud commands don't have API or client library support (for example - this one).
In these cases, is there a simple way to run gcloud commands from a nodejs application?

The gcloud endpoints services commands for IAM policy are difficult for me to check quickly but, IIRC (and if this is similar to the gcloud projects commands for IAM policy), it's not that there's no API, but that there's no single API call.
What you can always do with gcloud is append --log-http to see what happens under the covers. With IAM policy mutations (off the top of my head), you GET the policy, mutate it locally, and then apply the change back using the etag the GET gave you. The backend checks the policy's state (the etag is effectively a hash of the policy) and, if it's unchanged, your change is applied.
If this is what's happening here, you should be able to reproduce the functionality in NodeJS using the existing (!) APIs and, if you're using API Client Libraries (rather than Cloud Client Libraries), the functionality will be available.
Apart from the complexity involved in shelling out to gcloud, you'll also need to authenticate it, (un)marshal data to and from the shell, and manage errors. Ergo, it's messy and generally discouraged.
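For illustration, here's a rough sketch of that read-modify-write flow using the googleapis API Client Library. The Resource Manager project-policy calls are used as the example (the addBinding helper, project ID, and scopes are assumptions for the sketch; the equivalent calls for an Endpoints service would follow the same GET / mutate / SET-with-etag shape):

const { google } = require('googleapis');

async function addBinding(projectId, role, member) {
  // Application Default Credentials are assumed to be available.
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const crm = google.cloudresourcemanager({ version: 'v1', auth });

  // 1. GET the current policy (the response carries the etag).
  const { data: policy } = await crm.projects.getIamPolicy({
    resource: projectId,
    requestBody: {},
  });

  // 2. Mutate the policy locally.
  policy.bindings = policy.bindings || [];
  policy.bindings.push({ role, members: [member] });

  // 3. SET it back; the unchanged etag lets the backend accept the update.
  await crm.projects.setIamPolicy({
    resource: projectId,
    requestBody: { policy },
  });
}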

In Node.js we have the child_process module. As the name suggests, child_process provides functions like spawn and exec that create a new child process which runs a shell command as an independent process. spawn takes the main command as its first argument and the remaining command-line options as an array in its second parameter.
So, with respect to the link that you shared, you might end up writing something like this:
const { spawn } = require("child_process");

const listening = spawn('gcloud', ['endpoints', 'services', 'blah', '--option', 'someValue']);

listening.stdout.on("data", data => {
  console.log(`stdout: ${data}`);
});

listening.stderr.on("data", data => {
  console.log(`stderr: ${data}`);
});

listening.on('error', (error) => {
  console.log(`error: ${error.message}`);
});
References:
https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options

I'm not sure this directly answers your question, but there is an npm package that can help you run Unix commands from within the app.
Check out shelljs.
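For example, a minimal sketch (the gcloud arguments here are placeholders):

const shell = require('shelljs');

// Run the command synchronously and capture its output.
const result = shell.exec('gcloud endpoints services list --format=json', { silent: true });

if (result.code !== 0) {
  console.error(result.stderr);
} else {
  console.log(JSON.parse(result.stdout));
}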

Related

How to slurp a value in a file and assign this to a variable inside a puppet module

I'm pretty new to puppet and have run into an issue.
We have a proprietary home-grown API-based secrets management platform. We can either query the API directly or configure things so that the secrets for that host are mounted on the root filesystem.
My problem is I can't figure out how to get that information within the context of a Puppet module and into a variable so that I can use it. It seems you can't get stdout/stderr back from exec (or can you?); otherwise this would be cake.
So for simplicity, let's say my secret is /etc/app/example/foo.
$roles.each |$role| {
  case downcase($role) {
    'foo': {
      # SOMEHOW I NEED TO GET TOKEN FROM FILESYSTEM OR API CALL HERE
      $token = <GET TOKEN SOMEHOW>

      # here I need to do something with my value
      exec { "my description":
        command     => '/bin/foo',
        environment => ["TOKEN=${token}"],
      }
    }
  }
}
This is basically what I need to do at a basic level. It doesn't matter if I call curl directly (preferred approach) or read a mounted file.
Thx for any help.
you can't get stdout/stderr back from exec (or can you) otherwise this would be cake.
You cannot capture the standard output or error of an Exec's command for reuse, but Puppet's built-in generate() function serves exactly the purpose of executing a command and capturing its output. Normally that would run the command on the server, during catalog compilation, but if you want it to run on the client instead then you can defer its execution. One of the primary purposes for deferring functions is for interaction with secret stores.
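For example, a minimal sketch of the compile-time approach (the curl path, API URL, and secret path are placeholders for your own setup):

$roles.each |$role| {
  case downcase($role) {
    'foo': {
      # generate() runs the command on the Puppet server during catalog
      # compilation and returns its standard output as a string.
      $token = generate('/usr/bin/curl', '-s', 'https://secrets.example.com/api/foo')

      exec { 'my description':
        command     => '/bin/foo',
        environment => ["TOKEN=${token}"],
      }
    }
  }
}

If the secret is only reachable from the agent (for example, the file mounted at /etc/app/example/foo), that is where deferring the function call so it runs on the client at apply time comes in.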
With that said, you might want to consider wrapping the whole thing up in a custom resource type. That's maybe a bit more work (especially if you don't speak Ruby), but it's a lot more flexible, and it should make for cleaner and clearer code on the Puppet DSL side, too.

Elevating NodeJS ChildProcess created with child_process.fork()

I'm developing an electron application which downloads software. For users who target "Program Files" however, the installation needs to run with administrator permissions.
I'm creating a child process in which the installer runs using child_process.fork(), and am depending on the IPC connection for the ability to send and receive messages.
Unfortunately however, I can't find any way to elevate this process. Some libraries (such as node-windows) use child_process.exec() under the hood, but this doesn't create the IPC connection.
What is the best way to go about this?
The simplest option is to run the whole app as administrator.
You can force (or, to put it more politely, remind) the user to run it as admin.
E.g. in electron-builder with "requestedExecutionLevel": "requireAdministrator"
If you want to elevate only the child process, you can either make that child process smart enough to ask for elevation itself, or use an 'elevator', an extra program which asks for the elevation.
node-windows does that with a VBS script
electron-react-boilerplate does that with the pre-compiled program elevate
Also, node-powershell supports executing commands, if necessary with elevation (basic PowerShell).
As for IPC, what are you after? child_process.exec buffers the output, while child_process.spawn gives it to you in a stream-like manner (see child process)
You just need to provide a callback with the correct arguments.
Example from child process:
const { exec, spawn } = require('child_process');

exec('my.bat', (err, stdout, stderr) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
});
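If the elevated child cannot be started with fork()'s built-in IPC channel, one workaround is to treat the child's stdout as the message channel. A rough sketch (the installer path and the line-delimited JSON message format are assumptions for the example, not part of any of the libraries above):

const { spawn } = require('child_process');

// Hypothetical installer/launcher path; in practice this would be whatever
// elevated wrapper (VBS script, elevate.exe, ...) starts the installer.
const child = spawn('installer.exe', ['--silent']);

// Treat each stdout line as one JSON message from the child.
let buffered = '';
child.stdout.on('data', chunk => {
  buffered += chunk;
  let idx;
  while ((idx = buffered.indexOf('\n')) !== -1) {
    const line = buffered.slice(0, idx).trim();
    buffered = buffered.slice(idx + 1);
    if (!line) continue;
    try {
      console.log('message from child:', JSON.parse(line));
    } catch (e) {
      console.log('raw output:', line);
    }
  }
});

child.on('close', code => console.log(`installer exited with code ${code}`));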

Google Cloud trace Custom trace only works a few times

I have activated Google Cloud Trace with nodejs & express; it works well in automatic mode and registers calls to the API correctly.
I am trying to create a trace manually, to measure the execution time of intermediate steps.
controller (req, res) {
  tracer.runInRootSpan({ name: 'dnd-task' }, (rootSpan) => {
    // promise
    myPromise(rootSpan)
      .then((data) => {
        rootSpan.endSpan()
        res.ok(data)
      })
      .catch((err) => {
        rootSpan.endSpan()
        res.send(err)
      })
  })
}
but Google Cloud Trace only lists 1 or 2 of these calls, while the automatically generated traces show thousands of API calls.
I also read the documentation to try to get the trace context from the express.js middleware, but I didn't find a way to get it.
from: google-cloud-trace
a root span is automatically started whenever an incoming request is received (in other words, all middleware already runs within a root span).
Update based on @kjin's comment:
Inside an Express controller you only need
tracer.createChildSpan({ name: 'name' })
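Putting that together, a minimal sketch of the controller from the question rewritten with a child span (assuming the trace agent was started elsewhere and exposed as tracer):

function controller (req, res) {
  // The request is already inside an automatically created root span,
  // so only a child span is needed for the intermediate step.
  const span = tracer.createChildSpan({ name: 'dnd-task' });
  myPromise(span)
    .then((data) => {
      span.endSpan();
      res.ok(data);
    })
    .catch((err) => {
      span.endSpan();
      res.send(err);
    });
}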
If you have automatic tracing enabled and you also generate root spans within a request listener using the custom span API, then the root span will be ignored because it was created within a pre-existing root span (the one that was automatically started for this request). This is my guess based on the code presented here, but you should be able to accomplish what you want by instead creating a child span. (Custom root spans are meant for work that occurs outside of a request's lifecycle -- such as periodic work.)
Re: Express.js middleware context -- I am not exactly sure what you mean here, but the Trace Agent doesn't store any of the request listener arguments in the trace context.
As an additional note -- you'll get a quicker response time if you report an issue directly to the GitHub repository to which you linked.
Hope this helps!

How to test advanced interactions when developing an Alexa Skill

I am developing an Alexa Skill, and I am struggling a bit to understand whether I have set everything up in the best way possible to debug while developing.
Right now I am developing locally using Node.js, uploading to the cloud when ready, and testing all the responses to intents using the Service Simulator in the Test section of the developer console.
I find the process a bit slow, but it works... Still, I have two questions:
1) Is there a way to avoid the process of uploading to the cloud?
And most importantly: 2) How do I test advanced interactions, for example multi-step ones, in the console? For example, how do I test triggering the response to an intent and then asking the user for confirmation (Yes/No)? Right now the only way of doing it is using the actual device.
Any improvement is highly appreciated
Like @Tom suggested - take a look at bespoken.tools for testing skills locally.
Also, the Alexa Skills Kit Command Line Interface (ASK CLI) was recently released, and it has some command-line options you might look into.
For example, the 'api invoke-skill' command lets you invoke the skill locally via the command line (or a script) so you don't have to use the service simulator. Like this...
$ask api invoke-skill -s $SKILL_ID -f $JSON --endpoint-region $REGION --debug
Here is a quick video I did that introduces the ASK CLI. It doesn't specifically cover testing but it will provide a quick intro.
https://youtu.be/p-zlSdixCZ4
Hope that helps.
EDIT: Had another thought for testing locally. If you're using node and Lambda functions, you can call the index.js file from another local .js file (example: test.js) and pass in the event data and context. Here is an example:
// path to the Lambda index.js file
var lambdaFunction = require('../lambda/custom/index.js');

// json representing the event - just copy from the service simulator
var event = require('./events/GetUpdateByName.json');

var context = {
  'succeed': function (data) {
    console.log(JSON.stringify(data, null, '\t'));
  },
  'fail': function (err) {
    console.log('context.fail occurred');
    console.log(JSON.stringify(err, null, '\t'));
  }
};

function callback(error, data) {
  if (error) {
    console.log('error: ' + error);
  } else {
    console.log(data);
  }
}

// call the lambda function
lambdaFunction.handler(event, context, callback);
Here's how I'm testing multi-step interactions locally.
I'm using a free, third-party tool called BSTAlexa:
http://docs.bespoken.tools/en/latest/api/classes/bstalexa.html
It emulates Amazon's role in accepting requests, feeding them to your skill, and maintaining the state of the interactions.
So I start my test script by configuring BSTAlexa - pointing it to my skill config (eg. intents) and to a local instance of my skill (in my case I'm giving it a local URL).
Then I feed BSTAlexa a sequence of textual requests and verify that I'm getting back the expected responses. And I put all this in a Mocha script.
It works quite well.
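A rough sketch of what such a Mocha test can look like (the skill URL, file paths, and utterances are placeholders, and the BSTAlexa constructor/method signatures should be double-checked against the docs linked above):

var BSTAlexa = require('bespoken-tools').BSTAlexa;

describe('multi-step interaction', function () {
  var alexa;

  beforeEach(function (done) {
    // Point the emulator at a locally running instance of the skill.
    alexa = new BSTAlexa('http://localhost:10000',
                         './speechAssets/IntentSchema.json',
                         './speechAssets/SampleUtterances.txt');
    alexa.start(done);
  });

  afterEach(function (done) {
    alexa.stop(done);
  });

  it('asks for confirmation and accepts yes', function (done) {
    alexa.spoken('start my task', function (error, response) {
      if (error) return done(error);
      // The emulator keeps the session state, so the follow-up answer
      // lands in the same interaction.
      alexa.spoken('yes', function (error, response) {
        done(error);
      });
    });
  });
});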
Please find answers below (answering in reverse order):
You can test multi-step interactions using the simulator (echosim.io), but each time you have to press and hold the mic button (or hold the space bar). Say, for example, you first ask Alexa something with echosim and Alexa responds asking you to confirm 'yes/no'; then you have to press and hold the mic button again to give the confirmation.
You can automate the lambda deployment process. Please see the link,
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
It would be good to write complete unit tests so that you can test your logic before uploading to Lambda. It will also help to reduce the number of Lambda deployments.

nodejs ssh2 handling data responses in persistent shell

I want to open a persistent connection over ssh, type commands, and handle their responses. Commands will likely build on each other, such as changing directories and then running another command, so exec does not seem to be an option from what I understand. With PHP and phpseclib it was simple, I could just do:
$ssh->sftp('cd /some/dir');
$response = $ssh->sftp('ls');
However with ssh2 and nodejs there appears to be only one handler for all incoming data, so no matter what I write, it will all come back to the same function, which makes it hard to determine what is what, especially since I cannot control what comes back. If I did an 'ls' I would get a list of files and folders, but if I did a grep or tail I would get a different kind of output, and my handler would not know which is which in order to handle/parse them properly.
How can I solve this issue?
Perhaps I am looking at this the wrong way and just need someone to take the PHP glasses off. My goal is to build a small local app that will connect to my servers through ssh and do complex tasks like grabbing my access logs and parsing all the data into a more readable format for me, or maybe creating a new sites-available config file and then a2ensite'ing it, or vardumping my databases and downloading the files to back them up locally, etc.
ssh.connection.shell is used to get access to a remote shell in an interactive manner. For connections that just need to do some work and come back, shell is not the right option.
Read this,
https://github.com/mscdex/ssh2/issues/210
You can do the same with the npm ssh2 package:
conn.sftp(function(err, sftp) {
  if (err) throw err;
  sftp.readdir('foo', function(err, list) {
    if (err) throw err;
    console.dir(list);
    conn.end();
  });
});
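And if the goal is running chained shell commands rather than SFTP, here is a minimal sketch using exec from the same package: combine the dependent commands in one invocation so the directory change applies to the command that follows (host and credentials are placeholders):

const { Client } = require('ssh2');
const fs = require('fs');

const conn = new Client();
conn.on('ready', () => {
  // 'cd' and 'ls' run in the same invocation, so the directory change
  // applies to the listing.
  conn.exec('cd /some/dir && ls -la', (err, stream) => {
    if (err) throw err;
    let output = '';
    stream.on('data', chunk => { output += chunk; });
    stream.stderr.on('data', chunk => { console.error(`stderr: ${chunk}`); });
    stream.on('close', (code) => {
      console.log(`command exited with code ${code}`);
      console.log(output);
      conn.end();
    });
  });
}).connect({
  host: 'example.com',
  port: 22,
  username: 'user',
  privateKey: fs.readFileSync('/path/to/key')
});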
