How to get yargs auto-complete working when using --experimental-specifier-resolution=node

My objective is to write a CLI in TypeScript/Node.js that uses --experimental-specifier-resolution=node, is written with yargs, and supports autocompletion.
To make this work, I use the entry.sh file below, thanks to this helpful SO answer (the bin: {eddy: "./entry.sh"} entry in package.json points to this file):
#!/usr/bin/env bash
full_path=$(realpath "$0")
dir_path=$(dirname "$full_path")
script_path="$dir_path/dist/src/cli/entry.js"
# Path is made thanks to: https://code-maven.com/bash-shell-relative-path
# Combined with knowledge from: https://stackoverflow.com/questions/68111434/how-to-run-node-js-cli-with-experimental-specifier-resolution-node
/usr/bin/env node --experimental-specifier-resolution=node "$script_path" "$@"
This works great, and I can use the CLI. However, autocompletion does not work. According to yargs, I should be able to get autocompletion by appending the output of ./entry.sh completion to my ~/.bashrc. However, this does not seem to work.
Output from ./entry.sh completion:
###-begin-entry.js-completions-###
#
# yargs command completion script
#
# Installation: ./dist/src/cli/entry.js completion >> ~/.bashrc
# or ./dist/src/cli/entry.js completion >> ~/.bash_profile on OSX.
#
_entry.js_yargs_completions()
{
    local cur_word args type_list

    cur_word="${COMP_WORDS[COMP_CWORD]}"
    args=("${COMP_WORDS[@]}")

    # ask yargs to generate completions.
    type_list=$(./dist/src/cli/entry.js --get-yargs-completions "${args[@]}")

    COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )

    # if no match was found, fall back to filename completion
    if [ ${#COMPREPLY[@]} -eq 0 ]; then
      COMPREPLY=()
    fi

    return 0
}
complete -o default -F _entry.js_yargs_completions entry.js
###-end-entry.js-completions-###
I tried modifying the completion output, but I don't really understand Bash just yet 😅
Update
Working on a reproducible example (WIP).
Repo is here.
Currently, one of the big differences is that npm link does not behave the same in the two environments: only in the repro repo does /usr/local/share/npm-global/bin/ actually get updated. I'm still investigating this.

You can try setting scriptName in your entry.js to the name of your wrapper script; that should force the completion name to be generated from it. I haven't tried it, but looking at the source code of yargs, the $0 parameter can be altered using scriptName, which in turn affects how the completion-generation function produces the completion script:
In yargs-factory.ts:
  scriptName(scriptName: string): YargsInstance {
    this.customScriptName = true;
    this.$0 = scriptName;
    return this;
  }
In completion.ts:
  generateCompletionScript($0: string, cmd: string): string {
    let script = this.zshShell
      ? templates.completionZshTemplate
      : templates.completionShTemplate;
    const name = this.shim.path.basename($0);

    // add ./ to applications not yet installed as bin.
    if ($0.match(/\.js$/)) $0 = `./${$0}`;
    script = script.replace(/{{app_name}}/g, name);
    script = script.replace(/{{completion_command}}/g, cmd);
    return script.replace(/{{app_path}}/g, $0);
  }
Also, I'm not sure how the "bin" configuration works, but thanks to scriptName you may no longer need a wrapper.
Make sure the version of yargs you use supports this.
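For illustration, a minimal sketch of an entry point with scriptName set (untested; the command is just an example, and it assumes the wrapper installed via bin is named eddy):
#!/usr/bin/env node
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'

yargs(hideBin(process.argv))
  .command('hello', 'print a greeting', () => {}, () => console.log('hello'))
  .completion()        // registers the default `completion` command
  .scriptName('eddy')  // $0 becomes "eddy", so the generated script targets the wrapper
  .parse()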
As a side note, I thought about suggesting that you modify the generated completion script directly, but besides being hackish, that might still leave the script name unrecognized during completion. Anyhow, the approach above seemed like the right thing to look at first.
The modified version would look like this:
_entry.sh_yargs_completions()
{
    local cur_word args type_list

    cur_word="${COMP_WORDS[COMP_CWORD]}"
    args=("${COMP_WORDS[@]}")

    # ask yargs to generate completions.
    type_list=$(/path/to/entry.sh --get-yargs-completions "${args[@]}")

    COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )

    # if no match was found, fall back to filename completion
    if [ ${#COMPREPLY[@]} -eq 0 ]; then
      COMPREPLY=()
    fi

    return 0
}
complete -o default -F _entry.sh_yargs_completions entry.sh
Another note: if the script name needs to be dynamic based on the name of its caller, you can make it identifiable through an environment variable, declared in entry.sh like this:
export ENTRY_JS_SCRIPT_NAME=entry.sh
node ...
Then somewhere in entry.js, you can access the variable name through this:
process.env.ENTRY_JS_SCRIPT_NAME
Maybe even just specify $0 or ${0##*/}, whichever works:
export ENTRY_JS_SCRIPT_NAME=$0
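Putting those pieces together, a sketch of the entry.js side (the basename handling and the fallback to process.argv[1] are my assumptions):
import path from 'path'
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'

// Prefer the name exported by the wrapper; fall back to this script's own basename.
const name = process.env.ENTRY_JS_SCRIPT_NAME
  ? path.basename(process.env.ENTRY_JS_SCRIPT_NAME)
  : path.basename(process.argv[1])

yargs(hideBin(process.argv))
  .scriptName(name)
  .parse()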

Thanks, everyone. The solution I ended up with was twofold:
I added a scriptName to the yargs config.
In the wrapping .sh file, I used which node to properly set the --experimental-specifier-resolution=node flag.
test-cli.js
#!/usr/bin/env node
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'
import { someOtherModule } from './some-other-module';
someOtherModule();
yargs(hideBin(process.argv))
  .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => {
    console.info(argv)
  })
  .command('curlAgain <url>', 'fetch the contents of the URL', () => {}, (argv) => {
    console.info(argv)
  })
  .demandCommand(1)
  .help()
  .completion()
  .scriptName('eddy') // <== Added thanks to konsolebox
  .parse()
test-cli.sh
#!/usr/bin/env bash
full_path="$(realpath "$0")"
dir_path="$(dirname $full_path)"
script_path="$dir_path/test-cli.js"
node_path="$(which node)" # <== Makes it work on github codespaces 😅
$node_path --experimental-specifier-resolution=node $script_path "$#"
package.json
{
  "name": "lets-reproduce",
  "type": "module",
  "dependencies": {
    "yargs": "^17.3.1"
  },
  "bin": {
    "eddy": "./test-cli.sh"
  }
}
Steps to install autocompletion:
run npm link
run eddy completion >> ~/.bashrc
source ~/.bashrc
profit 😅🔥
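(After sourcing, complete -p eddy should show the registered completion function; per the template shown earlier, the generated script now registers completions for the name eddy rather than for the .js file.)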

Related

How do I pass the result from a waited npm script to a bash script?

In my npm script I have the following:
#!/usr/bin/env node
import { main } from './main';
import { CONFIG } from '../config';

(async () => {
  const res = await main(CONFIG);
  process.stdout.write(res.join('\n'));
  return res;
})();
Now I want to do some stuff in a bash script depending on what was returned. My attempt below doesn't work properly:
npm run update-imports &
PID=$!
UpdateResult=$(wait $PID)
if [ -z "$UpdateResult" ];
then
echo "No imports updated, committing changes"
else
echo "Check the following files:\n ${UpdateResult}"
exit 1
fi
In short: if nothing or an empty string is returned, proceed with the script; otherwise, exit with a warning.
How do I make it work?
In bash, wait returns the exit status of the process, not the standard output as you expect. You can use process.exit(value) to return a value.
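For example, a minimal sketch of that exit-status approach, reusing the script from the question (the convention of exiting 0 when nothing was updated is illustrative):
#!/usr/bin/env node
import { main } from './main';
import { CONFIG } from '../config';

(async () => {
  const res = await main(CONFIG);
  process.stdout.write(res.join('\n'));
  // The exit status (not stdout) is what `wait $PID` reports back in bash.
  process.exit(res.length === 0 ? 0 : 1);
})();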
If you want to capture and process the standard output of the node program instead, see the answer to the question: How do I set a variable to the output of a command in Bash?
This should do the job:
UpdateResult=$(npm run update-imports)

if [ -z "$UpdateResult" ]; then
  echo "No imports updated, committing changes"
else
  echo -e "Check the following files:\n ${UpdateResult}"
  exit 1
fi
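One caveat worth knowing: npm run prints its own banner lines, which can end up in UpdateResult; npm run --silent update-imports keeps the captured text limited to what the script itself writes.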

Is it possible to roll a dynamically-generated Javascript array (Node) into a Bash array?

So I'm currently rolling a few different env variables into a Docker container using the following syntax:
Node script:
process.env['VAR1'] = 'someArbitraryValue';
process.env['VAR2'] = 'anotherArbitraryValue';
which then execs a bash script that looks like this:
params=()
[[ ! -z "$VAR1" ]] && params+=(-e "VAR1=$VAR1")
[[ ! -z "$VAR2" ]] && params+=(-e "VAR2=$VAR2")
docker run "${params[#]}"
That works just fine since I know the names of those env variables in advance and I can just hardcode the bash command to grab their values and insert them into params. However, what I'd like to be able to do is allow for a dynamically-generated list of variables to be added to the params list.
In other words, I run some function that returns an array that looks like:
var myArray = ['VAR3=somevalue', 'VAR4=anothervalue']
and is then passed into params by iterating through its contents and appending them. Since you can't set an array as an env variable in Bash, I'm not exactly sure if this is possible.
Is there a way to perform this operation, or am I out of luck?
If I'm not missing anything, yes: using child_process.execFile() (see also execFileSync()), you can pass the elements of myArray as positional parameters to the bash script and do whatever you want with them in there.
const { execFile } = require('child_process');

// define "myArray" about here

const child = execFile('./myscript.sh', myArray, (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});
// ...
#!/bin/bash -
params=()

for param; do
  params+=(-e "${param}")
done

docker run "${params[@]}"

Create a persistent bash shell session in Node.js, know when commands finish, and read and modify sourced/exported variables

Imagine this contrived scenario:
./main.sh
source ./config.sh
SOME_CONFIG="${SOME_CONFIG}bar"
./output.sh
./config.sh
export SOME_CONFIG='foo'
./output.sh
echo "Config is: ${SOME_CONFIG}"
I am trying to replace ./main.sh with a Node.js-powered ./main.js WITHOUT replacing the other shell files. The exported ./config.sh functions/variables must also be fully available to ./output.sh.
Here is a NON-working ./main.js; I have written it solely to illustrate what I want the final code to look like:
const terminal = require('child_process').spawn('bash')
terminal.stdin.write('source ./config.sh\n')
process.env.SOME_CONFIG = `${process.env.SOME_CONFIG}bar` // this must be done in JS
terminal.stdin.write('./output.sh\n') // this must be able to access all exported functions/variables in config.sh, including the JS modified SOME_CONFIG
How can I achieve this? Ideally if there's a library that can do this I'd prefer that.
While this doesn't fully answer my question, it solves the contrived problem I had at hand and could help others if need be.
In general, if bash scripts communicate with each other via environment variables (eg. using export/source), this will allow you to start moving bash code to Node.js.
./main.js
const child_process = require("child_process");
const os = require("os");

// Source config.sh and print the environment variables, including SOME_CONFIG
const sourcedConfig = child_process
  .execSync(". ./config.sh > /dev/null 2>&1 && env")
  .toString();

// Convert ALL sourced environment variables into an object
const sourcedEnvVars = sourcedConfig
  .split(os.EOL)
  .map((line) => ({
    env: `${line.substr(0, line.indexOf("="))}`,
    val: `${line.substr(line.indexOf("=") + 1)}`,
  }))
  .reduce((envVarObject, envVarEntry) => {
    envVarObject[envVarEntry.env] = envVarEntry.val;
    return envVarObject;
  }, {});

// Make changes
sourcedEnvVars["SOME_CONFIG"] = `${sourcedEnvVars["SOME_CONFIG"]}bar`;

// Run output.sh and pass in the environment variables we got from the previous command
child_process.execSync("./output.sh", {
  env: sourcedEnvVars,
  stdio: "inherit",
});

Module cwd assistance

I created a Perl module that is to be used in many Perl scripts to use Net::SSH::Expect
to do a login.
package myRoutines;

use v5.22;
use strict;
use warnings;
use Net::SSH::Expect;
use Exporter qw(import);

our @EXPORT_OK = qw(my_login);

sub my_login {
    my $user         = 'xxxx';
    my $port         = '10000';
    my $passwd       = 'XYZ';
    my $adminServer  = 'myServer';
    my $rootpassword = 'ABCDEF';
    my ( $pName, $vName ) = @_;

    our $ssh = Net::SSH::Expect->new(
        host       => "$adminServer",
        ssh_option => "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null",
        user       => "$user",
        password   => "$passwd",
        port       => "$port",
        raw_pty    => 1,
        restart_timeout_upon_receive => 1,
        log_file   => "/var/tmp/clilog_$pName$vName"
    );
    eval {
        my $login_output = $ssh->login();
        if ( $login_output !~ />/ ) {
            die "Login has failed.
Login output was $login_output";
        }
    };
    return $ssh;
}

1;
The scripts will do:
use myRoutines qw(my_login);
our ( $ssh, $pName, $vName );
$pName = 'abc';
$vName = '123';
$ssh = my_login( $pName, $vName );
$ssh->send( "some command\r" );
This all works if I'm in the directory that the script and module are in. If I'm in any other directory, the new call works but the call to $ssh->send does nothing.
I've tried adding to my script:
use lib '/some/dir';
(where the .pm file resides) to force it to find the module, and that seems to work when I'm not in the directory where the .pm file resides.
I've tried to:
use File::chdir;
$CWD = '/some/dir';
and again, the login seems to work but the next send does nothing. So I'm at a loss as to what might be happening and would like some advice.
Update 20170908:
Upon further playing and following the suggestions made, I've done the following and it now works:
removed the eval as it was unnecessary.
removed the ours and made them mys.
removed the superfluous double quotes.
set the following in the script:
use File::Basename;
use Cwd qw( abs_path );
chdir "/some/dir";
use lib dirname(abs_path($0));
my $scriptName = basename($0);
use myRoutines qw(ovm_login);
my $pName = substr($scriptName, 0, -3); # cut the .pl off the end of the script name to pass it as pName
Using chdir to change to the directory where my .pl script and .pm file live, and then setting the lib there, is seemingly working as it should.
Borodin, I'm not sure I understand your meaning when you say to object orient the module and .... but would be interested in hearing more to better understand.
If you don't want to hardcode the directory, you can use
use FindBin qw( $RealBin );
use lib $RealBin;
($RealBin is the path to the script. Adjust as needed if myRoutines.pm is in a subdir.)
The simple and easy way would be to place your .pm file in the
/usr/lib64/perl5/
directory; then you shouldn't have any problems.
But that's still not a perfect solution: you should be able to put the .pm file wherever you want.

How to set nodejs interpreter arguments from javascript source

I have someScript.js which requires some node options to run correctly. E.g. (let's suppose we have node 0.12.7):
node --harmony someScript.js
I wish to run it without --harmony option and somehow set it inside the script.
I tried to change process.execArgv:
process.execArgv.push('--harmony');

function getGenerator() {
  return function *() {
    yield 1;
  };
}

var gen = getGenerator();
Since Node.js is an interpreter, it could allow such settings to change (even on the fly), but it seems to ignore changes to process.execArgv.
Also I tried v8.setFlagsFromString:
Interestingly, all Node.js versions that needed --harmony to support generators do not contain the v8 module, so I made my experiments with Node.js 4.1.1:
var v8 = require('v8');
var vm = require('vm');

v8.setFlagsFromString('--harmony');

var str1 =
  'function test1(a, ...args) {' +
  '  console.log(args);' +
  '};';
var str2 =
  'function test2(a, ...args) {' +
  '  console.log(args);' +
  '};';

eval(str1);
test1('a', 'b', 'c');

vm.runInThisContext(str2);
test2('a', 'b', 'c');
Unfortunately, v8.setFlagsFromString('--harmony') did not help, even for eval() and vm.runInThisContext().
I wish to avoid creating a wrapper script.
So, is there some other way to set Node.js arguments from the JavaScript source?
If you have a someScript.js like this:
#!/bin/sh
":" //# comment; exec /usr/bin/env node --harmony "$0" "$@"
function test(a, ...args) {
  console.log(args);
}
test('a', 'b', 'c');
Then the sh someScript.js command will use the interpreter and arguments specified inside someScript.js. (It works because sh executes ":" as a no-op and then execs node on the same file, while node parses that same line as the string ":" followed by a // comment.)
If you mark someScript.js as executable (chmod u+x someScript.js), then the ./someScript.js command will also work.
This article was helpful:
http://sambal.org/2014/02/passing-options-node-shebang-line/
Also, if the file has Windows line endings, there is a bug: a '\r' gets appended to the last script argument, and if there are no arguments, a lone '\r' argument appears.
UPDATE:
Some systems allow you to use
#! /usr/bin/env node --harmony
instead of
#!/bin/sh
":" //# comment; exec /usr/bin/env node --harmony "$0" "$@"
