In my npm script I have the following:
#!/usr/bin/env node
import { main } from './main';
import { CONFIG } from '../config';
(async () => {
const res = await main(CONFIG);
process.stdout.write(res.join('\n'));
return res;
})();
Now I want to do some things in a bash script depending on what was returned. My attempt below doesn't work properly:
npm run update-imports &
PID=$!
UpdateResult=$(wait $PID)
if [ -z "$UpdateResult" ];
then
echo "No imports updated, committing changes"
else
echo "Check the following files:\n ${UpdateResult}"
exit 1
fi
In short: if nothing (or an empty string) is returned, proceed with the script; otherwise, exit with a warning.
How do I make it work?
In bash, wait returns the exit status of the process, not the standard output as you expect. You can use process.exit(value) to return a status value.
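For illustration, here is a minimal sketch of the exit-code route, assuming the Node script is changed to call process.exit(1) when files need attention (npm run propagates the script's exit status):
# branch on the exit status of the npm script, not on its output
if npm run update-imports; then
    echo "No imports updated, committing changes"
else
    echo "Imports were updated; check the files listed above"
    exit 1
fi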
If you want to capture and process the standard output of the node program, see the answer to this question: How do I set a variable to the output of a command in Bash?
This should do the job (the --silent flag keeps npm's own banner out of the captured output):
UpdateResult=$(npm run --silent update-imports)
if [ -z "$UpdateResult" ];
then
echo "No imports updated, committing changes"
else
echo "Check the following files:\n ${UpdateResult}"
exit 1
fi
My objective is to write a CLI in Typescript/node.js, that uses --experimental-specifier-resolution=node, written in yargs with support for autocompletion.
To make this work, I use this entry.sh file, thanks to this helpful SO answer (and the bin: {eddy: "./entry.sh"} option in package.json points to this file)
#!/usr/bin/env bash
full_path=$(realpath "$0")
dir_path=$(dirname "$full_path")
script_path="$dir_path/dist/src/cli/entry.js"
# Path is made thanks to: https://code-maven.com/bash-shell-relative-path
# Combined with knowledge from: https://stackoverflow.com/questions/68111434/how-to-run-node-js-cli-with-experimental-specifier-resolution-node
/usr/bin/env node --experimental-specifier-resolution=node "$script_path" "$@"
This works great, and I can use the CLI. However, autocompletion does not work. According to yargs, I should be able to get autocompletion by appending the output of ./entry.sh completion to my ~/.bashrc profile. However, this does not seem to work.
Output from ./entry.sh completion:
###-begin-entry.js-completions-###
#
# yargs command completion script
#
# Installation: ./dist/src/cli/entry.js completion >> ~/.bashrc
# or ./dist/src/cli/entry.js completion >> ~/.bash_profile on OSX.
#
_entry.js_yargs_completions()
{
local cur_word args type_list
cur_word="${COMP_WORDS[COMP_CWORD]}"
args=("${COMP_WORDS[#]}")
# ask yargs to generate completions.
type_list=$(./dist/src/cli/entry.js --get-yargs-completions "${args[#]}")
COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )
# if no match was found, fall back to filename completion
if [ ${#COMPREPLY[#]} -eq 0 ]; then
COMPREPLY=()
fi
return 0
}
complete -o default -F _entry.js_yargs_completions entry.js
###-end-entry.js-completions-###
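As a quick way to test this without editing ~/.bashrc, the completion function can be loaded into just the current shell session (assuming ./entry.sh is the path to the wrapper):
# load the generated completions into the current session only
source <(./entry.sh completion)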
I tried modifying the completion output, but I don't really understand bash - just yet 😅
Update
Working on a reproducible example (WIP).
Repo is here.
Currently one of the big differences is that npm link does not behave the same in the two environments: /usr/local/share/npm-global/bin/ is only updated in the repo where I'm trying to reproduce the issue. I'm still investigating this.
You can try specifying the scriptName in your entry.js file as the name of your wrapper script. This may force generation of the completion name using it. I haven't tried it, but looking at the source code of yargs, it looks like the $0 parameter can be altered using scriptName, which in turn affects how the completion-generation function generates the completion code:
In yargs-factory.ts:
scriptName(scriptName: string): YargsInstance {
this.customScriptName = true;
this.$0 = scriptName;
return this;
}
In completion.ts:
generateCompletionScript($0: string, cmd: string): string {
let script = this.zshShell
? templates.completionZshTemplate
: templates.completionShTemplate;
const name = this.shim.path.basename($0);
// add ./ to applications not yet installed as bin.
if ($0.match(/\.js$/)) $0 = `./${$0}`;
script = script.replace(/{{app_name}}/g, name);
script = script.replace(/{{completion_command}}/g, cmd);
return script.replace(/{{app_path}}/g, $0);
}
Also, I'm not sure how the "bin" configuration works, but perhaps because of scriptName you'd no longer need a wrapper.
Make sure the version of yargs you use supports this.
Also, as a side note: I thought about suggesting that you modify the generated completion script directly, but besides being hackish, that might still leave the script name unrecognized during completion. Anyhow, I looked at the right approach first.
The modified version would look like this:
_entry.sh_yargs_completions()
{
local cur_word args type_list
cur_word="${COMP_WORDS[COMP_CWORD]}"
args=("${COMP_WORDS[#]}")
# ask yargs to generate completions.
type_list=$(/path/to/entry.sh --get-yargs-completions "${args[#]}")
COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )
# if no match was found, fall back to filename completion
if [ ${#COMPREPLY[#]} -eq 0 ]; then
COMPREPLY=()
fi
return 0
}
complete -o default -F _entry.sh_yargs_completions entry.sh
Another note: If the script name needs to be dynamic based on the name of its caller, you can make it identifiable through an environment variable, so in entry.sh you can declare it like this:
export ENTRY_JS_SCRIPT_NAME=entry.sh
node ...
Then somewhere in entry.js, you can access the variable name through this:
process.env.ENTRY_JS_SCRIPT_NAME
Maybe even just specify $0 or ${0##*/}, whichever works:
export ENTRY_JS_SCRIPT_NAME=$0
Thanks, everyone. The solution I ended up with was twofold:
I added a scriptName to the yargs config
In the wrapping .sh file, I used which node so that the --experimental-specifier-resolution=node flag is passed to the correct node binary.
test-cli.js
#!/usr/bin/env node
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'
import { someOtherModule } from './some-other-module';
someOtherModule();
yargs(hideBin(process.argv))
.command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => {
console.info(argv)
})
.command('curlAgain <url>', 'fetch the contents of the URL', () => {}, (argv) => {
console.info(argv)
})
.demandCommand(1)
.help()
.completion()
.scriptName('eddy') // <== Added thanks to konsolebox
.parse()
test-cli.sh
#!/usr/bin/env bash
full_path="$(realpath "$0")"
dir_path="$(dirname $full_path)"
script_path="$dir_path/test-cli.js"
node_path="$(which node)" # <== Makes it work on github codespaces 😅
"$node_path" --experimental-specifier-resolution=node "$script_path" "$@"
package.json
{
"name": "lets-reproduce",
"type": "module",
"dependencies": {
"yargs": "^17.3.1"
},
"bin": {
"eddy": "./test-cli.sh"
}
}
Steps to install autocompletion:
run npm link
run eddy completion >> ~/.bashrc
source ~/.bashrc
profit 😅🔥
So I'm currently rolling a few different env variables into a Docker container using the following syntax:
Node script:
process.env['VAR1'] = 'someArbitraryValue';
process.env['VAR2'] = 'anotherArbitraryValue';
which then execs a bash script that looks like this:
params=()
[[ ! -z "$VAR1" ]] && params+=(-e "VAR1=$VAR1")
[[ ! -z "$VAR2" ]] && params+=(-e "VAR2=$VAR2")
docker run "${params[#]}"
That works just fine since I know the names of those env variables in advance and I can just hardcode the bash command to grab their values and insert them into params. However, what I'd like to be able to do is allow for a dynamically-generated list of variables to be added to the params list.
In other words, I run some function that returns an array that looks like:
var myArray = ['VAR3=somevalue', 'VAR4=anothervalue']
and is then passed into params by iterating through its contents and appending them. Since you can't set an array as an env variable in Bash, I'm not exactly sure if this is possible.
Is there a way to perform this operation, or am I out of luck?
If I'm not missing anything, yes; using child_process.execFile() (also see execFileSync()), you can pass elements of myArray as positional parameters to the bash script and do whatever you want with them in there.
const { execFile } = require('child_process');
// define "myArray" about here
const child = execFile('./myscript.sh', myArray, (error, stdout, stderr) => {
if (error) {
throw error;
}
console.log(stdout);
});
// ...
#!/bin/bash -
params=()
for param; do
params+=(-e "${param}")
done
docker run "${params[#]}"
For an executable which can fail with non-zero exit codes, one can do:
executable && echo "succeed" || echo "failure"
How to do this with a shell function?
myfunction() {
executable arg1 arg2
}
myfunction && echo "succeed" || echo "failure"
From the bash manual:
When executed, the exit status of a function is the exit status of the last command executed in the body.
In other words, shell functions behave exactly as you have demonstrated in your question. For example, given:
myfunction() {
false
}
Running:
myfunction && echo success || echo failed
Results in:
failed
On the other hand, if we have:
myfunction() {
true
}
Running the same command returns success.
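If you prefer to make the status explicit rather than relying on the last command, a sketch with a captured status (executable and its arguments are placeholders from the question) could look like this:
myfunction() {
    executable arg1 arg2
    local status=$?
    # ... cleanup that must not clobber the status could go here ...
    return "$status"
}
myfunction && echo "succeed" || echo "failure"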
Namely, regarding http://fvue.nl/wiki/Bash:_Error_handling#Set_ERR_trap_to_exit:
Why is it necessary to set -o errtrace to make a trap set/unset from a function call work?
#!/usr/bin/env bash
function trapit {
echo 'trapped in a box'
}
function setTrap {
trap 'trapit' ERR
}
function unsetTrap {
trap - ERR
}
function foo_init {
fooOldErrtrace=$(set +o | grep errtrace)
set -o errtrace
trap 'echo trapped' ERR # Set ERR trap
}
function foo_deinit {
trap - ERR # Reset ERR trap
eval $fooOldErrtrace # Restore `errtrace' setting
unset fooOldErrtrace # Delete global variable
}
# foo_init
setTrap
echo 'set'
false
echo 'unset'
#foo_deinit
unsetTrap
false
According to the bash manual, functions do not inherit the ERR trap unless the errtrace flag is turned on. I don't know why the ERR trap can't be inherited by default, but... it is so for now :)
You can test this behaviour with my sample code:
#!/usr/bin/env bash
trapit () {
echo 'some error trapped'
}
doerr1 () {
echo 'I am the first err generator and i return error status to the callee'
return 1
}
doerr2 () {
echo 'I am the second err generator and i produce an error inside my code'
fgrep a /etc/motttd
return 0
}
[[ $1 ]] && set -o errtrace
trap trapit ERR
doerr1
doerr2
echo 'We will produce an exception in the main program...'
cat /etc/ftab | fgrep a
echo 'OK, thats done, you see it :)'
If you pass any parameter to this script, the errtrace flag will be turned on and you will see that the error is caught when doerr2 tries to do something awful.
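To see both behaviours, run the script twice (errtest.sh is just a placeholder name for the sample above):
./errtest.sh           # no argument: errtrace stays off, failures inside functions are not trapped
./errtest.sh anything  # errtrace on: the ERR trap also fires inside doerr2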
I have written this Perl script to automate my wireless connections:
#!/usr/bin/perl
use strict;
my @modes = ("start", "stop");
my $mode = $modes[0];
my $kill_command = "sudo kill -TERM ";
sub check_args
{
if($#ARGV != 0)
{
print(STDERR "Wrong arguments\n");
print(STDERR "Usage: ./wicd.pl start|stop\n");
exit();
}
my @aux = grep(/^$ARGV[0]$/, @modes);
if (!@aux)
{
print(STDERR "Unknown argument\n");
print(STDERR "Usage: ./wicd.pl start|stop\n");
exit();
}
$mode = $ARGV[0];
}
check_args();
my @is_wicd_running = `ps -A | grep wicd`;
# START
if ($mode eq $modes[0])
{
if (!@is_wicd_running)
{
system("gksudo ifconfig wlan0 down");
system("sudo macchanger -r wlan0");
system("sudo wicd");
}
my @is_wicd_gui_running = grep(/wicd-client/, @is_wicd_running);
if (!@is_wicd_gui_running)
{
system("gksudo wicd-gtk &");
}
}
# STOP
else
{
for (@is_wicd_running)
{
my #aux = split(/ /, $_);
system("$kill_command$aux[1]");
}
system("sudo ifconfig wlan0 down");
}
The problem is that macchanger and sudo ifconfig wlan0 down are not executing (only those...). The weird thing is that those calls do execute when calling the script through the Perl debugger (perl -d). I thought this could be a timing problem and added some sleep() calls before those calls, but no change. I also tried with system() calls, with no change as well.
EDIT: stranger still, I've found that the script runs properly as perl wicd.pl, while ./wicd.pl does not (it runs, but has the problem described above). I've attached the whole script. The Perl interpreter in the shebang header is the same one that which perl returns.
Any clues? Thanks in advance!
More information may help, along with ensuring that a \n always ends the output line. Try running your commands within
sub runx
{
    my $x = 0;
    foreach my $cmd ( @_ )
    {
        my $out = qx($cmd 2>&1);   # capture stdout and stderr together
        $x = $?;
        $out =~ s/\s*$/\n/s;       # ensure the output ends with a single newline
        printf "%s (0x%x):\n%s", $cmd, $x, $out;
        last if $x;                # stop at the first failing command
    }
    return $x;
}
No time to run this code this morning, and I can't delete my prior comment. But sometimes running a which command can also assure your command is on the PATH.
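Along those lines, a quick check from the shell (the command names are taken from the question's script) can confirm that everything resolves on the PATH the script actually sees:
# verify each external command the Perl script shells out to is on PATH
for cmd in gksudo sudo macchanger wicd ifconfig; do
    which "$cmd" >/dev/null || echo "missing: $cmd" >&2
done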