passing arguments to a node script coming from stdin - node.js

overview
I'd like to pass arguments to a node script coming from stdin.
generally, I'm shooting for something like this
cat nodeScript.js | node {{--attach-args??}} --verbose --dry-run
that would act the same as
node nodeScript.js --verbose --dry-run
more detail
here's a boiled down script for illustration, dumpargs.js
console.log("the arguments you passed in were");
console.log(process.argv);
console.log("");
so you could then:
node dumpargs.js --verbose --dry-run file.txt
the arguments you passed in were
[ 'node',
  '/home/bill-murray/Documents/dumpargs.js',
  '--verbose',
  '--dry-run',
  'file.txt' ]
now the question, if that script comes in across stdin (say, via cat or curl)
cat dumpargs.js | node
the arguments you passed in were
[ 'node' ]
is there a good way to pass arguments to it?
not node: with bash, using dumpargs.sh this time
echo "the arguments you passed in were"
printf "> $#"
echo
the answer would look like
cat dumpargs.sh | bash -s - "--verbose --dry-run file.txt"
the arguments you passed in were
> --verbose --dry-run file.txt

There is a specific syntax for this use case. The doc says:
- Alias for stdin, analogous to the use of - in other command line utilities, meaning
that the script will be read from stdin, and the rest of the options are passed to
that script.
-- Indicate the end of node options. Pass the rest of the arguments to the script.
If no script filename or eval/print script is supplied prior to this, then the next
argument will be used as a script filename.
So just do the following:
$ cat script.js | node - args1 args2 ...
For example, this will return "hello world":
$ echo "console.log(process.argv[2], process.argv[3])" | node - hello world

This isn't pretty, but works.
The call to node is going to launch the REPL, so your problem should be equivalent to setting / using argv manually from the terminal. Try doing something like:
// argv.js
process.argv[1] = 'asdf';
process.argv[2] = '1234';
and doing cat argv.js dumpargs.js | node.
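Assuming the dumpargs.js from the question, the concatenated stream would then print something along these lines (the exact first entry depends on how node was invoked):
cat argv.js dumpargs.js | node
the arguments you passed in were
[ 'node', 'asdf', '1234' ]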

Related

How do I pipe input to a Node.js program that uses readline-sync?

I have a very simple Node.js program that uses readline-sync to accept input, then echo it to the console:
const readlineSync = require('readline-sync');
const input = readlineSync.prompt();
console.log(input);
It works fine as an interactive program; however, when I try to pipe input to it (in either Git Bash or PowerShell), I get a Node.js error:
PS> echo "1.2" | node .\index.js
Windows PowerShell[35552]: c:\ws\src\node_file.cc:1631: Assertion `(argc) == (5)' failed.
Adding a #!/usr/bin/env node shebang and running it as a script with echo "1.2" | .\script.js produces the same error.
Is there a configuration option or something that I'm missing that allows readline-sync to read input from a pipe? Is there something wrong with how I'm running it in the shell? Any advice would be appreciated.
It is most probably a compatibility issue between the package and the node version you are using. Check whether all of the dependencies are compatible with your node version.
I think your program is taking its input as an argument, not from stdin.
When you use '|', the input is given to the program on stdin, not as an argument.
To convert the stdin coming out of '|' into an input argument, you can use xargs on Linux.
Try the following on Linux/bash to see if it works:
echo "1.2" | xargs ./script.js
To give an example, consider the echo command, which takes input only as arguments, not from stdin:
# the following command does not print anything:
echo boo | echo
# but when xargs is used after | , It displays the output :
echo boo | xargs echo
boo

Internal Variable PIPESTATUS

I am new to Linux and bash scripting and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit status of the individual commands in a pipe.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put this code in a bash script it shows only one exit status. My default SHELL on the command line is bash.
Could somebody help me understand why this behaviour changes, and what I should do to get this to work in the script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e " find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]} > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string for a command (see Bash FAQ 050 for more on why doing this is a bad idea); a minimal sketch follows after these two options.
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
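For the first option, here is a minimal sketch of the question's script with eval and the command strings dropped, so the pipeline runs directly in the current shell and PIPESTATUS is read immediately afterwards (output goes to stdout here rather than $cmdfile, to keep the sketch short):
#!/bin/bash
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
# a plain three-command pipeline; find's stderr is discarded, roughly matching the original 2>/dev/null
find_fun 2>/dev/null | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz"
# read PIPESTATUS right away, before any other command overwrites it
echo -e "find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}"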
Instead of ${PIPESTATUS[0]} use ${PIPESTATUS[@]}
As with any array in bash, PIPESTATUS[0] contains only the first command's exit status. If you want all of them, you have to use PIPESTATUS[@], which returns the whole contents of the array.
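For example, run directly in bash:
false | true | false
echo "${PIPESTATUS[@]}"    # prints: 1 0 1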
I'm not sure why it worked for you when you tried it in the command line. I tested it and I didn't get the same result as you.

How to preserve quotes in a bash parameter?

I have a bash script (Mac OS X) that in turns calls a Node.js command line application.
I normally call the Node.js app like this:
node mynodeapp events:"Open project"
Which node has no problem parsing as one parameter, in spite of the space between "Open" and "project".
I call my bash script like this:
. mybashscript.sh 2014-03-20 "Open project"
Inside the bash script I have:
EVENTSQUOTES=\"$2\"
echo node mixpanel-extract date:$1 events:$EVENTSQUOTES
node mixpanel-extract date:$1 events:$EVENTSQUOTES
Running the script produces:
node mixpanel-extract date:2014-03-20 events:"Open project"
Parameters: { date: '2014-03-20',
events: [ '"Open' ] }
So although the echo output line looks fine, the Parameters: output from my Node.js app tells me that bash splits the parameter in two. I've also tried wrapping it in more quotes e.g. EVENTSQUOTES='\"$2\"' but it makes no difference.
You need to use quotes when calling it as well:
node mixpanel-extract date:"$1" events:"$EVENTSQUOTES"
echo node mixpanel-extract "date:$1" "events:$2"
node mixpanel-extract "date:$1" "events:$2"
You need to quote the variable when you use it as well, otherwise word splitting will occur.
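A quick way to see the difference is to print each resulting argument on its own line (same two positional parameters as above):
# inside mybashscript.sh, with $1=2014-03-20 and $2="Open project"
printf '<%s>\n' date:$1 events:$2          # unquoted: the space splits events into two arguments
printf '<%s>\n' "date:$1" "events:$2"      # quoted: exactly two arguments, space preserved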

Bash config file or command line parameters

If I am writing a bash script and I choose to use a config file for parameters, can I still pass in parameters via the command line? I guess I'm asking, can I do both on the same command?
The watered down code:
#!/bin/bash
source builder.conf
function xmitBuildFile {
for IP in "{SERVER_LIST[#]}"
do
echo $1@$IP
done
}
xmitBuildFile
builder.conf:
SERVER_LIST=( 192.168.2.119 10.20.205.67 )
$bash> ./builder.sh myname
My expected output should be myname@192.168.2.119 and myname@10.20.205.67, but when I do an echo $#, I am getting 0, even though I passed in 'myname' on the command line.
Assuming the "config file" is just a piece of shell sourced into the main script (usually containing definitions of some variables), like this:
. /etc/script.conf
of course you can use the positional parameters anywhere (before or after ". /etc/..."):
echo "$#"
test -n "$1" && ...
you can even define them in the script or in the very same config file:
test $# = 0 && set -- a b c
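For instance, a minimal sketch of the question's builder.sh with the loop written at top level, so that $1 is unambiguously the script's own first argument (assuming builder.conf defines SERVER_LIST as shown above):
#!/bin/bash
source builder.conf                  # defines SERVER_LIST
for IP in "${SERVER_LIST[@]}"
do
    echo "$1@$IP"                    # $1 is still the script's command-line argument here
done
Running ./builder.sh myname would then print myname@192.168.2.119 and myname@10.20.205.67.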
Yes, you can. Furthermore, it depends on the architecture of your script. You can overwrite parameters with values from the config and vice versa.
By the way, shflags may be pretty useful in writing such a script.

How can I run a function from a script in command line?

I have a script that has some functions.
Can I run one of the functions directly from the command line?
Something like this?
myScript.sh func()
Well, while the other answers are right - you can certainly do something else: if you have access to the bash script, you can modify it, and simply place at the end the special parameter "$@" - which expands to the arguments of the command line you specify; since it stands alone, the shell will try to run them verbatim, so you can give the function name as the first argument. Example:
$ cat test.sh
testA() {
echo "TEST A $1";
}
testB() {
echo "TEST B $2";
}
"$#"
$ bash test.sh
$ bash test.sh testA
TEST A
$ bash test.sh testA arg1 arg2
TEST A arg1
$ bash test.sh testB arg1 arg2
TEST B arg2
For polish, you can first verify that the command exists and is a function:
# Check if the function exists (bash specific)
if declare -f "$1" > /dev/null
then
# call arguments verbatim
"$#"
else
# Show a helpful error
echo "'$1' is not a known function name" >&2
exit 1
fi
If the script only defines the functions and does nothing else, you can first execute the script within the context of the current shell using the source or . command and then simply call the function. See help source for more information.
The following command first registers the function in the context, then calls it:
. ./myScript.sh && function_name
Briefly, no.
You can import all of the functions in the script into your environment with source (help source for details), which will then allow you to call them. This also has the effect of executing the script, so take care.
There is no way to call a function from a shell script as if it were a shared library.
Using case
#!/bin/bash
fun1 () {
echo "run function1"
[[ "$#" ]] && echo "options: $#"
}
fun2 () {
echo "run function2"
[[ "$#" ]] && echo "options: $#"
}
case $1 in
fun1) "$#"; exit;;
fun2) "$#"; exit;;
esac
fun1
fun2
This script will run functions fun1 and fun2, but if you start it with the argument fun1 or fun2 it will only run the given function with its args (if provided) and exit.
Usage
$ ./test
run function1
run function2
$ ./test fun2 a b c
run function2
options: a b c
I have a situation where I need a function from a bash script which must not be executed beforehand (e.g. by source), and the problem with $@ is that myScript.sh is then run twice, it seems... So I've come up with the idea to extract the function with sed:
sed -n "/^func ()/,/^}/p" myScript.sh
And to execute it at the time I need it, I put it in a file and use source:
sed -n "/^func ()/,/^}/p" myScript.sh > func.sh; source func.sh; rm func.sh
Edit: WARNING - seems this doesn't work in all cases, but works well on many public scripts.
If you have a bash script called "control" and inside it you have a function called "build":
function build() {
...
}
Then you can call it like this (from the directory where it is):
./control build
If it's inside another folder, that would make it:
another_folder/control build
If your file is called "control.sh", that would accordingly make the function callable like this:
./control.sh build
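Note that this presumes control itself ends with some dispatcher, for example the bare "$@" line from the earlier answer. A minimal sketch:
#!/bin/bash
# control - minimal sketch; the trailing "$@" runs whatever function name is passed on the command line
function build() {
    echo "building..."    # placeholder body for illustration
}
"$@"
With that in place, ./control build runs the build function.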
Solved post but I'd like to mention my preferred solution. Namely, define a generic one-liner script eval_func.sh:
#!/bin/bash
source $1 && shift && "#a"
Then call any function within any script via:
./eval_func.sh <any script> <any function> <any args>...
An issue I ran into with the accepted solution is that when sourcing my function-containing script within another script, the arguments of the latter would be evaluated by the former, causing an error.
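For a concrete (hypothetical) example, given a funcs.sh that only defines functions:
# funcs.sh
greet() {
    echo "hello $1"
}
./eval_func.sh funcs.sh greet world    # would print: hello world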
The other answers here are nice, and much appreciated, but often I don't want to source the script in the session (which reads and executes the file in your current shell) or modify it directly.
I find it more convenient to write a one or two line 'bootstrap' file and run that. Makes testing the main script easier, doesn't have side effects on your shell session, and as a bonus you can load things that simulate other environments for testing. Example...
# breakfast.sh
make_donuts() {
echo 'donuts!'
}
make_bagels() {
echo 'bagels!'
}
# bootstrap.sh
source 'breakfast.sh'
make_donuts
Now just run ./bootstrap.sh. The same idea works with your python, ruby, or whatever scripts.
Why useful? Let's say you complicated your life for some reason, and your script may find itself in different environments with different states present. For example, either your terminal session, or a cloud provider's cool new thing. You also want to test cloud things in terminal, using simple methods. No worries, your bootstrap can load elementary state for you.
# breakfast.sh
# Now it has to do slightly different things
# depending on where the script lives!
make_donuts() {
if [[ $AWS_ENV_VAR ]]
then
echo '/donuts'
elif [[ $AZURE_ENV_VAR ]]
then
echo '\donuts'
else
echo '/keto_diet'
fi
}
If you let your bootstrap thing take an argument, you can load different state for your function to chew, still with one line in the shell session:
# bootstrap.sh
source 'breakfast.sh'
case $1 in
AWS)
AWS_ENV_VAR="arn::mumbo:jumbo:12345"
;;
AZURE)
AZURE_ENV_VAR="cloud::woo:_impress"
;;
esac
make_donuts # You could use $2 here to name the function you wanna, but careful if evaluating directly.
In terminal session you're just entering:
./bootstrap.sh AWS
Result:
# /donuts
You can call a function from a command-line argument like below:
function irfan() {
echo "Irfan khan"
date
hostname
}
function config() {
ifconfig
echo "hey"
}
$1
Once you have defined the functions, put $1 at the end to accept as an argument the name of the function you want to call.
Let's say the above code is saved in fun.sh. Now you can call the functions as ./fun.sh irfan and ./fun.sh config from the command line.
