Error while using -N option with qsub - linux

I tried to use qsub -N "compile-$*" in a Makefile, and it gives the following error because the job name expands to "compile-obj/linux/flow" in this case:
qsub: ERROR! argument to -N option must not contain /
The whole command I am using is:
qsub -P bnormal -N "compile-obj/linux/flow" -cwd -now no -b y -l cputype=amd64 -sync yes -S /bin/sh -e /remote//qsub_files/ -o /remote/qsub_files/
Any idea how to include a slash in the job name while running qsub?
Thanks

I'm not familiar with qsub, but make just executes whatever command you supply, so I suspect you constructed an illegal qsub command.
Maybe the Automatic Variables section of the GNU make manual can help you too.
Adding the whole rule to your question would help.

I resolved the problem by manipulating the name passed to the -N option, replacing / with -. That works for me. Thanks.
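A minimal sketch of that workaround, assuming GNU make; the other qsub flags are just carried over from the question:
qsub -P bnormal -N "compile-$(subst /,-,$*)" -cwd -now no -b y -l cputype=amd64 -sync yes -S /bin/sh -e /remote//qsub_files/ -o /remote/qsub_files/
Here $(subst /,-,$*) rewrites a stem like obj/linux/flow to obj-linux-flow before it ever reaches -N.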

Related

Passing attributes to chef via command line

This is driving me nuts, any help is massively appreciated.
I'm currently using a recipe to run an ssh command, where the command takes in args and then uses them.
The escaping of the string quotes is quite literally driving me insane; please help me SO, you're my only hope. :D
This is the literal string that I need for my ssh:
ssh -i /home/ec2-user/.ssh/Test-Key.pem -o StrictHostKeyChecking=no ec2-user@ipAddress echo '{\"attr\":\"value\"}' | sudo chef-client -o solr-restart -j /dev/stdin
It's wrapped in a command within the recipe like so:
command "ssh -i /home/ec2-user/.ssh/Test-Key.pem -o StrictHostKeyChecking=no ec2-user@ipAddress echo '{\"attr\":\"value\"}' | sudo chef-client -o solr-restart -j /dev/stdin"
No matter how I try to manipulate the string, I cannot get the output to be correct; it either removes the escaped characters in the JSON or adds additional ones.
I've tried to echo '#{madness}'
where madness = '{\"portAttribute\":\"'+"#{portNumber}"+'\"}'
but still no luck. Thanks for any help.
IMHO your string interpolation looks fine, but since you want to run the following command on the remote machine:
echo '{\"portAttribute\":\"#{portNumber}\"}' | sudo chef-client -o solr-restart -j /dev/stdin
the command should be tweaked a bit more and passed in the recipe as:
command "ssh -i /home/ec2-user/.ssh/Test-Key.pem -o StrictHostKeyChecking=no ec2-user@ipAddress 'echo \'{\\\"portAttribute\\\":\\\"#{portNumber}\\\"}\' | sudo chef-client -o solr-restart -j /dev/stdin' "
This works: {\\\"attr\\\":\\\"value\\\"}'
You reeeeeeally probably don't mean to be using -j; that totally overwrites whatever data is on the node already and is only intended for initial bootstrapping. After that, you don't pass data in on the command line; it comes from the Chef Server.

Passing arguments to a script invoked with bash -c

I'm testing a Bash script I created on GitHub for behavioral correctness (e.g. that it parses options correctly). I want to do this without having to clone the repository locally, so here is how I'm doing it:
curl -sSL https://github.com/jamesqo/gid/raw/master/gid | xargs -0 bash -c
My question is, how can I pass arguments to the script in question? I tried bash -c --help, but that didn't work since it got interpreted as part of the script.
Thanks!
You’re actually over-complicating things by using xargs with Bash’s -c option.
Download the script directly
You don’t need to clone the repository to run the script. Just download it directly:
curl -o gid https://raw.githubusercontent.com/jamesqo/gid/master/gid
Now that it’s downloaded as gid, you can run it as a Bash script, e.g.,
bash gid --help
You can also make the downloaded script executable in order to run it as a regular Unix script file (using its shebang, #!/bin/bash):
chmod +x gid
./gid --help
Use process substitution
If you wanted to run the script without actually saving it to a file, you could use Bash process substitution:
bash <(curl -sSL https://github.com/jamesqo/gid/raw/master/gid) --help
I'll echo Anthony's comments: it makes a lot more sense to download the script and execute it directly. But if you're really set on using the -c option of bash, it's a little bit complicated. The problem is that when you do:
something | xargs -0 bash -c
there's no opportunity to pass any arguments. They all get swallowed as the argument to -c - it essentially gets turned into:
bash -c "$(something)"
so if you place anything after the -c in the xargs invocation, it ends up before the script text, because xargs appends its input at the end. There is no opportunity to put anything after the script, as xargs doesn't let you.
If you want to pass arguments, you have to use the replacement-string option for xargs, which lets you control where the piped input is placed. The option is -I <item>, and the next thing to realize is that the first argument after the script text becomes $0, so you have to do:
something | xargs -0 -I # bash -c # something <arg1> <arg2>…
I can emulate this with:
echo 'echo hi: ~$0~ ~$1~ ~$2~ ~$3~' | xargs -0 -I # bash -c # something one two three four
which yields:
hi: ~something~ ~one~ ~two~ ~three~
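To make the $0 behaviour concrete, here is a minimal sketch of how bash -c assigns positional parameters on its own, without xargs (the names are placeholders):
bash -c 'echo "name: $0, first: $1, second: $2"' myscript --help --verbose
which prints name: myscript, first: --help, second: --verbose.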

Does qsub pass command line arguments to my script?

When I submit a job using
qsub script.sh
is $# set to some value inside script.sh? That is, are any command line arguments passed to script.sh?
You can pass arguments to the job script using the -F option of qsub:
qsub script.sh -F "args to script"
or inside script.sh:
#PBS -F arguments
This is documented here.
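A minimal sketch of how that looks, with hypothetical argument values; inside the script they arrive as the usual positional parameters:
qsub -F "alpha beta" script.sh
and then in script.sh, $1 is alpha and $2 is beta.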
On my platform the -F option is not available. As a substitute, -v helped:
qsub -v "var=value" script.csh
And then use the variable var in your script.
See also the documentation.
No. Just tried to submit a script with arguments before I answered and qsub won't accept it.
This won't be as convenient as putting arguments on the command line, but you could possibly set some environment variables which you can have Torque export to the job with -v [var name] or -V.
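A minimal sketch of that environment-variable approach, assuming Torque/PBS and a hypothetical variable name and path:
qsub -v INPUT_FILE=/data/sample.txt script.sh
and inside script.sh the value is available as an ordinary environment variable:
echo "processing $INPUT_FILE"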

Need to redirect output to /dev/null... works fine on the command line but not in a shell script

I need to write and execute some commands in a bash file and ignore the error output.
Example
pvs --noheadings -o pv_name,vg_name,vg_size 2> /dev/null
The above command works great on the command line, but when I put the same line in a shell script, it gives me errors
like
Failed to read physical volume "2>"
Failed to read physical volume "/dev/null"
I guess it treats them as part of the whole command. Can you please give me some suggestions on how to rectify this?
Thanks in advance.
FULL CODE:
#------------------------------
main() {
  pv_cmd='pvs'
  nh='--noheadings'
  sp=' '
  op='-o'
  vgn='vg_name'
  pvn='pv_name'
  pvz='pv_size'
  cm=','
  tonull=' 2 > /dev/null '
  pipe='|'
  #cmd=$pv_cmd$sp$nh$sp$op$sp$vgn$cm$pvn$cm$pvz$sp$pipe$tonull #line A
  cmd='pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null' #line B
  echo -n "Cmd="
  echo $cmd
  $cmd
}
main
#-----------------------------------------------------
If you look at lines A and B, both versions are there, although one is commented out.
You can't include the 2> /dev/null inside the quoted string. Redirections are recognized when the command line is parsed, before $cmd is expanded, so redirection operators that come out of an expansion are treated as ordinary arguments. You'll have to do
cmd='pvs --noheadings -o vg_name,pv_name,pv_size'
$cmd 2> /dev/null
for redirection to work properly.
The way you did it, 2> and /dev/null are parsed as arguments. But you want 2> /dev/null to be interpreted as bash code, not as program arguments, so
instead of
$cmd
you should
eval $cmd
That is how things work.
Or, if the echo thing is for debugging, you can just set -o xtrace before the command and set +o xtrace after it, and write the command the normal way instead of stuffing it into a string.
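A minimal sketch of that debugging approach, keeping the command as plain code rather than a string:
main() {
  set -o xtrace    # print each command before it runs
  pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null
  set +o xtrace
}
main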
I think what's going on is that there is some character inside the line that is either not visible to us, or the > is a different character than it appears to be. After all, the shell should swallow the redirect before the command gets to see it, but here the command is seeing 2> and /dev/null as [PhysicalVolume [PhysicalVolume...]] arguments. Alternatively, the redirection could be passed quoted (so it loses its special meaning to the shell and gets passed on); see chepner's answer.
tonull=' 2 > /dev/null '
is the issue, exactly as chepner guessed.
Eliminate the space between 2 and >:
pvs --noheadings -o pv_name,vg_name,vg_size 2>/dev/null

Why are commands executed in backquotes giving me different results when run in a script?

I have a script, meant to be run from cron, that ensures that a daemon I wrote is running. The contents of the script file are similar to the following:
daemon_pid=`ps -A | grep -c fsdaemon`
echo "daemon_pid: " $daemon_pid
if [ $daemon_pid -eq 0 ]; then
echo "restarting fsdaemon"
/etc/init.d/fsdaemon start
fi
When I execute this script from the command prompt, the line that echoes the value of $daemon_pid reports a value of 2, and it is 2 regardless of whether my daemon is running or not. If, however, I run the backquoted command by hand at the prompt and then examine $daemon_pid, its value is 1. I have also tried single-stepping through the script using bashdb and, when I examine the variables using that tool, they are what they should be.
My question therefore is: why is there a difference in behaviour between when the script is executed by the shell and when the commands in the script are executed manually? I'm sure that there is something very fundamental that I am missing.
You're very likely encountering the grep as part of the 'answer' from ps.
To help fully understand what is happening, turn off the -c option to see what data is being returned from just ps -A | grep fsdaemon.
To solve the issue, some systems have a p(rocess) grep (pgrep). That will work, or
ps -A | grep -v grep | grep -c fsdaemon
is a common idiom you will see, but at the expense of another process.
The cleanest solution is:
ps -A | grep -c '[f]sdaemon'
The bracket expression still matches fsdaemon in other processes' entries, but the grep process's own argument is the literal string [f]sdaemon, which the pattern does not match, so grep never counts itself. The regular expression syntax should work with all greps, on all systems.
I hope this helps.
The problem is that grep itself shows up... Try running this command with anything after grep -c:
eple:~ erik$ ps -a | grep -c asdfladsf
1
eple:~ erik$ ps -a | grep -c gooblygoolbygookeydookey
1
eple:~ erik$
What does ps -a | grep fsdaemon return? Just look at the processes actually listed... :)
Since this is Linux, why not try pgrep? This saves you a pipe, and you don't end up with grep reporting back the daemon script itself as running.
Any process whose arguments include that name will add to the count: the grep, and your script.
ps-ing for a process isn't really reliable; you should use a lock file.
As several people have pointed out already, your process count is inflated because ps | grep detects (1) the script itself and (2) the subprocess created by the backquotes, which inherits the name of the main script. So an easy solution is to change the name of the script to something that doesn't include the name you're looking for. But you can do better.
The "best-practice" solution that I would suggest is to use the facilities provided by your operating system. It's not uncommon for an init script to create a PID file as part of the process of starting your daemon; in other words, instead of just running the daemon itself, you use a wrapper script that starts the daemon and then writes the process ID to a file somewhere. If start-stop-daemon exists on your system (and I think it's fairly common these days), you can use that like so:
start-stop-daemon --start --quiet --background \
--make-pidfile --pidfile /var/run/fsdaemon.pid -- /usr/bin/fsdaemon
(obviously replace the path /usr/bin/fsdaemon as appropriate) to start it, and then
start-stop-daemon --stop --quiet --pidfile /var/run/fsdaemon.pid
to stop it. start-stop-daemon has other options that might be useful to you, which you can investigate by reading the man page.
If you don't have access to start-stop-daemon, you can write a wrapper script to do basically the same thing, something like this to start:
echo "$$" > /var/run/fsdaemon.pid
exec /usr/bin/fsdaemon
and this to stop:
kill $(< /var/run/fsdaemon.pid)
rm /var/run/fsdaemon.pid
(this is pretty crude, of course, but it should normally work).
Anyway, once you have the setup to generate a PID file, whether by using start-stop-daemon or not, you can update your check script to this:
daemon_pid=`ps --no-headers --pid $(< /var/run/fsdaemon.pid) | wc -l`
if [ $daemon_pid -eq 0 ]; then
echo "restarting fsdaemon"
/etc/init.d/fsdaemon restart
fi
(one would think there would be a concise command to check whether a given PID is running, but I don't know it).
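For what it's worth, kill -0 is one concise idiom: it succeeds if a signal could be sent to the process and fails otherwise. A minimal sketch using the same PID file:
if kill -0 "$(< /var/run/fsdaemon.pid)" 2> /dev/null; then
  echo "fsdaemon is running"
else
  echo "restarting fsdaemon"
  /etc/init.d/fsdaemon restart
fi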
If you don't want to (or can't) create a PID file, I would at least suggest pgrep instead of ps | grep, since pgrep will search directly for a process by name and won't find anything that just happens to include the same string.
daemon_pid=`pgrep -x -c fsdaemon`
if [ $daemon_pid -eq 0 ]; then
echo "restarting fsdaemon"
/etc/init.d/fsdaemon restart
fi
The -x means "match exactly", and -c works as with grep.
By the way, it seems a bit misleading to name your variable daemon_pid when it is actually a count.
