What does the Linux test -a command test for?

Please take a look at the following code,
$ snap=snapshot.file
$ touch snapshot.file-1
$ [ -a $snap-1 ] && echo yes
yes
What does the test -a command test for here?
I tried info coreutils 'test invocation' and searched for -a, but I found it only in the connectives for test section, not among the file characteristic tests.
Is this use of test -a undocumented?

Binary -a is an AND connective; you would usually use it between two expressions:
[ -e "$snap0" -a -e "$snap1" ]
Here, though, -a has only one operand. In bash's test builtin, unary -a file is a synonym for -e file (kept for compatibility): it is true if the file exists, which is why the example prints yes after snapshot.file-1 is created. It is documented in the bash man page under CONDITIONAL EXPRESSIONS rather than in the coreutils manual, because [ here is the shell builtin, not /usr/bin/test.
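A quick demonstration of both forms in bash (a sketch; the file names are illustrative):
$ touch f1 f2
$ [ -a f1 ] && echo 'unary -a: f1 exists'        # unary -a, same as -e
unary -a: f1 exists
$ [ -e f1 -a -e f2 ] && echo 'binary -a: both exist'
binary -a: both exist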


Most efficient if statement in .zshrc to check whether Linux OS is running on WSL?

In my .zshrc file I conditionally set my PATH variable depending on whether I'm running on Linux or macOS - I'm now trying to figure out if there's a way I can efficiently detect from my .zshrc if I'm working on Linux running on WSL.
I'm wondering if I can somehow check for the existence of /mnt/c/Program Files or similar - but figure there must be a better way?
Example of my current .zshrc:
PATH="/usr/local/sbin:$PATH"
if ! [[ "$OSTYPE" == "darwin"* ]]; then
export PATH="$HOME/.nodenv/bin:$HOME/.rbenv/bin:$PATH"
fi
eval "$(rbenv init -)"
eval "$(nodenv init -)"
PATH="$HOME/.bin:$PATH"
if [[ "$OSTYPE" == "darwin"* ]]; then
export ANDROID_SDK_ROOT="$HOME/Library/Android/sdk"
export PATH="$PATH:$ANDROID_SDK_ROOT/tools:$ANDROID_SDK_ROOT/tools/bin:$ANDROID_SDK_ROOT/platform-tools:$ANDROID_SDK_ROOT/build-tools:$ANDROID_SDK_ROOT/tools/lib/x86_64"
export PATH="$PATH:/usr/local/share/dotnet"
fi
If anyone has any better ideas than somehow checking for the existence of /mnt/c/Program Files I'd very much appreciate it!
There are many possible ways to check for WSL in any shell. The most reliable are:
From uname -r command output.
From /proc/version file.
From /proc/sys/kernel/osrelease file.
#!/bin/bash
# -i: WSL1 kernels report "Microsoft", WSL2 kernels report lowercase "microsoft"
if uname -r | grep -q -i 'microsoft'; then
    echo True
fi
if grep -q -i 'microsoft' /proc/version; then
    echo True
fi
if grep -q -i 'microsoft' /proc/sys/kernel/osrelease; then
    echo True
fi
You can also check for files that exist only under WSL and not on regular GNU/Linux distributions, for example:
1. /dev/lxss
2. /bin/wslpath
3. /sbin/mount.drvfs
4. /proc/sys/fs/binfmt_misc/WSLInterop
5. /etc/wsl.conf
A sketch that probes these paths follows the links below.
See more:
screenFetch
neofetch
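For example, a minimal sketch that probes the WSL-only paths listed above (any hit suggests WSL):
#!/bin/bash
for f in /dev/lxss /bin/wslpath /sbin/mount.drvfs \
         /proc/sys/fs/binfmt_misc/WSLInterop /etc/wsl.conf; do
    if [ -e "$f" ]; then
        echo "WSL marker found: $f"
    fi
done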
In WSL, there is a special file for checking interoperability, /proc/sys/fs/binfmt_misc/WSLInterop, which exists only on WSL. You can check for it with the following command:
#!/bin/bash
if [ -f /proc/sys/fs/binfmt_misc/WSLInterop ]; then
echo True
fi
or as a simpler one-liner (in bash):
[ -f /proc/sys/fs/binfmt_misc/WSLInterop ]
This will return exit code 0 if true, exit code 1 if false.
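For instance, used inline:
[ -f /proc/sys/fs/binfmt_misc/WSLInterop ] && echo 'WSL' || echo 'not WSL'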
Thanks to Biswapiryo's comment - I came up with this solution to detect WSL:
if [[ $(uname -r) =~ Microsoft$ ]]; then
# Code goes here
fi
Short/current answer:
To detect either WSL1 or WSL2, you can use a modified version of @MichaelSmith's answer:
#!/bin/zsh
setopt extendedglob   # the (#s)/(#e) anchors need EXTENDED_GLOB
if [[ $(uname -r) == (#s)*[mM]icrosoft*(#e) ]]; then
echo test
fi
More detail:
When this question was originally asked, only WSL1 existed, and uname -r would return something like:
4.4.0-22000-Microsoft
This is not a "real" kernel in WSL1, but just the number/name that Microsoft chooses to provide in response to that particular syscall. The 22000, in this case, is the Windows build number, which currently corresponds to the WSL release. Note that this is the case even in the current WSL Preview in the Microsoft Store, even though it is decoupled from the Windows release.
With WSL2, however, Microsoft provides a real Linux kernel, which returns something like:
5.10.102.1-microsoft-standard-WSL2
Earlier versions may have left off the -WSL2 portion.
Of course, if you build your own WSL2 kernel, you should update the test to match the kernel name you provide.
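Putting this together for the original .zshrc question, here is a minimal sketch. The case-insensitive character class covers both the WSL1 ("Microsoft") and WSL2 ("microsoft") kernel strings; the PATH entries are just placeholders taken from the question:
# in .zshrc
if [[ "$(uname -r)" == *[mM]icrosoft* ]]; then
    # running on WSL (1 or 2)
    export PATH="$HOME/.nodenv/bin:$HOME/.rbenv/bin:$PATH"
fi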

How to bypass IncrediBuild console if not installed?

There are some bash scripts running on our build machines where IncrediBuild is installed. The idea of IncrediBuild is to utilize all cores available in the network to speed up the build process.
Example shell script:
...
ib_console cmake --build .
The shell script shall not be changed. I would like to run the script on machines without ib_console. Is there a possibility to somehow simulate it or forward the call to cmake?
Place this into your .bashrc file:
if ! command -v ib_console >/dev/null 2>&1; then
    alias ib_console=''
fi
Explanation:
command -v checks whether an ib_console executable exists in the PATH (the original -f test against whereis output was unreliable, since whereis prefixes its output with the program name)
the empty alias makes ib_console cmake --build . expand to plain cmake --build ., so the remaining arguments are executed standalone
@alex: the alias works from the shell, but not in the shell script above, since aliases are not expanded in non-interactive shells.
With Why doesn't my Bash script recognize aliases? I could fix it:
if ! command -v ib_console >/dev/null 2>&1; then
    ib_console()
    {
        echo "ib_console dummy: run '$@'..."
        "$@"
        echo "ib_console dummy: done"
    }
    export -f ib_console
fi
It is recommended to run exec bash after updating .bashrc so the change takes effect.
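A quick sanity check of the wrapper from a non-interactive context (a hypothetical session, assuming ib_console is not installed and the function above has been exported):
$ bash -c 'ib_console echo hello'
ib_console dummy: run 'echo hello'...
hello
ib_console dummy: done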

How to run command during Docker build which requires a tty?

I have some script I need to run during a Docker build which requires a tty (which Docker does not provide during a build). Under the hood the script uses the read command. With a tty, I can do things like (echo yes; echo no) | myscript.sh.
Without it I get strange errors I don't completely understand. So is there any way to use this script during the build (given that it's not mine to modify)?
EDIT: Here's a more definite example of the error:
FROM ubuntu:14.04
RUN echo yes | read
which fails with:
Step 0 : FROM ubuntu:14.04
---> 826544226fdc
Step 1 : RUN echo yes | read
---> Running in 4d49fd03b38b
/bin/sh: 1: read: arg count
The command '/bin/sh -c echo yes | read' returned a non-zero code: 2
From the RUN <command> section of the Dockerfile reference:
shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows
Let's see what exactly /bin/sh is in ubuntu:14.04:
$ docker run -it --rm ubuntu:14.04 bash
root@7bdcaf403396:/# ls -n /bin/sh
lrwxrwxrwx 1 0 0 4 Feb 19 2014 /bin/sh -> dash
/bin/sh is a symbolic link to dash; see the read builtin in dash:
$ man dash
...
read [-p prompt] [-r] variable [...]
The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line
is read from the standard input. The trailing newline is deleted from the line and the line is split as
described in the section on word splitting above, and the pieces are assigned to the variables in order.
At least one variable must be specified. If there are more pieces than variables, the remaining pieces
(along with the characters in IFS that separated them) are assigned to the last variable. If there are
more variables than pieces, the remaining variables are assigned the null string. The read builtin will
indicate success unless EOF is encountered on input, in which case failure is returned.
By default, unless the -r option is specified, the backslash ``\'' acts as an escape character, causing
the following character to be treated literally. If a backslash is followed by a newline, the backslash
and the newline will be deleted.
...
The key requirement for read in dash:
At least one variable must be specified.
Now let's see the read builtin in bash:
$ man bash
...
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name...]
If no names are supplied, the line read is assigned to the variable REPLY. The return code is zero,
unless end-of-file is encountered, read times out (in which case the return code is greater than
128), or an invalid file descriptor is supplied as the argument to -u.
...
So I guess your script myscript.sh starts with #!/bin/bash or something else, but not /bin/sh.
Also, you can change your Dockerfile like below:
FROM ubuntu:14.04
RUN echo yes | read ENV_NAME
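Alternatively, since bash's read falls back to the REPLY variable when no name is supplied, you could switch the build shell to bash with the SHELL instruction (a sketch; it assumes bash is present in the image, which holds for ubuntu:14.04):
FROM ubuntu:14.04
SHELL ["/bin/bash", "-c"]
RUN echo yes | read    # bash assigns the line to REPLY, so this succeeds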
Links:
https://docs.docker.com/engine/reference/builder/
http://linux.die.net/man/1/dash
http://linux.die.net/man/1/bash
Short answer: you can't do this directly, because neither docker build nor buildx implements /dev/tty or /dev/console. There is a hacky workaround that achieves what you need, but I strongly discourage using it, since it breaks the CI model; that is presumably why Docker doesn't implement it.
Hacky solution
FROM ubuntu:14.04
RUN echo yes | read    # command that requires a tty
As mentioned in the Docker reference documentation, RUN consists of two stages: first the command is executed, and then the result is committed to the image as a new layer. So you can perform the stages manually, providing a tty for the execution stage and then committing the result.
Code:
cd
cat > tty_wrapper.sh << 'EOF'
echo yes | read    ## your command which needs a tty
rm /home/tty_wrapper.sh
EOF
docker run --interactive --tty --detach --privileged --name name1 ubuntu:14.04
docker cp tty_wrapper.sh name1:/home/
docker exec name1 bash -c "cd /home && chmod +x tty_wrapper.sh && ./tty_wrapper.sh"
docker commit name1 your:tag
Your new image is ready.
Here is a description of the code. First we create a bash script that wraps the tty-requiring command and removes itself after its first execution. Then we run a container with a tty provided (you can drop --privileged if you don't need it). Next we copy the wrapper script into the container and perform the execution and commit stages ourselves.
You don't need a tty for feeding data to your script. Just doing something like (echo yes; echo no) | myscript.sh, as you suggested, will do. Also, please make sure you copy the file into the image before trying to execute it, with something like COPY myscript.sh myscript.sh.
Most likely you don't need a tty. As the comment on the question shows, even the example provided is a situation where the read command was not properly called. A tty would turn the build into an interactive terminal process, which doesn't translate well to automated builds that may be run from tools without terminals.
If you need a tty, there is the C library call openpty that you would use when forking a process that includes a pseudo-tty. You may be able to solve your problem with a tool like expect, but it's been so long that I don't remember whether it creates a pty or not. Alternatively, if your application can't be built automatically, you can manually perform the steps in a running container and then docker commit the resulting container to make an image.
I'd recommend against any of those and to work out the procedure to build your application and install it in a non-interactive fashion. Depending on the application, it may be easier to modify the installer itself.
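A related shell-level trick worth mentioning (a sketch of my own, not from the answers above; it assumes the script utility from util-linux is present in the base image): script allocates a pseudo-terminal for the command it runs, which may be enough when the build step merely insists on having a terminal:
# run the tty-requiring step under a pseudo-terminal allocated by script(1)
# -q: quiet, -e: return the child's exit code, -c: command to run
RUN script -qec "./myscript.sh" /dev/null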

SSH bash script to test if java process is running?

I need to create an SSH Bash script (on Debian Linux) to test whether a java process is running.
Here is how it should look:
IF the java process is not running THEN run ./start.sh
To test whether the java process is running, I can use this test:
ps -A | grep java
This script should run every minute (I guess via cron).
Regards
First of all, to run a job every minute in cron, your crontab should look like this:
* * * * * /path/to/script.sh
Next, you have a few different options for detecting a Java process.
Note that each of the following is a negation: they detect the absence of Java. The command substitutions are quoted so that multiple matching PIDs don't break the test.
With pgrep:
if [ ! "$(pgrep java)" ]; then
    # no java running
fi
With pidof:
if [ ! "$(pidof java)" ]; then
    # no java running
fi
With ps and grep:
if [ ! "$(ps -A | grep java)" ]; then
    # no java running
fi
Of these, pgrep and pidof are probably the most efficient. Don't quote me on that, though.
The check you are doing with ps and grep doesn't look very precise. What if other Java processes are running? You may detect those and come to a wrong conclusion, because you are checking for "any" Java, not one specific Java process.
With pidof, it would be something like this:
script.sh
pidof java
if [ $? -ne 0 ]
then
    # here put your code for when the exit code of `pidof` wasn't 0,
    # meaning it didn't find the process
    # for example: /home/user/start.sh
    # (please don't forget to use full paths if you want to use it in cron)
fi
Especially for haters:
man pidof:
EXIT STATUS
0 At least one program was found with the requested name.
1 No program was found with the requested name.
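Putting the pieces together, a minimal cron-ready sketch (the paths /home/user/start.sh and /home/user/check_java.sh are illustrative):
#!/bin/bash
# check_java.sh -- start the service if no java process is found
if ! pidof java > /dev/null; then
    /home/user/start.sh
fi
with the crontab entry to run it every minute:
* * * * * /home/user/check_java.sh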

How to properly escape a qsub command with long input args within an ssh command?

I have a complex qsub command to run remotely.
PROJECT_NAME_TEXT="TEST PROJECT"
PACK_ORGANIZATION="--source-organization \'MY, ORGANIZATION\'"
CONTACT_NAME="--contact-name \'Tom Riddle\'"
PROJECT_NAME_PACK="--project-name \"${PROJECT_NAME_TEXT}\""
INPUTARGS="${PACK_ORGANIZATION} ${CONTACT_NAME} ${PROJECT_NAME_PACK}"
ssh mycluster "qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
The problem is that the remote cluster doesn't recognise the qsub command: it either reports an incorrect qsub command, or the job simply stays queued on the cluster because the input args are wrong.
It must be an escaping problem. My question is: how do I escape the command above properly?
Try doing this using a here-doc: you have a quote conflict (nested double quotes, which is an error):
#!/bin/bash
PROJECT_NAME_TEXT="TEST PROJECT"
PACK_ORGANIZATION="--source-organization \'MY, ORGANIZATION\'"
CONTACT_NAME="--contact-name \'Tom Riddle\'"
PROJECT_NAME_PACK="--project-name \"${PROJECT_NAME_TEXT}\""
INPUTARGS="${PACK_ORGANIZATION} ${CONTACT_NAME} ${PROJECT_NAME_PACK}"
ssh mycluster <<EOF
qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script
EOF
As you can see, here-docs are really helpful for inputs with quotes.
See man bash | less +/'Here Documents'
Edit
from your comments :
I used this method but it gives me "Pseudo-terminal will not be allocated because stdin is not a terminal."
You can ignore this warning with
ssh mycluster <<EOF 2>/dev/null
(try the -t switch for ssh if needed)
If you have
-bash: line 2: EOF: command not found
I think you have a copy paste problem. Try to remove extra spaces on all end lines
And it seems this method cannot pass local variable $INPUTARGS to the remote cluster
it seems related to your EOF problem.
$argv returns nothing on remote cluster
What does this mean? $argv is not a pre-defined variable in bash. If you need to list the command-line arguments, use the pre-defined parameter "$@".
Last thing : ensure you are using bash
Your problem is not the length, but the nesting of your quotes - in this line, you are trying to use " inside ", which won't work:
ssh mycluster "qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
Bash will see this as "qsub -v argv=" followed by $INPUTARGS (not quoted), followed by " -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script".
It's possible that backslash-escaping those inner quotes will have the desired effect, but nesting quotes in bash can get rather confusing. What I often try to do is add an echo at the beginning of the command, to show how the various stages of expansion pan out. e.g.
echo 'As expanded locally:'
echo ssh mycluster "qsub -v argv=\"$INPUTARGS\" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
echo 'As expanded remotely:'
ssh mycluster "echo qsub -v argv=\"$INPUTARGS\" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
Thanks for all the answers; however, those methods did not work in my case. I'm answering this myself since the problem is fairly complex; I got the clues from existing solutions on Stack Overflow.
Two problems had to be solved in my case:
Pass the local program's parameters to the remote cluster. The here-doc solution doesn't work for this.
Run qsub on the remote cluster with a long variable as arguments that contain quote symbols.
Problem 1.
Firstly, I have to introduce the script that runs on the local machine; it takes parameters like this:
scripttoberunoncluster.py --source-organisation "My_organization_my_department" --project-name "MyProjectName" --processes 4 /targetoutputfolder/
The real parameter list is far longer than the above, so all the parameters must be sent to the remote side. They are sent in a file, like this:
PROJECT_NAME="MyProjectName"
PACK_ORGANIZATION="--source-organization '\\\"My_organization_my_department\\\"'" # multiple layers of escaping, remove all the spaces
PROJECT_NAME_PACK="--project-name '\\\"${PROJECT_NAME}\\\"'"
PROCESSES="--processes 4"
TARGET_FOLDER_PACK="/targetoutputfolder/"
INPUTARGS="${PACK_ORGANIZATION} ${PROJECT_NAME_PACK} ${PROCESSES} ${TARGET_FOLDER_PACK}"
echo $INPUTARGS > "TempPath/temp.par"
scp "TempPath/temp.par" "remotecluster:/remotepath/"
My solution is something of a compromise, but this way the remote cluster can run the script with arguments that contain quote symbols. If you don't put all your variables (as parameters) into a file and transfer it to the remote cluster, then no matter how you pass them, the quote symbols will be stripped.
Problem 2.
This is how qsub is run on the remote cluster:
ssh remotecluster "qsub -v argv=\"`cat /remotepath/temp.par`\" -l walltime=10:00:00 /remotepath/my.script"
And in my.script:
INPUT_ARGS=`echo $argv`
python "/pythonprogramlocation/scripttoberunoncluster.py" $INPUT_ARGS ; #note: $INPUT_ARGS hasn't quote
The described escaping problem consists in the requirement to preserve the final quotes around the arguments after two evaluation passes, i.e. after two evaluations we should see something like:
--source-organization "My_organization_my_department" --project-name "MyProjectName" --processes 4 /targetoutputfolder/
This can be achieved in code by first putting each argument into a separate variable and then enclosing the argument in single quotes, while making sure that any single quotes inside the argument string get "escaped" with '\'' (in fact, the argument is split up into separate strings, but when used, the split-up argument automatically gets re-concatenated by the string evaluation mechanism of POSIX shells). This procedure has to be repeated three times.
{
escsquote="'\''"
PROJECT_NAME="MyProjectName"
myorg="My_organization_my_department"
myorg="'${myorg//\'/${escsquote}}'" # bash
myorg="'${myorg//\'/${escsquote}}'"
myorg="'${myorg//\'/${escsquote}}'"
PACK_ORGANIZATION="--source-organization ${myorg}"
pnp="${PROJECT_NAME}"
pnp="'${pnp//\'/${escsquote}}'"
pnp="'${pnp//\'/${escsquote}}'"
pnp="'${pnp//\'/${escsquote}}'"
PROJECT_NAME_PACK="--project-name ${pnp}"
PROCESSES="--processes 4"
TARGET_FOLDER_PACK="/targetoutputfolder/"
INPUTARGS="${PACK_ORGANIZATION} ${PROJECT_NAME_PACK} ${PROCESSES} ${TARGET_FOLDER_PACK}"
echo "$INPUTARGS"
eval echo "$INPUTARGS"
eval eval echo "$INPUTARGS"
echo
ssh -T localhost <<EOF
echo qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script
EOF
}
For further information please see:
Quotes exercise - how to do ssh inside ssh whilst running sql inside second ssh?
Quoting in ssh $host $FOO and ssh $host "sudo su user -c $FOO" type constructs.
