I want to start my VirtualBox VM after I log in. To do that, I have to check whether it's already running or not; if not, then start the VM.
My problem is that I cannot put the result of the VirtualBox command into a variable in Bash.
I've created a function to get the exit status, but I get a "Syntax error: Invalid parameter '|'" error message.
How can I make it work?
This is my Bash script:
########## Starting om-server VirtualBox ##############
function Run_VM {
"$#"
local status=$?
if [ $status -eq 0 ]; then
echo "Starting om-server VM!";
VBoxManage startvm "om-server"
sleep 30
ssh root@192.168.1.111
else
echo "om-server VM is running!"
fi
return $status
}
check_vm_status="VBoxManage showvminfo \"om-server\" | grep -c \"running (since\""
Run_VM $check_vm_status
########## Starting om-server VirtualBox ##############
In order to do what you want, you would have to use command substitution:
check_vm_status="$(VBoxManage showvminfo \"om-server\" | grep -c \"running (since\")"
The caveat is that the command will be executed at the time the variable is assigned, not when the variable is later used.
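For example, the following (using date purely as a stand-in for the VBoxManage pipeline) shows that the command runs once, at assignment time:
now="$(date +%s)"      # date runs here, when the variable is assigned
sleep 2
echo "$now"            # prints the value captured two seconds earlier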
If you want to execute your instructions only inside your function, you could use eval:
function Run_VM {
eval "$#"
local status=$?
if [ $status -eq 0 ]; then
echo "Starting om-server VM!";
VBoxManage startvm "om-server"
sleep 30
ssh root@192.168.1.111
else
echo "om-server VM is running!"
fi
return $status
}
check_vm_status="VBoxManage showvminfo \"om-server\" | grep -c \"running (since\""
Run_VM $check_vm_status
Note that eval brings a lot of issues (quoting, word splitting, and arbitrary code execution), so use it with care.
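If you prefer to avoid eval entirely, here is a minimal sketch (keeping the VM name and SSH address from the question) that runs the check up front with command substitution and passes only the resulting count to the function; grep -c prints 0 when the VM is not running:
function Run_VM {
    local running_count=$1
    if [ "${running_count}" -eq 0 ]; then
        echo "Starting om-server VM!"
        VBoxManage startvm "om-server"
        sleep 30
        ssh root@192.168.1.111
    else
        echo "om-server VM is running!"
    fi
}
check_vm_status="$(VBoxManage showvminfo "om-server" | grep -c "running (since")"
Run_VM "${check_vm_status}"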
I know there are lots of discussions about it, but I need your help with SSH remote command exit codes. I have this code:
(scan is a script which scans for viruses in the given file)
for i in $FILES
do
RET_CODE=$(ssh $SSH_OPT $HOST "scan $i; echo $?")
if [ $? -eq 0 ]; then
SOME_CODE
The scan works and returns either 0, 1 for errors, or 2 if a virus is found. But somehow my return code is always 0, even if I scan a virus.
Here is set -x output:
++ ssh -i /home/USER/.ssh/id host 'scan Downloads/eicar.com; echo 0'
+ RET_CODE='File Downloads/eicar.com: VIRUS: Virus found.
code of the Eicar-Test-Signature virus
0'
Here is the output if I run those commands on the "remote" machine without ssh:
[user#ws ~]$ scan eicar.com; echo $?
File eicar.com: VIRUS: Virus found.
code of the Eicar-Test-Signature virus
2
I just want to have the return code; I don't need all the other output of scan.
!UPDATE!
It seems like echo is the problem.
The reason your ssh is always returning 0 is that the final echo command always succeeds! If you want to get the return code from scan, either remove the echo or assign it to a variable and use exit. On my system:
$ ssh host 'false'
$ echo $?
1
$ ssh host 'false; echo $?'
1
$ echo $?
0
$ ssh host 'false; ret=$?; echo $ret; exit $ret'
1
$ echo $?
1
ssh returns the exit status of the entire pipeline that it runs - in this case, that's the exit status of echo $?.
What you want to do is simply use the ssh result directly (since you say that you don't want any of the output):
for i in $FILES
do
if ssh $SSH_OPT $HOST "scan $i >/dev/null 2>&1"
then
SOME_CODE
If you really feel you must print the return code, you can do that without affecting the overall result by using an EXIT trap:
for i in $FILES
do
if ssh $SSH_OPT $HOST "trap 'echo \$?' EXIT; scan $i >/dev/null 2>&1"
then
SOME_CODE
Demo:
$ ssh $host "trap 'echo \$?' EXIT; true"; echo $?
0
0
$ ssh $host "trap 'echo \$?' EXIT; false"; echo $?
1
1
BTW, I recommend you avoid uppercase variable names in your scripts - those are normally used for environment variables that change the behaviour of programs.
I have a BBB-based custom embedded Linux board with the BusyBox shell (ash).
I have a situation where my script must run in the background, with the following conditions:
There must be only one instance of the script.
The wrapper script needs to know whether the script started successfully in the background or not.
There is another wrapper script which starts and stops my script; the wrapper script is shown below.
#!/bin/sh
export PATH=/bin:/sbin:/usr/bin:/usr/sbin
readonly TEST_SCRIPT_PATH="/home/testscript.sh"
readonly TEST_SCRIPT_LOCK_PATH="/var/run/${TEST_SCRIPT_PATH##*/}.lock"
start_test_script()
{
local pid_of_testscript=0
local status=0
#Run test script in background
"${TEST_SCRIPT_PATH}" &
#---------Now When this point is hit, lock file must be created.-----
if [ -f "${TEST_SCRIPT_LOCK_PATH}" ];then
pid_of_testscript=$(head -n1 ${TEST_SCRIPT_LOCK_PATH})
if [ -n "${pid_of_testscript}" ];then
kill -0 ${pid_of_testscript} > /dev/null 2>&1 || status="${?}"
if [ ${status} -ne 0 ];then
echo "Error starting testscript"
else
echo "testscript start successfully"
fi
else
echo "Error starting testscript.sh"
fi
fi
}
stop_test_script()
{
local pid_of_testscript=0
local status=0
if [ -f "${TEST_SCRIPT_LOCK_PATH}" ];then
pid_of_testscript=$(head -n1 ${TEST_SCRIPT_LOCK_PATH})
if [ -n "${pid_of_testscript}" ];then
kill -0 ${pid_of_testscript} > /dev/null 2>&1 || status="${?}"
if [ ${status} -ne 0 ];then
echo "testscript not running"
rm "${TEST_SCRIPT_LOCK_PATH}"
else
#send SIGTERM signal
kill -SIGTERM "${pid_of_testscript}"
fi
fi
fi
}
#Script starts from here.
case ${1} in
'start')
start_test_script
;;
'stop')
stop_test_script
;;
*)
echo "Usage: ${0} [start|stop]"
exit 1
;;
esac
Now the actual script, testscript.sh, looks something like this:
#!/bin/sh
#Filename : testscript.sh
export PATH=/bin:/sbin:/usr/bin:/usr/sbin
set -eu
LOCK_FILE="/var/run/${0##*/}.lock"
FLOCK_CMD="/bin/flock"
FLOCK_ID=200
eval "exec ${FLOCK_ID}>>${LOCK_FILE}"
"${FLOCK_CMD}" -n "${FLOCK_ID}" || exit 0
echo "${$}" > "${LOCK_FILE}"
# >>>>>>>>>>-----Now run the code in background---<<<<<<
handle_sigterm()
{
# cleanup
"${FLOCK_CMD}" -u "${FLOCK_ID}"
if [ -f "${LOCK_FILE}" ];then
rm "${LOCK_FILE}"
fi
}
trap handle_sigterm SIGTERM
while true
do
echo "do something"
sleep 10
done
Now, in the above script, at the point marked "---Now run the code in background---", I am sure that either the lock file has been created successfully or an instance of this script is already running. So I can safely run the rest of the code in the background, and the wrapper script can check the lock file to find out whether the process mentioned in it is running or not.
Can a shell script put itself into the background?
If not, is there a better way to meet all the conditions?
I think you can look into the job control built-ins, specifically bg.
Job Control Commands
When processes say they background themselves, what they actually do is fork and exit the parent. You can do the same by running whichever commands, functions or statements you want with & and then exiting.
#!/bin/sh
echo "This runs in the foreground"
sleep 3
while true
do
sleep 10
echo "doing background things"
done &
I have a sample sh script in my Linux environment which basically runs ssh-agent for the current shell, adds a key to it, and runs two git commands:
#!/bin/bash
eval "$(ssh-agent -s)"
ssh-add /home/duvdevan/.ssh/id_rsa
git -C /var/www/duvdevan/ reset --hard origin/master
git -C /var/www/duvdevan/ pull origin master
The script actually works fine, but every time I run it I get a new process, so I think it might become a performance issue and I might end up with useless processes lying around.
An example of the output:
Agent pid 12109
Identity added: /home/duvdevan/.ssh/custom_rsa (rsa w/o comment)
Also, along with all this, is it possible to find an existing ssh-agent process and add my keys into it?
No, really, how to check if ssh-agent is already running in bash?
Answers so far don't appear to answer the original question...
Here's what works for me:
if ps -p $SSH_AGENT_PID > /dev/null
then
echo "ssh-agent is already running"
# Do something knowing the pid exists, i.e. the process with $PID is running
else
eval `ssh-agent -s`
fi
This was taken from here
Also, along with all this, is it possible to find an existing ssh-agent process and add my keys into it?
Yes. We can store the connection info in a file:
# Ensure agent is running
ssh-add -l &>/dev/null
if [ "$?" == 2 ]; then
# Could not open a connection to your authentication agent.
# Load stored agent connection info.
test -r ~/.ssh-agent && \
eval "$(<~/.ssh-agent)" >/dev/null
ssh-add -l &>/dev/null
if [ "$?" == 2 ]; then
# Start agent and store agent connection info.
(umask 066; ssh-agent > ~/.ssh-agent)
eval "$(<~/.ssh-agent)" >/dev/null
fi
fi
# Load identities
ssh-add -l &>/dev/null
if [ "$?" == 1 ]; then
# The agent has no identities.
# Time to add one.
ssh-add -t 4h
fi
This code is from "pitfalls of ssh agents", which describes both the pitfalls of what you're currently doing and of this approach, and how you can use ssh-ident to do this for you.
If you only want to run ssh-agent if it's not running and do nothing otherwise:
if [ $(ps ax | grep [s]sh-agent | wc -l) -gt 0 ] ; then
echo "ssh-agent is already running"
else
eval $(ssh-agent -s)
if [ "$(ssh-add -l)" == "The agent has no identities." ] ; then
ssh-add ~/.ssh/id_rsa
fi
# Don't leave extra agents around: kill it on exit. You may not want this part.
trap "ssh-agent -k" exit
fi
However, this doesn't ensure ssh-agent will be accessible (just because it's running doesn't mean the agent's environment variables, such as $SSH_AGENT_PID and $SSH_AUTH_SOCK, are set so that ssh-add can connect to it).
If you want it to be killed right after the script exits, you can just add this after the eval line:
trap "kill $SSH_AGENT_PID" exit
Or:
trap "ssh-agent -k" exit
$SSH_AGENT_PID gets set in the eval of ssh-agent -s.
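For reference, the output that eval consumes looks roughly like this (socket path and PIDs will differ):
SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.12345; export SSH_AUTH_SOCK;
SSH_AGENT_PID=12346; export SSH_AGENT_PID;
echo Agent pid 12346;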
You should be able to find running ssh-agents by scanning through /tmp/ssh-* and reconstruct the SSH_AGENT variables from it (SSH_AUTH_SOCK and SSH_AGENT_PID).
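A rough sketch of that idea (it assumes the default /tmp/ssh-XXXXXX/agent.<pid> socket layout and uses the ssh-add exit codes: 2 means no agent reachable, 0 or 1 means an agent answered):
for sock in /tmp/ssh-*/agent.*; do
    [ -S "$sock" ] || continue                # skip anything that is not a socket
    SSH_AUTH_SOCK="$sock" ssh-add -l >/dev/null 2>&1
    if [ $? -ne 2 ]; then                     # the agent behind this socket answered
        export SSH_AUTH_SOCK="$sock"
        break
    fi
done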
ps -p $SSH_AGENT_PID > /dev/null || eval "$(ssh-agent -s)"
A single-line command. The first time it runs, it starts ssh-agent; the second time, it will not start another one. Simple and elegant!
Checking $SSH_AGENT_PID only tests whether ssh-agent is running; it misses the case where no identities have been added yet:
$ eval `ssh-agent`
Agent pid 9906
$ echo $SSH_AGENT_PID
9906
$ ssh-add -l
The agent has no identities.
So it is safer to check with ssh-add -l, combined with an expect script like the example below:
$ eval `ssh-agent -k`
Agent pid 9906 killed
$ ssh-add -l
Could not open a connection to your authentication agent.
$ ssh-add -l &>/dev/null
$ [[ "$?" == 2 ]] && eval `ssh-agent`
Agent pid 9547
$ ssh-add -l &>/dev/null
$ [[ "$?" == 1 ]] && expect $HOME/.ssh/agent
spawn ssh-add /home/user/.ssh/id_rsa
Enter passphrase for /home/user/.ssh/id_rsa:
Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)
$ ssh-add -l
4096 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX /home/user/.ssh/id_rsa (RSA)
So when both ssh-agent and ssh-add -l are put into a bash script:
#!/bin/bash
ssh-add -l &>/dev/null
[[ "$?" == 2 ]] && eval `ssh-agent`
ssh-add -l &>/dev/null
[[ "$?" == 1 ]] && expect $HOME/.ssh/agent
then it will always check and ensure that the agent connection is running:
$ ssh-add -l
4096 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX /home/user/.ssh/id_rsa (RSA)
You can also repeat those commands in a while loop, as sketched below.
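A hedged sketch of that idea (it reuses the $HOME/.ssh/agent expect script from the answer above):
while true; do
    ssh-add -l >/dev/null 2>&1
    rc=$?
    case $rc in
        0) break ;;                     # agent reachable and it has identities
        2) eval `ssh-agent` ;;          # no agent reachable: start one
        1) expect $HOME/.ssh/agent ;;   # agent running but no identities: add the key
    esac
done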
The accepted answer did not work for me under Ubuntu 14.04.
The test I have to use to check whether ssh-agent is running is:
[[ ! -z ${SSH_AGENT_PID+x} ]]
And I am starting the ssh-agent with:
exec ssh-agent bash
Otherwise the SSH_AGENT_PID is not set.
The following seems to work under both Ubuntu 14.04 and 18.04.
#!/bin/bash
sshkey=id_rsa
# Check ssh-agent
if [[ ! -z ${SSH_AGENT_PID+x} ]]
then
echo "[OK] ssh-agent is already running with pid: "${SSH_AGENT_PID}
else
echo "Starting new ssh-agent..."
`exec ssh-agent bash`
echo "Started agent with pid: "${SSH_AGENT_PID}
fi
# Check ssh-key
if [[ $(ssh-add -L | grep ${sshkey} | wc -l) -gt 0 ]]
then
echo "[OK] SSH key already added to ssh-agent"
else
echo "Need to add SSH key to ssh-agent..."
# This should prompt for your passphrase
ssh-add ~/.ssh/${sshkey}
fi
Thanks to all the answers here. I've used this thread a few times over the years to tweak my approach. Wanted to share my current ssh-agent.sh checker/launcher script that works for me on Linux and OSX.
The following block is my $HOME/.bash.d/ssh-agent.sh
function check_ssh_agent() {
if [ -f $HOME/.ssh-agent ]; then
source $HOME/.ssh-agent > /dev/null
else
# no agent file
return 1
fi
if [[ ${OSTYPE//[0-9.]/} == 'darwin' ]]; then
ps -p $SSH_AGENT_PID > /dev/null
# gotcha: does not verify the PID is actually an ssh-agent
# just that the PID is running
return $?
fi
if [ -d /proc/$SSH_AGENT_PID/ ]; then
# verify PID dir is actually an agent
grep ssh-agent /proc/$SSH_AGENT_PID/cmdline > /dev/null 2> /dev/null;
if [ $? -eq 0 ]; then
# yep - that is an agent
return 0
else
# nope - that is something else reusing the PID
return 1
fi
else
# agent PID dir does not exist - dead agent
return 1
fi
}
function launch_ssh_agent() {
ssh-agent > $HOME/.ssh-agent
source $HOME/.ssh-agent
# load up all the pub keys
for I in $HOME/.ssh/*.pub ; do
echo adding ${I/.pub/}
ssh-add ${I/.pub/}
done
}
check_ssh_agent
if [ $? -eq 1 ];then
launch_ssh_agent
fi
I launch the above from my .bashrc using:
if [ -d $HOME/.bash.d ]; then
for I in $HOME/.bash.d/*.sh; do
source $I
done
fi
Hope this helps others get up and going quickly.
Created a public gist if you want to hack/improve this with me: https://gist.github.com/dayne/a97a258b487ed4d5e9777b61917f0a72
cat /usr/local/bin/ssh-agent-pro << 'EOF'
#!/usr/bin/env bash
SSH_AUTH_CONST_SOCK="/var/run/ssh-agent.sock"
if [[ x$(wc -w <<< $(pidof ssh-agent)) != x1 ]] || [[ ! -e ${SSH_AUTH_CONST_SOCK} ]]; then
kill -9 $(pidof ssh-agent) 2>/dev/null
rm -rf ${SSH_AUTH_CONST_SOCK}
ssh-agent -s -a ${SSH_AUTH_CONST_SOCK} 1>/dev/null
fi
echo "export SSH_AUTH_SOCK=${SSH_AUTH_CONST_SOCK}"
echo "export SSH_AGENT_PID=$(pidof ssh-agent)"
EOF
echo "eval \$(/usr/local/bin/ssh-agent-pro)" >> /etc/profile
. /etc/profile
Then you can run ssh-add xxxx once, and you can use the same ssh-agent every time you log in.
I've noticed that having a running agent is not enough, because sometimes the SSH_AUTH_SOCK variable is unset or points to a socket file that does not exist anymore.
Therefore, to connect to an already running ssh-agent on your machine, you can do this:
$ pgrep -u $USER -n ssh-agent -a
1906647 ssh-agent -s
$ ssh-add -l
Could not open a connection to your authentication agent.
$ test -z "$SSH_AGENT_PID" && export SSH_AGENT_PID=$(pgrep -u $USER -n ssh-agent)
$ test -z "$SSH_AUTH_SOCK" && export SSH_AUTH_SOCK=$(ls /tmp/ssh-*/agent.$(($SSH_AGENT_PID-1)))
$ ssh-add -l
The agent has no identities.
Regarding finding running ssh-agents, previous answers either don't work or rely on a magic file like $HOME/.ssh_agent. These approaches require us to believe that the user never runs agents without saving their output to this file.
My approach instead relies on the rarely changed default UNIX domain socket path template to find an accessible ssh-agent among the available candidates.
# (Paste the below code to your ~/.bash_profile and ~/.bashrc files)
C=$SSH_AUTH_SOCK
R=n/a
unset SSH_AUTH_SOCK
for s in $(ls $C /tmp/ssh-*/agent.* 2>/dev/null | sort -u) ; do
if SSH_AUTH_SOCK=$s ssh-add -l >/dev/null ; then R=$? ; else R=$? ; fi
case "$R" in
0|1) export SSH_AUTH_SOCK=$s ; break ;;
esac
done
if ! test -S "$SSH_AUTH_SOCK" ; then
eval $(ssh-agent -s)
unset SSH_AGENT_PID
R=1
fi
echo "Using $SSH_AUTH_SOCK"
if test "$R" = "1" ; then
ssh-add
fi
In this approach, SSH_AGENT_PID remains unknown, since it is hard to deduce for non-root users. I assume it is actually not required, since users don't normally want to stop agents. On my system, setting SSH_AUTH_SOCK is enough to communicate with the agent, e.g. for passwordless authentication.
The code should work with any Bourne-compatible shell.
You can modify line #1 to:
PID_SSH_AGENT=`eval ssh-agent -s | grep -Po "(?<=pid\ ).*(?=\;)"`
And then at the end of the script you can do:
kill -9 $PID_SSH_AGENT
I made this bash function to count and return the number of running ssh-agent processes. It searches for ssh-agent processes via procfs instead of using ps -p $SSH_AGENT_PID or $SSH_AUTH_SOCK (those environment variables can still hold stale values after the ssh-agent process has been killed, e.g. if you ran ssh-agent -k or $(ssh-agent -k) instead of eval $(ssh-agent -k)).
function count_agent_procfs(){
declare -a agent_list=( )
for folders in $(ls -d /proc/*[[:digit:]] | grep -v /proc/1$);do
fichier="${folders}/stat"
pid=${folders/\/proc\//}
[[ -f ${fichier} ]] && [[ $(cat ${fichier} | cut -d " " -f2) == "(ssh-agent)" ]] && agent_list+=(${pid})
done
return ${#agent_list[@]}
}
...and then, if there are many ssh-agent processes running, you can get their PIDs from the "${agent_list[@]}" list.
A very simple command to check how many ssh-agent (or any other program's) processes are running: pidof ssh-agent
or:
pgrep ssh-agent
And a very simple command to kill all ssh-agent (or any program's) processes:
kill $(pidof ssh-agent)
I have a file named "hosts.txt" listing the IP addresses of two systems.
~] cat hosts.txt
10.1.1.10
10.1.1.20
Using the script below, I am trying to log in to each system, check the status of a service, and print the output for each system. The script prompts for login, but it does not continue to execute the /opt/agent.sh status command. Can someone please help fix this script?
#!/bin/bash
for HOST in `cat hosts.txt`
do
ssh root@$HOST
STATUS=`/opt/agent.sh status | awk 'NR==1{print $3 $4}'`
echo $STATUS
if [ $STATUS! == "isrunning" ]; then
echo "$host == FAIL"
else
echo "$host == PASS"
fi
Your script does not continue until the ssh command completes, which does not happen until the interactive shell on $HOST that you started with ssh exits. Instead, you want to execute a script on $HOST.
(Also, note the correct way to iterate over the contents of hosts.txt.)
#!/bin/bash
while read HOST; do
    if ssh root@$HOST '
        STATUS=$(/opt/agent.sh status | awk "NR==1{print \$3 \$4}")
        [ "$STATUS" = "isrunning" ]
    '; then
        echo "$HOST == PASS"
    else
        echo "$HOST == FAIL"
    fi
done < hosts.txt
The remote script simply exits with the result of comparing $STATUS to "isrunning". An if statement on the local host then outputs a string based on that result (which is the exit status of the ssh command itself). This saves the trouble of having to pass the value of $HOST to the remote host, simplifying the quoting required for the remote script.
I would like to check if a certain file exists on the remote host.
I tried this:
$ if [ ssh user@localhost -p 19999 -e /home/user/Dropbox/path/Research_and_Development/Puffer_and_Traps/Repeaters_Network/UBC_LOGS/log1349544129.tar.bz2 ] then echo "okidoke"; else "not okay!" fi
-sh: syntax error: unexpected "else" (expecting "then")
In addition to the answers above, there's the shorthand way to do it:
ssh -q $HOST [[ -f $FILE_PATH ]] && echo "File exists" || echo "File does not exist";
-q is quiet mode, it will suppress warnings and messages.
As @Mat mentioned, one advantage of testing like this is that you can easily swap out the -f for any test operator you like: -nt, -d, -s etc...
Test Operators: http://tldp.org/LDP/abs/html/fto.html
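For example (a hypothetical check, using a made-up $DIR_PATH, swapping -f for -d to test for a directory instead of a regular file):
ssh -q $HOST "[[ -d $DIR_PATH ]]" && echo "Directory exists" || echo "Directory does not exist"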
Here is a simple approach:
#!/bin/bash
USE_IP='-o StrictHostKeyChecking=no username@192.168.1.2'
FILE_NAME=/home/user/file.txt
SSH_PASS='sshpass -p password-for-remote-machine'
if $SSH_PASS ssh $USE_IP stat $FILE_NAME \> /dev/null 2\>\&1
then
echo "File exists"
else
echo "File does not exist"
fi
You need to install sshpass on your machine for this to work.
Can't get much simpler than this :)
ssh host "test -e /path/to/file"
if [ $? -eq 0 ]; then
# your file exists
fi
As suggested by dimo414, this can be collapsed to:
if ssh host "test -e /path/to/file"; then
# your file exists
fi
one line, proper quoting
ssh remote_host test -f "/path/to/file" && echo found || echo not found
You're missing ;s. The general syntax if you put it all in one line would be:
if thing ; then ... ; else ... ; fi
The thing can be pretty much anything that returns an exit code. The then branch is taken if that thing returns 0, the else branch otherwise.
[ isn't syntax, it's the test program (check out ls /bin/[, it actually exists; see man test for the docs, though your shell may also have a built-in version with different/additional features), which is used to test various common conditions on files and variables. (Note that [[, on the other hand, is syntax and is handled by your shell, if it supports it.)
For your case, you don't want to use test directly, you want to test something on the remote host. So try something like:
if ssh user#host test -e "$file" ; then ... ; else ... ; fi
Test if a file exists:
HOST="example.com"
FILE="/path/to/file"
if ssh $HOST "test -e $FILE"; then
echo "File exists."
else
echo "File does not exist."
fi
And the opposite, test if a file does not exist:
HOST="example.com"
FILE="/path/to/file"
if ! ssh $HOST "test -e $FILE"; then
echo "File does not exist."
else
echo "File exists."
fi
ssh -q $HOST [[ -f $FILE_PATH ]] && echo "File exists"
The above will run the echo command on the machine you're running the ssh command from. To get the remote server to run the command:
ssh -q $HOST "[[ ! -f $FILE_PATH ]] && touch $FILE_PATH"
Silently check whether the file exists, and act only if it does not:
if ! ssh $USER#$HOST "test -e file.txt" 2> /dev/null; then
echo "File not exist"
fi
You can specify the shell to be used by the remote host locally.
echo 'echo "Bash version: ${BASH_VERSION}"' | ssh -q localhost bash
And be careful to (single-)quote the variables you wish to be expanded by the remote host; otherwise variable expansion will be done by your local shell!
# example for local / remote variable expansion
{
echo "[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'" |
ssh -q localhost bash
echo '[[ $- == *i* ]] && echo "Interactive" || echo "Not interactive"' |
ssh -q localhost bash
}
So, to check if a certain file exists on the remote host you can do the following:
host='localhost' # localhost as test case
file='~/.bash_history'
if `echo 'test -f '"${file}"' && exit 0 || exit 1' | ssh -q "${host}" sh`; then
#if `echo '[[ -f '"${file}"' ]] && exit 0 || exit 1' | ssh -q "${host}" bash`; then
echo exists
else
echo does not exist
fi
I also wanted to check whether a remote file exists, but with RSH. I tried the previous solutions, but they didn't work with RSH.
Finally, I wrote a short function which works fine:
function existRemoteFile ()
{
REMOTE=$1
FILE=$2
RESULT=$(rsh -l user $REMOTE "test -e $FILE && echo \"0\" || echo \"1\"")
if [ $RESULT -eq 0 ]
then
return 0
else
return 1
fi
}
On a CentOS machine, the bash one-liner that worked for me was:
if ssh <servername> "stat <filename> > /dev/null 2>&1"; then echo "file exists"; else echo "file does not exist"; fi
It needed I/O redirection (as in the top answer), as well as quotes around the command to be run on the remote host.
This also works:
if ssh user@ip "[ -s /path/file_name ]" ;then
status=RECEIVED ;
else
status=MISSING ;
fi
# It's simple:
if [[ "`ssh -q user@hostname ls /dir/filename.abc 2>/dev/null`" == "/dir/filename.abc" ]]
then
echo "file exists"
else
echo "file not exists"
fi