Compare string with slice output not working as expected

I have written a program that should continue only if the remote server is reachable; otherwise it should terminate. For that I wrote the logic below, but it does not behave as expected.
Here is the code snippet:
str := "false"
Comd1 := fmt.Sprintf("ping -c 1 %s > /dev/null && echo true || echo false", serIP)
op, err := exec.Command("/bin/sh", "-c", Comd1).Output()
fmt.Println(string(op))
if err != nil || str == string(op) {
    fmt.Println(err)
    return
}
Whenever the server IP is not reachable, execution is expected to enter the if block and terminate the program, but it always skips the condition and continues, which later leads to a panic (expected, since the remote server is not reachable).
Any suggestion on how to compare a string with the []byte output would be highly appreciated.
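A note that is not part of the original thread: echo terminates its output with a newline, so op holds "false\n", which never compares equal to "false". Trimming the output before comparing makes the check behave as intended. A minimal sketch, with serIP standing in for the real server address:
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    serIP := "192.0.2.1" // placeholder; substitute the real server IP
    comd := fmt.Sprintf("ping -c 1 %s > /dev/null && echo true || echo false", serIP)
    op, err := exec.Command("/bin/sh", "-c", comd).Output()
    // echo appends '\n', so trim the output before comparing it with "false".
    if err != nil || strings.TrimSpace(string(op)) == "false" {
        fmt.Println("server not reachable:", err)
        return
    }
    fmt.Println("server reachable, continuing")
}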

How to avoid interrupting loop execution on errors, and record the errors in a variable?

I want to back up MikroTik devices via scp. This script loops through the hosts from hosts.txt and connects to each device on the list, one by one. It makes a backup and does all the manipulations. If at some stage it was not possible to connect to a device, an empty backup is formed, which is then sent to the cloud.
I want to add a check: if it was not possible to connect to a host, write that host into a variable, line by line, and go on to the next device. Afterwards, I will report all the failed connections.
The problem is that only the first error is written to the variable; all subsequent ones are ignored.
Can anyone tell me what's wrong?
#!/bin/bash
readarray -t hosts < hosts.txt
DATE=$(date +'%Y-%m-%d_%H-%M-%S')
ROS='<br>'
ERR=( )
# Get values from main list
for host in ${hosts[*]}
do
    # Get values from sub list
    hostname=($(echo ${host} | tr "_" " "))
    echo ${hostname[0]} - ${hostname[1]}
    # connect & backup & transfer & archive & rm old files & move to cloud
    if ssh backup@${hostname[0]} -C "/system backup save name=${hostname[1]}_$DATE"; then
        scp backup@${hostname[0]}:"${hostname[1]}_$DATE.backup" ./
        ssh backup@${hostname[0]} -C "rm ${hostname[1]}_$DATE.backup"
        tar -czvf ./${hostname[1]}_$DATE.tar.gz ${hostname[1]}_$DATE.backup
        scp ./${hostname[1]}_$DATE.tar.gz my@cloud.com:/var/www/my.cloud.com/backups/mikrotik/
        rm ${hostname[1]}_$DATE.backup ${hostname[1]}_$DATE.tar.gz
        ROS=$ROS${hostname[1]}"<br>"
    else
        ERR+=(${hosts[*]} "is not ready")
    fi
done
hosts.txt:
10.10.8.11_CAP-1
10.10.9.12_CAP-2
10.10.10.13_CAP-3
As I noted in the comments, you're misusing the array notation. Your original line ERR=(${hosts[*]} "is not ready") should be ERR+=(${hosts[*]} "is not ready"), and you should define ERR as an array, not a scalar: ERR=( ) for example, or declare -a ERR. Similarly with ROS.
Here's a test script that avoids all the ssh and scp work to demonstrate that lists of passing and failing hosts work — that the arrays hosts, ROS and ERR are handled correctly.
Note the use of "${ERR[@]}" with double quotes and @ instead of no quotes and *. The difference matters because the values in the array contain spaces. Try the alternatives. Note, too, that printf always prints, even when there is no argument corresponding to the %s in the format string. Hence the check on the number of elements in the array before invoking printf.
#!/bin/bash
# Needs Bash 4.x - Bash 3.2 as found on Macs does not support readarray
# readarray -t hosts < hosts.txt
hosts=( passed-1 failed-2 passed-3 failed-4 passed-5 )
declare -a ERR
declare -a ROS
status=passed
for host in "${hosts[@]}"
do
    if [ "$status" = "passed" ]
    then ROS+=( "$host $status" ); status="failed"
    else ERR+=( "$host $status" ); status="passed"
    fi
done
# Brute force but handles empty lists
for passed in "${ROS[@]}"
do printf "== PASS == [%s]\n" "$passed"
done
for failed in "${ERR[@]}"
do printf "!! FAIL !! [%s]\n" "$failed"
done
# Alternative - better spread over multiple lines each
if [ "${#ROS[@]}" -gt 0 ]; then printf "== PASS == [%s]\n" "${ROS[@]}"; fi
if [ "${#ERR[@]}" -gt 0 ]; then printf "!! FAIL !! [%s]\n" "${ERR[@]}"; fi
Output:
== PASS == [passed-1 passed]
== PASS == [passed-3 passed]
== PASS == [passed-5 passed]
!! FAIL !! [failed-2 failed]
!! FAIL !! [failed-4 failed]
== PASS == [passed-1 passed]
== PASS == [passed-3 passed]
== PASS == [passed-5 passed]
!! FAIL !! [failed-2 failed]
!! FAIL !! [failed-4 failed]
I'm sorry there are so many failures to back up your data!
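One further detail, beyond the original answer: in the question's else branch, ERR+=(${hosts[*]} "is not ready") expands ${hosts[*]} to every host, so each failure appends the entire host list plus the message. To record just the failing host, use ERR+=( "${hostname[0]} is not ready" ).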

Get directory entries for "slow protocols" asynchronously

I want a function for getting directory entries on Linux. I use ioutil.ReadDir and usually it is fast.
But if I want to read some mounted virtual file system on /run/user/1000/gvfs/, this function becomes slow. If the directory has many file entries I need to wait a long time.
I can use the ls command in a terminal and the result is the same.
When I tried ls -U -a -p -1 I got line by line output immediately.
I tried running this in Go with exec.Command, but it didn't work asynchronously. Go is waiting for full program output. What did I do wrong?
m.cmd = exec.Command("ls", "-U", "-a", "-p", "-1")
// for example some "slow" directory:
m.cmd.Dir = "/run/user/1000/gvfs/dav:host=webdav.yandex.ru,ssl=true,user=...../"
reader, _ := m.cmd.StdoutPipe()
bufReader := bufio.NewReader(reader)
go func() {
    m.cmd.Start()
    for {
        line, _, err := bufReader.ReadLine()
        if err != nil {
            break
        }
        linestr := string(line)
        if linestr != "./" && linestr != "../" {
            fmt.Println(linestr)
        }
    }
}()
I need line by line printing immediately in Go.
Try ls -U -a -p -1 | cat to see if you get line-by-line output.
Go doesn't control ls; ls does line-by-line writing if ls chooses to do so, and ls chooses not to do that when its output is a pipe. You could allocate a pty pair and use that, but that's the wrong way to do this.
ioutil.ReadDir first reads the entire directory (by calling Readdir(-1)), then sorts the file names. If you use os.Open to open the directory, then call the Readdir or Readdirnames function with a small (but not negative) number, you should get something more to your liking.
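A minimal sketch of that suggestion, reading the directory in batches of 100 (the path is the example "slow" directory from the question):
package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    dir, err := os.Open("/run/user/1000/gvfs/") // example "slow" directory
    if err != nil {
        log.Fatal(err)
    }
    defer dir.Close()
    for {
        // Read the directory in small batches; each batch of names
        // can be printed as soon as it arrives.
        names, err := dir.Readdirnames(100)
        for _, name := range names {
            fmt.Println(name)
        }
        if err != nil {
            break // io.EOF once the directory is exhausted
        }
    }
}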

How to make RETURN trap in bash preserve the return code?

Below is a simplified scheme of the script I am writing. The program must take parameters in different ways, so it is divided rather finely into several functions.
The problem is that the chain of return values passed up from the deeper functions breaks at the trap, where the result is to be checked in order to show a message.
#! /usr/bin/env bash

check_a_param() {
    [ "$1" = return_ok ] && return 0 || return 3
}

check_params() {
    # This trap should catch negative results from the functions
    # performing actual checks, like check_a_param() below.
    return_trap() {
        local retval=$?
        [ $retval -ne 0 ] && echo 'Bad, bad… Dropping to manual setup.'
        return $retval
    }
    # check_params can be called from different functions, not only
    # setup(). But the other functions don’t care about the return value
    # of check_params().
    [ "${FUNCNAME[1]}" = setup ] \
        && trap "return_trap; got_retval=$?; trap - RETURN; return $got_retval;" RETURN
    check_a_param 'return_bad' || return $?
    # …
    # Here we check the other parameters in the same way.
    # …
    echo 'Provided parameters are valid.'
    return 0 # To be sure.
}

ask_for_params() {
    echo 'User sets params manually step by step.'
}

setup() {
    [ "$1" = force_manual ] && local MANUAL=t
    # If the gathered parameters do not pass check_params(),
    # the script shall resort to asking the user to enter them.
    [ ! -v MANUAL ] && {
        check_params \
            && echo "check_params() returned with 0. Not running manual setup." \
            || false
    } || ask_for_params
    # do_the_job
}

setup "$@" # Either empty or ‘force_manual’.
How it should work:
↗ 3 → 3→ trap →3 ↗ || ask_for_params ↘
check_a_param >>> check_params >>> [ ! -v MANUAL ] ↓
↘ 0 → 0→ trap →0 ↘ && ____________ do_the_job
The idea is that if a check fails, its return code forces check_params() to return too, which in turn triggers the || ask_for_params condition in setup(). But the trap returns 0:
↗ 3 → 3→ trap →0
check_a_param >>> check_params >>> [ ! -v MANUAL ] &&… >>> do_the_job
↘ 0 → 0→ trap →0
If you try to run the script as is, you should see
Bad, bad… Dropping to manual setup.
check_params() returned with 0. Not running manual setup.
Which means that the bad result triggered the trap(!), but the mother function that set it didn’t pass the result on.
In an attempt to hack around it, I tried the following.
To set retval as a global variable with declare -g retval=$? in return_trap() and use its value in the line setting the trap. The variable is set ([ -v retval ] returns successfully), but… has no value. Funny.
Okay, let’s put retval=Eeh in check_params(), outside return_trap(), and just set it to $? as a usual parameter. Nope: retval in the function doesn’t set the value of the global variable, it stays ‘Eeh’. No, there’s no local directive; it should be treated as global by default. If you put test=1 in check_params() and test=3 in check_a_param() and then print it with echo $test at the end of setup(), you should see 3. At least I do. declare -g doesn’t make any difference here, as expected.
Maybe that’s the scope of the function? No, that’s not it either. Moving return_trap() along with declare -g retval=Eeh doesn’t make any difference.
When modern means fail, it’s time to resort to good old writing to a file. Let’s print retval to /tmp/t with retval=$?; echo $retval >/tmp/t in return_trap() and read it back with
trap "return_trap; trap - RETURN; return $(</tmp/t)" RETURN
Now we can finally see that the last return directive, which reads the number from the file, actually returns 3. But check_params() still returns 0!
++ trap - RETURN
++ return 3
+ retval2=0
+ echo 'check_params() returned with 0. Not running manual setup.'
check_params() returned with 0. Not running manual setup.
If the argument to the trap command is strictly a function name, it returns the original result. The original one, not what return_trap() returns. I’ve tried to increment the result and still got 3.
You may also ask ‘Why would you need to unset the trap so much?’. It’s to avoid another bug, which causes the trap to trigger every time, even when check_params() is called from another function. Traps on RETURN are local things: they aren’t inherited by other functions unless the debug or trace flags are explicitly set on them, but it looks like functions keep the traps set on them between runs, or bash keeps the traps for them. This trap should only be set when check_params() is called from one specific function, but if it is not unset, it keeps being triggered every time check_a_param() returns a value greater than zero, independently of what’s in FUNCNAME[1].
Here I give up, because the only way out I see now is to implement a check on the calling function before each || return $? in check_params(). But it’s so ugly it hurts my eyes.
I may only add that $? in the line setting the trap will always return 0. So if you, for example, declare a local variable retval in return_trap() and put in code like this to check it:
trap "return_trap; [ -v retval ]; echo $?; trap - RETURN; return $retval" RETURN
it will print 0 regardless of whether retval is actually set or not, but if you use
trap "return_trap; [ -v retval ] && echo set || echo unset; trap - RETURN; return $retval" RETURN
it will print ‘unset’.
GNU bash, version 4.3.39(1)-release (x86_64-pc-linux-gnu)
Funny enough,
trap "return_trap; trap - RETURN" RETURN
simply works.
[ ! -v MANUAL ] && {
    check_params; retval2=$?
    [ $retval2 -eq 0 ] \
        && echo "check_params() returned with 0. Not running manual setup." \
        || false
} || ask_for_params
And here’s the trace.
+ check_a_parameter return_bad
+ '[' return_bad = return_ok ']'
+ return 3
+ return 3
++ return_trap
++ local retval=3
++ echo 3
++ '[' 3 -ne 0 ']'
++ echo 'Bad, bad… Dropping to manual setup.'
Bad, bad… Dropping to manual setup.
++ return 3
++ trap - RETURN
+ retval2=3
+ '[' 3 -eq 0 ']'
+ false
+ ask_for_params
+ echo 'User sets params manually step by step.'
User sets params manually step by step.
So the answer is simple: do not try to overwrite the result in the line passed to the trap command. Bash handles everything for you.

What happens to background processes' stdout and stderr when I log out?

I'm trying to understand what happens with stdout and stderr of background processes when exiting an SSH session. I understand about SIGHUP, child processes and all that, but I'm puzzled about the following:
If I run:
(while true; do date; sleep 0.5; done) | tee foo | cat >bar
and then kill the cat process, the tee process terminates because it can no longer write into the pipe. You can observe this using ps.
But if I run:
(while true; do date; sleep 0.5; done) | tee foo & disown
and then log out of my SSH session, I can observe that everything continues running just fine "forever". So somehow the stdout of the tee process must "keep going" even though my pty must be gone.
Can anyone explain what happens in the second example?
(Yes, I know I could explicitly redirect stdout/stderr/stdin of the background process.)
This is the crucial loop where tee sends output to stdout and the opened files:
while (1)
  {
    bytes_read = read (0, buffer, sizeof buffer);
    if (bytes_read < 0 && errno == EINTR)
      continue;
    if (bytes_read <= 0)
      break;

    /* Write to all NFILES + 1 descriptors.
       Standard output is the first one.  */
    for (i = 0; i <= nfiles; i++)
      if (descriptors[i]
          && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
        {
          error (0, errno, "%s", files[i]);
          descriptors[i] = NULL;
          ok = false;
        }
  }
Pay closer attention to this part:
if (descriptors[i]
    && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
  {
    error (0, errno, "%s", files[i]);
    descriptors[i] = NULL;
    ok = false;
  }
It shows that when a write error occurs, tee does not exit; it just unsets that output stream (descriptors[i] = NULL) and keeps reading until EOF, or until an error other than EINTR occurs on input.
The date command, or anything else that sends output to the pipe connected to tee, would not terminate, since tee still reads its data; the data just doesn't go anywhere besides the file foo. And even if a file argument had not been provided, tee would still read the data.
This is what /proc/<pid>/fd looks like for tee when it is disconnected from the terminal:
0 -> pipe:[431978]
1 -> /dev/pts/2 (deleted)
2 -> /dev/pts/2 (deleted)
And this one's from the process that connects to its pipe:
0 -> /dev/pts/2 (deleted)
1 -> pipe:[431978]
2 -> /dev/pts/2 (deleted)
You can see that tee's stdout and stderr point to an already-deleted pty, but it's still running.

Running sh/bash/python scripts with arguments using Go

I've been stuck on this one for a few days: I'm trying to run a bash script which runs off of its first argument (maybe I should give up all hope, haha).
Syntax for running the script can be assumed to be:
sudo bash script argument or, since it has og+x set, it can be run as just sudo script argument
In go I'm running it using the following:
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    argument := os.Args[1] // argument comes from elsewhere; os.Args[1] for the sake of the example
    c := exec.Command("/bin/bash", "script "+argument)
    if err := c.Run(); err != nil {
        fmt.Println("Error: ", err)
    }
    os.Exit(0)
}
I have had absolutely no luck, I've tried loads of other variations as well for this...
exec.Command("/bin/sh", "-c", "sudo script", argument)
exec.Command("/bin/sh", "-c", "sudo script " + argument) (my first try)
exec.Command("/bin/bash", "-c", "sudo script" + argument)
exec.Command("/bin/bash", "sudo script", argument)
exec.Command("/bin/bash sudo script" + argument)
With most of these I am met with '/bin/bash sudo etc' no such file or directory, or Error: exit status 1. I have even gone as far as writing a Python wrapper that looks for an argument and executes the bash script with subprocess. To rule out the path to the script not being defined, I have tried all of the above with a direct path to the script rather than just the script name.
For the sake of my remaining hair, what am I doing wrong here? How can I better diagnose this problem so that I get more information than just exit status 1?
You don't need to call bash/sh at all; simply pass each argument on its own. Also, to see the actual error you have to capture the command's stderr. Here's a working example:
package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    c := exec.Command("sudo", "ls", "/tmp")
    stderr := &bytes.Buffer{}
    stdout := &bytes.Buffer{}
    c.Stderr = stderr
    c.Stdout = stdout
    if err := c.Run(); err != nil {
        fmt.Println("Error: ", err, "|", stderr.String())
    } else {
        fmt.Println(stdout.String())
    }
    os.Exit(0)
}
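Applied to the question's case, the same pattern would look something like this (the script path is a placeholder, and argument is assumed to hold the script's first argument):
// Hypothetical invocation: sudo runs the executable script directly,
// so there is no need to go through bash -c at all.
c := exec.Command("sudo", "/path/to/script", argument)
Each argument is passed as its own string, so no shell quoting or word splitting gets in the way.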
