I'm trying to capture the output of an scp command to a variable or array in Bash.
I'm not exactly sure whether the text comes from the local or the remote side.
So far I've tried the following:
#!/bin/bash -x
mytty=$(tty)
echo "mytty is ${mytty}"
myvar=$(scp test.js crashdog@webserver.com:/home/crashdog 2>&1 > $mytty)
echo "myvar is ${myvar}"
I can see the text on my tty like follows:
test.js 100% 1200 1.2KB/s 00:00
But myvar remains empty. So my two questions: where is the above text coming from, and how can I assign it to a variable or array in Bash?
Thanks
Your problem is that scp doesn't output its progress text when its stdout isn't a terminal.
To trick it you can use script:
out=$(script -qefc "scp test.js crashdog@webserver.com:/home/crashdog" /dev/null)
echo "$out"
You can find more info about it in How to trick an application into thinking its stdout is a terminal, not a pipe
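If you also want the captured text as an array rather than a single string (the question mentions both), a minimal sketch; out_lines is a made-up name, and mapfile needs Bash 4+:
# Split the captured output into an array, one element per line
mapfile -t out_lines <<< "$out"
echo "first line: ${out_lines[0]}"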
I'd like to know if the user can input data.
In a script, it is usually possible to call read -r VARIABLE to request input from the user. However, this doesn't work in all environments: for example, in CI scripts, it's not possible for the user to input anything, and I'd like to substitute a default value in that case.
So far, I'm handling this with a timeout, like this:
echo "If you are a human, type 'ENTER' now. Otherwise, automatic installation will start in 10 seconds..."
read -t 10 -r _user_choice || _user_choice="no-user-here"
But honestly, that just looks ugly.
The solution doesn't have to use read, however it needs to be portable to all major distros that have Bash, so it's not possible to use packages that are not installed by default.
$ cat stdin.bash
if [[ -t 0 ]]; then
    echo "stdin is a terminal, so the user can input data"
else
    echo "stdin is connected to some other redirect or pipeline"
fi
and, demonstrating
$ bash stdin.bash
stdin is a terminal, so the user can input data
$ echo foo | bash stdin.bash
stdin is connected to some other redirect or pipeline
From help test output:
-t FD True if FD is opened on a terminal.
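Combining this test with the read call from the question gives a default value without the 10-second wait; a minimal sketch:
if [[ -t 0 ]]; then
    # a human is attached, so ask
    read -r _user_choice
else
    # no terminal (e.g. CI), substitute the default immediately
    _user_choice="no-user-here"
fi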
I am writing a shell script to install my application. I have a number of commands in my script, such as copy, unzip, move, if, and so on. I want to know the error if any of the commands fails. Also, I don't want the script to exit with codes other than zero.
Order of script execution (root-file.sh):
./script-to-install-mongodb
./script-to-install-jdk8
./script-to-install-myapplicaiton
Sample script file:
cp sourceDir destinationDir
unzip filename
if [ true ]; then
    # success code
fi
I want to know, via a variable or a message, whether any command in root-file.sh failed.
I don't want to write code to check every command's status. Sometimes cp or mv may fail due to an invalid directory. At the end of the script's execution, I want to know whether all commands executed successfully or there was an error.
Is there a way to do it?
Note: I am using plain shell (sh), not Bash.
The status of your last command is stored in the special variable $?. You can keep it around by assigning it to a variable, e.g. export var=$?:
unzip filename
export unzipStatus=$?
./script1.sh
export script1Status=$?
if [ "$unzipStatus" -eq 0 ] && [ "$script1Status" -eq 0 ]
then
    echo "Everything successful!"
else
    echo "unsuccessful"
fi
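If you'd rather not write a check after every command (as the question asks), one option is a small wrapper function. This is a hypothetical sketch in plain sh; the run helper is made up for illustration:
failures=0
run() {
    # run the given command; on failure, log it and count it, but keep going
    "$@" || { echo "FAILED: $*" >&2; failures=$((failures + 1)); }
}

run cp sourceDir destinationDir
run unzip filename
run ./script1.sh

if [ "$failures" -eq 0 ]; then
    echo "Everything successful!"
else
    echo "$failures command(s) failed"
fi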
Well, as you are using plain shell script to achieve this, there's not much external tooling, so the default $? should be of help. You may want to check the return value in between the scripts. The code will look like this:
./script_1
retval=$?
if [ "$retval" -eq 0 ]; then
    echo "script_1 successfully executed ..."
else
    echo "script_1 failed with an error exit code!"
    exit 1
fi
./script_2
Lemme know if this added any value to your scenario.
Exception handling in Linux shell scripting can be done as follows:
command || fallback_command
If you have multiple commands then you can do
(command_one && command_two) || fallback_command
Here fallback_command can be an echo or log details in a file etc.
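For example, applied to the commands from the question (the install-errors.log file name is just an illustration):
cp sourceDir destinationDir || echo "cp failed" >> install-errors.log
(unzip filename && ./script1.sh) || echo "unzip or script1 failed" >> install-errors.log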
I don't know if you have tried putting set -x at the top of your script to see detailed execution.
Want to give my 2 cents here. Run your script like this:
sh root-file.sh 2> errors.txt
Then grep for these patterns in errors.txt:
grep -e "root-file.sh: line" -e "script-to-install-mongodb.sh: line" -e "script-to-install-jdk8.sh: line" -e "script-to-install-myapplicaiton.sh: line" errors.txt
The output of the above grep command will display the commands that had errors, along with their line numbers. Let's say the output is:
test.sh: line 8: file3: Permission denied
You can then go straight to the offending line number (here it is 8), for example by jumping to that line in vi.
Extracting the offending line can also be automated:
head -8 test1.sh | tail -1
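The same extraction can be done with sed; the 8 is the line number reported by grep above:
sed -n '8p' test1.sh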
Hope it helps.
Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?
Redirecting the output of a single command is easy, but I want something more like this:
#!/bin/sh
if [ ! -t 0 ]; then
# redirect all of my output to a file here
fi
# rest of script...
Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.
I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.
Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:
# save stdout and stderr to file
# descriptors 3 and 4,
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1
# ...
# restore stdout and stderr
exec 1>&3 2>&4
Addressing the question as updated.
#...part of script without redirection...
{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...
#...residue of script without redirection...
The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)
You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.
Send stdout to a file:
exec > file
With stderr as well:
exec > file
exec 2>&1
Append both stdout and stderr to a file:
exec >> file
exec 2>&1
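Tied back to the question, a minimal sketch that only redirects when the script runs non-interactively; the log path is a placeholder:
#!/bin/sh
if [ ! -t 0 ]; then
    # no terminal on stdin (e.g. run from cron/periodic): capture everything
    exec >> /var/log/myscript.log 2>&1
fi
# rest of script...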
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. The second use is distinguished by having no command argument to exec.
You can make the whole script a function like this:
main_function() {
do_things_here
}
then at the end of the script have this:
if [ -z "$TERM" ]; then
    # if not run via terminal, log everything into a log file
    main_function >> /var/log/my_uber_script.log 2>&1
else
    # run via terminal, only output to screen
    main_function
fi
Alternatively, you may log everything into the logfile on each run and still send it to stdout by simply doing:
# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
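One caveat with the tee variant: afterwards $? holds tee's exit status, not main_function's. In Bash, pipefail makes the pipeline report main_function's failure; a minimal sketch:
set -o pipefail
main_function 2>&1 | tee -a /var/log/my_uber_script.log
# with pipefail, $? is non-zero if main_function failed, even though tee succeeded
echo "main_function exited with $?"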
For saving the original stdout and stderr you can use:
exec [fd number]<&1
exec [fd number]<&2
For example, the following code will print "walla1" and "walla2" to the log file (a.txt), "walla3" to stdout, and "walla4" to stderr.
#!/bin/bash
exec 5<&1
exec 6<&2
exec 1> ~/a.txt 2>&1
echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
[ -t 0 ] || exec >> test.log
I finally figured out how to do it. I wanted not just to save the output to a file but also to find out whether the bash script ran successfully or not!
I've wrapped the commands inside a function and then called the function main_function with its output tee'd to a file. Afterwards, I've captured the exit status using if [ $? -eq 0 ].
#!/bin/bash
main_function() {
python command.py
}
main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
if [ $? -eq 0 ]
then
echo 'Success!'
else
echo 'Failure!'
fi
I'm trying to create a .sh file that executes things like the pwd or ls command.
My problem is when I execute the .sh file.
It seems not to recognize the tasks.
I tried to use echo
Example: echo 'lsApps' or echo "lsApps"
but it prints the name of the task instead of executing the command.
For example, I want to execute a .sh file that runs pwd:
VAR_1=pwd
echo $VAR_1
but it prints pwd instead of the current path...
Any idea?
echo is used to print on the screen (see its man page). If you do echo 'lsApps' it will take it as a string and print it. If you want to execute a command, just run lsApps on its own; this will execute the command and show the output on the screen. If you want to store the output of the command in a variable, you can do
<variable_name>=`lsApps`
This will store the output in the variable. Note that there is no space between the variable name and the command, and those are backticks (`), not quotes. To print the variable on screen you can use echo by doing echo $<variable_name>.
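For the pwd example from the question, a minimal sketch using the modern $(...) form of command substitution (equivalent to backticks):
VAR_1=$(pwd)    # run pwd and capture its output
echo "$VAR_1"   # prints the current directory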
If you don't want to see the output at all, you can do
lsApps > /dev/null
This will execute the command, but you will not see any stdout on your screen.
As far as ssh is concerned, run ssh-keygen and then ssh-copy-id user@remote_ip to set up ssh keys so that you don't have to enter your password with ssh. Once you have done that, you can use ssh user@remote_ip in your shell script.
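A rough sketch of that setup; user and remote_ip are placeholders:
ssh-keygen -t ed25519          # generate a key pair; accept the prompts
ssh-copy-id user@remote_ip     # install the public key on the remote host
ssh user@remote_ip 'pwd'       # now connects without a password prompt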
I want to use some output of a command, but I don't know how I can save the output in a variable or file.
The script connects to a specified server via telnet and executes different commands. The output of one command is needed for further commands, so I have to save that information.
My code:
#!/bin/bash
(
echo open server portnumber
sleep 2s
echo "login username password"
sleep 1s
echo "use sid=1"
CLIENT_LIST=$(echo "clientlist")
sleep 1s
echo "clientupdate client_nickname=Administrator"
for client_id in $(echo $CLIENT_LIST | grep -Eo "clid=[0-9]+" | grep -Eo "[0-9]+"); do
echo "clientpoke clid=$client_id msg=How\sare\syou!"
sleep 1s
done
sleep 1s
echo "logout"
echo "quit"
) | telnet
I want to save the output of the command 'clientlist' in a variable or file. A variable would be the best solution. But actually the variable just saves 'clientlist' and not the output of the command. :(
I hope somebody can help me. Thanks in advance! :)
If you want to test it: It's made for TeamSpeak 3 server.
To run the command 'clientlist' and save the output in a variable:
output_var=$(clientlist)
In bash or sh, the $(...) syntax means "run this command and return whatever output it produces."
If by this question you mean "I want to run the clientlist command on the remote machine, then capture its output to a variable on the local machine", then you can't do this using something like your script. (Note that your script only pipes input into the telnet command; telnet's output isn't captured anywhere.)
You'll need to write a script using something like expect, or one of its work-alikes in another language like Perl or Python.