scp output as logfile - linux

I am relatively new to using scp, and I am trying to do some simple stuff over EC2, something like the following:
scp -i ec2key.pem username@ec2ip:/path/to/file ~/path/to/dest/folder/file
What I would like to have is a log of the above command (i.e. the screen output saved to a text file). Is there a way to achieve this?
Thanks.

You can redirect both output streams (stdout and stderr) of the command, provided you use the verbose (-v) flag. Otherwise, scp will suppress the output as it expects its stdout to be connected to a terminal. But then you get too much information, which you can get rid of with grep:
scp -v -i ec2key.pem username@ec2ip:/path/to/file ~/path/to/dest/folder/file |& grep -v '^debug' > file.log
If you want to have the output both on the screen and in the file, use tee:
scp -v -i ec2key.pem username@ec2ip:/path/to/file ~/path/to/dest/folder/file |& grep -v '^debug' | tee file.log
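To see what that grep filter keeps, here is a minimal sketch run against a couple of canned lines in the style of scp -v output (the sample lines are invented for illustration):

```shell
# Canned lines in the style of scp -v output (invented, for illustration).
sample='debug1: Reading configuration data /etc/ssh/ssh_config
file.txt 100% 1262 1.2KB/s'
# Keep everything that does not start with "debug":
kept=$(printf '%s\n' "$sample" | grep -v '^debug')
printf '%s\n' "$kept"
```

Only the progress line survives; every debugN line is filtered out.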

scp -v -i ec2key.pem username@ec2ip:/p/t/file ~/p/t/d/f/file >> something.log 2>&1
-v and 2>&1 will append the extended details (i.e. debug info) to the existing something.log file.
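A quick sketch of why the order matters: ">> log 2>&1" points stdout at the log first and then duplicates that destination for stderr, while the reverse order would tie stderr to the old stdout instead (a temp file stands in for something.log here):

```shell
# Both streams land in the log because stdout is redirected first,
# and 2>&1 then duplicates that destination for stderr.
log=$(mktemp)
{ echo out; echo err >&2; } >> "$log" 2>&1
captured=$(cat "$log")
printf '%s\n' "$captured"
rm -f "$log"
```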

How about (untested, compressing /path/to for readability):
( scp -i ec2key.pem username@ec2ip:/p/t/file ~/p/t/d/f/file ) 2>/p/t/textfile

linux bash and grep bug?

Doing the following:
First console
touch /tmp/test
Second console
tail -f /tmp/test |grep propo |grep -v miles
Third console
echo propo >> /tmp/test
The second console should show "propo", but it doesn't show anything. If instead you run in the second console:
tail -f /tmp/test |grep propo
and do echo propo >> /tmp/test, it will show propo. But the grep -v is for miles, not for propo.
Why?
Test it in your own environment if you want; it's pretty simple, but it's not working.
Why?
Most probably because the output of a command, when piped to another command, is fully buffered rather than line buffered. The buffering here happens in the first grep's output.
Use stdbuf -oL to force line buffering, or grep --line-buffered for a line-buffered grep.
the problem is that grep does not use line buffering by default; so the output will be buffered. You could use grep --line-buffered:
tail -f /tmp/test | grep --line-buffered propo | grep -v miles
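As a sanity check that the filter logic itself is fine, the same two-grep pipeline on a finite stream prints the expected line; buffering only bites with tail -f, because the pipe never reaches EOF and so never flushes:

```shell
# On finite input the block-buffered pipe flushes at EOF, so the
# filter works even without --line-buffered.
out=$(printf 'propo here\nmiles propo\nother\n' | grep propo | grep -v miles)
printf '%s\n' "$out"
```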

linux strace: How to filter system calls that take more than a second

I'm using "strace -f -T -tt -o foo.txt -p 1234" to print the time spent in system calls. This makes the output file huge; is there a way to print only the system calls that took more than 1 second? I can grep it out from the file later, but is there a better way?
If we simply omit the -o foo.txt argument, the output goes to standard output. We can pipe it through grep and redirect to the file:
strace -f -T -tt -p 1234 | grep pattern > foo.txt
To watch the output at the same time:
strace -f -T -tt -p 1234 | grep pattern | tee foo.txt
If a command prints only to a file that is passed as an argument, and we want to filter/redirect its output, the first step is to check whether it implements the dash convention: can you specify standard input or output using - as a filename argument:
some_command - | our_pipe > file.txt
If not, then the recourse is to use Bash's process substitution syntax: >(output command) and <(input command):
some_command >(our_pipe > file.txt)
The process substitution syntax expands into a token that is suitable as a filename argument for a command or function. When the program opens that token, it gets a file descriptor to the command's input or output, depending on direction.
With process substitution, we can redirect the input or output of stubborn programs which work only with files passed by name as arguments, and which do not support any convention for requesting that standard input or output be used in place of a file.
The token used by process substitution is platform-dependent; we can see what it is using echo. For instance on GNU/Linux, Bash takes advantage of the /dev/fd operating system feature:
$ echo <(ls -l)
/dev/fd/63
You can use the following command:
strace -T command 2>&1 >/dev/null | awk '{gsub(/[<>]/,"",$NF)} $NF+0 > 1.0'
Explanation:
strace -T adds the time spent in the syscall at the end of the line, enclosed in <...>
2>&1 >/dev/null | awk pipes stderr to awk while discarding stdout. (strace writes its output to stderr!)
The awk command removes the <> from the last field $NF and prints lines where the time spent is higher than a second.
Probably you'll also want to pass the threshold as a variable to the awk command:
strace -T command 2>&1 >/dev/null \
| awk -v thres=0.001 '{gsub(/[<>]/,"",$NF)} $NF+0 > thres+0'
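The awk filter can be checked against canned strace-style lines (the two sample lines below are made up), without tracing a live process:

```shell
# Two invented lines in strace -T format; only the slow one should
# survive a 1.0-second threshold.
sample='read(3, "", 4096) = 0 <0.000012>
select(5, [4], NULL, NULL, NULL) = 1 <2.104311>'
slow=$(printf '%s\n' "$sample" \
  | awk -v thres=1.0 '{gsub(/[<>]/,"",$NF)} $NF+0 > thres+0')
printf '%s\n' "$slow"
```

Only the select line, which took over two seconds, is printed.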

Executing a string as a command in bash that contains pipes

I'm trying to list some ftp directories. I can't work out how to make bash execute a command that contains pipes correctly.
Here's my script:
#!/bin/sh
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
cmd='echo "ls /mydir/'"$d"'/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1'
$cmd
done
This just outputs:
"ls /mydir/dir1/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
"ls /mydir/dir2/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
How can I make bash execute the whole string including the echo? I also need to be able to parse the output of the command.
I don't think that you need to be using the -b switch at all. It should be sufficient to specify the commands that you would like to execute as a string:
#!/bin/bash
dirs=("/dir1" "/dir2")
for d in "${dirs[@]}"
do
printf -v d_str '%q' "$d"
sftp -i ~/mykey user@example.com "ls /mydir/$d_str/*.tar*" 2>&1 | tail -n1
done
As suggested in the comments (thanks @Charles), I've used printf with the %q format specifier to protect against characters in the directory name that may be interpreted by the shell.
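A quick sketch of what %q produces for a directory name containing a space:

```shell
# %q escapes shell metacharacters so the remote shell sees one word.
printf -v d_str '%q' 'release builds'
printf '%s\n' "$d_str"
```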
First you need to use /bin/bash as the shebang to be able to use Bash arrays.
Then, instead of storing the pipeline in a string, use command substitution to capture its output:
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
output=$(echo "ls /mydir/$d/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
echo "$output"
done
I would, however, advise against using ls's output in the sftp command. You can replace that with:
output=$(echo "/mydir/$d/"*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
Don't store the command in a string; just use it directly.
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
echo "ls /mydir/$d/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
done
Usually, people store the command in a string so they can both execute it and log it, as a misguided form of factoring. (I'm of the opinion that it's not worth the trouble required to do correctly.)
Note that sftp reads from standard input by default, so you can just use
echo "ls ..." | sftp -i ~/mykey user@example.com 2>&1 | tail -n1
You can also use a here document instead of a pipeline.
sftp -i ~/mykey user@example.com 2>&1 <<EOF | tail -n1
ls /mydir/$d/*.tar*
EOF
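The unquoted EOF delimiter is what lets $d expand before the text reaches sftp's standard input; cat stands in for sftp in this sketch:

```shell
# With an unquoted delimiter, variables inside the here document expand
# before the text is fed to the command's stdin.
d=dir1
out=$(cat <<EOF
ls /mydir/$d/*.tar*
EOF
)
printf '%s\n' "$out"
```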

Shell script to log output of console

I want to grep the output of my script, which itself contains calls to different binaries...
Since the script runs multiple binaries, I can't simply put exec at the top and dump the output to a file (it does not capture the output from the binaries)...
And to let you know, I am monitoring the script output to determine if the system has got stuck!
Why don't you append instead?
mybin1 | grep '...' >> mylog.txt
mybin2 | grep '...' >> mylog.txt
mybin3 | grep '...' >> mylog.txt
Does this not work?
#!/bin/bash
exec 11>&1 12>&2 > >(exec tee /var/log/somewhere) 2>&1 ## Or add -a option to tee to append.
# call your binaries here
exec >&- 2>&- >&11 2>&12 11>&- 12>&-
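The same save/redirect/restore dance, sketched with a plain file instead of the tee process so the descriptor numbers are easier to follow:

```shell
# fd 11 saves the original stdout; stdout is then pointed at the log,
# and afterwards restored (and the saved descriptor closed).
log=$(mktemp)
exec 11>&1          # save current stdout as fd 11
exec > "$log"       # redirect stdout to the log
echo "goes to the log"
exec >&11 11>&-     # restore stdout and close the saved descriptor
content=$(cat "$log")
printf '%s\n' "$content"
rm -f "$log"
```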

How to redirect ubuntu terminal output to a file?

I have tried redirecting the terminal output to a file using tee and >, as in the examples here and the question. It worked for echo test | tee log.txt or ls -l | tee log.txt.
But it does not work (does not add anything to log.txt) when I run a command like divine verify file.dve | tee log.txt,
where divine is an installed tool. Any ideas or alternatives?
Try divine verify file.dve 2>&1 | tee log.txt. If the program writes its output to stderr instead of stdout, this redirects stderr into stdout so that tee can capture it.
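A sketch with a hypothetical stand-in for divine makes the failure mode visible: a program that writes only to stderr leaves the tee'd log empty unless 2>&1 comes first:

```shell
# err_tool is a made-up stand-in for a tool that prints to stderr only.
err_tool() { echo 'status: ok' >&2; }
err_tool 2>/dev/null | tee plain.log >/dev/null  # nothing reaches tee
err_tool 2>&1        | tee fixed.log >/dev/null  # stderr folded into stdout
plain=$(cat plain.log); fixed=$(cat fixed.log)
printf 'plain=[%s] fixed=[%s]\n' "$plain" "$fixed"
rm -f plain.log fixed.log
```

(In the first pipeline, stderr is sent to /dev/null just to keep the demo quiet; in real use it would appear on the terminal instead of in the log.)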
This works on ffmpeg output too:
{ echo ffmpeg -i [rest of command]; ffmpeg -i [rest of command]; } 2>&1 | tee ffmpeg.txt
Use tee -a instead to append if the file already exists.
Also, if you want to run mediainfo on all files in a folder and make sure the command itself is visible in mediainfo.txt:
{ echo mediainfo *; mediainfo *; } 2>&1 | tee mediainfo.txt
NB: { echo cmd; cmd; } records the command itself in the txt file; without it, the command is not printed.
