Using the tar command to display all files in a .tar archive to a file and to standard output - linux

Is it possible to direct the output from this command to a file and also to standard output?
tar -tf all.tar
Also, is there a way to kill all processes running, let's say, xclock?

You can pipe the output of tar -tf all.tar to the tee command:
tar -tf all.tar | tee output_file_name
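For the second part of the question, a brief sketch using standard process tools (killall is in the psmisc package on most Linux distributions, pkill in procps):
killall xclock   # send SIGTERM to every process named xclock
pkill xclock     # same idea, matching processes by name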

Related

Creating compressed tar file, with only subset of files, remotely over SSH

I've successfully managed to transfer a tar file over SSH on stdout from a remote system, creating a compressed file locally, by doing something like this:
read -s sudopass
ssh me#remote "echo $sudopass | sudo -S tar cf - '/dir'" 2>/dev/null | XZ_OPT='-6 -T0 -v' xz > dir.tar.xz
As expected, this gets me a dir.tar.xz locally, which is all of the remote /dir compressed.
I've also managed to figure out how to compress only a subset of files locally, by passing a file list to tar with -T on stdin:
find '/dir' -name '*.log' | XZ_OPT='-6 -T0 -v' tar cJvf /root/logs.txz -T -
My main question is: how would I go about doing the first thing (transfer a plain tar remotely, then compress locally) while at the same time telling tar that I only want to do it on a specific subset of files?
When I try combining the two:
ssh me#remote "echo $sudopass | sudo -S find '/dir' -name '*.log' | tar cf
-T -" | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz
I get errors like:
tar: -: Cannot stat: No such file or directory
I feel like tar isn't liking the fact that I'm both passing it something on STDIN as well as expecting it to output to STDOUT. Adding another - didn't seem to help either.
Also, as a bonus question, if anyone has a better idea on how to pass $sudopass above that would be great, since this method -- while avoiding having the password in the bash history -- makes the sudo password show up in the process list while it's running.
Remember that the f option requires an argument, so when you write cf -T -, I suspect that the -T is getting consumed as the argument to f, which throws off the rest of the command line.
This works for me:
ssh me#remote "echo $password | sudo -S find /tmp/dir -name '*.log' | tar -cf- -T-"
You could also write it like this:
ssh me#remote "echo $password | sudo -S find /tmp/dir -name '*.log' | tar cf - -T-"
But I prefer to always use - for options, rather than legacy tar's weird options without any prefix.
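Combining that corrected tar invocation with the local xz stage from the question gives a sketch along these lines (the paths, the $sudopass variable, and the output name are taken from the question; as above, tar itself runs unprivileged on the remote side, so that user needs read access to the matched files):
ssh me@remote "echo $sudopass | sudo -S find '/dir' -name '*.log' | tar -cf- -T-" 2>/dev/null | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz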

Linux Pipe viewer, how to split the pipe

I'm trying to extract a large .tar.gz file using pv.
pv large_file.tar.gz | tar -xcf /../MyFolder.
The pv command works as expected, showing the progress in the console.
I'm trying to split the stdout, to show the progress in the console and at the same time save the same stdout to a file.
I tried doing so with tee, but couldn't make it work.
pv large_file.tar.gz | tee /tmp/strout.log | tar -xcf /../MyFolder
Any suggestions on how I can display the progress in the console and at the same time save it to a file?
Thanks!
Not sure that your original command works, as there are several errors in the options given to tar.
Given that ../MyFolder exists, your first command needs to be
pv large_file.tar.gz | tar -xz -C ../MyFolder
If you insert a tee call between the pv and tar calls, the whole chain still works.
pv large_file.tar.gz | tee /tmp/strout.log | tar -xz -C ../MyFolder
However, I'm not sure it does what you expect. If you pipe pv's output to tee, tee will pipe it on to tar and also dump the same contents to /tmp/strout.log, resulting in your tar being extracted to ../MyFolder and a copy of the compressed stream in /tmp/strout.log.
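A quick way to confirm what actually ends up in the log:
file /tmp/strout.log   # should identify gzip compressed data, not pv's progress text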
EDIT
As suggested by @DownloadPizza, you can use process substitution (see How do I write stderr to a file while using "tee" with a pipe?). By using the -f flag with pv, your command will become
pv -f large_file.tar.gz 2> >(tee /tmp/strout.log) > >(tar -xz -C ../MyFolder)
and will produce the expected output.
pv's progress is sent to stderr; can you try this?
pv large_file.tar.gz > >(tar -xz -C ./MyFolder/)
You might need to edit the tar command, as I couldn't get yours to work for me.

scp multiple files with different names from source and destination

I am trying to scp multiple files from source to destination. The scenario is that the source file name is different from the destination file name.
Here is the scp command I am trying:
scp /u07/retail/Bundle_de.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_de.properties
Basically I have more than 7 files which I am currently transferring with separate scp commands. I want to club them into a single scp to transfer all the files.
A few of the scp commands I am running:
$ scp /u07/retail/Bundle_de.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_de.properties
$ scp /u07/retail/Bundle_as.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_as.properties
$ scp /u07/retail/Bundle_pt.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_pt.properties
$ scp /u07/retail/Bundle_op.properties rgbu_fc@<fc_host>:/u01/projects/MultiSolutionBundle_op.properties
I am looking for a solution by which I can transfer the above 4 files in a single scp command.
Looks like a straightforward loop in any standard POSIX shell:
for i in de as pt op
do scp "/u07/retail/Bundle_$i.properties" "rgbu_fc#<fc_host>:/u01/projects/MultiSolutionBundle_$i.properties"
done
Alternatively, you could give the files new names locally (copy, link, or move), and then transfer them with a wildcard:
dir=$(mktemp -d)
for i in de as pt op
do cp "/u07/retail/Bundle_$i.properties" "$dir/MultiSolutionBundle_$i.properties"
done
scp "$dir"/* "rgbu_fc#<fc_host>:/u01/projects/"
rm -rf "$dir"
With GNU tar, ssh and bash:
tar -C /u07/retail/ -c Bundle_{de,as,pt,op}.properties | ssh user@remote_host tar -C /u01/projects/ --transform 's/.*/MultiSolution\&/' --show-transformed-names -xv
If you want to use globbing (*) with filenames:
cd /u07/retail/ && tar -c Bundle_*.properties | ssh user@remote_host tar -C /u01/projects/ --transform 's/.*/MultiSolution\&/' --show-transformed-names -xv
-C: change to directory
-c: create a new archive
Bundle_{de,as,pt,op}.properties: bash expands this to Bundle_de.properties Bundle_as.properties Bundle_pt.properties Bundle_op.properties before executing the tar command
--transform 's/.*/MultiSolution\&/': prepend MultiSolution to filenames
--show-transformed-names: show filenames after transformation
-xv: extract files and verbosely list files processed
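For a quick local preview of the resulting names, the same transform can be applied on the create side with the archive discarded (a sketch; --show-transformed-names with -v prints the renamed members during creation, and the & is not backslash-escaped here because there is no second shell re-parsing the expression as there is with ssh):
tar -C /u07/retail/ --transform 's/.*/MultiSolution&/' --show-transformed-names -cvf /dev/null Bundle_{de,as,pt,op}.properties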

bzip command not working with "tee -a"

I want to redirect the stdout of the bzip2 command to a logfile using the tee command, but it's not working and gives an error for the '-a' of the tee command. Please see the error below:
> bzip2 file -c 1> tee -a logfile
bzip2: Bad flag `-a'
bzip2, a block-sorting file compressor. Version 1.0.5, 10-Dec-2007.
usage: bzip2 [flags and input files in any order]
-h --help print this message
-d --decompress force decompression
-z --compress force compression
-k --keep keep (don't delete) input files
-f --force overwrite existing output files
-t --test test compressed file integrity
-c --stdout output to standard out
-q --quiet suppress noncritical error messages
-v --verbose be verbose (a 2nd -v gives more)
-L --license display software version & license
-V --version display software version & license
-s --small use less memory (at most 2500k)
-1 .. -9 set block size to 100k .. 900k
--fast alias for -1
--best alias for -9
If invoked as `bzip2', default action is to compress.
as `bunzip2', default action is to decompress.
as `bzcat', default action is to decompress to stdout.
If no file names are given, bzip2 compresses or decompresses
from standard input to standard output. You can combine
short flags, so `-v -4' means the same as -v4 or -4v, &c.
What is the issue? Why is bzip2 seeing the '-a' flag that was meant for tee?
Try:
bzip2 -c file | tee -a logfile
The | (pipe) is redirecting the stdout of the left command to the stdin of the right command.
-c is an option of bzip2 that says "compress or decompress to standard output"; see man bzip2
Your problem is that 1> does not pipe the output of the bzip2 command to the tee command, but instead redirects the output to a file which will be named tee. Furthermore, you probably don't want to use -c. You should be using the pipe | instead, as follows:
bzip2 file | tee -a logfile
Also, the reason bzip2 is complaining is that the command as you wrote it above is interpreted exactly like this one:
bzip2 file -a logfile 1> tee
And hence all the arguments after tee are actually passed to the bzip2 command.
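The same shell behavior can be seen with a harmless command: a redirection may sit anywhere on the line, and the shell removes it before handing the remaining words to the program (a small illustration using echo):
echo hello 1> out.txt world   # out.txt ends up containing "hello world"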
As others have pointed out, you want a pipe, not output redirection:
bzip2 file | tee -a logfile
However, bzip2 invoked this way doesn't produce any output on stdout; it simply replaces the given file with a compressed version of the file. You might want to pipe standard error to the log file instead:
bzip2 file 2>&1 | tee -a logfile
(2>&1 copies standard error to standard output, which can then be piped.)
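If the goal is to log bzip2's own per-file statistics, adding -v makes it print them on standard error, which the same pipe then captures (a sketch along the lines of the previous command):
bzip2 -v file 2>&1 | tee -a logfile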

MySQL Dump to tar.gz from remote without shell access

I'm trying to get a dump from MySQL to my local client. This is what I currently have:
mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | gzip -9 > $FILE
What I want though is a .tar.gz instead of a plain gzip archive. I have shell access on the local client but not on the server, so I can't do a remote tar and copy it here. Is there a way of piping the gzip output into a tar.gz? (Currently, the .gz does not get recognized as a tar archive.)
Thanks.
If you are issuing the above command on the client side, the compression is done on the client side; mysqldump connects to the remote server and downloads the data without any compression.
mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db > filename
tar cfz filename.tar.gz filename
rm filename
Probably some Unix gurus will have a one-liner to do it.
No. The files (yes, plural, since tar is usually used for more than one file) are first placed in a tar archive, and then that archive is compressed. Because the tar format records each member's size in a header written ahead of the member's data, tar cannot take member contents from a pipe; if you are using the tar command-line tool, you will need to save the dump to a temporary file and then tar that.
Personally though, I'd rather hit the other side with a cluebat.
mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | tar -zcvf $FILE -
Where $FILE is your filename.tar.gz
Archived backup and renamed by time and date:
/usr/bin/mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | gzip -c > /home/backup_`/bin/date +"%Y-%m-%d_%H:%M"`.gz
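If this line is placed in a crontab entry, note that cron treats an unescaped % as the end of the command (the rest becomes standard input), so the percent signs in the date format must be escaped there, e.g.:
/usr/bin/mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | gzip -c > /home/backup_`/bin/date +"\%Y-\%m-\%d_\%H:\%M"`.gz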
