Proper way to not print to shell with tee command - linux

I would like to use tee to append to multiple files, however, I don't need it to print to my shell, just the files. Outputting to /dev/null works great, as the command still appends to the files, and doesn't print to the shell:
echo test | tee -a file1 file2 file3 &>/dev/null
I was just wondering if this is the proper way to do it, as tee --help doesn't seem to have a parameter to not print to shell:
-a, --append append to the given FILEs, do not overwrite
-i, --ignore-interrupts ignore interrupt signals
-p diagnose errors writing to non pipes
--output-error[=MODE] set behavior on write error. See MODE below
--help display this help and exit
--version output version information and exit
I'm pretty sure this is the right way to do it, I guess I would just like some confirmation.

Well okay then...
... | tee -a file1 file2 >> file3
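Either form works; as a quick sanity check, here is a sketch using the file names from the question:
echo test | tee -a file1 file2 file3 > /dev/null    # tee appends to all three files, its stdout is discarded
echo test | tee -a file1 file2 >> file3             # tee appends to two files, the shell appends tee's stdout to the third
tail -n 1 file1 file2 file3                         # each file should now end with "test"
Neither form prints anything to the terminal other than the output of the final tail.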

Related

How to write a shell script to append multiple lines of data to 3 different files at a time, and check whether that data already exists and ignore it?

I tried using:
sed -i '$ a hello' foo.txt
but I can't work out how to use it for multiple files unattended. Can someone please help me sort this out? I appreciate your response. Thanks
Check out the tee command. Something like
echo "new line" | tee -a file1 file2 file3
To keep it from also sending the output to stdout, you can redirect it to /dev/null afterwards:
echo "new line" | tee -a file1 file2 file3 > /dev/null
You can read more at its manpage man tee.
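If you also need the "ignore it if the data already exists" part of the question, here is a rough sketch (line and file names taken from the example above; grep -qxF checks for an exact whole-line match):
for f in file1 file2 file3; do
  grep -qxF "new line" "$f" || echo "new line" >> "$f"    # append only when the exact line is not already present
done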

Simple tee example

Can someone please explain why tee works here:
echo "testtext" | tee file1 > file2
My understanding was that tee duplicates the input, printing one copy to the screen.
The above example allows the output from echo to be sent to 2 files, the first redirecting to the second.
I would expect 'testtext' to be printed to the screen, passed through file1 and landing in file2, similar to how the text in the following example would only end up in file2.
echo "testtext" > file1 > file2
Can anyone explain what I am missing in my understanding?
Edit
Is it because it's writing to the file and then to stdout, which gets redirected?
Your description is right: tee receives data from stdin and writes it both into the file and to stdout. But when you redirect tee's stdout into another file, nothing is written to the terminal because the data ends up inside the second file.
Is it because it's writing to the file and then to stdout, which gets redirected?
Exactly.
What you are trying to do could be done like this (demonstrating how tee works):
$ echo "testtext" | tee file1 | tee file2
testtext
But since tee from GNU coreutils accepts several output files, you can simply do:
$ echo "testtext" | tee file1 file2
testtext
But your idea of the text being "passed through file1 and landing in file2" is not correct. Your shell example:
echo "testtext" > file1 > file2
makes the shell open both file1 and file2 for writing, which truncates them, and since stdout can only be redirected to one file at a time, only the last redirection is effective (it overrides the previous ones).
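You can see this with a quick experiment (a sketch of what you would typically observe in bash):
$ echo "testtext" > file1 > file2
$ cat file1      # empty: file1 was opened and truncated, but received nothing
$ cat file2      # the last redirection won
testtext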
tee writes its input to each of the files named in its arguments (it can take more than one) as well as to standard output. The example could also be written
echo "testtext" | tee file1 file2 > /dev/null
where you explicitly write to the two files, then ignore what goes to standard output, rather than redirecting standard output to one of the files.
The > file2 in the command you showed does not somehow "extract" what was written to file1, leaving standard output to be written to the screen. Rather, > file2 instructs the shell to pass a file handle opened on file2 (rather than the terminal) to tee for it to use as standard output.
"is it because its writing to file and then to stdout which gets redirected?"
That is correct
tee sends output to the specified file, and to stdout.
the last ">" redirects standout to the second file specified.

Grep files in between wget recursive downloads

I am trying to recursively download several files using wget -m, and I intend to grep all of the downloaded files to find specific text. Currently, I can wait for wget to fully complete and then run grep. However, the wget process is time-consuming as there are many files, so instead I would like to show progress by grepping each file as it downloads and printing the results to stdout, all before the next file downloads.
Example:
download file1
grep 'pattern' file1 >> output.txt
download file2
grep 'pattern' file2 >> output.txt
...
Thanks for any advice on how this could be achieved.
As c4f4t0r pointed out,
wget -m -O - <websites> | grep --color 'pattern'
using grep's color option to highlight the pattern can be helpful, especially when dealing with bulky output in the terminal.
EDIT:
Below is a command line you can use. It creates a file called file and saves the output messages from wget. Afterwards it tails the message file, using awk to find any lines containing "saved" and extract the filename, then runs grep for the pattern in that file.
wget -m websites &> file & tail -f -n1 file | awk -F "'|\`" '/saved/{system("grep --colour pattern " $2)}'
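The same pipeline, spread out with comments (a sketch; the log file name is arbitrary, and the quote characters around file names in wget's "saved" lines vary by wget version and locale, so the -F separator may need adjusting):
wget -m websites &> wget.log &        # mirror in the background, logging all messages to wget.log
tail -f -n1 wget.log |
  awk -F "'|\`" '                     # split each log line on quote characters
    /saved/ {                         # lines for completed downloads contain "saved"
      system("grep --colour pattern " $2)    # the field between the quotes is the saved file name
    }'
Note that file names containing spaces or quotes would break the system() call in this simple form.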
Based on Xorg's solution I was able to achieve my desired effect with some minor adjustments:
wget -m -O file.txt http://google.com 2> /dev/null & sleep 1 && tail -f -n1 file.txt | grep pattern
This will print out all lines that contain pattern to stdout, and wget itself will produce no output visible from the terminal. The sleep is included because otherwise file.txt would not have been created by the time the tail command executes.
As a note, this command will miss any results that wget downloads within the first second.
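One way to close that gap while keeping the same structure (a sketch, assuming GNU tail) is to have tail start from the first line of the file instead of from its end:
wget -m -O file.txt http://google.com 2> /dev/null & sleep 1 && tail -n +1 -f file.txt | grep pattern
With -n +1, tail prints the whole file from the beginning and then keeps following it, so matches from the first second are included as well.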

xargs can't get user input?

I have some sample code like this:
CMD="svn up blablabla | grep -v .tgz"
echo $CMD | xargs -n -P ${PARALLEL:=20} -- bash -c
The purpose is to run svn update in parallel. However, when a conflict is encountered, which should prompt the user with several options to choose from, it just continues without waiting for user input, and an error is shown:
Conflict discovered in 'blablabla'.
Select: (p) postpone, (df) diff-full, (e) edit,
(mc) mine-conflict, (tc) theirs-conflict,
(s) show all options: svn: Can't read stdin: End of file found
Is there any way to fix this?
Thanks
Yes, there is a way to fix this! See the answer to how to prompt a user from a script run with xargs. Long story short, use
xargs -a FILENAME your_script
or
xargs -a <(cat FILENAME) your_script
The first version actually reads lines from a file, and the second one fakes reading lines from a file, which is convenient for using xargs in pipe chains with awk or perl. Remember to use the -0 flag if you don't want to break on whitespace!
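Applied to the svn example, a rough sketch (dirs.txt is a hypothetical file listing one working copy per line; xargs' stdin stays attached to the terminal, so svn can prompt on conflicts):
xargs -a dirs.txt -n 1 svn up
Keep in mind that combining this with -P means several svn processes may try to prompt at the same time, which gets messy.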
Another solution, which doesn't rely on Bash but on GNU's flavor of xargs, is to use the -o or --open-tty option:
echo $CMD | xargs -n -P ${PARALLEL:=20} --open-tty -- bash -c
From the manpage:
-o, --open-tty
Reopen stdin as /dev/tty in the child process before executing the command. This is useful if you want xargs to run an interactive application.
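A tiny illustration of the flag (a sketch; file1 and file2 are placeholder names): with --open-tty, the rm -i confirmation prompts reach the terminal even though xargs' stdin is the pipe.
printf '%s\n' file1 file2 | xargs --open-tty -n 1 rm -i
Without the flag, the child process inherits the pipe as its stdin, so interactive prompts cannot read from the terminal.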

How to append one file to another in Linux from the shell?

I have two files: file1 and file2. How do I append the contents of file2 to file1 so that the existing contents of file1 are preserved?
Use bash builtin redirection (tldp):
cat file2 >> file1
cat file2 >> file1
The >> operator appends the output to the named file or creates the named file if it does not exist.
cat file1 file2 > file3
This concatenates two or more files to one. You can have as many source files as you need. For example,
cat *.txt >> newfile.txt
Update 20130902
In the comments eumiro suggests "don't try cat file1 file2 > file1." The reason this might not result in the expected outcome is that the file receiving the redirect is prepared before the command to the left of the > is executed. In this case, first file1 is truncated to zero length and opened for output, then the cat command attempts to concatenate the now zero-length file plus the contents of file2 into file1. The result is that the original contents of file1 are lost and in its place is a copy of file2 which probably isn't what was expected.
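A quick illustration of the pitfall (a sketch; GNU cat may additionally print a warning that the input file is the output file):
$ echo original > file1
$ echo extra > file2
$ cat file1 file2 > file1    # file1 is truncated by the shell before cat ever runs
$ cat file1
extra
The line that was in file1 is gone; only the contents of file2 survive.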
Update 20160919
In the comments tpartee suggests linking to backing information/sources. For an authoritative reference, I direct the kind reader to the sh man page at linuxcommand.org which states:
Before a command is executed, its input and output may be redirected
using a special notation interpreted by the shell.
While that does tell the reader what they need to know, it is easy to miss if you aren't looking for it and parsing the statement word by word. The most important word here is 'before'. The redirection is completed (or fails) before the command is executed.
In the example case of cat file1 file2 > file1 the shell performs the redirection first so that the I/O handles are in place in the environment in which the command will be executed before it is executed.
A friendlier version in which the redirection precedence is covered at length can be found at Ian Allen's web site in the form of Linux courseware. His I/O Redirection Notes page has much to say on the topic, including the observation that redirection works even without a command. Passing this to the shell:
$ >out
...creates an empty file named out. The shell first sets up the I/O redirection, then looks for a command, finds none, and completes the operation.
Note: if you need to use sudo, do this:
sudo bash -c 'cat file2 >> file1'
The usual method of simply prepending sudo to the command will fail, since the privilege escalation doesn't carry over into the output redirection.
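An alternative sketch is to let tee, running under sudo, do the privileged writing; only the harmless redirection to /dev/null happens in your own unprivileged shell:
cat file2 | sudo tee -a file1 > /dev/null
The > /dev/null simply suppresses the copy tee would otherwise echo to the terminal.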
Try this command:
cat file2 >> file1
Just for reference, using ddrescue provides an interruptible way of achieving the task if, for example, you have large files and the need to pause and then carry on at some later point:
ddrescue -o $(wc --bytes file1 | awk '{ print $1 }') file2 file1 logfile
The logfile is the important bit. You can interrupt the process with Ctrl-C and resume it by specifying the exact same command again and ddrescue will read logfile and resume from where it left off. The -o A flag tells ddrescue to start from byte A in the output file (file1). So wc --bytes file1 | awk '{ print $1 }' just extracts the size of file1 in bytes (you can just paste in the output from ls if you like).
As pointed out by ngks in the comments, the downside is that ddrescue will probably not be installed by default, so you will have to install it manually. The other complication is that there are two versions of ddrescue which might be in your repositories: see this askubuntu question for more info. The version you want is the GNU ddrescue, and on Debian-based systems is the package named gddrescue:
sudo apt install gddrescue
For other distros check your package management system for the GNU version of ddrescue.
Another solution:
tee < file1 -a file2
tee has the benefit that you can append to as many files as you like, for example:
tee < file1 -a file2 file3 file4
will append the contents of file1 to file2, file3 and file4.
From the man page:
-a, --append
append to the given FILEs, do not overwrite
Zsh specific: You can also do this without cat, though honestly cat is more readable:
>> file1 < file2
The >> appends STDIN to file1 and the < dumps file2 to STDIN.
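For context, zsh runs the command named by its NULLCMD parameter (cat by default) when an output redirection is given without a command, so the line above behaves as:
cat < file2 >> file1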
cat is the easy solution, but it becomes very slow when concatenating large files; find with -print0 comes to the rescue, though you still have to use cat once.
amey#xps ~/work/python/tmp $ ls -lhtr
total 969M
-rw-r--r-- 1 amey amey 485M May 24 23:54 bigFile2.txt
-rw-r--r-- 1 amey amey 485M May 24 23:55 bigFile1.txt
amey#xps ~/work/python/tmp $ time cat bigFile1.txt bigFile2.txt >> out.txt
real 0m3.084s
user 0m0.012s
sys 0m2.308s
amey#xps ~/work/python/tmp $ time find . -maxdepth 1 -type f -name 'bigFile*' -print0 | xargs -0 cat -- > outFile1
real 0m2.516s
user 0m0.028s
sys 0m2.204s
