Hello, I am trying to get the difference between two text files. There are a lot of differences, and viewing them in the terminal is impractical since the output just scrolls away and I cannot save it. I want to view and save the diff. How would I capture the output and write it to a text file?
The command I am using to get the diff is:
diff -i -w -B file1.txt file2.txt
Save to text file:
diff -i -w -B file1.txt file2.txt > diff.txt
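View in the terminal and save at the same time (a small addition to the commands here; tee writes its input both to the screen and to the named file):
diff -i -w -B file1.txt file2.txt | tee diff.txt
# Add a pager if the diff is long:
diff -i -w -B file1.txt file2.txt | tee diff.txt | less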
Write directly to printer:
diff -i -w -B file1.txt file2.txt | lpr
Write the saved text file to the printer:
lpr diff.txt
Hope that helps. PSM
PS:
Here's a link on Linux command-line printing:
http://tldp.org/HOWTO/Printing-Usage-HOWTO-2.html
Generally speaking,
command > output.txt
and in your case
diff -i -w -B file1.txt file2.txt > output.txt
and if you want to append the result
command >> output.txt
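As a quick illustration of the difference (the output file names here are made up):
diff -i -w -B file1.txt file2.txt > diff-latest.txt    # overwritten on every run
diff -i -w -B file1.txt file2.txt >> diff-history.txt  # grows with each run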
Just redirect it to a file:
diff -i -w -B file1.txt file2.txt > output.diff
If you'd like to know more about redirecting output, the advanced details vary shell-to-shell, but here's a reference for bash and a cheat-sheet for the common stdout/stderr redirects.
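For reference, a few of the common bash redirections such a cheat-sheet covers (out.txt and err.txt are placeholder names):
command >  out.txt        # stdout to out.txt, truncating it first
command >> out.txt        # stdout to out.txt, appending
command 2> err.txt        # stderr only
command >  out.txt 2>&1   # stdout and stderr to the same file
command &> out.txt        # bash shorthand for the previous line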
I'm testing basic bash commands on Ubuntu. When I run
wc -l < file1.txt > file2.txt
in the terminal, all lines from file1.txt are read by wc and the count is saved to file2.txt.
However, if I use the similar command wc -l < file1.txt > file1.txt, redirecting the output back into file1.txt itself, the saved result is always 0.
What is the reason for this behavior?
When you write the command wc -l < file1.txt > file1.txt, your shell first parses the command as a whole and sets up the redirections of standard input and standard output from/to file1.txt.
When a file is opened for writing with >, any existing file is truncated: an empty file is created before the process (your wc) starts executing and writing into it.
Your wc -l then has to read the file and count its lines, but since the redirection has already emptied it, all it can write is:
0
To see this, look at what happens in append mode using >>:
$ cat file1.txt
a
b
b
$ wc -l file1.txt
3 file1.txt
$ wc -l file1.txt >> file1.txt
$ cat file1.txt
a
b
b
3 file1.txt
Your file content is intact, and wc correctly reads the number of lines before appending the result to the file.
Note:
In general, it is not recommended to read from and write to the same file unless you know exactly what you are doing. Some commands have an in-place option to modify the file while the command runs, and it is highly recommended to use those, since the file will not be corrupted even if the command crashes in the middle of its execution.
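For example, with sed the unsafe round trip and the recommended in-place form look like this (a sketch with a placeholder substitution, assuming GNU sed):
# Unsafe: the > truncates file1.txt before sed ever reads it.
# sed 's/old/new/g' file1.txt > file1.txt
# Safe: sed writes to a temporary file and renames it over the original.
sed -i 's/old/new/g' file1.txt
# Or keep a backup copy of the original:
sed -i.bak 's/old/new/g' file1.txt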
It's because the shell sets up redirections before the command runs.
Here, file1.txt is emptied by the > redirection before wc can perform any operation on it.
Try >> on file1.txt instead of >. Unlike >, which overwrites the whole file, >> appends to it. Try:
wc -l < file1.txt >> file1.txt
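A quick transcript of what that looks like (sample contents; on Ubuntu's GNU wc the count is appended without a filename, since it reads from stdin):
$ printf 'a\nb\nb\n' > file1.txt
$ wc -l < file1.txt >> file1.txt
$ cat file1.txt
a
b
b
3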
I am running a script that has been working fine. However, yesterday, I got a couple errors. These errors are after several loops of the script:
sed: can't read file3.txt: No such file or directory
grep: file3.txt: No such file or directory
grep: file3.txt: No such file or directory
sed: can't read file3.txt: No such file or directory
grep: file3.txt: No such file or directory
Keep in mind, these errors do not happen consistently; they occur only once in a while, somewhere near this part of the script. file3.txt is the file that is not being found:
cat file1.txt | while read LINE; do grep -m 1 $LINE file2.txt >> file3.txt; done
sed -i 's/string//g' file3.txt
grep 'string' file3.txt | cut -d '|' -f1-2 > file4.txt
grep -v 'string' file3.txt | cut -d '|' -f1-2 >> file5.txt
sed -i 's/string//' file3.txt
grep -Fvf file3.txt file1.txt > file6.txt
Now, I'm thinking that since file3.txt is being appended to, and later operated on by sed, sometimes the next command starts too soon and cannot find the file? Should I put a wait command in between?
I have looked up many pages with this error, but was unable to find anything:
cat file_name | grep "something" results "cat: grep: No such file or directory" in shell scripting
Pipe multiple commands to a single command with no EOF signal wait
grep command works in command line, but not in bash script: get no such file or directory error
https://serverfault.com/questions/169539/sed-cant-find-a-file-that-obviously-exists
"No such file or directory" but it exists
If you think that putting a wait or sleep command will help, please let me know. Or, if you think there's a better solution, that would be great too. I'm running on Cygwin terminal. Any insight is greatly appreciated.
Instead of redirecting to file3.txt inside the while loop, redirect the whole loop. Then the file will be created even if the loop never runs because the input file is empty.
while read LINE; do
grep -m 1 $LINE file2.txt
done < file1.txt > file3.txt
With your original loop, if file1.txt is ever empty, file3.txt is never created at all, which would explain the intermittent "No such file or directory" errors.
Also, grep -m 1 $LINE file2.txt will cause problems if the line contains awkward characters (a space is the simplest example).
Let's assume that the $LINE variable contains more than one word separated by spaces: hello world.
Now the command becomes grep -m 1 hello world file2.txt, which grep interprets roughly as: find the first line matching hello in a file named world and in the file file2.txt in the current folder.
Using "$LINE" instead of $LINE gives you an entirely different (and correct) behavior.
Look at the difference between the following two:
grep -m 1 $LINE file2.txt
grep -m 1 "$LINE" file2.txt
I am trying to recursively download several files using wget -m, and I intend to grep all of the downloaded files to find specific text. Currently, I can wait for wget to fully complete, and then run grep. However, the wget process is time consuming as there are many files and instead I would like to show progress by grep-ing each file as it downloads and printing to stdout, all before the next file downloads.
Example:
download file1
grep file1 >> output.txt
download file2
grep file2 >> output.txt
...
Thanks for any advice on how this could be achieved.
As c4f4t0r pointed out,
wget -m -O - <websites> | grep --color 'pattern'
using grep's color option to highlight the pattern can be helpful, especially when dealing with bulky output in the terminal.
EDIT:
Below is a command line you can use. It creates a file called file and saves wget's output messages to it, then tails that message file.
awk finds any line containing "saved", extracts the filename from it, and grep then searches that file for the pattern.
wget -m websites &> file & tail -f -n1 file|awk -F "\'|\`" '/saved/{system( ("grep --colour pattern ") $2)}'
Based on Xorg's solution I was able to achieve my desired effect with some minor adjustments:
wget -m -O file.txt http://google.com 2> /dev/null & sleep 1 && tail -f -n1 file.txt | grep pattern
This will print out all lines that contain pattern to stdout, and wget itself will produce no output visible in the terminal. The sleep is included because otherwise file.txt would not yet exist when the tail command runs.
As a note, this command will miss any results that wget downloads within the first second.
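One possible way around both the sleep and the missed first second (an untested sketch along the same lines): create the file before tail starts and read it from the first line with -n +1.
touch file.txt                                    # file exists before tail attaches
tail -f -n +1 file.txt | grep --line-buffered pattern &
wget -m -O file.txt http://google.com 2> /dev/null
kill %% 2> /dev/null                              # stop the background tail | grep pipeline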
I'm trying to understand a shell code which includes a line like this:
grep -n data file1.txt > file2.txt
Where data is the text I want to search for.
What does this command mean?
You can find a detailed answer here: http://explainshell.com/explain?cmd=%20grep%20-n%20data%20file1.txt%20%3E%20file2.txt
To sum it up:
grep will look for the string data in file1.txt and will output both the matching lines and their line number (because of the -n flag).
You could read the manual (man grep) to have a better understanding of what grep does.
The output will be redirected into file2.txt; that's what > is used for.
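A tiny made-up example of what ends up in file2.txt:
$ cat file1.txt
first line
some data here
last line
$ grep -n data file1.txt > file2.txt
$ cat file2.txt
2:some data here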
I know that to append or join multiple files in Linux, we can use the command: cat file1 >> file2.
But I couldn't find any command to separate file1 from file2 after joining them. In other words, I want both original file1 and file2 back again. I tried to use the split command but it just dismembers a file into multiple files with the same size.
Is there a way to do it?
There is no such command, since no information about what was file1 or file2 is retained. The new combined file is just a data stream.
In order to "split" them back up, you need rules about how to do so (such as how many bytes long file1 and file2 were).
When you perform the concatenation, the system doesn't keep track of how the resulting file was created. So it has no way of remembering where the original split was located in that file.
Can you explain what you are trying to do?
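That said, if you know in advance that you may want to undo the join, one workaround (just a sketch using GNU stat, tail and truncate, not a built-in feature of cat) is to record file2's size before appending:
orig_size=$(stat -c '%s' file2)    # remember where file2 originally ended
cat file1 >> file2
# ...later, to undo the join:
tail -c +"$((orig_size + 1))" file2 > file1.recovered   # everything after the old end
truncate -s "$orig_size" file2                          # cut file2 back to its old size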
No problem, as long as you still have file1:
$ echo foobar >file1
$ echo blah >file2
$ cat file1 >> file2
$ truncate -s $(( $(stat -c '%s' file2) - $(stat -c '%s' file1) )) file2
$ cat file2
blah
Also, instead of stat -c '%s' filename you can use wc -c filename | cut -f 1 -d ' ', which is longer but more portable.
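For what it's worth, a slightly different spelling of that size lookup (an alternative to the line above, not taken from it) redirects the file into wc, so no filename appears in the output and no cut is needed:
truncate -s $(( $(wc -c < file2) - $(wc -c < file1) )) file2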