Avoid error with cat command when src file doesn't exist - linux

I am trying to copy the content of file1 to file2 using the Linux command
cat file1 > file2
file1 may or may not be available, depending on the environment where the program is run. What should be added to the command so that it doesn't return an error when file1 is not available? I have read that appending 2>/dev/null suppresses the error. While that's true, and I didn't get an error, the command
cat file1 2>/dev/null > file2
made file2's previous content completely empty when file1 wasn't there. I don't want to lose the content of file2 when file1 is missing, and I don't want an error returned.
Also, in what other cases can the command fail and return an error?

Test for file1 first.
[ -r file1 ] && cat ...
See help test for details.
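For example, applied to the command from the question, the guard keeps cat (and the truncating > file2 redirection) from running at all when file1 is absent or unreadable:
[ -r file1 ] && cat file1 > file2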

Elaborating on Ignacio Vazquez-Abrams's answer:
if test -e file1; then cat file1 > file2; fi

Suppose file1 is empty and file2 contains the following content:
praveen
Now append the content of file1 to file2, redirecting stderr to /dev/null so that no error is shown:
cat file1 >> file2 2>/dev/null
Because >> appends rather than truncates, file2's previous content is not deleted; it still contains:
praveen
You can also test for file1 first and fall back to the error-suppressed form:
if [ -f file1 ]
then
    cat file1 >> file2
else
    cat file1 >> file2 2>/dev/null
fi

First, you wrote:
I am trying to copy the content of file1 to file2 using the Linux command
To copy the content of file1 to file2, use the cp command:
if ! cp file1 file2 2>/dev/null ; then
    echo "file1 does not exist or isn't readable"
fi
Just for completeness, with cat:
I would redirect stderr to /dev/null and check the return value:
if ! cat file1 2>/dev/null > file2 ; then
    rm file2
    echo "file1 does not exist or isn't readable"
fi
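Note that by the time cat fails, > file2 has already truncated file2. If preserving file2's existing content matters, as the question asks, one variant (a sketch assuming a POSIX shell; file2.tmp is an illustrative name) is to write to a temporary file and only replace file2 on success:
if cat file1 > file2.tmp 2>/dev/null ; then
    mv file2.tmp file2
else
    rm -f file2.tmp
    echo "file1 does not exist or isn't readable"
fi
This way file2 is untouched when file1 is missing or unreadable.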

Related

How to write a shell script to append multiple lines of data to 3 different files at a time, and check whether that data already exists and ignore it?

I tried using:
sed -i '$ a hello' foo.txt
but when I try to use it for multiple files it doesn't work as I intended. Can someone please help me sort this out? I appreciate your response. Thanks!
Check out the tee command. Something like
echo "new line" | tee -a file1 file2 file3
To keep it from also sending the line to stdout, you can redirect tee's output to /dev/null:
echo "new line" | tee -a file1 file2 file3 > /dev/null
You can read more in its manpage: man tee.

Using Sed to extract the headers in multiple files

I used head -3 to extract headers from some files whose header data I needed to show. I did this:
head -3 file1 file2 file3
and head -3 * works also.
I thought sed 3 file1 file2 file3 would work, but it only gives the first file's output and not the others. I then tried sed -n '1,2p' file1 file2 file3. Again, only the first file produced any output. I also tried with a wildcard, sed -n '1,2p' filename*: same result, only the first file's output.
Everything I read suggests that sed with multiple filenames should work.
Thanks in advance
Assuming GNU sed, since the question is tagged linux. From the GNU sed manual:
-s
--separate
By default, sed will consider the files specified on the command line as a single continuous long stream. This GNU sed extension allows the user to consider them as separate files: range addresses (such as ‘/abc/,/def/’) are not allowed to span several files, line numbers are relative to the start of each file, $ refers to the last line of each file, and files invoked from the R commands are rewound at the start of each file.
Example:
$ cat file1
foo
bar
$ cat file2
123
456
$ sed -n '1p' file1 file2
foo
$ sed -n '3p' file1 file2
123
$ sed -sn '1p' file1 file2
foo
123
When using -i, the -s option is implied:
$ sed -i '1chello' file1 file2
$ cat file1
hello
bar
$ cat file2
hello
456
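Applied to the original goal of printing each file's first lines (continuing with the files as last edited above):
$ sed -sn '1,2p' file1 file2
hello
bar
hello
456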

Grep No such file or directory Error In Bash Script, Should I Insert Wait Command?

I am running a script that has been working fine. However, yesterday I got a couple of errors. These errors appear after several iterations of the script's loop:
sed: can't read file3.txt: No such file or directory
grep: file3.txt: No such file or directory
grep: file3.txt: No such file or directory
sed: can't read file3.txt: No such file or directory
grep: file3.txt: No such file or directory
Keep in mind, these errors do not happen consistently; they occur once in a while, somewhere near this part of the script. file3.txt is the file not being found:
cat file1.txt | while read LINE; do grep -m 1 $LINE file2.txt >> file3.txt; done
sed -i 's/string//g' file3.txt
grep 'string' file3.txt | cut -d '|' -f1-2 > file4.txt
grep -v 'string' file3.txt | cut -d '|' -f1-2 >> file5.txt
sed -i 's/string//' file3.txt
grep -Fvf file3.txt file1.txt > file6.txt
Now, I'm thinking that since file3.txt is being appended to, and later operated on by sed, sometimes the next command starts too soon and can't find the file? Should I put a wait command in between?
I have looked up many pages with this error, but was unable to find anything:
cat file_name | grep "something" results "cat: grep: No such file or directory" in shell scripting
Pipe multiple commands to a single command with no EOF signal wait
grep command works in command line, but not in bash script: get no such file or directory error
https://serverfault.com/questions/169539/sed-cant-find-a-file-that-obviously-exists
"No such file or directory" but it exists
If you think that putting a wait or sleep command will help, please let me know. Or, if you think there's a better solution, that would be great too. I'm running in a Cygwin terminal. Any insight is greatly appreciated.
Instead of redirecting to file3.txt inside the while loop, redirect the whole loop. Then the file will be created even if the loop never runs because the input file is empty.
while read LINE; do
    grep -m 1 $LINE file2.txt
done < file1.txt > file3.txt
With the original code, if file1.txt is ever empty then file3.txt won't be created at all, which explains the intermittent "No such file or directory" errors.
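A quick way to see the difference, using an empty input file and assuming file3.txt does not already exist (the grep pattern here is just a placeholder):
$ : > file1.txt
$ while read LINE; do grep hi file2.txt >> file3.txt; done < file1.txt
$ ls file3.txt
ls: cannot access 'file3.txt': No such file or directory
$ while read LINE; do grep hi file2.txt; done < file1.txt > file3.txt
$ ls file3.txt
file3.txt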
Also, grep -m 1 $LINE file2.txt will cause problems if $LINE contains special characters (a space is the simplest example).
Assume the $LINE variable contains more than one word separated by spaces: hello world.
The command then becomes grep -m 1 hello world file2.txt, which grep interprets roughly as: find the first match of hello in a file named world and in a file named file2.txt in the current folder.
Using "$LINE" instead of $LINE leads to a completely different outcome.
Look at the difference between the following two:
grep -m 1 $LINE file2.txt
grep -m 1 "$LINE" file2.txt

tail dynamically created files in linux

I have a list of files in a directory, as below:
file1
file2
file3
....
....
Files get created dynamically by a process.
Now when I do tail -f file* > data.txt,
file* matches only the files that already exist in the directory.
For example, with existing files:
file1
file2
I run: tail -f file* > data.txt
While tail is running, a new file named file3 gets created.
(Here I need file3 to be included in the tail as well, without restarting the command.)
Currently, however, I need to stop tail and start it again so that dynamically created files are also tailed.
Is there a way to make tail dynamically include files whenever a new file is created, or any workaround for this?
I have an answer that satisfies most, but not all, of your requirements:
You can use
tail -f --follow=name --retry file1 file2 file3 > data.txt
This will keep trying to open files 1, 2, and 3 until they become available. It will keep printing output even if one of the files disappears and reappears again.
example usage:
first create two dummy files:
echo a >> file1
echo b >> file2
now use tail (in a separate window):
tail -f --follow=name --retry file1 file2 file3 > data.txt
now append some data and do some other manipulations:
echo b >> file2
echo c >> file3
rm file1
echo a >> file1
This is the final output. Note that all three files are taken into account, even though they weren't all present the whole time:
==> file1 <==
a
==> file2 <==
b
tail: cannot open ‘file3’ for reading: No such file or directory
==> file2 <==
b
tail: ‘file3’ has become accessible
==> file3 <==
c
tail: ‘file1’ has become inaccessible: No such file or directory
==> file1 <==
a
Remark: this won't work with file*, because that is a glob pattern that is expanded before execution. Suppose you run:
tail -f file*
and only file1 and file2 are present; then tail is invoked as:
tail -f file1 file2
The glob expansion cannot know which files would eventually match the pattern. So this is a partial answer: if you know all the possible names of the files that will be created, this will do the trick.
You could use inotifywait to inform you of any files created in a directory. Read the output and start a new tail -f as a background process for each new file created.
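A rough sketch of that approach, assuming a GNU/Linux system with inotify-tools installed (the directory, output file, and variable names are illustrative):
inotifywait -m -e create --format '%f' . | while read -r newfile; do
    tail -f "$newfile" >> data.txt &
done
Files that already exist when this starts would still need their own tail -f processes started beforehand.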

Separating a joined file into the original files in Linux

I know that to append or join multiple files in Linux, we can use the command: cat file1 >> file2.
But I couldn't find any command to separate file1 from file2 after joining them. In other words, I want the original file1 and file2 back again. I tried the split command, but it just chops a file into multiple pieces of the same size.
Is there a way to do it?
There is no such command, since no information about what was file1 or file2 is retained. The new combined file is just a data stream.
In order to "split" them back up, you need rules about how to do so (such as how many bytes long file1 and file2 were).
When you perform the concatenation, the system doesn't keep track of how the resulting file was created. So it has no way of remembering where the original split was located in that file.
Can you explain what you are trying to do?
No problem, as long as you still have file1:
$ echo foobar >file1
$ echo blah >file2
$ cat file1 >> file2
$ truncate -s $(( $(stat -c '%s' file2) - $(stat -c '%s' file1) )) file2
$ cat file2
blah
Also, instead of stat -c '%s' filename you can use wc -c filename | cut -f 1 -d ' ', which is longer but more portable.
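And since file1 is still present in this scenario, the appended copy can likewise be pulled back out of the combined file (before truncating) by taking the last file1-sized chunk:
$ tail -c "$(stat -c '%s' file1)" file2
foobar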
