Tail dynamically created files in Linux

I have a list of files under a directory, as below:
file1
file2
file3
....
....
Files are created dynamically by a process.
Now when I run tail -f file* > data.txt,
file* matches only the files that already exist in the directory.
For example, with these existing files:
file1
file2
I run: tail -f file* > data.txt
While tail is running, a new file named file3 is created
(I need file3 included in the tail as well, without restarting the command);
currently I have to stop tail and start it again so that dynamically created files are also tailed.
Is there a way to dynamically include new files in tail as they are created, or any workaround for this?

I have an answer that satisfies most, but not all, of your requirements:
You can use
tail -f --follow=name --retry file1 file2 file3 > data.txt
This keeps trying to open file1, file2 and file3 until they become available, and keeps printing output even if one of the files disappears and reappears.
Example usage:
First, create two dummy files:
echo a >> file1
echo b >> file2
Now run tail (in a separate window):
tail -f --follow=name --retry file1 file2 file3 > data.txt
Now append some data and perform some other manipulations:
echo b >> file2
echo c >> file3
rm file1
echo a >> file1
This is the final output. Note that all three files are taken into account, even though they weren't all present the whole time:
==> file1 <==
a
==> file2 <==
b
tail: cannot open ‘file3’ for reading: No such file or directory
==> file2 <==
b
tail: ‘file3’ has become accessible
==> file3 <==
c
tail: ‘file1’ has become inaccessible: No such file or directory
==> file1 <==
a
Remark: this won't work with file*, because a glob pattern is expanded before the command executes. Suppose you run:
tail -f file*
and only file1 and file2 are present; tail then receives:
tail -f file1 file2
The glob expansion cannot know which files will eventually match the pattern. So this is a partial answer: if you know all the possible names of the files that will be created, this will do the trick.
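The expansion can be made visible with echo, which receives the already-expanded words exactly as tail would (a quick sketch in a scratch directory):

```shell
# Create a scratch directory with only file1 and file2 present.
mkdir -p globdemo && cd globdemo
touch file1 file2
# echo prints the words tail would actually receive: the glob is
# expanded by the shell before the command ever runs.
echo tail -f file*    # prints: tail -f file1 file2
```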

You could use inotifywait to inform you of any files created in a directory. Read the output and start a new tail -f as a background process for each new file created.
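A rough sketch of that approach, assuming inotify-tools is installed; the directory argument and data.txt are placeholders. It is wrapped in a function here rather than started directly, since the watcher runs until killed:

```shell
# Sketch: start one background "tail -f" per file created in a directory.
# Assumes the inotifywait tool (from inotify-tools) is available.
watch_and_tail() {
    dir=$1
    # -m: keep watching; -e create: report only newly created files;
    # --format '%w%f' prints the full path of each new file.
    inotifywait -m -e create --format '%w%f' "$dir" |
    while read -r path; do
        tail -f "$path" >> data.txt &   # follow each new file in the background
    done
}
# usage (runs until killed): watch_and_tail /some/dir &
```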


How to use grep in a shell script?

I am trying to write a shell script that prints the last modification dates of the following files.
Somehow the script just prints an empty line.
modified is a file containing the names and modification dates of the files in the following format (delimiter: '#'):
>modified
for i in file1 file2 file3
do
    echo "$i#$(stat --printf='%y\n' "$i")" >> modified
done
Having created that file, I'm trying to search it like:
for i in file1 file2 file3
do
    var=$(grep -w "$i" modified | cut -d'#' -f2)
    echo "$var"
done
As mentioned by Charles, there's no reason to create that modified file for this (unless you plan to use the file for another purpose).
Also, you can pass several file arguments to a single stat command, as in:
stat --printf='%y\n' file1 file2 file3
This gives exactly the output you're aiming for.
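And if you do want the name#date lines from the question, a single stat call can produce them too, since %n expands to the file name (a sketch; the touched files are just placeholders):

```shell
touch file1 file2 file3
# %n is the file name, %y the last modification time: one stat call
# writes every "name#date" line at once, no loop needed.
stat --printf='%n#%y\n' file1 file2 file3 > modified
cat modified
```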

Avoid error with cat command when src file doesn't exist

I am trying to copy the content of file1 to file2 using the Linux command
cat file1 > file2
file1 may or may not be available, depending on the environment where the program is run. What should be added to the command so that it doesn't return an error when file1 is not available? I have read that appending 2>/dev/null suppresses the error. While that's true, and I didn't get an error, the command
cat file1 2>/dev/null > file2
emptied file2's previous content when file1 wasn't there. I don't want to lose the content of file2 when file1 is missing, and I don't want an error returned.
Also, in what other cases can the command fail and return an error?
Test for file1 first.
[ -r file1 ] && cat ...
See help test for details.
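A minimal sketch of that guard, showing that file2 keeps its previous content when file1 is missing (the sample content is made up):

```shell
echo kept > file2      # pre-existing content worth preserving
rm -f file1            # simulate the missing source file
# The copy runs only when file1 exists and is readable, so file2
# is never truncated by a failed cat.
if [ -r file1 ]; then
    cat file1 > file2
fi
cat file2              # still prints: kept
```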
Elaborating on Ignacio Vazquez-Abrams's answer:
if test -e file1; then cat file1 > file2; fi
Suppose file1 is empty and file2 has the content below:
praveen
Now I try to append the content of file1 to file2.
Since file1 may be missing, I redirect stderr to /dev/null so the output shows no error:
cat file1 >> file2 2>/dev/null
file2's previous content is not deleted; it still contains:
praveen
With an explicit test:
if [ -f file1 ]
then
    cat file1 >> file2
else
    cat file1 >> file2 2>/dev/null
fi
First, you wrote:
I am trying to copy content of file1 to file 2 using linux command
To copy content of file1 to file2 use the cp command:
if ! cp file1 file2 2>/dev/null ; then
echo "file1 does not exist or isn't readable"
fi
Just for completeness, with cat:
I would pipe stderr to /dev/null and check the return value:
if ! cat file1 2>/dev/null > file2 ; then
rm file2
echo "file1 does not exist or isn't readable"
fi

Linux: Overwrite all files in folder with specified data?

I have a question, can linux overwrite all files in specified folder with data?
I have multiple files in folder:
file1.mp4 file2.mp3, file3.sh, file4.jpg
Both have some data (music, videos.. etc)
And I want to automatically overwrite these files with custom data (for example dummy file)
You can use tee
tee - read from standard input and write to standard output and files
$ echo "writing to file" > file1
$ echo "writing something else to all files" | tee file1 file2 file3
$ head *
==> file1 <==
writing something else to all files
==> file2 <==
writing something else to all files
==> file3 <==
writing something else to all files
With the cat command:
for f in folder/*; do cat dummyfile > "$f"; done
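The two approaches can also be combined: tee accepts several output files, so a glob overwrites everything in one pass (the folder and dummyfile names are placeholders from the question):

```shell
mkdir -p folder
touch folder/file1.mp4 folder/file2.mp3
echo "dummy data" > dummyfile
# The shell expands folder/* to every file in the folder and tee
# overwrites each one; the copy echoed to stdout is discarded.
tee folder/* < dummyfile > /dev/null
cat folder/file1.mp4    # prints: dummy data
```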

shell script to compare two files and write the difference to third file

I want to compare two files and redirect the difference between them to a third file.
file1:
/opt/a/a.sql
/opt/b/b.sql
/opt/c/c.sql
If any line is commented out with a leading # (e.g. # before /opt/c/c.sql), it should be skipped.
file2:
/opt/c/c.sql
/opt/a/a.sql
I want the difference between the two files; in this case, /opt/b/b.sql should be stored in a different file. Can anyone help me achieve this?
file1
$ cat file1 # both file1 and file2 may contain blank lines or leading spaces, which are ignored
/opt/a/a.sql
/opt/b/b.sql
/opt/c/c.sql
/opt/h/m.sql
file2
$ cat file2
/opt/c/c.sql
/opt/a/a.sql
Do
awk 'NR==FNR { line[$1]; next }
     { if (!($1 in line) && $0 != "") print }
' file2 file1 > file3
file3
$ cat file3
/opt/b/b.sql
/opt/h/m.sql
Notes:
The order of the files passed to awk matters: pass the file to check against - file2 here - first, followed by the master file - file1.
Check the awk documentation to understand what is done here.
You can use some tools like cat, sed, sort and uniq.
The main observation is this: if a line is in both files, then it is not unique in the output of cat file1 file2.
Furthermore, in cat file1 file2 | sort, all duplicates end up adjacent. uniq -u then keeps only the unique lines, giving this pipe:
cat file1 file2 | sort | uniq -u
Note that this yields lines unique to either file (the symmetric difference), which matches the example because file2 is a subset of file1.
Using sed to remove leading whitespace, comment lines and empty lines, we get this final pipe:
cat file1 file2 | sed -r 's/^[ \t]+//; /^#/ d; /^$/ d;' | sort | uniq -u > file3
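Running the pipeline on the sample data from above confirms the result (recreated here with printf for the demo):

```shell
# Recreate the sample files from the question.
printf '%s\n' /opt/a/a.sql /opt/b/b.sql /opt/c/c.sql /opt/h/m.sql > file1
printf '%s\n' /opt/c/c.sql /opt/a/a.sql > file2
# Lines present in both files sort next to each other and are
# dropped by uniq -u; only the unique lines survive.
cat file1 file2 | sed -r 's/^[ \t]+//; /^#/ d; /^$/ d;' | sort | uniq -u > file3
cat file3    # prints /opt/b/b.sql and /opt/h/m.sql
```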

Separating a joined file to original files in Linux

I know that to append or join multiple files in Linux, we can use the command cat file1 >> file2.
But I couldn't find any command to separate file1 from file2 after joining them. In other words, I want both original file1 and file2 back. I tried the split command, but it just chops a file into multiple pieces of the same size.
Is there a way to do it?
There is no such command, since no information about what was file1 or file2 is retained: the combined file is just a data stream.
In order to "split" them back up, you need rules about how to do so (such as how many bytes long file1 and file2 were).
When you perform the concatenation, the system doesn't keep track of how the resulting file was created, so it has no way of remembering where the original boundary was located in that file.
Can you explain what you are trying to do?
No problem, as long as you still have file1:
$ echo foobar >file1
$ echo blah >file2
$ cat file1 >> file2
$ truncate -s $(( $(stat -c '%s' file2) - $(stat -c '%s' file1) )) file2
$ cat file2
blah
Also, instead of stat -c '%s' filename you can use wc -c filename | cut -f 1 -d ' ', which is longer but more portable.
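Reading the file through a redirection keeps the file name out of wc's output, so the cut step can be dropped; the whole trick then looks like this (a sketch with dummy content):

```shell
echo foobar > file1
echo blah > file2
cat file1 >> file2
# "wc -c < file" prints only the byte count, so no cut is needed;
# shrink file2 back by the size of the appended file1.
truncate -s $(( $(wc -c < file2) - $(wc -c < file1) )) file2
cat file2    # prints: blah
```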
