I have a question: can Linux overwrite all files in a specified folder with data?
I have multiple files in a folder:
file1.mp4, file2.mp3, file3.sh, file4.jpg
Each of them contains some data (music, videos, etc.),
and I want to automatically overwrite these files with custom data (for example, the contents of a dummy file).
You can use tee
tee - read from standard input and write to standard output and files
$ echo "writing to file" > file1
$ echo "writing something else to all files" | tee file1 file2 file3
$ head *
==> file1 <==
writing something else to all files
==> file2 <==
writing something else to all files
==> file3 <==
writing something else to all files
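To hit every file in the folder at once, you can combine tee with a shell glob. A minimal sketch, assuming the target folder is folder/ and the replacement data lives in dummyfile (kept outside that folder):
tee folder/* < dummyfile > /dev/null
tee writes its standard input to every file the glob matches; the copy it sends to standard output is discarded via /dev/null.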
With the cat command:
for f in folder/*; do cat dummyfile > "$f"; done
This question already has answers here:
How can I use a file in a command and redirect output to the same file without truncating it?
I'm testing basic bash commands on Ubuntu. When I run in the terminal
wc -l < file1.txt > file2.txt
all lines from file1.txt are read by wc and the count is saved to file2.txt.
However, if I use the similar command wc -l < file1.txt > file1.txt, trying to save the result into the same file1.txt, the output is always 0.
What is the reason for this behavior?
When you write the command wc -l < file1.txt > file1.txt, your shell first parses the command as a whole and sets up the redirection of standard input and standard output from/to file1.txt.
When a file is opened for writing with >, any existing file is truncated and an empty file is created. Only then does the process (your wc) start executing, and it writes into the file until the end of its execution.
Your wc -l still has to read the file and count the lines in it, but since the file is already empty (truncated), it simply writes:
0 file1.txt
To prove this, look at what happens in append mode using >>:
$ cat file1.txt
a
b
b
$ wc -l file1.txt
3 file1.txt
$ wc -l file1.txt >> file1.txt
$ cat file1.txt
a
b
b
3 file1.txt
Your file content is intact and wc does properly read the number of lines before appending the result to the file.
Note:
In general, reading from and writing to the same file is never recommended unless you know exactly what you are doing. Some commands have an in-place option to modify the file during the run, and it is highly recommended to use those, as the file will not be corrupted even if the command crashes in the middle of its execution.
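For example, GNU sed's -i option is such an in-place edit: it writes its output to a temporary file and only then replaces the original, so the input is never truncated while it is still being read (the substitution pattern here is just a placeholder):
sed -i 's/old/new/g' file1.txt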
It's because the shell performs redirections before the command runs.
Here file1.txt is emptied before any operation can be performed on it.
Try >> on file1.txt instead of >. >> appends to the file, unlike >, which overwrites the whole file. Try:
wc -l < file1.txt >> file1.txt
I have two files, a.txt and b.txt.
I would like to report to the output ONLY when the files are the same,
so:
if the files are the same -> report
if the files are different -> do not report anything
I know that diff has a -s option which reports when the files are the same, but when the files are different it reports as well (and I want no report at all when the files are different).
Oh, one more thing: I am not able to install anything additional.
You tagged your question linux and batch-file, which is contradictory. Here is a batch-file solution:
fc file1 file2 >nul && (echo same) || (echo different)
"to not report when files are different", just skip the || (echo different) part
Chances are, if you have diff available, you will have grep available too. So pipe the diff output through grep to check which result you get, and act accordingly. diff -qs will output "Files a.txt and b.txt are identical" or "Files a.txt and b.txt differ". So you can check for the presence of "identical" in the output to find your case.
if diff -qs a.txt b.txt | grep -q identical; then
echo "Files are identical. Reporting"
else
# Do nothing
fi
Or as a one-liner:
(diff -qs a.txt b.txt | grep -q identical) && echo "Files are identical."
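If cmp is available (it ships in the same diffutils package as diff), you can skip the grep entirely: cmp -s is silent and exits with status 0 only when the two files are byte-for-byte identical.
cmp -s a.txt b.txt && echo "Files are identical. Reporting"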
I am trying to copy the content of file1 to file2 using the Linux command
cat file1 > file2
file1 may or may not be available, depending on the environment where the program is run. What should be added to the command so that it doesn't return an error when file1 is not available? I have read that appending 2>/dev/null suppresses the error. While that's true, and I didn't get an error, the command
cat file1 2>/dev/null > file2
emptied file2's previous content when file1 wasn't there. I don't want to lose the content of file2 when file1 isn't there, and I don't want an error returned either.
Also, in what other cases can the command fail and return an error?
Test for file1 first.
[ -r file1 ] && cat ...
See help test for details.
Elaborating on Ignacio Vazquez-Abrams' answer:
if (test -a file1); then cat file1 > file2; fi
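If you also want to guard against file1 existing but not being readable, -r (as used in the answer above) is the stricter test; a minimal sketch:
if [ -r file1 ]; then cat file1 > file2; fi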
file1 is empty.
file2 contains the following content:
praveen
Now I am trying to append the content of file1 to file2. Since file1 is empty, I redirect errors to /dev/null so the output will not show any error:
cat file1 >> file2 2>/dev/null
file2's content does not get deleted; it still exists:
praveen
if [ -f file1 ]
then
    cat file1 >> file2
else
    cat file1 >> file2 2>/dev/null
fi
First, you wrote:
I am trying to copy the content of file1 to file2 using the Linux command
To copy the content of file1 to file2, use the cp command:
if ! cp file1 file2 2>/dev/null ; then
echo "file1 does not exist or isn't readable"
fi
Just for completeness, with cat:
I would redirect stderr to /dev/null and check the return value:
if ! cat file1 2>/dev/null > file2 ; then
rm file2
echo "file1 does not exist or isn't readable"
fi
I have a list of files under a directory, as below:
file1
file2
file3
....
....
Files get created dynamically by a process.
Now when I do tail -f file* > data.txt,
file* matches only the files that already exist in the directory.
For example, the existing files are:
file1
file2
I do: tail -f file* > data.txt
While tail is running, a new file named file3 gets created.
(Here I need to include file3 as well in the tail, without restarting the command.)
Currently I have to stop tail and start it again so that the dynamically created files are also tailed.
Is there a way to dynamically include files in tail whenever a new file is created, or any workaround for this?
I have an answer that satisfies most, but not all, of your requirements:
You can use
tail -f --follow=name --retry file1 file2 file3 > data.txt
This will keep trying to open file1, file2 and file3 until they become available. It will keep printing output even if one of the files disappears and reappears again.
Example usage:
First create two dummy files:
echo a >> file1
echo b >> file2
Now use tail (in a separate window):
tail -f --follow=name --retry file1 file2 file3 > data.txt
Now append some data and do some other manipulations:
echo b >> file2
echo c >> file3
rm file1
echo a >> file1
This is the final output. Note that all three files are taken into account, even though they weren't all present the whole time:
==> file1 <==
a
==> file2 <==
b
tail: cannot open ‘file3’ for reading: No such file or directory
==> file2 <==
b
tail: ‘file3’ has become accessible
==> file3 <==
c
tail: ‘file1’ has become inaccessible: No such file or directory
==> file1 <==
a
Remark: this won't work with file*, because that is a glob pattern that is expanded by the shell before tail runs. Suppose you do:
tail -f file*
and only file1 and file2 are present; then tail gets as input:
tail -f file1 file2
The glob expansion cannot know which files will eventually match the pattern. So this is a partial answer: if you know all the possible names of the files that will be created, this will do the trick.
You could use inotifywait to inform you of any files created in a directory. Read the output and start a new tail -f as a background process for each new file created.
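A minimal sketch of that idea, assuming inotify-tools is installed and the new files appear in the current directory (files that already exist would still need their own tail):
inotifywait -m -e create --format '%w%f' . |
while read -r newfile; do
    # follow each newly created file in the background
    tail -f "$newfile" >> data.txt &
done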
I know that to append or join multiple files in Linux, we can use the command: cat file1 >> file2.
But I couldn't find any command to separate file1 from file2 after joining them. In other words, I want both original file1 and file2 back again. I tried to use the split command, but it just chops a file into multiple pieces of the same size.
Is there a way to do it?
There is no such command, since no information about what was file1 or file2 is retained. The new combined file is just a data stream.
In order to "split" them back up, you need rules about how to do so (such as how many bytes long file1 and file2 were).
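For example, if you had recorded beforehand that the original file2 was exactly 1000 bytes long (a made-up size for this sketch), you could recover both parts from the combined file2 with head and tail:
head -c 1000 file2 > file2.orig
tail -c +1001 file2 > file1.orig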
When you perform the concatenation, the system doesn't keep track of how the resulting file was created. So it has no way of remembering where the original split was located in that file.
Can you explain what you are trying to do?
No problem, as long as you still have file1:
$ echo foobar >file1
$ echo blah >file2
$ cat file1 >> file2
$ truncate -s $(( $(stat -c '%s' file2) - $(stat -c '%s' file1) )) file2
$ cat file2
blah
Also, instead of stat -c '%s' filename you can use wc -c filename | cut -f 1 -d ' ', which is longer but more portable.
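Putting that together, a sketch of the more portable variant (still assuming GNU truncate is available; reading the file on stdin makes wc -c print only the byte count, so the cut is not even needed):
truncate -s $(( $(wc -c < file2) - $(wc -c < file1) )) file2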