I need to copy only the last line of a lot of files into another file. How can I do that? Please help me.
I know tail takes the last line of a file and > redirects it to another file, but can I do the same thing with a lot of files at once?
Try:
tail -qn 1 inputfile1 inputfile2 ... > outputfile
-n 1 for outputting only the last line, -q for suppressing the header.
See:
man tail
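For example, a minimal sketch with two throwaway files (the names here are made up):

```shell
# create two sample input files in a scratch directory
tmp=$(mktemp -d)
printf 'a1\na2\n' > "$tmp/in1.txt"
printf 'b1\nb2\n' > "$tmp/in2.txt"

# -n 1: print only the last line of each file
# -q:   suppress the "==> file <==" headers tail adds for multiple files
tail -q -n 1 "$tmp/in1.txt" "$tmp/in2.txt" > "$tmp/out.txt"

cat "$tmp/out.txt"   # a2 and b2, one per line
```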
I have a problem with reading the last line of a file on Linux (Ubuntu).
I have a file named auth.log and I'm trying to read its last line whenever a new line is added (i.e. after the file is modified).
I know I need to use
tail -1 /var/log/auth.log
to get the last line, but I don't know how to check the file every time it is modified.
After reading the last line I need to check whether it contains the word "login", and if it does, send that line by email. As far as I know, grep needs to be used together with the mail command.
Any help would be awesome, thanks!
Maybe something like this would work for you, using tail -f to continuously monitor the file:
while read -r; do
echo "$REPLY" | mail your#email.address
done < <(tail -f /var/log/auth.log | grep --line-buffered login)
But this would be running continuously in the background, rather than as a cron job or something.
I have a log file that I need to back up,
then empty the file instead of deleting it.
Deleting the file would trigger some other script,
hence I should only empty it.
Please suggest a way.
After you've read from the file, you can just overwrite it with > filename, which truncates it to nothing. It is equivalent to cat /dev/null > filename.
To empty a file you can use truncate -s 0 filename
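Combining the backup and the emptying, a minimal sketch (the log name is hypothetical). Note the two steps are not atomic: a line written between the cp and the truncate would be lost.

```shell
# scratch setup with a hypothetical log file
tmp=$(mktemp -d)
printf 'line1\nline2\n' > "$tmp/app.log"

# back up, then empty in place; the inode survives, so a process
# holding the log open keeps writing to the same (now empty) file
cp "$tmp/app.log" "$tmp/app.log.bak"
truncate -s 0 "$tmp/app.log"
```

truncate is a GNU coreutils tool; on systems without it, `: > file` does the same job.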
AFAIK there is no easy way to back up a file and empty it at the same time. I faced a similar problem, and what I ended up doing was reading the original file line by line, copying the lines to a new file while keeping count of them, and then removing that number of lines from the original file. I use this to manually rotate some log files for which standard rotation approaches were not an option.
ORIGINAL_FILE="file.log"
NEW_FILE="$(date +%s).file.log"
unset n
# copy line by line, counting lines as we go
while IFS= read -r line; do
    printf '%s\n' "$line" >> "$NEW_FILE"
    : $((n++))
done < "$ORIGINAL_FILE"
# then delete exactly that many lines from the original
if [[ -v n ]]; then
    sed -i "1,$n d" "$ORIGINAL_FILE"
fi
I have two files, data.txt and results.txt. Assuming there are 5 lines in data.txt, I want to copy all of those lines and paste them into results.txt starting from line number 4.
Here is a sample below:
Data.txt file:
stack
ping
dns
ip
remote
Results.txt file:
# here are some text
# please do not edit these lines
# blah blah..
this is the 4th line that data should go on.
I've tried sed with various combinations but I couldn't make it work; I'm not sure it's the right fit for this purpose either.
sed -n '4p' /path/to/file/data.txt > /path/to/file/results.txt
The above command copies line 4 only, which isn't what I'm trying to achieve. As I said above, I need to copy all lines from data.txt and paste them into results.txt starting from line 4, without modifying or overriding the first 3 lines.
Any help is greatly appreciated.
EDIT:
I want to overwrite everything from line number 4 onward in results.txt. So I want to leave the first 3 lines without modifications and overwrite the rest of the file with the data copied from data.txt.
Here's a way that works well from cron. Less chance of losing data or corrupting the file:
# preserve first lines of results
head -3 results.txt > results.TMP
# append new data
cat data.txt >> results.TMP
# rename output file atomically in case of system crash
mv results.TMP results.txt
You can use process substitution to give cat a fifo holding the first three lines. Note that redirecting straight back onto result.txt would truncate the file before head has read it, so write to a temporary file and rename:
cat <(head -3 result.txt) data.txt > result.tmp && mv result.tmp result.txt
head -n 3 /path/to/file/results.txt > /tmp/results.head
cat /path/to/file/data.txt >> /tmp/results.head
mv /tmp/results.head /path/to/file/results.txt
(Writing head's output straight back to results.txt would truncate the file before head could read it, so go through a temporary file.)
If you can use awk:
awk 'NR!=FNR || NR<4' Result.txt Data.txt
This prints the first 3 lines of Result.txt followed by all of Data.txt; redirect the output to a new file, since awk cannot safely write back to one of its own input files.
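A quick sketch of how that condition behaves, using throwaway files; the merged result goes through a temporary file before replacing the original:

```shell
tmp=$(mktemp -d)
printf 'h1\nh2\nh3\nold4\nold5\n' > "$tmp/Result.txt"
printf 'd1\nd2\n' > "$tmp/Data.txt"

# NR==FNR only while the first file is being read, so:
#   NR<4      keeps lines 1-3 of Result.txt
#   NR!=FNR   keeps every line of Data.txt
awk 'NR!=FNR || NR<4' "$tmp/Result.txt" "$tmp/Data.txt" > "$tmp/merged.txt" \
  && mv "$tmp/merged.txt" "$tmp/Result.txt"

cat "$tmp/Result.txt"   # h1 h2 h3 d1 d2, one per line
```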
I'm using a diff command that prints its output to a file. The file keeps getting an extra line at the end that I don't need. How can I prevent it from being there?
The command is as follows:
diff -b <(grep -B 2 -A 1 'bedrock.local' /Applications/MAMP/conf/apache/httpd.conf) /Applications/MAMP/conf/apache/httpd.conf > test.txt
The file being used is here (though I don't think it matters): http://yaharga.com/httpd.txt
Perhaps at least I'd like to know how to check the last line of the file and delete it only if it's blank.
To delete an empty last line you can use sed; the d command runs only when the last line is blank:
sed '${/^[[:space:]]*$/d;}' file
(GNU sed also understands \s here, but the [[:space:]] class is portable to the BSD sed that ships with macOS.)
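A self-contained check of that behaviour, using the portable [[:space:]] class (file contents invented):

```shell
tmp=$(mktemp -d)
printf 'a\nb\n\n' > "$tmp/with_blank.txt"    # blank last line
printf 'a\nb\n'   > "$tmp/no_blank.txt"      # non-blank last line

# $ addresses the last line; d fires only when that line is blank
sed '${/^[[:space:]]*$/d;}' "$tmp/with_blank.txt" > "$tmp/cleaned.txt"
sed '${/^[[:space:]]*$/d;}' "$tmp/no_blank.txt"  > "$tmp/untouched.txt"
```

Both output files end up with just the two non-blank lines; a file without a blank last line passes through unchanged.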
OK, I did some research with your file on my macOS.
I created a file new.conf with touch new.conf and then copied the data from your file into it.
By the way, I checked the file and it didn't have an extra empty line at the bottom.
I wrote a script script.sh with the following:
diff -b <(grep -B 2 -A 1 'bedrock.local' new.conf) new.conf > test.txt
sed -i.bak '1d;s/^>//' test.txt
It diffed what was needed, then deleted the useless first row and all the > markers, saving the result to test.txt.
I checked again and no extra empty line was present.
Additionally, I would suggest you try deleting the extra line like this: sed -i.bak '$d' test.txt
And check the number of lines before and after with sed -n '$=' test.txt
Probably your text editor somehow added this extra line to your file. Try something else, nano for example, or vi.
Original file contains:
B
RBWBW
RWRWWRBWWWBRBWRWWBWWB
My file contains :
B
RBWBW
RWRWWRBWWWBRBWRWWBWWB
However, when I use the command diff original myfile, it shows the following:
1,3c1,3
< B
< RBWBW
< RWRWWRBWWWBRBWRWWBWWB
---
> B
> RBWBW
> RWRWWRBWWWBRBWRWWBWWB
When I add the -w flag (diff original myfile -w) it shows no differences... but I'm absolutely sure these two files do not have whitespace/line-ending differences. What's the problem?
These texts are equal.
Maybe you have extra white spaces.
Try:
diff -w -B file1.txt file2.txt
-w Ignore all white space.
-B Ignore changes whose lines are all blank.
As seen in the comments, you must have different line endings, caused by the original file coming from a DOS system. That's why using -w dropped the line endings and the files matched.
To repair the file, execute:
dos2unix file
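If dos2unix isn't installed, stripping the carriage returns with tr does the same job; a sketch with an invented file:

```shell
tmp=$(mktemp -d)
printf 'B\r\nRBWBW\r\n' > "$tmp/original"    # DOS (CRLF) line endings

# delete every carriage-return byte, writing a Unix (LF) copy
tr -d '\r' < "$tmp/original" > "$tmp/original.unix"
```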
Look at them in hex format. That way you can really see whether they are the same.
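od -c makes the difference visible: a CRLF file shows a \r before each \n, while a Unix file shows only \n. A sketch with made-up one-line files:

```shell
tmp=$(mktemp -d)
printf 'B\r\n' > "$tmp/dos.txt"     # CRLF ending
printf 'B\n'   > "$tmp/unix.txt"    # LF ending

od -c "$tmp/dos.txt"     # ...  B  \r  \n
od -c "$tmp/unix.txt"    # ...  B  \n
```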