tail -f starting with the entire file - tail

I would like to follow a file but tail -f always starts with the last 10 lines. Is there a way to output the entire file then follow?
My goal is to find all occurrences of a string in a log, as in tail -f streaming_log | grep "string", but also include all previous lines.
I know I can do tail -f -n 10000 file but I don't want to count the lines first.

-n +<line> allows you to specify a starting line (1-based):
tail -f -n +1 file # output entire file, then wait for new content
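Applied to the question's goal, here is a self-contained sketch (demo.log, the string, and the timings are illustrative; timeout from GNU coreutils just bounds the follow so the demo terminates):

```shell
# Build a small log, then follow it from line 1 while a background
# writer appends a matching line; grep filters old and new lines alike.
printf 'old entry with string\nunrelated entry\n' > demo.log
( sleep 0.2; echo 'new entry with string' >> demo.log ) &
timeout 1 tail -f -n +1 demo.log | grep "string"
wait
```

In real use you would drop timeout and let tail -f run until interrupted.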

Related

coreutils: print nth line in file, block until it exists

How would I print the nth line of a file or input, and block until it exists? I want to stick to coreutils.
sed 'NUMq;d' file will quickly give me the nth line, but doesn't block.
tail -f file will block, but doesn't do the other thing.
I should be able to pipe the line to something else, e.g. with a file:
<block-until-line-20-exists> file | <process-line>
or, with input:
tail -n 0 -f file | <block-until-line-20-exists> | <process-line>
Never mind, I think I answered my own question: tail and sed are the solution after all.
tail -n 0 -f file | sed '20q;d'
It turns out that tail stops blocking once sed completes (tail receives SIGPIPE the next time it writes to the closed pipe).
EDIT: depending on how you use this, there is a potential for a time-of-check/time-of-use bug.
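A quick terminating demonstration of that pipeline, using the whole-file variant tail -n +1 -f so sed counts the file's own lines (the file name, the 20-line target, and the timings are illustrative; the extra trailing append gives tail something to write so it notices the broken pipe, and timeout from coreutils is just a safety net):

```shell
# 19 lines exist; line 20 arrives later. The pipeline blocks until
# line 20 exists, prints it, and exits.
seq 19 > demo.txt
( sleep 0.2; echo 'line twenty' >> demo.txt
  sleep 0.2; echo 'one more line' >> demo.txt ) &
timeout 2 tail -n +1 -f demo.txt | sed '20q;d'   # prints: line twenty
wait
```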

How to capture a file name when using unzip -c and doing multiple greps

I am running the following command:
for file in 2017120[1-9]/54100_*.zip; do unzip -c "$file" | grep "3613825" | grep '3418665' ; done
This does a great job of pulling the data that matches my grep parameters, but I can't figure out how to capture which file the results came from.
I have tried adding grep -H but the result comes back with (standard input).
How can I capture the file name?
When I need to do something like this, I just add an echo of the file name to the for loop, like this:
for file in 2017120[1-9]/54100_*.zip; do echo "$file"; unzip -c "$file" | grep "3613825" | grep '3418665' ; done
This prints the list of files, and any matching grep line prints immediately after the file it was found in, like this:
file_1
file_2
file_3
matching line
file_4
file_5
another matching line
file_6
...
Thus I know the matching lines occurred in file_3 and file_5.
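An alternative is to tag each matching line with its file name instead of echoing every file: pipe the final grep through sed. A sketch with plain text files (with the zips from the question you would replace cat with unzip -c; the file names and contents here are made up):

```shell
# Hypothetical input files: only file_a.txt contains both numbers.
printf 'x 3613825 y 3418665 z\n' > file_a.txt
printf 'no match here\n'         > file_b.txt

for file in file_*.txt; do
  # Prefix every line that survives both greps with its file name.
  cat "$file" | grep "3613825" | grep "3418665" | sed "s|^|$file: |"
done
# prints: file_a.txt: x 3613825 y 3418665 z
```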

Retrieve last 100 lines logs

I need to retrieve last 100 lines of logs from the log file.
I tried the sed command
sed -n -e '100,$p' logfilename
Please let me know how can I change this command to specifically retrieve the last 100 lines.
You can use the tail command as follows:
tail -100 <log file> > newLogfile
The last 100 lines will now be in newLogfile.
EDIT:
As twalberg mentioned, more recent versions of tail use:
tail -n 100 <log file> > newLogfile
"tail" is command to display the last part of a file, using proper available switches helps us to get more specific output. the most used switch for me is -n and -f
SYNOPSIS
tail [-F | -f | -r] [-q] [-b number | -c number | -n number] [file ...]
Here:
-n number : The location is number lines.
-f : The -f option causes tail to not stop when end of file is reached, but rather to wait for additional data to be appended to the input. The -f option is ignored if the standard input is a pipe, but not if it is a FIFO.
To get last static 100 lines
tail -n 100 <file path>
To get real time last 100 lines
tail -f -n 100 <file path>
You can simply use the following command:
tail -NUMBER_OF_LINES FILE_NAME
e.g. tail -100 test.log
will fetch the last 100 lines from test.log.
If you want to store that output in a separate file, redirect it as follows:
tail -NUMBER_OF_LINES FILE_NAME > OUTPUT_FILE_NAME
e.g. tail -100 test.log > output.log
will fetch the last 100 lines from test.log and store them in a new file, output.log.
The sed script that prints the last 100 lines can be found in the sed documentation (https://www.gnu.org/software/sed/manual/sed.html#tail):
$ cat sed.cmd
1! {; H; g; }
1,100 !s/[^\n]*\n//
$p
h
$ sed -nf sed.cmd logfilename
(Note the final h, which copies the trimmed buffer back to hold space; without it the script does not keep only the last 100 lines.)
For me that is far more complicated than
tail -n 100 logfilename
which is much, much simpler. It is also quite efficient: it will not read the whole file unless necessary. See my answer with an strace report for tail ./huge-file: https://unix.stackexchange.com/questions/102905/does-tail-read-the-whole-file/102910#102910
I know this is very old but, for whomever it may help:
less +F my_log_file.log
That is just the basics; less can do far more powerful things. Once you are viewing the log you can search, jump to a line number, look for a pattern, and more, and it is faster for large files.
It's like vim for logs (totally my opinion).
less documentation: https://linux.die.net/man/1/less
less cheat sheet: https://gist.github.com/glnds/8862214
len=$(wc -l < filename)
l=$(( len - 99 ))
sed -n "${l},${len}p" filename
The first line takes the length (total lines) of the file. We want to fetch 100 records, so the starting line is the total minus 99. Then just put the variables into the sed command to fetch the last 100 lines of the file.
I hope this helps you.

Display single row by specifying line number?

I want to display the content of a file such that, when I specify a line number, it shows only that row. For example, the content of a file named "FILE" is as follows:
/home/john
/home/mathew
/home/alexander
/home/testuser
/home/hellouser
I want to display a single row at a time by giving a line number; if I specify 3, it should show only the following row:
/home/alexander
I know another way is possible with head and tail using the -n flag, but that displays all rows up to the line number we specify, as follows:
head -n3 FILE
/home/john
/home/mathew
/home/alexander
I don't want that; I want to display only "/home/alexander" when I choose line number 3. How is that possible?
With head and tail you can do the following:
head -3 FILE | tail -1
Some other ways (4th line):
sed -n '4p' file
or
perl -ne 'print if 4 == $.' file
or
grep -n ^ file | grep '^4:' | cut -d: -f2-
You could use awk
line=3
awk -v var="$line" 'NR==var' file
It will display the line whose number is stored in the variable line.
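The sed one-liner can be wrapped in a tiny function; nthline is just an illustrative name, and the {p;q} variant stops reading as soon as the line is printed, which helps on big files:

```shell
# Print only line $1 of file $2, then quit without reading the rest.
nthline() {
  sed -n "$1{p;q}" "$2"
}

printf '/home/john\n/home/mathew\n/home/alexander\n' > FILE
nthline 3 FILE   # prints: /home/alexander
```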

Grep only a part of a text file

How can I apply the following command to only a part of a text file? For example from the beginning to the line 5000.
grep "^ A : 11 B : 10" filename | wc -l
I cannot use head and then apply the above command since the text file is huge.
You could try using sed, which I believe does better with large files, to select the range and pipe it to grep:
sed -n 1,5000p file | grep ...
You can try a combination of -n (prefix each output line with its line number) and -m (limit the number of matching lines). Something like this:
grep -n -m 5000 pattern file.txt | grep -B 5000 "^5000:" | wc -l
The first grep searches for the pattern, adds line numbers, and limits output to the first 5000 matching lines (worst case: every line in the range matches the pattern). The second grep matches line number 5000 and prints all lines before it. I don't know whether this is a more efficient solution.
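A single-pass sketch with awk (any POSIX awk; the pattern and the 5000-line cutoff come from the question): exit stops reading the moment line 5001 is reached, so the rest of the huge file is never touched, and the match count replaces the | wc -l step. Incidentally, head -n 5000 file | grep -c … behaves the same way, since head also stops reading after 5000 lines.

```shell
# Count matches of the pattern among only the first 5000 lines.
# "exit" ends the read early; END still runs and prints the count.
awk 'NR > 5000 { exit } /^ A : 11 B : 10/ { n++ } END { print n+0 }' filename
```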
