Related to the head and tail commands in Unix/Linux

I know what output head -n and tail -n will provide.
Is there any command like head +n (head +2 filename) or tail +n (tail +2 filename)?
If yes, can anyone shed some light on this?

The Single Unix Specification Version 2 (1997) states the following for tail:
In the non-obsolescent form, if neither -c nor -n is specified, -n 10 is assumed.
In the obsolescent version, an argument beginning with a "-" or "+" can be used as a single option. The argument ±number with the letter c specified as a suffix is equivalent to -c ±number; ±number with the b suffix is equivalent to -c ±number*512; ±number with the letter l specified as a suffix, or with none of b, c nor l as a suffix, is equivalent to -n ±number. If number is not specified in these forms, 10 will be used. The letter f specified as a suffix is equivalent to specifying the -f option. If the [number]c[f] form is used and neither number nor the f suffix is specified, it will be interpreted as the -c 10 option.
In other words, the following commands in each group are equivalent:
tail -2 file
tail -n 2 file

tail +2 file
tail -n +2 file

tail -2c file
tail -c 2 file

tail +3lf file
tail -f -n +3 file
Note that unless a "+" is used, the number given means "output the last N lines". If "+" is used, it means "output the lines starting from line N". For example, in a file with 40 lines, tail +2 (or equivalently tail -n +2) would output lines 2..40, whereas using -2 or simply 2 would output lines 39..40.
The next version of the Single Unix Specification of 2001 removed the obsolescent form completely, so there are no "options" starting with a "+" character.
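To make the difference concrete, here is a quick illustration (a minimal sketch; seq is only used to generate a small numbered file, and nums.txt is a throwaway name):
seq 5 > nums.txt       # the file now contains the lines 1 2 3 4 5
tail -n 2 nums.txt     # the last two lines: 4 5
tail -n +2 nums.txt    # everything from line 2 on: 2 3 4 5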

tail supports both positive and negative offsets, but head does not.
Start output at the 10th line from the end of the file:
tail -10 filename
Start output at the 10th line from the beginning of the file:
tail +10 filename
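Note that many current implementations (GNU coreutils tail in particular) no longer accept the obsolescent leading-"+" form shown above, as described in the previous answer, so the portable spellings are:
tail -n 10 filename     # the last 10 lines
tail -n +10 filename    # from the 10th line to the end of the file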

I think piping is what you're looking for: https://en.wikipedia.org/wiki/Pipeline_(Unix)
To use the first example you gave:
head +2 filename | head +n
I believe that is what you want, though note that I haven't tested it.

Related

removing first n and last n lines from multiple text files

I have been stuck for some time now
I have two text files, from which I would like to remove the first two and the last three lines.
So far I have
$ tail -n +3 text_1.txt text_2.txt | head -n -3
When I enter this into the console, I see that text_2.txt indeed comes out with the proper format, but text_1.txt still has the last three lines that need to be removed. I presume that the head command is not being applied to text_1.txt.
How can I solve this problem?
for i in text_1.txt text_2.txt; do tail -n +3 "$i" | head -n -3; done
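The reason the original pipeline misbehaves is that tail, when given more than one file, writes a single combined stream (with ==> file <== headers), so head -n -3 trims only the last three lines of that combined stream rather than of each file. The loop above simply reruns the pipeline once per file. If you want the result saved per file instead of printed, a minimal sketch (the *_trimmed.txt output names are only an assumption about where you want the results):
for i in text_1.txt text_2.txt; do
  tail -n +3 "$i" | head -n -3 > "${i%.txt}_trimmed.txt"
done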

Error while comparing in shell

I am trying to search for a pattern (trailer) and, if it occurs more than once in a file, I need those filenames displayed.
for f in *.txt
do
  if((tail -n 1 $f | grep '[9][9][9]*' | wc -l) -ge 2);
  then
    echo " The file $f has more than one trailer"
  fi
done
Your most glaring syntax error is that -ge is an operator for the [ … ] or [[ … ]] conditional construct. It doesn't stand a chance the way you wrote the program. -ge needs a number on both sides, and what you have on the left is a command. You probably meant to have the output of the command, which would need the command substitution syntax: $(…). That's
if [ $(tail -n 1 $f | grep '[9][9][9]*' | wc -l) -ge 2 ]; then
This is syntactically correct but will never match. tail -n 1 $f outputs exactly one line (unless the file is empty), so grep sees at most one line, so wc -l prints either 0 or 1.
If you want to search the pattern on more than one line, change your tail invocation. While you're at it, you can change grep … | wc -l to grep -c; both do exactly the same thing, which is to count matching lines. For example, to search in the last 42 lines:
if [ $(tail -n 42 -- "$f" | grep -c '[9][9][9]*') -ge 2 ]; then
If you want to search for two matches on the last line, that's different. grep won't help here, because it only determines whether each line matches or not; it doesn't look for multiple matches per line. If you want to look for multiple non-overlapping matches on the last line, repeat the pattern, allowing arbitrary text in between. You're testing whether the pattern is present or not, so you only need grep's return status, not its output (hence the -q option).
if tail -n 1 -- "$f" | grep -q '[9][9][9]*.*[9][9][9]*'; then
I changed the tail invocations to add -- in case a file name begins with - (otherwise, tail would interpret it as an option) and to have double quotes around the file name (in case it contains whitespace or \[*?). These are good habits to get into. Always put double quotes around variable substitutions "$foo" and command substitutions "$(foo)" unless you know that the substitution will result in a whitespace-separated list of glob patterns.
tail -n 1 $f will produce (at most) one line of output, which is fed to grep, which can then by definition produce at most one line of output, which means that the output of wc will never be more than 1, and in particular will never reach 2. Aside from the syntax issues mentioned in other comments/answers, I think this logic is probably one of the core problems.
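Putting those corrections together, the loop might end up looking like this (a sketch that keeps the original pattern and assumes, as in the 42-line example above, that checking a fixed window of trailing lines is acceptable; 42 is an arbitrary choice):
for f in *.txt
do
  if [ "$(tail -n 42 -- "$f" | grep -c '[9][9][9]*')" -ge 2 ]; then
    echo "The file $f has more than one trailer"
  fi
done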

Tail inverse / printing everything except the last n lines?

Is there a (POSIX command line) way to print all of a file EXCEPT the last n lines? Use case being, I will have multiple files of unknown size, all of which contain a boilerplate footer of a known size, which I want to remove. I was wondering if there is already a utility that does this before writing it myself.
Most versions of head(1) - GNU derived, in particular, but not BSD derived - have a feature to do this. It will show the top of the file except the end if you use a negative number for the number of lines to print.
Like so:
head -n -10 textfile
Probably less efficient than the "wc" + "do the math" + "tail" method, but easier to look at:
tail -r file.txt | tail +NUM | tail -r
Where NUM is one more than the number of ending lines you want to remove, e.g. +11 will print all but the last 10 lines. This works on BSD which does not support the head -n -NUM syntax.
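Concretely, to drop the last 10 lines on such a BSD system (spelled with -n, which BSD tail also accepts):
tail -r file.txt | tail -n +11 | tail -r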
The head utility is your friend.
From the man page of head:
-n, --lines=[-]K
print the first K lines instead of the first 10;
with the leading `-', print all but the last K lines of each file
There's no standard command to do that, but you can use awk or sed to fill a buffer of N lines and print from the head of the buffer once it's full. E.g. with awk:
awk -v n=5 '{if(NR>n) print a[NR%n]; a[NR%n]=$0}' file
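For example, fed ten numbered lines it prints 1 through 5 and drops the last n=5 (seq is only used here to fabricate test input):
seq 10 | awk -v n=5 '{if(NR>n) print a[NR%n]; a[NR%n]=$0}'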
cat <filename> | head -n -10 # Everything except the last 10 lines of a file
cat <filename> | tail -n +11 # Everything except the first 10 lines of a file
If the footer starts with a consistent line that doesn't appear elsewhere, you can use sed:
sed '/FIRST_LINE_OF_FOOTER/q' filename
That prints the first line of the footer; if you want to avoid that:
sed -n '/FIRST_LINE_OF_FOOTER/q;p' filename
This could be more robust than counting lines if the size of the footer changes in the future. (Or it could be less robust if the first line changes.)
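For instance, if every footer begins with a line like "-- Generated by reportgen --" (a made-up marker, purely for illustration), the version that omits the footer entirely would be:
sed -n '/^-- Generated by reportgen --/q;p' filename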
Another option, if your system's head command doesn't support head -n -10, is to precompute the number of lines you want to show. The following depends on bash-specific syntax:
lines=$(wc -l < filename) ; (( lines -= 10 )) ; head -$lines filename
Note that the head -NUMBER syntax is supported by some versions of head for backward compatibility; POSIX only permits the head -n NUMBER form. POSIX also only permits the argument to -n to be a positive decimal integer, so head -n 0 isn't guaranteed to work.
A POSIX-compliant solution is:
lines=$(wc -l < filename) ; lines=$(($lines - 10)) ; head -n $lines filename
If you need to deal with ancient pre-POSIX shells, you might consider this:
lines=`wc -l < filename` ; lines=`expr $lines - 10` ; head -n $lines filename
Any of these might do odd things if a file is 10 or fewer lines long.
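If that edge case matters, a small guard avoids calling head with a zero or negative count (a minimal sketch in POSIX sh; printing nothing for files of 10 or fewer lines is an assumption about the desired behaviour):
lines=$(wc -l < filename)
lines=$((lines - 10))
if [ "$lines" -gt 0 ]; then
  head -n "$lines" filename
fi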
tac file.txt | tail +[n+1] | tac
This answer is similar to user9645's, but it avoids the tail -r command, which is also not a valid option on many systems. See, e.g., https://ubuntuforums.org/showthread.php?t=1346596&s=4246c451162feff4e519ef2f5cb1a45f&p=8444785#post8444785 for an example.
Note that the +1 (in the brackets) was needed on the system I tested it on, but it may not be required on your system. So, to remove the last line, I had to put 2 in the brackets. This is probably related to the fact that the last line needs to end with a regular line feed character, which arguably makes the last line a blank line. If you don't do that, the tac command will combine the last two lines, so removing the "last" line (or the first one the tail command sees) will actually remove the last two lines.
My answer should also be the fastest solution of those listed to date for systems lacking the improved version of head. So, I think it is both the most robust and the fastest of all the answers listed.
head -n $(( $(wc -l < Windows_Terminal.json) - 10 )) Windows_Terminal.json
This works on Linux and on macOS; keep in mind that head on macOS does not support a negative line count, so computing the count this way is quite handy.
N.B.: replace Windows_Terminal.json with your file name, and 10 with the number of trailing lines to drop.
It is simple. You add + to the number of lines that you want to avoid.
This example gives you all the lines except the first 9:
tail -n +10 inputfile
(yes, not the first 10, because it counts differently; if you want to skip the first 10, type
tail -n +11 inputfile)

view a particular line of a file denoted by a number

Okay, this is probably an obvious thing, but it escapes me; it could probably be done in a much simpler way that I'm not aware of so far.
Say there's a "file" and I want to view only what's on line number "X" of that file, what would be the solution?
Here's what I can think of:
head -X < file | tail -1
sed -n Xp < file
is there anything else (or any other way) from the standard set of unix/gnu/linux text-tools/utils?
sed -n 'Xp' theFile, where X is your line number and theFile is your file.
in vi:
vi +X filename
in EMACS:
emacs +X filename
in the shell:
nl -ba -nln filename| grep '^X '
you can use context grep (cgrep) instead of grep, or grep's own context options shown below, to see some lines above and below the matching line.
EXAMPLES:
print just that one line:
$ nl -ba -nln active_record.rb | grep '^111 '
111 module ConnectionAdapters
with context:
$ nl -ba -nln active_record.rb | grep -C 2 '^111 '
109 end
110
111 module ConnectionAdapters
112 extend ActiveSupport::Autoload
113
For context control in grep, check man grep:
Context Line Control
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given.
-B NUM, --before-context=NUM
Print NUM lines of leading context before matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given.
-C NUM, -NUM, --context=NUM
Print NUM lines of output context. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given.
awk one-liner:
awk "NR==$X" file
bash loop:
for ((i=1; i<=X; i++)); do
  read -r l
done < file
echo "$l"
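A small refinement to the awk one-liner above, so a large file is not read past the requested line (just a sketch; passing X in with -v rather than interpolating it into the script is a stylistic choice):
awk -v n="$X" 'NR==n {print; exit}' file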
Just use vi
vi file
When in the file, type
:X
where X is the line number you want to see
However, sed -n Xp file is a good way to go if you really only want to see that one line
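Similarly, if the file is huge, you can tell sed to quit as soon as it has printed the line instead of scanning the rest of the file (X is the line-number placeholder used elsewhere in this thread):
sed -n 'Xp;Xq' file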

Get the first n lines matching a certain pattern (with Linux commands)

I have a giant file in which I want to find the term model. I want to pipe the first 5 lines containing the word model to another file. How do I do that using Linux commands?
man grep mentions that
-m NUM, --max-count=NUM
Stop reading a file after NUM matching lines. If the input is
standard input from a regular file, and NUM matching lines are
output, grep ensures that the standard input is positioned to
just after the last matching line before exiting, regardless of
the presence of trailing context lines. This enables a calling
process to resume a search.
so one can use
grep model old_file_name.txt -m 5 > new_file_name.txt
No need for a pipe. grep supports almost everything you need on its own.
grep model [file] | head -n 5 > [newfile]
grep "model" filename | head -n 5 > newfile
cat file | grep model | head -n 5 > outfile.txt
