How to print lines between 2 values using tail & head and a pipe? - linux

For example: how can I print specific lines of a .txt file, between line 5 and line 8, using only tail and head?

infile.txt contains a numerical value on each line.
➜ X=3
➜ Y=10
➜ < infile.txt tail -n +"$X" | head -n "$((Y - X))"
3
4
5
6
7
8
9
➜
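Note that this prints lines X through Y-1: tail -n +"$X" starts at line X, and head then keeps Y - X lines. To include line Y as well, keep one more line:
< infile.txt tail -n +"$X" | head -n "$((Y - X + 1))"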

Related

Concatenate files without last lines of each one

I am concatenating a large number of files into a single one with the following command:
$ cat num_*.dat > dataset.dat
However, due to the structure of the files, I'd like to omit the first two and last two lines of each file when concatenating. Those lines contain file information which is not important for my needs.
I know of the existence of head and tail, but I don't know how to combine them into a single UNIX command to solve my issue.
The head command's negative line count is the slightly unusual part here.
You can use the following to list all of the lines except the last two:
$ cat num_*.dat | head -n -2 > tmp.dat
Next, run the following tail command on the result. Note that it writes to a separate file; appending tail's output back onto the same file it is reading would corrupt it:
$ tail -n +3 tmp.dat > dataset.dat
I believe the following will work as one command.
$ cat num_*.dat | head -n-2 | tail -n+3 > dataset.dat
I tested on a file that had lines like the following:
Line 1
Line 2
Line 3
Line 4
Line 5
Line 6
Line 7
This one will get you started:
cat test.txt | head -n-2 | tail -n+3
From the file above it prints :
Line 3
Line 4
Line 5
The challenge is that when you use cat num_*.dat, all of the files are concatenated before the pipeline runs, so the command runs only once on one large stream and only the first two lines of the first file and the last two lines of the last file get removed.
Final Answer - Need to Write a Bash Script
I wrote a bash script that will do this for you.
This one will iterate through each file in your directory and run the command.
Notice that it appends (>>) to the dataset.dat file.
for file in num_*.dat; do
    if [ -f "$file" ]; then
        head -n -2 "$file" | tail -n +3 >> dataset.dat
        echo "$file"
    fi
done
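One caveat, not in the original answer: because the loop appends with >>, running it a second time will duplicate all of the data. Truncate the target first if you re-run it:
: > dataset.dat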
I had two files that looked like the following:
line 1
line 2
line 3
line 4
line 5
line 6
line 7
and
2 line 1
2 line 2
2 line 3
2 line 4
2 line 5
2 line 6
2 line 7
The final output was:
line 3
line 4
line 5
2 line 3
2 line 4
2 line 5
for i in num_*.dat; do # loop through all the files concerned
    tail -n +3 "$i" | head -n -2 >> dataset.dat
done
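A portability note: the negative count in head -n -2 is a GNU coreutils extension and is not available in BSD/macOS head. As an alternative, here is a one-pass awk sketch that does the same trimming by buffering each file (illustrative, not from the original answers):
for f in num_*.dat; do
    # print lines 3 through NR-2 of each file, i.e. drop the first two and last two
    awk '{ line[NR] = $0 } END { for (i = 3; i <= NR - 2; i++) print line[i] }' "$f"
done > dataset.dat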

How do I turn a text file with a single column into a matrix?

I have a text file that has a single column of numbers, like this:
1
2
3
4
5
6
I want to convert it into two columns, filled left to right, like this:
1 2
3 4
5 6
I can do it with:
awk '{print>"line-"NR%2}' file
paste line-1 line-0 >newfile
But I think the reliance on two intermediate files will make it fragile in a script.
I'd like to use something like cat file | mystery-zip-command >newfile
You can use paste to do this:
paste -d " " - - < file > newfile
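Each - tells paste to draw one column from its standard input, so the number of dashes sets the number of columns. For three columns, for example:
paste -d " " - - - < file > newfile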
You can also use pr:
pr -ats" " -2 file > newfile
-a - use round robin order
-t - suppress header and trailer
-s " " - use single space as the delimiter
-2 - two column output
See also:
Convert a text file into columns
Another alternative:
$ seq 6 | xargs -n2
1 2
3 4
5 6
or with awk
$ seq 6 | awk '{ORS=NR%2?FS:RS}1'
1 2
3 4
5 6
If you want the output to be terminated with a newline when there is an odd number of input lines:
$ seq 7 | awk '{ORS=NR%2?FS:RS}1; END{ORS=NR%2?RS:FS; print ""}'
1 2
3 4
5 6
7
awk 'NR % 2 == 1 { printf("%s", $1) }
NR % 2 == 0 { printf(" %s\n", $1) }
END { if (NR % 2 == 1) print "" }' file
The odd lines are printed with no newline after them, to print the first column. The even lines are printed with a space first and a newline after, to print the second column. At the end, if there were an odd number of lines, we print a newline so we don't end in the middle of the line.
With bash:
while IFS= read -r odd; do IFS= read -r even; echo "$odd $even"; done < file
Output:
1 2
3 4
5 6
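A small variation on the same loop, in case you want to avoid the trailing space that appears after the last value when the file has an odd number of lines (the inner read fails at EOF, leaving even empty):
while IFS= read -r odd; do
    if IFS= read -r even; then
        echo "$odd $even"
    else
        echo "$odd"   # odd line count: print the last value alone
    fi
done < file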
$ seq 6 | awk '{ORS=(NR%2?FS:RS); print} END{if (ORS==FS) printf RS}'
1 2
3 4
5 6
$
$ seq 7 | awk '{ORS=(NR%2?FS:RS); print} END{if (ORS==FS) printf RS}'
1 2
3 4
5 6
7
$
Note that it always adds a terminating newline - that is important as future commands might depend on it, e.g.:
$ seq 6 | awk '{ORS=(NR%2?FS:RS); print}' | wc -l
3
$ seq 7 | awk '{ORS=(NR%2?FS:RS); print}' | wc -l
3
$ seq 7 | awk '{ORS=(NR%2?FS:RS); print} END{if (ORS==FS) printf RS}' | wc -l
4
Just change the single occurrence of 2 to 3 or however many columns you want if your requirements change:
$ seq 6 | awk '{ORS=(NR%3?FS:RS); print} END{if (ORS==FS) printf RS}'
1 2 3
4 5 6
$ seq 7 | awk '{ORS=(NR%3?FS:RS); print} END{if (ORS==FS) printf RS}'
1 2 3
4 5 6
7
$ seq 8 | awk '{ORS=(NR%3?FS:RS); print} END{if (ORS==FS) printf RS}'
1 2 3
4 5 6
7 8
$ seq 9 | awk '{ORS=(NR%3?FS:RS); print} END{if (ORS==FS) printf RS}'
1 2 3
4 5 6
7 8 9
$
Short awk approach:
awk '{print ( ((getline nl) > 0)? $0" "nl : $0 )}' file
The output:
1 2
3 4
5 6
(getline nl) > 0 - getline reads the next record and assigns it to the variable nl. The getline command returns 1 if it finds a record and 0 if it encounters the end of the file.
Short GNU sed approach:
sed 'N;s/\n/ /' file
N - add a newline to the pattern space, then append the next line of input to the pattern space
s/\n/ / - replace the newline with a space in the pattern space
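The same idea extends to more columns by appending more lines before substituting. For three columns, for example (assuming the line count is a multiple of three):
sed 'N;N;s/\n/ /g' file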
seq 6 | tr '\n' ' ' | sed -r 's/([^ ]* [^ ]* )/\1\n/g'

How to read n-th line from a text file in bash?

Say I have a text file called "demo.txt" who looks like this:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
Now I want to read a certain line, say line 2, with a command which will look something like this:
Line2 = read 2 "demo.txt"
So when I'll print it:
echo "$Line2"
I'll get:
5 6 7 8
I know how to use the 'sed' command to print the n-th line of a file, but not how to read it into a variable. I also know the 'read' command, but I don't know how to use it to read a certain line.
Thanks in advance for the help.
Using head and tail
$ head -2 inputFile | tail -1
5 6 7 8
OR
a generalized version
$ line=2
$ head -"$line" input | tail -1
5 6 7 8
Using sed
$ sed -n '2 p' input
5 6 7 8
$ sed -n "$line p" input
5 6 7 8
What does it do?
-n suppresses the normal printing of the pattern space.
'2 p' specifies the line number, 2 (or $line in the general version); the p command prints the current pattern space.
input is the input file.
Edit
To capture the output in a variable, use command substitution:
$ content=`sed -n "$line p" input`
$ echo $content
5 6 7 8
OR
$ content=$(sed -n "$line p" input)
$ echo $content
5 6 7 8
To read the output into a bash array:
$ content=( $(sed -n "$line p" input) )
$ echo ${content[0]}
5
$ echo ${content[1]}
6
Using awk
Perhaps an awk solution might look like
$ awk -v line="$line" 'NR==line' input
5 6 7 8
Thanks to Fredrik Pihl for the suggestion.
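Since the question mentions read, a pure-bash sketch is possible too (bash 4+; mapfile reads the whole file into an array, and the variable name is illustrative):
mapfile -t lines < demo.txt
echo "${lines[1]}"   # arrays are 0-indexed, so index 1 is line 2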
Perl has convenient support for this, too, and it's actually the most intuitive!
The flip-flop operator can be used with line numbers:
$ printf "0\n1\n2\n3\n4" | perl -ne 'printf if 2 .. 4'
1
2
3
Note that it's 1-based.
You can also mix regular expressions:
$ printf "0\n1\nfoo\n3\n4" | perl -ne 'printf if /foo/ .. -1'
foo
3
4
(-1 never equals the current line number $., so the range stays open through the last line)
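For comparison, awk supports the same kind of range pattern; here the end condition 0 is never true, so printing continues to the end of the input:
$ printf "0\n1\nfoo\n3\n4\n" | awk '/foo/,0'
foo
3
4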

Move Last Four Lines To Second Row In Text File

I need to move the last 4 lines of a text file and move them to the second row in the text file.
I'm assuming that tail and sed are to be used, but I haven't had much luck so far.
Here is a head and tail solution. Let us start with the same sample file as Glenn Jackman:
$ seq 10 >file
Apply these commands:
$ head -n1 file ; tail -n4 file; tail -n+2 file | head -n-4
1
7
8
9
10
2
3
4
5
6
Explanation:
head -n1 file
Print first line
tail -n4 file
Print last four lines
tail -n+2 file | head -n-4
Print the lines starting at line 2 and stopping before the last four lines.
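If you want the rearranged result in a new file rather than on stdout, group the three commands (newfile is an illustrative name):
{ head -n1 file; tail -n4 file; tail -n+2 file | head -n-4; } > newfile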
If I'm assuming correctly, ed can handle your task:
seq 10 > file
ed file <<'COMMANDS'
$-3,$m1
w
q
COMMANDS
cat file
1
7
8
9
10
2
3
4
5
6
Lines 7, 8, 9 and 10 have been moved so that they now start at line 2.
$-3,$m1 means: for the range of lines from "$-3" (3 lines before the last line) to "$" (the last line), move them ("m") below the first line ("1").
Note that the heredoc has been quoted so the shell does not try to interpret the strings $- and $m1 as variables
If you don't want to actually modify the file, but instead print to stdout:
ed -s file <<'COMMANDS'
$-3,$m1
%p
Q
COMMANDS
Here is an awk solution:
seq 10 > file
awk '{a[NR]=$0} END {for (i=1;i<=NR-4;i++) if (i==2) {for (j=NR-3;j<=NR;j++) print a[j];print a[i]} else print a[i]}' file
1
7
8
9
10
2
3
4
5
6

Searching a column in a unix file?

I have the data file below:
136110828724515000007700877
137110904734015000007700877
138110911724215000007700877
127110626724515000007700871
127110626726015000007700871
131110724724515000007700871
134110814725015000007700871
134110814734015000007700871
104110122726027000001810072
107110208724527000002900000
And I want to extract the value of column 3, i.e. the third character of each line, which gives 6787714447.
I tried using:
awk "print $3" <filename>
but it didn't work. What should I use instead?
It is a better job for cut (the awk attempt fails because these lines have no field separators, so awk sees only one field; also, inside double quotes the shell expands $3 itself, and the program needs braces, as in awk '{print $3}'):
$ cut -c 3 < file
6
7
8
7
7
1
4
4
4
7
As per man cut:
-c, --characters=LIST
select only these characters
To make them all appear on one line, pipe to tr -d '\n':
$ cut -c 3 < file | tr -d '\n'
6787714447
Or pipe further into sed to add the final newline:
$ cut -c 3 < file | tr -d '\n' | sed 's/$/\n/'
6787714447
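A minor variation, if you prefer printf to supply the final newline (command substitution strips trailing newlines, so this prints the digits followed by exactly one newline):
printf '%s\n' "$(cut -c 3 file | tr -d '\n')"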
With grep:
$ grep -oP "^..\K." file
6
7
8
7
7
1
4
4
4
7
with sed:
$ sed -r 's/..(.).*/\1/' file
6
7
8
7
7
1
4
4
4
7
with awk:
$ awk '{split ($0, a, ""); print a[3]}' file
6
7
8
7
7
1
4
4
4
7
Cut is probably the simpler/cleaner option, but here are two alternatives:
AWK version:
awk '{print substr($1, 3, 1) }' <filename>
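A parameterized variation of the same substr idea, assuming the character position is held in a shell variable:
n=3
awk -v pos="$n" '{ print substr($0, pos, 1) }' <filename>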
Python version (Python 2 syntax):
python -c 'print "\n".join(map(lambda x: x[2], open("<filename>").readlines()))'
EDIT: Please see 1_CR's comments and disregard this option in favour of his.
