Reorder Lines Based On Previous File Order Before Randomization - linux

I have the following lines in file1:
line 1text
line 2text
line 3text
line 4text
line 5text
line 6text
line 7text
With the command cat file1 | sort -R | head -4 I get the following in file2:
line 5text
line 1text
line 7text
line 2text
I would like to order the lines (not numerically, just the same order as file1) into the following file3:
line 1text
line 2text
line 5text
line 7text
The actual data doesn't have digits. Any easy way to do this? I was thinking of doing a grep and finding the first instance in a loop. But, I'm sure you experienced guys know an easier solution. Your positive input is highly appreciated.

You can decorate with line numbers, select four random lines, sort by line number, and remove the line numbers:
$ nl -b a file1 | shuf -n 4 | sort -n -k 1,1 | cut -f 2-
line 2text
line 5text
line 6text
line 7text
The -b a option to nl ensures that empty lines are numbered as well.
Notice that this loads all of file1 into memory, as pointed out by ghoti. To avoid that (and as a generally smarter solution), we can use a different feature of (GNU) shuf: its -i option takes a number range and treats each number in the range as an input line. To get four random line numbers from an input file file1, we can use
shuf -n 4 -i 1-$(wc -l < file1)
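For a seven-line file this asks for four numbers between 1 and 7; the output is random, so a run might look like:
$ shuf -n 4 -i 1-7
3
7
1
5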
Now, we have to print exactly these lines. Sed can do that; we just turn the output of the previous command into a sed script and run sed with sed -n -f -. All together:
shuf -n 4 -i 1-$(wc -l < file1) | sort -n | sed 's/$/p/;$s/p/{&;q}/' |
sed -n -f - file1
sort -n sorts the line numbers numerically. This isn't strictly needed, but if we know that the highest line number comes last, we can quit sed afterwards instead of reading the rest of the file for nothing.
sed 's/$/p/;$s/p/{&;q}/' appends p to each line. For the last line, we append {p;q} instead, to stop processing the file.
If the output from sort looks like
27
774
670
541
then the sed command turns it into
27p
774p
670p
541{p;q}
sed -n -f - file1 processes file1, using the output of the above sed command as its instructions. -n suppresses output for the lines we don't want.
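With the line numbers above, that final invocation behaves as if we had written the script inline:
$ sed -n '27p;774p;670p;541{p;q}' file1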
The command can be parametrized and put into a shell function, taking the file name and the number of lines to print as arguments:
randlines () {
    fname=$1
    nlines=$2
    shuf -n "$nlines" -i 1-$(wc -l < "$fname") | sort -n |
        sed 's/$/p/;$s/p/{&;q}/' | sed -n -f - "$fname"
}
to be used like
randlines file1 4
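A sample run against the seven-line file1 from the question might print, for example:
$ randlines file1 4
line 1text
line 3text
line 4text
line 7text
The selection is random, but the output order always matches file1.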

cat can add line numbers:
$ cat -n file
1 line one
2 line two
3 line three
4 line four
5 line five
6 line six
7 line seven
8 line eight
9 line nine
So you can use that to decorate, sort, undecorate:
$ cat -n file | sort -R | head -4 | sort -n | cut -f2-
You can also use awk to decorate with a random number and line index (useful if your sort lacks -R, as on OS X):
$ awk '{print rand() "\t" FNR "\t" $0}' file | sort -n | head -4
0.152208 4 line four
0.173531 8 line eight
0.193475 6 line six
0.237788 1 line one
Then sort by the line numbers and remove the decoration (one or two columns, depending on whether you used cat or awk to decorate):
$ awk '{print rand() "\t" FNR "\t" $0}' file | sort -n | head -4 | cut -f2- | sort -n | cut -f2-
line one
line four
line six
line eight

Another solution could be to sort the whole file:
sort file1 -o file2
then pick random lines from file2:
shuf -n 4 file2 -o file3

Related

Linux: Number of characters in a text file on lines 'x' through 'y'

How do I print the number of characters on lines x - y of a text file?
I tried using wc -m filename.txt
but I couldn't figure out how to limit the search.
You could combine head and tail (here x and y are shell variables holding the first and last line numbers):
head -n "$y" filename | tail -n "$((y - x + 1))" | wc -m
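For example, for lines 6 through 10 that becomes:
head -n 10 filename | tail -n 5 | wc -m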
You can use the sed command to select the lines you want and then pipe the output into wc. Something like this would select lines 6-10 and print the number of characters:
sed -n '6,10p' filename.txt | wc -m
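A quick way to sanity-check this is against seq, where the counts are easy to verify by hand:
$ seq 1 20 | sed -n '6,10p' | wc -m
11
Lines 6 through 10 hold one or two digits plus a newline each: 2+2+2+2+3 = 11.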
Try this:
awk '{ print NR, "-", length($0)}' filename.txt
It will print the line number NR and the number of characters on that line, length($0), for filename.txt, so the output will look something like:
1 - 3 # line 1 with 3 characters
2 - 0 # line 2 with no characters
...
In case you just want to print the number of characters for a specific range, let's say from line 1 to 3, this could be used:
awk 'NR>=1 && NR<=3 { print length($0)}' filename.txt
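If you want a single total comparable to wc -m (which counts the newline at the end of each line too), a small variation sums the lengths, assuming every counted line ends with a newline:
awk 'NR>=1 && NR<=3 { total += length($0) + 1 } END { print total }' filename.txt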

Removing a header in GNU/Linux

I'm trying to confirm whether I am able to remove a header.
Let's say
I have a file data.gz:
This line is the header Data
Data line 1
Data line 2
Data line 3
Data line 4
Data line 5
I want to remove the first line before I apply a regular expression:
gunzip -c data.gz | grep -v '^This line is the header Data$' | grep -o 'Data' | sort | uniq -c
Will this remove the header before the second grep (the regular expression) runs? Is there a better method for removing a header in a pipeline?
Yes! The tail command can skip lines counting from the beginning:
$ seq 1 3 | tail -n+2
2
3
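Applied to the pipeline from the question, this replaces the grep -v step:
gunzip -c data.gz | tail -n +2 | grep -o 'Data' | sort | uniq -c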
Delete first line with sed:
| sed 1d

wc -l is NOT counting the last line of the file if it does not have an end-of-line character

I need to count all lines of a Unix file. The file has 3 lines, but wc -l reports only 2.
I understand that it is not counting the last line because it does not end with an end-of-line character.
Could anyone please tell me how to count that line as well?
grep -c returns the number of matching lines. Just use an empty string "" as your matching expression:
$ echo -n $'a\nb\nc' > 2or3.txt
$ cat 2or3.txt | wc -l
2
$ grep -c "" 2or3.txt
3
It is better to have all lines ending with EOL \n in Unix files. You can do:
{ cat file; echo ''; } | wc -l
Or this awk:
awk 'END{print NR}' file
This approach will give the correct line count regardless of whether the last line in the file ends with a newline or not.
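A quick check with input that lacks the final newline (printf adds none here):
$ printf 'a\nb\nc' | awk 'END{print NR}'
3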
awk makes sure that each line in its output ends with a newline character. Thus, to ensure each line ends in a newline before it reaches wc, use:
awk '1' file | wc -l
Here, we use the trivial awk program that consists solely of the number 1. awk interprets this cryptic statement as "the condition is true, so print the line", and every line it prints comes out newline-terminated.
Examples
Let us create a file with three lines, each ending with a newline, and count the lines:
$ echo -n $'a\nb\nc\n' >file
$ awk '1' file | wc -l
3
The correct number is found.
Now, let's try again with the last newline missing:
$ echo -n $'a\nb\nc' >file
$ awk '1' file | wc -l
3
This still provides the right number. awk supplies the missing final newline, but leaves input that already ends with one unchanged.
Respect
I respect the answer from John1024 and would like to expand upon it.
Line Count function
I find myself comparing line counts A LOT, especially from the clipboard, so I have defined a bash function. I'd like to modify it to show the filenames and, when passed more than one file, a total; however, that hasn't been important enough to do so far.
# semicolons used because this is condensed to 1 line in my ~/.bash_profile
function wcl(){
    if [[ -z "${1:-}" ]]; then
        set -- /dev/stdin "$@";
    fi;
    for f in "$@"; do
        awk 1 "$f" | wc -l;
    done;
}
Counting lines without the function
# Line count of the file
$ cat file_with_newline | wc -l
3
# Line count of the file
$ cat file_without_newline | wc -l
2
# Line count of the file unchanged by cat
$ cat file_without_newline | cat | wc -l
2
# Line count of the file changed by awk
$ cat file_without_newline | awk 1 | wc -l
3
# Line count of the file changed by only the first call to awk
$ cat file_without_newline | awk 1 | awk 1 | awk 1 | wc -l
3
# Line count of the file unchanged by awk because it ends with a newline character
$ cat file_with_newline | awk 1 | awk 1 | awk 1 | wc -l
3
Counting characters (why you don't want to put a wrapper around wc)
# Character count of the file
$ cat file_with_newline | wc -c
6
# Character count of the file unchanged by awk because it ends with a newline character
$ cat file_with_newline | awk 1 | awk 1 | awk 1 | wc -c
6
# Character count of the file
$ cat file_without_newline | wc -c
5
# Character count of the file changed by awk
$ cat file_without_newline | awk 1 | wc -c
6
Counting lines with the function
# Line count function used on stdin
$ cat file_with_newline | wcl
3
# Line count function used on stdin
$ cat file_without_newline | wcl
3
# Line count function used on filenames passed as arguments
$ wcl file_without_newline file_with_newline
3
3

How to count the number of fields in a comma separated line where commas within brackets are not to be counted as separators?

Let's say I have the following line in my file:
HELLO,1410250216446000,1410250216470330,1410250216470367,329,PE,B,T,GALU,[ , , T, I],3.38,3,A,A, , , , ,0, ,0,0, ,-Infinity,-Infinity,-Infinity, ,,0
if I use
grep -a -w HELLO my_file | head -10 | awk -F '[\t,]' '{print NF}' | less
the output is 32.
But I don't want to count the commas within []; [ , , T, I] must be counted as a single field, so that the output of my query is 29.
What would be a one-line command for doing this in Linux?
Remove the content inside the brackets using sed, then continue counting:
grep -a -w HELLO my_file|sed "s/\[.*\]//g" | head -10 | awk -F '[\t,]' '{print NF}' | less
output
29
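To see what sed leaves behind for the counting stage: the bracketed field collapses to an empty field, three commas disappear, and 32 fields become 29:
$ sed "s/\[.*\]//g" <<< 'HELLO,1410250216446000,1410250216470330,1410250216470367,329,PE,B,T,GALU,[ , , T, I],3.38,3,A,A, , , , ,0, ,0,0, ,-Infinity,-Infinity,-Infinity, ,,0'
HELLO,1410250216446000,1410250216470330,1410250216470367,329,PE,B,T,GALU,,3.38,3,A,A, , , , ,0, ,0,0, ,-Infinity,-Infinity,-Infinity, ,,0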

linux command to get the last appearance of a string in a text file

I want to find the last appearance of a string in a text file with Linux commands. For example:
1 a 1
2 a 2
3 a 3
1 b 1
2 b 2
3 b 3
1 c 1
2 c 2
3 c 3
In such a text file, I want to find the line number of the last appearance of b, which is 6.
I can find the first appearance with
awk '/ b / {print NR;exit}' textFile.txt
but I have no idea how to do it for the last occurrence.
cat -n textfile.txt | grep " b " | tail -1 | cut -f 1
cat -n prints the file to STDOUT prepending line numbers.
grep selects all lines containing " b " (you can use egrep for more advanced patterns or fgrep for faster matching of fixed strings)
tail -1 prints the last of those matching lines
cut -f 1 prints first column, which is line # from cat -n
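With the sample file from the question, grep narrows the numbered lines down to the three " b " lines (4, 5, and 6), and the pipeline prints:
$ cat -n textfile.txt | grep " b " | tail -1 | cut -f 1
6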
Or you can use Perl if you wish. It's very similar to what you'd do in awk, but frankly I never use awk when I have Perl handy; by design, Perl supports 100% of what awk can do as one-liners. YMMV:
perl -ne '{$n=$. if / b /} END {print "$n\n"}' textfile.txt
This can work:
$ awk '{if ($2~"b") a=NR} END{print a}' your_file
We check whether the second field is "b" and record the line number. The variable is overwritten on every match, so by the time we finish reading the file it holds the last one.
Test:
$ awk '{if ($2~"b") a=NR} END{print a}' your_file
6
Update based on sudo_O's advice:
$ awk '{if ($2=="b") a=NR} END{print a}' your_file
to avoid matching a second field like abc that merely contains b.
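To see the difference, feed both versions a file whose second field merely contains a b:
$ printf '1 b 1\n2 abc 2\n' | awk '{if ($2~"b") a=NR} END{print a}'
2
$ printf '1 b 1\n2 abc 2\n' | awk '{if ($2=="b") a=NR} END{print a}'
1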
This shorter one is also valid (I keep the one above because it is the one I thought of first :D):
$ awk '$2=="b" {a=NR} END{print a}' your_file
Another approach, if the $2 values are always grouped together (it may be more efficient than waiting until the end):
awk 'NR==1||$2=="b",$2=="b"{next} {print NR-1; exit}' file
or
awk '$2=="b"{f=1} f==1 && $2!="b" {print NR-1; exit}' file
