How to sort the contents of a text file in the Linux terminal by splitting at a specific character?

I have an assignment in school to sort a file's contents in a specific order.
I had to do it with Windows batch files first, and now I have to do the same in Linux.
The file looks more or less like this the whole way through:
John Doe : Crocodiles : 1035
In Windows I solved the problem with this:
sort /r /+39 file.txt
The rows in the file are supposed to get sorted by the number of points (which is the number to the right) in decreasing order.
The second part of the assignment is to sort the rows by the center column.
How can I get the same results in Linux? I have tried a couple of variations of the sort command in Linux too, but so far without success.

I'd do it with:
sort -nr -t: -k3
-nr - sort numerically, in reverse (descending) order
-t: - use the colon as the field separator
-k3 - sort on the third field (the points)
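For example, with the sample file saved as file.txt (the same name your Windows command used), the first part of the assignment is the command above, and a sketch of the second part (sorting by the center column, which is my guess at what's wanted) would be:
sort -t: -nr -k3 file.txt   # part 1: by points, highest first
sort -t: -b -k2 file.txt    # part 2: by the center column, alphabetically (-b ignores the leading blank)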

There is no exact Linux equivalent of your Windows command sort /r /+39 file: GNU sort selects keys by field (and by character offset within a field) rather than by an absolute character position in the line, so something like sort -r -k +39 will not do what the Windows switch does. The field-based command above is the more reliable translation.
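If you really do want a character-position sort like Windows' /+39, one workaround (a sketch, assuming bash and that the file contains no tab characters) is to pick a separator that never occurs in the data, so the whole line becomes field 1 and -k1.39 starts the key at character 39 of the line:
sort -r -t $'\t' -k1.39 file.txt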

Related

Sorting numerically if the number is not at the start of a line

I used grep -Eo '[0-9]{1,}kg' *.dat, which filters out the values ending in kg. Now I'm trying to sort them in increasing order. My output from grep is:
blue_whale.dat:240kg
crocodile.dat:5kg
elephant.dat:6kg
giraffe.dat:15kg
hippopotamus.dat:4kg
humpback_whale.dat:5kg
ostrich.dat:1kg
sea_turtle.dat:10kg
I've tried to use sort -n, but the sorting doesn't work.
edit:
I have a bunch of files with each animal's weight and length. I filtered out the weights of each animal; this part was easy. Then I want to order them in increasing order, which I thought was just sort -n.
edit:
In my directory, I have many .dat files.
They contain values like 110000kg 24m, and I need to order them by increasing weight.
You need to use the command in this manner:
grep -Eo '[0-9]{1,}kg' *.dat | sort -t: -n -k2
Use the "-t" option to specify the colon as field separator.
You can use the -r option for decreasing (reverse) order.
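Applied to the grep output shown above, the pipeline should behave like this (the tie between the two 5kg entries is broken by sort's whole-line fallback comparison):
grep -Eo '[0-9]{1,}kg' *.dat | sort -t: -n -k2
ostrich.dat:1kg
hippopotamus.dat:4kg
crocodile.dat:5kg
humpback_whale.dat:5kg
elephant.dat:6kg
sea_turtle.dat:10kg
giraffe.dat:15kg
blue_whale.dat:240kg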

Getting the latest file in shell with YYYYMMDD_HHMMSS.csv.gz format

I have a set of files in a directory and I want to get the latest file based on the timestamp in the file name.
For example:
test1_20180823_121545.csv.gz
test2_20180822_191545.csv.gz
test3_20180823_192050.csv.gz
test4_20180823_100510.csv.gz
test4_20180823_191040.csv.gz
From the files given above, based on the date and time in their names, my output should be test3_20180823_192050.csv.gz
Using find and sort:
find /path/to/mydirectory -type f | sort -t_ -k2,3 | tail -1
The options to the sort command are -t for the delimiter and -k for selecting the key on which the sort is done.
tail gets the last entry from the sorted list.
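If the path itself contains underscores or the directory has subdirectories, a slightly more defensive variant (a sketch, assuming GNU find for -maxdepth and -printf) sorts on the bare file names and keeps the date and time parts as separate keys:
find /path/to/mydirectory -maxdepth 1 -type f -name '*.csv.gz' -printf '%f\n' | sort -t_ -k2,2 -k3,3 | tail -1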
If the files also have corresponding modification times (shown by ls -l), then you can list them by modification time in reverse order and take the last one:
ls -1rt | tail -1
But if you cannot rely on this, then you need to write a script (e.g. in perl). You would read the file list into an array, extract the timestamps into another array, convert the timestamps to epoch time (which is easy to sort), and sort the file list along with the timestamps. Maybe hashes can help with it. Then print the last one.
You can try to write it; if you run into issues, someone here can correct you.
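Here is a sketch of that idea in plain shell rather than perl, assuming file names of the form name_YYYYMMDD_HHMMSS.csv.gz with no extra underscores in the name part; no epoch conversion is needed because YYYYMMDD_HHMMSS already sorts correctly as text:
for f in *_*_*.csv.gz; do
  ts=${f%.csv.gz}   # drop the extension
  ts=${ts#*_}       # keep only YYYYMMDD_HHMMSS
  printf '%s %s\n' "$ts" "$f"
done | sort | tail -1 | cut -d' ' -f2-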

What is the equivalent of this Linux command in Windows?

I am trying to merge all the files in a Windows batch script, then sort all the rows and keep only the unique rows, since the header can be repeated many times. I have used Linux, and in Linux the command is just this; however, I am not sure how to do the same in Windows:
sed 1d *.csv | sort -r| uniq > merged-file.csv
To do this without the sorting part, you can simply run this from the command line or in a batch file.
copy *.csv merged-file.csv
This will copy the contents of each csv file into merged-file.csv.
To do the sorting and uniq part, you would need a little more than a simple one-liner.

egrep not writing to a file

I am using the following command in order to extract domain names & the full domain extension from a file. Ex: www.abc.yahoo.com, www.efg.yahoo.com.us.
[a-z0-9\-]+\.com(\.[a-z]{2})?' source.txt | sort | uniq | sed -e 's/www.//' > dest.txt
The command writes correctly when I specify a small maximum match count, -m 100, after source.txt. The problem occurs if I don't specify it, or if I specify a huge number. I could previously write to files with grep (not egrep) using huge numbers similar to what I'm trying now, and that was successful. I also checked the last-modified date and time while the command was executing, and it seems no modification is happening in the destination file. What could be the problem?
As I mentioned in your earlier question, it's probably not an issue with egrep, but that your file is too big and that sort won't output anything (to uniq) until egrep is done. I suggested that you split the file into manageable chunks using the split command. Something like this:
split -l 10000000 source.txt split_source.
This will split the source.txt file into 10-million-line chunks called split_source.aa, split_source.ab, split_source.ac, etc. You can then run the entire command on each one of those files (changing the redirection at the end to append: >> dest.txt).
The problem here is that you can get duplicates across multiple files, so at the end you may need to run
sort dest.txt | uniq > dest_uniq.txt
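Putting that together, the per-chunk run might look like this sketch, where PATTERN stands in for the regular expression from your original command (which isn't shown in full in the question):
for f in split_source.*; do
  egrep 'PATTERN' "$f" | sort | uniq | sed -e 's/www.//' >> dest.txt   # your original pipeline, appending
done
sort dest.txt | uniq > dest_uniq.txt   # remove duplicates that span chunks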
Your question is missing information.
That aside, a few thoughts. First, to debug and isolate your problem:
Run egrep <params> | less so you can see what egrep is doing, and eliminate any problem from sort, uniq, or sed (my bet is on sort).
How big is your input? Any chance sort is dying from too much input?
Gonna need to see the full command to make further comments.
Second, to improve your script:
You may want to sort | uniq AFTER sed, otherwise you could end up with duplicates in your result set, AND an unsorted result set. Maybe that's what you want.
Consider wrapping your regular expressions with "^...$", if it's appropriate to establish beginning of line (^) and end of line ($) anchors. Otherwise you'll be matching portions in the middle of a line.
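For the sort | uniq after sed point, the reordered pipeline would look something like this (PATTERN again stands in for your regex; sort -u is equivalent to sort | uniq here):
egrep 'PATTERN' source.txt | sed -e 's/www.//' | sort -u > dest.txt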

sort across multiple files in linux

I have multiple (many) files; each very large:
file0.txt
file1.txt
file2.txt
I do not want to join them into a single file because the resulting file would be 10+ gigs. Each line in each file contains a 40-byte string. The strings are fairly well ordered right now (roughly 1 in 10 steps is a decrease in value instead of an increase).
I would like the lines ordered (in place, if possible?). This means some of the lines from the end of file0.txt will be moved to the beginning of file1.txt and vice versa.
I am working on Linux and fairly new to it. I know about the sort command for a single file, but am wondering if there is a way to sort across multiple files. Or maybe there is a way to make a pseudo-file made from smaller files that linux will treat as a single file.
What I know I can do:
I can sort each file individually, then read into file1.txt to find the values larger than the largest in file0.txt (and similarly grab the lines from the end of file0.txt), join and then sort... but this is a pain and assumes no values from file2.txt belong in file0.txt (though that is highly unlikely in my case).
Edit
To be clear, if the files look like this:
f0.txt
DDD
XXX
AAA
f1.txt
BBB
FFF
CCC
f2.txt
EEE
YYY
ZZZ
I want this:
f0.txt
AAA
BBB
CCC
f1.txt
DDD
EEE
FFF
f2.txt
XXX
YYY
ZZZ
I don't know of a command that does in-place sorting, but I think a faster "merge sort" is possible:
for file in *.txt; do
  sort -o "$file" "$file"
done
sort -m *.txt | split -d -l 1000000 - output
The sort in the for loop makes sure the content of the input files is sorted. If you don't want to overwrite the originals, simply change the value after the -o parameter. (If you expect the files to be sorted already, you could change the sort statement to "check only": sort -c "$file" || exit 1.)
The second sort does efficient merging of the input files, all while keeping the output sorted.
This is piped to the split command which will then write to suffixed output files. Notice the - character; this tells split to read from standard input (i.e. the pipe) instead of a file.
Also, here's a short summary of how the merge sort works:
1. sort reads a line from each file.
2. It orders these lines and selects the one which should come first. This line gets sent to the output, and a new line is read from the file which contained this line.
3. Repeat step 2 until there are no more lines in any file.
4. At this point, the output should be a perfectly sorted file.
5. Profit!
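Applied to the three small example files from the question (and letting split pick numbered output names rather than writing back into f0/f1/f2), the whole thing looks like this:
for file in f0.txt f1.txt f2.txt; do
  sort -o "$file" "$file"          # f0.txt becomes AAA DDD XXX, and so on
done
sort -m f0.txt f1.txt f2.txt | split -d -l 3 - sorted_
# sorted_00 now holds AAA BBB CCC, sorted_01 holds DDD EEE FFF, sorted_02 holds XXX YYY ZZZ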
It isn't exactly what you asked for, but the sort(1) utility can help, a little, using the --merge option. Sort each file individually, then sort the resulting pile of files:
for f in file*.txt ; do sort -o "$f" < "$f" ; done
sort --merge file*.txt | split -l 100000 - sorted_file
(That's 100,000 lines per output file. Perhaps that's still way too small.)
I believe that this is your best bet, using stock Linux utilities:
sort each file individually, e.g. for f in file*.txt; do sort "$f" > "sorted_$f"; done
sort everything using sort -m sorted_file*.txt | split -d -l <lines> - <prefix>, where <lines> is the number of lines per file, and <prefix> is the filename prefix. (The -d tells split to use numeric suffixes).
The -m option to sort lets it know the input files are already sorted, so it can be smart.
mmap() the 3 files; since all lines are 40 bytes long, you can easily sort them in place (SIP :-). Don't forget the msync at the end.
If the files are sorted individually, then you can use sort -m file*.txt to merge them together - read the first line of each file, output the smallest one, and repeat.
