Assume you have an unsorted file with the following content:
identifier,count=Number
identifier, extra information
identifier, extra information
...
I want to sort this file so that, for each id, the line with the count comes first, followed by the lines with extra info. I can only use the Unix sort command with the option -k1,1, but I am allowed to slightly change the lines to achieve this sort.
As an example, take
a,Count=1
a,giulio
aa,Count=44
aa,tango
aa,information
ee,Count=2
bb,que
f,Count=3
b,Count=23
bax,game
f,ee
c,Count=3
c,roma
b,italy
bax,Count=332
a,atlanta
bb,Count=78
c,Count=3
The output should be
a,Count=1
a,atlanta
a,giulio
aa,Count=44
aa,information
aa,tango
b,Count=23
b,italy
bax,Count=332
bax,game
bb,Count=78
bb,que
c,Count=3
c,roma
ee,Count=2
f,Count=3
f,ee
but I get:
aa,Count=44
aa,information
aa,tango
a,atlanta
a,Count=1
a,giulio
bax,Count=332
bax,game
bb,Count=78
bb,que
b,Count=23
b,italy
c,Count=3
c,Count=3
c,roma
ee,Count=2
f,Count=3
f,ee
I tried adding spaces at the end of the identifier and/or at the beginning of the count field and other characters, but none of these approaches work.
Any pointer on how to perform this sorting?
EDIT:
If you consider, for example, the products with an id starting with a, one of them has the info 'atlanta' and appears before Count (but I want Count to appear before any information). In addition, bb should come after b in the alphabetical order of the ids. To make my question clearer: how can I get the ids sorted in alphabetical order, and such that for a given id the line with Count appears before the others? And how can I do this using sort -k1,1 (this is a group project I am working on and I am not free to change the sorting command), possibly changing the content of the lines (I tried, for example, adding a '~' to all the infos so that Count comes first)?
You need to tell sort that the comma is used as the field separator:
sort -t, -k1,1
For ASCII sorting, make sure LC_ALL=C is set and that LANG and LANGUAGE are unset.
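For example, a minimal end-to-end run (input.csv is a hypothetical file name standing in for your data):
LC_ALL=C sort -t, -k1,1 input.csv
Under the C locale, uppercase letters sort before lowercase ones, so for lines with the same identifier the Count=... line comes out before the lowercase info lines and no changes to the lines are needed (this relies on the info values starting with lowercase letters, as in your example).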
I have 3 folders on my server.
Assuming folder names are
workbook_20220217
workbook_20220407
workbook_20220105
Each folder consists of its respective files.
I only want to print the latest folder based on the date. There are 2 methods I have tried so far.
The first method I tried:
Variable declared:
TABLEAU_REPORTING_FOLDER=/farid/reporting/workbook
#First Method
ls $TABLEAU_REPORTING_FOLDER *_* | sort -t_ -n -k2 | sed ':0 N;s/\n/, /;t0'
#The first method will return all the other contents of the folder as well
#The second Method i have tried
$(ls -td ${TABLEAU_REPORTING_FOLDER}/workbook/* | head -1)
# This will return folder based on ascending order
The target output should be workbook_20220407.
What is the best approach I should look into? I can't think of any logic other than using the date as the biggest value to determine whether it's the latest folder.
*PS: I can't read the folders' modified dates because, once the folders have been transferred to my server, all 3 folders will have the same date.
UPDATE
I found a way to get the latest folder based on filename based on this reference : https://www.unix.com/shell-programming-and-scripting/174140-how-sort-files-based-file-name-having-numbers.html
ls | sort -t'-' -nk2.3 | tail -1
This will return the latest folder based on the folder title. Will this be safe to use?
Also, what does -nk2.3 do and mean?
You can list your files in a directory in reverse order with the option -r (independent of which sort order you have selected). See the man page of the ls(1) command for details.
The options -n and -k2.3 of the sort(1) command mean, respectively (see also the sort(1) man page for details):
sort numerically. This means that the keys are considered as numbers and sorted accordingly.
start the key at field 2, character 3 of that field (if you meant fields 2 through 3, that would be -k2,3 with a comma).
Read the man pages of both commands; they are your friends.
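As a sketch of one way to pick the newest folder from the date embedded in its name (this assumes the dated folders sit side by side as ${TABLEAU_REPORTING_FOLDER}_YYYYMMDD, i.e. /farid/reporting/workbook_20220217 and so on, and that the paths contain no other underscores):
ls -d "${TABLEAU_REPORTING_FOLDER}"_* | sort -t_ -k2,2n | tail -1
Because the dates are written as YYYYMMDD, the largest number is also the most recent date, so the last line of the numeric sort on field 2 is the latest workbook folder.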
I have a data output something like this captured in a file.
List item1
attrib1: someval11
attrib2: someval12
attrib3: someval13
attrib4: someval14
List item2
attrib1: someval21
attrib2: someval12
attrib4: someval24
attrib3: someval23
List item3
attrib1: someval31
attrib2: someval32
attrib3: someval33
attrib4: someval34
I want to extract attrib1, attrib3 and attrib4 from the list of data, but only if "attrib2 is someval12".
Note that attrib3 and attrib4 could be in any order after attrib2.
So far I have tried to use grep with the -A and -B options, but I need to specify line numbers, and that is sort of hardcoding, which I don't want to do.
grep -B 1 -A 1 -A 2 "attrib2: someval12" | egrep -w "attrib1|attrib3|attrib4"
Can I use any other option of grep which doesn't involve specifying the number of lines before and after the occurrence for this example?
Grep and other tools (like join, sort, uniq) work on the principle "one record per line". It is therefore possible to use a 3-step pipe:
Convert each list item to a single line, using sed.
Do the filtering, using grep.
Convert back to the original format, using sed.
First you need to pick a character that is known not to occur in the input, and use it as separator character. For example, '|'.
Then, find the sed command for step 1, which transforms the input to the format
List item1|attrib1: someval11|attrib2: someval12|attrib3: someval13|attrib4: someval14|
List item2|attrib1: someval21|attrib2: someval12|attrib4: someval24|attrib3: someval23|
List item3|attrib1: someval31|attrib2: someval32|attrib3: someval33|attrib4: someval34|
Now step 2 is easy.
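Here is a sketch of the complete pipeline (assuming GNU sed, that every list item starts with a line beginning with "List", that '|' never occurs in the data, and that the capture file is simply called file):
sed -e ':a' -e '$!N' -e '/\nList/!s/\n/|/' -e 'ta' -e 'P' -e 'D' file |
grep 'attrib2: someval12' |
sed 's/|/\n/g'
The first sed keeps appending lines to the current record and replacing the newline with '|' until the next line starts with "List" (so the intermediate format is the one shown above, just without the trailing '|'); grep then keeps only the records containing attrib2: someval12, and the last sed turns the '|' back into newlines. Piping the result through egrep -w "attrib1|attrib3|attrib4" finally leaves just the attributes asked for.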
I want to search a given word and retrieve all the surrounding lines between a pair of keywords:
I have the following data
NEW:
this is stackoverflow
this is a ghi/enlightening website
NEW:
put returns between paragraphs
indent code by 4 spaces
NEW:
here is this
most productive website
this is abc/enlightening/def
Now I want to retrieve all the information between two NEW markers that contains the word "enlightening". That is, for the example input above I want the following output:
OUTPUT:
NEW:
this is stackoverflow
this is a ghi/enlightening website
NEW:
here is this
most productive website
this is abc/enlightening/def
I know that grep allows me to search for a word, but it retrieves only a specified number of lines, e.g. 5 (specified by the user), above and below the given word. But how do I find all the information between occurrences of a keyword in Linux ("NEW" in this case)? E.g. I specify here the delimiting keyword as "NEW" and call the information between any two occurrences a paragraph. So, here my first paragraph is:
this is stackoverflow
this is a ghi/enlightening website
my second paragraph is:
put returns between paragraphs
indent code by 4 spaces
and so on.
Now I want all those paragraphs which have the keyword "enlightening" in them, i.e. I want the following output:
OUTPUT:
NEW:
this is stackoverflow
this is a ghi/enlightening website
NEW:
here is this
most productive website
this is abc/enlightening/def
The following AWK command should work (for mawk anyway -- POSIX doesn't seem to allow RS to be an arbitrary string):
awk -vRS='NEW:\n' -vORS= '/enlightening/ { print RS $0 }' data
Explanation:
-vFOO=BAR is a variable assignment.
Setting RS (Record Separator) to NEW:\n makes records be separated by NEW:\n instead of being lines.
Setting ORS to the empty string removes redundant blank lines after records on output. (Another option is to set it to NEW:\n, if having NEW:\n appear after the record is okay.)
/enlightening/ { print RS $0 } prints the record separator followed by the entire matching record ($0) for each record that contains "enlightening".
If having the separator appear after records is okay, then the command can be simplified to the following:
awk -vRS='NEW:\n' -vORS='NEW:\n' '/enlightening/' data
The default action when no action is specified is to print the record.
For strict POSIX compliance, appending lines to a temporary buffer while between two NEW:s and only printing that buffer if the search term was seen (could use a flag) should work, though it's more complicated.
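For example, a sketch of that buffered approach in portable awk (it assumes the delimiter lines are exactly NEW: and that the input file is called data, as above):
awk '
  # New paragraph: flush the previous one if it matched, then reset.
  /^NEW:$/ { if (found) printf "%s", buf; buf = "NEW:\n"; found = 0; next }
  # Accumulate the current paragraph and remember whether it matched.
  { buf = buf $0 "\n"; if (/enlightening/) found = 1 }
  # Flush the last paragraph if it matched.
  END { if (found) printf "%s", buf }
' data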
I have two files:
file1 has the format:
field1;field2;field3;field4
(file1 is initially unsorted)
file2 has the format:
field1
(file2 is sorted)
I run the 2 following commands:
sort -t\; -k1 file1 -o file1 # to sort file 1
join -t\; -1 1 -2 1 -o 1.1 1.2 1.3 1.4 file1 file2
I get the following message:
join: file1:27497: is not sorted: line_which_was_identified_as_out_of_order
Why is this happening ?
(I also tried to sort file1 taking into consideration the entire line, not only the first field of the line, but with no success.)
sort -t\; -c file1 doesn't output anything. Around line 27497, the situation is indeed strange, which suggests that sort isn't doing its job correctly:
XYZ113017;...
line 27497--> XYZ11301;...
XYZ11301;...
To complement Wumpus Q. Wumbley's helpful answer with a broader perspective (since I found this post researching a slightly different problem).
When using join, the input files must be sorted by the join field ONLY, otherwise you may see the warning reported by the OP.
There are two common scenarios in which more than the field of interest is mistakenly included when sorting the input files:
If you do specify a field, it's easy to forget that you must also specify a stop field - even if you target only 1 field - because sort uses the remainder of the line if only a start field is specified; e.g.:
sort -t, -k1 ... # !! FROM field 1 THROUGH THE REST OF THE LINE
sort -t, -k1,1 ... # Field 1 only
If your sort field is the FIRST field in the input, it's tempting to not specify any field selector at all.
However, if field values can be prefix substrings of each other, sorting whole lines will NOT (necessarily) result in the same sort order as just sorting by the 1st field:
sort ... # NOT always the same as 'sort -k1,1'! see below for example
Pitfall example:
#!/usr/bin/env bash
# Input data: fields separated by '^'.
# Note that, when properly sorting by field 1, the order should
# be "nameA" before "nameAA" (followed by "nameZ").
# Note how "nameA" is a substring of "nameAA".
read -r -d '' input <<EOF
nameA^other1
nameAA^other2
nameZ^other3
EOF
# NOTE: "WRONG" below refers to deviation from the expected outcome
# of sorting by field 1 only, based on mistaken assumptions.
# The commands do work correctly in a technical sense.
echo '--- just sort'
sort <<<"$input" | head -1 # WRONG: 'nameAA' comes first
echo '--- sort FROM field 1'
sort -t^ -k1 <<<"$input" | head -1 # WRONG: 'nameAA' comes first
echo '--- sort with field 1 ONLY'
sort -t^ -k1,1 <<<"$input" | head -1 # ok, 'nameA' comes first
Explanation:
When NOT limiting sorting to the first field, it is the relative sort order of the characters ^ and A (column index 6) that matters in this example. In other words: the field separator is compared to data, which is the source of the problem: ^ has a HIGHER ASCII value than A and therefore sorts after 'A', resulting in the line starting with nameAA^ sorting BEFORE the one starting with nameA^.
Note: It is possible for problems to surface on one platform, but be masked on another, based on locale and character-set settings and/or the sort implementation used; e.g., with a locale of en_US.UTF-8 in effect, with , as the separator and - permissible inside fields:
sort as used on OSX 10.10.2 (which is an old GNU sort version, 5.93) sorts , before - (in line with ASCII values)
sort as used on Ubuntu 14.04 (GNU sort 8.21) does the opposite: sorts - before ,[1]
[1] I don't know why - if somebody knows, please tell me. Test with sort <<<$'-\n,'
sort -k1 uses all fields starting from field 1 as the key. You need to specify a stop field.
sort -t\; -k1,1
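Applied to the commands from the question, that would be (only the sort key changes; the join call can stay as it was):
sort -t\; -k1,1 file1 -o file1
join -t\; -1 1 -2 1 -o 1.1 1.2 1.3 1.4 file1 file2
If the warning still shows up because sort and join disagree about collation, running both commands with LC_ALL=C in front is a common additional safeguard.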
... or GNU sort is just as buggy as every other GNU command.
Try to sort Gi1/0/11 vs Gi1/0/1 and you'll never get an actual regular textual sort suitable for join input, because someone added some extra intelligence to sort which will happily use numeric or human-numeric sorting automagically in such cases, without even bothering to add a flag to force the regular behavior.
What is suitable for humans is seldom suitable for scripting.
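That said, if what you want for join is a plain byte-wise order regardless of the active locale, forcing the C locale (a general workaround, not specific to interface names like these) is the usual approach:
printf 'Gi1/0/11\nGi1/0/1\n' | LC_ALL=C sort
which compares the lines strictly by byte value and therefore gives the same, reproducible order on every system.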
I have an unsorted server list like the following:
bgsqlnp-z101
bgsqlnp-z102
bgsqlnp-z103
bgsqlnp-z2
bgsqlnp-z3
bgsqlnp-z5
dfsqlnp-z108
dfsqlnp-z4
bgsqlnp-z1
dfsqlprd-z8
fuqddev-z88
fuqhdev-z8
ghsbqudev-z18
heiappprod-z1
htsybprd-z24
Using sort to read in the file, I'm trying to get the following:
bgsqlnp-z1
bgsqlnp-z2
bgsqlnp-z3
bgsqlnp-z5
bgsqlnp-z101
bgsqlnp-z102
bgsqlnp-z103
dfsqlnp-z4
dfsqlnp-z108
dfsqlprd-z8
fuqddev-z88
fuqhdev-z8
ghsbqudev-z18
heiappprod-z1
htsybprd-z24
I'm just not able to find the right keydef for my -k option.
Here's the closest I've been able to get:
sort -k2n -t"z"
bgsqlnp-z1
bgsqlnp-z101
bgsqlnp-z102
bgsqlnp-z103
bgsqlnp-z2
bgsqlnp-z3
bgsqlnp-z5
dfsqlnp-z108
dfsqlnp-z4
dfsqlprd-z8
fuqddev-z88
fuqhdev-z8
ghsbqudev-z18
heiappprod-z1
htsybprd-z24
The numbers are in the right order, but the server names aren't sorted.
Attempts using a multi-field keydef (-k1,2n) seem to have zero effect (I get no sorting at all).
Here's some extra info about the server names:
1) All of them have a "-z[1-200]" suffix on the names, some numbers repeat.
2) Server names are of differing lengths (4 to 16 characters),
so using 'cut' is out of the question.
You can use sed to get around having a multi-character separator. You can switch between numeric and dictionary order after each sort key definition. Note that you have to give multiple -k options for multiple keys; check the man page for details on this.
Something like this:
sed 's/-z/ /' file | sort -k1,1d -k2,2n | sed 's/ /-z/'
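If you would rather stay entirely within sort, an alternative sketch (not what the sed round-trip above does; it assumes the only '-' in each name is the one in front of the z suffix) is to split on '-' and start the numeric key at the second character of field 2, skipping the literal z:
sort -t- -k1,1 -k2.2,2n file
Field 1 is then the server name and the key 2.2,2n is the number after the z, which yields the same grouping as the sed round-trip.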