Joining two CSV files based on a column - Linux

I have two CSV files, as follows:
AllEmployees.txt
EmpID,Name
QualifiedEmployees.csv
EmpID
Now I want to find the names of the qualified employees:
EmpID,Name
I am using the following command:
join -t , -1 1 -2 1 QualifiedEmployees.csv AllEmployees.txt
This results in zero records. I am sure there is an intersection of employee IDs.
Reference: https://superuser.com/questions/26834/how-to-join-two-csv-files
Is it because the qualified employees file has only one column and there is no delimiter? Or am I doing something wrong?

Try this:
join -t "," <(dos2unix <QualifiedEmployees.csv) <(dos2unix <AllEmployees.txt)
If either file has Windows line endings, the stray carriage return becomes part of the join field and prevents matches; dos2unix strips it before join sees the data.

If join is not working (not producing as many rows as you expect, or no rows at all), it is likely because your input is not sorted. From man join we see this:
When the default field delimiter characters are used, the files to be joined should be ordered in the collating sequence of sort(1), using the -b option, on the fields on which they are to be joined, otherwise join may not report all field matches. When the field delimiter characters are specified by the -t option, the collating sequence should be the same as sort(1) without the -b option.
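So if the files are not already ordered, a minimal fix (sketched here with bash process substitution, using the question's filenames) is to sort both inputs on the join field first:
join -t , <(sort QualifiedEmployees.csv) <(sort AllEmployees.txt)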

awk -F, 'FNR==NR{a[$1];next} ($1 in a){print $2}' QualifiedEmployees.csv AllEmployees.txt
This reads the qualified IDs into an array on the first pass, then prints the name for every record of the second file whose ID is in that array; unlike join, it does not require sorted input. Use print $0 instead of print $2 if you want the EmpID,Name pair.

Related

Unix sort by a single column only

I have a file of numbers separated by commas. I want to sort the list by column 2 only. My expectation is that lines will be sorted by the second column alone and not by any additional columns. I am not looking to sort by multiple keys, and I know how to sort by a single key. My question is: when I give the start POS and end POS as column 2, why does it also sort column 3?
FILE
cat chris.num
1,5,2
1,4,3
1,4,1
1,7,2
1,7,1
AFTER SORT
sort -t',' -k2,2 chris.num
1,4,1
1,4,3
1,5,2
1,7,1
1,7,2
However my expected output is
1,4,3
1,4,1
1,5,2
1,7,2
1,7,1
So I thought that, since I give the start and end key as -k2,2, it would sort only on this column, but it seems to be sorting the other columns too. How can I get it to sort only by column 2 and not by the others?
From the POSIX description of sort:
Except when the -u option is specified, lines that otherwise compare equal shall be ordered as if none of the options -d, -f, -i, -n, or -k were present (but with -r still in effect, if it was specified) and with all bytes in the lines significant to the comparison. The order in which lines that still compare equal are written is unspecified.
So in your case, when two lines have the same value in the second column and thus are equal, the entire lines are then compared to get the final ordering.
GNU sort (and possibly other implementations, though it's not mandated by POSIX) has the -s option for a stable sort, where lines with keys that compare equal appear in the same order as in the input, which is what it appears you want:
$ sort -t, -s -k2,2n chris.num
1,4,3
1,4,1
1,5,2
1,7,2
1,7,1
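If -s is not available, a portable sketch of the same idea is to decorate each line with its line number, sort with that number as the tiebreaker, then strip it off again:
nl -s, chris.num | sort -t, -k3,3n -k1,1n | cut -d, -f2-
Here nl prepends a line number and a comma, so the original second column becomes field 3; -k1,1n restores the original order among equal keys, and cut removes the decoration.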

Merging two Excel files into a third one with a common heading

I have two Excel files with the common headings "StudentID" and "StudentName". I want to merge them into a third file containing all the records from both, along with the common heading. How can I do this with Linux commands?
I assumed these were CSV files, as it would be far more complicated with .xlsx files.
cp first_file.csv third_file.csv
tail -n +2 second_file.csv >> third_file.csv
The first line copies your first file into a new file called third_file.csv. The second line appends the content of the second file, starting from its second line (skipping the header).
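If you ever need to merge more than two files while keeping a single header, a small awk sketch in the same spirit (with the same hypothetical filenames): FNR==1 is true on the first line of each file, and NR!=1 is false only for the very first input line, so every header except the first is skipped.
# keep the first file's header, then append only the data rows of later files
awk 'FNR==1 && NR!=1 {next} {print}' first_file.csv second_file.csv > third_file.csv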
Due to your requirement to do this with "Linux commands" I assume that you have two CSV files rather than XLSX files.
If so, the Linux join command is a good fit for a problem like this.
Imagine your two files are:
# file1.csv
Student ID,Student Name,City
1,John Smith,London
2,Arthur Dent,Newcastle
3,Sophie Smith,London
and:
# file2.csv
Student ID,Student Name,Subjects
1,John Smith,Maths
2,Arthur Dent,Philosophy
3,Sophie Smith,English
We want to do an equality join on the Student ID field (or we could use Student Name; it doesn't matter, since both fields are common to the two files).
We can do this using the following command:
$ join -1 1 -2 1 -t, -o 1.1,1.2,1.3,2.3 file1.csv file2.csv
Student ID,Student Name,City,Subjects
1,John Smith,London,Maths
2,Arthur Dent,Newcastle,Philosophy
3,Sophie Smith,London,English
By way of explanation, this join command written as SQL would be something like:
SELECT `Student ID`, `Student Name`, `City`, `Subjects`
FROM `file1.csv`, `file2.csv`
WHERE `file1.Student ID` = `file2.Student ID`
The options to join mean:
The "SELECT" clause:
-o 1.1,1.2,1.3,2.3 means select the first file's first field, the first file's second field, the first file's third field, and the second file's third field.
The "FROM" clause:
file1.csv file2.csv, i.e. the two filename arguments passed to join.
The "WHERE" clause:
-1 1 means join from the 1st field from the Left table
-2 1 means join to the 1st field from the Right table (-1 = Left; -2 = Right)
Also:
-t, tells join to use the comma as the field separator
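One caveat: join expects both inputs to be sorted on the join field, and a header line is just another line to it. The example works because both files happen to be in the same order, but GNU join may warn that the input is not sorted, since "Student ID" collates after the digit-led data rows. GNU join (not POSIX) also has a --header option that treats the first line of each file as a header and prints it without trying to pair it:
$ join --header -t, -o 1.1,1.2,1.3,2.3 file1.csv file2.csv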
@Corentin Limier Thanks for the answer.
I was able to achieve the same in a similar way, shown below.
Let's say we have two files, a.xls and b.xls, and want to merge them into a third file, c.xls:
cat a.xls > c.xls && tail -n +2 b.xls >> c.xls

Bash sort -nu results in unexpected behaviour

A colleague of mine noticed some odd behaviour with the sort command today, and I was wondering if anyone knows if the output of this command is intentional or not?
Given the file:
ABC_22
ABC_43
ABC_1
ABC_1
ABC_43
ABC_10
ABC_123
We are looking to sort the file with numeric sort, and also make it unique, so we run:
sort file.txt -nu
The output is:
ABC_22
Now, we know that the numeric sort won't work in this case as the lines don't begin with numbers (and that's fine, this is just part of a larger script), but I would have expected something more along the lines of:
ABC_1
ABC_10
ABC_123
ABC_22
ABC_43
Does anyone know why this isn't the case? The sort acts as one would expect if given just the -u or -n options individually.
With -n, an empty number is zero:
Sort numerically. The number begins each line and consists of optional
blanks, an optional ‘-’ sign, and zero or more digits possibly
separated by thousands separators, optionally followed by a
decimal-point character and zero or more digits. An empty number is
treated as ‘0’.
All these lines have an empty number at the start of the line, so under -n they all compare equal to 0, and -u then keeps only the first of those "equal" lines. If you'd started each line with the same number, say 1, the effect would be the same. You should specify the field containing the numbers explicitly, or use version sort (-V):
$ sort -Vu file.txt
ABC_1
ABC_10
ABC_22
ABC_43
ABC_123
You are missing the delimiter and key specification: tell GNU sort to split fields on _ and sort numerically on the second field:
sort -nu -t'_' -k2 file.txt
ABC_1
ABC_10
ABC_22
ABC_43
ABC_123
The -n flag selects numeric sort and -u keeps unique lines; the key part is setting the delimiter to _ and sorting on the second field after it with -k2.

Differences between Unix commands for Sorting CSV

What's the difference between:
!tail -n +2 hits.csv | sort -k 1n -o output.csv
and
!tail -n +2 hits.csv | sort -t "," -k1 -n -k2 > output.csv
?
I'm trying to sort a csv file by first column first, then by the second column, so that lines with the same first column are still together.
It seems like the first one already does that correctly, by first sorting by the field before the first comma, then by the field following the first comma (breaking ties, that is).
Or does it not actually do that?
And what does the second command do/mean? (And what's the difference between the two?) There is a significant difference between the two output.csv files when I run them.
And, finally, which one should I use? (Or are they both wrong?)
See also the answer by @morido for some other pointers, but here's a description of exactly what those two sort invocations do:
sort -k 1n -o output.csv
This assumes that the "fields" in your file are delimited by a transition from non-whitespace to whitespace (i.e. leading whitespace is included in each field, not stripped, as many might expect/assume), and tells sort to order things by a key that starts with the first field and extends to the end of the line, and assumes that the key is formatted as a numeric value. The output is sent explicitly to a specific file.
sort -t "," -k1 -n -k2
This defines the field separator as a comma, and then defines two keys to sort on. The first key again starts at the first field and extends to the end of the line, and is lexicographic (dictionary order), not numeric. The second key, used when values of the first key are identical, starts at the second field and extends to the end of the line; because of the intervening -n, it is treated as numeric data as well. However, because your first key essentially spans the entire line, the second key is unlikely ever to be needed: if the first keys of two lines are identical, the second keys most likely are too.
Since you didn't provide sample data, it's unknown whether the data in the first two fields is numeric or not, but I suspect you want something like what was suggested in the answer by @morido:
sort -t, -k1,1 -k2,2
or
sort -t, -k1,1n -k2,2n (alternatively sort -t, -n -k1,1 -k2,2)
if the data is numeric.
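For instance, with some made-up rows (not from the question), the two-key numeric form sorts on column 1 and breaks ties with column 2:
$ printf '2,10\n1,9\n1,10\n2,2\n' | sort -t, -k1,1n -k2,2n
1,9
1,10
2,2
2,10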
First off: you want to remove the leading ! from these two commands. In Bash (and probably other shells, since this feature comes from csh) you would otherwise be referencing the most recent command in your history that starts with tail, which makes no sense here.
The main difference between your two versions is that in the first case you are not taking the second column into account.
This is how I would do it:
tail -n +2 hits.csv | sort -t "," -n --key=1,1 --key=2,2 > output.csv
-t specifies the field separator
-n turns on numerical sorting order
--key specifies the fields that should be used for sorting (in order of precedence)

use uniq -d on a particular column?

I have a text file like this:
john,3
albert,4
tom,3
junior,5
max,6
tony,5
I'm trying to fetch the records where the column 2 value is the same. My desired output:
john,3
tom,3
junior,5
tony,5
I'm wondering whether I can use uniq -d on the second column.
Here's one way using awk. It reads the input file twice, but avoids the need to sort:
awk -F, 'FNR==NR { a[$2]++; next } a[$2] > 1' file file
Results:
john,3
tom,3
junior,5
tony,5
Brief explanation:
FNR==NR is a common awk idiom that is true only while the first file in the argument list is being read. Here, a counter keyed on column two is incremented. On the second read of the file, we simply check whether the count for that line's column-two value is greater than one (the next keyword skips the rest of the code during the first pass).
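The same one-liner, expanded with comments (behaviour unchanged):
awk -F, '
    FNR == NR { a[$2]++; next }   # pass 1: count each column-2 value
    a[$2] > 1                     # pass 2: print lines whose value repeats
' file file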
You can use uniq on fields (columns), but not easily in your case.
Uniq's -f and -s options skip fields and characters respectively, but neither quite does what you want:
-f divides fields by whitespace, and yours are separated by commas.
-s skips a fixed number of characters, and your names are of variable length.
Overall, though, uniq is used to compress input by consolidating duplicates into unique lines. You actually want to retain duplicates and eliminate singletons, which is the opposite of what uniq is for. It would appear you need a different approach, such as the awk above or the sketch below.
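If you do want uniq -d in the pipeline anyway, here is one sketch: extract column 2, let uniq -d list the values that repeat, then grep the matching records back out. It assumes the key is the last field and contains no regex metacharacters:
cut -d, -f2 file | sort | uniq -d |
while read -r key; do
    grep ",${key}\$" file   # every record carrying a duplicated key
done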
