Linux: Compare two text files [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
I have two text files like the ones below:
File1.txt
A|234-211
B|234-244
C|234-351
D|999-876
E|456-411
F|567-211
File2.txt
234-244
999-876
567-211
And I want to compare both files and get containing values like below:
Desired output:
B|234-244
D|999-876
F|567-211

$ grep -F -f file2.txt file1.txt
B|234-244
D|999-876
F|567-211
The -F makes grep search for fixed strings (not patterns). Both -F and -f are POSIX options to grep.
Note that this assumes your file2.txt does not contain short strings like 11 which could lead to false positives.
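If File2.txt could contain such short strings, an exact-match join on the second field avoids substring false positives entirely. A sketch in awk, assuming the | delimiter shown above:

```shell
# Load File2.txt values as keys, then print only the File1.txt lines
# whose entire 2nd field matches one of them.
awk -F'|' 'NR==FNR {want[$1]; next} $2 in want' File2.txt File1.txt
```

Here NR==FNR is true only while reading the first file (File2.txt), so its lines populate the want array; lines of File1.txt are then printed only when field 2 is an exact key.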

Try:
grep -f File2.txt File1.txt

Related

Sort files in a directory by their text character length and copy to other directory [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 1 year ago.
I'm trying to find the smallest file by character length inside of a directory and, once it is found, I want to rename it and copy it to another directory.
For example, I have two files in one directory ~/Files and these are cars.txt and rabbits.txt
Text in cars.txt:
I like red cars that are big.
Text in rabbits.txt:
I like rabbits.
So far I know how to get the character length of a single file with the command wc -m 'filename', but I don't know how to do this for all the files and sort them in order. I know rabbits.txt is smaller in character length, but how do I compare both of them?
You could sort the files by character count, then select the name of the first one:
file=$(wc -m ~/Files/* 2>/dev/null | sort -n | head -n 1 | awk '{print $2}')
echo "$file"
(Quoting "$file" guards against word splitting. Note this still breaks on file names containing spaces, since awk '{print $2}' only takes the second whitespace-separated field.)
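The question also asks to rename and copy the smallest file once it is found. Building on the one-liner above, a sketch (the destination directory ~/Dest and the new name smallest.txt are assumptions; adjust to taste):

```shell
# Pick the file with the fewest characters (wc -m), then copy it under a new name.
# Caveat: breaks on file names containing whitespace, as noted above.
src=$(wc -m ~/Files/* 2>/dev/null | sort -n | head -n 1 | awk '{print $2}')
mkdir -p ~/Dest
cp "$src" ~/Dest/smallest.txt
```

The "total" line wc prints for multiple files is harmless here: it is the sum, so it can never sort first.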

Extracting month from day using linux terminal [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago.
I have a text file containing a list of dates and times, like the sample below:
posted_at"
2012-06-09 11:48:31"
2012-08-09 12:40:02"
2012-04-09 13:10:00"
2012-03-09 13:40:00"
2012-10-09 14:30:01"
2012-12-09 15:30:00"
2012-11-09 16:20:00"
I want to extract the month from each line.
P.S. - grep should not be used at any point in the code.
Thanks in advance!
First, select the date pattern:
egrep '[0-9]{4}-[0-9]{2}-[0-9]{2} ' content_file
Second, extract the month:
awk -F '-' '{print $2}'
Third, redirect to the desired file:
>> desired_file
Combining all of this with pipes gives the final solution:
egrep '[0-9]{4}-[0-9]{2}-[0-9]{2} ' content_file | awk -F '-' '{print $2}' >> desired_file
(Note that egrep is just the deprecated spelling of grep -E, so strictly speaking this does not satisfy the "no grep" constraint in the question.)
Voilà
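Since the question rules out grep entirely, an awk-only sketch works too, assuming the layout shown above (a header line, then one timestamp per line; content_file and desired_file are the names used in the answer):

```shell
# Split on '-': date lines yield at least 3 fields (year, month, rest),
# while the posted_at header yields only 1. Field 2 is the month.
awk -F'-' 'NF >= 3 {print $2}' content_file >> desired_file
```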

Parsing a conf file in bash [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 3 years ago.
Here's my config file
#comment 1
--longoption1
#comment 2
--longoption2
#comment 3
-s
#comment 4
--longoption4
I want to write a bash script that will read this .conf file, skip comments, and serialize the command-line options like so:
./binary --longoption1 --longoption2 -s --longoption4
Working off of this post on sed, you just need to pipe the output from sed to xargs:
sed -e 's/#.*$//' -e '/^$/d' inputFile | xargs ./binary
As Wiimm points out, xargs can be finicky with a lot of arguments and might split them across multiple invocations of binary. It may be better to use sed directly with command substitution:
./binary $(sed -e 's/#.*$//' -e '/^$/d' inputFile)
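A plain-bash alternative avoids both the xargs splitting issue and the word-splitting pitfalls of unquoted command substitution. A sketch, using options.conf and ./binary as stand-ins for the real file and program names:

```shell
# Read the conf file line by line, skip comment and blank lines,
# and collect the remaining options into an array.
args=()
while IFS= read -r line; do
  [[ $line == \#* || -z $line ]] && continue
  args+=("$line")
done < options.conf
./binary "${args[@]}"
```

The array keeps each option intact as a single argument, which matters if any option ever contains spaces.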

Formatting Diff output in Shell Script [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
I'm currently using (diff -q directory1 directory2) to output the files in each directory that are different and printing them to a table in html.
Current output: "Files directory1/file1 and directory2/file2 differ"
What I want: "file1 has changed"
I do not want to use comm or sort the files because other applications are pulling from the files and are sensitive to ordering. Any idea on how to get this done?
You need to grep the diff output for the files that differ, then use awk to print the file name in your new format (note the leading space in " has changed", otherwise the name and the text run together):
diff -rq dir1 dir2 | grep "differ" | awk '{print $2 " has changed"}'
Will this work?
diff -q $file1 $file2 | awk '{print $2 " has changed"}'
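Both answers print the full path ("directory1/file1 has changed"), while the question asks for just "file1 has changed". A sketch that strips the directory prefix inside awk, assuming the same dir1/dir2 layout:

```shell
# diff -q prints lines like: "Files dir1/file1 and dir2/file1 differ"
# sub() removes everything up to the last '/' in the first path ($2).
diff -rq dir1 dir2 | awk '/ differ$/ {sub(".*/", "", $2); print $2 " has changed"}'
```

Matching on " differ$" inside awk also makes the separate grep unnecessary.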

Unix/Linux comma-delimited files [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
I have a comma-delimited file that basically has the structure of:
1, 2, 3, 4, 5 ,,,, 6, etc
I have to count the number of unique values in the 6th column. Please help!
(btw this is an intro Unix/Linux class, so this should be doable with basic commands)
cut -d "," -f 6 myFile | sort | uniq | wc -l
(The -c flag on uniq is unnecessary here, since wc -l only counts lines; sort -u | wc -l also works.)
Looking into my crystal ball, I see that your class is discussing awk. Try
awk -F, '!a[$6]++{c++} END{ print c }' input-file
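To see why the awk version works: a[$6]++ is 0 (falsy) only the first time a given 6th field is seen, so c is incremented once per distinct value. A quick check with made-up rows (input-file is the name used in the answer):

```shell
# Three rows, two distinct values (x and y) in the 6th field.
printf '%s\n' '1,2,3,4,5,x' '1,2,3,4,5,y' '1,2,3,4,5,x' > input-file
awk -F, '!a[$6]++{c++} END{ print c }' input-file
# prints 2
```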
