Extracting month from date using linux terminal [closed] - linux

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have a text file containing a list of dates and times, like the sample below:
posted_at"
2012-06-09 11:48:31"
2012-08-09 12:40:02"
2012-04-09 13:10:00"
2012-03-09 13:40:00"
2012-10-09 14:30:01"
2012-12-09 15:30:00"
2012-11-09 16:20:00"
I want to extract the month from each line.
P.S. grep should not be used anywhere in the solution.
Thanks in advance!

First, select the date pattern:
egrep '[0-9]{4}-[0-9]{2}-[0-9]{2} ' content_file
Second, extract the month:
awk -F '-' '{print $2}'
Third, redirect to the desired file:
>> desired_file
Combining all of this with pipes gives the final solution:
egrep '[0-9]{4}-[0-9]{2}-[0-9]{2} ' content_file | awk -F '-' '{print $2}' >> desired_file
Voilà
Note, though, that egrep is a form of grep, which the question rules out, so the awk step would need to do the filtering on its own.
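Since grep is off-limits here, the filtering can be folded into awk itself; a minimal sketch, assuming the input file is named content_file as above:

```shell
# Recreate a sample file matching the question's data
printf '%s\n' 'posted_at"' '2012-06-09 11:48:31"' '2012-08-09 12:40:02"' > content_file

# Split on '-' and print field 2 (the month), but only for lines whose
# first field is a four-digit year; this also skips the header line.
awk -F '-' '$1 ~ /^[0-9][0-9][0-9][0-9]$/ {print $2}' content_file
```

For the sample above this prints 06 and 08, one month per line.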

Related

how can I remove some numbers at the end of line in a text file [closed]

Closed 6 months ago.
I have a text file that contains a series of lines that are identical except at the end.
e.g.
lesi-1-1500-1
lesi-1-1500-2
lesi-1-1500-3
How can I remove the last number? It goes up to 250.
To change the file in place:
sed -i 's/[0-9]\+$//' /path/to/file
or, writing to a new file:
sed 's/[0-9]\+$//' /path/to/file > /path/to/output
Note that this leaves a trailing hyphen (lesi-1-1500-); use 's/-[0-9]\+$//' if you want to drop the hyphen as well.
You can do it with awk by breaking each line into fields.
echo "lesi-1-1500-2" > foo.txt
echo "lesi-1-1500-3" >> foo.txt
awk -F '-' '{print $1 "-" $2 "-" $3}' foo.txt
The -F switch sets the field delimiter, which here is -. Then we just print the first three fields, re-joined with - for formatting.
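A quick end-to-end check that the sed and awk approaches agree (file names are just for illustration):

```shell
# Recreate the question's sample file
printf '%s\n' lesi-1-1500-1 lesi-1-1500-2 lesi-1-1500-3 > foo.txt

# sed: strip the trailing hyphen and number
sed 's/-[0-9]*$//' foo.txt

# awk: keep only the first three '-'-separated fields
awk -F '-' '{print $1 "-" $2 "-" $3}' foo.txt
```

Both commands print lesi-1-1500 once per input line.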

Sort files in a directory by their text character length and copy to other directory [closed]

Closed 1 year ago.
I'm trying to find the smallest file by character length inside of a directory and, once it is found, I want to rename it and copy it to another directory.
For example, I have two files in one directory ~/Files and these are cars.txt and rabbits.txt
Text in cars.txt:
I like red cars that are big.
Text in rabbits.txt:
I like rabbits.
So far I know how to get the character length of a single file with the command wc -m 'filename', but I don't know how to do it for all the files and sort them in order. I know rabbits.txt is smaller in character length, but how do I compare the two?
You could list the files with their character counts, sort numerically, and take the first entry's name:
file=$(wc -m ~/Files/* 2>/dev/null | sort -n | head -n 1 | awk '{print $2}')
echo "$file"
(When wc is given several files it also prints a final "total" line, but that sorts last, so head -n 1 still picks the smallest file. This does break on file names containing spaces.)
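To finish the task from the question (rename and copy to another directory), a sketch; the directory names and the new file name here are made up for the demo:

```shell
# Set up the example from the question (paths are illustrative)
mkdir -p Files Backup
printf 'I like red cars that are big.\n' > Files/cars.txt
printf 'I like rabbits.\n' > Files/rabbits.txt

# Smallest file by character count: wc's per-file lines sort ascending
# and its "total" line sorts last, so head -n 1 is the smallest file.
file=$(wc -m Files/* | sort -n | head -n 1 | awk '{print $2}')

# Copy it to the other directory under a new name
cp "$file" Backup/smallest.txt
```

Here $file ends up as Files/rabbits.txt, and Backup/smallest.txt holds its contents.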

Linux Compare two text files [closed]

Closed 5 years ago.
I have two text files like below:
File1.txt
A|234-211
B|234-244
C|234-351
D|999-876
E|456-411
F|567-211
File2.txt
234-244
999-876
567-211
And I want to compare both files and get the matching lines, like below:
Desired output
B|234-244
D|999-876
F|567-211
$ grep -F -f file2.txt file1.txt
B|234-244
D|999-876
F|567-211
The -F makes grep search for fixed strings (not patterns). Both -F and -f are POSIX options to grep.
Note that this assumes your file2.txt does not contain short strings like 11 which could lead to false positives.
Try:
grep -f File2.txt File1.txt
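If file2.txt might contain short strings that match as substrings elsewhere on a line, an awk two-file pass that compares the second |-separated field exactly avoids those false positives; a sketch using the question's file names:

```shell
# Recreate (part of) the question's files
printf '%s\n' 'A|234-211' 'B|234-244' 'D|999-876' > File1.txt
printf '%s\n' '234-244' '999-876' > File2.txt

# First pass (NR==FNR) loads File2.txt lines as keys; the second pass
# prints lines from File1.txt whose second field is an exact key match.
awk -F '|' 'NR==FNR {want[$0]; next} $2 in want' File2.txt File1.txt
```

For this input it prints B|234-244 and D|999-876.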

Bash script manipulation [closed]

Closed 8 years ago.
I work with Bash scripts and I want to extract a line from a big text by matching a specific word.
For example, I have these lines:
first fffffffffffffffffffffffffff
.................................
second ssssssssssssssssssssssssss
.................................
third ttttttttttttttttttttttttttt
and I want to get the ssssssssssssssssssssssssss string.
Can anybody help me?
Is this what you want?
echo "$longstring" | awk '$1 == "second" { print $2 }'
Since you don't seem to give any criterion for which line to output, I suggest something like:
echo "ssssssssssssssssssssssssss"
This is pretty robust with regard to the content of your input, doesn't depend on a "file", and is fast.
cat filename | grep "^second" | cut -d " " -f 2
Or, if you are ALF:
<filename grep "^second" | cut -d " " -f 2
Or
grep "^second" filename | cut -d " " -f 2
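A quick check of the awk variant against the question's sample (the file name filename is just illustrative):

```shell
# Recreate the sample lines
printf '%s\n' \
  'first fffffffffffffffffffffffffff' \
  'second ssssssssssssssssssssssssss' \
  'third ttttttttttttttttttttttttttt' > filename

# Print field 2 of the line whose first field is exactly "second"
awk '$1 == "second" {print $2}' filename
```

This prints ssssssssssssssssssssssssss, the same as the grep | cut pipelines.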

Unix/Linux comma-delimited files [closed]

Closed 8 years ago.
I have a comma-delimited file that basically has the structure:
1, 2, 3, 4, 5 ,,,, 6, etc
I have to count the number of unique values in the 6th column. Please help.
(btw, this is an intro Unix/Linux class, so it should be doable with basic commands)
cut -d "," -f 6 myFile |sort |uniq -c |wc -l
Looking into my crystal ball, I see that your class is discussing awk. Try
awk -F, '!a[$6]++{c++} END{ print c }' input-file
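A small check that both answers agree, on made-up data with two distinct values in column 6:

```shell
# Sample data: three rows, two distinct values in the 6th column
printf '%s\n' 'a,b,c,d,e,x' 'a,b,c,d,e,y' 'a,b,c,d,e,x' > input-file

# cut/sort pipeline: extract column 6, deduplicate, count lines
cut -d, -f6 input-file | sort -u | wc -l

# awk: bump the counter the first time each 6th-field value is seen
awk -F, '!a[$6]++{c++} END{ print c }' input-file
```

Both print 2. Note that sort -u (or sort | uniq) is enough here; uniq -c's counts are discarded by wc -l anyway.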
