Insert character in a file with bash - linux

Hello, I have a problem in bash.
I have a file and I am trying to insert a period at the end of each line:
cat file | sed s/"\n"/\\n./g > salida.csv
but it doesn't work =(.
I need this because I want to count the lines containing a given word;
I need to count the lines with the same country,
and if I just use grep, it matches both colombia and colombias.
And another question: how can I count the lines with the same country?
For example:
1 colombia
2 brazil
3 ecuador
4 colombias
5 colombia
and the counts I would like to get are:
colombia 2
colombias 1
ecuador 1
brazil 1

How about:
cut -f2 -d' ' salida.csv | sort | uniq -c
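For instance, running it against the original five-line sample (before the period is appended) prints something like:
$ cut -f2 -d' ' file | sort | uniq -c
      1 brazil
      2 colombia
      1 colombias
      1 ecuador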

Since a sed solution was posted (probably the best tool for this task), I'll contribute an awk one:
awk '$NF=$NF"."' file > salida.csv
Update:
$ cat input
1 colombia
2 brazil
3 ecuador
4 colombias
5 colombia
$ awk '{a[$2]++}END{for (i in a) print i, a[i]}' input
brazil 1
colombias 1
ecuador 1
colombia 2
...and, please stop updating your question with different questions...

Your command line has a few problems. Some that matter, some that are style choices, but here's my take:
Unnecessary cat. sed can take a filename as an argument.
Your sed command doesn't need the g. Since each line only has one end, there's no reason to tell it to look for more.
Don't look for the newline character, just match the end of line with $.
That leaves you with:
sed 's/$/./' file > salida.csv
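On the sample data from the question, salida.csv then contains:
1 colombia.
2 brazil.
3 ecuador.
4 colombias.
5 colombia.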
Edit:
If your real question is "How do I grep for colombia, but not match colombias?", you just need to use the -w flag to match whole words:
grep -w colombia file
If you want to count them, just add -c:
grep -c -w colombia file
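On the sample data above, for instance, that gives:
$ grep -c -w colombia file
2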
Read the grep(1) man page for more information.

Related

Add some words to one column of a text file on a Linux system?

I have a file like this, with two columns (there are more lines than the three shown):
1 21
1 21
1 21
...
to become
1 a21
1 a21
1 a21
...
It should still be two columns, just with the same word "a" added. I guess this can be solved with awk or sed, but I didn't find out how to do it. Thanks in advance.
Using awk:
awk '{$2="a"$2; print}' file
Using sed:
sed -E 's/^[0-9]+[[:blank:]]+/&a/' file
Using perl in substitution mode, like sed:
perl -pe 's/^\d+\s+/$&a/' file
Sed solution:
sed -e 's/ \{1,\}/&a/'
With extended regexes (-E or -r):
sed -E 's/ +/&a/'
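A quick check on the sample input above (any of these commands should produce the same result):
$ sed -E 's/ +/&a/' file
1 a21
1 a21
1 a21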

Using cat and grep to print line and its number but ignore at the same time blank lines

I have created a simple script that prints the contents of a text file using cat command. Now I want to print a line along with its number, but at the same time I need to ignore blank lines. The following format is desired:
1 George Jones Berlin 2564536877
2 Mike Dixon Paris 2794321976
I tried using
cat -n catalog.txt | grep -v '^$' catalog.txt
But I get the following results:
George Jones Berlin 2564536877
Mike Dixon Paris 2794321976
I have managed to get rid of the blank lines, but line's number is not printed. What am I doing wrong?
Here are the contents of catalog.txt:
George Jones Berlin 2564536877
Mike Dixon Paris 2794321976
Your solution doesn't work because cat -n catalog.txt numbers every line, so the formerly blank lines are no longer blank and grep -v '^$' has nothing left to filter out.
You can pipe grep's output to cat -n:
grep -v '^$' yourFile | cat -n
Example:
test.txt:
Hello
how
are
you
?
$ grep -v '^$' test | cat -n
1 Hello
2 how
3 are
4 you
5 ?
At first glance, you should drop the file name in the command line to grep to make grep read from stdin:
cat -n catalog.txt | grep -v '^$'
In your code, you supplied catalog.txt to grep, which made it read from the file and ignore its standard input. So you're basically grepping from the file instead of the output of cat piped to its stdin.
To correctly ignore blank lines and then prepend line numbers, switch the order of grep and cat:
grep -v '^$' catalog.txt | cat -n
Another awk
$ awk 'NF{$0=FNR " " $0}NF' 48488182
1 George Jones Berlin 2564536877
3 Mike Dixon Paris 2794321976
The second line was blank in this case.
A single, simple, basic awk solution could help you here.
Solution 1st:
awk 'NF{print FNR,$0}' Input_file
Solution 2nd: The above prints the original line numbers, so blank lines still consume a number; in case you don't want the empty lines to use up line numbers, the following may help:
awk '!NF{FNR--;next} NF{print FNR,$0}' Input_file
Solution 3rd: Using only grep, though the output will have a colon between the line number and the line.
grep -v '^$' Input_file | grep -n '.*'
Explanation of Solution 1st:
NF: checks whether NF (awk's built-in variable holding the number of fields on the current line) is non-zero, i.e. the line is not empty; when this condition is TRUE, the action that follows is executed.
{print FNR,$0}: uses awk's print function to output FNR (awk's built-in variable holding the current line number) followed by $0, the current line.
This satisfies both of the OP's requirements: skipping the empty lines and printing the line numbers along with the lines. I hope this helps.
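For instance, assuming catalog.txt has a blank line between the two records, the difference between Solution 1st and Solution 2nd is only in the numbering:
$ awk 'NF{print FNR,$0}' catalog.txt
1 George Jones Berlin 2564536877
3 Mike Dixon Paris 2794321976
$ awk '!NF{FNR--;next} NF{print FNR,$0}' catalog.txt
1 George Jones Berlin 2564536877
2 Mike Dixon Paris 2794321976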

How To Delete First X Lines Based On Minimum Lines In File

I have a file with more than 10,000 lines. Using the following command, I am deleting all lines after line 10,000.
sed -i '10000,$ d' file.txt
However, now I would like to delete the first X lines so that the file has no more than 10,000 lines.
I think it would be something like this:
sed -i '1,$x d' file.txt
Where $x would be the number of lines over 10,000. I'm a little stuck on how to write the if, then part of it. Or, I was thinking I could use the original command and just cat the file in reverse?
For example, if we wanted just 3 lines from the bottom (seems simpler after a few helpful answers):
Input:
Example Line 1
Example Line 2
Example Line 3
Example Line 4
Example Line 5
Expected Output:
Example Line 3
Example Line 4
Example Line 5
Of course, if you know a more efficient way to write the command, I would be open to that too. Your positive input is highly appreciated.
tail can do exactly what you want.
tail -n 10000 file.txt
For simplicity, I would reverse the file, keep the first 10000 lines, then re-reverse the file.
It makes saving the file in-place a touch more complicated
source=file.txt
temp=$(mktemp)
tac "$source" | sed '10000 q' | tac > "$temp" && mv "$temp" "$source"
Without reversing the file, you'd count the number of lines and do some arithmetic:
sed -i "1,$(( $(wc -l < file.txt) - 10000 )) d" file.txt
$ awk -v n=3 '{a[NR%n]=$0} END{for (i=NR+1;i<=(NR+n);i++) print a[i%n]}' file
Example Line 3
Example Line 4
Example Line 5
Add -i inplace if you have GNU awk and want to do "inplace" editing.
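Adapted to the question's 10,000-line limit and saved back through a temporary file (a sketch; file.txt.tmp is just an illustrative name):
awk -v n=10000 '{a[NR%n]=$0} END{for (i=NR+1;i<=(NR+n);i++) print a[i%n]}' file.txt > file.txt.tmp && mv file.txt.tmp file.txt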
To keep the first 10000 lines:
head -n 10000 file.txt
To keep the last 10000 lines:
tail -n 10000 file.txt
Test with your example file:
tail -n 3 file.txt
Example Line 3
Example Line 4
Example Line 5
tac file.txt | sed "$x q" | tac | sponge file.txt
The sponge command (from the moreutils package) is useful here in avoiding an additional temporary file.
tail -10000 <<<"$(cat file.txt)" > file.txt
Okay, not «just» tail, but this way it's capable of in-place truncation: the command substitution reads the whole file before the > redirection truncates it.

How to grep within a grep

I have a bunch of massive text files, about 100MB each.
I want to grep to find entries that have 'INDIANA JONES' in it:
$ grep -ir 'INDIANA JONES' ./
Then, I would like to find the entries where there is the word PORTUGAL within 5,000 characters of the INDIANA JONES term. How would I do this?
# in pseudocode
grep -ir 'INDIANA JONES' ./ | grep 'PORTUGAL' within 5000 char
Use grep's -o flag to output up to 5000 characters surrounding the match, then search those characters for the second string. For example:
grep -ioE ".{0,5000}INDIANA JONES.{0,5000}" file.txt | grep "PORTUGAL"
If you need the original match, add the -n flag to the second grep and pipe into:
cut -f1 -d: > line_numbers.txt
then you could use awk to print those lines:
awk 'FNR==NR { a[$0]; next } FNR in a' line_numbers.txt file.txt
To avoid the temporary file, this could be written like:
awk 'FNR==NR { a[$0]; next } FNR in a' <(grep -ioE ".{0,5000}INDIANA JONES.{0,5000}" file.txt | grep -n "PORTUGAL" | cut -f1 -d:) file.txt
For multiple files, use find and a bash loop:
for i in $(find . -type f); do
awk 'FNR==NR { a[$0]; next } FNR in a' <(grep -ioE ".{0,5000}INDIANA JONES.{0,5000}" "$i" | grep -n "PORTUGAL" | cut -f1 -d:) "$i"
done
One way to deal with this is with gawk. You could set the record separator to either INDIANA JONES or PORTUGAL and then perform a length check on the record (after stripping newlines, assuming newlines do not count towards the limit of 5000). You may have to resort to find to run this recursively within a directory
awk -v RS='INDIANA JONES|PORTUGAL' '{a = $0;
gsub("\n", "", a)};
((RT ~ /IND/ && prevRT ~/POR/) || (RT ~ /POR/ && prevRT ~/IND/)) && length(a) < 5000{found=1};
{prevRT=RT};
END{if (found) print FILENAME}' file.txt
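The answer above notes that find may be needed to run this recursively; one way to do that (a sketch) is to have find invoke the same gawk program once per file:
find . -type f -exec gawk -v RS='INDIANA JONES|PORTUGAL' '{a = $0;
gsub("\n", "", a)};
((RT ~ /IND/ && prevRT ~/POR/) || (RT ~ /POR/ && prevRT ~/IND/)) && length(a) < 5000{found=1};
{prevRT=RT};
END{if (found) print FILENAME}' {} \;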
Consider installing ack-grep.
sudo apt-get install ack-grep
ack-grep is a more powerful version of grep.
There's no trivial solution to your question (that I can think of) outside of a full batch script, but you can use the -A and -B flags on ack-grep to specify a number of trailing or leading lines to output, respectively.
This may not be a number of chars, but it is a step further in that direction.
While this may not be a solution, it might give you some idea as to how to do this. Look up filters like ack, awk, sed, etc. and see if you can find one with a flag for this kind of behaviour.
The ack-grep manual:
http://manpages.ubuntu.com/manpages/hardy/man1/ack-grep.1p.html
EDIT:
I think the sad news is, what you might think you're looking for is something like:
grep "\(INDIANA JONES\).\{1,5000\}PORTUGAL" filename
The problem is that, even on a small file, this query is going to take an impossibly long time.
I got it to work with a smaller number; it's a size problem.
For such a large set of files, you'll need to do this in more than one step.
A Solution:
The only solution I know of is to use the leading and trailing context output from ack-grep.
Step 1: how long are your lines?
If you knew how many lines out you had to go
(and you could estimate/calculate this a few ways) then you'd be able to grep the output of the first grep. Depending on what's in your file, you should be able to get a decent upper bound as to how many lines is 5000 chars (if a line has 100 chars average, 50+ lines should cover you, but if it has 10 chars, you'll need 500+).
You've got to determine the maximum number of lines that could be 5000 chars. You could guess or pick a high range if you like, but that'll be up to you. It's your data.
With that, call: (if you needed 100 lines for 5000 chars)
ack-grep -ira "PORTUGAL" -A 100 -B 100 filename
and
ack-grep -ira "INDIANA JONES" -A 100 -B 100 filename
replace the 100s with what you need.
Step 2: parse the output
you'll need to take the matches that ack-grep returns and parse them, looking for any matches again, within these sub-ranges.
Look for INDIANA JONES in the first PORTUGAL ack-grep match output, and look for PORTUGAL in the second set of matches.
This should take a bit more work, likely involving a bash script (I might see if I can get one working this week), but it solves your massive-data problem, by breaking it down into more manageable chunks.
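For example, a rough sketch of Step 2, assuming 100 context lines is enough to cover 5000 characters, is to search each set of context output for the other term:
ack-grep -ira "PORTUGAL" -A 100 -B 100 filename | grep -i "INDIANA JONES"
ack-grep -ira "INDIANA JONES" -A 100 -B 100 filename | grep -i "PORTUGAL"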
grep 'INDIANA JONES' . -iR -l | while read filename; do head -c 5000 "$filename" | grep -n PORTUGAL -H --label="$filename" ; done
This works as follows:
grep 'INDIANA JONES' . -iR -l. Search for all files in or below the current directory. Case insensitive (-i). And only print the names of the files that match (-l), don't print any content.
| while read filename; do ...|...|...; done for each line of input, store it in variable $filename and execute the pipeline.
Now, for each file that matched 'INDIANA JONES', we do
head -c 5000 "$filename" - extract the first 5000 characters
grep ... - search for PORTUGAL. Print the filename (-H), but since grep is reading from a pipe rather than a file, tell it which filename to display with --label="$filename". Print line numbers too (-n).

How to crop(cut) text files based on starting and ending line-numbers in cygwin?

I have few log files around 100MBs each.
Personally I find it cumbersome to deal with such big files. I know that log lines that are interesting to me are only between 200 to 400 lines or so.
What would be a good way to extract the relevant log lines from these files, i.e. I just want to dump the lines in a given range of line numbers to another file?
For example, the inputs are:
filename: MyHugeLogFile.log
Starting line number: 38438
Ending line number: 39276
Is there a command that I can run in cygwin to cat out only that range in that file? I know that if I can somehow display that range in stdout then I can also pipe to an output file.
Note: Adding Linux tag for more visibility, but I need a solution that might work in cygwin. (Usually linux commands do work in cygwin).
Sounds like a job for sed:
sed -n '8,12p' yourfile
...will send lines 8 through 12 of yourfile to standard out.
If you want to prepend the line number, you may wish to use cat -n first:
cat -n yourfile | sed -n '8,12p'
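Applied to the file and line numbers from the question, that would be, for example (extracted.log is just an example output name):
sed -n '38438,39276p' MyHugeLogFile.log > extracted.log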
You can use wc -l to figure out the total # of lines.
You can then combine head and tail to get at the range you want. Let's assume the log is 40,000 lines and you want lines 38,438 through 39,276: that's the last 1,563 lines, and of those you want the first 839. So:
tail -1563 MyHugeLogFile.log | head -839 | ....
Or there's probably an easier way using sed or awk.
I saw this thread when I was trying to split a file into files of 100,000 lines each. A better solution than sed for that is:
split -l 100000 database.sql database-
It will give files like:
database-aaa
database-aab
database-aac
...
And if you simply want to cut out part of a file - say from line 26 to 142 - and write it to a new file:
cat file-to-cut.txt | sed -n '26,142p' >> new-file.txt
How about this:
$ seq 1 100000 | tail -n +10000 | head -n 10
10000
10001
10002
10003
10004
10005
10006
10007
10008
10009
It uses tail to output everything from the 10,000th line onwards and then head to keep only 10 lines.
The same (almost) result with sed:
$ seq 1 100000 | sed -n '10000,10010p'
10000
10001
10002
10003
10004
10005
10006
10007
10008
10009
10010
This one has the advantage of allowing you to input the line range directly.
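With the tail/head combination, the question's range (839 lines starting at line 38438) would look something like:
tail -n +38438 MyHugeLogFile.log | head -n 839 > extracted.log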
If you are interested only in the last X lines, you can use the "tail" command like this.
$ tail -n XXXXX yourlogfile.log >> mycroppedfile.txt
This will save the last XXXXX lines of your log file to a new file called "mycroppedfile.txt"
This is an old thread but I was surprised nobody mentioned grep. The -A option allows specifying a number of lines to print after a search match and the -B option includes lines before a match. The following command would output 10 lines before and 10 lines after occurrences of "my search string" in the file "mylogfile.log":
grep -A 10 -B 10 "my search string" mylogfile.log
If there are multiple matches within a large file the output can rapidly get unwieldy. Two helpful options are -n which tells grep to include line numbers and --color which highlights the matched text in the output.
If there is more than one file to be searched, grep allows multiple files to be listed, separated by spaces. Wildcards can also be used. Putting it all together:
grep -A 10 -B 10 -n --color "my search string" *.log someOtherFile.txt
