How to count number of lines with only 1 character? - linux

I'm trying to print a count of the lines that contain only one character. I have a file with 200k lines, and some of the lines contain only a single character (of any type).
Since I have little experience, I googled a lot, scraped documentation, and came up with this solution mixed together from different sources:
awk -F^\w$ '{print NF-1}' myfile.log
I was expecting that ^\w$ would filter the lines with a single character, and the pattern itself seems to work. However, I'm not getting the number of lines containing a single character; instead, the command prints a value for every line.

If a non-awk solution is OK:
grep -c '^.$' myfile.log

You could try the following:
awk '/^.$/{++c}END{print c}' file
The variable c is incremented for every line containing only 1 character (any character).
When the parsing of the file is finished, the variable is printed.
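For example, a quick check on stdin with a made-up five-line sample:
$ printf 'a\nfoo\nb\n\nbar\n' | awk '/^.$/{++c}END{print c}'
2
Only a and b match; the empty line does not, because . requires exactly one character.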

In awk, rules like your {print NF-1} are executed for each line. To print only one thing for the whole file you have to use END { print ... }. There you can print a counter which you increment each time you see a line with one character.
However, I'd use grep instead because it is easier to write and faster to execute:
grep -xc . yourFile
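The same made-up sample from above gives the same count, since -x makes the . pattern match only whole lines of exactly one character:
$ printf 'a\nfoo\nb\n\nbar\n' | grep -xc .
2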

Related

Prefix search names to output in bash

I have a simple egrep command searching for multiple strings in a text file; it outputs either null or a value. Below are the command and the output.
cat Output.txt|egrep -i "abc|def|efg"|cut -d ':' -f 2
The output is:
xxx
(null)
yyy
Now, I am trying to prefix my search text to the output, like below.
abc:xxx
def:
efg:yyy
Any help on the code to achieve this or where to start would be appreciated.
Since I do not know your exact input file content (it is not specified in the question), I will make some hypotheses in order to answer your question.
Case 1: the patterns you are looking for are always located in the same column
If it is the case, the answer is quite straightforward:
$ cat grep_file.in
abc:xxx:uvw
def:::
efg:yyy:toto
xyz:lol:hey
$ egrep -i "abc|def|efg" grep_file.in | cut -d':' -f1,2
abc:xxx
def:
efg:yyy
After the grep, just use cut with the two columns that you are looking for (here, columns 1 and 2).
REMARK:
Do not cat the file and then pipe it into grep; that does the work twice. Your grep command can read the file by itself, so do not read it a second time. It may not matter on small files, but you will feel the difference on 10GB files, for example!
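For example, the pipeline from the question works unchanged with the cat dropped (same hypothetical Output.txt):
$ egrep -i "abc|def|efg" Output.txt | cut -d ':' -f 2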
Case 2: the patterns you are looking for are NOT located in the same column
In this case it is a bit more tricky, but not impossible. There are many ways of doing it; here I will detail the awk way:
$ cat grep_file2.in
abc:xxx:uvw
::def:
efg:yyy:toto
xyz:lol:hey
If your input file is in this format; with your pattern that could be located anywhere:
$ awk 'BEGIN{FS=":";ORS=FS}{tmp=0;for(i=1;i<=NF;i++){tmp=match($i,/abc|def|efg/);if(tmp){print $i;break}}if(tmp){printf "%s\n", $2}}' grep_file2.in
abc:xxx
def:
efg:yyy
Explanations:
FS=":";ORS=FS define your input/output field separator at : Then on each line you define a test variable that will become true when you reach your pattern, you loop on all the fields of the line until you reach it if it is the case you print it, break the loop and print the second field + an EOL char.
If you do not meet your pattern you do nothing.
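For readability, the same program can be laid out with comments (this is just a re-statement of the command above, not a new solution):
awk '
BEGIN { FS = ":"; ORS = FS }            # read and write ":"-separated fields
{
    tmp = 0
    for (i = 1; i <= NF; i++) {         # scan every field of the line
        tmp = match($i, /abc|def|efg/)  # non-zero when the field matches
        if (tmp) { print $i; break }    # print the matching field (ORS appends ":")
    }
    if (tmp) printf "%s\n", $2          # then the 2nd field and a newline
}' grep_file2.in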
If you prefer the sed way, you can use the following command:
$ sed -n '/abc\|def\|efg/{h;s/.*\(abc\|def\|efg\).*/\1:/;x;s/^[^:]*:\([^:]*\):.*/\1/;H;x;s/\n//p}' grep_file2.in
abc:xxx
def:
efg:yyy
Explanations:
/abc\|def\|efg/{...} filters the lines that contain one of the patterns provided and executes the instructions in the block for them. h;s/.*\(abc\|def\|efg\).*/\1:/ saves the line in the hold space and replaces the pattern space with the matched pattern followed by a colon. x;s/^[^:]*:\([^:]*\):.*/\1/ exchanges the pattern and hold spaces and extracts the second column element. Last but not least, H;x;s/\n//p regroups both extracted elements on one line and prints it.
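If it helps, here is the same command spread over several lines with comments (GNU sed accepts # comment lines in a script; again just a re-statement, not a new solution):
sed -n '
/abc\|def\|efg/ {
  # save a copy of the whole line in the hold space
  h
  # reduce the pattern space to the matched pattern plus ":"
  s/.*\(abc\|def\|efg\).*/\1:/
  # swap spaces: the pattern space holds the original line again
  x
  # keep only the second :-separated field
  s/^[^:]*:\([^:]*\):.*/\1/
  # append it to the hold space (H inserts a newline)
  H
  # bring "pattern:\nfield" back, join the two parts, and print
  x
  s/\n//p
}' grep_file2.in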
Try this:
$ egrep -io "(abc|def|efg):[^:]*" file
This will print the match and the next token after the delimiter.
If we can assume that there are only two fields, that abc etc will always match in the first field, and that getting the last match on a line which contains multiple matches is acceptable, a very simple sed script could work.
sed -n 's/^[^:]*\(abc\|def\|efg\)[^:]*:\([^:]*\)/\1:\2/p' file
If other but similar conditions apply (e.g. there are three fields or more but we don't care about matches in the first two) the required modifications are trivial. If not, you really need to clarify your question.

removing text between pipe and comma

I have an enormously long file with text separated like this:
subtlechanges|NEW=19647490,subtle|NEW=19638255
and I want the text to look like this:
subtlechanges,subtle
I tried using \|.*$, but it removes everything after the first pipe. Any guesses? Thanks in advance.
If I understand you correctly, we have a file that may look like:
$ cat file
subtlechanges|NEW=19647490,subtle|NEW=19638255
And, we want to remove everything from a pipe character to the next comma. In that case:
$ sed 's/|[^,]*//g' file
subtlechanges,subtle
How it works
In sed, substitute commands look like s/old/new/g where old is a regular expression for what is removed, new is what gets substituted in, and the final g signifies that we want to do this not just once per line but as many times per line as we can.
The regular expression that we use for old here is |[^,]*. This matches a pipe, |, and any characters after up to, but not including, the first comma.
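A quick check with the line from the question:
$ echo 'subtlechanges|NEW=19647490,subtle|NEW=19638255' | sed 's/|[^,]*//g'
subtlechanges,subtle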
Another approach: using comma or pipe as the field separator, print the 1st, 3rd, ..., i.e., every odd field.
awk -F '[,|]' '{
    sep = ""
    for (i = 1; i < NF; i += 2) {
        printf "%s%s", sep, $i
        sep = ","
    }
    print ""
}' file
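Assuming the sample line from the question is in file, a run should look like this (same program, written on one line):
$ awk -F '[,|]' '{sep=""; for (i=1; i<NF; i+=2) {printf "%s%s", sep, $i; sep=","}; print ""}' file
subtlechanges,subtle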

remove almost-duplicates containing substring of next line

I need a way to remove duplicate strings in lines, but let me explain, because I have already tried uniq. In a file, I get these two lines:
ANASI:A=4-63261950;
ANASI:A=4-63261950,ES=541;
The string 4-63261950 is duplicated in both lines, but the lines themselves are different; only that string is equal in both. I just need a way to process the entire file, removing the first line and leaving only the one with ANASI:A=4-63261950,ES=541;. The file will contain several lines with this exact same scenario. Is there a way to do this with sed or something?
awk to the rescue...
assuming your delimiters and structure stay the same
sort file | awk -F"[;,]" '!a[$1]++'
This will pick the first one based on lexical sort order (, sorts before ;).
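With the two sample lines from the question in file, that gives:
$ sort file | awk -F"[;,]" '!a[$1]++'
ANASI:A=4-63261950,ES=541;
sort puts the comma variant first, and awk keeps only the first line seen for each key $1.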
If the file is huge (and memory is a problem):
sort YourFile | awk -F '[;,]' 'Last != $1{print}{Last = $1}'
This might work for you (GNU sed):
sed -r 'N;/^(.*);\n\1,/!P;D' file
This uses a moving window to compare successive pairs of lines to print the required match.
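With the same two sample lines in file, this should give:
$ sed -r 'N;/^(.*);\n\1,/!P;D' file
ANASI:A=4-63261950,ES=541;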

How can I remove lines that contain more than N words

Is there a good one-liner in bash to remove lines containing more than N words from a file?
example input:
I want this, not that, but thank you it is very nice of you to offer.
The very long sentence finding form ordering system always and redundantly requires an initial, albeit annoying and sometimes nonsensical use of commas, completion of the form A-1 followed, after this has been processed by the finance department and is legal, by a positive approval that allows for the form B-1 to be completed after the affirmative response to the form A-1 is received.
example output:
I want this, not that, but thank you it is very nice of you to offer.
In Python I would code something like this:
if len(line.split()) < 40:
    print line
To only show lines containing less than 40 words, you can use awk:
awk 'NF < 40' file
Using the default field separator, each word is treated as a field. Lines with less than 40 fields are printed.
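On the example input above (saved as file), only the first sentence survives:
$ awk 'NF < 40' file
I want this, not that, but thank you it is very nice of you to offer.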
Note: this answer assumes a different interpretation of the question: how to print the lines that are shorter than a given number of characters.
Use awk with length():
awk 'length($0)<40' file
You can even give the length as a parameter:
awk -v maxsize=40 'length($0) < maxsize' file
A test with 10 characters:
$ cat a
hello
how are you
i am fine but
i would like
to do other
things
$ awk 'length($0)<10' a
hello
things
If you feel like using sed for this, you can say:
sed -rn '/^.{,39}$/p' file
This checks whether the line contains fewer than 40 characters; if so, it prints it.
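Repeating the 10-character test from the awk example above:
$ sed -rn '/^.{,9}$/p' a
hello
things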

Combine matching lines using sed or awk?

I have a file like the following:
1,
cake:01351
12,
bun:1063
scone:13581
biscuit:1931
14,
jelly:1385
I need to convert it so that when a number is read at the start of a line it is combined with the line beneath it, but if there is no number at the start the line is left as is. This would be the output that I need:
1,cake:01351
12,bun:1063
scone:13581
biscuit:1931
14,jelly:1385
I'm having a lot of trouble achieving this with sed; it seems it may not be the best tool for what I think should be quite simple.
Any suggestions greatly appreciated.
A very basic sed implementation:
sed -e '/^[0-9]/{N;s/\n//;}'
This relies on the first character on only the 'number' lines being a number (as you specified).
It
matches lines starting with a number, ^[0-9]
brings in the next line, N
deletes the embedded newline, s/\n//
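Running it on the question's input (saved as file) produces exactly the requested output:
$ sed -e '/^[0-9]/{N;s/\n//;}' file
1,cake:01351
12,bun:1063
scone:13581
biscuit:1931
14,jelly:1385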
The following is from a file on my intranet; I can't recall where I found the handy sed one-liner. You might find something if you search for 'sed one-liner'.
Have you ever needed to combine lines of text, but it's too tedious to do it by hand?
For example, imagine that we have a text file with hundreds of lines which look like this:
14/04/2003,10:27:47,0
IdVg,3.000,-1.000,0.050,0.006
GmMax,0.011,0.975,0.005
IdVg,3.000,-1.000,0.050,0.006
GmMax,0.011,0.975,0.005
14/04/2003,10:30:51,600
IdVg,3.000,-1.000,0.050,0.006
GmMax,0.011,0.975,0.005
IdVg,3.000,-1.000,0.050,0.006
GmMax,0.010,0.975,0.005
14/04/2003,10:34:02,600
IdVg,3.000,-1.000,0.050,0.006
GmMax,0.011,0.975,0.005
IdVg,3.000,-1.000,0.050,0.006
GmMax,0.010,0.975,0.005
Each date (14/04/2003) is the start of a data record, and it continues on the next four lines.
We would like to input this to Excel as a 'comma separated value' file, and see each record in its own row.
In our example, we need to append any line starting with a G or I to the preceding line, and insert a comma, so as to produce the following:
14/04/2003,10:27:47,0,IdVg,3.000,-1.000,0.050,0.006,GmMax,0.011,0.975,0.005,IdVg,3.000,...
14/04/2003,10:30:51,600,IdVg,3.000,-1.000,0.050,0.006,GmMax,0.011,0.975,0.005,IdVg,3.000,...
14/04/2003,10:34:02,600,IdVg,3.000,-1.000,0.050,0.006,GmMax,0.011,0.975,0.005,IdVg,3.000,...
This is a classic application of a 'regular expression' and, once again, sed comes to the rescue.
The editing can be done with a single sed command:
sed -e :a -e '$!N;s/\n\([GI]\)/,\1/;ta' -e 'P;D' filename >newfilename
I didn't say it would be obvious, or easy, did I?
This is the kind of command you write down somewhere for the rare occasions when you need it.
Try a regular expression, such as:
sed '/[0-9]\+,/{N;s/\n//}'
That checks the line for a number (0-9) followed by a comma; if it matches, the next line is pulled in and the newline between them is replaced with nothing, removing it.
Another awk solution, less cryptic than some other answers:
awk '/^[0-9]/ {n = $0; getline; print n $0; next} 1'
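On the question's input this also produces the requested output:
$ awk '/^[0-9]/ {n = $0; getline; print n $0; next} 1' file
1,cake:01351
12,bun:1063
scone:13581
biscuit:1931
14,jelly:1385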
$ awk 'ORS= /^[0-9]+,$/?" ":"\n"' file
1, cake:01351
12, bun:1063
scone:13581
biscuit:1931
14, jelly:1385