Easy way to count occurrences of a key
my way:
cat \
public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803
| grep 18310680207 | wc -l
I need an easy way to count this. In fact, my question is really how to use cat together with grep.
File list:
public.log.2015050723
public.log.2015050800
public.log.2015050801
public.log.2015050802
public.log.2015050803
This is easier because it uses one fewer process:
cat public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803 | # Note pipe or backslash needed here!
grep -c 18310680207
Note that the pipe symbol needs to appear after the last file name, or you need a backslash after the last file name.
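For reference, here is the backslash variant, with a backslash after the last file name so the pipe can start the next line:
cat public.log.2015050723 \
    public.log.2015050800 \
    public.log.2015050801 \
    public.log.2015050802 \
    public.log.2015050803 \
    | grep -c 18310680207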
If you need the occurrences per file, then you can lose the cat too (which is what anubhava suggested):
grep -c 18310680207 \
public.log.2015050723 \
public.log.2015050800 \
public.log.2015050801 \
public.log.2015050802 \
public.log.2015050803
You can reduce the list of file names, with your sample file names, to:
cat public.log.2015050723 public.log.201505080[0-3] |
grep -c 18310680207
or:
grep -c 18310680207 public.log.2015050723 public.log.201505080[0-3]
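If you also want a single grand total on top of the per-file counts, one small sketch is to sum grep -c's file:count output with awk:
grep -c 18310680207 public.log.2015050723 public.log.201505080[0-3] |
awk -F: '{ total += $NF } END { print total }'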
I have two commands whose output I want to capture in variables:
val=`awk -F "\"" '{print $2}' ~/.cache/wal/colors-wal-dwm.h | sed -n -e 1,3p -e 5,7p`
dummy=`printf "dwm.normfgcolor:\ndwm.normbgcolor:\ndwm.normbordercolor:\ndwm.selfgcolor:\ndwm.selbgcolor:\ndwm.selbordercolor:"`
They basically print some stuff. I want to merge the output with the paste command (this doesn't work):
paste <($dummy) <($val)
I wanted to avoid temp files but at this point I'm out of ideas. Thanks in advance.
$dummy is a variable, not a command to execute. echo is a command. printf is another command.
paste <(echo "$dummy") <(echo "$val")
Do not use backticks - use $(..) instead. Check your scripts with shellcheck. Your code is somewhat unreadable to me... if you don't care about the variables, just don't use them:
awk -F '"' '{print $2}' ~/.cache/wal/colors-wal-dwm.h |
sed -n -e 1,3p -e 5,7p |
paste <(
printf "dwm.%s:\n" \
"normfgcolor" \
"normbgcolor" \
"normbordercolor" \
"selfgcolor" \
"selbgcolor" \
"selbordercolor"
) -
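If you do want to keep the two variables, a minimal sketch of the same assignments with $(...) instead of backticks (same file path as in the question):
val=$(awk -F '"' '{print $2}' ~/.cache/wal/colors-wal-dwm.h | sed -n -e 1,3p -e 5,7p)
dummy=$(printf 'dwm.%s:\n' normfgcolor normbgcolor normbordercolor selfgcolor selbgcolor selbordercolor)
paste <(printf '%s\n' "$dummy") <(printf '%s\n' "$val")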
I want to join two files
a.csv
customer|BillTo
100|3437146
103|3436977
b.csv
Customer|Parent
100|ANHEUSER-BUSCH INBEV
1025|INTRASTATE DISTRIBUTORS INC.
The joined file should look like this:
Parent|BillTo
ANHEUSER-BUSCH INBEV|3437146
I tried to use awk but it seems I can't get the result. Any help would be appreciated.
$ join -j1 --header -o 2.2,1.2 -t'|' \
<(head -n 1 a.csv; tail -n +2 a.csv | sort) \
<(head -n 1 b.csv; tail -n +2 b.csv | sort)
Parent|BillTo
ANHEUSER-BUSCH INBEV|3437146
This assumes GNU join, which, since you have this tagged linux, seems a safe bet, and a shell like bash, zsh, ksh93, etc. that supports <(command)-style redirection (/bin/sh usually doesn't).
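Since you mention trying awk, here is a minimal awk sketch of the same lookup join (no sorting needed; the array name bill is just illustrative):
awk -F'|' 'NR == FNR  { bill[$1] = $2; next }          # a.csv: map customer -> BillTo
           FNR == 1   { print "Parent|BillTo"; next }  # b.csv header
           $1 in bill { print $2 "|" bill[$1] }        # b.csv rows with a matching customer
          ' a.csv b.csv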
I am trying to grep for words in a file that are not present in another file
grep -v -w -i -r -f "dont_use_words.txt" "list_of_words.txt" >> inverse_match_words.txt
uniq -c -i inverse_match_words.txt | sort -nr
But I get duplicate values in my uniq command. Why so?
I am wondering if it might be because grep differentiates between strings, say, "AAA" found in "GIRLAAA", "AAABOY", "GIRLAAABOY" and therefore, I end up with duplicates.
When I do a grep -F "AAA" all of them are returned though.
I'd appreciate it if someone could help me out with this. I am new to the Linux OS.
uniq eliminates all but one line in each group of consecutive duplicate lines. The conventional way to use it, therefore, is to pass the input through sort first. You're not doing that, so yes, it is entirely possible that (non-consecutive) duplicates will remain in the output.
Example:
grep -v -w -i -f dont_use_words.txt list_of_words.txt \
| sort -f \
| uniq -c -i \
| sort -nr
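A quick illustration of why the sort matters, using throwaway input:
printf 'AAA\nBBB\nAAA\n' | uniq -c          # the two AAA lines are not adjacent: 1 AAA, 1 BBB, 1 AAA
printf 'AAA\nBBB\nAAA\n' | sort | uniq -c   # sorted first, so duplicates collapse: 2 AAA, 1 BBB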
Hi guys.
There is a file named 'server.conf' and I want to use the shell to change its content.
Line 115 contains server-bridge 192.168.50.225(ip) 255.255.0.0(mask) 192.168.10.50(begin ip) 192.168.10.90(end ip).
I want to change the ip, mask, begin ip and end ip. For example, I plan to change
`server-bridge 192.168.50.225 255.255.0.0 192.168.10.50 192.168.10.90`
into
`server-bridge 192.168.10.100 255.255.0.0 192.168.10.60 192.168.10.80`
What should I do with sed or others tools? Thanks a lot.
sed -i 's/server-bridge\ 192.168.50.225\ 255.255.0.0\ \ 192.168.10.50\ 192.168.10.90/server-bridge\ 192.168.10.100\ 255.255.0.0\ \ 192.168.10.60\ 192.168.10.80/' server.conf
You can also create a simple script where the new values to be substituted are stored in variables such as $ip, etc. sed -i will do in-place editing of the file.
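A minimal sketch of that idea (variable names are illustrative; -i assumes GNU sed):
#!/bin/sh
# new values for the server-bridge line
ip=192.168.10.100
mask=255.255.0.0
begin_ip=192.168.10.60
end_ip=192.168.10.80
# rewrite any line starting with server-bridge, in place
sed -i "s/^server-bridge .*/server-bridge $ip $mask $begin_ip $end_ip/" server.conf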
The best tool I have used is Vi.
(sudo) vi /home/mydoc.txt will open the file and allow you to do any editing you need. If you have never used Vi before, there are some great HOW-TOs and tutorials online. Here is one:
http://www.howtogeek.com/102468/a-beginners-guide-to-editing-text-files-with-vi/
But I'd encourage you to really read and experiment on test files before you change the file you are referencing, AND, PLEASE, make a backup of the file ( cp /home/mydoc.txt mydoc.txt-orig ) before you do. You can always remove an edited file that does not work, but restoring the original after extensive editing can be a hair-pulling experience.
You can use sed to do this, for example something like this.
Note: every space is escaped with \ (see a sed intro). Do not redirect sed's output back onto the file it is reading (> server.conf would truncate the file before sed reads it); write to a second file and copy it back:
sed 's/server-bridge\ 192.168.50.225\ 255.255.0.0\ \ 192.168.10.50\ 192.168.10.90/server-bridge\ 192.168.10.100\ 255.255.0.0\ \ 192.168.10.60\ 192.168.10.80/' server.conf > server.conf.new
cat server.conf.new > server.conf
You can use this awk (it keeps the (ip), (mask), (begin ip), (end ip) annotations, so it assumes they are literally part of the line):
awk -v ip='192.168.10.100' -v mask='255.255.0.0' -v bip='192.168.10.60' -v eip='192.168.10.80' \
    '/server-bridge/ { $2 = ip "(ip)"; $3 = mask "(mask)"; $4 = bip "(begin"; $6 = eip "(end" } 1' server.conf
Using sed to change all lines starting with server-bridge:
sed -i -e '/^server-bridge/!b' \
-e 'c server-bridge 192.168.10.100 255.255.0.0 192.168.10.60 192.168.10.80' input
To change the 115th line only:
sed -i -e '115!b' \
-e 'c server-bridge 192.168.10.100 255.255.0.0 192.168.10.60 192.168.10.80' input
I have got 2 files. Let us call them md5s1.txt and md5s2.txt. Both contain the output of a
find -type f -print0 | xargs -0 md5sum | sort > md5s.txt
command in different directories. Many files were renamed, but the content stayed the same. Hence, they should have the same md5sum. I want to generate a diff like
diff md5s1.txt md5s2.txt
but it should compare only the first 32 characters of each line, i.e. only the md5sum, not the filename. Lines with equal md5sum should be considered equal. The output should be in normal diff format.
Easy starter:
diff <(cut -d' ' -f1 md5s1.txt) <(cut -d' ' -f1 md5s2.txt)
Also, consider just
diff -EwburqN folder1/ folder2/
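If you only need to know which checksums appear in one file but not the other (rather than full diff output), comm is another option; the cut output is already sorted because the md5s files were sorted on the checksum prefix:
comm -23 <(cut -d' ' -f1 md5s1.txt) <(cut -d' ' -f1 md5s2.txt)   # checksums only in md5s1.txt
comm -13 <(cut -d' ' -f1 md5s1.txt) <(cut -d' ' -f1 md5s2.txt)   # checksums only in md5s2.txt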
Compare only the md5 column using diff on <(cut -c -32 md5sums.sort.XXX), and tell diff to print just the line numbers of added or removed lines, using --old/new-line-format='%dn'$'\n'. Pipe this into ed md5sums.sort.XXX so it will print only those lines from the md5sums.sort.XXX file.
diff \
--new-line-format='%dn'$'\n' \
--old-line-format='' \
--unchanged-line-format='' \
<(cut -c -32 md5sums.sort.old) \
<(cut -c -32 md5sums.sort.new) \
| ed md5sums.sort.new \
> files-added
diff \
--new-line-format='' \
--old-line-format='%dn'$'\n' \
--unchanged-line-format='' \
<(cut -c -32 md5sums.sort.old) \
<(cut -c -32 md5sums.sort.new) \
| ed md5sums.sort.old \
> files-removed
The problem with ed is that it will load the entire file into memory, which can be a problem if you have a lot of checksums. Instead of piping the output of diff into ed, pipe it into the following command, which will use much less memory.
diff … | (
    lnum=0
    while read -r lprint; do
        while [ "$lnum" -lt "$lprint" ]; do
            read -r line <&3
            ((lnum++))
        done
        echo "$line"
    done
) 3<md5sums.sort.XXX
If you are looking for duplicate files, fdupes can do this for you:
$ fdupes --recurse folder1/ folder2/
On Ubuntu you can install it with:
$ sudo apt-get install fdupes