How to filter lines that contain at least one "x" in the first nine characters - Linux

For example, if we have a file txt as below:
drwxr-sr-x 7 abcdefgetdf
drwxr-sr-x 7 abcdef123123sa
drwxr-sr-- 7 abcdefgetdf
drwxr-sr-- 7 abcdeadfvcxvxcvx
drwxr-sr-x 7 abcdef123ewlld

To answer the question in the title strictly:
awk 'substr($0, 1, 9) ~ /x/' txt
Though if you're interested in files with at least one execute permission bit set, then perhaps find -perm /0111 would be something to look into.
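For instance, a quick sketch with GNU find (-maxdepth 1 restricts it to the current directory):
find . -maxdepth 1 -perm /0111    # entries with any execute bit set for user, group or other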

awk solution:
ls -l | awk '$1~/x/'
$1~/x/ - regular expression match; it accepts only lines whose 1st field contains the character x

I couldn't figure out a one-liner that just checked the first 9 (or 10) characters for 'x'. But this little script does the trick. It uses "ls -l" as input instead of a file, but it's trivial to pass in a file to filter instead.
#!/bin/bash
ls -l | while IFS= read -r line    # -r keeps backslashes, IFS= keeps leading spaces
do
    perms="${line:0:10}"           # file type character plus permission bits
    if [[ "${perms}" =~ "x" ]]; then
        echo "${line}"
    fi
done
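That said, a grep one-liner can do the same check (a sketch; -E is needed for the {0,8} repetition): an x somewhere in the first nine characters is an x preceded by at most eight other characters.
grep -E '^.{0,8}x' txt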

Related

space/tab/newline insensitive comparison

Suppose I have these two files:
File 1:
1 2 3 4 5 6 7
File 2:
1
2
3
4
5
6
7
Is it possible to use diff to compare these two files so that it reports them as equal?
(Or if not, what other tools should I use?)
Thanks
You could collapse whitespace so file2 looks like file1, with every number on the same line:
$ cat file1
1 2 3 4 5 6 7
$ cat file2
1
2
4
3
5
6
7
$ diff <(echo $(< file1)) <(echo $(< file2))
1c1
< 1 2 3 4 5 6 7
---
> 1 2 4 3 5 6 7
Explanation:
< file # Equivalent to "cat file", but slightly faster since the shell doesn't
# have to fork a new process.
$(< file) # Capture the output of the "< file" command. Can also be written
# with backticks, as in `< file`.
echo $(< file) # Echo each word from the file. This will have the side effect of
# collapsing all of the whitespace.
<(echo $(< file)) # An advanced way of piping the output of one command to another.
# The shell opens an unused file descriptor (say fd 42) and pipes
# the echo command to it. Then it passes the filename /dev/fd/42 to
# diff. The result is that you can pipe two different echo commands
# to diff.
Alternatively, you may want to make file1 look like file2, with each number on a separate line. That will produce more useful diff output.
$ diff -u <(printf '%s\n' $(< file1)) <(printf '%s\n' $(< file2))
--- /dev/fd/63 2012-09-10 23:55:30.000000000 -0400
+++ file2 2012-09-10 23:47:24.000000000 -0400
@@ -1,7 +1,7 @@
1
2
-3
4
+3
5
6
7
This is similar to the first command with echo changed to printf '%s\n' to put a newline after each word.
Note: Both of these commands will fail if the files being diffed are overly long. This is because of the limit on command-line length. If that happens then you will need to workaround this limitation, say by storing the output of echo/printf to temporary files.
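Another way around the length limit would be to normalize the whitespace with tr instead (a sketch), so the file contents never have to fit on a command line:
diff <(tr -s ' \t\n' '\n' < file1) <(tr -s ' \t\n' '\n' < file2)
Here tr turns every run of spaces, tabs and newlines into a single newline, giving both sides one token per line regardless of file size.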
Some diffs have -b (ignore blanks) and -w (ignore whitespace), but as Unix utilities are all line-oriented, I don't think whitespace will include \n characters.
Double-check that your version of diff doesn't have some fancy GNU options with diff --help | less or man diff.
Is your formatting correct above, i.e. is file 1's data really all on one line? You could force file2 to match that format with
awk '{printf"%s ", $0}' file2
Or, as mentioned in the comments, convert file 1:
awk '{for (i=1;i<=NF;i++) printf("%s\n", $i)}' file1
But I'm guessing that your data isn't really that simple. Also there are likely line length limitations that will appear when you can least afford the time to deal with them.
Probably not what you want to hear, and diffing of complicated stuff like source-code is not an exact science. So, if you still need help, create a slightly more complicated testcase and add it to your question.
Finally, you'll need to show us what you'd expect the output of such a diff project to look like. Right now I can't see any meaningful way to display such differences for a non-trivial case.
IHTH
If it turns out the data is indeed simple enough to not run into limitations, and the only difference between the files is that the first one separates by space and the second by newline, you can also do process substitution (as was suggested above) but with sed to replace the spaces in the first file with newlines:
diff <(sed 's/ /\n/g' file1) file2
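Keep in mind that \n in the replacement part of sed is a GNU extension; a portable variant of the same idea (a sketch) uses tr:
diff <(tr ' ' '\n' < file1) file2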

Linux: How to get files with a name that ends with a number greater than 10

I have some files that have names like this: 'abcdefg_y_zz.jpg'
The 'abcdefg' is a sequence of digits, while the 'y' and 'zz' are letters.
I need to get all the files whose sequence of digits ends with a number greater than 10, i.e. the files where 'fg' is greater than 10.
Does anyone have an idea on how to do that in a bash script?
Ok, technically, based on all your info...
ls | grep -E '[0-9]{5}[1-9][0-9]_[[:alpha:]]_[[:alpha:]]{2}\.jpg'
How about this? Just exclude the ones that have a 0 in position f:
ls -1 | grep -vE '^.{5}0[0-9]_._..\.jpg'
Update
Since you want > 10 and not >= 10, you'll need to exclude 10 too. So do this:
ls -1 | grep -vE '^.{5}0[0-9]_._..\.jpg' | grep -vE '^.{5}10_._..\.jpg'
with more scripting
#!/bin/bash
for i in $(seq FIRST INCREMENT LAST)
do
    cp "abcde${i}_y_zz.jpg" /your_new_dir   # or whatever you want to do with those files
done
so in your example the line with seq will be
for i in $(seq 11 1 100000000)
If the filenames are orderly named, this awk solution works (FIELDWIDTHS requires GNU awk):
ls | awk 'BEGIN { FIELDWIDTHS = "5 2" } $2 > 10'
Explanation
FIELDWIDTHS = "5 2" means that $1 will refer to the first 5 characters and $2 the next 2.
$2 > 10 matches when field 2 is greater than 10 and implicitly invokes the default code block, i.e. '{ print }'
Just one process (this uses bash extended globbing, which may need shopt -s extglob):
ls ?????+(1[1-9]|[2-9]?)_?_??.jpg
All the solutions provided so far are fine, but anybody who's had some experience with shell programming knows that parsing ls is never a good idea and must be avoided. This actually doesn't apply in this case, where we can assume that the names of the files follow a certain pattern, but it's a rule that should be remembered. More explanation here.
What you want can be achieved much safer with GNU find - assuming that you run the command in the directory where the files are, it would look something like this :
find . -regextype posix-egrep -regex '\./[0-9]{5}[1-9][0-9]_[[:alpha:]]_[[:alpha:]]{2}\.jpg$'
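If you'd rather avoid both ls and find, a plain bash loop can also do the comparison arithmetically (a sketch; it assumes the names really follow the abcdefg_y_zz.jpg pattern):
shopt -s nullglob                 # skip the loop entirely if nothing matches
for f in [0-9]*_?_??.jpg; do
    digits=${f%%_*}               # the leading run of digits, e.g. 1234567
    fg=${digits: -2}              # its last two digits
    if (( 10#$fg > 10 )); then    # 10# forces base 10 so a leading 0 isn't treated as octal
        echo "$f"
    fi
done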

Quick unix command to display specific lines in the middle of a file?

Trying to debug an issue with a server and my only log file is a 20GB log file (with no timestamps even! Why do people use System.out.println() as logging? In production?!)
Using grep, I've found an area of the file that I'd like to take a look at, line 347340107.
Other than doing something like
head -<$LINENUM + 10> filename | tail -20
... which would require head to read through the first 347 million lines of the log file, is there a quick and easy command that would dump lines 347340100 - 347340200 (for example) to the console?
update I totally forgot that grep can print the context around a match ... this works well. Thanks!
I found two other solutions if you know the line number but nothing else (no grep possible):
Assuming you need lines 20 to 40,
sed -n '20,40p;41q' file_name
or
awk 'FNR>=20 && FNR<=40' file_name
When using sed it is more efficient to quit processing after having printed the last line than continue processing until the end of the file. This is especially important in the case of large files and printing lines at the beginning. In order to do so, the sed command above introduces the instruction 41q in order to stop processing after line 41 because in the example we are interested in lines 20-40 only. You will need to change the 41 to whatever the last line you are interested in is, plus one.
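Applied to the range asked about in the question, that becomes (the extra awk clause makes it stop reading early as well):
sed -n '347340100,347340200p;347340201q' filename
awk 'FNR>=347340100 && FNR<=347340200; FNR>347340200{exit}' filename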
# print line number 52
sed -n '52p' # method 1
sed '52!d' # method 2
sed '52q;d' # method 3, efficient on large files
Method 3 is the most efficient on large files, and the fastest way to display a specific line.
with GNU-grep you could just say
grep --context=10 ...
No there isn't, files are not line-addressable.
There is no constant-time way to find the start of line n in a text file. You must stream through the file and count newlines.
Use the simplest/fastest tool you have to do the job. To me, using head makes much more sense than grep, since the latter is way more complicated. I'm not saying "grep is slow", it really isn't, but I would be surprised if it's faster than head for this case. That'd be a bug in head, basically.
What about:
tail -n +347340107 filename | head -n 100
I didn't test it, but I think that would work.
I prefer just going into less and
typing 50% to go to halfway through the file,
43210G to go to line 43210
:43210 to do the same
and stuff like that.
Even better: hit v to start editing (in vim, of course!), at that location. Now, note that vim has the same key bindings!
You can use the ex command, a standard Unix editor (part of Vim now), e.g.
display a single line (e.g. 2nd one):
ex +2p -scq file.txt
corresponding sed syntax: sed -n '2p' file.txt
range of lines (e.g. 2-5 lines):
ex +2,5p -scq file.txt
sed syntax: sed -n '2,5p' file.txt
from the given line till the end (e.g. 5th to the end of the file):
ex +5,p -scq file.txt
sed syntax: sed -n '5,$p' file.txt
multiple line ranges (e.g. 2-4 and 6-8 lines):
ex +2,4p +6,8p -scq file.txt
sed syntax: sed -n '2,4p;6,8p' file.txt
Above commands can be tested with the following test file:
seq 1 20 > file.txt
Explanation:
+ or -c followed by the command - execute the (vi/vim) command after file has been read,
-s - silent mode, also uses current terminal as a default output,
q followed by -c is the command to quit editor (add ! to do force quit, e.g. -scq!).
I'd first split the file into a few smaller ones like this
$ split --lines=50000 /path/to/large/file /path/to/output/file/prefix
and then grep on the resulting files.
If the line number you want to read is, say, 100:
head -100 filename | tail -1
Get ack
Ubuntu/Debian install:
$ sudo apt-get install ack-grep
Then run:
$ ack --lines=$START-$END filename
Example:
$ ack --lines=10-20 filename
From $ man ack:
--lines=NUM
Only print line NUM of each file. Multiple lines can be given with multiple --lines options or as a comma separated list (--lines=3,5,7). --lines=4-7 also works.
The lines are always output in ascending order, no matter the order given on the command line.
sed will need to read the data too in order to count the lines.
The only way a shortcut would be possible is if there were context/order in the file to operate on, for example log lines prepended with a fixed-width time/date.
Then you could use the look Unix utility to binary-search through the file for particular dates/times.
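For example, if the log had sorted, fixed-width timestamps at the start of each line, something like this could jump straight to a given minute via binary search (a sketch; sorted.log and the timestamp are made up):
look '2012-09-10 23:47' sorted.log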
Use
x=`cat -n <file> | grep <match> | awk '{print $1}'`
Here you will get the line number where the match occurred.
Now you can use the following command to print 100 lines
awk -v var="$x" 'NR>=var && NR<=var+100{print}' <file>
or you can use "sed" as well
sed -n "${x},${x+100}p" <file>
With sed -e '1,N d; M q' you'll print lines N+1 through M. This is probably a bit better than grep -C as it doesn't try to match lines to a pattern.
Building on Sklivvz' answer, here's a nice function one can put in a .bash_aliases file. It is efficient on huge files when printing stuff from the front of the file.
function middle()
{
    startidx=$1      # first line to print
    len=$2           # size of the range to print
    endidx=$(($startidx + $len))
    filename=$3
    awk "FNR>=${startidx} && FNR<=${endidx} { print NR\" \"\$0 }; FNR>${endidx} { print \"END HERE\"; exit }" "$filename"
}
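For the range mentioned in the question, usage would then look something like:
middle 347340100 100 filename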
To display a line from a <textfile> by its <line#>, just do this:
perl -wne 'print if $. == <line#>' <textfile>
If you want a more powerful way to show a range of lines with regular expressions -- I won't say why grep is a bad idea for doing this, it should be fairly obvious -- this simple expression will show you your range in a single pass which is what you want when dealing with ~20GB text files:
perl -wne 'print if m/<regex1>/ .. m/<regex2>/' <filename>
(tip: if your regex has / in it, use something like m!<regex>! instead)
This would print out <filename> starting with the line that matches <regex1> up until (and including) the line that matches <regex2>.
It doesn't take a wizard to see how a few tweaks can make it even more powerful.
Last thing: since Perl is a mature language, it has many hidden enhancements that favor speed and performance. That makes it an obvious choice for such an operation, since it was originally developed for handling large log files, text, databases, etc.
print line 5
sed -n '5p' file.txt
sed -n '5{p;q;}' file.txt   # same, but stops reading after line 5
print everything except line 5
sed '5d' file.txt
and my creation using google
#!/bin/bash
# removeline.sh
# removes a line from INPUTFILE; with -o, the removed line is appended to OUTPUTFILE (i.e. moved)
usage() { # Function: Print a help message.
    echo "Usage: $0 -l LINENUMBER -i INPUTFILE [ -o OUTPUTFILE ]"
    echo "line is removed from INPUTFILE"
    echo "line is appended to OUTPUTFILE"
}
exit_abnormal() { # Function: Exit with error.
    usage
    exit 1
}
while getopts l:i:o: flag
do
    case "${flag}" in
        l) line=${OPTARG};;
        i) input=${OPTARG};;
        o) output=${OPTARG};;
    esac
done
if [ -f tmp ]; then
    echo "Temp file tmp exists. Delete it yourself :)"
    exit
fi
if [ -f "$input" ]; then
    re_isanum='^[0-9]+$'
    if ! [[ $line =~ $re_isanum ]] ; then
        echo "Error: LINENUMBER must be a positive, whole number."
        exit 1
    elif [ "$line" -eq "0" ]; then
        echo "Error: LINENUMBER must be greater than zero."
        exit_abnormal
    fi
    if [ -n "$output" ]; then
        sed -n "${line}p" "$input" >> "$output"
    fi
    # remove the following sed command to copy the line instead of moving it
    sed "${line}d" "$input" > tmp && cp tmp "$input"
fi
if [ -f tmp ]; then
    rm tmp
fi
You could try this command:
egrep -n "^" <filename> | egrep "^<line number>:"
Easy with perl! If you want to get line 1, 3 and 5 from a file, say /etc/passwd:
perl -e 'while(<>){if(++$l~~[1,3,5]){print}}' < /etc/passwd
(note that ~~, Perl's smartmatch operator, is deprecated in recent versions of Perl)
I am surprised only one other answer (by Ramana Reddy) suggested adding line numbers to the output. The following searches for the required line number and colours the output.
file=FILE
lineno=LINENO
wb="107"; bf="30;1"; rb="101"; yb="103"
cat -n ${file} | { GREP_COLORS="se=${wb};${bf}:cx=${wb};${bf}:ms=${rb};${bf}:sl=${yb};${bf}" grep --color -C 10 "^[[:space:]]\\+${lineno}[[:space:]]"; }

Egrep acts strange with -f option

I've got a strangely acting egrep -f.
Example:
$ egrep -f ~/tmp/tmpgrep2 orig_20_L_A_20090228.txt | wc -l
3
$ for lines in `cat ~/tmp/tmpgrep2` ; do egrep $lines orig_20_L_A_20090228.txt ; done | wc -l
12
Could someone give me a hint what could be the problem?
No, the files did not change between executions. The expected line count for the egrep command is 12.
UPDATE on file contents: the searched file contains about 13,000 lines, each of them 500 characters long; the pattern file contains 12 lines, each of them 24 characters long. The pattern always (and only) occurs at a fixed position in the searched file (columns 26-49).
UPDATE on pattern contents: every pattern in tmpgrep2 is a 24-character number.
If the search patterns are found on the same lines, then you can get the result you see:
Suppose you look for:
abc
def
ghi
jkl
and the data file is:
abcdefghijklmnoprstuvwxzy
then the one-time command will print 1 and the loop will print 4.
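A tiny reproduction of that effect might look like this (made-up file names, just to illustrate the counting difference):
printf 'abc\ndef\nghi\njkl\n' > patterns
printf 'abcdefghijklmnoprstuvwxzy\n' > data
egrep -f patterns data | wc -l                              # 1 -- one matching line
for p in $(cat patterns); do egrep "$p" data; done | wc -l  # 4 -- the same line, printed once per pattern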
Could it be that the lines read contain something that the shell is expanding/substituting for you in the second version? Then that doesn't get done by grep when it reads the patterns itself, thus leading to a different set of patterns being matched.
I'm not totally sure if the shell is doing any expansion on the variable value in an invocation like that, but it's an idea at least.
EDIT: Nope, it doesn't seem to do any substitutions. But it could be a quoting issue: if your patterns contain whitespace, the for loop will step through each token, not through each line. Take a look at the read bash builtin.
Do you have any duplicates in ~/tmp/tmpgrep2? Egrep will only use the dupes one time, but your loop will use each occurrence.
Get rid of dupes by doing something like this:
$ for lines in `sort < ~/tmp/tmpgrep2 | uniq` ; do egrep $lines orig_20_L_A_20090228.txt ; done | wc -l
I second #unwind.
Why don't you run without wc -l and see what each search is finding?
And maybe:
for lines in `cat ~/tmp/tmpgrep2` ; do echo $lines ; done
Just to see how the shell is handling $lines?
The others have already come up with most of the things I would look at. The next thing I would check is the environment variable GREP_OPTIONS, or whatever it is called on your machine. I've gotten the strangest error messages or behaviors when using a command line argument that interfered with the environment settings.

Splitting a file and its lines under Linux/bash

I have a rather large file (150 million lines of 10 chars). I need to split it in 150 files of 2 million lines, with each output line being alternatively the first 5 characters or the last 5 characters of the source line.
I could do this in Perl rather quickly, but I was wondering if there was an easy solution using bash.
Any ideas?
Homework? :-)
I would think that a simple pipe with sed (to split each line into two) and split (to split things up into multiple files) would be enough.
The man command is your friend.
Added after confirmation that it is not homework:
How about
sed 's/\(.....\)\(.....\)/\1\n\2/' input_file | split -l 2000000 - out-prefix-
?
I think that something like this could work:
out_file=1
out_pairs=0
while IFS= read -r line; do
    if [ $out_pairs -ge 1000000 ]; then      # 1,000,000 pairs = 2,000,000 output lines per file
        out_file=$(($out_file + 1))
        out_pairs=0
    fi
    echo "${line%?????}" >> out${out_file}   # first five characters of the line
    echo "${line#?????}" >> out${out_file}   # last five characters of the line
    out_pairs=$(($out_pairs + 1))
done < "$in_file"
Not sure if it's simpler or more efficient than using Perl, though.
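If a single pass without the shell loop is preferred, an awk sketch along these lines should also work (the out%03d naming and the 2,000,000-lines-per-file figure are just illustrative; $in_file is the same variable as above):
awk -v per=2000000 '
{
    out = sprintf("out%03d", int(n / per))        # switch to a new output file every "per" lines
    if (out != prev) { if (prev != "") close(prev); prev = out }
    print substr($0, 1, 5) > out                  # first five characters
    print substr($0, 6)    > out                  # last five characters
    n += 2
}' "$in_file"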
First five chars of each line variant, assuming that the large file called x.txt, and assuming it's OK to create files in the current directory with names x.txt.* :
split -l 2000000 x.txt x.txt.out && (for splitfile in x.txt.out*; do outfile="${splitfile}.firstfive"; echo "$splitfile -> $outfile"; cut -c 1-5 "$splitfile" > "$outfile"; done)
Why not just use the native Linux split utility?
split -d -l 999999 input_filename
this will output new split files with file names like x00 x01 x02...
for more info see the manual
man split
