grep but indexable? - linux

I have over 200 MB of source code files that I have to constantly look up (I am part of a very big team). I notice that grep does not create an index, so a lookup requires going through the entire source code database each time.
Is there a command line utility similar to grep which has indexing ability?

The solutions below are rather simple. There are a lot of corner cases that they do not cover:
searching for start of line ^
filenames containing \n or : will fail
filenames containing white space will fail (though that can be fixed by using GNU Parallel instead of xargs)
searching for a string that matches the path of another file will be suboptimal
The good part about the solutions is that they are very easy to implement.
Solution 1: one big file
Fact: Seeking is dead slow, reading one big file is often faster.
Given those facts the idea is to simply make an index containing all the files with all their content - each line prepended with the filename and the line number:
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . > .index
Use the index:
grep foo .index
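For orientation, each line of .index has the form path:linenumber:content (that is what grep -Han produces), so a hit already tells you which file and line to open. A made-up example of what a lookup returns (the path and match are purely illustrative):
grep foo .index
./src/main.c:42:    foo(bar, baz);
./lib/util.c:7:/* helper for foo */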
Solution 2: one big compressed file
Fact: Harddrives are slow. Seeking is dead slow. Multi core CPUs are normal.
So it may be faster to read a compressed file and decompress it on the fly than reading the uncompressed file - especially if you have RAM enough to cache the compressed file but not enough for the uncompressed file.
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
Use the index:
pbzcat .index | grep foo
Solution 3: use index for finding potential candidates
Generating the index can be time consuming and you might not want to do that for every single change in the dir.
To speed that up, use the index only for identifying filenames that might match, and do an actual grep through that (hopefully limited) set of files. This will discover files that no longer match, but it will not discover new files that do match.
The sort -u is needed to avoid grepping the same file multiple times.
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
Use the index:
pbzcat .index | grep foo | sed s/:.*// | sort -u | xargs grep foo
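If you use this pattern often, it is convenient to wrap it in shell functions. A minimal sketch, assuming the index lives in the current directory and pbzip2/pbzcat are installed (the function names are my own; the whitespace caveats listed above still apply):
indexdir () {
    # (re)build the compressed index for the current directory
    find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
}

isearch () {
    # use the index only to find candidate files, then grep them for real
    pbzcat .index | grep "$1" | sed 's/:.*//' | sort -u | xargs grep "$1"
}
Call it as: isearch foo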
Solution 4: append to the index
Re-creating the full index can be very slow. If most of the dir stays the same, you can simply append to the index with newly changed files. The index will again only be used for locating potential candidates, so if a file no longer matches it will be discovered when grepping through the actual file.
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
Append to the index:
find . -type f -newer .index -print0 | xargs -0 grep -Han . | pbzip2 >> .index
Use the index:
pbzcat .index | grep foo | sed s/:.*// | sort -u | xargs grep foo
It can be even faster if you use pzstd instead of pbzip2/pbzcat.
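A sketch of the same scheme using pzstd; this assumes pzstd is installed and behaves like zstd as a filter (reads stdin, writes stdout, -dc decompresses to stdout, and concatenated frames decompress fine, which is what the append step relies on):
# Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pzstd > .index
# Append to the index:
find . -type f -newer .index -print0 | xargs -0 grep -Han . | pzstd >> .index
# Use the index:
pzstd -dc .index | grep foo | sed 's/:.*//' | sort -u | xargs grep foo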
Solution 5: use git
git grep can grep through a git repository. But it seems to do a lot of seeks and is 4 times slower on my system than solution 4.
The good part is that the .git index is smaller than the .index.bz2.
Index a dir:
git init
git add .
Append to the index:
git add .
Use the index:
git grep foo
Solution 6: optimize git
Git puts its data into many small files. This results in seeking. But you can ask git to compress the small files into a few bigger files:
git gc --aggressive
This takes a while, but it packs the index very efficiently in few files.
Now you can do:
find .git -type f | xargs cat >/dev/null
git grep foo
git will do a lot of seeking into the index, but by running cat first, you put the whole index into RAM.
Adding to the index is the same as in solution 5, but run git gc now and then to avoid many small files, and git gc --aggressive to save more disk space, when the system is idle.
git will not free disk space if you remove files. So if you remove large amounts of data, remove .git and do git init; git add . again.
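Putting solution 6 together, a typical maintenance-plus-search session might look like this sketch (how often to repack depends on how fast the tree changes):
git add -A .            # pick up new and changed files
git gc                  # repack now and then so the data stays in few, big files
git gc --aggressive     # when the system is idle, to save more disk space
find .git -type f | xargs cat >/dev/null   # warm the cache
git grep foo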

There is the https://code.google.com/p/codesearch/ project, which is capable of creating an index and searching it quickly. Regexps are supported and are evaluated using the index (actually, only a subset of the regexp can use the index to filter the file set; the real regexp is then re-evaluated on the matched files).
The index from codesearch is usually 10-20% of the source code size, building it takes about as long as running classic grep two or three times, and searching is almost instantaneous.
The ideas used in the codesearch project come from Google's Code Search site (RIP). E.g. the index contains a map from n-grams (trigrams, i.e. every 3-byte sequence found in your sources) to the files, and the regexp is translated into queries over those n-grams when searching.
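The project ships command-line tools named cindex and csearch; a minimal sketch of how they are typically used (the install path assumes the project's GitHub home at github.com/google/codesearch and a working Go toolchain):
# install the tools
go install github.com/google/codesearch/cmd/...@latest
# build the index (stored in ~/.csearchindex by default)
cindex /path/to/source
# search it with a regexp
csearch 'foo(bar|baz)'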
PS And there are also ctags and cscope for navigating C/C++ sources. ctags can find declarations/definitions; cscope is more capable, but has problems with C++.
PPS And there are also clang-based tools for the C/C++/ObjC languages: http://blog.wuwon.id.au/2011/10/vim-plugin-for-navigating-c-with.html and clang-complete

I notice that grep does not create an index so lookup requires going through the entire source code database each time.
Without addressing the indexing ability part, git grep will have, with Git 2.8 (Q1 2016), the ability to run in parallel!
See commit 89f09dd, commit 044b1f3, commit b6b468b (15 Dec 2015) by Victor Leschuk (vleschuk).
(Merged by Junio C Hamano -- gitster -- in commit bdd1cc2, 12 Jan 2016)
grep: add --threads=<num> option and grep.threads configuration
"git grep" can now be configured (or told from the command line) how
many threads to use when searching in the working tree files.
grep.threads:
Number of grep worker threads to use.
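On Git 2.8 or later that looks like this (the thread count 8 is only an example; tune it to your machine):
# one-off from the command line
git grep --threads=8 foo
# or set it once per repository
git config grep.threads 8
git grep foo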

ack is a code searching tool that is optimized for programmers, especially programmers dealing with large heterogeneous source code trees: http://beyondgrep.com/
Are some of your searches cases where you only want to search a certain type of file, like only Java files? Then you can do
ack --java function
ack does not index the source code, but that may not matter depending on what your search patterns are like. In many cases, searching only certain types of files gives the speedup that you need, because you're not also searching all those other XML etc. files.
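A few illustrative invocations (the search patterns are placeholders; ack --help-types lists the file-type filters your ack build knows about):
ack --help-types                        # list the known file types
ack --java parseConfig                  # search only Java files
ack --xml --ignore-dir=build timeout    # XML files only, skipping a build dir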
And if ack doesn't do it for you, here is a list of many tools designed for searching source code: http://beyondgrep.com/more-tools/

We use a tool internally to index very large log files and make efficient searches of them. It has been open-sourced. I don't know how well it scales to large numbers of files, though. It multithreads by default, it searches inside gzipped files, and it caches indexes of previously searched files.
https://github.com/purestorage/4grep

This grep-cache article has a script for caching grep results. His examples were run on Windows with Linux tools installed, so it can easily be used on *nix/Mac with little modification. It's mostly just a Perl script anyway.
Also, the filesystem itself (assuming you're using *nix) often caches recently read data, making subsequent grep runs faster, since grep is then effectively searching memory instead of disk.
You can drop that cache manually by writing to /proc/sys/vm/drop_caches if you want to see the speed difference between an uncached and a cached grep.
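For example, a minimal sketch of timing a cold run against a warm run (dropping the cache requires root):
sync                                          # flush dirty pages first
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
time grep -r foo .                            # cold-cache run
time grep -r foo .                            # warm-cache run, typically much faster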

Since you mention various kinds of text files that are not really code, I suggest you have a look at GNU ID utils. For example:
cd /tmp
# create index file named 'ID'
mkid -m /dev/null -d text /var/log/messages.*
# query index
gid -r 'spamd|kernel'
These tools focus on tokens, so queries for strings of tokens are not possible. There is minimal integration for the gid command in Emacs.
For the more specific case of indexing source code, I prefer to use GNU global, which I find more flexible. For example:
cd sourcedir
# index source tree
gtags .
# look for a definition
global -x main
# look for a reference
global -xr printf
# look for another kind of symbol
global -xs argc
Global natively supports C/C++ and Java, and with a bit of configuration it can be extended to support many more languages. It also has very good integration with Emacs: successive queries are stacked, and updating a source file updates the index efficiently. However, I'm not aware that it is able to index plain text (yet).
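On the incremental-update point, you can also refresh the tags by hand outside Emacs; a small sketch (global -u re-parses only the files that changed since the last gtags run):
cd sourcedir
gtags .            # initial index
# ... edit some files ...
global -u          # incremental update of the tag files
global -x main     # queries now see the updated index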

Related

Is there a way to count contributed lines in a project in relevant files (such as .java, .story, etc.) and ignore other types of files?

I'm trying to check how many lines of code I contributed to the project I work on, but only in relevant files such as .java and .story (I'm not sure if there are any other relevant types), and I want to ignore any other file types (I added some files for unit tests and don't want to count them).
I also want to know if there is a better way to get this information.
I used this command:
git log --shortstat --author "<author>" --since "<beginDate>" --until "<endDate>" \
| grep "files\? changed" \
| awk '{files+=$1; inserted+=$4; deleted+=$6} END \
{print "files changed", files, "lines inserted:", inserted, "lines deleted:", deleted}'
git log can take glob pathspecs as the last arguments (after --), so you could say
git log whatever conditions -- '*.java' '*.txt'
This restricts the log to those files (just make sure that bash doesn't expand the globs; that's why I used quotes).
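Combined with the command from the question, that gives something along these lines (the author/date placeholders are kept as-is):
git log --shortstat --author "<author>" --since "<beginDate>" --until "<endDate>" -- '*.java' '*.story' \
| grep "files\? changed" \
| awk '{files+=$1; inserted+=$4; deleted+=$6} END \
{print "files changed", files, "lines inserted:", inserted, "lines deleted:", deleted}'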

GNU find, how to use debug options, -D

I've been using find more successfully in my daily work especially with xargs, parallel, grep and pcregrep.
I was reading the man pages for find tonight and found a section I'd always skipped right over...Debug options.
I promptly tried:
find -D help
# then
find -D tree . -type f -name "fjdalfj"
# and
find -D search . -type f -name "fjaldkj"
Wow some really interesting output....however I was unsuccessful in tracking down what this means.
Can someone point me to a resource I can read about this or perhaps they care to explain a bit about how they would use these options or when they'd be useful.
On Fedora, '$ man find' provides illumination. Notice carefully where the "-D" options are placed: before the selection expression and before the search directories are listed. Most of the debug modes deal with how the selection expression is parsed, as these expressions can get quite involved.
tree: shows the generated parse tree for the expression.
search: verbosely log the navigation of the searched directories.
stat: trace calls to stat(2) and friends when getting the file attributes, such as its size.
rates: indicate how frequently each expression predicate is successful.
opt: provide details on how the selection expression was optimized after it is parsed.
exec: show details about the -exec, -execdir, and -okdir programs run.
Your mileage may vary; your version of find(1) may have different debug options. A '$ find -D help' is probably in order.
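For example, a quick way to see what the optimizer does to an expression (a sketch; GNU find also accepts an -O0 .. -O3 optimisation level, and the debug output goes to stderr while the search still runs):
find -D tree . -type f \( -name '*.c' -o -name '*.h' \)     # parse tree as written
find -D opt -O3 . -type f \( -name '*.c' -o -name '*.h' \)  # tree after optimization
find -D rates . -type f -name '*.c'                         # predicate success rates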

How can I ALIAS --> "less" the latest file in a directory?

Just wondering how I could less the latest log file in a directory in Linux?
I'm after a one-liner, possibly as an alias!
Something like this?
ls -1dtr /your/dir/{*,.*} | tail -1 | xargs less
Note that for the ls part I am using an answer from "Unix ls command: show full path when using options".
As it requires a parameter, we create a function instead of an alias. Store the following in ~/.bashrc:
my_less_func ()
{
ls -1dtr "$1"/{*,.*} | tail -1 | xargs less
}
Source it (running . ~/.bashrc is enough) and call it with:
my_less_func your/path
In zsh: less dir/*(.om[1])
dir/* is a regular glob.
The . qualifier restricts to regular files.
om means order by modification time, newest first.
[1] means just expand the first filename.
It's probably better without the [1]: just pass all the filenames to less in the om order. If the first one satisfies you, you can hit q and be done with it. If not, the next one is just a :n away, or you can search them all with /*something. If there are too many, om[1,10] will get you the 10 newest files.

linux find on multiple patterns

I need to do a find on roughly 1500 file names and was wondering if there is a way to execute several find commands simultaneously.
Right now I do something like
for fil in $(cat my_file)
do
find . -name $fil >> outputfile
done
Is there a way to spawn multiple instances of find to speed up the process? Right now it takes about 7 hours to run this loop, one file at a time.
Given the 7-hour runtime you mention, I presume the file system has some millions of files in it, so the OS disk buffers loaded by one query are recycled before the next query begins. You can test this hypothesis by timing the same find a few times, as in the following example.
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m15.823s
user 0m0.908s
sys 0m1.608s
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m0.715s
user 0m0.340s
sys 0m0.368s
In the example, the second find ran much faster because the OS still had buffers in RAM from the first find. [On my small Linux 3.2.0-32 system, according to top at the moment 2.5GB of RAM is buffers, 0.3GB is free, and 3.8GB in use (ie about 1.3GB for programs and OS).]
Anyhow, to speed up processing, you need to find a way to make better use of OS disk buffering. For example, double or quadruple your system memory. As an alternative, try the locate command. The query
time locate IMG_0772.JPG
consistently takes under a second on my system. You may wish to run updatedb just before starting the job that finds the 1500 file names. See man updatedb. If the directory . in your find covers only a small part of the overall file system, so that the locate database would include numerous irrelevant files, use the various prune options when you run updatedb to minimize the size of the locate database that is accessed when you run locate; afterwards, run a plain updatedb to restore the other filenames to the locate database. Using locate you can probably cut the run time to 20 minutes.
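If you go the locate route, you can also build a private database that covers only the tree you care about, instead of pruning the system-wide one. A sketch, assuming the findutils updatedb/locate pair (mlocate's options differ slightly, e.g. updatedb -U dir -o file):
# build a private locate database for just this tree
updatedb --localpaths="$PWD" --output="$HOME/mytree.db"
# query it
locate -d "$HOME/mytree.db" IMG_0772.JPG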
This solution calls find and fgrep only once:
find . | fgrep -f my_file > outputfile
I assume that my_file has a list of files you are looking for, with each name on a separate line.
Explanation
The find command finds all the files (including directories) in the current directory. Its output is a list of files/directories, one per line
The fgrep command searches the output of the find command, but instead of specifying the search term on the command line, it gets the search terms from my_file; that's what the -f flag is for.
The output of the fgrep command, which is the list of files you are looking for, is redirected into outputfile.
maybe something like
find . \( -name file1 -o -name file2 -o ... \) >outputfile
You could build lines of this kind, depending on the number of names in my_file:
find . \( $(xargs <my_file printf "-name %s -o " | sed 's/-o $//') \) >outputfile
Is there a way to spawn multiple instances of find to speed up the process?
This is not how you want to solve the problem, since find is I/O- and FS-limited.
Either use multiple -name arguments grouped together with -o in order to use one find command to look for multiple filenames at once, or find all files once and use a tool such as grep to search the resultant list of files for the filenames of interest.

How to compare two tarballs' content

I want to tell whether two tarball files contain identical files, in terms of file name and file content, not including meta-data like date, user, group.
However, there are some restrictions:
First, I have no control over whether the meta-data is included when the tar files are made; in practice the tar files always contain meta-data, so directly diffing the two tar files doesn't work.
Second, some tar files are so large that I cannot afford to untar them into a temp directory and diff the contained files one by one. (I know that if I could untar file1.tar into file1/, I could compare them by invoking 'tar -dvf file2.tar' in file1/. But usually I cannot afford to untar even one of them.)
Any idea how I can compare the two tar files? It would be better if it could be accomplished within shell scripts. Alternatively, is there any way to get each contained file's checksum without actually untarring the tarball?
Thanks,
Try also pkgdiff to visualize differences between packages (it detects added/removed/renamed files and changed content, and exits with code zero if the packages are unchanged):
pkgdiff PKG-0.tgz PKG-1.tgz
Are you controlling the creation of these tar files?
If so, the best trick would be to create an MD5 checksum file and store it within the archive itself. Then, when you want to compare two archives, you just extract these checksum files and compare them.
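A sketch of that trick (MD5SUMS is an arbitrary name I chose; the sort keeps the list stable across runs, and the member path passed to tar must match how the archive was created):
# when creating an archive: record a checksum of every file, then include the list
find . -type f ! -name MD5SUMS -print0 | sort -z | xargs -0 md5sum > MD5SUMS
tar czf ../pkg1.tar.gz .
# when comparing: extract only the checksum files (to stdout) and diff them
tar xzOf pkg1.tar.gz ./MD5SUMS > sums1
tar xzOf pkg2.tar.gz ./MD5SUMS > sums2
diff sums1 sums2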
If you can afford to extract just one of the tar files, you can use the --diff option of tar to look for differences against the contents of the other tar file.
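For example, when you can afford to extract one of the two archives:
mkdir extracted
tar -xf file1.tar -C extracted
# compare the second archive against the extracted tree;
# note that --diff also reports meta-data differences such as mod times
(cd extracted && tar --diff -f ../file2.tar)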
One more crude trick, if you are fine with just comparing the filenames and their sizes.
Remember, this does not guarantee that the file contents are the same!
Execute a tar tvf to list the contents of each archive and store the outputs in two different files. Then slice out everything besides the filename and size columns. Preferably sort the two files too. Then just do a file diff between the two lists.
Just remember that this last scheme does not do any real checksumming.
Sample tar and output (all files are zero size in this example).
$ tar tvfj pack1.tar.bz2
drwxr-xr-x user/group 0 2009-06-23 10:29:51 dir1/
-rw-r--r-- user/group 0 2009-06-23 10:29:50 dir1/file1
-rw-r--r-- user/group 0 2009-06-23 10:29:51 dir1/file2
drwxr-xr-x user/group 0 2009-06-23 10:29:59 dir2/
-rw-r--r-- user/group 0 2009-06-23 10:29:57 dir2/file1
-rw-r--r-- user/group 0 2009-06-23 10:29:59 dir2/file3
drwxr-xr-x user/group 0 2009-06-23 10:29:45 dir3/
Command to generate sorted name/size list
$ tar tvfj pack1.tar.bz2 | awk '{printf "%10s %s\n",$3,$6}' | sort -k 2
0 dir1/
0 dir1/file1
0 dir1/file2
0 dir2/
0 dir2/file1
0 dir2/file3
0 dir3/
You can take two such sorted lists and diff them.
You can also use the date and time columns if that works for you.
tarsum is almost what you need. Take its output, run it through sort to get the ordering identical for both archives, and then compare the two with diff. That should get you a basic implementation going, and it would be easy enough to pull those steps into the main program by modifying the Python code to do the whole job.
Here is my variant; it checks the Unix permissions too.
It only works if the filenames are shorter than 200 characters.
diff <(tar -tvf 1.tar | awk '{printf "%10s %200s %10s\n",$3,$6,$1}'|sort -k2) <(tar -tvf 2.tar|awk '{printf "%10s %200s %10s\n",$3,$6,$1}'|sort -k2)
EDIT: See the comment by #StéphaneGourichon
I realise that this is a late reply, but I came across the thread whilst attempting to achieve the same thing. The solution that I've implemented outputs the tar to stdout, and pipes it to whichever hash you choose:
tar -xOzf archive.tar.gz | sort | sha1sum
Note that the order of the arguments is important; particularly O which signals to use stdout.
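To compare two archives that way, compute the hash for each and compare the results, e.g.:
h1=$(tar -xOzf archive1.tar.gz | sort | sha1sum)
h2=$(tar -xOzf archive2.tar.gz | sort | sha1sum)
[ "$h1" = "$h2" ] && echo "same content" || echo "content differs"
Keep in mind that only the file contents are hashed, not the file names, so renames between otherwise identical archives are not detected.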
Is tardiff what you're looking for? It's "a simple perl script" that "compares the contents of two tarballs and reports on any differences found between them."
There is also diffoscope, which is more generic and allows comparing things recursively (including various formats).
pip install diffoscope
I propose gtarsum, which I have written in Go, which means it is a standalone executable (no Python or other execution environment needed).
go get github.com/VonC/gtarsum
It will read a tar file, and:
sort the list of files alphabetically,
compute a SHA256 for each file content,
concatenate those hashes into one giant string
compute the SHA256 of that string
The result is a "global hash" for a tar file, based on the list of files and their content.
It can compare multiple tar files, and return 0 if they are identical, 1 if they are not.
Just throwing this out there since none of the above solutions worked for what I needed.
This snippet takes the md5 hash of the md5 hashes of all the files matching a given path inside each archive. If the summary hashes are the same, the file hierarchy and file list are the same.
I know it's not as performant as the others, but it provides the certainty I needed.
PATH_TO_CHECK="some/path"

# For every tar archive under build/, hash the md5sums of all members whose
# path matches $PATH_TO_CHECK, producing one summary hash per archive.
for template in $(find build/ -name '*.tar'); do
    tar -xvf "$template" --to-command=md5sum |
        grep "$PATH_TO_CHECK" -A 1 |
        grep -v "$PATH_TO_CHECK" |
        awk '{print $1}' |
        md5sum |
        awk "{print \"$template\",\$1}"
done
*note: An invalid path simply returns nothing.
If you are not extracting the archives and do not need the actual differences, try diff's -q option:
diff -q 1.tar 2.tar
This quiet mode prints "1.tar 2.tar differ", or nothing if there are no differences. Note that it compares the raw archives byte for byte, so differing meta-data alone will make them "differ".
There is a tool called archdiff. It is basically a Perl script that can look inside archives.
Takes two archives, or an archive and a directory, and shows a summary of the
differences between them.
I had a similar question and I solved it with Python; here is the code (Python 2).
PS: Although this code compares two zip archives' content, the approach is similar for tarballs. I hope it helps.
import zipfile
import os
import hashlib
import shutil

def decompressZip(zipName, dirName):
    # extract the whole zip into dirName and return the list of member names
    try:
        zipFile = zipfile.ZipFile(zipName, "r")
        fileNames = zipFile.namelist()
        for file in fileNames:
            zipFile.extract(file, dirName)
        zipFile.close()
        return fileNames
    except Exception, e:
        raise Exception, e

def md5sum(filename):
    # md5 of a single file's content, as an upper-case hex string
    f = open(filename, "rb")
    md5obj = hashlib.md5()
    md5obj.update(f.read())
    hash = md5obj.hexdigest()
    f.close()
    return str(hash).upper()

if __name__ == "__main__":
    oldFileList = decompressZip("./old.zip", "./oldDir")
    newFileList = decompressZip("./new.zip", "./newDir")

    # map each member name to the md5 of its content (directories are skipped)
    oldDict = dict()
    newDict = dict()
    for oldFile in oldFileList:
        tmpOldFile = "./oldDir/" + oldFile
        if not os.path.isdir(tmpOldFile):
            oldFileMD5 = md5sum(tmpOldFile)
            oldDict[oldFile] = oldFileMD5
    for newFile in newFileList:
        tmpNewFile = "./newDir/" + newFile
        if not os.path.isdir(tmpNewFile):
            newFileMD5 = md5sum(tmpNewFile)
            newDict[newFile] = newFileMD5

    # files present only in the new archive, and files whose content changed
    additionList = list()
    modifyList = list()
    for key in newDict:
        if not oldDict.has_key(key):
            additionList.append(key)
        else:
            newMD5 = newDict[key]
            oldMD5 = oldDict[key]
            if not newMD5 == oldMD5:
                modifyList.append(key)

    print "new file list: %s" % additionList
    print "modified file list: %s" % modifyList

    shutil.rmtree("./oldDir")
    shutil.rmtree("./newDir")
