linux find on multiple patterns

I need to do a find on roughly 1500 file names and was wondering if there is a way to run several find commands simultaneously.
Right now I do something like
for fil in $(cat my_file)
do
find . -name $fil >> outputfile
done
Is there a way to spawn multiple instances of find to speed up the process? Right now it takes about 7 hours to run this loop, one file at a time.

Given the 7-hour runtime you mention, I presume the file system has some millions of files in it, so that the OS disk buffers loaded by one query get reused for other data before the next query begins. You can test this hypothesis by timing the same find a few times, as in the following example.
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m15.823s
user 0m0.908s
sys 0m1.608s
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m0.715s
user 0m0.340s
sys 0m0.368s
In the example, the second find ran much faster because the OS still had buffers in RAM from the first find. [On my small Linux 3.2.0-32 system, according to top, at the moment 2.5 GB of RAM is buffers, 0.3 GB is free, and 3.8 GB is in use (i.e. about 1.3 GB for programs and the OS).]
Anyhow, to speed up processing, you need to find a way to make better use of OS disk buffering. For example, double or quadruple your system memory. As an alternative, try the locate command. The query
time locate IMG_0772.JPG
consistently takes under a second on my system. You may wish to run updatedb just before starting the job that looks up the 1500 file names; see man updatedb. If the directory . in your find commands covers only a small part of the overall file system, so that the locate database would contain numerous irrelevant entries, use the various prune options when you run updatedb to minimize the size of the database that locate has to scan; afterwards, run a plain updatedb to restore the other filenames to the locate database. Using locate you can probably cut the run time to 20 minutes.
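A hedged sketch of what the locate-based batch might look like (assuming an mlocate-style locate that supports -b/--basename; check your man page):
sudo updatedb                 # refresh the locate database just before the run
while read -r fil; do
    locate -b "$fil"          # -b matches against the base name only
done < my_file > outputfile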

This solution calls find and fgrep only once:
find . | fgrep -f my_file > outputfile
I assume that my_file has a list of files you are looking for, with each name on a separate line.
Explanation
The find command finds all the files (including directories) in the current directory. Its output is a list of files/directories, one per line.
The fgrep command searches the output of the find command, but instead of specifying the search terms on the command line, it reads them from my_file; that's what the -f flag is for.
The output of the fgrep command, which is the list of files you are looking for, is redirected into outputfile.
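If the names in my_file could also match as substrings of longer paths, a hedged variant that matches the base name exactly is the awk sketch below (it assumes my_file holds one exact file name per line, and it breaks on names containing newlines, like the other solutions here):
find . -print | awk -F/ 'NR==FNR {want[$0]; next} $NF in want' my_file - > outputfile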

Maybe something like:
find . \( -name file1 -o -name file2 -o ... \) >outputfile
You could build lines of this kind, depending on the number of names in my_file:
find . \( $(xargs <my_file printf "-name %s -o " | sed 's/-o $//') \) >outputfile
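For example, if my_file contained just file1 and file2, the generated command would effectively be:
find . \( -name file1 -o -name file2 \) >outputfile
Note that the command substitution is word-split by the shell, so this only works for names without spaces.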

Is there a way to spawn multiple instances of find to speed up the process?
This is not how you want to solve the problem, since find is I/O- and FS-limited.
Either use multiple -name arguments grouped together with -o in order to use one find command to look for multiple filenames at once, or find all files once and use a tool such as grep to search the resultant list of files for the filenames of interest.

Related

Recursively replace linux file and folder names such as "%m-%d-%y.tar" with their actual creation month/day/year

I'm looking for something like this but with its original creation date instead of the current date.
Example: This folder (output below is from Linux command ls -ltr)
drwxrwxr-x 2 backup_user backup_user 4096 Apr 26 01:06 "%m-%d-%y"
would have its file name changed to "04-26-20".
Since some information is missing, I will make a few assumptions and show a possible solution approach in general.
As already mentioned in the comments, for a filesystem like ext3 there is no creation time. It might be possible to use the modification time instead, which can be gathered via the stat command, e.g.
MTIME=$(stat --format="%y" \"%m-%d-%y\" | cut -d " " -f 1)
... or even access time or change time.
The date in MTIME is given in the format %Y-%m-%d and can be converted into the new file name via
FNAME=$(date -d ${MTIME} +%m-%d-%y)
Then it is possible to rename the directory, e.g.
mv \"%m-%d-%y\" ${FNAME}
which will of course update the directory's change time (ctime) in the filesystem.
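Putting those pieces together, a minimal sketch (assuming the directory really is named %m-%d-%y without the quotes, and that the modification time is an acceptable stand-in for the creation time):
dir='%m-%d-%y'
mtime=$(stat --format='%y' "$dir" | cut -d ' ' -f 1)   # e.g. 2020-04-26
fname=$(date -d "$mtime" '+%m-%d-%y')                  # e.g. 04-26-20
mv -- "$dir" "$fname"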

filename last modification date shell in script

I'm using bash to build a script where I will get a filename in a variable and then, with this variable, get the file's Unix last-modification date.
I need to get this modification date value and I can't use stat command.
Do you know any way to get it with the common available *nix commands?
Why you shouldn't use ls:
Parsing ls is a bad idea. Not only is the behaviour of certain characters in filenames undefined and platform dependent; for your purposes, it'll also mess with dates once they're more than six months in the past. In short, yes, it'll probably work for you in your limited testing, but it will not be platform-independent (so no portability), and the behaviour of your parsing is not guaranteed given the range of 'legal' filenames on various systems. (Ext4, for example, allows spaces and newlines in filenames.)
Having said all that, personally, I'd use ls because it's fast and easy ;)
Edit
As pointed out by Hugo in the comments, the OP doesn't want to use stat. In addition, I should point out that the section below is BSD-stat specific (the %Sm flag doesn't work when I test on Ubuntu; Linux has its own stat command - if you're interested in it, read the man page).
So, a non-stat solution: use date
date, at least on Linux, has a flag: -r, which according to the man page:
display the last modification time of FILE
So, the scripted solution would be similar to this:
date -r "${MY_FILE_VARIABLE}"
which would return you something similar to this:
zsh% date -r MyFile.foo
Thu Feb 23 07:41:27 CST 2012
To address the OP's comment:
If possible with a configurable date format
date has a rather extensive set of time-format variables; read the man page for more information.
I'm not 100% sure how portable date is across all 'UNIX-like systems'. For BSD-based systems (such as OS X), this will not work; the -r flag for BSD date does something completely different. The question doesn't specify exactly how portable a solution is required to be. For a BSD-based solution, see the section below ;)
A better solution for BSD systems (tested on OS X, using BSD stat; GNU stat is slightly different but could be made to work in the same way).
Use stat. You can format the output of stat with the -f flag, and you can select to display only the file modification date (which, for this question, is nice).
For example, stat -f "%m%t%Sm %N" ./*:
1340738054 Jun 26 21:14:14 2012 ./build
1340738921 Jun 26 21:28:41 2012 ./build.xml
1340738140 Jun 26 21:15:40 2012 ./lib
1340657124 Jun 25 22:45:24 2012 ./tests
Where the first bit is the UNIX epoch time, the date is the file modification time, and the rest is the filename.
Breakdown of the example command
stat -f "%m%t%Sm %N" ./*
stat -f: call stat, and specify the format (-f).
%m: The UNIX epoch time.
%t: A tab separator in the output.
%Sm: S says to display the output as a string, m says to use the file modification date.
%N: Display the name of the file in question.
A command in your script along the lines of the following:
stat -f "%Sm" ${FILE_VARIABLE}
will give you output such as:
Jun 26 21:28:41 2012
Read the man page for stat for further information; timestamp formatting is done by strftime.
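For comparison, a rough GNU/Linux near-equivalent (GNU stat uses -c/--format instead of -f, and different format letters) might be:
stat -c '%Y %y %n' ./*    # %Y = epoch mtime, %y = human-readable mtime, %n = file name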
have perl?
perl -MFile::stat -e "print scalar localtime stat('FileName.txt')->mtime"
How about:
find "$DIR" -maxdepth 1 -name "$FILE" -printf '%Tc\n'
See the find manpage for other values you can use with %T.
You can use the date command, adding the desired format option:
date +%Y-%m-%d -r /root/foo.txt
2013-05-27
date +%H:%M -r /root/foo.txt
23:02
You can use ls -l which lists the last modification time, and then use cut to cut out the modification date:
mod_date=$(ls -l "$file_name" | cut -c35-46)
This works on my system because the date appears between columns 35 to 46. You might have to play with it on your system.
The date is in two different formats:
Mmm dd hh:mm
Mmm dd yyyy
Files modified more than a year ago will have the latter format. Files modified less than a year ago will have the first format. You can search for a ":" to know which format the file uses:
if echo "$mod_date" | grep -q ":"
then
echo "File was modified within the year"
else
echo "File was modified more than a year ago"
fi

in drupal language: grep and pipe - list all the findings to avoid overhead & server performance issues

As I have a serious server performance warning when installing drupal-commons (this is an installation profile), I now want to reduce the server load.
Why? When trying to install Drupal Commons I get a message: "Too many files open", it says!
Well, Drupal & its modules (ab)use too many files! A maximum of 50,000 files and maybe 5,000 directories is their target, and that is all they will back up.
So my question: how can I get rid of all those silly translation files, or whatever they are, for tiny bits of info and unnecessary subdivisions? How can I get rid of them?
Background: I would expect that file_exists() during the installation (or bootstrap cycle) is the most expensive built-in PHP function, measured as the total time spent calling the function over all invocations in a single request.
So now I am trying to get rid of all the overhead (especially the translation files, the so-called .po files) and the unnecessary files contained in drupal-commons 6.x-2.3, in order to get it running on my server.
I want to get rid of all those silly translation files or whatever they are, for tiny bits of info and unnecessary subdivisions.
How do I search for all those .po files recursively - with grep, I guess?
Note: I do not know where they are!
linux-vi17:/home/martin/web_technik/drupal/commons_3_jan_12/commons-6.x-2.3/commons-6.x-2.3 # ls
CHANGELOG.txt ._.htaccess install.php modules themes
._CHANGELOG.txt ._includes INSTALL.txt ._profiles ._update.php
COMMONS_RELEASE_NOTES.txt includes ._INSTALL.txt profiles update.php
._COMMONS_RELEASE_NOTES.txt ._index.php LICENSE.txt ._robots.txt UPGRADE.txt
COPYRIGHT.txt index.php ._LICENSE.txt robots.txt ._UPGRADE.txt
._COPYRIGHT.txt INSTALL.mysql.txt MAINTAINERS.txt ._scripts ._xmlrpc.php
._cron.php ._INSTALL.mysql.txt ._MAINTAINERS.txt scripts xmlrpc.php
cron.php INSTALL.pgsql.txt ._misc ._sites
.directory ._INSTALL.pgsql.txt misc sites
.htaccess ._install.php ._modules ._themes
linux-vi17:/home/martin/web_technik/drupal/commons_3_jan_12/commons-6.x-2.3/commons-6.x-2.3 # grep .po
Anyway, I want to remove all .po files with one bash command - is that possible?
But wait: first of all I want to find all those files and list them,
since then I know what I am erasing (or removing).
Well - all language translations in Drupal are named with .po -
how do I find them, with grep?
How do I list them - and subsequently, how do I erase them?
Update:
I did the search with
find -type f -name "*.po"
Well, I found approx. 930 files.
Afterwards I removed them all with
6.x-2.3 # find -type f -name "*.po" -exec rm -f {} \;
A final search with that command,
find -type f -name "*.po"
gave no results back, so every .po file was erased!
Many, many thanks for the hints.
Greetings,
zero
If you want to find all files named *.po in a directory named /some/directory, you can use find:
find /some/directory -type f -name "*.po"
If you want to delete them all in a row (you do have backups, don't you?), then append an action to this command:
find /some/directory -type f -name "*.po" -exec rm -f {} \;
Replace /some/directory with the appropriate value and you should be set.
The issue with "too many open files" isn't normally that there are too many files in the filesystem, but that there is a limit on the number of files an application or user can have open at one time. This issue has been covered on the Drupal forums; for example, see this thread for a more permanent/nicer fix:
http://drupal.org/node/474152
A few more links about open files:
http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html
http://blog.thecodingmachine.com/content/solving-too-many-open-files-exception-red5-or-any-other-application
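As a rough illustration of that approach (the numbers and the www-data user below are placeholders; the right values depend on your distribution and web server), you raise the per-process open-file limit instead of deleting files:
ulimit -n                      # show the current soft limit for this shell
ulimit -n 4096                 # raise it for the current session, if the hard limit allows
# for a persistent change, add lines like these to /etc/security/limits.conf:
#   www-data  soft  nofile  4096
#   www-data  hard  nofile  8192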

grep but indexable?

I have over 200 MB of source code files that I have to constantly look up (I am part of a very big team). I notice that grep does not create an index, so a lookup requires going through the entire source code database each time.
Is there a command line utility similar to grep which has indexing ability?
The solutions below are rather simple. There are a lot of corner cases that they do not cover:
searching for start of line ^
filenames containing \n or : will fail
filenames containing white space will fail (though that can be fixed by using GNU Parallel instead of xargs)
searching for a string that matches the path of another file will be suboptimal
The good part about the solutions is that they are very easy to implement.
Solution 1: one big file
Fact: Seeking is dead slow, reading one big file is often faster.
Given those facts the idea is to simply make an index containing all the files with all their content - each line prepended with the filename and the line number:
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . > .index
Use the index:
grep foo .index
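For the whitespace caveat mentioned above, a sketch of the same indexing step using GNU Parallel instead of xargs (assuming GNU Parallel is installed; it groups each job's output, so lines are not interleaved):
find . -type f -print0 | parallel -0 grep -Han . {} > .index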
Solution 2: one big compressed file
Fact: Hard drives are slow. Seeking is dead slow. Multi-core CPUs are normal.
So it may be faster to read a compressed file and decompress it on the fly than to read the uncompressed file - especially if you have enough RAM to cache the compressed file but not enough for the uncompressed one.
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
Use the index:
pbzcat .index | grep foo
Solution 3: use index for finding potential candidates
Generating the index can be time consuming and you might not want to do that for every single change in the dir.
To speed that up only use the index for identifying filenames that might match and do an actual grep through those (hopefully limited number of) files. This will discover files that no longer match, but it will not discover new files that do match.
The sort -u is needed to avoid grepping the same file multiple times.
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
Use the index:
pbzcat .index | grep foo | sed s/:.*// | sort -u | xargs grep foo
Solution 4: append to the index
Re-creating the full index can be very slow. If most of the dir stays the same, you can simply append to the index with newly changed files. The index will again only be used for locating potential candidates, so if a file no longer matches it will be discovered when grepping through the actual file.
Index a dir:
find . -type f -print0 | xargs -0 grep -Han . | pbzip2 > .index
Append to the index:
find . -type f -newer .index -print0 | xargs -0 grep -Han . | pbzip2 >> .index
Use the index:
pbzcat .index | grep foo | sed s/:.*// | sort -u | xargs grep foo
It can be even faster if you use pzstd instead of pbzip2/pbzcat.
Solution 5: use git
git grep can grep through a git repository. But it seems to do a lot of seeks and is 4 times slower on my system than solution 4.
The good part is that the .git index is smaller than the .index.bz2.
Index a dir:
git init
git add .
Append to the index:
git add .
Use the index:
git grep foo
Solution 6: optimize git
Git puts its data into many small files. This results in seeking. But you can ask git to compress the small files into few, bigger files:
git gc --aggressive
This takes a while, but it packs the index very efficiently in few files.
Now you can do:
find .git -type f | xargs cat >/dev/null
git grep foo
git will do a lot of seeking into the index, but by running cat first, you put the whole index into RAM.
Adding to the index is the same as in solution 5, but run git gc now and then to avoid many small files, and git gc --aggressive to save more disk space, when the system is idle.
git will not free disk space if you remove files. So if you remove large amounts of data, remove .git and do git init; git add . again.
There is the https://code.google.com/p/codesearch/ project, which is capable of creating an index and searching it quickly. Regexps are supported and are evaluated using the index (actually, only a subset of the regexp is used to filter the file set via the index, and then the real regexp is re-evaluated on the matched files).
The index from codesearch is usually 10-20% of the source code size, building it takes about as long as running a classic grep 2 or 3 times, and searching is almost instantaneous.
The ideas used in the codesearch project come from Google's Code Search site (RIP). E.g. the index contains a map from n-grams (3-grams, i.e. every 3-byte sequence found in your sources) to the files containing them; the regexp is translated into n-gram queries when searching.
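As a hedged usage sketch, assuming the Go port of the tools (cindex and csearch) is installed, e.g. from github.com/google/codesearch:
cindex ~/src/myproject     # build or update the index (stored in ~/.csearchindex by default)
csearch 'foo.*bar'         # regexp search that uses the index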
PS: There are also ctags and cscope to navigate C/C++ sources. ctags can find declarations/definitions; cscope is more capable, but has problems with C++.
PPS: There are also clang-based tools for C/C++/ObjC: http://blog.wuwon.id.au/2011/10/vim-plugin-for-navigating-c-with.html and clang-complete.
I notice that grep does not create an index so lookup requires going through the entire source code database each time.
Without addressing the indexing-ability part: git grep will have, with Git 2.8 (Q1 2016), the ability to run in parallel!
See commit 89f09dd, commit 044b1f3, commit b6b468b (15 Dec 2015) by Victor Leschuk (vleschuk).
(Merged by Junio C Hamano -- gitster -- in commit bdd1cc2, 12 Jan 2016)
grep: add --threads=<num> option and grep.threads configuration
"git grep" can now be configured (or told from the command line) how
many threads to use when searching in the working tree files.
grep.threads:
Number of grep worker threads to use.
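A quick illustration once you are on Git 2.8 or later:
git config grep.threads 8        # persist a default number of worker threads
git grep --threads=8 foo         # or set it per invocation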
ack is a code searching tool that is optimized for programmers, especially programmers dealing with large heterogeneous source code trees: http://beyondgrep.com/
Are some of your searches ones where you only want to look at a certain type of file, like only Java files? Then you can do
ack --java function
ack does not index the source code, but that may not matter depending on what your search patterns are like. In many cases, only searching certain types of files gives the speedup you need, because you're not also searching all those other XML etc. files.
And if ack doesn't do it for you, here is a list of many tools designed for searching source code: http://beyondgrep.com/more-tools/
We use a tool internally to index very large log files and make efficient searches of them. It has been open-sourced. I don't know how well it scales to large numbers of files, though. It multithreads by default, it searches inside gzipped files, and it caches indexes of previously searched files.
https://github.com/purestorage/4grep
This grep-cache article has a script for caching grep results. The examples were run on Windows with Linux tools installed, so it can easily be used on *nix/Mac with little modification. It's mostly just a Perl script anyway.
Also, the filesystem itself (assuming you're using *nix) often caches recently read data, so future greps are faster because grep is effectively searching memory instead of disk.
You can manually drop that cache via /proc/sys/vm/drop_caches if you want to see the speed difference between an uncached and a cached grep.
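For reference, a typical way to drop those caches on Linux before timing an uncached run (requires root):
sync                                        # flush dirty pages first
echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes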
Since you mention various kinds of text files that are not really code, I suggest you have a look at GNU ID utils. For example:
cd /tmp
# create index file named 'ID'
mkid -m /dev/null -d text /var/log/messages.*
# query index
gid -r 'spamd|kernel'
These tools focus on tokens, so queries on strings of tokens are not possible. There is minimal integration in emacs for the gid command.
For the more specific case of indexing source code, I prefer to use GNU global, which I find more flexible. For example:
cd sourcedir
# index source tree
gtags .
# look for a definition
global -x main
# look for a reference
global -xr printf
# look for another kind of symbol
global -xs argc
Global natively supports C/C++ and Java, and with a bit of configuration, can be extended to support many more languages. It also has very good integration with emacs: successive queries are stacked, and updating a source file updates the index efficiently. However I'm not aware that it is able to index plain text (yet).

How to compare two tarballs' content

I want to tell whether two tarball files contain identical files, in terms of file name and file content, not including metadata like date, user, group.
However, there are some restrictions:
First, I have no control over whether the metadata is included when making the tar files; in fact, the tar files always contain metadata, so directly diffing the two tar files doesn't work.
Second, some tar files are so large that I cannot afford to untar them into a temp directory and diff the contained files one by one. (I know that if I can untar file1.tar into file1/, I can compare them by invoking 'tar -dvf file2.tar' in file1/. But usually I cannot afford to untar even one of them.)
Any idea how I can compare the two tar files? It would be better if it could be accomplished within shell scripts. Alternatively, is there any way to get each member file's checksum without actually untarring the tarball?
Thanks,
Try also pkgdiff to visualize differences between packages (it detects added/removed/renamed files and changed content, and exits with code zero if the packages are unchanged):
pkgdiff PKG-0.tgz PKG-1.tgz
Are you controlling the creation of these tar files?
If so, the best trick would be to create an MD5 checksum file and store it within the archive itself. Then, when you want to compare two archives, you just extract these checksum files and compare them.
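A minimal sketch of that idea (the file and archive names are only illustrative), assuming you build the tarballs yourself:
# when building each archive, include a manifest of per-file checksums
find mydir -type f -exec md5sum {} + | sort -k 2 > checksums.md5
tar -cf mydir.tar mydir checksums.md5
# later, compare two archives by extracting just the manifests to stdout
tar -xOf a.tar checksums.md5 > a.md5
tar -xOf b.tar checksums.md5 > b.md5
diff a.md5 b.md5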
If you can afford to extract just one tar file, you can use the --diff option of tar to look for differences with the contents of other tar file.
One more crude trick, if you are fine with just comparing the filenames and their sizes.
Remember, this does not guarantee that the file contents are the same!
Execute tar tvf to list the contents of each archive and store the outputs in two different files. Then slice out everything besides the filename and size columns. Preferably sort the two files too. Finally, just diff the two lists.
Just remember that this last scheme does not really do any checksumming.
Sample tar and output (all files are zero size in this example).
$ tar tvfj pack1.tar.bz2
drwxr-xr-x user/group 0 2009-06-23 10:29:51 dir1/
-rw-r--r-- user/group 0 2009-06-23 10:29:50 dir1/file1
-rw-r--r-- user/group 0 2009-06-23 10:29:51 dir1/file2
drwxr-xr-x user/group 0 2009-06-23 10:29:59 dir2/
-rw-r--r-- user/group 0 2009-06-23 10:29:57 dir2/file1
-rw-r--r-- user/group 0 2009-06-23 10:29:59 dir2/file3
drwxr-xr-x user/group 0 2009-06-23 10:29:45 dir3/
Command to generate sorted name/size list
$ tar tvfj pack1.tar.bz2 | awk '{printf "%10s %s\n",$3,$6}' | sort -k 2
0 dir1/
0 dir1/file1
0 dir1/file2
0 dir2/
0 dir2/file1
0 dir2/file3
0 dir3/
You can take two such sorted lists and diff them.
You can also use the date and time columns if that works for you.
tarsum is almost what you need. Take its output, run it through sort to get the ordering identical on each, and then compare the two with diff. That should get you a basic implementation going, and it would be easy enough to pull those steps into the main program by modifying the Python code to do the whole job.
Here is my variant; it checks the Unix permissions too.
It works only if the filenames are shorter than 200 characters:
diff <(tar -tvf 1.tar | awk '{printf "%10s %200s %10s\n",$3,$6,$1}'|sort -k2) <(tar -tvf 2.tar|awk '{printf "%10s %200s %10s\n",$3,$6,$1}'|sort -k2)
EDIT: See the comment by #StéphaneGourichon
I realise that this is a late reply, but I came across the thread whilst attempting to achieve the same thing. The solution that I've implemented outputs the tar to stdout, and pipes it to whichever hash you choose:
tar -xOzf archive.tar.gz | sort | sha1sum
Note that the order of the arguments is important; particularly O, which tells tar to extract the contents to stdout.
Is tardiff what you're looking for? It's "a simple perl script" that "compares the contents of two tarballs and reports on any differences found between them."
There is also diffoscope, which is more generic and allows comparing things recursively (it understands various formats).
pip install diffoscope
I propose gtarsum, which I have written in Go, meaning it is a self-contained executable (no Python or other runtime needed).
go get github.com/VonC/gtarsum
It will read a tar file, and:
sort the list of files alphabetically,
compute a SHA256 for each file content,
concatenate those hashes into one giant string,
compute the SHA256 of that string.
The result is a "global hash" for a tar file, based on the list of files and their content.
It can compare multiple tar files, and return 0 if they are identical, 1 if they are not.
Just throwing this out there since none of the above solutions worked for what I needed.
This function gets the md5 hash of the md5 hashes of all the file-paths matching a given path. If the hashes are the same, the file hierarchy and file lists are the same.
I know it's not as performant as others, but it provides the certainty I needed.
PATH_TO_CHECK="some/path"
for template in $(find build/ -name '*.tar'); do
    tar -xvf "$template" --to-command=md5sum |
        grep "$PATH_TO_CHECK" -A 1 |
        grep -v "$PATH_TO_CHECK" |
        awk '{print $1}' |
        md5sum |
        awk "{print \"$template\",\$1}"
done
*note: An invalid path simply returns nothing.
If you don't need to extract the archives or to see the detailed differences, try diff's -q option:
diff -q 1.tar 2.tar
The quiet result will be "1.tar 2.tar differ", or nothing if there are no differences.
There is a tool called archdiff. It is basically a Perl script that can look into archives.
Takes two archives, or an archive and a directory and shows a summary of the
differences between them.
I had a similar question and I resolved it with Python; here is the code.
PS: although this code compares the contents of two zip archives, the approach is similar for tarballs. I hope it helps.
import zipfile
import os
import hashlib
import shutil

def decompressZip(zipName, dirName):
    # Extract every member of the archive into dirName and return the name list.
    try:
        zipFile = zipfile.ZipFile(zipName, "r")
        fileNames = zipFile.namelist()
        for file in fileNames:
            zipFile.extract(file, dirName)
        zipFile.close()
        return fileNames
    except Exception, e:
        raise Exception, e

def md5sum(filename):
    # MD5 of the file contents (read in one go).
    f = open(filename, "rb")
    md5obj = hashlib.md5()
    md5obj.update(f.read())
    hash = md5obj.hexdigest()
    f.close()
    return str(hash).upper()

if __name__ == "__main__":
    oldFileList = decompressZip("./old.zip", "./oldDir")
    newFileList = decompressZip("./new.zip", "./newDir")
    oldDict = dict()
    newDict = dict()
    # Map each archive member to the MD5 of its content (directories skipped).
    for oldFile in oldFileList:
        tmpOldFile = "./oldDir/" + oldFile
        if not os.path.isdir(tmpOldFile):
            oldFileMD5 = md5sum(tmpOldFile)
            oldDict[oldFile] = oldFileMD5
    for newFile in newFileList:
        tmpNewFile = "./newDir/" + newFile
        if not os.path.isdir(tmpNewFile):
            newFileMD5 = md5sum(tmpNewFile)
            newDict[newFile] = newFileMD5
    # Files only in the new archive, and files whose content changed.
    additionList = list()
    modifyList = list()
    for key in newDict:
        if not oldDict.has_key(key):
            additionList.append(key)
        else:
            newMD5 = newDict[key]
            oldMD5 = oldDict[key]
            if not newMD5 == oldMD5:
                modifyList.append(key)
    print "new file list: %s" % additionList
    print "modified file list: %s" % modifyList
    shutil.rmtree("./oldDir")
    shutil.rmtree("./newDir")
