Refresh directory size in Linux

I have deleted all files older than two days from my directories using:
find . -mtime +2 -exec rm {} \;
The files got deleted fine, but the size of the directories has not changed. Is there anything I can do to refresh the size? I have tried pwd but nothing.

Have you checked lsof? If some application has the files open, they are not actually deleted until it closes them. But I am not sure whether this affects the directory size.
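If that is the cause, lsof can list files that were deleted but are still held open (the +L1 option selects open files with a link count below 1), and du shows the actual disk usage; this is just a quick check I would add, not part of the original reply:
lsof +L1    # open files that have been unlinked (deleted) but not yet released
du -sh .    # actual disk usage of the current directory tree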

Related

bash delete older files

I have this unique requirement of finding files older than 2 years and deleting them, and not only the files but also the corresponding empty directories. I have written most of the logic; the only thing still pending is this: when I delete a particular file from a directory, how can I delete the corresponding directory once it is empty? When I delete the file, the directory's ctime/mtime gets updated accordingly. How do I target those corresponding older directories and delete them?
Any pointers will be helpful.
Thanks in advance.
I would do something like this:
find /path/to/files* -mtime +730 -delete
-mtime +730 finds files which are older than 730 days.
Please be careful with this kind of command though: run find /path/to/files* -mtime +730 on its own beforehand and check that these really are the files you want to delete!
Edit:
Now that you have deleted the files from the directories, -mtime +730 won't work any more: deleting the files updated the directories' modification times.
To delete all empty directories that you have recently altered:
find . -type d -mmin -60 -empty -delete
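As with the command above, you can preview the matches first by dropping -delete:
find . -type d -mmin -60 -empty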

Linux large amount of files not being deleted

I have a folder of cache files in a Linux VM that weren't being deleted for some reason.
I'm trying to delete them (or the folder itself) but nothing seems to work.
rm just gives me back Argument list too long
I'm trying now
find ./cache -type f -delete , hitting ls -l every once in a while but I keep getting the same # of files.
Also tried
find ./cache -type f -exec rm -v {} \; but same thing again.
I would be OK if I just delete the folder as long as I recreate it after.
Thank you
EDIT: OK, found out ls -l does not return the # of files; if however I do
ls | wc -l the system seems not to respond at all.
Use rm -R on the directory to remove large numbers of files.
The Linux command line length is limited, so rm with a wildcard cannot work.
The find command will work; your directory is just really big. Launch your find command and go to lunch.
EDIT: by the way, make sure to ls the same directory you want to remove files from, i.e. ./cache. It is not clear in your question.
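Since you said recreating the folder is fine, removing the directory itself avoids the per-file overhead, and piping find to wc is a cheaper way to count files than ls, which sorts the whole listing first. A quick sketch (paths taken from the question):
find ./cache -type f | wc -l     # count remaining files without sorting, to watch progress
rm -rf ./cache && mkdir ./cache  # or just remove the whole folder and recreate it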

How to delete files and directories older than n days in linux

I have a directory named repository which has a number of files and subdirectories. I want to find the files and directories which have not been modified in the last 14 days, so that I can delete them.
I have written this script, but it gives the directory names only:
#!/bin/sh
M2_REPO=/var/lib/jenkins/.m2/repository
echo $M2_REPO
OLDFILES=/var/lib/jenkins/.m2/repository/deleted_artifacts.txt
AGE=14
find "${M2_REPO}" -name '*' -atime +${AGE} -exec dirname {} \; >> ${OLDFILES}
find /path/to/files* -mtime +5 -exec rm {} \;
Note that there are spaces between rm, {}, and \;
Explanation
The first argument is the path to the files. This can be a path, a directory, or a wildcard as in the example above. I would recommend using the full path, and make sure that you run the command without the -exec rm first to make sure you are getting the right results.
The second argument, -mtime, is used to specify the number of days old that the file is. If you enter +5, it will find files older than 5 days.
The third argument, -exec, allows you to pass in a command such as rm. The {} \; at the end is required to end the command.
This should work on Ubuntu, Suse, Redhat, or pretty much any version of linux.
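As an extra sanity check (my suggestion, not part of the answer above), prefixing rm with echo makes find print each command instead of executing it:
find /path/to/files* -mtime +5 -exec echo rm {} \;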
You can give find the -delete flag to remove the files with it. Just be careful to put it at the end of the command so that the time filter is applied first.
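To illustrate why the order matters (find evaluates its expression left to right, so a leading -delete acts before any test):
find /path/to/files -mtime +5 -delete    # correct: filter first, then delete
# find /path/to/files -delete -mtime +5  # WRONG: would delete everything it visits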
You can first just list the files that the command finds:
find "${M2_REPO}" -depth -mtime +${AGE} -print
The -depth flag makes find do the search depth-first, which is implied by the -delete action.
If you like the results, change the print to delete:
find "${M2_REPO}" -mtime +${AGE} -delete
I know this is a very old question, but FWIW I solved the problem in two steps: first find and delete files older than N days, then find and delete empty directories. I tried doing both in one step, but the delete operation updates the modification time on the file's parent directory, and then the (empty) directory no longer matches the -mtime criterion! Here's the solution with shell variables:
age=14
dir="/tmp/dirty"
find "$dir" -mtime "+$age" -delete && find "$dir" -type d -empty -delete

Linux - Find command and tar command Failure

I am using a combination of the find and copy commands in my backup script.
It is used on a fairly huge amount of data.
First, out of 25 files it needs to find all the files older than 60 mins,
then copy these files to a temp directory; each of these files is 1.52 GB to 2 GB.
One of these 25 files will have data being appended continuously.
I have learnt from googling that a tar operation will fail if the file being tarred is updated while tar runs. Is it the same with find and copy?
I am trying something like this:
/usr/bin/find $logPath -mmin +60 -type f -exec /bin/cp {} $logPath/$bkpDirectoryName \;
After this I have a step where I tar the files copied to the temp directory mentioned above ($bkpDirectoryName); here I use the command below:
/bin/tar -czf $bkpDir/$bkpDirectoryName.tgz $logPath/$bkpDirectoryName
and this also fails.
The same backup script had been running for many days and suddenly it started failing and is causing me a headache! Can someone please help me with this?
Can you try these steps, please:
Instead of copying files older than 60 min, move them.
Run the tar on the moved files (see the sketch after this list).
If you do the above, the file which is continuously appended to will not be moved.
In case any of your other 24 files might be updated after 60 min, you can do the following:
Once you move a file, touch a file with the same name in case there are async updates which are not continuous.
When tarring the file, give a timestamp name to the tar file. This way you have a rolling tar of your logs.
If nothing works due to some custom requirement on your side, try doing an rsync and then do the same operations on the rsynced files (i.e. find and tar, or just tar).
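A minimal sketch of the move-then-tar idea, reusing the variable names from the question ($logPath, $bkpDirectoryName, $bkpDir); the -maxdepth 1 (to keep find out of the backup subdirectory itself) and the timestamped archive name are my own assumptions:
/usr/bin/find "$logPath" -maxdepth 1 -mmin +60 -type f -exec mv {} "$logPath/$bkpDirectoryName/" \;
/bin/tar -czf "$bkpDir/$bkpDirectoryName-$(date +%Y%m%d%H%M).tgz" -C "$logPath" "$bkpDirectoryName"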
Try this:
output=$(find "$logPath" -mmin 60 -type f)
if [ "temp$output" != "temp" ]; then
    # $output is deliberately unquoted so each filename becomes a separate
    # argument to cp (note: this breaks on filenames containing spaces)
    cp -rf $output "$other_than_logPath/$bkpDirectoryName/"
else
    echo sorry
fi
I think you are using +60 instead of 60.
I would also like to know at what interval your script gets called.
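For reference (my addition, not part of this answer), find's numeric time tests have three distinct forms:
find . -mmin +60   # modified more than 60 minutes ago
find . -mmin 60    # modified exactly 60 minutes ago
find . -mmin -60   # modified less than 60 minutes ago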
#!/bin/bash
# Copy every file modified exactly 60 minutes ago (use -mmin +60 for "older than 60 minutes")
for f in $(find / -name "*" -mmin 60); do
    cp "$f" /  ## Choose directory
done
That's basically what you need; just change the directory, I guess.

Bash script to recursively step through folders and delete files

Can anyone give me a bash script or one-line command I can run on Linux to recursively go through each folder from the current folder and delete all files or directories starting with '._'?
Change directory to the root directory you want (or change . to the directory) and execute:
find . -name "._*" -print0 | xargs -0 rm -rf
xargs allows you to pass several parameters to a single command, so it will be faster than using the find -exec syntax. Also, you can run the find once without the pipe to xargs, to view the files it would delete and make sure it is safe.
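For example (my own illustration), to preview either step:
find . -name "._*"                                   # list what would be deleted
find . -name "._*" -print0 | xargs -0 echo rm -rf    # print the rm command instead of running it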
find . -name '._*' -exec rm -Rf {} \;
I had a similar problem a while ago (I assume you are trying to clean up a drive that was connected to a Mac, which saves a lot of these files), so I wrote a simple Python script which deletes these and other useless files; maybe it will be useful to you:
http://github.com/houbysoft/short/blob/master/tidy
find /path -name "._*" -exec rm -fr {} +
(The + form passes many files to a single rm invocation, much like xargs, and takes no trailing \;.)
Instead of deleting the AppleDouble files, you could merge them with the corresponding files. You can use dot_clean.
dot_clean -- Merge ._* files with corresponding native files.
For each dir, dot_clean recursively merges all ._* files with their corresponding native files according to the rules specified with the given arguments. By default, if there is an attribute on the native file that is also present in the ._ file, the most recent attribute will be used.
If no operands are given, a usage message is output. If more than one directory is given, directories are merged in the order in which they are specified.
Because dot_clean works recursively by default, use:
dot_clean <directory>
If you want to turn off the recursive merge, use -f for a flat merge.
dot_clean -f <directory>
find . -name '._*' -delete
A bit shorter, and it performs better with an extremely long list of files.
