Linux: large number of files not being deleted

I have a folder of cache files in a Linux VM that weren't being deleted for some reason.
I'm trying to delete them (or the folder itself) but nothing seems to work.
rm just gives me back Argument list too long.
I'm trying now
find ./cache -type f -delete, hitting ls -l every once in a while, but I keep getting the same number of files.
Also tried
find ./cache -type f -exec rm -v {} \; but same thing again.
I would be OK with just deleting the folder, as long as I recreate it afterwards.
Thank you
EDIT: OK, I found out that ls -l does not return the number of files. If, however, I do
ls | wc -l the system seems to not respond at all.

Use rm -R filename to remove large data files

The Linux command-line length is limited, so plain rm cannot work here.
The find command will work, though your directory is really big. Launch your find command and go to lunch.
EDIT: By the way, make sure to ls the same directory you want to remove files from, i.e. ./cache. It is not clear in your question.
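As a sketch (assuming the ./cache path from the question), the following avoids the argument-list limit and gives a cheap progress check without the sort that plain ls performs:
# delete files without building a huge argument list
find ./cache -type f -delete
# meanwhile, in another shell: count remaining entries (ls -f skips sorting; the count includes . and ..)
ls -f ./cache | wc -l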

Related

Linux - Can't recursively delete large directories

I have a pretty big find that is supposed to delete any files/directories it finds. I just can't get it to work properly.
If I attach -exec rm -fr {} \;, at some point, I always get the following errors:
find: ‘/path/to/dir/file123.local’: No such file or directory
If I replace it with -delete, I get the following error:
find: cannot delete `/path/to/dir': Directory not empty
I looked for suggestions online, but the suggestion is always the other option (replace -exec with -delete and vice versa).
Does anyone happen to know a way to fix it without redirecting stderr to null?
Thanks ahead!
find doesn't know what your command passed to -exec does. It traverses the directory tree in this order:
find a file
execute a command on that file
if it's a directory, traverse it down
Now if the directory is removed with rm -fr, there is nothing to traverse down any more, so find reports it.
If you supply the -depth option, then the traversal order changes:
find a file
if it's a directory, traverse it down
execute a command on that file
This will eliminate the error message.
-delete implies -depth, so ostensibly it should work. However, it is your responsibility to make sure the directories you want to delete are completely cleaned up. If you filter out some files with -mtime etc., you may end up trying to delete a directory which is not completely empty.
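A minimal sketch of the depth-first ordering described above (the path and name filter are hypothetical):
# -depth visits a directory's contents before the directory itself,
# so nothing is left to traverse after rm has already removed it
find /path/to/dir -depth -name '*.local' -exec rm -fr {} \;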
You could try to wrap {} in double quotes; there may be spaces in the directory path.
-exec rm -rf "{}" \;
If I read your question correctly, you want to remove files, but sometimes it happens that they have already been removed by some other process, and you wonder what you should do.
Why do you think you should do anything? You want the file to be gone, and apparently it is gone, so no worries.
Obviously the corresponding error messages might be annoying, but you can handle this by adding 2>/dev/null at the end of your command (redirecting the error output to /dev/null).
So you get:
find ... -exec rm -fr {} \; 2>/dev/null
Edit after comment from user1934428:
It might be a good idea to drop the r switch:
find ... -exec rm -f {} \; 2>/dev/null
In that case, you should have no errors anymore:
find ... -exec rm -f {} \;

Refresh directory size in Linux

I have deleted all files older than two days from my directories using:
find . -mtime +2 -exec rm {} \;
The files got deleted fine, but the size of the directories has not changed. Is there anything I can do to refresh the size? I have tried pwd but nothing.
Have you checked lsof? If some application still has the files open, they are not actually deleted until it closes them. But I am not sure whether this affects the directory size.
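As a sketch (assuming lsof is installed), this lists files that have been unlinked but are still held open by some process, which is the situation described above:
# +L1 selects open files with a link count below 1, i.e. deleted but still open
lsof +L1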

Find and remove over SSH?

My web server got hacked (Despite the security team telling me nothing was compromised) and an uncountable number of files have an extra line of PHP code generating a link to some Vietnamese website.
Given there are tens of thousands of files across my server, is there a way I can go in with SSH and remove that line of code from every file it's found in?
Please be specific in your answer, I have only used SSH a few times for some very basic tasks and don't want to end up deleting a bunch of my files!
Yes, a few lines of shell script would do it. I hesitate to give it to you, though, as if something goes wrong I'll get blamed for messing up your web server. That said, the solution could be as simple as this:
for i in `find /where/ever -name '*.php'`; do
    mv "$i" "$i.bak"                                      # keep a backup of the original
    grep -v "http://vietnamese.web.site" "$i.bak" > "$i"  # rewrite the file without the offending lines
done
This finds all the *.php files under /where/ever and removes any lines that contain http://vietnamese.web.site. It makes a *.bak copy of every file. After you run this and all seems good, you can delete the backups with
find . -name '*.php.bak' -exec rm \{\} \;
Your next task would be to find a new provider, as not only did they get hacked, but they apparently don't keep backups. Good luck.
First create a regex that matches the bad code (and only the bad code), then run:
find /path/to/webroot -name \*.php -exec echo sed -i -e 's/your-regex-here//' {} \;
If everything looks right, remove the echo.
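As a concrete sketch (reusing the placeholder URL from the answer above; substitute your real pattern), sed's d command deletes the whole matching line rather than just emptying it:
# dry run: list the files that contain the injected link
find /path/to/webroot -name '*.php' -exec grep -l 'vietnamese\.web\.site' {} \;
# once satisfied, delete every matching line in place
find /path/to/webroot -name '*.php' -exec sed -i '/vietnamese\.web\.site/d' {} \;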
I do it the following way, e.g. to delete files matching a particular name or extension:
rm -rf *cron.php.*
rm -rf *match_string*
where match_string can be any string. Make sure there is no space between * and the string.
rm -f cron.php.*
This deletes all files in this folder named cron.php.[whereveryouwant].

sed not working as expected, but only for directory depth greater than 1

I am trying to find all instances of a string in all files on my system up to a specified directory depth. I then want to replace these with another string and I am using 'find' and 'sed' by piping one into the other.
This works when I use a base path such as cd /home/../.. or any other directory which isn't "/". It also only works if I select a directory depth of 1 (so /test.txt is changed, but /home/test.txt isn't). If I change nothing else and use, say, a depth of 2 or 3, neither /test.txt nor /home/test.txt is changed. In the former case no warnings appear, and in the latter I get the results below (and no strings are replaced in either of the files).
Worryingly, it did work once out of the blue, but I have no idea how and I can't recreate the results. I should say I know the risks of using these commands as root from the base directory, and the specific use of the programs below is intentional, so I am not looking for an alternative way, just a clue as to why this isn't working and perhaps a suggestion on how to fix it.
cd /;find . -maxdepth 3 -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'
sed: couldn't open temporary file ./sys/kernel/sedoPGqGB: No such file or directory
sed: couldn't open temporary file ./proc/878/sedtqayiq: No such file or directory
As you can see, there are warnings, but nevertheless I would expect it to work; the commands appear good. Anything I am missing, folks?
This should be:
find / -maxdepth 3 -type f -print -exec sed -i -e 's/teststring123/itworked/g' {} \;
Although changing all files below / strikes me as a very bad idea indeed (I hope you're not running as root!).
The "couldn't open temporary file ./[...]" errors are likely to be because sed, running as your user, doesn't have permission to create files in /.
My version runs from your current working directory (I assume your ${HOME}), where you'll be able to create the temporary file, but you're still unlikely to be able to replace files vital to the continued running of your operating system.
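One way to sidestep the pseudo-filesystems entirely (a sketch that is not in the original answer; -prune is standard find) is to exclude /proc and /sys from the traversal:
# skip /proc and /sys so sed never tries to create temp files there
find / -maxdepth 3 \( -path /proc -o -path /sys \) -prune -o \
    -type f -exec sed -i 's/teststring123/itworked/g' {} \;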

Bash script to recursively step through folders and delete files

Can anyone give me a bash script or one-line command I can run on Linux to recursively go through each folder from the current folder and delete all files or directories starting with '._'?
Change directory to the root directory you want (or change . to the directory) and execute:
find . -name "._*" -print0 | xargs -0 rm -rf
xargs allows you to pass several parameters to a single command, so it will be faster than using the find -exec syntax. Also, you can run the find part alone first (without the pipe to xargs) to view the files it would delete and make sure it is safe.
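A sketch of that dry run, using the same pattern (nothing is deleted; -print0 is swapped for a human-readable -print):
# preview what would be removed before piping to xargs rm -rf
find . -name "._*" -print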
find . -name '._*' -exec rm -Rf {} \;
I had a similar problem a while ago (I assume you are trying to clean up a drive that was connected to a Mac, which saves a lot of these files), so I wrote a simple Python script which deletes these and other useless files; maybe it will be useful to you:
http://github.com/houbysoft/short/blob/master/tidy
find /path -name "._*" -exec rm -fr "{}" +
Instead of deleting the AppleDouble files, you could merge them with the corresponding files. You can use dot_clean.
dot_clean -- Merge ._* files with corresponding native files.
For each dir, dot_clean recursively merges all ._* files with their corresponding native files according to the rules specified with the given arguments. By default, if there is an attribute on the native file that is also present in the ._ file, the most recent attribute will be used.
If no operands are given, a usage message is output. If more than one directory is given, directories are merged in the order in which they are specified.
Because dot_clean works recursively by default, use:
dot_clean <directory>
If you want to turn off the recursive merge, use -f for a flat merge:
dot_clean -f <directory>
find . -name '._*' -delete
A bit shorter, and it performs better in the case of an extremely long list of files.
