PHP script just filled up hard drive with junk, how do I find it? - linux

I just ran a PHP script which filled up my *nix server's hard drive with 15GB of some sort of junk. How do I find the junk so I can delete it? I'm not sure if it's a huge error_doc file or what.

One option is to use the find command.
find / -type f -size +50M
will search downward from the root directory for regular files larger than 50MB. If you want to limit how many levels of subdirectories are searched, you can use the -maxdepth switch.
find / -maxdepth 3 -type f -size +50M
will look for files larger than 50MB, but will only recurse 3 directories down.
This assumes that you know that the files which were created are larger than a certain size, and you can pick them out if they are displayed.
You might also be able to make use of the knowledge that the files were created recently.
find / -type f -mmin -60
should find files which were modified within the past hour (the minus sign in -mmin -60 means "less than 60 minutes ago"; a bare 60 would match only files modified exactly 60 minutes ago).
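If both clues apply, the tests can be combined into one command; a rough sketch, assuming the junk is in files over 50MB written within the last hour:
find / -type f -size +50M -mmin -60 -exec ls -lh {} +
The ls -lh at the end just prints the matches with human-readable sizes so you can eyeball them before deleting anything.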

Related

Linux Copy All Files with specific filename length

I want to Copy all files in my directory with a specific file name length.
e.g.
These files exist:
1.py
12.py
123.py
321.py
1234.py
Then I want to copy only the files 123.py and 321.py (because their names have a length of 3).
I am new to Linux and don't know how to accomplish this. Can anyone help me?
If I understood correctly, you want to copy files whose names consist of three characters followed by .py. This could be done using:
cp ???.py destination_directory/
(Note: this could fail with an "argument list too long" error if a very large number of files match, but the limit is typically large on modern systems.)
You can also do it using the command find, matching the three-character name pattern explicitly:
find directory1 -maxdepth 1 -type f -name '???.py' -exec cp -nv {} directory2/ \;
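If the required length were something other than three characters, a small bash sketch that checks the name length explicitly could be used instead (directory1 and directory2 are the same placeholders as above):
for f in directory1/*.py; do
  name=$(basename "$f" .py)        # filename without the .py suffix
  if [ "${#name}" -eq 3 ]; then    # keep only names exactly 3 characters long
    cp -nv "$f" directory2/
  fi
done
Changing the 3 to any other number adapts it to other lengths.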

Find all files above a size and truncate?

Running cPanel on a server with various customer accounts under the /home directory.
Many customers' error_log files are exceeding a desired size (let's say 100MB) and I want to create a cron job to run daily to truncate any files over a certain size.
I know truncate can shrink files but it will extend files if they're smaller than the stipulated amount, so does my solution below (of first finding all files above the desired size and only shrinking those) make the most sense and will it work?
for i in $(find /home -type f -iname error_log -size +99M); do
truncate -s 100M $i
done
I'd suggest rotating and compressing logs rather than truncating them. Logs typically compress really well, and you can move the compressed logs to backup media if you like. Plus, if you do have to delete anything, delete the oldest logs, not the newest ones.
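As a rough sketch of that approach (not a drop-in cron job; it assumes the /home layout from the question and GNU gzip/date), each oversized log could be compressed to a timestamped copy and then emptied:
find /home -type f -iname error_log -size +99M -print0 |
while IFS= read -r -d '' log; do
  gzip -c "$log" > "$log.$(date +%F).gz"   # keep a compressed copy
  : > "$log"                               # then empty the live log
done
A real setup would more likely use logrotate, whose compress and copytruncate options handle this (including the window where the log is still being written to) more carefully.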
That said, for educational purposes let's explore truncate. It has the ability to only shrink files, though it's buried in the documentation:
SIZE may also be prefixed by one of the following modifying characters: '+' extend by, '-' reduce by, '<' at most, '>' at least, '/' round down to multiple of, '%' round up to multiple of.
If the files are at a fixed depth you don't need the loop nor the find call. A simple glob will do:
truncate -s '<100M' /home/*/path/to/error_log
If they're at unpredictable depths you can use recursive globbing (bash's globstar option)...
shopt -s globstar
truncate -s '<100M' /home/**/error_log
...or use find -exec <cmd> {} +, which tells find to invoke a command on the files it finds.
find /home -name error_log -exec truncate -s '<100M' {} +
(If there are lots and lots of files find is safest. The glob options could exceed Linux's command-line length limit whereas find guards against that possibility.)
Do not use for i in $(...); it will break on whitespace in file names.
Always quote your variable expansions: write "$i".
find has -exec; just use it.
So:
find /home -type f -iname error_log -size +99M -exec truncate -s 100M {} \;
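Since the question mentions a daily cron job, the finished command can be dropped straight into a crontab entry; a sketch, with the 3:00 AM run time chosen arbitrarily:
0 3 * * * find /home -type f -iname error_log -size +99M -exec truncate -s 100M {} \;
(crontab -e opens the crontab for editing; running this from root's crontab is assumed here so it can reach every customer's home directory.)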

Find numbered subdirectories below number X and delete them

I have a folder 'masterfolder' that has subfolders with a numbered naming scheme:
/masterfolder/S01
/masterfolder/S02
/masterfolder/S03
/masterfolder/S04
/masterfolder/S05
Now I want to find and delete all folders below a specific number, for example S03. That means S03, S04, S05, etc. should not get deleted, while S01 and S02 should.
I normally use this command to find and delete a specific folder:
find "/mnt/USBDRIVE/masterfolder" -type d -name "S02" -exec rm -rf '{}' \;
I tried finding a solution myself, but the only method I have found is to delete everything except the number I know I want to keep:
find "/mnt/USBDRIVE/masterfolder" -mindepth 1 -maxdepth 1 -type d -not -name "S03" -exec rm -rf '{}' \;
This will keep S03, but delete all others. I want to keep S03 and any other folder with a higher number than S03.
Any ideas appreciated.
There are many ways to solve this.
Since your numbers are zero-padded, the easiest way is to just send a sorted list of the directories to a file. Then delete the lines for the directories you want to keep (they'll all be grouped together), do a global change to add "rm -r " to the beginning of each remaining line, and run the file as a script.
This will take you less than 30 seconds. Any programmatic solution will take longer.
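For anyone who does want to script it, here is a minimal bash sketch, assuming the zero-padded S-prefix names shown above and a hard-coded threshold:
keep_from=3   # keep S03 and everything above it
for dir in /mnt/USBDRIVE/masterfolder/S*/; do
  [ -d "$dir" ] || continue         # skip if the glob matched nothing
  num=$(basename "$dir")            # e.g. "S02"
  num=${num#S}                      # strip the leading S -> "02"
  if [ "$((10#$num))" -lt "$keep_from" ]; then
    rm -rf "$dir"
  fi
done
Swapping rm -rf for echo rm -rf on a first run is a cheap way to preview exactly what would be removed.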

bash delete older files

I have a requirement to find files older than 2 years and delete them. But not only the files: the corresponding directories should also go once they are empty. I have written most of the logic, but the one thing still pending is: when I delete a particular file from a directory, how can I then delete that directory once it is empty? When I delete the file, the directory's ctime/mtime gets updated accordingly, so how do I target those corresponding older directories and delete them?
Any pointers will be helpful.
Thanks in advance.
I would do something like this:
find /path/to/files* -mtime +730 -delete
-mtime +730 finds files which are older than 730 days.
Please be careful with this kind of command though, be sure to write find /path/to/files* -mtime +730 beforehand and check that these are the files you want to delete!
Edit:
Now that you have deleted the files from the directories, the directories' timestamps have been updated, so -mtime +730 won't match them.
To delete all empty directories that you have recently altered:
find . -type d -mmin -60 -empty -delete
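If you run the whole cleanup as one job, the two steps can simply be chained, with the second pass sweeping up whatever directories the first pass emptied; a sketch, using the same /path/to/files placeholder as above:
find /path/to/files -type f -mtime +730 -delete
find /path/to/files -mindepth 1 -type d -empty -delete
The -mindepth 1 keeps the top-level directory itself from being deleted, and because -delete processes a directory's contents before the directory itself, nested empty directories are removed in the same pass.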

How to find all files which are basically soft or hard links of other directories or files on linux?

How can I get a list of all linked files on my system, or under a certain directory? I used to create links, but they have become unmanageable over time. I want a list of all such links under a directory. Can anyone help?
Finding symlinks is easy:
% find . -type l
Finding hard links is tricky, because if a subdirectory of the directory in question also has subdirectories then those increase the hard link count. That's how subdirectories are linked to their parents in UNIX (it's the .. entry in each subdirectory).
If you only want to find linked files (and not directories), this will work:
% find . -type f \! -links 1
This works because a file with additional hard links has a link count > 1, while a file with no other hard links has a link count of exactly 1; hence this command looks for all files whose link count is not 1.
Alternatively, on newer versions of find you could use:
% find . -type f -links +1
This works for the same reason as above; however, newer versions of find can take +n or -n instead of just a number. This is equivalent to testing for greater than n or less than n, respectively.
To find every hard link to one particular file, staying on the same filesystem:
find / -xdev -samefile filename
#OP, if you have GNU find, you can inspect hard links using -printf "%n",
e.g.
find /path -type f -printf "%f/%n/%i\n" | while IFS="/" read -r filename num_hlinks inum
do
  echo "Filename: $filename. Number of hard links: $num_hlinks, inode: $inum"
  # if 2 or more files have the same inode number, then they are hard links.
  # you can therefore count how many entries share the same $inum to work out which
  # files are hard links to each other, which you have to try doing yourself.
done
See e.g. here
https://www.gnu.org/software/findutils/manual/html_node/find_html/Hard-Links.html
or combine Alnitak's and amber_linux's answers into
find -L /where/to/search -samefile /some/link/to/file
to find all hard and soft links to a given file.
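Building on the -printf idea above, a one-liner sketch that lists only multiply-linked files and sorts them by inode, so files that are hard links to each other end up on adjacent lines (GNU find assumed; /path is a placeholder):
find /path -type f -links +1 -printf '%i %n %p\n' | sort -n
The first column is the inode, the second the link count, the third the path; identical inode numbers in the first column mark files that share the same data.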
