Find and remove over SSH? - security

My web server got hacked (despite the security team telling me nothing was compromised), and countless files now have an extra line of PHP code generating a link to some Vietnamese website.
Given there are tens of thousands of files across my server, is there a way I can go in over SSH and remove that line of code from every file it's found in?
Please be specific in your answer; I have only used SSH a few times for some very basic tasks and don't want to end up deleting a bunch of my files!

Yes, a few lines of shell script would do it. I hesitate to give it to you, though, as if something goes wrong I'll get blamed for messing up your web server. That said, the solution could be as simple as this:
find /where/ever -name '*.php' | while read -r i; do
    mv "$i" "$i.bak"                                        # keep a backup of the original
    grep -v "http://vietnamese.web.site" "$i.bak" > "$i"    # drop every line containing the link
done
This finds all the *.php files under /where/ever and removes any lines that have http://vietnamese.web.site in them. It makes a *.bak copy of every file. After you run this and all seems good, you can delete the backups with
find . -name '*.php.bak' -exec rm {} \;
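Before removing the backups, it can't hurt to spot-check a diff or two to confirm only the injected line was stripped (an extra step, not from the original answer; somefile.php stands in for any of your real files):
diff somefile.php.bak somefile.php    # should show only the removed injected line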
Your next task would be to find a new provider, as not only did they get hacked, but they apparently don't keep backups. Good luck.

First create a regex that matches the bad code (and only the bad code), then run
find /path/to/webroot -name \*.php -exec echo sed -i -e 's/your-regex-here//' {} \;
If everything looks right, remove the echo.
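For example, if the injected line is an anchor tag linking to the site, a hypothetical pattern (you must adapt it to the actual injected code) could be dry-run and then applied like this:
# Dry run: prints the sed commands instead of executing them
find /path/to/webroot -name \*.php -exec echo sed -i -e 's|<a href="http://vietnamese\.web\.site[^"]*">[^<]*</a>||g' {} \;
# Output looks right? Remove the echo to edit the files in place
find /path/to/webroot -name \*.php -exec sed -i -e 's|<a href="http://vietnamese\.web\.site[^"]*">[^<]*</a>||g' {} \;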

I do it the following way, e.g. to delete files matching a particular name or extension:
rm -rf *cron.php.*
rm -rf *match_string*
where match_string can be any string. Make sure there is no space between * and the string name, or the shell will expand the bare * and delete everything.

rm -f cron.php.*
Deletes all files in this folder called cron.php.[whatever-you-want].


Customized deleting files from a folder

I have a folder where different files can be located. I would like to check if it contains files other than .gitkeep and delete them, keeping .gitkeep itself. How can I do this? (I'm a newbie when it comes to bash)
As always, there are multiple ways to do this; I am just sharing what little I know of Linux:
1) find <path-to-the-folder> -maxdepth 1 -type f ! -iname '.gitkeep' -delete
-maxdepth 1 tells find to search only the given directory itself. If you remove -maxdepth, it will recursively find all files other than '.gitkeep' in all directories under your path. You can increase -maxdepth to however deep you want find to descend.
-type f specifies that we are looking only for files. If you want to find directories as well (or links, or other types), you can omit this option.
-iname '.gitkeep' specifies a case-insensitive match for '.gitkeep'. (No escaping of the '.' is needed: find's -name/-iname patterns are shell globs, not regexes, so '.' has no special meaning here.)
You can use -name instead of -iname for a case-sensitive match.
The '!' before -iname does an inverse match, i.e. it finds all files whose name is not '.gitkeep'; if you remove the '!', you will get all files that match '.gitkeep'.
Finally, -delete deletes the files that match this specification.
If you want to see which files will be deleted before running with -delete, just remove that flag and find will print them:
find <path-to-the-folder> -maxdepth 1 -type f ! -iname '.gitkeep'
(You can also add -print at the end, but it is the default action anyway.)
2) for i in `ls -a | grep -v '\.gitkeep'` ; do rm -rf "$i" ; done
Not really recommended, since parsing ls output is fragile and rm -rf is always a bad idea (IMO). You can change that to rm -f (to ensure it only works on files and not directories).
To be on the safe side, it is recommended to echo the file list first to see if you are ready to delete all the files shown:
for i in `ls -a | grep -v '\.gitkeep'` ; do echo "$i" ; done
This will iterate through all the files that don't match '.gitkeep' and delete them one by one ... not the best way, I suppose, to delete files.
3) rm -rf $(ls -a | grep -v '\.gitkeep')
Again, be careful with rm -rf; instead of rm -rf above, you can again do an echo to find out which files would get deleted.
I am sure there are more ways, but this is just a glimpse of the array of possibilities :)
Good Luck,
Ash
================================================================
EDIT :
=> man pages are your friend when you are trying to learn something new; if you don't understand how a command works or what options it can take, always look it up in man for details.
ex: man find
=> I understand that you are trying to learn something out of your comfort zone, which is always commendable, but Stack Overflow doesn't like people asking questions without doing research first.
If you did research, you are expected to mention it in your question, letting people know what you have done to find answers on your own.
A simple Google search or a deep dive into Stack Overflow questions would have provided you with a similar or even better answer to your question. So be careful :)
Forewarned is forearmed :)
You can use find:
find /path/to/folder -mindepth 1 -maxdepth 1 ! -name .gitkeep -delete
(-mindepth 1 stops find from trying to -delete the starting folder itself.)
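To preview what would be removed, a dry run with the same tests but without -delete just prints the matches:
find /path/to/folder -mindepth 1 -maxdepth 1 ! -name .gitkeep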

Bash script to delete a file in all subdirectories.

I have a directory filled with subdirectories totalling over 450 GB. Inside each of these subdirectories is an instruction file. I have a script that copies the instruction file from the directory I am currently in into every subdirectory via:
#!/bin/bash
for d in */; do cp "INSTALLATION INSTRUCTIONS.rtf" "$d"; done
I need to remove all of these files in the subdirectories and replace them with new instructions. Can I simply write another script that does this:
#!/bin/bash
for d in */; do rm "INSTALLATION INSTRUCTIONS.rtf" "$d"; done
I am very hesitant and wanted to make sure, as these files are vitally important and I don't want to accidentally remove anything; making a backup of 450+ GB is very taxing.
find . -mindepth 2 -name "INSTALLATION INSTRUCTIONS.rtf" -exec rm -f '{}' +
(-mindepth 2 limits the match to files inside the subdirectories, so the master copy in the current directory is untouched; -exec ... + batches many files into each rm invocation.)
Since this is "vitally important" data, I would first list all files that match the file name you want to delete/overwrite, without taking any action on it (other than listing):
find /folder/ -type f -name "INSTALLATION INSTRUCTIONS.rtf" -print > /tmp/holder
That would create a list of matches in /tmp/holder. Then you could analyze this list before taking any action (either visually or programmatically) to make sure it does not include anything you don't want to delete (when dealing with big amounts of data, strange things can happen, so be proactive about protecting the data).
If you are happy with what the list shows, then you could delete the old instructions, or if possible, overwrite them with the new file. Here's an example to overwrite the old file with the new one:
while read -r line; do cp --no-preserve=all /folder/newfile "$line"; done < /tmp/holder
The cp --no-preserve=all command (a GNU coreutils option) ensures the new file gets permissions appropriate to its destination instead of carrying over the source file's mode, ownership, and timestamps. You may change that to a simple cp if you don't want that to happen.
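If you decide to delete the old instructions rather than overwrite them, the same list can drive rm; a sketch adapted from the loop above (not part of the original answer), again with an echo pass first:
while read -r line; do echo rm -f "$line"; done < /tmp/holder   # inspect what would be removed
while read -r line; do rm -f "$line"; done < /tmp/holder        # then delete for real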

Linux large amount of files not being deleted

I have a folder of cache files in a linux VM that weren't being deleted for some reason.
I'm trying to delete them ( or the folder it self ) but nothing seems to work.
rm just gives me back "Argument list too long".
I'm trying now
find ./cache -type f -delete, running ls -l every once in a while, but I keep getting the same number of files.
Also tried
find ./cache -type f -exec rm -v {} \; but same thing again.
I would be OK with just deleting the folder, as long as I can recreate it after.
Thank you
EDIT: OK, found out ls -l does not return the number of files; if however I do
ls | wc -l, the system seems to not respond at all.
Use rm -R on the directory itself (rm -R ./cache) to remove it and everything in it, then recreate it.
The Linux command-line length is limited: the shell expands * into one huge argument list that exceeds the kernel's limit, so rm never even runs.
The find command will work, though your directory is really big, so it will take a while. Launch your find command and go to lunch.
EDIT: btw, make sure to run ls on the same directory you want to remove files from, i.e. ./cache. It is not clear in your question.
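A side note not in the original answers: to watch progress without ls hanging on the huge directory, let find do the counting, since it streams names instead of reading and sorting the whole directory at once:
find ./cache -type f | wc -l   # counts remaining files without building one huge argument list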

How can I batch rename files to lowercase in Subversion?

We are running Subversion on a Linux server. Someone in our organization overwrote about 3k files with mixed case when they all need to be lowercase.
This works in the CLI
rename 'y/A-Z/a-z/' *
But obviously that will screw up Subversion.
svn rename 'y/A-Z/a-z/' *
Doesn't work, because Subversion deals with renames differently, I guess. So how can I do this as a batch job? I suck with the CLI, so please explain it like I'm your parents. Hand-renaming all 3k files is not a task I wish to undertake.
Thanks
Create a little script svn-lowername.sh:
#!/bin/bash
# svn rename each supplied file with its lowercase
for src; do
    dst="$(dirname "$src")/$(basename "$src" | tr '[A-Z]' '[a-z]')"
    [ "$src" = "$dst" ] || svn rename "$src" "$dst"
done
Make sure to chmod +x svn-lowername.sh before you run it.
How it works:
- Loop over each supplied filename.
- The basename part of the filename is lowercased while the directory part is left alone, using tr to do character-by-character translations. (If you need to support non-ASCII letters, you'll need to make a more elaborate mapping.)
- If source and destination filenames are the same we're okay; otherwise, svn rename. (You could use an if statement instead, but for one-liner conditionals using || is pretty idiomatic.)
You can use this for all files in a directory:
./svn-lowername.sh *
...or if you want to do it recursively:
find -name .svn -prune -o -type f -print0 | xargs -0 ./svn-lowername.sh
If you think this problem might happen again, stash svn-lowername.sh in your
path somewhere (~/bin/ might be a good spot) and you can then lose the ./
prefix when running it.
This is a bit hacky, but should work on any number of nested directories:
find -mindepth 1 -name .svn -prune -o -print | (
    while read -r file; do
        svn mv "$file" "`perl -e 'print lc(shift)' "$file"`"
    done
)
Also, you could just revert their commit. Administering the clue-by-four may be prudent.
Renames in Subversion are really two steps, an svn copy followed by an svn delete. svn move does both of these for you, and the file's history is retained through the copy.
While this answer doesn't help with your question, it will help keep this from happening again. Implement the case-insensitive.py script in the pre-commit hook on your Subversion repository. The next time someone tries to add a file whose name differs only by case, the pre-commit hook won't let them.
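A minimal sketch of wiring that script into the repository's pre-commit hook (the install path is an assumption; Subversion passes the repository path and transaction name as the two hook arguments):
#!/bin/sh
# hooks/pre-commit: reject commits whose added paths collide case-insensitively
REPOS="$1"
TXN="$2"
/path/to/case-insensitive.py "$REPOS" "$TXN" || exit 1
exit 0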

What is the safest way to empty a directory in *nix?

I'm scared that one day, I'm going to put a space in the wrong place or leave something out of the command I currently use:
rm -rf ./*
Is there a safer way of emptying the current directory's contents?
The safest way is to sit on your hands before pressing Enter.
That aside, you could create an alias like this one (for Bash)
alias rm="pwd;read;rm"
That will show you your directory, wait for an enter press and then remove what you specified with the proper flags. You can cancel by pressing ^C instead of Enter.
Here is a safer way: use ls first to list the files that will be affected, then use command-line history or history substitution to change the ls to rm and execute the command again after you are convinced the correct files will be operated on.
If you want to be really safe, you could create a simple alias or shell script like:
mv "$1" ~/.recycle/
This would just move your stuff to a .recycle folder (hello, Windows!).
Then set up a cron job to do rm -rf on stuff in that folder that is older than a week.
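A minimal sketch of that setup, assuming ~/.recycle as the holding area and a seven-day window:
#!/bin/bash
# recycle: move everything given on the command line into ~/.recycle
mkdir -p ~/.recycle
mv -- "$@" ~/.recycle/
and the matching crontab entry:
# purge recycled entries older than 7 days, nightly at 03:00
0 3 * * * find ~/.recycle -mindepth 1 -mtime +7 -exec rm -rf {} +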
I think this is a reasonable way:
find . -maxdepth 1 \! -name . -print0 | xargs -0 rm -rf
and it will also take care of hidden files and directories. The slash isn't required after the dot, and leaving it off also eliminates the possible accident of typing . / (with a stray space).
Now if you are worried what it will delete, just change it into
find . -maxdepth 1 \! -name . -print | less
And look at the list. Now you can put it into a function:
function enum_files { find . -maxdepth 1 \! -name . "$@"; }
And now your remove is safe:
enum_files | less # view the files
enum_files -print0 | xargs -0 rm -rf # remove the files
If you are not in the habit of having embedded newlines in filenames, you can omit the -print0 and -0 parameters. But I would use them, just in case :)
Go one level up and type in the directory name
rm -rf <dir>/*
I use one of:
rm -fr .
cd ..; rm -fr name-of-subdirectory
I'm seldom sufficiently attached to a directory that I want to get rid of the contents but must keep the directory itself.
When using rm -rf I almost always use the fully qualified path.
Use the trash command. In Debian/Ubuntu/etc., it can be installed from the package trash-cli. It works on both files and directories (since it's really moving the file, rather than immediately deleting it).
trash implements the freedesktop.org trash specification, compatible with the GNOME and KDE trash.
Files can be undeleted using restore-trash from the same package, or through the usual GUI.
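For example (using the command names mentioned above; trash moves its arguments rather than unlinking them):
trash ./*          # move the current directory's contents to the trash
restore-trash      # interactively pick entries to restore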
You could always turn on -i which would prompt you on every file, but that would be really time consuming for large directories.
I always do a pwd first.
I'll even go so far as to create an alias so that it forces the prompt for my users. Red Hat does that by default, I think.
You could drop the 'f' switch and it should prompt you for each file to make sure you really want to remove it.
If what you want to do is blow away an entire directory, there is always some level of danger associated with that operation. If you really want to be sure you are doing the right thing, you could first move it to some place like /tmp, wait for some amount of time to make sure everything is okay with the "deletion" in place, then go into the /tmp directory and use ONLY relative paths for a forced, recursive remove. Additionally, rename it during the move to "delete-directoryname" to make it harder to make a mistake.
For example I want to delete /opt/folder so I do:
mv /opt/folder /tmp/delete-folder
.... wait to be sure everything is okay - maybe a minute, maybe a week ....
cd /tmp
pwd
rm -rf delete-folder/
The most important tip for doing an rm -rf is to always use relative paths; that way a bare / is never sitting in the command line before you have finished typing.
There's a reason I have [tcsh]:
alias clean '\rm -i -- "#"* *~'
alias rmo 'rm -- *.o'
They were created the first time I accidentally put a space between the * and the .o. Suffice it to say, what happened wasn't what I expected to happen...
But things could have been worse. Back in the early '90s, a friend of mine had a ~/etc directory. He wanted to delete it. Unfortunately he typed rm -rf /etc. Unfortunately, he was logged in as root. He had a bad day!
To be evil: touch -- '-rf *'
To be safe, use '--' and -i. Or get it right once and create an alias!
Here is an alias I am using on macOS. It asks for confirmation before executing every rm command.
# ~/.profile
function saferm() {
    echo rm "$@"
    echo ""
    read -p "* execute rm (y/n)? : " yesorno
    if [ "$yesorno" == "y" ]; then
        /bin/rm "$@"
    fi
}
alias srm=saferm
alias rm=srm
