Yesterday I noticed, while looking through my Apache error log, that someone tried to gain access to the website by requesting a lot of URLs like:
mywebsite.com/phpmyadmin
mywebsite.com/dbadmin
mywebsite.com/mysqladmin
mywebsite.com/foo.php#some-javascript
...
This caused a lot of 404 errors. What's the best way to stop them doing so?
I thought about creating a fake phpmyadmin directory containing some PHP code that bans the visitor's IP address from my website for about 12 to 24 hours when they access that directory.
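Something like this rough sketch is what I have in mind (file locations and the 24-hour window are just placeholders, not a hardened implementation):

```php
<?php
// Rough sketch of the honeypot idea (untested): this would be the fake
// /phpmyadmin/index.php. The ban file lives in the web root only for brevity.
$banFile = __DIR__ . '/../banned_ips.txt';

// Record the offending IP with a timestamp, then pretend nothing is here.
file_put_contents($banFile, $_SERVER['REMOTE_ADDR'] . ' ' . time() . PHP_EOL, FILE_APPEND | LOCK_EX);
http_response_code(404);
exit;
```

and the real site's entry point would check the list:

```php
<?php
// Included at the top of the real site's front controller: refuse requests
// from IPs that hit the honeypot within the last 24 hours.
$banFile = __DIR__ . '/banned_ips.txt';
if (is_readable($banFile)) {
    foreach (file($banFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $parts = explode(' ', trim($line));
        if (count($parts) === 2
            && $parts[0] === $_SERVER['REMOTE_ADDR']
            && time() - (int) $parts[1] < 86400) {
            http_response_code(403);
            exit;
        }
    }
}
```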
Is there a better way to deal with these guys?
You should take a look at Fail2ban; it's pretty easy to set up with Apache.
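For example, a jail roughly like this (the log path, thresholds and filter name are assumptions you'd adapt, not a drop-in config) bans any client that piles up 404s in your Apache access log:

```ini
# /etc/fail2ban/jail.local (illustrative values only)
[apache-404]
enabled  = true
port     = http,https
filter   = apache-404
logpath  = /var/log/apache2/access.log
maxretry = 10
findtime = 60
bantime  = 86400
```

with a matching filter along these lines:

```ini
# /etc/fail2ban/filter.d/apache-404.conf (rough sketch; tune to your log format)
[Definition]
failregex = ^<HOST> .*"(GET|POST|HEAD) [^"]*" 404
ignoreregex =
```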
You can't really prevent people from trying these sorts of attacks. The best you can do is log all these sorts of attempts like you're currently doing and maybe implement some sort of temporary blacklisting.
The security of your site shouldn't depend on people not trying to do these sorts of attacks, since you will never be able to fully prevent them.
If none of those paths exist, the attackers won't be able to do anything. You just have to worry about them accessing parts that do exist and that you don't want them to access, or exploiting poorly written scripts with XSS holes in them.
You could make it harder on them by checking whether they're trying to access a commonly probed path (like phpMyAdmin's usual path) and serving an alternate 404 page that has malicious JavaScript on it, or something along those lines.
I don't know if I'm asking this correctly, but I want to understand how it's done. We have a website, and yesterday we noticed that the index.php file had been deleted from the server and an index.html file had been added in its place. We are quite sure the problem is not the server itself, meaning the server was not hacked, and I would like to know what kinds of attacks could do this. I understand there can be many possibilities, but can someone describe how this can be done, or give me a link where I could read about it? I apologize if I described the situation poorly, but I think someone will understand what I am asking and maybe help. Thanks in advance.
The main attacks are most likely related to a rootkit. Specific modification of a server is hard to do with an automated script, so your suspected hacker is likely accessing your server through a back door. You need to make sure that you are only keeping the needed ports open and have firewalls in place to detect scanners being used against your server. Another option, if you have the funds, is to store your files on a backend storage server and allow your frontend server to access those files; it's not foolproof, but it should effectively square the amount of time it takes to detect an open port and get through the firewall.
Look into these website(s) if you need more info: https://www.veracode.com/security/rootkit
https://en.wikipedia.org/wiki/Rootkit
I've proofread these, and they work well for some basic elaboration on the subject, as well as some prevention methods.
I have an E-commerce site (built on OpenCart 2.0.3.1).
I'm using an SEO pack plugin that keeps a list of 404 errors so we can set up redirects.
As of a couple of weeks ago, I keep seeing a LOT of 404s that don't even look like links:
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39
...and so on, until it reaches:
999999.9" //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39,0x393133353134353632342e39,0x393133353134353632352e39,0x393133353134353632362e39,0x393133353134353632372e39,0x393133353134353632382e39,0x393133353134353632392e39,0x39313335313435363231302e39,0x3931
This isn't happening once, but 30-50 times per example. Over 1600 lines of this mess in the latest 404s report.
Now, I know how to make redirects for "normal" broken links, but:
a.) I have no clue how to even format this.
b.) I'm concerned that this could be a brute-hacking attempt.
What would StackOverflow do?
TomJones999 -
As mentioned in the comments (sort of), this is a security issue for you. The reason for so many URL requests is that a script is likely rifling through many URLs containing SQL. The script / hacker is either doing reconnaissance to find out whether your site / pages are susceptible to an SQL injection attack, or, since they likely already know which e-commerce platform (AND version) you are using, they may intend to exploit a known vulnerability with this SQL injection attempt and achieve some nefarious result (DB access, data dump, etc.).
A few things I would do:
Make sure your OpenCart is up to date and has all the latest patches applied
If it is up to date, it might be worth bringing up in the forums or to an OpenCart Moderator in case the attacker is going after a weakness he found but that OpenCart has not pushed a patch for yet.
You can try to ban the attacker's IP address immediately, but it is likely that they are using several different IP addresses and rotating through them. I might suggest looking into either ModSecurity or fail2ban ( https://www.fail2ban.org/ ). Fail2ban can be a great add-on for security in these situations because there are several ways for it to 'dynamically' thwart this kind of attack.
The excessive 404 errors in a short time span can be observed by fail2ban, and fail2ban can then ban the client that is causing all of them.
Also, there is a fail2ban filter approach for detecting attempted SQL injections and consequently banning the offending clients. For example, I quickly searched and found this fail2ban filter, with a few adjustments/improvements/fixes to the regular expression that detects the SQL injection.
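Just to show the general shape of such a filter (not the one I found; the regex below is deliberately simplified and would need tuning against your actual access-log format before you rely on it):

```ini
# /etc/fail2ban/filter.d/apache-sqli.conf (illustrative sketch only)
[Definition]
failregex = (?i)^<HOST> .*"(GET|POST)[^"]*(?:union[^"]*select|select[^"]*from|0x[0-9a-f]{16,})[^"]*"
ignoreregex =
```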
I would not concern yourself at all with "how to format" that error log heh...
With regard to your code (or the code in OpenCart), what you want to be sure of is that all user-submitted data is sanitized (such as data sent to your server as a GET parameter, as in your case).
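To illustrate the principle with a generic PDO sketch (OpenCart has its own database layer, so treat this as the idea rather than a drop-in fix): the user-supplied value is bound as a parameter and never concatenated into the SQL string.

```php
<?php
// Generic prepared-statement sketch; connection details are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'dbuser', 'dbpass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// The GET parameter is cast and bound, so "uNiOn sElEcT ..." payloads
// are treated as data, not as SQL.
$stmt = $pdo->prepare('SELECT product_id, name, price FROM product WHERE product_id = :id');
$stmt->execute([':id' => (int) ($_GET['product_id'] ?? 0)]);
$product = $stmt->fetch(PDO::FETCH_ASSOC);
```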
Also, if you feel uneasy about the attempted hack, it might be worth watching the feed provided on the haveibeenpwned website. Data resulting from exploits targeted at databases very commonly ends up on sites like Pastebin, and haveibeenpwned will try to parse some of that data and identify these hacks so that you or your users can at least become aware and take appropriate measures.
Best of luck.
I am trying to find out how suitable WebDAV is for a product at the company I am working for.
Our needs seem to exceed what WebDAV has to offer, and I'm trying to find out whether my theory is correct and, if so, how we could work around it.
I am using the WebDAV package that you can install through the "Add/Remove Windows Features" dialog.
The problem is that we want to be able to set permissions for each file, and since we can access and change authoring rules from code, this is more or less possible.
Authoring rules seem to apply to folders rather than individual files, but this could be worked around by giving each file its own folder (although that's a bit ugly).
To me this solution seems very inefficient, mainly because the authoring rules are all placed in a single list, which means that for every file request the server has to loop through the entire list, and the list grows with every file added to the server.
My thought is that we could build some kind of "proxy" that checks permissions in a more efficient way and, if the user has permission to access the file, simply forwards the request to the WebDAV server.
This might also be inefficient, though, since we would have to run an application managing the connection between the user and the WebDAV server, but at least the inefficiency isn't exponential.
I guess this leads to the questions:
Is WebDAV at all suitable for more complex permissions?
Is there some part of WebDAV that I have missed which solves this problem?
If so, would it be better to go with the internal solution, or should we build an external one?
If not WebDAV, is there a better solution? (We want all the nice file-locking, version-control and Office-integration features.)
Use an HttpModule to apply your authorization rules.
system.webServer/modules has an attribute runManagedModulesForWebDavRequests
(note: this is not the same as runAllManagedModulesForAllRequests).
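In web.config terms that looks roughly like the following; only the attribute name comes from IIS itself, while the module name and type are made-up placeholders for your own IHttpModule:

```xml
<!-- Sketch only: "MyAuthzModule" is a placeholder for your own authorization module. -->
<system.webServer>
  <modules runManagedModulesForWebDavRequests="true">
    <add name="MyAuthzModule" type="MyCompany.Dav.MyAuthzModule, MyCompany.Dav" />
  </modules>
</system.webServer>
```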
Forget about IIS.
Forget about pure WebDAV.
Build or get Apache + mod_dav_svn.
Use path-based authorization in SVN, which can enforce (if needed) rules on a per-file basis.
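For example, an authz file can grant or revoke access down to a single file; the repository paths and user names below are made up:

```ini
# Illustrative SVN authz fragment (paths and users are placeholders)
[/]
* = r

[/docs/contracts/secret.docx]
alice = rw
* =
```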
Both the web files and the database have been tampered with, pointing to malicious JavaScript. I have been tasked with rebuilding their site, but I would like to be able to view the site, if possible, to get at the content, since they had a lot of pages. Because I didn't originally build the site, I don't know the structure of the content.
I don't have to repair the site; I just need to rebuild it with the CMS of my choice. I don't know anything about the Joomla database, nor whether I can even get access to it in order to start there.
I originally thought using a virtual machine would be OK for this, but I wasn’t sure if I would be risking my host machine as well using this method. I would of course turn off JavaScript but I was hoping someone else may have been already been down this road and might be able to offer some insight.
Couldn't you just FTP to their host, pull it off and get it working on a machine with no connection?
That's if you were really paranoid; I don't think an XSS-infected site would do too much damage to a properly protected machine anyway.
My paranoid answer:
It's a great idea to turn off JavaScript. I would get an extension like NoScript for Firefox or Notscript for Chrome. I use NoScript regularly, and it makes it easy to see what JavaScript is coming from where.
Secondly, your idea with a VM is good, but take it a step further and run Linux in that VM. Linux can be infected, but it is rare to see something that will infect Linux.
Regular expressions and HTML parsers can also be your friends. Script something that can scan files looking for things like script tags and especially iframes. That way you can get an idea of files that have been corrupted and what is calling to where.
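For example, a quick-and-dirty PHP pass over the downloaded files (the root path and the patterns are just a starting point, not a thorough detector) could flag anything containing script or iframe tags for manual review:

```php
<?php
// Recursively flag files containing <script> or <iframe> tags so you can
// review them by hand. The path is a placeholder for the local copy of the site.
$root = '/path/to/downloaded/site';
$iter = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
);

foreach ($iter as $file) {
    if (!$file->isFile()) {
        continue;
    }
    $contents = file_get_contents($file->getPathname());
    if (preg_match_all('/<\s*(script|iframe)\b[^>]*>/i', $contents, $m)) {
        printf("%s: %d suspicious tag(s)\n", $file->getPathname(), count($m[0]));
    }
}
```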
One other less likely gotcha is malicious executables or scripts disguised as something innocent like a JPEG, PDF, etc. If you download and open files off of that machine, make sure you at least do it inside your VM with no network connectivity.
Get server logs if you can; perhaps your assailant was sloppy and left some clue about their activities. Perhaps run Wireshark on a second machine to look for things calling out to strange domains. This may be excessive, but I find it to be a fun exercise. :)
Also things like Virustotal and Threat Expert can be your friends if you think you have a malicious file or you see malicious activity. Better to be paranoid than compromised.
Cleaning this type of stuff up isn't exactly rocket science. You just need to get a connection to the backing database server and run a couple of queries to kill the XSS stuff out of the stored content.
You'd do your client a great service by starting off doing just that.
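As a rough illustration only: assuming a stock Joomla schema (the default "jos_" table prefix and the introtext/fulltext columns of jos_content) and that you have already identified the exact injected snippet, a cleanup run from PHP might look like the sketch below. Verify the real schema and back up the database before running anything like this.

```php
<?php
// Hedged sketch: strip one known injected tag from Joomla article text.
// Table/column names assume a default Joomla install; adjust to the real schema.
$pdo = new PDO('mysql:host=localhost;dbname=joomla;charset=utf8', 'dbuser', 'dbpass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Placeholder: the exact snippet you found injected into the content.
$payload = '<script src="http://evil.example/bad.js"></script>';

$sql = "UPDATE jos_content
        SET introtext = REPLACE(introtext, :p, ''),
            `fulltext` = REPLACE(`fulltext`, :p2, '')
        WHERE introtext LIKE :like OR `fulltext` LIKE :like2";

$stmt = $pdo->prepare($sql);
$stmt->execute([
    ':p'     => $payload,
    ':p2'    => $payload,
    ':like'  => '%' . $payload . '%',
    ':like2' => '%' . $payload . '%',
]);
echo $stmt->rowCount() . " rows cleaned\n";
```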
The VM idea is a good one. krs1 suggests running Linux, which is an even better idea, as almost all trojans that get downloaded are for Windows. Run Wireshark while you use the site so you can see what the network traffic looks like, what URLs are being requested, etc. If you run it in a Linux VM, though, you'll probably only get half the picture, since any exploit worth the oxygen it took to keep the programmer alive while it was written will check what platform you're on and only download when you're on an exploitable one.
But I digress; you're rebuilding a website, not doing malware analysis (which is more fun, IMO). Once you identify and remove the offending content you should be good. See if you can find out what exploit got them, and work with their IT guy, if they have one, so steps can be taken to keep it from happening again.
For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the hosting company's employees or eavesdroppers on the line into account, because it is a toy site, not something important, and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way someone could find this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string, I would suggest doing usernames and passwords "properly", as HTTP servers have been written not to leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly, since it won't matter if you get it wrong? Then, if you do have a site you need to secure in the future, you'll hopefully have already made all your mistakes.
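For what it's worth, "properly" doesn't have to mean much code. Here's a minimal sketch using PHP's built-in password hashing; the stored hash is a placeholder and the user store / session handling are simplified:

```php
<?php
// Minimal "proper" admin check. $storedHash would normally come from wherever
// you keep user records; the value below is a placeholder, not a real hash.
session_start();

// Created once, e.g. when you set up the admin account:
// $storedHash = password_hash('your-admin-password', PASSWORD_DEFAULT);
$storedHash = '$2y$10$examplehashexamplehashexamplehashexampleha';

if ($_SERVER['REQUEST_METHOD'] === 'POST'
    && password_verify($_POST['password'] ?? '', $storedHash)) {
    $_SESSION['is_admin'] = true;
}

if (empty($_SESSION['is_admin'])) {
    http_response_code(403);
    exit('Forbidden');
}

// ...admin page continues here...
```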
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoffs' Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decent-working solution, security-wise, compared to implementing some form of proper login/authorization system.
If the site starts getting a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links to somewhere else, the Referer header is going to publicize your URL.
If you change http into https, then at least the url will not be visible to anyone sniffing on the network.
(The caveat here is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site/target that enables privilege elevation, or on the system you use to log in, if that system is no longer secure. Some prefer an admin login that looks no different from a standard user login to avoid that.)
You could require that some action be taken a certain number of times, with some number of seconds of delay between the attempts. After this action, delay, action, delay, action pattern was noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".