404 errors that look like strange SQL queries - how to block? - security

I have an E-commerce site (built on OpenCart 2.0.3.1).
I'm using an SEO pack plugin that keeps a list of 404 errors so we can set up redirects.
As of a couple of weeks ago, I keep seeing a LOT of 404s that don't even look like links:
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39
...and so on, until it reaches:
999999.9" //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39,0x393133353134353632342e39,0x393133353134353632352e39,0x393133353134353632362e39,0x393133353134353632372e39,0x393133353134353632382e39,0x393133353134353632392e39,0x39313335313435363231302e39,0x3931
And this isn't happening once: each variant appears 30-50 times. There are over 1600 lines of this mess in the latest 404 report.
Now, I know how to make redirects for "normal" broken links, but:
a.) I have no clue how to even format this.
b.) I'm concerned that this could be a brute-hacking attempt.
What would StackOverflow do?

TomJones999 -
As mentioned in the comments (sort of), this is a security issue for you. The reason for so many URL requests is that a script is likely rifling through many URLs containing SQL. The script / hacker is either doing reconnaissance to find out whether your site / pages are susceptible to an SQL injection attack, or, since they likely already know which e-commerce platform (AND VERSION) you are using, intending to exploit a known vulnerability and achieve some nefarious result (DB access, data dump, etc). The shape of the requests gives the method away: adding one more column to the UNION ALL SELECT on each attempt is the standard way to brute-force the column count of the underlying query, and the hex literals are just ASCII markers (0x393133353134353632312e39 decodes to 9135145621.9) that let the tool spot which columns get reflected in the page.
A few things I would do:
Make sure your OpenCart is up to date and has all the latest patches applied
If it is up to date, it might be worth raising it in the forums or with an OpenCart moderator, in case the attacker is going after a weakness he found but that OpenCart has not yet patched.
Immediately, you can try to ban the attacker's IP address, but it is likely that they will use several different IP addresses and rotate through them. I would suggest looking into either ModSecurity or fail2ban ( https://www.fail2ban.org/ ). Fail2ban can be a great security add-on in these situations because there are several ways for it to 'dynamically' thwart this kind of attempt:
The excessive 404 errors in a short time span can be observed by fail2ban, which can then ban the client causing them.
There is also a fail2ban filter approach for detecting attempted SQL injection and banning the offending users; a quick search turns up published filters with various adjustments, improvements and fixes to the regular expression that detects the injection (a sketch follows below).
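Assuming a standard Apache access log in the usual location, such a filter could look like the sketch below; the filter name, log path and regex are illustrative, not a production-ready rule:

# /etc/fail2ban/filter.d/opencart-sqli.conf (illustrative name)
[Definition]
# Match request lines containing UNION ... SELECT in any case- or
# comment-obfuscated form, like the entries in the 404 report above.
failregex = (?i)^<HOST> .* "(GET|POST) [^"]*union[^"]*select
ignoreregex =

# /etc/fail2ban/jail.local
[opencart-sqli]
enabled  = true
filter   = opencart-sqli
logpath  = /var/log/apache2/access.log
port     = http,https
maxretry = 2
bantime  = 86400

With something like this in place, two matching requests from one IP get that IP banned for a day. Test the regex against your real log format with fail2ban-regex before enabling it.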
I would not concern yourself at all with how to "format" those entries; they aren't real links, so there is nothing sensible to redirect them to.
With regards to your code (or the code in OpenCart), what you want to be sure of is that all user-submitted data is sanitized or, better, never treated as SQL text at all (this includes data sent to your server as a GET parameter, as in your case).
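The standard fix is parameterized queries instead of string concatenation. OpenCart itself is PHP, but the principle is the same in any stack; here is a minimal sketch in Node.js with the mysql2 package (table, column and credential names are made up for illustration):

const mysql = require('mysql2/promise');

async function getProduct(productId) {
    const conn = await mysql.createConnection({
        host: 'localhost',
        user: 'shop',       // illustrative credentials
        database: 'shop'
    });

    // BAD: attacker input is spliced into the SQL text, so a value like
    // "1 UNION ALL SELECT ..." becomes part of the query itself:
    // const [rows] = await conn.query(
    //     'SELECT * FROM product WHERE product_id = ' + productId);

    // GOOD: the value travels as a bound parameter and is never parsed as SQL.
    const [rows] = await conn.execute(
        'SELECT * FROM product WHERE product_id = ?', [productId]);

    await conn.end();
    return rows;
}

Whatever language you use, the query text should be a constant and every user-supplied value should be passed as a parameter.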
Also, if you feel uneasy about the attempted hack, it might be worth watching the feed on the haveibeenpwned website. Data from exploits targeting databases very commonly ends up on sites like pastebin, and haveibeenpwned tries to parse that data and identify the hacks, so that you or your users can at least become aware and take appropriate measures.
Best of luck.

Related

One of our users visited different URLs in my website

I have an affiliate website, and I monitor which URLs users visit. For the first time, I have noticed a user visiting the following URLs on my website, which I guess is some kind of hacking attempt. I need help. My website is constantly performing poorly: sometimes pages take longer than normal to open, sometimes tables appear blank, and sometimes cron jobs fail to execute.
Following are the few URLs visited by a user repetitively:
http://www.example.com/product.php?category=study-materials&id=SHOEMHMZH8HPAX4H%22%20or%20(1,2)=(select*from(select%20name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a)%20--%20%22x%22=%22x
http://www.example.com/product.php?category=video-albums&id=SHOEMHMZH8HPAX4H%20or%20(1,2)=(select*from(select%20name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a)%20--%20and%201%3D1
There are lots more URLs like these. I am totally confused and a bit scared too. What exactly is this, and what is the user trying to do with such URLs? How can I prevent such actions?
Since this morning the user has been visiting from different IP addresses, and the visited URLs all look like the ones above.
It's someone trying to break into the site, probably by using a tool of some kind. This happens all the time to every website, since anyone can go off and download tools freely to attack sites.
The URLs you list contain attempts to send commands to your database, called SQL injection. This particular payload is an error-based probe: if the id parameter reaches MySQL unsanitized, the (1,2)=(select * from (select name_const(...),name_const(...))a) construct forces a 'Duplicate column name' error, which tells the attacker's tool that the parameter is injectable. If you look in your web server logs, you'll probably see this kind of thing a lot.
As long as your site has been coded securely (it doesn't trust user input and doesn't use vulnerable software such as out-of-date plugins or unpatched operating systems), it may be nothing to worry about.
I presume you didn't write the software yourself. You could always contact its creator to ask how it was coded and tested, and whether it has been pentested (which is when a professional hacker is paid to try to break into the site).

Creating a honeypot for nodejs / hapi.js

I have a hapi.js application, and while checking some logs I have found entries from automated site scanners, including hits to /admin.php and similar paths.
I found the article How to Block Automated Scanners from Scanning your Site and thought the approach was great.
I am looking for guidance on what the best strategy would be to create honey pots for a hapijs / nodejs app to identify suspicious requests, log them, and possibly ban the IPs temporarily.
Do you have any general or specific (to node and hapi) recommendations on how to implement this?
My thoughts include:
Create the honeypot route with a non-obvious name
Add a robots.txt that disallows search engines on that route (see the sketch after this list)
Create the content of the route (see the article and discussions for some of the recommendations)
Write to a special log or tag the log entries for easy tracking and later analysis
Possibly add some logic so that once an IP exceeds a certain threshold (say, five hits on the honeypot route), it is banned for X hours or permanently
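For the robots.txt item above, the entry is just a Disallow for the honeypot path (the path name is a made-up example); well-behaved crawlers will stay away, while many scanners deliberately fetch exactly the paths a site asks them not to:

# robots.txt
User-agent: *
Disallow: /xj9-private-report/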
A few questions I have:
How can you ban an IP address using hapi.js?
Are there any other recommendations to identify automated scanners?
Do you have specific suggestions for implementing a honeypot?
Thanks!
Let me start by saying that this idea sounds really cool, but I'm not sure how practical it is.
First, the chance of blocking legitimate bots/users is small, but it still exists.
Even if you ignore honest mistakes, the potential for abuse and denial of service is quite big. Once I know you're banning anyone who touches this route, I can trick legitimate users into requesting it (with an iframe / img / redirect) and get them banned from your site.
Then there's its effectiveness. Sure, you're going to stop automated bots that scan your site (checking the Disallow entries is one of the first things they do, and one of the first things you do in a pentest too), but only unsophisticated attacks will be blocked, because anyone actively targeting you will blacklist the endpoint and switch to a different IP.
So I'm not saying you shouldn't do it, but I am saying you should think about whether the pros outweigh the cons here.
Actually getting it done is quite simple, and it sounds like you're after a very specific case of rate limiting. I wouldn't do it directly in your hapi app, since you want the ban to be shared between instances and persistent across restarts (you can do it from your app, but that's too much logic for something that is already solved).
The article you mentioned suggests using fail2ban, which is a great solution for this kind of rate limiting. You'll need to make sure your app logs to a file fail2ban can read, and write a filter and jail conf specifically for your app, but it should work with hapi with no issues.
Specifically for hapi, I maintain an npm module for rate limiting called ralphi. It has a hapi plugin, but unless you need proper rate limiting (which you should have for logins, sessions and other tokens), fail2ban might be the better option in this case.
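To make that concrete, here is a minimal sketch of a honeypot route in hapi (v17+ handler signature) that logs each hit with the client IP in a stable format a fail2ban failregex can match; the route path and log location are assumptions for the sketch:

'use strict';

const Fs = require('fs');
const Hapi = require('@hapi/hapi');

const init = async () => {
    const server = Hapi.server({ port: 3000 });

    server.route({
        method: 'GET',
        path: '/xj9-private-report/{any*}',   // the path you disallowed in robots.txt
        handler: (request, h) => {
            // One line per hit; fail2ban tails this file and bans repeat offenders
            const line = `${new Date().toISOString()} HONEYPOT ${request.info.remoteAddress} ${request.path}\n`;
            Fs.appendFile('/var/log/myapp/honeypot.log', line, () => {});

            // Answer like any other missing page so the scanner learns nothing
            return h.response('Not Found').code(404);
        }
    });

    await server.start();
};

init();

The corresponding fail2ban filter then only needs a failregex matching the "HONEYPOT <HOST>" part of those lines.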
In general, honeypots are not hard to implement, but as with any security-related solution, you should consider who your potential attacker is and what you are trying to protect.
Also, honeypots are mostly used to notify you of an existing or imminent breach. They can be wired up to trigger a lockdown, but their main value is visibility: you learn about a breach before the attacker has had much time to abuse the system, rather than discovering it two months later when your site has been defaced and all the valuable data is already gone.
A few ideas for honeypots:
Have an 'admin' user with a deliberately average password (a random 8 characters) but no privileges at all; when this user successfully logs in, notify the real admin (see the sketch after this list).
Notice that you're not locking the attacker out on their first login attempt, even though you know they're up to something (they would just get a different IP and use another account). But if they actually managed to log in, maybe there's an error in your login logic? Maybe password reset is broken? Maybe rate limiting isn't working? There is much more information to follow up on.
And now that you know you have a semi-competent attacker, try to see what they're after; maybe you'll learn who they are or what their end goal is (highly valuable, since they will probably try again).
Find sensitive places you don't want users poking around in and plant some canary tokens there. A token can be a file that sits among all the other uploads on the system, AWS credentials on your dev machine, or a link in your admin panel that says "technical documentation". The idea is that regular users should not care about or have access to these things, but attackers will find them too tempting to ignore. The moment one is touched, you know that area has been compromised and you need to start blocking and investigating.
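A sketch of the decoy-admin idea from the first bullet; the isDecoy flag and the alert channel are hypothetical stand-ins for whatever your login flow and ops tooling look like:

// Stub alert channel; in practice this would send mail / Slack / a pager call
async function notifyRealAdmin(details) {
    console.error('DECOY ADMIN LOGIN', details);
}

// Hypothetical hook, called from your existing login flow after a
// successful password check but before a session is issued.
async function onLoginSuccess(user, request) {
    if (user.isDecoy) {
        // The decoy account has no privileges, so nothing is lost yet,
        // but a correct password means something upstream already failed.
        await notifyRealAdmin({
            user: user.name,
            ip: request.info.remoteAddress,
            time: new Date().toISOString()
        });
    }
}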
Just remember, before implementing any security measure, to think about who you expect to attack you. Honeypots are probably one of the last measures you should consider, and there are far more common and basic security issues to address first (there are endless lists of Node.js security best practices, plus the OWASP Top 10, the de facto standard for general web application security).

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the employees of the hosting company or eavesdroppers on the line into account: it is a toy site, not something important, and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there any way someone could find this URL? It wouldn't be linked anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string anyway, I would suggest doing usernames and passwords "properly": HTTP servers have been written not to leak password information, and the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly while it doesn't matter if you get it wrong? That way, if you ever have a site you really need to secure, you'll have already made all your mistakes.
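On typical shared hosting (Apache), the minimal "proper" version of this is HTTP Basic authentication over HTTPS instead of a secret URL; a sketch, assuming the host honors .htaccess files (paths are illustrative):

# .htaccess in the admin directory
# (create the password file once with: htpasswd -c /home/yoursite/.htpasswd youruser)
AuthType Basic
AuthName "Admin area"
AuthUserFile /home/yoursite/.htpasswd
Require valid-user

It is hardly more work than inventing a gibberish URL, and the secret now travels in a header rather than in a URL that gets logged and leaked everywhere.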
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignored Kerckhoffs's principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in all the major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decently working stopgap compared to implementing a proper login/authorization system.
If the site gains a large number of users and lots of content, or simply becomes more than a toy site, I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that whenever it loads external images or links elsewhere, the Referer header is going to publicize your URL.
If you change http to https, then at least the URL (beyond the hostname) will not be visible to anyone sniffing the network.
(The caveat is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site/target itself that enables privilege elevation, or on the machine you log in from if that machine is no longer secure. Some prefer an admin login that looks no different from a standard user login to avoid exactly that.)
You could require that some action be taken a number of times, with some number of seconds of delay between repetitions. Once this action, delay, action, delay, action pattern is noticed, the admin interface becomes available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after the pattern completes. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a way that didn't stand out in the logs, that would be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".

Is it possible for a 3rd party to reliably discern your CMS?

I don't know much about poking at servers, etc, but in light of the (relatively) recent Wordpress security issues, I'm wondering if it's possible to obscure which CMS you might be using to the outside world.
Obviously you can rename the default login page, the error messages and the favicon (I see the Joomla one everywhere) and use a non-default template, but the sorts of things I'm wondering about are subtler signals, like watching how redirects behave. Do most CMSs leave traces?
This is not to replace other forms of security, but more of a curious question.
Thanks for any insight!
Yes, many CMSs leave traces, such as the format of identifiers and the hierarchy of page elements, which are a plain giveaway.
That is not the point, however. The point is that there are only a few very popular CMSs, so it is not necessary to determine which one you use: methodically trying the attack techniques for the 5 to 10 biggest CMSs against your site already gives a pretty good probability of success.
In the general case, security by obscurity doesn't work. If you rely on the fact that someone doesn't know something, this means you're vulnerable to certain attacks since you blind yourself to them.
Therefore, it is dangerous to follow this path. Choose a successful CMS and then install all available security patches right away. By using a popular CMS, you make sure you'll get security fixes quickly. Your biggest enemy is time: attackers can find thousands of vulnerable sites with Google and attack them simultaneously using botnets. This is a completely automated process today. Trying to hide what software you're using won't stop the bots from hacking your site, since they don't check which vulnerability to expect; they just try the top 10 currently most successful exploits.
[EDIT] Botnets with 10,000 bots are not uncommon today. As long as installing security patches remains this hard, people won't protect their computers, and that means criminals will have lots of resources to attack with. On top of that, there are sites which sell exploits as ready-to-use plugins for bots (or rent out whole botnets).
So as long as the vulnerability is still there, camouflaging your site won't help.
A lot of CMSs have ids, class names and structural patterns that identify them (WordPress, for example). URLs have specific patterns too. It just takes someone experienced with the platform, or even a bit of browsing, to identify which CMS a site is using.
IMHO, you can try to change all of this structure in your CMS, but if you're willing to put in that much effort, you might as well write your own CMS.
It's more important to keep everything on your platform up to date and follow basic security measures than to try to change everything that could reveal which CMS you're using.
Since this question is tagged "wordpress:" you can hide your wordpress version by putting this in your theme's functions.php file:
add_action('init', 'removeWPVersionInfo');

function removeWPVersionInfo() {
    // Stop WordPress from printing its version in a <meta name="generator"> tag
    remove_action('wp_head', 'wp_generator');
}
But you're still going to have the usual paths, i.e., wp-content/themes/ etc... and wp-content/plugins/ etc... in the page source, unless you figure out a way to rewrite those with .htaccess (a sketch of the idea, and its limits, follows below).
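People sometimes attempt that with mod_rewrite, along these lines (a sketch; note that this only maps incoming requests, so the generated HTML would still contain the real wp-content paths unless you also filter the output):

# .htaccess (illustrative)
RewriteEngine On
# Serve /static/... from the real theme directory
RewriteRule ^static/(.*)$ wp-content/themes/mytheme/$1 [L]

Given that limitation, most people settle for keeping WordPress patched rather than chasing every path in the page source.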

Is it good practice to hide web server information in HTTP headers?

This question is more security related than programming related, sorry if it shouldn't be here.
I'm currently developing a web application, and I'm curious why most websites don't mind displaying their exact server configuration in HTTP headers, like the versions of Apache and PHP, complete with "mod_perl, mod_python, ..." module listings and so on.
From a security point of view, I'd prefer that it would be impossible to find out if I'm running PHP on Apache, ASP.NET on IIS or even Rails on Lighttpd.
Obviously "obscurity is not security" but should I be worried at all that visitors know what version of Apache and PHP my server is running ? Is it good practice or totally unnecessary to hide this information ?
Prevailing wisdom is to remove the server ID and the version; better yet, change them to another legitimate server ID and version - that way the attacker goes off trying IIS vulnerabilities against Apache or something like that. Might as well mislead the attacker.
But honestly, there are so many other clues to go by, I wonder about whether this is worth it. I suppose it could stop attackers using a search engine to find servers with known vulnerabilities.
(Personally, I don't bother on my HTTP server, but it's written in Java and much less vulnerable to the typical kinds of attack.)
I think you usually see those headers because the systems send them by default.
I routinely remove them as they provide no real value and could, as you suggested reveal information about the server.
Hiding the information in the headers usually just slows down the lazy and ignorant villains. There are many ways to fingerprint a system.
Running nmap -O -sV against an IP will give you the OS and service versions with a fairly high degree of accuracy. The only extra info you're giving away by having your server advertise that information is which modules you have loaded.
It seems that some of the answers are missing an obvious advantage of turning off the headers.
Yes, you are all right: turning off the headers (and the status line shown, e.g., in directory listings) does not stop an attacker from finding out what software you use.
However, turning this information off prevents malware that uses Google to look for vulnerable systems from finding you.
tl;dr: Don't use it as a (let alone THE) security measure, but as a measure to drive away unwanted traffic.
I normally turn off Apache's long header version information with ServerTokens; it adds nothing useful.
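For reference, the relevant Apache directives (in httpd.conf or apache2.conf); ServerSignature additionally removes the version footer from error pages and directory listings mentioned above:

# Send just "Server: Apache" instead of the full version and module list
ServerTokens Prod
# Drop the version footer from error pages and directory listings
ServerSignature Off

PHP's X-Powered-By header is separate; it can be disabled with expose_php = Off in php.ini.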
One point which nobody has picked up on: it looks like better security to a prospective client, a pen-testing company, etc., if your web server gives out less information.
So giving out less information boosts perceived security (i.e. it shows you have actually thought about it and done something).
