A user is visiting suspicious URLs on my website - security

I have an affiliate website, and I monitor which URLs users visit. For the first time, I have noticed a user visiting the following kinds of URLs on my website, which I guess is some kind of hacking attempt. I need help. My website is constantly performing poorly: sometimes it takes longer than normal to open, sometimes tables appear blank, and sometimes cron jobs fail to execute.
Here are a few of the URLs this user has visited repeatedly:
http://www.example.com/product.php?category=study-materials&id=SHOEMHMZH8HPAX4H%22%20or%20(1,2)=(select*from(select%20name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a)%20--%20%22x%22=%22x
http://www.example.com/product.php?category=video-albums&id=SHOEMHMZH8HPAX4H%20or%20(1,2)=(select*from(select%20name_const(CHAR(111,108,111,108,111,115,104,101,114),1),name_const(CHAR(111,108,111,108,111,115,104,101,114),1))a)%20--%20and%201%3D1
There are lots more such URLs. I am totally confused and a bit scared too. What exactly is this, and what is the user trying to do with such URLs? How can I prevent such actions?
Since this morning, the user has been visiting from different IP addresses, and the visited URLs all look like the ones I have mentioned.

It's someone trying to break into the site, probably using a tool of some kind. This happens all the time, to every website, since anyone can freely download tools to attack sites.
The URLs you list contain attempts to send commands to your database, known as SQL injection. If you look in your web server logs, you'll probably see this kind of thing a lot.
As long as your site has been coded securely, doesn't trust user input, and doesn't use vulnerable software (such as out-of-date plugins or unpatched operating systems), it may be nothing to worry about.
I presume you didn't write the software. You could always contact its creator to ask how it was coded and tested, and whether it has been pentested (which is when a professional hacker is paid to try to break into the site).
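The core defense against the injection attempts shown in those URLs is never to build SQL by pasting user input into the query string. A minimal sketch, using Python's stdlib sqlite3 module (the table and values are made up for illustration; the same placeholder idea applies to any database driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id TEXT, category TEXT, name TEXT)")
conn.execute("INSERT INTO products VALUES ('SHOEMHMZH8HPAX4H', 'study-materials', 'Sample')")

def get_product(conn, category, product_id):
    # The ? placeholders make the driver treat the values strictly as data,
    # never as SQL: an injection payload in product_id simply matches no
    # rows instead of executing.
    cur = conn.execute(
        "SELECT name FROM products WHERE category = ? AND id = ?",
        (category, product_id),
    )
    return cur.fetchone()

print(get_product(conn, "study-materials", "SHOEMHMZH8HPAX4H"))  # ('Sample',)
payload = 'SHOEMHMZH8HPAX4H" or (1,2)=(select 1,1) -- "x"="x'
print(get_product(conn, "study-materials", payload))  # None
```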

Related

404 errors that look like strange SQL queries - how to block?

I have an E-commerce site (built on OpenCart 2.0.3.1).
I'm using an SEO pack plugin that keeps a list of 404 errors so we can set up redirects.
As of a couple of weeks ago, I keep seeing a LOT of 404s that don't even look like links:
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39
999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39
...and so on, until it reaches:
999999.9" //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39,0x393133353134353632322e39,0x393133353134353632332e39,0x393133353134353632342e39,0x393133353134353632352e39,0x393133353134353632362e39,0x393133353134353632372e39,0x393133353134353632382e39,0x393133353134353632392e39,0x39313335313435363231302e39,0x3931
This isn't happening just once, but 30-50 times per variation. There are over 1600 lines of this mess in the latest 404 report.
Now, I know how to make redirects for "normal" broken links, but:
a.) I have no clue how to even format a redirect for this.
b.) I'm concerned that this could be a brute-force hacking attempt.
What would StackOverflow do?
TomJones999 -
As mentioned in the comments (sort of), this is a security issue for you. The reason for so many URL requests is that a script is likely rifling through many URLs containing SQL. The script/hacker is either doing reconnaissance to find out whether your site and pages are susceptible to an SQL injection attack, or, since they likely already know which e-commerce platform (AND version) you are using, they may intend to exploit a known vulnerability with this SQL injection attempt and achieve some nefarious result (DB access, data dump, etc.).
A few things I would do:
Make sure your OpenCart is up to date and has all the latest patches applied
If it is up to date, it might be worth raising in the forums or with an OpenCart moderator, in case the attacker is going after a weakness they found that OpenCart has not yet patched.
Immediately, you can try to ban the attacker's IP address, but they are likely to use several different IP addresses and rotate through them. I would suggest looking into ModSecurity or fail2ban ( https://www.fail2ban.org/ ). Fail2ban can be a great addition in these situations because there are several ways for it to dynamically thwart this kind of attack.
The excessive 404 errors in a short time span can be detected by fail2ban, which can then ban the client causing them.
Also, there is a fail2ban filter for detecting attempted SQL injections and banning the offending clients. A quick search turned up such a filter, with a few adjustments/improvements/fixes to the regular expression that detects the SQL injection.
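As a rough illustration of what such a filter's failregex does, here is a simplified pattern of my own (not the actual fail2ban filter) matching the obfuscation style from the 404 log above; a real failregex would also anchor on the log format and capture the client's address with the <HOST> tag:

```python
import re

# Case-insensitive markers of common SQLi probes: UNION ALL SELECT with
# comment/slash obfuscation, SELECT * FROM, and the name_const() trick.
SQLI = re.compile(
    r"(union[\s/*]+all[\s/*]+select|select\s*\*\s*from|name_const\s*\()",
    re.IGNORECASE,
)

requests = [
    "999999.9 //uNiOn//aLl /**/sElEcT 0x393133353134353632312e39",  # probe
    "/product.php?category=video-albums&id=SHOEMHMZH8HPAX4H",       # normal
]
for r in requests:
    print(bool(SQLI.search(r)))  # True, then False
```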
I would not concern yourself at all with how to "format" those error-log entries.
With regard to your code (or the code in OpenCart), what you want to be sure of is that all user-submitted data is sanitized (including data sent to your server as a GET parameter, as in your case).
Also, if you feel uneasy about the attempted hack, it might be worth watching the feed on the haveibeenpwned website. Data from exploits targeting databases very commonly ends up on sites like Pastebin, and haveibeenpwned tries to parse some of that data and identify the breaches, so that you or your users can at least become aware and take appropriate measures.
Best of luck.

"Sandbox" Google Analytics for security

By including Google Analytics in a website (specifically the JavaScript version), aren't you giving Google complete access to all your cookies and site information? (i.e. it could be a security hole)
Can this be mitigated by putting Google in an iFrame that is sandboxed? Or maybe only passing Google the necessary information (ie. browser type, screen resolution, etc)?
How can someone get the most out of Google Analytics without leaving the entire site open?
Or perhaps passing the data through my own server and then uploading it to Google?
You can create a scriptless implementation via the Measurement Protocol (for Universal Analytics enabled properties). This not only avoids any security issues with the script (although I'd rather trust Google on that), it also means you have more control over what data is submitted to the Google server.
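A minimal sketch of such a server-side hit, assuming a Universal Analytics property (the tracking and client ids below are placeholders): you build a small form-encoded payload yourself, so nothing beyond these fields ever reaches Google.

```python
from urllib.parse import urlencode

def pageview_hit(tracking_id, client_id, page_path):
    # Build a Universal Analytics Measurement Protocol "pageview" payload.
    # Only the fields listed here leave your server: no cookies, no DOM
    # access, no third-party script running in the visitor's page.
    return urlencode({
        "v": "1",            # protocol version
        "tid": tracking_id,  # your UA-XXXXXXXX-Y property id
        "cid": client_id,    # an anonymous client id you generate yourself
        "t": "pageview",     # hit type
        "dp": page_path,     # document path
    })

payload = pageview_hit("UA-00000000-1", "555", "/home")
print(payload)  # v=1&tid=UA-00000000-1&cid=555&t=pageview&dp=%2Fhome
# POST this body from your server to https://www.google-analytics.com/collect
```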
A script run on your site can read cookies on your site, yes. And that data can be sent back to Google, yes. That is why you shouldn't store sensitive information in cookies. You shouldn't do this even if you don't use Google Analytics, even if you don't use ANY code except your own: browsers and browser add-ons can also read that stuff, and you definitely cannot control those. Again: never store sensitive information in cookies.
As for access to "site information": JavaScript can read the content of your pages, the URLs of pages, and so on; in other words, anything you serve up on a web page. Anything that is not behind a wall (e.g. a login barrier) is up for grabs, and crawlers will look at that stuff anyway. Stuff behind walls can still be grabbed automatically, depending on what a bot has to do to get past those walls (simple registration/login barriers are pretty easy to get past).
This is also why you should never display sensitive information even in the content of your site: credit card numbers, passwords, etc. That's why virtually every site that handles even remotely sensitive information shows a mask (e.g. ****) instead of the actual values.
Google Analytics does not actively do these things, but you're right: there's nothing stopping them from doing it, and you've already given them the right to do it by using their script.
And you are right: the safest way to control what Google can actually see is to send server-side requests to them, and to put all your content behind barriers that cannot be easily crawled or scraped. The strongest barrier is one that requires paying for access. People are ingenious about making crawlers and bots to get past all sorts of forms and "human" checks, and you're fighting a losing battle on that count, but nothing stops a bot faster than requiring someone to give you money to access your stuff. Of course, this also means you'd have to make everybody pay for access...
Anyway, if you're that worried about this stuff, why use GA at all? Use something you host yourself (e.g. Piwik). This won't solve the crawler/bot problem, obviously, but it will address the worry about GA grabbing more than you want it to.

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the employees of the hosting company or eavesdroppers on the line into account: it is a toy site, not something important, and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way for someone to find this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could show up in the Referer header sent to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string, I would suggest doing usernames and passwords "properly", as HTTP servers have been written not to leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly while it doesn't matter if you get it wrong? Then, if you have a site you need to secure in the future, you'll have already made all your mistakes.
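To make "doing passwords properly" concrete, here is a minimal sketch of salted password storage using only Python's standard library. The iteration count is illustrative, not a recommendation, and in practice you would persist salt, iterations and digest together per user:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # Salted PBKDF2-HMAC-SHA256: a per-user random salt defeats rainbow
    # tables, and the iteration count slows offline brute force.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, digest))  # True
print(verify_password("guess", salt, iters, digest))  # False
```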
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoff's Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login-URL" never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy-site with no personal or really important content, I see this as a fast and decent-working solution regarding security compared to implementing some form of proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links somewhere else, the Referer header is going to disclose your URL.
If you change http to https, then at least the URL will not be visible to anyone sniffing the network.
(The caveat here is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site/target that enables privilege escalation, or on the system you use to log in, if that machine is no longer secure. Some prefer an admin login that looks no different from a standard user login to avoid that.)
You could require that some action be taken a number of times, with some number of seconds of delay between repetitions. Once this action-delay-action-delay-action pattern was noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that would be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".

How can you ensure that a user knows they are on your website?

The talk of internet town today is the SNAFU that led to dozens of Facebook users being led by Google search to an article on ReadWriteWeb about the Facebook-AOL deal. What ensued in the comment thread is quickly becoming the stuff of internet legend.
However, behind the hilarity is a scary fact: this might be how users browse to all sites, including their banking and other more important sites. A quick search for "my bank website login", and they quickly click the first result. Once there, the users were willing to submit their credentials even though the site looked nothing like the one they tried to reach. (This is evidenced by the fact that the users' comments are connected to their Facebook accounts via Facebook Connect.)
Preventing this scenario is pretty much out of our control and educating our users on the basics of internet browsing may be just as impossible. So how then can we ensure that users know they are on the correct web site before trying to log in? Is something like Bank of America's SiteKey sufficient, or is that another cop-out that shifts responsibility back on the user?
The Internet and web browsers used to have a couple of cool features that might actually have some applicability there.
One was something called "domain names." Instead of entering the website name in the box on the right side of your toolbar, there was another, larger text field on the left where you could enter it. Rather than searching a proprietary Google database running on vast farms of Magic 8-Balls, this arcane "address" field consulted an authoritative registry of "domain names", and would lead you to the right site every time. Sadly, it sometimes required you to enter up to 8 extra characters! This burden was too much for most users to shoulder, and this cumbersome feature has been abandoned.
Another thing you used to see in browsers was something called a "bookmark." Etymologists are still trying to determine where the term "bookmark" originated. They suspect it has something to do with paper with funny squiggles on it. Anyway, these bookmarks allowed users to create a button that would take them directly to the web site of interest. Of course, creating a bookmark was a tedious, intimidating process, sometimes requiring as many as two menu clicks—or worse yet, use of the Ctrl-key!
Ah, the wonders of the ancients.
The site could "personalize" itself by showing some personal information, easily recognizable by the user, on every page. There are plenty of ways to implement it. The obvious one: on the first visit, the site requires the user to upload an avatar and adds the user's id to a cookie. After that, every time the user browses the site, the avatar is shown.
When I set up my online bank account, it asked me to choose from a selection of images. The image I chose is now shown to me every time I login. This assures me that I am on the right website.
EDIT: I just read the link about the BoA SiteKey; this is apparently the same thing (from the name, it sounded like a challenge-response dongle).
I suppose the best answer would be a hardware device which required a code from the bank and the user and authenticated both. But any of these things assume that people actually think about the problem, which of course they don't. This was going on before internet banking was common: I had a friend whose wallet was stolen back in the 90s, and the thief phoned her pretending to be her bank and persuaded her to reveal her PIN...
When the user first visits the site and logs in, he can share some personal information (even something very trivial) that imposter sites couldn't possibly know: high-school mascot, first street lived on, etc.
If there's ever any question of site authenticity, the site could share this information back to the user.
Like on TV shows and movies with an evil twin: the good twin always wins trust by sharing a secret that only the real twin could know.
You cannot prevent phishing per-se but you can take several steps each of which do a little bit to mitigate the problem.
1) If you have something like SiteKey or a sign-in seal, please ensure that these cannot be iframed on a malicious website. JavaScript framebusting alone may not be enough, as IE supports security="restricted".
2) Be very consistent about how you ask for user credentials: serve the login form over SSL (not just the post-back). Do not ask for login in several places or on several sites. Encourage third parties who want to work with user data stored on your site to use OAuth (instead of taking your users' passwords).
3) You should never ask for information via email (with or without link).
4) Have a security page where you talk about these issues.
5) Send notification on changes to registered phone, email, etc.
Apart from the above, monitor user account activity: changes to contact information, security Q&A, access patterns, etc. (noting time and IP; there are several subtle techniques).

How do you combat website spoofing/phishing?

What is your suggested solution for the threat of website UI spoofing?
By definition any solution that relies on the site showing you personalised information once you've logged in is ineffective against phishers. If you've attempted to login, they've already succeeded!
FWIW, I don't yet know the real answer, maybe this question will throw up some good ideas. I am however professionally involved in research into phishing, bad domain registrations, etc.
I don't believe there's any significant technical solution that web site developers can implement. Again, by definition, if your users arrive at a phishing site you're no longer in control.
This is why all current anti-phishing technologies reside in the browser, and not in the phished site.
The key to this problem is identifying some difference between a request to the real site and a request to the spoof site.
The simplest difference is a cookie-based UI preference. A cookie set by your (real) site will only ever be returned to your site, and will never be sent to a spoof site.
Now, there are plenty of reasons the valid cookie might not be sent to your site: the user might be on a different computer, or might have expired/deleted cookies. But at least you can guarantee it won't be sent to the spoof site.
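A sketch of the idea (the cookie name and messages are my own; it assumes the preference was stored in a cookie called ui_theme when the user chose it on the real site):

```python
from http.cookies import SimpleCookie

def render_login_banner(cookie_header):
    # The browser only sends this cookie to our own domain, so a spoof
    # site can never reproduce the user's personalized banner.
    cookies = SimpleCookie(cookie_header or "")
    if "ui_theme" in cookies:
        return "Welcome back -- your theme is " + cookies["ui_theme"].value
    # Absence of the cookie is NOT proof of spoofing (new machine,
    # cleared cookies), so fall back to a generic page, not an alarm.
    return "Welcome -- please log in"

print(render_login_banner("ui_theme=green; session=abc"))
print(render_login_banner(None))
```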
I think the only answer here is to program better people.
Doing things like customizing the appearance or uploading an image only works if the user in question actually recognizes when these things are wrong. I think the majority of users would never notice, except on sites they visit a lot. Even if they did, they might attribute it to a website redesign rather than a phish.
One solution is to customize the web site per user. Spoofing only works when users all have basically the same view of the website (one spoof, many victims). So if, for example, eBay let you configure a custom background color, you would be able to notice that the page you're viewing is a spoof (which won't know your choice of color). A real solution is a bit more complex (maybe a secret keyword configured in the browser that only the browser can render inside password controls or in the URL bar), but the idea is the same.
Customize the UI per user so spoofing (which relies on most users expecting to see basically the same UI) stops working. It can be a browser based solution, or something web sites offer to their users (some already do).
I've seen some sites that let you select a "personal" icon. Whenever you log in, that icon is displayed as proof that you are on their site.
You can ask a question when the user logs in (a question that the user has written, along with its answer).
You can display a picture after login that the user has uploaded; if the user doesn't see his picture (a private one that only he could see), then it's not the real website.
