I would like to use eWay (http://eway.com.au) as a payment gateway, but the problem is that it doesn't allow much customization of its hosted page. I would like to display the products the client would be paying for, but that isn't possible, so I thought I might just whack the hosted page into an iframe. Then again, I'm expecting security issues with it, although I couldn't pinpoint what exactly the problem would be. I would be grateful if someone could give me a better idea of whether it would open any security holes.
The problem with embedding an iframe from another website that is meant to be secure is that users have no easy way to check that the framed site is the one they really want to talk to: your website could quite easily fake that iframe's content without them noticing (you could be the man in the middle, or someone between you and them could be, if you're not using HTTPS on your own site).
If the iframe points to an HTTPS site (most likely the case for payments), users won't be able to check its padlock or blue/green address bar.
It's possible to look into the source of the page to check the URI, but very few users know how to do this, and even fewer will go that far.
(Note that, even if it's not a good idea, some big websites do this sort of thing anyway.)
By including Google Analytics in a website (specifically the JavaScript version), isn't it true that you are giving Google complete access to all your cookies and site information? (I.e., couldn't it be a security hole?)
Can this be mitigated by putting Google in a sandboxed iframe? Or maybe by only passing Google the necessary information (e.g. browser type, screen resolution, etc.)?
How can someone get the most out of Google Analytics without leaving the entire site open?
Or perhaps passing the data through my own server and then uploading it to Google?
You can create a scriptless implementation via the Measurement Protocol (for Universal Analytics enabled properties). This not only avoids any security issues with the script (although I'd rather trust Google on that); it also means you have more control over what data is submitted to Google's servers.
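For illustration, a minimal server-side hit might look like the sketch below. This is only a sketch: the UA-XXXXX-Y property ID, host, and path are placeholders, and the parameter names follow Google's Measurement Protocol reference for Universal Analytics.

```python
# Sketch: a server-side "pageview" hit via the Universal Analytics
# Measurement Protocol. Property ID, host, and path are placeholders.
import uuid
from urllib.parse import urlencode
from urllib.request import urlopen, Request

def send_pageview(tracking_id, client_id, host, path):
    payload = urlencode({
        "v": "1",            # protocol version
        "tid": tracking_id,  # your UA-XXXXX-Y property ID
        "cid": client_id,    # anonymous client ID (you decide what this is)
        "t": "pageview",     # hit type
        "dh": host,          # document host
        "dp": path,          # document path
    }).encode()
    # POST the hit; since the server builds the payload, only these
    # fields ever reach Google.
    urlopen(Request("https://www.google-analytics.com/collect", data=payload))

send_pageview("UA-XXXXX-Y", str(uuid.uuid4()), "example.com", "/pricing")
```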
A script run on your site can read cookies on your site, yes. And that data can be sent back to Google, yes. That is why you shouldn't store sensitive information in cookies. You shouldn't do this even if you don't use Google Analytics, even if you don't use ANY code except your own. Browsers and browser add-ons can also read that stuff, and you definitely cannot control that. Again: never store sensitive information in cookies.
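On the narrower cookie point: a cookie that exists only for the server's benefit can at least be marked HttpOnly (and Secure) so that page scripts, analytics included, cannot read it. A minimal sketch with Python's standard library (the cookie name and value are illustrative):

```python
# Sketch: emitting a session cookie that page JavaScript cannot read.
# HttpOnly keeps it out of document.cookie; Secure keeps it off plain HTTP.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-token"   # never the sensitive value itself
cookie["session_id"]["httponly"] = True
cookie["session_id"]["secure"] = True
cookie["session_id"]["path"] = "/"

# -> Set-Cookie: session_id=opaque-random-token; HttpOnly; Path=/; Secure
print(cookie.output())
```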
As far as access to "site information" goes: JavaScript can be used to read the content of your pages, know the URLs of pages, etc. In other words, anything you serve up on a web page. Anything that is not behind a wall (e.g. a login barrier) is surely up for grabs, but crawlers will look at that stuff anyway. Stuff behind walls can still be grabbed automatically, depending on what a bot actually has to do to get past those walls (simple registration/login barriers are pretty easy to get past).
This is also why you should never display sensitive information even in the content of your site: credit card numbers, passwords, etc. That's why virtually every site that holds even remotely sensitive information shows a mask (e.g. "****") instead of the actual values.
Google Analytics does not actively do these things, but you're right: there's nothing stopping them from doing it, and you've already given them the right to do it by using their script.
And you are right: the safest way to control what Google can actually see is to send server-side requests to them, and also to put all your content behind barriers that cannot easily be crawled or scraped. The strongest barrier is one that involves having to pay for access. People are ingenious about making crawlers and bots that get past all sorts of forms and "human" checks, and you're fighting a losing battle on that count, but nothing stops a bot faster than requiring someone to give you money to access your stuff. Of course, this also means you'd have to make everybody pay for access...
Anyway, if you're that paranoid about this stuff, why use GA at all? Use something you host yourself (e.g. Piwik). This won't solve the crawler/bot problem, obviously, but it will solve the worry about GA grabbing more than you want it to.
In another question dealing with a bug in BlackBerry 10 that denies cross-origin XHR calls, it is proposed to get around the issue by disabling web security.
But what does disabling web security really imply here? Am I going to torture small harmless woodland creatures if I use this?
Seriously though, does doing this expose my app to additional security risks beyond those introduced by adding the popular wildcard access uri="*" or access origin="*" line in my config.xml for BlackBerry 10?
Please advise.
But what does disabling web security really imply here? Am I going to torture small harmless woodland creatures if I use this?
No.
It means your application could access ANY resource on the Internet (good, bad, or ugly) IF (and only if) the user is able to navigate to or otherwise access that resource.
By disabling web security, the following scenario could happen:
If you published a link in your app to a remote page that you do not control, you risk that the page may display unexpected/malicious/inappropriate content OR enable the user to navigate elsewhere to another page that might. Example: say you are displaying content in your app loaded directly from some remote URL. Do you know exactly what type of content your users might 'see' in your app? If that remote URL were loading 'buy these pills now to get huge' advertisements from a different URL, would you be okay with YOUR users seeing that content in YOUR app?
Most devs will only include content in their app that they 'trust' and will whitelist just the specific URLs they need, as in the sketch below. However, sometimes you do need to unlock the front door if you don't know which URLs your users want to access.
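As a rough sketch of what that whitelist looks like in a WebWorks config.xml (the domains here are placeholders):

```xml
<!-- Sketch: whitelist only the origins the app actually needs,
     instead of the catch-all access uri="*" wildcard. -->
<access uri="https://api.example.com" subdomains="false" />
<access uri="https://cdn.example.com" subdomains="true" />
```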
So disabling web security is available if you really need it, but not recommended. Use it at your own risk, not as a matter of convenience.
I have a web application with one input where users can post links to their websites.
However, I want to protect users as much as I can.
Is there any free service that can check the links for you?
What I'm looking for is to pass each link through a security-check site: if the link is OK, the user is automatically passed through to the destination; if not, the check stops and lets the user know the site is not safe.
links(my site) -> security check website -> destination
The Web of Trust API allows you to check the user-provided "reputation" of a website.
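As a rough sketch of how such a lookup could work (the endpoint and response layout follow WOT's public API 0.4 documentation as I remember it, and the API key and threshold are placeholders, so treat the details as assumptions to verify against their docs):

```python
# Rough sketch of a WOT reputation lookup; the API key is a placeholder.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def wot_trustworthiness(host, api_key):
    query = urlencode({"hosts": host + "/", "key": api_key})
    with urlopen("http://api.mywot.com/0.4/public_link_json2?" + query) as resp:
        data = json.loads(resp.read().decode())
    # Component "0" is trustworthiness: [reputation 0-100, confidence 0-100]
    return data.get(host, {}).get("0")

rep = wot_trustworthiness("example.com", "YOUR_API_KEY")
if rep and rep[0] >= 60:   # the threshold is this sketch's choice, not WOT's
    print("looks OK, pass the user through to the destination")
else:
    print("stop and warn the user")
```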
WOT is a good suggestion. However, keep in mind that the crowdsourced nature of this project can be an issue, security-wise.
For example, using a few fake accounts, one can boost the trust of a malicious domain. This is especially true for "under the radar" domains, which would otherwise generate no organic ratings.
Admittedly, this would probably be resolved over time (as users become exposed to the threat), but the process could be repeated again and again...
I would suggest using WOT as a cross-reference, combining it with other factors (e.g. Alexa rank) just to filter out small domains, which are not likely to be rated except by their owners.
The best way is to manage your own list...
For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the employees of the hosting company or eavesdroppers on the line into account, because it's a toy site, not something important, and the hosting company doesn't give me secure FTP anyway; I'm only concerned about normal visitors.
Is there a way for someone to find this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line will see it in the clear.
It could turn up as a Referer sent to another site.
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string I would suggest doing usernames and passwords "properly" as HTTP servers will have been written to not leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly, when it won't matter if you get it wrong? Then, if you do have a site you need to secure in the future, you'll already have made all your mistakes.
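In that spirit, here is a minimal sketch of storing and checking a password properly with nothing but Python's standard library; the iteration count and names are illustrative, not a recommendation:

```python
# Minimal sketch of salted, deliberately slow password hashing.
import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                 # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                           # store both, never the password

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("guess", salt, stored)
```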
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoffs's Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in all the major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
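If you go ahead with the secret URL anyway, modern browsers honour a Referrer-Policy response header that suppresses exactly this leak. A minimal sketch with Python's standard library (this mitigates only the Referer leak, none of the other leaks listed above):

```python
# Sketch: serve the secret page with a Referrer-Policy header so modern
# browsers omit the Referer when the page links out or loads remote resources.
from http.server import BaseHTTPRequestHandler, HTTPServer

class SecretPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Nothing to see here.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Referrer-Policy", "no-referrer")  # suppress Referer
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), SecretPageHandler).serve_forever()
```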
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decently working security solution compared to implementing some form of proper login/authorization system.
If the site gains a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links to somewhere else, the Referer header is going to publicize your URL.
If you change http into https, then at least the URL will not be visible to anyone sniffing on the network.
(The caveat here is that a very obscure login system can itself leave interesting traces: in network captures (MITM), on the site itself as a target for privilege escalation, or on the system you use to log in, if that system is no longer secure. Some prefer an admin login that looks no different from a standard user login precisely to avoid that.)
You could require that some action be taken a number of times, with some number of seconds of delay between the times. After this action, delay, action, delay, action pattern was noticed, the admin interface would become available for login, and the URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".
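That said, the single-use URL piece on its own is simple enough. A sketch of how it might be done (the storage and parameter name are illustrative):

```python
# Sketch: minting single-use admin URLs. Each token is random, stored
# server-side, and invalidated the first time it is presented.
import secrets

valid_tokens = set()

def mint_admin_url(base="https://sitename.com/admin"):
    token = secrets.token_urlsafe(32)   # ~256 bits of randomness
    valid_tokens.add(token)
    return base + "?t=" + token

def redeem(token):
    try:
        valid_tokens.remove(token)      # single use: gone after first success
        return True
    except KeyError:
        return False

url = mint_admin_url()
token = url.split("?t=")[1]
assert redeem(token) and not redeem(token)
```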
What is your suggested solution for the threat of website UI spoofing?
By definition, any solution that relies on the site showing you personalised information once you've logged in is ineffective against phishers: if you've attempted to log in, they've already succeeded!
FWIW, I don't yet know the real answer, maybe this question will throw up some good ideas. I am however professionally involved in research into phishing, bad domain registrations, etc.
I don't believe there's any significant technical solution that web site developers can implement. Again, by definition, if your users arrive at a phishing site you're no longer in control.
This is why all current anti-phishing technologies reside in the browser, and not in the phished site.
The key to this problem is identifying some difference between a request to the real site and a request to the spoof site.
The simplest difference is some cookie-based UI preference. A cookie set on your (real) site will only ever be returned to your site, and will never be sent to a spoof site.
Now, there are plenty of reasons why the valid cookie might not be sent to your site (the user might be using a different computer, or the cookie might have expired or been deleted), but at least you can guarantee that it will never be sent to the spoof site.
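As a sketch of the idea (the cookie name, colour values, and rendering are all illustrative):

```python
# Sketch: a cookie-based UI preference. The cookie is set by the real site,
# so a spoof site on another domain never receives it and cannot reproduce
# the personalised banner colour.
from http.cookies import SimpleCookie

def render_banner(cookie_header):
    cookies = SimpleCookie(cookie_header or "")
    morsel = cookies.get("ui_colour")
    colour = morsel.value if morsel else "default-grey"
    return f'<div style="background:{colour}">Welcome back</div>'

# First visit: the real site stores the preference the user chose.
set_cookie = SimpleCookie()
set_cookie["ui_colour"] = "teal"
set_cookie["ui_colour"]["max-age"] = 60 * 60 * 24 * 365  # roughly a year

# A later request to the real site carries the cookie; a spoof site gets none.
print(render_banner("ui_colour=teal"))   # personalised banner
print(render_banner(""))                 # what a phished user would see
```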
I think the only answer here is to program better people.
Doing things like customizing the appearance or uploading an image only works if the user in question actually recognizes when these things are wrong. I think the majority of users would never notice, except on sites they visit a lot, and even if they did, they might attribute it to a change in website design and not a phish.
One solution is to customize the web site per user. Spoofing only works when users all have basically the same view of the website (one spoof, many victims). So if, for example, eBay let you configure a custom background color, you should be able to notice that the page you're viewing is a spoof (which won't know your choice of color). A real solution is a bit more complex (maybe a secret keyword configured in the browser that only the browser can render within password controls or in the URL bar, etc.), but the idea is the same.
Customize the UI per user so spoofing (which relies on most users expecting to see basically the same UI) stops working. It can be a browser based solution, or something web sites offer to their users (some already do).
I've seen some sites that let you select a "personal" icon. Whenever you log in, that icon is displayed as proof that you are on their site.
You can ask a question when the user logs in (a question that the user has written, along with its answer).
You can display a picture after login that the user has uploaded; if the user doesn't see their picture (a private one that only they could see), then it's not the real website.