Secure iframe in insecure document?

I'm building a website for a sports center. Registrations are handled through a third-party software program. There are options to register directly through the third party's site or to integrate the registration form into my site with iframes.
Since I'd rather not send people to another site, I went with the iframes option. My question is, can I be sure that people will be getting the same level of security in the iframe as they would on the completely-secure third-party page?
Thank you.

This design does make you more prone to SSLStrip. I recommend watching the video of Moxie Marlinspike's talk, although in practice such an attack isn't common.
This iframe would not be a violation of OWASP A9: Insufficient Transport Layer Protection. However, if you are planning on letting people log in to the HTTP site, or if you are transmitting a session ID over HTTP, then this would be a clear violation of OWASP A9.
In short, HTTPS is absolutely necessary to protect your users.
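The practical mitigation against SSLStrip is to serve the page that embeds the iframe over HTTPS as well, and to send an HSTS header so browsers refuse to downgrade. Below is a minimal sketch using Express; the host names, routes, and the registration-provider URL are placeholders, not details from the question.

```typescript
import express from "express";

const app = express();

// Redirect plain-HTTP requests to HTTPS and send HSTS on secure responses.
// Sketch only: behind a proxy/load balancer you would also need
// app.set("trust proxy", true) so req.secure reflects the original scheme.
app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  // Tell browsers to use HTTPS for the next year, including subdomains.
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  next();
});

app.get("/register", (_req, res) => {
  // The registration iframe should point at the provider's HTTPS URL only.
  res.send('<iframe src="https://registration-provider.example/form"></iframe>');
});

app.listen(3000);
```

The reason the parent page matters: if it is served over plain HTTP, a network attacker can rewrite the iframe's `src` before the browser ever sees it, which is exactly the downgrade an all-HTTPS parent page plus HSTS prevents.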

Related

How to prevent user from modifying REST request?

This question might sound trivial, but even after reading a number of tutorials, I still don't get how the REST security should be implemented.
I have a webpage and a soon-to-be-ready mobile app. Both of them will be using the REST API (written in node.js), and the question is - how can I prevent users from modifying those requests? It's very easy to see the network traffic in the browser, and all the GET/POST requests that are made to the server. It also seems very easy to copy such a request, modify its parameters and/or payload and send it to the server.
How do I make sure that it's my webpage or app that made the request, and not someone else?
Sisyphus is absolutely correct: your focus should be on securing the channel (TLS, SSH, etc) and authentication (e.g. OAuth2).
You should absolutely familiarize yourself with the Open Web Application Security Project (OWASP). In particular, start with:
OWASP Top 10 Cheat Sheet
OWASP REST Security Cheat Sheet
Here is an excellent "hands on" tutorial that gives you a great overview of all the different pieces you need to worry about:
Authenticate a Node.js API with JSON Web Tokens
Once you've gone through the tutorial and scanned the OWASP cheat sheets, you'll have a much better idea of what kinds of things you need to worry about, what options/technologies are available to mitigate those risks, and what might work best for your particular scenario.
Good luck!
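For a concrete picture of what that tutorial covers, here is a minimal, hedged sketch of issuing and verifying a JWT in an Express API using the widely used jsonwebtoken package. The route names, secret handling, and credential check are placeholders for illustration, not a production-ready implementation.

```typescript
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.json());

const SECRET = process.env.JWT_SECRET ?? "change-me"; // keep real secrets out of code

// Issue a token after checking credentials (the user lookup is a placeholder).
app.post("/login", (req, res) => {
  const { username, password } = req.body;
  if (!checkCredentials(username, password)) {
    return res.status(401).json({ error: "invalid credentials" });
  }
  const token = jwt.sign({ sub: username }, SECRET, { expiresIn: "1h" });
  res.json({ token });
});

// Verify the token on protected routes; tampering with the payload breaks the signature.
app.get("/profile", (req, res) => {
  const header = req.headers.authorization ?? "";
  const token = header.replace(/^Bearer /, "");
  try {
    const claims = jwt.verify(token, SECRET); // throws if signature or expiry is invalid
    res.json({ claims });
  } catch {
    res.status(401).json({ error: "missing or invalid token" });
  }
});

// Placeholder: replace with a real credential check against your user store.
function checkCredentials(username: string, password: string): boolean {
  return Boolean(username && password);
}

app.listen(3000);
```

The point relevant to the question: the server never trusts anything in the request beyond claims it can verify, so a user editing parameters in the browser cannot impersonate someone else; they can still send any request their own token authorizes, which is why authorization checks on the server remain necessary.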
Typically, security these days uses a combination of Transport Layer Security and OAuth2. OAuth2 provides authentication and authorisation, ensuring appropriate access to resources, while TLS secures data over the network and prevents the kind of replay attacks you're concerned about. Neither is really specific to RESTful APIs, and you can find both being used in non-REST contexts as well.

What are the implications of disabling web security in a BlackBerry 10 app?

In another question dealing with a bug in BlackBerry 10 that denies cross-origin XHR calls, it is proposed to get around the issue by disabling web security.
But what does disabling web security really imply here? Am I going to torture small harmless woodland creatures if I use this?
Seriously though, does doing this expose my app to additional security risks beyond those introduced when adding the popular wildcard access uri="*" or access origin="*" line in my config.xml for BlackBerry 10?
Please advise.
But what does disabling web security really imply here? Am I going to torture small harmless woodland creatures if I use this?
No.
It means your application could access ANY resource on the Internet, good, bad or ugly, IF (and only if) the user is able to navigate to or access that resource.
By disabling web security, the following scenario could happen:
If you published a link in your app to a remote page that you do not control, you risk that the page may display unexpected/malicious/inappropriate content OR enable the user to navigate elsewhere to another page that might. Example: say you are displaying content in your app loaded directly from some remote URL. Do you know exactly what type of content your users might 'see' in your app? If that remote URL was loading 'buy these pills now to get huge' advertisements from a different URL, would you be okay with YOUR users seeing that content in YOUR app?
Most devs will only include content in their app that they 'trust' and whitelist just the specific URLs they need. However, sometimes you do need to unlock the front door if you don't know which URLs your users want to access.
So disabling web security is available if you really need it, but not recommended. Use it at your own risk, not as a matter of convenience.
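For contrast with disabling web security entirely, a whitelist in config.xml looks roughly like the following. The element and attribute names follow the BlackBerry 10 WebWorks convention the question quotes, but treat them as illustrative and check the exact syntax against your SDK version; the host name is a placeholder.

```xml
<!-- Illustrative sketch: grant access only to the specific origin the app actually needs -->
<widget xmlns="http://www.w3.org/ns/widgets">
  <!-- Allow XHR access to one trusted API host (and its subdomains) -->
  <access uri="https://api.example.com" subdomains="true" />

  <!-- The wildcard form from the question opens access to everything instead: -->
  <!-- <access uri="*" /> -->
</widget>
```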

Browsers are requesting crossdomain.xml & /-7890*sfxd*0*sfxd*0 on my site

Just recently I have seen multiple sessions on my site that are repeatedly requesting /crossdomain.xml & /-7890*sfxd*0*sfxd*0. We have had feedback from some of the folks behind these sessions that they cannot browse the site correctly. Is anyone aware of what might be causing these requests? We were thinking either virus or some toolbar.
The only common item we have seen on the requests is that they all come from some version of IE (7, 8 or 9).
Independently of the nature of your site/application, ...
... the request of the /crossdomain.xml policy file is indicative of a [typically Adobe Flash, Silverlight, JavaFX or the like] application running on the client workstation and attempting to assert whether your site allows the application to access your site on behalf of the user on said workstation. This assertion of the crossdomain policy is a security feature of the underlying "sandboxed" environment (Flash Player, Silverlight, etc.) aimed at protecting the user of the workstation. That is because when accessing third party sites "on behalf" of the user, the application gains access to whatever information these sites will provide in the context of the various sessions or cookies the user may have already started/obtained.
... the request of /-7890*sfxd*0*sfxd*0 is a hint that the client (be it the application mentioned above, some unrelated http reference, web browser plug-in or yet some other logic) is thinking that your site is either superfish.com, some online store affiliated with superfish.com or one of the many sites that send traffic to superfish.com for the purpose of sharing revenue.
Now... these two kinds of request received by your site may well be unrelated, even though they originate from the same workstation in some apparent simultaneity. For example it could just be that the crossdomain policy assertion is from a web application which legitimately wishes to access some service from your site, while the "sfxd" request comes from some plug-in on the workstation's web browser (e.g. WindowsShopper or, alas, a slew of other plug-ins) which somehow triggers its requests based on whatever images the browser receives.
The fact that some of the clients which make these requests are not able to browse your site correctly (whatever that means...) could further indicate that some -I suspect- JavaScript logic on these clients gets the root URL of its underlying application/affiliates confused with that of your site. But that's just a guess; there's not enough context about your site to give more precise hints.
A few suggestions to move forward:
Decide whether your site can and should allow cross-domain access, and to whom, and remove or edit your site's crossdomain.xml file accordingly (see the sketch after this list). Too many sites seem to just put <allow-access-from domain="*"/> in their crossdomain policy file for no good reason, thereby putting their users at risk. This first suggestion will not lead to solving the problem at hand, but I couldn't resist the cautionary warning.
Ask one of these users who "cannot access your site properly" to disable some of the plug-ins (aka add-ons) in their web browser and/or to use an alternate web browser, and see if that improves the situation. Disabling plug-ins in a web browser is usually very easy. To speed up the discovery, you may suggest a dichotomy approach: disable several plug-ins at once and continue the experiment with either half of these plug-ins or with the ones still enabled, depending on whether the site becomes accessible.
If your application serves ads from third-party sites, temporarily disable these ads and see if that helps the users who "cannot access your site properly".
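As referenced in the first suggestion, a locked-down crossdomain.xml grants access only to the specific domains that genuinely need it. The host name below is a placeholder; the elements follow Adobe's published cross-domain policy file format.

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Only honour this master policy file, not ad-hoc policy files elsewhere on the site -->
  <site-control permitted-cross-domain-policies="master-only" />
  <!-- Allow only a known partner domain, and only over HTTPS -->
  <allow-access-from domain="partner.example.com" secure="true" />
</cross-domain-policy>
```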

OAuth and phishing vulnerabilities, are they inexorably tied together?

I've been doing a fair bit of work with OAuth recently, and I have to say that I really like it. I like the concept, and I like how it provides a low barrier of entry for your users to connect up external data to your site (or for you to provide the data APIs for consumption externally). Personally, I've always balked at sites that ask me to provide my login for another website to them directly. OAuth's "valet key for the web" approach solves this nicely.
The biggest problem I (and many others) see with it, though, is that the standard OAuth workflow encourages the same type of behaviors that phishing attacks use to their advantage. If you train your user that it is normal behavior to be redirected to a site to provide login credentials, then it is easy for a phishing site to exploit that normal behavior and instead redirect to a clone site where it captures your username and password.
What, if anything, have you done (or seen done) to alleviate this problem?
Do you tell the users to go and login to the providing site manually, without automatic links or redirection? (but then this increases the barrier of entry)
Do you attempt to educate your users, and if so, when and how? Any lengthy explanation of security that the user has to read also increases the barrier of entry.
What else?
I believe that OAuth and phishing are inexorably linked, at least in OAuth's current form. There have been systems in place to prevent phishing, most notably HTTPS (pause for laughter...), but obviously it doesn't work.
Phishing is a very successful attack against systems that require username/password combos. As long as people use usernames and passwords for authentication, phishing will always be a problem. A better system is to use asymmetric cryptography for authentication. All modern browsers have built-in support for smart cards. You can't phish a card sitting in someone's wallet, and hacking the user's desktop won't leak the private key. The asymmetric keypair doesn't have to be on a smart card, but I think that builds a stronger system than if it were purely implemented in software.
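To make the asymmetric-cryptography point concrete, here is a hedged sketch of a Node.js HTTPS server that requires a client certificate during the TLS handshake. The file names and CA setup are placeholders; a smart card would simply hold the client's private key instead of a file on disk.

```typescript
import https from "https";
import { readFileSync } from "fs";
import type { TLSSocket } from "tls";

// Server key/cert plus the CA that issued the client certificates (placeholder paths).
const options = {
  key: readFileSync("server-key.pem"),
  cert: readFileSync("server-cert.pem"),
  ca: readFileSync("client-ca.pem"),
  requestCert: true,        // ask the client for a certificate during the handshake
  rejectUnauthorized: true, // refuse connections the configured CA did not sign
};

https.createServer(options, (req, res) => {
  // The authenticated identity comes from the certificate, not a phishable password.
  const cert = (req.socket as TLSSocket).getPeerCertificate();
  res.end(`Hello, ${cert.subject?.CN ?? "unknown"}`);
}).listen(8443);
```

There is nothing for a fake site to capture here: the private key never leaves the client, and the proof of possession happens inside the TLS handshake rather than in a form the user can be tricked into filling out.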
You have an account with the site you are being redirected to, shouldn't they be implementing anti-phishing measures such as a signature phrase and image? This also leverages any existing training the users have received from e.g. banks who commonly use these measures.
In general, the sign-in page should present user-friendly shared secrets to the user to confirm the identity of the site they are logging into.
As Jingle notes, an SSL certificate could be used for authentication, but in this case couldn't the user load a certificate directly from the site into their web browser as part of the OAuth setup process? If a trust relationship has already been established with the site, I'm not sure further resort to a CA is necessary.
There are some techniques that can be used to avoid or diminish phishing attacks. I made a list of cheap options:
Mutual identification resources, e.g. an icon associated with a specific user that is shown only after the user enters their username (see the sketch after this list).
Non-deterministic usernames, and avoiding email addresses as usernames.
An option for the user to see their login history.
A QR code that allows authentication from a pre-registered device such as a smartphone, like WhatsApp Web.
Authentication numbers shown on login pages that the user can validate on the official company site.
All of the options listed above depend heavily on user education about information security and privacy. Wizards that appear only on the first authentication can help achieve this goal.
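As an illustration of the first item, a login flow can fetch the user's anti-phishing image after the username step and only then ask for the password. A minimal sketch follows; the routes, in-memory store, and image paths are invented for illustration.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Illustrative in-memory store: username -> URL of the image the user chose at signup.
const antiPhishingImages = new Map<string, string>([
  ["alice", "/images/alice-sailboat.png"],
]);

// Step 1: the user submits only a username and gets back their personal image.
// A real implementation should return a generic placeholder for unknown usernames
// so this endpoint cannot be used to enumerate accounts.
app.post("/login/identify", (req, res) => {
  const { username } = req.body;
  const image = antiPhishingImages.get(username) ?? "/images/generic.png";
  res.json({ image });
});

// Step 2: the user confirms the image is the one they chose, then submits the password.
app.post("/login/password", (_req, res) => {
  // ... verify credentials as usual ...
  res.json({ ok: true });
});

app.listen(3000);
```

A phishing clone does not know which image each victim chose, so a missing or wrong image is the user's cue to stop; as the answer stresses, that cue only works if users have been taught to look for it.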
To extend the valet analogy: how do you know you can trust the valet, and that he/she is not just someone trying it on? You don't really: you just make that (perhaps unconscious) judgement based on context: you know the hotel, you've been there before, you might even recognise the person to whom you're giving your key.
In the same way, when you sign in using OAuth (or OpenID), you are redirecting the user to a site/URL which should be familiar to them, seeing as they are providing their credentials from that site which is known to them.
This isn't just an OAuth problem; it's OpenID's problem as well. Worse, of course, with OpenID you're giving a web site your provider: it's easy to automatically scrape that site (if you don't already have a bogus copy) and generate a clone which you then direct your user to.
It's lucky that nothing serious uses OpenID to authenticate - blog posts and Flickr comments just aren't a juicy target.
Now OpenID is moving toward mitigation as it starts to develop Information Card support, where a fixed UI in the shape of client-side software provides a secure identity "wallet", but MS appear to have dropped the ball on Information Cards, even though it's their (open) spec.
It's not going away anytime soon.
What about certifying the OAuth provider, just like SSL certification? Only a certified OAuth provider would be trustworthy. But the problem is, as with SSL certification, the CA matters.

How do you combat website spoofing/phishing?

What is your suggested solution for the threat of website UI spoofing?
By definition, any solution that relies on the site showing you personalised information once you've logged in is ineffective against phishers. If you've attempted to log in, they've already succeeded!
FWIW, I don't yet know the real answer, maybe this question will throw up some good ideas. I am however professionally involved in research into phishing, bad domain registrations, etc.
I don't believe there's any significant technical solution that web site developers can implement. Again, by definition, if your users arrive at a phishing site you're no longer in control.
This is why all current anti-phishing technologies reside in the browser, and not in the phished site.
The key to this problem is identifying some difference between a request to the real site and a request to the spoof site.
The simplest difference is some cookie-based UI preference. A cookie set on your (real) site will only ever be returned to your site, and will never be sent to a spoof site.
Now there are plenty of reasons that the valid cookie might not be sent to your site, the user might be using a different computer or they might have expired/deleted cookies, but at least you can guarantee that it won't be sent to the spoof site.
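Here is a hedged sketch of that cookie-based preference, using Express with the cookie-parser middleware; the cookie name, routes, and theme values are illustrative.

```typescript
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

// After the user picks a UI preference on the real site, remember it in a long-lived cookie.
app.post("/preferences/theme/:theme", (req, res) => {
  res.cookie("ui_theme", req.params.theme, {
    maxAge: 1000 * 60 * 60 * 24 * 365, // one year
    httpOnly: true,
    secure: true, // only ever sent back over HTTPS to this origin
  });
  res.sendStatus(204);
});

// The login page renders the personalised look only when the cookie comes back.
// A spoof site on another domain never receives this cookie, so it cannot reproduce the look.
app.get("/login", (req, res) => {
  const theme = req.cookies["ui_theme"] ?? "default";
  res.send(`<body class="${theme}"><!-- login form --></body>`);
});

app.listen(3000);
```

As the answer notes, an absent cookie proves nothing (new machine, cleared cookies), but a present one is a signal the spoof site cannot forge.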
I think the only answer here is to program better people.
Doing things like customizing the appearance or uploading an image only work if the user in question actually recognizes when these things are wrong. I think the majority of users would never notice these things except on sites they visit a lot. Even if they did, they might attribute it to a change in website design and not a phish.
One solution is to customize the web site per user. Spoofing only works when users have basically the same view of the website (one spoof - many victims). So if, for example, eBay would let you configure a custom background color, you should be able to notice that the page you're viewing is some spoof (that won't know your choice of color). A real solution is a bit more complex (like maybe a secret keyword configured in the browser that only the browser can render within password controls or into the url bar, etc.), but the idea is the same.
Customize the UI per user so spoofing (which relies on most users expecting to see basically the same UI) stops working. It can be a browser based solution, or something web sites offer to their users (some already do).
I've seen some sites that let you select a "personal" icon. Whenever you log in, that icon is displayed as proof that you are on their site.
You can ask a question when the user logs in (a question that the user has written, along with its answer).
You can display a picture after login that the user has uploaded; if the user doesn't see their picture (a private one that only they could see), then it's not the real website.
