Does anyone know if this exact thing would be possible? I've been looking around everywhere but found no help. The idea is that we have two sites (we'll call them Websites A and B respectively) residing on two different ports on one server in a network. Website A (designed for the user to go there first) has two-factor authentication upon the initial login, then has a link to Website B. We want it to work so that a user could not get to Website B without visiting Website A beforehand and logging in through Duo. I would think it's somehow possible, given that it's our domain, server, and sites.
Thanks so much! All help is appreciated.
I've tried a lot. However, it's a bit different now, because one of the sites used to be on a different server, but now both are on the same one. I haven't tried anything since they've been on the same server, and I'm not really sure where to start. I've looked at a lot of forums, but no one had my exact problem, so I thought I'd ask.
You can use a single sign-on (SSO) solution that supports two-factor authentication. This would allow users to log in to Website A using their two-factor credentials, and then be automatically logged in to Website B without having to enter their credentials again.
The best one for your use case will depend on a number of factors, such as the technology stack you are using, your budget, and the specific requirements of your implementation.
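As a rough sketch of what the handoff could look like: after the Duo login succeeds, Website A issues a short-lived signed token, and Website B verifies it before creating its own session. The snippet below is a minimal illustration in Python using only the standard library; the shared secret, the 60-second lifetime, and the token format are all assumptions you would adapt to your own stack:

```python
import hashlib
import hmac
import time

# Assumed shared secret known to both sites; keep it out of source control.
SHARED_SECRET = b"replace-with-a-long-random-value"

def issue_token(username: str) -> str:
    """Website A calls this after a successful Duo login."""
    expires = str(int(time.time()) + 60)  # assumed 60-second validity window
    payload = f"{username}|{expires}"
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> str | None:
    """Website B calls this; returns the username, or None if the token is bad."""
    try:
        username, expires, sig = token.split("|")
    except ValueError:
        return None
    payload = f"{username}|{expires}"
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    if int(expires) < int(time.time()):
        return None  # token expired
    return username
```

Website A would append the token to its link to Website B (over HTTPS only), and Website B would exchange a valid token for its own session cookie; anyone who hits Website B without one gets redirected back to Website A's login.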
Dear StackOverflow community,
======================================
TL;DR VERSION:
Before we proceed further in our relationship with a cloud web portal provider, I'd like to insist that they provide us a secure way to obtain a copy of our data from their web server.
Secure for authenticating ourselves without leaving ourselves vulnerable to having our credentials stolen or spoofed and
Secure for the file in transit on its way back to us.
I suspect I might have to point them in the right direction myself despite my own inexperience in the field. What kinds of simple-yet-secure approaches to authenticating us could I ask them to look into?
======================================
FULL POST
BACKGROUND:
At work, we are evaluating a cloud-based portal through which our current and former customers will be able to network with each other (we have customers who interact with us in cohorts).
The user interface of the portal is well-designed, which is why we're thinking about buying it, but the company providing it is young. So, for example, their idea of "helping us integrate our portal data with SalesForce" was to have a link within the administrative control panel to a page that returns a CSV file containing the entire contents of our database.
"Fetch a CSV" actually is fine, because we already do it with other CSV files from our ERP (pushing to SalesForce with a data loader and scheduled Windows batch scripting on an always-on PC).
I said we could work with it as long as they provided us a way to fetch the CSV file programmatically, without human intervention, at 5AM. They did so, but the solution seems vulnerable to exploitation and I'd like guidance redirecting their efforts.
A DIVERSION ABOUT THE HUMAN UI:
The link one sees as a human using the web interface to the portal under consideration is http://www.OurBrandedDomain.com/admin/downloaddatabase
If you aren't already logged in, you will be redirected to http://www.OurBrandedDomain.com/Admin/login?returnUrl=admin/downloaddatabase, and as soon as you log in, the CSV file will be offered to you.
(Yes, I know, it's HTTP and it's customer data ... I'm planning to talk to them about turning off HTTP access to the login/signup forms and to the internals of the site, too. Not the focus of my question, though.)
THEIR PROPOSAL:
So, as I said, I asked for something programmatically usable.
What they gave us was instructions to go to http://www.OurFlavorOfTheirSite.com/admin/fetchdatabase?email=AdminsEmail@Domain.com&password=AdminsPassword
Please correct me if I'm wrong, but this seems like a really insecure way to authenticate ourselves to the web server.
HOW I NEED HELP:
Before we proceed further in our relationship with this portal provider, I'd like to insist that they provide us a secure way to obtain a CSV copy of our data.
Secure for authenticating ourselves without leaving ourselves vulnerable to having our credentials stolen or spoofed and
Secure for the file in transit on its way back to us.
However, I don't get the sense that they've really thought about security much, and I suspect I might have to point them in the right direction myself despite my own inexperience in the field.
What kinds of simple-yet-secure approaches to authenticating us could I ask them to look into, knowing nothing more about the architecture of their servers than can be inferred from what I've just described here?
The solution doesn't have to involve us using a browser to interact with their server. Since we'll be downloading the file in a Windows scripting environment without human intervention, it's fine to suggest solutions that we can only test programmatically (even though that will make my learning curve a bit steeper).
(I suppose the solution could even get away from the server providing the data in the form of a CSV file, though then we'd probably just end up rebuilding a CSV file locally because we have infrastructure in place for CSV->SalesForce.)
Thanks in advance.
Yes, that is insecure.
You should insist on using TLS. For this they need to install a certificate from a Certification Authority to verify that they own the domain OurFlavorOfTheirSite.com. This will enable the URL to use HTTPS, which means communication is encrypted and authenticated (i.e. another website cannot spoof OurFlavorOfTheirSite.com without a browser warning being displayed).
Although the email=AdminsEmail@Domain.com&password=AdminsPassword parameters will be encrypted, they should be submitted via POST rather than GET. The reason is that GET query string parameters are stored in browser history, logged by proxies and servers by default, and can be transmitted in the Referer header when resources are included from other domains.
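Once they support HTTPS and a POST endpoint, the unattended 5AM fetch stays simple. Here is a sketch in Python (the `requests` package), assuming a hypothetical endpoint path and field names, since we don't know their actual API:

```python
import requests

# Hypothetical endpoint and field names: their real API may differ.
URL = "https://www.OurFlavorOfTheirSite.com/admin/fetchdatabase"

resp = requests.post(
    URL,
    data={"email": "AdminsEmail@Domain.com", "password": "AdminsPassword"},
    timeout=60,
)
resp.raise_for_status()  # stop loudly if login fails or the server errors

with open(r"C:\data\portal_export.csv", "wb") as f:
    f.write(resp.content)
```

`requests` verifies the server's certificate by default, which is exactly the spoofing protection described above. Better still would be a revocable API key instead of the admin password, so a leaked credential doesn't expose the whole admin account.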
I am thinking of using another "less" important server to store files that our clients want to upload, and handling the data validation, copying, insertion, etc. at that end.
I would display the whole upload form through an iframe on our website, built with HTML, PHP, and SQL.
Now I would like to ask your opinions on whether this is a good or bad idea.
I figure the pros and cons are:
**Pros:**
The other server is "less" valuable, meaning that if something malicious were uploaded there, it would not be the end of the world
Since the other server has fewer events/users/functions/data, it would help lessen the stress on our main website server
If the less important server goes down, the rest of the functionality on the main server would still work
Firewall prevents outside traffic (at least to a certain point)
The users need to be logged through the main website
**Cons:**
It does not have any CMS+plugins, so it might be more vulnerable
It might attract more malicious traffic.
Makes the upkeep of the main website that much more complicated for future developers
Generally I'm not fond of the idea of letting users upload files, but it is not up to me.
Thanks for your input. I'm looking forward to hearing your opinions.
Servers have file quotas and bandwidths defined/allocated for them.
If you transfer your "less" used files to another server, it will help your main server improve its performance.
Also, there won't be as many maintenance headaches with the main server if all uploads go to the other server.
Conclusion : It is a good idea.
Well, I guess most importantly, you will need a single sign-on (SSO) solution in place between the two web applications. I assume you don't want user A to be able to read or delete files from user B.
SSO between two servers is a lot more complicated than for a single web application, unless this site is only deployed on an intranet with an Active Directory domain controller, in which case you can use Kerberos.
I'm not sure it's worth it just for the advantages you name.
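Whichever way the SSO part is solved, the upload server itself still has to treat every file as hostile. Here is a minimal validation sketch (in Python for brevity; the same checks translate directly to PHP, and the extension list, size cap, and directory are assumptions):

```python
import os
import secrets

ALLOWED_EXTENSIONS = {".pdf", ".png", ".jpg"}  # assumed policy
MAX_BYTES = 10 * 1024 * 1024                   # assumed 10 MB cap
UPLOAD_DIR = "/srv/uploads"                    # assumed path, outside the web root

def store_upload(original_name: str, data: bytes) -> str:
    """Validate an uploaded file and store it under a server-chosen name."""
    ext = os.path.splitext(original_name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("file type not allowed")
    if len(data) > MAX_BYTES:
        raise ValueError("file too large")
    # Never trust the client-supplied filename; generate a random one.
    safe_name = secrets.token_hex(16) + ext
    with open(os.path.join(UPLOAD_DIR, safe_name), "wb") as f:
        f.write(data)
    return safe_name
```

Keeping uploads outside the web root and serving them back through a script means an uploaded script file can never be executed by the web server, which blunts the biggest risk of accepting uploads on any server, "important" or not.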
I have been building quite a few MVC based websites locally and am finally ready to deploy the first, but, I am getting rather nervous.
During testing, I noticed several things that worried me - I am using the default forms authentication with a few tweaks (although nothing that touches the underlying security).
I noticed that if I created a user in one application and logged in, then launched another application... it would keep me logged in* as the user from the previous application. The user doesn't even exist in the new application!
* - I used [Authorize] on controllers, and was surprised I could just get straight in without any sort of authentication
I assume it is because the cookie is being set for localhost instead of the application/port (although, not too much I can do about this in development).
Based on this, how secure is the default authentication?
1. Is there any way to check from the code that the user doesn't have a "faked" cookie? / Check that the user has logged in from my application?
2. I was just wondering if there are any sort of checklists or anything I can go through before deploying?
3. (Sort of a follow-up to question 1, as of writing this.) I am guessing I could add a column with a random number that is saved to the cookie, and then have that number checked every time any authentication is done... however, I did not want to start mucking around with the membership provider... but I think this could work. Is this a good idea?
Try using IIS on your machine instead of the VS Dev Server. That solves your problem 1.
Other than that, I don't think you will need any extra effort to make the default ASP.NET membership mechanisms more secure, unless of course you need something really custom in your projects. These mechanisms have been around for a while now and I think they have been well tested in terms of security.
You just need to remember to put the [Authorize] attribute in the right places. If not on your controllers, put it on the right methods.
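On question 3: the pattern you describe (a per-user token stored server-side and compared against the cookie on every request) is a reasonable belt-and-braces check, and you can bolt it on without rewriting the membership provider. A framework-neutral sketch of the idea (Python here purely for brevity; the in-memory dict stands in for your extra database column):

```python
import secrets

# Stand-in for the extra database column keyed by username.
user_tokens: dict[str, str] = {}

def on_login(username: str) -> str:
    """At login, generate a fresh random token and persist it server-side."""
    token = secrets.token_urlsafe(32)
    user_tokens[username] = token
    return token  # store this in the auth cookie alongside the username

def is_valid_request(username: str, cookie_token: str) -> bool:
    """Run on every authenticated request; a forged or stale cookie fails here."""
    return user_tokens.get(username) == cookie_token
```

That said, the more conventional ASP.NET fix for the cross-application symptom is to give each application its own forms-authentication cookie name and its own machine key, so a ticket issued by one application is never accepted by another.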
Basic web authentication shouldn't be trusted for applications that contain truly sensitive information. That being said, it's sufficient for most applications. Be sure to check your application as often as possible, before and after release, for XSS vulnerabilities.
Here is Microsoft's recommended "Secure yourself" list. http://msdn.microsoft.com/en-us/library/ff649310.aspx
No matter how strong your authentication is, one small XSS mistake and a malicious user can do as they wish with your site and your users' data!
I recently read a good book: Wrox Professional ASP.NET. It talks about these steps in more detail and walks through examples of the problems. After reading it I was able to "deface and steal" my own site's information with relative ease; it was a good eye-opener on the importance of securing against XSS.
I'm possibly developing a web-based application that allows users to create individual pages. I would like users to be able to use their own domains/sub-domains to access the pages.
So far I've considered:
A) Getting users to forward with masking to their pages. Probably the most inefficient option; having used this before myself, I'm pretty sure it loads the page in an iframe (not entirely sure, though).
B) Having the users download certain files, which then make calls to the server for their specific account settings via a user key of some sort. The most efficient option in my mind at the moment; however, it requires letting users see a fair amount of source code, something I'd rather not do if possible.
C) Getting the users to add a CNAME record to their DNS settings, which is somewhat inefficient (most of these users will be used to uploading files via FTP, hence why B is the most efficient option), but at the same time it means no source code will be seen by them.
The downside is, I have no idea how to implement C or what would be needed.
I got the idea from: http://unbounce.com/features/custom-urls/.
I'm wondering which of the three methods I should use to allow custom URLs for users. I would prefer C, but I have no idea how to implement it (so I'm partly asking how), and whether the time spent learning how to get that kind of functionality set up would even be worth it.
Any answers/opinions/comments would be very much appreciated :)!
Option C is called wildcard DNS: I've linked to a writeup that gives an example of how to do it using Apache. Other web server setups should be able to do this as well: for what you want it is well worth it.
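On the application side, the server then has to pick the right user's page from the Host header of each incoming request. Here is a minimal sketch using Python's built-in WSGI server (the domain-to-user table is a stand-in for a real database lookup):

```python
from wsgiref.simple_server import make_server

# Stand-in for a database table mapping customer domains to accounts.
DOMAIN_TO_USER = {
    "pages.example-customer.com": "customer_42",
}

def app(environ, start_response):
    # Strip any :port suffix and normalise the Host header.
    host = environ.get("HTTP_HOST", "").split(":")[0].lower()
    user = DOMAIN_TO_USER.get(host)
    if user is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Unknown domain"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"Rendering the page belonging to {user}".encode()]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

Each customer just adds a CNAME record pointing their domain or subdomain at your server, and your app keys everything off the Host header; no source code ever leaves your machine, which is the advantage of option C over B.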
What is your suggested solution for the threat of website UI spoofing?
By definition, any solution that relies on the site showing you personalised information once you've logged in is ineffective against phishers. If you've attempted to log in, they've already succeeded!
FWIW, I don't yet know the real answer, maybe this question will throw up some good ideas. I am however professionally involved in research into phishing, bad domain registrations, etc.
I don't believe there's any significant technical solution that web site developers can implement. Again, by definition, if your users arrive at a phishing site you're no longer in control.
This is why all current anti-phishing technologies reside in the browser, and not in the phished site.
The key to this problem is identifying some difference between a request to the real site and a request to the spoof site.
The simplest difference is some cookie-based UI preference. A cookie set on your (real) site will only ever be returned to your site, and will never be sent to a spoof site.
Now there are plenty of reasons that the valid cookie might not be sent to your site, the user might be using a different computer or they might have expired/deleted cookies, but at least you can guarantee that it won't be sent to the spoof site.
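As a concrete sketch of that idea: the preference cookie only needs to select which personal theme or image the login page renders, and its absence on a spoof site is the visible tell. A minimal illustration in Python (the cookie name and theme table are assumptions):

```python
from http.cookies import SimpleCookie

def login_page_theme(cookie_header: str, themes: dict[str, str]) -> str:
    """Pick the user's registered theme from the request's Cookie header.

    A spoof site never receives this cookie, so it can only render the
    generic default; that visible difference is the anti-phishing signal.
    """
    cookie = SimpleCookie(cookie_header)
    if "ui_pref" in cookie:
        return themes.get(cookie["ui_pref"].value, "generic-default")
    return "generic-default"

# The real site set ui_pref=abc123 when the user chose a theme.
print(login_page_theme("ui_pref=abc123", {"abc123": "green-sailboat"}))
# -> green-sailboat
```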
I think the only answer here is to program better people.
Doing things like customizing the appearance or uploading an image only work if the user in question actually recognizes when these things are wrong. I think the majority of users would never notice, except on sites they visit a lot. Even if they did, they might attribute it to a change in website design and not a phish.
One solution is to customize the web site per user. Spoofing only works when users have basically the same view of the website (one spoof - many victims). So if, for example, eBay would let you configure a custom background color, you should be able to notice that the page you're viewing is some spoof (which won't know your choice of color). A real solution is a bit more complex (like maybe a secret keyword configured in the browser that only the browser can render within password controls or into the URL bar, etc.), but the idea is the same.
Customize the UI per user so spoofing (which relies on most users expecting to see basically the same UI) stops working. It can be a browser based solution, or something web sites offer to their users (some already do).
I've seen some sites that let you select a "personal" icon. Whenever you log in, that icon is displayed as proof that you are on their site.
You can ask a question when the user logs in (a question that the user wrote, together with its answer).
You can display a picture after login that the user has uploaded; if the user doesn't see their picture (a private one that only they could see), then it's not the real website.