Limit website access to one computer - security

We currently have a B2B website available over the public internet that is accessed by thousands of authenticated users worldwide from any location. We would like it so each user can only access the website from one computer (for security and license reasons). We currently use a Java applet on the site that obtains the user's MAC address but it's obstructive and that value can be spoofed so we are looking to move away from this implementation.
What is the best way to limit usage of a website to a single computer? Is this something that's best left to a security vendor, do we need to have users install certificates on their machines, or are there other solutions available? Any advice on this topic is appreciated. Thanks.
Update: What we would like to do is implement some kind of device authorization for the website. I thought I saw some banking websites do this kind of thing...does anyone by chance know what approaches can be taken to accomplish such functionality? Perhaps virtual tokens or some other multi-factor authentication implementation?

There won't be a solution to this that you like.
By design, web browsers have very limited access to the containing computer. In the spirit of 'on the internet no one knows that you are a dog', your side can't ever find out much about the other end. The IP address is subject to NAT and other spoofing. An X.509 certificate is perfectly portable from one computer to another.
Essentially, the conceptual model of the entire 'web' does not include 'computers'. If you are a server, you get a connection, and you can ask it very few questions indeed. None of them amount to 'give me a unique token that identifies a computer on the other end'.

How to identify visitors are unique?

I'm trying to make an internet voting service, but the problem is the internet is just so easy to cheat by creating multiple accounts and voting for the same thing. CAPTCHA and email are not helping, as it takes just 3 seconds for a human to pass them. IP can be changed by proxy. If we put some cookie on the voter's browser, he just cleans it next time.
I created this question to ask for help with methods we can use, with the basic features that all browsers have (JavaScript etc.), to prevent our service being cheated easily.
The first idea I have myself is: is it possible for my website to access all the cookies the user has in his browser just by visiting my site? Because when they clean everything with CCleaner for new accounts, then I can understand the browser is empty, so the person is perhaps a cheater, as most real users, when they come to my site, always have at least several cookies from different sites.
There is no way to address the issue of uniquely identifying real-world assets (here: humans) without stepping out of your virtual system, by definition.
There are various ways to ensure a higher reliability of the mapping "one human to exactly one virtual identity", but none of them is fool-proof.
The most accessible way would be to do it via a smartphone app. A human usually only has one smartphone (and a phone number).
Another way is to send them snail mail to their real address, with a secret code, which you require them to enter in your virtual system.
Or the social insurance number.
Or their fingerprints as login credentials.
The list could go on, but the point is, these things are bound to the physical world. If you combine more such elements, you get a higher accuracy (but never 100% certainty).

Considerations regarding a p2p social network

While there are many social networks in the wild, most rely on data stored on a central site owned by a third party.
I'd like to build a solution where data remains local on members' systems. Think of the project as an address book, which automagically updates a contact's data as soon as a contact changes its coordinates. This base idea might get extended later on...
Updates will be transferred using public/private key cryptography using a central host. The sole role of the host is to be a store and forward intermediate. Private keys remain private on each member's system.
If two clients are both online and a p2p connection can be established, the clients could transfer data telegrams without the central host.
Thus, sender and receiver will be the only parties which are able to create authentic messages.
Questions:
Do certain protocols exist which I should adopt?
Are there any security concerns I should keep in mind?
Do certain services exist which should be integrated or used somehow?
More technically:
Use e.g. Amazon or Google provided services?
Or better use a raw web-server? If yes: Why?
Which algorithm and key length should be used?
UPDATE-1
I googled my own question title and found this academic project developed 2008/09: http://www.lifesocial.org/.
The solution you are describing sounds remarkably like email, with encrypted messages as the payload, and an application rather than a human being creating the messages.
It doesn't really sound like "p2p" - in most P2P protocols, the only requirement for central servers is discovery - you're using store & forward.
As a quick proof of concept, I'd set up an email server, and build an application that sends emails to addresses registered on that server, encrypted using PGP - the tooling and libraries are available, so you should be able to get that up and running in days, rather than weeks. In my experience, building a throw-away PoC for this kind of question is a great way of sifting out the nugget of my idea.
The second issue is that the nature of a social network is that it's a network. Your design may require you to store more than the data of the two direct contacts - you may also have to store their friends, or at least the public interactions those friends have had.
This may not be part of your plan, but if it is, you need to think it through early on - you may end up having to transmit the entire social graph to each participant for local storage, which creates a scalability problem.
The paper about Safebook might be interesting for you.
Also you could take a look at other distributed OSN and see what they are doing.
None of the federated networks mentioned on http://en.wikipedia.org/wiki/Distributed_social_network is actually distributed. What Stefan intends to do is indeed new and was only explored by some proprietary folks.
I've been thinking about the same concept for the last two years. I've finally decided to give it a try using Python.
I've spent the better part of last night and this morning writing a sockets communication script & server. I also plan to remove the central server from the equation as it's just plain cumbersome and there's no point to it when all the members could keep copies of their friends' keys.
Each profile could be accessed via a hashed string of someone's public key. My social network relies on nodes and pods. Pods are computers which have their ports open to the network. They help with relaying traffic as most firewalls block incoming socket requests. Nodes store information and share it with other nodes. Each node will get a directory of active pods which may be used to relay their traffic.
The PeerSoN project looks like something you might be interested in: http://www.peerson.net/index.shtml
They have done a lot of research and the papers are available on their site.
Some thoughts about it:
protocols to use: you could think exactly on P2P programs and their design
security concerns: privacy. Take great care not to open doors: a whole system can get compromised 'cause you have opened some door.
services: you could integrate with the regular social networks through their APIs
People will have to install a program on their computers and remember to open it every time, like any P2P client. Leaving everything on a web server has a smaller footprint / necessity of user action.
Somehow you'll need a centralized server to manage the searches. You can't just broadcast the internet to find friends. Or you'll have to rely upon email requests to add someone, and to do that you'll need to know the email in advance.
The fewer friends /contacts use your program, the fewer ones will want to use it, since it won't have contact information available.
I see that your server will be a store and forward, so the update problem is solved.

OAuth and phishing vulnerabilities, are they inexorably tied together?

I've been doing a fair bit of work with OAuth recently, and I have to say that I really like it. I like the concept, and I like how it provides a low barrier-of-entry for your users to connect up the external data to your site (or for you to provide the data APIs for consumption externally). Personally, I've always balked at sites that ask me to provide my login for another website to them directly. And OAuth's "valet key for the web" approach solves this nicely.
The biggest problem I (and many others) see with it though, is the standard OAuth work-flow encourages the same type of behaviors that phishing attacks use to their advantage. If you train your user that it is normal behavior to be redirected to a site to provide login credentials, then it is easy for a phishing site to exploit that normal behavior but instead redirect to their clone site where they capture your username and password.
What, if anything, have you done (or seen done) to alleviate this problem?
Do you tell the users to go and login to the providing site manually, without automatic links or redirection? (but then this increases the barrier of entry)
Do you attempt to educate your users, and if so, when and how? Any lengthy explanation of security that the user has to read also increases the barrier of entry.
What else?
I believe that OAuth and phishing are inexorably linked, at least in OAuth's current form. There have been systems in place to prevent phishing, most notably HTTPS (pause for laughter...), but obviously it doesn't work.
Phishing is a very successful attack against systems that require username/password combos. As long as people use usernames and passwords for authentication, phishing will always be a problem. A better system is to use asymmetric cryptography for authentication. All modern browsers have built-in support for smart cards. You can't phish a card sitting in someone's wallet, and hacking the user's desktop won't leak the private key. The asymmetric keypair doesn't have to be on a smartcard, but I think that it builds a stronger system than if it were purely implemented in software.
You have an account with the site you are being redirected to, shouldn't they be implementing anti-phishing measures such as a signature phrase and image? This also leverages any existing training the users have received from e.g. banks who commonly use these measures.
In general, the sign-in page should present user-friendly shared secrets to the user to confirm the identity of the site they are logging into.
As Jingle notes, an SSL certificate could be used for authentication, but in this case couldn't the user load a certificate directly from the site into their web browser as part of the OAuth setup process? If a trust relationship has already been established with the site, I'm not sure further resort to a CA is necessary.
There are some techniques that can be used to avoid or diminish phishing attacks. I made a list of cheap options:
Mutual identification resources. E.g. an icon associated with a specific user, shown only after the user inputs his username.
Use of non-deterministic usernames, and avoiding emails as usernames.
Include an option for the user to see his login history.
A QR code that allows authentication on a pre-registered device like a smartphone, like WhatsApp Web.
Show authentication numbers on login pages that the user can validate on the official company site.
All options listed above depend highly on user education about information security and privacy. Wizards that appear only on the first authentication can help achieve this goal.
To extend the valet analogy: how do you know you can trust the valet, and that he/she is not just someone trying it on? You don't really: you just make that (perhaps unconscious) judgement based on context: you know the hotel, you've been there before, you might even recognise the person to whom you're giving your key.
In the same way, when you sign in using OAuth (or OpenID), you are redirecting the user to a site/URL which should be familiar to them, seeing as they are providing their credentials from that site which is known to them.
This isn't just an OAuth problem; it's OpenID's problem as well. Worse, of course, with OpenID you're giving a web site your provider; it's easy to automatically scrape that site if you don't have a bogus one already and generate one which you then direct your user to.
It's lucky that nothing serious uses OpenID to authenticate - blog posts, flickr comments just aren't a juicy target.
Now OpenID are going somewhere to mitigation as they start to develop their Information Card support, where a fixed UI in the shape of client side software will provide an identity "wallet" which is secure, but MS appear to have dropped the ball themselves on Information Cards, even though it's their (open) spec.
It's not going away anytime soon.
What about certifying the OAuth provider just like SSL certification? Only a certified OAuth provider is trustworthy. But the problem is, as with SSL certification, the CA matters.

Shared SSL - Better or worse than resorting to OpenID?

I am working on a project that requires user login/registration. I'd like to avoid setting up private SSL since I am using a shared hosting provider and would like to host multiple domains off of the same plan (but since a private SSL certificate requires a dedicated ip, I can only have 1 certificate per plan...but would still like to secure all of my sites).
I am debating between
resorting to OpenID (although for a non-technical audience all the complaints I found on SO would be further multiplied)
using my host's shared SSL (which will pop up those annoying certificate warnings in the browser saying that the sites don't match).
What seems like the better option? Or would you suggest running away from both and just sucking it up and purchasing additional/better hosting plans?
From my experience in dealing with SO and a fairly simple site using Google App Engine (and their authentication system), I'd give the following advice:
Do NOT use OpenID for identification. It can work for authentication with your own identity management, but there are issues as soon as you try to identify a specific user.
It's amazing how many OpenIDs people will have, so be prepared to support multiple OpenID auth URLs (definitely more than 1, probably more than 2).
If high security is a requirement, be very wary of OpenID. Many people will use providers that they normally only use for low-security tasks (and therefore have weak passwords). This particular issue struck Jeff Atwood directly (his account was stolen due to exactly this mistake)!
Keep things simple for your users. If you do go with OpenID, emphasize one or two providers that they likely already have (eg, Google), and then provide a deemphasized selection for generic providers. Don't make the more simple-minded users think about OpenID.
Along with that thinking, a simple "Login with your Google Account" button works surprisingly well. I thought people would find it confusing to login to a third party site with their google account, but in practice this has not been a problem with our .appspot.com domain.
The bottom line is that you shouldn't expect your users to prefer openid, but it can be an acceptable compromise. I don't think that showing an invalid certificate is a reasonable option for many end-users.
Of course, the separate certs option is the cleanest, but you have to decide if that's really worth it for the value gained. I'm a cheapskate and would tend to avoid it myself. :)
Why not roll your own from the ground up? If your database is accessible from each domain, you could keep one user store that every domain could access.
Is there a particular reason you do not want to create your own user model? It's easy to do but you may have other factors that are leaning you towards something like OpenId that I am not aware of.
If you use the shared SSL's URL, you shouldn't get the popups. That's the whole point of shared SSL. What you lose is the identity of your site's URL when the user jumps to the secure connection.
I would talk to your hosting provider about your options when it comes to private SSL. They're really not that expensive (even free if you're ok with poor IE support). I've been with shared providers in the past that would allocate you a dedicated IP for use with SSL for a tiny extra fee (like $2/mo).
To me, the extra $54 per year ($30 for the cert + $24 for the IP) was well worth the peace of mind for me and my users.

Secure captive portal?

We would like to run a wireless access point for public use. However, in case of misbehavior, we would like some personal information to be able to pass on to law enforcement.
The proposed solution involves a captive portal where users enter their email addresses, and are then given ten minutes to check their email and verify, after which they are given unrestricted access.
The problem, as I see it, is that once a user is authenticated, anyone can come along, spoof the MAC or IP, and then have access. If they commit a crime or copyright infringement, the user who entered the email address is now blamed.
Now, we could solve that by using WPA and requiring users to preregister. But as I said, we would like to allow anyone to just drive up and use it, and we don't want to provide any technical support.
The other alternative is not collecting email addresses, but then in case of an investigation or lawsuit, we wouldn't have anything to hand over, and thus risk the possibility of being shut down.
Is there any way out of this dilemma?
Collecting email would also be futile since you have no good way of confirming it without also providing compromised access. You should simply log the traffic that the user generates.
The answer is to not care about unsatisfiable demands from law enforcement for the personal information of your users. If that's not an acceptable answer, then the answer is to stop trying to provide a public access point. If that's not an acceptable answer either, then the answer is the proposed solution you already have. How you go about living with yourself afterward, for collecting personal information from law abiding people that will only ever be used by criminals to cover their tracks, is a personal matter and out of scope for this site. Good luck.
Having the end-user accept a legal disclaimer that you (the provider) are not responsible and they (the end-user) are responsible, and that they should not do illegal things, is usually good enough. Just log that they clicked "I agree" and their IP and MAC at the time. They should have to do this every time they connect.
Asking for an email is basically worthless; many will use a made-up email, or enter a typo, then complain they never got it - many will use a disposable email - many will use a junk account they create with one of the free webmail providers.
A system that sends their mobile phone a TXT message with a unique (random) code, and having that entered on the captive portal page to gain access is a better system IMHO. I've done this before and it works OK, except for kids who have mommy's iPad or another tablet but no phone. You save all this data for 90+ days, or however long your lawyers tell you.
Realize that implementing any of this significantly decreases the actual use of your hotspot, users don't have the patience and will be frustrated and abandon the process.
Most captive portal products can log the MAC and IP lease every client gets, and where they go on the Internet (at least that's how I do it), so if a legal request comes along, you can give law enforcement the data you have. It's up to law enforcement to then stake out or track down the device with that MAC, which depending on their competency level is possible, or impossible for them; either way it's not your job to do their job for them.
I also advocate filtering the obvious porn and malware domains, not just to save on bandwidth, but to limit your liability. Any good captive portal product can do this.
Your public wireless network should at the least be NAT'd to a separate static IP, so you can differentiate legal requests that reference that IP, as opposed to say your private office network. You can do this with separate firewalls, or a firewall that supports multiple LAN interfaces.
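As an added example (not from the answer above or from any captive portal product), the correlation the answer describes: a request naming the hotspot's public IP, a destination and a time is walked back through constructed NAT-flow, DHCP-lease and "I agree" logs to the device(s) that could have produced it. The log formats, addresses and timestamps are all constructed for this example.

```javascript
// Added example (not from the original answer): correlating the three logs a
// captive portal keeps (DHCP leases, "I agree" clicks, NAT flows) to answer a
// legal request that names the hotspot's public IP, a destination and a time.
// All log lines and addresses below are constructed for illustration.
const HOTSPOT_NAT_IP = '203.0.113.7'; // the hotspot's own static NAT address

const DHCP_LOG = `2024-03-01T14:00:02Z lease 10.8.0.23 aa:bb:cc:dd:ee:01 3600
2024-03-01T14:01:10Z lease 10.8.0.24 aa:bb:cc:dd:ee:02 3600
2024-03-01T15:02:00Z lease 10.8.0.23 aa:bb:cc:dd:ee:03 3600`;
const PORTAL_LOG = `2024-03-01T14:00:09Z agree aa:bb:cc:dd:ee:01 10.8.0.23
2024-03-01T14:01:15Z agree aa:bb:cc:dd:ee:02 10.8.0.24
2024-03-01T15:02:30Z agree aa:bb:cc:dd:ee:03 10.8.0.23`;
const NAT_LOG = `2024-03-01T14:05:11Z nat 10.8.0.23 -> 198.51.100.9:443 via 203.0.113.7
2024-03-01T14:05:40Z nat 10.8.0.24 -> 198.51.100.9:443 via 203.0.113.7
2024-03-01T15:10:00Z nat 10.8.0.23 -> 198.51.100.9:443 via 203.0.113.7`;

const lines = (text) => text.trim().split('\n').map((l) => l.split(/\s+/));
// A lease is valid from its start for the stated number of seconds.
const leases = lines(DHCP_LOG).map(([t, , ip, mac, secs]) =>
  ({ start: Date.parse(t), end: Date.parse(t) + Number(secs) * 1000, ip, mac }));
const agreements = lines(PORTAL_LOG).map(([t, , mac]) => ({ time: Date.parse(t), mac }));
const flows = lines(NAT_LOG).map(([t, , src, , dst]) =>
  ({ time: Date.parse(t), src, dst: dst.split(':')[0] }));

// Which lease covered this internal address at this instant? Leases expire and
// addresses are reissued, so the time matters as much as the IP.
function leasesFor(ip, atMs) {
  return leases.filter((l) => l.ip === ip && l.start <= atMs && atMs < l.end);
}

// Request: { publicIp, dest, at, windowSeconds }. Returns every device that
// reached dest from the hotspot within the window, and whether that device
// clicked "I agree" during the lease in question. Several candidates is a
// normal outcome: the logs identify devices, not people.
function answerRequest(req) {
  if (req.publicIp !== HOTSPOT_NAT_IP) return { scope: 'not-hotspot', candidates: [] };
  const at = Date.parse(req.at), w = req.windowSeconds * 1000;
  const candidates = [];
  for (const f of flows) {
    if (f.dst !== req.dest || Math.abs(f.time - at) > w) continue;
    for (const l of leasesFor(f.src, f.time)) {
      if (candidates.some((c) => c.mac === l.mac)) continue;
      const agreed = agreements.some((a) => a.mac === l.mac && a.time >= l.start && a.time < l.end);
      candidates.push({ mac: l.mac, internalIp: l.ip, flowAt: new Date(f.time).toISOString(),
        agreedToTerms: agreed });
    }
  }
  return { scope: 'hotspot', candidates };
}
```

The third request in the test below names an office address rather than the hotspot's NAT IP and is answered with `not-hotspot`, which is the separation the separate static IP buys you.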