Secure captive portal?

We would like to run a wireless access point for public use. However, in case of misbehavior, we would like to have some personal information that we could pass on to law enforcement.
The proposed solution involves a captive portal where users enter their email addresses, and are then given ten minutes to check their email and verify, after which they are given unrestricted access.
The problem, as I see it, is that once a user is authenticated, anyone can come along, spoof the MAC or IP, and then have access. If they commit a crime or copyright infringement, the user who entered the email address is now blamed.
Now, we could solve that by using WPA and requiring users to preregister. But as I said, we would like to allow anyone to just drive up and use it, and we don't want to provide any technical support.
The other alternative is not collecting email addresses, but then in case of an investigation or lawsuit, we wouldn't have anything to hand over, and thus risk the possibility of being shut down.
Is there any way out of this dilemma?

Collecting email addresses would also be futile, since you have no good way of confirming an address without first granting the very access you are trying to gate. You should simply log the traffic that each user generates.

The answer is to not care about unsatisfiable demands from law enforcement for the personal information of your users. If that's not an acceptable answer, then the answer is to stop trying to provide a public access point. If that's not an acceptable answer either, then the answer is the proposed solution you already have. How you go about living with yourself afterward, for collecting personal information from law-abiding people that will only ever be used by criminals to cover their tracks, is a personal matter and out of scope for this site. Good luck.

Having the end-user accept a legal disclaimer that you (the provider) are not responsible, that they (the end-user) are responsible, and that they should not do illegal things is usually good enough. Just log that they clicked "I agree" along with their IP and MAC at the time. They should have to do this every time they connect.
Asking for an email address is basically worthless; many will use a made-up address or enter a typo and then complain they never got it, many will use a disposable address, and many will use a junk account they create with one of the free webmail providers.
A system that sends a unique (random) code to the user's mobile phone in a text message, and requires that code to be entered on the captive portal page to gain access, is a better system IMHO. I've done this before and it works OK, except for kids who have mommy's iPad or another tablet but no phone. You save all this data for 90+ days, or however long your lawyers tell you.
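A minimal sketch of the code step, assuming a hypothetical send_sms() helper supplied by whatever SMS gateway you use:

```python
import secrets
import time

# In-memory store of pending verifications; a real portal would persist this
# (and the 90+ day audit log) in a database.
pending = {}  # phone_number -> (code, expiry_timestamp)

def start_verification(phone_number, send_sms):
    """Generate a random 6-digit code and text it to the user."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending[phone_number] = (code, time.time() + 600)  # valid for 10 minutes
    send_sms(phone_number, f"Your Wi-Fi access code is {code}")

def check_code(phone_number, submitted_code):
    """Return True if the submitted code matches and has not expired."""
    code, expiry = pending.get(phone_number, (None, 0))
    if code is None or time.time() > expiry:
        return False
    return secrets.compare_digest(code, submitted_code)
```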
Realize that implementing any of this significantly decreases the actual use of your hotspot; users don't have the patience, will get frustrated, and will abandon the process.
Most captive portal products can log the MAC and IP lease every client gets, and where they go on the Internet (at least that's how I do it), so if a legal request comes along, you can give law enforcement the data you have. It's then up to law enforcement to stake out or track down the device with that MAC, which, depending on their competency, may or may not be possible for them; either way, it's not your job to do their job for them.
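If your portal product doesn't already keep such records, even an append-only log of timestamp, MAC, lease IP, and destination is enough to answer a later request; a rough sketch, with an illustrative file name:

```python
import csv
import datetime

LOG_FILE = "hotspot-clients.csv"  # illustrative path; retain per your lawyers' advice

def log_client_event(mac, lease_ip, destination=""):
    """Append one record: when a client associated, or where it connected."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.utcnow().isoformat(),
            mac,
            lease_ip,
            destination,
        ])
```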
I also advocate filtering the obvious porn and malware domains, not just to save on bandwidth, but to limit your liability. Any good captive portal product can do this.
Your public wireless network should at the least be NAT'd to a separate static IP, so you can differentiate legal requests that reference that IP from ones that reference, say, your private office network. You can do this with separate firewalls, or a firewall that supports multiple LAN interfaces.

Related

How to identify whether visitors are unique?

I am trying to make an internet voting service, but the problem is that on the internet it is just so easy to cheat by creating multiple accounts and voting for the same thing. CAPTCHAs and email checks don't help, since a human can get past them in a few seconds; an IP address can be changed with a proxy; and if we put a cookie in the voter's browser, they just clear it the next time.
I created this question to ask for help with methods we can use, with the basic features that all browsers have (JavaScript, etc.), to prevent our service from being cheated easily.
The first idea I had myself: is it possible for my website to see all the cookies a user has in their browser just by visiting my site? Because when they clean everything with CCleaner to create new accounts, I can tell the browser is empty, so the person is perhaps a cheater, since most real users who come to my site always have at least several cookies from different sites.
There is no way to address the issue of uniquely identifying real-world assets (here: humans) without stepping out of your virtual system, by definition.
There are various ways to ensure a higher reliability of the mapping "one human to exactly one virtual identity", but none of them is fool-proof.
The most accessible way would be to do it via a smartphone app. A human usually only has one smartphone (and a phone number).
Another way is to send them snail mail to their real address, with a secret code, which you require them to enter in your virtual system.
Or their social insurance number.
Or their fingerprints as login credentials.
The list could go on, but the point is, these things are bound to the physical world. If you combine more such elements, you get a higher accuracy (but never 100% certainty).
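As an illustration of combining elements, you could weight each verified physical-world factor and require a threshold before treating an account as a unique person; the weights below are made up:

```python
# Hypothetical confidence weights for each verified physical-world factor.
FACTOR_WEIGHTS = {
    "sms_code": 0.4,        # code texted to a phone number
    "postal_code": 0.5,     # secret code sent by snail mail
    "government_id": 0.6,   # e.g. social insurance number checked
}

def uniqueness_score(verified_factors):
    """Combine verified factors; never reaches certainty (capped below 1.0)."""
    score = sum(FACTOR_WEIGHTS.get(f, 0.0) for f in verified_factors)
    return min(score, 0.99)

# Example: a voter verified by SMS and by postal mail.
print(uniqueness_score({"sms_code", "postal_code"}))  # 0.9
```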

What is the simplest protocol to securely tether a hardware device to a network?

After the Sony PSN debacle, I am trying to find examples of secure hardware tethering to a network. There are two use cases in particular:
1- a computer downloads a piece of software that then uniquely and securely identifies it to a cloud service
2- a hardware manufacturer uniquely labels a hardware device that then negotiates membership on the network.
Given the fact that the hardware device might have to change (revoke or service enhancements) it feels like #2 becomes #1.
The broad outline is this:
- connect to the service via HTTPS to protect against man in the middle
- device generates a GUID and presents it via HTTPS to service
- service records GUID against account
- on success, service 'enables' device
But how do you protect the GUID so that it cannot be stolen?
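To make that outline concrete, here is roughly the enrollment call I have in mind (the endpoint and field names are made up, and requests is the third-party HTTP client):

```python
import uuid
import requests  # third-party HTTP client

SERVICE_URL = "https://example.com/api/enroll"  # hypothetical endpoint

def enroll_device(account_token):
    """Generate a GUID on the device and register it with the service over HTTPS."""
    device_guid = str(uuid.uuid4())
    resp = requests.post(
        SERVICE_URL,
        json={"account": account_token, "device_guid": device_guid},
        timeout=10,
    )
    resp.raise_for_status()
    # The service records the GUID against the account and "enables" the device.
    return device_guid
```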
I just wanted to comment here:
Sony's PSN issues started with horrible practices with regards to their QA environment.
First, they defaulted to trusting anything that was sent to those servers using their developer toolkit. The reason they did this was that the dev kit used to cost upwards of $10k US, and therefore they thought anyone who paid that amount would be on the up and up. However, when they radically lowered the price, things changed externally and they didn't account for it.
The second issue with PSN was that the security between QA and live was, well, weak at best and easily circumvented. My understanding is that you could send commands to live using QA credentials. Because QA credentials were used, all chargeable actions were approved without money changing hands and the actions were applied to live accounts. When several people told Sony about this they did nothing.
A third issue was a reliance on hardware-based encryption keys. Even hardware encryption keys installed on the devices can be figured out.
Point is, Sony dug their own grave on it, so I wouldn't use anything they did as a template for how to do things. Heck, a lot of their websites were open to SQL injection, which in this day and age should get you fired.
Another example here is the iPhone. Each iPhone has a unique identifier that installed apps can grab and send back across the network, similar to a serial number. Some apps use this ID to try to tie a particular device to a person. However, it's trivial to create IDs and broadcast them, so this hasn't worked out so well for the partners. Also, Apple does not give app producers a way to ensure a given ID (UUID) is valid.
A third example is mobile phone carriers. They use a particular ID baked into your SIM card to identify your account in order to know who to bill when a call is made. This ID is verified whenever the phone checks in with the network. However, we're dealing with radio signals, and any device that can broadcast a correct ID can gain access. Point is, honest people think that only AT&T-approved devices can get on an AT&T network. Reality is, anything can, but they are going to bill the owner of that particular ID...
That said, any software you have running on a remote device that is not under your direct control is likely to be hacked. The popularity of the device will increase the likelihood of it happening sooner rather than later.
Where do we go from here?
On a basic level you associate an ID with an account in your service. PSN, Apple and others have done this. When an ID is broadcast, you need to verify that it exists AND that it's tied to an active account. If both pass then you have two options: either perform the action requested OR request additional verification.
For any actions that require money to be spent, do the additional verification (usually some form of username/password), capture the funds, then perform the action. Go one step further and every time a bad login is entered, send an email to the user on file. Further, automatically send a receipt. These are typically done so that your honest users can tell when something is going on.
Anything else just let through.
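A sketch of that decision flow, with made-up stores and stub helpers standing in for a real database, password check, and mail provider:

```python
# Illustrative in-memory stores; a real service would use a database,
# hashed passwords, and a proper mail provider.
devices = {}   # device_id -> account_id
accounts = {}  # account_id -> {"active": bool, "email": str, "password": str}

def notify(email, message):
    print(f"mail to {email}: {message}")  # stand-in for sending email

def handle_request(device_id, action, costs_money=False, credentials=None):
    """Verify the ID exists and maps to an active account, then tier the checks."""
    account_id = devices.get(device_id)
    account = accounts.get(account_id)
    if account is None or not account["active"]:
        return "rejected"                         # unknown ID or inactive account

    if costs_money:
        if credentials != account["password"]:    # stand-in for a real password check
            notify(account["email"], "Failed login attempt on your account")
            return "needs_verification"           # extra check for chargeable actions
        notify(account["email"], f"Receipt for {action}")

    return f"performed {action}"
```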
Bearing in mind, of course, that QA credentials should NOT work in your Live environment. Those systems should not be tied to each other under any condition and, quite frankly, should even live on separate hardware. In other words, QA and Live should NOT share a login database.
The thing here is that you shouldn't care about the device itself; just the account. You can't control the device as it's out of your hands; heck you can't even be sure it hasn't been physically tampered with. (XBox has been fighting this one with people adding resistors or burning out certain components to get past physical security features).
So, IMHO, do a bit to keep honest people honest but overall don't worry about it. Now, you should transfer everything via SSL or some other encrypted connection between the device and your cloud so that you don't leak IDs to anyone who wants to grab them. This will help protect those honest people.
Further, you shouldn't have a direct way to query whether an ID is valid or not from the outside. This will make it a bit more difficult for a hacker to find existing valid IDs and take over accounts. If you want to get fancy you could honeypot those and track the hackers down in order to sue them into oblivion, but that takes time and resources companies don't normally have. Also, you could log all of the requests that contained bad IDs and use that to track hackers down.
Note that even after the device has been "enabled" I still suggest you have two levels of authentication. The first is for simple actions like downloading free content; the second kicks in anytime there is a fee associated. Again, we're trying to protect your honest subscribers.
For the dishonest ones you will have to apply some statistical analysis on the transactions coming across. Things like the transaction rate can help identify bots that are running and allow you to kill their IDs. There are others but they'll be unique to your application.
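As one example of that kind of analysis, a simple sliding-window rate check per ID can flag bot-like behaviour; the window and threshold below are made up:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60          # made-up window
MAX_TRANSACTIONS = 20        # made-up threshold; tune to your application

recent = defaultdict(deque)  # device_id -> timestamps of recent transactions

def looks_like_a_bot(device_id, now=None):
    """Record a transaction and flag IDs that exceed the per-window rate."""
    now = now if now is not None else time.time()
    window = recent[device_id]
    window.append(now)
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_TRANSACTIONS
```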
This was long-winded. But my point is:
You can't secure the ID or anything else you pass out.
You can't ensure the requests are coming from your devices or your own approved devices.
You better take actions to keep QA and production separate for those building software for these devices using your services.
You better take actions to protect your normal honest users.
Trust NOTHING.
Due to the above, you should evaluate your business model so that you don't care what device was used and instead focus on the individual accounts themselves, which you do have control over.
I am not sure I entirely understand the question, but I think you want some sort of device to hold on to a GUID assigned to it by a web service, and you don't want someone finding out what that GUID is, correct?
If so, there isn't a lot you can do. You have already mentioned one option... using HTTPS during the assigning of the ID. That is a good start, but remember that anyone who has physical access to the device can do a lot of things to look up this ID.
In short, it is impossible to completely hide. Someone can always reverse engineer it. There are folks out there reading data right out of memory with hardware.

How can you ensure that a user knows they are on your website?

The talk of internet town today is the SNAFU that led to dozens of Facebook users being led by Google search to an article on ReadWriteWeb about the Facebook-AOL deal. What ensued in the comment thread is quickly becoming the stuff of internet legend.
However, behind the hilarity is a scary fact: this might be how users browse to all sites, including their banking and other more important sites. A quick search for "my bank website login", a quick click on the first result, and once they are there, the user is willing to submit their credentials even though the site looks nothing like the site they tried to reach. (This is evidenced by the fact that the users' comments are connected to their Facebook accounts via Facebook Connect.)
Preventing this scenario is pretty much out of our control and educating our users on the basics of internet browsing may be just as impossible. So how then can we ensure that users know they are on the correct web site before trying to log in? Is something like Bank of America's SiteKey sufficient, or is that another cop-out that shifts responsibility back on the user?
The Internet and web browsers used to have a couple of cool features that might actually have some applicability there.
One was something called "domain names." Instead of entering the website name over on the right side of your toolbar, there was another, larger text field on the left where you could enter it. Rather than searching a proprietary Google database running on vast farms of Magic 8-Balls, this arcane "address" field consulted an authoritative registry of "domain names", and would lead you to the right site every time. Sadly, it sometimes required you to enter up to 8 extra characters! This burden was too much for most users to shoulder, and this cumbersome feature has been abandoned.
Another thing you used to see in browsers was something called a "bookmark." Etymologists are still trying to determine where the term "bookmark" originated. They suspect it has something to do with paper with funny squiggles on it. Anyway, these bookmarks allowed users to create a button that would take them directly to the web site of interest. Of course, creating a bookmark was a tedious, intimidating process, sometimes requiring as many as two menu clicks—or worse yet, use of the Ctrl-key!
Ah, the wonders of the ancients.
The site could "personalize" itself by showing some personal information, easily recognizable by the user, on every page. There are plenty of ways to implement it. The obvious one: on the first visit, the site requires the user to upload an avatar and adds the user's ID to a cookie. After that, every time the user browses the site, the avatar is shown.
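A minimal sketch of that idea, assuming a Flask application where the uploaded avatars are already stored server-side under each user's ID:

```python
from flask import Flask, request, make_response, render_template_string

app = Flask(__name__)

# Pretend these avatars were uploaded on first visit and stored by user ID.
avatars = {"42": "/static/avatars/42.png"}

@app.route("/")
def front_page():
    user_id = request.cookies.get("user_id")
    avatar = avatars.get(user_id)
    # Show the user's own avatar on every page so they recognize the real site.
    return render_template_string(
        "<img src='{{ avatar }}'>" if avatar else "Welcome, new visitor",
        avatar=avatar,
    )

@app.route("/first-visit/<user_id>")
def first_visit(user_id):
    resp = make_response("Avatar registered")
    resp.set_cookie("user_id", user_id, secure=True, httponly=True)
    return resp
```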
When I set up my online bank account, it asked me to choose from a selection of images. The image I chose is now shown to me every time I login. This assures me that I am on the right website.
EDIT: I just read the link about the BoA SiteKey; this is apparently the same thing (from the name, it sounded like a challenge-response dongle).
I suppose the best answer would be a hardware device which required a code from the bank and the user and authenticated both. But any of these things assume that people are actually thinking about the problem, which of course they aren't. This was going on before internet banking was common - I had a friend whose wallet was stolen back in the 90s, and the thief phoned her pretending to be her bank and persuaded her to reveal her PIN...
When the user first visits the site and logs in, he can share some personal information (even something very trivial) that imposter sites couldn't possibly know - high school mascot, first street lived on, etc.
If there's ever any question of site authenticity, the site could share this information back to the user.
Like on TV shows/movies with the evil twin. The good twin always wins trust by sharing a secret that only the real twin and the person trying to tell them apart would know.
You cannot prevent phishing per se, but you can take several steps, each of which does a little bit to mitigate the problem.
1) If you have something like SiteKey or a sign-in seal, please ensure that these cannot be iframed on a malicious website (see the sketch at the end of this answer). JavaScript frame-busting alone may not be enough, as IE has security="restricted".
2) Be very consistent about how you ask for user credentials: serve the login form over SSL (not just the post-back over SSL). Do not ask for login in several different places or sites. Encourage third parties who want to work with user data stored on your site to use OAuth (instead of taking your users' passwords).
3) You should never ask for information via email (with or without link).
4) Have a security page where you talk about these issues.
5) Send notification on changes to registered phone, email, etc.
Apart from the above, monitor user account activity, such as changes to contact information, security Q&A, and access patterns, noting time, IP, and so on (there are several subtle techniques).
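For point 1 in particular, the server-side part is telling browsers that the page showing the seal may not be framed at all; a minimal sketch, assuming a Flask application:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def forbid_framing(response):
    # Belt and braces: the legacy header plus its CSP equivalent.
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    return response
```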

Detecting login credentials abuse

I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website.
The problem is that our organization's membership includes both big companies and amateur “clubs” (it's a relatively new industry…).
It is clear that those clubs will share the login ID they will use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all its members get on the website).
I have thought about logging, along with each sign-on, the IP address as well as the OS and the browser used; if the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers.
But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated far, and there would then be cause for action, such as changing the password.
Of course, it is extremely annoying when your password is unilaterally changed. So, for this problem, I thought about allowing the “clubs” to manage their own list of sub-accounts; if abuse is suspected, the user responsible could then be easily pinned down, and this “sub-member” alone would face the annoyance of a password change.
Question:
What potential problems would anyone see with such an approach?
Any particular reason why you can't force each club member to register (just straight-up, not necessarily as a sub or a similar complex structure)? Perhaps give each club some sort of code to use just when the users register so you can automatically create their accounts and affiliate them with a club, but you then have direct accounting of each member without an onerous process that the club has to manage themselves. Then it's much easier to determine if a given account is being spread around (disparate IP accesses in given periods of time).
Clearly then you can also set a limit on the number of affiliated accounts per club, should you want to do so. This is basically what you've suggested, I suppose, but I would try to keep any onerous management tasks out of the hands of your users if at all possible. If you can manage club-affiliated signups, you should, rather than forcing someone at the club to manage them for you.
Also, while some sort of heuristic based on IP and credentials is probably fine, I would stay away from incorporating user-agent, or at least caring too much about it. Seeing a few different UAs from the same IP - depending on your expected userbase, I suppose - isn't really that unusual. I use several browsers in the course of my day due to website bugs, etc. and unless someone is using a machine as a proxy, it's not evidence of anything nefarious.
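A rough sketch of that check, counting distinct IPs seen per account within a time window; the window and threshold are illustrative:

```python
import datetime
from collections import defaultdict

WINDOW = datetime.timedelta(days=7)   # illustrative window
MAX_DISTINCT_IPS = 10                 # illustrative threshold

logins = defaultdict(list)            # account_id -> [(timestamp, ip), ...]

def record_login(account_id, ip, when=None):
    logins[account_id].append((when or datetime.datetime.utcnow(), ip))

def looks_shared(account_id, now=None):
    """Flag an account whose recent logins come from too many distinct IPs."""
    now = now or datetime.datetime.utcnow()
    recent_ips = {ip for when, ip in logins[account_id] if now - when <= WINDOW}
    return len(recent_ips) > MAX_DISTINCT_IPS
```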

What kind of damage could one do with a payment gateway API login and transaction key?

Currently, I'm in the process of hiring a web developer who will be working on a site that processes credit cards. While he won't have the credentials to log into the payment gateway's UI he will have access to the API login and transaction key since it's embedded in the application's code.
I'd like to be aware of all the "what if" scenarios pertaining to the type of damage one could do with that information. Obviously, he can process credit cards but the money goes into the site owner's bank account so I'm not sure how much damage that could cause. Can anyone think of any other possible scenarios?
UPDATE: The payment gateway being used is Authorize.net.
Do they really need access to your production sites?
Don't store the key in your code; store it in your production database, or in a file on the production server.
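One way to do that, assuming the key lives in an environment variable or in a file readable only on the production box (both names here are illustrative):

```python
import os

def load_transaction_key():
    """Read the gateway transaction key from the environment or a protected file,
    so it never lives in the application's source code or version control."""
    key = os.environ.get("AUTHNET_TRANSACTION_KEY")  # illustrative variable name
    if key:
        return key
    with open("/etc/myapp/authnet_transaction_key") as f:  # illustrative path
        return f.read().strip()
```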
Some good answers here, I'll just add that you'd probably have some trouble with PCI.
PCI-DSS specifically dictates separation of duties, isolation of production environments from dev/test, protection of encryption keys from anyone who does not require it, and more.
As #Matthew Watson said, rethink this, and don't grant production access to developers.
As an aside, if he can access the API directly, how do you ensure that "the money goes into the site owner's bank account"? Not to mention access to all that credit card data...
If the developer gets access to the raw credit card numbers that can become a bigger problem as your site can be associated with fraudulent activity, assuming the developer is a bad apple. (They could redirect account numbers, CCV, expiration date to another site, though this should be spottable through network tools and a comprehensive code review.)
Does the API perform the "$1.00" charge (or "$X.XX") to verify that a credit card can be charged a certain amount (and thus returning the result to the caller, such as "yes" or "no")? If so, it could be used to automate the validation of credit card account numbers traded on the Internet and abuse of such a system could lead back to you.
With any gateway I have worked with, the payment processor ties the API key to the specific IP or IP range of the merchant's site. With that said, unless the malicious(?) code in question is executed on the same server as the merchant's, there shouldn't be any security concerns in that regard.
If this is not the case for your merchant site - contact them and ask if this is feasible.
Does the payment gateway allow for reversal of charges? If so there is the possibility of a number of scams being run.
Does the site process refunds? Will it ever in the future?
If we're talking about nefarious uses, then the site owner might be investigated if lots of unauthorized purchases are made. How would that affect you if the owner is investigated?
From your description it seems that this developer will have access to the customers' card details, in which case the customers' privacy may be compromised. You might consider wording the contract appropriately to make sure that this angle is covered.
However the main point is that if you're working on a sensitive project/information it's better for you to find people you could trust. Hiring a software house to do the job may save you some sleep later on.
First and foremost, it is best that you never store this type of information in plain text. Usually people treat this as a given only for credit card numbers (sadly, only because of legal reasons), but any sort of private data that you don't want others with database or source-code access viewing should be encrypted. You should store the account information somewhere in a well-encrypted format, and you should provide a test account for your developers to use on their development workstations. This way, only people with server access are able to see even the encrypted information.
That way, the developer's workstation can have a local database with the test account's API information stored in it (hopefully encrypted), but when the code is mirrored onto the production server it will still use the live, real gateway information stored in the production server's database, without extra code or configuration.
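A minimal sketch of keeping the stored gateway credentials encrypted at rest, assuming the third-party cryptography package; the master key itself would live only on the production server:

```python
from cryptography.fernet import Fernet

# Generated once and kept only on the production server (never in source control).
master_key = Fernet.generate_key()
cipher = Fernet(master_key)

# What goes into the database:
stored = cipher.encrypt(b"api_login:transaction_key")

# What the application sees at runtime:
api_credentials = cipher.decrypt(stored).decode()
```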
With this said, I don't think that a programmer with API authentication details can do too much. Either way, it's not worth the risk - in my opinion.
Hope this helps.
PS: If something bad does end up happening, you can always generate a new key in the web interface on Authorize.net after you've taken precautions to make sure it won't happen again.
In the specific case of Authorize.Net, they would not be able to process credits towards their own credit cards, since Authorize.Net only allows this to be done on transactions performed through them within the last six months. The only exception is if you are granted an exception for unlinked refunds. If you have signed the proper paperwork for this and someone has your API login and transaction key, then they can process credits towards their own credit cards. The only way for you to catch this would be to monitor your statements carefully.
To help mitigate this you should change your transaction key immediately upon completion of the work they perform for you. That would render the key they have useless after 24 hours.
