What algorithm or set of heuristics can a server and a mobile app use so that the server can always be fairly certain that the app is used within the boundaries of a given geographic region (e.g. a country)? How can the server ensure that app users outside of the defined region cannot falsely claim that they are inside the region?
You can't be 100% sure that a user isn't reporting a fake location; you can only make faking it as difficult as possible. You should implement several checks, depending on the data you have access to:
1) the user's IP address (the user can use a proxy)
2) the device's GPS coordinates (they can be spoofed)
3) the device's locale (not a reliable indicator)
One of the most secure checks (but also not 100%) is sending the user an SMS with a confirmation code, which they have to type into the app.
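A minimal sketch of that SMS flow in Python, assuming a hypothetical `send_sms()` gateway function and an in-memory store (use a database or cache in production):

```python
import secrets
import time

# In-memory store of pending codes; use a database or cache in production.
pending_codes = {}  # phone_number -> (code, expiry_timestamp)

def start_verification(phone_number, send_sms):
    # `send_sms` is a hypothetical gateway function (e.g. a wrapper around
    # an SMS provider's API); generate a short-lived 6-digit code and text it.
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending_codes[phone_number] = (code, time.time() + 300)  # valid 5 minutes
    send_sms(phone_number, f"Your confirmation code is {code}")

def verify(phone_number, submitted_code):
    # Check the code the user typed into the app; codes are single-use.
    entry = pending_codes.pop(phone_number, None)
    if entry is None:
        return False
    code, expiry = entry
    return time.time() < expiry and secrets.compare_digest(code, submitted_code)
```

The mobile number's country prefix also loosely ties the user to a region, which is why this check is harder to fake without a local SIM.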
One of the most sophisticated algorithms known to me is the one in Google Play (which is why some apps are only available in certain countries). It checks parameters such as the IP address, the user's mobile operator, and several others, but there are tools (like Market Enabler) and techniques that can trick the system.
If you don't want to use Google Play or the other approaches, the best way is to use Cloudflare. I say best because, first, it costs nothing performance-wise or money-wise; second, it is easy to use; and third, you will need it anyway if you expect a large number of users, since it provides nice tools such as a static cache, an optimizer, analytics, user blocking, country blocking, etc.
Once you sign up for a free Cloudflare account, you can set up your server's public IP address there so that all traffic comes through Cloudflare's proxy network.
After that everything is pretty straightforward: you can install the Cloudflare module on your server.
In your app, you can get the visitor's country code from the CF-IPCountry request header that Cloudflare adds to every request; for example, $_SERVER['HTTP_CF_IPCOUNTRY'] in PHP. It will give you AU for Australia (ISO 3166-1 alpha-2 country codes). It doesn't matter what language you use.
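For illustration, here is the same check outside PHP, as a minimal Python/Flask sketch; the allowed-country set is an assumption, and the only real interface used is the CF-IPCountry header Cloudflare adds:

```python
from flask import Flask, abort, request

app = Flask(__name__)
ALLOWED_COUNTRIES = {"AU"}  # assumed allow-list of ISO 3166-1 alpha-2 codes

@app.before_request
def restrict_by_country():
    # Cloudflare sets CF-IPCountry when IP geolocation is enabled;
    # "XX" means unknown and "T1" marks Tor exit nodes.
    country = request.headers.get("CF-IPCountry", "XX")
    if country not in ALLOWED_COUNTRIES:
        abort(403, description=f"Service is not available in {country}")
```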
Cloudflare's IP database is frequently updated and seems very reliable for detecting a user's geolocation without performance overhead.
You also get free protection from attacks, plus free cache and CDN features for fast loading, etc.
I have used several other approaches, but none of them was quite reliable.
If your app runs without a server, you can still put a file on a server and make a call to the remote URL to get the user's country on each request.
Apart from the things that #bzz mentioned, you can read the SSIDs of the user's nearby Wi-Fi networks; services like http://www.skyhookwireless.com/ provide an API (I think with browser plugins, I am not sure) that you can use to get a location by submitting those SSIDs.
If you need the user to be within a specific region all the time while using the app, you'll probably end up using all the options together; if you just need a one-time check, the SMS-based approach is the best one IMO.
For accessing the Wi-Fi SSID, refer to this; still, you cannot be 100% sure.
If you log into a platform (Twitch, Blizzard, Steam, Most Crypto exchanges, Most Banks) from a new device you'll typically get an email stating so.
As far as my knowledge goes, the only information you can get on a request is
IP address
Device Operating system & version
Browser type & version
Are these platforms basing their "unique" users off of this information alone, or is there more information that can be gathered?
From a security perspective the largest thing is your identity or how you authenticate. That's king. The email stating "hey this is a new device" I've seen handled differently from site to site. Most commonly it's actually browser cache and I see banks specifically use browser cache to store these kinds of tokens. Otherwise every time your cellphone connected to a new cell tower you'd likely be flagged as different. They're not necessarily the same as an authentication token, rather it just says hey I've authenticated as this user to this site before. Since it's generated by the service provider, the service provider knows to trust it, and it's nearly impossible to hack (assuming it's implemented correctly).
From my own experience, the operating system and browser type are more record-keeping than actionable insight; however, you could build a security system that takes into account IP addresses from very different geolocations. I.e., why is this guy from the US logging in from China? He just logged in from California 3 hours ago; this is impossible. I don't believe most sites really go to that extent though. I do see MFA providers saying "hey, there's a login from China, do you want to approve?". That workflow makes a lot more sense.
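A sketch of that "impossible travel" check in Python, assuming you already resolve each login's IP to latitude/longitude with some geolocation database:

```python
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle distance via the haversine formula; Earth radius ~6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def is_impossible_travel(prev_login, new_login, max_speed_kmh=900):
    # Each login is (latitude, longitude, unix_timestamp). Flag the new login
    # if the user would have had to move faster than an airliner (~900 km/h)
    # to get from the previous login location in time.
    lat1, lon1, t1 = prev_login
    lat2, lon2, t2 = new_login
    hours = max((t2 - t1) / 3600, 1e-6)  # avoid division by zero
    return distance_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh
```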
The last part of your question is tricky, regarding "unique users." Most calculate that based off the number of sessions opened (tabs), or in the case of Twitch (since you mentioned them specifically), the number of tabs that are streaming that video in. These open platforms where anyone without an account can stream the content obviously treat this differently than say Netflix that makes you authenticate and each account has a limited number of sessions that can be open.
AFAIK, most systems like this store a cookie in your browser when you log in (not the session cookie, just a random ID) that is also associated with your account in the provider's database; so when you come back and log in, they check whether you have that cookie set and whether the ID matches.
Then you can probably do some more advanced stuff with that ID, like deriving the value from the browser, OS, expiry date, and so on.
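A minimal sketch of that cookie scheme in Python/Flask; the store and the cookie name are assumptions:

```python
import secrets
from flask import Flask, request

app = Flask(__name__)
known_devices = {}  # user_id -> set of issued device IDs (use a DB in production)

def is_known_device(user_id):
    # True if the browser presents a device cookie we previously issued
    # to this account; if not, trigger the "new device" email.
    device_id = request.cookies.get("device_id")
    return device_id is not None and device_id in known_devices.get(user_id, set())

def remember_device(user_id, response):
    # Issue a fresh random ID, record it server-side, and set the cookie.
    device_id = secrets.token_urlsafe(32)
    known_devices.setdefault(user_id, set()).add(device_id)
    response.set_cookie("device_id", device_id, secure=True, httponly=True,
                        max_age=365 * 24 * 3600)
    return response
```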
I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices; it is not possible to securely prevent them from proxying or exploring what your client-side app does. Obfuscation might seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can easily observe that, if nowhere else then in the network packets (but usually much more easily).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
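A minimal sketch of such a backend relay in Python/Flask; the upstream URL is a placeholder, and the point is that the real URL and cookies never reach the client:

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder for the real upstream service; the client never sees this.
UPSTREAM_URL = "https://service.example.com/api"
upstream_session = requests.Session()  # upstream cookies live here, server-side

@app.post("/relay")
def relay():
    # The Electron client sends only an opaque payload to your backend;
    # the real URL, cookies, and credentials stay on this server.
    payload = request.get_json()
    resp = upstream_session.post(UPSTREAM_URL, json=payload, timeout=10)
    return jsonify(resp.json()), resp.status_code
```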
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do (effective just against wannabes, I'm afraid) is first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP, if the server exists, or properly fails if the server never existed. Anything else would be a telltale that someone has taken over the network layer, and at that point you could connect to a different server, making different calls, and lament that the server isn't answering properly.
Strings in memory can be (air quote) protected (end air quote) by having them available only for the shortest time and otherwise stored in a different form. You can, for example, take a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL with the sequence. You can then reconstruct the URL every time you need it, remembering to clear it out of any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different decoy URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
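A sketch of that XOR split in Python; treat it as an illustration of the idea, since a garbage-collected language cannot guarantee the intermediate copies are really gone:

```python
import os

def split_secret(plaintext: bytes):
    # Store a random pad plus the XOR of the secret with the pad; neither
    # half alone reveals the URL in a memory dump.
    pad = os.urandom(len(plaintext))
    masked = bytes(a ^ b for a, b in zip(plaintext, pad))
    return pad, masked

def reconstruct(pad: bytes, masked: bytes) -> bytes:
    # Recombine only at the moment of use, then let the result go out of scope.
    return bytes(a ^ b for a, b in zip(pad, masked))

pad, masked = split_secret(b"https://api.example.com/v1/endpoint")  # made-up URL
url = reconstruct(pad, masked)
```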
Files, of course, can be encrypted with any one of several schemes - the files residing on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data were actually just gzipped in the clear, there was no password whatsoever. The guys that tried to decode the file thought it was a plain encrypted Zip file with the extension changed, wasted a significant amount of time with several Zip cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would be in outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: all outbound API calls could be stored in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.
I have CORS installed and only my website is whitelisted; how reliable is this? Can bad actors still call my API if they are not calling it from my website?
Next, I want to rate-limit each user on my website (the users are not registered or signed in): I want to restrict each user to no more than 1 request per second.
How can each user be identified? and then how can each user be limited?
Too many separate questions packaged together here. I'll tackle the ones I can:
I have CORS installed and only my website is whitelisted; how reliable is this? Can bad actors still call my API if they are not calling it from my website?
CORS only works with cooperating clients, which means browsers. Your API can be used by anybody else with a scripting tool, any programming language, or even a tool like curl. So CORS does not prevent bad actors at all. The only thing it prevents is people embedding calls to your API in their own web page JavaScript. It doesn't prevent anyone from accessing your API programmatically from whatever tool they want. And they could even use your API in their own website via a proxy. It's not much protection.
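To see why, here is roughly all it takes to call a CORS-protected API from outside a browser, sketched in Python with the requests library against a hypothetical endpoint:

```python
import requests

# Browsers enforce CORS; this script simply never looks at those headers.
resp = requests.get(
    "https://api.example.com/data",  # hypothetical endpoint
    headers={"Origin": "https://your-site.example"},  # any origin can be claimed
)
print(resp.status_code, resp.text)
```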
How can each user be identified? and then how can each user be limited?
Rate limiting works best when there's an authentication credential with each request, because that allows you to uniquely identify each request and ban or delay credentials that misbehave. If there are no credentials, you can try to set a cookie to track a given user, but cookies can be blocked or thrown away, even in browsers, to defeat that. So, without any sort of auth credential, you're stuck with just the requesting IP address. For some users (like home users), that's probably sufficient. But for corporate users, many, many users may present as the same corporate IP address (due to how their NAT or proxy works), so you can't tell one user at a major company from another purely by IP address. If you had a lot of users from one company simultaneously using the site, you could falsely trigger rate limiting.
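A minimal sketch of the per-IP limit (1 request per second, as asked) in Python, assuming a single server process; with multiple servers you would back this with a shared store such as Redis:

```python
import time
from collections import defaultdict

last_request = defaultdict(float)  # ip -> time of the last allowed request

def allow(ip: str, min_interval: float = 1.0) -> bool:
    # Allow at most one request per `min_interval` seconds per IP. Keep the
    # corporate-NAT caveat in mind: one IP address may be many users.
    now = time.monotonic()
    if now - last_request[ip] < min_interval:
        return False
    last_request[ip] = now
    return True
```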
I'm currently developing a web application that has one feature which allows input from anonymous users (no authorization required). I realize that this may have security risks, such as repeated arbitrary inputs (e.g. spam) or users posting malicious content. To remedy this, I'm trying to create a system that keeps track of what each anonymous user has posted.
So far all I can think of is tracking by IP, but it seems as though it may not be viable due to dynamic IPs, are there any other solutions for anonymous user tracking?
I would recommend requiring them to answer a CAPTCHA before posting, or after an unusual number of posts from a single IP address.
"A CAPTCHA is a program that protects websites against bots by generating and grading tests >that humans can pass but current computer programs cannot. For example, humans can read >distorted text as the one shown below, but current computer programs can't"
That way the spammers are actual humans. That will slow the firehose to a level where you can weed out anything that does get through.
http://www.captcha.net/
There are two main ways: client-side and server-side. Tracking the IP is all I can think of server-side; client-side there are more accurate options, but they are all under the user's control, and the user can re-anonymise themselves (it's their machine, after all): cookies and storage come to mind.
Drop a cookie with an ID on it. Sure, cookies can be deleted, but this at least gives you something.
My suggestion is:
Use cookies for tracking user identity. As you yourself have said, due to dynamic IP addresses, you can't reliably use IPs for tracking user identity.
To detect and curb spam, use the IP + browser user-agent combination (see the sketch below).
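A sketch of that combination used as a tracking key, with a simple posting counter; the thresholds are made-up numbers:

```python
import hashlib
import time
from collections import defaultdict

post_log = defaultdict(list)  # tracking key -> timestamps of recent posts

def tracking_key(ip: str, user_agent: str) -> str:
    # Coarser than a cookie, but it survives cookie deletion.
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def looks_like_spam(ip, user_agent, max_posts=5, window_seconds=60):
    # Flag more than `max_posts` posts in `window_seconds` from one key.
    key = tracking_key(ip, user_agent)
    now = time.time()
    post_log[key] = [t for t in post_log[key] if now - t < window_seconds]
    post_log[key].append(now)
    return len(post_log[key]) > max_posts
```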
Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How to write a program or script that can make excessive requests to our website? I need it because it will help me to test our web application.
How to restrict a user who made some unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she made any recent requests? If not, is there a general way to identify a requester (not specific to Ruby on Rails) and keep track of the requester's activities? Can I identify a user by ip address or cookies or some other information I can gather from her machine? We also hope that we can distinguish normal users (who make infrequent requests) from automatic robots (who make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that are made invisible to the user through CSS hacks.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in that field, you immediately know it's not a legit user and can ignore the request.
This is not a complete security solution but one trick that you can add to the arsenal.
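A sketch of the honeypot check; the field name and CSS are invented for illustration, and any server-side language works the same way:

```python
# The form includes a field that real users never see. Hide it with CSS
# rather than type="hidden", since many bots skip obviously hidden inputs:
#
#   <input type="text" name="website" class="hp-field" autocomplete="off">
#   <style>.hp-field { position: absolute; left: -9999px; }</style>

def is_bot(form_data: dict) -> bool:
    # A human leaves the invisible "website" field empty; many bots fill it in.
    return bool(form_data.get("website", "").strip())
```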
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on the platform, this can be simple; it also depends on your language of choice for task 1).
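For #1, a sketch of such a script in Python with the requests library; the URL and form field names are placeholders for your app's login form:

```python
import requests

LOGIN_URL = "https://your-app.example/login"  # placeholder URL and field names

def hammer_login(attempts=20):
    # Fire repeated bogus login attempts to exercise the throttling logic.
    session = requests.Session()
    for i in range(attempts):
        resp = session.post(LOGIN_URL, data={"email": "test@example.com",
                                             "password": f"wrong-{i}"})
        print(i, resp.status_code)  # expect throttled responses after a while

if __name__ == "__main__":
    hammer_login()
```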
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there is not one, it should be simple enough to create a solution.
If session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue or not depends largely on your business needs).
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with UserAgent strings.
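A sketch of the lockout logic itself, keyed by IP + UserAgent as suggested above; the thresholds are arbitrary:

```python
import time
from collections import defaultdict

failed_logins = defaultdict(list)  # key -> timestamps of recent failures

def key_for(ip: str, user_agent: str) -> str:
    return f"{ip}|{user_agent}"

def record_failure(ip, user_agent):
    failed_logins[key_for(ip, user_agent)].append(time.time())

def is_locked_out(ip, user_agent, max_failures=5, window=900):
    # Ignore login attempts for 15 minutes after 5 recent failures.
    now = time.time()
    recent = [t for t in failed_logins[key_for(ip, user_agent)] if now - t < window]
    failed_logins[key_for(ip, user_agent)] = recent
    return len(recent) >= max_failures
```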
RoR provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) should be sufficient and not too much data to store. Unless some hacker has access to a great many IP addresses... But in such situations you'd need more serious security measures, I guess.