For a particular client/server game written in C++, I would like to develop a 'Login Server', so players can be individually tracked by a game server. However, I am averse to reinventing the wheel, and although I have unorthodox requirements, I want to know if what I am looking for has an already established implementation.
What I want is something like OpenID, where there is no authoritative login server. I want there to potentially be many login servers, and all a game server knows for sure is that the guy who used a specific login server with a specific username is the same as the guy who used that login server with that username last time.
Well, why not use OpenID, since I mentioned it by name? It's too web-centric. I don't want to put a browser in my game or in the launcher in order for people to pass their credentials to the login server when they want to play on a specific game server. In fact, a system that is agnostic to the protocol would be preferred, so the login server, game client and game server can all communicate using the same UDP infrastructure that the game client and server already have in place.
Some guidance in this area would be appreciated. I'd really prefer not to have to come up with the entire system myself, because authentication and security are tough problems.
OpenID binds your users to particular servers. For example, if a user signs in with an identity like http://openid.net/theusername, you can store that identifier, but from then on the user has to keep using the same authentication server; if that server goes down, migrating the account to another identity (another provider) is difficult.
On the other hand, if you rely on OAuth2 authentication and make email your primary recognized identity, users can authenticate with a dozen different providers you set up (Google, Facebook, Twitter, LinkedIn), and you can trust those providers to have verified the email before authentication (this is how identity providers work).
I would therefore strongly recommend OAuth2. You still don't impose any specific provider and let your users pick their favourite one, while at the same time you get a reasonably reliable piece of information, the email address, which can also be used as a communication channel to send information from your application to your users.
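To make the email-as-identity idea concrete, here is a minimal TypeScript sketch (language aside, the same logic applies to a C++ game backend). It assumes the ID token has already been validated (signature, issuer, audience) by whatever OAuth2/OpenID Connect library you use; the in-memory store and helper names are purely illustrative.

```typescript
import { randomUUID } from 'crypto';

interface IdTokenClaims {
  iss: string;              // which provider authenticated the user
  sub: string;              // provider-specific user id
  email?: string;
  email_verified?: boolean;
}

interface User { id: string; email: string; }

const usersByEmail = new Map<string, User>();

// Treat the verified email as the primary identity, so the same person can
// log in through Google today and Facebook tomorrow and land on one account.
export function resolveAccount(claims: IdTokenClaims): User {
  if (!claims.email || claims.email_verified !== true) {
    throw new Error('provider did not supply a verified email');
  }
  const existing = usersByEmail.get(claims.email);
  if (existing) return existing;
  const created: User = { id: randomUUID(), email: claims.email };
  usersByEmail.set(claims.email, created);
  return created;
}
```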
Thanks for your help in advance.
I'm using React Native and Node.js to deliver a product for my company.
I've setup the steps on the backend to retrieve a password, validate it and respond with a token. The only problem is - the password I use on the front end (mobile app) to be validated by the back end is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it can not be sniffed out by a hacker and used to compromise the backend?
My research so far:
Embedded in strings.xml
Hidden in Source Code
Hidden in BuildConfigs
Using Proguard
Disguised/Encrypted Strings
Hidden in Native Libraries
http://rammic.github.io/2015/07/28/hiding-secrets-in-android-apps/
These methods are basically useless because hackers can easily circumvent them.
https://github.com/oblador/react-native-keychain
Although this may obfuscate keys, they still have to be hardcoded somewhere, making this kind of useless, unless I'm missing something.
I could use a .env file
https://github.com/luggit/react-native-config
Again, I feel like the hacker can still view secret keys, even if they are saved in a .env
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
What suggestions do you have to protect the world (react-native apps) from pesky hackers, when they're stealing keys and using them inappropriately?
Your Question
I've setup the steps on the backend to retrieve a password, validate it and respond with a token. The only problem is - the password I use on the front end (mobile app) to be validated by the back end is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it can not be sniffed out by a hacker and used to compromise the backend?
The cruel truth is... you can't!!!
It seems that you have already done some extensive research on the subject, and in my opinion you have mentioned one effective way of shipping your App with an embedded secret:
Hidden in Native Libraries
But as you also say:
These methods are basically useless because hackers can easily circumvent them.
Some are useless and others make reverse engineering the secret from the mobile app a lot harder. As I wrote here, the approach of using the native interfaces to hide the secret requires more expertise to reverse engineer, but if the binary is hard to reverse engineer you can always resort to a man in the middle (MitM) attack to steal the secret, as I show here for retrieving a secret hidden in the mobile app binary with the use of the native interfaces, JNI/NDK.
To protect your mobile app from a MitM you can employ Certificate Pinning:
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taken from Jon Larimer and Kenny Root's Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
You can read this series of react native articles that show you how to apply certificate pinning to protect the communication channel between your mobile app and the API server.
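The articles cover the react-native side; to show the generic shape of the pinning logic in a self-contained way, here is a minimal sketch using Node's https/tls APIs and the certificate's SHA-256 fingerprint. The host, path and pinned fingerprint are placeholders, not values from those articles; on React Native you would use a pinning library as the articles describe.

```typescript
import * as https from 'https';
import * as tls from 'tls';

// Replace with the fingerprint256 of your API server's certificate.
const PINNED_FINGERPRINT256 = 'REPLACE_WITH_SHA256_FINGERPRINT_OF_SERVER_CERT';

const req = https.request(
  {
    host: 'api.example.com',            // hypothetical API host
    path: '/v1/ping',
    method: 'GET',
    checkServerIdentity: (host, cert) => {
      // Keep the default hostname verification, then enforce the pin.
      const err = tls.checkServerIdentity(host, cert);
      if (err) return err;
      if (cert.fingerprint256 !== PINNED_FINGERPRINT256) {
        return new Error('certificate pin mismatch: possible MitM');
      }
      return undefined;
    },
  },
  (res) => res.resume()
);
req.on('error', (e) => console.error('request failed:', e.message));
req.end();
```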
If you don't know it yet, certificate pinning can also be bypassed by using tools like Frida or xPosed.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
So now you may be wondering: how can I protect against a certificate pinning bypass?
Well, it is not easy, but it is possible, by using a Mobile App Attestation solution.
Before we go any further, I would like to first clarify a common misconception among developers regarding WHO and WHAT is accessing the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the differences between the WHO and the WHAT accessing an API server, consider the intended versus the actual communication channel:
The Intended Communication Channel represents the mobile app being used as you expected, by a legit user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man in the middle attacked.
The actual channel may represent several different scenarios, such as a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it to understand how the communication between the mobile app and the API server is done, in order to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around the API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users using a repackaged version of the mobile app, or an automated script trying to game and take advantage of the service provided by the application.
Well, to identify the WHAT, developers tend to resort to an API key that usually they hard-code in the code of their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, thus it becomes a runtime secret as opposed to the former approach when a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series about API keys.
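As an aside, the static-secret approach described above usually boils down to something like this minimal sketch; the key, header name and URL are hypothetical, and the point is that anything shipped inside the app bundle like this can be pulled out by decompiling the app or by proxying its traffic.

```typescript
const API_KEY = 'hardcoded-example-key';   // ships inside the app bundle

export async function getProfile(): Promise<unknown> {
  const res = await fetch('https://api.example.com/v1/profile', {
    headers: { 'x-api-key': API_KEY },     // visible to any intercepting proxy
  });
  return res.json();
}
```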
Mobile App Attestation
The use of a Mobile App Attestation solution will enable the API server to know WHAT is sending the requests, thus allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with, is not running on a rooted device and is not the target of a MitM attack. This is done by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and of the device it is running on. The cloud service also verifies that the TLS certificate presented to the mobile app on the handshake with the API server is indeed the one used by the original and genuine API server, not one injected by a MitM attack.
On successful attestation of the mobile app's integrity, a short-lived JWT is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the attestation fails, the JWT is signed with a secret that the API server does not know.
The App must now send the JWT in the headers of every API request. This allows the API server to serve only the requests for which it can verify the JWT's signature and expiration time, and to refuse those that fail verification.
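On the API server side, that check can be as small as the following Express middleware sketch, using the jsonwebtoken package; the header name and environment variable are assumptions for illustration, not something mandated by any particular attestation service.

```typescript
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET ?? '';

// Reject any request whose attestation JWT is missing, expired, or signed
// with a secret the API server does not recognise.
app.use((req, res, next) => {
  const token = req.header('x-attestation-token'); // hypothetical header name
  if (!token) {
    res.status(401).json({ error: 'missing attestation token' });
    return;
  }
  try {
    jwt.verify(token, ATTESTATION_SECRET); // checks signature and expiration
    next();
  } catch {
    res.status(401).json({ error: 'invalid attestation token' });
  }
});

app.get('/api/data', (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000);
```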
Since the secret used by the Mobile App Attestation service is not known to the mobile app, it is not possible to reverse engineer it at run-time, even when the App has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man in the middle attack.
So this solution works in a positive detection model without false positives, thus not blocking legit users while keeping the bad guys at bay.
What suggestions do you have to protect the world (react-native apps) from pesky hackers, when they're stealing keys and using them inappropriately?
I think you should really go with a Mobile App Attestation solution, which you can roll yourself if you have the expertise for it, or you can use an existing SaaS solution like Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT issued by the cloud service. This check is necessary for the API server to decide which requests to serve and which to deny.
Summary
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
Don't go down the route of storing keys in the mobile app because, as you already know from your extensive research, they can be bypassed.
Instead, use a Mobile App Attestation solution in conjunction with OAuth2 or OpenID Connect, which you can bind with the mobile app attestation token. An example of this token binding can be found in this article, in the check of the custom payload claim in the /forms endpoint.
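One common shape of that binding, sketched here under the assumption that the attestation token carries a custom claim (named pay below, purely as an example) holding the base64 SHA-256 of the user's Authorization header, so the API server can confirm both tokens were presented together by the same app instance:

```typescript
import { createHash } from 'crypto';
import jwt from 'jsonwebtoken';

const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET ?? '';

// Returns true only when the attestation token is valid AND its custom claim
// matches the hash of the OAuth2/OIDC token that accompanied the request.
export function tokensAreBound(attestationToken: string, authorizationHeader: string): boolean {
  try {
    const claims = jwt.verify(attestationToken, ATTESTATION_SECRET);
    if (typeof claims === 'string' || !claims.pay) return false; // claim name is an assumption
    const expected = createHash('sha256').update(authorizationHeader).digest('base64');
    return claims.pay === expected;
  } catch {
    return false;
  }
}
```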
Going the Extra Mile
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
I have an OpenID Connect provider built with IdentityServer4 and ASP.NET Identity, running on let's say: login.example.com.
I have a SPA application running on let's say spa.example.com, that already uses my OpenID Connect provider to authenticate users through login.example.com and authorize them to access the SPA.
I have a mobile app (native on both platforms) that is using a custom authentication system at the moment.
I thought it would be nice to get rid of the custom auth system, and instead allow my users to log-in with the same account they use on the SPA, by using my OpenID provider.
So I started by looking at the OpenID Connect website and re-reading RFC 6749; after a few Google searches I realized this was a common problem, and I found RFC 8252 (OAuth 2.0 for Native Apps), as well as Dynamic Client Registration (RFC 7591) and PKCE (RFC 7636).
I scratched my head about the fact that it was no longer possible to store any kind of "secret" on the client/third-party (the native apps) as it could become compromised.
I discussed the topic with some co-workers and we came up with the following set-up:
Associate a domain let's say app.example.com to my mobile app by using Apple Universal Links and Android App Links.
Use an AuthenticationCode flow for both clients and enforce them to use PKCE.
Use a redirect_uri on the app associated domain say: https://app.example.com/openid
Make the user always consent to returning to the application after log-in, because neither iOS nor Android will bring the application back via an automatic redirect; the user has to manually tap the universal/app link every time.
I used the AppAuth library in both apps and everything is working just fine in testing right now, but I'm wondering:
Do you think this is a secure way to prevent anyone with the right skills from impersonating my apps, or by any other means getting unauthorized access to my APIs? What is the current best practice for achieving this?
Is there any way to avoid having the user always "consent" (i.e. having them actually tap the universal/app link)?
I also noted that Facebook uses their application as a kind of authorization server itself, so when I tap "sign in with Facebook" in an application, I get a Facebook page asking whether I would like to "launch the application to perform log-in". I would like to know how I can achieve something like this, to allow my users to log in to the SPA on a phone using my application if it is installed, as Facebook does with theirs.
I thought it would be nice to get rid of the custom auth system, and instead allow my users to log-in with the same account they use on the SPA, by using my OpenID provider.
This is what OAuth 2.0 and OpenID Connect provide you: the ability to use a single user identity across different services. So this is the correct approach.
it was no longer possible to store any kind of "secret" on the client/third-party (the native apps) as it could become compromised
Correct. From the OAuth 2.0 specification's perspective, these are called public clients. It is not recommended to associate client secrets with them. Instead, the authorization code, application ID and redirect URL are used to validate the token request at the identity provider. This makes the authorization code a valuable secret.
Associate a domain let's say app.example.com to my mobile app by using Apple Universal Links and Android App Links.
I'm not a mobile expert, but yes, custom URL domains like that are the way to handle redirects for OAuth and OpenID Connect.
Also, the use of PKCE is the correct approach. Since the redirect occurs in the browser (user agent), malicious parties could obtain the authorization code. PKCE avoids this by introducing a secret that never gets exposed to the user agent (browser). The secret is used only in the token request (direct HTTP communication), so it stays secure.
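For reference, the PKCE pair that AppAuth generates under the hood looks roughly like this minimal sketch (RFC 7636, S256 method); you normally never write this yourself when using AppAuth.

```typescript
import { randomBytes, createHash } from 'crypto';

// base64url encoding without padding, as required by RFC 7636
function base64url(buf: Buffer): string {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// code_verifier: high-entropy random string that never leaves the app
const codeVerifier = base64url(randomBytes(32));

// code_challenge: SHA-256 of the verifier, sent with the authorization request
const codeChallenge = base64url(createHash('sha256').update(codeVerifier).digest());

// The authorization request carries code_challenge + code_challenge_method=S256;
// only the later token request (a direct HTTPS call) carries code_verifier, so a
// party that intercepts the redirect cannot redeem the authorization code.
console.log({ codeVerifier, codeChallenge });
```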
Q1
Using the authorization code flow with PKCE is a standard best practice recommended by the OAuth specifications. This is valid for OpenID Connect as well (since it's built on OAuth 2.0).
One thing to note is that if you believe the PKCE secret can be exploited, it effectively means the device is compromised. Think about extracting the secret from OS memory: that means the system is compromised (a virus, keylogger or whatever we call them). In such a case the end user and your application have bigger things to worry about.
Also, I believe this is for a business application. If that's the case, your clients will likely have security best-practice guides for their devices, for example mandatory antivirus software and restrictions on application installation. To prevent the attacks mentioned above, we have to rely on such measures. OAuth 2.0 alone is not secure; that's why there are best-practice guides (such as the OAuth 2.0 Threat Model, RFC 6819) and policies.
Q2
Not clear on this one. The consent page is presented by the identity provider, so it will be a configuration of that system.
Q3
Well, the identity provider can maintain an SSO session in the browser in which the login page is presented. So most of the time, if the app uses the same browser, users should be able to use the SPA without logging in again.
The threat here comes from someone actually installing a malicious app on their device that could indeed impersonate your app. PKCE prevents another app from intercepting legitimate sign in requests initiated from your app so the standard approach is about as safe as you can make it. Forcing the user to sign in/consent every time should help a bit to make them take note of what is going on.
From a UX point of view I think it makes a lot of sense to minimize the occasions when the browser-based sign-in flow is used. I'd leverage the security features of the platform (e.g. the secure enclave on iOS) and keep a refresh token in there once the user has signed in interactively; then they can sign in using their PIN, fingerprint or face, etc.
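For the refresh-token storage part, a minimal react-native-keychain sketch (the library already mentioned in an earlier question) might look like this; the service label is arbitrary and the access-control choice is just one reasonable option.

```typescript
import * as Keychain from 'react-native-keychain';

const SERVICE = 'com.example.app.refresh-token'; // hypothetical label

// Store the refresh token behind the platform keystore / secure hardware,
// gated on biometrics and never migrated to another device.
export async function saveRefreshToken(token: string): Promise<void> {
  await Keychain.setGenericPassword('refresh_token', token, {
    service: SERVICE,
    accessControl: Keychain.ACCESS_CONTROL.BIOMETRY_ANY,
    accessible: Keychain.ACCESSIBLE.WHEN_UNLOCKED_THIS_DEVICE_ONLY,
  });
}

export async function loadRefreshToken(): Promise<string | null> {
  const credentials = await Keychain.getGenericPassword({ service: SERVICE });
  return credentials ? credentials.password : null;
}
```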
I have in the past built a hand-rolled app that stores a user token in client-side $window.sessionStorage.
I have since realized this is not safe. I am now looking for the safest, most standard way to secure an app that uses a Node/Express backend API that I will build, plus a front end that makes requests to this API, such as Angular for the web or a native mobile app. Also, whenever I closed the browser I had to log in again, because the $window's session storage was wiped out.
From what I've researched so far, one of the safest ways to date, if you're going to hand-roll it, is to store a JWT in an httpOnly, secure cookie.
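A minimal Express sketch of that hand-rolled approach, assuming the jsonwebtoken and cookie-parser packages; route names, the cookie name and the dev secret are placeholders.

```typescript
import express from 'express';
import cookieParser from 'cookie-parser';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());
app.use(cookieParser());

const JWT_SECRET = process.env.JWT_SECRET ?? 'dev-only-secret';

app.post('/login', (req, res) => {
  // ...verify username/password against your user store here...
  const token = jwt.sign({ sub: req.body.username }, JWT_SECRET, { expiresIn: '1h' });
  res.cookie('session', token, {
    httpOnly: true,    // not readable from JavaScript, unlike sessionStorage
    secure: true,      // only sent over HTTPS
    sameSite: 'strict',
  });
  res.sendStatus(204);
});

app.get('/me', (req, res) => {
  try {
    res.json(jwt.verify(req.cookies.session, JWT_SECRET));
  } catch {
    res.sendStatus(401);
  }
});

app.listen(3000);
```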
However, I'd kind of like to use a service that already exists, such as oAuth. Couple questions:
1) How safe is oAuth in terms of keeping ownership of your userbase? What if 3 years from now oAuth just suddenly or slowly dies out? Aren't all my users technically stored on their server? How would I keep my users native to my app?
2) If I'm going to be creating a startup app in the same realm as snapchat, twitter, tumblr, etc... would it be generally recommended to use a service like oAuth to handle my authentication? Of course the future is unknown, but assuming the best, that my app would reach millions of users, would using a service like oAuth still be a smart choice? It seems like once you start using oAuth, there's never any going back to storing your users in your own database a year or two down the road.
Thanks
OAuth is an open standard for authorization.
Maybe you're thinking about Auth0. There are a lot of services that can handle user authorization for you, including Auth0, Stormpath, Apigee, UserApp, AuthRocket or Amazon Cognito. Whichever you choose, make sure that you can get the database from them in case you want to stop using their service. Not everyone explicitly offers an easy way to leave them but if that's important for you then make sure who suits your needs and who doesn't, and base your decision on that.
As for OAuth, see the https://en.wikipedia.org/wiki/OAuth article.
There's a huge list of OAuth providers on Wikipedia but those are services like Twitter, Google or Facebook. In a way you can use one of those services to manage all your logins but as soon as they see you as their competition, you're in trouble. I've heard stories like that.
Some interesting read on the subject:
The dangers of OAuth/Social Login
Signing Me onto Your Accounts through Facebook and Google: a Traffic-Guided Security Study of Commercially Deployed Single-Sign-On Web Services
OpenID Vulnerability report: Data confusion
Social Login Setups – The Good, the Bad and the Ugly
We need a new authentication server for our large business system (ERP). The previous authentication server was internally developed. Now that we need a new one, we will first try to find an existing authentication server that we can use. Can anyone recommend an authentication server? Remember that this is a business system, so something like Facebook login is not an option. Do Microsoft, Google or others have any authentication servers that can be installed and run locally?
It seems like you need to store the usernames in your DB, because you do not want to be dependent on a 3rd party to store them for you. Furthermore, you do not want to force your users to have accounts with a 3rd party (e.g. Facebook, Google).
So AFAIK, the only option for you is to maintain your own authentication-server...
The good news: there are OAuth2 packages that you can use, and I've written a project that implements all the authentication-server flows, such as registration, forgot password, etc.
You can see a demo here.
HTH.
While I understand the various options available for server to server authentication between REST services, I could use some clarification on the security implications of each approach.
I want a service to verify that a request received does originate from a legitimate calling remote service. No interactive users involved, assume the request happens as the calling service starts up. The three approaches usually mentioned are:
Use a fake user account and authenticate the client against the existing auth system
Use a shared secret / API key and sign the request (a minimal sketch of this appears right after this list)
Use a client certificate (verifying the server is not a priority)
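For context, approach 2 is typically an HMAC over the request, along these lines (a minimal sketch; the header names, timestamp window and secret-distribution mechanism are assumptions):

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

const SHARED_SECRET = process.env.SERVICE_SHARED_SECRET ?? '';

// Calling service: sign the method, path, timestamp and body with the shared key.
export function signRequest(method: string, path: string, body: string): Record<string, string> {
  const timestamp = Date.now().toString();
  const payload = `${method}\n${path}\n${timestamp}\n${body}`;
  const signature = createHmac('sha256', SHARED_SECRET).update(payload).digest('hex');
  return { 'x-timestamp': timestamp, 'x-signature': signature };
}

// Receiving service: recompute the signature and reject stale timestamps to limit replay.
export function verifyRequest(
  method: string,
  path: string,
  body: string,
  timestamp: string,
  signature: string,
): boolean {
  if (Math.abs(Date.now() - Number(timestamp)) > 5 * 60 * 1000) return false;
  const payload = `${method}\n${path}\n${timestamp}\n${body}`;
  const expected = createHmac('sha256', SHARED_SECRET).update(payload).digest('hex');
  const a = Buffer.from(expected, 'hex');
  const b = Buffer.from(signature, 'hex');
  return a.length === b.length && timingSafeEqual(a, b);
}
```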
The part I am missing is that it seems that all three methods depend entirely on the calling service's host (the client in the call) not being compromised. In the first approach this would give away the fake user password, but in the two other approaches an attacker could obtain the shared secret or the client certificate and impersonate the calling server just as easily as with approach number 1... so in what respect are 2 & 3 considered more secure?
If the host is compromised, the game is already over. You cannot hope to use network security techniques to provide guarantees about the end systems, that is not what they're meant for. Consider passwords, for example. When a user types in a password, the guarantee you have is that the entity that entered the password knows the password, that's all. Designing to be secure against compromised hosts is like trying to build a password scheme that only authorizes you if you're the real person - you're expecting a guarantee that the mechanism is not built to provide.
If you want to check that the calling server is not compromised, you might want to use TPM-based verification of the calling server, in case the machines have TPMs on them. Once it has been verified as not compromised, any of the above three methods would be secure. (ref: http://en.wikipedia.org/wiki/Trusted_Platform_Module)