I'm currently developing an API for an online service.
I would like to give mobile and web developers access so they can create their applications.
Developers will have the usual requests-per-minute limits for their applications.
What are the best practices for authenticating applications?
For web applications it's easy: we provide a token, the token is valid for a single domain, so even if somebody tries to use it anywhere else, it will fail.
How can we do that for mobile applications?
We can provide a token, but such a token needs to be distributed with the application on the device,
which means that if somebody sniffs that token, they can write another application that uses the same token. The original developer would then have to revoke the old token, create a new one, and release a new version (which their users would have to download again).
Do you know any solution for that?
I'm not sure that your developers would be able to securely do this without having some form of communication with their own host and some form of user account on their system. As you said, if you included a long-lived token in an app, no matter what obfuscation is done it could eventually be discovered by reverse engineering techniques.
There are 2 options that I can see:
1. Short-lived token
In this scheme the mobile application contacts the developer's system to receive a short-lived auth token.
During enrollment and periodically thereafter, developers generate a public-private keypair and give you the public key.
Each auth token would need to include an unencrypted "developer key ID" of some sort and an encrypted bit of data including the token's issue date and a salt of pseudo-random data. The developer's host would encrypt the data using the private key of a public-private keypair. This keeps the secret in a controlled and secure space. The encrypted data needs to include the salt in order to prevent known-plaintext attacks on your developers' keys.
When the app sends the token to you, you can determine its legitimacy as follows:
Use the unencrypted developer key ID to determine which key to use in decrypting the encrypted string.
Has the developer key ID been revoked or expired? (due to key compromise, dev API subscription expiration or abuse, etc). If it was revoked, deny access.
Does the encrypted data in the token decrypt correctly? If not, deny access.
Has the token expired? (based on the encrypted token date) If so, tell the client to get a new token from the dev server. Their software should do this before contacting your API, but you have to check just in case. I'd suggest that tokens be allowed to live for a relatively short time since copying a token between apps is a weakness.
Allow access
You could also use symmetric encryption instead of public-private key encryption, but then you and the dev both know the secret. It'd be more secure if only the dev knows it.
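To make that concrete, here is a minimal sketch of those checks in Python, assuming the developer's host issues the short-lived token as an RS256-signed JWT (a stand-in for the encrypted blob described above). The names REGISTERED_KEYS, REVOKED_KEY_IDS and TOKEN_TTL are placeholders, not part of any real API.

```python
# Sketch: verify a short-lived, developer-signed token on the API provider's side.
import time
import jwt  # PyJWT

REGISTERED_KEYS = {}     # developer key ID -> PEM-encoded public key on file
REVOKED_KEY_IDS = set()  # key IDs revoked for compromise, expiration, abuse, etc.
TOKEN_TTL = 300          # how long (seconds) a token is allowed to live

def verify_dev_token(token: str) -> bool:
    # 1. Read the unencrypted developer key ID from the token header.
    key_id = jwt.get_unverified_header(token).get("kid")
    if key_id is None or key_id in REVOKED_KEY_IDS:
        return False                     # unknown or revoked developer key: deny
    public_key = REGISTERED_KEYS.get(key_id)
    if public_key is None:
        return False
    try:
        # 2. Does the token verify against the developer's public key?
        claims = jwt.decode(token, public_key, algorithms=["RS256"])
    except jwt.InvalidTokenError:
        return False                     # bad signature or malformed token: deny
    # 3. Has the token expired? If so, the client should fetch a fresh one.
    if time.time() - claims.get("iat", 0) > TOKEN_TTL:
        return False
    return True                          # 4. Allow access
```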
2. Pass API calls through dev host
It'd be possible for mobile applications to talk to their developer's host instead of your host for calls to the API. When the dev host receives one of the calls, it simply passes the call through to your API and adds their secret token.
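For illustration, here is a rough sketch of that pass-through on the developer's host, using Flask and requests; the API base URL, header name and DEV_SECRET_TOKEN are placeholders I made up for the example.

```python
# Sketch: the developer's server forwards the mobile app's calls and attaches
# the developer's secret, which never ships inside the mobile app.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
API_BASE = "https://api.example.com"             # your (the provider's) API
DEV_SECRET_TOKEN = "kept-only-on-the-dev-server"

@app.route("/proxy/<path:endpoint>", methods=["GET", "POST"])
def proxy(endpoint):
    upstream = requests.request(
        method=request.method,
        url=f"{API_BASE}/{endpoint}",
        params=request.args,
        data=request.get_data(),
        headers={"Authorization": f"Bearer {DEV_SECRET_TOKEN}"},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)
```

The obvious trade-off is that every API call now costs the developer an extra hop through their own infrastructure.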
Related
I have been reading about the OAuth 2.0 Authorization Code flow to protect APIs in microservices architectures, but I don't understand how an access token issued by the Auth Server is supposed to protect an API hosted on another server.
Is that same access token also kept by the API, so that when the client tries to access it with the access token issued by the Auth Server, the API checks whether it has that token stored? If so, does that mean that the access token is sent both to the client and to the protected API during the authentication process?
I hope I have explained my problem well. Thanks in advance.
Anunay gave a pretty good analogy of how JWTs work at a high level as a portable, trustable identifier, but since OAuth supports more than just JWT authentication it might warrant sharing a bit more detail.
Token introspection
In your question you rightly assumed that tokens need some way of being trusted, and that one such way would be to store the token in a private database and do a lookup whenever a token is presented to determine its validity. You could absolutely implement a valid OAuth server using a method like this, by issuing the token in whatever form you wish and writing an introspection endpoint that performs the lookup. The OAuth spec is intentionally abstract so that the functional behavior of token introspection can take many forms.
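As a toy illustration of that lookup approach, an introspection endpoint could be as small as this; the issued_tokens store and field names are placeholders.

```python
# Sketch: token introspection backed by a private lookup table of opaque tokens.
import hashlib
import time

issued_tokens = {}  # sha256(token) -> {"sub": user id, "exp": expiry timestamp}

def introspect(token: str) -> dict:
    # RFC 7662-style response: "active" plus whatever claims you choose to expose.
    record = issued_tokens.get(hashlib.sha256(token.encode()).hexdigest())
    if record is None or record["exp"] < time.time():
        return {"active": False}
    return {"active": True, "sub": record["sub"], "exp": record["exp"]}
```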
One of the reasons for this level of abstraction is that while storing the tokens for direct lookup might be easy, it means that you have to keep copies of these tokens in some form in a private database for comparison. This storage would in turn make you a honeypot for bad actors, both internal and external, who would seek to impersonate your users en masse. It's for this reason that many implementations of OAuth prefer to issue and validate tokens using public/private key cryptography instead of direct lookups. This process is very much like the one Anunay described in his comment, in that it issues tokens that are signed with a private key and verified with a public one. With this process, you no longer need to keep everyone's token in a private database; instead you simply need to secure the private and public keys that are used to sign and verify tokens respectively.
JSON Web Tokens (JWTs) and reducing number of introspection calls
Anunay's response specifically referred to a common token structure that is generated using public/private key cryptography and issued to users: JSON Web Tokens. These tokens are structured in such a way that they include the user information a backend service might need, such as the user ID, email address, and sometimes more, in a payload that is directly readable by the backend API. In addition to this readable payload, a JWT carries a signature over that payload produced with the issuer's private key. In order to trust a JWT, all you have to do is use the public key to verify that the signature matches the payload. Since public keys rarely change, many backend services cache the keys used for verification and elect not to do a token introspection against the issuing server, since they can already verify the payload. This is how you'd optimize throughput on backend services that are protected via OAuth.
Since public keys can only be used to verify payloads and not produce them, these public keys are often broadcast by the servers that issued the tokens allowing anyone to "trust" the tokens it issues if they so choose. To learn more about this process, I'd recommend you research OpenID Connect.
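As a small sketch of what that looks like in practice (assuming PyJWT on the backend, with made-up issuer and audience values), verification against the issuer's published keys happens entirely locally:

```python
# Sketch: verify a JWT access token using the keys the issuer broadcasts (JWKS),
# with no introspection call to the issuing server.
import jwt                    # PyJWT
from jwt import PyJWKClient

jwks_client = PyJWKClient("https://auth.example.com/.well-known/jwks.json")

def validate_access_token(token: str) -> dict:
    # Look up the signing key advertised by the issuer (cache this in practice),
    # then verify the signature and expiry locally.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience="my-api")
```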
An access token can be understood as a passport that a government issues to a citizen based on proof of identity.
When you take it to another country, they look at the document and trust it because they trust the issuing country, and they trust you because you are the holder of the document with your details in it.
They trust that the passport cannot be tampered with, and so they allow you entry.
Now for the access token, in very simple terms: the authorization server verifies the user. Once verified, it issues the user a JWT (the access token). This token is signed with a private key; it contains your details, encoded along with the signature. You can take this token to any third party that has the public key and trusts the authorization server. When you share the access token with that third party, it uses the public key to verify the token and checks for expiry. If the token is valid, it lets you in.
So the API doesn't really need to talk to the auth server or keep any details about the token. All it needs is the public key to verify the token.
Now there are two important things. First, if you ever leak your access token, or someone who is not meant to have it gets hold of it, they can do whatever they want and the auth server will not be able to do much about it. Second, as you can see, this approach reduces the chattiness between systems, especially in microservices.
To address the first issue, we limit the lifetime of the access token. Like a passport, it comes with an expiry. The shorter you keep it, the more often the user has to go back and refresh the token with the auth server. Every time they do so, the auth server gets a chance to verify credentials and other details. If they do not match, the access token will not be refreshed.
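To put the same idea in code (a sketch only; the key is generated here just for the example, and the claim names follow the usual JWT conventions):

```python
# Sketch: the auth server mints a short-lived signed access token; the API only
# needs the public key to verify it, with no call back to the auth server.
import time
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this is all the API needs
ACCESS_TOKEN_LIFETIME = 15 * 60         # short-lived, like a passport that expires

def issue_access_token(user_id: str) -> str:
    now = int(time.time())
    claims = {"sub": user_id, "iat": now, "exp": now + ACCESS_TOKEN_LIFETIME}
    return jwt.encode(claims, private_key, algorithm="RS256")

def api_side_check(token: str) -> dict:
    # Verifies the signature and the exp claim locally; raises if either fails.
    return jwt.decode(token, public_key, algorithms=["RS256"])
```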
Is it correct to assume that the idtoken offers no more security than replacing it with a (possibly salted) SHA2 hash of itself in any follow-up communication with the backend server?
The intended flow would be the following:
The Android app obtains an idtoken from Google
The app sends the idtoken to the self-hosted backend server, where it is verified by the backend with the use of gitkitclient.VerifyGitkitToken
The backend creates a SHA2 hash of the token together with the expiry date and the associated user ID, and stores it for future reference in a lookup table
The Android app creates the same SHA2 hash of the idtoken and passes it along in the header in any future communication with the backend, instead of using the idtoken itself for the follow-up communication.
Does this decrease in any way the security of the system?
If a transparent proxy with HTTPS inspection (and the corresponding certificates installed on the device, i.e. legitimately in a corporate environment) were to sniff the traffic, it would make no difference whether the idtoken or the SHA2 hash of it is obtained; that transparent proxy would be able to act (possibly in a rogue way) on behalf of the Android app for the entire lifetime of the idtoken, right?
My issue is that calling gitkitclient.VerifyGitkitToken on every follow-up communication with the server is too expensive, and not necessary once the validity of the idtoken has been ascertained.
I also don't want the idtoken to be stored on the server for future reference, instead prefer having a hash of it. Is a SHA224 hash of the idtoken enough and is it safe to assume that it would not result in any collisions?
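To make the lookup idea concrete, this is roughly what I have in mind (names and table layout are placeholders, and the verification call happens only once):

```python
# Sketch: verify the idtoken once, then key follow-up requests off its SHA-256 hash.
import hashlib
import time

session_table = {}  # sha256(idtoken) -> {"user_id": ..., "exp": ...}

def register_idtoken(idtoken: str, user_id: str, exp: int) -> str:
    # Called once, right after gitkitclient.VerifyGitkitToken has succeeded;
    # only the hash of the idtoken is stored server-side.
    digest = hashlib.sha256(idtoken.encode()).hexdigest()
    session_table[digest] = {"user_id": user_id, "exp": exp}
    return digest

def lookup(digest_from_header: str):
    # Cheap follow-up check: no re-verification of the idtoken itself.
    entry = session_table.get(digest_from_header)
    if entry is None or entry["exp"] < time.time():
        return None
    return entry["user_id"]
```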
This requires a long discussion of what should be in an authentication cookie and what the pros and cons of the various approaches are. No one solution fits all; a solution should be carefully selected depending on the app/site's security, performance, and scalability requirements. So I really cannot comment on the proposed solution without understanding a lot more details about the app, requirements, and threats.
In general, an authentication cookie/token has these basic requirements:
Should not be forgeable (signed by your server)
Should be easy to validate
Even if the signing secret is stolen, a hacker should not be able to create tokens for all users (e.g. can be achieved by having a per account nonce)
Should be revokable from the server (achieved by maintaining a server side state)
Optionally tied to the client it was issued to, so that a token stolen from a client is useless elsewhere
I'm sure there are a lot more desirable properties.
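Purely as an illustration of a few of those properties (not a recommendation for your specific app), a server-signed token with a per-account nonce might look like this; SERVER_SECRET and the nonce store are placeholders.

```python
# Sketch: HMAC-signed token that is cheap to validate and revocable by rotating
# the per-account nonce held on the server.
import hashlib
import hmac

SERVER_SECRET = b"keep-this-on-the-server-only"
account_nonces = {}  # user_id -> current nonce; rotate it to invalidate old tokens

def sign_token(user_id: str) -> str:
    payload = f"{user_id}:{account_nonces[user_id]}"
    mac = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{mac}"

def validate_token(token: str) -> bool:
    user_id, _nonce, mac = token.rsplit(":", 2)
    expected_payload = f"{user_id}:{account_nonces.get(user_id)}"
    expected_mac = hmac.new(SERVER_SECRET, expected_payload.encode(),
                            hashlib.sha256).hexdigest()
    # Constant-time compare; a rotated (revoked) nonce also fails this check.
    return hmac.compare_digest(mac, expected_mac)
```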
GITKit issues the id_token for one-time authentication use, and a developer should use their own cookies/tokens to maintain the session of the app/browser. We know that many developers would like us to help with this, and we are working on a solution that would give the app a long-lived OAuth refresh token (and a short-lived access token) that it can continue to use with its home server.
This is quite a conceptual question, but I'm also interested in implementation details.
Let's say I have an API written in Node.js.
Clients (primarily an iOS app) authenticate via OAuth and then use the session token to authorize each following request.
I now want to point from the app to a browser based web app and take over the authenticated session.
This should, of course, be highly secure and must not be vulnerable, not only in a theoretical sense but, as far as possible, in terms of the actual implementation.
I must, somehow, ensure that the request comes from the same device and user, etc.
I thought of generating a short-lived token that the client must send, but even this does not seem entirely secure, bearing in mind that the API is already TLS-protected.
You want to be more secure than OAuth.
From OAuth's perspective, whoever possesses the token is authorised to act on the user's behalf. You may wish to include a secondary secret or verify the IP, but I doubt it will do much for you in practice:
If the platform you are on (e.g. iOS) is not compromised, then the OAuth token will be entropy enough to confirm the user's identity. If the platform IS compromised, then any secret your application can set can also be extracted by the assumed attacker.
I run a service that integrates with a few other cloud platforms via their apis. In order to do this, we have to store the login credentials for OTHER sites in our database. Obviously security is a bit of a risk here.
So far, we have been storing the passwords using AES encryption, with a salted version of the user's password (for our site) as the key. When a user requests something from the API, they must input their password. The password is checked for validity against the SHA hash that we store and, once confirmed, is used to decrypt the stored password.
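For reference, this is roughly what our current scheme looks like (a sketch only, using PBKDF2 plus Fernet from the cryptography package as a stand-in for our AES code; all names and parameters are placeholders):

```python
# Sketch: derive the encryption key from the user's (salted) site password, so the
# stored third-party password can only be decrypted while that password is in hand.
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(site_password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(site_password.encode()))

def encrypt_credential(site_password: str, api_password: str, salt: bytes) -> bytes:
    return Fernet(derive_key(site_password, salt)).encrypt(api_password.encode())

def decrypt_credential(site_password: str, blob: bytes, salt: bytes) -> str:
    # Only possible during a request in which the user supplies their password,
    # which is exactly why scheduled background jobs break this model.
    return Fernet(derive_key(site_password, salt)).decrypt(blob).decode()
```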
The problem is, we would like to start offering a service that retrieves data from the APIs we interact with at scheduled intervals (outside the scope of synchronous user requests). If we do this, our current security structure will no longer be viable.
My question is, are there any ways to allow for this type of API interaction without storing recoverable versions of the passwords in our database? If not, what are my options for securely storing passwords?
we would like to start offering a service that retrieves data from the APIs we interact with at scheduled intervals (outside the scope of synchronous user requests).
This is what the OAuth protocol is designed for. The OAuth 2.0 code grant gives a client application an access token and a refresh token. The refresh token allows the application to get an access token even when the user is not there to authorize the request.
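As a minimal sketch of what the background job would do with that refresh token (the token endpoint URL and client credentials below are placeholders):

```python
# Sketch: exchange a stored refresh token for a fresh access token, with no
# user present. Standard OAuth 2.0 refresh_token grant parameters.
import requests

def refresh_access_token(refresh_token: str) -> dict:
    resp = requests.post(
        "https://provider.example.com/oauth/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": "your-client-id",
            "client_secret": "your-client-secret",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # a fresh access_token, and often a new refresh_token too
```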
I am working on a web application which uses OAuth to authenticate against different services.
Is there any risk in storing these tokens and secrets directly in the database, or should I encrypt them?
What are the general security patterns for saving OAuth tokens and secrets?
This thread answers all of your questions:
Securely Storing OpenID identifiers and OAuth tokens
Essentially, the following are dependent on one another in one way or another:
Consumer key
Consumer secret
Access token
Access token secret
Unless the consumer key/secret are also at risk, you don't need to encrypt the access token/secret. The access tokens can only be used in combination with the consumer key/secret which generated them.
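To see why, note that (in the OAuth 1.0a style this applies to) every request carries the consumer key and the access token and is signed using both secrets, so a leaked access token/secret is useless without the consumer credentials. A sketch with requests-oauthlib, using made-up key values:

```python
# Sketch: all four values are needed to sign a single OAuth 1.0a request.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="consumer-key",             # identifies your application
    client_secret="consumer-secret",
    resource_owner_key="access-token",     # identifies the user's grant
    resource_owner_secret="access-token-secret",
)
response = session.get("https://api.example.com/resource")
```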
I'm assuming you're talking about the typical "Service Provider," "Consumer" and "User" setup?
If so, the session and cookies are good enough for saving tokens, but the problem is that it's your Consumers (your clients, as I understand) that need to be saving them and not you. Is there a session/cookie available in the scope of the calls to your API?
In either case, if the tokens are stored in the session or cookies, they will be "temporary" keys and the User will have to re-authenticate when they expire. But there is nothing wrong with that as far as the OAuth spec is concerned, as long as the Users don't mind re-authenticating.
Also bear in mind that the tokens are tied to a given service and user, and not to any IP address or device UUID, for example. They could not be used with different API and secret keys, as they are tied to the application they were issued for.
This way the user can de-authorize on a by-application basis, and every app can have a different set of permissions (e.g. read-only access). So your answer is you don't need to encrypt them, and you need them in plaintext anyway (if you're the User).