Server-to-server REST API security

We're building a REST API that will only be accessed from a known set of servers. My question is: if there is no access directly from any browser-based clients, what security mechanisms are required?
Currently Have:
Obviously over HTTPS
HTTP auth enabled; API consumers have a key & password
Is it necessary to:
Create some changing key, e.g. md5(timestamp + token) that is formed for the request and validated at the endpoint?
Use OAuth (2-legged authentication)?

It doesn't matter whether the client is a browser or not.
Is it necessary to:
Create some changing key, e.g. md5(timestamp + token), that is formed for the request and validated at the endpoint?
Use OAuth (2-legged authentication)?
Use OAuth; it solves both of these questions. Using OAuth is also a good idea because:
You aren't reinventing the wheel.
There are already plenty of libraries and approaches available, depending on your technology stack.
You can also use a JWT to pass security context with custom claims from service to service (a minimal sketch follows this list).
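For illustration, here is a minimal sketch of passing custom claims between two services with a signed JWT. It assumes the popular jsonwebtoken npm package and a shared HS256 secret; the claim names, audience, and lifetime are illustrative choices, not requirements.

```typescript
// Sketch only: "jsonwebtoken" is an assumed library choice; claim names are illustrative.
import jwt from "jsonwebtoken";

const signingSecret = process.env.JWT_SECRET ?? "dev-only-secret"; // shared secret (or use an RSA key pair)

// Service A: issue a short-lived token carrying security context for Service B.
const token = jwt.sign(
  { caller: "service-a", scope: ["orders:read"] },       // custom claims
  signingSecret,
  { algorithm: "HS256", expiresIn: "5m", audience: "service-b" }
);

// Service B: verify the signature and read the claims before acting on the request.
const claims = jwt.verify(token, signingSecret, { audience: "service-b" });
console.log(claims);
```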
As a reference, you can also look at how different providers solve this problem. For example, Azure Active Directory has an on-behalf-of flow for this purpose:
https://learn.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-on-behalf-of-flow
Using OAuth2/OpenID Connect between your services is not mandatory; there are other protocols, alternatives, and even custom solutions. It all depends on the relationship between the services and whether they both run in a fully trusted environment.
You can use anything you like, but the main idea is not to share sensitive information, such as service account or user credentials, between services.
If REST API security is the main requirement, OAuth2/OpenID Connect is probably the best choice. If you just need authenticated calls in a full-trust environment in the simplest way, Kerberos will do; if you need an encrypted custom tunnel between the services for data-in-transit encryption, there are other options such as a VPN. It does not make much sense to implement something custom, because if you have Service A and Service B and want to make sure calls between them are authenticated, then to avoid coupling and sharing sensitive information you will always need some central service C acting as an identity provider. If you think of it from this point of view, OAuth2/OIDC is not overkill.

Whether the consumers of your API are web browsers or servers you don't control doesn't change the security picture.
If you are using HTTPS and clients already have a key/password, then it isn't clear what kind of attack any other mechanism would protect against.
Any compromise on the client side will expose everything anyway.

First of all, it matters whether a user agent (such as a browser) is involved in the call.
If there are only server-to-server calls, then one-way TLS (HTTPS, for transport encryption) and some kind of request-signature mechanism (e.g. an HMAC using SHA-256) should be enough for your security; a minimal sketch follows.
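For illustration, a small Node sketch of such a signature mechanism, built on an HMAC-SHA256 over the request parts; the header names and the exact string-to-sign are assumptions, not a standard.

```typescript
// Sketch only: client-side request signing with Node's built-in crypto module.
import { createHmac } from "crypto";

function signRequest(secret: string, method: string, path: string, body: string) {
  const timestamp = Date.now().toString();
  const stringToSign = [method.toUpperCase(), path, timestamp, body].join("\n");
  const signature = createHmac("sha256", secret).update(stringToSign).digest("hex");
  // The caller sends these headers; the endpoint recomputes the HMAC and compares.
  return { "X-Timestamp": timestamp, "X-Signature": signature };
}

const headers = signRequest("shared-secret", "POST", "/v1/orders", '{"id":42}');
```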
However, if you return sensitive information in your API response, it's always better to accept two-way (mutual) TLS connections, in order to validate the client.
OAuth2 doesn't add any value in a server-to-server call (one that takes place without user consent and without any user agent present).

For authentication between servers:
Authentication
Known servers:
use TLS with X.509 client certificates (TLS with mutual authentication); a minimal sketch follows this list.
issue the client certificates with a common CA (certificate authority). That way, the servers need only have the CA certificate or public key in the truststore, and new client certificates for additional clients/servers can be issued without having to update the truststores.
Open set of servers:
use API keys, issued by a central authority. The servers need to validate these keys on each request (and may cache the hashes of the keys along with the validation result for some short time).
Identity propagation
if the requests are executed in the context of a non-technical user, use JWT (or SAML) for identity propagation of the user principal and claims (authorize at security proxy/WAF/IAM, and issue JWT signed by authentication server).
otherwise the user principal refers to the technical user and can be extracted from the client certificate (X.509 DName) or be returned with a successful authentication response (API key case).
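A minimal sketch of the mutual-TLS setup for the known-servers case, using Node's built-in https module; the certificate file names are placeholders.

```typescript
// Sketch only: HTTPS server that requires client certificates signed by a common CA.
import { readFileSync } from "fs";
import https from "https";
import { TLSSocket } from "tls";

const server = https.createServer(
  {
    key: readFileSync("server-key.pem"),
    cert: readFileSync("server-cert.pem"),
    ca: readFileSync("internal-ca.pem"), // only the common CA needs to be in the truststore
    requestCert: true,                   // ask the client for a certificate
    rejectUnauthorized: true,            // reject clients not signed by the CA
  },
  (req, res) => {
    // The technical user (principal) can be read from the client certificate's subject.
    const peerCert = (req.socket as TLSSocket).getPeerCertificate();
    res.end(`Hello, ${peerCert.subject?.CN ?? "unknown client"}`);
  }
);

server.listen(8443);
```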


Azure API Management - Authentication: OAuth2 vs Certificate

I would like to hear your thoughts about an implementation. Thank you in advance for any suggestions :)
**Scenario**
I have a set of APIs. They are accessible via REST and protected by OAuth2. I also have a list of machines that need to access them.
**Question**
On the machines' side, which is the best solution for accessing them? Should I implement client certificate authentication, or is OAuth2 a suitable solution?
**My doubts:**
In the case of hundreds or thousands of machines, does certificate management become too complex/costly?
Should I use a certificate for each machine or one certificate for more than one?
How can I smartly deploy the certificate to each machine?
I like the idea of mutual authentication, but I'm afraid it is too heavy to maintain compared to the OAuth structure. I plan to use Microsoft Azure as the cloud service.
Thanks!
I would say the important factors here are client identity and privilege:
You may have many client machines
But do they represent a single identity, e.g. a cluster?
And do they all have the same privilege?
By default I would aim for a solution where all client machines with the same privilege present the same credential / identity. The APIs can then authorize requests based on the client identity, provided in the access token.
The standard OAuth solution is the Client Credentials flow, where each client sends a secret to the server (a token-request sketch is shown below).
If required (and supported by your Authorization Server) you can use a Mutual TLS form of Client Credentials, via the Client Assertion Profile. This usually boils down to a private key signature sent by the client, which is contained in a whitelist configured on the Authorization Server.
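A rough sketch of the Client Credentials flow from the machine's point of view, using Node 18+ global fetch; the token endpoint, client id, and scope are placeholders, not Azure-specific values.

```typescript
// Sketch only: exchange the client secret for an access token at the Authorization Server.
async function getAccessToken(): Promise<string> {
  const response = await fetch("https://auth.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "machine-cluster-a",                 // one identity per privilege level
      client_secret: process.env.CLIENT_SECRET ?? "",
      scope: "orders:read",
    }),
  });
  const data = (await response.json()) as { access_token: string };
  return data.access_token; // sent as "Authorization: Bearer <token>" on each API call
}
```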

Public keys in OpenID Connect

I'm currently trying to use IdentityServer4 to build a single-signon experience for my users across different apps I have. They are all hosted in the same local network and no third-party apps authenticate with it. The client apps are still Katana/Owin-based.
I'm using the implicit workflow.
At the moment I still use a certificate randomly generated at runtime to sign tokens.
I wonder
whether I actually need more, and what the implications are of leaving it as it is, and
how the signature is actually validated by clients.
For the second question I found this piece in the OpenID Connect spec:
The OP advertises its public keys via its Discovery document, or may supply this information by other means. The RP declares its public keys via its Dynamic Registration request, or may communicate this information by other means.
So does that mean Katana is actually getting the public keys from IdentityServer4 and validating accordingly? And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct? So why would I need a proper, rarely-changing certificate?
Generating a new certificate on app startup has a few downsides:
If you restart your IDS4 process, you effectively invalidate any otherwise-valid tokens, as their signatures can no longer be validated
Inability to scale out - all servers need to have the same signing and validation keys
Clients might only periodically update their discovery info so you need to allow for a rollover period, something that IDS4 supports as you can have more than one validation key.
See the guidance here: http://docs.identityserver.io/en/release/topics/crypto.html
The next simplest option would be to use a self-issued cert that's installed in the host machine's certificate store.
First of all, OpenID Connect discovery is the process by which a relying party retrieves the provider's information dynamically. There is a dedicated specification for this: OpenID Connect Discovery 1.0.
According to its metadata section, the jwks_uri entry describes where the token signing keys are published.
1. So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly?
Yes, it should. If your applications (relying parties) want dynamic information, you should go ahead with the discovery document to retrieve the token signing key information.
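As a small illustration of that discovery step (the issuer URL is a placeholder):

```typescript
// Sketch only: fetch the provider metadata, then the JWKS it points to.
const issuer = "https://ids.example.com";

const discovery = (await fetch(`${issuer}/.well-known/openid-configuration`)
  .then((r) => r.json())) as { jwks_uri: string };

// jwks.keys contains the public keys (RFC 7517 JWKs) used to validate token signatures.
const jwks = await fetch(discovery.jwks_uri).then((r) => r.json());
console.log(jwks);
```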
2. And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct?
The discovery document is part of OpenID Connect dynamic (reference: http://openid.net/connect/). So yes, it can be used to convey certificate changes to relying parties (token consumers).
3. So why would I need a proper, rarely-changing certificate?
The certificate must be there to validate ID tokens issued by the identity provider, so at a minimum the certificate must live until the last token expires. Other than that, one might be using proper certificates issued by a CA, which comes with a cost; so some implementations could have rarely-changing certificates.
Bonus: how the signature is actually validated by clients.
You hash the received message and compare it against the signature decrypted using the certificate's public key. Also, if you are wondering about the format of the key information, it is a JWK, defined by RFC 7517.
P.S. ID token validation is the same as validating a JWT, as explained in the JWT spec; a small sketch follows.
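For illustration, a sketch of that validation using the jsonwebtoken npm package (an assumed choice; Katana does the equivalent internally). The PEM file, issuer, and audience values are placeholders.

```typescript
// Sketch only: validate an ID token's signature and standard claims.
import { readFileSync } from "fs";
import jwt from "jsonwebtoken";

const publicKey = readFileSync("idp-signing-public-key.pem");

function validateIdToken(idToken: string) {
  // Throws if the signature, expiry, issuer, or audience do not check out.
  return jwt.verify(idToken, publicKey, {
    algorithms: ["RS256"],
    issuer: "https://ids.example.com",
    audience: "my-client-id",
  });
}
```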
Note: I am not an expert in the PKI domain. An expert might point out something else for short-lived certificates, independent of the OpenID Connect protocol.

Does HTTPS prevent a valid user from tampering with the payload?

We have 2 web applications, both secured using HTTPS (server-side certificates only) and a single sign-on authentication system. In App1, a user will click a link which then needs to “drill down” into a page in App2. They share the same domain and SSL certificate, but are physically not the same app. When App1 forwards or redirects the request to App2, it includes an authentication token in the request so App2 can verify the user’s identity.
App1 knows what information the user is authorized to see, call it a list of accounts; App2 does not have access to this information (at least not at this time). It has been proposed that App1 may pass the list of authorized accounts to App2 as well, in the request.
My question is whether HTTPS protects the payload and guarantees that it was generated only by the App1/App2 servers? More specifically, my concern is whether a valid user, with a valid authentication token, might be able to build his own form with additional accounts and submit it as a valid HTTPS POST request to the App2 server and thereby gain access to unauthorized accounts?
No, HTTPS alone does not provide you with the security you're looking for. For an indication of how others have tackled the problem you're facing, take a look at this link:
SSO with SAML
It is about accomplishing SSO with the SAML protocol. In general if security is a serious concern of yours, you'll want to use a peer-reviewed solution (like SAML) instead of a DIY approach to single sign-on. You don't need to use SAML, but you should try to use an existing SSO solution available for your environment.
In order to "guarantee it was generated by the App1/App2 servers", you could digitally sign the payload. This would prevent tampering, but may not prevent replay attacks. SSL helps somewhat, as the transmission is encrypted, but replay attacks would still be possible (perhaps via a man-in-the-middle attack).
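For example, a minimal signing sketch with Node's crypto module; the key file names are placeholders, and you would still need a timestamp or nonce for replay protection.

```typescript
// Sketch only: App1 signs the payload, App2 verifies it with App1's public key.
import { createSign, createVerify } from "crypto";
import { readFileSync } from "fs";

const payload = JSON.stringify({ user: "alice", accounts: ["123", "456"], issuedAt: Date.now() });

// App1: sign with its private key before posting/redirecting to App2.
const signature = createSign("RSA-SHA256")
  .update(payload)
  .sign(readFileSync("app1-private.pem"), "base64");

// App2: verify with App1's public key; a tampered account list fails this check.
const valid = createVerify("RSA-SHA256")
  .update(payload)
  .verify(readFileSync("app1-public.pem"), signature, "base64");
```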
HTTPS provides a secure communication channel.
But you could have a secure communication channel with the devil himself :-)

How to design API with no SSL support?

I am developing a RESTful API layer for my app. The app would be used on premises where HTTPS support is not available. We need to support both web apps and mobile apps. We are using Node/Expressjs on the server side. My two concerns are:
Is there a way we could setup secure authentication without HTTPS?
Is there a way we could reuse the same authentication layer on both web app (backbonejs) and native mobile app (iOS)?
I think you are confusing authenticity and confidentiality. It's totally possible to create an API that securely validates that the caller is who they say they are using a MAC, most often an HMAC. The assumption, though, is that you've securely established a shared secret, which you could do in person, but that's pretty inconvenient.
Amazon S3 is an example of an API that authenticates its requests without SSL/TLS. It does so by dictating a specific way in which the caller creates an HMAC based on the parts of the HTTP request. It then verifies that the requester is actually a person allowed to ask for that object. Amazon relies on SSL to initially establish your shared secret at registration time, but SSL is not needed to correctly perform an API call that can be securely authenticated as originating from an authorized individual—that can be plain old HTTP.
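As a sketch of the server side of such a scheme (not Amazon's actual algorithm): recompute the HMAC over the same request parts and compare it in constant time, rejecting stale timestamps.

```typescript
// Sketch only: verify an HMAC-signed request; header names and the 5-minute window are assumptions.
import { createHmac, timingSafeEqual } from "crypto";

function verifySignature(secret: string, method: string, path: string, body: string,
                         timestamp: string, signature: string): boolean {
  // Reject requests that are too old, to limit replay.
  if (Math.abs(Date.now() - Number(timestamp)) > 5 * 60 * 1000) return false;

  const expected = createHmac("sha256", secret)
    .update([method.toUpperCase(), path, timestamp, body].join("\n"))
    .digest();
  const provided = Buffer.from(signature, "hex");

  // Constant-time comparison to avoid leaking information about the expected value.
  return provided.length === expected.length && timingSafeEqual(provided, expected);
}
```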
Now the downside to that approach is that all data passing in both directions is visible to anyone. While the authorization data sent will not allow an attacker to impersonate a valid user, the attacker can see anything that you transmit—thus the need for confidentiality in many cases.
One use case for publicly transmitted API responses with S3 includes websites whose code is hosted on one server, while its images and such are hosted in S3. Websites often use S3's Query String Authentication to allow browsers to request the images directly from S3 for a small window of time, while also ensuring that the website code is the only one that can authorize a browser to retrieve that image (and thus charge the owner for bandwidth).
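A rough sketch of the query-string idea (parameter names are illustrative, not Amazon's actual scheme):

```typescript
// Sketch only: issue a URL that is valid until "expires", signed with a secret the browser never sees.
import { createHmac } from "crypto";

function presignUrl(baseUrl: string, path: string, secret: string, ttlSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const signature = createHmac("sha256", secret).update(`${path}\n${expires}`).digest("hex");
  return `${baseUrl}${path}?expires=${expires}&signature=${signature}`;
}

// The website hands this URL to the browser; the storage server re-derives the
// signature and refuses the request once "expires" has passed.
const imageUrl = presignUrl("http://cdn.example.com", "/images/logo.png", "bucket-secret", 300);
```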
Another example of an API authentication mechanism that allows the use of non-SSL requests is OAuth. Its obsolete 1.0 family used it exclusively (even if you used SSL), and the OAuth 2.0 specification defines several access token types, including the OAuth2 HTTP MAC type, whose main purpose is to simplify and improve HTTP authentication for services that are unwilling or unable to employ TLS for every request (though it does require SSL for initially establishing the secret). The OAuth2 Bearer type, by contrast, requires SSL but keeps things simpler (no request normalization, the bane of all developers using request-signing APIs without well-established and tested libraries).
To sum it up, if all you care about is securely establishing the authenticity of a request, that's possible. If you care about confidentiality during the transport of the response, you'll need some kind of transport security, and TLS is easier to get right in your app code (though other options may be feasible).
Is there a way we could setup secure authentication without HTTPS?
If you mean SSL, no. Whatever you send from your browser to the web server will be unencrypted, so third parties can listen in. HTTPS is not authentication; it's encryption of the traffic between the client and server.
Is there a way we could reuse the same authentication layer on both web app (backbonejs) and native mobile app (iOS)?
Yes. As you say, it is a layer, so its interface will be independent of the client; it will be HTTP, and if the web app is on the same origin as that layer there will be no problem (e.g. api.myapp.com accessed from myapp.com). Your native mobile app can make HTTP requests, too.
With or without SSL, you can be secure if you use a key-based scheme in which you require the user to sign each request prior to sending it. Once you receive the request, you verify the signature with the corresponding key (the signing key itself is never sent over the wire) and make sure that what was signed matches the operation the user was requesting. You base this on a UTC timestamp, which also requires that all servers using this model keep their clocks accurate.
Amazon Web Services in particular uses this security method, and it is secure enough to use without SSL, although they do not recommend it.
I would seriously invest the small amount it takes to support SSL, as it gives you more credibility. I personally would not consider an organization credible without it.

Using cookies/sessions for mobile application authentication?

Is there any reason why I shouldn't use cookies/sessions for native mobile applications, usually used by browsers, to authenticate with my server and for subsequent API calls?
Clarification: It seems the de facto method of authentication on mobile clients is token-based systems like OAuth/XAuth. Why don't traditional browser methods suffice?
This depends on your application (your threat scenario to be more exact).
Some of the most common threats are
- eavesdropping (-> should encrypt)
- man in the middle (-> must authenticate other party)
- ...what are yours? (how secure is your cookie store,....)
A cookie at first only holds a token as proof that at some point you successfully authenticated. If the cookie is valid long enough, or the transport is not encrypted, there is a good chance that someone, someday, will get hold of it...
In addition, you must take into account what additional security measures are in place, first and most importantly SSL.
What is your authentication method (what credential does a client need to log on)? Can you work with authentication based on a public/private key infrastructure, or is the communication "ad hoc"?
EDIT
With regard to OAuth: as far as I understood the protocol, its main concern is delegation, a scenario where you authorize an agent to do some very specific task on behalf of another identity. This way you don't scatter your credentials all over the web. If you have OAuth in place, a client can use the protocol directly, too, so why bother adding another mechanism? But OAuth explicitly states that in a direct-client scenario you again run into security issues, as the token is now available on the device and must be protected accordingly (just as you must protect your cookie).
