I am developing a RESTful API layer for my app. The app will be used on premises where HTTPS support is not available. We need to support both web apps and mobile apps. We are using Node/Express.js on the server side. My two concerns are:
Is there a way we could setup secure authentication without HTTPS?
Is there a way we could reuse the same authentication layer on both web app (backbonejs) and native mobile app (iOS)?
I think you are confusing authenticity and confidentiality. It's totally possible to create an API that securely validates the caller is who they say they are using a MAC; most often an HMAC. The assumption, though, is that you've securely established a shared secret—which you could do in person, but that's pretty inconvenient.
Amazon S3 is an example of an API that authenticates its requests without SSL/TLS. It does so by dictating a specific way in which the caller creates an HMAC based on the parts of the HTTP request. It then verifies that the requester is actually a person allowed to ask for that object. Amazon relies on SSL to initially establish your shared secret at registration time, but SSL is not needed to correctly perform an API call that can be securely authenticated as originating from an authorized individual—that can be plain old HTTP.
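As a rough illustration of the idea (not S3's exact canonicalization rules; the header names, key values and string-to-sign below are placeholders), a request signed with an HMAC over plain HTTP might look like this in Node.js:

const crypto = require('crypto');
const http = require('http');

const accessKeyId = 'demo-key-id';          // public identifier, sent in the clear
const secretKey = 'established-over-tls';   // shared secret, never sent on the wire

function sign(method, path, timestamp) {
  // The string-to-sign must be built identically on the client and the server.
  const stringToSign = [method, path, timestamp].join('\n');
  return crypto.createHmac('sha256', secretKey).update(stringToSign).digest('hex');
}

const timestamp = new Date().toISOString();
const req = http.request({
  host: 'api.example.com',                  // plain HTTP: authentic, but not confidential
  path: '/v1/widgets/42',
  method: 'GET',
  headers: {
    'X-Key-Id': accessKeyId,
    'X-Timestamp': timestamp,               // lets the server reject replayed requests
    'X-Signature': sign('GET', '/v1/widgets/42', timestamp),
  },
}, res => res.pipe(process.stdout));
req.on('error', console.error);
req.end();

The server looks up the secret for X-Key-Id, recomputes the HMAC over the same string, compares the two values (ideally with crypto.timingSafeEqual), and rejects stale timestamps.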
Now the downside to that approach is that all data passing in both directions is visible to anyone. While the authorization data sent will not allow an attacker to impersonate a valid user, the attacker can see anything that you transmit—thus the need for confidentiality in many cases.
One use case for publicly transmitted API responses with S3 includes websites whose code is hosted on one server, while its images and such are hosted in S3. Websites often use S3's Query String Authentication to allow browsers to request the images directly from S3 for a small window of time, while also ensuring that the website code is the only one that can authorize a browser to retrieve that image (and thus charge the owner for bandwidth).
Another example of an API authentication mechanism that allows the use of non-SSL requests is OAuth. Its now-obsolete 1.0 family used request signing exclusively (even if you used SSL), and the OAuth 2.0 specification defines several access token types, including the OAuth2 HTTP MAC type, whose main purpose is to simplify and improve HTTP authentication for services that are unwilling or unable to employ TLS for every request (though it does require SSL for initially establishing the secret). The OAuth2 Bearer type, by contrast, requires SSL but keeps things simpler (no request normalization, the bane of all developers using request-signing APIs without well-established and tested libraries).
To sum it up: if all you care about is securely establishing the authenticity of a request, that's possible. If you care about confidentiality during transport of the response, you'll need some kind of transport security, and TLS is easier to get right than anything you roll yourself in app code (though other options may be feasible).
Is there a way we could setup secure authentication without HTTPS?
If you mean SSL, no. Whatever you send through your browser to the web server will be unencrypted, so third parties can listen in. HTTPS is not authentication; it is encryption of the traffic between the client and the server.
Is there a way we could reuse the same authentication layer on both web app (backbonejs) and native mobile app (iOS)?
Yes. As you say, it is a layer, so its interface will be independent of the client: it will be HTTP, and if the web app is on the same origin as that layer (e.g. api.myapp.com accessed from myapp.com), there will be no problem. Your native mobile app can make HTTP requests, too.
In either case, SSL or not, you can be secure if you use a private/public key scheme where the user signs each request before sending it. When you receive the request, you verify the signature with the user's public key (the private key is never sent over the wire) and check that what was signed matches the operation the user is requesting. You base this on a UTC timestamp, which also requires that all servers using this model keep their clocks accurate.
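A hedged Express sketch of the verification side of that scheme; the header names, key registration and the exact string that gets signed are assumptions for illustration only:

const crypto = require('crypto');
const express = require('express');
const app = express();

// Public keys registered out of band (e.g. at sign-up over TLS); ids and env var are placeholders.
const publicKeys = { 'client-1': process.env.CLIENT_1_PUBLIC_KEY_PEM };

const MAX_SKEW_MS = 5 * 60 * 1000;   // both sides must keep accurate UTC clocks

app.use((req, res, next) => {
  const clientId = req.headers['x-client-id'];
  const ts = req.headers['x-timestamp'];
  const sig = req.headers['x-signature'];
  const publicKey = publicKeys[clientId];
  if (!publicKey || !ts || !sig) return res.status(401).end();

  // Reject requests whose timestamp drifts too far, which limits replay.
  if (Math.abs(Date.now() - Date.parse(ts)) > MAX_SKEW_MS) return res.status(401).end();

  // The client signed method + path + timestamp with its private key;
  // we verify with its registered public key, so no secret ever crosses the wire.
  const signed = [req.method, req.originalUrl, ts].join('\n');
  const verifier = crypto.createVerify('SHA256');
  verifier.update(signed);
  if (!verifier.verify(publicKey, sig, 'base64')) return res.status(401).end();

  next();
});

app.get('/v1/widgets/42', (req, res) => res.json({ ok: true }));
app.listen(3000);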
Amazon Web Services in particular uses this security method and it is secure enough to use without SSL although they do not recommend it.
I would seriously consider investing the small amount it takes to support SSL, as it gives you more credibility. I personally would not consider an organization credible without it.
The parties involved:
Desktop Client
Protected Resource Server
Authorization Server (Google)
User-Agent (Browser)
The Desktop Client generates a pub/priv key pair and directs the user agent (via webbrowser.open_new()) to the Protected Resource Server's OAuth initiation page, which stores the public key in the state parameter field of the Auth_URI redirect.
The User-Agent successfully authenticates with the Authorization Server and is redirected back to the Protected Resource Server with the Auth_Code and the public key in the state parameter field.
Protected Resource Server exchanges Auth_Code using confidential client secret and validates id_token.
If id_token is valid, (server side processing happens), it then redirects on loopback to listening Desktop Client with query parameter containing an encrypted value that is only accessible by the initiating app.
It's a very similar process to PKCE, but I'm having the client secret remain confidential on the server rather than embedding it in the Desktop Client.
My concern is about a malicious 3rd party app that is able to intercept the initial OAuth_URI redirect and modify its values. Is this a mitigable threat once the device/browser is compromised? PKCE would suffer from the same issue and I've seen no explanations that mention my concern as a particular issue so I am under the assumption it is fine.
REDIRECT TAMPERING
The standard protection against malicious redirect tampering is in these emerging standards:
PAR
JAR
JARM
In high security scenarios you may need to follow profiles that mandate some of these - and some profiles may include financial-grade recommendations such as FAPI 2.0 Client Requirements, which includes PAR.
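As a rough illustration of how PAR addresses redirect tampering, here is a hedged Node.js sketch of a confidential client pushing its authorization parameters over the back channel and redirecting with only an opaque request_uri; the authorization server URL, endpoint paths and credentials are placeholders:

const crypto = require('node:crypto');

const AS = 'https://auth.example.com';   // placeholder authorization server

async function buildAuthorizationUrl() {
  // Back channel: push the full authorization request to the (assumed) PAR endpoint.
  const res = await fetch(`${AS}/as/par`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      client_id: 'my-client',
      client_secret: 'kept-on-the-server',   // confidential client credential, never in the redirect
      response_type: 'code',
      redirect_uri: 'http://127.0.0.1:8123/callback',
      scope: 'openid profile',
      state: crypto.randomUUID(),
    }),
  });
  const { request_uri } = await res.json(); // opaque handle, e.g. urn:ietf:params:oauth:request_uri:...

  // Front channel: the user agent only ever sees client_id and the opaque request_uri,
  // so the pushed parameters cannot be tampered with in the redirect.
  return `${AS}/authorize?client_id=my-client&request_uri=${encodeURIComponent(request_uri)}`;
}

buildAuthorizationUrl().then(url => console.log(url));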
DESKTOP APPS
One known issue for desktop apps is that a malicious app could send the same client ID and use the same redirect URI to trigger a complete flow. I don't think you can fully protect against that, and any efforts may just be obfuscation.
Your concern is perhaps this issue, and it is not currently solvable, since any code your app runs could also be run by a malicious app, including use of PAR / DPoP or other advanced options.
CLIENT ATTESTATION
The behaviour you are after is client attestation, where a malicious party cannot attempt authentication without proving its identity cryptographically. Eg an iOS app can send proof of ownership of its App Store signing key.
Mobile and web apps can achieve a reasonable amount of client attestation by owning the domain for an HTTPS based redirect URI, but these cannot be used for desktop apps. It would be good to see improved client attestation options for desktop apps some time soon.
WHAT TO DO?
Generally I would say keep your code based on standards that have been vetted by experts. This should mean that you use libraries in places but that your code remains simple.
Your desktop app will also be as secure as other desktop apps without trying to solve this problem. And users are socialised these days not to run arbitrary EXEs, only properly signed apps whose code-signing certificate identifies the publisher and chains up to an authority that has approved the app (we hope).
I'm using React Native and Node.js to deliver a product for my company.
I've set up the steps on the backend to retrieve a password, validate it and respond with a token. The only problem is that the password I use on the front end (mobile app), which the back end validates, is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it can not be sniffed out by a hacker and used to compromise the backend?
My research so far.
Embedded in strings.xml
Hidden in Source Code
Hidden in BuildConfigs
Using Proguard
Disguised/Encrypted Strings
Hidden in Native Libraries
http://rammic.github.io/2015/07/28/hiding-secrets-in-android-apps/
These methods are basically useless because hackers can easily circumvent them.
https://github.com/oblador/react-native-keychain
Although this may obfuscate keys, they still have to be hardcoded, making it kind of useless, unless I'm missing something.
I could use a .env file
https://github.com/luggit/react-native-config
Again, I feel like the hacker can still view secret keys, even if they are saved in a .env
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
What suggestions do you have to protect the world (React Native apps) from pesky hackers who steal keys and use them inappropriately?
Your Question
I've set up the steps on the backend to retrieve a password, validate it and respond with a token. The only problem is that the password I use on the front end (mobile app), which the back end validates, is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it can not be sniffed out by a hacker and used to compromise the backend?
The cruel truth is... you can't!!!
It seems that you already have done some extensive research on the subject, and in my opinion you mentioned one effective way of shipping your App with an embedded secret:
Hidden in Native Libraries
But as you also say:
These methods are basically useless because hackers can easily circumnavigate these methods of protection.
Some are useless and others make reverse engineering the secret from the mobile app a lot harder. As I wrote here, the approach of using the native interfaces to hide the secret requires expertise to reverse engineer, but even when the binary is hard to reverse engineer, an attacker can always resort to a man-in-the-middle (MitM) attack to steal the secret, as I show here by retrieving a secret hidden in the mobile app binary with the use of the native interfaces, JNI/NDK.
To protect your mobile app from a MitM you can employ Certificate Pinning:
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taking from Jon Larimer and Kenny Root Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
You can read this series of react native articles that show you how to apply certificate pinning to protect the communication channel between your mobile app and the API server.
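Pinning itself is configured inside the mobile app (the linked React Native articles show how). Purely to illustrate the idea in the same language as your backend, here is a hedged Node.js sketch that pins a server certificate fingerprint on an outbound TLS connection; the URL and fingerprint are placeholders you would replace with your own:

const https = require('https');
const tls = require('tls');

const PINNED_FINGERPRINT_SHA256 = 'AA:BB:CC:...';   // placeholder: your API server cert's SHA-256 fingerprint

const req = https.request('https://api.example.com/v1/ping', {
  checkServerIdentity(host, cert) {
    // Run the default hostname checks first.
    const err = tls.checkServerIdentity(host, cert);
    if (err) return err;
    // Then refuse the connection unless the presented certificate matches the pin.
    if (cert.fingerprint256 !== PINNED_FINGERPRINT_SHA256) {
      return new Error('Certificate does not match the pinned fingerprint');
    }
  },
}, res => console.log('status', res.statusCode));
req.on('error', console.error);
req.end();

Pinning the public key rather than the whole certificate is a common variation, since it survives certificate renewal as long as the key pair is reused.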
In case you don't know yet, certificate pinning can also be bypassed by using tools like Frida or xPosed.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
So now you may be wondering how you can protect against the certificate pinning bypass.
Well, it is not easy, but it is possible, by using a Mobile App Attestation solution.
Before we go further on it, I would like to clarify first a common misconception among developers, regarding WHO and WHAT is accessing the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the differences between WHO and WHAT are accessing an API server, let's distinguish the intended communication channel from the actual one:
The Intended Communication Channel represents the mobile app being used as you expected, by a legit user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man in the middle attacked.
The actual channel may represent several different scenarios, such as a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it in order to understand how the communication between the mobile app and the API server works and then automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around the API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users using a repackaged version of the mobile app, or an automated script trying to game and take advantage of the service provided by the application.
Well, to identify the WHAT, developers tend to resort to an API key that usually they hard-code in the code of their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, thus it becomes a runtime secret as opposed to the former approach when a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first in a series of articles about API keys.
Mobile App Attestation
The use of a Mobile App Attestation solution will enable the API server to know WHAT is sending the requests, thus allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with, is not running on a rooted device, and is not the target of a MitM attack. This is done by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on. The cloud service also verifies that the TLS certificate presented to the mobile app on the handshake with the API server is indeed the one used by the original and genuine API server for the mobile app, not one from a MitM attack.
On successful attestation of the mobile app's integrity, a short-lived JWT is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the mobile app fails attestation, the JWT is signed with a secret that the API server does not know.
The app must then send the JWT in the headers of every API call. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT, and to refuse them when verification fails.
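A minimal sketch of that API-side check, assuming an Express backend and the jsonwebtoken package; the header name and secret handling are illustrative, not the attestation vendor's actual integration:

const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET; // shared only with the attestation service

app.use((req, res, next) => {
  const token = req.headers['x-attestation-token'];   // header name is illustrative
  try {
    // Throws if the signature is wrong (the app failed attestation and received a token
    // signed with a secret the API server does not know) or if the short-lived token expired.
    req.attestation = jwt.verify(token, ATTESTATION_SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: 'invalid or expired attestation token' });
  }
});

app.get('/v1/orders', (req, res) => res.json({ ok: true }));
app.listen(3000);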
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run-time, even when the app is tampered with, running on a rooted device, or communicating over a connection that is the target of a man-in-the-middle attack.
So this solution works in a positive detection model without false positives, thus not blocking legit users while keeping the bad guys at bay.
What suggestions do you have to protect the world (React Native apps) from pesky hackers who steal keys and use them inappropriately?
I think you should really go with a Mobile App Attestation solution, which you can roll on your own if you have the expertise, or use an existing SaaS solution such as Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
Summary
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
Don't go down this route of storing keys in the mobile app, because as you already know, by your extensive research, they can be bypassed.
Instead, use a Mobile App Attestation solution in conjunction with OAuth2 or OpenID Connect, which you can bind to the mobile app attestation token. An example of this token binding can be found in this article, in the check of the custom payload claim in the /forms endpoint.
Going the Extra Mile
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
I am developing a backend for a mobile application using Node.js to handle HTTPS requests. I have set up SSL to connect from the client to the server and was wondering whether this is secure enough.
I don't have experience intercepting endpoints from mobile devices, but I have seen that it is possible for people to monitor the internet traffic coming out of their phones and pick up the endpoints the app calls. I have seen hacks on Tinder where people can see the response JSON and even automate swipes by sending HTTP requests to Tinder's endpoints.
My real concern is that people will be able to update/read/modify data on my backend. I can implement OAuth2 into my schema as well but I still see cases in which people could abuse the system.
My main question is whether or not using HTTPS is secure enough to protect my data, or if a session authentication system is needed like OAuth2.
Thanks.
HTTPS, providing it is properly configured, will ensure the message was not read or changed en route and that the client can know the server it is talking to is not a fake.
It will secure the transport. It will not secure the application.
For example, suppose you have an app that allows you to send a message saying https://www.example.com/transfermoney?from=Kyle&to=BazzaDP&amount=9999.99 and the server does just that based on those parameters. Then I could send that message myself; I have no need to intercept any app messages.
Normally the server needs authentication as well as HTTPS, for example to verify that only the user Kyle can send the above message and not anyone else. HTTPS normally only gives you server authentication, not client authentication (unless you use two-way certificate HTTPS).
So the question is, even if an attacker cannot read or alter any messages between app and server can they still cause harm? That is the measure of whether it is secure enough.
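To make that concrete, a hedged Express sketch of the application-level check that HTTPS alone does not give you; the session store, lookup and transfer function are placeholders for real session/auth and domain logic:

const express = require('express');
const app = express();
app.use(express.json());

// Placeholder session store and domain logic for the sketch.
const sessions = { 'token-kyle': { name: 'Kyle' } };
const lookupSession = header => sessions[(header || '').replace('Bearer ', '')];
const transfer = (from, to, amount) => console.log(`transfer ${amount} from ${from} to ${to}`);

function requireLogin(req, res, next) {
  req.user = lookupSession(req.headers.authorization);   // WHO is calling?
  if (!req.user) return res.status(401).end();
  next();
}

app.post('/transfermoney', requireLogin, (req, res) => {
  const { from, to, amount } = req.body;
  // Application-level authorization: Kyle may move Kyle's money, nobody else's,
  // no matter how well the transport is encrypted.
  if (req.user.name !== from) {
    return res.status(403).json({ error: 'cannot transfer from another account' });
  }
  transfer(from, to, Number(amount));
  res.json({ ok: true });
});

app.listen(3000);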
An SSL connection only protects the content you are sending while it is in transit.
SSL encrypts and ensures the authenticity of the whole connection, including the requested method and URL
So I would say just using SSL encryption is safe for transferring data between the client and the server; I might consider OAuth2 for passwords etc.
But I would recommend using GET for retrieving data and POST for data that requires authorization.
You're building an armored tunnel between two open fields.
Assuming that you use current SSL protocols and settings, and valid certificates from trusted issuers, you can pretty much assume the network is OK.
However it's still entirely possible to compromise any or all of your transaction from the client. Security really depends on the device and how well it's configured and patched.
Which parties of Oauth 2.0 are required to have an SSL connection?
Auth server: SSL required
Resource server: SSL required
Client apps: Is it really necessary, as long as it uses SSL for the resource server communication?
The Authorization server is required to use SSL/TLS as per the specification, for example:
Since requests to the authorization endpoint result in user authentication and the transmission of clear-text credentials (in the HTTP response), the authorization server MUST require the use of TLS as described in Section 1.6 when sending requests to the authorization endpoint.
Since requests to the token endpoint result in the transmission of clear-text credentials (in the HTTP request and response), the authorization server MUST require the use of TLS as described in Section 1.6 when sending requests to the token endpoint.
That same specification does not require it for the client application, but heavily recommends it:
The redirection endpoint SHOULD require the use of TLS as described in Section 1.6 when the requested response type is "code" or "token", or when the redirection request will result in the transmission of sensitive credentials over an open network. This specification does not mandate the use of TLS because at the time of this writing, requiring clients to deploy TLS is a significant hurdle for many client developers. If TLS is not available, the authorization server SHOULD warn the resource owner about the insecure endpoint prior to redirection (e.g., display a message during the authorization request).
Lack of transport-layer security can have a severe impact on the security of the client and the protected resources it is authorized to access. The use of transport-layer security is particularly critical when the authorization process is used as a form of delegated end-user authentication by the client (e.g., third-party sign-in service).
Calls to the resource server contain the access token and require SSL/TLS:
Access token credentials (as well as any confidential access token attributes) MUST be kept confidential in transit and storage, and only shared among the authorization server, the resource servers the access token is valid for, and the client to whom the access token is issued. Access token credentials MUST only be transmitted using TLS as described in Section 1.6 with server authentication as defined by [RFC2818].
The reasons should be pretty obvious: if any of these does not use secure transport, the token can be intercepted and the solution is not secure.
Your question specifically calls out the client application.
Client apps: Is it really necessary, as long as it uses SSL for the resource server communication?
I am assuming that your client is a web application, and that you are talking about the communication between the browser and the server after authentication has happened. I am furthermore assuming that you ask the question because, in your implementation, this communication is not authenticated with access tokens but through some other means.
And there you have your answer: that communication is authenticated in some way or another. How else would the server know who is making the call? Most web sites use a session cookie set at the beginning of the session, and use it to identify the session and therefore the user. Anyone who can grab that session cookie can hijack the session and impersonate the user. If you don't want that (and you really should not want that), you must use SSL/TLS to secure the communication between the browser and the server.
In some cases, the browser part of the client talks to the resource server directly, and the server part only serves static content such as HTML, CSS, images and, last but not least, JavaScript. Maybe your client is built like this, and you are wondering whether the static content must be downloaded over SSL/TLS? Well, if it isn't, a man in the middle can insert their own evil JavaScript that steals your users' access tokens. You do want to secure the download of static content.
Last but not least, your question is based on a hidden assumption, that there might be valid reasons not to use SSL/TLS. Often people claim the cost of the certificate is too high, or the encryption requires too much CPU power, hence requiring more hardware to run the application. I do not believe these costs to be significant in virtually all cases. They are very low, compared to the total cost of building and running the solution. They are also very low compared to the risks of not using encryption. Don't spend time (and money) debating this, just use SSL/TLS all the way through.
We're building a REST API that will only be accessed from a known set of servers. My question is, if there is no access directly from any browser based clients, what security mechanisms are required.
Currently Have:
Obviously over HTTPS
Have HTTP auth enabled, API consumers have a Key & password
Is it necessary to:
Create some changing key, e.g. md5(timestamp + token) that is formed for the request and validated at the endpoint?
Use OAuth (2-legged authentication)?
Doesn't matter - from browser or not.
Is it necessary to:
Create some changing key, e.g. md5(timestamp + token) that is formed for the request and validated at the endpoint?
Use OAuth (2-legged authorization)?
Use OAuth, it solves both these questions. And OAuth usage is good because:
You aren't reinventing the wheel
There are already a lot of libraries and approaches depending on technology stack
You can also use a JWT to pass security context with custom claims from service to service.
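For illustration, a hedged Node.js sketch of such a service-to-service JWT with custom claims, using the jsonwebtoken package; claim names, secrets and service identifiers are placeholders:

const jwt = require('jsonwebtoken');

// Service A mints a short-lived token describing the call context...
const token = jwt.sign(
  {
    scope: 'orders:read',
    onBehalfOf: 'user-1234',   // custom claim: the end user this call is made for
  },
  process.env.SERVICE_JWT_SECRET,   // or sign with an RS256 private key instead
  { expiresIn: '5m', issuer: 'service-a', audience: 'service-b' }
);

// ...and Service B verifies issuer, audience and expiry before trusting the claims.
const claims = jwt.verify(token, process.env.SERVICE_JWT_SECRET, {
  issuer: 'service-a',
  audience: 'service-b',
});
console.log(claims.onBehalfOf);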
Also, as a reference, you can look at how different providers solve the problem. For example, Azure Active Directory has an on-behalf-of flow for this purpose:
https://learn.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-on-behalf-of-flow
Use of OAuth2/OpenID Connect is not mandatory between your services; there are other protocols, alternatives, and even custom schemes. It all depends on the relationship between the services and whether they are both in a full-trust environment.
You can use anything you like, but the main idea is not to share sensitive information between services, such as service account credentials or user credentials.
If REST API security is the main requirement, OAuth2/OpenID Connect is maybe the best choice. If you just need secure (in the sense of authenticated) calls in a full-trust environment in the simplest way, use Kerberos; if you need an encrypted custom tunnel between the services for data-in-transit encryption, consider other options like a VPN. It does not make sense to implement something custom, because if you have Service A and Service B and want to make sure a call between them is authenticated, then to avoid coupling and sharing sensitive information you will always need some central service C as an identity provider. So if you think of it from this point of view, OAuth2/OIDC is not overkill.
Whether the consumers of your API are web browsers or servers you don't control doesn't change the security picture.
If you are using HTTPS and clients already have a key/password, then it isn't clear what kind of attack any other mechanism would protect against.
Any compromise on the client side will expose everything anyway.
Firstly, it matters whether a user agent (such as a browser) is involved in the call.
If there are only server-to-server calls, then one-way SSL/HTTPS (for network encryption) and some kind of signature mechanism (e.g. SHA-256 based) should be enough for your security.
However, if you return sensitive information in your API response, it is always better to accept two-way SSL/HTTPS connections (in order to validate the client).
OAuth2 doesn't add any value in a server to server call (that takes place without user consent and without any user agent present).
For authentication between servers:
Authentication
Known servers:
use TLS with X.509 client certificates (TLS with mutual authentication).
issue the client certificates with a common CA (certificate authority). That way, the servers need only have the CA certificate or public key in the truststore, and new client certificates for additional clients/servers can be issued without having to update the truststores (see the sketch at the end of this answer).
Open set of servers:
use API keys, issued by a central authority. The servers need to validate these keys on each request (and may cache the hashes of the keys along with the validation result for some short time).
Identity propagation
if the requests are executed in the context of a non-technical user, use JWT (or SAML) for identity propagation of the user principal and claims (authorize at security proxy/WAF/IAM, and issue JWT signed by authentication server).
otherwise the user principal refers to the technical user and can be extracted from the client certificate (X.509 DName) or be returned with a successful authentication response (API key case).
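For the "known servers" option above, a hedged Node.js sketch of an HTTPS server that requires client certificates issued by your own CA; the file paths, port and certificate fields are placeholders:

const fs = require('fs');
const https = require('https');

const server = https.createServer({
  key: fs.readFileSync('server-key.pem'),    // placeholder paths
  cert: fs.readFileSync('server-cert.pem'),
  ca: fs.readFileSync('internal-ca.pem'),    // the common CA that signs client certificates
  requestCert: true,                         // ask every caller for a client certificate
  rejectUnauthorized: true,                  // drop callers whose cert does not chain to our CA
}, (req, res) => {
  // The technical user (principal) can be read from the verified client certificate.
  const peer = req.socket.getPeerCertificate();
  res.end(`hello ${peer.subject.CN}\n`);
});

server.listen(8443);

Clients then connect with their own key, cert and the CA bundle; adding a new known server only requires issuing it a certificate from the same CA, with no truststore changes on the existing servers.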