Thanks for your help in advance.
I'm using React Native and Node.js to deliver a product for my company.
I've set up the steps on the backend to retrieve a password, validate it and respond with a token. The only problem is that the password I use on the front end (mobile app) to be validated by the back end is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it cannot be sniffed out by a hacker and used to compromise the backend?
My research so far.
Embedded in strings.xml
Hidden in Source Code
Hidden in BuildConfigs
Using Proguard
Disguised/Encrypted Strings
Hidden in Native Libraries
http://rammic.github.io/2015/07/28/hiding-secrets-in-android-apps/
These methods are basically useless because hackers can easily circumvent them.
https://github.com/oblador/react-native-keychain
Although this may obfuscate keys, they still have to be hardcoded, making this kind of useless, unless I'm missing something.
I could use a .env file
https://github.com/luggit/react-native-config
Again, I feel like a hacker can still view the secret keys, even if they are saved in a .env file.
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
What suggestions do you have to protect the world (React Native apps) from pesky hackers, when they're stealing keys and using them inappropriately?
Your Question
I've set up the steps on the backend to retrieve a password, validate it and respond with a token. The only problem is that the password I use on the front end (mobile app) to be validated by the back end is hardcoded.
My question is:
How should I securely store this password on the mobile app so that it cannot be sniffed out by a hacker and used to compromise the backend?
The cruel truth is... you can't!!!
It seems that you have already done some extensive research on the subject, and in my opinion you mentioned one effective way of shipping your app with an embedded secret:
Hidden in Native Libraries
But as you also say:
These methods are basically useless because hackers can easily circumnavigate these methods of protection.
Some are useless and others make reverse engineering the secret from the mobile app a lot harder. As I wrote here, the approach of using the native interfaces to hide the secret will require expertise to reverse engineer, but then, if it is hard to reverse engineer the binary, you can always resort to a man in the middle (MitM) attack to steal the secret, as I show here for retrieving a secret that is hidden in the mobile app binary with the use of the native interfaces, JNI/NDK.
To protect your mobile app from a MitM you can employ Certificate Pinning:
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taking from Jon Larimer and Kenny Root Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
You can read this series of React Native articles that shows you how to apply certificate pinning to protect the communication channel between your mobile app and the API server.
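As a minimal, hedged sketch of what pinning can look like on the React Native side, assuming the react-native-ssl-pinning package (not mentioned in the question) and a certificate bundled with the app under the name api_cert; the exact response shape depends on the library version:

import { fetch as pinnedFetch } from 'react-native-ssl-pinning';

async function callApi(): Promise<void> {
  // The request fails if the server certificate does not match the pinned api_cert bundled with the app.
  const response = await pinnedFetch('https://api.example.com/token', {
    method: 'POST',
    sslPinning: { certs: ['api_cert'] },
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ device: 'example' }),
  });
  console.log(response.status, response.bodyString);
}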
If you don't know it yet, certificate pinning can also be bypassed by using tools like Frida or xPosed.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
So now you may be wondering how you can protect against a certificate pinning bypass?
Well, it is not easy, but it is possible, by using a Mobile App Attestation solution.
Before we go any further, I would like to first clarify a common misconception among developers regarding WHO and WHAT is accessing the API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the differences between the WHO and the WHAT that are accessing an API server, let’s use this picture:
The Intended Communication Channel represents the mobile app being used as you expected, by a legit user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man in the middle attacked.
The actual channel may represent several different scenarios, like a legit user with malicious intentions that may be using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man in the middle attacking it, to understand how the communication between the mobile app and the API server is being done in order to be able to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become more tricky than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script or an attacker manually poking around the API server, using a tool like Postman?
To your surprise, you may end up discovering that it can be one of the legit users using a repackaged version of the mobile app or an automated script that is trying to gamify and take advantage of the service provided by the application.
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code in the code of their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, so that it becomes a runtime secret, as opposed to the former approach, where a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series of articles about API keys.
Mobile App Attestation
The use of a Mobile App Attestation solution will enable the API server to know WHAT is sending the requests, thus allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with, is not running on a rooted device and is not the target of a MitM attack. This is done by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on. The cloud service also verifies that the TLS certificate provided to the mobile app on the handshake with the API server is indeed the same one in use by the original and genuine API server for the mobile app, not one from a MitM attack.
On successful attestation of the mobile app integrity, a short-lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud are aware of. In the case of failure of the mobile app attestation, the JWT token is signed with a secret that the API server does not know.
Now the app must send the JWT token in the headers of every API request. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT token, and to refuse them when verification fails.
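For illustration, here is a minimal sketch of that client-side step in a React Native app; the header name and the way the token is obtained are assumptions, not tied to any particular attestation SDK:

async function callProtectedEndpoint(attestationToken: string): Promise<unknown> {
  const response = await fetch('https://api.example.com/resource', {
    method: 'GET',
    headers: {
      // The API server only serves the request if this token passes verification.
      Authorization: `Bearer ${attestationToken}`,
    },
  });
  if (!response.ok) {
    throw new Error(`Request rejected with status ${response.status}`);
  }
  return response.json();
}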
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device or is communicating over a connection that is the target of a man in the middle attack.
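On the API server side (Node.js, per the question), the verification can be a small middleware; this is a hedged sketch assuming Express and the jsonwebtoken package, with ATTESTATION_SECRET standing in for the secret shared only with the attestation service:

import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET as string;

app.use((req, res, next) => {
  const token = (req.headers.authorization ?? '').replace('Bearer ', '');
  try {
    // jwt.verify() checks both the signature and the exp claim, and throws on failure.
    jwt.verify(token, ATTESTATION_SECRET);
    next();
  } catch {
    res.status(401).json({ error: 'invalid or expired attestation token' });
  }
});

app.get('/resource', (_req, res) => res.json({ ok: true }));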
So this solution works in a positive detection model without false positives, thus not blocking legit users while keeping the bad guys at bay.
What suggestions do you have to protect the world (React Native apps) from pesky hackers, when they're stealing keys and using them inappropriately?
I think you should really go with a Mobile App Attestation solution, which you can roll on your own if you have the expertise for it, or you can use a solution that already exists as a SaaS offering at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration will also need a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which ones to deny.
Summary
I want to be able to store keys in the app so that I can validate the user and allow them to access resources on the backend. However, I don't know what the best plan of action is to ensure user/business security.
Don't go down the route of storing keys in the mobile app because, as you already know from your extensive research, they can be bypassed.
Instead use a Mobile App Attestation solution in conjunction with OAuth2 or OpenID Connect, which you can bind with the mobile app attestation token. An example of this token binding can be found in this article, in the check of the custom payload claim in the /forms endpoint.
Going the Extra Mile
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
I've set up an API with authentication but I want to only allow certain applications and websites to access it. What do I do?
I've got authentication set up so that only users who are logged in are able to access the API; however, how do I prevent them from just logging in from anywhere?
Before I address your question, I think it is important that we first clear up a common misconception among developers regarding WHO and WHAT is accessing an API.
THE DIFFERENCE BETWEEN WHO AND WHAT IS COMMUNICATING WITH YOUR API SERVER
To better understand the differences between the WHO and the WHAT that are accessing your API server, let’s use this picture:
The Intended Communication Channel represents your mobile app being used as you expected, by a legit user without any malicious intentions, using an untampered version of your mobile app, and communicating directly with your API server without being man in the middle attacked.
The actual channel may represent several different scenarios, like a legit user with malicious intentions that may be using a repackaged version of your mobile app, or a hacker using the genuine version of your mobile app while man in the middle attacking it, to understand how the communication between the mobile app and the API server is being done in order to be able to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let your API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, your mobile app.
Now we need a way to identify WHAT is calling your API server, and here things become more tricky than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
To your surprise, you may end up discovering that it can be one of your legit users using a repackaged version of your mobile app or an automated script trying to gamify and take advantage of your service.
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code in the code of their mobile app. Some developers go the extra mile and compute the key at run-time in the mobile app, so that it becomes a runtime secret, as opposed to the former approach, where a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series of articles about API keys.
YOUR QUESTIONS
I've got authentication set up so that only users who are logged in are able to access the API; however, how do I prevent them from just logging in from anywhere?
If by logging in from anywhere you mean any physical location, then you can use blocking by IP address, as already suggested by @hanshenrik, but if you mean blocking logins from other applications that are not the ones you have issued the API keys for, then you have a very hard problem on your hands to solve, which leads to your first question:
I've set up an API with authentication but I want to only allow certain applications and websites to access it. What do I do?
This will depend on whether the WHAT accessing the API is a web or a mobile application.
Web application
In a web app we only need to inspect the source code with the browser dev tools, or right-click to view the page source, and search for the API key; we can then use it in any tool, like Postman, or in any kind of automation we want, just by replicating the calls as we saw them being made in the network tab of the browser.
For an API serving a web app you can employ several layers of defense, starting with reCAPTCHA V3, followed by a Web Application Firewall (WAF) and, finally, if you can afford it, a User Behavior Analytics (UBA) solution.
Google reCAPTCHA V3:
reCAPTCHA is a free service that protects your website from spam and abuse. reCAPTCHA uses an advanced risk analysis engine and adaptive challenges to keep automated software from engaging in abusive activities on your site. It does this while letting your valid users pass through with ease.
...helps you detect abusive traffic on your website without any user friction. It returns a score based on the interactions with your website and provides you more flexibility to take appropriate actions.
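As a concrete, hedged sketch of this first layer, the server-side verification of a reCAPTCHA v3 token can look like this (Node.js with the built-in fetch of Node 18+ assumed; the 0.5 score threshold and the RECAPTCHA_SECRET variable name are illustrative choices):

async function verifyRecaptcha(token: string, remoteIp?: string): Promise<boolean> {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET as string, // the site's secret key, never exposed to the browser
    response: token, // the token produced by grecaptcha in the browser
  });
  if (remoteIp) params.append('remoteip', remoteIp);

  const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: params,
  });
  const result = (await res.json()) as { success: boolean; score?: number };
  // reCAPTCHA v3 returns a score between 0.0 and 1.0; where to draw the line is up to you.
  return result.success && (result.score ?? 0) >= 0.5;
}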
WAF - Web Application Firewall:
A web application firewall (or WAF) filters, monitors, and blocks HTTP traffic to and from a web application. A WAF is differentiated from a regular firewall in that a WAF is able to filter the content of specific web applications while regular firewalls serve as a safety gate between servers. By inspecting HTTP traffic, it can prevent attacks stemming from web application security flaws, such as SQL injection, cross-site scripting (XSS), file inclusion, and security misconfigurations.
UBA - User Behavior Analytics:
User behavior analytics (UBA) as defined by Gartner is a cybersecurity process about detection of insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies from those patterns—anomalies that indicate potential threats. Instead of tracking devices or security events, UBA tracks a system's users. Big data platforms like Apache Hadoop are increasing UBA functionality by allowing them to analyze petabytes worth of data to detect insider threats and advanced persistent threats.
All these solutions work based on a negative identification model; in other words, they try their best to differentiate the bad from the good by identifying WHAT is bad, not WHAT is good, thus they are prone to false positives, despite the advanced technology used by some of them, like machine learning and artificial intelligence.
So you may find yourself, more often than not, having to relax how you block access to the API server in order not to affect the good users. This also means that these solutions require constant monitoring to validate that the false positives are not blocking your legit users and that, at the same time, they are properly keeping the unauthorized ones at bay.
Mobile Application
From your reply to a comment:
What about for mobile applications?
Some may think that once a mobile app is released in a binary format their API key will be safe, but it turns out that is not true; extracting it from a binary is sometimes almost as easy as extracting it from a web application.
Reverse engineering a mobile app is made easy by a plethora of open source tools, like the Mobile Security Framework (MobSF), Frida, xPosed, mitmproxy, and many more, but as you can see in this article, it can be done with MobSF or with the strings utility that is installed in a normal Linux distribution.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
MiTM Proxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
Regarding APIs serving mobile apps, a positive identification model can be used instead, by employing a Mobile App Attestation solution that guarantees to the API server that WHAT is making the requests can be trusted, without the possibility of false positives.
The Mobile App Attestation
The role of a Mobile App Attestation service is to guarantee at run-time that your mobile app has not been tampered with and is not running on a rooted device, by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on.
On successful attestation of the mobile app integrity, a short-lived JWT token is issued and signed with a secret that only the API server and the Mobile App Attestation service in the cloud are aware of. In the case of failure of the mobile app attestation, the JWT token is signed with a secret that the API server does not know.
Now the app must send the JWT token in the headers of every API request. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT token, and to refuse them when verification fails.
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device or is communicating over a connection that is the target of a man in the middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here) that provides SDKs for several platforms, including iOS, Android, React Native and others. The integration will also need a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which ones to deny.
CONCLUSION
In the end the solution to use in order to protect your API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe.
So using API keys may sound like locking the door of your home and leaving the key under the mat, but not using them is like leaving your car parked with the door closed, but the key in the ignition.
I have a web application where some data (not a file) needs to be digitally signed using a PKI private key. The PKI certificate & private key will be in a USB cryptotoken which registers the certificates with the browser when inserted into the USB slot. This eases the pain of doing authentication using the certificate because I do that by triggering ssl-renegotiation in my application.
However, using a certificate for digital signing seems to be a bit more tricky. I can think of several ways to do this
CAPICOM - http://en.wikipedia.org/wiki/CAPICOM
This will work for browsers which support CAPICOM (eg. IE). However it seems that Microsoft has discontinued this.
Mozilla Crypto Object - https://developer.mozilla.org/en-US/docs/JavaScript_crypto
WebCrypto API - this is not yet supported by most browsers.
A custom Java applet, or some open source, freely available Java applet control.
Any other options?
I am trying to figure out what is the common, convenient and secure way of doing this in a web-application.
Note:
I am OK with just supporting the popular browsers.
I am signing a small piece of data - say 100-200 bytes rather than a file.
I would prefer PKCS#7 signatures.
[Disclosure: I work for CoSign.]
The problem that you're running into is a common one with old-style PKI systems that store the signer's private key at the boundary (eg in a smart card, a token, etc). This system was designed when the PC (and apps running on it) was the focus. But that isn't true this century. Now either the browser or the mobile is the focus.
You have tension between the nature of web apps (they're either running on the host or are sandboxed JavaScript on the browser) versus the idea of local hardware that "protects" the private key.
Breaking out of the browser's sandbox
One design direction is to try to break out of the browser's sandbox to access the local hardware private key store. You've listed a number of options. An additional one is the Chrome USB access library. But all of these solutions are:
Limited to specific browsers
Hard (and expensive) to install
Hard (and expensive) to maintain
High level of administrative overhead to help the users with their questions about keeping the system working.
Re your question 5 "Any other options?"
Yes: Centralized signing
A better option (IMHO) is to sign centrally. This way the keys are kept in a centralized FIPS-secure server. Meanwhile, the signers just use a webapp to authorize the signing. The signers don't need to hold the private key since it is stored in the secure server.
To authenticate the signers, you can use whatever level of security your app needs: user name/password; One Time Password; two factor authentication via SMS; etc.
The CoSign Signature API and CoSign Signature Web Agent are designed for this. Centralized PKI signing is also available from other vendors.
Added in response to comment
From the 2nd part of your answer - If the certificate is stored in the server and retrieved by authenticating the user by using uname/pwd or with 2FA, then why do digital signing at all? i.e. what advantage does it offer over just authenticating the transaction with uname/pwd or 2FA?
A: In the centralized design, the private key does not leave the central server. Rather, the document or data to be signed is sent to the server, is signed, and then the signed doc or data (e.g. XML) is returned to the webapp.
Re: why do this? Because a digitally signed document or data set (e.g. XML) can be verified to guarantee that the document has not been changed since it was signed, and it provides a trust chain to give assurance of the signer's identity. In contrast, passwords, even when strengthened by 2FA etc., only provide the app with signer identity assurance, not third parties.
PKI digital signing enables third parties to assure themselves of the signer's identity through the verification process. And the strength of the assurance can be set, as needed, by choosing different CAs.
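To make the flow concrete, here is a minimal sketch of the "data in, signature out" idea, using Node's built-in crypto module and a PEM key pair; it produces a raw SHA-256/RSA signature for illustration rather than the PKCS#7 structure the question prefers, and it is not the CoSign API:

import { createSign, createVerify } from 'crypto';

// Runs on the central server; the private key never leaves it.
function signData(data: Buffer, privateKeyPem: string): Buffer {
  const signer = createSign('SHA256');
  signer.update(data);
  return signer.sign(privateKeyPem); // returned to the webapp alongside the original data
}

// Any third party holding the signer's public key/certificate can run this check.
function verifySignature(data: Buffer, signature: Buffer, publicKeyPem: string): boolean {
  const verifier = createVerify('SHA256');
  verifier.update(data);
  return verifier.verify(publicKeyPem, signature);
}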
I am trying to implement delegated authorization in a Web API for mobile apps using OAuth 2.0. According to the specification, the implicit grant flow does not support refresh tokens, which means that once an access token is granted for a specific period of time, the user must grant permissions to the app again once the token expires or is revoked.
I guess this is a good scenario for some javascript code running on a browser as it is mentioned in the specification. I am trying to minimize the times the user must grant permissions to the app to obtain a token, so it looks like the Authorization Code flow is a good option as it supports refresh tokens.
However, this flow seems to rely heavily on a web browser for performing the redirections. I am wondering if this flow is still a good option for a mobile app if an embedded web browser is used. Or should I go with the implicit flow?
Clarification: Mobile App = Native App
As stated in other comments and a few sources online, implicit seems like a natural fit for mobile apps; however, the best solution is not always clear cut (and in fact implicit is not recommended, for the reasons discussed below).
Native App OAuth2 Best Practices
Whatever approach you choose (there are a few trade-offs to consider), you should pay attention to the best practices as outlined here for native apps using OAuth2: https://www.rfc-editor.org/rfc/rfc8252
Consider the following options
Implicit
Should I use implicit?
To quote from Section 8.2 https://www.rfc-editor.org/rfc/rfc8252#section-8.2
The OAuth 2.0 implicit grant authorization flow (defined in Section 4.2 of OAuth 2.0 [RFC6749]) generally works with the practice of performing the authorization request in the browser and receiving the authorization response via URI-based inter-app communication.
However, as the implicit flow cannot be protected by PKCE [RFC7636] (which is required in Section 8.1), the use of the Implicit Flow with native apps is NOT RECOMMENDED.
Access tokens granted via the implicit flow also cannot be refreshed without user interaction, making the authorization code grant flow --
which can issue refresh tokens -- the more practical option for native app authorizations that require refreshing of access tokens.
Authorization Code
If you do go with Authorization Code, then one approach would be to proxy through your own web server component, which enriches the token requests with the client secret to avoid storing it in the app distributed to devices (a sketch of such a proxy follows the excerpt below).
Excerpt below from: https://dev.fitbit.com/docs/oauth2/
The Authorization Code Grant flow is recommended for applications that
have a web service. This flow requires server-to-server communication
using an application's client secret.
Note: Never put your client secret in distributed code, such as apps
downloaded through an app store or client-side JavaScript.
Applications that do not have a web service should use the Implicit
Grant flow.
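As a hedged sketch of the proxying approach mentioned above: the mobile app sends only the authorization code to your own backend, which adds the client secret before talking to the provider's token endpoint. Express and Node 18+ fetch are assumed, and the URLs and environment variable names are placeholders:

import express from 'express';

const app = express();
app.use(express.json());

app.post('/oauth/exchange', async (req, res) => {
  const body = new URLSearchParams({
    grant_type: 'authorization_code',
    code: req.body.code,
    redirect_uri: req.body.redirect_uri,
    client_id: process.env.OAUTH_CLIENT_ID as string,
    client_secret: process.env.OAUTH_CLIENT_SECRET as string, // kept on the server, never shipped in the app
  });
  const providerRes = await fetch('https://provider.example.com/oauth2/token', {
    method: 'POST',
    body, // sent as application/x-www-form-urlencoded
  });
  res.status(providerRes.status).json(await providerRes.json());
});

app.listen(3000);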
Conclusion
The final decision should factor in your desired user experience but also your appetite for risk after doing a proper risk assessment of your shortlisted approaches and better understanding the implications.
A great read is here https://auth0.com/blog/oauth-2-best-practices-for-native-apps/
Another one is https://www.oauth.com/oauth2-servers/oauth-native-apps/ which states
The current industry best practice is to use the Authorization Flow
while omitting the client secret, and to use an external user agent to
complete the flow. An external user agent is typically the device’s
native browser, (with a separate security domain from the native app,)
so that the app cannot access the cookie storage or inspect or modify
the page content inside the browser.
PKCE Consideration
You should also consider PKCE, which is described here: https://www.oauth.com/oauth2-servers/pkce/ (a server-side verification sketch follows the checklist below).
Specifically, if you are also implementing the Authorization Server then https://www.oauth.com/oauth2-servers/oauth-native-apps/checklist-server-support-native-apps/ states that you should
Allow clients to register custom URL schemes for their redirect URLs.
Support loopback IP redirect URLs with arbitrary port numbers in order to support desktop apps.
Don’t assume native apps can keep a secret. Require all apps to declare whether they are public or confidential, and only issue client secrets to confidential apps.
Support the PKCE extension, and require that public clients use it.
Attempt to detect when the authorization interface is embedded in a native app’s web view, instead of launched in a system browser, and reject those requests.
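To illustrate the PKCE requirement from the checklist above, this is a minimal sketch of what the authorization server does at the token endpoint for the S256 method: hash the code_verifier sent by the public client and compare it with the code_challenge stored when the authorization code was issued (Node's crypto module assumed; storage and lookup are out of scope here):

import { createHash } from 'crypto';

function base64UrlSha256(input: string): string {
  return createHash('sha256')
    .update(input)
    .digest('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

// True when the verifier presented at the token endpoint matches the stored challenge.
function pkceMatches(codeVerifier: string, storedCodeChallenge: string): boolean {
  return base64UrlSha256(codeVerifier) === storedCodeChallenge;
}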
Web Views Consideration
There are many examples in the wild using web views, i.e. an embedded user-agent, but this approach should be avoided (especially when the app is not first party) and in some cases may result in you being banned from using an API, as the excerpt below from here demonstrates:
Any attempt to embed the OAuth 2.0 authentication page will result in
your application being banned from the Fitbit API.
For security consideration, the OAuth 2.0 authorization page must be
presented in a dedicated browser view. Fitbit users can only confirm
they are authenticating with the genuine Fitbit.com site if they have
the tools provided by the browser, such as the URL bar and Transport
Layer Security (TLS) certificate information.
For native applications, this means the authorization page must open
in the default browser. Native applications can use custom URL schemes
as redirect URIs to redirect the user back from the browser to the
application requesting permission.
iOS applications may use the SFSafariViewController class instead of
app switching to Safari. Use of the WKWebView or UIWebView class is
prohibited.
Android applications may use Chrome Custom Tabs instead of app
switching to the default browser. Use of WebView is prohibited.
To further clarify, here is a quote from this section of a previous draft of the best practice link provided above:
Embedded user-agents, commonly implemented with web-views, are an
alternative method for authorizing native apps. They are however
unsafe for use by third-parties by definition. They involve the user
signing in with their full login credentials, only to have them
downscoped to less powerful OAuth credentials.
Even when used by trusted first-party apps, embedded user-agents
violate the principle of least privilege by obtaining more powerful
credentials than they need, potentially increasing the attack surface.
In typical web-view based implementations of embedded user-agents, the
host application can: log every keystroke entered in the form to
capture usernames and passwords; automatically submit forms and bypass
user-consent; copy session cookies and use them to perform
authenticated actions as the user.
Encouraging users to enter credentials in an embedded web-view without
the usual address bar and other identity features that browsers have
makes it impossible for the user to know if they are signing in to the
legitimate site, and even when they are, it trains them that it's OK
to enter credentials without validating the site first.
Aside from the security concerns, web-views do not share the
authentication state with other apps or the system browser, requiring
the user to login for every authorization request and leading to a
poor user experience.
Due to the above, use of embedded user-agents is NOT RECOMMENDED,
except where a trusted first-party app acts as the external user-
agent for other apps, or provides single sign-on for multiple first-
party apps.
Authorization servers SHOULD consider taking steps to detect and block
logins via embedded user-agents that are not their own, where
possible.
Some interesting points are also raised here: https://security.stackexchange.com/questions/179756/why-are-developers-using-embedded-user-agents-for-3rd-party-auth-what-are-the-a
Unfortunately, I don't think there is a clear answer to this question. However, here are the options that I've identified:
If it is ok to ask the user for his/her credentials, then use the Resource Owner Password Credentials. However, this may not be possible for some reasons, namely
Usability or security policies forbid the insertion of the password directly at the app
The authentication process is delegated on an external Identity Provider and must be performed via an HTTP redirect-based flow (e.g. OpenID, SAMLP or WS-Federation)
If usage of a browser based flow is required, then use the Authorization Code Flow. Here, the definition of the redirect_uri is a major challenge, for which there are the following options:
Use the technique described in https://developers.google.com/accounts/docs/OAuth2InstalledApp, where a special redirect_uri (e.g. urn:ietf:wg:oauth:2.0:oob) signals the authorization endpoint to show the authorization code instead of redirecting back to the client app. The user can manually copy this code or the app can try to obtain it from the HTML document title.
Use a localhost server on the device (the port management may not be easy).
Use a custom URI scheme (e.g. myapp://...) that, when dereferenced, triggers a registered "handler" (the details depend on the mobile platform; a React Native sketch of this option follows this list).
If available, use a special "web view", such as the WebAuthenticationBroker on Windows 8, to control and access the HTTP redirect responses.
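As a hedged sketch of the custom URI scheme option from the list above, using React Native's Linking API (the myapp:// scheme also has to be registered natively in the Android manifest / iOS URL types, which is not shown here):

import { Linking } from 'react-native';

function extractAuthorizationCode(url: string): string | null {
  const match = url.match(/[?&]code=([^&]+)/);
  return match ? decodeURIComponent(match[1]) : null;
}

// Fired when the browser redirects to e.g. myapp://oauth/callback?code=abc123
Linking.addEventListener('url', ({ url }) => {
  const code = extractAuthorizationCode(url);
  if (code) {
    // exchange the code for tokens, ideally via your own backend as discussed earlier
  }
});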
Hope this helps
Pedro
TL;DR: Use Authorization Code Grant with PKCE
1. Implicit Grant Type
The implicit grant type is quite popular with mobile apps. But it was not meant to be used like this. There are security concerns around the redirect. Justin Richer states:
The problem comes when you realize that unlike with a remote server
URL, there is no reliable way to ensure that the binding between a
given redirect URI and a specific mobile application is honored. Any
app on the device can try to insert itself into the redirection
process and cause it to serve the redirect URI. And guess what: if
you’ve used the implicit flow in your native application, then you
just handed the attacker your access token. There’s no recovery from
that point — they’ve got the token and they can use it.
And together with the fact that it does not let you refresh the access token, it is better to avoid it.
2. Authorization Code Grant Type
The authorization code grant requires a client secret, but you should not store sensitive information in the source code of your mobile app, since people can extract it. To avoid exposing the client secret, you have to run a server as a middleman, as Facebook writes:
We recommend that App Access Tokens should only be used directly from
your app's servers in order to provide the best security. For native
apps, we suggest that the app communicates with your own server and
the server then makes the API requests to Facebook using the App
Access Token.
Not an ideal solution, but there is a new, better way to do OAuth on mobile devices: Proof Key for Code Exchange.
3. Authorization Code Grant Type with PKCE (Proof Key for Code Exchange)
Out of these limitations, a new technique was created that lets you use the Authorization Code grant without a client secret. You can read the full RFC 7636 or this short introduction.
PKCE (RFC 7636) is a technique to secure public clients that don't use
a client secret.
It is primarily used by native and mobile apps, but the technique can
be applied to any public client as well. It requires additional
support by the authorization server, so it is only supported on
certain providers.
from https://oauth.net/2/pkce/
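A short sketch of the client-side part of PKCE as defined in RFC 7636: generate a random code_verifier, derive the S256 code_challenge from it, send the challenge with the authorization request and the verifier only with the later token request. Node's crypto module is used here for brevity; a React Native app would need an equivalent source of randomness and SHA-256:

import { createHash, randomBytes } from 'crypto';

const base64Url = (buf: Buffer): string =>
  buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

const codeVerifier = base64Url(randomBytes(32)); // 43 characters, kept by the client
const codeChallenge = base64Url(createHash('sha256').update(codeVerifier).digest());

// code_challenge and code_challenge_method=S256 go in the authorization request;
// code_verifier is sent only in the token request, so an intercepted code is useless on its own.
console.log({ codeVerifier, codeChallenge });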
Using a webview in your mobile application should be an affordable way to implement the OAuth 2.0 protocol on the Android platform.
As for the redirect_uri field, I think http://localhost is a good choice, and you don't have to run an HTTP server inside your application, because you can override the onPageStarted function of the WebViewClient class and stop loading the web page from http://localhost after you check the url parameter.
@Override
public void onPageStarted(final WebView webView, final String url, final Bitmap favicon) {
    // If url starts with the http://localhost redirect_uri, stop loading here and extract the authorization code from its query string.
}
The smoothest user experience for authentication, and the easiest to implement, is to embed a webview in your app. Process the responses received by the webview from the authentication endpoint and detect error (user cancelled) or approval (and extract the token from the URL query parameters).
And I think you can actually do that on all platforms. I have successfully made this work on the following: iOS, Android, Mac, Windows Store 8.1 apps and Windows Phone 8.1 apps. I did this for the following services: Dropbox, Google Drive, OneDrive, Box, Basecamp. For the non-Windows platforms, I was using Xamarin, which supposedly does not expose the entire platform-specific API surface, yet it did expose enough to make this possible. So it is a pretty accessible solution, even from a cross-platform perspective, and you don't have to worry about the UI of the authentication form.
In my application, I just want to upload some data on the server without interacting with the user.
How do I silently upload data on the server in J2ME without asking the user for Internet usage?
In order to upload silently, the user must approve at least once that your application is allowed to connect to the internet, as specified by the MIDP 2.0 Security Architecture.
First you have to sign your MIDlet with a certificate from a Certificate Authority (commonly referred to as a CA) such as Verisign, Thawte, Java Verified, etc. You have to choose your CA depending on the devices you are targeting. The device will only recognize the CAs installed as root certificates; if it doesn't have the root certificate of the CA you chose, your application will not be treated as a trusted third-party application. This is explained in simple steps in the Nokia Wiki.
The second step is to add the following line to your JAD file:
MIDlet-Permissions: javax.microedition.io.Connector.http
This will request HTTP connection permission when the application is installed.
In this way the user will just be prompted once, and will be allowed to grant the permission permanently. Some devices will not allow a permanent permission if the application is not signed.
This is impossible. All the phones ask the user before letting an application use internet services.
One possibility could be signing the application somehow, but that would work on very few phones, if any.
If your application is signed by Java Verified or similar, you will be able to let the user allow all future HTTP connections, rather than having to authorise each one individually.