I am working on a REST API to be used by a mobile application I am writing, mostly for the purpose of communicating with a database.
The mobile application makes calls to URLs like this:
example.com/mobileapi/getinfo
And carries certain POST payload along with each call.
I'm not worried about user authentication etc.
However, what I am worried about is that if someone were to use the mobile application with a network monitoring tool like Fiddler or Wireshark, they could document all the URLs being called along with all the POST parameters. That would be enough information to create their own app that uses my API.
How can I prevent this? I considered hardcoding a key into my application and including it as a POST parameter with each request, but that would be visible as well.
What you want to do is employ mutually-authenticated SSL, so that your server will only accept incoming connections from your app and your app will only communicate with your server.
Here's the high-level approach. Create a self-signed server SSL certificate and deploy it on your web server. If you're using Android, you can use the keytool included with the Android SDK for this purpose; if you're using another app platform, similar tools exist for it as well. Then create a self-signed client certificate and deploy it within your application in a custom keystore included in your application as a resource (keytool will generate this as well). Configure the server to require client-side SSL authentication and to accept only the client certificate you generated. Configure the client to use that client-side certificate to identify itself and to accept only the one server-side certificate you installed on your server.
If someone/something other than your app attempts to connect to your server, the SSL connection will not be created, as the server will reject incoming SSL connections that do not present the client certificate that you have included in your app.
A step-by-step for this is a much longer answer than is warranted here. I would suggest doing this in stages, as there are resources on the web about how to deal with self-signed SSL certificates in Android (I'm not as familiar with how to do this on other mobile platforms), both server and client side. There is also a complete walk-through in my book, Application Security for the Android Platform, published by O'Reilly.
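That said, here is a rough sketch of what the client side ends up looking like on Android, assuming the client key pair and the server's certificate are bundled as BKS keystores in raw resources; the resource names and passwords below are placeholders for whatever you generate with keytool:

import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public final class MutualTlsClient {
    // Builds a connection that presents our client certificate and trusts only the
    // server's self-signed certificate. R.raw.client / R.raw.truststore and the
    // passwords are placeholders.
    public static HttpsURLConnection open(android.content.Context context, String url) throws Exception {
        KeyStore clientKeys = KeyStore.getInstance("BKS");
        try (InputStream in = context.getResources().openRawResource(R.raw.client)) {
            clientKeys.load(in, "client-keystore-pass".toCharArray());
        }
        KeyStore serverTrust = KeyStore.getInstance("BKS");
        try (InputStream in = context.getResources().openRawResource(R.raw.truststore)) {
            serverTrust.load(in, "truststore-pass".toCharArray());
        }

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeys, "client-keystore-pass".toCharArray());
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(serverTrust);

        SSLContext ssl = SSLContext.getInstance("TLS");
        ssl.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        HttpsURLConnection conn = (HttpsURLConnection) new URL(url).openConnection();
        conn.setSSLSocketFactory(ssl.getSocketFactory());
        return conn;
    }
}

On the server side you configure the equivalent: present the server certificate, require a client certificate, and trust only the one you generated.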
Related
For testing purposes I would like to enable the 'Incoming Client Certificates' option in my Azure App Service (running a WCF webservice), and see if my Client application can still connect to the webservice. Since I am still in a testing phase, my app service still has the .azurewebsites.net domain name.
However, I can't seem to figure out how to get a proper client certificate that the server will accept (without switching to a custom domain name, which I know will work).
Currently, I see 2 possible routes to a solution:
Somehow get my hands on a .cer that is signed by a CA trusted by the App Service server.
Generate a self-signed .pfx and .cer with my own self-signed CA. Import the pfx on the App Service and install the .cer on the client.
Neither direction has yielded any success so far. Does anyone have any experience with this?
Per my understanding, the client certificate is used by client systems to make authenticated requests to a remote server. In this case, your webservice is the remote server in a client/server setup. As you point out, "validating this certificate is the responsibility of the web app. So this means that any certificate will be valid as long as you don't validate anything". This does not depend on whether or not you have a custom domain on your web app service.
If you want to use client cert authentication with Azure app, you can refer to How To Configure TLS Mutual Authentication for Web App.
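For illustration, the validation that is "the responsibility of the web app" looks roughly like the sketch below. It is written in Java even though your service is WCF, just to show the shape of the check; X-ARR-ClientCert is the header App Service uses to pass the certificate to your code, and the expected thumbprint is a placeholder:

import java.io.ByteArrayInputStream;
import java.security.MessageDigest;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;

public final class ClientCertCheck {
    // Decode the base64 DER certificate from the X-ARR-ClientCert header and compare
    // its SHA-1 thumbprint with a value we already know. EXPECTED_THUMBPRINT is a placeholder.
    static final String EXPECTED_THUMBPRINT = "PLACEHOLDER_THUMBPRINT";

    public static boolean isTrustedClient(String headerValue) throws Exception {
        byte[] der = Base64.getDecoder().decode(headerValue);
        X509Certificate cert = (X509Certificate) CertificateFactory.getInstance("X.509")
                .generateCertificate(new ByteArrayInputStream(der));
        cert.checkValidity(); // reject expired or not-yet-valid certificates
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
        StringBuilder thumbprint = new StringBuilder();
        for (byte b : digest) {
            thumbprint.append(String.format("%02X", b));
        }
        return EXPECTED_THUMBPRINT.equals(thumbprint.toString());
    }
}

This is also why a self-signed certificate works here: the connection is accepted either way, and it is your own comparison against a known thumbprint that decides whether the caller is trusted.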
If the server has requested a client certificate during the handshake and the client certificate has signing capability, the client is expected to send a CertificateVerify message to the server. It contains a signature over the hash of all handshake messages from the ClientHello up to that point, which are buffered on the server side. The server's TLS layer verifies this using the client's public key (which is in the client certificate received earlier) and compares it against its own calculated hash. It calls back to the application layer if this fails.
The application needs to handle it at that point and return its own error or continue with the session. https://www.rfc-editor.org/rfc/rfc5246#section-7.4.8
One example of this with the wolfSSL library is https://github.com/wolfSSL/wolfssl/blob/14ef517b6113033c5fc7506a9da100e5e341bfd4/wrapper/CSharp/wolfSSL-Example-IOCallbacks/wolfSSL-Example-IOCallbacks.cs#L145
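The same general pattern is available in the JDK's TLS stack. As a rough sketch (not the wolfSSL example above): the server requests a client certificate, and the application's own trust manager decides whether to accept it and continue the session. Key material setup is omitted, and serverKeyManagers / isKnownClient are placeholders:

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.KeyManager;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public final class ClientAuthServer {
    public static SSLServerSocket listen(KeyManager[] serverKeyManagers) throws Exception {
        TrustManager clientCertPolicy = new X509TrustManager() {
            @Override
            public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
                // Application-level decision point: throwing here aborts the handshake,
                // returning normally lets the session continue.
                if (!isKnownClient(chain[0])) {
                    throw new CertificateException("unrecognized client certificate");
                }
            }
            @Override
            public void checkServerTrusted(X509Certificate[] chain, String authType) { }
            @Override
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        };

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(serverKeyManagers, new TrustManager[] { clientCertPolicy }, null);
        SSLServerSocket server = (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket(8443);
        server.setNeedClientAuth(true); // the server now sends a CertificateRequest and requires a reply
        return server;
    }

    private static boolean isKnownClient(X509Certificate cert) {
        return true; // placeholder policy
    }
}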
I have a .NET Web API publicly exposed and also a Xamarin Forms app which uses the API. The app needs to be extremely secure due to the data it manages.
I will create an HTTPS certificate for the Web API.
The Xamarin Forms app will have a login/password to validate against a local Active Directory via a /token endpoint, and an Authorize attribute on all endpoints to ensure that every HTTP call carries the bearer token. I based my implementation on this one:
http://bitoftech.net/2014/06/01/token-based-authentication-asp-net-web-api-2-owin-asp-net-identity/
Additionally, the customer has asked us for Client Certificate Authentication, and I don't fully understand how this works.
1. I need to add a certificate to the Xamarin project, right? How do I add it? How do I generate it?
2. In the Web API I need to validate that each HTTP call has the certificate attached.
I found this but not sure if it will work:
http://www.razibinrais.com/secure-web-api-with-client-certificate/
However, when investigating this I also found something about certificate pinning, which is basically security the other way around: the Xamarin app validates that the server certificate is associated with the right server (or something like that), so a man-in-the-middle attack is not possible.
I found how to implement it here:
https://thomasbandt.com/certificate-and-public-key-pinning-with-xamarin
My questions are:
1. Do I need both?
2. Is there something else I should research on this journey?
Certificate pinning and Client Certificate Authentication are two very different things. Certificate pinning makes sure your app is talking to the server it expects to talk to. It also prevents eavesdropping through a 'man in the middle' attack. I just recently wrote an article about this on my blog.
Client Certificate Authentication works the other way around. It adds an extra layer of security so your server can be sure only clients that have the certificate can communicate successfully with it. However, since apps can be decompiled without a lot of effort, this client certificate can 'easily' be obtained by a malicious user. So this isn't a silver bullet.
From my experience, Client Certificate Authentication is often used in enterprise apps, when there is an Enterprise Mobility Management solution in place (e.g. MobileIron, Microsoft Intune, or others), where the EMM solution can push the certificates to the user's device out of band.
Should you use both? That really depends on the requirements of your customer, since they mitigate two very different problems.
The Web API link you included looks like it should do the server job properly at first sight. That article also shows how to generate a client certificate with a PowerShell command.
Generating a client-side certificate:
Use the PowerShell command in the article that you referenced in your question.
Otherwise, this gist might help you on your way.
Installation:
Add the certificate file to each platform-specific project as a resource. This is usually done in the form of a .p12 file.
Usage:
That all depends on which HttpClient you are using.
If you use the provided Web API solution, you should add the certificate contents as an X-ARR-ClientCert header with each request.
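For illustration only, this is the general shape of such a request. It is sketched in Java rather than the C#/Xamarin of your project, and the key store stream, alias, and password are placeholders; the point is simply that the certificate's DER bytes go into the header as base64:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Base64;

public final class ClientCertRequest {
    // Pull the certificate out of the bundled .p12 and send it base64-encoded in the
    // header that the linked server-side article validates. Alias and password are placeholders.
    public static HttpURLConnection open(InputStream p12Stream, String url) throws Exception {
        KeyStore p12 = KeyStore.getInstance("PKCS12");
        p12.load(p12Stream, "p12-password".toCharArray());
        Certificate cert = p12.getCertificate("client");
        String headerValue = Base64.getEncoder().encodeToString(cert.getEncoded());

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("X-ARR-ClientCert", headerValue);
        return conn;
    }
}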
We are working on a mobile app that communicates with the backend through a REST API over SSL. The mobile device performs certificate validation on the API call (using the standard libraries of the mobile frameworks).
If we try to connect the mobile device through a proxy (such as Charles), we see all the traffic, but it is encrypted - as expected.
However, if I enable SSL proxying, generate a root certificate, and install that certificate on my device, I will see all the data in clear text through Charles - again, as expected.
The question is: how do we prevent this?
The main goal, of course, is to expose data ONLY if the device calls the permitted server with a valid certificate for that server.
Offhand, the only way to prevent such a thing if the attacker has that level of access to the device would be to use SSL thumbprinting. You initiate a connection to the server, retrieve the SSL certificate, and compare it to a hard-coded value within the app code. If it does not match, abort the connection and don't send the data.
The issue with this, however, is the overhead when the SSL certificate is renewed. You would need to release an update to the app with a fresh thumbprint value, and that would also stop people from using the app until they updated to the latest version.
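A minimal sketch of that check in Java (the expected fingerprint is a placeholder you would hard-code in the app):

import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

public final class ThumbprintCheck {
    static final String EXPECTED_FINGERPRINT = "PLACEHOLDER_BASE64_SHA256";

    // Connect, read the server's leaf certificate, and compare its SHA-256 fingerprint
    // against the value baked into the app. A Charles-style proxy presents a different
    // certificate, so the comparison fails and nothing is sent.
    public static HttpsURLConnection openPinned(String url) throws Exception {
        HttpsURLConnection conn = (HttpsURLConnection) new URL(url).openConnection();
        conn.connect();
        Certificate leaf = conn.getServerCertificates()[0];
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(leaf.getEncoded());
        String fingerprint = Base64.getEncoder().encodeToString(digest);
        if (!EXPECTED_FINGERPRINT.equals(fingerprint)) {
            conn.disconnect();
            throw new SecurityException("server certificate does not match the pinned fingerprint");
        }
        return conn;
    }
}

Pinning the certificate's public key (the SubjectPublicKeyInfo) instead of the whole certificate softens the renewal problem, since the same key pair can be kept across certificate renewals.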
The only way to prevent this is through certificate pinning, but if the attacker is able to install a root certificate before you connect for the first time to your API, you can still be MiM'ed.
I am developing a RESTful API layer for my app. The app would be used on premises where HTTPS support is not available. We need to support both web apps and mobile apps. We are using Node/Expressjs on the server side. My two concerns are:
Is there a way we could setup secure authentication without HTTPS?
Is there a way we could reuse the same authentication layer on both web app (backbonejs) and native mobile app (iOS)?
I think you are confusing authenticity and confidentiality. It's totally possible to create an API that securely validates that the caller is who they say they are, using a MAC (most often an HMAC). The assumption, though, is that you've securely established a shared secret, which you could do in person, but that's pretty inconvenient.
Amazon S3 is an example of an API that authenticates its requests without SSL/TLS. It does so by dictating a specific way in which the caller creates an HMAC based on the parts of the HTTP request. It then verifies that the requester is actually a person allowed to ask for that object. Amazon relies on SSL to initially establish your shared secret at registration time, but SSL is not needed to correctly perform an API call that can be securely authenticated as originating from an authorized individual—that can be plain old HTTP.
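As a sketch of the general construction (not Amazon's exact string-to-sign, and in Java here only for concreteness; the same thing works in Node's crypto module): both sides hold a shared secret, the client signs a canonical description of the request, and the server recomputes the signature to verify who sent it. The header layout and key lookup are placeholders:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class RequestSigner {
    // Signs "METHOD\nPATH\nTIMESTAMP" with the shared secret. The server looks up the
    // secret for the caller's key id, recomputes this value, and compares.
    public static String sign(String secret, String method, String path, String timestamp) throws Exception {
        String stringToSign = method + "\n" + path + "\n" + timestamp;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getEncoder().encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
    }
    // Client sends e.g.  Authorization: MyApp <keyId>:<signature>  plus the timestamp header;
    // the server rejects requests whose timestamp is too old, to limit replay.
}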
Now the downside to that approach is that all data passing in both directions is visible to anyone. While the authorization data sent will not allow an attacker to impersonate a valid user, the attacker can see anything that you transmit—thus the need for confidentiality in many cases.
One use case for publicly transmitted API responses with S3 includes websites whose code is hosted on one server, while its images and such are hosted in S3. Websites often use S3's Query String Authentication to allow browsers to request the images directly from S3 for a small window of time, while also ensuring that the website code is the only one that can authorize a browser to retrieve that image (and thus charge the owner for bandwidth).
Another example of an API authentication mechanism that allows the use of non-SSL requests is OAuth. Its obsolete 1.0 family used such signatures exclusively (even if you used SSL), and the OAuth 2.0 specification defines several access token types, including the OAuth2 HTTP MAC type, whose main purpose is to simplify and improve HTTP authentication for services that are unwilling or unable to employ TLS for every request (though it does require SSL for initially establishing the secret). The OAuth2 Bearer type, by contrast, requires SSL but keeps things simpler (no normalization, the bane of all developers using request-signing APIs without well-established and tested libraries).
To sum it up, if all you care about is securely establishing the authenticity of a request, that's possible. If you care about confidentiality during transport of the response, you'll need some kind of transport security, and TLS is easier to get right than encrypting in your app code (though other options may be feasible).
Is there a way we could setup secure authentication without HTTPS?
If you mean SSL, no. Whatever you send from your browser to the web server will be unencrypted, so third parties can listen in. HTTPS is not authentication; it's encryption of the traffic between the client and the server.
Is there a way we could reuse the same authentication layer on both web app (backbonejs) and native mobile app (iOS)?
Yes. As you say, it is a layer, so its interface will be independent of the client; it will be HTTP, and if the web app is on the same origin as that layer there will be no problem (e.g. api.myapp.com accessed from myapp.com). Your native mobile app can make HTTP requests, too.
With or without SSL, you can be secure if you use a private/public key scenario where you require the user to sign each request prior to sending it. Once you receive the request, you verify the signature with the user's public key (the private key is never sent over the wire) and check that what was signed matches the operation the user is requesting. You base this on a UTC timestamp, which also requires that all servers using this model keep their clocks very accurate.
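A sketch of that scheme (key generation and distribution are omitted; the payload layout, algorithm choice, and replay window are illustrative, and the timestamp travels with the request so the server can rebuild the signed string):

import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.time.Duration;
import java.time.Instant;
import java.util.Base64;

public final class SignedRequests {
    // Client side: sign the request details plus a UTC timestamp with the private key.
    public static String sign(PrivateKey key, String method, String path, String body, Instant utc) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update((method + "\n" + path + "\n" + body + "\n" + utc).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig.sign());
    }

    // Server side: rebuild the same string, verify with the public key, and reject stale timestamps.
    public static boolean verify(PublicKey key, String method, String path, String body, Instant utc,
                                 String signatureBase64) throws Exception {
        if (Duration.between(utc, Instant.now()).abs().toMinutes() > 5) {
            return false; // clock-skew / replay window of 5 minutes, purely illustrative
        }
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update((method + "\n" + path + "\n" + body + "\n" + utc).getBytes(StandardCharsets.UTF_8));
        return sig.verify(Base64.getDecoder().decode(signatureBase64));
    }
}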
Amazon Web Services in particular uses this security method and it is secure enough to use without SSL although they do not recommend it.
I would seriously consider investing the small amount needed to support SSL, as it gives you more credibility. I personally would not consider an organization credible without it.
This question already has answers here and was closed 11 years ago.
Possible duplicate: securing connection to php server
I'm writing a mobile application to access an online database (I'm more interested in the high-level algorithm/protocol than the platform-specific implementation).
Since keeping the DB updated requires a lot of work, I want to restrict access to my sponsored application only (I don't want other apps to take advantage of my DB for free). To do this I need to authenticate the application itself, but how can I do it?
If I store some sort of credentials within the app, somebody could try to disassemble the program, retrieve the data, and write their own application bypassing mine (even if I encrypt the credentials, I still need to store the decryption key somewhere...)
What you want to do is employ mutually-authenticated SSL, so that your server will only accept incoming connections from your app and your app will only communicate with your server.
Here's the high-level approach. Create a self-signed server SSL certificate and deploy it on your web server. You can use the keytool included with the Android SDK (if you're using Android; there are similar tools out there for other platforms) for this purpose. Then create a self-signed client certificate and deploy it within your application in a custom keystore included in your application as a resource (keytool will generate this as well). Configure the server to require client-side SSL authentication and to accept only the client certificate you generated. Configure the client to use that client-side certificate to identify itself and to accept only the one server-side certificate you installed on your server.
If someone/something other than your app attempts to connect to your server, the SSL connection will not be created, as the server will reject incoming SSL connections that do not present the client certificate that you have included in your app.
A step-by-step for this is a much longer answer than is warranted here. I would suggest doing this in stages, as there are resources on the web about how to deal with self-signed SSL certificates in Android, both server and client side. There is also a complete walk-through for Android applications in my book, Application Security for the Android Platform, published by O'Reilly.
Now...you are right in that someone with access to the mobile app could recover the private key associated with the client-side certificate. It would be in a BKS keystore that is encrypted, but your app would need to supply a password to open that keystore. So, someone could reverse engineer your app (fairly easy on the Android platform), grab the password, grab the keystore, and decrypt it to recover the client-side private key. You can mitigate this somewhat by obfuscating the app to make recovering the keystore password more difficult, or by asking the user to log in to the app and using that password to derive the password to the keystore, etc. It really depends on the level of risk you're willing to take on for your application.
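As a sketch of that last mitigation: derive the keystore password from the password the user types at login, so it never appears in the APK. The PBKDF2 parameters below are illustrative, and the salt would be stored alongside the keystore:

import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public final class KeystorePassword {
    // Stretches the user's login password into a keystore password. The iteration count
    // and output size are illustrative; the salt is random, generated once, and stored
    // next to the keystore (it does not need to be secret).
    public static char[] derive(char[] userPassword, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        PBEKeySpec spec = new PBEKeySpec(userPassword, salt, 10000, 256);
        byte[] derived = factory.generateSecret(spec).getEncoded();
        return Base64.getEncoder().encodeToString(derived).toCharArray();
    }
}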