AFNetworking Certificate Pinning UIWebView

I'm using the AFNetworking UIWebView category and I was wondering if there is any way to load a URL with a certificate pinning check.
Best Regards,
fnxpt

UIWebView does not expose any direct API for that, but it can still be implemented using NSURLProtocol, which gives you the ability to validate the certificate of all outgoing network requests. It does take a lot of work though, and a better solution is to use WKWebView, which provides an explicit way of doing pinning via the WKNavigationDelegate's webView:didReceiveAuthenticationChallenge:completionHandler: method.
There are more details in the "Pinning in Webviews" section of this article: https://datatheorem.github.io/TrustKit/getting-started.html; TrustKit is also a library for implementing public key pinning in iOS apps.
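For illustration, here is a minimal sketch (in Swift, with WKWebView) of what that delegate callback can look like; it assumes a DER-encoded copy of the expected server certificate is bundled with the app under the hypothetical name pinnedCert.der:

    import WebKit
    import Security

    final class PinningNavigationDelegate: NSObject, WKNavigationDelegate {

        // DER-encoded copy of the expected server certificate, bundled with the app.
        // "pinnedCert.der" is a placeholder resource name.
        private lazy var pinnedCertificateData: Data? = {
            guard let url = Bundle.main.url(forResource: "pinnedCert", withExtension: "der") else { return nil }
            return try? Data(contentsOf: url)
        }()

        func webView(_ webView: WKWebView,
                     didReceive challenge: URLAuthenticationChallenge,
                     completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {

            guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
                  let serverTrust = challenge.protectionSpace.serverTrust,
                  let pinnedData = pinnedCertificateData,
                  SecTrustEvaluateWithError(serverTrust, nil),   // normal chain validation first
                  let leaf = (SecTrustCopyCertificateChain(serverTrust) as? [SecCertificate])?.first
            else {
                completionHandler(.cancelAuthenticationChallenge, nil)
                return
            }

            // Pinning check: the leaf certificate served by the host must match the bundled copy.
            if SecCertificateCopyData(leaf) as Data == pinnedData {
                completionHandler(.useCredential, URLCredential(trust: serverTrust))
            } else {
                completionHandler(.cancelAuthenticationChallenge, nil)
            }
        }
    }

Set an instance of this class as the web view's navigationDelegate before calling load(_:). Note that SecTrustCopyCertificateChain requires iOS 15; on older systems SecTrustGetCertificateAtIndex serves the same purpose.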

Related

How to check security level of a secure service?

My client-side secure algorithm needs a random oracle, and I am using an HTTPS web service for it:
https://www.random.org/integers/?num=1&min=0&max=255&col=1&base=16&format=plain&rnd=new
... But many people point out that
random numbers transferred over the public internet are not cryptographically secure for most purposes
... So, the questions are:
Is the described architecture secure? Do I need to use HTTPS POST instead of GET? Do I need to add some cryptographic layer to the response?
Is there a way to check/quantify "how secure" it is, to compare it with other solutions?
Context
It is not "so simple" and I really need a webservice, that is like a random oracle, must be an exteral device input (the oracle)... I can't use a local client algorithm (ex. local CSPRNG). The focus in the question is the secure communication protocol, for a very simple webservice (simple and fast RPC).
PS: here is a JavaScript fragment example of the client-side service request: xmlHttp.open("GET", url, true), where url is the random.org link above.
Most systems provide a cryptographically secure pseudorandom number generator (CSPRNG); use that.
Per the context update, use HTTPS and pin the certificate so you know there is no MITM and that you are contacting the correct site. Note that you still have to trust the site.
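As a concrete illustration of the first point, here is a minimal sketch (in Swift, using SecRandomCopyBytes) of pulling bytes from the platform CSPRNG; the question's client is JavaScript, so treat this purely as an example of the idea:

    import Foundation
    import Security

    // Returns `count` cryptographically secure random bytes from the system CSPRNG,
    // or nil if the generator fails (which is extremely rare in practice).
    func secureRandomBytes(count: Int) -> Data? {
        var bytes = [UInt8](repeating: 0, count: count)
        let status = SecRandomCopyBytes(kSecRandomDefault, count, &bytes)
        return status == errSecSuccess ? Data(bytes) : nil
    }

    // Example: one random byte in 0...255, printed in hex, i.e. what the
    // random.org request above returns over the network.
    if let data = secureRandomBytes(count: 1) {
        print(String(format: "%02x", data[0]))
    }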

How risky is publishing your App Id when using feed dialog?

I'm concerned that when I use Facebook's feed dialog I'm making my App ID public and thus open to exploitation. Is there much risk in my App ID being public? If yes, what are those risks? Are there any ways that I can minimise those risks?
EDIT: The examples I've been looking at make use of the JavaScript SDK, so getting the App ID would be relatively easy. https://developers.facebook.com/docs/reference/dialogs/feed/
I haven't seen any examples using the PHP SDK, but I think the App ID would still be present in the URL.
EDIT 2: Found some more information here: App_id spoofing and misuse
No, there's no risk. Your Application ID is already public information; it's your Application Secret that you must not leak.

Securing a RESTful API

For my current side project, which is a modular web management system (which could contain modules for database management, CMS, project management, resource management, time tracking, etc.), I want to expose the entire system as a RESTful API, as I think that will make the system more usable. The system itself is going to be coded in ASP.NET MVC3; however, if I make all the data/actions available through a RESTful API, that should make the system very easy to use with PHP, Ruby, Python, etc. (users could even make their own interface to manage certain data if they wanted).
However, the one thing that seems hard to do easily (from the point of view of the user of the RESTful API) is securing AJAX functionality. If I wanted something that was complex to set up and use, I would just create SOAP services, but the whole drive for using a RESTful API is that it is very easy. The most common way of securing a RESTful API is with a key that is associated with a user. This works fine when all the calls are done on the server side; however, once you start doing AJAX functionality, that changes. I would want the RESTful API to be callable directly from JavaScript, but anyone with Firebug would easily be able to see the key the user is using, allowing that person access to the system. Is there a better way to secure a RESTful API that does not make the user of the API do complex things just to set it up?
For one thing, you can't prevent the user of your API from exposing his key.
But, if you are writing a client for your API, I would suggest using your server side to do any requests to the API, while your HTML pages provide the data from the user. If you absolutely must use JavaScript to make calls to the API and you still have a server side that populates the page in question, then you can obscure the actual key via a one-way digest algorithm in a timestamp-dependent way while generating the page, and have your API check that digest in a time-dependent way too.
Also, I'd suggest that you take a look into OAuth Nonces and timestamps a bit more deeply. Twitter and other API providers obviously have this problem too, so they must be doing something with the Nonce values.
It is possible to add a signature to requests from JavaScript, but I'm not sure how "RESTful" the URLs would be with this extra info. And there you have the same problem: anyone who can see your signature-generating algorithm can create his own signature, which your server will accept as well.
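As a rough sketch of the timestamp-dependent digest idea mentioned above (shown in Swift with CryptoKit rather than JavaScript; the header names and the exact message layout are assumptions, not a standard), the client could send an HMAC over the request path, a timestamp and a nonce:

    import Foundation
    import CryptoKit

    // Builds a request carrying a timestamped HMAC-SHA256 signature.
    // The server recomputes the HMAC from the shared secret, rejects stale
    // timestamps and reused nonces, and compares signatures.
    func signedRequest(url: URL, apiKey: String, secret: String) -> URLRequest {
        let timestamp = String(Int(Date().timeIntervalSince1970))
        let nonce = UUID().uuidString

        let message = Data("\(url.path)|\(timestamp)|\(nonce)".utf8)
        let key = SymmetricKey(data: Data(secret.utf8))
        let mac = HMAC<SHA256>.authenticationCode(for: message, using: key)
        let signature = mac.map { String(format: "%02x", $0) }.joined()

        var request = URLRequest(url: url)
        request.setValue(apiKey, forHTTPHeaderField: "X-Api-Key")
        request.setValue(timestamp, forHTTPHeaderField: "X-Timestamp")
        request.setValue(nonce, forHTTPHeaderField: "X-Nonce")
        request.setValue(signature, forHTTPHeaderField: "X-Signature")
        return request
    }

Note that the shared secret still has to live wherever the signature is computed, which is exactly the weakness pointed out above; that is why the safer variants keep the signing step on the server side.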
SSL stands for Secure Sockets Layer. It is crucial for security in REST API design. It will secure your API and make it less vulnerable to malicious attacks.
Other security measures you should take into consideration include: making the communication between server and client private and ensuring that anyone consuming the API doesn’t get more than what they request.
SSL certificates are not hard to load to a server and are available for free mostly during the first year. They are not expensive to buy in cases where they are not available for free.
The clear difference between the URL of a REST API that runs over SSL and one that does not is the "s" in "https":
https://mysite.com/posts runs on SSL.
http://mysite.com/posts does not run on SSL.

Keygen tag in HTML5

So I came across this new tag in HTML5, <keygen>. I can't quite figure out what it is for, how it is applied, and how it might affect browser behavior.
I understand that this tag is for form encryption, but what is the difference between <keygen> and having an SSL certificate for your domain? Also, what is the challenge attribute?
I'm not planning on using it as it is far from implemented in an acceptable range of browsers, but I am curious as to what EXACTLY this tag does. All I can find is vague cookie-cutter documentation with no real examples of usage.
Edit:
I have found a VERY informative document, here. This runs through both client-side and server-side implementation of the keygen tag.
I am still curious as to what the benefit of this over a domain SSL certificate would be.
SSL is about "server identification" or "server AND client authentication (mutual authentication)".
In most cases only the server presents its server-certificate during the SSL handshake so that you could make sure that this really is the server you expect to connect to. In some cases the server also wants to verify that you really are the person you pretend to be. For this you need a client-certificate.
The <keygen> tag generates a public/private key pair and then creates a certificate request. This certificate request will be sent to a Certificate Authority (CA). The CA creates a certificate and sends it back to the browser. Now you are able to use this certificate for user authentication.
You're missing some history. keygen was first supported by Netscape when it was still a relevant browser. IE, OTOH, supported the same use cases through its ActiveX APIs. Opera and WebKit (or even KHTML), unwilling to reverse-engineer the entire Win32 API, reverse-engineered keygen instead.
It was specified in Web Forms 2.0 (which has now been merged into the HTML specification), in order to improve interoperability between the browsers that implemented it.
Since then, the IE team has reiterated their refusal to implement keygen, and the specification (in order to avoid turning into dry science fiction) has been changed to not require an actual implementation:
Note: This specification does not specify what key types user agents are to support — it is possible for a user agent to not support any key types at all.
In short, this is not a new element, and unless you can ignore IE, it's probably not what you want.
If you're looking for "exactly" then I'd recommend reading the RFC.
The keygen element is for creating a key for authentication of the user while SSL is concerned about privacy of communication and the authentication of the server. Quoting from the RFC:
This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use TLS and certificate authentication.
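For a sense of how such a client certificate is used afterwards, here is a minimal Swift sketch of presenting a keychain identity during TLS client authentication with URLSession; the keychain label "com.example.client" is a placeholder, and the identity is assumed to have been imported into the keychain already:

    import Foundation
    import Security

    final class ClientCertificateDelegate: NSObject, URLSessionDelegate {

        func urlSession(_ session: URLSession,
                        didReceive challenge: URLAuthenticationChallenge,
                        completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {

            guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate else {
                completionHandler(.performDefaultHandling, nil)
                return
            }

            // Look up a previously imported identity (private key + certificate) in the keychain.
            // The label "com.example.client" is a placeholder.
            let query: [String: Any] = [
                kSecClass as String: kSecClassIdentity,
                kSecAttrLabel as String: "com.example.client",
                kSecReturnRef as String: true
            ]
            var result: CFTypeRef?
            guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess, let found = result else {
                completionHandler(.performDefaultHandling, nil)
                return
            }

            // The query was restricted to kSecClassIdentity, so the result is a SecIdentity.
            let credential = URLCredential(identity: found as! SecIdentity,
                                           certificates: nil,
                                           persistence: .forSession)
            completionHandler(.useCredential, credential)
        }
    }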
Deprecated
This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible. Be aware that this feature may cease to work at any time.
Source
The doc is useful for elaborating on what the keygen element is. The need for it arises in WebID, which may be understood to be part of the Semantic Web of Linked Data, as seen at https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#creating-a-certificate, section 2.1.1.
This might be useful for websites that provide services where people need to pay for the service, like video on demand, or news websites for professionals like Bloomberg. With these keys, people can only watch the content on their own computer and not on several computers simultaneously! You decide how the data is stored and processed: you can specify a .asp or .php file that will receive the variables, and your file will store that key in the user's profile. This way your users will not be able to log in from a different computer unless you allow it. You may force them to check their email to authorize that new computer, just like Steam does. Basically it allows you to individualize service access if your licensing model is per machine, like an operating system's.
You can check the specs here:
http://www.w3.org/TR/html-markup/keygen.html

Restrict browser plugin usage to specific servers?

For a new banking application we are currently discussing the details of a browser plugin installed on client PCs for accessing smartcard readers.
A question that came up was: Is there a way to restrict the usage of this plugin to a specified list of domains? It should prevent any third-party site from using the plugin just by serving some <embed>/<object> tag.
The solution should be basically browser-independent.
It may include cryptography if necessary, but should only result in moderate implementation overhead in the plugin code.
Ideas, anyone?
I know there exists an MS solution called SiteLock, but that's IE-only.
You could hard code the list of authorized domains into the plugin itself.
Alternatively, you could expose a web service which will deliver a list of authorized domains. The plugin could make a call to your web service when instantiated to determine whether it can be started or not.
We came up with this idea (described for one server):
The plugin carries a public key A. The plugin creator issues a certificate for the server's public key B. The server starts the plugin within an HTML page and provides these parameters:
several application-specific parameters
the certificate
a digital signature
Then the plugin will start and first of all perform these checks:
verify the certificate with the public key delivered within the plugin
verify the signature with the public key from the certificate
if verification was OK then proceed, else terminate.
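A minimal sketch of those two verification steps (in Swift with CryptoKit, purely for illustration, since a real plugin would likely be native code and would parse a proper X.509 certificate) could look like this; the "certificate" is simplified here to the server's raw public key B plus the plugin creator's signature over it:

    import Foundation
    import CryptoKit

    // Simplified sketch of the verification steps listed above, using raw P-256 keys.
    // A real implementation would verify an actual X.509 certificate chain instead.
    struct PluginVerifier {

        // Public key A, shipped inside the plugin.
        let creatorPublicKey: P256.Signing.PublicKey

        func verify(parameters: Data,
                    serverPublicKeyBytes: Data,          // public key B from the "certificate"
                    certificateSignature: Data,          // creator's signature over B
                    parameterSignature: Data) -> Bool {  // server's signature over the parameters
            do {
                // 1. Verify the certificate (i.e. key B) with the embedded key A.
                let certSig = try P256.Signing.ECDSASignature(derRepresentation: certificateSignature)
                guard creatorPublicKey.isValidSignature(certSig, for: serverPublicKeyBytes) else { return false }

                // 2. Verify the parameter signature with key B taken from the certificate.
                let serverKey = try P256.Signing.PublicKey(rawRepresentation: serverPublicKeyBytes)
                let paramSig = try P256.Signing.ECDSASignature(derRepresentation: parameterSignature)
                return serverKey.isValidSignature(paramSig, for: parameters)
            } catch {
                return false  // malformed key or signature: terminate, as described above
            }
        }
    }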
