For a new banking application we are currently discussing the details of a browser plugin installed on client PCs for accessing smartcard readers.
A question that came up was: is there a way to restrict the usage of this plugin to a specified list of domains? It should prevent any third-party site from using the plugin just by serving some <embed>/<object> tag.
The solution should be basically browser-independent.
It may include cryptography if necessary, but should only result in moderate implementation overhead in the plugin code.
Ideas, anyone?
I know there is an MS solution called SiteLock, but that is IE-only.
You could hard code the list of authorized domains into the plugin itself.
Alternatively, you could expose a web service which will deliver a list of authorized domains. The plugin could make a call to your web service when instantiated to determine whether it can be started or not.
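A minimal sketch of that allowlist call, assuming a hypothetical JSON endpoint and shown in Python for brevity (the real plugin would implement the same check natively):

import json
import urllib.request
from urllib.parse import urlparse

ALLOWLIST_URL = "https://plugin-vendor.example/allowed-domains.json"  # hypothetical

def may_run(embedding_page_url):
    # Fetch the vendor-maintained list of authorized domains over HTTPS.
    with urllib.request.urlopen(ALLOWLIST_URL) as response:
        allowed = set(json.load(response))
    # Allow the plugin to start only if the embedding page's host is listed.
    return urlparse(embedding_page_url).hostname in allowed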
We came up with this idea (described for a single server):
The plugin carries a public key A. The plugin creator issues a certificate for the server's public key B. The server starts the plugin within an HTML page and provides these parameters:
several application-specific parameters
the certificate
a digital signature
Then the plugin will start and first of all perform these checks (a code sketch follows the list):
verify the certificate with the public key delivered within the plugin
verify the signature with the public key from the certificate
if verification was OK then proceed, else terminate.
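A minimal sketch of those two checks, using Python's cryptography package for illustration and assuming RSA keys (all names are hypothetical; A is the vendor key baked into the plugin, B the server key certified by the vendor):

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

VENDOR_PUBLIC_KEY_PEM = b"..."  # public key A, compiled into the plugin

def verify_startup(cert_pem, signature, signed_params):
    vendor_key = load_pem_public_key(VENDOR_PUBLIC_KEY_PEM)
    cert = x509.load_pem_x509_certificate(cert_pem)
    # Check 1: the server certificate must have been issued by the
    # plugin creator, i.e. it must verify against the built-in key A.
    vendor_key.verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )
    # Check 2: the page parameters must be signed by the server,
    # i.e. they must verify against key B taken from the certificate.
    cert.public_key().verify(
        signature, signed_params, padding.PKCS1v15(), hashes.SHA256()
    )
    # Either verify() call raises InvalidSignature on failure -> terminate.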
I need this for security testing. My purpose is to check how the application would behave if an adversary presents a certificate with a wrong Common Name (CN) and/or SubjectAltName, but signed by a trusted CA.
I believe that the application under test uses HostnameVerifier incorrectly, and I need to prove it.
Here is an official answer from Telerik (Eric Lawrence):
Click Rules > Customize Rules. Scroll to OnBeforeRequest.
Inside that function, add the following:
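// Generate the interception certificate for a deliberately wrong hostname
// whenever the client opens an HTTPS tunnel to the host under test: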
if (oSession.HTTPMethodIs("CONNECT") &&
oSession.HostnameIs("siteIcareabout.com"))
{
oSession["X-OverrideCertCN"] = "badhostname.net";
}
Save the file and restart the browser if it had previously established any connections to https://siteIcareabout.com.
==========================
I checked it and it works.
Vanilla Fiddler lacks a mechanism to do this, so you would need to use an external tool or plugin. Some examples are cataloged here.
Of course, any certificate you generate will be signed by the Fiddler root certificate, so the platform you're running the application from will need to trust that root certificate.
I'm developing an application using Chrome Native Messaging that starts through a Chrome Extension.
My question is: how can I ensure that the host application is really the one supplied by me?
I need to ensure the authenticity of the application called by the extension. How can I do that if I don't have permission to read the registry or to check whether something has been changed?
That is an excellent question, and my guess is the answer is "unfortunately, you can't".
It would be interesting to implement some sort of cryptographic hash like the ones Chrome uses to verify extension files, but that's not a very strong guarantee.
Consider (all of this hypothetical):
You can secure the registry entry / manifest pretty easily this way, but what about the file itself?
Suppose you pin a hash of the executable; then it becomes painful to update it (you'll have to update the extension too, in sync). That could be resolved with some kind of public-key signature instead of a hash, though. (A basic pinned-hash check is sketched after this list.)
Suppose you pin the executable in the manifest. What about its data files? More importantly, what about the libraries a native app uses?
Securing a Chrome extension/app is easy, since the only "library"/runtime you rely on is Chrome itself (and you put trust into that). A native app can depend on many, many things on the system (like the already mentioned libraries); how do you keep track?
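As a purely hypothetical illustration of the pinned-hash idea from the list above (Chrome performs no such check today):

import hashlib

PINNED_SHA256 = "replace-with-expected-hex-digest"  # hypothetical pin

def executable_matches_pin(path):
    # Hash the native host binary and compare against the pinned digest.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == PINNED_SHA256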
Anyway, this seems like an interesting thing to brainstorm. Take a look at the Chrome bug tracker to see if there is already anything similar; if not, try to raise a feature request. Maybe try some Chromium-related mailing list to ask the devs.
I realize this is an older post, but I thought it would be worth sharing the Chromium team's official response from the bug I filed: https://bugs.chromium.org/p/chromium/issues/detail?id=514936
An attacker who can modify registry or the FS on the user's machine can also modify the chrome binary, and so any type of validation implemented in chrome can be disabled by such attacker by mangling with the chrome's code. For that reason chrome has to trust FS (and anything that comes from local machine).
If I understood the question correctly, the solution could be:
Register your executable with your server at install time, sign the executable, and store the registration number both inside the executable and on the server.
In each request (postMessage) from the extension, additionally send a token issued by your server.
The executable asks the server for the next token (to use in its response to the extension) by passing along the extension's token and its registration number.
The server responds with a token if the executable is registered.
The executable encrypts that token with its registration number and sends it to the extension, along with the extension's token.
The extension (i.e. the browser) asks the server whether this is a genuine response.
Using the extension's token, the server identifies the executable's registration number, decrypts the executable's token, and verifies that it is the one the server generated for that extension token.
Once the server confirms this, the browser accepts the response.
Importantly, your registration number must stay secret; the client machine must not be able to extract it from the executable (with proper signing this is achievable).
Since Chrome dropped support for applets, I implemented the same scheme for a smart-card reader in Chrome.
The only loophole is that the client machine can trace each and every request it sends, with the help of some tools.
If you can make the executable's communication with your server secure, e.g. using an HttpOnly cookie (which the client machine cannot read) or a password mechanism, you can most probably achieve a secure solution.
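A rough sketch of the token round trip described above, substituting an HMAC for the "encrypt with your registration number" step (all names hypothetical; secrets and tokens are bytes):

import hashlib
import hmac

def response_token(registration_secret, extension_token):
    # Server and executable both derive the response token from the
    # per-request extension token and the shared registration secret.
    return hmac.new(registration_secret, extension_token, hashlib.sha256).hexdigest()

def server_confirms(registration_secret, extension_token, presented_token):
    # The server recomputes the token and compares in constant time.
    expected = response_token(registration_secret, extension_token)
    return hmac.compare_digest(expected, presented_token)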
I am writing an auto update client. It's a very simple app that:
1) Checks a central server to see if an update exists for some application
2) Downloads the install program from the server if a newer version exists
3) Runs the setup program
Other than server-side concerns (like someone hacking our site and placing a 'newer' malicious application there), what client-side security concerns must I take into account when implementing this?
My current ideas are:
1) Checksum. Include the checksum in the .xml file and check that against the downloaded file. (Pre or post encryption?)
2) Encrypt the file. Encrypt the file with some private key, and let this program decrypt it using the public key.
Are both or either of these necessary and sufficient? Is there anything else I need to consider?
Please remember this is only for concerns on the CLIENT-SIDE. I have almost no control over the server itself.
If you retrieve all of the information over HTTPS and check for a valid certificate, then you can be sure that the data is coming from your server.
The checksums are only as strong as the site from which they're downloaded.
If you use an asymmetric signature, so that the auto-update client has the public key, then you can sign your updates instead, and it won't matter if someone hacks your website, as long as they don't get the private key.
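A sketch of that approach using Python's cryptography package and an Ed25519 key pair (an assumption; any signature scheme works), where the public key ships inside the client and the private key never touches the web server:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def update_is_authentic(pinned_public_key_pem, installer_bytes, signature):
    # Verify the downloaded installer against the key baked into the client.
    public_key = load_pem_public_key(pinned_public_key_pem)
    try:
        public_key.verify(signature, installer_bytes)  # Ed25519 verify
        return True
    except InvalidSignature:
        return False

A compromised download server can then at worst serve an old, validly signed installer, not a malicious one.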
If I can compromise the server that delivers the patch, and the checksum is on the same server, then I can compromise the checksum.
Encrypting the patch is mainly useful if you do not use SSL to deliver the file.
The user that executes a program is usually not authorized to write to the installation directory (for security reasons; this applies to desktop applications as well as e.g. PHP scripts on a web server). You will have to take that into account when figuring out a way how to install the patch.
So I came across this new tag in HTML5, <keygen>. I can't quite figure out what it is for, how it is applied, and how it might affect browser behavior.
I understand that this tag is for form encryption, but what is the difference between <keygen> and having an SSL certificate for your domain? Also, what is the challenge attribute?
I'm not planning on using it as it is far from implemented in an acceptable range of browsers, but I am curious as to what EXACTLY this tag does. All I can find is vague cookie-cutter documentation with no real examples of usage.
Edit:
I have found a VERY informative document, here. This runs through both client-side and server-side implementation of the keygen tag.
I am still curious as to what the benefit of this over a domain SSL certificate would be.
SSL is about "server identification" or "server AND client authentication (mutual authentication)".
In most cases only the server presents its certificate during the SSL handshake, so that you can make sure this really is the server you expect to connect to. In some cases the server also wants to verify that you really are the person you pretend to be. For this you need a client certificate.
The <keygen> tag generates a public/private key pair and then creates a certificate request. This certificate request will be sent to a Certificate Authority (CA). The CA creates a certificate and sends it back to the browser. Now you are able to use this certificate for user authentication.
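To make that concrete, here is a minimal server-side sketch in Python. It assumes the form posts the SignedPublicKeyAndChallenge in a field conventionally named "pubkey", and it shells out to openssl, whose ca command accepts Netscape SPKAC files via -spkac; all file names are illustrative:

import subprocess

def sign_spkac(spkac_field_value):
    # openssl expects the base64 SPKAC blob as a SPKAC=... key/value file.
    with open("spkac.txt", "w") as f:
        f.write("SPKAC=" + spkac_field_value.replace("\n", ""))
    subprocess.run(
        ["openssl", "ca", "-config", "ca.cnf",
         "-spkac", "spkac.txt", "-out", "client-cert.der", "-days", "365"],
        check=True,
    )
    # Send the result back with Content-Type: application/x-x509-user-cert
    # so the browser installs it next to the private key it generated.
    with open("client-cert.der", "rb") as f:
        return f.read()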
You're missing some history. keygen was first supported by Netscape when it was still a relevant browser. IE, OTOH, supported the same use cases through its ActiveX APIs. Opera and WebKit (or even KHTML), unwilling to reverse-engineer the entire Win32 API, reverse-engineered keygen instead.
It was specified in Web Forms 2.0 (which has now been merged into the HTML specification), in order to improve interoperability between the browsers that implemented it.
Since then, the IE team has reiterated their refusal to implement keygen, and the specification (in order to avoid turning into dry science fiction) has been changed to not require an actual implementation:
Note: This specification does not specify what key types user agents are to support — it is possible for a user agent to not support any key types at all.
In short, this is not a new element, and unless you can ignore IE, it's probably not what you want.
If you're looking for "exactly" then I'd recommend reading the RFC.
The keygen element is for creating a key for authentication of the user while SSL is concerned about privacy of communication and the authentication of the server. Quoting from the RFC:
This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use TLS and certificate authentication.
Deprecated
This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible. Be aware that this feature may cease to work at any time.
Source
The document is useful to elaborate on what the keygen element is. The need for it arises in WebID, which may be understood as part of the Semantic Web of Linked Data; see section 2.1.1 of https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#creating-a-certificate
This might be useful for websites that provide paid services, like video on demand, or news sites for professionals like Bloomberg. With these keys, people can only watch the content on their own computer, not on several computers simultaneously. You decide how the data is stored and processed: you can specify a .asp or .php file that will receive the variables, and your file can store that key in the user's profile. This way your users will not be able to log in from a different computer if you don't want them to. You may force them to check their email to authorize that new computer, just like Steam does. Basically, it allows you to individualize service access if your licensing model is per machine, like an operating system's.
You can check the specs here:
http://www.w3.org/TR/html-markup/keygen.html
I am looking for something like HTTPS, but backwards. The user generates their own private key (in advance) and then (only later) provides the web application with the associated public key. This part of the exchange should (if necessary) occur out of band. Communication is then encrypted/decrypted with these keys.
I've thought of some strange JavaScript approaches to implement this (from the client's perspective: form submissions are encrypted on their way out, while web content is decrypted on each Ajax response). I recognize this is horrible, but you can't deny that it would be a fun hack. However, I wondered if there was already something out there... something commonly implemented in browsers and web/application servers.
Primarily this is to address compromised security when (unknowingly) communicating through a rogue access point that may be intercepting HTTPS connections and issuing its own certificates. Recently (in my own network) I recreated this and (with due horror) soon saw my Gmail password in plain text! I have a web application going that only I and a few others use, but where security (from a learning standpoint) needs to be top notch.
I should add that the solution does not need to be practical.
Also, if there is something intrinsically wrong with my thought process, I would greatly appreciate it if someone set me on the right track or directed me to the proper literature. Science is not about finding better answers; science is about forming better questions.
Thank you for your time,
O∴D
This is already done. They're called TLS client certificates. SSL doesn't have to be one-way; it can be two-party mutual authentication.
What you do is have the client generate a private key. The client then sends a CSR (Certificate Signing Request) to the server, who signs the public key therein and returns it to the client. The private key is never sent over the network. If the AP intercepts and modifies the key, the client will know.
However, this does not stop a rogue AP from requesting a certificate on behalf of a client. You need an out-of-band channel to verify identity. There is no way to stop a man in the middle from impersonating a client without some way to get around that MITM.
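The client-side half of that flow, sketched with Python's cryptography package (the subject name is illustrative):

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 1. Generate the private key locally; it never leaves the client.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build a CSR carrying the public key for the server to sign.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice")]))
    .sign(key, hashes.SHA256())
)

# 3. Only the CSR (public key plus proof of possession) crosses the network.
print(csr.public_bytes(serialization.Encoding.PEM).decode())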
If a rogue access point can sniff packets, it can also change packets (an ‘active’ man-in-the-middle attack). So any security measure a client-side script could possibly provide would be easily circumvented by nobbling the script itself on the way to the client.
HTTPS—and the unauthorised-certificate warning you get when a MitM is trying to fool you—is as good as it gets.
SSL, and therefore HTTPS, allows for client certificates. On the server side you can use environment variables (for example, the SSL_CLIENT_* variables set by Apache's mod_ssl) to verify a certificate. If you only have one server and a bunch of clients, then a full PKI isn't necessary; instead you can have a list of valid client certificates in the database. Here is more info on the topic.
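For example, behind Apache's mod_ssl (with SSLVerifyClient enabled and the standard environment variables exported), a CGI-style check could look like this; the allowlist itself is hypothetical:

import os

def client_is_authorized(known_clients):
    # mod_ssl exports these variables after the TLS handshake.
    if os.environ.get("SSL_CLIENT_VERIFY") != "SUCCESS":
        return False
    subject = os.environ.get("SSL_CLIENT_S_DN", "")
    serial = os.environ.get("SSL_CLIENT_M_SERIAL", "")
    # Compare against the list of valid client certificates in the database.
    return (subject, serial) in known_clients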
Implementing anything like this in JavaScript is a bad idea.
I don't see why you are using asymmetric encryption here. For one, it is slow; secondly, it is vulnerable to man-in-the-middle attacks anyhow.
Usually, you use an asymmetric encryption to have a relatively secure session negotiation, including an exchange of keys for a symmetric encryption, valid for the session.
Since you use a secure channel for the negotiation, I don't really understand why you even send around public keys, which themselves are only valid for one session.
Asymmetric encryption makes sense if you have a shared secret that allows verifying a public key. Having this shared secret is significantly easier if you don't change the key for every session, and if the key is generated in a central place (i.e. the server and not all the clients).
Also, as Rook already pointed out, JavaScript is a bad idea. You would have to write everything from scratch, starting with basic arithmetic operations, since Number won't get you very far if you want to work with keys of an order of magnitude that provides reasonable security.
greetz
back2dos