Are security headers like Strict-Transport-Security available for plain HTTP?

I'm reading the list of useful headers from OWASP and I have some trouble understanding whether the first two require an HTTPS certificate:
Public Key Pinning Extension for HTTP : The Public Key Pinning Extension for HTTP (HPKP) is a security header that tells a web client
to associate a specific cryptographic public key with a certain web
server to prevent MITM attacks with forged certificates.
Strict-Transport-Security : HTTP Strict-Transport-Security (HSTS) enforces secure (HTTP over SSL/TLS) connections to the server. This
reduces impact of bugs in web applications leaking session data
through cookies and external links and defends against
Man-in-the-middle attacks. HSTS also disables the ability for users
to ignore SSL negotiation warnings.

HTTP Public Key Pinning (HPKP) is a trust-on-first-use technique. When a user visits a web server for the first time, the server tells the browser, via a special HTTP header, to pin its own public key or that of an intermediate CA. The browser then stores this key for a given period of time. On subsequent requests to that server, the browser expects the pinned key to appear in one of the certificates in the server's certificate chain; if it does not, the user is blocked with a warning. For more information you can refer to Implementing and Testing HTTP Public Key Pinning (HPKP).
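As a rough illustration (not part of the original answer), here is a sketch of how a pin-sha256 value for the Public-Key-Pins header could be computed from a certificate, assuming the Python cryptography package is installed; leaf.pem is just a placeholder file name, and a real deployment would also include a backup pin:

    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Load the server (or intermediate CA) certificate in PEM form.
    with open("leaf.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # The pin covers the SubjectPublicKeyInfo structure, not the whole certificate.
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000; includeSubDomains')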

Yes, they both require a certificate:
The first one pins a list of public key hashes; one of them must match a key in the current certificate chain.
The second one tells the browser to use only HTTPS for future requests, and the header is only honoured when it arrives over a valid HTTPS connection.
So by definition the first one needs HTTPS and a certificate, and the HSTS specification forbids sending the header over plain HTTP connections (browsers must ignore it there).
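As a hedged sketch of the second header in practice (Flask is assumed here, not something from the question), a server might add HSTS only to responses that were actually served over HTTPS, since clients ignore it on plain HTTP anyway:

    from flask import Flask, request

    app = Flask(__name__)

    @app.after_request
    def add_hsts(response):
        # Only meaningful on an HTTPS connection; ignored by browsers otherwise.
        if request.is_secure:
            response.headers["Strict-Transport-Security"] = (
                "max-age=31536000; includeSubDomains"
            )
        return response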

Related

How to secure access token beyond XSS and CSRF

I understand the XSS vulnerability of using web storage and the CSRF vulnerability of using cookies. So I store the access token in memory and for persistence I have a refresh token in a cookie which I use to silently refresh my access token when we lose it. I feel somewhat better about XSS and CSRF threats... BUT how do we secure the token from a packet sniffer? A packet sniffer would find the token in the request. I see a lot of discussion on XSS and CSRF but how do we keep safe from packet sniffers, and are there even more threats we do not commonly think about?
You use HTTPS to defend against packet sniffers.
Fiddler, acting as a proxy in the cloud, will not be able to decrypt HTTPS traffic unless Fiddler's built-in root certificate is added to the browser or client making the request.
Fiddler is able to decrypt HTTPS on your machine only because you have added Fiddler's root certificate to the trusted store on YOUR computer; without this, a proper HTTPS connection can't be intercepted.
So don't worry about Fiddler in the cloud.
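To illustrate the pattern from the question, here is a minimal sketch (Flask is assumed, and the token helpers are hypothetical stand-ins for your own logic): the refresh token lives in an HttpOnly, Secure cookie, so scripts can't read it and it is never sent over plain HTTP, while HTTPS protects it from sniffers in transit:

    import secrets
    from flask import Flask, jsonify, make_response

    app = Flask(__name__)

    def issue_access_token():       # hypothetical stand-in for real token logic
        return secrets.token_urlsafe(32)

    def issue_refresh_token():      # hypothetical stand-in for real token logic
        return secrets.token_urlsafe(48)

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response(jsonify(access_token=issue_access_token()))
        resp.set_cookie(
            "refresh_token",
            issue_refresh_token(),
            httponly=True,          # not readable from JavaScript (limits XSS impact)
            secure=True,            # never sent over plain HTTP
            samesite="Strict",      # limits CSRF
            max_age=7 * 24 * 3600,
        )
        return resp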
HTTPS provides end-to-end encryption. It is implemented at the application level by browsers, so other users on the same network cannot break HTTPS security.
Below is a short explanation of how SSL and HTTPS (HTTP over SSL) work.
SSL:
The key idea is that mathematically you can generate a public key A and a private key B in a way that if you encrypt something with A you can only decrypt it with B.
So let's say the server google has a pair of public key A and private key B.
A client wants to send some data to Google. The idea is that if the client has the public key A, it can encrypt the data it wants to send and no longer worry about network threats such as a man in the middle sniffing the packets, because only the owner of the private key B (Google) can decrypt the data. The man in the middle ends up with encrypted data that is of no use to him.
Note that from the above it is clear that servers (Google here) must keep their private key to themselves, but should distribute their public key so that clients can use it to communicate safely (with encryption) with the server.
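A minimal sketch of that A/B idea, using RSA from the Python cryptography package (purely an illustration of the asymmetry; TLS does not encrypt bulk data this way in practice):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_b = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_a = private_b.public_key()

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # Anyone holding the public key A can encrypt...
    ciphertext = public_a.encrypt(b"some data for google", oaep)

    # ...but only the holder of the private key B can decrypt.
    assert private_b.decrypt(ciphertext, oaep) == b"some data for google"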
How is the public key distributed?
Using the public key infrastructure (PKI), which is a set of roles, entities, definitions, etc. used to manage and distribute public keys.
Again, in short: a server requests a certificate from a certificate authority (CA). That certificate contains information about the server (name, IP, etc.) together with the server's public key. The matching private key is not part of the certificate and stays with the server; typically the server generates the key pair itself and sends only the public key to the CA in a certificate signing request.
Finally, there is a list of trusted CAs built into browsers, so a browser such as Chrome can verify Google's certificate, take the public key A it contains, and use it to encrypt the data it wants to send to Google.
The above is how the SSL protocol (and SSL certificates) work. SSL is simply a protocol that provides secure communication; it provides no routing or networking capabilities of its own.
HTTPS:
HTTPS is basically an HTTP connection whose data is secured using SSL.
That means the ordinary HTTP request/response traffic is carried inside the SSL-encrypted channel.
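As a rough sketch of "HTTP carried inside an SSL/TLS channel" using only Python's standard library (www.google.com is simply the example host from above): the default SSL context loads the system's list of trusted CAs and verifies the server's certificate before any HTTP bytes are exchanged:

    import socket, ssl

    ctx = ssl.create_default_context()   # trusted CA list + hostname verification

    with socket.create_connection(("www.google.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="www.google.com") as tls:
            # At this point the certificate chain has been validated.
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(200))          # first bytes of the HTTP response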

Request tampering over SSL using man-in-the-middle attack

I am familiar with SSL/TLS and its mechanism to protect data sent over HTTP between the browser and the web server. One of the issues identified by my security testing team is request tampering over SSL where they were able to modify the HTTP request payload of a POST request using man-in-the-middle attack. The browser obviously did show a certificate validity warning and it was ignored.
In my opinion, the application shouldn't handle or remediate such request tampering scenarios because SSL/TLS takes care of it. Server side validation of data that matches any client side validation should suffice to ensure that the HTTP payload is valid.
So my question is basically to confirm my understanding of this. Is request tampering using a man-in-the-middle attack over SSL a valid security testing scenario? And should an application do any specific request encoding to protect against such attacks?
Yes, it is a valid testing scenario.
Depending on the threat model of your application, the application might implement certificate pinning to mitigate that threat. With that, you can make sure that only a specific cert (or certs signed by a certain CA) is trusted.
See this answer for reference.
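As a hedged sketch of what such pinning can look like on the client side (the host name and EXPECTED_PIN are placeholders, and the Python cryptography package is assumed): fetch the server's leaf certificate and compare the SHA-256 hash of its SubjectPublicKeyInfo against a pin baked into the client:

    import base64, hashlib, socket, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    EXPECTED_PIN = "base64-encoded-sha256-of-expected-spki"   # placeholder value

    def spki_pin(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    if spki_pin("api.example.com") != EXPECTED_PIN:            # placeholder host
        raise ssl.SSLError("certificate pin mismatch, possible MITM")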

Can cookies be shared between subdomains over SSL on different servers?

We are evaluating feasibility of the following solution for Single Sign On.
Scenario.
Website domain. https://www.example.com
[SSL using multi-server wildcard certificate]
Hosted on Server 1.
Other portal. https://portal.example.com
[SSL using the same certificate (multi-server wildcard certificate)]
Hosted on Server 2.
Solution:
The intention is to share a cookie between the www.example.com and the portal.example.com subdomains, however, for this to work the SSL protocol needs to satisfy the following requirements:
Continue to block Man-in-the-middle attacks.
Encrypt/Decrypt in the same manner for both Server 1 and Server 2. So that they can get the same information from the cookie.
Question is: Are there any limitations in terms of private keys or the SSL protocol per se that would make the solution above infeasible?
Thanks,
Yes, your solution will work and will mitigate most MITM attacks, provided the CA that your certificates are purchased from is trusted by the client browsers.
If the cookie is set with domain example.com it will be shared between www.example.com and portal.example.com. It is also advisable to set the Secure flag to make sure it cannot be transmitted over plain HTTP. I am assuming both www.example.com and portal.example.com have a server-side mechanism to validate the cookie value in order to authorise each request.
RFC 6265 states:
The Domain attribute specifies those hosts to which the cookie will be
sent. For example, if the value of the Domain attribute is
"example.com", the user agent will include the cookie in the Cookie
header when making HTTP requests to example.com, www.example.com, and
www.corp.example.com.
The old RFC 2109 specified that you needed a . before the domain; however, RFC 6265 overrides this. This means that if you want to share cookies and stay compatible with very old browsers, you should set the cookie with domain .example.com rather than example.com. There is nothing to lose by doing so, as newer browsers will simply ignore the leading dot.
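Putting those two points together, a minimal sketch of issuing the shared SSO cookie (Flask is assumed here, and the token value is a placeholder):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/sso")
    def issue_sso_cookie():
        resp = make_response("ok")
        resp.set_cookie(
            "sso_token",
            "opaque-or-signed-value",   # placeholder for the real token
            domain="example.com",       # sent to example.com and its subdomains
            secure=True,                # never transmitted over plain HTTP
            httponly=True,
        )
        return resp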
In your solution, both Server 1 and Server 2 will receive and be able to decrypt the cookie. Note there is no requirement for both servers to have the same certificate; each will decrypt its own SSL session independently using its installed private key (or, to be precise, using a symmetric session key that is negotiated with the help of that private key).
However, using a wildcard certificate for *.example.com will be cheaper as the same certificate can be installed on both servers.

Secure cookie and invalid certificate

Is a secure cookie supposed to be sent to an HTTPS server that has an invalid certificate? I mean, I have an application served by an HTTPS server which sends a cookie with the secure flag set after the login step. Is my server supposed to receive the cookie back if it has an invalid certificate? If this is standardized (it seems it's not), could someone point me to the relevant part of the standard?
Yes, a cookie with the Secure flag set is only sent over TLS/SSL-secured connections:
If the cookie's secure-only-flag is true, then the request-uri's scheme must denote a "secure" protocol (as defined by the user agent). […] Typically, user agents consider a protocol secure if the protocol makes use of transport-layer security, such as SSL or TLS. For example, most user agents consider "https" to be a scheme that denotes a secure protocol.
But for establishing a TLS/SSL connection, all that matters is whether the certificate is trusted. It doesn't matter how the certificate came to be trusted, i.e. whether it was trusted automatically or accepted manually.
Whether the certificate is valid or not is actually immaterial. If an invalid certificate is detected when browsing to a site, most browsers will tell the user the cert is invalid and let the user decide whether they want to proceed or not.
With regards to the "secure" part of the cookie, all that does is tell the browser that the cookie is only valid for https connections and shouldn't be transferred over regular http connections.
This means that yes, your server should receive the cookie back from the browser provided that the URL being accessed is an https url. Even if the server's cert is invalid.
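A toy sketch of the user-agent rule described above (not from any RFC, just an illustration): whether a Secure cookie is attached depends only on the request scheme, not on how the certificate warning was handled, because that decision was already made when the TLS connection was accepted or rejected:

    def should_attach_cookie(scheme: str, secure_flag: bool) -> bool:
        if secure_flag:
            return scheme == "https"   # Secure cookies ride only on secure schemes
        return True                    # non-Secure cookies go over both http and https

    assert should_attach_cookie("https", secure_flag=True)
    assert not should_attach_cookie("http", secure_flag=True)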
There's also this statement in RFC 2965, which has since been obsoleted by RFC 6265:
The user agent (possibly with user interaction) MAY determine what
level of security it considers appropriate for "secure" cookies.
The Secure attribute should be considered security advice from the
server to the user agent, indicating that it is in the session's
interest to protect the cookie contents. When it sends a "secure"
cookie back to a server, the user agent SHOULD use no less than
the same level of security as was used when it received the cookie
from the server.

Transport-level vs message-level security

I'm reading a book on WCF and the author argues for the pros of using message-level security over transport-level security. Anyway, I can't find the logic in the author's arguments:
One limitation of transport security is that it relies on every "step" and participant in the network path having consistently configured security. In other words, if a message must travel through an intermediary before reaching its destination, there is no way to ensure that transport security has been enabled for the step after the intermediary (unless that intermediary is fully controlled by the original service provider). If that security is not faithfully reproduced, the data may be compromised downstream.
Message security focuses on ensuring the integrity and privacy of individual messages, without regard for the network. Through mechanisms such as encryption and signing via public and private keys, the message will be protected even if sent over an unprotected transport (such as plain HTTP).
a)
If that security is not faithfully reproduced, the data may be compromised downstream.
True, but assuming the two communicating systems use SSL and thus certificates, the data they exchange can't be decrypted by an intermediary; it can only be altered, which the receiver will notice, and the packet will therefore be rejected?!
b) Anyway, as far as I understand the quote above, it implies that if two systems establish an SSL connection, and an intermediary system S has SSL enabled and is also owned by a hacker, then S (i.e. the hacker) won't be able to intercept the SSL traffic travelling through it? But if S doesn't have SSL enabled, then the hacker will be able to intercept the SSL traffic? That doesn't make sense!
c)
Message security focuses on ensuring the integrity and privacy of individual messages, without regard for the network. Through mechanisms such as encryption and signing via public and private keys, the message will be protected even if sent over an unprotected transport (such as plain HTTP).
This doesn't make sense, since transport-level security can also use encryption and certificates, so why would using private/public keys at the message level be more secure than using them at the transport level? Namely, if an intermediary is able to intercept SSL traffic, why wouldn't it also be able to intercept messages secured via message-level private/public keys?
thank you
Consider the case of SSL interception.
Generally, if you have an SSL encrypted connection to a server, you can trust that you really are connected to that server, and that the server's owners have identified themselves unambiguously to a mutually trusted third party, like Verisign, Entrust, or Thawte (by presenting credentials identifying their name, address, contact information, ability to do business, etc., and receiving a certificate countersigned by the third party's signature). Using SSL, this certificate is an assurance to the end user that traffic between the user's browser (client) and the server's SSL endpoint (which may not be the server itself, but some switch, router, or load-balancer where the SSL certificate is installed) is secure. Anyone intercepting that traffic gets gobbledygook, and if they tamper with it in any way, the traffic is rejected by the server.
But SSL interception is becoming common in many companies. With SSL interception, you "ask" for an HTTPS connection to (for example) www.google.com, the company's switch/router/proxy hands you a valid certificate naming www.google.com as the endpoint (so your browser doesn't complain about a name mismatch), but instead of being countersigned by a mutually trusted third party, it is countersigned by their own certificate authority (operating somewhere in the company), which also happens to be trusted by your browser (since it's in your trusted root CA list which the company has control over).
The company's proxy then establishes a separate SSL-encrypted connection to your target site (in this example, www.google.com), but the proxy/switch/router in the middle is now capable of logging all of your traffic.
You still see a lock icon in your browser, since the traffic is encrypted up to your company's inner SSL endpoint using their own certificate, and the traffic is re-encrypted from that endpoint to your final destination using the destination's SSL certificate, but the man in the middle (the proxy/router/switch) can now log, redirect, or even tamper with all of your traffic.
Message-level encryption would guarantee that the message remains encrypted, even during these intermediate "hops" where the traffic itself is decrypted.
Load-balancing is another good example, because the SSL certificate is generally installed on the load balancer, which represents the SSL endpoint. The load balancer is then responsible for deciding which physical machine to send the now-decrypted traffic to for processing. Your messages may go through several "hops" like this before it finally reaches the service endpoint that can understand and process the message.
I think I see what he's getting at. Say it's like this:
Web client ---> Presentation web server ---> web service call to database
In this case you're depending on the middle server encrypting the data again before it gets to the database. If the message itself were encrypted instead, only the back end would know how to read it, so the middle tier wouldn't matter.
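A hedged sketch of that idea in Python (the cryptography package is assumed, and the key and payload names are made up): the presentation tier encrypts the message body with the back end's public key, so whatever the middle server does about transport security, it only ever handles opaque bytes:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    backend_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    backend_public = backend_private.public_key()    # published to callers

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # Presentation tier: encrypt once, then forward over any transport (even plain HTTP).
    envelope = backend_public.encrypt(b'{"op": "update", "row": 42}', oaep)

    def middle_tier_forward(blob: bytes) -> bytes:
        return blob        # the middle server can log or route this, but cannot read it

    # Back end: the only party able to open the envelope.
    print(backend_private.decrypt(middle_tier_forward(envelope), oaep))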
