I don't have a clear understanding of the purpose of the max-age directive in RFC 7469 (Public Key Pinning Extension for HTTP).
My understanding of RFC 7469 and HTTP Public Key Pinning is that every time a client starts an HTTPS transaction with a server, it should compute the pin of the server's certificate and verify that it matches one of the pins returned by the server in a previous transaction. If no pin matches, then a man-in-the-middle attack may have occurred and the connection must be denied.
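For concreteness, the pin in RFC 7469 is computed over the certificate's SubjectPublicKeyInfo rather than the whole certificate. A minimal sketch of the computation, assuming the third-party `cryptography` package:

```python
# Sketch: computing an RFC 7469 pin, i.e. base64 of SHA-256 over the
# DER-encoded SubjectPublicKeyInfo of the certificate.
import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_pin(pem_cert: bytes) -> str:
    cert = x509.load_pem_x509_certificate(pem_cert)
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii")

# pin = spki_pin(open("server.pem", "rb").read())
# The connection is allowed only if `pin` matches one of the stored pins.
```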
What is not clear to me is the purpose of the "max-age" directive. This is what RFC 7469 states:
The "max-age" directive specifies the number of seconds after the
reception of the PKP header field during which the UA SHOULD regard
the host (from whom the message was received) as a Known Pinned Host.
Does this mean that the client should update its local copy of the pins before max-age expires?
Max-age tells the client how long the HPKP header is valid for. After max-age expires, the HPKP header should be forgotten and ignored. However, if you revisit the site during that time you will likely receive a fresh header, which extends the pins' lifetime a little longer.
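Concretely, the bookkeeping this implies looks something like the following sketch (a hypothetical pin store, not how any real UA stores its pins):

```python
# Sketch: how a client might track a Known Pinned Host per the max-age rule.
# `PinStore` and its fields are made up for illustration.
import time

class PinStore:
    def __init__(self):
        self.hosts = {}  # host -> (pins, expiry_timestamp)

    def update(self, host: str, pins: list[str], max_age: int) -> None:
        # Each PKP header received refreshes the expiry, as described above.
        self.hosts[host] = (pins, time.time() + max_age)

    def known_pins(self, host: str):
        entry = self.hosts.get(host)
        if entry is None:
            return None
        pins, expiry = entry
        if time.time() > expiry:
            del self.hosts[host]  # expired: forget the pins entirely
            return None
        return pins
```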
Certificates have a validity period, so it does not make sense to make an HPKP header valid indefinitely. It's also possible to accidentally block access to your site by updating your certificate. So a max-age is necessary.
HPKP has been seen to be dangerous (see my blog post here for some of the reasons), so even with a max-age it has proven too dangerous for most sites, and Chrome, for one, is removing it as an option.
I'm working on a web archiving technology that simply saves WARC and MHTML versions of a web page. Protected/private contents that need authentication are archived on the client side, which is susceptible to tampering and makes them unusable for legal admissions.
So I proposed a solution: create a mitmproxy server that intercepts the traffic, hashes the content, signs the hash with an EC key, and adds that to the headers, so that anyone can verify the archive by validating the signature from the header. The proxy server will act as a trusted signing authority and doesn't depend on clients. Is this the right approach?
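The signing step itself is easy to prototype. A minimal sketch as a mitmproxy addon, assuming the `cryptography` package; the X-Archive-* header names and the key file are made up for illustration:

```python
# Sketch of a mitmproxy addon that hashes each response body and signs it
# with an EC private key, publishing both values as response headers.
import base64
import hashlib

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from mitmproxy import http

with open("signing_key.pem", "rb") as f:  # hypothetical key file
    PRIVATE_KEY = serialization.load_pem_private_key(f.read(), password=None)

def response(flow: http.HTTPFlow) -> None:
    body = flow.response.raw_content or b""
    # Publish the digest so verifiers can check integrity without the key...
    flow.response.headers["X-Archive-Digest"] = hashlib.sha256(body).hexdigest()
    # ...and an ECDSA signature over the body so they can check authenticity
    # against the proxy's public key.
    signature = PRIVATE_KEY.sign(body, ec.ECDSA(hashes.SHA256()))
    flow.response.headers["X-Archive-Signature"] = base64.b64encode(signature).decode()
```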
Is there any existing way to prove that a document existed on a server at a specific time by verifying the headers?
I have found that the Content-MD5 header can provide a digest of the document, which solves half the problem, but it's not a global standard and only some servers use it. I also found that the ETag header can sometimes be a hash of the content, but again, that is not really standardized.
Some time ago I came across an option, in one of the applications I use at work, to turn off server-side XSRF protection by including a special HTTP header value on the client side. Therefore, I wonder:
How is this not a security vulnerability?
Why would you implement a security feature and allow clients to turn it off? Is there a use-case I am missing?
I am doubting my knowledge of XSRF protection at the moment, and since we could not reach a consensus at work, I decided to post my concerns here.
The product is Bamboo and they publicly report the option in https://confluence.atlassian.com/bamkb/rest-api-calls-fail-due-to-missing-xsrf-token-899447048.html#RESTAPIcallsfailduetoMissingXSRFToken-Workaround. I first mentioned this in an old answer here: https://stackoverflow.com/a/45090321/410939.
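For reference, the workaround in that article boils down to the client sending one extra header. A sketch with Python's `requests`, where the URL and credentials are placeholders:

```python
# Sketch: a client opting out of Bamboo's XSRF check on a REST call via the
# X-Atlassian-Token header documented in the linked workaround.
import requests

resp = requests.post(
    "https://bamboo.example.com/rest/api/latest/queue/MY-PLAN",  # hypothetical endpoint
    headers={"X-Atlassian-Token": "no-check"},  # disables the server-side check
    auth=("user", "password"),
)
```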
I can understand allowing the server to turn it off on a per-API basis. However, allowing the client to turn it off is a very bad idea... It's as good as the protection not being there. The only reason I can think this is OK is backwards compatibility: maybe there is an older version of the client that relies on this way of mitigating CSRF, while newer clients use the new version and switch the older one off (but one of them must be used).
I would turn the question around: why would you implement security features and then ask users to turn them on? This is the opt-in model of security you will find everywhere; e.g. practically no one forces 2FA even though it is a huge security improvement.
If XSRF protection is session based and you run multiple tabs with the same application, being forced to reauthenticate in one of them will typically get you a new XSRF token. Other tabs might then no longer pass the XSRF check, with the risk of losing unsaved work. There could be other similar scenarios.
There are sometimes trade-offs between security and usability; in this case they make security the default and let people who run into problems take an informed risk.
There are other ways to mitigate XSRF. So, if a cookie is not an option (maybe the client doesn't support cookies), you might want to disable the cookie-based solution.
Some other ways to mitigate XSRF:
State variable (Auth0 uses it) - The client generates and passes with every request a cryptographically strong random nonce, which the server echoes back along with its response, allowing the client to validate the nonce. It's explained in the Auth0 docs.
Always check the Referer header and accept requests only when the referer is a trusted domain. If the Referer header is absent or carries a non-whitelisted domain, simply reject the request. When using SSL/TLS the referer is usually present. Landing pages (that are mostly informational and don't contain a login form or any secured content) may be a little more relaxed and allow requests with a missing Referer header. A sketch of this check follows below.
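A minimal sketch of such a Referer check, assuming a Flask app (the whitelist and handler are illustrative only):

```python
# Sketch of a Referer-based CSRF check as a hypothetical Flask before-request hook.
from urllib.parse import urlparse

from flask import Flask, request, abort

app = Flask(__name__)
TRUSTED_HOSTS = {"app.example.com"}  # whitelist; an assumption for this example

@app.before_request
def check_referer():
    if request.method in ("GET", "HEAD", "OPTIONS"):
        return  # relax the check for safe, informational requests
    referer = request.headers.get("Referer")
    if not referer or urlparse(referer).hostname not in TRUSTED_HOSTS:
        abort(403)  # absent or non-whitelisted referer: reject
```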
One of our clients ran a penetration test against our application and reported missing flags when working with cookies.
We should always use httpOnly and secure flags when setting cookies.
After some testing I realized that cookies were actually set with these flags, with one exception: log out.
When logging out, some cookies were set with a past expiration date so as to delete them, but secure and httpOnly were not used.
Does this represent a security risk? Does it make sense to set these flags when setting an expired cookie?
No; assuming there are no holes in your app, the flags don't matter on the logout cookie.
However, you should do what the pen tester says, because there may be other security flaws in your app that can be exploited using this cookie if the flags aren't set. In other words, if your app were otherwise secure then the cookie wouldn't matter; however, it probably does matter, because there are no guarantees that your app is secure.
One example is an app that doesn't properly terminate or close sessions. A logout cookie is sent to the client without the flags and is therefore compromised in some way, such as MitM or wire sniffing. The attacker submits the cookie back to the app, along with any other arbitrary data designed to exploit a hole, thus triggering a vulnerability and getting a live session, either by resurrecting the previous one or receiving a new one (like the famous NULL session attack).
This is a classic case of one security hole that is useless by itself, but adds a link to a chain that can be used to obtain a compromise.
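Complying is cheap anyway. A minimal sketch of a flagged deletion cookie, assuming a Flask app (handler name hypothetical):

```python
# Sketch: deleting a session cookie while keeping Secure and HttpOnly set.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/logout")
def logout():
    resp = make_response("Logged out")
    # An expiry in the past tells the browser to drop the cookie;
    # the flags cost nothing and close the finding.
    resp.set_cookie("session", "", expires=0, secure=True, httponly=True)
    return resp
```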
As part of a project with a partner, we are required to provide single-sign-on service on our app. Basically, people will log in through our partner's website, then they are redirected to ours. The redirected request will have the user's data in the HTTP header fields.
Here's where it gets "iffy". The process of authenticating whether this request is valid or not depends on the value of the HTTP Referer field. Our partner tells us to check this field to see that the source is a legitimate one.
Now I know (and I'm glad to be proven wrong) that this field is easy enough to forge, and since no other method of authentication is given to us, a malicious user could easily construct a false HTTP request and gain access to our web app.
I'm a programmer first, and admittedly know very little about the intricacies of HTTP. So are my concerns real? Would using SSL (somehow) void this concern?
Remember that rule number one is: never trust client input. Like any other client input, the Referer header is trivial to forge. SSL does nothing for you here because you still rely on client input. Also note that browsers SHOULD NOT send a Referer header to HTTP pages when referred to by HTTPS pages.
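To see just how trivial the forgery is, here is a sketch using Python's `requests`; both URLs are made up:

```python
# Sketch: any client can send whatever Referer it likes.
import requests

resp = requests.get(
    "https://app.example.com/sso-landing",                      # hypothetical endpoint
    headers={"Referer": "https://partner.example.com/login"},   # forged value
)
```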
Additionally, consider that many privacy-conscious people and proxies (that individuals may not have any control over) might strip Referer headers from their requests, breaking your scheme.
To do this properly, you need to use something like OAuth or OpenID, where the protocols have been designed to be secure.
The HTTP Referer header is unreliable: depending on the browser used, it may not be sent.
Does http-equiv="refresh" keep referrer info and metadata?
Yes - It is forgeable.
No - a client can just as easily send a (fake) HTTPS request as a (fake) HTTP request. The only difference is that the connection is encrypted; encryption says nothing about the trustworthiness of the data transmitted.
That being said, it is another precaution that can be used. It should not be relied upon for security, however.
I would look at Microsoft Federation -- it's likely overkill, but it shows one way to implement SSO securely.
I currently have a roll-your-own application security service that runs in my enterprise and is - for the most part - meeting business needs.
The issue that I currently face is that the service has traditionally (naively) relied on the user's source IP remaining constant as a hedge against session hijacking. The web applications in the enterprise are not directly available to the public, and it was in the past perfectly acceptable for me to require that a user's address remain constant throughout a given session.
Unfortunately this is no longer the case and I am therefore forced to switch to a solution that does not rely on the source IP. I would much prefer to implement a solution that actually accomplishes the original designer's intent (i.e. preventing session hijacking).
My research so far has turned up this, which essentially says "salt your authentication token hash with the SSL session key."
On the face of it, this seems like a perfect solution, however I am left with a nagging suspicion that real-world implementation of this scheme is impractical due to the possibility that the client and server can at any time - effectively arbitrarily - opt to re-negotiate the SSL session and therefore change the key.
This is the scenario I am envisioning:
SSL session established and key agreed upon.
Client authenticates to server at the application level (i.e. via username and password).
Server writes a secure cookie that includes SSL session key.
Something occurs that causes a session re-negotiation. For example, I think IE does this on a timer with or without a reason.
Client submits a request to the server containing the old session key (since there was no application level knowledge of the re-negotiation there was no opportunity for a new, updated hash to be written to the client).
Server rejects client's credential due to hash match failure, etc.
Is this a real issue or is this a misapprehension on my part due to a (to say the least) less-than-perfect understanding of how SSL works?
See all topics related to SSL persistence. This is a well-researched issue in the load-balancer world.
The short answer is: you cannot rely on the SSL ID -- most browsers renegotiate, so you still have to use the source IP. If the IP address is likely to change mid-session, you can force a soft re-authentication, or use the SSL ID as a bridge between the two IP changes (and vice versa; i.e. only assume hijacking if both the IP address and the SSL ID change at the same time, as seen by the server).
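A sketch of that bridging rule, where the session object and its methods are hypothetical:

```python
# Sketch: treat the session as hijacked only when both identifiers change at
# once; if only one changes, the other bridges the gap.
def check_session(session, seen_ip: str, seen_sslid: bytes) -> None:
    ip_changed = seen_ip != session.ip
    sslid_changed = seen_sslid != session.sslid
    if ip_changed and sslid_changed:
        session.invalidate()            # both moved: assume hijacking
    elif ip_changed or sslid_changed:
        session.soft_reauthenticate()   # one moved: bridge via the other
        session.ip, session.sslid = seen_ip, seen_sslid
```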
2014 UPDATE
Just force the use of HTTPS and make sure that you are not vulnerable to session fixation or to CRIME. Do not bother to salt your auth token with any client-side information: if an attacker was able to obtain the token (provided the token was not simply trivial to guess), then whatever means were used to obtain it (e.g. cross-site scripting, or full compromise of the client system) will also allow the attacker to easily obtain any client-side information that went into the token (and replicate it on a secondary system if needed).
If the client is likely to be connecting from only a few systems, then you could generate an RSA keypair in the browser for each new client system the client connects from (where the public part is submitted to your server and the private part remains in what is hopefully secure client storage) and redirect to a virtual host that uses two-way (peer/client certificate) verification in lieu of password-based authentication.
I am wondering why it would not be enough to simply:
Require SSL in your transport
Encode inputs (HTML/URL/attribute) to prevent cross-site scripting
Require only POSTs for all requests that change information
Prevent CSRF as best you can (depending on what your platform supports)
Set your cookies to HttpOnly
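In Flask terms, the cookie- and transport-related items above are a few lines of configuration (these config keys are real Flask settings; the rest of the list still needs application logic):

```python
# Sketch: the cookie-related items from the list, as Flask configuration.
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,     # cookie only ever sent over SSL/TLS
    SESSION_COOKIE_HTTPONLY=True,   # cookie invisible to page scripts
    SESSION_COOKIE_SAMESITE="Lax",  # one platform-supported CSRF mitigation
)
```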
Yes, but there are several things you can do about it. The easiest is to simply cache the session key(s) you use as salt (per user), and accept any of them. Even if the session is renegotiated, you'll still have the old key in your cache. There are details -- expiration policy, etc. -- but nothing insurmountable, unless you are running something that needs to be milspec-hardened, in which case you shouldn't be doing it this way in the first place.
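A sketch of that caching idea, with hypothetical names and a made-up expiration policy:

```python
# Sketch: keep recent SSL session keys per user and accept a token salted
# with any of them, so a renegotiation doesn't instantly break the session.
import hashlib
import time

RECENT_KEYS: dict[str, list[tuple[bytes, float]]] = {}  # user -> (key, cached_at)
MAX_KEY_AGE = 3600.0  # expiration policy; tune to taste

def remember_key(user: str, key: bytes) -> None:
    now = time.time()
    keys = [(k, t) for k, t in RECENT_KEYS.get(user, []) if now - t < MAX_KEY_AGE]
    keys.append((key, now))
    RECENT_KEYS[user] = keys

def token_valid(user: str, token: str, base_secret: bytes) -> bool:
    # Accept the token if it matches a hash salted with any cached key.
    return any(
        hashlib.sha256(base_secret + k).hexdigest() == token
        for k, _ in RECENT_KEYS.get(user, [])
    )
```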
-- MarkusQ