If I am running an HTTPS-only service, is there any reason not to enable HSTS? Is there a strategy to test HSTS without permanently enabling it, or a way "out of" HSTS?
I'd like to add a warning to Mike's answer: you are probably not running an HTTPS-only service. The reason is that if your server doesn't listen on port 80, then typing in only the domain and not the protocol (stackoverflow.com instead of https://stackoverflow.com) will not make your browser automatically retry on port 443 (https); it will just show a connection error. Thus, for most sites a truly HTTPS-only service is out of the question.
The classical way to ensure an HTTPS connection - forwarding every HTTP page to an HTTPS page via 301/303 redirects - is not a sufficient replacement for HSTS. In fact, HSTS was built for exactly that case. The reason is that many bookmarks and links will still point to http, and every time a user enters a URL without specifying the protocol - which is virtually always - the browser will first try the HTTP connection. An active attacker can hijack that first connection and never forward the user to the HTTPS site.
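For reference, that classical pattern keeps a port-80 listener whose only job is to redirect; a minimal nginx sketch (hostname hypothetical) might look like:

server {
    listen 80;
    server_name example.com;
    # Redirect every plain-HTTP request to its HTTPS counterpart.
    return 301 https://example.com$request_uri;
}

As explained above, this redirect alone still leaves the very first HTTP request open to interception, which is the gap HSTS closes.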
To give you a more vivid image of such an attack, imagine a state that spoofs every DNS request for twitter.com and answers with its own IPs. When it receives an HTTPS request, it forwards it to Twitter without any action (and without any chance for interception). But when it receives an HTTP request, it uses the sslstrip tool Mike mentioned to transparently forward the content of the connection to Twitter's TLS port. Neither the user nor Twitter notices that anything is off (except for the very alert user who checks for TLS encryption), but the state has access to every login password.
HSTS can protect those users who have had a legitimate HTTPS connection to the server before and have already seen an HSTS header. The header instructs the browser to rewrite every http URL of the domain to an https URL itself (before any HTTP connection is established at all) and to deny any unencrypted connection to this domain. Thus, in the scenario above, almost all users will never end up on the compromised HTTP connection and are safe against the nationwide attack.
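For illustration, HSTS is a single response header; a typical value (one year, covering subdomains) looks like:

Strict-Transport-Security: max-age=31536000; includeSubDomains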
From a defense-in-depth perspective, you should still enable HTTP Strict Transport Security (HSTS). There are some issues that could crop up in the future that would benefit from HSTS, including:
Server misconfiguration, where HTTP is accidentally turned on. There's one site I visited recently that takes credit card details: it has an HTTPS site, but Google links to its HTTP site, so depending on how you got there you could be submitting your details in the clear.
A malicious attacker who poisons or hijacks DNS records to redirect the client to their own HTTP-only server, perhaps in conjunction with an sslstrip attack.
You should also ensure a sufficiently long HSTS lifetime, e.g. a year or more.
You can disable support for HSTS by setting the max-age to 0. You'll need to leave this header in place for as long as you had originally set the value. For example, if you had set it to two years and change your mind, you'll need to serve max-age=0 for at least two years (and continue to offer an HTTPS service on that domain) so past clients won't have any issues connecting to it.
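A minimal sketch of how testing and backing out might look in an nginx config (values illustrative): start with a short max-age, raise it once you're confident, and serve max-age=0 over HTTPS to back out:

# Testing: a five-minute lifetime limits the damage if something breaks.
add_header Strict-Transport-Security "max-age=300" always;

# Production: one year, once you are confident.
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Backing out: clears the pin as returning clients revisit the site.
# add_header Strict-Transport-Security "max-age=0" always;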
Related
Is http ok vs https in an intranet connected to the internet?
If a bank has a website which uses http and is only accessible when connected to the network, is this a security risk?
Not sure if I understood your question, but let me try an answer.
The objective of HTTPS is to guarantee that no one in the middle of the connection can see what you and the server are saying to each other. It does this using encryption, so even in an intranet HTTPS has its advantages.
Imagine a wireless connection: everyone in the bank sits between you and the router (and consequently the server) if you are using it. So if the website handles sensitive information like a password, then even on an intranet you want it secured, so that no one inside that network can sniff packets and pull your password out of a cleartext HTTP request.
I have noticed a lot of very large websites make you log in using HTTPS and then immediately switch back over to HTTP once I am logged in (myfitnesspal.com, pluralsight.com). If I use a packet sniffer I can see the session id cookie and verify that the request is being sent via HTTP. Doesn't this mean that someone could easily hijack my session if they were listening, or is there something else I am missing? Also, on a similar note is there any reason that I would want to use HTTP over HTTPS other than the additional computation on the server?
It depends how sessions are being handled. It is possible that two sessions are being handled by the server. One secured and one unsecured.
When you log into these sites they may set two session cookies: one for browsing and one for secure access to admin/account-management/checkout areas. The second cookie would be marked Secure and only be sent over a TLS/SSL connection. When browsing normally, only the unsecured cookie is used, and only to maintain state in the session; but when you go to account management, checkout, etc., you are switched back to the secure session for those purposes. If it has been too long since your last secure access, you may be asked to reauthenticate.
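As an illustration (cookie names hypothetical), such a pair might be set like this, with only the second one restricted to TLS by the Secure flag:

Set-Cookie: browse_session=abc123; Path=/; HttpOnly
Set-Cookie: secure_session=def456; Path=/; Secure; HttpOnly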
So while it is possible that your browsing session could be hijacked, it is unlikely (if properly implemented) that your account could be compromised as a result.
Well, you can see the session cookie yourself if it isn't flagged Secure. For example, open Wireshark on your LAN and filter for HTTP, then look at the traffic for your example, pluralsight: press Follow TCP Stream on the HTTP connection to the site and you'll see that, after login, the session continues on a plain HTTP data stream. Collect the session ID, grab Greasemonkey (a Firefox tool) plus a session-hijack script, paste in the collected session ID, and you will see for yourself whether there is a vulnerability or not.
Sorry if this is a duplicate; as I am neither a security nor a network expert, I may have missed the correct lingo to find information.
I am working on an application to intercept and modify HTTP requests and responses between a web browser and a web server (see how to intercept and modify HTTP responses on server side? for the background). I decided to implement a reverse proxy in ASP.Net which forwards client requests to the back-end HTTP server, translates links and headers from the response to the properly "proxified" URL, and sends the response to the client after having extracted relevant information from the response.
It is working as expected, except for the authentication part: the web server uses NTLM authentication by default, and just forwarding requests and responses through the reverse proxy does not allow the user to be authenticated on the remote application. Both the reverse proxy and the web application are on the same physical machine and are executed in the same IIS server (Windows server 2008/IIS 7 if that matters). I tried both enabling and disabling authentication on the reverse proxy app with no luck.
I have looked for information about it, and it seems to be related to the "double-hop problem", which I do not understand. My question is: is there a way to authenticate the user on the remote application through the reverse proxy using NTLM? If there is none, are there alternative authentication methods I could use?
Even if you don't have a solution to my problem, just pointing me to relevant information about it to help me get out of the confusion would be great!
I found what the problem was (and it is NTLM): in order to have the browser ask the user for credentials, the response must have a 401 status code. My reverse proxy was forwarding the response to the browser, but IIS was adding a standard HTML body explaining that the requested page cannot be accessed, thus preventing the browser from asking for credentials.
The problem was solved by removing the response content when the status code is a 401.
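A minimal sketch of that change inside an ASP.NET proxy handler (System.Web); backendResponse, CopyHeaders, and CopyBody are hypothetical placeholders for the proxy's own plumbing:

// Forward the status code from the back-end server to the client.
context.Response.StatusCode = (int)backendResponse.StatusCode;
CopyHeaders(backendResponse, context.Response);    // hypothetical helper

if (context.Response.StatusCode == 401)
{
    // Pass the 401 (and its WWW-Authenticate challenge) through with an
    // empty body, so IIS does not append its own error page and the
    // browser is free to prompt the user for credentials.
    context.Response.SuppressContent = true;
    context.Response.TrySkipIisCustomErrors = true;
}
else
{
    CopyBody(backendResponse, context.Response);   // hypothetical helper
}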
With all due respect to the one who answered that some years ago, I must admit this is plainly false. The problem was indeed solved AFTER removing the response content when the status code is a 401, but that had nothing to do with the initial problem.
The truth is that Windows authentication was made to authenticate people over local Windows networks, where no proxy server is present or even needed.
The main problem with NTLM authentication is that this protocol does not authenticate the HTTP session but the underlying TCP connection, and as far as I know there is no way to access it from ASP code.
Every proxy server I tried broke NTLM authentication.
Windows authentication is comfortable for users because they never need to enter their password for whatever application may lie in your intranet; frightening for a security guy because there is an auto-login without even a prompt if the site's domain is trusted by IE; and shocking for a network administrator because it melts the application, transport, and network layers into some "Windows ball of mud" instead of just plain HTTP traffic.
NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. And that's why many reverse proxies (like nginx) don't work with NTLM authentication: they forward HTTP requests correctly but not the TCP packets.
Nginx has the functionality to work with NTLM authentication. Keepalive needs to be enabled, which is only available through the ngx_http_upstream_module. Additionally, in the location block you need to specify that you will be using HTTP/1.1 and that the "Connection" header field should be cleared for each proxied request. The nginx config should look something like:
upstream http_backend {
    server 1.1.1.1:80;

    # Keep idle connections to the upstream open so the
    # NTLM-authenticated TCP connection can be reused.
    keepalive 16;
}

server {
    ...

    location / {
        proxy_pass http://http_backend/;

        # NTLM binds authentication to the TCP connection, so the proxied
        # connection must stay alive: use HTTP/1.1 and clear the Connection
        # header so "close" is not sent upstream.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
I scratched my head for quite some time with this issue, but the above works for me. Note that if you need to proxy HTTPS traffic, you will need a separate upstream block. To clarify a bit more: "keepalive 16;" specifies the maximum number of idle connections to the upstream that your proxy keeps open. Adjust the number according to the expected number of simultaneous visitors on the site.
Although this is an old post, I just want to report that it works quite well for me with an Apache 2.2 reverse proxy and the keepalive=on option. Obviously, this keeps the connection between the proxy and the SharePoint host open and "pinned" to the client<>proxy connection. I don't exactly know the mechanisms behind this, but it works fairly well.
But: sometimes my users encounter the issue that they're logged in as another user. So there seems to be some mixing-up of sessions. I will have to give this some further testing.
Solution for everything (in case you have a valid, signed SSL certificate): Switch IIS to Basic Auth. This works absolutely fine, and even Windows (i.e. Office with SharePoint connection, all WebClient-based processes etc.) won't complain at all.
But they will when you're just using http without SSL/TLS, and also with self-signed certificates.
I confirm that it works with keepalive=on on Apache 2.2.
I examined frames with Wireshark, and I know why it doesn't work. NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. That's why many reverse proxies, like nginx, don't work with NTLM authentication. Reverse proxies forward HTTP requests correctly but not the TCP packets.
NTLM requires a TCP reverse proxy.
This question already has answers here:
Is a HTTPS query string secure?
Suppose I setup a simple php web server with a page that can be accessed by HTTPS. The URL has simple parameters, like https://www.example.com/test?abc=123.
Is it true that the parameter here in this case will be safe from people sniffing the packets? And would this be true if the server does not employ any SSL certificate?
Yes, your URL would be safe from sniffing; however, one hole that is easily overlooked is third-party content. If your page references any third-party resources such as Google Analytics or ad content, your entire URL will be sent to the third party in the Referer header. If it's really sensitive, it doesn't belong in the query string.
As for your second part of the question, you can't use SSL if you don't have a certificate on the server.
http://answers.google.com/answers/threadview/id/758002.html
HTTPS establishes an underlying SSL connection before any HTTP data is transferred. This ensures that all URL data (with the exception of the hostname, which is used to establish the connection) is carried solely within this encrypted connection and is protected from man-in-the-middle attacks in the same way that any HTTPS data is.

All HTTP-level transactions within an HTTPS connection are conducted within the established SSL session, and no query data is transferred before the secure connection is established.

From the outside, the only data that is visible to the world is the hostname and port you are connecting to. Everything else is simply a stream of binary data which is encrypted using a session key shared only between you and the server.

In the example you provide, your browser would do this:

Derive the hostname (and port, if present) from the URL.
Connect to the host.
Check the certificate (it must be 'signed' by a known authority, apply specifically to the correct IP address and port, and be current).
The browser and server exchange cryptographic data, and the browser receives a session key.
The HTTP request is made and encrypted with the established cryptography.
The HTTP response is received, also encrypted.

HTTP is an 'Application Layer' protocol. It is carried on top of the secure layer. The SSL specification, drawn up by Netscape, dictates that no application layer data may be transmitted until a secure connection is established, as outlined in the following paragraph:

"At this point, a change cipher spec message is sent by the client, and the client copies the pending Cipher Spec into the current Cipher Spec. The client then immediately sends the finished message under the new algorithms, keys, and secrets. In response, the server will send its own change cipher spec message, transfer the pending to the current Cipher Spec, and send its finished message under the new Cipher Spec. At this point, the handshake is complete and the client and server may begin to exchange application layer data."

http://wp.netscape.com/eng/ssl3/draft302.txt

So yes, the data contained in the URL query on an HTTPS connection is encrypted. However, it is very poor practice to include such sensitive data as a password in a 'GET' request. While it cannot be intercepted, the data would be logged in plaintext server logs on the receiving HTTPS server, and quite possibly also in browser history. It is probably also available to browser plugins and possibly even other applications on the client computer. At most, an HTTPS URL could reasonably be allowed to include a session ID or similar non-reusable variable. It should NEVER contain static authentication tokens.

The HTTP connection concept is most clearly explained here:
http://www.ourshop.com/resources/ssl_step1.html
The requested URI (/test?abc=123) is sent to the web server as part of the HTTP request, which is encrypted.
However URLs can leak in other ways, usually web browser toolbars, bookmarks, and sending links to friends. POSTing data may be more appropriate depending on the context/sensitivity of the data you're sending.
An HTTPS connection does require an SSL certificate, though a self-generated (self-signed) one will do if you don't want to buy one.
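For example, a self-signed certificate and key can be generated with OpenSSL (file names and lifetime arbitrary):

openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes

Browsers will warn about such a certificate, since it isn't signed by a trusted authority.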
Hope that helps a bit!
Depends on what you mean by safe.
SSL encrypts the entire HTTP request/response, so the URL in the GET portion will be encrypted. This does not by itself stop MITM attacks against the integrity of the SSL session; and if a certificate not issued by a trusted authority is accepted, such attacks become considerably simpler.
Are REST request headers encrypted by SSL? is a similar question.
The URLs will be stored both in the server logs and in the browser history, so even if they aren't sniffable they are far from safe.
On the wire, yes. At the endpoints (browser and server), not necessarily. SSL/TLS is transport-layer security: it will encrypt your traffic between the browser and the server. It is still possible on the browser side to peek at the data (via a Browser Helper Object (BHO), for example). Once it reaches the server side, it is of course available to the recipient and is only as secure as he treats it. If the data needs to move securely beyond the initial exchange and be protected from prying eyes on the client, you should also look at message-layer security.
SSL/TLS is transport-layer security: yes, the data can be picked up with a BHO (as @JP wrote) or any add-on, but also with "out of browser" HTTP sniffers. These read the messaging between winsock32 and the application; the encryption takes place in winsock32, not in the browser.
Take a look (this part was taken from IEInspector's page):
IEInspector HTTP Analyzer is such a handy tool that allows you to monitor, trace, debug and analyze HTTP/HTTPS traffic in real-time.
When sending data over HTTPS, I know the content is encrypted, however I hear mixed answers about whether the headers are encrypted, or how much of the header is encrypted.
How much of HTTPS headers are encrypted?
Including GET/POST request URLs, Cookies, etc.
All the HTTP headers are encrypted†.
That's why SSL on vhosts doesn't work too well - you need a dedicated IP address because the Host header is encrypted.
†The Server Name Indication (SNI) extension means that the hostname may not be encrypted if you're using TLS. Also, whether you're using SNI or not, the TCP and IP headers are never encrypted. (If they were, your packets would not be routable.)
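You can watch the SNI hostname go out in the clear yourself: run a handshake with OpenSSL's s_client while capturing with a packet sniffer, and the server name is visible in the ClientHello (hostname illustrative):

openssl s_client -connect example.com:443 -servername example.com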
The headers are entirely encrypted. The only information going over the network 'in the clear' is related to the SSL setup and D/H key exchange. This exchange is carefully designed not to yield any useful information to eavesdroppers, and once it has taken place, all data is encrypted.
New answer to old question, sorry. I thought I'd add my $.02
The OP asked if the headers were encrypted.
They are: in transit.
They are NOT: when not in transit.
So, your browser's URL (and title, in some cases) can display the querystring (which usually contain the most sensitive details) and some details in the header; the browser knows some header information (content type, unicode, etc); and browser history, password management, favorites/bookmarks, and cached pages will all contain the querystring. Server logs on the remote end can also contain querystring as well as some content details.
Also, the URL isn't always secure: the domain, protocol, and port are visible - otherwise routers don't know where to send your requests.
Also, if you've got an HTTP proxy, the proxy server knows the address, though it usually doesn't know the full querystring.
So if the data is moving, it's generally protected. If it's not in transit, it's not encrypted.
Not to nitpick, but data at the other end is also decrypted, and can be parsed, read, saved, forwarded, or discarded at will. And malware at either end can take snapshots of data entering (or exiting) the SSL protocol - such as (bad) JavaScript inside a page inside HTTPS which can surreptitiously make http (or https) calls to logging websites (since access to the local hard drive is often restricted and not useful).
Also, cookies are not encrypted by the HTTPS protocol when stored: they are protected in transit, but sit in the clear on the client. Developers wanting to store sensitive data in cookies (or anywhere else, for that matter) need to use their own encryption mechanism.
As to cache, most modern browsers won't cache HTTPS pages, but that fact is not defined by the HTTPS protocol; it is entirely up to the developer of a browser to make sure pages received through HTTPS are not cached.
So if you're worried about packet sniffing, you're probably okay. But if you're worried about malware or someone poking through your history, bookmarks, cookies, or cache, you are not out of the woods yet.
HTTP version 1.1 added a special HTTP method, CONNECT, which lets a client ask a proxy to create the SSL tunnel, including the necessary protocol handshake and cryptographic setup.
The regular requests thereafter all get sent wrapped in the SSL tunnel, headers and body inclusive.
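For illustration, the exchange with the proxy might look like this (hostname illustrative); everything after the 200 response travels inside the encrypted tunnel:

CONNECT example.com:443 HTTP/1.1
Host: example.com:443

HTTP/1.1 200 Connection Established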
HTTPS (HTTP over SSL) sends all HTTP content over an SSL tunnel, so the HTTP content and headers are encrypted as well.
With SSL the encryption is at the transport level, so it takes place before a request is sent.
So everything in the request is encrypted.
Yes, headers are encrypted. It's written here.
Everything in the HTTPS message is encrypted, including the headers, and the request/response load.
The URL is also encrypted; really only the IP, the port, and (if SNI is used) the host name are unencrypted.
To understand what is encrypted and what is not, you need to know that SSL/TLS is the layer between the transport layer and the application layer.
In the case of HTTPS, HTTP is the application layer and TCP the transport layer. That means all headers below the SSL level are unencrypted. Also, SSL itself may expose data. The exposed data includes (per layer header):
NOTE: additional data might be exposed too, but the data below is essentially certain to be exposed.
MAC:
Source MAC address (current hop)
Destination MAC address (next hop)
IP (assuming IPv4):
Destination IP address
Source IP address
IP Options(if set)
Type-Of-Service(TOS)
The number of hops the current packet has passed (derivable from the TTL field, assuming a common initial value such as 64)
TCP:
Source Port
Destination Port
TCP-Options
Theoretically, you can encrypt the TCP-Headers, but that is hard to implement.
SSL:
Hostname (if SNI is being used)
Usually, a browser won't just connect to the destination host by IP immediately using HTTPS; there are some earlier requests that might expose the following information (if your client is not a browser, it might behave differently, but the DNS request is pretty common):
DNS:
This request is being sent to get the correct IP address of a server. It will include the hostname, and its result will include all IP addresses belonging to the server.
HTTP:
The first request to your server. A browser will only use SSL/TLS if instructed to; unencrypted HTTP is used first. Usually, this will result in a redirect to the secure site. However, some headers might be included here already (see the example after this list):
User-Agent(Specification of the client)
Host (Hostname)
Accept-Language (User-Language)
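Such an initial unencrypted request, with every header readable by anyone on the path, might look like this (values illustrative):

GET / HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
Accept-Language: en-US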