What functionalities should a web server at least have? - linux

What functionalities should a web server at least have?
And in order to implement a web server, which protocols should one read and understand first?

A web server almost by definition has to serve a website.
Websites are served over the HTTP protocol.
This is explained in detail in RFC 2616 (since superseded by RFCs 7230–7235, but still the classic reference).
Note: this is HTTP/1.1 rather than 1.0, but virtually no one still uses HTTP/1.0.
HTTP is a reasonably simple request-response protocol, and it has been expanded by a huge number of extensions over the years.
It is also being succeeded by HTTP/2, which is significantly more complex.
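To make that concrete, here is a toy sketch in Python of the minimum request/response cycle a web server has to implement: accept a connection, parse the request line, and answer with a status line, headers, and body. It is illustrative only (no error handling, concurrency, or persistent connections):

```python
# Toy HTTP server: just enough to show the request/response shape.
import socket

server = socket.create_server(("", 8080))
while True:
    conn, _ = server.accept()
    with conn:
        raw = conn.recv(65536).decode("latin-1")
        request_line = raw.split("\r\n", 1)[0]  # e.g. "GET / HTTP/1.1"
        body = f"You asked for: {request_line}\n".encode()
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/plain\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"Connection: close\r\n\r\n" + body
        )
```

Point a browser at http://localhost:8080/ to see it echo the request line back. Everything a real server adds (routing, TLS, keep-alive, chunked encoding) is layered on top of this basic exchange.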

Related

Get Flickr Photos over HTTP/2

I am using the Flickr API to retrieve a list of photos with associated sizes, returned as links to the raw images. These links are all HTTPS, as is the domain they are served from.
However, despite the document and all local resources being served over HTTP/2, all images from Flickr are served over HTTP/1.1. What steps are necessary to serve all resources over HTTP/2 where available?
I might be misunderstanding its interaction with cross-origin requests, or Flickr's API implementation, but as far as I can tell resources across domains should be able to use the HTTP/2 protocol where it is available server-side. Any reasons to the contrary, or explanations why not, would be greatly appreciated: the newer protocol significantly speeds up the site in question, and hosting the images locally would take up too much space.
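(Background for this question: whether a given resource travels over HTTP/2 is negotiated per connection between the browser and that resource's origin server, via ALPN during the TLS handshake, so nothing on the embedding page can force it. A quick sketch of how to check what a host offers; the hostname is illustrative:)

```python
# Check whether a host will negotiate HTTP/2 ("h2") via ALPN.
import socket
import ssl

host = "live.staticflickr.com"  # illustrative hostname
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.selected_alpn_protocol())  # "h2" if HTTP/2 is offered
```

If the origin prints "http/1.1", its images will stay on HTTP/1.1 no matter how the embedding page itself is served.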

Forcing SSL on server level vs app level

I am pretty sure that similar questions have been asked before but I didn't manage to find any (maybe I am using the wrong terms).
I have an insecure web app (built in Laravel). All communication between the frontend and the backend goes over HTTP. Now I want to switch to HTTPS. As far as I know, there are two ways I can do this.
The first is to configure the server (the one that hosts the app) to accept only HTTPS requests. If I do it this way, the communication between the client and the server will be encrypted and I won't have to change anything in my app (is this correct?).
The second way is to configure my app to accept only HTTPS requests. If I do it this way, I will have to make some changes to my application code.
Now I want to ask: are both ways equally secure? Which way is preferred, and why?
Several things are mixed up here, I'm afraid.
SSL can only be turned on at your web server (Apache, Nginx, etc.). You need a server certificate, and you have to configure your web server to accept HTTPS (SSL) connections. Exactly how to do that is beyond the scope of this answer, but there are lots of tutorials. You have to do this first.
Once your web server supports SSL, you want your web application to be accessible only over HTTPS and not over plain HTTP. The purpose is that, on the one hand, users who don't know the difference are still safe, and on the other hand, attackers can't downgrade a user's connection to insecure plain HTTP.
As for how you enforce HTTPS for your application, you really do have two choices. You can have your web server handle plain HTTP requests and redirect them to HTTPS, which is an easy configuration in both Apache and Nginx. Or you can add redirects to your application to handle the case where it is accessed over plain HTTP, sending the user to HTTPS with a Location header.
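For the app-level option, the idea looks roughly like this, expressed here as Python/WSGI middleware (a sketch only; the question's Laravel app would do the equivalent in a middleware class, and the X-Forwarded-Proto header is an assumption about how a front-end proxy reports the original scheme):

```python
# App-level HTTPS enforcement as WSGI middleware (sketch).
def force_https(app):
    def middleware(environ, start_response):
        # Prefer the proxy's report of the original scheme, if present.
        proto = environ.get("HTTP_X_FORWARDED_PROTO",
                            environ.get("wsgi.url_scheme", "http"))
        if proto != "https":
            host = environ.get("HTTP_HOST", "localhost")
            url = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", url)])
            return [b""]
        return app(environ, start_response)
    return middleware
```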
Security-wise, it doesn't really matter whether the web server or the application performs the redirect; from the client's perspective it's the same (mostly indistinguishable, actually). Choose the option you like best. There may be maintainability reasons to choose one or the other: do you want to maintain redirection in your application code, or have your server operations team add the redirect configuration, and so on.
Note, though, that either way your application may still be vulnerable to an attack called SSL stripping, and to prevent that you should always send an HSTS response header.
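A sketch of that last recommendation, continuing in WSGI terms (the max-age value is illustrative; start smaller until the HTTPS rollout is proven):

```python
# Append HSTS to every HTTPS response so browsers refuse future downgrades.
HSTS = ("Strict-Transport-Security", "max-age=31536000; includeSubDomains")

def add_hsts(start_response):
    """Wrap a WSGI start_response so the HSTS header is always appended."""
    def wrapped(status, headers, exc_info=None):
        return start_response(status, headers + [HSTS], exc_info)
    return wrapped
```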

Cross domain requests from the server

I know that browsers often prevent cross-domain HTTP requests to servers due to security measures (which can be worked around with CORS or JSONP), but what about a server making an HTTP request to another server? Can that be blocked by security restrictions?
I guess what I'm asking is: since the server is making the request and not the browser, would I still need to deal with things such as CORS and/or JSONP, or are those workarounds specifically geared toward browser-level security?
A computer is free to send whatever requests it wants.
In the case of CORS, that's one piece of software (the browser) restricting less trusted code (JavaScript) running on the same computer. But if you have full access to the computer, you can do anything.
It is a browser-specific measure designed to deal with the fact that people often run untrusted code in their browsers and sensibly want to restrict it. More specifically, the Same-Origin Policy imposes the restriction, and CORS is a way around it for participating servers, created because legitimate cross-site AJAX is sometimes needed.
Blocked by whose security restrictions? Of course it could be blocked by the target server or a firewall, but not by anything on the user's side. A server making an HTTP request to another web server is no different from your browser making the same request.
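To illustrate: a server-side fetch involves no CORS at all; only the target's own access controls (firewalls, authentication, rate limits) apply. A minimal sketch:

```python
# A server-to-server request: no browser, no Same-Origin Policy, no CORS.
from urllib.request import urlopen

with urlopen("https://example.com/") as resp:  # any reachable URL works
    print(resp.status, len(resp.read()))
```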

Using IIS as secure reverse proxy in front of less secure HTTP server?

I have a CppCMS-based application, and I can't use IIS's FastCGI connector because it is broken for my use case, so I want to try using the internal HTTP server (designed for debugging purposes) behind IIS.
It is quite a simple web server for an application: it handles basic HTTP/1.0 requests and does not care much about security issues like DoS, file serving, and so on.
So I'd like to know whether it is possible to use IIS in front of such an application so that it would:
Sanitize all requests - ensure that they are proper HTTP
Handle all DoS issues like timeouts
Serve the static files.
Is this something that can be configured and done at all?
I would suggest this is the wrong way of doing it. I would use a web server like Nginx to proxy the requests through to the backend server. It is very configurable, and you will find a lot of articles about doing the same in front of Apache.
We just did something like this. You want the URL Rewrite module. You can use it to sanitize the URLs; however, it isn't going to sanitize the payload. Which is to say, you can make sure that the URLs hitting your box are very specific ones, e.g. not attempts to hit CGI, but you can't use it to make sure that the contents of an upload are safe.
ModSecurity is now available for IIS; it can handle a lot of these security-related issues.
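For a feel for what the front server contributes, here is a toy sketch of the pattern in Python (not IIS-specific; the backend address is illustrative): the front end parses and validates every request and enforces timeouts before anything reaches the fragile backend, and it could serve static files itself instead of forwarding them.

```python
# Toy reverse proxy: the front server parses/validates requests and enforces
# timeouts, shielding a fragile HTTP/1.0-only backend.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:8080"  # illustrative backend address

class Proxy(BaseHTTPRequestHandler):
    timeout = 10  # drop slow or idle clients instead of tying up sockets

    def do_GET(self):
        # By the time we get here, the handler has already rejected
        # malformed request lines, so the backend sees only proper HTTP.
        try:
            with urlopen(BACKEND + self.path, timeout=5) as upstream:
                body = upstream.read()
        except OSError:
            self.send_error(502)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

ThreadingHTTPServer(("", 8000), Proxy).serve_forever()
```

In production, IIS with a reverse-proxy module such as Application Request Routing, or Nginx as the first answer suggests, does this same job with far more hardening.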

Why not use HTTPS for everything?

If I were setting up a server and had the SSL certificate(s), why wouldn't I use HTTPS for the entire site instead of just for purchases/logins? I would think it would make more sense to encrypt the entire site and protect the user entirely. It would prevent problems such as deciding what has to be secured, because everything would be, and it's not really an inconvenience to the user.
If I was already using HTTPS for part of the site, why wouldn't I want to use it for the entire site?
A related question is "Why is https only used for login?", but the answers are not satisfactory: they assume you haven't been able to apply HTTPS to the entire site.
In addition to the other reasons (especially performance-related ones), you can only host a single domain per IP address* when using HTTPS.
A single server can support multiple domains over plain HTTP because the Host header tells the server which domain the client wants.
With HTTPS, the server must offer its certificate to the client during the initial TLS handshake (which happens before any HTTP is exchanged). This means that the Host header hasn't been sent yet, so the server has no way to know which domain is being requested and which certificate (www.foo.com or www.bar.com) to respond with.
*Footnote: Technically, you can host multiple domains if you host them on different ports, but that is generally not an option. You can also host multiple domains if your SSL certificate has a wildcard; for example, you could host both foo.example.com and bar.example.com with a certificate for *.example.com. Modern clients also send the desired hostname inside the TLS handshake itself (SNI), which removes this limitation where supported; see the note about IIS 8 at the end.
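A sketch of how SNI closes this gap: the client names the host during the TLS handshake, before any HTTP header is sent, so the server can pick the matching certificate (hostname illustrative):

```python
# SNI: the hostname travels in the TLS handshake, so one IP can serve
# several certificates and pick the right one before HTTP even starts.
import socket
import ssl

host = "example.com"  # illustrative
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    # server_hostname is what gets sent as the SNI extension
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.getpeercert()["subject"])
```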
I can think of a couple reasons.
Some browsers may not support SSL.
SSL may decrease performance somewhat. If users are downloading large, public files, there may be a system burden to encrypt these each time.
SSL/TLS isn't used nearly often enough. HTTPS must be used for the entire session; at no point can a session ID be sent over plain HTTP. If you are only using HTTPS for logging in, then you are in clear violation of the OWASP Top 10 for 2010, "A3: Broken Authentication and Session Management".
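One concrete way to enforce that rule is the cookie's Secure flag, which tells the browser never to attach the session cookie to a plain-HTTP request. A sketch (cookie name and value are illustrative):

```python
# The Secure flag keeps the session cookie off plain-HTTP requests entirely;
# HttpOnly additionally hides it from page JavaScript.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "abc123"          # illustrative value
cookie["sessionid"]["secure"] = True
cookie["sessionid"]["httponly"] = True
print(cookie.output())  # Set-Cookie: sessionid=abc123; HttpOnly; Secure
```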
Why not send every snail-mail post in a tamper-proof opaque envelope by Registered Mail? Someone from the Post Office would always have personal custody of it, so you could be pretty sure that no one is snooping on your mail. Obviously, the answer is that while some mail is worth the expense, most mail isn't. I don't care if anyone reads my "Glad you got out of jail!" postcard to Uncle Joe.
Encryption isn't free, and it doesn't always help.
If a session (such as shopping, banking, etc.) is going to wind up using HTTPS, there's no good reason not to make the whole session HTTPS as early as possible.
My opinion is that HTTPS should be used only when unavoidably necessary, either because the request or the response needs to be safeguarded from intermediate snooping. As an example, go look at the Yahoo! homepage. Even though you're logged in, most of your interaction will be over HTTP. You authenticate over HTTPS and get cookies that prove your identity, so you don't need HTTPS to read news stories.
The biggest reason, beyond system load, is that it breaks name-based virtual hosting. With SSL, it's one site - one IP address. This is pretty expensive, as well as harder to administer.
On high-latency links, the initial TLS handshake requires additional round trips to validate the certificate chain (including sending any intermediate certificates), agree on cipher suites, and establish a session. Once a session is established, subsequent requests may use session caching to reduce the number of round trips, but even in this best case there are more round trips than a plain HTTP connection requires. Even if encryption operations were free, round trips are not, and they can be quite noticeable over slower network links, especially if the site does not leverage HTTP pipelining. For broadband users within a well-connected segment of the network this is not an issue; if you do business internationally, requiring HTTPS can easily cause noticeable delays.
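You can see the handshake cost directly by timing the TCP connect and the TLS handshake separately; a rough sketch (hostname illustrative, and the numbers will vary with distance to the server):

```python
# Rough measurement: TCP connect time vs. the extra TLS handshake time.
import socket
import ssl
import time

host = "example.com"  # illustrative
t0 = time.monotonic()
sock = socket.create_connection((host, 443))       # TCP three-way handshake
t1 = time.monotonic()
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname=host)  # full TLS handshake
t2 = time.monotonic()
print(f"TCP connect: {t1 - t0:.3f}s  TLS handshake: {t2 - t1:.3f}s")
tls.close()
```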
There are additional considerations, such as the server maintaining session state (potentially requiring significantly more memory) and, of course, the data encryption operations themselves. Small sites practically need not worry about either, given server capability versus the cost of today's hardware; any large site could easily afford CPUs with AES offload, or add-on cards providing similar functionality.
All of these issues are becoming less significant as time marches on and the capabilities of hardware and the network improve. In most cases I doubt there is any tangible difference today.
There may also be operational considerations, such as administrative restrictions on HTTPS traffic (think intermediate content filters, et al.) and possibly some corporate or governmental regulations. Some corporate environments require data decryption at the perimeter to prevent information leakage, and HTTPS interferes with hotspot and similar web-based access systems that cannot inject messages into HTTPS transactions. At the end of the day, in my view, the reasons for not going HTTPS by default are likely to be quite small.
https is more resource-hungry than the normal http.
It demands more from both the servers and the clients.
If the whole session is encrypted, then you won't be able to use caching for static resources like images and JS at the proxy level, e.g. at an ISP.
You should use HTTPS everywhere, but you will lose the following:
You should definitely not use SSL compression or HTTP compression over SSL, due to the BREACH and CRIME attacks (see the toy sketch after this list). So: no compression if your response contains session or CSRF identifiers. You can mitigate this by putting your static resources (images, JS, CSS) on a cookie-less domain and using compression there. You can also use HTML minification.
One SSL cert per IP address, unless you use SNI, which doesn't work in all browsers (old Android, BlackBerry 6, etc.).
You shouldn't host any external content on your pages that doesn't come over SSL.
You lose the outbound HTTP Referer header when the browser goes to an HTTP page, which may or may not be a problem for you.
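A toy sketch of why compression plus secrets is dangerous (the principle behind CRIME and BREACH): when attacker-supplied text in the same compressed stream matches a secret, the output gets measurably shorter, so an attacker who can inject guesses and observe lengths can recover the secret. The secret and guesses here are illustrative.

```python
# Toy CRIME/BREACH demonstration: a correct guess compresses better.
import zlib

secret = "csrf=8f3a2b"  # illustrative secret embedded in the response
for guess in ("csrf=8f3a", "csrf=zzzz"):
    payload = (secret + guess).encode()
    print(guess, "->", len(zlib.compress(payload)), "bytes")
# The matching prefix is deduplicated by the compressor, so the first
# payload compresses to fewer bytes than the second.
```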
Well, the obvious reason is performance: all of the data will have to be encrypted by the server before transmission and then decrypted by the client upon receipt, which is a waste of time if there's no sensitive data. It may also affect how much of your site is cached.
It's also potentially confusing for end users if all the addresses use https:// rather than the familiar http://. Also, see this answer:
Why not always use https when including a js file?
HTTPS requires the server to encrypt and decrypt client requests and responses. The performance impact adds up if the server is serving lots of clients. That's why most current implementations of HTTPS are limited to password authentication only. But with increasing computing power, this may change; after all, Gmail uses SSL for the entire site.
In addition to WhirlWind's response, you should consider the cost and applicability of SSL certificates, access issues (it's possible, though unlikely, that a client may not be able to communicate via the SSL port), etc.
Using SSL isn't a guaranteed blanket of security. This type of protection needs to be built into the architecture of the application, rather than trying to rely on some magic bullet.
I was told that on one project at our company, they found that the bandwidth taken up by SSL messages was significantly more than for plain messages. I believe someone told me it was an astounding 12 times as much data. I have not verified this myself, and it sounds very high, but if some sort of overhead is added to each page and most pages have a small amount of content, it may not be so far off.
That said, the hassle of going back and forth between HTTP and HTTPS, and of keeping track of which pages are which, seems like too much effort to me. I only once tried to build a site that mixed them, and we ended up abandoning the plan when we got tripped up by complications like pop-up windows created by JavaScript getting the wrong protocol attached to them. We ended up just making the whole site HTTPS as the lesser trouble. I guess in simple cases, where you just have a login screen and a payment screen that need to be protected and they're simple pages, it wouldn't be a big deal to mix and match.
I wouldn't worry much about the burden on the client to decrypt. Normally the client spends far more time waiting for data to come over the wire than it takes to process it. Until users routinely have gigabit-per-second internet connections, client processing power is probably pretty irrelevant. The CPU power required by the server to encrypt pages is a different issue; there may well be problems keeping up with hundreds or thousands of users.
One other small point (maybe someone can verify): if a user types data into a form item such as a text box and then for some reason refreshes the page, or the server crashes for a second, the data the user entered is lost over HTTPS but preserved over HTTP.
Note: I'm not sure if this is browser specific but it certainly happens with my Firefox browser.
Windows Server 2012 with IIS 8.0 now offers SNI (Server Name Indication), which allows multiple SSL web applications in IIS to be hosted on one IP address.
