Cross domain requests from the server - security

I know that browsers often prevent cross-domain HTTP requests to servers due to security measures (which can be worked around with CORS or JSONP), but what about a server making an HTTP request to another server? Can that be blocked by security restrictions?
I guess what I'm asking is: since the server is making the request and not the browser, would I still need to deal with things such as CORS and/or JSONP, or are those workarounds specifically geared towards browser-level security?

A computer is free to send whatever requests it wants.
In the case of CORS, that's one piece of software (the browser) restricting less trusted code (JavaScript) running on the same computer. But if you have full access to the computer, you can do anything.
It is a browser-specific measure designed to deal with the fact that people often run untrusted code in their browser and sensibly want to restrict it. More specifically, the Same Origin Policy imposes the restriction, and CORS is a way around it for participating servers, given the need for legitimate cross-site AJAX.
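For illustration, here is a minimal sketch (Python standard library; the URL is a placeholder) of a server-side request. There is no preflight and no Access-Control-Allow-Origin check, because no browser is involved to enforce the Same Origin Policy:

# A minimal sketch: a server-side process fetching a resource on another
# domain. No CORS handshake happens because nothing here enforces the
# Same Origin Policy. The URL is a placeholder.
from urllib.request import urlopen

with urlopen("https://api.example.com/data") as resp:
    body = resp.read()
    print(resp.status, len(body))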

Blocked by whose security restrictions? Of course it could be, but not by anything on the user's side. A server making an HTTP request to another web server is no different from your browser making the same request.

Related

Is it possible to restrict a requesting domain at the application level?

I wonder how some video streaming sites can restrict videos to be played only on certain domains. More generally, how do some websites respond only to requests from certain domains?
I've looked at http://en.wikipedia.org/wiki/List_of_HTTP_header_fields and saw the Referer field that might be used, but I understand that HTTP headers can be spoofed (can they?).
So my question is, can this be done at the application level? By application, I mean, for example, web applications deployed on a server, not a network router's operating system.
Any programming language would work for an answer. I'm just curious how this is done.
If anything's unclear, let me know. Or you can use it as an opportunity to teach me what I need to know to clearly specify the question.
HTTP headers carrying IP information are helpful (because only a small portion of clients fake them) but not reliable. Web applications usually sit on top of web frameworks, which give you easy access to these headers.
Some ways to gain source information:
The originating IP address from the TCP/IP network stack itself: the problem is that this server-visible address need not match the real client's address (it could come from a company proxy, an anonymous proxy, a big ISP...).
The HTTP X-Forwarded-For header: proxies are supposed to set this header to solve the problem above, but it can also be faked, and many anonymous proxies don't set it at all.
Apart from IP source information, you can also use machine identifiers (some use the User-Agent header). Several sites, for instance, store these machine identifiers inside Flash cookies so they can re-identify a returning client and block it. But same story: this is unreliable and can be faked.
The core problem is that you need a lot of security complexity to reliably identify a client (e.g. by authentication and client certificates). That is high effort and adds a lot of usability problems, so many sites don't do it. Most often this isn't an issue, because only a small portion of clients put any effort into faking their way into a server.
The HTTP Referer header is a different thing: it shows you which page a user was coming from, and it is included by the browser. It is also unreliable, because its content can be spoofed, and some clients don't include it at all (I remember several IE versions skipping the Referer).
These types of controls are based on the originating IP address. From the IP address, the country can be determined. Finding out the IP address requires access to low-level protocol information (e.g. from the socket).
The Referer header makes sense when you click a link from one site to another, but a typical HTTP request built with a programming library doesn't need to include it.
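As a concrete illustration, here is a minimal sketch (Python standard library; the allowed domain is a placeholder) of an application-level Referer check. As the answers above note, both headers are trivially spoofable, so treat this as a soft filter rather than a security boundary:

from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_REFERERS = ("https://www.example.com/",)  # placeholder allowed domain

class RefererCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        referer = self.headers.get("Referer", "")
        # X-Forwarded-For is only set by (honest) proxies; fall back to the socket.
        client_ip = self.headers.get("X-Forwarded-For") or self.client_address[0]
        self.log_message("request from %s", client_ip)
        if not referer.startswith(ALLOWED_REFERERS):
            self.send_error(403, "Forbidden: unrecognized Referer")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK\n")

HTTPServer(("", 8080), RefererCheckHandler).serve_forever()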

SSL: How to balance API performance with security?

APIs with terrible security are commonplace. Case in point: this story on TechCrunch.
It raises the question: how do you balance security with performance when it comes to SSL? Obviously, sensitive information such as usernames and passwords should be sent over SSL. What about subsequent calls that perhaps use an API key? At what point is it okay to use an unencrypted connection for API calls that require proof of identity?
If you allow mixed content, then a man-in-the-middle can rewrite the mixed content to inject JS and steal sensitive information already in the page.
With cafés and the like providing free wireless access, man-in-the-middle attacks are not all that difficult.
https://www.eff.org/pages/how-deploy-https-correctly gives a good explanation:
When hosting an application over HTTPS, there can be no mixed content; that is, all content in the page must be fetched via HTTPS. It is common to see partial HTTPS support on sites, in which the main pages are fetched via HTTPS but some or all of the media elements, stylesheets, and JavaScript in the page are fetched via HTTP. This is unsafe because although the main page load is protected against active and passive network attack, none of the other resources are. If a page loads some JavaScript or CSS code via HTTP, an attacker can provide a false, malicious code file and take over the page’s DOM once it loads. Then, the user would be back to a situation of having no security. This is why all mainstream browsers warn users about pages that load mixed content. Nor is it safe to reference images via HTTP: What if the attacker swapped the Save Message and Delete Message icons in a webmail app?
You must serve the entire application domain over HTTPS. Redirect HTTP requests with HTTP 301 or 302 responses to the equivalent HTTPS resource.
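As a sketch of that last recommendation, here is a minimal Python listener (port 8080 is a placeholder; production would bind port 80) that answers every plain-HTTP request with a 301 pointing at the HTTPS equivalent:

# A minimal sketch: 301-redirect every HTTP request to its HTTPS
# equivalent, per the EFF recommendation quoted above.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fall back to a placeholder hostname if no Host header was sent.
        host = self.headers.get("Host", "www.example.com").split(":")[0]
        self.send_response(301)
        self.send_header("Location", f"https://{host}{self.path}")
        self.end_headers()

HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()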
The problem is that without understanding the performance of your application, it is just wrong to try to optimize it without metrics. That is what leads devs to leave an API unencrypted, thinking they're eking out another 10 ms of performance. Simply put, the best way to balance security concerns against performance is to worry about security first, get some load from real customers (not whiteboard stick figures being obsessed over by some architect), and gather real metrics from your code when you suspect performance might be an issue. I have a weird feeling it won't be security related.
You need to gather some evidence about the alleged performance issues of SSL before you leap. You might get quite a surprise.
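A rough sketch of how you might gather that evidence (Python; the URLs are placeholders): time repeated requests to the same endpoint over HTTP and HTTPS and compare. Opening a fresh connection each time deliberately includes the TLS handshake; with keep-alive the gap shrinks further.

import time
from urllib.request import urlopen

def mean_request_time(url, n=20):
    # Each urlopen opens a new connection, so the TLS handshake cost
    # is included for the https URL.
    start = time.perf_counter()
    for _ in range(n):
        with urlopen(url) as resp:
            resp.read()
    return (time.perf_counter() - start) / n

print("http :", mean_request_time("http://www.example.com/"))
print("https:", mean_request_time("https://www.example.com/"))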

What are the pros and cons of a 100% HTTPS site?

First, let me admit that what I know about HTTPS is pretty rudimentary. I don't know much about session security, encryption, or how either of those things is supposed to be done.
What I do know is that web security is important; that horror stories of XSS, CSRF, and database injections pop up over and over again. I know that a preventative stance against such exploits is better than a reactive one.
But the motivation for this question comes from a different point of view. I work at a site that regularly accepts payment from users. Obviously, the payments are sent over a secure channel (HTTPS). I mainly work on the CSS, HTML, and JavaScript of the site. What I've been told is that it is necessary to duplicate CSS, JavaScript, and image files before they can be called over HTTPS. So assume I have the following files:
css/global.css
js/global.js
images/
logo.png
bg.png
The way I understand it, these files need to be duplicated before they can be "added" to the HTTPS. So a file can either be under security (HTTPS) or not.
If this is true, then this is a major hindrance. In even the smallest site, it would be a major pain to duplicate files and then have to maintain them every time you make a CSS or JS change. Obviously this could be alleviated by moving everything to HTTPS.
So what I want to know is, what are the pros and cons of a site that is completely behind HTTPS? Does it cause noticeable overhead? Is it just foolish to place the entire site under encryption? Would users feel safer seeing the "secure" notifications in their browser during their entire visit? And last but not least, does it truly make for a more secure site? What can HTTPS not protect against?
You can serve the same content via HTTPS as you do via HTTP (just point it to the same document root).
Cons that may be major or minor, depending:
serving content over HTTPS is slower than serving it via HTTP.
certificates signed by well-known authorities can be expensive
if you don't have a certificate signed by a trusted authority (e.g., you sign it yourself), visitors will get a warning
Those are pretty basic, but just a few things to note. Also, personally, I feel much better seeing that the entire site is HTTPS if it's anything related to financial stuff, obviously, but as far as general browsing, no, I don't care.
Noticeable overhead? Yes, but that matters less and less these days as clients and servers are much faster.
You don't need to make a copy of everything, but you do need to make those files accessible via HTTPS. Your HTTPS and HTTP services can use the same doc root.
Is it foolish to put the whole site under encryption? Typically no.
Would users feel safer? Probably.
Does it truly make for a more secure site? Only when dealing with the communication channel between the client and the server. Everything else is still up for grabs.
You've been misinformed. The css, js, and image files need not be duplicated assuming you've set up the http and https mapping to point to the same physical website on the server. The only important thing is that these files are referenced with https when the page you're looking at is also under https. This will prevent the dreaded security message that says that some objects on the page are not secured.
For every other page where you're running the site under http (unsecured) you can reference those same files in the same locations, but with an http address.
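For what it's worth, here is a minimal sketch (Python; the document root and certificate paths are placeholders) of one process serving the same document root over both HTTP and HTTPS, so nothing is duplicated. In practice you would normally do the same thing in the web server configuration by pointing both virtual hosts at one root:

import ssl
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# One shared document root for both protocols (placeholder path).
handler = partial(SimpleHTTPRequestHandler, directory="/var/www/site")

http_srv = HTTPServer(("", 8080), handler)

https_srv = HTTPServer(("", 8443), handler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("cert.pem", "key.pem")  # placeholder cert/key files
https_srv.socket = ctx.wrap_socket(https_srv.socket, server_side=True)

threading.Thread(target=http_srv.serve_forever, daemon=True).start()
https_srv.serve_forever()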
To answer your other question, there would indeed be a performance penalty to put the entire site under https. The server has to work hard to encrypt everything it sends over the wire. And then some not-so-old browsers won't cache https content to disk by default, which of course will result in an even heavier load on the server.
Because I like my sites to be as responsive as possible, I'm always selective about which sections of a site I choose to be SSL-encrypted. In most typical e-commerce sites, the only pages that need SSL encryption are the login, registration, and checkout pages.
The traditional reason for not having the entire site behind SSL is processing time. It does take more work for both the client and the server to use SSL. However, this overhead is fairly small compared to modern processors.
If you are running a very large site, you may need to scale slightly faster if you are encrypting everything.
You also need to buy a certificate, or use a self-signed one, which may not be trusted by your users.
You also need a dedicated IP address. If you are on a shared hosting system, you need an IP that you can dedicate to having SSL only on your site.
But if you can afford a certificate and a dedicated IP, and don't mind needing a slightly faster server, using SSL on your entire site is a great idea.
With the number of attacks that SSL mitigates, I would say do it.
You do not need multiple copies of these files for them to work with HTTPS. You may need two copies of these files if the hosting setup has been configured such that you have a separate HTTPS directory. So to answer your question: no, duplicate files are not required for HTTPS, but depending on the web hosting configuration, they may be.
In regards to the pros and cons of https vs http there are already a few posts addressing that.
HTTP vs HTTPS performance
HTTPS vs HTTP speed comparison
HTTPS only encrypts the data between the client computer and the server. It does not fix software holes or issues such as remote JavaScript includes. HTTPS doesn't make your application better; it only helps secure the data between the user and your app. You still need to make sure your app has no security holes: filter all incoming data, including SQL, and review security logs frequently.
However, if you're only responsible for the frontend part of the site, I wouldn't worry about it, but I would raise the security concerns with the main developer for the backend.
One concern is that HTTPS traffic can be blocked. For example, on Apple computers, if you set up parental controls, HTTPS traffic is blocked because the filter can't read the encrypted content. You can read about it here:
http://support.apple.com/kb/ht2900
https note: For websites that use SSL encryption (the URL will usually begin with https), the Internet content filter is unable to examine the encrypted content of the page. For this reason, encrypted websites must be explicitly allowed using the Always Allow list. Encrypted websites that are not on the Always Allow list will be blocked by the automatic Internet content filter.
An important "pro" for more https at your site is the following:
a user connecting through unencrypted Wi-Fi, like at an airport, can send their password over HTTPS, but if the site then switches back to HTTP after the password page, the session cookie becomes exposed and can be immediately used by an eavesdropper.
See this article http://steve.grc.com/2010/10/28/why-firesheeps-time-has-come/#comment-2666

Why not use HTTPS for everything?

If I was setting up a server, and had the SSL certificate(s), why wouldn't I use HTTPS for the entire site instead of just for purchases/logins? I would think it would make more sense just to encrypt the entire site, and protect the user entirely. It would prevent problems such as deciding what has to be secured because everything would be, and it's not really an inconvenience to the user.
If I was already using an HTTPS for part of the site, why wouldn't I want to use it for the entire site?
This is a related question: Why is https only used for login?, but the answers are not satisfactory. The answers assume you've not been able to apply https to the entire site.
In addition to the other reasons (especially performance related) you can only host a single domain per IP address* when using HTTPS.
A single server can support multiple domains over HTTP because the Host header lets the server know which domain to respond with.
With HTTPS, the server must offer its certificate to the client during the initial TLS handshake (which happens before HTTP starts). This means the Host header hasn't been sent yet, so there is no way for the server to know which domain is being requested and which certificate (www.foo.com or www.bar.com) to respond with.
*Footnote: Technically, you can host multiple domains if you host them on different ports, but that is generally not an option. You can also host multiple domains if your SSL certificate has a wildcard. For example, you could host both foo.example.com and bar.example.com with the certificate *.example.com.
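(Modern TLS addresses this with Server Name Indication, as later answers note.) As a small illustration, here is a sketch (Python; the hostname is a placeholder) that fetches the certificate a server presents for a given name; the standard library sends the hostname via SNI during the handshake, which is what lets one IP serve several certificates:

import ssl

# Fetch the PEM-encoded certificate the server presents for this hostname.
# ssl.get_server_certificate sends the name via SNI during the handshake.
pem = ssl.get_server_certificate(("www.example.com", 443))
print(pem[:120], "...")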
I can think of a couple reasons.
Some browsers may not support SSL.
SSL may decrease performance somewhat. If users are downloading large, public files, there may be a system burden to encrypt these each time.
SSL/TLS isn't used nearly often enough. HTTPS must be used for the entire session; at no point can a session ID be sent over HTTP. If you are only using HTTPS for logging in, then you are in clear violation of the OWASP Top 10 for 2010, "A3: Broken Authentication and Session Management".
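A minimal sketch of the standard mitigation (Python; the cookie name and value are placeholders): mark the session cookie Secure so the browser never sends it over plain HTTP, and HttpOnly so page scripts can't read it:

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-token"  # placeholder value
cookie["session_id"]["secure"] = True    # never sent over plain HTTP
cookie["session_id"]["httponly"] = True  # not readable from JavaScript
print(cookie.output())  # e.g. Set-Cookie: session_id=...; HttpOnly; Secure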
Why not send every snail-mail post in a tamper-proof opaque envelope by Registered Mail? Someone from the Post Office would always have personal custody of it, so you could be pretty sure that no one is snooping on your mail. Obviously, the answer is that while some mail is worth the expense, most mail isn't. I don't care if anyone reads my "Glad you got out of jail!" postcard to Uncle Joe.
Encryption isn't free, and it doesn't always help.
If a session (such as shopping, banking, etc.) is going to wind up using HTTPS, there's no good reason not to make the whole session HTTPS as early as possible.
My opinion is that HTTPS should be used only when unavoidably necessary, either because the request or the response needs to be safeguarded from intermediate snooping. As an example, go look at the Yahoo! homepage. Even though you're logged in, most of your interaction will be over HTTP. You authenticate over HTTPS and get cookies that prove your identity, so you don't need HTTPS to read news stories.
The biggest reason, beyond system load, is that it breaks name-based virtual hosting. With SSL, it's one site - one IP address. This is pretty expensive, as well as harder to administer.
For high-latency links, the initial TLS handshake requires additional round trips to validate the certificate chain (including sending any intermediate certificates), agree on cipher suites, and establish a session. Once a session is established, subsequent requests may use session caching to reduce the number of round trips, but even in this best case there are still more round trips than a normal HTTP connection requires. Even if encryption operations were free, round trips are not, and they can be quite noticeable over slower network links, especially if the site does not leverage HTTP pipelining. For broadband users within a well-connected segment of the network this is not an issue. If you do business internationally, requiring HTTPS can easily cause noticeable delays.
There are additional considerations, such as the server maintaining session state, which can require significantly more memory, and of course the data encryption operations themselves. Small sites practically need not worry about either, given server capability versus the cost of today's hardware. Any large site could easily afford CPUs with AES offload or add-on cards providing similar functionality.
All of these issues are becoming more and more of a non-issue as time marches on and the capabilities of hardware and the network improve. In most cases I doubt there is any tangible difference today.
There may also be operational considerations, such as administrative restrictions on HTTPS traffic (think intermediate content filters, et al.) and possibly some corporate or governmental regulations. Some corporate environments require data decryption at the perimeter to prevent information leakage, and there is interference with hotspots and similar web-based access systems that are not capable of injecting messages into HTTPS transactions. At the end of the day, in my view, the reasons for not going HTTPS by default are likely to be quite small.
HTTPS is more resource-hungry than normal HTTP.
It demands more from both the servers and the clients.
If the whole session is encrypted, you won't be able to use caching for static resources like images and JS at the proxy level, e.g. at the ISP.
You should use HTTPS everywhere, but be aware of the following:
You should definitely not use SSL compression or HTTP compression over SSL, due to the BREACH and CRIME attacks. So no compression if your response contains session or CSRF identifiers. You can mitigate this by putting your static resources (images, JS, CSS) on a cookieless domain and using compression there. You can also use HTML minification.
One SSL cert, one IP address, unless you use SNI, which doesn't work in all browsers (old Android, BlackBerry 6, etc.).
You shouldn't host any external content on your pages that doesn't come over SSL.
You lose the outbound HTTP Referer header when the browser goes to an HTTP page, which may or may not be a problem for you.
Well, the obvious reason is performance: all of the data will have to be encrypted by the server before transmission and then decrypted by the client upon receipt, which is a waste of time if there's no sensitive data. It may also affect how much of your site is cached.
It's also potentially confusing for end users if all the addresses use https:// rather than the familiar http://. Also, see this answer:
Why not always use https when including a js file?
HTTPS requires the server to encrypt and decrypt client requests and responses. The performance impact adds up if the server is serving lots of clients. That's why most current implementations of HTTPS are limited to password authentication only. But with increasing computing power, this may change; after all, Gmail uses SSL for the entire site.
In addition to WhirlWind's response, you should consider the cost and applicability of SSL certificates, access issues (it's possible, though unlikely, that a client may not be able to communicate via the SSL port), etc.
Using SSL isn't a guaranteed blanket of security. This type of protection needs to be built into the architecture of the application, rather than trying to rely on some magic bullet.
I was told that on one project at our company, they found that the bandwidth taken up by SSL messages was significantly more than for plain messages. I believe someone told me it was an astounding 12 times as much data. I have not verified this myself and it sounds very high, but if there is some sort of header added to each page and most pages have a small amount of content, that may not be so far out.
That said, the hassle of going back and forth between http and https and keeping track of which pages are which seems like too much effort to me. I only once tried to build a site that mixed them and we ended up abandoning the plan when we got tripped up by complex things like pop-up windows created by Javascript getting the wrong protocol attached to them and that sort of thing. We ended up just making the whole site https as less trouble. I guess in simple cases where you just have a login screen and a payment screen that need to be protected and they're simple pages, it wouldn't be a big deal to mix-and-match.
I wouldn't worry much about the burden on the client to decrypt. Normally the client is going to spend a lot more time waiting for data to come over the wire than it takes to process it. Until users routinely have gigabit-per-second Internet connections, client processing power is probably pretty irrelevant. The CPU power required by the server to encrypt pages is a different issue; there might well be issues with it not being able to keep up with hundreds or thousands of users.
One other small point (maybe someone can verify): if a user types data into a form item such as a text box and then for some reason refreshes the page, or the server crashes out for a second, the data the user entered is lost over HTTPS but preserved over HTTP.
Note: I'm not sure if this is browser specific but it certainly happens with my Firefox browser.
Windows Server 2012 with IIS 8.0 now offers SNI (Server Name Indication), which allows multiple SSL web applications in IIS to be hosted on one IP address.

Cross-domain error

What is a cross-domain error?
It happens when JavaScript (most of the time) tries to access something it shouldn't.
For example, if you try to read another domain's cookie, that won't work. If you try to make an XMLHTTP request to another domain or protocol (HTTP > HTTPS), that won't work, because if you could do that, you could hijack and steal your visitors' sessions on other websites.
It's a security feature, and it is now standard in all browsers.
As I understand it, client-side tools such as Silverlight (and maybe Flash/JavaScript) throw a cross-domain error when you attempt to make a connection that is normally only allowed when made to the same domain the page was served from (the same origin policy).
A cross-domain error may be thrown when, for example, you are viewing a page on your test server while it is trying to call your live server, or when you are viewing a test page as a local file using the file:// protocol.
Try ensuring that the domain you are testing on is the same as the one the site was designed for. Note that Flash has the crossdomain.xml feature, which specifically allows you to do cross-domain requests. JavaScript also has ways to get around the same origin policy, but you should be aware of the implications of what you're doing.
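As a small diagnostic sketch (Python; the URL and origin are placeholders), you can check from outside the browser whether an endpoint opts in to cross-origin requests by sending a preflight-style OPTIONS request and inspecting the Access-Control-Allow-Origin response header:

from urllib.request import Request, urlopen

req = Request(
    "https://api.example.com/data",  # placeholder endpoint
    method="OPTIONS",
    headers={
        "Origin": "https://www.example.com",  # placeholder origin
        "Access-Control-Request-Method": "GET",
    },
)
with urlopen(req) as resp:
    # "*" or an explicit origin means cross-origin requests are permitted.
    print(resp.headers.get("Access-Control-Allow-Origin"))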
