Cross-domain error

What is a cross-domain error?

It happens when JavaScript (most of the time) tries to access something it shouldn't.
For example, if you try to read another domain's cookie, that won't work. If you try to make an XMLHttpRequest to another domain or protocol (HTTP > HTTPS), that won't work either, because if you could do that you could hijack or steal your visitors' sessions on other websites.
It's a security feature and is now standard in all browsers.
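As a rough illustration (the origins below are hypothetical), a script running on one origin cannot read a response from another origin unless that origin opts in:

    // Runs in a page served from https://shop.example.com (hypothetical origin).
    // The request targets a different origin, so unless that origin opts in
    // via CORS response headers, the browser refuses to hand the response
    // to the script.
    fetch('https://bank.example.net/account')
      .then(response => response.json())
      .then(data => console.log(data))
      .catch(err => {
        // Typically a TypeError such as "Failed to fetch", with a
        // "blocked by CORS policy" message in the console.
        console.error('Cross-origin request blocked:', err);
      });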

As I understand it, client-side tools such as Silverlight (and maybe Flash/JavaScript) throw a cross-domain error when you attempt to make a connection to a server that is normally only allowed when it is made to the same domain that the page was served from (same-origin policy).
A cross-domain error may be thrown when, for example, you are viewing a page on your test server that is trying to call your live server, or when you are viewing a test page as a local file using the file:// protocol.
Try ensuring that the domain you are testing on is the same as the one the site was designed for. Note that Flash has the crossdomain.xml feature, which specifically allows you to make cross-domain requests. JavaScript also has ways to get around the same-origin policy, but you should be aware of the implications of what you're doing.
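For the JavaScript side, the usual way around the same-origin policy today is CORS: the other server explicitly opts in with response headers. A minimal Node.js sketch (the allowed origin and port are illustrative, not from the original question):

    // A server that opts in to cross-origin requests from one specific
    // origin by sending CORS response headers.
    const http = require('http');

    http.createServer((req, res) => {
      res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com');
      res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
      res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

      if (req.method === 'OPTIONS') {
        // Preflight request: answer with the headers above and no body.
        res.writeHead(204);
        res.end();
        return;
      }

      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080);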

Related

Unsecure XMLHttpRequest calls from secure page

In our company we need to implement a self-hosted REST service that has to be deployed on the client workstations in order for our internal web applications to interact with it.
The web applications are served over https, and we are not using, at the moment, the CSP headers.
Our concern is whether it's necessary to also call the local service over https, or whether this can be avoided (so we can avoid managing a certificate to deploy on every single workstation).
We ran some trials with Chrome and Edge and the ajax calls seem to work over plain http too, but we would like to know whether that is actually supported or not.
Thank you!
On an HTTPS connection browsers will block HTTP content as mixed content; CSP will not change that. However, Chrome will allow mixed content on http://127.0.0.1 and http://localhost, while Firefox will allow it on http://127.0.0.1; see the note on https://developer.mozilla.org/en-US/docs/Web/Security/Mixed_content.
When you implement CSP you should include http://127.0.0.1 (or http://localhost) for the appropriate directive.
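If the applications do adopt CSP later, the directive that matters for these ajax calls is connect-src. A hedged example policy (the port and the other directive values are illustrative, not from the question):

    Content-Security-Policy: default-src 'self'; connect-src 'self' http://127.0.0.1:8081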

If I use http for part of my website and https for another part, does this open up any security issues?

I have a node.js app.
I have it configured to redirect everything to https from http.
But I was wondering whether the extra work of making the normal pages visible over http and the logged-in pages visible only via https would be worth the effort.
Does having both in my app expose any security holes?
Yes, multiple, including:
Cookies are shared between the two sites unless you remember to include the "secure" attribute each time you set a cookie (see the sketch at the end of this answer).
You are vulnerable to MITM attacks (e.g. replacing a "login" link on http to either keep you on http or redirect you to another site instead).
Resources need to be loaded over https on the secure site or you will get mixed security warnings. It's easy to miss this when running mixed sites.
Users will not know whether pages should be secure or not.
It's easy to forget to renew the cert, or for cert errors to go unnoticed, whereas these problems would be more obvious if the whole site were https.
You cannot use advanced security features like HSTS.
And that's just off the top of my head.
Go https everywhere and redirect all http traffic to https. Unless you've a good reason not to.
There are other benefits too (user confidence, it looks more professional, a small SEO boost, Google otherwise sees http and https as two separate sites, easier management of the site, Chrome will soon block access to some features like location tracking on http, you cannot upgrade to HTTP/2 until you implement https... etc.).
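As a hedged sketch of the cookie point above, assuming a plain Node.js https server (the certificate paths and cookie value are placeholders):

    const https = require('https');
    const fs = require('fs');

    // An HTTPS server that marks the session cookie Secure and HttpOnly so
    // it is never sent over plain http or exposed to page scripts.
    const options = {
      key: fs.readFileSync('/etc/ssl/private/example.key'),  // placeholder path
      cert: fs.readFileSync('/etc/ssl/certs/example.crt'),   // placeholder path
    };

    https.createServer(options, (req, res) => {
      res.setHeader('Set-Cookie',
        'session=opaque-session-id; Secure; HttpOnly; SameSite=Lax; Path=/');
      res.end('ok');
    }).listen(443);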

Is it really possible to hack a forbidden web area that throws a 403 error in the browser?

I am not asking how. I am asking if. Is it possible to bypass a 403 error on the web?
Let me explain in a bit of detail. On a web server, IIS has been set up with a directory for a project we are working on, such that it is not accessible to the outside. So if you type the path to that directory into a web browser, the browser will say that it is not accessible and will show a 403 error.
Now, here is the problem. Some files with secure information are placed there. A programmer on our team has made a big deal about this and the fact that the files are placed on a server that is accessible to the outside world. On the other hand, I think this is not such a big deal, since if an outside user tried to go to that directory, his web browser would show the 403 error. But other people on the team say a hacker can still somehow access it.
So that leads me here and to my question. Is it possible to bypass a 403 error on the web? I say no. Some network guys at work say maybe. I am not asking how to do it. I am only asking if it is really possible.
I gather from your information that there is a web server with a directory set up on the web like so
http://www.example.com/directory
Now, if you navigate to this URL you get a 403 Forbidden error? However, if you know the name of a file you can go to http://www.example.com/directory/MyImportantDocument.docx and it is possible to view the document at this location?
Unless there is a runnable script on your server that does this, it is not possible to view the directory contents via the web. However, URLs are not considered secure as they are logged in browser history, proxy and server logs and can also be leaked by browsers' referer header. I assume the files are stored here so they can be accessed by a remote application?
File names can be easily brute forced by an attacker. Tools such as dirbuster and dirb do this automatically. Therefore, if the files do not need to be readable remotely, they should be moved to an internal server, not accessible from the internet or DMZ.
If access is needed you should implement some sort of authentication. At the very least, activate basic auth on IIS. This will prompt a web browser user for a username and password in order to view files, or the files can be accessed programmatically by setting the appropriate Authorization header, which contains the base64-encoded username and password.
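For the programmatic case, a hedged sketch for Node.js 18+ (the credentials are placeholders; the URL reuses the example path above):

    // Basic authentication: the Authorization header carries the
    // base64-encoded "username:password" pair, so it must only ever be
    // sent over HTTPS.
    const user = 'serviceAccount';   // placeholder
    const pass = 'secretPassword';   // placeholder
    const token = Buffer.from(user + ':' + pass).toString('base64');

    fetch('http://www.example.com/directory/MyImportantDocument.docx', {
      headers: { Authorization: 'Basic ' + token },
    })
      .then(res => res.arrayBuffer())
      .then(buf => console.log('Fetched', buf.byteLength, 'bytes'));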
Better would be something with comprehensive session management, like an application pre-built for this purpose. E.g. a CMS which is kept up-to-date and securely configured.
Also you should make sure that the IIS website is only configured to be accessed via HTTPS which will protect against traffic snooping of the credentials, URL path, headers and file contents.
In some cases (e.g. back-end or web server misconfiguration) it's possible to bypass a 403. To understand those methods, have a look at this script:
https://github.com/lobuhi/byp4xx
This script contains well-known methods collected from various bug bounty communities.
So if your back-end server is not vulnerable to the methods in this script, it's probably safe.
So basically it is NOT possible if the server software itself doesn't have any bugs. But if other parts of your website are public and use a dynamic scripting language, that may raise your risk if someone is able to find a hole that allows something like reading arbitrary files from the filesystem.
In general I would recommend NOT storing any security-relevant files on a public server if they don't need to be there.
If you can avoid it, that is always the better way.
There is a simple exploit to bypass .htaccess restrictions... try to Google "bypass error 403" and you will find the method. As an auditor I can confirm that it is not good practice (and if I see it I will always raise it as an issue) to store credentials (or any other sensitive information) in plain text on a web server.

Cross-domain requests from the server

I know that browsers often prevent cross-domain HTTP requests to servers due to security measures (which can be avoided with CORS or JSONP), but what about a server making an HTTP request to another server? Can that be blocked by security restrictions?
I guess what I'm asking is: since the server is making the request and not the browser, would I still need to deal with things such as CORS and/or JSONP, or are those workarounds specifically geared towards browser-level security?
A computer is free to send whatever requests it wants.
In the case of CORS, that's one piece of software (the browser) restricting less trusted code (Javascript) running on the same computer. But if you have full access to the computer you can do anything.
It is a browser specific measure designed to deal with the fact that people often run untrusted code in their browser and sensibly want to restrict it. More specifically, the Same Origin Policy causes the restriction and CORS is a way around it for participating servers due to the need for legitimate cross site AJAX.
Blocked by whose security restrictions? Of course it could be, but not by the user. A server making an HTTP request to another web server is no different than your browser making the same request.
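To make that concrete, a hedged sketch (the URL is a placeholder): the same cross-origin GET that a browser would block without CORS goes through without restriction when issued from server-side Node.js code.

    const https = require('https');

    // No same-origin policy applies here: this is just one server talking
    // to another over HTTPS.
    https.get('https://api.other-domain.example/data', (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => console.log(res.statusCode, body));
    });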

SSL: How to balance API performance with security?

APIs with terrible security are commonplace. Case in point: this story on TechCrunch.
It raises the question: how do you balance security with performance when it comes to SSL? Obviously, sensitive information such as usernames and passwords should be sent over SSL. What about subsequent calls that perhaps use an API key? At what point is it okay to use an unencrypted connection for API calls that require proof of identity?
If you allow mixed content, then a man-in-the-middle can rewrite that mixed content to inject JS and steal sensitive information already in the page.
With cafés and the like providing free wireless access, man-in-the-middle attacks are not all that difficult.
https://www.eff.org/pages/how-deploy-https-correctly gives a good explanation:
When hosting an application over
HTTPS, there can be no mixed content;
that is, all content in the page must
be fetched via HTTPS. It is common to
see partial HTTPS support on sites, in
which the main pages are fetched via
HTTPS but some or all of the media
elements, stylesheets, and JavaScript
in the page are fetched via HTTP.
This is unsafe because although the
main page load is protected against
active and passive network attack,
none of the other resources are. If a
page loads some JavaScript or CSS code
via HTTP, an attacker can provide a
false, malicious code file and take
over the page’s DOM once it loads.
Then, the user would be back to a
situation of having no security. This
is why all mainstream browsers warn
users about pages that load mixed
content. Nor is it safe to reference
images via HTTP: What if the attacker
swapped the Save Message and Delete
Message icons in a webmail app?
You must serve the entire application
domain over HTTPS. Redirect HTTP
requests with HTTP 301 or 302
responses to the equivalent HTTPS
resource.
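A hedged sketch of that last recommendation in Node.js (this assumes the HTTPS listener for the actual application exists elsewhere):

    const http = require('http');

    // A plain-http listener whose only job is to send every request to the
    // equivalent https URL with a permanent (301) redirect.
    http.createServer((req, res) => {
      res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
      res.end();
    }).listen(80);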
The problem is that without understanding the performance of your application, it is just wrong to try to optimize it without metrics. This is what leads devs to leave an API unencrypted, thinking they are eking out another 10ms of performance. Simply put, the best way to balance security concerns against performance is to worry about security first, get some load from real customers (not whiteboard stick figures being obsessed over by some architect), and gather real metrics from your code when you suspect performance might be an issue. I have a weird feeling that it won't be security related.
You need to gather some evidence about the alleged performance issues of SSL before you leap. You might get quite a surprise.
