What is the latest news on CDN security, given that the CDN provider has access to my users' cookies? Let's assume that the delivered scripts are not patched to do a malicious job behind the scenes, but the fact that someone intercepts my users' cookies (by design; SSL can't help here) is somewhat worrying.
A CDN won't have access to your users' cookies. Cookies are tied to a specific domain (or its subdomains, if you tell them to be), so HTTP requests to a CDN on a different domain won't include them. You can use Fiddler or the network monitoring tool of your choice to verify this.
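You can also verify the domain-scoping rule without a proxy tool. Here is a small sketch using Python's standard library cookie jar, which applies the same browser-style matching rules; the domains `www.example.com` and `cdn.example-cdn.net` are hypothetical, and no network traffic is involved:

```python
import urllib.request
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain):
    # Build a minimal session cookie scoped to `domain`.
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={},
    )

jar = CookieJar()
jar.set_cookie(make_cookie("session", "abc123", "www.example.com"))

# Same domain: the cookie is attached to the request.
same = urllib.request.Request("http://www.example.com/page")
jar.add_cookie_header(same)
print(same.get_header("Cookie"))   # session=abc123

# Different domain (the CDN): no cookie is attached.
cdn = urllib.request.Request("http://cdn.example-cdn.net/app.js")
jar.add_cookie_header(cdn)
print(cdn.get_header("Cookie"))    # None
```

`add_cookie_header` only decides which cookies *would* be sent; the requests themselves are never issued.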
My web app is not publicly available and will be used by certain verified users within a firewall.
Do I need to worry about CSRF?
Reading about CSRF attacks, I came across the following in the Spring documentation:
18.3 When to use CSRF protection
When should you use CSRF protection? Our recommendation is to use CSRF protection for any request that could be processed by a browser by normal users. If you are only creating a service that is used by non-browser clients, you will likely want to disable CSRF protection.
I believe apps hosted publicly are more susceptible.
Thank you in advance!
Naturally, I do not know the exact circumstances or environment you are dealing with, so I can't say beyond a reasonable doubt here.
Your risk factor is definitely lower, given that the web app is not publicly accessible, sits behind a firewall, and is limited to verified users. But it depends on what your app is doing.
CSRF (also written XSRF) protection is only needed for sites that authenticate with cookies. Every request to your site carries your cookies, even if the request comes from a web page not controlled by you. Protection matters most when a request modifies state on the server, but it can be useful even in cases where state is not changed.
This guide contains a short tutorial on XSRF and how to avoid it, and lets you actually try the attack out.
CSRF is one of the attacks that is actually used to bypass firewalls. So it sounds like you do need protection if authentication in your app relies on something sent automatically by the browser (e.g. session cookies). If authentication tokens are not sent automatically, i.e. you have explicit JavaScript that attaches them to requests as request headers, then you are inherently protected from CSRF.
In a CSRF attack, the unsuspecting victim, while logged in to your (internal) application, visits a malicious external site. That site then causes the victim's browser to send a request (e.g. a POST) to one of your app's internal URLs, with the right parameters to do something your user didn't want. Because cookies are sent automatically to the destination domain, the request will be valid and processed by your app (this mostly holds even when the request is triggered from JavaScript, although the attacker cannot read the response).
Notice that the party making the internal request is your victim user, who has access to the internal resource. The firewall is bypassed because the victim's browser was tricked into making the request.
Also note that this requires knowledge of your app (most probably a targeted attack specifically against your app). That reduces the risk somewhat, but it is still very much a risk from a security perspective. You don't want to rely on security by obscurity: you should assume that attackers know everything about your application except cryptographic secrets, including endpoints, required parameters, and so on. That will not always be the case, but you should design for it to make the app secure.
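A common defense is the synchronizer-token pattern the Spring documentation refers to: store a random token in the server-side session, embed it in each form, and reject any state-changing request whose submitted token doesn't match. A framework-agnostic sketch (the plain dict stands in for a real server-side session):

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Generate an unguessable token and remember it server-side.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in a hidden form field or request header

def is_valid_csrf(session, submitted):
    # Constant-time comparison against the token stored in the session.
    expected = session.get("csrf_token")
    if not expected or not submitted:
        return False
    return hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
print(is_valid_csrf(session, token))     # True
print(is_valid_csrf(session, "forged"))  # False
```

This works because the malicious external site cannot read your pages (same-origin policy), so it never learns the token and its forged request fails the check.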
I want to only allow users from my own IP and two domains which would cover the client's intranet and external secure website. Should I be doing it in web.config? Azure itself?
Thanks for the help :-)
The mechanism that controls which origins are allowed to access resources in your Azure Web App (or any other HTTP endpoint, for that matter) is called Cross-Origin Resource Sharing (CORS).
CORS is a web standard (it builds on the origin concept defined in RFC 6454) and is supported and configurable for any Web App / App Service. However, it will not help you achieve what you are trying to do.
Web browsers nowadays enforce what's referred to as the same-origin policy, under which scripts on a page may only read responses from the same origin as the one shown in the address bar. Why? It's a security mechanism designed to protect the user against cross-site request forgery (CSRF) style attacks, where a malicious actor crafts scripts that make calls to websites the victim is currently logged in to; the victim's cookie is automatically sent to the server, letting the attacker carry out activities as the victim. CORS allows developers to safely permit cross-origin requests by white-listing particular origins that are allowed to access resources.
CORS should not be used as a mechanism to restrict access to a site. Neither should the Referer HTTP header be used to lock down access to a website, since it can easily be spoofed. Further, CORS is enforced only by the browser, on an honor-system basis: non-browser clients simply ignore it, and when the origin of a request cannot be determined, the request is allowed, as it is assumed to be same-origin or user-initiated.
What you are looking for is IP restrictions. Azure Web Apps support IP restrictions. In the portal, navigate to your Web App -> Networking -> IP Restrictions. This area will allow you to configure IP addresses or ranges that are allowed to access the application. You will need to create a rule allowing your IP address and the addresses relating to the "referral domains" in question, which assumes that the users are coming from known addresses.
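If you would rather keep the rules in source control, IIS (and hence Azure Web Apps) also honors the `<ipSecurity>` element in web.config. A sketch with placeholder addresses (replace them with your own IP and the clients' egress ranges):

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Deny everything that is not explicitly listed -->
      <ipSecurity allowUnlisted="false">
        <!-- Your own IP (placeholder) -->
        <add ipAddress="203.0.113.10" allowed="true" />
        <!-- Client intranet egress range (placeholder) -->
        <add ipAddress="198.51.100.0" subnetMask="255.255.255.0" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```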
So, to answer your question, it should be done in the portal, and you should use IP restrictions.
A vendor has developed a website, which currently sits within an enterprise-managed Amazon Web Services (AWS) environment. However, the vendor owns the domain name of the website.
The site is an ecommerce platform which allows users to submit personal information for the purchase of insurance products.
I would like to know if it is technically possible for the domain owner to redirect form submissions to a different server (without the enterprise knowing about it). Thank you!
It's possible.
Since the vendor owns the domain name, they can easily deploy a proxy server and point the domain's DNS records at that proxy.
That way, the proxy can intercept all user requests and do whatever it wants before forwarding the requests to the real server (your server).
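Detection is possible, though. Since the enterprise knows the IP addresses of its own AWS endpoints, it can periodically resolve the vendor-owned domain and alert if it points somewhere unexpected. A sketch, where the expected addresses are placeholders:

```python
import socket

# Placeholder: addresses the enterprise's AWS endpoint is known to use.
EXPECTED_IPS = {"192.0.2.10", "192.0.2.11"}

def domain_points_at(domain, expected_ips=EXPECTED_IPS):
    """Resolve `domain` and check every returned address is an expected one."""
    try:
        infos = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False  # could not resolve at all
    addrs = {info[4][0] for info in infos}
    return bool(addrs) and addrs <= expected_ips
```

Run such a check from several networks, since a malicious vendor could serve different DNS answers to different resolvers; and note it won't catch a proxy that copies traffic while still forwarding it to the real server.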
Essentially, this is a Man-in-the-middle attack:
a man-in-the-middle attack (MITM) is an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other.
This question has come up at my job a few times, and I was hoping to get some community backing.
We are working on a Single Page WebApp, but require that the data coming from our services API be secure to prevent snooping of the data. We are also trying to iron out the prod environment details for where our SPA will be hosted. One of the options is using something like Amazon's S3, which doesn't support SSL (to my knowledge).
There's a group that believes the whole site needs to be hosted over SSL. However, it's my understanding that SSL will only protect the transmission of the data. So the point I'm trying to make is that hosting the services from an HTTPS site and the client code from non-SSL based URLs will be just as secure as hosting everything from an SSL site.
Could anyone clarify this for me?
Thanks in advance.
Yes, SSL only encrypts the transmission of the data; it does not offer any protection of the runtime environment for client-side code.
Now, it is generally considered a best practice to host everything over SSL, for these reasons:
Users can get mixed-content warnings that the site is transmitting data with an untrusted source when parts of a page are served over SSL and parts are not.
Any cookies not flagged as Secure will be sent in the clear when requesting the non-SSL files, and they may contain information that should be kept private.
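That second point is exactly what the `Secure` cookie attribute addresses: a cookie flagged Secure is only ever sent over HTTPS. A sketch using Python's standard library to build such a Set-Cookie header value:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["secure"] = True    # never sent over plain HTTP
cookie["session"]["httponly"] = True  # not readable from page JavaScript

# The value to emit in the Set-Cookie response header:
header_value = cookie["session"].OutputString()
print(header_value)
```

Of course, if the application pages themselves are served over plain HTTP, a Secure cookie can't be used for the session at all, which is one more argument for hosting everything over SSL.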
I have a website that came with an SSL site for HTTPS, but it's on a different server. For example:
my website:
http://example.com
my SSL site:
https://myhostingcompany.com/~myuseraccount/
So I can do transactions over HTTPS and we have user accounts and everything but it is located on a different domain. The cookie domain is set for that one.
Is there a way I can check on my actual site to see whether a cookie is set for the other one, and possibly grab its data and authenticate a user?
I think this violates a major principle of security and can't be done, for good reason. But am I wrong? Is this possible?
You can set up a service on either site to handle RPC via HTTP POST requests, and make it require some sort of session that can only be created by your sites. However, whatever is accessed over that shared session on the HTTPS site will have no guarantee of confidentiality or integrity, because the session was bootstrapped from the plain-HTTP site.