To work around the same-origin policy we can use either Cross-Origin Resource Sharing (CORS) or JSONP.
In the case of CORS, we may not have access to the server, so many people suggest going with JSONP.
But even with JSONP, we can only evaluate the response if the server sends it back as proper JavaScript.
For example, with the callback function name appended, the response might look like this: "parseJson(data)"
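To make that concrete, here is a minimal sketch of what a JSONP-capable server does (the endpoint and callback names below are illustrative, not from a real API):

```javascript
// Server side: wrap the JSON payload in the callback name the client asked for,
// so the response body is executable JavaScript rather than plain JSON.
function wrapJsonp(callbackName, data) {
  return callbackName + "(" + JSON.stringify(data) + ");";
}

// Client side: the browser loads the endpoint via a <script> tag, e.g.
//   <script src="https://api.example.com/data?callback=parseJson"></script>
// and evaluating the response body then invokes the global parseJson function.
const responseBody = wrapJsonp("parseJson", { id: 42 });
// responseBody is now: parseJson({"id":42});
```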
Now, my questions are:
How can we ensure that the server will give a proper response (valid JavaScript)?
Assuming we don't have access to the server, what can we do in that case?
Should we have something like a proxy in between?
Thanks.
Yes, a proxy server is the best route when you don't have access to the server. You are right that both CORS and JSONP need some special formatting of the response on the server side.
I've read an article which used Cors-Anywhere to make an example url request, and it made me think about how easily the Same Origin Policy can be bypassed.
While the browser prevents you from accessing the error directly, and cancels the request altogether when it doesn't pass a preflight check, a simple Node server does not need to abide by such rules and can be used as a proxy.
All you need to do is prepend 'https://cors-anywhere.herokuapp.com/' to the requested URL in the malicious script and voilà, you don't need to pass CORS.
And as sideshowbarker pointed out, it takes a couple of minutes to deploy your own Cors-Anywhere server.
Doesn't that make the SOP pretty much pointless as a security measure?
The purpose of the SOP is to segregate data stored in browsers by their origin. If you got a cookie from domain1.tld (or it stored data for you in a browser store), Javascript on domain2.tld will not be able to gain access. This cannot be circumvented by any server-side component, because that component will still not have access in any way. If there was no SOP, a malicious site could just read any data stored by other websites in your browsers.
Now this is also related to CORS, as you somewhat correctly pointed out. Normally, your browser will not receive the response from a javascript request made to a different origin than the page origin it's running on. The purpose of this is that if it worked, you could gain information from sites where the user is logged in. If you send it through Cors-Anywhere though, you will not be able to send the user's session cookie for the other site, because you still don't have access, the request goes to your own server as the proxy.
Where Cors-Anywhere matters is unauthenticated APIs. Some APIs might check the origin header and only respond to their own client domain. In that case, sure, Cors-Anywhere can add or change CORS headers so that you can query it from your own hosted client. But the purpose of SOP is not to prevent this, and even in this case, it would be a lot easier for the API owner to blacklist or throttle your requests, because they are all proxied by your server.
So in short, SOP and CORS are not access control mechanisms in the sense I think you meant. Their purpose is to prevent, and/or securely allow, cross-origin requests to certain resources, but they are not meant to, for example, prevent server-side components from making requests, or to authenticate your client-side JavaScript itself (which is not technically possible).
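To illustrate why the proxied request carries no user credentials, here is a rough sketch of the header handling such a proxy might do (function and field names are illustrative):

```javascript
// The browser only attaches cookies for the *proxy's* origin, not the target
// site's, so the target's session cookie is simply absent from what the proxy
// receives. This sketch builds the headers for the outgoing proxied request.
function headersForProxiedRequest(clientHeaders) {
  const forwarded = Object.assign({}, clientHeaders);
  // Drop credentials and identity headers that belong to the proxy hop.
  delete forwarded["cookie"];
  delete forwarded["origin"]; // a proxy like Cors-Anywhere may rewrite or drop this
  return forwarded;
}
```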
Lets say you have an API that is primarily consumed by browsers from other origins.
Each customer has their own subdomain on the API, e.g. customer.api.service.com.
The service allows the customer to define which origins should be allowed to perform CORS-requests.
When a browser with an allowed origin performs a request, the server responds with the expected Access-Control-Allow-Origin header set to the same value as the Origin request header.
When a browser performs a request from an origin that is NOT allowed, the common way to handle this is to respond to the request with a 403 without specifying the Access-Control-Allow-Origin header, which will cause the browser to trigger an error on the request. The browser does not, however, expose any information that the error was caused by missing CORS-headers (although it logs a helpful error in the console, usually).
This makes it hard to programmatically show a helpful "This origin is not allowed, please configure."-message, since there doesn't seem to be a good way to reliably decide whether the error was caused by a Wi-Fi glitch, a network error, or an invalid/missing CORS configuration.
My question is: when the server detects an origin that should not be allowed, instead of responding with no CORS headers, could it respond with a 403 and include CORS headers to allow the browser to read the error?
Since every request goes through this process on the API, I'm thinking this should be safe, but I might be overlooking something. Thoughts?
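For illustration, the proposed behaviour might look like this (the handler shape, field names, and message are made up, not tied to any framework):

```javascript
// Sketch: on a disallowed origin, still echo Access-Control-Allow-Origin so
// the browser lets the page read the 403 body and show a useful message.
function respondTo(originHeader, allowedOrigins) {
  if (allowedOrigins.includes(originHeader)) {
    return { status: 200, headers: { "Access-Control-Allow-Origin": originHeader } };
  }
  return {
    status: 403,
    // Echoing the origin here is the whole point of the question: it makes
    // the error readable from JavaScript instead of an opaque network error.
    headers: { "Access-Control-Allow-Origin": originHeader },
    body: JSON.stringify({ error: "This origin is not allowed, please configure." }),
  };
}
```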
Suppose I have a client/server application working over HTTP. The server provides a RESTy API and the client calls the server over HTTP using regular GET requests.
The server requires no authentication; anyone on the Internet can send a GET request to my server. That's OK. I just wonder how I can distinguish the requests coming from my client from other requests from the Internet.
Suppose my client sent a request X. A user recorded this request (including the agent, headers, cookies, etc.) and sent it again, with wget for example. I would like to distinguish between these two requests on the server side.
There is no exact solution other than authentication. On the other hand, you do not need to implement username & password authentication for this basic requirement. You could simply define a random string for your "client" and send it to the API in a custom HTTP header, like:
GET /api/ HTTP/1.1
Host: www.backend.com
My-Custom-Token-Dude: a717sfa618e89a7a7d17dgasad
...
You could distinguish the requests by this custom header and its value's existence and validity. But keep in mind that "security through obscurity" is not a real solution.
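On the server side, the check could be as simple as this sketch (the header name and token value are copied from the example above; a real deployment would keep the token out of source code):

```javascript
// The shared secret your client sends in its custom header.
const EXPECTED_TOKEN = "a717sfa618e89a7a7d17dgasad";

// Node's http server lower-cases incoming header names, hence the lookup key.
function isFromOurClient(headers) {
  return headers["my-custom-token-dude"] === EXPECTED_TOKEN;
}
```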
You cannot know for sure whether it is your application or not. Anything in the request can be made up.
But you can make it harder for somebody else to use your API from their own application. For example, somebody may create a JavaScript application and point it at your REST API. The browser sends the Origin header (draft) indicating which application generated the request. You can use this header to filter out calls from applications that are not yours.
However, that somebody may use his own web server as a proxy to your application, allowing him to craft HTTP requests in full detail. In that case, at some point you would be able to pinpoint his IP address and block it.
But the best solution would be to add some degree of authorization. For example, the UI part can ask for authentication via login/password, or just a captcha to ensure the caller is a person, then generate a token and associate that token with the user session. From that point on, calls to the API have to provide that token, otherwise you reject them.
I'm afraid I am fairly new to Varnish, but I have a problem which I cannot find a solution to anywhere (yet): Varnish is set up to cache GET requests. We have some requests with so many parameters that we decided to pass them in the body of the request. This works fine when we bypass Varnish, but when we go through Varnish (for caching), the request is passed on without the body, so the service behind Varnish fails.
I know we could use POST, but we want to GET data. I also know that Varnish CAN pass the request body on if we use pass mode but as far as I can see, requests made in pass mode aren't cached. I've already put a hash into the url so that when things work, we will actually get the correct data from cache (as far as the url goes the calls would otherwise all look to be the same).
The problem now is "just" how to rewrite vcl_fetch to pass the request body on to the web server. Any hints and tips welcome!
Thanks in advance
Jon
I don't think you can, but even if you could, it would be very dangerous: Varnish won't store the request body in the cache or hash table, so it won't be able to see any difference between two requests with the same URI but different bodies.
I haven't heard of a VCL variable to read the request body but, if one exists, you could pass it to req.hash to differentiate the requests.
Anyway, a request body should only be used with POST or PUT... and POST/PUT requests should not be cached.
A request body is supposed to send data to the server; a cache is used to get data...
I don't know the details, but I think there's a design issue in your process...
I am not sure I got your question right, but if you are trying to interact with the request body in some way, that is not possible with VCL; there is no VCL variable/subroutine for it.
You can find the list of variables available in VCL here (or in man vcl):
https://github.com/varnish/Varnish-Cache/blob/master/lib/libvcl/generate.py#L105
I agree with Gauthier, you seem to have a design issue in your system.
'Hope that helps.
Let's say I make a call to an HTTPS server and the server sends me back an HTTP response. Is there a way I can change the HTTP response on the client side (i.e., with JavaScript, etc.)?
Thanks.
--- UPDATE ----
Well, for the HTTP request, say some JavaScript makes an Ajax call with query=123456. Of course, I can intercept it and change query=123456 before it is sent out (if I want to hack).
But when the HTTP response comes back, is it possible to intercept the data and change it before it reaches the browser, assuming HTTPS is used?
--- More ---
The actual program I am writing requires the data from the server to be secured. Because the JavaScript code will be public (and thus anyone can inject it into their own page), I have to make sure the response data my server sends is the same as what the JavaScript side receives.
Sorry for the initial question not being clear. :)
The best you can do is make sure the data sent from the server is correct. That's all; on the client side, all bets are off by definition. If the connection to the server is SSL-secured, it's harder for anybody to mess with the data, but far from impossible. One of the advantages of an HTTPS connection is that the identity of the server is confirmed, which is displayed to the user in the form of a security lock or a green address bar or whatnot. Conversely, when a certificate is invalid, the browser will complain to the user about it. It's completely up to the user to notice or disregard all that, though.
JavaScript can be manipulated on the client or by a man-in-the-middle attack between your server and the client, and data can be manipulated the same way; there is no guarantee for anything on the client side. That is why the client should never be trusted to do anything of importance: the server needs to have the last say in everything. SSL can help indicate to the user whether a connection is trusted or not, but it's no guarantee.
You can create a proxy and have your traffic go through it. The proxy would have to, using the proper certificate, "decrypt" the traffic and then "encrypt" it and send it on its way. But why would you want to? This sounds malicious.
I don't see what good changing data going to the browser would do unless you're trying to fool the user.
Try playing around with Fiddler for a bit.