I'm using Fiddler to test some new code on our site. Testing this code requires that I block all traffic from my browser to a specific URL, because I need to see what happens if that request fails.
I can't figure out how to do this, though; all of the guides I've found only talk about blocking responses from a URL. I need the entire request to a specific URL to fail as if it had never been sent. How can I do this?
It's not entirely clear what you mean. If you use an AutoResponder rule with an action of *drop or *reset, Fiddler will kill the connection (with a FIN or a RST, respectively) after it reads the request.
Alternatively, you can fail matching requests from FiddlerScript's OnBeforeRequest handler:
// Break requests for URLs containing "somehost.com"
if (oSession.uriContains("somehost.com")) {
    oSession.oRequest.FailSession(404, "Blocked", "Fiddler blocked");
}
Using Express, I'm trying to add some headers to the redirect I'm returning.
However, everything I have tried only works for the response headers, not for the request headers of the redirected request. I.e., when inspecting it with the developer tools I can see the response headers, but when the next call is made I cannot see the request headers.
// Setting a header on the incoming request object (only affects this request):
req.headers['x-custom-header'] = 'value'
// Setting a header on the redirect response:
res.setHeader('x-custom-header', 'value')
res.redirect('example.com')
Could anybody explain how response and request headers work in Express?
A redirect just does a redirect. It tells the browser to go to that new location with standard, non-custom headers. You cannot set custom headers on the next request after the redirect. The browser simply doesn't do that.
The usual way to pass some kind of parameter through a redirect is to put it in the query string of the redirect URL or, in some cases, in a cookie. In either case, the data will be available to your server when the browser sends the request for the redirected URL.
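For example, here is a minimal sketch of the query-string approach (the /start and /landing routes and the port are made up for illustration):
const express = require('express');
const app = express();
app.get('/start', (req, res) => {
  // Pass the data in the query string of the redirect URL...
  res.redirect('/landing?source=start');
  // ...or put it in a cookie before redirecting:
  // res.cookie('source', 'start'); res.redirect('/landing');
});
app.get('/landing', (req, res) => {
  // The browser sends the query string (and any cookies) with the redirected request.
  res.send('Arrived with source=' + req.query.source);
});
app.listen(3000);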
It may also be worth revisiting why you're redirecting at all; perhaps there's a different flow of data/URLs that doesn't need a redirect. We'd have to know a lot more about what this operation is actually trying to accomplish to make suggestions there.
If your request is being processed by an Ajax call, then you can program the code that receives the result of the Ajax call to do anything you want (including adding custom headers to a follow-up request), but if it's the browser processing the redirect and changing the page URL to load a new page, it won't pay any attention to custom headers on the redirect response.
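If the call is made from script, one hedged alternative is to skip the HTTP redirect entirely: have the server return the next URL in the response body and let the client issue the second request itself with whatever headers it wants (the /step1 and /step2 URLs below are placeholders):
async function followApplicationRedirect() {
  // The server answers /step1 with e.g. { "nextUrl": "/step2" } instead of a 302,
  // so this code controls the follow-up request and its headers.
  const first = await fetch('/step1');
  const { nextUrl } = await first.json();
  return fetch(nextUrl, {
    headers: { 'x-custom-header': 'value' }   // now it really is a request header
  });
}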
Can anybody explain how the response and request headers work in Express?
Express is doing exactly what you told it to do. It's attaching the custom headers to the response that goes back to the browser. It's the browser that does not attach those same headers to the next request to the redirected URL. So, this isn't an Express thing, it's a browser thing.
We have developed a corporate Node.js application served over HTTP/2, and we need to identify clients by their IP address because the server needs to send events to clients based on their IP (basically some data about phone calls).
I can successfully get the client IP through req.connection.remoteAddress, but some of the clients can only reach the server through our proxy server.
I know about the x-forwarded-for header, but that doesn't work for us because the proxies can't modify HTTP headers inside SSL connections.
So I thought I could get the IP on the client side and send it back to the server, for example during the login process.
But, if I'm not wrong, browsers don't expose that information to JavaScript, so we need a way to obtain it first.
Researching it, the only option I found is to obtain the IP from a server that can tell me the address it sees my request coming from.
Of course, over HTTPS I can't, because of the proxy. But I can easily stand up a plain HTTP service just to report the client IP.
Then I found out that browsers block HTTP requests from HTTPS-served pages because of the "mixed active content" rules.
Reading about it, I learned that "mixed passive content" is allowed, and I did succeed in downloading garbage data as an image file through an <img> tag. But when I try to do the same thing with an <object> element, I get the "mixed active content" block again, even though the MDN documentation says <object> is considered passive.
Is there any way to read that data from the (broken) <img> tag, or am I missing something needed to make the <object> element really passive?
Any other ideas for achieving our goal are also welcome.
Finally I found a solution:
As I said, I was able to make an HTTP request by inserting an <img> tag.
What I was unable to do was read the downloaded data, regardless of whether it was an actual image or not.
...but the fact is that the request was made, and the URL it is made to is something I can decide beforehand.
So all I need to do is generate a random key for each served login screen and:
Remember it in association with your session data.
Insert a (possibly hidden) <img> tag pointing to some HTTP URL containing that key.
As soon as your HTTP server receives the request to download that image, you can read the real IP from the x-forwarded-for header (trusting your proxy, of course) and resolve which active session it belongs to, as sketched below.
Of course, you must also take care to expire keys, whether used or not, after a short time to avoid leaking memory or letting them be reused with malicious intent.
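A minimal sketch of the plain-HTTP side of this idea (the /beacon path, key format and in-memory store are assumptions for illustration, not the exact code we run):
// Plain-HTTP beacon server sketched with Node's http module.
const http = require('http');
// key -> session record; entries are added when the login screen is served
// and should be expired after a short while, as noted above.
const pendingKeys = new Map();
http.createServer((req, res) => {
  // The login page contains something like:
  //   <img src="http://thisserver:8080/beacon/KEY.gif" style="display:none">
  const match = /^\/beacon\/([0-9a-f]+)\.gif$/.exec(req.url);
  if (match && pendingKeys.has(match[1])) {
    // Behind the proxy the real client address arrives in x-forwarded-for;
    // fall back to the socket address for direct connections.
    const ip = (req.headers['x-forwarded-for'] || '').split(',')[0].trim()
      || req.socket.remoteAddress;
    pendingKeys.get(match[1]).ip = ip;   // associate the IP with the waiting session
    pendingKeys.delete(match[1]);        // keys are single use
  }
  // Always answer something image-like so the <img> tag is satisfied.
  res.writeHead(200, { 'Content-Type': 'image/gif' });
  res.end();
}).listen(8080);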
FINAL NOTE: The only drawback of this approach is the risk that, some day, browsers could start blocking mixed passive content by default too.
For this reason I in fact opted for a two-pronged approach: in addition to the technique explained above, I also implemented an HTTP redirector that does almost the same thing. It redirects all requests for the root route ("/") to our HTTPS app, but it does so with a POST request containing a key that has previously been associated with the client's real IP.
This way, if the first approach ever stops working, users will still be able to enter the site over HTTP first, which is in fact what we are going to do anyway. But the first approach, as long as it keeps working, avoids problems if users bookmark the page from within the app (which results in a bookmark to its HTTPS URL).
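A rough sketch of that redirector (the host name, key store and port are placeholders):
// Plain-HTTP redirector sketched with Node's http module.
const http = require('http');
const crypto = require('crypto');
// key -> real client IP; in practice shared with (or queried by) the HTTPS app.
const keyToIp = new Map();
http.createServer((req, res) => {
  const key = crypto.randomBytes(16).toString('hex');
  const ip = (req.headers['x-forwarded-for'] || '').split(',')[0].trim()
    || req.socket.remoteAddress;
  keyToIp.set(key, ip);   // remember the real IP; expire unused keys after a short time
  // Answer with a tiny page that immediately POSTs the key to the HTTPS app.
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(
    '<form method="POST" action="https://app.example.com/">' +
    '<input type="hidden" name="key" value="' + key + '">' +
    '</form><script>document.forms[0].submit()</script>'
  );
}).listen(80);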
I just could not get the http-proxy module to work properly as a forward proxy (it works great as a reverse proxy), so I have implemented a Node-based forward proxy using Node's http and net modules. It works fine with both http and https; I will deal with websockets later. Among other things, I want to log the URLs visited or requested through a browser. In the request object I do get the URL, but, as expected, when a page loads a zillion other requests are triggered, including AJAX calls, third-party ads, etc. I do not want to log those.
I know that I can distinguish an AJAX request by the x-requested-with header. I can distinguish requests coming from a browser by examining the user-agent header (though these can be spoofed through cURL). I want to minimize the log entries.
How do commercial proxies log such info? Or do they just log every request? One option would be to skip logging any requests made within a certain time after the main request, on the presumption that they are all associated with that main request, but that would not be technically accurate.
I have researched in this area but did not find any solution. I am not looking for any specific code, just some direction...
No one can know that with precision, but you can look for clues such as the HTTP Referer header, x-requested-with, or custom headers you add to each Ajax request (the Squid proxy, for example, adds an X-Forwarded-For header by default, which reveals that it is a proxy). However, anybody can figure out which headers your requests send, or simply copy all the headers a common browser sends by default, and you will believe it is a person using a browser when it could be a cURL call sent by a bot.
So, really, you can't know for certain whether a request is, for example, an Ajax request, because none of these headers are mandatory; your browser or framework just adds x-requested-with or other useful information by default to help you "guess" who is performing the request.
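As a rough illustration, a heuristic along these lines is about the best a proxy can do, and every check in it can be spoofed (the function name is made up; the headers are the common conventions mentioned above):
// Heuristic guess inside the proxy: "does this look like a top-level page request?"
// None of these signals is reliable; the client controls every header.
function looksLikeMainPageRequest(req) {
  const h = req.headers;
  if (h['x-requested-with'] === 'XMLHttpRequest') return false; // conventional AJAX marker
  if (h['referer']) return false;            // sub-resources and ads usually carry a Referer
  const accept = h['accept'] || '';
  return accept.includes('text/html');       // top-level navigations usually ask for HTML
}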
I'm working on a Chrome app and finally got to the point of issuing a PUT to the Node.js server. My GET logic works fine. My PUT, however, gets hijacked into an OPTIONS request. My requests are made to
http://localhost:4000/whatever
I read about the OPTIONS pass asking permission to do the PUT. I was under the impression that browsers issue OPTIONS when a CORS request is made, but didn't realize that a Chrome app would also do this for me.
Is the app doing this because I didn't and I'm supposed to, or is it standard procedure that Chrome issues the OPTIONS request and I just issue the PUT that triggers it?
My PUT never makes it to the server. I've tried issuing my own OPTIONS just ahead of my PUT, but so far nothing works. The OPTIONS request makes it to the server (the default one or mine), but that's the end of the conversation.
At the server, all I'm doing to satisfy the OPTIONS request is as follows:
case 'OPTIONS':
    res.writeHead(200, {
        'Access-Control-Allow-Methods': 'OPTIONS, TRACE, GET, HEAD, POST, PUT',
        'Access-Control-Allow-Origin': '*'
    });
    break;
When I try issuing my own OPTIONS & PUT requests, I'm doing them with separate XMLHttpRequest objects. I don't see where the permission hand off from OPTIONS to PUT is made.
This is called "preflighting", and browsers MUST preflight cross-origin requests if they fit specific criteria. For example, if the request method is anything other than GET or POST, the browser must preflight the request. You will need to handle these OPTIONS (preflight) requests properly in your server.
Presumably, your page is hosted on a port other than 4000, so the call to port 4000 is considered cross-origin (in all browsers other than IE). Don't issue the OPTIONS request yourself; Chrome will preflight your request for you, and your server must respond appropriately. The browser handles the response to that OPTIONS request and then sends the PUT as expected, provided your server handled the OPTIONS request properly.
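For example, here is a minimal sketch of handling the preflight with Node's http module (adjust the allowed methods and headers to whatever your PUT actually sends; the Allow-Headers value is an assumption):
const http = require('http');
http.createServer((req, res) => {
  if (req.method === 'OPTIONS') {
    // Answer the preflight and actually finish the response;
    // the browser then sends the real PUT on its own.
    res.writeHead(204, {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, PUT, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type'
    });
    res.end();
    return;
  }
  // Normal GET/PUT handling goes here; these responses still need the CORS header.
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' });
  res.end('ok');
}).listen(4000);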
There is an excellent article on Mozilla Developer Network that covers all things CORS. If you plan on working in any cross-origin environment, you should read this article. It will provide you with most of the knowledge necessary to understand the concepts required to properly deal with this type of an environment.
XMLHttpRequests require CORS to work cross-domain. Similarly for web fonts, WebGL textures, and a few other things. In general all new APIs seem to have this restriction.
Why?
It's so easy to circumvent: all it takes is a simple server-side proxy. In other words, server-side code isn't prohibited from doing cross-domain requests; why is client-side code? How does this give any security, to anyone?
And it's so inconsistent: I can't XMLHttpRequest, but I can <script src> or <link rel> or <img src> or <iframe>. What does restricting XHR etc. even accomplish?
If I visit a malicious website, I want to be sure that:
It cannot read my personal data from other websites I use. Think attacker.com reading gmail.com
It cannot perform actions on my behalf on other websites that I use. Think attacker.com transferring funds from my account on bank.com
Same Origin Policy solves the first problem. The second problem is called cross site request forgery, and cannot be solved with the cross-domain restrictions currently in place.
The same origin policy is in general consistent with the following rules:
Rule 1: Doesn't let you read anything from a different domain
Rule 2: Lets you write whatever you want to a different domain, but rule #1 will not allow you to read the response.
Rule 3: You can freely make cross-domain GET requests and POST requests, but you cannot control the HTTP headers
Let's see how the various things you have listed line up with the above rules:
<img> tags let you make a HTTP request, but there is no way to read the contents of the image other than simply displaying it. For example, if I do this <img src="http://bank.com/get/latest/funds"/>, the request will go through (rule 2). But there is no way for the attacker to see my balance (rule 1).
<script> tags work mostly like <img>. If you do something like <script src="http://bank.com/get/latest/funds">, the request will go through. The browser will also try to parse the response as JavaScript, and will fail.
There is a well known abuse of <script> tags called JSONP, where you collude with the cross-domain server so that you can 'read' cross-domain. But without the explicit involvement of the cross-domain server, you cannot read the response via the <script> tag.
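A hedged sketch of that collusion (the endpoint and callback parameter are made up):
// JSONP: the page defines a global callback and loads a cross-domain <script>.
// This only "reads" data because the remote server agrees to wrap it in the callback.
function handleFunds(data) {
  console.log(data.balance);
}
const s = document.createElement('script');
s.src = 'https://api.example.com/funds?callback=handleFunds';
document.head.appendChild(s);
// The server is expected to respond with JavaScript such as: handleFunds({"balance": 42});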
<link> for stylesheets work mostly like <script> tags, except the response is evaluated as CSS. In general, you cannot read the response - unless the response somehow happens to be well-formed CSS.
<iframe> is essentially a new browser window. You cannot read the HTML of a cross-domain iframe. Incidentally, you can change the URL of a cross-domain iframe, but you cannot read the URL. Notice how it follows the first two rules I mentioned above.
XMLHttpRequest is the most versatile method to make HTTP requests. This is completely in the developers control; the browser does not do anything with the response. For example, in the case of <img>, <script> or <link>, the browser assumes a particular format and in general will validate it appropriately. But in XHR, there is no prescribed response format. So, browsers enforce the same origin policy and prevent you from reading the response unless the cross domain website explicitly allows you.
Fonts via font-face are an anomaly. AFAIK, only Firefox requires the opt-in behavior; other browsers let you use fonts just like you would use images.
In short, the same origin policy is consistent. If you find a way to make a cross-domain request and read the response without explicit permission from the cross-domain website - you'll make headlines all over the world.
EDIT: Why can't I just get around all of this with a server-side proxy?
For gmail to show personalized data, it needs cookies from your browser. Some sites use HTTP basic authentication, in which the credentials are stored in the browser.
A server-side proxy cannot get access to either the cookies or the basic auth credentials. And so, even though it can make a request, the server will not return user specific data.
Consider this scenario...
You go to my malicious website.
My site makes an XHR to your banking website and requests the form for bank transfer.
The XHR reads the token that prevents CSRF and POSTs the form alongside the security token and transfers a sum of money to my account.
Profit!!!
With the Same Origin Policy in place, you could still POST that form, but you wouldn't be able to read the response containing the CSRF token, so the forged transfer would fail.
Server side code does not run on the client's computer.
The main issue with XHR is that it does not just send a request: you are also able to read the response. Sending almost arbitrary requests was already possible; reading their responses was not. That's why the original XHR did not allow any cross-origin requests at all.
Later, when the demand for cross-origin requests with XHR grew, CORS was established to allow them under specific conditions. One condition is that particular request methods and request header fields require a so-called preflight request, with which the client can check whether the server would allow the actual request; requests that carry user credentials additionally require an explicit opt-in from the server. This gives the server the ability to restrict access to specific origins, since otherwise any origin could send requests.
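For instance, a cross-origin XMLHttpRequest like the sketch below (URL and header name are illustrative) forces the browser to send an OPTIONS preflight before the PUT, because of both the method and the custom header:
// The browser first sends OPTIONS with Origin, Access-Control-Request-Method: PUT
// and Access-Control-Request-Headers: x-custom-header; only if the server's answer
// allows them does the PUT below actually go out.
const xhr = new XMLHttpRequest();
xhr.open('PUT', 'https://api.example.com/items/1');   // non-simple method -> preflight
xhr.setRequestHeader('X-Custom-Header', 'value');     // non-simple header -> preflight
xhr.send();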