When multiple ARR servers are used as in the image below:
Will the ARR server honour client affinity requests, irrespective of which ARR server handles an incoming request? The client passes a cookie in the headers, but the value appears to be an encoded string. I'm not sure if ARR 2 will be able to interpret a value generated on ARR 1 and route the client request to the affinitized server...
Yes, once the client affinity cookie is created by an ARR server, the cookie will work on any other ARR server, provided the two (or more) ARR servers share the same Web Farm configuration.
Reference: I'm the current owner of ARR.
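To make that concrete: client affinity is part of the farm definition itself, so keeping the webFarms section identical on every ARR box is what makes the cookie portable. Below is a minimal sketch of that shared piece of applicationHost.config as I recall the schema; the farm name and server addresses are placeholders, ARRAffinity is the default cookie name, and the exact attribute names should be verified against your ARR version.

    <webFarms>
      <webFarm name="myFarm" enabled="true">
        <server address="10.0.0.11" enabled="true" />
        <server address="10.0.0.12" enabled="true" />
        <applicationRequestRouting>
          <!-- useCookie turns client affinity on; the cookie value encodes the
               chosen server, so any ARR box with this same farm definition can
               decode it and route back to the affinitized server -->
          <affinity useCookie="true" cookieName="ARRAffinity" />
          <loadBalancing algorithm="WeightedRoundRobin" />
        </applicationRequestRouting>
      </webFarm>
    </webFarms>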
I am using IIS as a reverse proxy (via ARR or Helicon ISAPI_Rewrite) for some backend services. Everything works fine, except that chunked responses from the service do not make it through IIS: a request to an endpoint that keeps the connection open and sends data back to the client in chunks never gets an answer. The service itself works fine without the reverse proxy, i.e. the client receives the updates. This is similar to the CouchDB /_changes endpoint in 'continuous' mode.
In Chrome, the connection is marked as "pending".
I tried disabling all caching in IIS, but with no success.
The whole setup essentially mimics sending SSE events; an actual SSE implementation does not work either, unsurprisingly.
Is there any way to fix that?
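Not a fix, but a quick way to narrow it down is to compare the backend and the proxy with an unbuffered client: if the backend streams but the proxy hangs, the response is being buffered or held somewhere in the IIS/ARR pipeline rather than by the service. The host names, port and CouchDB-style path below are placeholders for your own setup.

    # Directly against the backend: chunks should appear as soon as they are emitted
    curl -N "http://backend.internal:5984/mydb/_changes?feed=continuous"

    # Through the IIS/ARR reverse proxy: in the failing case this just hangs ("pending")
    curl -N "http://proxy.example.com/mydb/_changes?feed=continuous"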
I am trying to debug IIS + ARR in a reverse proxy scenario. I have a bunch of URL Rewrite rules that change the hostname of the incoming request to another hostname; the requests come in via HTTPS. I need to capture the headers of the outbound request made by the ARR reverse proxy to the rewritten host.
The flow is:
Client calls https://originalhostname.com/foo/bar.aspx
ARR receives the request and rewrites it to https://newhostname.com/foo/bar.aspx
After it hears back from newhostname.com, ARR returns the response to the client.
So I need to capture the request initiated by the ARR box to newhostname.com.
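For context, the kind of rewrite described above boils down to something like the following minimal rule; the real rule set is presumably more involved, and the host names here are just the placeholders from the flow above.

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Proxy to newhostname" stopProcessing="true">
            <match url="(.*)" />
            <!-- ARR forwards the rewritten request to the new host over HTTPS -->
            <action type="Rewrite" url="https://newhostname.com/{R:1}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>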
I set up Fiddler to intercept the outbound requests following this link.
Outbound requests are visible in Fiddler and can be decrypted, but they are not going to newhostname; instead they all target originalhostname.
I do notice that an HTTPS decryption tunnel is set up for newhostname, but then I see the following in the Fiddler log, and the subsequent requests all target originalhostname:
03:21:48:2877 Session #25 detaching ServerPipe. Had: 'direct->https/newhostname:443' but needs: 'direct->https/originalhostname:443'
What could be wrong? How can I debug this further?
I'm looking for some ideas...
I have a series of robust node.js apps that need to be delivered to specific users (post-authentication). There is virtually no file serving, only the initial delivery of the index; the rest of the communication is all done via socket.io.
ClientA (login) needs to be connected to an application on, let's say, :90001
ClientB (login) on :90002
ClientC (login) on :90003
*All HTTP/1.1 ws need to be secure
I have tried a few configurations:
stunnel/varnish/nginx
stunnel/haproxy
stunnel/nginx
I was thinking a good approach would be to use redis to store sessions and validate against a cookie; however, that would most likely mean exposing node.js itself on the frontend.
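For what it's worth, here is a minimal sketch of that "node on the frontend" idea using node-http-proxy (the 1.x API). The cookie name, port mapping and session check are illustrative assumptions only, and TLS termination is left out.

    // Hypothetical front piece: routes HTTP and WebSocket traffic to a per-user
    // backend chosen from a cookie set at login. Session validation (e.g. against
    // redis) is stubbed out as a simple lookup table.
    var http = require('http');
    var httpProxy = require('http-proxy');

    var proxy = httpProxy.createProxyServer({});
    proxy.on('error', function (err, req, res) {
      // keep a backend failure from crashing the front piece
      if (res && res.writeHead) { res.writeHead(502); res.end('backend error'); }
    });

    // Illustrative mapping of the cookie value to a backend app instance.
    var backends = {
      clientA: 'http://127.0.0.1:9001',
      clientB: 'http://127.0.0.1:9002',
      clientC: 'http://127.0.0.1:9003'
    };

    function pickBackend(req) {
      // Assumed cookie of the form "appId=clientA" set by the auth step.
      var match = /(?:^|;\s*)appId=([^;]+)/.exec(req.headers.cookie || '');
      return match ? backends[match[1]] : null;
    }

    var server = http.createServer(function (req, res) {
      var target = pickBackend(req);
      if (!target) {
        res.writeHead(403);
        return res.end('no valid session');
      }
      proxy.web(req, res, { target: target });
    });

    // socket.io upgrades arrive as HTTP Upgrade requests carrying the same cookie.
    server.on('upgrade', function (req, socket, head) {
      var target = pickBackend(req);
      if (!target) return socket.destroy();
      proxy.ws(req, socket, head, { target: target });
    });

    server.listen(8080);

The session lookup could just as well go to redis, as suggested above; the routing part stays the same.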
questions:
What are the risks in using node-http-proxy as the front piece?
Is this something that I should deem possible (to have one piece that "securely" redirects ws traffic and manages specific sessions to many independent/exclusive backends)?
I am aware that nginx 1.3 (in development) is to support ws; is this worth holding out for?
Has anyone had any thorough experience with yao's tcp_proxy module for nginx (reliability / scalability)?
I can't say I have done this before, but I can offer some ideas perhaps:
One node authentication server which takes the login details and sets a cookie specific to the server the user should connect to. It then redirects to the index page, at which point haproxy can direct the request based on the cookie. See this question: https://serverfault.com/questions/75385/is-there-a-way-to-configure-haproxy-to-send-traffic-based-on-a-cookie
Alternatively, you could have the above authentication on all servers instead of just one. Haproxy would have to be configured to balance across all nodes if there is no relevant cookie header. Each node would do the set-cookie + redirect and subsequent requests should end up on the specific node instance.
Btw, haproxy 1.5-dev now has built-in support for SSL, so there is no need for stunnel anymore.
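A rough sketch of what that cookie-based routing could look like in an haproxy 1.5-dev config; the cookie name, certificate path, backends and timeouts are made up for illustration, and TLS is terminated with the built-in SSL support mentioned above.

    defaults
        mode http
        timeout connect 5s
        timeout client  1h
        timeout server  1h
        timeout tunnel  1h

    frontend ws_front
        # TLS terminated here using the 1.5-dev built-in SSL support
        bind *:443 ssl crt /etc/haproxy/site.pem
        # Route on the cookie set by the authentication step
        acl is_app_a hdr_sub(cookie) SRV=appA
        acl is_app_b hdr_sub(cookie) SRV=appB
        use_backend app_a if is_app_a
        use_backend app_b if is_app_b
        default_backend auth

    backend app_a
        server node_a 127.0.0.1:9001

    backend app_b
        server node_b 127.0.0.1:9002

    backend auth
        # the node auth server that sets the SRV cookie and redirects
        server auth1 127.0.0.1:9000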
I have an IIS 7.5 server with ARR installed, configured as a reverse proxy to another server which is running IIS 7.
On this IIS 7.5 server I have ASP.NET 4 applications and simple websites installed.
Since configuring a farm on this IIS 7.5 server and running it as a reverse proxy, the local applications no longer run and fail with this error message:
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
Is it possible to run both the local applications and the routing (reverse proxy) on this IIS 7.5 server at the same time, or should I give up and move the applications to other servers?
Application Request Routing operates as a server-wide URL rewriter.
This means that it captures all traffic coming to a box.
You can still host an IIS website on the same box, but you need to make sure that ARR leaves the requests for this site alone.
I set this up so that the ARR rule, while still remaining a wildcard (*), has a match condition that leaves requests to my local site alone (see the example rule after the list below).
There are a number of conditions you can use to create a "does not match" rule.
I've used:
{HTTP_HOST} if you are just doing plain HTTP requests and only want certain host names to be left alone.
{SERVER_PORT} if you're hosting an SSL site and it is the only one on the box.
{LOCAL_ADDR} if your site sits on a dedicated IP address.
...and many more. Really, you just need to set up conditions that exclude your locally hosted website.
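For example, the server-level rule that ARR generates can stay a wildcard while a negated condition excludes the locally hosted site. The host name and farm name below are placeholders for illustration.

    <rule name="ARR_myFarm_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <conditions>
        <!-- negate="true" means: do NOT proxy requests whose Host header is the local site -->
        <add input="{HTTP_HOST}" pattern="local-site.example.com" negate="true" />
      </conditions>
      <!-- myFarm is the name of the ARR web farm -->
      <action type="Rewrite" url="http://myFarm/{R:0}" />
    </rule>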
We have a load-balanced website. It connects to 6 different servers. Is there any way (ping or otherwise) to determine from the client side which server the load balancer is passing the request to?
You could set a cookie that contains this information.
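If you can change the configuration of each backend IIS server, one simple variant of that idea is a per-server response header set in each server's web.config; the header name and value here are arbitrary placeholders.

    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <!-- give each of the six servers its own value, e.g. WEB01 .. WEB06 -->
          <add name="X-Served-By" value="WEB01" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>

The client can then inspect this header (or a cookie set the same way) in the response to see which box answered.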
After some research I've realized that you can get some information, such as the IIS version, by looking at the HTTP response headers, but you cannot tell from the client end which server the response originated from. Long story short, what is behind the load balancer is opaque to the user at the client end.