Target specific Azure Web App Instance with Header instead of a Cookie - azure

I have an architecture where I have multiple instances but I want to maximize cache hits.
Users are defined in groups and I want to make sure that all users that belong to the same group hit the same server as much as possible.
The application is fully stateless, but having users from the same group hit the same server dramatically improves performance and reduces memory load across instances.
When loading the main page, I already know which server I would like to send this user's subsequent XHR calls to.
Using the ARRAffinity cookie is almost impossible in this scenario (cross-domain requests, having to make a server call first, etc.), and I would strongly prefer to send the hint myself through a custom header.
I have tried some manual workarounds that delete and reassign the cookies, but it feels very hacky, I haven't gotten it fully working yet, and it doesn't work for XHR calls.
Question:
Is it possible to direct traffic to a specific instance through a header, URL, or domain instead of a cookie?
Notes
A distributed cache does not work for me in this case. I need the performance of an in-memory cache, without extra network hops and serialization/deserialization.
This seems to be possible with Application Gateway, but it seems to need a lot of extra infrastructure and moving parts, while all my problems would be solved by sending the "right" header.
I could fix this by duplicating the web app in its entirety and giving each copy a different hostname, but this also feels like adding a lot of extra moving parts that can break. Maintenance becomes harder and more confusing, I lose autoscaling, etc.
Maybe this could be fixed easily with a Kubernetes/Docker Swarm type of architecture (I have no experience with either), but as this is a large legacy project and I'm on a pretty strict deadline, I am very cautious about making such a dramatic switch at the last minute.

If I understand correctly, you want to set a custom header in the client application and, based on that header, proxy the connection to a specific backend server.
I like to use HAProxy for this; you can also look into nginx for it.
You can install HAProxy on Linux from your distribution's package manager, or use the available HAProxy Docker container.
An example of installing it on ArchLinux:
sudo pacman -S haproxy
sudo systemctl start haproxy
Once it's installed, find out where your haproxy.cfg config file is located and replace the existing default config with the config snippet posted below.
In my case haproxy.cfg was in /etc/haproxy/haproxy.cfg
To achieve what you want in HAProxy, you would have all clients communicate with this main HAProxy server, which then forwards each connection to one of your backend servers based on the value of a custom header set on the client side, for example "x-mycustom-header: Server-one". As a bonus, you can also enable sticky sessions on HAProxy if needed, but that is not required to achieve what you are looking for.
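For instance, on the client side the hint could be attached to every XHR/fetch call along these lines (a minimal sketch; the header name matches the config below, and the API URL is just a placeholder):

// client side: attach the routing hint to each request
fetch('https://api.example.com/data', {
  headers: { 'x-mycustom-header': 'Server-one' }
});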
Here is a simple example setup for the HAProxy config file (haproxy.cfg) with only 2 backend servers, but you can add more.
The logic here is that all the clients make HTTP requests to the HAProxy server listening on port 80; HAProxy then checks the value of the custom header called 'x-mycustom-header' that the clients added and, based on that value, forwards the request to either backend_server_one or backend_server_two.
For testing purposes both HAProxy and the two backends are on the same box but listening on different ports. HAProxy on port 80, server1 is on 127.0.0.1:3000 and server2 is on 127.0.0.1:4000.
cat haproxy.cfg
#---------------------------------------------------------------------
# Example configuration. See the full configuration manual online.
#
# http://www.haproxy.org/download/1.7/doc/configuration.txt
#
#---------------------------------------------------------------------
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

frontend main
    bind :80
    mode http
    log global
    option httplog
    option dontlognull
    option forwardfor except 127.0.0.0/8
    maxconn 8000
    timeout client 30s
    # route on the value of the custom header; -i makes the match case-insensitive
    use_backend backend_server_one if { req.hdr(x-mycustom-header) -i server-one }
    use_backend backend_server_two if { req.hdr(x-mycustom-header) -i server-two }
    # when the header is missing or has any other value, default to the first backend
    default_backend backend_server_one

backend backend_server_one
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 5s
    server static 127.0.0.1:3000   # change this to your 1st backend's address

backend backend_server_two
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    server static 127.0.0.1:4000   # change this to your 2nd backend's address
To test that this works, you can open two netcat listeners, one on port 3000 and the other on port 4000, running in different screens or different SSH sessions.
nc -l 3000 # in the first screen
nc -l 4000 # in a second screen
Then, after running sudo systemctl reload haproxy so that HAProxy picks up your new config file, make an HTTP GET request on port 80 and provide the "x-mycustom-header: Server-one" header.
You will be able to see the request in the output of the netcat instance that is listening on port 3000.
Now change the header to "x-mycustom-header: Server-two" and make a second GET request; you will see that this request reaches the second netcat instance, listening on port 4000, which confirms that the routing works.
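For example, assuming HAProxy is reachable on 127.0.0.1:80, the two test requests could be made with curl like this (netcat won't send a response back, so curl will hang after the request arrives; just Ctrl-C it):

curl -H "x-mycustom-header: Server-one" http://127.0.0.1/   # shows up on the port 3000 listener
curl -H "x-mycustom-header: Server-two" http://127.0.0.1/   # shows up on the port 4000 listener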
Tested on ArchLinux

The Microsoft team has responded and confirmed that this is not possible at the moment.

Related

Is there a way to "host" an existing web service on port X as a network path of another web service on port 80?

What I'm trying to do is create an access website for my own services that run on my linux server at home.
The services I'm using are accessible through <my_domain>:<respective_port_num>.
For example, there's a Plex instance listening on port X, transmission-remote (a torrenting client) listening on port Y, and another custom processing service on port Z.
I've created a simple website using Python Flask, which I can access remotely, that redirects paths to ports (so <my_domain>/plex turns into <my_domain>:X). Is there a way to serve these services on the network paths I've assigned to them so I don't need to open a port for each service? I want to be able to channel an existing service on :X to <my_domain>/plex without having to modify it; I'm sure it's possible.
I have a bit of a hard time understanding your question.
You can certainly use e.g. nginx as a reverse proxy in front of your web application: have it listen on any port and then proxy requests to the upstream application on any other port, e.g. your Flask application.
Let's say my domain is example.com.
I can then configure e.g. nginx to listen on port 80 (and 443 for SSL) and proxy all requests to e.g. port 8000, where Flask is running locally.
Yes, this is called using nginx as a reverse proxy. It is well documented on the internet and even in the official docs. Your nginx.conf would have something like:
location /my/flask/app/ {
    # Assuming your flask app is at localhost:8000
    proxy_pass http://localhost:8000;
}
From the user's perspective, they will be connecting to your.nginx.server.com/my/flask/app/. But behind the scenes, nginx will forward the request to your app and serve its response back to the user.
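A slightly fuller sketch of such a server block (assuming nginx listens on port 80 and the Flask app runs on localhost:8000; the proxy_set_header lines are optional but commonly added so the app sees the original host and client IP):

server {
    listen 80;
    server_name your.nginx.server.com;

    location /my/flask/app/ {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}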
You can deploy nginx as a Docker container, I recommend doing this as it will keep the local files and configs separate from your own work and make it easier for you to fiddle with it as you learn. Keep in mind that nginx is only HTTP though. You can't use it to proxy things like SSH or arbitrary protocols (not without a lot of hassle anyway). If the services generate their own URLs, you might also need to configure them to anticipate the nginx redirects.
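A minimal sketch of running it with the official image (the host config path is an assumption):

docker run -d --name reverse-proxy -p 80:80 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx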
BTW, Flask is usually not served directly to the internet; instead, nginx talks to something like Gunicorn, which handles various network-related concerns: https://vsupalov.com/what-is-gunicorn/
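A minimal sketch of that layout, assuming the Flask application object is called app in app.py (names are placeholders):

# Gunicorn serves the Flask app on a local port; nginx proxies public traffic to it
gunicorn --bind 127.0.0.1:8000 --workers 4 app:app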

How to create a secure internal route in the CloudFoundry environment (Swisscom AppCloud)

I would like to create a secure internal route between two applications within the same space/organization. It should never be possible to reach the Node.js application from the outside. My Java application connects via HTTP to the Node application (running on express).
I have now tried to set up the desired configuration by creating a route called example-route.apps.internal and assigning it to the Node application. As a next step, I opened the port (I've tried 443, 80, and 8080) in the network configuration of the Java application (with the destination being the Node app). I restaged both applications.
Then, I opened a Java connection to the link http://example-route.apps.internal/test123. I've also tried to use https. The result was the same. Java refused to connect to this URL.
Now, the following questions:
How can I properly set up this communication? Should I resolve this internal DNS somehow? Which port is the correct one if I just use the port of the env variable? How should I read this port from the other application?
How secure is the communication if HTTP is used instead of HTTPS? (I assume HTTPS is not possible internally.) Is it as safe as an HTTPS connection from the outside? Which devices are in between, and how far out does the connection go?
Thank you!
I think you're almost there.
Then, I opened a Java connection to the link http://example-route.apps.internal/test123. I've also tried to use https. The result was the same. Java refused to connect to this URL.
You should use http://example-route.apps.internal:8080/test123. Your app is set to listen on $PORT, which is always 8080 in current versions of CF.
Normally you don't need to worry about this because your traffic goes in through Gorouter which translates for you (maps external port 80 -> internal 8080). With internal routes, traffic is direct so there is no transformation. That's why you need to use port 8080 in your URL.
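For reference, the setup you describe roughly maps to these cf CLI commands (the app names are placeholders for your two apps, and the exact flag syntax differs slightly between cf CLI versions):

# map the internal route to the Node app
cf map-route node-app apps.internal --hostname example-route
# allow the Java app to reach the Node app on its container port 8080
cf add-network-policy java-app --destination-app node-app --protocol tcp --port 8080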
Alternatively, you could use a service discovery mechanism like Eureka or Consul, but it's not a requirement. In this case, the service would know it's listening on 8080 and register that in the registry.
As far as HTTPS goes, that's tricky. Your app is only listening for plain HTTP. You would have to change it to listen for HTTPS, but then you need certificates and different server configuration. It's technically possible, but it's a whole can of worms.
In some newer versions, Envoy is present and accepts HTTPS traffic into the container, which can make HTTPS easier, but it's still not a slam dunk (at the time of writing, at least). I expect this will get better in the future.
Should I resolve this internal DNS somehow?
Internal DNS helps with locating your other apps, not the port. Otherwise you'd need to manage IP addresses, which change often, and that would require something like Eureka or Consul.
Which port is the correct one if I just use the port of the env variable?
See above.
How should I read this port from the other application?
It's always 8080 at the moment and has been for multiple years. It's unlikely to change, so you could probably hard-code it or set it in a config file safely.
How secure is the communication, if HTTP is used instead of HTTPS? (I assume HTTPS is not possible internally).
Is it as safe as an HTTPS connection from the outside? Which devices are between, how far out does the connection go?
Traffic would not be accessible externally: in some cases it wouldn't leave the Cell, and in the worst case it goes between two Cells. It would, however, be visible internally since it's not encrypted. That means you need to place more trust in your CF provider, who has access to the internal traffic.
If it were HTTPS, only someone with the key would be able to decrypt it. You would still have to trust your provider, though, as they could likely get the key and use it to decrypt the traffic. It would just be more work for them than if the traffic is unencrypted.
Hope that helps!

How to set up nginx upstream to check response and set this server to 'down' when response is a 40x

I have three servers. All three use nginx 1.10.3. Two of these servers also host a Node.js application which only listens on port 2403. The corresponding nginx instances serve these Node.js apps via port 80. The third server acts as a load balancer and does a simple round robin via an upstream block to these other two servers.
As far as I know, when one of the upstream servers is down, nginx removes it from the upstream and health-checks it every n seconds. If the server is back online, it's re-added.
But now I have the following situation:
One of the Node.js applications crashes. The nginx instance is still running and listening on port 80. The nginx load balancer checks port 80, gets a response, and therefore continues with the round robin (but the app isn't reachable, of course). I would now like to set up nginx so that, for example, it checks the response from the upstream and, when it sees an error status (there should be a 403/503 response, shouldn't there?), removes the broken/down server and only uses the remaining one until it's working again (something along the lines of the sketch below).
How can I do that?
I can provide the nginx configs if you like.
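A rough sketch of the kind of load balancer configuration I have in mind (directive names taken from the nginx docs; the addresses are placeholders, and I haven't verified that this covers the crash case):

upstream node_apps {
    # after one failed attempt, take the server out of rotation for 30s
    server 10.0.0.1:80 max_fails=1 fail_timeout=30s;
    server 10.0.0.2:80 max_fails=1 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_apps;
        # retry the request on the other upstream when the response is an error, a timeout, or a 403/503
        proxy_next_upstream error timeout http_403 http_503;
    }
}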

HAProxy health check

My current setup has two HAProxies configured with keepalived for high availability; the two proxies serve as a reverse proxy and load balancer for virtual web services. I know that HAProxy can check the health of its backends (I've already configured this), but my question is about something else.
At my company there's an F5 BIG-IP load balancer that serves as the first line of defense; it redirects requests to my HAProxies when needed.
I need to know if there is a way to let the F5 BIG-IP check the health of the HAProxy frontends, so that no requests are lost while the proxies are booting.
Thanks
There used to be a mode health option, but in recent versions the easiest way is to use monitor-uri on a dedicated port:
listen health_check_http_url
    bind :8888
    mode http
    monitor-uri /healthz
    option dontlognull
You can also use monitor-uri in a frontend and select it with an ACL, but the dedicated-port version is much clearer and more straightforward.
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-mode
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-monitor-uri
From the HAProxy Reference Manual:
Health-checking mode
--------------------
This mode provides a way for external components to check the proxy's health.
It is meant to be used with intelligent load-balancers which can use send/expect
scripts to check for all of their servers' availability. This one simply accepts
the connection, returns the word 'OK' and closes it. If the 'option httpchk' is
set, then the reply will be 'HTTP/1.0 200 OK' with no data, so that it can be
tested from a tool which supports HTTP health-checks. To enable it, simply
specify 'health' as the working mode :
Example :
---------
  # simple response : 'OK'
  listen health_check 0.0.0.0:60000
     mode health

  # HTTP response : 'HTTP/1.0 200 OK'
  listen http_health_check 0.0.0.0:60001
     mode health
     option httpchk
From the HAProxy Docs
Example:
frontend www
    mode http
    acl site_dead nbsrv(dynamic) lt 2
    acl site_dead nbsrv(static) lt 2
    monitor-uri /site_alive
    monitor fail if site_dead
Check out the reference documentation.
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-monitor-uri
<uri> is the exact URI which we want to intercept to return HAProxy's
health status instead of forwarding the request.
When an HTTP request referencing <uri> will be received on a frontend,
HAProxy will not forward it nor log it, but instead will return either
"HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure
conditions defined with "monitor fail". This is normally enough for any
front-end HTTP probe to detect that the service is UP and running without
forwarding the request to a backend server. Note that the HTTP method, the
version and all headers are ignored, but the request must at least be valid
at the HTTP level. This keyword may only be used with an HTTP-mode frontend.
Monitor requests are processed very early. It is not possible to block nor
divert them using ACLs. They cannot be logged either, and it is the intended
purpose. They are only used to report HAProxy's health to an upper component,
nothing more. However, it is possible to add any number of conditions using
"monitor fail" and ACLs so that the result can be adjusted to whatever check
can be imagined (most often the number of available servers in a backend).

How to randomly choose the outgoing address from an IPv6 pool using Node.JS?

I'm trying to create and run a Node.JS proxy on a machine that has a pool of IPv6 addresses. I want the proxy to randomly choose one of these addresses for each request (making it difficult for websites to keep track of users' requests).
With wget I can achieve this by using the --bind-address option as follows:
wget --bind-address OUTGOING_IP http://www.example.com/
Is there any way to achieve the same behavior using Node.JS?
If you want to make outbound HTTP requests from different IPs, have a look at the "localAddress" option of "http.request":
http://nodejs.org/docs/latest/api/http.html#http_http_request_options_callback
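A minimal sketch of what that could look like (the pool addresses are placeholders):

const http = require('http');

// local IPv6 addresses assigned to this machine (placeholders)
const pool = ['2001:db8::1', '2001:db8::2', '2001:db8::3'];

// pick a random outgoing address for this request
const localAddress = pool[Math.floor(Math.random() * pool.length)];

const req = http.request({
  host: 'www.example.com',
  family: 6,                   // use IPv6
  localAddress: localAddress,  // bind the outgoing socket to this address
  path: '/',
  method: 'GET',
}, (res) => {
  console.log('status:', res.statusCode);
  res.resume();
});
req.end();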
If you want to start a TCP server to listen on a particular IP bound to your host, you would probably want to specify it when you create the server [i.e. server.listen(PORT, HOST)]:
http://nodejs.org/docs/latest/api/net.html#net_class_net_server
-- ab1
