Does Istio support hooking? - hook

Does Istio provide support for hooking (a methodology to augment a service request with additional information to let the load-balancing and dynamic-routing algorithms route it properly, WITHOUT needing to change the invoking or serving service itself)?

I think you can achieve that effect, if I understand you correctly, with Istio route rules. You can match a rule by a cookie or by a URL path, and then influence load balancing or routing.
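As a minimal sketch (the original route-rule objects have since become the VirtualService API; the service name, subsets, and cookie value here are placeholders, with the subsets assumed to be defined in a DestinationRule), requests carrying a particular cookie could be steered to a different version:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-routes
spec:
  hosts:
  - my-service
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(user=tester)(;.*)?$"  # requests tagged by this cookie...
    route:
    - destination:
        host: my-service
        subset: v2                               # ...go to the v2 subset
  - route:
    - destination:
        host: my-service
        subset: v1                               # everything else goes to v1

The calling service and the serving service stay unchanged; the mesh routes on metadata already present in the request.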

Related

Routing subdomains to certain applications in Azure Application Gateway?

I've been trying out Application Gateway and have managed to host 2 applications in different backend pools, albeit on the same port, using the "host" header to choose where I'm directed.
However, what I actually intended to do was route subdomains to certain applications.
For example, my application gateway is "app-gw.example.com", and I have 2 Azure Functions sitting behind it, for simplicity func1.example.com and func2.example.com. (They actually have distinct domains themselves, not subdomains.)
I would like to route "func1.app-gw.example.com"'s traffic to func1.example.com, and "func2.app-gw.example.com" to "func2.example.com".
However, I can't seem to figure this out. Can someone explain how this can be done?
I've also had some success hosting on different ports and using listeners + routes to direct to each individual site, but they should all be on the same port, which rules this out.
I've also tried messing with URL rewrites, but wasn't able to get anything useful from that either.
EDIT: I think maybe I'm missing something here. Perhaps I need something that points the domain names to the application gateway, and then routes on that? For example:
Site 1, reachable at func1.example.com, might have an entry called "func1-gw.example.com" which actually just points to the application gateway; however, the application gateway now knows that it's really supposed to go to "func1".
Sounds like a DNS record pointing to the gateway may work, but then I wonder how to do the routing, hmm.
Thanks.
Since you are already aware of Application Gateway multi-site hosting, you can extend the Application Gateway to route traffic based on URLs.
The references below might help you configure URL-based routing.
URL Path Based Routing
Application Gateway redirection
Configure URL redirection on an application gateway

PrimeFaces in a WAF environment, internal and external URLs

Say we have an internal URL https://my.internal.url (in our case a Liferay Portal) and from a web application firewall an external URL https://my.external.url pointing to this internal URL.
The internet user is using the external URL.
PrimeFaces renders attributes like, for example,
onclick="...;window.open('https://my.internal.url'..."
This leads to CORS problems.
The HTTP header Access-Control-Allow-Origin is not an option, since the internal URL is internal.
We'll talk with the WAF people about URL replacement, but I'd like to know whether or not we can tell PrimeFaces to use the external URL (or maybe relative URLs, in case those would work).
The portal doesn't know about the external URL but of course we could implement this as a configuration option.
(Looking at the source code, there are more occurrences of the internal URL outside of the jsf/PrimeFaces portlet, so I'm adding the liferay tag too.)
Update
The question is obsolete; the WAF has to handle this correctly (an old SSL environment did, the new WAF environment doesn't).
You say
The portal doesn't know about the external URL
however, any properly configured reverse proxy (or WAF) should forward the actual host name used to request the current page.
On Apache httpd's mod_proxy_http, this is done with the option ProxyPreserveHost On. When forwarding with AJP, the host is automatically forwarded. Other WAF/Proxy configurations - of course - differ. But the proper way to generate the URL is to let the generating server know what URLs it should generate.
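As a minimal mod_proxy_http sketch (host names are placeholders, and the backend here is assumed to speak HTTPS):

<VirtualHost *:443>
    ServerName my.external.url
    # forward the Host header the client used, so the backend
    # generates links for the external URL instead of the internal one
    ProxyPreserveHost On
    SSLProxyEngine On
    ProxyPass        / https://my.internal.url/
    ProxyPassReverse / https://my.internal.url/
</VirtualHost>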
If you need to worry about the proper host name, you'll need to do so per request: Liferay is well able to use virtual host names to distinguish between different sites - and if they're completely different, you might be signed in to one of them but not the other. This has repercussions on the permissions.
Have the infrastructure handle it for you. Don't write code (or application configuration) for it.

Understanding complex website architecture (reactjs,node, CDN, AWS S3, nginx)

Can somebody explain to me the architecture of this website (link to a picture)? I am struggling to understand the different elements in the front-end section, as well as the fields on top, which seem to be related to AWS S3 and CDNs. The backend section seems clear enough, although I don't understand the memcache. I also don't get why an nginx proxy is needed in the front-end section, or why it is there.
I am an absolute beginner, so it would be really helpful if somebody could just once talk me through how these things are connected.
Source
Memcache is probably used to cache the results of frequent database queries. It can also be used as a session store so that authenticated users' sessions work consistently across multiple servers, eliminating the need for server affinity (memcache is one of several ways of doing this).
The CDN on the left caches images in its edge locations as they are fetched from S3, which is where they are pushed by the WordPress part of the application. The CDN isn't strictly necessary, but it improves performance by caching frequently requested objects closer to where the viewers are, and lowers transport costs somewhat.
The nginx proxy is an HTTP router that selectively routes certain path patterns to one group of servers and other paths to other groups of servers -- it appears that part of the site is powered by WordPress, part by node.js, and part is static React code that browsers need to fetch, and this is one way of separating the paths behind a single hostname and routing them to different server clusters. Other ways to do this in AWS are an Application Load Balancer or CloudFront, either of which can route to a specific server group based on the request path, e.g. /assets/* or /css/*.
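A stripped-down sketch of that kind of nginx routing (upstream names, addresses, and paths are invented for illustration):

# hypothetical server groups for each part of the site
upstream wordpress_servers { server 10.0.1.10:8080; }
upstream node_servers      { server 10.0.2.10:3000; }

server {
    listen 80;

    # blog paths go to the WordPress cluster
    location /blog/ {
        proxy_pass http://wordpress_servers;
    }

    # API paths go to the node.js cluster
    location /api/ {
        proxy_pass http://node_servers;
    }

    # everything else serves the static React bundle
    location / {
        root /var/www/react-app;
        try_files $uri /index.html;
    }
}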

Reverse proxy in Azure with Web Apps

I'm moving from Apache on Linux to Azure Web Apps and I have a specific url (mysite.com/blog and everything under it) that is configured with a reverse proxy so the end user doesn't know that the content is actually coming from another service.
I'm sure I can do this within Web Apps (which runs on IIS) but I can't find any documentation on how to do this. As a backup I'm open to putting another service in front of my Web App.
How can I accomplish this in Azure?
Update: I did try using another service - Functions. My architecture looks like this:
This works in production, but I'm hitting snags in development. /blog may or may not work depending on the entry point. In prod, our DNS will be configured so mysite.com points to mysite-proxy.azurewebsites.net and, therefore, any URI the user hits will work. In dev, however, we may want to hit /blog from the Traffic Manager, which will route us to /blog on the webapp, which doesn't exist. Same problem, of course, if we go to /blog directly on the webapp. I tried to show these examples on the right side of the diagram.
I would like to find a solution so the webapp itself can handle the /blog proxying and then we can determine whether it's worth the speed and cost tradeoff compared to the existing solution.
You might want to check out Azure Functions Proxies: https://learn.microsoft.com/en-us/azure/azure-functions/functions-proxies
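As a minimal sketch, a proxies.json that forwards /blog to another host (the backend hostname is a placeholder) looks like this:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "blog": {
      "matchCondition": { "route": "/blog/{*path}" },
      "backendUri": "https://myblog.example.com/{path}"
    }
  }
}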
Sounds like you want an Application Gateway (caution: it costs around $15/day).
The AGW can have multiple listeners against multiple hostnames, including path-based routing.
You will want to add two backends, one for the /blog endpoint and one for the non-/blog stuff. The backends just take the IP or FQDN of the target resource; in this case you will have:
blogBackend: myblog.com
defaultBackend: myWebapp.azurewebsites.net
Then you need to add listeners for your public-facing domain; they would look like:
myHttpListener: port 80, hostname=mywebsite.net
myHttpsListener: port 443, hostname=mywebsite.net
Then you need an HTTP setting
myHttpSetting: protocol=HTTPS, port=443, useWellKnownCACert=Yes, HostnameOverride=Yes (pick from backend target)
Then you need rules: one for the http=>https redirect, and one for handling the pathing.
myRedirectRule: type=basic, listener=myHttpListener, backendtargettype=redirection, targettype=listener, target=myHttpsListener
myRoutingRule: type=path-based, listener=myHttpsListener, targettype=backendpool, target=defaultBackend, httpSetting=myHttpSetting, pathRules=
path=/* name=root backendpool=defaultBackend
path=/blog name=blog backendpool=blogBackend
You can create additional http settings and assign them to the path rules to change the behaviour of the reverse proxy. For example, you can have the user's URL be https://mywebsite.net/blog, but have the path stripped on the request to the blog so the request looks like myblog.com instead of myblog.com/blog
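In the same notation as above, a hypothetical stripped-path variant could be an extra HTTP setting assigned to the blog path rule (OverrideBackendPath stands in for the "Override backend path" field of the HTTP setting):

blogHttpSetting: protocol=HTTPS, port=443, HostnameOverride=Yes, OverrideBackendPath=/
path=/blog name=blog backendpool=blogBackend httpSetting=blogHttpSetting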
There are a lot of tiny parts, but app gateways can handle multiple applications at scale. The biggest thing is to watch out for the cost, since this is more of an enterprise solution.

Can Varnish Read a List of Backend Hosts from a Text File

Is there any way for varnish to read a list of backend urls from a text file, and then proxy cache misses to a random url taken from the text file?
What I imagine is something like this pseudocode...
/var/services/backend-urls.conf
http://backend-host-1/path/to/application
http://backend-host-2/path/to/application
http://backend-host-3/path/to/application
# etc
varnish config
sub vcl_miss {
// read a list of urls from a text file
backendHosts = readFile("/var/services/backend-urls.conf");
//choose a random url from the file
randomHost = chooseLineAtRandom(backendHosts);
//proxy the request to the random host
set req.backend = randomHost;
}
To provide some background, I work on a server system that comprises a number of backend applications that currently sit behind a front-end running apache. We are evaluating replacing the apache layer with varnish so we can benefit from the caching capabilities of varnish. We also have a service discovery framework that knows the endpoint locations for each backend application (the endpoint urls change periodically as new hosts emerge or are taken out of service).
Currently we use the RewriteMap functionality in mod_rewrite to route requests to the backend services. Then we have a process to maintain the lists of backend services based upon the contents of the service discovery framework.
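For reference, the mod_rewrite side of such a setup can be sketched roughly like this (the map path, key, and rule pattern are placeholders; the rnd: map type picks one of the |-separated values at random per lookup):

# /var/services/backend-urls.map, regenerated from service discovery:
#   app http://backend-host-1/path/to/application|http://backend-host-2/path/to/application
RewriteMap backends "rnd:/var/services/backend-urls.map"
RewriteRule ^/app/(.*)$ "${backends:app}/$1" [P]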
All this works well for us in Apache, except that Apache is like using a sledgehammer to crack a nut. All we really want is the reverse proxy logic, and the caching in Varnish would be helpful too.
Is there any way to have varnish read the list of backend urls from an external resource?
Without resorting to custom VMOD/C modules, the quick answer is no.
The VCL instructions are compiled within Varnish, and that rules out run-time inclusion.
But why not include, within the main VCL, a separate backend VCL that contains the current backends?
That VCL file could be written out on demand. Then, using the varnishadm CLI, you could request a recompile of the VCL, bringing the new config live.
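For example (the VCL name is arbitrary):

# compile and load the regenerated VCL, then switch the running config to it
varnishadm vcl.load backends_v2 /etc/varnish/default.vcl
varnishadm vcl.use backends_v2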
I can see two potential solutions.
The first is to have something generate your VCL and backends such as Chef or some custom scripting. You can then process the text file into backend definitions and the necessary VCL to invoke them. To handle the requirement for the random backend you could use a director. I've not dealt with directors myself but it looks like they are meant to solve that requirement. When changes to the backends occur you could rerun the generation script/Chef and tell Varnish to reload its configuration either using varnishadm or service varnish reload to avoid a full restart.
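A minimal sketch of what the generated VCL could look like with the random director (Varnish 4+; backend names and addresses are placeholders):

vcl 4.0;
import directors;

backend app1 { .host = "backend-host-1"; .port = "80"; }
backend app2 { .host = "backend-host-2"; .port = "80"; }

sub vcl_init {
    # equal-weight random selection across the generated backends
    new pool = directors.random();
    pool.add_backend(app1, 1.0);
    pool.add_backend(app2, 1.0);
}

sub vcl_recv {
    set req.backend_hint = pool.backend();
}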
The second would be to implement it in C, either via a VMOD as Marcel Dumont suggests or possibly using inline C in your VCL.
With vmod_dynamic you can just use any DNS name as a backend, or even SRV service records.
For your use case, one option would be to set up an SRV record in DNS pointing to all your servers and then just use that, as shown for example in the basic-stub.vtc test case.
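For the plain DNS-name case, a minimal vmod_dynamic sketch looks roughly like this (the hostname and TTL are placeholders; the SRV variant follows the same pattern as in basic-stub.vtc):

vcl 4.0;
import dynamic;

sub vcl_init {
    # re-resolve the name at most every 10 seconds
    new d = dynamic.director(port = "80", ttl = 10s);
}

sub vcl_recv {
    # every address behind this name becomes a usable backend
    set req.backend_hint = d.backend("backends.example.com");
}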
