How to map unique DNS names to Service Fabric ports - Azure

I have a local Service Fabric cluster which has 6-7 custom HTTP endpoints exposed. I use Fiddler to redirect these to my service like so:
127.0.0.1:44300 identity.mycompany.com
127.0.0.1:44310 docs.mycompany.com
127.0.0.1:44320 comms.mycompany.com
etc..
I've never deployed a cluster in Azure before, so there are some intricacies that I'm not familiar with and I can't find any documentation on. I've tried multiple times to deploy and tinker with the load balancers/public IPs, with no luck.
I know DNS CNAMEs can't specify ports, so I guess I have to have a separate public IP for each hostname I want to use and then somehow internally map that to the port. So I end up with something like this:
identity.mycompany.com => azure public ip => internal redirect / map => myservicefabrichostname.azure.whatever:44300
My questions are:
1) Is this the right way to go about it, or is there some fundamental method that I'm missing?
2) Do I have to specify all these endpoints (44300, 44310, 44320...) when creating the cluster (it appears to set up a load of load balancer rules/probes), or will this be unnecessary if I have multiple public IPs? I'm unsure if this is for internal or external access.
thanks
EDIT:
Looks like the Azure portal is broken :( Been on the phone with Microsoft support and it looks like it's not displaying the backend pools in the load balancer correctly, so you can't set up any new NAT rules.
Might be able to write a PowerShell script to get around this though.
EDIT 2:
Looks like Microsoft has fixed the bug in the portal, happy times.

Instead of using multiple IP addresses you can use a reverse proxy, like HAProxy, IIS (with URL rewriting), the built-in Service Fabric reverse proxy, or something you build or reuse yourself. The upside is that it allows for flexibility in adding and removing underlying services.
All traffic would come in on one endpoint and then be routed in the right direction (your services running on various ports inside the cluster). Do make sure your reverse proxy is highly available.
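As an illustration, here is a minimal HAProxy-style sketch of that idea, assuming the proxy terminates TLS; the backend names, certificate path and node address are placeholders, and only the hostnames and ports come from the question:

frontend www_in
    bind *:443 ssl crt /etc/haproxy/certs/mycompany.pem
    mode http
    # route on the Host header the client sent
    acl host_identity hdr(host) -i identity.mycompany.com
    acl host_docs     hdr(host) -i docs.mycompany.com
    acl host_comms    hdr(host) -i comms.mycompany.com
    use_backend be_identity if host_identity
    use_backend be_docs     if host_docs
    use_backend be_comms    if host_comms

backend be_identity
    mode http
    server sf_node1 10.0.0.4:44300 check
backend be_docs
    mode http
    server sf_node1 10.0.0.4:44310 check
backend be_comms
    mode http
    server sf_node1 10.0.0.4:44320 check

In a real cluster you would list every node in each backend (or point at the Service Fabric reverse proxy instead). With something like this in front, one public IP and a single 443 load-balancer rule are enough; the 44300/44310/44320 ports stay internal.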

Related

Deployed small footprint Tanzu Application Service (TAS) in Azure without domains. Can I access the CC API and Apps Manager with the IP?

I could deploy BOSH and small footprint Tanzu Application Service (TAS) in Azure without using domains. All VMs are running. Can I access the CC API and Apps Manager with the IP address instead of api.SYSTEM_DOMAIN?
The short answer is no. You really, really want to have DNS set up properly.
Here's the long answer that is more nuanced.
All requests to your foundation go through the Gorouter. Gorouter will take the incoming request, look at the Host header and use that to determine where to send the request. This happens the same for system services like CAPI and UAA as it does for apps you deploy to the foundation.
DNS is a requirement because of the Host header. A browser trying to access CAPI or an application on your foundation is going to set the Host header based on the DNS entry you type into your browser's address bar. The cf CLI is going to do the same thing.
There are some ways to work around this:
You could strictly use a client like curl, where you can set the Host header to arbitrary values. That way, you could set the Host header to api.system_domain while connecting to the IP address of your foundation. That's not a very elegant way to use CF, though.
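For example, something along these lines (the IP is a placeholder for your foundation's address, /v2/info is just a convenient unauthenticated CAPI endpoint, and -k is only needed because the certificate won't match a raw IP):

curl -k -H "Host: api.system_domain" https://203.0.113.10/v2/info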
You can manually set entries in your /etc/hosts file (or the equivalent on Windows). This is basically a way to override DNS resolution and supply your own custom IP.
You would need to do this for uaa.system_domain, login.system_domain, api.system_domain and any host names you want to use for apps deployed to your foundation, like my-super-cool-app.apps_domain. These should all point to the IP of the load balancer that's in front of your pool of Gorouters.
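As a sketch, those /etc/hosts entries would look something like this (203.0.113.10 standing in for your load balancer's IP):

203.0.113.10  uaa.system_domain
203.0.113.10  login.system_domain
203.0.113.10  api.system_domain
203.0.113.10  my-super-cool-app.apps_domain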
If you add enough entries into /etc/hosts you can make the cf CLI work. I have done this on occasion to bypass the load balancer layer for troubleshooting purposes.
Where this won't work is on systems where you can't edit /etc/hosts, like for customers or external users of software running on your foundation, or if you're trying to deploy apps on your foundation that talk to each other using routes on CF (because you can't edit /etc/hosts in the container). For example, if you have app-a.apps_domain and app-b.apps_domain and app-a needs to talk to app-b, that won't work because you have no DNS resolution for apps_domain.
You can probably make app-to-app communication work if you are able to use container-to-container networking and the apps.internal domain, though. The resolution for that domain is provided by BOSH DNS. You do have to be aware of this difference when deploying your apps: map routes on the apps.internal domain, and set a network policy to allow traffic to flow between the two.
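With a reasonably recent cf CLI that looks roughly like this (app names taken from the example above; the port is an assumption):

cf map-route app-b apps.internal --hostname app-b
cf add-network-policy app-a app-b --protocol tcp --port 8080

app-a can then reach app-b at http://app-b.apps.internal:8080, with the name resolved by BOSH DNS inside the container.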
Anyway, there might be other hiccups. This is just off the top of my head. You can see it's a lot better if you can set up DNS.
The easiest way to achieve a portable solution is a service like xip.io, which works out of the box. I have set up and run a lot of PoCs that way, back when wildcard DNS was something that enterprise IT was still oblivious about.
It works like this (excerpt from their site):
What is xip.io?
xip.io is a magic domain name that provides wildcard DNS
for any IP address. Say your LAN IP address is 10.0.0.1.
Using xip.io,
10.0.0.1.xip.io resolves to 10.0.0.1
www.10.0.0.1.xip.io resolves to 10.0.0.1
mysite.10.0.0.1.xip.io resolves to 10.0.0.1
foo.bar.10.0.0.1.xip.io resolves to 10.0.0.1
...and so on. You can use these domains to access virtual
hosts on your development web server from devices on your
local network, like iPads, iPhones, and other computers.
No configuration required!

Change Tiny Proxy IP per request on EC2

I'm fairly new to proxy servers and how exactly they work.
I recently spun up an AWS EC2 instance to act as a proxy server using Tinyproxy. Everything seems to work just fine, however I am curious about something. Is it possible to configure Tinyproxy to use a different public IP each time it makes a request? I looked into AWS Elastic IPs but don't quite understand how those might fit into this scenario.
Not possible. Public IPs are allocated to the instance during launch. You can allocate multiple IPs using Elastic IPs like you mentioned, but you can't get a different IP per request like you asked. What's your use case?
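To make the distinction concrete: associating an Elastic IP is an explicit, one-off operation against the instance (or its network interface), roughly like this with the AWS CLI (IDs are placeholders), not something that changes per outgoing request:

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

You could script re-association between batches of requests, but it is slow and disruptive; outgoing traffic always uses whichever public IP happens to be associated at that moment.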

SSL Certs for single IP- two ports, same URL website

We have a project that is to go live very soon and we ran into this issue when dealing with developers. These are two JD Edwards (ERP) websites hosted on a single IBM WebSphere web server, currently using one FQDN and different port assignments for DEV and TEST users. The websites are:
DEV
https://jdeweb01dev.corporate.company.com:100/jde/owhtml/
TEST
https://jdeweb01dev.corporate.company.com:101/jde/owhtml/
There is only one IP configured for the above server FQDN, but we will eventually give them common names like JdeDev.company.com and JdeTest.company.com or something similar.
We want to implement SSL certs for our Test/Dev environments, but how would we implement this on IIS or IBM WebSphere, as well as at the DNS level, since the only difference between the URLs is the port number and both lead to different websites? I'm open to suggestions on how we can improve the design, or how to make the current design work.
Another important thing to consider: the two websites will be accessed across two different domain forests which have a transitive trust. This is a JD Edwards project.
Appreciate any help on this!
To configure an HTTPS binding in IIS, configure a certificate in the IIS site binding module.
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-an-iis-hosted-wcf-service-with-ssl
Also, this can be accomplished with the netsh http command.
netsh http add sslcert ipport=0.0.0.0:8000
certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6
appid={00112233-4455-6677-8899-AABBCCDDEEFF}
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-a-port-with-an-ssl-certificate
After you have set up the FQDN in DNS, you can fill in the Host name field of the site binding so that the service can be accessed with the server's fully qualified domain name.
Feel free to let me know if there is anything I can help with.
WebSphere supports multiple virtual hosts, each with its own alias(es), which can be a combination of DNS name and port. The built-in default_host will typically have an alias for the server/node name and the * wildcard for all ports. You then assign a specific virtual host to an application when you deploy it.

Find external IP address of Node.js server in all scenarios

I have built a Node.js API server. Part of one feature is that when it starts, it should be able to know its own IP, regardless of how the server it is running on is set up.
The classic scenario is not that hard (I think). There are several options, like using the os module and finding the IP of the external interface. I am sure there are other ways and some might be better, but this is the way I have been doing it so far. Feel free to add alternatives, as informative as possible.
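For reference, a minimal sketch of that os-module approach (the exact shape of the result varies a bit between Node versions and platforms):

const os = require('os');

// List IPv4 addresses that Node does not flag as internal (i.e. not loopback).
function externalAddresses() {
  const results = [];
  for (const [name, addrs] of Object.entries(os.networkInterfaces())) {
    for (const addr of addrs || []) {
      if ((addr.family === 'IPv4' || addr.family === 4) && !addr.internal) {
        results.push({ iface: name, address: addr.address });
      }
    }
  }
  return results;
}

console.log(externalAddresses());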
There is this case that I stumbled on: the web server was running on a Google Cloud instance. This instance has two IPs, one internal and one external. What I want is the external IP. However, when I use the method above, the actual external IP is not part of the object returned; the internal IP is the one reported as non-internal. Even when I run different commands from the server's command line, the only IP returned is the one that is actually internal and cannot be used to access the Node server.
From what I understand, the instance itself is not aware of its external IP. There might be a DNS (I think) that redirects requests made to the external IP towards the correct instance.
While reading around the internet I saw that problems getting the server's correct external IP might also arise when using load balancing or proxies.
The solution I thought about is to have the Node.js code make a request towards a service that I will build. This service will treat the Node.js servers as clients and will return their external IPs. From experiments that I have done, the req object contains, among other things, the client's IP. So I should first check req.connection.remoteAddress and then the first element of req.headers['x-forwarded-for']. Ideally the server would make a request towards itself, but then it would already need to know its own external address, which is the original problem.
I know there are external APIs like https://api.ipify.org?format=json that do just that - return the actual IP. But I would very much like to have the Node.js servers independent of services I cannot control.
However, I really am hoping that there are better solutions out there than making a request from the server which returns the server IP.
However, I really am hoping that there are better solutions out there
than making a request from the server which returns the server IP.
It is not really possible; you always rely on some kind of external observer / external request.
While reading in the internet I read that problems getting the
server's correct external IP might also rise when using load balancing
or proxies.
This is because your device is not able to be aware of its external IP in all scenarios. It might be sitting behind some network device that the external address is actually assigned to and that forwards WAN traffic to it (for example, a router). So when you try to obtain the external IP from the device's own interface, you end up with an IP that is inside the scope of the router's LAN and not the one used for external requests.
So if you really want to:
- have a method that works in all scenarios
- not rely on 3rd-party services
then the only solution is to build your own IP echo service (which you maintain and can reuse for future projects).
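A minimal sketch of such an echo service plus the call the API server would make at startup (the hostname and port are placeholders; the X-Forwarded-For handling only matters if the echo service itself sits behind a proxy or load balancer):

// echo-server.js - runs on a host you control, replies with the caller's address as it sees it
const http = require('http');

http.createServer((req, res) => {
  const forwarded = (req.headers['x-forwarded-for'] || '').split(',')[0].trim();
  // req.socket.remoteAddress is the modern name for req.connection.remoteAddress
  const ip = forwarded || req.socket.remoteAddress;
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ip }));
}).listen(8080);

// client.js - run from the API server at startup (separate process/machine)
const http = require('http');

http.get('http://ip-echo.example.com:8080/', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log('my external IP is', JSON.parse(body).ip));
});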

How to get HAProxy to use a specific cluster computer via the URI

I have successfully set up HAProxy on my server cluster. I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that for one reason or another, one computer in the cluster gets a configuration variation. I can't find a way to tell haproxy that I want to use a specific computer out of a cluster.
Basically, mysite.com (and several other domains) are served up by boxes web1, web2 and web3. And they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2 only because in a specific case, only that server is throwing an error on one web page.
Anyone know how to do that without building a new cluster with a URI filter and having only one computer in that cluster? I am hoping to use the cluster as-is but add something to the URI that will tell HAProxy which server to use out of the cluster.
Thanks!
Have you thought about using a different port for this? You could define a new listen section with a different port since, as I understand it, you are able to modify the URL you use anyway.
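Something along these lines as a sketch, with the port and backend address as placeholders:

# extra entry point that always goes to web2
listen web2_direct
    bind *:8082
    mode http
    server web2 10.0.0.12:80 check

You would then point your test at mysite.com:8082 and restrict who can reach that port, either at the firewall or with a source-address ACL in this section.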
Basically, haproxy cannot do what I was hoping. There is no way to add a param to the URL to suggest which host in the cluster to use.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level.
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with a session cookie that is too large because we have haproxy manipulating this cookie to keep users on the server they first hit. So when the invalid session cookie is detected, we have the page simply drop the session and reload the page.
This is working well for our testing purposes.
