AKS + firewall: how to route egress traffic via multiple IP addresses - Azure

My basic problem is to run multiple containers that make HTTP requests to a test server. I need the source IP registered by the test server to be different for the requests made by each container. I am using Azure AKS.
So far I have followed the documentation at:
https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic#restrict-egress-traffic-using-azure-firewall
The above works fine. Now I need to run another container with a different IP address. For that I added a new IP address to the firewall, created a new Kubernetes service, and added a new NAT rule connecting them. That didn't work: the source IP registered by the test server is still the same firewall IP.
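Roughly, the steps I tried look like this with the Azure CLI (resource names and addresses are placeholders; the NAT rule mirrors the one in the linked documentation):

    # Create a second public IP and attach it to the existing firewall
    az network public-ip create -g myRG -n fw-pip-2 --sku Standard
    az network firewall ip-config create -g myRG -f myFirewall \
        -n fw-config-2 --public-ip-address fw-pip-2

    # DNAT rule forwarding traffic that arrives on the new IP to the new service
    az network firewall nat-rule create -g myRG -f myFirewall \
        --collection-name natcoll2 --priority 200 --action Dnat \
        -n svc2rule --protocols Any --source-addresses '*' \
        --destination-addresses <fw-pip-2-address> --destination-ports 80 \
        --translated-address <service-2-internal-ip> --translated-port 80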
The documentation also states:
"If needed, you can generalize the steps above to forward the traffic to your preferred egress solution, following the Outbound Type userDefinedRoute documentation."
For that, I created a cluster with a vm-set-type of VirtualMachineScaleSets and a load-balancer-sku of Standard, then tried the steps above, but it didn't work. I also created a new route on the route table connecting the internet to the new public IP, to no effect.
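The extra route looked roughly like this (mirroring the internet route from the linked documentation; names and addresses are placeholders):

    # Send traffic for the new firewall public IP straight to the internet
    az network route-table route create -g myRG --route-table-name myRouteTable \
        -n fw-internet-2 --address-prefix <fw-pip-2-address>/32 \
        --next-hop-type Internet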
I am out of ideas and don't know if I messed something up. Any idea is welcome. Thanks in advance.

Related

Application Gateway Rules need recreating after private DNS is updated

I have an application gateway with a routing rule. Routing from the application gateway to the VM is based on FQDN (I use Azure private DNS to internally map the VM IP to the FQDN).
To switch traffic to a different VM (as part of an upgrade pipeline), I update the private DNS with the new machine's IP.
This results in the backend health failing.
Oddly, manually updating the backend pool, or the routing rule, in exactly the same form resolves the issue.
Any ideas what's going on? It feels like it's caching the DNS?
There are at least two solutions:
1. Stop/start the application gateway: https://learn.microsoft.com/en-us/cli/azure/network/application-gateway?view=azure-cli-latest
2. Rewrite any of the application gateway's config as part of the deployment pipeline
In my case I chose to switch from routing based on FQDN to routing based on the IP address of the VM, which makes use of option 2.
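For option 1, the stop/start can be scripted into the pipeline roughly like this (a sketch; resource names are placeholders):

    # Restarting the application gateway forces it to re-resolve the backend FQDN
    az network application-gateway stop -g myRG -n myAppGateway
    az network application-gateway start -g myRG -n myAppGateway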

Google Cloud Platform load balancing to remove port 8443 redirect from domain name

I have a Tomcat server running on port 8080 on a Google Cloud Platform VM instance, and I have enabled SSL for the server. I have deployed my web application in it. When I enter my domain name in the browser, my application loads, but the port 8443 is appended to the hostname, so it looks like hostname:8443.
I understand I should be able to fix this by using load balancing in GCP, but I am new to GCP and don't know how to configure it. When I tried to configure it, I got an error saying there is a problem with the backend service.
Can anyone kindly help me resolve this?
If I understand correctly, you would like to know whether you need to add the VM instance's external IP or the load balancer's external IP address to the DNS record. If my understanding is correct, in order to use the load balancer, you need to put the load balancer's external IP in your DNS A record.
Regarding your "1 backend service is unhealthy" error, I would request you to check the 'Firewall rules' section of GCP's Creating Health Checks documentation. You need to create ingress firewall rules, applicable to all VMs being load balanced, that allow traffic from the health check prober IP ranges. You didn't mention which load balancer you are using; you will find GCP's load balancer offerings at this link. Based on the load balancer you are using, you need to create an appropriate health check firewall rule.
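As a rough sketch, such a rule could look like this with the gcloud CLI (the network and port are assumptions based on your setup; check the health checks documentation for the prober ranges that apply to your load balancer):

    # Allow GCP health check probers to reach the backends on the serving port
    gcloud compute firewall-rules create allow-health-checks \
        --network=default --direction=INGRESS --action=ALLOW \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp:8080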
I would recommend posting this type of question on Server Fault, as Stack Overflow is a Q&A site for professional and enthusiast programmers.

How do I expose Kubernetes service to the internet?

I am running a Kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with the Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a hack: I checked all the listening ports on the master and tried each port with the public IP. That let me hit the service and get a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load-balanced across any of the pods).
My question is: how do I map the external IP to the service IP? I know Kubernetes has a setting that works only on GCE (which I can't use right now), but is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the service from a JSON or YAML file, use the spec/externalIPs array.
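For example, a minimal sketch (the deployment name and IP are placeholders):

    # Expose a deployment and accept traffic on an additional external IP
    kubectl expose deployment frontend --port=80 --external-ip=52.168.10.20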

AWS CLI to restrict inbound connections from a dynamic IP

My internet provider doesn't offer a static IP, so I have to connect to my AWS instances from a dynamic IP. That means the SSH port in my VPC security group in AWS can be accessed from any IP (source: 0.0.0.0/0), although obviously you still need the key.
I would like to restrict this rule, and I was thinking of writing a CLI script that revokes the 0.0.0.0/0 rule and creates a new inbound rule with my current (dynamic) IP.
Is it possible? Is it a good idea?
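Something along these lines is what I had in mind (the security group ID is a placeholder, and after the first run the revoke would need to target the previously authorized IP rather than 0.0.0.0/0):

    # Look up my current public IP, then swap the open SSH rule for a /32 rule
    MYIP=$(curl -s https://checkip.amazonaws.com)
    aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr "${MYIP}/32"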
You could connect through a VPN, then SSH from inside the VPN.
Set up a software VPN (OpenVPN, Openswan) on an existing instance and open just that port to the outside world. Once set up, it is essentially free if you run it on an instance you would normally be running anyway. This involves a little more setup, but it's not too hard.
Previously I suggested the Amazon VPC VPN, but that requires a static IP, so it will not work.

How to configure my Azure VM Endpoint ACL to allow connections from my Azure WebJob on the same portal

I have a WebJob on an Azure Website that needs to connect to a VM Endpoint to make REST calls.
My endpoint is configured to deny all except my company's IP range. What rule would I need to add, or what URL should I use, so my WebJob can connect to the endpoint?
I have tried the following without success:
Allow my website's virtual IP address in the ACL
Connect to the endpoint using the internal IP instead of the DNS name, without changing the ACL
Connect to the endpoint using the public virtual IP instead of the DNS name, without changing the ACL
This works but is not what I am looking for:
Remove the current ACL and allow all
Keep the ACL but add a /16 rule with my website IP
Thank you for your help, and let me know if you need more details!
I need the same thing, but it seems as though it is not possible right now. Looking at this answer on a related question:
Azure Web Sites do not have dedicated outbound IP addresses for each
deployment. This precludes you from using ACLs or Virtual Networks to
connect to your Redis / Solr virtual machines.
So even though you can have a (reasonably) fixed incoming IP address on Azure Websites, the outgoing address is highly unpredictable, and as far as I can see the only exception you could make was to allow the entire range of IP addresses for that data centre, which is far from ideal.
A solution moving forward will be to connect your Azure Website and the VM on the same Virtual Network. As of this writing it is still in preview, so it is not ready for production use just yet.
Here is more information on it: http://azure.microsoft.com/blog/2014/09/15/azure-websites-virtual-network-integration/
