How Can I Expose My Services to the Outside on Azure? - azure

I have an Azure Kubernetes cluster and I want to expose my services. How can I expose them with one IP and NodePorts? My services use TCP, so I cannot use an Ingress. I do not want to buy an external load balancer for every service. Is there a way to achieve this on Azure?

If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
You can then use externalIPs to reach your application at http://External_IP:ServicePort.
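A minimal sketch of such a Service, assuming your pods carry the label app: my-app and that 80.11.12.10 is an address that routes to one of your cluster nodes (all names and IPs here are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service              # placeholder name
spec:
  selector:
    app: my-app                 # assumed pod label
  ports:
    - protocol: TCP
      port: 7000                # the Service port clients connect to
      targetPort: 7000          # the container port
  externalIPs:
    - 80.11.12.10               # routes to a node; not managed by Kubernetes

Traffic arriving at 80.11.12.10:7000 is then forwarded by kube-proxy to one of the Service endpoints.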

Related

Azure Kubernetes Cluster - Accessing and interacting with Service(s) from on-premise

Currently we have the following scenario:
We have established our connection (network wise) from on-premise to the Azure Kubernetes Cluster (private cluster!) without any problems.
Ports which are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and test different configurations.
For setting up our AKS, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0 /21
Service CIDR: 10.2.8.0 /23
So, our pods get their IPs from our virtual network subnet, and the services get theirs from the Service CIDR. So far so good. A route table for the virtual network (the subnet has been associated with the route table) forwards all traffic to our firewall and vice versa: interacting with the virtual network works without any issue. The network team (which is new to Azure cloud topics as well) has said that the connection and access to the Service CIDR should be working.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After running the YAML code, the kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which makes it possible to reach the kubernetes dashboard via port 443, cannot be accessed, e.g. at https://10.2.8.42/.
My workaround so far is that the kubernetes dashboard service (type: ClusterIP) has an external IP from the virtual network set on it. This sounds great, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.

Azure Container Services (AKS) - Exposing containers to other VNET resources

I am using Azure Container Services (AKS - not ACS) to stand up some APIs, some of which are for public consumption, some of which are not.
For the public access route everything is as you might expect: a load-balancer service bound to a public IP is created, the DNS zone contains our A record pointing to the public IP, and traffic is routed through an NGINX controller and onwards to the correct internal service endpoints.
Currently the preview version assigns a new VNET to place the AKS resource group in; going forward I will place the AKS instance inside an already existing VNET which houses other components (App Services, on an App Service Environment).
My question is: how do I grant access to the private APIs to other components inside the same VNET, as well as to components in other VNETs?
I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNETs? But what about components that already reside inside the same VNET?
Thank you in advance!
If you need to access these services from outside the AKS cluster, you still need an ILB to load-balance across your service on the different nodes in your cluster. You can either use the ILB created by adding the annotation to your service, or use NodePort and string up your own way to spread the traffic across all the nodes that host the endpoints.
I would use the ILB instead of building your own solution with NodePort service types. The one exception might be using some type of API gateway VM inside your VNET where you can define the backend pool; that may be a solution if you are hosting APIs through a 3rd-party API gateway hosted on an Azure VM in the same VNET.
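As a rough sketch, the internal load balancer annotation on a LoadBalancer-type Service looks like this (the service name, label, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: private-api                   # placeholder
  annotations:
    # asks Azure for an internal LB with a private frontend IP from the VNET
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: private-api                  # assumed pod label
  ports:
    - port: 443
      targetPort: 8443

Other components in the same VNET (or in peered/connected VNETs) can then reach the Service on that private frontend IP.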
Eddie Villalba
MCSD: Azure Solutions Architect | CKA: Certified Kubernetes Administrator

How to expose multiple kubernetes services through a single azure load balancer?

I want to expose multiple services through a single load balancer. Each service points to exactly one pod.
So far I tried to:
kubectl expose pod <podName> --port=7000
And in the Azure portal I manually set either load-balancing rules or inbound NAT rules pointing to the exposed pod.
So far I can connect to the pod using the external IP and the specified port.
It depends on how you want to separate services on the same IP. The two ways that come to my mind are:
use NodePort services and then map some ports from your LB to those ports on your cluster nodes. This gives separation by port.
way more interesting in my opinion: use an Ingress/IngressController. You would expose only the IC on standard ports like 80 & 443, and it would then map to your services by hostname and URI.
In Azure Container Service, Azure will use a Load Balancer to expose k8s services, like this:
root@k8s-master-E27AE453-0:~# kubectl get svc
NAME         CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
jasonnginx   10.0.41.194   52.226.33.200   8080:32011/TCP   4m
kubernetes   10.0.0.1      <none>          443/TCP          11m
mynginx      10.0.144.49   40.71.230.60    80:32366/TCP     5m
yournginx    10.0.147.28   40.71.226.23    80:32289/TCP     4m
root@k8s-master-E27AE453-0:~#
Via the Azure portal, check the Azure load balancer frontend IP configuration (different IP addresses):
ACS will create the Load Balancer rules and add the frontend IP addresses automatically.
How to expose multiple kubernetes services through a single azure load balancer?
ACS exposes k8s services through that Azure Load Balancer; do you mean you want to expose the k8s services with a single public IP address?
If you want to expose k8s services with a single public IP address then, as Radek said, you should probably use the Nginx Ingress Controller.
The Ingress Controller works like this:
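Roughly: the controller is the only thing exposed on the load balancer's public IP (ports 80/443), and Ingress resources tell it which hostname and path route to which internal service. A minimal sketch pointing at the mynginx service from the output above (the hostname and the networking.k8s.io/v1 API version are assumptions on my part):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mynginx-ingress              # placeholder
spec:
  rules:
    - host: mynginx.example.com      # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mynginx        # routed to the mynginx service
                port:
                  number: 80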
Thanks guys. I think I have found a viable solution to my problem. I should have been more specific about what I'm going to do.
I want to host a game server over UDP, so any kubernetes ingress controller is not really an option, since they rarely support UDP routing.
I also don't need to host a multitude of services on a single machine; 1-4 pods per node is probably the maximum.
I found that using:
hostNetwork: true
in the YAML config actually works pretty well for this scenario.
The pod gets its IP directly from the host node. I can then select the matching node within the load balancer and create a NAT or load-balancing rule.
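For reference, a minimal sketch of that setup (the image name and port are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: game-server                        # placeholder
spec:
  hostNetwork: true                        # pod shares the node's network stack, so it is reachable on the node IP
  containers:
    - name: game-server
      image: myregistry/game-server:latest # hypothetical image
      ports:
        - containerPort: 7777              # example UDP game port
          protocol: UDP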
Alternatively, create multiple NodePort-type services and fix each nodePort.
Then, on the cloud load balancer, set up multiple listener groups: each listener's port is the same as the service's nodePort, and the targets are all the worker nodes (see the sketch below).
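A minimal sketch of one such Service with a pinned nodePort (all values are examples); the load balancer listener would then use port 30070 with every worker node in its backend pool:

apiVersion: v1
kind: Service
metadata:
  name: game-1                   # placeholder
spec:
  type: NodePort
  selector:
    app: game-1                  # assumed pod label
  ports:
    - protocol: UDP
      port: 7000
      targetPort: 7000
      nodePort: 30070            # pinned so the LB listener stays stable; must fall in the default 30000-32767 range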

Expose containers to private network

I am looking for a way to create a docker cluster (probably kubernetes) on Azure, and expose the containers only via a VNET to my datacenter.
Is such a setup possible?
That is, the container services can only be accessed via the VPN that is created, so that the containers can use private resources (mainly a database) not available in the Azure cloud?
And so that I can access the resources in the cloud only from my DC.
Yes, that is perfectly possible. Depending on your setup, you either deploy a regular kubernetes cluster and use a site-to-site VPN to connect the networks, or use acs-engine to deploy kubernetes into an existing vnet/subnet.
You would also need to tweak your network security group rules to allow traffic to flow (if you have any).
https://github.com/Azure/acs-engine/tree/master/examples/vnet
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough
https://blogs.technet.microsoft.com/canitpro/2017/06/28/step-by-step-configuring-a-site-to-site-vpn-gateway-between-azure-and-on-premise/
I am looking for a way to create a docker cluster (probably kubernetes) on azure, and expose the containers only via a vnet to my datacenter.
Yes, we just create the k8s pods and do not expose them to the internet. Then create a S2S VPN connecting the Azure VNET to your DC; this way, your DC's VMs can connect to the Azure k8s pods via their Azure private IP addresses.
Update:
If you want to connect to your K8S pods via VPN, you can create an Azure route table to achieve that.
For more information about creating the route table, please refer to my other answer.
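If it helps, a hedged sketch with the Azure CLI (the resource group, table name, pod CIDR, and node IP are all placeholders; with kubenet-style routing you typically need one such route per node):

# route the pod CIDR hosted on a node to that node's private IP
az network route-table create --resource-group myRG --name k8s-routes
az network route-table route create --resource-group myRG \
    --route-table-name k8s-routes --name pods-node-0 \
    --address-prefix 10.244.0.0/24 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.240.0.4
# then associate the route table with the relevant subnet(s)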

Access Azure Service Fabric application from internet

I think I'm missing something obvious.
I have created SF cluster in Azure. It has a Load Balancer and a network with 3 VMs (nodes) which have IP addresses in 10.0.0.0/16.
When I ask the load balancer for the application endpoint, it responds with a node IP address (I captured packets with WireShark), but I can't access it because the network is private.
A bit more info about my case: 3 x A0 instances, net.tcp:20001 endpoints, the firewall allows connections, ports are open and listening, I have a public IP address assigned to the balancer, and a probe for the service port.
On your load balancer you will need to assign a public IP address. You can find some really good detailed guides for this in the documentation.
OK, here it is:
When you want to communicate with the service from outside the cluster, just use the load balancer IP; you don't need the naming service communication. The load balancer has probes that check the ports on each node in the cluster and forwards your request to a random instance that hosts the service you are asking for.
When you want one microservice to communicate with another within the cluster, you have 2 options:
ask the naming service through the load balancer and then communicate with the service directly.
if you know for sure that the service runs on every node in your cluster, you can just communicate with localhost directly.
When you want to communicate from a separate VM to a microservice in the cluster from within the cluster's virtual network (you can connect a WebApp to the cluster using VPN), you can ask the naming service through the load balancer, but using the Service Fabric HTTP API, because you will not be able to use the Service Fabric classes on a VM which doesn't have the Service Fabric SDK installed. Here is an example of service resolution: https://github.com/lAnubisl/ServiceFabricHttpApiClient
