Set up a different Kubernetes cluster - Azure

I created an Azure Kubernetes cluster via "az acs create".
I exec into an existing pod:
kubectl exec -it mypod -- /bin/bash
and run curl http://myexternalip.com/raw.
The IP I get is the public IP address of the k8s-agents... So far so good.
Now I create an Azure Kubernetes cluster via "acs-engine".
I run the same "exec" as above...
Now I can't find the IP in any azure component. Neither in the agents, nor in the load balancers.
Where is this IP configured?

This should have the information you're looking for:
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-load-balancing
In short, when you expose a service with type=LoadBalancer, ACS creates a Public IP address resource that Kubernetes uses.
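For example, a minimal sketch of that flow (the deployment name my-app is a placeholder): expose it with type LoadBalancer and watch the EXTERNAL-IP column until the Azure-provisioned Public IP shows up.
# Expose a deployment through an Azure load balancer (my-app is hypothetical)
kubectl expose deployment my-app --port=80 --type=LoadBalancer
# EXTERNAL-IP stays <pending> until the Public IP resource is created in the cluster's resource group
kubectl get service my-app --watch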

Related

Cannot access ACI RESTful endpoint deployed to a VNet

I deployed a Docker image to an ACR and then to an ACI with a command like this:
az container create \
  --resource-group myrg \
  --name myamazingacr \
  --image myamazingacr.azurecr.io/test3:v1 \
  --cpu 1 \
  --memory 1 \
  --vnet myrg-vnet \
  --vnet-address-prefix 10.0.0.0/16 \
  --subnet default \
  --subnet-address-prefix 10.0.0.0/24 \
  --registry-login-server myamazingacr.azurecr.io \
  --registry-username xxx \
  --registry-password xxx \
  --ports 80
This all works without error; the IP of the ACI is 10.0.0.5 and there is no FQDN since it is in a VNet. I think this makes sense.
When I run the image outside Azure (i.e. on my local machine where I created the image) I can successfully access an endpoint like this:
http://127.0.0.1/plot
http://127.0.0.1/predict_petal_length?petal_width=3
127.0.0.1 indicates that I run the image on the local machine.
However, this does not work:
http://10.0.0.5/plot
http://10.0.0.5/predict_petal_length?petal_width=3
I get:
This site can't be reached. 10.0.0.5 took too long to respond.
What could be wrong please?
PS:
Maybe it is related to this:
https://learn.microsoft.com/en-us/answers/questions/299493/azure-container-instance-does-not-resolve-name-wit.html
I have to say I find Azure really frustrating to work with. Nothing really seems to work, from Azure ML to ACIs ...
PPS:
This is what our IT says - to be honest, I do not fully understand it ...
• Private endpoints are not supported, so we need to create a VNet in the resource group, peer it to the current dev VNet, and we should be good
• We basically need to know how we can create an ACR with the network in an existing VNet in a different resource group. I am struggling to find the correct way to do this.
Since you have deployed your ACI into an Azure virtual network, your containers can communicate securely with other resources in the virtual network, so you can access the ACI endpoint from within the Azure VNet.
For example, you can deploy a VM in the VNet but in a different subnet than your ACI, then try to access the ACI endpoint from that Azure VM.
Alternatively, you can expose a static IP address for a container group by using an application gateway with a public frontend IP address.
A possible reason for your issue is that your application is listening on the wrong IP address. The address 127.0.0.1 is the localhost or loopback address and can only be used from inside the machine. Take a look here. So you can try changing the listen address to 0.0.0.0, which is reachable from outside.
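As a rough sketch of both suggestions (all resource names below are placeholders, and the gunicorn command assumes the container runs a Python web app; adjust for your framework):
# Inside the container image, bind the web server to all interfaces, not the loopback address
gunicorn --bind 0.0.0.0:80 app:app
# Verify from a test VM placed in another subnet of the same VNet
az vm create --resource-group myrg --name test-vm --vnet-name myrg-vnet \
  --subnet vm-subnet --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
ssh azureuser@<test-vm-public-ip> curl http://10.0.0.5/plot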

Securing the Kubernetes API on Azure so it is only accessible via local IPs (RFC 1918)

I noticed that when I create an Azure Kubernetes cluster, by default it creates the API with an *.azmk8s.io FQDN and it is external facing.
Is there a way to create it with a local IP instead? If yes, can this be protected by an NSG and Virtual Network to limit connections to those coming via a jump server?
Is there any drawback to only allowing internal IPs?
Below is the command I used to create it:
az aks create -g [resourceGroup] -n [ClusterName] \
  --windows-admin-username [SomeUserName] --windows-admin-password [SomePassword] \
  --location [Location] --generate-ssh-keys -c 1 --enable-vmss \
  --kubernetes-version 1.14.8 --network-plugin azure
Has anyone tried https://learn.microsoft.com/en-us/azure/aks/private-clusters? Does that still allow an external-facing app but a private management API?
Why not? Only the control plane endpoint is different; in all other regards it's a regular AKS cluster.
In a private cluster, the control plane or API server has internal IP addresses that are defined in the RFC 1918 - Address Allocation for Private Internets document. By using a private cluster, you can ensure that network traffic between your API server and your node pools remains on the private network only.
This outlines how to connect to a private cluster with kubectl. NSGs should work as usual.
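As a sketch of what that could look like (resource group and cluster names are placeholders; kubectl then has to run from a jump server or a VM with network reach to the private endpoint):
# Create the cluster with a private API server endpoint
az aks create -g myrg -n myprivatecluster --enable-private-cluster \
  --network-plugin azure --generate-ssh-keys
# From a jump server connected/peered to the cluster VNet, pull credentials and use kubectl as usual
az aks get-credentials -g myrg -n myprivatecluster
kubectl get nodes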

Unable to communicate between Docker containers in the same subnet after using CNI to assign IPs from an Azure VNet

I was trying to assign IP addresses to Docker containers from an Azure VNet using the CNI plugin. I was able to assign IP addresses to the containers, but they were not communicating with each other.
I followed this blog, which is based on this Microsoft document.
I created two containers using commands:
sudo ./docker-run.sh alpine1 default alpine
and
sudo ./docker-run.sh alpine2 default alpine
I checked that the IP of alpine1 is 10.10.3.59 and the IP of alpine2 is 10.10.3.61, which are the IPs I created inside a network interface as described in the above docs. So they did receive IPs from a subnet inside the VNet, but when I ping alpine1 from alpine2 with ping 10.10.3.59, it doesn't work. Am I missing something here, or do I have to do some other configuration after this?
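A few checks that might help narrow this down (a sketch only, using the container names from the question and assuming the azure-vnet CNI setup from the linked docs):
# Confirm each container actually got its VNet IP on eth0
docker exec alpine1 ip addr show eth0
docker exec alpine2 ip addr show eth0
# Try the ping again and note whether it fails immediately or times out
docker exec alpine2 ping -c 3 10.10.3.59
# If the addresses look right, check whether an NSG on the subnet or iptables rules on the host are dropping the traffic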

SSH to Azure's Kubernetes managed master node

I just deployed a managed Kubernetes cluster with Azure Container Service. My deployment includes a single agent machine in the managed cluster and an Azure disk attached to it for persistent storage.
The problem I am facing is that I don't know how to SSH into this agent server. I read that you should be able to SSH into the master node and connect to the agent from there, but as I am using a managed Kubernetes master I can't find a way of doing this.
Any idea? Thank you in advance.
The problem I am facing is that I don't know how to SSH into this agent server.
Do you mean you created an AKS cluster and can't find the master VM?
If I understand correctly, that is by-design behavior; AKS does not provide direct access (such as SSH) to the cluster.
If you want to SSH into the agent node, as a workaround, we can create a public IP address, associate it with the agent's NIC, and then SSH to the agent.
Here are my steps:
1. Create a Public IP address via the Azure portal.
2. Associate the public IP address with the agent VM's NIC.
3. SSH to the VM using this public IP address.
Note:
By default, we can find the SSH key when we create the AKS cluster.
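If you prefer the CLI over the portal, a rough equivalent of steps 1-3 above (the MC_ resource group, NIC, and ipconfig names are placeholders; look up the real ones in your cluster's node resource group first):
# 1. Create a public IP in the cluster's node resource group
az network public-ip create -g MC_myrg_mycluster_region -n agent-pip
# 2. Associate it with the agent VM's NIC (ipconfig1 is the typical default ipconfig name)
az network nic ip-config update -g MC_myrg_mycluster_region \
  --nic-name aks-agent-nic --name ipconfig1 --public-ip-address agent-pip
# 3. SSH with the new public IP
ssh azureuser@<public-ip>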
Basically, you don't even have to create a public IP for that node. Simply add a public SSH key to the desired node with the Azure CLI:
az vm user update --resource-group <NODE_RG> --name <NODE_NAME> --username azureuser --ssh-key-value ~/.ssh/id_rsa.pub
Then run a temporary pod (don't forget to switch to the desired namespace in your kubernetes config):
kubectl run -it --rm aks-ssh --image=debian
Copy the private SSH key to that pod:
kubectl cp ~/.ssh/id_rsa <POD_NAME>:/id_rsa
Finally, connect from the pod to the AKS node's private IP:
ssh -i id_rsa azureuser@<NODE_PRIVATE_IP>
This way, you don't have to pay for a public IP, and it is also better from a security perspective.
The easiest way is to use the tool below; it creates a tiny privileged pod on the node and accesses the node using nsenter.
https://github.com/mohatb/kubectl-wls

From what source-IP range do outbound connections from Pods appear to come?

I want to set up connections from a Kubernetes cluster (created via az acs create with mostly default settings) to an Azure PostgreSQL instance, and I'd like to know what source-IP range to enter in postgres HBA (this is the thing Azure calls a firewall rule under az postgres server).
The thing is, although I can see from the console errors (when testing with psql) which IP the cluster's requests currently come from:
FATAL: no pg_hba.conf entry for host "x.x.x.x" [...]
... I just don't see this IP address anywhere in the cluster properties - and anyway, it would seem a very fragile configuration to just whitelist this one IP address without knowing how it's assigned.
(In the Azure Portal, I do see one "Public IP" associated with the cluster master, but that's not the same as the IP seen by postgres, and, I assume, mainly for ingress.)
So ideally, does ACS let me control the outbound IP addresses for the cluster? And if not, can I figure out programmatically what IP or range of IPs to allow?
It should be the external IP of the node that the pod is scheduled on, e.g. on Google Container Engine:
$ kubectl get no -o wide
NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
gke-cluster-1-node-1 Ready 58d v1.5.4 <example node IP> Container-Optimized OS from Google 4.4.21+
$ ssh gke-cluster-1-node-1
$ curl icanhazip.com
<example node IP>
$ kubectl get po -o wide | grep node-1
example-pod-1 1/1 Running 0 11d <pod IP> gke-cluster-1-node-1
$ kubectl exec -it example-pod-1 -- curl icanhazip.com
<example node IP>
does ACS let me control the outbound IP addresses for the cluster? And if not, can I figure out programmatically what IP or range of IPs to allow?
Based on my knowledge, Azure Container Service exposes Docker applications to the public via an Azure load balancer, and the load balancer gets a public IP address.
By the way, we can't specify which public IP address will be associated with the Azure load balancer.
Once the application is exposed to the internet, we can add that public IP address to your PostgreSQL server's firewall rules (postgres HBA).
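One way to check this programmatically (a sketch only; the MC_ resource group name is a placeholder, and this assumes egress leaves via a public IP in the cluster's auto-generated resource group):
# What the outside world sees as the cluster's egress IP
kubectl run -it --rm egress-test --image=alpine -- wget -qO- icanhazip.com
# Public IP resources Azure created alongside the cluster
az network public-ip list -g MC_myrg_mycluster_region \
  --query "[].{name:name, ip:ipAddress}" -o table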
