I just deployed a managed Kubernetes cluster with Azure Container Service. My deployment includes a single agent machine in the managed cluster and an Azure disk attached to it for persistent storage.
The problem I am facing is that I don't know how to SSH into this agent server. I read that you should be able to SSH into the master node and connect to the agent from there, but as I am using a managed Kubernetes master I can't find a way to do this.
Any idea? Thank you in advance.
The problem I am facing is that I don't know how to ssh this agent server.
Do you mean you created an AKS cluster and can't find the master VM?
If I understand correctly, that is by-design behavior: AKS does not provide direct access (such as SSH) to the cluster's master.
If you want to SSH to an agent node, as a workaround, we can create a public IP address and associate it with the agent's NIC; then we can SSH to that agent.
Here are my steps:
1. Create a public IP address via the Azure portal.
2. Associate the public IP address with the agent VM's NIC.
3. SSH to the VM using that public IP address.
Note:
By default, the SSH public key is the one specified when the AKS cluster was created, so that is the key pair to use for this connection.
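If you prefer the CLI over the portal, the same workaround might look roughly like this; the public IP name, resource group, and NIC name below are placeholders rather than values from the question, and the IP configuration is assumed to use the default name ipconfig1:
# Create a public IP and attach it to the agent's NIC
az network public-ip create --resource-group <NODE_RG> --name agent-pip
az network nic ip-config update --resource-group <NODE_RG> --nic-name <AGENT_NIC> --name ipconfig1 --public-ip-address agent-pip
# Look up the assigned address and SSH in with the key used at cluster creation
az network public-ip show --resource-group <NODE_RG> --name agent-pip --query ipAddress -o tsv
ssh -i ~/.ssh/id_rsa azureuser@<PUBLIC_IP>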
Basically, you don't even have to create a public IP for that node. Simply add your public SSH key to the desired node with the Azure CLI:
az vm user update --resource-group <NODE_RG> --name <NODE_NAME> --username azureuser --ssh-key-value ~/.ssh/id_rsa.pub
Then run a temporary pod (don't forget to switch to the desired namespace in your kubernetes config):
kubectl run -it --rm aks-ssh --image=debian
Copy the private SSH key into that pod:
kubectl cp ~/.ssh/id_rsa <POD_NAME>:/id_rsa
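One extra step assumed here, not part of the original answer: the stock debian image does not include an SSH client, so install one inside the pod before connecting:
# Run inside the temporary pod
apt-get update && apt-get install -y openssh-client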
Finally, connect from the pod to the AKS node's private IP:
ssh -i id_rsa azureuser@<NODE_PRIVATE_IP>
This way you don't have to pay for a public IP, and it is also better from a security perspective.
The easiest way is to use the tool below; it creates a tiny privileged pod on the node and accesses the node using nsenter.
https://github.com/mohatb/kubectl-wls
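If you would rather not install a plugin, recent kubectl versions also ship a built-in node debug command that does something similar (a pod on the node with the host filesystem mounted); the node name and image below are placeholders:
# Starts an interactive debug pod on the node; the node's root filesystem is mounted at /host
kubectl debug node/<NODE_NAME> -it --image=ubuntu
# Inside the debug pod, switch into the node's filesystem
chroot /host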
I'm currently having some issues signing in to a private AKS cluster with the following commands:
az account set --subscription [subscription_id]
az aks get-credentials --resource-group [resource-group] --name [AKS_cluster_name]
After I typed those two commands, the CLI asks me to authenticate through the web with a code generated by the Azure CLI, and after that I get the following issue in the terminal:
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code RTEEREDTE to authenticate.
Unable to connect to the server: dial tcp: lookup aksdusw2aks01-0581cf8f.hcp.westus2.azmk8s.io: i/o timeout
What could be the potential issue? How can I successfully log in to a private AKS cluster?
Notes:
I have some other clusters and I'm able to log in to them through the terminal without any errors.
You can't use kubectl to access the API server of a private AKS cluster from outside its network; that's the point of making it private (no public access). You will need to use az aks command invoke to invoke commands through the Azure API:
az aks command invoke -n <CLUSTER_NAME> -g <CLUSTER_RG> -c "kubectl get pods -A"
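command invoke can also attach local files, so manifests can be applied without any network line of sight to the API server; the file name here is hypothetical:
az aks command invoke -n <CLUSTER_NAME> -g <CLUSTER_RG> -c "kubectl apply -f deployment.yaml" --file deployment.yaml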
Timeouts typically mean something somewhere is dropping packets and there is no response. This might be caused by the security policies and/or traffic rules in your Azure environment that are configured for your AKS cluster. Double-check that they are consistent with the clusters you say you can access.
In order to interact with the private cluster you'll need to run your commands from an endpoint that has access to the VNET the AKS cluster is in. So you'll need a VM in that VNET, a peered VNET, a VPN connection, etc.
The point of the private cluster is to prevent access from external sources; only connected networks are allowed.
You can also, as mentioned by Wlez, use command invoke, but this is probably suited to occasional use rather than responsive, frequent access.
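A quick sanity check is whether the machine running kubectl can even resolve the private API server FQDN (the hostname below is taken from the error message above); a timeout or NXDOMAIN means you are not on, or linked to, the cluster's VNET and private DNS zone:
nslookup aksdusw2aks01-0581cf8f.hcp.westus2.azmk8s.io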
I notice that when I create an Azure Kubernetes cluster, by default the API server gets an *.azmk8s.io FQDN and it is external facing.
Is there a way to create it with a local IP instead? If yes, can this be protected by an NSG and virtual network so that connections are limited to those coming via a jump server?
Is there any drawback to only allowing internal IPs?
Below is the command I used to create it:
az aks create -g [resourceGroup] -n [ClusterName] --windows-admin-password [SomePassword] --windows-admin-username [SomeUserName] --location [Location] --generate-ssh-keys -c 1 --enable-vmss --kubernetes-version 1.14.8 --network-plugin azure
Has anyone tried https://learn.microsoft.com/en-us/azure/aks/private-clusters? Does that still allow externally facing apps while keeping the management API private?
Why not? Only the control plane endpoint is different; in all other regards it's a regular AKS cluster.
In a private cluster, the control plane or API server has internal IP addresses that are defined in the RFC 1918 - Address Allocation for Private Internets document. By using a private cluster, you can ensure that network traffic between your API server and your node pools remains on the private network only.
This outlines how to connect to a private cluster with kubectl. NSGs should work as usual.
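For reference, a private cluster is created with the --enable-private-cluster flag; a minimal sketch reusing the placeholder names from the question:
az aks create -g [resourceGroup] -n [ClusterName] --enable-private-cluster --network-plugin azure --generate-ssh-keys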
I have created a container service and set the orchestrator to Swarm. I have 1 master and 2 agents. I was expecting the swarm to be initialized automatically, but that doesn't appear to be the case, so I need to remote onto each VM to connect it to a swarm manager.
Whilst I can connect to my master VM via SSH, I don't see how to connect to either of the agent VMs in the scale set.
I've tried the following in Git Bash, based on the instance names listed in the scale set:
$ ssh moconnor#swarm-agentpool-16065278-vmss_1 -i /c/Users/Matthew.OConnor/azure
which points to my private SSH key, but I get the following error:
ssh: Could not resolve hostname swarm-agentpool-16065278-vmss_1: Name or service not known
I assume this is because swarm-agentpool-16065278-vmss_1 is neither a valid IP nor a DNS name, but how do I get this value for each VM in the scale set?
The following works for connecting to my master:
ssh moconnor#saseleniummgmt.ukwest.cloudapp.azure.com -i /c/Users/Matthew.OConnor/azure
According to this section in the guide, I should be seeing some inbound NAT rules for each VM in the scale set.
For me this screen is empty, and it doesn't allow me to add anything due to the following message:
Full virtual machine scale set support for the portal is coming soon. Adding or editing references between load balancers and scale set virtual machines is currently disabled for load balancers that contain an existing association with a scale set.
How do I connect to VMs in a scale set created with Container Services?
You could SSH to the master VM and find the agent's private IP in the Azure portal.
Then you could SSH to the agent instance, for example:
ssh -i ~/.ssh/id_rsa <username>@10.0.0.5
Note: the id_rsa key is the same one used for the master VM.
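If you want to avoid the portal, the agents' private IPs can also be listed with the CLI, and the master can serve as a jump host; the resource group is a placeholder, while the VMSS and master hostnames reuse the ones from the question:
# List the private IPs of the scale set instances
az vmss nic list --resource-group <RESOURCE_GROUP> --vmss-name swarm-agentpool-16065278-vmss --query "[].ipConfigurations[].privateIpAddress" -o tsv
# SSH to an agent through the master in one hop (requires OpenSSH 7.3+)
ssh -J moconnor@saseleniummgmt.ukwest.cloudapp.azure.com moconnor@10.0.0.5 -i /c/Users/Matthew.OConnor/azure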
I am using Azure Container Registry to store my private docker image and Azure Container Instance to deploy it.
I get a public IP address, which is OK for verification and a simple preview, but not usable (or shareable with a customer) since the IP address is dynamic.
Is there a way to set up a fully qualified domain name that I can use instead of an IP address that changes on every container restart?
Browsing through the documentation does not reveal anything about that.
You can now set a dns-name-label as a property for your container group. Details are shared in this answer; hope this helps, and thanks for being a user!
Azure Container Group IP Address disappeared
Is there a way to set up a fully qualified domain name that I can use instead of an IP address that changes on every container restart?
Unfortunately, for now, Azure does not support setting a static public IP address for a container instance; Azure Container Instances is still in preview.
In the future, we will expand our networking capabilities to include integration with virtual networks, load balancers, and other core parts of the Azure networking infrastructure.
For more information about Azure Container Instances networking, please refer to this link.
As a workaround, we can deploy a VM, run Docker on it, and set a static public IP address for the VM; then, if we restart Docker, we will not lose this public IP address.
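A rough CLI sketch of that workaround; the resource group, resource names, and image alias below are placeholders:
# Reserve a statically allocated public IP and attach it to a new Docker host VM
az network public-ip create -g myResourceGroup -n docker-host-ip --allocation-method Static
az vm create -g myResourceGroup -n docker-host --image UbuntuLTS --public-ip-address docker-host-ip --generate-ssh-keys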
When using a docker-compose.yml file on ACI (Azure Container Instances), the domainname property is used for the FQDN:
services:
  service-name:
    image: ****
    domainname: **FQDN**
Looks like you can do it via the Azure CLI using the --dns-name-label flag:
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo --ports 80
src here
This will result in the following FQDN: aci-demo.westeurope.azurecontainer.io (westeurope being your location)
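To confirm the assigned FQDN afterwards (reusing the resource group and container name from the command above):
az container show --resource-group myResourceGroup --name mycontainer --query ipAddress.fqdn -o tsv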
I created an azure kubernetes cluster via "az acs create".
I exec into an existing pod:
kubectl exec -it mypod /bin/bash
and run curl http://myexternlip.com/raw.
The IP I get is the public IP address of the k8s-agents... So far so good.
Now I create an Azure Kubernetes cluster via acs-engine.
I make the same exec as mentioned above...
Now I can't find the IP in any Azure component, neither in the agents nor in the load balancers.
Where is this IP configured?
Regards,
saromba
This should have the information you're looking for:
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-load-balancing
In short, when you expose a service with type=LoadBalancer, ACS creates a public IP address resource that Kubernetes uses.
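To track such an address down, one option is to list the public IP resources in the cluster's resource group; the resource group name below is a placeholder:
az network public-ip list --resource-group <CLUSTER_RESOURCE_GROUP> --query "[].{name:name, ip:ipAddress}" -o table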